J. Norwood Crout
Artificial Intelligence Corporation

The INTELLECT natural language database query system, a product of Artificial Intelligence Corporation, is the only commercially available system with true English query capability. Based on experience with INTELLECT in the areas of quality assurance and customer support, a number of issues in evaluating a natural language database query system, particularly the INTELLECT system, will be discussed.

A. I. Corporation offers licenses for customers to use the INTELLECT software on their computers, to access their databases. We now have a number of customer installations, plus reports from companies that are marketing INTELLECT under agreements with us, so that we can begin to discuss user reactions as possible criteria for evaluating our system. INTELLECT's basic function is to translate typed English queries into retrieval commands for a database management system, then present the retrieved data, or answers based on it, to the terminal user. It is a general software tool, which can be easily applied to a wide variety of databases and user environments. For each database, a Lexicon, or dictionary, must be prepared. The Lexicon describes the words and phrases relevant to the data and how they relate to the data items. The system maintains a log of all queries, for analysis of its performance.

Artificial Intelligence Corporation was founded about five years ago, for the specific purpose of developing and marketing an English language database query product. INTELLECT was the creation of Dr. Larry Harris, who presently supervises its on-going development. The company has been successful in developing a marketable product and now looks forward to significant expansion of both its customer base and its product line. Versions of the product presently exist for interfacing with ADABAS, VSAM, Multics Relational Data Store, and A. I. Corporation's own Derived File Access Method. Additional interfaces, including one to Cullinane's Integrated Database Management System, are nearing completion.

A. I. Corporation's quality assurance program tests the ability of the system to perform all of its intended retrieval, processing, and data presentation functions. We also test its fluency: its ability to understand, retrieve, and process requests that are expressed in a wide variety of English phrasings. Part of this fluency testing consists of free-wheeling queries, but a major component of it is conducted in a formalized way: a number of phrases (between 20 and 50) are chosen, each of which represents either selection of records, specification of the data items or expressions to be retrieved, or the formatting and processing to be performed. A query generator program then selects different combinations of these phrases and, for each set of phrases, generates queries by arranging the phrases in different permutations, with and without connecting prepositions, conjunctions, and articles. The file of queries is then processed by the INTELLECT system in a batch mode, and the resulting transcript of queries and responses is scanned to look for instances of improper interpretation. Such a file of queries will contain, in addition to reasonable English sentences, both sentence fragments and unnatural phrasings.
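To make the combinatorial testing concrete, the generator's behavior can be sketched in a few lines of Python. This is an illustrative reconstruction only: the phrase lists, connective words, and function names below are hypothetical, not taken from A. I. Corporation's actual tool.

    # Sketch of a phrase-permutation query generator of the kind described
    # above. All phrase lists and names are hypothetical illustrations.
    from itertools import combinations, permutations

    selection_phrases = ["with sales over 1000", "in the eastern region"]
    retrieval_phrases = ["name and sales", "average earnings"]
    format_phrases    = ["sorted by year", "broken down by state"]
    connectives       = ["", "and", "of", "for"]   # "" = phrases simply juxtaposed

    def generate_queries(max_phrases=3):
        pool = selection_phrases + retrieval_phrases + format_phrases
        for k in range(2, max_phrases + 1):
            for subset in combinations(pool, k):
                for order in permutations(subset):
                    for conn in connectives:
                        sep = " " + conn + " " if conn else " "
                        yield sep.join(order)

    # The resulting file of queries is run through the system in batch mode;
    # the transcript of query/response pairs is then scanned by hand.
    for query in list(generate_queries())[:5]:
        print(query)

As the text notes, many of the generated strings are fragments or unnatural phrasings; that is deliberate, since terse, syntax-free queries are exactly what experienced users tend to type.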
This kind of test is desirable, since users who are familiar with the system will frequently enter only those words and phrases that are necessary to express their needs, with little regard for English syntax, in order to minimize the number of keystrokes. The system in fact performs quite well with such terse queries, and users appreciate this capability. Query statistics from this kind of testing are not meaningful as a measure of system fluency, since many of the queries were deliberately phrased in an un-English way.

In addition to our testing program, information on INTELLECT's performance comes from the experiences of our customers. Customer evaluations of its fluency are uniformly good; there is a lot of enthusiasm for this technical achievement and its usefulness. Statistics on several hundred queries from two customer sites are presented. They show a high rate of successful processing of queries. The main conclusion to be drawn from this is that the users are able to communicate effectively with INTELLECT in their environment.

INTELLECT's basic capability is data retrieval. Within the language domain defined by the retrieval semantics of the particular DBMS and the vocabulary of the particular database, INTELLECT's understanding is fluent. INTELLECT's capabilities go beyond simple retrieval, however. It can refer back to previous queries, do arithmetic calculations with numeric fields, calculate basic functions such as maximum and total, sort and break down records in categories, and vary its output format. Through this augmentation of its retrieval capability, INTELLECT has become more useful in a business environment, but the expanded language domain is not so easily characterized, or described, to naive users.

A big advantage of English language query systems is the absence of training as a requirement for their use; this permits people to access data who are unwilling or unable to learn how to use a structured query system. All that is required is that a person know enough about the data to be able to pose a meaningful question and be able to type on a terminal keyboard. INTELLECT is a very attractive system for such casual or technically unsophisticated users. Such people, however, often do not have a clear concept of the data model being used and cannot distinguish between the data retrieval, summarization, or categorization of retrieved data which INTELLECT can do, and more complex processing. They may ask for things that are outside the system's functional capabilities and, hence, its domain of language comprehension.

In summary, we feel that INTELLECT has effectively solved the man-machine communication problem for database retrieval, within its realm of applicability. We are now addressing the question of what business environments are best served by English-language database retrieval while at the same time continuing our development by significantly expanding INTELLECT's semantic, and hence its linguistic, domain.
SELECTIVE PLANNING OF INTERFACE EVALUATIONS

William C. Mann
USC Information Sciences Institute

1 The Scope of Evaluations

The basic idea behind evaluation is a simple one: An object is produced and then subjected to trials of its performance. Observing the trials reveals things about the character of the object, and reasoning about those observations leads to statements about the "value" of the object, a collection of such statements being an "evaluation." An evaluation thus differs from a description, a critique or an estimate.

For our purposes here, the object is a database system with a natural language interface for users. Ideally, the trials are an instrumented variant of normal usage. The character of the users, their tasks, the data, and so forth are representative of the intended use of the system.

In thinking about evaluations we need to be clear about the intended scope. Is it the whole system that is to be evaluated, or just the natural language interface portion, or possibly both? The decision is crucial for planning the evaluation and understanding the results. As we will see, choice of the whole system as the scope of evaluation leads to very different designs than the choice of the interface module. It is unlikely that an evaluation which is supposed to cover both scopes will cover both well.

2 Different Plans for Different Consumers

We can't expect a single form or method of evaluation to be suitable for all uses. In planning to evaluate (or not to evaluate) it helps a great deal to identify the potential user of the evaluation. There are some obvious principles:

1. If we can't identify the consumer of the evaluation, don't evaluate.
2. If something other than an evaluation meets the consumer's needs better, plan to use it instead.

Who are the potential consumers? Clearly they are not the same as the sponsors, who have often lost interest by the time an evaluation is timely. Instead, they are:

1. Organizations that Might Use the System --- These consumers need a good overview of what the system can do. Their evaluation must be holistic, not an evaluation of a module or of particular techniques. They need informal information, and possibly a formal system evaluation as well. However, they may do best with no evaluation at all. Communication theorists point out that there has never been a comprehensive effectiveness study of the telephone. Telephone service is sold without such evaluations.

2. Public Observers of the Art --- Scientists and the general public alike have shown a great interest in AI, and a legitimate concern over its social effects. The interest is especially great in natural language processing. However, nearly all of them are like observers of the recent space shuttle: They can understand liftoff, landing and some of the discussions of the heat of reentry, but the critical details are completely out of reach. Rather than carefully controlled evaluations, the public needs competent and honest interpretations of the action.

3. The Implementers' Egos --- Human self-acceptance and enjoyment of life are worthwhile goals, even for system designers and implementers. We all have ego needs. The trouble with using evaluations to meet them is that they can give only too little, too late. Praise and encouragement along the way would be not only more timely, but more efficient. Implementers who plan an evaluation as their vindication or grand demonstration will almost surely be frustrated.
The evaluation can serve them no better than receiving an academic degree serves a student. If the process of getting it hasn't been enjoyable, the final certification won't help.

4. The Cultural Imperative --- There may be no potential consumers of the evaluation at all, but the scientific subculture may require one anyway. We seem to have escaped this one far more successfully than some fields of psychology, but we should still avoid evaluations performed out of social habit. Otherwise we will have something like a school graduation: a big, elaborate, expensive NO-OP.

5. The Fixers --- These people, almost inevitably some of the implementers, are interested in tuning up the system to meet the needs of real users. They must move from the implementation environment, driven by expectation and intuition, to a more realistic world in which those expectations are at least vulnerable. Such customers cannot be served by the sort of broad holistic performance test that may serve the public or the organization that is about to acquire the system. Instead, they need detailed, specific exercises of the sort that will support a causal model of how the system really functions. The best sort of evaluation will function as a tutor, providing lots of specific, well distributed, detailed information.

6. The Research and Development Community --- These are the AI and system development people from outside of the project. They are like the engineers for Ford who test Datsuns on the track. Like the implementers, they need rich detail to support causal models. Simple, holistic evaluations are entirely inadequate.

7. The Inspector --- There is another model of how evaluations function. Its premises differ grossly from those used above. In this model, the results of the evaluation, whatever they are, can be discarded because they have nothing to do with the real effects. The effects come from the threat of an evaluation, and they are like the threat of a military inspection. All of the valuable effects are complete before the inspection takes place. Of course, in a mature and stable culture, the inspected learns to know what to expect, and the parties can develop the game to a high state of irrelevance. Perhaps in AI the inspector could still do some good.

Both the implementers and the researchers need a special kind of test, and for the same reason: to support design.[1] The value of evaluations for them is in its influence on future design activity.

There are two interesting patterns in the observations above. The first is on the differing needs of "insiders" and "outsiders."

• The "outsiders" (public observers, potential user organizations) need evaluations of the entire system, in relatively simple terms, well supplemented by informal interpretation and demonstration.

• The "insiders" (researchers in the same field, fixers and implementers) need complex, detailed evaluations that lead to many separate insights about the system at hand. They are much more ready to cope with such complexity, and the value of their evaluation depends on having it.

These needs are so different, and their characteristics so contradictory, that we should expect that to serve both needs would require two different evaluations.

The second pattern concerns relative benefits. The benefits of evaluations for "insiders" are immediate, tangible and hard to obtain in any other way. They are potentially of great value, especially in directing design. In contrast, the benefits of evaluations to "outsiders" are tenuous and arguable.
The option of performing an evaluation is often dominated by better methods, and the option of not evaluating is sometimes attractive. The significance of this contrast is this: SYSTEM EVALUATION BENEFITS PRINCIPALLY THOSE WHO ARE WITHIN THE SYSTEM DEVELOPMENT FIELD: IMPLEMENTERS, RESEARCHERS, SYSTEM DESIGNERS AND OTHER MEMBERS OF THE TECHNICAL COMMUNITY.[2] It seems obvious that evaluations should therefore be planned principally for this community.

As a result, the outcomes of evaluations tend to be extremely conditional. The most defensible conclusions are the most conditional: they say "This is what happens with these users, these questions, this much system load..." Since those conditions will never cooccur again, such results are rather useless. The key to doing better is in creating results which can be generalized. Evaluation plans are in tension between the possibility of creating highly credible but insignificant results on one hand and the possibility of creating broad, general results without a credible amount of support on the other. I know no general solution to the problem of making evaluation results generalizable and significant. We can observe what others have done, even in this book, and proceed in a case by case manner.

Focusing our attention on results for design will help. Design proceeds from causal models of its subject matter. Evaluation results should therefore be interpreted in causal mode. There is a tendency, particularly when statistical results are involved, to avoid causal interpretations. This comes in part from the view that it is part of the nature of statistical models to not support causal interpretations. Avoiding causal interpretation is formally defensible, but entirely inappropriate. If the evaluation is to have effects and value, causal interpretations will be made. They are inevitable in the normal course of successful activity. They must be made, and so these interpretations should be made by those best qualified to do so. Who should make the first causal interpretation of an evaluation? Not the consumers of the evaluation, but the evaluators themselves. They are in the best position to do so, and the act of stating the interpretation is a kind of check on its plausibility. By identifying the consumer, focusing on consequences for design, and providing causal interpretations of results, we can create valuable evaluations.

3 The Key Problem: Generalization

We have already noticed that evaluations can become very complex, with both good and bad effects. The complexity comes from the task: Useful systems are complex, the knowledge they contain is complex, users are complex and natural language is complex. Beyond all that, planning a test from which reliable conclusions can be drawn is itself a complex matter. In the face of so much complexity, it is hopeless to try to span the full range of the phenomena of interest. One must sample in a many-dimensional space, hoping to focus attention where conclusions are both accessible and significant.

[1] Design here, as in most fields, consists almost entirely of redesign.

[2] This is not to say that there are not legitimate, important needs among "outsiders." Someone must select among commercially offered systems, procure new computer systems and so forth. Unfortunately, the available evaluation technology does not even remotely approach a methodology for meeting such needs. For example, there is nothing comparable to computer benchmarking methods for interactive natural language interfaces.
FIELD TESTING THE TRANSFORMATIONAL QUESTION ANSWERING (TQA) SYSTEM

S. R. Petrick
IBM T.J. Watson Research Center
PO Box 218
Yorktown Heights, New York 10598

The Transformational Question Answering (TQA) system was developed over a period of time beginning in the early part of the last decade and continuing to the present. Its syntactic component is a transformational grammar parser [1, 2, 3], and its semantic component is a Knuth attribute grammar [4, 5]. The combination of these components provides sufficient generality, convenience, and efficiency to implement a broad range of linguistic models; in addition to a wide spectrum of transformational grammars, Gazdar-type phrase structure grammar [6] and lexical functional grammar [7] systems appear to be cases in point, for example. The particular grammar which was, in fact, developed, however, was closest to those of the generative semantics variety of transformational grammar; both the underlying structures assigned to sentences and the transformations employed to effect that assignment traced their origins to the generative semantics model.

The system works by finding the underlying structures corresponding to English queries through the use of the transformational parsing facility. Those underlying structures are then translated to logical forms in a domain relational calculus by the Knuth attribute grammar component. Evaluation of logical forms with respect to a given data base completes the question-answering process. Our first logical form evaluator took the form of a toy implementation of a relational data base system in LISP. We soon replaced the low level tuple retrieval facilities of this implementation with the RSS (Relational Storage System) portion of the IBM System R [8]. This version of logical form evaluation was the one employed in the field testing to be described. In a more recent version of the system, however, it has been replaced by a translation of logical forms, first to equivalent logical forms in a set domain relational calculus and then to appropriate expressions in the SQL language, System R's high level query language.

The first data base to which the system was applied was one concerning business statistics such as the sales, earnings, number of employees, etc. of 60 large companies over a five-year period. This was a toy data base, to be sure, but it was useful to us in developing our system. A later data base contained the basic land identification records of about 10,000 parcels of land in a city near our research center. It was developed for use by members of the city planning department and (less frequently) other departments to answer questions concerning the information in that file. Our purpose in making the system available to those city employees was, of course, to provide access to a data base of real interest to a group of users and to field test our system by evaluating their use of it. Accordingly, the TQA system was tailored to the land use file application and installed at City Hall at the end of 1977. It remained there during 1978 and 1979, during which time it was used intermittently as the need arose for ad hoc query to supplement the report generation programs that were already available for the extraction of information. Total usage of the system was less than we had expected would be the case when we made the decision to proceed with this application.
This resulted from a number of factors, including a change in mission for the planning department, a reduction in the number of people in that department, a decision to rebuild the office space during the period of usage, and a degree of obsolescence of the data due to the length of time between updates (which were to have been supplied by the planning department). During 1978 a total of 788 queries were addressed to the system, and during 1979 the total was 210. Damerau [9] gives the distribution of these queries by month, and he also breaks them down by month into a number of different categories. Damerau's report of the gross performance statistics for the year 1978, and a similar, as yet unpublished report of his for 1979, contain a wealth of data that I will not attempt to include in this brief note. Even though his reports contain a large quantity of statistical performance data, however, there are a lot of important observations which can only be made from a detailed analysis of the day-by-day transcript of system usage. An analysis of sequences of related questions is a case in point, as is an analysis of the attempts of users to phrase new queries in response to failure of the system to process certain sentences. A paper in preparation by Plath is concerned with treating these and similar issues with the care and detail which they warrant. Time and space considerations limit my contribution in this note to just highlighting some of the major findings of Damerau and Plath.

Consider first a summary of the 1978 statistics:

  Total Queries                           788
                                          Number   Percent
  Termination Conditions:
    Completed (Answer reached)            513      65.1
    Aborted (System crash, etc.)           53       6.7
    User Cancelled                         21       2.7
    Program Error                          39       4.9
    Parsing Failure                       147      18.7
    Unknown                                15       1.9
  Other Relevant Events:
    User Comment                           96      12.2
    Operator Message                       45       5.7
    User Message                           11       1.4
    Word not in Lexicon                   119      15.1
    Lexical Choice Resolved by User       119      15.1
    "Nothing in Data Base" Answer          61       7.7

The percentage of successfully processed sentences is consistent with, but slightly smaller than, that of such other investigators as Woods [10], Ballard and Biermann [11], and Hershman et al. [12]. Extreme care should be exercised in interpreting any such overall numbers, however, and even more care must be exercised in comparing numbers from different studies. Let me just mention a few considerations that must be kept in mind in interpreting the TQA results above. First of all, our users' purposes varied tremendously from day to day and even from question to question. On one occasion, for example, a session might be devoted to a serious attempt to extract data needed for a federal grant proposal, and either the query complexity might be relatively limited so as to minimize the chance of error, or else the questions might be essentially repetitions of the same query, with minor variations to select different data. On another occasion, however, the session might be a demonstration, or a serious attempt to determine the limits of the system's understanding capability, or even a frivolous query to satisfy the user's curiosity as to the computer's response to a question outside its area of expertise. (One of our failures was the sentence, "Who killed Cock Robin?".) Our users varied widely in terms of their familiarity with the contents of the data base. None knew anything about the internal organization of information (e.g.
how the data was arranged into relations), but some had good knowledge of just what kind of data was stored, some had limited knowledge, and some had no knowledge and even false expectations as to what knowledge was included in the data base. In addition, they varied widely with respect to the amount of prior experience they had with the system. Initially we provided no formal training in the use of the system, but some users acquired significant knowledge of the system through its sustained use over a period of time. Something over half of the total usage was made by the individual from the planning department who was responsible for starting the system up and shutting it down each day. Usage was also made by other members of the planning department, by members of other departments, and by summer interns.

It should also be noted that the TQA system itself did not stay constant over the two-year period of testing. As problems were encountered, modifications were made to many components of the system. In particular, the lexicon, grammar, semantic interpretation rules (attribute grammar rules), and logical form evaluation functions all evolved over the period in question (continuously, but at a decreasing rate). The parser and the semantic interpreter changed little, if any. A rerun of all sentences, using the version of the grammar that existed at the conclusion of the field test program, showed that 50% of the sentences which previously failed were processed correctly. This is impressive when it is observed that a large percentage of the remaining 50% constitute sentences which are either ungrammatical (sometimes sufficiently to preclude human comprehension) or else contain references to semantic concepts outside our universe of (land use) discourse.

On the whole, our users indicated they were satisfied with the performance of the system. In a conference with them at one point during the field test, they indicated they would prefer us to spend our time bringing more of their files on line (e.g., the zoning board of appeals file) rather than to spend more time providing additional syntactic and associated semantic capability. Those instances where an unsuccessful query was followed up by attempts to rephrase the query so as to permit its processing showed few instances where success was not achieved within three attempts. This data is obscured somewhat by the fact that users called us on a few occasions to get advice as to how to reword a query. On other occasions the terminal message facility was invoked for the purpose of obtaining advice, and this left a record in our automatic logging facility. That facility preserved a record of all traffic between the user's terminal, the computer, and our own monitoring terminal (which was not always turned on or attended), and it included a time stamp for every line displayed on the user's terminal.

A word is in order on the real time performance of the system and on the amount of CPU time required. Damerau [9] includes a chart which shows how many queries required a given number of minutes of real time for complete processing. The total elapsed time for a query was typically around three minutes (58% of the sentences were processed in four minutes or less). Elapsed time depended primarily on machine load and user behavior at the terminal. The computer on which the system operated was an IBM System 370/168 with an attached processor, several megabytes of memory and extensive peripheral storage, operating under the VM/370 operating system.
There were typically in excess of 200 users competing for resources on the system at the times when the TQA system was running during the 1978-1979 field tests. Besides queuing for the CPU and memory, this system developed queues for the IBM 3850 Mass Storage System, on which the TQA data base was stored. Users had no complaints about real time response, but this may have been due to their procedure for handling ad hoc queries prior to the installation of the TQA system. That procedure called for ad hoc queries to be coded in RPG by members of the data processing department, and the turnaround time was a matter of days rather than minutes. It is likely that the real time performance of the system caused users sometimes to look up data about a specific parcel in a hard copy printout rather than giving it to the system. Queries were most often of the type requiring statistical processing of a set of parcels or of the type requiring a search for the parcel or parcels that satisfied given search criteria.

The CPU requirements of the system, broken down into a number of categories, are also plotted by Damerau [9]. The typical time to process a sentence was ten seconds, but sentences with large data base retrieval demands took up to a minute. System hardware improvements made subsequent to the 1978-1979 field tests have cut this processing time approximately in half. Throughout our development of the TQA system, considerations of speed have been secondary. We have identified many areas in which recoding should produce a dramatic increase in speed, but this has been assigned a lesser priority than basic enhancement of the system and the coverage of English provided through its transformational grammar.

Our experiment has shown that field testing of question answering systems provides certain information that is not otherwise available. The day to day usage of the system is different in many respects from usage that results from controlled, but inevitably somewhat artificial, experiments. We did not influence our users by the wording of problems posed to them because we gave them no problems; their requests for information were solely for their own purposes. Our sample queries that we initially exhibited to city employees to indicate the system was ready to be tested were invariably greeted with mirth, due to the improbability that anyone would want to know the information requested. (They asked for reassurance that the system would also answer "real" questions.) We also obtained valuable information on such matters as how long users persist in rephrasing queries when they encounter difficulties of various kinds, how successful they are in correcting errors, and what new errors are likely to be made while correcting initial errors. I hope to discuss these and other matters in more detail in the oral version of this paper.

Valuable as our field tests are, they cannot provide certain information that must be obtained from controlled experiments. Accordingly, we hope to conduct a comparison of TQA with several formal query languages in the near future, using the latest enhanced version of the system and carefully controlling such factors as user training and problem statement.
After teaching a course in data base management systems at Queens College and the Pratt Institute, and after running informal experiments there comparing students' relative success in using TQA, ALPHA, relational algebra, QBE, and SEQUEL, I am convinced that even for educated, programming-oriented users with a fair amount of experience in learning a formal query language, the TQA system offers significant advantages over formal query languages in retrieving data quickly and correctly. This remains to be proved (or disproved) by conducting appropriate formal experiments.

[1] Plath, W. J., Transformational Grammar and Transformational Parsing in the REQUEST System, IBM Research Report RC 4396, Thomas J. Watson Research Center, Yorktown Heights, N.Y., 1973.

[2] Plath, W. J., String Transformations in the REQUEST System, American Journal of Computational Linguistics, Microfiche 8, 1974.

[3] Petrick, S. R., Transformational Analysis, Natural Language Processing (R. Rustin, ed.), Algorithmics Press, 1973.

[4] Knuth, D. E., Semantics of Context-Free Languages, Mathematical Systems Theory, 2, June 1968, pp. 127-145.

[5] Petrick, S. R., Semantic Interpretation in the REQUEST System, in Computational and Mathematical Linguistics, Proceedings of the International Conference on Computational Linguistics, Pisa, 27/VIII-1/IX 1973, pp. 585-610.

[6] Gazdar, G. J. M., Phrase Structure Grammar, to appear in The Nature of Syntactic Representation (eds. P. Jacobson and G. K. Pullum), 1979.

[7] Bresnan, J. W. and Kaplan, R. M., Lexical-Functional Grammar: A Formal System for Grammatical Representation, to appear in The Mental Representation of Grammatical Relations (J. W. Bresnan, ed.), Cambridge: MIT Press.

[8] Astrahan, M.M.; Blasgen, M.W.; Chamberlin, D.D.; Eswaran, K.P.; Gray, J.N.; Griffiths, P.P.; King, W.F.; Lorie, R.A.; McJones, P.R.; Mehl, J.W.; Putzolu, G.R.; Traiger, I.L.; Wade, B.W.; and Watson, V., System R: Relational Approach to Database Management, ACM Transactions on Database Systems, Vol. 1, No. 2, June 1976, pp. 97-137.

[9] Damerau, F. J., The Transformational Question Answering (TQA) System Operational Statistics - 1978, to appear in AJCL, June 1981.

[10] Woods, W. A., Transition Network Grammars, Natural Language Processing (R. Rustin, ed.), Algorithmics Press, 1973.

[11] Biermann, A. W. and Ballard, B. W., Toward Natural Language Computation, AJCL, Vol. 6, No. 2, April-June 1980, pp. 71-86.

[12] Hershman, R. L., Kelley, R. T., and Miller, H. C., User Performance with a Natural Language Query System for Command Control, NPRDC TR 79-7, Navy Personnel Research and Development Center, San Diego, Cal. 92152, January 1979.
TRANSLATING ENGLISH INTO LOGICAL FORM*

Stanley J. Rosenschein
Stuart M. Shieber

Artificial Intelligence Center
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025

ABSTRACT

A scheme for syntax-directed translation that mirrors compositional model-theoretic semantics is discussed. The scheme is the basis for an English translation system called PATR and was used to specify a semantically interesting fragment of English, including such constructs as tense, aspect, modals, and various lexically controlled verb complement structures. PATR was embedded in a question-answering system that replied appropriately to questions requiring the computation of logical entailments.

I INTRODUCTION

When contemporary linguists and philosophers speak of "semantics," they usually mean model-theoretic semantics--mathematical devices for associating truth conditions with sentences. Computational linguists, on the other hand, often use the term "semantics" to denote a phase of processing in which a data structure (e.g., a formula or network) is constructed to represent the meaning of a sentence and serve as input to later phases of processing. (A better name for this process might be "translation" or "transduction.") Whether one takes "semantics" to be about model theory or translation, the fact remains that natural languages are marked by a wealth of complex constructions--such as tense, aspect, moods, plurals, modality, adverbials, degree terms, and sentential complements--that make semantic specification a complex and challenging endeavor.

Computer scientists faced with the problem of managing software complexity have developed strict design disciplines in their programming methodologies. One might speculate that a similar requirement for manageability has led linguists (since Montague, at least) to follow a discipline of strict compositionality in semantic specification, even though model-theoretic semantics per se does not demand it. Compositionality requires that the meaning of a phrase be a function of the meanings of its immediate constituents, a property that allows the grammar writer to correlate syntax and semantics on a rule-by-rule basis and keep the specification modular. Clearly, the natural analogue to compositionality in the case of translation is syntax-directed translation; it is this analogy that we seek to exploit.

We describe a syntax-directed translation scheme that bears a close resemblance to model-theoretic approaches and achieves a level of perspicuity suitable for the development of large and complex grammars by using a declarative format for specifying grammar rules. In our formalism, translation types are associated with the phrasal categories of English in much the way that logical-denotation types are associated with phrasal categories in model-theoretic semantics. The translation types are classes of data objects rather than abstract denotations, yet they play much the same role in the translation process that denotation types play in formal semantics. In addition to this parallel between logical types and translation types, we have intentionally designed the language in which translation rules are stated to emphasize parallels between the syntax-directed translation and corresponding model-theoretic interpretation rules found in, say, the GPSG literature [Gazdar, forthcoming]. In the GPSG approach, each syntax rule has an associated semantic rule (typically involving functional application) that specifies how to compose the meaning of a phrase from the meanings of its constituents. In an analogous fashion, we provide for the translation of a phrase to be synthesized from the translations of its immediate constituents according to a local rule, typically involving symbolic application and λ-conversion.

It should be noted in passing that doing translation rather than model-theoretic interpretation offers the temptation to abuse the formalism by having the "meaning" (translation) of a phrase depend on syntactic properties of the translations of its constituents--for instance, on the order of conjuncts in a logical expression. There are several points to be made in this regard. First, without severe a priori restrictions on what kinds of objects can be translations (coupled with the associated strong theoretical claims that such restrictions would embody), it seems impossible to prevent such abuses. Second, as in the case of programming languages, it is reasonable to assume that there would emerge a set of stylistic practices that would govern the actual form of grammars for reasons of manageability and esthetics. Third, it is still an open question whether the model-theoretic program of strong compositionality will actually succeed. Indeed, whether it succeeds or not is of little concern to the computational linguist, whose systems, in any event, have no direct way of using the sort of abstract model being proposed and whose systems must, in general, be based on deduction (and hence translation).

The rest of the paper discusses our work in more detail. Section II presents the grammar formalism and describes PATR, an implemented parsing and translation system that can accept a grammar in our formalism and use it to process sentences. Examples of the system's operation, including its application in a simple deductive question-answering system, are found in Section III. Finally, Section IV describes further extensions of the formalism and the parsing system. Three appendices are included: the first contains sample grammar rules; the second contains meaning postulates (axioms) used by the question-answering system; the third presents a sample dialogue session.

*This research was supported by the Defense Advanced Research Projects Agency under Contract N00039-80-C-0575 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the authors and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

II A GRAMMAR FORMALISM

A General Characterization

Our grammar formalism is best characterized as a specialized type of augmented context-free grammar. That is, we take a grammar to be a set of context-free rules that define a language and associate structural descriptions (parse trees) for each sentence in that language in the usual way. Nodes in the parse tree are assumed to have a set of features which may assume binary values (True or False), and there is a distinguished attribute--the "translation"--whose values range over a potentially infinite set of objects, i.e., the translations of English phrases. Viewed more abstractly, we regard translation as a binary relation between word sequences and logical formulas.
In the GPSG approach, each syntax rule has an associated semantic rule (typically involving functional application) that specifies how to com- pose the meaning of a phrase from the meanings of its constituents. In an analogous fashion, we provide for the translation of a phrase to be synthesized from the translations of its immediate constituents according to a local rule, typically involving symbol/c application and ~-conversiou. It should be noted in passing that doing translation rather than model theoretic interpretation offers the temptation to abuse the formalism by having the "meaning" (translation) of a phrase depend on syntactic properties of the translations of its constituents--for in- stance, on the order of conjuncts in a logical expression. There are several points to be made in this regard. First, without severe a priori restrictions on what kinds of objects can be translations (coupled with the associated strong theoretical claims that such restrictions would embody) it seems impossible to prevent such abuses. Second, as in the case of programming languages, it is reasonable to mmume that there would emerge a set of stylistic practices that would govern the actual form of grammars for reasons of manageability and esthetics. Third, it is still an open question whether the model*theoretic program of strong compositiouality will actually succeed. Indeed, whether it succeeds or not is of little concern to the computational linguist, whose systems, in any event, have no direct way of using the sort of abstract model being proposed and whose systems must, iu general, be based on deduction (and hence translation). The rest of the paper discusses our work in more detail. Section II presents the grammar formalism and describes PATR, an implemented parsing and translation system that can accept a gram- mar in our formalism and uses it to process sentences. Examples of the system's operation, including its application in a simple deductive question-answering system, are found in Section HI. Finally, Section IV describes further extensions of the formalism and the parsing sys- tem. Three appendices are included: the first contains sample gram- mar rules; the second contains meaning postulates (axioms) used by the question-answering system; the third presents a sample dialogue session. "This research wns supported by the Defense Advanced Research Projects Agency under Contract N000SO-SO-C.-0575 with the Naval Electronic Systems Conunand. The views and conclusions contained in this document are those of the authors and should not be interpreted ns representative of the ol~cial policies, either expres~.d or implied, of the Defense Ad~eanced Research Projects Agency or the United States Government. il A GRAMMAR FORMALISM A General Characterization Our grammar formalism is beet characterized as n specialized type of augmented context-free grammar° That is, we take a grammar to be a set of context-fres rules that define a language and associate structural descriptions (parse trees) for each sentence in that language in the usual way. Nodes in the parse tree are assumed to have a set of features which may assume binary values (True or False), and there is a distinguished attribute--the "translation'--whoee values range over a potentially infinite set of objects, i.e., the translations of English phrases. Viewed more abstractly, we regard translation as a binary relation between word sequences and logical formulas. 
The use of a relation is intended to incorporate the fact that many word se- quences have several logical forms, while some have none at all. Furthermore, we view this relation as being composed (in the mathe- matical sense) of four simpler relations corresponding to the conceptual phases of analysis: (1) LEX (lexical analysis), (2) PARSE (parsing), (3) ANNOTATE (assignment of attribute values, syntactic filtering), and (4) TRANSLATE (translation proper, i.e., synthesis of logical form). The domains and ranges of these relations are as follows: Word Sequences -LEX-* Morpheme Sequences -PARSE-* Phrase Structure Trees -ANNOTATE-* Annotated Trees -TRANSLATE-* Logical Form The relational composition of these four relations is the full translation relation associating word sequences with logical forms. The subphases too are viewed as relations to reflect the inherent nondeterminism of each stage of the process. For example, the sentence =a hat by every designer sent from Paris was felt" is easily seen to be nondeterminis- tic in LEX ('felt'), PARSE (poetnominal modifier attachment), and TRANSLATE (quantifier scoping). It should be emphasized that the correspondence between processing phases and these conceptual phases is loose. The goal of the separation is to make specification of the process perspicous and to allow simple, clean implementations. An actual system could achieve the net effect of the various stages in many ways, and numerous op- timizatious could be envisioned that would have the effect of folding back later phases to increase efficiency. B The Relations LEX, PARSE, and ANNOTATE We now describe a characteristic form of specification ap- RULES: constant corn,' - (~ e (X Q CX x CP CQ x))))) S--* NPVP Truss: VP' [NP'] VP -* T~qSEV Aano: [-~Transitivo(V) ] Tr,=,: { couP' [~'] t~'] } lEXICON: If -* John Aano: [Proper(W) ] Truss: { John } TENSE -* &put Trash: { (X x CpastX)) } V-*go Anon: [ -~Trasnitivn(V) ] Trnn: { C~ x Can x)) } Figure 1: Sample specification of augmented phrase structure grammar propriate to each phase and illustrate how the word sequence "John went" is analyzed by stages as standing in tbe translation relation to "(past (go john))" according to the (trivial) grammar presented in Figure 1. Lexieal analysis is specified by giving a kernel relation between individual words and morpheme sequences I (or equivalently, a mapping from words to sets of morpheme sequences), for example: John -* (john) : vent -* (kput go) : persuaded -+ (kput persuade) o (kppl persuadn) : The kernel relation is extended in a standard fashion to the full LEX relation. For example, "went" is mapped onto the single morpheme sequence (&past go), and "John" is mapped to (john). Thus, by extension, "John went" is transformed to (John &post go) by the lexical analysis phase. Parsing is specified in the usual manner by a context-free grammar. Utilizing the eontext,-free rules presented in the sample system specification shown in Figure 1, (John 8cpast go) is transformed into the parse tree (S (NP john) C~ (r~rsE tput) Cvso))) Every node in the parse tree has a set of associated features. The purpo6e of ANNOTATE is to relate the bate parse tree to one that has been enhanced with attribute values, filtering out three that do not satisfy stated syntactic restrictions. These restrictions are given as Boolean expressions associated with the context-free rules; a tree is properly annotated only if all the Boolean expressions corresponding to the rules used in the analysis are simultaneously true. 
Again, using the rules of Figure 1, lof course, more sophisticated spprotehe~ to morpholoslesl sualysls would seek to analyze the LEX relgtion more fully. See, for example, ~Kartunnen, lgS2J gad [Ksplan, 19811. (s (SP john) (W ( ~ aput) (V go)) ) is transformed into (S (NP: Proper john) (W : "~ Trandlive ( ~ ~aet) (V: -Transitive go))) C The Relation TRANSLATE Logical-form synthesis rules are specified as augments to the context-free grammar. There is a language whose expressions denote translations (syntactic formulas); an expression from this language is attached to each context-free rule and serves to define the composite translation at a node in terms of the translations of its immediate constituents. In the sample sentence, TENSE' and V' {the translations of TENSE and V respectively) would denote the ),-expressions specified in their respective translation rules. VP' {the translation of the VP) is defined to be the value of (SAP (SAP COMP' TENSE') V'), where COMF' is a constant k-expression and SAP is the symbolic-application operator. This works out to be (k X [past (go X))). Finally, the symbolic application of VP' to N'P' yields (past (go John)). (For convenience we shall henceforth use square brackets for SAP and designate (SAP a ~) by a[~].) Before describing the symbolic-application operator in more detail, it is necessary to explain the exact nature of the data objects serving as translations. At one level, it is convenient to think of the translations as X-expressions, since X-expressions are a convenient notation for specifying how fragments of a translation are substituted into their appropriate operator-operand positions in the formula being assembled-especially when the composition rules follow the syntactic structure as encoded in the parse tree. There are several phenomena, however, that require the storage of more information at a node than can be represented in a bare k-expression. Two of the most conspicuous phenonema of this type are quantifier scoping and unbounded depen- dencies ("gaps"). Our approach to quantifier scoping has been to take a version of Cooper's storage technique, originally proposed in the context of model-tbeoretic semantics, [Cooper, forthcoming[ and adapt it to the needs of translation. For the time being, let us take translations to be ordered pairs whose first component (the head) is an expression in the target language, characteristically a k-expression. The second component of the pair is an object called storage, a structured collection of sentential operators that can be applied to a sentence matrix in such a way as to introduce a quantifier and "capture" a free variable occurring in that sentence matrix. 2 For example, the translation of "a happy man" might be < m , (X S (some m (and (man m)(happy m)) S)) >.s Here the head is m (simply a free variable), and storage consists of the X-expression (k S 2in the sample grammar presented in Appendix A, the storage.formlng operation is notated mk.mbd. 3Followlng [Moore, lO80~, a quantified expression is of the form (quauti6er, variable, restriction, body) ...). If the verb phrase "sleeps ~ were to receive the translation < (X X (sleep X)), ~ > (i.e., a unary predicate as head and no storage), then the symbolic application of the verb phrase translation to the noun phrase translation would compose the heads in the usual way and take the "uniou" of the storage yielding < (sleep m), (k S (some m (and (man m)(happy m)) S)) >. 
We define an operation called ~pull.s," which has the effect of "pulling" the sentence operator out of storage and applying it to the head. There is another pull operation, pull.v, which operates on heads representing unary predicates rather than sentence matrices. When pull.s is applied in our example, it yields < (some m (and (man m)(happy m)) (sleep m)), ~b >, corresponding to the translation of the clause ~a happy man sleeps." Note that in the process the free vari- able m has been "captured." In model-theoretic semantics this cap- ture would ordinarily be meaningless, although one can complicate the mathematical machinery to achieve the same effect. Since translation is fundamentally a syntactic process, however, this operation is well- defined and quite natural. To handle gaps, we enriched the translations with a third com- ponent: a variable corresponding to the gapped position. For example, the translation of the relative clause ".,.[that] the man saw" would be a triple: < (past (see X Y)), Y, (k S (the X (man X) $))>, where the second component, Y, tracks the free variable corresponding to the gap. At the node at which the gap was to be discharged, X-abstraction would occur (as specified in the grammar by the operation "uugap') producing the unary predicate (X Y (past (see X Y))), which would ultimately be applied to the variable corresponding to the head of the noun phrase. It turns out that triples consisting of (head, var, storage) are adequate to serve as translations of a large class of phrases, but that the application operator needs to distinguish two subcases (which we call type A and type B objects). Until now we have been discussing type A objects, whose application rule is given (roughly) as < hal,vat,san>l< hal',vat',san'>[ -~ <(hd hd'),var LI var', sto i3 sto'> where one of vat or vat' must be null. In the ease of type B objects, which are assigned primarily as translations of determiners, the rule is < h d,var ,san > [< hd',var',sto' >] = <var, var', hd(hd') U sto U sto'> For example, if the meaning of "every" is every' ~- <(k P (X S (every X (P X) S))), X, ~b> and the meaning of ~man" is man' ---- < man, ~, ~ > then the meaning of "every man" is every'[man'] = ( X , ¢, (X S (man X) S)> , as expected. Nondeterminism enters in two ways. First, since pull opera, tions can be invoked nondeterministically at various nodes in the parse tree (as specified by the grammar), there exists the possibility of com- puting multiple scopings for a single context-free parse tree. (See Section III.B for an example of this phenomenon.) In addition, the grammar writer can specify explicit nondeterminism by associating several distinct translation rules with a single context-free production. In this case, he can control the application of a translation schema by specifying for each schema a guard, a Boolean combination of features that the nodes analyzed by the production must satisfy in order for the translation schema to be applicable. D Implementation of a Translation System The techniques presented in Sections H.B and II.C were imple- mented in a parsing and translation system called PATR which was used as a component in a dialogue system discussed in Section III.B. The input to the system is a sentence, which is preprocessed by a lexical analyzer. Parsing is performed by a simple recursive descent parser, augmented to add annotations to the nodes of the parse tree. Translation is then done in a separate pass over the annotated parse tree. 
Thus the four conceptual phases are implemented as three actual processing phases. This folding of two phases into one was done purely for reasons of efficiency and has no effect on the actual results obtained by the system. Functions to perform the storage manipulation, gap handling, and the other features of translation presented earlier have all been realized in the translation component of the running system. The next section describes an actual grammar that has been used in conjunction with this translation system. III EXPERIMENTS IN PRODUCING AND USING LOGICAL FORM A A Working Grammar To illustrate the ease with which diverse semantic features could be handled, a grammar was written that defines a semantically interesting fragment of English along with its translation into logical form [Moore, 1981]. The grammar for the fragment illustrated in this dialogue is compact occupying only a few pages, yet it gives both syntax and semantics for modais, tense, aspect, passives, and lexically control- led infinitival complements. (A portion of the grammar is included as Appendix A.) 4 The full test grammar, Io,~ely based on DIAGRAM [Robinson, 1982] but restricted and modified to reflect changes in a~ proach, was the grammar used to specify the translations of the sen- tences in the sample dialogue of Appendix C. B An Example of the System's Operation The grammar presented in Appendix A encodes a relation between sentences and expressions in logical form. We now present a sample of this relation, as well as its derivation, with a sample sentence: "Every man persuaded a woman to go." Lexical analysis relates the sample sentence to two morpheme streams: every man &ppi persuade a woman to go 4Since this is just a small portion of the actual grammar selected for expository purposes, many of the phrasal categories and annotations will seem unmotivated and needlessly complex. These categories and annotations m'e utilized elsewhere in the test grammar. *, every man ,~past persuade a woman to go. The first is immediately eliminated because there is no context-free parse for it in the grammar. The second, however, is parsed as [S (SDEC (NP (DETP (DDET (VET every))) C~u CN0m~V (SOUN Cs re,a))))) (Pn~ICar~ (*u~ (TE~E kpaat)) (VPP (V? CV?T (Vpersuado))) (~ (DET? CA a)) (~u (Nnm~ (~vtm CN womm))))) (INFINITIVE (TO to) CV~ Cv? CWT CV go] While parsing is being done, annotations are added to each node of the parse tree. For instance, the NP -* DETP NOM rule includes the annotation rule AGREE( NP, DETP, Definite ). AGREE is one of a set of macros defined for the convenience of the grammar writer. This particular macro invocation is equivalent to the Boolean expression Definite(NP) ~ Definite(DETP). Since the DETP node itself has the annotation Definite as a result of the preceding annotation process, the NP node now gets the annotation Definite as wello At the bottom level, the Definite annotation was derived from the lexical entry for the word "evesy'. s The whole parse tree receives the following annotation: [S Cb'~O (lqP: Delinite (DETP: DeBnite CDDET: DeBnite (DET: DeBuite eve1"y) ) ) CNOU (stump CNO~ CSm~))))) CPR~ICATE CAU~ CTENSE ~put)) (VPP CVP: Active (VPT: Active, Ttansitlve, Takesln? (V: Active, Transitive, Takesfn[ porsuade) ) ) 0~' (DET? 
CA a) ) CNOU C~la'~ C~ml C~ ,,on~))))) CDr~ISZTZ'W (TO to) (vPP (w: Active (VPT: Active Cv: Active sol Finally, the entire annotated parse tree is traversed to assign translations to the nodes through a direct implementation of the process described in Section II.C. (Type A and B objects in the following examples are marked with a prefix 'A:' or 'B:'.) For instance, the VP node covering (persuade a woman to go), has the translation rule VPT'[N'P'][INFINITIVE']. When this is applied to the translations of the node's constituents, we have CA: CA X CA P (~ T (persuade ¥ X (P X)))~ [,CA: X2. ~,. C~ S (some X2 Cwomu X2) S))~] [cA: (~x C~x))~] which, after the appropriate applications are performed, yields CA: CAP (~Y (persuade YX2 CPX2)))). ~, (A S (some X2 (~- X2) S))~ 5Note that, although the annotation phase was described and is implemented pro- cedurally, the process actually used guarantees that the resulting annotation is ex" "t|y the one specified declaratlve~y by the annotation rules. [o,: (A x (gox))>] = CA: ()/¥ (persuadeTX2 (goX2))). ~b, CA S (some X2 (roman X2) S))~ After the past operator has been applied, we have <A: CA T (pant (persumde YX2 (goX2)))). ~b, CA S (some X2 (~znu X2) S))) At this point, the pull operator (pull.v) can be used to bring the quantifier out of storage, yielding 6 <A: CA Y (some ~2 (womb ][2) (pant (peramado T~ (go Yg))))). This will ultimately result in "a woman" getting narrow scope. The other alternative is for the quantifier to remain in storage, to be pulled only at the full sentence level, resulting in the other scoping. In Figure 2, we have added the translations to all the nodes of the parse tree. Nodes with the same translations as their parents were left unmarked. From examination of the S node translations, the original sentence is given the fully-scoped translations (every X2 (man ](2) (some Xi (woman Xi) (paSt (persuade %,9 X! (go Xl))))) and (some XI (vo~ Xl) (every X~2 (nan X2) (pant (persuade X2 Xl (go Xl) ))) ) C A Simple Question-Answering System As mentioned in Section I, we were able to demonstrate the semantic capabilities of our language system by assembling a small question-answering system. Our strategy was to first translate English into logical formulas of the type discussed in [Moore, 1981], which were then postprocessed into a form suitable for a first-order deduc- tion system. 7 (Another possible approach would have been to translate directly into first-order logic, or to develop direct proof procedures for the non-first-order language.) Thus, we were able to integrate all the components into a question-answering system by providing a simple control structure that accepted an input, translated it into logical form, reduced the translation to first-order logic, and then either asserted the translation in the case of declarative sentences or attempted to prove it in the case of interrogatives. (Only yes/no questions have been imple- mented.) The main point of interest is that our question-answering system was able to handle complex semantic entailments involving tense, modality, and so on--that, moreover, it was not restricted to extensional evMuation in a data base, as with conventional question- answering systems. For example, our system was able to handle the entailments of sentences like John could not have been persuaded to go. (The transcript of a sample dialogue is included as Appendix C.) 6For convenience, when a final constituent o1' a translation is ~ it is often not written. 
Thus we could have written <A: (k Y (some ...) ...)> in this cue. 7We used a connection graph theorem prover written by Mark Stickel [Stlckel, forthcoming]. (S: <A: (pant (persuade XI X2 (go ~))). ~. (A S (every X1 (nan X1) S)) ()~ S (some ~ (veto ][2) S))>, <A: (some ][2 (~man X2) (past Cpersua4e X1 Y,2 (go Yo)))) 0 ~. (~ 8 (every Zl (man ][I) S))> <A: (everyX2 CnanX2) (some XI (woman X1) (pant (persuade X2 Xl (go Y~)) )))> cA: (sou Xl (wuan X1) (every ][2 (man X2) (pant (p0rsuade X2 li (go ][2)) ) )) > (SV~ (NP: <A: Xl. ~. (A S (everyXl (muXl)S))) CDKTP: ¢~: CA P (~ S (every X (PI) S))). X~ (DDET (DET every))) (NDU: cA: CA X (man X))) (None (Nmm (x m~n))))) (PREDICATE: <A: (AX (past (persuade YX2 (goX2)))). ~b. CA S (some X2 (woma X2) S))), <X: CA X (son X2 (woeanX2) (pant (persuade YX2 (goX2))))). (AU~P: o,: CA P CA X (pant (P x))))> C'X'~ a~,,.t)) (VPP: <A: (A ¥ (persuade ¥ ][2 (go ][2))). ~b. CA S (some X2 (wn--X2) S))~ (VP (VPT: cA: (XX CA P ()~ Y (persuade ¥X (P ¥)))) (V persuade))) (~: cA: X2. ~, CA S (someX2 (wona Z2) S))~ (DETP: <S: (kP (AS (SoNX (PX) S))). X~ CA n)) (~li: (A: (XX (wommX))> (N0~ (N0uw (w ,mm~))))) (INFINITIVE (TO: none to) (VPP: ca: (>,X (goX))> (w (vPT (v so] Figure 2:. Node-by-node translation of a sample sentence The reduction of logical form to first-order logic (FOL) was parameterized by a set of recursive expansions for the syntactic ele- ments of logical form in a manner similar to Moore's use of an sxiomatization of a modal language of belief. [Moore, 1980] For ex- ample, (past P) is expanded, with respect to a possible world w, as (some w2 (and (past w2 w) <P,w2>)) where "<P,w2>" denotes the recursive FOL reduction of P relative to the world w2. The logical form that was derived for the sample sentence "John went ~ therefore reduces to the first-order sentence (some w (and (past w REALWORLD)(go w John))). More complicated illustrations of the results of translation and reduc- tion are shown in Figure 3. Note, for example, the use of restricted quantification in LF and ordinary quantification in FOL. To compute the correct semantic entailments, the deduction system was preloaded with a set of meaning postulates (axioms) giving inferential substance to the predicates associated with lexical items (see IMrffT: every ntanus~ be happy iF: (everyX (m X) (tacosnry (tad (happy X) (thlngX)))) FOL: (every x0172 (implies (mtuREALWORLDxOI72) (overywO173 (implies (posnBEALgORLD~173) (tad (happy~O175zOt72) (~hi~Ot73z0172)))))) II~UT: bill persuaded john to go iF: (ptat (porsutde bill john (go john))) FOL: (some s0175 (ud (pant w0175 RF..AL|QP.LD) (sou wOrTS (Imd (permaade w0175 bill John wOlT§) (go wOlTe John))))) Figure 3: Translation to LF and Reduction to FOL Appendix B). IV FURTHER EXTENSIONS We are continuing to refine the grammar formalism and im- prove the implementation. Some of the refinements are intended to make the annotations and translations easier to write. Examples in- clude: Allowing nonbinary features, including sets of values, in the annotations and guards (extending the language to include equality and set operations). Generalizing the language used to specify synthesis of logi- cal forms and developing a more uniform treatment of translation types. Generalizing the "gap* variable feature to handle ar- bitrary collections of designated variables by using an "environment" mechanism. This is useful in achieving a uniform treatment of free word order in verb complements and modifiers. 
In addition, we are working on extensions of the syntactic machinery, including phrase-linking grammars to handle displacement phenomena [Peters, 1981], and methods for generating the augmented phrase structure grammar through a metarule formalism similar to that of [Konolige, 1980]. We have also experimented with alternative parsing algorithms, including a chart parser [Bear, 197g] adapted to carry out annotation and translation in the manner described in this paper. REFERENCES Bear, John, and Lanri Karttunen. PSG: A Simple Phrase Structure Parser. Texas Linguistic Forum, vol. 14. 1979. Cooper, Robin. Quantification and Syntactic Theory. Forthcoming. Reidel, Dordrecht. Gazdar, Gerald. Phrase Structure Grammar. To appear in Jacobsen, O. and G. K. Pullum (eds.) On the Nature of Syntactic Representation. Kaplan, R. M., and Martin Kay. Personal communication. 1981. Karttunen, Lauri, Rebecca Root, and Hans Uszkoreit. Morphological analysis of Finnish by computer. Paper presented at the ACL session of the 1981 LSA Annual Meeting, New York, December 1981. Konolige, Karl. Capturing linguistic generalizations with metarules in an annotat.d phrase-structure grammar. Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, University of Pennsylvania, Philadelphia, June 1980. Moore, Robert C. Problems in Logical Form. Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, Stanford University, Pale Alto, June, 1981. Moore, Robert C. Reasoning About Knowledge and Action. SRI International, Technical Note 191. October, 1980. Peters, Stanley, and Robert W. Ritchie. Phrase Linking Grammars. December 1981. Unpublished manuscript. Robinson, Jane. DIAGRAM: A Grammar for Dialogues° Communications of the ACM, ~5:1 (January, 1982) 27--47. Stickel, Mark. A Non-Clausal Connection Graph Resolution Theorem Proving Program. Forthcoming. APPENDIX A. Sample Grammar Rules The following is a portion of a test grammar for the PATR English translation system. Only those portions of the grammar uti- lized in analyzing the sample sentences in the text were included. The full grammar handles the following constructs: medals, adjec- rivals, tense, predicative and nonpredicative copulatives, adverbials, quantified noun phrases, aspect, NP, PP, and infinitival complements, relative clauses, yes/no questions, restricted wh-questions, noun-noun compounds, passives, and prepositional phrases as predicates and ad* jectivals. a~smffiamumam GrlmlN, r hles •.m~ . . . . ~mssmtm Cone~mt EQ' • curry (X,,AIIBDA (X ¥) (equal X ¥)) Coast&at PASS' 8 <A: (LA~DA P (LAIEDA X ((P X) T))). NIL, (IIX.IIBD (QUOTE (LAIIBDA S (some T (thing Y) S)))) > Constant PhSSIIF' • <A: (LAM~)A P (LAMBDA I (~& x (((P x) I) ¥)))). NIL, (MI(.MBD (QUOTE (IAMBDA S Csome ¥ (thing ¥) S)))) > AUXP-> TENSE; Trtaslation: TENSE' DDET -> DET: Annotation: [ Defiaite(DDET) ] Trtaslation: DET' DETP -> A; Annotation: [ ~Definite(DETP) ] Translation: A' DETP -> DDET; Annotation: [ AGREE(DET?. DDET, Definite) ] Translation: DDET' II~INITIV~ -~ TO VPP; Annotation: [ AGREECINFINITIVE. VPP, G*ppy. |h) ] Translation: pull.v(VPP') NON -> NO~qD; Annotation: [ AOREE(NOM. NOMHD. O~ppy) ] Translation: NON~ID' NOMHD -) NOUN; Translation: NOUN' NOUN -> N; Translation: N' NP -) DE?P ~M; Annotation: [ AOP~CNP. NOM. Gappy) ] [ Predicative(NP) %/ ~Predicative(NP) ] [AGREE(N~. 
DETP, Definite) ] Translation: ~Predica~ive(~): DET~'[NOM'] Definite(NP) A Predicative(NP): E~'[DETP'[NQM']] ~Definite(NP) • Predicative(NP): NON' PREDICATE -> AU]~ ~; Annotation: [ AORE~(PREDICATE. VPP. Active. 0appy. ~h) ] Translation : pull.v(A~' [VPP']) S -) SDEC; Annotation: [ ~Oappy(.~'~EC) ] [ ~(~EC) ] Translation : SDEC' &DEC -) NP PREDICATE; Annotation: [ 0appy(NP) V Gappy(I~DICATE) ¢-) G~ppy(S)EC) ] [ ~Predicative(NP) ] [ |h(N~) ~/ |b(PREDICATE) <=> Wb(SDEC) ] [ - (Onppy (NP) a Onppy (PKEDICATE)) ] Truslation: pull.s(PR~DICATE'[NP']) VP -, VPT; Annotation: [ ~TrLnsitive(VPT) ] [-TLkeelnZCV~T) ] [ Active(VPT) ] [ ActiveCVP) ] Translation: VPT' VP -> VPT NP I~FINITIVE; Annotation: [ Takeslnf(VPT) ] [ Transitive(VPT) ] [ ~P~,dicativ,(~) ] [ AOP~:~(~. VPT. Active) ] [ Wh(NP) %/ Wh(INFmITIW) ~-* Wh(VP) ] [ IY(lctive(VPT). ((O&ppy(~) ~/ Oappy(~INITIVE)) ,=) Sappy(%~D)), (~Oappy(~T) k Oappy(NP))) ] Truslation: Active(%?): pulI.v(%~OT°[NP '] [I~INITI~']) ~Active(VP): pull.v(P~Sl~'~T'] [INFINITIVE']) V~ -~ VP; Annotation: [ a~(vl~. VP, Gappy. |h) ] [ Active(VP) ] Translation: VP' VPT -> V; Annotation: [ AOREE(VPT. V. Active. Transitive. T~kenInf) ] Trsnslatlon: V' N -> nan: Translation: ¢a: mum, NIL, NIL ) Translation : ¢A: ~man. NIL, NIL ) DET -) every: Annotation: [ Definite(DET) ] Translation: (B: (LAI~A P (LAMBDA S (every X (P X) S))). X. NIL • A -~ &; Translation: ~B: (IA~mDA P (~DA S (some X (P X) S))). X, NIL • V -~ persuade; Annotation: [ Transitive(V) ] [ Active(V) ~/ ~Active(V) ] [ TLkeslnf(V) ] Translation: curry (LAIfBDA (X P Y) (persuade Y l (P X))) V -> go; Annotation: [ ~Traneitive(V) ] [ -TskesZ~CV) ] [ ActiveCV) ] Translation: <A: go, NIL. NIL TENSE -> &past; Translation: curry (LAI~A (P X) (past ~ X))) APPENDIX B. Meaning Postulates (every • (every u (iff (pant • u) (.or (put u •] (eTery • (some u (put • U))) [every • (every • (every y (every z (implies (promise • • y z) (put • z] [every • (every • (every y (every z (implies (persuade • • y z) (pant • z] (every • (every • (thing • x))) [every • (every x (every z (implies (want • s z) (put • z] (every • (pose • v)) [every v (every u (implies (pant • u) (pose • u) [every • (every u (every v (implies (and (pantl • u) (pantl u v)) (pant2 • v] [every • (every z (implies (past2 • z) (pant • z] [every v (every z (if! (past • z) (putl• z] ~ is john a happy man Yes. • > no man could have hidden a book OK. >) did john hide a book No. >~ bill hid a book OK. • ~ is bill a man No. ~> ww john a sum I don't know. >> every •an •ill be a nan OK. >) •ill joh•be • nan Yes. ~, bill persuaded john to go OK. • > could john have been persuaded to p Yes. >> •ill john be persuaded to go I don't knee. APPENDIX C. Transcript of Sample Dialo~ue • ~ john is happy OK. ~ is john happy Yes. >> is john a happy mnn I don't kno•. >> john is a man nK. | 1982 | 1 |
ENGLISH WORDS AND DATA BASES: HOW TO BRIDGE THE GAP Remko J.H. Scha Philips Research Laboratories Eindhoven The Netherlands ABSTRACT If a q.a. system tries to transform an Eng- lish question directly into the simplest possible formulation of the corresponding data base query, discrepancies between the English lexicon and the structure of the data base cannot be handled well. To be able to deal with such discrepancies in a systematic way, the PHLIQAI system distinguishes different levels of semantic representation; it contains modules which translate from one level to another, as well as a module which simplifies expressions within one level. The paper shows how this approach takes care of some phenomena which would be problematic in a more simple-minded set-up. I INTRODUCTION If a question-answering system is to cover a non-trivial fragment of its natural input-language, and to allow for an arbitrarily structured data base, it cannot assume that the syntactic/semantic structure of an input question has much in common with the formal query which would formulate in terms of the actual data base structure what the desired information is. An important decision in the design of a q.a. system is therefore, how to embody in the system the necessary knowledge about the relation between English words and data base notions. Most existing programs, however, do not face this issue. They accept considerable constraints on both the input language and the possible data base structures, so as to be able to establish a fairly direct correspondence between the lexical items of the input language and the primitives of the data base, which makes it possible to translate input questions into query expressions in a rather straightforward fashion. In designing the PHLIQAI system, bridging the gap between free English input and an equally un- constrained data base structure was one of the main goals. In order to deal with this problem in a sys- tematic way, different levels of semantic analysis are distinguished in the PHLIQAI program. At each of these levels, the meaning of the input question is represented by an expression of a formal logical language. The levels differ in that each of them assumes different semantic primitives. At the highest of these levels,the meaning of the question is represented by an expression of the English-oriented Formal Language (EFL); this lan- guage uses semantic primitives which correspond to the descriptive lexical items of English. The prim- itives of the lowest semantic level are the prim- itives of the data base (names of files, attributes, data-items). The formal language used at this level is therefore called the Data Base Language (DBL). Between EFL and DBL, several other levels of mean- ing representation are used as intermediary steps. Because of the space limitations imposed on the present paper, I am forced to evoke a somewhat mis- leading picture of the PHLIQA set-up, by ignoring these intermediate levels. Given the distinctions just introduced, the problem raised by the discrepancy between the Eng- lish lexicon and the set of primitives of a given data base can be formulated as follows: one must devise a formal characterization of the relation between EFL and DBL, and use this characterization for an effective procedure which translates EFL queries into DBL queries. 
I will introduce PHLIQA's solution to this problem by giving a detailed dis- cussion of some examples I which display complica- tions that Robert Moore suggested as topics for the panel discussion at this conference. II THE ENGLISH-ORIENTED LEVEL OF MEANING REPRESENTATION The highest level of semantic representation is independent of the subject-domain. It contains a semantic primitive for every descriptive lexical item of the input-language 2. The semantic types of these primitives are systematically related to the syntactic categories of the corresponding lexical items. For example, for every noun there is a con- stant which denotes the set of individuals which fall under the description of this noun: corre- sponding to "employee" and "employees" there is a constant EMPLOYEES denoting the set of all employ- ees, corresponding to "department" and "depart- ments" there is a constant DEPARTMENTS denoting the set of all departments. Corresponding to an n-place verb there is an n-place predicate. For instance, "to have" corresponds to the 2-place predicate HAVE. Thus, the input analysis component . . . . . . . . . . . . . . . . . . . . . . . . . I There is no space for a definition of the logical formalism I use in this paper. Closely related log- ical languages are defined in Scha (1976), Lands- bergen and Scha (1979), and Bronnenberg et a1.(1980). 2 In previous papers it has been pointed out that this idea, taken strictly, leads not to an ordinary logical language, but requires a formal language which is ambiguous. I ignore this aspect here. What I call EFL corresponds to what was called EFL- in some other papers. SeeLandsbergenand Scha (1979) and Bronnenberg et al. (1980) for discussion. 57 of the system translates the question "How many departments have more than i00 employees ?" (i) into Count({x E DEPARTMENTS I Count({y e EMPLOYEESIHAVE(x,y)}) > I00}). (2) III THE DATA BASE ORIENTED LEVEL OF MEANING REPRESENTATION A data base specifies an interpretation of a logical language, by specifying the extension of every constant. A formalization of this view on data bases, an& its application to a CODASYL data base, can be found in Bronnenberg et ai.(1980). The idea is equally applicable to relational data bases. A relational data base specifies an inter- pretation of a logical language which contains for every relation R [K, At, .... An] a constant K de- noting a set, and n functions Al,..., An which have the denotation of K as their domain. ~ Thus, if we have an EMPLOYEE file with a DEPARTMENT field, this file specifies the extension of a set EMPS and of a function DEPT which has the denotation of EMPS as its domain. In terms of such a data base structure, (i) above may be formulated as Count({xe (for: EMPS, apply: DEPT) 1 Count((y e EMPSIDEPT(y)=x}) > i00}). (3) I pointed out before that it would be unwise to design a system which would directly assign the meaning (3) to the question (I). A more sensible strategy is to first assign (I) the meaning (2). The formula (3), or a logically equivalent dne, may then be derived on the basis of a specification of the relation between the English word meanings used in (i) and the primitive concepts at the data base level. IV THE RELATION BETWEEN EFL AND DBL Though we defined EFL and DBL independently of each other (one on the basis of the possible Eng- lish questions about the subject-domain, the other on the basis of the structure of the data base about it) there must be a relation between them. 
The data base contains information which can serve to answer queries formulated in EFL. This means that the denotation of certain EFL expressions is fixed if an interpretation of DBL is given. We now consider how the relation between EFL and DBL may be formulated in such a way that it can easily serve as a basis for an effective transla- tion from EFL expressions into DBL expressions. The most general formulation would take the form of a set of axioms, expressed in a logical language encompassing both EFL and DBL. If we allow the full generality of that approach, however, it leads to the use of algorithms which are not efficient and which are not guaranteed to terminate. An alterna- tive formulation, which is attractive because it can easily be implemented by effective procedures, is one in terms of translation rules. This is the approach adopted in the PHLIQAI system. It is de- scribed in detail in Bronnenberg et al. (1980) and can be summarized as follows. The relation between subsequent semantic levels can be described by means of local transla- tion rules which specify, for every descriptive constant of the source language, a corresponding expression of the target language I • A set of such translation rules defines for every source language query-expression an equivalent target language ex- presslono An effective algorithm can be constructed which performs this equivalence translation for any arbitrary expression. A translation algorithm which applies the translation rules in a straightforward fashion, often produces large expressions which allow for considerably simpler paraphrases. As we will see later on in this paper, it may be essential that such simplifications are actually performed. There- fore, the result of the EFL-to-DBL translation is processed by a module which applies logical equi- valence transformations in order ~o simplify the expression. At the most global level of description, the PHLIQA system can thus be thought to consist of the following sequence of components: Input analysis, yielding an EFL expression; EFL-to-DBL translation! simplification of the DBL expression; evaluation of the resulting expression. For the example introduced in the sections II and III, a specification of the EFL-to-DBL transla- tion rules might look llke this: DEPARTMENTS ~ (for: EMPS, apply: DEPT) EMPLOYEES ÷ EMPS HAVE ÷ (%x,y: DEPT(y)=x) These rules can be directly applied to the formula (2). Substitution of the right hand expressions for the corresponding left hand constants in (2), fol- lowed by X-reduction, yields (3). V THE PROBLEM OF COMPOUND ATTRIBUTES It is easy to imagine a different data base which would also contain sufficient information to answer question (i). One example would be a data base which has a file of DEPARTMENTS, and which has NUMBER-OF-EMPLOYEES as an attribute of this fileo This data base specifies an interpretation of a logical language which contains the set-constant DEPTS and the function #EMP (from departments to integers) as its descriptive constants. In terms of this data base, the query expressed by (i) would be: Count (~x e DEPTSI #EMP (x) > i00}). (5) If we try to describe the relation between EFL and DBL for this case, we face a difficulty which dld not arise for the data base structure of section III: the DBL constants do not allow the construction of DBL expressions whose denotations involve employees. So the EFL constant EMPLOYEES cannot be translated into an equivalent DBL expres- sion - nor can the relation HAVE, for lack of a suitable domain. 
This may seem to force us to give up local translation for certain cases: instead, we would have to design an algorithm which looks out for sub-expressions of the form I ignore the complexities which arise because of the typing of variables, if a many-sorted logic is used. Again, see Bronnenberget al. (1980), for details. 58 (%y: Count( {x EEMPLOYEES IHAVE(y,x)} )), where y is ranging over DEPARTMENTS, and then translates this whole expression into: #~. This is not attractive - it could only work if EFL expressions would be first transformed so as to always contain this ex- pression in exactly this form, or if we would have an algorithm for recognizing all its variants. Fortunately, there is another solution. Though in DBL terms one cannot talk about employees, one can talk about objects which stand in a one-to-one correspondence to the employees: the pairs consis- ting of a department d and a positive integer i such that i is not larger than than the value of #E~ for d. Entities which have a one-to-one correspon- dence with these pairs, and are disjoint with the extensions of all other semantic types, may be used as "proxies" for employees. Thus, we may define the following translation: EMPLOYEES ~ U(for: DEPTS, apply: (%d:(for: INTS(#EMP(d)), apply: (~ x:idemp ~ d,x>))))) DEPARTMENTS ~ DEPTS HAVE * (%y: rid(y[2])[l] = y[l]) where id is a functionwhich establishes a one- em -to-one correspondence between its domain and its range (its range is disjoint with all other seman- tic types); rid is the inverse of id ; INTS is a emp function which assigns to any integer i the set of integers j such that 0<j~i. Application of these rules to (2) yields: Count({x E DEPTS I Count({y~ U(for: DEPTS, apply:(%d:(for: INTS(#EMP(d)), apply: (%x:id ~ d,x>))))) 1 rid(y)[l] = x}) > i00}~ mp (6) which is logically equivalent to (5) above. It is clear that this data base, because of its greater "distance" to the English lexicon, re- quires a more extensive set of simplification rules if the DBL query produced by the translation rules is to be transformed into its simplest possible form. A simplification algorithm dealing succesful- ly with complexities of the kind just illustrated was implemented by W.J. Bronnenberg as a component of the PHLIQAI system. VI EXTENDING THE DATA BASE LANGUAGE Consider a slight variation on question (I): "How many departments have more than i00 people ?" (7~) We may want to treat "people" and "e~!oyees" as non-synonymous. For instance, we may want to be able to answer the question "Are all employees em- ployed by a department ?" with "Yes", but "Are all people employed by a department ?" with "I don't know". Nevertheless, (7) can be given a definite answer on the basis of the data base of section IlL The method as described so far hasaproblem with this example: although the answer to (7) is de- termined by the data base, the question as formula- ted refers to entities which are not represented in the data base, cannot be constructed out of such entities, and do not stand in a one-to-one corre- spondence with entities which can be so constructed. In order to be able to construct a DBL translation of (7) by means of local substitution rules of the kind previously illustrated, we need an extended version of DBL, which we will call DBL*, containing the same constants as DBL plus a constant NONEMPS, denoting the set of persons who are not employees. Now, local translation rules for the EFL-to-DBL* translation may be specified. 
Application of these translation rules to the EFL representation of (7) yields a DBL* expression containing the unevaluable constant NONEMPS. The system can only give a defi- nite answer if this constant is eliminated by the simplification component. If the elimination does not succeed, PHLIQA still gives a meaningful "conditional answer". It translates NONEMPS into ~ and prefaces the answer with "if there are no people other than employees, ...". Again, see Bronnenberg et al. (1980) for details. VII DISCUSSION Some attractive properties of the translation method are probably clear from the examples. Local translation rules can be applied effectively and have to be evoked only when they are directly re- levant. Using the techniques of introducing "prox- ies" (section V) and "complementary constants" (section VI) in DBL, a considerable distance be- tween the English lexicon and the data base struc- ture can be covered by means of local translation rules. The problem of simplifying the DBL* expres- sion (and other, intermediate expressions, in the full version of the PHLIQA method) can be treated separately from the peculiarities of particular data bases and particular constructions of the input language. VIII ACKNOWLEDGEMENTS Some of the ideas presented here are due to Jan Landsbergen. My confidence in the validity of the translation method was greatly enhanced by the fact that others have applied it succesfully. Espe- cially relevant for the present paper is the work by Wim Bronnenberg and Eric van Utteren on the translation rules for the PHLIQAI data base. Bipin Indurkhya (1981) implemented a program which shows how this approach accommodates the meaning postu- lates of Montague's PTQ and similar fragments of English. IX REFERENCES W.J.H.J. Bronnenberg, H.C. Bunt, S.P.J. Landsbergen, R.J.H. Scha, W.J. Schoenmakers and E.P.C. van Utte- ren: The Question Answering System PHLIQAI. In: L. Bolc (sd): Natural Lan~uase Question Answering Systems. M~nchen, Wien: Hanser. London, Basing- stoke: Macmillan. 1980. B. Indurkhya: Sentence Analysis Prosrams Based on Montague Grammar__~.Unpubl. Master's Thesis. Phi- lips International Institute. Eindhoven. 1981. S.P.J. Landsbergen and R.J.H. Scha: Formal Lan- guages for Semantic Representation. In: S. All~n and J.S. PetSfi (eds): AsRects of Automatized Text Processing. Hamburg: Buske. 1979. R.J.H. Sch~ Semantic Types in PHLIQAI. Preprints of the 6 ~h International Conference on C0mputa- tional Linsuistics. Ottawa. 1976. 59 | 1982 | 10 |
Problems With Domain-Independent Natural Language Database Access Systems

Steven P. Shwartz
Cognitive Systems Inc.
234 Church Street
New Haven, Ct. 06510

In the past decade, a number of natural language database access systems have been constructed (e.g. Hendrix 1976; Waltz et al. 1976; Sacerdoti 1978; Harris 1979; Lehnert and Shwartz 1982; Shwartz 1982). The level of performance achieved by natural language database access systems varies considerably, with the more robust systems operating within a narrow domain (i.e., content area) and relying heavily on domain-specific knowledge to guide the language understanding process. Transporting a system constructed for one domain into a new domain is extremely resource-intensive because a new set of domain-specific knowledge must be encoded. In order to reduce the cost of transportation, a great deal of current research has focussed on building natural language access systems that are domain-independent. More specifically, these systems attempt to use syntactic knowledge in conjunction with knowledge about the structure of the database as a substitute for conceptual knowledge regarding the database content area. In this paper I examine the issue of whether or not it is possible to build a natural language database access system that achieves an acceptable level of performance without including domain-specific conceptual knowledge.

A performance criterion for natural language access systems.

The principle motivation for building natural language systems for database access is to free the user from the need for data processing instruction. A natural language front end is a step above the "English-like" query systems that presently dominate the commercial database retrieval field. English-like query systems allow the user to phrase requests as English sentences, but permit only a restricted subset of English and impose a rigid syntax on user requests. These English-like query systems are easy to learn, but a training period is still required for the user to learn to phrase requests that conform to these restrictions. However, the training period is often very brief, and natural language systems can be considered superior only if no computer-related training or knowledge is required of the user.

This criterion can only be met if no restrictions are placed on user queries. A user who has previously relied on a programmer-technician to code formal queries for information retrieval should be permitted to phrase information retrieval requests to the program in exactly the same way as to the technician. That is, whatever the technician would understand, the program should understand. For example, a natural language front end to a stock market database should understand that

(1) Did IBM go up yesterday?

refers to PRICE and not VOLUME. However, the system need not understand requests that a programmer-technician would be unable to process, e.g.

(2) Is GENCO a likely takeover target?

That is, the programmer-technician working for an investment firm would not be expected to know how to process requests that require "expert" knowledge, and neither should a natural language front end. If, however, a natural language system cannot achieve the level of performance of a programmer-technician, it will seem stupid because it does not meet a user's expectations for an English understanding system.

The "programmer-technician criterion" cannot possibly be met by a domain-independent natural language access system because language understanding requires domain-specific world knowledge. On a theoretical level, the need for a knowledge base in a natural language processing system has been well-documented (e.g. Schank & Abelson 1977; Lehnert 1978; Dyer 1982). It will be argued below that in an applied context, a system that does not have a conceptual knowledge base can produce at best only a shallow level of understanding, and one that does not meet the criterion specified above. Further, the domain-independent approach creates a host of problems that are simply non-existent in knowledge-based systems.

Problems for domain-independent systems: inference, ambiguity, and anaphora.

Inferential processing is an integral part of natural language understanding. Consider the following requests from PEARL (Lehnert and Shwartz 1982; Shwartz 1982) when it operates in the domain of geological map generation:
That is, the programmer-technlcisn uorking for an investment firm would not be expected to know how t<) process requests that require "expert" knowledge and neither should | natural language front end, If, however, = natural language system cannot a- chieve the level of performance of a program- ear-technician it will seem stupid because it does not meet = user's expectations for an English un- derstanding system, The mprograemer-technician criterion m cannot possibly be met by = domain-independent natural language access system because language understan- ding requires domain-specific world knowledge. On a theoretical level, the need for a knowledge base in a natural language processing system has been well-documented (e.g. Schank A Abelson 1977; Lehnert 1978; Dyer 1982). It will be argued below that in an applied context, a system that does not have a conceptual knowledge base can pro- duce at best only a shallow level of understanding and one that does not meet the criterion specifled above. Further, the domain-independent approach creates a host of problems that are simply non-ex- istent in knowledge-based s~stems. E~oble== far dolai0:i0dg~a0dan~ =~=~®=~ infer- ence. ambiguity, sod aoagbora, Inferential processing is an integral part of natural language understanding. Consider the fol- lowing requests from PEARL (Lehnert and Shvartz 1982; Shwartz 1982) when it operates in the domain of geological map generation: 60 (3) Show ss ell oil veils from 1970 to 1980. (4) Show Is all oil veils fro! 8000 ~ 7000. (5) Show se all oil wells 1 t~a 2000. (6) Show ee all oil wells 40 to 41, 80 to 81. A programmer-technician In the petrochemical in- dustry would infer that (3) refers to drilling dates, (4) refers ~o veil depth, (5) refers ~o the sap scale, end (6) refers to latitude/longitude specifications. Correct processing of these requsst~ requires in- ferential processing that is based on knowledge of the petrochemical industry. That is, these con- ventions =re not in everyone's general working knowledge of the English language. Yet they are standard usage for people who communicate with each other about drilling data, and any systss that claims t~o provide a natural language interface t~ l data base of drilling data must have the knowledge to correctly process requests such as these. Without such inferential processing, the user is required to spell out everything in detail, some- thing that is sispty not necessary in normal Eng- lish discourse. Another probles for any natural language un- derstanding systes is the processing of ambiguous words. In some cases disambiguation can be per- formed syntactically. In other cases, the struc- ture of the database can provide the information necessary for word sense disambiguation (more on this below). However, in many cases disasbiguation can only be performed if domain-specific, world knowledge is available. For example, consider the processing of the word "sales = in (7), (8) and (9). (7) What is the average mark up for sales of stereo equipment? (8) What is the average mark down for sales of stereo equipment? (9) What is the average mark up during sales of stereo equipment? (10) What is the average mark down durlng sales of stereo equipment? These four requests, which are so nelrly identical both lexically and syntactically, have very dis- tinct meanings that derive from the fact that the correct sense of 'sliest in (7) ls quits different from the sense of "sales = intended in (8), (9), end (10). 
Nest people have little difficulty deter- mining which sense of =sales = is intended in these sentences, and neither would a knowledge-based un- derstander. The key to the disambiguation process involves world knowledge regarding retail sales. Problems of anaphora pose similar problems. For example, suppose the following requests were submitted to a personnel data base: (11) List all salesmen with retirement plans along with their salaries. (12) List all offices with women managers along with their salaries. While these requests are syntactically identical, the referents for "their" in (11) end (12) occupy different syntactic positions. As human informa- tion processors, ve have no trouble understanding 61 that salarie~ are associated with people, so retirement pllns and offices are never considered as possible referents. Again, domain-specific world knouledge is helpful in understanding these requests. ~Ug~u~al knQwlldgm i= m =uh=~i~u~m fo~ GQO¢ID~ual knowlsdgg, One of inner|aliens to eaerge from the con- struction of domain-independent systems is t clever mechanism that extracts dosain-speclflc knowledge free the structure of the data base. For example, the resolution of the pronoun 'their = in both (11) and (12) above could be accomplished by using only structural (rather than conceptual) knowledge of the domain. For example, suppose the payroll database for (11) were structured such that SALARY and RETIRENENT-PLANS were fields within a SALESMAN file. It would then be possible to infer that ltheir= refers to =salesmen = in (11) by noting that SALARY is a field in the SALESMEN file, but that SALARY is not an entry in I RETIREMENT-PLANS file. Unfortunately, this approach has lilited u- tility because it relies on a fortuitous de,abase structure. Consider what would happen if the data base had a top-level ERPLOYEES file (rather than individual files for each type of employee) with fields for JOB-TYPE, SALARY, COMMISSIONS, and RE- TZRENENT-PLANS, With this database organization, it would not he possible to detersine that (13) List all salesmen who have secrebaries along with their comsissions. ltheir= refers ~o meal=amen" and not "secretaries = in (13) on the basis of the structure of the data- bass. To the naive user, however, the seining of this sentence is perfectly clear. A person who couldn't determine the referent of "their = in (13) would not be perceived as having an adequate cos- sand of the English language and the same would be true for a computer system that did not understand the request. ~i~fall= a==g~il~Id wi~b ~bm dQ®zin:indag~ndln~ i~- In a knowledge-based systes such as PEARL, = natural language request is parsed into a concep- tual representation of the meaning of the request. The retrieval routine is then generated free this concepbual representation. As a result, the parser is independent of the logical structure of the database. That is, the same parser can be used for databases with different logical structures, but the same information content. Further, the same parser can be used whether the required information is located in = single file or in lultiple files. In a domaln-independent systes, the parser is entirely dependent on the structure of the database for domain-specific knowledge. As a result, one must restructure the parser for databases with i- dentical content but different logical structure. Sisilarly, the output of the parser lust be very dlfferent vhen the required information Is con- tained in mulSiple files rather than a single file. 
Because of their lack of conceptual knowledge regarding the database, domain-independent systems rely heavily on key words or phrases to indicate which database field iS being referred to. For example, (14) Vhat is Bill Smith's ~ob &male? High& be easily processed by simply retrieving the con&ants of a JOB-TITLE field. Different vlys of referring ~o job title can also be handled as syn- onyms. However, dosiin°independent systems get into deep trouble vhen the database field that needs to be accessed is not directly indicated by key words or phrases in the input request. For example, (15) Is John Jones the child of an alumnus? is easily processed if there exists a CHILD-OF-AN-ALUMNUS field, but the query (16) Is one of John Jones' paren&s an alumnus? contains no key word or phrase to indicate that the CHILD-OF-AN-ALURNUS field should be accessed, In a knowledge-based system, the retrieval routine is generated from a conceptual representation of the meaning of the user query and therefore key words or phrases arm not required. A related problem occurs with queries involving a~reption or quan- tity. For example, (17) How many employees are in the sales depart- ment? light require retrieving the value of a particular field (e.g. NUHBER-OF-EHPLOYEES), or it sight re- quire totalling the number of records in the EH- PLOYEE file that have the correct DEPARTNENT field value, or, if the departments are broken down into offices, it light require totalling the NUN- BER-OF-ENPLOYEES field for each office. In m do- main-independent system, the correct parse depends upon the structure of the database and is therefore difficult to handle in a general way. In a know- ledge-based system such as PEARL, the different database structures would simply require altering the mapping between the conceptual representaSion of the parse and the retrieval query. Finally, this reliance on database structure can lead to wrong answers. A classic example is Harris' (1979) 'snowmobile problem =. Yhen Harris' ROBOT system interfaces with a file containing in- formation about homeowner's insurance, the word 'snowmobile" is defined as any number • 0 in the 'snowmobile field" of an insurance policy record. This means that as far as ROBOT is concerned, the question 'How many snowmobiles are there? = is no different from "How many policies have snowmobile coverage?" However, the correct answers to the two questions will often be very different. If the first question is asked and the second question is answered, the result is an incorrect answer. If the first question cannot be answered due to the structure of the database, the system should inform the user the5 this is the case. ~oogluaioo=. I have argued above that conceptually-based domain-specific knowledge is absolutely essential for n|turll language database access systems. Systems that rely on dltabase structure for this domain-specific knowledge viii not achieve an ac- ceptable level of performance -- i.e. operate at the level of understanding of a programmer-techni- cian. Because of the requirement for delian-specific knowledge, conceptually-based systems are restric- ted t~o limited domains and are not readily portable ~o new content areas. However, eliminating the domain-speciflc conceptual knowledge is throwing &he baby out with the ba&h water. The conceptual- ly-based domain-specific knowledge is the key to robust understanding. 
The approach of the PEARL project with regard t~ the &ransportability problem is t~ try and I- dentify areas of discourse that are common t~ most domains and to build robust modules for natural language analysis within these domains. Examples of such domains are temporal reference, loci&ion reference, and report generation. These modules are knowledge-based and can be used by a wide va- riety of domains to help extract ~hm conceptual content of a requss5. REFERENCES Dyer, N. (1982). ~n:~9~h Und~£~aodiag~ ~ Cos- pu~nt HQdnl of In~ng£a~nd 8to,oaring fg£ Na~i- ~[X§ Cg~D£ObgU~igO. Yale University, Computer Science Dept., Research Report #219. Harris, t. R. (1979). Experience with ROBOT in 12 commercial natural language data base query ap- plications, g£~oeding= Of ~b| O~b [o~ncna~ioo- al Joins Cgnfntnnco on &£~ificial [n~olllgonco. Hendrix, G. G. (1976). LIFER: A natural language interface facility. SRZ Tech. Note 135. Dec. 1976. Lehnert, W. (1978). Ibo 8~o~o~ of Ggo~ioo 8O- sHO£iOg. Lawrence Erlbaum Associates, Hills- dale, New Jersey. Lehnert, ¥. and Shwartz, S. (1982). Nabural Language Data Base Access with Pearl. EzoCmod- logs of ~be Hin~b Io~ntna~ional Conference on Comp~aSioQal Linguistic=, Prague, Czechoslo- vakia. 5acerdoti, E. D. (1978). A LADOER user's guide. Technical Note 163. SRI Project 6891, Schank, R. C. and kbelson, R. (1977). ~£ig~. Elm0=, G~IIs add U0da£s~anding, Lawrence Erl- baum Associates, Hillsdale Ne~ Jersey, 1977. Shwartz, S. (1982). PEARL: 'k Natural Language Analysis System for Information Retrieval (sub- mitted to AAAI-82/applications division). Waltz, D. L., Finin. T., Green, F., Conrad, F., Goodman, B., Hadden, G. (1976). The planes system: natural language access to a lar~e data base. Coordinated Science Lab., Univ, of Il- linois, Urbane, Tech. Report T-34, (July 1976). 62 | 1982 | 11 |
ISSUES IN NATURAL LANGUAGE ACCESS TO DATABASES FROM A LOGIC PROGRAMMING PERSPECTIVE

David H D Warren
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025, USA

I INTRODUCTION

I shall discuss issues in natural language (NL) access to databases in the light of an experimental NL question-answering system, Chat, which I wrote with Fernando Pereira at Edinburgh University, and which is described more fully elsewhere [8] [6] [5]. Our approach was strongly influenced by the work of Alain Colmerauer [2] and Veronica Dahl [3] at Marseille University.

Chat processes a NL question in three main stages:

            translation        planning        execution
    English ----------> logic ----------> Prolog ----------> answer

corresponding roughly to: "What does the question mean?", "How shall I answer it?", "What is the answer?".

The meaning of a NL question, and the database of information about the application domain, are both represented as statements in an extension of a subset of first-order logic, which we call "definite closed world" (DCW) logic. This logic is a subset of first-order logic, in that it admits only "definite" statements; uncertain information ("Either this or that") is not allowed. DCW logic extends first-order logic, in that it provides constructions to support the "closed world" assumption, that everything not known to be true is false.

Why does Chat use this curious logic as a meaning representation language? The main reason is that it can be implemented very efficiently. In fact, DCW logic forms the basis of a general purpose programming language, Prolog [9] [1], due to Colmerauer, which has had a wide variety of applications. Prolog can be viewed either as an extension of pure Lisp, or as an extension of a relational database query language. Moreover, the efficiency of the DEC-10 Prolog implementation is comparable both with compiled Lisp [9] and with current relational database systems [6] (for databases within virtual memory).

Chat's second main stage, "planning", is responsible for transforming the logical form of the NL query into efficient Prolog [6]. This step is analogous to "query optimisation" in a relational database system. The resulting Prolog form is directly executed to yield the answer to the original question. On Chat's domain of world geography, most questions within the English subset are answered in well under one second, including queries which involve taking joins between relations having of the order of a thousand tuples.

A disadvantage of much current work on NL access to databases is that the work is restricted to providing access to databases, whereas users would appreciate NL interfaces to computer systems in general. Moreover, the attempt to provide a NL "front-end" to databases is surely putting the cart before the horse. What one should really do is to investigate what "back-end" is needed to support NL interfaces to computers, without being constrained by the limitations of current database management systems.

I would argue that the "logic programming" approach taken in Chat is the right way to avoid these drawbacks of current work in NL access to databases. Most work which attempts to deal precisely with the meaning of NL sentences uses some system of logic as an intermediate meaning representation language. Logic programming is concerned with turning such systems of logic into practical computational formalisms.
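As a toy illustration of the kind of evaluation DCW logic supports, the following Python sketch combines definite facts with negation as failure (the "not known to be true, hence false" step). It is purely illustrative: the facts and predicate names are invented, and this is in no way Chat's implementation.

    # Toy "definite closed world" evaluation (hypothetical code).
    facts = {("manages", "smith", "sales"),
             ("manages", "andrews", "accounts"),
             ("department", "sales"),
             ("department", "accounts")}

    def holds(goal):
        # \+ G succeeds exactly when G is not provable: the closed-world step.
        if goal[0] == "not":
            return not holds(goal[1])
        if goal[0] == "and":
            return all(holds(g) for g in goal[1:])
        return goal in facts

    # "Does smith manage every department?" rendered as
    #   \+ exists D (department(D) & \+ manages(smith, D))
    def manages_every(m):
        depts = {f for f in facts if f[0] == "department"}
        return not any(holds(("department", d)) and
                       not holds(("manages", m, d))
                       for (_, d) in depts)

    print(manages_every("smith"))   # False: smith does not manage accounts

The point of the sketch is only that both the data and the query live in the same definite-clause framework, which is what makes the logic directly executable.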
The outcome of this "top-down" approach, as realised in the language Prolog, has a great deal in common with the relational approach to databases, which can be seen as the result of a "bottom-up" effort to make database languages more like natural language. However Prolog is much more general than relational database formalisms, in that it permits data to be defined by general rules having the power of a fully general programming language. The logic programming approach therefore allows one to interface NL to general programs as well as to databases.

Current Prolog systems, because they were designed with programming not databases in mind, are not capable of accommodating really large databases. However there seems to be no technical obstacle to building a Prolog system that is fully comparable with current relational database management systems, while retaining Prolog's generality and efficiency as a programming language. Indeed, I expect such a system to be developed in the near future, especially now that Prolog has been chosen as the kernel language for Japan's "Fifth Generation" computer project [4].

II SPECIFIC ISSUES

A. Aggregate Functions and Quantity Questions

To cater for aggregate and quantity determiners, such as plural "the", "two", "how many", etc., DCW logic extends first-order logic by allowing predications of the form:

    setof(X,P,S)

to be read as "the set of Xs such that P is provable is S" [7]. An efficient implementation of "setof" is provided in DEC-10 Prolog and used in Chat. Sets are actually represented as ordered lists without duplicate elements. Something along the lines of "setof" seems very necessary, as a first step at least. The question of how to treat explicitly stored aggregate information, such as "number of employees" in a department, is a special case of the general issue of storing and accessing non-primitive information, to be discussed below in section D.

B. Time and Tense

The problem of providing a common framework for time instants and time intervals is not one that I have looked into very far, but it would seem to be primarily a database rather than a linguistic issue, and to highlight the limitations of traditional databases, where all facts have to be stored explicitly. Queries concerning time instants and intervals will generally need to be answered by calculation rather than by simple retrieval. A common framework for both calculation and retrieval is precisely what the logic programming approach provides. For example, the predication:

    sailed(kennedy,july82,D)

occurring in a query might invoke a Prolog procedure "sailed" to calculate the distance D travelled, rather than cause a simple data look-up.

C. Quantifying into Questions

Quantifying into questions is an issue which was an important concern in Chat, and one for which I feel we produced a reasonably adequate solution. The question "Who manages every department?" would be translated into the following logical form:

    answer(M) <= \+ exists(D, department(D) & \+ manages(M,D))

where "\+" is to be read as "it is not known that", i.e. the logical form reads "M is an answer if there is no known department that M does not manage". The question "Who manages each department?", on the other hand, would translate into:

    answer(D-M) <= department(D) & manages(M,D)

generating answers which would be pairs of the form:

    accounts - andrews ;  sales - smith ;  etc.
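Evaluated over a toy database, the two readings come apart. The following sketch is illustrative only (the relations are invented, and this is not Chat's code):

    # The two readings of "every"/"each" over a toy database (hypothetical).
    manages = {("andrews", "accounts"), ("smith", "sales")}
    departments = {"accounts", "sales"}
    people = {"andrews", "smith"}

    # "Who manages every department?":
    #   answer(M) <= \+ exists D (department(D) & \+ manages(M, D))
    every_answers = [m for m in people
                     if not any((m, d) not in manages for d in departments)]

    # "Who manages each department?":
    #   answer(D-M) <= department(D) & manages(M, D)
    each_answers = [(d, m) for d in departments
                    for m in people if (m, d) in manages]

    print(every_answers)          # [] -- nobody manages both departments
    print(sorted(each_answers))   # [('accounts', 'andrews'), ('sales', 'smith')]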
The two different logical forms result from the different treatments accorded to "each" and "every" by Chat's determiner scoping algorithm [8] [5].

D. Querying Semantically Complex Fields

My general feeling here is that one should not struggle too hard to bend one's NL interface to fit an existing database. Rather the database should be designed to meet the needs of NL access. If the database does not easily support the kind of NL queries the user wants to ask, it is probably not a well-designed database. In general it seems best to design a database so that only primitive facts are stored explicitly, others being derived by general rules, and also to avoid storing redundant information. However this general philosophy may not be practicable in all cases.

Suppose, indeed, that "childofalumnus" is stored as primitive information. Now the logical form for "Is John Jones a child of an alumnus?" would be:

    answer(yes) <= childof(X,johnjones) & alumnus(X)

What we seem to need to do is to recognise that in this particular case a simplification is possible using the following definition:

    childofalumnus(X) <-> exists(Y, childof(Y,X) & alumnus(Y))

giving the derived query:

    answer(yes) <= childofalumnus(johnjones)

However the logical form:

    answer(X) <= childof(X,johnjones) & alumnus(X)

corresponding to "Of which alumnus is John Jones a child?" would not be susceptible to simplification, and the answer to the query would have to be "Don't know".

E. Multi-File Queries

At the root of the difficulties raised here is the question of what to do when the concepts used in the NL query do not directly correspond to what is stored in the database. With the logic programming approach taken in Chat, there is a simple solution. The database is augmented with general rules which define the NL concepts in terms of the explicitly stored data. For example, the rule:

    lengthof(S,L) <= classof(S,C) & classlengthof(C,L).

says that the length of a ship is the length of that ship's class. These rules get invoked while a query is being executed, and may be considered to extend the database with "virtual files". Often a better approach would be to apply these rules to preprocess the query in advance of actual execution. In any event, there seems to be no need to treat joins as implicit, as systems such as Ladder have done. Joins, which are equivalent to conjunctions in a logical form, should always be expressed explicitly, either in the original query, or in other domain-dependent rules which help to support the NL interface.

III A FURTHER ISSUE - SEMANTICS OF PLURAL "THE"

A difficulty we experienced in developing Chat, which I would propose as one of the most pressing problems in NL access to databases, is to define an adequate theoretical and computational semantics for plural noun phrases, especially those with the definite article "the". It is a pressing problem because clearly even the most minimal subset of NL suitable for querying a database must include plural "the". The problem has two aspects: (1) to define a precise semantics that is strictly correct in all cases; (2) to implement this semantics in an efficient way, giving results comparable to what could be achieved if a formal database query language were used in place of NL.

As a first approximation, Chat treats plural definite noun phrases as introducing sets, formalised using the "setof" construct mentioned earlier. Thus the translation of "the European countries" would be S where:

    setof(C,european(C) & country(C),S).

The main drawback of this approach is that it leaves open the question of how predicates applied to sets relate to those same predicates applied to individuals. Thus the question "Do the European countries border the Atlantic?" gets as part of its translation:

    borders(S,atlantic)

where S is the set of European countries. Should this predication be considered true if all European countries border the Atlantic, or if just some of them do? Or does it mean something else, as in "Are the European countries allies?"?

At the moment, Chat makes the default assumption that, in the absence of other information, a predicate is "distributive", i.e. a predication over a set is true if and only if it is true of each element. So the question above is treated as meaning "Does every European country border the Atlantic?". And "Do the European countries trade with the Caribbean countries?" would be interpreted as "Does each European country trade with each Caribbean country?". Chat only makes this default assumption in the course of query execution, which may well be very inefficient. If the "setof" can effectively be dispensed with, producing a simpler logical form, one would like to do this at an earlier stage and take advantage of optimisations applicable to the simpler logical form.

A further complication is illustrated by a question such as "Who are the children of the employees?". A reasonable answer to this question would be a table of employees with their children, which is what Chat in fact produces. If one were to use the more simple-minded approximations discussed so far, the answer would be simply a set of children, which would be empty (!) if the "childof" predicate were treated as distributive. In general, therefore, Chat treats nested definite noun phrases as introducing "indexed sets", although the treatment is arguably somewhat ad hoc. A phrase like "the children of the employees" translates into S where:

    setof(E-CC,employee(E) & setof(C,childof(E,C),CC),S).

If the indexed set occurs, not in the context of a question, but as an argument to another predicate, there is the further complication of defining the semantics of predicates over indexed sets. Consider, for example, "Are the major cities of the Scandinavian countries linked by rail?".

In cases involving aggregate operators such as "total" and "average", an indexed set is clearly needed, and Chat handles these cases correctly. Consider, for example, "What is the average of the salaries of the part-time employees?". One cannot simply average over a set of salaries, since several employees may have the same salary; an indexed set ensures that each employee's salary is counted separately.

To summarise the overall problem, then, can one find a coherent semantics for plural "the" that is intuitively correct, and that is compatible with efficient database access?

REFERENCES

1. Clocksin W F and Mellish C S. Programming in Prolog. Springer-Verlag, 1981.
2. Colmerauer A. Un sous-ensemble interessant du francais. RAIRO 13, 4 (1979), pp. 309-336. [Presented as "An interesting natural language subset" at the Workshop on Logic and Databases, Toulouse, 1977].
3. Dahl V. Translating Spanish into logic through logic. AJCL 7, 3 (Sep 1981), pp. 149-164.
4. Fuchi K. Aiming for knowledge information processing systems. Intl. Conf. on Fifth Generation Computer Systems, Tokyo, Oct 1981, pp. 101-114.
5. Pereira F C N. Logic for natural language analysis. PhD thesis, University of Edinburgh, 1982.
6. Warren D H D. Efficient processing of interactive relational database queries expressed in logic. Seventh Conf. on Very Large Data Bases, Cannes, France, Sep 1981, pp. 272-281.
7. Warren D H D. Higher-order extensions to Prolog - are they needed? Tenth Machine Intelligence Workshop, Cleveland, Ohio, Nov 1981.
8. Warren D H D and Pereira F C N. An efficient easily adaptable system for interpreting natural language queries. Research Paper 156, Dept. of Artificial Intelligence, University of Edinburgh, Feb 1981. [Submitted to AJCL].
9. Warren D H D, Pereira L M and Pereira F C N. Prolog - the language and its implementation compared with Lisp. ACM Symposium on AI and Programming Languages, Rochester, New York, Aug 1977, pp. 109-115.
NATURAL LANGUAGE DATABASE UPDATES

Sharon C. Salveter
David Maier
Computer Science Department
SUNY Stony Brook
Stony Brook, NY 11794

ABSTRACT

Although a great deal of research effort has been expended in support of natural language (NL) database querying, little effort has gone to NL database update. One reason for this state of affairs is that in NL querying, one can tie nouns and stative verbs in the query to database objects (relation names, attributes and domain values). In many cases this correspondence seems sufficient to interpret NL queries. NL update seems to require database counterparts for active verbs, such as "hire," "schedule" and "enroll," rather than for stative entities. There seem to be no natural candidates to fill this role. We suggest a database counterpart for active verbs, which we call verbgraphs. The verbgraphs may be used to support NL update. A verbgraph is a structure for representing the various database changes that a given verb might describe. In addition to describing the variants of a verb, they may be used to disambiguate the update command. Other possible uses of verbgraphs include specification of defaults, prompting of the user to guide but not dictate user interaction, and enforcing a variety of types of database integrity constraints.

I. MOTIVATION AND PROBLEM STATEMENT

We want to support a natural language interface for all aspects of database manipulation. English and English-like query systems already exist, such as ROBOT [Ha77], TQA [Da78], LUNAR [Wo76] and those described by Kaplan [Ka79], Walker [Wa78] and Waltz [Wz75]. We propose to extend natural language interaction to include data modification (insert, delete, modify) rather than simply data extraction. The desirability and unavailability of natural language database modification has been noted by Wiederhold, et al. [Wi81]. Database systems currently do not contain structures for explicit modelling of real world changes.

A state of a database (DB) is meant to represent a state of a portion of the real world. We refer to the abstract description of the portion of the real world being modelled as the semantic data description (SDD). A SDD indicates a set of real world states (RWS) of interest; a DB definition gives a set of allowable database states (DBS). The correspondence between the SDD and the DB definition induces connections between DB states and real world states. The situation is diagrammed in Figure 1.

    [Figure 1: the semantic description relates real world states RWS1, RWS2,
     RWS3 to database states DBS1, DBS2, DBS3 via the database definition.]

(This research is partially supported by NSF grants IST-79-18264 and ENG-79-07794.)

Natural language (NL) querying of the DB requires that the correspondence between the SDD and the DB definition be explicitly stated. The query system must translate a question phrased in terms of the SDD into a question phrased in terms of a data retrieval command in the language of the DB system. The response to the command must be translated back into terms of the SDD, which yields information about the real world state. For NL database modification, this stative correspondence between DB states and real world states is not adequate. We want changes in the real world to be reflected in the DB. In Figure 2 we see that when some action in the real world causes a state change from RWS1 to RWS2, we must perform some modification to the DB to change its state from DBS1 to DBS2.
[Figure 2: an action in the real world takes RWS1 to RWS2, while a corresponding DML command sequence takes DBS1 to DBS2.]

We have a means to describe the action that changed the state of the real world: active verbs. We also have a means to describe a change in the DB state: data manipulation language (DML) command sequences. But given a real world action, how do we find a DML command sequence that will accomplish the corresponding change in the DB?

Before we explore ways to represent this active correspondence--the connection between real world actions and DB updates--let us examine how the stative correspondence is captured for use by a NL query system. We need to connect entities and relationships in the SDD with files, fields and field values in the DB. This stative correspondence between RWS and DBS is generally specified in a system file. For example, in Harris' ROBOT system, the semantic description is implicit, and it is assumed to be given in English. The entities and relationships in the description are roughly English nouns and stative verbs. The correspondence of the SDD to the DB is given by a lexicon that associates English words with files, fields and field values in the DB. This lexicon also gives possible referents for words and phrases such as "who," "where" and "how much."

Consider the following example. Suppose we have an office DB of employees and their scheduled meetings, reservations for meeting rooms and messages from one employee to another. We capture this information in the following four relations:

EMP(name, office, phone, supervisor)
APPOINTMENT(name, date, time, duration, who, topic, location)
MAILBOX(name, date, time, from, message)
ROOMRESERVE(room, date, time, duration, reserver)

with domains (permissible sets of values):

DOMAIN        ATTRIBUTES
personname    name, who, from, reserver, supervisor
roomnum       room, location, office
phonenum      phone
calendardate  date
clocktime     time
elapsedtime   duration
text          message, topic

Consider an analysis of the query "What is the name and phone # of the person who reserved room 85 for 2:45pm today?" Using the lexicon, we can tie words in the query to domains and relations:

name - personname
phone - phonenum
person - personname
who - personname
reserve - ROOMRESERVE relation
room - roomnum
2:45pm - clocktime
today - calendardate

We need to connect relations EMP and ROOMRESERVE. The possible joins are room-office and name-reserver. If we have stored the information that offices and reservable rooms never intersect, we can eliminate the first possibility. Thus we can arrive at the query

in EMP, ROOMRESERVE
retrieve name, phone
where name = reserver and room = 85 and
      time = 2:45pm and date = CURRENTDATE

Suppose we now want to make a change to the database: "Schedule Bob Marley for 2:15pm Friday." This request could mean schedule a meeting with an individual or schedule Bob Marley for a seminar. We want to connect "schedule" with the insertion of a tuple into either APPOINTMENT or ROOMRESERVE. Although we may have pointers from "schedule" to APPOINTMENT and ROOMRESERVE, we do not have adequate information for choosing the relation to update. Although files, fields, domains and values seem to be adequate for expressing the stative correspondence, we have no similar DB objects to which we may tie verbs that describe actions in the real world. The best we can do with files, fields and domains is to indicate what is to be modified; we cannot specify how to make the modification.
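To make the stative correspondence concrete, the following sketch shows how a lexicon of the kind just described might drive the classification step of query analysis. It is written in Python purely for illustration; the table contents and the function classify are our own hypothetical inventions, not part of ROBOT or of any other system discussed here.

    # A hypothetical stative-correspondence lexicon for the office DB:
    # each word is tied to a domain or to a relation of the database.
    LEXICON = {
        "name":    ("domain",   "personname"),
        "phone":   ("domain",   "phonenum"),
        "person":  ("domain",   "personname"),
        "who":     ("domain",   "personname"),
        "reserve": ("relation", "ROOMRESERVE"),
        "room":    ("domain",   "roomnum"),
        "2:45pm":  ("domain",   "clocktime"),
        "today":   ("domain",   "calendardate"),
    }

    def classify(query_words):
        """Tie each known word of a query to its domain or relation."""
        return {w: LEXICON[w] for w in query_words if w in LEXICON}

    ties = classify(["what", "name", "phone", "person", "who",
                     "reserve", "room", "85", "2:45pm", "today"])
    for word, (kind, target) in ties.items():
        print(f"{word:8} -> {kind}: {target}")

Such a table says which files and fields a query touches, but, as just observed, it says nothing about how an update verb such as "schedule" should change them.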
We need to connect the verbs "schedule," "hire" and "reserve" with some structures that dictate appropriate DML sequences to perform the corresponding updates to the DB. The best we have is a specific DML command sequence, a transaction, for each instance of "schedule" in the real world. No single transaction truly represents all the implications and variants of the "schedule" action. "Schedule" really corresponds to a set of similar transactions, or perhaps some parameterized version of a DB transaction.

[Figure 3: induced connections between real world and database; the action "schedule" corresponds to a parameterized transaction (PT), and instances S1, S2 of the action induce transactions T1, T2 on the database.]

The desired situation is shown in Figure 3. We have an active correspondence between "schedule" and a parameterized DB transaction PT. Different instances of the schedule action, S1 and S2, cause different changes in the real world state. From the active correspondence of "schedule" and PT, we want to produce the proper transaction, T1 or T2, to effect the correct change in the DB state. There is no existing candidate for the high-level specification language for verb descriptions. We must be able to readily express the correspondence between actions in the semantic world and verb descriptions in this high-level specification. We depend heavily on this correspondence to process natural language updates, just as the stative correspondence is used to process natural language queries. In the next section we examine these requirements in more detail and offer, by example, one candidate for the representation.

Another indication of the problem of active verbs in DBs shows up in looking at semantic data languages. Semantic data models are systems for constructing precise descriptions of portions of the real world - semantic data descriptions (SDDs) - using terms that come from the real world rather than a particular DB system. A SDD is a starting point for designing and comparing particular DB implementations. Some of the semantic models that have been proposed are the entity-relationship model[Ch76], SDM[HM81], RM/T[Co79], TAXIS[MB80] and Beta[Br78]. For some of these models, methodologies exist for translating to a DB specification in various DB models, as well as for expressing the stative correspondence between a SDD in the semantic model and a particular DB implementation. To express actions in these models, however, there are only terms that refer to DBs: insert, delete, modify, rather than schedule, cancel, postpone (the notable exception is Skuce[Sk80]).

While there have been a number of approaches to NL querying, there seems to be little work on NL update. Carbonell and Hayes[CH81] have looked at parsing a limited set of NL update commands, but they do not say much about generating the DB transactions for these commands. Kaplan and Davidson[KD81] have looked at the translation of NL updates to transactions, but the active verbs they deal with are synonyms for DB terms, essentially following the semantic data models above. This limitation is intentional, as the following excerpt shows:

    First, it is assumed that the underlying database update must be a series of transactions of the same type indicated in the request. That is, if the update requests a deletion, this can only be mapped into a series of deletions in the database.

While some active verbs, such as "schedule," may correspond to a single type of DB update, there are other verbs that will require multiple types of DB updates, such as "cancel," which might require sending a message as well as removing an appointment. Kaplan and Davidson are also trying to be domain independent, while we are trying to exploit domain-specific information.
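Before turning to the representation itself, the parameterized-transaction idea of Figure 3 can be made concrete. The sketch below is our own illustration (hypothetical Python, with DML commands rendered as strings); it deliberately hard-wires a single variant of "schedule," which is precisely the limitation that motivates the verbgraphs introduced next.

    # One hypothetical parameterized transaction PT for "schedule":
    # filling in its parameters yields a concrete DML transaction.
    def schedule_pt(name, who, date, time, duration, topic, location):
        return [
            f"insert APPOINTMENT({name}, {date}, {time}, {duration}, "
            f"{who}, {topic}, {location})",
            f"insert ROOMRESERVE({location}, {date}, {time}, {duration}, "
            f"{name})",
        ]

    # Two instances S1 and S2 of the schedule action yield two different
    # transactions T1 and T2 from the same parameterized transaction.
    t1 = schedule_pt("Capulet", "Montague", "4/13", "2:15pm", "1hr",
                     "budget", "room 85")
    t2 = schedule_pt("Capulet", "Marley", "4/16", "10:00am", "30min",
                     "review", "room 12")
    print("\n".join(t1))

Because this template always reserves a room and never notifies a supervisor, it captures only one variant of "schedule"; a structure that makes the choices among such variants explicit is what the next section proposes.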
II. NATURE OF THE REPRESENTATION

We propose a structure, a verbgraph, to represent action verbs. Verbgraphs are extensions of frame-like structures used to represent verb meaning in FDRAN[Sa78] and [Sa79]. One verbgraph is associated with each sense of a verb; that structure represents all variants. A real world change is described by a sentence that contains an active verb; the DB changes are accomplished by DML command sequences. A verbgraph is used to select DML sequences appropriate to process the variants of a verb sense. We also wish to capture the fact that one verb may be used as part of another: we may have a verb sense RESERVE-ROOM that may be used by itself or may be used as a subpart of the verb SCHEDULE-TALK.

Figure 4 is an example of a verbgraph. It models the "schedule appointment" sense of the verb "schedule." There are four basic variants we are attempting to capture; they are distinguished by whether or not the appointment is scheduled with someone in the company and whether or not a meeting room is to be reserved. There is also the possibility that the supervisor must be notified of the meeting.

[Figure 4: the verbgraph for the SCHEDULE-APPOINTMENT sense of "schedule." The figure is badly garbled in the source; legible fragments include information-node expressions such as APPT.who <- input from personname, APPT.duration <- input from elapsedtime, APPT.who in R1, APPT2.name <- APPT.who, RES.date <- APPT.date, a call to the INFORM verbgraph, and a footer transaction that inserts RES into ROOMRESERVE.]

The verbgraph is a directed acyclic graph (DAG) with 5 kinds of nodes: header, footer, information, AND (•) and OR (o). The header is the source of the graph; the footer is the sink. Every information node has one incoming and one outgoing edge. An AND or OR node can have any number of incoming or outgoing edges. A variant corresponds to a directed path in the graph. We define a path to be a connected subgraph such that 1) the header is included; 2) the footer is included; 3) if it contains an information node, it contains the incoming and outgoing edge; 4) if it contains an AND node, it contains all incoming and outgoing edges; and 5) if it contains an OR node, it contains exactly one incoming and one outgoing edge. We can think of tracing a path in the graph by starting at the header and following its outgoing edge. Whenever we encounter an information node, we go through it. Whenever we encounter an AND node, the path divides and follows all outgoing edges. We may only pass through an AND node if all its incoming edges have been followed. An OR node can be entered on only one edge, and we leave it by any one of its outgoing edges. An example of a complete path is one that consists of the header, footer, information nodes A, B, D, J, and connector nodes a, b, c, d, g, k, i, n. Although there is a direction to paths, we do not intend that the order of nodes on a path implies any order of processing the graph, except that the footer node is always last to be processed.
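The path conditions 1)-5) can be stated compactly in code. The following Python sketch is our own: it encodes a much-reduced hypothetical verbgraph (the node names and topology are illustrative and do not reproduce Figure 4) and checks a candidate node set against the five conditions, treating the subgraph induced by the node set as the candidate path.

    # Node kinds: header, footer, information (info), AND, OR.
    NODE_TYPE = {"header": "header", "footer": "footer",
                 "A": "info", "B": "info", "C": "info",
                 "D": "info", "E": "info",
                 "or1": "or", "or2": "or", "and1": "and", "and2": "and"}
    EDGES = [("header", "A"), ("A", "or1"),
             ("or1", "B"), ("or1", "C"),      # OR: choose one branch
             ("B", "or2"), ("C", "or2"),
             ("or2", "and1"),
             ("and1", "D"), ("and1", "E"),    # AND: take both branches
             ("D", "and2"), ("E", "and2"),
             ("and2", "footer")]

    def is_path(nodes):
        """Check conditions 1)-5) on the subgraph induced by `nodes`."""
        sub = set(nodes)
        if not {"header", "footer"} <= sub:                    # 1) and 2)
            return False
        for n in sub:
            ins  = [e for e in EDGES if e[1] == n and e[0] in sub]
            outs = [e for e in EDGES if e[0] == n and e[1] in sub]
            kind = NODE_TYPE[n]
            if kind in ("info", "and"):                        # 3) and 4)
                if len(ins)  != len([e for e in EDGES if e[1] == n]) or \
                   len(outs) != len([e for e in EDGES if e[0] == n]):
                    return False
            elif kind == "or":                                 # 5)
                if len(ins) != 1 or len(outs) != 1:
                    return False
        return True

    print(is_path(["header", "A", "or1", "B", "or2",
                   "and1", "D", "E", "and2", "footer"]))   # True
    print(is_path(["header", "A", "or1", "B", "C", "or2",
                   "and1", "D", "E", "and2", "footer"]))   # False

The second call fails because its OR nodes would be entered or left on more than one edge, violating condition 5; the AND pair and1/and2 forces both D and E onto every legal path.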
A variant of a verb sense is described by the set of all expressions in the information nodes contained in a path. Expressions in the information nodes can be of two basic types: assignment and restriction. The assignment type produces a value to be used in the update, either by input or computation; the key word input indicates the value comes from the user. Some examples of assignment are:

1) (node labelled A in Figure 4)
   APPT.who <- input from personname
   The user must provide a value from the domain personname.

2) (node labelled D in Figure 4)
   RES.date <- APPT.date
   The value for APPT.date is used as the value of RES.date.

An example of restriction is:

   (node B in Figure 4)
   APPT.who in R1
   where R1 = in EMP retrieve name

This statement restricts the value of APPT.who to be a company employee. Also in Figure 4, the symbols R1, R2, R3 and R4 stand for the retrievals

R1 = in EMP retrieve name
R2 = in EMP retrieve office where name = APPT.name
R3 = in EMP retrieve office where name = APPT.name or name = APPT.who
R4 = in EMP retrieve supervisor where name = APPT.name.

In node B, INFORM(APPT.who, APPT.name, 'meeting with me on %APPT.date at %APPT.time') stands for another verbgraph that represents sending a message by inserting a tuple in MAILBOX. We can treat the INFORM verbgraph as a procedure by specifying values for all the slots that must be filled from input. The input slots for INFORM are (name, from, message).

III. WHAT CAN WE DO WITH IT?

One use for verbgraphs is in support of NL-directed manipulation of the DB. In particular, they can aid in variant selection. We assume that the correct verb sense has already been selected; we discuss sense selection later. Our goal is to use information in the query and user responses to questions to identify a path in the verbgraph. Let us refer again to the verbgraph for SCHEDULE-APPOINTMENT shown in Figure 4. Suppose the user command is "Schedule an appointment with James Parker on April 13," where James Parker is a company employee. Interaction with the verbgraph proceeds as follows.

First, information is extracted from the command and classified by domain. For example, James Parker is in domain personname, which can only be used to instantiate APPT.name, APPT.who, APPT2.name and APPT2.who. However, since USER is a system variable, the only slots left are APPT.who and APPT2.name, which are necessarily the same. Thus we can instantiate APPT.who and APPT2.name with "James Parker." We classify "April 13" as a calendar date and instantiate APPT.date, APPT2.date and RES.date with it, because all these must be the same. No more useful information is in the query.

Second, we examine the graph to see if a unique path has been determined. In this case it has not. However, other possibilities are constrained because we know the path must go through node B. This is because the path must go through either node B or node C, and by analyzing the response to retrieval R1 we can determine it must be node B (i.e., James Parker is a company employee). Now we must determine the rest of the path. One determination yet to be made is whether or not node D is in the path. Because no room was mentioned in the query, we generate from the graph a question such as "Where will the appointment take place?" Suppose the answer is "my office." Presume we can translate "my office" into the scheduler's office number. This response has two effects. First, we know that no room has to be reserved, so node D is not in the path. Second, we can fill in APPT.where in node F. Finally, all that remains to be decided is if node H is on the path. A question like "Should we notify your supervisor?" is generated.
Suppose the answer is "no." Now the path is completely determined; it contains nodes A, B and F. Having determined a unique path in the graph, we may discover that not all the information has been filled in at every node on the path. We now ask the questions needed to complete these nodes, such as "What time?", "For how long?" and "What is the topic?". At this point we have a complete, unique path, so the appropriate calls to INFORM can be made and the parameterized transaction in the footer can be filled in.

Note that the above interaction was quite rigidly structured. In particular, after the user issues the original command, the verbgraph instantiation program chooses the order of the subsequent data entry. There is no provision for defaults or optional values. Even if optional values were allowed, the program would have to ask questions for them anyway, since the user has no opportunity to specify them subsequent to the original command. We want the interaction to be more user-directed. Our general principle is to allow the user to volunteer additional information during the course of the interaction, as long as the path has not been determined and values remain unspecified. We use the following interaction protocol. The user enters the initial command and hits return. The program will accept additional lines of input. However, if the user just hits return, and the program needs more information, the program will generate a question. The user answers the question, followed by a return. As before, additional information may be entered on subsequent lines. If the user hits return on an empty line, another question is generated, if necessary. (A sketch of this protocol appears below.)

Brodie[Br81] and Skuce[Sk80] both present systems for representing DB change. Skuce's goal is to provide an English-like syntax for DB procedure specification. Procedures have a rigid format and require all information to be entered at time of invocation in a specific order, as with any computer subprogram. Brodie is also attempting to specify DB procedures for DB change. He allows some information to be specified later, but the order is fixed. Neither allows the user to choose the order of entry, and neither accommodates variants that would require different sets of values to be specified. However, like our method, and unlike Kaplan and Davidson[KD81], they attempt to model DB changes that correspond to real world actions rather than just specifying English synonyms for single DB commands.

Certain constraints on updates are implicit in verbgraphs, such as APPT.where <- input from R3, which constrains the location of the meeting to be the office of one of the two employees. We also use verbgraphs to maintain database consistency. Integrity constraints take two forms: constraints on a single state and constraints on successive database states. The second kind is harder to enforce; few systems support constraints on successive states. Verbgraphs provide many opportunities for specifying various defaults. First, we can specify default values, which may depend on other values. Second, we can specify default paths. Verbgraphs are also a means for specifying non-DB operations. For example, if an appointment is made with someone outside the company, generate a confirmation letter to be sent.
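Returning to the interaction protocol described above, the following sketch (in illustrative Python of our own devising) shows its control structure: "slot=value" lines stand in for parsed English responses, volunteered lines are consumed as long as they arrive, and an empty line cedes the initiative to the program, which then asks about one unfilled slot.

    def interact(slots, questions, read_line):
        """slots: slot -> value or None; questions: slot -> question text;
        read_line(): returns the user's next input line."""
        while any(v is None for v in slots.values()):
            line = read_line().strip()
            if line:                            # user volunteers information
                slot, _, value = line.partition("=")
                if slot.strip() in slots and value.strip():
                    slots[slot.strip()] = value.strip()
            else:                               # empty line: program asks
                missing = next(s for s, v in slots.items() if v is None)
                print(questions[missing])
                slots[missing] = read_line().strip()
        return slots

    # Scripted session for "Schedule an appointment with James Parker
    # on April 13": the user lets the program ask for the time, then
    # volunteers the duration, then is asked for the topic.
    script = iter(["", "2:30pm", "duration=1hr", "", "budget review"])
    print(interact(
        {"who": "James Parker", "date": "April 13",
         "time": None, "duration": None, "topic": None},
        {"time": "What time?", "duration": "For how long?",
         "topic": "What is the topic?"},
        lambda: next(script)))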
All of the above discussion has assumed we are selecting a variant where the sense has already been determined. General sense selection, being equivalent to the frame selection problem in Artificial Intelligence[CW76], is very difficult. We do feel that verbgraphs will aid in sense selection, but they will not be as efficacious there as for variant selection. In such a situation, perhaps the English parser can help disambiguate, or we may want to ask an appropriate question to select the correct sense, or, as a last resort, provide menu selection.

IV. AN ALTERNATIVE TO VERBGRAPHS

We are currently considering hierarchically structured transactions, as used in the TAXIS semantic model[MB80], as an alternative to verbgraphs. Verbgraphs can be ambiguous, and do not lend themselves to top-down design. Hierarchical transactions would seem to overcome both problems. However, hierarchical transactions in TAXIS are not quite as versatile as verbgraphs in representing variants. The hierarchy is induced by hierarchies on the entity classes involved. Variants based on the relationship among particular entities, as recorded in the database, cannot be represented. For example, in the SCHEDULE-APPOINTMENT action, we may want to require that if a supervisor schedules a meeting with an employee not under his supervision, a message must be sent to that employee's supervisor. This variant cannot be distinguished by classifying one entity as a supervisor and the other as an employee, because the variant does not apply when the supervisor is scheduling a meeting with his own employee. Also, all variants in a TAXIS transaction hierarchy must involve the same entity classes, where we may want to involve some classes only in certain variants. For example, a variant of SCHEDULE-APPOINTMENT may require that a secretary be present to take notes, introducing an entity into that variant that is not present elsewhere.

We are currently trying to extend the TAXIS model so it can represent such variants. Our extensions include introducing guards to distinguish specializations and adding optional actions and entities to transactions. A guard is a boolean expression involving the entities and the database that, when satisfied, indicates that the associated specialization applies. For example, the guard

    scheduler in class(supervisor) and
    scheduler ≠ supervisor-of(schedulee)

would distinguish the variant described above, where an employee's supervisor must be notified of the meeting. The discrimination mechanism in TAXIS is a limited form of guards that only allows testing for entities in classes.

V. REFERENCES

[Br78] Brodie, M.L., Specification and verification of data base semantic integrity. CSRG Report 91, Univ. of Toronto, April 1978.

[Br81] Brodie, M.L., On modelling behavioral semantics of databases. VLDB 7, Cannes, France, Sept. 1981.

[CH81] Carbonell, J. and Hayes, P., Multi-strategy construction-specification parsing for flexible database query and update. CMU Internal Report, July 1981.

[CW76] Charniak, E. and Wilks, Y., Computational Semantics. North Holland, 1976.

[Ch76] Chen, P.P.-S., The entity-relationship model: toward a unified view of data. ACM TODS 1:1, March 1976, pp. 9-36.

[Co79] Codd, E.F., Extending the database relational model to capture more meaning. ACM TODS 4:4, December 1979, pp. 397-434.

[Da78] Damereau, F.J., The derivation of answers from logical forms in a question answering system. American Journal of Computational Linguistics, Microfiche 75, 1978, pp. 3-42.

[HM81] Hammer, M. and McLeod, D., Database description with SDM: a semantic database model. ACM TODS 6:3, Sept. 1981, pp. 351-386.

[Ha77] Harris, L.R., Using the database itself as a semantic component to aid the parsing of natural language database queries.
Dartmouth College Mathematics Dept. TR 77-2, 1977.

[Ka79] Kaplan, S.J., Cooperative responses from a natural language data base query system. Stanford Univ. Heuristic Programming Project paper HPP-79-19.

[KD81] Kaplan, S.J. and Davidson, J., Interpreting natural language updates. Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, June 1981.

[MB80] Mylopoulos, J., Bernstein, P.A. and Wong, H.K.T., A language facility for designing database-intensive applications. ACM TODS 5:2, June 1980, pp. 397-434.

[Sa78] Salveter, S.C., Inferring conceptual structures from pictorial input data. University of Wisconsin, Computer Science Dept., TR 328, 1978.

[Sa79] Salveter, S.C., Inferring conceptual graphs. Cognitive Science 3, pp. 141-166.

[Sk80] Skuce, D.R., Bridging the gap between natural and computer language. Proc. of Int'l Congress on Applied Systems and Cybernetics, Acapulco, December 1980.

[Wa78] Walker, D.E., Understanding Spoken Language. American Elsevier, 1978.

[Wi81] Wiederhold, G., Kaplan, S.J. and Sagalowicz, D., Research in knowledge base management systems. SIGMOD Record 7:3, April 1981, pp. 26-54.

[Wo76] Woods, W., et al., Speech Understanding Systems: Final Technical Progress Report. BBN No. 3438, Cambridge, MA, 1976.

[Wz75] Waltz, D., Natural language access to a large database: an engineering approach. Proc. of the Fourth Int'l Joint Conf. on Artificial Intelligence, 1975.
PROCESSING ENGLISH WITH A GENERALIZED PHRASE STRUCTURE GRAMMAR

Jean Mark Gawron, Jonathan King, John Lamping, Egon Loebner, E. Anne Paulson, Geoffrey K. Pullum, Ivan A. Sag, and Thomas Wasow

Computer Research Center
Hewlett Packard Company
1501 Page Mill Road
Palo Alto, CA 94304

ABSTRACT

This paper describes a natural language processing system implemented at Hewlett-Packard's Computer Research Center. The system's main components are: a Generalized Phrase Structure Grammar (GPSG); a top-down parser; a logic transducer that outputs a first-order logical representation; and a "disambiguator" that uses sortal information to convert "normal-form" first-order logical expressions into the query language for HIRE, a relational database hosted in the SPHERE system. We argue that theoretical developments in GPSG syntax and in Montague semantics have specific advantages to bring to this domain of computational linguistics. The syntax and semantics of the system are totally domain-independent, and thus, in principle, highly portable. We discuss the prospects for extending domain-independence to the lexical semantics as well, and thus to the logical semantic representations.

1. INTRODUCTION

This paper is an interim progress report on linguistic research carried out at Hewlett-Packard Laboratories since the summer of 1981. The research had three goals: (1) demonstrating the computational tractability of Generalized Phrase Structure Grammar (GPSG), (2) implementing a GPSG system covering a large fragment of English, and (3) establishing the feasibility of using GPSG for interactions with an inferencing knowledge base. Section 2 describes the general architecture of the system. Section 3 discusses the grammar and the lexicon. A brief discussion of the parsing technique used is found in Section 4. Section 5 discusses the semantics of the system, and Section 6 presents a detailed example of a parse tree complete with semantics. Some typical examples that the system can handle are given in the Appendix.

The system is based on recent developments in syntax and semantics, reflecting a modular view in which grammatical structure and abstract logical structure have independent status. The understanding of a sentence occurs in a number of stages, distinct from each other and governed by different principles of organization. We are opposed to the idea that language understanding can be achieved without detailed syntactic analysis. There is, of course, a massive pragmatic component to human linguistic interaction. But we hold that pragmatic inference makes use of a logically prior grammatical and semantic analysis. This can be fruitfully modeled and exploited even in the complete absence of any modeling of pragmatic inferencing capability. However, this does not entail an incompatibility between our work and research on modeling discourse organization and conversational interaction directly. Ultimately, a successful language understanding system will require both kinds of research, combining the advantages of precise, grammar-driven analysis of utterance structure and pragmatic inferencing based on discourse structures and knowledge of the world. We stress, however, that our concerns at this stage do not extend beyond the specification of a system that can efficiently extract literal meaning from isolated sentences of arbitrarily complex grammatical structure. Future systems will exploit the literal meaning thus extracted in more ambitious applications that involve pragmatic reasoning and discourse manipulation.
The system embodies two features that simultaneously promote extensibility, facilitate modification, and increase efficiency. The first is that its grammar is context-free in the informal sense sometimes (rather misleadingly) used in discussions of the autonomy of grammar and pragmatics: the syntactic rules and the semantic translation rules are independent of the specific application domain. Our rules are not devised ad hoc with a particular application or type of interaction in mind. Instead, they are motivated by recent theoretical developments in natural language syntax, and evaluated by the usual linguistic canons of simplicity and generality. No changes in the knowledge base or other exigencies deriving from a particular context of application can introduce a problem for the grammar (as distinct, of course, from the lexicon). The second relevant feature is that the grammar in the system is context-free in the sense of formal language theory. This makes the extensive mathematical literature on context-free phrase structure grammars (CF-PSG's) directly relevant to the enterprise, and permits utilization of all the well-known techniques for the computational implementation of context-free grammars.

It might seem anachronistic to base a language understanding system on context-free parsing. As Pratt (1975, 423) observes: "It is fashionable these days to want to avoid all reference to context-free grammars beyond warning students that they are unfit for computer consumption as far as computational linguistics is concerned." Moreover, widely accepted arguments have been given in the linguistics literature to the effect that some human languages are not even weakly context-free and thus cannot possibly be described by a CF-PSG. However, Gazdar and Pullum (1982) answer all of these arguments, showing that they are either formally invalid or empirically unsupported or both. It seems appropriate, therefore, to take a renewed interest in the possibility of CF-PSG description of human languages, both in computational linguistics and in linguistic research generally.

2. COMPONENTS OF THE SYSTEM

The linguistic basis of the GPSG linguistic system resides in the work reported in Gazdar (1981, 1982) and Gazdar, Pullum, and Sag (1981).[1] These papers argue on empirical and theoretical grounds that context-freeness is a desirable constraint on grammars. It clearly would not be so desirable, however, if (1) it led to lost generalizations or (2) it resulted in an unmanageable number of rules in the grammar. Gazdar (1982) proposes a way of simultaneously avoiding these two problems. Linguistic generalizations can be captured in a context-free grammar with a metagrammar, i.e. a higher-level grammar that generates the actual grammar as its language. The metagrammar has two kinds of statements:

(1) Rule schemata. These are basically like ordinary rules, except that they contain variables ranging over categories and features.

(2) Metarules. These are implicational statements, written in the form α ===> β, which capture relations between rules. A metarule α ===> β is interpreted as saying, "for every rule that is an instantiation of the schema α, there is a corresponding rule of form β." Here β will be φ(α), where φ is some mapping specified partly by the general theory of grammar and partly in the metarule formulation. For instance, it is taken to be part of the theory of grammar that φ preserves unchanged the subcategorization (rule name) features of rules (cf. below).

[1] See also Gazdar, Pullum, Sag, and Wasow (1982) for some further discussion and comparison with other work in the linguistic literature.
The GPSG system also assumes the Rule-to-Rule Hypothesis, first advanced by Richard Montague, which requires that each syntactic rule be associated with a single semantic translation rule. The syntax-semantics match is realized as follows: each rule is a triple consisting of a rule name, a syntactic statement (formally a local condition on node admissibility), and a semantic translation, specifying how the higher-order logic representations of the daughter nodes combine to yield the correct translation for the mother.[2]

[2] There is a theoretical issue here about whether semantic translation rules need to be stipulated for each syntactic rule or whether there is a general way of predicting their form. See Klein and Sag (1981) for an attempt to develop the latter view, which is not at present implemented in our system.

The present GPSG system has five components:

1. Grammar
   a. Lexicon
   b. Rules and Metarules
2. Parser and Grammar Compiler
3. Semantics Handler
4. Disambiguator
5. HIRE database

3. GRAMMAR AND LEXICON

The grammar that has been implemented thus far is only a subset of a much larger GPSG grammar that we have defined on paper. It nevertheless describes a broad sampling of the basic constructions of English, including a variety of prepositional phrase constructions, noun-noun compounds, the auxiliary system, genitives, questions and relative clauses, passives, and existential sentences.

Each entry in the lexicon contains two kinds of information about a lexical item, syntactic and semantic. The syntactic part of an entry consists of a syntactic feature specification; this includes, inter alia, information about any irregular morphology the item may have, and what is known in the linguistic literature as strict subcategorization information. In our terms the latter is information linking lexical items of a particular category to specific environments in which that category is introduced by phrase structure rules. Presence in the lexical entry for an item I of the feature R (where R is the name of a rule) indicates that I may appear in structures admitted by R, and absence indicates that it may not. The semantic information in a lexical entry is sometimes simple, directly linking a lexical item with some HIRE predicate or relation. With verbs or prepositions, there is also a specification of what case roles to associate with particular arguments (cf. below for discussion of case roles). Expressions that make a complex logical contribution to the sentence in which they appear will in general have complicated translations. Thus every has the translation:

(LAMBDA P (LAMBDA Q (FORALL X ((P X) --> (Q X)))))

This indicates that it denotes a function which takes as argument a set P, and returns the set of properties that are true of all members of that set (cf. below for slightly more detailed discussion). A typical rule looks like this:

<VP109: V! -> V N!! N!!2: ((V N!!2) N!!)>

The exclamation marks here are our notation for the bars in an X-bar category system. (See Jackendoff (1977) for a theory of this type--though one which differs on points of detail from ours.) The rule has the form <a: b: c>. Here a is the name 'VP109'; b is a condition that will admit a node labeled 'V!' if it has three daughter nodes labeled respectively 'V' (verb), 'N!!' (noun phrase at the second bar level), and 'N!!'
(the numeral 2 being merely an index to permit reference to a specific symbol in the semantics, the metarules, and the rule compiler, and not a part of the category label); and c is a semantic translation rule stating that the V constituent translates as a function expression taking as its argument the translation of the second N!!, the result being a function expression to be applied to the translation of the first N!!. By a general convention in the theory of grammar, the rule name is one of the feature values marked on the lexical head of any rule that introduces a lexical category (as this one introduces V). Only verbs marked with that feature value satisfy this rule. For example, if we include in the lexicon the word give and assign to it the feature VP109, then this rule would generate the verb phrase gave Anne a job.

A typical metarule is the passive metarule, which looks like this (ignoring semantics):

<PAS: <V! -> V N!! W> => <V! -> V[PAS] W>>

W is a string variable ranging over zero or more category symbols. The metarule has the form <N: <A> => <B>>, where N is a name and <A> and <B> are schemata that have rules as their instantiations when appropriate substitutions are made for the free variables. This metarule says that for every rule that expands a verb phrase as verb followed by noun phrase followed by anything else (including nothing else), there is another rule that expands verb phrase as verb with passive morphology followed by whatever followed the noun phrase in the given rule. The metarule PAS would apply to grammar rule VP109 given above, yielding the rule:

<VP109: V! -> V[PAS] N!!>

As we noted above, the rule number feature is preserved here, so we get Anne was given a job, where the passive verb phrase is given a job, but not *Anne was hired a job.[3] Passive sentences are thus analyzed directly, and not reduced to the form of active sentences in the course of being analyzed, in the way that is familiar from work on transformational grammars and on ATN's. However, this does not mean that no relation between passives and their active counterparts is expressed in the system, because the rules for analyzing passives are in a sense derivatively defined on the basis of rules for analyzing actives.

[3] We regard was given a job not as a passive verb phrase itself but as a verb phrase containing the verb be plus a passive verb phrase containing given and a job.

More difficult than treating passives and the like, and often cited as literally impossible within a context-free grammar,[4] is treating constructions like questions and relative clauses. The apparent difficulty resides in the fact that in a question like Which employee has Personnel reported that Anne thinks has performed outstandingly?, the portion beginning with the third word must constitute a string analyzable as a sentence except that at some point it must lack a third person singular noun phrase in a position where such a noun phrase could otherwise have occurred. If it lacks no noun phrase, we get ungrammatical strings of the type *Which employee has Personnel reported that Anne thinks Montague has performed outstandingly?. If it lacks a noun phrase at a position where the verb agreement indicates something other than a singular one is required, we get ungrammaticalities like *Which employee has Personnel reported that Anne thinks have performed outstandingly?. The problem is thus one of guaranteeing a grammatical dependency across a context that may be arbitrarily wide, while keeping the grammar context-free. The technique used was introduced into the linguistic literature by Gazdar (1981).

[4] See Pullum and Gazdar (1982) for references.
It involves an augmentation of the nonterminal vocabulary of the grammar that permits constituents with "gaps" to be treated as not belonging to the same category as similar constituents without gaps. This would be an unwelcome and inelegant enlargement of the grammar if it had to be done by means of case-by-case stipulation, but again the use of a metagrammar avoids this. Gazdar (1981) proposes a new set of syntactic categories of the form α/β, where α and β are categories from the basic nonterminal vocabulary of the grammar. These are called slash categories. A slash category α/β may be thought of as representing a constituent of category α with a missing internal occurrence of β. We employ a method of introducing slash categories that was suggested by Sag (1982): a metarule stating that for every rule introducing some β under α there is a parallel rule introducing β/γ under α/γ. In other words, any constituent can have a gap of type γ if one of its daughter constituents does too. Wherever this would lead to a daughter constituent with the label γ/γ in some rule, another metarule allows a parallel rule without the γ/γ, and therefore defines rules that allow for actual gaps--i.e., missing constituents. In this way, complete sets of rules for describing the unbounded dependencies found in interrogative and relative clauses can readily be written. Even long-distance agreement facts can be (and are) captured, since the morphosyntactic features relevant to a specific case of agreement are present in the feature composition of any given γ.

4. PARSING

The system is initialized by expanding out the grammar. That is, the metarules are applied to the rules to produce the full rule set, which is then compiled and used by the parser. Metarules are not consulted during the process of parsing. One might well wonder about the possible benefits of the other alternative: a parser that made the metarule-derived rules to order each time they were needed, instead of consulting a precompiled list. This possibility has been explored by Kay (1982). Kay draws an analogy between metarules and phonological rules, modeling both by means of finite state transducers. We believe that this line is worth pursuing; however, the GPSG system currently operates off a precompiled set of rules. Application of ten metarules to forty basic rules yielded 283 grammar rules in the 1/1/82 version of the GPSG system. Since then the grammar has been expanded somewhat, though the current version is still undergoing some debugging, and the number of rules is unstable. The size of the grammar-plus-metarules system grows by a factor of five or six through the rule compilation. The great practical advantage of using a metarule-induced grammar is, therefore, that the work of designing and revising the system of linguistic rules can proceed on a body of statements that is under twenty percent of the size it would be if it were formulated as a simple list of context-free rules.

The system uses a standard type of top-down parser with no lookahead, augmented slightly to prevent it from looking for a given constituent starting in a given spot more than once. It produces, in parallel, all legal parse trees for a sentence, with semantic translations associated with each node.
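To illustrate the compile-time expansion, here is a small Python sketch of our own (the rule encoding is simplified, semantic translations are omitted, and the real compiler's regime for applying its ten metarules is of course more elaborate). It applies the passive metarule PAS of section 3 to a toy rule list, preserving the rule-name feature as the theory requires.

    # A rule is (name, lhs, rhs). Numeric suffixes such as "N!!2" are
    # just indices for cross-reference, not part of the category label.
    RULES = [("VP109", "V!", ["V", "N!!", "N!!2"]),
             ("VP110", "V!", ["V", "N!!"])]

    def pas(rule):
        """PAS: <V! -> V N!! W> => <V! -> V[PAS] W>, name preserved."""
        name, lhs, rhs = rule
        if lhs == "V!" and len(rhs) >= 2 and rhs[0] == "V" \
           and rhs[1].startswith("N!!"):
            return (name, lhs, ["V[PAS]"] + rhs[2:])
        return None

    def expand(rules, metarules):
        """Expand out the grammar: the parser consults only the result."""
        derived = [m(r) for m in metarules for r in rules]
        return rules + [r for r in derived if r is not None]

    for name, lhs, rhs in expand(RULES, [pas]):
        print(f"<{name}: {lhs} -> {' '.join(rhs)}>")

The derived rule <VP109: V! -> V[PAS] N!!2> is the passive rule of section 3, and because the name VP109 survives, only verbs lexically marked for VP109 (such as give) can head it.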
5. SEMANTICS

The semantics handler uses the translation rule associated with a node to construct its semantics from the semantics of its daughters. This construction makes crucial use of a procedure that we call Cooper storage (after Robin Cooper; see below). In the spirit of current research in formal semantics, each syntactic constituent is associated directly with a single logic expression (modulo Cooper storage), rather than any program or procedure for producing such an expression. Our semantic analysis thus embraces the principle of "surface compositionality." The semantic representations derived at each node are referred to as the Logical Representation (LR).

The disambiguator provides the crucial transition from LR to HIRE queries; the disambiguator uses information about the sort, or domain of definition, of various terms in the logical representation. One of the most important functions of the disambiguator is to eliminate parses that do not make sense in the conceptual scheme of HIRE.

HIRE is a relational database with a certain amount of inferencing capability. It is implemented in SPHERE, a database system which is a descendant of FOL (described in Weyhrauch (1980)). Many of the relation-names output by the disambiguator are derived relations defined by axioms in SPHERE. The SPHERE environment was important for this application, since it was essential to have something that could process first-order logical output, and SPHERE does just that. A noticeable recent trend in database theory has been a move toward an interdisciplinary comingling of mathematical logic and relational database technology (see especially Gallaire and Minker (1978) and Gallaire, Minker and Nicolas (1981)). We regard it as an important fact about the GPSG system that it links computational linguistics to first-order logical representation just as the work referred to above has linked first-order logic to relational database theory. We believe that SPHERE offers promising prospects for a knowledge representation system that is principled and general in the way that we have tried to exemplify in our syntactic and semantic rule system. Filman, Lamping and Montalvo (1982) present details of some capabilities of SPHERE that we have not as yet exploited in our work, involving the use of multiple contexts to represent viewpoints, beliefs, and modalities, which are generally regarded as insuperable stumbling-blocks to first-order logic approaches.

Thus far the linguistic work we have described has been in keeping with GPSG as presented in the papers cited above. However, two semantic innovations have been introduced to facilitate the disambiguator's translation from LR to a HIRE query. As a result, the linguistic system's version of LR has two new properties:

(1) The intensional logic of the published work was set aside and LR was designed to be an extensional first-order language. Although constituent translations built up on the way to a root node may be second-order, the system maintains first-order reducibility. This reducibility is illustrated by the following analysis of noun phrases as second-order properties (essentially the analysis of Montague (1970)). For example, the proper name Egon and the quantified noun phrase every applicant are both translated as sets of properties:

Egon = LAMBDA P (P EGON)
Every applicant = LAMBDA P (FORALL X ((APPLICANT X) --> (P X)))

Egon is translated as the set of properties true of Egon, and every applicant as the set of properties true of all applicants.
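The claim that these second-order NP translations still reduce to first order can be checked in a toy extensional model. In the following Python rendering (ours, over a three-entity universe), a property is a predicate over the universe and an NP translation is a function from properties to truth values, exactly as in the two translations above.

    # Toy model: entities are strings; a property is a predicate.
    UNIVERSE = ["EGON", "ANNE", "MONTAGUE"]
    APPLICANT = lambda x: x in ("ANNE", "MONTAGUE")
    COMPETENT = lambda x: True    # assume everyone here is competent

    egon = lambda P: P("EGON")                # LAMBDA P (P EGON)
    every_applicant = lambda P: all(          # LAMBDA P (FORALL X ...)
        (not APPLICANT(x)) or P(x) for x in UNIVERSE)

    print(egon(COMPETENT), every_applicant(COMPETENT))   # True True

    # Coordinated NPs of heterogeneous semantics stay uniform, since
    # both conjuncts are sets of properties ("Egon and every applicant"):
    egon_and_every = lambda P: egon(P) and every_applicant(P)
    print(egon_and_every(APPLICANT))                     # False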
Since basic predicates in the logic are first-order, neither of the above expressions can be made the direct argument of any basic predicate; instead the argument is some unique entity-level variable which is later bound to the quantifier-expression by quantifying in. This technique is essentially the storage device proposed in Cooper (1975). One advantage of this method of "deferring" the introduction into the interpretation process of phrases with quantifier meanings is that it allows for a natural, nonsyntactic treatment of scope ambiguities. Another is that with a logic limited to first-order predicates, there is still a natural treatment for coordinated noun phrases of apparently heterogeneous semantics, such as Egon and every applicant.

(2) HIRE represents events as objects. All objects in the knowledge base, including events, belong to various sorts. For our purposes, a sort is a set. HIRE relations are declared as properties of entities within particular sorts. For example, there is an employment sort, consisting of various particular employment events, and an employment.employee relation, as well as employment.organization and employment.manager relations. More conventional relations, like employee.manager, are defined as joins of the basic event relations. This allows the semantics to make some fairly obvious connections between verbs and events (between, say, the verb work and events of employment), and to represent different relations between a verb and its arguments as different first-order relations between an event and its participants.

Although the lexical treatment sketched here is clearly domain dependent (the English verb work doesn't necessarily involve employment events), it was chosen primarily to simplify the ontology of a first implementation. As an alternative, one might consider associating work with events of a sort labor, one of whose subsorts was an employment event, defining employments as those labors associated with an organization. Whichever choice one makes about the basic event-types of verbs, the mapping from verbs to HIRE relations cannot be direct. Consider a sentence like Anne works for Egon. The HIRE representation will predicate the employment.manager relation of a particular employment event and a particular manager, and the employment.employee relation of that same event and Anne. Yet where Egon in this example is picked out with the employment.manager relation, the sentence Anne works for HP will need to pick out HP with the employment.organization relation. In order to accommodate this many-to-many mapping between a verb and particular relations in a knowledge base, the lexicon stipulates special relations that link a verb to its eventual arguments. Following Fillmore (1968), these mediating relations are called case roles. The disambiguator narrows the case roles down to specific knowledge base relations.

To take a simple example, Anne works for HP has a logical representation reducible to:

(EXISTS SIGMA (AND (EMPLOYMENT SIGMA) (AG SIGMA ANNE) (LOC SIGMA HP)))

Here SIGMA is a variable over situations or event instantiations.[5] The formula may be read, "There is an employment-situation whose Agent is Anne and whose Location is HP." The lexical entry for work supplies the information that its subject is an Agent and its complement a Location. The disambiguator now needs to further specify the case roles as HIRE relations.

[5] Our work in this domain has been influenced by the recent papers of Barwise and Perry on "situation semantics"; see e.g. Barwise and Perry (1981).
It does this by treating each atomic formula in the expression locally, using the fact that Anne is a person in order to interpret AG, and the fact that HP is an organization in order to interpret LOC. In this case, it interprets the AG role as employment.employee and the LOC role as employment.organization.

The advantages of using the roles in Logical Representation, rather than going directly to predicates in a knowledge base, include (1) the ability to interpret at least some prepositional phrases, those known as adjuncts, without subcategorizing verbs specially for them, since the case role may be supplied either by a verb or a preposition; and (2) the option of interpreting 'vague' verbs such as have and give using case roles without event types. These verbs, then, become "purely" relational. For example, the representation of Egon gave Montague a job would be:

(EXISTS SIGMA (AND ((SO EGON) SIGMA) ((POS MONTAGUE) SIGMA) (EMPLOYMENT SIGMA)))

Here SO 'source' will pick out the same employment.manager relation it did in the example above, and POS 'possession' is the same relation as that associated with have. Here the situation-type is supplied by the translation of the noun job. It is important to realize that this representation is derived without giving the noun phrase a job any special treatment. The lexical entry for give contains the information that the subject is the source of the direct object, and the direct object the possession of the indirect object. If there were lamps in our knowledge base, the derived representation of Egon gave Montague a lamp would simply be the above formula with the predicate lamp replacing employment. The possession relation would hold between Montague and some lamp, and the disambiguator would retrieve whatever knowledge-base relation kept track of such matters.

Two active research goals of the current project are to give all lexical entries domain independent representations, and to make all knowledge base-specific predicates and relations the exclusive province of the disambiguator. One important means to that end is case roles, which allow us a level of abstract, purely "linguistic" relations to mediate between logical representations and HIRE queries. Another is the use of general event types such as labor to replace event-types specific to HIRE, such as employments. The case roles maintain a separation between the domain representation language and LR. Insofar as that separation is achieved, absolute portability of the system, up to and including the lexicon, is an attainable goal. Absolute portability obviously has immediate practical benefits for any system that expects to handle a large fragment of English, since the effort in moving from one application to another will be limited to "tuning" the disambiguator to a new ontology and adding "specialized" vocabulary. The actual rules governing the production of first-order logical representations make no reference to the facts of HIRE. The question remains of just how portable the current lexicon is; the answer is that much of it is already domain independent. Quantifiers like every (as we saw in the discussion of NP semantics) are expressed as logical constants; verbs like give are expressed entirely in terms of the case relations that hold among their arguments. Verbs like work can be abstracted away from the domain by a simple extension.
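The narrowing step described at the beginning of this section can be pictured as a table lookup keyed by case role and argument sort. The following Python table is our own invention for illustration; it merely echoes the AG/LOC/SO examples above and is not HIRE's actual mechanism.

    # Hypothetical narrowing of case roles to knowledge-base relations.
    SORT = {"ANNE": "person", "EGON": "person", "HP": "organization"}
    ROLE_TO_RELATION = {
        ("AG",  "person"):       "employment.employee",
        ("LOC", "organization"): "employment.organization",
        ("SO",  "person"):       "employment.manager",
    }

    def disambiguate(atoms):
        """atoms: (role, event_var, arg) triples from one LR formula."""
        return [(ROLE_TO_RELATION[role, SORT[arg]], var, arg)
                for role, var, arg in atoms]

    # (EXISTS SIGMA (AND (EMPLOYMENT SIGMA) (AG SIGMA ANNE)
    #                    (LOC SIGMA HP)))
    print(disambiguate([("AG", "SIGMA", "ANNE"), ("LOC", "SIGMA", "HP")]))
    # [('employment.employee', 'SIGMA', 'ANNE'),
    #  ('employment.organization', 'SIGMA', 'HP')]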
The obvious goal is to try to give domain independent representations to a core vocabulary of English that could be used in a variety of application domains.

6. AN EXAMPLE

We shall now give a slightly more detailed illustration of how the syntax and compositional semantics rules work. We are still simplifying considerably, since we have selected an example where role frames are not involved, and we are not employing features on nodes. Here we have the grammar of a trivial subset of English:

<S1: S -> NP VP: (NP VP)>
<NP1: NP -> DET N: (DET N)>
<VP1: VP -> V NP: (V NP)>
<VP2: VP -> V A: A>

Suppose that the lexicon associated with the above rules is:

<every: DET: (LAMBDA P (LAMBDA Q (FORALL X ((P X) IMPLIES (Q X)))))>
<applicant: N: APPLICANT>
<interviewed: V[(RULE VP1)]: INTERVIEW>
<Bill: NP: (LAMBDA P (P BILL))>
<is: V[(RULE VP2)]: (BE)>
<competent: A: (LAMBDA Y (EXPERT.LEVEL HIGH Y))>

The syntax of a lexical entry is <L: C: T>, where L is the spelling of the item, C is its grammatical category and feature specification (if other than the default set), and T is its translation into LR.

Consider how we assign an LR to a sentence like Every applicant is competent. The translation of every supplies most of the structure of the universal quantification needed in LR. It represents a function from properties to functions from properties to truth values, so when applied to applicant it yields a constituent, namely every applicant, which has one of the property slots filled, and represents a function from properties to truth-values; it is:

(LAMBDA P (FORALL X ((APPLICANT X) IMPLIES (P X))))

This function can now be applied to the function denoted by competent, i.e.

(LAMBDA Y (EXPERT.LEVEL HIGH Y))

This yields:

(FORALL X ((APPLICANT X) IMPLIES ((LAMBDA Y (EXPERT.LEVEL HIGH Y)) X)))

And after one more lambda-conversion, we have:

(FORALL X ((APPLICANT X) IMPLIES (EXPERT.LEVEL HIGH X)))

Fig. 1 shows one parse tree that would be generated by the above rules, together with its logical translation. The sentence is Bill interviewed every applicant. The complicated translation of the VP is necessary because INTERVIEW takes an entity-type argument, not the type of function that every applicant denotes. We thus defer combining the NP translation with the verb by using Cooper storage. A translation with a stored NP is represented in Fig. 1 in angle brackets. Notice that at the S node the NP every applicant is still stored, but the subject is not stored. It has directly combined with the VP, by taking the VP as an argument. INTERVIEW is itself a two-place predicate, but one of its argument places has been filled by a place-holding variable, X1. There is thus only one slot left.

Fig. 1. A typical parse tree

    S: <((LAMBDA P (P BILL)) (INTERVIEW X1)),
        <(LAMBDA P (FORALL X ((APPLICANT X) IMPLIES (P X))))>>
      NP: (LAMBDA P (P BILL))
          Bill
      VP: <(INTERVIEW X1),
           <(LAMBDA P (FORALL X ((APPLICANT X) IMPLIES (P X))))>>
        V: INTERVIEW
            interviewed
        NP: (LAMBDA P (FORALL X ((APPLICANT X) IMPLIES (P X))))
          DET: (LAMBDA Q (LAMBDA P (FORALL X ((Q X) IMPLIES (P X)))))
              every
          N: APPLICANT
              applicant

The translation can now be completed via the operations of Storage Retrieval and lambda conversion. First, we simplify the part of the semantics that isn't in storage:
The rule for storage retrieval is to make a one-place predicate of the sentence translation by lambda-binding the placeholding variable, and then to apply the NP translation as a function to the result. The S-node translation above becomes: ((LAMBDA P (FORALL X ((APPLICANT X) IMPLIES (P X)))) (LAMBDA X1 ((INTERVIEW X1) BILL))) [lambda-conversion] ==> (FORALL X ((APPLICANT X) IMPLIES ((LAMBDA X1 ((INTERVIEW X1) BILL)) X))) [lambda-conversion] ::> (FORALL X ((APPLICANT X) IMPLIES (((INTERVIEW X) BILL)))) This is the desired final result. 7. CONCLUSION What we have outlined is a natural language system that is a direct implementation of a linguistic theory. We have argued that in this case the linguistic theory has the special appeal of computational tractability (promoted by its context-freeness), and that the system as a whole offers the hope of a happy marriage of linguistic theory, mathematical logic, and advanced computer applications. The system's theoretical underpinnings give it compatibility with current research in Generalized Phrase Structure Grammar, and its augmented first order logic gives it compatibility with a whole body of ongoing research in the field of model-theoretic semantics. The work done thus far is only the first step on the road to a robust and practical natural language processor, but the guiding principle throughout has been extensibility, both of the grammar, and of the applicability to various spheres of computation. ACKNOWLEDGEMENT Grateful acknowledgement is given to two brave souls, Steve Gadol and Bob Kanefsky, who helped give this system some of its credibility by implementing the actual hook-up with HIRE. Thanks are also due Robert Filman and Bert Raphael for helpful comments on an early version of this paper. And a special thanks is due Richard Weyhrauch, for encouragement, wise advice, and comfort in times of debugging. 80 APPENDIX This appendix lists some sentences that are actually translated into HIRE and answered by the current system. Declarative sentences presented to the system are evaluated with respect with their truth value in the usual way, and thus also function as queries. SIMPLE SENTENCES 1. HP employs Egon. 2. Egon works for HP. 3. HP offered Montague the position. 4. HP gave Montague a job. 5. Montague got a job from HP. 6. Montague's job is at HP 7. HP's offer was to Capulet. 8. Montague had a meeting with Capulet. 9. Capulet has an offer from Xerox. 10. Capulet is competent. IMPERATIVES AND QUESTIONS 11. Find the programmers in CRC who attended the meeting. 12. How many applicants for the position are there? 13. Which manager interviewed Capulet? 14. Whose job did Capulet accept? 15. Who is a department manager? 16. Is there a LISP programmer who Xerox hired? 17. Whose job does Montague have? 18. How many applicants did Capulet interview? RELATIVE CLAUSES 19. The professor whose student Xerox hired visited HP. 20. The manager Montague met with hired the student who attended Berkeley. NOUN-NOUN COMPOUNDS 21. Some Xerox programmers visited HP. 22. Montague interviewed a job applicant. 23. Who are the department managers? 24. How many applicants have a LISP programming background? COORDINATION 25. Who did Montague interview and visit? 26. Which department's position did every programmer and a manager from Xerox apply for? PASSIVE AND EXISTENTIAL SENTENCES 27. Egon was interviewed by Montague. 28. There is a programmer who knows LISP in CRC. INFINITIVAL COMPLEMENTS 29. Montague managed to get a job at HP. 30. 
REFERENCES

Barwise, Jon, and John Perry. 1981. "Situations and Attitudes." Journal of Philosophy 78, 668-692.

Cooper, Robin. 1975. Montague's Semantic Theory and Transformational Syntax. Doctoral dissertation, University of Massachusetts, Amherst.

Fillmore, Charles. 1968. "The Case for Case." In Bach, Emmon, and Robert Harms, eds. Universals in Linguistic Theory. New York: Holt, Rinehart and Winston.

Filman, Robert E., John Lamping, and Fanya Montalvo. 1982. "Metalanguage and Metareasoning." Submitted for presentation at the AAAI National Conference on Artificial Intelligence, Carnegie-Mellon University, Pittsburgh, Pennsylvania.

Gallaire, Hervé, and Jack Minker, eds. 1978. Logic and Data Bases. New York: Plenum Press.

Gallaire, Hervé, Jack Minker, and Jean Marie Nicolas, eds. 1981. Advances in Data Base Theory. New York: Plenum Press.

Gazdar, Gerald. 1981. "Unbounded Dependencies and Coordinate Structure." Linguistic Inquiry 12, 155-184.

Gazdar, Gerald. 1982. "Phrase Structure Grammar." In Pauline Jacobson and Geoffrey K. Pullum, eds. The Nature of Syntactic Representation. Dordrecht: D. Reidel.

Gazdar, Gerald, Geoffrey K. Pullum, and Ivan A. Sag. In press. "Auxiliaries and Related Phenomena." Language.

Gazdar, Gerald, Geoffrey K. Pullum, Ivan A. Sag, and Thomas Wasow. 1982. "Coordination and Transformational Grammar." Linguistic Inquiry 13.

Jackendoff, Ray. 1977. X-bar Syntax. Cambridge: MIT Press.

Kay, Martin. 1982. "When Metarules are not Metarules." Ms., Xerox Palo Alto Research Center.

Montague, Richard. 1973. "The Proper Treatment of Quantification in Ordinary English." In Richmond Thomason, ed. 1974. Formal Philosophy. New Haven: Yale University Press.

Pratt, Vaughan R. 1975. "LINGOL: A Progress Report." Advance Papers of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, Georgia, USSR, 3-8 September 1975. Cambridge, MA: Artificial Intelligence Laboratory. 422-428.

Pullum, Geoffrey K., and Gerald Gazdar. 1982. "Natural Languages and Context-Free Languages." Linguistics and Philosophy 4.

Sag, Ivan A. 1982. "Coordination, Extraction, and Generalized Phrase Structure Grammar." Linguistic Inquiry 13.

Weyhrauch, Richard W. 1980. "Prolegomena to a Theory of Mechanized Formal Reasoning." Artificial Intelligence 13, 133-170.
Experience with an Easily Computed Metric for Ranking Alternative Parses

George E. Heidorn
Computer Sciences Department
IBM Thomas J. Watson Research Center
Yorktown Heights, New York 10598

Abstract

This brief paper, which is itself an extended abstract for a forthcoming paper, describes a metric that can be easily computed during either bottom-up or top-down construction of a parse tree for ranking the desirability of alternative parses. In its simplest form, the metric tends to prefer trees in which constituents are pushed as far down as possible, but by appropriate modification of a constant in the formula other behavior can be obtained also. This paper includes an introduction to the EPISTLE system being developed at IBM Research and a discussion of the results of using this metric with that system.

Introduction

Heidorn (1976) described a technique for computing a number for each node during the bottom-up construction of a parse tree, such that a node with a smaller number is to be preferred to a node with a larger number covering the same portion of text. At the time, this scheme was used primarily to select among competing noun phrases in queries to a program explanation system. Although it appeared to work well, it was not extensively tested. Recently, as part of our research on the EPISTLE system, this idea has been modified and extended to work over entire sentences and to provide for top-down computation. Also, we have done an analysis of 80 sentences with multiple parses from our data base to evaluate the performance of this metric, and have found that it is producing very good results.

This brief paper, which is actually an extended abstract for a forthcoming paper, begins with an introduction to the EPISTLE system, to set the stage for the current application of this metric. Then the metric's computation is described, followed by a discussion of the results of the 80-sentence analysis. Finally, some comparisons are made to related work by others.

The EPISTLE System

In its current form, the EPISTLE system (Miller, Heidorn and Jensen 1981) is intended to do critiquing of a writer's use of English in business correspondence, and can do some amount of grammar and style checking. The central component of the system is a parser for assigning grammatical structures to input sentences. This is done with NLP, a LISP-based natural language processing system which uses augmented phrase structure grammar (APSG) rules (Heidorn 1975) to specify how text is to be converted into a network of nodes consisting of attribute-value pairs and how such a network can be converted into text. The first process, decoding, is done in a bottom-up, parallel processing fashion, and the inverse process, encoding, is done in a top-down, serial manner. In the current application the network which is constructed is simply a decorated parse tree, rather than a meaning representation.

Because EPISTLE must deal with unrestricted input (both in terms of vocabulary and syntactic constructions), we are trying to see how far we can get initially with almost no semantic information. In particular, our information about words is pretty much limited to parts-of-speech that come from an on-line version of a standard dictionary of over 100,000 entries, and the conditions in our 250 decoding rules are based primarily on syntactic cues.
We strive for what we call a unique approximate parse for each sentence, a parse that is not necessarily semantically accurate (e.g., prepositional phrase attachments are not always done right) but one which is adequate for the text critiquing tasks, nevertheless.

One of the things we do periodically to test the performance of our parsing component is to run it on a set of 400 actual business letters, consisting of almost 2,300 sentences which range in length up to 63 words, averaging 19 words per sentence. In two recent runs of this data base, the following results were obtained:

No. of parses    June 1981    Dec. 1981
     0              57%          36%
     1              31%          41%
     2               6%          11%
    >2               6%          12%

The improvement in performance from June to December can be attributed both to writing additional grammar rules and to relaxing overly restrictive conditions in other rules. It can be seen that this not only had the desirable effect of reducing the percentage of no-parse sentences (from 57% to 36%) and increasing the percentage of single-parse sentences (from 31% to 41%), but it also had the undesirable side effect of increasing the multiple-parse sentences (from 12% to 23%). Because we expect this situation to continue as we further increase our grammatical coverage, the need for a method of ranking multiple parses in order to select the best one on which to base our grammar and style critiques is acutely felt.

The Metric and Its Computation

The metric can be stated by the following recursive formula:

    Score_phrase = SUM over Mods of K_Mod * (Score_Mod + 1)

where the lowest score is considered to be the best. This formula says that the score associated with a phrase is equal to the sum of the scores of the modifying phrases of that phrase adjusted in a particular way, namely that the score of each modifier is increased by 1 and then multiplied by a constant K appropriate for that type of modifier. A phrase with no modifiers, such as an individual word, has a score of 0. This metric is based on a flat view of syntactic structure which says that each phrase consists of a head word and zero or more pre- and post-modifying phrases. (In this view a sentence is just a big verb phrase, with modifiers such as subject, objects, adverbs, and subordinate clauses.)

In its simplest form this metric can be considered to be nothing more than the numerical realization of Kimball's Principle Number Two (Kimball 1972): "Terminal symbols optimally associate to the lowest nonterminal node." (Although Kimball calls this principle right association and illustrates it with right-branching examples, it can often apply equally well to left-branching structures.) One way to achieve this simplest form is to use a K of 0.1 for all types of modifiers.

An example of the application of the metric in this simplest form is given in Figure 1. Two parse trees are shown for the sentence "See the man with the telescope," with a score attached to each node (other than those that are zero). A node marked with an asterisk is the head of its respective phrase. In this form of flat parse tree a prepositional phrase is displayed as a noun phrase with the preposition as an additional premodifier. As an example of the calculation, the score of the PP here is computed as 0.1(0+1)+0.1(0+1), because the scores of its modifiers (the ADJ and the PREP) are each 0. Similarly, the score of the NP in the second parse tree is computed as 0.1(0+1)+0.1(0.2+1), where the 0.2 within it is the score of the PP.
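Because the formula is recursive, it is easy to state executably. The following short Python sketch is our own illustration, not code from EPISTLE (which computed the metric with NLP encoding rules); the tree representation and the function names are invented. It recomputes the Figure 1 scores in the metric's simplest form, with K fixed at 0.1 for every modifier type.

# A minimal sketch of the parse metric, assuming a phrase is represented
# as a (label, modifiers) pair.  Head words contribute no score, since
# only modifiers enter the sum.  K is fixed at 0.1 for all modifier
# types, i.e., the metric's simplest form.

K = 0.1

def score(phrase):
    """Score of a phrase = sum over its modifiers of K * (score(mod) + 1)."""
    _, modifiers = phrase
    return sum(K * (score(mod) + 1) for mod in modifiers)

def leaf(label):
    return (label, [])

# "See the man with the telescope"
pp = ("PP", [leaf("PREP"), leaf("ADJ")])         # "with the (telescope)": 0.2
flat = ("SENT", [("NP", [leaf("ADJ")]), pp])     # PP attached high: 0.23
deep = ("SENT", [("NP", [leaf("ADJ"), pp])])     # PP pushed down: 0.122

print(round(score(flat), 3), round(score(deep), 3))   # 0.23 0.122

The deeper attachment gets the lower (better) score, exactly as in Figure 1.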
It can be seen from the example that in this simplest form the individual digits of the score after the decimal point tell how many modifiers appear at each level in the phrase (as long as there are no more than nine modifiers at any level). The farther down in the parse tree a constituent is pushed, the farther to the right in the final score its contribution will appear. Hence, a deeper structure will tend to have a smaller score than a shallower structure, and, therefore, be preferred. In the example, this is the second tree, with a score of 0.122 vs. 0.23. That is not to say that this would be the semantically correct tree for this sentence in all contexts, but only that if a choice cannot be made on any other grounds, this tree is to be preferred.

Applying the metric in its simplest form does not produce the desired result for all grammatical constructions, so that values for K other than 0.1 must be used for some types of modifiers. It basically boils down to a system of rewards and penalties to make the metric reflect preferences determined heuristically. For example, the preference that a potential auxiliary verb is to be used as an auxiliary rather than as a main verb when both parses are possible can be realized by using a K of 0, a reward, when picking up an auxiliary verb. Similarly, a K of 2, a penalty, can be used to increase the score (thereby lessening the preference) when attaching an adverbial phrase as a premodifier in a lower level clause (rather than as a postmodifier in a higher level clause). When semantic information is available, it can be used to select appropriate values for K, too, such as using 100 for an anomalous combination.

Straightforward application of the formula given above implies that the computation of the score can be done in a bottom-up fashion, as the modifiers of each phrase are picked up. However, it can also be done in a top-down manner after doing a little bit of algebra on the formula to expand it and regroup the terms. In the EPISTLE application it is the latter approach that is being used. There is actually a set of ten NLP encoding rules that do the computation in a downward traversal of a completed parse tree, determining the appropriate constant to use at each node. The top-down method of computation could be done during top-down parsing of the sort typically used with ATNs, also.

SENT(0.23)  |--- VERB*               "SEE"
            |--- NP(0.1)  |--- ADJ     "THE"
            |             |--- NOUN*   "MAN"
            |--- PP(0.2)  |--- PREP    "WITH"
                          |--- ADJ     "THE"
                          |--- NOUN*   "TELESCOPE"

SENT(0.122) |--- VERB*               "SEE"
            |--- NP(0.22) |--- ADJ     "THE"
                          |--- NOUN*   "MAN"
                          |--- PP(0.2) |--- PREP   "WITH"
                                       |--- ADJ    "THE"
                                       |--- NOUN*  "TELESCOPE"

Figure 1. Two alternative parses with their scores.

Performance of the Metric

To test the performance of the metric in our EPISTLE application, the parse trees of 80 multiple-parse sentences were analyzed to determine if the metric favored what we considered to be the best tree for our purposes. A raw calculation said it was right in 65% of the cases. However, further analysis of those cases where it was wrong showed that in half of them the parse that it favored was one which will not even be produced when we further refine our grammar rules. If we eliminate these from consideration, our success rate increases to 80%. Out of the remaining "failures," more than half are cases where semantic information is required to make the correct choice, and our system simply does not yet have enough such information to deal with these.
The others, about 7%, will require further tuning of the constant K in the formula. (In fact, they all seem to involve VP conjunction, for which the metric has not been tuned at all yet.)

The analysis just described was based on multiple parses of order 2 through 6. Another analysis was done separately on the double parses (i.e., order 2). The results were similar, but with an adjusted success rate of 85%, and with almost all of the remainder due to the need for more semantic information.

It is also of interest to note that significant right-branching occurred in about 75% of the cases for which the metric selected the best parse. Most of these were situations in which the grammar rules would allow a constituent to be attached at more than one level, but simply pushing it down to the lowest possible level with the metric turned out to produce the best parse.

Related Research

There has not been much in the literature about using numerical scores to rank alternative analyses of segments of text. One notable exception to this is the work at SRI (e.g., Paxton 1975 and Robinson 1975, 1980), where factor statements may be attached to an APSG rule to aid in the calculation of a score for a phrase formed by applying the rule. The score of a phrase is intended to express the likelihood that the phrase is a correct interpretation of the input. These scores apparently can be integers in the range 0 to 100 or symbols such as GOOD or POOR. This method of scoring phrases provides more flexibility than the metric of this paper, but also puts more of a burden on the grammar writer.

Another place in which scoring played an important role is the syntactic component of the BBN SPEECHLIS system (Bates 1976), where an integer score is assigned to each configuration during the processing of a sentence to reflect the likelihood that the path which terminates on that configuration is correct. The grammar writer must assign weights to each arc of the ATN grammar, but the rest of the computation appears to be done by the system, utilizing such information as the number of words in a constituent. Although this scoring mechanism worked very well for its intended purpose, it may not be more generally applicable.

A very specialized scoring scheme was used in the JIMMY3 system (Maxwell and Tuggle 1977), where each parse network is given an integer score calculated by rewarding the finding of the actor, object, modifiers, and prepositional phrases and punishing the ignoring of words and terms. Finally, there is Wilks' counting of dependencies to find the analysis with the greatest semantic density in his Preference Semantics work (e.g., Wilks 1975). Neither of these purports to propose scoring methods that are more generally applicable, either.

Acknowledgements

I would like to thank Karen Jensen, Martin Chodorow and Lance Miller for the help that they have given me in the development and testing of this parsing metric, and John Sowa for his comments on an earlier draft of this paper.

References

Bates, M. 1976. "Syntax in Automatic Speech Understanding." Am. J. Comp. Ling. Microfiche 45.

Heidorn, G.E. 1975. "Augmented Phrase Structure Grammars." Theoretical Issues in Natural Language Processing, B.L. Webber and R.C. Schank (Eds.), Assoc. for Comp. Ling., June 1975, 1-5.

Heidorn, G.E. 1976. "An Easily Computed Metric for Ranking Alternative Parses." Presented at the Fourteenth Annual Meeting of the Assoc. for Comp. Ling., San Francisco, October 1976.

Kimball, J. 1972. "Seven Principles of Surface Structure Parsing in Natural Language." Cognition 2, 1, 15-47.

Maxwell, B.D. and F.D. Tuggle 1977. "Toward a 'Natural' Language Question-Answering Facility." Am. J. Comp. Ling. Microfiche 61.

Miller, L.A., G.E. Heidorn and K. Jensen 1981. "Text-Critiquing with the EPISTLE System: An Author's Aid to Better Syntax." AFIPS Conference Proceedings, Vol. 50, May 1981, 649-655.

Paxton, W.H. 1975. "The Definition System." In Speech Understanding Research, SRI Annual Technical Report, June 1975, 20-25.

Robinson, J.J. 1975. "A Tuneable Performance Grammar." Am. J. Comp. Ling., Microfiche 34, 19-33.

Robinson, J.J. 1980. "DIAGRAM: A Grammar for Dialogues." SRI Technical Note 205, Feb. 1980.

Wilks, Y. 1975. "An Intelligent Analyzer and Understander of English." Comm. ACM 18, 5 (May 1975), 264-274.
An Improved Heuristic for Ellipsis Processing*

Ralph M. Weischedel
Department of Computer & Information Sciences
University of Delaware
Newark, Delaware 19711

and

Norman K. Sondheimer
Software Research
Sperry Univac MS 2G3
Blue Bell, Pennsylvania 19424

1. Introduction

Robust response to ellipsis (fragmentary sentences) is essential to acceptable natural language interfaces. For instance, an experiment with the REL English query system showed 10% elliptical input (Thompson, 1980).

In Quirk, et al. (1972), three types of contextual ellipsis have been identified:

1. repetition, if the utterance is a fragment of the previous sentence.
2. replacement, if the input replaces a structure in the previous sentence.
3. expansion, if the input adds a new type of structure to those used in the previous sentence.

Instances of the three types appear in the following example.

Were you angry?
a) I was.                   (repetition with change in person)
b) Furious.                 (replacement)
c) Probably.                (expansion)
d) For a time.              (expansion)
e) Very.                    (expansion)
f) I did not want to be.    (expansion)
g) Yesterday I was.         (expansion & repetition)

In addition to appearing as answers following questions, any of the three types can appear in questions following statements, statements following statements, or in the utterances of a single speaker.

This paper presents a method of automatically interpreting ellipsis based on dialogue context. Our method expands on previous work by allowing for expansion ellipsis and by allowing for all combinations of statement following question, question following statement, question following question, etc.

* This material is based upon work partially supported by the National Science Foundation under Grant No. IST-8009673.

2. Related Work

Several natural language systems (e.g., Bobrow et al., 1977; Hendrix et al., 1978; Kwasny and Sondheimer, 1979) include heuristics for replacement and repetition ellipsis, but not expansion ellipsis. One general strategy has been to substitute fragments into the analysis of the previous input, e.g., substituting parse trees of the elliptical input into the parse trees of the previous input in LIFER (Hendrix, et al., 1978). This only applies to inputs of the same type, e.g., repeated questions.

Allen (1979) deals with some examples of expansion ellipsis, by fitting a parsed elliptical input into a model of the speaker's plan. This is similar to other methods that interpret fragments by placing them into prepared fields in frames or case slots (Schank et al., 1980; Hayes and Mouradian, 1980; Waltz, 1978). This approach seems most applicable to limited-domain systems.

3. The Heuristic

There are three aspects to our solution: a mechanism for repetition and replacement ellipsis, an extension for inputs of different types, such as fragmentary answers to questions, and an extension for expansion ellipsis.

3.1 Repetition and Replacement

As noted above, repetition and replacement ellipsis can be viewed as substitution in the previous form. We have implemented this notion in an augmented transition network (ATN) grammar interpreter with the assumption that the "previous form" is the complete ATN path that parsed the previous input and that the lexical items consumed along that path are associated with the arcs that consumed them. In ellipsis mode, the ATN interpreter executes the path using the elliptical input in the following way:

1. Words from the elliptical input, i.e., the current input, may be consumed along the path at any point.
2. Any arc requiring a word not found in the current input may be traversed using the lexical item associated with the arc from the previous input.

3. However, once the path consumes the first word from the elliptical input, all words from the elliptical input must be consumed before an arc can use a word from the previous input.

4. Traversing a PUSH arc may be accomplished either by following the subpath of the previous input or by finding any constituent of the required type in the current input. The entire ATN can be used in these cases.

Suppose that the path for "Were you angry?" is given by Table 1. Square brackets are used to indicate subpaths resulting from PUSHes. "..." indicates tests and actions which are irrelevant to the current discussion.

State   Arc                         Old Lexical Item
S       (CAT COPULA ... (TO Sx))    "were"
Sx      (PUSH NP ... (TO Sy))
[NP     (CAT PRO ... (TO NPa))      "you"
NPa     (POP ...)]
Sy      (CAT ADJ ... (TO Sz))       "angry"
Sz      (POP ...)

Table 1. An ATN path for "Were you angry?"

An elliptical input of "Was he?" following "Were you angry?" could be understood by traversing all of the arcs as in Table 1. Following point 1 above, "was" and "he" would be substituted for "were" and "you". Following point 3, in traversing the arc (CAT ADJ ... (TO Sz)) the lexical item "angry" from the previous input would be used. Item 4 is illustrated by an elliptical input of "Was the old man?"; this is understood by traversing the arcs at the S level of Table 1, but using the appropriate path in the NP network to parse "the old man".

3.2 Transformations of the Previous Form

While the approach illustrated in Section 3.1 is useful in a data base query environment where elliptical input typically is a modification of the previous query, it does not account for elliptical statements following questions, elliptical questions following statements, etc. Our approach to the problem is to write a set of transformations which map the parse path of a question (e.g., Table 1) into an expected parse path for a declarative response, and the parse path for a declarative into a path for an expected question, etc.

The left-hand side of a transformation is a pattern which is matched against the ATN path of the previous utterance. Pattern elements include literals referring to arcs, variables which match a single arc or embedded path, variables which match zero or more arcs, and sets of alternatives. It is straightforward to construct a discrimination net corresponding to all left-hand sides for efficiently finding what patterns match the ATN path of the previous sentence. The right-hand side of a transformation is a pattern which constructs an expected path. The form of the pattern on the right-hand side is a list of references to states, arcs, and lexical entries. Such references can be made through items matched on the left-hand side or by explicit construction of literal path elements.

Our technique is to restrict the mapping such that any expected parse path is generated by applying only one transformation and applying it only once. A special feature of our transformational system is the automatic allowance for dialogue deixis. An expected parse path for the answer to "Were you angry?" is given in Table 2. Note in Table 2, "you" has become "I" and "were" has become "was".

State   Arc                          Old Lexical Item
S       (PUSH NP ... (TO Sa))
[NP     (CAT PRO ... (TO NPa))       "I"
NPa     (POP ...)]
Sa      (CAT COPULA ... (TO Sy))     "was"
Sy      (CAT ADJ ... (TO Sz))        "angry"
Sz      (POP ...)

Table 2. Declarative path for the expected answer to "Were you angry?"

Using this path, the ellipsis interpreter described in Section 3.1 would understand the ellipses in "a)" and "b)" below, in the same way as "a')" and "b')":

a) I was.            a') I was angry.
b) My spouse was.    b') My spouse was angry.

3.3 Expansions

A large class of expansions are simple adjuncts, such as examples c, d, e, and g in section 1. We have handled this by building our ellipsis interpreter to allow departing from the base path at designated states to consume an adjunct from the input string. We mark states in the grammar where adjuncts can occur. For each such state, we list a set of linear (though possibly cyclic) paths, called "expansion paths". Our interpreter as implemented allows departures from the base path at any state so marked in the grammar; it follows expansion paths by consuming words from the input string, and must return to a state on the base form. Each of the examples in c, d, e, and g of section 1 can be handled by expansion paths only one arc long. They are given in Table 3.

Initial State   Expansion Path
S               (PUSH ADVERB ... (TO S))
                Probably (I was angry).
S               (PUSH PP ... (TO S))
                For a time (I was angry).
S               (PUSH NP (* this includes a test that the NP is one of time or place) ... (TO S))
                Yesterday (I was angry).
Sy              (PUSH INTENSIFIER-ADVERB ... (TO Sy))
                (I was) very (angry).

Table 3. Example expansion paths

Since this is an extension to the ellipsis interpreter, combinations of repetition, replacement, and expansion can all be handled by the one mechanism. For instance, in response to "Were you angry?", "Yesterday you were (angry)" would be treated using the expansion and replacement mechanisms.

4. Special Cases and Limitations

The ideal model of contextual ellipsis would correctly predict what are appropriate elliptical forms in context, what their interpretation is, and what forms are not meaningful in context. We believe this requires structural restrictions, semantic constraints, and a model of the goals of the speaker. Our heuristic does not meet these criteria in a number of cases.

Only two classes of structural constraints are captured. One relates the ellipsis to the previous form as a combination of repetition, replacement, and expansion. The other constraint is that the input must be consumed as a contiguous string. This constraint is violated, for instance, in "I was (angry) yesterday" as a response to "Were you angry?" Nevertheless, the constraint is computationally useful, since allowing arbitrary gaps in consuming the elliptical input produces a very large space of correct interpretations. A ludicrous example is the following question and elliptical response:

Has the boss given our mutual friend a raise?
A fat raise.

Allowing arbitrary gaps between the substrings of the ellipsis allows an interpretation such as "A (boss has given our) fat (friend a) raise."

While it may be possible to view all contextual ellipsis as combinations of the operations repetition, replacement, and expansion applied to something, our model makes the strong assumption that these operations may be viewed as applying to an ATN path rather straightforwardly related to the previous utterance. Not all expansions can be viewed that way, as example f in Section 1 illustrates. Also, answers of "No" require special processing; that response in answer to "Were you angry" should not be interpreted as "No, I was angry."
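To make the substitution mechanism of section 3.1 and its contiguity constraint concrete, here is a toy sketch in Python. It is our illustration only, not the authors' implementation: a real ATN interpreter follows PUSH arcs, applies tests, and backtracks over alternatives, whereas this greedy version represents the previous path simply as (category, old word) pairs, and the tiny lexicon is invented.

# A toy sketch of rules 1-3, assuming the previous parse path is a list
# of (category, old_word) pairs.  Greedy and without PUSH subpaths; a
# real ATN interpreter would also backtrack over alternatives.

def interpret_ellipsis(prev_path, fragment):
    """Re-walk prev_path, substituting words from the fragment.
    Once the fragment is started, it must be consumed contiguously."""
    frag = list(fragment)
    started = done = False
    result = []
    for category, old_word in prev_path:
        if frag and matches(category, frag[0]) and not done:
            result.append(frag.pop(0))      # rule 1: consume from fragment
            started = True
            done = not frag                 # fragment exhausted
        elif not started or done:
            result.append(old_word)         # rule 2: reuse the previous word
        else:
            return None                     # rule 3 violated: gap in fragment
    return result if not frag else None

def matches(category, word):
    # stand-in for the ATN's category test on a word; invented lexicon
    lexicon = {"COPULA": {"were", "was"}, "PRO": {"you", "he", "I"},
               "ADJ": {"angry", "furious"}}
    return word in lexicon.get(category, set())

prev = [("COPULA", "were"), ("PRO", "you"), ("ADJ", "angry")]
print(interpret_ellipsis(prev, ["was", "he"]))   # ['was', 'he', 'angry']

Fragments like example f or a bare "No" find no consistent walk of the path under these rules, which is precisely the kind of limitation just discussed.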
One should be able to account for such examples within the heuristic described in this paper, perhaps by allowing the transformation system described in section 3.2 to be completely general rather than strongly restricted to one and only one transformation application. However, we propose handling such cases by special purpose rules we are developing. These rules for the special cases, plus the mechanism described in section 3, together will be formally equivalent in predictive power to a grammar for elliptical forms.

Though the heuristic is independent of the individual grammar, designating expansion paths and transformations obviously is not. The grammar may make this an easy or difficult task. For instance, in the grammar we are using, a subnetwork that collects all tense, aspect, and modality elements would simplify some of the transformations and expansion paths.

Naturally, semantics must play an important part in ellipsis processing. Consider the utterance pair below:

Did the boss have a martini at lunch?
Some wine.

Though syntactically this could be interpreted either as "Some wine (did have a martini at lunch)", "(The boss did have) some wine (at lunch)", or "(The boss did have a martini at) some wine", semantics should prefer the second reading. We are testing our heuristic using the RUS grammar (Bobrow, 1978) which has frequent calls from the grammar requesting that the semantic component decide whether to build a semantic interpretation for the partial parse found or to veto that partial parse. This should aid performance.

5. Summary and Conclusion

There are three aspects to our solution: a mechanism for repetition and replacement ellipsis, an extension for inputs of different types, such as fragmentary answers to questions, and an extension for expansion ellipsis. Our heuristic deals with the three types of ellipsis as follows: Repetition ellipsis is processed by repeating specific parts of a transformed previous path using the same phrases as in the transformed form ("I was angry"). Replacement ellipsis is processed by substituting the elliptical input for contiguous constituents on a transformed previous path. Expansion ellipsis may be processed by taking specially marked paths that detour from a given state in that path. Combinations of the three types of ellipsis are represented by combinations of the three variations in a transformed previous path.

There are two contributions of the work. First, our method allows for expansion ellipsis. Second, it accounts for combinations of previous sentence form and elided form, e.g., statement following question, question following statement, question following question. Furthermore, the method works without any constraints on the ATN grammar. The heuristics carry over to formalisms similar to the ATN, such as context-free grammars and augmented phrase structure grammars.

Our study of ellipsis is part of a much broader framework we are developing for processing syntactically and/or semantically ill-formed input; see Weischedel and Sondheimer (1981).

References

Allen, James F., "A Plan-Based Approach to Speech Act Recognition," Ph.D. Thesis, Dept. of Computer Science, University of Toronto, Toronto, Canada, 1979.

Bobrow, D., R. Kaplan, M. Kay, D. Norman, H. Thompson and T. Winograd, "GUS, A Frame-driven Dialog System", Artificial Intelligence, 8, (1977), 155-173.

Bobrow, R., "The RUS System", in Research in Natural Language Understanding, by B. Webber and R. Bobrow, BBN Report No.
3878, Bolt Beranek and Newman, Inc., Cambridge, MA, 1978.

Hayes, P. and G. Mouradian, "Flexible Parsing", in Proc. of the 18th Annual Meeting of the Assoc. for Comp. Ling., Philadelphia, June, 1980, 97-103.

Hendrix, G., E. Sacerdoti, D. Sagalowicz and J. Slocum, "Developing a Natural Language Interface to Complex Data", ACM Trans. on Database Sys., 3, 2, (1978), 105-147.

Kwasny, S. and N. Sondheimer, "Ungrammaticality and Extragrammaticality in Natural Language Understanding Systems", in Proc. of the 17th Annual Meeting of the Assoc. for Comp. Ling., San Diego, August, 1979, 19-23.

Quirk, R., S. Greenbaum, G. Leech and J. Svartvik, A Grammar of Contemporary English, Seminar Press, New York, 1972.

Schank, R., M. Lebowitz and L. Birnbaum, "An Integrated Understander", American Journal of Comp. Ling., 6, 1, (1980), 13-30.

Thompson, B. H., "Linguistic Analysis of Natural Language Communication with Computers", Proceedings of the Eighth International Conference on Computational Linguistics, Tokyo, October, 1980, 190-201.

Waltz, D., "An English Language Question Answering System for a Large Relational Database", Comm. ACM, 21, 7, (1978), 526-539.

Weischedel, Ralph M. and Norman K. Sondheimer, "A Framework for Processing Ill-Formed Input", Technical Report, Dept. of Computer & Information Sciences, University of Delaware, Newark, DE, 1981.

Acknowledgement

Much credit is due to Amir Razi for his programming assistance.
REFLECTIONS ON 20 YEARS OF THE ACL
AN INTRODUCTION

Donald E. Walker
Artificial Intelligence Center
SRI International
Menlo Park, California 94025, USA

Our society was founded on 13 June 1962 as the Association for Machine Translation and Computational Linguistics. Consequently, this 1982 Annual Meeting represents our 20th anniversary. We did, of course, change our name to the Association for Computational Linguistics in 1968, but that did not affect the continuity of the organization. The date of this panel, 17 June, misses the real anniversary by four days, but no matter; the occasion still allows us to reflect on where we have been and where we are going.

I seem to be sensitive to opportunities for celebrations. In looking through my AMTCL/ACL correspondence over the years, I came across a copy of a memo sent to Bob Simmons and Hood Roberts during our 10th anniversary year, recommending that something in commemoration might be appropriate. I cannot identify anything in the program of that meeting or in my notes about it that suggests they took me seriously then, but that reflects the critical difference between volunteering a recommendation and just plain volunteering!

My invitation to participate in this panel was sent out to the presidents of the Association, who were, in order, Vic Yngve, Dave Hays, Win Lehmann, Paul Garvin, Susumu Kuno, (I fit here in the sequence), Martin Kay, Warren Plath, Joyce Friedman, Bob Simmons, Bob Barnes, Bill Woods, Aravind Joshi, Stan Petrick, Paul Chapin, Jon Allen, Ron Kaplan, Bonnie Webber, Norm Sondheimer, and Jane Robinson, and to my predecessor as Secretary-Treasurer, Hood Roberts. Harry Josselson, our first Secretary-Treasurer, is no longer among us, but he would have enjoyed such a gathering, being one for ceremony and celebration. Vic, Dave, Martin, Warren, Joyce, Bob Simmons, Bill, Aravind, Stan, Paul, Jon, Ron, Bonnie, and Norm agreed to join me on the panel. Jane refused on the grounds that she was not yet part of history and that her Presidential Address provided ample platform to convey her reflections. Win, Paul, Susumu, and Bob Barnes were not able to come, and Hood was still waffling when this piece was being written.

My charge to the panelists, with respect to both oral and written tradition, was quite broad: "You are asked to reflect on significant experiences during your tenure of that office, in particular as they reflect on the state of computational linguistics then and now, and perhaps with some suggestions for what the future will bring." The written responses are varied, as you can see; I am sure that the oral responses will prove to be equally so.

To provide some perspective--and record some history--I am attaching a synopsis of "officers, editors, committees, meetings, and program chairing" (please let me know about errors!). It is interesting to note the names of people--many of whom are still prominent in the field--the practices associated with our annual meetings, and our publication history. I will comment on the latter two.

Our first meeting was held in conjunction with the 1963 ACM National Conference, but it is clear that our primary allegiance has been with the Linguistic Society of America, since we met seven times in conjunction with its summer meetings.
For a period, we alternated between the LSA and the Spring Joint Computer Conference--and actually included that schedule in our membership flyer. We joined with the American Society for Information Science twice, and the Cognitive Science Society once. The convocation of the first International Conference on Computational Linguistics, now known popularly as COLING, replaced our annual meeting in 1965, and we are scheduled to host COLING-84 in two years. Recently, we have been meeting independently, reflecting an increased confidence in our ability to "make it on our own!"

The publication history of the Association has been equally varied. The Finite String, our newsletter, was published as a separate under the editorship of Hood Roberts from 1964 through 1973, and has continued in various forms ever since. In 1965, the Association adopted MT: Mechanical Translation, a journal founded by Vic Yngve in 1954, changing its name to Mechanical Translation and Computational Linguistics in the process. However, that journal was not able to sustain a sufficient flow of manuscripts, and the last issue, dated 1968, was published in 1970. After a lengthy exploration of an alternative primary journal, Dave Hays brought the American Journal of Computational Linguistics into being in 1974. His intention had been to create a printed journal that contained extended abstracts, supplemented by microfiches that provided details, programs, and computer listings. This proposal was submitted to the National Science Foundation for support. A grant was approved, but it stipulated that we publish a microfiche-only journal, and we did that until 1978, The Finite String being issued as a separate microfiche during this period. It became increasingly clear during the five microfiche years that the micropublishing industry was not going to develop as predicted in the early 1970s.
ASSOCIATION FOR MACHINE TRANSLATION AND COMPUTATIONAL LINGUISTICS (founded 6-13-1962) ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (renamed 7-24-1968) Officers, Editors, Committees, Meetings, and Program Chairing Program Chair 1963 President Yngve Vice-President Hays Set-Treasurer Josselson Executive Rhodes Committee Garvin Members Lehmann Editor (F8) Roberts Editor (MTCL) Nominating See Committee Oettlnger Members Lamb Annual Meeting Denver 8/25-26 (ACM) Yngve 1964 Hays Alt Josselson Sebeok Garvin Lehmann Roberts Yngve Yngve Oettlnger Lamb Bloomington 7/29-30 (LSA) Chafe 1965 1966 President Lehmann Garvin Vice-President Garvin Oettinger Set-Treasurer Josselson Josselson Executive Sebeok Sebeok Committee Hockett Hockett Members Kuno Prendergraft Editor (FS) Roberts Roberts ~ditor (MTCL) Yngve Yngve Nominating Yngve Yngve Committee Rays Rays Members Lamb Lieberman Annual Meeting New York Los Angeles 5/19-21 ffi ICCL 7/26-27 (LSA) Program Chair Pendergraft Kay 1967 1968 President Kuno Walker Vice-President Walker Mersel See-Treasurer Josselson Josselson Executive Satterthwalt Satterthwalt Committee Hockett Fromkln Members Pendergraft Pendergraft Editor (FS) Roberts Roberts Editor (MTCL) Yngve Yngve Assoc Editor Chapln Nominating Garvin Garvin Committee Hays Kuno Members Lieberman Lieberman Annual Meeting Atlantic City Urbana 4/21 (SJCC) 7/24-25 (LSA) Program Chair Walker Petrlck 1969 President Kay Vice-Presldent Plath Set-Treasurer Josselson Executive Satterthwalt Committee Fromkln Members Montgomery Editor (FS) Roberts Editor (MTCL) Yngve Assoc Editor Chapin Nominating Garvin Committee Kuno Members Walker Annual Meeting Boston 5113 (sJcc) Program Chair Fraser 1970 Plath Friedman Josselson Wall Fromkin Montgomery Roberts Yngve Chapin Kay Kuno Walker Columbus 7/22-23 (LSA) Wall 1971 1972 President Friedman Simmons Vice-Pres Simmons Fromkin Sec-Treas Josselson Roberts Executive Wall Wall Committee Robinson Robinson Members Montgomery Chapln Editor (FS) Roberts Roberts Nominating Kay Kay Committee Plath Plath Members Walker Friedman Annual Meeting Atlantic City Chapel Hill 5/17 (SJCC) 7/26-27 (LSA) Program Chair Barnes Schank Program Chair 1973 President Barnes Vice-President Woods Set-Treasurer Roberts Executive Martins Committee Robinson Members Chapin Editor (FS) Roberts Editor (AJCL) Nominating Simmons Committee Plath Members Friedman Annual Meeting Ann Arbor 8/1-2 (LSA) Friedman 1974 Woods Wall Roberts Martins Joshl Chapln Hays Simmons Barnes Friedman Amherst 7/26-27 (LSA) Nash-Webber 90 1975 1976 President Joshi Petrick Vice-Preaident Petrick Grimes Sec-Treasurer Roberts Roberts/Walker Executive Martins Diller Committee Rieger Rieger Members Nash-Webber Nash-Webber Editor (AJCL) Hays Hays Nominating Simmons Joshl Committee Barnes Barnes Members Woods Woods Annual Meeting Boston San Francisco 10/30-11/I(ASIS)5/IO (ASIS) Program Chair Diller Chapin 1977 1978 President Ch'apin Allen Vice-President Allen Kaplan Sec-Treasurer Walker Walker Executive Diller Diller Committee Hobbs Hobbs Members Nash-Webber Bruce Editor (AJCL) Hays Hays Assoc Editor Heldorn Heidorn Nominating Joshi Joshi Committee Petrick Petrick Members Woods Chapin Annual Meeting Georgetown Urbana 3/16-17 (RTLL) 7/25-27 = TNLP Program Chair Allen Waltz 1979 1980 President Kaplan Webber Vice-President Webber Sondheimer Sec-Treasurer Walker Walker Executive Rosenschein Rosenschein Committee Hobbs Lehnert Members Bruce Bruce Editor (AJCL) Heidorn Heldorn Assoc Editor McCord Nominating Allen Allen Committee Petrlck Kaplan Members 
                 1981             1982
President        Sondheimer       Robinson
Vice-President   Robinson         Perrault
Sec-Treasurer    Walker           Walker
Executive        Rosenschein      Karttunen
Committee        Lehnert          Lehnert
Members          Mann             Mann
Editor (AJCL)    Heidorn          Petrick/Damerau
Assoc Editor     McCord           McCord
Editor (SNLP)                     Joshi
Nominating       Allen            Sondheimer
Committee        Kaplan           Kaplan
Members          Webber           Webber
Annual Meeting   Stanford         Toronto
                 6/29-7/1         6/16-18
Program Chair    Perrault         Bates

                 1983             1984
President        Karttunen        Mann
Vice-President   Mann             Joshi
Nominating       Sondheimer       Sondheimer
Committee
Members          Webber           "
Annual Meeting   Cambridge        Stanford
                 June             July = COLING-84

SOME ABBREVIATIONS

Publications:
FS = The Finite String (1964-present)
MTCL = Machine Translation and Computational Linguistics (1965-1968)
AJCL = American Journal of Computational Linguistics (1974-present)
SNLP = Studies in Natural Language Processing (Cambridge University Press Monograph Series, 1982- )

Other organizations in conjunction ( ) with which our meeting was held or which coopted (=) our meeting:
ACM = Association for Computing Machinery
LSA = Linguistic Society of America
ICCL = International Conference on Computational Linguistics (now called COLING)
SJCC = Spring Joint Computer Conference (now called National Computer Conference)
ASIS = American Society for Information Science
RTLL = Georgetown Round Table on Languages and Linguistics
TNLP = Theoretical Issues on Natural Language Processing-2
CSS = Cognitive Science Society
COLING = International Conference on Computational Linguistics
OUR DOUBLE ANNIVERSARY

Victor H. Yngve
University of Chicago
Chicago, Illinois 60637 USA

ABSTRACT

In June of 1952, ten years before the founding of the Association, the first meeting ever held on computational linguistics took place. This meeting, the succeeding ten years, and the first year of the Association are discussed. Some thoughts are offered as to what the future may bring.

I THE EARLY YEARS

When the suggestion came from Don Walker to celebrate our twentieth anniversary by a panel discussion I responded with enthusiasm at the opportunity for us all to reminisce. Much has happened in those twenty years to look back on, and there have been many changes. Not many here will remember that founding meeting. As our thoughts go back to the beginnings it must also be with a note of sadness, for some of our most illustrious early members can no longer be counted among the living.

Not many of you will remember either that our meeting here today marks another anniversary of signal importance for this Association. Thirty years ago the first organized conference ever to be held in the field of computational linguistics took place. The coincidence of the dates is remarkable. This conference is on June 16-18, 1982; that one was on June 17-20, 1952, overlapping two of our three dates. That meeting was the M.I.T. Conference on Mechanical Translation. It was an international meeting organized by Y. Bar-Hillel and held at the M.I.T. faculty club. If our association was born twenty years ago, this was the moment of its conception, exactly thirty years ago. I will try to recall that meeting for you, as best I can, for I propose that we celebrate that anniversary as well.

For that very first meeting Bar-Hillel had brought together eighteen interested people from both coasts and from England. The first session was an evening session open to the public. It consisted of five short semi-popular talks. The real business of the meeting took place the next three days in closed sessions in a pleasant room overlooking the Charles River. We sat around a kind of rectangular round-table, listened to fifteen prepared papers or presentations, and discussed them with a no-holds-barred give-and-take catalyzed by the intense, open, and candidly outspoken personality of Bar-Hillel. He was the only person I ever knew who could argue with you, shouting excitedly at the top of his lungs until your back was literally against the wall, and always with that angelic smile on his face and you couldn't help liking him through it all. The stenotype transcript of the discussion at that first meeting makes interesting reading even today. The participants grappled in a preliminary but often insightful way with difficult issues many of which are still with us.

As for the papers at the conference, three were given by Erwin Reifler of the Far Eastern and Russian Institute, the University of Washington; two by Victor Oswald of the Department of Germanic Languages, UCLA; two by William Bull of the Department of Spanish, UCLA; one each by Stuart Dodd of the University of Washington, William Locke of the Department of Modern Languages, M.I.T., James Perry of the Center for International Studies, M.I.T., Harry Huskey of the National Bureau of Standards computer lab at UCLA, and Jay Forrester of the Digital Computer Laboratory, M.I.T. Two were by Bar-Hillel himself, from M.I.T.; and one was by A. D. Booth of the Electronic Computer Section, Birkbeck College, London.
Most of the substantial papers were later revised for publication as some of the fourteen articles in the volume Machine Translation of Languages edited by Locke and Booth, or in the pages of the journal Mechanical Translation, which was started in March of 1954. Two reports of the conference were subsequently published in the journal, one by Erwin Reifler and one by Craig Reynolds, Jr., of IBM.

The ten years between the first conference and the founding of the Association were marked by many newsworthy events and considerable technical progress. A number of individuals and groups entered the field, both here and abroad, and an adequate level of support materialized, mostly from government agencies. This important contribution to progress in our field should be a matter of pride to the agencies involved. It was an essential ingredient in the mix of efforts that have put us where we are today. Progress in that first ten years can be estimated by considering that up to the time of the founding of the Association the journal Mechanical Translation had published 52 articles, 187 abstracts of the literature, and ran to 532 pages.
In any event, the people at UCLA organized a National Symposium on Machine Translation, which took place on February 2-5, 1960, Just five months after the date of the ACM meeting, and five months after that, on July 18-22, 1950, a meeting of federally sponsored machine translation workers, organized by Harry Josselson and supported by NSF and ONR was held at the Princeton Inn, Princeton, New Jersey. The next year, on April q-7, 1961, a similar conference was held st Georgetown Univer- sity, and Just five months after that, on September 5-8, 1961, the National Physical Laboratory in Teddlngton, England hosted an International Confer- ence on Machine Translation of Languages and £p- plied Language Analysis. SomethlnE clearly had to be done. So the stage had been set, and nine months later, on June 13, 1962, at another confer- ence organized by the irrepressible Harry Josselson at the Princeton Inn, we finally founded a profes- sional society: The Association for Machine Trans- lation and Computational Linguistics, renamed six years later the Association for Computational Lin- gulstlca. I have not been able to locate a llst of our charter members. I am sure one exists. The offi- cers for the first year were Victor H. Yngve, President; David G. Hays, Vice-Presldent; and Harry H. Josselson, Secretary-Treasurer. Mrs. Ida Rhodes, Paul Garvln, and Wlnfred P. Lehmann were members of the Executive Council. Richard See, Anthony G. Oettinger, and Sydney M. Lamb were members of the NominatlngCommlttee. Our announced purpose was to encourage high professional standards by aponsoring meetings, publication, and other exchange of ln/or- mation. It was to provide a means of doing to- gether what individuals cannot do alone. Many of us had hoped for a truly international association. We felt this would be particularly appropriate for an organization involved in trying to improve the means for international communica- tion through mechanical translation. But the cost of travel, travel restrictions from some countries, and various other practical problems stood in the way. We became an international but predominantly American association. We decided from the begin- ning to meet in alternate years in conjunction with a major computer conference and a major linguistics conference, My year of tenure as President was uneventful, or so it seemS. It is difficult to extract one year of memories twenty years ago. I do remember a trip to Denver to see about arrangements for our first annual meetlng at the Denver Hilton, to take place August 25 and 26, 1963, the two days immedi- ately preceding the ACM National Conference. The local arrangements people for that meeting were most helpful. The program was put together by Harry Josselson. There were thirty-four papers covering a wide variety of topics including syntac- tic analysis, semantics, particulars of languages, theoretical linguistics, research procedures, and research techniques. Abstracts for the thirty four papers were published i n ~ ~ , Yol. 7, No. 2, and a group photograph of some of the delegates attending appeared in Vol. 8, No. I. Looking at this photograph and those taken at- earlier conferences and published in earlier issues invokes considerable nostalgia for those days. III THE FUTURE I do remember my presidential address, for it stressed some matters that I thought were particu- 95 larly important for the future. 
These thoughts were also embodied in a longer paper read to the American Philosophical Society three months later, in November 1963, and published the next year by that organization. I should like to quote a few sentences for they are particularly appropriate at this point: • A new field of research has grown up which revolves about languages, computers, and symbolic processes. This sometimes is called computational llnguistlcs, mechanical linguistics, information processing, symbol manipulation, and so on. None of the names are really adequate. The implications Of this research for the future are far-reachlng. Imagine what it would mean if we bad computer programs that could actually understand English. Besides the obvious practical implications, the implications for our understanding of language are most exciting. This research promises to give us new insights into the way in which languages convey information, the way in which people understand English, the nature of thought processes, the na- ture of our theories, ideas, and prejudices, and eventually a deeper understanding of ourselves. Perhaps one of the last frontiers of man~s under- standing of his environment is his understandlr~of man and his mental processes. "This new field touches, with various degrees of overlap and interaction, the already well-estab- lished diverse fields of linguistics, psychology, logic, philosophy, information theory, circuit theory, and computer design. The interaction with linguistics has already produced several small revolutions in methodology, point of view, insight into language, and standards of rigor and exact- ness. It appears that before we are done, linguis- tics will be completely revolutionized." This quotation is particularly apt because I still believe that before we are done linguistics will be completely revolutionized. Let me explain. First, the difficulties in mechanlzlng translation had already at that early date called attention to fundamental inadequacies in linguistic theory, traditional or transformational, it makes no dif- ference. Second, the depth hypothesis and the problems raised in trying to square it with current linguistic theory threw further doubt on the scien- tific integrity of linguistics. And third, the depth hypothesis also provided an important clue as to how the Inadequacies in linguistic theory might eventually be overcome. I have spent the last two decades or so following this lead and trying to find a more satisfactory foundation for linguis- tics. The following is a brief progress report to the parent body, as it were. A recent written report may be found in the J a n u a ~ S e r i e s Major volume 97, edited by Florian Coulmas. Modern scientific linguistics, since its be- Elnnlng a century and a half ago, has been charac- terized by three central goals (1) that it study language, (2) that it be scientific, and (3) that it seek explanations in terms of people. It turns out that these goals are contradictory and mutually incompatible, and this is the underlying reason for the most serious Inadequacies in linguistic theor~ Linguistics, and that includes computational linguistics, is faced with two mutually exclusive alternatives. We can either accept the first goal and study language by the methods of grammar, or we can accept the second and third goals and seek explanations of communicative phenomena in terms of people by the methods ofsclence. We cannot continue with business a usual and try to have it both ways. 
Basically this is because science studies real objects given in advance whereas grammar studies objects that are only created by a point of view, as Saussure realized. Their study rests on a special assumption that places grammar outside of science. To try to have it both ways also leads to the fallacies of the psychological and social reality of grammar. The full implications of this fork in the road that linguistics faces are just now sinking in.

Only the second alternative is viable, science rather than grammar. This means we will have to give up the two-thousand-year grammatical tradition at the core of linguistic thought and reconstruct the discipline on well-known scientific principles instead. This will open up vast opportunities for research to uncover that essential and unique part of human nature, how people communicate. We may then finally be able to do all those things we have been trying so hard to do. In this necessary reconstruction I foresee that computational linguistics is destined to play an essential role.
2002: ANOTHER SCORE

David G. Hays
Metagram
25 Nagle Avenue, Apartment 3-G
New York, NY 10040

Twenty years is a long time to spend in prison, but it is a short time in intellectual history. In the few years just prior to the foundation of this Association, we had come from remarkably complex but nevertheless rather superficial analysis of text as strings of characters or, perhaps, lexical units to programs for parsing that operated on complex grammatical symbols but according to rather simple general principles; the programs could be independent of the language. And at the moment of foundation, we had--in the words of D. R. Swanson--run up against the stone wall of semantics. No one at the time could say whether it was the wall of a prison yard or another step in the old intellectual pyramid.

On my reading, the record is unmistakable. The best work of the past twenty years has been on a higher level than that of 1962. Those who learned about syntactic, semantic, and cognitive structures as students must feel quite scornful of the timidity with which we introduced these new topics to a world that doubted their propriety. But then some were not so timid. After all, the new ideas are in the curriculum. Meanwhile, the commercial significance of strings of characters has come to everyone's attention. So-called word processors are widely used, and the market grows. Commercialization of our most rudimentary techniques has taken twenty years. We may wonder how long it will take to put on the market systems with our more recent, more advanced techniques, but we can be sure that the market will eventually buy them. We can also be sure of encountering new barriers.

Our most important gain in the past twenty years is, as I see it, the assurance that whatever barrier we meet can be climbed. This is no case of "Climb one, climb them all." Such arrogance is folly. Language is closely associated with thought. Knowledge of them both, and of their association, is just what carried us over the barriers that were insurmountable twenty years ago. The barriers we meet are inherent in the systems of thought that we use. We know enough about thought to announce that its characteristic and discriminating feature is the capacity to generate new and more powerful systems of its own kind. A railroad does not become an elevator when it reaches a cliff, but thought does just that. No one anticipated in 1962 that the study of language or the investigation of "thinking machines" would lead in twenty years to an understanding of how intellectual barriers convert themselves into scaffolding for the erection of new theoretical systems, and no great social institution--not the university, and certainly not government--has yet recognized the revolutionary thrust of our small enterprise. The world understands, vaguely, that great change is taking place, but who understands that the pace of change will never slow down?

Intellectual progress consists in the routinization of the work of intuitive genius. Before the Renaissance in Europe, some persons by insight could establish the sum of two numbers or the truth of some fact about nature. Since the Renaissance we take these accomplishments so much for granted that we scarcely understand the system of thought in which they were problematic. At most twenty-five years ago, the determination of the surface structure of a sentence was problematic. By now we understand rather clearly how phonological, syntactic, semantic, and cognitive considerations interact in parsing.
We, as a global culture, have taken a step comparable to the Renaissance, and we, as the members of an Association, have had a significant role. Advances in linguistics, in cognitive science, in the art of computation, and in artificial intelligence have contributed to our work. Some would say that we are merely users of their results. I think that we have supplied a crucial element, and I understand our name--computational linguistics--to designate our special conceptualization.

Until we went to work, the standard conceptualization of analysis in western thought was translation into a universal scheme. Logic, mathematics, and computation assumed that all argument would be by manipulation of certain forms. The logician, mathematician, or computationist was expert in these forms and manipulations. Given any problem domain, someone would translate its material into the standard form. After manipulations, someone would translate the results back. Computational linguistics has the idea that computation can be designed on the pattern of linguistic theory. In this respect, it seems to me, there is a sharp distinction between computational linguistics and natural language processing. The latter seems to belong to artificial intelligence, and artificial intelligence seems to be the inheritor of the standard assumptions. I think that computational linguistics has the advantage. Language and thought are fantastically complex, and when their mechanisms are translated into the old universal forms the representations are equally complex. Yet, from the right perspective, we have seen that language and thought have simple structures of their own. If we translate linguistic mechanisms into computational terms, the first step is hard, but the rest is comparatively easy.

The making of software is still, as it has been from the beginning, a grave problem. For this problem I see only one remedy. Computational mechanisms must be translated into the terms of the user for whom the rest will be easy. But the user is not unique; the class of users is heterogeneous. Hence computational mechanisms must be translated into many different kinds of terms, and so far this translation seems very difficult. "Metagramming" is my name for an approach to the simplification of the hard part.

For thousands or tens of thousands of years humanity has engaged in the translation of linguistic mechanisms into the terms of different perspectives on the world. Thus, cultures and languages vary in profound ways. And cultures and languages vary together. Until now no one has understood this process. It went on in billions of brains, and it was effective. Now we try to understand it and to extend it from the linguistic level to the computational.

The curious formula that we offer for the conversion of intellectual barriers into scaffolding is just this: Formulate a description of the barrier. Translate the mechanisms of thought or of computation into the terms of the description. Execute the new mechanisms. As I see the matter, such work was done by intuitive genius until recently, but we are routinizing it. This formula generalizes on a central notion of computational linguistics and seems to me our first contribution to universal knowledge.

The formula contains an inexplicit element. What are the terms of the description to be? In what language does one formulate the description? I see no plain answer to this question. In fact, I am willing to take it as identifying, but not as describing, the next barrier.
Another way to put the matter is to say that the proper description of the barrier is a metaphor of its elimination. Metaphor is at present in the same limelight that illuminated semantics twenty years ago. We have not yet found the correct angle to illuminate the problem of metaphor, the proper language for description of the problem. Again, I suggest that metaphors serve us in discussions of abstract matters. Surmounting an intellectual barrier is stepping to a higher level of abstraction or, in a somewhat novel technical sense, moving to a metalevel. And finally I point out our inability to characterize the mutual influence of any complex whole and its myriad parts. If we consider a play or novel, a religion, a culture, or a science and ask how the unique quality of the whole emerges from the mass of elements, we have little or nothing of a scientific nature to say. And if we ask how to construct a system of this kind, how to design a building or a programming language, how to enhance a culture or educate a child, we find ourselves with traditions and intuitions but without explicit theories.

So I see a goal worth scoring, and I imagine the possibility that computational linguistics can move toward it. Deep study of computation inculcates powerful methods of thought, and deep study of language supplies the right objects of thought. Computational linguistics contains what I reckon to be needed by those who would wrestle with abstraction, metaphor, and metasystems. Mankind is a complex whole, and its individual human parts are myriad. The computer in every home will alter the mutual influence of person and population. For better or for worse, no one can yet say. Moral issues arise. Technical issues will determine not only whether morally sound principles can be put into practice, but also how we formulate the moral questions. Here is work for twenty years to come and beyond.
LINGUISTIC AND COMPUTATIONAL SEMANTICS*

Brian Cantwell Smith
XEROX Palo Alto Research Center
3333 Coyote Hill Road, Palo Alto, CA 94304

ABSTRACT

We argue that because the very concept of computation rests on notions of interpretation, the semantics of natural languages and the semantics of computational formalisms are in the deepest sense the same subject. The attempt to use computational formalisms in aid of an explanation of natural language semantics, therefore, is an enterprise that must be undertaken with particular care. We describe a framework for semantical analysis that we have used in the computational realm, and suggest that it may serve to underwrite computationally-oriented linguistic semantics as well. The major feature of this framework is the explicit recognition of both the declarative and the procedural import of meaningful expressions; we argue that whereas these two viewpoints have traditionally been taken as alternatives, any comprehensive semantical theory must account for how both aspects of an expression contribute to its overall significance.

We have argued elsewhere 1 that the distinguishing mark of those objects and processes we call computational has to do with attributed semantics: we humans find computational processes coherent exactly because we attach semantical significance to their behaviour, ingredients, and so forth. Put another way, computers, on our view, are those devices that we understand by deploying our linguistic faculties. For example, the reason that a calculator is a computer, but a car is not, is that we take the ingredients of the calculator to be symbolic (standing, in this particular case, for numbers and functions and so forth), and understand the interactions and organisation of the calculator in terms of that interpretation (this part divides, this part represents the sum, and so on). Even though by and large we are able to produce an explanation of the behaviour that does not rest on external semantic attribution (this is the formality condition mentioned by Fodor, Haugeland, and others 2), we nonetheless speak, when we use computational terms, in terms of this semantics. These semantical concepts rest at the foundations of the discipline: the particular organisations that computers have -- their computational raison d'etre -- emerge not only from their mechanical structure but also from their semantic interpretability. Similarly, the terms of art employed in computer science -- program, compiler, implementation, interpreter, and so forth -- will ultimately be definable only with reference to this attributed semantics; they will not, on our view, ever be found reducible to non-semantical predicates. 3

This is a ramifying and problematic position, which we cannot defend here. 4 We may simply note, however, the overwhelming evidence in favour of a semantical approach manifested by everyday computational language. Even the simple view of computer science as the study of symbol manipulation 5 reveals this bias. Equally telling is the fact that programming languages are called languages. In addition, language-derived concepts like name and reference and semantics permeate computational jargon (to say nothing of interpreter, value, variable, memory, expression, identifier, and so on) -- a fact that would be hard to explain if semantics were not crucially involved. It is not just that in discussing computation we use language; rather, in discussing computation we use words that suggest that we are also talking about linguistic phenomena.
The question we will focus on in this paper, very briefly, is this: if computational artefacts are fundamentally linguistic, and if, therefore, it is appropriate to analyse them in terms of formal theories of semantics (it is apparent that this is a widely held view), then what is the proper relationship between the so-called computational semantics that results, and more standard linguistic semantics (the discipline that studies people and their natural languages: how we mean, and what we are talking about, and all of that good stuff)? And furthermore, what is it to use computational models to explain natural language semantics, if the computational models are themselves in need of semantical analysis? On the face of it, there would seem to be a certain complexity that should be sorted out.

In answering these questions we will argue approximately as follows: in the limit computational semantics and linguistic semantics will coincide, at least in underlying conception, if not in surface detail (for example some issues, like ambiguity, may arise in one case and not in the other). Unfortunately, however, as presently used in computer science the term "semantics" is given such an operational cast that it distracts attention from the human attribution of significance to computational structures. 6 In contrast, the most successful models of natural language semantics, embodied for example in standard model theories and even in Montague's program, have concentrated almost exclusively on referential or denotational aspects of declarative sentences. Judging only by surface use, in other words, computational semantics and linguistic semantics appear almost orthogonal in concern, even though they are of course similar in style (for example they both use meta-theoretic mathematical techniques -- functional composition, and so forth -- to recursively specify the semantics of complex expressions from a given set of primitive atoms and formation rules).

It is striking, however, to observe two facts. First, computational semantics is being pushed (by people and by need) more and more towards declarative or referential issues. Second, natural language semantics, particularly in computationally-based studies, is focusing more and more on pragmatic questions of use and psychological import. Since computational linguistics operates under the computational hypothesis of mind, psychological issues are assumed to be modelled by a field of computational structures and the state of a processor running over them; thus these linguistic concerns with "use" connect naturally with the "operational" flavour of standard programming language semantics. It seems not implausible, therefore -- we betray our caution with the double negative -- that a unifying framework might be developed. It will be the intent of this paper to present a specific, if preliminary, proposal for such a framework.

First, however, some introductory comments. In a general sense of the term, semantics can be taken as the study of the relationship between entities or phenomena in a syntactic domain S and corresponding entities in a semantic domain D, as pictured in the following diagram:

    [Figure: an interpretation function mapping an element s1 of the
     Syntactic Domain S onto an element d1 of the Semantic Domain D.]

We call the function mapping elements from the first domain into elements of the second an interpretation function (to be sharply distinguished 7 from what in computer science is called an interpreter, which is a different beast altogether).
Note that the question of whether an element is syntactic or semantic is a function of the point of view; the syntactic domain for one interpretation function can readily be the semantic domain of another (and a semantic domain may of course include its own syntactic domain). Not all relationships, of course, count as semantical; the "grandmother" relationship fits into the picture just sketched, but stakes no claim on being semantical. Though it has often been discussed what constraints on such a relationship characterise genuinely semantical ones (compositionality or recursive specifiability, and a certain kind of formal character to the syntactic domain, are among those typically mentioned), we will not pursue such questions here. Rather, we will complicate our diagram as follows, so as to enable us to characterise a rather large class of computational and linguistic formalisms:

    [Figure: notations N1 and N2 are mapped by an internalisation
     function Θ onto internal structures s1 and s2, which Ψ relates
     to one another; Φ maps each internal structure onto its
     designation d1 or d2.]

N1 and N2 are intended to be notational or communicational expressions, in some externally observable and consensually established medium of interaction, such as strings of characters, streams of words, or sequences of display images on a computer terminal. The relationship Θ is an interpretation function mapping notations into internal elements of some process over which the primary semantical and processing regimens are defined. In first-order logic, s1 and s2 would be something like abstract derivation tree types of first-order formulae; if the diagram were applied to the human mind, under the hypothesis of a formally encoded mentalese, s1 and s2 would be tokens of internal mentalese, and Θ would be the function computed by the "linguistic" faculty (on a view such as that of Fodor 8). In adopting these terms we mean to be speaking very generally; thus we mean to avoid, for example, any claim that tokens of English are internalised (a term we will use for Θ) into recognisable tokens of mentalese. In particular, the proper account of Θ for humans could well simply describe how the field of mentalese structures, in some configuration, is transformed into some other configuration, upon being presented with a particular English sentence; this would still count, on our view, as a theory of Θ. In contrast, Φ is the interpretation function that makes explicit the standard denotational significance of linguistic terms, relating, we may presume, expressions in S to the world of discourse. The relationship between my mental token for T. S. Eliot, for example, and the poet himself, would be formulated as part of Φ. Again, we speak very broadly; Φ is intended to manifest what, paradigmatically, expressions are about, however that might best be formulated (Φ includes for example the interpretation functions of standard model theories). Ψ, in contrast, relates some internal structures or states to others -- one can imagine it specifically as the formally computed derivability relationship in a logic, as the function computed by the primitive language processor in a computational machine (i.e., as LISP's EVAL), or more generally as the function that relates one configuration of a field of symbols to another, in terms of the modifications engendered by some internal processor computing over those states.
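To make the three relationships concrete, the following toy sketch renders Θ, Φ, and Ψ for a trivial language of numerals and sums, in Common Lisp. It is our own illustration, not from the original paper; every function name in it is invented for the occasion, and it assumes, for simplicity, that every non-numeral structure is a sum.

    ;; Toy rendering of the three functions, for numerals and sums only.
    ;; THETA, PHI, and PSI are invented names for this illustration.

    (defun theta (notation)
      "Θ: internalise a notation string into an internal structure."
      (read-from-string notation))               ; "(+ 1 2)" -> (+ 1 2)

    (defun phi (s)
      "Φ: the designation of an internal structure."
      (if (numberp s)
          s                                      ; a numeral designates its number
          (apply #'+ (mapcar #'phi (rest s)))))  ; a sum designates the sum

    (defun psi (s)
      "Ψ: one (local) procedural consequence -- simplification to a numeral."
      (if (numberp s)
          s
          (reduce #'+ (mapcar #'psi (rest s)))))

    ;; (phi (theta "(+ 1 (+ 2 3))"))  => 6   ; the number designated
    ;; (psi (theta "(+ 1 (+ 2 3))"))  => 6   ; a numeral co-designating it

Because Lisp numerals are indistinguishable from the numbers they designate, Φ and Ψ here return results that look identical -- itself a small instance of the use/mention conflation taken up below.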
(Φ and Ψ are named, for mnemonic convenience, by analogy with philosophy and psychology, since a study of Φ is a study of the relationship between expressions and the world -- since philosophy takes you "out of your mind", so to speak -- whereas a study of Ψ is a study of the internal relationships between symbols, all of which, in contrast, are "within the head" of the person or machine.)

Some simple comments. First, N1, N2, s1, s2, d1, and d2 need not all necessarily be distinct: in a case where s1 is a self-referential designator, for example, d1 would be the same as s1; similarly, in a case where Ψ computed a function that was designation-preserving, then d1 and d2 would be identical. Secondly, we need not take a stand on which of Ψ and Φ has a prior claim to being the semantics of s1. In standard logic, Ψ (i.e., derivability: ⊢) is a relationship, but is far from a function, and there is little tendency to think of it as semantical; a study of Ψ is called proof theory. In computational systems, on the other hand, Ψ is typically much more constrained, and is also, by and large, analysed mathematically in terms of functions and so forth, in a manner much more like standard model theories. Although in this author's view it seems a little far-fetched to call the internal relationships (the "use" of a symbol) semantical, it is nonetheless true that we are interested in characterising both, and it is unnecessary to express a preference. For discussion, we will refer to the Φ-semantics of a symbol or expression as its declarative import, and refer to its Ψ-semantics as its procedural consequence. We have heard it said in other quarters that "procedural" and "declarative" theories of semantics are contenders; 9 to the extent that we have been able to make sense of these notions, it appears that we need both. 10

It is possible to use this diagram to characterise a variety of standard formal systems. In the standard models of the λ-calculus, for example, the designation function Φ takes λ-expressions onto functions; the procedural regimen Ψ, usually consisting of α- and β-reductions, can be shown to be Φ-preserving. Similarly, if in a standard predicate logic we take Φ to be (the inverse of the) satisfaction relationship, with each element of S being a sentence or set of sentences, and elements of D being those possible worlds in which those sentences are true, and similarly take Ψ as the derivability relationship, then soundness and completeness can be expressed as the equation Ψ(s1,s2) ≡ [d1 ⊆ d2]. As for all formal systems (these presumably subsume the computational ones), it is crucial that Ψ be specifiable independent of Φ. The λ-calculus and predicate logic systems, furthermore, have no notion of a processor with state; thus the appropriate Ψ involves what we may call local procedural consequence, relating a simple symbol or set of symbols to another set. In a more complex computational circumstance, as we will see below, it is appropriate to characterise a more complex full procedural consequence involving not only simple expressions, but fuller encodings of the state of various aspects of the computational machine (for example, at least environments and continuations in the typical computational case 10). An important consequence of the analysis illustrated in the last figure is that it enables one to ask a question not typically asked in computer science, about the (Φ-) semantic character of the function computed by Ψ.
Note that questions about soundness and completeness in logic are exactly questions of this type. In separate research, 11 we have shown, by subjecting it to this kind of analysis, that computational formalisms can be usefully analysed in these terms as well. In particular, we demonstrated that the universally accepted LISP evaluation protocol is semantically confused, in the following sense: sometimes it preserves Φ (i.e., Φ(Ψ(s)) = Φ(s)), and sometimes it embodies Φ (i.e., Ψ(s) = Φ(s)). The traditional LISP notion of evaluation, in other words, conflates simplification and reference relationships, to its peril (in that report we propose some LISP dialects in which these two are kept strictly separate). The current moral, however, is merely that our approach allows the question of the semantical import of Ψ to be asked.
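The confusion is easy to exhibit. Here is a two-line illustration of our own, in Common Lisp, echoing the kind of examples analysed in the report cited as Smith (1982a); only the standard EVAL is used.

    ;; Evaluation as simplification: (+ 1 2) and 3 co-designate the
    ;; number three, so here EVAL preserves Φ: Φ(Ψ(s)) = Φ(s).
    (eval '(+ 1 2))   ; => 3

    ;; Evaluation as de-referencing: the argument (QUOTE A) designates
    ;; the symbol A, and EVAL returns that very symbol -- here EVAL
    ;; embodies Φ: Ψ(s) = Φ(s).
    (eval ''a)        ; => A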
As well as considering LISP, we may use our diagram to characterise the various linguistically oriented projects carried on under the banner of "semantics". Model theories and formal theories of language (we include Tarski and Montague in one sweep) have concentrated primarily on Φ. Natural language semantics in some quarters 12 focuses on Θ -- on the translation into an internal medium -- although the question of what aspects of a given sentence must be preserved in such a translation is of course of concern (no translator could ignore the salient properties, semantical and otherwise, of the target language, be it mentalese or predicate logic, since the endeavour would otherwise be without constraint). Lewis (for one) has argued that the project of articulating Θ -- an endeavour he calls markerese semantics -- cannot really be called semantics at all, 13 since it is essentially a translation relationship, although it is worth noting that Θ in computational formalisms is not always trivial, and a case can at least be made that many superficial aspects of natural language use, such as the resolution of indexicals, may be resolved at this stage (if for example you say "I am warm" then I may internalise your use of the first person pronoun into my internal name for you).

Those artificial intelligence researchers working in knowledge representation, perhaps without too much distortion, can be divided into two groups: a) those whose primary semantical allegiance is to Φ, and who (perhaps as a consequence) typically use an encoding of first-order logic as their representation language, and b) those who concern themselves primarily with Ψ, and who therefore (legitimately enough) reject logic as even suggestive (Ψ in logic -- derivability -- is a relatively unconstrained relationship, for one thing; secondly, the relationship between the entailment relationship, to which derivability is a hopeful approximation, and the proper "Ψ" of rational belief revision is at least a matter of debate 14). Programming language semantics, for reasons that can at least be explored, if not wholly explained, has focused primarily on Ψ, although in ways that tend to confuse it with Φ. Except for PROLOG, which borrows its Φ straight from a subset of first-order logic, and the LISPs mentioned earlier, 15 we have never seen a semantical account of a programming language that gave independent accounts of Φ and Ψ. There are complexities, furthermore, in knowing just what the proper treatment of general languages should be.

In a separate paper 16 we argue that the notion program is inherently defined as a set of expressions whose (Φ-) semantic domain includes data structures (and set-theoretic entities built up over them). In other words, in a computational process that deals with finance, say, the general data structures will likely designate individuals and money and relationships among them, but the terms in that part of the process called a program will not designate these people and their money, but will instead designate the data structures that designate people and money (plus of course relationships and functions over those data structures). Even on a declarative view like ours, in other words, the appropriate semantic domain for programs is built up over data structures -- a situation strikingly like the standard semantical accounts that take abstract records or locations or whatever as elements of the otherwise mathematical domain for programming language semantics. It may be this fact -- that all base terms in programs are meta-syntactic -- that has spawned the confusion between operations and reference in the computational setting.

Although the details of a general story remain to be worked out, the LISP case mentioned earlier is instructive, by way of suggestion as to how a more complete computational theory of language semantics might go. In particular, because of the context relativity and non-local effects that can emerge from processing a LISP expression, Ψ is not specifiable in a strict compositional way. Ψ -- when taken to include the broadest possible notion that maps entire configurations of the field of symbols and of the processor itself onto other configurations and states -- is of course recursively specifiable (the same fact, in essence, as saying that LISP is a deterministic formal calculus). A pure characterisation of Ψ without a concomitant account of Φ, however, is unmotivated -- as empty as a specification of a derivability relationship would be for a calculus for which no semantics had been given. Of more interest is the ability to specify what we call a general significance function Σ, which recursively specifies Φ and Ψ together (this is what we were able to do for LISP). In particular, given any expression s1, any configuration of the rest of the symbols, and any state of the processor, the function Σ will specify the configuration and state that would result (i.e., it will specify the use of s1), and also the relationship to the world that the whole signifies. For example, given a LISP expression of the form (+ 1 (PROG (SETQ A 2) A)), Σ would specify that the whole expression designated the number three, that it would return the numeral "3", and that the machine would be left in a state in which the binding of the variable A was changed to the numeral "2". A modest result; what is important is merely a) that both declarative import and procedural significance must be reconstructed in order to tell a full story about LISP; and b) that they must be formulated together.
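A runnable rendering of that example may make the three facets vivid. This is our own sketch, in Common Lisp; the paper's schematic PROG is written as PROGN, which gives the intended "do the SETQ, then yield A" sequencing, and A is given a prior binding so the side effect is observable.

    (defvar a 0)   ; A starts out bound to 0

    ;; The whole expression designates the number three, returns the
    ;; numeral 3, and leaves A bound to 2 -- the three facets that a
    ;; general significance function Σ must specify together.
    (let ((result (+ 1 (progn (setq a 2) a))))
      (format t "returned: ~a; A is now bound to: ~a~%" result a))
    ;; prints: returned: 3; A is now bound to: 2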
Rather than pursue this view in detail, it is helpful to set out several points that emerge from analyses developed within this framework:

a. In most programming languages, Θ can be specified compositionally and independently of Φ or Ψ -- this amounts to a formal statement of Fodor's modularity thesis for language. 17 In the case of formal systems, Θ is often context-free and compositional, but not always (reader macros can render it opaque, or at least intensional, and some languages such as ALGOL are apparently context-sensitive). It is noteworthy, however, that there have been computational languages for which Θ could not be specified independently of Ψ -- a fact that is often stated as the fact that the programming language "cannot be parsed except at runtime" (TECO and the first versions of SMALLTALK had this character).

b. Since LISP is computational, it follows that a full account of its Ψ can be specified independent of Φ; this is in essence the formality condition. It is important to bring out, however, that a local version of Ψ will typically not be compositional in a modern computational formalism, even though such locality holds in purely extensional, context-free, side-effect-free languages such as the λ-calculus.

c. It is widely agreed that Ψ does not uniquely determine Φ (this is the "psychology narrowly construed" and the concomitant methodological solipsism of Putnam and Fodor and others 18). However this fact is compatible with our foundational claim that computational systems are distinguished in virtue of having some version of Φ as part of their characterisation. A very similar point can be made for logic: although any given logic can (presumably) be given a mathematically-specified model theory, that theory doesn't typically tie down what is often called the standard model or interpretation -- the interpretation that we use. This fact does not release us, however, from positing as a candidate logic only a formalism that humans can interpret.

d. The declarative interpretation Φ cannot be wholly determined independent of Ψ, except in purely declarative languages (such as the λ-calculus and logic and so forth). This is to say that without some account of the effect on the processor of one fragment of a whole linguistic structure, it may be impossible to say what that processor will take another fragment as designating. The use of SETQ in LISP is an example; natural language instances will be explored below.

This last point needs a word of explanation. It is of course possible to specify Φ in mathematical terms without any explicit mention of a Ψ-like function; the approach we use in LISP defines both Φ and Ψ in terms of the overarching function Σ mentioned above, and we could of course simply define Φ without defining Ψ at all. Our point, rather, is that any successful definition of Φ will effectively have to do the work of Ψ, more or less explicitly, either by defining some identifiable relationship, or else by embedding that relationship within the meta-theoretic machinery. We are arguing, in other words, only that the subject we intend Ψ to cover must be treated in some fashion or other.

What is perhaps surprising about all of this machinery is that it must be brought to bear on a purely procedural language -- all three relationships (Θ, Φ, and Ψ) figure crucially in an account even of LISP. We are not suggesting that LISP is like natural languages: to point out just one crucial difference, there is no way in LISP or in any other programming language (except PROLOG) to say anything, whereas the ability to say things is clearly a foundational aspect of any human language.
The problem in the procedural languages is one of what we may call assertional force; although it is possible to construct a sentence-like expression with a clear declarative semantics (such as some equivalent of "x = 3"), one cannot use it in such a way as to actually mean it -- so as to have it carry any assertional weight. For example, it is trivial to set some variable x to 3, or to ask whether x is 3, but there is no way to state that x is 3. It should be admitted, however, that computational languages bearing assertional force are under considerable current investigation. This general interest is probably one of the reasons for PROLOG's emergent popularity; other computational systems with an explicit declarative character include for example specification languages, data base models, constraint languages, and knowledge representation languages in A.I. We can only assume that the appropriate semantics for all of these formalisms will align even more closely with an illuminating semantics for natural language.

What does all of this have to do with natural language, and with computational linguistics? The essential point is this: if this characterisation of formal systems is tenable, and if the techniques of standard programming language semantics can be fit into this mould, then it may be possible to combine those approaches with the techniques of programming language semantics and of logic and model theories, to construct complex and interacting accounts of Ψ and of Φ. To take just one example, the techniques that are used to construct mathematical accounts of environments and continuations might be brought to bear on the issue of dealing with the complex circumstances involving discourse models, theories of focus in dealing with anaphora, and so on; both cases involve an attempt to construct a recursively specifiable account of non-local interactions among disparate fragments of a composite text. But the contributions can proceed in the other direction as well: even from a very simple application of this framework to the circumstance of LISP, for example, we have been able to show how an accepted computational notion fails to cohere with our attributed linguistically based understanding, involving us in a major reconstruction of LISP's foundations. The similarities are striking.

Our claim, in sum, is that similar phenomena occur in programming languages and natural languages, and that each discipline could benefit from the semantical techniques developed in the other. Some examples of these similar phenomena will help to motivate this view. The first is the issue of the appropriate use of noun phrases: as well as employing a noun phrase in a standard extensional position, natural language semantics has concerned itself with more difficult cases such as intensional contexts (as in the underlined designator in "I didn't know The Big Apple was an island", where the co-designating term New York cannot be substituted without changing the meaning), the so-called attributive/referential distinction of Donnellan 19 (the difference, roughly, between using a noun phrase like "the man with a martini" to inform you that someone is drinking a martini, as opposed to a situation where one uses the hearer's belief or assumption that someone is drinking a martini to refer to him), and so on.
Another example different from either of these is provided by the underlined term in "For the next 20 years let's restrict the president's salary to $20,000", on the reading in which after Reagan is defeated he is allowed to earn as much as he pleases, but his successor comes under our constraint. The analogous computational cases include for example the use of an expression like (the formal analog of) make the sixth array element be 10 (i.e., A(6) ::= 10), where we mean not that the current sixth element should be 10 (the current sixth array element might at the moment be 9, and 9 can't be 10), but rather that we would like the description "the sixth array element" to refer to 10 (so-called "L-values", analogous to MACLISP's SETF construct). Or, to take a different case, suppose we say set x to the sixth array element (i.e., x ::= A(6)), where we mean not that x should be set to the current sixth array element, but that it should always be equal to that element (stated computationally we might say that x should track A(6); stated linguistically we might say that x should mean "the sixth array element"). Although this is not a standard type of assignment, the new constraint languages provide exactly such facilities, and macros (classic computational intensional operators) can be used in more traditional languages for such purposes. Or, for a final example, consider the standard declaration INTEGER X, in which the term "x" refers neither to the variable itself (variables are variables, not numbers), nor to its current designation, but rather to whatever will satisfy the description "the value of x" at any point in the course of a computation. All in all, we cannot ignore the attempt on the computationalists' part to provide complex mechanisms so strikingly similar to the complex ways we use noun phrases in English.

A very different sort of linguistic phenomenon that occurs in both programming languages and in natural language are what we might call "premature exits": cases where the processing of a local fragment aborts the standard interpretation of an encompassing discourse. If for example I say to you "I was walking down the street that leads to the house that Mary's aunt used to ... forget it; I was taking a walk", then the "forget it" must be used to discard the analysis of some amount of the previous sentence. The grammatical structure of the subsequent phrase determines how much has been discarded, of course; the sentence would still be comprehensible if the phrase "an old house I like" followed the "forget it". We are not accustomed to semantical theories that deal with phenomena like this, of course, but it is clear that any serious attempt to model real language understanding will have to face them. Our present point is merely that continuations 20 enable computational formalisms to deal exactly with the computational analogs of this: so-called escape operators like MACLISP's THROW and CATCH and QUIT.

In addition, a full semantics of language will want to deal with such sentences as "If by 'flustrated' you mean what I think, then she was certainly flustrated". The proper treatment of the first clause in this sentence will presumably involve lots of "Ψ" sorts of considerations: its contribution to the remainder of the sentence has more to do with the mental states of speaker and hearer than with the world being described by the presumed conversation.
Once again, the overarching computational hypothesis suggests that the way these psychological effects must be modelled is in terms of alterations in the state of an internal process running over a field of computational structures.

As well as these specific examples, a couple of more general morals can be drawn, important in that they speak directly to styles of practice that we see in the literature. The first concerns the suggestion, apparently of some currency, that we reject the notion of logical form, and "do semantics directly" in a computational model. On our account this is a mistake, pure and simple: to buy into the computational framework is to believe that the ingredients in any computational process are inherently linguistic, in need of interpretation. Thus they too will need semantics; the internalisation of English into a computer (Θ) is a translation relationship (in the sense of preserving Φ, presumably) -- even if it is wildly contextual, and even if the internal language is very different in structure from the structure of English. It has sometimes been informally suggested, in an analogous vein, that Montague semantics cannot be taken seriously computationally, because the models that Montague proposes are "too big" -- how could you possibly carry these infinite functions around in your head, we are asked to wonder. But of course this argument commits a use/mention mistake: the only valid computational reading of Montague would mean that mentalese (S) would consist of designators of the functions Montague proposes, and those designators can of course be a few short formulae.

It is another consequence of our view that any semanticist who proposes some kind of "mental structure" in his or her account of language is committed to providing an interpretation of that structure. Consider for example a proposal that posits a notion of "focus" for a discourse fragment. Such a focus might be viewed as a (possibly abstract) entity in the world, or as an element of computational structure playing such-and-such a role in the behavioural model of language understanding. It might seem that these are alternative accounts; what our view insists is that an interpretation of the latter must give it a designation (Φ): thus there would be a computational structure (being biased, we will call it the focus-designator), and a designation (that we call the focus-itself). The complete account of focus would have to specify both of these (either directly, or else by relying on the generic declarative semantics to mediate between them), and also tell a story about how the focus-designator plays a causal role (Ψ) in engendering the proper behaviour in the computational model of language understanding.

There is one final problem to be considered: what it is to design an internal formalism S (the task, we may presume, of anyone designing a knowledge representation language). Since, on our view, we must have a semantics, we have the option either of having the semantics informally described (or, even worse, tacitly assumed), or else we can present an explicit account, either by defining such a story ourselves or by borrowing from someone else.
If the LISP case can be taken as suggestive, a purely declarative model theory will be inadequate to handle the sorts of computational interactions that programming languages have required (and there is no a priori reason to assume that successful computational models for natural language will be found that are simpler than the programming languages the community has found necessary for the modest sorts of tasks computers are presently able to perform). However it is also reasonable to expect that no direct analog to programming language semantics will suffice, since they have to date been so concerned with purely procedural (behavioural) consequence. It seems at least reasonable to suppose that a general interpretation function, of the Σ sort mentioned earlier, may be required.

Consider for example the KLONE language presented by Brachman et al. 21 Although no semantics for KLONE has been presented, either procedural or declarative, its proponents have worked both in investigating the Θ-semantics (how to translate English into KLONE), and in developing an informal account of the procedural aspects. Curiously, recent directions in that project would suggest that its authors expect to be able to provide a "declarative-only" account of KLONE semantics (i.e., expect to be able to present an account of Φ independent of Ψ), in spite of our foregoing remarks. Our only comment is to remark that independence of procedural consequence is not a pre-requisite to an adequate semantics; the two can be recursively specifiable together; thus this apparent position is stronger than formally necessary -- which makes it perhaps of considerable interest.

In sum, we claim that any semantical account of either natural language or computational language must specify Θ, Φ, and Ψ; if any are left out, the account is not complete. We deny, furthermore, that there is any fundamental distinction to be drawn between so-called procedural languages (of which LISP is the paradigmatic example in A.I.) and other more declarative languages (encodings of logic, or representation languages). We deny as well, contrary to at least some popular belief, the view that a mathematically well-specified semantics for a candidate "mentalese" must be satisfied by giving an independently specified declarative semantics (as would be possible for an encoding of logic, for example). The designers of KRL, 22 for example, for principled reasons denied the possibility of giving a semantics independent of the procedures in which the KRL structures participated; our simple account of LISP has at least suggested that such an approach could be pursued on a mathematically sound footing. Note however, in spite of our endorsement of what might be called a procedural semantics, that this in no way frees one from giving a declarative semantics as well; procedural semantics and declarative semantics are two pieces of a total story; they are not alternatives.

NOTES

* I am grateful to Barbara Grosz and Hector Levesque for their comments on an earlier draft of this short paper, and to Jane Robinson for her original suggestion that it be written.

1. Smith (1982b)
2. Fodor (1978), Fodor (1980), Haugeland (forthcoming)
3. At least until the day arrives -- if ever -- when a successful psychology of language is presented wherein all of human semanticity is explained in non-semantical terms.
4. Problematic because it defines computation in a manner that is derivative on mind (in that language is fundamentally a mental phenomenon), thus dashing the hope that computational psychology will offer a release from the semantic irreducibility of previous accounts of human cognition. Though we state this position and explore some of its consequences in Smith (1982b), a considerably fuller treatment will be provided in Smith (forthcoming).
5. See for example Newell (1980).
6. The term "semantics" is only one of a large collection of terms, unfortunately, that are technical terms in computer science and in the attendant cognitive disciplines (including logic, philosophy of language, linguistics, and psychology), with different meanings and different connotations. Reference, interpretation, memory, and value are just a few examples of the others. It is our view that in spite of the fact that semantical vocabulary is used in different ways, the fields are both semantical in fundamentally the same ways; a unification of terminology would only be for the best.
7. An example of the phenomenon noted in footnote 6.
8. Fodor (forthcoming)
9. Woods (1981)
10. For a discussion of continuations see Gordon (1979), Steele and Sussman (1978), and Smith (1982a); the formal device is developed in Strachey & Wadsworth (1974).
11. Smith (1982a).
12. A classic example is Katz and Postal (1964), but much of the recent research in natural language in A.I. can be viewed in this light.
13. Lewis (1972).
14. Israel (1980).
15. For a discussion of PROLOG see Clocksin & Mellish (1981); the LISPs are described in Smith (1982a).
16. Smith (forthcoming).
17. Fodor (forthcoming).
18. The term "methodological solipsism" is from Putnam (1975); see also Fodor (1980).
19. Donnellan (1966).
20. See note 10, above.
21. Brachman (1979).
22. Bobrow and Winograd (1977).

REFERENCES

Bobrow, Daniel G., and Winograd, Terry, "An Overview of KRL: A Knowledge Representation Language", Cognitive Science 1, pp. 3-46, 1977.
Brachman, Ronald, "On the Epistemological Status of Semantic Networks", in Findler, Nicholas V. (ed.), Associative Networks: Representation and Use of Knowledge by Computers, New York: Academic Press, 1979.
Clocksin, W. F., and Mellish, C. S., Programming in Prolog, Berlin: Springer-Verlag, 1981.
Donnellan, K., "Reference and Definite Descriptions", Philosophical Review 75:3 (1966), pp. 281-304; reprinted in Rosenberg and Travis (eds.), Readings in the Philosophy of Language, Prentice-Hall, 1971.
Fodor, Jerry, "Tom Swift and his Procedural Grandmother", Cognition 6, 1978; reprinted in Fodor (1981).
-----, "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology", The Behavioral and Brain Sciences 3:1 (1980), pp. 63-73; reprinted in Haugeland (ed.), Mind Design, Cambridge: Bradford, 1981, and in Fodor (1981).
Israel, David, "What's Wrong with Non-Monotonic Logic?", Proceedings of the First Annual Conference of the American Association for Artificial Intelligence, Stanford, California, 1980, pp. 99-101.
Katz, Jerrold, and Postal, Paul, An Integrated Theory of Linguistic Descriptions, Cambridge: M.I.T. Press, 1964.
Lewis, David, "General Semantics", in Davidson and Harman (eds.), Semantics of Natural Language, Dordrecht, Holland: D. Reidel, 1972, pp. 169-218.
Newell, Allen, "Physical Symbol Systems", Cognitive Science 4, pp. 135-183, 1980.
Putnam, Hilary, "The Meaning of 'Meaning'", in Putnam, Hilary, Mind, Language and Reality, Cambridge, U.K.: Cambridge University Press, 1975.
Smith, Brian C., Reflection and Semantics in a Procedural Language, Laboratory for Computer Science Report LCS-TR-272, M.I.T., Cambridge, Mass., 1982 (a).
-----, "Semantic Attribution and the Formality Condition", presented at the Eighth Annual Meeting of the Society for Philosophy and Psychology, London, Ontario, Canada, May 13-16, 1982 (b).
-----, The Computational Metaphor, Cambridge: Bradford (forthcoming).
Steele, Guy, and Sussman, Gerald J., "The Art of the Interpreter, or the Modularity Complex (Parts Zero, One, and Two)", M.I.T. Artificial Intelligence Laboratory Memo AIM-453, Cambridge, Mass., 1978.
Strachey, C., and Wadsworth, C. P., "Continuations -- a Mathematical Semantics for Handling Full Jumps", PRG-11, Programming Research Group, University of Oxford, 1974.
Woods, William A., "Procedural Semantics as a Theory of Meaning", Report No. 4627, Bolt Beranek and Newman, 50 Moulton St., Cambridge, Mass., 02138; reprinted in Joshi, A., Sag, I., and Webber, B. (eds.), Computational Aspects of Linguistic Structures and Discourse Settings, Cambridge, U.K.: Cambridge University Press, 1982.
MY TERM

Winfred P. Lehmann
Department of Linguistics
The University of Texas
Austin, Texas 78712

My term came at the time of the New York World Fair. The Association, still of MT as well as CL, was trying to crash the club that shared profits from the annual meetings of AFIPS. These were producing something over $20,000, a sum which in those days would do more than pay a fraction of one's annual overhead on an NSF grant. ACM of course was grabbing the bulk of this, but IEEE wasn't doing badly, to judge by the magnificence of its journal. I attended the powwows of the powers national and international. In spite of our run-down heels, they treated me courteously. Among other activities we journeyed out for a preview of the World Fair exhibits. IBM's massive show, with a question-answer demonstration as its highlight, didn't absorb all that much attention from a group, all of whom may have shaken the hand of Seymour Cray and pondered at his hilltop.

Convention memories fade after so long a time. I remember best a conversation with a representative from Japan. He expressed great wonder at all things and beings American. The head of our computation center was a tall tennis buff. When I gave my new friend the name his response was: "Oh, the great ...?" I still haven't reconciled the possible interpretations of his use of the adjective.

A couple of the more prominent members of the hardware crowd offered me a ride back to town after our group had paid its respects to a few more pavilions of the Fair. When we located their car it turned out to be an old four-door Buick, barely hanging together. The back seat was clogged with computer parts, an overflow from the trunk; I made a bit of room for myself among the pirated splendors and listened to the hopeful chatter on the wonders of the new world, hoping both the Buick and the absorbed driver would preserve me for it. Thanks to a successful outcome of the ride no doubt, I received issues of the elegant IEEE journal for a few years.

The Association did get a cut out of the AFIPS pot, as I recall, but since it was distributed in accordance with membership, our 600 didn't stack up too well against the 20,000 of ACM. The rest of my year was more routine.
A SOCIETY IN TRANSITION

Donald E. Walker
Artificial Intelligence Center
SRI International
Menlo Park, California 94025, USA

I was President in 1968, the year during which the Association for Machine Translation and Computational Linguistics became the Association for Computational Linguistics. Names always create controversy, and the founding name, selected in 1962, was chosen in competition with others, not the least of which was the one that subsequently replaced it. In fact, a change of name to the Association for Computational Linguistics was actually approved in 1963, at what has been described as an "unofficial meeting." However, that action was subsequently ruled out of order, since it did not result from a constitutional amendment. Five years later, proper procedure having been followed, the change was made officially.

The organizational impetus for the establishment of the Association did come primarily from a group of people who had been working on machine translation. However, as in most scientific societies, there has always been--and probably always will be--a tension between research and applications. The primary motivation for the name change in 1968 was the recognition, shared but by no means universal, that we needed to address more basic issues first. The report on Language and Machines: Computers in Translation and Linguistics by the Automatic Language Processing Advisory Committee (ALPAC) in 1966 became a focal point for this controversy. It was viewed by translation specialists as an attack on their work, and it certainly resulted in dramatic reductions in funding for machine translation. However, its authors claim it was intended as an argument for increased support of research. Certainly, the lead article in the October-November 1966 issue of The Finite String, which was titled "Potential Bright for Language-Analysis by Computer; NRC Report Urges Support," presented that view. In any case, the argument that computational linguistics "should be supported as a science and should not be judged by any immediate or foreseeable contributions to practical translation" seemed to frighten rather than challenge the funding agencies in the immediately following years.

Another factor motivating the change of name in 1968 was the appreciation of the potential for the use of computational linguistics in information retrieval and in stylistic analysis. Mechanical translation was only one of a number of exciting application areas. During that year, I spent a substantial amount of time coordinating with the Special Interest Group on Information Retrieval (SIGIR) and the Special Interest Committee on Language Analysis and Studies in the Humanities (SICLASH), both of the Association for Computing Machinery, and with the Special Interest Group on Automated Language Processing (SIGALP) of the American Society for Information Science. There were also meetings with the Linguistic Society of America, the Modern Language Association, and the Center for Applied Linguistics. In addition, during the year, ACL became a constituent society of the American Federation of Information Processing Societies (AFIPS), of which it had been first an unofficial and then an official "observer" since 1964. I was the first ACL member of the Board of Directors, and I became actively involved in a number of its committees, most particularly the one on Information Systems.
I also became the ACL representative to the newly formed International Joint Conference on Artificial Intelligence and, subsequently, Program Chairman of its first meeting in 1969.

These organizational relationships, coupled with my own tendency toward global perspectives, led me to view the ACL as a central point around which everything else revolved. In computational linguistics, there are aspects of science, engineering, the humanities, the social and behavioral sciences, education, and communications. I contemplated the time when, in addition to being members of AFIPS, our Association would be equally implicated in the American Association for the Advancement of Science, the American Council of Learned Societies, and the Social Science Research Council. And I only regretted that at that time there were no aggregate groups for education and communications to round out the picture. This "message" was the substance of my Presidential address. I tried to leaven it with a little humor to make it more palatable, but I remember the banquet which preceded it (prepared by the University of Illinois Illini Union) as the most horrible meal the ACL has ever had to confront. I am sure that the failure of the members in attendance to rally to my cause and carry my message to the multitudes (or even to Garcia) was due in no small measure to the poor quality of the food.

But let me return briefly to name changes and to the tension between research and applications. During our discussions about the forthcoming Conference on Applied Natural Language Processing, which the ACL will be cosponsoring with the Naval Research Laboratory, 1-3 February 1983, in Santa Monica, we considered calling it "Conference on Applied Computational Linguistics." However, it became clear that we could not expect to have as broadly based a meeting as we wanted with that name. It is important now to reach out to the larger community. Once we have them listening to us, it will be all right for them to find out that they have been "practicing computational linguistics all their (professional) lives!"

Fascinated as I am by the applicability of computational linguistics for its own sake, what I find most exciting is the value that the use of our systems will have for deepening our insights into the basic research issues that still face us. I believe that studying people "organizing and using information" on the kinds of systems we are now beginning to develop can revolutionize our understanding of what we do and do not know about computational linguistics, as well as guide the improvement of our systems more effectively (Walker, 1981, 1982, in press).

REFERENCES

Walker, D. E. "The Organization and Use of Information: Contributions of Information Science, Computational Linguistics and Artificial Intelligence." Journal of the American Society for Information Science 1981, 32:347-363.

Walker, D. E. "Natural-Language-Access Systems and the Organization and Use of Information." In COLING 82: Proceedings of the Ninth International Conference on Computational Linguistics. North-Holland Publishing Company, Amsterdam, Netherlands, 1982.

Walker, D. E. "Computational Strategies for Analyzing the Organization and Use of Information." In Knowledge Structure and Use: Perspectives on Synthesis and Interpretation. Edited by S. Ward and L. Reed. National Institute of Education, Washington, D.C., in cooperation with CEMREL, Inc., St. Louis, Missouri (in press).

Automatic Language Processing Advisory Committee.
Language and Machines: Computers in Translation and Linguistics. National Academy of Sciences, National Research Council, Publication 1416, Washington, D.C., 1966.
THEMES FROM 1972

Robert F. Simmons
Department of Computer Sciences
University of Texas at Austin
Austin, TX 78712

Although 1972 was the year that Winograd published his now classic natural language study of the blocks world, that fact had not yet penetrated to the ACL. At that time people with AI computational interests were strictly in a minority in the association, and it was a radical move to appoint Roger Schank as program chairman for the year's meeting. That was also the year that we didn't have a presidential banquet, and my "speech" was a few informal remarks at the roadhouse restaurant somewhere in North Carolina reassuring a doubtful few members that computational understanding of natural language was certainly progressing and that applied natural language systems were distinctly feasible.

My own perceptions of the state of computational linguistics during that period were given in "On Seeing the Elephant" in the Finite String, March-April 1972. I saw it as a time of confusion, of competition among structuralists, transformationalists, and the new breed of computerniks. "On Seeing the Elephant" was a restatement of the old Sufi parable that suggested that we each perceived only isolated parts of our science.

That was the period during which Jonathan Slocum and I were concerned with using Augmented Transition Networks to generate coherent English from semantic networks. That line of research was originated by the first President of the Association, Victor Yngve, who in 1960 had published descriptions of algorithms for using a phrase structure grammar to generate syntactically well-formed nonsense sentences [Yngve 1960]. Sheldon Klein and I, about 1962-1964, were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text. Yngve's work was truly seminal, and it continued to inspire Sheldon for years as he developed method after method for generating detective stories and now operas. I, too, with various students continued to explore the generation side of language, most recently with Correira [1979], using a form of story tree to construct stories and their summaries. No matter that Meehan found better methods and Bill Mann and his colleagues continue to improve on the techniques. The use of a phrase structure grammar to control the sequence in which sentences and words are produced remains quite as fascinating as its use in translating sentences to representations of meaning.

It is possible to communicate the technique for controlled generation of text in just a few paragraphs, so in dedication to Yngve, Klein, and the many others of the discipline who share our fascination with generation of meaningful language, the following description is presented.

The last two lines of Keats' "Ode on a Grecian Urn" are:

    Beauty is truth, truth beauty, that is all
    Ye know on earth and all ye need to know.

To form semantically controlled variations on this verse we can create substitution classes as below:

[SCLASS BEAUTY life knowledge wisdom love this]
[SCLASS TRUTH honor joy rapture love all]
[SCLASS (THAT IS ALL) (that's all) (that's what) (it's all) (it's what)]
[SCLASS YE you we I some they]
[SCLASS KNOW sense have get see meet]
[SCLASS (ON EARTH) (for living) (til heaven) (til hell) (in life)]
[SCLASS (NEED TO) (have to) (ought to) (want to)]

and line rules similar to phrase structure forms. (I think of the couplet as a three-line verse.)
[KLINE1 beauty is truth -- truth beauty]
[KLINE2 (that is all) ye know (on earth)]
[KLINE3 (that is all) ye (need to) know]

Each KLINEi rewrites as a conjunction of terms, e.g., KLINE1 --> beauty + is + truth + -- + truth + beauty. The line rules are composed of terms such as "beauty", "that is all", etc., that begin SCLASS predications, and of terminals such as "is" and "--" that do not. Poem and verse can also be defined as rules:

[POEM title verse verse ... verse]
[TITLE (Variations on Keats' Truth is Beauty)]
[VERSE kline1 kline2 kline3]

Actually it is more convenient to define these latter three elements as program to control choice of grammar, spacing, and number of verses. In either case, a POEM is a TITLE followed by VERSEs, and a VERSE is three lines each composed of terminals that occur in a KLINE or of selections from the matching substitution class. Only one other program element is required: a random selection function to pseudo-randomly choose an element from a substitution class and to record that element as chosen:

((CHOOSE (_FIRST . _REMDR) _CHOICE) < (CHOSEN _FIRST _CHOICE))
((CHOOSE (_FIRST . _REMDR) _CHOICE) < (RANDOM* (_FIRST . _REMDR) _CHOICE) (ASSERT (CHOSEN _FIRST _CHOICE)))

Note: CHOOSE is called with the content of an SCLASS rule in the list (_FIRST . _REMDR); if a choice for the term has previously been made in the verse, _CHOICE is taken from the predicate (CHOSEN _FIRST _CHOICE). If not, RANDOM* selects an element and records it as CHOSEN. When a verse is begun, any existing CHOSEN predicates are deleted. This is a procedural logic program with lists in dot notation and variables marked using the underscore. It is presented to give a sense of how the program appears in Dan Chester's LISP version of PROLOG. The rest of the program follows the POEM, VERSE, and KLINE rules given above. The program is called by (POGEN KEATS 4), KEATS selecting the grammar and 4 signifying the number of verses. A couple of recordings of its behavior appear below.

*(POGEN KEATS 4)
(VARIATIONS ON KEATS' TRUTH IS BEAUTY)

(LOVE IS LOVE -- LOVE LOVE)
(ITS ALL YE HAVE TIL HEAVEN)
(ITS ALL YE NEED TO HAVE)

(LOVE IS LOVE -- LOVE LOVE)
(THATS ALL THEY KNOW ON EARTH)
(THATS ALL THEY OUGHT TO KNOW)

(WISDOM IS RAPTURE -- RAPTURE WISDOM)
(ITS WHAT YOU MEET FOR LIVING)
(ITS WHAT YOU WANT TO MEET)

(LOVE IS ALL -- ALL LOVE)
(THATS ALL WE SENSE TIL HELL)
(THATS ALL WE HAVE TO SENSE)

((POGEN KEATS 4))

*(POGEN KEATS 5)
(VARIATIONS ON KEATS' TRUTH IS BEAUTY)

(BEAUTY IS TRUTH -- TRUTH BEAUTY)
(THATS WHAT YE SEE ON EARTH)
(THATS WHAT YE WANT TO SEE)

(KNOWLEDGE IS ALL -- ALL KNOWLEDGE)
(ITS ALL THEY MEET TIL HELL)
(ITS ALL THEY HAVE TO MEET)
***
(LOVE IS RAPTURE -- RAPTURE LOVE)
(ITS ALL SOME KNOW TIL HEAVEN)
(ITS ALL SOME NEED TO KNOW)

(LIFE IS RAPTURE -- RAPTURE LIFE)
(THATS WHAT SOME GET FOR LIVING)
(THATS WHAT SOME OUGHT TO GET)
***
(LIFE IS LOVE -- LOVE LIFE)
(ITS ALL I SENSE ON EARTH)
(ITS ALL I WANT TO SENSE)
***
((POGEN KEATS 5))

Perhaps these verses might best be characterized as those Keats wisely rejected. Nevertheless our robot-poet demonstrates the effectiveness of phrase structure organization and substitution classes for selecting and ordering actions. The ideas of POGEN led to related methods for creating paraphrases, answering questions, and translating between languages.
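To make the mechanics concrete, here is a minimal sketch of the same scheme in modern Python. It is an illustrative reconstruction, not Simmons's program: the names SCLASSES, KEATS_LINES, choose, and pogen are invented for the example, and the apostrophe-stripping simply mimics the ITS/THATS spellings in the transcripts above. Like CHOOSE, it reuses a choice already made for a term within the current verse, and otherwise picks pseudo-randomly and records the choice.

import random

# Substitution classes; the key is the term that begins each SCLASS
# rule, and the term itself is also a permissible choice.
SCLASSES = {
    "BEAUTY": ["beauty", "life", "knowledge", "wisdom", "love", "this"],
    "TRUTH": ["truth", "honor", "joy", "rapture", "love", "all"],
    "THAT IS ALL": ["that is all", "that's all", "that's what",
                    "it's all", "it's what"],
    "YE": ["ye", "you", "we", "I", "some", "they"],
    "KNOW": ["know", "sense", "have", "get", "see", "meet"],
    "ON EARTH": ["on earth", "for living", "til heaven",
                 "til hell", "in life"],
    "NEED TO": ["need to", "have to", "ought to", "want to"],
}

# Line rules: terms that begin SCLASS rules are substitutable;
# anything else ("is", "--") is a terminal.
KEATS_LINES = [
    ["BEAUTY", "is", "TRUTH", "--", "TRUTH", "BEAUTY"],
    ["THAT IS ALL", "YE", "KNOW", "ON EARTH"],
    ["THAT IS ALL", "YE", "NEED TO", "KNOW"],
]

def choose(term, chosen):
    # Reuse the choice recorded for this term in the current verse;
    # otherwise select pseudo-randomly and record it.
    if term not in chosen:
        chosen[term] = random.choice(SCLASSES[term])
    return chosen[term]

def pogen(line_rules, n_verses):
    print("(VARIATIONS ON KEATS' TRUTH IS BEAUTY)")
    for _ in range(n_verses):
        chosen = {}  # CHOSEN records are deleted when a verse begins
        for rule in line_rules:
            words = [choose(t, chosen) if t in SCLASSES else t
                     for t in rule]
            print("(" + " ".join(words).upper().replace("'", "") + ")")
        print()

pogen(KEATS_LINES, 4)

Because choices are cached per verse, a term that occurs twice in a line rule (TRUTH in KLINE1) comes out the same way both times, which is what keeps each variation internally consistent.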
The principle of phrase structure organization has permeated our NL efforts and found a particularly friendly environment in procedural logic, where Chester and I [1982] show that the same grammar that translates English strings into semantic representations can serve to translate the representations into English strings. This result, confirming an earlier finding by Heidorn, greatly simplifies the linguistic programming requirements for NL translation and text questioning systems.

Since 1972 the computational linguistics world has changed much. Today AI and Logic interests tend to overshadow linguistic approaches to language. But despite all the complexities in translating between NL constituents and computational representations, augmented phrase structure grammars provide a general and effective means to guide the flow of computation.

REFERENCES

Simmons, R. F., and Chester, D. L., "Relating Sentences and Semantic Networks with Procedural Logic," Communications of the ACM, September 1982 (in press).

Simmons, R. F., and Correira, A., "Rule Forms for Verse, Sentence, and Story Trees," in Findler, N. V. (ed.), Associative Networks, pp. 363-392, Academic Press, New York, 1979.

Yngve, V., "A Model and a Hypothesis for Language Structure," Proceedings of the American Philosophical Society, Volume 104, pp. 444-466, 1960.
TWENTY YEARS OF REFLECTIONS*

Aravind K. Joshi
Department of Computer and Information Science
R. 268 Moore School
University of Pennsylvania
Philadelphia, PA 19104

As I was reflecting deeply in front of the statue of the Bodhisattva of Grace and Wisdom in the University Museum, I was startled to see Jane. Having heard from Don that he had asked the old cats to reflect on the 20 years of ACL, Jane had decided that she should drop in on some of them to seek their advice concerning the future of ACL.

*This work is supported by the unfunded grant FSN-RND-HIN-82-57. I wish to thank Alice, the Cheshire Puss, and Lewis Carroll for their help in the preparation of this paper.

"What brings you here?" I asked with a grin.

Jane thought for a while and said: "Would you tell me, please, which way I ought to take ACL in the future?"

"That depends a good deal on where you think it should go," I replied.

"I don't much care where --" said Jane.

"Then it doesn't matter which way you take it," I said after prolonged reflection.

"-- so long as I take it somewhere," Jane added as an explanation.

"Oh, you are sure to do that," said I, "if you only parse long enough."

Jane felt that this could not be denied, so just to be friendly she decided to ask another question: "What sort of computational linguists live about here?"

"Well, in that direction lives Bonnie," I said, waving my right paw, "and in that direction," waving the other paw, "lives Barbara. Visit either you like: they're both mad."

"But I don't want to go among mad people," Jane remarked.

"Oh, you can't help that," said I, "we're all mad here. I'm mad. You're mad."

"How do you know I'm mad?" said Jane.

"You must be," said I, "or you wouldn't have come here."

Jane didn't think that proved it at all. However, she went on: "And how do you know that you're mad?"

"Well, to begin with," said I, "Don is not mad. You grant that?"

"I suppose so," said Jane.

"Well, then," I went on, "Don is not mad and I am not Don. Therefore, I am mad."

Jane didn't appear to be satisfied with this bit of catatonic logic (quite distinct from the monotonic logic).

"I must go for a walk now and continue reflecting," I said, as I left her, leaving my grin behind.
ACL IN 1977

Paul G. Chapin
National Science Foundation
1800 G St. NW
Washington, D.C. 20550

As I leaf through my own "ACL (Historical)" file (which, I am frightened to observe, goes back to the Fourth Annual Meeting, in 1966), and focus in particular on 1977, when I was President, it strikes me that pretty much everything significant that happened in the Association that year was the work of other people. Don Walker was completing the mammoth task of transferring all of the ACL's records from the East Coast to the West, paying off our indebtedness to the Center for Applied Linguistics, and in general getting the Association onto the firm financial and organizational footing which it has enjoyed to this day. Dave Hays was seeing to it that the microfiche journal kept on coming, and George Heidorn joined him as Associate Editor that year to begin the move toward hard copy publication.

That was the year when we hitched up our organizational pants and moved our Annual Meeting back to the Spring, after some years when it had been in the Fall. Jonathan Allen and his Program Committee coped admirably with the challenge of putting on an Annual Meeting program less than six months after the last one. The culinary staff at the Foundry Restaurant in Georgetown provided a banquet that I still remember as delicious.

AFIPS weighed in in a less constructive fashion with their demand that we enroll a membership of 1500 by a certain time (1982, I think) to retain our status as full-fledged members, which would require tripling our membership (maybe we'll make it yet -- who knows?). They were also responsible for one of the non-events of the decade, the abortive founding of abacus, which was to be the Scientific American of computing. We pledged $5,000 to the start-up costs on that, payable on request, but they never got far enough to make the request.

What of the field? The program for the 1977 Annual Meeting shows names which are mostly still familiar to all of us, speaking on topics which would not be out of place at the 1982 Annual Meeting. I take this as a sign not of stagnation, but of persevering people working on problems of enormous complexity.

One event of 1977 may end up having more impact on our field than anything the ACL did that year. That was the year that the Sloan Foundation made the first grants in its Particular Program in Cognitive Science. It will be a long time before we know all of the results of the ferment that Program created, but it is already abundantly clear that one result has been a massive increase in the interested attention of theoretical linguists to computational linguistics. This is going to be beneficial to both fields, but especially so, I think, to computational linguistics, by keeping our attention fixed on problems far larger than making the program work.
REFLECTIONS ON TWENTY YEARS OF THE ACL

Jonathan Allen
Research Laboratory of Electronics and
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139

I entered the field of computational linguistics in 1967, and one of my earliest recollections is of studying the Harvard Syntactic Analyzer. To this date, this parser is one of the best documented programs, and the extensive discussions cover a wide range of English syntax. It is sobering to recall that this analyzer was implemented on an IBM 7090 computer using 32K words of memory with tape as its mass storage medium. A great deal of attention was focused on means to deal with the main memory and mass storage limitations. It is also interesting to reflect back on the decision made in the Harvard Syntactic Analyzer to use a large number of parts of speech, presumably to aid the refinement of the analysis. Unfortunately, the introduction of such a large number of parts of speech (approximately 300) led to a large number of unanticipated ambiguous parsings, rather than cutting down on the number of legitimate parsings as had been hoped for. This analyzer functioned at a time when revelations about the amount of inherent ambiguity in English (and other natural languages) were a relatively new thing, and the Harvard Analyzer produced all possible parsings for a given sentence. At that time, some effort was focused on discovering a use for all these different parsings, and I can recall that one such application was the parsing of the Geneva Nuclear Convention. By displaying the large number of possible interpretations of the sentence, it was in fact possible to flush out possible misinterpretations of the document, and I believe that some editing was performed in order to remove these ambiguities.

In the late sixties, there was also a substantial effort to attempt parsing in terms of a transformational grammar. Stan Petrick's doctoral thesis dealt with this problem, using underlying logical forms very different from those described by Chomsky, and another effort at Mitre Corporation, led by Don Walker, also built a transformational parser. I think it is significant that this early effort at Mitre was one of the first examples where linguists were directly involved in computational applications.

It is interesting that in the development of syntax, from the perspective of both linguists and computational linguists, there has been a continuing need to develop formalisms that provided both insight and coverage. I think these two requirements can be seen both in transformational grammar and the ATN formalism. Thus, transformational grammar provided a simple, insightful base through the use of context-free grammar and then provided for the difficulties of the syntax by adding on to this base the use of transformations, gaining Turing machine power in the process. Similarly, ATNs provided the simple base of a finite state machine and added to it Turing machine power through the use of actions on the arcs. It seems to be necessary to provide some representational means that is relatively easy to think about as a base and then contemplate how these simpler base forms can be modified to provide for the range of actual facts of natural language. Moving to today's emphasis, we see increased interest in psychological reality.
An example of this work is the thesis of Mitch Marcus, which attempts to deal with constraints imposed by human performance, as well as constraints of a more universal nature recently characterized by linguists. This model has been extended further by Bob Berwick to serve as the basis for a learning model.

Another recent trend that causes me to smile a little is the resurgence of interest in context free grammars. I think back to Lyons' book on theoretical linguistics, where context free grammar is chastised, as was the custom, for its inability to insightfully characterize subject-verb agreement, discontinuous constituents, and other things thought inappropriate for context free grammars. The fact that a context free grammar can always characterize any finite segment of the language was not a popular notion in the early days. Now we find increasing concern with efficiency arguments and, due to the increasing emphasis on trying to find the simplest possible grammatical formalism to describe the facts of language, a vigorous effort to provide context free systems that provide a great deal of coverage. In the earlier days, the necessity of introducing additional non-terminals to deal with problems such as subject-verb agreement was seen as a definite disadvantage, but today such criticisms are hard to find.

An additional trend that is interesting to observe is the current emphasis on ill-formed sentences, which are now recognized as valid exemplars of the language and with which we must deal in a variety of computational applications. Thus, there has been attention focused on relaxation techniques and the ability to parse limited phrases within discourse structures that may be ill-formed.

In the early days of the ACL, I believe that computation was seen mainly as a tool used to represent algorithms and provide for their execution. Now there is a much different emphasis on computation. Computing is seen as a metaphor, and as an important means to model various linguistic phenomena, as well as more broadly cognitive phenomena. This is an important trend, and is due in part to the emphasis in cognitive science on representational issues. When we must deal with representations explicitly, then the branch of knowledge that provides the most help is computer science, and this fact is becoming much more widely appreciated, even by those workers who are not focused primarily on computing. This is a healthy trend, I believe, but we need also to be aware of the possibility of introducing biases and constraints on our thinking dictated by our current understanding and view of computation. Since our view of computation is in turn conditioned very substantially by the actual computing technology that is present at any given time, it is well to be very cautious in attributing basic understanding to these representations. A particular case in point is the emphasis, quite popular today, on parallelism. When we were used to thinking of computation solely in terms of single-sequence von Neumann machines, parallelism did not enjoy a prominent place in our models. Now that it is possible technologically to implement a great deal of parallelism, one can even discern more of a move to breadth-first rather than depth-first analyses. It seems clear that we are still very much the children of the technology that surrounds us.
I want to turn my attention now to a discussion of the development of speech processing technology, in particular text-to-speech conversion and speech recognition, during the last twenty years. Speech has been studied over many decades, but its secrets have been revealed at a very slow pace. Despite the substantial infusion of money into the study of speech recognition in the seventies, there still seems to be a natural gestation period for achieving new understanding of such complicated phenomena. Nevertheless, during these last twenty years, a great deal of useful speech processing capability has been achieved. Not only has there been much achievement, but these results have achieved great prominence through their coupling with modern technology. The outstanding example in speech synthesis technology has been, of course, the Texas Instruments Speak and Spell, which demonstrated for the first time that acceptable use of synthetic speech could be achieved for a very modest price. Currently, there are at least 20 different integrated circuits, either already fabricated or under development, for speech synthesis. So a huge change has taken place. It is possible today to produce highly intelligible synthetic speech from text, using a variety of techniques in computational linguistics, including morphological analysis, letter-to-sound rules, lexical stress, syntactic parsing, and prosodic analysis. While this speech can be highly intelligible, it is certainly not very natural yet. This reflects in part the fact that we have been able to determine sufficient correlates for the percepts that we want to convey, but that we have thus far been unable to characterize the redundant interaction of the large variety of correlates that lead to integrated percepts in natural speech. Even such simple distinctions as the voiced/unvoiced contrast are marked by more than a dozen different correlates. We simply don't know, even after all these years, how these different correlates are interrelated as a function of the local context. The current disposition would lead one to hope that this interaction is deterministic in nature, but I suppose there is still some segment of the research community that has no such hopes. When the redundant interplay of correlates is properly understood, I believe this will herald a new improvement in the understanding needed for high performance speech recognition systems. Nevertheless, it is important to emphasize that during these twenty years, commercially acceptable text-to-speech systems have become viable, as well as many other speech synthesis systems utilizing parametric storage or waveform coding techniques of some sort.

Speech recognition has undergone a lot of change during this period also. The systems that are available in the marketplace are still based exclusively on template matching techniques, which probably have little or nothing to do with the intrinsic nature of speech and language. That is to say, they use some form of informationally reduced representation of the input speech waveform and then contrive to match this representation against a set of stored templates. Various techniques have been introduced to improve the accuracy of this matching procedure by allowing for modifications of the input representation or the stored templates. For example, the use of dynamic programming to facilitate matching has been very popular, and for good reason, since its use has led to improvements in accuracy of between 20 and 30 percent.
Nevertheless, I believe that the use of dynamic programming will not remain over the long pull and that more phonetically and linguistically based techniques will have to be used. This prediction is predicated, of course, on the need for a huge amount of improved understanding of language in all of its various representations, and I feel that an incredibly large amount of new data must be acquired before we can hope to make substantial progress on these issues. Certainly an important contribution of computational linguistics is the provision of instrumental means to acquire data. In my view, the study of both speech synthesis and speech recognition has been hampered over the years in large part by the sheer lack of sufficient data on which to base models and theories. While we would still like to have more computational power than we have at present, we are able to provide highly capable interactive research environments for exploring new areas. The fact that there is none too much of these computational resources is supported by the fact that the speech recognition group at IBM is, I believe, the largest user of 370/168 time at Yorktown Heights.

An interesting aspect of the study of speech recognition is that there is still no agreement among researchers as to the best approach. Thus, we see techniques based on statistical decoding, those based on template matching using dynamic programming, and those that are much more phonetic and linguistic in nature. I believe that the notion, at one time prevalent during the seventies, that the speech waveform could often be ignored in favor of constraints supplied by syntax, semantics, or pragmatics is no longer held, and there is an increasing view that one should try to extract as much information as possible from the speech waveform. Indeed, word boundary effects and manifestations at the phonetic level of high level syntactic and semantic constraints are being discovered continually as research in speech production and perception continues. For all of our research into speech recognition, we are still a long way from approximating human speech perception capability. We really have no idea how human listeners are able to adapt to a large variety of speakers and a large variety of communication environments; we have no idea how humans manage to reject noise in the background; and we have very little understanding of the interplay of the various constraint domains that are active. Within the last five years, however, we have seen an increasing level of cooperation between linguists, psycholinguists, and computational linguists on these matters, and I believe that the depth of understanding in psycholinguistics is now at a level where it can be tentatively exploited by computational linguists for models of speech perception.

Over these twenty years, we have seen computational linguistics grow from a relatively esoteric academic discipline to a robust commercial enterprise. Certainly the need within industry for man-machine interaction is very strong, and many computer companies are hiring computational linguists to provide for natural language access to data bases, speech control of instruments, and audio announcements of all sorts. There is a need to get newly developed ideas into practice and, as a result of that experience, provide feedback to the models that computational linguists create. There is a tension, I believe, between, on the one hand, the need to be far-reaching in our research programs and, on the other,
the need for short-term payoff in industrial practice. It is important that workers in the field seek to influence those that control resources to maintain a healthy balance between these two influences. For example, the relatively new interest in studying discourse structure is a difficult but important area for long range research, and it deserves encouragement, despite the fact that there are large areas of ignorance and the need for extended fundamental research. One can hope, however, that the demonstrated achievement of computational linguistics over the last twenty years will provide a base upon which society will be willing to continue to support us to further explore the large unknowns in language competence and behavior.
ON THE PRESENT

Norman K. Sondheimer
Sperry Univac
Blue Bell, PA 19424 USA

The Association for Computational Linguistics is twenty years old. We have much to be proud of: a fine journal, significant annual meetings, and a strong presence in the professional community. Computational Linguistics, itself, has much to be proud of: influence in the research community, courses in universities, research support in government and industry, and attention in the popular press.

Not to spoil the fun, but the same was true twenty years ago, and the society and the field have had to go through some difficult times since then. To be sure, much has changed. The ACL has over 1200 members. Computational Linguistics has many new facets and potential applications. However, to an outsider, we still appear to be a field with potential rather than one with achievement.

Why is that? There are certainly many reasons. One is the attractiveness of our most abstract theories. They are widely presented and receive the most scholarly attention. The popular and technical press contributes by publicizing our wilder claims and broadest hopes. Similarly, the press oversells our current systems, leading more careful observers to wonder even about these. Finally, mechanizing the understanding of natural language is very difficult. We cannot hope to achieve many of our goals in the near future. Making do with the technology now available is very frustrating. All this contributes to us, the members of the field, gravitating toward theorizing and small laboratory studies. We are choosing to focus on the future rather than the present.

There is a real danger in this state of affairs. The build-up of public and institutional expectations without a corresponding emergence of useful systems will produce a counter-reaction. We have seen it before. To this day, machine translation research in the United States has not completely recovered. There is more need than ever; there is more technology than before; word processing and computer typesetting have changed the price equation; but it is still not considered wise to be associated with MT. We cannot let this sort of reversal happen to us again.

Fortunately, we need not. We do have substantial achievements. Over the years, we have produced or had influence on useful systems for information storage and retrieval, speech understanding and generation, and document processing. Natural language interfaces to databases are just now reaching the market. There are even limited but useful machine translation systems. There is more that we all know can and will be done in these areas.

This will not be easy. We must accept the compromises forced on us by our limited technology. We must accept the unglamorous work that needs to be done. We must be careful in the way we present our work. It will not be all bad. There appear to be some attractive financial returns. These are not to be ignored. In fact, it would probably do us all good if Computational Linguistics had a few millionaires to its credit.

We must congratulate ourselves on twenty years of life, but we must also work hard to carry off another twenty years. I am sure we will.
PLANNING NATURAL LANGUAGE REFERRING EXPRESSIONS

Douglas E. Appelt
SRI International
Menlo Park, California

ABSTRACT

This paper describes how a language-planning system can produce natural-language referring expressions that satisfy multiple goals. It describes a formal representation for reasoning about several agents' mutual knowledge using possible-worlds semantics and the general organization of a system that uses the formalism to reason about plans combining physical and linguistic actions at different levels of abstraction. It discusses the planning of concept activation actions that are realized by definite referring expressions in the planned utterances, and shows how it is possible to integrate physical actions for communicating intentions with linguistic actions, resulting in plans that include pointing as one of the communicative actions available to the speaker.

I. INTRODUCTION

One of the most important constituent processes of natural-language generation is the production of referring expressions, which occur in almost every utterance. Referring expressions often carry the burden of informing the hearer of propositions as well as referring to objects. Therefore, many phenomena that are observed in dialogues cannot be explained by the simple view that referring expressions are descriptions of the intended referent sufficient to distinguish the referent from other objects in the domain or in focus.

[Figure 1. Satisfying Multiple Goals with a Referring Expression]

The author gratefully acknowledges the support for this research provided in part by the Office of Naval Research under contract N0014-80-C-0296 and in part by the National Science Foundation under grant MCS-8115105.

Consider the situation (depicted in Figure 1) in which two agents, an apprentice and an expert, are cooperating on a common task, such as disassembling an air compressor. Several tools are lying on the workbench, and although the apprentice knows that the objects are there, he may not necessarily know what they are. The expert might say:

    Use the wheelpuller to remove the flywheel. (1)

while pointing at the wheelpuller. The apprentice may think to himself at this point, "Ah, ha, so that's a wheelpuller," and then proceed to remove the flywheel.

What the expert is accomplishing through the utterance of (1) by using the noun phrase "the wheelpuller" cannot be fully explained by treating definite referring expressions simply as descriptions that are uniquely true of some object, even taking focusing [7][11] into account. The expert uses "the wheelpuller" to refer to an object that in fact uniquely fits the description predicated of it, so this simple analysis is incapable of accounting for the effects the expert intends his utterance to have. If one takes the knowledge and intentions of the speaker and hearer into account, a more accurate account of the speaker's use of the referring expression can be developed. The apprentice does not know what the object is that fits the description "the wheelpuller". The expert knows that the apprentice doesn't know this, and performs the pointing action to guarantee that his intentions will be recognized correctly.

The apprentice must recognize what the expert is trying to communicate by pointing -- he must realize that pointing is not just a random gesture, but is intended by the speaker to be recognized as a communicative act by the hearer in much the same way as his utterances are recognized as communicative acts.
Furthermore, the apprentice must recognize how the pointing act is correlated with the utterance the expert is producing. Although there is no specific deictic reference in the expert's utterance, it is clear that he does not mean the flywheel, since we will assume that the apprentice can determine that the object he is pointing to is a tool. The apprentice realizes that the object the expert is pointing to is the intended referent of "the wheelpuller," but in the process, he also acquires the information that the expert believes the object he is pointing to is a wheelpuller, and that the expert has also informed him of that fact.

A language-planning system called KAMP (for Knowledge And Modalities Planner) has been developed that can plan utterances similar to example (1) above, coordinate the linguistic actions with physical actions, and know that the utterance it plans will have the intended multiple effects on the hearer. KAMP builds on Cohen and Perrault's idea of planning speech acts [4], but extends the planning activity down to the level of constructing surface English sentences. A detailed description of the entire KAMP system can be found in [2]. The system has been implemented and tested on examples in a cooperative equipment assembly domain, such as the one in example (1). This paper develops and extends some of the ideas of an early prototype system described in [1].

The reference problems that KAMP addresses are a subset of a more general problem, which, following Cohen [5], will be called 'identification.'* Whenever a speaker makes a definite reference, he intends the hearer to identify some object in the world as the referent. Identifying a referent requires that the agent perform some cognitive activity, such as the simple case of matching the description with what he knows, or in some cases planning to perform perceptual actions that lead to the identification. KAMP simplifies the problem by not considering perceptual actions, and assumes that there is some 'perceptual field' common to the participants in a dialogue, and that the objects that lie within that field are mutually known to the participants, along with the observable properties and relations that hold among them. For example, the speaker and hearer in (1) are assumed to mutually know the size, shape and location of all objects on the workbench. The agents may not know unobservable properties of the objects, such as the fact that a particular tool is a wheelpuller. Similarly, the participants are assumed to be mutually aware of physical actions that take place within their perceptual field, without explicitly performing any perceptual actions. When the expert points at the wheelpuller, the apprentice is simply assumed to know that he is doing it.

*What it means to identify an object is somewhat problematical. KAMP assumes that identification means that the referring description conjoined with focusing knowledge picks out the same individual in all possible worlds consistent with what the agent knows.

II. KNOWLEDGE REPRESENTATION

KAMP uses an intensional logic to describe facts about the world, including the knowledge of agents. The possible-worlds semantics of this intensional logic is axiomatized in first-order logic as described by Moore [8]. The axiomatization enables KAMP to reason about how the knowledge of both the speaker and the hearer changes as they perform actions. Moore's central idea is to axiomatize operators such as Know as relations between possible worlds.
For example, if W0 denotes the real world, then Know(John, P) means P is true in every possible world that is consistent with what John knows. This is stated formally in the axiom schema:

    ∀w1 T(w1, Know(A, P)) ≡ ∀w2 K(A, w1, w2) ⊃ T(w2, P)   (1)

The predicate T(w, P) means that P is true in possible world w. The predicate K(A, w1, w2) means that w2 is consistent with what A knows in w1.

Actions are described by treating possible worlds as state variables, and axiomatizing actions as relations between possible worlds. Thus, R(E, w1, w2) means that world w2 is the result of event E happening in world w1.

It is important that a language planning system reason about mutual knowledge while planning referring expressions [3][5]. Failure to consider the mutual knowledge of the speaker and hearer can lead to the failure of the reference. KAMP uses an axiomatization of mutual knowledge in terms of relations on possible worlds. An agent's knowledge is described as everything that is true in all possible worlds compatible with his knowledge. The mutual knowledge of two agents A and B is everything that is true in the union of the possible worlds compatible with A's knowledge and B's knowledge.* To state this fact formally, an individual called the kernel of A and B is defined such that the set of possible worlds compatible with the kernel's knowledge is the set of all worlds compatible with either A's knowledge or B's knowledge. This leads to the following definition of mutual knowledge:

    ∀w1 T(w1, MutuallyKnow(A, B, P)) ≡ ∀w2 K(Kernel(A, B), w1, w2) ⊃ T(w2, P)   (2)

In (2), T(w, P) means that the object language proposition P is true in possible world w, and K(a, w1, w2) is a predicate that describes the relation between possible worlds that means that w2 is a possible alternative to w1 according to a's knowledge. The second axiom needed is:

    ∀z, w1, w2 K(z, w1, w2) ⊃ ∀y K(Kernel(z, y), w1, w2)   (3)

Axiom (3) states that the possible worlds consistent with any agent's knowledge are a subset of the possible worlds consistent with the kernel of that agent and any other agent.

*Notice that the "intersection" of the propositions believed by two agents is represented by the union of possible worlds compatible with their knowledge.

III. THE KAMP PLANNING SYSTEM

KAMP is a multiple-agent planning system designed around a NOAH-like hierarchical planner [10]. KAMP uses two descriptions of each action available to the planning agent: a complete axiomatization of the action using the possible-worlds approach outlined above, and an action summary consisting of a simplified description of the action that serves as a heuristic to aid in proposing plans that are likely to succeed. KAMP forms a plan using the simplified action summaries first, and then verifies the plan using the full axiomatization. Since the possible-worlds axioms lend themselves more efficiently to proving a plan correct than to generating a plan in the first place, such an approach results in a system that is considerably more efficient than one relying on the possible-worlds axioms alone. Because action summaries represent actions in a simplified form, the planner can ignore details of the effects of communicative acts to produce a plan that is likely to work in most circumstances.
For example, if a simplified description of the effects of informing states that the hearer knows the proposition, then the planner can reason that a plan to achieve the goal of the hearer knowing P is likely to include the action of informing him that P is true. In the relatively unlikely event that this description is inadequate, this fact will be detected during the verification phase, where the more complete description is invoked.

The flow of control during KAMP's heuristic plan-generation phase is similar to that of NOAH's. If a goal needs to be satisfied, KAMP searches for actions that can achieve the goal and inserts them into the plan, along with the preconditions, which become new goals to be satisfied. When the entire plan has been expanded to one level of abstraction, then if there is a lower level, all high-level actions that have low-level expansions are expanded.

Between each stage of expansion, critics are invoked that examine the plan for global interactions between actions, and make changes in the structure of the plan to avoid the bad effects of the interactions and take advantage of the beneficial ones. Critics play an important role in the planning of referring expressions, and their functions are described more fully in Section IV.

[Figure 2. A Hierarchy of Actions Related to Language: illocutionary acts, surface speech acts, concept activations, and utterance acts]

KAMP's hierarchy of linguistic actions is illustrated in Figure 2. The hierarchy consists of illocutionary acts, surface speech-acts, concept-activation actions, and utterance acts. Illocutionary acts are speech acts such as informing and requesting, which are planned at the highest level without regard for any specific linguistic realization. The next level consists of surface speech-acts, which are abstractions of the actions of uttering particular sentences with particular syntactic structures. At this level the planner starts making commitments to particular choices in syntactic structure, and linguistic knowledge enters the planning process. One surface speech-act can realize one or more illocutionary acts. The next level consists of concept-activation actions, which entail the planning of descriptions that are mutually believed by the speaker and hearer to refer to objects in the world. This is the level of abstraction at which noun phrases for definite reference are planned. Finally, at the lowest level of abstraction are utterance acts, consisting of the utterance of specific words.

IV. PLANNING CONCEPT-ACTIVATION ACTIONS

Concept-activation actions describe referring at a high enough level of abstraction so that they are not constrained to have purely linguistic realizations. When a concept-activation action is expanded to a lower level of abstraction, it can result in the planning of a noun phrase within the surface speech-act of which the concept activation is a part, and physical actions such as pointing that also communicate the speaker's intention to refer.

KAMP can plan referential definite noun phrases that realize concept-activation actions. (The planning of attributive and indefinite referring expressions has not yet been addressed.) KAMP recognizes the need to plan a concept activation when it is expanding a surface speech-act. The surface speech-act is planned with a particular proposition that the hearer has to come to believe the speaker wants him to know or want.
It is necessary to include whatever information the hearer needs to recognize what the proposition is, and this leads to the necessity of referring to the particular objects mentioned in the proposition. The planner often reasons that some objects do not need to be referred to at all. For example, in requesting a hearer to remove the pump from the platform in an air-compressor assembly task, if the hearer knows that the pump is attached to the platform and nothing else, it is not necessary to mention the platform, since it is sufficient to say "Remove the pump," for the hearer to recognize the following proposition:

    Want(S, Do(H, Remove(pump1, platform1)))

The planning of a concept-activation action is similar to the planning of an illocutionary act in that the speaker is trying to get the hearer to recognize his intention to perform the act. This means that all that is necessary from a high-level planning point of view is that the speaker perform some action that signals to the hearer that the speaker wants to refer to the object.* This is often done by incorporating a mutually believed description of the object into the utterance, but there is no requirement that the means by which the speaker communicates this intention be linguistic. For example, the speaker could point at an object (almost always a communicative act), or perhaps throw it at the hearer (not so clearly communicative, but definitely attention-getting. The hearer has to reason whether there are any communicative intentions behind the act.)

*For a description of KAMP's formalization of wanting, see Appelt [2].

Since concept-activation actions are planned during the expansion of surface speech-acts, the actions that realize them must somehow become part of the utterance being planned. Therefore, all concept-activation actions are expanded with two components: an intention-communication component and a surface-linguistic component. The intention-communication component is an abstraction of the speaker's plan to communicate his intention to refer, and may be realized by a plan that includes physical and linguistic actions. The surface-linguistic component consists of the realization (in some linguistic expression) of the intention-communication component as part of the surface speech-act being planned, which means that the realization must be grammatically consistent with the sentence.

The following two axiom schemata describe concept activation in KAMP's possible worlds representation:

    ∀w1, w2 R(Do(A, Cact(B, C)), w1, w2) ⊃
        T(w1, Want(A, Active(A, B, C))) ∧ T(w2, Active(A, B, C))   (4)

    ∀w1, w2 R(Do(A, Cact(B, C)), w1, w2) ⊃
        ∀w3 K(Kernel(A, B), w2, w3) ⊃
            ∃w4 R(Do(A, Cact(B, C)), w4, w3) ∧ K(Kernel(A, B), w1, w4)   (5)

Axiom schema (4) says that when an agent A performs a concept activation for an agent B, he must first want the object C to be active, and as a result of performing it, C becomes active with respect to A and B. Axiom schema (5) says that after agent A performs the action, the two agents A and B mutually know that the action has been performed.

The consequence for the planner of axiomatizing concept activation as in (4) and (5) is that the problem of activating a concept now becomes one of getting the hearer to know that the speaker wants a particular concept to be active. This is the role of the intention-communication component in the expansion of the concept activation. KAMP knows about two types of actions that produce knowledge about what concepts a speaker wants to be active.
One is an action called describe, which is ultimately expanded into a linguistic description corresponding to the concept the speaker intends to activate, and the other is called point, which is a generalized pointing action. The point action is assumed to directly communicate the intention to activate a concept, thereby avoiding the problem of observing a gesture and deciding whether it is a pointing, or an attempt to scratch an itch. The following schema defines the describe action:

    ∀w1, w2 R(Do(A, Describe(B, P)), w1, w2) ⊃
        [∃x (D*(x) ∧ ∀y (D*(y) ⊃ x = y)) ≡ T(w1, Want(A, Active(A, B, x)))]   (6)

Axiom (6) says that the precondition for an agent to perform an action of describing using a particular description P is that the speaker wants an object to be active if and only if it uniquely fits the description predicated of it. In (6), the symbol P denotes a description consisting of object language predicates that can be applied to the object being described. It could be defined as

    P = λx.(D1(x) ∧ ... ∧ Dn(x))

where the Di(x) are the individual descriptors that comprise the description. The symbol D* denotes a similar expression, which includes all the descriptors of P conjoined with a set of predicates that describe the focus of the discourse.* An axiom similar to (5) is also needed to assert that the speaker and hearer will mutually know, after the action is performed, that it has taken place. Therefore, if the speaker and hearer mutually know of an object that satisfies P in focus, then they mutually know that the speaker wants it to be active.

*A complete discussion of focusing in KAMP is beyond the scope of this paper. KAMP uses an axiomatization of Sidner's focusing rules [11] to keep track of focus shifts.

The pointing action is much simpler, because it does not require either the speaker or the hearer to know anything at all about the object:

    ∀w1, w2 R(Do(A, Point(B, X)), w1, w2) ⊃ T(w1, Want(A, Active(A, B, X)))   (7)

According to the above axiom, if an agent points at an object, that implies that he wants the object to be active. As usual, an axiom similar to (5) is required to assert that the agents mutually know the action has been performed.

Axioms (4) and (5) work together with (6) and (7) to produce the desired effects. When a speaker utters a description, or points, he communicates his intention to refer. When he performs the concept-activation action by incorporating the surface-linguistic component of his action into a surface speech-act, his intentions are carried out. Because the equivalence of axiom (6) can be used in both directions, if the speaker wants an object to be active, then one can reason that he knows the description predicated of it is true.

A major problem facing the planner is deciding when the necessary conditions obtain to be able to take advantage of the interactions between (6) and (7). Since this task involves examining several actions in the plan, it is performed by a critic called the action-subsumption critic. This critic notices when the speaker is informing the hearer of a predication that could be included in the description associated with a concept activation. When such an interaction is noticed, the critic proposes a modification to the plan. If the surface-linguistic component does not insist that the modification is impossible given the grammar, then the action subsumption is carried out.
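To see how the describe/point choice and action subsumption might mesh, here is a small Python sketch. Everything in it is invented for illustration -- the property tables, the uniquely_identifies test, and the function names are stand-ins, not KAMP's representation, which derives such judgments from possible-worlds axioms like (4)-(7).

# Properties of each object as mutually known to speaker and hearer,
# and as known privately to the speaker (cf. the workbench example).
MUTUAL = {"tool1": {"tool"}, "tool2": {"tool", "hammer"}}
SPEAKER = {"tool1": {"tool", "wheelpuller"}, "tool2": {"tool", "hammer"}}

def uniquely_identifies(desc, facts):
    # True if exactly one object satisfies every predicate in desc.
    return sum(1 for props in facts.values() if desc <= props) == 1

def plan_concept_activation(referent, inform_goals):
    # Action-subsumption critic: fold a pending informing goal about
    # the referent into the referring description (e.g. 'wheelpuller').
    extra = inform_goals.get(referent, set())
    desc = MUTUAL[referent] | extra
    if uniquely_identifies(MUTUAL[referent], MUTUAL):
        # A mutually known description uniquely fits: describe, as in (6).
        return ("describe", desc)
    if uniquely_identifies(desc, SPEAKER):
        # The description informs but cannot identify by itself, so
        # pair it with pointing, as in (7), to secure the reference.
        return ("point and describe", desc)
    return ("point", extra)

print(plan_concept_activation("tool1", {"tool1": {"wheelpuller"}}))
# Roughly: ('point and describe', {'tool', 'wheelpuller'}) -- say
# "the wheelpuller" while pointing, as the expert does in example (1).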
In example (1), for instance, the expert has a high-level plan that includes the performance of two illocutionary acts: requesting that the apprentice remove the flywheel using a particular tool (call it tool1), and informing the apprentice that tool1 is a wheelpuller. The action-subsumption critic notices that in the request the expert is referring to tool1 and also wants to inform the hearer of a property of tool1. Therefore, it proposes combining the property of being a wheelpuller into the description used for referring to tool1 while making the request.

V. CONCLUSION

This paper has described a formalism for describing the action of referring in a manner that is useful for a generation system based on planning, like KAMP. The central idea is to divide referring into two tasks: an intention-communication task and a surface-linguistic task. By so doing, it is possible to axiomatize different actions that communicate a speaker's intention to refer. Thus, the planner is able to produce plans that produce natural-language referring expressions, but take the larger context of the speaker's nonlinguistic actions into account as well.

KAMP currently plans only simple definite reference. One promising extension of this approach for future research is to extend the active predicate to apply to intensional concepts in addition to the extensional ones now required for definite reference. We hope this will allow for the planning of attributive and indefinite reference as well. KAMP currently does not plan quantified noun phrases, nor can it refer generically, nor can it refer to collections of entities. Much basic research needs to be done to extend KAMP to handle these other cases, but we hope that the formalism outlined here will provide a good base from which to investigate these extensions.

VI. ACKNOWLEDGEMENTS

The author is grateful to Barbara Grosz, Bob Moore and Nils Nilsson for comments on earlier drafts of this paper.

VII. REFERENCES

[1] Appelt, Douglas E., "Problem Solving Applied to Language Generation," Proceedings of the 18th Annual Meeting of the ACL, 1980.
THE TEXT SYSTEM FOR NATURAL LANGUAGE GENERATION: AN OVERVIEW*

Kathleen R. McKeown
Dept. of Computer & Information Science
The Moore School
University of Pennsylvania
Philadelphia, Pa. 19104

ABSTRACT

Computer-based generation of natural language requires consideration of two different types of problems: 1) determining the content and textual shape of what is to be said, and 2) transforming that message into English. A computational solution to the problems of deciding what to say and how to organize it effectively is proposed that relies on an interaction between structural and semantic processes. Schemas, which encode aspects of discourse structure, are used to guide the generation process. A focusing mechanism monitors the use of the schemas, providing constraints on what can be said at any point. These mechanisms have been implemented as part of a generation method within the context of a natural language database system, addressing the specific problem of responding to questions about database structure.

1.0 INTRODUCTION

Deciding what to say and how to organize it effectively are two issues of particular importance to the generation of natural language text. In the past, researchers have concentrated on local issues concerning the syntactic and lexical choices involved in transforming a pre-determined message into natural language. The research described here emphasizes a computational solution to the more global problems of determining the content and textual shape of what is to be said. More specifically, my goals have been the development and application of principles of discourse structure, discourse coherency, and relevancy criteria to the computer generation of text. These principles have been realized in the TEXT system, reported on in this paper.

The main features of the generation method used in TEXT include 1) an ability to select relevant information, 2) a system for pairing rhetorical techniques (such as analogy) with discourse purposes (such as defining terms) and 3) a focusing mechanism. Rhetorical techniques, which encode aspects of discourse structure, guide the selection of information for inclusion in the text from a relevant knowledge pool - a subset of the knowledge base which contains information relevant to the discourse purpose. The focusing mechanism helps maintain discourse coherency. It aids in the organization of the message by constraining the selection of information to be talked about next to that which ties in with the previous discourse in an appropriate way. These processes are described in more detail after setting out the framework of the system.

*This work was partially supported by National Science Foundation grant #MCS81-07290.

2.0 APPLICATION

In order to test generation principles, the TEXT system was developed as part of a natural language interface to a database system, addressing the specific problem of generating answers to questions about database structure. Three classes of questions have been considered: questions about information available in the database, requests for definitions, and questions about the differences between database entities [MCKEOWN 80]. In this context, input questions provide the initial motivation for speaking. Although the specific application of answering questions about database structure was used primarily for testing principles about text generation, it is a feature that many users of such systems would like.
Several experiments ([MALHOTRA 75], [TENNANT 79]) have shown that users often ask questions to familiarize themselves with the database structure before proceeding to make requests about the database contents. The three classes of questions considered for this system were among those shown to be needed in a natural language database system.

Implementation of the TEXT system for natural language generation used a portion of the Office of Naval Research (ONR) database containing information about vehicles and destructive devices. Some examples of questions that can be asked of the system include:

> What is a frigate?
> What do you know about submarines?
> What is the difference between a whisky and a kitty hawk?

The kind of generation of which the system is capable is illustrated by the response it generates to question (A) below.

A) What kind of data do you have?

All entities in the ONR database have DB attributes REMARKS. There are 2 types of entities in the ONR database: destructive devices and vehicles. The vehicle has DB attributes that provide information on SPEED-INDICES and TRAVEL-MEANS. The destructive device has DB attributes that provide information on LETHAL-INDICES.

TEXT does not itself contain a facility for interpreting a user's questions. Questions must be phrased using a simple functional notation (shown below) which corresponds to the types of questions that can be asked. It is assumed that a component could be built to perform this type of task and that the decisions it must make would not affect the performance of the generation system.

1. (definition <e>)
2. (information <e>)
3. (difference <e1> <e2>)

where <e>, <e1>, <e2> represent entities in the database.

3.0 SYSTEM OVERVIEW

In answering a question about database structure, TEXT identifies those rhetorical techniques that could be used for presenting an appropriate answer. On the basis of the input question, semantic processes produce a relevant knowledge pool. A characterization of the information in this pool is then used to select a single partially ordered set of rhetorical techniques from the various possibilities. A formal representation of the answer (called a "message") is constructed by selecting propositions from the relevant knowledge pool which match the rhetorical techniques in the given set. The focusing mechanism monitors the matching process; where there are choices for what to say next (i.e. - either alternative techniques are possible or a single technique matches several propositions in the knowledge pool), the focusing mechanism selects that proposition which ties in most closely with the previous discourse. Once the message has been constructed, the system passes the message to a tactical component [BOSSIE 82] which uses a functional grammar [KAY 79] to translate the message into English.

4.0 KNOWLEDGE BASE

Answering questions about the structure of the database requires access to a high-level description of the classes of objects in the database, their properties, and the relationships between them. The knowledge base used for the TEXT system is a standard database model which draws primarily from representations developed by Chen [CHEN 76], the Smiths [SMITH and SMITH 77], Schubert [SCHUBERT et al. 79], and Lee and Gerritsen [LEE and GERRITSEN 78]. The main features of TEXT's knowledge base are entities, relations, attributes, a generalization hierarchy, a topic hierarchy, distinguishing descriptive attributes, supporting database attributes, and based database attributes.
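As a rough illustration of how these features fit together, the following sketch collects them per node; the field names and example values are assumptions for exposition, not TEXT's actual data structures.

  from dataclasses import dataclass, field
  from typing import Optional

  # Hypothetical sketch of a node in the generalization hierarchy.
  @dataclass
  class EntityNode:
      name: str
      superordinate: Optional[str] = None               # parent in the generalization hierarchy
      sub_types: list = field(default_factory=list)     # children (sub-classes)
      ddas: dict = field(default_factory=dict)          # distinguishing descriptive attributes
      supporting_dbs: list = field(default_factory=list)  # attributes behind a split above entity classes
      based_db: Optional[str] = None                    # attribute a sub-type split is based on
      db_attributes: list = field(default_factory=list)   # actual database attributes

  ship = EntityNode(
      name="SHIP",
      superordinate="WATER-GOING VEHICLE",
      ddas={"TRAVEL-MEDIUM": "SURFACE"},
      supporting_dbs=["DRAFT", "DISPLACEMENT"],
      db_attributes=["MAXIMUM_SPEED", "PROPULSION", "DIMENSIONS"],
  )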
Entities, relations, and attributes are based on the Chen entity-relationship model. A generalization hierarchy on entities [SMITH and SMITH 77], [LEE and GERRITSEN 78], and a topic hierarchy on attributes [SCHUBERT et al. 79] are also used. In the topic hierarchy, attributes such as MAXIMUM_SPEED, MINIMUM_SPEED, and ECONOMIC_SPEED are generalized as SPEED_INDICES. In the generalization hierarchy, entities such as SHIP and SUBMARINE are generalized as WATER-GOING VEHICLE. The generalization hierarchy includes both generalizations of entities for which physical records exist in the database (database entity classes) and sub-types of these entities. The sub-types were generated automatically by a system developed by McCoy [MCCOY 82].

An additional feature of the knowledge base represents the basis for each split in the hierarchy [LEE and GERRITSEN 78]. For generalizations of the database entity classes, partitions are made on the basis of different attributes possessed, termed supporting db attributes. For sub-types of the database entity classes, partitions are made on the basis of different values possessed for given, shared attributes, termed based db attributes. Additional descriptive information that distinguishes sub-classes of an entity is captured in distinguishing descriptive attributes (DDAs). For generalizations of the database entity classes, such DDAs capture real-world characteristics of the entities. Figure 1 shows the DDAs and supporting db attributes for two generalizations. (See [MCCOY 82] for discussion of information associated with sub-types of database entity classes).

FIGURE 1. DDAs and supporting db attributes: WATER-VEHICLE splits on TRAVEL-MEDIUM into UNDERWATER (DDA; supporting db attributes DEPTH, MAXIMUM_SUBMERGED_SPEED) and SURFACE (DDA; supporting db attributes DRAFT, DISPLACEMENT).

5.0 SELECTING RELEVANT INFORMATION

The first step in answering a question is to circumscribe a subset of the knowledge base containing that information which is relevant to the question. This then provides limits on what information need be considered when deciding what to say. All information that might be relevant to the answer is included in the partition, but all information in the partition need not be included in the answer. The partitioned subset is called the relevant knowledge pool. It is similar to what Grosz has called "global focus" [GROSZ 77] since its contents are focused throughout the course of an answer.

The relevant knowledge pool is constructed by a fairly simple process. For requests for definitions or available information, the area around the questioned object containing the information immediately associated with the entity (e.g. its superordinates, sub-types, and attributes) is circumscribed and partitioned from the remaining knowledge base. For questions about the difference between entities, the information included in the relevant knowledge pool depends on how close in the generalization hierarchy the two entities are. For entities that are very similar, detailed attributive information is included. For entities that are very different, only generic class information is included. A combination of this information is included for entities falling between these two extremes. (See [MCKEOWN 82] for further details).

6.0 RHETORICAL PREDICATES

Rhetorical predicates are the means which a speaker has for describing information. They characterize the different types of predicating acts s/he may use and delineate the structural relation between propositions in a text.
Some examples are "analogy" (comparison with a familiar object), "constituency" (description of sub-parts or sub-types), and "attributive" (associating properties with an entity or event). Linguistic discussion of such predicates (e.g. [GRIMES 75], [SHEPHERD 26]) indicates that some combinations are preferable to others. Moreover, Grimes claims that predicates are recursive and can be used to identify the organization of text on any level (i.e. - proposition, sentence, paragraph, or longer sequence of text), although he does not show how.

I have examined texts and transcripts and have found that not only are certain combinations of rhetorical techniques more likely than others, certain ones are more appropriate in some discourse situations than others. For example, I found that objects were frequently defined by employing some combination of the following means: (1) identifying an item as a member of some generic class, (2) describing an object's function, attributes, and constituency (either physical or class), (3) making analogies to familiar objects, and (4) providing examples. These techniques were rarely used in random order; for instance, it was common to identify an item as a member of some generic class before providing examples. In the TEXT system, these types of standard patterns of discourse structure have been captured in schemas associated with explicit discourse purposes. The schemas loosely identify normal patterns of usage. They are not intended to serve as grammars of text. The schema shown below serves the purpose of providing definitions:

Identification Schema
  identification (class&attribute/function)
  [analogy/constituency/attributive]*
  [particular-illustration/evidence]+
  {amplification/analogy/attributive}
  {particular-illustration/evidence}

Here, "{ }" indicates optionality, "/" indicates alternatives, "+" indicates that the item may appear 1 to n times, and "*" indicates that the item may appear 0 to n times. The order of the predicates indicates that the normal pattern of definitions is an identifying proposition followed by any number of descriptive predicates. The speaker then provides one or more examples and can optionally close with some additional descriptive information and possibly another example.

TEXT's response to the question "What is a ship?" (shown below) was generated using the identification schema. The sentences are numbered to show the correspondence between each sentence and the predicate it corresponds to in the instantiated schema (the numbers do not occur in the actual output).

(definition SHIP)
Schema selected: identification
1) identification
2) evidence
3) attributive
4) particular-illustration

1) A ship is a water-going vehicle that travels on the surface. 2) Its surface-going capabilities are provided by the DB attributes DISPLACEMENT and DRAFT. 3) Other DB attributes of the ship include MAXIMUM_SPEED, PROPULSION, FUEL (FUEL_CAPACITY and FUEL_TYPE), DIMENSIONS, SPEED_DEPENDENT_RANGE and OFFICIAL_NAME. 4) The DOWNES, for example, has MAXIMUM_SPEED of 29, PROPULSION of STMTURGRD, FUEL of 810 (FUEL_CAPACITY) and BNKR (FUEL_TYPE), DIMENSIONS of 25 (DRAFT), 46 (BEAM), and 438 (LENGTH) and SPEED_DEPENDENT_RANGE of 4200 (ECONOMIC_RANGE) and 2200 (ENDURANCE_RANGE).

Another strategy commonly used in the expository texts examined was to describe an entity or event in terms of its sub-parts or sub-classes.
This strategy involves: 1) presenting identificational or attributive information about the entity or event, 2) presenting its sub-parts or sub-classes, 3) discussing attributive or identificational information with optional evidence about each of the sub-classes in turn, and 4) optionally returning to the original entity with additional attributive or analogical information. The constituency schema, shown below, encodes the techniques used in this strategy.

The Constituency Schema
  attributive/identification (entity)
  constituency (entity)
  { attributive/identification (sub-class1, sub-class2, ...)
    {evidence (sub-class1, sub-class2, ...)} }+
  {attributive/analogy (entity)}

TEXT's response to the question "What do you know about vehicles?" was generated using the constituency schema. It is shown below along with the predicates that were instantiated for the answer.

(information VEHICLE)
Schema selected: constituency
1) attributive
2) constituency
3) attributive
4) attributive
5) attributive

1) The vehicle has DB attributes that provide information on SPEED_INDICES and TRAVEL_MEANS. 2) There are 2 types of vehicles in the ONR database: aircraft and water-going vehicles. 3) The water-going vehicle has DB attributes that provide information on TRAVEL_MEANS and WATER_GOING_OPERATION. 4) The aircraft has DB attributes that provide information on TRAVEL_MEANS, FLIGHT_RADIUS, CEILING and ROLE. 5) Other DB attributes of the vehicle include FUEL (FUEL_CAPACITY and FUEL_TYPE) and FLAG.

Two other strategies were identified in the texts examined. These are encoded in the attributive schema, which is used to provide detailed information about a particular aspect of an entity, and the compare and contrast schema, which encodes a strategy for contrasting two entities using a description of their similarities and their differences. For more detail on these strategies, see [MCKEOWN 82].

7.0 USE OF THE SCHEMAS

As noted earlier, an examination of texts revealed that different strategies were used in different situations. In TEXT, this association of technique with discourse purpose is achieved by associating the different schemas with different question-types. For example, if the question involves defining a term, a different set of schemas (and therefore rhetorical techniques) is chosen than if the question involves describing the type of information available in the database. The identification schema can be used in response to a request for a definition. The purpose of the attributive schema is to provide detailed information about one particular aspect of any concept and it can therefore be used in response to a request for information. In situations where an object or concept can be described in terms of its sub-parts or sub-classes, the constituency schema is used. It may be selected in response to requests for either definitions or information. The compare and contrast schema is used in response to a question about the difference between objects. A summary of the assignment of schemas to question-types is shown in Figure 2.

Schemas used for TEXT
1. identification - requests for definitions
2. attributive - requests for available information
3. constituency - requests for definitions; requests for available information
4. compare and contrast - requests about the differences between objects
FIGURE 2

Once a question has been posed to TEXT, a schema must be selected for the response structure which will then be used to control the decisions involved in deciding what to say when.
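The Figure 2 assignment amounts to a small lookup table. A minimal sketch of it as data follows; the schema and question-type names come from the text, while the function itself is a hypothetical illustration.

  # Sketch of the Figure 2 assignment of schemas to question-types.
  SCHEMAS_BY_QUESTION_TYPE = {
      "definition":  ["identification", "constituency"],
      "information": ["attributive", "constituency"],
      "difference":  ["compare-and-contrast"],
  }

  def candidate_schemas(question_type):
      """Return the set of possible response structures for a question."""
      return SCHEMAS_BY_QUESTION_TYPE[question_type]

  print(candidate_schemas("definition"))   # ['identification', 'constituency']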
On the basis of the given question, a set of schemas is selected as possible structures for the response. This set includes those schemas associated with the given question-type (see Figure 2 above). A single schema is selected out of this set on the basis of the information available to answer the question. For example, in response to requests for definitions, the constituency schema is selected when the relevant knowledge pool contains a rich description of the questioned object's sub-classes and less information about the object itself. When this is not the case, the identification schema is used.

The test for what kind of information is available is a relatively simple one. If the questioned object occurs at a higher level in the hierarchy than a pre-determined level, the constituency schema is used. Note that the higher an entity occurs in the hierarchy, the less descriptive information is available about the entity itself. More information is available about its sub-parts since fewer common features are associated with entities higher in the hierarchy. This type of semantic and structural interaction means that a different schema may be used for answering the same type of question. An earlier example showed that the identification schema was selected by the TEXT system in response to a request for a definition of a ship. In response to a request for a definition of a guided projectile (shown below), the constituency schema is selected since more information is available about the sub-classes of the guided projectile than about the guided projectile itself.

(definition GUIDED)
Schema selected: constituency
1) identification
2) constituency
3) identification
4) identification
5) evidence
6) evidence
7) attributive

1) A guided projectile is a projectile that is self-propelled. 2) There are 2 types of guided projectiles in the ONR database: torpedoes and missiles. 3) The missile has a target location in the air or on the earth's surface. 4) The torpedo has an underwater target location. 5) The missile's target location is indicated by the DB attribute DESCRIPTION and the missile's flight capabilities are provided by the DB attribute ALTITUDE. 6) The torpedo's underwater capabilities are provided by the DB attributes under DEPTH (for example, MAXIMUM_OPERATING_DEPTH). 7) The guided projectile has DB attributes TIME_TO_TARGET_&_UNITS, HORZ_RANGE_&_UNITS and NAME.

Once a schema has been selected, it is filled by matching the predicates it contains against the relevant knowledge pool. The semantics of each predicate define the type of information it can match in the knowledge pool. The semantics defined for TEXT are particular to the database query domain and would have to be redefined if the schemas were to be used in another type of system (such as a tutorial system, for example). The semantics are not particular, however, to the domain of the database. When transferring the system from one database to another, the predicate semantics would not have to be altered. A proposition is an instantiated predicate; predicate arguments have been filled with values from the knowledge base. An instantiation of the identification predicate is shown below along with its eventual translation.

Instantiated predicate:
  (identification (OCEAN-ESCORT CRUISER) SHIP
    (non-restrictive TRAVEL-MODE SURFACE))
Eventual translation:
  The ocean escort and the cruiser are surface ships.
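A minimal sketch of the hierarchy-level test described above follows; the cutoff value, depth convention, and function name are assumptions for illustration, not the system's actual code.

  # Hypothetical sketch of schema selection for a definition request: an entity
  # high in the generalization hierarchy (above a pre-determined level) carries
  # richer sub-class descriptions than self-description, so constituency is used.
  PREDETERMINED_LEVEL = 2   # illustrative cutoff, counted from the root

  def select_definition_schema(depth_in_hierarchy):
      if depth_in_hierarchy <= PREDETERMINED_LEVEL:
          return "constituency"    # rich sub-class information available
      return "identification"      # rich information about the entity itself

  print(select_definition_schema(1))   # constituency (e.g. a guided projectile)
  print(select_definition_schema(4))   # identification (e.g. a ship)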
The schema is filled by stepping through it, using the predicate semantics to select information which matches the predicate arguments. In places where alternative predicates occur in the schema, all alternatives are matched against the relevant knowledge pool producing a set of propositions. The focus constraints are used to select the most appropriate proposition.

The schemas were implemented using a formalism similar to an augmented transition network (ATN). Taking an arc corresponds to the selection of a proposition for the answer. States correspond to filled stages of the schema. The main difference between the TEXT system implementation and a usual ATN, however, is in the control of alternatives. Instead of uncontrolled backtracking, TEXT uses one state lookahead. From a given state, it explores all possible next states and chooses among them using a function that encodes the focus constraints. This use of one state lookahead increases the efficiency of the strategic component since it eliminates unbounded non-determinism.

8.0 FOCUSING MECHANISM

So far, a speaker has been shown to be limited in many ways. For example, s/he is limited by the goal s/he is trying to achieve in the current speech act. TEXT's goal is to answer the user's current question. To achieve that goal, the speaker has limited his/her scope of attention to a set of objects relevant to this goal, as represented by global focus or the relevant knowledge pool. The speaker is also limited by his/her higher-level plan of how to achieve the goal. In TEXT, this plan is the chosen schema. Within these constraints, however, a speaker may still run into the problem of deciding what to say next. A focusing mechanism is used to provide further constraints on what can be said. The focus constraints used in TEXT are immediate, since they use the most recent proposition (corresponding to a sentence in the English answer) to constrain the next utterance. Thus, as the text is constructed, it is used to constrain what can be said next.

Sidner [SIDNER 79] used three pieces of information for tracking immediate focus: the immediate focus of a sentence (represented by the current focus - CF), the elements of a sentence which are potential candidates for a change in focus (represented by a potential focus list - PFL), and past immediate foci (represented by a focus stack). She showed that a speaker has the following options from one sentence to the next: 1) to continue focusing on the same thing, 2) to focus on one of the items introduced in the last sentence, 3) to return to a previous topic in which case the focus stack is popped, or 4) to focus on an item implicitly related to any of these three options.

Sidner's work on focusing concerned the interpretation of anaphora. She says nothing about which of these four options is preferred over others since in interpretation the choice has already been made. For generation, however, a speaker may have to choose between these options at any point, given all that s/he wants to say. The speaker may be faced with the following choices: 1) continuing to talk about the same thing (current-focus equals current-focus of the previous sentence) or starting to talk about something introduced in the last sentence (current-focus is a member of potential-focus-list of the previous sentence) and 2) continuing to talk about the same thing (current focus remains the same) or returning to a topic of previous discussion (current focus is a member of the focus-stack).
When faced with the choice of remaining on the same topic or switching to one just introduced, I claim a speaker's preference is to switch. If the speaker has something to say about an item just introduced and does not present it next, s/he must go to the trouble of re-introducing it later on. If s/he does present information about the new item first, however, s/he can easily continue where s/he left off by following Sidner's legal option #3. Thus, for reasons of efficiency, the speaker should shift focus to talk about an item just introduced when s/he has something to say about it.

When faced with the choice of continuing to talk about the same thing or returning to a previous topic of conversation, I claim a speaker's preference is to remain on the same topic. Having at some point shifted focus to the current focus, the speaker has opened a topic for conversation. By shifting back to the earlier focus, the speaker closes this new topic, implying that s/he has nothing more to say about it when in fact, s/he does. Therefore, the speaker should maintain the current focus when possible in order to avoid false implication of a finished topic.

These two guidelines for changing and maintaining focus during the process of generating language provide an ordering on the three basic legal focus moves that Sidner specifies:

1. change focus to member of previous potential focus list if possible - CF (new sentence) is a member of PFL (last sentence)
2. maintain focus if possible - CF (new sentence) = CF (last sentence)
3. return to topic of previous discussion - CF (new sentence) is a member of focus-stack

I have not investigated the problem of incorporating focus moves to items implicitly associated with either current foci, potential focus list members, or previous foci into this scheme. This remains a topic for future research.

Even these guidelines, however, do not appear to be enough to ensure a connected discourse. Although a speaker may decide to focus on a specific entity, s/he may want to convey information about several properties of that entity. S/he will describe related properties of the entity before describing other properties. Thus, strands of semantic connectivity will occur at more than one level of the discourse. An example of this phenomenon is given in dialogues (A) and (B) below. In both, the discourse is focusing on a single entity (the balloon), but in (A) properties that must be talked about are presented randomly. In (B), a related set of properties (color) is discussed before the next set (size). (B), as a result, is more connected than (A).

(A) The balloon was red and white striped. Because this balloon was designed to carry men, it had to be large. It had a silver circle at the top to reflect heat. In fact, it was larger than any balloon John had ever seen.

(B) The balloon was red and white striped. It had a silver circle at the top to reflect heat. Because this balloon was designed to carry men, it had to be large. In fact, it was larger than any balloon John had ever seen.

In the generation process, this phenomenon is accounted for by further constraining the choice of what to talk about next to the proposition with the greatest number of links to the potential focus list.

8.1 Use Of The Focus Constraints

TEXT uses the legal focus moves identified by Sidner by only matching schema predicates against propositions which have an argument that can be focused in satisfaction of the legal options. Thus, the matching process itself is constrained by the focus mechanism.
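A minimal sketch of this preference ordering combined with the greatest-links tie-break follows; the proposition format is a hypothetical stand-in, not TEXT's actual representation.

  # Hypothetical sketch of choosing the next proposition under the focus
  # preferences: prefer a member of the last PFL, then the last CF, then the
  # focus stack; break remaining ties by most links to the last PFL.
  def focus_rank(focused_arg, last_cf, last_pfl, focus_stack):
      if focused_arg in last_pfl:
          return 0   # shift to an item just introduced
      if focused_arg == last_cf:
          return 1   # maintain the current focus
      if focused_arg in focus_stack:
          return 2   # return to a previous topic
      return 3       # no legal immediate-focus move

  def choose_next(propositions, last_cf, last_pfl, focus_stack):
      # propositions: list of (focused_argument, links_to_pfl) pairs
      return min(propositions,
                 key=lambda p: (focus_rank(p[0], last_cf, last_pfl, focus_stack),
                                -p[1]))

  props = [("SHIP", 1), ("DRAFT", 2)]
  print(choose_next(props, "SHIP", ["DRAFT", "DISPLACEMENT"], []))  # ('DRAFT', 2)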
The focus preferences developed for generation are used to select between remaining options. These options occur in TEXT when a predicate matches more than one piece of information in the relevant knowledge pool or when more than one alternative in a schema can be satisfied. In such cases, the focus guidelines are used to select the most appropriate proposition. When options exist, all propositions are selected which have as focused argument a member of the previous PFL. If none exist, then all propositions are selected whose focused argument is the previous current-focus. If none exist, then all propositions are selected whose focused argument is a member of the focus-stack. If these filtering steps do not narrow down the possibilities to a single proposition, that proposition with the greatest number of links to the previous PFL is selected for the answer. The focus and potential focus list of each proposition is maintained and passed to the tactical component for use in selecting syntactic constructions and pronominalization.

Interaction of the focus constraints with the schemas means that although the same schema may be selected for different answers, it can be instantiated in different ways. Recall that the identification schema was selected in response to the question "What is a ship?" and the four predicates, identification, evidence, attributive, and particular-illustration, were instantiated. The identification schema was also selected in response to the question "What is an aircraft carrier?", but different predicates were instantiated as a result of the focus constraints:

(definition AIRCRAFT-CARRIER)
Schema selected: identification
1) identification
2) analogy
3) particular-illustration
4) amplification
5) evidence

1) An aircraft carrier is a surface ship with a DISPLACEMENT between 78000 and 80800 and a LENGTH between 1039 and 1063. 2) Aircraft carriers have a greater LENGTH than all other ships and a greater DISPLACEMENT than most other ships. 3) Mine warfare ships, for example, have a DISPLACEMENT of 320 and a LENGTH of 144. 4) All aircraft carriers in the ONR database have REMARKS of 0, FUEL_TYPE of BNKR, FLAG of BLBL, BEAM of 252, ENDURANCE_RANGE of 4000, ECONOMIC_SPEED of 12, ENDURANCE_SPEED of 30 and PROPULSION of STMTURGRD. 5) A ship is classified as an aircraft carrier if the characters 1 through 2 of its HULL_NO are CV.

9.0 FUTURE DIRECTIONS

Several possibilities for further development of the research described here include 1) the use of the same strategies for responding to questions about attributes, events, and relations as well as to questions about entities, 2) investigation of strategies needed for responding to questions about the system processes (e.g. How is manufacturer's cost determined?) or system capabilities (e.g. Can you handle ellipsis?), 3) responding to presuppositional failure as well as to direct questions, and 4) the incorporation of a user model in the generation process (currently TEXT assumes a static casual, naive user and gears its responses to this characterization). This last feature could be used, among other ways, in determining the amount of detail required (see [MCKEOWN 82] for discussion of the recursive use of the schemas).

10.0 CONCLUSION

The TEXT system successfully incorporates principles of relevancy criteria, discourse structure, and focus constraints into a method for generating English text of paragraph length.
Previous work on focus of attention has been extended for the task of generation to provide constraints on what to say next. Knowledge about discourse structure has been encoded into schemas that are used to guide the generation process. The use of these two interacting mechanisms constitutes a departure from earlier generation systems. The approach taken in this research is that the generation process should not simply trace the knowledge representation to produce text. Instead, communicative strategies people are familiar with are used to effectively convey information. This means that the same information may be described in different ways on different occasions. The result is a system which constructs and orders a message in response to a given question. Although the system was designed to generate answers to questions about database structure (a feature lacking in most natural language database systems), the same techniques and principles could be used in other application areas (for example, computer assisted instruction systems, expert systems, etc.) where generation of language is needed.

ACKNOWLEDGEMENTS

I would like to thank Aravind Joshi, Bonnie Webber, Kathleen McCoy, and Eric Mays for their invaluable comments on the style and content of this paper. Thanks also go to Kathleen McCoy and Steven Bossie for their roles in implementing portions of the system.

References

[BOSSIE 82]. Bossie, S., "A tactical model for text generation: sentence generation using a functional grammar," forthcoming M.S. thesis, University of Pennsylvania, Philadelphia, Pa., 1982.
[CHEN 76]. Chen, P.P.S., "The entity-relationship model - towards a unified view of data." ACM Transactions on Database Systems, Vol. 1, No. 1 (1976).
[GRIMES 75]. Grimes, J.E., The Thread of Discourse. Mouton, The Hague, Paris (1975).
[GROSZ 77]. Grosz, B. J., "The representation and use of focus in dialogue understanding." Technical note 151, Stanford Research Institute, Menlo Park, Ca. (1977).
[LEE and GERRITSEN 78]. Lee, R.M. and R. Gerritsen, "Extended semantics for generalization hierarchies," in Proceedings of the 1978 ACM-SIGMOD International Conference on Management of Data, Austin, Tex., 1978.
[KAY 79]. Kay, M., "Functional grammar." Proceedings of the 5th Annual Meeting of the Berkeley Linguistics Society (1979).
[MALHOTRA 75]. Malhotra, A., "Design criteria for a knowledge-based English language system for management: an experimental analysis." MAC TR-146, MIT, Cambridge, Mass. (1975).
[MCCOY 82]. McCoy, K. F., "Augmenting a database knowledge representation for natural language generation," in Proc. of the 20th Annual Conference of the Association for Computational Linguistics, Toronto, Canada, 1982.
[MCKEOWN 80]. McKeown, K.R., "Generating relevant explanations: natural language responses to questions about database structure," in Proceedings of AAAI, Stanford Univ., Stanford, Ca. (1980), pp. 306-9.
[MCKEOWN 82]. McKeown, K. R., "Generating natural language text in response to questions about database structure." Ph.D. dissertation, University of Pennsylvania, Philadelphia, Pa., 1982.
[SHEPHERD 26]. Shepherd, H. R., The Fine Art of Writing, The Macmillan Co., New York, N.Y., 1926.
[SIDNER 79]. Sidner, C.L., "Towards a computational theory of definite anaphora comprehension in English discourse." Ph.D. dissertation, MIT AI Technical Report #TR-537, Cambridge, Mass. (1979).
[SMITH and SMITH 77]. Smith, J.M. and Smith, D.C.P., "Database abstractions: aggregation and generalization."
University of Utah, ACM Transactions on Database Systems, Vol. 2, No. 2, June 1977, pp. 105-33.
[TENNANT 79]. Tennant, H., "Experience with the evaluation of natural language question answerers." Working paper #18, Univ. of Illinois, Urbana-Champaign, Ill. (1979).
AUGMENTING A DATABASE KNOWLEDGE REPRESENTATION FOR NATURAL LANGUAGE GENERATION*

Kathleen F. McCoy
Dept. of Computer and Information Science
The Moore School
University of Pennsylvania
Philadelphia, Pa. 19104

ABSTRACT

The knowledge representation is an important factor in natural language generation since it limits the semantic capabilities of the generation system. This paper identifies several information types in a knowledge representation that can be used to generate meaningful responses to questions about database structure. Creating such a knowledge representation, however, is a long and tedious process. A system is presented which uses the contents of the database to form part of this knowledge representation automatically. It employs three types of world knowledge axioms to ensure that the representation formed is meaningful and contains salient information.

1.0 INTRODUCTION

In order for a user to extract meaningful information from a database system, s/he must first understand the system's view of the world - what information the system contains and what that information represents. An optimal way of acquiring this knowledge is to interact, in natural language, with the system itself, posing questions to it about the structure of its contents. The TEXT system [McKeown 82] was developed to facilitate this type of interaction. In order to make use of the TEXT system, a system's knowledge about itself must be rich enough to support the generation of interesting texts about the structure of its contents. As I will demonstrate, standard database models [Chen 76], [Smith & Smith 77] are not sufficient to support this type of generation. Moreover, since time is such an important factor when generating answers, and extensive inferencing is therefore not practical, the system's self knowledge must be immediately available in its knowledge representation.

The ENHANCE system, described here, has been developed to augment a database schema with the kind of information necessary for generating informative answers to users' queries. The ENHANCE system creates part of the knowledge representation used by TEXT based on the contents of the database. A set of world knowledge axioms are used to ensure that this knowledge representation reflects both the database contents and the database designer's view of the world.

One important class of questions involves comparing database entities. The system's knowledge representation must therefore contain meaningful information that can be used to make comparisons (analogies) between various entity classes. This paper focuses specifically on those aspects of the knowledge representation generated by ENHANCE which facilitate the use of analogies. An overview of the knowledge representation used by TEXT is first given. This is followed by a discussion of how part of this representation is automatically created by ENHANCE.

*This work was partially supported by National Science Foundation grant #MCS81-07290.

2.0 KNOWLEDGE REPRESENTATION FOR GENERATION

The TEXT system answers three types of questions about database structure: (1) requests for the definition of an entity; (2) requests for the information available about an entity; (3) requests concerning the difference between entities. It was implemented and tested using a portion of an ONR database which contained information about vehicles and destructive devices. TEXT needs several types of information to answer the above questions.
Some of this can be provided by features found in a variety of standard database models [Chen 76], [Smith & Smith 77], [Lee & Gerritsen 78]. Of these, TEXT uses a generalization hierarchy on the entities in order to define or identify them in terms of (1) their constituents (e.g. "There are two types of entities in the ONR database: destructive devices and vehicles."*) (2) their superordinates (e.g. "A destroyer is a surface ship ... A bomb is a free falling projectile." and "A whiskey is an underwater submarine ..."). Each node in the hierarchy contains additional descriptive information based on standard features which is used to identify the database information associated with each entity and to indicate the distinguishing features of the entities.

* The quoted material is excerpted from actual output from TEXT.

One type of comparison that TEXT must generate has to do with indicating why a particular individual falls into one entity sub-class as opposed to another. For example, "A ship is classified as an ocean escort if the characters 1 through 2 of its HULL_NO are DE ... A ship is classified as a cruiser if the characters 1 through 2 of its HULL_NO are CG." and "A submarine is classified as an echo II if its CLASS is ECHO II." In order to generate this kind of comparison, TEXT must have available database information indicating the reason for a split in the generalization hierarchy. This information is provided in the based DB attribute.

In comparing two entities, TEXT must be able to identify the major differences between them. Part of this difference is indicated by the descriptive distinguishing features of the entities. For example, "The missile has a target location in the air or on the earth's surface ... The torpedo has an underwater target location." and "A whiskey is an underwater submarine with a PROPULSION_TYPE of DIESEL and a FLAG of RDOR." These distinguishing features consist of a number of attribute-value* pairs associated with each entity. They are provided in an information type termed the distinguishing descriptive attributes (DDAs) of an entity.

In order for TEXT to answer questions about the information available about an entity, it must have access to the actual database information associated with each entity in the generalization hierarchy. This information is provided in what are termed the actual DB attributes (and constant values) and the relational attributes (and values). This information is also useful in comparing the attributes and relations associated with various entities. For example, "Other DB attributes of the missile include PROBABILITY_OF_KILL, SPEED, ALTITUDE ... Other DB attributes of the torpedo include FUSE_TYPE, MAXIMUM_DEPTH, ACCURACY_&_UNITS..." and "Echo IIs carry 16 torpedoes, between 16 and 99 missiles and 0 guns."

3.0 AUGMENTING THE KNOWLEDGE REPRESENTATION

The need for the various pieces of information in the knowledge representation is clear. How this representation should be created remains unanswered. The entire representation could be hand coded by the database designer. This, however, is a long and tedious process and therefore a bottleneck to the portability of TEXT. In this work, a level in the generalization hierarchy is identified that contains entities for which physical records exist in the database (database entity classes). It is assumed that the hierarchy above this level must be hand coded. The information below this level, however, can be derived from the contents of the database itself.

* These attributes are not necessarily attributes contained in the database.
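A minimal sketch of deriving such sub-class information from the stored records themselves follows; the record format and attribute values are hypothetical illustrations.

  from collections import defaultdict

  # Hypothetical sketch: group an entity class's records by the values of one
  # attribute, yielding mutually exclusive candidate sub-classes.
  def partition(records, attribute):
      groups = defaultdict(list)
      for record in records:
          groups[record[attribute]].append(record)
      return dict(groups)

  ships = [{"NAME": "A", "CLASS": "SKORY"},
           {"NAME": "B", "CLASS": "KITTY-HAWK"}]
  print(sorted(partition(ships, "CLASS")))   # ['KITTY-HAWK', 'SKORY']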
The database entity classes can be subclassified on the basis of attributes whose values serve to partition the entity class into a number of mutually exclusive sub-types. For example, PEOPLE can be subclassified on the basis of attribute SEX: MALE and FEMALE. As pointed out by Lee and Gerritsen [Lee & Gerritsen 78], some partitions of an entity class are more meaningful than others and hence more useful in describing the system's knowledge of the entity class. For example, a partition based on the primary key of the entity class would generate a single member sub-class for each instance in the database, thereby simply duplicating the contents of the database. The ENHANCE system relies on a set of world knowledge axioms to determine which attributes to use for partitioning and which resulting breakdowns are meaningful.

For each meaningful breakdown of an entity class, nodes are created in the generalization hierarchy. These nodes must contain the information types discussed above. ENHANCE computes this information based on the facts in the database. The attribute used to partition the entity class appears as the based DB attribute. The DDAs are a list of actual DB attributes, other than the based DB attribute, which when taken together distinguish a sub-class from all others in the breakdown. Since the sub-classes inherit all DB attributes from the entity class, the values of the attributes within the sub-class are important. ENHANCE records the values of all constant DB attributes and the range of values of any DB attributes which appear in the DDA of any sibling sub-class. These can be used by TEXT to compare the values of the DDAs of one sub-class with the values of the same attributes within a sibling sub-class. The values of relational attributes within a sub-class are also recorded by ENHANCE.

The descriptive information will be used by the generation system to indicate how the sub-classes differ. It is therefore important that the most salient differences between the sub-classes are indicated. Here again, the world knowledge axioms are used to guide the system in choosing the most salient information. The world knowledge axioms fall into three categories which reflect the extent to which they must be changed when applying ENHANCE to a new database. They range from very specific axioms, which must always be changed, to very general axioms, which are domain independent. The axioms and their use by the system will be described after first giving an example of a question answered by TEXT based on information created by ENHANCE.

3.1 EXAMPLE USING AUGMENTED REPRESENTATION

This example illustrates how TEXT integrates the above information to form meaningful comparisons between entities. This example is taken from actual TEXT output in response to the question: What is the difference between an OCEAN-ESCORT and a CRUISER?*

1) The cruiser and the ocean escort are surface ships. 2) The ship's surface-going capabilities are provided by the DB attributes DISPLACEMENT and DRAFT. 3) The ship has DB attributes MAXIMUM_SPEED, PROPULSION, FUEL (FUEL_CAPACITY and FUEL_TYPE), DIMENSIONS, SPEED_DEPENDENT_RANGE and OFFICIAL_NAME. 4) Ocean escorts have a DISPLACEMENT between 3400 and 4100. 5) All ocean escorts in the ONR database have REMARKS of 0, FUEL_TYPE of BNKR, FLAG of BLBL and MAST_HEIGHT of 85. 6) Ocean escorts carry between 2 and 22 torpedoes, 16 missiles and between 1 and 2 guns.
7) A ship is classified as an ocean escort if the characters 1 through 2 of its HULL_NO are DE. 8) Cruisers have a PROPULSION of STMTURGRD and a LENGTH between 510 and 673. 9) All cruisers in the ONR database have REMARKS of 0. 10) Cruisers carry between 8 and 42 torpedoes, between 4 and 98 missiles and between 1 and 4 guns. 11) A ship is classified as a cruiser if the characters 1 through 2 of its HULL_NO are CL or the characters 1 through 2 of its HULL_NO are CG. 12) The ocean escort, therefore, has a smaller LENGTH and a smaller DISPLACEMENT than the cruiser.

* The sentences are numbered here to simplify the discussion; there are no sentence numbers in the actual material produced by TEXT.

The first sentence is derived from the fact that both ocean-escorts and cruisers are sub-types of entity class ship. TEXT then goes on to describe some characteristics of the ship (sentences 2 and 3). Information about the ship is part of the hand coded representation; it is not generated by ENHANCE. Next, the distinguishing features (indicated by the DDA) of the ocean-escort are identified, followed by a listing of its constant DB attributes (sentences 4 and 5). The values of the relation attributes are then identified (sentence 6), followed by a statement drawn from the based DB attribute of the ocean-escort. Next, this same type of information is used to generate parallel information about the cruiser. The text closes with a simple inference based on the DDAs of the two types of ships.

4.0 WORLD KNOWLEDGE AXIOMS

In order for the generation system to give meaningful descriptions of the database, the knowledge representation must effectively capture both a typical user's view of the domain and how that domain has been modelled within the system. Without real world knowledge indicating what a user finds meaningful, there are several ways in which an automatically generated taxonomy may deviate from how a user views the domain: (1) the representation may fail to capture the user's preconceived notions of how a certain database entity class should be partitioned into sub-classes; (2) the system may partition an entity class on the basis of a non-salient attribute leading to an inappropriate breakdown; (3) non-salient information may be chosen to describe the sub-classes leading to inappropriate descriptions; (4) a breakdown may fail to add meaning to the representation (e.g. a partition chosen may simply duplicate information already available).

The first case will occur if the sub-types of these breakdowns are not completely reflected in the database attribute names and values. For example, even though the partition of SHIP into its various types (e.g. Aircraft-Carrier, Destroyer, etc.) is very common, there may be no attribute SHIP_TYPE in the database to form this partition. The partition can be derived, however, if a semantic mapping between the sub-type names and existing attribute-value pairs can be identified. In this case, the partition can be derived by associating the first few characters of attribute HULL_NO with the various ship-types. The very specific axioms are provided as a means for defining such mappings.

The taxonomy may also deviate from what a user might expect if the system partitions an entity class on the basis of non-salient attributes. It seems very natural to have a breakdown of SHIP based on attribute CLASS, but one based on attribute FUEL-CAPACITY would seem less appropriate.
A partition based on CLASS would yield sub-classes of SHIP such as SKORY and KITTY-HAWK, while one on FUEL-CAPACITY could only yield ones like SHIPS-WITH-100-FUEL-CAPACITY. Since saliency is not an intrinsic property of an attribute, there must be a way of indicating attributes salient in the domain. The specific axioms are provided for this purpose.

The user's view of the domain will not be captured if the information chosen to describe the sub-classes is not chosen from attributes important to the domain. Saliency is crucial in choosing the descriptive information (particularly the DDAs) for the sub-classes. Even though a DESTROYER may be differentiated from other types of ships by its ECONOMIC-SPEED, it seems more informative to distinguish it in terms of the more commonly mentioned property DISPLACEMENT. Here again, this saliency information is provided by the specific axioms.

A final problem faced by a system which only relies on the database contents is that a partition formed may be essentially meaningless (adding no new information to the representation). This will occur if all of the instances in the database fall into the same sub-class or if each falls into a different one. Such breakdowns either exactly reflect the entity class as a whole, or reflect the individual instances. This same type of problem occurs if the only difference between two sub-classes is the attribute the breakdown is based on. Thus, no trend can be found among the other attributes within the sub-classes formed. Such a breakdown would add no information that could not be trivially derived from the database itself. These types of breakdowns are "filtered out" using the general axioms.

The world knowledge axioms guide ENHANCE to ensure that the breakdowns formed are appropriate and that salient information is chosen for the sub-class descriptions. At the same time, the axioms give the designer control over the representation formed. The axioms can be changed and the system rerun. The new representation will reflect the new set of world knowledge axioms. In this way, the database designer can tune the representation to his/her needs. Each axiom category, how they are used by ENHANCE, and the problems each category solves are discussed below.

4.1 Very Specific Axioms

The very specific axioms give the user the most control over the representation formed. They let the user specify breakdowns that s/he would a priori like to appear in the knowledge representation. The axioms are formulated in such a way as to allow breakdowns on parts of the value field of a character attribute, and on ranges of values for a numeric attribute (examples of each are given below). This type of breakdown could not be formed without explicit information indicating the defining portions of the attribute value field and their associated semantic values.

A sample use of the very specific axioms can be found in classifying ships by their type (i.e. Aircraft-carriers, Destroyers, Mine-warfare-ships, etc.). This is a very common breakdown of ships. Assume there is no database attribute which explicitly gives the ship type. With no additional information, there is no way of generating that breakdown for ship. A user knowledgeable of the domain would note that there is a way to derive the type of a ship based on its HULL_NO. In fact, the first one or two characters of the HULL_NO uniquely identify the ship type.
For example, all AIRCRAFT-CARRIERS have a HULL_NO whose first two characters are CV, while the first two characters of the HULL_NO of a CRUISER are CA or CG or CL. This information can be captured in a very specific axiom which maps part of a character attribute field into the sub-type names. An example of such an axiom is shown in Figure 1.

(SHIP "SHIP_HULL_NO" "OTHER-SHIP-TYPE"
  (1 2 "CV" "AIRCRAFT-CARRIER")
  (1 2 "CA" "CRUISER")
  (1 2 "CG" "CRUISER")
  (1 2 "CL" "CRUISER")
  (1 2 "DD" "DESTROYER")
  (1 2 "DL" "FRIGATE")
  (1 2 "DE" "OCEAN-ESCORT")
  (1 2 "PC" "PATROL-SHIP-AND-CRAFT")
  (1 2 "PG" "PATROL-SHIP-AND-CRAFT")
  (1 2 "PT" "PATROL-SHIP-AND-CRAFT")
  (1 1 "L" "AMPHIBIOUS-AND-LANDING-SHIP")
  (1 2 "MC" "MINE-WARFARE-SHIP")
  (1 2 "MS" "MINE-WARFARE-SHIP")
  (1 1 "A" "AUXILIARY-SHIP"))
Figure 1. Very Specific (Character) Axiom

Sub-typing of entities may also be specified based on the ranges of values of a numeric attribute. For example, the entity BOMB is often sub-typed by the range of the attribute BOMB_WEIGHT. A BOMB is classified as being HEAVY if its weight is above 900, MEDIUM-WEIGHT if it is between 100 and 899, and LIGHT-WEIGHT if its weight is less than 100. An axiom which specifies this is shown in Figure 2.

(BOMB "BOMB_WEIGHT" "OTHER-WEIGHT-BOMB"
  (900 99999 "HEAVY-BOMB")
  (100 899 "MEDIUM-WEIGHT-BOMB")
  (0 99 "LIGHT-WEIGHT-BOMB"))
Figure 2. Very Specific (Numeric) Axiom

Formation of the very specific axioms requires in-depth knowledge of both the domain the database reflects, and the database itself. Knowledge of the domain is required in order to make common classifications (breakdowns) of objects in the domain. Knowledge of the database structure is needed in order to convey these breakdowns in terms of the database attributes. It should be noted that this type of axiom is not required for the system to run. If the user has no preconceived breakdowns which should appear in the representation, no very specific axioms need to be specified.

4.2 Specific Axioms

The specific axioms afford the user less control than the very specific axioms, but are still a powerful device. The specific axioms point out which database attributes are more important in the domain than others. They consist of a single list of database attributes called the important attributes list. The important attributes list does not "control" the system as the very specific axioms do. Instead it suggests paths for the system to try; it has no binding effects. The important attributes list used for testing ENHANCE on the ONR database is shown in Figure 3.

(CLASS FLAG DISPLACEMENT LENGTH WEIGHT LETHAL_RADIUS MINIMUM_ALTITUDE ACCURACY HORZ_RANGE MAXIMUM_ALTITUDE FUSE_TYPE PROPULSION_TYPE PROPULSION MAXIMUM_OPERATING_DEPTH PRIMARY_ROLE)
Figure 3. Important Attributes List

ENHANCE has two major uses for the important attributes list: (1) It attempts to form breakdowns based on some of the attributes in the list. (2) It uses the list to decide which attributes to use as DDAs for a sub-class. ENHANCE must decide which attributes are better as the basis for a breakdown and which are better for describing the resulting sub-classes. While most attributes important to the domain are good for descriptive purposes, character attributes are better than others as the basis for a breakdown. Attributes with character values can more naturally be the basis for a breakdown since they have a small set of legal values. A breakdown based on such an attribute leads to a small well-defined set of sub-classes.
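A minimal sketch of how axioms in the Figure 1 and Figure 2 formats could be applied to classify a record follows; the application functions are hypothetical illustrations, not ENHANCE's actual code.

  # Hypothetical sketch of applying very specific axioms. Character entries are
  # (start, end, value, sub-type) over a 1-indexed character field; numeric
  # entries are (low, high, sub-type) over an attribute's value.
  def apply_character_axiom(axiom, record):
      entity, attribute, default, *entries = axiom
      text = record[attribute]
      for start, end, value, subtype in entries:
          if text[start - 1:end] == value:
              return subtype
      return default

  def apply_numeric_axiom(axiom, record):
      entity, attribute, default, *entries = axiom
      number = record[attribute]
      for low, high, subtype in entries:
          if low <= number <= high:
              return subtype
      return default

  ship_axiom = ("SHIP", "HULL_NO", "OTHER-SHIP-TYPE",
                (1, 2, "CV", "AIRCRAFT-CARRIER"), (1, 2, "DE", "OCEAN-ESCORT"))
  bomb_axiom = ("BOMB", "BOMB_WEIGHT", "OTHER-WEIGHT-BOMB",
                (900, 99999, "HEAVY-BOMB"), (100, 899, "MEDIUM-WEIGHT-BOMB"))
  print(apply_character_axiom(ship_axiom, {"HULL_NO": "DE1070"}))  # OCEAN-ESCORT
  print(apply_numeric_axiom(bomb_axiom, {"BOMB_WEIGHT": 500}))     # MEDIUM-WEIGHT-BOMB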
Numeric attributes, on the other hand, often have an infinite number of legal values. A breakdown based on individual numeric values could lead to a potentially infinite number of sub-classes. This distinction between numeric and character (symbolic) attributes is also used in the TEAM system [Grosz et al. 82]. ENHANCE first attempts to form breakdowns of an entity based on character attributes from the important attributes list. Only if no breakdowns result from these attempts does the system attempt breakdowns based on numeric attributes.

The important attributes list also plays a major role in selecting the distinguishing descriptive attributes (DDAs) for a particular sub-class. Recall that the DDAs are a set of attributes whose values differentiate one sub-class from all other sub-classes in the same breakdown. It is often the case that several sets of attributes could serve this purpose. In this situation, the important attributes list is consulted in order to choose the most salient distinguishing features. The set of attributes with the highest number of attributes on the important attributes list is chosen.

The important attributes list affords the user less control over the representation formed than the very specific axioms since it only suggests paths for the system to take. The system attempts to form breakdowns based on the attributes in the list, but these breakdowns are subjected to tests encoded in the general axioms which are not used for breakdowns formed by the very specific axioms. Breakdowns formed using the very specific axioms are not subjected to as many tests since they were explicitly specified by the database designer.

4.3 General Axioms

The final type of world knowledge axioms used by ENHANCE are the general axioms. These axioms are domain independent and need not be changed by the user. They encode general principles used for deciding such things as whether sub-classes formed should be added to the knowledge representation, and how sub-classes should be named.

The ENHANCE system must be capable of naming the sub-classes. The name must uniquely identify a sub-class and should give some semantic indication of the contents of the sub-class. At the same time, the names should sound reasonable to the ENHANCE user. These problems are handled by the general axioms entitled naming conventions. An example of a naming convention is:

Rule 1 - The name of a sub-class of entity ENT formed using a character* attribute with value VAL will be: VAL-ENT.

Examples of sub-classes named using this rule include: WHISKY-SUBMARINE and FORRESTAL-SHIP.

The ENHANCE system must also ensure that each of the sub-classes in a particular breakdown is meaningful. For instance, some of the sub-classes may contain only one individual from the database. If several such sub-classes occur, they are combined to form a CLASS-OTHER sub-class. This use of CLASS-OTHER compacts the representation while indicating that a number of instances are not similar enough to any others to form a sub-class. The DDA for CLASS-OTHER indicates what attributes are common to all entity instances that fail to make the criteria for membership in any of the larger named sub-classes. Without CLASS-OTHER this information would have to be derived by the generation system; this is a potentially time consuming process. The general axioms contain several rules which will block the formation of CLASS-OTHER in circumstances where it will not add information to the representation.
These include:

Rule 2 - Do not form CLASS-OTHER if it will contain only one individual.

Rule 3 - Do not form CLASS-OTHER if it will be the only child of a superordinate.

Perhaps the most important use of the general axioms is their role in deciding if an entire breakdown adds meaning to the knowledge representation. The general axioms are used to "filter out" breakdowns whose sub-classes either reflect the entity class as a whole, or the actual instances in the database. They also contain rules for handling cases when no differences between the sub-classes can be found. Examples of these rules include:

Rule 4 - If a breakdown results in the formation of only one sub-type, then do not use that breakdown.

Rule 5 - If every sub-class in two different breakdowns contains exactly the same individuals, then use only one of the breakdowns.
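A minimal sketch of how Rules 2 through 5 could operate as filters is given below; representing a breakdown as a mapping from sub-class names to sets of instance identifiers is an assumption of this illustration, not ENHANCE's actual data structure:

    # Sketch of the general axioms as filters over candidate breakdowns.

    def prune_class_other(breakdown):
        # Rules 2 and 3: drop CLASS-OTHER when it adds no information.
        other = breakdown.get("CLASS-OTHER")
        if other is not None and (len(other) == 1 or len(breakdown) == 1):
            del breakdown["CLASS-OTHER"]
        return breakdown

    def filter_breakdowns(breakdowns):
        kept = []
        for bd in breakdowns:
            bd = prune_class_other(dict(bd))
            if len(bd) <= 1:                    # Rule 4: only one sub-type
                continue
            partition = frozenset(frozenset(m) for m in bd.values())
            if any(partition == frozenset(frozenset(m) for m in k.values())
                   for k in kept):              # Rule 5: duplicate partition
                continue
            kept.append(bd)
        return kept

    b1 = {"HEAVY-BOMB": {"b1"}, "MEDIUM-WEIGHT-BOMB": {"b2", "b3"}}
    b2 = {"LONG-BOMB": {"b1"}, "SHORT-BOMB": {"b2", "b3"}}
    print(len(filter_breakdowns([b1, b2])))     # 1: b2 duplicates b1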
5.0 SYSTEM OVERVIEW

The ENHANCE system consists of a set of independent modules; each is responsible for generating some piece of descriptive information for the sub-classes. When the system is invoked for a particular entity class, it first generates a number of breakdowns based on the values in the database. These breakdowns are passed from one module to the next and descriptive information is generated for each sub-class involved. This process is overseen by the general axioms, which may throw out breakdowns for which descriptive information can not be generated.

Before generating the breakdowns from the values in the database, the constraints on the values are checked and all units are converted to a common value. Any attribute values that fail to meet the constraints are noted in the representation and not used in the calculation. From these values a number of breakdowns are generated using the very specific and specific axioms.

The breakdowns are first passed to the "fitting algorithm". When two or more breakdowns are generated for an entity-class, the sub-classes in one breakdown may be contained in the sub-classes of the other. In this case, the sub-classes in the first breakdown should appear as the children of the sub-classes of the second breakdown, adding depth to the hierarchy. The fitting algorithm is used to calculate where the sub-classes fit in the generalization hierarchy. After the fitting algorithm is run, the general axioms may intervene to throw out any breakdowns which are essentially duplicates of other breakdowns (see Rule 5 above). At this point, the DDAs of the sub-classes within each breakdown are calculated. The algorithm used in this calculation is described below to illustrate the combinatoric nature of the augmentation process. If no DDAs can be found for a breakdown formed using the important attributes list, the general axioms may again intervene to throw out that breakdown. Flow of control then passes through a number of modules responsible for calculating the based DB attribute and for recording constant DB attributes and relation attributes. The actual nodes are then generated and added to the hierarchy.

Generating the descriptive information for the sub-classes involves combinatoric problems which depend on the number of records for each entity in the database and the number of sub-classes formed for these entities. The ENHANCE system was implemented on a VAX 11/780, and was tested using a portion of an ONR database containing 157 records. It generated sub-type information for 7 entities and ran in approximately 159 CPU seconds. For a database with many more records, the processing time may grow exponentially. This is not a major problem since the system is not interactive; it can be run in batch mode. In addition, it is run only once for a particular database. After it is run, the resulting representation can be used by the interactive generation system on all subsequent queries.

A brief outline of the processing involved in generating the DDAs of a particular sub-class will be given. This process illustrates the kind of combinatoric problems encountered in automatic generation of sub-type information, making it unreasonable computation for an interactive generation system.

5.1 Generating DDAs

The Distinguishing Descriptive Attributes (DDAs) of a sub-class are a set of attributes, other than the based DB attribute, whose collective value differentiates that sub-class from all other sub-classes in the same breakdown. Finding the DDA of a sub-class is a problem which is combinatoric in nature since it may require looking at all combinations of the attributes of the entity class. This problem is accentuated since it has been found that in practice, a set of attributes which differentiates one sub-class from all other sub-classes in the same breakdown does not always exist. Unless this problem is identified ahead of time, the system would examine all combinations of all of the attributes before deciding the sub-class can not be distinguished.

There are several features of the set of DDAs which are desirable. (1) The set should be as small as possible. (2) It should be made up of salient attributes (where possible). (3) The set should add information about that sub-class not already derivable from the representation. In other words, they should be different from the DDAs of the parent.

A method for generating the DDAs could involve simply generating all 1-combinations of attributes, followed by 2-combinations, etc., until a set of attributes is found which differentiates the sub-class. Attributes that appeared in the DDA of the immediate parent sub-class would not be included in the combinations formed. To ensure that the DDA was made up of the most salient attributes, combinations of attributes from the important attributes list could be generated first. This method, however, does not avoid any of the combinatoric problems involved in the processing. To avoid some of these problems, a pre-processor to the combination stage of the calculation was developed. The combinations are formed of only potential-DDAs. These are a set of attributes whose value can be used to differentiate the sub-class from at least one other sub-class. The attributes included in potential-DDAs take on a value within the sub-class that is different from the value the attributes take on in at least one other sub-class. Using the potential-DDAs ensures that each attribute in a given combination is useful in distinguishing the sub-class from all others.

Calculating the potential-DDAs requires comparing the values of the attributes within the sub-class with the values within each other sub-class in turn. This calculation yields two other pieces of important information. If for a particular sub-class this comparison yields only one attribute, then this attribute is the only means for differentiating that sub-class from the sub-class the DDAs are being calculated for. In order for the DDA to differentiate the sub-class from all others, it must contain that attribute. Attributes of this type are called definite-DDAs. The second type of information identified has to do with when the sub-class can not be differentiated from all others. The comparing of attribute values of sub-classes makes immediately apparent when the DDA for a sub-class can not be found. In this case, the general axioms would rule out the breakdown containing that sub-class.* Assuming that the sub-class is found to be distinguishable, the system uses the potential-DDAs and the definite-DDAs to find the smallest and most salient set of attributes to use as the DDA. It forms combinations of attributes using the definite-DDAs and members of the potential-DDAs. The important attributes list is consulted to ensure that the most salient attributes are chosen as the DDA.

* There are several cases in which ENHANCE would not rule out the breakdown; see [McCoy 82] for details.
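The following sketch shows one way the potential-DDA and definite-DDA bookkeeping could drive the combination search; it is an illustration under assumed data structures (a table of attribute values per sub-class), not the algorithm as actually coded in ENHANCE:

    # Sketch of DDA selection with the potential-DDA pre-processor.
    # values[s][a] is the value of attribute a within sub-class s.
    from itertools import combinations

    def select_dda(target, values, important=()):
        others = [s for s in values if s != target]
        potential, definite = set(), set()
        for s in others:
            diff = {a for a in values[target]
                    if values[target][a] != values[s].get(a)}
            if not diff:
                return None              # indistinguishable: rule out breakdown
            potential |= diff
            if len(diff) == 1:
                definite |= diff         # sole distinguisher for this s
        def distinguishes(attrs):
            return all(any(values[target][a] != values[s].get(a) for a in attrs)
                       for s in others)
        ordered = [a for a in important if a in potential] + \
                  sorted(potential - set(important))
        free = [a for a in ordered if a not in definite]
        base = tuple(sorted(definite))
        for k in range(len(free) + 1):   # smallest sets first
            for extra in combinations(free, k):
                attrs = base + extra
                if attrs and distinguishes(attrs):
                    return attrs
        return None

    vals = {"CARRIER": {"LENGTH": 1050, "FLAG": "BLBL"},
            "CRUISER": {"LENGTH": 550,  "FLAG": "BLBL"},
            "TUG":     {"LENGTH": 1050, "FLAG": "RDRD"}}
    print(select_dda("CARRIER", vals, important=("LENGTH",)))
    # -> ('FLAG', 'LENGTH'): each is a definite-DDA in this toy data

Because every attribute in a combination is already known to distinguish the target from at least one sibling, and the definite-DDAs are forced into every candidate, the search avoids most of the combinatorics of the naive method.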
5.2 Time/Space Tradeoff

There is a time/space tradeoff in using a system like ENHANCE. Once the ENHANCE system is run, the generation system is relieved from the time consuming task of sub-type inferencing. This means, however, that a much larger knowledge representation for the generation system's use results. Since the generation system must be concerned with the amount of time it takes to answer a question, the cost of the larger knowledge representation is well worth the savings in inferencing time. If, however, at some future point, time is no longer a major factor in natural language generation, many of the ideas put forth here could be used to generate the sub-type information only as it is needed.

6.0 USE OF REPRESENTATION CREATED BY ENHANCE

The following example illustrates how the TEXT system uses the information generated by ENHANCE. The example is taken from actual output generated by the TEXT system in response to the question: What is an AIRCRAFT-CARRIER?. It utilizes the portion of the representation generated by ENHANCE. Following the text is a brief description of where each piece of information was found in the representation. (The sentences are numbered here to simplify the discussion; there are no sentence numbers in the actual material produced by TEXT.)

(1) An aircraft carrier is a surface ship with a DISPLACEMENT between 78000 and 80800 and a LENGTH between 1039 and 1063. (2) Aircraft carriers have a greater LENGTH than all other ships and a greater DISPLACEMENT than most other ships. (3) Mine warfare ships, for example, have a DISPLACEMENT of 320 and a LENGTH of 144. (4) All aircraft carriers in the ONR database have REMARKS of 0, FUEL TYPE of BNKR, FLAG of BLBL, BEAM of 252, ENDURANCE RANGE of 4000, ECONOMIC SPEED of 12, ENDURANCE SPEED of 30 and PROPULSION of STMTURGRD. (5) A ship is classified as an aircraft carrier if the characters 1 through 2 of its HULL NO are CV.

In this example, the DDAs of aircraft carrier are used to identify its features (sentence 1) and to make a comparison between aircraft carriers and all other types of ships (sentences 2 and 3). Since the ENHANCE system ensures that the values of the DDAs for one sub-class appear in the DB attribute list of every other sub-class in the same breakdown, the comparisons between the sub-classes are easily calculated by the TEXT system. Moreover, since ENHANCE has selected out several attributes as more important than others (based on the world knowledge axioms), TEXT can make a meaningful comparison instead of one less relevant.
The final sentence is derived from the based DB attribute of aircraft carrier.

7.0 FUTURE WORK

There are several extensions of the ENHANCE system which would make the knowledge representation more closely reflect the real world. These include (1) the use of very specific axioms in the calculation of descriptive information and (2) the use of relational information as the basis for a breakdown.

At the present time, all descriptive sub-class information is calculated from the actual contents of the database, although sub-class formation may be based on the very specific axioms. The database contents may not adequately capture the real world distinctions between the sub-classes. For this reason, a set of very specific axioms specifying descriptive information could be adopted. The need for such axioms can best be seen in the DDA generated for ship sub-type AIRCRAFT-CARRIER. Since there are no attributes in the database indicating the function of a ship, there is no way of using the fact that the function of an AIRCRAFT-CARRIER is to carry aircraft to distinguish AIRCRAFT-CARRIERS from other ships. This is, however, a very important real world distinction. Very specific axioms could be developed to allow the user to specify these important distinctions not captured in the contents of the database.

The ENHANCE system could also be improved by utilizing the relational information when creating the breakdowns. For example, missiles can be divided into sub-classes on the basis of what kind of vehicles they are carried by. AIR-TO-AIR and AIR-TO-SURFACE missiles are carried on aircraft, while SURFACE-TO-SURFACE missiles are carried on ships. Thus, the relations often contain important sub-class distinctions that could be used by the system.

8.0 CONCLUSION

A system has been described which automatically creates part of a knowledge representation used for natural language generation. This enables the generation system to give a richer description of the database, since the information generated by ENHANCE can be used to make comparisons between sub-classes which would otherwise require use of extensive inferencing. ENHANCE generates sub-classes of the entity classes in the database; it uses a set of world knowledge axioms to guide the formation of the sub-classes. The axioms ensure the sub-classes are meaningful and that salient information is chosen for the sub-class descriptions. This in turn ensures that the generation system will have salient information available to use, making the generated text more meaningful to the user.

9.0 ACKNOWLEDGEMENTS

I would like to thank Aravind Joshi and Kathleen McKeown for their many helpful comments throughout the course of this work, and Bonnie Webber, Eric Mays, and Sitaram Lanka for their comments on the content and style of this paper.

10.0 REFERENCES

[Chen 76]. Chen, P.P.S., "The Entity-Relationship Model - Towards a Unified View of Data", ACM Transactions on Database Systems, Vol. 1, No. 1, 1976.

[Grosz et al. 82]. Grosz, B., et al., "TEAM: A Transportable Natural Language System", Tech Note 263, Artificial Intelligence Center, SRI International, Menlo Park, Ca., (to appear).

[Lee & Gerritsen 78]. Lee, R.M., and Gerritsen, R., "Extended Semantics for Generalization Hierarchies", Proceedings of the 1978 ACM-SIGMOD International Conference on Management of Data, Austin, Texas, May 31 to June 2, 1978.

[McCoy 82].
McCoy, K.F., "The ENHANCE System: Creating Meaningful Sub-Types in a Database Knowledge Representation For Natural Language Generation", forthcoming Master' s Thesis, University of Pennsylvania, Philadelphia, pa., 1982. [McKeown 82A]. McKeown, K.R., "Generating Natural Language Text in Response to Questions About Database Structure", Ph.D. Dinner tatio: ~, ; University of Pennsylvania, Philadelphia, Pa., 1982. [McKeown 82B]. McKeown, K.R., "The TEXT system for Natural Language Generation: An Overview", to appear in Proceedings of the 20th Ant ual Conference of the Association of Computational Lin~uis£[cs, Toronto, Canada, June 1982. [Smith and Smith 77]. Smith, J.M., and Smith, D.C.P., "Database Abstractions: Aggregation and Generalization", ACM Transactions on Database Systems, Vol. 2, No. 2, June 1977. 128 | 1982 | 29 |
THE REPRESENTATION OF INCONSISTENT INFORMATION IN A DYNAMIC MODEL-THEORETIC SEMANTICS

Douglas B. Moran
Department of Computer Science
Oregon State University
Corvallis, Oregon 97331

ABSTRACT

Model-theoretic semantics provides a computationally attractive means of representing the semantics of natural language. However, the models used in this formalism are static and are usually infinite. Dynamic models are incomplete models that include only the information needed for an application and to which information can be added. Dynamic models are basically approximations of larger conventional models, but differ in several interesting ways. The difference discussed here is the possibility of inconsistent information being included in the model. If a computation causes the model to expand, the result of that computation may be different than the result of performing that same computation with respect to the newly expanded model (i.e. the result is inconsistent with the information currently in the dynamic model). Mechanisms are introduced to eliminate these local (temporary) inconsistencies, but the most natural mechanism can introduce permanent inconsistencies in the information contained in the dynamic model. These inconsistencies are similar to those that people have in their knowledge and beliefs. The mechanism presented is shown to be related to both the intensional isomorphism and impossible worlds approaches to this problem.

I. INTRODUCTION

In model-theoretic semantics, the semantics of a sentence is represented with a logical formula, and its meaning is the result of evaluating that formula with respect to a logical model. The model-theoretic semantics used here is that given in The proper treatment of quantification in ordinary English (PTQ) [Montague 1973], but the problems and results discussed here apply to similar systems and theories. From the viewpoint of natural language understanding, the conventional model-theoretic semantics used in descriptive theories has two basic problems: (1) the information contained in a model is complete and unchanging whereas the information possessed by a person listening to an utterance is incomplete and may be changed by the understanding of that utterance, and (2) the models are usually presumed to be infinite, whereas a person possesses only finite information.

Dynamic model-theoretic semantics [Friedman, Warren, and Moran 1978, 1979; Moran 1980] addresses these problems by allowing the models to contain incomplete information and to have information added to the model. A dynamic model is a "good enough" approximation to an infinite model when it contains the finite subset of information that is needed to determine the meanings of the sentences actually presented to the system. Dynamic model-theoretic semantics allows the evaluation of a formula to cause the addition of information to the model. This interaction of the evaluation of a formula and the expansion of the model produces several linguistically interesting side-effects, and these have been labelled model-theoretic pragmatics [Moran 1980]. One of these effects occurs when the information given by an element of the model is expanded between the time when that element is identified as the denotation of a sub-expression in the formula and the time when it is used in combination with other elements. If the expansion of the model is not properly managed, the result of the evaluation of such a formula can be wrong (i.e. inconsistent with the contents of the model).
Two mechanisms for maintaining the correctness of the denotational relationship are presented. In the first, the management of the relationship is external to the model. This mechanism has the disadvantage that it involves high overhead - the denotational relationships must be repeatedly verified, and unnecessary expansions of the model may be performed. The second mechanism is similar to the first, but eliminates much of this overhead: it incorporates the management of the denotational relationship into the model by augmenting the model's structure.

It is this second mechanism that is of primary interest. It was added to the system to eliminate a source of immediate errors, but it was found to introduce long-term "errors". These errors are interesting because they are the kinds of errors that people frequently make. The structure added to the model permits it to contain inconsistent pieces of information (the structure of a conventional model prevents this), and the mechanism provides a motivated means for controlling which inconsistencies may and may not be entered into the dynamic model. An important subclass of the inconsistencies provided by this mechanism is known as intensional substitution failure, and this mechanism can be viewed as a variant of both the "impossible" worlds [e.g. Cresswell 1973: 39-41] and the intensional isomorphism [e.g. Lewis 1972] approaches. Since intensionality alone does not provide an account of intensional substitution failure, this mechanism provides an improved account of propositional attitudes.

II. THE PROBLEM

Dynamic models contain incomplete information, and the sets, relations, and functions in these models can be incompletely specified (their domains are usually incomplete). In PTQ, some phrases translate to lambda-expressions; other lambda-expressions are used to combine and reorder subexpressions. The possible denotations of these lambda-expressions are the higher-order elements of the model (sets, relations, and functions). For example, the proper name "John" translates to the logical expression (omitting intensionality for the time being):

(1) [lambda P . P(j)]

where P ranges over properties of individuals, and has as its denotation the set of properties that John has. The sentence "John talks" translates to:

(2) [lambda P . P(j)](talk)

This formula evaluates to true or false depending on whether or not the property that is the denotation of "talk" is in the set of properties that John has. The dynamic model that is used to evaluate (2) may not contain the element that is the denotation of "talk". If so, a problem ensues. If the formula is evaluated left-to-right, the set of properties denoted by the lambda-expression is identified, followed by the evaluation of "talk". This forces the model to expand to contain the property of talking. The addition of this new property expands the domain of the set of properties denoted by "John", thus forcing the expansion of the characteristic function of that set to specify whether or not talking is to be included.
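The hazard can be made concrete with a few lines of Python (a toy simulation written for this presentation, not Moran's interpreter; the data structures are hypothetical stand-ins for the dynamic model):

    # The model starts without the property 'talk'.  Evaluating the
    # lambda-expression for "John" first freezes his property set; the
    # later, undirected expansion of the model can then contradict it.

    model_properties = set()           # properties currently in the model
    johns_properties = set()           # extension of [lambda P . P(j)]

    def denote_john():
        # Snapshot over the model's *current*, incomplete property domain.
        return set(johns_properties)

    def denote_property(word):
        if word not in model_properties:   # evaluation forces an expansion
            model_properties.add(word)
            johns_properties.add(word)     # undirected choice: John talks
        return word

    # Left-to-right evaluation of [lambda P . P(j)](talk):
    john_set = denote_john()               # taken before 'talk' exists
    talk = denote_property("talk")         # model expands mid-evaluation
    print(talk in john_set)                # False: locally inconsistent
    print(talk in denote_john())           # True: re-evaluation is correct

Because the stale snapshot is recomputed on each fresh evaluation, the inconsistency in this simulation is local to the single formula, exactly as described in the surrounding text.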
However, because the relationship between the lambda-expression for "John" and the set of properties denoted is maintained only during the evaluation of the lambda-expression (there is no link from the denotation back to the expression that it denotes), there are no restrictions on how the set is to be expanded. Thus, it is possible to define the property of talking to have John talking and to expand the set previously identified as being denoted by "John" to not include talking, or vice versa. If such an expansion were made, the inconsistency would exist only in the evaluation of that particular formula, and not in the model. Subsequent evaluations of the sentence would recompute the denotation of "John" and get the correct set of properties. This is not a problem with the direction of evaluation - the argument to which the lambda-expression is applied may occur to the left of that lambda-expression, for example:

(3) [lambda R . R(talk)](lambda P . P(j))

(note: (3) is equivalent to (2) above). Finding the argument to which the lambda-expression is applied before evaluating the lambda-expression is not a viable solution for two reasons. First, some lambda-expressions are not applied to arguments, but they have the same problem with their denotations changing as the model expands. Second, having to find the argument to which a lambda-expression is applied eliminates one of the system's major advantages, compositionality.

III. THE FIRST MECHANISM - EXTERNAL MANAGEMENT

The mechanism that evaluates a formula with respect to a model has been augmented with a table that contains each lambda-expression and the image of its denotation in the current stage of the dynamic model. When the domain of the lambda-expression expands, the correct denotational relationship is maintained by expanding the image in the table using the lambda-expression, and then finding the corresponding element in the model. If the element in the model that was the denotation of the lambda-expression was not expanded in the same way as the image in this table, a new element corresponding to the expanded image is added to the model. This table allows two lambda-expressions that initially have the same denotation to have different denotations after the model expands. Since the expansion of elements in the model is undirected, an element that was initially the denotation of a lambda-expression may expand into an unused element. The accumulation of unused elements and the repeated comparisons of images in the table to elements in the model frequently imposes a high overhead.

IV. THE SECOND MECHANISM - AUGMENTING THE MODEL

The second mechanism for maintaining the correctness of the denotations of lambda-expressions basically involves incorporating the table from the first mechanism into the model. In effect, the lambda-expressions become meaningful names for the elements that they denote. These meaningful names are then used to restrict the expansion of the named elements; once an element has been identified as the denotation of a lambda-expression, it remains its denotation.*

* Meaningful names are also useful for other purposes, such as generating sentences from the information in the model and for providing procedural - rather than declarative - representations for the information in the model [Moran 1980].

In the first mechanism, when the domain of two lambda-expressions does not contain any of the elements that distinguish them, they will have the same denotation, and when such a distinguishing element is added to the model, the denotations of the two lambda-expressions will become different. With meaningful names, this is not possible because the denotational relationship between a lambda-expression and its denotation in the model is permanent. Since the system cannot anticipate how the model will be expanded, if it is possible to add to the domain of two lambda-expressions an element that would distinguish their denotations, those expressions must be treated as having distinct denotations.

Thus, all and only the logically-equivalent expressions should be identified as having the same denotation. If two equivalent expressions were not so identified, their denotations would be different elements in the model and this would allow them to be treated differently. For example, if "John and Mary" was not identified to be the same as "Mary and John", it would be possible to have the model contain the inconsistent information that "John and Mary talk" is true and that "Mary and John talk" is false. If two non-equivalent lambda-expressions were identified as being equivalent, they would have the same element as their denotation. When an element that would distinguish the denotations of these two expressions was added to the model, the expansion of the element that was serving as both their denotations would be incorrect for one of them and thus introduce an inconsistency. This need to correctly identify equivalent expressions presents a problem because even within the subset of expressions that are the translations of English phrases in the PTQ fragment, equivalence is undecidable [Warren 1979]. It is this undecidability that is the basis of the introduction of inconsistencies into the model. To be useful in a natural language understanding system, this mechanism needs to have timely determinations of whether or not two expressions are equivalent, and thus it will use techniques (including heuristics) that will produce false answers for some pairs of expressions. It is the collection of techniques that is used that determines which inconsistencies will and will not be admitted into the model.*

* While the fragment of English used in PTQ is large enough to demonstrate the introduction of inconsistent information, it is viewed as not being large enough to permit interesting claims about what are useful techniques for testing equivalences. Consequently, this part of the mechanism has not been implemented.
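As an illustration of the second mechanism (hypothetical Python, with a deliberately crude stand-in for the equivalence test), the table can be keyed by a canonical form of each lambda-expression so that a denotation, once assigned, is permanent:

    # Sketch of "meaningful names": each lambda-expression permanently
    # names its denotation, so the binding survives model expansion.
    # canonical() is a toy stand-in for the (undecidable) real test.

    denotations = {}                   # canonical expression -> model element

    def canonical(expr):
        # Normalize by sorting conjuncts, so "John and Mary" and
        # "Mary and John" receive the same meaningful name.
        return tuple(sorted(expr.split(" and ")))

    def denotation_of(expr):
        key = canonical(expr)
        if key not in denotations:
            denotations[key] = {"name": key, "extension": set()}
        return denotations[key]        # the same element on every later call

    d1 = denotation_of("John and Mary")
    d2 = denotation_of("Mary and John")
    print(d1 is d2)                    # True: identified as equivalent

An over-strong canonicalizer here would merge non-equivalent expressions and so plant a permanent inconsistency; an over-weak one would keep equivalent expressions apart, admitting an "impossible" distinction. This is precisely the trade-off the undecidability result forces.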
V. PROPOSITIONAL ATTITUDES AND INTENSIONAL SUBSTITUTION FAILURE

Intensional substitution failure occurs when one has different beliefs about intensionally-equivalent propositions. For example, all theorems are intensionally-equivalent (each is true in all possible worlds), but it is possible to believe one proposition that is a theorem and not believe another. The techniques used by the second mechanism to identify logically-equivalent formulas can be viewed as similar to Carnap's intensional isomorphism approach in that it is based on finding equivalences between the constituents and the structures of the expressions being compared. This mechanism can also be viewed as using an "impossible" worlds approach: if two intensionally-equivalent formulas are not identified as being equivalent, the mechanism "thinks" that it is possible to expand their domain to include a distinguishing element. Since the formulas are equivalent in all possible worlds, the expected distinguishing element must be an "impossible" world.

The presence of intensional substitution failure is one of the important tests of a theory of propositional attitudes. This mechanism is a correlate of that of Thomason [1980], with the addition of meaningful names to intensional objects serving the same purpose as Thomason's additional layer of types.

VI. REFERENCES

Cresswell, M. J. (1973) Logic and Languages, Methuen and Company, London.

Friedman, J., D. Moran, and D. Warren (1978) "An interpretation system for Montague grammar", American Journal of Computational Linguistics, microfiche 74, 23-96.

Friedman, J., D. Moran, and D.
Warren (1979) "Dynamic interpretations", Computer Studies in Formal Linguistics report N-16, Dept. of Computer and Communication Sciences, The University of Michigan; earlier version presented to the October 1978 Sloan Foundation Workshop on Formal Semantics at Stanford University. Lewis, D. (1972) "General semantics", in D. Davidson and G. Harman (eds.) (1972) Semantics of Natural Language, D. Reidel, Dordrecht, 169-218; reprinted in B. H. Partee (ed.) (1976) Monta6ue Grammar, Academic Press, New York, 1-50. Montague, R. (1973) "The proper treatment of quantification in ordinary English" (PTQ), in J. Hintikka, J. Moravesik, and P. Suppes (eds.) (1973) Approaches to Natural Language, D. Reidel, Dordrecht, 221-242; reprinted in R. Montague (1974) Formal Philosophy: Selected Papers of Richard Monta~ue, edited and with an introduction by Richmond Thomason, Yale University Press, New Haven, 247-270. Moran, D. (1980) Model-Theoretic Pra~matics: Dynamic Models and an Application to Presupposition and Implicature, Ph.D. dissertation, Computer Studies in Formal Linguistics, Dept. of" Computer and Communication Sciences, The University of Michigan. Thomason, R. H. (1980) "A model theo~ for propositional attitudes", Linguistics and Philosophy, 4, I 47-70. Narren, D. (1979) Syntax and Semantics in Parsin%: An Application to Monta~ue Grammar, Ph.D. dissertation, Computer Studies in Formal Linguistics report N-18, Dept. of Computer anc Communication Sciences, The University of Michigan. 18 | 1982 | 3 |
SALIENCE: THE KEY TO THE SELECTION PROBLEM IN NATURAL LANGUAGE GENERATION

E. Jeffrey Conklin and David D. McDonald
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01003 USA

ABSTRACT

We argue that in domains where a strong notion of salience can be defined, it can be used to provide: (1) an elegant solution to the selection problem, i.e. the problem of how to decide whether a given fact should or should not be mentioned in the text; and (2) a simple and direct control framework for the entire deep generation process, coordinating proposing, planning, and realization. (Deep generation involves reasoning about conceptual and rhetorical facts, as opposed to the narrowly linguistic reasoning that takes place during realization.) We report on an empirical study of salience in pictures of natural scenes, and its use in a computer program that generates descriptive paragraphs comparable to those produced by people.

I. The Selection Problem

At the heart of research on natural language generation is the question of how to decide what to say and, equally important, what not to say. This is the "selection problem", and it has been approached in various ways in the past: Direct translation generators such as [Swartout 1981, Clancey to appear] avoid the problem by leaving the decision to the original designer of the data structures that serve as the templates to the generator; this places the burden on that designer to correctly anticipate what degree of detail and presupposed knowledge will be appropriate to a specific audience, since on-line adjustments are not possible.

1. This report describes work done in the Department of Computer and Information Science at the University of Massachusetts. It was supported in part by National Science Foundation grant IST#8104984 (Michael Arbib and David McDonald, Co-Principal Investigators).

Mann and Moore [1981], on the other hand, while assembling texts dynamically to suit their audience, do so by "over-generating" the set of facts that will be related, and then passing them all through a special filter, leaving out those that are judged to be already known to the audience and letting through those that are new. McKeown [1981] uses a similar technique -- her generator, like Mann and Moore's, must examine every potentially mentionable object in the domain data base and make an explicit judgement as to whether to include it. We argue that in a task domain where salience information is available such filters are unnecessary, because we can simply define a cut-off salience level below which an object is ignored unless independently required for rhetorical reasons.

The most elaborate and heuristic systems to date use meta-knowledge about the facts in the domain and the listener's knowledge of them to plan utterances to achieve some desired effect. Cohen [1978] used speech-act theory to define a space of possible utterances and the goals they could achieve, which he searched by using backwards chaining. Appelt [1982] uses a compiled form of this search procedure which he encodes using Sacerdoti's procedural nets; he is able to plan the achievement of multiple rhetorical goals by looking for opportunities to "piggyback" additional phrases (sub-plans) into pending plans for utterances. We argue that in domains where salience information is already available, such thorough deliberations are often unnecessary, and that a straight-forward enumeration of the domain objects according to their relative salience, augmented with additional rhetorical and stylistic information on a strictly local basis, is sufficient for the demands of the task.
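As a first approximation, salience-based selection amounts to a sort and a cut-off; the following Python fragment is illustrative only (the threshold and the ratings are invented, and this is not the code of the system described below):

    # Sketch of salience-based selection: enumerate domain objects by
    # decreasing salience and cut off below a threshold, instead of
    # filtering or planning over every mentionable fact.

    def select(objects, salience, cutoff):
        # objects: names; salience: name -> rating (e.g. the 0-7 scale)
        ranked = sorted(objects, key=lambda o: salience[o], reverse=True)
        return [o for o in ranked if salience[o] >= cutoff]

    salience = {"house": 6.8, "fence": 5.9, "driveway": 4.2,
                "mailbox": 3.1, "shrub": 0.9}
    print(select(salience.keys(), salience, cutoff=2.0))
    # ['house', 'fence', 'driveway', 'mailbox'] -- 'shrub' is ignored
    # unless some rhetorical rule independently requires it.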
We argue that in domains where salience information is already available, such thorough deliberations are often unnecessary, and that a straight-forward enumeration of the domain objects according to their relative salience, augmented with additional rhetorical and stylistic information on a strictly local basis, is sufficient for the demands of the task. 129 II. Deep Generation and Scene Descriptions In this paper we present an approach to deep generation that uses the relative salience of the objects in the source data base to control the order and detail of their presentation in the text. We follow the usual view that natural language generation is divided into two interleaved phases: one in which selection takes place reflecting the speaker's goals, and the selected material is composed into a (largely conceptual) ,realization specification ,,I (abbreviated "r-spec") according to high-level rhetorical and stylistic conventions, and a second in which the r-spec is realized -- the text actually produced -- in accordance with the syntactic and morphological rules of the language. We call the first phase "deep generation" -- instead of the more specific term "planning" -- to reflect our view that its use of actual planning techniques will be limited when compared to their use in the generators developed by Cohen, Appelt, or Mann and Moore. We are developing our theory of deep generation in the context of a computer program that produces simple paragraphs describing photographs of natural scenes similar to those analyzed by the UMass VISIONS System [Hanson and Riseman 1978, Parma 1980]. Our input is a mock-up of their final analysis of the scene, including a mock-up annotation of the salience of all of the objects and their properties as would be identified by VISIONS; this representation is expressed in a locally developed version of KL-ONE. The paragraphs are realized using MUMBLE [McDonald 1981, 1982], which is responsible for all low-level linguistic decisions and for carrying out the rhetorical directives given in the r-spec. I. We are introducing this new term -- "realization specification" -- in place of the term ,,message 'r which had been used in earlier ~ ublications on McDonald's generation sy§tem. his is a change in name only: these Objects have the same formal properties as before. The shift reflects the kind of communication metaphor on which this work has actually been based: the old term has often connoted a view of communication as a process of translating a data structure in the speaker's head into language and then reconstructing it in the audience's head. (the so-called "conduit" metaphor). Instead, we take it that a speaker has a set of goals whose realization may entail entirely d~¢fe-ent utterances depending upon who the a~dience is and what they already know; that the speaker's knowledge of their language consist 9 in large part of a catalog of wnat might be saia and the effects it is likely to have on the audience; and that, accordingly, language generation entails a plannin~ process, selecting among these effects according to the desired outcome. As of the beginning of February 1982, the initial version of the deep generation phase has been designed and implemented. Figure I shows the kind of scene we are using in our studies and an example of the kind of paragraph description targeted for our system. Efforts to "This is a picture of a large white house with a white fence in front of it. In front of the fence is a cement sculpture. 
In front of this is a street, Across the street is a grassy patch with a white mailbox. There are trees all around, with one evergreen to the right of the driveway, which runs next to the house. It is fall, the sky is overcast, and the ground is wet." Figure I. One of the pictd~es used in the experimental studies with one of the subjects' descriptions of it. A mocked-up analysis of this picture was used as the input to the deep generation process in the example discussed below. modify MUMBLE to run in NIL on our VAX are underway, and we anticipate having an initial realization dictionary up and the first texts produced before the end of May. During the summer and fall of 1981, Jeff Conklin (Conklin and Ehrlich, in preparation) carried out the series of psychological experiments discussed immediately below. The results have been use~ to determine the salience ratings for the mock-up of the analyzed scenes, and to provide a corpus of the kinds of texts people actually produce as descriptions of scenes of suburban houses. III. Visual Salience Our theory of visual salience states that a given person looking at a given picture in a given context assigns a salience (an ordering, rather than a numeric value) to each object as a 130 natural and automatic part of the process of perceiving and organizing the scene. Intuitively the salience of an object is based on its size and centrality (how central it is) in the image, its degree of unexpectedness, and its intrinsic appeal or importance to the viewer. To substantiate and explore these intuitions we ran a series of experiments in which a group of subjects rated the salience of items in color slides of natural scenes. For each picture each subject had a form listing all of the major items in the scene, and their task was to rate the salience of each item on a zero to seven scale. In order to define a controlled context the subjects were asked to imagine that they worked for a library which had a large picture section, and that their ranking scores would be used to Catalog the pictures. The controlled context is necessary because salience is generally only defined within a perceptual or conceptual context -- there is no salience in a vacuum. (However, we claim that there is a default context for viewing pictures which "anchors" the notion of salience when no other context is specified: that pictures are taken for the purpose of showing or telling the viewer something. While this is not a strong context, it allows one to talk about visual salience without precisely defining a purpose for the viewer.) In several experiments the subjects were given a second task: writing a description of the same pictures for which they were doing the rating task (one such description appears in Figure I). In these experiments the series of pictures was shown twice; in the first viewing, half of the subjects did the rating task and the other half did the description task, while in the second viewing the tasks were reversed, (It turned out that the description task had no significant effect on the rating scores.) Although we are still analyzing the data from these experiments, _there are several interesting results. The rating technique is a fairly stable and consistent non-subjective measure of salience (when averaging over a ~roup) , and is also quite sensitive to changes in the size and centrality of objects in the scene. Figure 2 shows a series of pictures that were used to determine the affects of size and centrality. 
IV. An Example

Here is a short example of the kind of paragraph which our system currently generates:

"This is a picture of a white house with a fence in front of it. The house has a red door and the fence has a red gate. Next to the house is a driveway. In the foreground is a mailbox. It is a cloudy winter day."

This paragraph was generated from a perceptual representation (in KL-ONE) in which the most salient objects, in order of decreasing salience, were: House, Fence, Door, Driveway, Gate, and Mailbox. The deep generation component (called GENARO) maintains this list as the "Unmentioned Salient Objects List" (USOL), and it is this data structure which mediates between GENARO and the domain data base (see Figure 3). It should be stressed that the USOL contains only objects -- not properties of objects or relationships between objects -- since we specifically claim that such an "object-driven" approach is not only more natural but also is adequate to the task.

There are two "registers" which are used for focus: "Current-Item" and "Main-Item". The Current-Item register contains the object currently in focus (and hence the most salient object which has not previously been mentioned), and the Main-Item register points to the data base's most salient object as the topic of the entire paragraph (this register is set once at the beginning of the paragraph generation process). An object moves into focus by being "popped" from the USOL and placed in the Current-Item register, along with its most salient properties and relationships (for ease of access). When formulating the r-spec, most of the rhetorical rules then look only at the Current-Item. (Some rules look down "into" the USOL, or into the r-spec under construction, as elaborated below.)

Figure 3. A block diagram of the GENARO system, showing the domain Data Base, the USOL (ordered from least to most salient), the Rhetorical Rules (in packets), the Paragraph Driver, the Proposed R-Spec Elements, and MUMBLE. The "O"s in the "Data Base" represent objects in the domain representation, whereas the shaded nodes are the thematic "shadows" of these objects used by GENARO for its rhetorical processing. Each of the ovals in the "Rhetorical Rules" box is a packet containing one or more rhetorical rules.
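A minimal sketch of these focus data structures (hypothetical Python, not GENARO's actual implementation) is:

    # The USOL plus the two focus registers.

    class Genaro:
        def __init__(self, objects_by_salience):
            self.usol = list(objects_by_salience)   # most salient first
            self.main_item = self.usol[0]           # paragraph topic, set once
            self.current_item = None                # local focus

        def pop_focus(self):
            # Move the most salient unmentioned object into focus.
            self.current_item = self.usol.pop(0) if self.usol else None
            return self.current_item

    g = Genaro(["House", "Fence", "Door", "Driveway", "Gate", "Mailbox"])
    print(g.pop_focus(), g.main_item)   # House House
    print(g.pop_focus())                # Fence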
GENARO stores its rhetorical conventions in the form of production rules, which are organized in packets (a la Marcus, 1980). The packets are used for high-level rhetorical control (i.e. introducing, elaborating, shifting-topic, concluding), and are turned on and off by a Paragraph Driver (which encodes the format of descriptive paragraphs). We call this control structure for the production rules "Iterative Proposing": each of the rules in the active packets whose condition is satisfied makes a proposal and gives it a rhetorical priority; the proposals are then ranked, and the one with the highest priority wins. This process is iterated until the r-spec is complete. The environment in which the rules' conditions are evaluated may change from iteration to iteration as a result of actions performed by the winning proposals. The r-spec can thus be thought of as a "molecule", each of whose "atoms" is the result of a successful rule. The atoms are "specification elements" to be processed by MUMBLE; they are either objects, properties, or relations from the domain, or rhetorical instructions that originate with GENARO. (N.b. In the course of producing a paragraph many r-specs will pass from GENARO to MUMBLE. The flow of the paragraph is determined by which rules are turned on -- via the Paragraph Driver's control of which packets are on -- and each r-spec is produced "locally", without an awareness of previous r-specs or a planning of future ones.)

GENARO starts with an empty message buffer and with Current-Item (in our example) set to House, the first item in the Unmentioned Salient Objects List. The Introduce packet, which is turned on initially, has a rule which proposes to "Introduce(House)"; this rule's conditions are that the value of the Current-Item be the value of the Main-Item (i.e. the Main-Item is in focus), and that the salience of the Main-Item be above some specified threshold. In this example both of these conditions are met, and the "atom" Introduce(House) is proposed at a high rhetorical priority, thus guaranteeing not only that it will be included in the first r-spec, but that it will be the dominant atom in that r-spec. Another rule (in the Elaborate packet) proposes including the color of the house (e.g. Color(House,White)), not because the color is itself salient, but to "flesh out" the introductory sentence. This rule is included because we noticed that salient items were rarely mentioned as "bare" objects -- some property was always given. (Note also that there are other rules that propose mentioning properties of objects on other grounds, i.e. because the property itself is salient.) Finally, there is a rule which notices that Fence is both quite salient and directly related to the current topic, and so proposes In-Front-Of(Fence, House).

Since the r-spec now contains three atoms and there are no strong grounds based on salience or considerations of style to continue adding to it, the r-spec is sent (via a narrow bandwidth system message) to the process MUMBLE, which immediately starts realizing it. MUMBLE's dictionary contains entries for all of the symbols used in the r-spec, e.g. Introduce, In-Front-Of, House, etc., which are used to construct a linguistic phrase marker which then controls the realization process, outputting "This is a picture of a white house with a fence in front of it.".
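The control regime just described can be sketched as follows (an illustrative reconstruction with invented rules and priorities, not GENARO's actual rule language):

    # Iterative Proposing: every rule in the active packets whose
    # condition holds proposes an r-spec atom with a priority; the
    # highest-priority proposal wins, the winning rule turns itself
    # off for this r-spec, and the cycle repeats.

    def build_rspec(rules, env, floor=0):
        rspec, used = [], set()
        while True:
            proposals = [(r["priority"], i) for i, r in enumerate(rules)
                         if i not in used and r["condition"](env)]
            if not proposals or max(proposals)[0] <= floor:
                return rspec
            _, winner = max(proposals)
            rspec.append(rules[winner]["action"](env))
            used.add(winner)

    rules = [
        {"priority": 9, "condition": lambda e: e["current"] == e["main"],
         "action":   lambda e: ("Introduce", e["current"])},
        {"priority": 5, "condition": lambda e: "color" in e,
         "action":   lambda e: ("Color", e["current"], e["color"])},
    ]
    env = {"current": "House", "main": "House", "color": "White"}
    print(build_rspec(rules, env))
    # [('Introduce', 'House'), ('Color', 'House', 'White')]

Here the Introduce proposal dominates the first r-spec and the color atom fleshes it out, mirroring the walk-through above.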
Back in GENARO, after the r-spec was sent, the Introduce packet was turned off, the message buffer cleared, Door (the next unused object) removed from the USOL and placed in the Current-Item register, and the Iterative Proposing process started over. In building the next r-spec, Part-of(Door, House) and Color(Door, Red) are inserted, by rules similar to the ones described above. Suppose, however, that there are no other salient relations or properties to mention about the Current-Item Door: nothing of high rhetorical priority is left to be proposed (n.b. once a rule's proposal is accepted that rule turns itself off until that r-spec is complete). There is, however, a rule called "Condense" which looks for rhetorical parallels and proposes them at low priority (i.e. they only win when there are no more useful rhetorical effects which apply). Condense notices that both Door (the Current-Item) and Gate (which is somewhere "down" in the USOL) have the property Red, and that the salience of Gate and of the property Color(Gate, Red) are above the appropriate thresholds, and so proposes that Gate be made the local focus. When this action is taken, a conjunction marker is added to the r-spec, and Gate is pulled out of the USOL and made the Current-Item. The r-spec created by these actions is realized as "The house has a red door and the fence has a red gate.".

When the USOL is empty the Conclude packet is turned on, and a rule in it proposes the r-spec about the lighting in the picture. (The facts about "cloudy" and "winter" are present in the perceptual representation -- no extra generation work was done to make that message.)

V. A Rhetorical Problem

One of the issues that we are using GENARO to investigate is that in their written descriptions people sometimes "chain" spatially through a picture, linking objects which are spatially close to each other or are in certain other strong relationships to each other. The paragraph in Figure 1 contains a good example of this style -- the rhetorical skeleton is:

This is a picture of an A with a B in front of it. In front of the B is a C. In front of the C is a D. Across the D is an E.

As can be seen by inspecting the picture in Figure 1, A thru E (i.e. house, fence, sculpture, street, and grassy patch) are arrayed from background to foreground in the picture in a way which allows the "in-front-of" relation to be used between them.(1) The question is: By what mechanism do we allow the strong spatial links between these items to override the system's basic strategy of mentioning objects in the order of decreasing salience?

1. "Across" in this case would be a lexical variation on "in-front-of" introduced deliberately by MUMBLE to break up the repetition.

The first part of the answer is that the machinery for such chaining already exists in the way the Current-Item register is used (and can be reset) by the rhetorical rules. Since one of the actions rules are allowed is to reset the Current-Item to some object, a rule can be written which says "If the Current-Item has a salient relationship Relation to object X, then propose Relation(Current-Item,X) and make X the Current-Item". This rule (let's call it Chain) would have the effect of chaining from object to object as long as no other rules had a higher (rhetorical) priority and the various "Relation"s of the respective Current-Items were salient enough to satisfy the rule's condition.
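A sketch of such a rule (illustrative Python, not GENARO's rule language; the relation triples and the salience threshold are invented):

    # The proposed Chain rule: if the Current-Item bears a salient
    # relation to some object X, propose Relation(Current-Item, X)
    # and make X the new Current-Item.

    def chain(current, relations, salience, rspec, threshold=4):
        for rel, src, tgt in relations:
            if src == current and salience[tgt] >= threshold:
                rspec.append((rel, src, tgt))
                return tgt                     # reset the Current-Item
        return current

    rels = [("in-front-of", "house", "fence"),
            ("in-front-of", "fence", "sculpture"),
            ("in-front-of", "sculpture", "street")]
    sal = {"fence": 6, "sculpture": 5, "street": 5}
    focus, rspec = "house", []
    for _ in range(3):
        focus = chain(focus, rels, sal, rspec)
    print(rspec)    # three chained "in-front-of" links, as in the example

Each firing here is a purely local decision, which is exactly the weakness taken up next.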
But this kind of chaining would only happen as the result of a happy series of the right local decisions -- each successful firing of Chain would be independent of the others. Furthermore, there would be no guarantee that the successive "Relation"s would be the same, as is the case in the above example. What is needed, perhaps, is to give Chain the ability to look at the structure of the evolving r-spec and to notice when there is an opportunity to build upon a structural parallel (e.g. X in front of Y, Y in front of Z). We are currently investigating ways to make this kind of structural parallel visible within r-specs and still maintain them as a concise and narrow-bandwidth channel between GENARO and MUMBLE.

VI. References

Appelt, D. Planning Natural Language Utterances to Satisfy Multiple Goals, Ph.D. Dissertation, Stanford University; to appear as a technical report from SRI International, 1982.

Clancey, W. (to appear) "The Epistemology of a Rule-Based Expert System: A Framework for Explanation", Journal of Artificial Intelligence; also available as Heuristic Programming Project Report 81-17, Stanford University, November 1981.

Cohen, P., On Knowing What to Say: Planning Speech Acts, University of Toronto, Technical Report 118, 1978.

Conklin, E. J. (in preparation) Ph.D. Dissertation, COINS, University of Massachusetts, Amherst, 01003.

Conklin, E. J. and Ehrlich, K. (in preparation) "An Investigation of Visual Salience", Technical Report, COINS, U. Mass., Amherst, Ma. 01003.

Hanson, A. R. and Riseman, E. M. "VISIONS: A Computer System for Interpreting Scenes", in Computer Vision Systems, Hanson, A. R. and Riseman, E. M. (eds), Academic Press, New York, pp 449-510, 1978.

Marcus, M. A Theory of Syntactic Recognition for Natural Language, MIT Press, Cambridge, Massachusetts, 1980.

McDonald, David D. "Language Generation: the source of the dictionary", in the Proceedings of the Annual Conference of the Association for Computational Linguistics, Stanford University, June, 1981.

McDonald, David D. "Natural Language Generation as a Computational Problem: an introduction" in Brady ed. "Computational Theories of Discourse", MIT Press, to appear, fall 1982.

McKeown, K., Generating Natural Language: What to Say Next, University of Pennsylvania, Technical Report MS-CIS-81-1, 1981.

Mann, W. and Moore, J. "Computer Generation of Multiparagraph Text", American Journal of Computational Linguistics, 7:1, Jan-Mar 1981, pp 17-29, 1981.

Parma, Cesare C., Hanson, A. R., and Riseman, E. M. "Experiments in Schema-Driven Interpretation of a Natural Scene", in Digital Image Processing, Simon, J. C. and Haralick, R. M. (eds), D. Reidel Publishing Co., Dordrecht, Holland, pp 303-334, 1980.

Swartout, W. Producing Explanations and Justifications of Expert Consulting Programs, Technical Report 251, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1981.

135 | 1982 | 30 |
A KNOWLEDGE ENGINEERING APPROACH TO NATURAL LANGUAGE UNDERSTANDING

Stuart C. Shapiro & Jeannette G. Neal
Department of Computer Science
State University of New York at Buffalo
Amherst, New York 14226

ABSTRACT

This paper describes the results of a preliminary study of a Knowledge Engineering approach to Natural Language Understanding. A computer system is being developed to handle the acquisition, representation, and use of linguistic knowledge. The computer system is rule-based and utilizes a semantic network for knowledge storage and representation. In order to facilitate the interaction between user and system, input of linguistic knowledge and computer responses are in natural language. Knowledge of various types can be entered and utilized: syntactic and semantic; assertions and rules. The inference tracing facility is also being developed as a part of the rule-based system, with output in natural language. A detailed example is presented to illustrate the current capabilities and features of the system.

I INTRODUCTION

This paper describes the results of a preliminary study of a Knowledge Engineering (KE) approach to Natural Language Understanding (NLU). The KE approach to an Artificial Intelligence task involves a close association with an expert in the task domain. This requires making it easy for the expert to add new knowledge to the computer system, to understand what knowledge is in the system, and to understand how the system is accomplishing the task so that needed changes and corrections are easy to recognize and to make. It should be noted that our task domain is NLU. That is, the knowledge in the system is knowledge about NLU and the intended expert is an expert in NLU.

The KE system we are using is the SNePS semantic network processing system [11]. This system includes a semantic network system in which all knowledge, including rules, is represented as nodes in a semantic network, an inference system that performs reasoning according to the rules stored in the network, and a tracing package that allows the user to follow the system's reasoning. A major portion of this study involves the design and implementation of a SNePS-based system, called the NL-system, to enable the NLU expert to enter linguistic knowledge into the network in natural language, to have this knowledge available to query and reason about, and to use this knowledge for processing text including additional NLU knowledge. These features distinguish our system from other rule-based natural language processing systems such as that of Pereira and Warren [9] and Robinson [10].

** This work was supported in part by the National Science Foundation under Grants MCS80-06314 and SPI-8019895.

One of the major concerns of our study is the acquisition of knowledge, both factual assertions and rules of inference. Since both types of knowledge are stored in similar form in the semantic network, our NL-system is being developed with the ability to handle the input of both types of knowledge, with this new knowledge immediately available for use. Our concern with the acquisition of both types of knowledge differs from the approach of Haas and Hendrix [1], who are pursuing only the acquisition of large aggregations of individual facts.

The benefit of our KE approach may be seen by considering the work of Lehnert [5]. She compiled an extensive list of rules concerning how questions should be answered. For example, when asked, "Do you know what time it is?", one should instead answer the question "What time is it?". Lehnert only implemented and tested some of her rules, and those required a programming effort. If a system like the one being proposed here had been available to her, Lehnert could have tested all her rules with relative ease.

Our ultimate goal is a KE system with all its linguistic knowledge as available to the language expert as domain knowledge is in other expert systems. In this preliminary study we explore the feasibility of our approach as implemented in our representations and NL-system.
Lehnert only implemented and tested some of her rules, and doing so required a programming effort. If a system like the one being proposed here had been available to her, Lehnert could have tested all her rules with relative ease.

Our ultimate goal is a KE system with all its linguistic knowledge as available to the language expert as domain knowledge is in other expert systems. In this preliminary study we explore the feasibility of our approach as implemented in our representations and NL-system.

II OVERVIEW OF THE NL-SYSTEM

A major goal of this study is the design and implementation of a user-friendly system for experimentation in KE applied to Natural Language Understanding. The NL-system consists of two logical components:

a) A facility for the input of linguistic knowledge into the semantic network in natural language. This linguistic knowledge primarily consists of rules about NLU and a lexicon. The NL-system contains a core of network rules which parse a user's natural language rule and build the corresponding structure in the form of a network rule. This NL-system facility enables the user to manipulate both the syntactic and semantic aspects of surface strings.

b) A facility for phrase/sentence generation and question answering via rules in the network. The user can pose a limited number of types of queries to the system in natural language, and the system utilizes rules to parse the query and generate a reply.

An inference tracing facility is also being developed which uses this phrase/sentence generation capability. This will enable the user to trace the inference processes, which result from the activation of his rules, in natural language.

When a person uses this NL-system for experimentation, there are two task domains co-resident in the semantic network. These domains are: (1) the NLU-domain, which consists of the collection of propositions and rules concerning Natural Language Understanding, including both the NL-system core rules and assertions and the user-specified rules and assertions; and (2) the domain of knowledge which the user enters and interacts with via the NLU domain. For this study, a limited "Bottle Domain" is used as the domain of type (2). This domain was chosen to let us experiment with the use of semantic knowledge to clarify, during parsing, the way one noun modifies another in a noun-noun construction, viz. "milk bottle" vs. "glass bottle".

In a sense, the task domain (2) is a sub-domain of the NLU-domain, since task domain (2) is built and used via the NLU-domain. However, the two domains interact when, for example, knowledge from both domains is used in understanding a sentence being "read" by the system. The system is dynamic, and new knowledge, relevant to either or both domains, can be added at any time.

III PRELIMINARIES FOR ENTERING RULES

The basic tools that the language expert will need to enter into the system are a lexicon of words and a set of processing rules. This system enables them to be input in natural language. The system initially uses five "undefined terms": L-CAT, S-CAT, L-REL, S-REL, and VARIABLE. L-CAT is a term which represents the category of all lexical categories, such as VERB and NOUN. S-CAT represents the category of all string categories, such as NOUN PHRASE or VERB PHRASE. L-REL is a term which represents the category of relations between a string and its lexical constituents. Examples of L-RELs might be MOD NOUN and HEAD NOUN (of a NOUN NOUN PHRASE).
S-REL represents the category of relations between a string and its sub-string constituents, such as FIRST NP and SECOND NP (to distinguish between two NPs within one sentence). VARIABLE is a term which represents the class of identifiers which the user will use as variables in his natural language rules.

Before entering his rules into the system, the user must inform the system of all members of the L-CAT and VARIABLE categories which he will use. Words in the S-CAT, L-REL and S-REL categories are introduced by the context of their use in user-specified rules. The choice of all linguistic names is totally at the discretion of the user. A list of the initial entries for the example of this paper is given below. The single quote mark indicates that the following word is mentioned rather than used. Throughout this paper, lines beginning with the symbol ** are entered by the user and the following line(s) are the computer response. In response to a declarative input statement, if the system has been able to parse the statement and build a structure in the semantic network to represent the input statement, then the computer replies with an echo of the user's statement prefaced by the phrase "I UNDERSTAND THAT". In other words, the building of a network structure is the system's "representation" of understanding.

** 'NOUN IS AN L-CAT.
I UNDERSTAND THAT 'NOUN IS AN L-CAT

** 'G-DETERMINER IS AN L-CAT.

(NOTE: Computer responses are omitted for these input statements due to space constraints of this paper. Responses are all similar to the one shown above.)

** 'RELATION IS AN L-CAT.
** 'E IS A VARIABLE.
** 'X IS A VARIABLE.
** 'Y IS A VARIABLE.
** 'ON IS A RELATION.
** 'A IS A G-DETERMINER.
** 'BOTTLE IS A NOUN.
** 'CONTAINER IS A NOUN.
** 'TABLE IS A NOUN.
** 'DESK IS A NOUN.
** 'BAR IS A NOUN.
** 'FLUID IS A MASS-NOUN.
** 'MATERIAL IS A MASS-NOUN.
** 'MILK IS A MASS-NOUN.
** 'WATER IS A MASS-NOUN.
** 'GLASS IS A MASS-NOUN.

IV THE CORE OF THE NL-SYSTEM

The core of the NL-system contains a collection of rules which accepts the language defined by the grammar listed in the Appendix. The core is responsible for parsing the user's natural language input statements and building the corresponding network structure. It is also necessary to start with a set of semantic network structures representing the basic relations the system can use for knowledge representation. Currently these relations are:

a) Word W is preceded by "connector point" P in a surface string; e.g. node M3 of figure 1 represents that word IS is preceded by connector point M2 in the string;
b) Lexeme L is a member of category C; e.g. this is used to represent the concept that 'BOTTLE IS A NOUN, which was input in Section 3;
c) The string beginning at point P1 and ending at point P2 in a surface string is in category C; e.g. node M53 of figure 3 represents the concept that "a bottle" is a GNP;
d) Item X has the relation R to item Y; e.g. node M75 of figure 1 represents the concept that the class of bottles is a subset of the class of containers;
e) A class is characterized by its members participating in some relation; e.g. the class of glass bottles is characterized by its members being made of glass;
f) The rule structures of SNePS.
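To make the flavor of these network structures concrete, the following is a minimal Python sketch of relations (b) and (d) above. It is only an illustration: the actual NL-system is built in SNePS, a LISP-based system, and the class, function, and arc names below are invented for this sketch rather than taken from the implementation.

class Network:
    """A toy store of proposition nodes, each a bundle of labeled arcs."""
    def __init__(self):
        self.nodes = {}
        self.counter = 0

    def assert_node(self, **arcs):
        # Build a new node whose outgoing arcs encode one proposition.
        self.counter += 1
        label = "M%d" % self.counter
        self.nodes[label] = dict(arcs)
        return label

    def find(self, **arcs):
        # Return labels of all nodes carrying the given arc/value pairs.
        return [l for l, a in self.nodes.items()
                if all(a.get(r) == v for r, v in arcs.items())]

net = Network()
# Relation (b): "'BOTTLE IS A NOUN" becomes a category-membership node.
net.assert_node(member="BOTTLE", category="NOUN")
# Relation (d): "A bottle is a container" becomes a subset proposition
# (the analogue of node M75 in the text).
m = net.assert_node(arg1="bottles", relation="subset", arg2="containers")
print(m, net.find(member="BOTTLE"))

Storing rules as nodes in the same network, as SNePS does, is what lets the system reason about its own linguistic knowledge; that aspect is not reproduced in this sketch.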
V SENTENTIAL COMPONENT REPRESENTATION

The representation of a surface string utilized in this study consists of a network version of the list structure used by Pereira and Warren [9], which eliminates the explicit "connecting" tags or markers of their alternate representation. This representation is also similar to Kay's charts [4] in that several structures may be built as alternative analyses of a single substring. The network structure built up by our top-level "reading" function, without any of the additional structure that would be added as a result of processing via rules of the network, is illustrated in figure 1.

[Figure not reproduced: a chain of word nodes linked by PRED arcs.]
Figure 1. Network representation of a sentence.

As each word of an input string is read by the system, the network representation of the string is extended and relevant rules stored in the SNePS network are triggered. All applicable rules are started in parallel by processes of our MULTI package [8], are suspended if not all their antecedents are satisfied, and are resumed if more antecedents are satisfied as the string proceeds. The SNePS bidirectional inference capability [6] focuses attention towards the active parsing processes and cuts down the fan-out of pure forward or backward chaining. The system has many of the attributes and benefits of Kaplan's producer-consumer model [3], which influenced the design of the inference system. The two SNePS subsystems, the MULTI inference system and the MATCH subsystem, provide the user with the pattern matching and parse suspension and continuation capability enjoyed by the Flexible Parser of Hayes & Mouradian [2].

VI INPUT AND PROCESSING OF THE USER'S RULES

After having entered a lexicon into the system as described above, the user will enter his natural language rules. These rules must be in the IF-THEN conditional form. A sample rule that the user might enter is:

IF A STRING CONSISTS OF A G-DETERMINER FOLLOWED BY A NOUN CALLED THE MOD-NOUN FOLLOWED BY ANOTHER NOUN CALLED THE HEAD-NOUN THEN THE STRING IS AN NNP.

The words which are underlined in the above rule are terms selected by the user for certain linguistic entities. The lexical category names such as G-DETERMINER and NOUN must be entered previously, as discussed above. The words MOD-NOUN and HEAD-NOUN specify lexical constituents of a string, and therefore the system adds them to the L-REL category. The string name NNP is added to the S-CAT category by the system.

The user's rule-statement is read by the system and processed by existing rules as described above. When it has been completely analyzed, a translation of the rule-statement is asserted in the form of a network rule structure. This rule is then available to analyze further user inputs.

The form of these user rules is determined by the design of our initial core of rules. We could, of course, have written rules which accept user rules of the form NNP ---> G-DETERMINER NOUN NOUN. Notice, however, that most of the user rules of this section contain more information than such simple phrase-structure rules.

Figure 2 contains the list of the user natural language rules which are used as input to the NL-system in the example developed for this paper. These rules illustrate the types of rules which the system can handle. By adding the rules of figure 2 to the system, we have enhanced the ability of the NL-system to "understand" surface strings when "read" into the network.

1. ** IF A STRING CONSISTS OF A MASS-NOUN
* THEN THE STRING IS A GNP
* AND THE GNP EXPRESSES THE CONCEPT NAMED BY THE MASS-NOUN.
I UNDERSTAND THAT IF A STRING CONSISTS OF A MASS-NOUN THEN THE STRING IS A GNP AND THE GNP EXPRESSES THE CONCEPT NAMED BY THE MASS-NOUN

2. ** IF A STRING CONSISTS OF A G-DETERMINER FOLLOWED BY A NOUN
* THEN THE STRING IS A GNP
* AND THE GNP EXPRESSES THE CONCEPT NAMED BY THE NOUN.
(NOTE: Computer responses are omitted for these rules due to space constraints of this paper. Responses are exemplified by the response to the first rule above.)

3. ** IF A STRING CONSISTS OF A G-DETERMINER FOLLOWED BY A NOUN CALLED
* THE MOD-NOUN FOLLOWED BY ANOTHER NOUN CALLED THE HEAD-NOUN
* THEN THE STRING IS AN NNP.

4. ** IF A STRING CONSISTS OF AN NNP
* THEN THERE EXISTS A CLASS E SUCH THAT
* THE CLASS E IS A SUBSET OF THE CLASS NAMED BY THE HEAD-NOUN
* AND THE NNP EXPRESSES THE CLASS E.

5. ** IF A STRING CONSISTS OF AN NNP
* AND THE NNP EXPRESSES THE CLASS E
* AND THE CLASS NAMED BY THE MOD-NOUN IS A SUBSET OF MATERIAL
* AND THE CLASS NAMED BY THE HEAD-NOUN IS A SUBSET OF CONTAINER
* THEN THE CHARACTERISTIC OF E IS TO BE MADE-OF THE ITEM NAMED
* BY THE MOD-NOUN.

6. ** IF A STRING CONSISTS OF AN NNP
* AND THE NNP EXPRESSES THE CLASS E
* AND THE CLASS NAMED BY THE MOD-NOUN IS A SUBSET OF FLUID
* AND THE CLASS NAMED BY THE HEAD-NOUN IS A SUBSET OF CONTAINER
* THEN THE FUNCTION OF E IS TO BE CONTAINING THE ITEM NAMED BY THE
* MOD-NOUN.

7. ** IF A STRING CONSISTS OF A GNP CALLED THE FIRST-GNP FOLLOWED BY
* THE WORD 'IS FOLLOWED BY A GNP CALLED THE SECOND-GNP
* THEN THE STRING IS A DGNP-SNTC.

8. ** IF A STRING CONSISTS OF A DGNP-SNTC
* THEN THE CLASS NAMED BY THE FIRST-GNP IS A SUBSET OF THE CLASS
* NAMED BY THE SECOND-GNP
* AND THE DGNP-SNTC EXPRESSES THIS LAST PROPOSITION.

9. ** IF A STRING CONSISTS OF AN NNP FOLLOWED BY THE WORD 'IS
* FOLLOWED BY A RELATION FOLLOWED BY A GNP
* THEN THE STRING IS A SENTENCE
* AND THERE EXISTS AN ITEM X AND THERE EXISTS AN ITEM Y
* SUCH THAT THE ITEM X IS A MEMBER OF THE CLASS NAMED BY THE NNP
* AND THE ITEM Y IS A MEMBER OF THE CLASS NAMED BY THE GNP
* AND THE ITEM X HAS THE RELATION TO THE ITEM Y
* AND THE SENTENCE EXPRESSES THIS LAST PROPOSITION.

10. ** IF THE FUNCTION OF E IS TO BE CONTAINING THE ITEM X
* AND Y IS A MEMBER OF E
* THEN THE FUNCTION OF Y IS TO BE CONTAINING THE ITEM X.

11. ** IF THE CHARACTERISTIC OF E IS TO BE MADE OF THE ITEM X
* AND Y IS A MEMBER OF E
* THEN THE CHARACTERISTIC OF Y IS TO BE MADE OF THE ITEM X.

Figure 2. The rules used as input to the system.

If we examine rules 1 and 2, for example, we find they define a GNP (a generic noun phrase). Rules 4, 8, and 9 stipulate that a relationship exists between a surface string and the concept or proposition which is its intension. This relationship we denote by "expresses". When these rules are triggered, they will not only build syntactic information into the network categorizing the particular string that is being "read" in, but will also build a semantic node representing the relationship "expresses" between the string and the node representing its intension. Thus, both semantic and syntactic concepts are built and linked in the network.

In contrast to rules 1 - 9, rules 10 and 11 are purely semantic, not syntactic. The user's rules may deal with syntax alone, semantics alone, or a combination of both. All knowledge possessed by the system resides in the same semantic network, and therefore both the rules of the NL-system core and the user's rules can be triggered if their antecedents are satisfied. Thus the user's rules can be used not only for the input of surface strings concerning the task domain (2) discussed in Section 2, but also for enhancing the NL-system's capability of "understanding" input information relative to the NLU domain.
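To suggest how such a user rule might behave once translated, here is a minimal Python rendering of rule 3 as a condition-action pair. This is only a sketch under simplifying assumptions: the real system matches rules against the network representation of the string, not a token list, and the lexicon contents and function names here are invented.

LEXICON = {"a": "G-DETERMINER", "milk": "NOUN", "bottle": "NOUN"}

def rule3_condition(tokens):
    # IF a string consists of a G-DETERMINER followed by a NOUN (the
    # MOD-NOUN) followed by another NOUN (the HEAD-NOUN) ...
    if len(tokens) == 3:
        if [LEXICON.get(t) for t in tokens] == ["G-DETERMINER", "NOUN", "NOUN"]:
            return {"MOD-NOUN": tokens[1], "HEAD-NOUN": tokens[2]}
    return None

def rule3_action(tokens, bindings):
    # ... THEN the string is an NNP.
    return ("NNP", tokens, bindings)

tokens = "a milk bottle".split()
bindings = rule3_condition(tokens)
if bindings is not None:
    print(rule3_action(tokens, bindings))
# -> ('NNP', ['a', 'milk', 'bottle'], {'MOD-NOUN': 'milk', 'HEAD-NOUN': 'bottle'})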
VII PROCESSING ILLUSTRATION

Assuming that we have entered the lexicon via the statements shown in Section 3 and have entered the rules listed in Section 6, we can input a sentence such as "A bottle is a container". Figure 3 illustrates the network representation of the surface string "A bottle is a container" after having been processed by the user's rules listed in Section 6. Rule 2 would be triggered and would identify "a bottle" and "a container" as GNPs, building nodes M53, M55, M61, and M63 of figure 3. Then the antecedent of rule 7 would be satisfied by the sentence, since it consists of a GNP, namely "a bottle", followed by the word "is", followed by a GNP, namely "a container". Therefore the node M90 of figure 3 would be built, identifying the sentence as a DGNP-SNTC. The addition of this knowledge would trigger rule 8, and node M75 of figure 3 would be built, asserting that the class named "bottle" is a subset of the class named "container". Furthermore, node M91 would be built, asserting that the sentence EXPRESSES the above stated subset proposition.

Let us now input additional statements to the system. As each sentence is added, node structures are built in the network concerning both the syntactic properties of the sentence and the underlying semantics of the sentence. Each of these structures is built into the system only, however, if it is the consequence of the triggering of one of the expert's rules. We now add three sentences (preceded by the **) and the program response is shown for each.

** A BOTTLE IS A CONTAINER.
I UNDERSTAND THAT A BOTTLE IS A CONTAINER

[Figure not reproduced.]
Figure 3. Network representation of processed surface string.

** MILK IS A FLUID.
I UNDERSTAND THAT MILK IS A FLUID

** GLASS IS A MATERIAL.
I UNDERSTAND THAT GLASS IS A MATERIAL

Each of the above input sentences is parsed by the rules of Section 6, identifying the various noun phrases and sentence structures, and a particular semantic subset relationship is built corresponding to each sentence. We can now query the system concerning the information just added, and the core rules will process the query. The query is parsed, an answer is deduced from the information now stored in the semantic network, and a reply is generated from the network structure which represents the assertion of the subset relationship built corresponding to each of the above input statements. The next section discusses the question-answering/generation facility in more detail.

** WHAT IS A BOTTLE?
A BOTTLE IS A CONTAINER

Now we input the sentence "A milk bottle is on a table". The rules involved are rules 2, 3, 4, 6, 9, and 10. The phrase "a milk bottle" triggers rule 3, which identifies it as an NNP (noun-noun phrase). Then, since the string has been identified as an NNP, rule 4 is triggered, a new class is created, and the new class is a subset of the class representing bottles. Rule 6 is also triggered by the addition of the instances of the consequents of rules 3 and 4 and by our previous input sentences asserting that "A bottle is a container" and "Milk is a fluid". As a result, additional knowledge is built into the network concerning the new sub-class of bottles: the function of this new class is to contain milk. Then, since "a table" satisfies the conditions for rule 2, it is identified as a GNP, rule 9 is finally triggered, and a structure is built into the network representing the concept that a member of the set of bottles for containing milk is on a member of the set of tables.
The antecedents of rule 10 are satisfied by this member of the set of bottles for containing milk, and an assertion is added to the effect that the function of this member is also to contain milk. The computer responds "I UNDERSTAND THAT . . ." only when a structure has been built which the sentence EXPRESSES.

** A MILK BOTTLE IS ON A TABLE.
I UNDERSTAND THAT A MILK BOTTLE IS ON A TABLE

In order to further ascertain whether the system has understood the input sentence, we can query the system as follows. The system's core rules again parse the query, deduce the answer, and generate a phrase to express the answer.

** WHAT IS ON A TABLE?
A BOTTLE FOR CONTAINING MILK

We now input the sentence "A glass bottle is on a desk" to be parsed and processed by the rules of Section 6. Processing of this sentence is similar to that of the previous sentence, except that rule 5 will be triggered instead of rule 6, since the system has been informed that glass is a material. Since the string "a glass bottle" is a noun-noun phrase, glass is a subset of material, and bottle is a subset of container, a new class is created which is a subset of bottles, and the characteristic of this class is to be made of glass. The remainder of the sentence is processed in the same way as the previous input sentence, until finally a structure is built to represent the proposition that a member of the set of bottles made of glass is on a member of the set of desks. Again, this proposition is linked to the input sentence by an EXPRESSES relation. When we input the sentence (again preceded by the **) to the system, it responds with its conclusion as shown here.

** A GLASS BOTTLE IS ON A DESK.
I UNDERSTAND THAT A GLASS BOTTLE IS ON A DESK

To make sure that the system understands the difference between "glass bottle" and "milk bottle", we query the system relative to the item on the desk:

** WHAT IS ON A DESK?
A BOTTLE MADE OF GLASS

We now try "A water bottle is on a bar", but the system cannot fully understand this sentence since it has no knowledge about water. We have not told the system whether water is a fluid or a material. Therefore, rules 3 and 4 are triggered, and a node is built to represent this new class of bottles, but no assertion is built concerning the properties of these bottles. Since only three of the four antecedents of rule 6 are satisfied, processing of this rule is suspended. Rule 9 is triggered, however, since all of its antecedents are satisfied, and therefore an assertion is built into the network representing the proposition that a member of a subset of bottles is on a member of the class of bars. Thus the system replies that it has understood the input sentence, but it really has not fully understood the phrase "a water bottle", as we can see when we query the system. It does not respond that it is "a bottle for containing water".

** A WATER BOTTLE IS ON A BAR.
I UNDERSTAND THAT A WATER BOTTLE IS ON A BAR

** WHAT IS ON A BAR?
A BOTTLE

Essentially, the phrase "water bottle" is ambiguous for the system. It might mean "bottle for containing water", "bottle made of water", or something else. The system's "representation" of this ambiguity is the suspended rule processing. Meanwhile the parts of the sentence which are "comprehensible" to the system have been processed and stored. After we tell the system "Water is a fluid", the system resumes its processing of rule 6, and an assertion is established in the network representing the concept that the function of this latest class of bottles is to contain water. The ambiguity is resolved by rule processing being completed in one of the ways which were previously possible.
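The suspension and resumption just described can be suggested by a small Python sketch. It is a drastic simplification of MULTI's process-based bidirectional inference, offered only as an illustration; the fact strings and rule name below are invented for this sketch.

facts = set()
suspended = []   # pending rule instances: (name, missing antecedents, consequent)

def add_rule_instance(name, antecedents, consequent):
    # A rule instance with unsatisfied antecedents is suspended, not discarded.
    missing = {a for a in antecedents if a not in facts}
    if missing:
        suspended.append((name, missing, consequent))
    else:
        assert_fact(consequent)

def assert_fact(fact):
    facts.add(fact)
    print("asserted:", fact)
    # New facts may resume suspended instances, even across sentences.
    for inst in list(suspended):
        name, missing, consequent = inst
        if fact in missing:
            missing.discard(fact)
            if not missing:
                suspended.remove(inst)
                assert_fact(consequent)

# Processing "A water bottle is on a bar" instantiates rule 6 with one
# antecedent still unsatisfied:
facts.update({"string is NNP", "NNP expresses class E",
              "bottle subset-of container"})
add_rule_instance("rule-6",
                  ["string is NNP", "NNP expresses class E",
                   "water subset-of fluid", "bottle subset-of container"],
                  "function of E is containing water")
# The later input "Water is a fluid" completes the suspended instance:
assert_fact("water subset-of fluid")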
We can then query the system to show its understanding of what type of bottle is on the bar.

** WATER IS A FLUID.
I UNDERSTAND THAT WATER IS A FLUID

** WHAT IS ON A BAR?
A BOTTLE FOR CONTAINING WATER

This example demonstrates two features of the system: 1) The combined use of syntactic and semantic information in the processing of surface strings. This feature is one of the primary benefits of having not only syntactic and semantic, but also hybrid rules. 2) The use of bi-directional inference to use later information to process or disambiguate earlier strings, even across sentence boundaries.

VIII QUESTION-ANSWERING/GENERATION

The question-answering/generation facility of the NL-system, mentioned briefly in Section 2, is completely rule-based. When a query such as "What is a bottle?" is entered into the system, the sentence is parsed by rules of the core in conjunction with user-defined rules. That is, rule 2 of Section 6 would identify "a bottle" as a GNP, but the top level parse of the input string is accomplished by a core rule. The syntax and corresponding semantics designated by rules 7 and 8 of Section 6 form the basis of the core rule. Our current system does not enable the user to specify the syntax and semantics of questions, so the core rules which define the syntax and consequents of a question were coded specifically for the example of this paper; we intend to pursue this issue in the future. Currently, the two types of questions that our system can process are:

WHAT IS <NP> ?
WHAT IS <RELATION> <NP> ?

Upon successful parse of the query, the system engages in a deduction process to determine which set is a superset of the set of bottles. This process can either find an assertion in the network answering the query or, if necessary, the process can utilize bi-directional inference, initiated in backward-chaining mode, to deduce an answer. In this instance, the network structure dominated by node M75 of figure 3 is found as the answer to the query. This structure asserts that the set of bottles is a subset of the set of containers.

Another deduction process is now initiated to generate a surface string to express this structure. For the purpose of generation, we have deliberately not used the input strings which caused the semantic network structures to be built. If we had deduced a string which EXPRESSES node M75, the system would simply have found and repeated the sentence represented by node M90 of figure 3. We plan to make use of these surface strings in future work, but for this study, we have employed a second "expresses" relation, which we call EXPRESS-2, and rules of the core to generate surface strings to express semantic structures.

[Figure not reproduced.]
Figure 4. Network representation of a generated surface string.

Figure 4 illustrates the network representation of the surface string generated for node M75. The string "A bottle", dominated by node M221, is generated for node M54 of figure 3, expressing an arbitrary member of the set of bottles. The string "a container", dominated by node M223, is generated to express the set of containers, represented by node M62 of figure 3. Finally, the surface string "A bottle is a container", represented by node M226, is established to express node M75 and the answer to the query.

In general, a surface sentence is generated to EXPRESS-2 a given semantic structure by first generating strings to EXPRESS-2 the sub-structures of the semantic structure and by assembling these strings into a network version of a list. Thus the semantic structure is processed in a bottom-up fashion. The structure of the generated string is a phrase-structured representation utilizing FIRST and REST pointers to the sub-phrases of a string. This representation reflects the subordinate relation of a phrase to its "parent" phrase. The structures pointed to by the FIRST and REST arcs can be a) another list structure with FIRST and REST pointers; b) a string represented by a node such as M90 of figure 3 with BEG, END, and CAT arcs; or c) a node with a WORD arc to a word and an optional PRED arc to another node with PRED and WORD arcs. After the structure representing the surface string has been generated, the resulting list or tree is traversed and the leaf nodes are printed as the response.
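A minimal Python sketch of this last step -- traversing a FIRST/REST structure and printing its leaves -- follows. The nested-dictionary encoding is an invention of this sketch (the actual structures are network nodes with FIRST and REST arcs), and it shows only leaf words, omitting the BEG/END/CAT and PRED node varieties described above.

def leaves(node):
    # Yield the words at the leaves of a FIRST/REST structure, left to right.
    if node is None:
        return
    if isinstance(node, dict):
        yield from leaves(node.get("FIRST"))
        yield from leaves(node.get("REST"))
    else:
        yield node   # a leaf word

# "A bottle is a container" as a phrase-structured answer (cf. Figure 4):
answer = {"FIRST": {"FIRST": "A", "REST": "bottle"},
          "REST": {"FIRST": "is",
                   "REST": {"FIRST": "a", "REST": "container"}}}
print(" ".join(leaves(answer)))   # -> A bottle is a container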
IX CONCLUSIONS

Our goal is to design an NLU system for a linguistic theorist to use for language processing. The system's linguistic knowledge should be available to the theorist as domain knowledge. As a result of our preliminary study of a KE approach to Natural Language Understanding, we have gained valuable experience with the basic tools and concepts of such a system. All aspects of our NL-system have, of course, undergone many revisions and refinements during development and will most likely continue to do so. During the course of our study, we have:

a) developed two representations of a surface string: 1) a linear representation appropriate for input strings, as shown in figure 1; and 2) a phrase-structured representation appropriate for generation, shown in figure 4;
b) designed a set of SNePS rules which are capable of analyzing the user's natural language input rules and building the corresponding network rules;
c) identified basic concepts essential for linguistic analysis: lexical category, phrase category, relation between a string and a lexical constituent, relation between a string and a sub-string, the expresses relations between syntactic structures and semantic structures, and the concept of a variable that the user may wish to use in input rules;
d) designed a set of SNePS rules which can analyze some simple queries and generate a response.

X FUTURE DIRECTION

As our system has evolved, we have striven to reduce the amount of core knowledge which is essential for the system to function. We want to enable the user to define the language processing capabilities of the system, but a basic core of rules is essential to process the user's initial lexicon entries and rules. One of our high priority items for the immediate future is to pursue this issue. Our objective is to develop the NL-system into a boot-strap system to the greatest degree possible. That is, with a minimal core of pre-programmed knowledge, the user will input rules and assertions to enhance the system's capability to acquire both linguistic and non-linguistic knowledge. In other words, the user will define his own input language for entering knowledge into the system and conversing with the system. Another topic of future investigation will be the feasibility of extending the user's control over the system's basic tools by enabling the user to define the network case frames for syntactic and semantic knowledge representation.
We also intend to extend the capability of the system so as to enable the user to define the syntax of questions and the nature of the response.

XI SUMMARY

This study explores the realm of a Knowledge Engineering approach to Natural Language Understanding. A basic core of NL rules enables the NLU expert to input his natural language rules and his lexicon into the semantic network knowledge base in natural language. In this system, the rules and assertions concerning both semantic and syntactic knowledge are stored in the network and undergo interaction during the deduction processes. An example was presented to illustrate: entry of the user's lexicon into the system; entry of the user's natural language rule statements into the system; the types of rule statements which the user can utilize; how rules build conceptual structures from surface strings; the use of knowledge for disambiguating surface structure; the use of later information for disambiguating an earlier, partially understood sentence; and the question-answering/generation facility of the NL-system.

REFERENCES

1. Haas, N. & Hendrix, G. G., "An Approach to Acquiring and Applying Knowledge", Proceedings of the AAAI, pp. 235-239, 1980.
2. Hayes, P. & Mouradian, G., "Flexible Parsing", Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, pp. 97-103, 1980.
3. Kaplan, R. M., "A Multi-processing Approach to Natural Language", Proceedings of the National Computer Conference, AFIPS Press, Montvale, NJ, pp. 435-440, 1973.
4. Kay, M., "The Mind System", in R. Rustin, ed., Natural Language Processing, Algorithmics Press, New York, pp. 153-188, 1973.
5. Lehnert, W. G., The Process of Question Answering, Lawrence Erlbaum, Hillsdale, NJ, 1978.
6. Martins, J., McKay, D. P., & Shapiro, S. C., Bi-directional Inference, Technical Report No. 174, Department of Computer Science, SUNY at Buffalo, 1981.
7. McCord, M. C., Using Slots and Modifiers in Logic Grammars for Natural Language, Technical Report No. 69A-80, Univ. of Kentucky, rev. October 1980.
8. McKay, D. P. & Shapiro, S. C., "MULTI - A LISP Based Multiprocessing System", Conference Record of the 1980 LISP Conference, Stanford Univ., pp. 29-37, 1980.
9. Pereira, F. C. N. & Warren, D. H. D., "Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks", Artificial Intelligence, pp. 231-278, 1980.
10. Robinson, J. J., "DIAGRAM, A Grammar for Dialogues", CACM, pp. 27-47, January 1982.
11. Shapiro, S. C., "The SNePS Semantic Network Processing System", in N. Findler, ed., Associative Networks - The Representation and Use of Knowledge by Computers, Academic Press, New York, pp. 179-203, 1979.
12. Shapiro, S. C., "Generalized Augmented Transition Network Grammars for Generation from Semantic Networks", Proceedings of the 17th Annual Meeting of the Association for Computational Linguistics, pp. 25-29, 1979.

XII APPENDIX - NL CORE GRAMMAR

The following grammar is a definitive description of the language in which the user can enter linguistic statements into the semantic network. The Backus-Naur Form (BNF) grammar is used in this language definition.
Notational conventions:
- A phrase in lower case letters explains the word required by the user
- Standard grammar metasymbols: <> enclose nonterminal items; | for alternation; [] enclose optional items; () for grouping; space represents concatenation
- Concatenation has priority over alternation

<LEX-STMT> ::= '<WORD> IS (A|AN) (L-CAT|<L-CAT-MEMBER>)
<RULE> ::= IF <ANT-STMT> THEN <CQ-STMT>
<ANT-STMT> ::= <ANT-STMT> AND <ANT-STMT>
    | A STRING CONSISTS OF <STR-DESCRIPTION>
    | <STMT>
<CQ-STMT> ::= <CQ-STMT> AND <CQ-STMT>
    | THE STRING IS <G-DET> <STRING-NAME>
    | THERE EXISTS A <CONCEPT-WORD> <VAR>
    | <STMT>
<STMT> ::= <CL-REF> <REL-REF> <CL-REF>
    | THE <STRING-NAME> EXPRESSES <CL-REF>
    | THE <STRING-NAME> EXPRESSES THIS LAST PROPOSITION
    | THE <FUN-CHAR-WORD> OF <CL-REF> IS TO BE <FUN-CHAR-VERB> <CL-REF>
<STR-DESCRIPTION> ::= <STR-DESCRIPTION> FOLLOWED BY <STR-DESCRIPTION>
    | <G-DET> <LEX-NAME> [<LABEL-PHRASE>]
    | THE WORD '<LITERAL>
<LABEL-PHRASE> ::= CALLED <DET> <LABEL>
<LEX-NAME> ::= any lexical category name
<LABEL> ::= any name or label
<STRING-NAME> ::= any string category name
<REL-REF> ::= IS A (SUBSET|MEMBER) OF | HAS THE <REL-WORD> TO
<CL-REF> ::= THE <CONCEPT-WORD> <VAR>
    | THE CLASS NAMED BY THE <NAME>
    | a member of an L-CAT category
<FUN-CHAR-WORD> ::= (FUNCTION|CHARACTERISTIC)
<FUN-CHAR-VERB> ::= any verb
<NAME> ::= name of a string phrase or the constituent of a string phrase
<VAR> ::= any member of the category VARIABLE
<G-DET> ::= A | AN | ANOTHER
<DET> ::= <G-DET> | THE
<REL-WORD> ::= a member of L-CAT which should denote "relation"
<WORD> ::= any word

144 | 1982 | 31 |
A Model of Early Syntactic Development

Pat Langley
The Robotics Institute
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213 USA

ABSTRACT

AMBER is a model of first language acquisition that improves its performance through a process of error recovery. The model is implemented as an adaptive production system that introduces new condition-action rules on the basis of experience. AMBER starts with the ability to say only one word at a time, but adds rules for ordering goals and producing grammatical morphemes, based on comparisons between predicted and observed sentences. The morpheme rules may be overly general and lead to errors of commission; such errors evoke a discrimination process, producing more conservative rules with additional conditions. The system's performance improves gradually, since rules must be relearned many times before they are used. AMBER'S learning mechanisms account for some of the major developments observed in children's early speech.

1. Introduction

In this paper, I present a model that attempts to explain the regularities in children's early syntactic development. The model is called AMBER, an acronym for Acquisition Model Based on Error Recovery. As its name implies, AMBER learns language by comparing its own utterances to those of adults and attempting to correct any errors. The model is implemented as an adaptive production system - a formalism well-suited to modeling the incremental nature of human learning. AMBER focuses on issues such as the omission of content words, the occurrence of telegraphic speech, and the order in which function words are mastered. Before considering AMBER in detail, I will first review some major features of child language, and discuss some earlier models of these phenomena.

Children do not learn language in an all-or-none fashion. They begin their linguistic careers uttering one word at a time, and slowly evolve through a number of stages, each containing more adult-like speech than the one before. Around the age of one year, the child begins to produce words in isolation, and continues this strategy for some months. At approximately 18 months, the child begins to combine words into meaningful sequences. In order-based languages such as English, the child usually follows the adult order. Initially only pairs of words are produced, but these are followed by three-word and later by four-word utterances. The simple sentences occurring in this stage consist almost entirely of content words, while grammatical morphemes such as tense endings and prepositions are largely absent.

During the period from about 24 to 40 months, the child masters the grammatical morphemes which were absent during the previous stage. These "function words" are learned gradually; the time between the initial production of a morpheme and its mastery may be as long as 16 months. Brown (1973) has examined the order in which 14 English morphemes are acquired, finding the order of acquisition to be remarkably consistent across children. In addition, those morphemes with simpler meanings and involved in fewer transformations are learned earlier than more complex ones. These findings place some strong constraints on the learning mechanisms one postulates for morpheme acquisition. Now that we have reviewed some of the major aspects of child language, let us consider the earlier attempts at modeling these phenomena.
Computer programs that learn language can be usefully divided into two groups: those which take advantage of semantic feedback, and those which do not. In general, the early work concerned itself with learning grammars in the absence of information about the meaning of sentences. Examples of this approach can be found in Solomonoff (1959), Feldman (1969) and Horning (1969). Since children almost certainly have semantic information available to them, I will not focus on their research here. However, much of the early work is interesting in its own right, and some excellent systems along these lines have recently been produced by Berwick (1980) and Wolff (1980).

In the late 1960's, some researchers began to incorporate semantic information into their language learning systems. The majority of the resulting programs showed little concern with the observed phenomena, including Siklossy's ZBIE (1972), Klein's AUTOLING (1973), Hedrick's production system model (1976), Anderson's LAS (1977), and Sembugamoorthy's PLAS (1979). These systems failed as models of human language acquisition in two major areas. First, they learned language in an all-or-none manner, and much too rapidly to provide useful models of child language. Second, these systems employed conservative learning strategies in the hope of avoiding errors. In contrast, children themselves make many errors in their early constructions, but eventually recover from them.

However, a few researchers have attempted to construct plausible models of the child's learning process. For example, Kelley (1967) has described an "hypothesis testing" model that learned successively more complex phrase structure grammars for parsing simple sentences. As new syntactic classes became available, the program rejected its current grammar in favor of a more accurate one. Thus, the model moved from a stage in which individual words were viewed as "things" to the more sophisticated view that "subjects" precede "actions". One drawback of the model was that it could not learn new categories on its own initiative; instead, the author was forced to introduce them manually.

Reeker (1976) has described PST, another theory of early syntactic development. This model assumed that children have limited short term memories, so that they store only portions of an adult sample sentence. The model compared this reduced sentence to an internally generated utterance, and differences between the two were noted. Six types of differences were recognized (missing prefixes, missing suffixes, missing infixes, substitutions, extra words, and transpositions), and each led to an associated alteration of the grammar. PST accounted for children's omission of content words and the gradual increase in utterance length. The limited memory hypothesis also explained the telegraphic nature of early speech, though Reeker did not address the issue of function word acquisition. Overgeneralizations did occur in PST, but the model could revise its grammar upon their discovery, so as to avoid similar errors in the future. PST also helped account for the incremental nature of language acquisition, since differences were addressed one at a time and the grammar changed only slowly.

Selfridge (1981) has described CHILD, another program that attempted to explain some of the basic phenomena of first language acquisition. This system began by learning the meanings of words in terms of a conceptual dependency representation.
Word meanings were initially overly specific, but were generalized as more examples were encountered. As more words were learned and their definitions became less restrictive, the length of CHILD'S utterances increased. CHILD differed from other models of language learning by incorporating a non-linguistic component. This enabled the system to correctly respond to adult sentences such as "Put the ball in the box", and led to the appearance that the system understood language before it could produce it. Of course, this strategy sometimes led to errors in comprehension. Coupled with the disapproval of a tutor, such errors were one of the major spurs to the learning of word orders. Syntactic knowledge was stored with the meanings of words, so that the acquisition of syntax necessarily occurred after the acquisition of individual words.

Although these systems fare much better as psychological models than other language learning programs, they have some important limitations. We have seen that Kelley's system required syntactic classes to be introduced by hand, making his explanation less than satisfactory. Selfridge's CHILD was much more robust than Kelley's program, and was unique in modeling children's use of nonlinguistic cues for understanding. However, CHILD'S explanation for the omission of content words - that those words are not yet known - was implausible, since children often omit words that they have used in previous utterances. Reeker's PST explained this phenomenon through a limited memory hypothesis, which is consistent with our knowledge of children's memory skills. Still, PST included no model of the process through which memory improved; in order to simulate the acquisition of longer constructions, Reeker would have had to increase the system's memory size by hand. Both CHILD and PST learned relatively slowly, and made mistakes of the general type observed with children. Both systems addressed the issue of error recovery, starting off as abominable language users, but getting progressively better with time. This is a promising approach, and I attempt to develop it in its extreme form in the following pages.

2. An Overview of AMBER

Although Reeker's PST and Selfridge's CHILD address the transition from one-word to multi-word utterances, we have seen that problems exist with both accounts. Neither of these programs focus on the acquisition of function words, their explanations of content word omissions leave something to be desired, and though they learn more slowly than other systems, they still learn more rapidly than children. In response to these limitations, the goals of the current research are:

• Account for the omission of content words, and the eventual recovery from such omissions.
• Account for the omission of function words, and the order in which these morphemes are mastered.
• Account for the gradual nature of both these linguistic developments.

In this section I provide an overview of AMBER, a model that provides one set of answers to these questions. Since more is known about children's utterances than their ability to understand the utterances of others, AMBER models the learning of generation strategies, rather than strategies for understanding language. Selfridge's and Reeker's models differ from other language learning systems in their concern with the problem of recovering from errors. The current research extends this idea even further, since all of AMBER'S learning strategies operate through a process of error recovery. [1]
The model is presented with three pieces of information: a legal sentence, an event to be described, and a main goal or topic of the sentence. An event is represented as a semantic network, using relations like agent, action, object, size, color, and type. The specification of one of the nodes as the main topic allows the system to restate the network as a tree structure, and it is from this tree that AMBER generates a sentence. If this sentence is identical to the sample sentence, no learning is required. If a disagreement between the two sentences is found, AMBER modifies its set of rules in an attempt to avoid similar errors in the future, and the system moves on to the next example.

AMBER'S performance system is stated as a set of condition-action rules or productions that operate upon the goal tree to produce utterances. [2] Although the model starts with the potential for producing (unordered) telegraphic sentences, it can initially generate only one word at a time. To see why this occurs, we must consider the three productions that make up AMBER'S initial performance system. The first rule (the start rule) is responsible for establishing subgoals; it may be paraphrased as:

START
If you want to describe node1,
and node2 is in relation to node1,
then describe node2.

Matching first against the main goal node, this rule selects one of the nodes below it in the tree and creates a subgoal to describe that node. This rule continues to establish lower level goals until a terminal node is reached. At this point, a second production (the speak rule) is matched; this rule may be stated:

SPEAK
If you want to describe a concept,
and word is the word for concept,
then say word and note that concept has been described.

This production retrieves the word for the concept AMBER wants to describe, actually says this word, and marks the terminal goal as satisfied. Once this has been done, the third and final performance production becomes true. This rule matches whenever a subgoal has been satisfied, and attempts to mark the supergoal as satisfied; it may be paraphrased as:

STOP
If you want to describe node1,
and node2 is in relation to node1,
and node2 has already been described,
then note that node1 has been described.

Since the stop rule is stronger [3] than the start rule (which would like to create another subgoal), it moves back up the tree, marking each of the active goals as satisfied (including the main goal). As a result, AMBER believes it has successfully described an event after it has uttered only a single word. Thus, although the model starts with the potential for producing multi-word utterances, it must learn additional rules (and make them stronger than the stop rule) before it can generate multiple content words in the correct order.

[1] In spirit, AMBER is very similar to Reeker's model, though they differ in many details. Historically, PST had no impact on the development of AMBER. The initial plans for AMBER arose from discussions with John R. Anderson in the fall of 1979, while I did not become aware of Reeker's work until the fall of 1980.

[2] For the sake of clarity, I will be presenting only English paraphrases of the actual PRISM productions. All variables are italicized; these may match against any symbol, but all occurrences of a variable must match the same element.
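A minimal Python sketch of this initial performance system may help fix the idea. The goal-tree encoding, the vocabulary, and the function name are invented for this illustration; the actual model is a PRISM production system with explicit rule strengths.

# Sketch of AMBER's initial performance system: with only the start,
# speak, and stop rules (and stop stronger than start), traversal reaches
# a single terminal node and then unwinds, so just one word is uttered.

WORDS = {"daddy": "Daddy", "bounce": "bounce", "ball": "ball"}

def describe(node, uttered):
    concept = node.get("concept")
    if concept in WORDS:                 # the SPEAK rule: say the word and
        uttered.append(WORDS[concept])   # mark the terminal goal satisfied
        return
    children = node.get("children", [])
    if children:                         # the START rule: create one subgoal
        describe(children[0], uttered)
    # The STOP rule, being stronger than START, now marks this goal as
    # satisfied, so no further subgoals are created for the other children.

event = {"concept": "event",
         "children": [{"concept": "daddy"},
                      {"concept": "bounce"},
                      {"concept": "ball"}]}
uttered = []
describe(event, uttered)
print(" ".join(uttered))                 # prints just "Daddy": one-word stage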
In general, AMBER learns by comparing adult sentences to the sentences it would produce in the same situations. These predictions reveal two types of mistakes - errors of omission and errors of commission. These errors are detected by additional learning productions that are responsible for creating new performance rules. Thus, AMBER is an example of what Waterman (1975) has called an adaptive production system, which modifies its own behavior by inserting new condition-action rules. Below I discuss AMBER'S response to errors of omission, since these are the first to occur and thus lead to the system's first steps beyond the one-word stage. I consider the omission of content words first, and then the omission of grammatical morphemes. Finally, I discuss the importance of errors of commission in discovering conditions on the production of morphemes.

3. Learning Preferences and Orders

AMBER'S initial self-modifications result from the failure to predict content words. Given its initial ability to say one word at a time, the system can make two types of content word omissions - it can fail to predict a word before a correctly predicted one, or it can omit a word after a correctly predicted one. Rather different rules are created in each case. For example, imagine that Daddy is bouncing a ball, and suppose that AMBER predicted only the word "ball", while hearing the sentence "Daddy is bounce ing the ball". In this case, one of the system's learning rules would note the omitted content word "Daddy" before the content word "ball", and an agent production would be created:

AGENT
If you want to describe event1,
and agent1 is the agent of event1,
then describe agent1.

Although I do not have the space to describe the responsible learning rule in detail, I can say that it matches against situations in which one content word is omitted before another, and that it always constructs new productions with the same form as the agent rule described above. In this case, it would also create a similar rule for describing actions, based on the omitted "bounce". Note that these new productions do not give AMBER the ability to say more than one word at a time. They merely increase the likelihood that the program will describe the agent or action of an event instead of the object.

However, as AMBER begins to prefer agents to actions and actions to objects, the probability of the second type of error (omitting a word after a correctly predicted one) increases. For example, suppose that Daddy is again bouncing a ball, and the system says "Daddy" while it hears "Daddy is bounce ing the ball". In this case, a slightly different production is created that is responsible for ordering the creation of goals. Since the agent relation was described but the object was omitted, an agent-object rule is constructed:

AGENT-OBJECT
If you want to describe event1,
and agent1 is the agent of event1,
and you have described agent1,
and object1 is the object of event1,
then describe object1.

Together with the agent rule shown above, this production lets AMBER produce utterances such as "Daddy ball". Thus, the model provides a simple explanation of why children omit some content words in their early multi-word utterances.

[3] The notion of strength plays an important role in AMBER'S explanation of language learning. When a new rule is created, it is given a low initial strength, but this is increased whenever that rule is relearned. And since stronger productions are preferred to their weaker competitors, rules that have been learned many times determine behavior.
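The two learning responses just described can be suggested in Python. The sketch below compares a heard sentence with a predicted one and emits English paraphrases of the preference and ordering rules that would be built; the representation and the role table are inventions of this illustration, much simpler than AMBER's actual adaptive productions.

def learn_from_omission(heard, predicted, roles):
    # roles maps each content word to its relation (agent, action, object).
    new_rules = []
    content = [w for w in heard if w in roles]
    for i, word in enumerate(content):
        if word in predicted:
            continue
        rel = roles[word]
        before = [w for w in content[i + 1:] if w in predicted]
        after = [w for w in content[:i] if w in predicted]
        if before:   # omitted BEFORE a predicted word -> preference rule
            new_rules.append("if describing an event, describe its " + rel)
        if after:    # omitted AFTER a predicted word -> ordering rule
            prev = roles[after[-1]]
            new_rules.append("after describing the " + prev +
                             ", describe the " + rel)
    return new_rules

roles = {"Daddy": "agent", "bounce": "action", "ball": "object"}
print(learn_from_omission(["Daddy", "bounce", "ball"], ["ball"], roles))
print(learn_from_omission(["Daddy", "bounce", "ball"], ["Daddy"], roles))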
Such rules must be constructed many times before they become strong enough to have an effect, but eventually they let the system produce telegraphic sentences containing all relevant content words in the standard order and lacking only grammatical morphemes.

4. Learning Suffixes and Prefixes

Once AMBER begins to correctly predict content words, it can learn rules for saying grammatical morphemes as well. As with content words, such rules are created when the system hears a morpheme but fails to predict it in that position. For example, suppose the program hears the sentence "Daddy * is bounce ing * the ball", [4] but predicts only "Daddy bounce ball". In this case, the following rule is generated:

ING-1
If you have described action1,
and action1 is the action of event1,
then say ING.

Once it has gained sufficient strength, this rule will say the morpheme "ing" after any action word. As stated, the production is overly general and will lead to errors of commission. I consider AMBER'S response to such errors in the following section.

The omission of prefixes leads to very similar rules. In the above example, the morpheme "is" was omitted before "bounce", leading to the creation of a prefix rule for producing the missing function word:

IS-1
If you want to describe action1,
and action1 is the action of event1,
then say IS.

Note that this rule will become true before an action has been described, while the rule ing-1 can apply only after the goal to describe the action has been satisfied. AMBER uses such conditions to control the order in which morphemes are produced.

Figure 1 shows AMBER'S mean length of utterance as a function of the number of sample sentences (taken in groups of five) seen by the program. [5] As one would expect, the system starts with an average of around one word per utterance, and the length slowly increases with time. AMBER moves through a two-word and then a three-word stage, until it eventually produces sentences lacking only grammatical morphemes. Finally, the morphemes are included, and adult-like sentences are produced. The incremental nature of the learning curve results from the piecemeal way in which AMBER learns rules for producing sentences, and from the system's reliance on the strengthening process.

[Figure not reproduced: mean length of utterance plotted against the number of sample sentences.]
Figure 1. Mean length of AMBER's utterances.

5. Recovering from Errors of Commission

Errors of commission occur when AMBER predicts a morpheme that does not occur in the adult sentence. These errors result from the overly general prefix and suffix rules that we saw in the last section. In response to such errors, AMBER calls on a discrimination routine in an attempt to generate more conservative productions with additional conditions. [6] Earlier, I considered a rule (is-1) for producing "is" before the action of an event. As stated, this rule would apply in inappropriate situations as well as correct ones. For example, suppose that AMBER learned this rule in the context of the sentence "Daddy is bounce ing the ball". Now suppose the system later uses this rule to predict the same sentence, but that it instead hears the sentence "Daddy was bounce ing the ball".

[4] Asterisks represent pauses in the adult sentence. These cues are necessary for AMBER to decide that a morpheme like "is" is a prefix for "bounce" instead of a suffix for "Daddy".

[5] AMBER is implemented on a PDP KL-10 in PRISM (Langley and Neches, 1981), an adaptive production system language designed for modeling learning phenomena; the run summarized in Figure 1 took approximately 2 hours of CPU time.
At this point, AMBER would retrieve the rule responsible for predicting "is" and lower its strength; it would also retrieve the situation that led to the faulty application, passing this information to the discrimination routine. Comparing the earlier good case to the current bad case, the discrimination mechanism finds only one difference - in the good example, the action node was marked present, while no such marker occurred during the faulty application. The result is a new production that is identical to the original rule, except that an additional condition has been included:

IS-2
If you want to describe action1,
and action1 is the action of event1,
and action1 is in the present,
then say IS.

This new condition will let the variant rule fire only when the action is marked as occurring in the present. When first created, the is-2 production is too weak to be seriously considered. However, as it is learned again and again, it will eventually come to mask its predecessor. This transition is aided by the weakening of the faulty is-1 rule each time it leads to an error.

Once the variant production has gained enough strength to apply, it will produce its own errors of commission. For example, suppose AMBER uses the is-2 rule to predict "The boy s is bounce ing the ball", while the system hears "The boy s are bounce ing the ball". This time the difference is more complicated. The fact that the action had an agent in the good situation is no help, since an agent was present during the faulty firing as well. However, the agent was singular in the first case but not in the second. Accordingly, the discrimination mechanism creates a second variant:

IS-3
If you want to describe action1,
and action1 is the action of event1,
and action1 is in the present,
and agent1 is the agent of event1,
and agent1 is singular,
then say IS.

The resulting rule contains two additional conditions, since the learning process was forced to chain through two elements to find a difference. Together, these conditions keep the production from saying the morpheme "is" unless the agent of the current action is singular in number. Note that since the discrimination process must learn these sets of conditions separately, an important prediction results: the more complex the conditions on a morpheme's use, the longer it will take to master. For example, three sets of conditions are required for the "is" rule, while only a single condition is needed for the "ing" production. As a result, the former is mastered after the latter, just as found in children's speech.

[6] Anderson's ALAS (1981) system uses a very similar process to recover from overly general morpheme rules. AMBER and ALAS have much in common, both having grown out of discussions between Anderson and the author. Although there is considerable overlap, ALAS generally accounts for later developments in children's speech than does AMBER.
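The single-difference case of this discrimination step can be suggested by a small Python sketch. The feature strings, the strength value, and the set-difference formulation are inventions of this illustration, and the multi-step chaining that produces is-3 is not shown.

def discriminate(rule_conditions, good_context, bad_context):
    # Each feature present only in the good context becomes an extra
    # condition on a new, weak variant of the overly general rule.
    variants = []
    for feature in good_context - bad_context:
        variants.append({"conditions": rule_conditions | {feature},
                         "strength": 0.1})   # weak until relearned
    return variants

is_1 = {"goal: describe action"}
good = {"goal: describe action", "action is present", "agent is singular"}
bad = {"goal: describe action", "action is past", "agent is singular"}
for v in discriminate(is_1, good, bad):
    print(v)
# -> one variant adding "action is present", i.e. the is-2 rule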
Table 1 presents the order of acquisition for the six classes of morpheme learned by AMBER, and the order in which the same morphemes were mastered by Brown's children. The number of sample sentences the model required before mastery is also included. The general trend is very similar for the children and the model, but two pairs of morphemes are switched. For AMBER, the plural construction was mastered before "ing", while in the observed data the reverse was true. However, note that AMBER mastered the progressive construction almost immediately after the plural, so this difference does not seem especially significant. Second, the model mastered the articles "the", "a", and "some" before the construction for past tense. However, Brown has argued that the notions of "definite" and "indefinite" may be more complex than they appear on the surface; thus, AMBER's representation of these concepts as single features may have oversimplified matters, making articles easier to learn than they are for the child.

Thus, the discrimination process provides an elegant explanation for the observed correlation between a morpheme's complexity and its order of acquisition. Observe that if the conditions on a morpheme's application were learned through a process of generalization such as that proposed by Winston (1970), exactly the opposite prediction would result. Since generalization operates by removing conditions which differ in successive examples, simpler rules would be finalized later than more complex ones. Langley (1982) has discussed the differences between generalization-based and discrimination-based approaches to learning in more detail.

    CHILDREN'S ORDER    AMBER'S ORDER    LEARNING TIME
    PROGRESSIVE         PLURAL                59
    PLURAL              PROGRESSIVE           63
    PAST TENSE          ARTICLES             166
    ARTICLES            PAST TENSE           186
    THIRD PERSON        THIRD PERSON         283
    AUXILIARY           AUXILIARY            306

    Table 1. Order of morpheme mastery by the child and AMBER.

Some readers will have noted the careful crafting of the above examples, so that only one difference occurred in each case. This meant that the relevant conditions were obvious, and the discrimination mechanism was not forced to consider alternate corrections. In order to more closely model the environment in which children learn language, AMBER was presented with randomly generated sentence/meaning pairs. Thus, it was usually impossible to determine the correct discrimination that should be made from a single pair of good and bad situations. AMBER's response to this situation is to create all possible discriminations, but to give each of the variants a low initial strength. Correct rules, or rules containing at least some correct conditions, are learned more often than rules containing spurious conditions. And since AMBER strengthens a production whenever it is relearned, variants with useful conditions come to be preferred over their competitors. Thus, AMBER may be viewed as carrying out a breadth-first search through the space of possible rules, considering many alternatives at the same time, and selecting the best of these for further attention. Only variants that exceed a certain threshold (generally those with correct conditions) lead to new errors of commission and additional variants. Eventually, this search process leads to the correct rule, even in the presence of many irrelevant features.
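The competition among a rule and its variants can be pictured as simple bookkeeping: re-learning adds strength, errors subtract it, and production is driven by the strongest applicable rule above threshold, so a well-conditioned variant eventually masks its overly general predecessor. A schematic sketch; the threshold and increments are invented here, not AMBER's actual parameters:

    THRESHOLD = 0.3

    def strongest_applicable(rules, context):
        """Select the strongest sufficiently strong rule whose
        conditions hold; a strong variant thus comes to mask its
        more general predecessor."""
        live = [r for r in rules
                if r["strength"] >= THRESHOLD
                and r["conditions"] <= context]
        return max(live, key=lambda r: r["strength"], default=None)

    def reinforce(rule, correct):
        # errors of commission cost more than one re-learning gains
        rule["strength"] += 0.1 if correct else -0.2

    # rules here are plain dicts:
    # {"conditions": set_of_features, "say": morpheme, "strength": float}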
Figure 2 presents the learning curves for the "ing" morpheme. Since AMBER initially lacks an "ing" rule, errors of omission abound at the outset, but as this production and its variants are strengthened, such errors decrease. In contrast, errors of commission are absent at the beginning, since AMBER lacks an "ing" rule to make false predictions. As the morpheme rule becomes stronger, errors of commission grow to a peak, but they disappear as discrimination takes effect. By the time it has seen 63 sample sentences, the system has mastered the present progressive construction.

[Figure 2. AMBER's learning curves for the morpheme "ing": the proportion of errors of omission and of errors of commission, plotted against the number of sample sentences (0 to 100).]

6. Directions for Future Research

In the preceding pages, we have seen that AMBER offers explanations for a number of phenomena observed in children's early speech. These include the omission of content words and morphemes, the gradual manner in which these omissions are overcome, and the order in which grammatical morphemes are mastered. As a psychological model of early syntactic development, AMBER constitutes an improvement over previous language learning programs. However, this does not mean that the model cannot be improved, and in this section I outline some directions for future research efforts.

6.1. Simplicity and Generality

One of the criteria by which any scientific theory can be judged is simplicity, and this is one dimension along which AMBER could stand some improvement. In particular, some of AMBER's learning heuristics for coping with errors of omission incorporate considerable knowledge about the task of learning a language. For example, AMBER knows the form of the rules it will learn for ordering goals and producing morphemes. Another questionable piece of information is the distinction between major and minor meanings that lets AMBER treat content words and morphemes as completely separate entities. One might argue that the child is born with such knowledge, so that any model of language acquisition should include it as well. However, until such innateness is proven, any model that can manage without such information must be considered simpler, more elegant, and more desirable than a model that requires it to learn a language.

In contrast to these domain-specific heuristics, AMBER's strategy for dealing with errors of commission incorporates an apparently domain-independent learning mechanism - the discrimination process. This heuristic can be applied to any domain in which overly general rules lead to errors, and can be used on a variety of representations to discover the conditions under which such rules should be selected. In addition to language development, the discrimination process has been applied to concept learning (Anderson, Kline, and Beasley, 1979; Langley, 1982) and strategy acquisition (Brazdil, 1978; Langley, 1982). Langley (1982) has discussed the generality and power of discrimination-based approaches to learning in greater detail. As we shall see below, this heuristic may provide a more plausible explanation for the learning of word order. Moreover, it opens the way for dealing with some aspects of language acquisition that AMBER has so far ignored - the learning of word/concept links and the mastering of irregular constructions.

6.2. Learning Word Order Through Discrimination

AMBER learns the order of content words through a two-stage process, first learning to prefer some relations (like agent) over others (like action or object), and then learning the relative orders in which such relations should be described. The adaptive productions responsible for these transitions contain the actual form of the rules that are learned; the particular rules that result are simply instantiations of these general forms. Ideally, future versions of AMBER should draw on more general learning strategies to acquire ordering rules.
Let us consider how the discrimination mechanism might be applied to the discovery of such rules. In the existing system, the generation of "ball" without a preceding "Daddy" is viewed as an error of omission. However, it could as easily be viewed as an error of commission in which the goal to describe the object was prematurely satisfied. In this case, one might use discrimination to generate a variant version of the start rule:

    If you want to describe node1,
    and node2 is the object of node1,
    and node3 is the agent of node1,
    and you have described node3,
    then describe node2.

This production is similar to the start rule, except that it will set up goals only to describe the object of an event, and then only if the agent has already been described. In fact, this rule is identical to the agent-object rule discussed in an earlier section; the important point is that it is also a special case of the start rule that might be learned through discrimination when the more general rule fires inappropriately. The same process could lead to variants such as the agent rule, which express preferences rather than order information. Rather than starting with knowledge of the forms of rules at the outset, AMBER would be able to determine their form through a more general learning heuristic.

6.3. Major and Minor Meanings

The current version of AMBER relies heavily on the representational distinction between major meanings and modulations of those meanings. Unfortunately, some languages express through content words what others express through grammatical morphemes. Future versions of the system should lessen this distinction by using the same representation for both types of information. In addition, the model might employ a single production for learning to produce both content words and morphemes; thus, the program would lack the speak rule described earlier, but would construct specific versions of this production for particular words and morphemes. This would also remedy the existing model's inability to learn new connections between words and concepts. Although the resulting rules would probably be overly general, AMBER would be able to recover from the resulting errors by additional use of the discrimination mechanism.

The present model also makes a distinction between morphemes that act as prefixes (such as "the") and those that act as suffixes (such as "ing"). Two separate learning rules are responsible for recovering from function word omissions, and although they are very similar, the conditions under which they apply and the resulting morpheme rules are different. Presumably, if a single adaptive production for learning words and morphemes were introduced, it would take over the functions of both the prefix and suffix rules. If this approach can be successfully implemented, then the current reliance on pause information can be abandoned as well, since the pauses serve only to distinguish suffixes from prefixes. Such a reorganization would considerably simplify the theory, but it would also lead to two complications. First, the resulting system would tend to produce utterances like "Daddy ed" or "the bounce", before it learned the correct conditions on morphemes through discrimination. (This problem is currently avoided by including information about the relation when a morpheme rule is first built, but this requires domain-specific knowledge about the language learning task.)
Since children very seldom make such errors, some other mechanism must be found to explain their absence, or the model's ability to account for the observed phenomena will suffer. Second, if pause information (and the ability to take advantage of such information) is removed, the system will sometimes decide a prefix is a suffix and vice versa. For example, AMBER might construct a rule to say "ing" before the object of an event is described, rather than after the action has been mentioned. However, such variants would have little effect on the system's overall performance, since they would be weakened if they ever led to deviant utterances, and they would tend to be learned less often than the desired rules in any case. Thus, the strengthening and weakening processes would tend to direct search through the space of rules toward the correct segmentation, even in the absence of pause information.

6.4. Mastering Irregular Constructions

Another of AMBER's limitations lies in its inability to learn irregular constructions such as "men" and "ate". However, by combining discrimination and the approach to learning word/concept links described above, future implementations should fare much better along this dimension. For example, consider the irregular noun "foot", which forms the plural "feet". Given a mechanism for connecting words and concepts, AMBER might initially form a rule connecting the concept *foot to the word "foot". After gaining sufficient strength, this rule would say "foot" whenever seeing an example of the concept *foot. Upon encountering an occurrence of "feet", the system would note the error of commission and call on discrimination. This would lead to a variant rule that produced "foot" only when a singular marker was present. Also, a new rule connecting *foot to "feet" would be created. Eventually, this new rule would also lead to errors of commission, and a variant with a plural condition would come to replace it.

Dealing with the rule for producing the plural marker "s" would be somewhat more difficult. Although AMBER might initially learn to say "foot" and "feet" under the correct circumstances, it would eventually learn the general rule for saying "s" after plural agents and objects. This would lead to constructions such as "feet s", which have been observed in children's utterances. The system would have no difficulty in detecting such errors of commission, but the appropriate response is not so clear. Conceivably, AMBER could create variants of the "s" rule which stated that the concept to be described must not be *foot. However, a similar condition would also have to be included for every situation in which irregular pluralization occurred (deer, man, cow, and so on). Similar difficulties arise with irregular constructions for the past tense. A better solution would have AMBER construct a special rule for each irregular word, which "imagined" that the inflection had already been said. Once these productions became stronger than the "s" and "ed" rules, they would prevent the latter's application and bypass the regular constructions in these cases. Overly general constructions like "foot s" constitute a related form of error. Although AMBER would generate such mistakes before the irregular form was mastered, it would not revert to the overgeneral regular construction at a later point, as do many children. The area of irregular constructions is clearly a phenomenon that deserves more attention in the future.
7. Conclusions

In conclusion, AMBER provides explanations for several important phenomena observed in children's early speech. The system accounts for the one-word stage and the child's transition to the telegraphic stage. Although AMBER and children eventually learn to produce all relevant content words, both pass through a stage where some are omitted. Because it learns sets of conditions one at a time, the discrimination process explains the order in which grammatical morphemes are mastered. Finally, AMBER learns gradually enough to provide a plausible explanation of the incremental nature of first language acquisition. Thus the system constitutes a significant addition to our knowledge of syntactic development.

Of course, AMBER has a number of limitations that should be addressed in future research. Successive versions should be able to learn the connections between words and concepts, should reduce the distinction between content words and morphemes, and should be able to master irregular constructions. Moreover, they should require less knowledge of the language learning task, and rely more on domain-independent learning mechanisms such as discrimination. But despite its limitations, the current version of AMBER has proven itself quite useful in clarifying the incremental nature of language acquisition, and future models promise to further our understanding of this complex process.

References

Anderson, J. R. Induction of augmented transition networks. Cognitive Science, 1977, 1, 125-157.

Anderson, J. R. A theory of language acquisition based on general learning principles. Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1981.

Anderson, J. R., Kline, P. J., and Beasley, C. M. A general learning theory and its application to schema abstraction. In G. H. Bower (ed.), The Psychology of Learning and Motivation, Volume 13, 1979.

Berwick, R. Computational analogues of constraints on grammars: A model of syntactic acquisition. Proceedings of the 18th Annual Conference of the Association for Computational Linguistics, 49-53, 1980.

Brazdil, P. Experimental learning model. Proceedings of the AISB Conference, 1978, 46-50.

Brown, R. A First Language: The Early Stages. Cambridge, Mass.: Harvard University Press, 1973.

Feldman, J. A., Gips, J., Horning, J. J., and Reder, S. Grammatical complexity and inference. Technical Report No. CS 125, Computer Science Department, Stanford University, 1969.

Hedrick, C. Learning production systems from examples. Artificial Intelligence, 1976, 7, 21-49.

Horning, J. J. A study of grammatical inference. Technical Report No. CS 139, Computer Science Department, Stanford University, 1969.

Kelley, K. L. Early syntactic acquisition. Rand Report P-3719, 1967.

Klein, S. Automatic inference of semantic deep structure rules in generative semantic grammars. Technical Report No. 180, Computer Sciences Department, University of Wisconsin, 1973.

Langley, P. A general theory of discrimination learning. To appear in Klahr, D., Langley, P., and Neches, R. T. (eds.), Self-Modifying Production System Models of Learning and Development, 1982.

Langley, P. and Neches, R. T. PRISM User's Manual. Technical Report, Department of Computer Science, Carnegie-Mellon University, 1981.

Reeker, L. H. The computational study of language acquisition. In M. Yovits and M. Rubinoff (eds.), Advances in Computers, Volume 15. New York: Academic Press, 1976.

Selfridge, M. A computer model of child language acquisition.
Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1981, 92-96.

Sembugamoorthy, V. PLAS, a paradigmatic language acquisition system: An overview. Proceedings of the Sixth International Joint Conference on Artificial Intelligence, 1979, 788-790.

Siklossy, L. Natural language learning by computer. In H. A. Simon and L. Siklossy (eds.), Representation and Meaning: Experiments with Information Processing Systems. Englewood Cliffs, N. J.: Prentice-Hall, 1972.

Solomonoff, R. A new method for discovering the grammars of phrase structure languages. Proceedings of the International Conference on Information Processing, UNESCO, 1959.

Waterman, D. A. Adaptive production systems. Proceedings of the Fourth International Joint Conference on Artificial Intelligence, 1975, 296-303.

Winston, P. H. Learning structural descriptions from examples. MIT AI-TR-231, 1970.

Wolff, J. G. Language acquisition and the discovery of phrase structure. Language and Speech, 1980, 23, 255-269.

151 | 1982 | 32 |
BUILDING NON-NORMATIVE SYSTEMS - THE SEARCH FOR ROBUSTNESS
AN OVERVIEW

Mitchell P. Marcus
Bell Laboratories
Murray Hill, New Jersey 07974

Many natural language understanding systems behave much like the proverbial high school English teacher who simply fails to understand any utterance which doesn't conform to that teacher's inviolable standard of English usage. But while the teacher merely pretends not to understand, our systems really don't. The teacher consciously stonewalls when confronted with non-standard usage to prescribe quite rigidly what is acceptable linguistic usage and what is not. What is so artificial about this behaviour, of course, is that our implicit linguistic models are descriptive and not prescriptive; they model what we expect, not what we demand. People are quite good at understanding language which they, when asked, would consider to be non-standard in some way or other. Our programs, on the other hand, tend to be very rigid. They usually fail to degrade gracefully when their internal models of syntax, semantics or pragmatics are violated by user input. In essence, the models of linguistic well-formedness which these programs embody become normative; they prescribe quite rigidly what is considered standard linguistic usage and what isn't.

Old solutions to this problem include extending a system's linguistic coverage or intentionally excluding linguistic constraints that are occasionally violated by speakers. But neither of these approaches changes the fundamental situation - that when confronted with input which fails to conform to the system builder's expectations, however broad and however loose, the system will entirely reject the input. Furthermore, these techniques bar a system from utilizing the fact that people normally do obey certain linguistic standards, even if they violate them on occasion.

More recently, a range of approaches have been investigated that allow a system to behave more robustly when confronted with input which violates its designer's expectations about standard English usage. Most of this work has been within the realm of syntax. These techniques allow grammars to be descriptive without being normative. This panel focuses on these techniques for building what might be termed non-normative systems. Panelists were asked to consider the following range of issues:

Are there different kinds of non-standard usage? Candidates for a subclassification of non-standard usage might include the telegraphic language of messages and newspaper headlines; the informal colloquial use of language, even by speakers of the standard dialect; non-standard dialects; plain out-and-out grammatical errors; and the specialized sublanguage used by experts in a given domain. To what extent do these various forms have different properties, and are there independently characterizable dimensions along which they differ? What kinds of generalizations can be expressed about each of them individually or about non-standard usage in general?

What are the techniques for dealing with non-standard input robustly? A range of techniques have been discussed in the literature which can be invoked when a system is faced with input which is outside the subset of the language that its grammar describes.
These include: (a) the use of special "un-grammatical" rules, which explicitly encode facts about non-standard usage; (b) the use of "meta-rules" to relax the constraints imposed by classes of rules of the grammar; (c) allowing flexible interaction between syntax and semantics, so that semantics can directly analyze substrings of syntactic fragments or individual words when full syntactic analysis fails. How well do these techniques, and others, work with respect to the dimensions of non-standard input discussed above? What are the relative strengths and weaknesses of each of these techniques?

To what extent are each of these techniques useful if one's goal is not to build a system which understands input, even if non-standard, but rather to build an explicitly normative system which can either (1) pinpoint grammatical errors, or (2) correct errors after pinpointing them? (Ironically, a system can be normative in a useful way only if it can understand what the user meant to say.) Are there more general approaches to building systems that degrade gracefully that can be applied to this set of problems?

And finally, what are the near- and long-term prospects for application of these techniques to practical working systems?

152 | 1982 | 33 |
DESIGN DIMENSIONS FOR NON-NORMATIVE UNDERSTANDING SYSTEMS[1]

Robert J. Bobrow
Madeleine Bates
Bolt Beranek and Newman Inc.
10 Moulton Street
Cambridge, Massachusetts 02238

1. Introduction

This position paper is not based upon direct experience with the design and implementation of a "non-normative" natural language system, but rather draws upon our work on cascade [11] architectures for understanding systems in which syntactic, semantic and discourse processes cooperate to determine the "best" interpretation of an utterance in a given discourse context. The RUS and PSI-KLONE systems [1, 2, 3, 4, 5], which embody some of these principles, provide a strong framework for the development of non-normative systems, as illustrated by the work of Sondheimer and Weischedel [8, 9, 10] and others.

Here we pose a number of questions in order to clarify the theoretical and practical issues involved in building "non-normative" natural language systems. We give brief indications of the range of plausible answers, in order to characterize the space of decisions that must be made in designing such a system. The first questions cover what is intended by the ill-defined term "non-normative system", beyond the important but vague desire for a "friendly and flexible" computer system. The remaining questions cover several of the architectural issues involved in building such a system, including the categories of knowledge to be represented in the system, the static modularization of these knowledge sources, and the dynamic information and control flow among these modules.

The way the system is to deal with ill-formed input depends in a strong way on how much the system is expected to do with well-formed input. Ad hoc data base retrieval systems (a currently hot topic) pose different constraints than systems that are expected to enter into a substantial dialogue with the user. When the behavior of the system is severely limited even given perfect input, the space of plausible inputs is also limited, and the search for a reasonable interpretation for ill-formed input can be made substantially easier by asking the user a few well-chosen questions. In the dbms retrieval domain, even partially processed input can be used to suggest what information the user is interested in, and provide the basis for a useful clarification dialogue.

What is the system expected to do with ill-formed input?

The system may be expected to understand the input but not provide direct feedback on errors (e.g. by independently deciding on the (most plausible) interpretation of the input, or by questioning the user about possible alternative interpretations). Alternatively, the system might provide feedback about the probable source of its difficulty, e.g. by pointing out the portion of the input which it could not handle (if it can be localized), or by characterizing the type of error that occurred and describing general ways of avoiding such errors in the future.

2. System performance goals

What are the overall performance objectives of the system?

Marcus has argued [7] that the "well-formedness" constraints on natural language make it possible to parse utterances with minimal (or no) search.[2] The work we have done on the RUS system has convinced us that this is true and that cascading semantic interpretation with syntactic analysis can further improve the efficiency of the overall system.
The question naturally arises as to whether the performance characteristics of this model must be abandoned when the input does not satisfy the well-formedness constraints imposed by a competence model of language. We believe that it is possible to design natural language systems that can handle well-formed input efficiently and ill-formed input effectively.

3. Architectural issues

In order to design a fault-tolerant language processing system, it is important to have a model for the component processes of the system, how they interact in handling well-formed input, and how each process is affected by the different types of constraint violations occurring in ill-formed input.

What categories of knowledge are needed to understand well-formed input, and how are they used?

Typically, a natural language understanding system makes use of lexical and morphological knowledge (to categorize the syntactic and semantic properties of input items), syntactic knowledge, semantic knowledge, and knowledge of discourse phenomena (here we include issues of ellipsis, anaphora and focus, as well as plan recognition ("why did he say this to me now?") and rhetorical structure). Of course, saying that these categories of knowledge are represented does not imply anything about the static (representational) or dynamic (process interaction) modularization of the resulting system. We will assume that the overall system consists of a set of component modules. One common decomposition has each category of knowledge embodied in a separate component of the NLU system, although it is possible to fuse the knowledge of several categories into a single process. Given this assumption, we must then ask what control and information flow can be imposed on the interaction of the modules to achieve the overall performance goals imposed on the system.

In analyzing how violations of constraints affect the operation of various components, it is useful to distinguish clearly between the information used within a component to compute its output, and the structure and content of the information which it passes on to other components. It is also important to determine how critically the operation of the receiving component depends on the presence, absence or internal inconsistency of various features of the inter-component information flow.

As an example, we will consider the interaction between a syntactic component (parser) and a semantic interpretation component. Typically, the semantic interpretation process is componential, building up the interpretation of a phrase in a lawful way from the interpretations of its constituents. Thus a primary goal for the parser is to determine the (syntactically) acceptable groupings of words and constituents (a constituent structure tree, perhaps augmented by the equivalent of traces to tie together components). Unless such groupings can be made, there is nothing for the semantic interpreter and subsequent components to operate on. Some syntactic features are used only within the parser to determine the acceptability of possible constituent groupings, and are not passed to the semantic component (e.g. some verbs take clause complements, and require the verb in the complement to be subjunctive, infinitive, etc.). The normal output of the parser may also specify other properties of the input not immediately available from the lexical/morphological analysis of individual words, such as the syntactic number of noun phrases, and the case structure of clauses.
Additionally, the parser may indicate the functional equivalent of "traces", showing how certain constituents play multiple roles within a structure, appearing as functional constituents of more than one separated phrase. From the point of view of semantics, however, the grouping operation is of primary importance, since it is difficult to reconstruct the intended grouping without making use of both local and global syntactic constraints. The other results of the parsing process are less essential. Thus, for example, the case structure of clauses is often highly constrained by the semantic features of the verb and the constituent noun phrases, and it is possible to reconstruct it even with minimal syntactic guidance (e.g. "throw" "the ball" "the boy").

How can each component fill its role in the overall system when the constraints and assumptions that underlie its design are violated by ill-formed input?

The distinction between the information used within a component and the information which that component is required to provide to other components is critical in designing processing strategies for each component that allow it to fulfill its primary output goals when its input violates one or more well-formedness constraints. Often more than one source of information or constraint may be available to determine the output of a component, and it is possible to produce well-formed output based on the partial or conflicting internal information provided by ill-formed input. For example, in systems with feedback between components, it is possible for that feedback to make up for lack of information or violation of constraints in the input, as when semantic coherence between subject and verb is sufficient to override the violation of the syntactic number agreement constraint. When the integrity of the output of a component can be maintained in the face of ill-formed input, other components can be totally shielded from the effects of that input. A clear specification of the interface language between components makes it possible to have recovery procedures that radically restructure or totally replace one component without affecting the operation of other components.

In general, the problem to be solved by a non-normative language understander can be viewed as one of finding a "sufficiently good explanation" for an utterance in the given context.[3] A number of approaches to this problem can be distinguished. One approach attempts to characterize the class of error producing mechanisms (such as word transposition, mistyping of letters, morphological errors, resumptive pronouns, etc.). Given such a characterization, recognition criteria for different classes of errors, and procedures to invert the error process, an "explanation" for an ill-formed utterance could be generated in the form of an intended well-formed utterance and a sequence of error transformations. The system would then try to understand the hypothesized well-formed utterance. While some "spelling corrector" algorithms use this approach, we know of no attempt to apply it to the full range of syntactic, semantic and pragmatic errors. We believe that some strategies of this sort might prove useful as components in a larger error-correcting system.
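A spelling corrector is the clearest instance of this inversion idea: enumerate the common error operations (deletion, transposition, substitution, insertion) run in reverse, and keep whatever lands in the lexicon. A minimal sketch of that one component (hypothetical code, not drawn from any of the systems cited here):

    def candidate_corrections(word, lexicon,
                              alphabet="abcdefghijklmnopqrstuvwxyz"):
        """Invert single-character error operations; keep known words."""
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
        inserts = [a + c + b for a, b in splits for c in alphabet]
        return set(deletes + swaps + replaces + inserts) & set(lexicon)

    print(candidate_corrections("teh", {"the", "ten", "tea"}))
    # -> {'the', 'ten', 'tea'} (in some order); choosing among the
    # survivors is exactly where syntactic and semantic expectations
    # would have to come in.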
A more thoroughly explored set of strategies for non-normative processing is based on the concept of "constraint relaxation": if a component can find no characterization of the utterance because it violates one or more constraints, then some of those constraints must be relaxed. A number of strategies have been proposed for relaxing well-formedness constraints on input to permit components to derive well-structured output for both well-formed and ill-formed input:

1. extend the notion of well-formed input to include (the common cases of) ill-formed input (e.g. make the grammar handle ill-formed input explicitly);

2. allow certain specific constraints to be overridden when no legal operation succeeds;

3. provide a process that can diagnose failures and flexibly override constraints.

Somehow the "goodness" of an explanation must be related to the number and type of constraints which must be relaxed to allow that explanation. How good an explanation must be before it is accepted is a matter of design choice. Must it simply be "good enough" (above some threshold), or must it be guaranteed to be "the best possible" explanation? If it must be "the best possible", then one can either generate all possible explanations and compare them, or use some strategy like the shortfall algorithm [12] that guarantees the first explanation produced will be optimal.
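One way to make the "goodness" bookkeeping concrete is to attach a penalty to each relaxable constraint and score a candidate analysis by the total penalty of the constraints it relaxed; a threshold then implements the "good enough" policy, while exhaustive comparison implements "best possible". A schematic sketch with invented constraint names and weights:

    PENALTY = {                       # illustrative weights only
        "number-agreement": 1,
        "article-omission": 1,
        "case-marking": 2,
        "constituent-order": 4,
    }
    GOOD_ENOUGH = 3

    def cost(analysis):
        """Total cost of the constraints this candidate analysis relaxed."""
        return sum(PENALTY[c] for c in analysis["relaxed"])

    def first_good_enough(analyses):
        """Threshold policy: accept the first tolerably cheap candidate."""
        return next((a for a in analyses if cost(a) <= GOOD_ENOUGH), None)

    def best_possible(analyses):
        """Exhaustive policy: generate everything and keep the cheapest;
        a shortfall-style best-first search avoids full enumeration."""
        return min(analyses, key=cost, default=None)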
While space prohibits discussion of the advantages and disadvantages of each of these strategies, we would like to present a number of design dimensions along which they might be usefully compared. We believe that choices on these dimensions (made implicitly or explicitly) have a substantial effect on both the practical performance and theoretical interest of the resulting strategies. These dimensions are exemplified by the following questions:

o Does the component have an explicit internal competence model that is clearly separated from its performance strategies?[4]

o What information is used to determine which constraints to attempt to relax? Is the decision purely local (based on the constraint and the words in the immediate vicinity of the failure) or can the overall properties of the utterance and/or the discourse context enter into the decision?

o When is relaxation tried? How are various alternatives scheduled? Is it possible, for example, that a "parse" including the relaxation of a syntactic constraint may be produced before a parse that involves no such relaxation?

o Does the technique permit incremental feedback between components, and is such feedback used in determining which constraints to relax?

Non-syntactic ill-formedness

While the overall framework mentioned above raises questions about errors that affect components other than syntax, the discussion centers primarily on syntactic ill-formedness. In this we follow the trend in the field. Perhaps because syntax is the most clearly understood component, we have a better idea as to how it can go wrong, while our models for semantic interpretation and discourse processes are much less complete. Alternatively, it might be supposed that the parsing process as generally performed is the most fragile of the components, susceptible to disruption by the slightest violation of syntactic constraints. It may be that more robust parsing strategies can be found.

Without stating how the semantic component might relax its constraints, we might still point out the parallel between constraint violation in syntax and such semantic phenomena as metaphor, personification and metonymy. We believe that, as in the syntactic case, it will be useful to distinguish between the internal operation of the semantic interpreter and the interface between it and discourse level processes. It should also be possible to make use of feedback from the discourse component to overcome violations of semantic constraints. In the context of a waiter talking to a cook about a customer complaint, the sentence "The hamburger is getting awfully impatient." should be understood.

4. Conclusions

We believe that it will be possible to design robust systems without giving up many valuable features of those systems which already work on well-formed input. In particular, we believe it will be possible to build such systems on the basis of competence models for various linguistic components, which degrade gracefully and without the use of ad hoc techniques such as pattern matching.

One critical resource that is needed is a widely available, reasonably large corpus of "ill-formed input", exhibiting the variety of problems which must be faced by practical systems. This corpus should be sub-divided by modality, since it is known that spoken and typewritten interactions have different characteristics. The collections that we know of are either limited in modality (e.g. the work on speech errors by Fromkin [6]) or are not widely available (e.g. unpublished material collected by Tony Kroch). It would also be valuable if this material were analyzed in terms of possible generative mechanisms, to provide needed evidence for error recovery strategies based on inversion of error generation processes.

Finally, we believe that many error recovery problems can be solved by using constraints from one knowledge category to reduce the overall sensitivity of the system to errors in another category. To this end, work is clearly needed in the area of control structures and cooperative process architectures that allow both pipelining and feedback among components with vastly different internal knowledge bases.

[1] The preparation of this paper was supported by the Advanced Research Projects Agency of the Department of Defense, and monitored by the Office of Naval Research under contract N00014-77-C-0378.

[2] The parser designed by G. Ginsparg also has similar search characteristics, given grammatical input.

[3] What constitutes "sufficiently good" depends, of course, on the overall goals of the system.

[4] In almost any case, we believe, the information available at the interface between components should be expressed primarily in terms of some competence model.

REFERENCES

1. Bates, M., Bobrow, R. J. and Webber, B. L. Tools for Syntactic and Semantic Interpretation. BBN Report 4785, Bolt Beranek and Newman Inc., 1981.

2. Bates, M. and Bobrow, R. J. The RUS Parsing System. Bolt Beranek and Newman Inc., forthcoming.

3. Bobrow, R. J. The RUS System. BBN Report 3878, Bolt Beranek and Newman Inc., 1978.

4. Bobrow, R. J. & Webber, B. L. PSI-KLONE - Parsing and Semantic Interpretation in the BBN Natural Language Understanding System. CSCSI/CSEIO Annual Conference, CSCSI/CSEIO, 1980.

5. Bobrow, R. J. & Webber, B. L. Knowledge Representation for Syntactic/Semantic Processing. Proceedings of the First Annual National Conference on Artificial Intelligence, American Association for Artificial Intelligence, 1980.

6. Fromkin, Victoria A. Janua Linguarum, Series Maior, Volume 77: Speech Errors as Linguistic Evidence. Mouton, The Hague, 1973.

7. Marcus, M. A Theory of Syntactic Recognition for Natural Language. MIT Press, 1980.

8. Sondheimer, N. K. and Weischedel, R. M. A Rule-Based Approach to Ill-Formed Input. Proc. 8th Int'l Conf. on Computational Linguistics, Tokyo, Japan, October 1980, pp. 46-54.

9. Weischedel, Ralph M. and Black, John E.
"If The Parser Fails." Proceedln~s of the 18th Annual Meetlnsof the ACL (June 1980). 10. Welschedel, Ralph and Sondhelmer, Norman. A Framework for Processing Ill-Formed Input. Department of Computer and Information Sciences, University of Delaware, October, 1981. 11. Woods, W. A. "Cascaded ATN Grammars." ~. E ~ L I ~ ~ , I (Jan.-Mar. 1980). ~Q 12. Woods, W. A. "Optimal Search Strategies for Speech Understanding Control." Intelli=ence 18, 3 (June 1982). 156 | 1982 | 34 |
Scruffy Text Understanding: Design and Implementation of 'Tolerant' Understanders

Richard H. Granger
Artificial Intelligence Project
Computer Science Department
University of California
Irvine, California 92717

ABSTRACT

Most large text-understanding systems have been designed under the assumption that the input text will be in reasonably "neat" form, e.g., newspaper stories and other edited texts. However, a great deal of natural language texts, e.g., memos, rough drafts, conversation transcripts, etc., have features that differ significantly from "neat" texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic construction, missing periods, etc. Our solution to these problems is to make use of expectations, based both on knowledge of surface English and on world knowledge of the situation being described. These syntactic and semantic expectations can be used to figure out unknown words from context, constrain the possible word-senses of words with multiple meanings (ambiguity), fill in missing words (ellipsis), and resolve referents (anaphora). This method of using expectations to aid the understanding of "scruffy" texts has been incorporated into a working computer program called NOMAD, which understands scruffy texts in the domain of Navy messages.

1.0 Introduction

Consider the following (scribbled) message, left by a computer science professor on a colleague's desk:

[1] Met w/chrmn agreed on changes to prposl nxt mtg 3 Feb.

A good deal of informal text such as everyday messages like the one above are very ill-formed grammatically and contain misspellings, ad hoc abbreviations and lack of important punctuation such as periods between sentences. Yet people seem to easily understand such messages, and in fact most people would probably understand the above message just as readily as they would a more "well-formed" version: "I met with the chairman, and we agreed on what changes had to be made to the proposal. Our next meeting will be on Feb. 3."

[This research was supported in part by the Naval Ocean Systems Center under contract N-00123-81-C-1078.]

No extra information seems to be conveyed by this longer version, and message-writers appear to take advantage of this fact by writing extremely terse messages such as [1], apparently counting on readers' ability to analyze them in spite of their messiness. If "scruffy" messages such as this one were only intended for a readership of one, there wouldn't be a real problem. However, this informal type of "memo" message is commonly used for information transfer in many businesses, universities, government offices, etc. An extreme case of such an organization is the Navy, which every hour receives many thousands of short messages, each of which must be encoded into computer-readable form for entry into a database. Currently, these messages come in in very scruffy form, and a growing number of man-hours is spent on the encoding-by-hand process. Hence there is an obvious benefit to partially automating this encoding process. The problem is that most existing text-understanding systems (e.g. ELI [Riesbeck and Schank 76], SAM [Cullingford 77], FRUMP [DeJong 79], IPP [Lebowitz 80]) would fail to successfully analyze ill-formed texts like [1], because they have been designed under the assumption that they will receive "neater" input, e.g., edited input such as is found in newspapers or books.
This paper briefly outlines some of the properties of texts like [1] that allow readers to understand them in spite of their scruffiness, and some of the knowledge and mechanisms that seem to underlie readers' ability to understand such texts. A text-processing system called NOMAD is discussed which makes use of the theories described here to process scruffy text in the domain of everyday Navy messages.

2.0 Background: Tolerant text processing

2.1 FOUL-UP figured out unknown words from context

The FOUL-UP program (Figuring Out Unknown Lexemes in the Understanding Process) [Granger 1977] was the first program that could figure out meanings of unknown words encountered during text understanding. FOUL-UP was an attempt to model the corresponding human ability commonly known as "figuring out a word from context". FOUL-UP worked with the SAM system [Cullingford 1977], using the expectations generated by scripts [Schank and Abelson 1977] to restrict the possible meanings of a word, based on what object or action would have occurred in that position according to the script for the story. For instance, consider the following excerpt from a newspaper report of a car accident:

[2] Friday, a car swerved off Route 69. The vehicle struck an embankment.

The word "embankment" was unknown to the SAM system, but it had encoded predictions about certain attributes of the expected conceptual object of the PROPEL action (the object that the vehicle struck); namely, that it would be a physical object, and would function as an "obstruction" in the vehicle-accident script. (In addition, the conceptual analyzer (ELI - [Riesbeck and Schank 1976]) had the expectation that the word in that sentence position would be a noun.) Hence, when the unknown word was encountered, FOUL-UP would make use of those expected attributes to construct a memory entry for the word "embankment", indicating that it was a noun, a physical object, and an "obstruction" in vehicle-accident situations. It would then create a dictionary definition that the system would use from then on whenever the word was encountered in this context.
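FOUL-UP's behavior on this example compresses into a few lines: when lexical lookup fails, the attributes that the active script and the analyzer predict for the current slot are written into a new dictionary entry. A hypothetical rendering; the attribute names are illustrative, not the program's actual data structures:

    def foul_up(word, slot_expectations, lexicon):
        """Build a dictionary entry for an unknown word out of what the
        active script expects to fill the current conceptual slot."""
        if word in lexicon:
            return lexicon[word]
        entry = dict(slot_expectations)
        entry["tentative"] = True       # may be revised by later evidence
        lexicon[word] = entry
        return entry

    # For "The vehicle struck an embankment", the vehicle-accident script
    # and the analyzer jointly predict the object of the PROPEL:
    expected = {"part-of-speech": "noun",
                "semantic-class": "physical-object",
                "script-role": "obstruction"}
    lexicon = {}
    print(foul_up("embankment", expected, lexicon))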
Even edited texts such as newspaper stories often contain mis- spellzngs, words unknown to the reader, and am- biguities; and even apparently very simple texts may contain alternative possible interpretations, which can cause a reader to construct erroneous initial inferences that must later be corrected (see [Granger 1980,1981]). The following sections describe the NOMAD system, which incorporates FOUL-UP's abilities as well as significantly extended abilities to use syntactic and semantic expectations to resolve these difficulties, in the context of Naval mes- sages. 3.0 How NOMAD Recognizes and Corrects Errors 3.1 Introduction NOMAD incorporates ideas from, and builds on, earlier work on conceptual analysis (e.g., [Riesbeck and Schank 1976], [Birnbaum and Selfridge 1979]); situation and intention inference (e.g., [Cullingford 1977|, [Wilensky 1978]); and English generatlon (e.g. [Goldman 1973], [McGuire 1980]). What differentiates NOMAD significantly from its predecessors are its error recognition and error correction abilities, which enable it to read texts more complex than those that can be handled by other text understanding systems. We have so far identified the following five types of problems that occur often in scruffy un- edited texts. Each problem is illustrated by an example from the domain of Navy messages. The next section will then describe how NOMAD deals with each type of error. I. Unknown words (e.g., Enemy "scudded" bombs at us. -- the verb is unknown to the system); 2. Missing subject, object, etc. of sentences. (e.g., Sighted enemy ship. Fired -- the actor who fired is not explicitly stated); 3. Missing sentence and clause boundaries. (e.g., Locked on opened fire. -- two actions, aiming and firing); 4. Missing situational (scripty) events. (e.g., Midway lost contact on Kashin. -- no previous contact mentioned); 5. Ambiguous word usage. (e.g., Returned bombs to Kashin. -- "returned" in the sense of re- tal~ation after a previous attack, or "return- ed" in the sense of "peaceably delivered to"?) When these problems arise in a message, NOMAD must first recognize what the problem is (which is often difficult to do), and then attempt t~ ~orrect the error. These two processes are briefly described in the fnllowing sections. 158 3.2 Recognizing and correcting errors For each of the above examples of problems encountered, NOMAD's method of recognizing and correcting the problem are described here, along with actual English input and output from NOMAD. I. INPUT: ENEMY SCUDDED BOMBS AT US. Problem: Unknown word. The unknown word "scudded" is trivial to recognize, since it is the only word without a dictionary entry. Once it has been recognized, NOMAD checks it to see if it could be (a) a misspelllng, (b) an abbreviation or (c) a regular verb-tense of some known word. Solution: Use expectations to figure out word meaning from context. When the spelling checkers fail, a FOUL-UP mechanisms is called which uses knowledge of what actions can be done by an enemy actor, to a weapon object, directed at us. It in- fers that the action is probably a propel. Again, this is only an educated guess by the system, and may have to be corrected later on the basis of future information. NOMAD OUTPUT: An enemy ship fired bombs at our ship. 2. INPUT: MIDWAY SIGHTED ENEMY. FIRED. Problem: Missing subject and objects. 
"Fired" builds a PROPEL, and expects a subject and objects to play the conceptual roles of ACTOR (who did the PROPELing), OBJECT (what got PROPELed) and RECIPI- ENT (who got PROPELed at). However, no surface subjects or objects are presented here. Solution: Use expectations to fill in conceptual cases. NOMAD uses situational expectations from the known typical sequence of events in an "ATTACK" (which consists of a movement (PTRANS), a sighting (ATTEND) and firing (PROPEL)). Those expectations say (among other things) that the actor and recip- ient of the PROPEL will be the same as the actor and direction of the ATTEND, and that the OBJECT that got PROPELed will be some kind of projectile, which is not further specified here. NOMAD OUTPUT: We sighted an enemy ship. We fired at the ship. 3. INPUT: LOCKED ON OPENED FIRE. Problem: Missing sentence boundaries. NOMAD has no expectations for a new verb ("opened") to appear immediately after the completed clause "locked on". It tries but fails to connect "opened" to the phrase "locked on". Solution: Assume the syntactic expectations failed because a clause boundary was not adequately marked in the message; assume such a boundary is there. NOMAD assumes that there may have been an intended sentence separation before "opened", since no expectations can account for the word in this sen- tence position. Hence, NOMAD saves "locked on" as one sentence, and continues to process the rest of the text as a new sentence. NOMAD OUTPUT: We aimed at an unknown object. object. We fired at the 4. INPUT: LOST CONTACT ON ENEMY SHIP. Problem: Missing event in event sequence. NOMAD"s knowledge of the "Tracking" situation cannot un- derstand a ship losing contact until some contact has been gained. Solution: Use situational expectations to infer missing events. NOMAD assumes that the message implies the previous event of gaining contact with the enemy ship, based on the known sequence of events in the "Tracking" situation. NOMAD OUTPUT: We sighted an enemy ship. Then we lost radar visual contact with the ship. or 5. INPUT: RETURNED BOMBS TO ENEMY SHIP. Prob!em: Ambiguous interpretation of action. NOMAD cannot tell whether the action here is "re- turning" fire to the enemy, i.e., firing back at them (after they presumably had fired at us), or peaceably delivering bombs, with no firing implied. Solution: Use expectations of probable goals of actors. NOMAD first interprets the sentence as "peaceably delivering" some bombs to the ship. However, NOMAD contains the knowledge that enemies do not give weapons, information, personnel, etc., to each other. Hence it attempts to find an al- ternative interpretation of the sentence, in this case finding the "returned fire" interpretation, which does not violate any of NOMAD's knowledge about goals. It then infers, as in the above ex- ample, that the enemy ship must have previously fired on us. NOMAD OUTPUT: An unknown enemy ship fired on us. bombs at them. Then we fired 4.0 Conclusions The ability to understand text is dependent on the ability to understand what is being described in the text. Hence, a reader of, say, English text must have applicable knowledge of both the situa- tions that may be described in texts (e.g., ac- tions, states, sequences of events, goals, methods of achieving goals, etc.) and the the surface structures that appear in the language, i.e., the relatlons between the surface order of appearance of words and phrases, and their correspondin~ meaning structures. 
4.0 Conclusions

The ability to understand text is dependent on the ability to understand what is being described in the text. Hence, a reader of, say, English text must have applicable knowledge of both the situations that may be described in texts (e.g., actions, states, sequences of events, goals, methods of achieving goals, etc.) and the surface structures that appear in the language, i.e., the relations between the surface order of appearance of words and phrases, and their corresponding meaning structures.

The process of text understanding is the combined application of these knowledge sources as a reader proceeds through a text. This fact becomes clearest when we investigate the understanding of texts that present particular problems to a reader. Human understanding is inherently tolerant; people are naturally able to ignore many types of errors, omissions, poor constructions, etc., and get straight to the meaning of the text. Our theories have tried to take this ability into account by including knowledge and mechanisms of error noticing and correcting as implicit parts of our process models of language understanding. The NOMAD system is the latest in a line of "tolerant" language understanders, beginning with FOUL-UP, all based on the use of knowledge of syntax, semantics and pragmatics at all stages of the understanding process to cope with errors.

5.0 References

Birnbaum, L. and Selfridge, M. 1980. Conceptual Analysis of Natural Language, in R. Schank and C. Riesbeck, eds., Inside Computer Understanding. Lawrence Erlbaum Associates, Hillsdale, N.J.

Cullingford, R. 1977. Controlling Inferences in Story Understanding. Proceedings of the Fifth International Joint Conference on Artificial Intelligence (IJCAI), Cambridge, Mass.

DeJong, G. 1979. Skimming Stories in Real Time: An Experiment in Integrated Understanding. Ph.D. Thesis, Yale Computer Science Dept.

Goldman, N. 1973. The generation of English sentences from a deep conceptual base. Ph.D. Thesis, Stanford University.

Granger, R. 1977. FOUL-UP: A program that figures out meanings of words from context. Proceedings of the Fifth IJCAI, Cambridge, Mass.

Granger, R. H. 1980. When expectation fails: Towards a self-correcting inference system. In Proceedings of the First National Conference on Artificial Intelligence, Stanford University.

Granger, R. H. 1981. Directing and re-directing inference pursuit: Extra-textual influences on text interpretation. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence (IJCAI), Vancouver, British Columbia.

Lebowitz, M. 1981. Generalization and Memory in an Integrated Understanding System. Computer Science Research Report 186, Yale University.

McGuire, R. 1980. Political Primaries and Words of Pain. Unpublished ms., Dept. of Computer Science, Yale University.

Riesbeck, C. and Schank, R. 1976. Comprehension by computer: Expectation-based analysis of sentences in context. Computer Science Research Report 78, Yale University.

Schank, R. C., and Abelson, R. 1977. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, N.J.

Wilensky, R. 1978. Understanding Goal-Based Stories. Computer Science Technical Report 140, Yale University.

160 | 1982 | 35 |
ON THE LINGUISTIC CHARACTER OF NON-STANDARD INPUT

Anthony S. Kroch and Donald Hindle
Department of Linguistics
University of Pennsylvania
Philadelphia, PA 19104 USA

ABSTRACT

If natural language understanding systems are ever to cope with the full range of English language forms, their designers will have to incorporate a number of features of the spoken vernacular language. This communication discusses such features as non-standard grammatical rules, hesitations and false starts due to self-correction, systematic errors due to mismatches between the grammar and sentence generator, and uncorrected true errors.

There are many ways in which the input to a natural language system can be non-standard without being uninterpretable.* Most obviously, such input can be the well-formed output of a grammar other than the standard language grammar with which the interpreter is likely to be equipped. This difference of grammar is presumably what we notice in language that we call "non-standard" in everyday life. Obviously, at least from the perspective of a linguist, it is wrong to think of this difference as being due to errors made by the non-standard language user; it is simply a dialect difference. Secondly, the non-standard input can contain hesitations and self-corrections which make the string uninterpretable unless some parts of it are edited out. This is the normal state of affairs in spoken language, so that any system designed to understand spoken communication, even at a rudimentary level, must be able to edit its input as well as interpret it. Thirdly, the input may be ungrammatical even by the rules of the grammar of the speaker but be the expected output of the speaker's sentence generating device. This case has not been much discussed, but it is important because in certain environments speakers (and to some extent unskilled writers) regularly produce ungrammatical output in preference to grammatically unimpeachable alternatives. Finally, the input that the system receives may simply contain uncorrected errors. How important this last source of non-standard input would be in a functioning system is hard to judge and would depend on the environment of use. Uncorrected errors are, in our experience, reasonably rare in fluent speech but they are more common in unskilled writing. These errors may be typographical, a case we shall ignore in this discussion, or they may be grammatical. Of most interest to us are the cases where the error is due to a language user attempting to use a standard language construction that he/she does not natively command.

In the course of this brief communication we shall discuss each of the above cases with examples, drawing on work we have done describing the differences between the syntax of vernacular speech and of standard writing (Kroch and Hindle, 1981). Our work indicates that these differences are sizable enough to cause problems for the acquisition of writing as a skill, and they may arise as well when natural language understanding systems come to be used by a wider public. Whether problems will indeed arise is, of course, hard to say as it depends on so many factors. The most important of these is whether natural language systems are ever used with oral, as well as typed-in, language.

* The discussion in this paper is based on an on-going study of the syntactic differences between written and spoken language funded by the National Institute of Education under grants G78-0169 and G80-0163.
We do not know whether the features of speech that we will be outlining will also show up in "keyboard" language, for its special characteristics have been little studied from a linguistic point of view (for a recent attempt see Thompson 1980). They will certainly occur more sporadically and at a lower incidence than they do in speech, and there may be new features of "keyboard" language that are not predictable from other language modes. We shall have little to say about how the problem of non-standard input can best be handled in a working system, for solving that problem will require more research. If we can give researchers working on natural language systems a clearer idea of what their devices are likely to have to cope with in an environment of widespread public use, our remarks will have achieved their purpose.

Informal, generally spoken, English exists in a number of regional, class and ethnic varieties, each with its own grammatical peculiarities. Fortunately, the syntax of these dialects is somewhat less varied than the phonology, so that we may reasonably approximate the situation by speaking of a general "non-standard vernacular (NV)", which contrasts in numerous ways with standard written English (SWE). Some of the differences between the two dialects can lead to problems for parsing and interpretation. Thus, subject-verb agreement, which is categorical in SWE, is variable in NV. In fact, in some environments subject-verb agreement is rarely indicated in NV, the most notable being sentences with dummy there subjects. Thus, the first of the sentences in (1) is the more likely in NV while, of course, only the second can occur in SWE:

(1) a. There was two girls on the sofa.
    b. There were two girls on the sofa.

Since singular number is the unmarked alternative, it occurs with both singular and plural subjects; hence only plural marking on a verb can be treated as a clear signal of number in NV. This could easily prove a problem for parsers that use number marking to help find subject-verb pairs. A further, perhaps more difficult, problem would be posed by another feature of NV, the deletion of relative clause complementizers on subject relatives. SWE does not allow sentences like those in (2); but they are the most likely form in many varieties of NV and occur quite freely in the speech of people whose speech is otherwise standard:

(2) a. Anybody says it is a liar.
    b. There was a car used to drive by here.

Here a parser that assumes that the first tensed verb following an NP that agrees with it is the main verb will be misled. There are severe constraints on the environments in which subject relatives can appear without a complementizer, apparently to prevent hearers from "garden-pathing" on this construction, but these restrictions are not statable in a purely structural way. A final example of an NV construction which differs from what SWE allows is the use of it for expletive there, as in (3):

(3) It was somebody standing on the corner.

This construction is categorical in black English, but it occurs with considerable frequency in the speech of whites as well, at least in Philadelphia, the only location on which we have data. This last example poses no problems in principle for a natural language system; it is simply a grammatical fact of NV that has to be incorporated into the grammar implemented by the natural language understanding system. There are many features like this, each trivial in itself but nonetheless a productive feature of the language.
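A minimal sketch -- our illustration, not the authors' implementation -- of how a parser might exploit the asymmetry just described, treating only plural verb marking as a genuine number cue when checking candidate subject-verb pairs:

    # Sketch: in NV, singular verb marking is uninformative ("There was two
    # girls on the sofa"), so only plural marking constrains the subject.
    def number_compatible(subject_number, verb_marking, dialect="NV"):
        if dialect == "SWE":
            return subject_number == verb_marking  # agreement is categorical
        if verb_marking == "plural":
            return subject_number == "plural"      # a real signal in NV
        return True                                # singular marking signals nothing

    print(number_compatible("plural", "singular"))  # True: acceptable NV pairing
    print(number_compatible("singular", "plural"))  # False: plural marking is a cue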
Hesitations and false starts are a consistent feature of spoken language, and any interpreter that cannot handle them will fail instantly. In one count we found that 52% of the sentences in a 90-minute conversational interview contained at least one instance (Hindle, 1981b). Fortunately, the deformation of grammaticality caused by self-correction induced disfluency is quite limited and predictable (Labov, 1966). With a small set of editing rules, therefore, we have been able to normalize more than 95% of such disfluencies in preprocessing texts for input to a parser for spoken language that we have been constructing (Hindle, 1981b). These rules are based on the fact that false starts in speech are phonetically signaled, often by truncation of the final syllable. Marking the truncation and other phonetic editing signals in our transcripts, we find that a simple procedure which removes the minimum number of words necessary to create a parsable sequence eliminates most ill-formedness.

The spoken language contains as a normal part of its syntactic repertoire constructions like those illustrated below:

(4) The problem is is that nobody understands me.
(5) That's the only thing he does is fight.
(6) John was the only guest who we weren't sure whether he would come.
(7) Didn't have to worry about us.

These are constructions that it is difficult to accommodate in a linguistically motivated syntax, for obvious reasons. Sentence (4) has two tensed verbs; (5), which has been called a "portmanteau construction", has a constituent belonging simultaneously to two different sentences; (6) has a wh-movement construction with no trace (see the discussion in Kroch, 1981); and (7) violates the absolute grammatical requirement that English sentences have surface subjects. We do not know why these forms occur so regularly in speech, but we do know that they are extremely common. The reasons undoubtedly vary from construction to construction. Thus, (5) has the effect of removing a heavy NP from surface subject position while preserving its semantic role as subject. Since we know that heavy NPs in subject position are greatly disfavored in speech (Kroch and Hindle, 1981), the portmanteau construction is almost certainly performing a useful function in simplifying syntactic processing or the presentation of information. Similarly, relative clauses with resumptive pronouns, like the one in (6), seem to reflect limitations on the sentence planning mechanism used in speech. If a relative clause is begun without computing its complete syntactic analysis, as a procedure like the one in MacDonald (1980) suggests, then a resumptive pronoun might be used to fill a gap that turned out to occur in a non-deletable position. This account explains why resumptive pronouns do not occur in writing. They are ungrammatical, and the real-time constraints on sentence planning that cause speech to be produced on the basis of limited look-ahead are absent. Subject deletion, illustrated in (7), is clearly a case of ellipsis induced in speech for reasons of economy, like contraction and cliticization. However, English grammar does not allow subjectless tensed clauses. In fact, it is this prohibition that explains the existence of expletive it in English, a feature completely absent from languages with subjectless sentences. Of course, subject deletion in speech is highly constrained, and its occurrence can be accommodated in a parser without completely rewriting the grammar of English, and we have done so.
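The minimal-deletion editing idea described at the start of this section can be sketched as follows; the hyphen convention for phonetically signaled truncation and the parsability oracle are our own assumptions, not the authors' code:

    # Sketch: delete the smallest span ending at each truncation signal
    # ("-") that leaves a parsable word sequence.
    def edit_false_starts(words, parsable):
        out = list(words)
        while True:
            cuts = [i for i, w in enumerate(out) if w.endswith("-")]
            if not cuts:
                return out
            cut = cuts[0]
            for start in range(cut, -1, -1):    # try removing 1 word, then 2, ...
                candidate = out[:start] + out[cut + 1:]
                if parsable(candidate):
                    out = candidate
                    break
            else:
                out = out[:cut] + out[cut + 1:]  # last resort: drop the fragment

    ok = lambda ws: " ".join(ws) == "I went to the store"
    print(edit_false_starts("I went to the sto- to the store".split(), ok))
    # -> ['I', 'went', 'to', 'the', 'store']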
The point here, as with all these examples, is that close study of the syntax of speech repays the effort with improvements in coverage.

The final sort of non-standard input that we will mention is the uncorrected true error. In our analysis of 40 or more hours of spoken interview material we have found true errors to be rare. They generally occur when people express complex ideas that they have not talked about before, and they involve changing direction in the middle of a sentence. An example of this sort of mistake is given in (8), where the object of a prepositional phrase turns into the subject of a following clause:

(8) When I was able to understand the explanation of the moves of the chessmen started to make sense to me, he became interested.

Large parts of sentences with errors like this are parsable, but the whole may not make sense. Clearly, a natural language system should be able to make whatever sense can be made out of such strings even if it cannot construct an overall structure for them. Having done as well as it can, the system must then rely on context, just as a human interlocutor would.

Unlike vernacular speech, the writing of unskilled writers quite commonly displays errors. One case which we have studied in detail is that of errors in relative clauses with "pied-piped" prepositional phrases. We often find clauses like the ones in (9), where the wrong preposition (usually in) appears at the beginning of the clause:

(9) a. methods in which to communicate with other people
    b. rules in which people can direct their efforts

Since pied-piped relatives are non-existent in NV, the simplest explanation for such examples is that they are errors due to imperfect learning of the standard language rule. More precisely, instead of moving a wh-prepositional phrase to the complementizer position in the relative clause, unskilled writers may analyze the phrase in which as a general oblique relativizer equivalent to where, the form most commonly used in this function in informal speech.

In summary, ordinary linguistic usage exhibits numerous deviations from the standard written language. The sources of these deviations are diverse and they are of varying significance for natural language processing. It is safe to say, however, that an accurate assessment of their nature, frequency and effect on interpretability is a necessary prerequisite to the development of truly robust systems.

REFERENCES

Hindle, Donald. "Near-sentences in spoken English." Paper presented at NWAVE X, 1981a.

Hindle, Donald. "The syntax of self-correction." Paper presented at the Linguistic Society of America annual meeting, 1981b.

Kroch, Anthony. "On the role of resumptive pronouns in amnestying island constraint violations." In CLS #17, 1981.

Kroch, Anthony and Donald Hindle. A quantitative study of the syntax of speech and writing. Final report to the National Institute of Education on grant #78-0169, 1981.

Labov, William. "On the grammaticality of everyday speech." Unpublished manuscript, 1966.

MacDonald, David. "Natural language production as a process of decision-making under constraint." Draft of an MIT Artificial Intelligence Lab technical report, 1980.

Thompson, Bozena H. "A linguistic analysis of natural language communication with computers." In Proceedings of the Eighth International Conference on Computational Linguistics, Tokyo, 1980.
Ill-Formed and Non-Standard Language Problems

Stan Kwasny
Computer Science Department
Indiana University
Bloomington, IN 47405

Abstract

Prospects look good for making real improvements in Natural Language Processing systems with regard to dealing with unconventional inputs in a practical way. Research which is expected to have an influence on this progress, as well as some predictions about accomplishments in both the short and long term, are discussed.

1. Introduction

Developing Natural Language Understanding systems which permit language in expected forms in anticipated environments having a well-defined semantics is in many ways a solved problem with today's technology. Unfortunately, few interesting situations in which Natural Language is useful live up to this description. Even a modicum of machine intelligence is not possible, we believe, without continuing the pursuit for more sophisticated models which deal with such problems and which degrade gracefully (see Hayes and Reddy, 1979).

Language as spoken (or typed) breaks the "rules". Every study substantiates this fact. Malhotra (1975) discovered this in his studies of live subjects in designing a system to support decision-making activities. An extensive investigation by Thompson (1980) provides further evidence that providing a grammar of "standard English" does not go far enough in meeting the prospective needs of the user. Studies by Fromkin and her co-workers (1980), likewise, provide new insights into the range of errors that can occur in the use of language in various situations. Studies of this sort are essential in identifying the nature of such non-standard usages.

But more than merely anticipating user inputs is required. Grammaticality is a continuum phenomenon with many dimensions. So is intelligibility. In hearing language used in a strange way, we often pass off the variation as dialectic, or we might unconsciously correct an errorful utterance. Occasionally, we might not understand or even misunderstand. What are the rules (meta-rules, etc.) under which we operate in doing this? Can introspection be trusted to provide the proper perspectives? The results of at least one investigator argue against the use of intuitions in discovering these rules (Spencer, 1973). Computational linguists must continue to conduct studies and consider the results of studies conducted by others.

2. Perspectives

Several perspectives exist which may give insights on the problem. We present some of these, not to pretend to exhaustively summarize them, but to hopefully stimulate interest among researchers to pursue one or more of these views of what is needed.

Certain telegraphic forms of language occur in situations where two or more speakers of different languages must communicate. A pidgin form of language develops which borrows features from each of the languages. Characteristically, it has limited vocabulary, lacks several grammatical devices (like number and gender, for example), and exhibits a reduced number of redundant features. This phenomenon can similarly be observed in some styles of man-machine dialogue. Once the user achieves some success in conversing with the machine, whether the conversation is being conducted in Natural Language or not, there is a tendency to continue to use those forms and words which were previously handled correctly.
The result is a type of pidginization between the machine dialect and the user dialect which exhibits pidgin-like characteristics: limited vocabulary, limited use of some grammatical devices, etc. It is therefore reasonable to study these forms of language and to attempt to accommodate them in some natural way within our language models. Woods (1977) points out that the use of Natural Language:

"... does not preclude the introduction of abbreviations and telegraphic shorthands for complex or high frequency concepts -- the ability of natural English to accommodate such abbreviations is one of its strengths." (p. 18)

Specialized sublanguages can often be identified which enhance the quality of the communication and prove to be quite convenient, especially to frequent users.

Conjunction is an extremely common and yet poorly understood phenomenon. The wide variety of ways in which sentence fragments may be joined argues against any approach which attempts to account for conjunction within the same set of rules used in processing other sentences. Also, constituents being joined are often fragments, rather than complete sentences, and, therefore, any serious attempt to address the problem of conjunction must necessarily investigate ellipsis as well. Since conjunction-handling involves ellipsis-handling, techniques which treat non-standard linguistic forms must explicate both.

3. Techniques

What approaches work well in such situations? Once a non-standard language form has been identified, the rules of the language processing component could simply be expanded to accommodate that new form. But that approach has limitations and misses the general phenomenon in most cases.

DeJong (1979) demonstrated that wire service stories could be "skimmed" for prescribed concepts without much regard to grammaticality or acceptability issues. Instead, as long as coherency existed among the individual concepts, the overall content of the story could be summarized. The whole problem of addressing what to do with non-standard inputs was finessed because of the context.

Techniques based on meta-rules have been explored by various researchers. Kwasny (1980) investigated specialized techniques for dealing with cooccurrence violations, ellipsis, and conjunction within an ATN grammar. Sondheimer and Weischedel (1981) have generalized and refined this approach by making the meta-rules more explicit and by designing strategies which manipulate the rules of the grammar using meta-rules.

Other systems have taken the approach that the user should play a major role in exercising choices about the interpretations proposed by the system. With such feedback to the user, no time-consuming actions are performed without his approval. This approach works well in database retrieval tasks.

4. Near and Long Term Prospects

In the short term, we must look to what we understand and know about the language phenomena and apply those techniques that appear promising. Non-standard language forms appear as errors in the expected processing paths.

One of the functions of a style-checking program (for example the EPISTLE system by Miller et al., 1981) is to detect and, in some cases, correct certain types of errors made by the author of a document. Since such programs are expected to become more of a necessary part of any author support system, a great deal of research can be expected to be directed at that problem.
A great deal of research which deals with errors in language inputs comes from attempts to process continuous speech (see, for example, Bates, 1976). The techniques associated with non-left-to-right processing strategies should prove useful in narrowing the number of legal alternatives to be attempted when identifying and correcting some types of error. It is quite conceivable that an approach to this problem that parallels the work on speech understanding would be very fruitful. Note that this does not involve inventing new methods, but rather borrows from related studies. The primary impediment, at the moment, to this approach, as with some of the other approaches mentioned, is the time involved in considering viable alternatives. As these problems are reduced over the next few years, I feel that we should see Natural Language systems with greatly improved communication abilities.

In the long term, some form of language learning capability will be critical. Both rules and meta-rules will need to be modifiable. The system behavior will need to improve and adapt to the user over time. User models of style and preferred forms as well as common mistakes will be developed as a necessary part of such systems. As speed increases, more opportunity will be available for creative architectures such as was seen in the speech projects, but which still respond within a reasonable time frame. Finally, formal studies of user responses will need to be conducted in an ongoing fashion to assure that the systems we build conform to user needs.

5. References

Bates, M., "Syntax in Automatic Speech Understanding," American Journal of Computational Linguistics, Microfiche 45, 1976.

DeJong, G.F., "Skimming Stories in Real Time: An Experiment in Integrated Understanding," Technical Report 158, Yale University, Computer Science Department, 1979.

Fromkin, V.A., ed., Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen, and Hand, Academic Press, New York, 1980.

Hayes, P.J., and R. Reddy, "An Anatomy of Graceful Interaction in Spoken and Written Man-Machine Communication," Technical Report, Carnegie-Mellon University, August, 1979.

Kwasny, S.C., "Treatment of Ungrammatical and Extra-grammatical Phenomena in Natural Language Understanding Systems," Ph.D. Thesis, Ohio State University, 1980 (available through the Indiana University Linguistics Club, Bloomington, Indiana).

Kwasny, S.C., and N.K. Sondheimer, "Relaxation Techniques for Parsing Ill-Formed Input," American Journal of Computational Linguistics, Vol. 7, No. 2, April-June, 1981, 99-108.

Malhotra, A., "Design Criteria for a Knowledge-Based English Language System for Management: An Experimental Analysis," MAC TR 146, Cambridge, MA, M.I.T., February, 1975.

Miller, L.A., G.E. Heidorn, and K. Jensen, "Text-Critiquing with the EPISTLE System: An Author's Aid to Better Syntax," Proceedings of the National Computer Conference, AFIPS Press, Montvale, NJ, 1981.

Sondheimer, N.K., and R.M. Weischedel, "A Computational Linguistic Approach to Ungrammaticality Based on Meta-Rules," Annual Meeting of the Linguistic Society of America, New York, NY, December, 1981.

Spencer, N.J., "Differences Between Linguists and Nonlinguists in Intuitions of Grammaticality-Acceptability," Journal of Psycholinguistic Research, 2, 2, 1973, 83-99.

Thompson, B.B., "Linguistic Analysis of Natural Language Communication with Computers," Proceedings of the Eighth International Conference on Computational Linguistics, Tokyo, October, 1980, 190-201.
Weischedel, R.M., and N.K. Sondheimer, "A Framework for Processing Ill-Formed Input," Technical Memorandum H-00519, Sperry-Univac, Blue Bell, PA, October 16, 1981.

Woods, W.A., "A Personal View of Natural Language Understanding," SIGART Newsletter, No. 61, February, 1977, 17-18.
"NATURAL LANGUAGE TEXTS ARE NOT NECESSARILY GRAMMATICAL AND UNAMBIGUOUS OR EVEN COMPLETE." Lance A. Miller Behavioral Sciences and Linguistics Group IBM Watson Research Center P. O. Box 218 Yorktown Heights, NY 10598 The EPISTLE system is being developed in a research project for exploring the feasibility of a variety of intelligent applications for the processing of business and office text (!'Z; the authors of are the project workers). Although ultimately intended functions include text generation (e.g., 4), present efforts focus on text analysis: devel- oping the capability to take in essentially unconstrained business text and to output grammar and style critiques, on a sentence by sentence basis. Briefly, we use a large on-line dictionary and a bottom-up parser in connection with an Augmented Phrase Structure Grammar (5) to obtain an approxi- mately correct structural description of the surface text (e.g., we posit no transformations or recovery of deleted material to infer underlying "deep" structures). In this process we always try to force a single parse output, even in the pres- ence of true ambiguity. Grammatical critiques are provided by having very strong grammar restrictions in an initial processing of the sentence; should the application of grammar rules fail to lead to the identification of a complete, syntactically correct, sentence, we then process the material a second time, adding other rules which essentially relax certain constraints, such as subject-verb number agreement, thereby permitting us to recog- nize a wide variety of true grammatical errors. The stylistic critiques are based on measurements of the detailed hierarchical structure descriptions produced by the parser, letting us detect a variety of stylistic characteristics judged by "experts" to be undesirable: too great a distance between subject and verb, too much embedding, unbalanced subject/predicate size, excessive negation or quan- tification, etc. The text corpus used for system construction and testing is a set of some 400 business letters, mostly written by individuals from within various organizations to individuals outside those organ- izations. These letters, which consist of approxi- mately 2300 sentences, were selected from a larger collection (about 2000 letters) as being represen- tative of the wide variety of styles, tones, subject matter, purposes, lengths, factual content, and organization-type found in the overall popu- lation of business letters. A corpus differing in so many of the above features is also heterogeneous with respect to syntactic structures -- and there- fore with respect to the grammatical capabilities that must be incorporated for correct recognition. However, it was one thing to be prepared for struc- tural diversity; it was quite another thing to be faced with the fact that our business letters are not some small to moderate subset of grammatical phenomena. Rather, they include all of the common and most of the arcane constructions one could find in, say, Warriner and Griffith (6). For example, the very first sentence we tackled was 29 words long and began "How nice it was to receive your letter complimenting our Manager, Bud Handy, on his courtesy • .. : we ran into extraposition, inver- sion, infinitive nominalization, gerund phrase, and appositive all within the first 13 words! 
A primary consequence of this rich jumble of syntactic scree was the frequent annoyance of being stopped dead in our processing tracks as our grammar revealed itself to be yet once more incomplete. But it was not only the incompleteness of the grammar (for correct sentences) that gave us trouble: many words were not recognized, sometimes sentences were incomplete, other times they were truly ungrammatical (via normal abnormalities of grammar or via what appeared to be a rather thoughtless -- or at least uninformed -- scattering of apostrophes and semicolons within the text), and often we were faced not with our desired single parse but with many. These then are the situations which cried out for techniques either to keep processing going or, at least, to keep it alive long enough for it to scratch out detailed informative guesses at structure on the parsing floor before expiring.

The techniques for hardiness and robustness which we have developed in the two years of implementation, and particularly recently, are mostly specific to the five trouble situations referred to above. For (i) unrecognized words (words not in our 125K entry on-line dictionary) we check first either for initial capitalization or for an internal hyphen, presuming a proper name -- noun -- part of speech for the former and either noun or adjective for the latter. As we improve our dictionary processing, to support efficient affix-stripping and stem storage, we now plan to hypothesize parts of speech based upon, in particular, the outer suffixes (e.g., "ly" pretty conclusively establishes multi-syllabic words as adverbs). This more "intelligent" processing at the part-of-speech level is particularly important for avoiding multiple false parses.

For the two situations of either (ii) an incomplete grammar failing to process a complete grammatical sentence, or (iii) an actual incomplete sentence (sentence fragment), we are now able to output a single "best" structural description when the grammar can do no more (Jensen and Heidorn, forthcoming). This partial structure is "best" in the sense that it provides the largest and most continuous coverage of the input text string, and it also adheres to certain orderings of parts of speech and non-terminal constituents. Our experience with such structures is that they are quite often correct, always better than a "CANNOT PARSE" outcome, and appear to be fairly usable for style critiquing. In the future we believe more can be done with sentence fragments by assuming, first, that they are simply to be conjoined to some element of the previous sentence, or, second, that they are an elaboration of an immediately preceding element; in either case the partial structure output should provide sufficient information to "hook" the fragments in correctly.

For (iv) truly ungrammatical sentences, as mentioned previously, we introduce a second pass with a number of grammatical restrictions relaxed; should any complete sentence structure result, we can determine which relaxations were responsible and thereby actually identify the class of ungrammaticality. From the point of view of useful applications, this is much more of a desirable user-oriented function than an internal robust recovery procedure. Nonetheless, from the point of view of the style critiques at the sentence and paragraph levels, this procedure assures the best possible starting point, despite "noise" in the input text.
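Before turning to (v), the unknown-word guesses described under (i) can be sketched as follows; the suffix table and the fallback category set are our own illustrative assumptions, not EPISTLE's actual lists:

    # Sketch of part-of-speech guessing for words missing from the dictionary.
    SUFFIX_GUESSES = [("ly", ["adverb"]), ("ness", ["noun"]), ("able", ["adjective"])]

    def guess_pos(word):
        if word[:1].isupper():
            return ["noun"]                        # presumed proper name
        if "-" in word:
            return ["noun", "adjective"]           # internal hyphen
        for suffix, tags in SUFFIX_GUESSES:
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return tags                        # outer suffix is decisive
        return ["noun", "verb", "adjective"]       # open-class fallback

    for w in ["Handy", "text-critiquing", "courteously"]:
        print(w, guess_pos(w))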
Finally, (v) the situation of multiple parses is dealt with by two techniques. The first is the deliberate attempt to construct the grammar rules such that no more than a single parse can squeeze through in most situations; the second is the development of a metric which computes a real number for each parse, based on its structural features, with the decision rule simply being to choose the parse with the smallest number (7). Our experience with this metric is that it usually leads to selection of the best all-around parse; such errors as are made would seem to require semantic -- and even pragmatic -- information to be weighed in the metric, a capability presently beyond our means.

REFERENCES

1. Miller, Lance A. "Project EPISTLE: A system for the automatic analysis of business correspondence." Proceedings of the First Annual National Conference on Artificial Intelligence, Stanford University, August, 1980, 280-282.

2. Miller, Lance A., George E. Heidorn, and Karen Jensen. "Text-Critiquing with the EPISTLE System: An Author's Aid to Better Syntax." AFIPS Proceedings of the National Computer Conference, Chicago, May 4-7, 1981, 649-655.

3. Heidorn, George E., Karen Jensen, Lance A. Miller, Roy J. Byrd, and Martin S. Chodorow. "The EPISTLE Text-Critiquing System." IBM Systems Journal, to appear Fall, 1982.

4. Jensen, Karen. "Computer Generation of Topic Paragraphs: Structure and Style." Paper presented at the ACL Session of the LSA Annual Meeting, New York City, December, 1981 (IBM Research Report, 1982).

5. Heidorn, George E. "Augmented Phrase Structure Grammars." In B. Nash-Webber and R. Schank (Eds.), Theoretical Issues in Natural Language Processing, Association for Computational Linguistics, 1975.

6. Warriner, J. E. and F. Griffith. English Grammar and Composition. New York: Harcourt, Brace and World, Inc., 1963.

7. Heidorn, George E. "Experience with an easily computed metric for ranking alternative parses." Presentation at the Association for Computational Linguistics Meeting, Toronto, Canada, June 17, 1982.
SOLUTIONS TO ISSUES DEPEND ON THE KNOWLEDGE REPRESENTATION

Frederick B. Thompson
California Institute of Technology
Pasadena, California

In organizing this panel, our Chairman, Bob Moore, expressed the view that too often discussion of Natural Language access to data bases has focused on what particular systems can or cannot do, rather than on underlying issues. He then admirably proceeded to organize the panel around issues rather than systems. In responding, I attempted to frame my remarks on each of his five issues in a general way that would not reflect my own parochial experience and interest. At one point I thought that I had succeeded quite well. However, after taking a clearer eyed view, it was apparent that my remarks reflected assumptions about knowledge representation that were by no means universal. This suggests a sixth issue which I would like to nominate: Are there really useful generalizations about computational linguistic issues that are independent of assumptions concerning knowledge representation? I will come back to this sixth issue after discussing the five chosen by our Chairman.

Issue #1: Aggregate Functions and Quantities

First, let us cast this issue in a somewhat different way. In many data base situations, there are classes of individuals all of whose members share the same attributes and thus, from the point of view of the data base, are indistinguishable. Thus there is no need to add all of these individuals as separate entities. To use Bob Moore's example, if a DEPARTMENT file has a field for NUMBER-OF-EMPLOYEES, it stands to reason that the particular individuals who actually existed in the various departments would not be separately represented in the data base (for otherwise there would be a redundancy whose consistency would be hard to police). In such situations we need the notion of a "collective," namely a single data base object that takes the place of a number of individuals and which can carry their common attributes together with one additional item of information, namely their number. Thus a DEPARTMENT could have as a single member such a collective of employees; indeed it could have several such collective members and other individual members as well. The procedure that is called when answering "how many" and "number of" questions would know the difference between subclasses, individual members and collective members; it would know to recurse on subclasses, add one to its count for individual members and add the indicated number to its count for collective members. This appears to be a unified framework that will handle all of the cases mentioned in Bob Moore's statement of Issue #1.

Issue #2: Time and Tense

I should like to split this issue into two. The first sub-issue is the problem of handling continuously varying phenomena, such as the movement of ships, the changing of relative amounts of ingredients in chemical reactions, or the percent completions of tasks. Here it is apparent that each instance will require a specialized procedure to handle interpolation.
Ships cannot sail across land; thus an interpolation procedure that produces the position of a ship on the basis of its points of departure and destination will need to know about the coastlines of continents. Movements to chemical equilibriums are not linear; task completions depend on changing personnel assignments. Just as we computational linguists provide to our system user the capability to introduce into his data base system such notions as locations of ports and ships, etc., we must also provide the means by which he can define such continuously varying parameters as position in such ways that appropriate interpolations can be made by the general system in conjunction with the particular definition. For example, the user may define "position of X" in terms of calculations, perhaps extensive, involving the actual geometry of the seas.

The second sub-issue on which I would like to comment concerns those cases where discrete time intervals provide an adequate representation of the time aspects relevant to the data base. In these cases, if the time information is complete, i.e., actual starting and ending times of all events are recorded in the data base, the handling of time is rather straightforward. However this case often does not apply. Consider the following example:

"The Kittyhawk arrived in London Monday. The Maru will sail from London Friday. Will the Kittyhawk and Maru have been in London at the same time?"

One is tempted to allow the computer to give a response: "Possibly." However the introduction of a three valued logic is fraught with well known dangers of its own. A more protracted response gets in the way of clause embedding; how does one handle, "Will ships that have been in London together sail together?" One answer would be: "The Kittyhawk arrived last Monday; the Maru will sail next Friday. If they will have been there at the same time, then not all ships that were in London together will sail together, but they would be the only exceptions." Choosing a relevant diagnostic message, as above, is a major and difficult computational linguistic issue going well beyond questions concerning time and tense.

Issue #3: Quantifying into Questions

This is a deep, philosophical question. Computational linguists have progressed beyond the consideration of single sentences, and are seeking to follow the focus of a dialogue and identify the theme of a discourse. This is eventually an infinite regress, ultimately involving cross cultural backgrounds and the (perhaps Machiavellian) intent of those who control the use of a particular application. But the engineering problem, at least in the present state of the art, is simple: what response is most useful to the user? Consider two possible answers to the following question:

"Who manages each department?"

A1: "No single person manages all of the departments."

A2: dept. A    manager A
    dept. B    manager B
    ...

Unless there were an undue number of departments involved, the second is clearly preferred, for it suffices even if the first were intended.
In our own experience, "each" can usefully be interpreted as calling for a labeled list as answer in almost all cases. The difficulties of being more clever are great and will often result in combinatorial explosion. I am sure, for a long time into the future, we will be seeking simple solutions that (a) are responsive in most cases, (b) provide the needed information, even though redundant in some cases, and (c) make clear the misinterpretation in the few cases where this arises, even though these solutions may violate strict linguistic analysis.

Issue #4: Querying Semantically Complex Fields

In presenting this issue to the panel, Bob Moore used the following three questions as an example:

"Is John Jones a child of an MIT alumnus?"
"Is one of John Jones's parents an MIT alumnus?"
"Did either parent of John Jones attend MIT?"

The apparent problem is the possibility of multiple descriptions, often involving disparate words, for getting at data in the data base. In designing our systems, we recognize two truths which appear to conflict: (a) the value of minimizing the redundancy of information in the data base; (b) the necessity of non-independent words in the vocabulary. In our own work, as most of you know, we have stressed the use of definitions as a means of achieving a synthesis of these two principles. I recommend it to you as a very useful tool in handling problems like Bob presents. We illustrate how Bob's example can be handled:

definition: child : converse of parent
verb: John "attends" MIT = John is a student of MIT
definition: alumnus : person who had been a student

The above three questions then are analyzed as:

"John Jones is (converse of parent) of a person who had been a student of MIT?"
"One of John Jones's parents is a person who had been a student of MIT?"
"Was either parent of John Jones a student of MIT?"

I do not wish to slur over the fact that the definition mechanism must be highly sophisticated in its handling of free variables, but our experience indicates that this can be done quite satisfactorily.

Issue #5: Multi-File Queries

This issue has been stated by Bob in terms of a traditional multiple file data base structure. This issue has its counterpart in the semantic net data base structures discussed in papers on knowledge representation. Since we use such a semantic net structure for our data, let me rephrase the issue in those terms. In Bob's statement of the issue, he uses the example of the SHIP file and the PORT file, where the SHIP file has fields for home port, departure port and destination port. Paralleling his example, let us consider the phrase: "London ship". Suppose that (a) there was a ship named London, and (b) London was a home port, port of departure and destination, not necessarily of the same ship. Then "London ship" is four ways ambiguous, meaning: (1) the ship London, (2) London (home port) ships, (3) London (departure port) ships and (4) London (destination port) ships. In this formulation of the problem, all is easy; insofar as the phrase "London ship" is not disambiguated in context, the user is informed of the ambiguous meanings and the associated responses. The difficulty arises when there are
possible interpretations farther afield. Fort Collins is neither a port nor a ship; however, the headquarters of the ABC Shipping Company is there and they own several ships. What are we to mean by "Fort Collins ship"? These are problems that were first attacked by Quillian, and I am not sure that anyone has added to his seminal analysis of them. In our own work, we have stopped at "once removed" connections, as illustrated by the four-way ambiguity above.

Issue #6: Solutions to Issues Depend on the Knowledge Representation

As I look back on the above remarks concerning Bob's five issues, it becomes apparent that the usefulness of these remarks depends on the degree one is aware of the knowledge representation that underlies the solution suggested. For example, in the case of the last issue, if one only knew about traditional file structures, finding paths that link fields in more than one file appears all but unsolvable. Even if one is accustomed to semantic net structures, the viability of finding connective paths is highly dependent on the existence of back links between attributes and their arguments and values. Adding a definitional capability, other than simple abbreviations and synonyms, turns on the way free variables are handled in general and on the apparatus for binding them; for example, in processing the definition:

definition: area : length times width

when applied to a class, say "areas of ships", how does one ensure that he will obtain:

length(i) * width(i) for i = 1 to number of ships

rather than:

length(i) * width(j) for i,j = 1 to number of ships?

It comes down to how variables are maintained in the underlying knowledge representation. One is forced to conclude that the basis for the integration of the syntax and semantics of computational linguistic systems is accomplished when the decisions on knowledge representation are made. Discussions of our various solutions to the issues of computational linguistics can meaningfully take place only in terms of these underlying knowledge representations.
What's in a Semantic Network?

James F. Allen
Alan M. Frisch
Computer Science Department
The University of Rochester
Rochester, NY 14627

Abstract

Ever since Woods's "What's in a Link" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the first-order predicate calculus (FOPC). The typical argument proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic FOPC representation. For the past two years we have been studying the formalization of knowledge retrievers as well as the representation languages that they operate on. This paper presents a representation language in the notation of FOPC whose form facilitates the design of a semantic-network-like retriever.

1. Introduction

We are engaged in a long-term project to construct a system that can partake in extended English dialogues on some reasonably well specified range of topics. A major part of this effort so far has been the specification of a knowledge representation. Because of the wide range of issues that we are trying to capture, which includes the representation of plans, actions, time, and individuals' beliefs and intentions, it is crucial to work within a framework general enough to accommodate each issue. Thus, we began developing our representation within the first-order predicate calculus. So far, this has presented no problems, and we aim to continue within this framework until some problem forces us to do otherwise.

Given this framework, we need to be able to build reasonably efficient systems for use in the project. In particular, the knowledge representation must be able to support the natural language understanding task. This requires that certain forms of inference must be made. Within a general theorem-proving framework, however, those inferences desired would be lost within a wide range of undesired inferences. Thus we have spent considerable effort in constructing a specialized inference component that can support the language understanding task. Before such a component could be built, we needed to identify what inferences were desired. Not surprisingly, much of the behavior we desire can be found within existing semantic network systems used for natural language understanding. Thus the question "What inferences do we need?" can be answered by answering the question "What's in a semantic network?"

Ever since Woods's [1975] "What's in a Link" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the first-order predicate calculus (FOPC). The typical argument (e.g., [Hayes, 1979; Nilsson, 1980; Charniak, 1981a]) proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic (i.e., logically equivalent when the mapping between the two notations is accounted for) FOPC representation. We emphasize the term "logically isomorphic" because these arguments have primarily dealt with the content (semantics) of the representations rather than their forms (syntax). Though these arguments are valid and scientifically important, they do not answer our question. Semantic networks not only represent information but facilitate the retrieval of relevant facts.
For instance, all the facts about the object JOHN are stored with a pointer directly to one node representing JOHN (e.g., see the papers in [Findler, 1979]). Another example concerns the inheritance of properties. Given a fact such as "All canaries are yellow," most network systems would automatically conclude that "Tweety is yellow," given that Tweety is a canary. This is typically implemented within the network matcher or retriever.

We have demonstrated elsewhere [Frisch and Allen, 1982] the utility of viewing a knowledge retriever as a specialized inference engine (theorem prover). A specialized inference engine is tailored to treat certain predicate, function, and constant symbols differently than others. This is done by building into the inference engine certain true sentences involving these symbols and the control needed to handle these sentences. The inference engine must also be able to recognize when it is able to use its specialized machinery. That is, its specialized knowledge must be coupled to the form of the situations that it can deal with.

For illustration, consider an instance of the ubiquitous type hierarchies of semantic networks:

    FORDS
      |  subtype
    MUSTANGS
      |  type
    OLD-BLACK1

By mapping the types FORDS and MUSTANGS to predicates which are true only of fords and mustangs respectively, the following two FOPC sentences are logically isomorphic to the network:

(1.1) ∀x MUSTANGS(x) → FORDS(x)
(1.2) MUSTANGS(OLD-BLACK1)

However, these two sentences have not captured the form of the network, and furthermore, not doing so is problematic to the design of a retriever. The subtype and type links have been built into the network language because the network retriever has been built to handle them specially. That is, the retriever does not view a subtype link as an arbitrary implication such as (1.1) and it does not view a type link as an arbitrary atomic sentence such as (1.2).

In our representation language we capture the form as well as the content of the network. By introducing two predicates, TYPE and SUBTYPE, we capture the meaning of the type and subtype links. TYPE(i,t) is true iff the individual i is a member of the type (set of objects) t, and SUBTYPE(t1,t2) is true iff the type t1 is a subtype (subset) of the type t2. Thus, in our language, the following two sentences would be used to represent what was intended by the network:

(2.1) SUBTYPE(MUSTANGS,FORDS)
(2.2) TYPE(OLD-BLACK1,MUSTANGS)

It is now easy to build a retriever that recognizes subtype and type assertions by matching predicate names. Contrast this to the case where the representation language used (1.1) and (1.2) and the retriever would have to recognize these as sentences to be handled in a special manner. But what must the retriever know about the SUBTYPE and TYPE predicates in order that it can reason (make inferences) with them? There are two assertions, (A.1) and (A.2), such that {(1.1),(1.2)} is logically isomorphic to {(2.1),(2.2),(A.1),(A.2)}. (Note: throughout this paper, axioms that define the retriever's capabilities will be referred to as built-in axioms and specially labeled A.1, A.2, etc.)

(A.1) ∀t1,t2,t3 SUBTYPE(t1,t2) ∧ SUBTYPE(t2,t3) → SUBTYPE(t1,t3)
      (SUBTYPE is transitive.)

(A.2) ∀o,t1,t2 TYPE(o,t1) ∧ SUBTYPE(t1,t2) → TYPE(o,t2)
      (Every member of a given type is a member of its supertypes.)

The retriever will also need to know how to control inferences with these axioms, but this issue is considered only briefly in this paper.
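To make the built-in treatment concrete, here is a minimal sketch -- ours, not the authors' implementation -- of a retriever that handles SUBTYPE and TYPE assertions by matching predicate names and builds (A.1) and (A.2) into its own search rather than chaining through arbitrary implications (an acyclic hierarchy is assumed):

    # Base facts in the form of (2.1) and (2.2).
    SUBTYPE = {("MUSTANGS", "FORDS")}
    TYPE = {("OLD-BLACK1", "MUSTANGS")}

    def subtype_of(t1, t2):
        # (A.1): SUBTYPE is transitive.
        if (t1, t2) in SUBTYPE:
            return True
        return any(a == t1 and subtype_of(b, t2) for (a, b) in SUBTYPE)

    def type_of(o, t):
        # (A.2): o belongs to t if asserted of t or of any subtype of t.
        return any(x == o and (y == t or subtype_of(y, t)) for (x, y) in TYPE)

    print(type_of("OLD-BLACK1", "FORDS"))   # True, via (2.1), (2.2) and (A.2)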
The design of a semantic-network language often continues by introducing new kinds of nodes and links into the language. This process may terminate with a fixed set of node and link types that are the knowledge-structuring primitives out of which all representations are built. Others have referred to these knowledge-structuring primitives as epistemological primitives [Brachman, 1979], structural relations [Shapiro, 1979], and system relations [Shapiro, 1971]. If a fixed set of knowledge-structuring primitives is used in the language, then a retriever can be built that knows how to deal with all of them.

The design of our representation language very much mimics this approach. Our knowledge-structuring primitives include a fixed set of predicate names and terms denoting three kinds of elements in the domain. We give meaning to these primitives by writing domain-independent axioms involving them. Thus far in this paper we have introduced two predicates (TYPE and SUBTYPE), two kinds of elements (individuals and types), and two axioms ((A.1) and (A.2)). We shall name types in uppercase and individuals in uppercase letters followed by at least one digit.

Considering the above analysis, a retrieval now is viewed as an attempt to prove that some queried fact logically follows from the base facts (e.g., (2.1), (2.2)) and the built-in axioms (such as A.1 and A.2). For the purposes of this paper, we can consider all base facts to be atomic formulae (i.e., they contain no logical operators except negation). While compound formulae such as disjunctions can be represented, they are of little use to the semantic network retrieval facility, and so will
The same approach has been used in virtually all semantic network- and frame-based systems [Charniak, 1981b], most of which use a case grammar [Fillmore, 1968] to influence the choice of role names. This approach also enables quantification over events and their components, such as in the sentence, "For each event, the actor of the event causes that event." Thus, rather than representing the assertion that the ball fell by a sentence such as

(try-1) FALL(BALL1),

the more appropriate form is

(try-2) ∃e TYPE(e,FALL-EVENTS) ∧ OBJECT-ROLE(e,BALL1).
Thus the assertion that "John is 10" would involve (7) conjoined with HOLDS(p), i.e., (8) ] p TYPE(p,AGE-RELATIONS) A ROLE(p, OBJECT, JOHN1) A ROLE(p, VALUE, IO) ^ HOLDS(p) The assertion "John is not 10" is not the negation of (8), but is (7) conjoined with -HOLDS(p), i.e., (9) ] p TYPE(p,AGE-RELATIONS) A ROLE(p, OBJECT;JOHN1) A ROLF(p, VALUE, IO) A -HOLDS(p). We could also handle negation by introducing the type NO'I'-REIATIONS, which takes one rd. ~,.,,, is filled by another relation. To assert the above, we woutd construct an individual N1, of type NOT-RELATIONS, with its role filled with p, and assert that N1 holds. We see no advantage to this approach, however, since negation "moves through" the HOLDS predicate. In other words, the relation "not p" holding is equivalent to the relation "p" not holding. Disjunction and conjunction are treated in a similar manner. 3. Making Types Work for You The system described so far, though simple, is close to providing us with one of the most characteristic inferences made by semantic networks, namely inheritance. For example, we might have the following sort of information in our network: (10) SUBTYPE(MAMMALS,ANIMALS) (11) S UBTYPE(2-LEGGED-ANIMALS,ANIMALS) (12) SUBTYPE(PERSONS,MAMMALS) (13) SUBTYPE(PERSONS,2-LEGGED-ANIMALS) (14) SUBTYPE(DOGS,MAMMALS) (15) TYPE(GEORGE1,PERSONS) In a notation like in [Hendrix, 1979], these facts would be represented as: ANIMALS 2-LE MAMMALS PERSONS DOGS T GEORGE1 In addition, let us assume we know that all instances of 2-LEGGED-ANIMALS have two legs and that all instances of MAMMALS are warm-blooded: (16) v x TYPE(x,2-LEGGF_.D-ANIMALS) HAS-2-LEGS(x) (17) v y TYPE(y,MAMMALS) . -~ WARM-BLOODED(y) These would be captured in the Hendrix formalism using his delineation mechanism. Note that relations such as "WARM-BLOODED" and "HAS-2-LEGS" should themselves be described as relations with roles, but that is not necessary for this example. Given these facts, and axioms (A.1) to (A.3), we can prove that "George has two legs" by using axiom (A.2) on (13) and (15) to conclude (18) TYPE(GEORGE1,2-LEGGED-ANIMALS) 22 and then using (18) with (16) to conclude (19) HAS-2-LEGS(GEORGE1). In order to build a retriever that can perform these inferences automatically, we must be able to distinguish facts like (16) and (17) from arbitrary facts involving implications, for we cannot allow arbitrary chaining and retain efficiency. This could be done by checking for implications where the antecedent is composed entirely of type restrictions, but this is difficult to specify. The route we take follows the same technique described above when we introduced the TYPE and SUBTYPE predicates. We introduce new notation into the language that explicitly captures these cases. The new form is simply a version of the typed FOPC, where variables may be restricted by the type they range over. Thus, (16) and (17) become (20) v x:2-LEGGED-ANIMAI.S HAS-2-LEGS(x) (21) V y:MAMMALS WARM-BLOODED(y), The retriever now can be implemented as a typed theorem prover that operates only on atomic base facts (now including (20) and (21)) and axioms (A.1) to (A.3). We now can deduce that GEORGE1 has two legs and that he is warm-blooded. Note that objects can be of many different types as well as types being subtypes of different types. Thus, we could have done the above without the type PERSONS, by making GEORGE1 of type 2-LEGGED-ANIMALS and MAMMALS. 4. Making Roles Work for You In the previous section we saw how properties could be inherited. 
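Before turning to roles, the deduction just described can be made concrete by extending the toy chainer sketched earlier (again an illustrative assumption on our part, not the paper's HORNE implementation): typed universal facts such as (20) and (21) are stored apart from arbitrary implications, so the retriever chains only through the type hierarchy.

    SUBTYPE = {("MAMMALS", "ANIMALS"), ("2-LEGGED-ANIMALS", "ANIMALS"),
               ("PERSONS", "MAMMALS"), ("PERSONS", "2-LEGGED-ANIMALS"),
               ("DOGS", "MAMMALS")}                      # facts (10)-(14)
    TYPE = {("GEORGE1", "PERSONS")}                      # fact (15)

    # Typed universal facts (20) and (21): predicate -> restricting type.
    TYPED_FACTS = {"HAS-2-LEGS": "2-LEGGED-ANIMALS",
                   "WARM-BLOODED": "MAMMALS"}

    def subtype(t1, t2):                                 # A.1, as before
        return t1 == t2 or any(subtype(m, t2) for (s, m) in SUBTYPE if s == t1)

    def has_type(x, t):                                  # A.2, as before
        return any(subtype(t1, t) for (y, t1) in TYPE if y == x)

    def prove(pred, x):
        # Limited inference: only typed facts plus the hierarchy are chained.
        return pred in TYPED_FACTS and has_type(x, TYPED_FACTS[pred])

    assert prove("HAS-2-LEGS", "GEORGE1")                # conclusion (19)
    assert prove("WARM-BLOODED", "GEORGE1")

The point of segregating (20) and (21) from general implications is exactly the efficiency argument above: the prover never has to consider arbitrary chaining.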
This inheritance applies to role assertions as well. For example, given a type EVENTS that has an OBJECT role, i.e.,

(22) SUBTYPE(EVENTS,INDIVIDUALS)
(23) ∀x:EVENTS ∃y:PHYS-OBJS ROLE(x,OBJECT,y).

Then if ACTIONS are a subtype of events, i.e.,

(24) SUBTYPE(ACTIONS,EVENTS),

it follows from (A.2), (23), and (24) that for every action there is something that fills its OBJECT role, i.e.,

(25) ∀x:ACTIONS ∃y:PHYS-OBJS ROLE(x,OBJECT,y).

Note that the definition of the type ACTIONS could further specify the type of the values of its OBJECT role, but it could not contradict fact (25). Thus

(26) ∀x:ACTIONS ∃y:PERSONS ROLE(x,OBJECT,y)

further restricts the value of the OBJECT role for all individuals of type ACTIONS to be of type PERSONS.

Another common technique used in semantic network systems is to introduce more specific types of a given type by specifying one (or more) of the role values. For instance, one might introduce a subtype of ACTION called ACTION-BY-JACK, i.e.,

(27) SUBTYPE(ACTION-BY-JACK,ACTIONS)
(28) ∀abj:ACTION-BY-JACK ROLE(abj,ACTOR,JACK).

Then we could encode the general fact that all actions by Jack are violent by something like

(29) ∀abj:ACTION-BY-JACK VIOLENT(abj).

This is possible in our logic, but there is a more flexible and convenient way of capturing such information. Fact (29), given (27) and (28), is equivalent to

(30) ∀a:ACTIONS ROLE(a,ACTOR,JACK) → VIOLENT(a).

If we can put this into a form that is recognizable to the retriever, then we could assert such facts directly without having to introduce arbitrary new types. The extension we make this time is from what we called a type logic to a role logic. This allows quantified variables to be restricted by role values as well as type. Thus, in this new notation, (30) would be expressed as

(31) ∀a:ACTIONS [ACTOR JACK] VIOLENT(a).

In general, a formula of the form

∀a:T [R1 V1]...[Rn Vn] Pa

is equivalent to

∀a (TYPE(a,T) ∧ ROLE(a,R1,V1) ∧ ... ∧ ROLE(a,Rn,Vn)) → Pa.

Correspondingly, an existentially quantified formula such as

∃a:T [R1 V1]...[Rn Vn] Pa

is equivalent to

∃a TYPE(a,T) ∧ ROLE(a,R1,V1) ∧ ... ∧ ROLE(a,Rn,Vn) ∧ Pa.

The retriever recognizes these new forms and fully reasons about the role restrictions. It is important to remember that each of these notation changes is an extension onto the original simple language. Everything that could be stated previously can still be stated. The new notation, besides often being more concise and convenient, is necessary only if the semantic network retrieval facilities are desired. Note also that we can now define the inverse of (28), and state that all actions with actor JACK are necessarily of type ACTION-BY-JACK. This can be expressed as

(32) ∀a:ACTIONS [ACTOR JACK] TYPE(a,ACTION-BY-JACK).

5. Equality

One of the crucial facilities needed by natural language systems is the ability to reason about whether individuals are equal. This issue is often finessed in semantic networks by assuming that each node represents a different individual, or that every type in the type hierarchy is disjoint. This assumption has been called E-saturation by [Reiter, 1980]. A natural language understanding system using such a representation must decide on the referent of each description as the meaning representation is constructed, since if it creates a new individual as the referent, that individual will then be distinct from all previously known individuals. Since in actual discourse the referent of a description is not always recognized until a few sentences later, this approach lacks generality. One approach to this problem is to introduce full reasoning about equality into the representation, but this rapidly produces a combinatorially prohibitive search space. Thus other more specialized techniques are desired. We shall consider mechanisms for proving inequality first, and then methods for proving equality.

Hendrix [1979] introduced some mechanisms that enable inequality to be proven. In his system, there are two forms of subtype links, and two forms of instance links. This can be viewed in our system as follows: the SUBTYPE and TYPE predicates discussed above make no commitment regarding equality. However, a new relation, DSUBTYPE(t1,t2), asserts that t1 is a SUBTYPE of t2, and also that the elements of t1 are distinct from all other elements of other DSUBTYPEs of t2. This is captured by the axioms

(A.4) ∀t,t1,t2,i1,i2 (DSUBTYPE(t1,t) ∧ DSUBTYPE(t2,t) ∧ TYPE(i1,t1) ∧ TYPE(i2,t2) ∧ ¬IDENTICAL(t1,t2)) → (i1 ≠ i2)
(A.5) ∀t1,t DSUBTYPE(t1,t) → SUBTYPE(t1,t)

We cannot express (A.4) in the current logic because the predicate IDENTICAL operates on the syntactic form of its arguments rather than their referents. Two terms are IDENTICAL only if they are lexically the same. To do this formally, we have to be able to refer to the syntactic form of terms. This can be done by introducing quotation into the logic along the lines of [Perlis, 1981], but is not important for the point of this paper. A similar trick is done with elements of a single type. The predicate DTYPE(i,t) asserts that i is an instance of type t, and also is distinct from any other instances of t where the DTYPE holds. Thus we need

(A.6) ∀i1,i2,t (DTYPE(i1,t) ∧ DTYPE(i2,t) ∧ ¬IDENTICAL(i1,i2)) → (i1 ≠ i2)
(A.7) ∀i,t DTYPE(i,t) → TYPE(i,t)

Another extremely useful categorization of objects is the partitioning of a type into a set of subtypes, i.e., each element of the type is a member of exactly one subtype. This can be defined in a manner similar to the above.

Turning to methods for proving equality, [Tarjan, 1975] describes an efficient method for computing relations that form an equivalence class. This is adapted to support full equality reasoning on ground terms. Of course it cannot effectively handle conditional assertions of equality, but it covers many of the typical cases. Another technique for proving equality exploits knowledge about types. Many types are such that their instances are completely defined by their roles. For such a type T, if two instances I1 and I2 of T agree on all their respective roles, then they are equal. If I1 and I2 have a role where their values are not equal, then I1 and I2 are not equal. If we finally add the assumption that every instance of T can be characterized by its set of role values, then we can enumerate the instances of type T using a function (say t) that has an argument for each role value.

For example, consider the type AGE-RELS of age properties, which takes two roles, an OBJECT and a VALUE. Thus, the property P1 that captures the assertion "John is 10" would be described as follows:

(33) TYPE(P1,AGE-RELS) ∧ ROLE(P1,OBJECT,JOHN1) ∧ ROLE(P1,VALUE,10).

The type AGE-RELS satisfies the above properties, so any individual of type AGE-RELS with OBJECT role JOHN1 and VALUE role 10 is equal to P1.
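This role-agreement test can be pictured as follows (a sketch with our own invented encoding; the paper gives no code):

    # ROLE(individual, rolename, value) triples for three AGE-RELS instances.
    ROLES = {("P1", "OBJECT", "JOHN1"), ("P1", "VALUE", 10),
             ("P2", "OBJECT", "JOHN1"), ("P2", "VALUE", 10),
             ("P3", "OBJECT", "MARY1"), ("P3", "VALUE", 12)}

    def role_values(x):
        return {(r, v) for (i, r, v) in ROLES if i == x}

    def equal_by_roles(x, y):
        # For types whose instances are completely defined by their roles:
        # equal iff they agree on every role, unequal iff some role differs.
        return role_values(x) == role_values(y)

    assert equal_by_roles("P1", "P2")        # same OBJECT and VALUE roles
    assert not equal_by_roles("P1", "P3")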
The retriever encodes such knowledge in a preprocessing stage that assigns each individual of type AGE-RELS a canonical name. The canonical name for P1 would simply be "age-rels(JOHN1,10)". Once a representation has equality, it can capture some of the distinctions made by perspectives in KRL. The same object viewed from two different perspectives is captured by two nodes, each with its own type, roles, and relations, that are asserted to be equal.

Note that one cannot expect more sophisticated reasoning about equality than the above from the retriever itself. Identifying two objects as equal is typically not a logical inference. Rather, it is a plausible inference by some specialized program such as the reference component of a natural language system, which has to identify noun phrases. While the facts represented here would assist such a component in identifying possible referents for a noun phrase given its description, it is unlikely that they would logically imply what the referent is.

6. Associations and Partitions

Semantic networks are useful because they structure information so that it is easy to retrieve relevant facts, or facts about certain objects. Objects are represented only once in the network, and thus there is one place where one can find all relations involving that object (by following back over incoming ROLE arcs). While we need to be able to capture such an ability in our system, we should note that this is often not a very useful ability, for much of one's knowledge about an object will not be attached to that object but will be acquired from the inheritance hierarchy. In a spreading activation type of framework, a considerable amount of irrelevant network will be searched before some fact high up in the type hierarchy is found. In addition, it is very seldom that one wants to be able to access all facts involving an object; it is much more likely that a subset of relations is relevant. If desired, such associative links between objects can be simulated in our system. One could find all properties of an object o1 (including those by inheritance) by retrieving all bindings of x in the query ∃x,r ROLE(x,r,o1). The ease of access provided by the links in a semantic network is effectively simulated simply by using a hashing scheme on the structure of all ROLE predicates. While the ability to hash on structures to find facts is crucial to an efficient implementation, the details are not central to our point here.

Another important form of indexing is found in Hendrix, where his partition mechanism is used to provide a focus of attention for inference processes [Grosz, 1977]. This is just one of the uses of partitions. Another, which we did not need, provided a facility for scoping facts within logical operators, similar to the use of parentheses in FOPC. Such a focus mechanism appears in our system as an extra argument on the main predicates (e.g., HOLDS, OCCURS, etc.). Since contexts are introduced as a new class of objects in the language, we can quantify over them and otherwise talk about them. In particular, we can organize contexts into a lattice-like structure (corresponding to Hendrix's vistas for partitions) by introducing a transitive relation SUBCONTEXT.

(A.8) ∀c,c1,c2 SUBCONTEXT(c,c1) ∧ SUBCONTEXT(c1,c2) → SUBCONTEXT(c,c2)

To relate contexts to the HOLDS predicate, a proposition p holds in a context c if it is known to hold in c explicitly, or if it holds in a super context of c:

(A.9) ∀p,c,c' SUBCONTEXT(c,c') ∧ HOLDS(p,c') → HOLDS(p,c).

As with the SUBTYPE relation, this axiom would defy an efficient implementation if the contexts were not organized in a finite lattice structure. Of course, we need axioms similar to (A.9) for the OCCURS and IS-REAL predicates.

7. Discussion

We have argued that the appropriate way to design knowledge representations is to identify those inferences that one wishes to facilitate. Once these are identified, one can then design a specialized limited inference mechanism that can operate on a data base of first order facts. In this fashion, one obtains a highly expressive representation language (namely FOPC), as well as a well-defined and extendable retriever. We have demonstrated this approach by outlining a portion of the representation used in ARGOT, the Rochester Dialogue System [Allen, 1982]. We are currently extending the context mechanism to handle time, belief contexts (based on a syntactic theory of belief [Haas, 1982]), simple hypothetical reasoning, and a representation of plans.

Because the matcher is defined by a set of axioms, it is relatively simple to add new axioms that handle new features. For example, we are currently incorporating a model of temporal knowledge based on time intervals [Allen, 1981a]. This is done by allowing any object, event, or relation to be qualified by a time interval as follows: for any untimed concept x, and any time interval t, there is a timed concept consisting of x viewed during t, which is expressed by the term (t-concept x t). This concept is of type (TIMED Tx), where Tx is the type of x. Thus we require a type hierarchy of timed concepts that mirrors the hierarchy of untimed concepts. Once this is done, we need to introduce new built-in axioms that extend the retriever. For instance, we define a predicate, DURING(a,b), that is true only if interval a is wholly contained in interval b. Now, if we want the retriever to automatically infer that if relation R holds during an interval t, then it holds in all subintervals of t, we need the following built-in axioms. First, DURING is transitive:

(A.10) ∀a,b,c DURING(a,b) ∧ DURING(b,c) → DURING(a,c)

Second, if P holds in interval t, it holds in all subintervals of t:

(A.11) ∀p,t,t',c HOLDS(t-concept(p,t),c) ∧ DURING(t',t) → HOLDS(t-concept(p,t'),c).

Thus we have extended our representation to handle simple timed concepts with only a minimal amount of analysis. Unfortunately, we have not had the space to describe how to take the specification of the retriever (namely axioms (A.1) - (A.11)) and build an actual inference program out of it. A technique for building such a limited inference mechanism by moving to a meta-logic is described in [Frisch and Allen, 1982].

One of the more interesting consequences of this approach is that it has led to identifying various different modes of retrieval that are necessary to support a natural language comprehension task. We have considered so far only one mode of retrieval, which we call provability mode. In this mode, the query must be shown to logically follow from the built-in axioms and the facts in the knowledge base. While this is the primary mode of interaction, others are also important. In consistency mode, the query is checked to see if it is logically consistent with the facts in the knowledge base with respect to the limited inference mechanism. While consistency in general is undecidable, with respect to the limited inference mechanism it is computationally feasible. Note that, since the retriever is defined by a set of axioms rather than a program, consistency mode is easy to define. Another important mode is compatibility mode, which is very useful for determining the referents of descriptions. A query in compatibility mode succeeds if there is a set of equality and inequality assertions that can be assumed so that the query would succeed in provability mode. For instance, suppose someone refers to an event in which John hit someone with a hat. We would like to retrieve possible events that could be equal to this. Retrievals in compatibility mode are inherently expensive and so must be controlled using a context mechanism such as in [Grosz, 1977]. We are currently attempting to formalize this mode using Reiter's non-monotonic logic for default reasoning.

We have implemented a version of this system in HORNE [Allen and Frisch, 1981], a LISP embedded logic programming language. In conjunction with this representation is a language which provides many abbreviations and facilities for system users. For instance, users can specify what context and times they are working with respect to, and then omit this information from their interactions with the system. Also, using the abbreviation conventions, the user can describe relations and events without explicitly asserting the TYPE and ROLE assertions. Currently the system provides the inheritance hierarchy, simple equality reasoning, contexts, and temporal reasoning with the DURING hierarchy.

Acknowledgments

This research was supported in part by the National Science Foundation under Grant IST-80-12418, and in part by the Office of Naval Research under Grant N00014-80-C-0197.

References

Allen, J.F., "ARGOT: A system overview," TR 101, Computer Science Dept., U. Rochester, 1982.

Allen, J.F., "An interval-based representation of temporal knowledge," Proc., 7th IJCAI, Vancouver, B.C., August 1981a.

Allen, J.F., "What's necessary to hide?: Reasoning about action verbs," Proc., 19th ACL, Stanford U., 1981b.

Allen, J.F. and A.M. Frisch, "HORNE user's manual," Computer Science Dept., U. Rochester, 1981.

Bobrow, D.G. and T. Winograd, "An overview of KRL, a knowledge representation language," Cognitive Science 1, 3-46, 1977.

Brachman, R.J., "On the epistemological status of semantic networks," in N.V. Findler, 1979.

Charniak, E., "A common representation for problem-solving and language-comprehension information," Artificial Intelligence 16, 3, 225-255, July 1981a.

Charniak, E., "The case-slot identity theory," Cognitive Science 5, 3, 1981b.

Davidson, D., "The logical form of action sentences," in N. Rescher (Ed). The Logic of Decision and Action. Pittsburgh, PA: U. Pittsburgh Press, 1967.

Fillmore, C.J., "The case for case," in E. Bach and R. Harms (Eds), Universals in Linguistic Theory. New York: Holt, Rinehart and Winston, 1968.

Findler, N.V. (Ed). Associative Networks: Representation and Use of Knowledge by Computers. New York: Academic Press, 1979.

Frisch, A.M. and J.F. Allen, "Knowledge retrieval as limited inference," Proc., 6th Conf. on Automated Deduction, New York, June 1982.

Grosz, B.J., "The representation and use of focus in dialogue understanding," TN 151, SRI, July 1977.

Haas, A., "Mental states and mental actions in planning," Ph.D. thesis, Computer Science Dept., U. Rochester, 1982.

Hayes, P.J., "The logic of frames," in D. Metzing (Ed). Frame Conceptions and Text Understanding. Walter de Gruyter & Co., 1979.

Hendrix, G.G., "Encoding knowledge in partitioned networks," in N.V. Findler, 1979.

Kowalski, R.A. Logic for Problem Solving. New York: North Holland, 1979.

Levesque, H. and J. Mylopoulos, "A procedural semantics for semantic networks," in N.V. Findler, 1979.

Nilsson, N.J. Principles of Artificial Intelligence. Palo Alto, CA: Tioga Publishing Co., 1980.

Perlis, D., "Language, computation, and reality," Ph.D. thesis, Computer Science Dept., U. Rochester, 1981.

Reiter, R., "A logic for default reasoning," Artificial Intelligence 13, 1-2, 81-132, April 1980.

Shapiro, S.C., "The SNePS semantic network processing system," in N.V. Findler, 1979.

Shapiro, S.C., "A net structure for semantic information storage, deduction and retrieval," Proc., IJCAI, 1971.

Tarjan, R.E., "Efficiency of a good but not linear set union algorithm," JACM 22, 2, April 1975.

Woods, W.A., "What's in a link: Foundations for semantic networks," in D.G. Bobrow and A.M. Collins (Eds). Representation and Understanding. New York: Academic Press, 1975.
DEPENDENCIES OF DISCOURSE STRUCTURE ON THE MODALITY OF COMMUNICATION: TELEPHONE vs. TELETYPE

Philip R. Cohen, Dept. of Computer Science, Oregon State University, Corvallis, OR 97331
Scott Fertig, Bolt, Beranek and Newman, Inc., Cambridge, MA 02239
Kathy Starr, Bolt, Beranek and Newman, Inc., Cambridge, MA 02239

ABSTRACT

A desirable long-range goal in building future speech understanding systems would be to accept the kind of language people spontaneously produce. We show that people do not speak to one another in the same way they converse in typewritten language. Spoken language is finer-grained and more indirect. The differences are striking and pervasive. Current techniques for engaging in typewritten dialogue will need to be extended to accommodate the structure of spoken language.

I. INTRODUCTION

If a machine could listen, how would we talk to it? This question will be hard to answer definitively until a good mechanical listener is developed. As a next best approximation, this paper presents results of an exploration of how people talk to one another in a domain for which keyboard-based natural language dialogue systems would be desirable, and have already been built (Robinson et al., 1980; Winograd, 1972). Our observations are based on transcripts of person-to-person telephone-mediated and teletype-mediated dialogues. In these transcripts, one specific kind of communicative act dominates spoken task-related discourse, but is nearly absent from keyboard discourse. Importantly, when this act is performed vocally it is never performed directly. Since most of the utterances in these simple dialogues do not signal the speaker's intent, techniques for inferring intent will be crucial for engaging in spoken task-related discourse. The paper suggests how a plan-based theory of communication (Cohen and Perrault, 1979; Perrault and Allen, 1980) can uncover the intentions underlying the use of various forms.

This research was supported by the National Institute of Education under contract US-NIE-C-400-76-0116 to the Center for the Study of Reading of the University of Illinois and Bolt, Beranek and Newman, Inc.

II. THE STUDY

Motivated by Rubin's (1980) taxonomy of language experiences and influenced by Chapanis et al.'s (1972, 1977) and Grosz' (1977) communication mode and task-oriented dialogue studies, we conducted an exploratory study to investigate how the structure of instruction-giving discourse depends on the communication situation in which it takes place. Twenty-five subjects ("experts") each instructed a randomly chosen "apprentice" in assembling a toy water pump. All subjects were paid volunteer students from the University of Illinois. Five "dialogues" took place in each of the following modalities: face-to-face, via telephone, teletype ("linked" CRTs), (non-interactive) audiotape, and (non-interactive) written. In all modes, the apprentices were videotaped as they followed the experts' instructions. Telephone and Teletype dialogues were analyzed first since results would have implications for the design of speech understanding and production systems.

Each expert participated in the experiment on two consecutive days, the first for training and the second for instructing an apprentice. Subjects playing the expert role were trained by: following a set of assembly directions consisting entirely of imperatives, assembling the pump as often as desired, and then instructing a research assistant. This practice session took place face-to-face. Experts knew the research assistant already knew how to assemble the pump. Experts were given an initial statement of the purpose of the experiment, which indicated that communication would take place in one of a number of different modes, but were not informed of which modality they would communicate in until the next day. In both modes, experts and apprentices were located in different rooms. Experts had a set of pump parts that, they were told, were not to be assembled but could be manipulated.

In Telephone mode, experts communicated via a standard telephone and apprentices communicated through a speaker-phone, which did not need to be held and which allowed simultaneous two-way communication. Distortion of the expert's voice was apparent, but not measured. Subjects in "Teletype" (TTY) mode typed their communication on Elite Datamedia 1500 CRT terminals connected by the Telenet computer network to a computer at Bolt, Beranek and Newman, Inc. The terminals were "linked" so that whatever was typed on one would appear on the other. Simultaneous typing was possible and did occur. Subjects were informed that their typing would not appear simultaneously on either terminal. Response times averaged 1 to 2 seconds, with occasionally longer delays due to system load.

A. Sample Dialogue Fragments

The following are representative fragments of Telephone and Teletype discourse.

A Telephone Fragment

S: "OK. Take that. Now there's a thing called a plunger. It has a red handle on it, a green bottom, and it's got a blue lid.
J: OK
S: OK now, the small blue cap we talked about before?
J: Yeah
S: Put that over the hole on the side of that tube --
J: Yeah
S: -- that is nearest to the top, or nearest to the red handle.
J: OK
S: You got that on the hole?
J: yeah
S: Ok. now. now, the smallest of the red pieces?
J: OK"

A Teletype Dialogue Fragment

B: "fit the blue cap over the tube end
N: done
B: put the little black ring into the large blue cap with the hole in it...
N: ok
B: put the pink valve on the two pegs in that blue cap...
N: ok"

Communication in Telephone mode has a distinct pattern of "find the x" "put it into/onto/over the y", in which reference and predication are addressed in different steps. To relate these steps, more reliance is placed on strategies for signalling dialogue coherence, such as the use of pronouns. Teletype communication involves primarily the use of imperatives such as "put the x into/onto/around the y". Typically, the first time each object (X) is mentioned in a TTY discourse is within a request for a physical action.

B. A Methodology for Discourse Analysis

This research aims to develop an adequate method for conducting discourse analysis that will be useful to the computational linguist. The method used here integrates psychological, linguistic, and formal approaches in order to characterize language use. Psychological methods are needed in setting up protocols that do not bias the interesting variables. Linguistic methods are needed for developing a scheme for describing the progress of a discourse. Finally, formal methods are essential for stating theories of utterance interpretation in context.

To be more specific, we are ultimately interested in similarities and differences in utterance processing across modes. Utterance processing clearly depends on utterance form and the speaker's intent. The utterances in the transcripts are therefore categorized by the intentions they are used to achieve. Both utterances and categorizations become data for cross-modal measures as well as for formal methods. Once intentions differing across modes are isolated, our strategy is to then examine the utterance forms used to achieve those intentions. Thus, utterance forms are not compared directly across modes; only utterances used to achieve the same goals are compared, and it is those goals that are expected to vary across modes. With form and function identified, one can then proceed to discuss how utterance processing may differ from one mode to another. Our plan-based theory of speech acts will be used to explain how an utterance's intent coding can be derived from the utterance's form and the prior interaction. A computational model of intent recognition in dialogue (Allen, 1979; Cohen, 1979; Sidner et al., 1981) can then be used to mimic the theory's assignment of intent. Thus, the theory of speech act interpretation will describe language use in a fashion analogous to the way that a generative grammar describes how a particular deep structure can underlie a given surface structure.

C. Coding the Transcripts

The first stage of discourse analysis involved the coding of the communicator's intent in making various utterances. Since attributions of intent are hard to make reliably, care was taken to avoid biasing the results. Following the experiences of Sinclair and Coulthard (1975), Dore et al. (1978), and Mann et al. (1975), a coding scheme was developed and two people trained in its use. The coders relied both on written transcripts and on videotapes of the apprentices' assembly. The scheme, which was tested and revised on pilot data until reliability was attained, included a set of approximately 20 "speech act" categories that were used to label intent, and a set of "operators" and propositions that were used to describe the assembly task, as in (Sacerdoti, 1975). The operators and propositions often served as the propositional content of the communicative acts. In addition to the domain actions, pilot data led us to include an action of "physically identifying the referent of a description" as part of the scheme (Cohen, 1981). This action will be seen to be requested explicitly by Telephone experts, but not by experts in Teletype mode.

Of course, a coding scheme must not only capture the domain of discourse, it must be tailored to the nature of discourse per se. Many theorists have observed that a speaker can use a number of utterances to achieve a goal, and can use one utterance to achieve a number of goals. Correspondingly, the coders could consider utterances as jointly achieving one intention (by "bracketing" them), could place an utterance in multiple categories, and could attribute more than one intention to the same utterance or utterance part. It was discovered that the physical layout of a transcript, particularly the location of line breaks, affected which utterances were coded. To ensure uniformity, each coder first divided each transcript into utterances that he or she would code. These joint "bracketings" were compared by a third party to yield a base set of codable utterance parts. The coders could later bracket utterances differently if necessary.

The first attempt to code the transcripts was overly ambitious -- coders could not keep 20 categories and their definitions in mind, even with a written coding manual for reference. Our scheme was then scaled back -- only utterances fitting the following categories were considered:

Requests-for-assembly-actions (RAACT) (e.g., "put that on the hole".)
Requests-for-orientation-actions (RORT) (e.g., "the other way around", "the top is the bottom".)
Requests-to-pick-up (RPUP) (e.g., "take the blue base".)
Requests-for-identification (RID) (e.g., "there is a little yellow rubber piece".)
Requests-for-other (ROTH) (e.g., requests for repetition, requests to stop, etc.)
Inform-completion(action) (e.g., "OK", "yeah", "got it".)
Label (e.g., "that's a plunger")

Interrater reliabilities for each category (within each mode), measured as the number of agreements x 2 divided by the number of times that category was coded, were high (above 90%). Since each disagreement counted twice (against both categories that were coded), agreements also counted twice.

D. Analysis 1: Frequency of Request Types

Since most of each dialogue consisted of the making of requests, the first analysis examined the frequency of the various kinds of requests in the corpus of five transcripts for each modality. Table I displays the findings.

TABLE I: Distribution of Requests

                 Telephone            Teletype
   Type       Number  Percent     Number  Percent
   RAACT        73      25%         69      51%
   RORT         26       9%         11       8%
   ROTH         43      15%         18      13%
   RPUP         45      16%         23      17%
   RID         101      35%         13      10%
   Total:      288                 134

This table supports Chapanis et al.'s (1972, 1977) finding that voice modes were about "twice as wordy" as non-voice modes. Here, there are approximately twice as many requests in Telephone mode as Teletype. Chapanis et al. examined how linguistic behavior differed across modes in terms of measures of sentence length, message length, number of words, sentences, messages, etc. In contrast, the present study provides evidence of how these modes differ in utterance function. Identification requests are much more frequent in Telephone dialogues than in Teletype conversations. In fact, they constitute the largest category of requests -- fully 35%. Since utterances in the RORT, RPUP, and ROTH categories will often be issued to clarify or follow up on a previous request, it is not surprising that they would increase in number (though not percentage) with the increase in RID usage. Furthermore, it is sensible that there are about the same number of requests for assembly actions (and hence half the percentage) in each mode since the same "assembly work" is accomplished. Therefore, identification requests seem to be the primary request differentiating the two modalities.

E. Analysis 2: First-time Identifications

Frequency data are important for computational linguistics because they indicate the kinds of utterances a system may have to interpret most often. However, frequency data include mistakes, dialogue repairs, and repetition. Perhaps identification requests occur primarily after referential miscommunication (as occurs for teletype dialogues (Cohen, 1981)). One might then argue that people would speak more carefully to machines and thus would not need to use identification requests frequently. Alternatively, the use of such requests as a step in a Telephone speaker's plan may truly be a strategy of engaging in spoken task-related discourse that is not found in TTY discourse. To explore when identification requests were used, a second analysis of the utterance codings was undertaken that was limited to "first time" identifications. Each time a novice (rightly or wrongly) first identified a piece, the communicative act that caused him/her to do so was indicated. However, a coding was counted only if that speech act was not jointly present with another prior to the novice's part identification attempt. Table II indicates the results for each subject in Telephone and Teletype modes.

TABLE II: Speech acts just preceding novices' attempts to identify 12 pieces

                Telephone              Teletype
   SUBJ     RID  RPUP  RAACT      RID  RPUP  RAACT
    1         9     2     1         1     2     9
    2         1    10     1         0     2     9
    3        11     1     0         1     2     9
    4         9     1     0         0     6     3
    5        10     0     0         2     6     4

Subjects were classified as habitual users of a communicative act if, out of 12 pieces, the subject "introduced" at least 9 of the pieces with that act. In Telephone mode, four of five experts were habitual users of identification requests to get the apprentice to find a piece. In Teletype mode, no experts were habitual users of that act. To show a "modality effect" in the use of the identification request strategy, the numbers of habitual users of RID in each mode were subjected to Fisher's exact probability test (hypergeometric). Even with 5 subjects per mode, the differences across modes are significant (p = 0.023), indicating that Telephone conversation per se differs from Teletype conversation in the ways in which a speaker will make first reference to an object.

F. Analysis 3: Utterance Forms

Thus far, explicit identification requests have been shown to be pervasive in Telephone mode and to constitute a frequently used strategy. One might expect that, in analogous circumstances, a machine might be confronted with many of these acts. Computational linguistics research then must discover means by which a machine can determine the appropriate response as a function, in part, of the form of the utterance. To see just which forms are used for our task, utterances classified as requests-for-identification were tabulated. Table III presents classes of these utterances, along with an example of each class. The utterance forms are divided into four major groups, to be explained below. One class of utterances comprising 7% of identification requests, called "supplemental NP" (e.g., "Put that on the opening in the other large tube, with the round top"), was unreliably coded and is not considered in the analyses below. Category labels followed by "(?)" indicate that the utterances comprising those categories might also have been issued with rising intonation.

TABLE III: Kinds of Requests to Identify in Telephone Mode

Group / CATEGORY [example]                                  Percent of RIDs

A. ACTION-BASED
   1. THERE'S A NP(?)                                                28%
      ["there's a black o-ring(?)"]
   2. INFORM(IF ACT THEN EFFECT)                                      4%
      ["If you look at the bottom you will see a projection"]
   3. QUESTION(EFFECT)                                                4%
      ["Do you see three small red pieces?"]
   4. INFORM(EFFECT)                                                  3%
      ["you will see two blue tubes"]

B. FRAGMENTS
   1. NP AND PP FRAGMENTS(?)                                          9%
      ["the smallest of the red pieces?"]
   2. PREPOSED OR INTERIOR PP(?)                                      6%
      ["In the green thing at the bottom <pause> there is a hole"]
      ["Put that on the hole on the side of that tube...that is
       nearest the top"]

C. INFORM(PROPOSITION) --> REQUEST(CONFIRM)
   1. OBJ HAS PART                                                   18%
      ["It's got a peg in it"]
   2. LISTENER HAS OBJ                                                5%
      ["Now you have two devices that are clear plastic"]
   3. DESCRIPTION1 = DESCRIPTION2                                     8%
      ["The other one is a bubbled piece with a blue base on it
       with one spout"]

D. NEARLY DIRECT REQUESTS
   ["Look on the desk"]                                               2%
   ["The next thing your gonna look for is..."]                       1%

Notice that in Telephone mode identification requests are never performed directly. No speaker used the paradigmatic direct forms, e.g. "Find the rubber ring shaped like an O", which occurred frequently in the written modality.
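As an aside, the significance reported in Analysis 2 can be reproduced with a standard one-tailed Fisher exact computation. The following is a sketch assuming Python with scipy; the paper does not show its own calculation.

    from scipy.stats import fisher_exact

    table = [[4, 1],   # Telephone: habitual RID users, non-users
             [0, 5]]   # Teletype:  habitual RID users, non-users
    _, p = fisher_exact(table, alternative="greater")
    print(round(p, 4))  # 0.0238, i.e. 6/252

The exact hypergeometric probability, C(4,4) x C(6,1) / C(10,5) = 6/252, is approximately 0.0238, consistent with the reported p = 0.023.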
The use of indirection is, however, selective -- Telephone experts frequently use direct imperatives to perform assembly requests. Only the identification request seems to be affected by modality.

III. INTERPRETING INDIRECT REQUESTS FOR REFERENT IDENTIFICATION

Many of the utterance forms can be analyzed as requests for identification once an act for physically searching for the referent of a description has been posited (Cohen, 1981). Assume that the action IDENTIFY-REF(AGT, DESCRIPTION) has as precondition "there exists an object O perceptually accessible to AGT such that O is the (semantic) referent of DESCRIPTION." The result of the action might be labelled by (IDENTIFIED-REF AGT DESCRIPTION). Finally, the means for performing the act will be some procedural combination of sensory actions (e.g., looking) and counting. The exact combination will depend on the description used. The utterances in Group A can then be analyzed as requests for IDENTIFY-REFERENT using Perrault and Allen's (1980) method of applying plan recognition to the definition of communicative acts.

A. Action-based Utterances

Case 1 ("There is a NP") can be interpreted as a request that the hearer IDENTIFY-REFERENT of NP by reasoning that a speaker's informing a hearer that a precondition to an action is true can cause the hearer to believe the speaker wants that action to be performed. All utterances that communicate the speaker's desire that the hearer do some action are labelled as requests. Using only rules about action, Perrault and Allen's method can also explain why Cases 2, 3, and 4 all convey requests for referent identification. Case 2 is handled by an inference saying that if a speaker communicates that an act will yield some desired effect, then one can infer the speaker wants that act performed to achieve that effect. Case 3 is an example of questioning a desired effect of an act (e.g., "Is the garbage out?") to convey that the act itself is desired. Case 4 is similar to Case 2, except the relationship between the desired effect and some action yielding that effect is presumed. In all these cases, ACT = LOOK-AT, and EFFECT = "HEARER SEE X". Since LOOK-AT is part of the "body" (Allen, 1979) of IDENTIFY-REFERENT, Allen's "body-action" inference will make the necessary connection, by inferring that the speaker wanted the hearer to LOOK-AT something as part of his IDENTIFY-REFERENT act.

B. Fragments

Group B utterances constitute the class of fragments classified as requests for identification. Notice that "fragment" is not a simple syntactic classification. In Case 2, the speaker paralinguistically "calls for" a hearer response in the course of some linguistically complete utterance. Such examples of parallel achievement of communicative actions cannot be accounted for by any linguistic theory or computational linguistic mechanism of which we are aware. These cases have been included here since we believe the theory should be extended to handle them by reasoning about parallel actions. A potential source of inspiration for such a theory would be research on reasoning about concurrent programs.

Case 1 includes NP fragments, usually with rising intonation. The action to be performed is not explicitly stated, but must be supplied on the basis of shared knowledge about the discourse situation -- who can do what, who can see what, what each participant thinks the other believes, what is expected, etc. Such knowledge will be needed to differentiate the intentions behind a traveller's saying "the 3:15 train to Montreal?" to an information booth clerk (who is not intended to turn around and find the train), from those behind the uttering of "the smallest of the red pieces?", where the hearer is expected to physically identify the piece. According to the theory, the speaker's intentions conveyed by the elliptical question include 1) the speaker's wanting to know whether some relevant property holds of the referent of the description, and 2) the speaker's perhaps wanting that property to hold. Allen and Perrault (1980) suggest that properties needed to "fill in" such fragments come from shared expectations (not just from prior syntactic forms, as is current practice in computational linguistics). The property in question in our domain is IDENTIFIED-REFERENT(HEARER, NP), which is (somehow) derived from the nature of the task as one of manual assembly. Thus, expectations have suggested a starting point for an inference chain -- it is shared knowledge that the speaker wants to know whether IDENTIFIED-REFERENT(HEARER, NP). In the same way that questioning the completion of an action can convey a request for action, questioning IDENTIFIED-REFERENT conveys a request for IDENTIFY-REFERENT (see Case 3, Group A, above). Thus, by our positing an IDENTIFY-REFERENT act, and by assuming such an act is expected of the user, the inferential machinery can derive the appropriate intention behind the use of a noun phrase fragment. The theory should account for 48% of the identification requests in our corpus, and should be extended to account for an additional 6%. The next group of utterances cannot now, and perhaps should not, be handled by a theory of communication based on reasoning about action.

C. Indirect Requests for Confirmation

Group C utterances (as well as Group A, cases 1, 2, and 4) can be interpreted as requests for identification by a rule stipulated by Labov and Fanshel (1977) -- if a speaker ostensibly informs a hearer about a state-of-affairs for which it is shared knowledge that the hearer has better evidence, then the speaker is actually requesting confirmation of that state-of-affairs. In Telephone (and Teletype) modality, it is shared knowledge that the hearer has the best evidence for what she "has", how the pieces are arranged, etc. When the apprentice receives a Group C utterance, she confirms its truth perceptually (rather than by proving a theorem), and thereby identifies the referents of the NPs in the utterance. The indirect request for confirmation rule accounts for 66% of the identification request utterances (overlapping with Group A for 35%). This important rule cannot be explained in the theory. It seems to derive more from properties of evidence for belief than it does from a theory of action. As such, it can only be stipulated to a rule-based inference mechanism (Cohen, 1979), rather than be derived from more basic principles.

D. Nearly Direct Requests

Group D utterance forms are the closest forms to direct requests for identification that appeared, though strictly speaking, they are not direct requests. Case 1 mentions "look on", but does not indicate a search explicitly. The interpretation of this utterance in Perrault and Allen's scheme would require an additional "body-action" inference to yield a request for identification. Case 2 is literally an informative utterance, though a request could be derived in one step. Importantly, the frequency of these "nearest neighbors" is minimal (3%).

E. Summary

The act of requesting referent identification is nearly always performed indirectly in Telephone mode. This being the case, inferential mechanisms are needed for uncovering the speaker's intentions from the variety of forms with which this act is performed. A plan-based theory of communication augmented with a rule for identifying indirect requests for confirmation would account for 79% of the identification requests in our corpus. A hierarchy of communicative acts (including their propositional content) can be used to organize derived rules for interpreting speaker intent based on utterance form, shared knowledge, and shared expectations (Cohen, 1979). Such a rule-based system could form the basis of a future pragmatics/discourse component for a speech understanding system.

IV. RELATIONSHIP TO OTHER STUDIES

These results are similar in some ways to observations by Ochs and colleagues (Ochs, 1979; Ochs, Schieffelin, and Pratt, 1979). They note that parent-child and child-child discourse is often comprised of "sequential" constructions -- with separate utterances for securing reference and for predicating. They suggest that language development should be regarded as an overlaying of newly-acquired linguistic strategies onto previous ones. Adults will often revert to developmentally early linguistic strategies when they cannot devote the appropriate time/resources to planning their utterances. Thus, Ochs et al. suggest, when competent speakers are communicating while concentrating on a task, one would expect to see separate utterances for reference and predication. This suggestion is certainly backed by our corpus, and is important for computational linguistics since, to be sure, our systems are intended to be used in some task. It is also suggested that the presence of sequential constructions is tied to the possibilities for preplanning an utterance, and hence oral and written discourse would differ in this way. Our study upholds this claim for Telephone vs. Teletype, but does not do so for our Written condition, in which many requests for identification occur as separate steps. Furthermore, Ochs et al.'s claim does not account for the use of identification requests in Teletype modality after prior referential miscommunication (Cohen, 1981). Thus, it would seem that sequential constructions can result from (what they term) planned as well as unplanned discourse.

It is difficult to compare our results with those of other studies. Chapanis et al.'s observation that voice modes are faster and wordier than teletype modes certainly holds here. However, their transcripts cannot easily be used to verify our findings since, for the equipment assembly problem, their subjects were given a set of instructions that could be, and often were, read to the listener. Thus, utterance function would often be predetermined. Our subjects had to remember the task and compose the instructions afresh. Grosz' (1977) study also cannot be directly compared for the phenomena of interest here since the core dialogues that were analyzed in depth employed a "mixed" communication modality in which the expert communicated with a third party by teletype. The third party, located in the same room as the apprentice, vocally transmitted the expert's communication to the apprentice, and typed the apprentice's vocal response to the expert. The findings of finer-grained and indirect vocal requests would not appear under these conditions.

Thompson's (1980) extensive tabulation of utterance forms in a multiple modality comparison overlaps our analysis at the level of syntax. Both Thompson's and the present study are primarily concerned with extending the habitability of current systems by identifying phenomena that people use but which would be problematic for machines. However, our two studies proceeded along different lines. Thompson's was more concerned with utterance forms and less with pragmatic function, whereas for this study, the concerns are reversed in priority. Our priority stems from the observation that differences in utterance function will influence the processing of the same utterance form. However, the present findings cannot be said to contradict Thompson's (nor vice-versa). Each corpus could perhaps be used to verify the findings in the other.

V. CONCLUSIONS

Spoken and teletype discourse, even when used for the same ends, differ in structure and in form. Telephone conversation about object assembly is dominated by explicit requests to find objects satisfying descriptions. However, these requests are never performed directly. Techniques for interpreting "indirect speech acts" thus may become crucial for speech understanding systems. These findings must be interpreted with two cautionary notes. First, the request-for-identification category is specific to discourse situations in which the topics of conversation include objects physically present to the hearer. Though the same surface forms might be used, if the conversation is not about manipulating concrete objects, different pragmatic inferences could be made. Secondly, the indirection results may occur only in conversations between humans. It is possible that people do not wish to verbally instruct others with fine-grained imperatives for fear of sounding condescending. Print may remove such inhibitions, as may talking to a machine. This is a question that cannot be settled until good speech understanding systems have been developed. We conjecture that the better the system, the more likely it will be to receive fine-grained indirect requests. It appears to us preferable to err on the side of accepting people's natural forms of speech than to force the user to think about the phrasing of utterances, at the expense of concentrating on the problem.

ACKNOWLEDGEMENTS

We would like to thank Zoltan Ueheli for conducting the videotaping, and Debbie Winograd, Rob Tierney, Larry Shirey, Julie Burke, Joan Hirschkorn, Cindy Hunt, Norma Peterson, and Mike Nivens for helping to organize the experiment and transcript preparation. Thanks also go to Sharon Oviatt, Marilyn Adams, Chip Bruce, Andee Rubin, Ray Perrault, Candy Sidner, and Ed Smith for valuable discussions.

VI. REFERENCES

Allen, J. F., A plan-based approach to speech act recognition, Tech. Report 131, Department of Computer Science, University of Toronto, January, 1979.

Allen, J. F., and Perrault, C. R., "Analyzing intention in utterances", Artificial Intelligence, vol. 15, 143-178, 1980.

Chapanis, A., Parrish, R. N., Ochsman, R. B., and Weeks, G. D., "Studies in interactive communication: II. The effects of four communication modes on the linguistic performance of teams during cooperative problem solving", Human Factors, vol. 19, no. 2, April, 1977.

Chapanis, A., Parrish, R. N., Ochsman, R. B., and Weeks, G. D., "Studies in interactive communication: I. The effects of four communication modes on the behavior of teams during cooperative problem-solving", Human Factors, vol. 14, 487-509, 1972.

Cohen, P. R., "The Pragmatic/Discourse Component", in Brachman, R., Bobrow, R., Cohen, P., Klovstad, J., Webber, B. L., and Woods, W. A., "Research in Knowledge Representation for Natural Language Understanding", Technical Report 4274, Bolt, Beranek and Newman, Inc., August, 1979.

Cohen, P. R., "The need for referent identification as a planned action", Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, B.C., 31-36, 1981.

Cohen, P. R., and Perrault, C. R., "Elements of a plan-based theory of speech acts", Cognitive Science 3, 177-212, 1979.

Dore, J., Norman, D., and Gearhart, M., "The structure of nursery school conversation", Children's Language, Vol. 1, Nelson, Keith (ed.), Gardner Press, New York, 1978.

Grosz, B. J., "The representation and use of focus in dialogue understanding", Tech. Report 151, Artificial Intelligence Center, SRI International, July, 1977.

Labov, W., and Fanshel, D., Therapeutic Discourse, Academic Press, New York, 1977.

Mann, W. C., Moore, J. A., Levin, J. A., and Carlisle, J. H., "Observation methods for human dialogue", Tech. Report ISI/RR-75-33, Information Sciences Institute, Marina del Rey, Calif., June, 1975.

Ochs, E., "Planned and unplanned discourse", Syntax and Semantics, Volume 12: Discourse and Syntax, Givon, T. (ed.), Academic Press, New York, 51-80, 1979.

Ochs, E., Schieffelin, B. B., and Pratt, M. L., "Propositions across utterances and speakers", in Developmental Pragmatics, Ochs, E., and Schieffelin, B. B. (eds.), Academic Press, New York, 251-268, 1979.

Perrault, C. R., and Allen, J. F., "A plan-based analysis of indirect speech acts", American Journal of Computational Linguistics, vol. 6, no. 3-4, 167-182, 1980.

Robinson, A. E., Appelt, D. E., Grosz, B. J., Hendrix, G. G., and Robinson, J., "Interpreting natural-language utterances in dialogs about tasks", Technical Note 210, Artificial Intelligence Center, SRI International, March, 1980.

Rubin, A. D., "A theoretical taxonomy of the differences between oral and written language", in Theoretical Issues in Reading Comprehension, Spiro, R. J., Bruce, B. C., and Brewer, W. F. (eds.), Lawrence Erlbaum Press, Hillsdale, N.J., 1980.

Sacerdoti, E., "Reasoning about assembly/disassembly actions", in Nilsson, N. J. (ed.), Artificial Intelligence -- Research and Applications, Progress Report, Artificial Intelligence Center, SRI International, Menlo Park, Calif., May, 1975.

Sidner, C. L., Bates, M., Bobrow, R. J., Brachman, R. J., Cohen, P. R., Israel, D. J., Schmolze, J., Webber, B. L., and Woods, W. A., "Research in Knowledge Representation for Natural Language Understanding", BBN Report 4785, Bolt, Beranek and Newman, Inc., Nov., 1981.

Sinclair, J. M., and Coulthard, R. M., Towards an Analysis of Discourse: The English Used by Teachers and Pupils, Oxford University Press, 1975.

Thompson, B. H., "Linguistic analysis of natural language communication with computers", Proceedings of COLING-80, Tokyo, 190-201, 1980.

Winograd, T., Understanding Natural Language, Academic Press, New York, 1972.
TOWARDS A THEORY OF COMPREHENSION OF DECLARATIVE CONTEXTS Fernando Gomez Department of Computer Science University of Central Florida Orlando, Florida 32816 ABSTRACT An outline of a theory of comprehension of declarative contexts is presented. The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the infe- rential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmen- tation of the sentences in a case frame structure, thus determininig the meaning of prepositions, polysemous verbs, noun group etc. The function of this parser is not to produce an output to be in- terpreted by semantic routines or an interpreter~ but to start the parsing process and proceed until a concept relevant to the theme of the text is recognized. Then the concept takes control of the comprehension process overriding the lower level linguistic process. Hence comprehension is viewed as a process in which high level sources of know- ledge (concepts) override lower level linguistic processes. i. Introduction This paper deals with a theory of computer comprehension of descriptive contexts. By "descriptive contexts" I refer to the language of scientific books, text books, this text, etc.. In the distinction performative vs. declarative, descriptive texts clearly fall in the declarative side. Recent work in natural language has dealt with contexts in which the computer understanding depends on the meaning of the action verbs and the human actions (plans, intentions, goals) indicated by them (Schank and Abelson 1977; Grosz 1977; Wilensky 1978; Bruce and Newman 1978). Also a considerable amount of work has been done in a plan-based theory of task oriented dialogues (Cohen and Perrault 1979; Perrault and Allen 1980; Hobbs and Evans 1980). This work has had very little bearing on a theory of ~omputer understanding of descriptive contexts. One of the main tenets of the proposed research is that descriptive (or declarative as we prefer to call them) contexts call for different theoretical ideas compared to those proposed for the understanding of human actions, although~ naturally there are aspects that are common. An important characteristic of these contexts is the predominance of descriptive predicates and verbs (verbs such as "contain," "refer," "consist of," etc.) over action verbs. A direct result of this is that the meaning of the sentence does not depend as much on the main verb of the sentence as on the concepts that make it up. Hence meaning representations centered in the main verb of the sentence are futile for these contexts. We have approached the problem of comprehension in these contexts by considering concepts both as active agents that recognize themselves and as an abstract representation of the properties of an object. This aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and con- cepts as an abstract representation (frames, sche- mata). Comprehension is viewed as a process depen- dent.on the conceptual specialists (they contain the inferential knowledge), the schemata (they con- tain structural knowledge), and a parser. 
The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determining the meaning of prepositions, polysemous verbs, noun group, etc.. But the function of this parser is not to produce an output to be interpre- ted by semantic routines, but to start the parsing process and to proceed until a concept relevant to the theme of the text is recognized. Then the concept (a cluster of production rules) takes con- trol of the comprehension process overriding the lower level linguistic processes. The concept continues supervising and guiding the parsing until the sentence has been understood, that is, the meaning of the sentence has been mapped into the final internal representation. Thus a text is parsed directly into the final knowledge structures. Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes. We have used these ideas to build a system, called LLULL, to unde{stand programming problems taken verbatim from introductory books on programming. 2. Concepts, Schemata and Inferences In Kant's Critique of Pure Reason one may find two views of a concept. According to one view, a concept is a system of rules governing the applica- tion of a predicate to an object. The rule that 36 tells us whether the predicate "large" applies to the concept Canada is a such rule. The system of rules that allows us to recognize any given instance of the concept Canada constitutes our concept of Canada. According to a second view, Kant considers a concept as an abstract represen- tation (vorstellung) of the properties of an object. This second view of a concept is akin to the notion of concept used in such knowledge representation languages as FRL, KLONE and KIIL. Frames have played dual functions. They have been used as a way to organize the inferences, and also as a structural representation of what is re- membered of a given situation. This has caused confusion between two different cognitive aspects: memory and comprehension (see Ortony, 1978). We think that one of the reasons for this confusion is due to the failure in distinguishing between the two types of concepts (concepts as rules and concepts as a structural representation). We have based our analysis on Kant's distinction in order to separate clearly between the organization of the inferences and the memory aspect. For any given text, a thematic frame contains structural knowledge about what is remembered of a theme. One of the slots in this frame contains a list of the relevant concepts for that theme. Each of these concepts in this list is separately organized as a cluster of production rules. They contain the inferential knowledge that allows the system to interpret the information being presently processed, to anticipate incoming information, and to guide and supervise the parser (see below). In some instances, the conceptual specialists access the knowledge stored in the thematic frame to per- form some of these actions. 3. Linguistic Knowledge, Text Understanding and P arsin$ In text understanding, there are two distinct issues. One has to do with the mapping of individ- ual sentences into some internal representation (syntactic markers, some type of case grammar, Wilks' preference semantics, Schank's conceptual dependency etc.). In designing this mapping, several approaches have been taken. 
In Winograd (1972) and Marcus (1979), there is an interplay between syntax, and semantic markers (in that order), while in Wilks (1973) and Riesbeck (1975) the parser rely almost exclusively on semantic categories. A separate issue has to do with the meaning of the internal representation in relation to the understanding of the text. For instance, consider the following text (it belongs to the second example): "A bank would like to produce records of the transactions during an account- ing period in connection with their checking accounts. For each account the bank wants a list showing the balance at the beginning of t1~e period, the number of deposits and withdrawals, and the final balance." Assume that we parse these sentences into our favorite internal representation. Now what we do with the internal representation? It is still far distant from its textual meaning. In fact, the first sentence is only introducing the topic of the programming problem. The writer could have achieved the same effect by saying: "The following is a checking account problem". The textual mean- ing of the second sentence is the description of the output for that problem. The writer could have achieved the same effect by saying that the output for the problem consists of the old-balance, deposits, withdrawals, etc.. One way to produce the textual meaning of the sentence is to interpret the internal representation that has already been built. Of course, that is equivalent to reparsing the sentence. Another way is to map the sentence directly into the final representation or the textual meaning of the sentence. That is the approach we have taken. DeJong (1979) and Schank etal. (1979) are two recent works that move in that direction. DeJong's system, called FRUMP, is a strong form of top down parser. It skims the text looking for those concepts in which it is interested. When it finds all of them, it ignores the remainder of the text. In analogy to key-word parsers, we may describe FRUMP as a key-concept parser. In Schank etal. (1979), words are marked in the dictionary as skippable or as having high relevance for a given script. When a relevant word is found, some questions are formulated as requests to the parser. These requests guide the parser in the understanding of the story. In our opinion, the criteria by which words are marked as skippable or relevant are not clear. There are significant differences between our ideas and those in the aforementioned works. The least signi£icant o~ them is that the internal representation selected by us has been a type of case grammar, while in those works the sentences are mapped into Schank's conceptual dependency notation. Due to the declarative nature of the texts we have studied, we have not seen a need for a deeper representation of the action verbs. The most important difference lies in the incorporation in our model of Kant's distinction between concepts as a system of rules and concepts as an abstract representation (an epistemic notion that is absent in Schank and his collobarators' work). The in- clusion of this distinction in our model makes the role and the organization of the different compo- nents that form part of comprehension differ markedly from those in the aforementioned works. 4. Organization and Communication between the System Components The organization that we have proposed appears in Fig. I. Central to the organization are the conceptual specialists. The other components are subordinated to them. 
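Before turning to the figure and the individual components, a minimal sketch may help fix this control regime. It is illustrative only: LLULL itself was written in UCI Lisp, and every name below (the passive-frame table, the specialist function, the constituent stream) is invented for exposition.

```python
# Illustrative sketch of the control regime described above: low-level
# parsing proceeds until a concept relevant to the passive frame is
# recognized; the conceptual specialist then overrides the parser and
# maps the rest of the sentence directly into the active frame.
# All names are hypothetical; LLULL itself was written in UCI Lisp.

PASSIVE_FRAMES = {
    "EXAM-SCORES": {"relevant-concepts": {"input", "output", "card", "record"}},
}

def recognize_concept(concept, topic, specialists):
    """Return the specialist for CONCEPT if it is relevant to TOPIC, else None."""
    frame = PASSIVE_FRAMES.get(topic, {})
    if concept in frame.get("relevant-concepts", set()):
        return specialists.get(concept)
    return None

def understand_sentence(constituents, topic, specialists, active_frame):
    recog = False                          # the paper's global RECOG
    for i, concept in enumerate(constituents):
        specialist = recognize_concept(concept, topic, specialists)
        if specialist is not None:
            recog = True
            # Segmentation stops here; the specialist consumes the
            # remaining constituents itself, filling the active frame.
            specialist(constituents[i + 1:], active_frame)
            break
    if not recog:
        print("?")                         # sentence not understood

def input_sp(rest, frame):                 # a stand-in "INPUT specialist"
    frame.setdefault("INPUT", {})["CONSIST-OF"] = list(rest)

frame = {}
understand_sentence(["instructor", "record", "name", "scores"],
                    "EXAM-SCORES", {"record": input_sp}, frame)
print(frame)    # -> {'INPUT': {'CONSIST-OF': ['name', 'scores']}}
```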
[Figure 1: System Organization]

The parser is essentially based on semantic markers and parses a sentence into a case frame structure. The specialists contain contextual knowledge relevant to each specific topic. This knowledge is of an inferential type. What we have termed "passive frames" contain what the system remembers of a given topic. At the beginning of the parsing process, the active frames contain nothing. At the end of the process, the meaning of the text will be recorded in them. Everything in these frames, including the names of the slots, is built from scratch by the conceptual specialists.

The communication between these elements is as follows. When a text is input to the system, the parser begins to parse the first sentence. In the parser there are mechanisms to recognize the passive frame associated with the text. Once this is done, mechanisms are set on to check if the most recently parsed conceptual constituent of the sentence is a relevant concept. This is done simply by checking if the concept belongs to the list of relevant concepts in the passive frame. If that is the case, the specialist (concept) overrides the parser. What does this exactly mean? It does not mean that the specialist will help the parser to produce the segmentation of the sentence, in a way similar to Winograd's and Marcus' approaches in which semantic selections help the syntax component of the parser to produce the right segmentation of the sentence. In fact, when the specialists take over, the segmentation of the sentence stops. That is what "overriding lower linguistic processes" exactly means. The specialist has knowledge to interpret whatever structure the parser has built, as well as to make sense directly of the remaining constituents in the rest of the sentence. "To interpret" and "make sense directly" means that the constituents of the sentence will be mapped directly into the active frame that the conceptual specialists are building. However, this does not mean that the parser will be turned off. The parser continues functioning, not in order to continue with the segmentation of the sentence, but to return the remaining conceptual constituents of the sentence to the specialist in control when asked by it. Thus, what we have called "linguistic knowledge" has been separated from the high-level "inferential knowledge" that is dependent on the subject matter of a given topic, as well as from the knowledge that is recalled from a given situation. These three different cognitive aspects correspond to what we have called "parser," "conceptual specialists," and "passive frames" respectively.

5. The Parser

In this section we explain some of the components of the parser so that the reader can follow the discussion of the examples in the next section. We refer the reader to Gomez (1981) for a detailed description of these concepts.

Noun Group: The function that parses the noun group is called DESCRIPTION. DESCR is a semantic marker used to mark all words that may form part of a noun group. An essential component of DESCRIPTION is a mechanism to identify the concept underlying complex nominals (cf. Levi, 1978). See Finin (1980) for a recent work on complex nominals that concentrates on concept modification. This is of the utmost importance because it is characteristic of declarative contexts that the same concept may be referred to by different complex nominals.
For instance, it is not rare to find the following complex nominals in the same programming problem, all of them referring to the same concept: "the previous balance," "the starting balance," "the old balance," "the balance at the beginning of the period." DESCRIPTION will return with the same token (old-bal) in all of these cases. The reader may have realized that "the balance at the beginning of the period" is not a compound noun. Such phrases are related to compound nouns; in fact, many compound nouns have been formed by deletion of prepositions. We have called them prepositional phrases completing a description, and we have treated them as complex nominals.

Prepositions: For each preposition (also for each conjunction) there is a procedure. The function of these prepositional experts (cf. Small, 1980) is to determine the meaning of the preposition. We refer to them as FOR-SP, ON-SP, AS-SP, etc.

Descriptive Verbs: Descriptive verbs (D-VERBS) are those used to describe. We have categorized them in four classes. First, there are those that describe the constituents of an object. Among them are: consist of, show, include, be given by, contain, etc. We refer to them as CONSIST D-VERBS. A second class are those used to indicate that something is representing something: represent, indicate, mean, describe, etc. belong to this class. We refer to them as REPRESENT D-VERBS. A third class are those that fall under the notion of appear. To this class belong appear, belong, be given on, etc. We refer to them as APPEAR D-VERBS. The fourth class is formed by those that express a spatial relation. Some of these are: follow, precede, be followed by, and any spatial verb. We refer to them as SPATIAL D-VERBS.

Action Verbs: We have used different semantic features, which indicate different levels of abstraction, to tag action verbs. Thus we have used the marker SUPL to mark in the dictionary "supply", "provide", "furnish", but not "offer". At the highest level of abstraction, all of them are tagged with the marker ATRANS. The procedures that parse the action verbs and the descriptive verbs are called ACTION-VERB and DESCRIPTIVE-VERB respectively.

6. Recognition of Concepts

The concepts relevant to a programming topic are grouped in a passive frame. We distinguish between those concepts which are relevant to a specific programming task, like balance to checking-account programs, and those relevant to any kind of program, like output, input, end-of-data, etc. The former can only be recognized when the programming topic has been identified. A concept like output will not only be activated by the word "output" or by a noun group containing that word. The verb "print" will obviously activate that concept. Any verb that has the feature REQUEST, a semantic feature associated with such verbs as "like," "want," "need," etc., will also activate the concept output. Similarly, nominal concepts like card and verbal concepts like record (a semantic feature for verbs like "record," "punch," etc.) are just two examples of concepts that will activate the input specialist. The recognition of concepts is as follows: Each time that a new sentence is going to be read, a global variable RECOG is initialized to NIL. Once a nominal or verbal concept in the sentence has been parsed, the function RECOGNIZE-CONCEPT is invoked (if the value of RECOG is NIL).
This function checks if the concept that has been parsed is relevant to the progran~ning task in general or (if the topic has been identified) is relevant to the topic of the programming example. If so, RECOGNIZE-CONCEPT sets RECOG to T and passes con- trol to the concept that takes control overriding the parser. Once a concept has been recognized, the specialist for that concept continues in con- trol until the entire sentence has been processed. The relevant concept may be the subject or any other case of the sentence. However if the rele- vant concept is in a prepositional phrase that starts a sentence, the relevant concept will not take control. The following data structures are used during parsing. A global variable, STRUCT, holds the re- sult of the parsing. STRUCT can be considered as a STM (short term memory) for the low level linguis- tic processes. A BLACKBOARD (Erman and Lesser, 1975) is used for communication between the high level conceptual specialists and the low level linguistic experts. Because the information in the blackboard does not go beyond the sentential level, it may be considered as STM for the high level sources of knowledge. A global variable WORD holds the word being examined, and WORDSENSE holds the semantic features of that word. 7. Example 1 An instructor records the name and five test scores on a data card for each student. The regis- trar also supplies data cards containing a student name, identification number and number of courses passed. The parser is invoked by activating SENTENCE. Because "an" has the marker DESCR, SENTENCE passes control to DECLARATIVE which handles sentences starting with a nominal phrase. (There are other functions that respectively handle sentences start- ing with a prepositional phrase, an adverbial clause, a co~nand, an -ing form, and sentences introduced by "to be" (there be, will be, etc.) with the meaning of existence.) DECLARATIVE in- vokes DESCRIPTION. This parses "an instructor" ob- taining the concept instructor. Before returning control, DESCRIPTION activates the functions RECOG- NIZE-TOPIC and RECOGNIZE-CONCEPT. The former function checks in the dictionary if there is a frame associated with the concept parsed by DESCRIPTION. The frame EXAM-SCORES is associated with instructor, then the variable TOPIC is instan- tiated to that frame. The recognition of the frame, which may be a very hard problem, is very simple in the programming problems we have studied and normally the first guess happens to be correct. Next, RECOGNIZE-CONCEPT is invoked. Because instructor does not belong to the relevant concepts of the EXAM-SCORES frame, it returns control. Finally DESCRIPTION returns control to DECLARATIVE, along with a list containing the semantic features of instructor. DECLARATIVE, after checking that the feature TIME does not belong to those features, inserts SUBJECT before "instructor" in STRUCT. Be- fore storing the content of WORD, "records," into STRUCT, DECLARATIVE invokes RECOGNIZE-CONCEPT to recognize the verbal concept. All verbs with the feature record, as we said above, activate the in- put specialist, called INPUT-SP. When INPUT-SP is activated, STRUCT looks like (SUBJ (INSTUCTOR)). As we said in the introduction, the INPUT special- ist is a collection of production rules. 
One of those rules says: IF the marker RECORD belongs to WORDSENSE, THEN activate the function ACTION-VERB and pass the following recommendations to it: 1) activate the INPUT-SUPERVISOR each time you find an object; 2) if a RECIPIENT case is found, then if it has the feature HUMAN, parse and ignore it, otherwise awaken the INPUT-SUPERVISOR; 3) if a WHERE case (the object where something is recorded) is found, awaken the INPUT-SUPERVISOR.

The INPUT-SUPERVISOR is a function that controls the input for each particular problem. ACTION-VERB parses the first object and passes it to the INPUT-SUPERVISOR. This checks that the semantic feature IGENERIC (a semantic feature associated with words that refer to generic information like "data," "information," etc.) does not belong to the object that has been parsed by ACTION-VERB. If it does not, the INPUT-SUPERVISOR, after checking in the PASSIVE-FRAME that name is normally associated with the input for EXAM-SCORES, inserts it in the CONSIST-OF slot of input. The INPUT-SUPERVISOR returns control to ACTION-VERB, which parses the next object, and the process explained above is repeated.

When ACTION-VERB finds the preposition "on," the routine ON-SP is activated. This, after checking that the main verb of the sentence has been parsed and that it takes a WHERE case, checks the BLACKBOARD to find out if there is a recommendation for it. Because that is the case, ON-SP tells DESCRIPTION to parse the nominal phrase "on data cards". This returns with the concept card. ON-SP activates the INPUT-SUPERVISOR with card. This routine, after checking that cards are a type of input that the solver handles, inserts "card" in the INPUT-TYPE slot of input and returns control. What if the sentence had said "... on a notebook"? Because notebook is not a form of input, the INPUT-SUPERVISOR would not have inserted "book" into the INPUT-TYPE slot. Another alternative is to let the INPUT-SUPERVISOR insert it in the INPUT-TYPE slot and let the problem solver make sense out of it. There is an interesting tradeoff between understanding and problem solving in these contexts: the more robust the understander is, the weaker the solver may be, and vice versa.

The prepositional phrase "for each student" is parsed similarly. ACTION-VERB returns control to INPUT-SP, which inserts "instructor" in the SOURCE slot of input. Finally, it sets the variable QUIT to T to indicate to DECLARATIVE that the sentence has been parsed and returns control to it. DECLARATIVE, after checking that the variable QUIT has the value T, returns control to SENTENCE. This resets the variables RECOG, QUIT and STRUCT to NIL and begins to examine the next sentence.

The calling sequence for the second sentence is identical to that for the first sentence, except that the recognition of concepts is different. The passive frame for EXAM-SCORES does not contain anything about "registrar" nor about "supplies". DECLARATIVE has called ACTION-VERB to parse the verbal phrase. This has invoked DESCRIPTION to parse the object "data cards". STRUCT looks like: (SUBJ (REGISTRAR) ADV (ALSO) AV (SUPPLIES) OBJ ). ACTION-VERB is waiting for DESCRIPTION to parse "data cards" to fill the slot of OBJ. DESCRIPTION comes back with card from "data cards," and invokes RECOGNIZE-CONCEPT. The specialist INPUT-SP is connected with card and it is again activated.
This time the production rule that fires says: If what follows in the sentence is <univer- sal quatifier> + <D-VERB> or simply D-VERB then activate the function DESCRIPTIVE-VERB and pass it the recommendation of activating the INPUT-SUPERVISOR each time a complement is found. The pattern <universal quantifier> + <D-VERB> appears in the antecedent of the production rule because we want the system also to understand: "data cards each containing...". The rest of the sentence is parsed in a similar way to the first sentence. The INPUT-SUPERVISOR returns control to INPUT-SP that stacks "registrar" in the source slot of input. Finally the concept input for this prob- lem looks: INPUT CONSIST-OF (NAME (SCORES CARD (5))) SOURCE (INSTRUCTOR) (NAME ID-NUMBER P-COURSES) SOURCE (REGISTRAR) INPUT-TYPE (CARDS) If none of the concepts of a sentence are recog- nized - that is the sentence has been parsed and the variable RECOG is NIL - the system prints the sentence followed by a question mark to indicate that it could not make sense of it. That will happen if we take a sentence from a problem about checking~accounts and insert it in the middle of a problem about exam scores. The INPUT-SP and the INPUT-SUPERVISOR are the same specialists. The former overrides and guides the parser'when a con- cept is initially recognized, the latter plays the same role after the concept has been recognized. The following example illustrates how the INPUT- SUPERVISOR may furthermore override and guide the parser. The registrar also provides cards. Each card contains data including an identification number ... When processing the subject of the second sentence, INPUT-SP is activated. This tells the function DESCRIPTIVE-VERB to parse starting at "contains ..." and to awaken the INPUT-SUPERVISOR when an object is parsed. The first object is "data" that has the marker IGENERIC that tells the INPUT-SUPER- VISOR that "data" can not be the value for the input. The INPUT-SUPERVISOR will examine the next concept looking for a D-VERB. Because that is the case, it will ask the routine DESCRIPTIVE-VERB to parse starting at "including an identification n~mber..." 8. Example 2 We will comment briefly on the first six sentences of the example in Fig. 2. We will name each sentence by quoting its beginning and its end. There is a specialist that has grouped the know- ledge about checking-accounts. This specialist, whose name is ACCOUNT-SP, will be invoked when the parser finds a concept that belongs to the slot of relevant concepts in the passive frame. The first sentence is: "A bank would like to produce... checking accounts". The OUTPUT-SP is activated by "like". When 0UTPUT-SP is activated by a verb with the feature of REQUEST, there are only two produc- tion rules that follow. One that considers that the next concept is an action verb, and another that looks for the pattern <REPORT + CONSIST D-VERB> (where "REPORT" is a semantic feature for "report," "list," etc.). In this case, the first rule is fired. Then ACTION-VERB is activated with the recommendation of invoking the OUTPUT-SUPERVI- SOR each time that an object is parsed. ACTION- VERB awakens the OUTPUT-SUPERVISOR with (RECORDS ABOUT (TRANSACTION)), Because "record" has the feature IGENERIC the OUTPUT-SUPERVISOR tries to redirect the parser by looking for a CONSIST D-VERB. Because the next concept is not a D-VERB, OUTPUT-SUPERVISOR sets RECOG to NIL and returns control to ACTION-VERB. 
This parses the adverbial phrase introduced by "during" and the prepositional phrase introduced by "with". ACTION-VERB parses the entire sentence without recognizing any rele- vant concept, except the identification of the frame that was done while processing "a bank". The second sentence "For each account the bank wants ... balance." is parsed in the following way. Although "account" belongs to slot of rele- vant concepts for this problem, it is skipped be- cause it is in a prepositional phrase that starts a sentence. The 0UTPUT-SP is activated by a 40 REQUEST type verb, "want". STRUCT looks like: (RECIPIENT (ACCOUNT UQ (EACH)) SUBJECT (BANK)). The production rule whose antecedent is <RECORD + CONSIST D-VERB> is fired. The DESCRIPTIVE-VERB function is asked to parse starting in "showing," and activate the OUTPUT-SUPERVISOR each time an object is parsed. The OUTPUT-SUPERVISOR inserts all objects in the CONSIST-OF slot of output, and returns control to the OUTPUT-SP that inserts the RECIPIENT, "account," in the CONSIST-OF slot of output and returns control. The next sentence is "The accounts and trans- actions ... as follows:" DECLARATIVE asks DESCRIPTION to parse the subject. Because account belongs to the relevant concepts of the passive frame, the ACCOUNT-SP specialist is invoked. There is nothing in STRUCT. When a topic specialist is invoked and the next word is a boolean conjunction, the specialist asks DESCRIPTION to get the next concept for it. If the concept does not belong to the llst of relevant concepts, the specialist sets RECOG to NIL and returns control. Otherwlse it continues examining the sentence. Because trans- action belongs to the slot of relevant concepts of the passive frame, ACCOUNT-SP continues in control. ACCOUNT-SP finds "for" and asks DESCRIPTION to parse the nominal phrase. ACCOUNT-SP ignores anything that has the marker HUMAN or TIME. Finally ACCOUNT-SP finds the verb, an APPEAR D-VERB and invokes the DESCRIPTIVE-VERB routine with the recommendation of invoking the ACCOUNT-SUPERVISOR each time a complement is found. The ACCOUNT- SUPERVISOR is awakened with card. This inserts "card" in the INPUT-TYPE slot of account and transaction and returns control to the DESCRIPTIVE- VERB routine. AS-SP (the routine for "as") is invoked next. This, after finding "follows" followed by ":," indicate to DESCRIPTIVE-VERB that the sentence has been parsed. ACCOUNT-SP returns control to DECLARATIVE and this, after checking that QUIT has the value T, returns control to SENTENCE. The next sentence is: "First will be a sequence of cards ... accounts." The INPUT-SP specialist is invoked. STRUCT looks like: (ADV (FIRST) EXIST ). "Sequence of cards" gives the concept card activating the INPUT-SP specialist. The next concept is a REPRESENT D-VERB. INPUT-SP activates the DESCRIPTIVE-VERB routine and asks it to activate the INPUT-SUPERVISOR each time an object is found. The INPUT-SUPERVISOR checks if the object belongs to the relevant concepts for checking accounts. If not, the ACCOUNT-SUPERVISOR will complain. That will be the case if the sen- tence is: "First will be a sequence of cards describing the students". Assume that the above sentence says: "First will be a sequence of cards consisting of an account number and the old balance." In that case, the INPUT-SP will activate also the INPUT-SUPERVISOR but because the verbal concept is a CONSIST D-VERB, the INPUT-SUPERVISOR will stack the complements in the slot for INPUT. 
Thus, what the supervisor specialists do depends on the verbal concept and what is coming after. The next sentence is: "Each account is described by ..., in dollars and cents." Again, ACCOUNT-SP is activated. The next concept is a CONSIST D-VERB. ACCOUNT-SP assumes that it is the input for accounts and activates the DESCRIPTIVE-VERB function, passing to it the recommendation of activating the INPUT-SUPERVISOR each time an object is parsed. The INPUT-SUPERVISOR is awakened with (NUMBERS CARDINAL (2)). Because number is not an individual concept (like, say, 0 is), the INPUT-SUPERVISOR reexamines the sentence and finds ":"; it then again asks DESCRIPTIVE-VERB to parse starting at "the account number...". The INPUT-SUPERVISOR stacks the complements in the input slot of the concept that is being described: account.

The next sentence is: "The last account is followed by ... to indicate the end of the list." ACCOUNT-SP is invoked again. The following production rule is fired: If the ordinal "last" is modifying "account" and the next concept is a SPATIAL D-VERB, then activate the END-OF-DATA specialist. This assumes control and asks DESCRIPTIVE-VERB to parse starting at "followed by", with the usual recommendation of awakening the END-OF-DATA supervisor when a complement is found, and the recommendation of ignoring a PURPOSE clause if the concept is end-of-list or end-of-account. The END-OF-DATA supervisor is awakened with "dummy-account". Because "dummy-account" is not an individual concept, the END-OF-DATA supervisor reexamines the sentence, expecting that the next concept is a CONSIST D-VERB. It finds it, and redirects the parser by asking DESCRIPTIVE-VERB to parse starting at "consisting of two zero values." The END-OF-DATA supervisor is awakened with "(ZERO CARD (2))". Because this time the object is an individual concept, the END-OF-DATA supervisor inserts it into the END-OF-DATA slot of the concept being described: account.

9. Conclusion

LLULL was running on the DEC 20/20 under UCI Lisp in the Department of Computer Science of the Ohio State University. It has been able to understand ten programming problems taken verbatim from textbooks. A representative example can be found in Fig. 2. After the necessary modifications, the system is presently running on a VAX 11/780 under Franz Lisp. We are now in the planning stage of extensively experimenting with the system. We predict that the organization that we have proposed will make it relatively simple to add new problem areas. Assume that we want LLULL to understand programming problems about Roman numerals, say. We are going to find uses of verbs, prepositions, etc. that our parser will not be able to handle. We will integrate those uses in the parser. On top of that, we will build some conceptual specialists that will have inferential knowledge about Roman numerals, and a thematic frame that will hold structural knowledge about Roman numerals. We are presently following this scheme in the extension of LLULL. In the next few months we expect to fully evaluate our ideas.

10. A Computer Run

The example below has been taken verbatim from Conway and Gries (1975). Some notes about the output for this problem are in order. 1) "SPEC" is a semantic feature that stands for specification. If it follows a concept, it means that the concept is being further specified or described. The semantic feature "SPEC" is followed by a descriptive verb or adjective, and finally comes the complement of the specification in parentheses.
In the only instance in which the descrip- tive predicate does not follow the word SPEC is in expressions like "the old balance in dollars and cents". Those expressions have been treated as a special construction. 2) All direct objects connected by the conjunction "or" appear enclosed in parentheses. 3) "REPRESENT" is a semantic marker and stands for a REPRESENT D-VERB. 4) Finally "(ZERO CARD (3))" means three zeros. (A BANK WOULD LIKE TO PRODUCE RECORDS OF THE TRANSACTIONS DURING AN ACCOUNTING PERIOD IN CONNECTION WITH THEIR CHECKING ACCOUNTS. FOR EACH ACCOUNT THE BANK WANTS A LIST SHOWING THE BALANCE AT THE BEGINNING OF THE PERIOD, THE NUMBER OF DEPOSITS AND WITHDRAWALS, AND THE FINAL BALANCE. THE ACCOUNTS AND TRANSACTIONS FOR AN ACCOUNTING PERIOD WILL BE GIVEN ON PUNCHED CARDS AS FOLLOWS: FIRST WILL BE A SEQUENCE OF CARDS DESCRIBING THE ACCOUNTS. EACH ACCOUNT IS DESCRIBED BY TWO NUM- BERS: THE ACCOUNT NUMBER (GREATER THAN 0), AND THE ACCOUNT BALANCE AT THE BEGINNING OF THE PERIOD, IN DOLLARS AND CENTS. %~E LAST ACCOUNT IS FOLLOWED BY A DUMMY ACCOUNT CONSISTING OF TWO ZERO VALUES TO INDICATE THE END OF THE LIST. THERE WILL BE AT MOST 200 ACCOUNTS. FOLLOWING THE ACCOUNTS ARE THE TRANSACTIONS. EACH TRANSACTION IS GIVEN BY THREE NUMBERS: THE ACCOUNT NUMBER, A i OR -I (INDICATING A DEPOSIT OR WITHDRAWAL, RESPECTIVELY), AND THE TRANSACTION AMOUNT, IN DOLLARS AND CENTS. THE LAST REAL TRANSACTION IS FOLLOWED BY A DUMMY TRANSACTION CONSISTING OF THREE ZERO VALUES.) Figure 2 A Programming Problem OUTPUT CONSIST-OF (ACCOUNT OLD-BAL DEPOSITS WITHDRAWALS FINAL-BAL) ACCOUNT INPUT (ACCOUNT-NUMBER SPEC GREATER (0) OLD-BAL SPEC (DOLLAR-CENT)) INPUT-TYPE (CARDS) END-OF-DATA ((ZERO CARD (2))) NUMBER-OF-ACCOUNTS (200) TRANSACTION INPUT (ACCOUNT-NUMBER (1 OR -i) REPRESENT (DEPOSIT OR WITHDRAWAL) TRANS-AMOUNT SPEC (DOLLAR-CENT)) INPUT-TYPE (CARDS) END-OF-DATA ((ZERO CARD (3))) Figure 3 System Output for Problem in Figure 2 ACKNOWLEDGEMENTS This research was supported by the Air Force Office of Scientific Research under contract F49620-79-0152, and was done in part while the author was a member of the AI group at the Ohio State University. I would llke to thank Amar Mukhopadhyay for reading and providing constructive comments on drafts of this paper, and Mrs. Robin Cone for her wonderful work in typing it. REFERENCES Bruce, B. and Newman D. Interacting Plans. Cogni- tive Science. v. 2, 1978. Cohen, P. and Perrault R. Elements of a Plan-Based Theory of Speech Acts. Cognitive Science, v. 3, n. 3, 1979. Conway, R. and GriPs, D. An Introduction to Pro- gramming. Winthrop Publishers, Inc., Massachu- setts, 1975. DeJong, G. Prediction and Substantiation: A New Approach to Natural Language Processing. Cogni- tive Science, v. 3, n. 3, 1979. Erman, D. and Lesser V. A Multi-Level Organization for Problem-Solving Using Many Diverse Coopera- ting Sources of Knowledge. IJCAI-75, University Microfilms International, PO BOX 1467, Ann Arbor, Michigan 48106, 1975. Finin, T. The Semantic Interpretation of Compound Nominals. Report T-96, Dept. of Computer Science, University of Illinois, 1980. Gomez, F. Understanding Programming Problems Stated in Natural Language. OSU-CISR-TR-81, Dept. of Computer Science, The Ohio State University, 1981. Grosz, B. The Representation and Use of Focus in Dialogue Understanding. SRI Technical Note 151, Menlo Park, Ca., 1977. Hobbs, J. and Evans D. Conversation as Planned Behavior. CQsnltlve Science. v.4, no. 4, 1980. Levi, J. N. The Syntax and Semantics of Complex Nominals. Academic Press, 1978. 
Marcus, M. A Theory of Syntactic Recognition for Natural Language. MIT Press, 1979.
Ortony, A. Remembering, Understanding, and Representation. Cognitive Science, v. 2, n. 1, 1978.
Perrault, R. and Allen, F. A Plan-Based Analysis of Indirect Speech Acts. American Journal of Computational Linguistics, v. 6, n. 3, 1980.
Riesbeck, C. K. Conceptual Analysis. In R. Schank (ed.), Conceptual Information Processing. New York, Elsevier-North Holland, 1975.
Schank, R. and Abelson, R. Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates, Hillsdale, N. J., 1977.
Schank, R. C., Lebowitz, M., and Birnbaum, L. Parsing Directly in Knowledge Structures. In IJCAI-79, Computer Science Department, Stanford University, Stanford, CA 94305.
Small, S. Word Expert Parsing: A Theory of Distributed Word-Based Natural Language Understanding. Tech. Report 954, Dept. of Computer Science, University of Maryland, 1980.
Wilensky, R. Understanding Goal-Based Stories. Tech. Report 140, Dept. of Computer Science, Yale University, 1978.
Wilks, Y. An Artificial Intelligence Approach to Machine Translation. In Schank and Colby (eds.), Computer Models of Thought and Language. San Francisco, W. H. Freeman and Co., 1973.
Winograd, T. Understanding Natural Language. New York, Academic Press, 1972.
| 1982 | 6 |
NATURAL-LANGUAGE ACCESS TO DATABASES--THEORETICAL/TECHNICAL ISSUES

Robert C. Moore
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025

I INTRODUCTION

Although there have been many experimental systems for natural-language access to databases, with some now going into actual use, many problems in this area remain to be solved. The purpose of this panel is to put some of those problems before the conference. The panel's motivation stems partly from the fact that, too often in the past, discussion of natural-language access to databases has focused, at the expense of the underlying issues, on what particular systems can or cannot do. To avoid this, the discussions of the present panel will be organized around issues rather than systems. Below are descriptions of five problem areas that seem to me not to be adequately handled by any existing system I know of. The panelists have been asked to discuss in their position papers as many of these problems as space allows, and have been invited to propose and discuss one issue of their own choosing.

II QUANTITY QUESTIONS

Database query languages typically provide some means for counting and totaling that must be invoked for answering "how much" or "how many" questions. The mapping between a natural-language question and the corresponding database query, however, can differ dramatically according to the way the database is organized. For instance, if DEPARTMENT is a field in the EMPLOYEE file, the database query for "How many employees are in the sales department?" will presumably count the number of records in the EMPLOYEE file that have the appropriate value for the DEPARTMENT field. On the other hand, if the required information is stored in a NUMBER-OF-EMPLOYEES field in a DEPARTMENT file, the database query will merely return the value of this field from the sales department record. Yet a third case will arise if departments are broken down into, say, offices, and the number of employees in each office is recorded. Then the database query will have to total the values of the NUMBER-OF-EMPLOYEES field in all the records for offices in the sales department. In each case, the English question is the same, but the required database query is radically different. Is there some unified framework that will encompass all these cases? Is this a special case of a more general phenomenon?

III TIME AND TENSE

This is a notorious black hole for both theoretical and computational linguistics, but, since many databases are fundamentally historical in character, it cannot really be circumvented. There are many problems in this general area, but the one I would suggest is how to handle, within a common framework, both concepts defined with respect to points in time and concepts defined with respect to intervals. The location of an object is defined relative to a point; it makes sense to ask "Where was the Kennedy at 1800 hours on July 1, 1980?" The distance an object has traveled, however, is defined solely over an interval; it does not make sense to ask "How far did the Kennedy sail at 1800 hours on July 1, 1980?" Or, to turn things around, "How far did the Kennedy sail during July 1982?" has only a single answer (for the entire interval)--but "Where was the Kennedy during July 1982?" may have many different answers (in the extreme case, one for each point in the interval). Must these queries be treated as two completely distinct types, or is there a unifying framework for them?

The fact that any interval contains an infinite number of points creates a special problem for the representation of temporal information in databases. Typically, information about a time-varying attribute such as location is stored as samples or snapshots. We might know the position of a ship once every hour, but obviously we cannot have a record in an extensional database for every point in time. How then are we to handle questions about specific points in time not stored in the database, or questions that quantify over periods of time? (E.g., "Has the Kennedy ever been to Naples?") Interpolation naturally suggests itself, but is it really appropriate in all cases?
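One concrete rendering of the interpolation move, so its hidden commitments are visible: the sketch below (invented data layout and values) answers point-in-time position queries from hourly samples by linear interpolation -- thereby silently assuming straight-line, constant-rate travel between samples, which is exactly what may be inappropriate.

```python
from bisect import bisect_left

def position_at(samples, t):
    """Estimate a position at time t from (time, lat, lon) samples.

    Naive linear interpolation: it refuses to extrapolate outside the
    sampled span, but still assumes straight-line travel between
    consecutive samples.
    """
    if not samples:
        return None
    times = [s[0] for s in samples]
    if t < times[0] or t > times[-1]:
        return None                      # outside the recorded interval
    i = bisect_left(times, t)
    if times[i] == t:
        return samples[i][1:]            # an exact sample, no guessing
    (t0, lat0, lon0), (t1, lat1, lon1) = samples[i - 1], samples[i]
    w = (t - t0) / (t1 - t0)
    return (lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0))

# Hourly samples (hour, latitude, longitude) -- invented values.
track = [(0, 40.0, -74.0), (1, 40.5, -73.0), (2, 41.0, -72.0)]
print(position_at(track, 1.5))           # -> (40.75, -72.5)
```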
If they are treated separately, how can a system recognize which treatment is appropriate? The fact that any interval contains an infinite number of points creates a special problem for the representation of temporal information in databases. Typically, information about a tlme-varying attribute such as location is stored as samples or snapshots. We might know the position of a ship once every hour, but obviously we c-~-~k have a record in an extensional database for every point in time. How then are we to handle questions about specific points in time not stored in the database, or questions that quantify over periods of time? (E.g., "Has the Kennedy ever been to Naples?") Interpolation naturally suggests itself, but is it really appropriate in all cases? 44 IV QUANTIFYING INTO QUESTIONS Vl MULTIFILE QUERIES Normally, most of the inputs to a system for nat~ral-language access to databases will be questions. Their semantic interpretation, however, is not yet completely understood. In particular, quantlflers in questions can cause special problems. In speech act theory, it is generally assumed that a question can be analyzed as a having a propositional content, which is a description, and an illocutionary force, which is a request to enumerate the entities that satisfy the description. Questions such as "Who manages each department?" resist this simple analysis, however. If "each" is to be analyzed as a universal quantifier (as in "Does each department have a manager?"), then its scope, in some sense, must be wider than that of the indicator of the sentence's illocutlonary force. That is, what the question actually means is "For each department, who manages the department?" If we to try to force the quantifier to be part of the description of the entities to be enumerated, we seem to be asking for a single manager who manages every department--i.e., "Who is the manager such that he manages each department?" The main issues are: What would be a suitable representation for the meaning of this sort of question, and what would be the formal semantics of that representation? V QUERYING SEMANTICALLY COMPLEX FIELDS Natural-language query systems usually assume that the concepts represented by database fields will always be expressed in English by single words or fixed phrases. Frequently, though, a database field will have a complex interpretation that can be interrogated in many different ways. For example, suppose a college admissions office wants to record which applicants are children of alumni. This might be indicated in the database record for each applicant by a CHILD-OF-ALUMNUS field with the possible values T or F. If this field were queried by asking "Is John Jones a child of an alumnus?" then "child of of an alumnus" could be treated as if it were a fixed phrase expressing a primitive predicate. The difficulty is that the user of the system might Just as well ask "Is one of John Jones's parents an alumnus?" or "Did either parent of John Jones attend the college?" Can anything be done to handle cases llke this, short of treating an entire question as a fixed form? All the foregoing examples involve questions that can be answered by querying a single file. In a multifile database, of course, questions will often arise that require information from more than one file, which raises the issue of how to combine the information from the various files involved. 
In database terms, this often comes down to forming the "join" of two files, which requires deciding what fields to compute the join over. In the LADDER system developed at SRI, as well as in a number of other systems, it was assumed that for any two files there is at most a single pair of fields that is the "natural" pair of fields to join. For instance, in a SHIP file there may be a CLASS field containing the name of the class to which a ship belongs. Since all ships in the same class are of the same design, attributes such as length, draft, speed, etc., may be stored in a CLASS file, rather than being given separately for each ship. If the system knows that the natural join between the two files is from the CLASS field of the SHIP file to the CLASSNAME field of the CLASS file, it can retrieve the length of a particular ship by computing this join. The scheme breaks down, however, when there is more than one natural join between two files, as would be the case if there were a PORT file and fields for home port, departure port, and destination port in the SHIP file. This is sometimes called the "multipath problem." Is there a solution to this problem in the general case? If not, what is the range of special cases that one can reasonably expect to handle?
| 1982 | 7 |
TRANSPORTABLE NATURAL-LANGUAGE INTERFACES: PROBLEMS AND TECHNIQUES

Barbara J. Grosz
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104 (1)

(1) Currently visiting under the auspices of the Program in Cognitive Science at the University of Pennsylvania.

I OVERVIEW

I will address the questions posed to the panel from within the context of a project at SRI, TEAM [Grosz, 1982b], that is developing techniques for transportable natural-language interfaces. The goal of transportability is to enable nonspecialists to adapt a natural-language processing system for access to an existing conventional database. TEAM is designed to interact with two different kinds of users. During an acquisition dialogue, a database expert (DBE) provides TEAM with information about the files and fields in the conventional database for which a natural-language interface is desired. (Typically this database already exists and is populated, but TEAM also provides facilities for creating small local databases.) This dialogue results in extension of the language-processing and data access components that make it possible for an end user to query the new database in natural language.

A major benefit of using natural language is that it shifts onto the system the burden of mediating between two views of the data--the way in which it is stored (the "database view") and the way in which an end user thinks about it (the "user's view"). Basically, database access is done in terms of files, records, and fields, while natural-language expressions refer to the same information in terms of entities and relationships in the world.

In my discussion, I will assume the use of a general grammar of English rather than a semantic grammar, and also that the interpretation of queries will pass through an intermediate stage in which a database-independent representation of the meaning of the query is derived before constructing the formal database query. This is because systems based on semantic grammars amalgamate information about language, about the domain, and about the database in ways that make it difficult to transfer those systems to new databases. I will use the term "conceptual schema" to refer to the internal representation of information about the entities in the domain of discourse and the relationships that can hold among them, (2) and "database schema" to refer to the encoding of information about the way concepts in the conceptual schema map onto the structures of the database. In addition, I will use the term "logical form" to refer to the representation of the literal meaning of an expression in the context of an utterance.

(2) This schema is a restricted form of the standard AI knowledge base.

The insistence on transportability (which distinguishes TEAM from previous systems such as LADDER [Hendrix et al., 1978], LUNAR [Woods, Kaplan, and Webber, 1972], PLANES [Waltz, 1975], REL [Thompson, 1975], and CHAT [Warren, 1981]) entails two major consequences for the design of a natural-language interface. First, the database cannot be restructured to make the way in which it stores data more compatible with the way in which a user would pose his questions. Second, because the DBE cannot be expected to know about the internal structure of the conceptual schema and the database schema, these must be organized so that the information they encode about any particular database and its corresponding domain can be obtained systematically (and, therefore, automatically).
These differences are crucial to any consideration of the issues before this panel. Although, for any particular database, it may be possible to handcraft solutions to each problem, such an approach is not viable for a transportable system. Handcrafting requires expertise in computational linguistics, knowledge of the internal structures and algorithms used in an interface, and so forth--none of which the DBE can be expected to possess. In addition, interfacing to an existing conventional database introduces many problems caused by the difference between the database view and the end user's view. Many of these problems can be avoided if one is allowed to design the database as well as the natural-language system. However, given the prevalence of existing conventional databases, approaches that make this assumption are likely to have limited applicability in the near future.

Most of the issues the panel has been asked to address arise (or have analogues) in any application of natural-language processing. In the sections that follow, my objective in discussing each of these issues will be to point out where I see the constraints of the database query task as simplifying the general problem and where, on the other hand, transportability (and the way in which database systems typically structure information and view the world) makes things more difficult. Inevitably, I will be raising at least as many questions as I answer.

II AGGREGATES

It is useful to separate problems involving aggregates into two categories: (1) those that involve mapping from natural language to logical form, and (2) those that involve translating from logical form into a formal database query. The examples presented to the panel have elements of each of these.

In addressing the question of logical form, I first want to note how similar "how many" and "how much" questions are to other degree questions (e.g., "How tall is John?"). Consider, for example,

(1) James is old. / How old is James?
(2) The department is big. / How big is the department?
(3) The department has many employees. / How many employees does the department have?
(4) The ship is heavy. / How heavy is the ship?
(5) The ship is carrying much coal. / How much coal is the ship carrying?

Hence, it seems that the logical forms for the queries ought to bear a close resemblance. In interpreting degree questions, the language-processing component of TEAM [Grosz et al., 1982a] applies a higher-order degree operator to the predicate that underlies the adjective. For example, the logical form for "How tall is John?" would be

(WHAT H (HEIGHT H) ((DEGREE TALL) JOHN H))

The problem in transferring this treatment to "how many" and "how much" questions is that while adjectives like "heavy" are usually treated as predicates, "many" is usually treated as a quantifier. So, if "how" is treated by uniformly applying some kind of higher-order degree operator, then that operator has to apply to both predicates and quantifiers. Another possibility would be to apply the degree operator to an entire formula, as in

(WHAT H (HEIGHT H) ((DEGREE (TALL JOHN)) H))

rather than just to the head of the formula. Whether this can be made to work, however, depends on whether a satisfactory analysis can be provided when the formula consists of more than just a predicate and its arguments.
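To make the comparison concrete, the two analyses can be written down as plain data structures. This is purely illustrative -- nested Python tuples standing in for TEAM's logical-form expressions, which differ in detail, and the final "how many" form is my own hypothetical rendering, not TEAM's:

```python
# The two candidate treatments of "how", written as nested tuples that
# mimic the paper's S-expression notation.  Illustrative only.

# Degree operator applied to the predicate underlying the adjective:
how_tall_pred = ("WHAT", "H", ("HEIGHT", "H"),
                 (("DEGREE", "TALL"), "JOHN", "H"))

# Degree operator applied to an entire formula:
how_tall_form = ("WHAT", "H", ("HEIGHT", "H"),
                 (("DEGREE", ("TALL", "JOHN")), "H"))

# The formula-based treatment extends mechanically to "how many", where
# "many" is a quantifier rather than a one-place predicate that DEGREE
# could wrap (a hypothetical rendering):
how_many = ("WHAT", "N", ("NUMBER", "N"),
            (("DEGREE", ("MANY", "EMPLOYEE")), "N"))
```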
However, transportability does make the problem of translating from logical form into a database query more difficult. Fields that store count totals, llke NUMBER-OF-EMPLOYEES, are semantically complex in much the same way as the CHILD-OF-ALUMNUS field (the predicate encoded by a count field can be defined in terms of a count operator and the domain entities that are to be counted), and they present similar problems for transportability and database access (see section 5). The question therefore (to which I do not have an answer) is whether this kind of semantically complex field is any simpler to handle than the more general case. In addition, some ways of storing information about aggregates in these semantically complex fields may require inferences to be drawn to answer ceztaln kinds of queries. For example, if the number of employees in a department must be calculated from the number of employees in each office of the department, answering queries about the number of employees in a department will require reasoning about the part/whole relationship between offices and departments and how the number of employees in a department depends on that relationship. A general treatment of such cases would require both the acquisition of information about the part/whole relationship Impllcltly encoded in the database, and the ability to infer that (in this case) the count for the whole is the sum of the counts for the parts. The need for drawing inferences arises with mass fields as well as with count fields. For example, consider a database of ships and their cargoes, with separate entries for the different kinds of cargo a ship is carrylng. Then an answer to "How much cargo is the ship cazzylng?" will require the same kind of totaling operation as does the query about the number of employees in the above example. It may be possible to handle the most straightforward cases of these phenomena by adding special purpose information ("hacks" to compensate for the lack of theorem-proving capabilltes) for each operator corresponding to a data access system aggregate function, specifying how it interacts with part/whole relationships (AVERAGE will work differently from TOTAL). 47 III TIME AND TENSE The context of database querying does not seem to make questions concerning time and tense any easier than they are for linguistics or philosophy in general; in fact, they are actually more difficult because of the extensional nature of the temporal information stored in a database. It does not appear useful, even in the database query context, to have different representations for sentences involving concepts related to points in time and those involving intervals. The same natural-language expressions about time may be used to refer to a given time as either a point or an interval. Consider, (6) How far did the Fox travel yesterday? (yesterday as an interval over which an event extends) <7) Who was the officer of the day yesterday? (yesterday as a point in a sequence of days) It is fairly easy to imagine databases against which each of these queries might be posed and, in each case, "yesterday" might correspond either to a single database entzy or to a set of entries spanning an interval. Furthermore, the same verb can be used to refer to activities in terms of points or intervals--e.g., (8) The ship is sailing to Naples. (interval) (9) The ship is sailing tomorrow. (point) --and the same event may be viewed as occurring during an interval or at some single point [Moore, 1981]. 
(See Prince [1982] for an interesting discussion of the differences between (9) and "The ship sails tomorrow.")

On the issue of interpolation, we should note that questions involving temporal attributes also involve at least one other attribute of an entity (e.g., its location). To handle adequately queries about times not explicitly represented in the database, such factors as the time scale over which an attribute changes (e.g., a ship's position changes more slowly than an airplane's), and whether or not the change is linear, must be taken into account. In general, this requires mechanisms for reasoning about temporal relationships and complex events, mechanisms normally absent in database systems. Also note that, even when interpolation is possible, additional mechanisms are needed to handle queries about times beyond the last recorded time. (I have been living in Philadelphia for the last four months, but I will not be two months hence.) All this suggests that naive interpolation is likely to result in incorrect answers (entities may even have ceased to exist since the last data about them was recorded). I believe it is misleading to provide direct responses involving such interpolation, because the user has no way of knowing that the system's reasoning is only approximate, or knowing on what it has based its answer. If the natural-language interface isolates a user from the manner in which information is stored, it must compensate by furnishing sufficient information in its responses to allow the user to assess their validity. Of course, this is a more general issue than one concerning just time, but the appeal of interpolation (as a simple solution) may mislead us into thinking we can provide the user with an answer that later reflection will reveal as worse than no answer at all.

In an interface designed for a particular database, special purpose routines may be provided that take such factors as time scale into account. The problem is more difficult to deal with for a transportable natural-language interface, but two strategies appear possible. One is to provide the two values of the attribute being queried that correspond to times that bracket the time specified in an actual query. The second is to associate with each attribute-time pairing an interval over which the attribute value can be considered to be constant, as well as possibly a function for interpolating between values and extrapolating from them. The problem for transportability, then, is obtaining the requisite information from the DBE.

IV QUANTIFYING INTO QUESTIONS

The problem of quantifying into questions may have a simpler solution in the database query environment than it does in general. Database queries usually seek an enumeration (as opposed to queries seeking a description, as in "Which woman does every Englishman admire most? His mother." [Engdahl, 1982]). For such cases, it seems possible to analyze a question as a REQUEST to INFORM (an analysis done in [Cohen and Perrault, 1979] to allow planning of questions, taking into account plans and goals of both speakers and hearers), with REQUEST being the illocutionary-force operator. If this is done, a quantifier can outscope the INFORM without outscoping the REQUEST. Thus, the logical form of "Who commands each ship?" would be something like

(REQUEST (EVERY X (SHIP X) (INFORM "who commands X")))
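Operationally, letting EVERY outscope INFORM but not REQUEST amounts to producing one enumeration -- one INFORM -- per ship. A toy rendering of that control structure (invented data; nothing here is TEAM's actual machinery):

```python
# Toy evaluation of (REQUEST (EVERY X (SHIP X) (INFORM "who commands X"))):
# EVERY outscopes INFORM, so there is one answer (one INFORM) per ship,
# rather than a search for a single commander of every ship.
# Data and names are invented for illustration.

commands = {"KENNEDY": "SMITH", "FOX": "JONES"}   # ship -> commander

def who_commands_each_ship(ships):
    for ship in ships:                            # EVERY X (SHIP X) ...
        commander = commands.get(ship, "unknown")
        print(f"{ship}: {commander}")             # ... (INFORM ...)

who_commands_each_ship(["KENNEDY", "FOX"])
```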
V SEMANTICALLY COMPLEX FIELDS

The predicate represented in a semantically complex field like CHILD-OF-ALUMNUS typically has a definition in terms of simpler concepts, namely an existential quantifier and whatever entity is being quantified over (in this case ALUMNUS). In a nontransportable system, some of the variability of expression that these fields give rise to can be handled by enriching the conceptual schema appropriately (e.g., adding to it the class of alumni). However, as the query "Did either of John Jones's parents attend the college?" illustrates, this by itself is not sufficient in general. In extreme cases, sophisticated deductive capabilities may be necessary to answer questions that can arise in connection with semantically complex fields. For example, the BLUEFILE database (to which LADDER provided an interface) has a field DOC that records whether or not a ship has a doctor on board. To answer a query like "Is there a doctor within 200 miles of Philadelphia?" requires not only representation of the connection between a positive value in the DOC field and the existence of a doctor, but also the ability to reason that, if a ship that has a doctor on board is within 200 miles of Philadelphia, then the doctor himself is within 200 miles of Philadelphia.

An apparent precondition for the correct treatment of semantically complex fields is that the system should have a richer model of the domain than the model constituted by the database itself. Konolige [1981] suggests one possible approach to this in which a metatheory is employed to describe both the domain of discourse and the information the database contains. Axioms in the metalanguage are used to encode things like the connection between the existence of an alumnus and a particular value in the CHILD-OF-ALUMNUS field. It does not seem possible to handle a wide variety of semantically complex fields in a transportable system, unless the system is much richer than typical DB systems (in which case much more general knowledge acquisition schemes must be implemented, such as those proposed by Hendrix and Haas [1982], for example). However, transportable systems can provide for a fairly wide range of fixed phrases corresponding to these fields [Grosz et al., 1982b].

VI MULTIFILE QUERIES

I will address only those aspects of this problem that are directly concerned with interpreting natural-language queries correctly, and not those that are concerned primarily with database access (e.g., ensuring that the fields over which the join must be made possess compatible values). Two basic problems arise in coordinating information from multiple files: (1) determining the relationships among the domains corresponding to the different fields; (2) accounting for the composition of relations across files. It is relatively straightforward to achieve correctness in (1) even in a transportable system. The composition of relations that are introduced by joins over distinct files presents greater difficulties, because natural-language queries may refer only implicitly to the composition. I want to consider two such cases: (1) the use of a field value (or a synonym) to modify a noun phrase (e.g., "Italian ships"), and (2) the use of a field value as a head noun referring to entities possessing that value for the attribute represented by the field (e.g., in a database about cars, "Fords" might refer to those cars with manufacturer = FORD). In both cases, it may be ambiguous exactly what relationship is being expressed.
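A schematic rendering of case (1), with an invented two-file schema (this example is mine, not from LADDER or TEAM): "Italian ships" might name a value of a field of the ships file itself, or might be reached only by composing a join across files.

ships = [  # (name, nationality, departure_port)
    ("Andrea", "ITALY", "MARSEILLE"),
    ("Fox",    "USA",   "GENOA"),
]
ports = [("GENOA", "ITALY"), ("MARSEILLE", "FRANCE")]  # (port, country)

# Reading A: the modifier names a value of a field of the ships file.
italian_a = [n for n, nat, p in ships if nat == "ITALY"]

# Reading B: the modifier is reached by composing a join with the
# ports file (ships whose port of departure lies in Italy).
port_country = dict(ports)
italian_b = [n for n, nat, p in ships if port_country[p] == "ITALY"]

print(italian_a)  # ['Andrea']
print(italian_b)  # ['Fox']

The two readings pick out different ships, so the choice of composition matters to the answer.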
If we restrict natural-language interface systems to handling only isolated queries, the DBE can be asked to eliminate certain of these ambiguities by establishing which fields have values that can be used to modify (or stand alone for) the entities in the database. Thus, for example, a DBE might establish that "Italian ships" will never be used to refer to ships with a port of departure in Italy. Once discourse contexts are taken into account, the problem becomes more difficult. For any field, it is fairly easy to create a context in which the relation represented by that field can be implicitly expressed by using one of its values as a modifier. For example, following the query "Are there more ships sailing from Italy or France this month?", the query "What cargoes are the Italian ships carrying?" uses "Italian ships" to refer specifically to ships departing from Italy.

VII ACKNOWLEDGMENTS

Robert Moore and Bonnie Webber provided many helpful comments on the content and form of this paper. Many of the ideas in it have resulted from discussions among the members of the TEAM project at SRI. The TEAM project is supported by the Defense Advanced Research Projects Agency under Contract N00039-80-C-0645 with the Naval Electronic Systems Command.

REFERENCES

Cohen, P. R. and C. R. Perrault [1979] "A Plan-Based Theory of Speech Acts," Cognitive Science, Vol. 3, No. 3, pp. 177-212 (July-September 1979).

Grosz, B. et al. [1982a] "DIALOGIC: A Core Natural-Language Processing System," to appear in Proceedings of the Ninth International Conference on Computational Linguistics, Prague, Czechoslovakia (July 1982).

Grosz, B. et al. [1982b] "TEAM: A Transportable Natural-Language System," Technical Note No. 263, Artificial Intelligence Center, SRI International, Menlo Park, California (April 1982).

Engdahl, E. [1982] "Constituent Questions, Topicalization, and Surface Structure Interpretation," to appear in proceedings from the First West Coast Conference on Formal Linguistics, D. Flickinger, M. Macken, and N. Wiegand, eds., Stanford, California (1982).

Thompson, F. B., and B. H. Thompson [1975] "Practical Natural Language Processing: The REL System as Prototype," in Advances in Computers 13, M. Rubinoff and M. C. Yovits, eds. (Academic Press, New York, New York, 1975).

Waltz, D. [1975] "Natural Language Access to a Large Data Base: An Engineering Approach," Advance Papers of the Fourth International Joint Conference on Artificial Intelligence, pp. 868-872, Tbilisi, Georgia, USSR (September 1975).

Warren, D. H. [1981] "Efficient Processing of Interactive Relational Database Queries Expressed in Logic," Proc. Seventh International Conference on Very Large Data Bases, pp. 272-283, Cannes, France (September 1981).

Woods, W. A., R. M. Kaplan, and B. N. Webber [1972] "The Lunar Sciences Natural Language Information System," BBN Report 2378, Bolt Beranek and Newman, Cambridge, Massachusetts (1972).

Hendrix, G. G., et al. [1978] "Developing a Natural Language Interface to Complex Data," ACM Transactions on Database Systems, Vol. 3, No. 2, pp. 105-147 (June 1978).

Hendrix, G. G. and Haas, N. [1982] "Learning by Being Told: Acquiring Knowledge for Information Management," to appear in Machine Learning, R. S. Michalski, J. Carbonell, and T. Mitchell, eds.
(Tioga Publishing Co., Palo Alto, California, 1982).

Konolige, K. G. [1981] "A Metalanguage Representation of Relational Databases for Deductive Question-Answering Systems," Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pp. 496-503, Vancouver, British Columbia, Canada (August 24-28, 1981).

Moore, R. C. [1981] "Problems in Logical Form," Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, pp. 117-124, Stanford University, Stanford, California (June 29-July 1, 1981).

Prince, E. [1982] "The Simple Futurate: Not Simply Progressive Futurate Minus Progressive," Meeting of the Chicago Linguistics Society, Chicago, Illinois (April 1982).
THEORETICAL/TECHNICAL ISSUES IN NATURAL LANGUAGE ACCESS TO DATABASES

S. R. Petrick
IBM T. J. Watson Research Center

INTRODUCTION

In responding to the guidelines established by the session chairman of this panel, three of the five topics he set forth will be discussed. These include aggregate functions and quantity questions, querying semantically complex fields, and multi-file queries. As we will make clear in the sequel, the transformational apparatus utilized in the TQA Question Answering System provides a principled basis for handling these and many other problems in natural language access to databases.

In addition to considering some subset of the chairman's five problems, each of the panelists was invited to propose and choose one issue of his/her own choosing. If time and space permitted, I would have chosen the subject of extensibility of natural language systems to new applications. In light of existing restrictions, however, I have chosen a more tractable problem to which I have given some attention and in whose treatment I am interested; this is the translation of quantified relational calculus expressions to a formal query language such as SQL.

AGGREGATE FUNCTIONS AND QUANTITY QUESTIONS

Questions such as "How many employees are in the sales department?" must be mapped into three radically different database query language expressions depending on how the database is set up. It may be appropriate to retrieve a pre-stored total number of employees from a NUMBER-OF-EMPLOYEES field of a DEPARTMENT file, or to count the number of records in an EMPLOYEE file that have the value SALES in the DEPARTMENT field, or, if departments are broken down into offices with which are associated the total numbers of employees employed therein, to total the values of the NUMBER-OF-EMPLOYEES field in all the records for offices in the sales department.

In the TQA System there are a number of different levels of representation of a given query. The grammar which assigns structure to a query has some core components which are essentially application-independent (e.g., the cyclic and postcyclic transformations) and has other components that are application-dependent (e.g., portions of the lexicon and precyclic transformations). Surface structures are mapped by the application-independent postcyclic and cyclic transformations into a relatively deep structural level which is referred to as the underlying structure level. In this representation, sentence nodes are expanded into a verb followed by a sequence of noun phrases, and the representation of reference is facilitated by the use of logical variables X1, X2, .... The underlying structure corresponding to the previously cited example sentence would be something like the following (suppressing details):

[Tree diagram not reproducible here: a sentence node dominating the verb LOCATED and the noun phrases ((WH SOME MANY) (EMPLOYEE X1)) and (SALES DEPARTMENT).]

Now, depending on feature information associated with the lexical items in the two NP's, application-specific precyclic transformations can be formulated to map this underlying structure into any of three query structures that directly reflect the three data structures and corresponding formal queries previously discussed. Rather than sketching query structures that could be produced for this example, let me be more specific by substituting the actual treatment of two similar sentences currently treated by the TQA System land-use application. These are the sentences:

(1) "How many parking lots are there in ward 1 block 2?"

(2) "How many parking spaces are there in ward 1 block 2?"
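Before turning to the treatment of (1) and (2), the three mappings just described for the employee question can be illustrated schematically. The sketch below is purely illustrative (toy data invented for the purpose; the file and field names follow the text, not any actual TQA application):

# (a) a pre-stored total in a DEPARTMENT file;
# (b) a count of records in an EMPLOYEE file;
# (c) a total over per-office counts.
department = [{"NAME": "SALES", "NUMBER_OF_EMPLOYEES": 3}]
employee = [{"NAME": "Smith", "DEPARTMENT": "SALES"},
            {"NAME": "Jones", "DEPARTMENT": "SALES"},
            {"NAME": "Brown", "DEPARTMENT": "SALES"}]
office = [{"DEPARTMENT": "SALES", "NUMBER_OF_EMPLOYEES": 2},
          {"DEPARTMENT": "SALES", "NUMBER_OF_EMPLOYEES": 1}]

a = next(d["NUMBER_OF_EMPLOYEES"] for d in department if d["NAME"] == "SALES")
b = sum(1 for e in employee if e["DEPARTMENT"] == "SALES")
c = sum(o["NUMBER_OF_EMPLOYEES"] for o in office if o["DEPARTMENT"] == "SALES")

print(a, b, c)  # 3 3 3: one question, three retrieval plans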
In the current database, individual lots are identified as being parking lots by a land use code relation LUCF, which has attributes that include JACCN (parcel account number) and LUC (land use code). Parking lots have an LUC value of 460. Another relation, PARCFL, has attributes which include JACCN and JPRK (the number of parking spaces on a given parcel).

The underlying structures assigned to both these sentences are nearly identical, differing only in the lexical distinctions between "parking lot" and "parking space". The common structure is very much like that of the previously given tree structure except that PARKING_LOT or PARKING_SPACE (together with their associated features) replaces EMPLOYEE, and the second NP dominates the string "WARD 1 BLOCK 2". The feature + UNIT on a node that dominates PARKING_SPACE is not found in the corresponding structure involving PARKING_LOT, and this feature (together with a number of other structural prerequisites) triggers a pair of precyclic transformations. The action of those two transformations is roughly indicated by the following sequence of bracketed terminal strings (the actual trees together with all their features would take up much more space):

(BD LOCATED ((WH SOME MANY) (PARKING_SPACE X3)) ((WARD 1) (BLOCK 2)) BD)

TOTPUNIT ->

(BD TOTAL ((WH SOME) (THING X46)) (THE (X3 (BD PARKING_SPACE X3 ((WARD 1) (BLOCK 2)) BD))) BD)

LOTINS2 ->

(BD TOTAL ((WH SOME) (THING X46)) (THE (X3 (BD PARKING_SPACE X3 (THE ((LOT X48) (BD LOCATED X48 ((WARD 1) (BLOCK 2)) BD))) BD))) BD)

Note that the lot insertion transformation LOTINS2 has produced structure of the type which is more directly assigned to the input query, "What is the total number of parking spaces in the lots which are located in ward 1 block 2?". This structure is then further transformed by a transformation LOCATION that replaces the abstract verb LOCATED by a verb (WBLOCK in this instance) which corresponds to an existing database relation.

LOCATION ->

(BD TOTAL ((WH SOME) (THING X46)) (THE (X3 (BD PARKING_SPACE X3 (THE ((LOT X48) (BD WBLOCK ((WARD 1) (BLOCK 2)) X48 BD))) BD))) BD)

The latter structure is mapped via the TQA Knuth attribute grammar formalism into the logical form:

(setx 'X46 '(total X46 (bagx 'X3 '(setx 'X48 '(and (RELATION 'PARCFL '(JPRK JACCN) '(X3 X48) '(= =)) (RELATION 'PARCFL '(WBLOCK JACCN) '('100200 X48) '(= =)))))))

This logical form is in a set domain logical calculus to be discussed later in the paper. Roughly, it denotes the set of elements X46 such that X46 is the sum of the members of the bag (like a set, but with possible duplicate elements) of elements X3 such that a certain set is not empty, namely the set of elements X48 such that X48 is the account number (JACCN) of a parcel whose number of parking spaces (JPRK) is X3 and whose wardblock (WBLOCK) is 100200. The expression

(RELATION 'PARCFL '(JPRK JACCN) '(X3 X48) '(= =))

in the above logical form denotes the proposition that the relation formed from the PARCFL relation by projecting over the attributes JPRK and JACCN contains the tuple (X3 X48).
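The denotation just described is easy to state operationally. The following sketch (mine, not TQA's LISP; the sample tuples are invented) projects a stored relation onto the named attributes and tests membership:

PARCFL = [
    {"JACCN": "A1", "JPRK": 12, "WBLOCK": "100200"},
    {"JACCN": "A2", "JPRK": 30, "WBLOCK": "100200"},
]

def relation(rel, attrs, tup):
    # Project rel over attrs, then test whether tup is in the result.
    projected = {tuple(row[a] for a in attrs) for row in rel}
    return tuple(tup) in projected

print(relation(PARCFL, ("JPRK", "JACCN"), (12, "A1")))          # True
print(relation(PARCFL, ("WBLOCK", "JACCN"), ("100200", "A2")))  # True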
The logical form is straightforwardly translated, by means of a LISP program whose details we will not concern ourselves with, into the SQL query:

SELECT SUM(A.JPRK)
FROM PARCFL A
WHERE A.WBLOCK = '100200';

The other structure (for the sentence with PARKING_LOT) lacks the triggering feature + UNIT, and hence transformations TOTPUNIT and LOTINS2 do not apply; furthermore, the LOCATION transformation applies to the original instance of the verb LOCATED rather than the copy of LOCATED introduced by the lot insertion transformation LOTINS2 in the analysis of the previous sentence:

(BD LOCATED ((WH SOME MANY) ((PARKING_LOT 460) X3)) ((WARD 1) (BLOCK 2)) BD)

LOCATION ->

(BD WBLOCK ((WARD 1) (BLOCK 2)) ((WH SOME MANY) ((PARKING_LOT 460) X3)) BD)

This structure is mapped via the Knuth attribute grammar into the logical form:

(setx 'X48 '(quantity X48 (setx 'X3 '(and (RELATION 'PARCFL '(WBLOCK JACCN) '('100200 X3) '(= =)) (RELATION 'LUCF '(LUC JACCN) '('0460 X3) '(= =))))))

and this logical form is translated to the SQL query:

SELECT COUNT(UNIQUE A.JACCN)
FROM PARCFL A, LUCF B
WHERE A.JACCN = B.JACCN
AND B.LUC = '0460'
AND A.WBLOCK = '100200';

The points to be made with respect to this treatment are that the information indicating differential, database-specific treatment can be encoded in lexical features, and that differential treatment itself can be implemented by means of precyclic transformations which are formally of the same type that the TQA system uses to relate underlying to surface structures. The features, such as + UNIT in our example, are principled enough to permit their specification by a database administrator with the help of an on-line application customization program. (+ UNIT is also required in lexical items such as DWELLING_UNITS and STORIES.)

If the database organization had been different, simple lexical changes could have been made to trigger different sequences of transformations, resulting in structures and ultimately SQL expressions appropriate for that database organization. In this way, it would be easy to handle such database organizations as that in which the total number of parking lots and/or parking spaces is stored for each wardblock, and that in which such totals are stored for each splitblock which is included within a given wardblock.

QUERYING SEMANTICALLY COMPLEX FIELDS

In posing this problem, the session chairman pointed out that natural language query systems usually assume that the concepts represented by database fields will always be expressed in English by single words or fixed phrases. He cited as an example the query "Is John Jones a child of an alumnus?" where "child of an alumnus" is a fixed phrase expressing the binary relation with attributes APPLICANT (whose values are the names of applicants) and CHILD-OF-ALUMNUS (whose values are either T or F). He further noted that related queries such as "Is one of John Jones' parents an alumnus?" or "Did either parent of John Jones attend the college?" require some different treatment.

The approach we have taken in TQA is, insofar as possible, to provide the necessary coverage to permit all the locutions that are natural in a given application. The formalism by which this is attempted is, once again, the transformational apparatus. Transformations often coalesce queries which have the same meaning but differ substantially in their surface forms into common underlying or query structures.
There is, however, no requirement that this always be done, so such queries are sometimes mapped into logically equivalent rather than identical query structures. In either case, the transformational formalism provides a solid basis for assigning very deep semantic structures to a wide spectrum of surface sentence structures. The extent to which we have been successful in allowing broad coverage of logically equivalent alternative statements of a query is difficult to quantify, but we believe that we have done well relative to other efforts for two reasons: (1) We have made an effort to cover as many underlying relations and their surface realizations as possible in treating a given application, and (2) The transformational formalism we use is effective in providing the broad coverage which reflects all the allowable interactions between the syntactic phenomena treated by a particular grammar.

MULTI-FILE QUERIES

This problem deals with multi-file databases and the questions of which files are relevant to a given query and how they should be joined. This "problem" is one which is often raised, and which invariably reflects a quick-and-dirty approach to syntactic and semantic analysis. Within a framework such as that provided by the transformational apparatus in TQA, this problem simply doesn't arise. More accurately, it is a problem which doesn't arise if an adequate grammar is produced that assigns structure of the depth of the TQA System's query structures. This, of course, is no easy task, but it is one which is central to the transformational grammar-based approach, and its successful treatment does provide a principled basis for eliminating a number of potential difficulties such as this multi-file query problem.

To see why this is so, let us consider how, for a given query, relations are identified and joined in TQA. As we have already indicated, TQA underlying structures and query structures consist of sentence nodes which dominate a verb followed by a sequence of noun phrases. These simple structures are joined together to form a complete sentence structure through the use of additional phrase structure rules which indicate conjunction, relative clause-main clause connection, etc. Query structure verbs correspond, for the most part, to database relations, and the noun phrase arguments of those verbs correspond to attributes of their associated relations. Furthermore, query structures contain logical variables which serve the function of establishing reference, including identity of reference. Thus if the query structure assigned to a query identifies two (or more) relations which have attributes whose values are the same logical variable, we have an indication that it is those attributes over which the relations should be joined.

An example should make this clearer. Consider the query structure which TQA assigns to the sentence "What is the zone of the vacant parcels in subplanning area 410?" (We omit feature information and some structure which is irrelevant to the subsequent discussion in the structure below.)

[Tree diagram not reproducible here; the surviving fragments show NP's containing the variable X4 (with determiner THE and a NOM constituent) and restricting clauses SUBPLAN_AREA 410 X8 and LUC 910 X8.]

This structure represents the set of elements X4 such that X4 is the zone of an element of the set of lots X8 such that the land use code (LUC) of X8 is 910 and the subplanning area (SUBPLAN_AREA) of X8 is 410.
The structure is mapped in straightforward fashion by a Knuth attribute grammar translation procedure into the set domain relational calculus expression:

(setx 'X4 '(setx 'X8 '(and (RELATION 'ZONEF '(ZONE JACCN) '(X4 X8) '(= =)) (RELATION 'GEOBASE* '(SUBPLA JACCN) '('410 X8) '(= =)) (RELATION 'LUCF '(LUC JACCN) '('910 X8) '(= =)))))

Each deep (query structure) verb such as ZONE has associated with it (by means of a translation table entry) a relation, which is usually the projection of an existing database relation. Thus instead of translating a portion of the above tree to (ZONE X4 X8), an expression which is true if X4 is the zone of the parcel whose account number is X8, the translation table is used to produce

(RELATION 'ZONEF '(ZONE JACCN) '(X4 X8) '(= =))

which is true if the projection of the ZONEF relation over attributes ZONE and JACCN (account number) contains a tuple (X4 X8). The conjunction of three relations with a common JACCN attribute value of X8 indicates that the three relations are to be joined over the attribute JACCN.

There is, however, one complication in translating the relational calculus expression above into a formal query language such as SQL. The relations ZONEF and LUCF are existing database relations, but there is no relation GEOBASE* in the database giving the subplanning area of specific parcels. Instead, the PARCFL relation gives the splitblock (SBLOCK) of a given parcel (JACCN) and the GEOBASE relation gives the subplanning area (SUBPLA) of all the parcels within a given splitblock (SBLOCK).

There are at least three solutions to the problem of bridging the gap between relational calculus expressions such as this and appropriate formal query language expressions. These are:

(1) Write a precyclic database-specific splitblock insertion transformation which assigns query structure corresponding to the query, "What are the zones of the vacant parcels which are located in splitblocks in subplanning area 410?"

(2) Store information that permits replacing expressions involving virtual relations such as

(RELATION 'GEOBASE* '(SUBPLA JACCN) '('0410 X8) '(= =))

by existentially quantified expressions involving only real database relations such as:

(setx 'X111 (and (RELATION 'PARCFL '(SBLOCK JACCN) '(X111 X8) '(= =)) (RELATION 'GEOBASE '(SUBPLA SBLOCK) '('410 X111) '(= =))))

(3) Make the database administrator (DBA) responsible for providing a formal query language definition of the virtual relations produced. In this case that would take the form of defining GEOBASE* as the appropriate join of projections over GEOBASE and PARCFL.

All three solutions have been implemented in the TQA System and used in specific cases as seems appropriate. For a database system with the definitional facilities available in SQL, solution (3) is particularly attractive because it is the type of activity with which database administrators are familiar. Solutions (1) and (2) were also implemented at various times for examples such as the one in question, leading to the following SQL query:

SELECT UNIQUE A.ZONE, A.JACCN
FROM ZONEF A, GEOBASE B, PARCFL C, LUCF D
WHERE A.JACCN = C.JACCN
AND C.JACCN = D.JACCN
AND B.SBLOCK = C.SBLOCK
AND D.LUC = '0910'
AND B.SUBPLA = '4100';

(We note for the careful reader that '0910' and '4100' are not misprints, but the discussion of how such normalization can be automatically achieved from DBA declarations is outside the scope of the present paper.)
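Solution (2) amounts to a rewriting step on logical forms. A schematic sketch follows (this is not TQA's actual code; the expansion entry is invented to match the GEOBASE* example, and logical forms are rendered as nested tuples):

from itertools import count
fresh = count(100)  # supply of fresh variable names

def expand(atom):
    # atom = ('RELATION', name, attrs, args); only GEOBASE* is virtual here.
    name, attrs, args = atom[1], atom[2], atom[3]
    if name != 'GEOBASE*':
        return atom
    subpla, jaccn = args
    x = f"X{next(fresh)}"  # the existentially quantified splitblock variable
    return ('setx', x,
            ('and',
             ('RELATION', 'PARCFL', ('SBLOCK', 'JACCN'), (x, jaccn)),
             ('RELATION', 'GEOBASE', ('SUBPLA', 'SBLOCK'), (subpla, x))))

print(expand(('RELATION', 'GEOBASE*', ('SUBPLA', 'JACCN'), ('0410', 'X8'))))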
TRANSLATING QUANTIFIED RELATIONAL CALCULUS EXPRESSIONS TO FORMAL QUERY LANGUAGE EQUIVALENTS

In this section we consider a problem of our own choosing. In most of the existing relational calculus formalisms, use is made of logical variables and some type of universal and existential quantifiers. Early versions of TQA were typical in this respect. The version of TQA which was tested in the White Plains experiment, for example, made use of quantifiers FORATLEAST and FORALL whose nature is best explained by an example. The logical form assigned to the previously considered sentence was, at one time:

(setx 'X4 '(foratleast 1 'X112 (setx 'X8 '(and (RELATION 'GEOBASE* '(SUBPLA JACCN) '('410 X8) '(= =)) (RELATION 'LUCF '(LUC JACCN) '('910 X8) '(= =)))) (RELATION 'ZONEF '(ZONE JACCN) '(X4 X112) '(= =))))

This logical form denotes (roughly) the set of zones X4 such that for at least one element X112 of the set of parcels X8 which are in subplanning area 410 and have a land use code of 910, parcel X112 is in zone X4. In simple examples such as this, where only existential quantification of logical forms is involved, there is no problem in translating to a formal query language such as SQL. However, when various combinations of existential and universal quantification are involved in a logical form, the corresponding quantification-indicating constructs to be used in the formal query language translation of that logical form are not at all obvious. An examination of the literature indicates that the arguments used in establishing the completeness of query languages offer little or no guidance as to the construction of a practical translator from relational calculus to a formal query language such as SQL. Hence, the approach used in translating TQA logical forms to corresponding SQL expressions will be discussed, in the expectation of eliciting explanations of how the translation of quantification is handled in other systems.

We begin by observing that a logical form

(foratleast 1 X1 (setx X2 (f X2)) (g X1))

(which denotes the proposition that for at least one X1 which belongs to the set of elements X2 such that f(X2) is true, g(X1) is true) is equivalent to the requirement of the non-emptiness of the set

(1) (setx 'X1 '(and (f X1) (g X1)))

Similarly,

(forall X1 (setx X2 (f X2)) (g X1))

(which denotes the proposition that for all X1 in the set of elements X2 such that f(X2) is true, g(X1) is true) is equivalent to a requirement of the emptiness of the set

(2) (setx 'X1 '(and (f X1) (not (g X1))))

Conversion of expressions with universal and existential quantifiers to expressions involving only set notation and a predicate involving the emptiness of a set is then possible. The latter type of expressions are called set domain relational calculus expressions.

Fortunately, SQL provides operators EXISTS and NOT EXISTS which take as their argument an SQL SELECT expression, the type of expression into which logical forms of the type (setx 'X1 ...) are translated. A recursive call to the basic logical form-to-SQL translation facility then suffices to supply the SQL argument of EXISTS or NOT EXISTS.
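Rendered schematically (in notation of my own, not TQA's), the two reductions are a small rewriting step; the renaming of the bound set variable is elided here:

def reduce_quantifier(form):
    op = form[0]
    if op == 'foratleast':  # (foratleast 1 X1 (setx X2 f) g)
        _, _, x1, (_, _x2, f), g = form
        # non-emptiness of the set {X1 | f and g}, i.e. SQL EXISTS
        return ('exists', ('setx', x1, ('and', f, g)))
    if op == 'forall':      # (forall X1 (setx X2 f) g)
        _, x1, (_, _x2, f), g = form
        # emptiness of the set {X1 | f and not g}, i.e. SQL NOT EXISTS
        return ('not-exists', ('setx', x1, ('and', f, ('not', g))))
    return form

print(reduce_quantifier(
    ('forall', 'X1', ('setx', 'X2', ('f', 'X2')), ('g', 'X1'))))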
It is worth noting that, under certain circumstances which we will not explore here, the "(setx X2" portion of an embedded expression (setx 'X2 (f X2)) can be pulled forward, creating a prefix-normal-form-like expression of the type (setx 'X1 (setx 'X2 ...)), and the logical variables that can be pulled all the way forward correspond to information implicitly requested in English queries. The values which satisfy these variables should also be printed to satisfy users' implicit requests for information. For example, in our previously considered query

"What are the zones of the vacant parcels in subplanning area 410?"

one probably wants the parcels identified in addition to their zones. Translation to the form of set domain relational calculus used in TQA then provides a basis for either taking the initiative in automatically printing these implicitly requested values or for engaging in a dialog with the user to determine whether they should be printed.

As a final example of this method of translating quantified logical forms, consider the sentence

"What gas stations are in a ward in which there is no drug store?"

The logical form initially assigned by TQA to this sentence is

(setx 'X2 '(and (RELATION 'LUCF '(LUC JACCN) '('0553 X2) '(= =)) (foratleast 1 'X81 (setx 'X7 (forall 'X80 (setx 'X13 (RELATION 'LUCF '(LUC JACCN) '('0591 X13) '(= =))) (RELATION 'PARCFL '(WARD JACCN) '(X7 X80) '(≠ =)))) (RELATION 'PARCFL '(WARD JACCN) '(X81 X2) '(= =)))))

which is translated to the set domain logical form:

(setx 'X2 '(setx 'X7 '(and (RELATION 'PARCFL '(WARD JACCN) '(X7 X2) '(= =)) (not (setx 'X13 '(and (RELATION 'PARCFL '(WARD JACCN) '(X7 X13) '(= =)) (RELATION 'LUCF '(LUC JACCN) '('0591 X13) '(= =))))) (RELATION 'LUCF '(LUC JACCN) '('0553 X2) '(= =)))))

The latter form translates easily into the SQL expression:

SELECT UNIQUE A.JACCN, A.WARD
FROM PARCFL A, LUCF B
WHERE A.JACCN = B.JACCN
AND B.LUC = '0553'
AND NOT EXISTS
(SELECT UNIQUE C.JACCN
FROM PARCFL C, LUCF D
WHERE C.WARD = A.WARD
AND C.JACCN = D.JACCN
AND D.LUC = '0591');
CONTEXT-FREENESS AND THE COMPUTER PROCESSING OF HUMAN LANGUAGES

Geoffrey K. Pullum
Cowell College
University of California, Santa Cruz
Santa Cruz, California 95064

ABSTRACT

Context-free grammars, far from having insufficient expressive power for the description of human languages, may be overly powerful, along three dimensions: (1) weak generative capacity: there exists an interesting proper subset of the CFL's, the profligate CFL's, within which no human language appears to fall; (2) strong generative capacity: human languages can be appropriately described in terms of a proper subset of the CF-PSG's, namely those with the ECPO property; (3) time complexity: the recent controversy about the importance of a low deterministic polynomial time bound on the recognition problem for human languages is misdirected, since an appropriately restrictive theory would guarantee even more, namely a linear bound.

0. INTRODUCTION

Many computationally inclined linguists appear to think that in order to achieve adequate grammars for human languages we need a bit more power than is offered by context-free phrase structure grammars (CF-PSG's), though not a whole lot more. In this paper, I am concerned with the defense of a more conservative view: that even CF-PSG's should be regarded as too powerful, in three computationally relevant respects: weak generative capacity, strong generative capacity, and time complexity of recognition. All three of these matters should be of concern to theoretical linguists; the study of what mathematically definable classes human languages fall into does not exhaust scientific linguistics, but it can hardly be claimed to be irrelevant to it. And it should be obvious that all three issues also have some payoff in terms of certain computationally interesting, if rather indirect, implications.

1. WEAK GENERATIVE CAPACITY

Weak generative capacity (WGC) results are held by some linguists (e.g. Chomsky (1981)) to be unimportant. Nonetheless, they cannot be ignored by linguists who are interested in setting their work in a context of (even potential) computational implementation (which, of course, some linguists are not). To paraphrase Montague, we might say that linguistically (as opposed to psycholinguistically) there is no important theoretical difference between natural languages and high-level programming languages. Mediating programs (e.g. a compiler or interpreter), of considerable complexity, will be needed for the interpretation of computer input in either Prolog or Japanese. In the latter case the level of complexity will be much higher, but the assumption is that we are talking quantitatively, not qualitatively. And if we are seriously interested in the computational properties of either kind of language, we will be interested in their language-theoretic properties, as well as properties of the grammars that define them and the parsers that accept them.

The most important language-theoretic class considered by designers of programming languages, compilers, etc. is the context-free languages (CFL's). (Ginsburg (1980, 7) goes so far as to say on behalf of formal language theorists, "We live or die on the context-free languages.") The class of CFL's is very rich. Although there are simply definable languages well known to be non-CF, linguists often take CFL's to be non-CF in error. Several examples are cited in Pullum and Gazdar (1982).
For another example, see Dowty, Wall and Peters (1980, p. 81), where exercise 3 invites the reader to prove a certain artificial language non-CF. The exercise is impossible, for the language is a CFL, as noted by William H. Baxter (personal communication to Gerald Gazdar).

From this point on, it will be useful to be able to refer to certain types of formal language by names. I shall use the terms defined in (1) through (3), among others.

(1) Triple Counting Languages: languages that can be mapped by a homomorphism onto some language of the form {a^n b^n c^n | n ≥ 1}

(2) String Matching Languages: languages that can be mapped by a homomorphism onto some language of the form {xx | x is in some infinite language A}

(3) String Contrasting Languages: languages that can be mapped by a homomorphism onto some language of the form {xcy | x and y are in some infinite language A and x ≠ y}

Programming languages are virtually always designed to be CF, except that there is a moot point concerning the implications of obligatory initial declaration of variables as in ALGOL or Pascal, since if variables (identifiers) can be alphanumeric strings of arbitrary length, a syntactic guarantee that each variable has been declared is tantamount to a syntax for a string matching language. The following view seems a sensible one to take about such cases: languages like ALGOL or Pascal are CF, but not all ALGOL or Pascal programs compile or run. Programs using undeclared variables make no sense either to the compiler or to the CPU. But they are still programs, provided they conform in all other ways to the syntax of the language in question, just as a program which always goes into an infinite loop and thus never gives any output is a program. Aho and Ullman (1977, 140) take such a view:

the syntax of ALGOL...does not get down to the level of characters in a name. Instead, all names are represented by a token such as id, and it is left to the bookkeeping phase of the compiler to keep track of declarations and uses of particular names.
For instance, {!nbn~n ~ 0} is not profligate, because it has two terminal symbols but there is a grammar for it that has only one nonterminal symbol, namely S. (The rules are: (S --> aSb, S --> e}.) However, profligate CFL's do exist. There are even regular languages that are profligate: a simple example (due to Christopher Culy) is (A* + ~*). More interesting is the fact that some string contrasting languages as defined above are profli- gate. Consider the string contrasting language over the vocabulary {~, k, K} where A = (A + ~)*. A string xcv in (~ + b)*~(~ + A)* will be in this language if any one of the following is met: (a) ~ is longer than Z; (b) K is shorter than ~; (c) ~ is the same length as ~ but there is an such that the ith symbol of K is distinct from the ith symbol of ~. The interesting Condition here is (c). The grammar has to generate, for all ~ and for all pairs <u, v> of symbols in the terminal vocabulary, all those strings in (a + b)*c(a + b)* such that the ~th sym- bol is ~ and the ~th symbol after ~ is Z. There is no bound on l, so recursion has tO be involved. But it must be recursion through a category that preserves a record of which symbol is crucially going to be deposited at the ~th position in the terminal string and mismatched with a distinct sym- bol in the second half. A CF-PSG that does this can be constructed (see Pullum and Gazdar 1982, 478, for a grammar for a very similar language). But such a grammar has to use recursive nontermi- nals, one for each terminal, to carry down informa- tion about the symbol to be deposited at a certain point in the string. In the language just given there are only two relevant terminal symbols, but if there were a thousand symbols that could appear in the ~ and ~ strings, then the vocabulary of recursive nonterminals would have to be increased in proportion. (The second clause in the defini- tion of profligacy makes it irrelevant whether there are other terminals in the language, like g in the language cited, that do not have to partici- pate in the recursive mechanisms just referred to.) For a profligate CFL, the argument that a CF-PSG is a cumbersome and inelegant form of grammar might well have to be accepted. A CF-PSG offers, in some cases at least, an appallingly inelegant hypothesis as to the proper description of such a language, and would be rejected by any linguist or program- mer. The discovery that some human language is profligate would therefore provide (for the first time, I claim) real grounds for a rejection of CF- PSG's on the basis of strong generative capacity (considerations of what structural descriptions are assigned to strings) as opposed to weak (what language is generated). However, no human language has been shown to be a profligate CFL. There is one relevant argument in the literature, found in Chomsky (1963). The argument is based on the nonidentity of consti- tuents allegedly required in comparative clause constructions like (4). (4) She is more competent as [a designer of programming languages] than he is as [a designer of microchips]. Chomsky took sentences like (5) to be ungrammati- cal, and thus assumed that the nonidentity between the bracketed phrases in the previous example had to be guaranteed by the grammar. (5) She is more competent as [a designer of programming languages] than he is as [a designer of programming languages|. 
Chomsky took this as an argument for non-CF-ness in English, since he thought all string contrasting languages were non-CF (see Chomsky 1963, 378-379), but it can be reinterpreted as an attempt to show that English is (at least) profligate. (It could even be reconstituted as a formally valid argument that English was non-CF if supplemented by a demonstration that the class of phrases from which the bracketed sequences are drawn is not only" infinite but non-regular; of. Zwicky and Sadock.) However, the argument clearly collapses on empir- ical grounds. As pointed out by Pullum and Gazdar (1982, 476-477), even Chomsky now agrees that strings like (5) are grammatical (though they need a contrastive context and the appropriate intona- tion to make them readily acceptable to infor- mants). Hence these examples do not show that there is a homomorphism mapping English onto some profligate string contrasting language. The interesting thing about this, if it is correct, is that it suggests that human languages not only never demand the syntactic string com- parison required by string matching languages, they never call for syntactic string comparision over infinite sets of strings at all, whether for symbol-by-symbol checking of identity (which typi- cally makes the language non-CF) or for specifying a mismatch between symbols (which may not make the language non-CF, but typically makes it profli- gate). There is an important point about profligacy that" I should make at this point. My claim that human languages are non-profligate entails that each human language has at least one CF-PSG in which the nonterminal vocabulary has cardinality strictly less than the terminal vocabulary, but not that the best granzaar to implement for it will necessarily meet this condition. The point is important, because the phrase structure grammars employed in natural language processing generally have complex nouterminals consisting of sizeable feature bundles. It is not uncommon for a large natural language processing system to employ thirty . or forty binary features (or a rough equivalent in terms of multi-valued features), i.e. about as many features as are employed for phonological descript- ion by Chomsky and Halle (19681. The GPSG system described in Gawron et al. (1982) has employed features on this sort of scale at all points in its development, for example. Thirty or forty binary features yields between a billion and a trillion logically distinguishable nonterminals (if all values for each feature are compatible with all combinations of values for all other features). Because economical techniques for rapid checking of relevant feature values are built into the parsers normally used for such grammars, the size of the potentially available nonterminal vocabulary is not a practical concern. In principle, if the goal of capturing generalizations and reducing the size of the grammar formulation were put aside, the nonter- minal vocabulary could be vastly reduced by replac- ing rule schemata by long lists of distinct rules expanding the same nonterminal. Naturally, no claim has been made here that pro- fligate CFL's are computationally intractable. No CFL's are intractable in the theoretical sense, and intractability in practice is so closely tied to details of particular machines and programming environments as to be pointless to talk about in terms divorced from actual measurements of size for grammars, vocabularies, and address spaces. 
I have been concerned only to point out that there is an interesting proper subset of the infinite CFL's within which the human languages seem to fall. One further thing may be worth pointing out. The kind of string contrasting languages I have been concerned with above are strictly nondeter- ministic. The deterministic CFL's (DCFL's) are closed under complementation. But the cor~ I _nt of (6) {xcvJx and ~ are in (& + ~)* and ~ # ~} in (~ + b)*E(& + ~)* is (7a), identical to (7b), a string matching language. (7)a. {xcvl~ and ~ are in (~ + b)* and x = ~} b. {xcx[x is in (a + b)*} If (7a) [=(Yb)] is non-CF and is the complement of (6), then (6) is not a DCFL. [OPEN PROBLEM: Are there any nonregular profligate DCFL's?] 2. STRONG GENERATIVE CAPACITY I now turn to a claim involving strong genera- tive capacity (SGC). In addition to claiming that human languages are non-profligate CFL's, I want to suggest that every human language has a linguisti- cally adequate grammar possessing the Exhaustive Constant Partial Ordering (ECPO) property of Gazdar and Pullum (1981). A grammar has this property if there is a single partial ordering of the nontermi- hal vocabulary which no right hand side of any rule violates. The ECPO CF-PSG's are a nonempty proper subset of the CF-PSG's. The claim that human languages always have ECPO CF-PSG's is a claim about the strong generative capacity that an appropriate theory of human language should have--- one of the first such claims to have been seriously advanced, in fact. It does not affect weak generative capacity; Shieber (1983a) proves that every CFL has an ECPO grammar. It is always poss- ible to construct an ECPO grammar for any CFL if one is willing to pay the price of inventing new nonterminals ad hoc to construct it. The content of the claim lies in the fact that linguists demand independent motivation for the nonterminals they postulate, so that the possibility of creating new ones just to guarantee ECPO-ness is not always a reasonable one. [OPEN PROBLEM: Could there be a non-profligate CFL which had #(N) < #T (i.e. nonterminal vocabulary strictly smaller than terminal vocabulary) for at least one of its non-ECPO grammars, but whose ECPO grammars always had #(N) > #(T)?] When the linguist's criteria of evaluation are kept in mind, it is fairly clear what sort of facts in a human language would convince linguists to abandon the ECPO claim. For example, if English had PP - S" order in verb phrases (explain to him ~a~ he'll have to leave) but had S" - PP order in adjectives (so that lucky for us we found you had the form lucky we found you for us), the grammar of English would not have the ECPO property. But such facts appear not to turn up in the languages we know about. The ECPO claim has interesting consequences relating to patterns of constituent order and how these can be described in a fully general way. If a gr~r has the ECPO property, it can be stated in what Gazdar and Pullum call ID/LP format, and this renders numerous significant generalizations elegantly capturable. There are also some poten- tially interesting implications for parsing, stu- died by Shieber (1983a), who shows that a modified Earley algorithm can be used to parse ID/LP format gr----mrs directly° One putative challenge to any claim that CF- PSG's can be strongly adequate descriptions for human languages comes from Dutch and has been dis- cussed recently by Bresnan, Kaplan, Peters, and Zaenen (1982). 
Dutch has constructions like (7) dat Jan Pier Marie zag leren zwemmen that Jan Pier Marie saw teach swim "that Jan saw Pier teach Marie to swim" These seem to involve crossing dependencies over a domain of potentially arbitrary length, a confi- guration that is syntactically not expressible by a CF-PSG. In the special case where the dependency involves stringwise ~dentity, a language with this sort of structure reduces to something like {xx[~ is in ~*}, a string matching language. However, analysis reveals that, as Bresnan et el. accept, the actual dependencies in Dutch are not syntactic. Grammaticality of a string like (7) is not in gen- eral affected by interchanging the NP's with one another, since it does not matter to the ~th verb what the ith NP might he. What is crucial is that (in cases with simple transitive verbs, as above) the ~th predicate (verb) takes the interpretation of the i-lth noun phrase as its argument. Strictly, this does not bear on the issue of SGC in any way that can be explicated without making reference to semantics. What is really at issue is whether a CF-PSG can assign syntactic qtructures to sentences of Dutch in a way that supports semantic interpretation. Certain recent work within the framework of gen- eralized phrase structure gran~mar suggests to me that there is a very strong probability of the answer being yes. One interesting development is to be found in Culy (forthcoming), where it is shown that it is possible for a CFL-inducing syntax in ID/LP format to assign a "flat" constituent structure to strings like Pier Marie za~ leren zwemmen ('saw Pier teach Marie to swim'), and assign them the correct semantics. Ivan Sag, in unpublished work, has developed a different account, in which strings like za~ leren zwemmen ('saw teach to swim') are treated as com- pound verbs whose semantics is only satisfied if they are provided with the appropriate number of NP sisters. Whereas Culy has the syntax determine the relative numbers of NP's and verbs, Sag is explor- ing the assumption that this is unnecessary, since the semantic interpretation procedure can carry this descriptive burden. Under this view too, there is nothing about the syntax of Dutch that makes it non-CF, and there is not necessarily any- thing in the grammar that makes it non-ECPO. Henry Thompson "also discusses the Dutch problem from the GPSG standpoint (in this volume). One other interesting line of work being pursued (at Stanford, like the work of Culy and of Sag) is due to Carl Pollard (Pollard, forthcoming, provides an introduction). Pollard has developed a general- ization of context-free grammar which is defined not on trees but on "headed strings", i.e. strings with a mark indicating that one distinguished ele- ment of the string is the "head", and which com- bines constituents not only by concatenation but also by "head wrap". This operation is analogous to Emmon Bach's notion "right (or left) wrap" but not equivalent to it. It involves wrapping a con- stituent ~ around a constituent B so that the head is to the left (or right) of B and the rest of ~ is to the right (or left) of ~. Pollard has shown that this provides for an elegant syntactic treat- ment of the Dutch facts. I mention his work because I want to return to make a point about it in the immediately following section. 3. 
TIME COMPLEXITY OF RECOGNITION The time complexity of the recognition problem (TCR) for human languages is like WGC questions in being decried as irrelevant by some linguists, but again, it is hardly one that serious computational approaches can legitimately ignore. Gazdar (1981) has recently reminded the linguistic community of this, and has been answered at great length by Berwick and Weinberg (1982). Gazdar noted that if transformational grammars (TG's) were stripped of all their transformations, they became CFL- inducing, which meant that the series of works showing CFL's to have sub-cubic recognition times became relevant to them. gerwick and Weinberg's paper represents a concerted eff6rt to discredit any such suggestion by insisting that (a) it isn't only the CFL's that have low polynomial recognition time results, and (b) it isn't clear that any asymptotic recognition time results have practical implications for human language use (or for com- puter modelling of it). Both points should be quite uncontroversial, of course, and it is only by dint of inaccurate attri- bution that Berwick and Weinberg manage to suggest that Gazdar denies them. However, the two points simply do not add up to a reason for not being con- cerned with TCR results. Perfectly straightforward considerations of theoretical restrictiveness dic- tate that if the languages recognizable in polyno- mial time are a proper subset of those recognizable in exponential time (or whatever), it is desirable to explore the hypothesis that the human languages fall within the former class rather than just the latter. Certainly, it is not just CFL's that have been shown to be efficiently recognizable in determinis- tic time on a Turing machine. Not only every context-free grammar but also every context- sensitive grammar that can actually be exhibited generates a language that can be recognized in deterministic linear time on a two-tape Turing machine. It is certainly not the case that all the context-sensitive languages are linearly recogniz- able; it can be shown (in a highly indirect way) that there must be some that are not. But all the examples ever constructed generate linearly recog- nizable languages. And it is still unknown whether there are CFL's not linearly recognizable. It is therefore not at all necessary that a human language should be a CFL in order to be effi- ciently recognizable. But the claims about recog- nizability of CFL's do not stop at saying that by good fortune there happens to be a fast recognition algorithm for each member of the class of CFL's. The claim, rather, is that there is ~ single, universal algorithm that works for every member of the class and has a low deterministic polynomial time complexity. That is what cannot be said of the context-sensitive languages. Nonetheless, there are well-understood classes of gr~-m-rs and automata for which it can be said. For example, Pollard, in the course of the work mentioned above, has shown that if one or other of left head wrap and right head wrap is permitted in the theory of generalized context-free grammar, recognizability in deterministic time ~5 is guaranteed, and if both left head wrap and right head wrap are allowed in gr---.-rs (with individual gr-----rs free to have either or both), then in the general case the upper bound for recognition time is ~7o These are, while not sub-cubic, still low deterministic polynomial time bounds. 
Pollard's system contrasts in this regard with the lexical- functional gra~ar advocated by Bresnan etal., which is currently conjectured to have an NP- complete recognition problem. I remain cautious about welcoming the move that Pollard makes because as yet his non-CFL-inducing syntactic theory does not provide an explanation for the fact that human languages always seem to turn out to be CFL's. It should be pointed out, however, that it is true of every grammatical theory that not every grammar defined as possible is held to be likely to turn up in practice, so it is not inconceivable that the gr-----rs of human languages might fall within the CFL-inducing proper subset of Pollard-style head gra=mars. Of course, another possibility is that it might turn out that some human language ultimately pro- vides evidence of non-CY-ness, and thus of a need for mechanisms at least as powerful as Pollard's. Bresman etal. mention at the end of their paper on Dutch a set of potential candidates: the so called "free word order" or "nonconfigurational" languages, particularly Australian languages like Dyirbal and Walbiri, which can allegedly distribute elements of a phrase at random throughout a sen- tence in almost any order. I have certain doubts about the interpretation of the empirical material on these languages, but I shall not pursue chat here. I want instead to show that, counter to the naive intuition that wild word order would neces- sarily lead to gross parsing complexity, even ram- pantly free word order in a language does not necessarily indicate a parsing problem that exhi- bits itself in TCR terms. Let us call transposition of adjacent terminal symbols scrambling, and let us refer to the closure of a language ~ under scrambling as the scramble of 2- The scramble of a CFL (even a regular one) can he non-CF. For example, the scramble of the regu- lar language (abe)* is non-CF, although (abc)* itself is regular. (Of course, the scramble of a CFL is not always non-CF. The scramble of a*b*c* is (~, b, !)*, and both are regular, hence CF.) Suppose for the sake of discussion that there is a human language that is closed under scrambling (or has an appropriately extractable infinite subset that is). The example just cited, the scramble of (abc)*, is a fairly clear case of the sort of thing that might be modeled in a human language that was closed under scrambling. Imagine, for example, the case of a language in which each transitive clause had a verb (~), a nominative noun phrase (~), and an accusative noun phrase (~), and free word order permitted the ~, b, and ~ from any number of clauses to occur interspersed in any order throughout the sentence. If we denote the number of ~'s in a string Z by Nx(Z), we can say ~nat the scramble of (abc)* is (8). (8){~J~ is in (~, b, &)* and N_a(~) = N b(~) = N=(~)} Attention was first drawn to this sort of language by Bach (1981), and I shall therefore call it a Bach lan~uaze. What TCR properties does a Bach language have? The one in (8), at least, can be shown to be recognizable in linear time. The proof is rather trivial, since it is just a corollary of a previously known result. Cook (1971) shows that any language that is recognized by a two-way deter- ministic pushdown stack automaton (2DPDA) is recog- nizable in linear time on a Turing machine. In the Appendix, I give an informal description of a 2DPDA that will recognize the language in (81. Given this, the proof that (8) is linearly recognizable is trivial. 
Thus even if my WGC and SGC conjectures were falsified by discoveries about free word order languages (which I consider that they have not been), there would still be no ground for tolerating theories of grammar and parsing that fail to impose a linear time bound on recognition. And recent work of Shieber (1983b) shows that there are interesting avenues in natural language parsing to be explored using deterministic context-free parsers that do work in linear time.

In the light of the above remarks, some of the points made by Berwick and Weinberg look rather peculiar. For example, Berwick and Weinberg argue at length that things are really so complicated in practical implementations that a cubic bound on recognition time might not make much difference; for short sentences a theory that only guarantees an exponential time bound might do just as well. This is, to begin with, a very odd response to be made by defenders of TG when confronted by a theoretically restrictive claim. If someone made the theoretical claim that some problem had the time complexity of the Travelling Salesman problem, and was met by the response that real-life travelling salesmen do not visit very many cities before returning to head office, I think theoretical computer scientists would have a right to be amused. Likewise, it is funny to see practical implementation considerations brought to bear in defending TG against the phrase structure backlash, when (a) no formalized version of modern TG exists, let alone being available for implementation, and (b) large phrase structure grammars are being implemented on computers and shown to run very fast (see e.g. Slocum 1983, who reports an all-paths, bottom-up parser actually running in linear time using a CF-PSG with 400 rules and 10,000 lexical entries).

Berwick and Weinberg seem to imply that data permitting a comparison of CF-PSG with TG are available. This is quite untrue, as far as I know. I therefore find it nothing short of astonishing to find Chomsky (1981, 234), taking a very similar position, affirming that because the size of the grammar is a constant factor in TCR calculations, and possibly a large one,

The real empirical content of existing results... may well be that grammars are preferred if they are not too complex in their rule structure. If parsability is a factor in language evolution, we would expect it to prefer "short grammars"--such as transformational grammars based on the projection principle or the binding theory...

TG's based on the "projection principle" and the "binding theory" have yet to be formulated with sufficient explicitness for it to be determined whether they have a rule structure at all, let alone a simple one, and the existence of parsing algorithms for them, of any sort whatever, has not been demonstrated.

The real reason to reject a cubic recognition-time guarantee as a goal to be attained by syntactic theory construction is not that the quest is pointless, but rather that it is not nearly ambitious enough a goal. Anyone who settles for a cubic TCR bound may be settling for a theory a lot laxer than it could be. (This accusation would be levellable equally at TG, lexical-functional grammar, Pollard's generalized context-free grammar, and generalized phrase structure grammar as currently conceived.) Closer to what is called for would be a theory that defines human grammars as some proper subset of the ECPO CF-PSG's that generate infinite, nonprofligate, linear-time recognizable languages.
Just as the description of ALGOL-60 in BNF formalism had a galvanizing effect on theoretical computer science (Ginsburg 1980, 6-7), precise specification of a theory of this sort might sharpen quite considerably our view of the computational issues involved in natural language processing. And it would simultaneously be of considerable linguistic interest, at least for those who accept that we need a sharper theory of natural language than the vaguely-outlined decorative notations for Turing machines that are so often taken for theories in linguistics.

ACKNOWLEDGEMENT

I thank Chris Culy, Carl Pollard, Stuart Shieber, Tom Wasow, and Arnold Zwicky for useful conversations and helpful comments. The research reported here was in no way supported by Hewlett-Packard.

References

Aho, A. V. and J. D. Ullman (1977). Principles of Compiler Design. Addison-Wesley.

Bach, E. (1981). Discontinuous constituents in generalized categorial grammars. NELS 11, 1-12.

Berwick, R., and A. Weinberg (1982). Parsing efficiency and the evaluation of grammatical theories. LI 13.165-191.

Bresnan, J. W.; R. M. Kaplan; S. Peters; and A. Zaenen (1982). Cross-serial dependencies in Dutch. LI 13.613-635.

Chomsky, N. (1963). Formal properties of grammars. In R. D. Luce, R. R. Bush, and E. Galanter, eds., Handbook of Mathematical Psychology II. John Wiley.

Chomsky, N. (1981). Knowledge of language: its elements and origins. Phil. Trans. of the Royal Soc. of Lond. B 295, 223-234.

Cook, S. A. (1971). Linear time simulation of deterministic two-way pushdown automata. Proceedings of the 1971 IFIP Conference, 75-80. North-Holland.

Culy, C. D. (forthcoming). An extension of phrase structure rules and an application to natural language. MA thesis, Department of Linguistics, Stanford University.

Dowty, D. R.; R. Wall; and P. S. Peters (1980). Introduction to Montague Semantics. D. Reidel.

Gawron, J. M., et al. (1982). The GPSG linguistic system. Proc. 20th Ann. Meeting of ACL, 74-81.

Gazdar, G. (1981). Unbounded dependencies and coordinate structure. LI 12.155-184.

Gazdar, G. and G. K. Pullum (1981). Subcategorization, constituent order, and the notion "head". In M. Moortgat, H. v. d. Hulst, and T. Hoekstra (eds.), The Scope of Lexical Rules, 107-123. Foris.

Ginsburg, S. (1980). Methods for specifying formal languages--past-present-future. In R. V. Book, ed., Formal Language Theory: Perspectives and Open Problems, 1-47. Academic Press.

Pollard, C. J. (forthcoming). Generalized context-free grammars, head grammars, and natural language.

Pullum, G. K. (1982). Free word order and phrase structure rules. NELS 12, 209-220.

Pullum, G. K. and Gazdar, G. (1982). Natural languages and context-free languages. Ling. and Phil. 4.471-504.

Shieber, S. M. (1983a). Direct parsing of ID/LP grammars. Unpublished, SRI, Menlo Park, CA.

Shieber, S. M. (1983b). Sentence disambiguation by a shift-reduce parsing technique. In this volume.

Slocum, J. (1983). A status report on the LRC machine translation system. Conf. on Applied Nat. Lang. Proc., 166-173. ACL, Menlo Park, CA.

Zwicky, A. M. and J. M. Sadock (forthcoming). A note on xy languages. Submitted to Ling. and Phil.

Appendix: a 2DPDA that recognizes a Bach language

The language {w | w is in (a + b + c)* and Na(w) = Nb(w) = Nc(w)} is accepted by a 2DPDA with a single symbol S in its stack vocabulary, {a, b, c} as input vocabulary, four states, and the following instruction set.
State 1: move rightward, reading a's, b's, and c's, and adding an S to the stack each time a appears on the input tape. On encountering the right end marker in state 1, go to state 2.

State 2: move left, popping an S each time a b appears. On reaching the left end marker in state 2 with empty stack (which will mean Na(w) = Nb(w)), go to state 3.

State 3: move right, pushing an S on the stack for every a encountered. On reaching the right end marker in state 3, go to state 4.

State 4: move left, popping an S for each c encountered. On reaching the left end marker in state 4 with empty stack (which will mean Na(w) = Nc(w)), accept.
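Read as a program, the machine above amounts to two paired scans. The following minimal sketch (in Python, purely illustrative; since the stack vocabulary has only the single symbol S, the stack is modeled as a counter, which is equivalent here) simulates the four states directly.

```python
def accepts(w: str) -> bool:
    """Simulate the four-state 2DPDA described in the Appendix."""
    if any(ch not in "abc" for ch in w):
        return False
    stack = 0                      # number of S symbols on the stack
    for ch in w:                   # state 1: scan right, push S for each a
        if ch == "a":
            stack += 1
    for ch in reversed(w):         # state 2: scan left, pop S for each b
        if ch == "b":
            if stack == 0:
                return False
            stack -= 1
    if stack != 0:                 # require Na(w) = Nb(w)
        return False
    for ch in w:                   # state 3: scan right, push S for each a
        if ch == "a":
            stack += 1
    for ch in reversed(w):         # state 4: scan left, pop S for each c
        if ch == "c":
            if stack == 0:
                return False
            stack -= 1
    return stack == 0              # require Na(w) = Nc(w)
```

Each pass is a single scan over the input, so the whole simulation runs in linear time, as Cook's theorem guarantees for any 2DPDA.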
A FOUNDATION FOR SEMANTIC INTERPRETATION

Graeme Hirst
Department of Computer Science
Brown University
Providence, RI 02912

Abstract

Traditionally, translation from the parse tree representing a sentence to a semantic representation (such as frames or procedural semantics) has always been the most ad hoc part of natural language understanding (NLU) systems. However, recent advances in linguistics, most notably the system of formal semantics known as Montague semantics, suggest ways of putting NLU semantics onto a cleaner and firmer foundation. We are using a Montague-inspired approach to semantics in an integrated NLU and problem-solving system that we are building. Like Montague's, our semantics are compositional by design and strongly typed, with semantic rules in one-to-one correspondence with the meaning-affecting rules of a Marcus-style parser. We have replaced Montague's semantic objects, functors and truth conditions, with the elements of the frame language Frail, and added a word sense and case slot disambiguation system. The result is a foundation for semantic interpretation that we believe to be superior to previous approaches.

1. Introduction

By semantic interpretation we mean the process of mapping from a syntactically analyzed sentence of natural language to a representation of its meaning. We exclude from semantic interpretation any consideration of discourse pragmatics; rather, discourse pragmatics operate upon the output of the semantic interpreter. We also exclude syntactic analysis; the integration of syntactic and semantic analysis becomes very messy when complex syntactic constructions are considered, and, moreover, it is our observation that those who argue for the integration of the two are usually arguing for subordinating the role of syntax, a position we reject. This is not to say that parsing can get by without semantic help; indirect object finding, and prepositional phrase and relative clause attachment, for example, often require semantic knowledge. Below we will show that syntax and semantics may work well together while remaining distinct modules.

This work was supported by the Office of Naval Research under contract number N00014-79-C-0592.

Research on semantic interpretation in artificial intelligence goes back to Woods's dissertation (1967, 1968), which introduced procedural semantics in a natural-language front-end for an airline reservation system. Woods's system had rules with patterns that, when they matched part of the parsed input sentence, contributed a string to the semantic representation of the sentence. This string was usually constructed from the terminals of the matched parse tree fragment. The strings were combined to form a procedure call that, when evaluated, entered or retrieved the appropriate database information. This approach is still the predominant one today, and even though it has been refined over the years, semantic interpretation remains perhaps the least understood and most ad hoc area of natural language understanding (NLU).[1] However, recent advances in linguistics, most notably Montague semantics (Montague 1973; Dowty, Wall and Peters 1981), suggest ways of putting NLU semantic interpretation on a cleaner and firmer foundation than it now is. In this paper, we describe such a foundation.[2]

[1] It is also philosophically controversial. For discussion, see Fodor 1978, Johnson-Laird 1978, Fodor 1979, and Wilks 1982.
[2] Ours is not the only current work with this goal; in Section 7 we discuss other similarly motivated work.

2. Montague semantics

In his well-known "PTQ" paper (Montague 1973), Richard Montague presented the complete syntax and semantics for a small fragment of English.
Although it was limited in vocabulary and syntactic complexity, Montague's fragment dealt with such important semantic problems as opaque contexts, different types of predication with the word be, and the "the temperature is 90" problem;[3] for details of these, see Dowty, Wall and Peters (1981). Montague's semantic rules correspond to what we have been calling semantic interpretation. That is, in conjunction with a syntactic process, they produce a semantic representation, or translation, of a sentence. There are four important properties of Montague semantics that we will examine here. Below, we will carry three of these properties over into our own semantics.

The first property, the one that we will later drop, is that for Montague, semantic objects, the results of the semantic translation, were such things as individual concepts (which are functions to individuals from the cartesian product of points in time and possible worlds), properties of individual concepts, and functions of functions of functions of functions. At the top level, the meaning of a sentence was a truth condition relative to a possible world and point in time. These semantic objects were represented by expressions of intensional logic; that is, instead of translating English directly into these objects, a sentence was first translated to an expression of intensional logic, for which, in turn, there existed an interpretation in terms of these semantic objects.

Second, Montague had a strong theory of types for his semantic objects: a set of types that corresponded to types of syntactic constituents. Thus, given a particular syntactic category, such as proper noun or adverb, Montague was able to say that the meaning of a constituent of that category was a semantic object of such and such a type.[4] Montague's system of types was recursively defined, with entities, truth values and intensions as primitives, and other types defined as functions from one type to another in such a manner that if syntactic category X was formed by adding category Y to category Z, then the type corresponding to Z would be functions from senses of the type of Y to the type of X.[5]

[3] That is, to ensure that "The temperature is 90 and the temperature is rising" cannot lead to the inference that "90 is rising".
[4] To be precise: the semantic type of a proper noun is a set of properties of individual concepts; that of an adverb is a function between sets of individual concepts (Dowty et al 1981: 183, 187).
[5] For example, the semantic type of prepositions is functions mapping senses of the type of noun phrases to the semantic type of prepositional phrases.

Third, in Montague's system the syntactic rules and semantic rules are in one-to-one correspondence. Each time a particular syntactic rule applies, so does the corresponding semantic rule; while the one operates on some syntactic elements to create a new element, the other operates on the corresponding semantic objects to create a new object that will correspond to the new syntactic element. Thus the two sets of rules operate in tandem.

Fourth, Montague's semantics is compositional, which is to say that the meaning of the whole is a systematic function of the meaning of the parts.
At first glance this sounds trivial; if the noun phrase my pet penguin denotes by itself some particular entity, namely the one sitting on my lap as I write this paper, then we do not expect it to refer to a different entity when it is embedded in the sentence I love my pet penguin, and a semantic system that did not reflect this would be a loser indeed. Yet there are alternatives to compositional semantics.

The first alternative is that the meaning of the whole is a function of not just the parts but also the situation in which the sentence is uttered. For example, the possessive in English is highly dependent upon pragmatics; the phrase Nadia's penguin could refer, in different circumstances, to the penguin that Nadia owns, to the one that she is carrying but doesn't actually own, or to the one that she just bet on at the penguin races. Our definition above of semantic interpretation excluded this sort of consideration, but this should not be regarded as uncontroversial.

The second alternative to compositional semantics is that the meaning of the whole is not a systematic function of the parts in any reasonable sense of the word. This is exemplified by the interpretation of the word depart in Woods's original system, which varied greatly depending on the preposition it dominated (Woods 1967:A-43-A-46). For example, the interpretation of the sentence:

AA-57 departs from Boston.

is, not unreasonably:

depart (aa-57, boston)

That is, the semantic object into which depart is translated is the procedure depart. (AA-57 is an airline flight.) However, the addition of a prepositional phrase changes this; Table 1 shows the interpretation of the same sentence after various prepositional phrases have been appended. For example, the addition of to Chicago changes the translation of depart to connect, though the intended sense of the word is clearly unchanged.[6] This is necessitated by the particular set of database primitives that Woods used, selected for their being "atomic" (1967:7-4-7-11) rather than for promoting compositionality.

Rules in the system are able to generate non-compositional representations because they have the power to set an arbitrarily complex parse tree as their trigger, and to return an arbitrary representation that could modify or completely ignore the components of the parse trees they are supposed to be interpreting.[7] For example, a rule can say (1967:A-44):

If you have a sentence whose subject is a flight, whose verb is leave or depart, and which has two (or more) prepositional phrases modifying the verb, one with from and a place name, the other with at and a time, then the interpretation is equal (dtime (a, b), c), where a is the flight, b is the place, and c is the time.

Thus while Woods's semantics could probably be made reasonably compositional simply by appropriate adjustment of the procedure calls into which sentences are translated, it would still not be compositional by design the way Montague semantics is.

[6] We have simplified a little here in order to make our point. In fact, sentences like those in Table 1 with prepositional phrases will actually cause the execution of two semantic rules: one for the complete sentence, and one for the sentence it happens to contain, AA-57 departs from Boston.
The resulting interpretation will be the conjunction of the output from each rule (Woods 1967:9-5):

AA-57 departs from Boston to Chicago.
depart (aa-57, boston) and connect (aa-57, boston, chicago)

Woods leaves it open (1967:9-7) as to how the semantic redundancy in such expressions should be handled, though one of his suggestions is a filter that would remove conjuncts implied by others, giving, in this case, the interpretation shown in Table 1.

[7] Nor is there anything that prevents the construction of rules that would result in conjunctions with conflicting, rather than merely redundant, terms.

TABLE 1. NONCOMPOSITIONALITY IN WOODS'S SYSTEM

AA-57 departs from Boston.
  depart (aa-57, boston)
AA-57 departs from Boston to Chicago.
  connect (aa-57, boston, chicago)
AA-57 departs from Boston on Monday.
  dday (aa-57, boston, monday)
AA-57 departs from Boston at 8:00am.
  equal (dtime (aa-57, boston), 8:00am)
AA-57 departs from Boston after 8:00am.
  greater (dtime (aa-57, boston), 8:00am)
AA-57 departs from Boston before 8:00am.
  greater (8:00am, dtime (aa-57, boston))

Although Montague semantics has much to recommend it, it is not possible, however, to implement it directly in a practical NLU system, for two reasons. The first is that Montague semantics as currently formulated is computationally impractical. It throws around huge sets, infinite objects, functions of functions, and piles of possible worlds with great abandon. Friedman, Moran and Warren (1978a) point out that in the smallest possible Montague system, one with two entities and two points of reference, there are, for example, 2^(2^512) elements in the class of possible denotations of prepositions, each element being a set containing 2^512 ordered pairs.[8]

The second reason we can't use Montague semantics directly is that truth-conditional semantics are not useful in AI; AI uses knowledge semantics (Tarnawsky 1982) in which semantic objects tend to be symbols or expressions in a declarative or procedural knowledge representation system. Moreover, truth-conditional semantics really only deals with declarative sentences (Dowty et al 1981:13) (though there has been work attempting to extend Montague's work to questions; e.g. Hamblin 1973); a practical NLU system needs to be able to deal with commands and questions as well as declarative sentences.

[8] Despite this problem, Friedman et al (1978b, 1978c) have implemented Montague semantics computationally by using techniques for maintaining partially specified models. However, their system is intended as a tool for understanding Montague semantics better, rather than as a usable NLU system (1978b:26).

There have, however, been attempts to take the intensional logic that Montague uses as an intermediate step in his translations, and give it a new interpretation in terms of AI-type semantic objects, thus preserving all other aspects of Montague's approach; see, for example, Hobbs and Rosenschein 1977, and Smith's (1979) objections to their approach. There has also been interest in using the intensional logic itself (or something similar) as an AI representation[9] (e.g. Moore 1981). But while it may be possible to make limited use of intensional logic expressions,[10] there are many problems that need to be solved before intensional logic or other flavors of logical forms could support the type of inference and problem solving that AI requires of its semantic representations; see Moore 1981 for a useful discussion.
Moreover, Gallin (1975) has shown Montague's intensional logic to be incomplete. (See also the discussion in Section 7 of work using logical forms.)

Nevertheless, it is possible to use many aspects of Montague's approach in semantics in AI. The semantic interpreter that we describe below maintains three of the four properties of Montague semantics that we described above, and we therefore refer to it as "Montague-inspired".

TABLE 2. TYPES IN THE ABSITY SEMANTIC INTERPRETER

BASIC TYPES
Frame (a): (penguin ?x), (love ?x)
Slot: color, agent
Frame determiner (b): (the ?x), (a ?x)

OTHER TYPES
Slot-filler pair = slot + frame statement: (color=red), (agent=(the ?x (fish ?x)))
Frame descriptor = frame + slot-filler pair*: (penguin ?x (owner=Nadia)), (love ?x (agent=Ross) (patient=Nadia)), (dog ?x)
Frame statement [or instance (c)] = frame determiner + frame descriptor: (the ?x (penguin ?x (owner=Nadia))), (a ?x (love ?x (agent=Ross) (patient=Nadia))), (the ?x (dog ?x)), penguin87 [an instance]

Notes:
(a) The question-mark prefix indicates a variable. Whenever a free variable in a frame is bound to a variable in a frame determiner, a unique new name is generated for that variable and its bindings. In this paper, we shall assume for simplicity that variable names are magically "correct" from the start.
(b) Do not be misled by the fact that frames and frame determiners look similar. They are actually very different: the first is a static data structure; the second is a frame retrieval procedure.
(c) An instance is the result of evaluating a frame statement in Frail. It is a symbol that denotes the object referenced by the frame statement. To Absity, there is no distinction between the two; an instance can be used wherever a frame statement can.

3. Our semantic interpreter

Our semantic interpreter is a component of a system that uses a frame-like representation for both story comprehension and problem-solving. The system includes a frame language, named Frail, a problem solver, and a discourse pragmatics component; further details may be found in Charniak 1981, Wong 1981a, and Wong 1981b. The natural language front-end includes Paragram, a deterministic parser based on that of Marcus (1980). Unlike Marcus's parser, Paragram has two types of rule: base phrase structure rules and transformational rules. It is also able to parse ungrammatical sentences; it always uses the rule that matches best, even if none match exactly. Paragram is described in Charniak 1983.

[9] Ironically, Montague regarded intensional logic merely as a convenience in specifying his translation, and one that was completely irrelevant to the substance of his semantic theories.
[10] Godden (1981) in fact uses them for simple translation between Thai and English.

The semantic interpreter is named Absity (for reasons too obscure to burden the reader with). As we mentioned above, it retains three of the four properties of Montague semantics that we discussed. The property that we have dropped is, of course, truth conditionality and Montague's associated treasury of semantic objects. We have replaced them with AI-style semantics, and our own repertory of objects,
which are components of the frame language Frail.[11] We do, however, retain a strong typing upon our semantic objects; that is, each syntactic category has an associated semantic type. Table 2 shows the types of components of Frail, how they may be combined, and examples of each; the nature of the components listed will become clearer with the examples in the next section. Table 3 gives the component of Frail that corresponds to each syntactic type.

TABLE 3. TYPE CORRESPONDENCES IN ABSITY

SYNTACTIC TYPE: SEMANTIC TYPE
Major sentence: Frame statement, instance
Sentence: Frame descriptor
Noun: Frame
Adjective: Slot-filler pair
Determiner: Frame determiner
Noun phrase: Frame statement, instance
Preposition: Slot name
Prepositional phrase: Slot-filler pair
Verb: (Action) frame
Adverb: Slot-filler pair
Auxiliary: Slot-filler pair
Verb phrase: Frame descriptor
Clause end: Frame determiner

As a consequence of the kind of semantic objects we are dealing with, the system of types is not recursively defined in the Montague style, but we retain the idea that the type of a semantic object should be a function of the types of the components of that object.

We have also carried over from Montague semantics the operation of syntactic and semantic rules in tandem upon corresponding objects. However, it is not possible to maintain the one-to-one correspondence of rules when we replace Montague's simple syntax with the much larger English grammar of the Paragram parser. This is because in Montague's system each syntactic rule either creates a new node from old ones--for example, forming an intransitive verb phrase from a transitive verb and a noun phrase--or places a new node under an existing one--such as adding an adverb to an existing intransitive verb phrase. These are actions that clearly have semantic counterparts. When we start to add movement rules such as passivization and dative movement to the grammar, we find ourselves with rules that have no clear semantic counterpart; indeed with rules that, it is often claimed (e.g. Chomsky 1965:132), leave the meaning of a sentence quite unchanged.

We therefore distinguish between parser rules that should have corresponding semantic rules and those that should not. As the above discussion suggests, rules that attach nodes are the ones that have semantic counterparts. In Paragram, these are the base structure rules. For this subset of the syntactic rules, semantic rules run in tandem, just as in Montague's semantics.[12]

It is a consequence of the above properties of our semantic interpreter that we have also retained the property of compositionality by design. This follows from the uniform typing; the correspondence between syntactic and semantic rules that maintains this uniformity; and there being a unique semantic object corresponding to each word of English[13] (see Dowty et al 1981:180-181). Unlike those of Woods's (1967) airline reservation system front-end discussed in Section 2, our semantic rules are very weak: they cannot change or ignore the components upon which they operate, nor can more than one rule volunteer an interpretation for any node of the parse tree. The power of the system comes from the nature of the semantic objects and the syntax-directed application of semantic rules, rather than from the semantic rules themselves.

[11] Although the object that represents a Sentence is a procedure call in Frail upon a knowledge base, this is not procedural semantics in the strict Woods sense, as the meaning inheres not in the procedures but in the objects they manipulate.
[12] In her synthesis of transformational syntax with Montague semantics, Partee (1973, 1975) observes that the semantic rule corresponding to many transformations will simply be the identity mapping.
[13] We show in Section 6 how this may be reconciled with lexical ambiguity.

4. Examples

Some examples will make our semantic interpreter clearer. First, let's consider a simple noun phrase, the book. From Table 3, the semantic type for the determiner the is a frame determiner function, in this case (the ?x), and the type for the noun book is a kind of frame, here (book ?x). These are combined
The power of the system comes from the nature of the semantic ob- jects and the syntax-directed application of semantic rules, rather than from the semantic rules themselves. 4. Examples Some examples will make our semantic interpreter clearer. First, let's consider a simple noun phrase, the book. From Table 3, the semantic type for the determiner She is a frame determiner function, in this case (the ?x), and the type for the noun book is a kind of frame, here (book ?x). These are combined 12In her synthesis of transformationa.l syntax with Monta6,ue acrostics, Partee (1973, 1975) observes that the semantic rule corresponding to many transformations will simply be the iden- tity mapping. 13We show in Section 6 how this may be reconciled with lexical ambiguity. 68 in the canonical way--the frame name is added as an argument to the frame determiner function--and the result, (the ?x (book ?x)), is a Frail frame state- ment (which evaluates to an instance) that represents the unique book referred to. 14 A descriptive adjective corresponds to a slot-filler pair; for example, red is represented by (color=red), where color is the name of a slot and red is a frame instance, the name of a frame. A slot-filler pair can be added as an argument to a frame, so the red book would have the semantic interpretation (the ?x (book ?x (color=red))). Now let's consider a complete sentence: Nadia bought the book from a store in the mall. Table 4 shows the representation for each component of the sentence; note that the basic noun phrases have already been formed in the manner described above. Note also that we have inserted the pseudo- prepositional subject and object markers susJ and osJ, which are then treated as if they were real prepositions; see Hirer and Charniak 1982 or Hirst 1983 for details of this. For simplicity, we assume that each word is unambiguous (we discuss our disambigua- tion procedures in Section 6); we also ignore the tense cn the verb. Table 5 shows the next four stages in the interpretation. First, noun phrases and their preposi- tions are combined, forming slot-filler pairs. Then the prepositional phrase in the mall can be attached to a store (since a noun phrase, being a frame, can have a slot-filler pair added to it), and the prepositional phrase from a store in the marl is formed. The third stage shown in the Table is the attachment of the slot- filler pairs for the three top-level prepositional phrases to the frame representing the verb. Finally, the period, which is translated as a frame determiner function, causes instantiation of the buy frame, and the trans- lation is complete. 5. Semantic help for the parser As we mentioned earlier, any parser will occasionally need semantic help. In Marcus-type parsers, this need occurs in rules that have the form "If semantics prefers 14Note ~hat it is the responsibility" of the frame system to deter- mine with the help of the pragmatics module which one of the books that it m~ty know about is the correct one in context. TABLE 4. ABSITY EXAMPL E WORD OR PHRASE SEMANTIC OBJECT SUBJ agent Nadia (the ?x (thing ?x (propername="Nadla"))) bought (buy ?x) oBJ pa~len~ the book (the ?y (book ?y)) from source a store (a ?z (el;ore ?z)) in loca~lon the mall (the ?w (mall ?w)) • [period I (a ?u) X over Y then do X'; otherwise do Y". To answer such questions, we have a Semantic Enquiry Desk r, hat operates upon the same semantic objects as the seman- tic interpreter. 
Because these objects are components of the Frail frame language, the Enquiry Desk can use the full retrieval and inference power of Frail in answering the enquiry.

TABLE 5. ABSITY EXAMPLE (CONTINUED)

SUBJ Nadia: (agent=(the ?x (thing ?x (propername="Nadia"))))
OBJ the book: (patient=(the ?y (book ?y)))
in the mall: (location=(the ?w (mall ?w)))
a store in the mall: (a ?z (store ?z (location=(the ?w (mall ?w)))))
from a store in the mall: (source=(a ?z (store ?z (location=(the ?w (mall ?w))))))
Nadia bought the book from a store in the mall: (buy ?u (agent=(the ?x (thing ?x (propername="Nadia")))) (patient=(the ?y (book ?y))) (source=(a ?z (store ?z (location=(the ?w (mall ?w)))))))
Nadia bought the book from a store in the mall.: (a ?u (buy ?u (agent=(the ?x (thing ?x (propername="Nadia")))) (patient=(the ?y (book ?y))) (source=(a ?z (store ?z (location=(the ?w (mall ?w))))))))

6. Word sense disambiguation

One problem that Montague semantics does not address is that of word disambiguation. Rather, there is assumed to exist a function that maps each word to a unique sense, and the semantic formalism operates on the values of this function.[15] Clearly, however, a practical NLU system must take account of word sense ambiguity, and so we must add a disambiguation facility to our interpreter. Fortunately, the word translation function allows us to add this facility transparently. Instead of simply mapping a word to an invariant unique sense, the function can map it to whatever sense is correct for a particular instance.

Our disambiguation facility is called Polaroid Words.[16] Each word in the system is represented by a separate process that, by talking to other processes and by looking at paths made by spreading activation in the knowledge base, figures out the word's meaning. Each word is like a self-developing photograph that can be manipulated by the semantic interpreter even while the picture is forming; and if some other process needs to look at the picture (e.g. if the Semantic Enquiry Desk has an "if semantics prefers" question from the parser), then a half-developed picture may provide enough information. Exactly the same process, without the spreading-activation phase, is used to disambiguate case roles as well. Polaroid Words are described more fully in Hirst and Charniak 1982 and Hirst 1983.

[15] This is not quite true. Specified unique translations are given for proper names and for a few important function words, such as the and be; see Montague 1973[2]:261, or Dowty et al 1981:192ff.
[16] Polaroid is a trademark of the Polaroid Corporation.

7. Comparison with other work

Our approach to semantic interpretation may usefully be compared with other recent work with similar goals to ours. One such project is that of Jones and Warren (1982), who attempt a conciliation between Montague semantics and a conceptual dependency representation (Schank 1975). Their approach is to modify Montague's translation from English to intensional logic so that the resulting expressions have a canonical interpretation in conceptual dependency. They do not address such issues as extending Montague's syntax, nor whether their approach can be extended to deal with more modern Schankian representations (e.g. Schank 1982). Nevertheless, their work, which they describe as a hesitant first step, is similar in spirit to ours, and it will be interesting to see how it develops.
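(Returning briefly to the mechanics of Section 4: the composition illustrated in Tables 4 and 5 can be glossed with a minimal sketch. Representing Frail objects as Python strings, and all the function names below, are assumptions of the sketch, not features of Frail or Absity.)

```python
def frame_descriptor(frame, var, *slot_fillers):
    """frame + slot-filler pairs -> frame descriptor,
    e.g. (store ?z (location=...))."""
    slots = "".join(f" ({slot}={filler})" for slot, filler in slot_fillers)
    return f"({frame} {var}{slots})"

def frame_statement(determiner, var, descriptor):
    """frame determiner + frame descriptor -> frame statement,
    e.g. (the ?w (mall ?w))."""
    return f"({determiner} {var} {descriptor})"

# Composing "a store in the mall" as in Tables 4 and 5:
the_mall = frame_statement("the", "?w", frame_descriptor("mall", "?w"))
a_store = frame_statement(
    "a", "?z", frame_descriptor("store", "?z", ("location", the_mall)))
print(a_store)
# (a ?z (store ?z (location=(the ?w (mall ?w)))))
```

Each combination step is type-driven in the way Table 2 specifies: a slot-filler pair is built from a slot and a frame statement, attached to a frame descriptor, and the result is closed off by a frame determiner.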
Important recent work that extends the syntactic complexity of Montague's work is that on generalized phrase structure grammar (GPSG) (Gazdar 1982). Such grammars combine a complex transformation-free syntax with Montague's semantics, the rules again operating in tandem. Gawron et al (1982) have implemented a database interface based on GPSG. In their system, the intensional logic of the semantic component is replaced by a simplified extensional logic, which, in turn, is translated into a query for database access. Schubert and Pelletier (1982) have also sought to simplify the semantic output of a GPSG to a more "conventional" logical form; and Rosenschein and Shieber (1982) describe a similar translation process into extensional logical forms, using a context-free grammar intended to be similar to a GPSG.[17]

The GPSG approaches differ from ours in that their output is a logical form rather than an immediate representation of a semantic object; that is, the output is not tied to any representation of knowledge. In Gawron et al's system, the database provides an interpretation of the logical form, but only in a weak sense, as the form must first pass through another (apparently somewhat ad hoc) translation and disambiguation process.[18] Nor do these approaches provide any semantic feedback to the parser. These differences, however, are independent of the choice of GPSG; it should be easy, at least in principle, to modify these approaches to give Frail output, or, conversely, to replace Paragram in our system with a GPSG parser.[19]

[17] Rosenschein and Shieber's semantic translation follows parsing rather than running in parallel with it, but it is strongly syntax-directed, and is, it seems, isomorphic to an in-tandem translation that provides no feedback to the parser.
[18] Gawron et al produce all possible trees and their translations for the input sentence, and then throw away any that don't make sense to the database.

The PSI-KLONE system of Bobrow and Webber (1980a, 1980b) also has a close coupling between syntax and semantics. Rather than operating in tandem, though, the two are described as "cascaded", with an ATN parser handing constituents to a semantic interpreter, which is allowed to return them (causing the ATN to back up) if the parser's choice is found to be semantically untenable. Otherwise, a process of incremental description refinement is used to interpret the constituent; this relies on the fact that the syntactic constituents are represented in the same formalism, KL-ONE (Brachman 1978), as the system's knowledge base. The semantic interpreter uses projection rules to form an interpretation in a language called JARGON, which is then translated into KL-ONE. Bobrow and Webber are particularly concerned with using this framework to determine the combinatoric relationship between quantifiers in a sentence.

Bobrow and Webber's approach addresses several of the issues that we do, in particular the relationship between syntax and semantics. The information feedback to the parser is similar to our Semantic Enquiry Desk, though in our system, because the parser is deterministic, semantic feedback cannot be conflated with syntactic success or failure. Both approaches rely on the fact that the objects manipulated are objects of a knowledge representation that permits appropriate judgments to be made, though in rather a different manner.

Hendler and Phillips (1981; Phillips and Hendler 1982) have implemented a control structure for NLU
based on message passing, with the goal of running syntax and semantics in parallel and providing semantic feedback to the parser. A "moderator" translates between syntactic constructs and semantic representations. However, their approach to interpretation is essentially ad hoc (James Hendler, personal communication), and they do not attempt to put syntactic and semantic rules in strict correspondence, nor type their semantic objects.

[19] Our choice of Paragram was largely pragmatic--it was available--and does not represent any particular commitment to transformational grammars.

None of the work mentioned above addresses issues of lexical ambiguity as ours does, though Bobrow and Webber's incremental description refinement could possibly be extended to cover it. Also, Gawron et al have a process to disambiguate case roles in the logical form after it is complete, which operates in a manner not dissimilar to the case-slot part of Polaroid Words.

8. Conclusion

We have described a new approach to semantic interpretation, one suggested by the semantic formalism of Richard Montague. We believe this work to be a clean and elegant foundation for semantic interpretation, in contrast to previous ad hoc approaches. At the moment, though, the work is only a foundation; the test of a foundation is what can be constructed on top of it. We do not expect the construction to be unproblematic; here are some of the problems we will have to solve.

First, the approach is not just compositional but almost too compositional. At present, noun phrases are taken to be invariably and unalterably specific and extensional, that is, to imply the existence of the unique entity or set of entities that they specify. In English, this is not always correct. A sentence such as:

Nadia owns a unicorn.

implies that a unicorn exists, but this is not true of:

Nadia talked about a unicorn.

which also has a non-specific reading. Montague's solution to this problem does not seem easily adaptable
P~ociedin~s of ~&e First Anr~a~ lVationa~l Confer- ence 0~ Artificial Intelligence, Stanford, August 1980. 316-323. BRACHMAN, Ronald J (1978). =A structural pArs.ditto for rep- resenting knowledge. ~ Report 3605, Bolt, Beranek and NewmaJ~, Cambridge, MA 02138. May 1978. CHARNIAK, Eugene (1981). "A common representation for problem-solving and langua.ge-comprehension information." Artificial [ntelli.gence, 16(3), July 1981, 225-255. CHARNIAK, Eugene (1983). ~A p~urser with ~omething for everyone. ~ [11 in: King, M~gaxet(editor). P~rsingnatt~ral language, London: Academic Press, 1983. [2] Tech~ic~l report CS-70, Department of Computer Science, Brown University, Providence, R[ 02912. April 1981. CHOMSKY, Avr~m Noa.m (1965). Aspects of the theOr~l Of synta=. Ca.mbridge, MA: The MIT Press, 1965. 20He h~ndled such sentences by having two distinct parsee, one for each rea.dinK; a mea.ning postulate equa.tes the repre- sentations of the two parses where the verb ma.kes it appropriate to do so. DOWTY, David R; WALL, Robert Eugene and PETERS, Staaley (1981). [ntrodt*ction to Montague semantics (-~- Synthese l~n~u~ge library 11). Dordrecht: D. Reidel, 1981. FODOR, Jerry AlAn (1978). "Tom Swift and his procedural gr~nd~other." Cognition, 6(3), September 1978, 229-247. FODOR, Jerry AlaJ1 (1970). "In reply to Philip Johnson-Laird." Cognition, T(1), Maxch 1979, 03-95. FRIEDMAN, Joyce; MORAN, Dougla.s Bailey ~nd WARREN, De.rid Scott (1978a.). "Explicit finite intensional models for PTQ. ~ [I] American jo~rn¢l o/ computational linguistics, 1978:1, microfiche 74, 3-22. [2] Paper N-3, Computer S~udies in Formal Linguistics, Depzrtment of Computer amd Communion.lion Sciences, University of Michigan, Ann Arbor, MI 48109. FRIEDMAN, Joyce; MORAN, Douglas Ba~ley and WARREN, Da.vid Scott, (1978b). =An interpretation system for Montague Kr,~mmar." [11 American yournal of compura- tional linguistics, 1078:1, microfiche 74, 23-96. (21 Pa.per N-4, Computer Studies in Formal Linguistics, Department of Computer ~nd Communication Sciences, University of Michigan , Ann Arbor, MI 48109. FRIEDMAN, Joyce; MORAN, Douglas Bailey a~d WARREN, D~vid Scott, (1978c). =EvMuatmg English sentences in a. logical mode[: A process version of Mont~gue ~am- max." {1] P~oceed~ngs of the 7th International Cor~ferene¢ on Computational LingtListics, Bergen, Norway, August 1978. [2} Paper N-15, Computer Studies in Forma.l Linguistics, Depactment of Computer cud Communication Sciences, University of Michigan, Ann Arbor, MI ~8109. August 1978. GALLIN, Daniel (1975). [ntensional and ~igher-order modal logic ~uith ~pplic~tions ~o Montague sernan~t'cs (~ North- Holland Mathematics Series 9). Amsterdam: North- Holland, 1975. {Revised from the a.uthor's doctoral dissertation, Department of Mathematics, University of Caaifornia, Berkeley, September 1972.] GAWRON, Jea.n Mark; KING, Jona.thon J; LAMP[NG, John; LOEBNER, Egon E; PAULSON, E Anne; PULLUM, Geoffrey K; SAG, Ivan A and WASOW, Thomas A (1982). "Processing English with a. genera.lized phrase structure grammar. ° Ii[ Proceedings, gOCh Ann~cL~ 3v[ee~t'ng of ihe Association for Computational Linguistics, Toronto, June 1982. 74--81. [21 Technical note CSL-82-5, Computer Science Labors.tory, Hewiett-Packard, Palo Alto, CA 94304. April ~982. CAZDAR, Gera~d (1982). "Phrase structure grammar." in: JACOBSON, Pa.uline Ida and PULLUM, Geoffrey K. The ~attLre Of syntactic representation. Dordrecht: D. Reidel, 1982. GODDEN, Kurt Sterling (1081). 
Montag~¢ grgmmar ~nd machine er~nslation between English and Thai. Doctoral dissertation, Department of Linguistics, University of K ansa.s, 1981. HAMBLIN, C L (1973). =Questions in Montague English." [1[ Foundations o/ langt~=g¢, 10(1), May 1073, 41-53. {2] in Partee 1976, 247-259. HENDLER, Ja.mes Alexander and PHILLIPS, Brian (1981). =A flexible control structure for the conceptual analysis of natural l~nguage using message-passing. ~ Technical report TR-08-81-03, Computer Science Labors.tory, Texas Instruments Incorporated, Define, TX 75266, 1981. HIRST, Graeme (1983). A fot~ndation for se~nantic interp~'eta- tlon, ~ith toord ~nd c~s¢ disambigt~ation. Doctoral disser- tation, Department of Computer Science, Brown University (forthcoming t. HIRST, Graeme and CHARNIAK, Eugene (1982). "Word 72 sense ~nd case slot disamb/g~ation." Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, August 1982. 95-98. HOBBS, Jerry Robert and ROSENSCHEIN, Stanley Joshua (1977). =Making computational sense of Monta4~ue'l inten- sional logic." Artificial Intelligence, 9(3), December 1977, 287-306. JOHNSON-LAIRD, Philip Nicholas (1978). "What's wrong with Grandma's guide to procedural semantics: A reply to Jerry Fodor. ~ Cognition, 6(3), September 1978, 249-261. JONES, Mark A ~nd WARREN, David Scott (1982). "Concep- tual dependency and Montagne ~rammar: A step toward conciliation." Proceedings of the N~tion¢l Conference on Artificial [n$elligence, Pittsburgh, Augnst 1982. 79-83. LEHNERT, Wendy Grace ~nd RINGLE, Martin H (1982). Strategies for natural language processing. Hlllsdale, N J: Lawrence Erlbaum Associates, 1982. MARCUS, Mitchell P (1980). A theory of sgntactic recognition for natv.ral l~nguag¢. Cambridge, MA: The MIT Preu, 1980. MONTAGUE, Richard (1973). "The proper treatment of quantification in ordinary English." [I I in: HINTIKKA, Ka~rlo Jaakko Jnhani; MORAVCSIK, Julius Matthew Emil ~nd SUPPES, Patrick Colonel (editors). Approaches to ~tur~l lang.~age: Proceedings of tAe 1970 Stanford workshop on grammar ~nd semantics. Dordrecht: D. Reide|, 1973. 221-242. [2] in: THOMASON, Richmond Hunt (editor). Formal philosophll: Selected papers of Richard Mont~gae. New Haven: Yale University Press, 1974. 247-270. MOORE, Robert C (1981). "Problems in logical form. ~ l~'ocsedings, 191h Annual Meeting of th, e Association for Computational Linguistics, Stanford, July 1981. 117-124. PARTEE, Baabara Hall (1973). "Some transformational exten- sions of Montagne ~ammar." [11 J'o~rnal of Philosophical Logic, 2, 1973, 509--534. [2] in Psatee 1976, 51-76. PARTEE, Barbara Hall (1975). "Montagne grammar and trxns- form&tional 6rammar." Linguistic [nquirF, 6(2), Spring 1975, 203-300. PARTEE, Barbara Hall (editor) (1976) . Montag~,e grammar. New York: Academic Press, 1976. PHILLIPS, Brian and HENDLER, James Alexander (1982). =A message-passing control structure for text understandin8." in: HORECK~', J~n (editor). COLING 8£: Proceedings of the NintA International Conference on Computational Linguistics, Prague, July 5--10, 198£ (= North-Holland Linguistic Series 47). Amsterdam: North-Holland, 1982. 307-312. ROSENSCHEIN, Stanley Joshua ~nd SHIEBER, Stuart M (1982). "Translating English into logical form." P~oceedin~s, ~O~h Annual Meeting of the Association for Computational Linguistics, Toronto, June 1982. i-8. SCHANK, Roger Carl (editor) (1975). Conceptual information processing (~ Fundamental studies in computer science 3). Amsterdam: North-Holland, 1975. SCHANK, Roger Carl (1982). 
"Reminding and memory or- ganization: An introduction to MOPs." [I] in Lehnert and Ringle 1982, 455-494. [2] Research Report 170, Department of Computer Science, Yale University, New Haven, CT 06520. December 1979. SCHUBERT, Lenhaxt K and PELLETIER, Francis Jeffry (1982). "From English to logic: Context-free computation of 'conventional' logical translation." American Journal of Computational Linguistics, 8(1), January-March 1982, 26--44. SMITH, Brian Cantwell (1979). "Intensionality in computa- tional contexts." Unpublished MS, Artificial intelligence Laboratory, Massachusetts Institute of Technology, Cam- bridge, MA 02139. December 1979. TARNAWSKY; George Orest (1982). Kno~oledgs semantics. Doctoral dissertation, Department of Linguistics, New York University, 1982. WILKS, Yorick Alexander (1982). "Some thoughts on proce- dural semantics." [I] in Lehnert and Ringle 1982, 494- 518. {21 Technical report CSCM-I, Cognitive Studies Centre, University of Essex, Wivenhoe Park, Colchester. November 1980. WONG, Douglas (1981~). =Language comprehension in a problem solver." l~roceedings of the 7th International Joint Conference on Artificial [ntelligence, Vancouver, August 1981. 7-12. WONG, Douglas (1981b). On the ~nifleation of language comprehension ~oitA problem solving. Doctoral dissertae tion [ava41able as technical report CS-78], Department of Computer Science, Brown University, 1981. WOODS, William Aaron Jr (1967). Semantics for a qucs~ion- ~nstosring system. {1] Doctoral dissertation, Harvard University, August 1967. [2] reprinted as a volume in the series Outstanding dissertations in the Computer Sciences, New York: Garland Publishing, 1979. WOODS, William Aaron Jr (1968). "Procedural semantics for a question-answering machine." AFIPS conference proceed- ings, 33 (Fall Joint Computer Conference), 1968. 457-471. 73 | 1983 | 10 |
TELEGRAM: A GRAMMAR FORMALISM FOR LANGUAGE PLANNING

Douglas E. Appelt
Artificial Intelligence Center
SRI International
Menlo Park, California

0. Abstract

Planning provides the basis for a theory of language generation that considers the communicative goals of the speaker when producing utterances. One central problem in designing a system based on such a theory is specifying the requisite linguistic knowledge in a form that interfaces well with a planning system and allows for the encoding of discourse information. The TELEGRAM (TELEological GRAMmar) system described in this paper solves this problem by annotating a unification grammar with assertions about how grammatical choices are used to achieve various goals, and by enabling the planner to augment the functional description of an utterance as it is being unified. The control structures of the planner and the grammar unifier are then merged in a manner that makes it possible for general planning to be guided by unification of a particular functional description.

1. Introduction

By viewing language generation as a planning process, one can not only account for the way people use language to satisfy different goals they have in mind, but also model the broad interaction between a speaker's physical and linguistic actions. Formal models of planning can provide the basis for a theory of language generation in which communicative goals play a central role. Recent research in natural-language generation [1][2] has established the feasibility of regarding planning as the basis for the generation of utterances. This paper examines some of the problems involved in devising a grammar formalism for such a generation system that produces utterances and describes a particular implementation of a unification grammar, referred to as TELEGRAM, that solves some of these problems.

The KAMP system [1] was designed with the problems of multiple-goal satisfaction and the integration of physical and linguistic actions in mind. KAMP is a multiagent planning system that can be given a high-level description of an agent's goals, and then produce a plan that includes the performance of both physical and linguistic actions by several agents that will achieve the agent's goals. In the development of KAMP it was recognized that syntactic, semantic and pragmatic knowledge sources are necessary for the planning of utterances. These sources of knowledge were stored independently inside the system: a grammar was provided in addition to the axioms that constitute the agent's knowledge of the pragmatics of communication. However, rather than have one process that decides what to say, drawing on knowledge about the world and about communication, plus another independent process that decides how to encode that knowledge into English, KAMP employs a single process that uses both sources of knowledge to produce plans.

This research was supported by the National Science Foundation under Grant MCS-8115105. The author is grateful to Barbara Grosz for helpful comments on earlier drafts of this paper.

The primary focus of the research on KAMP was the representation and integration of the knowledge needed to make plans involving utterances. One area that was neglected was the representation of grammatical knowledge. KAMP relies on a very simple grammar composed of context-free rules that enable it to generate simple sentences. Such phenomena as gapping are totally outside of its capability. Because of the ad hoc nature of the representation, modifications and extensions of its linguistic coverage are very difficult.

Another criticism of KAMP's approach was that there was no obvious way to control the planning process. Instead of formulating a plan quickly, KAMP would search a large space of linguistic alternatives until it found an "optimal" solution. As some critics have pointed out (e.g., [5]), such exhaustive planning is often not needed in practical situations -- and is certainly not how people produce utterances in real time. KAMP would never produce an ungrammatical sentence, because it could always do unlimited backtracking after making an incorrect decision.

The remainder of this paper describes how to use a unification grammar* to address these two problems of representation and control.

*Unification grammar has often been referred to as functional grammar in the literature, e.g., [7], [11]. It is related to and shares many ideas with systemic grammar [6].

2. Unification Grammar

A unification grammar characterizes linguistic entities
Because of the ad hoc nature of the rep- resentation, modifications and extensions of its linguistic coverage are very difficult. Another criticism of KAMP's approach was that there was no obvious way to control the planning process. Instead of formulaLing a plan quickly. KAMP would search a large space of linguistic alternatives until it found an "(,primal" solution. As some critics have pointed out, (e.g., [51) such exhaustive planning is often not needed in prac- tical ~ituations -- and is certainly not how people produce utterances in real time. KAMP would never produce an ungrammatical sentence, because it could always do un- limited backtracking after making an incorrect decision. "Flit' remainder of this paper describes how to use a unification grammar* to address these two problems of r,,pr4.s~,ntation and control. 2. Unification Grammar A unification grammar characterizes linguistic entities * (.lnific~tion gramma.r has often been referred to as Junctional gram- mar in the fiterature, e.g., [7], Jill. It is related to and shares many ideas with systemic grammar [6]. 74 by collections of features called a functional description (FDs). Each of the features in an FD has a value that can be either atomic or another functional description. A unification grammar is a large FD that characterizes the features of every possible sentence in the language. In this paper, the FD that characterizes the intended utterance is called the teat FD and the FD that constitutes the gram- mar is called the grammar FD. The most salient feature of unification grammar that distinguishes it from other grammatical formalisms is its emphasis on linguistic function. All of the features used by the grammar have equal status, with functional and discourse-related features like topic and focus sharing equal status with grammatical roles like subject and predicate, and with syntactic categories like NP and VP. Unification grammars are particularly well suited for language generation because they allow the encoding of discourse features in the grammar. A functional descrip- tion can be constructed incorporating these features, and the syntactic details of the final utterance can then be specified through unification with the grammar FD. The process that constructs the text FD can treat it as a high- level blueprint fleshed out by unification, thereby reliev- ing the high-level process of the need to consider low-level grammatical details. This strategy was used by McKeown {111. Two functional descriptions can be unified by an algo- rithm that is similar to set union. Suppose FI and F2 are functional descriptions. To compute the unification, Fa, of F, and Fz, written F3 = FI ~ Fz, the following algorithm is used: If (A,v,) is a feature-value pair, and ()'l,v,) E Fl and Vz (fl, z) ~ Fz, or (f,, vl) E F2 and Vx(ft, z) ¢~ Ft, then (:,, v, ) ~ &. If (fl, v,) E F, and (fl, vz) E Fz, then (fl, va) E/'3, where the following conditions apply: If v, -~ NIL then v3 = vz, and similarly for vz. If vl = ANY and v2 ~ NONE, then t,a = vz, and-similarly for vz. If v, ~ v~, then v3 = vl. If v, and v2 are functional descriptions, then v3 7)I ~ U2. If any one of the above conditions fails, then the unification itself fails and the value of F1 ~ F2 is undefined. Functional descriptions can optionally contain a distin- guished feature called PATTERN that is used to specify the surface order of constituents in the FD. 
Functional descriptions can optionally contain a distinguished feature called PATTERN that is used to specify the surface order of constituents in the FD. The unification of two patterns is different in that it is based on deciding whether or not the orderings represented by the two patterns are consistent.

In spite of its advantages, there are some serious problems with unification grammar if it is employed straightforwardly in a language planning system. One of the most serious problems is the inefficiency of the unification algorithm as described above. A straightforward application of that algorithm is very expensive, consuming an order-of-magnitude more time in the unification process than in the entire planning process leading up to the construction of the text FD [11]. The problem is not simply one of efficiency of implementation. It is inherent in any algorithm that searches alternatives blindly and thereby does work that is exponentially related to the number of alternatives in the grammar. Any solution to the problem must be a conceptual one that minimizes the number of alternatives that ever have to be considered.

Another problem is that the text FD is not as high-level a blueprint as is really needed, because every feature related to the speaker's intention to communicate must be part of the text FD when unification takes place. This implies, for example, that every descriptor that is part of a referring expression must be specified in advance. This may be unnecessary because, for certain grammatical choices, the referring expression may be eliminated entirely. For example, in the by-phrase in a passive sentence, reference may be made pronominally (or not at all), in which case descriptors are unnecessary. Since the planner must know the linguistic context when planning descriptors, a noun-phrase FD is best constructed initially with a REFERENT feature, and later expanded by adding features that correspond to the descriptors.

While it is conceivable that the grammar could be designed to expand a REFERENT feature into a set of descriptors, that would amount to encoding in the grammar what is essentially a planning problem. This is undesirable because the grammar, being a repository of syntactic knowledge, should be separated from pragmatic knowledge. Conversely, it is also desirable to separate detailed syntactic knowledge from the planner, and the failure to do so was a major shortcoming of KAMP.

The next section describes how unification and planning can be combined to allow syntactic knowledge to be separated from the planner, but still allow the required flexibility of interaction between the planner and the grammar.

3. Combination of Unification and Planning

The TELEGRAM system solves the problems of efficiency and modularity through a close coupling between the processes of unification and planning. (The name TELEGRAM stands for TELEological GRAMmar because planning and goal satisfaction are integrated into the unification process.)

KAMP divided its actions into an abstraction hierarchy. The action hierarchy, as it pertains to linguistic actions, is shown in Figure 3.1.

[Figure 3.1: KAMP's hierarchy of linguistic actions -- illocutionary acts, surface speech acts, concept activation and propositional acts, and utterance acts.]

Actions called illocutionary acts are at the top of the hierarchy, with surface speech acts and concept activation actions falling below, while the actual performance of the utterance is at the lowest level. Illocutionary acts are easily described at an abstract level that
is best reasoned about by a conventional planning system, as was done in KAMP [1] and by Cohen [2]. However, as one progresses down the hierarchy, the planning becomes more and more dependent on the constraints of the grammar, although goal satisfaction is still very much a part of the reasoning that takes place. It is at the level of surface speech act and concept activation actions that the planning and unification processes can be most advantageously merged.

The means of combining planning and unification works as follows. At the time the planner plans to perform a surface speech act, enough information has been specified so that it knows the general syntactic structure of the sentence (declarative, interrogative, or imperative). A functional description of the utterance is created and then unified with the grammar.

This functional description is very general and does not contain sufficient information to specify a unique sentence. The functional description is elaborated during the process of unification so that it adds features incrementally to the functional description. The planner is called upon by the unification algorithm at the appropriate time to add the appropriate features. The end result is a functional description that is the same as if a complete functional description of the intended utterance had been unified with the grammar by means of a conventional unification algorithm that does not invoke planning.

The planner is invoked by the unifier when either of two situations arises:

• The unifier detects a feature in the text FD that has no corresponding feature in the grammar FD. Such features are a signal that elaboration must be performed. The feature is annotated with a goal wff that the planner plans to achieve, and the resulting actions specify additions to the functional description being unified. The unification process then continues in the normal manner.

• The unifier detects a choice in the grammar functional description that cannot be resolved through the unification of atomic features. Each choice in the grammar is annotated with a wff that describes to the planner what the effects of making the choice will be. The planner then decides which alternative is most consistent with its plans, making an arbitrary choice if insufficient information is available for a decision.

The combination of planning and unification that results has a number of benefits resulting from annotating a grammar with information useful to the planner, rather than trying to work linguistic knowledge into the planner in an ad hoc manner.

The ability to perform action subsumption, the opportunistic "piggybacking" of related goals as described in [1], is enhanced. Whether or not one can incorporate additional nonreferring descriptors into a noun phrase is governed by the structure and function of the noun phrase being planned. For example, a pronominal reference cannot incorporate any additional descriptors at all. Therefore, if a planner were to decide whether or not to perform action subsumption, it would have to know in advance how a referent was going to be realized. If this were to be performed before unification, the planner would have to have the detailed linguistic knowledge to know that it was possible. With a simple grammar like KAMP's this was possible, but with a larger grammar it is clearly undesirable.

The ability to do multiple-utterance and discourse planning is also enhanced.
Since the grammar and planner are closely coupled, information can be easily fed back from the grammar to the planner. This feedback is one of the features that distinguish a language planning system from a system that first decides what to say, then how to say it. When an alternative is chosen, the planner has information about the goal that is to be achieved through the selection of that alternative. If unification based on that selection fails, the planner, instead of blindly trying other alternatives, can revise the entire plan -- including the incorporation of multiple utterances where only one was planned originally.

4. Example

This example illustrates how a language system can use an annotated unification grammar like TELEGRAM. Suppose that there are two agents operating in an equipment assembly domain, and that the planning agent decides that the other agent should know that the location of a screwdriver S1 is in a particular toolbox, TB1. He then plans the illocutionary act**

Do(AGT1, Inform(AGT2, Location(S1) = TB1)).

The planner then plans a surface speech act consisting of a declarative sentence with the same propositional content as the illocutionary act. However, instead of constructing a syntactic-structure tree by using context-free rules, as KAMP would do in this example, the TELEGRAM planner will create a high-level functional description of the intended utterance. For this example, the functional description would look like the following:***

CAT = S
SUBJ = [CAT = NP, REFERENT = S1]
VERB = [CAT = V, LEX = BE]
COMP = [PREP = [LEX = IN], OBJ = [CAT = NP, REFERENT = TB1]]

** The precise meanings of the elements of this representation are described in [1], but their intuitive meanings are adequate for understanding this paper.
*** Using the notation of Kay [7][8].

At this point, the planner is no longer directly in control of the planning process. The planner invokes the unifier with the above text functional description and the grammar functional description, and relinquishes control to the unification process. The unification process follows the algorithm described in Section 2, until there is either an alternative in the grammar that needs to be selected or some feature in the text FD does not unify with any feature in the grammar FD. In this example, the second of these situations arises when the noun phrase FD

[CAT = NP, REFERENT = TB1]

is unified with the functional description of a noun phrase from the grammar:

CAT = NP
PATTERN = (DET MODS HEAD QUAL)
DET = [...]
HEAD = [CAT = N]
MODS = [...]
QUAL = [...]

The FD for the noun phrase tells what the structure of the constituent is, but it does not contain a REFERENT feature. The straightforward application of the unification algorithm of Section 2 would simply yield the grammar FD along with the feature "REFERENT = TB1," which is not particularly useful. However, the feature REFERENT has an annotation that tells the unifier that the planner should be invoked with the goal of activating the concept TB1 for AGT2. The planner then plans a concept activation action, using its knowledge about AGT1 and AGT2's mutual knowledge, perhaps inserts a pointing action into the plan, and augments the text FD to resemble the following:

CAT = NP
DESC = (Toolbox(TB1), Under(TB1, TABLE1))

The new augmented functional description still does not unify with the grammar FD, but the annotation for the DESC feature is written to insert FDs corresponding to each of the descriptors into the text FD.
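The control flow just described -- unification that calls out to the planner when it meets an annotated feature with no grammar counterpart -- might be sketched as follows. The annotation table, the plan_descriptors stand-in, and all names here are hypothetical illustrations, not TELEGRAM's actual code (which the paper does not give).

```python
# A hypothetical sketch of annotation-driven elaboration during
# unification. plan_descriptors() stands in for concept-activation
# planning over the agents' mutual knowledge.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Annotation:
    goal: str              # goal wff the planner is to achieve
    expand: Callable       # planner call returning new features

def plan_descriptors(referent):
    # The planner chooses descriptors (and perhaps a pointing action).
    return {"DESC": ("Toolbox(TB1)", "Under(TB1, TABLE1)")}

ANNOTATIONS = {
    "REFERENT": Annotation(goal="activate concept for the hearer",
                           expand=plan_descriptors),
}

def unify_with_planning(text_fd, grammar_fd, unify):
    """Unify, invoking the planner on annotated features that have
    no corresponding feature in the grammar FD."""
    for feat, value in list(text_fd.items()):
        if feat in ANNOTATIONS and feat not in grammar_fd:
            # Elaborate the text FD with the planner's additions,
            # then let ordinary unification continue.
            text_fd.update(ANNOTATIONS[feat].expand(value))
    return unify(text_fd, grammar_fd)
```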
This next expansion results in the following FD:

CAT = NP
DET = [SUBCAT = DEF]
HEAD = [LEX = TOOLBOX]
QUAL = [PREP = [LEX = UNDER], POBJ = [CAT = NP, REFERENT = TABLE1]]

This FD can be unified directly with the grammar FD, using the algorithm described in Section 2. It is identical to the one that would have been planned had the entire FD been specified at the start of the unification process. However, by postponing some of the planning, and placing it under control of the unification process, the system preserves the ability to plan hierarchically while enhancing its ability to coordinate physical and linguistic actions.

5. Comparison with Related Systems

There are several significant differences between TELEGRAM and other natural-language-generation systems that have been developed using unification grammar or systemic grammar.

The TEXT system developed by McKeown [11] uses a unification grammar to generate coherent multisentential text and employs a straightforward unification algorithm. The unifier does not draw upon the system's pragmatic knowledge to decide among alternatives in the grammar, and being reduced to blind search, it requires a great deal of time to unify a single text functional description. The TEXT system does all its planning during the construction of the text FD and uses the unification process to fill in the grammatical details essential for producing the final utterance.

The NIGEL grammar designed by Mann [10] is a systemic grammar, but the philosophies underlying systemic and unification grammar are so similar that a comparison of the systems is warranted. The system "choosers" of NIGEL play a role similar to the annotations on the alternatives in TELEGRAM, and many other parallels can be drawn. The most fundamental difference between the two systems is in the assumptions underlying their design. NIGEL is intended to be completely independent of any particular application system or knowledge representation, an intention that has influenced all aspects of its design. A consequence of this decision is a complete separation of the grammatical processes from the other processes in the system, permitting communication only through a narrow channel. TELEGRAM, on the other hand, closely couples reasoning about syntactic choices with the other planning done by the system, thereby enabling reasoning about combined physical and linguistic actions. However, TELEGRAM sacrifices some of the simplicity of the interface between the grammar and the rest of the system.

6. Summary and Conclusion

The TELEGRAM system described in this paper is an attempt to incorporate a large grammar into a language-planning system. This particular approach to representing knowledge in an annotated unification grammar and combining the processes of planning and unification results in the following advantages:

• Greater efficiency in the lower levels of the planning process, because the planner can be invoked to decide among alternatives, thus avoiding the reliance upon blind search.

• A simple method of resource allocation to the planning process by limiting the amount of backtracking the unifier is allowed to do.

• The ability to combine reasoning about physical and linguistic actions with a grammar that provides significantly wide coverage of the language.
Although the development of TELEGRAM is still in progress, early experience suggests that the TELEGRAM formalism has sufficient power to represent the syntactic knowledge of a language-planning system that efficiently encompasses a significant portion of English. A small grammar has been written that already has more power than the grammar of KAMP. Research is being conducted in discovering those discourse-related features that have to be included in a unification grammar. Although writing a "reversible" grammar does not appear to be feasible at this time, we hope this research will lead to the specification of a set of features that can be shared between unification grammars for parsing and for generation.

REFERENCES

[1] Appelt, Douglas E., Planning Natural Language Utterances to Satisfy Multiple Goals, SRI International Artificial Intelligence Center Technical Note No. 259, 1982.

[2] Cohen, Philip and C. R. Perrault, "Elements of a Plan-Based Theory of Speech Acts," Cognitive Science, vol. 3, pp. 177-212, 1979.

[3] Cohen, Philip, "The Need for Identification as a Planned Action," Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1981.

[4] Cohen, Philip, S. Fertig and K. Starr, "Dependencies of Discourse Structure on the Modality of Communication: Telephone vs. Teletype," Proceedings of the Twentieth Annual Meeting of the Association for Computational Linguistics, 1982.

[5] Conklin, E. Jeffery, and D. McDonald, "Salience: The Key to the Selection Problem in Natural Language Generation," Proceedings of the Twentieth Annual Meeting of the Association for Computational Linguistics, 1982.

[6] Halliday, M. A. K., System and Function in Language, Oxford University Press, London, 1976.

[7] Kay, Martin, "Functional Grammar," Proceedings of the Annual Meeting of the Linguistic Society of America, 1979.

[8] Kay, Martin, Unification Grammar, Xerox PARC technical report.

[9] Kay, Martin, An Algorithm for Compiling Parsing Tables from a Grammar.

[10] Mann, William C., and Christian Matthiessen, Nigel: A Systemic Grammar for Text Generation, University of Southern California Information Sciences Institute Technical Report ISI/RR-83-105, February 1983.

[11] McKeown, Kathleen, Generating Natural Language Text in Response to Questions about Database Structure, Ph.D. dissertation, University of Pennsylvania, 1982.
AN OVERVIEW OF THE NIGEL TEXT GENERATION GRAMMAR

William C. Mann
USC/Information Sciences Institute
4676 Admiralty Way #1101
Marina del Rey, CA 90291

Abstract

Research on the text generation task has led to creation of a large systemic grammar of English, Nigel, which is embedded in a computer program. The grammar and the systemic framework have been extended by addition of a semantic stratum. The grammar generates sentences and other units under several kinds of experimental control. This paper describes augmentations of various precedents in the systemic framework. The emphasis is on developments which control the text to fulfill a purpose, and on characteristics which make Nigel relatively easy to embed in a larger experimental program.

1 A Grammar for Text Generation - The Challenge

Among the various uses for grammars, text generation at first seems to be relatively new. The organizing goal of text generation, as a research task, is to describe how texts can be created in fulfillment of text needs.2 Such a description must relate texts to needs, and so must contain a functional account of the use and nature of language, a very old goal. Computational text generation research should be seen as simply a particular way to pursue that goal. As part of a text generation research project, a grammar of English has been created and embodied in a computer program. This grammar and program, called Nigel, is intended as a component of a larger program called Penman. This paper introduces Nigel, with just enough detail about Penman to show Nigel's potential use in a text generation system.

1 This research was supported by the Air Force Office of Scientific Research contract No. F49620-79-C-0181. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research or the U.S. Government.

2 A text need is the earliest recognition on the part of the speaker that the immediate situation is one in which he would like to produce speech. In this report we will alternate freely between the terms speaker, writer and author, between hearer and reader, and between speech and text. This is simply partial accommodation of prevailing jargon; no differences are intended.

1.1 The Text Generation Task as a Stimulus for Grammar Design

Text generation seeks to characterize the use of natural languages by developing processes (computer programs) which can create appropriate, fluent text on demand. A representative research goal would be to create a program which could write a text that serves as a commentary on a game transcript, making the events of the game understandable.3 The guiding aims in the ongoing design of the Penman text generation program are as follows:

1. To learn, in a more specific way than has previously been achieved, how appropriate text can be created in response to text needs.
2. To identify the dominant characteristics which make a text appropriate for meeting its need.
3. To develop a demonstrable capacity to create texts which meet some identifiable practical class of text needs.

3 This was accomplished in work by Anthony Davey [Davey 79]; [McKeown 82] is a comparable more recent study in which the generated text described structural and definitional aspects of a data base.

Seeking to fill these goals, several different grammatical frameworks were considered. The systemic framework was chosen, and it has proven to be an entirely agreeable choice. Although it is relatively unfamiliar to many American researchers, it has a long history of use in work on concerns which are central to text generation.
It was used by Winograd in the SHRDLU system, and more extensively by others since [Winograd 72, Davey 79, McKeown 82, McDonald 80]. A recent state of the art survey identifies the systemic framework as one of a small number of linguistic frameworks which are likely to be the basis for significant text generation programs in this decade [Mann 82a]. One of the principal advantages of the systemic framework is its strong emphasis on "functional" explanations of grammatical phenomena. Each distinct kind of grammatical entity is associated with an expression of what it does for the speaker, so that the grammar indicates not only what is possible but why it would be used. Another is its emphasis on principled, justified descriptions of the choices which the grammar offers, i.e. all of its optionality. Both of these emphases support text generation programming significantly. For these and other reasons the systemic framework was chosen for Nigel. Basic references on the systemic framework include: [Berry 75, Berry 77, Halliday 76a, Halliday 76b, Hudson 76, Halliday 81, de Joia 80, Fawcett 80].4

1.2 Design Goals for the Grammar

Three kinds of goals have guided the work of creating Nigel:

1. To specify in total detail how the systemic framework can generate syntactic units, using the computer as the medium of experimentation.
2. To develop a grammar of English which is a good representative of the systemic framework and useful for demonstrating text generation on a particular task.
3. To specify how the grammar can be regulated effectively by the prevailing text need in its generation activity.

Nigel is intended to serve not only as a part of the Penman system, but also eventually as a portable generational grammar, a component of future research systems investigating and developing text generation. Each of the three goals above has led to a different kind of activity in developing Nigel and a different kind of specification in the resulting program, as described below. The three design goals have not all been met, and the work continues.

1. Work on the first goal, specifying the framework, is essentially finished (see section 2.1). The Interlisp program is stable and reliable for its developers.
2. Very substantial progress has been made on creating the grammar of English; although the existing grammar is apparently adequate for some text generation tasks, some additions are planned.
3. Progress on the third goal, although gratifying, is seriously incomplete. We have a notation and a design method for relating the grammar to prevailing text needs, and there are worked out examples which illustrate the methods in the demonstration paper in [Mann 83] (see section 2.3).

2 A Grammar for Text Generation - The Design

2.1 Overview of Nigel's Design

The creation of the Nigel program has required evolutionary rather than radical revisions in systemic notation, largely in the direction of making well-precedented ideas more explicit or detailed. Systemic notation deals principally with three kinds of entities: 1) systems, 2) realizations of systemic choices (including function structures), and 3) lexical items. These three account for most of the notational devices, and the Nigel program has separate parts for each.
4 This work would not have been possible without the active participation of Christian Matthiessen, and the participation and past contributions of Michael Halliday and other systemicists.

Comparing the systemic functional approach to a structural approach such as context-free grammar, ATNs or transformational grammar, the differences in style (and their effects on the programmed result) are profound. Although it is not possible to compare the approaches in depth here, we note several differences of interest to people more familiar with structural approaches:

1. Systems, which are most like structural rules, do not specify the order of constituents. Instead they are used to specify sets of features to be possessed by the grammatical construction as a whole.

2. The grammar typically pursues several independent lines of reasoning (or specification) whose results are then combined. This is particularly difficult to do in a structurally oriented grammar, which ordinarily expresses the state of development of a unit in terms of categories of constituents.

3. In the systemic framework, all variability of the structure of the result, and hence all grammatical control, is in one kind of construct, the system. In other frameworks there is often variability from several sources: optional rules, disjunctive options within rules, optional constituents, order of application and so forth. For generation these would have to be coordinated by methods which lie outside of the grammar, but in the systemic grammar the coordination problem does not exist.

2.1.1 Systems and Gates

Each system contains a set of alternatives, symbols called grammatical features. When a system is entered, exactly one of its grammatical features must be chosen. Each system also has an input expression, which encodes the conditions under which the system is entered.5 During the generation, the program keeps track of the selection expression, the set of features which have been chosen up to that point. Based on the selection expression, the program invokes the realization operations which are associated with each feature chosen.

In addition to the systems there are gates. A gate can be thought of as an input expression which activates a particular grammatical feature, without choice.6 These grammatical features are used just as those chosen in systems. Gates are most often used to perform realization in response to a collection of features.7

5 Input expressions are Boolean expressions of features, without negation, i.e. they are composed entirely of feature names, together with And, Or and parentheses. (See the figures in the demonstration paper in [Mann 83] for examples.)

6 See the figure entitled Transitivity I in [Mann 83] for examples and further discussion of the roles of gates.

7 Each realization operation is associated with just one feature; there are no realization operations which depend on more than one feature, and no rules corresponding to Hudson's function realization rules. The gates facilitate eliminating this category of rules, with a net effect that the notation is more homogeneous.
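As an illustration of the mechanics (not of Nigel's Interlisp code), input expressions of this kind are straightforward to evaluate against the selection expression; the tuple encoding below is an assumption of this sketch.

```python
# A minimal sketch of evaluating input expressions: Boolean
# combinations of feature names with And/Or and no negation.

def entered(input_expr, selection):
    """Decide whether a system or gate is entered.

    input_expr is a feature name (string) or a tuple
    ('and', e1, e2, ...) / ('or', e1, e2, ...);
    selection is the set of features chosen so far.
    """
    if isinstance(input_expr, str):
        return input_expr in selection
    op, *args = input_expr
    if op == 'and':
        return all(entered(a, selection) for a in args)
    if op == 'or':
        return any(entered(a, selection) for a in args)
    raise ValueError(op)

# Example: a gate active whenever both features are in the selection.
print(entered(('and', 'Mental', 'Indicative'),
              {'Mental', 'Indicative'}))    # -> True
```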
2.1.2 Realization Operators

There are three groups of realization operators: those that build structure (in terms of grammatical functions), those that constrain order, and those that associate features with grammatical functions.

1. The realization operators which build structure are Insert, Conflate, and Expand. By repeated use of the structure building functions, the grammar is able to construct sets of function bundles, also called fundles. None of them are new to the systemic framework.

2. Realization operators which constrain order are Partition, Order, OrderAtFront and OrderAtEnd. Partition constrains one function (hence one fundle) to be realized to the left of another, but does not constrain them to be adjacent. Order constrains just as Partition does, and in addition constrains the two to be realized adjacently. OrderAtFront constrains a function to be realized as the leftmost among the daughters of its mother, and OrderAtEnd symmetrically as rightmost. Of these, only Partition is new to the systemic framework.

3. Some operators associate features with functions. They are Preselect, which associates a grammatical feature with a function (and hence with its fundle); Classify, which associates a lexical feature with a function; OutClassify, which associates a lexical feature with a function in a preventive way; and Lexify, which forces a particular lexical item to be used to realize a function. Of these, OutClassify and Lexify are new, taking up roles previously filled by Classify. OutClassify restricts the realization of a function (and hence fundle) to be a lexical item which does not bear the named feature. This is useful for controlling items in exception categories (e.g. reflexives) in a localized, manageable way. Lexify allows the grammar to force selection of a particular item without having a special lexical feature for that purpose.

In addition to these realization operators, there is a set of Default Function Order Lists. These are lists of functions which will be ordered in particular ways by Nigel, provided that the functions on the lists occur in the structure, and that the realization operators have not already ordered those functions. A large proportion of the constraint of order is performed through the use of these lists. The realization operations of the systemic framework, especially those having to do with order, have not been specified so explicitly before.
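The interaction of explicit ordering realizations with the Default Function Order Lists might be sketched as follows. The pairwise-constraint encoding is an invention for illustration, since the paper describes the lists only in prose, and the function names in the example are placeholders.

```python
# An illustrative sketch: default order lists supply pairwise
# ordering constraints only where Partition/Order have not
# already ordered the two functions.

def apply_default_order(fundles, constraints, default_list):
    """constraints: set of (left, right) pairs already imposed by
    ordering operators; default_list: function names in their
    default left-to-right order."""
    present = [f for f in default_list if f in fundles]
    for i, left in enumerate(present):
        for right in present[i + 1:]:
            # Defaults yield to any ordering already imposed.
            if (right, left) not in constraints:
                constraints.add((left, right))
    return constraints

constraints = {("SUBJECT", "FINITE")}    # e.g. imposed by Order
print(apply_default_order({"SUBJECT", "FINITE", "PREDICATE"},
                          constraints,
                          ["SUBJECT", "FINITE", "PREDICATE"]))
```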
2.1.3 The Lexicon

The lexicon is defined as a set of arbitrary symbols, called word names, such as "budten", associated with symbols called spellings, the lexical items as they appear in text. In order to keep Nigel simple during its early development, there is no formal provision for morphology or for relations between items which arise from the same root. Each word name has an associated set of lexical features. Lexify selects items by word name; Classify and OutClassify operate on sets of items in terms of the lexical features.

2.2 The Grammar and Lexicon of English

Nigel's grammar is partly based on published sources, and is partly new. It has all been expressed in a single homogeneous notation, with consistent naming conventions and much care to avoid reusing names where identity is not intended. The grammar is organized as a single network, whose one entry point is used for generating every kind of unit.8 Nigel's lexicon is designed for test purposes rather than for coverage of any particular generation task. It currently recognizes 130 lexical features, and it has about 2000 lexical items in about 580 distinct categories (combinations of features).

8 At the end of 1982, Nigel contained about 220 systems, with all of the necessary realizations specified. It is thus the largest systemic grammar in a single notation, and possibly the largest grammar of a natural language in any of the functional linguistic traditions. Nigel is programmed in INTERLISP.

2.3 Choosers - The Grammar's Semantics

The most novel part of Nigel is the semantics of the grammar. One of the goals identified above was to "specify how the grammar can be regulated effectively by the prevailing text need." Just as the grammar and the resulting text are both very complex, so is the text need. In fact, grammar and text complexity actually reflect the prior complexity of the text need which gave rise to the text. The grammar must respond selectively to those elements of the need which are represented by the unit being generated at the moment.

Except for lexical choice, all variability in Nigel's generated result comes from variability of choice in the grammar. Generating an appropriate structure consists entirely in making the choices in each system appropriately. The semantics of the grammar must therefore be a semantics of choices in the individual systems; the choices must be made in each system according to the appropriate elements of the prevailing need. In Nigel this semantic control is localized to the systems themselves. For each system, a procedure is defined which can declare the appropriate choice in the system. When the system is entered, the procedure is followed to discover the appropriate choice. Such a procedure is called a chooser (or "choice expert"). The chooser is the semantic account of the system, the description of the circumstances under which each choice is appropriate.

To specify the semantics of the choices, we needed a notation for the choosers as procedures. This paper describes that notation briefly and informally. Its use is exemplified in the Nigel demonstration [Mann 83] and developed in more detail in another report [Mann 82b]. To gain access to the details of the need, the choosers must in some sense ask questions about particular entities. For example, to decide between the grammatical features Singular and Plural in creating a NominalGroup, the Number chooser (the chooser for the Number system, where these features are the options) must be able to ask whether a particular entity (already identified elsewhere as the entity the NominalGroup represents) is unitary or multiple. That knowledge resides outside of Nigel, in the environment. The environment is regarded informally as being composed of three disjoint regions:

1. The Knowledge Base, consisting of information which existed prior to the text need;
2. The Text Plan, consisting of information which was created in response to the text need, but before the grammar was entered;
3. The Text Services, consisting of information which is available on demand, without anticipation.

Choosers must have access to a stock of symbols representing entities in the environment. Such symbols are called hubs. In the course of generation, hubs are associated with grammatical functions; the associations are kept in a Function Association Table, which is used to reaccess information in the environment. For example, in choosing pronouns the choosers will ask questions about the multiplicity of an entity which is associated with the THING function in the Function Association Table. Later they may ask about the gender of the same entity, again accessing it through its association with THING. This use of grammatical functions is an extension of previous uses. Consequently, relations between referring phrases and the concepts being referred to are captured in the Function Association Table. For example, the function representing the NominalGroup as a whole is associated with the hub which represents the thing being referred to in the environment. Similarly for possessive determiners, the grammatical function for the determiner is associated with the hub for the possessor.

It is convenient to define choosers in such a way that they have the form of a tree. For any particular case, a single path of operations is traversed. Choosers are defined principally in terms of the following operations:

1. Ask presents an inquiry to the environment. The inquiry has a fixed predetermined set of possible responses, each corresponding to a branch of the path in the chooser.
2. Identify presents an inquiry to the environment. The set of responses is open-ended. The response is put in the Function Association Table, associated with a grammatical function which is given (in addition to the inquiry) as a parameter to the Identify operator.9
3. Choose declares a choice.
4. CopyHub transfers an association of a hub from one grammatical function to another.10

9 See the demonstration paper in [Mann 83] for an explanation and example of its use.

10 There are three others which have some linguistic significance: Pledge, TermPledge, and ChoiceError. These are necessary but do not play a central role. They are named here just to indicate that the chooser notation is very simple.
For example, the function representing the NominalGroup as a whole is associated with the hub whictl represents the thing being referred to in the environment. Similarly for possessive determiners, the grammatical function for the determiner is associated with the hub for the possessor. It is convenient to define choosers in such a way that they have the form of a tree. For any particular case, a single path of operations is traversed. Choosers are defined principally in terms of the following Operations: 1. Ask presents an inquiry to the environment. The inquiry has a fixed predetermined set of possible responses, each corresponding to a branch of the path in the chooser, 2. Identify ~resents an inquiry to the environment. The set of responses is open-ended. The response is put in the Function Association Table. associated with a grammatical function which is given (in addition to the inquiry) as a parameter tO the Identify operator. 9 3. Choose declares a choice, 4. CopyHub transfers an association of a hub from one grammatical function tO another. 1° 9See the demonstration paper in [Mann 8,3} for an explanation and example of its use 10There are three athers whtCh have some linguistic slgnihcance: Pledge, TermPle~:lge, and Cho~ceError. These are necessary but do not Play a central rote, They are named here lust to indicate that the chooser notation ~s very s=m~le. Choosers obtain information about the immediate circumstances in which they are generating by presenting inquiries to the environment. Presenting inquiries, and receiving replies constitute the only way in which the grammar and its environment interact. An inquiry consists of an inquiry operator and a sequence of inquiry parameters. Each inquiry parameter is a grammatical function, and it represents (via the Function Association Table) the entities in the environment which the grammar is inquiring about. The operators are defined in such a way that they have both formal and informal modes of expression. Informally. each inquiry is a predefined question, in English, which represents the issue that the inquiry is intended to resolve for any chooser that uses it. Formally. the inquiry shows how systemic choices depend on facts about particular grammatical functions, and in particular restricts the account of a particular choice to be responsive to a well-constrained, well-identified collection of facts. Both the informal English form of the inquiry and the corresponding formal expression are regarded as parts of the semantic theory expressed by the choosers which use the inquiry. The entire collection of inquiries for a grammar ~s a definition of the semantic scope to which the grammar is responsive at its [evet of delicacy. Figure 1 shows the chooser for the ProcessType system. whose grammat=cal feature alternatives are Relational, Mental, Verbal and Material. Notice that in the ProcessType chooser, although there are only four possible choices, there are five paths through the chooser from the starting point at the too, because Mental processes can be identified in two different ways: those which represent states of affairs and those which do not. The number of termination points of a chooser often exceeds the number of choices available. Table 1 shows the English forms of the Questions being asked in the ProceasType chooser. 
(A word ~n all cap.tats names a grammatical function which is a oarameter of the inquiry,) Table 1: English Forms of the tncluiry Operators for the ProcessType Chooser StaticConditionQ Does the process PROCESS represent a static condition or state of being? VerbalProcessQ Does the process PROCESS represent symbolic communication of a Kind which could have an addressee? MentalProoessQ Is PROCESS a process of comprehension. recognition, belief, perception, deduction, remembering, evaluation or mental reaction? The sequence of incluiries which the choosers present to the environment, together with its responses, creates a dialogue. The unit generated can thus be seen as being formed out of a negotiation between the choosers and the environment. This is a particularly instructive way to view the grammar and its semantics, since it identifies clearly what assumptions are being made and what dependencies there are between the unit and the environment's representation of the text need. (This is the kind of dialogue represented in the demonstration paper in [Mann 83].) 82 ??(Static Condition 0 P ~ / \ • : : Matedal Figure 1 : The Chooser of the ProcessType system The grammar performs the final steps in the generation process. It must complete the surface form of the text, but there is a great deal of preparation necessary before it is appropriate for the grammar tO start its work. Penman's design calls for many kinds of activities under the umbrella of "text planning" to provide the necessary support. Work on Nigel is proceeding in parallel with other work intended to create text planning processes. 3 The Knowledge Representation of the Environment Nigel does not presume that any particular form Of knowledge representation prevails in the environment. The conceptual content of the environment is represented in the Function Association Table only by single, arbitrary, undecomposable symbols, received from the environment; the interface is designed so that environmentally structured responses do not occur. There is thus no way for Nigel to tell whether the environment's representation is, for example, a form of predicate calculus or a frame-based notation. Instead, the environment must be able to respond to incluiries, which requires that the inquiry operators be ~mplemented. It must be able to answer inquiries about multiplicity, gender, time, and so forth, by whatever means are appropriate to the actual environment. AS a result, Nigel is largely independent of the environment's notation. It does not need to know how to search, and so it is insulated from changes .in representation. We expect that Nigel will be transferable from one application to another with relatively little change, and will not embody covert knowledge about particular representation techniques. 4 Nigel's Syntactic Diversity This section provides a set of samples of Niget's syntactic diversity: aJl of the sentence and clause structures in the Abstract of this paper are within Nigers syntactic scope. Following a frequent practice in systemic linguistics (introduced by Halliday), the grammar provides for three relatively independent kinds of specification of each syntactic unit: the Ideational or logical content, the Interpersonal content (attitudes and relations between the speaker and the unit generated) and the Textual content. Provisions for textual control are well elaborated, and so contribute significantly to Nigel's ability to control the flow of the reader's attention and fit sentences into larger un=ts of text. 
5 Uses for Nigel

The activity of defining Nigel, especially its semantic parts, is productive in its own right, since it creates interesting descriptions and proposals about the nature of English and the meaning of syntactic alternatives, as well as new notational devices.11 But given Nigel as a program, containing a full complement of choosers, inquiry operators and related entities, new possibilities for investigation also arise.

Nigel provides the first substantial opportunity to test systemic grammars to find out whether they produce unintended combinations of functions, structures or uses of lexical items. Similarly, it can test for contradictions. Again, Nigel provides the first substantial opportunity for such a test. And such a test is necessary, since there appears to be a natural tendency to write grammars with excessive homogeneity, not allowing for possible exception cases. A systemic functional account can also be tested in Nigel by attempting to replicate particular natural texts -- a very revealing kind of experimentation. Since Nigel provides a consistent notation and has been tested extensively, it also has some advantages for educational and linguistic research uses. On another scale, the whole project can be regarded as a single experiment, a test of the functionalism of the systemic framework, and of its identification of the functions of English.

In artificial intelligence, there is a need for priorities and guidance in the design of new knowledge representation notations. The inquiry operators of Nigel are a particularly interesting proposal as a set of distinctions already embodied in a mature, evolved knowledge notation, English, and encodable in other knowledge notations as well. To take just a few examples among many, the inquiry operators suggest that a notation for knowledge should be able to represent objects and actions, and should be able to distinguish between definite existence, hypothetical existence, conjectural existence and non-existence of actions. These are presently rather high expectations for artificial intelligence knowledge representations.

11 It is our intention eventually to make Nigel available for teaching, research, development and computational application.

6 Summary

As part of an effort to define a text generation process, a programmed systemic grammar called Nigel has been created. Systemic notation, a grammar of English, a semantic notation which extends systemic notation, and a semantics for English are all included as distinct parts of Nigel. When Nigel has been completed it will be useful as a research tool in artificial intelligence and linguistics, and as a component in systems which generate text.

References

[Berry 75] Berry, M., Introduction to Systemic Linguistics: Structures and Systems, B. T. Batsford, Ltd., London, 1975.

[Berry 77] Berry, M., Introduction to Systemic Linguistics: Levels and Links, B. T. Batsford, Ltd., London, 1977.

[Davey 79] Davey, A., Discourse Production, Edinburgh University Press, Edinburgh, 1979.

[de Joia 80] de Joia, A., and A. Stenton, Terms in Systemic Linguistics, Batsford Academic and Educational, Ltd., London, 1980.

[Fawcett 80] Fawcett, R. P., Exeter Linguistic Studies Volume 3: Cognitive Linguistics and Social Interaction, Julius Groos Verlag Heidelberg and Exeter University, 1980.

[Halliday 76a] Halliday, M. A. K., and R. Hasan, Cohesion in English, Longman, London, 1976. English Language Series, Title No. 9.

[Halliday 76b] Halliday, M. A.
K., System and Function in Language, Oxford University Press, London, 1976.

[Halliday 81] Halliday, M. A. K., and J. R. Martin (eds.), Readings in Systemic Linguistics, Batsford, London, 1981.

[Hudson 76] Hudson, R. A., Arguments for a Non-Transformational Grammar, University of Chicago Press, Chicago, 1976.

[Mann 82a] Mann, W. C., et al., "Text Generation," American Journal of Computational Linguistics 8 (2), April-June 1982, 62-69.

[Mann 82b] Mann, W. C., The Anatomy of a Systemic Choice, USC/Information Sciences Institute, Marina del Rey, CA, RR-82-104, October 1982.

[Mann 83] Mann, W. C., and C. M. I. M. Matthiessen, "A demonstration of the Nigel text generation computer program," in Nigel: A Systemic Grammar for Text Generation, USC/Information Sciences Institute, RR-83-105, February 1983. This paper will also appear in a forthcoming volume of the Advances in Discourse Processes Series, R. Freedle (ed.): Systemic Perspectives on Discourse: Selected Theoretical Papers from the 9th International Systemic Workshop, to be published by Ablex.

[McDonald 80] McDonald, D. D., Natural Language Production as a Process of Decision-Making Under Constraints, Ph.D. thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. To appear as a technical report from the MIT Artificial Intelligence Laboratory.

[McKeown 82] McKeown, K. R., Generating Natural Language Text in Response to Questions about Database Structure, Ph.D. thesis, University of Pennsylvania, 1982.

[Winograd 72] Winograd, T., Understanding Natural Language, Academic Press, Edinburgh, 1972.
Automatic Recognition of Intonation Patterns

Janet B. Pierrehumbert
Bell Laboratories
Murray Hill, New Jersey 07974

1. Introduction

This paper is a progress report on a project in linguistically based automatic speech recognition.* The domain of this project is English intonation. The system I will describe analyzes fundamental frequency contours (F0 contours) of speech in terms of the theory of melody laid out in Pierrehumbert (1980). Experiments discussed in Liberman and Pierrehumbert (1983) support the assumptions made about intonational phonetics, and an F0 synthesis program based on a precursor to the present theory is described in Pierrehumbert (1981). One aim of the project is to investigate the descriptive adequacy of this theory of English melody. A second motivation is to characterize cases where F0 may provide useful information about stress and phrasing. The third, and to my mind the most important, motivation depends on the observation that English intonation is in itself a small language, complete with a syntax and phonetics. Building a recognizer for this small language is a relatively tractable problem which still presents some of the interesting features of the general speech recognition problem. In particular, the F0 contour, like other measurements of speech, is a continuously varying time function without overt segmentation. Its transcription is in terms of a sequence of discrete elements whose relation to the quantitative level of description is not transparent. An analysis of a contour thus relates heterogeneous levels of description, one quantitative and one symbolic. In developing speech recognizers, we wish to exploit achievements in symbolic computation. At the same time, we wish to avoid forcing into a symbolic framework properties which could more insightfully or simply be treated as quantitative. In the case of intonation, our experimental results suggest both a division of labor between these two levels of description, and principles for their interaction.

The next section of this paper sketches the theory of English intonation on which the recognizer is based. Comparisons to other proposals in the literature are not made here, but can be found in the papers just cited. The third section describes a preliminary implementation. The fourth contains discussion and conclusions.

* This work was done at MIT under NSF Grant No. IST-8012248.

2. Background on intonation

2.1 Phonology

The primitives in the theory are two tones, low (L) and high (H). The distinction between L and H is paradigmatic; that is, L is lower than H would be in the same context. It can easily be treated as a distinction in a single binary valued feature. Utterances consist of one or more intonation phrases. The melody of an intonation phrase is decomposed into a sequence of elements, each made up of either one or two tones. Some are associated with stressed syllables, and others with the beginning and end of the phrase. Superficially global characteristics of phrasal F0 contours are explicated in terms of the concatenation of these local elements.

[Figure 1: An F0 contour (Hz vs. time, roughly 75-175 Hz over about a second) for "No one was wearier than Elimelech," with three H* pitch accents, which come out as peaks. The alignment of "Elimelech" is indicated.]

[Figure 2: An F0 contour for the same utterance. The two H+L* accents in this contour are circled.
Compare the F0 contour on the stressed syllable in "Elimelech" to that in Figure 1.]

The characteristic F0 configurations on stressed syllables are due to pitch accents, which consist of either a single tone or a sequence of two tones. For example, each peak circled in Figure 1 is attributed to a H pitch accent. The steps circled in Figure 2 are analyzed as H+L, because they have a relatively higher level just before the stress and a relatively lower level on the stress. In two tone accents, which tone falls on the stress is distinctive, and will be transcribed with a *. In this notation, the circled accents in Figure 2 are H+L*. Seven different accents are posited altogether. Some possible accents do not occur because they would be neutralized in every context by the realization rules. Different types of pitch accents can be mixed in one phrase. Also, material which is presupposed in the discourse may be unaccented. In this case, the surrounding tonal elements control its F0 contour.

The tonal correlates of phrasing are the boundary tones, which control the F0 at the onset and offset of the phrase, and an additional element, the phrase accent, which controls the F0 between the last pitch accent and the final phrase boundary. The boundary tones and the phrase accent are all single tones, either L or H. In what follows, a "%" will be used as the diacritic for a boundary tone. Figure 3 shows two pitch contours in which a L phrase accent is followed by a H% boundary tone. When the last pitch accent is early in the phrase, as in 3A, the level of the phrase accent is maintained over a fairly long segmental string ("doesn't think"). In 3B, on the other hand, the pitch accent, phrase accent, and boundary tone have all been compressed onto a single syllable.

[Figure 3: Two F0 contours, panel (a) for "Your father-in-law doesn't think so" and panel (b) for "Anne". Both have a L*+H accent followed by a L phrase accent and a H% boundary tone. In 3A, the accent is on "father-in-law", and the L H% sequence determines the F0 contour on the rest of the utterance. The alignment of the speech segments with the F0 contour is roughly indicated by the placement of lettering. In 3B, L*+H L H% is compressed onto a monosyllable.]

As far as is known, different pitch accents, phrase accents, and boundary tones combine freely with each other. This means that the grammar of English phrasal melodies can be represented by a transition network, as shown in Figure 4. This grammar defines the level of description that the recognizer attempts to recover.

[Figure 4: The grammar of the phrasal tunes of English given in Pierrehumbert (1980): a transition network running from an initial boundary tone through the pitch accents to the phrase accent and final boundary tone.]

There is no effort to characterize the meaning of the transcriptions established, since our focus is on the sound structure of speech rather than its meaning. In production, the choice of loci for pitch accents depends on the focus structure of the sentence. The choice among different melodic elements appears to be controlled by the attitudes of the speaker and the relation of his phrase to others in the discourse. Meanings suggested in the literature for such elements include surprise, contradiction, elicitation, and judiciousness.

2.2 Phonetics

Two types of rules have a part in relating the tonal level of description to the continuously varying F0 contour. One set of rules maps tones into crucial points, or targets, in the F0 contour. Both the small tonal inventory and the sequential decomposition proposed depend on these rules being nontrivial. Specifically, a rule of downstep lowers a H tone in the contexts H+L _ and H L+ _. The value of the downstepped H is a fixed fraction of the value for the preceding H, once a phrasal constant reflecting pitch range is subtracted. Iterative application of this rule in a sequence of accents which meet its structural description generates an exponential decay to a nonzero asymptote. A related rule, upstep, raises a H% after a H phrase accent. This means that the L* H H% melody often used in yes/no questions (and illustrated in Figure 5 below) takes the form of a rise--plateau--rise in the F0 contour.
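One way to state the downstep rule as a recurrence (the notation is mine; the paper gives the rule only in prose): with $r$ the phrasal constant reflecting pitch range and $k$ the fixed fraction,

```latex
H_n \;=\; r + k\,(H_{n-1} - r), \qquad 0 < k < 1,
```

so that iterating over a sequence of downstepping accents gives $H_n = r + k^n (H_0 - r)$, the exponential decay to the nonzero asymptote $r$ described above.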
Differences in the relative level of accent tones can also result from differences in the emphasis on the material they are associated with. This is why the middle H* in Figure 1 is lower than the other two, for example.

A second class of rules computes transitions between one target and the next. These fill in the F0 contour, and are responsible for the F0 on syllables which carry no tone. Transitions are not always monotonic; in Figure 1, for example, the F0 dips between each pair of H accents. Such dipping can be found between two targets which are above the low part of the range. Its extent seems to depend on the time-frequency separation of the targets.

In certain circumstances, a single tone gives rise to a flat stretch in the F0 contour. For example, the phrase accent in Figure 3A has spread over two words. This phenomenon could be treated either at a phonological level, by linking the tone to a large number of syllables, or at a phonetic level, by positing a sustained style of transition. There are some interesting theoretical points here, but they do not seem to affect the design of an intonation recognizer. Note that the rules just described all operate in a small window, as defined on the sequence of tonal units. To a good approximation, the realization of a given tonal element can be computed without look-ahead, and looking back no further than the previous one. Of course, the window size could never be stated so simply with respect to the segmental string; two pitch accents could, for example, be squeezed onto adjacent syllables or separated by many syllables. One of the crucial assumptions of the work, taken from autosegmental and metrical phonology, is that the tonal string can be projected off the segmental string. The recognition system will make strong use of the locality constraint that this projection makes possible.

2.3 Summary

The major theoretical innovations of the description just sketched have important computational consequences. The theory has only two tones, L and H, whereas earlier tone-level theories had four. In combination with expressive variation in pitch range, a four tone system has too many degrees of freedom for a transcription to be recoverable, in general, from the F0 contour. Reducing the inventory to two tones raises the hope of reducing the level of ambiguity to that ordinarily found in natural language. The claim that implementation rules for tonal elements are local means that the quantitative evidence for the occurrence of a particular element is confined to a particular area of the F0 contour. This constraint will be used to simplify the control structure. A third claim, that phrasal tunes are constructed syntactically from a small number of elements, means that standard parsing methods are applicable to the recognition problem.
3. A recognition system

The recognition system as currently implemented has three components, described in the next three sections. First, the F0 contour is preprocessed with a view to removing pitch tracking errors and minimizing the effects of the speech segments. Then, a schematization in terms of events is established, by finding crucial features of the smoothed contour through analysis of the derivatives. Events are the interface between the quantitative and symbolic levels of description; they are discrete and relatively sparse with respect to the original contour, but carry with them relevant quantitative information. Parsing of events is carried out top down, with the aid of rules for matching the tonal elements to event sequences. Tonal elements may account for variable numbers of events, and different analyses of an ambiguous contour may divide up the event stream in different ways. Steps in the analysis of an example F0 contour are shown in Figure 5.

3.1 Preprocessing

The input to the system is an F0 contour computed by the Gold Rabiner algorithm (Gold and Rabiner, 1969). Two difficulties with this input make it unsuitable for immediate prosodic analysis. First, the pitch tracker in some cases returns values which are related to the true values by an integer multiplier or divisor. These stray values are fatal to any prosodic analysis if they survive in the input to the smoothing of the contour. This problem is addressed by imposing continuity constraints on the F0 contour. When a stray value is located, an attempt to find a multiplier or divisor which will bring it into line is made, and if this attempt fails, the stray value is deleted. In our experience, such continuity constraints are necessary to eliminate sporadic errors; without them, no amount of parameter tweaking suffices.

A second problem arises because the speech segments perturb the F0 contour; here, consonantal effects are of particular concern. There are no F0 values during voiceless segments. Glottal stops and voiced obstruents depress the F0 on both sides. In addition, voiceless obstruents raise the F0 at the beginning of a following vowel. Because of these effects, an attempt was made to remove F0 values in the immediate vicinity of obstruents. An adapted version of the Fleck and Liberman (1982) syllable peak finder controlled this clipping. Our modification worked outward from the syllabic peaks to find sonorant regions, and then retained the F0 values found there. In Figure 5A, the portions of the F0 contour remaining after this procedure are indicated by a heavier line. The retained portions of the contour are connected by linear interpolation. Following Hildreth and Marr's work on vision, the connected contour is smoothed by convolution with a Gaussian in order to permit analysis of the derivatives. The smoothed contour for the example is shown in Figure 5B.

[Figure 5: The example utterance is "Are legumes a good source of vitamins?". Panel A shows an unprocessed F0 contour; the placement of lettering indicates roughly the alignment of tune and text, and parts of the F0 contour which survive the continuity constraints and the clipping are drawn with a heavier line. Panel B shows the connected and smoothed F0 contour, together with its event characterization (maxima, minima, and a plateau). The two transcriptions of the contour are shown underneath; the alignment of tonal elements indicates what events each covers.]
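A schematic sketch of the two preprocessing steps follows. The tolerance, the candidate multipliers, and the smoothing width are illustrative guesses rather than the system's actual parameters, and the clipping step keyed to the Fleck and Liberman syllable peak finder is omitted.

```python
# An illustrative sketch (not the original implementation) of
# continuity-constrained octave-error correction and Gaussian
# smoothing of an F0 contour.

import numpy as np

def enforce_continuity(f0, tol=0.3):
    """Fix or delete tracker values off by an integer factor."""
    clean = [f0[0]]
    for v in f0[1:]:
        prev = clean[-1]
        if abs(v - prev) / prev <= tol:
            clean.append(v)
            continue
        # Try integer multipliers/divisors to bring the stray
        # value into line with its neighbor.
        for k in (2, 3):
            if abs(v / k - prev) / prev <= tol:
                clean.append(v / k)
                break
            if abs(v * k - prev) / prev <= tol:
                clean.append(v * k)
                break
        # If no factor works, the stray value is simply dropped.
    return np.array(clean)

def smooth(f0, sigma=5):
    """Smooth by convolution with a Gaussian, Marr-style."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return np.convolve(f0, g / g.sum(), mode="same")
```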
3.2 Schematization

Events in the contour are found by analysis of the first and second derivatives. The events of ultimate interest are maxima, minima, plateaus, and points of inflection. Roughly speaking, peaks correspond to H tones, some valleys are L tones, and points of inflection can arise through downstep, upstep, or a disparity in prominence between adjacent H accents. Plateaus, or level parts of the contour, can arise from tone spreading or from a sequence of two like tones. Events are implemented as structures which store quantitative information, such as location, F0 value, and derivative values.

Maxima and minima can be located as zeroes in the first derivative. Those which exhibit insufficient contrast with their local environment are suppressed; in regions of little change, such as that covered by the phrase accent in Figure 3A, this thresholding prevents minor fluctuations from being treated as prosodic. Plateaus are significant stretches of the contour which are as good as level. A plateau is created from a sequence of low contrast maxima and minima, or from a very broad peak or valley. In either case, the boundaries of the plateau are marked with events, whose type is relevant to the ultimate tonal analysis. These events are not located at absolute maxima or minima, which in nearly level stretches may fall a fair distance from points of prosodic significance. Instead, they are pushed outward to a near-maximum, or a near-minimum. The event locations in Figure 5B reflect this adjustment. Minima in the absolute slope (which form a subset of zero crossings in the second derivative) are retained as points of inflection if they contrast sufficiently in slope with the slope maxima on either side. In some cases, such points were engendered by smoothing from places where the original contour had a shelf. In many others, however, the shoulder in the original contour is a slope minimum, although a more prototypical realization of the same prosodic pattern would have a shelf. Presumably, this fact is due to the low pass characteristics of the articulatory system itself.

3.3 Parsing

Tonal analysis of the event stream is carried out by a top-down nondeterministic finite state parser, assisted by a set of verification rules. The grammar is a close relative of the transition network in Figure 1. (There is no effort to make distinctions which would require independent information about stress location, and provision is made for the case where the phrase accent and boundary tone collapse phonetically.) The verification rules relate tonal elements to sequences of events in the F0 contour. As each tonal element is hypothesized, it is checked against the event stream to see whether it plausibly extends the analysis hypothesized so far. The integration of successful local hypotheses into complete analyses is handled conventionally (see Woods 1973). The ontology of the verification rules is based on our understanding of the phonetic realization rules for tonal elements. Each rule characterizes the realization of a particular element or class of elements, given the immediate left context. Wider contexts are unnecessary, because the realization rules are claimed to be local. Correct management of chained computations, such as iterative downsteps, falls out automatically from the control structure. The verification rules refer both to the event types (e.g. "maximum", "inflection") and to values of a small vocabulary of predicates describing quantitative characteristics.
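As a concrete (and hedged) illustration of what one such predicate might look like, here is a sketch of a downstep test built on the recurrence given at the end of section 2. The Event structure, the constants d and r, and the tolerance are all invented for the example:

# Illustrative sketch of a verification predicate, not the actual rules.
class Event:
    def __init__(self, kind, time, f0):
        self.kind = kind    # 'max', 'min', 'plateau-edge', or 'inflection'
        self.time = time    # seconds
        self.f0 = f0        # Hz

def is_downstepped(prev, cur, d=0.7, r=90.0, tol=0.1):
    """Could cur realize a downstepped H, given the preceding H event?

    Tests the relation cur.f0 ~ d * (prev.f0 - r) + r.  Relations between
    events are used rather than absolute values, since absolute values
    shift with expressive variation in pitch range."""
    if prev.kind != 'max' or cur.kind not in ('max', 'inflection'):
        return False
    predicted = d * (prev.f0 - r) + r
    return abs(cur.f0 - predicted) <= tol * predicted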
The present system has five predicates, though a more detailed accounting of the F0 contour would require a few more. One returns a verdict on whether an event is in the correct relation to a preceding event to be considered downstepped. Another determines whether a minimum might be explained by a non-monotonic F0 transition, like that pointed out in Figure 1. In general, relations between crucial points are considered, rather than their absolute values. Even for a single speaker, absolute values are not very relevant to melodic analysis, because of expressive variation in pitch range. Our experiments showed that local relations, when stated correctly, are much more stable.

Timing differences result in multiple realizations for some tonal sequences. For example, the L* H H% sequence in Figure 5A comes out as a rise--plateau--rise. If the same sequence were compressed onto less segmental material, one would see a rise--inflection--rise, or even a single large rise. For this reason, the rules OR together several ways of accepting a given tonal hypothesis. As just indicated, these can involve different numbers of events. The transcription under Figure 5B indicates the two analyses returned by the system. Note that they differ in the total number of tonal elements, and in the number of events covered by the H phrase accent. The first analysis correctly reflects the speaker's intention. The second is consistent with the shape of the F0 contour, but would require a different phrasal stress pattern. Thus the location of the phrasal stress cannot be uniquely recovered from the F0 contour, although analysis of the F0 does constrain the possibilities.

4. Discussion and conclusions

4.1 Intellectual antecedents

The work described here has been greatly influenced by the work of Marr and his collaborators on vision. The schematization of the F0 contour has a family resemblance to their primal sketch, and I follow their suggestion that analysis of the derivatives is a useful step in making such a schematization.

Lea (1979) argues that stressed syllables and phrase boundaries can be located by setting a threshold on F0 changes. This procedure uses no representation of different melodic types, which are the main object of interest here. Its assumptions are commonly met, but break down in many perfectly well-formed English intonation patterns. Vires et al. (1977) use F0 in French to screen lexical hypotheses, by placing restrictions on the location of word boundaries. This procedure is motivated by the observation that the F0 contour constrains but does not uniquely determine the boundary locations. In English, F0 does not mark word boundaries, but there are somewhat comparable situations in which it constrains but does not determine an analysis of how the utterance is organized. However, the English prosodic system is much more complex than that of French, and so an implementation of this idea is accordingly more difficult.

4.2 Segmentation and labelling

The approach to segmentation used here contrasts strongly with that used in the past in phonemic analysis. Whereas the HWIM system, for example, proposed segmental boundaries bottom up (Woods et al., 1976), the system described here never establishes boundaries. For example, there is no point on the rise between a L* and a H* which is ever designated as the boundary between the two pitch accents. Whereas phonetic segments ordinarily carry only categorical information, the events found here are hybrids, with both categorical and quantitative information.
A kind of soft segmentation comes out, in the sense that a particular tonal element accounts for some particular sequence of events. Study of ambiguous contours indicates that this grouping of events cannot be carried out separately from labelling. Thus, there is no stage of analysis where the contour is segmented, even in this soft sense, but not labelled.

It is not hard to find examples suggesting that the approach taken here is also relevant for phonemic analysis. Consider the word "joy", shown in Figure 6. Here, the second formant falls from the palatal locus to a back vowel position, and then rises again for the off-glide. A different transcription involving two syllables might also be hypothesized; the second formant could be falling through a rather nondistinct vowel into a vocalized /l/, and then rising for a front vowel. Thus, we can only establish the correct segment count for this word by evaluating the hypothesis of a medial /l/. Even having done so, there is no argument for boundary locations. The multiple pass strategy used in the HWIM system appears to have been aimed at such problems, but does not really get at their root.

[Figure 6: A spectrogram of the word "joy", cut out of the sentence "We find joy in the simplest things." The example is taken from Zue et al. (1982).]

4.3 Problems

A number of defects in the current implementation have become apparent. In the example, the amount of clipping and smoothing needed to suppress segmental effects enough for parsing results in poor time alignment of the second transcription. The H* in this analysis is assigned to "source", whereas the researcher looking at the raw F0 contour would be inclined to put it on "gumes". In general, curves which are too smooth may still be insufficiently smooth to parse. An alternative approach based on Hildreth's suggestions about integration of different scale channels in vision was also investigated (Hildreth, 1980). Most of the obstacles she mentions were actually encountered, and no way was found to surmount them. Thus, I view the separation of segmental and prosodic effects on F0 as an open problem. Adding verification rules for segmental effects appears to be the most promising course.

Two classes of extraneous analyses generated by the system merit discussion. Some analyses, such as the second in Figure 5, violate the stress pattern. These are of interest, because they inform us about how much F0 by itself constrains the interpretation of stress. A second group, namely analyses which have too many tonal elements for the syllable count, is of less interest. A future implementation should eliminate these by referring to syllable peak locations.

Acknowledgements

I would like to thank Mitch Marcus and Dave Shipman for helpful discussions.

References

Fleck, M. and M. Y. Liberman (1982). "Test of an automatic syllable peak finder," J. Acoust. Soc. Am. 72, Suppl. 1, S78.

Gold, B. and L. Rabiner (1969). "Parallel Processing Techniques for Estimating Pitch Periods of Speech in the Time Domain," J. Acoust. Soc. Am. 46, 442-448.

Hildreth, E. (1980). "Implementation of a Theory of Edge Detection," Artificial Intelligence Laboratory Report AI-TR-579, MIT.

Lea, W. A. (1979). "Prosodic Aids to Speech Recognition," in W. A. Lea, ed., Trends in Speech Recognition. Prentice Hall, Englewood Cliffs, N.J., 166-205.

Liberman, M. Y. and J. Pierrehumbert (forthcoming in 1983). "Intonational Invariance under Changes in Pitch Range and Length."
Currently available as a Bell Labs Technical Memorandum.

Marr, D. (1982). Vision. W. H. Freeman and Co., San Francisco.

Pierrehumbert, J. (1980). "The Phonology and Phonetics of English Intonation," PhD dissertation, MIT (forthcoming from MIT Press).

Pierrehumbert, J. (1981). "Synthesizing intonation," J. Acoust. Soc. Am. 70, 985-995.

Vires, R., C. Le Corre, G. Mercier, and J. Vaissiere (1977). "Utilisation, pour la reconnaissance de la parole continue, de marqueurs prosodiques extraits de la frequence du fondamental," 7emes Journees d'Etudes sur la Parole, Groupement des Acousticiens de Langue Francaise, 353-363.

Woods, W. A. (1973). "An Experimental Parsing System for Transition Network Grammars," in R. Rustin, ed., Natural Language Processing. Algorithmics Press, Inc., New York.

Woods, W. A., M. Bates, G. Brown, B. Bruce, C. Cook, J. Klovstad, J. Makhoul, B. Nash-Webber, R. Schwartz, J. Wolf, and V. Zue (1976). "Speech Understanding Systems Final Report Volume II," BBN Report No. 3438.

Zue, V., F. Chen, and L. Lamel (1982). Speech Spectrogram Reading: Special Summer Course. MIT.
A Finite-State Parser for Use in Speech Recognition

Kenneth W. Church
NE43-307
Massachusetts Institute of Technology
Cambridge, MA 02139

This paper is divided into two parts.(1) The first section motivates the application of finite-state parsing techniques at the phonetic level in order to exploit certain classes of contextual constraints. In the second section, the parsing framework is extended in order to account for 'feature spreading' (e.g., agreement and co-articulation) in a natural way.

1. Parsing at the Phonetic Level

It is well known that phonemes have different acoustic/phonetic realizations depending on the context. For example, the phoneme /t/ is typically realized with a different allophone (phonetic variant) in syllable initial position than in syllable final position. In syllable initial position (e.g., Tom), /t/ is almost always released (with a strong burst of energy) and aspirated (with h-like noise), whereas in syllable final position (e.g., cat), /t/ is often unreleased and unaspirated. It is common practice in speech research to distinguish acoustic/phonetic properties that vary a great deal with context (e.g., release and aspiration) from those that are relatively invariant to context (e.g., place, manner and voicing).(2) In the past, the emphasis has been on invariants; allophonic variation is traditionally seen as problematic for recognition.

(1) "In most systems for sentence recognition, such modifications must be viewed as a kind of 'noise' that makes it more difficult to hypothesize lexical candidates given an input phonetic transcription. To see that this must be the case, we note that each phonological rule [in an example to be presented below] results in irreversible ambiguity - the phonological rule does not have a unique inverse that could be used to recover the underlying phonemic representation for a lexical item. For example, ... schwa vowels could be the first vowel in a word like 'about' or the surface realization of almost any English vowel appearing in a sufficiently destressed word. The tongue flap [D] could have come from a /t/ or a /d/." Klatt (MIT) [21, pp. 548-549]

This view of allophonic variation is representative of much of the speech recognition literature, especially during the ARPA speech project. One can find similar statements by Cole and Jakimik (CMU) [5] and by Jelinek (IBM) [17].

I prefer to think of variation as useful. It is well known that allophonic contrasts can be distinctive, as illustrated by the following famous minimal pairs, where the crucial distinctions seem to lie in the allophonic realization of the /t/:

(2a) a tease / at ease          aspirated / flapped
(2b) night rate / ni-trate      unreleased / retroflexed
(2c) great wine / gray twine    unreleased / rounded

This evidence suggests that allophonic variation provides a rich source of constraints on syllable structure and word stress.

1. This research was supported (in part) by the National Institutes of Health Grant No. 1 P01 LM 03374-01 and 03374-02 from the National Library of Medicine.

2. Place refers to the location of the constriction in the vocal tract. Examples include: labial (at the lips) /p, b, f, v, m/, velar /k, g, N/, dental (at the teeth) /s, z, t, d, l, n/, and palatal /sh, zh, ch, j/. Manner distinguishes among vowels, liquids and glides (e.g., /l, r, y, w/), fricatives (e.g., /s, z, f, v/), nasals (e.g., /n, m, N/), and stops (e.g., /p, t, k, b, d, g/). Voicing (periodic vibration of the vocal folds) distinguishes sounds like /b, d, g/ from sounds like /p, t, k/.
The recognizer to be discussed here (and partly implemented in Church [4]) is designed to exploit allophonic and phonotactic cues by parsing the input utterance into syllables and other suprasegmental constituents using phrase-structure parsing techniques.

1.1 An Example of Lexical Retrieval

It might be helpful to work out an example in order to illustrate how parsing can play a role in lexical retrieval. Consider the phonetic transcription, mentioned above in the citation from Klatt [20, p. 1346; 21, pp. 548-549]:

(3) [dIjxhIDItham]

It is desired to decode (3) into the string of words:

(4) Did you hit it to Tom?

In practice, the lexical retrieval problem is complicated by errors in the front end. However, even with an ideal error-free front end, it is difficult to decode (3) because, among other things, there are extensive rule-governed changes affecting the way that words are pronounced in different sentence contexts, as Klatt's example illustrates:

(5a) Palatalization of /d/ before /y/ in did you
(5b) Reduction of unstressed /u/ to schwa in you
(5c) Flapping of intervocalic /t/ in hit it
(5d) Reduction of schwa and devoicing of /u/ in to
(5e) Reduction of geminate /t/ in it to

These allophonic processes often appear to neutralize phonemic distinctions. For example, the voicing contrast between /t/ and /d/, which is usually distinctive, is almost completely lost in writer/rider, where both /t/ and /d/ are realized in American English with a tongue flap [D].

1.2 An Optimistic View of Neutralization

Fortunately, there are many fewer cases of true neutralization than it might seem. Even in writer/rider, the voicing contrast is not completely lost. The vowel in rider tends to be longer than the vowel in writer due to a general process that lengthens vowels before voiced consonants (e.g., /d/) and shortens them before unvoiced consonants (e.g., /t/). A similar lengthening argument can be used to separate /n/ and /nd/ (at least in some cases). It might be suggested that /n/ is merged with /nd/ by a /d/-deletion rule that applies in words like mend, wind (noun), wind (verb), and find. (Admittedly there is little if any direct acoustic evidence for a /d/ segment in this environment.) However, I suspect that these words can often be distinguished from men, win, wine, and fine mostly on the basis of the duration of the nasal murmur, which is lengthened in the presence of a voiced obstruent like /d/. Thus, this /d/-deletion process is probably not a true case of neutralization.

Recent studies in acoustic/phonetics seem to indicate that more and more cases of apparent neutralization can be separated as the field progresses. For instance, it has been said that /s/ merges with /sh/ in a context like gas shortage [12]. However, a recent experiment [27] suggests that the /s-sh/ sequence can be distinguished from /sh-sh/ (as in fish shortage) on the basis of a spectral tilt: the /s-sh/ spectrum is more /s/-like in the beginning and more /sh/-like at the end, whereas the /sh-sh/ spectrum is relatively constant throughout. A similar spectral tilt argument can be used to separate other cases of apparent gemination (e.g., /z-dh/ in a phrase like was the).

As a final example of apparent neutralization, consider the portion of the spectrogram in Figure 1, between 0.85 and 1.1 seconds. This corresponds to the two adjacent /t/s in Did you hit it to Tom? Klatt analyzed this region with a single geminated /t/. However, upon further investigation of the spectrum, I believe that there are acoustic cues for two segments.
Note especially the total energy, which displays two peaks at 0.95 and 1.02 seconds. On the basis of this evidence, I will replace Klatt's transcription (6a) with (6b):

(6a) [dIjxhIDItham]
(6b) [dIjxhIDIt tham]

1.3 Parsing and Matching

Even though I might be able to re-interpret many cases of apparent neutralization, it remains extremely difficult to "undo" the allophonic rules by inverse transformational parsing techniques. Let me suggest an alternative proposal. I will treat syllable structure as an intermediate level of representation between the input segment lattice and the output word lattice. In so doing, I have replaced the lexical retrieval problem with two (hopefully simpler) problems: (a) parse the segment lattice into syllable structure, and (b) match the resulting constituents against the lexicon. I will illustrate the approach with Fig. 1.

[Fig. 1: "Did you hit it to Tom?" -- spectrogram, waveform, and total energy trace over a time axis of roughly 0 to 1.6 seconds.]
Consider the pairs found in figure 2, where there are multiple arguments for assigning the crucial syllable boundary. In de-prive vs. dep-rivalion, for instance, the difference is revealed by the vowel argument above 5 and by the aspiration rule. 6 In addition, the stress contrast will probably be cor- related with a number of so-called 'suprasegmental' cues, e.g., duration, fundamental frequency, and intensity [81. In general, there seem to be a large number of multiple low level cues for syllable strt,cture. This observation, if correct, could be viewed as a form of a 'constituency hypothesis'. Just as syntacticians have argued for the constituent-hood of noun phrases, verb phrases and sentences on the grounds that these constituents seem to capture crucial linguistic generalizations (e.g., question formation, wh-movement), so too, I might argue (along with certain phonologists such as Kahn [13]) that syllables, onsets, and rhymes are constituents because they also capture important generalizations such as aspiration, tensing and laxing. If this constituency hypothesis for phonology is correct (and I believe Fig. 2. Some Structural Contrnsts r ! _w t2 de-prive dep-rivation t a-ttribute att-ribute li de-crease dec-riment b cele-bration celcb-rity d a-ddress add-tess g de-grade deg-radation di-plomacy dip-lumatic de-cline a-cquire dec-lination acq-uisition o-bligatory ob-ligation 4. Personally. 1 favor the first alternative: after years of ,.,.smessmg Victor Zue read spectrograms. I have become most tmpressed with the richness of low level phonetic cues. 5. The syllable de. is open because the vowel is tense (diphthongizcd): dep" is dosed because the vowel is lax 6. lhe /p/ m -prtve is syllable inttml because it ts a.sptrated whereas the /p/ in dep" is s) liable final because it is unaspirated. 93 that it is) then it seems F~atural to propose a syllabic parser fi)r proccssit~g speech, by analogy with sentence parsers that have bccome standard practicc in d~e natural laoguagc community for processing .~ext. 2. Parser Implementation and Feature Spreading A program has bcen implcmcntcd [41 which parses a lattice of phonetic segmcnts into a lattice of syllables and other phonological constituents. Except for its novcl mechanism for handling features, it is very much like a standard chart parser (e.g.. Earley's Algorithm lTD. P, ccall that a chart parser takes as input a sentence and a context-free grammar and produces as output a chart like that below, indicating the starting point and ending point of each phrase in the input string. lnput~ Sentenc(l: 0 They t are 2 flying 3 planes 4 Gram.mar: N "---* they V ---* are N --* tl¥ing A -"* flying V ---* flying N --~ planes S --* NP VP VP -..* V NP VP ---.~ V VP NP~ N NP~ APNP NP"-* VP AP -'* A ('n,,.rt: o o(} i!1} 2!{} I 2 3 # {Xt',N,they} {S} {S} {S} { } {VP.V.are) {VP} (VP} { } [ } {NP.VP,AP,N.V,A,flying| {NP.VP} ( } { } ( } {NP, N.planesl {} {} {} {} bLach entry in the chart represents the possible analyses of the input words between a start position (the row index) and a finish position (the column index). [-'or example, the entry {NP, VP} in Chart(2,4) represents two alternative analyses of the words between 2 and 4: [xp fi3ulg pia,esl add [vp flying planesl. .the same parsing methods can be used to find syllable structure from an input transcription. 
The same parsing methods can be used to find syllable structure from an input transcription.

Input Sentence: 0 dh 1 I 2 s 3 I 4 z 5 (this is)

Grammar:
onset -> dh | s | z
peak -> I
coda -> dh | s | z
syl -> (onset) peak (coda)

Chart (nonempty cells):
Chart(0,1) = {dh, onset, coda}    Chart(0,2) = {syl}    Chart(0,3) = {syl}
Chart(1,2) = {I, peak, syl}       Chart(1,3) = {syl}
Chart(2,3) = {s, onset, coda}     Chart(2,4) = {syl}    Chart(2,5) = {syl}
Chart(3,4) = {I, peak, syl}       Chart(3,5) = {syl}
Chart(4,5) = {z, onset, coda}

This chart shows that the input sentence can be decomposed into two syllables, one from 0 to 3 (this) and another one from 3 to 5 (is). Alternatively, the input sentence can be decomposed into [dhI][sIz]. In this way, standard chart parsing techniques can be adopted to process allophonic and phonotactic constraints, if the constraints are reformulated in terms of a grammar.

How can allophonic and phonotactic constraints be cast in terms of context-free rules? In many cases, the constraints can be carried over in a straightforward way. For example, the following set of rules expresses the aspiration constraint discussed above. These rules allow aspiration in syllable initial position (under the onset node), but not in syllable final position (under the coda).

(11a) utterance -> syllable*
(11b) syllable -> (onset) peak (coda)
(11c) onset -> aspirated-t | aspirated-k | aspirated-p | ...
(11d) coda -> unreleased-t | unreleased-k | unreleased-p | ...

The aspiration constraint (as stated above) is relatively easy to cast in terms of context-free rules. Other allophonic and phonotactic processes may be more difficult.(7)

7. For example, there may be a problem with constraints that depend on rule ordering, since rule ordering is not supported in the context-free formalism. This topic is discussed at length in [4].

2.1 The Agreement Problem

In particular, context-free rules are generally considered to be awkward for expressing agreement facts. For example, in order to express subject-verb agreement in "pure" context-free rules, it is probably necessary to expand the rule S -> NP VP into two cases:

(12a) S -> singular-NP singular-VP    (singular case)
(12b) S -> plural-NP plural-VP        (plural case)

The agreement problem also arises in phonology. Consider the example of homorganic nasal clusters (e.g., camp, can't, sank), where the nasal agrees with the following obstruent in place of articulation. That is, the labial nasal /m/ is found before the labial stop /p/, the coronal nasal /n/ before the coronal stop /t/, and the velar nasal /N/ before the velar stop /k/. This constraint, like subject-verb agreement, poses a problem for pure unaugmented context-free rules; it seems to be necessary to expand out each of the three cases:

(13a) homorganic-nasal-cluster -> labial-nasal labial-obstruent
(13b) homorganic-nasal-cluster -> coronal-nasal coronal-obstruent
(13c) homorganic-nasal-cluster -> velar-nasal velar-obstruent

In an effort to alleviate this expansion problem, many researchers have proposed augmentations of various sorts (e.g., ATN registers [26], LFG constraint equations [16], GPSG meta-rules [11], local constraints [18], bit vectors [6, 22]). My own solution will be suggested after I have had a chance to describe the parser in further detail.

2.2 A Parser Based on Matrix Operations

This section will show how the grammar can be implemented in terms of operations on binary matrices. Suppose that the chart is decomposed into a sum of binary matrices:

(14) Chart = syl Msyl + onset Monset + peak Mpeak + ...
where Msyl is a binary matrix(8) describing the location of syllables and Monset is a binary matrix describing the location of onsets, and so forth. Each of these binary matrices has a 1 in position (i,j) if there is a constituent of the appropriate part of speech spanning from the ith position in the input sentence to the jth position.(9) (See Fig. 3.) Phrase-structure rules will be implemented with simple operations on these binary matrices. For example, the homorganic rule (13) could be implemented as:

(15) (setq homorganic-nasal-lattice
       (M+ (M* (phoneme-lattice #/m) labial-lattice)
           (M* (phoneme-lattice #/n) coronal-lattice)
           (M* (phoneme-lattice #/G) velar-lattice)))

illustrating the use of M+ (matrix addition) to express the union of several alternatives and M* (matrix multiplication) to express the concatenation of subparts. It is well known that any finite-state grammar could be implemented in this way with just three matrix operations: M*, M+, and M** (transitive closure). If context-free power were required, Valiant's algorithm [25] could be employed. However, since there doesn't seem to be a need for additional generative capacity in speech applications, the system is restricted to handle only the simpler finite state case.(10)

Fig. 3. Msyl, Monset, and Mrhyme for "0 dh 1 I 2 s 3 I 4 z 5":

Msyl      Monset    Mrhyme
001100    010000    000000
001100    000000    001100
000011    000100    000000
000011    000000    000011
000000    000001    000000
000000    000000    000000

The matrices tend to be very sparse (almost entirely full of 0's) because syllable grammars are highly constrained. In principle, there could be n^2 entries. However, it can be shown that e (the number of 1's) is linearly related to n because syllables have finite length. In Church [4], I sharpen this result by arguing that e tends to be bounded by 4n as a consequence of a phonotactic principle known as sonority. Many more edges will be ruled out by a number of other linguistic constraints mentioned above: voicing and place assimilation, aspiration, flapping, etc. In short, these matrices are sparse because allophonic and phonotactic constraints are useful.

8. These matrices will sometimes be called segmentation lattices for historical reasons. Technically, these matrices need not conform to the restrictions of a lattice, and therefore the weaker term graph is more correct.

9. In a probabilistic framework, one could replace all of the 1's and 0's with probabilities. A high probability in location (i,j) of the syllable matrix would say that there probably is a syllable from position i to position j; a low probability would say that there probably isn't a syllable between i and j. Most of the following applies to probability matrices as well as binary matrices, though the probability matrices may be less sparse and consequently less efficient.

10. I personally hold a much more controversial position, that finite state grammars are sufficient for most, if not all, natural language tasks [3].

2.3 Feature Manipulation

Although "pure" unaugmented finite state grammars may be adequate for speech applications (in the weak generative capacity sense), I may, nevertheless, wish to introduce additional mechanism in order to account for agreement facts in a natural way. As discussed above, the formulation of the homorganic rule in (15) is unattractive because it splits the rule into three cases, one for each place of articulation. It would be preferable to state the agreement constraint just once, by defining a homorganic nasal cluster to be a nasal cluster subject to place assimilation.
In my language of matrix operations, I can say just exactly that:

(16) (setq homorganic-nasal-cluster-lattice
       (M& nasal-cluster-lattice place-assimilation-lattice))

where M& (element-wise intersection) implements the "subject to" constraint. Nasal-cluster and place-assimilation are defined as:

(17a) (setq nasal-cluster-lattice
        (M* nasal-lattice obstruent-lattice))

(17b) (setq place-assimilation-lattice
        (M+ (M** labial-lattice)
            (M** dental-lattice)
            (M** velar-lattice)))

In this way, M& seems to be an attractive solution to the agreement problem. In addition, M& might also shed some light on co-articulation, another problem of 'feature spreading'. Co-articulation (articulation of multiple phonemes at the same time) makes it extremely difficult (perhaps impossible) to segment the speech waveform into phoneme-sized units. To account for co-articulation, Fujimura suggests that place, manner and other articulatory features be thought of as asynchronous processes, which have a certain amount of freedom to overlap in time.

(18a) "Speech is commonly viewed as the result of concatenating phonetic segments. In most discussions of the temporal structure of speech, a segment in such a model is assumed to represent a phoneme-sized phonetic unit, which possesses an inherent [invariant] target value in terms of articulation or acoustic manifestation. Any deviation from such an interpretation of observed phenomena requires special attention ... [B]ased on some preliminary results of X-ray microbeam studies [which associate lip, tongue and jaw movements with phonetic events in the utterance], it will be suggested that understanding articulatory processes, which are inherently multi-dimensional [and (more or less) asynchronous], may be essential for a successful description of temporal structures of speech." [9, p. 66]

In light of Fujimura's suggestion, I might re-interpret my parser as a highly parallel feature-based asynchronous architecture. For example, the parser can process homorganic nasal clusters by processing place and manner phrases in parallel, and then synchronizing the results at the coda node with M&. That is, (17a) can be computed in parallel with (17b), and then the results are aligned when the coda is computed with (16), as illustrated below for the word tent. Imagine that the front end produces the following analysis:

(19)              t     e     n     t
  dental:        |--|        |---------|
  vowel:             |-----|
  stop:          |--|             |----|
  nasalization:          |-----|

where many of the features overlap in an asynchronous way. The parser will correctly locate the coda by intersecting the nasal cluster lattice (computed with (17a)) with the homorganic lattice (computed with (17b)).

(20)              t     e     n     t
  nasal cluster:             |---------|
  homorganic:                |---------|
  coda:                      |---------|

This parser is a bold departure from standard practice in two respects: (1) the input stream is feature-based rather than segmental, and (2) the output parse is a heterarchy of overlapping constituents (e.g., place and manner phrases) as opposed to a list of hierarchical parse-trees. I find these two modifications most exciting and worthy of further investigation.

In summary, two points have been made. First, I suggested the use of parsing techniques at the segmental/feature level in speech applications. Secondly, I introduced M& as a possible solution to the agreement/co-articulation problem.
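To make these operations concrete, here is a hedged sketch in Python rather than the paper's Lisp. The names mirror (15)-(17), but the encoding (a lattice as the set of (i,j) spans whose matrix entry is 1) and the toy input are mine:

# Illustrative sketch of the lattice operations (not the original code).
# A "lattice" is the set of (start, end) spans whose matrix entry is 1.

def M_plus(*lats):                      # union of alternatives
    return set().union(*lats)

def M_star(a, b):                       # concatenation of subparts
    return {(i, k) for (i, j) in a for (j2, k) in b if j == j2}

def M_closure(a):                       # transitive closure: one or more
    result, frontier = set(a), set(a)
    while frontier:
        frontier = M_star(frontier, a) - result
        result |= frontier
    return result

def M_and(a, b):                        # element-wise intersection
    return a & b

# Input "tent": t(0,1) e(1,2) n(2,3) t(3,4), with t and n dental here.
nasal, obstruent = {(2, 3)}, {(3, 4)}
dental = {(0, 1), (2, 3), (3, 4)}
labial = velar = set()

nasal_cluster = M_star(nasal, obstruent)                     # (17a)
place_assim = M_plus(M_closure(labial), M_closure(dental),
                     M_closure(velar))                       # (17b)
coda = M_and(nasal_cluster, place_assim)                     # (16)
assert coda == {(2, 4)}

On this encoding the homorganic check falls out of the intersection: the nasal cluster span survives in (16) only if some single place-of-articulation run covers the same span.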
3. Acknowledgements

I have received a considerable amount of help and support over the course of this project. Let me mention just a few of the people that I should thank: Jon Allen, Glenn Burke, Francine Chen, Scott Cyphers, Sarah Ferguson, Margaret Fleck, Dan Huttenlocher, Jay Keyser, Lori Lamel, Ramesh Patil, Janet Pierrehumbert, Dave Shipman, Pete Szolovits, Meg Withgott and Victor Zue.

References

1. Barnwell, T., An Algorithm for Segment Durations in a Reading Machine Context, unpublished doctoral dissertation, Department of Electrical Engineering and Computer Science, MIT, 1970.
2. Chomsky, N. and Halle, M., The Sound Pattern of English, Harper & Row, 1968.
3. Church, K., On Memory Limitations in Natural Language Processing, MS Thesis, MIT, MIT/LCS/TR-245, 1980 (also available from Indiana University Linguistics Club).
4. Church, K., Phrase-Structure Parsing: A Method for Taking Advantage of Allophonic Constraints, unpublished doctoral dissertation, Department of Electrical Engineering and Computer Science, MIT, 1983 (also to appear, LCS and RLE publications, MIT).
5. Cole, R., and Jakimik, J., A Model of Speech Perception, in R. Cole (ed.), Perception and Production of Fluent Speech, Lawrence Erlbaum, Hillsdale, N.J., 1980.
6. Dostert, B., and Thompson, F., How Features Resolve Syntactic Ambiguity, in Proceedings of the Symposium on Information Storage and Retrieval, Minker, J., and Rosenfeld, S. (eds.), 1971.
7. Earley, J., An Efficient Context-Free Parsing Algorithm, CACM, 13:2, February 1970.
8. Fry, D., Duration and Intensity as Physical Correlates of Linguistic Stress, JASA 27:4, 1955 (reprinted in Lehiste (ed.), Readings in Acoustic Phonetics, MIT Press, 1967).
9. Fujimura, O., Temporal Organization of Articulatory Movements as a Multidimensional Phrasal Structure, Phonetica, 33, pp. 66-83, 1981.
10. Fujimura, O., and Lovins, J., Syllables as Concatenative Phonetic Units, Indiana University Linguistics Club, 1982.
11. Gazdar, G., Phrase Structure Grammar, in P. Jacobson and G. Pullum (eds.), The Nature of Syntactic Representation, D. Reidel, Dordrecht, in press, 1982.
12. Heffner, R., General Phonetics, The University of Wisconsin Press, 1960.
13. Kahn, D., Syllable-Based Generalizations in English Phonology, Indiana University Linguistics Club, 1976.
14. Kiparsky, P., Remarks on the Metrical Structure of the Syllable, in W. Dressler (ed.), Phonologica 1980: Proceedings of the Fourth International Phonology Meeting, 1981.
15. Kiparsky, P., Metrical Structure Assignment is Cyclic, Linguistic Inquiry, 10, pp. 421-441, 1979.
16. Kaplan, R. and Bresnan, J., Lexical-Functional Grammar: A Formal System for Grammatical Representation, in Bresnan (ed.), The Mental Representation of Grammatical Relations, MIT Press, 1982.
17. Jelinek, F., course notes, MIT, 1982.
18. Joshi, A., and Levy, L., Phrase Structure Trees Bear More Fruit Than You Would Have Thought, AJCL, 8:1, 1982.
19. Klatt, D., Word Verification in a Speech Understanding System, in R. Reddy (ed.), Speech Recognition, Invited Papers Presented at the 1974 IEEE Symposium, Academic Press, pp. 321-344, 1974.
20. Klatt, D., Review of the ARPA Speech Understanding Project, JASA, 62:6, December 1977.
21. Klatt, D., Scriber and Lafs: Two New Approaches to Speech Analysis, chapter 25 in W. Lea, Trends in Speech Recognition, Prentice-Hall, 1980.
22. Martin, W., Church, K., and Patil, R., Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results, MIT/LCS/TR-261, 1981 (also to appear in L. Bolc (ed.), Natural Language Parsing Systems, Macmillan, London).
23. Reddy, R., Speech Recognition by Machine: A Review, Proceedings of the IEEE, pp. 501-531, April 1976.
24. Smith, A., Word Hypothesization in the Hearsay-II Speech System, Proc. IEEE Int. Conf. ASSP, pp. 549-552, 1976.
25. Valiant, L., General Context Free Recognition in Less Than Cubic Time, J. Computer and System Sciences 10, pp. 308-315, 1975.
26. Woods, W., Transition Network Grammars for Natural Language Analysis, CACM, 13:10, 1970.
27. Zue, V., and Shattuck-Hufnagel, S., When is a /sh/ not a /sh/?, ASA, Atlanta, 1980.
On the Mathematical Properties of Linguistic Theories

C. Raymond Perrault
Dept. of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 1A4

ABSTRACT

Meta-theoretical results on the decidability, generative capacity, and recognition complexity of several syntactic theories are surveyed. These include context-free grammars, transformational grammars, lexical functional grammars, generalized phrase structure grammars, and tree adjunct grammars.

This research was sponsored by the Natural Sciences and Engineering Research Council of Canada under Grant A9285.

1. Introduction

The development of new formalisms in which to express linguistic theories has been accompanied, at least since Chomsky and Miller's early work on context-free languages, by the study of their meta-theory. In particular, numerous results on the decidability, generative capacity, and more recently the complexity of recognition of these formalisms have been published (and rumoured!). Strangely enough, much less attention seems to have been devoted to a discussion of the significance of these mathematical results. As a preliminary to the panel on formal properties which will address the significance issue, it seemed appropriate to survey the existing results. Such is the modest goal of this paper.

We will consider context-free languages, transformational grammars, lexical functional grammars, generalized phrase structure grammars, and tree adjunct grammars. Although we will not examine them here, formal studies of other syntactic theories have been undertaken: e.g. Warren [51] for Montague's PTQ [30], and Borgida [7] for the stratificational grammars of Lamb [25]. There follows a brief summary of some comments in the literature about related empirical issues, but we avoid entirely the issue of whether one theory is more descriptively adequate than another.

2. Preliminary Definitions

We assume the reader is familiar with the basic definitions of regular, context-free (CF), context-sensitive (CS), recursive, and recursively enumerable (r.e.) languages and with their accepters as can be found in [?]. Some elementary definitions from complexity theory may be useful. Further details may be found in [2].

Complexity theory is the study of the resources required of algorithms, usually space and time. Let f(x) be a function, say the recognition function for a language L. The most interesting results we could obtain about f would be a lower bound on the resources needed to compute f on a machine of a given architecture, say a von Neumann computer or a parallel array of neurons. These results over whole classes of machines are very difficult to obtain, and none of any significance exist for parsing problems.

Restricting ourselves to a specific machine model and an algorithm M for f, we can ask about the cost (e.g. time or space) c(x) of executing M on a specific input x. Typically c is too fine-grained to be useful: what one studies instead is a function c_w whose argument is an integer n denoting the size of the input to M, and which gives some measure of the cost of processing inputs of length n. Complexity theorists have been most interested in the asymptotic behaviour of c_w, i.e. the behaviour of c_w as n gets large. If one is interested in upper bounds on the behaviour of M, one usually defines c_w(n) as the maximum of c(x) over all inputs x of size n. This is called the worst-case complexity function for M.
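In symbols, with |x| the size of input x:

\[ c_w(n) \;=\; \max_{|x| = n} c(x). \]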
Notice that other definitions are possible: one could define the expected complexity function c_e(n) for M as the average of c(x) over all inputs x of length n. c_e might be more useful than c_w if one had an idea of what the distribution of inputs to M could be. Unfortunately, the introduction of probabilistic considerations makes the study of expected complexity technically more difficult than that of worst case complexity. For a given problem, expected and worst case measures may be quite different.

It is quite difficult to get detailed descriptions of c_w, and for many purposes a cruder estimate is sufficient. The next abstraction involves "lumping" classes of c_w functions into simpler ones that more clearly demonstrate their asymptotic behaviour and are easier to manipulate. This is the purpose of O-notation. Let f(n) and g(n) be two functions. f is said to be O(g) if a constant multiple of g is an upper bound for f, for all but a finite number of values of n. More precisely, f is O(g) if there are constants K and n_0 such that for all n > n_0, f(n) <= K g(n). Given an algorithm M, we will say that its worst-case time complexity is O(g) if the worst-case time cost function c_w(n) for M is O(g). Notice that this merely says that almost all inputs to M of size n can be processed in time at most a constant times g(n). It does not say that all inputs require g(n) time, or even that any do, even on M, let alone on any other machine that implements f. Also, if two algorithms A_1 and A_2 are available for a function f, and if their worst-case complexity can be given respectively as O(g_1) and O(g_2), with g_1 < g_2, it may still be the case that for a large number of cases (maybe even for all cases one is likely to encounter in practice) A_2 will be the preferable algorithm, simply because the constant K_2 for g_2 may be much smaller than K_1 for g_1.
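Collecting the O-notation definition in symbols:

\[ f \in O(g) \;\iff\; \exists K, n_0 \;\; \forall n > n_0 :\; f(n) \le K \cdot g(n). \]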
If we view these processors as connected together in a circuit, vath inputs values entering at one end and outputs being pro- duced at the other, then a problem that has a solution on a ssq~ential machme in polynomial time and in space s w111 have a solution on a paraLLeL machine with a polyno- mial number of processors and ci~-c~ da-ptA (or max- Lmum number of processors data must be passed through from input to output) O(s 2) . Since the depth of a parallel circuit corresponds to the (parallel) ~/~te required to complete the computation, this means that a[gorlthms with sequential solutions requiring small space (such as deterrnimstic CSLs) have fast parallel solutions. For a comprehensive survey of parallel computation, see Cook[9]. 3. Context-Free languages. Recognition techmques for context-free languages are well-known ~3]. The so-called "CK~ ~' or "dTnarmc pro- gramming" method is attributed by Hays [~-51 to J Cocke, and Lt was discovered mdependentLy by Kasami ~5~.] and Younger [53] who showed it to be O(nJ). It requires the grarm-nar to be in Chomsky Normal Form, and putting an arbitrary grammar in CNF may square the size of the grammar. Ear[ey's algorithm recognizes strings in arbitrary CFGs in tlme O(n 3) and space O(rt2), and in time O(n 2) for unambiguous CF'Gs. Graham, Harrison and Ruzzo [/3] glve an algorithm that tlnifies ~ and Ear{ey's [/0] algo- rithm, and discuss implementation details. Valiant [50] showed how to Interpret the Ck'Y algo- rithm as the finding of the transitive closure of a matrix and thus reduced CF recognition to matrix multiphca- tion, for which sub-cubic aJgorithms exist. Because of the enormous constants of proport,onality associated with thls method, it is not likely to be of much practical use, either an implementation method or as a descrtp- tlon of the function of the brain. Ruzzo [55] has shown how CFLs can be recognized by boolean circuits of depth O(Log(n)2), and thus that paral- lel recognition can be done in time O(log(~)e). The required circuit has size polynomial in ~. So as not to get mystified by the uppe~- bs~nW2 on CF recogmtion, it is useful to remember that no known CFL requires more than linear time, nor is there a (non- constructive) proof of the existence of such a larg "-.=~. For an empirical comparison of various parsing methods, see Slocum [44]. 4. Tran~ormational Gram.mr. From its earliest days, discussions of transforma- t/onal grammar (TG) have included mention of matters computational. Peters and Ritchie [3S] provided the first non-trivial results on the generative power of TGs. Their model reflects the "Aspects" version quite closely, including transformations that could move and add constituents and delete them subject to recoverability. All transforma- tions are obligatory, and applied cycl)cally from the bot- tom up. They show that every recursively enumerable (re.) set can be generated by a TC using a conte×t- sensitive base. The proof ts quite simple: the right-hand sides of the type-0 rules that generate the r.e. set are padded with a new "blank" symbol to make them at least as long as their left-hand sides. Rules are added to allow the blank symbols to commute with all others. These context-sensitive rules are then used as the base of a T0 whose only transformation deletes the blank symbols. 
Thus if the transformational formalism itself is supposed to characterize the grammatical strings of possible natural languages, then the only languages being excluded are those which are not enumerable under any model of computation.

At the expense of a considerably more intricate argument, the previous result can be strengthened [32] to show that every r.e. set can be generated by a context-free based TG, as long as a filter (intersection with a regular set) can be applied to the phrase-markers output by the transformations. In fact, the base grammar can be independent of the language being generated. The proof involves simulating a TM by a TG. The transformations first generate an "input tape" for the TM being simulated, and then apply the TM productions, one per cycle of the grammar. The filter insures that the base grammar generated just as many S nodes as necessary to generate the input string and do the simulation. Again, if the transformational formalism is supposed to characterize the possible natural languages, then the Universal Base Hypothesis [31], according to which all natural languages can be generated from the same base grammar, is empirically vacuous: any recursively enumerable language can.

Several attempts were then made to find a restricted form of the transformational model that was descriptively adequate and yet whose generated languages are recursive (see e.g. [27]). Since a key part of the proof in [32] involves the use of a filter on the final derivation trees, Peters and Ritchie examined the consequences of forbidding final filtering [35]. They show that if S is the only recursive symbol in the CF base then the generated language L is predictably enumerable and exponentially bounded. A language L is predictably enumerable if there is an "easily" computable function t(n) that gives an upper bound on the number of tape squares needed by its enumerating TM to enumerate the first n elements of L. L is exponentially bounded if there is a constant K such that for every string z in L there is another string z' in L whose length is at most K times the length of z. The class of non-filtering languages is quite unusual, including all the CFLs (obviously), but also some (but not all) CSLs, some (but not all) recursive languages, and some (but not all) r.e. languages.

The source of non-recursivity in transformationally generated languages is that transformations can delete arbitrarily large parts of the tree, thus producing surface trees arbitrarily smaller than the deep structure trees they were derived from. This is what Chomsky's recoverability of deletions condition was meant to avoid. In his thesis, Petrick [36] defines the following terminal-length-increasing condition on transformational derivations: consider two p-markers from a derivation, where the second is derived from the first by applying the cycle of transformations to a subtree t1, producing the subtree t1'. Continuing the derivation, apply the cycle to a larger tree t2, yielding tree t2'. [Tree diagrams omitted.] A derivation satisfies the terminal-length-increasing condition if the yield of t2' is always longer than the yield of t1'. Petrick shows that if all recursion in the base "passes through S" and if all derivations satisfy the terminal-length-increasing condition, then the generated language is recursive. Using a slightly more restricted model of transformations, Rounds [42] strengthens this result by showing that the resulting languages are in fact context-sensitive.
In an unpublished paper, Myhill shows that if the condition is weakened to terminal-length-non-decreasing, then the resulting languages can be recognized in space at most exponential in the length of the input. This implies that the recognition can be done in at most double-exponential time, but Rounds [?] shows not only that recognition can be done in exponential time, but that every language recognizable in exponential time can be generated by a TG satisfying the terminal-length-non-decreasing condition and recoverability of deletions. This is a very strong result, because of the closure properties of the class of exponential-time languages. To see why this is so requires a few more definitions.

Let P be the class of all languages that can be recognized in polynomial time on a deterministic TM, and NP the class of all languages that can be recognized in polynomial time on a non-deterministic TM. P is obviously contained in NP, but the converse is not known, although there is much evidence that it is false. There is a class of problems, the so-called NP-complete problems, which are in NP and "as difficult" as any problem in NP in the following sense: if any of them could be shown to be in P, then all the problems in NP would also be in P. One way to show that a language L is NP-complete is to show that L is in NP and that every other language L0 in NP can be polynomially transformed into L, i.e. that there is a deterministic TM, operating in polynomial time, that will transform an input w0 to L0 into an input w to L such that w0 is in L0 if and only if w is in L. In practice, to show that a language is NP-complete, one shows that it is in NP, and that some already-known NP-complete language can be polynomially transformed to it.

All the known NP-complete languages can be recognized in exponential time on a deterministic machine, and none are known to have sub-exponential solutions. Thus, since the restricted transformational languages of Rounds characterize the exponential-time languages, if all of them were to be in P, then P would be equal to NP. Putting it another way, if P is not equal to NP, then some transformational languages (even those satisfying the terminal-length-non-decreasing condition) have no "tractable" (i.e. polynomial time) recognition problems on any deterministic TM. Note that this result also holds for all the other known sequential models of computation, and even for parallel machines with as many as a polynomial number of processors.

5. Lexical Functional Grammar

In part, transformational grammar seeks to account for a range of constraints or dependencies within sentences. Of particular interest are subcategorization dependencies and predicate-argument dependencies. These dependencies can hold over arbitrarily large distances. Several recent theories suggest different ways of accounting for these dependencies, but without making use of transformations. We will examine three of these, Lexical Functional Grammar, Generalized Phrase Structure Grammar, and Tree Adjunct Grammars, in the next few sections.

Lexical Functional Grammar (LFG) of Kaplan and Bresnan [24] aims to provide a descriptively adequate syntactic formalism without transformations. All the work done by transformations is instead encoded in structures in the lexicon and in links established between nodes in the constituent structure. LFG languages are CS and properly include the CFLs [24].
Berwick [5] shows that a set of strings whose recognition problem is known to be NP-complete, namely the set of satisfiable boolean formulas, is an LFG language. Therefore, as was the case for Rounds's restricted class of TGs, if P is not equal to NP, then some languages generated by LFGs do not have polynomial time recognition algorithms. Indeed, only quite "basic" parts of the LFG mechanism are necessary to the reduction. This includes mechanisms necessary for feature agreement, for forcing verbs to take certain cases, and lexical ambiguity. Thus no simple change to the formalism is likely to avoid the combinatorial consequences of the full mechanism.

Berwick has also examined the relation between LFG and the class of languages generated by indexed grammars [1], a class known to be a proper subset of the CSLs, but including some NP-complete languages [42]. He claims (personal communication) that the indexed languages are a proper subset of the LFG languages.

6. Generalized Phrase Structure Grammar.

In a series of papers, Gerald Gazdar and his colleagues [11] have argued for a joint account of the syntax and semantics of English, like LFG in eschewing the use of transformations but unlike it in positing only one level of syntactic description. The syntactic apparatus is based on a non-standard interpretation of phrase-structure rules and on the use of meta-rules. The formal consequences of both these moves have been investigated.

6.1. Node Admissibility.

There are two ways of interpreting the function of CF rules. The first, and most usual, is as rules for rewriting strings. Derivation trees can then be seen as canonical representatives of classes of derivations producing the same string and differing only in the order of application of the same productions. The second interpretation of CF rules is as constraints on derivation trees: a legal derivation tree is one where each node is "admitted" by a rule, i.e., each node dominates a sequence of nodes in a way sanctioned by a rule. For CF rules, the two interpretations obviously generate the same strings via the same set of trees.

Following a suggestion of McCawley's, Peters and Ritchie [34] showed that if one considered context-sensitive rules from the node-admissibility point of view, the languages defined were still CF. Thus the use of CS rules in the base to impose sub-categorization restrictions, for example, does not increase the weak generative capacity of the base component. (For some different restrictions of context-sensitive rules that guarantee that only CFLs will be generated, see Baker [4].)

Rounds [41] gives a simpler proof of Peters and Ritchie's node-admissibility result using the techniques of tree-automata theory, a generalization to trees of finite state automata theory for strings. Just as a finite state automaton (FSA) accepts a string by reading it one character at a time, changing its state at each transition, a finite state tree automaton (FSTA) traverses trees, propagating states. The top-down FSTA "attaches" a starting state (from a finite set) to the root of the tree. Transitions are allowed by productions of the form (q, a, n) → (q1, ..., qn), such that if state q is being applied to a node labelled a and dominating n descendants, then state qi should be applied to its ith descendant. Acceptance occurs if all leaves of the tree end up labelled with states in the accepting subset.
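A minimal executable sketch may make the top-down FSTA concrete. The tree encoding, the states, and the transitions below are illustrative inventions, not drawn from the text:

def accepts_top_down(tree, start, transitions, accepting):
    """tree is a (label, children) pair.  States are propagated
    nondeterministically from the root downward; the tree is accepted
    iff every leaf can end up carrying a state in `accepting`."""
    def run(state, node):
        label, kids = node
        if not kids:                       # a leaf: check its final state
            return state in accepting
        # try every production (q, label, n) -> (q1, ..., qn)
        return any(all(run(qi, kid) for qi, kid in zip(succ, kids))
                   for succ in transitions.get((state, label, len(kids)), []))
    return run(start, tree)

# A toy two-state automaton, for illustration only:
trans = {("q", "S", 2): [("f", "f")]}
print(accepts_top_down(("S", [("a", []), ("b", [])]), "q", trans, {"f"}))
# -> True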
The bottom-up FSTA is similar: starting states are attached to the leaves of the tree and the productions are of the form (a, n, q1, ..., qn) → q, indicating that if a node labelled a dominates n descendants each labelled with states q1 to qn, then node a gets labelled with state q. Acceptance occurs when the root is labelled by a state from the subset of accepting states. As is the case with FSAs, FSTAs of both flavours can be either deterministic or non-deterministic. A set of trees is said to be recognizable if it is accepted by a non-deterministic bottom-up FSTA. Again as with FSAs, any set of trees accepted by a non-deterministic bottom-up FSTA is accepted by a deterministic bottom-up FSTA, but the result does not hold for top-down FSTAs, although the recognizable sets are exactly the languages recognized by non-deterministic top-down FSTAs.

A set of trees is local if it is the set of derivation trees of a CF grammar. Clearly, every local set is recognizable by a one-state bottom-up FSTA that checks at each node that it satisfies a CF production. Also, the yield of a recognizable set of trees (the set of strings it generates) is CF. Although not all recognizable sets are local, they can all be mapped into local sets by a simple (homomorphic) mapping.

Rounds's proof [41] that CS rules under node-admissibility generate only CFLs involves showing that the set of trees accepted by the rules is recognizable, i.e., that there is a non-deterministic bottom-up FSTA that can check at each node that some node-admissibility condition holds there. This requires checking that the "strictly context-free" part of the rule holds, and that some proper analysis of the tree passing through the node satisfies the "context-sensitive" part of the rule. The difficulty comes from the fact that the bottom-up automaton cannot generate the set of proper analyses, but must instead propagate (in its state set) the proper analysis conditions necessary to "admit" the nodes of its subtrees. It must, of course, also check that those rules get satisfied. A more intuitive proof using tree transducers as well as FSTAs is sketched in the Appendix.

Joshi and Levy [21] strengthened Peters and Ritchie's result by showing that the node admissibility conditions could also include arbitrary Boolean combinations of dominance conditions: a node could specify a bounded set of labels that must occur immediately above it along a path to the root, or immediately below it on a path to the frontier. In general, the CF grammars constructed in the proof of weak equivalence to the CS grammars under node admissibility are much larger than the original, and not useful for practical recognition. Joshi, Levy and Yueh [22], however, show how Earley's algorithm can be extended to a parser that uses the local constraints directly.

6.2. Metarules.

The second important mechanism used by Gazdar [11] is metarules, or rules that apply to rules to produce other rules. Using standard notation for CF rules, one example of a metarule that could replace the transformation known as "particle movement" is:

V → V NP Pt X  ⟹  V → V Pt NP[-PRO] X

X here is a variable behaving like variables in structural analyses of transformations. If such variables are restricted to being used as abbreviations, that is, if they are only allowed to range over a finite subset of strings over the vocabulary, then closing the grammar under the metarules produces only a finite set of derived rules, and thus the generative power of the formalism is not increased.
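Operationally, the closure regime can be sketched as follows. The code is our own illustration (the rule encoding and the particle-movement-style metarule are hypothetical), and it records, for each derived rule, which metarules have already figured in its history, in the spirit of Thompson's Finite Closure discussed below:

def finite_closure(basic_rules, metarules):
    """Close a rule set under metarules, letting each metarule apply at
    most once in the derivational history of any rule.  Rules are
    (number, lhs, rhs_tuple); a metarule is a function that returns a
    derived rule or None."""
    closed = set(basic_rules)
    agenda = [(rule, frozenset()) for rule in basic_rules]
    while agenda:
        rule, used = agenda.pop()
        for i, metarule in enumerate(metarules):
            if i in used:                  # already applied in this history
                continue
            derived = metarule(rule)
            if derived is not None and derived not in closed:
                closed.add(derived)
                agenda.append((derived, used | {i}))
    return closed

# Hypothetical metarule in the style of particle movement:
# V -> V NP Pt X  =>  V -> V Pt NP[-PRO] X
def particle_movement(rule):
    n, lhs, rhs = rule
    if lhs == "V" and rhs[:3] == ("V", "NP", "Pt"):
        return (n, "V", ("V", "Pt", "NP[-PRO]") + rhs[3:])
    return None

print(finite_closure({(1, "V", ("V", "NP", "Pt", "PP"))}, [particle_movement]))

With abbreviatory variables, the set of rules matching any template is finite, so the closure terminates with a finite rule set; letting X match unboundedly long strings is precisely what reopens the door to the pathologies discussed next.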
If, on the other hand, X is allowed to range over strings of unbounded length, as are the essential variables of transformational theory, then the consequences are less clear. It is well known, for example, that if the right-hand sides of phrase structure rules are allowed to be arbitrary regular expressions, then the generated languages are still context-free. Might something like this not be happening with essential variables in metarules? It turns out not.

The formal consequences of the presence of essential variables in metarules depend on the presence of another device, the so-called phantom categories. It may be convenient in formulating metarules to allow, in the left-hand sides of rules, occurrences of syntactic categories that are never introduced by the grammar, i.e., that never appear in the right-hand sides of rules. In standard CFGs, these are called useless categories, and rules containing them can simply be dropped, with no change in generative capacity. Not so with metarules: it is possible for metarules to rewrite rules containing phantom categories into rules without them. Such a device was proposed at one time as a way to implement passives in the GPSG framework.

Uszkoreit and Peters [49] have shown that essential variables in metarules are powerful devices indeed: CF grammars with metarules that use at most one essential variable and allow phantom categories can generate all recursively enumerable sets. Even if phantom categories are banned, as long as the use of at least one essential variable is allowed, some non-recursive sets can be generated. Possible restrictions on the use of metarules are suggested in Gazdar and Pullum [12]. Shieber et al. [45] discuss some empirical consequences of these moves.

7. Tree Adjunct Grammars.

The Tree Adjunct Grammars (TAGs) of Joshi and his colleagues present a different way of accounting for syntactic dependencies ([17], [19]). A TAG consists of two (finite) sets of (finite) trees, the centre trees and the adjunct trees. The centre trees correspond to the surface structures of the "kernel" sentences of the language. The root of each adjunct tree is labelled with a non-terminal symbol which also appears exactly once on the frontier of the tree. All other frontier nodes are labelled with terminal symbols. Derivations in TAGs are defined by repeated application of the operation of adjunction. If c is a centre tree containing an occurrence of a non-terminal A, and if a is an adjunct tree whose root (and one node n on the frontier) is labelled A, then the adjunction of a to c is performed by "detaching" from c the subtree t rooted at A, attaching a in its place, and reattaching t at node n. Adjunction may then be seen as a tree analogue of a context-free derivation for strings [40].

The string languages obtained by taking the yields of the tree languages generated by TAGs are called Tree Adjunct Languages, or TALs. In TAGs all long-distance dependencies are the result of adjunctions separating nodes that at one point in the derivation were "close". Both crossing and non-crossing dependencies can be represented [18]. The formal properties of TAGs are fully discussed in [20], [52], [23]. Of particular interest are the following. TALs properly contain the CFLs and are properly contained in the indexed languages, which in turn are properly contained in the CSLs.
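The adjunction operation itself is short enough to sketch directly; the (label, children) tree encoding, the "A*" marking of the adjunct tree's foot node, and the toy trees are our own conventions, not those of the cited papers:

def adjoin(centre, adjunct, path):
    """Adjoin `adjunct` at the node of `centre` addressed by `path` (a
    sequence of child indices): detach the subtree t rooted there,
    attach the adjunct in its place, and reattach t at the adjunct's
    foot node.  A full implementation would also check that the
    adjunct's root label matches the label of the addressed node."""
    label, children = centre
    if not path:
        return plug_foot(adjunct, centre)
    i, rest = path[0], path[1:]
    new_children = list(children)
    new_children[i] = adjoin(children[i], adjunct, rest)
    return (label, new_children)

def plug_foot(adjunct, subtree):
    label, children = adjunct
    if not children and label.endswith("*"):    # the foot node
        return subtree
    return (label, [plug_foot(c, subtree) for c in children])

centre = ("S", [("NP", []), ("VP", [])])
aux    = ("VP", [("VP*", []), ("Adv", [])])
print(adjoin(centre, aux, path=[1]))
# -> ('S', [('NP', []), ('VP', [('VP', []), ('Adv', [])])])

Repeating the adjunction at the inner VP node grows the tree without bound, which is how TAGs pull apart nodes that were "close" earlier in the derivation.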
Although the indexed languages contain NP-complete languages, TALs are much better behaved: Joshi and Yokomori report (personal communication) an O(n^4) recognition algorithm and conjecture that an O(n^3) bound may be possible.

8. A Pointer to Empirical Discussions.

The literature on the empirical issues underlying the formal results reported here is not extensive. Chomsky argues convincingly [8] that there is no argument for natural languages necessarily being recursive. This, of course, is different from the possibility that languages are contingently recursive. Putnam [39] gives three reasons he claims "point in this direction": (1) "speakers can presumably classify sentences as acceptable or unacceptable, deviant or non-deviant, et cetera, without reliance on extra-linguistic contexts. There are of course exceptions to this rule", (2) grammaticality judgements can be made for nonsense sentences, and (3) grammars can be learned. (2) and (3) are irrelevant and (1) contains its own counter-argument.

Peters and Ritchie [33] contains a suggestive but hardly open-and-shut case for contingent recursivity: (1) every TG has an exponentially bounded cycling function, and thus generates only recursive languages, (2) every natural language has a descriptively adequate TG, and (3) the complexity of languages investigated so far is typical of the class.

Hintikka [16] presents a very different argument against the recursivity of English based on the distribution of the words any and every. His account of why John knows everything is grammatical while John knows anything is not is that any can appear only in contexts where replacing it by every changes the meaning. Taking meaning to be logical equivalence, this means that grammaticality is dependent on the determination of logical equivalence of logical formulas, an undecidable problem. Chomsky [8] argues that a simpler solution is available, namely one that replaces logical equivalence by syntactic identity of some kind of logical form.

Pullum and Gazdar [38] is a thorough survey of, and argument against, published claims (mainly the "respectively" examples [26], Dutch cross-serial dependencies, and nominalization in Mohawk [37]) that some natural languages cannot be weakly generated by CF grammars. No claims are made about the strong adequacy of CFGs.

9. Seeking Significance.

When can the supporter of a weak syntactic formalism (i.e., one of low recognition complexity and low generative capacity) claim that it is superior to a competing, more powerful formalism? Linguistic theories can differ along several dimensions, with generative capacity and recognition complexity being only two (albeit related) ones. The evaluation must take into consideration at least the following others:

Coverage. Do the theories make the same grammatical predictions?

Extensibility. The linguistic theory of which the syntactic theory is a part will want to express well-formedness constraints other than syntactic ones. These constraints may be expressed over syntactic representations, or over different representations, presumably related to the syntactic ones. One theory may make this connection possible when another does not. This of course underlies the arguments for strong descriptive adequacy. Also relevant here is how the linguistic theory as a whole is decomposed. The syntactic theory can obviously be made simpler by transferring some of the explanatory burden to another constituent.
The classic example in programming languages is the constraint that all variables must be declared before they are used. This constraint cannot be imposed by a CFG but can be by an indexed grammar, at the cost of a dramatic increase in recognition complexity. Typically, however, the requirement is simply not considered part of "syntax", which thus remains CF, and is imposed separately; in this case the overall recognition complexity remains some low-order polynomial. Some arguments of this kind can be found in [38].

Separating the constraints into different sub-theories will not in general make the problem of recognizing strings that satisfy all the constraints any more efficient, but it may allow limiting the power of each constituent. To take an extreme example, every r.e. set is the homomorphic image of the intersection of two context-free languages.

Implementation. This is probably the most subtle set of issues determining the significance of the formal results, and I don't claim to understand them. Comparison between theories requires agreement between the machine models used to derive the complexity results. As mentioned above, the sequential models are all polynomially related, and no problem not having a polynomial time solution on a sequential machine is likely to have one on a parallel machine limited to at most a polynomial number of processors, at least if P is not equal to NP. Both these results restrict the improvement one can obtain by changing implementation, but are of little use in comparing algorithms of low complexity. Berwick and Weinberg [6] give examples of how algorithms of low complexity may have different implementations differing by large constant factors. In particular, changes in the form of the grammar and in its representation may have this effect.

But of more interest, I believe, is the fact that implementation is often accompanied by some form of resource limitation that has two effects. First, it is also a change in specification: a context-free parser implemented with a bounded stack recognizes only a finite-state language. Second, very special implementations can be used if one is willing to restrict the size of the problem to be solved, or even use special-purpose methods for limited problems. Marcus's parser [28] with its bounded look-ahead is a good example. Sentences parsable within the allowed look-ahead have "quick" parses, but some grammatical sentences, such as "garden path" sentences, cannot be recognized without an extension to the mechanism that would distort the complexity measures.

There is obviously much more of this story to be told. Allow me to speculate as to how it might go. We may end up with a space of linguistic theories, differing in the idealization of the data they assume, in the way they decompose constraints, and in the procedural specifications they postulate. (I take it that two theories may differ in that the second simply provides more detail than the first as to how constraints specified by the first are to be used.) Our observations, in particular our measurements of necessary resources, are drawn from the "ultimate implementation", but this does not mean that the "ultimately low-level theory" is necessarily the most informative (witness many examples in the physical sciences), or that less procedural theories are not useful stepping stones to more procedural ones.
It is also not clear that theories of different computational power may not be useful as descriptions of different parts of the syntactic apparatus. For example, it may be easier to learn statements of constraints within the framework of a general machine. The constraints, once learned, might then be subjected to transformation to produce more efficient special-purpose processors also imposing resource limitations. Indeed, the "possible languages" of the future may be more complex than the present ones, just as earlier ones may have been syntactically simpler. Were ancient languages regular?

Whatever we decide to make of existing formal results, it is clear that continuing contact with the complexity community is important. The driving problems there are the P = NP question, the determination of lower bounds, the study of time-space tradeoffs, and the complexity of parallel computations. We still have some methodological house-cleaning to do, but I don't see how we can avoid being affected by the outcome of their investigations.

ACKNOWLEDGEMENTS

Thanks to Bob Berwick, Aravind Joshi, Jim Hoover, and Stan Peters for their suggestions.

APPENDIX

Rounds [41] proves that context-sensitive rules under node-admissibility generate only context-free languages by constructing a non-deterministic bottom-up tree automaton to recognize the accepted trees. We sketch here a proof that makes use of several deterministic transducers instead.

FSTAs can be generalized so that instead of simply accepting or rejecting trees, they transform them, by adding constant trees and deleting or duplicating subtrees. Such devices are called finite state tree transducers (FSTTs), and like FSTAs they can be top-down or bottom-up. First motivated as models of syntax-directed translations for compilers, they have been extensively studied (e.g. [47], [48], [40]), but a simple subset is sufficient here.

The idea is this. Let T be the set of trees accepted by the CS-based grammar, and let t be in T. FSTTs can be used to label each node n of t with the set of all proper analyses passing through n. It will then be simple to check that each node satisfies one of the node admissibility conditions by sweeping through the labelled tree with a bottom-up FSTA.

The node labelling is done by two FSTTs, τ1 and τ2. Let m be the maximum length of any left or right context of any node admissibility condition. Thus we need only label nodes with sets of strings of length at most m, and over a finite alphabet there are only a finite number of such strings.

τ1 operates bottom-up on a tree t, and labels each node n of t with three sets Prefix(n), Suffix(n), and Yield(n) of proper analyses: if P is the set of all proper analyses of the subtree rooted at n, then Prefix(n) is the set of all substrings of length at most m that are prefixes of strings of P. Similarly, Suffix(n) is the set of all suffixes of length at most m, and Yield(n) is the set of all strings of P of length at most m. It can easily be shown that for any set of trees T, T is recognizable if and only if τ1(T) is.

Applying to the output of τ1, the second transducer τ2, operating top-down, labels each node n with all the proper analyses going through n, i.e., with a pair of sets of strings. The first set will contain all left contexts of node n and the second all right contexts. τ2 also preserves recognizability.
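The content of τ1's labels can be made concrete with a small bottom-up computation. The sketch below is ours, with a hypothetical tree encoding; a genuine transducer would of course carry these bounded sets in its finite state rather than compute them with unbounded data structures:

def path_sets(tree, m):
    """tree is (label, children).  Returns (prefixes, suffixes, yields):
    the length-at-most-m prefixes, suffixes and complete strings of the
    proper analyses of the subtree, as tuples of labels."""
    label, children = tree
    if not children:
        u = {(label,)}
        return u, u, u
    kids = [path_sets(c, m) for c in children]
    pre, suf, yld = {(label,)}, {(label,)}, {(label,)}
    # complete short cuts: one cut per child, total length <= m
    combos = {()}
    for _, _, y in kids:
        combos = {c + yi for c in combos for yi in y if len(c + yi) <= m}
    yld |= combos
    # a prefix ends inside some child: full yields of the children to
    # its left, then a prefix of that child's cuts (suffixes mirror this)
    heads = {()}
    for p, _, y in kids:
        pre |= {h + pi for h in heads for pi in p if len(h + pi) <= m}
        heads = {h + yi for h in heads for yi in y if len(h + yi) <= m}
    tails = {()}
    for _, s, y in reversed(kids):
        suf |= {si + t for si in s for t in tails if len(si + t) <= m}
        tails = {yi + t for yi in y for t in tails if len(yi + t) <= m}
    return pre, suf, yld

Every stored string has length at most m over a finite alphabet, so the three sets range over a fixed finite universe; that finiteness is exactly what lets a finite-state device carry them from children to parents.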
A bottom-up FSTA can now be defined to check at each node that both the context-free part of a rule and its context conditions are satisfied.

This argument also extends easily to cover the dominance predicates of Joshi and Levy: transducers can be added to label each node with all its "top contexts" and all its "bottom contexts". The final FSTA must then check that the nodes satisfy whatever Boolean combination of dominance and proper analysis predicates is required by the node admissibility rules.

REFERENCES

[1] Aho A.V., Indexed grammars: an extension of the context-free grammars, JACM 15, 647-671, 1968.
[2] Aho A.V., Hopcroft J.E. and Ullman J.D., The Design and Analysis of Computer Algorithms, Addison-Wesley, Reading, Mass., 1974.
[3] Aho A.V. and Ullman J.D., The Theory of Parsing, Translation, and Compiling, Vol. I: Parsing, Prentice Hall, Englewood Cliffs, N.J., 1972.
[4] Baker B.S., Arbitrary grammars generating context-free languages, TR 11-72, Center for Research in Computing Technology, Harvard Univ., 1972.
[5] Berwick R.C., Computational complexity and lexical functional grammar, 19th ACL, 1981.
[6] Berwick R.C. and Weinberg A., Parsing efficiency, computational complexity, and the evaluation of grammatical theories, Ling. Inq. 13, 165-191, 1982.
[7] Borgida A.T., Formal Studies of Stratificational Grammars, Ph.D. Thesis, University of Toronto, 1977.
[8] Chomsky N., Rules and Representations, Columbia University Press, 1980.
[9] Cook S.A., Towards a complexity theory of synchronous parallel computation, L'Enseignement Mathematique 27, 99-124, 1981.
[10] Earley J., An efficient context-free parsing algorithm, Comm. of the ACM 13, 2, 94-102, 1970.
[11] Gazdar G., Phrase structure grammar, in Jacobson P. and Pullum G. (eds.), The Nature of Syntactic Representation, Reidel, 1982.
[12] Gazdar G. and Pullum G., Generalized Phrase Structure Grammar: A Theoretical Synopsis, Indiana Univ. Linguistics Club, 1982.
[13] Graham S.L., Harrison M.A. and Ruzzo W.L., An improved context-free recognizer, ACM Trans. on Prog. Lang. and Systems 2, 3, 415-462, 1980.
[14] Hopcroft J.E. and Ullman J.D., Introduction to Automata Theory, Languages and Computation, Addison-Wesley, 1979.
[15] Hays D.G., Automatic language data processing, in Computer Applications in the Behavioral Sciences, H. Borko (ed.), Prentice Hall, Englewood Cliffs, N.J., 1962.
[16] Hintikka J.K.K., Quantifiers in natural languages: some logical problems II, Ling. and Phil. 1, 153-172, 1977.
[17] Joshi A.K., How much context-sensitivity is required to provide reasonable structural descriptions: tree adjoining grammars, to appear in Dowty D., Karttunen L. and Zwicky A. (eds.), Natural Language Processing: Psycholinguistic, Computational and Theoretical Properties, Cambridge Univ. Press.
[18] Joshi A.K., Factoring recursion and dependencies: an aspect of Tree Adjoining Grammars and a comparison of some formal properties of TAGs, GPSGs, PLGs and LFGs, these Proceedings, 1983.
[19] Joshi A.K. and Levy L.S., Phrase structure trees bear more fruit than you would have thought, 18th ACL, 1980.
[20] Joshi A.K., Levy L.S. and Takahashi M., Tree adjunct grammars, J. of Comp. and Sys. Sc. 10, 1, 136-163, 1975.
[21] Joshi A.K. and Levy L.S., Constraints on structural descriptions: local transformations, SIAM J. on Computing 6, 1977.
[22] Joshi A.K., Levy L.S. and Yueh K., Local constraints on programming languages, Part I: syntax, Th. Comp. Sc. 12, 265-290, 1980.
[23] Joshi A.K. and Yokomori T., Some characterization theorems for tree adjunct languages and recognizable sets, forthcoming.
[24] Kaplan R. and Bresnan J., Lexical-Functional Grammar: a formal system for grammatical representation, in Bresnan J. (ed.), The Mental Representation of Grammatical Relations, MIT Press, 1982.
[25] Lamb S., Outline of Stratificational Grammar, Georgetown University Press, Washington, 1966.
[26] Langendoen D.T., On the inadequacy of Type-2 and Type-3 grammars for human languages, in P.J. Hopper (ed.), Studies in Historical Linguistics: Festschrift for Winfred P. Lehmann, John Benjamins, Amsterdam, 159-171, 1977.
[27] LaPointe S., Recursiveness and deletion, Ling. Anal. 3, 227-265, 1976.
[28] Marcus M.P., A Theory of Syntactic Recognition for Natural Language, MIT Press, 1980.
[29] Matthews R., Are the grammatical sentences of a language a recursive set?, Synthese 40, 209-224, 1979.
[30] Montague R., The proper treatment of quantification in ordinary English, in Hintikka J., Moravcsik J. and Suppes P. (eds.), Approaches to Natural Language, Reidel, Dordrecht, 1973.
[31] Peters P.S. and Ritchie R.W., A note on the universal base hypothesis, J. of Linguistics 5, 150-152, 1969.
[32] Peters P.S. and Ritchie R.W., On restricting the base component of transformational grammars, Inf. and Control 18, 483-501, 1971.
[33] Peters P.S. and Ritchie R.W., On the generative power of transformational grammars, Inf. Sc. 6, 49-83, 1973.
[34] Peters P.S. and Ritchie R.W., Context-sensitive immediate constituent analysis: context-free languages revisited, Math. Sys. Theory 6, 324-333, 1973.
[35] Peters P.S. and Ritchie R.W., Non-filtering and local-filtering grammars, in J.K.K. Hintikka, J.M.E. Moravcsik and P. Suppes (eds.), Approaches to Natural Language, Reidel, 180-194, 1973.
[36] Petrick S.R., A Recognition Procedure for Transformational Grammars, Ph.D. Thesis, MIT, 1965.
[37] Postal P.M., Limitations of phrase-structure grammars, in J.A. Fodor and J.J. Katz (eds.), The Structure of Language: Readings in the Philosophy of Language, Prentice Hall, Englewood Cliffs, 137-151, 1964.
[38] Pullum G.K. and Gazdar G., Natural languages and context-free languages, Ling. and Phil. 4, 471-504, 1982.
[39] Putnam H., Some issues in the theory of grammar, in Proc. of Symposia in Applied Mathematics, American Math. Soc., 1961.
[40] Rounds W.C., Mappings and grammars on trees, Math. Sys. Th. 4, 3, 257-287, 1970.
[41] Rounds W.C., Tree-oriented proofs of some theorems on context-free and indexed languages, 2nd ACM Symp. on Theory of Computing, 109-116, 1970.
[42] Rounds W.C., Complexity of recognition in intermediate-level languages, 14th Symp. on Switching and Automata Theory, 1973.
[43] Rounds W.C., A grammatical characterization of exponential-time languages, 16th Symp. on Foundations of Computer Science, 135-143, 1975.
[44] Slocum J., A practical comparison of parsing strategies, 19th ACL, 1981.
[45] Shieber S.M., Stucky S.U., Uszkoreit H. and Robinson J.J., Formal constraints on metarules, these Proceedings, 1983.
[46] Thatcher J.W., Characterizing derivation trees of context-free grammars through a generalization of finite automata theory, J. of Comp. and Sys. Sc. 1, 317-322, 1967.
[47] Thatcher J.W., Generalized sequential machine maps, J. of Comp. and Sys. Sc. 4, 339-367, 1970.
[48] Thatcher J.W., Tree automata: an informal survey, in A. Aho (ed.), Currents in the Theory of Computing, Prentice Hall, 148-172, 1973.
[49] Uszkoreit H. and Peters P.S., Essential variables in metarules, forthcoming.
[50] Valiant L., General context-free recognition in less than cubic time, J. of Comp. and Sys. Sc. 10, 308-315, 1975.
[51] Warren D.S., Syntax and Semantics of Parsing: An Application to Montague Grammar, Ph.D. Thesis, University of Michigan, 1979.
[52] Yokomori T. and Joshi A.K., Semi-linearity, Parikh-boundedness and tree adjunct languages, to appear in Inf. Proc. Letters, 1983.
[53] Younger D.H., Recognition and parsing of context-free languages in time n^3, Inf. and Control 10, 2, 189-208, 1967.
[54] Kasami T., An efficient recognition and syntax algorithm for context-free languages, Air Force Cambridge Research Laboratory report AFCRL-65-758, Bedford, MA, 1965.
[55] Ruzzo W.L., On uniform circuit complexity (extended abstract), Proc. of 20th Annual Symp. on Found. of Comp. Sc., 312-318, 1979.

105 | 1983 | 15 |
A Framework for Processing Partially Free Word Order*

Hans Uszkoreit
Artificial Intelligence Center
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025

Abstract

The partially free word order in German belongs to the class of phenomena in natural language that require a close interaction between syntax and pragmatics. Several competing principles, which are based on syntactic and on discourse information, determine the linear order of noun phrases. A solution to problems of this sort is a prerequisite for high-quality language generation. The linguistic framework of Generalized Phrase Structure Grammar offers tools for dealing with word order variation. Some slight modifications to the framework allow for an analysis of the German data that incorporates just the right degree of interaction between syntactic and pragmatic components and that can account for conflicting ordering statements.

1. Introduction

The relatively free order of major phrasal constituents in German belongs to the class of natural-language phenomena that require a closer interaction of syntax and pragmatics than is usually accounted for in formal linguistic frameworks. Computational linguists who pay attention to both syntax and pragmatics will find that analyses of such phenomena can provide valuable data for the design of systems that integrate these linguistic components. German represents a good test case because the role of pragmatics in governing word order is much greater than in English, while the role syntax plays is greater than in some of the so-called free-word-order languages like Warlpiri. The German data are well attested and thoroughly discussed in the descriptive literature. The fact that English and German are closely related makes it easier to assess these data and to draw parallels.

The simple analysis presented here for dealing with free word order in German syntax is based on the linguistic framework of Generalized Phrase Structure Grammar (GPSG), especially on its Immediate Dominance/Linear Precedence formalism (ID/LP), and complements an earlier treatment of German word order.¹ The framework is slightly modified to accommodate the relevant class of word order regularities. The syntactic framework presented in this paper is not bound to any particular theory of discourse processing; it enables syntax to interact with whatever formal model of pragmatics one might want to implement. A brief discussion of the framework's implications for computational implementation centers upon the problem of the status of metagrammatical devices.

2. The Problem

German word order is essentially fixed; however, there is some freedom in the ordering of major phrasal categories like NPs and adverbial phrases, for example in the linear order of subject (SUBJ), direct object (DOBJ), and indirect object (IOBJ) with respect to one another. All six permutations of these three constituents are possible for sentences like (1a). Two are given as (1b) and (1c).

(1a) Dann hatte der Doktor dem Mann die Pille gegeben.
     Then had the doctor the man the pill given
(1b) Dann hatte der Doktor die Pille dem Mann gegeben.
     Then had the doctor the pill the man given
(1c) Dann hatte die Pille der Doktor dem Mann gegeben.
     Then had the pill the doctor the man given

All permutations have the same truth-conditional meaning, which can be paraphrased in English as: Then the doctor gave the man the pill.
There are several basic principles that influence the ordering of the three major NPs:

• The unmarked order is SUBJ-IOBJ-DOBJ
• Comment (or focus) follows non-comments
• Personal pronouns precede other NPs
• Light constituents precede heavy constituents

*This research was supported by the National Science Foundation Grant IST-8103550. The views and conclusions expressed in this paper are those of the author and should not be interpreted as representative of the views of the National Science Foundation or the United States government. I have benefited from discussions with and comments from Barbara Grosz, Fernando Pereira, Jane Robinson, and Stuart Shieber.

¹The best overview of the current GPSG framework can be found in Gazdar and Pullum (1982). For a description of the ID/LP format refer to Gazdar and Pullum (1981) and Klein (1983); for the ID/LP treatment of German, to Uszkoreit (1982a, 1982b) and Nerbonne (1982).

The order in (1a) is based on the unmarked order, (1b) would be appropriate in a discourse situation that makes the man the focus of the sentence, and (1c) is an acceptable sentence if both doctor and man are focussed upon. I use focus here in the sense of comment, the part of the sentence that contains new important information. (1c) could be uttered as an answer to someone who inquires about both the giver and recipient of the pill (for example, with the question: Who gave whom the pill?).

The most complete description of the ordering principles, especially of the conflict between the unmarked order and the topic-comment relation, can be found in Lenerz (1977).

3. Implications for Processing Models

Syntactic as well as pragmatic information is needed to determine the right word order; the unmarked-order principle is obviously a syntactic statement, whereas the topic-comment order principle requires access to discourse information.² Sometimes different ordering principles make contradictory predictions. Example (1b) violates the unmarked-order principle; (1a) is acceptable even if dem Mann (the man) is the focus of the sentence.³

The interaction of ordering variability and pragmatics can be found in many languages and not only in so-called free-word-order languages. Consider the following two English sentences:

(2a) I will talk to him after lunch about the offer.
(2b) I will talk to him about the offer after lunch.

Most semantic frameworks would assign the same truth-conditional meaning to (2a) and (2b), but there are discourse situations in which one is more appropriate than the other. (2a) can answer a question about the topic of a planned afternoon meeting, but is much less likely to occur after an order to mention the offer as soon as possible.⁴

Formal linguistic theories have traditionally assumed the existence of rather independent components for syntax, semantics, and pragmatics.⁵ Linguistics not only could afford this idealization but has probably temporarily benefited from it. However, if the idealization is carried over to the computational implementation of a framework, it can have adverse effects on the efficiency of the resulting system.

²The heaviness principle requires access to phonological information in addition, but a discussion of this dependence is beyond the scope of this paper.

³Sentences that differ only in their discourse role assignments, e.g., do not focus on the same constituent(s), usually exhibit different sentential stress patterns.
⁴The claim is not that these sentences are not interchangeable in the mentioned discourse situations under any circumstances. In English, marked intonation can usually overwrite default discourse role assignments associated with the order of the constituents.

⁵Several more recent theories can account for the interaction among some of the components. Montague Grammar (Montague, 1974) and its successors (incl. GPSG) link semantic and syntactic rules. Work on presuppositions (Karttunen and Peters, 1979), discourse representations (Kamp, 1980) and Situation Semantics (Barwise and Perry, 1981) narrows the gap between semantics and pragmatics.

If we assume that a language generation system should be able to generate all grammatical word orders, and if we further assume that every generated order should be appropriate to the given discourse situation, then a truly nonintegrated system, i.e., a system whose semantic, syntactic, and pragmatic components apply in sequence, has to be inefficient. The syntax will first generate all possibilities, after which the pragmatic component will have to select the appropriate variant. To do so, this component will also need access to syntactic information.

In an integrated model, much unnecessary work can be saved if the syntax refrains from using rules that introduce pragmatically inappropriate orders. A truly integrated model can discard improper parses very early during parsing, thereby considerably reducing the amount of syntactic processing.

The question of integrating grammatical components is a linguistic problem. Any reasonable solution for an integration of syntax and pragmatics has to depend on linguistic findings about the interaction of syntactic and pragmatic phenomena. An integrated implementation of any theory that does not account for this interaction will either augment the theory or neglect the linguistic facts. By supporting integrated implementations, the framework and analysis to be proposed below fulfill an important condition for efficient treatment of partially free word order.

4. The Framework and Syntactic Analysis

4.1 The Framework of GPSG in ID/LP Format

The theory of GPSG is based on the assumption that natural languages can be generated by context-free phrase structure (CF-PS) grammars. As we know, such a grammar is bound to exhibit a high degree of redundancy and, consequently, is not the right formalism for encoding many of the linguistic generalizations a framework for natural language is expected to express. However, the presumption is that it is possible to give a condensed inductive definition of the CF-PS grammar, which contains various components for encoding the linguistic regularities and which can be interpreted as a metagrammar, i.e., a grammar for generating the actual CF-PS grammar.

A GPSG can be defined as a two-level grammar containing a metagrammar and an object grammar. The object grammar combines (CF-PS) syntax and model-theoretic semantics. Its rules are ordered triples (n, r, t) where n is an integer (the rule number), r is a CF-PS rule, and t is the translation of the rule, its denotation represented in some version of intensional logic. The translation t is actually an operation that maps the translations of the children nodes into the translation of the parent. The nonterminals of r are complex symbols, subsets of a finite set of syntactic features or, as in the latest version of the theory (Gazdar and Pullum, 1982), feature trees of finite size. The rules of the object grammar are interpreted as tree-admissibility conditions.

The metagrammar consists of four different kinds of rules that are used by three major components to generate the object grammar in a stepwise fashion.
The rules o/' the obJect grammar are interpreted as tree-admissability conditions. The metagrammar consists of four different kinds of rules that are used by three major components to generate the object 107 grammar in a stepwise fashion. Figure {3) illustrates the basic structure of a GPSG metagrammar. (3) {Basic Rules ~N~ IDR doubles)j/ Application~ [ Metarule (IDR doubles) Rule Extension I i IDR triples) I binearization .' l ~{bjeet-G rammar~'X~ F-PS Rules),~/ Metaxules ) ~Rule Ext. Princpls). LP rules ) First. there is a set of banjo rules. Basic rules are immediate domi.a.ce rule (IDR) double~, ordered pairs < n,i >, where n is the rule number and i is an [DR. 1DRs closely resemble CF-PS rules, but, whereas the CF- PS rule "1 -- 6t 6..... 6. contains information about both immediate dominance and linear precedence in the subtree to be accepted, the corresponding IDR "~ -- 6t, /f~. ..... /f. encodes only information about immediate dominance. The order of the right-hand-side symbols, which are separated in IDRs by commas, has no significance. Metarule Application, maps [DR doubles to other IDR doubles. For this purpose, metaxules, which are the second kind of rules are applied to basic rules and then to the output of metarule applications to generate more IDR doubles. Metarules are relations between sets of IDRs and are written as A = B, where A and B are rule templates. The metarute can be read as: If there is an IDR double of kind A, then there is also an IDR double of kind /3. In each case the rule number is copied from A to /3. s .Several metarules can apply in the derivation of a single II)R double; however, the principle of Finite Closure, defined by Thompson (1982}, allows every metarule to apply only once in the derivational history of each IDR double. The invocation of this principle avoids the derivation of infinite rule sets, in- 6Rule number might he a misleading term for n because this copying :~.ssigns the s~me integer to the whole class of rules that were derived from the ~ame basic rules. This rule number propagation is a prerequisite for the <iPSG accouht of subcategori2ation. eluding those that generate non-CF, non-CS, and noarecursive languagesJ 7 Another component maps IDR doubles to IDR triples, which are ordered triples (n,i,t) of a rule number., an IDR i, and a translation t. The symbols of the resulting IDRs axe fully instantiated feature sets (or structures} and therefore identical to object grammar symbols. Thus, this component adds semantic translations and instantiates syntactic features. The mapping is controlled by a third set of rule czten6io, principles including feature co-occurrence restrictions, feature def. ult principles, and an algorithm that assigns the right kind of translation to each rule on the basis of its syntactic information. The last component of the metagrammar maps the IDR triples to the rules of the object grammar. For each IDR triple all the object grammar triples are generated whose CF-PS rules conform with the linear precedence(LP) rules, the fourth rule set of the metagrammar. LP rules are members of the LP relation, a partial ordering on V'r I.I VN. An LP rule (a,$} is usually written as a < ~/and simply states that a precedes/9 whenever both a and d occur in the right-hand-side of the same CF-PS rule. It is the separation of linear precedence from immediate dominance statements in the metagrammar that is referred to .as ID/LP format. And it is precisely this aspect of the for- malism that. 
makes the theory attractive for application to lan- guages with a high degree of word-urder freedom. The analysis presented in the next section demonstrates the functioning of the formalism and some of its virtues. 4.2 The Analysis of German Word Order Uszkoreit (1982a) proposes a GPSG analysis of German word order that accounts for the fixed-order phenomena, includ- ing the notoriously difqcult problem of the position of finite and nonfinite verbs. Within the scope of this paper it is impossible to repeat, the whole set of suggested rules. A tiny fragment should sumce to demonstrate the basic ideas as well as the need for modifications of the framework. Rule (41 is the basic VP ID rule that combines ditransitive verbs like forms of gebe. (give) with its two objects: (4} (,5, VP -- .NP, NP, V) [+DATI[+ACC] Th,~ rule .~tates that a VP can expand as a dative NP (IOBJ}, an attn.-alive NP (DOBJ), and a verb. Verbs that can occur in dilrnnsitive VPs, like geben (give). are marked in the lexicon with the rule number 5. Nothing has been said about the linear order of these constituents. The following metarule supplies a "flat" sentence rule for each main verb VP rule [+NOM 1 stands for the nominative case, which marks the subject. 7F, r ~ d*scu.-sion see Peters and Uszkoreit (1982} and Shieber et M. (1983}. I08 (5) VP ~ X, V ~ S -.* NP, X, V [-AUX] [+NOM] It generates the rule under (6) from (4): (6) (5, S ---, NP, NP, NP, V) [+ NOMI[+DAT][+ACC] Example (7) gives a German constituent that will be admitted by a PS rule derived from ID rule (6): (7} der Doktor dem Mann die Pille gegeben the doctor the man the pill given I shall not list the rules here that combine the auxiliary halle and the temporal adverb dann with (7) to arrive at sentence (la), since these rules play no role in the ordering of the three noun phrases. What is of interest here is the mapping from ID rule (5) to t.he appropriate set of PS rules. Which LP rules are needed to allow for all and only the acceptable linearizations? The position of the verb is a relatively easy matter: if it is the finite matrix verb it precedes the noun phrases; in all other cases, it follows everything else. We have a feature MC for matrix clause as well as a feature co-occurrence restriction to ensure that +MC will always imply +FIN (finite). Two LP rules are needed for the main verb: (Sa) +MC < NP (8b) NP <-MC The regularities that govern the order of the noun phrases can also be encoded in LP rules, as in (ga)-!ge): (Oa) +NOMINATIVE < +DATIVE (9b) +NOMINATIVE < +ACCUSATIVE (9c) +DATIVE < +ACCUSATIVE (9d) -FOCUS < +FOCUS (9e) +PRONOUN < -PRONOUN (Kart.tunen and Peters, 1979) 8 or a function from discourse situa- tions to the appropriate truth-conditional meaning in the spirit of Barwise and Perry (1981). The analysis here is not concerned with choosing a formalism for an extended semantic component, but rather with demonstrating where the syntax has to provide for those elements of discourse information that influence the syntactic structure directly. Note, that the new LP rules do not resolve the problem of ordering-principle conflicts, for the violation of one LP rule is enough to rule out an ordering. On the other hand, the absence of these LP rules would incorrectly predict that all permutations are acceptable. The next section introduces a redefinition of LP rules that provides a remedy for this deficiency. 
4.3 The Modified Framework Before introducing a new definition of LP rules, let me suggest, anot.her modification that will simplify things somewhat. The I,P rules considered so far are not really LP rules in the sense in which they were defined by their originators. After all. LP rules are defined as members of a partial ordering on "v~,¢ U VT'. Our rules are schemata for LP rules at best, abbreviating the huge set of UP rules that are instantiations of these schemata. This definition is an unfortunate one in several respects. It not. only creates an unnecessarily large set of rules IVN con- tains thousands of fully instantiated complex symbols) but also suppresses some of the important generalizations about the lan- guage. Clearly, one could extract the relevant generalizations even from a fully expanded LP relation, e.g., realize that there is no LP rule whose first element has -MC and its second element NP. However, it should not be necessary to extract generaliza- tions from the grammar; the grammar should express these generalizat.ions directly. Another disadvantage follows from the choice of a procedure for arriving at the fully expanded LP rela- Lion. Should all extensions that are compatible instantiations of (Sa), (Sb). and (9a)-(9e} be LP rules: If so. then (10) is an instantiat.ion of (8a): (I0) +MC' NP +DEF < +FIN ,.\ feature FOCUS has been added that designates a focused con- sf it,eat. Despite its name FOCUS is a syntactic'fcature, justified by syntactic Pacts, such as its influence on word order. This syntactic feature needs t,o be linked with the appropriate dis- course information. The place to do this is in the rule exteu- sioq component, where features are instantiated and semantic translations added to ID rules. It is assumed that in so doing the translation part of rules will have to be extended anyway so as to incorporate non-truth-conditional aspects of the meaning. For example, the full translation could be an ordered pair of truth-conditional and non-truth-conditional content, extending Karttunen and Peters's treatment of conventional implicature Yet nothing can be a matrix verb and definite simultaneously, and NPs cannot be finite. (101 is a vacuous rule. Whether il is a LP rule at all will depend on the way the nonterminal vocabulary of the object grammar is defined. If it only includes the nonterminals that actually occur in rules then (10) is not as LP rule. [n this case we would need a component of the metagrammar, the feature instantiation principles, to determine 8T,~ be more precise. Karttunen and Peters actuaJly make their transla- ti,,ns ordered triples of truth-conditiona.l content, impllcatures, and an in- hcrhance expression that plays a role in h~.ndling the projection problem for presuppositions. 109 another compouent of the metagrammar, the LP component. 9 LP will be redefined as a partial order on 2 p, where F is the set of syntactic features I0 The second and more important change can best be described by viewing the LP component as a function from a pair of symbols (which can be characterized as feature sets) to truth values, telling us for every pair of symbols whether the first can precede the second in a linearized ru!e. Given the LP relation {(al,~/t),(a~,B~.) ..... (a~,~)} and a pair of complex symbols (3',6), the function can be expressed as in (11). (11} cl A c,~ A ... A c,~ where c~ ---- ~(~; _C 6 A #; C: 3') for 1 < i < n ~,Ve call the conjunct clauses LP conditions; the whole con- junction is a complex LP condition. 
The complex LP condi- tion allows "T to precede /~ on the right-hand side of a CF- PS rule if every LP condition is true. An LP condition ct derived from the LP rule (a~,//i) is true if it is not the case that 3 has the features ;/~ and 6 has the features a¢. Thus the LP rule NP < VP stanch for the following member of the LP relation {{+N,-V, +2B~R}, l-N, +V, +2BAR}). The LP condition following from this rule prevents a su- perset of {-N, +V, +2BAR} from preceding a superset of l-N, +V, +2BAR}, i.e., a VP from preceding an NP. But notice that there is nothing to prevent us from writing a fictitious LP rule such as (12} +PRONOUN < -ACCUSATIVE German has verbs like Ichrcn that take two accusative noun phr~.ses as complements. If {12) were an LP rule then the result- ing LP condition defined as in ( l 1 ) would rule out any occurrence of two prouominalized sister NPs because either order would be rejected.l 1 It. is an empirical question if one might ever find it useful to write LP rules as in (12}, i.e., rules a < ~/, where a U 3 could be a ~ubset of a complex symbol. Let me introduce a minor redefinition of the interpretation of LP, which will take care of cases such as (12) and at the same prepare the way for a more substantial modification of LP rules. LP shall again be interpreted as a function from pairs of feature sets (associated with complex symbols} to truth values. Given the LP relation {(a1,,'Jl),(oo..;]'.,} ..... (a.,~q~) and a pair of complex symbols 0The widety uscd notation for nomnstantiated LP rules and the feature in- stantiati,,n principles could be regarded an meta, met.Lgrammatical devices that inductively define a part of"the metagrammar. 10Remember that, in an .~-synta.x. syntactic categories abbreviate feature sets NP ~ {+N, -V, +2BAR}. The definition can emily be extended to work on feature trees instead of feature sets. 1 lln principle, there is nothing in the original ID/LP definition either that would prevent the grammar writer from abbreviating a set of LP rules by (121. It is not quite clear, however, which set of LP rules is abbreviated by (r"). (3',/~), the function can be expressed as in (13). (13) ct A c2, A ... A cn where ~, - (a~c6 A B~C3,)-(o~C3, A B, C6) for l < i < n That means 3' can precede 6 if all LP conditions are true. For instance, the LP condition of LP rule (12) will yield false only if "t is +ACCUSATIVE and # is +PRONOUN, and either 3, is -PRONOUN or 6 is -ACCUSATIVE (or both). - Now let. us assume that, in addition to the kind of simple LP rules just introduced, we can also have complex LP rules con- sisting of several simple LP rules and notated in curled brackets a.s in (14}: {14) '+NOMINATIVE < +DATIVE ] +NOMINATIVE < +ACCUSATIVE| +DATIVE < +ACCUSATIVE~ -FOCUS < +FOCUS | +PRONOUN < -PRONOUN / The LP condition associated with such a complex LP rule shall be the disjunction of the LP conditions assigned to its members. LP rules can be generally defined as sets of ordered pairs of feature sets {(at,Bt),(a~,~) ..... (am,~/m)}, which are either notated with curled brackets as in (10), or, in the case of singletons, as LP rules of the familiar kind. A complex LP rule {{at, dl), (no_, ,%) ..... {am, B,n)} is interpreted as a LP condition of the following form {(o 1 C 6 A~t C -~)V(a~ C 6 At/= C_ -,)v . vt~.,C6A~,,C_~))--((a, C_3,A3, c_ ~}v(a.. c_ "l A ,'t= C 6)V ... V(am C 3, A dm ~ 6)}. Any of the atomic LP rules within the complex LP rule can be violated as long as the violations are sanctioned by at least one of the atomic LP rules. 
Notice that with respect to this definition, "regular" LP rules, i.e., singletons, can be regarded as a special case of complex LP rules. I want to suggest that the LP rules in (8a), (8b), and (14) are a subset of the LP rules of German. This analysis makes a number of empirical predictions. For example, it predicts that (15) and (16) are grammatical, but not (17).

(15) Dann hatte der Doktor dem Mann die Pille gegeben.
     [-FOCUS, +NOM] [+FOCUS, +DAT] [-FOCUS, +ACC]
     Then had the doctor the man the pill given

(16) Dann hatte der Doktor die Pille dem Mann gegeben.
     [-FOCUS, +NOM] [+FOCUS, +ACC] [+FOCUS, +DAT]
     Then had the doctor the pill the man given

(17) ??Dann hatte der Doktor die Pille dem Mann gegeben.
     [-FOCUS, +NOM] [+FOCUS, +ACC] [-FOCUS, +DAT]
     Then had the doctor the pill the man given
A system that is supposed to produce high- quality output might contain a stylistic selection mechanism that avoids repe, hions or choose~ among variants according to the tyt:e of text or dialogue. 6. Conclusion The proposed analysis of partially free word order in German makes the accurate predictions about the gram- musicality of ordering variants, including their appropriate- ness with respect to a given diseo~se. The 1D/LP format, which has the mechanisms to handle free word order, has been extended to account for the interaction of syntax and prag- mat.its, as well as for the mutually competing ordering principles. The modifications are compatible with efficient implementation models. The redefined LP component can be used for the im- plementation of stylistic choice. References Barwise, J. and J. Perry (1981) "Situations and Attitudes', .Iouwna/ of Philosophy, lgHl, 668-891. Berwick, R. C., and A. S. Weinberg "Parsing Efficiency, Computational Complexity, and the Evaluation of Grammatical Theories," Linguistic Inquiry, 13, 165-191. Earley, d. (1970} "An Efficient Context-Free Parsing Algorithm," Communleatlona of the ACM, 13, (1970), 94-102. Gawron, M. J.. et al. (1982) "The GPSG Linguistics System," Proccedlnla of the 20th Annual Meeting of the Association for Computational Lingu~ties, University of Toronto. Toronto, June 1982, 74-81. Gazdar, G. and G. Pullum (1981) "Subcategorization, Constituent Order and the Notion 'IIead'," in M. Moortgat, H.v.d. Huist anti T. Hoekstra. eds., The Scope of Lexleal Rules, 107- 123, Foris. Dordreeht, Holland, 1981. Gaz,lar, G. and G. Pullum (1982) "Generalized Phrase Structure Grammar: A Theoretical Synopsis," Indiana University Linguistics Club, Bloomington, Indiana. Gazdar. G.. G. Pullum. and I. Sag {1981) "Auxiliaries and related phenomena in a restrictive theory of grammar," Lnngttatge 58. 591-638. Kamp, H. (1980) "A theory of truth and semantic representation" ms. l~:arttunen. L. and S. Peters (1979) "Conventional implicature," in C. I',:. Oh and D. Dinneen (eds.), Syntaut tad Semantics, Vol. 11: Presupposition, Academic Press, New York, 1-66. t<lein. E. (1983) "A .~h,ltisct Analysis of Immediate Domination Rules" ms. Lenerz. J. (1077) Zuw Abfolge nomlnnlet Satsglledcr Im Deutschen, TBL Verlag Gunter Narr. Tuebingen, 1977. ~lontag,,e. R. (1974} 1Corms1 Philosophy, edited and with an intro- duction I,y R. Thomason, Yale University Press, New Haven. Nerhonne. J. 11082} " 'Phantoms' in German fronting: Poltergeist constituents' ." paper presented at the 108'2, Annual meeting of the Linguistic Society of America, San Diego, California. December 1982. Peters. S. and II. L'szkoreit, "Essential Variables in Metarules," paper presented at the 1982 Annual Meeting of the Linguistic Society of America , San Diego, California, December 1982. Pul/um. G. (1982) "Free Word Order and Phrase Structure, Rules," J. Pustejovsky and P. Sells. (eds.), Peoceedlntm of the Twelfth Annual Meeting of the North Eauttern Lhagulstle Society, Graduate Linguistics Student Association, University of Massachusetts, Amherst, Massachusetts 1982. Shicber. S. (forthcoming) "Direct Parsing of ID/LP Grammars." Uszkoreit. H. (1982ai "German Word Order in GPSG," in D. Flickinger, NI. Macken, and N. Wiegand (eds.L Proccedint:, 111 of the Flt'at West Co~t CouFerenee on Fos.ma,/ Llnttuhttie,, Stanford University, Stanford, California (1982). Uszkoreit, H. 
(1982b) "Topicalization in Standard German," paper presented at the 1982 Annual meeting of the Linguistic Society of America, San Diego, December 1982. 112 | 1983 | 16 |
Sentence Disambiguation by a Shift-Reduce Parsing Technique* Stuart M. Shieber Abstract Artificial Intelligence Center SRI International 333 Ravenswood Avenue Menlo Park, CA 94025 Native speakers of English show definite and consistent preferences for certain readings of syntactically ambiguous sen- tences. A user of a natural-language-processing system would naturally expect it to reflect the same preferences. Thus, such systems must model in some way the linguistic performance as well as the linguistic competence of the native speaker. We have developed a parsing algorithm--a variant of the LALR(I} shift.-reduce algorithm--that models the preference behavior of native speakers for a range of syntactic preference phenomena reported in the psycholinguistic literature, including the recent data on lexical preferences. The algorithm yields the preferred parse deterministically, without building multiple parse trees and choosing among them. As a side effect, it displays ap- propriate behavior in processing the much discussed garden-path sentences. The parsing algorithm has been implemented and has confirmed the feasibility of our approach to the modeling of these phenomena. 1. Introduction For natural language processing systems to be useful, they must assign the same interpretation to a given sentence that a native speaker would, since that is precisely the behavior users will expect.. Consider, for example, the case of ambiguous sen- tences. Native speakers of English show definite and consistent preferences for certain readings of syntactically ambiguous sen- tences [Kimball, 1973, Frazier and Fodor, 1978, Ford et aL, 1982]. A user of a natural-language-processing system would naturally expect, it to reflect the same preferences. Thus, such systems must model in some way the lineuistie performance as well as the linguistic competence of the native speaker. This idea is certainly not new in the artificial-intelligence literature. The pioneering work of Marcus [Marcus, 1980] is per- haps the best. known example of linguistic-performance modeling in AI. Starting from the hypothesis that ~deterministic" parsing of English is possible, he demonstrated that certain performance "This research was supported by the Defense Advanced Research Proiects Agency under Contract NOOO39-80-C-0575 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted a.s representative of the oh~cial policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States government. constraints, e.g., the difl]culty of parsing garden-path sentences, could be modeled. His claim about deterministic parsing was quite strong. Not only was the behavior of the parser required to be deterministic, but, as Marcus claimed, The interpreter cannot use some general rule to take a nondeterministic grammar specification and im- pose arbitrary constraints to convert it to a deter- ministic specification {unless, of course, there is a general rule which will always lead to the correct decision in such a case). [Marcus, 1980, p.14] We have developed and implemented a parsing system that. given a nondeterministic grammar, forces disambiguation in just the manner Marcus rejected (i.e. t .hrough general rules}; it thereby exhibits the same preference behavior that psycbolin- guists have attributed to native speakers of English for a cer- tain range of ambiguities. 
These include structural ambiguities [Frazier and Fodor, 1978, Frazier and Fodor, 1980, Wanner, 1980l and lexical preferences [Ford et aL, 1982l, as well as the garden- path sentences as a side effect. The parsing system is based on the shih.-reduee scheduling technique of Pereira [forthcoming]. Our parsing algorithm is a slight variant of LALR{ 1) pars- ing, and, as such, exhibits the three conditions postulated by Marcus for a deterministic mechanism: it is data-driven, reflects expectations, and has look-ahead. Like Marcus's parser, our parsing system is deterministic. Unlike Marcus's parser, the grammars used by our parser can be ambiguous. 2. The Phenomena to be Modeled The parsing system was designed to manifest preferences among ,~tructurally distinct parses of ambiguous sentences. It, does this by building just one parse tree--rather than build- ing multiple parse trees and choosing among them. Like the Marcus parsing system, ours does not do disambiguation requir- ing "extensive semantic processing," hut, in contrast to Marcus, it does handle such phenomena as PP-attachment insofar as there exist a priori preferences for one attachment over another. By a priori we mean preferences that are exhibited in contexts where pragmatic or plausibility considerations do not tend to favor one reading over the other. Rather than make such value judgments ourselves, we defer to the psycholinguistic literature {specifically [Frazier and Fodor, 1978], [Frazier and Fodor, 1980] and [Ford et al., 1982]) for our examples. 113 The parsing system models the following phenomena: Right Association Native speakers of English tend to prefer readings in which constituents are "attached low." For instance, in the sen- tence Joe bought the book that I hod been trving to obtain for ~usan. the preferred reaL~lng is one in w~lch the prepositional phrase "for Susan ~ is associated with %o obtain ~ rather than %ought. ~ Minlmal Attachment On the other hand, higher attachment in preferred in eer- rain cases such as Joe bought the book [or Suean. in which "for Susan* modifies %he book" rather than "bought." Frazier and Fodor [1978] note that these are canes in which the higher attachment includes fewer nodes in the parse tree. Ore" analysis is somewhat different. Lexical Preference Ford et al. [10821 present evidence that attachment preferences depend on lexical choice. Thus, the preferred reading for The woman wanted the dresm on that rock. has low attachment of the PP, whereas The tnoman positioned the dreu on that rack. has high attachment. Garden-Path Sentences Grammatical sentences such as The horse raced pamt the barn fell. seem actually to receive no parse by the native speaker until some sort of "conscioun parsing" is done. Following Marcus [Marcus, 1980], we take this to be a hard failure of the human sentence-processing mechanism. It will be seen that all these phenomena axe handled in oux parser by the same general rules. The simple context-free gram- mar used t (see Appendix I) allows both parses of the ambiguous sentences as well as one for the garden-path sentences. The par- ser disambiguates the grammar and yields only the preferred structure. The actual output of the parsing system can be found in Appendix II. 3. The Parsing System The parsing system we use is a shift-reduce purser. Shift- reduce parsers [Aho and Johnson, 19741 axe a very general class of bottom-up parsers characterized by the following architecture. 
They incorporate a stock for holding constituents built up during IWe make no claims a4 to the accuracy of the sample grammar. It is obviously a gross simplific~t.ion of English syntax. Ins role is merely to show that the parsing system is sble to dis,~mbiguate the sentences under consideration correctly. the parse and a shift-reduce table for guiding the parse, At each step in the parse, the table is used for deciding between two basic types of operations: the shift operation, which adds the next word in the sentence (with its pretcrminal category) to the top of the stack, and the reduce operation, which removes several elements from the top of the stack and replaces them with a new element--for instance, removing an NP and a VP from the top of the stack and replacing them with an S. The state of the parser is also updated in accordance with the shift-reduce table at each stage. The combination of the stack, input, and state of the parser will be called a configuration and will be notated as, for example, 1 NPv IIMar, 110 1 where the stack contains the nonterminals NP and V, the input contains the lexical item Mary and the parser is in state 10. By way of example, we demonstrate the operation of the parser (using the grammar of Appendix I) on the oft-cited sen- tence "John loves Mary. ~ Initially the stack is empty and no input has been consumed. The parser begins in state 0. II ahn 10.. Mar, i0 i As elements are shifted to the stack, they axe replaced by their preterminal category." T.he shiR-reduce table for the grammar of Appendix I states that in state 0, with a proper noun as the next word in the input, the appropriate action is a shift. The new configuration, therefore, is i PNOUN lo~e8 Mar~l i 4 ! The next operation specified is a reduction of the proper noun to a noun phrase yielding , NP iI loves Mary [2 i The verb and second proper noun axe now shifted, in accordance with the shift-reduce table, exhausting the input, and the proper noun is then reduced to an NP. NP v !l Ma,, !1o v P. ouN il !, NP V NP i] :14 Finally, the verb and noun phrase on the top of the stack are reduced to a VP i NP VP !I ! l II ~6 I which is in turn reduced, together with the subject NP, to an S. i sJl ,'I ) This final configuration is an accepting configuration, since all 2But see Section 3.'2. for an exception. 114 the input has been consumed and an S derived. Thus the sen- tence is grammatical ia the grammar of Appendix I, as expected. 3.1 Differences from the Standard LR Techniques The shift-reduce table mentioned above is generated automatically from a context-free grammar by the standard al- gorithm [Aho and Johnson, 1974]. The parsing alogrithm differs, however, from the standard LALR(1) parsing algorithm in two ways. First, instead of assigning preterminal symbols to words as they are shifted, the algorithm allows the assignment to be delayed if the word is ambiguous among preterminals. When the word is used in a reduction, the appropriate preterminal is assigned. Second, and most importantly, since true LR parsers exist only for unambiguous grammars, the normal algorithm for deriv- ing LALR(1) shift-reduce tables yields a table that may specify conflicting actions under certain configurations. It is through the choice made from the options in a conflict that the preference behavior we desire is engendered. 
3.2 Preterminal Delaying One key advantage of shift-reduce parsing that is critical in our system is the fact that decisions about the structure to be assigned to a phrase are postponed as long as possible. In keeping with this general principle, we extend the algorithm to allow the ~ssignment of a preterminal category to a lexical item to be deferred until a decision is forced upon it, so to speak, by aa encompassing reduction. For instance, we would not want to decide on the preterminal category of the word "that," which can serve as either a determiner (DET) or complementizer (THAT), until some further information is available. Consider the sentences That problem i* important. That problema are difficult to naive ia important. Instead of a.~signiag a preterminal to ~that," we leave open the possibility of assigning either DET or THAT until the first reduc- tion that involves the word. In the first case, this reduction will be by the rule NP ~DET NOM, thus forcing, once and for all, the assignment of DET as preterminal. In the second ease, the DET NOM analysis is disallowed oa the basis of number agreement, so that the first applicable reduction is the COMPS reduction to S, forcing the assignment of THAT as preterminal. Of course, the question arises as to what state the par- ser goes into after shitting the lexical item ~that." The answer is quite straightforward, though its interpretation t,i~ d t,,a the determinism hypothesis is subtle. The simple answer is that the parser enters into a state corresponding to the union of the states entered upon shifting a DET and upon shifting a THAT respectively, in much the same way as the deterministic simula- tion of a nondeterministic finite automaton enters a ~uniou" state when faced with a nondeterministic choice. Are we then merely simulating a aoadeterministic machine here. ~ The anss~er is equivocal. Although the implementation acts as a simulator for a nondeterministic machine, the nondeterminism is a priori bounded, given a particular grammar and lexicon. 3 Thus. the nondeterminism could be traded in for a larger, albeit still finite, set of states, unlike the nondeterminism found in other pars- ing algorithms. Another way of looking at the situation is to note that there is no observable property of the algorithm that would distinguish the operation of the parser from a determinis- tic one. In some sense, there is no interesting difference between the limited nondeterminism of this parser, and Marcus's notion of strict determinism. In fact, the implementation of Marcus's parser also embodies a bounded nondeterminism in much the same way this parser does. The differentiating property between this parser and that of Marcus is a slightly different one, namely, the property of qaaM-real-time operation. 4 By quasi-real-time operation, Marcus means that there exists a maximum interval of parser operation for which no output can be generated. If the parser operates for longer than this, it must generate some output. For instance, the parser might be guaranteed to produce output (i.e., struc- ture) at least every three words. However, because preterminal assignment can be delayed indefinitely in pathological grammars, there may exist sentences in such grammars for which arbitrary numbers of words need to be read before output can be produced. It is not clear whether this is a real disadvantage or not, and, if so, whether there are simple adjustments to the algorithm that would result in quasi-real-time behavior. 
In fact, it is a property of bottom-up parsing in general that quasi-real-time behavior is not guaranteed. Our parser has a less restrictive but similar property, fairneaH, that is, our parser generates output linear in the input, though there is no constant over which out- put is guaranteed. For a fuller discussion of these properties, see Pereira and Shieber [forthcoming]. To summarize, preterminal delaying, as an intrinsic part of the algorithm, does not actually change the basic properties of the algorithm in any observable way. Note, however, that preterminal assignments, like reductions, are irrevocable once they are made {as a byproduct of the determinism of the algo- rithm}. Such decisions can therefore lead to garden paths, as they do for the sentences presented in Section 3.6. We now discuss the central feature of the algorithm. namely, the resolution of shift-reduce conflicts. 3.3 The Disambiguation Rules Conflicts arise in two ways: aM/t-reduce conflicts, in which the parser has the option of either shifting a word onto the stack or reducing a set of elements on the stack to a new element; reduce-reduce conflicts, in which reductions by several grammar 3The boundedness comes about because only a finite amount or informa- tie, n is kept per state (an integer) and the nondeterrninlsm stops at the prcterminat level, so that, the splitting of states does not. propogate, 41 am indebted to Mitch Marcus for this .bservation and the previous comparison with his parser. i15 rules are possible. The parser uses two rules to resolve these conflicts: 5 (I) Resolve shift-reduce conflicts by shifting. (2) Resolve reduce-reduce conflicts by performing the longer reduction. These two rules suffice to engender the appropriate be- havior in the parser for cases of right association and minimal attachment. Though we demonstrate our system primarily with PP-attachment examples, we claim that the rules are generally valid for the phenomena being modeled [Pereira and Shieber, forthcoming]. 3.4 Some Examples Some examples demonstrate these principles. Consider the sentence Joe took the book that I bought for Sum,re. After a certain amount of parsing has beta completed deter- ministically, the parser will be in the following coniigttration: I NP v that V Ill°r S,... I with a shift-reduce confict, since the V can be reduced to a VP/NP ° or the P can be shifted. The principle* presented would solve the conflict in favor of the shift, thereby leading to the following derivation: NP V NP that NP V P l] Su,an 112 ) "NPV NP that NPVP NP II 119 I NP v NP that NP V PP !l 124 I NPVNPthatNPVP/NP II i 22 I NP V NP that S/NP .1O I NP v NP II I 7 I ,,2 Iq'P V NP, 11. }14 I ., NP VP t1 I 8 I .... sll I' I which yields the structure: [sdoe{vptook{Nl,{xethe book][gthat I bought for Susanl]]] The sentence 5The original notion of using a shift-reduce parser and general scheduling principles to handle right association and minlmal attachment, together with the following two rules, are due to Fernando Pereira [Pereira, 1982[. The formalization of preterminal delaying and the extensions to the Ionic tl- preference cases and garden-path behavior are due to the author. 8The "slash-category" analysis of long-distance dependencies used here is loosely based on the work of Gaadar [lggl]. The Appendix 1 grammar does not incorporate the full range of slashed rules, however, but merely a representative selection for illustrative purposes. Joe bou¢ht the book for Su,an. demonstrates resolution of a reduce-reduce conflict. 
At some point in the parse, the parser is in the following configuration: [ NP V NP PP ii 120 I with a reduce-reduce conflict. Either a more complex NP or a VP can be built. The conflict is resolved in favor of the longer reduction, i.e., the VP reduction. The derivation continues: I NP VP [I I 8 ! I sll 1! I ending in an accepting state with the following generated struc- ture: [sdoe{v~,bought[Npthe bookl[Ppfor Susan]I] 3.5 Lexical Preference To handle the lexical-preferenee examples, we extend the second rule slightly. Preterminal-word pairs can be stipulated as either weak or strong. The second rule becomes (2} Resolve reduce-reduce conflicts by performing the longest reduction with the stroncest &ftmost stack element. 7 Therefore, if it is assumed that the lexicon encodes the information that the triadic form of ~ant" iV2 in the sample grammar) and the dyadic form of ~position" (V1) are both weak, we can see the operation of the shift-reduce parser on the ~dress on that rack" sentences of Section 2. Both sentences are similar in form and will thus have a similar configuration when the reduce-reduce conflict arises. For example, the first sentence will be in the following configuration: t NP wanted NP PP i[ 120 i In this case, the longer reduction would require assignment of the preterminat category V2 to ~ant," which is the weak form: thus, the shorter reduction will be preferred, leading to the derivation: I NP wanted NP ]1 11,1 ] NP VP II i 6 :,': I sli il and the underlying structure: [sthe woman[vpwaated[Np{Npthe dress][ppoa that r~klll] 7Note that, strength takes precedence over length. 116 In the ca~e in which the verb is "positioned," however, the longer reduction does not yield the weak form of the verb; it will there- fore be invoked, reslting in the structure: [sthe woman [vP positioned [Npthe dress][ppon that rackl]] 3.6 Garden-Path Sentences As a side effect of these conflict resolution rules, certain sentences in the language of the grammar will receive no parse by the parsing system just discussed. These sentences are ap- parently the ones classified as "garden-path" sentences, a class that humans also have great difficulty parsing. Marcus's conjec- ture that such difficulty stems from a hard failure of the normal sentence-processing mechanism is directly modeled by the pars- ing system presented here. For instance, the sentence The horse raced past the barn fell exhibits a reduce-reduce conflict before the last word. If the participial form of "raced" is weak, the finite verb form will be chosen; consequently, "raced pant the barn" will be reduced to a VP rather than a participial phrase. The parser will fail shortly, since the correct choice of reduction was not made. Similarly, the sentence That scaly, deep-sea fish ,hould be underwater i~ impor- tant. will fail. though grammatical. Before the word %hould" is shifted, a reduce-reduce conflict arises in forming an NP from either "That scaly, deep-sea l~h" or "scaly, deep-sea fish." The longer (incorrect} reduction will be performed and the parser will fail. Other examples, e.g., "the boy got fat melted," or "the prime number few" would be handled similarly by the parser, though the sample grammar of Appendix I does not parse them [Pcreira and Shieber, forthcoming]. 4. Conclusion To be useful, aatttral-language systems must model the behavior, if not the method, of the native speaker. 
We have demonstrated that a parser using simple general rules for disam- biguating sentences can yield appropriate behavior for a large class of performance phenomena--right a-~soeiation, minimal at- tachment, lexical preference, and garden-path sentences--and that, morever, it can do so deterministically wit, hour generating all the parses and choosing among them. The parsing system has been implemented and has confirmed the feasibility of ottr approach to the modeling of these phenomena. References Aho, A.V.. and S.C. Johnson, 1974: "LR Parsing," Computi,, 9 Sur,,eys. Volume 6, Number 2, pp. 99-i24 ISpring). Ford, M., J. Bresnan, and R. Kaplan, 1982: "A Competence- Based Theory of Syntactic Closure," in The Mental Representation o/Grammatical Relations, J. Bresnan, ed. (Cambridge, Massachusetts: MIT Press). Frazier, L., and J.D. radar, 1978: ~I'he Sausage Machine: A New Two-Stage Parsing Model," Cognition, Volume 6, pp. 291-325. Frazier, L., and J.D. Fodor, 1980: "Is the Human Sentence Parsing Mechanism aa ATN?" Cognition, Volume 8, pp. 411-459. Gazdar, G., 1981: "Unbounded dependencies and coordinate structure," Linquistic Inquiry, Volume 12, pp. 105-179. Kimball, d., 1973: "Seven Principles of Surface Structure Parsing in Natural Language," Cognition, Volume 2, Number 1, pp. 15-47. Marcus, M., 1980: A Theory of Syntactic Recognition/or Natural Lanquagc, (Cambridge, Massachusetts: MIT Press). Pereira, F.C.N., forthcoming: "A New Characterization of Attachment Preferences," to appear in D. Dowry, L. Karttunen, and A. gwicky (eds.) Natural Language Prate,int. Psyeholingui, tic, Computational, and Theoretical Perspective~, Cambridge, England: Cambridge University Press. Pereira, F.C.N., and S.M. Shieber, forthcoming: "ShiR-Reduce Scheduling and Syntactic Closure/ to appear. Wanner, E., 1980: "The ATN and the Sausage Machine: Which One is Baloney?" Caanition, Volume 8, pp. '209-225. Appendix I. The Test Grammar The following is the grammar used to test the parting ~ystem descibed in the paper. Not a robust grammar of English by any means, it is presented only for the purpose of establishing that the preference rules yield the correct, results. S -- NP VP VP -- V3 INF S--gVP VP--V4 ADJ NP -- DET NOM VP -- V5 PP NP -- NOM 5-- that S NP -- PNOUN INF -- to VP NP -- NP S/NP PP -- P NP NP -- NP PARTP PARTP -- VPART PP NP -- NP PP S/NP -- that S/NP DET -- NP's S/NP -- VP NOM -- N S/NP -- NP VP/NP NOM -- ADJ NOM VP/NP -- Vl VP -- AUX VP VP/NP -- V2 PP VP -- V0 VP/NP -- V3 INF/NP VP -- Vl NP VP/NP -. AUX VP/NP VP -- V2 NP PP INF/NP --* to VP/NP Appendix II. 
Sample Runs >> do* bought the hook that I had beln tryin E to obt.in for Susan 117 Accepted: Is Cup Cpnonn Joe)) (vp Cvl bought) Cap (up (dec the) (uoa (n book))) (sbar/np (that that) Cs/np Cup (pnou I)) Cvp/up (uuz bud) (vp/np (auz been) (vp/np Cv3 tryinl) (t-~/np (~plup (v2 obtain) (pp (p for} (up (pnoun Saul] sta~e: stack: input: (1) <(0)> (v4 is) [e [up (den Thlt) (non (IdJ scaly) Chum (~tJ 4eup-ssl) (mum (u fish] C,p Can should) (vp (v4 be) (adj uadu~ter] (|dj itportut) (end) >> Joe bought the book for Suuu Accepted: [8 (up (puoun Joe)) (vp (v2 boucht) Cup Cdet the) Chum Cn book))) (pp (p for) Cup (puoun Sueua] >> The vomam vatted the dreou on thnt r~h Accepted: Is Cup Cdut The) Cue= (u vomu))) (Tp (vt v~ted) Cap (up (den the) (no= (n druu))) (pp (p on) (rip (det that) Curt (u rack] >> The youth poeitioued the dreue on that rack Accepted: Is (up (den The) (noa (n vol,~))) (vp (~2 poaitioued) (up (den the) (nee (~ dreJl))) (pp Cp on) (up (den that} Cuom (. rack] >> The horse raced put the barn fell Parse failed. Currant confiEurltlon: 8tare: (l) stack: <(0)> Is Cap (4*t me) (not (u horse))) (vp (v6 rncea) (pp (p put) (up (4et the) (aou (u b~rn] input: (tO fell) Cend) )) That ecal! ~eep-let fish should be undes=l~tur i8 importer Parse failed. Current cou~ilOlrttiou: 118 | 1983 | 17 |
SYN'I'ACI IC CONSTI~,,\INTS AND F~FI:ICIFNI' I~AI(SAI~,II.I'I'Y Robert C. Berwick Room 820, MIT Artificial Intelligence l,aboratory 545 Technology Square, Cambridge, MA 02139 Amy S. Weinberg Deparuncnt of Linguistics, MIT Cambridge, MA 02139 ABSTRACT A central goal of linguistic theory is to explain why natural languages are the way they are. It has often been supposed that com0utational considerations ought to play a role in this characterization, but rigorous arguments along these lines have been difficult to come by. In this paper we show how a key "axiom" of certain theories of grammar, Subjacency, can be explained by appealing to general restrictions on on-line parsing plus natural constraints on the rule-writing vocabulary of grammars. The explanation avoids the problems with Marcus' [1980] attempt to account for the same constraint. The argument is robust with respect to machine implementauon, and thus avoids the problems that often arise wilen making detailed claims about parsing efficiency. It has the added virtue of unifying in the functional domain of parsing certain grammatically disparate phenomena, as well as making a strong claim about the way in which the grammar is actually embedded into an on-line sentence processor. I INTRODUCTION In its short history, computational linguistics has bccn driven by two distinct but interrelated goals. On the one hand, it has aimed at computational explanations of distinctively human linguistic behavior -- that is, accounts of why natural languages are the way they are viewed from the perspective of computation. On the other hand, it has accumulated a stock of engineenng methods for building machines to deal with natural (and artificial) languages. Sometimes a single body of research has combined both goals. This was true of the work of Marcus [1980]. for example. But all too often the goals have remained opposed -- even to the extent that current transformational theory has been disparaged as hopelessly "intractable" and no help at all in constructing working parsers. This paper shows that modern transformational grammar (the "Government-Binding" or "GB" theory as described in Chomsky [1981]) can contribute to both aims of computational linguistics. We show that by combining simple assumptions about efficient parsability along with some assumpti(ms about just how grammatical theory is to be "embedded" in a model of language processing, one can actually explain some key constraints of natural languages, such as Suhjacency. (The a)gumcnt is differmlt frt)m that used in Marcus 119801.) In fact, almost the entire pattern of cunstraints taken as "axioms" by the GB thct)ry can be accutmtcd tbr. Second, contrary to what has sometimes been supposed, by exph)iting these constraints wc can ~how that a Gll-based theory is particularly compatil)le v~idl efficient parsing designs, in particdlar, with extended I I~,(k,t) parsers (uf the sort described by Marcus [1980 D. Wc can extcnd thc I,R(k.t) design to accommodate such phenomena as antecedent-PRO and pronominal binding. Jightward movement, gappiug, aml VP tlcletion. A, Functional Explanations o__f I,ocality Principles Let us consider how to explain locality constraints in natural languages. First of all, what exactly do we mean by a "locality constraint"? 
"]'he paradigm case is that of Subjacency: the distance between a displaced constituent and its "underlying" canonical argument position cannot be too large, where the distance is gauged (in English) in terms of the numher of the number of S(entence) or NP phrase boundaries. For example, in sentence (la) below, John (the so-called "antecedent") is just one S-boundary away from its presumably "underlying" argument position (denoted "x", the "trace")) as the Subject of the embedded clause, and the sentence is fine: (la) John seems [S x to like ice cream]. However, all we have to do ts to make the link between John and x extend over two S's, and the sentence is ill-formed: (lb) John seems [S it is certain [S x to like ice cream This restriction entails a "successive cyclic" analysis of transformational rules (see Chomsky [1973]). In order to derive a sentence like (lc) below without violating the Subjacency condition, we must move the NP from its canonical argument position through the empty Subject position in the next higher S and then to its surface slot: (lc) John seems tel to be certain x to get the ice cream. Since the intermediate subject position is filled in (lb) there is no licit derivation for this sentence. More precisely, we can state the Subjacency constraint as follows: No rule of grammar can involve X and Y in a configuration like the following, [ ...x...[,, ...[/r..Y...]...l ...X...] where a and # are bounding nodes (in l.'nglish, S or NP phrases). " Why should natural languages hc dcsigned Lhis way and not some other way? Why, that is, should a constraint like Subjaccncy exist at all? Our general result is that under a certain set of assumptions about grammars and their relationship to human sentence processing one can actually expect the following pattern of syntactic igcality constraints: (l) The antecedent-trace relationship must obey Subjaccncy, but other "binding" realtionships (e.g., NP--Pro) need not obey Subjaccncy. 119 (2) Gapping constructitms must be subject to a bounding condition resembling Subjacency. but VP deletion nced not be. (3) Rightward movemcnt must be stricdy bounded. To the extent that this predicted pattern of constraints is actually observed -- as it is in English and other languages -- we obtain a genuine functional explanation of these constraints and support for the assumptions themselves. The argument is different from Man:us' because it accounts for syntactic locality constraints (like Subjaceney) ,as the joint effect of a particular theory of grammar, a theory of how that grammar is used in parsing, a criterion for efficient parsability. and a theory of of how the parser is builL In contrast, Marcus attempted to argue that Subjaceney could be derived from just the (independently justified) operating principles of a particular kind of parser. B. Assumptions. The assumptions we make are the following: (1) The grammar includes a level of annotated surface structure indicating how constituents have been displaced from their canonical predicate argument positions. Further, sentence analysis is divided into two stages, along the lines indicated by tile theory of Government and Binding: the first stage is a purely syntactic analysis that rebuilds annotated surface structure; the second stage carries out the interpretation of variables, binds them to operators, all making use of the "referential indices" of NPs. (2) To be "visible" at a stage of analysis a linguistic representation must be written in the vocabulary of that level. 
For example, to be affected by syntactic operations, a representation must be expressed in a syntactic vocabulary (in the usual sense); to be interpreted by operations at the second stage, the NPs in a representation must possess referential indices. (This assumption is not needed to derive the Subjaccncy constraint, but may be used to account for another "axiom" of current grammatical theory, the so-called "constituent command" constraint on antecedcnLs and the variables that they hind.) This "visibility" assumption is a rather natural one. (3) The rule-writing vocabulary of the grammar cannot make use of arithmetic predicates such as "one", "two" or "three". but only such predicates as "adjacent". Further, quzmtificational statements are not allowed m rt.les. These two assumptions are also rather standard. It has often been noted that grammars "do not count" -- that grammatical predicates are structurally based. There is no rule of grammar that takes the just the fourth constituent of a sentence and moves it, for example. In contrast, many different kinds of rules of grammar make reference to adjacent constituents. (This is a feature found in morphological, phonological, and syntactic rules.) (4) Parsing is no....! done via a method that carries along (a representation) of all possible derivations in parallel. In particular, an Earley-type algorithm is ruled out. To the extent that multiple options about derivations are not pursued, the parse is "deterministic." (5) The left-context of the parse (as defined in Aho and Ullman [19721) is literally represented, rather than generatively represented (as, e.g., a regular set). In particular, just the symbols used by the grammar (S, NP. VP...) are part of the left-context vocabulary, and not "complex" symbols serving as proxies for the set of lefl.-context strings. 1 In effect, we make the (quite strong) assumption that the sentence processor adopts a direct, transparent embedding of the grammar. Other theories or parsing methods do not meet these constraints and fail to explain the existence of locality constraints with respect to thts particular set of assumpuons. 2 For example, as we show, there is no reason to expect a constraint like Subjacency in the Generalized Phrase Structure Grammars/GPSGsl of G,zdar 119811, because there is no inherent barrier to eastly processing a sentence where an antecedent and a trace are !.mboundedly far t'rt~m each other. Similarly if a parsing method like Earlcy's algorithm were actually used by people, than Sub]acency remains a my:;tcry on the functional grounds of efficient parsability. (It could still be explained on other functional grounds, e.g., that oflearnability.) II PARSING AND LOCALITY PRINCIPLES To begin the actual argument then, assume that on-line sentence processing is done by something like a deterministic parser) Sentences like (2) cause trouble for such a parser: (2) What i do you think that John told Mary...mat ne would like to eat % t. Recall that the suoec.~i~'e lines of a left- or right-most derivation in a context-free grammar cnnstttute a regular Language. ~.~ shown m. e.g.. DcRemer [19691. 2. Plainly. one is free to imagine some other set of assumptions that would do the job. 3. If one a.ssumcs a backtracking parser, then the argument can also be made to go through, but only by a.,,,,~ummg that backtracking Ks vcr/co~tlS, Since this son of parser clearly ,,~ab:~umes the IR(kPt,',pe machines under t/le right co,mrual of 'cost". we make the stronger assumption of I R(k)-ncss. 
120 The problem is that on recognizing the verb eat the parser must decide whether to expand the parse with a trace (the transitive reading) or with no postverbal element (.the intransitive reading). The ambiguity cannot be locally resolved since eat takes both readings. It can only be resolved by checking to see whether there is an actual antecedent. Further, observe that this is indeed a parsing decision: the machine must make some decision about how to tu build a portion of the parse tree. Finally, given non-parallelism, the parser is not allowed to pursue both paths at once: it must decide now how to build the parse tree (by inserting an empty NP trace or not). Therefore, assuming that the correct decision is to be made on-line (or that retractions of incorrect decisions are costly) there must be an actual parsing rule that expands a category as transitive iff there is an immediate postverbal NP in the string (no movement) or if an actual antecedent is present. However, the phonologically overt antecedent can be unboundedly far away from the gap. Therefore, it would seem that the relevant parsing rule would have to refer to a potentially unbounded left context. Such a rule cannot be stated in the finite control table of an I,R(k) parser. Theretbre we must find some finite way of expressing the domain over which the antecedent must be searched. There are two ways of accomplishing this. First, one could express all possible left-contexts as somc regular set and then carry this representation along in the finite control table of the I,R(k) machine. This is always pu,,;sible m the case of a contcxt-fiee grammar, and m fact is die "standard" approach. 4 However, m the case of (e.g.) ,,h moven!enk this demands a generative encoding of the associated finite state automaton, via the use of complex symbols like "S/wh" (denoting the "state" that a tvtt has been encountered) and rules to pass king this nun-literal representation of the state of the parse. Illis approach works, since wc can pass akmg this state encoding through the VP (via the complex non-terminal symbol VP/wh) and finally into the embedded S. This complex non-terminal is then used to trigger an expansion of eat into its transitive form. Ill fact, this is precisely the solution method advocated by Gazdar. We ~ce then that if one adopts a non-terminal encoding scheme there should he no p,oblem in parsing any single long-distance gap-filler relationship. That is, there is no need for a constraint like Subjacency. s Second, the problem of unbounded left-context is directly avoided if the search space is limited to some literally finite left context. But this is just what the Sttbjacency c(mstraint does: it limits where an antecedent NP could be to an immediately adjacent S or S. This constraint has a StlllpJe intcrprctatum m an actual parser (like that built hy Murcus [19};0 D. l'he IF-THEN pattern-action rules that make up the Marcus parser's ~anite control "transi:ion table" must be finite in order to he stored ioside a machine. The rule actions themselves are literally finite. If the role patterns must be /herally stored (e.g., the pattern [S [S"[S must be stored as an actual arbitrarily long string ors nodes, rather than as the regular set S+), then these patterns must be literally finite. That is, parsing patterns must refer to literally hounded right and left context (in terms of phrasal nodes). 
6 Note Further that 4 Following the approactl of DcRemer []969], one budds a finHe stale automaton Lhat reco~nl/es exactly Ihe set of i¢[t-(OIIlext strings that cain arise during the course of a right-most derivation, the so-Gilled ch,melert.sllcf'.nife s/ale ClUlOmC~lott. 5 l'laml} the same Imlds for a "hold cell" apploaeh [o compulm 8 filler-gap relallonshipi 6. Actually Uteri. lhJ8 k;nd or device lall!; lllto lJae (~itegoly of bounded contc;~t parsing. a.'~ defiued b~. I ]oyd f19(.)4]. this constraint depends on the sheer represcntability of the parser's rule system in a finite machine, rather than on any details of implementation. Therefore it will hold invariantly with respect to rnactfine design -- no matter kind of machine we build, if" we assume a literal representation of left-contexts, then some kind t)f finiteness constraint is required. The robustness of this result contrasts with the usual problems in applying "efficiency" results to explain grm'~T""'!cal constraints. These often fail because it is difficult to consider all possible implcmentauons simultaneously. However, if the argument is invariant with respect to machine desing, this problem is avoided. Given literal left-contexts and no (or costly) backtracking, the argument so far motivates some bounding condition for ambiguous sentences like these. However, to get the lull range of cases these functional facts must interact with properties of the rule writing system as defined by the grammar. We will derive the litct that the Imunding condition must be ~acency (as opposed to tri- or quad-jaccncy) by appeal to the lhct that grammatical c~m~tramts and rules arc ~tated in a vocabtdary which is non-c'vunmtg. ,',rithmetic predicates are forbidden. But this means that since only the prediu~lte "ad].cent" is permitted, any literal I)ouuding rc,~trict]oi] must be c.xprc,~)cd m tcrlllS of adjacent domains: t~e~;ce Subjaccncy. INert that ",djacent" is also an arithmetic predicate.) l:urthcr. Subjaccncy mu,,t appiy ~.o ,ill traces (not ju',t traces of,mlb=guously traw~itive/imransi[ive vcrb,o in:cause a restriction to just the ambiguous cases would low)ire using cxistentml quantilicati.n. Ouantificatiomd predicates are barred in the rule writing vocabulary of natural grammars. 7 Next we extend the approach to NP movement and Gapping. Gapping is particularly interesting because it is difficult ~o explain why this construction (tmlike other deletiou rules) is bounded. That is, why is (3) but not (4) grammatical: (3) John will hit Frank and Bill will [ely P George. *(4)John will hit Frank and I don't believe Bill will [elvpGeorge. The problem with gapping constructions is that the attachment of phonologically identical complements is governed by the verb that the complement follows. Extraction tests show that in {5) the pilrase u/?er M'ao' attaches to V" whde in (6) it attaches to V" (See Hornstem and Wemberg []981] for details.} (5) John will mn aftcr Mary. (6) John will arrivc after Mary. In gapping structures, however, the verb of the gapped constituent ,s not present in the string. Therefore. correct ,lltachrnent o( the complement can only be guaranteed by accessing the antecedent in the previous clause. If this is true however, then the boundlng argument for Suhjacency applies to this ease as well: given deterministic parsing of gapping done correctly, and a literal representation of left-context, then gapping must be comext-bounded. 
Note that this is a particularly 7 Of course, there zs a anolhcr natural predic.atc Ihat would produce a finite bound on rule context: i[ ~]) alld Irate hod I. bc in tile .ame S donlalll Prc~umahb', lhls is also an Optlllt3 ~l;iI could gel reah,ed in qOII|C n.'Ittlral l~rJoln'iai~: ll'ic resuhing languages would no( have ov,,:rt nlo~.eIIICill OUlside o[ an S. %o(e lllal Lhc naltllal plcdJc;des simply give the ranta¢ of po~edble ndiulal granmlars. ]lot those actually rour~d. The elimination of quanllfil',.llion predic~les is supportable on grounds o(acquisltton. 121 interesting example bccause it shows how grammatically dissimilar operations like wh-movement and gapping can "fall together" in the functional domain of parsing. NP-trace and gaplSing constructions contrast with antecedentY(pro)nominal binding, lexical anaphor relationships, and VP deletion. These last three do not obey Subjacency. For example, a Noun Phrase can be unboundedly far from a (phonologically empty) PRO. even in tenns of John i thought it was certain that... [PRO i feeding himself] would be easy. Note though that in these cases the expansion of the syntactic tree does no._At depend on the presence or absence of an antecedent (Pro)nominals and Icxical anaphors are phonologically realized in the string and can unambiguously tell the parser hew to expand the tree. (After the tree is fully expanded the parser may search back to see whether the element is bound to an antecedent, but this is not a parsing decision,) VP deletion sites are also always locally detectable from ~e simple fact that every sentence requires a VP. The same argument applies to PRO. PRO is locally detectable as the only phonologically unrealized element that can appear in an ungoverned context, and the predicate "ungoverned" is local. 8 In short, there is no parsing decision that hinges on establishing the PRO-antecedent. VP deletion-antecedent, t)r lexical anaphor-antecedent relationship. But then, we should not expect bounding principles to apply in thcse cases, and, in fact, we do not find these elements subject to bounding. Once again then. apparently diverse grammaucal phcnomc,m behave alike within a functional realm. To summarize, we can explain why Subjacency applies to exactly those elements that the grammar stipulates it must apply to. We do this using both facts about the functional design of a parsing system and properties of the formal rule writing vocabulary, l'o the extent that the array of assumpuons about the grammar and parser actually explain this observed constraint on human linguistic behavior, we obtain a powerful argument that certain kinds of grammatical represenumons and parsing dcstgns are actually implicated in human sentence processing. Chomsky, Noam [19811 Lectures on Gove,nmem and Binding, Foris Publications. I)eRerner, Frederick [1969] Practical 7"nms,':m~sJbr IR(k) I.angu,ges, Phi) di.~scrtation, MIT Department of Electrical Engineering and Computer Science. Floyd, Robert [1964] "Bounded-context syntactic analysis." Communtcations of the Assoctatiotl for Computing ,l.lachinery, 7, pp, 62-66. Gazdar, Gerald [19811 "Unbounded dependencies and coordinate structure," Linguistic Inquiry, 12:2 I55-184. Hornstein. Norbert and Wcinherg, Amy [19811 "Preposition stranding and case theory," LingutMic [nquio,, 12:1. 
Marcus, Mitchell [19801 A Theory of Syntactic Recognition for Natural Language, M IT Press 111 ACKNOWLEDGEIvlENTS This report describes work done at the Artificial Intelligence Laboratory of the Massachusetts Institute ofl'cchnt)logy. Support for the Laboratory's artificial intelligence research is prey)deal in part by tiac Advanced P, esearch ProjccLs Agency of the Department of Defense under Office ()f Naval Research Contract N00014-80-C-0505. IV REFERENCES Aho, Alfred and Ullman, Jeffrey [1972] The Theory of Parsing Trnn.~lalion, attdCumpiiing, vo[. [., Prentice-(-{all. Chumsky, Noam [1973] "Conditions on 'rransformations,"in S. Anders(m & P Kiparsky, eds. A Feslschr(l'l [or Morris Halle. Holt, Rinehart and Winston. 8 F;hlce ~ ~s ungovcNicd fff a ~ovct'llcd t:~ F;L[:~c, and a go~c,'m~J is a bounded predicate, i hcmg Lcstrictcd Io mu~',dy a ~in~i¢ lllaX1111;il Drojcctlon (at worst al| S). 122 | 1983 | 18 |
Deterministic Parsing of Syntactic Non-fluencies Donald Hindle Bell Laboratories Murray Hill, New Jersey 07974 It is often remarked that natural language, used naturally, is unnaturally ungrammatical.* Spontaneous speech contains all manner of false starts, hesitations, and self-corrections that disrupt the well-formedness of strings. It is a mystery then, that despite this apparent wide deviation from grammatical norms, people have little difficx:lty understanding the non-fluent speech that is the essential medium of everyday life. And it is a still greater mystery that children can succeed in acquiring the grammar of a language on the basis of evidence provided by a mixed set of apparently grammatical and ungrammatical strings. I. Sell-correction: a Rule-governed System In this paper I present a system of rules for resolving the non-fluencies of speech, implemented as part of a computational model of syntactic processing. The essential idea is that non-fluencies occur when a speaker corrects something that he or she has already said out loud. Since words once said cannot be unsaid, a speaker can only accomplish a self-correction by saying something additional -- namely the intended words. The intended words are supposed to substitute for the wrongly produced words. For example, in sentence (1), the speaker initially said I but meant we. (1) I was-- we were hungry. The problem for the hearer, as for any natural language understanding system, is to determine what words are to be expunged from the actual words said to find the intended sentence. Labov (1966) provided the key to solving this problem when he noted that a phonetic signal (specifically, a markedly abrupt cut-off of the speech signal) always marks the site where self-correction takes place. Of course, finding the site of a self-correction is only half the problem; it remains to specify what should be removed. A first guess suggests that this must be a non-deterministic problem, requiring complex reasoning about what the speaker meant to say. Labov claimed that a simple set of rules operating on the surface string would specify exactly what should be changed, transforming nearly all non-fluent strings into fully grammatical sentences. The specific set of transformational rules Labor proposed were not formally adequate, in part because they were surface transformations which ignored syntactic constituenthood. But his work forms the basis of this current analysis. This research was done for the most part at the University of Pennsylvama. supported by the National Institute of Education under grants GTg-0169 and G80-0163. Labor's claim was not of course that ungrammatical sentences are never produced in speech, for that clearly would be false. Rather, it seems that truly ungrammatical productions represent only a tiny fraction of the spoken output, and in the preponderance of cases, an apparent ungrammaticality can be resolved by simple editing rules. In order to make sense of non-fluent speech, it is essential that the various types of grammatical deviation be distinguished. This point has sometimes been missed, and fundamentally different kinds of deviation from standard grammaticality have been treated together because they all present the same sort of problem for a natural language understanding system. 
For example, Hayes and Mouradian (1981) mix together speaker-initiated self-corrections with fragmentary sentences of all sorts: people often leave out or repeat words or phrases, break off what they are saying and rephrase or replace it, speak in fragments, or otherwise use incorrect grammar (1981:231). Ultimately, it will be fluent productions on are fully grammatical other. Although we characterization of essential to distinguish between non- the one hand, and constructions that though not yet understood, on the may not know in detail the correct such processes as ellipsis and conjunction, they are without doubt fully productive grammatical processes. Without an understanding of the differences in the kinds of non-fluencies that occur, we are left with a kind of grab bag of grammatical deviation that can never be analyzed except by some sort of general purpose mechanisms. In this paper, I want to characterize the subset of spoken non-fluencies that can be treated as self-corrections, and to describe how they are handled in the context of a deterministic parser. I assume that a system for dealing with self-corrections similar to the one I describe must be a part of the competence of any natural language user. I will begin by discussing the range of non-fluencies that occur in speech. Then, after reviewing the notion of deterministic parsing, I will describe the model of parsing self-corrections in detail, and report results from a sample of 1500 sentences. Finally, I discuss some implications of this theory of self-correction, particularly for the problem of language acquisition. 2. Errors in Spontaneous Speech Linguists have been of less help in describing the nature of spoken non-fluencies than might have been hoped; relatively little attention has been devoted to the actual performance of speakers, and studies that claim to be based 123 on performance data seem to ignore the problem of non- fluencies. (Notable exceptions include Fromkin (1980), and Thompson (1980)). For the discussion of self-correction, I want to distinguish three types of non-fluencies that typically occur in speech. 1. Unusual Constructions. It is perhaps worth emphasizing that the mere fact that a parser does not handle a construction, or that linguists have not discussed it, does not mean that it is ungrammatical. In speech, there is a range of more or less unusual constructions which occur productively (some occur in writing as well), and which cannot be considered syntactically ill-formed. For example, (2a) I imagine there's a lot of them must have had some good reasons not to go there. (2b) That's the only thing he does is fight. Sentence (2a) is an example of non-standard subject relative clauses that are common in speech. Sentence (2b), which seems to have two tensed "be" verbs in one clause is a productive sentence type that occurs regularly, though rarely, in all sorts of spoken discourse (see Kroch and Hindle 1981). I assume that a correct and complete grammar for a parser will have to deal with all grammatical processes, marginal as well as central. I have nothing further to say about unusual constructions here. 2. True Ungrammatical/ties. A small percentage of spoken utterances are truly ungrammatical. That is, they do not result from any regular grammatical process (however rare), nor are they instances of successful self-correction. Unexceptionable examples are hard to find, but the following give the flavor. (3a) I've seen it happen is two girls fight. 
(3b) Today if you beat a guy wants to blow your head off for something. (3c) And aa a lot of the kids that are from our neighborhood-- there's one section that the kids aren't too-- think they would usually-- the-- the ones that were the-- the drop outs and the stoneheads. Labov (1966) reported that less that 2% of the sentences in a sample of a variety of types of conversational English were ungrammatical in this sense, a result that is confirmed by current work (Kroch and Hindle 1981). 3. Self-corrected strings. This type of non-fluency is the focus of this paper. Self-corrected strings all have the characteristic that some extraneous material was apparently inserted, and that expunging some substring results in a well-formed syntactic structure, which is apparently consistent with the meaning that is intended. In the degenerate case, self-correction inserts non-lexical material, which the syntactic processor ignores, as in (4). (aa) He was uh still asleep. (4b) I didn't ko-- go right into college. The minimal non-lexical material that self-correction might insert is the editing signal itself. Other cases (examples 6- 10 below) are only interpretable given the assumption that certain words, which are potentially part of the syntactic structure, are to be removed from the syntactic analysis. The status of the material that is corrected by self- correction and is expunged by the editing rules is somewhat odd. I use the term expunction to mean that it is removed from any further syntactic analysis. This does not mean however that a self-corrected string is unavailable for semantic processing. Although the self-corrected string is edited from the syntacti c analysis, it is nevertheless available for semantic interpretation. Jefferson (1974) discusses the example (5) ... [thuh] -- [thiy] officer ... where the initial, self-corrected string (with the pre- consonantal form of the rather than the pre-vocalic form) makes it clear that the speaker originally inteTided to refer to the police by some word other than officer. I should also note that the problems addressed by the self-correction component that I am concerned with are only part of the kind of deviance that occurs in natural language use. Many types of naturally occurring errors are not part of this system, for example, phonological and semantic errors. It is reasonable to hope that much of this dreck will be handled by similar subsystems. Of course, there will always remain errors that are outside of any system. But we expect that the apparent chaos is much more regular than it at first appears and that it can be modeled by the interaction of components that are themselves simple. In the following discussion, I use the terms self- correction and editing more or less interchangeably, though the two terms emphasize the generation and interpretation aspects of the same process. 3. The Parser The editing system that I will describe is implemented on top of a deterministic parser, called Fidditch. based on the processing principles proposed by Marcus (1980). It takes as input a sentence of standard words and returns a labeled bracketing that represents the syntactic structure as an annotated tree structure. Fidditch was'designed to process transcripts of spontaneous speech, and to produce an analysis, partial if necessary, for a large corpus of interview transcripts. Because Jris a deterministic parser, it produces only one analysis for each sentence. 
When Fidditch is unable to build larger constituents out of subphrases, it moves on to the next constituent of the sentence. In brief, the parsing process proceeds as follows. The words in a transcribed sentence (where sentence means one tensed clause together with all subordinate clauses) are assigned a lexical category (or set of lexical categories) on the basis of a 2000 word lexicon and a morphological analyzer. The lexicon contains, for each word, a list of possible lexical categories, subcategorization information, and in a few cases, information on compound words. For example, the entry for round states that it is a noun, verb, adjective or preposition, that as a verb it is subcategorized for the movable particles out and up and for NP, and that it may be part of the compound adjective/preposition round about. Once the lexical analysis is complete, The phrase structure tree is constructed on the basis of pattern-action rules using two internal data structures: 1) a push-down stack of incomplete nodes, and 2) a buffer of complete constituents, into which the grammar rules can look through 124 a window of three constituents. The parser matches rule patterns to the configuration of the window and stack. Its basic actions include -- starting to build a new node by pushing a category onto the stack -- attaching the first element of the window to the stack -- dropping subtrees from the stack into the first position in the window when they are complete. The parser proceeds deterministically in the sense that no aspect of the tree structure, once built may be altered by any rule. (See Marcus 1980 for a comprehensive discussion of this theory of parsing.) 4. The serf-correction rules The self-correction rules specify how much, if anything, to expunge when an editing signal is detected. The rules depend crucially on being able to recognize an editing signal, for that marks the right edge of an expunction site. For the present discussion, I will assume little about the phonetic nature of the signal except that it is phonetically recognizable, and that, whatever their phonetic nature, all editing signals are, for the self-correction system, equivalent. Specifying the nature of the editing signal is, obviously, an area where further research is needed. The only action that the editing rules can perform is expunction, by which I mean removing an element from the view of the parser. The rules never replace one element with another or insert an element in the parser data structures. However, both replacements and insertions can be accomplished within the self-correction system by expunction of partially identical strings. For example, in (6) I am-- I was really annoyed. The self-correction rules will expunge the I am which precedes the editing signal, thereby in effect replacing am with was and inserting really. Self-corrected strings can be viewed formally as having extra material inserted, but not involving either deletion or replacement of material. The linguistic system does seem to make use of both deletions and replacements in other subsystems of grammar however, namely in ellipsis and rank shift..As with the editing system, these are not errors but formal systems that interact with the central features of the syntax. True errors do of course occur involving all three logical possibilities (insertion, deletion, and replacement) but these are relatively rare. 
The self-correction rules have access to the internal data structures of the parser, and like the parser itself, they operate deterministically. The parser views the editing signal as occurring at the end of a constituent, because it marks the right edge of an expunged element. There are two types of editing rules in the system: expunction of copies, for which there are three rules, and lexically triggered restarts, for which there is one rule.

4.1 Copy Editing

The copying rules say that if you have two elements which are the same and they are separated by an editing signal, the first should be expunged from the structure. Obviously the trick here is to determine what counts as copies. There are three specific places where copy editing applies.

SURFACE COPY EDITOR. This is essentially a non-syntactic rule that matches the surface string on either side of the editing signal, and expunges the first copy. It applies to the surface string (i.e., for transcripts, the orthographic string) before any syntactic processing. For example, in (7), the underlined strings are expunged before parsing begins.

(7a) Well if they'd-- if they'd had a knife I wou-- I wouldn't be here today. (7b) If they-- if they could do it.

Typically, the Surface Copy Editor expunges a string of words that would later be analyzed as a constituent (or partial constituent), and would be expunged by the Category or the Stack Editors (as in 7a). However, the string that is expunged by the Surface Copy Editor need not be dominated by a single node; it can be a sequence of unrelated constituents. For example, in (7b) the parser will not analyze the first if they as an SBAR node since there is no AUX node to trigger the start of a sentence, and therefore, the words will not be expunged by either the Category or the Stack editor. Such cases where the Surface Copy Editor must apply are rare, and it may therefore be that there exists an optimal parser grammar that would make the Surface Copy Editor redundant; all strings would be edited by the syntactically based Category and Stack Copy rules. However, it seems that the Surface Copy Editor must exist at some stage in the process of syntactic acquisition. The overlap between it and the other rules may be essential in learning.

CATEGORY COPY EDITOR. This copy editor matches syntactic constituents in the first two positions in the parser's buffer of complete constituents. When the first window position ends with an editing signal and the first and second constituents in the window are of the same type, the first is expunged. For example, in sentence (8) the first of two determiners separated by an editing signal is expunged and the first of two verbs is similarly expunged.

(8) I was just that-- the kind of guy that didn't have-- like to have people worrying.

STACK COPY EDITOR. If the first constituent in the window is preceded by an editing signal, the Stack Copy Editor looks into the stack for a constituent of the same type, and expunges any copy it finds there along with all descendants. (In the current implementation, the Stack Copy Editor is allowed to look at successive nodes in the stack, back to the first COMP node or attention shifting boundary. If it finds a copy, it expunges that copy along with any nodes that are at a shallower level in the stack. If Fidditch were allowed to attach incomplete constituents, the Stack Copy Editor could be implemented to delete the copy only, without searching through the stack.
The specifics of the implementation seem not to matter for this discussion of the editing rules.) In sentence (9), the initial embedded sentence is expunged by the Stack Copy Editor.

(9) I think that you get-- it's more strict in Catholic schools.

4.2 An Example

It will be useful to look a little more closely at the operation of the parser to see the editing rules at work. Sentence (10)

(10) I-- the-- the guys that I'm-- was telling you about were.

includes three editing signals which trigger the copy editors. (Note also that the complement of were is ellipted.) I will show a trace of the parser at each of these correction stages.

The first editor that comes into play is the Surface Copy Editor, which searches for identical strings on either side of an editing signal, and expunges the first copy. This is done once for each sentence, before any lexical category assignments are made. Thus in effect, the Surface Copy Editor corresponds to a phonetic/phonological matching operation, although it is in fact an orthographic procedure because we are dealing with transcriptions. Obviously, a full understanding of the self-correction system calls for detailed phonetic/phonological investigations. After the Surface Copy Editor has applied, the string that the lexical analyzer sees is (11)

(11) I-- the guys that I'm-- was telling you about were.

rather than (10). Lexical assignments are made, and the parser proceeds to build the tree structures. After some processing, the configuration of the data structures is that shown in Figure 1.

[Figure 1. The parser state before the Stack Copy Editor applies. Node stack: NP<I-->, NP<the guys>, an attention-shift boundary, NP<I>, and the current active node AUX<am-->; complete nodes in the window: AUX<was>, V<telling>, PRON<you>.]

Before determining what next rule to apply, the two editing rules come into play, the Category Editor and the Stack Editor. At this pulse, the Stack Editor will apply because the first constituent in the window is of the same type (an AUX node) as the current active node, and the current node ends with an edit signal. As a result, the AUX node in the stack is popped into another dimension, leaving the parser data structures in the state shown in Figure 2.

[Figure 2. The parser state after Stack Copy Editing the AUX node: AUX<am--> is gone from the stack; the window still contains AUX<was>, V<telling>, PRON<you>.]

Parsing of the sentence proceeds, and eventually reaches the state shown in Figure 3, where the Stack Editor conditions are again met. The current active node and the first element in the window are both NPs, and the active node ends with an edit signal. This causes the current node to be expunged, leaving only a single NP node, the one in the window.

[Figure 3. The parser state before the second application of the Stack Copy Editor. Current active node: NP<I-->; window: NP<the guys>, SBAR<that...>, AUX<were>.]

The final analysis of the sentence, after some more processing, is the tree shown in Figure 4. I should reemphasize that the status of the edited elements is special. The copy editing rules remove a constituent, no matter how large, from the view of the parser. The parser continues as if those words had not been said. Although the expunged constituents may be available for semantic interpretation, they do not form part of the main predication.
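For concreteness, here is a rough Python sketch of the Surface Copy Editor, written for this presentation rather than taken from Fidditch. It assumes the edit signal "--" is transcribed as a separate token, finds the longest identical word string on both sides of each signal, and expunges the left copy together with its signal, carrying sentence (10) into sentence (11).

EDIT = "--"

def surface_copy_edit(tokens):
    # expunge the left-hand copy (and its edit signal) of identical
    # strings flanking an edit signal; other signals are left alone
    out = list(tokens)
    i = 0
    while i < len(out):
        if out[i] == EDIT:
            limit = min(i, len(out) - i - 1)
            n = max((k for k in range(1, limit + 1)
                     if out[i - k:i] == out[i + 1:i + 1 + k]), default=0)
            if n > 0:
                del out[i - n:i + 1]   # remove left copy plus the signal
                i -= n
                continue
        i += 1
    return out

s10 = "I -- the -- the guys that I'm -- was telling you about were".split()
print(" ".join(surface_copy_edit(s10)))
# I -- the guys that I'm -- was telling you about were   (= sentence 11)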
[Figure 4. The final analysis of sentence (10): a tree for the edited string "the guys that I was telling you about were", in which the NP the guys is modified by the relative clause that I was telling you about (with a trace after about); the expunged material does not appear.]

4.3 Restarts

A somewhat different sort of self-correction, less sensitive to syntactic structure and flagged not only by the editing signal but also by a lexical item, is the restart. A restart triggers the expunction of all words from the edit signal back to the beginning of the sentence. It is signaled by a standard edit signal followed by a specific lexical item drawn from a set including well, ok, see, you know, like I said, etc. For example,

(12a) That's the way if-- well everybody was so stoned, anyway. (12b) But when I was young I went in-- oh I was nineteen years old.

It seems likely that, in addition to the lexical signals, specific intonational signals may also be involved in restarts.

5. A sample

The editing system I have described has been applied to a corpus of over twenty hours of transcribed speech, in the process of using the parser to search for various syntactic constructions. The transcripts are of sociolinguistic interviews of the sort developed by Labov and designed to elicit unreflecting speech that approximates natural conversation. They are conversational interviews covering a range of topics, and they typically include considerable non-fluency. (Over half the sentences in one 90 minute interview contained at least one non-fluency.) The transcriptions are in standard orthography, with sentence boundaries indicated. The alternation of speakers' turns is indicated, but overlap is not. Editing signals, when noted by the transcriber, are indicated in the transcripts with a double dash. It is clear that this approach to transcription only imperfectly reflects the phonetics of editing signals; we can't be sure to what extent the editing signals in our transcripts represent facts about production and to what extent they represent facts about perception. Nevertheless, except for a general tendency toward underrepresentation, there seems to be no systematic bias in our transcriptions of the editing signals, and therefore our findings are not likely to be undone by a better understanding of the phonetics of self-correction.

One major problem in analyzing the syntax of English is the multiple category membership of words. In general, most decisions about category membership can be made on the basis of local context. However, by its nature, self-correction disrupts the local context, and therefore the disambiguation of lexical categories becomes a more difficult problem. It is not clear whether the rules for category disambiguation extend across an editing signal or not. The results I present depend on a successful disambiguation of the syntactic categories, though the algorithm to accomplish this is not completely specified. Thus, to test the self-correction routines I have, where necessary, imposed the proper category assignment.

Table 1 shows the result of this editing system in the parsing of the interview transcripts from one speaker. All in all this shows the editing system to be quite successful in resolving non-fluencies. (The interviews for this study were conducted by Tony Kroch and by Anne Bower.)
TABLE 1. SELF-CORRECTION RULE APPLICATION

    total sentences                            1512
    total sentences with no edit signal        1108  (73%)

    Editing Rule Applications
      expunction of edit signal only            128  (24%)
      surface copy                              161  (29%)
      category copy                              47   (9%)
      stack copy                                148  (27%)
      restart                                    32   (6%)
      failures                                   17   (3%)
      remaining unclear and ungrammatical        11   (2%)

6. Discussion

Although the editing rules for Fidditch are written as deterministic pattern-action rules of the same sort as the rules in the parsing grammar, their operation is in a sense isolable. The patterns of the self-correction rules are checked first, before any of the grammar rule patterns are checked, at each step in the parse. Despite this independence in terms of rule ordering, the operation of the self-correction component is closely tied to the grammar of the parser; for it is the parsing grammar that specifies what sort of constituents count as the same for copying. For example, if the grammar did not treat there as a noun phrase when it is subject of a sentence, the self-correction rules could not properly resolve a sentence like

(13) People-- there's a lot of people from Kensington

because the editing rules would never recognize that people and there are the same sort of element. (Note that (13) cannot be treated as a Restart because the lexical trigger is not present.) Thus, the observed pattern of self-correction introduces empirical constraints on the set of features that are available for syntactic rules.

The self-correction rules impose constraints not only on what linguistic elements must count as the same, but also on what must count as different. For example, in sentence (14), could and be must be recognized as different sorts of elements in the grammar for the AUX node to be correctly resolved. If the grammar assigned the two words exactly the same part of speech, then the Category Copy Editor would necessarily apply, incorrectly expunging could.

(14) Kid could-- be a brain in school.

It appears therefore that the pattern of self-corrections that occur represents a potentially rich source of evidence about the nature of syntactic categories.

Learnability. If the patterns of self-correction count as evidence about the nature of syntactic categories for the linguist, then this data must be equally available to the language learner. This would suggest that, far from being an impediment to language learning, non-fluencies may in fact facilitate language acquisition by highlighting equivalence classes.

This raises the general question of how children can acquire a language in the face of unrestrained non-fluency. How can a language learner sort out the grammatical from the ungrammatical strings? (The non-fluencies of speech are of course but one aspect of the degeneracy of input that makes language acquisition a puzzle.) The self-correction system I have described suggests that many non-fluent strings can be resolved with little detailed linguistic knowledge. As Table 1 shows, about a quarter of the editing signals result in expunction of only non-linguistic material. This requires only an ability to distinguish linguistic from non-linguistic stuff, and it introduces the idea that edit signals signal an expunction site. Almost a third are resolved by the Surface Copying rule, which can be viewed simply as an instance of the general non-linguistic rule that multiple instances of the same thing count as a single instance.
The category copying rules are generalizations of simple copying, applied to a knowledge of linguistic categories. Making the transition from surface copies to category copies is aided by the fact that there is considerable overlap in coverage, defining a path of expanding generalization. Thus at the earliest stages of learning, only the simplest, non-linguistic self-correction rules would come into play, and gradually the more syntactically integrated would be acquired.

Contrast this self-correction system to an approach that handles non-fluencies by some general problem solving routines, for example Granger (1982), who proposes reasoning from what a speaker might be expected to say. Besides the obvious inefficiencies of general problem solving approaches, it is worth giving special emphasis to the problem with learnability. A general problem solving approach depends crucially on evaluating the likelihood of possible deviations from the norms. But a language learner has by definition only partial and possibly incorrect knowledge of the syntax, and is therefore unable to consistently identify deviations from the grammatical system. With the editing system I describe, the learner need not have the ability to recognize deviations from grammatical norms, but merely the non-linguistic ability to recognize copies of the same thing.

Generation. Thus far, I have considered the self-correction component from the standpoint of parsing. However, it is clear that the origins are in the process of generation. The mechanism for editing self-corrections that I have proposed has as its essential operation expunging one of two identical elements. It is unable to expunge a sequence of two elements. (The Surface Copy Editor might be viewed as a counterexample to this claim, but see below.) Consider expunction now from the standpoint of the generator. Suppose self-correction bears a one-to-one relationship to a possible action of the generator (initiated by some monitoring component) which could be called ABANDON CONSTRUCT X. And suppose that this action can be initiated at any time up until CONSTRUCT X is completed, when a signal is returned that the construction is complete. Further suppose that ABANDON CONSTRUCT X causes an editing signal. When the speaker decides in the middle of some linguistic element to abandon it and start again, an editing signal is produced. If this is an appropriate model, then the elements which are self-corrected should be exactly those elements that exist at some stage in the generation process. Thus, we should be able to find evidence for the units involved in generation by looking at the data of self-correction. And indeed, such evidence should be available to the language learner as well.

Summary

I have described the nature of self-corrected speech (which is a major source of spoken non-fluencies) and how it can be resolved by simple editing rules within the context of a deterministic parser. Two features are essential to the self-correction system: 1) every self-correction site (whether it results in the expunction of words or not) is marked by a phonetically identifiable signal placed at the right edge of the potential expunction site; and 2) the expunged part is the left-hand member of a pair of copies, one on each side of the editing signal.
The copies may be of three types: 1) identical surface strings, which are edited by a matching rule that applies before syntactic analysis begins; 2) complete constituents, when two constituents of the same type appear in the parser's buffer; or 3) incomplete constituents, when the parser finds itself trying to complete a constituent of the same type as a constituent it has just completed. Whenever two such copies appear in such a configuration, and the first one ends with an editing signal, the first is expunged from further analysis. This editing system has been implemented as part of a deterministic parser, and tested on a wide range of sentences from transcribed speech. Further study of the self-correction system promises to provide insights into the units of production and the nature of linguistic categories.

Acknowledgements

My thanks to Tony Kroch, Mitch Marcus, and Ken Church for helpful comments on this work.

References

Fromkin, Victoria A. ed. 1980. Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen and Hand. Academic Press: New York.

Granger, Richard H. 1982. Scruffy Text Understanding: Design and Implementation of 'Tolerant' Understanders. Proceedings of the 20th Annual Meeting of the ACL.

Hayes, Philip J. and George V. Mouradian. 1981. Flexible Parsing. American Journal of Computational Linguistics 7.4, 232-242.

Jefferson, Gail. 1974. Error correction as an interactional resource. Language in Society 2:181-199.

Kroch, Anthony and Donald Hindle. 1981. A quantitative study of the syntax of speech and writing. Final report to the National Institute of Education, grant 78-0169.

Labov, William. 1966. On the grammaticality of everyday speech. Paper presented at the Linguistic Society of America annual meeting.

Marcus, Mitchell P. 1980. A Theory of Syntactic Recognition for Natural Language. MIT Press: Cambridge, MA.

Thompson, Bozena H. 1980. A linguistic analysis of natural language communication with computers. Proceedings of the eighth international conference on computational linguistics.

128 | 1983 | 19 |
FACTORING RECURSION AND DEPENDENCIES: AN ASPECT OF TREE ADJOINING GRAMMARS (TAG) AND A COMPARISON OF SOME FORMAL PROPERTIES OF TAGS, GPSGS, PLGS, AND LFGS *

Aravind K. Joshi
Department of Computer and Information Science
R. 268 Moore School
University of Pennsylvania
Philadelphia, PA 19104

* GPSG: Generalized phrase structure grammar, PLG: Phrase linking grammar, and LFG: Lexical functional grammar. This work is partially supported by the NSF Grant MCS 81-07290.

1. INTRODUCTION

During the last few years there has been vigorous activity in constructing highly constrained grammatical systems by eliminating the transformational component either totally or partially. There is increasing recognition of the fact that the entire range of dependencies that transformational grammars in their various incarnations have tried to account for can be satisfactorily captured by classes of rules that are non-transformational and at the same time highly constrained in terms of the classes of grammars and languages that they define. Two types of dependencies are especially important: subcategorization and filler-gap dependencies. Moreover, these dependencies can be unbounded. One of the motivations for transformations was to account for unbounded dependencies. The so-called non-transformational grammars account for the unbounded dependencies in different ways. In a tree-adjoining grammar (TAG), which has been introduced earlier in (Joshi, 1982), unboundedness is achieved by factoring the dependencies and recursion in a novel and, we believe, in a linguistically interesting manner. All dependencies are defined on a finite set of basic structures (trees) which are bounded. Unboundedness is then a corollary of a particular composition operation called adjoining. There are thus no unbounded dependencies in a sense.

In this paper, we will first briefly describe TAG's, which have the following important properties: (1) we can represent the usual transformational relations more or less directly in TAG's, (2) the power of TAG's is only slightly more than that of context-free grammars (CFG's) in what appears to be just the right way, and (3) TAG's are powerful enough to characterize dependencies (e.g., subcategorization, as in verb subcategorization, and filler-gap dependencies, as in the case of moved constituents in wh-questions) which might be at unbounded distance and nested or crossed. We will then compare some of the formal properties of TAG's, GPSG's, PLG's, and LFG's, in particular, concerning (1) the types of languages, reflecting different patterns of dependencies, that can or cannot be generated by the different types of grammars, (2) the degree of free word ordering permitted by different grammars, and (3) parsing complexity of the different grammars.

2. TREE ADJOINING GRAMMAR (TAG)

A tree adjoining grammar (TAG), G = (I,A), consists of two finite sets of elementary trees. The trees in I will be called the initial trees and the trees in A, the auxiliary trees. A tree is an initial tree if its root node is labeled S and its frontier nodes are all terminal symbols (the interior nodes are all non-terminals). A tree is an auxiliary tree if its root node is labeled by a non-terminal, say X, and its frontier nodes are all terminals except one which is also labeled X, the same label as that of the root. The node labeled by X on the frontier will be called the foot node. The internal nodes are non-terminals.

[Schematic diagram not reproduced: an initial tree with root S and an all-terminal frontier, and an auxiliary tree with root X and a foot node labeled X on its frontier.]

As defined above, the initial trees and the auxiliary trees are not constrained in any manner other than as indicated above. The idea, however, is that both the initial and the auxiliary trees will be minimal in some sense. An initial tree will correspond to a minimal sentential tree (i.e., for example, without recursing on any non-terminal) and an auxiliary tree, with the root node and the foot node labeled X, will correspond to a minimal structure that must be brought into the derivation, if one recurses on X.

* I wish to thank Bob Berwick, Tim Finin, Jean Gallier, Gerald Gazdar, Ron Kaplan, Tony Kroch, Bill Marsh, Mitch Marcus, Ellen Prince, Geoff Pullum, R. Shyamasundar, Bonnie Webber, Scott Weinstein, and Takashi Yokomori for their valuable comments.
~ermfmJ$ , ,hAl~ As defined above, the initial trees and the auxiliary trees are not constrained in any manner other than as indicated above. The idea, however, is that both the initial and the auxiliary trees will be minimal in some sense. An initial tree will correspond to a minimal sententlal tree (i.e., for example, without recurslng on any non-terminal) and an auxiliary tree, with the root node and the foot node labeled X, will correspond to a minimal structure that must be brought into the derivation, if one recurses on X. * I wish to thank Bob Berwlck, Tim Finin, Jean Gallier, Gerald Gazdar, Ron Kaplan, Tony Kroch, Bill Marsh, Milch Marcus, Ellen Prince, Geoff Pullum, R. Shyamasundar, Bonnie Webber, Scott Weinstein, and Takashi Yokomori for their valuable comments We will now define a composition operation called adjoining (or adJunction) which composes an auxilia~ tree ~ with a tree ~ • ~t tree with a node labeled X and let ~ ~ an auxiliary tree ~th the root labeled X also. ~te Chat ~ ~st ~ve,by definition, a node (and only one)labeled X on the frontier. ~Jolnlng can now ~ defined as follows. If Is adjoining to ~ at the node n then the resulting tree ~ is as sho~ in Fig.l. s e / FiG, :L. The tree t dominated by X in ~ is excised, ~ is inserted at the node n in and the tree t is attached to the foot node (labeled X) of ~ , i.e., ~ is inserted or 'adjoined' to the node n in ~ pushing t downwards. Note that adjoining is not a substitution operation in the usual sense. Example 2.1: Let G - (I,A) be a TAG where m+ b ~ r /~ / xb o- b t+i-- , (Z) +++: db T x(~ S o,, b <:x,, T b The root node and the foot node of each auxiliary tree is circled for convenience. Let us took at some derivations in G. ~ wlll be adjoined to ~/o at the indicated node in ~ . The resulting tree Is then ~ b ~-r o $ (~.T. b b We can continue the derivation by ad~olnlng, say /@@, at S as indicated ing£ . The resulting tree ~fX is then . sL" • P4 F • ~ "[ 4''z'" @- b Note that ~o is an initial tree# a sententiat tree. The derived trees yi and MR are also sentential trees, We will now define T(G): The set of all trees derived in G starting from the initial Crees in I. This set will be called the tree setof G. LCG): The set of all terminal strings of the trees in TCG). This set will be called the strln~ language(or language) of G. The relationship between TAG's CFG's and the corresponding string languages can be summarized as follows (Joehl, Levy, and Takahashl, 1975). Theorem 2.1: For every CFG, G', there is an equivalent TAG, G, both weakly and strongly. Theorem 2.2: For every TAG, G, one of the following statements holds: (a)there is a cfg, G', that is both weakly and strongly equivalent to G, (b)there is a cfg,G', that is weakly equivalent to G but not strongly equivalent to G, Or (3) there is no cfg, G', that is weakly equivalent to G. Parts (a) and (c) appear in (Joshl, Levy, and Takahashl, 1975). Part (b) is implicit in that paper, but it is important to state It explicitly as we have done here. For the TAG, G, in Example 2.1, it can be shown that there is a CFG, G', such that G" Is both weakly and strongly equivalent to O. Examples 2.2 and 2.3 below illustrate parts (b) and (c) respectively. Example 2.2: Let G - (I,A) be a TAG where I: A e S o-'I" $ -r ~z" II i", i~ "T" Some derivations in G. t e. ¥~ : -'/I / O, "1" ~,, /,i O. "I" |"b $ ! e i O. "3".,, .t ! e. /ndi'u~ili aide ~i ¢a~i 3 .... $ Clearly, L(G)=L= { a'~e be/ n ~/ 0}, which Is a cfl. 
Thus there must exist a CFG, G', which is at least weakly equivalent to G. It can be shown however that there is no CFG, G', which is strongly equivalent to G, i.e., such that T(G) = T(G'). This follows from the fact that T(G), the tree set of G, is "non-recognizable", i.e., there is no finite state bottom-to-top automaton that can recognize precisely T(G). Thus a TAG may generate a CFL, yet assign structural descriptions to the strings that cannot be assigned by any CFG.

Example 2.3: Let G = (I,A) be a TAG. [Elementary trees not reproduced.] It can be shown that L(G) = L1 = {w e c^n / n ≥ 0, w is a string of a's and b's such that (1) the number of a's = the number of b's, and (2) for any initial substring of w, the number of a's ≥ the number of b's}. L1 can be characterized as follows. We start with the language L = {(ba)^n e c^n / n ≥ 0}. L1 is then obtained by taking strings in L and moving (dislocating) some a's to the left. It can be shown that L1 is a strictly context-sensitive language (CSL); thus there can be no CFG that is weakly equivalent to G.

TAG's have more power than CFG's; however, the extra power is quite limited. The language L1 has equal numbers of a's, b's and c's; however, the a's and b's are mixed in a certain way. The language L2 = {a^n b^n e c^n / n ≥ 0} is similar to L1, except that all a's come before all b's. TAG's are not powerful enough to generate L2. The so-called copy language L3 = {w e w / w ∈ {a,b}*} also cannot be generated by a TAG. The fact that TAG's cannot generate L2 and L3 is important, because it shows that TAG's are only slightly more powerful than CFG's. The way TAG's acquire this power is linguistically significant. With some modifications of TAG's, or rather of the operation of adjoining, which is linguistically motivated, it is possible to generate L2 and L3, but only in some special ways. (This modification consists of allowing for the possibility of checking left-right tree context (in terms of a proper analysis) as well as top-bottom tree context (in terms of domination) around the node at which adjunction is made. This is the notion of local constraints in (Joshi and Levy, 1981).) Thus L2 and L3 in some ways characterize the limiting cases of context-sensitivity that can be achieved by TAG's and TAG's with local constraints. In (Joshi, Levy, and Takahashi, 1975) it is also shown that CFL's ⊂ TAL's ⊂ IL's ⊂ CSL's, where IL's denotes indexed languages.

3. We will now consider TAG's with links. The elementary trees (initial and auxiliary trees) are the appropriate domains for characterizing certain dependencies. The domain of the dependency is defined by the elementary tree itself. However, the dependency can be characterized explicitly by introducing a special relationship between certain specified pairs of nodes of an elementary tree. This relationship is pictorially exhibited by an arc (a dotted line) from one node to the other. For example, in the tree below, the two indicated nodes are linked.

[Tree diagram not reproduced: a dotted arc connects two nodes of the elementary tree.]

We will require the following conditions to hold for a link in an elementary tree. If a node n1 is linked to a node n2, then (1) n2 c-commands n1, and (2) n1 dominates a null string (or a terminal symbol in the non-linguistic formal grammar examples). The notion of a link introduced here is closely related to that of Peters and Ritchie (1982). A TAG with links is a TAG where some of the elementary trees may have links as defined above. Henceforth, we may often refer to a TAG with links as just a TAG.
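Before turning to the behavior of links under derivation, it may help to make the adjoining operation of section 2 concrete with a minimal Python sketch. The nested-tuple tree representation, the "*X" marker for a foot node, and the particular grammar (an initial tree S -> e and an auxiliary tree S -> a S* b, in the spirit of Example 2.2) are our own illustrative assumptions, not constructs from the paper.

def adjoin(tree, path, aux):
    # adjoin auxiliary tree aux at the node of tree reached by path (a
    # sequence of child indices): the subtree there is excised, aux is
    # inserted in its place, and the excised subtree is re-attached at
    # the foot node of aux
    if not path:
        assert tree[0] == aux[0], "aux root must match the adjunction node"
        return attach_at_foot(aux, tree)
    label, children = tree
    new_children = list(children)
    new_children[path[0]] = adjoin(children[path[0]], path[1:], aux)
    return (label, new_children)

def attach_at_foot(aux, t):
    # replace the foot node of aux -- a leaf written "*X", where X is the
    # root label of t -- by the excised subtree t
    label, children = aux
    out = []
    for c in children:
        if isinstance(c, tuple):
            out.append(attach_at_foot(c, t))
        elif c == "*" + t[0]:
            out.append(t)
        else:
            out.append(c)
    return (label, out)

def frontier(tree):
    # the terminal string of a tree
    if isinstance(tree, str):
        return [tree]
    return [w for c in tree[1] for w in frontier(c)]

alpha = ("S", ["e"])              # initial tree:   S -> e
beta  = ("S", ["a", "*S", "b"])   # auxiliary tree: S -> a S* b

gamma1 = adjoin(alpha, (), beta)     # adjoin at the root S of alpha
gamma2 = adjoin(gamma1, (1,), beta)  # adjoin at the inner S of gamma1
print(" ".join(frontier(gamma2)))    # a a e b b -- a string of {a^n e b^n}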
Links are defined on the elementary trees. However, the important idea is that the composition operation of adjoining will preserve the links. Links defined on the elementary trees may become stretched as the derivation proceeds. In a TAG the dependencies are defined on the elementary trees (which are bounded) and these dependencies are then preserved by the adjoining (recursive) operation. This is how recursion and dependencies are factored in a TAG. This is in contrast to transformational grammars (TG) where recursion is defined in the base and the transformations essentially carry out the checking of the dependencies. The PLG's and LFG's share this aspect of TG, i.e., recursion builds up a set of structures, some of which are filtered out by transformations in a TG, by the constraints on linking in a PLG, and by the constraints introduced via functional structures in LFG. In a GPSG, on the other hand, recursion and the checking of the dependencies go hand in hand in a sense. In a TAG, dependencies are defined initially on bounded structures and recursion simply preserves them. In the APPENDIX we have given some examples to show how certain sentences could be derived in a TAG.

Example 2.4: Let G = (I,A) be a TAG with links. [Elementary trees and derivations not reproduced.] The auxiliary trees each have one link, and the derived trees γ₂ and γ₃ show how the linking is preserved in adjoining; in γ₃ one of the links is stretched. It should be clear now how, in general, the links will be preserved during the derivation. We note in this example that in γ₂ the dependencies between the a's and the b's, as reflected in the terminal string, are properly nested, while in γ₃ two of them are properly nested, and the third one is cross-serial and is crossed with respect to the nested ones. The two auxiliary trees have only one link each. The nestings and crossings in γ₂ and γ₃ are the result of adjoining.

There are two points to note here: (1) TAG's with links can characterize certain cross-serial dependencies as well as, of course, nested dependencies. (2) The cross-serial dependencies as well as the nested dependencies arise as a result of adjoining. But this is not the only way they can arise. It is possible to have two links in an elementary tree which represent crossed or nested dependencies, which will then be preserved during the derivation.

It is clear from Example 2.4 that the string language of a TAG with links is not affected by the links. Thus if G is a TAG with links, then L(G) = L(G'), where G' is the TAG obtained from G by removing all the links in the elementary trees of G. The links do not affect the weak generative capacity. However, they make certain aspects of the structural description explicit which are implicit in the TAG without the links.

TAG's (or TAL's) also have the following three important properties. (1) Limited cross-serial dependencies: Although TAG's permit cross-serial dependencies, these are restricted. The restriction is that if there are two sets of crossing dependencies, then they must be either disjoint or one of them must be properly nested inside the other. Hence, languages such as the double copy language L4 = {w e w e w / w ∈ {a,b}*} or L5 = {a^n b^n c^n d^n e^n / n ≥ 1} cannot be generated by TAG's.
For details, see (Joshi, 1983). (2) Constant growth property: In a TAG, G, at each step of the derivation we have a sentential tree whose terminal string is a string in L(G). As we adjoin an auxiliary tree, we augment the length of the terminal string by the length of the terminal string of that auxiliary tree (not counting the single non-terminal symbol in its frontier). Thus for any string w of L(G), we have

|w| = |w₀| + n₁|w₁| + n₂|w₂| + ... + n_m|w_m|

where w₀ is the terminal string of some initial tree and wᵢ, 1 ≤ i ≤ m, the terminal string of the i-th auxiliary tree, assuming there are m auxiliary trees. Thus |w| is a linear combination of the length of the terminal string of some initial tree and the lengths of the terminal strings of the auxiliary trees. The constant growth property severely restricts the class of languages generated by TAG's. Hence, languages such as L6 = {a^(2^n) / n ≥ 1} or L8 = {a^(n²) / n ≥ 1} cannot be generated by TAG's.

(3) Polynomial parsing: TAL's can be parsed in time O(n⁴) (Joshi and Yokomori, 1983). Whether or not an O(n³) algorithm exists for TAL's is not known at present.

4. A COMPARISON OF GPSG's, TAG's, PLG's, AND LFG's WITH RESPECT TO SOME OF THEIR FORMAL PROPERTIES

TABLE I lists (i) a set of languages reflecting different patterns of dependencies that can or cannot be generated by the different types of grammars, and (ii) the three properties just mentioned above. As regards the degree of free word order permitted by each grammar, the languages 1, 2, 3, 4, 5, and 6 in TABLE I give some idea of the degree of freedom. The language in 3 in TABLE I is the extreme case where the a's, b's, and c's can appear in any order, as long as the number of a's = the number of b's = the number of c's. GPSG's and TAG's cannot generate this language (although for TAG's a proof is not in hand yet); LFG's can generate this language. In a TAG, for each elementary tree we can add more elementary trees, systematically generated from the given tree, to provide additional freedom of word order (in a somewhat similar fashion as in (Pullum, 1982)). Since the adjoining operation in a TAG gives some additional power to a TAG beyond that of a CFG, this device of augmenting the set of elementary trees should give more freedom, for example, by allowing some limited scrambling of an item outside of the constituent it belongs to. Even then a TAG does not seem to be capable of generating the language in 3 in TABLE I. Thus there is extra freedom but it is quite limited.

TABLE I

Columns: GPSG; TAG (and CFG), with or without local constraints; PLG; LFG.

1. Language obtained by starting with L = {(ba)^n e c^n / n ≥ 1} and then dislocating some a's to the left.
2. Same as 1 above except that the dislocated a's are to the left of all b's.
3. L = {w / w is a string of equal numbers of a's, b's and c's, mixed in any order}
4. L = {x e y / n ≥ 1, x, y are strings of a's and b's such that the number of a's in x and y = the number of b's in x and y = n}
5. Same as 4 above except that the length of x = the length of y.
6. L = {w e / n ≥ 1, w is a string of a's and b's and the number of a's in w = the number of b's in w = n}
7. L = {a^n b^n c^n / n ≥ 1}
8. L = {a^n b^n c^n d^n / n ≥ 1}
9. L = {a^n b^n c^n d^n e^n / n ≥ 1}
10. L = {w w / w is a string of a's and b's} (copy language)
11. L = {w w w / w is a string of a's and b's} (double copy language)
12. L = {a^n c^m b^n d^m / m ≥ 1, n ≥ 1}
13. L = {a^n b^p c^n / n ≥ 1, p ≥ n}
14. L = {a^(2^n) / n ≥ 1}
15. L = {a^(n²) / n ≥ 1}
16. Limited cross-serial dependencies.
17. Constant growth property.
18. Polynomial parsing.
[Grid of yes / no / ? judgments for the eighteen rows under the four column headings not legibly reproduced.]

Notation: ? : answer unknown to the author; yes(?) : conjectured yes; no(?) : conjectured no.

REFERENCES

[1] Gazdar, G., "Phrase structure grammars", in The Nature of Syntactic Representation (eds. P. Jacobson and G.K. Pullum), D. Reidel, Dordrecht, (to appear).
[2] Joshi, A.K. and Levy, L.S., "Phrase structure trees bear more fruit than you would have thought", AJCL, 1982.
[3] Joshi, A.K., Levy, L.S., and Takahashi, M., "Tree adjunct grammars", Journal of the Computer and System Sciences, 1975.
[4] Joshi, A.K., "How much context-sensitivity is required to provide adequate structural descriptions?", in Natural Language Processing: Psycholinguistic, Theoretical, and Computational Perspectives (eds. Dowty, D., Karttunen, L., and Zwicky, A.), Cambridge University Press, (to appear).
[5] Joshi, A.K. and Yokomori, T., "Parsing of tree adjoining grammars", Tech. Rep., Department of Computer and Information Science, University of Pennsylvania, 1983.
[6] Joshi, A.K. and Kroch, T., "Linguistic significance of TAG's" (tentative title), forthcoming.
[7] Kaplan, R. and Bresnan, J.W., "Lexical functional grammar - a formal system for grammatical representation", in The Mental Representation of Grammatical Relations (ed. Bresnan, J.), MIT Press, 1983.
[8] Peters, S. and Ritchie, R.W., "Phrase linking grammars", Tech. Rep., Department of Linguistics, University of Texas at Austin, 1982.
[9] Pullum, G.K., "Free word order and phrase structure rules", in Proceedings of NELS 12 (eds. Pustejovsky, J. and Sells, P.), Amherst, MA, 1982.

APPENDIX

We will give here some examples to show how certain sentences could be derived in a TAG. For further details about this TAG and its linguistic relevance, see (Joshi, 1983) and (Joshi and Kroch, forthcoming). Only the relevant trees of the TAG, G = (I,A), are shown below. The following points are worth noting:

(1) In a TAG the derivation starts with an initial tree. The appropriate lexical insertions are made for the initial tree and the corresponding constraints as specified by the lexicon can be checked (e.g., agreement and subcategorization). Then, as the derivation proceeds and each auxiliary tree is brought into the derivation, the appropriate lexical items are inserted and the constraints checked. Thus in a TAG, lexical insertion goes hand in hand with the derivation.

(2) Each one of the two finite sets, I and A, can be quite large, but these sets need not be explicitly listed. The trees in I roughly correspond to all the 'minimal' sentences corresponding to different subcategorization frames, together with the 'transforms' of these sentences. We could, of course, provide rules for obtaining the trees in I from a given subset of I. These rules achieve the effect of conventional transformational rules; however, they can be formulated not as the usual transformational rules but directly as tree rewriting rules, since both the domains and the co-domains of the rules are finite. Introduction of links can be considered as a part of this rewriting. In any case, these rules will be abbreviatory in the sense that they will generate only finite sets of trees. Their adoption will be only a matter of convenience and does not affect the TAG in any essential manner. The set of auxiliary trees is also finite.
Again, these trees could themselves be 'derived' from the corresponding trees in I by introducing appropriate tree rewriting rules, and again these rules will be abbreviatory only, as discussed above. It is in this sense that the trees in I and A capture the usual transformational relations more or less directly.

Some derivations:

(1) The girl who met Bill is a senior. We start with the initial tree with the appropriate lexical insertions, and adjoin the auxiliary tree (with the appropriate lexical insertions) at the indicated node, obtaining a derived tree for the full sentence. [Tree diagrams not reproduced.]

(2) John persuaded Bill to invite Mary. Adjoining the auxiliary tree to the initial tree at the indicated node, we obtain the derived tree. [Diagrams not reproduced.]

(3) Who did John persuade Bill to invite? Adjoining the auxiliary tree at the indicated node, we obtain the derived tree. Note that the link in the initial tree is 'preserved' in the derived tree; it is 'stretched', resulting in the so-called unbounded dependency. [Diagrams not reproduced.]

(4) John tried to please Mary. [Diagrams not reproduced.]

On the other hand, (5) John seems to like Mary. could be derived as follows. We start with the appropriate initial tree and adjoin an auxiliary tree at the indicated node; adjoining a further auxiliary tree at the indicated node of the result, we obtain the final derived tree. [Diagrams not reproduced.]

15 | 1983 | 2 |
D-Theory: Talking about Talking about Trees

Mitchell P. Marcus
Donald Hindle
Margaret M. Fleck
Bell Laboratories
Murray Hill, New Jersey 07974

Linguists, including computational linguists, have always been fond of talking about trees. In this paper, we outline a theory of linguistic structure which talks about talking about trees; we call this theory Description theory (D-theory). While important issues must be resolved before a complete picture of D-theory emerges (and also before we can build programs which utilize it), we believe that this theory will ultimately provide a framework for explaining the syntax and semantics of natural language in a manner which is intrinsically computational. This paper will focus primarily on one set of motivations for this theory, those engendered by attempts to handle certain syntactic phenomena within the framework of deterministic parsing.

1. D-Theory: An Introduction

The key idea of D-theory is that a syntactic analysis of a sentence of English (or other natural language) consists of a description of its syntactic structure. Such a description contains information which differs from that contained in a standard tree structure in two crucial ways:

1) The primitive predicate for indicating hierarchical structure in a D-theory description is "dominates" rather than "directly dominates". (A node A is said to dominate a node B if A is some ancestor of B; A is said to directly dominate B if A is the immediate parent of B.) A D-theory analysis thus expresses directly only what structures are contained (somewhere) within larger structures, but does not indicate per se what the immediate constituents of any particular constituent are. A tree structure, on the other hand, encodes which nodes are directly dominated by other nodes in the analysis; it indicates directly the immediate constituents of each node. In a standard parse tree, the topmost S node might directly dominate exactly a Noun Phrase node, an Aux node and a Verb Phrase node; it is thus made up of three subparts: that NP, that Aux, and that VP.

2) A D-theory description uses names to make statements about entities, and does not contain the entities themselves. Furthermore, there is no distinguished set of names which are taken to be standard names or rigid designators; i.e. given only a name, one cannot tell what particular syntactic entity it refers to. (This is the primary reason that we view D-theory representations as descriptions and not merely as directed acyclic graphs.) Because there are no standard names, if one is presented with two descriptions, each in terms of a different name, one can tell with certainty only if the two names refer to different entities, but never (for sure) if they refer to the same entity. In the latter case, there is always potential ambiguity. To take a commonplace example, given that "John has red hair" and "Mr. Jones has black hair", one can be sure that John is not Mr. Jones. But if one is told "John has red hair" and "Mr. Jones wears glasses" and nothing more about either John or Mr. Jones, then it is impossible to tell whether John is or is not Mr. Jones.

In the domain of syntax, if a D-theory description says that

X is an NP; Z is an NP
Y is an Adjective Phrase
W is a noun
X dominates Y
Z dominates W

and nothing else is stated about W, X, Y or Z, then it cannot be determined whether X and Z are aliases for the same NP node or are names for two distinct nodes. If an additional statement is added to the description that "Y dominates Z", then it must be the case that X and Z name distinct entities. We will show in what follows that the use of names has important ramifications for linguistic theory and the theory of parsing.

The structure of the rest of this paper is roughly as follows: We will first sketch the computational framework we build on, in essence that of [Marcus 80], and explore briefly what a parser for this kind of grammar might look like; in appearance, its data structures and grammar will be little different from that developed in [Berwick 82]. A series of syntactic phenomena will then be explored which resist elegant account within the earlier framework. For each phenomenon, we will present a simple D-theoretic solution together with exposition of the relevant aspects of D-theory.

One final introductory comment: That D-theory expresses syntactic structure in terms of dominance rather than direct dominance may be reminiscent of [Lasnik & Kupin 1977] (henceforth L-K), but our use of the dominance predicate differs fundamentally from the L-K formulation both in the primacy of the predicate to the theory, and in the theory of syntax implied. Lasnik and Kupin's formalization of the Extended Standard Theory derives domination relations from their primary representation of linguistic structure, namely a set of strings of terminals and nonterminals with specified properties. D-theory structures are expressed directly in terms of dominance relations; the linear order of constituents is only directly expressed for items in the lexical string. Despite appearances, D-theory and the Lasnik-Kupin formalization are not inter-definable. We discuss the properties of the Lasnik-Kupin formalization at length in a forthcoming paper.

2. Deterministic Tree-Building: The Old Theory
In the domain of syntax, if a D-theory description says that Xisan NP;Zisan NP Y is an Adjective Phrase W is a noun X dominates Y Z dominates W and nothing else is stated about W, X, Y or Z, then it cannot be determined whether X and Z are aliases for the same NP node or are names for two distinct nodes, if an additional statement is added to the description that "Y dominates Z", then it must be the case that X and Z name distinct entities. We will show in what follows that the use of names has important ramifications for linguistic theory and the theory of parsing. The structure of the rest of this paper is roughly as follows: We will first sketch the computational framework we build on, in essence that of [Marcus 80], and explore briefly what a parser for this kind of grammar might look like; in appearance, its data structures and grammar will be Iittle different from that developed in [Berwick 82]. A series of syntactic phenomena will then be explored which resist elegant account within the earlier framework. For each phenomenon, we will present a simple D- theoretic solution together with exposition of the relevant aspects of D-theory. One final introductory comment: That D-theory expresses syntactic structure in terms of dominance rather than direct dominance may be reminiscent of [Lasnik & Kupin 1977] (henceforth L-K), but our use of the dominance predicate differs fundamentally from the L-K formulation both in the primacy of the predicate to the theory, and in the theory of syntax implied. Lasnik and Kupin's formalization of the Extended Standard Theory der:ves domino.tion relations from their primary representation of linguistic structure, namely a set of strings of terminals and nonterminals with specified properties. D-theory structures are expressed directly in terms of dominance relations; the linear order of constituents is only directly expressed for items in the lexical string. Despite appearances, D-theory and the Lasnik-Kupin formalization are not inter- definable. We discuss the properties of the Lasnik-Kupin formalization at length in a forthcoming paper. [29 20 DeterminLqgic Tree-Building: The Old Theory D-theory grows out of earlier work on deterministic parsing as deterministic tree building (as in e.g. [Marcus 19801, [Church 801 and [Berwick 82]). The essence of that work is the hypothesis that natural language can be analyzed by some process which builds a syntactic analysis indelibly (borrowing a term from [McDonald 83]); i.e. that any structure built by the parser is part of the correct analysis of the input. Again, in the context of this earlier theory, the form of the indelible syntactic analysis was that of a tree. One key idea of this earlier tree-building theory that we retain is the notion that a natural language parser can buffer and examine some small number (e.g. up to three) unattached constituents before being forced to add to its existing structures. (In D-theory, the node named X is attached to Y if the parser's description of the existing structure includes a predication of the form "Y dominates X', or, as we will henceforth write, "D(Y,X)." X is unattached if the parser's description of the existing structure includes no predication of the form "D(Y, X)', for any name Y.) We thus assume that such a parser will have the two principle data structures of these earlier deterministic parsers, a stack and a buffer. 
However, the stack and the buffer in a D-theory parser will contain names rather than constituents, and these data structures will be augmented by a data base where the description of the syntactic structure itself is built up by the parser. (While this might sound novel, a moment's reflection on LISP implementation techniques should assure the reader that this structure is far less different from that of older parsers like Parsifal and Fidditch [Hindle 831 than it might sound.) As we shall see below, however, a parser which embodies D- theory can recover (in some sense) from some of the constructions which would terminally confuse (or "garden path') a parser based on the deterministic tree-building theory. For D-theory to be psychologically valid, of course, it must be the case that just those constructions which do garden path a D- theory parser garden path people as well. (We might note in passing that recent experimental paradigms which explore online syntactic processing using eye-tracking technology promise to provide delicate tests of these hypotheses, e.g. [Rayner & Frazier 831.) Another goal of this earlier work was to find some way of procedurally representing grammars of natural languages which is brief and perspicuous, and which allows (and perhaps even forces) grammatical generalizations to be stated in a natural way. As is often argued, such a representation must be embodied by our language understanding faculty, given that the grammar of a language is learned incrementally and quickly by children given only limited evidence. (To recast this point from an engineering point of view, this property is also a prerequisite to writing a grammar for a subset of some given natural language which remains extensible, so that new constructions can be added to the grammar without global changes, and so that these new constructions will interact robustly with the old grammar.) Following [Shipman 78], as refined in [Berwick 82]. we assume that the grammar is organized into a set of context free rules, which we will call base templates, and a set of pattern-action rules. As in Parsifal, each pattern consists of up to four elements, each of which is a partial description of an element in the buffer, or the accessible node in the stack (the "current active node'). Loosely following [Berwick 82], we assume that the action of each rule consists of exactly one of some small set of limited actions which might include the following: • Attach a node in the buffer to the current active node. • Switch the nodes in the first two buffer positions. • Insert a specified lexical item into a specified buffer slot. • Create a new current active node. • Insert an empty NP into the first buffer slot. (Where "attachment" is as defined above, and "create" means something like coin a new node name, and push it onto the active node stack.) Each rule is associated with some position in one of the base templates. So, for example, in figure 1 below, one base template is given, a highly simplified template for a sentence. Associated with the NP in the subject position of the sentence are several rules. The first rule says that if the first buffer position holds a name which is asserted to be an NP (informally: if there is an NP in the first buffer slot), then (informally) it is dominated by the S. The second says that if there is an auxiliary verb in the first slot followed by an NP, then switch them. And so on. 
Note that while a D-the0ry parser itself has no predicate with which to express direct dominance, the base templates explicitly encode just such information. Insofar as the parser makes its assertions of dominance on the basis of the phrase structure rules, the parser will behave very similarly to deterministic tree S .> NP VP PP* {[NPI-> Attach} {[auxvl[NP]-> Switch} {[v, tenselessl -> lnsert(NP, 0)} Figure 1. A simplified base template for S, with associated NP rules. building parsers. In fact, the parser will typically (although, as we will see below, not always) behave in just such a fashion. 3. The Problem of Misleading Leading Edges By and large, we believe that a significant subset of the grammar of English has been successfully embedded within the deterministic tree-building model. However, a residue of syntactic phenomena remain which defy simple explication within this framework. Some of these phenomena are particular problems for the deterministic tree-building framework. Others, for example coordination and gapping phenomena, have defied adequate explication within any existing theory of grammar. In the remainder of this paper we will explore a range of such phenomena, and argue that D-theory provides a consistent approach which yields simple accounts for the range of phenomena we have considered to date. We will first argue for taking "dominates', not "directly dominates" as primitive, and then later argue why the use of names is justified. (Our view that this representation should be viewed as a description hangs on the use of names. In this section and in section 5 we argue only for a representation which is a particular kind of directed acyclic graph. Only with the arguments of section 7 is the position that this is a kind of description at all defensible.) One particularly interesting class of sentences which seems to defy deterministic accounts is exemplified by (2). (2) I drove my aunt from Peoria's car. 130 Sentences like (2) contain a constituent which has a misleading *leading edge', an initial right-embedded subconstituent which could itself be the next constituent of whatever structure is being built at the next level up. For example, while analyzing (2), a parser which deterministically builds old-fashioned trees might just take "my aunt" to be the object of "drove', attaching it as the object of the VP, only to discover (too late) that this phrase functions instead as genitive determiner of the full NP "my aunt from Peoria's car'. In fact, the existing grammar for Parsifal causes exactly this behavior, and for good reason: This parser constructs NPs only up to the head noun before deciding on their role within the larger context; only after attaching an NP will Parsifal construct the post-modifiers of the NP and attach them, (This involves a mechanism called node reactivation; it is described in [Shipman & Marcus 79].) One reason for this within the earlier framework is that, given a PP which immediately follows the head of an NP, it cannot be determined whether that PP should be attached to the preceding NP or to some constituent which dominates the NP until the role of that NP itself has been determined. In the specific case of (2), the parser will attach "my aunt" as the object of the verb "drove" so that it can decide where to attach the PP beginning with "from'. Only after it is too late will the parser see the genitive marker on "Peoria's" and boggle. 
While one could attempt to overcome this particular motivation for the two-stage parsing of NPs with some variant of the notion of pseudo-attachment (first used in [Church 801), this and related approaches have their problems too, as Church notes. Potential pseudo-attachment solutions aside, the upshot is that sentences like (2) will cause deterministic tree building parsers to garden path. However, it is our strong intuition that such cases are not "garden paths'; we believe that such cases should be analyzed correctly by a deterministic parser rather than by the (putative) mechanism which recovers from garden paths. The D-theoretic solution to the problem of misleading "leading edges" hinges on one formal property of this problem: The initial analysis of this class of examples is incorrect only in that some constituent is attached in the parse tree at a higher point in the surrounding structure than is correct. Crucially, the parser neither creates structures of the wrong kind nor does it attach the structure that it builds to some structure which does not dominate it. In the misanalysis of (2), the parser initially errs only in attaching the NP "my aunt', which is indeed dominated by the VP whose head is "drove', too high in the structure. This class of examples is handled by D-theory without difficulty exactly because syntactic analyses are expressed in terms of domination rather than direct domination. The developing description of the structure of (2) in a D-theory parser at the point at which the parser had analyzed "my aunt', but no further, might include the following predications: (3.1) D(vpl, npl) (3.2) D(vpl, vl) where the verb node named vl dominates "drove', and the NP node named npl dominates the lexical material "my aunt'. Let us assume for the sake of simplicity that while building the PP "from Peoria's', the parser detects a genitive marker on the proper noun "Peoria's" and knows (magically, for now) that "Peoria's car" is not the correct analysis. Given this, the genitive must mark the entire NP "my aunt from Peoria" and thus "my aunt from Peoria" must serve not as the object of the verb "drove" but as the determiner of some larger NP which itself must be the object of "drove'. (Unless it is followed by a genitive marker, in which case....) The question we are centrally interested in here is not how the parser comes to the realization that it has erred, but rather what can be done to remedy the situation. (Actually how the parser must resolve "..L first problem is a complex and interesting story in and of itself, with the punchline being that exactly one (but only one) of (2) and (4) I drove my aunt from Peoria's suburbs home. must cause a garden path. The details of this await further research on the control of D-theory parsing.) The description (3) is easy fixed, given that "D" is read "dominates', and not "directly dominates'. Several further predications can merely be added to (3), namely those of (5), which state that npl is dominated by a determiner node named detl, which itself is dominated by a new np node; np2, and that np2 is dominated by vpl. (5.1) D(npl, detl) (5.2) D(detl, np2) (5.3) D(np2, vpl) Adding these new predications does not make the predications of (3) false; it merely adds to them. The node named npl is still dominated by vpl as stated in (3.1), because the relation "D" is transitive. Given the predications in (5), (3.1) is redundant, but it is not false. 
The general point is this: D-theory allows nodes to be attached initially by a parser to some point which will turn out to be higher than its lowest point of attachment (for the more general sense of attachment defined above) without such initial states causing the parser to garden path. Because of the nature of "D", the parser can in this sense "lower" a constituent without falsifying a previous predication. The earlier predication remains indelible.

4. Semantic Interpretation: The Standard Referent

But how can such a list of domination predications be interpreted? It would seem that compositional semantics must depend upon being able to determine exactly what the immediate constituents of any given structure are: if the meaning of a phrase is determined from the meanings of its parts, then it must be determined exactly what its parts are. We assume that semantic interpretation of a D-theory analysis is done by taking such an analysis as describing the minimal tree possible, i.e. by taking "D" to mean directly dominates wherever possible, but only for semantic analysis. For example, if the analysis of a structure includes the predications that X dominates Y, Y dominates Z and X also dominates Z, then the semantic interpreter will assume that X directly dominates Y and that Y directly dominates Z. We will call such an interpretation of a D-theoretic analysis the standard referent of the analysis. (We further assume that the description produced by a D-theory parser will have at each stage of the analysis one and only one standard referent, and that the complex situation where two or more chains of domination must be merged to arrive at a single standard referent will not arise in the operation of a D-theory parser. Substantiation of these assumptions awaits the construction of a parser and a sizable grammar.)

This notion of "standard referent" means that adding predications to the (partial) analysis of a sentence may very well change the standard referent of that analysis as viewed by the semantic interpreter. The key idea here is that from the point of view of semantics, the structure built by the parser may appear to change, but from the parser's point of view, the description remains indelible.

The situation we describe is not far from that which occurs as the usual case in the communication of descriptions of objects between individuals. Suppose Don says to you, standing before you wearing a brown tweed jacket, "My coat is too warm". The phrase "my coat" can refer to any coat that Don owns, yet you will undoubtedly take the phrase to refer to the brown tweed jacket. Given that descriptions are always necessarily partial, there must always be a conventional standard referent for a description. But now suppose that Don says "My blue coat is too warm". He merely adds "blue" to the phrase "my coat", but the set of possible referents changes, and in fact shrinks. More to the point, you will now take the referent of the phrase "my blue coat" to mean some blue coat or other which Don owns; i.e. adding to the description changes the standard referent. The key notion here is that because descriptions are always underspecified, there must be some set of conventions for choosing the intended single referent out of the often large (and sometimes infinite) class of objects that any given description is true of.
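Continuing the same illustrative encoding, the standard referent can be read off a set of domination predications by treating "D" as direct domination wherever possible: X directly dominates Y just in case X dominates Y and no third node sits strictly between them. The following one-clause sketch is our own rendering of that convention:

    % directly_dominates(X, Y): X immediately dominates Y in the
    % standard referent, i.e. no node Z in the description sits
    % strictly between X and Y.
    directly_dominates(X, Y) :-
        dominates(X, Y),
        \+ ( dominates(X, Z),
             dominates(Z, Y),
             Z \== X, Z \== Y ).

With only the predications of (3), directly_dominates(vp1, np1) holds; once (5) is added, det1 and np2 intervene and the query fails, while directly_dominates(det1, np1) succeeds. The standard referent has changed even though no predication was retracted.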
Thus, once we claim that the output of syntactic analysis is a description, it is not surprising that there must be some restrictive conventions to determine exactly what such a description refers to. Given this, the convention we assume seems a simple and natural one.

5. On the Reanalysis of Indelible Structures

Another problematic class of constructions for deterministic tree-building theories are those for which it is argued that some kind of active reanalysis process must occur. For each of these constructions, there is linguistic evidence (of varied force) which suggests (recast in processing terms) that different syntactic structures must be assigned to that construction at different points during grammatical processing. In other words, it can be demonstrated that each of these constructions has properties which provide evidence for one particular structure at one stage of processing, while displaying properties which argue for a quite different structure at a later stage of processing. But if this reanalysis account is the correct account for any of these constructions, then the deterministic tree building theory must be wrong somewhere, for changing a structural analysis is the one thing that indelible systems cannot do, ex hypothesi.

One class of examples widely assumed to involve some kind of reanalysis is the class of verb complement structures which have so-called "pseudo-passives". These verbs seem to have two passive forms, one of which has an NP in subject position which serves in the same role as that served by the seeming object of the active form, while the other passive form seems to have an underlying prepositional object in subject position. For example, there are two passives which correspond to the active sentence (6.1), a "normal" passive (6.3), and a passive which seems to pull the object of "of" into subject position, namely, (6.2).

(6.1) Past owners had made a mess of the house.
(6.2) The house had been made a mess of.
(6.3) A mess had been made of the house.

One fairly common view is that the phrase "made a mess of" functions as a single idiomatic verb, so that "the house" in (6.1) and (6.2) can be simply viewed as the object of the verb "made a mess of". But then to account for (6.3), it must be assumed that "made" is first treated as a normal verb with "a mess" as object. This means that either (6.3) has a different underlying syntactic structure than (6.1-2), or that the syntactic analysis assigned to the string "made of" (or perhaps "made <trace> of") changes after the passive is accounted for. To get a consistent syntactic analysis for these sentences, one can argue either that reanalysis always or never takes place. The position that we find most tenable, given the evidence, is that reanalysis sometimes takes place. (Of course, the fact that purely lexical accounts (see, e.g. [Bresnan 82]) seem plausible leaves the older tree-building theories on not entirely untenable ground.) But how can any reanalysis at all be reconciled with the determinism hypothesis?

Consider the analysis that a D-theory parser will have built up after having parsed "made a mess", but before noticing "of". At this point the parser should assign the sentence a non-idiomatic reading, with "a mess" the real object of "made". Some of the predications in the analysis will be

(7.1) D(vp1, v1)
(7.2) D(vp1, np1)

where vp1 is a vp node dominating "made" and np1 is an np node dominating "a mess". (Note that in

(8.1) The children made a mess, but then cleaned it up.
"it" refers to a mess, but that one cannot say (8.2) *The children made a mess of their bedrooms, but then cleaned it up. which seems to indicate that the phrase "a mess" is opaque to anaphoric reference in the idiomatic reading, and that therefore (8.1) is not idiomatic in the same sense.) We assume here that the preposition "of" is lexically marked for the idiomatic verb "make a mess', i.e. it is lexically specified for the idiom, but it is not itself a part of the idiom. Evidence for this includes sentences like (9), in which the preposition cannot be reanalyzed into the verb, given D-theory, as we will see below. (9) Of what did the children make a mess'? From a parsing point of view, this means that the presence of the preposition "of. will serve as a trigger to the reanalysis of "make a mess", without being part of the reanalysed material itself. (Thanks to Chris Halverson for pointing out a problem caused by (9) for an earlier analysis.) Returning to the analysis of (6.1), the preposition "of" triggers exactly such a reanalysis. Given D-theory, this can be effected simply by adding the additional predication (10) to (7.1-2) above: (10) D(vl, npl) Given this new predication, the standard referent of the description now has npl directly dominated by vl, i.e. it is now part of the verb. And now when "a house" is noticed by the parser, it will be attached as the first NP after the verb vl, i.e. as its object. Once again, the predications (7.1-2) are not falsified by the additional predication; they remain indelibly true - npl remains dominated by vpl, although no longer directly dominated by it. But, to repeat the point, the parser is (blissfully) unaware of this notion; the standard referent is a notion meaningful only to semantics. 132 The analysis of (6.2) proceeds as follows: After parsing "made" as a verb and "a mess" as its object and noticing the trigger "of" sitting in the buffer, the parser will add an extra predication effecting just the same "reanalysis" as was done for (6.1). We assume that the passive rule inserts a trace either immediately after a verb, or after the preposition immediately following a verb, if that preposition is lexically specified for that verb. We will not argue for this analysis here; suffice it to say that this analysis is motivated by facts which also motivate recent somewhat similar analyses of passive, e.g. [Hornstein and Weinberg 811 and [Bresnan 82]. Given this analysis, the parser will now drop a passive trace for the subject "the house" into the buffer after the lexically specified preposition "of", and the parse will then move to completion. (One issue that remains open, though, is exactly how the parser knows not to drop the passive trace after "made'. The solution to this particular problem must interact correctly with many such control problems involving passive. Resolving this entire set of issues in a consistent fashion awaits the pending implementation of a parser to serve as a tool in the investigation of these control issues.) How is (6.3) parsed? Here we assume that the parser will drop a passive trace after the verb "made'. Because we assume that the parser cannot access the binding of the trace, and therefore cannot access the lexical material "a mess', it must be the case that reanalysis will not take place in this case. While this asymmetry may seem unpleasant, we note that there is no evidence that syntactic reanatysis has taken place here. Instead,. 
Instead, we assume that semantic processing will simply add an additional domination predicate after it notices the binding of the passive trace. Thus, the reanalysis here is semantic, not syntactic. (Note that there are other cases, e.g. right dislocation, where it is clear that additional domination predicates are added by post-syntactic processes. We believe that semantics can add domination predicates, but cannot construct new nodes.)

As an example of the kind of operation that is ruled out by D-theory, let us return to our assertion above that the preposition "of" cannot always be part of the idiomatic verb "make a mess". Consider (9) above. In this sentence, the analysis will include some assertions that "of" is dominated by a PP, which itself is dominated by COMP. But if an assertion is then added to this description asserting that "of" is also dominated by a verb node, then there is no consistent interpretation of this structure at all, since the COMP cannot dominate the verb node and the verb node cannot dominate the COMP. Put more simply, there is no way something can merely be "lowered" from a COMP node into the verb.

Another possibility similarly ruled out by D-theory is that in sentences like (6.1) there is initially a PP node which dominates both "of" and the NP "the house", but that "of" is reanalyzed into the idiomatic verb. For "of" to be dominated by a verb node, given that it is already dominated by the PP node, either the PP node must be dominated by the verb or the verb by the PP node, if the dominance relations are to be consistent. But it makes no sense for the PP node to have a standard referent where it immediately dominates only a verb and an NP, but no preposition. And if the verb dominates the PP, then the verb also dominates the NP which serves as the object of the VP, which is impossible. In this sense, D-theory is clearly more restrictive than the theory of [Lasnik and Kupin 77], at least as interpreted by [Chomsky 81], where reanalysis is done by adding an additional monostring to the existing Restricted Phrase Marker and eliminating others. In this case, the domination relations implied by the new analysis need not be consistent with those implicit in the pre-reanalysis RPM.

6. Constraints on D-theory: a brief discussion

While we will not discuss this issue here at length, our current account of D-theory includes a set of stipulated constraints that further restrict where new domination predications can be added to a description. These constraints include the following: The Rightmost Daughter Constraint, that only the rightmost daughter of a node can be lowered under a sibling node at any given point in the parsing process; and The No Crossover Constraint, that no node can be lowered under a sibling which is not contiguous to it, and some others. As viewed from the point of view of the standard referent, we believe that a D-theory parser will appear to operate, by and large, just like a tree building deterministic parser, until it creates some structure whose standard referent must be changed. From the parser's point of view, it will scan base templates left-to-right for the most part, initiating some in a top-down manner, some in a bottom-up manner, until it finds itself unable to fill the next template slot somehow or other. At this point some mechanism must decide what additional predications to add to allow the parser to proceed.
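Whatever that mechanism turns out to be, one necessary filter on candidate predications is the tree-consistency requirement just used to rule out lowering "of" out of COMP in (9): any two dominators of the same node must themselves be ordered by domination. Continuing our illustrative Prolog encoding (the predicate names remain our own assumptions):

    % A description is tree-inconsistent if two distinct nodes both
    % dominate some node without either dominating the other.
    inconsistent :-
        dominates(X, Z),
        dominates(Y, Z),
        X \== Y,
        \+ dominates(X, Y),
        \+ dominates(Y, X).

A candidate addition D(X, Y) can then be vetted by asserting d(X, Y) provisionally (with d/2 declared dynamic) and retracting it if inconsistent/0 succeeds; the Rightmost Daughter and No Crossover Constraints above would act as further filters on top of this basic test.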
The functional force of the stipulations discussed above is to severely restrict the range of possibilities that can be considered in such a situation. Indeed, we would be delighted if it turned out to be the case that the parser can never consider more than several possibilities at any point that such an operation will be performed. It is particularly worthy of note that these two constraints interact to predict that the range of constructions that can be reanalyzed in the manner discussed in the last section is severely circumscribed, and that this prediction is borne out (see [Quirk, Greenbaum, Leech & Svartvik 72], §12.64). These two constraints together predict that verb reanalysis is possible only when a single constituent precedes the trigger for reanalysis: Suppose that there were two constituents which preceded the trigger for reanalysis, i.e. that the order of constituents in the VP is V C1 C2 T, where C1 and C2 are the two constituents and T is the trigger. Then these two constituents would be attached to the VP whose head is V before T is encountered, causing the parser (before attaching T) to assert two new predications which would have the force of shifting the two constituents into the verb. But which predication could the parser add first? If it asserts that D(V, C1), this violates the Rightmost Daughter Constraint, because only C2 can be lowered under a sibling. But if the parser first asserts D(V, C2), then C2 crosses over C1, which is prohibited by the No Crossover Constraint. Therefore, only one constituent can have been attached before the reanalysis occurs.

7. A DETERMINISTIC APPROACH TO COORDINATION

We now turn from the consequences of expressing syntactic structure in terms of domination to the use of names within D-theory. As stated above, it is this use of names which really makes D-theory analyses descriptions, and not merely directed acyclic graphs. The power of naming can be demonstrated most clearly by investigating some implications of the use of names for the representation of coordinate constructions, i.e. conjunction phenomena and the like.

7.1 The Problem of Coordinate Structure

Coordinate constructions are infamous for being highly ambiguous given only syntactic constraints; standard techniques for parsing coordinate structures, e.g. [Woods 73], are highly combinatoric, and it would seem inherent in the phenomenon that tree-building parsers must do extensive search to build all syntactically possible analyses. (See, e.g. the analysis of [Church & Patil 1982].) One widely-used approach which eliminates much of this seemingly inherent search is to use extensive semantic and pragmatic interaction interleaved with the parsing process to quickly prune unpromising search paths. While Parsifal made use of exactly such interactions in other contexts, e.g. to correctly place prepositional phrases, such interactions seem to demand at least implicitly building syntactic structure which is discarded after some choice is made by higher-level cognitive components. Because this is counter to at least the spirit of the determinism hypothesis, it would be interesting if the syntactic analysis of coordinate structures could be made autonomous of higher-level processes. There are more central problems for a deterministic analysis of conjunction, however.
Techniques which make use of the look-ahead provided by buffering constituents can deterministically handle a perhaps surprising range of coordinate phenomena, as first demonstrated by the YAP parser [Church 80], but there appear to be fundamental limitations to what can be analyzed in this way. The central problem is that a tree building deterministic parser cannot examine the context necessary to determine what is conjoined to what without constructing nodes which may turn out to be spurious, given the (ultimate) correct analysis. In what follows, we will illustrate each of these problems in more detail and sketch an approach to the analysis of coordinate structures which we believe can be extended to handle such structures deterministically and without semantic interaction.

7.2 Names and Appropriate Vagueness

Consider the problem of analyzing sentences like (11.1-2). These two sentences are identical at the level of preterminal symbols; they differ only in the particular lexical items chosen as nouns, with the schematic lexical structure indicated by (11.3). However, (11.1) has the favored reading that the apples, pears and cherries are all ripe and from local orchards, while in (11.2), only the cheese is ripe and only the cider is from local orchards. From this, it is clear that (11.1) is read as a conjunction of three nouns within one NP, while (11.2) is read as a conjunction of three individual NPs, with structures as indicated by (11.1a, 2a).

(11.1) They sell ripe apples, pears, and cherries from local orchards.
(11.1a) They sell [NP ripe [N apples], [N pears], [N and cherries] from local orchards].
(11.2) They sell ripe cheese, bread, and cider from local orchards.
(11.2a) They sell [NP ripe cheese], [NP bread], [NP and cider from local orchards].
(11.3) They sell ripe N1, N2, and N3 from local orchards.

We assume here, crucially, that constituents in coordination are all attached to the same constituent; they can be thought of as "stacking" in a plane orthogonal to the standard referent, as [Chomsky 82] suggests. The conjunction itself is attached to the rightmost of the coordinate structures.

Thus, it would seem that to determine the level at which the structures are conjoined requires much pragmatic knowledge about fruit, flowers and the like. Note also that while (11.1-2) have particular primary readings, one needs to consider these sentences carefully to decide what the primary reading is. This is suggestive of the kind of syntactic vagueness that VanLehn argues characterizes many judgements of quantifier scope [VanLehn 78]. Note, however, that most evidence suggests that quantifier scope is not represented directly in syntactic structure, but is interpreted from that structure. For the readings of (11.1-2) to be vague in this way, the structures of (11.1a-2a) must be interpreted from syntactic structure, and not be part of it. It turns out that D-theory, coupled with the assumption that the parser does not interact with semantic and pragmatic processing, provides an account which is consistent with these intuitions.

But consider the D-theoretic analysis of (11.1); there are some surprises in store. Its representation will include predications like those of (12.1-8), where we are now careful to "unpack" informal names like "np1" to show that they consist of a content-free identifier and predications about the type of entity the identifier names.
(12.1) D(vp1, np1); VP(vp1); NP(np1)
(12.2) D(vp1, np2); NP(np2)
(12.3) D(vp1, np3); NP(np3)
(12.4) D(np1, ap1); D(ap1, adj1); ADJ(adj1)
(12.5) D(np1, n1); NOUN(n1)
(12.6) D(np2, n2); NOUN(n2)
(12.7) D(np3, n3); NOUN(n3)
(12.8) D(np3, pp1); D(pp1, prep1); PREP(prep1)
(12.9) adj1 < n1 < n2 < n3 < prep1

Here vp1 is the name of a node whose head is "sell", ap1 an adjective phrase dominating "ripe", and pp1 the PP "from local orchards." The analysis will also include predications about the left-to-right order of the terminal string, which has been informally represented in (12.9); "X < Y" is to be read "X is to the left of Y". We indicate the order of nonterminals here only for the sake of brevity; we use n1 < n2 as a shorthand for D(n1, 'cheese'); D(n2, 'bread'); 'cheese' < 'bread'. In particular, a D-theory analysis contains no explicit predications about left-right order of non-terminals.

But given only the predications in (12), what can be said about the identities of the nodes named np1, np2, and np3? Under this description, the descriptions of np1, np2 and np3 are compatible descriptions; they are potentially descriptions of the same individual. They are all dominated by vp1, and each is an NP, so there is no conflict here. Each dominates a different noun, but several constituents of the same type can be dominated by the same node if they are in a coordinate structure (given the analysis of coordinate structures we assume) and if they are string adjacent. N1, n2 and n3 are string adjacent (given only (12)), so the fact that the nodes named np1, np2 and np3 dominate nouns which may turn out to be different does not make the descriptions of the NPs incompatible. (Indeed, if the nouns are viewed as a coordinate structure, then the structure of the nouns is the same as that of (11.1).) Furthermore, adj1 is immediately to the left of and pp1 is immediately to the right of all the nouns, so these constituents could be dominated by the same single NP that might dominate n1, n2 and n3 as well. Thus there is no information here that can distinguish np1 from np2 from np3.

The fact that the conjunction "and" is dominated by np3 does not block the above analysis. The addition of one domination predicate leaves it dominated by n3 (as well as np3, of course), thereby making n1, n2 and n3 a perfect coordinate structure, and leaving no barrier to np1, np2 and np3 being co-referent. But this means that the D-theory analysis of (11.1) has as standard referents both it and (11.2)! (This modifies our statement earlier in this paper about the uniqueness of the standard referent; we now must say that for each possible "stacking" of nodes, there is one standard referent.) For if np1, np2 and np3 corefer, then the analysis above shows that the structure described is exactly that of (11.1). There is also the possibility that just np1 and np2 corefer, given the above analysis, which yields a reading where np2 is an appositive to np1, with np1 and np3 coordinate structures (the structure of appositives is similar to that of coordinate structures, we assume); and the possibility that just np2 and np3 corefer, yielding a reading with np1 and np2 coordinate structures, and np3 in apposition to np2. (The fact that we use a simplified phrase structure here is not an important fact. The analysis goes through equally as well with a full X-bar theoretic phrase component; the story is just much longer.)
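A fragment of (12) in the same illustrative Prolog encoding shows how the compatibility of name descriptions might begin to be tested; the cat/2 facts stand for the type predications NP(np1) etc., and the test below is only a necessary condition, since the full test sketched in the text also involves string adjacency and the coordinate-structure assumption:

    cat(vp1, vp).  cat(np1, np).  cat(np2, np).  cat(np3, np).
    d(vp1, np1).   d(vp1, np2).   d(vp1, np3).   % from (12.1-3)

    % A necessary (not sufficient) condition for two names to be
    % aliases of a single node: their category predications agree.
    may_corefer(X, Y) :- cat(X, C), cat(Y, C).

Under (12), may_corefer succeeds for any pair of np1, np2 and np3, which is exactly the freedom that the text goes on to exploit.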
The upshot of this is that upon encountering constructions like (11), the parser can proceed by simply assuming that the structures are conjoined at the highest level possible, using different names for each of the potential highest level constituents. It can then analyze the (potentially) coordinate structures entirely independently of feedback from pragmatic and semantic knowledge sources. When higher cognitive processing of this description requires distinguishing at what level the structures are conjoined, pragmatics can be invoked where needed, but there need be no interaction with syntactic processes themselves. This is because, once again, it turns out that if it is syntactically possible that structures should be conjoined at a lower level than that initially posited, the names of the potentially separate constituents simply can be viewed as aliases of the one node that does exist in the corresponding standard referent; in this case all predications about whatever node is named by the alias remain true, and thus once again no predications need to be revoked.

We now see how it is that D-theory gives an account of the intuition that the fine structure of coordinations is vague, in the sense of VanLehn. For we have seen that pragmatics does not need to determine whether (e.g.) all the fruits in (11.1) are ripe or not for the syntactic analysis to be completed deterministically, exactly because the D-theory analysis leaves all (and, we also claim, only) the syntactically correct possibilities open. Thus the description given in (12) is appropriately vague between possible syntactic analyses of sentences like those schematized in (11.3). Thus, this new representation opens the way for a simple formal expression of the notion that some sentences may be vague in certain well defined ways, even though they are believed to be understood, and that this vagueness may not be resolved until a hearer's attention is called to the unresolved decision.

7.3 The Problem of Nodes That Aren't There

While we can give only the briefest sketch here (the full story is quite long and complicated), exactly this use of names resolves yet another problem for the deterministic analysis of coordinate structures: To examine enough context (in the buffer) to decide what kind of structure is conjoined with what, a tree-building parser will often have to go out on a limb and posit the existence of nodes which may turn out not to exist after all. For example, if a tree-building parser has analyzed the inputs shown in (13.1-2) up to "worms" and has seen "and" and "frogs" in the buffer, it will need to posit that "frogs" is a full NP to check to see if the pattern [conjunction] [NP] [verb] is fulfilled, and thus if an S should be created with the NP as its head.

(13.1) Birds eat small worms and frogs eat small flies.
(13.2) Birds eat small worms and frogs.

But if the input is not as in (13.1), but as in (13.2), then positing the NP might be incorrect, because the correct analysis may be a noun-noun conjunction of "worms" and "frogs" (with the reading that birds eat worms and frogs, both of which are small). Of course, there is a second problem here for a tree-building parser, namely that (13.2) has a second reading which is an "NP and NP" conjunction. As we have seen above, there is no corresponding problem for a D-theory parser, because if it merely posits an NP dominating "frogs", the structure which will result for (13.2) is appropriately vague between both the NP reading and the noun reading of "frogs" (i.e.
between the readings where the frogs are just plain frogs and where the frogs are small.) But the solution to the second problem for a D-theory parser is also a solution to the first! After seeing "and" and "frogs" in its buffer, a D-theory parser can simply posit an NP node dominating "frogs" and continue. If the input proceeds as in (13.1), then the parser will introduce an S node and assert that it dominates the new NP. This will make the descriptions of the NPs dominating "worms" and dominating "frogs" incompatible, i.e. this will assure that there really are two NPs in the standard referent. If the input proceeds as in (13.2), a D-theory parser will state that the node referred to by the new name is dominated by the previous VP, resulting in the structure described immediately above. To summarize, where a tree-building parser might be misled into creating a node which might not exist at all, there is no corresponding problem for a D-theory parser.

8. SUMMING UP: D-Theory on One Foot

This paper has described a new theory of natural language syntax and parsing which argues that the proper output of syntactic analysis is not a tree structure per se, but rather a description of such structures. Rather than constructing a tree, a natural language parser based on these ideas will construct a single description which can be viewed as a partial description of each of a family of trees. The two key ideas that we have presented here are: (1) An analysis of a syntactic structure consists primarily of predications of the form "node X dominates node Y", and not the more traditional "node X immediately dominates node Y"; syntactic analysis never says more than that node X is somewhere above node Y. (2) Because this is a description, two names used to refer to syntactic structures can always co-refer if their descriptions are compatible, and furthermore, it is impossible to block the possibility of coreference if the descriptions are compatible. These two ideas, taken together, imply that during the process of analyzing the structure of a given utterance, merely adding to the emerging description may change the set of trees ultimately described (just as adding "honest" to the phrase "all politicians" may radically change the set described). We have also sketched some implications of this theory that not only suggest a new analysis of coordinate structures, but also suggest that coordinate structures might be much easier to analyze than current parsing techniques would suggest.

We are currently working to flesh out the analyses presented above. We are also working on an analysis of gapping and elision phenomena which seems to fall naturally out of this framework. This new analysis is surprising in that it makes crucial use of descriptions even less fully specified than those we have discussed in this paper, by using the notations we have introduced here to fuller advantage. These emerging analyses move yet further away from the traditional view of either trees or phrase markers as an appropriate framework for expressing syntactic generalizations.

9. References

Berwick, R. (1982) Locality Principles and the Acquisition of Syntactic Knowledge, MIT PhD thesis.

Bresnan, J. (1982) "The Passive in Lexical Theory," in J. Bresnan (ed.) The Mental Representation of Grammatical Relations, MIT Press, pp. 3-86.

Chomsky, N. (1981) Lectures on Government and Binding, Foris Publications.

Chomsky, N. (1982) Some Concepts and Consequences of the Theory of Government and Binding, MIT Press.
(1980) "On Memory Limitations in Natural Language Processing," MIT Masters thesis, MIT/LCS/TR-245. Church, K. and R. Patil (1982) "Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table," MIT/LCS/TM-216. Hindle, D. (1983) "Deterministic Parsing of Syntactic Non- fluencies," this proceedings. Horustein, N. and A. Weinberg (1981) "Case Theory and Preposition Stranding," Linguistic Inquiry, 12.1, pp. 55-91. Lasnik, H. and J. Kapin (1977) "A Restrictive Theory of Transformational Grammar," Theoretical Linguistics, vol. 4, pp. 173-196. McDonald, D. (1983) "Natural Language Generation as a Computational Problem: an Introduction," in M. Brady and R. Berwick (eds.) Computational Models of Discourse, MIT Press, pp. 209-265. Marcus, M. (1980) A Theory of Syntactic Recognition for Natural Language, MIT Press. Quirk, R., S. Greenbaum, G. Leech and J. Svartik (1972) ,4 Grammar of Contemporary English, Longman. Shipman, D. (1979) "Phrase Structure Rules for Parsifal', MIT AI Lab Working Paper 182 Shipman, D. and M. Marcus (1979) "Towards Minimal Data Structures for Deterministic Parsing,' IJCAI79. VanLehn, K.A. (1978) "Determining the Scope of English Quantifiers', MIT AI-TR-483. Woods, W.A. (1973). "An Experimental Parsing System for Transition Network Grammars." in R. Rustin, ed., Natural Language Processing, Algorithmics Press. 136 | 1983 | 20 |
PARSING AS DEDUCTION¹

Fernando C. N. Pereira
David H. D. Warren

Artificial Intelligence Center
SRI International
333 Ravenswood Ave., Menlo Park CA 94025

Abstract

By exploring the relationship between parsing and deduction, a new and more general view of chart parsing is obtained, which encompasses parsing for grammar formalisms based on unification, and is the basis of the Earley Deduction proof procedure for definite clauses. The efficiency of this approach for an interesting class of grammars is discussed.

1. Introduction

The aim of this paper is to explore the relationship between parsing and deduction. The basic notion, which goes back to Kowalski (Kowalski, 1980) and Colmerauer (Colmerauer, 1978), has seen a very efficient, if limited, realization in the use of the logic programming language Prolog for parsing (Colmerauer, 1978; Pereira and Warren, 1980). The connection between parsing and deduction was developed further in the design of the Earley Deduction proof procedure (Warren, 1975), which will also be discussed at length here.

Investigation of the connection between parsing and deduction yields several important benefits:

• A theoretically clean mechanism to connect parsing with the inference needed for semantic interpretation.
• Handling of gaps and unbounded dependencies "on the fly" without adding special mechanisms to the parser.
• A reinterpretation and generalization of chart parsing that abstracts from unessential data-structure details.
• Techniques that are applicable to parsing in related formalisms not directly based on logic.
• Elucidation of parsing complexity issues for related formalisms, in particular lexical-functional grammar (LFG).

Our study of these topics is still far from complete; therefore, besides offering some initial results, we shall discuss various outstanding questions.

The connection between parsing and deduction is based on the axiomatization of context-free grammars in definite clauses, a particularly simple subset of first-order logic (Kowalski, 1980; van Emden and Kowalski, 1976). This axiomatization allows us to identify context-free parsing algorithms with proof procedures for a restricted class of definite clauses, those derived from context-free rules. This identification can then be generalized to include larger classes of definite clauses to which the same algorithms can be applied, with simple modifications. Those larger classes of definite clauses can be seen as grammar formalisms in which the atomic grammar symbols of context-free grammars have been replaced by complex symbols that are matched by unification (Robinson, 1965; Colmerauer, 1978; Pereira and Warren, 1980). The simplest of these formalisms is definite-clause grammars (DCG) (Pereira and Warren, 1980). There is a close relationship between DCGs and other grammar formalisms based on unification, such as Unification Grammar (UG) (Kay, 1979), LFG, PATR-2 (Shieber, 1983) and the more recent versions of GPSG (Gazdar and Pullum, 1982).

The parsing algorithms we are concerned with are online algorithms, in the sense that they apply the constraints specified by the augmentation of a rule as soon as the rule is applied. In contrast, an offline parsing algorithm will consist of two phases: a context-free parsing algorithm followed by application of the constraints to all the resulting analyses.

The paper is organized as follows. Section 2 gives an overview of the concepts of definite clause logic, definite clause grammars, definite clause proof procedures, and chart parsing. Section 3 discusses the connection between DCGs and LFG. Section 4 describes the Earley Deduction definite-clause proof procedure. Section 5 then brings out the connection between Earley Deduction and chart parsing, and shows the added generality brought in by the proof procedure approach. Section 6 outlines some of the problems of implementing Earley Deduction and similar parsing procedures. Finally, Section 7 discusses questions of computational complexity and decidability.

¹This work was partially supported by the Defense Advanced Research Projects Agency under Contract N00039-80-C-0575 with the Naval Electronic Systems Command. The views and conclusions contained in this article are those of the authors and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.
The parsing a{gorithms we are concerned with are online algorithms, in the sense that they apply the constraints specified by the augmentation of a rule a~ soon as the rule is applied. In contrast, an olTline parsing algorithm will consist of two phases: a context-free parsing algorithm followed by application of the constraints to all the resulting analyses. The pap('r is organized as follows. Section 2 gives an overview of the concepts of definite clause logic, definite clause grammars, definite clause proof procedures, and chart parsing, Section 3 discusses the connection betwee DCGs and LFG. Section 4 describes the Earley Deduction definite-clause proof procedure. Section 5 then brings out the connection between Earley Deduction and chart parsing, and shows the added generality brought in by the proof procedure approach. Section 6 outlines some oi the problems of implementing Earley Deduction and similar parsing procedure~. Finally, Section 7 discusses questions of computational complexity and decidability. £37 2. Basic Notions 2.1. Definite Clauses A definite clause has the form P:Q~&... &Q.. to be read as "P is true if Q1 and ... and Qa are true". If n --~ 0, the clause is a unit clause and is written simply as P. P and QI ..... Qn are literals. P is the positive literal or head of the clause; Ql .... , Qn are the negative literals, forming the body of the clause. Literals have the forn~ pit I ..... tk), where p is the predicate of arity k and the t i the arguments. The arguments are terms. A term may be: a variable {variable names start with capital letters); a constant; a compound term J~tl,...,t m) where f is a functor of arit$ m and the t i are terms. All the variables in a clause are implicitly universally quantified. A set of definite clauses forms a program, and the clauses in a program are called input clauses. A program defines the relations denoted by the predicates appearing in the heads of clauses. When using a definite- clause proof procedure, such as Prolog (Roussel. 1975), a goal statement requests the proof procedure to find provable instances of P. 2.2. Definite Clause Grammars Any context-free rule i ' ~ o r 1 ... O n can be translated into a definite clause xlSo.S~) : %/S0,Sl) & .., & %(S~.l.S.). The variables S i are the string arguments, representing positions m the input string. For example, the context-free rule "S ~ NP VP" is translated into "s(S0,S2) np{,qO.Sl} k" vp(S1,S2)," which can be paraphrased as "'there is an S from SO to $2 in the input string if there is an NP from SO to S1 and a V'P from S1 to 82." Given the translation of a context-free grammar G with start symbol S into a set of definite clauses G" with corresponding predicate s, to say that a string w is in the grammar's language is equivalent to saying that the start goal S{po,pj is a consequence of G" U W, where Po and p represent the left and right endpoints of u,, and W is a set of unit clauses that represents w. It is easy to generalize the above notions to define DCGs. DCG nonterminals have arguments in the same way that predicates do. A DCG nonterminal with u arguments is translated into a predicate of n+2 arguments, the last two of which are the string points, as in the translation of context-free rules into definite clauses. The context-free grammar obtained from a DCG by dropping all nonterminal arguments is the context- free skeleton of the DCG. 2.3. 
2.3. Deduction in Definite Clauses

The fundamental inference rule for definite clauses is the following resolution rule: From the clauses

B ⇐ A1 & ... & Am.   (1)
C ⇐ D1 & ... & Di & ... & Dn.   (2)

when B and Di are unifiable by substitution σ, infer

σ[C ⇐ D1 & ... & Di-1 & A1 & ... & Am & Di+1 & ... & Dn].   (3)

Clause (3) is a derived clause, the resolvent of (1) and (2). The proof procedure of Prolog is just a particular embedding of the resolution rule in a search procedure, in which a goal clause like (2) is successively rewritten by the resolution rule using clauses from the program (1). The Prolog proof procedure can be implemented very efficiently, but it has the same theoretical problems as the top-down backtrack parsing algorithms after which it is modeled. These problems do not preclude its use for creating uniquely efficient parsers for suitably constructed grammars (Warren and Pereira, 1983; Pereira, 1982), but the broader questions of the relation between parsing and deduction and of the derivation of online parsing algorithms for unification formalisms require that we look at a more generally applicable class of proof procedures.

2.4. Chart Parsing and the Earley Algorithm

Chart parsing is a general framework for constructing parsing algorithms for context-free grammars and related formalisms. The Earley context-free parsing algorithm, although independently developed, can be seen as a particular case of chart parsing. We will give here just the basic terminology of chart parsing and of the Earley algorithm. Full accounts can be found in the articles by Kay (Kay, 1980) and Earley (Earley, 1970).

The state of a chart parser is represented by the chart, which is a directed graph. The nodes of the chart represent positions in the string being analyzed. Each edge in the chart is either active or passive. Both types of edges are labeled. A passive edge with label N links node r to node s if the string between r and s has been analyzed as a phrase of type N. Initially, the only edges are passive edges that link consecutive nodes and are labeled with the words of the input string (see Figure 1). Active edges represent partially applied grammar rules. In the simplest case, active edges are labeled by dotted rules. A dotted rule is a grammar rule with a dot inserted somewhere on its right-hand side

X → α1 ... αi-1 . αi ... αn   (4)

An edge with this label links node r to node s if the sentential form α1 ... αi-1 is an analysis of the input string between r and s. An active edge that links a node to itself is called empty and acts like a top-down prediction.

Chart-parsing procedures start with a chart containing the passive edges for the input string. New edges are added in two distinct ways. First, an active edge from r to s labeled with a dotted rule (4) combines with a passive edge from s to t with label αi to produce a new edge from r to t, which will be a passive edge with label X if αi is the last symbol in the right-hand side of the dotted rule; otherwise it will be an active edge with the dot advanced over αi. Second, the parsing strategy must place into the chart, at appropriate points, new empty active edges that will be used to combine existing passive edges. The exact method used determines whether the parsing method is seen as top-down, bottom-up, or a combination of the two.
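As a hedged illustration of this edge-combination step, here is one way it might be written in Prolog; the edge/3 and rule/3 term shapes, with the already-analyzed right-hand-side symbols kept in reverse order in Done, are our own assumptions.

    % combine(Active, Passive, New): an active edge from R to S whose
    % dot stands before symbol A combines with a passive edge for A
    % from S to T.
    combine(edge(R, S, rule(X, Done, [A|Rest])),
            edge(S, T, passive(A)),
            New) :-
        (   Rest == []                       % dot reaches the end:
        ->  New = edge(R, T, passive(X))     % a passive edge labeled X
        ;   New = edge(R, T, rule(X, [A|Done], Rest))  % advance the dot
        ).

Under this encoding, an empty active edge for a rule X → α1 ... αn introduced at node R would be edge(R, R, rule(X, [], [α1, ..., αn])).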
The Earley parsing algorithm can be seen as a special case of chart parsing in which new empty active edges are introduced top-down and, for all k, the edge combinations involving only the first k nodes are done before any combinations that involve later nodes. This particular strategy allows certain simplifications to be made in the general algorithm.

3. DCGs and LFG

We would like to make a few informal observations at this point, to clarify the relationship between DCGs and other unification grammar formalisms -- LFG in particular. A more detailed discussion would take us beyond the intended scope of this paper.

The different notational conventions of DCGs and LFG make the two formalisms less similar on the surface than they actually are from the computational point of view. The objects that appear as arguments in DCG rules are tree fragments every node of which has a number of children predetermined by the functor that labels the node. Explicit variables mark unspecified parts of the tree. In contrast, the functional structure nodes that are implicitly mentioned in LFG equations do not have a predefined number of children, and unspecified parts are either omitted or defined implicitly through equations.

As a first approximation, a DCG rule such as

s(s(Subj,Obj)) → np(Subj) vp(Obj)   (5)

might correspond to the LFG rule

S → NP VP   (6)
    (↑ subj) = ↓   (↑ obj) = ↓

The DCG rule can be read as "an s with structure

      s
     / \
  Subj  Obj

is an np with structure Subj followed by a vp with structure Obj." The LFG rule can be read as "an S is an NP followed by a VP, where the value of the subj attribute of the S is the functional structure of the NP and the value of the attribute obj of the S is the functional structure of the VP."

For those familiar with the details of the mapping from functional descriptions to functional structures in LFG, DCG variables are just "placeholder" symbols (Bresnan and Kaplan, 1982).

As we noted above, an apparent difference between LFG and DCGs is that LFG functional structure nodes, unlike DCG function symbols, do not have a definite number of children. Although we must leave to a separate paper the details of the application to LFG of the unification algorithms from theorem proving, we will note here that the formal properties of logical and LFG or UG unification are similar, and there are adaptations to LFG and UG of the algorithms and data structures used in the logical case.

4. Earley Deduction

The Earley Deduction proof procedure schema is named after Earley's context-free parsing algorithm (Earley, 1970), on which it is based. Earley Deduction provides for definite clauses the same kind of mixed top-down bottom-up mechanism that the Earley parsing algorithm provides for context-free grammars.

Earley Deduction operates on two sets of definite clauses called the program and the state. The program is just the set of input clauses and remains fixed. The state consists of a set of derived clauses, where each nonunit clause has one of its negative literals selected; the state is continually being added to. Whenever a nonunit clause is added to the state, one of its negative literals is selected. Initially the state contains just the goal statement (with one of its negative literals selected).

There are two inference rules, called instantiation and reduction, which can map the current state into a new one by adding a new derived clause.
For an instantiation step, there is some clause in the current state whose selected literal unifies with the positive literal of a nonunit clause C in the program. In this case, the derived clause is σ[C], where σ is a most general unifier (Robinson, 1965) of the two literals concerned. The selected literal is said to instantiate C to σ[C].

For a reduction step, there is some clause C in the current state whose selected literal unifies with a unit clause from either the program or the current state. In this case, the derived clause is σ[C'], where σ is a most general unifier of the two literals concerned, and C' is C minus its selected literal. Thus, the derived clause is just the resolvent of C with the unit clause, and the latter is said to reduce C to σ[C'].

Before a derived clause is added to the state, a check is made to see whether the derived clause is subsumed by any clause already in the state. If the derived clause is subsumed, it is not added to the state, and that inference step is said to be blocked.

In the examples that follow, we assume that the selected literal in a derived clause is always the leftmost literal in the body. This choice is not optimal (Kowalski, 1980), but it is sufficient for our purposes.

For example, given the program

c(X,Z) ⇐ c(X,Y) & c(Y,Z).   (7)
c(1,2).   (8)
c(2,3).   (9)

and goal statement

ans(Z) ⇐ c(1,Z).   (10)

here is a sequence of clauses derived by Earley Deduction:

ans(Z) ⇐ c(1,Z).   goal statement   (11)
c(1,Z) ⇐ c(1,Y) & c(Y,Z).   (11) instantiates (7)   (12)
ans(2).   (8) reduces (11)   (13)
c(1,Z) ⇐ c(2,Z).   (8) reduces (12)   (14)
c(2,Z) ⇐ c(2,Y) & c(Y,Z).   (14) instantiates (7)   (15)
c(1,3).   (9) reduces (14)   (16)
ans(3).   (16) reduces (11)   (17)
c(2,Z) ⇐ c(3,Z).   (9) reduces (15)   (18)
c(3,Z) ⇐ c(3,Y) & c(Y,Z).   (18) instantiates (7)   (19)

At this point, all further steps are blocked, so the computation terminates.

Earley Deduction generalizes Earley parsing in a direct and natural way. Instantiation is analogous to the "predictor" operation of Earley's algorithm, while reduction corresponds to the "scanner" and "completer" operations. The "scanner" operation amounts to reduction with an input unit clause representing a terminal symbol occurrence, while the "completer" operation amounts to reduction with a derived unit clause representing a nonterminal symbol occurrence.
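To make the procedure concrete, here is a small, assumption-laden Prolog sketch of Earley Deduction as an agenda-driven forward chainer. The clause representation cl(Head, BodyList), the predicate names (prog/1, state/1, run/1, derive/2), and the ans/1 wrapper for the goal statement are all our own choices, not the paper's; subsumes_term/2 is the ISO subsumption test (where it is missing, subsumption can be coded with copy_term/2 and numbervars/3). As in the text, the selected literal is the leftmost body literal.

    :- dynamic state/1.

    prove(Goal) :-
        retractall(state(_)),
        copy_term(Goal, G),
        run([cl(ans(G), [G])]),         % the goal statement
        state(cl(ans(Goal), [])).       % enumerate proved instances

    run([]).
    run([C|Agenda]) :-
        (   state(Old), subsumes_term(Old, C)     % blocked step
        ->  run(Agenda)
        ;   assertz(state(C)),
            findall(N, derive(C, N), New),
            append(Agenda, New, Agenda1),
            run(Agenda1)
        ).

    % Instantiation: the selected literal unifies with the head of a
    % nonunit program clause; the derived clause is the instantiated
    % program clause.
    derive(cl(_, [Sel|_]), New) :-
        prog(cl(H, B)),
        B \== [],
        copy_term(Sel, Sel1),
        copy_term(cl(H, B), New),
        New = cl(Sel1, _).

    % Reduction: a unit clause (from program or state) resolves away
    % the selected literal of a nonunit clause ...
    derive(cl(H, [Sel|Rest]), cl(H1, Rest1)) :-
        unit(U),
        copy_term(cl(H, [Sel|Rest]), cl(H1, [Sel1|Rest1])),
        copy_term(U, U1),
        U1 = Sel1.

    % ... and, symmetrically, a new unit clause may reduce nonunit
    % clauses already in the state.
    derive(cl(U, []), cl(H1, Rest1)) :-
        state(cl(H, [Sel|Rest])),
        copy_term(cl(H, [Sel|Rest]), cl(H1, [Sel1|Rest1])),
        copy_term(U, U1),
        U1 = Sel1.

    unit(U) :- prog(cl(U, [])).
    unit(U) :- state(cl(U, [])).

Encoding the program (7)-(9) as

    prog(cl(c(X,Z), [c(X,Y), c(Y,Z)])).   % (7)
    prog(cl(c(1,2), [])).                 % (8)
    prog(cl(c(2,3), [])).                 % (9)

the query ?- prove(c(1,Z)). enumerates Z = 2 and Z = 3 and then terminates, the remaining steps being blocked exactly as after clause (19) above.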
Endpoints are just a convenient way of indexing derived clauses in an implementalion to reduce the number of nonproductive (nonunifying) attempts at applying the reduction rule. We shall give now an example of the application of Earley Deduction to parsing, corresponding to the chart of Figure I. The CFG S -, NP VP NP --- Det N Det ~ NP Gen Det ---* Art Det ---, A V'P --. V NP corresponds to the following definite-clause program: s(S0,S) = np(S0,Sl) & vp(SI,S). {20) np(S0,S) ~ det{S0,Sl) & n(S1,S). (21) det(S0,S} = np(S0,Sl) & gen(SI,S). (22} det(S0,S) ~ art(S0,S). (23) det(S,S). (24) vp{S0,S) = v(SO,~l) & np(Sl,S}. (25) The lexical categories of the sentence oAg ath~ 1 's2h usband3hit4 Ulrich s (26) can be represented by the unit clauses n(0,11. (97} gen(l,2). (28) n(2,3). (29} ,.(3..t). (301 n{.ts). 131) Thus. the t~k of determining whether (26) is a sentence can be represented by the goal statement ans ~ s(0.5). (32) If the sentence is in the language, the unit clause ass will be derived in the course of an Eariey Deduction proof. S.ch a pro(_)f could proceed as follows: • ns = s(0,5), goal statement (33) s(0,5) = np(O,Sl) • vp(Sl,5). (33) instantiates (20) (34) np(O,S) = det(O, Sl) I n(SI,S). (34) inst,&nt,£a, tes (21) (35) det(O.S) = np(O.5t) It gen(SI.S). (35) £nstanr, i~tes (22) (35) det(O.S) = crt(0,S). (35) inst~ntiates (23) (37) np(0.S) ~ n(O.5)'. (24) reduces (35) (38) up(0.1). (27) reduces (38) (39) s(0"~5~ = ':p(I_,5) (39) reduces (34) (40) vp(i.5) ~ v(I,SI) ~ np(Sl,5). (40) instant, in.tee (25) (41) der,(0,S) *=-gen(1.S). (39) reduces (36) (42) det(0.2) (28) reduces (42) (43) np(O-S)" ~ n(2.S) (43) reduces (35) (44) np(O.3). . (29) reduces (44) (45) s(O,5) = vp(3,5). (45) reduces (34) (46) det(O,3) = gen(3.S). (45) reduces (35) (47) vp(3.5) ~ v(3.$I) It np(SI,5). (46) instanti~tes (25)" (48) vp(3_,5) ~ np(4.5). (30) reduces (48) (49) ap(4,5) = det(4,St) ~t n($1,5), (49) inst~ntiates (21) (50) det(4.S) = np(4,Sl) It gen(Sl,S). (50) instantiatss (22) (51) det(4,S) ~ ~rt(4.S). (50) instantiates (23) (52) np(4.S) = det(4_~Sl) It n(SI,S), (51) inet&ntiLtes (21) (53) up(4,5) = n(4,5). (24) reduces (50) (54) np(4.S) = n(4.S) (24) reduces (53) (55) up(4_-,5). - (31) reduces (54) (56) vp(3.5) (56) reduces (49) (57) det'~4~'S) = gen(5,S). (56) reduces (51) (58) s(0,5). (67) reduces (46) (59) an•.- (69) reduce• (33) (60) Note how subsumption is used to curtail the left recursion of rules (21) and (22), by stopping extraneous instantiation steps from the derived clauses (35) and (36). As we have seen in the example of the previous section, this mechanism is a general one, capable of handling complex grammar symbols within certain constraints that will be discussed later. The Earley Deduction derivation given above corresponds directly to the chart in Figure 1. In general, chart parsing cannot support strategies that would create active edges by reducing the symbols in the right-hand side of a rule in any arbitrary order. This is because an active edge must correspond to a contiguous sequence of analyzed symbols. Definite clause proof procedures do not have this limitation. For example, it is very simple t.o define a strategy, "head word nar¢,ng - (NlgCord, 19801, which would use the" reduction rule to infer np(SO,S) = deqS0,2) & rel{3,S}. 37 40 49 51 58 44 48 63 vp Figure 1: ('hart vs. Earley Deduction Proof Each arc in tile chart is labeled with the number of a clause in the proof. 
In each clause that corresponds to a chart arc, two literal arguments correspond to the two endpoints of the arc. These arguments have been underlined in the derivation. Notice how the endpoint arguments are the two string arguments in the head for unit clauses (passive edges) but, in the case of nonunit clauses (active edges), are the first string argument in the head and the first in the leftmost literal in the body.

In general, chart parsing cannot support strategies that would create active edges by reducing the symbols in the right-hand side of a rule in any arbitrary order. This is because an active edge must correspond to a contiguous sequence of analyzed symbols. Definite clause proof procedures do not have this limitation. For example, it is very simple to define a strategy, "head word" parsing (McCord, 1980), which would use the reduction rule to infer

np(S0,S) ⇐ det(S0,2) & rel(3,S).

from the clauses

np(S0,S) ⇐ det(S0,S1) & n(S1,S2) & rel(S2,S).   [NP → Det N Rel]
n(2,3).   [There is an N between points 2 and 3 in the input]

This example shows that the class of parsing strategies allowed in the deductive approach is broader than what is possible in the chart parsing approach.² It remains to be shown which of those strategies will have practical importance as well.

²This particular strategy could be implemented in a chart parser, by changing the rules for combining edges, but the generality demonstrated here would be lost.

As we noted before, our view of parsing as deduction makes it possible to derive general parsing mechanisms for augmented phrase-structure grammars with gaps and unbounded dependencies. It is difficult (especially in the case of pure bottom-up parsing strategies) to augment chart parsers to handle gaps and dependencies (Thompson, 1981). However, if gaps and dependencies are specified by extra predicate arguments in the clauses that correspond to the rules, the general proof procedures will handle those phenomena without further change (a schematic illustration is given below). This is the technique used in DCGs and is the basis of the specialized extraposition grammar formalism (Pereira, 1981).

The increased generality of our approach in the area of parsing strategy stems from the fact that chart parsing strategies correspond to specialized proof procedures for definite clauses with string arguments. In other words, the origin of these proof procedures means that string arguments are treated differently from other arguments, as they correspond to the chart nodes.

6. Implementing Earley Deduction

To implement Earley Deduction with an efficiency comparable, say, to Prolog, presents some challenging problems. The main issues are

• How to represent the derived clauses, especially the substitutions involved.
• How to avoid the very heavy computational cost of subsumption.
• How to recognize when derived clauses are no longer needed and space can be recovered.
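Returning briefly to the gap-handling point made above, here is a minimal sketch in standard Prolog DCG notation of threading a gap through an extra argument; the category names, the gap/nogap markers, and the toy lexicon are our own illustrative assumptions:

    % A relative clause introduces a gap that some NP inside it must
    % realize as the empty string.
    rel       --> [that], s(gap).
    s(G)      --> np(nogap), vp(G).
    vp(G)     --> v, np(G).
    np(nogap) --> [Det, N], { det(Det), noun(N) }.
    np(gap)   --> [].                 % the gap: an empty NP

    v --> [hit].
    det(the).  noun(man).  noun(dog).

    % ?- phrase(rel, [that, the, dog, hit]).   succeeds:
    % "that the dog hit _", with the object NP realized as a gap.

The extra argument does all the work: no special parser mechanism is involved, which is precisely the point made above.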
If another selected literal is an instance of one that has been exhaustively explored, there is no need to consider using it as a candidate for instantiation steps, Subsuvnption would then be only applied to derived unit clauses. A major efficiency problem with Earley deduction is that it is difficult to recognize situations in which derived clauses are no longer needed and space can be reclaimed. There is a marked contrast with purely top-down proof procedures, such as Prolog, to which highly effective ~pace recovery techniques can be applied relatively easily. The Eartey algorithm pursues all possible parses in parallel, indexed by string position. In principle, this permits space to be recovered, as parsing progresses, by deleting information relating to earlier string positions, l't amy be possible to generalize this technique to Earley Deduction. by recognizing, either automatically or manually, certain special properties of the input clauses. 7. Decidability and Computational Complexity It is not at. all obvious that grammar formalisms based on unification can be parsed within reasonable bounds of time and space. [n fact, unrestricted DCGs have Turing machine power, and LFG, although decidable, seems capable of encoding exponentially hard problems. llowever, we need not give up our interest in the complexity analysis of unification-based parsing. Whether for interesting subclasses of, grammars or specific ~rammars of interest, it is still important to determine how efficient parsing can be. A basic step in that direction is to estimale the cost added by unification to the operation of combining {reducing or expanding) a nontcrmin.~l in a derivation with a nonterminal in a grammar rule. Because definite clauses are only semidecidable, general proof procedures may not terminate for some sets of definite clauses. However, the specialized proof procedures we have derived from parsing algorithms are stable: if a set of definite clauses G is the translation of a context-free grammar, the procedure will always terminate (in success or failure) when to proving any start goal for G. More interesting in this context is the notion of strong stability, which depends on the following notion of off'line parsability. A DCG is offline-parsable if its context-free skeleton is not infinitely ambiguous. Using different terminology, Bresnan and Kaplan (Bresnan and Kaplan, 1982) have shown that the parsing problem for LFG is decidable because LFGs are offline parsable. This result can be adapted easily to DCGs, showing that the parsing problem for offline-parsable DCGs is decidable. Strong stability can now be defined: a parsing algorithm is strongly stable if it always terminates for offline-parsab[e grammars. For example, a direct DCG version of the Earley parsing algorithm is stable but not strongly so. In the following complexity arguments, we restrict ourselves to offline-parsable grammars. This is a reasonable restriction for two reasons: (i) since general DCGs have Turing machine power, there is no useful notion of computational complexity for the parser on its own; (ii) (.here are good reasons to believe that linguistically relevant grammars must be offliae-parsable {Bresnan and Kaplaa, 1982). In estimating the added complexity of doing online unification, we start from the fact that the length of any derivation of a terminal string in a finitely ambiguous context-free grammar is linearly bounded by the length of the termin:fi string. 
The proof of this fact is omitted for lack of space, but can be found elsewhere (Pereira and Warren, 1983).

General definite-clause proof procedures need to access the values of variables (bindings) in derived clauses. The structure-sharing method of representation makes the time to access a variable binding at worst linear in the length of the derivation. Furthermore, the number of variables to be looked up in a derivation step is at worst linear in the size of the derivation. Finally, the time (and space) to finish a derivation step, once all the relevant bindings are known, does not depend on the size of the derivation. Therefore, using this method for parsing offline-parsable grammars makes the time complexity of each step at worst O(n^2) in the length of the input.

Some simplifications are possible that improve that time bound. First, it is possible to use a value array representation of bindings (Boyer and Moore, 1972) while exploring any given derivation path, reducing to a constant the variable lookup time at the cost of having to save and restore O(n) variable bindings from the value array each time the parsing procedure moves to explore a different derivation path. Secondly, the unification cost can be made independent of the derivation length, if we forgo the occurs check that prevents a variable from being bound to a term containing it. Finally, the combination of structure sharing and copying suggested in the last section eliminates the overhead of switching to a different derivation path in the value array method at the cost of a uniform O(log n) time to look up or create a variable binding in a balanced binary tree.

When adding a new edge to the chart, a chart parser must verify that no edge with the same label between the same nodes is already present. In general DCG parsing (and therefore in online parsing with any unification-based formalism), we cannot check for the "same label" (same lemma), because lemmas in general will contain variables. We must instead check for subsumption of the new lemma by some old lemma. The obvious subsumption checking mechanism has an O(n^3) worst-case cost, but the improved binding representations described above, together with the other special techniques mentioned in the previous section, can be used to reduce this cost in practice.

We do not yet have a full complexity comparison between online and offline parsing, but it is easy to envisage situations in which the number of edges created by an online algorithm is much smaller than that for the corresponding offline algorithm, whereas the cost of applying the unification constraints is the same for both algorithms.

8. Conclusion

We have outlined an approach to the problems of parsing unification-based grammar formalisms that builds on the relationship between parsing and definite-clause deduction. Several theoretical and practical problems remain. Among these are the question of recognizing derived clauses that are no longer useful in Earley-style parsing, the design of restricted formalisms with a polynomial bound on the number of distinct derived clauses, and independent characterizations of the classes of offline-parsable grammars and languages.

Acknowledgments

We would like to thank Barbara Grosz and Stan Rosenschein for their comments on earlier versions of this paper.

References

A. V. Aho and J. D. Ullman, The Theory of Parsing, Translation and Compiling (Prentice-Hall, Englewood Cliffs, New Jersey, 1972).

R. S. Boyer and J. S. Moore, "The Sharing of Structure in Theorem-Proving Programs," in Machine Intelligence 7, B. Meltzer and D. Michie, eds., pp. 101-116 (John Wiley & Sons, New York, New York, 1972).
Moore, "The Sharing of Structure in Theorem-Proving Programs," in Machine Intelligence 7, B. Meltzer and D. Michie, eds., pp. 101-116 (.John Wiley & Sons, New York, New York. 1.q72}. .1. Bresnan and R. Kaplan. "Lexical-Functional Grammar: A Formal System for Grammatical Representation," in The Mental Representation of Grammatical Relations, J. Bresnan, ed., pp. 173-281 (NflT Press, Cambridge, Massachusetts, 1982). A. Colmerauer, "Metamorphosis Grammars," in Natural Language Communication with Computers, L. Bole, ed. (Springer-Verlag, Berlin, 1978). First appeared as 'Les Grammaires de Metamorphose', Groupe d'Intelligence Artifieielle, Universitd de Marseille 17, November 1975. J. Earley, "An Efficient Context-Free Parsing Algorithm," Communications of the ACM, Vol. 13, No. 2, pp. 94-102 (February 1970). G. Gazdar and G. Pullum, Generalized Phrase Slructure Grammar: A Theoretical Synopsis (Indiana University Linguistics Club, Bloomington, Indiana, 1982). S. L. Graham, M. A. Harrison and W. L. Ruzzo, "An Improved Context-Free Recognizer," ACM Transactions on Programming Languages and Systems, Vol. 2, No. 3, pp. 415-462 (July 1980). NI. Kay, "Functional Grammar," Prec. of the Fifth Annual A[celing of the Berkeley Linguistic Society, pp. 142-158. Berkeley Linguistic Society, Berkeley, California (February 17-19 19791 . M. Kay, "Algorithm Schemata and Data Structures in Syntactic Processing," Technical Report , X~EROX Pale Alto Research Center, Pale Alto, California (1980). A version will appear in the proceedings of the Nobel Symposium on Text Processing, (h,t henburg, 1980. R. A. Kowalski. Logic for Problem Solving (North Holland. New York, New York, 1980}. M. C. Mc('ord, "Slot Grammars," American Journal of Computational Linguistics, Vol. 6. No. 1. pp. 2",,5-2Sli (Januar.v-March 1980). F. C N. Pereira. "Extraposition Grammars," American Journal of Computational Linguistics, Vol. 7, No. 4. pp. 243-256 (October-December 1981). F. C. N. Pereira. Logic for Natural Language Analysis. Ph.D. thesis. University of Edinburgh. Scotland. 1982. F. C'. N. Pereira and D. H. D. Warren. "Definite Clause Grammars for Language Analysis - a Survey of the Formalism and a Comparison with Augmented Transition Networks," Artificial Intelligence, Vot. 13. pp. 231-278 (19801. F. C. N. Pereira and D. H. D. Warren, "Parsing a.s Deduction," Forthcoming technical note , Artificial Intelligence Center, SRI International , Menlo Park, California { 1983). 143 J. A. Robinson, "A Machine-Oriented Logic Based on the Resolution Principle," Journal of the AGM, Vol. 12, pp. 23-44 (January 1965). P. Roussel, "Prolog : Manuel de Rdf6rence et Utilisation," Technical Report, Groupe d'Intelligence Artificielle, Universitd d'AJx-Marse.ille II, Marseille, France {1975). S. Shieber, Personal communication, 1983. H. Thompson, "Chart Parsing and Rule Schemata in GPSG," Proc. of the 19th Annual Meeting of the Association for Computational Linguistics, pp. 167-172, Association for Computational Linguistics, Stanford University, Stanford, California (June 29-July 1 1981). M. H. van Emden and R. A. Kowalski, "The Semantics of Predicate Logic as a Programming Language," Journal of the AC~V[, Vol. 23, No. 4, pp. 73.3-742 [October 19781. D. H. D. Warren. Earley Deduction. Unpublished note, 1975. D H. D. Warren and F. C. N. Pereira, An Efficient Easily Adaptable System for Interpreting Natural Langu.~e Queries. To appear in the American Journal of Computational Linguistics., 1983. 144 | 1983 | 21 |
DESIGN OF A KNOWLEDGE-BASED REPORT GENERATOR

Karen Kukich
University of Pittsburgh
Bell Telephone Laboratories
Murray Hill, NJ 07974

ABSTRACT

Knowledge-Based Report Generation is a technique for automatically generating natural language reports from computer databases. It is so named because it applies knowledge-based expert systems software to the problem of text generation. The first application of the technique, a system for generating natural language stock reports from a daily stock quotes database, is partially implemented. Three fundamental principles of the technique are its use of domain-specific semantic and linguistic knowledge, its use of macro-level semantic and linguistic constructs (such as whole messages, a phrasal lexicon, and a sentence-combining grammar), and its production system approach to knowledge representation.

I. WHAT IS KNOWLEDGE-BASED REPORT GENERATION

A knowledge-based report generator is a computer program whose function is to generate natural language summaries from computer databases. For example, knowledge-based report generators can be designed to generate daily stock market reports from a stock quotes database, daily weather reports from a meteorological database, weekly sales reports from corporate databases, or quarterly economic reports from U. S. Commerce Department databases, etc. A separate generator must be implemented for each domain of discourse because each knowledge-based report generator contains domain-specific knowledge which is used to infer interesting messages from the database and to express those messages in the sublanguage of the domain of discourse. The technique of knowledge-based report generation is generalizable across domains, however, and the actual text generation component of the report generator, which comprises roughly one-quarter of the code, is directly transportable and readily tailorable.

Knowledge-based report generation is a practical approach to text generation. Its three fundamental tenets are the following. First, it assumes that much domain-specific semantic, linguistic, and rhetoric knowledge is required in order for a computer to automatically produce intelligent and fluent text. Second, it assumes that production system languages, such as those used to build expert systems, are well-suited to the task of representing and integrating semantic, linguistic, and rhetoric knowledge. Finally, it holds that macro-level knowledge units, such as whole semantic messages, a phrasal lexicon, clausal grammatical categories, and a clause-combining grammar, provide an appropriate level of knowledge representation for generating that type of text which may be categorized as periodic summary reports. These three tenets guide the design and implementation of a knowledge-based report generation system.

II. SAMPLE OUTPUT FROM A KNOWLEDGE-BASED REPORT GENERATOR

The first application of the technique of knowledge-based report generation is a partially implemented stock report generator called Ana. Data from a Dow Jones stock quotes database serves as input to the system, and the opening paragraphs of a stock market summary are produced as output. As more semantic and linguistic knowledge about the stock market is added to the system, it will be able to generate longer, more informative reports.

Figure 1 depicts a portion of the actual data submitted to Ana for January 12, 1983. A hand drawn graph of the same data is included. The following text samples are Ana's interpretation of the data on two different runs.
DOW JONES INDUSTRIALS AVERAGE -- 01/12/83

    01/12  CLOSE   30 INDUS  1083.61
    01/12  330PM   30 INDUS  1089.40
    01/12  3PM     30 INDUS  1093.44
    01/12  230PM   30 INDUS  1100.07
    01/12  2PM     30 INDUS  1095.38
    01/12  130PM   30 INDUS  1095.75
    01/12  1PM     30 INDUS  1095.84
    01/12  1230PM  30 INDUS  1095.75
    01/12  NOON    30 INDUS  1092.35
    01/12  1130AM  30 INDUS  1089.40
    01/12  11AM    30 INDUS  1085.08
    01/12  1030AM  30 INDUS  1085.36
    01/11  CLOSE   30 INDUS  1083.79

CLOSING AVERAGE 1083.61 DOWN 0.18

[Figure 1: hand drawn graph of the Dow Jones Industrials Average on 01/12/83, plotted from 10am to 4pm on a vertical scale from 1082 to 1102.]

(1) after climbing steadily through most of the morning, the stock market was pushed downhill late in the day. stock prices posted a small loss, with the indexes turning in a mixed showing yesterday in brisk trading. the Dow Jones average of 30 industrials surrendered a 16.28 gain at 4pm and declined slightly, finishing the day at 1083.61, off 0.18 points.

(2) wall street's securities markets rose steadily through most of the morning, before sliding downhill late in the day. the stock market posted a small loss yesterday, with the indexes finishing with mixed results in active trading. the Dow Jones average of 30 industrials surrendered a 16.28 gain at 4pm and declined slightly, to finish at 1083.61, off 0.18 points.

III. SYSTEM OVERVIEW

In order to generate accurate and fluent summaries, a knowledge-based report generator performs two main tasks: first, it infers semantic messages from the data in the database; second, it maps those messages into phrases in its phrasal lexicon, stitching them together according to the rules of its clause-combining grammar, and incorporating rhetoric constraints in the process. As the work of McKeown [1] and Mann and Moore [2] demonstrates, neither the problem of deciding what to say nor the problem of determining how to say it is trivial, and as Appelt [3] has pointed out, the distinction between them is not always clear.

A. System Architecture

A knowledge-based report generator consists of the following four independent, sequential components: 1) a fact generator, 2) a message generator, 3) a discourse organizer, and 4) a text generator. Data from the database serves as input to the first module, which produces a stream of facts as output; facts serve as input to the second module, which produces a set of messages as output; messages form the input to the third module, which organizes them and produces a set of ordered messages as output; ordered messages form the input to the fourth module, which produces final text as output. The modules function independently and sequentially for the sake of computational manageability at the expense of psychological validity.

With the exception of the first module, which is a straightforward C program, the entire system is coded in the OPS5 production system language [4]. At the time that the sample output above was generated, module 2, the message generator, consisted of 120 production rules; module 3, the discourse organizer, contained 16 production rules; and module 4, the text generator, included 109 production rules and a phrasal dictionary of 519 entries. Real time processing requirements for each module on a lightly loaded VAX 11/780 processor were the following: phase 1 - 16 seconds, phase 2 - 34 seconds, phase 3 - 24 seconds, phase 4 - 1 minute, 59 seconds.
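Schematically, the four modules compose into a simple pipeline in which each stage consumes the previous stage's output. The following Python fragment is only an illustration of that data flow (the real modules are a C program and three OPS5 rule sets, and the stage function names here are hypothetical):

    def make_pipeline(stages):
        # Chain the modules: facts -> messages -> ordered messages -> text.
        def run(data):
            for stage in stages:
                data = stage(data)
            return data
        return run

    # report = make_pipeline([fact_generator, message_generator,
    #                         discourse_organizer, text_generator])(quotes)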
B. Knowledge Constructs

The fundamental knowledge constructs of the system are of two types: 1) static knowledge structures, or memory elements, which can be thought of as n-dimensional propositions, and 2) dynamic knowledge structures, or production rules, which perform pattern-recognition operations on n-dimensional propositions. Static knowledge structures come in five flavors: facts, messages, lexicon entries, medial text elements, and various control elements. Dynamic knowledge constructs occur in ten varieties: inference productions, ordering productions, discourse mechanics productions, phrase selection productions, syntax selection productions, anaphora selection productions, verb morphology productions, punctuation selection productions, writing productions, and various control productions.

C. Functions

The function of the first module is to perform the arithmetic computation required to produce facts that contain the relevant information needed to infer interesting messages, and to write those facts in the OPS5 memory element format. For example, the fact that indicates the closing status of the Dow Jones Average of 30 Industrials for January 12, 1983 is:

    (make fact ^fname CLSTAT ^iname DJI ^itype COMPOS
        ^date 01/12 ^hour CLOSE ^open-level 1084.25
        ^high-level 1105.13 ^low-level 1075.88
        ^close-level 1083.61 ^cumul-dir DN ^cumul-deg 0.18)

The function of the second module is to infer interesting messages from the facts using inferencing productions such as the following:

    (p instan-mixedup
        (goal ^stat act ^op instanmixed)
        (fact ^fname CLSTAT ^iname DJI ^cumul-dir UP
              ^repdate <date>)
        (fact ^fname ADVDEC ^iname NYSE ^advances <x>
              ^declines {<y> > <x>})
        -->
        (make message ^top GENMKT ^subtop MIX ^mix mixed
              ^repdate <date> ^subjclass MKT ^tim close)
        (make goal ^stat pend ^op writemessage)
        (remove 1))

This production infers that if the closing status of the Dow had a direction of "up", and yet the number of declines exceeded the number of advances for the day, then it can be said that the market was mixed. The message that is produced looks like this:

    (make message ^repdate 01/12 ^top GENMKT ^subsubtop nil
        ^subtop MIX ^subjclass MKT ^dir nil ^deg nil
        ^vardeg |nil| ^varlev |nil| ^mix mixed ^chg nil
        ^sco nil ^tim close ^vartim |nil| ^dur nil
        ^vol nil ^who nil)

The inferencing process in phase 2 is hierarchically controlled.

Module 3 performs the uncomplicated task of grouping messages into paragraphs, ordering messages within paragraphs, and assigning a priority number to each message. Priorities are assigned as a function of topic and subtopic. The system "knows" a default ordering sequence, and it "knows" some exception rules which assign higher priorities to messages of special significance, such as indicators hitting record highs. As in module 2, processing is hierarchically controlled. Eventually, modules 2 and 3 should be combined so that their knowledge could be shared.

The most complicated processing is performed by module 4. This processing is not hierarchically controlled, but instead more closely resembles control in an ATN.
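For readers unfamiliar with OPS5, the behavior of an inferencing production like instan-mixedup can be suggested by a rough Python rendering, with facts as dictionaries. This is a sketch only; it elides the goal elements and the rule engine that OPS5 provides:

    def instan_mixedup(facts):
        # If the Dow closed up but declines outnumbered advances,
        # infer a 'mixed market' message.
        for f in facts:
            if (f.get('fname') == 'CLSTAT' and f.get('iname') == 'DJI'
                    and f.get('cumul-dir') == 'UP'):
                for g in facts:
                    if (g.get('fname') == 'ADVDEC'
                            and g.get('iname') == 'NYSE'
                            and g.get('declines', 0) > g.get('advances', 0)):
                        return {'top': 'GENMKT', 'subtop': 'MIX',
                                'mix': 'mixed', 'repdate': f.get('repdate'),
                                'subjclass': 'MKT', 'tim': 'close'}
        return None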
Module 4, the text generator, coordinates and executes the following activities: 1) selection of phrases from the phrasal lexicon that both capture the semantic meaning of the message and satisfy rhetorical constraints; 2) selection of appropriate syntactic forms for predicate phrases, such as sentence, participial clause, prepositional phrase, etc.; 3) selection of appropriate anaphora for subject phrases; 4) morphological processing of verbs; 5) interjection of appropriate punctuation; and 6) control of discourse mechanics, such as inclusion of more than one clause per sentence and more than one sentence per paragraph.

The module 4 processor is able to coordinate and execute these activities because it incorporates and integrates the semantic, syntactic, and rhetoric knowledge it needs into its static and dynamic knowledge structures. For example, a phrasal lexicon entry that might match the "mixed market" message is the following:

    (make phraselex ^top GENMKT ^subtop MIX ^mix mixed
        ^chg nil ^tim close ^subjtype NAME ^subjclass MKT
        ^predfs turned ^predfpl turned ^predpart turning
        ^predinf |to turn| ^predrem |in a mixed showing|
        ^len 9 ^rand 5 ^imp 11)

An example of a syntax selection production that would select the syntactic form subordinate-participial-clause as an appropriate form for a phrase (as in "after rising steadily through most of the morning") is the following:

    (p _5.selectsuborpartpre-selectsyntax
        (goal ^stat act ^op selectsyntax)                       ; 1
        (sentreq ^sentstat nil)                                 ; 2
        (message ^foc in ^top <t> ^tim <> nil ^subjclass <sc>)  ; 3
        (message ^foc nil ^top <t> ^tim <> nil ^subjclass <sc>) ; 4
        (paramsynforms ^suborpartpre <set>)                     ; 5
        (randnum ^randval < <set>)                              ; 6
        (lastsynform ^form << initsent prepp >>)                ; 7
        - (openingsynform ^form << suborsent suborpart >>)      ; 8
        - (message ^foc in ^tim close)                          ; 9
        -->
        (remove 1)
        (make synform ^form suborpart)
        (modify 4 ^foc peek)
        (make goal ^stat act ^op selectsubor))

D. Context-Dependent Grammar

Syntax selection productions, such as the example above, comprise a context-dependent, right-branching, clause-combining grammar. Because of the attribute-value, pattern-recognition nature of these grammar rules and their use of the lexicon, they may be viewed as a high-level variant of a lexical functional grammar [5]. The efficacy of a low-level functional grammar for text generation has been demonstrated in McKeown's TEXT system [6].

For each message, in sequence, the system first selects a predicate phrase that matches the semantic content of the message, and next selects a syntactic form, such as sentence or prepositional phrase, into which the predicate phrase may be hammered. The system's default goal is to form complex sentences by combining a variable number of messages expressed in a variety of syntactic forms in each sentence. Every message may be expressed in the syntactic form of a simple sentence. But under certain grammatical and rhetorical conditions, which are specified in the syntax selection productions, and which sometimes include looking ahead at the next sequential message, the system opts for a different syntactic form.

The right-branching behavior of the system implies that at any point the system has the option to lay down a period and start a new sentence. It also implies that embedded subject-complement forms, such as relative clauses modifying subjects, are trickier to implement (and have not been implemented as yet). That embedded subject complements pose special difficulties should not be considered discouraging.
Developmental linguistics research reveals that "operations on sentence subjects, including subject complementation and relative clauses modifying subjects" are among the last to appear in the acquisition of complex sentences [7], and a knowledge-based report generator incorporates the basic mechanism for eventually matching messages to nominalizations of predicate phrases to create subject complements, as well as the mechanism for embedding relative clauses.

IV. THE DOMAIN-SPECIFIC KNOWLEDGE REQUIREMENT TENET

How does one determine what knowledge must be incorporated into a knowledge-based report generator? Because the goal of a knowledge-based report generator is to produce reports that are indistinguishable from reports written by people for the same database, it is logical to turn to samples of naturally generated text from the specific domain of discourse in order to gain insights into the semantic, linguistic, and rhetoric knowledge requirements of the report generator.

Research in machine translation [8] and text understanding [9] has demonstrated that not only does naturally generated text disclose the lexicon and grammar of a sublanguage, but it also reveals the essential semantic classes and attributes of a domain of discourse, as well as the relations between those classes and attributes. Thus, samples of actual text may be used to build the phrasal dictionary for a report generator and to define the syntactic categories that a generator must have knowledge of. Similarly, the semantic classes, attributes and relations revealed in the text define the scope and variety of the semantic knowledge the system must incorporate in order to infer relevant and interesting messages from the database.

Ana's phrasal lexicon consists of subjects, such as "wall street's securities markets", and predicates, such as "were swept into a broad and steep decline", which are extracted from the text of naturally generated stock reports. The syntactic categories Ana knows about are the clausal level categories that are found in the same text, such as sentence, coordinate-sentence, subordinate-sentence, subordinate-participial-clause, prepositional-phrase, and others.

Semantic analysis of a sample of natural text stock reports discloses that a hierarchy of approximately forty message classes accounts for nearly all of the semantic information contained in the "core market sentences" of stock reports. The term "core market sentences" was introduced by Kittredge to refer to those sentences which can be inferred from the data in the database without reference to external events such as wars, strikes, and corporate or government policy making [10]. Thus, for example, Ana could say "Eastman Kodak advanced 2 3/4 to 85 3/4" but it could not append "it announced development of the world's fastest color film for delivery in 1983". Ana currently has knowledge of only six message classes. These include the closing market status message, the volume of trading message, the mixed market message, the interesting market fluctuations message, the closing Dow status message, and the interesting Dow fluctuations message.

V. THE PRODUCTION SYSTEM KNOWLEDGE REPRESENTATION TENET

The use of production systems for natural language processing was suggested as early as 1972 by Heidorn [11], whose production language NLP is currently being used for syntactic processing research. A production system for language understanding has been implemented in OPS5 by Frederking [12].
Many benefits are derived from using a production system to represent the knowledge required for text generation. Two of the more important advantages are the ability to integrate semantic, syntactic, and rhetoric knowledge, and the ability to extend and tailor the system easily.

A. Knowledge Integration

Knowledge integration is evident in the production rule displayed earlier for selecting the syntactic form of subordinate participial clause. In English, that production said:

IF
1) there is an active goal to select a syntactic form
2) the sentence requirement has not been satisfied
3) the message currently in focus has topic <t>, subject class <sc>, and some non-nil time
4) the next sequential message has the same topic, subject class, and some non-nil time
5) the subordinate-participial-clause parameter is set at value <set>
6) the current random number is less than <set>
7) the last syntactic form used was either a prepositional phrase or a sentence initializer
8) the opening syntactic form of the last sentence was not a subordinate sentence or a subordinate participial clause
9) the time attribute of the message in focus does not have value 'close'
THEN
1) remove the goal of selecting a syntactic form
2) make the current syntactic form a subordinate participial clause
3) modify the next sequential message to put it in peripheral focus
4) set a goal to select a subordinating conjunction.

It should be apparent from the explanation that the rule integrates semantic knowledge, such as message topic and time, syntactic knowledge, such as whether the sentence requirement has been satisfied, and rhetoric knowledge, such as the preference to avoid using subordinate clauses as the opening form of two consecutive sentences.

B. Knowledge Tailoring and Extending

Conditions 5 and 6, the syntactic form parameter and the random number, are examples of control elements that are used for syntactic tailoring. A syntactic form parameter may be preset at any value between 1 and 11 by the system user. A value of 8, for example, would result in an 80 percent chance that the rule in which the parameter occurs would be satisfied if all its other conditions were satisfied. Consequently, on 20 percent of the occasions when the rule would have been otherwise satisfied, the syntactic form parameter would prevent the rule from firing, and the system would be forced to opt for a choice of some other syntactic form. Thus, if the user prefers reports that are low on subordinate participial clauses, the subordinate participial clause parameter might be set at 3 or lower. The following production contains the bank of parameters as they were set to generate text sample (2) above:

    (p _1.setparams
        (goal ^stat act ^op setparams)
        -->
        (remove 1)
        (make paramsyllables ^val 30)
        (make parammessages ^val 3)
        (make paramsynforms ^sentence 11 ^coorsent 11
              ^suborsent 11 ^prepphrase 11 ^suborsentpre 5
              ^suborpartpre 8 ^suborsentpost 8
              ^suborpartpost 11 ^suborpartsentpost 11))

When sample text (1) was generated, all syntactic form parameters were set at 11. The first two parameters in the bank are rhetoric parameters. They control the maximum length of sentences in syllables (roughly) and in number of messages per sentence.

Not only does production system knowledge representation allow syntactic tailoring, but it also permits semantic tailoring. Ana could be tailored to focus on particular stocks or groups of stocks to meet the information needs of individual users.
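The effect of conditions 5 and 6 is simply a random threshold test. The following Python sketch shows one way such a test behaves, under the assumption (ours, for illustration) that the random number is drawn uniformly from 0 through 9, so that a parameter of 8 passes about 80 percent of the time and a parameter of 11 always passes:

    import random

    def parameter_allows(param_value):
        # Stand-in for conditions 5 and 6: (paramsynforms ... <set>)
        # together with (randnum ^randval < <set>).
        return random.randrange(10) < param_value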
Furthermore, a production system is readily extensible. Currently, Ana has only a small amount of general knowledge about the stock market and is far from a stock market expert. But any knowledge that can be made explicit can be added to the system; prolonged incremental growth in the knowledge of the system could someday result in a system that truly is a stock market expert.

VI. THE MACRO-LEVEL KNOWLEDGE CONSTRUCTS TENET

The problem of dealing with the complexity of natural language is made much more tractable by working in macro-level knowledge constructs, such as semantic units consisting of whole messages, lexical items consisting of whole phrases, syntactic categories at the clause level, and a clause-combining grammar. Macro-level processing buys linguistic fluency at the cost of semantic and linguistic flexibility. However, the loss of flexibility appears to be not much greater than the constraints imposed by the grammar and semantics of the sublanguage of the domain of discourse.

Furthermore, there may be more to the notion of macro-level semantic and linguistic processing than mere computational manageability. The notion of a phrasal lexicon was suggested by Becker [13], who proposed that people generate utterances "mostly by stitching together swatches of text that they have heard before". Wilensky and Arens have experimented with a phrasal lexicon in a language understanding system [14]. I believe that natural language behavior will eventually be understood in terms of a theory of stratified natural language processing in which macro-level knowledge constructs, such as those used in a knowledge-based report generator, occur at one of the higher cognitive strata.

A poor but useful analogy to mechanical gear-shifting while driving a car can be drawn. Just as driving in third gear makes most efficient use of an automobile's resources, so also does generating language in third gear make most efficient use of human information processing resources. That is, matching whole phrases and applying a clause-combining grammar is cognitively economical. But when only a near match for a message can be found in a speaker's phrasal dictionary, the speaker must downshift into second gear, and either perform some additional processing on the phrase to transform it into the desired form to match the message, or perform some processing on the message to transform it into one that matches the phrase. And if not even a near match for a message can be found, the speaker must downshift into first gear and either construct a phrase from elementary lexical items, including words, prefixes, and suffixes, or reconstruct the message.

As currently configured, a knowledge-based text generator operates only in third gear. Because the units of processing are linguistically mature whole phrases, the report generation system can produce fluent text without having the detailed knowledge needed to construct mature phrases from their elementary components. But there is nothing except the time and insight of a system implementor to prevent this detailed knowledge from being added to the system. By experimenting with additional knowledge, a system could gradually be extended to shift into lower gears, to exhibit greater interaction between semantic and linguistic components, and to do more flexible, if not creative, generation of semantic messages and linguistic phrases.
A knowledge-based report generator may be viewed as a starting tool for modeling a stratiform theory of natural language processing.

VII. CONCLUSION

Knowledge-based report generation is practical because it tackles a moderately ill-defined problem with an effective technique, namely, a macro-level, knowledge-based, production system technique. Stock market reports are typical instances of a whole class of summary-type periodic reports for which the scope and variety of semantic and linguistic complexity is great enough to negate a straightforward algorithmic solution, but constrained enough to allow a high-level cross-wise slice of the variety of knowledge to be effectively incorporated into a production system. Even so, it will be some time before the technique is cost effective. The time required to add knowledge to a system is greater than the time required to add productions to a traditional expert system. Most of the time is spent doing semantic analysis for the purpose of creating useful semantic classes and attributes, and identifying the relations between them. Coding itself goes quickly, but then the system must be tested and calibrated (if the guesses on the semantics were close) or redone entirely (if the guesses were not close). Still, the initial success of the technique suggests its value both as a basic research tool, for exploring increasingly more detailed semantic and linguistic processes, and as an applied research tool, for designing extensible and tailorable automatic report generators.

ACKNOWLEDGEMENT

I wish to express my deep appreciation to Michael Lesk for his unfailing guidance and support in the development of this project.

REFERENCES

1. Kathleen R. McKeown, "The TEXT System for Natural Language Generation: An Overview," Proceedings of the Twentieth Annual Meeting of the Association for Computational Linguistics, Toronto, Canada (1982).

2. James A. Moore and William C. Mann, "A Snapshot of KDS: A Knowledge Delivery System," in Proceedings of the 17th Annual Meeting of the Association for Computational Linguistics, La Jolla, California (11-12 August 1979).

3. Douglas E. Appelt, "Problem Solving Applied to Language Generation," pp. 59-63 in Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, University of Pennsylvania, Philadelphia, PA (June 19-22, 1980).

4. C. L. Forgy, "OPS-5 User's Manual," CMU-CS-81-135, Dept. of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213 (July 1981).

5. Joan Bresnan and Ronald M. Kaplan, "Lexical-Functional Grammar: A Formal System for Grammatical Representation," Occasional Paper #13, MIT Center for Cognitive Science (1982).

6. Kathleen Rose McKeown, "Generating Natural Language Text in Response to Questions about Database Structure," Doctoral Dissertation, University of Pennsylvania Computer and Information Science Department (1982).

7. Melissa Bowerman, "The Acquisition of Complex Sentences," pp. 285-305 in Language Acquisition, ed. Michael Garman, Cambridge University Press, Cambridge (1979).

8. Richard Kittredge and John Lehrberger, Sublanguages: Studies of Language in Restricted Semantic Domains, Walter DeGruyter, New York (in press).

9. Naomi Sager, "Information Structures in Texts of a Sublanguage," in The Information Community: Alliance for Progress - Proceedings of the 44th ASIS Annual Meeting, Volume 18, Knowlton Industry Publications for the American Society for Information Science, White Plains, N.Y. (October 1981).

10. Richard I. Kittredge, "Semantic Processing of Texts in Restricted Sublanguages," Computers and Mathematics with Applications, Vol. 8, Pergamon Press (1982).
Kittredge, "Semantic Processing of Texts in Restricted Sublanguages," Computers and Mathematics with Applications 8(0), Pergamon Press (1982). 11. George E. Heidorn, "Natural Language Inputs to a Simulation Programming System,'" NPS- 55HD72101A, Naval Postgraduate School, Mon- terey, CA (October 1972). 12. Robert E. Frederking, A Production System Approach to Language Understanding, To appear (1983). 13. Joseph Becket, "The Phrasal Lexicon," pp. 70-73 in Theoretical Issues in Natural Language Process- ing, ed. B. I. Nash-Webber, Cambridge, Mas- sachusetts (10-13 June 1975). 14. Robert Wilensky and Yigel Arens, "'PHRAN -- A Knowledge-Based Natural Language Under- stander," pp. 117-121 in Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, University of Pennsylvania. Philadel- phia, Pennsylvania (June 19-22, 1980). 1-50 | 1983 | 22 |
MENU-BASED NATURAL LANGUAGE UNDERSTANDING

Harry R. Tennant, Kenneth M. Ross, Richard M. Saenz, Craig W. Thompson, and James R. Miller
Computer Science Laboratory
Central Research Laboratories
Texas Instruments Incorporated
Dallas, Texas

ABSTRACT

This paper describes the NLMenu System, a menu-based natural language understanding system. Rather than requiring the user to type his input to the system, input to NLMenu is made by selecting items from a set of dynamically changing menus. Active menus and items are determined by a predictive left-corner parser that accesses a semantic grammar and lexicon. The advantage of this approach is that all inputs to the NLMenu System can be understood, thus giving a 0% failure rate. A companion system that can automatically generate interfaces to relational databases is also discussed.

I INTRODUCTION

Much research into the building of natural language interfaces has been going on for the past 15 years. The primary direction that this research has taken is to improve and extend the capabilities and coverage of natural language interfaces. Thus, work has focused on constructing and using new formalisms (both syntactically and semantically based) and on improving the grammars and/or semantics necessary for characterizing the range of sentences to be handled by the system. The ultimate goal of this work is to give natural language interfaces the ability to understand larger and larger classes of input sentences.

Tennant (1980) is one of the few attempts to consider the problem of evaluating natural language interfaces. The results reported by Tennant concerning his evaluation of the PLANES system are discouraging. These results show that a major problem with PLANES was the negative expectations created by the system's inability to understand input sentences. The inability of PLANES to handle sentences that were input caused the users to infer that many other sentences would not be correctly handled. These inferences about PLANES' capabilities resulted in much user frustration because of their very limited assumptions about what PLANES could understand. It rendered them unable to successfully solve many of the problems they were assigned as part of the evaluation of PLANES, even though these problems had been specifically designed to correspond to some relatively straightforward queries that PLANES could understand. Additionally, users did not successfully adapt to the system's limitations after some amount of use.

One class of problem that caused negative and false user expectations was the user's inability to distinguish between the limitations in the system's conceptual coverage and the system's linguistic coverage. Often, users would attempt to paraphrase a sentence many times when the reason for the system's lack of understanding was due to the fact that the system did not have data about the query being asked (i.e. the question exceeded the conceptual coverage of the system). Conversely, users' queries would often fail because they were phrased in a way that the system could not handle (i.e. the question exceeded the linguistic coverage of the system).

The problem pointed out by Tennant seems to be a general problem that must be faced by any natural language interface. If the system is unable to understand user inputs, then the user will infer that many other sentences cannot be understood. Often, these expectations serve to severely limit the classes of sentences that users input, thus making the natural language interface virtually unusable for them. If natural language interfaces are to be made usable for novice users, with little or no knowledge of the domain of the system to which they are interfacing, then negative and false expectations about system capabilities and performance must be prevented.

The most obvious way to prevent users of a natural language interface from having negative expectations is to expand the coverage of that interface to the point where practically all inputs are understood. By doing this, most sentences that are input will be understood and few negative expectations will be created for the user. Then users will have enough confidence in the natural language interface to attempt to input a wide range of sentences, most of which will be understood. However, natural language interfaces with the ability to understand virtually all input sentences are far beyond current technology. Thus, users will continue to have many negative expectations about system coverage. A possible solution to this problem is the use of a set of training sessions to teach the user the syntax of the system. However, there are several problems with this. First, it does not allow
Often, these expectations serve to severely limit the classes of sentences that users input, thus making the natural language interface virtually unusable for them. If natural language interfaces are to be made usable for novice users, with little or no knowledge of the domain of the system to which they are interfacing, then negative and false expectations about system capabilities and per- formance must be prevented. The most obvious way to prevent users of a natural language interface from having negative expectations is expand the coverage of that inter- face to the point where practically all inputs are understood. By doing this, most sentences that are input will be understood and few negative expectations will be created for the user. Then users will have enough confidence in the natural language interface to attempt to input a wide range of sentences, most of which will be understood. However, natural language interfaces with the ability to understand virtually all input sentences are far beyond current technology. Thus, users ~vill continue to have many negative expectations about system coverage. A possible solution to this problem is the use of a set of training sessions to teach the user the syntax of the system. However, there are several problems with this. First, it does not allow 151 untrained novices to use such a system. Second, it assumes that infrequent users will take with them and remember what they learned about the coverage of the system. Both of these are unreasonable restrictions. II A DESCRIPTION OF THE NLMENU SYSTEM In this paper, we will employ a technique that applies current technology (current grammar formal- isms, parsing techniques, etc.) to make natural language interface systems meet the criteria of usability by novice users. To do this, user expectations must closely match system performance. Thus, the interface system must somehow make it clear to the user what the coverage of the system is. Rather than requiring the user to type his input to the natural language understanding system, the user is presented with a set of menus on the upper half of a high resolution bit map display. He can choose the words and phrases that make up his query with a mouse. As the user chooses items, they are inserted into a window on the lower half of the screen so that he can see the sentence he is constructing. As a sentence is constructed, the active menus and items in them change to reflect only. the legal choices, given the portion of the sentence that has already been input. At any point in the construction of a natural language sentence, only those words or phrases that could legally come next will be displayed for the user to select. Sentences which cannot be processed by the natural language system can never be input to the system, giving a 0% failure rate. In this way, the scope and limitations of the system are made immediately clear to the user and only understand- able sentences can be input. Thus, all queries fall within the linguistic and conceptual coverage of the system. A. The Grammar Formalism The grammars used in the NLMenu System are context-free semantic grammars written with phrase structure rules. These rules may contain the standard abbreviatory conventions used by lin- guists for writing phrase structure rules. Curly brackets ({}, sometimes called braces) are used to indicate optional elements in a rule. Addition- ally, square brackets ([]) are used as well. They have two uses. First, in conjunction with curly brackets. 
Since it is difficult to allow rules to be written in two dimensions as linguists do, where alternatives in curly brackets are written one below the other, we require that each alter- native be put in square brackets. Thus, the rule below in (i) would be written as shown in (2). (2) A --> B {[C X] [E Y]} D Note that for single alternatives, the square brackets can be deleted without loss of informa- tion. We permit this and therefore {A B} is equivalent to {[A][B]}. The second use of square brackets is inside of parentheses. An example of this appears in rule (3) below. (3) Q --> R ([M N] V) This rule is an abbreviation for two rules, Q --> R M N and Q --> R V. Any arbitrary context-free grammar is per- mitted except for those grammars containing two classes of rules. These are rules of the form X --> null and rules that generate cycles, for example, A --> B, B --> C, C --> D and D --> A. The elimination of the second class of rules causes no difficulty and does not impair a grammar writer in any way. If the second class of rules were permitted, an infinite number of parses would result for sentences of grarm~ars using them. The elimination of the first class of rules causes a small inconvenience in that it prevents grammar writers from using the existence of null nodes in parse trees to account for certainunbounded dependencies like those found in questions like "Who do you think I saw?" which are said in some linguistic theories to contain a null noun phrase after the word "saw". However, alternative grammatical treatments, not requiring a null noun phrase, are also commonly used. Thus, the prohibition of such rules requires that these alternative grammatical treatments be used. In addition to synactic information indicating the allowable sentences, the grammar formalism also contains semantic information that determines what the meaning of each input sentence is. This is done by using lambda calculus. The mechanism is similar to the one used in Montague Grammar and the various theories that build on Montague's work. Associated with every word in the lexicon, there is a translation. This translation is a portion of the meaning of a sentence in which the word appears. In order to properly combine the translations of the words in a sentence together, there is a rule associated with each context-free rule indicating the order in which the transla- tions of the symbols on the right side of the arrow of a context-free rule are to be combined. These rules are parenthesized lists of numbers where the number i refers to the first item after the arrow, the number 2 to the second, etc. For example, for the rule X --> A B C 0, a possible rule indicating how to combine trans- lations might be (3 (I 2 4)). This rule means that the translation of A is taken as a function and applied to the translation of B as its argument. This resulting new translation is then taken as a function and applied to the transla- tion of 4 as its argument. This resulting trans- lation is then the argument to the translation of 3 which is the function. In general, the transla- tion of leftmost number applies to the translation of the number to its right as the argument. The result of this then is a function which applies to the translation of the item to its right as the 152 argument. However, parentheses can override this as in the example above. For rules containing abbreviatory conventions, one translation rule must be written for every possible expansion of the rule. 
Translations that are functions are of the form "(lambda x (... x ...)). When this is applied to an item like "c" as the argument, "c" is plugged in for every occurrence of x after the "lambda x" that is not within the scope of a more deeply embedded "lambda x". This is called lambda conversion and the result is just the expression with the "lambda x" stripped off of the front and the substitution made. B. The Parser The parser used in the NLMenu system is an implementation of an enhanced version of the modi- fied left-corner algorithm described in Ross (1982). Ross (1982) is a continuation of the work described in Ross (1981) and builds on that work and on the work of Griffiths and Petrick (1965). The enhancements enable the parser to parse a word at a time and to predict the set of next possible words in a sentence, given the input that has come before. Griffiths and Petrick (1965) propose several algorithms for recognizing sentences of context- free grammars in the general case. One of these algorithms, the NBT (Non-selective Bottom to Top) Algorithm, has since been called the "left-corner" algorithm. Of late, interest has been rekindled in left-corner parsers. Slocum (1981) shows that a left-corner parser inspired by Griffiths and Petrick's algorithm performs quite well when compared with parsers based on a Cocke-Kasami- Younger algorithm (see Younger 1967). Although algorithms to recognize or parse context-free grammars can be stated in terms of push-down store automata, G+P state their algorithm in terms of Turing machines to make its operation clearer. A somewhat modified version of their algorithm will be given in the next section. These modifications transform the recognition algorithm into a parsing algorithm. The G+P algorithm employs two push down stacks. The modified algorithm to be given below will use three, called alpha, beta and gamma. Turing machine instructions are of the following form, where A, B, C, D, E and F can be arbitrary strings of symbols from the terminal and non- terminal alphabet. [A,B,C] ---> [D,E,F] if "Conditions" This is to be interpreted as follows- If A is on top of stack alpha, B is on top of stack beta, C is on top of stack gamma, and "Conditions" are satisfied then replace A by D, B by E, and C by F. The modified algorithm follows- (1 [VI,X,Y] ---> [B,V2 ... Vn t X,A Y] if A --- Vl V2 ... Vn is a rule of the phrase structure grammar X is in the set of nonterminals and Y is anything (2 [X,t,A] ---> [A X,~,~] if A is in the set of nonterminals (3 [B,B,Y] ---> [B,B,Y] if B is in the set of nonterminals or terminals To begln, put the terminal string to be parsed followed by END on stack alpha. Put the nonterminal which is to be the root node of the tree to be constructed followed by END on stack beta. Put END on stack gamma. The symbol t is neither a terminal nor a nonterminal. When END is on top of each stack, the string has been recog- nized. If none of the turing machine instructions apply and END is not on the top of each stack, the path which led to this situation was a bad path and does not yield a valid parse. The rules necessary to give a parse tree can be stated informally (i.e. not in terms of turing machine instructions) as follows: When (I) is applied, attach Vl beneath A. When (3) is applied, attach the B on alpha B as the right daughter of the top symbol on gamma. Note that there is a formal statement of the parsing version of NBT in Griffiths (1965). However, it is somewhat more complicated and obscures what is going on during the parse. 
Therefore, the informal procedure given above will be used instead. The SBT (Selective Bottom to Top) algorithm is a selective version of the NBT algorithm and is also given in G+P. The only difference between the two is that the SBT algorithm employs a selec- tive technique for increasing the efficiency of the algorithm. In the terminology of G+P, a selective technique is one that eliminates bad parse paths before trying them. The selective technique employed is the use of a reachability matrix. A reachability matrix indicates whether each non-terminal node in the grammar can dominate each terminal or non-terminal in the grammar in a tree where that terminal or non-terminal is on the left-most branch. To use it, an additional con- dition is put on rule (i) requiring that X can reach down to A. Ross (1981) modifies the SBT Algorithm to directly handle grammar rules utilizing several abbreviatory conventions that are often used when writing grammars. Thus, parentheses (indicating optional nodes) and curly brackets (indicating that the items within are alternatives) can appear 153 in rules that the parser accesses when parsing a string. These modifications will not be discussed in this paper but the parser employed in the NLMenu System incorporates them because efficiency is increased, as discussed in Ross (1981). At this point, the statement of the algorithm is completely neutral with respect to control structure. At the beginning of a parse, there is only one 3-tuple. However, because the algorithm is non-deterministic, there are potentially points during a parse at which more than one turing machine instruction can apply. Each of the parse paths resulting from an application of a different turing machine instruction to the same parser state sends the parser off on a possible parse path. Each of these possible paths could result in a valid parse and all must be followed to completion. In order to assure this, it is necessary to proceed in some principled way. One strategy is to push one state as far as it will go. That is, apply one of the rules that are applicable, get a new state, and then apply one of the applicable rules to that new state. This can continue until either no rules apply or a parse is found. If no rules apply, it was a bad parse path. If a parse is found, it is one of possibly many parses for the sentence. In either case, the algorithm must continue on and pursue all other alternative paths. One way to do this and assure that all alternatives are pursued is to backtrack to the last choice point, pick another applicable rule, and continue in the manner described earlier. By doing this until the parser has backed up throughall possible choice points, all parses of the sentence will be found. A parser that works in this manner is a depth- first backtracking parser. This is probably the most straightforward control structure for a left- corner parser. Alternative control structures are possible. Rather than pursuing one path as far as possible, one could go down one parse path, leave that path before it is finished and then start another. The first parse path could then be pursued later from the point at which it was stopped. It is neces- sary to use an alternative control structure to enable parsing to begin before the entire input string is available. To enable the parser to function as described above, the control structure for a depth-first parser described earlier is used. 
To introduce the ability to begin parsing given only a subset of the input string, the item MORE is inserted after the last input item that is given to the parser. If no other instructions apply and MORE is on top of stack alpha, the parser must begin to backtrack as described earlier. Additionally, the contents of stack beta and gamma must be saved. Once all backtracking is completed, additional input is put on alpha and parsing begins again with a set of states, each containing the new input string on alpha and one of the saved tuples containing beta and gamma. Each of these states is a distinct parse path. To parse a word at a time, the first word of the sentence followed by MORE is put on alpha. The parser will then go as far as it can, given this word, and a set of tuples containing beta and gamma will result. Then, each of these tuples along with the next word is passed to the parser. The ability to parse a word at a time is essential for the NLMenu System. However, it is also beneficial for more traditional natural language interfaces. It can increase the perceived speed of any parser since work can proceed as the user is typing and composing his input. Note that a rubout facility can be added by saving the beta- gamma tuples that result after parsing for each of the words. Such a facility is used by the NLMenu System. The ability to predict the set of possible nth words of a sentence, given the first n-1 words of the sentence is the final modification necessary to enable this parser to be used for menu-based natural language understanding. This feature can be added in a straightforward way. Given any beta-gamma pair representing one of the parse paths active after n-1 words of the sentence have been input, it is possible to determine the set of words that will allow that state to con- tinue. This is by examing the top-most symbol on stack beta of the tuple. It represents the most immediate goal of that parse state. To determine all the words that can come next, given that goal, the set of all nodes that are reachable from that node as a left daughter must be determined. This information is easily obtainable from the reach- ability matrix discussed earlier. Once the set of reachable nodes is determined, all that need be done is find the subset of these that can dominate lexical material. If this is done for all of the beta-gamma pairs that resulted after parsing the first n-1 words and the union of the sets that result is taken, the resulting set is a list of all of the lexical categories that could come next. The list of next words is easily determined from this. Ill APPLICATIONS OF THE NLMENU SYSTEMS Although a wide class of applications are appropriate for menu-based natural language interfaces, our effort thus far has concentrated on building interfaces to relational databases. This has had several important consequences. First, it has made it easy to compare our inter- faces to those that have been built by others because a prime application area for natural language interfaces has been to databases. Second, the process of producing an interface to any arbitrary set of relations has been automated. A. Comparison to Existin 9 Systems We have run a series of pilot studies to evaluate the performance of an NLMenu interface to 154 the parts-suppliers database described in Data (1977). These studies were similar to the ones described in Tennant (1980) that evaluated the PLANES system. Our results were more encouraging than Tennant's. 
They indicated that both experienced computer users and naive subjects can successfully use a menu-based natural language interface to a database to solve problems. All subjects were successfully able to solve all of their problems. Comments from subjects indicated that al- though the phrasing of a query might not have been exactly how the subject would have chosen to ask the question in an unconstrained, traditional system, the subjects were not bothered by this and could find the alternative phrasing without any difficulty. One factor that appeared to be important in this was the displaying of the entire set of menus at all times. In cases where it was not clear which item on an active menu would lead to the users desired query, users looked at the inactive menus for hints on how to proceed. Additionally, the existence of a rubout facility that enabled users to rubout phrases they had input as far back as desired encouraged them to explore the system to determine how a sentence might be phrased. There was no penalty for choos- ing an item which did not allow a user to continue his question in the way he desired. All that the user had to do was rub it out and pick again. B. Automatically Buildin~ NLMenu Interfaces To Relational Databases The system outlined in this section is a com- panion system to NLMenu. It allows NLMenu inter- faces to an arbitrary set of relations to be constructed in a quick and concise way. Other researchers have examined the problem of construc- ting portable natural language interfaces. These include Kaplan (1979), Harris (1979), Hendrix and Lewis (1981), and Grosz et. al. (1982). While the work described here shares similarities, it differs in several ways. Our interface specifi- cation dialogue is simple, short, and is supported by the database data dictionary. It is intended for the informed user, not necessarily a database designer and certainly Dot a grammar expert. Information is obtained from this informed user through a menu-based natural language dialogue. Thus, the interface that builds interfaces is extremely easy to use. i. Implementation The system for automatically generating NLMenu interfaces to relational databases is divided into two basic components. One component, BUILD-INTERFACE, produces a domain specific data structure called a "portable spec" by engaging the user in an NLMenu dialog. The other component, MAKE-PORTABLE-INTERFACE, generates a semantic grammar and lexicon from the "portable spec". The MAKEZPORTABLE-INTERFACE component takes as input a "portable spec", uses it to instantiate a domain independent core grammar and lexicon, and returns a semantic grammar and a semantic lexicon pair, which defines an NLMENU interface. The core grammar and lexicon can be small (21 grammar rules and 40 lexical entries at present), but the size of the resulting semantic grammars and lexicons will depend on the portable spec. A portable-spec consists of a list of categories. The categories are as follows. The COVERED TABLES list specifies all relations or views that the interface will cover. The retrie- val, insertion, deletion and modification rela- tions specify ACCESS RIGHTS for the covered tables. Non-numeric attributes, CLASSIFY ATTRI- BUTES according to type. Computable attributes are numeric attributes that are averageable, summable, etc. A user may choose not to cover some attributes in interface. IDENTIFYING ATTRI- BUTES are attributes that can be used to identify the rows. 
The BUILD-INTERFACE component is itself a menu-based natural language interface, and thus really another application of the NLMenu system to an interface problem. It elicits from the user the information required to build up a portable spec. In addition to allowing the user to create an interface, it also allows the user to modify or combine existing interfaces. The user may also grant interfaces to other users, revoke them, or drop them. The database management system controls which users have access to which interfaces.

2. Advantages

The system for automatically constructing NLMenu interfaces enjoys several practical and theoretical advantages, outlined below.

End-users can construct natural language interfaces to their own data in minutes, not weeks or years, and without the aid of a grammar specialist. There is heavy dependence on a data dictionary but not on linguistic information.

The interface builder can control coverage. He can decide to make an interface that covers only a semantically related subset of his tables. He can choose to include some attributes and hide others so that they cannot be mentioned. He can choose to support various kinds of joins with natural language phrases. He can mirror the access rights of a user in his interface, so that the interface will allow him to insert, delete, and modify as well as retrieve, and only from those tables on which he has the specified privileges. Thus, interfaces are highly tunable, and the term "coverage" can be given a precise definition. Patchy coverage is avoided because of the uniform way in which the interface is constructed.

Automatically generated natural language interfaces are robust with respect to database changes; interfaces are easy to change if the user adds or deletes tables or changes table descriptions. One need only modify the portable spec to reflect the changes and regenerate the interface.

Automatically generated NLMenu interfaces are guaranteed to be correct (bug free). The interaction in which users specify the parameters defining an interface ensures that the parameters are valid, i.e., that they correspond to real tables, attributes, and domains. Instantiating a debugged core grammar with valid parameters yields a correct interface.

Natural language interfaces are constructed from semantically related tables that the user owns or has been granted, and they reflect his access privileges (retrieval, insertion, etc.). By extension, natural language interfaces become database objects in their own right. They are sharable (grantable and revokable) in a controlled way. A user can have several such NLMenu interfaces.
Each gives him a user-view of a semantically related set of data. This notion of a view is like the notion of a database schema found in network and hierarchical systems, but not in relational systems; in relational systems there is no convenient way of grouping together tables that are semantically related. Furthermore, an NLMenu interface can be treated as an object and granted to other users, so a user acting as a database administrator can make NLMenu interfaces for classes of users too naive to build them themselves (such as executives). Interfaces can also be combined by merging portable specs, so users can combine different, related user-views if they wish.

Since an interface covers exactly and only the data and operations that the user chooses, it can be considered a "model of the user" in that it provides a well-bounded language reflecting a semantically related view of the user's data and operations.

A final advantage is that even if an automatically generated interface is for some reason not quite what is needed for some application, it is much easier to generate an interface this way and then modify it to suit specific needs than it is to build the entire interface by hand. This has already been demonstrated in the prototype, where an automatically generated interface required for an application by another group at TI was manually altered to provide pictorial database capabilities.

Taken together, the advantages listed above pave the way for low-cost, maintainable interfaces to relational database systems. Many of these advantages are novel with respect to past work. This approach makes it possible for a much broader class of users and applications to use menu-based natural language interfaces to databases.

3. Features of NLMenu Interfaces to Databases

The NLMenu system does not store the words that correspond to open-class database attributes in the lexicon, as many other systems do. Instead, a meta-category called an "expert" is stored in the lexicon. Experts may be user-supplied or defaulted, and they are arbitrary chunks of code. Possible implementations include directly doing a database lookup and presenting the user with a list of items to choose from, or presenting the user with a type-in window which is constrained to allow only input of the desired type or format (for example, for a date). A sketch of such an expert appears below.

Many systems allow ellipsis to permit the user, in effect, to ask a parameterized query. We approach this problem by making all phrases that were generated by experts "mouse sensitive" in the sentence. To change the value of a data item, all that needs to be done is to move the mouse over the sentence. When a data item is encountered, it is boxed by the mouse cursor. To change it, one merely clicks the mouse. The expert which originally produced that data item is then called, allowing the user to change the item to something else.

The grammars produced by the automatic generation system permit ambiguity. However, the ambiguity occurs in a small set of well-defined situations involving relative clause attachment. Because of this, it has been possible to define a bracketed and indented format that clearly indicates the source of ambiguity to the user and allows him to choose between alternative readings. Additionally, by constraining the parser to obey several human parsing strategies, as described in Ross (1981), the user is shown a set of possible readings in which the most likely candidate comes first, and the user is told that the first bracketed structure is most probably the one he intended.
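As promised above, an expert can be modeled as a closure stored in the lexicon in place of a word list. The sketch below is purely our illustration of the idea, not TI's code:

    ;; A hypothetical expert for a date attribute: rather than storing
    ;; every possible date in the lexicon, the entry holds a procedure
    ;; that is invoked when the corresponding menu item is chosen.
    (defun make-date-expert ()
      (lambda ()
        (loop
          (format *query-io* "~&Enter a date (MM/DD/YY): ")
          (let ((line (read-line *query-io*)))
            ;; Accept only the desired format (crudely checked here).
            (when (and (= (length line) 8)
                       (char= (char line 2) #\/)
                       (char= (char line 5) #\/))
              (return line))))))

A database-lookup expert would have the same shape, with the body replaced by a query that enumerates the current column values for the user to choose from.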
IV. CONCLUSIONS

The menu approach to natural language input has many advantages over the traditional typing approach. Most importantly, every sentence that is input is understood; thus, a 100% success rate for input queries is achieved. Implementation time is greatly decreased because the grammars required can be much smaller. Generally, writing a thorough grammar for an application of a natural language understanding system consumes most of the development time. Note that the reason larger grammars are needed in traditional systems is that every possible paraphrase of a sentence must be understood. In a menu-based system, only one paraphrase is needed; the user will be guided to this paraphrase by the menus.

The fact that menu-based natural language understanding systems guide the user to the input he desires is also beneficial for two other reasons. First, confused users who do not know how to formulate their input need not compose it without help: they only need to recognize their input by looking at the menus, rather than formulate it in a vacuum. Second, the extent of the system's conceptual coverage will be apparent; the user will immediately know what the system knows about and what it does not.

Allowing only one paraphrase of each allowable query not only makes the grammar smaller; the lexicon is smaller as well. NLMenu lexicons must be smaller, because if they were the size of a lexicon standardly used for a natural language interface, the menus would be much too large and would therefore be unmanageable. Thus, it is possible that limitations will be imposed on the system by the size of the menus: menus necessarily cannot be too big, or the user will be swamped with choices and unable to find the right one. Several points must be made here. First, even though an inactive menu containing, say, a class of modifiers might have one hundred modifiers, it is likely that these will never all be active at the same time. Given a semantic grammar with five different classes of nouns, it will most likely be the case that only one fifth of the modifiers make sense as modifiers for any one of those nouns; thus, an active modifier menu will have roughly twenty items in it. We have constructed NLMenu interfaces to about ten databases, some reasonably large, and we have had no problem with the size of the menus becoming unmanageable.

The NLMenu System and the companion system for automatically building NLMenu interfaces described in this paper are both implemented in Lisp Machine Lisp on an LMI Lisp Machine. It has also proved feasible to put them on a microcomputer. Two factors made this possible: the word-by-word parse and the smaller grammars. Parsing a word at a time means that most of the work necessary to parse a sentence is done before the sentence has been completely input; thus, the perceived parse time is much less than it otherwise would be. Parse time is also made faster by the smaller grammars, because parse time is a function of grammar size: the smaller the grammar, the faster the parse. Smaller grammars can also be dealt with much more easily on a microcomputer with limited memory. Both systems have been implemented in C on the Texas Instruments Professional Computer. These implementations are based on the Lisp Machine implementations but were done by another division of TI.
These second implementations will be available as a software package that will interface either locally to RSI's Oracle relational DBMS, which uses SQL as its query language, or to various remote computers running DBMSs that use SQL as their query language.

V. REFERENCES

Date, C. J. An Introduction to Database Systems. New York: Addison-Wesley, 1977.

Griffiths, T. On procedures for constructing structural descriptions for three parsing algorithms. Communications of the ACM, 1965, 8, 594.

Griffiths, T. and Petrick, S. R. On the relative efficiencies of context-free grammar recognizers. Communications of the ACM, 1965, 8, 289-300.

Grosz, B., Appelt, D., Archbold, A., Moore, R., Hendrix, G., Hobbs, J., Martin, P., Robinson, J., Sagalowicz, D., and Warren, P. TEAM: A transportable natural language system. Technical Note 263, SRI International, Menlo Park, California, April 1982.

Harris, L. Experience with ROBOT in 12 commercial natural language database query applications. Proceedings of the Sixth IJCAI, 1979.

Hendrix, G. and Lewis, W. Transportable natural language interfaces to databases. Proceedings of the 19th Annual Meeting of the ACL, 1981.

Kaplan, S. J. Cooperative responses from a portable natural language query system. Ph.D. dissertation, University of Pennsylvania, Computer Science Department, 1979.

Konolige, K. A framework for a portable NL interface to large databases. Technical Note 197, SRI International, Menlo Park, CA, October 1979.

Ross, K. Parsing English phrase structure. Ph.D. dissertation, Department of Linguistics, University of Massachusetts, 1981.

Ross, K. An improved left-corner parsing algorithm. Proceedings of COLING 82, 1982, 333-338.

Slocum, J. A practical comparison of parsing strategies. Proceedings of the 19th Annual Meeting of the ACL, 1981, 1-6.

Tennant, H. R. Evaluation of natural language processors. Ph.D. dissertation, Department of Computer Science, University of Illinois, 1980.

Thompson, C. W. SURLY: A single user relational DBMS. Technical Report, Computer Science Department, University of Tennessee, Knoxville, 1979.

Ullman, J. Principles of Database Systems. Computer Science Press, 1980.

Younger, D. Recognition and parsing of context-free languages in time n³. Information and Control, 1967, 10, 189-208.
Knowledge Structures in UC, the UNIX* Consultant†

David N. Chin
Division of Computer Science
Department of EECS
University of California, Berkeley
Berkeley, CA 94720

ABSTRACT

The knowledge structures implemented in UC, the UNIX Consultant, are sufficient for UC to reply to a large range of user queries in the domain of the UNIX operating system. This paper describes how these knowledge structures are used in the natural language tasks of parsing, reference, planning, goal detection, and generation, and how they are organized to enable efficient access even with the large database of an expert system. The structuring of knowledge to provide direct answers to common queries, and the high usability and efficiency of the knowledge structures, allow UC to hold an interactive conversation with a user.

1. Introduction

UC is a natural language program that converses in English with users in the domain of the UNIX operating system. UC provides information on usage of system utilities, UNIX terminology, and plans for accomplishing specific tasks in the UNIX environment, all upon direct query by the user. In order to accomplish these tasks, UC must perforce have a considerable knowledge base, a large part of which is particular to the UNIX domain. The specific representations used in this knowledge base are essential to the successful operation of UC. Not only are the knowledge structures used in parsing, inference, planning, goal detection, and generation, but the format of representation must also permit the high efficiency in access and processing of knowledge that is required in an interactive system like UC. This paper describes the details of this representation scheme and how it manages to satisfy these goals of usability and efficiency. Other aspects of the UC system are described in Arens (1982), Faletti (1982), Jacobs (1983), Rau (1983), and Wilensky and Arens (1980a and b). An overview of the UC system can be found in Wilensky (1982).

* UNIX is a trademark of Bell Laboratories.
† This research was sponsored in part by the Office of Naval Research under contract N00014-80-C-0732 and the National Science Foundation under grant MCS79-06543.

2. Specifications for the Representation

The first step in the design of knowledge structures involves determining what forms of knowledge will be needed. In this case, the domain of conversation for a UNIX consultant is specific enough that it reduces the range of knowledge structures necessary for the task. Some insight into the kinds of knowledge structures that are needed can be gleaned from looking at the variety of questions which users actually ask. Since UC is aimed at the naive user of UNIX, a majority of the queries UC receives are of the following forms (taken from actual UC sessions):

User: How can I change the write protection on my terminal?
UC: To turn on write permission on your terminal, type 'mesg y'. To turn off write permission on your terminal, type 'mesg n'.

User: What is a search path?
UC: A search path in UNIX is a list of directories in which the operating system searches for programs to execute.

User: Why can't I remove the directory Tmp?
UC: The directory Tmp must be empty before the directory can be deleted.

Questions of the first form, asking how to do something, are usually requests for the names and/or usage of UNIX utilities. The user generally states the goals or results that are desired, or the actions to be performed, and then asks for a specific plan for achieving these wishes.
So to respond to how questions, UC must encode in its database a large number of plans for accomplishing desired results, or equivalently, the knowledge necessary to generate those plans as needed. The second question type is a request for the definition of certain UNIX or general operating systems terminology. Such definitions could be provided easily by canned textual responses; however, UC generates all of its output, and the expression of knowledge in a format that is also useful for generation is a much more difficult problem than simply storing canned answers. In the third type of query, the user describes a situation where his expectations have failed to be substantiated and asks UC to explain why. Many such queries involve plans where preconditions of those plans have been violated or steps omitted from the plans. The job that UC has is to determine what the user was attempting to do and then to determine whether preconditions may have been violated or steps left out by the user in the execution of the plans.

Besides the ability to represent all the different forms of knowledge that might be encountered, knowledge structures should be appropriate to the tasks for which they will be used. This means that it should be easy to represent knowledge, manipulate the knowledge structures, use them in processing, and do all that efficiently in both time and space. In UC, these requirements are particularly hard to meet since the knowledge structures are used for so many diverse purposes.

3. The Choice

Many different representation schemes were considered for UC. In the past, expert systems have used relations in a database (e.g., the UCC system of Douglass and Hegner, 1982), production rules, and/or predicate calculus for knowledge representation. Although these formats have their strong points, it was felt that none provided the flexibility needed for the variety of tasks in UC. Relations in a database are good for large amounts of data, but the database query languages which must be used for access to the knowledge are usually poor representation languages. Production rules encode procedural knowledge in an easy-to-use format, but do not provide much help for representing declarative knowledge. Predicate calculus provides built-in inference mechanisms, but does not provide sufficient mechanism for representing the linguistic forms found in natural language. Also considered were various representation languages, in particular KL-ONE (Schmolze and Brachman, 1981); however, at the time these did not seem to provide facilities for efficient access in very large knowledge bases. The final decision was to use a frame-like representation where some of the contents are based on Schank's conceptual dependencies, and to store the knowledge structures in PEARL databases (PEARL is an AI package developed at Berkeley that provides efficient access to Lisp representations through hashing mechanisms; cf. Deering et al., 1981 and 1982).

4. The Implementation

Based on Minsky's theory of frames, the knowledge structures in UC are frames which have a slot-filler format. The idea is to store all relevant information about a particular entity together for efficient access. For example, the following representation for users has the slots user-id, home-directory, and group, which are filled by a user-id, a directory, and a set of group-ids respectively:
(create expanded person user
  (user-id user-id)
  (home-directory directory)
  (group setof group-id))

In addition, users inherit the slots of person frames, such as a person's name.

To see how the knowledge structures are actually used, it is instructive to follow the processing of queries in some detail. UC first parses the English input into an internal representation. For instance, the query of example one is parsed into a question frame with the single slot, cd, which is filled by a planfor frame. The question asks what is the plan for (represented as a planfor with an unknown method) achieving the result of changing the write protection (mesg state) of a terminal (terminal1, which is actually a frame that is not shown).

(question
  (cd (planfor (result (state-change (actor terminal1)
                                     (state-name mesg)
                                     (from unspecified)
                                     (to unspecified)))
               (method *unknown*))))

Once the input is parsed, UC, which is a data-driven program, looks in its database to find out what to do with the representation of the input. An assertion frame would normally result in additions to the database, and an imperative might result in actions (depending on the goal analysis). In this case, when UC sees a question with a planfor where the method is unknown, it looks in its database for an out-planfor with a query slot that matches the result slot of the planfor in the question. This knowledge is encoded associatively in a memory-association frame, where the recall-key slot is the associative component and the cluster slot contains a set of structures which are associated with the structure in the recall-key slot.

(memory-association
  (recall-key (question (cd (planfor (result ?conc)
                                     (method *unknown*)))))
  (cluster ((out-planfor (query ?conc) (plan ?*any*)))))

The purpose of the memory-association frame is to simulate the process of reminding and to provide very flexible control flow for UC's data-driven processor. After the question activates the memory-association, a new out-planfor is created and added to working memory. This out-planfor in turn matches and activates the following knowledge structure in UC's database:

(out-planfor
  (query (state-change (actor terminal)
                       (state-name mesg)
                       (from ?from-state)
                       (to ?to-state)))
  (plan (output (cd (planfor67 planfor68)))))

The meaning of this out-planfor is that if a query about a state-change involving the mesg state of a terminal is ever encountered, then the proper response is the output frame in the plan slot. All output frames in UC are passed to the generator. The above output frame contains the planfors numbered 67 and 68:

planfor67:
(planfor
  (result (state-change (actor terminal)
                        (state-name mesg)
                        (from off)
                        (to on)))
  (method (mtrans (actor *user*)
                  (object (command (name mesg)
                                   (args (y))
                                   (input *stdin*)
                                   (output *stdout*)
                                   (diagnostic *stdout*)))
                  (from *user*)
                  (to *Unix*))))

This planfor states that a plan for changing the mesg state of a terminal from off to on is for the user to send the command mesg to UNIX with the argument "y". Planfor 68 is similar, only with the opposite result and with argument "n". In general, UC contains many of these planfors, which define the purpose (result slot) of a plan (method slot). The plan is usually a simple command, although there are more complex meta-plans for constructing sequences of simple commands such as might be found in a UNIX pipe or in conditionals.
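The data-driven activation above rests on matching frame patterns containing ?variables against fully instantiated frames. A minimal matcher (our illustrative Common Lisp reconstruction; UC's actual matching is done inside the PEARL package) might look like this:

    ;; Symbols whose names begin with "?" are pattern variables.
    (defun var-p (x)
      (and (symbolp x)
           (plusp (length (symbol-name x)))
           (char= (char (symbol-name x) 0) #\?)))

    ;; Match PATTERN against INPUT, threading an alist of bindings;
    ;; returns the bindings on success or :fail.
    (defun frame-match (pattern input &optional (bindings '()))
      (cond ((eq bindings :fail) :fail)
            ((var-p pattern)
             (let ((old (assoc pattern bindings)))
               (cond ((null old) (acons pattern input bindings))
                     ((equal (cdr old) input) bindings)
                     (t :fail))))
            ((atom pattern) (if (eql pattern input) bindings :fail))
            ((atom input) :fail)
            (t (frame-match (cdr pattern) (cdr input)
                            (frame-match (car pattern) (car input)
                                         bindings)))))

Matching the recall-key of the memory-association against the parsed question binds ?conc to the embedded state-change frame; substituting that binding into the cluster slot yields the new out-planfor that is added to working memory.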
In UC, out-planfors represent "compiled" answers: cases where an expert consultant has encountered a particular query so often that he already has a rote answer prepared. Usually the question in the query slot of the out-planfor is similar to the result of the planfor that is in the output frame in the plan slot of the out-planfor. However, this is not necessarily the case, since the out-planfor may have anything in its plan slot; for example, some queries invoke UC's interface with UNIX (due to Margaret Butler) to obtain specific information for the user. The use of memory-associations and out-planfors in UC provides a direct association between common user queries and their solutions. This direct link enables UC to process commonplace queries quickly. When UC encounters a query that cannot be handled by the out-planfors, the planning component of UC (PANDORA, cf. Faletti, 1982) is activated. The planner component uses the information in the UC databases to create individualized plans for specific user queries. The description of that process is beyond the scope of this paper.

The representation of definitions requires a different approach than the above representations for actions and plans. Here one can take advantage of the practicality of terminology in a specialized domain such as UNIX. Specifically, objects in the UNIX domain usually have definite functions which serve well in the definition of the object. In example two, the type declaration of a search-path includes a use slot for the search-path, which contains information about the main function of search paths. The following declaration defines a search-path as a kind of functional-object with a path slot that contains a set of directories and a use slot which says that search paths are used by UNIX in searching for programs.

(create expanded functional-object search-path
  (path setof directory)
  (use ($search (actor *Unix*)
                (object program)
                (location ?search-path)))
  ...)

Additional information useful in generating a definition can be found in the slots of a concept's declaration. These slots describe the parts of a concept and are ordered in terms of importance. Thus in the example, the fact that a search-path is composed of a set of directories was used in the definition given in the examples. Other useful information for building definitions is encoded in the hierarchical structure of concepts in UC. This is not used in the above example, since a search-path is only an expanded version of the theoretical concept functional-object. However, with other objects such as directory, the fact that a directory is an expanded version of a file (a directory is a file which is used to store other files) is actually used in the definition.

The third type of query involves failed preconditions of plans or missing steps in a plan. In UC the preconditions of a plan are listed in a preconds frame. For instance, in example 3 above, the relevant preconds frame is:

(preconds
  (plan (mtrans (actor *user*)
                (object (command (name rmdir)
                                 (args (?directoryname))
                                 (input stdin)
                                 (output stdout)
                                 (diagnostic stdout)))
                (from *user*)
                (to *Unix*)))
  (are ((state (actor (all (var ?file)
                           (desc (file))
                           (pred (inside-of (object ?directoryname)))))
               (state-name physical-state)
               (value non-existing))
       ...)))

This states that one of the preconditions for removing a directory is that it must be empty.
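A sketch of the checking loop this frame supports (illustrative Common Lisp only; the real UC evaluates each precondition by querying UNIX itself through its UNIX interface, or by asking the user):

    ;; A precondition here is a (name . test) pair, where TEST is a
    ;; closure standing in for UC's calls out to UNIX or to the user.
    (defun check-preconds (preconds)
      (loop for (name . test) in preconds
            unless (funcall test)
              do (return-from check-preconds (list 'violated name)))
      'all-satisfied)

    ;; For the rmdir example: the single precondition asks whether the
    ;; directory is empty (simulated here with a constant answer).
    (check-preconds
     (list (cons 'directory-empty (lambda () nil))))
    ;; => (VIOLATED DIRECTORY-EMPTY)

On a violated precondition UC reports the offending condition (here, that the directory must be empty before it can be deleted); if every test succeeds, checking proceeds to the ordering of steps in multi-step plans.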
In analyzing the example, UC first finds the goal of the user, namely to delete the directory Tmp. Then from this goal, UC looks for a plan for that goal among the planfors which have that goal in their result slots. Once the plan has been found, the preconds for that plan are checked, which in this case leads to the fact that a directory must be empty before it can be deleted. Here UC actually checks with UNIX, looking in the user's area for the directory Tmp, and discovers that this precondition is indeed violated. If UC had not been able to find the directory, UC would suggest that the user personally check the preconditions. Of course, if the first precondition had been found to be satisfied, the next would be checked, and so on. In a multi-step plan, UC would also verify that the steps of the plan had been carried out in the proper sequence by querying the user or checking with UNIX.

5. Storage for Efficient Access

The knowledge structures in UC are stored in PEARL databases, which provide efficient access by hash indexing. Frames are indexed by combinations of the frame type and/or the contents of selected slots. For instance, the planfor of example one is indexed using a hashing key based on the state-change in the planfor's result slot: the planfor is stored by the fact that it is a planfor for the state-change of a terminal's mesg state. This degree of detail in the indexing scheme allows the planfor to be immediately recovered whenever a reference is made to a state-change in a terminal's mesg state. Similarly, a memory-association is indexed by the filler of its recall-key slot, an out-planfor is indexed using the contents of its query slot, and a preconds is indexed by the plan in its plan slot. Indeed, all knowledge structures in UC have associated with them one or more indexing schemes which specify how to generate hashing keys for storage of the knowledge structure in the UC databases. These indexing methods are specified at the time the knowledge structures are defined. Thus, although care must be taken to choose good indexing schemes when defining the structure of a frame, the indexing scheme is used automatically whenever another instance of the frame is added to the UC databases. Also, even though the indexing schemes for large structures like planfors involve many levels of embedded slots and frames, simpler knowledge structures usually have simpler indexing schemes. For example, the representation for users in UC is stored in two ways: by the fact that it is a user with a specific account name, and by the fact that it is a user with some given real name.

The basic idea behind using these complex indexing schemes is to simulate a real associative memory by using the hashing mechanisms provided in PEARL databases. This associative memory mechanism fits well with the data-driven control mechanism of UC and is useful for a great variety of tasks. For example, goal analysis of speech acts can be done through this associative mechanism:

(memory-association
  (recall-key (assertion (cd (goal (planner ?person)
                                   (objective ?obj)))))
  (cluster ((out-planfor (cd ?obj)))))

In the above example (provided by Jim Mayfield), UC analyzes the user's statement of wanting to do something as a request for UC to explain how to achieve that goal.
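The flavor of this indexing can be captured in a few lines of Common Lisp (a sketch under our own simplifications; PEARL's actual mechanisms are described in Deering et al., 1982). Each frame type names the slot combinations that form its hashing keys, and storage and retrieval funnel through the same key constructor:

    (defparameter *db* (make-hash-table :test #'equal))

    ;; Each entry: (frame-type key-slots...) -- a frame type may have
    ;; several alternative keys, as with users below.
    (defparameter *index-schemes*
      '((user (user-id) (name))
        (out-planfor (query))))

    (defun slot-filler (frame slot)
      (second (assoc slot (rest frame))))

    (defun hash-keys (frame)
      (loop for scheme in (rest (assoc (first frame) *index-schemes*))
            collect (cons (first frame)
                          (mapcar (lambda (s) (slot-filler frame s))
                                  scheme))))

    (defun store-frame (frame)
      (dolist (key (hash-keys frame))
        (push frame (gethash key *db*))))

    (defun fetch-frames (type &rest fillers)
      (gethash (cons type fillers) *db*))

    ;; (store-frame '(user (user-id dnc) (name "David Chin")))
    ;; (fetch-frames 'user 'dnc)  ; retrieval by account name

Nested keys, like the state-change inside a planfor's result slot, fall out of the same scheme, because a filler is itself an s-expression and participates directly in the equal-hashed key.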
6. Conclusions

The knowledge structures developed for UC have so far shown good efficiency in both access time and space usage within the limited domain of processing queries to a UNIX consultant. The knowledge structures fit well in the framework of data-driven programming used in UC. Ease of use is somewhat subjective, but beginners have been able to add to the UC knowledge base after an introductory graduate course in AI. Efforts under way to extend UC in such areas as dialogue will further test the merit of this representation scheme.

7. Technical Data

UC is a working system which is still under development. In size, UC is currently two and a half megabytes, of which half a megabyte is Franz Lisp. Since the knowledge base is still growing, it is uncertain how much of an impact even more knowledge will have on the system, especially when the program becomes too large to fit in main memory. In terms of efficiency, queries to UC take between two and seven seconds of CPU time on a VAX 11/780. Currently, all the knowledge in UC is hand coded; however, efforts are under way to automate the process.

8. Acknowledgments

Some of the knowledge structures used in UC are refinements of formats developed by Joe Faletti and Peter Norvig. Yigal Arens is responsible for the underlying memory structure used in UC, and of course, this project would not be possible without the guidance and advice of Robert Wilensky.

9. References

Arens, Y. 1982. The Context Model: Language Understanding in Context. In the Proceedings of the Fourth Annual Conference of the Cognitive Science Society. Ann Arbor, MI. August 1982.

Deering, M., J. Faletti, and R. Wilensky. 1981. PEARL: An Efficient Language for Artificial Intelligence Programming. In the Proceedings of the Seventh International Joint Conference on Artificial Intelligence. Vancouver, British Columbia. August 1981.

Deering, M., J. Faletti, and R. Wilensky. 1982. The PEARL Users Manual. Berkeley Electronic Research Laboratory Memorandum No. UCB/ERL/M82/19. March 1982.

Douglass, R., and S. Hegner. 1982. An Expert Consultant for the Unix System: Bridging the Gap Between the User and Command Language Semantics. In the Proceedings of the Fourth National Conference of the Canadian Society for Computational Studies of Intelligence. University of Saskatchewan, Saskatoon, Canada.

Faletti, J. 1982. PANDORA - A Program for Doing Commonsense Planning in Complex Situations. In the Proceedings of the National Conference on Artificial Intelligence. Pittsburgh, PA. August 1982.

Rau, L. 1983. Computational Resolution of Ellipses. Submitted to IJCAI-83, Karlsruhe, Germany.

Jacobs, P. 1983. Generation in a Natural Language Interface. Submitted to IJCAI-83, Karlsruhe, Germany.

Schmolze, J. and R. Brachman. 1981. Proceedings of the 1981 KL-ONE Workshop. Fairchild Technical Report No. 618, FLAIR Technical Report No. 4. May 1982.

Wilensky, R. 1982. Talking to UNIX in English: An Overview of UC. In the Proceedings of the National Conference on Artificial Intelligence. Pittsburgh, PA. August 1982.

Wilensky, R. 1981(b). A Knowledge-based Approach to Natural Language Processing: A Progress Report. In the Proceedings of the Seventh International Joint Conference on Artificial Intelligence. Vancouver, British Columbia. August 1981.

Wilensky, R., and Arens, Y. 1980(a). PHRAN - a Knowledge-Based Natural Language Understander. In the Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics. Philadelphia, PA.

Wilensky, R., and Arens, Y. 1980(b). PHRAN - a Knowledge-Based Approach to Natural Language Analysis. University of California at Berkeley, Electronic Research Laboratory Memorandum No. UCB/ERL M80/34.
Discourse Pragmatics and Ellipsis Resolution in Task-Oriented Natural Language Interfaces

Jaime G. Carbonell
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

This paper reviews discourse phenomena that occur frequently in task-oriented man-machine dialogs, reporting on an empirical study that demonstrates the necessity of handling ellipsis, anaphora, extragrammaticality, inter-sentential metalanguage, and other abbreviatory devices in order to achieve convivial user interaction. Invariably, users prefer to generate terse or fragmentary utterances instead of longer, more complete "stand-alone" expressions, even when given clear instructions to the contrary. The XCALIBUR expert system interface is designed to meet these needs, including generalized ellipsis resolution by means of a rule-based caseframe method superior to previous semantic grammar approaches.

1. A Summary of Task-Oriented Discourse Phenomena

Natural language discourse exhibits several intriguing phenomena that defy definitive linguistic analysis and general computational solutions. However, some progress has been made in developing tractable computational solutions to simplified versions of phenomena such as ellipsis and anaphora resolution [20, 10, 21]. This paper reviews discourse phenomena that arise in task-oriented dialogs with responsive agents (such as expert systems, rather than purely passive database query systems), outlines the results of an empirical study, and presents our method for handling generalized ellipsis resolution in the XCALIBUR expert system interface. With the exception of inter-sentential metalanguage, and to a lesser degree extragrammaticality, the significance of the phenomena listed below has long been recognized and documented in the computational linguistics literature.

• Anaphora -- Interactive task-oriented dialogs invite the use of anaphora, much more so than simpler database query situations.

• Definite noun phrases -- As Grosz [6] noted, resolving the referent of definite noun phrases requires an understanding of the planning structure underlying cooperative discourse.

• Ellipsis -- Sentential-level ellipsis has long been recognized as ubiquitous in discourse. However, semantic ellipsis, where ellipsed information is manifest not as syntactically incomplete structures but as semantically incomplete propositions, is also an important phenomenon. The ellipsis resolution method presented later in this paper addresses both kinds of ellipsis.

• Extragrammatical utterances -- Interjections, dropped articles, false starts, misspellings, and other forms of grammatical deviance abound in our data (as explained in the following section). Developing robust parsing techniques that tolerate errors has been the focus of our earlier investigations [2, 9, 7] and remains high among our priorities. Other investigations of error-tolerant parsing include [13, 22].

• Meta-linguistic utterances -- Intra-sentential metalanguage has been investigated to some degree [18, 12], but its more common inter-sentential counterpart has received little attention [4]. However, utterances about other utterances (e.g., corrections of previous commands, such as "I meant to type X instead" or "I should have said ...") are not infrequent in our dialogs, and we are making an initial stab at this problem [8]. Note that it is a cognitively less demanding task for a user to correct a previous utterance than to repeat an explicit sequence of commands (or worse yet, to detect and undo explicitly each and every unwanted consequence of a mistaken command).

• Indirect speech acts -- Occasionally users will resort to indirect speech acts [19, 16, 1], especially in connection with inter-sentential metalanguage, or by stating a desired state of affairs and expecting the system to supply the sequence of actions necessary to achieve that state.

In our prior work we have focused on extragrammaticality and inter-sentential metalanguage. In this paper we report on an empirical study of discourse phenomena in a simulated interface and on our work on generalized ellipsis resolution in the context of the XCALIBUR project.

2. An Empirical Study

The necessity of handling most of the discourse phenomena listed in the preceding section was underscored by an empirical study we conducted to ascertain the most pressing needs of natural language interfaces in interactive applications. The initial objective of this study was to circumscribe the natural language interface task by attempting to instruct users of a simulated interface not to employ different discourse devices or difficult linguistic constructs. In essence, we wanted to determine whether untrained users would be able to interact as instructed (for instance, avoiding all anaphoric referents), and, if so, whether they would still find the interface convivial given our artificial constraints. The basic experimental set-up consisted of two remotely located terminals linked to each other and a transaction log file
Note that it is a cognitively less demanding task for a user to correct a previous utterance than to repeat an explicit sequence of commands (or worse yet, to detect and undo explicitly each and every unwanted consequence of a mistaken command). • indirect speech acts -- Occasionally users will resort tO indirect speech acts[19. 16, 1], especially in connection with inter.sentential metalanguage or by stating a desired state of affairs and expecting the system tO supply the sequence of actions necessary to achieve that state. In our prior work we have focused on extr~grammaticality and inter.sentential metalanguage. In this paper we report on an empLrical study of discourse phenomena to a s~mulated interface and on our work on generalized elhpsis resolutLon in the context of the XCALIBUR project, 2. An Empirical Study The necessity to handle most of the discourse phenomena listed in the preceding section was underscored by an empirical study we conducted to ascertain the most pressing needs of natural language interfaces in interactive apl~lications, The initial objective of this study was to circumscribe the natural language interface task by attempting to instruct users of a simulated interface not to employ different discourse devices or difficult linguistic constructs. In essence, we wanted to determine whether untrained users would be able to interact as instructed (for instance avoiding all anaphoric referents), and, if so, whether they would still find the interface convivial given our artificial constraints. The basic experimental set-up consisted of two remotely located terminals linked to each other and a transaction log file 164 that kept a record of all interactions. The user wassituated at one terminal and was told he or she was communicating with a real natural language interface to an operating system (and an accompanying intelligent help system, not unlike Wilensky's Unix Consultant[23].) The experimenter at the other terminal simulated the interface and gave appropriate commands to the (real) operating system. In different sessions, users were instructed not to use pronouns, to type only complete sentences, to avoid complex syntax, to type only direct commands or queries (e.g., no indirect speech acts or discourse-level metalinguistic utterances [4, 8]), and to stick to the topic. The only instructions that were reliably followed were sticking to the topic (always) and avoiding complex syntax (usually). All other instructions were repeatedly violated in spite of constant negative feedback -- that is, the person pretending to be the natural language program replied with a standard error message. I recorded some verbal responses as well (with users telling a secretary at the terminal what she should type), and, contrary to my expectations, these did not qualitatively differ from the typed utterances. The significant result here is that users appear incapable or unwilling to generate lengthy commands, queries or statements when they can employ a linguistic device to state the same proposition in a more terse manner. To restate the principle more succinctly: Terseness principle: users insist on being as terse as possible, independent Of communication media or typing ability. 1 Given these results, we concluded that it was more appropriate to focus our investigations on handling abbreviatory discourse devices, rather than to address the issue of expanding our syntactic coverage to handle verbose complex structures seldom observed in our experience. 
In this manner, the objectives of the XCALIBUR project differ from those of most current investigations. 3. A Sketch of the ×CALIBUR interface This section outlines the XCALIBUR project, whose objective is to provide flexible natural language access (comprehension and generation) to the XSEL expert system [15]. XSEL, the Digital Equipment Corporation's automated salesman's assistant, advises on selection of appropriate VAX components and produces a sales order for automatic configuration by the R1 system [14]. Part of the XSEL task is to provide the user with information about DEC components, hence subsuming the data- base query task. However, unlike a pure data base query system, an expert system interface must also interpret cnm"'~ndS, understand assertions of new information, and carry out task- oriented dialogs (such as those discussed by Grosz[6]). XCALIBUR, in particular, deals with commands to modify an order, as well as information requests pertaining to its present task or itS data base of VAX component parts. In the near future it should process clarificational dialogs when the underlying expert system (i.e. XSEL) requires additional information or advice, as illustrated in the sample dialog below: >What is the largest 11780 fixed disk under $40,000? The rp07-aa is a 516 M8 fixed pack disk that costs $38,000. >The largest under $50,000? The rp07-aa. >Add two rpO7-aa disks to my order. Line item 1 added: (2 ro07-aa) >Add a printer with graphics capatJility fixed or changeable font? >fixed tont lines per minute? >make it at least 200, upper/lowercase. Ok. Line item 2 added: (1 Ixyt 1-sy) >Tell me about the Ixyl 1 The Ixyl 1 is a 240 I/m line printer with plotting capabilities, With the exception of the system-driven clarification interchange, which is beyond XCALIBUR's presently implemented capabilities, the rest of the dialog, including the natural language generation, is indicative of the present state Of our system. The major contributions of XCALIBUR thus far is perhaps the integratlon of diverse techmques into a working system, including the DYPAR.II multi-strategy parser. expectatnon.based error correction, case.frame ellipsis USER - - --~ Oypar.II Genet al,.:,r ]L~-- 1 InformalLon Manager & <_J J'(- XSEL Long te,m (Static) Database XCALIBUR > R1 i Figu re 3-1 : Overview of XCALIBUR llndicative as these empirical studies are of where one must focus one's efforts in developing convivial interfaces, they were not performed with adeqgato control groups or statistical rigor. Therefore. there is ample room to confirm. refute or expand upon lhe detads of our emoirical findings. However. the surprisingly strong form in which Grice's maxgm [5] manifests itself in task- oa~ented human computer d=alogs seems qualitatively irrefutable. resolution and focused natural language generation. Figure 3.1 provides a simplified view of the major modules of XCALIBUR, and the reader is referred to [3] for further elaboration. 3.1. The Role of the Information Handler When XSEL is ready to accept input, the information handler is 165 passed a message indicating the case frame or class of case frames expected as a response. For our example, assume that a command or query is expected, the parser is notified, and the user enters >What is the price of the 2/argest dual port fixed media disks? 
The parser returns:

[QUERY (OBJECT (SELECT (disk (ports (VALUE (2)))
                             (disk-pack-type (VALUE (fixed)))))
               (OPERATION (SORT (TYPE ('descending))
                                (ATTR (size))
                                (NUMBER (2)))
                          (PROJECT (price))))
       (INFO-SOURCE ('default))]

Rather than delving into the details of the representation or the manner in which it is transformed prior to generating an internal command to XSEL, consider some of the functions of the information handler:

• Defaults must be instantiated. In the example, the query does not explicitly name an INFO-SOURCE, which could be the component database, the current set of line items, or a set of disks brought into focus by the preceding dialog.

• Ambiguous fillers or attribute names must be resolved. For example, in most contexts "300 MB disk" means a disk with "greater than or equal to 300 MB" rather than strictly "equal to 300 MB". A "large" disk refers to ample memory capacity in the context of a functional component specification, but to large physical dimensions during site planning. Presently, a small amount of local pragmatic knowledge suffices for the analysis, but in the general case closer integration with XSEL may be required.

• Generalized ellipsis resolution, as presented below, occurs within the information handler.

As the reader may note, the present raison d'etre of the information manager is to act as a repository of task and dialog knowledge, providing information that the user did not feel necessary to convey explicitly. Additionally, the information handler routes the parsed command or query to the appropriate knowledge source, be it an external static database, an expert system, or a dynamically constructed data structure (such as the current VAX order). Our plans call for incorporating a model of the user's task and knowledge state that should provide useful information to both parser and generator. At first, we intend to focus on stereotypical users such as a salesperson, a system engineer and a customer, who would have rather different domain knowledge, perhaps different vocabulary, and certainly different sets of tasks in mind. Eventually, refinements and updates to a default user model should be inferred from an analysis of the current dialog [17].

4. Generalized Caseframe Ellipsis

The XCALIBUR system handles ellipsis at the case-frame level. Its coverage appears to be a superset of the LIFER/LADDER system [10, 11] and the PLANES ellipsis module [21]. Although it handles most of the ellipsed utterances we encountered, it is not meant to be a general linguistic solution to the ellipsis phenomenon.

4.1. Examples

The following examples are illustrative of the kind of sentence fragments the current case-frame method handles. For brevity, assume that each sentence fragment occurs immediately following the initial query below.

INITIAL QUERY: "What is the price of the three largest single port fixed media disks?"

"Speed?"
"Two smallest?"
"How about the price of the two smallest"
"also the smallest with dual ports"
"Speed with two ports?"
"Disk with two ports."

In the representative examples above, punctuation is of no help, and pure syntax is of very limited utility. For instance, the last three phrases are syntactically similar (indeed, the last two are indistinguishable), but each requires that a different substitution be made on the preceding query.
All three substitute the number of ports in the original SELECT field, but the first substitutes "ascending" for "descending" in the OPERATION field, the second substitutes "speed" for "price" in the PROJECT field, and the third merely repeats the case header of the SELECT field.

4.2. The Ellipsis Resolution Method

Ellipsis is resolved differently in the presence or absence of strong discourse expectations. In the former case, the discourse expectation rules are tested first and, if they fail to resolve the sentence fragment, the contextual substitution rules are tried. If there are no strong discourse expectations, the contextual substitution rules are invoked directly.

Exemplary discourse expectation rule:

IF: The system generated a query for confirmation or disconfirmation of a proposed value of a filler of a case in a case frame in focus,
THEN: EXPECT one or more of the following:
1) A confirmation or disconfirmation pattern.
2) A different but semantically permissible filler of the case frame in question (optionally naming the attribute or providing the case marker).
3) A comparative or evaluative pattern.
4) A query for possible fillers or constraints on possible fillers of the case in question. [If this expectation is confirmed, a sub-dialog is entered, where previously focused entities remain in focus.]

The following dialog fragment, presented without further commentary, illustrates how these expectations come into play in a focused dialog:

>Add a line printer with graphics capabilities.
Is 150 lines per minute acceptable?
>No, 320 is better                  [Expectations 1, 2 & 3]
(or) other options for the speed?   [Expectation 4]
(or) Too slow, try 300 or faster    [Expectations 2 & 3]

The utterance "try 300 or faster" is syntactically a complete sentence, but semantically it is just as fragmentary as the previous utterances. The strong discourse expectations, however, suggest that it be processed in the same manner as syntactically incomplete utterances, since it satisfies the expectations of the interactive task. The terseness principle operates at all levels: syntactic, semantic and pragmatic.

The contextual substitution rules exploit the semantic representation of queries and commands discussed in the previous section. The scope of these rules, however, is limited to the last user interaction of the appropriate type in the dialog focus, as illustrated in the following examples:

Contextual Substitution Rule 1:

IF: An attribute name (or conjoined list of attribute names) is present without any corresponding filler or case header, and the attribute is a semantically permissible descriptor of the case frame in the SELECT field of the last query in focus,
THEN: Substitute the new attribute name for the old filler of the PROJECT field of the last query.

For example, this rule resolves the ellipsis in the following utterances:

>What is the size of the 3 largest single port fixed media disks?
>And the price and speed?

Contextual Substitution Rule 2:

IF: No sentential case frames are recognized in the input, and part of the input can be recognized as an attribute and filler (or just a filler) of a case in the SELECT field of a command or query in focus,
THEN: Substitute the new filler for the old in the same field of the old command or query.

This rule resolves the following kind of ellipsis:

>What is the size of the 3 largest single port fixed media disks?
>disks with two ports?
Note that it is impossible to resolve this kind of ellipsis in a general manner if the previous query is stored verbatim or as a semantic-grammar parse tree. "Disks with two ports" would at best correspond to some <disk-descriptor> non-terminal and hence, according to the LIFER algorithm [10, 11], would replace the entire phrase "single port fixed media disks" that corresponded to <disk-descriptor> in the parse of the original query. However, an informal poll of potential users suggests that the preferred interpretation of the ellipsis retains the MEDIA specifier of the original query. The ellipsis resolution process therefore requires a finer-grained substitution method than simply inserting the highest-level non-terminals of the ellipsed input in place of the matching non-terminals in the parse tree of the previous utterance.

Taking advantage of the fact that a case frame analysis of a sentence or object description captures the meaningful semantic relations among its constituents in a canonical manner, a partially instantiated nominal case frame can be merged with the previous case frame as follows (a code sketch of this merge closes this section):

• Substitute any cases instantiated in the original query that the ellipsis specifically overrides. For instance, "with two ports" overrides "single port" in our example, as both entail different values of the same case descriptor, regardless of their different syntactic roles. ("Single port" in the original query is an adjectival construction, whereas "with two ports" is a post-nominal modifier in the ellipsed fragment.)

• Retain any cases in the original parse that are not explicitly contradicted by new information in the ellipsed fragment. For instance, "fixed media" is retained as part of the disk description, as are all the sentential-level cases in the original query, such as the quantity specifier and the projection attribute of the query ("size").

• Add any cases of a case frame in the query that are not instantiated therein but are specified in the ellipsed fragment. For instance, the "fixed head" descriptor is added as the media case of the disk nominal case frame in resolving the ellipsed fragment in the following example:

>Which disks are configurable on a VAX 11-780?
>Any configurable fixed head disks?
The major problem with semantic grammars is that they convolve syntax with semantics in a manner that requires multiple representations for the same semantic entity. For instance, the ordering of marked cases in the input does not reflect any difference in meaning (almough one could argue that surface ordering may reflect differential emphasis and other pragmatic considerations). A pure semantic grammar must employ different rules to recognize each and every admissible case sequence. Hence, the resultant parse trees differ, and the knowledge that surface positioning of unmarked cases is meaningful, but positioning of ranked ones is not, must be contained within the ellipsis resolution process, a very unnatural repository for such basic information. Moreover, in order to attain a measure of the functionality described above for case-frames, ellipsis resolution in semantic grammar parse trees must somehow merge adjectival and post nominal forms (corresponding to different non-terminals and different relative positions in the parse trees) so that ellipsed structures such as "a disk with 1 port" can replace the the "dual-port" part of the phrase "...dual-port fixed-media disk " in an earlier utterance. One way to achieve this effect is to collect together specific nonterminals that can substitute for each other in certain contexts, in essence grouping non-canonical representations into semantic equivalence classes. However, this process would requ=re hand.crafting large associative tables or similar data structures, a high price to pay for each domain-specific semantic grammar. Hence, in order to achive robust ellipsis resolution all proverbial roads lead to recursive case constructions encoding domain semantics and canonical structure for multiple surface manifestations. Finally, consider one more rule that provides additional context in situations where the ellipsis is of a purely semantic nature, such as: 167 )Which fixed media disks are configurable on a VAX780? The RP07-aa, the RP07.ab .... >"Add the largest" We need to answer the question "largest what?" before proceeding. One can call this problem a special case of definite noun phrase resolution, rather than semantic ellipses, but terminology is immaterial. Such phrases occur with regularity in our corpus of examples and must be resolved by a fairly general process. The following rule answers the question from context, regardless of the syntactic completeness of the new utterance. Contextual Substitution Rule 3: If: A command or query caseframe lacks one or more required case fillers (such as a missing SELECT field), and the last case frame in fOCUS has an instantiated case that meets a11 the semantic tests for the case missing the riller, THEN: t) Copy the filler onto the new caseframe, and Z) Attempt to copy uninstantiated case filler's as well (if they meet semantic tests) 3) Echo the action being performed for impllcit conrlrmetion by the user. XCALIBUR presently has eight contextual substitution rules. similar to the ones above, and we have found several additional ones to extend the coverage of ellipsed queries and commands (see [3] for a more extensive discussion). It is significant to note that a small set of fairly general rules exploiting the case frame structures cover most instances of commonly occurring ellipsis, including all the examples presented earlier in this section. 5. 
5. Acknowledgements

Mark Boggs, Peter Anick and Michael Mauldin are part of the XCALIBUR team and have participated in the design and implementation of various modules. Phil Hayes and Steve Minton have contributed useful ideas in several discussions. Digital Equipment Corporation is funding the XCALIBUR project, which provides a fertile test bed for our investigations.

6. References

1. Allen, J. F. and Perrault, C. R., "Analyzing Intention in Utterances," Artificial Intelligence, Vol. 15, No. 3, 1980, pp. 143-178.
2. Carbonell, J. G. and Hayes, P. J., "Dynamic Strategy Selection in Flexible Parsing," Proceedings of the 19th Meeting of the Association for Computational Linguistics, 1981.
3. Carbonell, J. G., Boggs, W. M., Mauldin, M. L. and Anick, P. G., "XCALIBUR Progress Report #1: Overview of the Natural Language Interface," Tech. report, Carnegie-Mellon University, Computer Science Department, 1983.
4. Carbonell, J. G., "Beyond Speech Acts: Meta-Language Utterances, Social Roles, and Goal Hierarchies," Preprints of the Workshop on Discourse Processes, Marseilles, France, 1982.
5. Grice, H. P., "Conversational Postulates," in Explorations in Cognition, D. A. Norman and D. E. Rumelhart, eds., Freeman, San Francisco, 1975.
6. Grosz, B. J., The Representation and Use of Focus in Dialogue Understanding, PhD dissertation, University of California at Berkeley, 1977, SRI Tech. Note 151.
7. Hayes, P. J. and Carbonell, J. G., "Multi-Strategy Construction-Specific Parsing for Flexible Data Base Query and Update," Proceedings of the Seventh International Joint Conference on Artificial Intelligence, August 1981, pp. 432-439.
8. Hayes, P. J. and Carbonell, J. G., "A Framework for Processing Corrections in Task-Oriented Dialogs," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 1983, (Submitted).
9. Hayes, P. J. and Carbonell, J. G., "Multi-Strategy Parsing and its Role in Robust Man-Machine Communication," Tech. report CMU-CS-81-118, Carnegie-Mellon University, Computer Science Department, May 1981.
10. Hendrix, G. G., Sacerdoti, E. D. and Slocum, J., "Developing a Natural Language Interface to Complex Data," SRI International, 1976.
11. Hendrix, G. G., "The LIFER Manual: A Guide to Building Practical Natural Language Interfaces," Tech. Note 138, SRI, 1977.
12. Joshi, A. K., "Use (or Abuse) of Metalinguistic Devices," unpublished manuscript.
13. Kwasny, S. C. and Sondheimer, N. K., "Ungrammaticality and Extragrammaticality in Natural Language Understanding Systems," Proceedings of the 17th Meeting of the Association for Computational Linguistics, 1979, pp. 19-23.
14. McDermott, J., "R1: A Rule-Based Configurer of Computer Systems," Tech. report, Carnegie-Mellon University, Computer Science Department, 1980.
15. McDermott, J., "XSEL: A Computer Salesperson's Assistant," in Machine Intelligence 10, Hayes, J., Michie, D. and Pao, Y-H., eds., Chichester, UK: Ellis Horwood Ltd., 1982, pp. 325-387.
16. Perrault, C. R., Allen, J. F. and Cohen, P. R., "Speech Acts as a Basis for Understanding Dialog Coherence," Proceedings of the Second Conference on Theoretical Issues in Natural Language Processing, 1978.
17. Rich, E., Building and Exploring User Models, PhD dissertation, Carnegie-Mellon University, April 1979.
18. Ross, J. R., "Metaanaphora," Linguistic Inquiry, 1970.
19. Searle, J. R., "Indirect Speech Acts," in Syntax and Semantics, Volume 3: Speech Acts, P. Cole and J. L. Morgan, eds., New York: Academic Press, 1975.
20. Sidner, C. L.,
Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse, PhD dissertation, MIT, 1979, AI-TR 537.
21. Waltz, D. L. and Goodman, A. B., "Writing a Natural Language Data Base System," Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1977, pp. 144-150.
22. Weischedel, R. M. and Black, J., "Responding to Potentially Unparsable Sentences," Tech. report 79/3, University of Delaware, Computer and Information Sciences, 1979.
23. Wilensky, R., "Talking to UNIX in English: An Overview of an Online Consultant," Tech. report, UC Berkeley, 1982. | 1983 | 25 |
Crossed Serial Dependencies: A low-power parseable extension to GPSG

Henry Thompson
Department of Artificial Intelligence and Program in Cognitive Science
University of Edinburgh
Hope Park Square, Meadow Lane
Edinburgh EH8 9NW SCOTLAND

ABSTRACT

An extension to the GPSG grammatical formalism is proposed, allowing non-terminals to consist of finite sequences of category labels, and allowing schematic variables to range over such sequences. The extension is shown to be sufficient to provide a strongly adequate grammar for crossed serial dependencies, as found in e.g. Dutch subordinate clauses. The structures induced for such constructions are argued to be more appropriate to data involving conjunction than some previous proposals have been. The extension is shown to be parseable by a simple extension to an existing parsing method for GPSG.

I. INTRODUCTION

There has been considerable interest in the community lately in the implications of crossed serial dependencies in e.g. Dutch subordinate clauses for non-transformational theories of grammar. Although context-free phrase structure grammars under the standard interpretations are weakly adequate to generate such languages as aⁿbⁿ, they are not capable of assigning the correct dependencies - that is, they are not strongly adequate. In a recent paper (Bresnan, Kaplan, Peters and Zaenen 1982) (hereafter BKPZ), a solution to the Dutch problem was presented in terms of LFG (Kaplan and Bresnan 1982), which is known to have considerably more than context-free power. (Steedman 1983) and (Joshi 1983) have also made proposals for solutions in terms of Steedman/Ades grammars and tree adjunction grammars (Ades and Steedman 1982; Joshi, Levy and Yueh 1975). In this paper I present a minimal extension to the GPSG formalism (Gazdar 1981c) which also provides a solution. It induces structures for the relevant sentences which are non-trivially distinct from those in BKPZ, and which I argue are more appropriate. It appears, when suitably constrained, to be similar to Joshi's proposal in making only a small increment in power, being incapable, for instance, of analysing aⁿbⁿcⁿ with crossed dependencies. And it can easily be parsed by a small modification to the parsing mechanisms I have already developed for GPSG.

II. AN EXTENSION TO GPSG

II.1 Extending the syntax

GPSG includes the idea of compound non-terminals, composed of pairs of standard category labels. We can extend this trivially to finite sequences of category labels. This in itself does not change the weak generative capacity of the grammar, as the set of non-terminals remains finite. GPSG also includes the idea of rule schemata - rules with variables over categories. If we further allow variables over sequences, then we get a real change. At this point I must introduce some notation. I will write [a,b,c] for a non-terminal label composed of the categories a, b, and c. I will write Z ∈ b* to indicate that the schematic variable Z ranges over sequences of the category b. We can then give the following grammar for aⁿbⁿ with crossed dependencies:

S -> e
S|Z -> a S|Z|b    (1)
S|Z -> a S Z|b    (2)
b|Z -> b Z        (3)

where we allow variables over sequences to appear not only alone, but in simple, that is with constant terms only, concatenation, notated with a vertical bar (|).
This grammar gives us the following analysis for a³b³, where I have used subscripts to record the dependencies, and the marginal numbers give the rule which admits the adjacent node:

S => a1 [S,b1]                      (1)
  => a1 a2 [S,b1,b2]                (1)
  => a1 a2 a3 S [b1,b2,b3]          (2)
  => a1 a2 a3 [b1,b2,b3]            (S -> e)
  => a1 a2 a3 b1 [b2,b3]            (3)
  => a1 a2 a3 b1 b2 [b3]            (3)
  => a1 a2 a3 b1 b2 b3              (3)

With the aid of this example, we see that rule 1 generates a's while accumulating b's, rule 2 brings this process to an end, and rule 3 successively generates the accumulated b's, in the correct, 'crossed', order. This is essentially the structure we will produce for the Dutch examples as well, so it is important to point out exactly how the crossed dependencies are captured. This must come out in two ways in GPSG - subcategorisation restrictions, and interpretation. That the subcategorisation is handled properly should be clear from the above example. Suppose that the categories a and b are pre-terminals rather than terminals, and that there are actually three sorts of a's and three sorts of b's, subcategorised for each other. If one used the standard GPSG mechanism for recording this dependency, namely by providing three rules, whose rule number would then appear as a feature on those pre-terminals appearing in them directly, we would get the above structure, where we can reinterpret the subscripts as the rule numbers so introduced, and see that the dependencies are correctly reflected.

II.2 Semantic interpretation

As for the semantics no actual extension is required - the untyped lambda calculus is still sufficient to the task, albeit with a fair amount of work. We can use what amounts to a packing and unpacking approach. The compound b nodes have compound interpretations, which are distributed appropriately higher up the tree. For this, we need pairs and sequences of interpretations. Following Church, we can represent a pair <l,r> as λf.[f(l)(r)]. If P is such a pair, then P0 = P(λx.[λy.[x]]) and P1 = P(λx.[λy.[y]]). Using pairs we can of course produce arbitrary sequences, as in Lisp. In what follows I will use a Lisp-based shorthand, using CAR, CDR, CONS, and so on. These usages are discharged in Appendix I. Using this shorthand, we can give the following example of a set of semantic rules for association with the syntactic rules given above, which preserves the appropriate dependency, assuming that b'(a',S') is the desired result at each level:

CONS(CADR(Q')(a')(CAR(Q')), CDDR(Q'))    (1)
    where Q' is short for S|Z|b',
CONS(CAR(Q')(a')(S'), CDR(Q'))           (2)
    where Q' is short for Z|b',
ADJOIN(Z', b')                           (3)

These rules are most easily understood in reverse order. Rule 3 simply appends the interpretation of the immediately dominated b to the sequence of interpretations of the dominated sequence of b's. Rule 2 takes the first interpretation of such a sequence, applies it to the interpretations of the immediately dominated a and S, and prepends the result to the unused balance of the sequence of b interpretations. We now have a sequence consisting of first a sentential interpretation, and then a number of b interpretations. Rule 1 thus applies the second (b type) element of such a sequence to the interpretation of the immediately dominated a, and the first (S type) element of the sequence. The result is again prepended to the unused balance, if any. The patient reader can satisfy himself that this will produce the following (crossed) interpretation:

b1'(a1', b2'(a2', b3'(a3', S')))
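As a cross-check on rules (1)-(3) and their semantics, here is a small Python simulation; tuples stand in for the compound non-terminals, ordinary strings and lists stand in for the Church-encoded sequences of interpretations, and the primed notation is only a rendering of the intended applications.

    def derive(n):
        # Simulate a derivation of a^n b^n under rules (1)-(3).
        node, terminals = ("S",), []
        for i in range(1, n):                  # rule (1): S|Z -> a S|Z|b
            terminals.append("a%d" % i)
            node = node + ("b%d" % i,)
        terminals.append("a%d" % n)            # rule (2): S|Z -> a S Z|b
        terminals.extend(list(node[1:]) + ["b%d" % n])  # S -> e; rule (3)
        sem = "S'"                             # compose crossed semantics
        for i in range(n, 0, -1):
            sem = "b%d'(a%d',%s)" % (i, i, sem)
        return terminals, sem

    print(derive(3))
    # (['a1', 'a2', 'a3', 'b1', 'b2', 'b3'],
    #  "b1'(a1',b2'(a2',b3'(a3',S')))")

Each bᵢ' ends up applied to the matching aᵢ', exactly the crossed pairing that the rule-number features enforce in the syntax.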
The patient reader can satisfy himself that this will produce the following (crossed) interpretation: 17 II.3 Parsin~ As for parsing context-free grammars with the non-terminals and schemata this proposal allows, very little needs to be added to the mechanisms I have provided to deal with non-sequence schemata in GPSG, as described in (Thompson 1981 b). We simply treat all non-terminals as sequences, many of only one element. The same basic technique of a bottom- up chart parsing strategy, which substitutes for matched variables in the active version of the rule, will do the job. By restricting only one sequence variable to occur once in each non- terminal, the task of matching is kept simple and deterministic. Thus we allow e.g. SIZIb but not ZlblZ. The substitutions take place by concatenation, so that if we have an instance of rule (~) matching first [a] and then [3,b,b,b] in the course of bottom-up processing, the Z on the right hand side will match [b,b], and the resulting substitution into the left hand side will cause the constituent to be labeled [S,b,b]. In making this extension to my existing system, the changes required were all localised to that part of the code which matches rule parts against nodes, and here the price is paid only if a sequence variable is encountered. This suggests that the impact of this mechanism on the parsing complexity of the system is quite small. III. APPLICATION TO DUTCH Given the limited space available, I can present only a very high-level account of how this extension to GPSG can provide an account of crossed serial dependencies in Dutch. In particular I will have nothing to say about the difficult issue of the precise distribution of tensed and untensed verb forms. III. 1 The Dutch data Discussion of the phenomenon of crossed serial dependencies in Dutch subordinate clauses is bedeviled by considerable disagreement about just what the facts are. The following five examples form the core of the basis for my analysis: I) omdat ik probeer Nikki te leren Nederlands te spreken 2) omdat ik probeer Nikki Nederlands te leren spreken 3) omdat ik Nikki probeer te leren Nederlands te spreken 4) omdat ik Nikki Nederlands probeer te leren spreken 5) * omdat ik Nikki probeer Nederlands te leren spreken. With the proviso that (I) is often judged questionable, at least on stylistic grounds, this pattern of judgements seems fairly stable among native speakers of Dutch from the Netherlands. There is some suggestion that this is not the pattern of judgements typical of native speakers of Dutch from Belgium. III.2 Grammar rules for the Dutch data This pattern leads us to propose the following basic rules for subordinate clauses: A) S' -> omdat NP VP B) VP -> V VP (probeer) C) VP -> NP V VP (leren) D) VP -> NP V (spreken). Taken straight, these give us (I) only. For (2) - (4), we propose what amounts to a verb lowering approach, where verbs are lowered onto VPs, whence they lower again to form compound verbs. (5) is ruled out by requiring that a lowered verb must have a target verb to compound with. The resulting compound may itself be lowered, but only as a unit. This approach is partially inspired by Seuren's transformational account in terms of predicate raising (Seuren 1972). So the interpretation of the compound labels is that e.g. [V,V] is a compound verb, and [VP,V,V! is a VP with a compound verb lowered onto it. 
III.2 Grammar rules for the Dutch data

This pattern leads us to propose the following basic rules for subordinate clauses:

A) S' -> omdat NP VP
B) VP -> V VP (probeer)
C) VP -> NP V VP (leren)
D) VP -> NP V (spreken)

Taken straight, these give us (1) only. For (2) - (4), we propose what amounts to a verb lowering approach, where verbs are lowered onto VPs, whence they lower again to form compound verbs. (5) is ruled out by requiring that a lowered verb must have a target verb to compound with. The resulting compound may itself be lowered, but only as a unit. This approach is partially inspired by Seuren's transformational account in terms of predicate raising (Seuren 1972). So the interpretation of the compound labels is that e.g. [V,V] is a compound verb, and [VP,V,V] is a VP with a compound verb lowered onto it. It follows that for each VP rule, we need an associated compound version which allows the lowering of (possibly compound) verbs from the VP onto the verb, so we would have e.g.

Di) VP|Z -> NP Z|V

where we now use Z as a variable over sequences of Vs. The other half of the process must be reflected in rules associated with each VP rule which introduces a VP complement, allowing the verb to be lowered onto the complement. As this rule must also expand VPs with verbs lowered onto them, we want e.g.

Cii) VP|Z -> NP VP|Z|V

Rather than enumerate such rules, we can use metarules to conveniently express what is wanted:

I) VP -> ... V ... ==> VP|Z -> ... Z|V ...
II) VP -> ... V VP ==> VP|Z -> ... VP|Z|V

(I) will apply to all three of (B) - (D), allowing compound verbs to be discharged at any point. (II) will apply to (B) and (C), allowing the lowering (with compounding if needed) of verbs onto complements. We need one more rule, to unpack the compound verbs, and the syntactic part of our effort is complete:

E) W|Z -> W Z

where W is an ordinary variable whose range consists of V. This slight indirection is necessary to insure that subcategorisation information propagates correctly. By suitably combining the rules (A) - (E), together with the meta-generated rules (Bi) - (Di), (Bii) and (Cii), we can now generate examples (2) - (4). (4), which is fully crossed, is very similar to the example in section II.1, and uses meta-generated expansions for all its VP nodes:

S' -> omdat ik VP                        (A)
VP -> [VP,Vb]                            (Bii)
[VP,Vb] -> Nikki [VP,Vb,Vc]              (Cii)
[VP,Vb,Vc] -> Nederlands [Vb,Vc,Vd]      (Di)
[Vb,Vc,Vd] -> probeer [Vc,Vd]            (E)
[Vc,Vd] -> te leren spreken              (E)

Once again I include the relevant rule name in the margin, and indicate with subscripts the rule name feature introduced to enforce subcategorisation. Sentences (2) and (3) each involve two meta-generated rules and one ordinary one. For reasons of space, only (3) is illustrated below. (2) is similar, but using rules (B), (Cii), and (Di).

S' -> omdat ik VP                        (A)
VP -> [VP,Vb]                            (Bii)
[VP,Vb] -> Nikki [Vb,Vc] VP              (Ci)
[Vb,Vc] -> probeer te leren              (E)
VP -> Nederlands te spreken              (D)

III.3 Semantic rules for the Dutch data

The semantics follows that in section II.2 quite closely. For our purposes simple interpretations of (B) - (D) will suffice:

B') V'(VP')
C') V'(NP',VP')
D') V'(NP')

The semantics for the metarules is also reasonably straightforward, given that we know where we are going:

I') F(V') ==> CONS(F(CAR(Z|V')), CDR(Z|V'))
II') F(V',VP') ==> CONS(F(CADR(Q'), CAR(Q')), CDDR(Q')), where Q' is short for VP|Z|V'

(I') will give semantics very much like those of rule (2) in section II.2, while (II') will give semantics like those of rule (1). (E') is just like (3):

E') ADJOIN(Z', W')

It is left to the enthusiastic reader to work through the examples and see that all of sentences (1) - (4) above in fact receive the same interpretation.

III.4 Which structure is right - evidence from conjunction

The careful reader will have noted that the structures proposed are not the same as those of BKPZ. Their structures have the compound verb depending from the highest VP, while ours depend from the lowest possible. With the exception of BKPZ's example (~3), which none of my sources judge grammatical with the 'root Marie' as given, I believe my proposal accounts for all the judgements cited in their paper.
On the other hand, I do not believe they can account for all of the following conjunction judgements, the first three based on (4), the next two on (3), whereas under the standard GPSG treatment of conjunction they all fall out of our analysis:

6) omdat ik Nikki Nederlands wil leren spreken en Frans wil laten schrijven
   because I want to teach Nikki to speak Dutch and let [Nikki] write French
7) * omdat ik Nikki Nederlands wil leren spreken en Frans laten schrijven
8) omdat ik Nikki Nederlands wil leren spreken en Carla Frans wil laten schrijven
   because I want to teach Nikki to speak Dutch and let Carla write French
9) omdat ik Nikki wil leren Nederlands te spreken en Frans te schrijven
   because I want to teach Nikki to speak Dutch and to write French
10) * omdat ik Nikki wil leren Nederlands te spreken en Carla Frans te schrijven
    or ... en Frans (te) laten schrijven

(6) contains a conjoined [VP,V,V], (8) a conjoined [VP,V], and (7) fails because it attempts to conjoin a [VP,V,V] with a [VP,V]. (9) conjoins an ordinary VP inside a [VP,V], and (10) fails by trying to conjoin a VP with either a non-constituent or a [VP,V]. It is certainly not the case that adding this small amount of 'evidence' to the small amount already published establishes the case for the deep embedding, but I think it is suggestive. Taken together with the obvious way in which the deep embedding allows some vestige of compositionality to persist in the semantics, I think that at the very least a serious reconsideration of the BKPZ proposal is in order.

IV. CONCLUSIONS

It is of course too early to tell whether this augmentation will be of general use or significance. It does seem to me to offer a reasonably concise and satisfying account of at least the Dutch phenomena without radically altering the grammatical framework of GPSG. Further work is clearly needed to exactly establish the status of this augmented GPSG with respect to generative capacity and parsability. It is intriguing to speculate as to its weak equivalence with the tree adjunction grammars of Joshi et al. Even in the weakest augmentation, allowing only one occurrence of one variable over sequences in any constituent of any rule, the apparent similarity of their power remains to be formally established, but it at least appears that like tree adjunction grammars, these grammars cannot generate aⁿbⁿcⁿ with both dependencies crossed, and like them, it can generate it with any one set crossed and the other nested. Neither can it generate WW, although it can with a sequence variable ranging over the entire alphabet. If it can be shown that it is indeed weakly equivalent to TAG, then strong support will be lent to the claim that an interesting new point on the Chomsky hierarchy between CFGs and the indexed grammars has been found.

ACKNOWLEDGEMENTS

The work described herein was partially supported by SERC Grant GR/B/93086. My thanks to Han Reichgelt, for renewing my interest in this problem by presenting a version of Seuren's analysis in a seminar, and providing the initial sentential data; to Ewan Klein, for telling me about Church's 'implementation' of pairs and conditionals in the lambda calculus; to Brian Smith, for introducing me to the wonderfully obscure power of the Y operator; and to Gerald Gazdar, Aravind Joshi, Martin Kay and Mark Steedman, for helpful discussion on various aspects of this work.
APPENDIX I
SEQUENCES IN THE UNTYPED LAMBDA CALCULUS

To imbed enough of Lisp in the lambda calculus for our needs, we require not just pairs, but NIL and conditionals as well. Conditionals are implemented similarly to pairs - "if p then q else r" is simply p applied to the pair <q,r>, where TRUE and FALSE are the left and right pair element selectors respectively. In order to effectively construct and manipulate lists, some method of determining their end is required. Numerous possibilities exist, of which we have chosen a relatively inefficient but conceptually clear approach. We compose lists of triples, rather than pairs. Normal CONS pairs are given as <TRUE,car,cdr>, while NIL is <FALSE,,>. Given this approach, we can define the following shorthand, with which the semantic rules given in sections II.2 and III.3 can be translated into the lambda calculus:

TRUE = λx.[λy.[x]]
FALSE = λx.[λy.[y]]
NIL = λf.[f(FALSE)(λp.[p])(λp.[p])]
CONS(A,B) = λf.[f(TRUE)(A)(B)]
CAR(L) = L(λx.[λy.[λz.[y]]])
CDR(L) = L(λx.[λy.[λz.[z]]])
CONSP(L) = L(λx.[λy.[λz.[x]]])
CADR(L) = CAR(CDR(L))
ADJOINFORM = λa.[λL.[λN.[CONSP(L)(CONS(CAR(L), a(CDR(L))(N)))(CONS(N,NIL))]]]
Y = λf.[λx.[f(x(x))](λx.[f(x(x))])]
ADJOIN(L,N) = Y(ADJOINFORM)(L)(N)

Note that we use Church's Y operator to produce the required recursive definition of ADJOIN.

REFERENCES

Ades, A. and Steedman, M. 1982. On the order of words. Linguistics and Philosophy, to appear.
Bresnan, J.W., Kaplan, R., Peters, S. and Zaenen, A. 1982. Cross-serial dependencies in Dutch. Linguistic Inquiry 13.
Gazdar, G. 1981c. Phrase structure grammar. In P. Jacobson and G. Pullum, editors, The nature of syntactic representation. D. Reidel, Dordrecht.
Joshi, A. 1983. How much context-sensitivity is required to provide reasonable structural descriptions: Tree adjoining grammars. Version submitted to this conference.
Joshi, A.K., Levy, L.S. and Yueh, K. 1975. Tree adjunct grammars. Journal of Computer and System Sciences.
Kaplan, R.M. and Bresnan, J. 1982. Lexical-functional grammar: A formal system of grammatical representation. In J. Bresnan, editor, The mental representation of grammatical relations. MIT Press, Cambridge, MA.
Seuren, P. 1972. Predicate Raising in French and Sundry Languages. ms., Nijmegen.
Steedman, M. 1983. On the Generality of the Nested Dependency Constraint and the Reason for an Exception in Dutch. In Butterworth, B., Comrie, B. and Dahl, O., editors, Explanations of Language Universals. Mouton.
Thompson, H.S. 1981b. Chart Parsing and Rule Schemata in GPSG. In Proceedings of the Nineteenth Annual Meeting of the Association for Computational Linguistics. ACL, Stanford, CA. Also DAI Research Paper 165, Dept. of Artificial Intelligence, Univ. of Edinburgh. | 1983 | 3 |
Formal Constraints on Metarules*

Stuart M. Shieber, Susan U. Stucky, Hans Uszkoreit, and Jane J. Robinson
SRI International
333 Ravenswood Avenue
Menlo Park, California

Abstract

Metagrammatical formalisms that combine context-free phrase structure rules and metarules (MPS grammars) allow concise statement of generalizations about the syntax of natural languages. Unconstrained MPS grammars, unfortunately, are not computationally "safe." We evaluate several proposals for constraining them, basing our assessment on computational tractability and explanatory adequacy. We show that none of them satisfies both criteria, and suggest new directions for research on alternative metagrammatical formalisms.

1. Introduction

The computational-linguistics community has recently shown interest in a variety of metagrammatical formalisms for encoding grammars of natural language. A common technique found in these formalisms involves the notion of a metarule, which, in its most common conception, is a device used to generate grammar rules from other given grammar rules.1 A metarule is essentially a statement declaring that, if a grammar contains rules that match one specified pattern, it also contains rules that match some other specified pattern. For example, the following metarule

(1) VP -> V VP  ==>  VP -> V ADVP VP
        [+FIN]           [+FIN]

states that, if there is a rule that expands a finite VP into a finite auxiliary and a nonfinite VP, there will also be a rule that expands the VP as before except for an additional adverb between the auxiliary and the nonfinite VP.2 The patterns may contain variables, in which case they characterize "families" of related rules rather than individual pairs.

* This research was supported by the National Science Foundation grant No. IST-8103550. The views and conclusions expressed in this document are those of the authors and should not be interpreted as representative of the views of the National Science Foundation or the United States government. We are indebted to Fernando Pereira, Stanley Peters, and Stanley Rosenschein for many helpful discussions leading to the writing of this paper.
1 Metarules were first utilized for natural-language research and are most extensively developed within the theory of Generalized Phrase Structure Grammar (GPSG) [Gazdar and Pullum, 1982; Gawron et al., 1982; Thompson, 1982].
2 A metarule similar to our example was proposed by Gazdar, Pullum, and Sag [1982].

The metarule notion is a seductive one, intuitively allowing generalizations about the grammar of a language to be stated concisely. However, unconstrained metarule formalisms may possess more expressive power than is apparently needed, and, moreover, they are not computationally "safe." For example, they may generate infinite sets of rules and describe arbitrary languages. In this paper we examine both the formal and linguistic implications of various constraints on metagrammatical formalisms consisting of a combination of context-free phrase structure rules and metarules, which we will call metarule phrase-structure (MPS) grammars. The term "MPS grammar" is used in two ways in this paper. An MPS grammar can be viewed as a grammar in its own right that characterizes a language directly. Alternatively, it can be viewed as a metagrammar, that is, as a generator of a phrase structure object grammar, the characterized language being defined as the language of the object grammar.
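The generative reading of a metarule can be sketched in a few lines of Python. Rules are encoded as (mother, daughters) pairs; to keep the sketch safe, the patterns here are variable-free literal rules, so closure is trivially finite. The encoding is our own illustration, not a proposed formalism.

    def close_under_metarule(rules, antecedent, consequent):
        # If some rule matches the antecedent pattern, add the consequent.
        return rules | {consequent} if antecedent in rules else rules

    rules = {("VP[+FIN]", ("V[+FIN]", "VP[-FIN]"))}
    rules = close_under_metarule(
        rules,
        ("VP[+FIN]", ("V[+FIN]", "VP[-FIN]")),
        ("VP[+FIN]", ("V[+FIN]", "ADVP", "VP[-FIN]")),
    )
    for rule in sorted(rules):
        print(rule)

Once the patterns contain variables and the output of one application can feed another, iterating such closure steps need not terminate, which is precisely the safety problem at issue below.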
Uszkoreit and Peters [1982] have developed a formal definition of MPS grammars and have shown that an unconstrained MPS grammar can encode any recursively enumerable language. As long as the framework for grammatical description is not seen as part of a theory of natural language, this fact may not affect the usefulness of MPS grammars as tools for purely descriptive linguistics research; however, it has direct and obvious impact on those doing research in a computational or theoretical linguistic paradigm. Clearly, some way of constraining the power of MPS grammars is necessary to enable their use for encoding grammars in a computationally feasible way. In the sections that follow, we consider several formal proposals for constraining their power and discuss some of their computational and linguistic ramifications.

In our discussion of the computational ramifications of the proposed constraints, we will use the notion of weak-generative capacity as a barometer of the expressive power of a formalism. Other notions of expressivity are possible, although some of the traditional ones may not be applicable to MPS grammars. Strong-generative capacity, for instance, though well-defined, seems to be an inadequate notion for comparison of MPS grammars, since it would have to be extended to include information about rule derivations as well as tree derivations. Similarly, we do not mean to imply by our arguments that the class of natural languages corresponds to some class that ranks low in the Chomsky hierarchy merely because the higher classes are less constrained in weak-generative power. The appropriate characterization of possible natural languages may not coincide at all with the divisions in the Chomsky hierarchy. Nevertheless weak-generative capacity--the weakest useful metric of capacity--will be the primary concern of this paper as a well-defined and relevant standard for measuring constraints.

2. Constraints by Change of Perspective

Peters and Ritchie [1973] have pointed out that context-sensitive grammars have no more than context-free power when their rules are viewed as node-admissibility conditions. This suggests that MPS grammars might be analogously constrained by regarding the metarules as something other than phrase-structure grammar generators. A brief examination of three alternative approaches indicates, however, that none of them clearly yields any useful constraints on weak-generative capacity. Two of the alternatives discussed below consider metarules to be part of the grammar itself, rather than as part of the metagrammar. The third views them as a set of redundant generalizations about the grammar.

Stucky [forthcoming] investigates the possibility of defining metarules as complex node-admissibility conditions, which she calls meta-node-admissibility conditions. Two computationally desirable results could ensue, were this reinterpretation possible. Because the metarules do not generate rules under the meta-node-admissibility interpretation, it follows that there will be neither a combinatorial explosion of rules nor any derivation resulting in an infinite set of rules (both of which are potential problems that could arise under the original generative interpretation). For this reinterpretation to have a computationally tractable implementation, however, two preconditions must be met.
First, an independent mechanism must be provided that assigns to any string a finite set of trees, including those admitted by the metarules together with the base rules. Second, a procedure must be defined that checks node admissibilities according to the base rules and metarules of the grammar--and that terminates. It is this latter condition that we suspect will not be possible without constraining the weak-generative capacity of MPS grammars. Thus, this perspective does not seem to change the basic expressivity problems of the formalism by itself.

A second alternative, proposed by Kay [1982], is one in which metarules are viewed as chart-manipulating operators on a chart parser. Here too, the metarules are not part of a metagrammar that generates a context-free grammar; rather, they constitute a second kind of rule in the grammar. Just like the meta-node-admissibility interpretation, Kay's explication seems to retain the basic problem of expressive power, though Kay hints at a gain in efficiency if the metarules are compiled into a finite-state transducer.

Finally, an alternative that does not integrate metarules into the object grammar but, on the other hand, does not assign them a role in generating an object grammar either, is to view them as redundancy statements describing the relationships that hold among rules in the full grammar. This interpretation eliminates the problem of generating infinite rule sets that gave rise to the Uszkoreit and Peters results. However, it is difficult to see how the solution supports a computationally useful notion of metarules, since it requires that all rules of the grammar be stated explicitly. Confining the role of metarules to that of stating redundancies prevents their productive application, so that the metarules serve no clear computational purpose for grammar implementation.3

3 As statements about the object grammar, however, metarules might play a role in language acquisition or in diachronic processes.

We thus conclude that, in contrast to context-sensitive grammar, in which an alternative interpretation of the phrase structure rules makes a difference in weak-generative capacity, MPS grammars do not seem to benefit from the reinterpretations we have investigated.

3. Formal Constraints

Since it appears unlikely that a reinterpretation of MPS grammars can be found that solves their complexity problem, formal constraints on the MPS formalism itself have to be explored if we want to salvage the basic concept of metarules. In the following examination of currently proposed constraints, the two criteria for evaluation are their effects on computational tractability and on the explanatory adequacy of the formalism.

As an example of constraints that satisfy the criterion of computational tractability but not that of explanatory adequacy, we examine the issue of essential variables. These are variables in the metarule pattern that can match an arbitrary string of items in a phrase structure rule. Uszkoreit and Peters have shown that, contrary to an initial conjecture by Joshi (see [Gazdar, 1982, fn. 28]), allowing even one such variable per metarule extends the power of the formalism to recursive enumerability. Gazdar has recommended [1982, p. 160] that the power of metarules be controlled by eliminating essential variables, exchanging them for abbreviatory variables that can stand only for strings in a finite and extrinsically determined range. This constraint yields a computationally tractable system with only context-free power. Exchanging essential for abbreviatory variables is not, however, as attractive a prospect as it appears at first blush.
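Before turning to the difficulties, a small Python sketch may help fix the difference between the two regimes; it anticipates the VSO metarule (2) discussed next, and the rule encodings and names are illustrative assumptions of ours.

    def flat_s_rules(vp_rules, u_range=None):
        # Metarule: VP -> V U  ==>  S -> V NP U.
        out = []
        for lhs, rhs in vp_rules:
            if lhs == "VP" and rhs[:1] == ("V",):
                u = rhs[1:]        # essential reading: U is whatever follows V
                if u_range is None or u in u_range:
                    out.append(("S", ("V", "NP") + u))
        return out

    vp_rules = [("VP", ("V",)), ("VP", ("V", "NP")), ("VP", ("V", "NP", "VP"))]
    print(flat_s_rules(vp_rules))                     # essential U: all frames
    stipulated = {(), ("NP",), ("NP", "VP")}          # abbreviatory range
    vp_rules.append(("VP", ("V", "NP", "SBAR")))      # language acquires a frame
    print(flat_s_rules(vp_rules, stipulated))         # the new frame is missed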
Uszkoreit and Peters [1982] show that by restricting MPS grammars to using abbreviatory variables only, some significant generalizations are lost. Consider the following metarule that is proposed and motivated in [Gazdar 1982] for endowing VSO languages with the category VP. The metarule generates flat VSO sentence rules from VP rules.

(2) VP -> V U  ==>  S -> V NP U

Since U is an abbreviatory variable, its range needs to be stated explicitly. Let us imagine that the VSO language in question has the following small set of VP rules:

(3) VP -> V
    VP -> V NP
    VP -> V S̄
    VP -> V VP
    VP -> V NP VP

Therefore, the range of U has to be {e, NP, S̄, VP, NP VP}. If these VP rules are the only rules that satisfy the left-hand side of (2), then (2) generates exactly the same rules as it would if we declared U to be an essential variable--i.e., let its range be (V_T ∪ V_N)*. But now imagine that the language adopts a new subcategorization frame for verbs,4 e.g., a verb that takes an NP and an S̄ as complements. VP rule (4) is added:

(4) VP -> V NP S̄

Metarule (2) predicts that VPs headed by this verb do not have a corresponding flat VSO sentence rule. We will have to change the metarule by extending the range of U in order to retain the generalization originally intended by the metarule. Obviously, our metarule did not encode the right generalization (a simple intension-extension problem). This shortcoming can also surface in cases where the input to a metarule is the output of another metarule. It might be that metarule (2) not only applies to basic verb rules but also includes the output of, say, a passive rule. The range of the variable U would have to be extended to cover these cases too, and, moreover, might have to be altered if its feeding metarules change. Thus, if the restriction to abbreviatory variables is to have no effect on the weak-generative capacity of a grammar, the range assigned to each variable must include the range that would have actually instantiated the variable on an expansion of the MPS grammar in which the variable was treated as essential. The assignment of a range to the variable can only be done post factum. This would be a satisfactory result, were it not for the fact that finding the necessary range of a variable in this way is an undecidable problem in general. Thus, to exchange essential for abbreviatory variables is to risk affecting the generative capacity of the grammar--with quite unintuitive and unpredictable results. In short, the choice is among three options: to affect the language of the grammar in ways that are linguistically unmotivated and arbitrary, to solve an undecidable problem, or to discard the notion of exchanging essential for abbreviatory variables--in effect, a Hobson's choice.

4 Note that it does not matter whether the grammar writer discovers an additional subcategorization, or the language develops one diachronically; the same problem obtains.

An example of a constraint that satisfies the second criterion, that of explanatory adequacy, but not the first, computational tractability, is the lexical-head constraint of GPSG [Gazdar and Pullum, 1982]. This constraint allows metarules to operate only on rules whose stipulated head is a lexical (preterminal) category. Since the Uszkoreit and Peters results are achieved even under this restriction to the formalism, the constraint does not provide a solution to the problem of expressive power.
Of course, this is no criticism of the proposal, since it was never intended as a formal restriction on the class of languages, but rather as a restriction on linguistically motivated grammars. Unfortunately, the motivation behind even this use of the lexical-head constraint may be lacking. One of the few analyses that relies on the lexical-head constraint is a recent GPSG analysis of coordination and extraction in English [Gazdar, 1981]. In this case--indeed, in general--one could achieve the desired effect simply by specifying that the coefficient of the bar feature be lexical. It remains to be seen whether the constraint must be imposed for enough metarules so as to justify its incorporation as a general principle.

Even with such motivation one might raise a question about the advisability of the lexical-head constraint on a metatheoretical level. The linguistic intuition behind the constraint is that the role of metarules is to "express generalizations about possibilities of subcategorization" exclusively [Gazdar, Klein, Pullum, and Sag, 1982, p. 39], e.g., to express the passive-active relation. This result is said to follow from principles of X̄ syntax [Jackendoff, 1977], in which just those categories that are subcategorized for are siblings of a lexical head. However, in a language with freer word order than English, categories other than those subcategorized for will be siblings of lexical heads; they would, thus, be affected by metarules even under the lexical-head constraint. This result will certainly follow from the liberation rule approach to free word order [Pullum, 1982]. The original linguistic generalization intended by the lexical-head constraint, therefore, will not hold cross-linguistically.

Finally, there is the current proposal of the GPSG community for constraining the formal powers of metarules by allowing each metarule to apply only once in a derivation of a rule. Originally dubbed the once-through hypothesis, this constraint is now incorporated into GPSG under the name finite closure [Gazdar and Pullum, 1982]. Although linguistic evidence for the constraint has never been provided, the formal motivation is quite strong because, under this constraint, the metarule formalism would have only context-free power.

Several linguistic constructions present problems with respect to the adequacy of the finite-closure hypothesis. For instance, the liberation rule technique for handling free-word-order languages [Pullum, 1982] would require a noun-phrase liberation rule to be applied twice in a derivation of a rule with sibling noun phrases that permute their subconstituents freely among one another. As a hypothetical example of this phenomenon, let us suppose that English allowed relative clauses to be extraposed in general from noun phrases, instead of allowing just one extraposition. For instance, in this quasi-English, the sentence

(5) Two children are chasing the dog who are small that is here.

would be a grammatical paraphrase of

(6) Two children who are small are chasing the dog that is here.

Let us suppose further that the analysis of this phenomenon involved liberation of the NP-S substructure of the noun phrases for incorporation into the main sentence. Then the noun-phrase liberation rule would apply once to liberate the subject noun phrase, once again to liberate the object noun phrase. That these are not idle concerns is demonstrated by the following sentence in the free-word-order Australian aboriginal language Warlpiri.5

5 This example is taken from [van Riemsdijk, 1981].
s 4Note that it does not matter whether the grammar writer discovers an additional subcateKorization, or the language develops one diachronically; the same problem obtains. 5This example is t,.ken from [van Riemsdijk, 1981]. 24 (7) Kutdu-jarra-rlu ks-pals maliki wita-jarra-rlu chiId-DUAL-ERG AUX:DUAL dog-ABS smalI-DUAL-ERG yalumpu wajilipi-nyi that-ABS chase=NONPAST Two 8mall children are cha,ing that dog. The Warlpiri example is analogous to the quasi-English example in that both sentences have two discontinuous NPs in the same distribution. Furthermore, the liberation rule approach has been proposed as a method of modeling the free word order of Waripiri. Thus, it appears that finite closure is not consistent with the liberation rule approach to free word order. Adverb distribution presents another problem for the hypothesis. In German, for example, and to a lesser extent in Engiish, an unbounded number of adverbs can be quite freely interspersed with the complements of a verb. The following German sentence is an extreme example of this phenomenon [Uszkoreit, 1982]. The sequence of its major constituents is given under (9). (8) Gestern hatte in dec Mittagspause yesterday had during lunch break der Brigadier in dec Werkzeugkammer the foreman (NOM) in the tool shop dam Labeling au~ Boehaftigkeit lancaam the apprentice (DAT) maliciously slowly zehn schmierige Gasseisenscbeiben unbemerkt ten greasy cast iron disks (ACC) unnoticed in die Hosentasche gesteckt in the pocket put )'*aerdav, durin~ lunch break in the tool shop, the foreman, malicioedy and unnoticed, put ten grea,y caJt iron disks tlowist into the apprentice's pocket. (9) ADVP VrrN ADVP NPsuuJ ADVP NProaJ ADVP ADVP NPDoa.t ADVP PP VIN e A metarule might therefore be proposed that inserts a single adverb in a verb-phrase rule. Repeated application of this rule (in contradiction to the finite-closure hypothesis) would achieve the desired effect. To maintain the finite-closure hypothesis, we could merely extend the notion of context-free rule to allow regular expressions on the right-hand side of a rule. The verb phrase rule would then be accurately, albeit clumsily, expressed as, say, VP -.* V NP ADVP* or VP -* V NP ADVP* PP ADVP* for ditransitives. Similar constructions in free-word-order languages do not permit such naive solutions. As an example, let us consider the Japanese causative. In this construction, the verb sutRx "-sase" signals the causativization of the verb, allowing an extra NP argument. The process is putatively unbounded (ignoring performance limitations). Furthermore, Japanese allows the NPs to order freely relative to one another (subject to considerations of ambiguity and focus), so that a fiat structure with some kind of extrinsic ordering is presumably preferable. One means of achieving a fiat structure with extrinsic ordering is by using the ID/LP formalism, a subformalism of GPSG that allows immediate dominance (ID) information to be specified separately from linear precedence (LP) notions. (Cf. context-free phrase structure grammar, which forces a strict one- to-one correlation between the two types of information.) ID information is specified by context-free style rules with unordered right-hand sides, notated, e.g., .4 ~ B, C, D. LP informa,Aon is specified as a partial order over the nonterminals in the ..orr-,m max, notated, e.g., B < C (read B precedes C). These two rules can be viewed as schematizing a set of three context-free rules, namely, A -- B C D, A -- B D C, and A -- D B C. 
Without a causativization metarule that can operate more than once, we might attempt to use the regular expression notation that solved the adverb problem. For example, we might postulate the ID rule VP -> NP*, V, sase* with the LP relation NP < V < sase, but no matching of NPs with sases is achieved. We might attempt to write a liberation rule that pulls NP-sase pairs from a nested structure into a flat one, but this would violate the finite-closure hypothesis (as well as Pullum's requirement precluding liberation through a recursive category). We could attempt to use even more of the power of regular-expression rules with ID/LP, i.e., VP -> {NP, sase}*, V under the same LP relation. The formalism presupposed by this analysis, however, has greater than context-free power,6 so that this solution may not be desirable. Nevertheless, it should not be ruled out before the parsing properties of such a formalism are understood.7 Gunji's analysis of Japanese, which attempts to solve such problems with the multiple application of a slash introduction metarule [Gunji, 1980], again raises the problem of violating the finite-closure hypothesis (as well as being incompatible with the current version of GPSG which disallows multiple slashes). Finally, we could always move causativization into the lexicon as a lexical rule. Such a move, though it does circumvent the difficulty in the syntax, merely serves to move it elsewhere without resolving the basic problem.

Yet another alternative involves treating the right-hand sides of phrase structure rules as sets, rather than multisets as is implicit in the ID/LP format. Since the nonterminal vocabulary is finite, right-hand sides of ID rules must be subsets of a finite set and therefore finite sets themselves. This hypothesis is quite similar in effect to the finite-closure hypothesis, albeit even more limited, and thus inherits the same problems as were discussed above.

6 For instance, the grammar S -> {a,b,c}* with a < b < c generates aⁿbⁿcⁿ.
7 Shieber [forthcoming] provides an algorithm for parsing ID/LP grammars directly that includes a method for utilizing the Kleene star device. It could be extended to even more of the regular expression notation, though the effect of such extension on the time complexity of the algorithm is an open question.

4. The Ultimate Solution

An obvious way to constrain MPS grammars is to eliminate metarules entirely and replace them with other mechanisms. In fact, within the GPSG paradigm, several of the functions of metarules have been replaced by other metagrammatical devices. Other functions have not, as of the writing of this paper, though it is instructive to consider the cases covered by this class. In the discussion to follow we have isolated three of the primary functions of metarules. This is not intended as an exhaustive taxonomy, and certain metarules may manifest more than one of these functions.

First, we consider generalizations over linear order. If metarules are metagrammatical statements about rules encoding linear order, they may relate rules that differ only in the linear order of categories. With the introduction of ID/LP format, however, the hypothesis is that this latter metagrammatical device will suffice to account for the linear order among the categories within rules. For instance, the problematic adverb and causative metarules could be replaced by extended context-free rules with ID/LP, as was suggested in Section 3 above.
Shieber [forthcoming] has shown that a pure ID/LP formalism (without metarules, Kleene star, or the like) is no less computationally tractable than context-free grammars themselves. Although we do not yet know what the consequences of incorporating the extended context-free rules would be for computational complexity, ID/LP format can be used to replace certain word-order-variation metarules.

A second function of metarules was to relate sets of rules that differed only in the values of certain specified features. It has been suggested [Gazdar and Pullum 1982] that such features are distributed according to certain general principles. For instance, the slash-propagation metarule has been replaced by the distribution of slash features in accord with such a principle.

A third function of metarules under the original interpretation has not been relegated to other metagrammatical devices. We have no single device to suggest, though we are exploring alternative ways to account for the phenomena. Formally, this third class can be characterized as comprising those metarules that relate sets of rules in which the number of categories on the right- and left-hand sides of rules differ. It is this sort of metarule that is essential for the extension of GPSGs beyond context-free power in the Uszkoreit and Peters proofs [1982]. Simply requiring that such metarules be disallowed would not resolve the linguistic issues, however, since this constraint would inherit the problems connected with the regular expression and set notations discussed in Section 3 above.

This third class further breaks down into two cases: those that have different parent categories on the right- and left-hand sides of the metarule and those that have the same category on both sides. The first case includes those liberation rules that figure in analyses of free-word-order phenomena, plus such other rules as the subject-auxiliary-inversion metarule in English. Uszkoreit [forthcoming] is exploring a method for isolating liberation rules in a separate metagrammatical formalism. It also appears that the subject-auxiliary inversion may be analyzed by already existing principles governing the distribution of features. The second case (those in which the categories on the right- and left-hand sides are the same) includes such analyses as the passive in English. This instance, at least, might be replaced by a lexical-redundancy rule. Thus, no uniform solution has yet been found for this third function of metarules. We conclude that it may be possible to replace MPS-style metagrammatical formalisms entirely without losing generalizations. We are consequently pursuing research to find out.

5. Conclusion

The formal power of metarule formalisms is clearly an important consideration for computational linguists. Uszkoreit and Peters [1982] have shown that the potential exists for defining metarule formalisms that are computationally "unsafe." However, these results do not sound a death knell for metarules. On the contrary, the safety of metarule formalisms is still an open question. We have merely shown that the constraints on metarules necessary to make them formally tractable will have to be based on empirical linguistic evidence as well as solid formal research. The solutions to constraining metarules analyzed here seem to be either formally or linguistically inadequate.
Further research is needed in the actual uses of metarules and in constructions that are problematic for metarules so as to develop either linguistically motivated and computationally interesting constraints on the formalisms, or alternative formalisms that are linguistically adequate but not heir to the problems of metarules.

References

Gawron, J. M., et al., 1982: "Processing English with a Generalized Phrase Structure Grammar," in Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, University of Toronto, Toronto, Canada (15-18 June).
Gazdar, G., 1982: "Phrase Structure Grammar," in P. Jacobson and G. Pullum, eds., The Nature of Syntactic Representation (Reidel, Dordrecht, Holland).
Gazdar, G., E. Klein, G. K. Pullum, and I. A. Sag, 1982: "Coordinate Structure and Unbounded Dependencies," in M. Barlow, D. P. Flickinger, and I. A. Sag, eds., Developments in Generalized Phrase Structure Grammar, Stanford Working Papers in Grammatical Theory, Volume 2 (Indiana University Linguistics Club, Bloomington, Indiana, November).
Gazdar, G. and G. K. Pullum, 1981: "Subcategorization, Constituent Order and the Notion 'Head'," in M. Moortgat, H. v.d. Hulst and T. Hoekstra, eds., The Scope of Lexical Rules, pp. 107-123 (Foris, Dordrecht, Holland).
Gazdar, G. and G. K. Pullum, 1982: "Generalized Phrase Structure Grammar: A Theoretical Synopsis" (Indiana University Linguistics Club, Bloomington, Indiana, August).
Gazdar, G., G. K. Pullum, and I. A. Sag, 1982: "Auxiliaries and related phenomena," Language, Volume 58, Number 3, pp. 591-638.
Gunji, T., 1980: "A Phrase Structure Analysis of the Japanese Language," M.A. dissertation, Ohio State University, Columbus, Ohio.
Jackendoff, R., 1977: "X̄ Syntax," Linguistic Inquiry Monograph 2 (MIT Press, Cambridge, Massachusetts).
Kay, M., 1982: "When Meta-Rules are Not Meta-Rules," in M. Barlow, D. P. Flickinger, and I. A. Sag, eds., Developments in Generalized Phrase Structure Grammar, Stanford Working Papers in Grammatical Theory, Volume 2 (Indiana University Linguistics Club, Bloomington, Indiana, November).
Peters, S. and R. W. Ritchie, 1973: "Context-Sensitive Immediate Constituent Analysis: Context-Free Languages Revisited," in Mathematical Systems Theory, Vol. 6, No. 4, pp. 324-333 (Springer-Verlag, New York).
Peters, S. and R. W. Ritchie, forthcoming: "Phrase-Linking Grammars."
Pullum, G. K., 1982: "Free Word Order and Phrase Structure Rules," in J. Pustejovsky and P. Sells, eds., Proceedings of the Twelfth Annual Meeting of the North Eastern Linguistic Society (Graduate Linguistics Student Association, University of Massachusetts, Amherst, Massachusetts).
Shieber, S., forthcoming: "Direct Parsing of ID/LP Grammars."
Stucky, S., forthcoming: "Metarules as Meta-Node-Admissibility Conditions."
Thompson, H., 1982: "Handling Metarules in a Parser for GPSG," in M. Barlow, D. P. Flickinger, and I. A. Sag, eds., Developments in Generalized Phrase Structure Grammar, Stanford Working Papers in Grammatical Theory, Volume 2 (Indiana University Linguistics Club, Bloomington, Indiana, November).
Uszkoreit, H., forthcoming: "Constituent Liberation."
Uszkoreit, H. and S. J. Peters, 1982: "Essential Variables in Metarules," presented at the 1982 Annual Meeting of the Linguistic Society of America, San Diego, California (December).
van Riemsdijk, H., 1981: "On 'Adjacency' in Phonology and Syntax," in V. A. Burke and J.
Pustejovsky, eds., Proceedings of the Eleventh Annual Meeting of the North Eastern Linguistic Society, University of Massachusetts, Amherst, Massachusetts, pp. 399-413 (April). | 1983 | 4 |
A PROLEGOMENON TO SITUATION SEMANTICS

David J. Israel
Bolt Beranek and Newman Inc.
Cambridge, MA 02238

ABSTRACT

An attempt is made to prepare Computational Linguistics for Situation Semantics.

I INTRODUCTION

The editors of the AI Journal recently hit upon the nice notion of correspondents' columns. The basic idea was to solicit experts in various fields, both within and outside of Artificial Intelligence, to provide "guidance to important, interesting current literature" in their fields. For Philosophy, they made the happy choice of Dan Dennett; for natural language processing, the equally happy choice of Barbara Grosz. Each has so far contributed one column, and these early contributions overlap in one, and as it happens, only one, particular; to wit: Situation Semantics. Witness Dennett:

The hottest new topic in philosophical logic...[is] in some ways a successor or rival to Montague semantics.

And now Grosz:

In recent work, Barwise and Perry address the problem [of what information from the context of an utterance affects which aspects of interpretation and how?] in the context of a proposed model theory of natural language, one that appears to be more compatible with the needs of AI than previous theories.... [I]t is of interest to work in natural-language processing for the kind of compositional semantics it proposes, and the way in which it allows the contexts in which an utterance is used to affect its interpretation.

What is all the fuss about? I want to address this question, but rather indirectly. I want to situate situation semantics in "conceptual space" and draw some comparisons and contrasts between it and accounts in the style of Richard Montague. To this end, a few preliminary points are in order.

A. The Present Situation

First, as to the state of the Situation Semantics literature. There is as yet no published piece of the scope and detail of either "English as a Formal Language" or "The Proper Treatment of Quantification in Ordinary English". Nor, of course, is there anything like that large body of work by philosophers and linguists - computational and otherwise - that has been produced from within the Montague paradigm. Montague's work was more or less the first of its kind. It excited, quite justifiably, an extraordinary amount of interest, and has already inspired a distinguished body of work, some of it from within AI and Computational Linguistics. The latter can hardly be said for Situation Semantics (yet?). So what is there? Besides a few published papers, each of them containing at least one position since abandoned, there is a book, Situations and Attitudes, literally on the very verge of publication. This contains the philosophical/theoretical background of the program - The Big Picture. It also contains a very brief treatment of a very simple fragment of ALIASS. And what, the reader may well ask, is ALIASS? An Artificial Language for Illustrating Aspects of Situation Semantics, that's what. Moreover there is in the works a collaborative effort, to be called Situations and S....* This will contain a "Fragment of Situation Semantics", a treatment of an extended fragment of English. Last, for the moment, but not least, is a second book by Barwise and Perry, Situation Semantics, which will include a treatment of an even more extended fragment of English, together with a self-contained treatment of the technical, mathematical background. (By "self-contained", understand: not requiring either familiarity with or acceptance of The Big Picture presented in S&A.)

* The collaborators being B&P, Robin Cooper, Hans Kamp, and Stanley Peters.
The bottom line: there is very little of Situation Semantics presently available to the masses of hungry researchers.

B.

There are important points of similarity between Situation and Montague semantics, of course. One is that both are committed to formulating mathematically rigorous semantic accounts of English. To this end, both, of course, dip heavily into set theory. But this isn't saying a whole lot; for they deploy very different set theories. Montague, for a variety of technical reasons, was very fond of MKM, a very powerful theory, countenancing huge collections. MKM allows for both sets and (proper) classes, the latter being collections too big to be elements of other collections, and too big to be sets, say, of ZF. It also provides an unnervingly powerful comprehension axiom. B&P, on the other hand, have at least provisionally adopted KPU, a surprisingly weak set theory. Indeed, the vanilla version of KPU comes without an axiom of infinity and (more or less hence) has a model in the hereditarily finite sets. In that setting, even little infinite collections, like the universe of hereditarily finite sets, are proper classes, and beyond the pale. Enough for the moment of set theory, although we shall have to return to this strange land for one more brief visit.

More important, and perhaps more disheartening, similarities are immediately to hand. Both Montague and B&P - thus far - restrict themselves to the declarative fragment of English;* Montague, for the obvious reason that he was a model theorist and a student of Tarski. For such types, the crucial notion to be explicated is that of "truth of a sentence on an interpretation". Moreover, Montague showed no interest in the use(s) of language. Of course people working within his tradition are not debarred from doing so; but any such interest is an extra added attraction. The same point about model theory, broadly construed, holds for Barwise-Perry as well; they certainly aren't syntacticians. But in their case it is reinforced by philosophical considerations which point toward the use of language to convey information as the central use of language - hence, to asserting as the central kind of utterance or speech act. Thus, even when they narrow their sights to this one use, the notion that language is something to be put to various uses by humans to further certain of their purposes is not foreign to Situation Semantics.

*"A Fragment of Situation Semantics" will contain a treatment of certain kinds of English interrogatives; further out in the future, Situation Semantics will contain such a more extensive treatment.

Second, B&P (again: so far) stop short at the awesome boundary of the period.* Here again, this was only to be expected; and here again, the crucial question is whether their overall philosophical perspective so informs their account of natural language as to enable a more fruitful accommodation of work on various aspects of extended discourse. Barbara Grosz hints at a suspicion I share: that although at the moment much of what we have in this regard are promissory notes and wishful thinking, the answer is in the affirmative.

*Breaking out of the straightjacket of the sentence is the job of Situations in Discourse.

II THE BIG PICTURE

The major point, however, concerns the primary focus of the work of Barwise and Perry as contrasted with that of Montague. Montague approached the problem of the semantics of natural language essentially as a model theorist, attempting to apply (newly) orthodox mathematical techniques to the solution of classical problems in the semantics of natural languages, many of which had to do with intensional contexts.
After all, these new techniques - in the development of which Montague played a role - had precisely to do with the treatment of formal languages containing modal and other intensional constructions. What made a fragment of English of interest to Montague, then, was that it contained loads of such contexts. It is as if all of that wondrous machinery, and the technical brilliance to deploy it, were aimed at an analysis of the following sentence:

    While the [...] was [...], [...] seemed to be looking for a unicorn who was thinking of a centaur.

What is astounding, of course, is that Montague should have been able to pull a systematic and rigorous treatment of such contexts out of the model-theoretic hat. When we turn to Situation Semantics, on the other hand, we seem to be back in the linguistic world of first-grade readers:

    Spot ran. Dick saw Spot run. Jane said that Spot ran.

Indeed, the major concern of Barwise-Perry is not the semantics of natural language at all. They have bigger (well, different) fish to fry. First and foremost, they are concerned with sketching an account of the place of meaning and mind in the universe, an account that finds the source of meaning in nomic regularities among kinds of events (situations), regularities which, in general, are independent of language and mind. For the frying of said fish, a treatment of cognitive attitudes is essential. Moreover, and not independently, for any attempt to apply their overall philosophical picture to the semantics of natural language, the propositional attitude contexts pose a crucial and seemingly insuperable obstacle.* Hence the fact that the book Situations and Attitudes precedes Situation Semantics - the first lays the philosophical foundations for the second. Thus the origin of their concern even with the classical problems of the propositional attitudes is different from, though by no means incompatible with, that of Montague's.

*I shall return to this theme below.

Something brief must now be said about The Big Picture. Here goes. The work of B&P can be seen as part of a continuing debate in philosophy about the source of the intentionality of the mental - and the nature of meaning in general; a debate about the right account to give of the phenomenon of one thing or event or state-of-affairs being able to represent (carry information about) another thing or event or state-of-affairs. On one side stand those who see the phenomenon of intentionality as dependent on language - no representation without notation. This doctrine is the heart of current orthodoxy in both philosophy of mind and meta-theory of cognitive psychology. (See, by way of best example, [5].) It is also a doctrine widely thought to be presupposed by the whole endeavor of Artificial Intelligence.* On another side are those who see the representational power of language as itself based on the intentionality of mind.**

*Who knows? Maybe it is.
**These latter can, in turn, be divided into those who seek a naturalistic, in principle physicalist, account and those who, like Frege and Church, pose no such demand.

The striking thing about Barwise and Perry is that, while they stand firmly with those who deny that meaning and intentionality essentially involve language, they reject the thesis that intentionality and meaning are essentially mental or mind-involving. The source of meaning and intentionality is to be found, rather, in the existence of lawlike regularities - constraints - among kinds of events.*

*For an important philosophical predecessor, see [4].
For Barwise-Perry, the analysis of meaning begins with such facts as that smoke means fire or that those spots mean measles. The ground of such facts lies in the ways of the world; in the regularities between event types in virtue of which events of one type can carry information about events of other types.

    If semantics is the theory of meaning, then there is no pun intended in the application of semantic notions to situations in which there is no use of language and, indeed, in which there are no minds. Meaning's natural home is the world, for meaning arises out of the regular relations that hold among situations, among bits of reality. We believe linguistic meaning should be seen within this general picture of a world teeming with meaning, a world full of information for organisms appropriately attuned to that meaning. [3]

There is yet another dimension to the philosophical debate, one to which Barwise and Perry often allude:

    Some theories stress the power of language to classify minds, the mental significance of language, and treat the classification of (external) events as derivative.... A second approach is to focus on the external significance of language, on its connection with the described world rather than the describing mind. Sentences are classified not by the ideas they express, but by how they describe things to be.... Frege adopted a third strategy. He postulated a third realm, a realm neither of ideas nor of worldly events, but of senses. Senses are the "philosopher's stone", the medium that coordinates all three elements in our equation: minds, words and objects. Minds grasp senses, words express them, and objects are referred to by them.... One way of regarding the crucial notion of intension in possible world semantics is as a development of Frege's notion of sense. [3]

Barwise and Perry clearly opt for the second approach. This is one reason for their concern with the problems posed by the propositional attitudes; for it has often been argued that these contexts doom any attempt at a theory of the second type. This is the burden of the dreaded "Slingshot" - a weapon we shall gaze at later. For the moment, though, I want simply to note the connection of this dimension with that about the source and nature of intentionality. Just as (some particular features of) a particular X-ray carries information about the individual on which the machine was trained, e.g., that its leg is broken, so too does an utterance by the doctor of the sentence "Its bone is broken", in a context in which that same individual is what's referred to by "it". One can, of course, learn things about the X-ray and the X-ray machine as well as about the poor patient; just so, one can learn things about the doctor from her utterance. In both cases, the gaining of this information is grounded in certain regularities: in the one case mechanical, optical and electro-magnetic; in the other, perceptual, cognitive, and social-conventional. More to the point, in all cases the central locus of meaning is a relation, a regularity, between types of situation, and the primary focus of significance is an external event or event-type.* Now, alas, for that return to set theory.

*Needless to say, we can talk about both minds and mental events and languages and linguistic events; the key point is simply that a language user is not "really" always talking first and foremost about his/her own mental state. We are not doomed to pathological self-involvement by being doomed to speak and think.
I have studiously avoided telling the reader what situations, events and/or event-types are. Indeed, I haven't even said which, if any, of these are technical terms of Situation Semantics. Later I shall say enough (I hope) to generate an intuitive feel for situations; still, I have been speaking freely of the centrality of relations between events or between event-types. Set-theoretically speaking, such relations are going to be (or be represented by) collections of ordered pairs. Collections, but not sets. These collections are proper classes relative to KPU; so, if this be the last word on the matter, those very regularities so central to the account are not themselves available within the account - that is, they are not (represented by) set-theoretic constructs generated from the primitives by way of the resources of KPU. For all such constructs are finite.* Needless to say, that isn't the last word on the matter. Still, this is scarcely the place for an extended treatment of the issue; I raise it here simply to drive home a point about that first similarity between Montague and Situation Semantics. Montague wanted a very strong background theory within which models can be constructed precisely because he didn't want to have to worry about any (size) constraints on such models. B&P put their money on a very weak set theory precisely because they want there to be such constraints; in particular because they want to erect a certain kind of barrier to the infinite. Obviously, large issues loom on the horizon; let's leave them there.

*Assuming that we stick to an interpretation within the hereditarily finite sets, as we can.

I want now briefly to discuss 3 major aspects of Situation Semantics, aspects in which it differs fairly dramatically from Montague semantics. In passing, I will at least gesture at the interrelationships among these. Aside from particular points of difference, remember that in the background there lurks a general conception of the use of language and its place in the overall scheme of things, a conception that is meant to inform and constrain detailed proposals.

III THE PRINCIPLE OF EFFICIENCY

One other respect in which Barwise and Perry are orthodox is their acceptance of a form of the Principle of Compositionality, the principle that the meaning of a complex expression is a function of the meanings of its constituents. This is the principle that is supposed to explain the productivity or generativity of languages, and the ability of finite creatures to master them. But for Barwise and Perry, an at least equally important principle is the Principle of the Efficiency of Language.* This principle is concerned with the ability of different people at different times and places and in different contexts to (re)use the self-same sentence to say different things - to impart different pieces of information. So, to adopt their favorite example: if Mitch now says to me, "You're dead wrong", what he says - what he asserts to be the case - is very different from what I would say if I were to utter the very same sentence directed at him.** The very same sentence is used, "with the same meaning"; but the message or information carried by its use differs.

*B&P choose to call such principles "semantic universals" - an unhappy choice, I think.
**Which, of course, I would never do.
Moreover, the difference is systematically related to differences in the contexts in which the utterances are made. Barwise and Perry take this phenomenon - often called indexicality or token-reflexivity, and all too often localized to the occurrence of particular words (e.g., "I", "you", "here", "now", "this", "that") - to be of the essence of natural languages. They also note, however, that their relational account of meaning shows it to be a central feature of meaning in general.

    [T]hat smoke pouring out of the window over there means that that particular building is on fire. Now the specific situation, smoke pouring out of that very building at that very time, will never be repeated. The next time we see smoke pouring out of a building, it will be a new situation, and so will in one sense mean something else. It will mean that the building in the new situation is on fire at the new time. Each of these specific smoky situations means something, that the building then and there is on fire. This is...event meaning. The meaningful situations had something in common; they were of a common type, smoke pouring out of a building, a type that means fire. This is...event-type meaning....What a particular case of smoke pouring out of a building means, what it tells us about the wider world, is determined by the meaning of smoke pouring out of a building and the particulars of this case of it. [3]

Moreover, B&P contend that the fact that modern formal semantics grew out of a concern with the language(s) of mathematics has caused those working within the orthodox model-theoretic tradition either to ignore or to slight this crucial feature.*

    A preoccupation with the language of mathematics, and with the seemingly eternal nature of its sentences, led the founders of our field to neglect the efficiency of language. In our opinion this was a critical blunder, for efficiency lies at the very heart of meaning. [3]

*Barbara Grosz hints at agreement with this judgment: "[O]ne place that situation semantics is more compatible with efforts in natural-language processing than previous approaches [is that] context and facts about the world participate at two points: (1) in interpretation, for determining such things as who the speaker is, the time of utterance..; (2) in evaluation, for determining such things as..whether the relationships expressed in the utterance hold."

A. A Little Background

Sure enough, indexicality gave nightmares to both Frege and Russell.* Now, the issue of indexicality did not escape Montague's attention. Indeed, as Thomason says, "As a formal discipline, the study of indexicals owes much of its development to Montague and his students" [22]. (See especially [10] and [11, 12].) This last is most especially true with respect to the work of David Kaplan, both a student and a colleague of Montague's. For Kaplan disagreed with Montague precisely about the extent to which the formal treatment of contexts of utterances should be accommodated to the treatment of intensionality via possible worlds. And B&P start from where Kaplan leaves off. [7, 8]

*For the former, see [14]; see also [15].

I shall assume once again the right to be sketchy: Montague adopted a very narrow stance towards issues in pragmatics, concerning himself solely with indexicals and tense and not concerning himself at all with other issues about the purposes of speakers and hearers and the corresponding uses of sentences.* In addition, the treatment of formal pragmatics was to follow the lead of formal semantics: the central notion to be investigated was that of truth of a sentence, but now relative to both an interpretation and a context of use or point of reference. (See [10, 11, 12, 18].)

*Stalnaker is a wonderful example of someone working within the Montague tradition who does take the wider issues of pragmatics to heart. See [19].
The working hypothesis was that one could and should give a thoroughly uniform treatment of indexicality within the model-theoretic framework deployed for the treatment of the indexical-free constructions. Thus, for example, in standard quantificational theory, one of the "parameters" of an interpretation is a domain or universe of discourse; in standard accounts of modal languages, another parameter is a set of possible worlds; in tense logics, a set of points of time. Why stop there? It is clear when we get to indexicals that the three parameters I've just mentioned aren't sufficient to determine a function to truth-values. Just think of two simultaneous utterances of "You are dead wrong" in the same world, with all other things equal except speaker and addressee. In the interests of uniformity, stuff all such parameters into structures called points of reference, and who knows how many we'll need - see [9], where points of reference are called indices. Then the meaning of a sentence is a function from points of reference into truth values.

A number of researchers working within the Montague tradition (in a sense, there was no other) were unhappy with this particular result of Montague's quest for generality; the most important apostate being Kaplan.* There are complex technical issues involved in the apostasy, centrally those involving the interaction of indexical and intensional constructions - interactions which, at the very least, cast doubt on the doctrine that the intensions of expressions are total functions from the set of points of reference to extensions of the expression at that point of reference.** The end result, anyway, is the proposal for some type of a non-uniform, two-step account. Montaguesque points of reference should be broken in two, with possible worlds (and possibly, moments of time) playing one role and contexts of use (possibly including moments of time) another, different, role. In this scheme, sentences get associated with functions from contexts of use to propositions, and these in turn are functions from points of evaluation to truth-values. Contexts, upon "application" to utterances of sentences, yield determinate propositions; worlds (world-times) function rather as points of evaluation, yielding truth values of determinate propositions.***

*See [7, 8]. Others included Stalnaker and Kamp. See [19, 20] and [6].
**The extension appropriate to sentences and clauses being truth values.
***There is even a version of this called "two-dimensional modal logic" [20].

B&P, however, go beyond Kaplan's treatment, and in more than one direction. Crucially, the treatment of indexicality proper is only one aspect of the account of efficiency, in some ways the least intriguing of the lot. Still, to drive home the first point: as it is with smoke pouring out of buildings, so too is it with sentences. The syntactic and semantic rules of a language, conventional regularities or constraints, determine the meaning - the event-type meaning - of a sentence; features of the context of use of an utterance of that type get added in to determine what is actually said with that use. This is the event meaning of the utterance, also called its interpretation. Finally, that interpretation can be evaluated, either in a context which is essentially the same as the context of use, or some other; thereby yielding an evaluation of the utterance, (finally) a truth value.
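Schematically, and in a notation of my own devising (neither Kaplan's nor B&P's), the two pictures can be set side by side. On the Kaplan-style two-step account, with $C$ the set of contexts of use and $E$ the set of points of evaluation (worlds, or world-times):

$$\|\varphi\| : C \to (E \to \{T, F\}),$$

so that a context $c$ yields a determinate proposition $\|\varphi\|(c)$, and a point of evaluation $e$ then yields a truth value $\|\varphi\|(c)(e)$. The B&P picture just sketched adds a stage and relabels the machinery:

$$\varphi \;\mapsto\; \text{meaning}(\varphi) \;\xrightarrow{\ \text{context of use}\ }\; \text{interpretation (event meaning)} \;\xrightarrow{\ \text{evaluation}\ }\; \{T, F\}.$$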
B. Beyond Indexicality

For B&P, the features of the context of use go beyond those associated with the presence of explicit indexical items in the utterance - people with personal pronouns, places with "locatives", times with tense markers and temporal indicators. In particular they mention two such parameters: speaker connections and resource situations. Some aspects of the former can be looked on as aspects of indexicality, following the lines of Kaplan's treatment of demonstratives. But in other respects, e.g., the treatment of proper names, and certainly in the treatment of resource situations, the view they sketch seems to transcend the boundaries of even deviant model-theoretic semantics. For they mean to do justice, within a unified and systematic framework, both to the fact that the meaning of an utterance type "underdetermines" the interpretation of an utterance of that type and to the fact that the interpretation of an utterance "underdetermines" the information that can be imparted by that utterance. It is a constraint they impose on themselves that they be able to account for significant regularities with respect to "the flow of information", insofar as that flow is mediated by the use of language and in cases where the information is not determined by a compositional semantic theory. And such cases are the norm. Compositionality holds only at the level of event-type or linguistic meaning. The claim is that seeing linguistic meaning as a special case of the relational nature of meaning - that meaning resides in regularities between kinds of situations - allows them to produce an account which satisfies this constraint.

C. Names

First, let me say something about proper names and something about resource situations. Let us put aside for the moment the semantic type that poor little "David Israel" gets assigned in [13]. Instead, we shall pretend that it gets associated with some individual.* But which individual? Surely with one named "David Israel"; but there are bunches of such, and many, many more Davids. The problem, of course, is that proper names aren't proper.** Just as surely, at the level of linguistic meaning it makes no sense for me to expect special treatment with respect to my name.*** Still, if you (or I) hear Mitch Marcus, right after my talk, complaining to someone that "David is dead wrong", we'll know who's being maligned. Why so? Because we are aware of the speaker's connections; more finely, of the relevant connections in this instance. At the level of event-type or linguistic meaning, the contribution of a name is to refer to an individual of that name.****

*Some possible individual? My grandmother, for one, would have disagreed. So, too, do B&P.
**Mostly not; but how about "Tristan Tzara", to pick a name out of a hat?
***English should have no truck with (even) benign analogues of bills of attainder.
****It's a nice question whether some names carry with them, at the linguistic level, species information as well. But surely it doesn't seem to be an abuse of English to call, say, a platypus "David Israel".
On the other hand, it is a feature of the context of use that the speaker of an utterance containing that name is connected in certain ways to such and such individuals of that name. Surely Mitch knows lots of Davids, and we might find him saying "David thinks that David is really dead wrong". Of course, he might be talking about someone inclined to harsh and "objective" self-criticism; probably not. Just one more thing about names and speaker connections. I noted above that for B&P, the interpretation of an utterance event underdetermines the information carried by that event. The use of names is a locus of nice examples of this. It is no part of the interpretation (event meaning) of Mitch's complaint about me that my name is "David"; but someone who saw him say this while he (Mitch, that is) was surreptitiously looking my way can learn that my name is "David", or even that I am the David Israel who gave the talk on Situation Semantics. Even without that, someone could learn that Mitch knows (is connected with) at least one person so named. (Of course, there are possibilities for "misinformation" here, too.) Just so, when I introduce myself by saying "I'm David Israel", the interpretation of what I say on that occasion is singularly uninteresting, being (roughly speaking) an instance of the law of self-identity. But I will have conveyed the information I wanted to, namely that I am a David Israel, that "David Israel" is my name (though not mine alone). That's why we engage in the (otherwise inexplicable) custom of making introductions. Anthropology aside, the central point is that Situation Semantics is meant to give us an account in which we can explain and predict such regularities in the flow of information as that exploited by the convention of introductions. This account must show how such regularities are related to the conventional regularities that determine the linguistic meaning of sentence types and the patterns of contextual determination which then generate the meanings of particular utterance events.

D. Definite Descriptions

An analogue of the problem of the impropriety of talk of proper names arises with respect to definite descriptions. Take a wild and woolly sentence such as "The dog is barking". Again, we want the denotations of such definite descriptions to be just plain individuals; but again, which individuals? Surely, there is more than one dog in the world; does the definite description fail to refer because of non-uniqueness? Hardly; at the level of sentence meaning, there is no question of its referring to some one individual dog. Rather, we must introduce into our semantic account a parameter for a set of resource situations. Suppose, for instance, that we have fixed a speaker, an audience and a (spatio-temporal) location of utterance of our sentence. These three are the main constituents of the parameter B&P call a discourse situation; note that this one parameter pretty much covers the contextual features Montague-Kaplan had in mind.
Suppose also that a dog, otherwise unknown to our speaker and his/her audience, just walked by the front porch on which our protagonists are sitting. When the speaker utters the sentence, he/she is exploiting a situation in which both speaker and audience saw a lone dog stroll by; he/she is not describing either that particular recent situation or such a situation-type - there may have been many such; the two of them often sit out on that porch, and the neighborhood is full of dogs. Rather, the speaker is referring to a situation in which that dog is barking. Which dog? The one "contributed" by the resource situation; the one who just strolled by. It is an aspect of the linguistic meaning of a definite description that a resource situation should enter into the determination of its reference on a particular occasion of use; thus, an aspect of the meanings of sentences that a resource situation be a parameter in the determination of the interpretations (event meanings) of sentential utterances. Moreover, one can imagine cases where what is of interest is precisely some feature of which resource situation a speaker is exploiting on a particular occasion. And here, too, as in the case of names or, more generally, of speaker connections, the claim is that the relational theory of meaning and the consequent emphasis on the centrality of the Principle of Efficiency give Situation Semantics a handle on a range of regularities connecting uses of language with the varieties of information that can be conveyed by such uses.

IV LOGICAL FORM AND ENTAILMENT

As we have noted, Barwise and Perry's treatment of efficiency goes beyond indexicality and, as embedded within their overall account, goes well beyond a Kaplan-Montague theory. An important theme in this regard is the radical de-emphasizing of the role of entailment in their semantic theory and the correlative fixing on statements, not sentences, as the primary locus of interpretation. This is yet another way in which B&P go beyond Kaplan's forays beyond Montague.

I have said that in standard (or even mildly deviant) model-theoretic accounts the key notion is that of truth on an interpretation, or in a model. Having said this, I might as well say that the key notion is that of entailment or logical consequence. A set of sentences S entails a sentence A iff there is no interpretation on which all of the sentences in S are true and A is false. From the purely model-theoretic point of view, this relation can be thought of as holding not between sentences, but between propositions (conceived of as the intensions or meanings of sentences). For instance, it might be taken to hold between sets of possible worlds. Still, it is presumed (to put it mildly) that an important set of such relations among non-linguistic objects have syntactic realizations in relations holding among sentences which express those propositions. Moreover, that sentences stand in these relations is a function of certain specifiable aspects of their syntactic type - their "logical form". In artificial, logical languages, this presumption of syntactic realization can be made more or less good; and anyway, the connections between, on the one hand, syntactic types and modes of composition, and semantic values on the other, must be made completely explicit.
In particular, one specifies a set of expressions as the logical constants of the language, specifies how to build up complex expressions by the use of those constants, operating ultimately on the "non-logical constants", and then - ipso facto - one has a perfectly usable and precise notion of logical form. In the standard run of such artificial languages, sentences (that is: sentence types, there being no need for a notion of tokens) can be, and typically are, assigned truth-values as their semantic values. Such languages do not allow for indexicality; hence the talk about "eternal sentences". The linguistic meaning of such a sentence need not be distinguished from the proposition expressed by a particular use of it.*

*Hence part, at least, of the oddity of talk about using such a language by uttering sentences thereof.

Once indexicality is taken seriously, one can no longer attribute truth-values to sentences. (Note how this way of putting things suggests just the unification of the treatment of indexicality with that of modality that appealed to Montague.) One can still, however, take as central the notion of a sentence being true in a context on an interpretation. The main reason for this move is that it allows one to develop a fairly standard notion of logical consequence or entailment at the level of sentences. Roughly, a set of sentences S entails a sentence A iff for every interpretation and for every context of use of that interpretation: if every sentence in S is true in a given context, then so too is A.
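In symbols (the notation is mine, not Montague's, Kaplan's, or B&P's), the classical notion is

$$S \models A \quad\text{iff}\quad \text{for every interpretation } I:\ \big(\text{for all } B \in S,\ I \models B\big) \Rightarrow I \models A,$$

and the context-relative notion is

$$S \models A \quad\text{iff}\quad \text{for every interpretation } I \text{ and context } c \text{ of } I:\ \big(\text{for all } B \in S,\ B \text{ is true at } c \text{ on } I\big) \Rightarrow A \text{ is true at } c \text{ on } I.$$

Note that the second definition evaluates all the premisses and the conclusion at one and the same context; that presumption is exactly what comes under pressure below.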
Barwise & Perry are prepared to deemphasize radically the notion of entailment among sentences. As they fully realize, they must provide a new notion - a notion of one statement following from another.

    At the very least then, our theory will seek to account for why the truth of certain statements follows from the truth of other statements. This move has several important consequences...There is a lot of information available from utterances that is simply missed in traditional accounts, accounts that ignore the relational aspect of meaning...A semantic theory must go far beyond traditional "patterns of inference"...A rather startling consequence of this is that there can be no syntactic counterpart, of the kind traditionally sought in proof theory and theories of logical form, to the semantic theory of consequence. For consequence is simply not a relation between purely syntactic elements.

What's at stake here? A whole lot, I fear. First, utterances - e.g., the makings of assertions - are actions. They are not linguistic items at all; they have no logical forms. Of course, they typically involve the production of linguistic tokens, which - by virtue of being of such and such types - may have such forms. (Typically, but not always - witness the shaking or nodding of a head, the winking of an eye, the pointing of a finger, all in appropriate contexts of use, of course.) Thus, entailment relations among statements (utterances) can't be cashed in directly in terms of relations holding among sentences in virtue of special aspects of their syntactic shape. Remember what was said above about the main reason for opting out of an account based on statements and for an account based on sentence(type)-in-a-context. If you don't remember, let me (and David Kaplan) remind you:

    First, it is important to distinguish an utterance from a sentence-in-a-context. The former notion is from the theory of speech acts, the latter from semantics. Utterances take time, and utterances of distinct sentences cannot be simultaneous (i.e., in the same context). But in order to develop a logic of demonstratives it seems most natural to be able to evaluate several premisses and a conclusion all in the same context. [8] (The emphasis by way of underlining is mine - D.I.)

A logic has to do with entailment and validity; these are the central semantic notions; sentences are their linguistic loci. This all sounds reasonable enough, except of course for that quite unmotivated presumption that contexts of use can't be spatio-temporally extended. And it seems correspondingly unreasonable when B&P opt out.

    [T]he statement "Socrates is speaking" does not follow from the sentences "Every philosopher is speaking", "Socrates is a philosopher" even though this argument has the same "logical form" (on most accounts of logical form) as ["4 is an integral multiple of 2", "All integral multiples of 2 are even" (so) "4 is even".] In the first place, there is the matter of tense. At the very least the three sentences would have to be said at more or less the same time for the argument to be valid. Sentences are not true or false; only statements made with indicative sentences, utterances of certain kinds, are true or false. [3] (The example is mine - D.I.)

B&P simplify somewhat. It is not required that all three sentences be uttered simultaneously (by one speaker). Roughly speaking, what is required is that the (spatio)temporal locations of their utterance be close together and that the "sum" of their locations overlap with that of some utterance of Socrates.* But that isn't all. The speaker must be connected throughout to one and the same individual Socrates, else a pragmatic analogue of the fallacy of equivocation will result. The same (or something similar) could be said about the noun phrase "every philosopher", for such phrases - just like definite descriptions - require for their interpretation a resource situation. One can imagine a case wherein a given speaker, over a specified time and at a specified place, connected to one and the same guy named Socrates, exploits two different resource situations contributing two different groups of philosophers, one for each of the first two utterances. (The case is stronger, of course, if we substitute for the second sentence "Socrates is one of the philosophers.")

*This is what is known in the trade as a stipulative definition.

It must certainly seem that too much of the baby is being tossed out with the bathwater; but there are alleged to be (compensating?) gains:

    There is a lot of information available from utterances that is simply missed in traditional accounts, accounts that ignore the relational aspect of meaning. If someone comes up to me and says "Melanie saw a bear," I may learn not just that Melanie saw a bear, but also that the speaker is somehow connected to Melanie in a way that allows him to refer to her using "Melanie". And I learn that the speaker is somehow in a position to have information about what Melanie saw. A semantic theory must go far beyond traditional "patterns of inference" to account for the external significance of language...A semantic theory must account for how language fits into the general flow of information. The capturing of entailments between statements is just one aspect of a real theory of the information in an utterance. We think the relation theory of meaning provides the proper framework for such a theory.
    By looking at linguistic meaning as a relation between utterances and described situations, we can focus on the many coordinates that allow information to be extracted from utterances, information not only about the situation described, but also about the speaker and her place in the world. [3]

A.

Despite the heroic sentiments just expressed, B&P scarcely eschew sentences, a semantic account of which they are, after all, aiming to provide. In the formal account, statements get represented by n-tuples (of course), one element of which is the sentence uttered; and if you like, it is the sentence-under-syntactic-analysis. (This last bit is misleading, but not terribly.) Other elements of the tuple are a discourse situation and a set of speaker connections and resource situations. Anyway, there is the sentence. Given that, how about their logical forms? Before touching on that issue, let me raise another and related feature of the account. This is the decision of B&P to let English sentences be the domain of their purely compositional semantic functions. For Montague, the "normal form" semantic interpretation of English went by way of a translation from English into some by now "fairly standard" logical language. (Such languages became fairly standard largely due to Montague's work.) Montague always claimed that this was merely a pedagogical and simplifying device; and he provides an abstract account of how a "direct" semantic interpretation would go. Still, his practice leaves one with the taste of a search for hidden logical forms of a familiar type underlying the grammatical forms of English sentences. No such intermediate logical language is forthcoming in Situation Semantics. First there is ALIASS:

    An Artificial Language for Illustrating Aspects of Situation Semantics...has more of the structure of English than any other artificial language we know, but it does not pretend to be a fragment of English, or any sort of "logical form" of English. It is just what its name implies and nothing more.

Next, and centrally, there is English. The decision to present a semantic theory of English directly may make the end product look even more different than it is. It certainly has the effect of depriving us of those familiar structures for which familiar "theorem provers" can be specified, and thus reinforces the sense of loss for seekers after a certain brand of entailments. Some may already feel the tell-tale symptoms of withdrawal from an acute addiction. There is, however, more to it than that - or maybe the attendant liberation is enough. For instance, are English quantifiers logical constants, and if so, which ones? Which English quantifiers correspond to which "formal" quantifiers?* Is there really a sentential negation operator in English? Well, surely "it is not the case that" seems to qualify; but how about "not"? And how about conjunction? Consider, for example, a statement made with the sentence

(1) Joe admires Sarah and she admires him.

Let us confine our attention to the utterances in which (1) has the antecedent relations indicated by

(1') Joe-1 admires Sarah-2 and she-2 admires him-1.

While sentence (1) is a conjunction of two sentences, a statement made with (1) in the way [with the connections - D.I.] indicated by (1') is not a conjunction of independent statements. [3]

*See [1] passim; but especially the first two sections.
In general, if u1 and u2 are two statements with the same discourse situations and connections (and resource situations?), some sense can be made out of a [sic] conjunctive or [sic] disjunctive statement, with u1 and u2 as "parts". But this is not true of arbitrary statements. Moreover, as in the case above, if we have a [sic] conjunctive statement, there may be no coherent decomposition of it into two independent statements. Talk of conjunctive and especially of disjunctive statements is likely to be wildly misleading. For the latter suggests, quite wrongly, that the utterer is either asserting one "disjunct" or the other. "A statement made using a disjunctive sentence is not the disjunction of two separate statements." ([3].) In an appendix to Situations and Attitudes, B&P suggest an analogue of propositional logic for statements within a very simple fragment of ALIASS. There is no (sentential) negation and no conditional; but more to the point, there are no unrestricted laws of statement entailment, e.g., between an arbitrary "conjunctive statement" and its two "conjuncts". Things get even worse when we add complex noun phrases to the fragment. The mind boggles.

V THE PROPOSITIONAL ATTITUDES

Here I shall be mercilessly brief.* The conventional wisdom, from Frege through to its logical culmination in Montague, has been that propositional attitude constructions are "referentially opaque"; more particularly, that substitution of co-designative singular terms within them does not preserve the truth-value of the whole. Within that orthodoxy there has been disagreement as to whether they are also hyperintensional; that is, as to whether substituting necessarily co-designative terms or logically equivalent sentences within them preserves truth-value.

*Mostly because of the sheer "sex appeal" of the issues involved, and partly because of the availability of the relevant texts, it has been their treatment of the propositional attitude contexts that has made B&P a cause celebre among philosophers. This is unfortunate; so I intend to do my part, by somewhat underplaying this whole tangle.
In fact, the slingshot is not a "knockdown proof"; that it is not is recognized by many of its major slingers(?). (See, for instance, L16, 17].) Instead, in all of its forms, it rests on some form or other of two critical assumptions: I. logically equivalent sentences are intersubstitutable in all contexts salva veritate; or, such sentences have the same semantic value 2. the semantic value of a sentence is unchanged when a component singular term is replaced by another, co-referentlal singular term. B&P reject the assumptions that underlie the slingshot. Here, too, especially with respect to the second assumption, tricky technical issues about the treatment of singular terms - both simple and complex - in a standard logic with identity are involved. B&P purposefully ignore these issues. They are interested in English, not in sentences of a standard logic with identity; and anyway, those very same issues actually get "transformed" into precisely the issues about singular terms they do discuss, issues having to do with the distinction between referential and attributive uses of (complex) singular terms. (See their discussion in [2] and chapter 7 of [3].) To show my strength of character, I'm not going to discuss the sexy issue of transparency to substitution of singular terms - except to say that, like Montague, B&P want a uniform treatment of singular terms as these occur both inside and outside of propositional attitude contexts; and that they also want to have it that the denotations of such terms are Just plain individual objects. (How perverse[) Rather, I want to look briefly at the first assumption about IThere is a class of exceptions to this, but I want not to get bogged down in details here. logical equivalence, i* A. The Relation Theory of M~anin~ With respect to the end-result, what's crucial is that B&P reject the alleged central consequence of the slingshot: that the primary semantic value of a sentence is its truth-value. Of course, given what we have already said, a better way to ~uc this is that for them, although statements are bearers of truth-values, the primary semantic value of a statement is not its truth value. That honor is accorded to a collection of situations or events. Very roughly, the story goes like this: the syntactic and semantic rules of the language associate to each sentence type a type of situations or states-of-affalrs; intuitively, the type actualizations of which would be accurately, though partially, described by any statement made using the sentence.* Thus: Consider the sentence "I am sitting". Its meaning is, roughly, a relation that holds between an utterance ~ and a situation ~ Just in case there is a (spatio-temporal) location 1 and an individual ~, i is speaking at i. and in ~, is sitting at i .... The extension of this relation will be a larKe class of pairs of abstract situations. [3]- Now consider a particular utterance of that sentence, say by Mitch, at a specific location i'. Then any situation that has [Mitch] sitting at i' will be an interpretation of the utterance. An utterance usually describes lots of different situations, or at any rate partially describes them. Because of this, it is sometimes useful to think of the interpretation as the class of such situations. 
B. On Logical Equivalence

If the primary semantic value of a sentence is a collection or a type of situations, then it is not surprising that logically equivalent sentences - sentences true in the same models - might not have the same semantic values, and hence might not be intersubstitutable salvo semantic value. Consider the two sentences: (1) Joe eats and (2) Joe eats, and Sarah sleeps or Sarah doesn't sleep. Let's grant that (1) and (2) are logically equivalent. But do they have the same "referent" or semantic value?

    If we think that sentences stand for situations..then we will not be at all inclined to accept the first principle required in the slingshot. The two logically equivalent sentences just do not have the same subject matter; they do not describe situations involving the same objects and properties. The first sentence will stand for all the situations in which Joe eats, the second sentence for those situations in which Joe eats and Sarah sleeps plus those in which Joe eats and Sarah doesn't sleep. Sarah is present in all of these. Since she is not present in many of the situations that "Joe eats" stands for, these sentences, though logically equivalent, do not stand for the same entity. (Obviously B&P are here ignoring the "indexicality" inherent in proper uses of proper names - D.I.) [3]
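Put in the same set notation as before (again, the rendering is mine):

$$\llbracket(1)\rrbracket = \{\, s : \text{in } s,\ \text{Joe eats} \,\}$$

$$\llbracket(2)\rrbracket = \{\, s : \text{in } s,\ \text{Joe eats and Sarah sleeps} \,\} \cup \{\, s : \text{in } s,\ \text{Joe eats and Sarah doesn't sleep} \,\}$$

Sarah is a constituent of every situation in $\llbracket(2)\rrbracket$ but of hardly any in $\llbracket(1)\rrbracket$; so $\llbracket(1)\rrbracket \neq \llbracket(2)\rrbracket$, and the first slingshot assumption fails for situation-style semantic values.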
Notice that, without so much as a glance in the direction of a single propositional attitude context, we can see how B&P can avoid certain well-known troubles that plague the standard model-theoretic treatments of such constructions.* Moreover, and most importantly, they gain these fine powers of discrimination among "meanings" without following either Frege into a third realm of sense or Fodor (?) deep into the recesses of the mind.** The significance of sentences, even as they occur in propositional attitude contexts, is out into the surrounding world.

*On this point, compare, e.g., [22]. I do not mean to imply that there aren't good reasons for denying the hyperintensionality of the propositional attitudes. There are. See [21]. Still, no one doubts that such a position is counter-intuitive.
**Actually, there is another big issue looming here, the one that hangs on B&P's opting for a treatment which takes properties and relations, intensionally conceived, as primitive - instead, that is, of pretending that properties are functions from possible worlds into sets. Sets, of course, there are; but so too are there properties.

VI THE BOTTOM LINE

What's the bottom line? Clearly, it's too soon to say. Indeed, I assume many of you will simply want to wait until you can look at least at some treatment of some fragment of English. Others would like as well to get some idea of how the project of Situation Semantics might be realized computationally. For instance, it is clear even from what little I've said that the semantic values of various kinds of expression types are going to be quite different from the norm, and much thought will be needed to specify a formalism for representing and manipulating these representations adequately. Again, wouldn't it be nice to be told something at least about the metaphysics of Situation Semantics: about situations - abstract, actual, factual and real; all four types figure in some way in the account - and about events, event-types, courses-of-events, schemata, etc.? Yes, it would be nice. Some, no doubt, were positively lusting after the scoop on how B&P handle the classic puzzles of intensionality with respect to singular terms. And so on. All in good time.

What I want to do, instead, is to end with a claim, Barbara Grosz's claim in fact, that attention should be paid. At the moment, the bottom line with respect to Situation Semantics is not, I think, to be arrived at by toting up technical details, as bedazzling as these will doubtless be. Rather, it is to be gotten at by attention precisely to THE BIG PICTURE. The relational theory of meaning and, more broadly, the centrality in Situation Semantics of the "flow of information" - the view that that part of this flow that is mediated by the uses of language should be seen as "part and parcel of the general flow of information that uses natural meaning" - allows reasoned hope for a theoretical framework within which work in pragmatics and the theory of speech acts, as well as research in the theory of discourse, can find a proper place. In many of these areas, there is an abundance of insight, harvested from close descriptive analyses of a wide range of phenomena - a range hitherto hidden from both orthodox linguists and philosophers. There are now even glimmerings of regularities. But there has been no overarching theoretical structure within which to systematize these insights, and those scattered regularities, and through which to relate them to the results of syntactic and formal semantic analyses. Situation Semantics may help us in developing such a framework. This last is a good point at which to stop; so I shall.

ACKNOWLEDGEMENTS

This research was supported in part by the Defense Advanced Research Projects Agency, monitored by ONR under Contract No. N00014-77-C-0378, and in part by the Office of Naval Research under Contract No. N00014-77-C-0371. Also, special thanks are due to B&P - who, of course, are solely responsible for all the weird ideas presented in this paper. Any remaining responsibility is to be charged to Mitch Marcus, who suggested I do this, and to Brian Smith, who agreed.

REFERENCES

[1] Barwise, J. and Cooper, R. Generalized Quantifiers and Natural Language. Linguistics and Philosophy 4(2):159-219, 1981.

[2] Barwise, K.J. and Perry, J.R. Semantic Innocence and Uncompromising Situations.
In French, Uehling, and Wettstein (editors), Studies in Philosophy, pages 387-404. University of Minnesota Press, Minneapolis, 1981.

[3] Barwise, K.J. and Perry, J.R. Situations and Attitudes. Bradford Books, Cambridge MA, 1983.

[4] Dretske, F. Knowledge and the Flow of Information. Bradford Books, Cambridge MA, 1981.

[5] Fodor, J. A. The Language of Thought. Crowell, New York, 1975.

[6] Kamp, H. Formal Properties of 'Now'. Theoria 37:227-273, 1971.

[7] Kaplan, D. Demonstratives. 1977. Unpublished manuscript.

[8] Kaplan, D. On the Logic of Demonstratives. In French, Uehling, and Wettstein (editors), Perspectives in the Philosophy of Language, pages 401-412. University of Minnesota Press, Minneapolis, 1979.

[9] Lewis, D. General Semantics. In Davidson, D. and Harman, G. (editors), Semantics of Natural Language, pages 169-218. Reidel, Boston, 1972. 2nd edition.

[10] Montague, R. Pragmatics. In Thomason, R. (editor), Formal Philosophy, pages 95-118. Yale University Press, New Haven, 1974.

[11] Montague, R. Pragmatics and Intensional Logic. In Thomason, R. (editor), Formal Philosophy, pages 119-147. Yale University Press, New Haven, 1974.

[12] Montague, R. Universal Grammar. In Thomason, R. (editor), Formal Philosophy, pages 222-246. Yale University Press, New Haven, 1974.

[13] Montague, R. The Proper Treatment of Quantification in Ordinary English. In Thomason, R. (editor), Formal Philosophy, pages 247-270. Yale University Press, New Haven, 1974.

[14] Perry, J.R. Frege on Demonstratives. Philosophical Review LXXXVI(4):474-497, October, 1977.

[15] Perry, J.R. The Problem of the Essential Indexical. Nous 13(1):3-21, 1979.

[16] Quine, W.V.O. Reference and Modality. In From a Logical Point of View, pages 139-159. Harper & Row, New York, 1961. 2nd edition.

[17] Quine, W.V.O. Three Grades of Modal Involvement. In The Ways of Paradox and Other Essays, pages 156-174. Random House, New York, 1966.

[18] Scott, D. Advice on Modal Logic. In Lambert, K. (editor), Philosophical Problems in Logic, pages 143-173. Reidel, Dordrecht, 1970.

[19] Stalnaker, R. Pragmatics. In Davidson, D. and Harman, G. (editors), Semantics of Natural Language, pages 380-397. Reidel, Boston, 1972. 2nd edition.

[20] Stalnaker, R. Assertion. In Cole, P. (editor), Syntax and Semantics 9: Pragmatics, pages 315-332. Academic Press, New York, 1978.

[21] Stalnaker, R. Propositions. 1982. Unpublished ms.

[22] Thomason, R. Introduction. In Thomason, R. (editor), Formal Philosophy, pages 1-69. Yale University Press, New Haven, 1974.
A Modal Temporal Logic for Reasoning about Change

Eric Mays
Department of Computer and Information Science
Moore School of Electrical Engineering/D2
University of Pennsylvania
Philadelphia, PA 19104

ABSTRACT

We examine several behaviors for query systems that become possible with the ability to represent and reason about change in data bases: queries about possible futures, queries about alternative histories, and offers of monitors as responses to queries. A modal temporal logic is developed for this purpose. A completion axiom for history is given, and modelling strategies are given by example.

I INTRODUCTION

In this paper we present a modal temporal logic that has been developed for reasoning about change in data bases. The basic motivation is as follows. A data base contains information about the world: as the world changes, so does the data base -- probably maintaining some description of what the world was like before the change took place. Moreover, if the world is constrained in the ways it can change, so is the data base. We are motivated by the benefits to be gained by being able to represent those constraints and use them to reason about the possible states of a data base.

It is generally accepted that a natural language query system often needs to provide more than just the literal answer to a question. For example, [Kaplan 82] presents methods for correcting a questioner's misconceptions (as reflected in a query) about the contents of a data base, as well as providing additional information in support of the literal answer to a query. By enriching the data base model, Kaplan's work on correcting misconceptions was extended in [Mays 80] to distinguish between misconceptions about data base structure and data base contents. In either case, however, the model was a static one. By incorporating a model of the data base in which a dynamic view is allowed, answers to questions can include an offer to monitor for some condition which might possibly occur in the future. The following is an example:

U: "Is the Kitty Hawk in Norfolk?"
S: "No, shall I let you know when she is?"

1 This work is partially supported by a grant from the National Science Foundation, NSF-MCS 81-07290.

But just having a dynamic view is not adequate; it is necessary that the dynamic view correspond to the possible evolution of the world that is modelled. Otherwise, behaviors such as the following might arise:

U: "Is New York less than 50 miles from Philadelphia?"
S: "No, shall I let you know when it is?"

An offer of a monitor is said to be competent only if the condition to be monitored can possibly occur. Thus, in the latter example the offer is not competent, while in the former it is. This paper is concerned with developing a logic for reasoning about change in data bases, and assessing the impact of that capability on the behavior of question answering systems. The general area of extended interaction in data base systems is discussed in [WJMM 83].

As just pointed out, the ability to represent and reason about change in data bases affects the range and quality of responses that may be produced by a query system. Reasoning about prior possibility admits a class of queries dealing with the future possibility of some event or state of affairs at some time in the past. These queries have the general form: "Could it have been the case that p?" This class of queries will be termed counterhistoricals in an attempt to draw some parallel with counterfactuals.
The future correlate of counterhistoricals, which one might call futurities, are of the form: "Can it be the case that p?", i.e. in the sense of "Might it ever be the case that p?" The most interesting aspect of this form of question is that it admits the ability for a query system to offer a monitor as a response to a question, for relevant information the system may become aware of at some future time. A query system can only competently offer such monitors when it has this ability, since otherwise it cannot determine if the monitor may ever be satisfied.

II REPRESENTATION

We have chosen to use a modal temporal logic. There are two basic requirements which lead us toward logic and away from methods such as Petri nets. First, it may be desirable to assert that some proposition is the case without necessarily specifying exactly when. Secondly, our knowledge may be disjunctive. That is, our knowledge of temporal situations may be incomplete and indefinite, and as others have argued ([Moore 82], as a recent example), methods based on formal logic (though usually first-order) are the only ones that have so far been capable of dealing with problems of this nature.

In contrast to first-order representations, modal temporal logic makes a fundamental distinction between variability over time (as expressed by modal temporal operators) and variability in a state (as expressed using propositional or first-order languages). Modal temporal logic also reflects the temporally indefinite structure of language in a way that is more natural than the common method of using state variables and constants in a first-order logic. On the side of first-order logic, however, is expressive power that is not necessarily present in modal temporal logic. (But see [Kamp 68] and [GPSS 80] for comparisons of the expressive power of modal temporal logics with first-order theories.)

There are several possible structures that one could reasonably imagine over states in time. The one we have in mind is discrete, backwards linear, and infinite in both directions. We allow branching into the future to capture the idea that it is open, but the past is determined. Due to the nature of the intended application, we also have assumed that time is discrete. It should be stressed that this decision is not motivated by the belief that time itself is discrete, but rather by the data base application. Furthermore, in cases where it is necessary for the temporal structure to be dense or continuous, there is no immediate argument against modal temporal logic in general. (That is, one could develop a modal temporal logic that models a continuous structure of time [RU 71].)

A modal temporal structure is composed of a set of states. Each state is a set of propositions which are true of that state. States are related by an immediate predecessor-successor relation. A branch of time is defined by taking some possible sequence of states accessible over this relation from a given state. The future fragment of the logic is based on the unified branching temporal logic of [BMP 81], which introduces branches and quantifies over them to make it possible to describe properties on some or all futures. This is extended with an "until" operator (as in [Kamp 68], [GPSS 80]) and a past fragment. Since the structures are backwards linear, the existential and universal operators are merged to form a linear past fragment.
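To make such structures concrete, the following is a hypothetical encoding of a tiny fragment of one as Prolog facts. The predicate names succ/2 (the immediate predecessor-successor relation) and holds/2 (the assignment of atomic propositions to states) are assumptions introduced only so that the model-checking sketch in the next section has something to run over; a faithful structure would be infinite in both directions, so a finite set of facts can only approximate one.

    % Toy branching structure: s0 has two possible futures.
    succ(s0, s1).      % one possible next state of s0
    succ(s0, s2).      % another possible next state of s0
    succ(s1, s3).
    succ(s2, s4).

    % Atomic propositions true at each state.
    holds(s1, p).
    holds(s3, q).
    holds(s4, q).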
A. Syntax

Formulas are composed from the symbols:

- A set P of atomic propositions.
- Boolean connectives: v, -.
- Temporal operators: AX (every next), EX (some next), AG (every always), EG (some always), AF (every eventually), EF (some eventually), AU (every until), EU (some until), L (immediately past), P (sometime past), H (always past), S (since). AU, EU, and S are binary; the others are unary. For the operators composed of two symbols, the first symbol ("A" or "E") can be thought of as quantifying universally or existentially over branches in time; the second symbol as quantifying over states within the branch. Since branching is not allowed into the past, past operators have only one symbol.

using the rules:

- If p ∈ P, then p is a formula.
- If p and q are formulas, then (-p), (p v q) are formulas.
- If m is a unary temporal operator and p is a formula, then (m p) is a formula.
- If m is a binary temporal operator and p and q are formulas, then (p m q) is a formula.

Parentheses will occasionally be omitted, and &, -->, <--> used as abbreviations. (In the next section, "Ax" should be read as the universal quantifier over the variable x, "Ex" as the existential quantifier over x.)

B. Semantics

A temporal structure T is a triple (S, π, R) where:

- S is a set of states.
- π: S -> 2^P is an assignment of atomic propositions to states.
- R ⊆ (S x S) is an accessibility relation on S. Each state is required to have at least one successor and exactly one predecessor -- i.e., As (Et (sRt) & E!t (tRs)).

Define b to be an s-branch b = (..., s-1, s0 = s, s1, ...) such that si R si+1. The relation ">" is the transitive closure of R.

The satisfaction of a formula p at a state s in a structure T, <T,s> |= p, is defined as follows:

<T,s> |= p iff p ∈ π(s), for p ∈ P
<T,s> |= -p iff not <T,s> |= p
<T,s> |= p v q iff <T,s> |= p or <T,s> |= q
<T,s> |= AGp iff Ab At((t ∈ b & t > s) -> <T,t> |= p)
  (p is true at every time of every future)
<T,s> |= AFp iff Ab Et(t ∈ b & t > s & <T,t> |= p)
  (p is true at some time of every future)
<T,s> |= pAUq iff Ab Et(t ∈ b & t > s & <T,t> |= q & At'((t' ∈ b & s < t' < t) -> <T,t'> |= p))
  (q is true at some time of every future, and until q is true p is true)
<T,s> |= AXp iff At(sRt -> <T,t> |= p)
  (p is true at every immediate future)
<T,s> |= EGp iff Eb At((t ∈ b & t > s) -> <T,t> |= p)
  (p is true at every time of some future)
<T,s> |= EFp iff Eb Et(t ∈ b & t > s & <T,t> |= p)
  (p is true at some time of some future)
<T,s> |= EXp iff Et(sRt & <T,t> |= p)
  (p is true at some immediate future)
<T,s> |= pEUq iff Eb Et(t ∈ b & t > s & <T,t> |= q & At'((t' ∈ b & s < t' < t) -> <T,t'> |= p))
  (q is true at some time of some future, and in that future until q is true p is true)
<T,s> |= Hp iff Ab At((t ∈ b & t < s) -> <T,t> |= p)
  (p is true at every time of the past)
<T,s> |= Pp iff Ab Et(t ∈ b & t < s & <T,t> |= p)
  (p is true at some time of the past)
<T,s> |= Lp iff At(tRs -> <T,t> |= p)
  (p is true at the immediate past)
<T,s> |= pSq iff Ab Et(t ∈ b & t < s & <T,t> |= q & At'((t' ∈ b & s > t' > t) -> <T,t'> |= p))
  (q is true at some time of the past, and since q is true p is true)

A formula p is valid iff for every structure T and every state s in T, <T,s> |= p.
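Over a finite, acyclic encoding like the succ/2 and holds/2 facts given earlier, a few of these satisfaction clauses can be sketched directly as a Prolog evaluator. This is an illustrative approximation, not the paper's system: negation-as-failure stands in for classical negation, only some operators are shown, and the clauses are sound only for ground queries over finite acyclic successor relations.

    % sat(+State, +Formula): Formula holds at State in the encoded structure.
    sat(S, prop(P))  :- holds(S, P).                     % atomic proposition
    sat(S, not(F))   :- \+ sat(S, F).                    % negation as failure
    sat(S, or(F, G)) :- ( sat(S, F) ; sat(S, G) ).
    sat(S, ex(F))    :- succ(S, T), sat(T, F).           % EX: some next state
    sat(S, ax(F))    :- \+ (succ(S, T), \+ sat(T, F)).   % AX: every next state
    sat(S, ef(F))    :- succ(S, T),                      % EF: some strictly
                        ( sat(T, F) ; sat(T, ef(F)) ).   % future state
    sat(S, ag(F))    :- \+ sat(S, ef(not(F))).           % AG as the dual of EF

In the toy structure above, ?- sat(s0, ef(prop(q))). succeeds (q is reachable via s1 and s3), while ?- sat(s0, ag(prop(p))). fails, since some future state falsifies p.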
III MODELLING CHANGE IN KNOWLEDGE BASES

As noted earlier, this logic was developed to reason about change in data bases. Although ultimately the application requires extension to a first-order language to better express variability within a state, for now we are restricted to the propositional case. Such an extension is not without problems, but should be manageable.

The set of propositional variables for modelling change in data bases is divided into two classes. A state proposition asserts the truth of some atomic condition. An event proposition associates the occurrence of an event with the state in which it occurs. The idea is to impose constraints on the occurrence of events and then derive the appropriate state description. To be specific, let Qs1...Qsn be state propositions and Qe1...Qem be event propositions. If PHI is a boolean formula of state propositions, then formulas of the form:

  (PHI -> EX Qei)

are event constraints. To derive state descriptions from events, frame axioms are required:

  (Qei -> ((L PHI1) -> PHI2)),

where PHI1 and PHI2 are boolean formulas of state propositions.

In the blocks world, an event constraint would be that if block A is clear and block B is clear, then moving A onto B is a next possible event:

  ((cleartop(A) & cleartop(B)) -> EX move(A,B)).

Two frame axioms are:

  (move(A,B) -> on(A,B))

and

  (move(A,B) -> ((L on(C,D)) -> on(C,D))).

If the modelling strategy were left as just outlined, nothing very significant would have been accomplished. Indeed, a simpler strategy would be hard to imagine, other than requiring that the state formulas be a complete description. This can be improved in two non-trivial ways. The first is that the conditions on the transitions may reference states earlier than the last one. Secondly, we may require that certain conditions might or must eventually happen, but not necessarily next. As mentioned earlier, these capabilities are important considerations for us. By placing biconditionals on the event constraints, it can be determined that some condition may never arise, or, from knowledge of some event, a reconstruction of the previous state may be obtained.

The form of the frame axioms may be inverted using the until operator to obtain a form that is perhaps more intuitive. As specified above, the frame axioms will yield identical previous and next state propositions for those events that have no effect on them. The standard example from the blocks world is that moving a block does not alter the color of the block. If there are a lot of events like move that don't change block color, there will be a lot of frame axioms around stating that the events don't change the block color. But if there is only one event, say paint, that changes the color of the block, the "every until" (AU) operator can be used to state that the color of the block stays the same until it is painted. This strategy works best if we maintain a single event condition for each state; i.e., no more than a single event can occur in each state. For each application, a decision must be made as to how best to represent the frame axioms. Of course, if the world is very complicated, there will be a lot of complicated frame axioms. I see no easy way around this problem in this logic.
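For the finite case, the blocks-world event constraint and frame axioms above can be rendered as a one-step transition relation. This is a hypothetical rendering, not something the paper gives (the paper states these as modal formulas, not Prolog clauses): a state is represented as a list of state propositions, the persistence axiom is restricted to the on/2 facts not affected by the move, and the bookkeeping for updating cleartop facts is omitted to keep the sketch short.

    % possible_event(+State, ?Event): the event constraint for move.
    possible_event(State, move(A, B)) :-
        member(cleartop(A), State),
        member(cleartop(B), State),
        A \== B.

    % result(+State, +Event, -Next): effect axiom plus frame axioms.
    result(State, move(A, B), [on(A, B) | Persist]) :-
        findall(on(C, D),
                ( member(on(C, D), State), C \== A ),   % unaffected on/2 facts persist
                Persist).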
A. Completion of History

As previously mentioned, we assume that the past is determined (i.e. backwards linear). However, this does not imply that our knowledge of the past is complete. Since in some cases we may wish to claim complete knowledge with respect to one or more predicates in the past, a completion axiom is developed for an intuitively natural conception of history. Examples of predicates for which our knowledge might be complete are presidential inaugurations, employees of a company, and courses taken by someone in college.

In a first-order theory T, the completion axiom with respect to the predicate Q, where (Q c1)...(Q cn) are the only occurrences of Q in T, is:

  Ax((Q x) <-> x=c1 v...v x=cn).

From right to left on the biconditional this just says what the original theory T did, that Q is true of c1...cn. The completion occurs from left to right, asserting that c1...cn are the only constants for which Q holds. Thus for some c' which is not equal to any of c1...cn, it is provable in the completed theory that -(Q c'), which was not provable in the original theory T. This axiom captures our intuitive notions about Q.2

2 [Clark 78] contains a general discussion of predicate completion. [Reiter 82] discusses the completion axiom with respect to circumscription.

The completion axiom for temporal logic is developed by introducing time propositions. The idea is that a conjunct of a time proposition T and some other proposition Q denotes that Q is true at time T. If time propositions are linearly ordered, and Q occurs only in the form P(Q & T1) &...& P(Q & Tn) in some theory M, then the history completion axiom for M with respect to Q is:

  H(Q <-> T1 v...v Tn).

Analogous to the first-order completion axiom, the direction from left to right is the completion of Q. An equivalent first-order theory to M, in which each time proposition Ti is a first-order constant ti and Q is a monadic predicate, (Q t1) &...& (Q tn), has the first-order completion axiom (with Q restricted to time constants of the past, where t0 is now):

  Ax<t0 ((Q x) <-> x=t1 v...v x=tn).

B. Example

The propositional variables T-reg, T-add, T-drop, T-enroll, and T-break are time points intended to denote periods in the academic semester on which certain activities regarding enrollment for courses are dependent. The event propositions are Qe-reg, Qe-pass, Qe-fail, and Qe-drop, for registering for a course, passing a course, failing a course, and dropping a course, respectively. The only state proposition is Qs-reg, which means that a student is registered for a course.

T-reg <-> (AX T-add)
  - add follows reg
T-add <-> (AX T-drop)
  - drop follows add
T-drop <-> (AX T-enroll)
  - enroll follows drop
T-enroll <-> (AX T-break)
  - break follows enroll
((T-reg v T-add) & -Qs-reg & -(P Qe-pass)) <-> (EX Qe-reg)
  - if the period is reg or add, and a student is not registered and has not passed the course, then the student may next register for the course
((T-add v T-drop) & Qs-reg) <-> (EX Qe-drop)
  - if the period is add or drop and a student is registered for a course, then the student may next drop the course
(T-enroll & Qs-reg) <-> (EX Qe-pass)
  - if the period is enroll and a student is registered for a course, then the student may next pass the course
(T-enroll & Qs-reg) <-> (EX Qe-fail)
  - if the period is enroll and a student is registered for a course, then the student may next fail the course
Qe-reg -> (Qs-reg AU (Qe-pass v Qe-fail v Qe-drop))
  - if a student registers for a course, then eventually the student will pass or fail or drop the course, and until then the student will be registered for the course
((L -Qs-reg) & -Qe-reg) -> -Qs-reg
  - not registering maintains not being registered
AX(Qe-reg v Qe-pass v Qe-fail v Qe-drop v Qe-null)
  - one of these events must next happen
-(Qe-i & Qe-j), for i ≠ j (e.g. -(Qe-reg & Qe-pass))
  - but only one
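A hypothetical one-directional rendering of these event constraints as Prolog clauses is sketched below. The axioms above are biconditionals; only the "may next occur" direction is encoded here, the past operator P is approximated by an explicit list of past events, and all predicate names are assumptions for illustration.

    % may_occur(+Period, +State, +History, ?Event): Event may happen next.
    may_occur(Period, State, History, qe_reg) :-
        member(Period, [t_reg, t_add]),
        \+ member(qs_reg, State),
        \+ member(qe_pass, History).      % stand-in for -(P Qe-pass)
    may_occur(Period, State, _History, qe_drop) :-
        member(Period, [t_add, t_drop]),
        member(qs_reg, State).
    may_occur(t_enroll, State, _History, qe_pass) :-
        member(qs_reg, State).
    may_occur(t_enroll, State, _History, qe_fail) :-
        member(qs_reg, State).

For example, ?- may_occur(t_reg, [], [], E). yields E = qe_reg, while a history containing qe_pass rules registration out.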
IV COUNTERHISTORICALS

A counterhistorical may be thought of as a special case of a counterfactual, where rather than asking the counterfactual, "If kangaroos did not have tails would they topple over?", one asks instead "Could I have taken CSE110 last semester?". That is, counterfactuals suppose that the present state of affairs is slightly different and then question the consequences. Counterhistoricals, on the other hand, question how a course of events might have proceeded otherwise. If we picture the underlying temporal structure, we see that although there are no branches into the past, there are branches from the past into the future. These are alternative histories to the one we are actually in. Counterhistoricals explore these alternative histories. Intuitively, a counterhistorical may be evaluated by "rolling back" to some previous state and then reasoning forward, disregarding any events that actually took place after that state, to determine whether the specified condition might arise. For the question, "Could I have registered for CSE110 last semester?", we access the state specified by last semester, and from that state description reason forward regarding the possibility of registering for CSE110.

However, a counterhistorical is really only interesting if there is some way in which the course of events is constrained. These constraints may be legal, physical, moral, bureaucratic, or a whole host of others. The set of axioms in the previous section is one example. The formalism does not provide any facility to distinguish between various sorts of constraints. Thus the mortal inevitability that everyone eventually dies is given the same importance as a university rule that you can't take the same course twice.

In the logic, the general counterhistorical has the form: P(EFp). That is, is there some time in the past at which there is a future time when p might possibly be true. Constraints may be placed on the prior time: P(q & EFp), e.g. "When I was a sophomore, could I have taken Phil 6?". One might wish to require that some other condition still be accessible: P(EF(p & EFq)), e.g. "Could I have taken CSE220 and then CSE110?"; or that the counterhistorical be immediate from the most recent state: L(EXp). (The latter is interesting in what it has to say about possible alternatives to -- or the inevitability of -- what is the case now. [WM 83] shows its use in recognizing and correcting event-related misconceptions.) For example, in the registration domain, if we know that someone has passed a course, then we can derive from the axioms above the counterhistorical that they could have not passed:

  ((P Qe-pass) -> P(EF -Qe-pass)).

V FUTURITIES

A query regarding future possibility has the general logical form: EFp. That is, is there some future time in which p is true. The basic variations are: AFp, must p eventually be true; EGp, can p remain true; AGp, must p remain true. These can be nested to produce infinite variation. However, answering direct questions about future possibility is not the only use to be made of futurities. In addition, futurities permit the query system to competently offer monitors as responses to questions. (A monitor watches for some specified condition to arise and then performs some action, usually notification that the condition has occurred.) A monitor can only be offered competently if it can be shown that the condition might possibly arise, given the present state of the data base. Note that if any of the stronger forms of future possibility can be derived, it would be desirable to provide information to that effect. For example, if a student is not registered for a course and has not passed the course and the time is prior to enrollment, a monitor for the student registering would be competently made given some question about registration, since

  ((-Qs-reg & -(P Qe-pass) & AX(AF Te)) -> (EF Qe-reg)).

However, if the student had previously passed the course, the monitor offer would not be competent, since

  ((-Qs-reg & (P Qe-pass) & AX(AF Te)) -> -(EF Qe-reg)).
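Competence checking can be sketched on top of a reachability test for EF. The predicate names below are assumptions, and sat/2 is the toy evaluator from Section II rather than a theorem prover for the full logic, so this illustrates only the shape of the check, not the paper's provability requirement.

    % offer(+State, +Cond, -Response): offer a monitor for Cond only
    % when Cond is still reachable, i.e., EF Cond holds at State.
    offer(State, Cond, monitor(Cond)) :-
        sat(State, ef(Cond)), !.
    offer(_State, Cond, no_monitor(Cond)).   % otherwise the offer would not be competent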
Note that if a monitor was explicitly requested, "Let me know when p happens," a futurity may be used to determine whether p might ever happen. In addition to the processing efficiency gained by discarding monitors that can never be satisfied, one is also in a position to correct a user's mistaken belief that p might ever happen, since in order to make such a request s/he must believe p could happen. Corrections of this sort arise from intensional failures of presumptions in the sense of [Mays 80] and [WM 83]. If at some future time from the monitor request, due to some intervening events, p can no longer happen, but was originally possible, an extensional failure of the presumption (in the sense of [Kaplan 82]) might be said to have occurred.

The application of the constraints when attempting to determine the validity of an update to the data base is important to the determination of monitor competence. The approach we have adopted is to require that when some formula p is considered as a potential addition to the data base, it be provable that EXp. Alternatively, one could just require that the update not be inconsistent, that is, that AX -p not be provable. The former approach is preferred since it does not make any requirement on decidability. Thus, in order to say that a monitor for some condition p is competent, it must be provable that EFp.

VI DISCUSSION

This work has been influenced most strongly by work within theory of computation on proving program correctness ([BMP 81] and [GPSS 80]) and within philosophy on temporal logic [RU 71]. The work within AI that is most relevant is that of [McDermott 82]. Two of McDermott's major points regard the openness of the future and the continuity of time. With the first of these we are in agreement, but on the second we differ. This difference is largely due to the intended application of the logic. Ours is applied to changes in data base states (which are discrete), whereas McDermott's is physical systems (which are continuous). But even within the domain of physical systems it may be worthwhile to consider discrete structures as a tool for abstraction, for which computational methods may prove to be more tractable. At least by considering modal temporal logics we may be able to gain some insight into the reasoning process, whether over discrete or continuous structures.

We have not made a serious effort towards implementation thus far. A tableau-based theorem prover has been implemented for the future fragment, based on the procedure given in [BMP 81]. It is able to do problems about one-half the size of the example given here. Based on this limited experience we have a few ideas which might improve its abilities. Another procedure based on the tableau method, drawing on ideas from [BMP 81] and [RU 71], has been developed, but we are not sufficiently confident in its correctness to present it at this point.

ACKNOWLEDGEMENTS

I have substantially benefited from comments, suggestions, and discussions with Aravind Joshi, Sitaram Lanka, Kathy McCoy, Gopalan Nadathur, David Silverman, Bonnie Webber, and Scott Weinstein.

REFERENCES

[BMP 81] M. Ben-Ari, Z. Manna, A. Pnueli, "The Temporal Logic of Branching Time," Eighth ACM Symposium on Principles of Programming Languages, Williamsburg, Va., January 1981.

[Clark 78] K.L. Clark, "Negation as Failure," in Logic and Data Bases, H. Gallaire and J. Minker (eds.), Plenum, New York.

[GPSS 80] D. Gabbay, A. Pnueli, S. Shelah, J. Stavi, "On the Temporal Analysis of Fairness," Seventh ACM Symposium on Principles of Programming Languages, 1980.

[Kamp 68] J.A.W. Kamp, Tense Logic and the Theory of Linear Order, PhD Thesis, UCLA, 1968.

[Kaplan 82] S.J. Kaplan, "Cooperative Responses from a Portable Natural Language Query System," Artificial Intelligence (19, 2), October 1982.

[Mays 80] E. Mays, "Failures in Natural Language Systems: Applications to Data Base Query Systems," Proceedings of AAAI 80, Stanford, Ca., August 1980.

[Mays 82] E. Mays, "Monitors as Responses to Questions: Determining Competence," Proceedings of AAAI 82, Pittsburgh, Pa., August 1982.

[McDermott 82] D. McDermott, "A Temporal Logic for Reasoning About Processes and Plans," Cognitive Science (6), 1982.

[Moore 82] R.C. Moore, "The Role of Logic in Knowledge Representation and Commonsense Reasoning," Proceedings of AAAI 82, Pittsburgh, Pa., August 1982.

[RU 71] N. Rescher and A. Urquhart, Temporal Logic, Springer-Verlag, New York, 1971.

[Reiter 82] R. Reiter, "Circumscription Implies Predicate Completion (Sometimes)," Proceedings of AAAI 82, Pittsburgh, Pa., August 1982.

[WJMM 83] B. Webber, A. Joshi, E. Mays, K. McKeown, "Extended Natural Language Data Base Interactions," International Journal of Computers and Mathematics, Spring 83.

[WM 83] B. Webber and E. Mays, "Varieties of User Misconception: Detection and Correction," Proceedings of IJCAI 83.
PROVIDING A UNIFIED ACCOUNT OF DEFINITE NOUN PHRASES IN DISCOURSE

Barbara J. Grosz
Artificial Intelligence Center
SRI International
Menlo Park, CA

Aravind K. Joshi
Dept. of Computer and Information Science
University of Pennsylvania
Philadelphia, PA

Scott Weinstein
Dept. of Philosophy
University of Pennsylvania
Philadelphia, PA

1. Overview

Linguistic theories typically assign various linguistic phenomena to one of the categories syntactic, semantic, or pragmatic, as if the phenomena in each category were relatively independent of those in the others. However, various phenomena in discourse do not seem to yield comfortably to any account that is strictly a syntactic or semantic or pragmatic one. This paper focuses on particular phenomena of this sort -- the use of various referring expressions such as definite noun phrases and pronouns -- and examines their interaction with mechanisms used to maintain discourse coherence.1

1 This research was supported in part by the National Science Foundation under Grant MCS-8115105 to SRI International, and Grant MCS81-07290 to the University of Pennsylvania.

Even a casual survey of the literature on definite descriptions and referring expressions reveals not only defects in the individual accounts provided by theorists (from several different disciplines), but also deep confusions about the roles that syntactic, semantic, and pragmatic factors play in accounting for these phenomena. The research we have undertaken is an attempt to sort out some of these confusions and to create the basis for a theoretical framework that can account for a variety of discourse phenomena in which all three factors of language use interact.

The major premise on which our research depends is that the concepts necessary for an adequate understanding of the phenomena in question are not exclusively either syntactic or semantic or pragmatic. The next section of this paper defines two levels of discourse coherence and describes their roles in accounting for the use of singular definite noun phrases. To illustrate the integration of factors in explaining the uses of referring expressions, their use on one of these levels, i.e., the local one, is discussed in Sections 3 and 4. This account requires introducing the notion of the centers of a sentence in a discourse, a notion that cannot be defined in terms of factors that are exclusively syntactic or semantic or pragmatic. In Section 5, the interactions of the two levels with these factors and their effects on the uses of referring expressions in discourse are discussed.

2. The Effects of Different Levels of Discourse Coherence

A discourse comprises utterances that combine into subconstituents of the discourse, namely, units of discourse that are typically larger than a single sentence but smaller than the complete discourse. However, the constituent structure is not determined solely by the linear sequence of utterances. It is common for two contiguous utterances to be members of different subconstituents of the discourse (as with breaks between phrases in the syntactic analysis of a sentence); likewise, it is common for two utterances that are not contiguous to be members of the same subconstituent.

An individual subconstituent of a discourse exhibits both internal coherence and coherence with the other subconstituents. That is, discourses have been shown to have two levels of coherence. Global coherence refers to the ways in which the larger segments of discourse relate to one another. It depends on such things as the function of a discourse, its subject matter, and rhetorical schema [Grosz, 1977, 1981; Reichman, 1981].
Local coherence refers to the ways in which individual sentences bind together to form larger discourse segments. It depends on such things as the syntactic structure of an utterance, ellipsis, and the use of pronominal referring expressions [Sidner, 1981].

The two levels of discourse coherence correspond to two levels of focusing -- global focusing and centering. Participants are said to be globally focused on a set of entities relevant to the overall discourse. These entities may either have been explicitly introduced into the discourse or be sufficiently closely related to such entities to be considered implicitly in focus [Grosz, 1981]. In contrast, centering refers to a more local focusing process, one that relates to identifying the single entity that an individual utterance most centrally concerns [Sidner, 1979; Joshi and Weinstein, 1981].

The two levels of focusing/coherence have different effects on the processing of pronominal and nonpronominal definite noun phrases.2 Global coherence and focusing are major factors in the generation and interpretation of nonpronominal definite referring expressions. Local coherence and centering have greater effect on the processing of pronominal expressions. In Section 5 we shall describe the rules governing the use of these kinds of expressions and shall explain why additional processing by the hearer (needed for drawing additional inferences) is involved when pronominal expressions are used to refer to globally focused entities or nonpronominal expressions are used to refer to centered entities.

2 They differ in other respects also. Reichman [1981] and Grosz [1981] discuss some of these.

Many approaches to language interpretation have ignored these differences, depending instead on powerful inference mechanisms to identify the referents of referring expressions. Although such approaches may suffice, especially for well-formed texts, they are insufficient in general. In particular, such approaches will not work for generation. Here the relationships among focusing, coherence, and referring expressions are essential and must be explicitly provided for. Theories -- and systems based on them -- will generate unacceptable uses of referring expressions if they do not take these relationships into account.3

3 Initial attempts to incorporate focusing mechanisms in generation systems are described in [Appelt, 1981] and [McKeown, 1982].

3. Centering and Anaphora

In our theory, the centers of a sentence in a discourse serve to integrate that sentence into the discourse. Each sentence, S, has a single backward-looking center, Cb(S), and a set of forward-looking centers, Cf(S). Cb(S) serves to link S to the preceding discourse, while Cf(S) provides a set of entities to which the succeeding discourse may be linked. To avoid confusion, the phrase "the center" will be used to refer only to Cb(S).

To clarify the notion of center, we will consider a number of discourses illustrating the various factors that are combined in its definition (abstractly) and in its identification in a discourse. In Section 5 we define center more precisely, show how it relates to Sidner's [1981] immediate focus and potential foci, and discuss how the linkages established by the centers of a sentence help to determine the degree of intelligibility of a discourse. We begin by showing that the center cannot be defined in syntactic terms alone. The interaction of semantics and centering is more complex and is discussed in Section 4.
The following examples, drawn from Reinhart [1982], illustrate the point that the notion of center is not syntactically definable,4 i.e., the syntax of a sentence S does not determine which of its NPs realizes Cb(S). (The reasons for the use of this terminology are discussed in Section 4.)

4 Intonation can obviously affect the interpretation; for the purposes of this paper, it may be regarded as part of the syntax.

(1a) Who did Max see yesterday?
(1b) Max saw Rosa.

(2a) Did anyone see Rosa yesterday?
(2b) Max saw Rosa.

Although (1b) and (2b) are identical, Cb(1b) is Max and Cb(2b) is Rosa. This can be seen in part by noticing that "He saw Rosa" seems more natural than (1b) and "Max saw her" than (2b) (a fact consistent with the centering rule introduced in Section 5). The subject NP is the center in one context, the object NP in the other.

Even when the NP used to realize Cb(S) can be syntactically determined, the Cb(S) itself is not yet fully determined, for Cb(S) is typically not a linguistic entity (i.e., it is not a particular linguistic expression). Rosa, not "Rosa," is Cb(2b). Consider the discourse:

(3a) How is Rosa?
(3b) Did anyone see her yesterday?
(3c) Max saw her.

Here, Cb(3c) is Rosa, but it clearly would not be in other contexts where the expression "her" still realized the backward-looking center of "Max saw her." This is seen most simply by considering the discourse that would result if "How is Joan?" replaced (3a). In the discourse that resulted, Joan, not Rosa, would be the center of (3c).

4. Centering and Realization

The interactions of semantic and pragmatic factors with centering, and their effects on referring expressions, are more complex than the preceding discussion suggests. In the examples given above, the NPs that realize Cb(S) also denote it, but this is not always the case: we used the term "realize" in the above discussion advisedly. In this section, we consider two kinds of examples in which the center of a sentence is not simply the denotation of some noun phrase occurring in the sentence. First, we will examine several examples in which the choice of, and interaction among, different kinds of interpretations of definite noun phrases are affected by the local discourse context (i.e., centering). Second, the role of pragmatic factors in some problematic cases of referential uses of definite descriptions [Donnellan 1966] is discussed.

4.1. Realization and Value-Free and Value-Loaded Interpretations

The distinction between realization and semantic denotation is necessary to treat the interaction between value-free and value-loaded interpretations [Barwise and Perry, 1982] of definite descriptions as they occur in extended discourse. Consider, for example, the following sequence:

(4a) The vice president of the United States is also president of the Senate.
(4b) Historically, he is the president's key man in negotiations with Congress.
(4b') As Ambassador to China, he handled many tricky negotiations, so he is well prepared for this job.

Cb(4b) and Cb(4b') are each realized by the anaphoric element "he."
But (4b) expresses the same thing as "Historically, the vice president of the United States is the president's key man in negotiations with Congress" (in which it is clear that no single individual vice president is being referred to), whereas (4b') expresses the same thing as "As ambassador to China, the [person who is now] vice president of the United States handled many tricky negotiations,..." This can be accounted for by observing that "the vice president of the United States" contributes both its value-free interpretation and its value-loading at the world type to Cf(4a). Cb(4b) is then the value-free interpretation and Cb(4b') is the value-loading, i.e., George Bush.

In this example, both value-free and value-loaded interpretations are shown to stem from the same full definite noun phrase. It is also possible for the movement of the center from a value-free interpretation (for Cb(S)) to a value-loaded interpretation (for Cb of the next sentence) -- or vice versa -- to be accomplished solely with pronouns. That is, although (4b)-(4b') is (at least for some readers) not a natural dialogue, similar sequences are possible. There appear to be strong constraints on the kinds of transitions that are allowed. In particular, if a given sentence forces either the value-free or value-loaded interpretation, then only that interpretation becomes possible in a subsequent sentence. However, if some sentence in a given context merely prefers one interpretation while allowing the other, then either one is possible in a subsequent sentence. For example, the sequence

(5a) The vice president of the United States is also president of the Senate.
(5b) He's the president's key man in negotiations with Congress.

in which "he" may be interpreted as either value-free (VF) or value-loaded (VL), may be followed by either of the following two sentences:

(5c) As ambassador to China, he handled many tricky negotiations. (VL)
(5c') He is required to be at least 35 years old. (VF)

However, if we change (5b) to force the value-loaded interpretation, as in (5b'), then only (5c) is possible.

(5b') Right now he is the president's key man in negotiations with Congress.

Similarly, if (5b) is changed to force the value-free interpretation, as in (4b), then only (5c') is possible. If an intermediate sentence allows both interpretations but prefers one in a given context, then either is possible in the third sentence. A use with preference for a value-loaded interpretation followed by a use indicating the value-free interpretation is illustrated in the sequence:

John thinks that the telephone is a toy.
He plays with it every day. (VL preferred; VF ok)
He doesn't realize that it is an invention that changed the world. (VF)

The preference for a value-free interpretation that is followed by a value-loaded one is easiest to see in a dialogue situation:

s1: The vice president of the United States is also president of the Senate.
s2: I thought he played some important role in the House. (VF preferred; VL ok)
s1: He did, but that was before he was VP. (VL)

4.2. Realization and Referential Use

From these examples, it might appear that the concepts of value-free and value-loaded interpretation are identical to Donnellan's [1966] attributive and referential uses of noun phrases. However, there is an important difference between these two distinctions.
The importance to our theory is that the referential use of definite noun phrases introduces the need to take pragmatic factors (in particular, speaker intention) into account, not just semantic factors.

Donnellan [1966] describes the referential and attributive uses of definite descriptions in the following way:

"A speaker who uses a definite description attributively in an assertion states something about whoever or whatever is the so-and-so. A speaker who uses a definite description referentially in an assertion, on the other hand, uses the description to enable his audience to pick out whom or what he is talking about and states something about that person or thing. In the first case the definite description might be said to occur essentially, for the speaker wishes to assert something about whatever or whoever fits that description; but in the referential use the definite description is merely one tool for doing a certain job -- calling attention to a person or thing -- and in general any other device for doing the same job, another description or a name, would do as well. In the attributive use, the attribute of being the so-and-so is all important, while it is not in the referential use."

The distinction Donnellan suggests can be formulated in terms of the different propositions a sentence S containing a definite description D may be used to express on different occasions of use. When D is used referentially, it contributes its denotation to the proposition expressed by S; when it is used attributively, it contributes to the proposition expressed by S a semantic interpretation related to the descriptive content of D. The identity of this semantic interpretation is not something about which Donnellan is explicit. Distinct formal treatments of the semantics of definite descriptions in natural language would construe the appropriate interpretation differently. In semantic treatments based on possible worlds, the appropriate interpretation would be a (partial) function from possible worlds to objects; in the situation semantics expounded by Barwise and Perry, the appropriate interpretation is a (partial) function from resource situations5 to objects.

As just described, the referential-attributive distinction appears to be exactly the distinction that Barwise and Perry formulate in terms of the value-loaded and value-free interpretations of definite noun phrases. But this gloss omits an essential aspect of the referential-attributive distinction as elaborated by Donnellan. In Donnellan's view, a speaker may use a description referentially to refer to an object distinct from the semantic denotation of the description and, moreover, to refer to an object even when the description has no semantic denotation. In one sense, this phenomenon arises within the framework of Barwise and Perry's treatment of descriptions. If we understand the semantic denotation of a description to be the unique object that satisfies the content of the description, if there is one, then Barwise and Perry would allow that there are referential uses of a description D that contribute objects other than the semantic denotation of D to the propositions expressed by uses of sentences in which D occurs. But this is only because Barwise and Perry allow that a description may be evaluated at a resource situation other than the complete situation in order to arrive at its denotation on a given occasion of use.
Still, the denotation of the description relative to a given resource situation is the unique object in the situation that satisfies the description relative to that situation. The referential uses of descriptions that Donnellan gives examples of do not seem to arise by evaluation of descriptions at alternative resource situations, but rather through the "referential intentions" of the speaker in his use of the description. This aspect of referential use is a pragmatic rather than a semantic phenomenon and is best analyzed in terms of the distinction between semantic reference and speaker's reference elaborated in Kripke [1977]. Consider the following discourses, drawn from Kripke [1977]:

(6a) Her husband is kind to her.
(6b) No, he isn't. The man you're referring to isn't her husband.

(7a) Her husband is kind to her.
(7b) He is kind to her, but he isn't her husband.

With (6a) and (7a), Kripke has in mind a case like the one discussed in Donnellan [1966], in which a speaker uses a description to refer to something other than the semantic referent of that description, i.e., the unique thing that satisfies the description (if there is one). Kripke analyzes this case as an instance of the general phenomenon of a clash of intentions in language use. In the case at hand, the speaker has a general intention to use the description to refer to its semantic referent; his specific intention, distinct from his general semantic intention, is to use it to refer to a particular individual. He incorrectly believes that these two intentions coincide, and this gives rise to a use of the referring expression "her husband" in which the speaker's reference and the semantic reference are distinct.6 (The speaker's referent is presumably the woman's lover.)

From our point of view, the importance of the case resides in its showing that Cf(S) may include more than one entity that is realized by a single NP in S. In this case, "her husband" contributes both the husband and the lover to Cf(6a) and Cf(7a). This can be seen by observing that both discourses seem equally appropriate and that the backward-looking centers of (6b) and (7b) are the husband and the lover, respectively, realized by their anaphoric elements. Hence, the forward-looking centers of a sentence may be related not semantically but pragmatically to the NPs that realize them. The importance of the referential/attributive distinction from our point of view is thus that it leads to cases in which the centers of a sentence may be pragmatically rather than semantically related to the noun phrases that realize them.

5. Center Movement and Center Realization -- Constraints

In the foregoing sections we have discussed a number of examples to illustrate two essential points. First, the noun phrase that realizes the backward-looking center of an utterance in a discourse cannot be determined from the syntax of the utterance alone. Second, the relation N realizes c between noun phrases N and centers c is neither solely a semantic nor solely a pragmatic relation. This discussion has proceeded at a rather intuitive level, without explicit elaboration of the framework we regard as appropriate for dealing with centering and its role in explaining discourse phenomena. Before going on to describe constraints on the realization relation that explain certain phenomena in discourse, we should be somewhat more explicit about the notions of center and realization.

5 Roughly, "any situation on which the speaker can focus attention" is a potential candidate for a resource situation with respect to which the speaker may value-load his uses of definite descriptions.
Such resource situations must contain a unique object which satisfies the description.

6 There are, of course, several alternative explanations; e.g., the speaker may believe that the description is more likely than an accurate one to be interpreted correctly by the hearer. Ferreting out exactly what the case is in a given situation requires accounts of mutual belief and the like. A discussion of these issues is beyond the scope of this paper.

We have said that each utterance S in a discourse has associated with it a backward-looking center, Cb(S), and a set of forward-looking centers, Cf(S). What manner of objects are these centers? They are the sort of objects that can serve as the semantic interpretations of singular noun phrases.7 That is, either they are objects in the world (e.g., planets, people, numbers) or they are functions from possible worlds (situations, etc.) to objects in the world that can be used to interpret definite descriptions. That is, whatever serves to interpret a definite noun phrase can be a center.

7 In a fuller treatment of our theory we will consider centers that are realized by constituents in other syntactic categories.

For the sake of concreteness in many of the examples in the preceding discussion, we have relied on the situation semantics of Barwise and Perry. The theory we are developing does not depend on this particular semantical treatment of definite noun phrases, but it does require several of the distinctions that treatment provides. In particular, our theory requires a semantical treatment that accommodates the distinction between interpretations of definite noun phrases that contribute their content to the propositions expressed by sentences in which they occur and interpretations that contribute only their denotation -- in other words, the distinction between value-free and value-loaded interpretations. As noted, a distinction of this sort can be effected within the framework of "possible-worlds" approaches to the semantics of natural language. In addition, we see the need for interpretations of definite noun phrases to be dependent on their discourse context. Once again, this is a feature of interpretations that is accommodated in the relational approach to semantics advocated by Barwise and Perry, but it might be accommodated within other approaches as well.8

8 Israel [1983] discusses some of these issues and compares several properties of situation semantics with Montague semantics.

Given that Cb(S), the center of sentence S in a discourse, is the interpretation of a definite noun phrase, how does it become related to S? In a typical example, S will contain a full definite noun phrase or pronoun that realizes the center. The realization relation is neither semantic nor pragmatic. For example, N realizes c may hold in cases where N is a definite description and c is its denotation, its value-free interpretation, or an object related to it by a "speaker's reference." More importantly, when N is a pronoun, the principles that govern which c are such that N realizes c derive from neither semantics nor pragmatics exclusively. They are principles that must be elicited from the study of discourse itself. A tentative formulation of some such principles is given below.

Though it is typical that, when c is a center of S, S contains an N such that N realizes c, it is by no means necessary. In particular, for sentences containing noun
phrases that express functional relations (e.g., "the door," "the owner") whose arguments are not exhibited explicitly (e.g., a house is the current center, but so far neither its door nor its owner has been mentioned),9 it is sometimes the case that such an argument can be the backward-looking center of the sentence. We are currently studying such cases and expect to integrate that study into our theory of discourse phenomena.

9 Grosz [1977] refers to this as "implicit focusing"; other examples are presented in Joshi and Weinstein [1981].

The basic rule that constrains the realization of the backward-looking center of an utterance is a constraint on the speaker, namely: if the Cb of the current utterance is the same as the Cb of the previous utterance, a pronoun should be used. There are two things to note about this rule. First, it does not preclude using pronouns for other entities as long as one is used for the center. Second, it is not a hard rule, but rather a principle, like a Gricean maxim, that can be violated. However, such violations lead at best to conditions in which the hearer is forced to draw additional inferences. As a simple example, consider the following sequence, assuming at the outset that John is the center of the discourse:

(8a) He called up Mike yesterday. (he=John)
(8b) He was annoyed by John's call.

(8b) is unacceptable, unless it is possible to consider the introduction of a second person named "John." However, intervening sentences that provide for a shift in center from John to Mike (e.g., "He was studying for his driver's test") suffice to make (8b) completely acceptable.

Sidner's discourse focus corresponds roughly to Cb(S), while her potential foci correspond approximately to Cf(S). However, she also introduces an actor focus to handle multiple pronouns in a single utterance. The basic centering rule not only allows us to handle the same examples more simply, but also appears to avoid one of the complications in Sidner's account. Example D4 from Sidner [1981] illustrates this problem:

(9-1) I haven't seen Jeff for several days.
(9-2) Carl thinks he's studying for his exams.
(9-3) But I think he went to the Cape with Linda.

On Sidner's account, Carl is the actor focus after (9-2) and Jeff is the discourse focus (Cb(9-2)). Because the actor focus is preferred as the referent of pronominal expressions, Carl is the leading candidate for the entity referred to by "he" in (9-3). It is difficult to rule this case out without invoking fairly special rules. On our account, Jeff is Cb(9-2) and there is no problem. The addition of actor focus was made to handle multiple pronouns -- for example, if (9-3) were replaced by "He thinks he studies too much." The center rule allows such uses, without introducing a second kind of focus (or center), by permitting entities other than Cb(S) to be pronominalized as long as Cb(S) is.10

10 Obviously, if Cb(S) is not expressed in the next sentence then this issue does not arise.

Two aspects of centering affect the kinds of inferences a hearer must draw in interpreting a definite description. First, the shifting of center from one entity to another requires recognition of this change. Most often such changes are effected by the use of full definite noun phrases, but in some instances a pronoun may be used. For example, Grosz [1977] presents several examples of pronouns being used to refer to objects mentioned many utterances back. Second, the hearer must process (interpret) the particular linguistic expression that realizes the center.
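As a rough illustration, the basic centering rule stated above can be cast as a check over a discourse represented as a list of utterances. Everything here is an assumption introduced for illustration: utt(Cb, Mentions) pairs an utterance's backward-looking center with the entities it realizes and the form used for each, and the case where Cb(S) is simply not expressed in the next sentence is ignored.

    % centering_ok(+Utterances): succeeds when no utterance retains the
    % previous Cb while failing to realize it with a pronoun.
    centering_ok([]).
    centering_ok([_]).
    centering_ok([utt(Cb, _), utt(Cb, Mentions) | Rest]) :- !,
        member(Cb-pronoun, Mentions),      % same Cb: it must be pronominalized
        centering_ok([utt(Cb, Mentions) | Rest]).
    centering_ok([_, Next | Rest]) :-      % Cb shifted: no constraint imposed
        centering_ok([Next | Rest]).

With this encoding, the (8a)-(8b) sequence fails the check -- centering_ok([utt(john, [john-pronoun, mike-full_np]), utt(john, [john-full_np])]) fails -- while realizing John with a pronoun in the second utterance makes it succeed.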
Most previous attempts to account for the interaction of different kinds of referring expressions with centering and focusing (or "topic") have conflated these two. For example, Joshi and Weinstein [1981] present a preliminary report on their research regarding the connection between the computational complexity of the inferences required to process a discourse and the coherence of that discourse as assessed by measures that invoke the centering phenomenon. However, several of the examples combine changes of expression and shifts in centering.

Violations of the basic centering rule require the hearer to draw two different kinds of inferences. The kind required depends on whether a full definite noun phrase is used to express the center or whether a pronoun is used for a noncentered entity. We will consider each case separately.

Several different functions may be served by the use of a full definite noun phrase to realize the currently centered entity. For instance, the full noun phrase may include some new and unshared information about the entity. In such cases, additional inferences arise from the need to determine that the center has not shifted and that the properties expressed hold for the centered entity. For example, in the following sequences

(10) I took my dog to the vet the other day. The mangy old beast...
(11) I'm reading The French Lieutenant's Woman. The book, which is Fowles' best,...

the full definite noun phrases that are in boldface do more than merely refer.

When the current center is not pronominalized (it may not be present in the sentence), the use of a pronoun to express an entity other than the current center is strongly constrained. The particular cases that have been identified involve instances in which attention is being shifted back to a previously centered entity (e.g., Grosz, 1977; Reichman, 1978) or to one element of a set that is currently centered. In such cases, additional inferences are required to determine that the pronoun does not refer to the current center, as well as to identify the context back to which attention is shifting. These shifts, though indicated by linguistic expressions typically used for centering (pronouns), correspond to a shift in global focus.

6. Summary

The main purpose of the paper was to sort out the confusion about the roles of syntactic, semantic, and pragmatic factors in the interpretation and generation of definite noun phrases in discourse. Specific mechanisms that account for the interactions among these factors were presented. Discourses were shown to be coherent at two different levels, i.e., with referring expressions used to identify entities that are centered locally and those focused upon more globally. The differences between references at the global and local levels were discussed, and the interaction of the syntactic role of a given noun phrase and its semantic interpretation with centering was described.

References

Appelt, D.E., "Planning Natural-Language Utterances," Proc. of the National Conference on Artificial Intelligence, Pittsburgh, Pennsylvania (August 1982).

Barwise, J. and Perry, J., Situations and Attitudes, Bradford Books, Cambridge, Mass. (1982).

Donnellan, K., "Reference and Definite Descriptions," Philosophical Review, Vol. 75, pp. 281-304 (1966).

Grosz, B.J., "The Representation and Use of Focus in Dialogue Understanding," Ph.D. Thesis, University of California, Berkeley. Also, Technical Note No.
151, Artificial Intelligence Center, SRI International. (1977). Grosz, B.J., "Focusing and Description Language Dialogues," Elements of Understanding, Joshi et al., (eds.) Cambridge Press, Cambridge, England (1982). in Natural Discourse University Israel, D.J., "A Prolegomenon to Situation Semantics," Proc. of the 21st Annual Meeting of the Assoc. for Computational Linguistics, Cambridge, Mass. (June 15-17, 1983). Joshi, A. and S. Weinstein, "Control of Inference: Role of Some Aspects of Discourse Structure-Centering," Proc. bzternational Joint Conference on Artificial Intelligence, Vancouver, B.C. pp. 385-387 {August 24-28, I08t). Kripke, S., "Speaker's Reference and Semantic Reference," Contemporary Pespectives in the Philosophy of Language, University of Minnesota Press, Minneapolis, Minnesota, pp. 6-27, (1977). McKeown, K.R., "The TEXT System for Natural Language Generation: An Overview," Proc. of the 20th .4nnual Aieeting of the Assoc. for Computational Linguistics, 16-18 June 1982, Toronto, Ontario, Canada (June 1982}. /49 Reichman, R. "Conversational Coherency," Cognitive Science Vol. 2, No. 4, pp. 283-327, (1978}. Reichman, R. "Plain Speaking: A Theory and Grammar of Spontaneous Discourse," Technical Report No. 4681, Bolt Beranek and Newman, Cambridge, Mass. (June 1981). Reinhart, T., "Prag'maties and Linguistics, An Analysis of Sentence Topics," Indiana University Linguistics Club, Bloomington, Indiana (1978). Sidner, C.L., Toward a Computational Theory of Definite Anaphora Comprehension in English, MIT Technical Report AI-TR-537, (1979). Sidner, C., "Focusing for Interpretation of Pronouns," American Journal of Computational Linguistics Vol. 7, No. 4, pp. 217-231 (1981). 5O | 1983 | 7 |
USING %-CALCULUS TO REPRESENT MF~kNINGS IN LOGIC GRAMMARS* David Scott Warren Computer Science Department SUNY at Stony Brook Stony Brook, NY 11794 ABSTRACT This paper descrlbes how meanings are repre- sented in a semantic grammar for a fragment of English in the logic programming language Prolog. The conventions of Definite Clause Grammars are used. Previous work on DCGs with a semantic com- ponent has used essentially first-order formulas for representing meanings. The system described here uses formulas of the typed ~-calculus. The first section discusses general issues concerning the use of first-order logic or the h-calculus to represent meanings, The second section describes how h-calculus meaning representations can be con- structed and manipulated directly in Prolog. This 'programmed' representation motivates a suggestion, discussed in the third section, for an extension to Prolog so that the language itself would include a mechanism for handling the ~-formulas directly. I h-CALCULUS AND FOL AS MEANING REPRESENTATION LANGUAGES The initial phase of most computer programs for processing natural language is a translation system. This phase takes the English text input and transforms it into structures in some internal meaning-representation language. Most of these systems fall into one of two groups: those that use a variant of first-order logic (FOL) as their representation language, and those that use the typed h-calculus (LC) for their representation language. (Systems based'on semantic nets or con- ceptual dependency structures would generally be calsslfied as using variants of FOL, but see [Jones and Warren, 1982] for an approach that views them as LC-based.) The system considered here are several highly formalized grammar systems that concentrate on the translation of sentences of logical form. The first-order logic systems are exemplified by those systems that have developed around (or gravitated to) logic programming, and the Prolog language in particular. These include the systems described ill [Colmerauer 1982], [Warren 1981], [Dahl 1981], [Simmons and Chester 1982], and [McCord 1982]. The systems using the ~- calculus are those that * This material is based upon work supported by the National Science Foundation under grant ~IST-80- 10834 developed out of the work of Richard Montague. They include the systems described in [Montague 1973], [Gawron et al. 1982], [Rosenschein and Sheiber 1982], [Schubert and Pelletier 1982], and [Warren and Friedman 1981]. For the purposes of this paper, no distinction is made between the intensional logic of Montague grammar and the typed h-calculus. There is a mapping from inten- sional logic to a subset of a typed h-calculus [Gallin 1975], [Clifford 1981] that shows they are essentially equivalent in expressive power. All these grammar systems construct a formula to represent the meaning of a sentence composi- tionally over the syntax tree for the sentence. They all use syntax directed translation. This is done by first associating a meaning structure with each word. Then phrases are constructed by syntac- tically combining smaller phrases together using syntactic rules. Corresponding to each syntactic rule is a semantic rule, that forms the meaning structure for a compound phrase by combinging the meanin~ structures of the component phrases. This is clearly and explicitly the program used in Montague grammar. 
It is also the program used in Prolog-based natural language grammars with a semantic component; the Prolog language itself essentially forces this methodology. Let us consider more carefully the meaning structures for the two classes of systems of inter- est here: those based on FOL and those based on LC. Each of the FOL systems, given a declarative sentence as input, produces a well-formed formula in a first-order logic to represent the meaning of the sentence. This meaning representation lo~ic will be called the MRFOL. The MILFOL has an intended interpretation based on the real world. For example, individual variables range over ob- jects in the world and unary predicate symbols are interpreted as properties holding of those real world objects. As a particular recent example, consider Dahl's system [1981]. Essentially the same approach was used in the Lunar System [Woods, et al. 1972]. For the sentence 'Every man walks', Dahl's system would produce the expression: for(X,and(man(X),not walk(X)), equal(card(X),0)) where X is a variable that ranges over real-world 51 individuals. This is a formula in Dahl's MRFOL, and illustrates her meaning representation lang- uage. The formula can be paraphrased as "the X's which man is true of and walk is not true of have ¢ardinality zero." It is essentially first-order because the variables range over individuals. (There would need to be some translation for the card function to work correctly.) This example also shows how Dahl uses a formula in her MRFOL as the meaning structure for a declarative sentence. The meaning of the English sentence is identified with the meaning that the formula has in the in- tended interpretations for the MRFOL. Consider mow the meaning structure Dahl uses for phrases of a category other than sentence, a noun phrase, for example. For the meaning of a noun phrase, Dahl uses a structure consisting of three components: a variable, and two 'formulas'. As an example, the noun phrase 'every man' has the following triple for its meaning structure: [X1,X/,for(Xl,and(man(Xl),not(X2)), eqnal(card(Xl),0))]. We can understand this structure informally by thinking of the third component as representing the meaning of 'every man'. It is an object that needs a verb phrase meaning in order to become a sentence. The X2 stands for that verb-phrase meaning. For example, during constz~ction of the meaning of a sentence containing this noun phrase as the subject, the meaning of the verb-phrase of the sentence will be bound to X2. Notice that the components of this meaning structure are not them- selves formulas in the MRFOL. They look very much like FOL formulas that represent meanings, but on closer inspection of the variables, we find that they cannot be. X2 in the third component is in the position of a formula, not a term; 'not' applies to truth values, not to individuals. Thus X2 cannot be a variable in the M1%FOL, because X2 would have to vary over truth values, and all FOL variables vary over individuals. So the third Component is not itself a MIRFOL formula that (in conjunction with the first two components) repre- sents the meaning of the noun phrase, 'every man'. The intuitive meaning here is clear. The third compdnent is a formula fragment that partici- pates in the final formula ultimately representing the meaning of the entire sentence of which this phrase is a subpart. The way this fragment Dartic- ipates is indicated in part by the variable X2. 
It is important to notice that X2 is, in fact, a syntactic variable that varies over formulas, i,e., it varies over certain terms in the MRFOL. X2 will have as its value a formula with a free variable in it: a verb-phrase waiting for a subject. The X1 in the first component indicates what the free variable must become to match this noun phrase correctly. Consider the operation of putting XI into the verb-phrase formula and this into the noun-phrase formula when a final sentence meaning is constructed. In whatever order this is done, there must be an operation of substitution a for- mula with a free variable (XI) in it, into the scope of a quantifier ('for') that captures it. Semantically this is certainly a dubious operation. The point here is not that this system is wrong or necessarily deficient. Rather the repre- sentation language used to represent meanings for subsentential components is not precisely the MRFOL. Meaning structures built fo~ subcomponents are, in general, fra~rments of first-order formulas with some extra notation to be used in further formula construction. This means, in general, that the meanings of subsentential phrases are not given a semantles by first-order model theory; the meanings of intermediate phrases are (as far as traditional first-order logic is concerned) merely uninterpreted data structures. The point is that the system is building terms, syntactic objects, that will eventually be put to- gether to represent meanings of sentences. This works because these terms, the ones ultimately associated with sentences, always turn out to be formulas in the MRFOL in just the right way. How- ever, some of the terms it builds on the way to a sentence, terms that correspond to subcomponents of the sentence, are not in the MRFOL, and so do not have a interpretation in its real world model. Next let us move to a consideration of those systems which use the typed l-calculus (LC) as their meaning representation language. Consider again the simple sentence 'Every man walks'. The grammar of [Montague 1973] associates with this sentence the meaning: forail(X,implies(man(X),waik(X))) (We use an extensional fragment here for simplic- ity.) This formula looks very much like the first- order formula given above by the Dahl system for the same sentence. This formula, also, is a for- mula of the typed X-calculus (FOL is a subset of LC). Now consider a noun phrase and its associated meaning structure in the LC framework. For 'every man' the meanin~ structure is: X(P,forall(X,implies(man(X),P(X)))) This meaning structure is a formula in the k- calculus. As such it has an interpretation in the intended model for the LC, just as any other for- mula in the language has. This interpretation is a function from properties to truth-values; it takes properties that hold of every man to 'true' and all other properties to 'false'. This shows that in the LC framework, sentences and subsenten- tial phrases are given meanings in the same way, whereas in FOL systems only the sentences have meanings. Meaning structures for sentences are well-formed LC formulas of type truth-value; those for other phrases are well-formed LC terms of other types. Consider this k-formula for 'every man' and compare it with the three-tuple meaning structure built for it in the Dahl system. The ~-variable P plays a corresponding role to the X2 variable of the triple; its ultimate value comes from a verb- phrase meaning encountered elsewhere in the sentence. 
First-order logic is not quite expressive 52 enough to represent directly the meanings of the categories of phrases that can be subcomponents of sentences. In systems based on first-order logic, this limitation is handled by explicitly construc- ting fragments of formulas, with extra notation to indicate how they must later combine with other fragments to form a true first-order formula that correctly represents the meaning of the entire sentence. In some sense the construction of the semantic representation is entirely syntactic until the full sentence meaning structure is constructed, at which point it comes to a form that does have a semantic interpretation. In contrast, in systems that use the typed l-calculus, actual formulas of the formal language are used at each step, the language of the l-calculus is never left, and the building of the semantic representation can actu- ally be understood as operations on semantic objects. The general idea of how to handle the example sentence 'Every man walks' in the two systems is essentially the same. The major difference is how this idea is expressed in the available languages. The LC system can express the entire idea in its meaning representation language, because the typed l-calculus is a more expressive language. The obvious question to ask is whether there is any need for semantically interpretable meaning representations at the subsentential level. One important reason is that to do formal deduction on subsentential components, their meanings must be represented in a formal meaning representation language. LC provides such a language and FOL does not. And one thing the field seems to have learned from experience in natural language proc- essing is that inferencing is useful at all levels of processing, from words to entire texts. This points us toward something like the LC. The problem, of course, is that because the LC is so expressive, deduction in the full LC is extremely difficult. Some problems which are decidable in FOL become undecidable in the l-calculus; some problems that are semi-decidable in FOL do not even have partial decision procedures in the LC. It is certainly clear that each language has limi- tations; the FOL is not quite expressive enough, and the LC is much too powerful. With this in mind, we next look at some of the implications of trying to use the LC as the meanin~ representation language in a Proiog system. II LC IN PROLOG PROLO~ is extremely attractive as a lan~uaFe for expressinE grammars. ~tamorphosis ~rammars [Colmerauer 197g] and Definite Clause Grammars (DCGs) [Pereira and ICarren 1980] are essentially conventions for representing grammars as logic programs. DCGs can perhaps most easily be under- stood as an improved cersion of the Augmented Transition Network language [Woods 1970]. Other work on natural language in the PROLOG framework has used firs$-order meaning representation lang- uages. The rest of this paper explores the impli- cations of using the l-calculus as the meaning representation language for a system written in PROLOG using the DCG conventions. The followin~ paragraphs describe a system that includes a very small grammar. The point of this system is to investigate the use of PROLOG to construct meanings with the %-calculus as the meaning representation language, and not to explore questions of linRulstic coverage. The grammar is based on the grammar of [Montague 1973], but is entirely extensional. Including inten- sionality would present no new problems in principle. 
The idea is very simple. Each nonterminal in the grammar becomes a three-place predicate in the Prolog program. The second and third places indicate locations in the input string, and are normally suppressed when DCGs are displayed. The first piece is the LC formula representing the meaning of the spanned syntactic component. Lambda-formulas are represented by Prolo~ terms. The crucial decision is how to represent variables in the h-formulas. One 'pure' way is to use a Prolog function symbol, say ivar, of one argument, an integer. Then Ivar(37) would repre- sent a l-variable. For our purposes, we need not explicitly encode the type of %-terms, since aii the formulas that are constructed are correctly typed. For other purposes it might be desirable to encode explicitly the type in a second argument of ivar. Constants could easily be represented using another function symbol, icon. Its first argument would identify the constant. A second argument could encode its type, if desired. Appli- cation of a l-term to another is represented using the Prolog function symbol lapply, which has two argument places, the first for the function term, the second for the argument term. Lambda abstrac- tion is represented using a function symbol ~ with two arguments: the ~-variable, and the function body. Other commonly used connectives, such as 'and' and 'or', are represented by similarly named function symbols with the appropriate number of argument places. With this encoding scheme, the h-term: %P(3x(man(x) & P(x)) would be represented by the (perhaDs somewhat awkward-looking) Prolo~ term: lambda(Ivar(3),Ithereis(ivar(1),land( lapply(icon(man),l~r(1)) lapply(ivar(3),ivar(1)) ))) ~-reduction would be coded as a predicate ireduce (Form, Reduced), whose first argument is an arbi- trary %-formula, and second is its ~-reduced form. This encoding requires one to generate new variables to create variants of terms in order to avoid collisions of %-variables. The normal way to avoid collisions is with a global 'gensym' counter, to insure the same variable is never used twice. One way to do this in Prolog is to include 53 a place for the counter in each grarmnar predicate. This can be done by including a parameter which will always be of the form gensym(Left,Right), where Left is the value of the gensym counter at the left end of the phrase spanned by the predicate and Right is the value at the right end. Any use of a k-variable in building a l-formula uses the counter and bumps it. An alternative and more efficient way to en- code k-terms as Prolog terms involves using Prolog variables for l-variables. This makes the substi- tution trival, essentially using Prolog's built-ln facility for manipulating variables. It does, how- ever, require the use of Prolog's meta-logical predicate var to test whether a Prolog variable is currently instantiated to a variable. This is necessary to prevent the k-varlables from being used by Prolog as Prolog variables, In the example below, we use Prolog variables for X-varlables and also modify the Icon function encoding of con- s=ants, and let constants stand for themselves. This results in a need to use the meta-logical predicate atom. This encodin E scheme might best be considered as an efficiency hack to use Prolog's built-in variable-handllng facilities to speed the A-reduction. We give below the Prolog program that repre- sents a small example grammar with a few rules. This shows how meaning structures can be repre- sented as l-formulas and manipulated in Prolog. 
Notice the simple, regular structure of the rules. Each consists of a sequence of grammar predicates that constructs the meanings of the subcomponents, followed by an instance of the ireduce predicate that constructs the compound meaning from the com- ponent meanings and l-reduces the result. The syntactic manipulation of the formulas, which re- sults for example in the relatively simple formula for the sentence 'Every man walks' shown above, is done in the h-reductlon performed by the ireduce predicate. /* */ tS(M,X,Y) :- te(Ml,X,Z). iv(M2,Z,Y), ireduce(lapply(Mi,M2),M). te(M,X,Y) :- det(Mi,X,Z), cn(M2,Z,Y), lreduce(lapply(}~,M2),M). te(lambda(P,lapply(P,j)),[johnIX],X). cn(man,[manlX],X). cn(woman,[womanIX],X). det(lambda(P,lambda(Q,iforall(Z, limplies(lapply(P,Z),lapply(Q,Z))))), [everyIX],X) iv(M,X,Y) :- tv(MI,X,Z), te(M2,Z,Y), ireduce(lapply(Mi,M2),M). */ iv(walk,[walkslX],X). tv(lambda(P,lambda(Q,lapply(P, lambda(Y,lapply(lapply(love,Y),Q))))), [loves[X],X). /* III I-CAT.CULUS IN THE PROLOG INTERPRETER There are several deficiencies in this Prolog implementation of grammars using the X-calculus as a meaning representation language. First, neither of the suggested implementa- tions of X-reduction in Prolog are particularly attractive. The first, which uses first-order constants to represent variables, requires the addition of a messy gensym argument place to every predicate to simulate the global counter, This seems both inelegant and a duplication of effort, since the Prolog interpreter has a similar kind of variable-handling mechanism built into it. The second approach takes advantage of Prolog's built- in variable facilities, but requires the use of Prolog's meta-logical facilities to do so. This is because Prolog variables are serving two func- tions, as Prolog varlabies and as h-variables. The two kinds of variables function differently and must be differentiated. Second, there is a problem with invertibility. Many Prolog programs are invertible and may be run 'backwards'. We should be able, for example, to evaluate the sentence grammar predicate giving the meaning of a sentence and have the system produce the sentence itself. This ability to go from a meaning formula back to an English phrase that would produce it is one of the attractive proper- ties of logic grammars. The grammar presented here can also be run this way. However, a careful look at this computation process reveals that with this implementation the Prolog interpreter performs essentially an exhaustive search. It generates every subphrase, h-reduces it and checks to see if it has the desired meaning. Aside from being theo- retically unsatisfactory, for a grammar much larger than a trivially-small one, this approach would not be computationally feasible. So the question arises as to whether the Prolog interpreter might be enhanced to know about l-formulas andmanipulate them directly. Then the Prolog interpreter itself would handle the X-reduc- tion and would be responsible for avoiding variable collisions. The logic grammars would look even simpler because the ireduce predicate would not need to be explicitly included in each grammar rule. For example, the ts clause in the grammar in the figure above would become: ts(lapply(MI,M2),X,Y) te(MI,X,Z), iv(M2,Z,Y). 54 Declarations to the Prolog interpreter could be included to indicate the predicate argument places that contain l-terms. Consider what would be involved in this modification to the Prolog sys- tem. 
It might seem that all that is required is just the addition of a l-reduction operator applied to l-arguments. And indeed when executing in the forward direction, this is essentially all that is involved. Consider what happens, however, if we wish to execute the grammar in the reverse direction, i.e., give a l-term that is a meaning, and have the Prolog system find the English phrase that has that meaning. Now we find the need for a 'l-expan- sion' ability. Consider the situation in which we present Prolog with the following goal: ts(forall(X,implies(man(X),walk(X))),S,[]). Prolog would first try to match it with $he head of the ts clause given above. This would require matching the first terms, i.e., forall(X,implies(lapply(man,X),lapply(walk,X))) and lapply(Mi,M2) (using our encoding of l-terms as Prolog terms.) The marcher would have available the types of the variables and terms. We would like it to be able to discover that by substituting the right terms for the variables, in particular substituting lambda(P,forall(X,implies( lapply(man,X),lapply(P,X)))) and walk for M2 for M1 in the second term, it becomes the same as the first term (after reduction). These MI and M2 values would then be passed on to the te and iv predicates. The iv predicate, for example, can easily find in the facts the word to express the meaning of the term, walk; it is the work 'walks' and is expressed by the fact iv(walk,[walksIX],X), shown above. For the predicate re, given the value of MI, the system would have to match it against the head of the te clause and then do further computation to eventually construct the sentence. ~at we require is a general algorithm for matching l-terms. Just as Prolog uses unification of first-order terms for its parameter mechanism, to enhance Prolog to include l-terms, we need general unification of l-~erms. The problem is that l-unlficatlon is much more complicated than first-order unification. For a unifiable pair of first-order terms, there exists a unique (up to change of bo~md variable) most general unifier (mgu) for them. In the case of l-terms, this is not true; there may be many unifiers, which are not generalizations of one another. Furthermore unification of l-terms is, in general, undecidable. These facts in themselves, while perhaps dis- couraging, need not force us to abandon hope. The fact that there is no unique mgu just contributes another place for nondeterminism to the Prolog interpreter. And all interpreters which have the power of a universal Turing machine have undecid- able properties. Perhaps another source of unde- cidability can be accommodated. Huet [197~] ',-s given a semi-decision procedure for unification in the typed l-calculus. The question of whether this approach is feasible really comes down to the finer properties of the unification procedure. It seems not unreasonable to hope that in the relatively simple cases we seem to have in our grammars, this procedure can be made to perform adequately. Notice that, for parsing in the forward direction, the system will always be unifying a l-term with a variable, in which case the unification problem is trivial. We are in the process of programming Huet's algorithm to include it in a simple Prolog- like interpreter. We intend to experiment with it to see how it performs on the l-terms used to represent meanings of natural language expressions. 
Warren [1982] points out how some suggestions for incorporating l-calculus into Prolog are moti- vated by needs that can easily and naturally be met in Prolog itself, unextended. Following his suggestions for how to represent l-expressions in in Prolo~ directly, we would represent the meaning of a sentence by a set of asserted Prolog clauses and an encoding atomic name, which would have to be generated. While this might be an interesting alternate approach to meaning representations, it is quite different from the ones discussed here. IV CONCLUSIONS We have discussed two alternatives for meaning representation languages for use in the context of lo~ic grammars. We pointed out how one advantage of the typed l-calculus over first-order logic is its ability to represent directly meanings of phrases of all syntactic cateBories. We then showed how we could implement in Prolog a logic grammar using the l-calculus as the meaning repre- sentation languaEe. Finally we discussed the possibility and some of the implications of trying to include part of the l-calculus in the logic pro- gramming system itself. We suggested how such an integration might allow grammars to be executed backwards, generating English sentences from input logical forms. ~ intend to explore this further in future work. If the l-calculus can be smoothly incorporated in the way suggested, then natural language grammar writers will find themselves 'programming' in two languages, the first-order language (e.g. Prolog) for syntax, and the typed l-calculus (e.g. typed LISP) for semantics. As a final note regarding meaning representa- tion languages: we are still left with the feeling that the first-order languages are too weak to express the meanings of phrases of all categories, and that the l-calculus is too expressive to be 55 computatlonally tractable. There is a third class of languages that holds promise of solving both these difficulties, the function-level languages that have recently been developed in the area of progranm~ing languages [Backus 1978] [$hultis 1982]. These languages represent functions of various types and thus can be used to represent the mean- ings of subsentential phrases in a way similar to the l-calculus. Deduction in these languages is currently an active area of research and much is beginning to be known about their algebraic prop- erties. Term rewriting systems seem to be a powerful tool for reasoning in these languages. I would not be surprised if these functlon-level languages were to strongly influence the formal meaning representation languages of the future. V REFERENCES Backus, J. [1978] Can Programming Be liberated from the yon Neumann Style? A Functional Style and Its Algebra of Programs, Co~unicatlons of the ACM, Vol 21, No 8, (Aug 1978), 613-641. Clark, K.L and S.-A. T~rnlund (eds.) [1982] Logic Programming, Academic Press, New York, 366 pp. Clifford, J. [1981] ILs: A formulation of Montague's intenslonal logic that includes variables and constants over indices. TR#81-029, Department of Computer Science, SUNY, Stony Brook, New York. Colmerauer, A. [1978] Metamorphosis Grammars, in Natural Language Conm~unication with Computers, Vol i, Sprlnger Verlag, 1978, 133-189. Colmerauer, A. [1982] An Interesting Subset of Natural Language, in Logic Pro~rarming, Clark, K.L and 3.-A T~rnlund (eds.), 45-66. Dahl, Veronica [1981] Translating Spanish into Logic through Logic, American Journal of Computational Linguistics, Vol 7, No 3, (Jul- Sep 1981), 149-164. Gallln, D. 
[1975] Intensional and Higher-order Modal Logic , North-Holland Pubilshing Company, Amsterdam. Gawron, J.M., et.al. [1982] The GPSG Linguistic System, Proceedings 20th Annual Meetin~ of the Association for Computational Linguistics, 74-81. Huet, G.P. [1975] A Unification Algorithm for Typed l-Calculus, Theoretical Computer Science, Vol i, No i, 22-57. Jones, M.A., and Warren, D.S. [1982] Conceptual Dependency and Montague Grammar: A step toward conciliation, Proceedings of the National Conference #nn A~tificial Intelli~ence, AAAI-82, 79-83. McCord, M. [1982] Using Slots and Modifiers in Logic Grammars for Natural Language, Artifical Intelligence, Vol 18, 327-367. Montague, Richard [1973] The proper treatment of quantification in ordinary English, (PTQ), reprinted in Montague [1974], 246-270. Montague, Richard [1974] Formal Philosophy: Selected Paper of Richard Montague, edited and with an introduction by R. Thomason, Yale University Press, New Haven. Pereira, F.C.N. and Warren, D.H.D. [1980] Definite Clause Grammars for Language Analysis - A survey of the formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence 13,3 (May 1980) 231-278. Rosenschein, S.J. and Shieber, S.M. [1982] Translating English into Logical Form, Proceedings of the 20th Annual Meeting of the Association for Comp-~ational Linguistics, June 1982, Toronto, 1-8. Schubert L.K. and Pelletier F.J. [1982] From English to Logic: Context-free Computation of 'Conventional' Logical Translation, American Journal of Computational Linguistics, Vol 8, NO 1, (Jan-Mar 1982), 27-44. Shultls, J. [1982] Hierarchical Semantics, Reasoning, and Translation, Ph.D. Thesis, Department of Computer Science, SUNY, Stony Brook, New York. Simmons, R.F. and Chester, D. [1982] Relating Sentences and Semantic Networks with Procedural Logic, Communications of the ACM, Vol 25, Num 8, (August, 1982), 527-546. Warren, D.H.D. [1981] Efficient processing of interactive relational database queries expressed in logic, Proceedings of the 7th Conference on Very Large Data Bases, Cannes, ~72-281, Warren, D.H.D. [1982] Higher-order extensions to PROLOG: are they needed? Machine Intelligence i~ Ilayes, Michie, Pao, eds. Ellis Horwood Ltd. Chlchester. Warren, D.S. and Friedman, J. [1981] Using Semantics in Noncontext-free Parsing of Montague Grammar, TR#81-027, Department of Computer Science, SUNY, Stony Brook, New York, (to appear). Woods, W.A. [1970] Transition Network Grammars for Natural Language Analysis, Communications of the ACM, Vol i, No I0, (Oct 1970). Woods, W.A., Kaplan, R.M., and Nash-Webber, B. [19721 The Lunar Science Natural Language Information System: Final Report, BBN Report No. 2378, Bolt Baranek and Newman, Cambridge, 56 | 1983 | 8 |
AN IMPROPER TREATMENT OF QUANTIFICATION IN ORDINARY ENGLISH Jerry R. Hobbs SRI International Menlo Park, California i. The Problem Consider the sentence In most democratic countries most politicians can fool most of the people on almost every issue most of the time. In the currently standard ways of representing quantification in logical form, this sentence has 120 different readings, or quantifier scopings. Moreover, they are truly distinct, in the sense that for any two readings, there is a model that satisfies one and not the other. With the standard logical forms produced by the syntactic and semantic translation components of current theoretical frameworks and implemented systems, it would seem that an inferencing component must process each of these 120 readings in turn in order to produce a best reading. Yet it is obvious that people do not entertain all 120 possibilities, and people really do understand the sentence. The problem is not Just that inferencing is required for disamblguation. It is that people never do dlsambiguate completely. A single quantifier scoping is never chosen. (Van Lehn [1978] and Bobrow and Webber [1980] have also made this point.) In the currently standard logical notations, it is not clear how this vagueness can be represented. 1 What is needed is a logical form for such sentences that is neutral with respect to the various scoplng possibilities. It should be a notation that can be used easily by an inferenclng component. That is, it should be easy to define deductive operations on it, and the lo~ical forms of typical sentences should not be unwieldy. Moreover, when the inferenclng component discovers further information about dependencies among sets of entities, it should entail only a minor modification in the logical form, such as conjoining a new proposition, rather than a major restructuring. Finally, since the notion of "scope" is a powerful tool in semantic analysis, there should be a fairly transparent relationship between dependency information In the notation and standard representations of scope. Three possible approaches are ruled out by these criteria. i. Representing the sentence as a disjunction of the various readings. This is impossibly unwieldy. I Many people feel that most sentences exhibit too few quantifier scope ambiguities for much effort to be devoted to this problem, but a casual inspection of several sentences from any text should convince almost everyone otherwise. 2. Using as the logical notation a triple consisting of an expression of the propositional content of the sentence, a store of quantifier structures (e.g., as in Cooper [1975], Woods [19781), and a set of constraints on how the quantifier structures could be unstored. This would adequately capture the vagueness, but it is difficult to imagine defining inference procedures that would work on such an object. Indeed, Cooper did no inferenclng; Woods did little and chose a default reading heuristically before doing so. 3. Using a set-theoretlc notation like that of (I) below, pushing all the universal quantifiers to the outside and the existential quantifiers to the inside, and replacing the existentially quantified variables by Skolem functions of all the universally quantlf~ed variables. Then when inferencing discovers a nondependency, one of the arguments is dropped from one of the Skolem functions. One difficulty with this is that it yields representations that are too general, being satisfied by models that correspond to none of the possible intended interpretations. 
Moreover, in sentences in which one quantified noun phrase syntactically embeds another (what Woods [1978] calls "functional nesting"), as in Every representative of a company arrived. no representation that is neutral between the two is immediately apparent. With wide scope, "a company" is existential, with narrow scope it is universal, and a shift in commitment from one to the other would involve significant restructuring of the logical form. The approach taken here uses the notion of the "typical element'" of a set, to produce a flat logical form of conjoined atomic predications. A treatment has been worked out only for monotone increasing determiners; this is described in Section 2. In Section 3 some ideas about other determiners are discussed. An inferenclng component, such as that explored in Hobbs [1976, 1980], capable of resolving coreference, doing coercions, and refining predicates, will be assumed (but not discussed). Thus, translating the quantifier scoping problem into one of those three processes will count as a solution for the purposes of this paper. This problem has received little attention in linguistics and computational linguistics. Those who have investigated the processes by which a rich knowledge base is used in interpreting texts have largely ignored quantifier ambiguities. Those who have studied quantifiers have generally noted that inferencing is required for 57 disambiguation, without attempting to provide a notation that would accommodate this inferencing. There are some exceptions. Bobrow and Webber [1980] discuss many of the issues involved, but it is not entirely clear what their proposals are. The work of Webber [1978] and Melllsh [1980] are discussed below. 2. Monotone I~creasin~ Determiners 2.1. A Set-Theoretic Notation Let us represent the pattern of a simple intransitive sentence with a quantifier as "Q Ps R". In '~ost men work," Q - "most", P = "man", and R - "work". Q will be referred to as a determiner. A determiner Q is monotone increasing if and only if for any RI and R2 such that the denotation of R1 is a subset of the denotation of R2, "Q Ps RI" implies "Q Ps R2" (Barwlse and Cooper [1981]). For example, letting RI - "work hard" and R2 = "work", since "most men work hard" implies "most men work," the determiner "most" is monotone increasing. Intuitively, making the verb phrase more general doesn't change the truth value. Other monotone increasing determiners are "every", "some", "many", "several", "'any" and "a few". "No" and "few" are not. Any noun phrase Q Ps with a monotone increasing determiner Q involves two sets, an intensionally defined set denoted by the noun phrase minus the determiner, the set of all Ps, and a nonconstructlvely specified set denoted by the entire noun phrase. The determiner Q can be viewed as expressing a relation between these two sets. Thus the sentence pattern Q Fs R can be represented as follows: 41) (Ts)(Q(s,{x I P(x)}) & (VY)(~s -> R(y))) That is, there is a set s which bears the relation Q to the set of all Ps, and R is true of every element of s. (Barwlse and Cooper call s a "witness set".) "Most men work" would be represented (~ s)(most(s,{x I man(x)}) & (~ y)(y~s -> work(y))) For collective predicates such as "meet" and "agree", R would apply to the set rather than to each of its elements. (3 s) 0(s,{x I F(x)}) ~ R(s) Sometimes with singular noun phrases and determiners llke "a", "some" and "any" it will be more convenient to treat the determiner as a relation between a set and one of its elements. 
(B Y) 0(y,{x I P(x)}) & R(y). According to notation (i) there are two aspects to quantification. The first, which concerns a relation between two sets, is discussed in Section 2.2. The second aspect involves a predication made about the element~ of one of the sets. The approach taken here to this aspect of quantification is somewhat more radical, and depends on a view of semantics that might be called "ontological promiscuity". This is described briefly in Section 2.3. Then in Section 2.4 the scope-neutral representation is presented. 2.2. Determiners as Relations between Sets Expressing determiners as relations between sets allows us to express as axioms in a knowledge base more refined properties of the determiners than can be captured by representing them in terms of the standard quantlflers. First let us note that, with the proper definitions of "every" and "some", (V sl,s2) every(sl,s2) <-> sl= s2 (y x,s2) some(x, s2) <-> x~s2 formula (I) reduces to the standard notation. (This can be seen as explaining why the restriction is implicative in universal quantification and conjunctive in existential quantification.) A meaning postulate for "most" that is perhaps too mathematical is (~sl,s2) most(sl,s2) -> Isll > i/2 Is21 Next, consider "any". Instead of trying to force an interpretation of "any" as a standard quantifier, let us take it to mean "a random element of". (2) (~x,s) any(x,s) ~> x = random(s), where "random" is a function that returns a random element of a set. This means that the prototypical use of "any" is in sentences like Pick any card. Let me surround this with caveats. This can't be right, if for no other reason than that "any" is surely a more "primitive" notion in language than "random". Nevertheless, mathematics gives us firm intuitions about "random" and (2) may thus shed light on some linguistic facts. Many of the linguistic facts about "any" can be subsumed under two broad characterizations: i. It requires a "modal" or "nondeflnlte" context. For example, "John talks to any woman" must be interpreted dispositlonally. If we adopt (2), we can see this as deriving from the nature of randomness. It simply does not make sense to say of an actual entity that it is random. 2. It normally acts as a universal quantifier outside the scope of the most immediate modal embedder. This is usually the most natural interpretation of "random". Moreover, since "any" extracts a single element, we can make sense out of cases in which "any" fails to act llke "every". 58 I'Ii talk to anyone but only to one person. * I'Ii talk to everyone but only to one person. John wants to marry any Swedish woman. * John wants to marry every Swedish woman. (The second pair is due to Moore [1973].) This approach does not, however, seem to offer an especially convincing explanation as to why "any" functions in questions as an existential quantifier. 2.3. Ontological Promiscuity Davidson [1967] proposed a treatment of action sentences in which events are treated as individuals. This facilitated the representation of sentences with adverbials. But virtually every predication that can be made in natural language can be modified adverbially, be specified as to time, function as a cause or effect of something else, constitute a belief, be nominalized, and be referred to pronominally. It is therefore convenient to extend Davidson's approach to all predications, an approach that might be called "ontological promiscuity". One abandons all ontological scruples. A similar approach is used in many AI systems. 
We will use what might be called a "nomlnalization" operator ..... for predicates. Corresponding to every n-ary predicate p there will be an n+l-ary predicate p" whose first argument can be thought of as a condition of p's being true of the subsequent arguments. Thus, if "see(J,B)" means that John sees Sill, "see'(E,J,S)" will mean that E is John's seeing of Bill. For the purposes of this paper, we can consider that the primed and unprimed predicates are related by the following axiom schema: (3) (~ x,e) p'(e,x) -> p(x) (Vx)(~e) p(x) -> p'(e,x) It is beyond the scope of this paper to elaborate on the approach further, but it will be assumed, and taken to extremes, in the remainder of the paper. Let me illustrate the extremes to which it will be taken. Frequently we want to refer to the condition of two predicates p and q holding simultaneously of x. For this we will refer to the entity e such that and'[e,el,e2) & p*(el,x) & q'(e2,x) Here el is the condition of p being true of x, e2 is the condition of q being true of X, and e the condition of the conjunction being true. 2.4. The Scope-Neu¢ral Representation We will assume that a set has a typical element and that the logical form for a plural noun phrase will include reference to a set and its ~z~ical element. 2 The linguistic intuition 2 Woods [1978] mentions something llke this approach, but rejects it because difficulties that are worked out here would have to be worked out. behind this idea is that one can use singular pronouns and definite noun phrases as anaphors for plurals. Definite and indefinite generics can also be understood as referring to the typical element of a set. In the spirit of ontological promiscuity, we simply assume that typical elements of s~ ~re things that exist, and encode in meaning postulates the necessary relations between a set's typical element and its real elements. This move amounts to reifying the universally quantified variable. The typical element of s will be referred to as ~(s). There are two very nearly contradictory properties that typical elements must have. The first is the equivalent of universal instantiation; real elements should inherit the properties of the typical element. The second is that the typical element cannot itself be an element of the set, for that would lead to cardinallty problems. The two together would imply the set has no elements. 3 We could get around this problem by positing a special set of predicates that apply to typical elements and are systematically related to the predicates that apply to real elements. This idea should be rejected as being ad ho__~c, if aid did not come to us from an unexpected quarter -- the notion of "grain size". When utterances predicate, it is normally at some degree of resolution, or "grain". At a fairly coarse grain, we might say that John is at the post office -- "at(J,PO)". At a more refined grain, we have to say that he is at the stamp window -- "at(J,SW)'" We normally think of grain in terms of distance, but more generally we can move from entities at one grain to entities at a coarser grain by means of an arbitrary partition. Fine-grained entities in the same equivalence class are indistinguishable at the coarser grain. Given a set S, consider the partition that collapses all elements of S into one element and leaves everything else unchanged. We can view the typical element of S as the set of real elements seen at this coarser grain -- a grain at which, precisely, the elements of the set are indistinguishable. 
Formally, we can define an operator ~ which takes a set and a predicate as its arguments and produces what will be referred to as an "indexed predicate": T, if x=T(s) & (V yes) p(y), <;'(s,p)(x) = F, if x=~(s) &~(F y~s) p(y), p(x) otherwise. We will frequently abbreviate this "P5 " Note that predicate indexing gets us out of the above 3 An alternative approach would be to say that the typical element is in fact one of the real elements of the set, but that we will never know which one, and that furthermore, we will never know about the typical element any property that is not true of all the elements. This approach runs into technical difficulties involving the empty set. 59 contradiction, for now "~(s) E 5 s" is not only true but tautologous. We are now in a position to state the properties typical elements should have. The first implements universal instantiation: (4) (Us,y) p$(~(s)) & yes -> p(y) (5) (Vs)([(¥x~s) p(x)] -> p~(~s))) That is, the properties of the typical element at the coarser grain are also the properties of the real elements at the finer grain, and the typical element has those properties that all the real elements have. Note that while we can infer a property from set membership, we cannot infer set membership from a property. That is, the fact that p is true of a typical element of a set s and p is true of an entity y, does not imply that y is an element of s. After all, we will want "three men" to refer to a set, and to be able to infer from y's being in the set the fact that y is a man. But we do not want to infer from y's being a man that y is in the set. Nevertheless, we will need a notation for expressing this stronger relation among a set, a typical element, and a defining condition. In particular, we need it for representing "every man", Let us develop the notation from the standard notation for intensionally defined sets, (6) s - {x f p<x)}, by performing a fairly straightforward, though ontologically promiscuous, syntactic translation on it. First, instead of viewing x as a universally quantified variable, let us treat it as the typical element of s. Next, as a way of getting a handle on "p(x)", we will use the nominalization operator .... to reify it, and refer to the condition e of p (or p$) being true of the typical element x of s -- "p~ (e,x)". Expression (6) can then be translated into the following flat predlcate-argument form: (7) set(s,x,e) & p~ (e,x) This should be read as saying that s is a set whose typical element is x and which is defined by condition e, which is the condition of p (interpreted at the level of the typical element) being true of x. The two critical properties of the predicate "set" which make (7) equivalent to (6) are the following: (8) ~s,x,e,y) set(s,x,e) & p~ (e,x) & p(y) -> yes (9) (~s,x,e) set(s,x,e) -> x "T(s) Axiom schema (8) tells us that if an entity y has the defining property p of the set s, then y is an element of s. Axiom (9), along with axiom schemas (4) and (3), tells us that an element of a set has the act's defining property. With what we have, we can represent the distinction between the distributive and collective readings of a sentence like (I0) The men lifted the piano. For the collective reading the representation would include "llft(m)" where m is the set of men. For the distributive reading, the representation would have "lift(~(m))", where ~(m) is the typical element of the set m. 
To represent the ambiguity of (I0), we could use the device suggested in Hobbs [1982 I for prepositional phrase and other ambiguities, and wr~te "llft(x) & (x=m v x- ~(m) )". This approach involves a more thorough use of typical elements than two previous approaches. Webber [1978] admitted both set and prototype (my typical element) interpretations of phrases like "each man'" in order to have antecedents for both "they" and "he", but she maintained a distinction between the two. Essentially, she treated "each man" as ambiguous, whereas the present approach makes both the typical element and the set available for subsequent reference. Mellish [1980 1 uses =yplcal elements strictly as an intermediate representation that must be resolved into more standard notation by the end of processing. He can do this because he is working in a task domain -- physics problems -- in which sets are not just finite but small, and vagueness as to their composition must be resolved. Webber did not attempt to use typical elements to derive a scope-neutral representation; Mellish did so only in a limited way. Scope dependencies can now be represented as relations among typical elements. Consider the sentence (II) Most men love several women, under the reading in which there is a different set of women for each man. We can define a dependency function f which for each man returns the set of women whom that man loves. f(m) = {w [ woman(w) & love(m,w)} The relevant parts of the initial logical form, produced by a syntactic and semantic translation component, for sentence (Ii) will be (12) love(~(m),~(w)) & most(m,ml) & manl(~(ml)) & several(w) & womanl(~(w)) where ml is the set of all men, m the set of most of them referred to by the noun phrase "most men", and w the set referred to by the noun phrase "several women", and where "manl = ~'(ml,man)" and "womanl = ~" (w,woman)'. When the inferenclng component discovers there is a different set w for each element of the set m, w can be viewed as refering to the typical element of this set of sets: w-T({f<x> { x~m}) 60 To eliminate the set notation, we can extend the definition of the dependency function to the typical element of m as follows: f(~(m)) -Z({f(x) I x~m}) That is, f maps the typical element of a set into the typical element of the set of images under f of the elements of the set. From here on, we will consider all dependency functions so extended to the typical elements of their domains. The identity "w - f(~(m))" now simultaneously encodes the scoplng information and involves only existentially quantified variables denoting individuals in an (admittedly ontologlcally promiscuous) domain. Expressions llke (12) are thus the scope-~eutral representation, and scoplng information is added by conjoining such identities. Let us now consider several examples in which processes of interpretation result in the acquisition of scoplng information. The first will involve interpretation against a small model. The second will make use of world knowledge, while the third illustrates the treatment of embedded quantlflers. First the simple, and classic, example. (13) Every man loves some woman. The initial logical form for this sentence includes the following: lovel(r(ms),w) & manl(~(ms)) & woman(w) where "lovel -@(mS,Ax[love(x,w)])'" and "manl - (ms,man)". Figure i illustrates two small models of this sentence. M is the set of men {A,B}, W is the set of women {X,Y}, and the arrows signify love. 
Let us assume that the process of interpreting this sentence is Just the process of identifying the existentially quantified variables ms and w and possibly coercing the predicates, in a way that makes the sentence true. 4 M W M W A ~ X A ------~ X B / Y B ~ Y (a) (b) Figure I. Two models of sentence (13). In Figure l(a), "'love(A,X)" and "love(B,X)" are both true, so we can use axiom schema (5) to derive "lovel('~(M),X)". Thus, the identifications "ms - M'" and "w = X'" result in the sentence being true. In Figure l(b), "love(A,X)" and "love(B,Y)" are both true, but since these predications differ 4 Bobrow and Webber [1980] similarly show scoplng information acquired by Interpretatlon against a small model. in more than one argument, we cannot apply axiom schema (5). First we define a dependency function f, mapping each man into a woman he loves, yielding "love(A,f(A))" and "love(B,f(B))". We can now apply axiom schema (5) to derive '" love2 ('~ (M), f (~ (M)) ) ", where "love2 = ~(M,Ax[love(x,f(x))])". Thus, we can make the sentence true by identifying ms with M and w with f(~'(M)), and by coercing "love" to "'love2" and "woman" to "~ (W,woman)". , In each case we see that the identification of w is equivalent to solving the scope ambiguity problem. In our subsequent examples we will ignore the indexing on the predicates, until it must be mentioned in the case of embedded quantifiers. Next consider an example in which world knowledge leads to disamblguatlon: Three women had a baby. Before inferencing, the scope-neutral representation is had(~Z~ws),b) & lwsI=3 & woman(~(ws)) & baby(b) Let us suppose the inferencing component has axioms about the functionality of having a baby -- something llke (~ x,y) had(x,y) -> x = mother-of(y) and that we know about cardlnallty the fact that for any function g and set s, Ig(s)l ~ fsl Then we know the following: 3 - lwsl = Imother-of(b) I ~ Ibl This tells us that b cannot be an individual but must be the typical element of some set. Let f be a dependency function such that wEws & f(w) = x -> had(w,x) that is, a function that maps each woman into some baby she had. Then we can identify b with f('~'(ws)), or equivalently, with ~({f(w) I w~ ws}), giving us the correct scope. Finally, let us return to interpretation with respect to small models to see how embedded quantiflers are represented. Consider (14) Every representative of a company arrived. The initial logical form.includes arrive(r) & set(rs,r,ea) & and'(ea,er,eo) & rep'(er,r) & of'(eo,r,c) & co(c) That is, r arrives, where r is the typical element of a set rs defined by the conjunction ea of r's being a representative and r's being of c, where c is a company. We will consider the two models in 61 Figure 2. R is the set of representatives {A,B,(C)}, K is the set of companies {X,Y,(Z,W)}, there is an arrow from the representatives to the companies they represent, and the representatives who arrived are circled. R K R K (a) (b) Figure 2. Two models of sentence (14). In Figure 2(a), "of(A,X)", "of(B,Y)" and "of(B,Z)" are true. Define a dependency function f to map A into X and B into Y. Then "of(A,f(A))" and "of(B,f(B))" are both true, so that "of(~(R),f(~(R)))" is also true. Thus we have the following identifications: c = f(Z(R)) =~({X,Y}), rs = R, r -t(R) In Figure 2(b) "of(B~" and "of(C,Y)'" are both true, so "'of(~'(Rl),~)is also. Thus we may let c be Y and rs be RI, giving us the wide reading for "a company". 
So far the notation of typical elements and dependency functions has been introduced; it has been shown how scope information can be represented by these means; and an example of inferential processing acquiring that scope information has been given. Now the precise relation of this notation to standard notation must be specified. This can be done by means of an algorithm that takes the inferential notation, together with an indication of which proposition is asserted by the sentence, and produces in the conventional form all of the readings consistent with the known dependency information.

First we must put the sentence into what will be called a "bracketed notation". We associate with each variable v an indication of the corresponding quantifier; this is determined from such pieces of the inferential logical form as those involving the predicates "set" and "most"; in the algorithm below it is referred to as "Quant(v)". The translation of the remainder of the inferential logical form into bracketed notation is best shown by example. For the sentence

    A representative of every company saw a sample

the relevant parts of the inferential logical form are

    see(r,s) & rep(r) & of(r,c) & co(c) & sample(s)

where "see(r,s)" is asserted. This is translated in a straightforward way into

    (18) see([r | rep(r) & of(r,[c | co(c)])], [s | sample(s)])

This may be read "An r such that r is a representative and r is of a c such that c is a company sees an s such that s is a sample."

The nondeterministic algorithm below generates all the scopings from the bracketed notation. The function TOPBVS returns a list of all the top-level bracketed variables in Form, that is, all the bracketed variables except those within the brackets of some other variable -- in (18) r and s but not c. BRANCH nondeterministically generates a separate process for each element in a list it is given as argument. A four-part notation is used for quantifiers (similar to that of Woods [1978]) -- "(quantifier variable restriction body)".

    G(Form):
       if [v|R] <- BRANCH(TOPBVS(Form))
       then Form <- (Quant(v) v BRANCH({R,G(R)}) Form);
            if Form is whole sentence
            then Return G(Form)
            else Return BRANCH({Form,G(Form)})
       else Return Form

In this algorithm the first BRANCH corresponds to the choice in ordering the top-level quantifiers. The variable chosen will get the narrowest scope. The second BRANCH corresponds to the decision of whether or not to give an embedded quantifier a wide reading. The choice R corresponds to a wide reading, G(R) to a narrow reading. The third BRANCH corresponds to the decision of how wide a reading to give to an embedded quantifier.

Dependency constraints can be built into this algorithm by restricting the elements of its argument that BRANCH can choose. If the variables x and y are at the same level and y is dependent on x, then the first BRANCH cannot choose x. If y is embedded under x and y is dependent on x, then the second BRANCH must choose G(R). In the third BRANCH, if any top-level bracketed variable in Form is dependent on any variable one level of recursion up, then G(Form) must be chosen.
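As a concrete rendering of G, the following Python sketch (our own; it is hypothetical and simplified, not the paper's implementation) replaces the nondeterministic BRANCH with exhaustive enumeration. It handles one level of embedding and collapses the third BRANCH's choice of intermediate widths into a single wide/narrow decision per embedded quantifier; dependency constraints could be added by filtering the same choice points.

    from itertools import permutations, product

    def readings(form):
        """Enumerate candidate scopings of a bracketed form: each embedded
        quantifier either stays narrow, inside its host's restriction, or
        is raised to outscope its host; the resulting top-level quantifiers
        are then ordered in every admissible way."""
        tops = [c for c in form if isinstance(c, tuple)]
        body = [c for c in form if isinstance(c, str)]
        embedded = [(t, e) for t in tops for e in t[2] if isinstance(e, tuple)]
        out = set()
        for wide in product([False, True], repeat=len(embedded)):
            level, narrow, raised = list(tops), {}, []
            for (host, emb), w in zip(embedded, wide):
                if w:
                    level.append(emb)
                    raised.append((emb[1], host[1]))  # emb must precede host
                else:
                    narrow.setdefault(host[1], []).append(emb)
            for order in permutations(level):
                pos = {bv[1]: i for i, bv in enumerate(order)}
                if all(pos[e] < pos[h] for e, h in raised):
                    out.add(nest(list(order), body, narrow))
        return sorted(out)

    def nest(order, body, narrow):
        if not order:
            return " & ".join(body)
        quant, var, restr = order[0]
        ratoms = [c for c in restr if isinstance(c, str)]
        inner = narrow.get(var, [])
        rstr = nest(inner, ratoms, narrow) if inner else " & ".join(ratoms)
        return "(%s %s: %s) %s" % (quant, var, rstr,
                                   nest(order[1:], body, narrow))

    # Sentence (18): "A representative of every company saw a sample."
    form18 = [("exists", "r", ["rep(r)", ("forall", "c", ["co(c)"]), "of(r,c)"]),
              ("exists", "s", ["sample(s)"]),
              "see(r,s)"]
    for reading in readings(form18):
        print(reading)          # five readings for this sentence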
A fuller explanation of this algorithm and several further examples of the use of this notation are given in a longer version of this paper.

3. Other Determiners

The approach of Section 2 will not work for monotone decreasing determiners, such as "few" and "no". Intuitively, the reason is that the sentences they occur in make statements about entities other than just those in the sets referred to by the noun phrase. Thus,

    Few men work.

is more a negative statement about all but a few of the men than a positive statement about few of them. One possible representation would be similar to (1), but with the implication reversed.

    (∃s)(Q(s,{x | P(x)}) & (∀y)(P(y) & R(y) -> y ∈ s))

This is unappealing, however, among other things, because the predicate P occurs twice, making the relation between sentences and logical forms less direct. Another approach would take advantage of the above intuition about what monotone decreasing determiners convey.

    (∃s)(Q̄(s,{x | P(x)}) & (∀y)(y ∈ s -> ¬R(y)))

That is, we convert the sentence into a negative assertion about the complement of the noun phrase, reducing this case to the monotone increasing case. For example, "few men work" would be represented as follows:

    (∃s)(f̄ew(s,{x | man(x)}) & (∀y)(y ∈ s -> ¬work(y)))⁵

(This formulation is equivalent to, but not identical with, Barwise and Cooper's [1981] witness set condition for monotone decreasing determiners.)

⁵ "f̄ew" is pronounced "few bar".

Some determiners are neither monotone increasing nor monotone decreasing, but Barwise and Cooper conjecture that it is a linguistic universal that all such determiners can be expressed as conjunctions of monotone determiners. For example, "exactly three" means "at least three and at most three". If this is true, then they all yield to the approach presented here. Moreover, because of redundancy, only two new conjuncts would be introduced by this method.

Acknowledgments

I have profited considerably in this research from discussions with Lauri Karttunen, Bob Moore, Fernando Pereira, Stan Rosenschein, and Stu Shieber, none of whom would necessarily agree with what I have written, nor even view it with sympathy. This research was supported by the Defense Advanced Research Projects Agency under Contract No. N00039-82-C-0571, by the National Library of Medicine under Grant No. 1R01 LM03611-01, and by the National Science Foundation under Grant No. IST-8209346.

REFERENCES

Barwise, J. and R. Cooper. 1981. Generalized quantifiers and natural language. Linguistics and Philosophy, Vol. 4, No. 2, 159-219.

Bobrow, R. and B. Webber. 1980. PSI-KLONE: Parsing and semantic interpretation in the BBN natural language understanding system. Proceedings, Third National Conference of Canadian Society for Computational Studies of Intelligence, 131-142. Victoria, British Columbia. May 1980.

Cooper, R. 1975. Montague's semantic theory and transformational syntax. Ph.D. thesis, University of Massachusetts.

Davidson, D. 1967. The logical form of action sentences. In N. Rescher (Ed.), The Logic of Decision and Action, 81-95. University of Pittsburgh Press, Pittsburgh, Pennsylvania.

Hobbs, J. 1976. A computational approach to discourse analysis. Research Report 76-2, Department of Computer Sciences, City College, City University of New York.

Hobbs, J. 1980. Selective inferencing.
Proceedings, Third National Conference of Canadian Society for Computational Studies of Intelligence, 101-114. Victoria, British Columbia. May 1980.

Hobbs, J. 1982. Representing ambiguity. Proceedings of the First West Coast Conference on Formal Linguistics, 15-28. Stanford, California.

Mellish, C. 1980. Coping with uncertainty: Noun phrase interpretation and early semantic analysis. Ph.D. thesis, University of Edinburgh.

Moore, R. 1973. Is there any reason to want lexical decomposition? Unpublished manuscript.

Van Lehn, K. 1978. Determining the scope of English quantifiers. Massachusetts Institute of Technology Artificial Intelligence Laboratory Technical Report AI-TR-483.

Webber, B. 1978. A formal approach to discourse anaphora. Technical Report 3761, Bolt Beranek and Newman, Inc., Cambridge, Massachusetts.

Woods, W. 1978. Semantics and quantification in natural language question answering. Advances in Computers, Vol. 17, 1-87. Academic Press, New York.
Multilingual Text Processing in a Two-Byte Code

Lloyd B. Anderson
Ecological Linguistics
316 "A" St. S.E.
Washington, D.C. 20003

ABSTRACT

National and international standards committees are now discussing a two-byte code for multilingual information processing. This provides for 65,536 separate character and control codes, enough to make permanent code assignments for all the characters of all national alphabets of the world, and also to include Chinese/Japanese characters.

This paper discusses the kinds of flexibility required to handle both Roman and non-Roman alphabets. It is crucial to separate information units (codes) from graphic forms, to maximize processing power. Comparing alphabets around the world, we find that the graphic devices (letters, digraphs, accent marks, punctuation, spacing, etc.) represent a very limited number of information units. It is possible to arrange alphabet codes to provide transliteration equivalence, the best of three solutions compared as a framework for code assignments.

Information vs. Form

In developing proposals for codes in information processing, the most important decisions are the choices of what to code. In a proposal for a multilingual two-byte code, Xerox Corporation has made explicit a principle which we can state precisely as follows:

    Basic codes stand for independently functioning information units (not for visual forms).

The choice of type font, presence or absence of serifs, and variations like boldface, italics or underlining, are matters of form. Such choices are normally made once for spans at least as long as one word. We do not use ComPLeX miXturEs, but consistent strings like this, THIS, this, or THIS. By assigning the same basic code to variations of a single letter (roman or italic a, A), all variants will automatically be alphabetized the same way, which is as it should be. The choice of variant forms is specified by supplementary "looks" information. (The capitalization of first letters of sentences, proper names, or nouns, is a kind of punctuation.)

Identical graphic forms may also be assigned more than one code because they are distinct units in information processing. Thus the letter form "C" is used in the Russian alphabet to represent the sound /s/, but it is not the same information unit as English "C", so it has a distinct code. So far this seems relatively obvious. The same principle is now being applied in much more subtle cases.

Thus the minus sign and the hyphen are assigned distinct codes in recent proposals because they are completely distinct information units. There are even two kinds of hyphens distinguished, a "hard" hyphen as in the word father-in-law, which remains always present, and a "soft" hyphen which is used only to divide a word at the end of a line, and which should automatically vanish when, in word processing, the same word comes to stand undivided within the line.

We can now frame the question "what to code?" as a matter of empirical discovery: what are the independently functioning information units in text? Relevant facts emerge from comparing a range of different alphabets.
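The principle can be made concrete in a few lines of code. The sketch below is our own illustration (hypothetical unit names, not from any standards proposal): basic codes name information units, supplementary "looks" carry form, and alphabetization ignores the looks entirely.

    # A minimal sketch of the "information unit vs. graphic form"
    # principle. Each basic code names an information unit; font, case
    # and style travel as separate "looks". All names are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Coded:
        unit: str        # information unit, e.g. "LATIN A", "CYRILLIC ES"
        looks: str = ""  # supplementary form info: "cap", "italic", ...

    def collation_key(text):
        """Alphabetization ignores looks: a, A, italic A sort alike."""
        return tuple(ch.unit for ch in text)

    # The same glyph shape "C" is two different information units:
    latin_c = Coded("LATIN C", "cap")
    cyrillic_s = Coded("CYRILLIC ES", "cap")   # Russian "C", sound /s/
    assert latin_c != cyrillic_s

    word1 = [Coded("LATIN A", "cap"), Coded("LATIN B")]
    word2 = [Coded("LATIN A", "italic"), Coded("LATIN B")]
    assert collation_key(word1) == collation_key(word2)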
What is a "letter of the alphabet"? -- the problem of diacritics and digraphs

The most obvious question turns out to be the most difficult of all. Western European alphabets are in many ways not typical of alphabets of the world. They have an unusually small number of basic letters, and to represent a larger number of sounds they use digraphs like English sh, ch, th, or diacritics as in Czech š, č.

It seems at first entirely obvious that digraphs like sh should be coded simply as a sequence of two codes, one for s plus one for h. Indeed English, French, German and Scandinavian alphabets do alphabetize their digraphs just like a sequence, s plus h etc. But these national alphabets are not typical. Spanish, Hungarian, Polish, Croatian and Albanian treat their native digraphs as single letters for purposes of alphabetical order. Spanish ll is not a sequence of two l's, but a new letter which follows all lo, lu sequences; similarly ch follows all c sequences, and ñ follows all n sequences, as a separate letter.

There is just as much variation in handling letters with diacritics. The umlauted letter ö is alphabetized as a separate letter following o in Hungarian, and at the end of the alphabet in Swedish, but in German it is mixed in with o. In Spanish, ñ is treated as a separate letter, but the Slovak ň representing the same sound is mixed in with ordinary n.

In Table I, the digraphs and letters with diacritics which are not in parentheses or brackets are alphabetized separately as distinct single units. Those in parentheses are alphabetized as a sequence of two or more letters, or (Slovak and Czech ľ, ň, ť, ď) are treated as equivalent to the simpler letter, completely disregarding the diacritic. Combinations in brackets are used to represent sounds in words borrowed from other languages. Double dashes mark sounds for which a particular alphabet has no distinctive written symbol. (In Russian, palatal consonants are marked by choice of special vowel letters, while Turkish has a different kind of contrast, hence the blanks.)

Even when a digraph or trigraph is treated as a sequence of letters for alphabetization, there may be other evidence that it functions as a single information unit. In syllable division (hyphenation), English never divides the digraphs sh, ch, or th when they function as single units (fa-ther, wa-sher) but does when they represent two units (hot-house). The same is true of other letter combinations in all national standard alphabets where a single sound is represented by a combination of letters.

Within certain mechanical constraints, typewriter keyboards also put each distinct information unit on a separate key. Thus Spanish ñ or Czech š, č, ž are produced by single keys, not by adding a diacritic to a base letter. Mechanical limits have forced a sequence of two letters (like the Spanish ch, ll) to be typed with two separate keystrokes whether or not they represent a single functional unit, but occasionally we see exceptions, as in Dutch where the ij digraph appears as a ligature on a single key and is printed in one space, not two.

Unitary, unanalyzable letters exist in Serbian and Macedonian for most of the sound types (the columns) of Table I. Icelandic has single letters "thorn" and "edh" for the two rightmost columns. Even where the other languages use digraphs or letters with diacritics, there is evidence from syllabification and usually also from alphabetical order that these are functionally independent information units. For transliteration from one national alphabet into another, these symbol equivalences are needed.
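To see what treating a digraph as a single information unit buys in processing, consider the following small Python sketch (our own, with a simplified traditional Spanish alphabet): once "ch" and "ll" are single units, correct alphabetical order falls out of an ordinary sort.

    # A sketch of alphabetization in which native digraphs are single
    # information units: traditional Spanish order, where "ch" follows
    # all "c" sequences and "ll" all "l" sequences. Word list invented.

    ALPHABET = ["a","b","c","ch","d","e","f","g","h","i","j","k","l","ll",
                "m","n","ñ","o","p","q","r","s","t","u","v","w","x","y","z"]
    RANK = {letter: i for i, letter in enumerate(ALPHABET)}

    def units(word):
        """Greedily segment a word into information units (longest match)."""
        i, out = 0, []
        while i < len(word):
            if word[i:i+2] in RANK:       # digraph = one unit
                out.append(word[i:i+2]); i += 2
            else:
                out.append(word[i]); i += 1
        return out

    def spanish_key(word):
        return [RANK[u] for u in units(word)]

    words = ["cuna", "chico", "luz", "llama", "cano"]
    print(sorted(words, key=spanish_key))
    # ['cano', 'cuna', 'chico', 'luz', 'llama'] -- chico after every c-word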
The principle stated on the preceding page thus implies that unique codes be available for English sh, ch, th and unitary digraphs in other languages, so these can be used when needed in information processing. (Information processing is not the shuffling of bits of scribal ink!) The principle does not compel use of those codes -- English th can be recorded first as a sequence of two codes, then converted into a single code only when needed, by a program which has a dictionary listing all words containing unitary th.

Table I. Some Consonant Characters in Europe
[The body of the table, comparing the letters, digraphs, and diacritic combinations used for a common set of consonant sound types in Russian, Macedonian, Serbian, Hungarian, Croatian, Slovak, Czech, Latvian, Polish, German, Albanian, Turkish, Romanian, French, and Spanish, is not legible in this copy.]

Spatial arrangement of printed characters

In alphabets of Europe, letters (and information units) almost always follow each other in a line, from left to right. This is not true of many important alphabets elsewhere in the world. Arabic and Hebrew, when they write short vowels, place them above or below the consonant letters. What we transcribe as kitābu appears (in a left-to-right transform of the Arabic arrangement) roughly as

        a   u
    k   t   b
      i

These vowel symbols are independent information units, not "diacritics" in the sense of the European alphabets. They keep a constant form, combining freely with any consonant letter. Alphabets of India and Southeast Asia place vowels above, below, to right or to left of a consonant letter or cluster, or in two or three of these positions simultaneously. There can be further combinations with marks for tones or consonant-doubling.

The Korean alphabet arranges its letters in syllabic groups, so that mascot, if written in the Korean manner, would appear as two blocks:

    ma  co
     s   t

The independently functioning information units are still consonants and vowels, for which we need codes, and we need one additional code to mark the division between syllables. This is just as much an alphabet as our familiar English one and is not a syllabary. (Since there are only about 400 syllables, a printing device might store all of them, but these would not normally be useful in information processing.)

A flexible multilingual code for information processing must be able to handle the different spatial arrangements described here, but it need not (except in input and output for human use) be concerned with what that spatial arrangement is, only with what significant information units it contains. Even in Europe, Spanish accented vowels á, é, í, ó, ú show a visual superimposition of the basic vowels with a functionally independent symbol of accentuation. These are not new letters in the sense that Croatian č, ž, š are, but are alphabetized just like simple a, e, i, o, u.
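The point that codes should record information units in logical order, leaving spatial arrangement to input and output routines, can be illustrated with a toy Python sketch (our own; the separator symbol and the storage format are assumptions): Korean-style syllabic grouping costs exactly one extra information unit, the syllable separator.

    # A toy sketch: letters are stored linearly, with one code for
    # syllable separation; reassembling the blocks is an output concern.

    SYL = "|"                                # syllable-separation code

    def encode(syllables):
        """[['m','a','s'], ['c','o','t']] -> 'mas|cot'"""
        return SYL.join("".join(s) for s in syllables)

    def render_blocks(coded):
        """One possible output routine: recover the syllabic groups."""
        return [list(chunk) for chunk in coded.split(SYL)]

    stored = encode([["m", "a", "s"], ["c", "o", "t"]])
    print(stored)                 # mas|cot  (logical order only)
    print(render_blocks(stored))  # [['m','a','s'], ['c','o','t']]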
Criteria for a two-byte code standard

We can now consider alternative methods of coding for multilingual information processing. Three basic criteria are given first, followed by discussion of alternative solutions and further criteria.

A) Each independent character or information unit shall have available a representation in a two-byte code (whether it is graphically manifest as a base letter, digraph, independent diacritic, letter-plus-diacritic unit, syllable separation, punctuation mark, or other unit of normal text, and independent of position in printing).

B) It shall be possible to identify the source alphabet from the codes themselves. (Since "C" in Czech represents the sound /ts/, it is not the same unit as English "c"; in library processing it is important to know that German den and die are articles like English the, to be disregarded in filing, but English den and die are headwords.)

C) The assignment of information units to codes shall maximize the possibilities for use of one-byte code reductions through long monolingual texts, minimizing shifts between different blocks of 256 codes. (This is especially important in reducing transmission costs.)

Each of the following three solutions has certain advantages. The third is far superior in the long run.

Solution 1. Incorporate existing 7-bit or 8-bit national code standards, one in each block of 256 codes. Use the extra space as codes for information units which are not single spacing characters. This satisfies all of the basic criteria (A, B, C) and uses existing codes, adding only a first byte as an alphabet name to make a two-byte code. There is no transliteration-equivalence, and elaborate transliteration programs would be necessary for each conversion, N x N programs for N alphabets.

Solution 2. Systematically code all basic letter forms and all their diacritic modifications, thus allowing for expansion and use of new letter-diacritic combinations. Despite their differences, Latin-based alphabets share a common core of alphabetical order, which can be reflected in a coding to minimize shuffling. This is attempted in Table 2, which includes all characters from ISO/TC97/SC2 N 1255 1982-11-01 pp. 60-61 plus additions from African and Vietnamese alphabets. Code ordering is downwards within columns, starting from the left.

Table 2. Alphabetical order of letters and diacritics as a basis for coding
[The grid of basic letter forms and diacritic modifications is not legible in this copy.]

This solution satisfies none of the criteria (A, B, C), and does not provide codes for many kinds of information units. It appears to be economical in Europe, where 20 national alphabets can fit in 48 x 13 = 624 code cells if only letter forms are considered. But for non-Latin alphabets there can be no similar savings. Here there are (considering only living alphabets) about 50 alphabets based on 38 distinct sets of letters.

Solution 3. Transliteration-equivalent units assigned identical second bytes in their two-byte code. Transliteration between any two alphabets simply changes the first byte of the code naming the alphabet, requiring minor programming only when an alphabet has non-recoverable spellings or cannot represent certain sounds. This solution depends on the fact that there is a small number of types of information units which have ever been represented in a national standard alphabet. In the tentative arrangement of Table 3, most of the sound types noted are represented by single unanalyzable characters in some national alphabet (as Georgian, Armenian, Hindi, ...), and most of the rest by clearly unitary digraphs. Despite the strange symbols, this is not a list of fine phonetic distinctions; it is a list of distinct categories of written symbols.
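The mechanics of Solution 3 are easily sketched. In the hypothetical Python fragment below (the alphabet numbers and unit numbers are invented, not proposed assignments), the first byte of each two-byte code names the alphabet and the second byte names a transliteration-equivalent unit, so transliteration reduces to rewriting first bytes.

    # A minimal sketch of transliteration-equivalence: same second byte
    # for equivalent units, so transliteration is one byte-swap.

    ALPHABET_ID = {"latin": 0x21, "cyrillic": 0x45, "greek": 0x52}

    def transliterate(coded, target):
        """Change only the first byte of each two-byte code."""
        first = ALPHABET_ID[target]
        return bytes(first if i % 2 == 0 else b
                     for i, b in enumerate(coded))

    # "sto" with hypothetical unit numbers: s=0x73, t=0x74, o=0x6F
    latin_sto = bytes([0x21, 0x73, 0x21, 0x74, 0x21, 0x6F])
    cyr_sto = transliterate(latin_sto, "cyrillic")
    assert cyr_sto == bytes([0x45, 0x73, 0x45, 0x74, 0x45, 0x6F])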
The idea for this solution came from the one-byte code adopted in India, structured identically with transliteration-equivalence for each of the alphabets of India. A printer with only Tamil letters can simply print a Tamil transliteration of an incoming Hindi message. In the two-byte version presented here, there is provision for any alphabet to add characters representing sounds of some other alphabet, and a small amount of space to add unique information units which are not matched in other alphabets. This is the right amount of space for expansion.

Table 3. Transliteration-equivalent information units found in national standard alphabets
[The 16-column code chart (columns 0-F) is not legible in this copy.]

Applications to transliteration and library processing

With newer capabilities of printers and screens, a speaker of any language can soon request a data base in its original alphabet or in any transliteration of his choice, either one using many diacritic characters like Croatian and special symbols to avoid ambiguity, or one more adapted to his native alphabet, for example French or Hungarian. Records can be kept in the codes of the original alphabet, always ensuring complete recoverability. There would be a gentle encouragement for each national alphabet to use a consistent transliteration for each sound independent of the source alphabet, because this would be automatic.

Summary

The third solution described above is designed to handle all the structures and functions found in national standard alphabets and to fit them like a well-made glove, allowing the maximum capabilities of information processing, but never compelling their use. This type of solution could be a primary international standard, with code translations to reach existing 7-bit and 8-bit standards, and an ESCAPE sequence to allow processing directly in the older standards (solution 1 above incorporated as an alternate). Since mathematical and scientific symbols are international, they would require only single blocks of 256 codes. The first column of 16 blocks of 256 each could provide 4096 two-byte control codes, and the second column could eventually be added to the 96 alphabet blocks, allowing transliteration of numerals. The right 128 blocks of 256 codes each remain for Chinese/Japanese characters or other purposes, but even these can be coded alphabetically in terms of character components and arrangements (partly achieved in a keyboard now installed at Stanford and the Library of Congress).

ACKNOWLEDGMENTS

I would like to thank Mr. Thomas N. Hastings, chairman of the ANSI X3L2 committee, and Mr. James Agenbroad, APO, Library of Congress, for indispensable information and discussions. They of course bear no responsibility for claims or analyses presented here.
DENORMALIZATION AND CROSS REFERENCING IN THEORETICAL LEXICOGRAPHY

Joseph E. Grimes
DMLL, Morrill Hall, Cornell University, Ithaca NY 14853 USA
Summer Institute of Linguistics, 7500 West Camp Wisdom Road, Dallas TX 75236 USA

ABSTRACT

A computational vehicle for lexicography was designed to keep to the constraints of meaning-text theory: sets of lexical correlates, limits on the form of definitions, and argument relations similar to lexical-functional grammar.

Relational data bases look like a natural framework for this. But linguists operate with a non-normalized view. Mappings between semantic actants and grammatical relations do not fit actant fields uniquely. Lexical correlates and examples are polyvalent, hence denormalized.

Cross referencing routines help the lexicographer work toward a closure state in which every term of a definition traces back to zero level terms defined extralinguistically or circularly. Dummy entries produced from defining terms ensure no trace is overlooked. Values of lexical correlates lead to other word senses. Cross references for glosses produce an indexed unilingual dictionary, the start of a fully bilingual one.

To assist field work a small structured editor for a systematically denormalized data base was implemented in PTP under RT-11; Mumps would now be easier to implement on small machines. It allowed fields to be repeated and nonatomic strings included, and produced cross reference entries. It served for a monograph on a language of Mexico,¹ and for student projects from Africa and Asia.

I LEXICOGRAPHY

Natural language dictionaries seem like obvious candidates for information management in data base form, at least until you try to do one. Then it appears as if the better the dictionary in terms of lexicographic theory, the more awkward it is to fit relational constraints. Vest pocket tourist dictionaries are a snap; Webster's Collegiate and parser dictionaries require careful thought; the Mel'chuk style of explanatory-combinatory dictionary forces us out of the strategies that work on ordinary data bases.

In designing a tool to manage lexicographic field work under the constraints of Mel'chuk's meaning-text model, the most fully specified one available for detailed lexicography, I laid down specifications in four areas.

First, it must handle all lexical correlates of the head word. Lexical correlates relate to the head in ways that have numerous parallels within the language. In English, for example, we have nouns that denote the doer of an action. Some, such as driver, writer, builder, are morphologically transparent. Others like pilot (from fly) and cook (from cook) are not; yet they relate to the corresponding verbs in the same way as the transparent ones do. Mel'chuk and associates have identified about fifty such types, or lexical functions, of which S1, the habitual first substantive just illustrated, is one.

These types appear to have analogous meanings in different languages, though not all types are necessarily used in every language, and the relative popularity of each differs from one language to another, as does the extent to which each is grammaticalized. For example, English has a rich vocabulary of values for a relation called Magn (from Latin magnus) that denotes the superlative degree of its argument: Magn(sit) = tight, Magn(black) = jet, pitch, coal, Magn(left) = hard, Magn(play) = for all you're worth, and on and on. On the other hand Huichol, a Uto-Aztecan language of Mexico I have been working on since 1952, has no such vocabulary; it uses the simple intensives yeme and vaɨcɨa² for all this, and picks up its lexical richness in other areas.

Second, a theoretically sound definition uses words that are themselves defined through as long a chain as possible back to zero level words that can be defined only in one of two ways: by accepting that some definitions -- as few as possible -- may be circular, or by defining the zero level via extralinguistic experiences. Some dictionaries define sweet circularly in terms of sugar and vice versa; but one could also begin by passing the sugar bowl and thus break the circularity. The tool must help trace the use of defining words.

¹ NSF grant BNS-7906041 funded some of this work.
² Huichol transcription follows Spanish except for symbols marking a high back unrounded vowel, glottal stop, high tone, long syllable, rhythm break, voiced retroflex alveopalatal fricative, and retroflex flap; cuV is a labiovelar stop.
On the other hand Huichol, a Uto-Aztecan language of Mexico I have been working on since 1952, has no such vo- cabulary; it uses the simple intensives yeme and va~c~a for all this, and2picks up its lexical richness in other areas. Second, a theoretically sound definition uses words that are themselves defined through as long a chain as possible back to zero level words that can be defined only in one of two ways: by accept- ing that some definitions -- as few as possible -- may be circular, or by defining the zero level via extralinguistic experiences. Some dictionaries de- fine sweet circularly in terms of sugar and vice versa; but one could also begin by passing the sug- ar bowl and thus break the circularity. The tool must help trace the use of defining words. Third, the arguments in the semantic represen- tation of a word have to relate explicitly to grammatical elements like subjects and objects and possessors: his projection of the budget and 1 NSF grant BNS-79060hl funded some of this work. 2 Huichol transcription follows Spanish except high back unrounded, ' glottal stop, • high tone, W long syllable, ~ rhythm break, ~ voiced retro- flex alveopalatal fricative, ~ retroflex flap, cuV labiovelar stop. 38 please turn out the li6ht each involve two argu- ments to the main operative word (him and budget, you and li6ht), but the relationship is handled in different grammatical frames. Finally, the tool must run on the smallest, most portable machine available, if necessary trad- ing processing time for memory and external space. II RELATIONS Relations were proposed by Codd and elaborated on by Fagin, Ullman, and many others. They are un- ordered sets of tuples, each of which contains an ordered set of fields. Each field has a value tak- en from a domain -- semantically, from a particu- lar kind of information. In lexicography the tuples correspond, not to entries in a dictionary, but to subentries, each with a particular sense. Each tuple contains fields for various aspects of the form, meaning, meaning-to-form mapping, and use of that sense. For the update and retrieval operations defined on relations to work right, the information stored in a relation is normalized. Each field is restric- ted to an atomic value~ it says only one thing, not a series of different things. No field appears more than once in a tuple. Beyond these formal con- straints are conceptual constraints based on the fact that the information in some fields determines what can be in other fields; Ullman spells out the main kinds of such dependency. It is possible, as Shu and associates show, to normalize nearly any information structure by par- titioning it into a set of normal form relations. It can be presented to the user, however, in a view that draws on all these relations but is not itself in normal form. Reconstituting a subentry from normal form tuples was beyond the capacity of the equipment that could be used in the field; it would have been cripplingly slow. Before sealed Winchester disks came out, floppies were unreliable in tropical hu- midity where the work was to be done, and only small digital tape cartridges were thoroughly reli- able. So the organization had to be managed by se- quential merges across a series of small (.25M) tapes without random access. The requirements of normal form came to be an issue in three areas. First, the prosaic matter of examples violates normal form. Nearly any field in a dictionary can take any number of illustrative examples. 
Second, the actants or arguments at the level of semantic representation that corresponds to the definition are in a theoretical status that is not yet clear. Mel'chuk (1981) simply numbers the actants in a way that allows them to map to grammatical relations in as general a way as possible. Others, myself included, find recurring components of definitions on the order of Fillmore's cases (1968) that are at least as consistently motivated as are the lexical functions, and that map as sets of actants to sets of grammatical relations. Rather than load the dice at this uncertain stage by designating either numbered or labeled actants as distinct field types, it furthers discussion to be able to have Actant as a single field type that is repeatable, and whose value in each instance is a link between an actant number, a proposed case, and even possibly a conceptual dependency category for comparison (Schank and Abelson, 1977.11-17).

Third, lexical correlates are inherently many-to-one. For example, Huichol quii 'house' in its sense labeled 1.1 'where a person lives' has several antonyms: Ant(quii 1.1) = taa.cuaa 'space in front of a house', quii.ru'aa 'space behind a house', tei.cuarie 'space outside the fence', and an adverbial use of taa.cuaa 'outdoors' (Grimes, 1981.88).

One could normalize the cases of all three types. But both lexicographers and users expect the information to be in nonnormal form. Furthermore, we can make a realistic assumption that relational operations on a field are satisfied when there is one instance of that field that satisfies them. This is probably fatal for joins like "get me the Huichol word for 'travel', then merge its definition with the definitions of all other words whose agent and patient are inherently coreferential and involve motion"; but that kind of capability is beyond a small implementation anyway; the lexicographer who makes that kind of pass needs a large scale, fully normalized system. The kinds of selections one usually does can be aimed at any instance of a field, and projections can produce all instances of a field, quite happily for most work, and at an order of magnitude lower cost.

The important thing is to denormalize systematically so that normal form can be recovered when it is needed. Actants denormalize to fields repeated in a specified order. Examples denormalize to strings of examples appended to whatever field they illustrate. Lexical correlates denormalize to strings of values of particular functions, as in the antonym example just given. The functions themselves are ordered by a conventional list that groups similar functions together (Grimes 1981.288-291).

III CROSS REFERENCING

To build a dictionary consistently along the lines chosen, a computational tool needs to incorporate cross referencing. This means that for each field that is built, dummy entries are created for all or most of the words in the field. For example, the definition for 'opossum', yeuxu, includes clauses like ca yuu.yuurime pɨcua'aa 'eats things that are not green' and pɨcuɨi.mɨe-sɨe 'its tail is bare'. From these, notes are generated that guarantee that each word used in the definition will ultimately either get defined itself or will be tagged yuunaitɨ mepɨimaate 'everybody knows it' to identify it as a zero level form that is undefinable.

Each note tells what subentry its own head word is taken out of, and what field; this information is merged into a repeatable Notes field in the new entry.
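The note-generating pass can be pictured with a short Python sketch (our own simplification; the entry format is invented, and a real system would build notes from stems rather than raw words): every word in a definition field spawns a note recording its source subentry and field, merged into a dummy entry for that word.

    # A schematic sketch of the cross-referencing pass: each defining
    # word gets a dummy entry holding notes, so no defining term can
    # escape eventually being defined or tagged as zero level.

    from collections import defaultdict

    lexicon = {
        ("yeuxu", "1"): {"d": "eats things that are not green"},
    }

    def cross_reference(lexicon):
        notes = defaultdict(list)        # head word -> Notes field values
        for (head, sense), fields in lexicon.items():
            for field, text in fields.items():
                for word in text.split():
                    # record source subentry and field for the lexicographer
                    notes[word].append("%s %s, field %s" % (head, sense, field))
        return notes

    for word, ns in cross_reference(lexicon).items():
        print(word, "->", ns)   # each word's dummy entry carries its notes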
Under the stem~ruuri B 'be 39 alive, grow' appears the note d (y~uxu) • i cayuu.yuu- • J o rMne pUcua'aa 'eats thlngs that are not green'. This is a reminder to the lexicographer, first that there needs to be an entry for yuuri in sense B, and second that it needs to account at the very least for the way that stem is used in the defini- tion (d) field of the entry for yeuxu. Cross referencing to guarantee full coverage of all words that are used in definitions backs up a theoretical claim about definitional closure: the state where no matter how many words are added to the dictionary, all the words used to define them are themselves already defined, back to a finite set of zero level defining vocabulary. There is no clai, r that such a set is the only one possible; on- ly that at least one such set is l~Ossible. To reach closure even on a single set is such an ~--,ense task -- I spent eight months full time on Huichol lexicography and didn't get even a twentieth of the everyday vocabulary defined -- that it can be ap- proached only by some such systematic means. There are sets of conformable definitions that share most parts of their definitions, yet are not synonyms. Related species and groups of als~mals and plants have conformable definitions that are large- ly identical, but have differentiating parts as well (Grimes 1980). The same is true of sets of verbs llke ca/tel 'be sitting somewhere', ve/'u 'he standing somewhere', ma/mane 'be spread out some- where', and caa/hee 'be laid out straight some- where' (the slash separategunitary and multiple reference stems), which all share as part of their • . • , J • . deflnltlons ee.p~reu.teevl X-s~e cayupatatU• xa~.- s~e 'spend an extended time at X without changing to another location', but differ regarding the spatial orientation of what is at X. Cross refer- encing of words in definitions helps identify these cases. Values of lexical functions are not always com- pletely specified by the lexical function and the head word, so they are always cross referenced to create the opportunity for saying more about them. Qu~i 1.1 'house' in the sense of 'habitation of hu- mans'--~ersus 'stable' or 'lair' or 'hangar' 1.2 and 'ranch' 1.3) is pretty well defined by the function S_, substantive of the second actant, plus the head v~rb ca/tel 1.2 'live in a house' (versus 'be sitting somewhere', 1,1 and 'live in a locality' 1.3). Nevertheless it ha~ fifteen lexical functions of its own, includin@ the antonym set given ear- lier, and only one of those functions matches one of the nine that are associated with ca/tel 1.2: S. (ca/tei 1.2) = S 2 (~u~i 1.1) = ~ u ~ 'inhab- itant, householder'. Stepping outside the theoretical constraints of lexicography proper, the same cross referencing mechanism helps set up bilingual dictionaries. Def- initions are always in the language of the entries, but it is useful in many situations to gloss the definitions in some language of scientific dis- course or trade, then cross reference on the glos- ses by adding a tag that puts the notes from them into a separate section. I have done this both for Spanish, the national language of the country where Huichol is spoken, and for Latin, the language of the Linnean names of life forms. What results is not really a bilingual dictionary, because it ex- plains nothing at all about the second or third language -- no definitions, no mapping between grammatical relations and actants, no lexical func- tions for that language. It simply gives examples of counterparts of glosses. 
As such, however, it is no less useful than some bilingual dictionaries. To be consistent, the entries on the second language side would have to be as full as the first language entries, and some mechanism would have to be introduced for distinguishing translation equivalents rather than just senses in each language. As it is, cross referencing the glosses gives what is properly called an indexed unilingual dictionary as a handy intermediate stage.

IV IMPLEMENTATION

Because of the field situation for which the computational tool was required, it was implemented first in 1979 on an 8080 microcomputer with 32K of memory and two 130K sequentially accessible tape cartridges as an experimental package, later moved to an LSI-11/2 under RT-11 with .25M tapes. The language used was Simons's PTP (1984), designed for perspicuous handling of linguistic data.

Data management was done record by record to maintain integrity, but the normal form constraints on atomicity and singularity of fields were dropped. Functions were implemented as subtypes of a single field type, ordered with reference to a special list. Because dictionary users expect ordered records, that constraint was added, with provision for mapping non-ASCII sort sequences to an ASCII sort key that controlled merging.

Data entry and merging both put new instances of fields after existing instances of the same field, but this order of inclusion could be modified by the editor. Furthermore, multiple instances of a field could be collapsed into a single nonatomic value with separator symbols in it, or such a string value could be returned to multiple instances, both by the editor. Transformations between repeated fields, strings of atomic values, and various normal forms were worked out with Gary Simons but not implemented.

Cross referencing was done in two ways: automatically for values of lexical functions, and by means of tags written in while editing for any field. Tags directed the processor to build a cross reference note for a full word, prefix, stem, or suffix, and to file it in the first, second, or third language part. In every case the lexicographer had opportunity to edit in order to remove irrelevant material and to associate the correct name form.

Besides the major project in Huichol, the system was used by students for original lexicographic work in Dinka of the Sudan, Korean, and Isnag of the Philippines. If I were to rebuild the system now, I would probably use the University of California at Davis's CP/M version of Mumps on a portable Winchester machine in order to have total random access in portable form. The strategy of data management, however, would remain the same, as it fits the application area well. I suspect, but have not proved, that full normalization capability provided by random access would still turn out unacceptably slow on a small machine.

V DISCUSSION

Investigation of a language centers around four collections of information that computationally are like data bases: field notes, text collection with glosses and translations, grammar, and dictionary. The first two fit the relational paradigm easily, and are especially useful when supplemented with functions that display glosses interlinearly.

The grammar and dictionary, however, require denormalization in order to handle multiple examples, and dictionaries require the other kinds of denormalization that are presented here.
Ideally those examples come out of the field notes and texts, where they are discovered by an automatic parsing component of the grammar that is used by the selection algorithm, and they are attached to the appropriate spots in the grammar and dictionary by relational join operations.

VI REFERENCES

Codd, E. F. 1970. A relational model for large shared data banks. Communications of the ACM 13:6.377-387.

Fagin, R. 1979. A normal form for relational databases that is based on domains and keys. IBM Research Report RJ 2520.

Fillmore, Charles J. 1968. The case for case. In Emmon Bach and Robert T. Harms, eds., Universals in Linguistic Theory, New York: Holt, Rinehart and Winston, 1-88.

Grimes, Joseph E. 1980. Huichol life form classification I: Animals. Anthropological Linguistics 22:5.187-200. II: Plants. Anthropological Linguistics 22:6.264-274.

Grimes, Joseph E. 1981. El huichol: apuntes sobre el léxico [Huichol: notes on the lexicon], with P. de la Cruz, J. Carrillo, F. Díaz, R. Díaz, and A. de la Rosa. ERIC document ED 210 901, microfiche.

Kaplan, Ronald M. and Joan Bresnan. 1982. Lexical-functional grammar: a formal system for grammatical representation. In Joan Bresnan, ed., The Mental Representation of Grammatical Relations, Cambridge: The MIT Press, 173-281.

Mel'chuk, Igor A. 1981. Meaning-text models: a recent trend in Soviet linguistics. Annual Review of Anthropology 10:27-62.

Mel'chuk, Igor A., A. K. Zholkovsky, and Ju. D. Apresyan. In press. Tolkovo-kombinatornyj slovar' russkogo jazyka (with English introduction). Vienna: Wiener Slawistischer Almanach.

Schank, Roger C. and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale NJ: Lawrence Erlbaum Associates.

Simons, Gary F. 1984. Powerful Ideas for Text Processing. Dallas: Summer Institute of Linguistics.

Ullman, Jeffrey D. 1980. Principles of Database Systems. Rockville MD: Computer Science Press.

Wong, H. K. T. and N. C. Shu. 1980. An approach to relational data base scheme design. IBM Computer Science Research Report RJ 2688.
EXPERT SYSTEMS AND OTHER NEW TECHNIQUES IN MT SYSTEMS

Christian BOITET - René GERBER
Groupe d'Etudes pour la Traduction Automatique
BP n° 68, Université de Grenoble
38402 Saint-Martin d'Hères, FRANCE

ABSTRACT

Our MT systems integrate many advanced concepts from the fields of computer science, linguistics, and AI: specialized languages for linguistic programming based on production systems, complete linguistic programming environment, multilevel representations, organization of the lexicons around "lexical units", units of translation of the size of several paragraphs, possibility of using text-driven heuristic strategies.

We are now beginning to integrate new techniques: unified design of an "integrated" lexical data base containing the lexicon in "natural" and "coded" form, use of the "static grammars" formalism as a specification language, addition of expert systems equipped with "extralinguistic" or "metalinguistic" knowledge, and design of a kind of structural metaeditor (driven by a static grammar) allowing the interactive construction of a document in the same way as syntactic editors are used for developing programs. We end the paper by mentioning some projects for long-term research.

INTRODUCTION

In this paper, we assume some basic knowledge of CAT (Computer Aided Translation) terminology (MT, MAHT, HAMT, etc.). The starting point of our research towards "better" CAT systems is briefly reviewed in I. In II, we present 3 lines of current work: improving current second-generation methodology by incorporating advanced techniques from software engineering, moving toward third-generation systems by incorporating expert systems, and returning to interactive techniques for the creation of a document.

I - IMPORTANT CONCEPTS FROM EXISTING SYSTEMS

For lack of space, we only list our major points, and refer the reader to (3,4,5,6,15) for further details.

1 - Computer science aspects

1) Use of Specialized Languages for Linguistic Programming (SLLP), like ATEF, ROBRA, Q-systems, REZO, etc.

2) Integration in some "user-friendly" environment, controlled by a conversational interface, and managing a specialized data base composed of what we call "lingware" (grammars, dictionaries, procedures, formats, variables) and corpuses of texts (source, translated, revised, plus intermediate results and possibly "hors-textes" -- figures, etc.).

3) Analogy with compiler-compiler systems: rough translation is realized by a monolingual analysis, followed by a bilingual transfer, and then by a monolingual generation (synthesis).

2 - Linguistic aspects

1) Only linguistic levels (of morphology, syntax, logico-semantics, modality, actualisation, ...) are used, leading to some implicit understanding, characteristic of second-generation MT systems.

2) Hence, the extralinguistic levels (of expertise and pragmatics) which furnish some degree of explicit understanding are beyond the limits of second-generation CAT systems.

3) During analysis of a unit of translation, computation of these (linguistic) levels is not done sequentially, but in a cooperative way. Analysis produces the analog of an "abstract tree", namely a multilevel interface structure to represent all the computed levels on the same graph (a "decorated tree"; a small sketch follows this list).

4) Lexical knowledge is organized around the notion of lexical unit (LU), allowing for powerful paraphrasing capability.

5) The texts are segmented into translation units of one or more paragraphs. This allows for intersentential resolution of anaphora in some not too difficult cases.
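As a rough illustration of point 3 above, the following Python sketch (our own; GETA's actual structures in ARIANE-78 are richer) shows a decorated tree: a single tree whose nodes carry decorations for several linguistic levels at once.

    # A minimal sketch of a multilevel "decorated tree": one structure,
    # decorations for several levels. All decoration values are invented.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Node:
        decorations: Dict[str, str]            # level name -> value
        children: List["Node"] = field(default_factory=list)

    clause = Node(
        {"cat": "VCL", "logico_semantics": "ARG1(love, man)"},
        [Node({"cat": "N", "lexical_unit": "MAN", "syntax": "SUBJ"}),
         Node({"cat": "V", "lexical_unit": "LOVE", "modality": "declarative"})],
    )

    def levels(node):
        """All level names decorating a (sub)tree."""
        out = set(node.decorations)
        for child in node.children:
            out |= levels(child)
        return out

    print(sorted(levels(clause)))
    # ['cat', 'lexical_unit', 'logico_semantics', 'modality', 'syntax']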
3 - AI aspects

1) During the structural steps, the unit of translation is represented by the current "object tree", which may encode several competing interpretations, like the "blackboard" of some AI systems.

2) This and the SLLPs' control structures allow for some heuristic programming: it is possible to explicitly describe and process ambiguous situations in the production rules. This is in contrast to systems based on combinatorial algorithms which construct each interpretation independently, even if they represent them in a factorized way.

II - DIRECTIONS OF CURRENT WORK

1 - Linguistic knowledge processing

The experience gained by the development of a Russian-French translation unit of a realistic size over the last three years (6) has shown that maintaining and upgrading the lingware, even in an admittedly limited second generation CAT system, requires a good deal of expertise. Techniques are now being developed to maintain the linguistic knowledge base. Some of them deal with the lexical data base, others with the definition and use of specification formalisms ("static grammars") and verification tools.

Lexical knowledge processing

In the long run, dictionaries turn out to be the costliest components of CAT systems. Hence, we are working towards the reconciliation of "natural" and "coded" dictionaries, and towards the construction of automated verification and indexing tools.

Natural dictionaries are usually accessed by lemmas (normal forms). Coded dictionaries of CAT systems, on the other hand, are accessed by morphs or by lexical units. Moreover, the information the two types of dictionaries contain is not the same. However, it is highly desirable to maintain some degree of coherency between the coded dictionaries of a CAT system and the natural dictionaries which constitute their source, for documentation purposes, and also because these computerized natural dictionaries should be made accessible to the revisors.

Let us briefly present the kind of structure proposed by N. Nedobejkine and Ch. Boitet at an ATALA meeting in Paris in 1983. The central idea here is to start from the structure of modern dictionaries, which are accessed by the lemmas, but use the notion of lexical unit. Each item may be considered as a tree structure. Starting from the top, selections of a "local" nature (on the syntactico-semantic behavior in a phrase or in a sentence) give access to the "constructions". Then, more "global" constraints lead to "word senses". At each node, codes of one or more formalized models may be grafted on. Hence, it is in principle possible to index directly in this structure, and then to design programs to construct the coded dictionaries in the formats expected by the various SLLP.

Up to this level, the information is monolingual and usable for analysis as well as for generation. If the considered language is source in one or more language pairs, each word sense may be further refined, for each target language, and lead to equivalents expressed as constructions of the target language, with all other information contained in the dictionary constructed in a similar way for the target language. For lack of space, we cannot include examples. This part of the work thus aims at finding a good way of representing lexical knowledge.
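Since space precluded an example in the original, the nested structure below is purely our own guess at the kind of item tree just described (all entries are hypothetical): a lemma gives access to constructions through local selections, constructions to word senses through global constraints, and word senses to equivalents for each target language.

    # A hypothetical lexical item tree: lemma -> constructions ->
    # word senses -> per-target-language equivalents.

    item = {
        "lemma": "charge",
        "constructions": [
            {"pattern": "charge X with Y",     # local syntactico-semantic
             "senses": [
                 {"gloss": "accuse",
                  "equivalents": {"fr": "accuser X de Y"}},
             ]},
            {"pattern": "charge X",
             "senses": [
                 {"gloss": "load, fill",
                  "equivalents": {"fr": "charger X"}},
             ]},
        ],
    }

    def equivalents(item, target):
        """Walk the tree down to the word senses for one target language."""
        for c in item["constructions"]:
            for s in c["senses"]:
                yield c["pattern"], s["gloss"], s["equivalents"].get(target)

    for row in equivalents(item, "fr"):
        print(row)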
But there is another problem, perhaps even more important. Because of the cost of building machine dictionaries, we need some way to transform and transport lexical knowledge from one CAT system to another. This is obviously a problem of translation. Hence, we consider this type of "integrated structure" as a possible lexical interface structure. Research has recently begun on the possibility of using classical or advanced data base systems to store this lexical knowledge and to implement the various tools required for addition and verification. VISULEX and ATLAS (1) are first versions of such tools.

Grammatical knowledge processing

Just as in current software engineering, we have long felt the need for some level of "static" (algebraic) specification of the functions to be realized by algorithms expressed in procedural programming languages. In the case of CAT systems, there is no a priori correct grammar of the language, and natural language is inherently ambiguous. Hence, any usable specification must specify a relation (not a function) between strings and trees, or trees and trees: many trees may correspond to one string, and, conversely, many strings may correspond to one tree.

Working with B. Vauquois in this direction, S. Chappuy has developed a formalism of static grammars (7), presented in charts expressing the relation between strings of terminal elements (usually decorations expressing the result of some morphological analysis) and multilevel structural descriptors. This formalism is currently being used for all new linguistic developments at GETA.

Of course, this is not a completely new idea. For example, M. Kay (13) proposed the formalism of unification grammars for quite the same purpose. But his formalism is more algebraic and less geometric in nature, and we prefer to use a specification in terms of the kind of structures we are accustomed to manipulating.

2 - Grafting on expert systems

Seeing that linguistic expertise is already quite well represented and handled in current ("closed") systems, we are orienting our research towards the possibility of adding extralinguistic knowledge (knowledge about some technical or scientific field, for instance) to existing CAT systems. Also, because current systems are based on transducers rather than on analyzers, it is perfectly possible that the results of analysis or of transfer (the "structural descriptors") are partially incorrect and need correction. Knowledge about the types of errors made by linguistic systems may be called metalinguistic.

In his recent thesis (9), R. Gerber has attempted to design such a system, and to propose an initial implementation. The expertise to be incorporated in this system includes linguistic, metalinguistic, and extralinguistic knowledge. The system is constructed by combining a "closed" system, based only on linguistic knowledge (a lingware written in ARIANE-78), and two "open" systems, called "expert corrector systems". The first is inserted at the junction between analysis and transfer, and the second between transfer and generation.
The lingware used corresponds to a small English-French system developed for teaching pur- poses. Here are some examples. Example I : ADJ + N N (1) Standard free-energy change is calculated by this equation. The analyzer proposes that "standard"modifies "change", while "free-energy" is juxtaposed to "change", hence the erroneous translation : "La variable standard d'~nergie libre est calcul~e par cette formule". In order to correct the structure, some knowledge of chemistry is required, namely that "standard free-energy change" is a ... standard notion. With this grouping, (1) translates as : "La variation d'finergie libre standard est calcul~e par cette formule". Example 2 : (ADJ) N and N N (2) The mixture gives off dangerous cyanide and chlorine fumes. (2') The experiment requires carbon and nitrogen tetraoxyde. Let us develop this example a little more. Sentence (2) presents the problem of determining the scope of the coordination. The result of ana- lysis (tree n ° 2) groups "dangerous cyanide" and chlorine fumes", "chlorine" being juxtaposed to "fumes" (SF(JUXT) on node 12). Hence the translation : "La preparation d~gage le cyanure et la vapeur de chlore dangereux". But, if we know that cyanide is dangerous as fumes, and not as crystals, we can correct the structure by grouping "(cyanide and chlorine) fumes" (see subtree n ° 2). The translation produced will then be : "La preparation d~gage la vapeur dangereuse de cyanure et de chlore". Of course, some more sophisticated analyzers would (and some actually do) use the semantic mar- ker "chemical element" present on both "chlorine" and "cyanide", and then group them on the basis of the " semantlc density" (e.g., number of features shared). But this technique will fail on (2'), because there is no "carbon tetraoxyde" in normal chemistry ! Hence, without extralinguistic knowledge, this more sophisticated (linguistic) strategy will produce : "L'expfirience demande du t~traoxyde de carbone et d'azote". instead of : "L'expfirience demande du carbone et du tfitraoxyde d'azote". RESULTAT DE L'EXECUTION. TEXTE: REHEC PHRASE2 ANALYSE STRUCTURALE ULTXT ...... I I I ' Tree n" 2 ULFRA ...... 2 I IVCL ...... 3 I I I I~NP s~ ...... 4 ...... 7 THE MIXTURE GIVE ...... 5 ...... 6 ...... 8 I I ~p ...... 9 .17 I I I I XAP CYANIDE ..... IO ..... 12 ..... 13 I I OANCERO AND QILORIN FUMES U .... 11 ..... 14 £ .... 15 ..... 16 SO~ET 9 ' ': ~('~NP'),RL(ARGI),K(NP),SF(OBJI),~T(N),SUBN(CN), N~(SIN),$~(CONC),SEHCO(SUBST),~I(N). SO~ET lO' ': UL('~P'),RS(QUAL),K(AP).SF(ATG),~T(A)tSU~(~J), [MPERS(I~ED),SUBJR(INF). S~T II 'DANGEROUS': UL('DANGEROUS'),SF(GOV),CAT(A),SUBA(ADJ), SUBJR(INF). SOt4HET 12 '~ANIDE': ~'CYANIDE').SFtGOV),~T(N),SUBN(CN),N~(SIH). S~(CONC) ,SENCO(S~ST). SO~ET 13 ' ': UL('~NP'),RL(ID),K(NP),SF(COO~),~T(N),SUBN(CN). N~(PLU),SHM(CONC),SEMCO(SUBST),VLI(N). SO~ET 14 'Am': ~('AND'),CAT(C). SOM=MET ]5 'CHLORINE': UL('CHLORINE'),RS(QUAL),UNSAFE(RS),SF(JUXT), CAT(N),SUBN(CN),NUH(SIN).SEH(CONC),SEMCO(SUBST). SOMHET 16 'F~ES' :~('F~ES' ) ,SF(GOV) ,CAT(N) ,SUBN(CN) ,N~(PLU), SEM(CONC),SEMCO(SUSST). TEXTS REHEG PHRASE2 Analyse structuraIe colfr~.g61 ~P i i ...... 9 I I SAP ..... IO I DANGHRO CYANIDE U .... II ..... 12 I I FUMES ..... 9' ..... 16 I I ~nP ..... 13 AND CHLORINE ..... 14 ...... 15 Example 3 : Antecedent of "which" (3) The water in the beaker with which the chlorine combines will the poisonous. The analyzer takes "beaker" instead of"water" as antecedent of "which". The corrector may know that chlorine combines with water, and not with a beaker. 
Examples 4 & 5: Antecedent of "it" within or beyond the same sentence
(4) The state in which a substance is depends on the energy that it contains. When a substance is heated the energy of the substance is increased.
(5) The particles vibrate more vigorously, and it becomes a liquid.
(5') It melts.

In order to choose between "substance" and "state" in (4), one must make some type of complex reasoning using detailed knowledge of physics -- and one may easily fail in a given context: it is not correct to simply state (as we did to solve this particular case) that a substance may possess energy, while a state cannot. Here, perhaps it is better to rely on some (metalinguistic) information on the typology, which may be included in a (specialized) linguistic analyzer, or in the expert corrector system. For (5), there are simple, but powerful rules like: if the antecedent cannot be found in the sentence, look for the nearest possible main clause subject to the left.

3 - Aiding the creation of the source documents

Lingware engineering may be compared with modern software engineering, because it requires the design and implementation of complete programming systems, uses specification tools, and leads to research in automatic program generation. Starting from this analogy, a group of researchers at GETA have recently embarked on a project which could converge with still another line of software engineering, in a very interesting way. The final aim is to design and implement a syntactico-semantic structural metaeditor that uses a static grammar given as parameter in order to guide an author who is writing a document, in much the same manner as metaeditors like MENTOR are used for writing programs in classical programming languages.

This could offer an attractive alternative to interactive CAT systems like ITS, which require a specialist to assist the system during the translation process. As a matter of fact, this principle is a sophisticated variant of the "controlled syntax" idea, like that implemented in the TITUS system. Its essential advantage is to guarantee the correctness of the intermediate structure, without the need for a large domain-specific knowledge base. It may be added that, in many cases, the documents being written are in effect contributing some new knowledge to the domain of discourse, which hence cannot already be present in the computerized knowledge base, even if one exists.

III - CONCLUSION: SOME LONG TERM PERSPECTIVES

There are many areas open for future research. The introduction of "static grammars" suggests a new kind of design, where the "dynamic grammars" would be generated from the specifications and from some strategies, possibly expressed as "meta-rules". "Multisliced decorated trees" (16) have been introduced as a data structure for the explicit factorization of decorated trees. However, there remains to develop a full implementation of the associated parallel rewriting rule system, STAR-PALE, and to test its linguistic practicability. Last but not least, the development of true "translation expert systems" requires an intensive (psycholinguistic) study of the expertise used by human translators and revisors.

REFERENCES

(1) Bachut D. - Vérastégui N. "Software tools for the environment of a computer aided translation system". COLING-84.
(2) Barr A. - Feigenbaum E., eds. "The Handbook of Artificial Intelligence" (vol. 1, 2). Pitman, 1981.
(3) Boitet Ch. "Research and development on MT and related techniques at Grenoble University (GETA)". Tutorial on MT, Lugano, April 1984, 17 p.
(4) Boitet Ch. - Guillaume P. - Quézel-Ambrunaz M. "Implementation and conversational environment of ARIANE 78.4, an integrated system for translation and human revision". Proc. of COLING-82, Prague, July 1982, North-Holland, 19-27.
(5) Boitet Ch. - Nédobejkine N. "Recent developments in Russian-French Machine Translation at Grenoble". Linguistics 19, 199-271, 1981.
(6) Boitet Ch. - Nédobejkine N. "Illustration sur le développement d'un atelier de traduction automatisée". Colloque "L'informatique au service de la linguistique", Université de Metz, juin 1983.
(7) Chappuy S. "Formalisation de la description des niveaux d'interprétation des langues naturelles. Etude menée en vue de l'analyse et de la génération au moyen de transducteurs". Thèse de 3ème cycle, USMG, Grenoble, juillet 1983.
(8) Donz Ph. "Foll, une extension au langage PROLOG". Document CRISS, Grenoble, Université II, février 1983.
(9) Gerber R. "Etude des possibilités de coopération entre un système fondé sur des techniques de compréhension implicite (système logico-sémantique) et un système fondé sur des techniques de compréhension explicite (système expert)". Thèse de 3ème cycle, Grenoble, USMG, janvier 1984.
(10) Hayes-Roth F. - Waterman D.A. - Lenat D.B., eds. "Building expert systems". Reading MA, London: Addison-Wesley, 1983.
(11) Hobbs J.R. "Coherence and co-reference". Cognitive Science 3, 67-90, 1979.
(12) Isabelle P. "Perspectives d'avenir du groupe TAUM et du système TAUM-AVIATION". TAUM, Université de Montréal, mai 1981.
(13) Kay M. "Unification grammars". Doc. Xerox, 1982.
(14) Laurière J.L. "Représentation et utilisation des connaissances". TSI 1(1,2), 1982.
(15) Vauquois B. "La traduction automatique à Grenoble". Document de Linguistique Quantitative n° 29, Dunod, 1975.
(16) Vérastégui N. "Etude du parallélisme appliqué à la traduction automatisée par ordinateur. STAR-PALE : un système parallèle". Thèse de Docteur-Ingénieur, USMG & INPG, Grenoble, mai 1982.

471 | 1984 | 100 |
ROBUST PROCESSING IN MACHINE TRANSLATION

Doug Arnold, Centre for Cognitive Studies, University of Essex, Colchester, CO4 3SQ, U.K.
Rod Johnson, Centre for Computational Linguistics, UMIST, Manchester, M60 8QD, U.K.

ABSTRACT

In this paper we provide an abstract characterisation of different kinds of robust processing in Machine Translation and Natural Language Processing systems in terms of the kinds of problem they are supposed to solve. We focus on one problem which is typically exacerbated by robust processing, and for which we know of no existing solutions. We discuss two possible approaches to this, emphasising the need to correct or repair processing malfunctions.

ROBUST PROCESSING IN MACHINE TRANSLATION

This paper is an attempt to provide part of the basis for a general theory of robust processing in Machine Translation (MT) with relevance to other areas of Natural Language Processing (NLP): that is, processing which is resistant to malfunctioning however caused.

The background to the paper is work on a general purpose fully automatic multi-lingual MT system within a highly decentralised organisational framework (specifically, the Eurotra system under development by the EEC). This influences us in a number of ways. Decentralised development, and the fact that the system is to be general purpose, motivate the formulation of a general theory, which abstracts away from matters of purely local relevance, and does not e.g. depend on exploiting special properties of a particular subject field (compare [7], e.g.). The fact that we consider robustness at all can be seen as a result of the difficulty of MT, and the aim of full automation is reflected in our concentration on a theory of robust processing, rather than "developmental robustness". We will not be concerned here with problems that arise in designing systems so that they are capable of extension and repair (e.g. not being prone to unforeseen "ripple effects" under modification). Developmental robustness is clearly essential, and such problems are serious, but no system which relies on this kind of robustness can ever be fully automatic. For the same reason, we will not consider the use of "interactive" approaches to robustness such as that of [10]. Finally, the fact that we are concerned with translation militates against the kind of disregard for input that is characteristic of some robust systems (PARRY [4] is an extreme example), and motivates a concern with the repair or correction of errors. It is not enough that a translation system produces superficially acceptable output for a wide class of inputs; it should aim to produce outputs which represent as nearly as possible translations of the inputs. If it cannot do this, then in some cases it will be better if it indicates as much, so that other action can be taken.

From the point of view we adopt, it is possible to regard MT and NLP systems generally as sets of processes implementing relations between representations (texts can be considered representations of themselves). It is important to distinguish:
the relation "is a (correct) translation of', or "is t~e surface constituent structure of'): we have only fairly vague, pre-theoretical ideas about Rs, in virtue of being bi-lingual speakers, or having some intuitive grasp of the semantics of artificial representations; (ii) T: a theoretical construct which is supposed to embody R; (iii) P: a process or program that is supposed to implement By a robust process P, we mean one which operates error free for all inputs. Clearly, the notion of error or correctness of P depends on the independent standard provided by T and R. If, for the sake of simplicity we ignore the possibility of ambiguous inputs here, we can define correctness thus: (1) Given P(x)=y, and a set W such that ~or all w in W, R(w)=y, then y is correct with respect to R and w iff x is a member of W. Intuitively, W is the set of items for which y is the correct representation according to R. One possible source of errors in P would be if P correctly implemented T, but T did not embody R. Clearly, in this case, the only sensible solution is to modify T. Since we can imagine no automatic way of finding such errors and doing this, we will 472 ignore this possibility, end assume that T is a we11-defined, correct and complete embodiment of R. We can thus replace R by T in (I), and treat T as the standard of correctness below. There appear to be two possible sources of error in P: Problem (1): where P is not a correct implementation of T. One would expect this to be common where (as often in MT and NLP) T is very complex, and serious problems arise in devising implementations for them. Problem (ii): where P is a correct implementation so far as it goes, but is incom- plete, so that the domain of P is a proper-subset of the domain of T. This will also be very common: in reality processes are often faced with inputs that violate the expectations implicit in an implementation. If we disregard hardware errors, low level bugs and such malfunctions as non-termlnatlon of P (for which there are well-known solutions), there are three possible manifestations of malfunction. We will discuss them in tur~ case (a): P(x)=@, where T(x)~@ i.e. P halts producing ~ output for input x, where this is not the intended output. This would be a typical response to unforseen or illformed input, and is the case of process fragility that is most often dealt with. There are two obvious solutions: (1) to manipulate the input so that it conforms to the expectations implicit in P (cf. the LIFER [8] approach to ellipsis), or to change P Itself, modifying (generally relaxing) its expectations (cf. e.g. the approaches of [7], [9], [10] and [Ii]). If successful, these guarantee that P produces some output for input x. However, there is of course no guarantee that it is correct with respect to T. It may be that P plus the input manipulation process, or P with relaxed expectat- ions is simply a more correct or complete implem- entation of T, but this will be fortuitous. It is more llkely that making P robust in these ways will lead to errors of another kind: case (b): P(x)=z where z is not a legal output for P according to T (i.e. z is not in the range of T. Typically, such an error will show itself by malfunctioning in a process that P feeds. Detec- tion of such errors is straightforward: a well- formedness check on the output of P is sufficient. By itself, of course, this will lead to a proliferation of case-(a) errors in P. 
These can be avoided by a number of methods, in particular: (i) introducing some process to manipulate the output of P to make it well-formed according to T, or (ii) attempting to set up processes that feed on P so that they can use "abnormal" or "non-standard" output from P (e.g. partial representations, or complete intermediate representations produced within P, or alternative representations constructed within P which can be more reliably computed than the "normal" intended output of P; the representational theories of GETA and Eurotra are designed with this in mind: cf. [2], [3], [5], [6], and references there, and see [1] for fuller discussion of these issues). Again, it is conceivable that the result of this may be to produce a robust P that implements T more correctly or completely, but again this will be fortuitous. The most likely result will be that robust P will now produce errors of the third type:

case (c): P(x)=y, where y is a legal output for P according to T, but is not the intended output according to T, i.e. y is in the range of T, but y≠T(x). Suppose both input x and output y of some process are legal objects; it nevertheless does not follow that they have been correctly paired by the process. E.g. in the case of a parsing process, x may be some sentence and y some representation. Obviously, the fact that x and y are legal objects for the parsing process and that y is the output of the parser for input x does not guarantee that y is a correct representation of x. Of course, robust processing should be resistant to this kind of malfunctioning also.

Case-(c) errors are by far the most serious and resistant to solution, because they are the hardest to detect, and because in many cases no output is preferable to superficially (misleadingly) well-formed but incorrect output. Notice also that while any process may be subject to this kind of error, making a system robust in response to case-(a) and case-(b) errors will make this class of errors more widespread: we have suggested that the likely result of changing P to make it robust will be that it no longer pairs representations in the manner required by T; but since any process that takes the output of P should be set up so as to expect inputs that conform to T (since this is the "correct" embodiment of R, we have assumed), we can expect that in general making a process robust will lead to cascades of errors. If we assume that a system is resistant to case-(a) and case-(b) errors, then it follows that inputs for which the system has to resort to robust processing will be likely to lead to case-(c) errors.

Moreover, we can expect that making P robust will have made case-(c) errors more difficult to deal with. The likely result of making P robust is that it no longer implements T, but some T' which is distinct from T, and for which assumptions about correctness in relation to R no longer hold. It is obvious that the possibility of detecting case-(c) errors depends on the possibility of distinguishing T from T'. Theoretically, this is unproblematic. However, in a domain such as MT it will be rather unusual for T and T' to exist separately from the processes that implement them. Thus, if we are to have any chance of detecting case-(c) errors, we must be able to clearly distinguish those aspects of a process that relate to "normal" processing from those that relate to robust processing. This distinction is not one that is made in most robust systems. We know of no existing solutions to case-(c) malfunctions.
Here we will outline two possible approaches. To begin with, we might consider a partial solution derived from a well-known technique in systems theory: insuring against the effect of faulty components in crucial parts of a system by computing the result for a given input by a number of different routes. For our purposes, the method would consist essentially in implementing the same theory T as a number of distinct processes P1, ..., Pn, etc. to be run in parallel, comparing outputs and using statistical criteria to determine the correctness of processing. We will call this the "statistical solution". (Notice that certain kinds of system architecture make this quite feasible, even given real time constraints.) Clearly, while this should significantly improve the chances that output will be correct, it can provide no guarantee. Moreover, the kind of situation we are considering is more complex than that arising given failure of relatively simple pieces of hardware. In particular, to make this worthwhile, we must be able to ensure that the different Ps are genuinely distinct, and that they are reasonably complete and correct implementations of T -- at the very least, sufficiently complete and correct that their outputs can be sensibly compared. Unfortunately, this will be very difficult to ensure, particularly in a field such as MT, where Ts are generally very complex, and (as we have noted) are often not stated separately from the processes that implement them.

The statistical approach is attractive because it seems to provide a simultaneous solution to both the detection and repair of case-(c) errors, and we consider such solutions certainly worth further consideration. However, realistically, we expect the normal situation to be that it is difficult to produce reasonably correct and complete distinct implementations, so that we are forced to look for an alternative approach to the detection of case-(c) errors.

It is obvious that reliable detection of (c)-type errors requires the implementation of a relation that pairs representations in exactly the same way as T: the obvious candidate is a process P⁻¹, implementing T⁻¹, the inverse of T. The basic method here would be to compute an enumeration of the set of all possible inputs W that could have yielded the actual output, given T and some hypothetical ideal P which correctly implements it. (Again, this is not unrealistic; certain system architectures would allow forward computation to proceed while this inverse processing is carried out.) To make this worthwhile would involve two assumptions:

(i) That P⁻¹ terminates in reasonable time. This cannot be guaranteed, but the assumption can be rendered more reasonable by observing characteristics of the input, and thus restricting W (e.g. restricting the members of W in relation to the length of the input to P⁻¹).

(ii) That construction of P⁻¹ is somehow more straightforward than construction of P, so that P⁻¹ is likely to be more reliable (correct and complete) than P. In fact this is not implausible for some applications (e.g. consider the case where P is a parser: it is a widely held idea that generators are easier to build than parsers).

Granted these assumptions, detection of case-(c) errors is straightforward given this "inverse mapping" approach: one simply examines the enumeration to see whether the actual input is present. If it is present, then, given that P⁻¹ is likely to be more reliable than P, it is likely that the output of P was T-correct, and hence did not constitute a case-(c) error; at least, the chances of the output of P being correct have been increased. If the input is not present, then it is likely that P has produced a case-(c) error. The response to this will depend on the domain and application -- e.g. on whether incorrect but superficially well-formed output is preferable to no output at all.
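As a concrete illustration, here is a schematic Python sketch of the two detection strategies. Representing P, its variant implementations, and P⁻¹ as ordinary functions, and all the names below, are our own assumptions for exposition, not part of the paper:

# A schematic sketch of the two case-(c) detection strategies just outlined.
# All names and the functional framing are illustrative assumptions only.

from collections import Counter

def statistical_solution(variants, x):
    # run several distinct implementations of the same theory T "in parallel"
    # and take the majority output as (probably) correct
    outputs = [p(x) for p in variants]
    winner, votes = Counter(outputs).most_common(1)[0]
    return winner, votes / len(outputs)            # output plus a crude confidence

def inverse_mapping_check(p, p_inverse, x):
    # check P's output by enumerating, via P^-1, the inputs that could have
    # produced it; flag a likely case-(c) error if x is not among them
    y = p(x)
    candidates = p_inverse(y, max_length=len(x))   # restrict W, e.g. by input length
    return y, (x in candidates)                    # False: suspected case-(c) error

# Toy demonstration: three "processes", one faulty, and a trivial inverse.
variants = [str.upper, str.upper, lambda s: s.lower()]
print(statistical_solution(variants, "abc"))                               # ('ABC', 0.66..)
print(inverse_mapping_check(str.upper, lambda y, max_length: {y.lower()}, "abc"))

Note that the voting scheme both detects and repairs (by majority), while the inverse check only detects; this mirrors the contrast drawn in the text.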
In the nature of things, we will ultimately be led back to the original problems of robustness, but now in connection with P⁻¹. For this reason we cannot foresee any complete solution to problems of robustness generally. What we have seen is that solutions to one sort of fragility are normally only partly successful, leading to errors of another kind elsewhere. Clearly, what we have to hope is that each attempt to eliminate a source of error nevertheless leads to a net decrease in the overall number of errors. On the one hand, this hope is reasonable, since sometimes the faults that give rise to processing errors are actually fixed. But there can be no general guarantee of this, so that it seems clear that merely making systems or processes robust in the ways described provides only a partial solution to the problem of processing errors.

This should not be surprising. Because our primary concern is with automatic error detection and repair, we have assumed throughout that T could be considered a correct and complete embodiment of R. Of course, this is unrealistic, and in fact it is probable that for many processes at least as many processing errors will arise from the inadequacy of T with respect to R as arise from the inadequacy of P with respect to T. Our pre-theoretical and intuitive ability to relate representations far exceeds our ability to formulate clear theoretical statements about these relations. Given this, it would seem that error-free processing depends at least as much on the correctness of theoretical models as on the capacity of a system to take advantage of the techniques described above.

We should emphasise this because it sometimes appears as though techniques for ensuring process robustness might have a wider importance. We assumed above that T was to be regarded as a correct embodiment of R. Suppose this assumption is relaxed, and in addition that (as we have argued is likely to be the case) the robust version of P implements a relation T' which is distinct from T. Now, it could, in principle, turn out that T' is a better embodiment of R than T. It is worth saying that this possibility is remote, because it is a possibility that seems to be taken seriously elsewhere: almost all the strategies we have mentioned as enhancing process robustness were originally proposed as theoretical devices to increase the adequacy of Ts in relation to Rs (e.g. by providing an account of metaphorical or other "problematic" usage). There can be no question that, apart from improvements of T, such theoretical developments can have the side effect of increasing robustness. But notice that their justification is then not to do with robustness, but with theoretical adequacy. What must be emphasised is that the chances that a modification of a process to enhance robustness (and improve reliability) will also have the effect of improving the quality of its performance are extremely slim.
We cannot expect robust processing to produce results which are as good as those that would result from "ideal" (optimal/non-robust) processing. In fact, we have suggested that existing techniques for ensuring process robustness typically have the effect of changing the theory the process implements, changing the relationship between representations that the system defines in ways which do not preserve the relationship the designers intended, so that processes that have been made robust by existing methods can be expected to produce output of lower than intended quality.

These remarks are intended to emphasise the importance of clear, complete, and correct theoretical models of the pre-theoretical relationships between the representations involved in systems for which error-free "robust" operation is important, and to emphasise the need for approaches to robustness (such as the two we have outlined above) that make it more likely that robust processes will maintain the relationship between representations that the designers of the "normal/optimal" processes intended. That is, to emphasise the need to detect and repair malfunctions, so as to promote correct processing.

ACKNOWLEDGEMENTS

Our debt to the Eurotra project is great: collaboration on this paper developed out of work on Eurotra and has only been possible because of opportunities made available by the project. Some of the ideas in this paper were first aired in Eurotra report ETL-3 ([6]), and in a paper presented at the Cranfield conference on MT earlier this year. We would like to thank all our friends and colleagues in the project and our institutions. The views (and, in particular, the errors) in this paper are our own responsibility, and should not be interpreted as "official" Eurotra doctrine.

REFERENCES

1. ARNOLD, D.J. & JOHNSON, R. (1984) "Approaches to Robust Processing in Machine Translation", Cognitive Studies Memo, University of Essex.
2. BOITET, CH. (1984) "Research and Development on MT and Related Techniques at Grenoble University (GETA)", paper presented at Lugano MT tutorial, April 1984.
3. BOITET, CH. & NEDOBEJKINE, N. (1980) "Russian-French at GETA: an outline of method and a detailed example", RR 219, GETA, Grenoble.
4. COLBY, K. (1975) Artificial Paranoia, Pergamon Press, Oxford.
5. ETL-1-NL/B "Transfer (Taxonomy, Safety Nets, Strategy)", Report by the Belgo-Dutch Eurotra Group, August 1983.
6. ETL-3 Final "Trio" Report by the Eurotra Central Linguistics Team (Arnold, Jaspaert, Des Tombe), February 1984.
7. HAYES, P.J. and MOURADIAN, G.V. (1981) "Flexible parsing", AJCL 7, 4:232-242.
8. HENDRIX, G.G. (1977) "Human Engineering for Applied Natural Language Processing", Proc. 5th IJCAI, 183-191, MIT Press.
9. KWASNY, S.C. and SONDHEIMER, N.K. (1981) "Relaxation Techniques for Parsing Grammatically Ill-formed Input in Natural Language Understanding Systems", AJCL 7, 2:99-108.
10. WEISCHEDEL, R.M. and BLACK, J. (1980) "Responding Intelligently to Unparsable Inputs", AJCL 6, 2:97-109.
11. WILKS, Y. (1975) "A Preferential Pattern Matching Semantics for Natural Language", A.I. 6:53-74.

475 | 1984 | 101 |
Disambiguating Grammatically Ambiguous Sentences By Asking†

Masaru Tomita
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

The problem addressed in this paper is to disambiguate grammatically ambiguous input sentences by asking the user, who need not be a computer specialist or a linguist, without showing any parse trees or phrase structure rules. Explanation List Comparison (ELC) is the technique that implements this process. It is applicable to all parsers which are based on phrase structure grammar, regardless of the parser implementation. An experimental system has been implemented at Carnegie-Mellon University, and it has been applied to English-Japanese machine translation at Kyoto University.

1. Introduction

A large number of techniques using semantic information have been developed to resolve natural language ambiguity. However, not all ambiguity problems can be solved by those techniques at the current state of the art. Moreover, some sentences are absolutely ambiguous, that is, even a human cannot disambiguate them. Therefore, it is important for the system to be capable of asking a user questions interactively to disambiguate a sentence. Here, we make an important condition that the user is neither a computer scientist nor a linguist. Thus, the user may not recognize any special terms or notations like a tree structure, phrase structure grammar, etc.

The first system to disambiguate sentences by asking interactively is perhaps a program called "disambiguator" in Kay's MIND system [1]. Although the disambiguation algorithm is not presented in [1], some basic ideas have been already implemented in Kay's system.²

In this paper, we shall deal only with grammatical ambiguity, or in other words, syntactic ambiguity. Other ambiguity problems, such as word-sense ambiguity and referential ambiguity, are excluded. Suppose a system is given the sentence:

"Mary saw a man with a telescope"

and the system has a phrase structure grammar including the following rules <a> - <g>:

<a> S  --> NP + VP
<b> S  --> NP + VP + PP
<c> NP --> *noun
<d> NP --> *det + *noun
<e> NP --> NP + PP
<f> PP --> *prep + NP
<g> VP --> *verb + NP

The system would produce two parse trees from the input sentence (I. using rules <b>,<c>,<g>,<d>,<f>,<d>; II. using rules <a>,<c>,<g>,<e>,<d>,<f>,<d>). The difference is whether the prepositional phrase "with a telescope" qualifies the noun phrase "a man" or the sentence "Mary saw a man". This paper shall discuss how to ask the user to select his intended interpretation without showing any kind of tree structures or phrase structure grammar rules. Our desired question for that sentence is thus something like:

1) The action "Mary saw a man" takes place "with a telescope"
2) "a man" is "with a telescope"
NUMBER?

The technique to implement this, which is described in the following sections, is called Explanation List Comparison.

2. Explanation List Comparison

The basic idea is to attach an Explanation Template to each rule. For example, each of the rules <a> - <g> would have an explanation template as follows:

Rule  Explanation Template
<a>   (1) is a subject of the action (2)
<b>   The action (1 2) takes place (3)
<c>   (1) is a noun
<d>   (1) is a determiner of (2)
<e>   (1) is (2)
<f>   (1) is a preposition of (2)
<g>   (2) is an object of the verb (1)

† This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-1539. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.
² Personal communication.
Whenever a rule is employed to parse a sentence, an explanation is generated from its explanation template. Numbers in an explanation template indicate the n-th constituent of the right hand side of the rule. For instance, when the rule <f> PP --> *prep + NP matches "with a telescope" (*prep = "WITH"; NP = "a telescope"), the explanation "(with) is a preposition of (a telescope)" is generated. Whenever the system builds a parse tree, it also builds a list of explanations which are generated from the explanation templates of all rules employed. We refer to such a list as an explanation list. The explanation lists of the parse trees in the example above are:

Alternative I.
<b> The action (Mary saw a man) takes place (with a telescope)
<c> (Mary) is a noun
<g> (a man) is an object of the verb (saw)
<d> (A) is a determiner of (man)
<f> (with) is a preposition of (a telescope)
<d> (A) is a determiner of (telescope)

Alternative II.
<a> (Mary) is a subject of the action (saw a man with a telescope)
<c> (Mary) is a noun
<g> (a man with a telescope) is an object of the verb (saw)
<e> (a man) is (with a telescope)
<d> (A) is a determiner of (man)
<f> (with) is a preposition of (a telescope)
<d> (A) is a determiner of (telescope)

In order to disambiguate a sentence, the system examines only these explanation lists, not the parse trees themselves. This makes our method independent of the internal representation of a parse tree. Loosely speaking, when a system produces more than one parse tree, the explanation lists of the trees are "compared" and the "difference" is shown to the user. The user is, then, asked to select the correct alternative.
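The mechanics of template instantiation can be made concrete with a small sketch. The following Python fragment is our own reconstruction under simplifying assumptions (the actual system was written in Maclisp; here a parse tree is a nested tuple of rule names and constituents, leaves are plain strings, and explanations are strings):

import re

# Explanation templates for the rules <a> - <g> above (basic version,
# before the Section 3 refinements).
TEMPLATES = {
    "<a>": ["(1) is a subject of the action (2)"],
    "<b>": ["The action (1 2) takes place (3)"],
    "<c>": ["(1) is a noun"],
    "<d>": ["(1) is a determiner of (2)"],
    "<e>": ["(1) is (2)"],
    "<f>": ["(1) is a preposition of (2)"],
    "<g>": ["(2) is an object of the verb (1)"],
}

def words(node):
    # the fragment of the sentence a (sub)tree covers; leaves are strings
    return node if isinstance(node, str) else " ".join(words(c) for c in node[1])

def instantiate(template, children):
    # (1), (2), (1 2), ... refer to constituents of the rule's right-hand side
    def fill(match):
        idxs = [int(i) for i in match.group(1).split()]
        return "(" + " ".join(words(children[i - 1]) for i in idxs) + ")"
    return re.sub(r"\((\d+(?: \d+)*)\)", fill, template)

def explanations(tree):
    # collect the explanation list of a parse tree
    if isinstance(tree, str):
        return []
    rule, children = tree
    expls = [instantiate(t, children) for t in TEMPLATES[rule]]
    for child in children:
        expls.extend(explanations(child))
    return expls

def difference(list1, list2):
    # the naive "difference" shown to the user in the basic scheme
    return ([e for e in list1 if e not in list2],
            [e for e in list2 if e not in list1])

# Alternative I above: S -> NP + VP + PP (rule <b>)
alt1 = ("<b>", [("<c>", ["Mary"]),
                ("<g>", ["saw", ("<d>", ["a", "man"])]),
                ("<f>", ["with", ("<d>", ["a", "telescope"])])])
print(explanations(alt1))

Running this reproduces the explanation list of Alternative I; applying difference() to the two alternatives reproduces exactly the over-verbose contrast that motivates the refinements of the next section.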
The i~ead of "A GIRL with A RED BAG saw A GREEN TREE WITH a telescope" is, therefore, "A GIRL saw A TREE", because the head of "A GIRL with A RED BAG" (NP) is "A GIRL" and the head of "saw A GREEN "IREE" (VP) is "saw A TREE". in our example, the explanation (Mary) is a subject of the action (saw a man with a telescope) becomes (Mary) is a subject of the action (saw a man), and the explanation (a man with a telescope) is an object of the verb (saw) becomes (a man) is an object of the verb (saw), because the head of "saw a man with a telescope" is "saw a man", and the head of "a man with a telescope" is "a man". The difference of the two alternatives are now: t) The action (Mary saw a man) take place (with a telescope); 2) (Mary) is a subject of the action (saw a man), (a man) is (with a telescope); 3.2. Multiple explanations In the example system we have discussed above, each rule generates exactly one explanation.. In general, multiple explanations (including zero) can be generated by each rule. For example, rule <b) S --> NP + VP + PP should have two explanation templates: (1) ts a subject of Lhe acLton (2) The actton (1 2) takes place (3), whereas rule <a> S --> NP + VP should have only one explanation template: (1) "Is a subject of the actton (2). With the idea of head and multiple explanations, the system now produces the ideal question, as we shall see below. 3.3. Revised ELC To summarize, the system has a phrase structure grammar, and each rule is followed by a head definition followed by an arbitrary number of explanation templates. 477 Rule Ilead Explanation Iemplate <a> [1 2] (t) is a subject of the action (2) <b> [t 2] (1) is a subject of the action (2) The action (1 2) takes place (3) <c> [t] <<none>> <d> [t 2] (1) is a determiner of (2) <e> [1] (1) is (2) <f> It 2] (1) is a preposition of (2) <g> [t 2] (2) is an object of the verb (1) With the ideas of head and multiple explanation, the system builds the following two explanation lists from the sentence "Mary saw a man with a telescope". Alternative I. <b> (Mary) is a subject of the action (saw a man) <b> The action (Mary saw a man) takes place (with a telescope) <g> (a man) is an object of tile verb (saw) <d> (A) is a determiner of (man) <f> (with) is a preposition of (a telescope) <d> (A) is adeterminer of (telescope) Alternative II. <a> (Mary) is a subject of the action (saw a man) <g> (a man) is an object of the verb (saw) <e> (a man) is (with a telescope) <d> (A) is a determiner of (man) <f> (with is a preposition of (a telescope) <d> (A) is adeterminer of (telescope) The difference between these two is The action (Mary saw a man) takes place (with a telescope) and (a man) is (with a telescope). Thus, the system can ask the ideal question: 1) The action (Mary saw a man) takes place (with a telescope) 2) (a man) is (with a telescope) Number?. 4. More Complex Example The example in the preceding sections is somewhat oversimplified, in the sense that there are only two alternatives and only two explanation lists are compared. If there were three or more alternatives, comparing explanation lists would be not as easy as comparing just two. Consider the following example sentence: Mary saw a man in the park with a telescope. This s~ntence is ambiguous in 5 ways, and its 5 explanation lists are shown below. Alternative I. (a man) is (in the park) (the Gark) is (with a telescope) Alternative II. (a man) is (with a telescope) (a man) is (in the park) : : Alternative III. 
4. More Complex Example

The example in the preceding sections is somewhat oversimplified, in the sense that there are only two alternatives and only two explanation lists are compared. If there were three or more alternatives, comparing explanation lists would be not as easy as comparing just two. Consider the following example sentence:

Mary saw a man in the park with a telescope.

This sentence is ambiguous in 5 ways, and its 5 explanation lists are shown below.

Alternative I.
(a man) is (in the park)
(the park) is (with a telescope)
: :

Alternative II.
(a man) is (with a telescope)
(a man) is (in the park)
: :

Alternative III.
The action (Mary saw a man) takes place (with a telescope)
(a man) is (in the park)
: :

Alternative IV.
The action (Mary saw a man) takes place (in the park)
(the park) is (with a telescope)
: :

Alternative V.
The action (Mary saw a man) takes place (with a telescope)
The action (Mary saw a man) takes place (in the park)
: :

With these 5 explanation lists, the system asks the user a question twice, as follows:

1) (a man) is (in the park)
2) The action (Mary saw a man) takes place (in the park)
NUMBER> 1

1) (the park) is (with a telescope)
2) (a man) is (with a telescope)
3) The action (Mary saw a man) takes place (with a telescope)
NUMBER> 3

The implementation of this is described in the following. We refer to the set of explanation lists to be compared, {L1, L2, ...}, as A. If the number of explanation lists in A is one, just return the parsed tree which is associated with that explanation list. If there is more than one explanation list in A, the system makes a Qlist (Question list). The Qlist is a list of explanations

Qlist = {e1, e2, ..., en}

which is shown to the user to ask a question as follows:

1) e1
2) e2
 :
n) en
NUMBER?

Qlist must satisfy the following two conditions to make sure that always exactly one explanation is true:

• Each explanation list L in A must contain at least one explanation e which is also in Qlist. Mathematically, the following predicate must be satisfied:

∀L∃e(e ∈ L ∧ e ∈ Qlist)

This condition makes sure that at least one of the explanations in a Qlist is true.

• No explanation list L in A contains more than one explanation in a Qlist. That is,

¬∃L∃e∃e'(L ∈ A ∧ e ∈ L ∧ e' ∈ L ∧ e ∈ Qlist ∧ e' ∈ Qlist ∧ e ≠ e')

This condition makes sure that at most one of the explanations in a Qlist is true.
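These two conditions, and a greedy construction in the spirit of the Appendix A algorithm given below, can be sketched as follows. This is a Python illustration under our own representational assumptions (explanation lists as sets of strings; the original system is in Maclisp), and, as in Appendix A, an unlucky choice in the selection step can end in ERROR even when some valid Qlist exists:

def is_valid_qlist(A, qlist):
    # both conditions at once: every explanation list in A must contain
    # exactly one explanation that is also in the Qlist
    return all(len(L & qlist) == 1 for L in A)

def make_qlist(A):
    # greedy construction in the style of Appendix A: repeatedly pick an
    # explanation found in some uncovered list but in no covered one
    qlist, covered, uncovered = set(), [], list(A)
    while uncovered:
        candidates = {e for L in uncovered for e in L
                      if not any(e in C for C in covered)}
        if not candidates:
            raise ValueError("no Qlist found")   # Appendix A's ERROR exit
        e = sorted(candidates)[0]                # the selection step is left open there
        qlist.add(e)
        covered += [L for L in uncovered if e in L]
        uncovered = [L for L in uncovered if e not in L]
    return qlist

# The five alternatives of this section, abbreviated to the two
# explanations each that the paper spells out:
A = [
    {"(a man) is (in the park)", "(the park) is (with a telescope)"},          # I
    {"(a man) is (with a telescope)", "(a man) is (in the park)"},             # II
    {"The action (Mary saw a man) takes place (with a telescope)",
     "(a man) is (in the park)"},                                              # III
    {"The action (Mary saw a man) takes place (in the park)",
     "(the park) is (with a telescope)"},                                      # IV
    {"The action (Mary saw a man) takes place (with a telescope)",
     "The action (Mary saw a man) takes place (in the park)"},                 # V
]
print(sorted(make_qlist(A)))       # reproduces the first question above
first_question = {"(a man) is (in the park)",
                  "The action (Mary saw a man) takes place (in the park)"}
print(is_valid_qlist(A, first_question))   # True

On these (abbreviated) lists the greedy construction recovers exactly the two-way first question shown above; answering it prunes A, and the procedure is reapplied to yield the three-way second question.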
Recently, we have integrated our system into an English- Japanese Machine Translation system [3], as a first step toward user-friendly interactive machine translation [6]. The interactive English Japanese machine translation system has been implemented at Kyoto University in Japan [4, 5]. Acknowledgements I would like to thank Jaime Carbonell, Herb Simon, Martin Kay, Jun-ich Tsujii, Toyoaki Nishida, Shuji Doshita and Makoto Nagao for thoughtful comments on an earlier version of this paper. Appendix A: Qlist-Construction Algorithm input A : set of explanation lists output Qlist : set of explanations local e : explanation L : explanation list (set of explanations) U, C : set of explanation lists 1:C~ 2: U~A 3: Qlist ~ 4: ifU = ~then return Qlist 5: select one explanation e such that e is in some explanation list E U, but not in any explanation list E C; if no such e exists, return ERROR 6: Qlist ~ Qlist + {e} 7: C=C + {LIeELALEU } 8: U= {L leEL ALE (U)} 9: goto 4 • The input to this procedure is a set of explanation lists, {L1, L 2 .... }. The output of this procedure is a list of explanations, {e I, e 2 ..... en}, such that each explanation list, li, contains exactly one explanation which is in the Qlist. • An explanation list L is called covered, if some explanation e in L is also in Qlist. L is called uncovered, if any of the explanations in L is not in Olist. C is a set of covered explanation lists in A, and U is a set of uncovered explanation lists in A. • 1-3: initialization, let Olisl be empty. All explanation lists in A are uncovered. • 4: if all explanation lists are covered, quit. • 5-6: select an explanation e and put it into Qlist to cover some of uncovered not explanation lists, e must be such that it does 6xist in any of covered explanation lists (if it does exist, the explanation list has two explanation in A, violating the Qlist condition). • 7-8: make uncovered explanation lists which are now covered by e to be covered. • 9: repeat the process until everything is covered. 479 References [1] Kay, M. The MIND System. Algorithmic Press, New York, 1973,. [2] Nishida, T. and Doshita, S. An Application of Montague Grammar to English-Japanese Machine Translation. Proceedings of conference on Applied Natural Language Processing :156-165, 1983. [3] Tomita, M., Nishida, T. and Doshita, S. An Interactive English.Japanese Machine Translation System. Forthcoming (in Japanese), 1984. [4] Tomita, M., Nishida, T. and Doshita, S. User Front-End for disambiguation in Interactive Machine Translation System. In Tech. Reports of WGNLP. Information Processing ~ociety of Japan, (in Japanese, forthcoming), 1984. [5] Tomita, M. The Design Philosophy of Personal Machine Translation System. Technical Report, Computer Science Department, Carnegie-Mellon University, 1983. Appendix B: Sample Runs (transline '(time flies like an arrow in Japan)} (---END OF PARSE-- I0 ALTERNATIVES) (The word TIME (1) is:) (Z : VERB) (Z : NOUN) NUMBER> (The word FLIES (2) is:) (1 : VERB) (Z : NOUN) NUMBER> ! 
(1 : (AN ARROW) IS (IN JAPAN))
(2 : THE ACTION (TIME FLIES) TAKES PLACE (IN JAPAN))
NUMBER> 2

(S (NP (TIME *NOUN))
   (FLIES *VERB)
   (PP (LIKE *PREPOSITION) (NP (AN *DETERMINER) (ARROW *NOUN)))
   (PP (IN *PREPOSITION) (JAPAN *NOUN)))

(transline '(Mary saw a man in the apartment with a telescope))
(---END OF PARSE-- 5 ALTERNATIVES)
(1 : (A MAN) IS (IN THE APARTMENT))
(2 : THE ACTION (MARY SAW A MAN) TAKES PLACE (IN THE APARTMENT))
NUMBER> 1
(1 : (A MAN) IS (WITH A TELESCOPE))
(2 : (THE APARTMENT) IS (WITH A TELESCOPE))
(3 : THE ACTION (MARY SAW A MAN) TAKES PLACE (WITH A TELESCOPE))
NUMBER> 3

(S (NP (MARY *NOUN))
   (VP (SAW *VERB)
       (NP (NP (A *DETERMINER) (MAN *NOUN))
           (PP (IN *PREPOSITION) (NP (THE *DETERMINER) (APARTMENT *NOUN)))))
   (PP (WITH *PREPOSITION) (NP (A *DETERMINER) (TELESCOPE *NOUN))))

480 | 1984 | 102 |
AMBIGUITY RESOLUTION IN THE HUMAN SYNTACTIC PARSER: AN EXPERIMENTAL STUDY

Howard S. Kurtzman
Department of Psychology
Massachusetts Institute of Technology
Cambridge, MA 02139

(This paper presents in summary form some major points of Chapter 3 of Kurtzman, 1984.)

Models of the human syntactic parsing mechanism can be classified according to the ways in which they operate upon ambiguous input. Each mode of operation carries particular requirements concerning such basic computational characteristics of the parser as its storage capacities and the scheduling of its processes, and so specifying which mode is actually embodied in human parsing is a useful approach to determining the functional organization of the human parser. In Section 1, a preliminary taxonomy of parsing models is presented, based upon a consideration of modes of handling ambiguities; and then, in Section 2, psycholinguistic evidence is presented which indicates what type of model best describes the human parser.

1. Parsing Models

Parsing models can be initially classified according to two basic binary features. One feature is whether the model immediately analyzes an ambiguity, i.e., determines structure for the ambiguous portion of the string as soon as that portion begins, or delays the analysis, i.e., determines structure only after further material of the string is received. The other feature is whether the model constructs just a single analysis of the ambiguity at one time, or instead constructs multiple analyses in parallel. The following account develops and complicates this initial classification scheme. Not every type of model described here has actually been proposed in the literature. The purpose here is to outline the space of possibilities so that a freer exploration and clearer evaluation of types can be made.

An Immediate Single Analysis (ISA) model is characterized by two properties: (1) an ambiguity is resolved as soon as it arises, i.e., on its first word (or morpheme); (2) the analysis that serves as the resolution of the ambiguity is adopted without consideration of any of the other possible analyses. Typically, such models lack the capability to store input material in a form which is not completely analyzed. Pure top-down, depth-first models such as classical ATNs (Woods, 1970) are examples of ISA models. For certain sentences, Frazier & Fodor's (1978) Sausage Machine also behaves like an ISA model. In explaining their Local Association principle, they claim that in the first stage of parsing, structure can be built for only a small number of words at a time. As a result, in a sentence like "Rose read the note, the memo and the letter to Mary," the PP "to Mary" is immediately attached into a complex NP with "the letter" without any consideration of the other possible attachment directly into the VP, the head of which ("read") is many words back.

A Delayed Single Analysis (DSA) model is also characterized by two properties: (1) when an ambiguity is reached, no analysis is attempted until a certain amount of further input is received; and (2) when an analysis is attempted, then the analysis that serves as the resolution of the ambiguity is adopted without consideration of any other possible analyses (if any others are still possible -- i.e., if the string is still ambiguous). A bottom-up parser is an example of a DSA model. Another example is Marcus's (1980) Parsifal. These models must have some sort of storage buffer for holding unanalyzed material.
It is possible for Single Analysis models to combine Immediate and Delayed determination of structure. Ford, Bresnan, & Kaplan's (1982) version of a GSP does so in a limited way. Their Final Arguments principle permits a delay in the determination of the attachment of particular constituents into the overall structure of the sentence that has been determined at certain points. (The GSP's Chart is what stores the unattached constituents.) However, it must be noted that during the period in which that determination is delayed, other attachment possibilities of the constituent into higher-level structures (which are themselves not yet attached into the overall sentence structure) are considered. Therefore, it is not the case in their model that there is a true delay in attempting any analysis. The fundamentally Immediate nature of the GSP requires that some attachment possibility always be tested immediately. More authentic combinations of D- and ISA could be constructed by modifying bottom-up parsers or Parsifal, which are both inherently Delaying, so that under certain conditions auxiliary procedures are called which implement Immediate Analysis. (There is, though, no real motivation at present for such modifications.) It can be noted that while bottom-up mechanisms are logically capable of only Delayed Analysis, top-down mechanisms are capable of either Immediate or Delayed Analysis.

Another type of model utilizes Delayed Parallel Analysis (DPA). In this type, parallel analysis of an ambiguity is commenced only after some delay beyond the beginning of the ambiguous portion of the string. Such a model requires a buffer to hold input material during the delay before it is analyzed. Also, any model that allows parallelism requires that the parser's representational/storage medium be capable of supporting and distinguishing between multiple analyses of the same input material, and that the parser contain procedures that eventually oversee a decision as to which analysis is to be adopted as resolution of the ambiguity. An example of a DPA parser would be a generally bottom-up parser which was adjusted so that at certain points, perhaps at the ends of sentences or clauses, more than one analysis could be constructed. Another example would be a (serious) modification of Parsifal such that when the patterns of more than one production rule are matched, all of those rules could be activated.

There are actually two sorts of parallelism. One can be called momentary parallelism, in which a choice is made among the possible analyses according to some decision procedure immediately -- before the next word is received. The other sort can be called strong parallelism, in which the possible analyses can stay active and be expanded as new input is received. If further input is inconsistent with any of the analyses, then that analysis is dropped. There might also be a limitation on how long parallel analyses can be held, with some decision procedure choosing from the remaining possibilities once the limiting point is reached. (It would seem that some limitation would be required in order to account for garden-pathing.) In addition, in strong parallelism, although multiple analyses are all available, they might still be ranked in a preference order.

A further type of model is characterized by Immediate Parallel Analysis (IPA), in which all of the possible analyses of an ambiguity are built as soon as the ambiguous portion of the string begins. Frazier & Fodor's (1978) parser is partially describable as an IPA model with momentary parallelism. In explaining their Minimal Attachment principle, they propose that an attempt is made to build in parallel all the possible available structures, on the first word of an ambiguity. The particular structure that contains the fewest connecting nodes is the one that is then right away adopted. Fodor, Bever, & Garrett (1974) proposed an IPA with strong parallelism. As soon as an ambiguity arises, the possible analyses are determined in parallel and can stay active until a clause boundary is reached, at which point a decision among them must be made.

There is another design characteristic that a parser might have which has not been considered so far. Instead of the parser, after making a single or parallel analysis of an ambiguity, maintaining the analysis/es as further input is received, one can imagine it just dropping whatever analysis it had determined. This can be called abandonment. Then analysis would be resumed at some later point, determined by some scheduling principles. Perhaps the most natural form of a parser which utilizes abandonment would be an IPA model. The construction of more than one analysis for an ambiguity would trigger the parser to throw out the analyses and wait until a later point to attempt analysis anew. Thus, the parser is not forced to make an early decision which might turn out to be incorrect, as in momentary parallelism, nor is it forced to carry the load of multiple analyses, as in strong parallelism. At an implementation level, this abandonment might be realized as mutual inhibition by the several analyses. Abandonment is also possible in an ISA model. Take, for instance, a generally bottom-up model in which constituents can be held free, not yet attached into the overall sentence structure. A constraint could be placed on such a model which forbade such free constituents, forcing the analyses of the constituents to be abandoned if they cannot immediately be fit into the overall sentence structure. (Such a constraint might be implemented as a limit on storage space for free constituents.) Then, at some later point, a new analysis of the constituents and their attachments would be made. Abandonment is also possible, though less intuitively satisfying, in delayed models. In these models, there would be a delay in beginning analysis, and then another delay as a result of abandonment. When analysis is begun again following abandonment, it can proceed according to any of the above models, though of course some would seem to be more natural than others.
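The distinction between the two sorts of parallelism can be illustrated schematically. The sketch below is our own construction, not a proposal from the paper or the psycholinguistic literature; consistent() and rank() stand in for whatever grammatical check and decision procedure a particular model assumes:

# A schematic Python illustration (our own construction) of momentary
# versus strong parallelism over an ambiguous prefix.

def strong_parallel_parse(sentence, analyses, consistent):
    # keep all analyses alive, dropping any made inconsistent by new input
    alive = list(analyses)
    for i in range(len(sentence)):
        alive = [a for a in alive if consistent(a, sentence[: i + 1])]
    return alive                         # possibly several, perhaps rank-ordered

def momentary_parallel_parse(sentence, analyses, consistent, rank):
    # build all analyses at the ambiguity, but commit to one immediately
    alive = list(analyses)
    for i in range(len(sentence)):
        alive = [a for a in alive if consistent(a, sentence[: i + 1])]
        if len(alive) > 1:
            alive = [min(alive, key=rank)]   # immediate decision: may garden-path
    return alive

# Toy case modelled on the garden-path materials of Section 2 below:
# "examined" is either a main-clause verb or a reduced-relative participle.
analyses = ["main-verb", "reduced-relative"]
consistent = lambda a, seen: not (a == "main-verb" and seen[-1] == "was")
rank = lambda a: 0 if a == "main-verb" else 1
sentence = "the intelligent alien examined with a magnifier was".split()
print(strong_parallel_parse(sentence, analyses, consistent))           # ['reduced-relative']
print(momentary_parallel_parse(sentence, analyses, consistent, rank))  # []  (garden path)

The contrast the sketch makes visible is exactly the one at issue below: the momentary parser commits early and is left with nothing when the disambiguating word arrives, while the strong-parallel parser survives with the correct analysis.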
2. Experiment

Previous psycholinguistic experiments have often used quite indirect methods for tapping parsing processes (e.g., Frazier & Rayner's (1982) measurements of eye-movements during reading and Chodorow's (1979) measurements of subjects' recall of time-compressed speech) and have yielded conflicting results. The present investigation set out to gather data concerning the determinants and scheduling of ambiguity resolution, through use of an on-line task that provides readily interpretable results.

Subjects sat in front of a CRT screen and on each trial were presented with a series of words comprising a sentence, one word at a time, each word in the center of the screen. Each word remained on the screen for 240 msec and was followed by a 60 msec blank screen. Presentation of the words stopped at some point, either within or at the end of the sentence, and a beep was heard. The subjects' task was to respond, by pressing one of two response keys, whether or not the sentence had been completely grammatical up to that point. For experimental items, presentation always stopped before the end of the sentence, and the sentence was always grammatical. These experimental sentences contained ambiguities which were shown to be correctly resolved in only one way by the last word that was presented.
The subjects' task was to respond, by pressing one of two response keys, whether or not the sentence had been completely grammatical up to that point. For experimental items, presentation always stopped before the end of the sentence, and the sentence was always grammatical. These experimen- tal sentences contained ambiguities which were shown to be correctly resolved in only one way by the last word that was presented. There were 482 two versions of each experimental item, which dif- ferred only in the last presented word. And these last words of the versions resolved the ambiguity in different ways. An example is shown in (1) (along with possible completions of the sentences in paren- theses). (1) The intelligent scientist examined with a magnifier [a] our (leaves.) [b] was (crazy.) Any individual subject was presented with only one version of an item. If subjects had chosen a par- ticular resolution for the ambiguity before the last word was presented, it was expected that they would make more errors and/or show longer correct response times (RTs) for the version which did not match the resolution that they had chosen than for the version which did match. (Experimental items were embedded among a large number of filler items whose presenta- tion stopped at a wide variety of points. Many of these fillers also contained ungrammaticalities, of various sorts and in various locations in the sen- tence.) A wide variety of ambiguities were tested, in- cluding those investigated in previous studies. Only a few highlights of the results are presented here, in order simply to illustrate the major findings. For items like (Ib), subjects made a large num- ber of errors--about 75%. This indicates that they were garden-pathed--just as in one's experience in normal reading of such sentences. By contrast, for items like (la), very few errors were made. Further, the RTs for the correct responses to (la) were sig- nificantly lower than those to (Ib). For (la), RTs feIY in the 450-650 msec range, while for (Ib) the RTs were lO0 to 400 msec higher. Evidently, subjects had resolved the ambiguity in (1) before receiving the last word, and they chose the resolution fitting (la), in which "examined" is a main-clause past- tense verb, rather than the resolution fitting (Ib), in which it is a past participle of a reduced rela- tive clause. However, quite different results were obtained for items like (2), which differs from (1) only by the replacement of "scientist" by "alien". (2) The intelligent alien examined with a magnifier [a] our (leaves.) [b] was (crazy.) There was no difference between (2a) and (2b) in either error rate or RT--both measures fell into the same low range as those for (la). That is, subjects were not garden-pathed on either sentence. They kept open both possibilities for analysis throughout presentation of the sentence. Several conclusions can be drawn from comparing results of items like (1) and those like (2). First, it is possible to delay the resolution of an analy- sis. Two classes of parsing models can thus be ruled out as descriptions of the overall operations of the human system: ISA and IPA-with-momentary-parallelism. Second, the duration of this delay is variable, and therefore any model in which the point of resolution for a particular syntactic structure is invariant is ruled out. Marcus's Parsifal is an example of such a disconfirmed model. By the way, this does not mean that there must alw__~be some delay in resolu- tion. 
In fact, for items like (1) it does appear that the resolution is made immediately upon reception of "examined". This is indicated by subjects' performance for (3) and (4) matching their performance for (1) and (2), respectively.

(3) The intelligent scientist examined
    [a] our (leaves.)
    [b] was (crazy.)

(4) The intelligent alien examined
    [a] our (leaves.)
    [b] was (crazy.)

It seems then that the delay can vary from zero to evidently a quite substantial number of words (or constituents).

Third, the duration of the delay is apparently due to conceptual, or real-world knowledge, factors. With regard to (1) and (2), one component of our real-world knowledge is that scientists are likely to examine something with a magnifier but unlikely to be examined, while for aliens the likelihoods of examining and of being examined with a magnifier are more alike. Thus, it seems that the point at which a resolution is made is the point at which one of the possible meanings of the sentence can be confidently judged to be the more plausible one. So, parsing decisions would be under significant influence of conceptual mechanisms. This fits with work in Kurtzman (1984; Chapter 2), in which a substantial amount of evidence is offered for the strong claim that parsing strategies in the form of preferences for particular structures (e.g., Frazier & Fodor, 1978; Ford et al., 1982; Crain & Steedman, in press) do not exist. It is argued rather that all cases of preference for one resolution of an ambiguity over another can be accounted for by a model in which conceptual mechanisms judge which possible resolution of the ambiguity results in the sentence expressing a meaning which better satisfies expectations for particular conceptual information or for general plausibility. Such a model requires that parallel analyses be presented to the conceptual mechanisms so that it may be judged which analysis better meets the expectations. Therefore, an acceptable parsing model must have some parallel analysis at the time a resolution is made (which is consistent with some previous psycholinguistic evidence: Lackner & Garrett, 1973). This requirement of parallelism then leaves us with the following models as candidates for describing the human parser: DPA with either kind of parallelism, IPA-with-strong-parallelism, or Abandonment-with-parallel-reanalysis. (Abandonment might work in (2) by abandoning analysis upon the attempt at analysis of "examined" and then commencing re-analysis either (a) at a point determined by some internal schedule, or (b) upon a signal from conceptual mechanisms that the conceptual content of the syntactically unanalyzed words was great enough to support a confident resolution decision.)

In contrast to the other remaining models, IPA-with-strong-parallelism posits that input material is at all times analyzed. A look at results for other stimuli suggests that this might be the case. In a task similar to the present one, Crain & Steedman (in press) have shown that for items such as (5), comprised of more than one sentence, the first sentences (5a or 5b) can bias the perceiver towards one or the other resolution in the last sentence (5c or 5d), which contains an ambiguous "that"-clause (complement vs. relative).

(5a) RELATIVE-BIASING CONTEXT
A psychologist was counseling two married couples. One of the couples was fighting with him but the other one was nice to him.

(5b) COMPLEMENT-BIASING CONTEXT
A psychologist was counseling a married couple.
One member of the pair was fighting with him but the other one was nice to him.

(5c) RELATIVE SENTENCE
The psychologist told the wife that he was having trouble with to leave her husband.

(5d) COMPLEMENT SENTENCE
The psychologist told the wife that he was having trouble with her husband.

So, for example, (5c) preceded by (5a) is processed smoothly, while (5c) preceded by (5b) results in garden-pathing at the point of disambiguation (the word "to"). In the present experiment, sentences in which the "that"-clause was disambiguated immediately following the beginning of the clause (5e or 5f) were presented following the contexts of (5a) or (5b).

(5e) RELATIVE SENTENCE
The psychologist told the wife that was (yelling to shut up.)

(5f) COMPLEMENT SENTENCE
The psychologist told the wife that to (yell was not constructive.)

It turned out that context had no effect on performance for this type of item. Rather, subjects performed somewhat more poorly when the "that"-clause was disambiguated as a relative (5e), showing about 20% errors and sometimes elevated RTs, as compared with the complement disambiguation in (5f), which showed low RTs and practically no errors. The effect did not differ in strength between the two contexts. These results, along with those of Crain & Steedman, show that initially the complement resolution is preferred but that later this preference can be overturned in favor of the relative resolution if that is what best fits the context. Now, there is no reason to believe that subjects are actually garden-pathed when they end up adopting the relative resolution. Note that there is no conscious experience of garden-pathing, and that the error and RT effects here are much weaker than for classical garden-pathing items like (1). It seems more likely that both possible analyses of "that" have been determined but that one--as a complementizer--has been initially ranked higher and so is initially more accessible. In this speeded task, it would be expected that the less accessible relative pronoun analysis of "that" would sometimes be missed--resulting in incorrect responses for (5e)--or take longer to achieve. Now, if "that" had simply not been analyzed at all by the time of the presentation of the last word, as in a DPA or Abandonment model, there would be little reason to expect that one analysis of it should cause more errors than the other.

So, we may tentatively conclude that IPA-with-strong-parallelism describes the human parser's operations for at least certain types of structures. Similar results with other sorts of structures are consistent with this claim. This does not rule out the possibility, however, that the human parser is a hybrid, utilizing delay or abandonment in some other circumstances.

Why is the complementizer analysis immediately preferred for "that"? In these items all of the main verbs of the ambiguous sentences had meanings which involved some notion of communication of a message from one party to another (e.g., "told", "taught", "reminded"). In Kurtzman (1984) it is argued that such verbs generate strong expectations for conceptual information about the nature of the message that is communicated. The complement resolution of the "that"-clause permits the clause to directly express this expected information, and so it would be preferred over the relative resolution, which generally would not result in expression of the information.
It is also possible that such a conceptually-based preference gets encoded as a higher ranking for the verbs' particular lexical representations which subcategorize for the complement (cf. Ford et al., 1982).

REFERENCES

Chodorow, M.S. Time-compressed speech and the study of lexical and syntactic processing. In W.E. Cooper & E.C.T. Walker (Eds.), Sentence processing. Hillsdale, NJ: Erlbaum, 1979.

Crain, S. & Steedman, M. On not being led up the garden path: The use of context by the psychological parser. In D. Dowty, L. Karttunen, & A. Zwicky (Eds.), Natural language processing. NY: Cambridge University Press, in press.

Fodor, J.A., Bever, T.G., & Garrett, M.F. The psychology of language. NY: McGraw-Hill, 1974.

Ford, M., Bresnan, J., & Kaplan, R.M. A competence-based theory of syntactic closure. In J. Bresnan (Ed.), The mental representation of grammatical relations. Cambridge, MA: MIT Press, 1982.

Frazier, L. & Fodor, J.D. The sausage machine: A new two-stage parsing model. Cognition, 1978, 6, 291-325.

Frazier, L. & Rayner, K. Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 1982, 14, 178-210.

Kurtzman, H.S. Studies in syntactic ambiguity resolution. Ph.D. Dissertation, MIT, 1984. (Available from author in autumn, 1984, at School of Social Sciences, Univ. of California, Irvine, CA 92664.)

Lackner, J.R. & Garrett, M.F. Resolving ambiguity: Effects of biasing context in the unattended ear. Cognition, 1973, 1, 359-372.

Marcus, M. A theory of syntactic recognition for natural language. Cambridge, MA: MIT Press, 1980.

Woods, W.A. Transition network grammars for natural language analysis. Communications of the ACM, 1970, 13, 591-602.
Conceptual Analysis of Garden-Path Sentences

Michael J. Pazzani
The MITRE Corporation
Bedford, MA 01730

Current Address:
The Aerospace Corporation
P.O. Box 92957
Los Angeles, CA 90009

ABSTRACT

By integrating syntactic and semantic processing, our parser (LAZY) is able to deterministically parse sentences which syntactically appear to be garden path sentences although native speakers do not need conscious reanalysis to understand them. LAZY comprises an extension to conceptual analysis which yields an explicit representation of syntactic information and a flexible interaction between semantic and syntactic knowledge.

1. INTRODUCTION

The phenomenon we wish to model is the understanding of garden path sentences (GPs) by native speakers of English. Parsers designed by Marcus [81] and Shieber [83] duplicate a reader's first reaction to a GP such as (1) by rejecting it as ungrammatical, even though the sentence is, in some sense, grammatical.

(1) The horse raced past the barn fell.

Thinking first that "raced" is the main verb, most readers become confused when they see the word "fell". Our parser, responding like the average reader, initially makes this mistake, but later determines that "fell" is intended to be the main verb, and "raced" is a passive participle modifying "horse". We are particularly interested in a class of sentences which Shieber's and Marcus' parsers will consider to be GPs and reject as ungrammatical although many people do not. For example, most people can easily understand (2) and (3) without conscious reanalysis.

(2) Three percent of the courses filled with freshmen were cancelled.

(3) The chicken cooked with broccoli is delicious.

The syntactic structure of (2) is similar to that of sentence (1). However, most readers do not initially mistake "filled" to be the main verb. LAZY goes a step further than previous parsers by modeling the average reader's ability to deterministically recognize sentences (2) and (3). If "filled" were the main verb, then its subject would be the noun phrase "three percent of the courses" and the selectional restrictions [KATZ 63] associated with "to fill" would be violated. LAZY prefers not to violate selectional restrictions. Therefore, when processing (2), LAZY will delay deciding the relationship among "filled" and "three percent of the courses" until the word "were" is seen and it is clear that "filled" is a passive participle. We call sentences like (2) semantically disambiguatable garden path sentences (SDGPs). Crain and Coker [79] have reported experimental evidence which demonstrates that not all potential garden path sentences are actual garden paths.

LAZY uses a language recognition scheme capable of waiting long enough to select the correct parse of both (1) and (2) without guessing and backing up [MARCUS 76]. However, when conceptual links are strong enough, LAZY is careless and will assume one syntactic (and therefore semantic) representation before waiting long enough to consider alternatives. We claim that we can model the performance of native English speakers understanding SDGPs and misunderstanding GPs by using this type of strategy. For example, when processing (1), LAZY assumes that "the horse" is the subject of the main verb "raced" as soon as the word "raced" is seen, because the selectional restrictions associated with "raced" are satisfied. One implication of LAZY's parsing strategy is that people could understand some true GPs if they were more careful and waited longer to select among alternative parses.
Experimental evidence [Matthews 79] suggests that people can recognize garden path sentences as grammatical if properly prepared. Matthews found that subjects recognized sentences such as (2) as being grammatical, and after doing so, when later presented with a sentence like (1), will also judge it to be grammatical. (In a more informal experiment, we have found that colleagues who read papers on GPs understand new GPs easily by the end of a paper.) LAZY exhibits this behavior by being more careful after encountering SDGPs or when reanalyzing garden path sentences.

II. SYNTAX IN A CONCEPTUAL ANALYZER

The goal of conceptual analysis is to map natural language text into memory structures that represent the meaning of the text. It is claimed that this mapping can be accomplished without a prior syntactic analysis, relying instead on a variety of knowledge sources including expectations from both word definitions and inferential memory (see [Riesbeck 76], [Schank 80], [Gershman 82], [Birnbaum 81], [Pazzani 83] and [Dyer 83]). Given this model of processing, how is it possible in sentence (4) to tell who kicked whom?

(4) Mary kicked John.

There is a very simple answer: syntax. Sentence (4) is a simple active sentence whose verb is "to kick". "Mary" is the subject of the sentence and "John" is the direct object. There may be a more complicated answer, if, for example, John and Mary are married, Mary is ill-tempered, John is passive, and Mary has just found out that John has been unfaithful. In this case, it is possible to expect that Mary might hit John, and confirm this prediction by noticing that the words in (4) refer to Mary, John, and hitting. In fact, if this prediction was formulated and the sentence were "John kicked Mary" we might take it to mean "Mary kicked John" and usually notice that the speaker had made a mistake. Although we feel that this type of processing is an important part of understanding, it cannot account for all language comprehension. Certainly, (4) can be understood in contexts which do not predict that Mary might hit John, requiring syntactic knowledge to determine who kicked whom.

IIa. Precedes and Follows

Syntactic information is represented in a conceptual analyzer in a number of ways, the simplest of which is the notion of one word preceding or following another. Such information is encoded as a positional predicate in the test of a type of production which Riesbeck calls a request. The test also contains a semantic predicate (i.e., the selectional restrictions). A set of requests make up the definition of a word. For example, the definition of "kick" has three requests:

REQ1: Test: true
      Action: Add the meaning structure for "kick" to an ordered list of concepts typically called the C-list.

REQ2: Test: Is there a concept preceding the concept for "kick" which is animate?
      Action: ...

REQ3: Test: Is there a concept following the concept for "kick" which is a physical object?
      Action: ...

The action of a request typically builds or connects concepts. Although people who build conceptual analyzers have reasons for not building a representation of the syntax of a sentence, there is no reason that they cannot. LAZY builds syntactic representations.

IIb. Requests in LAZY

LAZY, unlike other conceptual analyzers, separates the syntactic (or positional) information from the selectional restrictions by dividing the test part of a request into a number of facets. There are three reasons for doing this.
First, it allows for a distinction between different kinds of knowledge. Secondly, it is possible to selectively ignore some facets. Finally, it permits a request to access the information encoded in other requests.

In many conceptual analyzers, some syntactic information is hidden in the control structure. At certain times during the parse, not all of the requests are considered. For example, in (5) it is necessary to delay considering a request.

(5) Who is Mary recruiting?

To avoid understanding the first three words of sentence (5) as a complete sentence, "Who is Mary?", some request from "is" must be delayed until the word "recruiting" is processed. In LAZY, the time that a request can be considered is explicitly represented as a facet of the request. Additionally, separate tests exist for the selectional restriction, the expected part of speech, and the expected sentential position. In LAZY, REQ2 of "kick" would be:

REQ2a: Position: Subject of "kick"
       Restriction: Animate
       Action: Make the concept found the syntactic subject of "kick"
       Part-Of-Speech: (noun pronoun)
       Time: Clause-Type-Known?

In REQ2a, Subject is a function which examines the state of the C-list and returns the proper constituent as a function of the clause type. In an active declarative sentence, the subject precedes the verb; in a passive sentence it may follow the word "by"; etc. (The usage of "subject" here departs from the usual sense of the word.) The Time facet of REQ2a states that the request should be considered only after the type of the clause is known. The predicates which are included in a request to control the time of consideration are: End-Of-Noun-Group?, Clause-Type-Known?, Head-Of, Immediate-Noun-Group?, and End-Of-Sentence?. These operate by examining the C-list in a manner similar to the positional predicates. The other facets of REQ2a state that the subject of "kick" must be animate, and should be a noun or a pronoun.
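To make the facet structure concrete, the following Python sketch shows one way REQ2a could be encoded. It is our own illustrative reconstruction, not LAZY's actual (Lisp-based) code, and every helper name in it is hypothetical.

    # Hypothetical rendering of a faceted request; LAZY itself is not
    # written in Python, and these predicate names are placeholders.
    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class Request:
        position: Callable        # finds a constituent on the C-list (e.g. Subject)
        restriction: Callable     # selectional restriction on that constituent
        part_of_speech: Sequence[str]
        time: Callable            # when it is safe to consider the request
        action: Callable          # structure building, run when the request fires

    def animate(concept):                  # toy selectional restriction
        return "animate" in concept.get("features", set())

    def clause_type_known(c_list):         # toy version of Clause-Type-Known?
        return c_list.get("clause_type") is not None

    def subject_of_kick(c_list):           # toy positional function
        return c_list.get("subject_candidate")

    def make_subject(c_list, found):       # toy action
        c_list["subject_of_kick"] = found

    REQ2a = Request(position=subject_of_kick, restriction=animate,
                    part_of_speech=("noun", "pronoun"),
                    time=clause_type_known, action=make_subject)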
III. GARDEN PATH SENTENCES

Several different types of local ambiguities cause GPs. Misunderstanding sentences (1), (2) and (3) is a result of confusing a participle with the main verb of a sentence. Although there are other types of GPs (e.g., imperative and yes/no questions with an initial "have"), we will only demonstrate how LAZY understands or misunderstands passive participle and main verb conflicts.

Passive participles and past main verbs are both indicated by an "ed" suffix on the verb form. Therefore, the definition of "ed" must discriminate between these two cases. A simpler definition for "ed" is possible if the morphology routine reconstructs sentences so that the suffix of a verb is a separate "word" which precedes the verb. The definition of "ed" is shown in Figure 3a. Throughout this discussion, we will use the name Root for the verb immediately following "ed" on the C-list.

If Root appears to be passive
Then mark Root as a passive participle.
Otherwise if Root does not appear to be passive
Then note the tense of Root.

Figure 3a. Definition of "ed".

It is safe to consider this request only at the end of the sentence or if a verb is seen following Root which could be the main verb. One test that is used to determine if Root could be passive is:

1. There is no known main verb seen preceding "ed", and
2. The word which would be the subject of Root if Root were active agrees with the selectional restrictions for the word which would precede Root if Root were passive (i.e., the selectional restrictions of the direct object if there is no indirect object), and
3. There is a verb which could be the main verb following Root.

Figure 3b.

One test performed to determine if Root does not appear to be passive is:

1. The verb is not marked as passive, and
2. The word which would be the subject of Root if Root were active agrees with the selectional restrictions for the subject.

Figure 3c.

Note that these tests rely on the fact that one request can examine the semantic or syntactic information encoded in another request. As we have presented requests so far, four separate tests must be true to fire a request (i.e., to execute the request's action): a word must be found in a particular position in the sentence, the word must have the proper part of speech, the word must meet the selectional restrictions, and the parse must be in a state in which it is safe to execute the positional predicate. We have relaxed the requirement that the selectional restrictions be met if all of the other tests are true. This avoids problems present in some previous conceptual analyzers which are unable to parse sentences such as "Do rocks talk?". Additionally, we have experimented with not requiring that the Time test succeed if all other tests have passed, unless we are reanalyzing a sentence that we have previously not been able to parse. We will demonstrate that this yields the performance that people exhibit when comprehending GPs.
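The Figure 3b and 3c tests can be pictured as simple boolean functions over a lexicon of selectional restrictions. The following runnable Python toy is our own drastic simplification; the feature vocabulary and lexicon entries are invented for the examples discussed below, and do not come from LAZY.

    # Toy version of the Figure 3b/3c decision for an "ed" verb.

    LEXICON = {
        "stuff": {"subject": "animate", "object": "container"},
        "sail":  {"subject": "mobile",  "object": "vehicle"},
    }
    FEATURES = {
        "plane": {"mobile", "vehicle", "container"},
        "boat":  {"mobile", "vehicle"},
    }

    def fits(noun, restriction):
        return restriction in FEATURES.get(noun, set())

    def could_be_passive(verb, subject_candidate, main_verb_seen, verb_follows):
        """Figure 3b: all three conditions must hold."""
        return (not main_verb_seen                                    # 1
                and fits(subject_candidate, LEXICON[verb]["object"])  # 2
                and verb_follows)                                     # 3

    def not_passive(verb, subject_candidate, marked_passive):
        """Figure 3c."""
        return (not marked_passive                                     # 1
                and fits(subject_candidate, LEXICON[verb]["subject"])) # 2

    # "plane" fails stuff's subject restriction but fits its object
    # restriction, so "stuffed" is read as a passive participle once a
    # later main verb is available:
    print(could_be_passive("stuff", "plane", False, True))   # True
    print(not_passive("stuff", "plane", False))              # False
    # "boat" fits sail's subject restriction, inviting the garden path:
    print(not_passive("sail", "boat", False))                # True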
LAZY processes a sentence one word at a time from left to right. When processing a word, its representation is added to the C-list and its requests are activated. Next, all active requests are considered. When a request is fired, a syntactic structure is built by connecting two or more constituents on the C-list. At the end of a parse the C-list should contain one constituent as the root of a tree describing the structure of the sentence.

Sentence (6) is a GP which people normally have trouble reading:

(6) The boat sailed across the river sank.

When parsing this sentence, LAZY reads the word "the" and adds it to the C-list. Next, the word "boat" is added to the C-list. A request from "the" looking for a noun to modify is considered and all tests pass. This request constructs a noun phrase with "the" modifying "boat". Next, "ed" is added to the C-list. All of its requests look for a verb following, so they cannot fire yet. The word "sail" is added to the C-list. The request of "ed" which sets the tense of the immediately following verb is considered. It checks the semantic features of "boat" and finds that they match the selectional restrictions required of the subject of "sail". The action of this request is executed, in spite of the fact that its Time facet reports that it is not safe to do so. Next, a request from "sail" finds that "boat" could serve as the subject, since it precedes the verb in what is erroneously assumed to be an active clause. The structure built by this request notes that "boat" is the subject of "sail". A request looking for the direct object of "sail" is then considered. It notices that the subject has been found and is not animate, therefore "sail" is not being used transitively. This request is deactivated. The word "across" is added to the C-list and "the river" is then parsed analogously to "the boat". Next, a request from "across" looking for the object of the preposition is considered and finds the noun phrase "the river". Another request is then activated and attaches this prepositional phrase to "sail". At this point in the parse, we have built a structure describing an active sentence, "The boat sailed across the river.", and the C-list contains one constituent. After adding the verb suffix and "sink" to the C-list we find that "sink" cannot find a subject and there are two constituents left on the C-list. This is an error condition and the sentence must be reanalyzed more carefully.

It is possible to recover from misreading some garden path sentences by reading more carefully. In LAZY, this corresponds to not letting a request fire until all the tests are true. Although other recovery schemes are possible, our current implementation starts over from the beginning. When reanalyzing (6), the request from "ed" which sets the tense of the main verb is not fired, because all facets of its test never become true. This request is deactivated when the word "sank" is read and another request from "ed" notes that "sailed" is a participle. At the end of the parse there is one constituent left on the C-list, similar to that which would be produced when processing "The boat which was sailed across the river sank".

It is possible to parse SDGPs without reanalysis. For example, most readers easily understand (7), which is simplified from [Birnbaum 81].

(7) The plane stuffed with marijuana crashed.

Sentence (7) is parsed analogously to (6) until the word "stuff" is encountered. A request from "ed" tries to determine the sentence type by testing if "plane" could be the subject of "stuff" and fails because "plane" does not meet the selectional restrictions of "stuff". This request also checks to see if "stuff" could be passive, but fails at this time (see condition 3 of Figure 3b). A request from "stuff" then finds that "plane" is in the default position to be the subject, but its action is not executed because two of the four tests have not passed: the selectional restrictions are violated and it is too early to consider the positional predicate because the sentence type is unknown. A request looking for the direct object of "stuff" does not succeed at this time because the default location of the direct object follows the verb. Next, the prepositional phrase "with marijuana" is parsed analogously to "across the river" in (6). After the suffix of "crash" (i.e., "ed") and "crash" are added to the C-list, the request from the "ed" of "stuff" is considered, and it finds that "stuff" could be a passive participle because "plane" can fulfill the selectional restrictions of the direct object of "stuff". A request from "stuff" then notes that "plane" is the direct object, and a request from the "ed" of "crash" marks the tense of "crash". Finally, "crash" finds "plane" as its subject. The only constituent on the C-list is a tree similar to that which would be produced by "The plane which was stuffed with marijuana crashed".
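The control regime illustrated by these two walkthroughs can be summarized in a few lines. The sketch below is our own schematic reconstruction in Python (LAZY is not written in Python); requests are assumed to be encoded as dictionaries of test functions, in the spirit of the Request structure sketched earlier.

    # Schematic version of LAZY's word-by-word loop. A request fires when
    # its positional and part-of-speech tests pass and either its Time
    # test passes or (in careless mode) its selectional restriction alone
    # is satisfied. Reanalysis after an error means re-running the same
    # sentence with careful=True.

    def parse(words, lexicon, careful=False):
        c_list, active = [], []
        for word in words:
            concept = {"word": word, "links": []}
            c_list.append(concept)                       # add word's concept
            active.extend(lexicon.get(word, []))         # activate its requests
            for req in list(active):                     # consider active requests
                found = req["position"](c_list)
                if found is None or not req["pos_ok"](found):
                    continue
                sem_ok = req["restriction"](found)
                time_ok = req["time"](c_list)
                if time_ok or (sem_ok and not careful):  # careless early firing
                    req["action"](c_list, found)         # build structure
                    active.remove(req)
        return c_list                                    # ideally one rooted tree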
There are some situations in which garden path sentences cannot be understood even with a careful reanalysis. For example, many people have problems understanding sentence (8).

(8) The canoe floated down the river sank.

To help some people understand this sentence, it is necessary to inform them that "float" can be a transitive verb by giving a simple example sentence such as "The man floated the canoe". Our parser would fail to reanalyze this sentence if it did not have a request associated with "float" which looks for a direct object.

We have been rather conservative in giving rules to determine when "ed" indicates a past participle instead of the past tense. In particular, condition 3 of Figure 3b may not be necessary. By removing it, as soon as "the plane stuffed" is processed we would assume that "stuffed" is a participle phrase. This would not change the parse of (7). However, there would be an impact when parsing (9).

(9) The chicken cooked with broccoli.

With condition 3 removed, this parses as a noun phrase. With it included, (9) would currently be recognized as a sentence. We have decided to include condition 3, because it delays the resolving of this ambiguity until both possibilities are clear. It is our belief that this ambiguity should be resolved by appealing to episodic and conceptual knowledge more powerful than selectional restrictions.

IV. PREVIOUS WORK

In PARSIFAL, Marcus' parser, the misunderstanding of GPs is caused by having grammar rules which can look ahead only three constituents. To deterministically parse a GP such as (1), it is necessary to have a look-ahead buffer of at least four constituents. PARSIFAL's grammar rules make the same guess that readers make when presented with a true GP. For a participle/main verb conflict, readers prefer to choose a main verb. However, PARSIFAL will make the same guess when processing SDGPs. Therefore, PARSIFAL fails to parse deterministically some sentences (SDGPs) which people can parse without conscious backtracking. In LAZY, the C-list corresponds to the look-ahead buffer. When parsing most sentences, the C-list will contain at most three constituents. However, when understanding an SDGP or reanalyzing a true garden path sentence, there are four constituents in the C-list. Instead of modeling the misunderstanding of GPs by limiting the size of the look-ahead buffer and the look-ahead in the grammar, LAZY models this phenomenon by deciding on a syntactic representation before waiting long enough to disambiguate on a purely syntactic basis, when semantic expectations are strong enough.

Shieber models the misunderstanding of GPs in an LALR(1) parser [Aho 77] by the selection of an incorrect reduction in a reduce-reduce conflict. In a participle/main verb conflict, there is a state in his parser which requires choosing between a participle phrase and a verb phrase. Instead of guessing like PARSIFAL, Shieber's parser looks up the "lexical preference" of the verb. Some verbs are marked as preferring participle forms; others prefer being main verbs. While this lexical preference can account for the understanding of SDGPs and the misunderstanding of GPs in any one particular example, it is not a very general mechanism. One implication of using lexical preference to select the correct form is that some verbs are only understood or misunderstood as main verbs and others only as participles. If this were true, then sentences (10a) and (10b) would both be either easily understood or GPs.

(10a) No freshmen registered for Calculus failed.

(10b) No car registered in California should be driven in Mexico.

We find that most people easily understand (10b), but require conscious backtracking to understand (10a). Instead of using a predetermined preference for one syntactic form, LAZY utilizes semantic clues to favor a particular parse.

V. FUTURE WORK

We intend to extend LAZY by allowing it to consult an episodic memory during parsing.
The format that we have chosen for requests can be augmented by adding an EPISODIC facet to the test. This will enable expectations to predict individual objects in addition to semantic features. We have seen examples of potential garden path sentences which we speculate are misunderstood or understood by consulting world knowledge (e.g., 11 and 12).

(11) At MIT, ninety five percent of the freshmen registered for Calculus passed.

(12) At MIT, five percent of the freshmen registered for Calculus failed.

We have observed that more people mistake "registered" for the main verb in (11) than in (12). This could be accounted for by the fact that the proposition "At MIT, ninety five percent of the freshmen registered for Calculus" is more easily accepted than "At MIT, five percent of the freshmen registered for Calculus". Evidence such as this suggests that semantic and episodic processing are done at early stages of understanding.

VI. CONCLUSION

We have augmented the basic request consideration algorithm of a conceptual analyzer to include information to determine the time that an expectation should be considered, and shown that by ignoring this information when syntactic and semantic expectations agree, we can model the performance of native English speakers understanding and misunderstanding garden path sentences.

VII. ACKNOWLEDGMENTS

This work was supported by USAF Electronics System Division under Air Force contract F19628-84-C-0001 and monitored by the Rome Air Development Center.

BIBLIOGRAPHY

Birnbaum, L. and M. Selfridge, "Conceptual Analysis of Natural Language", in Inside Computer Understanding: Five Programs Plus Miniatures, Hillsdale, NJ: Lawrence Erlbaum Associates, 1981.

Crain, S. and P. Coker, "A Semantic Constraint on Parsing", paper presented at the Linguistic Society of America Annual Meeting, University of California at Irvine, 1979.

Dyer, M.G., In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension, Cambridge, MA: The MIT Press, 1983.

Gershman, A.V., "A Framework for Conceptual Analyzers", in Strategies for Natural Language Processing, Hillsdale, NJ: Lawrence Erlbaum Associates, 1982.

Katz, J.J. and J.A. Fodor, "The Structure of a Semantic Theory", in Language, 39, 1963.

Marcus, M., A Theory of Syntactic Recognition for Natural Language, Cambridge, MA: The MIT Press, 1980.

Marcus, M., "Wait-and-See Strategies for Parsing Natural Language", MIT WP-75, Cambridge, MA: 1974.

Matthews, R., "Are the Grammatical Sentences of a Language a Recursive Set?", in Synthese, 40, 1979.

Pazzani, M.J., "Interactive Script Instantiation", in Proceedings of the National Conference on Artificial Intelligence, 1983.

Riesbeck, C. and R.C. Schank, "Comprehension by Computer: Expectation Based Analysis of Sentences in Context", Research Report #78, Dept. of Computer Science, Yale University, 1976.

Schank, R.C. and L. Birnbaum, "Memory, Meaning, and Syntax", Research Report 189, Yale University Department of Computer Science, 1980.

Shieber, S.M., "Sentence Disambiguation by a Shift-Reduce Parsing Technique", 21st Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, 1983.
LANGUAGE GENERATION FROM CONCEPTUAL STRUCTURE: SYNTHESIS OF GERMAN IN A JAPANESE/GERMAN MT PROJECT

J. Laubsch, D. Roesner, K. Hanakata, A. Lesniewski
Projekt SEMSYN, Institut fuer Informatik, Universitaet Stuttgart
Herdweg 51, D-7000 Stuttgart 1, West Germany

This paper describes the current state of the SEMSYN project (1), whose goal is to develop a module for generation of German from a semantic representation. The first application of this module is within the framework of a Japanese/German machine translation project. The generation process is organized into three stages that use distinct knowledge sources. The first stage is conceptually oriented and language independent, and exploits case and concept schemata. The second stage employs realization schemata which specify choices to map from meaning structures into German linguistic constructs. The last stage constructs the surface string using knowledge about syntax, morphology, and style. This paper describes the first two stages.

INTRODUCTION

SEMSYN's generation module is developed within a German/Japanese MT project. Fujitsu Research Laboratories provide semantic representations that are produced as an interim data structure of their Japanese/English MT system ATLAS/II (Uchida & Sugiyama, 1980). The feasibility of the approach of using a semantic representation as an interlingua in a practical application will be investigated and demonstrated by translating titles of Japanese papers from the field of "Information Technology". This material comes from Japanese documentation data bases and contains, in addition to titles, their respective abstracts. Our design of the generation component is not limited to titles, but takes extensibility to abstracts and full texts into account. The envisioned future application of a Japanese/German translation system is to provide natural language access to Japanese documentation data bases.

(1) SEMSYN is an acronym for semantic synthesis. The project is funded by the "Informationslinguistik" program of the Ministry for Research and Technology (BMFT), FRG, and is carried out in cooperation with FUJITSU Research Laboratories, Japan.

OVERALL DESIGN OF SEMSYN

Fig. 1 shows the stages of generation. The Japanese text is processed by the analysis part of FUJITSU's ATLAS/II system. Its output is a semantic net which serves as the input for our system.

[Fig. 1: Stages of Generation -- a block diagram: the ATLAS/II analysis stage produces a semantic net; generation stage 1 applies a knowledge base relating semantic symbols to case schemata for verb concepts and concept schemata, yielding an Instantiated Knowledge Base Schema (IKBS); stage 2 applies rules for selecting realization schemata, specifying syntactic categories and functional roles, yielding an Instantiated Realization Schema (IRS); stage 3 is the generator front end, handling style, syntax, and morphology.]

CONCEPTUAL STRUCTURE

ATLAS/II's semantic networks (see Fig. 2) are directed graphs with named nodes and labelled arcs. The names of the nodes are called "semantic symbols" and are associated with Japanese and English dictionary entries. The labelled arcs are used in two ways:

a) Binary arcs either express case relations between connected symbols or combine substructures.

b) Unary arcs serve as modifying tags of various kinds (logical junctors, syntactic features, stylistics, ...).

The first stage of generation is conceptually oriented and should be target language independent; we use frame structures in a KRL-like notation.
Our representation distinguishes between case schemata (used to carry the meaning of actions) and concept schemata (used to represent "things" or "qualities"). Each semantic symbol points to such a schema. These schemata have three parts:

(1) roles: For action schemata, these are the usual cases of Fillmore (e.g. AGENT, OBJECT, ...); for concept schemata, roles describe how the concept may be further specified by other concepts.

(2) transformation rules: These are condition-action pairs that specify which schema is to be applied, and how its roles are to be filled from the ATLAS/II net.

(3) choices: These describe possible syntactic patterns for realization.

Examples: The case schema for the semantic symbol ACHIEVE is:

(ACHIEVE (superc goal-oriented-act)
  (roles (Agent (class animate))
         (Goal)
         (Method (class abstract-object))
         (Instrument (class concrete-object)))
  (transformation-rules ...)
  (choices ...))

The concept schema for SPEAKER is:

(SPEAKER (superc animate)
  (roles (Performs-act-for (class organization)) ...)
  (transformation-rules ...)
  (choices ...))

FROM CONCEPTS TO LANGUAGE

In the target language oriented stage 2, the following decisions have to be made:

i) Retrieval of the lexical entry of a German verb and its associated case frame corresponding to the IKBS.

ii) Selection of lexical entries for the other semantic symbols.

iii) Selection of a realization schema (RS), mapping of IKBS roles to RS functional roles, and inferring syntactic features.

In i) a simple retrieval may not suffice. In order to choose the most adequate German verb, it will e.g. be necessary to check the fillers of an IKBS. For example, the semantic symbol REALISE may translate to "realisieren", "implementieren" etc. If the Instrument role of REALISE were filled with an instance of the PROGRAM concept, we would choose the more adequate word sense "implementieren".

In ii) similar problems sometimes arise. For example, the semantic symbol ACCIDENT may translate to the German equivalent of "accident", "error", "failure" or "bug". The actual choice depends here on the filler of ACCIDENT's semantic role for "where it occurred".

In iii) the choices aspect of a schema describes different possibilities for how an instance may be realized and specifies the conditions for selection. (This idea is due to McDonald (1983) and his MUMBLE system.) The factors determining the choice include:

(a) Which roles are filled?
(b) What are their respective fillers?
(c) Which type of text are we going to generate?

For example, if the Agent role of a case frame is unfilled, we may choose either passivation or selection of a German verb which maps the semantic object into the syntactic subject. If neither agent nor object are filled, nominalization is forced.

A realization schema (RS) is a structure which identifies a syntactic category (e.g. CLAUSE, NP) and describes its functional roles (e.g. HEAD, MODIFIER, ...). We employ Winograd's terminology for functional grammar (Winograd, 1983). In general, case schemata will be mapped into CLAUSE-RSs and concept schemata are mapped into NP-RSs. A CLAUSE-RS has a features description and slots for verb, subject, direct object, and indirect objects. A features description may include information about voice, modality, idiomatic realization, etc. There are realization schemata for discourse as well as titles. The latter are special cases of the former, forcing nominalized constructions.
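As a concrete illustration of how a transformation rule might fill a case schema's roles from ATLAS/II arcs, consider the following Python sketch. It is our own simplification, not SEMSYN's Lisp implementation; the arc-to-role mapping shown is invented for the ACHIEVE example used later in the paper.

    # Toy stage-1 instantiation: fill a case schema's roles from the
    # arcs leaving a semantic symbol in the net.

    NET = [("ACHIEVE", "AGENT", "SPEAKER"),
           ("ACHIEVE", "OBJ", "PURPOSE"),
           ("ACHIEVE", "METHOD", "UTTERANCE")]

    CASE_SCHEMATA = {
        "ACHIEVE": {"AGENT": "Agent", "OBJ": "Object", "METHOD": "Method"},
    }

    def instantiate(symbol, net):
        """Fill a case schema's roles from the arcs leaving `symbol`."""
        arc_to_role = CASE_SCHEMATA[symbol]
        roles = {arc_to_role[label]: target
                 for src, label, target in net
                 if src == symbol and label in arc_to_role}
        return {"schema": symbol, "roles": roles}

    print(instantiate("ACHIEVE", NET))
    # {'schema': 'ACHIEVE', 'roles': {'Agent': 'SPEAKER',
    #  'Object': 'PURPOSE', 'Method': 'UTTERANCE'}}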
FROM CONCEPTS TO LANGUAGE In the target language oriented stage 2, the following decisions have to be made: REFERENCING AND FOCUSSING For referencing and other phenomena like focussing, the simple approach of only allowing a schema instance as a filler is not sufficient. We therefore included in our 492 knowledge representation a way to have de- scriptors as fillers. Such descriptors are references to parts of a schema. In the following example the filler of USE'S Object- slot is a reference descriptor to SYNTHESIZE's Object-slot: X = (a USE with (Object (the Object from (a SYNTHESIZE with (Object [FUNCTION]) (Method [DYNAMIC-PROGRAMMING]))) (Purpose (an ACCESS with (Object [DATA-BASEl)))) X could be realized as: "Using functions, that are synthesized by dynamic programming for data-base access." In general, descriptors have the form: (the <path> from <IKBS>) <path> = <slot>... A description can be realized by a relative clause. The same technique of referring to a sub- structure may as well be used for focussing. For example, embedding X into (the Purpose from X) expresses that the focus is on X's Purpose slot, which would yield the realization: "Database access using functions that are synthesized by dynamic progra,ming." A WALK WITH SEMSYN Let us look at the first sentence from an abstract. Figure 2 contains the Japanese input and the semantic net corresponding to ATLAS/II's analysis. In stage i, we first examine those semantic symbols which have an attached case schema and instantiate them according to their trans- formation rules. In this example the WANT and ACHIEVE nodes (flagged by a FRED arc) are case schemata. Applying their tranformation rules results in the following IKBS: (a WANT with (Object (an ACHIEVE with (Agent [SPF2~KER]) (Object [PURPOSE (Number [PLURAL])]) (Method [U'~'I'ERANCE (Number [SINGLE])]))) In stage 2, we will derive a description of how this structure will be realized as German text. First, consider the outer WANT act. There japanese input for FUJITSUs RTLRS/II-systeR Top o,I" obicct SEMSYHs interface to RTLRS/II ((UTTERANCE --HUMBER-> ONE) (PURPOSE ~ R - > PLURAL) (MRNT --OBJ-> RCHIE~) (~T-"PRE~-> =NIL) (ZNIL --ST-> gRNT) (ACHIEVE --OBJ-> PURPOSE) (RCHIEUE --PRED-> ¢NIL) (ACHIEVE --IIETHOD-> UTTERANCE) (RCHIEVE ~RGENT-> SPERKER)) ,~otto.t of object ;EMRHTIC NET Top oy object GERMAN EQUIVALENT TO JAPANESE INPUT ES WIRD GEWUENSCHT DASS EIN SPRECHER MEHRERE ZWECKE MIT EINER EINZELNEN AEUSSERUNG ERREICHT #o#~m o,f object Figure 2. From Japanese to German is no Agent, so we choose to build a clause in passive voice. Next, we observe that WANT's object is itself an act with several filled roles and could be realized as a clause. One of the choices of WANT fits this situation. Its condition is that there is no Agent and the Object will be realized as a clause. Its realization schema is an idiomatic phrase named *Es-Part*: "Es ist erwuenscht, dass <CLAUSE>" ("It is wanted that <CLAUSE>") Now consider the embedded <CLAUSE>. An ACHIEVE act can be realized in German as a clause by the following realization schema: 493 (a CLAUSE with (Subject <NP-realization of Agent-role> (Verb "erreich " (DirObj <NP-re~lization of Object-role> (IndObjs (a PP with (Prep (One-of ["durch" "mit" "mittels"])) (PObj <N-P-realization of Method-role>)))) This schema is not particular to ACHIEVE. It is shared by other verbs and will therefore be found via general choices which ACHIEVE inherits. The Agent of ACHIEVE's IKBS maps to the Subject and the Method is realized as an indirect object. 
Within the scope of the chosen German verb "erreichen" (for "achieve"), a Method role maps into a PP with one of the prepositions "dutch", "mit", "mittels" (corresponding to "by means of"). This leads to the following IRS: (a CLAUSE with (Features (Voice Passive Idiom *Es-Part*) (Verb "wuensch_") ;want (DirObj (a CLAUSE with (Subject (a NP with (Head "Sprecher")));speaker (Verb "erreich") (DirObj (aNP with (Features (Numerus= Plural)) (Head ["Ziel", "Zweck"]) ; purpose (Adj "mehrere")) ; multiple (IndObjs ((a PP with (Prep ["durch", "mit", "mittels"]) (PObj (aNPwith (Features (Numerus Singular)) (Head "Aeusserung") ;utterance (Adj "einzeln") ; single ))))) Such an instantiated realization schema (IRS) will be the input of the generation front end that takes care of a syntactically and morphologically correct German surface structure (see Fig. 2). EXPERIMENTS WITH OTHER GENERATION MODULES We recently studied three generation modules (running in Lisp on our SYMBOLICS 3600) with the objective to find out, whether they could serve as a generation front end for SEMSYN: SUTRA (Busemann, 1983), the German version of IPG (Kempen & Hoenkamp, 1982), and MUMBLE (McDonald, 1983). Our IRS is a functional grammar descrip- tion. The input of SUTRA, the "preterminal structure", already makes assumptions about word order within the noun group. To use SUTRA, additional transformation rules would have to be written. IPG's input is a conceptual structure. Parts of it are fully realized before others are considered. The motivation for IPG's incremental control structure is psycho- logical. In contrast, the derivation of our IRS and its subsequent rendering is not committed to such a control structure. Never- theless, the procedural grarmnar of IPG could be used to produce surface strings from IKBS by providing it with additional syntactic features (which are contained in IRS). Both MUMBLE and IPG are conceptually oriented and incremental. MUMBLE's input is on the level of our IKBS. MUMBLE produces func- tional descriptions of sentences "on the fly". These descriptions are contained in a constituent structure tree, which is traversed to produce surface text. Our approach is to make the functional description explicit. ACKNOWLEDG~4ENTS We have to thank many colleagues in the generation field that helped SEMSYN with their experience. We are especially thankful to Dave McDonald (Amherst), and Eduard Hoenkamp (Nijmegen) whose support - personally and through their software - is still going on. We also thank the members of the ATLAS/II research group (Fujitsu Laboratories) for their support. REFERENCES Uchida,H. & Sugiyama: A machine translation system from Japanese into English based on conceptual structure, Proc. of COLING-80, Tokyo, 1980, pp.455-462 Winograd, T.: Language as a cognitive process, Addison-Wesley, 1983 McDonald, D.D.: Natural language generation as a computational problem: An Introduction; in: Brady & Berwick (Eds.) Computational model of discourse, NIT-Press, 1983, pp.209-265 Kempen, G. & Hoenkamp,E.: Incremental sentence generation: Implication for the structure of a syntactic processor; in Proc. COLING-82, Prague, 1982, pp.151-156 Busemann,B.: Oberflaechentransformationen bei der Generierung geschriebener deutscher Sprache; in: Neumann, B. (Ed.) GWAI-83, Springer, 1983, pp.90-99 494 | 1984 | 105 |
NAtural Language driven Image Generation

Giovanni Adorni, Mauro Di Manzo and Fausto Giunchiglia
Department of Communication, Computer and System Sciences
University of Genoa
Via Opera Pia 11A - 16145 Genoa - Italy

ABSTRACT

In this paper the experience gained through the development of a NAtural Language driven Image Generation system is discussed. This system is able to imagine a static scene described by means of a sequence of simple phrases. In particular, a theory for equilibrium and support will be outlined, together with the problem of object positioning.

1. Introduction

A challenging application of AI techniques is the generation of 2D projections of 3D scenes starting from a possibly unformalized input, such as a natural language description. Apart from the practically unlimited simulation capabilities that a tool of this kind could give people working in the show business, a better modeling of the involved cognitive processes is important not only from the point of view of story understanding (Wa80a,Wa81a), but also for a more effective approach to a number of AI related problems, as, for instance, vision or robot planning (So76a). In this paper we discuss some of the ideas on which is based a NAtural Language driven Image Generation system (NALIG from here on) which has been developed for experimental purposes at the University of Genoa. This system is currently able to reason about static scenes described by means of a set of simple phrases of the form:

<subject> <preposition> <object> [ <reference> ] (*)

The understanding process in NALIG flows through several steps (distinguishable only from a logical point of view), which perform object instantiation, relation inheritance, translation of the surface expression into unambiguous primitives, consistency checking, object positioning and so on, up to the drawing of the "imagined" scene on a screen. A general overview of NALIG is given in the paper, which however is mainly concerned with the role of common sense physical reasoning in consistency checking and object instantiation.

Qualitative reasoning about physical processes is a promising tool which is exciting the interest of an increasing number of A.I. researchers (Fo83a,Fo83b,Fo83c), (Ha78a,Ha79a), (Kl79a,Kl83a). It plays a central role in the scene description understanding process for several reasons:

i. naive physics, following Hayes' definition (Ha78a), is an attempt to represent the common sense knowledge that people have about the physical world. Sharing this knowledge between the speaker and the listener (the A.I. system, in our case) is the only feasible way to let the second make realistic hypotheses about the assumptions underlying the speaker's utterances;

ii. it allows one to reach conclusions about problems for which very little information is available and which consequently are hard to formalize using quantitative models;

iii. qualitative reasoning can be much more effective in reaching approximate conclusions which are sufficient in everyday life. It allows one to build a hierarchy of models in order to use at every time the minimal requested amount of information, and to avoid computing unnecessary details.

(*) NALIG has been developed for the Italian language; the prepositions it can presently analyze are: su, sopra, sotto, a destra, a sinistra, vicino, davanti, dietro, in. A second, deeply revised release is currently under design. This work has been supported by the Italian Department of Education under Grant M.P.I.-27430.
Within the framework of naive physics, most of the current literature is devoted to dynamic processes. As far as we are concerned with the description of static scenes, other concepts are relevant, such as equilibrium, support, structural robustness, containment and so on. With few exceptions (Ha78a), qualitative theories to address these problems are not yet available, even if some useful suggestions to approach statics can be found in (By80a). In this paper, a theory for equilibrium and support will be outlined.

An important aspect of the scene description understanding process is that some amount of quantitative analysis can never be avoided, since a well defined position must be computed for every object in order to draw the image of the scene on a screen. This computation must not result in an overspecification that masks the degree of fuzziness which is intrinsic in object positions (Wa79a), in order to avoid unnecessarily constraining all the following reasoning activities. The last section of the paper will be devoted to the object positioning problem.

2. Object taxonomy and spatial primitives

Spatial prepositions in natural language are often ambiguous, and each one may convey several different meanings (Bo79a,He80a). Therefore, the first step is to disambiguate descriptions through the definition of a proper number of primitive relationships. The selection of the primitive relation representing the meaning of the input phrase is based mainly, but not only, on a taxonomy of the involved objects, where they are classified depending on attributes which, in turn, depend on the actual spatial preposition. An example may be given by the rules to select the relation H_SUPPORT(A,B) (that is, A is horizontally supported by B) from the phrase "A on B". This meaning is chosen by default when some conditions are satisfied. First of all, A must not belong to that special category of objects which, when properly used, are flying, as aircraft, unless B is an object expressly devoted to support them in some special case: so, "the airplane on the runway" is likely to be imagined touching the ground, while for the "airplane on the desert" a flying state is probably inferred (of course, the authors cannot exclude that NALIG default reasoning is biased by their personal preferences). The FLYING(A) and REPOSITORY(A,B) predicates are used to formalize these facts. To be able to give horizontal support, B must have a free upper surface (FREETOP(B)); walls or ceilings or closed doors in an indoor view do not belong to this category. Geographic objects (GEO(X)) impose a special care: "the mountains on the lake" cannot be interpreted as the lake supporting the mountains, and even if only B is a geographic object, but A can fly, physical contact seems not to be the most common inference ("the birds on the garden"). Hence, a first tentative rule is the following (the actual rule is much more complex):

not GEO(A) and not (FLYING(A) and not REPOSITORY(A,B))
and ((FREETOP(B) and not GEO(B)) or (GEO(B) and not CANFLY(A)))
===> H_SUPPORT(A,B)

A complete discussion of NALIG's taxonomy of objects is given in (Bo83a). Both the set of primitives and the set of attributes have been defined on the basis of empirical evidence, through the analysis of some thousands of sample phrases. Besides the fact that NALIG works, there are no specific reasons to accept the current taxonomy, and it is likely that further experience will suggest modifications; however, most of the knowledge in NALIG is descriptive, and the intrinsic flexibility of an expert system approach allows an easy stepwise refinement.

The values of some predicates are simply attempts to summarize large amounts of specific knowledge. For example, CANFLY(X) is true for birds, but FLYING(X) is not; the last predicate is reserved for airplanes and similar objects. This is a simple trick to say that, in common experience, airplanes can be supported by a very limited set of objects, as runways, aircraft carrier ships and so on, while birds can stay almost everywhere, and to list all possible places is too space wasting. However, most of these predicates are directly related to geometrical or physical properties of objects, to their common uses in a given environment and so on, and should always be referred to underlying specific theories. For instance, a number of features are clearly related to a description of space which is largely based on the model Hayes used to develop a theory for the containment of liquids (Ha78a). Within this model some predicates, as INSIDE(O), can be evaluated by means of a deeper geometric modeling module, which uses a generalized cone approach to maintain a more detailed description of the structures of objects (Ad82a,Ad83a,Ad83b). Some of these theories are currently under development (a naive approach to statics will be outlined in the following), some others are still beyond the horizon; nevertheless, for experimental purposes, unavailable sophisticated theories can be substituted by rough approximations or even by fixed valued predicates, with only a graceful degradation of reasoning capabilities.

Taxonomical rules generate hypotheses about the most likely spatial primitive, but these hypotheses must be checked for consistency, using knowledge about physical processes (section 4) or about constraints imposed by the previous allocation of other objects (section 5). Moreover, there are other sources of primitive relations besides the input phrase. One of the most important sources is given by a set of rules which allow the inference of unmentioned objects; they are briefly outlined in the next section.
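The tentative H_SUPPORT rule above can be read directly as a boolean test over object attributes. The following Python sketch is our own transcription for illustration; the feature dictionaries are invented toy entries, not NALIG's actual knowledge base.

    # The tentative default rule for H_SUPPORT(A,B), transcribed verbatim.

    def h_support_default(a, b, repository=False):
        return (not a.get("GEO")
                and not (a.get("FLYING") and not repository)
                and ((b.get("FREETOP") and not b.get("GEO"))
                     or (b.get("GEO") and not a.get("CANFLY"))))

    airplane = {"GEO": False, "FLYING": True,  "CANFLY": True}
    bird     = {"GEO": False, "FLYING": False, "CANFLY": True}
    runway   = {"GEO": False, "FREETOP": True}
    desert   = {"GEO": True,  "FREETOP": True}

    print(h_support_default(airplane, runway, repository=True))   # True: touching ground
    print(h_support_default(airplane, desert, repository=False))  # False: flying state
    print(h_support_default(bird, desert, repository=False))      # False: CANFLY blocks contact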
Besides the fact that NALIG works, there are specific reasons to accept the current taxonomy, and it is likely that further experience will suggest modifications; however, most of knowledge in NALIG is descriptive, and the intrinsic flexibility of an expert system approach an easy stepwise refinement. The values of some predicates are simply attempts to summarize large amounts of specified knowledge. For example, CANFLY(X) is true for birds, but FLYING(X) is not; the last predicate is reserved for airplanes and similar objects. This is a simple trick to say that, in common experience, airplanes can be supported by a very limited set of objects, as runways, aircraft carrier ships and so on, while birds can stay almost everywhere and to list all possible places is too space wasting. However, most of them are directly related to geometrical or physical properties of obje~ts, to their common uses in a given environment and so on, and should be always referred to underlying specific theories. For instance, a number of features are clearly related to a description of space which is largely based on the Hayes' model to develop a theory for the containment of liquids (Ha78a). Within this model some predicates, as INSIDE(O), can be evaluated by means of a deeper geometric modeling module, which uses a generalized cone approach to maintain a more detailed description of the structures of objects (Ad82a,Ad83a,Ad83b). Some of these theories are currently under development (a naive approach to statics will be outlined in the following), some others are still beyond the horizon; nevertheless, for experimental purposes, unavailable sophisticated theories can be substituted by rough approximations or even by fixed valued predicates with only s graceful degradation of reasoning capabilities. Taxonomical rules generate hypotheses about the most likely spatial primitive, but these hypotheses must be checked for consistency, using knowledge about physical processes (section 4) or about constraints imposed by the previous allocation of other objects (section 5). Moreover there are other sources of primitive relations besides the input phrase. One of the most important sources is given by a set of rules which allow to infer unmentioned objects; they are briefly 496 outlined in the next section. Other relations may be inferred as side-effects of consistency checking and positioning activities. the branch and the roof becomes unlikely. A deeper discussion of these inference rules is presented in (Ad83c). 3. Object instantiation Often a natural language description gives only some details about the scene, but many other objects and relations must be inferred to satisfy the consistency requirements. An example is the phrase "a branch on the roof" which is probably interpreted as "a tree near the house having a branch on the roof"." Therefore a set of rules has been defined in NALIG to instantiate unmentioned objects and infer the relations holding between them. Some of these rules are based on knowledge about the structure of objects, so that, under proper conditions, the whole can be inferred when a part is mentioned. Other rules take into account state conditions, as the fact that a living fish need water all around, or containment constraints, as the fact that water is spread on a plane surface unless it is put into a suitable container. The inferred objects may inherit spatial relations from those explicitly mentioned; in such a case relation replacement rules are needed. A simple example is the following. 
When relation inheritance does not apply, relative positions between known and inferred objects must be deduced from knowledge about their structures and typical positions. For instance, the PARTOF instantiation rule, triggered by the phrase "the branch on the roof" to infer a tree and a house, does not use relation inheritance (the tree is not on the house), but knowledge about their typical positions (both objects are usually on the ground, with assumed standard axis orientations) or structural constraints, as the house cannot be too high and the tree too far from the house, otherwise the stated relation between the branch and the roof becomes unlikely. A deeper discussion of these inference rules is presented in (Ad83c).

4. Consistency checking and qualitative reasoning

Objects which do not fly must be supported by other objects. This seemingly trivial interpretation of the law of gravity plays a basic role when we check the consistency of a set of given or assumed spatial relationships; no object is properly placed in the imagined scene if it is not possible to relate it, possibly through a chain of other supporting objects, to one which has the role of "ground" in the assumed environment (for instance floor, ceiling and interior surfaces of walls in an indoor view). The need to justify all object positions in this way may have effects on object instantiation, as in the phrase "the book on the pencil". Since the pencil cannot give full support to the book, another object must be assumed which supports the pencil and, at least partially, the book; both objects could be placed directly on the floor, but default knowledge about the typical positions that books and pencils may have in common will probably lead to the instantiation of the table as the most likely supporting object, in turn supported by the floor. The supporting laws may also give guidance to the positioning steps, as in the phrase "the car on the shelf" where, if there are reasons to reject the hypothesis that the car is a toy, then it is unlikely to have the shelf in its default position, that is, "on the wall".

fig. 1: assumed and default shelf structures

Another example of reasoning based on supporting rules is given by assumptions about the structure of objects, in those cases in which a number of alternatives is known. For instance, if we know that "a shelf on the wall" must support a heavy load of books, we probably assume the structure of fig. 1a, even if fig. 1b represents the default choice. To reason about these facts we need a strategy to find the equilibrium positions of an object on a pattern of supports, if such positions exist, taking into account specific characteristics of the involved objects.
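Returning for a moment to the basic gravity constraint above, the support-chain requirement can be pictured as a simple reachability test (an illustration only; the relation encoding and the choice of ground objects are assumptions of the example):

    # Every non-flying object must reach a "ground" object through a
    # chain of support relations for the scene to be consistent.
    def supported_by_ground(obj, supports, ground_objects):
        # supports: dict mapping an object to the objects that support it.
        stack, seen = [obj], set()
        while stack:
            x = stack.pop()
            if x in ground_objects:
                return True
            if x in seen:
                continue
            seen.add(x)
            stack.extend(supports.get(x, []))
        return False

    # Example: book on pencil, pencil on table, table on floor.
    supports = {"book": ["pencil"], "pencil": ["table"], "table": ["floor"]}
    print(supported_by_ground("book", supports, {"floor"}))   # True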
The strategy for finding equilibrium positions must be based, as far as possible, on qualitative rules, to avoid unnecessary calculations in simple and common cases and to handle ill-defined situations; for instance, rules for grasping objects, such as birds, are different from those holding for non-grasping ones, such as bottles, and nearly all situations in which birds are involved can be solved without any exact knowledge about their weight distributions, grasping strength and so on. An example of these rules, which we call "naive statics", is given in the following. Let us consider a simple case in which an object A is supported by another object B; the supported object has one or more plane faces that can be used as bases. If a face f is a base face for A (BASE(f,A)), it is possible to find the point c, which is the projection of the barycenter of A on the plane containing f along its normal. It is rather intuitive that a plane horizontal surface is a stable support for A if the area of physical contact includes c and if this area is long and wide enough, in comparison to the dimensions of A, and its height in particular. Hence a minimum equilibrium area e (M_E_AREA(e,f)) can be defined for each base f of A (this in turn imposes some constraints on the minimal dimensions of f). The upper surface of B may be of any shape. A support is a convex region of the upper surface of B; it may coincide with the whole upper surface of B, as happens with a table top, or with a limited subset of it, as a piece of the upper edge of the back of a chair. In this example we will consider only supports with a plane horizontal top, possibly shrinking to a line or a point; if s is such a part of B, it will be described by the predicate P_SUPP(s,B). Let us consider now an object A, with a regular base f, lying on one or more supports whose upper surfaces belong to the same plane. For each position of A there is a pattern of possibly disconnected areas obtained from the intersection of f with the top surfaces of the supports. Let a be the minimal convex plane figure which includes all these areas; a will be referred to as a supporting area (S_AREA(a)). A rather intuitive definition of equilibrium is that A is stable in that position if its minimum equilibrium area e is contained in the supporting area a. A further condition is that a free space V around the supports must exist, large enough to contain A; this space can be defined by the smallest convex volume Va enveloping A, which is part of the description of A itself. Therefore conditions of stable lying can be formulated as follows:

    BASE(f,A) and LAY(A,B) and FREE(V) and ENVELOP(Va,A) and CONTAINED(Va,V)
    ==> STABLE_H_SUPPORT(A,B)

where:

    LAY(A,B) = P_SUPP(s1,B) and ... and P_SUPP(sn,B)
               and S_AREA(a) and M_E_AREA(e,f) and CONTAINED(e,a)

The evaluation of the supporting area (i.e. finding an area a for which the predicate S_AREA(a) is true) may be trivial in some cases and may require sophisticated positioning strategies in other cases. The most trivial case is given by a single support S; in this case we have S_AREA(TOP(S)), which means that the supporting area a coincides with the top surface of S.
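These stability conditions can be sketched procedurally as follows. This is an illustration only: the geometric operations (projection, convex hull, containment) are assumed to be supplied by some underlying geometry module, here called geom:

    # Stable horizontal support, following the naive-statics conditions:
    # the minimum equilibrium area e of base f must be contained in the
    # supporting area a, and a free volume large enough for A must exist.
    def stable_h_support(geom, A, f, supports, free_space):
        e = geom.min_equilibrium_area(A, f)            # M_E_AREA(e, f)
        contact = [geom.intersection(f, geom.top(s))   # contact areas with
                   for s in supports]                  # each support s
        a = geom.convex_hull(contact)                  # S_AREA(a)
        if not geom.contains(a, e):                    # CONTAINED(e, a)
            return False
        va = geom.envelope(A)                          # ENVELOP(Va, A)
        return geom.contains(free_space, va)           # CONTAINED(Va, V)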
fig. 2: radial symmetry

Another simple but interesting case is given by regular patterns of supports, where it is possible to take advantage of existing symmetries. Let us consider, for instance, a pattern of supports with radial symmetry, as shown in fig. 2a, which may resemble a gas stove. If the base f of A has the same kind of approximately radial symmetry (a regular polygon could be a good approximation) and if the projection c of the barycenter of A coincides with the center of f, then the supporting area a is the circle with radius Ra, under the condition r < R, where r is the radius of the "central hole" in the pattern of supports and R is the (minimal) radius of f. This simply means that the most obvious positioning strategy is to center A with respect to the pattern of supports; their actual shape is not important provided that they can be touched by A. In case of failure of the equilibrium rules, a lower number of supports must be considered and the radial symmetry is lost (for instance, the case of a single support may be analyzed).

fig. 3: axial symmetry

As a third example let us consider a couple of supports with an axial symmetry, as shown in fig. 3a (straight contours are used only to simplify the discussion of this example, but there are no constraints on the actual shapes, besides symmetry). If the face f of A exhibits the same kind of symmetry (fig. 3b), the simplest placement strategy is to align the object axis with the support one. In this case the interior contours of each support can be divided into a number of intervals, so that for each interval [xi, xi+1] we have:

a. min d(x) over [xi, xi+1] >= max D(y) over all y, or
b. max d(x) over [xi, xi+1] <= min D(y) over all y, or
c. min d(x) over [xi, xi+1] >= min D(y) over all y, and
   max d(x) over [xi, xi+1] <= max D(y) over all y.

Analogously, the object contour can be divided into intervals, so that for each interval [yj, yj+1] we have:

A. min D(y) over [yj, yj+1] >= max d(x) over all x, or
B. max D(y) over [yj, yj+1] <= min d(x) over all x, or
C. min D(y) over [yj, yj+1] >= min d(x) over all x, and
   max D(y) over [yj, yj+1] <= max d(x) over all x.

Of course, some situations are mutually exclusive (type a with type A, or type b with type B intervals).

fig. 4: supporting area

Equilibrium positions may be found superimposing object intervals on support ones by means of rules which are specific for each combination of types. For example, one type A and one type b interval can be used to search for an equilibrium position by means of a rule that can be roughly expressed as: "put type A on type c and type C on type b so that the distance t (see fig. 4) is maximized". The supporting area a obtained this way is shown (the dashed one) in fig. 4. This kind of rule can be easily generalized to handle situations such as a pencil on a grill. Some problems arise when the supports do not lie on the same plane, as for a book supported partially by the table top and partially by another book; in this case the concept of friction becomes relevant. A more detailed and better formalized description of naive statics can be found in (Di84a).

5. Positioning objects in the scene

A special positioning module must be invoked to compute the actual coordinates of objects in order to show the scene on the screen. This module, which we mention only for lack of space, has a basic role, since it coordinates the knowledge about the whole scene, and can therefore activate specific reasoning activities. For instance, there are rules to handle the transparency of some objects with respect to particular relations, and possibly to generate new relations to be checked on the basis of the previously discussed criteria.
An example is the phrase "the book on the table", which is accepted by the logic module as H_SUPPORT(book,table) but can be rejected at this level if there is not enough free space on the table top, and therefore modified into a new relation H_SUPPORT(book,B), where B is a suitable object which is known to be supported by the table and is transparent with respect to the ON relationship (another book, for instance). A more detailed description can be found in (Ad84a).

6. Conclusions

NALIG is currently able to accept a description as a set of simple spatial relations between objects and then draw the imagined scene on a screen. A number of problems are still open, mainly in the area of knowledge models to describe physical phenomena and in the area of a suitable use of fuzzy logic to handle uncertain object positions. Apart from these enhancements of the current release of NALIG, future work will also be focused on the interconnection of NALIG with an animation system which is under development at the University of Genoa (Mo84a), in order to explore also those reasoning problems that are related to the description of actions performed by human actors.

REFERENCES

Ad82a. Adorni, G., Boccalatte, A., and DiManzo, M., "Cognitive Models for Computer Vision", Proc. 9th COLING, pp. 7-12 (Prague, Czechoslovakia, July 1982).
Ad83a. Adorni, G. and DiManzo, M., "Top-Down Approach to Scene Interpretation", Proc. CIL-83, pp. 591-606 (Barcelona, Spain, June 1983).
Ad83b. Adorni, G., DiManzo, M., and Ferrari, G., "Natural Language Input for Scene Generation", Proc. 1st Conf. of the European Chapter of the ACL, pp. 175-182 (Pisa, Italy, September 1983).
Ad83c. Adorni, G., DiManzo, M., and Giunchiglia, F., "Some Basic Mechanisms for Common Sense Reasoning about Stories Environments", Proc. 8th IJCAI, pp. 72-74 (Karlsruhe, West Germany, August 1983).
Ad84a. Adorni, G., DiManzo, M., and Giunchiglia, F., "From Descriptions to Images: what Reasoning in between?", to appear in Proc. 6th ECAI (Pisa, Italy, September 1984).
Bo79a. Boggess, L.C., "Computational Interpretation of English Spatial Prepositions", TR-75, Coordinated Sci. Lab., Univ. of Illinois, Urbana, ILL (February 1979).
Bo83a. Bona, R. and Giunchiglia, F., "The semantics of some spatial prepositions: the Italian case as an example", DIST Technical Report, Genoa, Italy (January 1983).
By80a. Byrd, L. and Borning, A., "Extending MECHO to Solve Static Problems", Proc. AISB-80 Conference on Artificial Intelligence (Amsterdam, The Netherlands, July 1980).
Di84a. DiManzo, M., "A qualitative approach to statics", DIST Technical Report, Genoa, Italy (June 1984).
Fo83a. Forbus, K., "Qualitative Reasoning about Space and Motion", in Mental Models, ed. Gentner, D. and Stevens, A., LEA Publishers, Hillsdale, N.J. (1983).
Fo83b. Forbus, K., "Measurement Interpretation in Qualitative Process Theory", Proc. 8th IJCAI, pp. 315-320 (Karlsruhe, West Germany, August 1983).
Fo83c. Forbus, K., "Qualitative Process Theory", AIM-664A, Massachusetts Institute of Technology, A.I. Lab., Cambridge, MA (May 1983).
Ha78a. Hayes, P.J., "Naive Physics I: Ontology for Liquids", Working Paper N. 35, ISSCO, Univ. of Geneva, Geneva, Switzerland (August 1978).
Ha79a. Hayes, P.J., "The Naive Physics Manifesto", in Expert Systems in the Micro Electronic Age, ed. Michie, D., Edinburgh University Press, Edinburgh, England (1979).
He80a. Herskovitz, A., "On the Spatial Uses of the Prepositions", Proc. 18th ACL, pp. 1-6 (Philadelphia, PA, June 1980).
Kl79a. de Kleer, J., "Qualitative and Quantitative Reasoning in Classical Mechanics", in Artificial Intelligence: an MIT Perspective, Volume I, ed. Winston, P.H. and Brown, R.H., The MIT Press, Cambridge, MA (1979).
Kl83a. de Kleer, J. and Brown, J., "Assumptions and Ambiguities in Mechanistic Mental Models", in Mental Models, ed. Gentner, D. and Stevens, A., LEA Publishers, Hillsdale, N.J. (1983).
Mo84a. Morasso, P. and Zaccaria, R., "FAN (Frame Algebra for Nem): an algebra for the description of tree-structured figures in motion", DIST Technical Report, Genoa, Italy (January 1984).
So78a. Sondheimer, N.K., "Spatial Reference and Natural Language Machine Control", Int. J. Man-Machine Studies, Vol. 8, pp. 329-336 (1976).
Wa79a. Waltz, D.L. and Boggess, L., "Visual Analog Representations for Natural Language Understanding", Proc. 6th IJCAI, pp. 926-934 (Tokyo, Japan, August 1979).
Wa80a. Waltz, D.L., "Understanding Scene Descriptions as Event Simulations", Proc. 18th ACL, pp. 7-12 (Philadelphia, PA, June 1980).
Wa81a. Waltz, D.L., "Toward a Detailed Model of Processing for Language Describing the Physical World", Proc. 7th IJCAI, pp. 1-6 (Vancouver, B.C., Canada, August 1981).

500 | 1984 | 106 |
Conceptual and Linguistic Decisions in Generation

Laurence DANLOS
LADL (CNRS)
Université de Paris 7
2, Place Jussieu
75005 Paris, France

ABSTRACT

Generation of texts in natural language requires making conceptual and linguistic decisions. This paper shows first that these decisions involve the use of a discourse grammar, secondly that they are all dependent on one another but that there is a priori no reason to give priority to one decision rather than another. As a consequence, a generation algorithm must not be modularized in components that make these decisions in a fixed order.

1. Introduction

To express in natural language the information given in a semantic representation, at least two kinds of decisions have to be made: "conceptual decisions" and "linguistic decisions". Conceptual decisions are concerned with questions such as: in what order must the information appear in the text? which information must be expressed explicitly and what can be left implicit? Linguistic decisions deal with questions such as: which lexical items to choose? which syntactic constructions to choose? how to cut the text into paragraphs and sentences? The purpose of this paper is to show that conceptual decisions and linguistic decisions cannot be made independently of one another, and therefore, that a generation system must be based on procedures that promote intimate interaction between conceptual and linguistic decisions. In particular, our claim is that a generation process cannot be modularized into a "conceptualizer" module making conceptual decisions regardless of any linguistic considerations, passing its output to a "dictionary" module which would figure out the lexical items to use accordingly, which would then in turn forward its results to a "grammar", where the appropriate syntactic constructions are chosen and then developed into sentences by a "syntactic component". In such generation systems (cf. (McDonald 1983) and (McKeown 1982)), it is assumed that the conceptualizer is language-free, i.e., need have no linguistic knowledge. This assumption is questionable, as we are going to show. Furthermore, in such modularized systems, the linguistic decisions must, clearly, be made so as to respect the conceptual ones. This consequence would be acceptable if the best lexical choices, i.e., the most precise, concise, evocative terms that can be chosen, always agreed with the conceptual decisions. However, there exist cases in which the best lexical choices and the conceptual decisions are in conflict. To prove our theoretical points, we will take as an example the generation of situations involving a result causation, i.e., a new STATE which arises because of one (or several) prior ACTs (Schank 1975). An illustration of a result causation is given in the following semantic representation:

(A) CRIME: ACT =: SHOOTING
        ACTOR       --> HUM0 =: John
        SHOOTING-AT --> HUM1 =: Mary
        BODY-PART   =: HEAD
    ===> STATE =: DEAD
        OBJECT --> HUM1

which is intended to describe a crime committed by a person named John against a person named Mary, consisting of John's shooting Mary in the head, causing Mary's death.

2. Conceptual decisions and lexical choice

Given a result causation, one decision that a language-free conceptualizer might well need to make would be whether to express the STATE first and then the ACT, or to choose the opposite order. If these decisions were passed on to a dictionary, the synthesis of (A) above would be texts like

Mary is dead because John shot her in the head.
John shot Mary in the head. She is dead.
made up of one phrase expressing the STATE and one expressing the ACT. But it seems more satisfactory to produce texts such as

(1) Mary was killed by John. He shot her in the head.
(2) John shot Mary in the head, killing her.

built around to kill. Such texts don't follow conceptual decisions dissociating the STATE and its cause: to kill (in the construction N0 V N1 =: John killed Mary) expresses at the same time the death of N1 and the fact that this death is due to an action (not specified) of N0 (McCawley 1971). We showed in (Danlos 1984) that a formulation embodying a verb with a causal semantics such as to kill to describe the RESULT, and another verb to describe the ACT, is, in most cases, preferable to a formulation composed of a phrase for the STATE and another one for the ACT. This result indicates that conceptual decisions should not be made without taking into account the possibilities provided by the language, in the present case the existence of verbs with a causal semantics such as to kill. This attitude is also imperative if a generator is to produce frozen phrases. The meaning of a frozen sentence being not calculable from the meaning of its constituents, frozen phrases cannot be generated from a language-free conceptualizer forwarding its decisions to a dictionary.

3. Conceptual decisions, segmentation into sentences and syntactic constructions

Let us suppose that a result causation is to be generated by means of two verbs, one with a causal semantics such as to kill for the RESULT, and one for the ACT, and let us look at the ways to form a text embodying these two verbs. The options available are the following:

- order of the information. There are two possibilities. Either the phrase expressing the RESULT or the phrase expressing the ACT occurs first.
- number of sentences. There are two possibilities. Either combine the phrases expressing the RESULT and the ACT into a complex sentence, as in (2) (John shot Mary in the head, killing her.), or form a text made up of two sentences, one describing the ACT, one describing the RESULT, as in (1) (Mary was killed by John. He shot her in the head.).
- choice of syntactic constructions. We will restrict ourselves to the active construction and to the passive one. For the latter, there is the choice between passive with an agent and passive without an agent.

On the whole, for each of the two verbs involved, there are three possibilities. The combination of these 3 options gives 36 possibilities, but it turns out that only 15 of them are feasible. For example, texts composed of two sentences, one in a passive form with an agent, the other in a passive form without an agent, are appropriate to express a result causation only if the RESULT precedes the ACT, or if the agent is in the first sentence expressing the ACT:

(3a) Mary was killed by John. She was shot.
(3b) Mary was killed. She was shot by John.
(3c) Mary was shot by John. She was killed.
(3d) *Mary was shot. She was killed by John. [1]

As another example, it is possible to combine the phrases expressing the ACT and the RESULT into a complex sentence if they are both in an active form

John shot Mary, killing her.
John killed Mary by shooting her.

but it is impossible if they are both in a passive form: the following formulations are awkward:

*Mary was killed by being shot by John.
*Mary was killed by John by being shot. [2]
The only other conceivable possibilities are to use a subordination conjunction such as because, when or as, but the resulting texts are clumsy:

*Mary was killed (because + when + as) she was shot by John.
*Mary was shot by John and, because of that, she was killed.

A generation system must know for each combination whether it is feasible or not. Either this knowledge is calculable from other data, or it constitutes data that must be provided to the generator. We are going to see that the second solution is better. First, on a semantic level, one can seek to verbalize the intuitions that can be drawn, for example, from paradigm (3), but this activity can be only descriptive and not explicative. In other words, the unacceptability of (3d) is a fact of language that cannot be explained by semantic computations of more general import. So the list of the 15 feasible combinations must be part of the data of the generator. Now the following question arises: is it possible to determine the structures of the texts corresponding to the 15 elements of this list? The answer is affirmative when the number of sentences is 2, and negative when it is 1. The combinations with two sentences involve only one type of linearization: juxtaposition. On the other hand, the combinations with one sentence involve

- a present participle if the ACT and RESULT are both expressed in an active form and if the ACT precedes the RESULT, as in John shot Mary, killing her
- a gerundive if the ACT and the RESULT are both expressed in an active form and if the RESULT precedes the ACT, as in John killed Mary by shooting her
- a relative clause if the RESULT is expressed in a passive form with an agent and precedes the ACT, this being expressed in an active form, as in Mary was killed by John who shot her in the head
- etc.

These types of linearization are not predictable. As a consequence, they must be provided to the generator, which must embody in its data the structures of the texts corresponding to the 15 feasible combinations. These structures constitute a real discourse grammar for result causations. The formulation of result causations must be modelled on one of the 15 discourse structures. [3] Generating a result causation thus entails selecting one of these discourse structures.

4. Selection of a discourse structure

The fact that only 15 discourse structures out of 36 possibilities are feasible shows that it is not possible to make decisions about order of information, segmentation into sentences and syntactic constructions independently of one another. To do so could potentially result in awkward texts more than half the time. Furthermore, lexical choice and selection of a discourse structure cannot be made independently of one another. A discourse structure leads to an acceptable text if and only if the formulations of the ACT and the RESULT present the syntactic properties required by the structure. For example, some causal verbs such as to assassinate cannot occur after a phrase describing the ACT: [4]

*John shot the Pope in the head assassinating him.
*John shot the Pope in the head. He assassinated him.

So, if the verb to assassinate is to be used, all of the discourse structures in which the RESULT appears after the ACT are inappropriate.

[1] A star (*) indicates that a text is awkward, but it does not necessarily mean that it is ungrammatical or uninterpretable.
[2] The deletion of the agent leads to a formulation which is correct (Mary was killed by being shot) but which does not express the author of the crime.
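The combinatorics just described can be made concrete with a small sketch. It is purely illustrative: the feasibility test below encodes only the two-sentence passive constraint of paradigm (3), as a stand-in for the full table of 15 structures, which the paper argues must be supplied as data:

    from itertools import product

    ORDERS = ["RESULT-ACT", "ACT-RESULT"]
    SENTENCES = [1, 2]
    VOICES = ["active", "passive+agent", "passive-agent"]

    def feasible(order, n_sent, v_result, v_act):
        # Partial stand-in for the discourse grammar: rule out the
        # two-sentence pattern of (3d), where the agent sits in a second
        # sentence expressing the RESULT.
        if (n_sent == 2 and
                {v_result, v_act} == {"passive+agent", "passive-agent"}):
            return (order == "RESULT-ACT"
                    or v_act == "passive+agent")
        return True   # every other case would need its own rule

    combos = list(product(ORDERS, SENTENCES, VOICES, VOICES))
    print(len(combos))                                  # 36 candidates
    print(len([c for c in combos if feasible(*c)]))     # 35 pass this test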
[3] This point is akin to an assumption supported by (McKeown 1982), except that our discourse structures contain linguistic information, contrary to hers, which indicate only the order in which the information must appear.
[4] These forms become acceptable if adverbial phrases are added: John shot the Pope in the head, thereby assassinating him in a spectacular way. John shot the Pope in the head. Thereby he assassinated him in a spectacular way.

On the other hand, if a discourse structure where the RESULT occurs after the ACT is selected, the use of to assassinate is forbidden. At this point, we have shown that decisions about lexical choice, order of the information, segmentation into sentences and syntactic constructions are all dependent on one another. This result is fundamental in generation since it has an immediate consequence: ordering these decisions amounts to giving them an order of priority.

5. Priorities in decisions

There is no general rule stating to which decisions priority must be given. It can vary from one case to another. For example, if a semantic representation describes a suicide, it is obviously appropriate to use to commit suicide. To do so, priority must be given to the lexical choice and not to the order of the information. If the order ACT-RESULT has been selected, it precludes the use of to commit suicide, which cannot occur after the description of the act performed to accomplish the suicide:

*John shot himself, committing suicide.
*John shot himself. He committed suicide.

On the other hand, if a result causation is part of a bigger story, and if strictly chronological order has been chosen to generate the whole story, then the result causation should be generated in the order ACT-RESULT. In other words, the order of the information should be given priority. In other situations, there is no clear evidence for giving priority to one decision over another one. As an illustration, let us take the case of a result causation which occurs in the context of a crime. It can be stated that the result DEAD must be expressed by:

- to assassinate as a first choice, to kill as a second choice, if the target is famous
- to murder as a first choice, to kill as a second choice, if the target is not famous

Moreover, the most appropriate order is, in general, RESULT-ACT if the target is famous, and ACT-RESULT otherwise. In the case of a famous target, the use of to assassinate is not in contradiction with the decision about the order of the information. But in the case of a non-famous target, the use of to murder doesn't fit the order ACT-RESULT, for this verb cannot occur after a description of the ACT:

*John shot Mary in the head, murdering her.
*John shot Mary in the head. He murdered her.

Therefore, either the decision about the order of the information or the decision to use to murder has to be
Condusion and future research We have shown that decisions about lexical choice, determination of the order of the information, segmentation into sentences and choice of syntactic construction are all dependent one another, the last three amounting to the selection of a discourse structure by means of a discourse grammar. As a consequence, a generation system must be based on a complete interaction between these decisions. In this work, we have been concerned only with the task of expressing into natural language a set of information. In others words, we have only dealt with the generation problem of "How to say it?", and not with the problem "What to say?". Some authors (cf. (McGuire 1980) and (Appelt 1982)) have rejected the separation between "What to say" and "How to say it" on the basis that the issue of "What to say" is not independent from the lexical choice. Thus, they have argued for generation systems involving interactions between conceptual decisions and linguistic ones. This point is akin to ours, and therefore, our model of generation could be extended so as to treat issues such as generating different texts according to the hearer and what it is supposed that he wants and/or needs to hear. REFERENCES Appelt, D.E., 1982, Planning Natural-Language Uterrances to satisfy Multiple Goals, Technical Note 259, SRI International, Menlo Park, California. Danlos, L., 1984, Generation automatique de textes en langues naturelles, These d'Etat, Universit~ de Paris 7. McCawley, J. D., 1971, "Prelexical Syntax" in Report of the 22nd annual round table meeting on Linguistics and Language Studies, O'Brien ~d., Georgetown University Press. McDonald, D., 1983, "Natural Language Generation as a Computational Problem : an introduction", in Computational Models of Discourse, Brady et Berwick ads., MIT Press, Cambridge, Massachussets. McGuire, R., 1980, "Political primaries and words of pain", unpublished manuscript, Yale University. McKeown, K. R., 1982, Generating Natural Language Text in response to Questions about database structure, PhD D=ssertation, University of Pensylvania. Schank, R.C., 1975, Conceptual Information Processing, North Holland, Amsterdam. ACKNOWLEDGEMENTS I would like to thank Lawrence Birnbaum for many valuable discussions and suggestions on this paper. 504 | 1984 | 107 |
A Computational Analysis of Complex Noun Phrases in Navy Messages

Elaine Marsh
Navy Center for Applied Research in Artificial Intelligence
Naval Research Laboratory - Code 7510
Washington, D.C. 20375

ABSTRACT

Methods of text compression in Navy messages are not limited to sentence fragments and the omission of function words such as the copula be. Text compression is also exhibited within "grammatical" sentences and is identified within noun phrases in Navy messages. Mechanisms of text compression include increased frequency of complex noun sequences and also increased usage of nominalizations. Semantic relationships among elements of a complex noun sequence can be used to derive a correct bracketing of syntactic constructions.

I INTRODUCTION

At the Navy Center for Applied Research in Artificial Intelligence, we have begun computer-analyzing and processing the compact text in Navy equipment failure messages, specifically equipment failure messages about electronics and data communications systems. These messages are required to be sent within 24 hours of the equipment casualty. Narrative remarks are restricted to a length of no more than 99 lines, and each line is restricted to a length of no more than 69 characters. Because hundreds of these messages are sent daily to update ship readiness data bases, automatic procedures are being implemented to handle them efficiently. Our task has been to process them for purposes of dissemination and summarization, and we have developed a prototype system for this purpose. To capture the information in the narrative, we have chosen to use natural language understanding techniques developed at the Linguistic String Project [Sager 1981]. These messages, like medical reports [Marsh 1982] and technical manuals [Lehrberger 1982], exhibit properties of text compression, in part due to imposed time and length constraints. Some methods of compression result in sentences that are usually called ill-formed in normal English texts [Eastman 1981]. Although unusual in normal, full English texts, these are characteristic of messages. Recent work on these properties includes discussions of omissions of function words such as the copula be, which result in sentence fragments, and omissions of articles in compact text [Marsh 1982, 1983; Bachenko 1983]. However, compact text also utilizes mechanisms of compression that are present in normal English but are used with greater frequency in messages and technical reports. Although the messages contain sentence fragments, they also contain many complete sentences. These sentences are long and complicated in spite of the telegraphic style often used. The internal structure of noun phrases in these constructions is often quite complex, and it is in these noun phrases that we find syntactic constructions characteristic of text compression. When processing these messages it becomes important to recognize signs of text compression, since the function words that so often direct a parsing procedure and reduce the choice of possible constructions are frequently absent. Without these overt markers of phrase boundaries, straightforward parsing becomes difficult and structural ambiguity becomes a serious problem. For example, sentences (1)-(2) are superficially identical; however, in Navy messages, the first is a request for a part (an antenna) and the second a sentence fragment specifying an antenna performing a specific function (a transmit antenna):

(1) Request antenna shipped by fastest available means.
(2) Transmit antenna shipped by fastest available means.

The question arises of how to recognize and capture these distinctions. We have chosen to take a sublanguage, or domain specific, approach to achieving correct parses by specifying the types of possible combinations among elements of a construction in both structural and semantic terms. This paper discusses a method for recognizing instances of textual compression and identifies two types of textual compression that arise in standard and sublanguage texts: complex noun sequences and nominalizations. These are both typically found in noun phrase constructions. We propose a set of semantic relations for complex noun sequences, within a sublanguage analysis, that permits the proper bracketing of modifier and host for correct interpretation of noun phrases.

II TEXT COMPRESSION IN NOUN PHRASES

We can recognize the sources of text compression by two means: (1) comparing a full grammar of the standard language to that of the domain in which we are working,
For example, sentences (1)-(2) are superficially identical, how- ever in Navy messages, the first is a request for a part (an antenna) and the second a sentence fragment specifying an antenna performing a specific function. (a transmit antenna). (1) Request antenna shipped by fastest available means. (2) Transmit antenna shipped by fastest available means. The question arises of how to recognize and capture these distinctions. We have chosen to take a sublangnage, or domain specific, approach to achieving correct parses by specifying the types of possible combinations among ele- ments of a construction in both structural and semantic terms. This paper discusses a method for recognizing instances of textual compression and identifies two types of textual compression that arise in standard and sub- language texts: complex noun sequences and nominaliza- tions. These are both typically found in noun phrase constructions. We propose a set of semantic relations for complex noun sequences, within a sublanguage analysis, that permits the proper bracketing of modifier and host for correct interpretation of noun phrases. II TEXT COMPRESSION IN NOUN PHRASES We can recognize the sources of text compression by two means: (1) comparing a full grammar of the standard language to that of the domain in which we are working, 505 and {2) comparing the distribution of constructions in two different sublanguages. The first comparison distin- guishes those constructions that are peculiar to a sub- language /el. Marsh 1982]. A comparison of a full gram- mar with two sublanguage grammars, the equipment failure messages discussed here and a set of patient medi- cal histories, disclosed that the sublanguage grammars were substantially smaller than full English grammars, having fewer productions and reflecting a more limited range of modifiers and complements [Grishman 1984]. The second comparison identifies the types of construc- tions that exhibit text compression. These are common even in full sentences. For example, we found that simi- lar sets of modifiers were used in the two different sub- languages [Grishman 1984]. However, the equipment failure messages had significantly more left and right modifier constructions than the medical, even though the equipment failure messages had about one-half the number of sentences of the patient histories. 236 sen- tences in the medical domain were analyzed and 123 in the Navy domain. The statistics are presented in Tables 1 and 2. In particular, there were significantly more noun modifiers of nouns constructions (Noun + Noun construc- tions) in the equipment failure messages than there were in the medical records, and more prepositional phrase modifiers of noun phrases. Further analysis suggested these constructions are symptomatic of two major mechanisms text compression in Navy messages: of com- plex noun sequences and nominalizations. Complex noun sequences. A major feature of noun phrases in this set of messages is the presence of many long sequences of left modifiers of nouns, (3). {3) (a) forward kingpost sliding padeye unit (b) coupler controller standby light (c) base plate insulator welds {d) recorder-reproducer tape transport (e) nbsv or ship-shore tty sat communications (f) fuze setter extend/retract cycle Complex noun sequences like these can cause major prob- lems in processing, since the proper bracketing requires an understanding of the semantic/syntactic relations between the components. 
[Lehrberger 1982] identifies similar sequences (empilage) in technical manuals. As he notes, this results from having to give highly descriptive names to parts in terms of their function and relation to other parts. Modifiers of nouns include nouns and adjectives. In the sublanguage of Navy messages, unmarked verb modifiers of nouns also occur. This construction is not common in standard English or in the medical record sublanguage mentioned above. It is illustrated above in (2) and below in (4):

(4) (a) receive sensitivity
    (b) operate mode
    (c) transmit antenna

Because the verbs are unmarked for tense or aspect, they can be mistaken by the parsing procedure for imperative or present tense verbs. Furthermore, in this domain the problem is compounded by the frequent use of sentence fragments consisting of a verb and its object, with no subject present, (1) repeated as (5) below:

(5) Request antenna...

Complex noun sequences also commonly arise from the omission of prepositions from prepositional phrases. The resulting long sequences of nouns are not easily bracketed correctly. In this data set, the omission of prepositions is restricted to place and time sequences (6-7):

(6) Request NAVSTA Guantanamo Bay Cuba coordinate ... Request RSG Mayport arrange ....
(7) Original antenna replaced by outside contractor through RSG Mayport 7 JUN 82.

In (6), prepositions marking time phrases have been omitted, and in (7) both time and place prepositions have been omitted.

Nominalizations. The increased frequency of prepositional modifiers in the equipment failure messages was traced to the frequent use of nominalizations in Navy messages. Out of a preliminary set of 89 prepositional modifiers, 42 were identified as arguments to nominalized verbs (47%); the other 52% were attributive. Examples of argument prepositional phrases are given in (8), attributive in (9):

(8) (a) assistance from MOTU 12
    (b) failure of amplifier
    (c) cause of casualty
    (d) completion of assistance

(9) (a) short circuit between amplifier and power supply
    (b) short in cable
    (c) receipt NLT 4 OCT 82
    (d) burned spots on connector

In these texts, in which nominalization serves as an important mechanism of text compression, it therefore becomes important to distinguish prepositional phrases that serve as arguments of nominalizations from attributive ones. The syntax of complex modifier sequences in noun phrases and the identification of nominalizations, both characteristic of text compression, need to be consistently defined for a proper understanding of the text being processed. By utilizing the semantic patterns that are derived from a sublanguage analysis, it becomes possible to properly bracket complex noun phrases. This is the subject of the next section.

III SEMANTIC PATTERNS IN COMPLEX NOUN SEQUENCES

Noun phrases in the equipment failure messages typically include numerous adjectival and noun modifiers on the head, and additional modifier types that are not so common in general English. The relationships expressed by this stacking are correspondingly complex. The sequences are highly descriptive, naming parts in terms of their function and relation to other parts, and also describing the status of parts and other objects in the sublanguage. Domain specific information can be used to derive the proper bracketing, but it is first necessary to identify the modifier-host semantic patterns through a distributional analysis of the texts. The basis for sublanguage work is that the semantic patterns are a restricted, limited set. They talk about a limited number of classes and objects and express a limited number of relationships among these objects. These objects and relationships are derived through distributional analysis, and can ultimately be used to direct the parsing procedure.

Complex noun sequences. Semantic patterns in complex noun phrases fall into two types: part names and other noun phrases. Names for pieces of equipment often contain complex noun sequences, i.e. stacked nouns. The relationships among the modifiers in the part names may indicate one of several semantic relations. They may indicate the levels of components. For example, assembly/component relationships are expressed: in circuit diode, diode is a component of a circuit; in antenna coupler, coupler is a component part of an antenna. Part names may also describe the function of the piece of equipment. For example, in the phrase high frequency transmit antenna, transmit is the function of the antenna. The semantic relations among the modifiers of a part are strictly ordered, as shown in (10a); examples are provided in (10b):

(10) (a) ID REPAIR SIGNAL FUNCTION PART
     (b) CU-2007 antenna coupler; HF XMIT antenna; deflection amplifier; UYA-4 display system; primary HF receive antenna

The component relations in part names are especially closely bound and are best regarded as a unit for processing. Thus antenna coupler in CU-2007 antenna coupler can be considered a unit. We would not expect to find antenna CU-2007 coupler or coupler CU-2007 antenna. In other noun phrases, i.e. those that are not part names, the head nouns can have other semantic categories. For example, looking back at the sentences in (3), the head noun of a noun sequence can be an equipment part (unit, light), a process that is performed on electrical signals (cycle), or a part function (communications).
The sequences are highly descriptive, naming parts in terms of their function and relation to other parts, and also describing the status of parts and other objects in the sublanguage. Domain specific information can be used to derive the proper bracketing, but it is first necessary to identify the modifier-host semantic patterns through a distributional analysis of the texts. The basis for sub- language work is that the semantic patterns are a res- tricted, limited set. They talk about a limited number of classes and objects and express a limited number of rela- tionships among these objects. These objects and rela- tionships are derived through distributional analysis, and can ultimately be used to direct the parsing procedure. Complex noun sequences. Semantic patterns in complex noun phrases fall into two types: part names and other noun phrases. Names for pieces of equipment often con- tain complex noun sequences, i.e. stacked nouns. The relationships among the modifiers in the part names may indicate one of several semantic relations. They may indicate the levels of components. For example, assembly/component relationships are expressed. In cir- cuit diode, diode is a component of a circuit. In antenna coupler, coupler is a component part of an antenna. Part names may also describe the function of the piece of equipment. For example, in the phrase high frequency transmit antenna, trqlnsmit is the function of the antenna. The semantic relations among the modifiers of a part are strictly ordered are shown in (10a); examples are provided in (10b). (10) (a) ID REPAIR SIGNAL FUNCTION PART. (b) CU-t~O07 antenna coupler; HF XMIT antenna; deflection amplifier; UYA. 4 display system; primary HF receive antenna The component relations in part names are especially closely bound and are best regarded as a unit for process- ing. Thus antenna coupler in CU-~O07 antenna coupler can be considered a unit. We would not expect to find antenna CU-~O07 coupler or coupler CU-~007 antenna. In other noun phrases, i.e. those that are not part names, the head nouns can have other semantic categories. For example, looking back at the sentences in (3), the head noun of a noun sequence can be an equip- ment part ( unit, light ), a process that is performed on electrical signals ( cycle ), a part function (communica- 507 tions ). In addition, it can be a repair action (alignment, repair), an assistance actions ( assistance ), and so on. Only modifiers with appropriate semantic and syntactic category can be adjoined. For example, in the phrase fuze setter eztend/retract cycle, semantic information is neces- sary to attain the correct bracketing. Since only function verbs can serve as noun modifiers, eztend/retraet can be analyzed as a modifier of cycle, a process word. Fuze setter, a part name, can be treated as a unit because noun sequences consisting of part names are generally local in nature. Fuze setter is prohibited from modifying eztend/retract, since verb modifiers do not themselves take noun modifiers. Other problems, such as the omissions of preposi- tions resulting in long noun sequences (ef. (8) and (0) above), can also be treated in this manner. By identify- ing the semantic classes of the noun in the object of the prepositionless prepositional phrase and its host's class, the occurrence of these prepositionless phrases can he res- tricted. The date and place strings can then be properly treated as a modifier constructions instead as head nouns. 
IV CONCLUSION

Methods of text compression are not limited to omissions of lexical items. They also include mechanisms for maximizing the amount of information that can be expressed within a limited time and space. These mechanisms include increased frequency of complex noun sequences and also increased usage of nominalizations. We would expect to find similar methods of text compression in other types of scientific material and message traffic. The semantic relationships among the elements of a noun phrase permit the proper bracketing of complex noun sequences. These relationships are largely domain specific, although some patterns may be generalizable across domains [Marsh 1984]. The approach taken here for Navy messages, which uses sublanguage selectional patterns for disambiguation, was developed, designed, and implemented initially at the New York University Linguistic String Project for medical record processing [Friedman 1984; Grishman 1983; Hirschman 1982]. It was implemented with the capability for transfer to other domains. We anticipate using a similar mechanism, based partially on the analysis presented here, on Navy messages in the near future.

References

[Bachenko 1983] Bachenko, J. and C.L. Heitmeyer. Noun Phrase Compression in Navy Messages. NRL Report 8748.
[Eastman 1981] Eastman, C.M. and D.S. McLean. On the Need for Parsing Ill-Formed Input. AJCL 7 (1981), 4.
[Friedman 1984] Friedman, C. Sublanguage Text Processing - Application to Medical Narrative. In [Kittredge 1984].
[Grishman 1983] Grishman, R., Hirschman, L. and C. Friedman. Isolating Domain Dependencies in Natural Language Interfaces. Proc. of the Conf. on Applied Nat. Lang. Processing (ACL).
[Grishman 1984] Grishman, R., Nhan, N., Marsh, E. and L. Hirschman. Automated Determination of Sublanguage Syntactic Usage. Proc. COLING 84 (current volume).
[Hirschman 1982] Hirschman, L. Constraints on Noun Phrase Conjunction: A Domain-independent Mechanism. Proc. COLING 82 - Abstracts.
[Kittredge 1984] Kittredge, R. and R. Grishman. Proc. of the Workshop on Sublanguage Description and Processing (held January 19-20, 1984, New York University, New York, New York), to appear.
[Lehrberger 1982] Lehrberger, J. Automatic Translation and the Concept of Sublanguage. In Kittredge and Lehrberger (eds), Sublanguage: Studies of Language in Restricted Semantic Domains. de Gruyter, New York, 1982.
[Levi 1978] Levi, J.N. The Syntax and Semantics of Complex Nominals. Academic Press, New York.
[Marsh 1982] Marsh, E. and N. Sager. Analysis and Processing of Compact Text. Proc. COLING 82, 201-206, North Holland.
[Marsh 1983] Marsh, E. Utilizing Domain-Specific Information for Processing Compact Text. Proc. Conf. Applied Natural Language Processing, 99-103 (ACL).
[Marsh 1984] Marsh, E. General Semantic Patterns in Different Sublanguages. In [Kittredge 1984].
[Sager 1981] Sager, N. Natural Language Information Processing. Addison-Wesley, Reading, MA.

Acknowledgments

This research was supported by the Office of Naval Research and the Office of Naval Technology PE-62721N. The author gratefully acknowledges the efforts of Joan Bachenko, Judy Froscher, and Ralph Grishman in processing the initial corpus of Navy messages, and the efforts of the researchers at New York University in processing the medical record corpus.

508 | 1984 | 108 |
ANOTHER LOOK AT NOMINAL COMPOUNDS

Pierre Isabelle
Département de linguistique
Université de Montréal
C.P. 6128, Succ. A, Montréal, Qué., Canada H3C 3J7

ABSTRACT

We present a progress report on our research on nominal compounds (NC's). Recent approaches to this problem in linguistics and natural language processing (NLP) are reviewed and criticized. We argue that the notion of "role nominal", which is at the interface of linguistic and extralinguistic knowledge, is crucial for characterizing NC's as well as other linguistic phenomena. We examine a number of constraints on the semantic interpretation rules for NC's. Proposals are made that should improve the capability of NLP systems to deal with NC's.

I INTRODUCTION

A. Problem Statement

As a first approximation, we define a "nominal compound" (NC) as a string of two or more nouns having the same distribution as a single noun, as in example (1):

(1) aircraft bomb bay door actuating cylinders

We will see below that provisions have to be made in some cases for intervening adjectives. NC's can exhibit various degrees of lexicalization, but we will focus our attention on productive rules for forming novel compounds. As to their surface syntax, NC's can assume any structure generated by the rule N --> N N; accordingly, their structural ambiguity grows exponentially with their length (following the "Catalan sequence"). How, then, do we determine that the normal interpretation for (1) imposes the bracketing shown in (2), rather than any of the other 41 syntactically possible bracketings?

(2) ((aircraft ((bomb bay) door)) (actuating cylinder))

B. Goals of the study

We believe that the analysis of NC's represents an important and largely unsolved problem. From a theoretical point of view this problem raises the question of how to deal with noun semantics. And since noun meaning appears to be closely connected with knowledge of the world, one is led to explore the modes of interaction between linguistic and conceptual knowledge. From an NLP perspective, NC's have turned out to be an important stumbling block for systems that attempt to deal with real-life text, especially in technical domains, for purposes such as machine translation (Isabelle, to appear), information retrieval, etc. Our ultimate goal is to develop an NLP system capable of analyzing large classes of NC's in the sublanguage (Kittredge and Lehrberger, 1982) of aircraft maintenance manuals. We do not aim at solving all cases, since we believe the problem to be exceedingly difficult. At the present stage of our inquiry, we concentrate on the design of a suitable theoretical framework. In section II, we present a brief review of previous work in linguistics and NLP. In sections III and IV, we examine two aspects of the semantics of nouns that are crucially relevant to the analysis of NC's: predicative nouns and role nominals. Finally, in section V, we explore possible constraints on the semantic interpretation of NC's.

II BACKGROUND

A. Approaches in Linguistics

The early study of Lees (1963) classified NC's on purely grammatical criteria, and it failed to provide constraints that could explain how NC's are semantically interpreted. In a number of more recent studies, such as Levi (1978), there has been an attempt to view NC's as governed by tight semantic constraints. Thus, according to Levi, any novel NC realizes a pattern where either: a) the head noun is a deverbal nominalization and its modifier is interpreted as an argument of the related verb; or b) the two nouns are related by one of exactly nine deletable predicates ("is the cause of", "is for", etc.). This reductionist attempt has been criticized, most notably by Downing (1977), on the grounds that the interpretation of NC's crucially involves pragmatic knowledge, and that numerous cases of NC's (such as thalidomide parents) will resist any analysis in terms of a closed set of relations.
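To make the ambiguity figures of section I concrete: the number of binary bracketings of an n-noun compound is the (n-1)-th Catalan number, which a few lines of code can check (an illustration only, not part of the framework discussed here):

    # Catalan numbers via the exact integer recurrence
    # C(0) = 1, C(n+1) = C(n) * (4n + 2) / (n + 2).
    def catalan(m):
        c = 1
        for n in range(m):
            c = c * (4 * n + 2) // (n + 2)
        return c

    # A six-noun compound such as (1) has catalan(5) = 42 bracketings,
    # i.e. 41 alternatives besides the intended one.
    print(catalan(5))   # 42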
Thus, according to Levi, any novel NC realizes a pattern where either: a) the head noun is a deverbal naminalizstion and its modifier is interpreted as an argument of the related verb; or b) the two nouns are related by one of exactly nine deletable predicates ("is the cause of", "is for", etc.). This reductionist attempt has been criti- cized, most notably by Downing (1977), on the 509 grounds that the interpretation of NC's crucially involves pragmatic knowledge, and that numerous cases of NC's (such as thalidomide parents) will resist any analysis in terms of a closed set of relations. These criticisms have led theorists like Selkirk (1982) to adopt the position that only "verbal compounds" (those constructed on a pattern "argument + nominalization") are amenable to linguistic characterization; all other NC's would have to be explained in separate extralin- guistic theories. C. Approaches in NLP Several systems have been developed in an NLP framework to deal with the problem of inter- preting NC's; two recent exempIes are reported in Finin (1980) and McDonaId (i982). In both systems, the individual nouns of an NC are first mapped onto conceptual representa- tions where concepts are characterized by a set of "roles" or "slots", and are arranged in an abstraction hierarchy. Interpreting a compound then amounts to forming a derived concept on the basis of the constituent concepts; in most cases, this is done by interpreting one of the concepts as a slot filler for the other. For example, the interpretation of steel adapter would involve inserting the concept associated with steel into a RAW-MATERIAL slot within the representation of adapter. But the authors do not examine in any detail the question of eventual constraints on this interpretation process, such as the effect of word order. Another crucial issue which has not been explored in sufficient detail, is the nature of, and the justification for the particular set of slots which is assigned to a given concept. Finin is somewhat more explicit on this question. Some nouns have slots which represent standard case roles. Role nominals, on the other hand, are nouns which refer to a particular case role of another concept; for example, food refers to the object role of eat, and this fact provides the key to the interpretation of cat food. This notion will be examined in detail below. But Finin also resorts to other types of slots (e.g. "raw-material") which are not discussed. III PREDICATIVE NOUNS A. Root Nouns with Arguments The fact that several classes of non-derived nouns strictly subcategorize phrases which are semantically interpreted as arguments has received very little attention in the literature. This phenomenon would deserve an extensive study, and we can only give here some relevant examples, together with an indication of the semantic cate- gories between which the relation expressed by the noun effects a mapping: (3) a. measure nouns map: objects onto quantities examples: speed (of), temperature (of), volume (of), size (of) b. "area" nouns map: objects onto subparts examples: top (of), side (of), bottom (of), center (of), core (of) c. collective nouns map: individuals onto sets examples: group (of), set (of) d. representational nouns map: objects onto representations of ob- jects examples: picture (of), diagram (of), sense (of) e. other examples location (of), goal (of), brother (of), king (of) Most if not all of these argument-taking nouns can have their argument satisfied by a modifier noun in an NC: (a) a. 
oil temperature b. box top c. tank group d. circuit diagram e. component location A treatment in terms of predicate/argument pat- terns for this type of NC seems far superior to Levi's (1978) use of the semantically empty "deletable predicate" HAVE ("the component HAS a location"). Although these NC's are excluded from Selkirk's (1982) class of verbal compounds, they are amenable to the same type of semantic des- cription. B. Action and State Nominalizations Most studies on NC's have recognized the fact that deverbal nominalizations exhibit a semantic behavior closely related to that of the verb, and subcategorize elements that can occur as modifier nouns in NC's. As mentioned above, these are in fact the only cases that Selkirk (1982) deems characterizable at the level of linguistic competence. 8ut there seems to be no reason why deadjec- tival nominalizations should be handled in a different way. In examples (5) and (6), an action and a state are nominalized, with the argument occurring either in a prepositional phrase or as a modifier in an NC: (5) a. Someone removes the pump. b. removal of the pump c. pump removal (6) a. Uranium is scarce. b. scarcity of uranium c. uranium scarcity 510 We are not claiming that there is exact synonymy between the (b) and (c) examples, but only that they exhibit the same predicate/ argument pattern. Action nominals can take various types of arguments in NC's, and sometimes several of them simultaneously: (7) a. pump failure (subject) b. Montreal flight (source, goal) (8) a. poppet chatter tendency IV ROLE NOMINALS A. Nominalizations Deverbal nominalizations can refer not only to the action expressed by the verb, but also to the agent (driver), instrument (lubricant), pa- tient (employ-~-~and result (assembly) of this action. Except maybe for results, the term "role nominal" seems appropriate, since the nominaliza- tion refers to the filler of one of the roles of the verb. Although these nouns are not, strictly speaking, predicative, they generally permit to form NC's in which the other is interpreted as an argument of the underlying verb: (9) truck driver = one who drives trucks (I0) engine lubricant = something with which engines are cated ill) 18H employee = one who is employed by IBH lubri- (12) pump assembly = the result of assembling a pump With agent and instrument nominals (9, i0), there is a strong tendency to assign an "habitual" aspect to the underlying verb and generic refe- rence to its object; this kind of interpretation is awkward when the argument appears in a PP: (I)) ?a driver of trucks B. Root Nouns The term "role nominal" is due to Finin (1980) who uses it to cover not only nominaliza- tions of the type described above, but also any noun which can be semantically interpreted as referring to a role of a given verb, whether or not this verb happens to be morphologically related. This claims amounts to saying that, semantically, we have relations such as the following: (14) a. pilot:fly :: driver:drive b. gun:shoot :: lubricant:lubricate c. food:eat :: employee:employ A related claim underlies Zholkhovskij & Hel'cuk's (1971) use of certain "lexical func- tions": (15) a. Sl(bUy) = buyer b. S3(buy) = seller c. Sloc(battle) = battlefield d. Sinst(See) = eyes where Si, $1o c and Sinst are defined as functions t~t yield the typical name of, respectively, the i- arrant, the location a-nd the instrument. 
Fillmore (1971) also makes a comparable proposal, when he suggests that the lexical entry of knife should include (16) as a component:

(16) use: I of <V O I A>, where V = cut

C. On the Definition of Role Nominals

How exactly should we understand the notion of role nominals? Finin's statement that they refer to an underlying case role of a verb is not accurate: they refer to role fillers, not to the roles themselves. Assuming that they denote a set of role fillers, we can ask if this set is:
a. the set of all possible fillers for the role; or
b. a set of typical fillers for that role; or
c. any set of possible fillers for the role.

Possibility (a) seems to describe correctly numerous cases of deverbal nouns. For example, it seems clear that anyone who is employed is an employee. However, with agents and instruments, there is a tendency to reserve the role nominal for habitual fillers: one hesitates to apply the term writer to a person who only wrote a letter. Moreover, with this definition, knife would not be a role nominal for cut, since it is perfectly possible to cut bread with a sword. The notion would then lose much of its power, since we need it to explain why bread knife is interpreted as "a knife used as an instrument for cutting bread". On the other hand, definition (c) seems too weak: even if a sword can be used to cut bread, bread sword is odd in a normal context (contextual factors will be discussed below). Thus, definition (b) seems the most appropriate.

D. Justifying Role Assignments

For those role nominals where there is no morphological evidence of relatedness with the underlying verb, one is forced to rely mostly on intuitions. However, the risk of arbitrariness can be reduced by looking for further evidence from other linguistic phenomena.

1. Evaluative Adjectives

When Fillmore (1971) introduced his notion of role nominal, he was attempting to characterize the behavior of evaluative adjectives, not the behavior of NC's. He noted that a good X, where X is a role nominal, means:
a. if X is an agent: one who performs the associated activity skilfully (a good driver, a good pianist);
b. if X is an instrument: a thing which permits the associated activity to be performed easily (a good knife, a good broom);
c. in other cases, it seems that the resultant meaning is less predictable (good food has certain properties concerning nutritiousness and taste; a good house is comfortable, built to last, etc.).

As far as we can tell, the evaluation domain for agents and instruments is precisely the activity which is relevant for the understanding of NC's. Thus while good driver evaluates the driving, car driver specifies its object; moreover, in good car driver, car falls within the evaluation domain: car drivers and truck drivers are evaluated on different scales. Evaluative adjectives can thus be used as a further source of evidence in the description of role nominals.

2. Denominal Verbs

Another phenomenon which is relevant to the question of role nominals and NC's is the creation of denominal verbs. Clark & Clark (1979) examine this very productive process, in which a verb formed by zero-affixation is understood "in such a way that the parent noun denotes one role in the situation, and the remaining surface arguments of the denominal verb denote other roles in the situation" (p. 787). For example, Max subwayed downtown means that Max went downtown on a subway.
Intuitively, it appears obvious that the knowledge involved in this interpretation process is very closely related to whatever permits interpreting the NC downtown subway as "the subway that goes downtown". An important aspect of C & C's work is to show that the formation of denominal verbs is heavily dependent on contextual knowledge. Thus if you and I both know that Phil has long had the crazy habit of sticking trombones into the nose of bypassers, I can inform you that Phil has just tromboned a police officer. Notice that in the same context, good trombone would presumably mean a trombone that is easy to stick into someone's nose. In such cases, the interpretation is based on particular, situational knowledge. NC's can also be based on this type of knowledge. For example, if that same Phil uses different types of trombones for men and women, we might speak of his women trombones, to mean trombones of the type that Phil sticks into the nose of female bypassers.

But C & C claim that, more frequently, denominal verbs are based on generic knowledge about concrete objects, knowledge which is accessible to all speakers in a linguistic community. This claim is to be linked with Downing's (1977) remark that although NC's are sometimes "deictic", they are most often based on generic or permanent relationships. Obviously, the relevant knowledge (whether particular or generic) is at least as much about the world as about language. In fact, it is clear that both types condition each other. For example, objects that are used as vehicles will tend to be verbalized on the syntactic pattern of movement verbs; and in the absence of other evidence, one is likely to infer from its syntax that The fairy pumpkined the kids to Narnia alludes not to an "eating pumpkin" but to a "transportation pumpkin". In order to predict the range of meaning of large classes of denominal verbs, C & C propose to encode some generic knowledge in the lexicon, by means of "predominant features" such as:

(17) a. x is the agent of Act (to pilot y)
b. x is the instrument of Act (to pump y)
c. x is the result of Act (to group y)
d. x is the location of y (to can y)
e. x is the locatum of y (to cover y)
f. x is the time of Act (to weekend in y)

It is quite apparent that these features are meant to capture the same type of facts as role nominals. It is easy to find NC's which parallel each class of verb singled out by them:

(17') a. aircraft pilot b. oil pump c. tank group d. oil can e. pump cover f. Montreal weekend

We believe that the semantic mechanisms that are at work in the interpretation of NC's, denominal verbs and evaluative adjectives are basically similar. By using independent evidence from these phenomena, and an adequate generalization of the notion of role nominal, one can go a long way toward uncovering the relevant semantic mechanisms. The notion of role nominal, as we understand it, is at the interface between linguistic and extralinguistic knowledge. Nouns may have several different roles, depending on the contingencies of the entities that they denote; in those cases context usually makes one role more salient. In fact, context may even impart a noun with an unusual role, as we have seen. However, we believe that in ordinary texts, such as technical manuals, NC's that are analyzable in terms of role nominals are most often based on the usual, generic roles of the nouns. But this can only be shown through a large scale description of the relevant NC's.
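A hedged sketch (ours, not Clark & Clark's or the author's implementation) of how the predominant features in (17) could be stored and used to gloss the parallel compounds in (17'); the activity verbs paired with the roles below are partly our own fillers, since (17) leaves some of them unspecified.

# Predominant features as a lexicon table: noun -> (generic role, activity).
PREDOMINANT_FEATURE = {
    "pilot":   ("agent",      "fly"),      # (17a)
    "pump":    ("instrument", "pump"),     # (17b)
    "group":   ("result",     "group"),    # (17c)
    "can":     ("location",   "put in"),   # (17d)
    "cover":   ("locatum",    "cover"),    # (17e)
    "weekend": ("time",       "spend"),    # (17f); activity is our guess
}

def gloss_nc(modifier, head):
    """Gloss 'modifier head' via the head's generic role, if one is recorded."""
    feature = PREDOMINANT_FEATURE.get(head)
    if feature is None:
        return "no generic role recorded for '%s'" % head
    role, act = feature
    return "'%s %s': %s is the %s of '%s'; %s fills another argument" % (
        modifier, head, head, role, act, modifier)

print(gloss_nc("aircraft", "pilot"))
print(gloss_nc("oil", "pump"))

The same table could in principle serve the denominal-verb and evaluative-adjective facts discussed above, which is the paper's point about a single underlying mechanism.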
E. A Tentative Scheme

Since the knowledge we have to encode is tied to world knowledge, there is a risk that it could become overwhelmingly complex. However, we will minimize this risk by limiting the scope of our inquiry to a suitably restricted sublanguage. Technical manuals are interesting in this respect because they exhibit at the same time a tightly constrained universe of interpretation, and an exceptional productivity in compounding. As to the semantic framework, we will not use case grammar, even if our discussion of role nominals was couched in the terms of this theory for expository purposes. As is well-known, case grammars raise a number of difficult problems. For example, the distinction between agents and instruments is problematic. At the morphological level, both can give rise to -er nominalizations; at the syntactic level, both can appear in subject position, and in with-NP environments (cf. co-agents); and at the semantic level, both notions are frequently indistinguishable, especially in texts dealing with machines, such as technical manuals. Since action verbs can generally occur with an agent, an instrument, or both, it does not seem necessary to include two slots for each verb in the dictionary: general rules should predict the relevant facts.

Another problem is that if we want the notion of role nominal to be general enough, a role nominal should be able to refer to entities which are not case slots, at least in the usual sense of this term. Result nominals, for instance, do not refer to a case slot as such. There is in fact no evidence that the interpretation rules for NC's crucially involve case roles rather than argument places, and our descriptions will be couched in terms of a predicate/argument notation. This notation is perfectly compatible with the use of an ontology that is richer than standard predicate logic. For example, if we agree with Jackendoff's (1983) claim that "places" and "paths" constitute basic cognitive categories, we can define "typed" variables with the appropriate range. In our descriptions below, p will denote a path variable, that is, a variable ranging over entities denoted by complex expressions constructed out of path functions (into, from, toward, via, etc.) and places. Similarly, we will use e as an "event" variable, as in Moore (1981) and Hobbs (1984).

A role nominal will contain within its lexical entry one or more statements of the form "x such that P(x)". We do not think that diacritic markers such as "typical function" are required; rather, notions such as typicality should be a consequence of the semantic rules which interpret lexical entries. We give below a few examples of the type of lexical specification for role nominals with which we want to experiment. At this stage of our work this should be taken as nothing more than a first approximation. The material enclosed in brackets represents selectional restrictions.

(18) pilot: x such that FLY(x,y,p) <aircraft(y)>
(19) adapter: x such that ADAPT(x,y,z) (this entry is produced by derivational morphology)
(20) tank: x such that CONTAIN(x,y) <fluid(y)> (container, chamber, reservoir, bay, compartment, etc. are similar, but selectional restrictions can differ)
(21) hinge: x such that ATTACH(x,y,z)
(22) brace: x such that SUPPORT(x,y)
(23) witness: x such that SEE(x,e)
(24) line: x such that CONDUCT(x,y,p) <fluid(y)>
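A first approximation of our own (not the paper's implementation) to machine-readable versions of entries (18)-(24): each entry records the predicate, the argument place filled by the noun itself, and the selectional restrictions corresponding to the bracketed material above. The binding procedure is a deliberately naive stand-in for the interpretation rules discussed in the next section.

# Role-nominal lexicon entries as predicate/argument patterns.
LEXICON = {
    "pilot":   {"pred": "FLY",     "self": "x", "args": ["x", "y", "p"],
                "restr": {"y": "aircraft"}},
    "tank":    {"pred": "CONTAIN", "self": "x", "args": ["x", "y"],
                "restr": {"y": "fluid"}},
    "witness": {"pred": "SEE",     "self": "x", "args": ["x", "e"],
                "restr": {}},
    "line":    {"pred": "CONDUCT", "self": "x", "args": ["x", "y", "p"],
                "restr": {"y": "fluid"}},
}

def bind_modifier(modifier, mod_type, head):
    """Bind the modifier to the first open argument of the head's predicate
    whose selectional restriction is compatible with the modifier's type."""
    entry = LEXICON.get(head)
    if entry is None:
        return None
    for arg in entry["args"]:
        if arg == entry["self"]:
            continue
        wanted = entry["restr"].get(arg)
        if wanted is None or wanted == mod_type:
            return "%s(%s=%s, %s=%s)" % (entry["pred"], entry["self"],
                                         head, arg, modifier)
    return None

print(bind_modifier("oil", "fluid", "tank"))    # -> CONTAIN(x=tank, y=oil)
print(bind_modifier("fuel", "fluid", "line"))   # -> CONDUCT(x=line, y=fuel)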
V CONSTRAINING INTERPRETATION RULES

The preceding sections have discussed two aspects of the lexical semantics of nouns that are relevant for the analysis of NC's: argument-taking nouns and role nominals. Assuming that the lexicon contains this type of information, we can now ask how it is used by semantic interpretation rules. More specifically, the picture that emerges is that lexical entries provide predicate/argument patterns which contain variables; these variables have to be bound to the semantic material associated with some other noun. We must then ask what are the rules that govern this binding process.

A. Relative Order of the Two Nouns

1. Predicative Nouns

Selkirk (1982) claims that the modifier noun can satisfy an argument of the headnoun but not vice-versa. Finin (1980) claims that argument satisfaction (or slot filling) is possible in both directions. Finally, McDonald (1982) takes the intermediate position that satisfaction of the head by a modifier is much more frequent. Now, consider the following pairs of examples:

(25) a. oil temperature b. ??temperature oil
(26) a. uranium scarcity b. ??scarcity uranium
(27) a. pump removal b. ??removal pump
(28) a. student invention b. ??invention student

It is clear that the (b) examples, if interpretable at all, cannot easily realize the pattern found in the (a) examples. However, there is a very productive process which permits an action nominal to occur to the left of its argument; this pattern is most productive with inanimate headnouns:

(29) a. repair man b. ?man repair
(30) a. cooling device b. device cooling
(31) a. bleed valve b. ??valve bleed c. valve bleeding
(32) a. jacking point b. ??point jacking

In the first three (a) examples, the headnoun is interpreted as a subject argument (agent or instrument). In (32a), point is interpreted as a location (where one jacks something). In all of the (a) examples, the action nominal is interpreted as denoting a permanent role or function of the headnoun. When the order is reversed, meaning and, possibly, acceptability are affected. In the (b) examples, the action nominal is not interpreted as expressing a permanent function of the other noun. Thus, if one can manage subject interpretations in (29b) and (31c) -- "a repair done by men (not robots)" and "a valve which bleeds something" -- it will not follow that the men are repair men or that the valve is a bleed valve. The pattern "argument + predicative noun" has some peculiarities of its own; for example, why is (32b) unacceptable no matter the role assignment that one makes? Or why is girl swimming unacceptable? In the latter case, it cannot be, as suggested by Selkirk (1982), that subject arguments are prohibited in general: pump failure is fine. It may be that subjects are ruled out for -ing nominals, unless they express a result (cf. consumer spending). But on the whole, the pattern where the predicate comes first is much more constrained, since it permits only action nominals, and produces a semantic result very similar to role nominals. Thus, a cooling device and a cooler denote very similar entities; the same is true of jacking system and jack.
Notice that in examples such as:

(33) air temperature monitoring system

monitoring system forms a constituent, even if air temperature is understood as the object of the monitoring; this is confirmed by the fact that (34) receives a similar interpretation:

(34) monitoring system for air temperature

It seems that this modification pattern has the effect of creating a role nominal out of its two constituents.

2. Role Nominals

Here again, the order of the nouns strongly conditions the resulting interpretation:

(35) a. truck driver b. ??driver truck
(36) a. oil pump b. pump oil
(37) a. pump case b. ?case pump
(38) a. equipment bay b. bay equipment

In the (a) examples, the modifier noun is interpreted as the object of the underlying verb (drive, transfer, detect, hold); here too, a permanent connection is established. A truck driver is a person whose (social) function is to drive trucks, and that person is still a truck driver when he/she drives a scooter. If you use an oil pump to pump your tomato juice, it still qualifies to be called an oil pump. But when the nouns are inverted, there is a change in meaning and, possibly, in acceptability. It is hard to find a sensible interpretation for driver truck. The most natural interpretation for pump oil is "oil used for lubricating pumps": in this case, oil has become the relevant role nominal. There is also a possibility for an interpretation such as "oil in the pump" or "oil coming from the pump". We are not sure how that type of interpretation should be produced. But it seems clear that oil can only become an actor in a permanent function of the pump when pump is the headnoun. It is easy to see that the other examples exhibit a similar behavior. We therefore conclude that role nominals tend to establish permanent, functional connections with their modifiers only. When they modify a headnoun, either this noun will absorb them itself (if it is a predicative or role nominal) or else a looser connection such as a locative one will be created.

B. Multiple Modification

1. Argument Ordering

When a predicative noun or role nominal in head position is modified by more than one argument, it seems that ordering (39) will apply, as illustrated in the examples below:

(39) time < loc < paths < subject < object

(40) a. computer fuel testing b. ??fuel computer testing
(41) a. Montreal jet flights b. ??jet Montreal flights
(42) a. evening Montreal trains b. ??Montreal evening trains

2. A Broader Perspective

So far, we have concentrated on predicative nouns and role nominals, thereby excluding various types of compounds. When we look at the broader picture, different problems are raised. First, (in technical manuals) we find examples with intervening adjectives as in (43):

(43) (Remove) front cockpit right shelf aft control panel.

While the formation of NC's such as those we have examined in previous sections is usually considered to be a lexical process, there is some evidence that, in examples like (43), syntactic processes are at work. Notice that each one of the three main groups in (43) is referential: "the (...) panel of the (...) shelf of the (...) cockpit", much as if there was an implicit genitive marker. But genuine compounds (that is, those of syntactic category N) are usually considered to be anaphoric islands (Levi, 1978). One possible explanation is that (43) should in fact be analyzed as containing three NP's. A comparison of the following examples provides some support for that view:

(44) a. Remove front cockpit right shelf. b.
Remove right cockpit shelf.

In (44a), cockpit has definite reference and is interpreted as no more than a location for the shelf. However, in (44b), when the scope of right includes shelf, cockpit becomes nonreferential; cockpit shelf then denotes a type of shelf. These facts seem to indicate that this scoping of right, in virtue of syntactic constraints, forces one to take cockpit shelf as something of category N; and when we do so, shelf becomes a role nominal whose argument receives the type of "permanent function" interpretation discussed above. Core NC's, those which establish argument connections with a predicative noun or role nominal, appear to form a tightly bound unit:

(45) a. ??oil high temperature b. ??truck good driver c. ??equipment left bay

Some adjectives, the "nonpredicating" adjectives of Levi (1978), can appear within core NC's. But they are in fact nouns in disguise, and they receive an argument interpretation; in (46), it is understood that one repairs a structure.

(46) aircraft structural repairs

Thus, if we are willing to make a distinction between syntactic compounding (similar to genitive phrases) and core NC's, it can be claimed that true compounds form anaphoric islands and cannot be separated by adjectives. Finally, there are certain types of nominal modifiers (for which we have as yet no analysis to offer) whose degree of cohesion with the headnoun is somehow intermediate between nouns interpreted as arguments and nouns in separate (pseudo-genitive) NP's. For example, nouns which stand in a "material of" relation to the head cannot precede adjectives, but must precede arguments:

(47) a. large steel tank b. ??steel large tank
(48) a. steel fuel tank b. ??fuel steel tank

The constraints that we have seen, and no doubt others yet to be discovered, are certainly significant from a theoretical point of view; but they also have practical value for NLP systems which have to deal with NC's.

VI CONCLUSIONS

We have discussed two classes of nouns that have particular importance in NC formation: predicative nouns and role nominals. We have shown that the latter class is also relevant to the description of evaluative adjectives and denominal verbs. A tentative framework for the description of role nominals has then been proposed. Finally, we have seen that rules which interpret NC's obey a number of constraints. A distinction has been made between "pseudo-genitive" constructions and core NC's, the latter forming a tightly bound unit with internal ordering constraints. Much work remains to be done on the issues that we have discussed here; it is likely that experimentation with actual descriptions on a larger scale will lead to several refinements and revisions. Moreover, as has been pointed out in the last section, our description still ignores a number of types of compounding, and further difficulties are to be expected. Nonetheless, we are confident that our work will, in the near future, result in a small NLP system capable of analyzing a broad range of NC's.

VII REFERENCES

Aronoff M., Contextuals, in Language, 56:4, 744-758, 1980.
Clark E., Clark H., When Nouns Surface as Verbs, in Language, 55:4, 767-811, 1979.
Downing P., On the Creation and Use of English Compound Nouns, in Language, 53:4, 810-842, 1977.
Fillmore C., Types of Lexical Information, in D. Steinberg and L.
Jakobovits (eds.), Semantics, Cambridge Univ. Press, 1971.
Finin T., The Semantic Interpretation of Compound Nominals, Coordinated Science Lab., Univ. of Illinois, 1980.
Hobbs J., Sublanguage and Knowledge, paper presented at the NYU Conference on Sublanguages, January 1984.
Isabelle P., Machine Translation at the TAUM Group, paper presented at the Lugano Tutorial on Machine Translation, April 1984, to appear.
Jackendoff R., Semantics and Cognition, MIT Press, 1983.
Kittredge R., Lehrberger J., Sublanguage: Studies of Language in Restricted Semantic Domains, De Gruyter, 1982.
Lees R., The Grammar of English Nominalizations, Mouton, 1963.
Levi J., The Syntax and Semantics of Complex Nominals, Academic, 1978.
McDonald D.B., Understanding Noun Compounds, Carnegie-Mellon University, 1982.
Moore R., Problems in Logical Form, Tech. note 241, SRI, 1981.
Selkirk E., The Syntax of Words, MIT Press, 1982.
Zholkovskij A., Mel'cuk I., Sur la synthèse sémantique, in T.A. Informations, 11:2, 1971.

516 | 1984 | 109 |
Lexicon Features for Japanese Syntactic Analysis in Mu-Project-JE

Yoshiyuki Sakamoto, Electrotechnical Laboratory, Sakura-mura, Niihari-gun, Ibaraki, Japan
Masayuki Satoh, The Japan Information Center of Science and Technology, Nagata-cho, Chiyoda-ku, Tokyo, Japan
Tetsuya Ishikawa, Univ. of Library & Information Science, Yatabe-machi, Tsukuba-gun, Ibaraki, Japan

0. Abstract

In this paper, we focus on the features of a lexicon for Japanese syntactic analysis in Japanese-to-English translation. Japanese word order is almost unrestricted, and Kakujo-shi (postpositional case particles) are an important device, acting as case labels (case markers) in Japanese sentences. Therefore case grammar is the most effective grammar for Japanese syntactic analysis. The case frame governed by Yougen and having surface case (Kakujo-shi), deep case (case label) and semantic markers for nouns is analyzed here to illustrate how we apply case grammar to Japanese syntactic analysis in our system. The parts of speech are classified into 58 sub-categories. We analyze semantic features for nouns and pronouns classified into sub-categories, and we present a system of semantic markers. Lexicon formats for syntactic and semantic features are composed of different features classified by part of speech. As this system uses LISP as the programming language, the lexicons are written as S-expressions in LISP, punched onto tapes, and stored as files in the computer.

1. Introduction

The Mu-project is a national project supported by the STA (Science and Technology Agency), the full name of which is "Research on a Machine Translation System (Japanese-English) for Scientific and Technological Documents."* We are currently restricting the domain of translation to abstract papers in scientific and technological fields. The system is based on a transfer approach and consists of three phases: analysis, transfer and generation. In the first phase of machine translation, analysis, morphological analysis divides the sentence into lexical items and then proceeds with semantic analysis on the basis of case grammar in Japanese. In the second phase, transfer, lexical features are transferred and, at the same time, the syntactic structures are also transferred by matching tree patterns from Japanese to English. In the final generation phase, we generate the syntactic structures and the morphological features in English.

2. Concept of Dependency Structure based on Case Grammar in Japanese

In Japan, we have come to the conclusion that case grammar is the most suitable grammar for Japanese syntactic analysis in machine translation systems. This type of grammar had been proposed and studied by Japanese linguists before Fillmore's presentation. As word order is heavily restricted in English syntax, ATNG (Augmented Transition Network Grammar) based on CFG (Context Free Grammar) is adequate for syntactic analysis in English. On the other hand, Japanese word order is almost unrestricted and Kakujo-shi play an important role as case labels in Japanese sentences. Therefore case grammar is the most effective grammar for Japanese syntactic analysis. In Japanese syntactic structure, the word order is free except for a predicate (verb or verb phrase) located at the end of a sentence. In case grammar, the verb plays a very important role during syntactic analysis, and the other parts of speech only perform in partnership with, and equally subordinate to, the verb. That is, syntactic analysis proceeds by checking the semantic compatibility between verb and nouns. Consequently, the semantic structure of a sentence can be extracted at the same time as syntactic analysis.

3. Case Frame Governed by Yougen

The case frame governed by Yougen and having Kakujo-shi, case labels and semantic markers for nouns is analyzed here to illustrate how we apply case grammar to Japanese syntactic analysis in our system. Yougen consists of verbs, Keiyou-shi (adjectives) and Keiyoudou-shi (adjectival nouns). Kakujo-shi include inner case and outer case markers in Japanese syntax. But a single Kakujo-shi corresponds to several deep cases: for instance, "NI" indicates more than ten case labels including SPAce, Space-TO, TIMe, ROLe, MANner, GOAl, PARtner, COMponent, CONdition, RANge, ... We analyze the relations between Kakujo-shi and case labels and write them out manually according to the examples found in sample texts.

* This project is being carried out with the aid of a special grant for the promotion of science and technology from the Science and Technology Agency of the Japanese Government.
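A simplified sketch of our own (not the project's code, which is in LISP) of the many-to-many relation just described: a surface particle maps to several candidate deep cases, and a verb's case frame, together with the noun's semantic markers, filters the candidates. The deep case names follow Table 1 below; the particular mappings and the sample frame are illustrative assumptions only.

# Surface Kakujo-shi -> candidate deep case labels (illustrative).
KAKUJOSHI_TO_CASES = {
    "NI": ["SPAce", "Space-TO", "TIMe", "ROLe", "MANner",
           "GOAl", "PARtner", "COMponent", "CONdition", "RANge"],
    "KARA": ["Space-FRom", "Time-FRom", "ORigin", "MATerial"],
}

def candidate_cases(particle, noun_markers, case_frame):
    """Keep only the deep cases of the particle that the verb's case frame
    licenses for a noun carrying the given semantic markers."""
    kept = []
    for case in KAKUJOSHI_TO_CASES.get(particle, []):
        allowed = case_frame.get(case)
        if allowed and (allowed & noun_markers):
            kept.append(case)
    return kept

# Hypothetical fragment of a verbal case frame: deep case -> allowed markers.
frame = {"TIMe": {"TI"}, "GOAl": {"SP"}}
print(candidate_cases("NI", {"TI"}, frame))   # -> ['TIMe']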
Consequently. the semantic structure of a sentence can be extracted at the same time as syntactic analysis. 3. __ca.$_e_Er ame .~oYer n~ed ..by_ J:hu~/C_ll The case frame governed by !_bAag_<tn and having l~/_~Luio:~hi, case label and semantic markers for" nouns is analyzed here to illustrate how we apply case grmlmlar to Japanese syntactic analysis in our system. }i~ff.TCil consists of vet b. ~'~9ou _.s'hi ~adjec:tive and L<Cigo~!d()!#_mh~ adjectival noun.. L~bkujo ,~hi include inner case and outer' case markers in Japanese syntax. But a single Iqol,'ujo ~/l; corresi:~ond.~ to several deep cases: for instance, ".\'I" indicates more than ten case labels including SPAce. Sp~:ee TO. TIMe, ROl,e, MARu,-:I . GOAl. PARtr,cu'. COl'~i,or~ent. CONdition. 9ANge ...... We analyze re]atioP,<; br:twu,::n [<~,kuj~, ,>hi anH cas,:, labels and wr.i..i,c thcii~ out, manu~,l]y acc,.:,idii~, t,:, the ex~_m,;:]e.s fotmd o;;t ill samr, te texts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . * This project is being carried out with the aid of a specia], gro~H for the promotion of scien,:.c ah,! technology from the Science and Techno]ogy Agency of the Japane:ze GovoYf~: ~,t. 42 As a result of categorizing deep cases, 33 Japanese case labels have been determined as shown in Table I. T~_bi~_..!~__Ca_s~_Lahe~._fo_~_Ve~bal_Ca_se~_rames English Label Examples ~~- 1980 ~£(c ~[T~n. ~9, %99,,5 • ~;, ~)] I~. 10 m/sec. "C .~....~,a~ -~ ~ ,5 ~ <--9 ~ ,~', - lr r~] b-u Japanese Label (2) ";H~ OBJec~ (3) ~-~- RECipient (4l ~-Z.~ ORigin (5) ~.~- i PARmer (6) ~-~ 2 OPPonent {7) 8-~ TIMe (8)" ~ • ~i%,~,, Time-FRom (9) B@ • ~.~.,~, Time-TO leO) ~ DURatmn (l I ) L~p)~ SPAce 02) ~ • ~.,~,, Space-FRom (13) h~ • $~.,~., Space-TO (14") hP~ - ~ Space-THrough (15) ~Z~ ~.~, SOUrce (16) ~,~,~. GOAl (17) [~ ATTribute (18) ~.{:~ • iz~ CAUse (19) ~ • ii~. ~. TOO~ (20) $~ MATerial (21) f~ ~- '~ COMponent (22) 7]~ MANner (23) ~= CONdition (24) ~] ~ PURPOse (25) {~J ROLe (26) [-~ ~ ~.~ COnTent (27) i~ [~l ~. ~ RANge (28) ~ TOPic (29) [Lg...~,, VIEwpoint (30) ,L'~ tt~ COmpaRison (32) ~ DEGree 5%~/~-@. 3 ~0@-~/-,5 (33l P~]~ '~ PREdicative ~ "~,.~ 8 Note: The capitalized letters form English acronym for that case label. the When semantic markers are recorded for nouns in the verbal case frames, each noun appearing in relation to l/2u(~'n and Kclkuio-shi in the sample text is referred to the noun lexicon. The process of describing these case frames for lexicon entry are given in Figure ]. For each verb, l<ctkuio-Mtt and Keiuoudoi~-_.shi, Koktuo-shi and case labels able to accompany the verb are described, and the semantic marker for the noun which exist antecedent to that Kokuio-shL are described. 4. Sub-cat~or_ies of Parts of SDeech accordiDg to their Syntactic Features The parts of speech are classified into 13 main categories: nouns, pronouns, numerals, affixes, adverbs. verbs. ~eiy_ou--~h~. Ke~uoudou-shi. Renlcli-shii~adnoun), conjunctions, auxiliary verbs, markers and ./o~shi(postpositional particles;. Each category is sub-classified and divided into 56 sub-categories(see Appendix A); those which are mainly based on syntactic features, and additionally on semantic features. For example, nouns are divided into 11 sub-categories; proper nouns, common nouns, action nouns I (S~!tC~!--~jc i sh i ), action nouns 2 (others }. adverbial nouns. 
proper nouns, common nouns, action nouns 1 (Sahen-meishi), action nouns 2 (others), adverbial nouns, Kakujo-shi-teki-meishi (nouns with case features), Setsuzokujo-shi-teki-meishi (nouns with conjunction features), unknown nouns, mathematical expressions, special symbols and complementizers. Action nouns are divided into Sahen-meishi (nouns that can form a noun-plus-SURU ("doing") composite verb) and other verbal nouns, because an action noun 1 is also used as the word stem of a verb.

Figure 1. Block Diagram of the Process of Describing Verbal Case Frames
(Flowchart; only partly recoverable. Its steps include: identify the taigen-bunsetsu (substantive phrase) governed by a yougen; convert voices other than active (ACTIVE, PASSIVE, CAUSATIVE, POTENTIAL, TEARU) to active; replace kakarijo-shi with kakujo-shi; fill in the kakujo-shi of the antecedent noun for a verb phrase in a relative clause; construct the case frame format.)

Adverbs are divided into 4 sub-categories for modality, aspect and tense. In Japanese, the adverb agrees with the auxiliary verb: Chinjutsu-fuku-shi agree with the aspect, tense and mood features of specific auxiliary verbs, Joukyou-fuku-shi agree with aspect and tense, and Teido-fuku-shi agree with gradability. Auxiliary verbs are divided into 5 sub-categories based on modality, aspect, voice, cleft sentence and others. Verbs may be classified according to their case frames, and it is therefore not necessary to sub-classify them further.

5. Semantic Marking of Nouns

We analyze semantic features, and assign semantic markers, to Japanese words classified as nouns and pronouns. Each word can be given up to five semantic markers. The system of semantic markers for nouns is made up of 10 conceptual facets based on 44 semantic slots, with 38 plural filial slots at the end (see Figure 2).

Figure 2. System of Semantic Markers for Nouns
(Tree diagram; not reproducible. Its slot names include Nation-Organization, Plant, Animal, Inanimate, Natural, Artificial, Material, Product, Physical Phenomenon, Social Phenomenon, Power-Energy, Movement-Reaction, Effect-Operation, Sign-Symbol, Emotion, Recognition-Thought, Part, Element-Content, Property-Characteristic, Form-Shape, State-Condition, Number, Measure, Unit, Standard, Space-Topography, Time Point, Time Duration and Time Attribute.)

5.1 Concept of semantic markers

The 10 conceptual facets are listed below.

1) Thing or Object. This conceptual facet contains things and objects; that is, actual concrete matter. This facet consists of such semantic slots as Nation/Organization, Animate object, Inanimate object, etc.

2) Commodity or Ware. This conceptual facet contains commodities and wares; that is, artificial matter useful to humans. This facet consists of such semantic slots as Material, Means/Equipment, Product, etc.

3) Idea or Abstraction. This conceptual facet contains ideas and abstractions; that is, non-matter resulting from intellectual activity in the human brain. This facet consists of such semantic slots as Theory, Conceptual object, Sign/Symbol, etc.

4) Part. This conceptual facet contains parts; that is, structural parts, elements and contents of things and matter.
5) Attribute. This conceptual facet contains attributes; that is, properties, qualities or features representative of things. This facet consists of semantic slots such as Property/Characteristic, Status/Figure, Relation, Structure, etc.

6) Phenomenon. This conceptual facet contains phenomena; that is, physical, chemical and social actions without human activity. This facet consists of semantic slots such as Natural phenomenon, Artificial phenomenon/Experiment, Social phenomenon, Power/Energy, etc.

7) Doing or Action. This conceptual facet contains human doings and actions. This facet consists of such semantic slots as Action/Deed, Movement/Reaction, Effect/Operation, etc.

8) Mental activity. This conceptual facet contains operations of the mind and mental processes. This facet consists of semantic slots such as Perception, Emotion, Recognition/Thought, etc.

9) Measure. This conceptual facet contains measures; that is, the extent, quantity, amount or degree of a thing. This facet consists of semantic slots such as Number, Unit, Standard, etc.

10) Time and Space. This conceptual facet contains space, topography and time.

5.2 Process of semantic marking

The semantic marker for each word is determined by the following steps.
1) Determine the definition and features of the word.
2) Extract semantic elements from the word.
3) Judge the agreement between a semantic slot concept and an extracted semantic element, word by word, and attach the corresponding semantic markers.
4) As a result, one word may have many semantic markers. However, the number of semantic markers for one word is restricted to five. If there are plural filial slots at the end, the higher family slot is used for the semantic featurization of the word.

It is easy to decide semantic markers for technical and specific words. But it is not easy to mark common words, because one word has many meanings.

6. Lexicon Format for Syntactic Analysis

Lexicon formats for syntactic and semantic features are composed of different features classified by part of speech.

1) Features of verbs:
Subject code: verb used in a specific field (only electrical in our experiment)
Part of speech in syntax: verb
Verb pattern: classifying the verbal case frame; a categorized marker like Hornby's verb patterns is planned to be used
Entry to the lexical unit of the transfer lexicon
Aspect: stative, semi-stative, continuative, resultative, momentary or progressive/transitive
Voice: passive, potential, causative or "TEARU" (perfective/stative)
Volition: volitive, semi-volitive or volitionless
Case frame: surface case, deep case, semantic marker for the noun, and inner/outer case classification
Idiomatic usage: syntax and verb pattern of idioms accompanying the verb (e.g. catch a cold)

2) Features of Keiyou-shi and Keiyoudou-shi (both are described in almost the same format):
Sub-category of part of speech: emotional, property, stative or relative
Gradability: measurability and polarity
Nounness grade: nounness grade for Keiyou-shi (++, +, -, --)

3) Features of nouns: sub-category of noun (proper, common, action, adverbial, etc.), lexical unit for the transfer lexicon, semantic markers, thesaurus code, and usage.
4) Features of adverbs: sub-category of adverb (Joukyou, Teido, Chinjutsu, etc.) considering modality, aspect, tense and gradability

5) Features of other taigen: sub-category of Rentai-shi (demonstrative, interrogative, definitive, or adjectival) and conjunction (phrase or sentence)

6) Features of Jodou-shi (auxiliary verbs): Jodou-shi are sub-classified by sub-category on semantic features:
Modality (negation, necessity, suggestion, prohibition, ...)
Aspect (past, perfect, perfective stative, progressive, continuative, finishing, experiential, ...)
Voice (passive or causative)
Cleft sentence (purpose and reason)
Others ("TEARU", "TEOKU", "SOKONERU", etc.)

7) Features of Jo-shi: sub-category of Jo-shi: case, conjunctive, adverbial, collateral, final or Jun-tai
Case: features of surface case (e.g. "GA", "WO", "NI", "TO", ...), modified relation (Renyou or Rentai modification)
Conjunctive: sub-category of semantic features (cause/reason, conditional/provisional, accompaniment, time/place, purpose, collateral, positive or negative conjunction, etc.)
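One possible rendering of our own of the verb features listed under 1) above; the project itself stores entries as LISP S-expressions (cf. Figure 3 below). The lexical unit "kaiseki-suru" is hypothetical, and the marker codes OF/OH and IT/IC/CO simply echo the marker sets visible in Figure 3.

# A verb lexicon entry sketched as a Python dictionary.
VERB_ENTRY = {
    "lex": "kaiseki-suru",        # hypothetical lexical unit
    "pos": "verb",
    "subject_code": "electrical",
    "aspect": "momentary",
    "voice": ["passive", "potential", "causative"],
    "volition": "volitive",
    "case_frame": [
        {"surface": "GA", "deep": "SUB", "markers": {"OF", "OH"},
         "inner": True},
        {"surface": "WO", "deep": "OBJ", "markers": {"IT", "IC", "CO"},
         "inner": True},
    ],
}

def surface_for(entry, deep_case):
    """Find the surface particle realizing a deep case in this frame."""
    for slot in entry["case_frame"]:
        if slot["deep"] == deep_case:
            return slot["surface"]
    return None

print(surface_for(VERB_ENTRY, "OBJ"))   # -> WO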
J., Nakamura, J. and Nagao, M.; Analysis Grammar of Japanese for Mu-project. COTING84. {3) Nakamura. J.. Tsujii. J. and Nagao. M.: Grammar Writing Syst~n (GRADE, of Mu-Machine Translation Project. COTING84. (4; Nakai, H. and Satoh, M. : A Dictionary with Taigen as its Core, Working Group Report of Natural Language Processing in Information Processing Society of Japan, WGNL 38 7, July, 1983. (5 Nagao. M. ; Introduction to Mu Project. WGNL 38 2, 1983. 6 Saka!roto. Y. : Yougcn and Fuzo'=:u- go Lexicon in VerbJa! Case Frame. WGNL 38 8. 1983. !7 ',. Sak~,r,!oLo. Y. : Japanese SyntaetLc Lexiccm in Mu project. Proc. of 28th Conference of IPSJ, 1984. '.8 Ishik~,~'._,, T., Sat,.>h. M. and Tal:aJ, S. : SemantJ caI FulicLJ o:i on Natural [.~q~;S~.-~s, ~' Processing, Proc. o.r" 28Lh CIPSJ. 1984. 46 Xi r £ U n 0 CO L Z ~a I~1 I w ~ ~ ' i~ ~i~ ~ ..3 ,i • m!-- .'- - i-~l, r I :1 t o I i I i m ~ ...1 '~:t ~i: I ~ : f.: ® : : ~ a :i l || l@ : E "~i ~.~ ,~ I^ ~ J ~ ~ ~ v 1 ~ ~ ~ ~i ~ ~ ~ ~i ~ ~ ~ ~ i ~ I ~- ~ z i N i I i@ E E~ EE 47 | 1984 | 11 |
SEMANTIC PARSING AS GRAPH LANGUAGE TRANSFORMATION - A MULTIDIMENSIONAL APPROACH TO PARSING HIGHLY INFLECTIONAL LANGUAGES

Eero Hyvönen
Helsinki University of Technology, Digital Systems Laboratory, Otakaari 5A, 02150 Espoo 15, FINLAND

ABSTRACT

The structure of many languages with "free" word order and rich morphology, like Finnish, is rather configurational than linear. Although non-linear structures can be represented by linear formalisms, it is often more natural to study the multidimensional arrangement of symbols. Graph grammars are a multidimensional generalization of linear string grammars: string rewrite rules are generalized into graph rewrite rules. This paper presents a graph grammar formalism and parsing scheme for parsing languages with an inherent configurational flavor. A small experimental Finnish parsing system has been implemented (Hyvönen 1983).

1 A SIMPLE GRAPH GRAMMAR FORMALISM WITH A CONTROL FACILITY

In applying string grammars to parsing natural Finnish, several problems arise in representing complex word structures, agreements, "free" word ordering, discontinuity, and intermediate dependencies between morphology, syntax and semantics. A strong, multidimensional formalism that can cope with the different levels of the language seems necessary. In this chapter a graph grammar formalism based on the notions of relational graph grammars (Rajlich 1975) and attributed programmed graph grammars (Bunke 1982) is developed for parsing languages with configurational structure.

Definition 1.1 (relational graph, r-graph)
Let ARCS, NODES, and PROPS be finite sets of symbols. A relational graph (r-graph) RG is a pair RG = (EDGES, NP) consisting of a set of edges EDGES, a subset of ARCS x NODES x NODES, and a function NP that associates each node in EDGES with a set of labeled property values:
NP: NODES x PROPS -> PVALUES
PVALUES is the set of possible node property values. They are represented as sets of symbols or lists.

Example: Figure 1.1 depicts the morphological r-graph representation of the Finnish word "ihmisten" (the humans') and its edges as a list.

Fig. 1.1. Morphological r-graph representation of the word "ihmisten" (the humans'). (Diagram only partly recoverable; the noun node carries EXT = {IHMINEN} and CAT = (SUBST-IHMINEN), and the number node carries EXT = {PL}.) The edge list is:
((NOUN N1 N2) (CASE N1 N3) (NR N1 N4) (PERS N1 N5) (PS N1 N6) (EP N1 N7))

Definition 1.2 (r-production)
An r-production RP is a pair RP = (LS, RS). LS (left side) and RS (right side) are r-graphs. An RP is said to be applicable to an r-graph G iff EDGES_LS is a subset of EDGES_G and the values in NP_LS are subsets of the corresponding values in NP_G for each node in LS.

Definition 1.3 (direct r-derivation)
The direct r-derivation of r-graph H from r-graph G via an r-production RP = (LS, RS) is defined by the following algorithm:

Algorithm 1.1 (Direct r-derivation)
Input: An r-graph G and an r-production RP = (LS, RS)
Output: An r-graph H derived via RP from G

PROCEDURE Direct-r-derivation:
BEGIN
  IF RP is applicable to G (see text)
  THEN EDGES_G := EDGES_G - EDGES_LS
       H := G U RS
       RETURN H
  ELSE RETURN "Not applicable"
END

Here U is an operation defined for two r-graphs RG1 and RG2 as follows: H = RG1 U RG2 iff EDGES_H is the union of EDGES_RG1 and EDGES_RG2, and NP_H(ni, propj) = NP_RG2(ni, propj) for any property propj in every node ni in RG2.

Time complexity: Direct r-derivations are essentially set operations and can be performed efficiently. By using a hash table the expected time complexity is O(n) with respect to the size of the production (it does not depend on the size of the object graph). The worst case complexity is O(n**2).
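A direct transcription of our own of Algorithm 1.1, in Python rather than the paper's notation: an r-graph is modelled as a pair (edges, props), where edges is a set of (arc, node, node) triples and props maps (node, property) pairs to value sets. The right-side edge label GEN in the example is a hypothetical placeholder.

def applicable(production, graph):
    # Definition 1.2: LS edges and node properties must be subsets of G's.
    (ls_edges, ls_props), _ = production
    g_edges, g_props = graph
    return ls_edges <= g_edges and all(
        values <= g_props.get(key, set())
        for key, values in ls_props.items())

def direct_r_derivation(graph, production):
    """Delete the left-side edges, then unite the right side into the graph;
    right-side properties override host properties, as in the U operation."""
    if not applicable(production, graph):
        return "Not applicable"
    (ls_edges, _), (rs_edges, rs_props) = production
    g_edges, g_props = graph
    new_props = dict(g_props)
    new_props.update(rs_props)
    return ((g_edges - ls_edges) | rs_edges, new_props)

g = ({("NOUN", "N1", "N2"), ("CASE", "N1", "N3")},
     {("N2", "EXT"): {"IHMINEN"}})
p = (({("CASE", "N1", "N3")}, {}),    # left side LS
     ({("GEN", "N1", "N3")}, {}))     # right side RS (hypothetical)
print(direct_r_derivation(g, p))

Note one simplification: production nodes here are matched by literal name, whereas the formalism instantiates left sides context-dependently.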
By using a hash table the expected time complexity is O(n) with respect to the size of the production ( i t does not depend on the size of the object graph). The worst case complexity is O(n**2). Example: Figure 1.2 represents an r-production and figure 1.3 its application to an r-graph. We have designed a meta-production description f a c i l i t y for r-productions by which match-predicates can be attached to nodes and arcs in order to test and modify node properies. The instantiation of a meta-production is found context-dependently while matching the production l e f t side. I t is also possible to specify some special modifications to the derivation graph by meta-productions. ) Fig. 1.2. Production ADJ-ATTR identify adjective attributes. to Definition 1.4 (r-graph gralnmar and r-graph language) An r-graph grammar (RGG) is a pair: RGG = (PROD, START) PROD is a set of r-productions and START is a set of r-graphs. An r-graph language (RGL) generated by an r-graph grammar is the set of all derivable r-graphs f r o m any r-graph in START by any sequence of applicable r-productions of PROD: RGL ={R-graphISTART =,~R-graph! EXT-fPL) EXT-{~ PL) • ~T~U~T I F CM.ANECilVE CM-IIOUtt-ABST EXT=(eO~-ALL) EXT.{BIG) [XT=(PRCG. AFTER: (Node properties as above) Fig. 1.3. The effect of applying production ADJ-ATTK ( f i g . 1.2) to an r-graph. Definition 1.5 (controlled r-graph grammar) A controlled r-graph grammar (CRG) is a pair: CRG = (CG, RGG) CG is an r-graph called control graph (c-graph). Its interpretation is defined very much in the same way as with ATN-networks. The actions associated to arcs are direct r-derivations (def. 1.3). RGG is an r-graph grammar (def. 1.4). Example: Figure 1.4 i l l u s t r a t e s a c-graph expressing potential attribute configurations of nouns belonging to category !JOUN-HUMAN. Adjective, pronoun and genetive attributes and a quantifier may be identified hy corresponding r-productions (the meaning of (READWORD)- and (PUT-LAST)-arcs is not relevant here). 518 PRON-ATTR ADJ-ATTR ADJ-ATTR Fig. 1.4. A control graph expressing attribute configurations of syntactico-semantic w o r d category NOUN-HUHAN. Definition 1.6 (Controlled graph language) A controlled g r a p h language (CGL) corresponding to a controlled r-graph grammar CRG = (CG, RGG) is the set of r-graphs derived by the CG using the start graphs START and the productions of the grammar RGG. 2 A GRAPH GRAIItIAR PARSING SCHEME 2.1 Function and structure Figure 2.1 depicts a RGG-based parsing scheme that we have applied to natural language parsing. Roughly spoken, the input of the parser, i.e. the set START of a CRG, is the morphological representation(s) of a sentence. The output is a set of corresponding semantic deep case representations. Parsing is ~een as a multidimensional transformation between the morphological and semantic levels of a language. These levels are seen as graph languages. The parser essentially defines a "meaning preserving" mapping from the morphological representations of a sentence into its semantic representations. The transformation is specified by a controlled r-graph grammar. The control graph is not predefined but is constructed dynamically according to the individual words of the current sentence. During parsing morphological and semantic representations are generated in parallel as words are read from l e f t to right. 2.2 Specification of the morphological and semantic graph languages Morphological level. 
The morphological representation of a sentence consists of star-like morphological representations of the words ( f i g . 1.1) that are glued togetiler by sequential >- and <-relations ( f i g . 1.3). Semantic level. The semantic representatien of a sentence consists of a semantic deop case structure corresponding tc Lhe main verb. Deep case constituents have their own semantic case structures corresponding to their main words. SOURCE GRAPH LANGUAG£ MORPHOLOGY Control led r-nraph c-~M INTERPRE~R g ramma r (CRG', / i GOAL GRAPH LANGUAGE /3 SEtIANTI CS \ PRODUCTIONS j Fig. 2.1. A parsing scheme for transforming graph languages. Example: Figure 2.2 i l l u s t r a t e s the semantic representation of question " Kuka luennoitsija on luennoinut jonkun seminaarimaisen kurssin tietojenk~sittelyteoriasta syksyll~ 1981" ("Which lecturer has lectured some seminar-type course on computer science in the autumn 1981"). MAZN Fig. 2.2. Semantic graph representation of a Finnish question. Node properties are not shown. 2.3 Specification of the graph language transformation The transformation is specified by an agenda of p r i o r i t i z e d c-graphs. I n i t i a l l y , the agenda consists of a set of sentence independent "transformational" c-graphs (that, for example, transform passive clauses into active ones) and 519 sentence dependent c-graphs corresponding to the syntactico-semantic categories of the individual words in the sentence. For example, the c-graph of f i g . 1.4 corresponds to nouns belonging to category NOUN-HUMAN. I t tries to identify semantic case constituents by the productions corresponding to the arcs. Fig. 1.2 i l l u s t r a t e s the production ADJ-ATTR (adjective attribute) used in the c-graph of fig. 1.4. The interpretation of the production is: I f there is an adjective preceeding a noun in the same case and number the words are in semantic KIND relation with each other. As a whole, the agenda constitutes a modular, sentence dependent c-graph. Parsing is performed by interpreting the agenda. Different strategies could be applied here; the structure of the c-graphs depend on the choice. In our experimental system parsing is performed by interpreting the f i r s t c-graph in the agenda. The c-graohs are defined in such way that they interpret each other and glue morphological representations of words into the derivation graph (arcs (READWORD) and (PUTLAST) in f i g . 1.4) until a grammatical semantic representation (or in ambiguous cases several ones) is reached. 2.4 Linguistic and computational motivations Most i n f l u e n t i a l l i n g u i s t i c theories and ideas behind our parser are dependence grammar, semantic case grammar, and the notion of "word expert" parsing. The idea is that the c-graphs of word categories actively try to find the dependents of the main words and i d e n t i f y in what semantic roles they are (cf. the ADJ-ATTR-production of fig. 1.2). In some cases i t i t useful to assign active role to dependents. The c-graphs serve as i l l u s t r a t i v e l i n g u i s t i c descriptions of the syntactico-semantic features of word categories and other fenomena. Computationally, our formalism and parsing scheme gives high expressive power but its time complexity is not high. Only potentially relevant productions are tried to use during parsing. Graphs are i l l u s t r a t i v e and can be used to express both procedural and declarative knowledge. New word category models can be added to the parser rather independently from the other models. 
Our small experimental g r a p h grammar parser for Finnish (Hyv6nen 1983) is s t i l l l i g u i s t i c a l l y quite naive containing some 150 lexical entries, 50 productions, and 50 c-graphs. A larqer subset of Finnish needs to be modelled in order to evaluate the approach properly. We are currently developing the graph grammar approch further by generalizing the formalism into hierarchic graphs. By this way, for example, large graph structures could be manipulated more easily as single entities and identical structures could have different interpretations in different contexts. Also, a m o r e elaborate coroutine based control structure for interpreting the c-graphs is under developement. We feel that the idea of seeing parsing as a multidimensional transformation of relational graphs in stead of as a delinearization process of a string into a parse tree is worth investicating further. 3 ACKNOWLEDGEMENTS Thanks are due to Rauno Heinonen, Harri J~ppinen, Leo Ojala, Jouko Sepp~nen and the personnel of Digital Systems Laboratory for f r u i t f u l discussions. Finnish Academy, Finnish Cultural Foundation, Siemens Foundation, and Technical Foundation of Finland have supported our work f i n a n c i a l l y . 4 REFERENCES Bunke H. (1982): Attributed graph grammars and their application to schematic diagram interpretation. IEEE Trans. of pattern analysis and machine intelligence, No 6, pp. 574-582. Hyv~nen E. (1983): G r a p h grammar approach to natural language parsing and understanding. Proceedings of IJCAI-83, Karlsruhe. Rajlich V. (1975): Dynamics of discrete structures and pattern reproduction. Journal of computer and system sciences, No 11, pp. 186-202. 520 | 1984 | 110 |
HANDLING SYNTACTICAL AMBIGUITY IN MACHINE TRANSLATION

Vladimir Pericliev
Institute of Industrial Cybernetics and Robotics, Acad. G. Bontchev Str., bl. 12, 1113 Sofia, Bulgaria

ABSTRACT

The difficulties to be met with the resolution of syntactical ambiguity in MT can be at least partially overcome by preserving the syntactical ambiguity of the source language in the target language. An extensive study of the correspondences between the syntactically ambiguous structures in English and Bulgarian has provided a solid empirical basis in favor of such an approach. Similar results could be expected for other sufficiently related languages as well. The paper concentrates on the linguistic grounds for adopting the approach proposed.

1. INTRODUCTION

Syntactical ambiguity, as part of the ambiguity problem in general, is widely recognized as a major difficulty in MT. To solve this problem, the efforts of computational linguists have been mainly directed to the process of analysis: a unique analysis is searched for (semantic and/or world knowledge information being basically employed to this end), and only having obtained such an analysis is it proceeded to the process of synthesis. On this approach, in addition to the well-known difficulties of a general-linguistic and computational character, there are two principal embarrassments to be encountered. It makes us entirely incapable of processing, first, sentences with "unresolvable syntactical ambiguity" (with respect to the disambiguation information stored), and, secondly, sentences which must be translated ambiguously (e.g. puns and the like). In this paper, the burden of the solution of the syntactical ambiguity problem is shifted from the domain of analysis to the domain of synthesis of sentences. Thus, instead of trying to resolve such ambiguities in the source language (SL), syntactically ambiguous sentences are synthesized in the target language (TL) which preserve their ambiguity, so that the user himself, rather than the parser, disambiguates the ambiguities in question. This way of handling syntactical ambiguity may be viewed as an illustration of a more general approach, outlined earlier (Penchev and Pericliev 1982, Pericliev 1983, Penchev and Pericliev 1984), concerned also with other types of ambiguities in the SL translated by means of syntactical, and not only syntactical, ambiguity in the TL. In this paper, we will concentrate on the linguistic grounds for adopting such a manner of handling syntactical ambiguity in an English-into-Bulgarian translation system.

2. PHILOSOPHY

This approach may be viewed as an attempt to simulate the behavior of a man-translator who is linguistically very competent, but is quite unfamiliar with the domain he is translating his texts from. Such a man-translator will be able to say what words in the original and in the translated sentence go together under all of the syntactically admissible analyses; however, he will be, in general, unable to make a decision as to which of these parses "make sense". Our approach will be an obvious way out of this situation. And it is in fact not infrequently employed in the everyday practice of more "smart" translators. We believe that the capacity of such translators to produce quite intelligible translations is a fact that can have a very direct bearing on at least some trends in MT. Resolving syntactical ambiguity, or, to put it more accurately, evading syntactical ambiguity in MT following a similar human-like strategy, is only one instance of this. There are two further points that should be made in connection with the approach discussed. We assume as more or less self-evident that:
(i) MT should not be intended to explicate texts in the SL by means of texts in the TL, as previous approaches imply, but should only translate them, no matter how ambiguous they might happen to be;
(ii) since ambiguities almost always pass unnoticed in speech, the user will unconsciously disambiguate them (as in fact he would have done, had he read the text in the SL); this, in effect, will not diminish the quality of the translation in comparison with the original, at least insofar as ambiguity is concerned.
Resolvlng syntactical am- biguity, or, to put it more accurately, evading syntactical ambiguity in MT following a similar human-like strategy is only one instance of this. There are two further points that should be made in connection with the approach discussed. We assume as more or less self-evident that: (i) MT should not be intended to explicate texts in the SL by means of texts in the TL as previous approaches imply, but should only tran- slate them, no matter how ambiguous they might happen to be; (ii) Since ambiguities almost always pass un- noticed in speech, the user will unconsciously dtsambtguate them (as in fact he would have done, had he read the text in the SL); this, in effect, will not diminish the quality of the translation in comparison with the original, at least insofar as ambiguity is concerned. 521 3. THE DESCRIPTION OF SYNTACTICAL AMBIGUITY IN ENGLISH AND BULGARIAN The empirical basis of the approach is provi- ded by an extensive study of syntactical ambiguity in English and Bulgarlan (Pericliev 19835, accom- plished within the framework of a version of de- pendency grammar using dependency arcs and bra- cketlngs. In this study, from a given llst of con- figurations for each language, all logically-ad- mlssible ambiguous strings of three types in En- gllsh and Bulgarian were calculated. The first type of syntactlcally ambiguous strings is of the form: (15 A ~L~B, e.g. adv.mod(how long?) f The statistician studied(V) the ~hole year(PP), obj.dir(wh~t?) where A, B, ... are complexes of word-classes, "---~" is a dependency arc, and 1, 2, ... are syn- tactical relations. The second type is of the form: (2) A -~->B<-~- C, e.g. adv.mod(how?) She greeted(V) the girl(N) ~ith a smil6(PP) attrib(what?) The third type is of the form: (3) A -!-~B~-~- C, e.g. adv.mod(how?) [ He failed(V) enttrely(Adv) to cheat(Vin f) her adv.mod(how?) It was found, first, that almost all logically -admissible strings of the three types are actually realized in both languages (cf. the same result al- so for Russian in JordanskaJa (1967)5. Secondly, and more important, there turned out to be a stri- king coincidence between the strings in English and Bulgarian; the latter was to he expected from the coincidence of configurations in both languages as well as from their sufficiently similar global syntactic organization. 4. TRANSLATIONAL PROBLEMS With a view to the aims of translation, it was convenient to distinguish two cases: Case A, in which to each syntactically ambiguous string in En- glish corresponds a syntactically ambiguous string in Bulgarlan, and Case B, in which to some English strings do not correspond any Bulgarian ones; Case A provides a possibility for literal English into Bulgarian translation, while there is no such possibillty for sentences containing strings classed under Case B. 4.1. Case A: Literal Translation English strings which can be literally tran- slated into Bulgarian comprise,roughly speaking, the majority and the most common of strings to appear In real English texts. Informally, these strings can be included into several large groups of syntactically ambiguous constructions, such as constructions with "floating" word-classes (Ad- verbs, Prepositional Phrases, etc. acting as slaves either to one, or to another master-word), constru- ctions with prepositional and post-positional ad- juncts to conjoined groups, constructions with se- veral conjoined members, constructions with symmet- rical predicates, some elliptical constructions, etc. 
Due to space limitations, a few English phrases with their literal translations will suffice as an illustration of Case A (further on, syntactical relations as labels of arcs are omitted where superfluous in marking the ambiguity; the dependency arc diagrams of the original, garbled in the source, are likewise omitted):

(4) a review(N) of a book(PP) ...(PP) ==> retsenzija(N) ...(PP) ...(PP)

(5) I saw(V) the car(N) outside(Adv) ==> Az vidjah(V) kolata(N) navan(Adv)

(6) very(Adv) modest(Adj) and reasonable(Adj) ==> mnogo(Adv) skromen(Adj) i razumen(Adj)

(7) beautiful(Adj) women(N) and girls(N) ==> krasivi(Adj) zheni(N) i momicheta(N)

4.2. Case B: Non-Literal Translation

English strings which cannot be literally translated into Bulgarian are strings which contain: (i) word-classes (Vinf, Gerund) not present in Bulgarian, and/or (ii) syntactical relations (e.g. "composite", as in language theory) not present in Bulgarian, and/or (iii) other differences (in global syntactical organization, agreement, etc.).

It will be shown how certain English strings falling under this heading are related to Bulgarian strings preserving their ambiguity. A way to overcome the difficulties with (ii) and (iii) is exemplified on a very common (complex) string, viz. Adj/N/Prt + N/N's + N (e.g. stylish gentlemen's suits). As an illustration, here we confine ourselves to the problems met with (i) and, more concretely, to English strings containing Vinf. These strings are mapped onto Bulgarian strings containing a da-construction or a verbal noun (Vinf generally being translated either way). E.g. the Vinf in

(8) a. He promised(V) to please(Vinf) mother: obj.dir (promised what?) or adv.mod (why?)

is rendered by a da-construction in agreement with the subject, preserving the ambiguity:

b. Toj obeshta(V) da zaradva(da-constr) majka: obj.dir or adv.mod

In the string

(9) a. I have(V) instructions(N) to study(Vinf): attrib (what instructions?) or obj.dir (I have to study what?)

the Vinf can be rendered alternatively by a da-construction or by a prepositional verbal noun:

b. Az imam(V) instruktsii(N) da ucha(da-constr)
c. Az imam(V) instruktsii(N) za uchene(PrVblN)

Yet in other strings, e.g. The chicken(N) is ready(Adj) to eat(Vinf) (the chicken eats or is eaten), in order to preserve the ambiguity the infinitive should be rendered by a prepositional verbal noun: Pileto(N) e gotovo(Adj) za jadene(PrVblN), rather than by the finite da-construction, since in the latter case we would obtain two unambiguous translations: Pileto e gotovo da jade (the chicken eats) or Pileto e gotovo da se jade (the chicken is eaten), and so on.

For some English strings no syntactically ambiguous Bulgarian strings could be put into correspondence, so that a translation by our method proved impossible. E.g.

(10) He found(V) the mechanic(N) a helper(N): obj.indir + obj.dir or obj.dir + predicative (either the mechanic or someone else is the helper)

is such a sentence, owing to the impossibility in Bulgarian for two non-prepositional objects, a direct and an indirect one, to appear in a sentence.

4.3. Multiple Syntactical Ambiguity

Many very frequently encountered cases of multiple syntactical ambiguity can also be handled successfully within this approach. E.g. a phrase like Cybernetical devices and systems for automatic control and diagnosis in biomedicine, with more than 30 possible parsings, is amenable to literal translation into Bulgarian.
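The choice of rendering for Vinf strings described in section 4.2 can be pictured as a lookup over the results of the contrastive study. The sketch below is a hypothetical illustration; the PRESERVES table and its entries are stand-ins assumed from the examples of this section, not the author's code.

```python
RENDERINGS = ("da-constr", "za+PrVblN")

# Which renderings keep both readings of a given English string
# (assumed values, following the examples in section 4.2).
PRESERVES = {
    ("ready to eat", "da-constr"): False,   # yields two unambiguous forms
    ("ready to eat", "za+PrVblN"): True,    # Pileto e gotovo za jadene
    ("instructions to study", "da-constr"): True,
    ("instructions to study", "za+PrVblN"): True,
}

def render_vinf(english_string):
    """Pick an ambiguity-preserving rendering of Vinf, if one exists."""
    options = [r for r in RENDERINGS if PRESERVES.get((english_string, r))]
    return options[0] if options else None  # None: ambiguity cannot be kept

print(render_vinf("ready to eat"))           # za+PrVblN
print(render_vinf("instructions to study"))  # da-constr
```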
4.4. Semantically Irrelevant Syntactical Ambiguity

Disambiguating syntactical ambiguity is an important task in MT only because different meanings are usually associated with the different syntactical descriptions. This, however, is not always the case. There are some constructions in English whose syntactical ambiguity cannot lead to multiple understanding. E.g. in sentences of the form A is not B (He is not happy), in which the adverbial particle not is either a verbal negation (He isn't happy) or a non-verbal negation (He's not happy), the different syntactical trees will be interpreted semantically as synonymous: 'A is not B' <==> 'A is not-B'.

We should not worry about finding Bulgarian syntactically ambiguous correspondences for such English constructions. We can choose one analysis arbitrarily, since either of the syntactical descriptions will provide correct information for our translational purposes. Indeed, the construction above has no ambiguous Bulgarian correspondence: in Bulgarian the negating particle combines either with the verb (in which case it is written as a separate word) or with the adjective (in which case it is prefixed to it). Either construction, however, will yield a correct translation: Toj ne e radosten or Toj e neradosten.

4.5. A Lexical Problem

Certain difficulties may arise even after managing to map English syntactically ambiguous strings onto ambiguous Bulgarian ones. These difficulties are due to the different behavior of certain English lexemes in comparison with their Bulgarian equivalents. This behavior is displayed in the phenomenon we call "intralingual lexical resolution of syntactical ambiguity" (the substitution of lexemes in the SL with their translational equivalents from the TL results in the resolution of the syntactical ambiguity).

For instance, in spite of the existence of ambiguous strings in both languages of the form Verb(tr/itr) --> Noun, with some particular lexemes (e.g. shoot(tr/itr) ==> zastreljam(tr) or streljam(itr)), in which to one English lexeme there correspond two in Bulgarian (one only transitive, and the other only intransitive), the ambiguity in the translation will be lost. This situation explains why it seems impossible to translate ambiguously into Bulgarian examples containing verbs of the type given, or verbal nouns formed from such verbs, as is the case in The shooting of the hunters. This problem, however, can generally be tackled in translation into Bulgarian, since it is a language usually providing a series of forms for a verb: transitive, intransitive, and transitive/intransitive, which are more or less synonymous (for more details, cf. Penchev and Pericliev (1984)).

5. CONCLUDING REMARKS

To conclude, some syntactically ambiguous strings in English have literal correspondences in Bulgarian, others non-literal ones, and still others none at all. In summary, of a total of approximately 200 simple strings treated in English, more than 3/4 can, and only 1/4 cannot, be literally translated; about half of the latter strings can be put into correspondence with syntactically ambiguous strings in Bulgarian preserving their ambiguity. This gives quite strong support to the usefulness of our approach in an English into Bulgarian translation system. Several advantages of this way of handling syntactical ambiguity can be mentioned.
First, in the processing of the majority of syntactically ambiguous sentences within an English into Bulgarian translation system it dispenses with semantic and world knowledge information at the very low cost of studying the ambiguity correspondences in both languages. It can be expected that investigations along this line will prove fruitful for other pairs of languages as well.

Secondly, whenever this way of handling syntactical ambiguity is applicable, the translation of sentences with unresolvable ambiguity, or of sentences with verbal jokes and the like, which was an impossibility for previous approaches, turns out to be an easily attainable task.

Thirdly, the approach seems to have a very natural extension to another principal difficulty in MT, viz. coreference (cf. the three-way ambiguity of Jim hit John and then he (Jim, John or neither?) went away and the same ambiguity of toj (=he) in its literal translation into Bulgarian: Djim udari Djon i togava toj(?) si otide).

And, finally, there is yet another reason for adopting the approach discussed here. Even if we choose to go another way and (somehow) disambiguate sentences in the SL, almost certainly their translational equivalents will again be syntactically ambiguous, and quite probably will preserve the very ambiguity we tried to resolve. In this sense, for the purposes of MT (or other man-oriented applications of CL) we need not waste our efforts to disambiguate e.g. sentences like John hit the dog with the long bat or John hit the dog with the long wool, since, even if we have done that, the correct Bulgarian translations of both these sentences are syntactically ambiguous in exactly the same way, the resolution of the ambiguity thus proving to be an entirely superfluous operation (cf. Djon udari kucheto s dalgata palka and Djon udari kucheto s dalgata valna).

6. REFERENCES

Jordanskaja, L. 1967. Syntactical ambiguity in Russian (with respect to automatic analysis and synthesis). Scientific and Technical Information, Moscow, No. 5, 1967. (in Russian)

Penchev, J. and V. Pericliev. 1982. On meaning in theoretical and computational semantics. In: COLING-82, Abstracts, Prague, 1982.

Penchev, J. and V. Pericliev. 1984. On meaning in theoretical and computational semantics. Bulgarian Language, Sofia, No. 4, 1984. (in Bulgarian)

Pericliev, V. 1983. Syntactical Ambiguity in Bulgarian and in English. Ph.D. Dissertation, ms., Sofia, 1983. (in Bulgarian)
ARGUMENTATION IN REPRESENTATION SEMANTICS *

Pierre-Yves RACCAH
ERA 430 - C.N.R.S.
Conseil d'Etat, Palais Royal, 75100 Paris RP

* This work has been supported in part by a contract with the Centre National de la Recherche Scientifique (contrat n° 95.5122).

ABSTRACT

It seems rather natural to admit that language use is governed by rules that relate signs, forms and meanings to possible intentions or possible interpretations, as a function of utterance situations. No less natural should seem the idea that the meaning of a natural language expression conveys enough material to the input of these rules so that, given the situation of utterance, they determine the appropriate interpretation. If this is correct, the semantic description of a natural language expression should output not only the 'informative content' of that expression, but also all sorts of indications concerning the way this expression may be used or interpreted. In particular, the argumentative power of utterances is due to argumentative indications conveyed by the sentences uttered, indications that are not part of their informative content. This paper emphasizes the role of argumentation in language and shows how it can be accounted for in a formal Representation Semantics framework. An example of an analysis is provided in order to show the "system at work".

I. ARGUMENTATION AND THE SEMANTIC PROGRAM

A. What is linguistic in argumentation.

The theory of argumentation developed by Jean-Claude Anscombre and Oswald Ducrot is an attempt to describe some aspects of language that have not been carefully studied yet, in spite of their importance for linguistic theory, for discourse representation, and for the simulation of understanding. In their framework, utterances are seen as produced in order to argue for some particular conclusions with a certain force, depending on the situation of utterance. Thus, when I utter

(1) This is beautiful but expensive

in front of a shop window, pointing to some item, I present my utterance as a reason for not buying this item, while if I say

(2) This is expensive but beautiful

I am giving a reason to buy the item (I am talking here of normal situations, where expensiveness is a reason not to buy, while beauty is a reason to buy). Note that after uttering (1), I can perfectly well walk into the store and buy the item: what is odd, in normal situations, is to say (1'):

(1') This is beautiful but expensive, and therefore, I will buy it.

Anscombre and Ducrot unburied the old Aristotelian concept of topoi to describe the movement from the utterance to the conclusion. They take these topoi to be of the form:

(To) The more X is P, the more Y is Q.

where 'X is P' is the idea expressed by the original utterance, and 'Y is Q' is the argumentative orientation (the conclusion argued for by producing the original utterance in the particular situation in which it is uttered). In Raccah 84, I have argued for the adequacy of a slightly different form for the topoi, which takes into account the epistemic relation of the speaker to the premiss:

(T) The more evidence I have in favor of X being P, the more arguments I have in favor of Y being Q.

Topoi of this kind are shown to avoid problems with non-gradual properties and, I argue, are closer to the intuition we have about the argumentative process: it is not the degree of P-ness of X (when this means something) that makes Y (more or less) Q, but the degree to which the speaker believes X is P that entitles him (her) to believe (more or less) that Y is Q. The description of argumentative connectives provides rules to select the argumentative orientation of a compound utterance as a function of the more basic utterances that they connect.
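As a rough executable picture of schema (T), a topos can be represented as a pair of predicate schemata plus a rule that passes the speaker's degree of evidence on as argumentative force. The sketch below is mine, not the paper's formalism, and the numeric force value is an assumption (the paper leaves force unformalized).

```python
from dataclasses import dataclass

@dataclass
class Topos:
    premiss: str      # the "X is P" schema, e.g. "x is old"
    conclusion: str   # the "Y is Q" schema, e.g. "x does not like jazz"

    def orient(self, evidence):
        """Schema (T): the more evidence for the premiss, the more
        arguments for the conclusion; force tracks evidence."""
        return {"conclusion": self.conclusion, "force": evidence}

ta = Topos("x is old", "x does not like jazz")
print(ta.orient(evidence=0.9))
# {'conclusion': 'x does not like jazz', 'force': 0.9}
```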
Thus, the analysis of (1), (1'), and (2) suggests the following description of the argumentative aspects of but: in any utterance of P but Q, the presence of but

- requires that the utterances of P and Q be interpreted as oriented towards opposite conclusions,
- indicates that the complex utterance is oriented towards the conclusion towards which Q is oriented.

Following the example of Occam's (disposable) razor, I think that when there is a common property of all utterances of the same sentence, there ought to be, in the description of the sentence, some features that enable the utterance description to state this common property of the different utterances. In other words, at the output of the sentence-semantics level of analysis, there ought to be something that can be taken as input to the pragmatic level and will enable it to formulate the argumentative properties common to all utterances of the same sentence. I call the study of this something "pre-argumentative analysis". The reason why I talk of a "disposable" razor is that it is through utterance analysis that we discover the interesting properties of sentences, so that we need, for heuristic reasons, to use the pragmatic analysis in order to know what kind of output we want from the sentence analysis: we dispose of the razor only after using it...

B. What is argumentative in semantics.

In spite of this slight methodological incursion into pragmatics, my concern is sentence semantic analysis. I postulate a semantic level of sentence analysis such that:

- no information about the world or the speaker's (or hearer's) beliefs is taken into account at this level;
- all of the informative meaning carried by the sentence can be represented at this level (in particular, the logical information as well as the conventional implicature);
- the pre-argumentative aspects of the sentence are described at this level;
- the representation of meaning and the description of pre-argumentation are both compositional;
- information about the world and beliefs only needs to be added at the next level of analysis to get full interpretations of the utterances of the sentence.

Note that I do not claim that models of this kind have any psychological reality, nor even any chance of being good candidates, as such, for the computer simulation of understanding. Thus my claim of the autonomy of semantics (including pre-argumentation) with respect to pragmatics is neither an ontological claim nor a claim of technical efficiency, but rather an epistemic one. This way of analyzing language aims at answering some linguistic and methodological questions, and it is as such that I wish it to be tested for its applicability to Artificial Intelligence.

Among the theories sharing these assumptions, I would like to speak about what I call Representation Semantics: a theory of meaning representation for sentences, inspired by Montague 73 for its formal aspects, but diverging from it in its more fundamental issues.
Representation Semantics uses the tools developed by Montague but, instead of aiming at describing the meaning of a sentence as the result of its semantic analysis, it only pretends to give, as its output, a representation of some aspects of its meaning: partial models of the presuppositional content, the informative content, and the pre-argumentative content of the sentence (see Kamp 80 for the informative content; Raccah 80 or 83 for the presuppositional and informative contents; and Bruxelles-Raccah 83 and Raccah 84 for preliminary discussions of the pre-argumentative content). I use Karttunen and Peters' conventional implicature framework (Karttunen and Peters 79) as a pre-selection of possible models for representing the meaning of sentences. This is shown to avoid the classical paradox of the presupposition/entailment relationship (cf. Raccah 82). Meaning representations for sentences include pre-argumentative features in such a way that, given the situation and the adequate topoi, the argumentation of an utterance of the sentence analyzed (in that situation and within the corresponding cultural frame) can be computed.

II. OUTLINE OF A THEORY OF MEANING REPRESENTATION

A. Ingredients.

A detailed presentation of the theory would require a long and careful discussion of the concepts involved in it (some of which have already been discussed in Raccah 80, 82 and 83), a justification of their raison d'être and of their articulation within the theory. However interesting, these technical and foundational aspects do not fit this paper (for both material and strategic reasons). Nevertheless, I would like to briefly sketch the main lines of the analysis process suggested by the theory (recall that this process is not intended as a model of how humans actually deal with language, nor as a suggestion about how a computer should be structured: it stems from an external epistemic view of language). The following diagram should partially illustrate this point.

[Diagram, garbled in the source: a sentence S undergoes syntactic analysis, yielding a tree; the tree is translated into four formulae P1, P2, R1, R2; P1 and P2 lead to partial models M1 and M2.]

Where:
S is a sentence
P1 expresses what is presupposed
P2 expresses what is asserted
R1 expresses conditions on argumentation
R2 expresses pre-argumentation
M1 is a model representing P1
M2 is a model representing P2

Each sentence is given one (or more, if ambiguous) analysis tree by the syntactic module. Each tree is then 'decomposed' into four formulae: one for the presupposition, one for the asserted informative content, one for conditions on argumentation, and one for pre-argumentation. The first two 'decompositions' can be obtained by the use of Karttunen and Peters' method, inspired by Montague's translation function (see Montague 73, Karttunen and Peters 79, and Raccah 80). They both lead to the construction of a partial model, say one of the smallest models satisfying P1 for the presupposition, and one of the smallest models satisfying both P1 and P2 for what is asserted. An example of constructions of this kind is given by Kamp's discourse representations (Kamp 80).

B. Yes, but what about argumentation?

Conditions on argumentation are imposed mainly by the use of connectives (like but, however, even, etc.). A semantic description of these connectives states, among other things, the relationship between the possible argumentative orientations of the utterances connected (this assumes the hypothesis that any utterance of a complex sentence containing an argumentative connective can be considered as a complex utterance, i.e. an utterance which can be decomposed into two utterances linked by this connective; see Bruxelles-Raccah 83 for a discussion of this hypothesis). Formulae expressing these conditions on argumentation will only appear for sentences containing this kind of connectives, since I haven't found, as yet, simple sentences imposing conditions on argumentation. The form of this kind of formulae is shown in the discussion of the example.

Pre-argumentation is a theoretical construct much harder to justify on empirical grounds than any of the other three (fortunately, this kind of justification does not concern us here, but I realize that even the ugly notion of informative content seems to have more intuitive backup than this one: a story to be continued...).
Its theoretical justification, however, is easy to see: the topoi apply to some semantic indications in order to form the argumentative orientations of utterances. These indications cannot be equated with the informative content of the sentence, for two reasons:

(i) the same sentence, say "It is 8 o'clock", can be used in an argumentation whose premiss is 'it is late', as well as in an argumentation whose premiss is 'it is early'. We will have to take the sentence "It is 8 o'clock" to be pre-argumentatively ambiguous, while its informative content is not.

(ii) Adverbs of degree (rather, very, extremely, ...) usually do not modify the argumentative orientation of utterances (while they change the informative content of the sentence uttered): they indicate the force with which the utterance, as it is presented, argues for the orientation. For example, if I say "This car is very expensive" as an argument for not buying it, it is not the very-expensiveness of the car that makes the argument, but its expensiveness; what the use of "very" says is that my arguments for not buying the car are stronger because my evidence for its expensiveness is stronger: in fact I even have enough evidence to say that it is very expensive.

Formulae expressing the pre-argumentation will also express the pre-argumentative value ascribed to it by these indications. The form of these formulae (which can certainly be improved) is ⟨c⟩α, where c is a logical expression (standing for the pre-orientation) and α is an index standing for the pre-argumentative value.

III. AN EXAMPLE

I will now show, in an example analysis of a particular sentence, how the theory builds descriptions of the different aspects of the meaning, and how these descriptions are connected to one another and to eventual pragmatic information, in order to allow an interpretation of the possible utterances of the sentence. Suppose we want to analyse the sentence
Suppose now that the analysis of (4) (4) The present king of France is very old gives the following four formulae : Rl(4) : R2(4) f O( ~I~K(51} ~ where K , V~) mean "present king of France", and "very old", ~ ~(~) means "the unique x such that ~(~)# , ~ " • s truth. PI(4) says that (4) presupposes that there is a unique entity which is the present king of France ; P2(4) says that (4) asserts that this entity is very old ; RI(4) says that (4) imposes no conditions on argumentation ; and R2(4) says that (4) is pre-oriented towards whatever conclusion can be infered from the present king of France being old, and that the conclusion will obtain with a force, Similarily, suppose that the analysis of (5): I This is terribly sloppy (the symbolic language used is not defined) and incomplete (for instance, there should be an indication of conditions on the application of the topos), but it doesn't affect my purpose. (5) He (the present king of France) plays Jazz gives the following four formulae : P2(S)Rl(5) ~Ci,?k~c~)J with similar interpretation. If, in addition, we have a formal description of but in accordance to what has been suggested in section I, we account, in a compositional way, for all of the four aspects of (3) which are examined here : let us see this in some detail. The formal description of but is following P1 (X but Y) : Pl(x) A PI(Y) P2 (X but Y) : P2(x)~ P2(Y) R1 (X but Y) : Topos/R2 (Y) = ~Topos/R2 (x) R2 (X but Y) : R2(Y) the ~ere the first expression says that what is presupposed by X but Y is the conjunction of what is presupposed by X and what is presupposed by Y ; the second expression says that what is asserted by X but Y is the conjunction of what is asserted by X and what is asserted by Y ; the third expression says that the topoi that can be selected are those which are such that their application to the respective pre-orientations of X and Y leads to opposite formulae (i.e. such that the argumentative orientations of the corresponding utterances of X and Y are opposite); the last expression says that the pre-orientation of X bu~Y is that of Y. Applying this description of but to (4) and (5) leads to the following description of (3) : pl (3) : H~(~)+-->~ ~) which corresponds to the actual interpretations of (3). In particular, this description correctly predicts that, without further information about the context of utterance, the pair of topoi that are naturally selected to interpret (3) is (Ta,Tb) rather than the other three possibilities mentioned here. In fact, to [elect (Ta,Tb') , we would have to believe tha~o like Jazz and to wake up late in the morning are incompatible while believing that people who play Jazz tend to wake up late in the morning. If we wanted to select (Ta',Tb) we would have to believe that to be wise and to like Jazz are opposed : this is a possible 528 choice, and an utterance of (3) where these topoi were forced by some additional contextual information would be likely to shock some people (including myself). Finally, if we wanted to select (Ta',~b') , we would have to believe that to be wise and to wake up late in the morning are opposed : another possible choice, that might have more adepts than ~ the last one. The theory is still young ; its formal version is even younger, and certainly very imperfect. However, it is the only theory on the "market" (and for that reason, the first one...) 
The theory is still young; its formal version is even younger, and certainly very imperfect. However, it is the only theory on the "market" (and for that reason, the first one...) which examines this aspect of semantics, and it offers a basis for the conception of a Natural Language Processor that might "grasp the idea" expressed by a text and not only retrieve pieces of information. A computer version of a small fragment of French is now under study. The programming languages used for this study are PROLOG and LISP. The programming of the syntax and of the informative aspects of the semantics follows the ideas of Friedman and Warren 78 and 79 and of Hobbs and Rosenschein 78. For the pre-argumentative aspects and the topoi rules, nothing had been done before, and much remains to be done...

IV. REFERENCES

Anscombre, Jean-Claude and Oswald Ducrot: L'argumentation dans la langue, Mardaga, Bruxelles, 1983.

Bruxelles, Sylvie and Pierre-Yves Raccah: "L'analyse Argumentative", report on CNRS project n° 95.5122: Intelligence Artificielle 82, Paris, 1983.

Friedman, Joyce and David S. Warren: "A parsing method for Montague Grammar", Linguistics and Philosophy, vol. 2, 1978; "Using semantics in non-context-free parsing of Montague Grammar", Department of Computer Sciences, University of Michigan, 1979.

Hobbs, Jerry and Stanley Rosenschein: "Making computational sense of Montague's Intensional Logic", Artificial Intelligence 9, 1978.

Kamp, Hans: "A theory of truth and semantic representation", in Groenendijk et al., eds., Formal Methods in the Study of Language, Amsterdam, 1980.

Karttunen, Lauri and Stanley Peters: "Conventional Implicature", in Syntax and Semantics, vol. 11, Oh and Dinneen, eds., New York, 1979.

Montague, Richard: "The Proper Treatment of Quantification in Ordinary English" (1973), reprinted in Thomason, ed., Formal Philosophy, Yale University Press, 1974.

Raccah, Pierre-Yves: "Formal Understanding", Semantikos 4:2, 1980; "Presupposition, Signification et Implication", Semantikos 6:2, 1982; "Presupposition et Intension", HEL 5:2, 1983; "Argumentation et Raisonnement Implicite", in Les Modes de raisonnement, proceedings of the 2nd Conference on Cognitive Sciences, University of Paris, 1984.
VOICE SIMULATION: FACTORS AFFECTING QUALITY AND NATURALNESS

B. Yegnanarayana
Department of Computer Science and Engineering
Indian Institute of Technology, Madras-600 036, India

J.M. Naik and D.G. Childers
Department of Electrical Engineering
University of Florida, Gainesville, FL 32611, U.S.A.

ABSTRACT

In this paper we describe a flexible analysis-synthesis system which can be used for a number of studies in speech research. The main objective is to have a synthesis system whose characteristics can be controlled through a set of parameters to realize any desired voice characteristics. The basic synthesis scheme consists of two steps: generation of an excitation signal from pitch and gain contours, and excitation of the linear system model described by linear prediction coefficients. We show that a number of basic studies such as time expansion/compression, pitch modification and spectral expansion/compression can be made to study the effect of these parameters on the quality of synthetic speech. A systematic study is made to determine the factors responsible for unnaturalness in synthetic speech. It is found that the shape of the glottal pulse determines the quality to a large extent. We have also made some studies to determine the factors responsible for loss of intelligibility in some segments of speech. A signal-dependent analysis-synthesis scheme is proposed to improve the intelligibility of dynamic sounds such as stops. A simple implementation of the signal-dependent analysis is proposed.

I. INTRODUCTION

The main objective of this paper is to develop an analysis-synthesis system whose parameters can be varied at will to realize any desired voice characteristics. This will enable us to determine the factors responsible for the unnatural quality of synthetic speech. It is also possible to determine the parameters of speech that contribute to intelligibility. The key ideas in our basic system are similar to the usual linear predictive (LP) coding vocoder [1], [2]. Our main contributions to the design of the basic system are: (1) the flexibility incorporated in the system for changing the parameters of the excitation and the system independently, and (2) a means for combining the excitation and system through convolution without further interpolation of the system parameters during synthesis.

Atal and Hanauer [1] demonstrated the feasibility of modifying voice characteristics through an LPC vocoder. There have been some attempts to modify some characteristics (like pitch, speaking rate) of speech without explicitly extracting the source parameters. One such attempt is the phase vocoder [3]. A recent attempt to independently modify the excitation and vocal tract system characteristics is due to Seneff [4]. Unlike the LPC method, Seneff's method performs the desired transformations in the frequency domain without explicitly extracting pitch. However, it is difficult to adjust the intonation patterns while modifying the voice characteristics. In order to transform voice from one type (e.g., masculine) to another (e.g., feminine), it is necessary to change not only the pitch and vocal tract system but also the pitch contour as well as the glottal waveshape independently. It is known that glottal pulse shapes differ from person to person and also for the same person for utterances in different contexts [5].
Since one of our objectives is to determine the factors responsible for producing natural sounding synthetic speech, we have decided to implement a scheme which controls independently the vocal tract system characteristics and the excitation characteristics such as pitch, pitch contour and glottal waveshape. For this reason we have decided to use the standard LPC-type vocoder. In Sec. II we describe the basic analysis-synthesis system developed for our studies. We discuss two important innovations in our system which provide smooth control of the parameters for generating speech. In Sec. III we present results of our studies on voice modifications and transformations using the basic system. In particular, we demonstrate the ease with which one can vary independently the speaking rate, pitch, glottal pulse shape and the vocal tract response. We report in Sec. IV results from our studies to determine the factors responsible for the unnatural quality of the synthetic speech from our system. After accounting for the major source of unnaturalness in synthetic speech, we investigate the factors responsible for the low intelligibility of some segments of speech. We propose a signal-dependent analysis-synthesis scheme in Sec. V to improve the intelligibility of dynamic sounds such as stops.

II. DESCRIPTION OF THE ANALYSIS-SYNTHESIS SYSTEM

A. Basic System

As mentioned earlier, our system is basically the same as the LPC vocoders described in the literature [2]. The production model assumes that speech is the output of a time-varying vocal tract system excited by a time-varying excitation. The excitation is a quasiperiodic glottal volume velocity signal, a random noise signal, or a combination of both. Speech analysis is based on the assumption of quasistationarity during short intervals (10-20 msec). At the synthesizer the excitation parameters and gain for each analysis frame are used to generate the excitation signal. Then the system represented by the vocal tract parameters is excited by this signal to generate synthetic speech.

B. Analysis Parameters

For the basic system a fixed frame size of 20 msec (200 samples at a 10 kHz sampling rate) and a frame rate of 100 frames per second are used. For each frame a set of 14 LPCs is extracted using the autocorrelation method [2]. Pitch period and voiced/unvoiced decisions are determined using the SIFT algorithm [2]. The glottal pulse information is not extracted in the basic system. The gain for each analysis frame is computed from the linear prediction residual: the residual energy for an interval corresponding to only one pitch period is computed, and the energy is divided by the period in number of samples. This method of computing the squared gain per sample avoids the incorrect computation of the gain due to the arbitrary location of the analysis frame relative to glottal closure.

C. Synthesis

Synthesis consists of two steps: generation of the excitation signal and synthesis of speech. Separation of the synthesis procedure into these two steps helps when modifying the voice characteristics, as will be evident in the following sections. The excitation parameters are used to generate the excitation signal as follows. The pitch period and gain contours as a function of analysis frame number (i) are first nonlinearly smoothed using 3-point median smoothing. Two arrays (called Q and H for convenience) are created as illustrated in Figure 1.
The smoothed pitch contour P(i) is used to generate the Q-array: the value of the pitch period at any point determines the next point on the pitch contour. Since the pitch period is given in number of samples and the interframe interval is known, say N samples, the value of the pitch period at the end of the current pitch period is determined by suitable interpolation of P(i) for points in between two frame indices. The values of the pitch period as read from the pitch contour are stored in the Q-array; each entry in the Q-array is the value of the pitch period at that point. For nonvoiced frames the number of samples to be skipped along the horizontal axis is N, although on the pitch contour the value is zero; the entry in the Q-array for unvoiced frames is zero. For each entry in the Q-array the corresponding squared gain per sample can be computed from the gain contour using suitable interpolation between two frame indices. The squared gain per sample corresponding to each element in the Q-array is stored in the H-array.

From the Q and H arrays an excitation signal is generated as follows. For each nonvoiced segment, identified by an entry of zero in the Q-array, N_s samples of random noise are generated. The average energy per sample of the noise is adjusted to be equal to the entry in the H-array corresponding to that segment. For a voiced segment, identified by a nonzero value in the Q-array, the required number of excitation samples is generated using any desired excitation model. In the initial experiments only one of the five excitation models shown in Figure 2 was considered. The model parameters were fixed a priori; they were not derived from the speech signal. Note that the total number of excitation samples generated in this way is equal to the number of desired synthetic speech samples. Once the excitation signal is obtained, the synthetic speech is generated by exciting the vocal tract system with the excitation samples. The system parameters are updated every N samples. We do not use pitch-synchronous updating of the parameters, as is normally done in LPC synthesis. Therefore, interpolation of parameters is not necessary, and the instability problems arising from interpolated system parameters are avoided.

III. STUDIES USING THE BASIC SYSTEM

Two sentences spoken by a male speaker were used in our studies with the system:

S1: WE WERE AWAY A YEAR AGO
S2: SHOULD WE CHASE THOSE COWBOYS

Speech data sampled at 10 kHz was analyzed under the following conditions:

Frame size: 200 samples
Frame rate: 100 frames/sec
Each frame was preemphasized and windowed
Number of LPCs: 14
Pitch contour: SIFT algorithm
Gain contour: from LP residual
3-point median smoothing of pitch and gain contours

The excitation signal was generated using the smoothed pitch and gain contours, with the number of nonoverlapping samples per frame being N=200. Excitation model-3 (Fig. 2) was used throughout the initial studies; this model was a simple impulse excitation normally used in most LPC synthesizers. Synthesis was performed by using the excitation signal with the all-pole system. The system parameters were updated every 100 samples.
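A simplified sketch of the excitation generator of Sec. II-C is given below. It is not the authors' program: parameter names are mine, a single impulse per period stands in for the chosen excitation model, and unvoiced noise is generated N samples at a time.

```python
import numpy as np

def make_excitation(pitch, gain, N=100):
    """pitch[i]: period in samples per frame (0 = unvoiced);
    gain[i]: squared gain per sample; N: interframe interval."""
    # Interpolated contour value at sample position t (fractional frames).
    contour = lambda c, t: np.interp(t / N, np.arange(len(c)), c)
    out, t = [], 0.0
    end = (len(pitch) - 1) * N
    while t < end:
        q = contour(pitch, t)          # Q-array entry at this point
        h = contour(gain, t)           # H-array entry (squared gain/sample)
        if q < 1.0:                    # unvoiced: N noise samples
            out.append(np.random.randn(N) * np.sqrt(h))
            t += N
        else:                          # voiced: one period, one pulse
            period = int(round(q))
            pulse = np.zeros(period)
            pulse[0] = np.sqrt(h * period)  # average energy h per sample
            out.append(pulse)
            t += period
    return np.concatenate(out)

exc = make_excitation(pitch=np.array([0.0, 80, 82, 85, 0.0]),
                      gain=np.array([1e-4, 0.01, 0.01, 0.02, 1e-4]))
print(exc.shape)
```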
Spectral expanslon/compresslon wlth all the excitation characteristics preserved. D. Modification of voice characteristics (both pitch and spectrum). The llst of recordings made from these studies Is given in Appendix. The synthetic speech is highly Intelllglble and devoid of c11cks, noise, etc. The speech quallty Is distinctly synthetic. The issues of quallty or naturalness w111 be addressed In Section IV. IV. FACTORS FOR UNNATURAL QUALITY OF SYNTHETIC SPEECH It appears that the quality of the overall speech depends on the quality of reproduction of voiced segments. To determine the factors responsible for synthetic quality of speech, a systematic investigation was performed. The first part of the investigation consisted of determining which of the three factors namely, the vocal tract response, pitch period contour, and glottal pulse shape contributed significantly to the unnatural quality. Each of these factors was varied over a wide range of alternatives to determine whether a significant improvement in quality can be achieved. We have found that glottal pulse approximation contributes to the voice quality more than the vocal tract system model and pitch period errors. Different excitation models were Investl- gated to determine the one which contributes most significantly to naturalness. If we replace the glottal pulse characteristics wlth the LP residual itself, we get the original speech. If we can model the excitation sultably and determine the parameters of the model from speech, then we can generate hlgh quality synthetic speech. But it is not clear how to model the excitation. Several artificial pulse shapes wlth their parameters arbitrarily fixed, are used In our studies (Fig. 2). Excitation Model-l: Impulse excitation Excitation Model-2: Two impulse excitation Excitation Model-3: Three impulse excita- tion Excitation Model-4: Hflbert transform of an impulse Excitation Model-5: First derivative of Fant's model [6] Out of all these, Model-5 seems to produce the best quality speech. However, the most important problem to be addressed is how to determine the model parameters from speech. The studies on excitation models indicate that the shape of the excitation pulse Is crltlcal and It should be close to the original pulse If naturalness Is to be obtained in the synthetic speech. Another way of viewing thls is that the phase function of the excitation plays a prominent role In determining the quality. None of the simplified models approximate the phase properly. So it Is necessary to model the phase of the original signal and incorporate it in the synthesis. Flanagan's phase vocoder studies [7] also suggest the need for incorporating phase of the signal In synthesis. V. SIGNAL-DEPENDENT ANALYSIS- SYNTHESIS SCHEME The quality of synthetic speech depends mostly on the reproduction of voiced speech, whereas, we conjecture that intelligibility of speech depends on how different segments are reproduced. It Is known [8] that analysis frame size, frame rate, number of LPCs, pre-emphasis factor, glottal pulse shape, should be different for different classes of segments In an utterance. In many cases unnecessary preemphasls of data, or hlgh order LPCs can produce undesirable effects. Human listeners perform the analysis dynamically depending on the nature of the input segment. So it is necessary to Incorproate a signal dependent analysls-synthesis feature Into the system. There are several ways of implementing the slgnal dependent analysls ideas. 
One way is to have a fixed slze window whose shape changes depending on the desired effective size of the frame. We use the signal knowledge embodied in the pitch contour to guide the analysls. For example, the shape of the window could be a Gaussian function, whose width can be controlled by the pitch contour. The frame rate is kept as high as possible during the analysis stage. Unnecessary frames can be discarded, thus reducing the storage requirement and synthesis effort. The slgnal dependent analysls can be taken to any level of sophistication, wlth consequent advantages of improvement in inte111glbility, bandwidth compression and probably quality also. VI. DISCUSSION We have presented in this paper a discussion of an analysts-synthesis system which is convenient to study various aspects of the speech signal such as the importance of different parameters of features and their effect on naturalness and intelligibility. Once the characteristics of the speech signal are well understood, it fs possible to transform the voice characteristics of an utterance tn any desired manner. It is to be noted that modelling both the excitation signal and the vocal tract system are crucial for any studies on speech. Significant success has been achieved in modelling the vocal tract system accurately for purposes of synthesis. But on the other hand we have not yet found a convenient way of modelling the excitation source. It is to be noted that the solution to the source modelling problem does not lle in preserving the entire LP residual or Its Fourier transform or parts of the residual information In either domain. Because any such 532 approach limits the manipulative capability in synthesis especially for changing voice characterl stl cs. APPENDIX A: LIST OF RECORDINGS 1. Basic system Utterance of Speaker I: (a) original (b) synthetic (c) original Utterance of Speaker 2: (a) original (b) synthetic (c) original Utterance of Speaker 3: (a) original (b) synthetic (c) original 2. Time expansl on/compression (a) original (b) 11/2 times normal speaking rate (c) normal speaking rate (d)I/2 the normal speaking rate (e) original 3. Pitch period expansion/compression (a) original (b) twice the normal pitch frequency (c) normal pitch frequency (d) half the normal pitch frequency (e) ori gi nal 4. Spectral expanslon/compression (a) original (b) spectran expansion factor 1.1 (c) normal spectrum (d) spectral com- pression factor 0.9 (e) original 5. Conversion of one voice to another (a) male to female voice: original male voice - artificial female voice - original female voice (b) male to child voice: original male voice artificial child voice - original child voice (c) child to male voice: original child voice - artificial male voice - original male voice Q(1) - o Q(Z) • 0 " pitch contour ¢ : . Q(3) - Pl I i iil I 0 ,I, ,' I , , . I i °, Time in # samples Ft~ le. Illustration of generating Q-Array from smoothed pitch contour gain contour N(1) . G 1 H(2) • G 2 H(3) - G 3 H(4) - G 4 HiS) - G s Time in # samples Fig lb. I11ustratlon of qenerstlnq H-Array from smoothed pitch and getn contours 6. Effect of excitation models (a) orlginal (b) single Impulse excitation (c) two Impulses excitation (d) three impulses excitation (e) Hllbert transform of an impulse if) first derivative of Fant's model of glottal pulse REFERENCES [1] B.S. Atal and S.L. Hanauer, J. Acoust. Soc. Amer., vol. 50, pp. 637-655, 1971. [2] J.D. Markel and A.H. Gray, Linear Predic- tion of Speech, Sprtnger-Verlag, 19/6. [3] J.L. 
Flanagan, Speech Analysts, Synthesis and Perception, Sprlnger-Verlag, 1972. [4] s. Seneff, IEEE Trans. Acoust., Speech and Signal Processing, vol. ASSP-30, no. 4, pp. 566-577, August 1982. [5] R.H. Cotton and J.A. Estrie, Elements of Voice Quality in Speech and Language, N.J. Lass (Ed.), Academic Press, 1975. [6] G. Fant, "The Source Filter Concept in Voice Production," IV FASE Symposium on Acoustics and Speech, Venezta, April 21-24, 1981. [7] J.L. Flanagan, 3. Acoust. Soc. Amer., vol. 68, pp. 412-420, August lgBO. [8] C.R. Patlsaul and J.C. Hammett, Jr., J. Acoust. Soc. Amer., vol. 58, pp. 1296-1307, December 1975. Time tn t saumles T • J (a) Stngle tmpulse excitation P l (b) Two tmpulses excitation P Time In ! samples t I (c) O p T 1 IJ T2-WP Ttme |n t samplei llw,,, " " I I I o I ! Time In # stmples Three tmpulses excitation p (d) Htlbert transform of an tmpulse k--'Tl ' 1~P Ttme to # samples (e) Ftrst der|vat|ve of Fanl:'s model of glottal pulse Flq 2. Different Hodels for excitation 533 | 1984 | 113 |
INTERPRETING SYNTACTICALLY ILL-FORMED SENTENCES

Leonardo LESMO and Pietro TORASSO
Dipartimento di Informatica - Universita' di Torino
Corso Massimo D'Azeglio 42 - 10125 Torino - ITALY

ABSTRACT

The paper discusses three different kinds of syntactic ill-formedness: ellipsis, conjunctions, and actual syntactic errors. It is shown how a new grammatical formalism, based on a two-level representation of syntactic knowledge, is used to cope with ill-formed sentences. The basic control structure of the parser is briefly sketched; the paper shows that it can be applied without any substantial change both to correct and to ill-formed sentences. This is achieved by introducing a mechanism for the hypothesization of syntactic structures which is largely independent of the rules defining well-formedness. On the contrary, the second level of syntactic knowledge embodies those rules and is used to validate the hypotheses emitted by the first level. Alternative hypotheses are obtained, when needed, by means of local reorganizations of the parse tree. Sentence fragments are handled by the same mechanism, but in this case the second-level rules are used to detect the absence of one (or more) constituents.

INTRODUCTION

In the last years we have been involved in building a natural language (Italian) interface to a relational database. Even if this research required considering issues relative to knowledge representation (Lesmo et al. 83) and query optimization (Lesmo et al., in press), our main concern was to devise efficient parsing techniques (Lesmo et al. 81, Lesmo & Torasso 83). The term "efficient", when applied to language processing, can take a number of different meanings, ranging from pure processing speed to the ability to analyze fragments of text, to the flexibility that characterizes the behavior of the parser. We believe that all facets of efficiency are worth pursuing, but if the communication between the man and the machine has to occur in a really natural fashion, the robustness of the parser, i.e. its ability to cope with unforeseen inputs, must receive the greatest attention. It is important to realize that "unforeseen" is assumed here to refer to the syntactic form of the input sentence: of course, inputs that are unexpected from a semantic point of view should also be handled properly, but, since the syntactic knowledge usually acts as a filter between the reception of the input and the subsequent stages of the analysis, the first problem that must be faced is the following: how can the parser be prevented from rejecting sentences that are syntactically ill-formed, but could be interpreted correctly if they were passed to the other components of the system? Alternatively, the problem can be stated as: how to foresee every interpretable input? Marcus (1982) envisages the following alternatives:

a) the use of special "un-grammatical" rules, which explicitly encode facts about non-standard usage
b) the use of "meta-rules" to relax the constraints imposed by classes of rules of the grammar
c) allowing flexible interaction between syntax and semantics, so that semantics can directly analyze substrings of syntactic fragments or individual words when full syntactic analysis fails.

Even if we agree in stating the importance of a strong interaction between syntax and semantics, our approach is quite different from c) (as well as from the other ones).
For this reason, and in spite of the fact that a detailed description of the parser's operating principles has been given elsewhere (Lesmo & Torasso 83), the next section is devoted to an introduction to the basic ideas that led to the design of the syntactic knowledge source. The subsequent sections will cover some phenomena related to the ill-formedness of sentences, namely: ellipsis, conjunctions, and some types of actual syntactic errors.

GRAMMARS AND NATURAL LANGUAGE

It is widely accepted (see Charniak 81) that syntactic knowledge constitutes one of the foundations needed to build natural language interpreters. Various kinds of grammatical formalisms have been devised to represent the syntactic knowledge in an efficient, flexible and perspicuous way (Winograd 83). Even if the formalisms are quite different, the main characteristic shared by all grammars is that they are prescriptive (or normative) in nature. A grammar defines what a sentence is, that is, it specifies what sequences of words are acceptable. This is in sharp contrast with the normal use of language, which has, as its main purpose, the communication of something. Of course all grammars can be (and have been) augmented in order to build a representation of the meaning of the sentences (i.e. something that should be able to carry most of their communicative content), but a meaning can only be obtained for correct sentences.

Some efforts have recently been devoted to extending the coverage of grammars, in order to deal also with ill-formed sentences (Kwasny & Sondheimer 81, Weischedel & Sondheimer 82, Granger 82). This is usually done by relaxing the constraints imposed by some rules of the grammar, by adding new rules to take care of some kinds of ill-formedness, or by allowing the semantics to intervene when the syntax is not able to process the input. However, most of these approaches present some problems: either the perspicuousness and the readability of the grammar is reduced, or the control structure of the analyser is made considerably more complex.

The sources of ill-formedness can be grouped in three classes: ellipsis, conjunctions, and syntactic errors. In the case of ellipsis, a fragment such as "John" or "probably" can be understood by a human listener without any particular difficulty, provided that a particular context is given. On the other hand, it is apparent that those fragments are not consistent with the rules defining well-formed sentences.

Similar problems arise in case the grammar attempts to cope with conjunctions. In general, ellipsis is meaningful just in case a context external to the expression to analyse is assumed to exist. The situation with conjunctions is rather different: in some sense, the context that must be used to interpret a conjunct is given by the previous conjunct(s), so that it is expressed inside the sentence to be analysed. The difficulty in the analysis of conjunctions depends on the fact that not only is the second conjunct often ill-formed (if it is considered as a standing-alone sentence), but it is the particular form of ill-formedness that provides the analyzer with the piece of information needed to decide what the syntactic role of that conjunct is (or, if we assume that the result of the syntactic analysis is represented in the form of a tree, to decide where the constituent expressed by the conjunct has to be appended in the syntactic tree). For this reason, in the following sentences the second conjuncts have quite different roles:

John loves Mary and Susy (1)
John loves Mary and Susy Fred (2)
John loves Mary and hates Violet (3)

Thus, as in the case of ellipsis, a syntactic analyser designed to handle conjunctions must be able to operate on ill-formed fragments, but with the additional difficulty of modifying the parse tree on the basis of the type of ill-formedness.

The last source of ill-formedness that we will consider is syntactic errors. Differently from the previous cases, it is almost impossible to list all possible mistakes that a person could make in writing a sentence. Probably, most of them cannot be considered as syntactic errors (e.g. misspelling of words or wrong markers for a given case of a verb), but there are also errors that have purely syntactic grounds. Some noticeable examples are agreement errors, ordering errors and errors in verb tenses. An example of each of them is reported below:

John love Mary (4)
John is going probably to home (5)
Yesterday I have eaten a good cake (6)

Even if a more detailed discussion appears in the fifth section of this paper, it is worth noting here three points:

- most native English speakers will probably never make such errors, but, firstly, they could easily be made by non-native speakers and, secondly, at least the error exemplified in (4) could result from a typing error
- errors of that kind are more frequent in Italian, since it is richly inflectional
- even if the first and third type of errors can be (more or less) easily handled by means of relaxation techniques (Kwasny & Sondheimer 81), this is not the case for ordering errors; this is due to the fact that the agreement and tense constraints are expressed "explicitly" in the grammar (e.g. by an augmentation), whereas the order is specified implicitly (i.e. rigidly embodied in the grammar itself).

The analysis of the problems mentioned in this section, together with some other considerations that are not worth discussing extensively here (regarding, for instance, garden paths), led us to the design of a formalism for representing syntactic knowledge that splits it into two levels.

The first level contains a set of rules that, in our intention, characterize the meaningful sentences. It can be questioned whether rules regarding meaning can be considered as syntactic rules. Our opinion is that the syntactic categories associated with natural language words have a strong semantic bias (see, for a thorough discussion of this thesis, Lyons 77, Chapt. 11). For this reason, we defined a set of node types that have to be used in building the tree representing the syntactic structure of the sentence. These node types (reported in Table 1) are associated with the syntactic categories, and the topological constraints that govern the attachment of nodes constitute the basic filter which selects the "meaningful" fragments of a sentence.
For this reason, in the following sentences the second conjuncts have quite different roles: John loves Mary and Susy (i) John loves Mary and Susy Fred (2) John loves Mary and hates Violet (3) Thus, as in the case of ellipsis, a syntactic ana lyser designed to handle conjunctions must be able to operate on ill-formed fragments, but with the additional difficulty of modifying the parse tree on the basis of the type of ill-formedness. The last source of ill-formedness that we will consider are the syntactic errors. Differently from the previous cases, it is almost impossible to list all possible mistakes that a person could make in writing a sentence. Probably, most of them can not be considered as syntactic errors (e.g. misspe! ling of words or wrong markers for a given case of a verb), but there are also errors that have purely syntactic grounds. Some noticeable examples are agreement errors, ordering errors and errors in verb tenses. An examples of each of them is report ed below: John love Mary (4) John is going probably to home (5) Yesterday I have eaten a good cake (6) Even if a more detailed discussion appears in the fifth section of this paper, it is worth noting here three points: - most native English speakers will probably never make such errors, but, firstly, they could easily be made by non-native speakers and, secondly, at least the error exemplified in (4) could result from a typing error - errors of that kind are more frequent in Italian, since it is richly inflectional - even if the first and third type of errors can be (more or less) easily handled by means of relaxa tion techniques (Kwasny & Sondheimer 81), this is not the case for ordering errors; this is due to the fact that the agreement and tense constraints are expressed "explicitly" in the grammar (e.g. by an augmentation), whereas the order is specif_i ed implicitly (i.e. rigidly embodied in the gram mar itself). The analysis of the problems mentioned in this section, together with some other considerations that are not worth being discussed extensively here (regarding, for instance, garden paths) led us to the design of a formalism for representing the sy~ tactic knowledge that splits it into two levels. The first level contains a set of rules that, in our intention, characterize the meaningful sen fences. It can be questioned whether rules regard ing meaning can be considered as syntactic rules. Our opinion is that the syntactic categories asso ciated with natural language words have a strong semantic bias (see, for a thorough discussion of this thesis (Lyons 77, Chapt.ll~ For this reason, we defined a set of node types that have to be used in building the tree representing the syntactic structure of the sentence. These node types (report ed in table l) are associated with the syntactic categories and the topological constraints that go v 535 REL Relation Verbs, copulas REF Referent Nouns, pronouns CONN Connector Prepositions, conjunctions DET Determiner MOD ADJ Adverbial Modifier Adjectival Modifier Articles demonstrative adjectives, adjectival question words Adverbs Adjectives Table 1 - The node types: The first column contains the name (actual and extended); the sec- oond one contains the classical syntactic categories associated with the node type ern the attachment of nodes constitute the basic filter which selects the "meaningful" fragments of sentence. 
As an example of this kind of constraint, it is unreasonable to assume that an ADJ node can be attached elsewhere than to a REF node (with the exception of verbs having a copulative function, e.g. to be, to seem, to taste, etc.). For this reason, independently of its position in the sentence, we can exclude some kinds of constructs (e.g. an ADJ-ADJ attachment) as meaningless. When a rule of the first set is executed, it (normally) involves the creation of a new node (possibly more than one) and its attachment to the syntactic tree which was built up to that time. (It must be noted that the rules embodying these constraints are expressed in procedural form. Even if the lack of a declarative representation makes the design and the maintenance of the rules more difficult, they are made more efficient in terms of execution time by taking into account the context where the word occurs, involving a limited one-word lookahead.)

Because of the limited knowledge used to hypothesize the attachment point, it can often happen that the parser makes the wrong choice. Such an error can be detected by using two different knowledge sources: higher-level syntactic constraints and semantics. The first of them contains the rules that define the well-formedness of sentences (in particular gender-number agreement rules and ordering rules), whereas the second knowledge source tells whether an attachment is semantically acceptable (of course, even if a REF-ADJ attachment is consistent with the topological constraints, not all adjectives can be used to qualify a given noun). The semantic checks are done by accessing a semantic net organized in two levels: the first of them (external) concerns the acceptable surface structures (e.g. case frames for verbs), whilst the second one (internal) is concerned with the actual semantics of the domain (e.g. subsetting among classes).

Because of the frequency of this kind of wrong hypothesization, an effective computational tool must be used to restructure the tree: this tool consists in what we called "natural changes", which are simple pattern-action rules able to move constituents around; their purpose is to provide the parser with an alternative hypothesis when a given one has failed. Whereas the natural changes are triggered the same way whether the inconsistency is syntactic or semantic, different courses of action take place if the changes cannot produce any acceptable alternative hypothesis: if the error is of a syntactic type, then the first hypothesis is maintained but a warning message is sent to the user; if the error is semantic, then the current interpretation of the fragment is considered unacceptable and, in case one or more choice points were previously met, the parser backtracks; otherwise the analysis fails. More details about the use of backup, as well as about other topics related to the parsing strategy, can be found in (Lesmo & Torasso 83).

A problem which must be faced when a natural change is stimulated is the choice of the best interpretation. Let us suppose that an agreement between an adjective and a noun is violated. In this case the natural change MOVE UP tries to attach the adjective to a REF node which is at a higher level with respect to the REF which the adjective is currently attached to. The new attachment stimulates the rules of the second set (that is, the rules verifying the agreement and the word ordering) and the semantic ones. It is possible that the semantic rules signal that the new attachment is not admissible from a semantic point of view. At this point, if no alternative attachment is possible, the system has to consider the first interpretation as the best one, since it violates only the "weak" syntactic constraints.
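The following sketch illustrates the flavor of such a pattern-action "natural change". The MOVE UP rule name comes from the paper; the tree traversal, the check interfaces, and the restoration behavior are hypothetical, and the Node class is reused from the previous sketch.

```python
# Sketch of the MOVE UP natural change (hypothetical reconstruction).
# agreement_ok and semantics_ok stand in for the second-level syntactic
# rules and for the access to the two-level semantic net.

def find_parent(root, node):
    """Return the parent of node in the tree rooted at root, or None."""
    for child in root.children:
        if child is node:
            return root
        found = find_parent(child, node)
        if found:
            return found
    return None

def move_up(root, adj, agreement_ok, semantics_ok):
    """After an agreement violation, try to re-attach an ADJ node to a
    REF node higher in the tree. If no acceptable alternative exists,
    the first hypothesis is restored (the caller then warns the user
    for a syntactic error, or backtracks for a semantic one)."""
    original = find_parent(root, adj)
    current = original
    while True:
        higher = find_parent(root, current)
        # climb until the next REF node is reached
        while higher is not None and higher.ntype != "REF":
            higher = find_parent(root, higher)
        if higher is None:
            break                          # no alternative attachment left
        find_parent(root, adj).children.remove(adj)
        higher.children.append(adj)
        if agreement_ok(higher, adj) and semantics_ok(higher, adj):
            return True                    # acceptable re-attachment found
        current = higher                   # otherwise keep climbing
    # restore the first (least offending) hypothesis
    if find_parent(root, adj) is not original:
        find_parent(root, adj).children.remove(adj)
        original.children.append(adj)
    return False
```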
ELLIPSIS

"Ellipsis" is a Greek word (elleipsis) roughly corresponding to "lack, omission", that is used, to take a dictionary definition, to stand for the "omission of one or more words that can easily be subsumed". Even if all components of the definition are fundamental, we want to stress the presence of the adverb "easily". It is consistent with the observation that, whereas other phenomena occurring in natural language (e.g. garden paths) require a conscious effort of the listener, elliptical sentences are understood without any difficulty. On the other hand, most current grammatical formalisms are not able to account for this ease in understanding ellipsis; note the importance that is often laid on the ability to decide as soon as possible what the allowable form of a given constituent is (Bachenko et al. 83). This is due to the necessity of triggering in advance a suitably restricted set of grammar rules. In our case this is not required: the first-level rules will work the same way independently of the global context where a given word or constituent occurs (this is not true for "local" contexts in the current version of the system: see note 1); the consistency with the rules which govern the construction of well-formed sentences will be tested afterwards. This is particularly useful for handling elliptical fragments. Let us see through an example what the behaviour of the parser is in such situations, using the elliptical fragment:

  John

The rules associated with the category "noun" (note that the first-level rules are grouped in packets associated with syntactic categories), in case the analysis is at the beginning of the sentence, cause the building of the structure reported below:

[Figure: a tree fragment in which a REL node dominates a CONN node, which in turn dominates the REF node "JOHN"; the drawing is not legibly reproduced in this copy.]

When the end of the sentence is encountered, the structure is recognized as being incomplete, and a pattern matching procedure applied to any preceding question can reconstruct its actual meaning. What must be noticed is that the first-level syntactic rules used to analyze the fragment are exactly the same as those used to analyze complete and correct sentences.

CONJUNCTIONS

The kind of processing that occurs in handling conjunctions requires the introduction of rather different constraints. The first interpretation produced for sentences (1) and (2), after the fragment "John loves Mary and Susy" has been analyzed, is reported in fig. 1a. This interpretation is confirmed when the end of sentence (1) is encountered (so that the final structure is the one shown in fig. 1a). On the contrary, when the name "Fred" is scanned in sentence (2), it cannot be attached to "Susy" (excluding the possibility that "Fred" is her family name), and the attempt to move it up to "loves" causes a semantic error (three unmarked cases for "love"). At this point another "natural change" is triggered, which handles conjunctions. It tries to move up the "and" node, producing the structure of fig. 1b, which is accepted as the correct one. Note, however, that this kind of natural change is much more complex than the standard ones.
For example, in the reported examples two new nodes have to be built: the empty REL node (this is done easily, since only two nodes of the same type can be connected via "and") and the "UNMARKED" connection (for which an explicit request of creation and attachment must be issued).

[Fig. 1 - The parse trees for sentence (1) (fig. 1a) and sentence (2) (fig. 1b); the tree drawings are not legibly reproduced in this copy.]

A final observation regards the fact that the parser assumes that the first acceptable interpretation is the right one. This implies that a sentence of the form (see EX4 in Huang 83, p. 82) "The man with the telescope and the woman with the umbrella kicked the ball" would be interpreted as "The man with the telescope and with the woman with the umbrella kicked the ball", which is not the most natural interpretation for a human listener. However, Italian always expresses explicitly the number of the verb (i.e. plural in this case), so that the Italian translation of the sentence would be analyzed correctly.

SYNTACTIC ERRORS

The system tolerates and possibly recovers from the following different kinds of errors:
- lexical errors
- agreement errors
- errors in the ordering of the constituents
- extra cases
(note that only the second and the third kinds of errors are actual syntactic errors).

As regards the errors at the lexical level, they are detected when the morphological analyzer tries to decompose a given word into "root + suffix" form. When no decomposition is possible, or none of the obtained roots occurs in the dictionary, the system asks the user about the possibility that the input word is misspelled. In the affirmative case the user can retype the word, whereas in the opposite case the system asks the user to provide it with some pieces of information, such as the syntactic category of the word, its normalized form (i.e. its root), the gender, the number, etc.; moreover, the system asks what semantic object the word refers to. In this way the analysis of the sentence can go on and possibly an interpretation is constructed. However, it has to be pointed out that the information provided by the user during the analysis of the sentence is not always sufficient for the system to complete the analysis. In fact, the current version of the system does not have the capability of restructuring the semantic net dynamically, so that the system can continue the analysis only when the semantic object denoted by the unknown word is already present in the net.

As regards "agreement errors", there is a large variety of error types grouped under this label:
a) A first kind refers to the agreement in number and gender between the noun and the determiner, and between the noun and the adjectives. It is worth noticing that this kind of error is uncommon in Italian, because the suffixes for male and female and for singular and plural are in many cases quite different.
b) A slightly more frequent error concerns the agreement in number, gender and person between the subject and the verb. Since in Italian the suffixes indicating the different persons of the verb, its tense and mood are quite different, people whose mother tongue is Italian usually do not make this kind of mistake.
c) Another kind of agreement refers to the relationships existing between the moods and the tenses of the verbs occurring in the main sentence and its subordinates.
The rules, which are quite complex since they derive from the "consecutio temporum" of Latin, are often violated, so that this kind of error must be tolerated by the system. In this case the procedure which has the task of verifying the agreement emits a warning message when the rules are violated but, contrary to cases a) and b), it does not try to restructure the parse tree via "natural changes", since in most cases no alternative interpretation exists.

The framework we have provided is particularly useful for treating errors in the ordering of the constituents; in fact, the order is checked only when a given sentence (possibly a subordinate) has been completed. This happens when the REL node that heads the clause (main or subordinate) is closed, that is, when a punctuation mark is encountered or a new node is attached to a node which is (in the parse tree) at a level higher than the REL currently analyzed. Before stimulating the ordering rules, the system checks that the case frame of the REL has been correctly filled, that is, that all the cases attached to the REL are compatible with the head and among themselves. Just in this case a set of rules is activated, depending on the sentence type (it is apparent that the constituent order is different in a declarative, interrogative or relative clause). Each rule represents a legitimate ordering of the constituents, and the rules are ordered in decreasing degree of acceptability. The rules are matched in turn against the actual case frame of the verb acting as head of the clause under examination; in case no rule matches, a warning is issued to signal to the user that something has gone wrong in the ordering; anyway, the interpretation of the clause obtained by accessing the semantic net is maintained, and the analysis goes on if the entire sentence has not yet been scanned. A similar (but simpler) processing occurs for a REF node with respect to the adjectives attached to it.
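The ordering check lends itself to a simple formulation. The sketch below is a hypothetical rendering of it; the rule inventory and the constituent labels are invented for illustration, since the paper does not list the actual rules.

```python
# Sketch of the constituent-ordering check (hypothetical reconstruction).
# Each rule is a legitimate ordering of case labels for a sentence type,
# and the rules are listed in decreasing degree of acceptability.

ORDERING_RULES = {
    "declarative": [
        ("SUBJ", "VERB", "OBJ"),          # most acceptable
        ("SUBJ", "VERB", "OBJ", "TIME"),
        ("TIME", "SUBJ", "VERB", "OBJ"),  # least acceptable
    ],
    "interrogative": [
        ("VERB", "SUBJ", "OBJ"),
    ],
}

def check_ordering(sentence_type, actual_order):
    """Match the actual case order of a closed REL node against the
    rules for its sentence type. Returns the index of the first rule
    that matches (lower = more acceptable), or None, in which case the
    caller issues a warning but keeps the semantic interpretation."""
    for rank, rule in enumerate(ORDERING_RULES.get(sentence_type, [])):
        if tuple(actual_order) == rule:
            return rank
    return None

# An object-initial declarative clause matches no rule: a warning is
# issued, yet the interpretation obtained from the semantic net stands.
print(check_ordering("declarative", ["SUBJ", "VERB", "OBJ"]))  # -> 0
print(check_ordering("declarative", ["OBJ", "VERB", "SUBJ"]))  # -> None
```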
There are also cases which are more difficult to treat than the ones involving violations of the word ordering. In fact, a sentence like "Il giornale lo ha comprato Giovanni stamattina" (literally "The newspaper it has bought John this morning") involves not only word order violations (the syntactic object occurs in the first position in the sentence), but also a case denoted by "lo" ("it") which duplicates the object. Such sentences are clearly incorrect from a syntactic point of view as well as, in principle, from a semantic one (wrong case frame), but they are perfectly understandable and quite frequent, because they allow one to identify the object as the focus of the utterance without passivizing the sentence.

The treatment of such kinds of errors requires only relatively inexpensive modifications to the way the semantic net is accessed. It is worth noticing, in fact, that the syntactic object ("il giornale") is attached to a REL node which is empty when this attachment is performed. The semantic and agreement check procedures are stimulated but are immediately suspended, since the REL node is empty. Similarly, the pronoun "lo" is attached to the REL and the corresponding check procedures are suspended. When the REL node has been filled with "comprato", the suspended checks are resumed. The semantic procedure is able, by inspecting the semantic net, to state that "giornale" may fill the "object" role, so that when the previously suspended semantic check is executed, it concludes that "lo" ("it") cannot be attached to the REL filled with "comprare" ("buy"), since the object role has already been filled. Instead of rejecting the current interpretation by stimulating the natural changes and possibly the backup mechanism, a modification of the parsing strategy consists in attaching a warning to the REF node containing the pronoun "lo" and in going on with the sentence analysis. When the sentence has been completely scanned and, consequently, it is possible to perform a global check on the actual case frame of "comprare", the semantic procedure decides that "lo" is simply a repetition of the object and therefore it may be disregarded. In this way the interpretation of the sentence is possible, but the warning attached to the REF node containing "lo" is output to the user.

CONCLUSIONS

The paper presents a parsing strategy able to cope with different kinds of syntactic ill-formedness: ellipsis, conjunctions, and syntactic errors. Some examples are reported to show that the adopted formalism allows the parser to analyze ill-formed fragments without substantial changes to the rules used to analyze correct sentences.

However, some problems still deserve further attention. First of all, in case of ill-formed sentences it is often possible to assign more than one interpretation to the sentence (e.g. in "The boy love the girl" the subject can be considered plural (missing "s" in "boy") or singular (missing "s" in "love")); this can also happen for correct sentences (see the last example in the section on CONJUNCTIONS). The current version of the system should be enhanced both by taking into account contextual information (which could be useful in the first case) and by weighing in some way the output of the semantic component (which, today, is categorical: yes or no).

As regards the context, the experiments we made with the parser refer to isolated sentences, so that the "pattern matching" procedure we referred to in the section on ELLIPSIS (see the example "John") is neither implemented nor designed. Our belief is that the two components (pattern matcher and parser) are quite independent of each other, but we are also planning to address issues connected with discourse analysis.

Last but not least, some problems are more strictly connected with the basic parser design. Some English sentences break a locality principle embodied in the first-level syntactic rules. An example is given by "What architect do you know who likes the balalaika" (see Winograd 83, p. 136). We are currently studying this problem, whose solution will involve a change in the final representation as well as in the rule packets.

The current version of the parser, which runs on a VAX-11/780 under the UNIX operating system and is implemented in FRANZ LISP, includes the mechanisms for detecting and recovering the lexical, agreement, and word ordering errors, whereas the "extra cases", in the sense explained above, are currently being implemented.
REFERENCES

Bachenko J., Hindle D., Fitzpatrick E.: Constraining a Deterministic Parser. Proc. AAAI-83 (1983), 8-11.
Charniak E.: Six Topics in Search of a Parser: An Overview of AI Language Research. Proc. 7th IJCAI, Vancouver B.C. (1981), 1074-1087.
Granger R.H.: Scruffy Text Understanding: Design and Implementation of "Tolerant" Understanders. Proc. 20th ACL, Toronto (1982), 157-180.
Huang X.: Dealing with Conjunctions in a Machine Translation Environment. Proc. 1st Conf. ACL-Europe, Pisa (1983), 81-85.
Kwasny S.C., Sondheimer N.K.: Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems. AJCL 7 (1981), 99-108.
Lesmo L., Magnani D., Torasso P.: A Deterministic Analyzer for the Interpretation of Natural Language Commands. Proc. 7th IJCAI, Vancouver B.C. (1981), 440-442.
Lesmo L., Siklossy L., Torasso P.: A Two-Level Net for Integrating Selectional Restrictions and Semantic Knowledge. Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, India (1983), 14-18.
Lesmo L., Torasso P.: A Flexible Natural Language Parser based on a Two-Level Representation of Syntax. Proc. 1st Conf. ACL-Europe, Pisa (1983), 114-121.
Lesmo L., Siklossy L., Torasso P.: Semantic and Pragmatic Processing in FIDO: A Flexible Interface for Database Operations. Accepted for publication in Information Systems.
Lyons J.: Semantics. Cambridge Univ. Press (1977).
Marcus M.: Building Non-Normative Systems: The Search for Robustness: An Overview. Proc. 20th ACL, Toronto (1982), 152.
Weischedel R.M., Sondheimer N.K.: An Improved Heuristic for Ellipsis Processing. Proc. 20th ACL, Toronto (1982), 85-88.
Winograd T.: Language as a Cognitive Process; Vol. 1: Syntax. Addison-Wesley (1983).

539 | 1984 | 114 |
AN INTERNATIONAL DELPHI POLL ON FUTURE TRENDS IN "INFORMATION LINGUISTICS"

Rainer Kuhlen
Universitaet Konstanz, Informationswissenschaft
Box 6650, D-7750 Konstanz 1, West Germany

ABSTRACT

The results of an international Delphi poll on information linguistics, which was carried out between 1982 and 1983, are presented.

As part of conceptual work being done in information science at the University of Constance, an international Delphi poll was carried out from 1982 to 1983 with the aim of establishing a mid-term prognosis for the development of "information linguistics". The term "information linguistics" refers to a scientific discipline combining the fields of linguistic data processing, applied computer science, linguistics, artificial intelligence, and information science. A Delphi poll is a written poll of experts, carried out in this case in two phases. The results of the first round were incorporated into the second round, so that participants in the poll could react to the trends as they took shape.

1. Some demoscopic data

1.1 Return rate

Based on sophisticated selection procedures, 385 international experts in the field of information linguistics were determined and were sent questionnaires in the first round (April 1982). 90 questionnaires were returned. In the second round 360 questionnaires were mailed out (January 1983) and 56 were returned, 48 of these from experts who had answered in the first round. The last questionnaires were accepted at the end of June 1983.

Overlapping data in the two rounds: of the 90 first-round respondents, 42 answered only in the first round and 48 in both rounds; of the 56 second-round respondents, 48 had also answered in the first round and 8 only in the second.

In the following we refer to four sets of data:

  Set_A  90 from round 1
  Set_B  48 from round 1 with answers in round 2
  Set_C  56 from round 2
  Set_D  48 from round 2 with answers in round 1

But we shall concentrate primarily on Set_C because, according to the Delphi philosophy, the data of the second round are the most relevant. There were 8 persons within Set_C who did not answer in the first round. But they also were aware of the results of the first round; therefore a Delphi effect was possible.

(In the following, whole integers refer to absolute numbers, decimal figures to relative/percentage numbers.)

1.2 Qualification according to academic degree

The survey singled out highly competent people, as reflected in academic degree (data from Sets A and C):

Tab.1 Qualification of participants

                    Set A       Set C
  B.S./B.A.         23  25.6    16  28.6
  M.S./M.A./Dipl.   40  44.4    28  50.0
  Ph.D./Dr.         62  68.9    37  66.1
  Professor         14  15.6    15  26.8

1.3 Age

Since Delphi polls are concerned with future developments, it has been claimed in the past that the age and experience of people in the field influence the rating. In this paper, however, we cannot prove this hypothesis. Here are the mere statistical facts, taken only from Set_C (they do not differ significantly in the other sets):

Tab.2 Age of participants

  years:  -30      30-35     36-40     41-45     46-50     50-
          3  5.6   14 25.9   14 25.9   10 18.5   5  9.3    8 14.8

1.4 Experience

The number of years these trained specialists have been working in the general area of information linguistics was as follows:

Tab.3 Experience in information linguistics

  years of experience:  -2       3-5      6-10      10-
                        3 5.6    7 13.0   13 24.1   31 57.4
Table 4 gives an impression of the size of ~ne groups in SetA and Set_C: Tab.4 Size of research groups I-2 3-5 6-10 11-50 50 - Set A 16 19.0 25 29.8 21 25.0 18 21.4 4 4.8 Se~--C 14 26.4 17 32.1 12 22.6 8 15.1 2 3.8 1.6. Represented subject fields Among those answering in the two lowing fields were represented: rounds, the fol- Tab.5 Scientific back6round of participants Set A Set C information science 32 35.6 17 30.4 computer science 36 40.0 20 35.7 linguistics 21 27.3 16 28.6 natural sciences/ 15 16.7 12 21.4 mathematics e,ngineerin 6 3 3.3 2 3.6 humanities/social 15 16.7 12 21.4 sciences I~7 Research and application/development With respect to whether participants are mainly involved in research (defined as: basic groundwork, mainly of theoretical interest, experimental environment) or in applica- tion/development (defined as: mainly of interest from the point of view of working systems (i.e. commercial, industrial), applicable to routine tasks) the results were as follows: Tab.6 Involved in research or application Set A Set B Set C Set D research 59 65.6 31 64.6 39 69.6 33 68.8 application 27 30.0 16 33.3 16 28.6 15 31.3 1.8 Workin 6 environment Tab.7 Types of institutions Set A Set C m university 45 50.0 30 53.6 research institute 7 7.8 4 7. I industrial research 17 18.9 12 21.4 information industry 8 8.9 2 3.6 indust, administ. - I I .8 puolic administration 8 8.9 4 7.1 public inf. systems 3 3.3 2 3.6 Most of the work in information linguistics so far has concentrated on English ~generally more than 80%, with slight differences in the single sub-areas, i.e. acoustic 80.6%, indexing 82.5%, question-answering83.3%). 2. Content of the ~uestionnaire 2. I Sub-areas The discipline "information linguistics" was not defined theoretically but ostensively instead by a number of sub-areas. abreviation I. Acoustic/phonetic procedures Ac 2. Morphological/syntactic procedures Mo 3. Semantic/pr~m~tic procedures Se 4. Contribution of new hardware Ha 5. Contribution of new software So 6. Information/documentation languages I1 7. Automatic indexing In 8. Automatic abstracting Ab 9. Automatic translation Tr 10. Reference and data retrieval systems Re 11. Question answering and understanding Qu systems 2.2 Single topics The sub-areas included a varying number of topics (from 6 to 15). These topics were chosen based on the author's experience in information linguis- tics, on a pretest with mostly German researchers and practitioners, on advices from members of FID/LD, and on long discussions with Don Walker, Hans Karlgren, and Udo Hahn. Altogether, there were 91 topics in the first round and 90 in the second round, as follows:. 
acl Segmentation of Acoustic Input ac2 Speaker Dependent Speech Recognition ac3 Speaker Independent Speech Recognition ac4 Speech Understanding ac5 Identification of Intonational/Prosodic Infor- mation with respect to Syntax ac6 Identification of Intonational/Prosodic Infor- mation with respect to Semantics ac7 Automatic Speech Synthesis mol mo2 mo3 mo4 mo5 mo6 mot mo8 mo9 mol 0 mol I Automatic Correction of Incomplete or False Input Analysis of Incomplete or Irregular Input Morphological Analysis (Reduction Algorithms) Automatic Determination of Parts of Speech Automatic Analysis of ?unctions& Notions Partial Parsing Recognition Techniques Partial Parsing Transformation Techniques Recognition of Syntactic Paraphrases Reco~ition of Textual Paraphrases Question Recognition Grits of Syntactic Parsing of Unrestricted Natural Language Input sel Semantic Classification of Verbs or Predicates se2 Or6mnizin6 Domain-Specific ?tame/Script-Type Structures se3 Semantically Guided Parsing se4 Semantic Parsing 541 se5 Knowledge Acquisition se6 Analysis of Quantifiers se7 Analysis of Deictic Expressions se8 Analysis of Anaphoric/Cataphoric Expressions (Pronominalization) se9 Processing of Temporal Expressions se10 Establishment of Text Cohesion and Text Coherence sel I Recognition of Argumentation Patterns se12 Management of Vague and Incomplete Knowledge set3 Automatic Management of Plans set4 Formalizing Speech Act Theory se15 Processing of "Unpr~m~tical" Input hal Personal Computers for Linguistic Procedures ha2 Parallel Processing Systems ha3 New Mass Memory Technologies ha4 Associative Memory ha5 Terminal Support ha6 Hardware Realization of NatnAral Langusge Analysis Procedures ha7 Communication Networks sol Standard Progr~,mi ng Languages for Information Linguistics so2 Development of Modular Standard Programs (Hardware-Independent) so3 Natural Language ProgrPJ,ming so4 Parallel Processing Techniques so5 Alternative File Organization so6 New Database System Architecture for the Purpose of Information Linguistics so7 Flexible Data Management Systems i11 Compatibility of Documentation Languages in Distributed Networks il2 Enrichment of Information Languages by Statistical Relations ll3 Enrichment of Information/Documentation La~s by Linguistic Semantics il4 Enrichment of Higher Documentation Languages by Artificial Intelligence Methods il5 Standardization of Information/Documentation Languages il6 Documentation Languages for Non-Textual Data il7 Information/Documentation Languages for Heterogeneous Domains lib Determination of Linguistic Relations il9 Adaptation of Ordinary Language Dictionary Databases ill0 (cancelled in the second round) ill I Statistical Models of Domain-Specific Scientific Languages inl Improvement of Automatic Indexing by Morphological Reduction Algorithms in2 Improvement of Automatic Indexing by Syntactic Analysis in3 Improvement of Automatic Indexing by Semantic Approaches in4 Probabilistic Methods of Indexing in5 Indexing Functions in6 Automatic Indexing of Full-texts abl Abstracting Methodolo~ ab2 Automatic Extracting ab3 Automatic Indicative Abstracting ab4 Automatic Informative Abstracting ab5 Automatic Positional Abstracting ab6 Graphic Representation of Text Structures trl Development of Sophisticated Multi-Lingual Lexicons tr2 Automatic Translation of Restricted Input tr3 Interactive Translation Systems tr4 Fully Automatic Translation Systems tr5 Multilingual Translation Systems tr6 Integration of Information and Translation Systems rel Iterative Index and/or Query 
re1 Iterative Index and/or Query Modification by Enrichment of Term Relations
re2 Natural Language Front-End to Database Systems
re3 Graphic Display for Query Formulation Support
re4 Multi-Lingual Databases and Search Assistance
re5 Public Information Systems

qu1 Integration of Reference Retrieval and Question Answering Systems
qu2 Linguistic Modeling of Question/Answer Interaction
qu3 Formal Dialogue Behavior
qu4 Belief Structures
qu5 Heuristic/Common Sense Knowledge
qu6 Change of Roles in Man-Machine Communication
qu7 Automatic Analysis of Phatic Expressions
qu8 Inferencing
qu9 Variable Depth of System Answers
qu10 Natural Language Answer Generation

Each topic was defined by a textual paraphrase, e.g. for ab4: "procedures of text condensation that stress the overall, true-to-scale compression of a given text; although varying in length (according to the degree of reduction), it can be used as a substitute for original texts".

3. Answer parameters for the sub-areas

3.1 Competence (=CO)

At the beginning of every sub-area, participants were requested to rate their competence according to three parameters: "good" (with a specialist's knowledge), "fair" (with a working knowledge), and "superficial" (with a layman's knowledge). Tab.8 shows the self-estimation of competence within the sub-areas (data taken from Set_C):

Tab.8 Competence

        good        fair        superficial
        (rank)      (rank)      (rank)
  Ac     4  (11)    14   (8)    34   (1)
  Mo    25   (3)    17   (5)     8   (7)
  Se    24   (4)    17   (5)    10   (5)
  Ha    13  (10)    23   (1)    14   (3)
  So    18   (7)    22   (2)     8   (7)
  Il    18   (7)    18   (4)    12   (4)
  In    21   (6)    17   (5)     9   (6)
  Ab    14   (9)    20   (3)    16   (2)
  Tr    24   (4)     5  (11)     0  (11)
  Re    31   (2)    12  (10)     8   (7)
  Qu    32   (1)    13   (9)     7  (10)

Tab.9 Desirability

        ++    +    -    --
  In    19   19    1     0
  Ab    21   22    4     0
  Tr    33   11    1     0
  Re    35   13    0     0
  Qu    35    8    3     0
In general one can say that people with "good" competence (or more correctly: with competence estimation of "good") in a sub-area gave topics higher ratings for importance and feasibility both from the research and the application points of view. Nevertheless, there were differences. Those with "good" competence differed more widely in evaluations of research-oriented topics than in applica- tion-oriented topics, whereas those with "super- ficial" competence in the sub-areas were closer to the average in their evaluations of applica- tion-oriented topics than of research-oriented topics. Here are some examples of the differences (as reflected in the averages of the sub-areas). Tab. 11 is to be read as follows: (line I) in the sub-area "Acoustic" those with "good" competence evaluated 5.6% higher than the average with respect to importance for research, whereas people with "superficial" competence in the same sub-area evaluated 6.9% lower than average. Tab.11 Competence differences ( g=good; s=superficial) I/R I/A F/R F/A CO/g CO/s CO/g CO/s CO/g CO/s CO/g CO/s Ac5.6+ 3.0- In4.7+ 5.1- Ac25.1+ 3.9- Ac9.4+ 0.6- Hal .8+ 9.3- Ab4.3+ 13.8- Sel .I- 5.8+ Ha7.5+ 7.0- In5.4+ 19.8- In6.2+ 19.4- In5.0+ 19.4- Ab7.2+ 8.4- As can be seen in the column F/R, sometimes the general trend is reversed (Semantic: values from "competent" participants are lower than from par- ticipants with "superficial" competence). 5.1.2 Desirability There is also a connection between desirability and the values of importance and feasibility. Those who gave high ratin~s for desirability (DE++) in general gave higher values to the single topics in the respective sub-areas, both in comparison to the average values and to the values of those who gave only high desirability (DE+) to a given sub-area. The differences between DE++ and DE+ are even higher than those between C/g und C/s. 0nly the F/R data in the translation and retrieval areas are lower for D++ than for D+, in all other cases the D++ values are higher. Some examples: Tab. 12 Desirability differences I/R I/A F/R F/A DE++ DE+ DE++ DE+ DE++ DE+ DF~-* DE+ In 6.6+ 4.3- 4.5+ 4.9- 6.9+ 10.9- 11.4+ 15.3- Ab 6.8+ 0.6-13.2+ 5.8- 0.9+ 0.2+ 7.9+ 4.3- Tr 2.8+ 5.9- 0.4+ 1.1- 2.1- 8.3+ 2.9+ 3.2- Re 1.9+ 8.3- O.1+ - 0.2- 0.6+ 2.0+ 4.1- Qu4.O+ 8.1- 7.5+ 14.2- 3.8+ 11.4- 7.7+23.5- 5.1.3 Importance, Feasibility, Date of Realization (In the following tables the values of the answers ++ (very important, definitel~ feasible) and + (important, possibly feasible) have been added 543 ~ ogether, and the values from the single topics ave oeen averaged. Exact year-datawere calcu- lated from the answers on the 6-point rating scale, cf. Tab.10. In order to show the Delphi effect the data in Tab. 13 are taken fromSet__A, in Tab.14 from Set_C) Tab.13 Averaged I- r F- t DR-values from Set A Importance Feasibility Realization I/R I/A F/R F/A DR/R DR/A Ac 85.4 82.5 62.5 49.4 1997 2000 Mo 84.0 87.7 84.1 75.9 1987 1990 Se 89.2 81.2 67.5 53.3 1995 1999 Ha 84.8 87.9 84.6 76.0 1986 1991 So 88.1 88.9 80.8 72.1 1988 1994 IL 77.6 79.0 83.1 74.6 1987 1993 In 90.2 90.0 79.9 74.7 1986 1990 Ab 79.8 77.7 69.2 58.7 1991 1997 Tr 87.5 87. 
I 72.3 63.0 1994 1998 Re 87.7 90.7 86.8 78.3 1 985 1 989 Qu 87.5 80.2 74.2 61 .I 1991 19989 Tab.14 Averaged I-, F- t DR-values from Set C I/R I/A F/R F/A DR/R DR/A Ac 90.9 84.0 64.2 46.4 1998 2001 Mo 90.1 89.3 88.4 78.6 1967 1991 Se 92.6 83.4 70.3 49.4 1996 2000 Ha 82.4 83.8 88.6 75.8 1 987 1993 So 88.0 88.3 80.1 67.5 1989 1996 IL 82.8 83.4 88.0 77.0 1988 1997 In 89.4 90.5 89.6 79.2 1986 1991 Ab 75.6 75.0 68.8 52.3 1992 1999 Tr 89.3 91.5 69.7 53.2 1994 2000 Re 83.8 91.7 91.7 83.9 1986 1991 Qu 88.4 80.8 76.8 52.7 1992 1999 The average values in Tab. 13 and 14 should not be over-interpreted. In particular, ranking is unjustified. One cannot simply conclude that, say, the sub-area "Semantics" (92.6) is more important than that of "Abstracting" (75.6) with respect to research because the average value is higher; or that Indexing (79.2) is more feasible from an application point of view than Abstracting (52.3). $uch conclusions may be true, and this is why the values in Tab. 13 and 14 are given, but the parameters should actually only be applied to the single topics in the sub-areas. Cross-group ranking is not allowed for methodological reasons. But nevertheless the It is obvious that general true: data are interesting enough. the following relation is in I/R (-values) > I/A > F/R > F/A There are some exceptions to this general rule, such as Re-I/A>I/R (both in Set A and Set C); Ha-F/R>I/R (in Set C); (Re-F/R ant F/A)>I/R--(in Set_C); and I1-F/R>~/R(both in Set_A and SetC). There seems to be a non-trivial g~p between impor- tance and feasibility (both with respect to research and application). In other words, there are more problems than solutions. And there is an even broader gap between application and research. From a practical point of view there is some skep- sis concerning the possibility of solving important research problems. And what seems to be feasible from a research point of view looks different from an application one. The values in the second round are in general higher than in the first one. This is an argument against the oft cited Delphi hypothesis that the feedback-mechanism - i.e. that the data of the previous round are made known at the start of the following round -has an averaging effect. The increase-effect can probably be explained by the fact that the percentage of qualified and "com- p etent" people was higher in the second round perhaps these were the ones who were motivated to take on the burden of a second round) - and, as Tab.11 shows, people who rated themselves "com- petent" tend to evaluate higher. Between the two rounds the decline in the sub-areas "Software" and "Hardware" (apart from the parameter F/R) is striking. There is an overall increase for '%lorphology" and "Information Lan- guages" for all parameters, and a dramatic increase for the topics in "Indexing" for F/R (9.7%), and a dramatic decline for the "Translation"- and "Ques- tion-Answering"-topics for the parameter F/A (9.8 and 8.4%). The dates of realization do not change dramati- cally° On the average there is a difference of one year (and this makes sense because there was almost one year between round I and 2). There is a ten- dency from a research point of view for the expec- tation of realization to be somewhat earlier from an application standpoint. But the differences are not so dramatic as to justify the conclusion that researchers are more optimistic than developers/practitioners. 5.2 Single topics Tab. 
Tables 15 and 16 show the two highest-rated topics in each sub-area in the first two columns, and the two lowest-rated topics in each sub-area in the last two columns; these represent average data from Set_C. The four columns in the middle show the estimation of participants who work in research or application, respectively. As part of the demoscopic data it was determined whether participants work more in research or in application (cf. Tab.6). Notice that both groups answered from a research and an application point of view. In a more detailed analysis (which will be published later) this and other aspects can be pursued. In Tab.15 and 16 the data for very high importance (++) and high importance (+) have been added together.

Tab.15 Topics according to importance (two topics per cell)

        most important topics (++ and +)                                    less important, average (-- and -)
        average               research              application
        I/R        I/A        I/R        I/A        I/R        I/A          I/R         I/A
  Ac    ac1, ac3   ac7, ac2   ac1, ac3   ac1, ac2   ac1, ac2   ac2, ac3     ac6, ac7    ac6, ac5
  Mo    mo8, mo11  mo1, mo10  mo8, mo11  mo1, mo3   mo8, mo9   mo1, mo2     mo1, mo7    mo9, mo4
  Se    se5, se2   se3, se12  se5, se8   se3, se2   se2, se3   se2, se5     se15, se7   se15, se11
  Ha    ha7, ha4   ha7, ha5   ha4, ha2   ha3, ha7   ha7, ha2   ha5, ha7     ha6, ha1    ha6, ha2
  So    so6, so7   so7, so5   so6, so5   so5, so7   so3, so4   so4, so6     so1, so3    so3, so4
  Il    il10, il4  il10, il1  il4, il1   il1, il4   il1, il7   il1, il6     il5, il11   il11, il5
  In    in3, in2   in1, in6   in3, in6   in6, in3   in3, in6   in3, in6     in4, in5    in5, in4
  Ab    ab4, ab5   ab3, ab2   ab4, ab5   ab2, ab3   ab3, ab1   ab3, ab4     ab2, ab6    ab6, ab5
  Tr    tr3, tr5   tr3, tr2   tr2, tr5   tr3, tr2   tr3, tr4   tr1, tr3     tr1, tr6    tr5, tr1
  Re    re2, re1   re1, re5   re2, re1   re1, re2   re1, re2   re1, re5     re3, re4    re3, re4
  Qu    qu5, qu2   qu1, qu8   qu2, qu5   qu1, qu8   qu1, qu5   qu1, qu2     qu7, qu3    qu7, qu3

Tab.16 Most feasible, less feasible topics (two topics per cell)

        most feasible topics (++ and +)                                     less feasible, average (-- and -)
        average               research              application
        F/R        F/A        F/R        F/A        F/R        F/A          F/R         F/A
  Ac    ac7, ac2   ac7, ac2   ac2, ac5   ac7, ac1   ac2, ac7   ac2, ac7     ac6, ac4    ac6, ac4
  Mo    mo3, mo10  mo3, mo10  mo3, mo10  mo3, mo10  mo1, mo2   mo1, mo2     mo9, mo5    mo11, mo5
  Se    se3, se6   se2, se6   se3, se2   se9, se2   se2, se6   se2, se6     se15, se11  se15, se11
  Ha    ha5, ha7   ha5, ha1   ha5, ha7   ha5, ha3   ha4, ha5   ha4, ha5     ha6, ha2    ha6, ha2
  So    so2, so1   so2, so1   so2, so1   so1, so2   so2, so7   so2, so5     so3, so4    so3, so4
  Il    il10, il9  il10, il9  il9, il8   il6, il9   il1, il7   il1, il7     il7, il6    il4, il5
  In    in1, in2   in4, in1   in4, in5   in4, in5   in3, in4   in4, in3     in6, in3    in3, in6
  Ab    ab2, ab3   ab2, ab3   ab2, ab3   ab2, ab3   ab2, ab1   ab2, ab3     ab4, ab5    ab5, ab6
  Tr    tr3, tr2   tr3, tr1   tr3, tr2   tr3, tr1   tr3, tr2   tr3, tr2     tr4, tr5    tr4, tr5
  Re    re1, re3   re3, re5   re1, re3   re3, re5   re1, re2   re1, re3     re4, re5    re4, re2
  Qu    qu1, qu2   qu1, qu10  qu1, qu2   qu1, qu10  qu1, qu5   qu10, qu1    qu4, qu9    qu4, qu9

A final table shows the data for short-term and long-term topics; only the two closest and the two most distant topics in each sub-area are given (data from Set_C).
Tab.17 Short-term and long-term topics

        short term                                  long term
        DR/R                 DR/A                   DR/R                 DR/A
  Ac    ac7 1987, ac2 1991   ac7 1992, ac2 1997     ac4 2003, ac6 2003   ac4 2006, ac6 2006
  Mo    mo3 1984, mo10 1984  mo3 1984, mo6 1986     mo9 1997, mo11 1992  mo9 2000, mo11 1997
  Se    se2 1987, se1 1988   se1 1992, se6 1995     se15 2000, se11 2000 se11 2005, se14 2005
  Ha    ha5 1984, ha7 1984   ha5 1985, ha3 1988     ha6 1996, ha2 1991   ha6 1999, ha2 1997
  So    so1 1984, so2 1987   so1 1987, so2 1992     so3 1998, so4 1993   so3 2001, so4 1998
  Il    il2 1986, il9 1986   il9 1990, il2 1991     il10 1989, il5 1989  il4 1997, il3 1996
  In    in1 1984, in4 1984   in1 1986, in4 1987     in3 1989, in6 1988   in3 1997, in6 1997
  Ab    ab2 1986, ab3 1988   ab2 1991, ab3 1996     ab5 1996, ab6 1996   ab4 2002, ab6 2001
  Tr    tr3 1985, tr2 1985   tr3 1990, tr2 1992     tr4 2000, tr5 1998   tr4 2006, tr5 2005
  Re    re2 1984, re1 1984   re3 1987, re1 1988     re4 1992, re5 1986   re4 1998, re5 1990
  Qu    qu1 1988, qu2 1988   qu1 1997, qu2 1997     qu9 1997, qu4 1997   qu4 2001, qu5 2001

Finally, I would like to thank all those who participated in the Delphi rounds. It was an extremely time-consuming task to answer the questionnaire, which was more like a book than a folder. I hope the results justify the efforts. The analysis would not have been possible without the help of my colleagues: Udo Hahn for the conceptual design, and Dr. J. Staud together with Annette Woehrle, Frank Dittmar and Gerhard Schneider for the statistical analysis. This project has been partially financed by the FID/LD committee and by the "Bundesministerium fuer Forschung und Technologie / Gesellschaft fuer Information und Dokumentation", Grant PT 200.08.

545 | 1984 | 115 |
Machine Translation: its History, Current Status, and Future Prospects

Jonathan Slocum
Siemens Communications Systems, Inc.
Linguistics Research Center
University of Texas
Austin, Texas

Abstract

Elements of the history, state of the art, and probable future of Machine Translation (MT) are discussed. The treatment is largely tutorial, based on the assumption that this audience is, for the most part, ignorant of matters pertaining to translation in general, and MT in particular. The paper covers some of the major MT R&D groups, the general techniques they employ(ed), and the roles they play(ed) in the development of the field. The conclusions concern the seeming permanence of the translation problem, and the potential re-integration of MT with mainstream Computational Linguistics.

Introduction

Machine Translation (MT) of natural human languages is not a subject about which most scholars feel neutral. This field has had a long, colorful career, and boasts no shortage of vociferous detractors and proponents alike. During its first decade in the 1950's, interest and support was fueled by visions of high-speed high-quality translation of arbitrary texts (especially those of interest to the military and intelligence communities, who funded MT projects quite heavily). During its second decade in the 1960's, disillusionment crept in as the number and difficulty of the linguistic problems became increasingly obvious, and as it was realized that the translation problem was not nearly so amenable to automated solution as had been thought. The climax came with the delivery of the National Academy of Sciences ALPAC report in 1966, condemning the field and, indirectly, its workers alike. The ALPAC report was criticized as narrow, biased, and short-sighted, but its recommendations were adopted (with the important exception of increased expenditures for long-term research in computational linguistics), and as a result MT projects were cancelled in the U.S. and elsewhere around the world. By 1973, the early part of the third decade of MT, only three government-funded projects were left in the U.S., and by late 1975 there were none. Paradoxically, MT systems were still being used by various government agencies here and abroad, because there was simply no alternative means of gathering information from foreign [Russian] sources so quickly; in addition, private companies were developing and selling MT systems based on the mid-60's technology so roundly castigated by ALPAC. Nevertheless, the general disrepute of MT resulted in a remarkably quiet third decade.

We are now into the fourth decade of MT, and there is a resurgence of interest throughout the world -- plus a growing number of MT and MAT (Machine-aided Translation) systems in use by governments, business and industry. Industrial firms are also beginning to fund M(A)T R&D projects of their own; thus it can no longer be said that only government funding keeps the field alive (indeed, in the U.S. there is no government funding, though the Japanese and European governments are heavily subsidizing MT R&D). In part this interest is due to more realistic expectations of what is possible in MT, and the realization that MT can be very useful though imperfect; but it is also true that the capabilities of the newer MT systems lie well beyond what was possible just one decade ago.

In light of these events, it is worth reconsidering the potential of, and prospects for, Machine Translation.
After opening with an explanation of how [human] translation is done where it is taken seriously, we will present a brief introduction to MT technology and a short historical perspective before considering the present status and state of the art, and then moving on to a discussion of the future prospects. For reasons of space and perspicuity, we shall concentrate on MT efforts in the U.S. and western Europe, though some other MT projects and less-sophisticated approaches will receive attention.

The Human Translation Context

When evaluating the feasibility or desirability of Machine Translation, one should consider the endeavor in light of the facts of human translation for like purposes. In the U.S., it is common to conceive of translation as simply that which a human translator does. It is generally believed that a college degree [or the equivalent] in a foreign language qualifies one to be a translator for just about any material whatsoever. Native speakers of foreign languages are considered to be that much more qualified. Thus, translation is not particularly respected as a profession in the U.S., and the pay is poor.

In Canada, in Europe, and generally around the world, this myopic attitude is not held. Where translation is a fact of life rather than an oddity, it is realized that any translator's competence is sharply restricted to a few domains (this is especially true of technical areas), and that native fluency in a foreign language does not bestow on one the ability to serve as a translator. Thus, there are college-level and post-graduate schools that teach the theory (translatology) as well as the practice of translation; thus, a technical translator is trained in the few areas in which he will be doing translation. Of special relevance to MT is the fact that essentially all translations for dissemination (export) are revised by more highly qualified translators, who necessarily refer back to the original text when post-editing the translation. (This is not "pre-publication stylistic editing".) Unrevised translations are always regarded as inferior in quality, or at least suspect, and for many if not most purposes they are simply not acceptable. In the multi-national firm Siemens, even internal communications which are translated are post-edited. Such news generally comes as a surprise, if not a shock, to most people in the U.S.

It is easy to see, therefore, that the "fully-automatic high-quality machine translation" standard, imagined by most U.S. scholars to constitute minimum acceptability, must be radically redefined. Indeed, the most famous MT critic of all eventually recanted his strong opposition to MT, admitting that these terms could only be defined by the users, according to their own standards, for each situation [Bar-Hillel, 71]. So an MT system does not have to print and bind the result of its translation in order to qualify as "fully automatic." "High quality" does not at all rule out post-editing, since the proscription of human revision would "prove" the infeasibility of high-quality Human Translation. Academic debates about what constitutes "high-quality" and "fully-automatic" are considered irrelevant by the users of Machine Translation (MT) and Machine-aided Translation (MAT) systems; what matters to them are two things: whether the systems can produce output of sufficient quality for the intended use (e.g., revision), and whether the operation as a whole is cost-effective or, rarely, justifiable on other grounds, like speed.
Machine Translation Technology

In order to appreciate the differences among translation systems (and their applications), it is necessary to understand, first, the broad categories into which they can be classified; second, the different purposes for which translations (however produced) are used; third, the intended applications of these systems; and fourth, something about the linguistic techniques which MT systems employ in attacking the translation problem.

Categories of Systems

There are three broad categories of "computerized translation tools" (the differences hinging on how ambitious the system is intended to be): Machine Translation (MT), Machine-aided Translation (MAT), and Terminology Databanks. MT systems are intended to perform translation without human intervention. This does not rule out pre-processing (assuming this is not for the purpose of marking phrase boundaries and resolving part-of-speech and/or other ambiguities, etc.), nor post-editing (since this is normally done for human translations anyway). However, an MT system is solely responsible for the complete translation process from input of the source text to output of the target text without human assistance, using special programs, comprehensive dictionaries, and collections of linguistic rules (to the extent they exist, varying with the MT system). MT occupies the top range of positions on the scale of computer translation sophistication.

MAT systems fall into two subgroups: human-assisted machine translation (HAMT) and machine-assisted human translation (MAHT). These occupy successively lower ranges on the scale of computer translation sophistication. HAMT refers to a system wherein the computer is responsible for producing the translation per se, but may interact with a human monitor at many stages along the way -- for example, asking the human to disambiguate a word's part of speech or meaning, or to indicate where to attach a phrase, or to choose a translation for a word or phrase from among several candidates discovered in the system's dictionary. MAHT refers to a system wherein the human is responsible for producing the translation per se (on-line), but may interact with the system in certain prescribed situations -- for example, requesting assistance in searching through a local dictionary/thesaurus, accessing a remote terminology databank, retrieving examples of the use of a word or phrase, or performing word processing functions like formatting. The existence of a pre-processing stage is unlikely in a MA(H)T system (the system does not need help; instead, it is making help available), but post-editing is frequently appropriate.

Terminology Databanks (TD) are the least sophisticated systems, because access frequently is not made during a translation task (the translator may not be working on-line), but usually is performed prior to human translation. Indeed, the databank may not be accessible (to the translator) on-line at all, but may be limited to the production of printed subject-area glossaries. A TD offers access to technical terminology, but usually not to common words (the user already knows these). The chief advantage of a TD is not the fact that it is automated (even with on-line access, words can be found just as quickly in a printed dictionary), but that it is up-to-date: technical terminology is constantly changing, and published dictionaries are essentially obsolete by the time they are available. It is also possible for a TD to contain more entries, because it can draw on a larger group of active contributors: its users.

The Purposes of Translation

The most immediate division of translation purposes involves information acquisition vs. dissemination. The classic example of the former purpose is intelligence-gathering: with masses of data to sift through, there is no time, money, or incentive to carefully translate every document by
It is also possible for a TD to contain more entries because it can draw on a larger group of active contributors: its user8. The Purposes of Translation The most immediate division of translation purposes involves information acquisition vs. dissemination. The classic example of the former purpose is intelligence-gathering: with masses of data to sift through, there is no time, money, or incentive to carefully translate every document by 547 normal (i.e., human) means. Scientists more generally are faced with this dilemma: there is already more to read than can be read in the time available, and having to labor through texts written in foreign languages -- when the probability is low that any given text is of real interest -- is not worth the effort. In the past, the lingua franca of science has been English; this is becoming less and less true for a variety of reasons, including the rise of nationalism and the spread of technology around the world. As a result, scientists who rely on English are having greater difficulty keeping up with work in their fields. If a very rapid and inexpensive means of translation were available, then -- for texts within the reader's areas of expertise -- even a low-quality translation might be sufficient for information acquisition. At worst, the reader could determine whether a more careful (and more expensive) translation effort might be justified. More likely, he could understand the content of the text well enough that a more careful translation would not be necessary. The classic example of the latter purpose of translation is technology export: an industry in one country that desires to sell its products in another country must usually provide documentation in the purchaser's chosen language. In the past, U.S. companies have escaped this responsibility by requiring that the purchasers learn English; other exporters (German, for example) have never had this luxury. In the future, with the increase of nationalism, it is less likely that English documentation will be acceptable. Translation is becoming increasingly common as more companies look to foreign markets. More to the point, texts for information dissemination (export) must be translated with a great deal of care: the translation must be "right" as well as clear. Qualified human technical translators are hard to find, expensive, and slow (translating somewhere around 4-6 pages/day, on the average). The information dissemination application is mast responsible for the renewed interest in MT. Intended Applications of M(A)T Although literary translation is a case of information dissemination, there is little or no demand for literary translation by machine: relative to technical translation, there is no shortage of human translators capable of fulfilling this need, and in any case computers do not fare well at literary translation. By contrast, the demand for technical translation is staggering in sheer volume; moreover, the acquisition, maintenance, and consistent use of valid technical terminology is an enormous problem. Worse, in many technical fields there is a distinct shortage of qualified human translators, and it is obvious that the problem will never be alleviated by measures such as greater incentives for translators, however laudable that may be. The only hope for a solution to the technical translation problem lies with increased human productivity through computer technology: full-scale MT, less ambitious MAT, on-line terminology databanks, and word-processing all have their place. 
A serendipitous situation involves style: in literary translation, emphasis is placed on style, perhaps at the expense of absolute fidelity to content (especially for poetry). In technical translation, emphasis is properly placed on fidelity, even at the expense of style. M(A)T systems lack style, but excel at terminology: they are best suited for technical translation.

Linguistic Techniques

There are several perspectives from which one can view MT techniques. We will use the following: direct vs. indirect; interlingua vs. transfer; and local vs. global scope. (Not all eight combinations are realized in practice.) We shall characterize MT systems from these perspectives in our discussions. In the past, "the use of semantics" was always used to distinguish MT systems; those which used semantics were labelled "good", and those which did not were labelled "bad". Now all MT systems [are claimed to] make use of semantics, for obvious reasons, so this is no longer a distinguishing characteristic.

"Direct translation" is characteristic of a system (e.g., GAT) designed from the start to translate out of one specific language and into another. Direct systems are limited to the minimum work necessary to effect that translation; for example, disambiguation is performed only to the extent necessary for translation into that one target language, irrespective of what might be required for another language. "Indirect translation," on the other hand, is characteristic of a system (e.g., EUROTRA) wherein the analysis of the source language and the synthesis of the target language are totally independent processes; for example, disambiguation is performed to the extent necessary to determine the "meaning" (however represented) of the source language input, irrespective of which target language(s) that input might be translated into.

The "interlingua" approach is characteristic of a system (e.g., CETA) in which the representation of the "meaning" of the source language input is [intended to be] independent of any language, and this same representation is used to synthesize the target language output. The "linguistic universals" searched for and debated about by linguists and philosophers is the notion that underlies an interlingua. Thus, the representation of a given "unit of meaning" would be the same, no matter what language (or grammatical structure) that unit might be expressed in. The "transfer" approach is characteristic of a system (e.g., TAUM) in which the underlying representation of the "meaning" of a grammatical unit (e.g., a sentence) differs depending on the language it was derived from [or into which it is to be generated]; this implies the existence of a third translation stage which maps one language-specific meaning representation into another: this stage is called Transfer. Thus, the overall transfer translation process is Analysis followed by Transfer and then Synthesis. The "transfer" vs. "interlingua" difference is not applicable to all systems; in particular, "direct" MT systems use neither the transfer nor the interlingua approach, since they do not attempt to represent "meaning".
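To make the contrast between these strategies concrete, here is a runnable toy sketch of the direct and transfer pipelines. Every dictionary, rule, and function name below is invented for illustration; no real system named in this paper (GAT, CETA, TAUM, etc.) is reproduced.

```python
# Schematic contrast of MT strategies, as runnable toy code.
# The lexicon and all rules are invented for illustration only.

TOY_LEXICON = {("de", "en"): {"der": "the", "hund": "dog", "schläft": "sleeps"}}

def direct_mt(text, src, tgt):
    """Direct: word-for-word replacement (plus, in real systems, limited
    local reordering), wired to one specific language pair; no meaning
    representation is built at all."""
    lex = TOY_LEXICON[(src, tgt)]
    return " ".join(lex.get(w, w) for w in text.lower().split())

def analyze(text, src):
    """Analysis: map the input onto a meaning representation. For an
    interlingua this would be language-neutral; for transfer it stays
    language-specific, retaining surface clues about the source."""
    return {"lang": src, "words": text.lower().split()}

def transfer(repr_, src, tgt):
    """Transfer: map a source-specific representation into a
    target-specific one (structural and lexical mapping)."""
    lex = TOY_LEXICON[(src, tgt)]
    return {"lang": tgt, "words": [lex.get(w, w) for w in repr_["words"]]}

def synthesize(repr_, tgt):
    """Synthesis: generate target-language text from a representation."""
    return " ".join(repr_["words"])

def transfer_mt(text, src, tgt):
    """Transfer strategy: Analysis, then Transfer, then Synthesis.
    An interlingua system would instead call synthesize(analyze(...))
    directly, with no transfer stage in between."""
    return synthesize(transfer(analyze(text, src), src, tgt), tgt)

print(direct_mt("Der Hund schläft", "de", "en"))    # -> the dog sleeps
print(transfer_mt("Der Hund schläft", "de", "en"))  # -> the dog sleeps
```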
'~ocal scope" characterizes a system (e.g., SYSTRAN) in which words are the essential unit driving analysis, and in which that analysis is, in effect, performed by separate procedures for each word which try to determine -- based on the words to the left and/or right -- the part of speech, possible idiomatic usage, and "sense" of the word keying the procedure. In such systems, for example, homographs (words which differ in part of speech and/or derivstional history [thus meaning], but which are written alike) are a major problem, because s unified analysis of the sentence per se is not attempted. "Global scope" characterizes a system (e.g., METAL) in which the meaning of a word is determined by its context within a unified analysis of the sentence (or, rarely, paragraph). In such systems, by contrast, homographs do not typically constitute a significant problem because the amount of context taken into account is much greater than is the case with systems of "local scope. " Historical Perspective There are several comprehensive treatments of MT projects [Bruderer, 77] and MT history [Hutchins, 78] available in the open literature. To illustrate some continuity in the field of MT, while remaining within reasonable space limits, our brief historical overview will be restricted to defunct systems/projects which gave rise to follow-on systems/projects of current interest. THese are: Georgetown's CAT, Grenoble's CETA, Texas" METAL, Montreal's TAUM, and Brigham Young University's ALP system. CAT - Georgetown Automatic Translation Georgetown University was the site of one of the earllest MT projects. Begun in 1952, and supported by the U.S. government, Georgetown's CAT system became operational in 1964 with its delivery to the Atomic Energy Commission at Oak Ridge National Laboratory, and to Europe's corresponding research facility EURATON in Ispra, Italy. Both systems were used for many years to translate Russian physics texts into "English." The output quality was quits poor, by comparison with human translations, but for the intended purpose of quickly scanning documents to determine their content and interest, the CAT system was nevertheless superior to the only alternatives: slow and more expensive human translation or, worse, no translation at all. GAT was not replaced at EURATOM until 1976; at ORNL, it seems to have been used until around 1979 [Jordan et el., 76, 77]. The GAT strategy was "direct" and "local": simple word-for-word replacement, followed by a limited amount of transposition of words to result in something vaguely resembling English. Very soon, a "word" came to be defined as a single word or a sequence of words forming an "idiom'. There was no true linguistic theory underlying the GAT design; and, given the state of the art in computer science, there was no underlying computational theory either. GAT was developed by being made to work for a given text, then being modified to account for the next text, and so on. The eventual result was a monolithic system of intractable complexity: after its delivery to ORNL and EURATOM, it underwent no significant modification. The fact that it was used for so long is nothing short of remarkable -- a lesson in what can be tolerated by users who desperately need translation services for which there is no viable alternative to even low-quality MT. The termination of the Georgetown MT project in the mid-60"s resulted in the incorporation of LATSEC by Peter Tome, one of the GAT workers. 
LATSEC soon developed the SYSTRAN system (based on GAT technology), which in 1970 replaced the IBM Mark II system at the USAF Foreign Technology Division (FTD) at Wright-Patterson AFB, and in 1976 replaced GAT at EURATOM. SYSTRAN is still being used to translate Russian into English for information-acquisition purposes. We shall return to our discussion of SYSTRAN in the next major section.

CETA - Centre d'Études pour la Traduction Automatique

In 1961 a project was started at Grenoble University in France, to translate Russian into French. Unlike GAT, Grenoble began the CETA project with a clear linguistic theory -- having had a number of years in which to witness and learn from the events transpiring at Georgetown and elsewhere. In particular, it was resolved to achieve a dependency-structure analysis of every sentence (a "global" approach) rather than rely on intra-sentential heuristics to control limited word transposition (the "local" approach); with a unified analysis in hand, a reasonable synthesis effort could be mounted. The theoretical basis of CETA was "interlingua" (implying a language-independent, "neutral" meaning representation) at the grammatical level, but "transfer" (implying a mapping from one language-specific meaning representation to another) at the lexical [dictionary] level. The state of the art in computer science still being primitive, Grenoble was essentially forced to adopt IBM assembly language as the software basis of CETA [Hutchins, 78].

The CETA system was under development for ten years; during 1967-71 it was used to translate 400,000 words of Russian mathematics and physics texts into French. The major findings of this period were that the use of an interlingua erases all clues about how to express the translation, and that it results in extremely poor or no translations of sentences for which complete analyses cannot be derived. The CETA workers learned that it is critically important in an operational system to retain surface clues about how to formulate the translation (Indo-European languages, for example, have many structural similarities, not to mention cognates, that one can take advantage of), and to have "fail-soft" measures designed into the system. An interlingua does not allow this [easily, if at all], but the transfer approach does. A change in hardware (thus software) in 1971 prompted the abandonment of the CETA system, immediately followed by the creation of a new project/system called GETA, based entirely on a fail-soft transfer design. The software was still, however, written in assembly language; this continued reliance on assembly language was soon to have deleterious effects, for reasons now obvious to anyone. We will return to our discussion of GETA below.

METAL - MEchanical Translation and Analysis of Languages

Having had the same opportunity for hindsight, the University of Texas in 1961 used U.S. government funding to establish the Linguistics Research Center, and with it the METAL project, to investigate MT -- not from Russian, but from German into English. The LRC adopted Chomsky's transformational paradigm, which was quickly gaining popularity in linguistics circles, and within that framework employed a syntactic interlingua based on deep structures. It was soon discovered that transformational linguistics per se was not sufficiently well-developed to support an operational system, and certain compromises were made. The eventual result, in 1974, was an 80,000-line, 14-overlay FORTRAN program running on a dedicated CDC 6600.
Indirect translation was performed in 14 steps of global analysis, transfer, and synthesis -- one for each of the 14 overlays -- and required prodigious amounts of CPU time and I/O from/to massive data files. U.S. government support for MT projects was winding down in any case, and the METAL project was shortly terminated. Several years later, a small government grant resurrected the project. The FORTRAN program was rewritten in LISP to run on a DEC-10; in the process, it was pared down to just three major stages (analysis, transfer, and synthesis) comprising about 4,000 lines of code which could be accommodated in three "overlays," and its computer resource requirements were reduced by a factor of ten. Though U.S. government interest once again languished, the Sprachendienst (Language Services) department of Siemens AG in Munich had begun supporting the project, and in 1980 Siemens AG became the sole sponsor.

TAUM - Traduction Automatique de l'Université de Montréal

In 1962 the University of Montreal established the TAUM project with Canadian government funding. This was probably the first MT project designed strictly around the transfer approach. As the software basis of the project, TAUM chose the PASCAL programming language on the CDC 6600. After an initial period of more-or-less open-ended research, the Canadian government began adopting specific goals for the TAUM system. A chance remark by a bored translator in the Canadian Meteorological Center had led to a spin-off project: TAUM-METEO. Weather forecasters were already required to adhere to a prescribed manual of style and vocabulary in their English reports. Partly as a result of this, translation into French was so monotonous a task that human translator turnover in the weather service was extraordinarily high -- six months was the average tenure. TAUM was commissioned in 1975 to produce an operational English-French MT system for weather forecasts. A prototype was demonstrated in 1976, and by 1977 METEO was installed for production translation. We will discuss METEO in the next major section.

The next challenge was not long in coming: by a fixed date, TAUM had to be usable for the translation of a 90-million-word set of aviation maintenance manuals from English into French (else the translation had to be started by human means, since the result was needed quickly). From this point on, TAUM concentrated on the aviation manuals exclusively. To alleviate problems with their purely syntactic analysis (especially considering the many multiple-noun compounds present in the aviation manuals), the group began in 1977 to incorporate partial semantic analysis in the TAUM-AVIATION system. After a test in 1979, it became obvious that TAUM-AVIATION was not going to be production-ready in time for its intended use. The Canadian government organized a series of tests and evaluations to assess the status of the system. Among other things, it was discovered that the cost of writing each dictionary entry was remarkably high (3.75 man-hours, costing $35-40), and that the system's runtime translation cost was also high (6 cents/word) considering the cost of human translation (8 cents/word), especially when the post-editing costs (10 cents/word for TAUM vs. 4 cents/word for human translations) were taken into account [Gervais, 1980]; TAUM was not yet cost-effective. Several other factors, especially the bad Canadian economic situation, combined with this to cause the cancellation of the TAUM project in 1981.
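The arithmetic behind that verdict is worth making explicit; a rough per-word comparison using the figures just cited (dictionary-construction costs excluded, which only worsens the machine's position):

    # Per-word costs from the 1979 evaluation [Gervais, 1980], in cents.
    taum_total = 6 + 10      # machine translation + post-editing = 16
    human_total = 8 + 4      # human translation + revision       = 12
    print(taum_total, human_total)   # 16 vs. 12: TAUM not yet cost-effective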
There are recent signs of renewed interest in MT in Canada. State-of-the-art surveys have been commissioned [Pierre Isabelle, formerly of TAUM, personal communication], but no successor project has yet been established.

ALP - Automated Language Processing

In 1971 a project was established at Brigham Young University to translate Mormon ecclesiastical texts from English into multiple languages -- starting with French, German, Portuguese and Spanish. The eventual aim was to produce a fully-automatic MT system based on Junction Grammar [Lytle et al., 75], but actual work proceeded on Machine-Aided Translation (MAT, where the system does not attempt to analyze sentences on its own, according to pre-programmed linguistic rules, but instead relies heavily on interaction with a human to effect the analysis [if one is even attempted] and complete the translation). The BYU project never produced an operational system, and the Mormon Church, through the University, began to dismantle the project. Around 1977, a group composed primarily of programmers left BYU to join Weidner Communications, Inc., and proceeded to develop the fully-automatic, direct Weidner MT system. Shortly thereafter, most of the remaining BYU project members left to form Automated Language Processing Systems (ALPS) and continue development of the BYU MAT system. Both of these systems are actively marketed today, and will be discussed in the next section. Some work continues at BYU, but at a very much reduced level and degree of aspiration (e.g., [Melby, 82]).

Current Production Systems

In this section we consider the major M(A)T systems being used and/or marketed today. Four of these originate from the "failures" described above; the other three are essentially the result of successful (i.e., continuing) MT R&D projects. The full MT systems discussed below are the following: SYSTRAN, LOGOS, METEO, Weidner, and SPANAM; we will also discuss the MAT systems CULT and ALPS. Most of these systems have been installed for several customers (METEO, SPANAM, and CULT are the exceptions, with only one obvious "user" each). The oldest installation dates from 1970. A "standard installation," if it can be said to exist, includes provision for pre-processing in some cases, translation (with much human intervention in the case of MAT systems), and some amount of post-editing.

To MT system users, acceptability is a function of the amount of pre- and/or post-editing that must be done (which is also the greatest determinant of cost). Van Slype [82] reports that "acceptability to the human translator...appears negotiable when the quality of the MT system is such that the correction (i.e., post-editing) ratio is lower than 20% (1 correction every 5 words) and when the human translator can be associated with the upgrading of the MT system." It is worth noting that editing time has been observed to fall with practice: Pigott [82] reports that "...the more M.T. output a translator handles, the more proficient he becomes in making the best use of this new tool. In some cases he manages to double his output within a few months as he begins to recognize typical M.T. errors and devise more efficient ways of correcting them." It is also important to realize that, though none of these systems produces output mistakable for human translation [at least not good human translation], their users have found sufficient reason to continue using them. Some users, indeed, are repeat customers.
In short, FIT & MAT systems cannot be argued not to work, for they are in fact being bought and used, and they save time and/or money for their users. Every user eXpresses a desire for improved quality and reduced cost, to be sure, but then the same is said about human translation. Thus, in the only valid sense of the idiom, MT & MAT have already "arrived." Future improvements in quality, and reductions in cost -- both certain to take place -- will serve to make M(A)T systems even more attractive. SYSTRAN SYSTRAN was one of the first MT systems to be marketed; the first installation replaced the IBM Mark II Russian-English system at the USAF FTD in 1970, and is still operational, Eased on the CAT technology (SYSTRAN uses the same linguistic strategies, to the extent they can be argued to exist), SYSTRAN's software basis has been much improved by the introduction of modularity (separating the analysis and synthesis stages), by a recent shift away from simple "direct" translation (from the Source Language straight into the Target Language) toward the inclusion of something resembling an intermediate "transfer" stage, and by the allowance of manually-selected topical glossaries (essentially, dictionaries specific to [the subject area of] the text). The system is still ad hoc -- particularly in the assignment of semantic features [Pigott, 79]. The USAF FTD dictionaries number over a million entries; Eostad [82] reports that dictionary updating must be severely constrained, lest a change to one entry disrupt the activities of many others. (A study by Wilks [78] reported an improvement/degradation ratio [after dictionary updates] of 7:3, but Bostad implies a much more stable situation after the introduction of stringent [and expensive] quality-control measures.) NASA selected SYSTRAN in 1974 to translate materials relating to the Apollo-Soyuz collaboration, and EURATOM replaced GAT with SYSTRAN in 1976. Also by 1976, FTD was augmenting SYSTRA~ with word-processing equipment to increase productivity (e.g., to eliminate the use of punch-cards). In 1976 the Commission of the European Communities purchased an English-French version of SYSTRAN for evaluation and potential use. Unlike the FTD, NASA, and EURATOM installations, where the goal was information acquisition, the intended use by CEC was for information dissemination -- meaning that the output was to be carefully edited before human consumption. Van Slype [82] reports that "the English-French standard vocabulary delivered by Prof. Toma to the Commission was found to be almost entirely useless for the Commission enviror--ent. '' Early evaluations were negative (e.g., Van Slype [79]), but the existing and projected overload on CEC human translators was such that investigation continued in the hope that dictionary additions would improve the system to the point of usability. Additional versions of SYSTRAN were purchased (French-English in 1978, and Engllsh-Italian in 1979). The dream of acceptable quality for post-editing purposes was eventually realized: Pigott [82] reports that "...the enthusiasm demonstrated by [a few translators] seems to mark something of a turning point in [machine translation]." Currently, about 20 CEC translators in Luxambourg are using SYSTRAN on a Siamens 7740 computer for routine translation; one factor accounting for success is that the English and French dictionaries now consist of well over i00,000 entries in the very few technical areas for which SYSTRAN is being employed. 
Also in 1976, General Motors of Canada acquired SYSTRAN for translation of various manuals (for vehicle service, diesel locomotives, and highway transit coaches) from English into French on an IBM mainframe. GM's English-French dictionary had been expanded to over 130,000 terms by 1981 [Sereda, 82]. Subsequently, GM purchased an English-Spanish version of SYSTRAN, and is now working to build the necessary [very large] dictionary. Sereda [82] reports a speed-up of 3-4 times in the productivity of his human translators (from about 1,000 words per day); he also reveals that developing SYSTRAN dictionary entries costs the company approximately $4 per term (word- or idiom-pair).

While other SYSTRAN users have applied the system to unrestricted texts (in selected subject areas), Xerox has developed a restricted input language ("Multinational Customized English") after consultation with LATSEC. That is, Xerox requires its English technical writers to adhere to a specialized vocabulary and a strict manual of style. SYSTRAN is then employed to translate the resulting documents into French, Italian, and Spanish; Xerox hopes to add German and Portuguese. Ruffino [82] reports "a five-to-one gain in translation time for most texts," with the range of gains being 2-10 times. This approach is not necessarily feasible for all organizations, but Xerox is willing to employ it and claims it also enhances source-text clarity.

Currently, SYSTRAN is being used in the CEC for the routine translation, followed by human post-editing, of around 1,000 pages of text per month in the language pairs English-French, French-English, and English-Italian [Wheeler, 83]. Given this relative success in the CEC environment, the Commission has recently ordered an English-German version as well as a French-German version. Judging by past experience, it will be quite some time before these are ready for production use, but when ready they will probably save the CEC translation bureau valuable time, if not real money as well.

LOGOS

Development of the LOGOS system was begun in 1964. The first installation, in 1971, was used by the U.S. Air Force to translate English maintenance manuals for military equipment into Vietnamese. Due to the termination of U.S. involvement in that war, and perhaps partly to a poor evaluation of LOGOS' cost-effectiveness [Sinaiko and Klare, 73], its use was ended after two years. As with SYSTRAN, the linguistic foundations of LOGOS are weak and inexplicit (they appear to involve dependency structures); and the analysis and synthesis rules, though separate, seem to be designed for particular source and target languages, limiting their extensibility.

LOGOS continued to attract customers. In 1978, Siemens AG began funding the development of a LOGOS German-English system for telecommunications manuals. After three years LOGOS delivered a "production" system, but it was not found suitable for use (due in part to poor quality of the translations, and in part to the economic situation within Siemens, which had resulted in a much-reduced demand for translation, hence no immediate need for an MT system). Eventually LOGOS forged an agreement with the Wang computer company which allowed LOGOS to implement the German-English system (formerly restricted to large IBM mainframes) on Wang office computers. This system is being marketed today, and has recently been purchased by the Commission of the European Communities. Development of other language pairs has been mentioned from time to time.
METEO

TAUM-METEO is the world's only example of a truly fully-automatic MT system. Developed as a spin-off of the TAUM technology, as discussed earlier, it was fully integrated into the Canadian Meteorological Center's (CMC's) nation-wide weather communications network by 1977. METEO scans the network traffic for English weather reports, translates them "directly" into French, and sends the translations back out over the communications network automatically. Rather than relying on post-editors to discover and correct errors, METEO detects its own errors and passes the offending input to human editors; output deemed "correct" by METEO is dispatched without human intervention, or even overview.

TAUM-METEO was probably also the first MT system in which translators were involved in all phases of the design/development/refinement; indeed, a CMC translator instigated the entire project. Since the restrictions on input to METEO were already in place before the project started (i.e., METEO imposed no new restrictions on weather forecasters), METEO cannot quite be classed with the TITUS and Xerox SYSTRAN systems, which rely on restrictions geared to the characteristics of those MT systems. But METEO is not extensible. One of the more remarkable side-effects of the METEO installation is that the translator turnover rate within the CMC went from 6 months, prior to METEO, to several years, once the CMC translators began to trust METEO's operational decisions and not review its output [Brian Harris, personal communication].

METEO's input constitutes over 11,000 words/day, or 3.5 million words/year. Of this, it correctly translates 80%, shuttling the other ("more interesting") 20% to the human CMC translators; almost all of these "analysis failures" are attributable to violations of the CMC language restrictions, though some are due to the inability of the system to handle certain constructions. METEO's computational requirements total about 15 CPU minutes per day on a CDC 7600 [Thouin, 82]. By 1981, it appeared that the built-in limitations of METEO's theoretical basis had been reached, and further improvement was not possible.
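The control strategy is easy to picture. A minimal sketch of METEO-style self-screening dispatch follows (ours; the function arguments are hypothetical stand-ins for the real network interfaces):

    def dispatch(reports, translate, send, route_to_human):
        # Output the system itself deems correct is sent back over the
        # network with no human review; analysis failures -- about 20%
        # of the input -- are shuttled to the human CMC translators.
        for report in reports:
            french, analysis_complete = translate(report)
            if analysis_complete:
                send(french)
            else:
                route_to_human(report)

    # At ~11,000 words/day and an 80% success rate, roughly 8,800 words
    # are dispatched automatically each day, and some 2,200 go to people.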
Weidner Communications Systems, Inc.

Weidner was established in 1977 by Bruce Weidner, who hired a group of MT workers (predominantly programmers) from the fading BYU project. Weidner delivered a production English-French system to Mitel in Canada in 1980, and a beta-test English-Spanish system to the Siemens Corporation (USA) in the same year. In 1981 Mitel took delivery on Weidner's English-Spanish and English-German systems, and Bravice (a translation service bureau in Japan) purchased the Weidner English-Spanish and Spanish-English systems. To date, there are about 22 installations of the Weidner MT system around the world.

The Weidner system, though "fully automatic" during translation, is marketed as a "machine aid" to translation (perhaps to avoid the stigma usually attached to MT). It is highly interactive for other purposes (the lexical pre-analysis of texts, the construction of dictionaries, etc.), and integrates word-processing software with external devices (e.g., the Xerox 9700 laser printer at Mitel) for enhanced overall document production. Thus, the Weidner system accepts a formatted source document (actually, one containing formatting/typesetting codes) and produces a formatted translation. This is an important feature to users, since almost everyone is interested in producing formatted translations from formatted source texts.

Given the way this system is tightly integrated with modern word-processing technology, it is difficult to assess the degree to which the translation component itself enhances translator productivity, vs. the degree to which simple automation of formerly manual (or poorly automated) processes accounts for the productivity gains. The "direct" translation component itself is not particularly sophisticated. For example, analysis is "local," being restricted to the noun phrase or verb phrase level -- so that context available only at higher levels can never be taken into account. Translation is performed in four independent stages: idiom search, homograph disambiguation, structural analysis, and transfer. These stages do not interact with each other, which creates more problems; for example, an apparent idiom in a text is always treated idiomatically -- never literally, no matter what its context (since no other contextual information is available until later). Hundt [82] comments that "idioms are an extremely important part of the translation procedure." It is particularly interesting that he continues: "...machine assisted translation is for the most part word replacement..." Then, "It is not worthwhile discussing the various problems of the [Weidner] system in great depth because in the first place they are much too numerous..." Yet even though the Weidner translations are of low quality, users nevertheless report economic satisfaction with the results. Hundt continues: "...the Weidner system indeed works as an aid..." and, "800 words an hour as a final figure [for translation throughput] is not unrealistic." This level of performance was not attainable with previous [human] methods, and some users report the use of Weidner to be cost-effective, as well as faster, in their environments.

In 1982, Weidner delivered English-German and German-English systems to ITT in Great Britain; but there were some financial problems (a third of the employees were laid off that year) until a controlling interest was purchased by a Japanese company: Bravice, one of Weidner's customers, owned by a group of wealthy Japanese investors. Weidner continues to market MT systems, and is presently working to develop Japanese MT systems. A prototype Japanese-English system has recently been installed at Bravice, and work continues on an English-Japanese system. In addition, Weidner has implemented its system on the IBM Personal Computer, in order to reduce its former dependence on the PDP-11.
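A hypothetical sketch (ours, not Weidner code) suggests why a non-interacting idiom stage causes exactly the problem described above: the match is greedy and context-blind, and no later stage can revisit it:

    IDIOMS = {("kick", "the", "bucket"): "mourir"}   # toy French gloss

    def idiom_search(words):
        # first of the four stages: commits to idiomatic readings before
        # homograph disambiguation or structural analysis ever runs
        out, i = [], 0
        while i < len(words):
            if tuple(words[i:i+3]) in IDIOMS:
                out.append(IDIOMS[tuple(words[i:i+3])])  # greedy commit
                i += 3
            else:
                out.append(words[i])
                i += 1
        return out

    print(idiom_search("kick the bucket down the road".split()))
    # -> ['mourir', 'down', 'the', 'road']: the literal reading is lost,
    # because the context that demands it is visible only to later stages.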
SPANAM

Following a promising feasibility study, the Pan American Health Organization in Washington, D.C. decided in 1975 to undertake work on a machine translation system, utilizing many of the same techniques developed for GAT; consultants were hired from nearby Georgetown University, the home of GAT. The official PAHO languages are English, French, Portuguese, and Spanish; Spanish-English was chosen as the initial language pair, due to the belief that "This combination requires fewer parsing strategies in order to produce manageable output [and other reasons relating to expending effort on software rather than linguistic rules]" [Vasconcellos, 83]. Actual work started in 1976, and the first prototype was running in 1979, using punched-card input on an IBM mainframe. With the subsequent integration of a word-processing system, production use could be seriously considered. After further upgrading, the system in 1980 was offered as a service to potential users.

Later that year, in its first major test, SPANAM reduced manpower requirements for a certain translation effort by 45%, resulting in a monetary savings of 61% [Vasconcellos, 83]. Since then it has been used to translate well over a million words of text, averaging about 4,000 words per day per post-editor. (Significantly, SPANAM's in-house developers seem to be the only revisors of its output.) The post-editors have amassed "a bag of tricks" for speeding the revision work, and special string functions have also been built into the word processor for handling SPANAM's English output. Sketchy details imply that the linguistic technology underlying SPANAM is essentially that of GAT; the rules may even still be built into the programs. The software technology has been updated considerably, in that the programs are modular (in the newest version). The total lack of sophistication by modern Computational Linguistics standards is evidenced by the offhand remark that "The maximum length of an idiom [allowed in the dictionary] was increased from five words to twenty-five" in 1980 [Vasconcellos, 83]. Also, the system adopts the "direct" translation strategy, and fails to attempt a "global" analysis of the sentence, settling for "local" analysis of limited phrases. The SPANAM dictionary currently numbers 55,000 entries. A follow-on project to develop ENGSPAN, underway since 1981, has produced some test translations.

CULT - Chinese University Language Translator

CULT is perhaps the most successful of the Machine-Aided Translation systems. Development began at the Chinese University of Hong Kong around 1968. CULT translates Chinese mathematics and physics journals (published in Beijing) into English through a highly-interactive process [or, at least, with a lot of human intervention]. The goal was to eliminate post-editing of the results by allowing a large amount of pre-editing of the input, and a certain [unknown] degree of human intervention during translation. Although published details [Loh, 76, 78, 79] are not unambiguous, it is clear that humans intervene by marking sentence and phrase boundaries in the input, and by indicating word senses where necessary, among other things. (What is not clear is whether this is strictly a pre-editing task, or an interactive task.) CULT runs on the ICL 1904A computer.

Beginning in 1975, the CULT system was applied to the task of translating the Acta Mathematica Sinica into English; in 1976, this was joined by the Acta Physica Sinica. This production translation practice continues to this day. Originally the Chinese character transcription problem was solved by use of the standard telegraph codes invented a century ago, and the input data was punched on cards. But in 1978 the system was updated by the addition of word-processing equipment for on-line data entry and pre/post-editing. It is not clear how general the techniques behind CULT are -- whether, for example, it could be applied to the translation of other texts -- nor how cost-effective it is in operation. Other factors may justify its continued use. It is also unclear whether R&D is continuing, or whether CULT, like METEO, is unsuited to design modification beyond a certain point already reached. In the absence of answers to these questions, and perhaps despite them, CULT does appear to be an MAT success story: the amount of post-editing said to be required is trivial -- limited to the re-introduction of certain untranslatable formulas, figures, etc., into the translated output.
At some point, other translator intervention is required, but it seems to be limited to the manual inflection of verbs and nouns for tense and number, and perhaps the introduction of a few function words such as English determiners.

ALPS - Automated Language Processing Systems

ALPS was incorporated by another group of Brigham Young University workers, around 1979; while the group forming Weidner was composed mostly of the programmers interested in producing a fully-automatic MT system, the group forming ALPS (reusing the old BYU acronym) was composed mostly of linguists interested in producing machine aids for human translators (dictionary look-up and substitution, etc.) [Melby and Tenney, personal communication]. Thus the ALPS system is interactive in all respects, and does not seriously pretend to perform translation at all; rather, ALPS provides the translator with a set of software tools to automate many of the tasks encountered in everyday translation experience. ALPS adopted the tools originally developed at BYU -- and hence, the language pairs the BYU system had supported: English into French, German, Portuguese, and Spanish. Since then, other languages (e.g., Arabic) have been announced, but their commercial status is unclear. The ALPS system is intended to work on any of three "levels" -- providing capabilities from simple dictionary lookup on demand to word-for-word (actually, term-for-term) translation and substitution into the target text. The central tool provided by ALPS is a menu-driven word-processing system coupled to the on-line dictionary.

One of the first ALPS customers seems to have been Agnew TechTran -- a commercial translation bureau which acquired the ALPS system for in-house use. Recently, another change of ownership and consequent shake-up at Weidner Communications Systems, Inc., has allowed ALPS to hire a large group of former Weidner workers, leading to speculation that ALPS might itself be intending to enter the MT arena.

Current Research and Development

In addition to the organizations marketing or using existing M(A)T systems, there are several groups engaged in on-going R&D in this area. Operational (i.e., marketed or used) systems have not yet resulted from these efforts, but deliveries are foreseen at various times in the future. We will discuss the major Japanese MT efforts briefly (as if they were unified, in a sense, though for the most part they are actually separate), and then the major U.S. and European MT systems at greater length.

MT R&D in Japan

In 1982 Japan electrified the technological world by widely publicizing its new Fifth Generation project and establishing the Institute for New Generation Computer Technology (ICOT) as its base. Its goal is to leapfrog Western technology and place Japan at the forefront of the digital electronics world in the 1990's. MITI (Japan's Ministry of International Trade and Industry) is the motivating force behind this project, and intends that the goal be achieved through the development and application of highly innovative techniques in both computer architecture and Artificial Intelligence. Of the research areas to be addressed by the ICOT scientists and engineers, Machine Translation plays a prominent role.
Among the Western Artificial Intelligentsia, the inclusion of MT seems out of place: AI researchers have been trying (successfully) to ignore all MT work in the two decades since the ALPAC debacle, and almost universally believe that success is impossible in the foreseeable future -- in ignorance of the successful, cost-effective applications already in place. To the Japanese leadership, however, the inclusion of MT is no accident. Foreign language training aside, translation into Japanese is still one of the primary means by which Japanese researchers acquire information about what their Western competitors are doing, and how they are doing it. Translation out of Japanese is necessary before Japan can export products to its foreign markets, because the customers demand that the manuals and other documentation not be written only in Japanese. The Japanese correctly view translation as necessary to their technological survival, but have found it extremely difficult to accomplish by human means. Accordingly, their government has sponsored MT research for several decades. There has been no rift between AI and MT researchers in Japan, as there has been in the West -- especially in the U.S. MT may even be seen as the key to Japan's acquisition of enough Western technology to train their scientists and engineers, and thus accomplish their Fifth Generation project goals.

Nomura [82] numbers the MT R&D groups in Japan at more than eighteen. (By contrast, there might be a dozen significant MT groups in all of the U.S. and Europe, including commercial vendors.) Several of the Japanese projects are quite large. (By contrast, only one MT project in the Western world [EUROTRA] even appears as large, but most of the 80 individuals involved work on EUROTRA only a fraction of their time.) Most of the Japanese projects are engaged in research as much as development. (Most Western projects are engaged in development.) Japanese progress in MT has not come fast: until a few years ago, their hardware technology was inferior; so was their software competence, but this situation has been changing rapidly. Another obstacle has been the great differences between Japanese and Western languages -- especially English, which is of greatest interest to them -- and the relative paucity of knowledge about these differences. The Japanese are working to eliminate this ignorance: progress has been made, and production-quality systems already exist for some applications. None of the Japanese MT systems are "direct," and all engage in "global" analysis; most are based on a transfer approach, but a few groups are pursuing the interlingua approach.

MT research has been pursued at Kyoto University since 1968. There are now two MT projects at Kyoto (one for near-term application, one for long-term research). The former has developed a practical system for translating English titles of scientific and technical papers into Japanese [Nagao, 80, 82], and is working on other applications of English-Japanese [Tsujii, 82] as well as Japanese-English [Nagao, 81]. The other group at Kyoto is working on an English-Japanese translation system based on formal semantics (Cresswell's simplified version of Montague Grammar [Nishida et al., 82, 83]). Kyushu University has been the home of MT research since 1955, with projects by Tamachi and Shudo [74]. The University of Osaka Prefecture and Fukuoka University also host MT projects. However, most Japanese MT research (like other research) is performed in the industrial laboratories.
Fujitsu [Sawai et al., 82], Hitachi, Toshiba [Amano, 82], and NEC [Muraki & Ichiyama, 82], among others, support large projects generally concentrating on the translation of computer manuals. Nippon Telegraph and Telephone is working on a system to translate scientific and technical articles from Japanese into English and vice versa [Nomura et al., 82], and is looking as far into the future as simultaneous machine translation of telephone conversations [Nomura, personal communication]. The Japanese industrialists are not confining their attention to work at home. Several AI/MT groups in the U.S. (e.g., SRI, U. Texas) have been approached by Japanese companies desiring to fund MT R&D projects. More than that, some U.S. MT vendors (SYSTRAN and Weidner, at least) have recently sold partial interests to Japanese investors. Various Japanese corporations (e.g., NTT and Hitachi) and trade groups (e.g., JEIDA [Japan Electronic Industry Development Association]) have sent teams to visit MT projects around the world and assess the state of the art. University researchers have been given sabbaticals to work at Western MT centers (e.g., Shudo at Texas, Tsujii at Grenoble). Other representatives have indicated Japan's desire to participate in the CEC's EUROTRA project [Margaret King, personal communication]. Japan evidences a long-term, growing commitment to acquire and develop MT technology. The Japanese leadership is convinced that success in MT is vital to their future.

METAL

Of the major MT R&D groups around the world, it would appear that the new METAL project at the Linguistics Research Center of the University of Texas is closest to delivering a product. The METAL German-English system passed tests in a production-style setting in late 1982, mid-83, and early 1984, and the system has been installed at the sponsor's site in Germany for further testing and final development of a translator interface. The METAL dictionaries are being expanded for maximum possible coverage of selected technical areas in anticipation of production use in 1984. Commercial introduction is also a possibility. Work on other language pairs has begun: English-German is now underway, and Spanish and Chinese are in the target-language design stage.

One of the particular strengths of the METAL system is its accommodation of a variety of linguistic theories/strategies. The German analysis component is based on a context-free phrase-structure grammar, augmented by procedures with facilities for, among other things, arbitrary transformations. The English analysis component, on the other hand, employs a modified GPSG approach and makes no use of transformations. Analysis is completely separated from transfer, and the system is multi-lingual in that a given constituent structure analysis can be used for transfer and synthesis into multiple target languages. Experimental translation of English into Chinese (in addition to German) will soon be underway; translation from both English and German into Spanish is expected to begin in the immediate future.

The transfer component of METAL includes two transformation packages, one used by transfer grammar rules and the other by transfer dictionary entries; these co-operate during transfer, which is effected during a top-down exploration of the [highest-scoring] tree produced in the analysis phase. The strategy for the top-down pass is controlled by the linguist who writes the transfer rules; these in turn are paired 1-1 with the grammar rules used to perform the original analysis, so that there is no need to search through a general transfer grammar to find applicable rules (potentially allowing application of the wrong ones). As implied above, structural and lexical transfer are performed in the same pass, so that each may influence the operation of the other; in particular, transfer dictionary entries may specify the syntactic and/or semantic contexts in which they are valid. If no analysis is achieved for a given input, the longest phrases which together span that input are selected for independent transfer and synthesis, so that every input (a sentence, or perhaps a phrase) results in some translation.
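The effect of that 1-1 pairing can be suggested in a few lines of hypothetical Python (ours, not METAL code): each analysis rule carries its own transfer procedure, so transfer never searches a general transfer grammar, and so never risks applying the wrong rule:

    def np_transfer(children):
        # paired with the (toy) analysis rule "NP -> DET N"
        det, n = children
        return [n, det]            # toy structural change for the target

    PAIRED_TRANSFER = {"NP -> DET N": np_transfer}

    def transfer_top_down(node):
        # walk the highest-scoring analysis tree; at each node apply
        # exactly the transfer rule paired with the rule that built it
        if isinstance(node, str):
            return node            # leaf: lexical transfer would go here
        children = [transfer_top_down(c) for c in node["children"]]
        return PAIRED_TRANSFER[node["rule"]](children)

    print(transfer_top_down({"rule": "NP -> DET N",
                             "children": ["the", "dog"]}))  # ['dog', 'the']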
In addition to producing a translation system per se, the Texas group has developed software packages for text processing (so as to format the output translations like the original input documents), database management (of dictionary entries and grammar rules), rule validation (to eliminate most errors in dictionary entries and grammar rules), dictionary construction (to enhance human efficiency in coding lexical entries), etc. Aside from the word-processing front-end (being developed by Siemens, the project sponsor), the METAL group is developing a complete system, rather than a basic machine translation engine that leaves much drudgery for its human developers/users. Lehmann et al. [81], Bennett [82], and Slocum [83, 84] present more details about the METAL system.

GETA

As discussed earlier, the Groupe d'Études pour la Traduction Automatique was formed when Grenoble abandoned the CETA system. In reaction to the failures of the interlingua approach, GETA adopted the transfer approach. In addition, the former software design was largely discarded, and a new software package supporting a new style of processing was substituted. The core of GETA is composed of three programs: one converts strings into trees (for, e.g., word analysis), one converts trees into trees (for, e.g., syntactic analysis and transfer), and the third converts trees into strings (for, e.g., word synthesis). The overall translation process is composed of a sequence of stages, wherein each stage employs one of these three programs.

One of the features of GETA that sets it apart from other MT systems is the insistence on the part of the designers that no stage be more powerful than is minimally necessary for its proper function. Thus, rather than supplying the linguist with programming tools capable of performing any operation whatever (e.g., the arbitrarily powerful Q-systems of TAUM), GETA supplies at each stage only the minimum capability necessary to effect the desired linguistic operation, and no more. This reduces the likelihood that the linguist will become overly ambitious and create unnecessary problems, and also enabled the programmers to produce software that runs more rapidly than would be possible with a more general scheme.

A "grammar" in GETA is actually a network of subgrammars; that is, a grammar is a graph specifying alternative sequences of applications of the subgrammars and optional choices of which subgrammars are to be applied (at all). The top-level grammar is therefore a "control graph" over the subgrammars which actually effect the linguistic operations -- analysis, transfer, etc.
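A control graph of this kind can be suggested with a small hypothetical sketch (ours, not GETA code; the stage names are invented). The point is that the graph, rather than the subgrammars themselves, decides what runs next, and that whole subgrammars are the unit of back-up:

    CONTROL_GRAPH = {
        "morphology":     ["surface_syntax"],            # strings -> trees
        "surface_syntax": ["deep_syntax", "fallback"],   # trees -> trees
        "deep_syntax":    ["transfer"],
        "fallback":       ["transfer"],
        "transfer":       [],
    }

    def run(tree, subgrammars, stage="morphology"):
        while True:
            new_tree, ok = subgrammars[stage](tree)
            if ok:
                tree = new_tree        # keep the whole subgrammar's result
            # on failure, the subgrammar's work is discarded wholesale and
            # an alternative path through the graph is tried; individual
            # rule applications within a subgrammar are never undone
            successors = CONTROL_GRAPH[stage]
            if not successors:
                return tree
            stage = successors[0] if ok else successors[-1]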
GETA is sufficiently general to allow implementation of any linguistic theory, or even multiple theories at once (in separate subgrammars) if such is desired. Thus, in principle, GETA is completely open-ended and could accommodate arbitrary semantic processing and reference to "world models" of any description. In practice, however, the story is more complicated. In order to increase the computational flexibility, as is required to take advantage of substantially new linguistic theories, especially "world models", the underlying software would have to be changed in many ways. Unfortunately, it is written in IBM assembly language, making modification extremely difficult. Worse, the programmers who wrote the software have long since left the GETA project, and the current staff is unable to safely attempt significant modification. As a result, there has been no substantive change to the GETA software since 1975, and the GETA group has been unable to experiment with any new computational strategies. Back-up, for example, is a known problem [Tsujii, personal communication]: if the GETA system "pursues a wrong path" through the control graph of subgrammars, it can undo some of its work by backing up past whole graphs, discarding the results produced by entire subgrammars; but within a subgrammar, there is no possibility of backing up and reversing the effects of individual rule applications. The GETA workers would like to experiment with such a facility, but are unable to change the software to allow this. Until GETA receives enough funding that new programmers can be hired to rewrite the software in a high-level language, facilitating present and future redesign, the GETA group is "stuck" with the current software, now 10 years old and showing clear signs of age, to say nothing of non-transportability. GETA seems not to have been pressed to produce an application early on, and the staff was relatively "free" to pursue research interests. Until GETA can be updated, and in the process freed from dependence on IBM mainframes, it may never be a viable system. The project staff are actively seeking funding for such a project. Meanwhile, the French government has launched an application effort through the GETA group.

SUSY - Saarbruecker Uebersetzungssystem

The University of the Saar at Saarbruecken, West Germany, hosts one of the larger MT projects in Europe, established in the late 1960's. After the failure of a project intended to modify GAT for Russian-German translation, a new system was designed along somewhat similar lines to translate Russian into German after "global" sentence analysis into dependency tree structures, using the transfer approach. Unlike most other MT projects, the Saarbruecken group was left relatively free to pursue research interests, rather than forced to produce applications, and was also funded at a level sufficient to permit significant on-going experimentation and modification. As a result, SUSY tended to track external developments in CL and AI more closely than other projects. For example, Saarbruecken helped establish the co-operative MT group LEIBNIZ (along with Grenoble and others) in 1974, and adopted design ideas from the GETA system. Until 1975, SUSY was based on a strict transfer approach; since 1976, however, it has evolved, becoming more abstract as linguistic problems mandating "deeper" analysis have forced the transfer representations to assume some of the generality of an interlingua.
Also as a result of such research freedom, there was apparently no sustained attempt to develop coverage for specific applications. Intended as a multi-lingual system involving English, French, German and Russian, work on SUSY has concentrated on translation into German from Russian and, recently, English. Thus, the extent to which SUSY may be capable of multi-lingual translation has not yet been ascertained. Then, too, some aspects of the software are surprisingly primitive: only very recently, for example, did the morphological analysis program become nondeterministic (i.e., general enough to permit lexical ambiguity). The strongest limiting factor in the further development of SUSY seems to be related to the initial inspiration behind the project: SUSY adopted a primitive approach in which the linguistic rules were organized into independent strata, and were incorporated directly into the software [Maas, 84]. As a consequence, the rules were virtually unreadable, and their interactions eventually became almost impossible to manage. In terms of application potential, therefore, SUSY seems to have failed. A second-generation project, SUSY-II, begun in 1981, may fare better.

EUROTRA

EUROTRA is the largest MT project in the Western world. It is the first serious attempt to produce a true multi-lingual system, in this case intended for all seven European Economic Community languages. The justification for the project is simple, inescapable economics: over a third of the entire administrative budget of the EEC for 1982 was needed to pay the translation division (average individual income: $43,000/year), which still could not keep up with the demands placed on it; technical translation costs the EEC $.20 per word for each of six translations (from the seventh original language), and doubles the cost of the technology documented; with the addition of Spain and Portugal later this decade, the translation staff would have to double for the current demand level (unless highly productive machine aids were already in place) [Perusse, 83]. The high cost of writing SYSTRAN dictionary entries is presently justifiable for reasons of speed in translation, but this situation is not viable in the long term. The EEC must have superior-quality MT with lower costs for dictionary work. Human translation alone will never suffice.

EUROTRA is a true multi-national development project. There is no central laboratory where the work will take place; instead, designated University representatives of each member country will produce the analysis and synthesis modules for their native language; only the transfer modules will be built by a "central" group -- and the transfer modules are designed to be as small as possible, consisting of little more than lexical substitution [King, 82]. Software development will be almost entirely separated from the linguistic rule development; indeed, the production software, though designed by the EUROTRA members, will be written by whichever commercial software house wins the contract in bidding competition. Several co-ordinating committees are working with the various language and emphasis groups to insure co-operation. The linguistic basis of EUROTRA is nothing novel.
The basic structures for representing "meaning" are dependency trees, marked with feature-value pairs, partly at the discretion of the language groups writing the grammars (anything a group wants, it can add), and partly controlled by mutual agreement among the language groups (a certain set of feature-value combinations has been agreed to constitute minimum information; all are constrained to produce this set when analyzing sentences in their language, and all may expect it to be present when synthesizing sentences in their language) [King, 81, 82]. The software basis of EUROTRA will not be novel either, though the design is not yet complete. The basic rule interpreter will be "a general re-write system with a control language over grammars/processes" [King, personal communication]. As in GETA, the linguistic rules will be bundled into packets of subgrammars, and the linguists will be provided with a means of controlling which packets of rules are applied, and when; the individual rules will be non-destructive re-write rules, so that the application of any given rule may create new structure, but will never erase any old information (no back-up). EUROTRA will engage in straightforward development using state-of-the-art but "proven" techniques. The charter requires delivery of a small representative prototype system by late 1987, and a prototype covering one technical area by late 1988. EUROTRA is required to translate among the native languages of all member countries which sign the "contract of association" by early-to-mid 1984; thus, not all seven EEC languages will necessarily be represented, but by law at least four languages must be represented if the project is to continue.
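Both design points -- the agreed minimum feature set at the module interfaces, and non-destructive rewriting -- can be suggested in a few hypothetical lines (ours, not EUROTRA code; the feature names are invented):

    REQUIRED_FEATURES = {"category", "tense", "number"}   # invented names

    def interface_ok(node):
        # each language group must supply at least the agreed features,
        # and may add any private features its own grammar wants
        return REQUIRED_FEATURES <= set(node)

    def rewrite(node, additions):
        # non-destructive: a rule may create new information but never
        # erases old feature-value pairs, so no back-up is required
        merged = dict(node)
        for feature, value in additions.items():
            merged.setdefault(feature, value)   # never overwrite
        return merged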
The State of the Art

Human languages are, by nature, different. So much so, that the illusory goal of abstract perfection in translation -- once and still imagined by some to be achievable -- can be comfortably ruled out of the realm of possible existence, whether attempted by machine or man. Even the abstract notion of "quality" is undefinable, hence immeasurable. In its place, we must substitute the notion of evaluation of a translation according to its purpose, judged by the consumer. One must therefore accept the truth that the notion of quality is inherently subjective. Certainly there will be translations hailed by most if not all as "good," and correspondingly there will be translations almost universally labelled "bad." Most translations, however, will surely fall in between these extremes, and each user must render his own judgement according to his needs.

In corporate circles, however, there is and has always been an operational definition of "good" vs. "bad" translation: a good translation is what senior translators are willing to expose to outside scrutiny (not that they are fully satisfied, for they never are); a bad one is what they are not willing to release. These experienced translators -- usually post-editors -- impose a judgement which the corporate body is willing to accept at face value: after all, such judgement is the very purpose for having senior translators. It is arrived at subjectively, based on the purpose for which the translation is intended, but comes as close to being an objective assessment as the world is likely to see. In a post-editing context, a "good" original translation is one worth revising, i.e., one which the editor will endeavor to change, rather than reject or replace with his own original translation.

Therefore, any rational position on the state of the art in MT & MAT must respect the operational decisions about the quality of MT & MAT as judged by the present users. These systems are all, of course, based on old technology ("ancient," by the standards of AI researchers); but by the time systems employing today's AI technology hit the market, they too will be "antiquated" by the research laboratory standards of their time. Such is the nature of technology. We will therefore distinguish, in our assessment, among what is available and/or used now ("old," yet operationally current, technology), what is around the next corner (techniques working in research labs today), and what is farther down the road (experimental approaches).

Production Systems

Production M(A)T systems are based on old technology; some, for example, still (or until very recently did) employ punch-cards and print(ed) out translations in all upper-case. Few if any attempt a comprehensive "global" analysis at the sentence level (trade secrets make this hard to discern), and none go beyond that to the paragraph level. None use a significant amount of semantic information (though all claim to use some). Most if not all perform as "idiots savants", making use of enormous amounts of very unsophisticated pragmatic information and brute-force computation to determine the proper word-for-word or idiom-for-idiom translation, followed by local rearrangement of word order -- leaving the translation chaotic, even if understandable. But they work! Some of them do, anyway -- well enough that their customers find reason to invest enormous amounts of time and capital developing the necessary massive dictionaries specialized to their applications. Translation time is certainly reduced. Translator frustration is increased or decreased, as the case may be (it seems that personality differences, among other things, have a large bearing on this). Some translators resist their introduction -- there are those who still resist the introduction of typewriters, to say nothing of word processors -- with varying degrees of success. But most are thinking about accepting the place of computers in translation, and a few actually look forward to relief from much of the drudgery they now face. Current MT systems seem to take some getting used to, and further productivity increases are realized as time goes by; they are usually accepted, eventually, as a boon to the bored translator. New products embodying old technology are constantly introduced; most are found not viable, and quickly disappear from the market. But those which have been around for years must be economically justifiable to their users -- else, presumably, they would no longer exist.

Development Systems

Systems being developed for near-term introduction employ Computational Linguistics (CL) techniques of the late 1970's, if not the 80's. Essentially all are full MT, not MAT, systems. As Hutchins [82] notes, "...there is now considerable agreement on the basic strategy, i.e. a 'transfer' system with some semantic analysis and some interlingual features in order to simplify transfer components." These systems employ one of a variety of sophisticated parsing/transducing techniques, typically based on charts, whether the grammar is expressed via phrase-structure rules (e.g., METAL) or [strings of] trees (e.g., GETA, EUROTRA); they operate at the sentence level, or higher, and make significant use of semantic features.
Proper linguistic theories, whether elegant or not quite, and heuristic software strategies take the place of simple word substitution and brute-force programming. If the analysis attempt succeeds, the translation stands a fair chance of being acceptable to the revisor; if analysis fails, then fail-soft measures are likely to produce something equivalent to the output of a current production MT system. These systems work well enough in experimental settings to give their sponsors and waiting customers (to say nothing of their implementors) reason to hope for near-term success in application. Their technology is based on some of the latest techniques which appear to be workable in immediate large-scale application. Most "pure AI" techniques do not fall in this category; thus, serious AI researchers look down on these development systems (to say nothing of production systems) as old, uninteresting -- and probably useless. Some likely are. But others, though "old," will soon find an application niche, and will begin displacing any of the current production systems which try to compete. (Since the present crop of development systems all seem to be aimed at the "information dissemination" application, the current production systems that are aimed at the "information acquisition" market may survive for some time.) The major hurdle is time: time to write and debug the grammars (a very hard task), and time to develop lexicons with roughly ten thousand general vocabulary items, and the few tens of thousands of technical terms required per subject area. Some development projects have invested the necessary time, and stand ready to deliver commercial applications (e.g., GETA, METAL).

Research Systems

The biggest problem associated with MT research systems is their scarcity (nonexistence, in the U.S.). If current CL and AI researchers were seriously interested in multiple languages -- even if not for translation per se -- this would not necessarily be a bad situation. But in the U.S. they certainly are not, and in Europe, CL and AI research has not yet reached the level achieved in the U.S. Western business and industry are naturally more concerned with near-term payoff, and some track development systems; very few support MT development directly, and none yet support pure MT research at a significant level. (The Dutch firm Philips may, indeed, have the only long-term research project in the West.) Some European governments fund significant R&D projects (e.g., Germany and France), but Japan is making by far the world's largest investment in MT research. The U.S. government, which otherwise supports the best overall AI and [English] CL research in the world, is not involved. Where pure MT research projects do exist, they tend to concentrate on the problems of deep meaning representations -- striving to pursue the goal of a true AI system, which would presumably include language-independent meaning representations of great depth and complexity. Translation here is seen as just one application of such a system: the system "understands" natural language input, then "generates" natural language output; if the languages happen to be different, then translation has been performed via paraphrase. Translation could thus be viewed as one of the ultimate tests of an Artificial Intelligence: if a system "translates correctly," then to some extent it can be argued to have "understood correctly," and in any case will tell us much about what translation is all about.
In this role, MT research holds out its greatest promise as a once-again scientifically respectable discipline. The first requirement, however, is the existence of research groups interested in, and funded for, the study of multiple languages and translation among them within the framework of AI research. At the present time only Japan, and to a somewhat lesser extent western Europe, can boast such groups.

Future Prospects

The world has changed in the two decades since ALPAC. The need and demand for technical translation has increased dramatically, and the supply of qualified human technical translators has not kept pace. (Indeed, it is debatable whether there existed a sufficient supply of qualified technical translators even in 1966, contrary to ALPAC's claims.) The classic "law of supply and demand" has not worked in this instance, for whatever reasons: the shortage is real, all over the world; nothing is yet serving to stem this worsening situation; and nothing seems capable of doing so outside of dramatic productivity increases via computer automation. In the EEC, for example, the already overwhelming load of technical translation is projected to rise sixfold within five years. The future promises greater acceptance by translators of the role of machine aids -- running the gamut from word processing systems and on-line term banks to MT systems -- in technical translation. Correspondingly, M(A)T systems will experience greater success in the marketplace. As these systems continue to drive down the cost of translation, the demand and capacity for translation will grow even more than it would otherwise: many "new" needs for translation, not presently economically justifiable, will surface. If MT systems are to continue to improve so as to further reduce the burden on human translators, there will be a greater need and demand for continuing MT R&D efforts.

Conclusions

The translation problem will not go away, and human solutions (short of full automation) do not now, and never will, suffice. MT systems have already scored successes among the user community, and the trend can hardly fail to continue as users demand further improvements and greater speed, and MT system vendors respond. Of course, the need for research is great, but some current and future applications will continue to succeed on economic grounds alone -- and to the user community, this is virtually the only measure of success or failure. It is important to note that translation systems are not going to "fall out" of AI efforts which are not seriously contending with multiple languages from the start. There are two reasons for this. First, English is not a representative language. Relatively speaking, it is not even a very hard language from the standpoint of Computational Linguistics: Japanese, Chinese, Russian, and even German, for example, seem more difficult to deal with using existing CL techniques -- surely in part due to the nearly total concentration of CL workers on English. Developing translation ability will require similar concentration by CL workers on other languages; nothing less will suffice. Second, it would seem that translation is not by any means a simple matter of understanding the source text, then reproducing it in the target language -- even though some translators (and virtually every layman) will say this is so.
On the one hand, there is the serious question of whether, in for example the case of an article on front-line research in semiconductor switching theory, or nuclear physics, a translator really does "fully comprehend" the content of the article he is translating. One would suspect not. (Johnson [83] makes a point of claiming that he has produced translations, judged good by informed peers, in technical areas where his expertise is deficient, and his understanding, incomplete.) On the other hand, it is also true that translation schools expend a great deal of effort teaching techniques for low-level lexical and syntactic manipulation -- a curious fact to contrast with the usual "full comprehension" claim. In any event, every qualified translator will agree that there is much more to translation than simple analysis/synthesis (an almost prima facie proof of the necessity for Transfer). What this means is that the development of translation as an application of Computational Linguistics will require substantial research in its own right, in addition to the work necessary in order to provide the basic multi-lingual analysis and synthesis tools. Translators must be consulted, for they are the experts in translation. None of this will happen by accident; it must result from design.

References

Amano, S. Machine Translation Project at Toshiba Corporation. Technical note. Toshiba Corporation, R&D Center, Information Systems Laboratory, Kawasaki, Japan, November 1982.

Bar-Hillel, Y., "Some Reflections on the Present Outlook for High-Quality Machine Translation," in W. P. Lehmann and R. Stachowitz (eds.), Feasibility Study on Fully Automatic High Quality Translation. Final technical report RADC-TR-71-295. Linguistics Research Center, University of Texas at Austin, December 1971.

Bennett, W. S. The Linguistic Component of METAL. Working paper LRC-82-2, Linguistics Research Center, University of Texas at Austin, July 1982.

Bostad, D. A., "Quality Control Procedures in Modification of the Air Force Russian-English MT System," in V. Lawson (ed.), Practical Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 129-133.

Bruderer, H. E., "The Present State of Machine and Machine-Assisted Translation," in Commission of the European Communities, Third European Congress on Information Systems and Networks: Overcoming the Language Barrier, vol. 1. Verlag Dokumentation, Munich, 1977, pp. 529-556.

Gervais, A., et DG de la Planification, de l'Evaluation et de la Verification. Rapport final d'évaluation du système pilote de traduction automatique TAUM-AVIATION. Canada, Secrétariat d'Etat, June 1980.

Hundt, M. G., "Working with the Weidner Machine-Aided Translation System," in V. Lawson (ed.), Practical Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 45-51.

Hutchins, W. J., "Progress in Documentation: Machine Translation and Machine-Aided Translation," Journal of Documentation 34, 2, June 1978, pp. 119-159.

Hutchins, W. J., "The Evolution of Machine Translation Systems," in V. Lawson (ed.), Practical Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 21-37.

Johnson, R. L., "Parsing - an MT Perspective," in K. S. Jones and Y. Wilks (eds.), Automatic Natural Language Parsing. Ellis Horwood, Ltd., Chichester, Great Britain, 1983.

Jordan, S. R., A. F. R. Brown, and F. C. Hutton, "Computerized Russian Translation at ORNL," in Proceedings of the ASIS Annual Meeting, San Francisco, 1976, p. 163; also in ASIS Journal 28, 1, 1977, pp. 26-33.
King, M., "Design Characteristics of a Machine Translation System," Proceedings of the Seventh IJCAI, Vancouver, B.C., Canada, Aug. 1981, vol. 1, pp. 43-46.

King, M., "EUROTRA: An Attempt to Achieve Multilingual MT," in V. Lawson (ed.), Practical Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 139-147.

Lehmann, W. P., W. S. Bennett, J. Slocum, H. Smith, S. M. V. Pfluger, and S. A. Eveland. The METAL System. Final technical report RADC-TR-80-374, Linguistics Research Center, University of Texas at Austin, January 1981. NTIS report A0-97896.

Loh, S.-C., "Machine Translation: Past, Present, and Future," ALLC Bulletin 4, 2, March 1976, pp. 105-114.

Loh, S.-C., L. Kong, and H.-S. Kung, "Machine Translation of Chinese Mathematical Articles," ALLC Bulletin 6, 2, 1978, pp. 111-120.

Loh, S.-C., and L. Kong, "An Interactive On-Line Machine Translation System (Chinese into English)," in B. M. Snell (ed.), Translating and the Computer. North-Holland, Amsterdam, 1979, pp. 135-148.

Lytle, E. G., D. Packard, D. Gibb, A. K. Melby, and F. H. Billings, "Junction Grammar as a Base for Natural Language Processing," AJCL 3, 1975, microfiche 26, pp. 1-77.

Maas, H.-D., "The MT system SUSY," presented at the ISSCO Tutorial on Machine Translation, Lugano, Switzerland, 2-6 April 1984.

Melby, A. K., "Multi-level Translation Aids in a Distributed System," Ninth ICCL [COLING 82], Prague, Czechoslovakia, July 1982, pp. 215-220.

Muraki, K., and S. Ichiyama. An Overview of Machine Translation Project at NEC Corporation. Technical note. NEC Corporation, C & C Systems Research Laboratories, 1982.

Nagao, M., J. Tsujii, K. Mitamura, H. Hirakawa, and M. Kume, "A Machine Translation System from Japanese into English: Another Perspective of MT Systems," Proceedings of the Eighth ICCL [COLING 80], Tokyo, 1980, pp. 414-423.

Nagao, M., et al. On English Generation for a Japanese-English Translation System. Technical Report on Natural Language Processing 25. Information Processing Society of Japan, 1981.

Nagao, M., J. Tsujii, K. Yada, and T. Kakimoto, "An English Japanese Machine Translation System of the Titles of Scientific and Engineering Papers," Proceedings of the Ninth ICCL [COLING 82], Prague, 5-10 July 1982, pp. 245-252.

Nishida, F., and S. Takamatsu, "Japanese-English Translation Through Internal Expressions," Proceedings of the Ninth ICCL [COLING 82], Prague, 5-10 July 1982, pp. 271-276.

Nishida, T., and S. Doshita, "An English-Japanese Machine Translation System Based on Formal Semantics of Natural Language," Proceedings of the Ninth ICCL [COLING 82], Prague, 5-10 July 1982, pp. 277-282.

Nishida, T., and S. Doshita. An Application of Montague Grammar to English-Japanese Machine Translation. Proceedings of the ACL-NRL Conference on Applied Natural Language Processing, Santa Monica, California, February 1983, pp. 156-165.

Nomura, H., and A. Shimazu. Machine Translation in Japan. Technical note. Nippon Telegraph and Telephone Public Corporation, Musashino Electrical Communication Laboratory, Tokyo, November 1982.

Nomura, H., A. Shimazu, H. Iida, Y. Katagiri, Y. Saito, S. Naito, K. Ogura, A. Yokoo, and M. Mikami. Introduction to LUTE (Language Understander, Translator & Editor). Technical note, Musashino Electrical Communication Laboratory, Research Division, Nippon Telegraph and Telephone Public Corporation, Tokyo, November 1982.

Perusse, D., "Machine Translation," ATA Chronicle 12, 8, 1983, pp. 6-8.
M., "Theoretical Options and Practical Limitations of Using Semantics to Solve Problems of Natural Language Analysis and Machine Translation," in H. MacCatferty and K. Gray (eds.), The Analysis of Meaning: Informatics 5. ASLIB, London, 1979, pp. 239-268. Pigott, I. M., "The Importance of Feedback from Translators in the Development of High-Quality Machine Translation," in V. Lawson (ed.), Practical Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 61-73. Ruffino, J. R., "Coping with Machine Translation," in V. Lawson (ed.), Practical Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 57-60. Sawai, S., R. Fukushima, M. Sugimoto, and N. Ukai, '~nowledge Representation and Machine Translation," Proceedings of the Ninth ICCL [COLING 82], Prague, 5-10 July 1982, pp. 351-356. Sereda, S. P., "Practical Experience of Machine Translation," in V. Lawson (ed.), Practical Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 119-123. Shudo, K., "On Machine Translation from Japanese into English for a Technical Field," Information Processing in Japan 14, 1974, pp. 44-50. Sinaiko, S. W., and G. R. Klare, '~urther Experiments in Language Translation: A Second Evaluation of the Readability of Computer Translations," ITL 19, 1973, pp. 29-52. Slocu~, J. '~ Status Report on the LRC Machine Translation System," Proceedings of the ACL-NRL Conference on Applied Natural Language Processing, Santa Monica, California, I-3 February 1983, pp. 166-173. Slocu~, J., '~ETAL: The LRC Machine Translation System," presented at the ISSCO Tutorial on Machine Translation, Lugano, Switzerland, 2-6 April 1984. Thouin, B., "The Meteo System," in V. Lawson (ed.), Practlcal Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 39-44. Tsujii, J., "The Transfer Phase in an English- Japanese Translation System," Proceedings of the Ninth ICCL [COLING 82], Prague, 5-10 July 1982, pp. 383-390. Van Slype, G., '~valuation du syst~me de traduction automatlque SYSTE~ anglais-fran~ais, version 1978, de la Commission des communaut~s Europ~ennes," Babel 25, 3, 1979, pp. 157-162. Van Slype, G., "Economic Aspects of Machine Translation," in V. Lawson (ed.), Practical Experience of Machine Translation. North-Holland, Amsterdam, 1982, pp. 79-93. Vasconcellos, M., '~Lanagoment of the Machine Translation Envirooment," Proceedings of the ASLIB Conference on Translating and the Computer, London, November 1983. Wheelerp P., "The Errant Avocado (Approaches to Ambiguity in SYSTEAN Translation)," Newsletter 13, Natural Language Translations Specialist Group, BCS, February 1983. Wilks, Y., and LATSEC, Inc. Comparative Translation Quality Analysis. Final report on contract F-33657-77-C-0695. 1978. 561 | 1984 | 116 |
Toward a Redefinition of Yes/No Questions

Julia Hirschberg
Department of Computer and Information Science
Moore School/D2
University of Pennsylvania
Philadelphia, PA 19104

ABSTRACT

While both theoretical and empirical studies of question-answering have revealed the inadequacy of traditional definitions of yes-no questions (YNQs), little progress has been made toward a more satisfactory redefinition. This paper reviews the limitations of several proposed revisions. It proposes a new definition of YNQs based upon research on a type of conversational implicature, termed here scalar implicature, that helps define appropriate responses to YNQs. By representing YNQs as scalar queries it is possible to support a wider variety of system and user responses in a principled way.

I INTRODUCTION

If natural language interfaces to question-answering systems are to support a broad range of responses to user queries, the way these systems represent queries for response retrieval should be reexamined. Theorists of question-answering commonly define questions in terms of the set of all their possible (true) answers. Traditionally, they have defined yes-no questions (YNQs) as propositional questions (?P) or as a special type of alternative question (?P V ?Q), in which the second alternative is simply the negation of the first (?P V ?~P). So 'Does Mary like skiing?' would be represented as ?like(Mary,skiing) or ?like(Mary,skiing) V ?~like(Mary,skiing), and the range of appropriate responses would be yes, no and, possibly, unknown. However, both theoretical work and empirical studies of naturally occurring question-answer exchanges have shown this approach to be inadequate: yes, no, and unknown form only a small portion of the set of all appropriate responses to a YNQ. Furthermore, for some YNQs, none of these simple direct responses alone is appropriate. While it is widely recognized (Hobbs, 1979; Pollack, 1982) that indirect responses(1) to YNQs represent an important option for respondents in natural discourse, standard theories of question-answering have not been revised accordingly. A practical consequence surfaces when attempts are made to support indirect responses to YNQs computationally. For lack of alternative representations, question-answering systems which would permit indirect responses must still represent YNQs as if the direct responses were the 'norm', and then resort to ad hoc manipulations to generate second-class 'indirect' responses, thus perpetuating an asymmetric distinction between 'direct' and 'indirect' responses. However, research under way on how a type of generalized conversational implicature, termed here scalar implicature, can be used to guide the generation and interpretation of indirect responses to YNQs suggests a revised representation for YNQs which accommodates a wide variety of responses in a uniform way.

(1) Indirect responses to YNQs are defined here as responses other than yes, no, or some expression of ignorance.

II CURRENT REPRESENTATIONS OF YNQS

Among standard accounts of YNQs, Hintikka's (Hintikka, 1978) is one of the simplest and most widely accepted, combining the concepts of YNQ as propositional question and as alternative question; as such, it will be used below to represent traditional approaches in general. To define answerhood, the conditions under which a response counts as an answer to a natural-language query, Hintikka divides queries into two parts: an imperative or optative operator (!),
roughly expressing 'bring it about that', and a desideratum, a specification of the epistemic state a questioner desires. For Hintikka, a YNQ is a special case of alternative question in which the negative alternative 'or not P' has been suppressed. So the desideratum of a YNQ is of the form (I know that P) V (I know that neg+P), where neg+ indicates the negation-forming process. 'Does Mary like skiing?' thus has as its desideratum I know that Mary likes skiing or I know that Mary does not like skiing, or, more concisely, KSlike(Mary,skiing) V KS~like(Mary,skiing), where KS is the epistemic representation of 'S knows that'. The full sense of the query is then 'Bring it about that I know that Mary likes skiing or that I know that Mary does not like skiing', which can be represented by !(KSP V KS~P). Possible responses are simply {P,~P}, or {yes,no}.

A. Hypothesis Confirmation

Bolinger (Bolinger, 1978) has called such interpretations into question by showing that YNQs may have very different meanings from their alternative-question counterparts; they also have more restricted paraphrase and intonation patterns. In Bolinger's view the term yes-no query has hypnotized scholars into assuming that, simply because a class of question can be answered by a yes or no, these alternatives are criterial, and every YNQ is intended to elicit one or the other. He proposes instead that YNQs be viewed as hypotheses put forward for confirmation, amendment, or disconfirmation - in any degree. Thus, in Bolinger's example (1), the

(1) Q: Do you like Honolulu?
    R: Just a little.

questioner (Q)'s hypothesis 'you like Honolulu' is amended by the respondent (R) in a response which is neither yes nor no but somewhere in between. In his example (2), Q's hypothesis 'it is

(2) Q: Is it difficult?
    R: It's impossible.

difficult' is confirmed by R's assertion of a more positive response than a simple yes. While Bolinger makes a good case for the inadequacy of standard views of YNQs, the revision he proposes is itself too limited. 'It's impossible', in (2), does more than simply present a strong affirmation of the hypothesis 'it is difficult' - it provides new and unrequested though pertinent information. In fact, 'strong affirmation' might better be provided by a response such as 'I am absolutely sure it's difficult' than by the response he suggests. And there are equally appropriate responses to the queries in (1) and (2) that are not easily explained in terms of degree of hypothesis confirmation, as shown in (3) and (4).

(3) Q: Do you like Honolulu?
    a. R: I don't like Hawaii.
    b. R: I like Hilo.

(4) Q: Is it difficult?
    a. R: It could be.
    b. R: Mike says so.

Finally, Bolinger does not propose a representation to accommodate his hypothesis-confirmation model.

B. Focussed YNQs

Similarly, Kiefer (Kiefer, 1980) points out evidence for the inadequacy of the standard view of YNQs, but proposes no unified solution. In a study of the indirect speech acts that may be performed by YNQs, he notes that certain YNQs, which he terms focussed YNQs, actually function as wh-questions. Focussed YNQs for Kiefer are YNQs that are marked in some way (apparently by stress) to indicate a background assumption which Q and R typically share. For example, (5a) is not a focussed YNQ while (5b)-(5d) are.
(5) a. Is John leaving for Stockholm tomorrow?
    b. Is John leaving for Stockholm TOMORROW?
    c. Is John leaving for STOCKHOLM tomorrow?
    d. Is JOHN leaving for Stockholm tomorrow?

While any of the four may be answered with yes or no, it is also possible that, if Q asks (5b), she wants R to answer the question 'When is John leaving for Stockholm?'; if she asks (5c) she may want to know 'Where is John going tomorrow?'; and if she asks (5d) she may want to know 'Who is leaving for Stockholm tomorrow?' Thus a focussed YNQ resembles the wh-question that might be formed by replacing the focussed element in the desideratum with a corresponding pro-element. In Kiefer's analysis, only one element can be focussed, so responses such as 'He's leaving for Paris Thursday' will not be accommodated. Although Kiefer does not propose a representation for focussed YNQs, a disjunct resembling the desideratum of a wh-question might be added to the traditional representation to accommodate his third alternative: for (5d) this might take the form 'Is John leaving for Stockholm tomorrow, or, if not, who is?' or, in Hintikka's notation,

!(KQleaving(John,Stockholm,tomorrow) V KQ~leaving(John,Stockholm,tomorrow) V ∃x KQleaving(x,Stockholm,tomorrow)).

This representation reflects another problem posed by Kiefer's analysis: the third disjunct is appropriate only when the second also is, and not when the direct response yes is true. For example, a response of 'Bill is' to (5d) seems to convey that John is not leaving for Stockholm tomorrow. Thus viewing some YNQs as wh-questions requires a rather more complex representation than simply adding a wh-question as a third disjunct.(2) In addition, defining different representations for various YNQ subtypes seems a less than satisfactory solution to the limitations presented by current representations of YNQs. A more unified solution to the problems identified by Bolinger and Kiefer would clearly be desirable. Such a solution is suggested by current research on the role conversational implicature plays in accounting for indirect responses to YNQs.

(2) In fact, the third disjunct would have to be something like KQ~leaving(John,Stockholm,tomorrow) ∧ ∃x KQleaving(x,Stockholm,tomorrow).

III CONVERSATIONAL IMPLICATURE AND YNQS

In a large class of indirect responses to YNQs, query and response each refer to an entity, attribute, state, activity, or event that can be viewed as appearing on some scale; such references will be termed scalars, and responses in such exchanges will be termed scalar responses.(3)

(3) The ideas outlined in the following section are discussed in more detail in (Hirschberg, 1984).

In such scalar exchanges, questioners can infer both a direct response and additional implicit information from the unrequested information provided by the respondent. In (6), for example, Q is entitled to infer the direct response no or I don't know

(6) Q: Are mushrooms poisonous?
    R: Some are.

and the additional information that R believes that there may be mushrooms that are not poisonous, even though ∃x(mushroom(x) ∧ poisonous(x)) does not logically imply any of this information. Clearly 'Some are' is an appropriate response to the query - more appropriate in fact than a simple no, which might convey that no mushrooms are poisonous - but what makes it appropriate? Grice's (Grice, 1975) Cooperative Principle claims that, without contrary evidence, participants in conversation assume their partners are trying to be cooperative. In consequence, they recognize certain conversational maxims, such as Grice's Maxim
of Quantity:

a) Make your contribution as informative as is required (for the current purposes of the exchange).
b) Do not make your contribution more informative than is required.

and his Maxim of Quality:

Try to make your contribution one that is true.
a) Do not say what you believe to be false.
b) Do not say that for which you lack adequate evidence.

Speaker and hearer's mutual recognition of these maxims may give rise to conversational implicatures: An utterance conversationally implicates a proposition P when it conveys P by virtue of the hearer's assumption of the speaker's cooperativeness. While a speaker may not always obey these maxims, the hearer's expectations are based on her belief that such conventions represent the norm.

A. Scalar Predication

Following Grice, Horn (Horn, 1972) observed that, when a speaker refers to a value on some scale defined by semantic entailment,(4) that value represents the highest value on its scale the speaker can truthfully affirm. The speaker is saying as much (Quantity) as she truthfully (Quality) can. Higher values on that scale are thus implicitly marked by the speaker as not known to be the case or known not to be the case.(5) Values lower on the scale will of course be marked as true, since they are entailed. Horn called this phenomenon scalar predication, and Gazdar (Gazdar, 1979) later used a variation as the basis for a phenomenon he termed scalar quantity implicature. Here a much revised and extended version will be termed scalar implicature.

(4) W semantically entails T iff T is true whenever W is.
(5) Whether a speaker implicates ignorance or falsity of a value is a subject of some disagreement among Horn and those (Gazdar, 1979; Soames, 1982) who have taken up his basic notion. In (Hirschberg, 1984) I contend that such implicatures should be viewed as disjunctions, K(~T) V ~K(T), which may be disambiguated by the nature of the ordering relation or by the context.

Horn's simple notion of scalar predication does provide a principled basis for interpreting (6) and similar indirect responses to YNQs where scales are defined by entailment. Some is the highest value on a quantifier scale that R can truthfully affirm. Truth-values of higher scalars such as all are either unknown to R or believed by him to be false. Thus, if Q recognizes R's implicature, roughly, 'As far as I know, not all mushrooms are poisonous', she will derive the direct response to her query as no or I don't know. R must believe either that some mushrooms are not poisonous or that some mushrooms may not be poisonous. It is also important to note that, in (6), were R simply to deny Q's query or to assert ignorance with a simple I don't know, Q would be entitled, by virtue of the Cooperative Principle, to assume that there is no scalar value whose truth R can in fact affirm. That is, Q can assume that, as far as R knows, there are no mushrooms that are poisonous, for otherwise R could commit himself to the proposition that 'some mushrooms are poisonous'. More generally then, R is obliged by the Cooperative Principle, and more especially by Joshi's (Joshi, 1982) modification of Grice's Maxim of Quality - 'Do not say anything which may imply for the hearer something which you the speaker believe to be false.' - to provide an indirect response in (6), lest a simple direct response entitle Q to conclude some false implicatures.
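To make Horn's observation concrete: on a linearly ordered entailment scale, affirming one value marks everything below it as entailed true and everything above it as unknown or false, from which the hearer can derive a direct response. The following is a minimal sketch of that derivation, my own illustration rather than anything from this paper; the three-point scale and the function are invented.

    # Minimal illustrative sketch: what a scalar answer licenses on a
    # linearly ordered entailment scale, per Horn's observation.
    # Each value on the invented scale entails all values before it.

    SCALE = ["some", "most", "all"]

    def scalar_implicatures(affirmed: str, queried: str) -> str:
        i = SCALE.index(affirmed)
        for j, value in enumerate(SCALE):
            status = "entailed true" if j <= i else "unknown or false (implicated)"
            print(f"  {value!r}: {status}")
        # Direct response: yes iff the queried value is entailed by the affirmed one.
        return "yes" if SCALE.index(queried) <= i else "no / I don't know"

    # Q: 'Are (all) mushrooms poisonous?'   R: 'Some are.'
    print("direct response:", scalar_implicatures(affirmed="some", queried="all"))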
Thus indirect responses must be included among the set of all appropriate responses to a given YNQ, since in some cases they may be the most appropriate response R can make.

B. Scalar Implicature

While scalar predication provides a principled explanation for (6), a revised and extended notion of scalar implicature can account for a much larger class of indirect responses to YNQs. It can also suggest a revised representation of YNQs in general, based upon this enlarged class of appropriate responses. Orderings not defined by entailment and orderings other than linear orderings - including but not limited to set/set-member, whole/part, process stages, spatial relationships, prerequisite orderings, entity/attribute, isa hierarchies, and temporal orderings - permit the conveyance of scalar implicatures in much the same way that the entailed quantifier scale does in (6). In (7) the set/member

(7) Q: Did you invite the Reagans?
    R: I invited Nancy.

(8) Q: Have you finished the manuscript?
    R: I've started a rough draft.

relationship orders the Reagans and Nancy; R implicates that he has not invited Ronald, for instance. In (8), starting a rough draft precedes finishing a manuscript in the process of preparing a paper. So Q is entitled to conclude that R has not finished the manuscript or completed any later stage in this process, such as finishing the rough draft. More formally, any set of referents {b1,...,bn} that can be partially ordered by a relation O(6) can support scalar implicature. Any scale S that permits scalar implicature can be represented as a partially-ordered set. For any referents b1, b2 on S, b2 is higher on S than b1 iff b1Ob2; similarly, b1 is lower on S than b2 iff b1Ob2. Any pair b1, b2 of incomparable elements (elements not ordered with respect to one another by O) will be termed alternate values with respect to S. This redefinition of scale accommodates orderings such as those mentioned above, while excluding orderings, such as cycles, that do not permit scalar implicature. It also helps define the inferences licensed when R affirms a higher or an alternate value, or when he denies or asserts ignorance of lower, higher, or alternate values.

(6) A partial ordering may be defined as an irreflexive, asymmetric, and transitive relation.

For example, R affirms a higher scalar value than the value queried in Bolinger's example reproduced in (2). If difficult and impossible are viewed on a scale defined in degrees of feasibility, then Q can conclude that by affirming the higher value R has affirmed the lower. Similarly, R may affirm an alternate value, as he does in (3b). If R sees Honolulu and Hilo as both members of a set of Hawaiian cities, he can affirm an unqueried set member (Hilo) to deny a queried member (Hawaii). The affirmation of an unqueried alternate value generally conveys the falsity or R's ignorance of the queried value. Speakers may also license scalar implicatures by denying scalars. The dual to Horn's notion of affirming the highest affirmable value would be negating the lowest deniable scalar. In such a denial a speaker may implicate his affirmation or ignorance of lower scalars. So, in exchanges like (9a), a value higher than a queried value (here, a stage in the process of mortgage payment) may be denied to convey the truth of the queried value. R may also deny lower values (9b) or alternate values (9c).

(9) Q: Did you write a check for the rent?
    a. R: I haven't mailed it yet.
    b. R: I haven't signed it.
    c. R: I didn't pay cash.
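The redefined notion of scale can be sketched directly from the poset definition above. In the sketch below, the ordering relation (stages in paying the rent, adapted loosely from example (9)) and the classification function are my own inventions; the function simply reads off whether a value is higher, lower, or alternate relative to the queried one.

    # Illustrative sketch of a scale as a partial order, with higher /
    # lower / alternate values relative to a reference value. The
    # relation is given as its transitive closure.

    ORDER = {("write", "sign"), ("sign", "mail"), ("write", "mail")}

    def relation(reference: str, value: str) -> str:
        if (reference, value) in ORDER:
            return "higher"        # value follows the reference in the process
        if (value, reference) in ORDER:
            return "lower"
        return "alternate"         # incomparable under the ordering

    for v in ("write", "mail", "cash"):
        print(f"'{v}' is {relation('sign', v)} relative to 'sign'")
    # -> 'write' is lower, 'mail' is higher, 'cash' is alternate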
So, indirect scalar responses may be defined upon a number of metrics and may involve the affirmation or negation of higher, lower, or alternate values. They may also involve the affirmation or denial of more than one scalar for a single query, as shown in (10). Assume that Mary and Joe are brother and sister and both are known to Q and R. Also, Mary and Tim are fellow-workers with Q and R. Then to Q's question in (10), R may felicitously respond with any of the

(10) Q: Does Mary like skiing?
     a. R: She loves ice-skating.
     b. R: Joe loves cross-country.
     c. R: Tim likes cross-country.

answers given - as well as a variety of others, such as 'She used to' or even 'Joe used to love ice-skating.' That is, R may base his response upon any one or more scalars he perceives as invoked by Q's query. In addition, a single lexical item (here Mary) may invoke more than one scale: R may view Mary as a member of a family or of a set of fellow-workers, for example, to generate responses (10b) and (10c), respectively.

C. A Scalar Representation of YNQs

Given this characterization of appropriate indirect responses, it is possible to model the exchanges presented above in the following way:

1. For some query uttered by Q, let P V ~P represent the query's desideratum;
2. Let Px1/b1,x2/b2,...,xn/bn V ~Px1/b1,x2/b2,...,xn/bn represent the open proposition formed by substituting variables xi for each bi invoked by P that R perceives as lying on some scale Si;
3. Then Pa1/x1,a2/x2,...,an/xn V ~Pa1/x1,a2/x2,...,an/xn defines the set of possible responses to Q's query, where each ai represents some scalar cooccurring with its corresponding bi on Si;
4. A subset of these possible responses, the set of possible true responses, will be determined by R from his knowledge base, and an actual response selected.(7)

(7) See (Hirschberg, 1984) for further discussion of this selection process.

In (6), for example, the desideratum (P V ~P) of Q's query is the generic '(all) mushrooms are poisonous' V 'not (all) mushrooms are poisonous'. Here R might perceive a single scalar, all, lying on a quantifier scale, none/some/all. So, 'x1 mushrooms are poisonous' V 'not x1 mushrooms are poisonous' represents the open proposition formed by substituting a variable for all in P, where x1 ranges over the values on S1, none/some/all. Then the set of possible responses to Q's query, given R's choice of scalar, is defined by the affirmation or negation of each of the possible instantiations of 'x1 mushrooms are poisonous', or the set {no mushrooms are poisonous, some mushrooms are poisonous, all mushrooms are poisonous, ~no mushrooms are poisonous, ~some mushrooms are poisonous, ~all mushrooms are poisonous}. The set of possible true responses will be a subset of this set, determined by R from his knowledge base. Note that ai and bi may in fact be identical. Thus, the simple direct responses, equivalent to 'All mushrooms are poisonous' and 'Not all mushrooms are poisonous', are accommodated in this schema. This characterization of potential responses suggests a new representation for YNQs. Following Hintikka, one might paraphrase the query in (6) as 'Bring it about that I know that x1 mushrooms are poisonous or that I know that not x1 mushrooms are poisonous', where x1 ranges over the values on some scale S1 upon which the queried value some appears (assuming a many-sorted epistemic logic). Thus the query might be represented as

! ∃S1∃x1 (some,x1∈S1 ∧ {KQ(x1 mushrooms are poisonous) V KQ~(x1 mushrooms are poisonous)}).
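A brief sketch of what step 3 yields for exchange (6) - the cross product of scale values with affirmation and negation - may make the response set concrete. The enumeration below is my own illustration, not the paper's implementation.

    # Sketch: enumerating the possible-response set of step 3 for
    # query (6), given R's choice of the quantifier scale.

    scale = ["no", "some", "all"]          # values x1 may take on S1
    proposition = "{} mushrooms are poisonous"

    responses = [p for x in scale
                   for p in (proposition.format(x),             # affirmation
                             "not " + proposition.format(x))]   # negation
    for r in responses:
        print(r)
    # R then selects the true subset from his knowledge base, e.g. just
    # "some mushrooms are poisonous" in exchange (6).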
For a query like that in (10), an appropriate representation might be:

! ∃S1∃x1∃S2∃x2∃S3∃x3 (Mary,x1∈S1 ∧ like,x2∈S2 ∧ skiing,x3∈S3 ∧ {KQ(x1 x2 x3) V KQ~(x1 x2 x3)}).

R may then instantiate each variable with any value from its domain in his response. In the general case, then, YNQs might be represented as

! ∃S1,...,∃Sn ∃x1,...,∃xn (b1,x1∈S1 ∧ ... ∧ bn,xn∈Sn ∧ {KQ(Px1,...,xn) V KQ~(Px1,...,xn)}).

This representation shares some features of standard representations of wh-questions, suggesting that it simply extends Kiefer's view of focussed YNQs to all YNQs. However, there are several significant distinctions between this representation and standard representations of wh-questions, and, thus, between it and Kiefer's suggestion. First, it restricts the domains of variables to scales invoked by corresponding scalars in the original query's desideratum, and it includes a negative disjunct. 'Do you like Honolulu?' for example might have as its desideratum

∃S1∃x1∃S2∃x2∃S3∃x3 (you,x1∈S1 ∧ like,x2∈S2 ∧ Honolulu,x3∈S3 ∧ {KQ(x1 x2 x3) V KQ~(x1 x2 x3)}),

while the corresponding wh-question 'What do you like?' would have as its desideratum ∃x KQ(you like x). Second, the representation proposed here allows for reference in a query to multiple scalars, or multiple focii, which Kiefer does not consider. Third, it avoids both the division of YNQs into focussed and non-focussed queries and the dependency between wh-responses and negative responses noted above; hence, the representation is simpler and more unified. So, YNQs are not represented as wh-questions, although Kiefer's focussed YNQs can be accommodated in this more general representation, which I will term a scalar representation.

IV DISCUSSION

A scalar representation of YNQs can accommodate a wide range of direct and indirect responses which are common in natural discourse but which current representations of YNQs cannot support. Of course, such a redefinition is no panacea for the limitations of current representations: In its current form, for instance, there are some appropriate responses to indirect speech acts, such as (11), which it

(11) Q: Can you tell me the time?
     R: It's 5:30.

will not support. In other exchanges, such as (12), the notion of scale may seem less than natural, where a scale like attributes of a

(12) Q: Is she pretty?
     R: She's married.

potential date, {pretty, unmarried,...}, must be postulated to accommodate this query in the representation proposed here. Too, the actual representation of a particular query may vary according to participants' differing perceptions of the scalars invoked by it, as shown in (10). Because scales are not defined in absolute terms, it is difficult to determine even an abstract specification of the set of all possible responses to a given query; should temporal and modal variables always be understood as implicitly evoked by any query, for example, as in (13)? However, if broad categories of such

(13) Q: Is Gloria a blonde?
     a. R: She used to be.
     b. R: She could be.

'understood' scales can be identified, much of this difficulty might be alleviated. The representation proposed here does accommodate a far larger class of appropriate responses than representations previously suggested, and accommodates them in a unified way. With further refinement it promises to provide a useful tool for theoretical and computational treatments of YNQs.
ACKNOWLEDGEMENTS

I would like to thank Aravind Joshi, Kathy McCoy, Martha Pollack, Sitaram Lanka, and Bonnie Webber for their comments on this paper.

REFERENCES

Bolinger, D. Yes-No Questions Are Not Alternative Questions. In Hiz, H. (Ed.), Questions. Dordrecht (Neth.): Reidel, 1978.

Gazdar, G. A Solution to the Projection Problem. In Oh, C.-K. and Dinneen, D. (Eds.), Syntax and Semantics. New York: Academic Press, 1979.

Grice, H. P. Logic and Conversation. In Cole, P. and Morgan, J. L. (Eds.), Syntax and Semantics. New York: Academic Press, 1975.

Hintikka, J. Answers to Questions. In Hiz, H. (Ed.), Questions. Dordrecht (Neth.): Reidel, 1978.

Hirschberg, J. Scalar Implicature and Indirect Responses to Yes-No Questions (Tech. Rep. MS-CIS-84-9). University of Pennsylvania, April 1984.

Hobbs, J. and Robinson, J. Why Ask? Discourse Processes, 1979, Vol. 2.

Horn, L. R. On the Semantic Properties of Logical Operators in English. Doctoral dissertation, University of California at Los Angeles, 1972.

Joshi, A. K. The Role of Mutual Beliefs in Question-Answer Systems. In Smith, N. (Ed.), Mutual Belief. New York: Academic Press, 1982.

Kiefer, F. Yes-No Questions as WH-Questions. In Searle, J., Kiefer, F., and Bierwisch, M. (Eds.), Speech Act Theory and Pragmatics. Dordrecht (Neth.): Reidel, 1980.

Pollack, M. E., Hirschberg, J., and Webber, B. User Participation in the Reasoning Processes of Expert Systems (Tech. Rep. MS-CIS-82-9). University of Pennsylvania, July 1982. A shorter version appears in the AAAI Proceedings, 1982.

Soames, S. How Presuppositions Are Inherited: A Solution to the Projection Problem. Linguistic Inquiry, 1982, 13(3), 483-545.
THE SYNTAX AND SEMANTICS OF USER-DEFINED MODIFIERS IN A TRANSPORTABLE NATURAL LANGUAGE PROCESSOR

Bruce W. Ballard
Dept. of Computer Science
Duke University
Durham, N.C. 27708

ABSTRACT

The Layered Domain Class system (LDC) is an experimental natural language processor being developed at Duke University which reached the prototype stage in May of 1983. Its primary goals are (1) to provide English-language retrieval capabilities for structured but unnormalized data files created by the user; (2) to allow very complex semantics, in terms of the information directly available from the physical data file; and (3) to enable users to customize the system to operate with new types of data. In this paper we shall discuss (a) the types of modifiers LDC provides for; (b) how information about the syntax and semantics of modifiers is obtained from users; and (c) how this information is used to process English inputs.

I INTRODUCTION

The Layered Domain Class system (LDC) is an experimental natural language processor being developed at Duke University. In this paper we concentrate on the types of modifiers provided by LDC and the methods by which the system acquires information about the syntax and semantics of user-defined modifiers. A more complete description is available in [4,5], and further details on matters not discussed in this paper can be found in [1,2,6,8,9]. The LDC system is made up of two primary components. First, the knowledge acquisition component, whose job is to find out about the vocabulary and semantics of the language to be used for a new domain, then inquire about the composition of the underlying input file. Second, the User-Phase Processor, which enables a user to obtain statistical reductions on his or her data by typed English inputs. The top-level design of the User-Phase processor involves a linear sequence of modules for scanning the input and looking up each token in the dictionary; parsing the scanned input to determine its syntactic structure; translation of the parsed input into an appropriate formal query; and finally query processing.

This research has been supported in part by the National Science Foundation, Grants MCS-81-16607 and IST-83-01994; in part by the National Library of Medicine, Grant LM-07003; and in part by the Air Force Office of Scientific Research, Grant 81-0221.

The User-Phase portion of LDC resembles familiar natural language database query systems such as INTELLECT, JETS, LADDER, LUNAR, PHLIQA, PLANES, REL, RENDEZVOUS, TQA, and USL (see [10-23]), while the overall LDC system is similar in its objectives to more recent systems such as ASK, CONSUL, IRUS, and TEAM (see [24-31]). At the time of this writing, LDC has been completely customized for two fairly complex domains, from which examples are drawn in the remainder of the paper, and several simpler ones. The complex domains are a final grades domain, giving course grades for students in an academic department, and a building organization domain, containing information on the floors, wings, corridors, occupants, and so forth for one or more buildings. Among the simpler domains LDC has been customized for are files giving employee information and stock market quotations.

II MODIFIER TYPES PROVIDED FOR

As shown in [4],
LDC handles inputs about as complicated as

    students who were given a passing grade by
    an instructor Jim took a graduate course from

As suggested here, most of the syntactic and semantic sophistication of inputs to LDC is due to noun phrase modifiers, including a fairly broad coverage of relative clauses. For example, if LDC is told that "students take courses from instructors", it will accept such relative clause forms as

    students who took a graduate course from Trivedi
    courses Sarah took from Rogers
    instructors Jim took a graduate course from
    courses that were taken by Jim
    students who did not take a course from Rosenberg

We summarize the modifier types distinguished by LDC in Table 1, which is divided into four parts roughly corresponding to pre-nominal, nominal, post-nominal, and negating modifiers. We have included several modifier types, most of them anaphoric, which are processed syntactically, and methods for whose semantic processing are being implemented along the lines suggested in [7]. Most of the names we give to modifier types are self-explanatory, but the reader will notice that we have chosen to categorize verbs, based upon their semantics, as trivial verbs, implied parameter verbs, and operational verbs. "Trivial" verbs, which involve no semantics to speak of, can be roughly paraphrased as "be associated with". For example, students who take a certain course are precisely those students associated with the database records related to the course. "Implied parameter" verbs can be paraphrased as a longer "trivial" verb phrase by adding a parameter and requisite noise words for syntactic acceptability. For example, students who fail a course are those students who make a grade of F in the course. Finally, "operational" verbs require an operation to be performed on one or more of their noun phrase arguments, rather than simply asking for a comparison of their noun phrase referent(s) against values in specified fields of the physical data file. For example, the students who outscore Jim are precisely those students who make a grade higher than the grade of Jim. At present, prepositions are treated semantically as trivial verbs, so that "students in AI" is interpreted as "students associated with records related to the AI course". (A toy illustration of these three verb classes is sketched just after Table 1.)

Table 1 - Modifier Types Available in LDC

Modifier Type                         Example Usage                           Syntax       Semantics
                                                                              Implemented  Implemented

Ordinal                               the second floor                        yes          yes
Superlative                           the largest office                      yes          yes
Anaphoric Comparative                 better students                         yes          no
                                      more desirable instructors
Adjective                             the large rooms                         yes          yes
                                      classes that were small
Anaphoric Argument-Taking Adjective   adjacent offices                        yes          no
Anaphoric Implied-Parameter Verb      failing students                        yes          no

Noun Modifier                         conference rooms                        yes          yes
Subtype                               offices                                 yes          yes
Argument-Taking Noun                  classmates of Jim                       yes          yes
                                      Jim's classmates
Anaphoric Argument-Taking Noun        the best classmate                      yes          no

Prepositional Phrase                  students in CPS215                      yes          (yes)
Comparative Phrase                    students better than Jim                yes          yes
                                      a higher grade than a C
Trivial Verb Phrase                   instructors who teach AI                yes          yes
                                      students who took AI from Smith
Implied-Parameter Verb Phrase         students who failed AI                  yes          yes
Operational Verb Phrase               students who outscored Jim              yes          yes
Argument-Taking Adjective             offices adjacent to X-238               yes          yes

Negations (of many sorts)             the non graduate students               yes          yes
                                      offices not adjacent to X-238
                                      instructors that did not teach AI
                                      etc.
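The three verb classes described above reduce to quite different retrieval semantics, which the following toy sketch illustrates. All record fields, data, and helper names here are invented for illustration; LDC itself expresses such meanings in its macro and retrieval-query languages, described later in the paper.

    # Hedged illustration (not LDC code) of the three verb classes as
    # retrieval conditions over invented (student, course, grade) records.

    records = [
        {"student": "Jim", "course": "AI", "grade": "F"},
        {"student": "Sue", "course": "AI", "grade": "A"},
    ]

    def took(s, c):                  # trivial verb: "be associated with"
        return any(r["student"] == s and r["course"] == c for r in records)

    def failed(s, c):                # implied parameter: adds grade = F
        return any(r["student"] == s and r["course"] == c and r["grade"] == "F"
                   for r in records)

    def grade(s, c):
        return next(r["grade"] for r in records
                    if r["student"] == s and r["course"] == c)

    def outscored(s1, s2, c):        # operational: compares computed values
        # Letter grades compare lexicographically here, with "A" the best.
        return grade(s1, c) < grade(s2, c)

    print(took("Jim", "AI"), failed("Jim", "AI"), outscored("Sue", "Jim", "AI"))
    # -> True True True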
III KNOWLEDGE ACQUISITION FOR MODIFIERS

The job of the knowledge acquisition module of LDC, called "Prep" in Figure 1, is to find out about (a) the vocabulary of the new domain and (b) the composition of the physical data file. This paper is concerned only with vocabulary acquisition, which occurs in three stages. In Stage 1, Prep asks the user to name each entity, or conceptual data item, of the domain. As each entity name is given, Prep asks for several simple kinds of information, as in

    ENTITY NAME? section
    SYNONYMS: class
    TYPE (PERSON, NUMBER, LIST, PATTERN, NONE)? pattern
    GIVE 2 OR 3 EXAMPLE NAMES: cps51.12, ee34.1
    NOUN SUBTYPES: none
    ADJECTIVES: large, small
    NOUN MODIFIERS: none
    HIGHER LEVEL ENTITIES: class
    LOWER LEVEL ENTITIES: student, instructor
    MULTIPLE ENTITY? yes
    ORDERED ENTITY? yes

Prep next determines the case structure of verbs having the given entity as surface subject, as in

    ACQUIRING VERBS FOR STUDENT:
    A STUDENT CAN
        pass a course
        fail a course
        take a course from an instructor
        make a grade from an instructor
        make a grade in a course

In Stage 2, Prep learns the morphological variants of words not known to it, e.g. plurals for nouns, comparative and superlative forms for adjectives, and past tense and participle forms for verbs. For example,

    PAST-TENSE VERB ACQUISITION
    PLEASE GIVE CORRECTED FORMS, OR HIT RETURN
    FAIL FAILED >
    BITE BITED > bit
    TRY TRIED >

In Stage 3, Prep acquires the semantics of adjectives, verbs, and other modifier types, based upon the following principles.

1. Systems which attempt to acquire complex semantics from relatively untrained users had better restrict the class of the domains they seek to provide an interface to. For this reason, LDC restricts itself to a class of domains [1] in which the important relationships among domain entities involve hierarchical decompositions.

2. There need not be any correlation between the type of modifier being defined and the way in which its meaning relates to the underlying data file. For this reason, Prep acquires the meanings of all user-defined modifiers in the same manner, by providing such primitives as id, the identity function; val, which retrieves a specified field of a record; num, which returns the size of its argument, which is assumed to be a set; sum, which returns the sum of its list of inputs; avg, which returns the average of its list of inputs; and pct, which returns the percentage of its list of boolean arguments which are true.

Other user-defined adjectives may also be used. Thus, a "desirable instructor" might be defined as an instructor who gave a good grade to more than half his students, where a "good grade" is defined as a grade of B or above. These two adjectives may be specified as shown below.

    ACQUIRING SEMANTICS FOR DESIRABLE INSTRUCTOR
    PRIMARY? section
    TARGET? grade
    PATH IS: GRADE / STUDENT / SECTION
    FUNCTIONS? good / id / pct
    PREDICATE? > 50

    ACQUIRING SEMANTICS FOR GOOD GRADE
    PRIMARY? grade
    TARGET? grade
    PATH IS: GRADE
    FUNCTIONS? val
    PREDICATE? >= B

As shown here, Prep requests three pieces of information for each adjective-entity pair, namely (1) the primary (highest-level) and target (lowest-level) entities needed to specify the desired adjective meaning; (2) a list of functions corresponding to the arcs on the path from the primary to the target nodes; and finally (3) a predicate to be applied to the numerical value obtained from the series of function calls just acquired.
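One plausible reading of how such a (path, functions, predicate) triple could be evaluated is sketched below; the data layout and the evaluator are assumptions of mine, not LDC internals. Working up the GRADE / STUDENT / SECTION path, good is applied at the grade level, id at the student level, and pct at the section level, after which the predicate "> 50" is tested.

    # Hypothetical evaluation of the acquired "desirable" semantics over
    # an invented section record with three students.

    section = {"students": [{"grade": "A"}, {"grade": "B"}, {"grade": "D"}]}

    def good(g):                     # the acquired "good grade": val >= B
        return g in ("A", "B")

    def pct(bools):                  # primitive: percentage of True values
        return 100.0 * sum(bools) / len(bools)

    # FUNCTIONS good / id / pct along PATH grade / student / section:
    values = [good(s["grade"]) for s in section["students"]]  # grade, student levels
    result = pct(values)                                      # section level

    print(result, result > 50)      # PREDICATE "> 50" -> section is desirable
    # -> 66.66..., True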
IV UTILIZATION OF THE INFORMATION ACQUIRED DURING PREPROCESSING

As shown in Figure 1, the English-language processor of LDC achieves domain independence by restricting itself to (a) a domain-independent, linguistically-motivated phrase-structure grammar [6] and (b) the domain-specific files produced by the knowledge acquisition module. The simplest file is the pattern file, which captures the morphology of domain-specific proper nouns, e.g. the entity type "room" may have values such as X-238 and A-22, or "letter, dash, digits". This information frees us from having to store all possible field values in the dictionary, as some systems do, or to make reference to the physical data file when new data values are typed by the user, as other systems do. The domain-specific dictionary file contains some standard terms (articles, ordinals, etc.) and also both root words and inflections for terms acquired from the user. The sample dictionary entry

    (longest Superl long (nt meeting week))

says that "longest" is the superlative form of the adjective "long", and may occur in noun phrases whose head noun refers to entities of type meeting or week. By having this information in the dictionary, the parser can perform "local" compatibility checks to assure the integrity of a noun phrase being built up, i.e. to assure all words in the phrase can go together on non-syntactic grounds. This aids in disambiguation, yet avoids expensive interaction with a subsequent semantics module.
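The kind of "local" check such an entry supports can be pictured as follows. The dictionary representation and the function below are hypothetical, intended only to show how the entry's type list, (nt meeting week), lets the parser reject incompatible noun phrases early.

    # Illustrative sketch only: a "local" compatibility check driven by
    # a dictionary entry like (longest Superl long (nt meeting week)).

    DICTIONARY = {
        "longest": {"cat": "Superl", "root": "long",
                    "head_types": {"meeting", "week"}},
    }
    ENTITY_TYPE = {"meeting": "meeting", "office": "office"}

    def compatible(modifier: str, head_noun: str) -> bool:
        entry = DICTIONARY[modifier]
        return ENTITY_TYPE[head_noun] in entry["head_types"]

    print(compatible("longest", "meeting"))   # True:  "the longest meeting"
    print(compatible("longest", "office"))    # False: pruned during parsing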
The retrieval query commands generated by the positive usage of "fail", as in students that Rosenberg failed would be the sequence instructor -- Rosenberg; student -> fail so the question is whether to introduce "not" at the phrase level not iinstructor = Rosenberg; student -> fail~ or instead at the verb level instructor = Rosenberg; not ~student -> fail] Our current system takes the literal reading, and thus generates the first interpretation given• The example points out the close relationship between negation scope and the important problem of "presupposition", in that the user may be interested only in students who had a chance to be failed• 55 REFERENCES I. BaUard, B. A "Domain Class" approach to transportable natural language processing. Cogn~tio~ g~td /Yrczin Theory, 5 (1982), 3, pp. 269-287. Ballard, B. and Lusth, J. An English-language processing system that "learns" about new domains. AF~PS N¢~on~ Gomputer Conference, 1983. pp. 39-46. Ballard, B. and Lusth, J. The design of DOMINO: a knowledge-based information retrieval processor for office enviroments. Tech. Report CS-1984-2, Dept. of Computer Science, Duke University, February 1984. Ballard, B., Lusth, J. and Tinkham, N. LDC-I: a transportable, knowledge-based natural language processor for office environments. ACM Tt'~ns. o~ Off~ce /~-mah~ ~ystoma, 2 (1984), 1, pp. 1-25. BaUard, B., Lusth, J. and Tinkham, N. Transportable English language processing for office environments. AF~' Nat~mw~ O~m~uter Conference, 1984, to appear in the proceedings. Ballard, B. and Tinkham, N. A phrase-structured grammatical formalism for transportable natural language processing, llm~r. J. Cow~p~t~zt~na~ L~n~ist~cs, to appear. Biermann, A. and Ballard, B. Toward natural language computation. Am~r. ~. Com~ut=~mu=l ~g=iet~cs, 6 (1980), 2, pp. 71-86. Lusth, J. Conceptual Information Retrieval for Improved Natural Language Processing (Master's Thesis). Dept. of Computer Science, Duke University, February 1984. Lusth, J. and Ballard, B. Knowledge acquisition for a natural language processor. Cue,'ere*we o~ .4~t~-ieJ .~tetH@e~ws, Oakland University, Rochester, Michigan, April 1983, to appear in the proceedings. I0. Bronnenberg, W., Landsbergen, S., Scha, R., Schoenmakers, W. and van Utteren, E. pHLIQA-1, a question-answering system for data-base consultation in natural English. /Wt~s tecA, Roy. 38 (1978-79), pp. 229-239 and 269-284. 11. Codd, T. Seven steps to RENDEZVOUS with the casual user. [n Do2~ Base M¢m,o, gem, en¢, J. Kimbie and K. Koffeman (Eds.), North-Holland, 1974. 12. Codd, T. RENDEZVOUS Version I: Aa experimental English-language query formulation system for casual users of relational data bases. IBM Research Report RJ2144, San Jose, Ca., 1978. 13. Finin, T., Goodman, B. and Tennant, H. JETS: achieving completeness through coverage and closure. Int. J. Conf. on Art~j~/n~e/~igence, 1979, pp. 275-281. 14. Harris, L. User-oriented data base query with the Robot natural language system. Int. J. M~n-M~ch~ne ~dies, 9 (1977), pp. 697-713. 15. Harris, L. The ROBOT system: natural language processing applied to data base query. ACM Nct~ion~t C~rnference, 1978, pp. 165-172. 16. Hendrix, G. Human engineering for applied natural language processing. /n~. $. Co~f. o~ .4~t~j~c~a~ ~¢tott@jev~e, 1977, pp. 183-191. 2. 3. 4. 5. 8. 7. 8. 9. 17. Hendrix, G., Sacerdoti, E., Sagalowicz, D. and Slocum, J. Developing a natural language interface to complex data. ACM Tr(uts. on D=t~bsse ~l/stsrrts, 3 (1978), 2, pp. 105-147. 18. Lehmann, H. 
Interpretation of natural language in an information system. IBM J. Res. Dev. 22 (1978), 5, pp. 560-571.
19. Plath, W. REQUEST: a natural language question-answering system. IBM J. Res. Dev., 20 (1976), 4, pp. 326-335.
20. Thompson, F. and Thompson, B. Practical natural language processing: the REL system as prototype. In Advances in Computers, Vol. 13, M. Rubinoff and M. Yovits, Eds., Academic Press, 1975.
21. Waltz, D. An English language question answering system for a large relational database. Comm. ACM 21 (1978), 7, pp. 526-539.
22. Woods, W. Semantics and quantification in natural language question answering. In Advances in Computers, Vol. 17, M. Yovits, Ed., Academic Press, 1978.
23. Woods, W., Kaplan, R. and Nash-Webber, B. The Lunar Sciences Natural Language Information System: Final Report. Report 2378, Bolt, Beranek and Newman, Cambridge, Mass., 1972.
24. Ginsparg, J. A robust portable natural language data base interface. Conf. on Applied Natural Language Processing, Santa Monica, Ca., 1983, pp. 25-30.
25. Grosz, B. TEAM: A transportable natural language interface system. Conf. on Applied Natural Language Processing, Santa Monica, Ca., 1983, pp. 39-45.
26. Haas, N. and Hendrix, G. An approach to acquiring and applying knowledge. First Nat. Conf. on Artificial Intelligence, Stanford Univ., Palo Alto, Ca., 1980, pp. 235-239.
27. Hendrix, G. and Lewis, W. Transportable natural-language interfaces to databases. Proc. 19th Annual Meeting of the ACL, Stanford Univ., 1981, pp. 159-165.
28. Mark, W. Representation and inference in the Consul system. Int. Joint Conf. on Artificial Intelligence, 1981.
29. Thompson, B. and Thompson, F. Introducing ASK, a simple knowledgeable system. Conf. on Applied Natural Language Processing, Santa Monica, Ca., 1983, pp. 17-24.
30. Thompson, F. and Thompson, B. Shifting to a higher gear in a natural language system. National Computer Conference, 1981, pp. 657-662.
31. Wilczynski, D. Knowledge acquisition in the Consul system. Int. Joint Conf. on Artificial Intelligence, 1981.
| 1984 | 13 |
Interaction of Knowledge Sources in a Portable Natural Language Interface

Carole D. Hafner
Computer Science Department
General Motors Research Laboratories
Warren, MI 48090

Abstract

This paper describes a general approach to the design of natural language interfaces that has evolved during the development of DATALOG, an English database query system based on Cascaded ATN grammar. By providing separate representation schemes for linguistic knowledge, general world knowledge, and application domain knowledge, DATALOG achieves a high degree of portability and extendability.

1. Introduction

An area of continuing interest and challenge in computational linguistics is the development of techniques for building portable natural language (NL) interfaces (see, for example, [9,3,12]). The investigation of this problem has led to several NL systems, including TEAM [7], IRUS [1], and INTELLECT [10], which separate domain-dependent information from other, more general capabilities, and thus have the ability to be transported from one application to another. However, it is important to realize that the domain-independent portions of such systems constrain both the form and the content of the domain-dependent portions. Thus, in order to understand a system's capabilities, one must have a clear picture of the structure of interaction among these modules.

This paper describes a general approach to the design of NL interfaces, focusing on the structure of interaction among the components of a portable NL system. The approach has evolved during the development of DATALOG (for "database dialogue"), an experimental system that accepts a wide variety of English queries and commands and retrieves the answer from the user's database. If no items satisfy the user's request, DATALOG gives an informative response explaining what part of the query could not be satisfied. (Generation of responses in DATALOG is described in another report [6].) Although DATALOG is primarily a testbed for research, it has been applied to several demonstration databases and one "real" database containing descriptions and rental information for more than 500 computer hardware units.

The portability of DATALOG is based on the independent specification of three kinds of knowledge that such a system must have: a linguistic grammar of English; a general semantic model of database objects and relationships; and a domain model representing the particular concepts of the application domain. After giving a brief overview of the architecture of DATALOG, the remainder of the paper will focus on the interactions among the components of the system, first describing the interaction between syntax and semantics, and then the interaction between general knowledge and domain knowledge.

2. Overview of DATALOG Architecture

The architecture of DATALOG is based on Cascaded ATN grammar, a general approach to the design of language processors which is an extension of Augmented Transition Network grammar [13]. The Cascaded ATN approach to NL processing was first developed in the RUS parser [2] and was formally characterized by Woods [14]. Figure 1 shows the architecture of a Cascaded ATN for NL processing: the syntactic and semantic components are implemented as separate processes which operate in parallel, communicating information back and forth.
This back-and-forth communication (represented by the "interface" portions of the diagram) allows a linguistic ATN grammar to interact with a semantic processor, creating a conceptual representation of the input in a step-by-step manner and rejecting semantically incorrect analyses at an early stage.

[Figure 1. Cascaded Architecture for Natural Language Processing: an ATN grammar and a semantics component, linked by an interface, operate in parallel on the input and jointly produce a combined syntactic/semantic analysis.]

DATALOG extends the architecture shown in Figure 1 in the direction of increased portability, by dividing semantics into two parts (see Figure 2). A "general" semantic processor based on the relational model of data [5] interprets a wide variety of information requests applied to abstract database objects. This level of knowledge is equivalent to what Hendrix has labelled "pragmatic grammar" [9]. Domain knowledge is represented in a semantic network, which encodes the conceptual structure of the user's database. These two levels of knowledge representation are linked together, as described in Section 4 below.

[Figure 2. Architecture of DATALOG: the ATN grammar is cascaded, via an interface, with a two-level semantic processor, producing the combined syntactic/semantic analysis.]

The output of the cascaded ATN grammar is a combined linguistic and conceptual representation of the query (see Figure 3), which includes a "SEMANTICS" component along with the usual linguistic constituents in the interpretation of each phrase.

3. Interaction of Syntax and Semantics

The DATALOG interface between syntax and semantics is a simplification of the RUS approach, which has been described in detail elsewhere [11]. The linguistic portion of the interface is implemented by adding a new arc action called "ASSIGN" to the ATN model of grammar. ASSIGN communicates partial linguistic analyses to a semantic interpreter, which incrementally creates a conceptual representation of the input. If an assignment is nonsensical or incompatible with previous assignments, the semantic interpreter can reject the assignment, causing the parser to back up and try another path through the grammar.

In DATALOG, ASSIGN is a function of three arguments: the HEAD of the current clause or phrase, the CONSTITUENT which is being added to the interpretation of the phrase, and the SYNTACTIC SLOT which the constituent occupies. As a simplified example, an ATN grammar might process noun phrases by "collecting" determiners, numbers, superlatives and other pre-modifiers in registers until the head noun is found. Then the head is assigned to the NPHEAD slot; the pre-modifiers are assigned (in reverse order) to the NPPREMOD slot; superlatives are assigned to the SUPER slot; and numbers are assigned to the NUMBER slot. Finally, the determiners are assigned to the DETERMINER slot. If all of these assignments are acceptable to the semantic interpreter, an interpretation is constructed for the "base noun phrase", and the parser can then begin to process the noun phrase post-modifiers. Figure 3 illustrates the interpretation of "the tallest female employee", according to this scheme. A more detailed description of how DATALOG constructs interpretations is contained in another report [8].

During parsing, semantic information is collected in "semantic" registers, which are inaccessible (by convention) to the grammar.
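As an illustration of how the ASSIGN discipline just described might be realized, consider the following sketch. It is our own reconstruction for exposition, not DATALOG's code (the paper does not show the implementation), and the class and method names are hypothetical.

class SemanticInterpreter:
    """Incrementally builds a conceptual representation, rejecting
    assignments that are nonsensical or incompatible."""

    def __init__(self):
        self.interpretation = {}

    def assign(self, head, constituent, slot):
        # Reject, e.g., a modifier whose meaning cannot apply to
        # entities of the head's type; the parser then backs up
        # and tries another path through the grammar.
        if not self.compatible(head, constituent, slot):
            return False
        self.interpretation.setdefault(slot, []).append(constituent)
        return True

    def compatible(self, head, constituent, slot):
        # Domain-model check, elided in this sketch.
        return True

interp = SemanticInterpreter()
# ASSIGN actions for "the tallest female employee":
interp.assign("employee", "employee", "NPHEAD")
interp.assign("employee", ("AMOD", "female"), "NPPREMOD")
interp.assign("employee", ("ADJP", ("ADV", "most"), ("ADJ", "tall")), "SUPER")
interp.assign("employee", ("the",), "DET")

Each call mirrors one row of Figure 3; a False return is the only information that flows back to the grammar.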
This register convention ensures the generality of the grammar; although the linguistic component (through the assignment mechanism) controls the information that is passed to the semantic interpreter, the only information that flows back to the grammar is the acceptance or rejection of each assignment. When the grammar builds a constituent structure for a phrase or clause, it includes an extra constituent called "SEMANTICS", which it takes from a semantic register. However, generality of the grammar is maintained by forbidding the grammar to examine the contents of the SEMANTICS constituent.

Pushing for Noun Phrase.
ASSIGN Actions:
  HEAD      CONSTITUENT                    SYNTACTIC SLOT
  employee  employee                       NPHEAD
  employee  (AMOD female)                  NPPREMOD
  employee  (ADJP (ADV most) (ADJ tall))   SUPER
  employee  (the)                          DET
Popping Noun Phrase:
(NP (DET (the))
    (PREMODS ((ADJP (ADV most) (ADJ tall)) (AMOD female)))
    (HEAD employee)
    (SEMANTICS (ENTITY (Q nil) (KIND employee)
                (RESTRICTIONS (((ATT sex) (RELOP ISA) (VALUE female))
                               ((ATT height) (RANKOP MOST) (CUTOFF 1)))))))

Figure 3. Interpretation of "the tallest female employee".

4. Interaction of General and Application Semantics

The semantic interpreter is divided into two levels: a "lower-level" semantic network representing the objects and relationships in the application domain, and a "higher-level" network representing general knowledge about database structures, data analysis, and information requests. Each node of the domain network, in addition to its links with other domain concepts, has a "hook" attaching it to the higher-level concept of which it is an instance. Semantic procedures are also attached to the higher-level concepts; in this way, domain concepts are indirectly linked to the semantic procedures that are used to interpret them.

Figure 4 illustrates the relationship between the general concepts of DATALOG and the domain semantic network of a personnel application. Domain concepts such as "female" and "dollar" are attached to general concepts such as /SUBCLASS/ and /UNIT/. (The higher-level concepts are delimited by slash "/" characters.) When a phrase such as "40000 dollars" is analyzed, the semantic procedures for the general concept /UNIT/ are invoked to interpret it.

The general concepts are also organized into a network, which supports inheritance of semantic procedures. For example, two of the general concepts in DATALOG are /ATTR/, which can represent any attribute in the database, and /NUMATTR/, which represents numeric attributes such as "salary" and "age". Since /ATTR/ is the parent of /NUMATTR/ in the general concept network, its semantic procedures are automatically invoked when required during interpretation of a phrase whose head is a numeric attribute. This occurs whenever no /NUMATTR/ procedure exists for a given syntactic slot; thus, sub-concepts can be defined by specifying only those cases where their interpretations differ from the parent. (A sketch of this lookup appears below.) Figure 5 shows the same diagram as Figure 4, with concepts from the computer hardware database substituted for personnel concepts. This illustrates how the semantic procedures that interpreted personnel queries can be easily transported to a different domain.

5. Conclusions

The general approach we have taken to defining the inter-component interactions in DATALOG has led to a high degree of extendability. We have been able to add new sub-networks to the grammar without making any changes in the semantic interpreter, producing correct interpretations (and correct answers from the database) on the first try.
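The procedure-inheritance scheme of Section 4 can be made concrete with a short sketch. The code is only illustrative; the concept names follow the paper, but the table layout, slot names, and procedures are our own assumptions, not DATALOG's implementation.

# General-concept network: child -> parent.
PARENT = {"/NUMATTR/": "/ATTR/", "/ATTR/": None}

# Semantic procedures attached to general concepts, per syntactic slot.
PROCEDURES = {
    ("/ATTR/", "NPHEAD"): lambda phrase: {"attribute": phrase},
    ("/NUMATTR/", "COMPARATIVE"): lambda phrase: {"numeric-compare": phrase},
}

def procedure_for(concept, slot):
    """Walk up the general-concept network until a procedure for
    this slot is found; sub-concepts override only where they differ."""
    while concept is not None:
        proc = PROCEDURES.get((concept, slot))
        if proc is not None:
            return proc
        concept = PARENT[concept]
    raise LookupError("no semantic procedure for slot " + slot)

# "salary" hooks to /NUMATTR/; since no /NUMATTR/ procedure exists
# for the NPHEAD slot, the /ATTR/ procedure is inherited:
interpret = procedure_for("/NUMATTR/", "NPHEAD")

Because the walk stops at the first procedure found, a sub-concept needs entries only for the slots where its interpretation differs from its parent's.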
We have also been able to implement new general semantic processes without modifying the grammar, taking advantage of the "conceptual factoring" [14] which is one of the benefits of the Cascaded ATN approach.

The use of a two-level semantic model is an experimental approach that further adds to the portability of a Cascaded ATN grammar. By representing application concepts in an "epistemological" semantic network with a restricted set of primitive links (see Brachman [4]), the task of building a new application of DATALOG is reduced to defining the nodes and connectivity of this network and the synonyms for the concepts represented by the nodes.

[Figure 4. Interaction of Domain and General Knowledge: for the query "Which female Ph.D.s earn more than 40000 dollars", domain concepts such as female, male, Ph.D., masters, and earn are linked to the employee attributes sex, degree, and salary.]

Martin et al. [12] define a transportable NL interface as one that can acquire a new domain model by interacting with a human database expert. Although DATALOG does not yet have such a capability, the two-level semantic model provides a foundation for it.

DATALOG is still under active development, and current research activities are focused on two problem areas: extending the two-level semantic model to handle more complex databases, and integrating a pragmatic component for handling anaphora and other dialogue-level phenomena into the Cascaded ATN grammar.

6. References

1. Bates, M. and Bobrow, R. J., "Information Retrieval Using a Transportable Natural Language Interface." In Research and Development in Information Retrieval: Proc. Sixth Annual International ACM SIGIR Conf., Bethesda MD, pp. 81-86 (1983).
2. Bobrow, R., "The RUS System." In "Research in Natural Language Understanding," BBN Report No. 3878. Cambridge, MA: Bolt Beranek and Newman Inc. (1978).
3. Bobrow, R. and Webber, B. L., "Knowledge Representation for Syntactic/Semantic Processing." In Proc. of the First Annual National Conf. on Artificial Intelligence, Palo Alto CA, pp. 316-323 (1980).
4. Brachman, R. J., "On the Epistemological Status of Semantic Networks." In Associative Networks: Representation and Use of Knowledge by Computers, pp. 3-50. Edited by N. V. Findler, New York NY (1979).
5. Codd, E. F., "A Relational Model of Data for Large Shared Data Banks." Communications of the ACM, Vol. 13, No. 6, pp. 377-387 (1970).
6. Godden, K. S., "Categorizing Natural Language Queries for Intelligent Responses." Research Publication 4839, General Motors Research Laboratories, Warren MI (1984).
7. Grosz, B. J., "TEAM: A Transportable Natural Language Interface System." In Proc. Conf. on Applied Natural Language Processing, Santa Monica CA, pp. 39-45 (1983).
8. Hafner, C. D. and Godden, K. S., "Design of Natural Language Interfaces: A Case Study." Research Publication 4587, General Motors Research Laboratories, Warren MI (1984).
9. Hendrix, G. G. and Lewis, W. H., "Transportable Natural Language Interfaces to Data." Proc. 19th Annual Meeting of the Assoc. for Computational Linguistics, Stanford CA, pp. 159-165 (1981).
10. INTELLECT Query System User's Guide, 2nd Edition. Newton Centre, MA: Artificial Intelligence Corp. (1980).
11. Mark, W. S. and Barton, G. E., "The RUSGRAMMAR Parsing System." Research Publication GMR-3243. Warren, MI: General Motors Research Laboratories (1980).
12. Martin, P., Appelt, D., and Pereira, F., "Transportability and Generality in a Natural-Language Interface System." In Proc. Eighth International Joint Conf.
on Artificial Intelligence, Karlsruhe, West Germany (1983).
13. Woods, W., "Transition Network Grammars for Natural Language Analysis." Communications of the ACM, Vol. 13, No. 10, pp. 591-606 (1970).
14. Woods, W., "Cascaded ATN Grammars." American Journal of Computational Linguistics, Vol. 6, No. 1, pp. 1-12 (1980).

[Figure 5. Figure 4 Transported to a New Domain: the query "Which IBM terminals weigh more than 70 pounds" links computer-hardware domain concepts to the same general concepts (val_of, verb, unit_of) that served the personnel application.]
| 1984 | 14 |
USES OF C-GRAPHS IN A PROTOTYPE FOR AUTOMATIC TRANSLATION

Marco A. CLEMENTE-SALAZAR
Centro de Graduados e Investigación, Instituto Tecnológico de Chihuahua,
Av. Tecnológico No. 2909, 31310 Chihuahua, Chih., MEXICO.

ABSTRACT

This paper presents a prototype, not completely operational, that is intended to use c-graphs in the translation of assemblers. Firstly, the formalization of the structure and its principal notions (substructures, classes of substructures, order, etc.) are presented. The next section describes the prototype, which is based on a Transformational System as well as on a rewriting system of c-graphs which constitutes the nodes of the Transformational System. The following part discusses a set of operations on the structure. Finally, the implementation in its present state is shown.

1. INTRODUCTION

In the past [10,11], several kinds of representation have been used (strings, labelled trees, trees with "decorations", graphs of strings and (semantic) networks). C-graphs had their origin as an alternative in the representation and in the treatment of ambiguities in Automatic Translation. In earlier papers [4,5] this structure is named E-graph, but c-graph is better suited since it is a generalized "grafo de cadenas" (graph of strings). This structure combines some advantages of the Q-systems [7] and of the trees of ARIANE-78 [1,2,11], in particular, the use of only one structure for all the translation process (as in the former) and foreseeable decidability and parallelism (as in the latter). This paper presents a prototype, not completely operational, that uses c-graphs and is intended to translate assemblers, to refine the adequacy of this kind of structure in the translation of natural languages.

2. DEFINITIONS

C-graph. A c-graph G is a cycle-free, labelled graph [1,9] without isolated nodes and with exactly one entry node and one exit node. It is completely determined by a 7-tuple: G = (A, S, ρ, I, O, E, ε), where A is a set of arcs, S a set of nodes, ρ a mapping of A into S×S, I the input node, O the output node, E a set of labels (c-trees, c-graphs) and ε a mapping of A into E. (A concrete rendering of this tuple as a data type is sketched below.) For the sake of simplicity, arcs and labels will be merged in the representation of G (cf. Fig. 1). Interesting c-graphs are sequential c-graphs (cf. Fig. 2a) and bundles (cf. Fig. 2b).

[Figure 1 (drawing not reproduced) defines the example c-graph G:]
A = {1,...,12} ; S = {1,...,7} ; I = {1} ; O = {7}
ρ = {(1,1,2), (2,2,4), (3,4,5), (4,5,7), (5,5,6), (6,6,7), (7,6,7), (8,2,3), (9,3,4), (10,3,5), (11,1,2), (12,1,2)}
E = {a,b,c,d,e,f,g,h,i,j,k}
ε = {(1,a), (2,b), (3,f), (4,g), (5,i), (6,j), (7,k), (8,c), (9,d), (10,e), (11,b), (12,h)}
Fig. 1. A c-graph.

[Figure 2 (drawings not reproduced):]
Fig. 2. A sequential c-graph (a) and a bundle (b).

C-trees. A c-tree, or a tree with decorations, is an ordered tree with nodes labelled by a label and a decoration that is itself a decorated tree, possibly empty.

Classes of c-graphs. There are three major classes: (1) recursive c-graphs (cf. Fig. 3a), where each arc is labelled by a c-graph; (2) simple c-graphs (cf. Fig. 1), where each arc is labelled by a c-tree; and (3) regular c-graphs, a proper subclass of the second that is obtained by concatenation and alternation of simple arcs (cf. Fig. 3b). By denoting concatenation by "." and alternation by "+", we have an evident linear representation. For example, G4 = g+i.(j+k). Note that not every c-graph may be obtained by these operations, e.g. G.

Substructures. For the sake of homogeneity, the only substructures allowed are those that are themselves c-graphs.
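As a concrete rendering of the 7-tuple definition above, here is a minimal sketch of a c-graph as a data type, populated with the example of Fig. 1. The representation choices (dictionaries keyed by arc number) are ours, purely for illustration; they are not the author's PROLOG or PASCAL encoding.

from dataclasses import dataclass

@dataclass
class CGraph:
    arcs: set      # A
    nodes: set     # S
    rho: dict      # arc -> (source node, target node)
    entry: int     # I
    exit: int      # O
    labels: dict   # arc -> label from E (the epsilon mapping)

# The c-graph G of Fig. 1:
G = CGraph(
    arcs=set(range(1, 13)),
    nodes=set(range(1, 8)),
    rho={1: (1, 2), 2: (2, 4), 3: (4, 5), 4: (5, 7), 5: (5, 6), 6: (6, 7),
         7: (6, 7), 8: (2, 3), 9: (3, 4), 10: (3, 5), 11: (1, 2), 12: (1, 2)},
    entry=1, exit=7,
    labels={1: "a", 2: "b", 3: "f", 4: "g", 5: "i", 6: "j",
            7: "k", 8: "c", 9: "d", 10: "e", 11: "b", 12: "h"},
)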
These substructures will be called sub-c-graphs or seg's. For example, G1 and G2 are seg's of G.

[Figure 3 (drawings not reproduced): (a) a recursive c-graph; (b) a regular c-graph G4.]
Fig. 3. Two classes of c-graphs.

Isolatability. This is a feature that determines, for each c-graph G, several classes of seg's. An isolated seg G' is intuitively a seg that has no arcs that "enter" or "leave" G'. Depending on the relation that each isolated seg keeps with the rest of the c-graph, several classes of isolatability can be defined.

a) Weak isolatability. A seg G' of G is weakly isolatable (segif) if and only if for every node x of G' (except I' and O'), all of the arcs that leave or enter x are in G'. E.g., G5 = i is a segif of G.

b) Normal isolatability. A seg G' of G is normally isolatable (segmi) if and only if it is a segif and there is a path, not in G', such that it leaves I' and enters O'. Example: G6 = k is a segmi of G.

c) Strong isolatability. A seg G' of G is strongly isolatable (segfi) if and only if the only node that has entering arcs not in G' is I' and the only node that has leaving arcs not in G' is O'. When G' is not an arc and there is no segfi contained strictly in G', then G' is an "elementary segfi"; if G contains no segfi, then G is elementary. E.g., G4 is a segfi of G.

Order and roads. Two order relations are considered: (1) a "vertical" order, or linear order of the arcs having the same initial node, and (2) a "horizontal" order, or partial order between two arcs on the same path. A road is a path from I to O. Vertical order induces a linear order on roads.

3. DEFINITION OF THE PROTOTYPE

The prototype consists of a model and a data structure. The model is essentially a generalization of a Transformational System (TS) analogous to ROBRA [2], whose grammars are rewriting systems of c-graphs (RSC) [4,5,6]. Regarding the data structure, we use c-graphs.

3.1 A Transformational System

This TS is a c-graph-to-c-graph transducer. It is a "control" graph whose nodes are RSCs and whose arcs are labelled by conditions. A TS is a cycle-free oriented graph, with only one input and such that:
(1) Each node is labelled with an RSC or &nul.
(2) &nul has no successor.
(3) Each grammar of the RSC has a transition scheme S or ε (empty scheme).
(4) Arcs of the same initial node are ordered.

TS works heuristically. Given a c-graph g0 as an input, it searches for the first path ending in &nul. This fact implies that all of the transition schemes on the path were satisfied. Any scheme not satisfied provokes a search for a new path. For example, if S1 is satisfied, TS produces G1(g0) = g1 and proceeds to calculate G2(G1(g0)) = g2. If S4 is satisfied the system stops and produces g2. Otherwise, it backtracks to G1 and tests S2. If it is satisfied, a new result is produced. Otherwise, it tests S3, etc.

[Figure 4 (drawing not reproduced): a control graph of RSC nodes G1, G2, ... whose arcs carry transition schemes S1, S2, ... and whose exits are &nul nodes.]
Fig. 4. A Transformational System.

3.2 A REWRITING SYSTEM

Let us consider a simple example: let GR be the following grammar for syntactic analysis (without intending an example of linguistic value).

R1: (g1+α1+g2)(g3+α2+g4)* ==> (g1+g2)(g3+α2+g4)+β1  / α1=GN, α2=GV /   == / β1:=PHRA(α1,α2) /.
R2: (g1+α1+g2)(g3+α2+g4)  ==> (g1+g2)(g3+α2+g4)+β1  / α1=VB, α2=GN /   == / β1:=PRED(α1,α2) /.
R3: α1(g1+α2+g2)          ==> α1(g1+g2)+β1          / α1=NP, α2=AD /   == / β1:=GN(α1,α2) /.
R4: α1(g1+α2+g2)          ==> g1+g2+β1              / α1=NP, α2=PRED / == / β1:=PHRA(α1,α2) /.
R5: (g1+α1+g2)(g3+α2+g4)  ==> (g1+g2)(g3+α2+g4)+β1  / α1=PRON, α2=VB / == / β1:=GV(α1,α2) /.
R6: (g1+α1+g2)(g3+α2+g4)  ==> (g1+g2)(g3+α2+g4)+β1  / α1=ART, α2=NM /  == / β1:=GN(α1,α2) /.

As we can see, each rule has a name (R1, R2, ...), a left side and a right side. The left side defines the geometrical form
As we can see, each rule has: a name (RI,R2, ...), a left side and a right side. The left side defines the geometricaI Form 62 and the condition that an actual seg must meet in order to be transformed. It is a c-graph scheme composed of two parts: the structural descriptor that defines the geometrical form and the condition (between slashes) that tests label information. The first part use "*" as an "element of structural de- scription" in the first rule. It denotes the fact that no seg must be right-concatenated to g3+~2+g4. The right side defines the transformation to be done. It consists of a structural descriptor, similar to the one on the left side and a llst of label assignments (also between slashes) where for each new iabe] we precise the values it takes; and for each old one, its possible modifications. A point ends the rule. Note the properties of an empty g: if g' is any c-graph, then g.g'=g and g+g'=g'. Let us analyze the phrase: "Ana lista la ti- ra". The representation in our formalism is G7. Morphological analysis produces G8. Note that a11 ambiguities are kept in the same structure in the form of para]]e] arcs. The application of GR to G8 results in Gg, where each arc will be labelled with a c-tree with a possib]e interpretation of G8 in grammar GR. The sequence of applications is R3, R6, RS, RI, R2, R4. The system stops when. no more rules are applicab]e. G7= e Ana ^ . . . . . lista _ la _^ tira :o GS= Ana C np el 1 isto \ ad t i tar lo pron , where AI=PHRA(GN(NP(Ana), AD(listo)), GV(PRON(Io), VB(tirar))) A2=PHRA(NP(Ana), PRED(VB(IIstar, GN(ART(eI), NM(tira)))) Operations are divided in two classes: (1) those where the structure is taken as a whole (glo~ a]) and (2) those that transform substructures (local), I. Global Operations. Concatenation and alternation have been de- fined above. These operations produce sequentlaI c-graphs and bundles respectively, as well as the polynomia] writing of regular c-graphs. Expansion. This operation produces a bundle exp(G) from all the roads of a c-graph G. For exam- ple, expansion of GIO produces exp(G10)=(b.f)+ (c.d.f)+(c.e). GIO= ~ f exp(G10)= f Fig.6. Expansion of a c-graph. Factorization. There are two kinds and their results may differ. Consider G11=a.b+a.c+d.e+d.f+ g.f+h.e. Left factorlzation produces G12=a.(b+c)+ d.(e+f)+g.f+h.e, and right factorization G13=a.b+ a. c+ (d+h). e+ (d+g). f. Arborization. This operation constructs a c-tree from a c-graph. There may be several kinds of c-trees that can be constructed but we search for a tree that keeps vertical and horizontal or- ders, i.e. one that codes the structure of the c-graph. An "and-or" (y-o) tree is well suited for this purpose. The result of the operation will be a c-graph with one and only one arc labelled by the and-or tree. For example, arb(G)=G14 (cf. Fig. 7). Note that the non-regular seg has ~ as a root. Regular seg's have o. G14= C ~ :O , where A= y (o (y (a) ,y (b) ,y (h)) ,a (y (b,f) ,y (c,d, f), y (c,e)),o(g,y (i ,o(j ,k))) Fig.7. Arborization of G. Fig.5. Example of sentence analysis. 3.3 Operations. 2. Local Operations. Replacement. Given two c-graphs G and G",this operation substitutes a seg G' in G for G", e.g. if G=G4, G"=m+n and G'=i, then the result will be 63 G 15=g+ (re+n) : (j+k). Addition. This operation inserts a c-graph G' into another, G, by merging two distinct nodes (x, y) of G with the input and output of G'. Addition requires only that insertion does not produce cy- cles. Note that if (I,0) are taken as a couple of nodes, we have alternation. 
Example, let (2,3) be a couple of nodes of G16 and take G'=G17=s+u. The resulting c-graph is G18. c G16=c ---c i 2 3 5 c GI8= c i 2 Fig.8. Addition of a c-graph. Erasing. This eliminates a substructure G' of a c-graph G. Erasing may destroy the structure even if we work with isolated seg's. Consequently, it is only defined on particular classes of seg's, namely segfi's and segmi's. For any other substruc- ture, we eliminate the smaller segmi that contains it. A special case constitutes a segfi G' such that I and 0 do not belong to G'. Eliminating G' in such a case produces two non-connecting nodes in the c-graph that we have chosen to merge to pre- serve homogeneity. Example: let us take G and G'= GIO, then the result of erasing GIO from G is G19= G2.G4. 4. IMPLEMENTATION. A small system has been programmed in PROLOG [4] (mainly operations) and in PASCAL (TS and RSC). For the first approach, we chose regular c-graphs to work with, since there is always a string to represent a c-graph of this class. In its present state, the system has two parts: (1) the Transformational System including the rewriting system and (2) the set of local and global operations. The TS is interactive. It consists of an ana- lyzer that verifies the structure of the TS given as a console input and of the TS proper. As data we have the console input and a segment composed of transition schemes. There are no finer controls for different modes of grammar execution. Regarding operations and from a methodological point of vlew, algorithms for c-graph treatment can be divided in two classes: (I) the one where we search for substructures and (2) the one where this search is not needed. Obviously, local operations belong to the first class, but among global opera- tions, only concatenation, alternation and expan- sion belong to the second one. Detailed description of algorithms of this part Of ~he system can be found in [4]. 5. CONCLUSION. Once we have an operational version of the prototype, it is intended as a first approach to proceed to the translation of assemblers of the microprocessors available in our laboratory such as INTEL's 8085 or 8080 and MOTOROLA's 6800. 6. REFERENCES. I.[I] Boitet, Ch. UN ESSAI DE REPONSE A QUELQUES QUESTIONS THEORIQUES ET PRATIQUES LIEES A LA TRA- DUCTION AUTOMATIQUE. DEFINITION D'UN SYSTEME PROTO- TYPE. Th~se d'Etat. Grenoble. Avril. 1976. 2.[2] Boitet, Ch. AUTOMATIC PRODUCTION OF CF AND CS ANALYSERS USING A GENERAL TREE TRANSDUCER. Rapport de recherche de l'Institut de Math~matiques Appli- qu~es N°218. Grenoble. Novembre. 1979. 3.[4] Clemente-Salazar, M. ETUDES ET ALGORITHMES LIES A UNE NOUVELLE STRUCTURE DE DONNEES EN T.A.: LES E-GRAPHES. Th~se Dr-lng. Grenoble. Mai. 1982. 4.[5] Clemente-Salazar, M. E-GRAPHS: AN INTERESTING DATA STRUCTURE FOR M.T. Paper presented in COLING- 82. Prague. July. 1982. 5.[6] Clemente-Salazar, M. C-GRAPHS: A DATA STRUC- TURE FOR AUTOMATED TRANSLATION. Paper presented in the 26th International Midwest Symposium on Clr- cuits and Systems. Puebla. Mexico. August. 1983. 6.[7] Colmerauer, A. LES SYSTEMES-Q. Universit~ de Montreal.Publication Interne N°43. Septembre. 1970. 7.[9] Kuntzmann, J. THEORIE DES RESEAUX (GRAPHES). Dunod. Paris. 1972. 8.[10] Vauquois, B. LA TRADUCTION AUTOMATIQUE A GRENOBLE. Document de Linguistique Quantitative N°24. Dunod. Paris. 1975. 9.[11] Vauquois, B. ASPECTS OF MECHANICAL TRANSLA- TION IN 1979. Conference for Japan IBM Scientific Program. Document du Groupe d'Etudes pour la Tra- duction Automatique. Grenoble. July. 1979. 
64 | 1984 | 15 |
QUASI-INDEXICAL REFERENCE IN PROPOSITIONAL SEMANTIC NETWORKS

William J. Rapaport
Department of Philosophy, SUNY Fredonia, Fredonia, NY 14063
Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260

Stuart C. Shapiro
Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260

ABSTRACT

We discuss how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself. In particular, we examine the representation of first-person beliefs of others (e.g., the system's representation of a user's belief that he himself is rich). Such beliefs have as an essential component "quasi-indexical pronouns" (e.g., 'he himself'), and, hence, require for their analysis a method of representing these pronominal constructions and performing valid inferences with them. The theoretical justification for the approach to be discussed is the representation of nested "de dicto" beliefs (e.g., the system's belief that user-1 believes that system-2 believes that user-2 is rich). We discuss a computer implementation of these representations using the Semantic Network Processing System (SNePS) and an ATN parser-generator with a question-answering capability.

1. INTRODUCTION

Consider a deductive knowledge-representation system whose data base contains information about various people (e.g., its users), other (perhaps interacting) systems, or even itself. In order for the system to learn more about these entities--to expand its "knowledge" base--it should contain information about the beliefs (or desires, wants, or other cognitive states) of these entities, and it should be able to reason about them (cf. Moore 1977, Creary 1979, Wilks and Bien 1983, Barnden 1983, and Nilsson 1983: 9). Such a data base constitutes the "knowledge" (more accurately, the beliefs) of the system about these entities and about their beliefs.

Among the interrelated issues in knowledge representation that can be raised in such a context are those of multiple reference and the proper treatment of pronouns. For instance, is the person named 'Lucy' whom John believes to be rich the same as the person named 'Lucy' who is believed by the system to be young? How can the system (a) represent the person named 'Lucy' who is an object of its own belief without (b) confusing her with the person named 'Lucy' who is an object of John's belief, yet (c) be able to merge its representations of those two people if it is later determined that they are the same? A solution to this problem turns out to be a side effect of a solution to a subtler problem in pronominal reference, namely, the proper treatment of pronouns occurring within belief-contexts.

2. QUASI-INDICATORS

Following Castañeda (1967: 85), an indicator is a personal or demonstrative pronoun or adverb used to make a strictly demonstrative reference, and a quasi-indicator is an expression within a 'believes-that' context that represents a use of an indicator by another person. Consider the following statement by person A addressed to person B at time t and place p: A says, "I am going to kill you here now." Person C, who overheard this, calls the police and says, "A said to B at t at p that he* was going to kill him* there* then*." The starred words are quasi-indicators representing uses by A of the indicators 'I', 'you', 'here', and 'now'.
There are two properties (among many others) of quasi-indicators that must be taken into account: (i) they occur only within intentional contexts, and (ii) they cannot be replaced salva veritate by any co-referential expressions.

The general question is: "How can we attribute indexical references to others?" (Castañeda 1980: 794). The specific cases that we are concerned with are exemplified in the following scenario. Suppose that John has just been appointed editor of Byte, but that John does not yet know this. Further, suppose that, because of the well-publicized salary accompanying the office of Byte's editor,

(1) John believes that the editor of Byte is rich.

And suppose finally that, because of severe losses in the stock market,

(2) John believes that he himself is not rich.

Suppose that the system had information about each of the following: John's appointment as editor, John's (lack of) knowledge of this appointment, and John's belief about the wealth of the editor. We would not want the system to infer

(3) John believes that he* is rich

because (2) is consistent with the system's information. The 'he himself' in (2) is a quasi-indicator, for (2) is the sentence that we use to express the belief that John would express as 'I am not rich'. Someone pointing to John, saying,

(4) He [i.e., that man there] believes that he* is not rich

could just as well have said (2). The first 'he' in (4) is not a quasi-indicator: it occurs outside the believes-that context, and it can be replaced by 'John' or by 'the editor of Byte', salva veritate. But the 'he*' in (4) and the 'he himself' in (2) could not be thus replaced by 'the editor of Byte' - given our scenario - even though John is the editor of Byte. And if poor John also suffered from amnesia, it could not be replaced by 'John' either.

3. REPRESENTATIONS

Entities such as the Lucy who is the object of John's belief are intentional (mental), hence intensional. (Cf. Frege 1892; Meinong 1904; Castañeda 1972; Rapaport 1978, 1981.) Moreover, the entities represented in the data base are the objects of the system's beliefs, and, so, are also intentional, hence intensional. We represent sentences by means of propositional semantic networks, using the Semantic Network Processing System (SNePS; Shapiro 1979), which treats nodes as representing intensional concepts (cf. Woods 1975, Brachman 1977, Maida and Shapiro 1982).

We claim that in the absence of prior knowledge of co-referentiality, the entities within belief-contexts should be represented separately from entities outside the context that might be co-referential with them. Suppose the system's beliefs include that a person named 'Lucy' is young and that John believes that a (possibly different) person named 'Lucy' is rich. We represent this with the network of Figure 1.

[Fig. 1 (network drawing not reproduced). Lucy is young (m3) and John believes that someone named 'Lucy' is rich (m12).]

The section of network dominated by nodes m7 and m9 is the system's de dicto representation of John's belief. That is, m9 is the system's representation of a belief that John might express by 'Lucy is rich', and it is represented as one of John's beliefs. Such nodes are considered as being in the system's representation of John's "belief space". Non-dominated nodes, such as m14, m12, m15, m5, and m3, are the system's representation of its own belief space (i.e., they can be thought of as the object of an implicit 'I believe that' case-frame; cf. Castañeda 1975: 121-22, Kant 1787: B131).
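The separation principle can be mirrored in a small sketch of a node store in which each intensional entity is indexed by the belief space it lives in. This is our expository reconstruction in a conventional programming notation, not SNePS itself; the class and field names are hypothetical.

class Network:
    """Propositional network: every concept node is intensional and
    belongs to a belief space: ('system',), ('John', 'system'), ..."""

    def __init__(self):
        self.nodes = []   # (node-id, belief-space, properties)

    def node(self, space, **props):
        nid = "m%d" % (len(self.nodes) + 1)
        self.nodes.append((nid, space, props))
        return nid

net = Network()
# The system's own Lucy, believed young:
lucy_sys = net.node(("system",), named="Lucy", young=True)
# A possibly different Lucy inside John's belief space:
lucy_john = net.node(("John", "system"), named="Lucy", rich=True)

# lucy_sys and lucy_john are distinct nodes: absent prior knowledge
# of co-referentiality, the two 'Lucy's are kept separate.

As the text goes on to show, a separate node of co-referentiality can later link the two without rebuilding either belief space.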
If it is later determined that the "two" Lucies are the same, then a node of co-referentiality would be added (m16, in Fig. 2).

[Fig. 2 (network drawing not reproduced). Lucy is young (m3), John believes that someone named 'Lucy' is rich (m15), and John's Lucy is the system's Lucy (m16).]

Now consider the case where the system has no information about the "content" of John's belief, but does have information that John's belief is about the system's Lucy. Thus, whereas John might express his belief as 'Linus's sister is rich', the system would express it as '(Lucy system) is believed by John to be rich' (where '(Lucy system)' is the system's Lucy). This is a de re representation of John's belief, and would be represented by node m12 of Figure 3.

[Fig. 3 (network drawing not reproduced). The system's young Lucy is believed by John to be rich.]

The strategy of separating entities in different belief spaces is needed in order to satisfy the two main properties of quasi-indicators. Consider the possible representations of sentence (3) in Figure 4 (adapted from Maida and Shapiro 1983: 316).

[Fig. 4 (network drawing not reproduced). A representation for "John believes that he* is rich".]

This suffers from three major problems. First, it is ambiguous: it could be the representation of (3) or of

(5) John believes that John is rich.

But, as we have seen, (3) and (5) express quite different propositions; thus, they should be separate items in the data base.

Second, Figure 4 cannot represent (5). For then we would have no easy or uniform way to represent (3) in the case where John does not know that he is named 'John': Figure 4 says that the person (m3) who is named 'John' and who believes m6, believes that that person is rich; and this would be false in the amnesia case.

Third, Figure 4 cannot represent (3) either, for it does not adequately represent the quasi-indexical nature of the 'he' in (3): node m3 represents both 'John' and 'he', hence is both inside and outside the intentional context, contrary to both of the properties of quasi-indicators.

Finally, because of these representational inadequacies, the system would invalidly "infer" (6iii) from (6i)-(6ii):

(6) (i) John believes that he is rich.
    (ii) he = John
    (iii) John believes that John is rich.

simply because premise (6i) would be represented by the same network as conclusion (6iii).

Rather, the general pattern for representing such sentences is illustrated in Figure 5. The 'he*' in the English sentence is represented by node m2; its quasi-indexical nature is represented by means of node m10.

[Fig. 5 (network drawing not reproduced). John believes that he* is rich (m2 is the system's representation of John's "self-concept", expressed by John as 'I' and by the system as 'he*').]

That nodes m2 and m5 must be distinct follows from our separation principle. But, since m2 is the system's representation of John's representation of himself, it must be within the system's representation of John's belief space; this is accomplished via nodes m10 and m9, representing John's belief that m2 is his "self-representation". Node m9, with its EGO arc to m2, represents, roughly, the proposition 'm2 is me'.

Our representation of quasi-indexical de se sentences is thus a special case of the general schema for de dicto representations of belief sentences. When a de se sentence is interpreted de re, it does not contain quasi-indicators, and can be handled by the general schema for de re representations. Thus,

(7) John is believed by himself to be rich

would be represented by the network of Figure 4.
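Figure 5's pattern (a separate node for the believer's self-concept, tied to the believer by an EGO-style proposition) can be sketched as follows. This is our own rendering, continuing the Network sketch above; the node names follow the figure, but the data-structure choices are hypothetical.

# Representing "John believes that he* is rich" in the style of Fig. 5.
net2 = Network()
john = net2.node(("system",), named="John")          # m5: the system's John
self_of_john = net2.node(("John", "system"))         # m2: John's self-concept
ego = net2.node(("John", "system"),                  # m9/m10: 'm2 is me',
                ego=self_of_john)                    #   believed by John
rich = net2.node(("John", "system"),                 # the believed proposition:
                 which=self_of_john, adj="rich")     #   'he* is rich'

# john and self_of_john are distinct nodes, so this network is
# different from the one for "John believes that John is rich",
# and the invalid inference (6) is blocked.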
4. INFERENCES

Using an ATN parser-generator with a question-answering capability (based on Shapiro 1982), we are implementing a system that parses English sentences representing beliefs de re or de dicto into our semantic-network representations, and that generates appropriate sentences from such networks.

It also "recognizes" the invalidity of arguments such as (6), since the premise and conclusion (when interpreted de dicto) are no longer represented by the same network. When given an appropriate inference rule, however, the system will treat as valid such inferences as the following:

(8) (i) John believes that the editor of Byte is rich.
    (ii) John believes that he* is the editor of Byte.
    Therefore, (iii) John believes that he* is rich.

In this case, an appropriate inference rule would be:

(9) (∀x,y,z,F)[x believes F(y) & x believes z=y -> x believes F(z)]

In SNePS, inference rules are treated as propositions represented by nodes in the network. Thus, the network for (9) would be built by the SNePS User Language command given in Figure 6 (cf. Shapiro 1979).

(build avb ($x $y $z $F)
  &ant (build agent *x
              verb (build lex believe)
              object (build which *y adj (build lex *F)))
  &ant (build agent *x
              verb (find lex believe)
              object (build equiv *z equiv *y))
  cq (build agent *x
            verb (find lex believe)
            object (build which *z adj (find lex *F))))

Fig. 6. SNePSUL command for building rule (9), for argument (8).

5. ITERATED BELIEF CONTEXTS

Our system can also handle sentences involving iterated belief contexts. Consider

(10) John believes that Mary believes that Lucy is rich.

The interpretation of this that we are most interested in representing treats (10) as the system's de dicto representation of John's de dicto representation of Mary's belief that Lucy is rich. On this interpretation, we need to represent the system's John--(John system)--the system's representation of John's Mary--(Mary John system)--and the system's representation of John's representation of Mary's Lucy--(Lucy Mary John system). This is done by the network of Figure 7.

[Fig. 7 (network drawing not reproduced). John believes that Mary believes that Lucy is rich.]

Such a network is built recursively as follows: the parser maintains a stack of "believers". Each time a belief-sentence is parsed, it is made the object of a belief of the previous believer in the stack. Structure-sharing is used wherever possible. Thus,

(11) John believes that Mary believes that Lucy is sweet
From now on, all proper- ties of "either" Lucy will be inherited by the "'other", by means of an inference rule for the EQUIV case-frame (roughly, the indiscernibility of id___@enticals). It]We are assumin B that tile system's concept of sweetness (node me) is also the system's concept of (Lucy system)'s concept of sweetness. This as- sumption seems warranted, since all nodes are in the system's belief space. If the system had rea- son to believe that its concept of sweetness dif- fered from Lucy's, this could--and would have to-- be represented. 68 Fig. 8. Lucy believes that Lucy is sweet. I \ Fig. 9. Lucy believes that Lucy is sweet, and Lucy (the believer) is sweet. i. FUTURE WORK There are several directions for future modifications. First, the node-merging mechanism of the EQUIV case-frame with its associated rule needs to be generalized: Its current interpreta- tion is co-referentiality; but if the sequence (12)-(14) were embedded in someone else's belief- space, then co-referentiality might be incorrect. What is needed is a notion of "co-refere~tiality- within-a-belief-space'. The relation of consoct- ation" (Casta~eda 1972) seems to be more appropri- ate. Second, the system needs to be much more flexible. Currently, it treats all sentences of the form (15) x believes that F(y) as canonically de dicto and all sentences of the form (16) y is believed by x to be F Fig. I0. Lucy believes that Lucy is sweet, Lucy is sweet, and the system's Lucy is Lucy's Lucy. as canonically de re. In ordinary conversation, however, both sentences can be understood in either way, depending on context, including prior beliefs as well as idiosyncracies of particular predicates. For instance, given (I), above, and the fact that John is the editor of Byte, most people would infer (3). But given (17) John believes that all identical twins are conceited. (18) Unknown to John, he is an identical twin most people would not infer (19) John believes that he* is conceited. Thus, we want to allow the system to make the most "reasonable" interpretations (de re vs. de d£cto) of users' belief-reports, based on prior beliefs and on subject matter, and to modify its initial representation as more information is received. SUNIqARY A deductive knowledge-representation system that is to be able to reason about the beliefs of cog- nitive agents must have a scheme for representing beliefs. This scheme must be able to distinguish among the "belief spaces" of different agents, as yell as be able to handle "nested belief spaces", i.e., second-order beliefs such as the beliefs of one agent about the beliefs of another. We have shown how a scheme for representing beliefs as either de re or de d£cto can distinguish the items in different belief spa~es (including nested belief spaces), yet merge the items on the basis of new information. This general scheme also enables the system to adequately represent sen- tences containing quasi-indicators, while not allowing invalid inferences to be drawn from them. 69 REFERENCES J. A. Barnden, "Intensions as Such: An Outline," IJCAI-83 (1983)280-286. R. J. Brachman, "What's in a Concept: Structural Foundations for Semantic Networks," Interna- tional Journal for Man-Machine Studies 9(1977)127-52. Hector-Neri Casta~eda, "Indicators and Quasi- Indicators," ~ Philosoohical Ouarterlv 4(1967)85-100. __, "Thinking and the Structure of the World" (1972), Philosoohia 4(1974)3-40. "Identity and Sameness," PhilosoDhia 5~1975)121-50. 
__, "Reference, Reality and Perceptual Fields," Proceedings and Addresses of the ~erican ~hilosophical Association 53(1980)763-823. L. G. Creary, "Propositional Attitudes: Fregean Representation and Simulative Reasoning," IJCAI-79 (1979)176-81. Gottlob Frege, "On Sense and Reference" (1892), in Translations from the Philosophical Writings of ~ottlob Fre~e, ed. by P. Geach and M. Black (Oxford: Basil Blackwell, 1970): 56-78. Immanuel Kant, Critique of Pure Reason, 2nd ed. (1787), trans. N. Kemp Smith (New York: St. Martin's Press, 1929). Anthony S. Maida and Stuart C. Shapiro, "Inten- sional Concepts in Propositional Semantic Net- works." Cognitive Science 6(1982)291-330. Alexius Meinong, -Ueber Gegenstandstheorie" (1904), in Alexius Meinon~ Gesamtaus~ahe, Vol. II, ed. R. Haller (Graz, Austria: Akademische Druck- u. Verlagsanstalt, 1971): 481-535. English translation in R. Chisholm (ed.), Real- ism and the Background of Phenomenolo~y (New York: Free Press, 1960): 76-117. R. C. Moore, "'Reasoning about Knowledge and Action," IJCAI-77 (1977)223-27. Nils J. Nilsson, "Artificial Intelligence Prepares for 2001," AI Ma~azine 4.4(Winter 1983)7-14. William J. Rapaport, "Meinongian Theories and a Russellian Paradox," NoGs 12(1978)153-80; errata, 13(1979)125. __, "How to Make the World Fit Our Language: An Essay in Meinongian Semantics," Grazer Philoso- nhische Studien 14(1981)I-21. Stuart C. Shapiro, "The SNePS Semantic Network Processing System," in N. V. Findler (ed.), Associative Networks (New York: Academic Press, 1979): 179-203. __, "Generalized Augmented Transition Network Grammars For Generation From Semantic Networks," ~ Journal of ~ Linguistics 8(1982)12-25. Yorick Wilks and Janusz Bien, "Beliefs, Points of View, and Multiple Environments," Cognitive Sci- ence 7(1983)95-119. William A. Woods, "'What's in a Link: The Semantics of Semantic Networks," in D. G. Bobrow and A. M. Collins (eds.), Reuresentation and ~ (New York: Academic Press, 1975): 35-79. 70 | 1984 | 16 |
The Costs of Inheritance in Semantic Networks

Rob't F. Simmons
The University of Texas, Austin

Abstract

Questioning texts represented in semantic relations[1] requires the recognition that synonyms, instances, and hyponyms may all satisfy a questioned term. A basic procedure for accomplishing such loose matching using inheritance from a taxonomic organization of the dictionary is defined in analogy with the unification algorithm used for theorem proving, and the costs of its application are analyzed. It is concluded that inheritance logic can profitably be included in the basic questioning procedure.

AI Handbook Study

In studying the process of answering questions from fifty pages of the AI Handbook, it is striking that such subsections as those describing problem representations are organized so as to define conceptual dictionary entries for the terms. First, class definitions are offered and their terms defined; then examples are given and the computational terms of the definitions are instantiated. Finally the technique described is applied to examples and redefined mathematically. Organizing these texts (by hand) into coherent hierarchic structures of discourse results in very usable conceptual dictionary definitions that are related by taxonomic and partitive relations, leaving gaps only for non-technical terms. For example, in "give snapshots of the state of the problem at various stages in its solution," terms such as 'state', 'problem', and 'solution' are defined by the text, while 'give', 'snapshots', and 'stages' are not.

Our first studies in representing and questioning this text have used semantic networks with a minimal number of case arcs to represent the sentences, and Superset/Instance and *Of/Has arcs to represent, respectively, taxonomic and partitive relations between concepts. Equivalence arcs are also used to represent certain relations signified by uses of "is" and apposition, and *AND and *OR arcs represent conjunction.

[1] Supported by NSF Grant IST 8200976.

Since June 1982, eight question-answering systems have been written, some in procedural logic and some in compilable ELISP. Although we have so far studied questioning and data manipulation operations on about 40 pages of the text, the detailed study of inheritance costs discussed in this paper was based on 170 semantic relations (SRs), represented by 733 binary relations each composed of a node-arc-node triple. In this study the only inference rules used were those needed to obtain transitive closure for inheritance, but in other studies of this text a great deal of power is gained by using general inference rules for paraphrasing the question into the terms given by an answering text. The use of paraphrastic inference rules is computationally expensive and is discussed elsewhere [Simmons 1983].

The text-knowledge base is constructed either as a set of triples using subscripted words, or by establishing node-numbers whose values are the complete SR and indexing these by the first element of every SR. The latter form, shown in Figure 1, occupies only about a third of the space that the triples require, and neither form is clearly computationally better than the other. The first experiments with this text-knowledge base showed that the cost of following inheritance arcs, i.e. obtaining taxonomic closures for concepts, was very high; some questions required as much as a minute of central processor time.
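The two storage forms just described can be pictured with a small sketch. The node names and arcs follow Figure 1 of this paper, while the container choices and function names are ours for illustration.

# The SR for node N137 of Figure 1, stored whole under its node
# number, versus the same information flattened into triples.
SR = {"N137": [("SUP", "N101"), ("HAS", "N138"),
               ("EG", "N139"), ("SNT", "C100")]}
WORD = {"N137": "REPRESENTATION"}

def triples(sr):
    """Flatten a node-indexed SR into node-arc-node triples."""
    for node, pairs in sr.items():
        for arc, value in pairs:
            yield (node, arc, value)

# Indexing the triples by their first element recovers the compact
# form's access path, at roughly three times the storage.
INDEX = {}
for node, arc, value in triples(SR):
    INDEX.setdefault(node, []).append((arc, value))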
As a result it was necessary to analyze the process and to develop an understanding that would minimize any redundant computation. Our current system for questioning this fragment knowledge base has reduced the computation time to the range of 1/2 to less than 15 seconds per question in uncompiled ELISP on a DEC 2060. I believe the approach taken in this study is of particular interest to researchers who plan to use the taxonomic structure of ordinary dictionaries in support of natural language processing operations. Beginning with studies made in 1075 [Simmons and Chester, 1077] it was apparent to us that question-answering could be viewed profitably as a specialized form of theorem proving that 71 Example SR representation for a sentence: (C100 A STATE-SPACE REPRESENTATION OF A PROBLEM EMPLEYS TWO KINDS OF ENTITIES: STATES, WHICH ARE DATA STRUCIURES GMNG • SNAPSHOTS" OF THE CONDITION OF THE PROBLEM AT EACH STAGE OF ITS SOLUTION, AND OPERATORS. WHICH ARE ~Y_ANS FOR TRANSFORMING THE PROBLEM FROM ONE STATE TO ANOTHER) (N137 (N138 (N140 (N142 (N143 (N144 (N146 (N145 (N147 (N141 (N148 (N149 (Nl~ (Ni~ (REPRESENTATION SUP N101 HAS N138 EG N139 SNT C100)) (ENTITY NBR PL QTY 2. INST N140 INST N141SNT C100)) (STATE NBR PL ~ N142 SNT CI00)) (STRUCTURE *OF DATA INSTR* N143 SNT C100)) (GIVE TNS PRES INSTR N142AE N144 vAT N145 SNT CLOG)) (SNAPSI~3T NBR PL *OF N146 SNT C100)) (PROBLEM NBR SING HAS N145 SUP N79 SNT C100)) (STAGE NBR PL IDENT VARI~J3 *OF N147 SNT C100)) (SOLUTION NBR SING SNT C100)) (OPERATOR NBR PLEQUIV* N148 SNT C100)) (PROCEDURE NBR PL INSTR* N149 SNT C100)) (TRANSFORM TNS PRESAE N146 *FROM N164 *TO N165 SNT C100)) (STATE NBR SING IDENT ONE 5~JP N140 SNT C100)) (STATE NBR SING IDENT ANOTHER SUP N140 SNT CI00)) Example of SR representation of the question, =How many entities are used in the state-space representation of a problem? = (REPRESENTATION *OF (STATE-SPACE *OF PROBLE24) HAS (ENTITY CITY YO) Figure 1. Representation of Sem~tlc Relations Query Triple: Match Candid. AR B + + + + means a match by unlficatlon. ++ C (CLOSABCB) + + C (CLOSCF R C B) + R1 + (SYNONYM R R1) B R1 A (CO~ R R1) C + ÷ (CLOSAB C A) where CLOSAB stands for Abstractive Closure and is defined in procedural logic (where the symbol < is shorthand for the reversed implication sign <--, i.e. P < Q S is equivalent to Q " S --> P): (CLOSAB NI N2) < (OR CINST NI N2) (SUP N1 N2)) (INST N1 N2) < (OR (NI INST N2) (N1 ~ * N2)) (INST N1N2) < (INST N1X)(INSTX N2) (SUP Ni N2) < (OR (Ni E~U£V N2)(Ni SUP N2)) (SUP NI N2) < (SUP NI X)(SUPX N2) CLOSCP stands for Complex Product Closure and is defined as (CLOSCP R N1N2) < (TRANSITIVE R)(NI R N2) =N1R N2 is the new A R B" (CLOSCP R N1N2) < (NI ~OF N2)*~ (CLOSCF R N1N2) < (NI LOC N2)** (CLOSCF R NI N2) < (NI *AND N2) (CLOSCP R N1N2) < (NI *OR N2) ** These two relations turn out not to be universally true complex products; they only give answers that are possibly true, so they have been dropped for most question answering applications. Figure 2. Conditions for MatchLug Question and Candidate Triples 72 used taxonomic connections to recognize synonymic terms in a question and a candidate answer. A procedural logic question-answerer was later developed and specialized to understanding a story about the flight of a rocket [Simmons 1084, Simmons and Chester, 1982, Levine 1980]. Although it was effective in answering a wide range c,f ordinary questions, we were disturbed at the m,~gnitude of computation that was sometimes required. 
This led us to the challenge of developing a system that would work effectively with large bodies of text, particularly the AI Iiandbook. The choice of this text proved fortunate in that it provided experience with m~my taxonomic and partitive relations that were essential to an.~wering a test sample of questions. This hrief paper offers an initial description of a basic proccs.~ for questioning such a text and an analysis of the cost of using such a procedure. It is clear that the technique and analysis apply to any use of the English dictionary where definitions are encoded in semantic ne{ works. Relaxed Unification for Matching Semantlc Relations In the unification algorithm, two n-tuples, nl and n °, unify if Arity(nl) ~ Arity(n2) and if every element in nl matches an element in n2. Two elements el and e2 match if el or e2 is a variable, or if el ~-- e2, or in the case that el and e2 are lists of the same length, each of the elements of el matches a corresponding element of e2. Since semantic relations (SRs) are unordered lists of binary relations that vary in length and since a question representation (SRq) can be answered by a sentence candidate (SRc) that includes more information than the question specified, the Arity constraint i~ revised to Arity(SRq} Less/Equal Arity(SRc}. The primitive elements of SRs include words, arcnames, variables and constants. Arcnames and words are organized taxonomically, and words are further organized by the discourse structures in which they occur. One or more element 6f taxonomic or discourse structure may imply others. Words in general can be viewed as restricted variables whose values can be any other word on an acceptable inference path (usually taxonomic) that joins them. The matching constraints of unification can thus be relaxed by allowing two terms to match if one implies the other in a taxonomic closure. The matching procedure is further adapted to read SRs effectively as unordered lists of triples and to seek for each triple ill SRq a corresponding one in SRc. The two SRs below match because Head matches Head, Arcl matches Arcl, Vail matches Vall, etc. even though they are not given in the same order. SRq (Head Arcl Vail, Arc2 Val2, ..., Arcn Vain) SRc (Head Arc2 Val2, Arcl Vail, ..., Arch Vain) The SR may be represented (actually or virtually) as a list of triples as follows: SRq ((Head Arcl Vail) (Head Arc2 Val2) ..., (Head Arcn Vain}) Two triples match in Relaxed Unification according (at least) to the conditions shown in Figure 2. The query triple, A R B may match the candidate giving + + + to signify that all three elements unified. If the first two elements match, the third may be matched using the procedures CLOSAB or CLOSCP to relate the .non- matching C with the question term B by discovering that B is either in the abstractive closure or the complex product closure of C. The abstractive closure of an element is the set of all triples that can be reached by following separately the SUP and EQUIV arcs and the INST and EQUIV* arcs. The complex product closure is the set of triples that can be reached by following a set of generally transitive arcs (not including the abstractive ones). The arc of the question may have a synonym or a converse and so develop alternative questions, and additional questions may be derived by asking such terms as C R B that include the question term A in their • abstractive closure. Both closure procedures should be limited to n-step paths where n is a value between 3 and 6. 
Computational Cost

In the above recursive definition the cost is not immediately obvious. If it is mapped onto a graphic representation in semantic network form, it is possible to see some of its implications. Essentially the procedure first seeks a direct match between a question term and a candidate answer; if the match fails, the abstractive closure arcs, SUP, INST, EQUIV, and EQUIV* may lead to a new candidate that does match. If these fail, then complex product arcs, *OF, HAS, LOC, AND, and OR may lead to a matching value. The graph below outlines the essence of the procedure.

A---R---B---SUP-----Q
        |---INST----Q
        |---EQUIV---Q
        |---EQUIV*--Q
        |---*AND----Q
        |---*OR-----Q
        |---LOC-----Q
        |---*OF-----Q
        |---HAS-----Q

This graph shows nine possible complex product paths to follow in seeking a match between B and Q. If we allow each path to extend N steps such that each step has the same number of possible paths, then the worst case computation, assuming each candidate SR has all the arcs, is of the order 9 raised to the Nth. If the A term of the question also has these possibilities, and the R term has a synonym, then there appear to be 2*2*9**N possible candidates for answers. The first factor of 2 reflects the converse by assigning the A term 9**N paths. Assuming only one synonym, each of two R terms might lead to a B via any of 9 paths, giving the second factor of 2. If the query arc is also transitive, then the power factor 9 is increased by one. In fact, SRs representing ordinary text appear to have less than an average of 3 possible CP paths, so something like 2*3**N seems to be the average cost. So if N is limited to 3 there are about 2*81=162 candidates to be examined for each subquestion. These are merely rough estimates, but if the question is composed of 5 subquestions, we might expect to examine something on the order of a thousand candidates in a complete search for the answer. Fortunately, this is accomplished in a few seconds of computation time.

The length of transitive path is also of importance for two other reasons. First, most of the CP arcs lead only to probable inference. Even superset and instance are really only highly probable indicators of equivalence, while LOC, HAS, and *OF are even less certain. Thus if the probability of truth of match is less than one for each step, the number of steps that can reasonably be taken must be sharply limited. Second, it is the case empirically that the great majority of answers to questions are found with short paths of inference. In one all-answers version of the QA-system, we found a puzzling phenomenon in that all of the answers were typically found in the first fifteen seconds of computation although the exploration continued for up to 50 seconds. Our current hypothesis is that the likelihood of discovering an answer falls off rapidly as the length of the inference path increases.

Discussion

It is important to note that this experiment was solely concerned with the simple levels of inference concerned in inheritance from a taxonomic structure. It shows that this class of inference can be embedded profitably in a procedure for relaxed unification. In addition it allows us to state rules of inference in the form of semantic relations. For example we know that the commander of troops is responsible for the outcome of their battles. So if we know that Cornwallis commanded an army and the army lost a battle, then we can conclude correctly that Cornwallis lost the battle.
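The estimates in this section are simple enough to recompute directly; the short fragment below does the arithmetic, with the branching factors and bounds taken straight from the text (the function name and packaging are incidental assumptions).

def candidate_count(branching, depth, r_terms=2, a_term_factor=2):
    """Worst-case candidates: 2 * 2 * branching**depth, as derived above."""
    return a_term_factor * r_terms * branching ** depth

print(candidate_count(9, 3))    # worst case with all nine arcs: 2916
print(2 * 81)                   # the cited average-case figure: 162
print(5 * 2 * 81)               # five subquestions: 810, "on the order of a thousand"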
An SR inference rule to this effect is shown below:

Rule Axiom:
((LOSE AGT X AE Y) <-
 (SUP X COMMANDER)
 (SUP Y BATTLE)
 (COMMAND AGT X AE W)
 (SUP W MILITARY-GROUP)
 (LOSE AGT W AE Y))

Text Axioms:
((COMMAND AGT CORNWALLIS AE (ARMY MOD BRITISH)))
((LOSE AGT (ARMY MOD BRITISH) AE (BATTLE *OF YORKTOWN)))
((CORNWALLIS SUP COMMANDER))
((ARMY SUP MILITARY-GROUP))
((YORKTOWN SUP BATTLE))

Theorem:
((LOSE AGT CORNWALLIS AE (BATTLE *OF YORKTOWN)))

The relaxed unification procedure described earlier allows us to match the theorem with the consequent of the rule, which is then proved if its antecedents are proved. It can be noticed that what is being accomplished is the definition of a theorem prover for the loosely ordered logic of semantic relations. We have used such rules for answering questions of the AI Handbook text, but have not yet determined whether the cost of using such rules with relaxed unification can be justified (or whether some theoretically less appealing compilation is needed).

References

Levine, Sharon, Questioning English Text with Clausal Logic, Univ. of Texas, Dept. Comp. Sci., Thesis, 1980.

Simmons, R.F., Computations from the English, Prentice-Hall, New Jersey, 1984.

Simmons, R.F., A Text Knowledge Base for the AI Handbook, Univ. of Texas, Dept. of Comp. Sci., TR-83-24, 1983.

Simmons, R.F., and Chester, D.L., Inferences in quantified semantic networks, Proc. 5th Int. Jt. Conf. Art. Intell., Stanford, 1977.
FUNCTIONAL UNIFICATION GRAMMAR: A FORMALISM FOR MACHINE TRANSLATION

Martin Kay
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California 94304
and CSLI, Stanford

Abstract

Functional Unification Grammar provides an opportunity to encompass within one formalism and computational system the parts of machine translation systems that have usually been treated separately, notably analysis, transfer, and synthesis. Many of the advantages of this formalism come from the fact that it is monotonic, allowing data structures to grow differently as different nondeterministic alternatives in a computation are pursued, but never to be modified in any way. A striking feature of this system is that it is fundamentally reversible, allowing a to translate as b only if b could translate as a.

I Overview

A. Machine Translation

A classical translating machine stands with one foot on the input text and one on the output. The input text is analyzed by the components of the machine that make up the left leg, each one feeding information into the one above it. Information is passed from component to component down the right leg to construct the output text. The components of each leg correspond to the chapters of an introductory textbook on linguistics, with phonology or graphology at the bottom, then syntax, semantics, and so on. The legs join where languages are no longer differentiated and linguistics shades off into psychology and philosophy. The higher levels are also the ones whose theoretical underpinnings are less well known, and system designers therefore often tie the legs together somewhere lower down, constructing a more or less ad hoc bridge, pivot, or transfer component.

We cannot be sure that the classical design is the right design, or the best design, for a translating machine. But it does have several strong points. Since the structure of the components is grounded in linguistic theory, it is possible to divide each of these components into two parts: a formal description of the relevant facts about the language, and an interpreter of the formalism. The formal description is data whereas the interpreter is program. The formal description should ideally serve the needs of synthesis and analysis indifferently. On the other hand we would expect different interpreters to be required in the two legs of the machine. We expect to be able to use identical interpreters in corresponding places in all machines of similar design because the information they embody comes from general linguistic theory and not from particular languages. The scheme therefore has the advantage of modularity. The linguistic descriptions are independent of the leg of the machine they are used in and the programs are independent of the languages to which they are applied.

For all the advantages of the classical design, it is not hard to imagine improvements. In the best of all possible worlds, there would only be one formalism in which all the facts about a language--morphological, syntactic, semantic, or whatever--could be stated. A formalism powerful enough to accommodate the various different kinds of linguistic phenomena with equal facility might be unappealing to theoretical linguists because powerful formal systems do not make powerful claims. But the engineering advantages are clear to see. A single formalism would straightforwardly reduce the number of interpreters to two, one for analysis and one for synthesis.
Furthermore, the explanatory value of a theory clearly rests on a great deal more than the restrictiveness of its formal base. In particular, the possibility of encompassing what had hitherto been thought to require altogether different kinds of treatment within a single framework could be theoretically interesting.

Another clear improvement on the classical design would result from merging the two interpreters associated with a formalism. The most obvious advantage to be hoped for with this move would be that the overall structure of the translating machine would be greatly simplified, though this would not necessarily happen. It is also reasonable to hope that the machine would be more robust, easier to modify and maintain, and altogether more perspicuous. This is because a device to which analysis and synthesis look essentially the same is one that is fundamentally less time dependent, with fewer internal variables and states; it is apt to work by monitoring constraints laid down in the formal description and ensuring that they are maintained, rather than carrying out long and complex sequences of steps in a carefully prescribed order.

These advantages are available in large measure through a class of formal devices that are slowly gaining acceptance in linguistics and which are based on the relations contracted by formal objects rather than by transformations of one formal object into another. These systems are all procedurally monotonic in the sense that, while new information may be added to existing data structures, possibly different information on different branches of a nondeterministic process, nothing is ever deleted or changed. As a result, the particular order in which elementary events take place is of little importance. Lexical Functional Grammar and Generalized Phrase-Structure Grammar share these relational and monotonic properties. They are also characteristics of Functional Unification Grammar (FUG), which I believe also has additional properties that suit it particularly well to the needs of experimental machine-translation systems.

The term experimental must be taken quite seriously here though, if my view of machine translation were more generally held, it would be redundant. I believe that all machine translation of natural languages is experimental and that he who claims otherwise does his more serious colleagues a serious disservice. I should not wish anything that I say in this paper to be taken as a claim to have solved any of the myriad problems that stand between us and working machine translation systems worthy of the name. The contribution that FUG might make is, I believe, a great deal more modest, namely to reformalize more simply and perspicuously what has been done before and which has come to be regarded, as I said at the outset, as 'classical'.

B. Functional Unification Grammar

FUG traffics in descriptions and there is essentially only one kind of description, whether for lexical items, phrases, sentences, or entire languages. Descriptions do not distinguish among levels in the linguistic hierarchy. This is not to say that the distinctions among the levels are unreal or that a linguist working with the formalism would not respect them. It means only that the notation and its interpretation are always uniform. Either a pair of descriptions is incompatible or they are combinable into a single description.
Within FUG, every object has infinitely many descriptions, though a given grammar partitions the descriptions of the words and phrases in its language into a finite number of equivalence classes, one for each interpretation that the grammar assigns to it. The members of an equivalence class differ along dimensions that are grammatically irrelevant--when they were uttered, whether they amused Queen Victoria, or whether they contain a prime number of words. Each equivalence class constitutes a lattice with just one member that contains none of these grammatically irrelevant properties, and this canonical member is the only one a linguist would normally concern himself with. However, a grammatical irrelevancy that acquires relevance in the present context is the description of possible translations of a word or phrase, or of one of its interpretations, in one or more other languages.

A description is an expression over an essentially arbitrary basic vocabulary. The relations among sets of descriptions therefore remain unchanged under one-for-one mappings of their basic vocabularies. It is therefore possible to arrange that different grammars share no terms except for possible quotations from the languages described. Canonical descriptions of a pair of sentences in different languages according to grammars that shared no terms could always be unified into a single description which would, of course, not be canonical. Since all pairs are unifiable, the relation that they establish between sentences is entirely arbitrary. However, a third grammar can be written that unifies with these combined descriptions only if the sentences they describe in the two languages stand in a certain relation to one another. The relation we are interested in is, of course, the translation relation which, for the purposes of the kind of experimental system I have in mind, I take to be definable even for isolated sentences. Such a transfer grammar can readily capture all the components of the translation relation that have in fact been built into translation systems: correspondences between words and continuous or discontinuous phrases, use of selectional features or local contexts, case frames, reordering rules, lexical functions, compositional semantics, and so on.

II The Formalism

A. Functional Descriptions

In FUG, linguistic objects are represented by functional descriptions (FDs). The basic constituent of a functional description is a feature consisting of an attribute and an associated value. We write features in the form a = v, where a is the attribute and v, the value. Attributes are arbitrary words with no significant internal structure. Values can be of various types, the simplest of which is an atomic value, also an arbitrary word. So Cat = S is a feature of the most elementary type. It appears in the descriptions of sentences, and declares that their Category is S. The only kinds of non-atomic values that will concern us here are constituent sets, patterns and FDs themselves.

A FD is a Boolean expression over features. We distinguish conjuncts from disjuncts by the kinds of brackets used to enclose their members; the conjuncts and disjuncts of a = p, b = q, and c = r are written

[a = p        {a = p
 b = q   and   b = q
 c = r]        c = r}

respectively. The vertical arrangement of these expressions has proved convenient and it is of minor importance in that braces of the ordinary variety are used for a different purpose in FUG, namely to enclose the members of constituent sets.
The following FD describes all sentences whose subject is a singular noun phrase in the nominative or accusative cases:

(1)  [Cat = S
      Subj = [Cat = NP
              Num = Sing
              {Case = Nom
               Case = Acc}]]

It is a crucial property of FDs that no attribute should figure more than once in any conjunct, though a given attribute may appear in feature lists that are themselves the values of different attributes. This being the case, it is always possible to identify a given conjunct or disjunct in a FD by giving a sequence of attributes (a1 ... ak). a1 is an attribute in the FD whose value, v1, is another FD. The attribute a2 is an attribute in v1 whose value is an FD, and so on. Sequences of attributes of this kind are referred to as paths. If the FD contains disjuncts, then the value identified by the path will naturally also be a disjunct. We sometimes write a path as the value of an attribute to indicate that that value of that attribute is not only equal to the value identified by the path but that these values are one and the same, in short, that they are unified in a sense soon to be explained. Roughly, if more information were acquired about one of the values so that more features were added to it, the same additions would be reflected in the other value. This would not automatically happen because a pair of values happened to be the same. So, for example, if the topic of the sentence were also its object, we might write

[Object = v
 Topic = (Object)]

where v is some FD.

Constituent sets are sets of paths identifying within a given FD the descriptions of its constituents in the sense of phrase-structure grammar. No constituent set is specified in example (1) above and the question of whether the subject is a constituent is therefore left open.

Example (2), though still artificially simple, is more realistic. It is a syntactic description of the sentence John knows Mary. Perhaps the most striking property of this description is that descriptions of constituents are embedded one inside another, even though the constituents themselves are not so embedded. The value of the Head attribute describes a constituent of the sentence, a fact which is declared in the value of the CSet attribute. We also see that the sentence has a second attribute whose description is to be found as the value of the Subject of the Head of the Head of the sentence. The reason for this arrangement will become clear shortly.

In example (2), every conjunct in which the CSet attribute has a value other than NONE also has a substantive value for the attribute Pat. The value of this attribute is a regular expression over paths which restricts the order in which the constituents must appear. By convention, if no pattern is given for a description which nevertheless does have constituents, they may occur in any order. We shall have more to say about patterns in due course.

B. Unification

Essentially the only operation used in processing FUG is that of Unification, the paradigm example of a monotonic operation. Given a pair of descriptions, the unification process first determines whether they are compatible in the sense of allowing the possibility of there being some object that is in the extension of both of them. This possibility would be excluded if there were a path in one of the two descriptions that leads to an atomic value while the same path in the other one leads to some other value.
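To make the notation concrete, here is a minimal Python sketch of FDs and of the compatibility check just described: conjuncts are dictionaries, atoms are strings, and unification fails exactly when the same path leads to two distinct atoms. Disjunction, paths-as-values, patterns, and constituent sets are all omitted; the representation is an assumption made for illustration, not Kay's implementation.

def unify(fd1, fd2):
    """Return the unification of two FDs, or None if they are incompatible."""
    if isinstance(fd1, str) or isinstance(fd2, str):
        if fd1 == fd2:
            return fd1
        return None                      # same path, two different values: clash
    result = dict(fd1)
    for attribute, value in fd2.items():
        if attribute in result:
            merged = unify(result[attribute], value)
            if merged is None:
                return None
            result[attribute] = merged
        else:
            result[attribute] = value    # monotonic: information is only added
    return result

sentence = {"Cat": "S", "Subj": {"Cat": "NP", "Num": "Sing"}}
print(unify(sentence, {"Subj": {"Case": "Nom"}}))
# {'Cat': 'S', 'Subj': {'Cat': 'NP', 'Num': 'Sing', 'Case': 'Nom'}}
print(unify(sentence, {"Subj": {"Num": "Plur"}}))   # None: Sing vs Plur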
This would occur if, for example, one described a sentence with a singular subject and the other a sentence with a plural subject, or if one described a sentence and the other a noun phrase. There can also be incompatibilities in respect of other kinds of value. Thus, if one has a pattern requiring the subject to precede the main verb whereas the other specifies the other order, the two descriptions will be incompatible. Constituent sets are incompatible if they are not the same.

We have briefly considered how three different types of description behave under unification. Implicit in what we have said is that descriptions of different types do not unify with one another. Grammars, which are the descriptions of the infinite sets of sentences that make up a language, constitute a type of description that is structurally identical to an ordinary FD but is distinguished on the grounds that it behaves slightly differently under unification. In particular, it is possible to unify a grammar with another grammar to produce a new grammar, but it is also possible to unify a grammar with a FD, in which case the result is a new FD. The rules for unifying grammars with grammars are the same as those for unifying FDs with FDs. The rules for unifying grammars with FDs, however, are slightly different and in the difference lies the ability of FUG to describe structures recursively and hence to provide for sentences of unbounded size. The rule for unifying grammars with FDs requires the grammar to be unified, following the rules for FD unification, with each individual constituent of the FD.

(3)  {[Head = [Head = [Cat = V]]
      CSet = {(Head Head Subj) (Head)}
      Pat = ((Head Head Subj) (Head))]
     [Head = [Cat = V]
      {Obj = NONE
       Obj = [Cat = NP]}
      CSet = NONE]
     [Head = [Cat = N]
      CSet = NONE]}

By way of illustration, consider the grammar in (3). Like most grammars, it is a disjunction of clauses, one for each (non-terminal) category or constituent type in the language. The first of the three clauses in the principal disjunction describes sentences as having a head whose head is of category V. This characterization is in line with so-called X-bar theory, according to which a sentence belongs to the category V with two bars. In general, a phrase of category X-bar, for whatever X, has a head constituent of category X, that is, a category with the same name but one less bar. The X-bar convention is built into the very fabric of the version of FUG illustrated here where, for example, a sentence is by definition a phrase whose head's head is a verb. The head of a sentence is a V-bar, that is, a phrase whose head is of category V and which has no head of its own. A phrase with this description cannot unify with the first clause in the grammar because its head has the feature [Head = NONE].

Of sentences, the grammar says that they have two constituents. It is no surprise that the second of these is its head. The first would usually be called its subject but is here characterized as the subject of its verb. This does not imply that there must be lexical entries not only for all the verbs in the language but also for each of the subjects that the verb might have. What it does mean is that the subject must be unifiable with any description the verb gives of its subject, and this provides automatically both for any selectional restrictions that a verb might place on its subject and for agreement in person and number between subject and verb. Objects are handled in an analogous manner.
Thus, the lexical entries for the French verb forms connaît and sait might be as follows:

[Cat = V
 Lex = connaître
 Tense = Pres
 Subj = [Pers = 3
         Num = Sing
         Anim = +]
 Obj = [Cat = NP]]

[Cat = V
 Lex = savoir
 Tense = Pres
 Subj = [Pers = 3
         Num = Sing
         Anim = +]
 Obj = [Cat = S]]

Each requires its subject to be third person, singular and animate. Taking a rather simplistic view of the difference between these verbs for the sake of the example, this lexicon states that connaît takes noun phrases as objects, whereas sait takes sentences.

III Translation

A. Syntax

Consider now the French sentence Jean connaît Marie, which is presumably a reasonable rendering of the English sentence John knows Mary, a possible functional description of which was given in (2). I take it that the French sentence has an essentially isomorphic structure. In fact, following the plan laid out at the beginning of the paper, let us assume that the functional description of the French sentence is that given in (2) with obvious replacements for the values of the Lex attribute and with attribute names a in the English grammar systematically replaced by F-a in the French. Thus we have F-Cat, F-Head, etc. Suppose now that, using the English grammar and a suitable parsing algorithm, the structure given in (2) is derived from the English sentence, and that this description is then unified with the following transfer grammar:

[Cat = (F-Cat)
 {[Lex = John
   F-Lex = Jean]
  [Lex = Mary
   F-Lex = Marie]
  [Lex = know
   {F-Lex = connaître
    F-Lex = savoir}]}]

The first clause of the principal conjunct states a very strong requirement, namely that the description of a phrase in one of the two languages should be a description of a phrase of the same category in the other language. The disjunct that follows is essentially a bilingual lexicon that requires the description of a lexical item in one language to be a description of that word's counterpart in the other language. It allows the English verb know to be set in correspondence with either connaître or savoir and gives no means by which to distinguish them. In the simple example we are developing, the choice will be determined on the basis of criteria expressed only in the French grammar, namely whether the object is a noun phrase or a sentence.

This is about as trivial a transfer grammar as one could readily imagine writing. It profits to the minimal possible extent from the power of FUG. Nevertheless, it should already do better than word-for-word translation because the transfer grammar says nothing at all about the order of the words or phrases. If the English grammar states that pronominal objects follow the verb and the French one says that they precede, the same transfer grammar, though still without any explicit mention of order, will cause the appropriate "reordering" to take place. Similarly, nothing more would be required in the transfer grammar in order to place adjectives properly with respect to the nouns they modify, and so forth.

B. Semantics

It may be objected to the line of argument that I have been pursuing that it requires the legs of the translating machine to be tied together at too low a level, essentially at the level of syntax. To be sure, it allows more elaborate transfer grammars than the one just illustrated so that the translation of a sentence would not have to be structurally isomorphic with its source, modulo ordering. But the device is essentially syntactic.
However, the relations that can be characterized by FUG and similar monotonic devices are in fact a great deal more diverse than this suggests. In particular, much of what falls under the umbrella of semantics in modern linguistics also fits conveniently within this framework. Something of the flavor of this can be captured from the following example. Suppose that the lexical entries for the words all and dogs are as follows:

[Cat = Det
 Lex = all
 Num = Plur
 Def = +
 Sense = [Type = All
          Prop = [Type = Implies
                  P1 = [Arg = (Sense Var)]
                  P2 = [Arg = (Sense Var)]]]]

[Cat = N
 Lex = dog
 Num = Plur
 Art = [Num = Plur
        Sense = (Sense)]
 Sense = [Prop = [P1 = [Type = Pred
                        Pred = dog]]]]

When the first of these is unified with the value of the Art attribute in the second, as required by the grammar, the result is as follows:

[Cat = N
 Lex = dog
 Num = Plur
 Art = [Cat = Det
        Lex = all
        Def = +
        Num = Plur
        Sense = (Sense)]
 Sense = [Type = All
          Prop = [Type = Implies
                  P1 = [Type = Pred
                        Pred = dog
                        Arg = (Sense Var)]
                  P2 = [Arg = (Sense Var)]]]]

This, in turn, is readily interpretable as a description of the logical expression

∀q . dog(q) ⊃ P(q)

It remains to provide verbs with a sense that provides a suitable value for P, that is, for (Sense Prop P2 Pred). An example would be the following:

[Cat = V
 Lex = barks
 Tense = Pres
 Subj = [Pers = 3
         Num = Sing
         Anim = +]
 Obj = NONE
 Sense = [Prop = [P2 = [Pred = bark]]]]

IV Conclusion

It has not been possible in this paper to give more than an impression of how an experimental machine translation system might be constructed based on FUG. I hope, however, that it has been possible to convey something of the value of monotonic systems for this purpose. Implementing FUG in an efficient way requires skill and a variety of little known techniques. However, the programs, though subtle, are not large and, once written, they provide the grammarian and lexicographer with an immense wealth of expressive devices. Any system implemented strictly within this framework will be reversible in the sense that, if it translates from language A to language B then, to the same extent, it translates from B to A. If the set S is among the translations it delivers for a, then a will be among the translations of each member of S. I know of no system that comes close to providing these advantages and I know of no facility provided for in any system proposed hitherto that is not subsumable under FUG.
COMPUTER SIMULATION OF SPONTANEOUS SPEECH PRODUCTION

Bengt Sigurd
Dept of Linguistics and Phonetics
Helgonabacken 12, S-223 62 Lund, SWEDEN

ABSTRACT

This paper pinpoints some of the problems faced when a computer text production model (COMMENTATOR) is to produce spontaneous speech, in particular the problem of chunking the utterances in order to get natural prosodic units. The paper proposes a buffer model which allows the accumulation and delay of phonetic material until a chunk of the desired size has been built up. Several phonetic studies have suggested a similar temporary storage in order to explain intonation slopes, rhythmical patterns, speech errors and speech disorders. Small-scale simulations of the whole verbalization process from perception and thought to sounds, hesitation behaviour, pausing, speech errors, sound changes and speech disorders are presented.

1. Introduction

Several text production models implemented on computers are able to print grammatical sentences and coherent text (see e.g. contributions in Allén, 1983, Mann & Matthiessen, 1982). There is, however, to my knowledge no such verbal production system with spoken output, simulating spontaneous speech, except the experimental version of Commentator to be described. The task to design a speech production system cannot be solved just by attaching a speech synthesis device to the output instead of a printer. The whole production model has to be reconsidered if the system is to produce natural sound and prosody, in particular if the system is to have some psychological reality by simulating the hesitation pauses and speech errors so common in spontaneous speech.

This paper discusses some of the problems in the light of the computer model of verbal production presented in Sigurd (1982), Fornell (1983). For experimental purposes a simple speech synthesis device (VOTRAX) has been used. The problem of producing naturally sounding utterances is also met in text-to-speech systems (see e.g. Carlson & Granström, 1978). Such systems, however, take printed text as input and turn it into a phonetic representation, eventually sound. Because of the differences between spelling and sound such systems have to face special problems, e.g. to derive single sounds from the letter combinations th, ng, sh, ch in such words as the, thing, shy, change.

2. Commentator as a speech production system

The general outline of Commentator is presented in fig. 1. The input to this model is perceptual data or equivalent values, e.g. information about persons and objects on a screen. These primary perceptual facts constitute the basis for various calculations in order to derive secondary facts and draw conclusions about movements and relations such as distances, directions, right/left, over/under, front/back, closeness, goals and intentions of the persons involved etc. The Commentator produces comments consisting of grammatical sentences making up coherent and well-formed text (although often soon boring). Some typical comments on a marine scene are: THE SUBMARINE IS TO THE SOUTH OF THE PORT. IT IS APPROACHING THE PORT, BUT IT IS NOT CLOSE TO IT. THE DESTROYER IS APPROACHING THE PORT TOO. The original version commented on the movements of the two persons ADAM and EVE in front of a gate.
A question menu, different for different situations, suggests topics leading to propositions which are considered appropriate under the circumstances, and their truth values are tested against the primary and secondary facts of the world known to the system (the simulated scene). If a proposition is found to be true, it is accepted as a protosentence and verbalized by various lexical, syntactic, referential and textual subroutines. If, e.g., the proposition CLOSE (SUBMARINE, PORT) is verified after measuring the distance between the submarine and the port, the lexical subroutines try to find out how closeness, the submarine and the port should be expressed in the language (Swedish and English printing and speaking versions have been implemented). The referential subroutines determine whether pronouns could be used instead of proper or other nouns, and textual procedures investigate whether connectives such as but, however, too, either and perhaps contrastive stress should be inserted.

Dialogue (interactive) versions of the Commentator have also been developed, but it is difficult to simulate dialogue behaviour. A person taking part in a dialogue must also master turntaking, questioning, answering, and back-channelling (indicating listening, evaluation). Expert systems, and even operative systems, simulate dialogue behaviour, but as everyone who has worked with computers knows, the computer dialogue often breaks down and it is poor and certainly not as smooth as human dialogue.

The Commentator can deliver words one at a time whose meaning, syntactic and textual functions are well-defined through the verbalization processes. For the printing version of Commentator these words are characterized by whatever markers are needed.

Lines    Component                  Task                                 Result (sample)
10-35    Primary information        Get values of primary dimensions     Localization coordinates
100-140  Secondary information      Derive values of complex             Distances, right-left, under-over
                                    dimensions
152-183  Focus and topic planning   Determine objects in focus           Choice of subject, object and
         expert                     (referents) and topics according     instructions to test abstract
                                    to menu                              predicates with these
210-232  Verification expert        Test whether the conditions for      Positive or negative protosentences
                                    the use of the abstract predicates   and instructions for how to proceed
                                    are met in the situation (on the
                                    screen)
500      Sentence structure         Order the abstract sentence          Sentence structure with further
         (syntax) expert            constituents (subject, predicate,    instructions
                                    object); basic prosody
600-800  Reference expert           Determine whether pronouns,          Pronouns, proper nouns, indefinite
         (subroutine)               proper nouns, or other expressions   or definite NPs
                                    could be used
700      Lexical expert             Translate (substitute) abstract      Surface phrases, words
         (dictionary)               predicates, etc.
                                    Insert conjunctions, connective      Sentences with words such as också
                                    adverbs; prosodic features           (too), dock (however)
                                    Pronounce or print the assembled     Uttered or printed sentence (text)
                                    structure

Figure 1. Components of the text production model underlying Commentator

3. A simple speech synthesis device

The experimental system presented in this paper uses a Votrax speech synthesis unit (for a presentation see Ciarcia, 1982). Although it is a very simple system designed to enable computers to deliver spoken output such as numbers, short instructions etc, it has some experimental potentials.
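Since Commentator drives the synthesizer through numeric codes, a minimal sketch of the interface may help; the transcription scheme and the codes for the word höger are described in the following paragraphs, and the device write call shown here is an assumption standing in for whatever output port the real system used.

# Each sound is a (phoneme_code, pitch_level) pair; pitch is 0-3.
# Codes for Swedish "höger" (right), taken from the transcription below:
HOGER = [(27, 2), (58, 0), (28, 0), (35, 0), (43, 0)]   # h, ö:, g, e, r

def speak(word_codes, device):
    """Send alternating phoneme and pitch codes to the synthesizer."""
    for phoneme, pitch in word_codes:
        device.write(bytes([phoneme, pitch]))   # hypothetical output port

class EchoDevice:
    def write(self, data):
        print(list(data))

speak(HOGER, EchoDevice())   # prints [27, 2], [58, 0], ...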
It forces the researcher to take a stand on a number of interesting issues and make theories about speech production more concrete. The Votrax is an inexpensive and unsophisticated synthesis device and it is not our hope to achieve perfect pronunciation using this circuit, of course. The circuit, rather, provides a simple way of doing research in the field of speech production.

Votrax (which is in fact based on a circuit named SC-01 sold under several trade names) offers a choice of some 60 (American) English sounds (allophones) and 4 pitch levels. A sound must be transcribed by its numerical code and a pitch level, represented by one of the figures 0,1,2,3. The pitch figures correspond roughly to the male levels 65, 90, 110, 130 Hz. Votrax offers no way of changing the amplitude or the duration. Votrax is designed for (American) English and if used for other languages it will, of course, add an English flavour. It can, however, be used at least to produce intelligible words for several other languages. Of course, some sounds may be lacking, e.g. Swedish u and y, and some sounds may be slightly different, as e.g. Swedish sh-, ch-, r-, and l-sounds. Most Swedish words can be pronounced intelligibly by the Votrax.

The pitch levels have been found to be sufficient for the production of the Swedish word tones: accent 1 (acute) as in and-en (the duck) and accent 2 (grave) as in ande-n (the spirit). Accent 1 can be rendered by the pitch sequence 20 and accent 2 by the sequence 22 on the stressed syllable (the beginning) of the words. Stressed syllables have to include at least one 2.

Words are transcribed in the Votrax alphabet by series of numbers for the sounds and their pitch levels. The Swedish word höger (right) may be given by the series 27,2,58,0,28,0,35,0,43,0, where 27,58,28,35,43 are the sounds corresponding to h, ö:, g, e, r respectively, and the figures 2,0 etc after each sound are the pitch levels of each sound. The word höger sounds American because of the ö, which sounds like the (retroflex) vowels in bird. The pronunciation (execution) of the words is handled by instructions in a computer program, which transmits the information to the sound generators and the filters simulating the human vocal apparatus.

4. Some problems to handle

4.1. Pauses and prosodic units in speech

The spoken text produced by human beings is normally divided by pauses into units of several words (prosodic units). There is no generally accepted theory explaining the location and duration of the pauses and the intonation and stress patterns in the prosodic units. Many observations have, however, been made, see e.g. Dechert & Raupach (1980).

The printing version of Commentator collects all letters and spaces into a string before they are printed. A speaking version trying to simulate at least some of the production processes cannot, of course, produce words one at a time with pauses corresponding to the word spaces, nor produce all the words of a sentence as one prosodic unit. A speaking version must be able to produce prosodic units including 3-5 words (cf Svartvik (1982)) and lasting 1-2 seconds (see Jönsson, Mandersson & Sigurd (1983)). How this should be achieved may be called the chunking problem. It has been noted that the chunks of spontaneous speech are generally shorter than in text read aloud. The text chunks have internal intonation and stress patterns often described as superimposed on the words. Deriving these internal prosodic patterns may be called the intra-chunk problem.
We may also talk about the inter-chunk problem, having to do with the relations, e.g. in pitch, between successive chunks.

As human beings need to breathe they have to pause in order to inhale at certain intervals. The need for air is generally satisfied without conscious actions. We estimate that chunks of 1-2 seconds and inhalation pauses of about 0.5 seconds allow convenient breathing. Clearly, breathing allows great variation. Everybody has met persons who try to extend the speech chunks and minimize the pauses in order to say as much as possible, or to hold the floor.

It has also been observed that pauses often occur where there is a major syntactic break (corresponding to a deep cut in the syntactic tree), and that, except for so-called hesitation pauses, pauses rarely occur between two words which belong closely together (corresponding to a shallow cut in the syntactic tree). There is, however, no support for a simple theory that pauses are introduced between the main constituents of the sentence and that their duration is a function of the depth of the cuts in the syntactic tree. The conclusion to draw seems rather to be that chunk cuts are avoided between words which belong closely together. Syntactic structure does not govern chunking, but puts constraints on it. Click experiments which show that the click is erroneously located at major syntactic cuts rather than between words which are syntactically coherent seem to point in the same direction. As an illustration of syntactic closeness we mention the combination of a verb and a following reflexive pronoun as in Adam närmar+sig Eva ("Adam approaches Eva"). Cutting between närmar and sig would be most unnatural.

Lexical search, syntactic and textual planning are often mentioned as the reasons for pauses, so-called hesitation pauses, filled or unfilled. In the speech production model envisaged in this paper sounds are generally stored in a buffer where they are given the proper intonational contours and stress patterns. The pronunciation is therefore generally delayed. Hesitation pauses seem, however, to be direct (on-line) reflexes of searching or planning processes and at such moments there is no delay. Whatever has been accumulated in the articulation or execution buffer is pronounced and the system is waiting for the next word. While waiting (idling), some human beings are silent, others prolong the last sounds of the previous word or produce sounds, such as ah, eh, or repeat part of the previous utterance. (This can also be simulated by Commentator.) Hesitation pauses may occur anywhere, but they seem to be more frequent before lexical words than function words.

By using buffers chunking may be made according to various principles. If a sentence termination (full stop) is entered in the execution buffer, whatever has been accumulated in the buffer may be pronounced, setting the pitch of the final part at low. If the number of segments in the chunk being accumulated in the buffer does not exceed a certain limit, a new word is only stored after the others in the execution buffer. The duration of a sound in Votrax is 0.1 second on the average. If the limit is set at 15 the system will deliver chunks of about 1.5 seconds, which is a common length of speech chunks. The system may also accumulate words in such a way that each chunk normally includes at least one stressed word, or one syntactic constituent (if these features are marked in the representation).
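The buffer principles just listed are easy to phrase as a small algorithm. The sketch below is one possible reading of them in Python: flush on a full stop, or when the accumulated segments would exceed the 15-segment (about 1.5 second) limit; the data format and function names are assumptions for illustration, and the lowering of final pitch is only noted in a comment.

MAX_SEGMENTS = 15          # ~1.5 s at 0.1 s per Votrax sound

def chunk_stream(words):
    """words: sequence of (word, segments) pairs, where segments is the
    word's list of (phoneme, pitch) codes and word may be '.' for a full stop.
    Yields chunks: lists of segments forming one prosodic unit."""
    buffer = []
    for word, segments in words:
        if word == ".":                    # sentence termination: flush,
            if buffer:                     # pitch of the final part set low
                yield buffer
            buffer = []
        elif len(buffer) + len(segments) > MAX_SEGMENTS:
            yield buffer                   # chunk limit reached: pronounce
            buffer = list(segments)
        else:
            buffer.extend(segments)        # otherwise just accumulate

text = [("Adam", [(0, 2)] * 4), ("närmar+sig", [(0, 2)] * 10),
        ("Eva", [(0, 2)] * 3), (".", [])]
for chunk in chunk_stream(text):
    print(len(chunk), "segments")          # 14 segments, then 3 segments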
The system may be made to avoid cutting where there is a tight syntactic link, as e.g. between a head word and enclitic morphemes. The length of the chunk can be varied in order to simulate different speech styles, individuals or speech disorders.

4.2. Prosodic patterns within utterance chunks

A system producing spontaneous speech must give the proper prosodic patterns to all the chunks the text has been divided into. Except for a few studies, e.g. Svartvik (1982), most prosodic studies concern well-formed grammatical sentences pronounced in isolation. While waiting for further information and more sophisticated synthesis devices it is interesting to do experiments to find out how natural the result is. Only pitch, not intensity, is available in Votrax, but pitch may be used to signal stress too. Unstressed words may be assigned pitch level 1 or 0, stressed words 2 or higher on at least one segment. Words may be assumed to be inherently stressed or unstressed. In the restricted Swedish vocabulary of Commentator the following illustrate lexically stressed words: Adam, vänster (left), nära (close), också (too). The following words are lexically unstressed in the experiments: han (he), den (it), i (in), och (and), men (but), är (is). Inherently unstressed words may become stressed, e.g. by contrast assigned during the verbalization process.

The final sounds of prosodic units are often prolonged, a fact which can be simulated by doubling some chunk-final sounds, but the Votrax is not sophisticated enough to handle these phonetic subtleties. Nor can it take into account the fact that the duration of sounds seems to vary with the length of the speech chunk.

The rising pitch observed in chunks which are not sentence final (signalling incompleteness) can be implemented by raising the pitch of the final sounds of such chunks. It has also been observed that words (syllables) within a prosodic unit seem to be placed on a slope of intonation (grid). The decrement to the pitch of each sound caused by such a slope can be calculated knowing the place of the sound and the length of the chunk. But so far, the resulting prosody, as is the case of text-to-speech systems, cannot be said to be natural.

4.3. Speech errors and sound change

Speech errors may be classed as lexical, grammatical or phonetic. Some lexical errors can be explained (and simulated) as mistakes in picking up a lexical item. Instead of picking up höger (right) the word vänster (left), a semi-antonym, stored on an adjacent address, is sent to the buffer. Grammatical mistakes may be simulated by mixing up the contents of memories storing the constituents during the process of verbalization. Phonetic errors can be explained (and simulated) if we assume buffers where the phonetic material is stored and mistakes in handling these buffers. The representation in Votrax is not, however, sophisticated enough for this purpose as sound features and syllable constituents often must be specified. If a person says pöger om porten instead of höger om porten (to the right of the gate) he has picked up the initial consonantal element of the following stressed syllable too early.

Most explanations of speech errors assume an unconscious or a conscious monitoring of the contents of the buffers used during the speech production process. This monitoring (which in some ways can be simulated by computer) may result in changes in order to adjust the contents of the buffers, e.g. to a certain norm or a fashion.
Similar monitoring is seen in word processing systems which apply automatic spelling correction. But there are several places in Commentator where sound changes may be simulated.

REFERENCES

Allén, S. (ed) 1983. Text processing. Nobel symposium. Stockholm: Almqvist & Wiksell

Carlson, R. & B. Granström. 1978. Experimental text-to-speech system for the handicapped. JASA 64, p 163

Ciarcia, S. 1982. Build the Microvox Text-to-speech synthesizer. Byte 1982: Oct

Dechert, H.W. & M. Raupach (eds) 1980. Temporal variables in speech. The Hague: Mouton

Fornell, J. 1983. Commentator, ett mikrodatorbaserat forskningsredskap för lingvister. Praktisk Lingvistik 8

Jönsson, K-G, B. Mandersson & B. Sigurd. 1983. A microcomputer pausemeter for linguists. In: Working Papers 24. Lund: Department of Linguistics

Mann, W.C. & C. Matthiessen. 1982. Nigel: a systemic grammar for text generation. Information Sciences Institute, USC, Marina del Rey. ISI/RR-83-105

Sigurd, B. 1982. Text representation in a text production model. In: Allén (1982)

Sigurd, B. 1983. Commentator: A computer model of verbal production. Linguistics 20-9/10 (to appear)

Svartvik, J. 1982. The segmentation of impromptu speech. In Enkvist, N-E (ed). Impromptu speech: Symposium. Åbo: Åbo akademi
CONVEYING IMPLICIT CONTENT IN NARRATIVE SUMMARIES

Malcolm E. Cook, Wendy G. Lehnert, David D. McDonald
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01003

ABSTRACT

One of the key characteristics of any summary is that it must be concise. To achieve this the content of the summary (1) must be focused on the key events, and (2) should leave out any information that the audience can infer on their own. We have recently begun a project on summarizing simple narrative stories. In our approach, we assume that the focus of the story has already been determined and is explicitly given in the story's long-term representation; we concentrate instead on how one can plan what inferences an audience will be able to make when they read a summary. Our conclusion is that one should think about inferences as following from the audience's recognition of the central concepts in the story's plot, and then plan the textual structure of the summary so as to reinforce that recognition.

BACKGROUND

This research builds on our previous work on narrative structure and generation. We are using Plot Units [Lehnert 1981] to represent the structure of the original narrative, and use Mumble [McDonald 1983] to do the linguistic realization. To connect these two facilities we have a new interface and a new text planning component named Precis.

Plot units are a technique for organizing the conceptual representation of a narrative in such a way that the topological structure of the representation directly indicates which events are central to the story and which are peripheral. A graph of connected plot units is constructed for a story as it is understood, based on the recognition of goal-oriented behavior by the characters and their affective reactions to events. Plot units summarize larger-scale relationships among explicit and implicit events in the story, and are oriented toward long term recall rather than appreciation of story style or specific wording.

Mumble is a "realization" module for language generation; it takes a stream of output from a text planner and incrementally produces fluent, cohesive English text in accordance with the planner's specifications. The planner decides what information should be imparted and most of its rhetorical features; Mumble filters those decisions in accordance with grammatical constraints, handles syntax and morphology, and performs the "smoothing" operations that are required by the discourse context in which the information appears.

1. This research was supported in part by the National Science Foundation under contracts IST-8217502 and IST-8104984, and in part by the Office of Naval Research under contract N00014-83-K-0~0.

Precis stands between the plot unit graph and Mumble. It has been under development for only a short time and the ultimate form that its architecture will take is not yet fixed. We have so far been working bottom up, experimenting with different ways to combine the texts contributed by individual units and affect states, and trying to understand the consequences of the alternatives. We report here on one key "tactical" problem in narrative summarization which we refer to as conceptual ellipsis: omitting those events from a summary that we expect an audience to be able to infer on their own, and reinforcing that inference through a judicious choice of textual form.
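Since the three modules are described only in prose, a minimal sketch of how they might fit together may be useful; everything here, from the function names to the data passed between stages, is a hypothetical rendering of the architecture described above, not the actual programs.

def plot_unit_graph(affect_states):
    """PUGG's role: build a graph of plot units from affect states.
    Here reduced to returning a single central unit."""
    return {"central-unit": "Competition", "states": affect_states}

def precis(graph):
    """Precis' role: pick content and decide what to leave implicit,
    yielding a specification for the realizer."""
    return {"include": ["M2", "+"], "contrast": True}

def mumble(specification):
    """Mumble's role: realize the planner's specification as English."""
    return "Mike wanted to work for IBM, but they hired John."

summary = mumble(precis(plot_unit_graph(["M1", "M2", "+", "-"])))
print(summary)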
THE NEED FOR CONCEPTUAL ELLIPSIS

Ever since the original work by Bartlett, researchers have appreciated that people who are remembering a story some time after they have heard it typically fail to distinguish between events that were explicitly stated in the story and those that they only inferred while reading it. Present day story understanding systems act in a similar way by maintaining only a single conceptual record of what they have understood regardless of its source [Joshi & Weischedel 1977, Graesser 1980, Dyer 1983]. Since our summarization process starts from the conceptual representation of the story rather than the text itself, it too will be unable to make this distinction.

This theory of memory has two consequences. One is that any decisions about what constituted the crux or point of the story must have been made at comprehension time rather than summarization time. This is one of the purposes of a plot unit representation. The other is that we now need to deliberately recalculate what information should be explicit in our summary and what should be left for the audience to infer; were this not done, the superfluous information in the summary would make it sound quite unnatural--as though it were being told by a person from a different society who did not have any commonsense understanding of the social context in which the story was set.

How the explicit versus left-to-inference calculation turns out will vary with the summary: the same story can be summarized or retold in different ways depending on which character's point of view is taken or which events are emphasized. The plot unit graph is neutral on this question, and it will be an important part of what we do next in this research.

Decisions about conceptual ellipsis are made prior to any of the linguistic decisions about form; they are however linked to those later decisions since some linguistic forms will be more effective than others in indicating to the audience that an inference is intended. Certain marked choices of form will suggest to the reader that particular implications were "in the mind of the writer" at the time of generation. The conceptual decisions are thus the source of dependencies that must be carried forward to the point where the text-form decisions will be made in order that the right realizations are chosen. By the same token there will also be dependencies percolating back to the conceptual ellipsis decisions indicating what alternative realizations are actually available in a given case and thus whether a particular implication can be adequately supported by the information that is included and the way it is phrased.

AN EXAMPLE

The following simple story will demonstrate the general phenomenon.

THE COMSYS STORY

John and Mike were competing for the same job at IBM. John got the job and Mike decided to start his own consulting firm, COMSYS. Within three years, COMSYS was flourishing. By that time, John had become dissatisfied with IBM so he asked Mike for a job. Mike spitefully turned him down.

An analysis of this text in terms of plot units has "Competition" as a central unit in the graph, which would make it a candidate basis for a summary of the story.
All competition units have this pattern:

COMPETITION
   Agent1   Agent2
     M1       M2
     +        -

Underlying this level of representation are the actual goals and events experienced by the two characters. In any competition unit, we have:

M1 : goal(agent1, goal1)
M2 : goal(agent2, goal2)
+  : success(goal1, event1)
-  : failure(goal2, event2)

with the additional constraints:

C1 : event1 = event2
C2 : goal1 and goal2 cannot both be realized.

(Note that in C1 the positive and negative actualizations are actually the same event but from the point of view of two different characters.)

In the COMSYS story the competition is between John and Mike over who will get a particular job at IBM. The instantiation of the Competition unit in this story is:

M1 : A-goal1 (John has-role #employee in M-job1 where #employer = IBM)
M2 : A-goal2 (Mike has-role #employee in M-job1 where #employer = IBM)
+  : success(A-goal1, hire(IBM, John))
-  : failure(A-goal2, not(hire(IBM, Mike)))

where

c1 : event1 = event2 = hire(IBM, John)
c2 : A-goal1 and A-goal2 cannot both be realized.

At the time of this writing, Precis can specify any of the following texts for this instantiation of the Competition unit, preferences dictated by conceptual ellipsis aside. (Discourse fluency effects such as verb phrase deletion or pronominalization are put in by Mumble as it is realizing Precis' specification.)

(a) "John wanted to work for IBM and so did Mike. They hired John and did not hire Mike."

(b) "Both John and Mike wanted to work for IBM, but they hired John."

(c) "Mike wanted to work for IBM, but they hired John."
At the time of this writing we do not yet have an adequately general mechanism for making this observation and incorporating the "only", so we have not included it among Precis' choices.

It is intriguing that choice C, "Mike wanted to work for IBM, but they hired John", is probably the best of the three choices even though it requires the audience to do the most inferencing. In C we have omitted state M1 (that John wanted to work for IBM), yet the audience is able to recover this information quite easily given the presence of the "but". Given the ease with which choice C is understood, we are led to the suggestion that there may be a very general "template" being recognized here: that choice C is seen by an audience as an instance of the pattern

    <expression of agent A's goal>, but <realization of agent B's goal>

and that this template always carries with it the inference that the two goals must be incompatible and therefore A's goal has not been satisfied. Note that here again the choice would be improved by including an explicit lexical indication of the constraint: "Mike wanted to work for IBM, but they hired John instead". We expect that most instances of these "rhetorical markers" in texts will turn out to be indicators of constraint-level information akin to our present cases, which raises the intriguing possibility that a general theory of how they are used might arise out of this kind of work in generation.

SUMMARY

Currently, we are working with two programs. PUGG (Plot Unit Graph Generator) operates on an affect-state representation of a story, and produces a graph or network of plot units that act as pointers to the core of the conceptual representation of the input story; it organizes how that representation will be "presented" to the program that plans the text of the summary, Precis. Precis is in the early stages of its development and so far can only use a single, core plot unit from the graph as the basis of the summary of the story. Precis works at the interface between purely conceptual and purely linguistic considerations as it makes its planning decisions. It chooses from a set of alternative specifications for the summary that vary according to which of the elements of the plot unit are included and which are left to be inferred by the audience once they recognize the story as a case of competition. Precis can state the three alternative choices described above (and a few other sets like them), and Mumble can take those specifications and produce the indicated texts. However, we do not as yet have any general mechanism for deciding which choice to prefer over the others. Perhaps such a decision mechanism will become apparent once these single unit summaries are embedded in a larger context, or possibly there is no reasonable basis for decision without more knowledge of the purpose of the summary or the ability of a particular audience to make these kinds of inferences (one might have to talk quite differently to young children, for example). In future work we also hope to be able to work out a general basis for planning the use of inference-directing words like "only" or "instead".

REFERENCES

Dyer, M. (1983) In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension, Cambridge, Mass.: MIT Press.

Graesser, A.C. (1981) Prose Comprehension Beyond the Word, New York, N.Y.: Springer-Verlag.

Joshi, A.K., and Weischedel, R. (1977) Computation of a Subclass of Inferences: Presupposition and Entailment, in American Journal of Computational Linguistics.

Lehnert, W.
(1982) Plot Units: A Narrative Summarization Strategy, in Lehnert, W. and Ringle, M. (Eds.), Strategies for Natural Language Processing, Hillsdale, N.J.: Lawrence Erlbaum Associates.

Lehnert, W. (1983) "Narrative Complexity Based on Summarization Algorithms," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, Germany.

McDonald, D. (1983) "Natural Language Generation as a Computational Problem: an Introduction," in Brady, M. and Berwick, R. (Eds.), Computational Models of Discourse, Cambridge, Mass.: MIT Press.

McDonald, D. (1982) "Description Directed Control: its Implications for Natural Language Generation," in Cercone (ed.), Computational Linguistics, Dublin: Pergamon Press. | 1984 | 2 |
LIMITED DOMAIN SYSTEMS FOR LANGUAGE TEACHING

S G Pulman, Linguistics, EAS, University of East Anglia, Norwich NR4 7TJ, UK.

This abstract describes a natural language system which deals usefully with ungrammatical input, and describes some actual and potential applications of it in computer aided second language learning. However, this is not the only area in which the principles of the system might be used, and the aim in building it was simply to demonstrate the workability of the general mechanism, and provide a framework for assessing developments of it.

BACKGROUND

The really hard problem in natural language processing, for any purpose, is the role of non-linguistic knowledge in the understanding process. The correct treatment of even the simplest type of non-syntactic phenomena seems to demand a formidable amount of encyclopedic knowledge, and complex inferences therefrom. To date, the only systems which have simulated understanding to any convincing degree have done so by sidestepping this problem and restricting the factual domain in which they operate very severely. In such limited domains, semantic or pragmatic processing to the necessary depth can be achieved by brute force, as a last resort. However, such systems are typically difficult to transport from one domain to another. In many contexts this state of affairs is unsatisfactory: something more than fragile, toy, domain dependent systems is required. But there are also situations in which the use of language within a limited factual domain might well be all that was required. Second language learning, especially during the early stages, is one, where quite a lot of the time what is important is practice and training in correct usage of basic grammatical forms, not the conveying of facts about the world. If someone can be taught to use the comparative construction when talking about, say, lions and tigers, he is not likely to encounter much difficulty of a linguistic nature when switching to talk about cars and buses, overdrafts and bank loans, etc., even though the system he was using might.

Several existing limited domain systems might lend themselves to exploitation for these purposes: one example might be the program described by Isard (1974), which plays a game of noughts and crosses with the user and then engages in a dialogue about the game. Although the domain is tiny, the program can deal with much of the modal and tense system of English, as well as some conditionals. Also dealing with noughts and crosses is the program described by Davey (1978), which is capable of producing (and therefore capable of detecting) correct uses of conjunctions like 'but' and 'although'. Other examples of systems geared to a particular domain, and often to a particular syntactic construction, will spring readily to mind. Embedded in educationally motivated settings, such systems might well form the basis for programs giving instruction and practice in some of these traditionally tricky parts of English grammar. Such, at any rate, is the philosophy behind the present work. The idea is that there is scope for using limited systems in an area where their limitations do not matter.

ERROR DETECTION AND REPORTING

Of course, such an application carries its own special requirements. By definition, a language learner interacting with such a system is likely to be giving it input which is ill-formed in some way quite often.
It is not a feature of most NL systems that they respond usefully in this situation: in a language tuition context, an efficient method for detecting and diagnosing errors is essential. The problem has of course not gone unnoticed. Hayes and Mouradian (1981) and Kwasny and Sondheimer (1981), among others, have presented techniques for allowing a parser to succeed even with ill-formed or partial input. The ATN based framework of the latter also generates descriptions of the linguistic requirements which have had to be violated in order for the parse to succeed. Such descriptions might well form the basis for a useful interaction between system and learner. However, the work most directly related to that reported here, and an influence on it, is that by Weischedel et al (1978) and Weischedel and Black (1980) (see also Hendrix (1977)). They also describe ATN based systems, this time specifically intended for use in language tutoring programs. The earlier paper describes two techniques for handling errors: encoding likely errors directly into the network, so that the ungrammatical sentences are treated like grammatical ones, except that error messages are printed; and using 'failable' predicates on arcs for such things as errors of agreement. The disadvantages of such a system are obvious: the grammar writer has to predict in advance likely mistakes and allow for them in designing the ATN. Unpredicted errors cannot be handled. The later paper describes a generalisation of these techniques, with two new features: condition-action pairs on selected states of the ATN for generating reports (1980:100) and the use of a 'longest path' heuristic (101) for deciding between alternative failed parsings. Although impressive in its coverage, Weischedel and Black report two major problems with the system: the difficulty of locating precisely where in a sentence the parser failed, and the difficulty of generating appropriate responses for the user. Those derived from relaxed predicates for the meanings of states were often fairly technical: some helpful examples of usage were given in some cases, but these had to be prestored and indexed by particular lexical items (103).

The problem of accurately locating ungrammaticality is one that is extremely difficult, but arguably made more difficult than it need be by adopting the ATN framework for grammatical description. The ATN formalism is simply too rich: a successful parse in general depends not only on having traversed the network and consumed all the input, but on having various registers appropriately filled. Since the registers may be inspected at different points, this makes it difficult to provide an algorithmic method of locating ungrammaticality. The problem of generating error reports and helpful responses for the learner is also made more difficult than it need be if this is conceived of as something extra which needs to be added to a system already capable of dealing with well-formed input. This is because there is a perfectly straightforward sense in which this problem has already been solved if the system contains an adequate grammar. Such a grammar, by explicitly characterising well-formedness, automatically provides an implicit characterisation of how far actual inputs deviate from expected inputs. It also contains all the grammatical information necessary for providing the user with examples of correct usage. These two types of information ought to be sufficient to generate appropriate reports.
THE SYSTEM

The syntactic theory underlying the present system is that of Generalised Phrase Structure Grammar, of the vintage described in Gazdar (1982). This is a more constrained grammatical formalism than that of an ATN, and hence it was possible to develop a relatively simple procedure for almost always accurately locating ungrammaticality, and also for automatically generating error reports of varying degrees of complexity, as well as examples of correct usage. All this is done using no information over and above what is already encoded in the grammar: nothing need be anticipated or pre-stored.

Briefly, on the GPSG theory, the syntactic description of a language consists of two parts: a basic context-free grammar generating simple canonical structures, and a set of metarules, which generate rules for more complex structures from the basic rules. The result of applying the metarules to the basic rules is a large CFG. The system contains a suite of pre-compilation programs which manipulate a GPSG into the form used by the parser. First, the metarules are applied, producing a large, simple CFG. The metarule expansion routine is in fact only fully defined for a subset of the metarules permitted by the theory. Roughly speaking, only metarules which do not contain variables which could be instantiated more than one way on any given rule application will be accepted. This is not a theoretically motivated restriction but simply a short cut to enable a straightforward pattern matching production system already available in Pop-11 to be transferred wholesale. A set of filters can be specified for the output by the same means if required. Next, the resulting CFG is compiled into an equivalent RTN, and finally this RTN is optimised and reduced, using a variant of a standard algorithm for ordinary transition networks (Aho and Ullman 1977:101). The intention behind this extensive preprocessing, apart from increased efficiency, is that the eventual system could be tailored by teachers for their own purposes. All that would be needed is the ability to write GPS grammars, or simple CF grammars, with no knowledge needed of the internal workings of the system. To give an example of the effect of this pre-processing: the grammar used by the system in the interchanges below contained about 8 rules and 4 metarules. These expand to a simple CFG of about 60 rules; this compiles to an RTN of over 200 states, and the final optimised RTN contains about 40 states.

The parser is a standard RTN parser operating breadth first. The error detection routine is part of the main loop of the parser and works as follows: when no transition can be taken from a particular state in the network, a record is taken of the overall state of the machine. This contains information about how much of the sentence has been successfully parsed, the tree built, a list of states to POP to, etc. If this record represents a more successful parse than any record so far, it is preserved. This means that at the end of an unsuccessful parse the system has a record of the most successful path pursued, and this record is passed to the error reporting routine. If desired, all such records could be preserved during a parse and some procedure for choosing between them defined. This would mean that ambiguous parses can be treated independently, whereas at present only one record representing the most successful path through the input on any reading is retained.
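The error-detection loop can be sketched in a few lines. The following Python fragment is our own reconstruction of the idea rather than the actual Pop-11 implementation; the toy network and the configuration format are invented for illustration, and PUSH/POP arcs for subnetworks are omitted.

    def parse(rtn, start, final, words):
        # a configuration records a state and a position in the input; the
        # real record also holds the tree built and the list of states to POP to
        agenda = [(start, 0)]
        best_failure = None
        while agenda:
            state, pos = agenda.pop(0)          # breadth first
            if state == final and pos == len(words):
                return ("ok", pos)
            blocked = True
            for label, target in rtn.get(state, []):
                if pos < len(words) and label == words[pos]:
                    agenda.append((target, pos + 1))
                    blocked = False
            if blocked:
                # no transition can be taken: keep the most successful record
                if best_failure is None or pos > best_failure[1]:
                    best_failure = (state, pos)
        return ("sentence ok up to here", best_failure)

    # Toy network accepting only "the boy ran"
    rtn = {"s0": [("the", "s1")], "s1": [("boy", "s2")], "s2": [("ran", "s3")]}
    print(parse(rtn, "s0", "s3", "the boy ran".split()))  # ('ok', 3)
    print(parse(rtn, "s0", "s3", "the boy the".split()))  # failure record at position 2

The final best_failure record is exactly what the reporting routine described next would receive.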
The error reporting routine is based around an RTN generator, which simply picks up from the point in the network indicated by the record handed to it, using the information in that record, as well as the RTN, and a special sub-lexicon described below. It is capable of generating error reports of several different types: (i) it can say what constituent(s) it was trying to complete; (ii) it can say what type of item it was expecting to find at the point of failure, either using the terminology of the grammar, or by example; (iii) it can say what would be necessary to continue the sentence correctly, by generating example continuations. Here are some transcriptions of typical exchanges with the system using the small grammar mentioned above:

    :go();
    ** ready
    ? william put the book on the shelf
    ok
    ? did william put the book on the shelf
    yes
    ? was the book put on the shelf
    yes
    ? who put the book on the shelf
    william
    ? what did william put on the shelf
    the book
    ? what was put on the shelf
    the book
    ? who was the book put on the shelf by
    william
    ? what did william put
    sentence ok up to here: what did william put ...
    expecting to find one of the following
      preposition (in, on, etc)
    examples of grammatical continuations
      what did william put ... with something
    ? what did the read
    sentence ok up to here: what did the ...
    expecting to find one of the following
      adjective (nice, big etc.) / noun (boy, girl etc.)
    examples of grammatical continuations
      what did the ... whatdoyoucallit hit
    ? william hit jumble with a stick big
    sentence ok up to here: william hit jumble with a stick ...
    expecting to find
      end of sentence

(NB this response is not as helpful as it could be, since the system does not look at the input after the point of failure.)

    ? who did was hit
    sentence ok up to here: who did ...
    expecting to find one of the following
      noun phrase
    examples of grammatical continuations
      who did ... something's thing hit
    ? who william did hit
    sentence ok up to here: who ...
    expecting to find one of the following
      verb1 (did, was, etc.) / verb2 (hit, read, etc.)
    examples of grammatical continuations
      who ... read something
      put something with something

An attraction of this mechanism, apart from its simplicity, is that it is defined for the whole class of CFGs; this class of grammars is currently believed to be more or less adequate for English and for most other languages (Gazdar 1982). The two problems faced by the system of Weischedel and Black seem to have been overcome in a reasonably satisfying way: since after optimisation the only non-determinism in the RTN is due to genuine ambiguity, we can be sure that the system will, given the way it operates, almost always locate accurately the point of failure in all non-ambiguous cases. And of course, when working with such limited domains we can control for ambiguity to a large extent, and deal with it by brute force if necessary. However, no such procedure can be wholly learner-proof (as one of our referees has pointed out). A user might, for example, misspell his intended word and accidentally produce another legitimate word which could fit syntactically. Under these circumstances the parser would proceed unknowingly past the real point of error. The error reports delivered by the system can be as technical or informal as the grammar writer wants, or simply be prompts and examples of correct usage. In practice, simple one word prompts seem to be as useful as any more elaborated response.
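A sketch of the generation side: restart at the state recorded by the failure, walk forward through the network, and substitute a vague word from the sub-lexicon for each grammatical category. Again this is our own Python illustration, not the actual routine; the category names, network and lexicon contents are only assumed here.

    def continuations(rtn, state, sublexicon, depth=4):
        # enumerate short example continuations from the failed state
        if not rtn.get(state) or depth == 0:
            yield []
            return
        for category, target in rtn[state]:
            word = sublexicon.get(category, category)  # category -> vague word
            for rest in continuations(rtn, target, sublexicon, depth - 1):
                yield [word] + rest

    # After "who did ..." the parser stopped while expecting a noun phrase
    rtn = {"q0": [("np", "q1")], "q1": [("verb2", "q2")], "q2": []}
    sublex = {"np": "something's thing", "verb2": "hit"}
    for c in continuations(rtn, "q0", sublex):
        print("who did ...", " ".join(c))
    # -> who did ... something's thing hit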
As will be clear from the examples, both for prompts and continuations, the system uses a restricted sub-lexicon to minimise the likelihood of generating grammatical nonsense. This sub-lexicon contains vague and general purpose words like 'thing' and 'whatsit'. This apart, no extra work has to be done once the grammar has been written: the system uses only its knowledge of what is grammatical to diagnose and report on what is ungrammatical.

DEVELOPMENTS

The mechanism is currently embedded within two small domains. The one illustrated here is 'told' a simple 'story' and then asks or answers questions about that. The sample grammar was intended to demonstrate the interaction of wh questions with passives, among other things. Although we are not here concerned with the semantics of these domains, they are fairly simple, and several different types of semantic components are used depending on the nature of the domain. For some domains a procedural semantics is appropriate, manipulating objects on a screen or asking and answering questions about them. In the 'William' program here a production system, again based on the Pop-11 matching procedures, is used, currently being coupled to a simple backwards chaining inference mechanism.

Neither the grammatical routines nor any embodiment of them constitute a complete tuition system, or anything approaching that: they are merely frameworks for experimentation. But the syntactic error detection routines could be used in many other environments where useful feedback of this type was required, say in database interrogation or machine translation. Within a language tuition context the mechanism could be used to advantage without an associated semantics, in some of the more traditional types of computer aided EFL teaching programs: for example, gap-filling, drill and practice, sentence completion, or grammatical paraphrase tasks. Only trivial adjustments would be needed to the overall mechanism for this to become a powerful and sophisticated framework within which to elaborate such programs.

However, there are several ways in which the general mechanism might be improved upon, most immediately the following:

(i) if a parse fails early in the sentence, the user only gets a report based on that part of the sentence, when there may be more serious errors later on (or some praiseworthy use of the language). In these cases a secondary parse looking for well-formed sub-constituents, in something like the way a chart parser might do, would provide useful information. (I am grateful to Steve Isard and Henry Thompson for this suggestion.)

(ii) the quality of the example continuations could be improved. Eventually it would be desirable to have the generator semantically guided, but this is by no means trivial, even in a limited domain. There are several heuristics which can produce a better type of continuation, however: using a temporary lexicon containing words from the unparsed portion of the sentence, or from the most recently parsed sentences, or combinations of these with the restricted sub-lexicon. In the best cases this type of heuristic can be spectacularly successful, producing a grammatical version of what the user was trying to say. However, they can also flop badly: more testing on real students would be one way of discovering which of these alternatives is best.
(iii) as suggested in Weischedel and Black, it might be profitable to explore the use of semantic grammars - grammars using semantically rather than syntactically motivated categories - in the system. Although of dubious theoretical status, they are a useful engineering tool: the non-terminals can be labelled in a domain-specific way that is transparent for the user, and, being semantically motivated, the system could appear as if it were doing semantic diagnosis of a limited type as well as syntactic diagnosis. For example, instead of being prompted for an adjective, the user might be prompted for 'a word describing the appearance of a car', or something equally specific. Furthermore, the availability of the pre-compilation programs means that it should be possible to use the metarule formalism for these grammars also: this should go some way towards minimising their linguistic disadvantages, namely a tendency to repetition and redundancy in expressing facts about the languages they generate.

The system is written in Pop-11 (a Lisp-like language) within the POPLOG programming environment developed by the University of Sussex. At UEA, POPLOG runs on a VAX 11/780 under VMS.

REFERENCES

Aho, A. and Ullman, J. (1977) Principles of Compiler Design, London: Addison-Wesley Publishing Co.

Davey, A. (1978) Discourse Production, Edinburgh University Press.

Gazdar, G. (1982) Phrase Structure Grammar, in P. Jacobson and G.K. Pullum (eds), The Nature of Syntactic Representation, Dordrecht: D. Reidel Publishing.

Hayes, P.J., and Mouradian, G.V. (1981) Flexible Parsing, AJCL 7, 232-242.

Hendrix, G. (1977) Human Engineering for Applied NL Processing, IJCAI 5, Cambridge MA.

Isard, S.D. (1974) What would you have done if...?, Theoretical Linguistics 1, No 3.

Kwasny, S. and Sondheimer, N. (1981) Relaxation Techniques for Parsing Ill-formed Input, AJCL 7, 99-108.

Weischedel, R. et al. (1978) An Artificial Intelligence Approach to Language Instruction, Artificial Intelligence 10, 3.

Weischedel, R. and Black, J. (1980) Responding Intelligently to Unparsable Inputs, AJCL 6, 97-109. | 1984 | 20 |
GTT: A GENERAL TRANSDUCER FOR TEACHING COMPUTATIONAL LINGUISTICS

P. Shann and J.L. Cochard
Dalle Molle Institute for Semantic and Cognitive Studies, University of Geneva, Switzerland

ABSTRACT

The GTT system is a tree-to-tree transducer developed for teaching purposes in machine translation. The transducer is a specialized production system giving linguists the tools for expressing information in a syntax that is close to theoretical linguistics. Major emphasis was placed on developing a system that is user friendly, uniform and legible. This paper describes the linguistic data structure, the rule formalism and the control facilities that the linguist is provided with.

1. INTRODUCTION

The GTT system (Geneva Teaching Transducer) is a general tree-to-tree transducer developed as a tool for training linguists in machine translation and computational linguistics. (This project is sponsored by the Swiss government.) The transducer is a specialized production system tailored to the requirements of computational linguists, providing them with a means of expressing information in a format close to the linguistic theory they are familiar with. GTT has been developed for teaching purposes and cannot be considered as a system for large scale development. A first version has been implemented in standard Pascal and is currently running on a Univac 1100/61 and a VAX-780 under UNIX. At present it is being used by a team of linguists for experimental development of an MT system for a special purpose language (Buchmann et al., 1984), and to train students in computational linguistics.

2. THE UNIFORMITY AND SIMPLICITY OF THE SYSTEM

As a tool for training computational linguists, major emphasis was placed on developing a system that is user friendly, uniform, and which provides a legible syntax. One of the important requirements in machine translation is the separation of linguistic data and algorithms (Vauquois, 1975). The linguist should have the means to express his knowledge declaratively without being obliged to mix computational algorithms and linguistic data. Production systems (Rosner, 1983) seem particularly suited to meet such requirements (Johnson, 1982); the production set that expresses the object-level knowledge is clearly separated from the control part that drives the application of the productions. Colmerauer's Q-system is the classic example of such a uniform production system used for machine translation (Colmerauer, 1970; Chevalier, 1978: TAUM-METEO). The linguistic knowledge is expressed declaratively using the same data structure during the whole translation process, as well as the same type of production rules for dictionary entries, morphology, analysis, transfer and generation. The disadvantage of the Q-system is its quite unnatural rule-syntax for non-programmers and its lack of a flexible control mechanism for the user (Vauquois, 1978).

In the design of our system the basic uniform scheme of Q-systems has been followed, but the rule syntax, the linguistic data structure and the control facilities have been modernized according to recent developments in machine translation (Vauquois, 1978; Boitet, 1977; Johnson, 1980; Slocum, 1982). These three points will be developed in the next section.

3. DESCRIPTION OF THE SYSTEM

3.1 Overview

The general framework is a production system where linguistic object knowledge is expressed in a rule-based declarative way.
The system takes the dictionaries and the grammars as data, compiles these data, and the interpreter then uses them to process the input text. The decoder transforms the result into a digestible form for the user.

3.2 Data structure

The data structure of the system is based on a chart (Varile, 1983). One of the main advantages of using a chart is that the data structure does not change throughout the whole process of translation (Vauquois, 1978). In the Q-system all linguistic data on the arcs is represented by bracketed strings, causing an unclean mixture of constituent structure and other linguistic attributes such as grammatical and semantic labels. With this representation type checking is not possible. Vauquois proposes two changes:

1) Tree structures with complex labels on the nodes, in order to allow interaction between different linguistic levels such as syntax or semantics.
2) A dissociation of the geometry from a particular linguistic level.

With these modifications a single tree structure with complex labels increases the power of representation, in that several levels of interpretation can be processed simultaneously (Vauquois, 1978; Boitet, 1977). In our system each arc of the chart carries a tree geometry and each node of the tree has a complex labelling consisting of a possible string and the linguistic attributes. Through the separation of geometry and attributes, the linguist can deal with two distinct objects: with tree structures and with complex labels on the nodes of the trees.

Figure 1. Tree with complex labelling: a node carrying the label [string='linguist'; cat=noun; gender=...].

The range or kind of linguistic attributes possible is not predefined by the system. The linguist has to define the types he wants to use in a declaration part, e.g.:

    category = verb, noun, np, pp.
    semantic-features = human, animate.
    gender = masc, fem, neut.

An important aspect of type declaration is the control it offers. The system provides strong syntactic and semantic type checking, thereby constraining the application range in order to avoid inappropriate transductions. The actual implementation allows the use of sets and subsets in the type definition. Further extensions are planned.

Given that in this system the tree geometry is not bound to a specific linguistic level, the linguist has the freedom to decide which information will be represented by the geometry and which will be treated as attributes on the nodes. This representation tool is thus fairly general and allows the testing of different theories and strategies in MT or computational linguistics.
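To make the data structure concrete, here is a small sketch in Python (GTT itself is implemented in Pascal; this fragment and its names are ours, for illustration only). It shows a node with a complex label whose attributes are validated against the linguist's declarations, as in the figure above.

    # Declarations, as the linguist would state them (cf. the example above)
    DECLARE = {
        "cat":    {"det", "noun", "verb", "np", "pp"},
        "gender": {"masc", "fem", "neut"},
        "semantic-features": {"human", "animate"},
    }

    class Node:
        """A tree node: a possible string plus typed linguistic attributes."""
        def __init__(self, string=None, daughters=(), **attrs):
            self.string = string
            self.daughters = list(daughters)   # the tree geometry
            self.attrs = {}
            for name, value in attrs.items():
                self.set(name, value)

        def set(self, name, value):
            # strong type checking: attribute and value must both be declared
            if name not in DECLARE or value not in DECLARE[name]:
                raise TypeError(f"illegal attribute assignment {name}={value}")
            self.attrs[name] = value

    leaf = Node(string="linguist", cat="noun", gender="masc")   # accepted
    # Node(string="linguist", cat="blue") would raise TypeError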
The power of unrestricted rewriting rules makes the transducer a versatile inset for express- ing any rule-governed aspect of language whether this be norphology, syntax, semantics. The fact that the statements are basically phrase structure rules makes this language particularly congenial to linguists and hence well-suited for teaching purposes. The fozmat of rules is detenuined by the sepa- ration of tree structure and attributes on the nodes. Each rule has three parts: geometry, condi- tions and assignments, e.g.: RULE1 a + b ~ c(a,b) IF cat(a) = [det] and cat(b) = [nou~ (assist) ~ cat(c) := [n~; The geometry has the standard left-hand side, pro- duction symbol (~, and right-hand side of a pro- duction rule. a,b,c are variables describing the nodes of the tree structure. The '+' indicates the sequence in the chart, e.g. a+b : a b Tree configurations are indicated by bracketing, c(a,b) correspc~ds to : ----9 /c\ a b Conditions and asslgrm~nts affect only the objects on the nodes. 3.4 Control structure The linguist has ~ tools for controlling the application of the rewriting rules : i) The rules can be grouped into packets (grammars) which are executed in sequence. 2) Within a given grammar the rule-application can be controlled by means of paraneters set by the linguist. According to the linguistic operation en- visaged, the parameters can be set to a ccmbination of serial or parallel and one-pass or iterate. In all, 4 different combinations are possible : parallel and one-pass parallel and iterate serial and one-pass serial and iterate 89 In the parallel mode the rules within a gram- mar are considered as being unordered from a logi- cal point of view. Different rules can be applied on the same piece of data and produce alternatives in the chart. The chart is updated at the end of every application-cycle. In the serial mode the rules are considered as being ordered in a sequen- ce. Only one rule can be fired for a particular piece of data. But the following rules can match the result prDduced by a preceding rule. The chart is updated after every rule that fired. The para- meters one-pass and iterate control the nunber of cycles. Either the interpreter goes through a cy- cle only once, or iterates the cycles as long as any rule of the grammar can fire. The four ccmbinations allow different uses according to the linguistic task to be performed, e.g.: Parallel and iterate applies the rules non-deter- ministically to cc~pute all possibilities, which gives the system the power of a Turing Maritime (this is the only control mode for the Q-system). Parallel and one-pass is the typical ccrnbination for dictionaries that contain alternatives. Two different rules can apply to the sane piece of data. The exhale below (fig. 2) uses this combi- nation in the first GRAMMAR 'vocabulary'. Serial and one-pass allows rule ordering. A possible application of this combination is a pre- ference mechanism via the explicit rule ordering using the longest-match-first technique. The 'preference' in the example below (fig. 2) makes use of that by progressive weakening of the selectional restriction of the verb 'drink'. Rule 24 fires without semantic restrictions and rule 25 accepts sentences where the optional argu- ment is missing. The ~le should be sufficiently self-expla- natory. It begins with the declaration of the attributes and contains three grannars. The result is shown for two sentences (fig. 3). 
To demonstrate which rule in the preference gran~ar has fired each rule prDduces a different top label: rule 21 = PHI, rule 22 . PH2, etc. Figure 2. Example of a grammar file. DECLARE cat ~ dot, noun, verb, val_nodo, np, phi, ph2, ph3, ph4, phE; number 5 sg, pl; marker =human, liquld, notdrinkablo, phyeobj°abetr; valancu 5 vl, v2, v3~ argument - argl, erg],arg3J GRAHMAR vocebulerU PARN_L ~t QNEPASS RULE 1 a -) • ZF strlnQ (a) 5 "the" THEN cat(aJ :~ [dot]; RULE 2 a -> a ZF strtna(a)5 "man" THEN cat(a~ :~ [noun]; number(a) :" [sg]J markor(e) :5 [human]; RULE 3 a :> a XF string(a) m "boor* THEN cat(a~ :5 [noun]; number(a) :~ Csg]; marker(a) :~ C11qutd]; RULE 4 a 5) a IF strlnq (a) m "car' THEN ca%Ear :m [noun]J number(a) :" [eg]; marker(a) :m [phyeobj]; RULE 5 a 5 [F e~r~nala)" "gaxolLno' THEN cat(a~ :5 [noun]; number(a) :5 Gig]; markor(a) :i £notdr£nkable]l RULE & a 5~ a ]F string(e)- "drinks" THEN cat(el :~ [noun]; number(a) :5 [pl]~ markor(a) :m [1Lqutd]; RULE 7 a -) a(b0c) IF string(e)5 "drinks" : THEN cat(a?: ~[Vorb]J valencu (a):5[V]]l cat(b).~[val node]; cat(c):5[val node]; argument(b): ;[argl]J markor(b):-C~uman]; argument(c):5[ar92]; marko~(c):-CIL;utd]; GRAMMAR nounphraee SERIAL ONEPASS RULE 21 a + b m) tEa, b) [F cat(a) 5 [dot] and cat(b) 5 [noun] THEN cat(c) :5 [np]; marker(c) :u markor(b)J GRAMMAR proforence SERIAL ON[PASS RULE 21 a + b(#l,c,#2, d, W3) + e_m) ~(b,a~a)m , . |F cat(a)ECnp] and cat(b)ECveroJ ago ca;Le; ;npJ and valency(b) 5Cv2] and araumont(¢)mCar9 L] and marker(c)~marke r(a) and argument(d)ECar92] end marker(d)mma~ko r(a) THEN cat(x) :- £phl]J RULE 22 a + b(Ol, c,#a) + • 5> x(b,e,e) . . IF cat(a)mCnp] and cat(b) mCvOrb] and cat(e)~LnpJ and valencu(b) =[v]] and argument(c)sCar91] and ma~kor(c)-marker(a) THEN cat(x) :5 [ph2]; RULE 23 4 + b(#1, c,#2) + • ~) z(b,a,o) ZF ca%(a)-Cnp] and cat(b)aCvorb] and cet(o)~Cnp] and valoncu(b) m£v2] and aTgumlnt(c)m[arg 2] and marker(c)Emarkor(a ) THEN Cat(x) :m £ph3]; RULE 24 a + b + • 5~ x(b,a.e) IF cat(a)m(np] end cat(b)=Cverb] and cat(e)~Cnp] and valence(D) 5[V2] THEN cat(x) :5 £ph4]; RULE 25 a + b 5) x(b,a) IF cat(a)5[np] and cat(b)m[verb] and valoncu(b) 5(v2] THEN cat(x) :5 [phE]J ENDFILE Figure 3. Output of upper granmar file. Input sentence : (1) The men drinks tho boor. Result : PHI CATmCPHI] ! I-~DRINKS' CATs[VERB] VALENCYEEV~] i i -~AJ-'-NQDE CATE(VAL_NODE] MARKER--[HUMAN] ARQUMENT--CARQI~ ; i-VALNODE CATECVAL_NQDE] MARKERECLIGU[D] AROUMENTECARQ23 I-NP CAT'[NP] MARKER'[HUMAN] i; .i-'THE' CATmCDET] !-'MAN' CAT~CNOUN] NUHEER~CSQ] MARKERs[HUMAN] I i-NP CATE[NP] •ARKERE[LIGUID] i -'THE' ¢AT-CDET] i-'BEER" CATBCNGUN] NUMBERE[EQ] RARKERE[LZQUZD] Xnput sentence : (2) The man drinks the gazoline. Result : PH2 CATmCPH2 ] !-'DRINKS" CATmEVERB] VALENCYsEVS] i I-VALNOgE CAT-CVAL,.NQDE] NARKER=CHUHAN] ARGUMENT-CARQI] ! !-VAL_NODE CAT=[VALNQDE] HARMER=CLZGUZD] ARGUMENT=CARG2] i -NP CAT-(NP] NARKER=(HUNAN] • ! I I-'THE" CAT=CDET] ' !-'MAN" CAT=(NOUN] NUMBERmCSG] MARKER-[HUMAN] ! ~-NP CATBCNP] MARKER~CNOTDRINKABLE] ~-'THE" CAT=(DET] i-'GAZOL[NE" CATuCNOUN] NUMBERsCEQ] HARKERs(NQTDRZNKABLE] 90 4. FACILITIES FOR THE USER There is a system user-interaction in the two main prograns of the system, the compiler and the interpreter. The following exanple (fig. 4) shows how the error n~_ssages of the ccrnpiler are printed in the u~L~ilation listing. Each star with a number points to the approximate position of the error and a message explains the possible errors. 
The cc~piler tries to correct the error and in the worst case ignores that portion of the text follo- wing the error. @RAHMAR er~ortest PARALEL ITERATE *0 pop. O : -ES- ISERIAL/ ou /PARALLEL/ attendu RULE 1 a+b m) c(a,b) [F ETRING(a)m'blable' ANO cot(b)m[nom THEN cAt(d) :m [nom]; POe1 *2 pos. 0 -E8- /,/ attendua pop. 1 -E8- /3/ ottendue pop. 2 -SEN- td. pop de~lni dane 14 geometria (cote d~oit) RULE 2 a(a) m) c(a,b) *0 pop. 0 : -SKM-- ld. deJa utlllso put pa~tie gouche ZF cot(a)m[det] THEN categ(b) :m [noun]; oO o1 pop. ~ i -SEH- ld. ne represente poe un ensemble pos. -SEPI- id. ne ~ep~esente pas un o|ement Figure 4. Compilation listing with error message. The interpreter has a parameter that allows the sequence of rules that fired to be traced. The tra- ce in figure 5 below corresponds to the execution of the example (i) in figure 3. int|rpreteur do @-cedes O'J.| few-14-84 applicotten de lo ~egle 1 application de la regle 1 applicotion de 14 ~egle 2 application de lo regle 3 application de la reglp 6 application de la ~ogle 7 VOCABULARY execute(e) application de lo ~eglo 11 application de lo ~egle 11 NOUNPHRASE execute(e) application de la ~ogle 21 PREFERENCE execute(e) temps d'lnterp~atotion : O.~lb Po¢. CPU 3.583 soc. utllisateur Figure 5. Trace of execution. 5. CONCLUSION The transducer is implemented in a m0dular style to allow easy changes to or addition of ccm- ponents as the need arises. Tnis provides the pos- sibility of experimentation and of further deve- lopment in various directions: - integration of a lexical database with special editing facilities for lexioographers; - developments of special interpreters for trans- fer or scoring mechanis~s for heuristics; - refinement of linguistically motivated type d~ecking. In this paper we have mainly conoentrated on syn- tactic applications to illustrate the use of the transducer. However, as we hope to have shown, the formalism of the system is general enough to allow interesting applications in various domains of ion- guistics such as morphology, valency matching and preference mechanisms (Wilks, 1983). A C ~ N ~ Special thanks should go to Roderick Johnson of CCL, UMIST, who contributed a great deal in the original design of the system presented here, and who, through frequent fruitful discussion, has continued to stimulate and influence later deve- lopments, as well as to Dominique Petitpierre and Lindsay Hammond who programmed the initial i~le- mentation. We would also like to thank all bets of ISSO0 who have participated in the work, particularly B. Buchmann and S. Warwick. r/~rmK~ES Buchmann, B., Shann, P., Warwick, S. (1984). Design of a Machine Translation System for a Sublanguage. Prooeedings, COLING' 84. Chevalier, M., Dansereau, 5., Poulin, G. (1978). TA[94-M~I'~O : description du syst~. T.A.U.M., Groupe de recherdue en traduction autcmatique, Univez~it@ de Montreal, janvier 1978. Colmerauer, A. (1970). Los syst~nes-Q ou un forma- lisme pour analyser et synth~tiser des phrases sur ordinateur. Universit@ de Montreal. Johnson, R.L. (1982). Parsing - an MT Perspective. In: K. Spazk Jones and Y. Wilks (eds.), Automa- tic Natural Language Parsing, M~morand%~n I0, Cognitive Studies Centre, University of Essex. }~Dsner, M. (1983). Production SystEm~. In: M. King (ed.), Parsing Natural Language, Aca- demic Press, London. Sloc~n, J. and Bennett, W.S. (1982). Tne LRC Ma- chine Translation System: An Application of State-of-the-Art Text and Natural Language Processing Techniques to the Translation of Tedunical Manuals. 
Working paper LRC-82-1, Linguistics Research Center, University of Texas at Austin.

Vauquois, B. (1975). La traduction automatique à Grenoble. Documents de Linguistique Quantitative, 24. Dunod, Paris.

Vauquois, B. (1978). L'évolution des logiciels et des modèles linguistiques pour la traduction automatisée. T.A. Informations, 19.

Varile, G.B. (1983). Charts: A Data Structure for Parsing. In: M. King (ed.), Parsing Natural Language, Academic Press, London.

Wilks, Y. (1973). An Artificial Intelligence Approach to Machine Translation. In: R.C. Schank and K.M. Colby (eds.), Computer Models of Thought and Language, W.H. Freeman, San Francisco, pp. 114-151. | 1984 | 21 |
A PARSING ARCHITECTURE BASED ON DISTRIBUTED MEMORY MACHINES

Jon M. Slack
Department of Psychology, Open University, Milton Keynes MK7 6AA, ENGLAND

ABSTRACT

The paper begins by defining a class of distributed memory machines which have useful properties as retrieval and filtering devices. These memory mechanisms store large numbers of associations on a single composite vector. They provide a natural format for encoding the syntactic and semantic constraints associated with linguistic elements. A computational architecture for parsing natural language is proposed which utilises the retrieval and associative features of these devices. The parsing mechanism is based on the principles of Lexical Functional Grammar, and the paper demonstrates how these principles can be derived from the properties of the memory mechanisms.

I INTRODUCTION

Recently, interest has focussed on computational architectures employing massively parallel processing [1,2]. Some of these systems have used a distributed form of knowledge representation [3]. This type of representation encodes an item of knowledge in terms of the relationships among a collection of elementary processing units, and such assemblages can encode large numbers of items. Representational similarity and the ability to generalize are the principal features of such memory systems. The next section defines a distributed memory machine which incorporates some of the computational advantages of distributed representations within a traditional von Neumann architecture. The rest of the paper explores the properties of such machines as the basis for natural language parsing.

II DISTRIBUTED MEMORY MACHINES

Distributed memory machines (DMMs) can be represented formally by the septuple DMM = (V, X, Y, Q, q0, ρ, λ), where

    V is a finite set denoting the total vocabulary;
    X is a finite set of inputs, and X ⊆ V;
    Y is a finite set of acceptable outputs, and Y ⊆ V;
    Q is a set of internal states;
    q0 is a distinguished initial state;
    ρ: Q x X --> Q, the retrieval function;
    λ: Q --> Q x Y, the output function.

Further, where Y* denotes the set of all finite concatenations of the elements of the set Y, Q ⊆ Y*, and therefore Q ⊆ V*. This statement represents the notion that internal states of DMMs can encode multiple outputs or hypotheses. The vocabulary, V, can be represented by the space I^k, where I is some interval range defined within a chosen number system N; I ⊆ N. The elements of X, Y and Q are encoded as k-element vectors, referred to as memory vectors.

A. Holographic Associative Memory

One form of DMM is the holographic associative memory [4,5,6], which encodes large numbers of associations on a single composite vector. Items of information are encoded as k-element zero-centred vectors over an interval such as [-1,+1]; <X> = (..., x_-1, x_0, x_+1, ...). Two items, <A> and <B> (angular brackets denote memory vectors), are associated in memory through the operation of convolution. This method of association formation is fundamental to the concept of holographic memory, and the resulting associative trace is denoted <A>*<B>. The operation of convolution is defined by the equation

    (<A>*<B>)_i = Σ_j A_j B_(i-j)

and has the following properties [7]:

    Commutative: <A>*<B> = <B>*<A>
    Associative: <A>*(<B>*<C>) = (<A>*<B>)*<C>

Further, where a delta vector, denoted δ, is defined as a vector that has values of zero on all features except the central feature, which has a value of one, then <A>*δ = <A>. Moreover, <A>*0 = 0, where 0 is a zero vector in which all feature values are zero. Convolving an item with an attenuated delta vector (i.e., a vector with values of zero on all features except the central one, which has a value between 0 and 1) produces the original item with a strength that is equal to the value of the central feature of the attenuated delta vector.

The initial state, q0, encodes all the associations stored in the machine. In this model, associative traces are concatenated (+) through the operations of vector addition and normalization to produce a single vector. Overlapping associative items produce composite vectors which represent both the range of items stored and the central tendency of those items. This form of prototype generation is a basic property of distributed memories.

The retrieval function, ρ, is simulated by the operation of correlation. If a state q encodes the association <A>*<B>, then presenting, say, <A> as an input, or retrieval key, produces a new state which encodes the item <B>', a noisy version of <B>, under the operation of correlation. This operation is defined by the equation

    (<A>#<B>)_i = Σ_j A_j B_(i+j)

and has the following properties:
Convolving an item wlth an attenuated delta vector (i.e., a vector with values of zero on all features except the central one, which has a value between 0 and i) produces the original item with a strength that is equal to the value of the central feature of the attenuated delta vector. The initial state, qo, encodes all the associations stored in the machine. In this model, associative traces are concatenated (+) through the operations of vector addition and normalization to produce a single vector. Overlapping associative items produce composite 92 vectors which represent both the range of items stored and the central tendency of the those items. This form of prototype generation is a basic property of distributed memories. The retrieval function,@ , is simulated by the operation of correlation. If the state, q~, encodes the association <A>*<B>, then presenting say <A> as an input, or retrieval key, produces a new state, q{~, which encodes the item <B>', a noisy version of <B>, under the operation of correlation. This operation is defined by the equation (<A>#<B>~=~A%Bm,%and has the following properties: % An item correlated with itself, autocorrelation, produces an approximation to a delta vector. If two similar memory vectors are correlated, the central feature of the resulting vector will be equal to their similarity, or dot product, producing an attenuated delta vector. If the two items are completely independent, correlation produces a zero vector. The relation between convolution and correlation is given by <A>~(<A>*<B>) = (<A>~<A>)*<B> + (<A>~<B>)*<A> + noise ...(I) where the noise component results from some of the less significant cross products. Assuming that <A> and <B> are unrelated, Equation (I) becomes: <AMI(<A>*<B>) = ~*<B> + 0*<A> + noise - <B> + 0 + noise Extending these results to a composite trace, suppose that q encodes two associated pairs of four unrelated items forming the vector (<A>*<B> + <C>*<D>). When <A> is given as the retrieval cue, the reconstruction can be characterized as follows: <A>~(<A>*<B> + <C>*<D>) = (<A>~t<A>)*<B> + (<A>~<B>)*<A> + noise + (<A>~<C>)*<D> + (<A>@<D>)*<C> + noise = ~ *<B>+0*<A>+noise+O*<D>+O*<C>+noise - <B> + noise + noise When the additional unrelated items are added to the memory trace their affect on retrieval is to add noise to the reconstructed item <B>, which was associated with the retrieval cue. In a situation in which the encoded items are related to each other, the composite trace causes all of the related items to contribute to the reconstructed pattern, in addition to producing noise. The amount of noise added to a retrieved item is a function of both the amount of information held on the composite memory vector and the size of the vector. III BUILDING NATURAL LANCUACZ PARSERS A. Case-Frame Parsing The computational properties of distributed memory machines (DMM) make them natural mechanisms for case-frame parsing. Consider a DMM which encodes case-frame structures of the following form: <Pred>*(<Cl>*<Pl> + <C2>*<P2> + ...+ <Cn>*<Pn>) where <Pred> is the vector representing the predicate associated with the verb of an input clause; <C1> to <Cn> are the case vectors such as <agent>, <instrument>, etc., and <PI> to <Pn> are vectors representing prototype concepts which can fill the associated cases. 
These structures can be made more complex by including tagging vectors which indicate such features as obligatory case, as shown in the case-frame vector for the predicate BREAK: (<agent>*<anlobJ+natforce> + <obJect>*<physobJ> *<obllg> + <instrument>*<physobJ>) In this example, the object case has a prototype covering the category of physical objects, and is tagged as obligatory. The initial state of the DMM, qo, encodes the concatenation of the set of case-frame vectors stored by the parser. The system receives two types of inputs, noun concept vectors representing noun phrases, and predicate vectors representing the verb components. If the system is in state qo only a predicate vector input produces a significant new state representing the case-frame structure associated with it. Once in this state, noun vector inputs identify the case slots they can potentially fill as illustrated in the following example: In parsing the sentence Fred broke the window with e stone, the input vector encodin E broke will retrieve the case-frame structure for break given above. The input of <Fred> now gives <Fred>~q<agent>*<Pa>+<obJ>*<Po>+<instr>*<Pi>) " <Fred>g<agent>*<Pa>+<Fred>~<Pa>*<agent> + ... - 0*<Pa>+ee*<agent> O*<Po>+e@*<obJ> + O*<Pi>+e%*<instr> : e~agent> + e~obJ> + es<instr> where ej is a measure of the similarity between the vectors, and underlying concepts, <Fred> and the case prototype <Pj>. In this example, <Fred> would be identified as the agent because e 0 and e~ would be low relative to ee. The vector is "cleaned-up" by a threshold function which is a component of the output function,)%. This process is repeated for the other noun concepts in the sentence, linking <window> and <stone> with the object and instrument cases, respectively. However, the parser requires additional machinery to handle the large set of sentences in which the case assignment is ambiguous using semantic knowledge alone. B. Encodin~ Syntactic Knowledge Unambiguous case assignment can only be achieved through the integration of syntactic and semantic processing. Moreover, an adequate parser should generate an encoding of the grammatical relations between sentential elements in addition to a semantic representation. The rest of the paper demonstrates how the properties of DMMs can be combined with the ideas embodied in the theory of Lextcal-functional CTammar (LFG) [8] in a parser which builds both types of relational structure. 93 In LFG the mapping between grammatical and semantic relations is represented directly in the semantic form of the lexlcal entries for verbs. For example, the lexlcal entry for the verb hands is given by hands: V, #participle) = NONE #tense) = PRESENT (tsubJ hum) = SO ~pred) = HAND[@subJ)#obj2)@obJ)] where the arguments of the predicate HAND are ordered such that they map directly onto the arguments of the semantic predicate-argument structure. The order and value of the arguments in a lexical entry are transformed by lexlcal rules, such as the passive, to produce new lexical entries, e.g., HAND[#byobJ)~subJ)(~oobJ)]. 
The direct mapping between lexical predicates and case-frame structures is encoded on the case-frame DMM by augmenting the vectors as follows: Hands:- <HAND>*(<agent>*<Pa>*<subJ> + <obJect>*<Po>*<obJ2>+<goal>*<Pg>*<obJ>) When the SUBJ component has been identified through syntactic processing the resulting association vector, for example <subJ>*<John> for the sentence John handed Mary the book, will retrieve <agent> on input to the CF-DMM, according to the principles specified above. The multiple lexical entries produced by lexical rules have corresponding multiple case-frame vectors which are tagged by the appropriate grammatical vector. The CF-DMM encodes multiple case-frame entries for verbs, and the grammatical vector tags, such as <PASSIVE>, generated by the syntactic component, are input to the CF-DMM to retrieve the appropriate case-frame for the verb. The grammatical relations Between the sententlal elements are represented in the form of functional structure (f-structures) as in LFG. These structures correspond to embedded lists of attrlbute-value pairs, and because of the Uniqueness criterion which governs their format they are efficiently encoded as memory vectors. As an example, the grammatical relations for the sentence John handed Mary a book are encoded in the f-structure below: SUBJ NUM RED 'JO PAST 'HAND[( SUBJ)( OSJ2)( OBJ)] TENSE PRED OBJ [~UM MARY 3 SG RED " OBJ2 [~C ASG K~" [,PRED "BOO The lists of grammatical functions and features are encoded as single vectors under the + operator, and the embedded structure is preserved by the associative operator, *. The f-structure is encoded by the vector (<SUBJ>*(<NUM>*<SG>+<PRED>*<JOHN>) + <TENSE> *<PAST> + <PRED>*(<HAND>*(<#SUBJ>*<TOBJ2>* <TOBJ>)) + <OBJ>*(<NUM>*<SG>+<PRED>*<MARY>)+ <OBJ2>*(<SPEC>*<A>+<NUM>*<SG>+<PRED>*<BOOK>)) This compatibility between f-structures and memory vectors is the basis for an efficient procedure for deriving f-structures from input strings. In LFG f-structures are generated in three steps. First, a context-free grammar (CFG) is used to derive an input string's constituent structure (C-structure). The grammar is augmented so that it generates a phrase structure tree which includes statements about the properties of the string's f-structure. In the next step, this structure is condensed to derive a series of equations, called the functional description of the string. Finally, the f-structure is derived from the f-description. The properties of DMMs enable a simple procedure to be written which derives f-structures from augmented phrase structure trees, obviating the need for an f-descrlptlon. Consider the tree in figure 1 generated for our example sentence: ~SUBJ) - & St& ~ENSE)-PAST \ @FRED) =HAND[ ..] ~ \ (I'NUM)- SO [ ~OBJ)-~, ~PRED)=JOHN) I (~qUM) =SG ~PRED)=MARY #OBJ2)=& / [ b John handed Mary a k Figure I. Augmented Phrase Structure Tree The f-structure, encoded as a memory vector, can be derived from this tree by the following procedure. First, all the grammatical functions, features and semantic forms must be encoded as vectors. The~-variables, f,-f#, have no values at this point; they are derived by the procedure. All the vectors dominated by a node are concatenated to produce a single vector at that node. The symbol '=" is interpreted as the association operator ,*. Applying this interpretation to the tree from the bottom up produces a memory vector for the value of f! which encodes the f-structure for the string, as given above. 
Accordingly, f2 takes the value (<NUM>*<SG> + <PRED>*<JOHN>); applying the rule specified at the node, (f1 SUBJ) = f2, gives <SUBJ>*(<NUM>*<SG> + <PRED>*<JOHN>) as a component of f1. The other components of f1 are derived in the same way. The front-end CFG can be viewed as generating the control structure for the derivation of a memory vector which represents the input string's f-structure.

The properties of memory vectors also enable the procedure to automatically determine the consistency of the structure. For example, in deriving the value of f4, the concatenation operator merges the (NUM) = SG features for "a" and "book" to form a single component of the f4 vector, (<SPEC>*<A> + <NUM>*<SG> + <PRED>*<BOOK>). However, if the two features had not matched, producing the vector component <NUM>*(<SG>+<PL>) for example, the vectors encoding the incompatible feature values are set such that their concatenation produces a special control vector which signals the mismatch.
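The bottom-up encoding procedure itself is short enough to sketch (our own illustration; the clash-signalling control vectors of the last paragraph are not modelled here, and the symbol encoding is the same circular-convolution stand-in used earlier). Attribute-value lists become sums, and embedding becomes convolution with the attribute's vector.

    import numpy as np

    k = 1024
    rng = np.random.default_rng(2)
    _symbols = {}
    def sym(name):  # one fixed random vector per grammatical symbol or value
        if name not in _symbols:
            _symbols[name] = rng.normal(0.0, 1.0 / np.sqrt(k), k)
        return _symbols[name]

    conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def encode(fstruct):
        # attribute-value lists are summed; '=' becomes association with the
        # attribute's vector; embedded f-structures are encoded recursively
        total = np.zeros(k)
        for attr, value in fstruct.items():
            v = encode(value) if isinstance(value, dict) else sym(value)
            total += conv(sym(attr), v)
        return total

    f1 = encode({
        "SUBJ":  {"NUM": "SG", "PRED": "JOHN"},
        "TENSE": "PAST",
        "OBJ":   {"NUM": "SG", "PRED": "MARY"},
        "OBJ2":  {"SPEC": "A", "NUM": "SG", "PRED": "BOOK"},
    })
    print(f1.shape)   # the whole f-structure lives on one k-element vector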
C. A Parsing Architecture

The ideas outlined above are combined in the design of a tentative parsing architecture shown in Figure 2.

[Figure 2. Parsing Architecture]

The diamonds in the figure denote DMMs, and the ellipse denotes a form of DMM functioning as a working memory for encoding temporary f-structures. As elements of the input string enter the lexicon their associated entries are retrieved. The syntactic category of the element is passed on to the CFG, and the lexical schemata {e.g., (↑PRED)='JOHN'}, encoded as memory vectors, are passed to the f-structure working memory. The lexical entry associated with the verb is passed to the case-frame memory to retrieve the appropriate set of structures. The partial results of the CFG control the formation of memory vectors in the f-structure memory, as indicated by the broad arrow. The CFG also generates grammatical vectors as inputs for the case-frame memory to select the appropriate structure from the multiple encodings associated with each verb. The partial f-structure encoding can then be used as input to the case-frame memory to assign the semantic forms of grammatical functions to case slots. When the end of the string is reached, both the case-frame instantiation and the f-structure should be complete.

IV CONCLUSIONS

This paper attempts to demonstrate the value of distributed memory machines as components of a parsing system which generates both semantic and grammatical relational structures. The ideas presented are similar to those being developed within the connectionist paradigm [1]. Small and his colleagues [9] have proposed a parsing model based directly on connectionist principles. The computational architecture consists of a large number of appropriately connected computing units communicating through weighted levels of excitation and inhibition. The ideas presented here differ from those embodied in the connectionist parser in that they emphasise distributed information storage and retrieval, rather than distributed parallel processing. Retrieval and filtering are achieved through simple computable functions operating on k-element arrays, in contrast to the complex interactions of the independent units in connectionist models. In Figure 2, although the network of machines requires heterarchical control, the architecture can be considered to be at the lower end of the family of parallel processing machines [10].

V REFERENCES

[1] Feldman, J.A. and Ballard, D.H. Connectionist models and their properties. Cognitive Science, 1982, 6, 205-254.
[2] Hinton, G.E. and Anderson, J.A. (Eds.) Parallel Models of Associative Memory. Hillsdale, NJ: Lawrence Erlbaum Associates, 1981.
[3] Hinton, G.E. Shape representation in parallel systems. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vol. 2, Vancouver, BC, Canada, August, 1981.
[4] Longuet-Higgins, H.C., Willshaw, D.J., and Buneman, O.P. Theories of associative recall. Quarterly Reviews of Biophysics, 1970, 3, 223-244.
[5] Murdock, B.B. A theory for the storage and retrieval of item and associative information. Psychological Review, 1982, 89, 609-627.
[6] Kohonen, T. Associative Memory: A System-Theoretical Approach. Berlin: Springer-Verlag, 1977.
[7] Borsellino, A., and Poggio, T. Convolution and correlation algebras. Kybernetik, 1973, 13, 113-122.
[8] Kaplan, R., and Bresnan, J. Lexical-Functional Grammar: A formal system for grammatical representation. In J. Bresnan (ed.), The Mental Representation of Grammatical Relations. Cambridge, Mass.: MIT Press, 1982.
[9] Small, S.L., Cottrell, G.W., and Shastri, L. Toward connectionist parsing. In Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, Pennsylvania, 1982.
[10] Fahlman, S.E., Hinton, G.E., and Sejnowski, T. Massively parallel architectures for AI: NETL, THISTLE, and Boltzmann machines. In Proceedings of the National Conference on Artificial Intelligence, Washington, D.C., 1983.

95 | 1984 | 22 |
AUTOMATED DETERMINATION OF SUBLANGUAGE SYNTACTIC USAGE

Ralph Grishman and Ngo Thanh Nhan, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012

Elaine Marsh, Navy Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, Washington, DC 20375

Lynette Hirschman, Research and Development Division, System Development Corporation / A Burroughs Company, Paoli, PA 19301

Abstract

Sublanguages differ from each other, and from the "standard" language, in their syntactic, semantic, and discourse properties. Understanding these differences is important if we are to improve our ability to process these sublanguages. We have developed a semi-automatic procedure for identifying sublanguage syntactic usage from a sample of text in the sublanguage. We describe the results of applying this procedure to three text samples: two sets of medical documents and a set of equipment failure messages.

Introduction

A sublanguage is the form of natural language used by a community of specialists in addressing a restricted domain. Sublanguages differ from each other, and from the "standard" language, in their syntactic, semantic, and discourse properties. We describe here some recent work on (semi-)automatically determining the syntactic properties of several sublanguages. This work is part of a larger effort aimed at improving the techniques for parsing sublanguages.

If we examine a variety of scientific and technical sublanguages, we will encounter most of the constructs of the standard language, plus a number of syntactic extensions. For example, report sublanguages, such as are used in medical summaries and equipment failure summaries, include both full sentences and a number of fragment forms [Marsh 1983]. Specific sublanguages differ in their usage of these syntactic constructs [Kittredge 1982, Lehrberger 1982].

Identifying these differences is important in understanding how sublanguages differ from the language as a whole. It also has immediate practical benefits, since it allows us to trim our grammar to fit the specific sublanguage we are processing. This can significantly speed up the analysis process and block some spurious parses which would be obtained with a grammar of overly broad coverage.

Determining Syntactic Usage

Unfortunately, acquiring the data about syntactic usage can be very tedious, inasmuch as it requires the analysis of hundreds (or even thousands) of sentences for each new sublanguage to be processed. We have therefore chosen to automate this process. We are fortunate to have available to us a very broad coverage English grammar, the Linguistic String Grammar [Sager 1981], which has been extended to include the sentence fragments of certain medical and equipment failure reports [Marsh 1983]. The grammar consists of a context-free component augmented by procedural restrictions which capture various syntactic and sublanguage semantic constraints. The context-free component is stated in terms of grammatical categories such as noun, tensed verb, and adjective.

To begin the analysis process, a sample corpus is parsed using this grammar. The file of generated parses is reviewed manually to eliminate incorrect parses. The remaining parses are then fed to a program which counts -- for each parse tree and cumulatively for the entire file -- the number of times that each production in the context-free component of the grammar was applied in building the tree. This yields a "trimmed" context-free grammar for the sublanguage (consisting of those productions used one or more times), along with frequency information on the various productions.
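The production-counting step lends itself to a very short program. The sketch below is illustrative only: the tree representation (nested tuples of category label and children) is a hypothetical stand-in for the Linguistic String Parser's actual output format, and the tiny corpus is invented.

    from collections import Counter

    def count_productions(tree, counts):
        # Each non-lexical node records one application of the
        # production label -> (labels of its children).
        label, children = tree
        if isinstance(children, str):     # lexical node: a word
            return
        counts[(label, tuple(child[0] for child in children))] += 1
        for child in children:
            count_productions(child, counts)

    # A tiny, invented corpus of (already manually reviewed) parses.
    corpus = [
        ("S", [("NP", [("N", "John")]),
               ("VP", [("TV", "handed"),
                       ("NP", [("N", "Mary")]),
                       ("NP", [("DET", "a"), ("N", "book")])])]),
    ]

    counts = Counter()
    for tree in corpus:
        count_productions(tree, counts)

    # The trimmed grammar: every production applied at least once,
    # listed with its cumulative frequency.
    for (lhs, rhs), freq in counts.most_common():
        print(lhs, "->", " ".join(rhs), f"(x{freq})")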
This process was initially applied to text samples from two sublanguages. The first is a set of six patient documents (including patient history, examination, and plan of treatment). The second is a set of electrical equipment failure reports called "CASREPs", a class of operational report used by the U.S. Navy [Froscher 1983]. The parse file for the patient documents had correct parses for 236 sentences (and sentence fragments); the file for the CASREPs had correct parses for 123 sentences. We have recently applied the process to a third text sample, drawn from a sublanguage very similar to the first: a set of five hospital discharge summaries, which include patient histories, examinations, and summaries of the course of treatment in the hospital. This last sample included correct parses for 310 sentences.

Results

The trimmed grammars produced from the three sublanguage text samples were of comparable size. The grammar produced from the first set of patient documents contained 129 non-terminal symbols and 248 productions; the grammar from the second set (the "discharge summaries") was slightly larger, with 134 non-terminals and 282 productions. The grammar for the CASREP sublanguage was slightly smaller, with 124 non-terminals and 220 productions (this is probably a reflection of the smaller size of the CASREP text sample). These figures compare with 255 non-terminal symbols and 744 productions in the "medical records" grammar used by the New York University Linguistic String Project (the "medical records" grammar is the Linguistic String Project English Grammar with extensions for sentence fragments and other, sublanguage specific, constructs, and with a few options deleted).

Figures 1 and 2 show the cumulative growth in the size of the trimmed grammars for the three sublanguages as a function of the number of sentences in the sample. In Figure 1 we plot the number of non-terminal symbols in the grammar as a function of sample size; in Figure 2, the number of productions in the grammar as a function of sample size. Note that the curves for the two medical sublanguages (curves A and B) have pretty much flattened out toward the end, indicating that, by that point, the trimmed grammar covers a very large fraction of the sentences in the sublanguage. (Some of the jumps in the growth curves for the medical grammars reflect the division of the patient documents into sections (history, physical exam, lab tests, etc.) with different syntactic characteristics. For the first few documents, when a new section begins, constructs are encountered which did not appear in prior sections, thus producing a jump in the curve.)

The sublanguage grammars are substantially smaller than the full English grammar, reflecting the more limited range of modifiers and complements in these sublanguages. While the full grammar has 67 options for sentence object, the sublanguage grammars have substantially restricted ranges: each of the three sublanguage grammars has only 14 object options. Further, the grammars greatly overlap, so that the three grammars combined contain only 20 different object options. While sentential complements of nouns are available in the full grammar, there are no instances of such constructions in either medical sublanguage, and only one instance in the CASREP sublanguage.
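The relative-frequency comparisons reported below are simple normalizations: occurrences of a construct divided by the number of sentences (and sentence fragments) in the sample. A tiny sketch follows; the raw occurrence counts in it are not from the study but are back-derived from the reported ratios purely for illustration.

    # Hypothetical raw counts, back-derived from the reported
    # per-sentence frequencies (0.36 and 0.77) for illustration only.
    samples = {
        "patient documents": {"sentences": 236, "pp_right_modifiers": 85},
        "CASREPs":           {"sentences": 123, "pp_right_modifiers": 95},
    }

    for name, s in samples.items():
        freq = s["pp_right_modifiers"] / s["sentences"]
        print(f"{name}: {freq:.2f} PP right modifiers per sentence")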
The range of modifiers is also much restricted in the sublanguage grammars as compared to the full grammar. 15 options for sentential modifiers are available in the full grammar. These are restricted to 9 in the first medical sample, 11 in the second, and 8 in the equipment failure sublanguage. Similarly, the full English grammar has 21 options for right modifiers of nouns; the sublanguage grammars had fewer: 11 in the first medical sample, 10 in the second, and 7 in the CASREP sublanguage. Here the sublanguage grammars overlap almost completely: only 12 different right modifiers of noun are represented in the three grammars combined.

Among the options occurring in all the sublanguage grammars, their relative frequency varies according to the domain of the text. For example, the frequency of prepositional phrases as right modifiers of nouns (measured as instances per sentence or sentence fragment) was 0.36 and 0.46 for the two medical samples, as compared to 0.77 for the CASREPs. More striking was the frequency of noun phrases with nouns as modifiers of other nouns: 0.20 and 0.32 for the two medical samples, versus 0.80 for the CASREPs.

We reparsed some of the sentences from the first set of medical documents with the trimmed grammar and, as expected, observed a considerable speed-up. The Linguistic String Parser uses a top-down parsing algorithm with backtracking. Accordingly, for short, simple sentences which require little backtracking there was only a small gain in processing speed (about 25%). For long, complex sentences, however, which require extensive backtracking, the speed-up (by roughly a factor of 3) was approximately proportional to the reduction in the number of productions. In addition, the frequency of bad parses decreased slightly (by <3%) with the trimmed grammar (because some of the bad parses involved syntactic constructs which did not appear in any correct parse in the sublanguage sample).

Discussion

As natural language interfaces become more mature, their portability -- the ability to move an interface to a new domain and sublanguage -- is becoming increasingly important. At a minimum, portability requires us to isolate the domain dependent information in a natural language system [Grosz 1983, Grishman 1983]. A more ambitious goal is to provide a discovery procedure for this information -- a procedure which can determine the domain dependent information from sample texts in the sublanguage. The techniques described above provide a partial, semi-automatic discovery procedure for the syntactic usages of a sublanguage.* By applying these techniques to a small sublanguage sample, we can adapt a broad-coverage grammar to the syntax of a particular sublanguage. Subsequent text from this sublanguage can then be processed more efficiently.

We are currently extending this work in two directions. For sentences with two or more parses which satisfy both the syntactic and the sublanguage selectional (semantic) constraints, we intend to try using the frequency information gathered for productions to select a parse involving the more frequent syntactic constructs.** Second, we are using a similar approach to develop a discovery procedure for sublanguage selectional patterns. We are collecting, from the same sublanguage samples, statistics on the frequency of co-occurrence of particular sublanguage (semantic) classes in subject-verb-object and host-adjunct relations, and are using this data as input to the grammar's sublanguage selectional restrictions.

* Partial, because it cannot identify new extensions to the base grammar; semi-automatic, because the parses produced with the broad-coverage grammar must be manually reviewed.

** Some small experiments of this type have been done with a Japanese grammar [Nagao 1982] with limited success. Because of the very different nature of the grammar, however, it is not clear whether this has any implications for our experiments.
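The co-occurrence statistics for selectional patterns could be gathered along the lines sketched below. Everything here is invented for illustration: the semantic class assignments, the subject-verb-object triples, and the tallying format are hypothetical stand-ins for what would actually be extracted from the reviewed parses.

    from collections import Counter

    word_class = {                  # illustrative sublanguage classes
        "patient": "PATIENT", "pain": "SYMPTOM", "exhibited": "V-SHOW",
        "pump": "PART", "failure": "MALFUNCTION", "showed": "V-SHOW",
    }

    svo_triples = [                 # (subject, verb, object) from parses
        ("patient", "exhibited", "pain"),
        ("pump", "showed", "failure"),
    ]

    patterns = Counter(
        (word_class[s], word_class[v], word_class[o])
        for s, v, o in svo_triples
    )

    # Frequent class patterns, e.g. (PATIENT, V-SHOW, SYMPTOM), become
    # candidate selectional restrictions for the sublanguage grammar.
    for pattern, freq in patterns.most_common():
        print(pattern, freq)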
We are collecting, from the same sublanguage samples, statistics on the frequency of co-occurrence of particular sublan .guage (semantic) classes in subjeet.vedy.ob~:ct and host-adjunct relations, and are using this data as input to * Partial, because it cannot identify new extensions to the base gramme; semi-automatic, because the parses produced with the broad-coverage grammar • must be manually reviewed. * Some small experiments of this type have been one with a Japanese ~ [Naga 0 1982] with 1|mired success. Becat~ of the v~_ differ~t na- ture of the grammar, however, it is not dear whether this lass any implications for our experi- ments. 97 the grammar's sublanguage selectional restrictions. Acknowledgemeat This material is based upon work supported by the Nalional Science Foundation under Grants No. MCS-82- 02373 and MCS-82-02397. Referenem [Frmcher 1983] Froscher, J.; Grishmau, R.; Bachenko, J.; Marsh, E. "A linguistically motivated approach to automated analysis of military messages." To appear in Proc. 1983 Conf. on Artificial Intelligence, Rochester, MI, April 1983. [Grlslnnan 1983] Gfishman, R.; ~ , L.; Fried. man, C. "Isolating domain dependencies in natural language interface__. Proc. Conf. Applied Natural l~nguage Processing, 46-53, Assn. for Computational Linguistics, 1983. [Greu 1963] Grosz, B. "TEAM: a transportable natural-language interface system," Proc. Conf. Applied Natural Language Processing, 39-45, Assn. for Comlmta- fional IAnguhflm, 1983. [Kittredge 1982] Kim-edge, 11. "Variation and homo- geneity of sublauguages3 In Sublanguage: Jmdies of language in reslricted semantic domains, ed. R. Kittredge and J. Lehrberger. Berlin & New York: Walter de Gruyter; 1982. on and the concept of sublanguage. In $ublan~a&e: sl~lies of language in restricted semantic domains, ed. R. Kittredge and J. Lehrberger. Berlin & New York: Walter de Gruyter; 1982. [Marsh 1983] Marsh, E.. "Utilizing domain-specific information for processing compact text." Proc. Conf. ied Namra[ Lansuage Processing, 99-103, Assn. for putational Linguistics, 1983. [Nape 1982] Nagao, M.; Nakamura, J. "A parser which learns the application order of rewriting rules." Proc. COLING 82, 253-258. [Sager 1981] Sager, N. Natural Lansuage lnform~on Pro- ceasing. Reading, MA: Addlson-Wesley; 1981. 98 130 120 110 100 80 80 90 60 50 40 30 0 SENTENCES VS. NJ~N-TERMINRL SYHBBLS • ' • ' " ' ' , ' , " , • , • , • , • I • v " r 2- Y A , i . , . . . , I / , i . i , i , i , ) , i . z ° ~lo 80 oo I oo 12o 14o 18o 18o zoo zzo z4o x Figure 1. Growth in thc size of the gr~mm.r as a function of the size of the text sample. X = the number of sentences (and sentence frag- ments) in the text samplc; ~" = the number of non-terminal symbols m the context-free com- ponent of thc ~'ammar. Graph A: first set of patient documents Graph B: second set of pat/cnt documcnts ("discharge s-~-,-,'ics") Graph C: e~, uipment failure messages 140 130 1:)0 110 100 gO 8O 90 30 SENTENCES VS. NON-TERMINRL 5YHBBLS f / B SO , , • , , . . l , . . . . . . , . . . , . , . , . , . , . , . 0 ZO 40 60 80 100 IZO 140 130 180 ZOO ZZO 240 Z60 ZSO 300 3ZO X 1so 12o 11o SENTENCES VS. N~N-TERMINRL SYMBOLS • e • , , l • , • l , , • , , , , , , , , J / J . /--' / , , v , lOO 80 )-- 80 70 80 3o C 4O • * , , • I s I , i , : * f , i , i • * , , * , • 30 0 10 ZO 30 40 30 60 70 30 ~0 100 110 120 1~0 X 99 30O 200 ZSO SENTENCES VS. PR°IDUCTI°JNS • , . [ • , . , • . . , . , . , . , • , , . . ,.._/7 A J . . . . . ,,, , ~, . . . . . . . . . . . . . . . 
[Figure 2 (three panels, sentences vs. productions). Growth in the size of the grammar as a function of the size of the text sample. X = the number of sentences (and sentence fragments) in the text sample; Y = the number of productions in the context-free component of the grammar. Graph A: first set of patient documents. Graph B: second set of patient documents ("discharge summaries"). Graph C: equipment failure messages (CASREPs).]

100 | 1984 | 23 |