Source_Paper_ID,Target_Paper_ID,Sentiment,Citation_Text
A00-1043,A00-2024,o,"We analyzed a set of articles and identified six major operations that can be used for editing the extracted sentences, including removing extraneous phrases from an extracted sentence, combining a reduced sentence with other sentences, syntactic transformation, substituting phrases in an extracted sentence with their paraphrases, substituting phrases with more general or specific descriptions, and reordering the extracted sentences (Jing and McKeown, 1999; Jing and McKeown, 2000)."
H05-1033,A00-2024,o,"Table 3: Example compressions Compression AvgLen Rating Baseline 9.70 1.93 BT-2-Step 22.06 3.21 Spade 19.09 3.10 Humans 20.07 3.83 Table 4: Mean ratings for automatic compressions Finally, we added a simple baseline compression algorithm proposed by Jing and McKeown (2000) which removed all prepositional phrases, clauses, to-infinitives, and gerunds."
I05-2009,A00-2024,o,"5.3 Related works and discussion Our two-step model essentially belongs to the same category as the works of (Mani et al., 1999) and (Jing and McKeown, 2000)."
I05-2009,A00-2024,o,"(1999) proposed a summarization system based on the draft and revision. Jing and McKeown (2000) proposed a system based on extraction and cut-and-paste generation. Our abstractors performed the same cut-and-paste operations that Jing and McKeown noted in their work, and we think that our two-step model will be a reasonable starting point for our subsequent research."
I05-2009,A00-2024,o,"We found that the deletion of lead parts did not occur very often in our summary, unlike the case of Jing and McKeown (2000)."
I08-1016,A00-2024,o,"Automatic text summarization approaches have offered reasonably well-performing approximations for identifying important sentences (Lin and Hovy, 2002; Schiffman et al., 2002; Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Daume III and Marcu, 2006) but, not surprisingly, text (re)generation has been a major challenge despite some work on sub-sentential modification (Jing and McKeown, 2000; Knight and Marcu, 2000; Barzilay and McKeown, 2005)."
I08-2101,A00-2024,p,"al., 1994), compression of sentences with Automatic Translation approaches (Knight and Marcu, 2000), Hidden Markov Model (Jing and McKeown, 2000), Topic Signatures based methods (Lin and Hovy, 2000, Lacatusu et al., 2006) are among the most popular techniques that have been used in the summarization systems of this category."
J02-4002,A00-2024,o,"Because of this, it is generally accepted that some kind of postprocessing should be performed to improve the final result, by shortening, fusing, or otherwise revising the material (Grefenstette 1998; Mani, Gates, and Bloedorn 1999; Jing and McKeown 2000; Barzilay et al. 2000; Knight and Marcu 2000)."
J02-4004,A00-2024,o,"Additionally, some research has explored cutting and pasting segments of text from the full document to generate a summary (Jing and McKeown 2000)."
J02-4005,A00-2024,p,"But in fact, the issue of editing in text summarization has usually been neglected, notable exceptions being the works by Jing and McKeown (2000) and Mani, Gates, and Bloedorn (1999)."
J02-4005,A00-2024,o,Jing and McKeown (2000) and Jing (2000) propose a cut-and-paste strategy as a computational process of automatic abstracting and a sentence reduction strategy to produce concise sentences.
J02-4005,A00-2024,o,Our work in sentence reformulation is different from cut-and-paste summarization (Jing and McKeown 2000) in many ways.
J02-4005,A00-2024,n,"Jing and McKeown (2000) have proposed a rule-based algorithm for sentence combination, but no results have been reported."
J05-3002,A00-2024,o,"As previously observed in the literature (Mani, Gates, and Bloedorn 1999; Jing and McKeown 2000), such components include a clause in the clause conjunction, relative clauses, and some elements within a clause (such as adverbs and prepositions)."
J05-3002,A00-2024,o,"In addition to sentence fusion, compression algorithms (Chandrasekar, Doran, and Bangalore 1996; Grefenstette 1998; Mani, Gates, and Bloedorn 1999; Knight and Marcu 2002; Jing and McKeown 2000; Riezler et al. 2003) and methods for expansion of a multiparallel corpus (Pang, Knight, and Marcu 2003) are other instances of such methods."
J05-3002,A00-2024,o,"While earlier approaches for text compression were based on symbolic reduction rules (Grefenstette 1998; Mani, Gates, and Bloedorn 1999), more recent approaches use an aligned corpus of documents and their human written summaries to determine which constituents can be reduced (Knight and Marcu 2002; Jing and McKeown 2000; Riezler et al. 2003)."
J05-3002,A00-2024,o,"While this approach exploits only syntactic and lexical information, Jing and McKeown (2000) also rely on cohesion information, derived from word distribution in a text: Phrases that are linked to a local context are retained, while phrases that have no such links are dropped."
J05-3002,A00-2024,o,"In addition to reducing the original sentences, Jing and McKeown (2000) use a number of manually compiled rules to aggregate reduced sentences; for example, reduced clauses might be conjoined with and."
W02-0404,A00-2024,o,"Previous research has addressed revision in single-document summaries [Jing & McKeown, 2000] [Mani et al, 1999] and has suggested that revising summaries can make them more informative and correct errors."
W02-0404,A00-2024,o,"To contrast, [Jing & McKeown, 2000] concentrated on analyzing human-written summaries in order to determine how professionals construct summaries."
W03-1004,A00-2024,o,"1 Introduction Text-to-text generation is an emerging area of research in NLP (Chandrasekar and Bangalore, 1997; Carroll et al., 1999; Knight and Marcu, 2000; Jing and McKeown, 2000)."
W03-1102,A00-2024,p,"The recent approach for editing extracted text spans (Jing and McKeown, 2000) may also produce improvement for our algorithm."
W09-0604,A00-2024,o,"First, splitting and merging of sentences (Jing and McKeown, 2000), which seems related to content planning and aggregation."
W09-0604,A00-2024,o,"1 Introduction The task of sentence compression (or sentence reduction) can be defined as summarizing a single sentence by removing information from it (Jing and McKeown, 2000)."
W09-0604,A00-2024,o,"One of the applications is in automatic summarization in order to compress sentences extracted for the summary (Lin, 2003; Jing and McKeown, 2000)."
W09-2807,A00-2024,o,"In cut-and-paste summarization (Jing and McKeown, 2000), sentence combination operations were implemented manually following the study of a set of professionally written abstracts; however the particular pasting operation presented here was not implemented."
W09-2807,A00-2024,o,"Close to the problem studied here is Jing and McKeown's (Jing and McKeown, 2000) cut-and-paste method founded on Endres-Niggemeyer's observations."
W09-2808,A00-2024,o,Jing and McKeown (1999; 2000) found that human summarization can be traced back to six cut-and-paste operations of a text and proposed a revision method consisting of sentence reduction and combination modules with a sentence extraction part.
W09-2808,A00-2024,o,Like the work of Jing and McKeown (2000) and Mani et al.
A00-1026,A92-1018,o,The SPECIALIST minimal commitment parser relies on the SPECIALIST Lexicon as well as the Xerox stochastic tagger (Cutting et al. 1992).
A00-1031,A92-1018,p,"Recent comparisons of approaches that can be trained on corpora (van Halteren et al., 1998; Volk and Schneider, 1998) have shown that in most cases statistical approaches (Cutting et al., 1992; Schmid, 1995; Ratnaparkhi, 1996) yield better results than finite-state, rule-based, or memory-based taggers (Brill, 1993; Daelemans et al., 1996)."
A94-1008,A92-1018,o,"The two systems we use are ENGCG (Karlsson et al., 1994) and the Xerox Tagger (Cutting et al., 1992)."
A94-1008,A92-1018,o,"2.2 Xerox Tagger The Xerox Tagger, XT, (Cutting et al., 1992) is a statistical tagger made by Doug Cutting, Julian Kupiec, Jan Pedersen and Penelope Sibun in Xerox PARC."
A94-1009,A92-1018,p,"One of the most effective taggers based on a pure HMM is that developed at Xerox (Cutting et al., 1992)."
A94-1009,A92-1018,o,"The Xerox experiments (Cutting et al., 1992) correspond to something between D1 and D2, and between TO and T1, in that there is some initial biasing of the probabilities."
A94-1027,A92-1018,o,"All 8,907 articles were tagged by the Xerox Part-of-Speech Tagger (Cutting et al., 1992)."
A97-1004,A92-1018,o,"(Cutting et al., 1992))."
A97-1014,A92-1018,o,"(Cutting et al., 1992) and (Feldweg, 1995))."
A97-1017,A92-1018,o,"For Czech, we created a prototype of the first step of this process - the part-of-speech (POS) tagger - using Rank Xerox tools (Tapanainen, 1995), (Cutting et al., 1992)."
C00-1004,A92-1018,o,"5 Related work Cutting introduced grouping of words into equivalence classes based on the set of possible tags to reduce the number of the parameters (Cutting et al., 1992). Schmid used the equivalence classes for smoothing."
C08-1026,A92-1018,p,"4.1 Complete ambiguity classes Ambiguity classes capture the relevant property we are interested in: words with the same category possibilities are grouped together. And ambiguity classes have been shown to be successfully employed, in a variety of ways, to improve POS tagging (e.g., Cutting et al., 1992; Daelemans et al., 1996; Dickinson, 2007; Goldberg et al., 2008; Tseng et al., 2005)."
C94-1027,A92-1018,o,"In table 2, the accuracy rate of the Net-Tagger is compared to that of a trigram based tagger (Kempe, 1993) and a Hidden Markov Model tagger (Cutting et al., 1992) which were."
C94-1027,A92-1018,o,"In this paper, a new part-of-speech tagging method based on neural networks (Net-Tagger) is presented and its performance is compared to that of a HMM-tagger (Cutting et al., 1992) and a trigram-based tagger (Kempe, 1993)."
C94-1027,A92-1018,o,"The performance of the presented tagger is measured and compared to that of two other taggers (Cutting et al., 1992; Kempe, 1993)."
C94-1027,A92-1018,o,"No documentation of the construction algorithm of the suffix lexicon in (Cutting et al., 1992) was available."
C96-1036,A92-1018,o,"Language models, such as N-gram class models (Brown et al., 1992) and Ergodic Hidden Markov Models (Kuhn et al., 1994) were proposed and used in applications such as syntactic class (POS) tagging for English (Cutting et al., 1992), clustering and scoring of recognizer sentence hypotheses."
C96-2114,A92-1018,o,"The tagger used is thus one that does not need tagged and disambiguated material to be trained on, namely the XPOST originally constructed at Xerox Parc (Cutting et al. 1992, Cutting and Pedersen 1993)."
C96-2136,A92-1018,o,"It is used as tagging model in English (Church, 1988; Cutting et al., 1992) and morphological analysis model (word segmentation and tagging) in Japanese (Nagata, 1994)."
C96-2136,A92-1018,o,"It is a natural extension of the Viterbi algorithm (Church, 1988; Cutting et al., 1992) for those languages that do not have delimiters between words, and it can generate N-best morphological analysis hypotheses, like tree trellis search (Soong and Huang, 1991)."
C96-2192,A92-1018,o,(DeRose 1988; Cutting et al. 1992; Merialdo 1994).
E06-1034,A92-1018,o,"5.2 Assigning complex ambiguity tags In the tagging literature (e.g., Cutting et al. (1992)) an ambiguity class is often composed of the set of every possible tag for a word."
E95-1014,A92-1018,o,"The corpus lines retained are part-of-speech tagged (Cutting et al., 1992)."
E95-1014,A92-1018,o,"This text was part-of-speech tagged using the Xerox HMM tagger (Cutting et al., 1992)."
E95-1020,A92-1018,o,"No pretagged text is necessary for Hidden Markov Models (Jelinek, 1985; Cutting et al., 1991; Kupiec, 1992)."
E95-1020,A92-1018,o,"We obtained 47,025 50-dimensional reduced vectors from the SVD and clustered them into 200 classes using the fast clustering algorithm Buckshot (Cutting et al., 1992) (group average agglomeration applied to a sample)."
E95-1021,A92-1018,o,"3 The statistical model We use the Xerox part-of-speech tagger (Cutting et al., 1992), a statistical tagger made at the Xerox Palo Alto Research Center."
E95-1022,A92-1018,o,"This corpus-based information typically concerns sequences of 1-3 tags or words (with some well-known exceptions, e.g. Cutting et al. 1992)."
E95-1022,A92-1018,o,"phenomena or the linguist's abstraction capabilities (e.g. knowledge about what is relevant in the context), they tend to reach a 95-97% accuracy in the analysis of several languages, in particular English (Marshall 1983; Black et al. 1992; Church 1988; Cutting et al. 1992; de Marcken 1990; DeRose 1988; Hindle 1989; Merialdo 1994; Weischedel et al. 1993; Brill 1992; Samuelsson 1994; Eineborg and Gambäck 1994, etc.)."
E99-1018,A92-1018,o,"As a common strategy, POS guessers examine the endings of unknown words (Cutting et al. 1992) along with their capitalization, or consider the distribution of unknown words over specific parts-of-speech (Weischedel et al., 1993)."
E99-1018,A92-1018,o,"On the other hand, according to the data-driven approach, a frequency-based language model is acquired from corpora and has the forms of ngrams (Church, 1988; Cutting et al., 1992), rules (Hindle, 1989; Brill, 1995), decision trees (Cardie, 1994; Daelemans et al., 1996) or neural networks (Schmid, 1994)."
H05-1052,A92-1018,o,"In the absence of an annotated corpus, dependencies can be derived by other means, e.g. part-of-speech probabilities can be approximated from a raw corpus as in (Cutting et al., 1992), word-sense dependencies can be derived as definition-based similarities, etc. Label dependencies are set as weights on the arcs drawn between corresponding labels."
I08-3015,A92-1018,o,"There are many POS taggers developed using different techniques for many major languages such as transformation-based error-driven learning (Brill, 1995), decision trees (Black et al., 1992), Markov model (Cutting et al., 1992), maximum entropy methods (Ratnaparkhi, 1996) etc for English."
J02-1004,A92-1018,o,Our statistical tagging model is modified from the standard bigrams (Cutting et al. 1992) using Viterbi search plus on-the-fly extra computing of lexical probabilities for unknown morphemes.
J02-1004,A92-1018,o,"POS disambiguation has usually been performed by statistical approaches, mainly using the hidden Markov model (HMM) in English research communities (Cutting et al. 1992; Kupiec 1992; Weischedel et al. 1993)."
J93-1002,A92-1018,o,"The main application of these techniques to written input has been in the robust, lexical tagging of corpora with part-of-speech labels (e.g. Garside, Leech, and Sampson 1987; de Rose 1988; Meteer, Schwartz, and Weischedel 1991; Cutting et al. 1992)."
J94-2001,A92-1018,o,"Two main approaches have generally been considered: rule-based (Klein and Simmons 1963; Brodda 1982; Paulussen and Martin 1992; Brill et al. 1990) probabilistic (Bahl and Mercer 1976; Debili 1977; Stolz, Tannenbaum, and Carstensen 1965; Marshall 1983; Leech, Garside, and Atwell 1983; Derouault and Merialdo 1986; DeRose 1988; Church 1989; Beale 1988; Marcken 1990; Merialdo 1991; Cutting et al. 1992)."
J95-2001,A92-1018,o,"Stochastic taggers use both contextual and morphological information, and the model parameters are usually defined or updated automatically from tagged texts (Cerf-Danon and El-Beze 1991; Church 1988; Cutting et al. 1992; Dermatas and Kokkinakis 1988, 1990, 1993, 1994; Garside, Leech, and Sampson 1987; Kupiec 1992; Maltese"
J95-2004,A92-1018,o,"Unlike stochastic approaches to part-of-speech tagging (Church 1988; Kupiec 1992; Cutting et al. 1992; Merialdo 1990; DeRose 1988; Weischedel et al. 1993), up to now the knowledge found in finite-state taggers has been handcrafted and was not automatically acquired."
J95-2004,A92-1018,o,"Independently, Cutting et al. (1992) quote a performance of 800 words per second for their part-of-speech tagger based on hidden Markov models."
J95-3004,A92-1018,o,"These methods have reported performance in the range of 95-99% ""correct"" by word (DeRose 1988; Cutting et al. 1992; Jelinek, Mercer, and Roukos 1992; Kupiec 1992)."
J95-4004,A92-1018,p,"A number of part-of-speech taggers are readily available and widely used, all trained and retrainable on text corpora (Church 1988; Cutting et al. 1992; Brill 1992; Weischedel et al. 1993)."
J95-4004,A92-1018,o,"Part-of-speech tagging is an active area of research; a great deal of work has been done in this area over the past few years (e.g., Jelinek 1985; Church 1988; Derose 1988; Hindle 1989; DeMarcken 1990; Merialdo 1994; Brill 1992; Black et al. 1992; Cutting et al. 1992; Kupiec 1992; Charniak et al. 1993; Weischedel et al. 1993; Schutze and Singer 1994)."
J95-4004,A92-1018,o,Almost all recent work in developing automatically trained part-of-speech taggers has been on further exploring Markov-model based tagging (Jelinek 1985; Church 1988; Derose 1988; DeMarcken 1990; Merialdo 1994; Cutting et al. 1992; Kupiec 1992; Charniak et al. 1993; Weischedel et al. 1993; Schutze and Singer 1994).
J97-3003,A92-1018,o,"As the baseline standard, we took the ending-guessing rule set supplied with the Xerox tagger (Cutting et al. 1992)."
J97-3003,A92-1018,o,"The Xerox tagger (Cutting et al. 1992) comes with a set of rules that assign an unknown word a set of possible pos-tags (i.e., POS-class) on the basis of its ending segment."
N01-1023,A92-1018,p,"(Cutting et al., 1992) reported very high results (96% on the Brown corpus) for unsupervised POS tagging using Hidden Markov Models (HMMs) by exploiting hand-built tag dictionaries and equivalence classes."
N06-1042,A92-1018,o,"It is also possible to train statistical models using unlabeled data with the expectation maximization algorithm (Cutting et al., 1992)."
P06-2100,A92-1018,o,"For English there are many POS taggers, employing machine learning techniques like transformation-based error-driven learning (Brill, 1995), decision trees (Black et al., 1992), markov model (Cutting et al. 1992), maximum entropy methods (Ratnaparkhi, 1996) etc. There are also taggers which are hybrid using both stochastic and rule-based approaches, such as CLAWS (Garside and Smith, 1997)."
P07-2056,A92-1018,o,"In such cases, additional information may be coded into the HMM model to achieve higher accuracy (Cutting et al., 1992)."
P07-2056,A92-1018,p,"Stochastic models (Cutting et al., 1992; Dermatas et al., 1995; Brants, 2000) have been widely used in POS tagging for simplicity and language independence of the models."
P93-1003,A92-1018,o,"This situation is very similar to that involved in training HMM text taggers, where joint probabilities are computed that a particular word corresponds to a particular part-of-speech, and the rest of the words in the sentence are also generated (e.g. [Cutting et al., 1992])."
P95-1039,A92-1018,p,"1 Motivation Statistical part-of-speech disambiguation can be efficiently done with n-gram models (Church, 1988; Cutting et al., 1992)."
P96-1006,A92-1018,o,"Making such an assumption is reasonable since POS taggers that can achieve accuracy of 96% are readily available to assign POS to unrestricted English sentences (Brill, 1992; Cutting et al., 1992)."
P96-1030,A92-1018,o,"(DeRose, 1988; Cutting et al., 1992; Church, 1988)."
P97-1029,A92-1018,o,"There has been a large number of studies in tagging and morphological disambiguation using various techniques such as statistical techniques, e.g., (Church, 1988; Cutting et al., 1992; DeRose, 1988), constraint-based techniques (Karlsson et al., 1995; Voutilainen, 1995b; Voutilainen, Heikkilä, and Anttila, 1992; Voutilainen and Tapanainen, 1993; Oflazer and Kuruöz, 1994; Oflazer and Tür, 1996) and transformation-based techniques (Brill, 1992; Brill, 1994; Brill, 1995)."
P97-1031,A92-1018,o,"Second, the automatic approach, in which the model is automatically obtained from corpora (either raw or annotated), and consists of n-grams (Garside et al., 1987; Cutting et al., 1992), rules (Hindle, 1989) or neural nets (Schmid, 1994)."
P97-1032,A92-1018,o,"Cutting et al. 1992), local rules (e.g. Hindle 1989) and neural networks (e.g. Schmid 1994)."
W03-1314,A92-1018,o,"(2000) that draws on a stochastic tagger (see (Cutting et al., 1992) for details) as well as the SPECIALIST Lexicon, a large syntactic lexicon of both general and medical English that is distributed with the UMLS."
W04-1211,A92-1018,o,"The prime public domain examples of such implementations include the Trigrams'n'Tags tagger (Brants 2000), Xerox tagger (Cutting et al. 1992) and LT POS tagger (Mikheev 1997)."
W04-2010,A92-1018,o,"The XEROX tagger comes with a list of built-in ending guessing rules (Cutting et al., 1992)."
W04-2602,A92-1018,p,"It has been known for some years that good performance can be realized with partial tagging and a hidden Markov model (Cutting et al., 1992)."
W04-2611,A92-1018,o,"This analysis depends on the SPECIALIST Lexicon and the Xerox part-of-speech tagger (Cutting et al., 1992) and provides simple noun phrases that are mapped to concepts in the UMLS Metathesaurus using MetaMap (Aronson, 2001)."
W04-3112,A92-1018,o,The initial phase relies on a parser that draws on the SPECIALIST Lexicon (McCray et al. 1994) and the Xerox Part-of-Speech Tagger (Cutting et al. 1992) to produce an underspecified categorial analysis.
W05-0708,A92-1018,n,"Many approaches for POS tagging have been developed in the past, including rule-based tagging (Brill, 1995), HMM taggers (Brants, 2000; Cutting and others, 1992), maximum-entropy models (Ratnaparkhi, 1996), cyclic dependency networks (Toutanova et al., 2003), memory-based learning (Daelemans et al., 1996), etc. All of these approaches require either a large amount of annotated training data (for supervised tagging) or a lexicon listing all possible tags for each word (for unsupervised tagging)."
W94-0111,A92-1018,n,"Brill's results demonstrate that this approach can outperform the Hidden Markov Model approaches that are frequently used for part-of-speech tagging (Jelinek, 1985; Church, 1988; DeRose, 1988; Cutting et al., 1992; Weischedel et al., 1993), as well as showing promise for other applications."
W95-0101,A92-1018,o,"It is possible to use unsupervised learning to train stochastic taggers without the need for a manually annotated corpus by using the Baum-Welch algorithm [Baum, 1972; Jelinek, 1985; Cutting et al., 1992; Kupiec, 1992; Elworthy, 1994; Merialdo, 1995]."
W95-0101,A92-1018,o,"This method is employed in [Kupiec, 1992; Cutting et al., 1992]."
W95-0101,A92-1018,o,"Almost all of the work in the area of automatically trained taggers has explored Markov-model based part of speech tagging [Jelinek, 1985; Church, 1988; Derose, 1988; DeMarcken, 1990; Cutting et al., 1992; Kupiec, 1992; Charniak et al., 1993; Weischedel et al., 1993; Schutze and Singer, 1994; Lin et al., 1994; Elworthy, 1994; Merialdo, 1995]."
W96-0101,A92-1018,o,"1 Introduction In the part-of-speech literature, whether taggers are based on a rule-based approach (Klein and Simmons, 1963), (Brill, 1992), (Voutilainen, 1993), or on a statistical one (Bahl and Mercer, 1976), (Leech et al., 1983), (Merialdo, 1994), (DeRose, 1988), (Church, 1989), (Cutting et al., 1992), there is a debate as to whether more attention should be paid to lexical probabilities rather than contextual ones."
W96-0101,A92-1018,o,"5 Comparison with other approaches In some sense, this approach is similar to the notion of ""ambiguity classes"" explained in (Kupiec, 1992) and (Cutting et al., 1992) where words that belong to the same part-of-speech figure together."
W96-0101,A92-1018,o,"(Chanod and Tapanainen, 1995) compare two tagging frameworks for tagging French, one that is statistical, built upon the Xerox tagger (Cutting et al., 1992), and another based on linguistic constraints only."
W96-0102,A92-1018,o,"Most work on statistical methods has used n-gram models or Hidden Markov Model-based taggers (e.g. Church, 1988; DeRose, 1988; Cutting et al. 1992; Merialdo, 1994, etc.)."
W96-0113,A92-1018,o,"Kupiec (1992) has proposed an estimation method for the N-gram language model using the Baum-Welch reestimation algorithm (Rabiner et al., 1994) from an untagged corpus and Cutting et al."
W96-0205,A92-1018,o,"Generalized Forward Backward Reestimation Generalization of the Forward and Viterbi Algorithm In English part of speech taggers, the maximization of Equation (1) to get the most likely tag sequence, is accomplished by the Viterbi algorithm (Church, 1988), and the maximum likelihood estimates of the parameters of Equation (2) are obtained from untagged corpus by the Forward-Backward algorithm (Cutting et al., 1992)."
W96-0206,A92-1018,o,"The accuracy of the derived model depends heavily on the initial bias, but with a good choice results are comparable to those of method three (Cutting et al., 1992)."
W97-0307,A92-1018,o,"(Cutting et al., 1992; Feldweg, 1995)), the tagger for grammatical functions works with lexical and contextual probability measures PQ()."
W97-0811,A92-1018,o,"In our experiments, we used the Hidden Markov Model (HMM) tagging method described in [Cutting et al., 1992]."
W98-1110,A92-1018,o,"The POS disambiguation has usually been performed by statistical approaches mainly using hidden markov model (HMM) (Cutting et al., 1992; Kupiec."
W98-1110,A92-1018,o,"Our statistical tagging model is adjusted from standard bi-grams using the Viterbi-search (Cutting et al., 1992) plus on-the-fly extra computing of lexical probabilities for unknown morphemes."
W98-1207,A92-1018,o,"(Cutting et al., 1992; Feldweg, 1995)), the tagger for grammatical functions works with lexical and contextual probability measures PQ() depending on the category of a mother node (Q)."
W99-0608,A92-1018,o,"2.2 STT: A Statistical Tree-based Tagger The aim of statistical or probabilistic tagging (Church, 1988; Cutting et al., 1992) is to assign the most likely sequence of tags given the observed sequence of words."
C04-1147,C02-1007,o,"Examples of such affinities include synonyms (Terra and Clarke, 2003), verb similarities (Resnik and Diab, 2000) and word associations (Rapp, 2002)."
D08-1096,C02-1007,n,"Several papers have looked at higher-order representations, but have not examined the equivalence of syn/para distributions when formalized as Markov chains (Schutze and Pedersen, 1993; Lund and Burgess, 1996; Edmonds, 1997; Rapp, 2002; Biemann et al., 2004; Lemaire and Denhière, 2006)."
D09-1066,C02-1007,o,"Roughly in keeping with (Rapp, 2002), we hereby regard paradigmatic associations as those based largely on word similarity (i.e. including those typically classed as synonyms, antonyms, hypernyms, hyponyms etc), whereas syntagmatic associations are all those words which strongly invoke one another yet which cannot readily be said to be similar."
D09-1066,C02-1007,o,"Then, by using evaluations similar to those described in (Baroni et al., 2008) and by Rapp (2002), we show that the best distance-based measures correlate better overall with human association scores than do the best window based configurations (see Section 4), and that they also serve as better predictors of the strongest human associations (see Section 5)."
D09-1066,C02-1007,o,"While choosing an optimum window size for an application is often subject to trial and error, there are some generally recognized trade-offs between small versus large windows, such as the impact of data-sparseness, and the nature of the associations retrieved (Church and Hanks, 1989; Church and Hanks, 1991; Rapp, 2002) Measures based on distance between words in the text."
D09-1066,C02-1007,o,"3 Methodology Similar to (Rapp, 2002; Baroni et al., 2008, among others), we use comparison to human association datasets as a test bed for the scores produced by computational association measures."
D09-1066,C02-1007,o,"We use evaluations similar to those used before (Rapp, 2002; Pado and Lapata, 2007; Baroni et al., 2008, among others)."
E09-1098,C02-1007,o,"At the present time, given the key role of window size in determining the selection and apparent strength of associations under the conventional co-occurrence model highlighted here and in the works of Church et al (1991), Rapp (2002), Wang (2005), and Schulte im Walde & Melinger (2008) we would urge that this is an issue which window-driven studies continue to conscientiously address; at the very least, scale is a parameter which findings dependent on distributional phenomena must be qualified in light of."
E09-1098,C02-1007,o,"As Rapp (2002) observes, choosing a window size involves making a trade-off between various qualities."
E09-1098,C02-1007,o,"Rapp (2002) calls this trade-off specificity; equivalent observations were made by Church & Hanks (1989) and Church et al (1991), who refer to the tendency for large windows to wash out, smear or defocus those associations exhibited at smaller scales."
E09-1098,C02-1007,o,"2.1 Scale-dependence It has been shown that varying the size of the context considered for a word can impact upon the performance of applications (Rapp, 2002; Yarowsky & Florian, 2002), there being no ideal window size for all applications."
E09-1098,C02-1007,o,"2.2 Data sparseness Another facet of the general trade-off identified by Rapp (2002) pertains to how limitations inherent in the combination of data and co-occurrence retrieval method are manifest."
E09-1098,C02-1007,o,"This is one manifestation of what is commonly referred to as the data sparseness problem, and was discussed by Rapp (2002) as a side-effect of specificity."
P04-3026,C02-1007,o,"Whereas until recently the focus of research had been on sense disambiguation, papers like Pantel & Lin (2002), Neill (2002), and Rapp (2003) give evidence that sense induction now also attracts attention."
P04-3026,C02-1007,o,"3 Algorithm As in previous work (Rapp, 2002), our computations are based on a partially lemmatized version of the British National Corpus (BNC) which has the function words removed."
P04-3026,C02-1007,o,"We used the procedure described in Rapp (2002), with the only modification being the multiplication of the log-likelihood values with a triangular function that depends on the logarithm of a word's frequency."
P09-1051,C02-1007,o,"(Ruge, 1992; Rapp, 2002))."
W04-2117,C02-1007,o,"Even though there are some studies that compare the results from statistically computed association measures with word association norms from psycholinguistic experiments (Landauer et al., 1998; Rapp, 2002) there has not been any research on the usage of a digital, network-based dictionary reflecting the organisation of the mental lexicon to our knowledge."
W04-2117,C02-1007,o,There are several other approaches such as Ji and Ploux (2003) and the already mentioned Rapp (2002).
C08-1040,C04-1162,o,"For example it has been used to measure centrality in hyperlinked web pages networks (Brin and Page, 1998; Kleinberg, 1998), lexical networks (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Kurland and Lee, 2005; Kurland and Lee, 2006), and semantic networks (Mihalcea et al., 2004)."
C08-1040,C04-1162,o,"Our method is based on the ones described in (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Fader et al., 2007). The objective of this paper is to dynamically rank speakers or participants in a discussion."
D07-1069,C04-1162,p,"Eigenvector centrality in particular has been successfully applied to many different types of networks, including hyperlinked web pages (Brin and Page, 1998; Kleinberg, 1998), lexical networks (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Kurland and Lee, 2005; Kurland and Lee, 2006), and semantic networks (Mihalcea et al., 2004)."
E09-3009,C04-1162,o,"Still, it is in our next plans and part of our future work to embed in our model some of the interesting WSD approaches, like knowledge-based (Sinha and Mihalcea, 2007; Brody et al., 2006), corpus-based (Mihalcea and Csomai, 2005; McCarthy et al., 2004), or combinations with very high accuracy (Montoyo et al., 2005)."
E09-3009,C04-1162,n,"This method was preferred against other related methods, like the one introduced in (Mihalcea et al., 2004), since it embeds all the available semantic information existing in WordNet, even edges that cross POS, thus offering a richer semantic representation."
H05-1052,C04-1162,o,"structure of semantic networks was proposed in (Mihalcea et al., 2004), with a disambiguation accuracy of 50.9% measured on all the words in the SENSEVAL-2 data set."
I05-2004,C04-1162,o,"Previous approaches include supervised learning (Hirao et al., 2002), (Teufel and Moens, 1997), vectorial similarity computed between an initial abstract and sentences in the given document, intradocument similarities (Salton et al., 1997), or graph algorithms (Mihalcea and Tarau, 2004), (Erkan and Radev, 2004), (Wolf and Gibson, 2004)."
I05-2004,C04-1162,o,"Ranking algorithms, such as Kleinberg's HITS algorithm (Kleinberg, 1999) or Google's PageRank (Brin and Page, 1998), have been traditionally and successfully used in Web-link analysis (Brin and Page, 1998), social networks, and more recently in text processing applications (Mihalcea and Tarau, 2004), (Mihalcea et al., 2004), (Erkan and Radev, 2004)."
N06-1027,C04-1162,o,"Inspired by the idea of graph based algorithms to collectively rank and select the best candidate, research efforts in the natural language community have applied graph-based approaches on keyword selection (Mihalcea and Tarau, 2004), text summarization (Erkan and Radev, 2004; Mihalcea, 2004), word sense disambiguation (Mihalcea et al., 2004; Mihalcea, 2005), sentiment analysis (Pang and Lee, 2004), and sentence retrieval for question answering (Otterbacher et al., 2005)."
P04-3020,C04-1162,o,"Such text-oriented ranking methods can be applied to tasks ranging from automated extraction of keyphrases, to extractive summarization and word sense disambiguation (Mihalcea et al., 2004)."
W06-3811,C04-1162,o,"Using dictionaries as network of lexical items or senses has been quite popular for word sense disambiguation (Veronis and Ide, 1990; H.Kozima and Furugori, 1993; Niwa and Nitta, 1994) before losing ground to statistical approaches, even though (Gaume et al., 2004; Mihalcea et al., 2004) tried a revival of such methods."
D09-1114,C08-1005,o,"Although various approaches to SMT system combination have been explored, including enhanced combination model structure (Rosti et al., 2007), better word alignment between translations (Ayan et al., 2008; He et al., 2008) and improved confusion network construction (Rosti et al., 2008), most previous work simply used the ensemble of SMT systems based on different models and paradigms at hand and did not tackle the issue of how to obtain the ensemble in a principled way."
P09-1066,C08-1005,o,"Most of the work focused on seeking better word alignment for consensus-based confusion network decoding (Matusov et al., 2006) or word-level system combination (He et al., 2008; Ayan et al., 2008)."
D09-1073,C08-1027,o,"However, one of the major limitations of these advances is the structured syntactic knowledge, which is important to global reordering (Li et al., 2007; Elming, 2008), has not been well exploited."
D09-1008,C08-1041,o,"(He et al., 2008)."
D09-1008,C08-1041,o,"Please note that our approach is very different from other approaches to context dependent rule selection such as (Ittycheriah and Roukos, 2007) and (He et al., 2008)."
D09-1008,C08-1041,o,"Thus, we can compute the source dependency LM score in the same way we compute the target side score, using a procedure described in (Shen et al., 2008)."
D09-1008,C08-1041,o,"Due to the lack of a good Arabic parser compatible with the Sakhr tokenization that we used on the source side, we did not test the source dependency LM for Arabic-to-English MT. When extracting rules with source dependency structures, we applied the same well-formedness constraint on the source side as we did on the target side, using a procedure described by (Shen et al., 2008)."
D09-1008,C08-1041,o,"A remedy is to aggressively limit the feature space, e.g. to syntactic labels or a small fraction of the bi-lingual features available, as in (Chiang et al., 2008; Chiang et al., 2009), but that reduces the benefit of lexical features."
D09-1008,C08-1041,o,"In (Post and Gildea, 2008; Shen et al., 2008), target trees were employed to improve the scoring of translation theories."
D09-1008,C08-1041,o,"A few studies (Carpuat and Wu, 2007; Ittycheriah and Roukos, 2007; He et al., 2008; Hasan et al., 2008) addressed this defect by selecting the appropriate translation rules for an input span based on its context in the input sentence."
D09-1008,C08-1041,o,"The other approach is to estimate a single score or likelihood of a translation with rich features, for example, with the maximum entropy (MaxEnt) method as in (Carpuat and Wu, 2007; Ittycheriah and Roukos, 2007; He et al., 2008)."
D09-1008,C08-1041,o,"In (He et al., 2008), lexical 72 features were limited on each single side due to the feature space problem."
D09-1008,C08-1041,o,"Similar ideas were explored in (He et al., 2008)."
D09-1008,C08-1041,o,"(Carpuat and Wu, 2007) and (He et al., 2008), the specific technique we used by means of a context language model is rather different."
D09-1008,C08-1041,o,"73 1.2.2 Baseline System and Experimental Setup We take BBN's HierDec, a string-to-dependency decoder as described in (Shen et al., 2008), as our baseline for the following two reasons: It provides a strong baseline, which ensures the validity of the improvement we would obtain."
D09-1008,C08-1041,o,"2 Linguistic and Context Features 2.1 Non-terminal Labels In the original string-to-dependency model (Shen et al., 2008), a translation rule is composed of a string of words and non-terminals on the source side and a well-formed dependency structure on the target side."
E09-1044,C08-1064,o,"Previously published approaches to reducing the rule set include: enforcing a minimum span of two words per non-terminal (Lopez, 2008), which would reduce our set to 115M rules; or a minimum count (mincount) threshold (Zollmann et al., 2008), which would reduce our set to 78M (mincount=2) or 57M (mincount=3) rules."
E09-1044,C08-1064,o,Lopez (2008) explores whether lexical reordering or the phrase discontiguity inherent in hierarchical rules explains improvements over phrase-based systems.
N09-1049,C08-1064,o,Lopez (2008) explores whether lexical reordering or the phrase discontiguity inherent in hierarchical rules explains improvements over phrase-based systems.
W09-0426,C08-1064,p,"First, such a system makes use of lexical information when modeling reordering (Lopez, 2008), which has previously been shown to be useful in German-to-English translation (Koehn et al., 2008)."
W09-0437,C08-1064,o,"2 Models, Search Spaces, and Errors A translation model consists of two distinct elements: an unweighted ruleset, and a parameterization (Lopez, 2008a; 2009)."
W09-0437,C08-1064,o,Lopez (2008b) gives indirect experimental evidence that this difference affects performance.
W09-0437,C08-1064,o,"Our hierarchical system is Hiero (Chiang, 2007), modified to construct rules from a small sample of occurrences of each source phrase in training as described by Lopez (2008b)."
E09-1057,C08-1067,o,"In a next step, chunk information was added by a rule-based language-independent chunker (Macken et al., 2008) that contains distituency rules, which implies that chunk boundaries are added between two PoS codes that cannot occur in the same constituent."
E09-1057,C08-1067,p,"(Macken et al., 2008) showed that the results for French-English were competitive to state-of-the-art alignment systems."
D09-1125,C08-1074,p,"Then the same system weights are applied to both IncHMM and Joint Decoding -based approaches, and the feature weights of them are trained using the max-BLEU training method proposed by Och (2003) and refined by Moore and Quirk (2008)."
W09-0439,C08-1074,o,"Previous work, e.g. (Moore and Quirk, 2008; Cer et al., 2008), has focused on improving the performance of Powell's algorithm."
W09-0439,C08-1074,o,"Moore and Quirk (2008) share the goal underlying our own research: improving, rather than replacing, Och's MERT procedure."
E09-1071,C08-1114,o,"One such relational reasoning task is the problem of compound noun interpretation, which has received a great deal of attention in recent years (Girju et al., 2005; Turney, 2006; Butnariu and Veale, 2008)."
E09-1071,C08-1114,o,Turney (2008) has recently proposed a simpler SVM-based algorithm for analogical classification called PairClass.
E09-1071,C08-1114,o,"Turney (2008) argues that many NLP tasks can be formulated in terms of analogical reasoning, and he applies his PairClass algorithm to a number of problems including SAT verbal analogy tests, synonym/antonym classification and distinction between semantically similar and semantically associated words."
E09-1071,C08-1114,o,An alternative embedding is that used by Turney (2008) in his PairClass system (see Section 6).
N09-1058,C08-1114,o,"Language modeling (Chen and Goodman, 1996), noun-clustering (Ravichandran et al., 2005), constructing syntactic rules for SMT (Galley et al., 2004), and finding analogies (Turney, 2008) are examples of some of the problems where we need to compute relative frequencies."
N09-1058,C08-1114,o,"In NLP community, it has been shown that having more data results in better performance (Ravichandran et al., 2005; Brants et al., 2007; Turney, 2008)."
W09-0201,C08-1114,o,"In Table 6 we report our results, together with the state-of-the-art from the ACL wiki5 and the scores of Turney (2008) (PairClass) and from Amac Herdagdelen's PairSpace system, that was trained on ukWaC."
W09-0201,C08-1114,o,2 Related work Turney (2008) recently advocated the need for a uniform approach to corpus-based semantic tasks.
W09-0201,C08-1114,o,Such tasks will require an extension of the current framework of Turney (2008) beyond evidence from the direct cooccurrence of target word pairs.
W09-0205,C08-1114,o,"Turney (2008) is the first, to the best of our knowledge, to raise the issue of a unified approach."
W09-0205,C08-1114,o,The algorithm proposed by Turney (2008) is labeled as Turney-PairClass.
W09-0205,C08-1114,o,"Building on a recent proposal in this direction by Turney (2008), we propose a generic method of this sort, and we test it on a set of unrelated tasks, reporting good performance across the board with very little task-specific tweaking."
W09-0205,C08-1114,o,We adopt a similar approach to the one used in Turney (2008) and consider each question as a separate binary classification problem with one positive training instance and 5 unknown pairs.
W09-0419,C08-1115,o,"They are part of an effort to better integrate a linguistic, rule-based system and the statistical correcting layer also illustrated in (Ueffing et al., 2008)."
D09-1079,C08-1125,o,"3.5 Domain adaptation in Machine Translation Within MT there has been a variety of approaches dealing with domain adaption (for example (Wu et al., 2008; Koehn and Schroeder, 2007)."
P09-1036,C08-1127,o,"This, unfortunately, significantly jeopardizes performance (Koehn et al., 2003; Xiong et al., 2008) because by integrating syntactic constraint into decoding as a hard constraint, it simply prohibits any other useful non-syntactic translations which violate constituent boundaries."
N09-1061,C08-1136,o,"Optimal algorithms exist for minimising the size of rules in a Synchronous Context-Free Grammar (SCFG) (Uno and Yagiura, 2000; Zhang et al., 2008)."
P09-1088,C08-1136,o,"The machine translation literature is littered with various attempts to learn a phrase-based string transducer directly from aligned sentence pairs, doing away with the separate word alignment step (Marcu and Wong, 2002; Cherry and Lin, 2007; Zhang et al., 2008b; Blunsom et al., 2008)."
P09-1088,C08-1136,o,"The sampler reasons over the infinite space of possible translation units without recourse to arbitrary restrictions (e.g., constraints drawn from a word alignment (Cherry and Lin, 2007; Zhang et al., 2008b) or a grammar fixed a priori (Blunsom et al., 1f and e are the input and output sentences respectively."
P09-1088,C08-1136,o,"Following the broad shift in the field from finite state transducers to grammar transducers (Chiang, 2007), recent approaches to phrase-based alignment have used synchronous grammar formalisms permitting polynomial time inference (Wu, 1997; 783 Cherry and Lin, 2007; Zhang et al., 2008b; Blunsom et al., 2008)."
P09-1111,C08-1136,o,"Other linear time algorithms for rank reduction are found in the literature (Zhang et al., 2008), but they are restricted to the case of synchronous context-free grammars, a strict subclass of the LCFRS with f = 2."
D09-1108,C08-1138,o,"In the SMT research community, the second step has been well studied and many methods have been proposed to speed up the decoding process, such as node-based or span-based beam search with different pruning strategies (Liu et al., 2006; Zhang et al., 2008a, 2008b) and cube pruning (Huang and Chiang, 2007; Mi et al., 2008)."
D09-1108,C08-1138,o,"3.1 Exhaustive search by tree fragments This method generates all possible tree fragments rooted by each node in the source parse tree or forest, and then matches all the generated tree fragments against the source parts (left hand side) of translation rules to extract the useful rules (Zhang et al., 2008a)."
D09-1108,C08-1138,p,"1 Introduction Recently linguistically-motivated syntax-based translation method has achieved great success in statistical machine translation (SMT) (Galley et al., 2004; Liu et al., 2006, 2007; Zhang et al., 2007, 2008a; Mi et al., 2008; Mi and Huang 2008; Zhang et al., 2009)."
P09-1020,C08-1138,o,"4 Training This section discusses how to extract our translation rules given a triple nullnull,null null ,nullnull . As we know, the traditional tree-to-string rules can be easily extracted from nullnull,null null ,nullnull using the algorithm of Mi and Huang (2008) 2 . We would like 2 Mi and Huang (2008) extend the tree-based rule extraction algorithm (Galley et al., 2004) to forest-based by introducing non-deterministic mechanism."
P09-1020,C08-1138,p,"Among these advances, forest-based modeling (Mi et al., 2008; Mi and Huang, 2008) and tree sequence-based modeling (Liu et al., 2007; Zhang et al., 2008a) are two interesting modeling methods with promising results reported."
P09-1020,C08-1138,o,"Motivated by the fact that non-syntactic phrases make non-trivial contribution to phrase-based SMT, the tree sequence-based translation model is proposed (Liu et al., 2007; Zhang et al., 2008a) that uses tree sequence as the basic translation unit, rather than using single sub-tree as in the STSG."
P09-1020,C08-1138,o,(2008a) propose a tree sequence-based tree to tree translation model and Zhang et al.
P09-1020,C08-1138,o,"Therefore, structure divergence and parse errors are two of the major issues that may largely compromise the performance of syntax-based SMT (Zhang et al., 2008a; Mi et al., 2008)."
P09-1020,C08-1138,o,"A tree sequence to string rule 174 A tree-sequence to string translation rule in a forest is a triple , where L is the tree sequence in source language, R is the string containing words and variables in target language, and A is the alignment between the leaf nodes of L and R. This definition is similar to that of (Liu et al. 2007, Zhang et al. 2008a) except our tree sequence is defined in forest."
P09-1103,C08-1138,o,"To address this issue, many syntax-based approaches (Yamada and Knight, 2001; Eisner, 2003; Gildea, 2003; Ding and Palmer, 2005; Quirk et al, 2005; Zhang et al, 2007, 2008a; Bod, 2007; Liu et al, 2006, 2007; Hearne and Way, 2003) tend to integrate more syntactic information to enhance the non-contiguous phrase modeling."
P09-1103,C08-1138,o,"Nevertheless, the generated rules are strictly required to be derived from the contiguous translational equivalences (Galley et al, 2006; Marcu et al, 2006; Zhang et al, 2007, 2008a, 2008b; Liu et al, 2006, 2007)."
P09-1103,C08-1138,o,"2 We illustrate the rule extraction with an example from the tree-to-tree translation model based on tree sequence alignment (Zhang et al, 2008a) without losing of generality to most syntactic tree based models."
P09-1103,C08-1138,o,"The proposed synchronous grammar is able to cover the previous proposed grammar based on tree (STSG, Eisner, 2003; Zhang et al, 2007) and tree sequence (STSSG, Zhang et al, 2008a) alignment."
D09-1024,C08-1139,o,"Word alignment is also a required first step in other algorithms such as for learning sub-sentential phrase pairs (Lavie et al., 2008) or the generation of parallel treebanks (Zhechev and Way, 2002)."
E09-1044,C08-1144,o,"Previously published approaches to reducing the rule set include: enforcing a minimum span of two words per non-terminal (Lopez, 2008), which would reduce our set to 115M rules; or a minimum count (mincount) threshold (Zollmann et al., 2008), which would reduce our set to 78M (mincount=2) or 57M (mincount=3) rules."
E09-1044,C08-1144,o,"(Zollmann et al., 2008)."
E09-1044,C08-1144,o,"This is in direct contrast to recent reported results in which other filtering strategies lead to degraded performance (Shen et al., 2008; Zollmann et al., 2008)."
N09-1049,C08-1144,o,"Extensions to Hiero Several authors describe extensions to Hiero, to incorporate additional syntactic information (Zollmann and Venugopal, 2006; Zhang and Gildea, 2006; Shen et al., 2008; Marton and Resnik, 2008), or to combine it with discriminative latent models (Blunsom et al., 2008)."
E09-1017,C08-1145,p,"The fluency models hold promise for actual improvements in machine translation output quality (Zwarts and Dras, 2008)."
A97-1055,C94-2113,o,"(Dolan, 1994) and (Krovetz and Croft, 1992) claim that fine-grained semantic distinctions are unlikely to be of practical value for many applications."
D07-1107,C94-2113,o,"Much work has gone into methods for measuring synset similarity; early work in this direction includes (Dolan, 1994), which attempted to discover sense similarities between dictionary senses."
J98-1001,C94-2113,o,"Recognizing this, Dolan (1994) proposes a method for ""ambiguating"" dictionary senses by combining them to create grosser sense distinctions."
J98-1003,C94-2113,o,"Various approaches to word sense division have been proposed in the literature on WSD, including (1) sense numbers in every-day dictionaries (Lesk 1986; Cowie, Guthrie, and Guthrie 1992), (2) automatic or hand-crafted clusters of dictionary senses (Dolan 1994; Bruce and Wiebe 1995; Luk * Department of Computer Science, National Tsing Hua University, Hsinchu 30043, Taiwan, ROC."
J98-1003,C94-2113,o,"Furthermore, as pointed out in Dolan (1994), the sense division in an MRD is frequently too fine-grained for the purpose of WSD."
J98-1003,C94-2113,o,"82 Chen and Chang Topical Clustering Dolan (1994) maintains the position that intersense relations are mostly idiosyncratical, thereby making it difficult to characterize them in a general way so as to identify them."
J98-1003,C94-2113,o,"However, they do not elaborate on how the comparisons are done, or on how effective the program is. Dolan (1994) describes a heuristic approach to forming unlabeled clusters of closely related senses in an MRD."
J98-1003,C94-2113,o,"As noted in Dolan (1994), it is possible to run a sense-clustering algorithm on several MRDs to build an integrated lexical database with more complete coverage of word senses."
J98-1003,C94-2113,o,"These relations are then used for various tasks, ranging from the interpretation of a noun sequence (Vanderwende 1994) or a prepositional phrase (Ravin 1990), to resolving structural ambiguity (Jenson and Binot 1987), to merging dictionary senses for WSD (Dolan 1994)."
P06-1014,C94-2113,o,"5 Related Work Dolan (1994) describes a method for clustering word senses with the use of information provided in the electronic version of LDOCE (textual definitions, semantic relations, domain labels, etc.)."
W00-0103,C94-2113,p,"This approach took inspiration from the pioneering work by (Dolan 1994), but it is also fundamentally different, because instead of grouping similar senses together, the CoreLex approach groups together words according to all of their senses."
W06-2503,C94-2113,o,"There is also work on grouping senses of other inventories using information in the inventory (Dolan, 1994) along with information retrieval techniques (Chen and Chang, 1998)."
W96-0305,C94-2113,o,"Recently, various approaches (Dolan 1994; Luk 1995; Yarowsky 1992; Dagan et al. 1991; Dagan and Itai 1994) to word sense division have been used in WSD research."
W96-0305,C94-2113,o,Zero derivation Dolan (1994) pointed out that it is helpful to identify zero-derived noun/verb pairs for such tasks as normalization of the semantics of expressions that are only superficially different.
W96-0305,C94-2113,o,Dolan (1994) described a heuristic approach to forming unlabeled clusters of closely related senses in a MRD.
W96-0305,C94-2113,o,Dolan (1994) observed that sense division in MRD is frequently too fine for the purpose of WSD.
W99-0505,C94-2113,o,"Towards a Meaning-Full Comparison of Lexieal Resources Kenneth C Lltkowska CL Research 9208 Gue Road Damascus, MD 20872 ken@clres corn http//www tires tom Abstract The mapping from WordNet to Hector senses m Senseval provides a ""gold standard"" against wluch to judge our ability to compare lexlcal resources The ""gold standard"" is provided through a word overlap analysis (with and without a stop list) for flus mapping, achieving at most a 36 percent correct mapping (inflated by 9 percent from ""empty"" assignments) An alternaUve componenttal analysis of the defimtaons, using syntacUc, collocatmnal, and semantac component and relation identification (through the use ofdefimng patterns integrated seamlessly mto the parsing thclaonary), provides an almost 41 percent correct mapping, with an additaonal 4 percent by recogmzmg semantic components not used in the Senseval mapping Defimtion sets of the Senseval words from three pubhshed thclaonanes and Dorr's lextcal knowledge base were added to WordNet and the Hector database to exanune the nature of the mapping process between defimtton sets of more and less sco\[~e The tecbauques described here consUtute only an maaal implementation of the componenUal analysis approach and suggests that considerable further improvements can be aclueved Introduction The difficulty of companng lemcal resources, long a s~gnfficant challenge in computauonal hnguistlcs (Atlans, 1991), came to the fore in the recent Senseval competatton (IOlgarnff, 1998), when some systems that relied heavily on the WordNet (Miller, et al, 1990) sense inventory were faced with the necessity of using another sense inventory (Hecto0 A hasty solutaon to the problem was the "" development of a map between the two inventories, but some part~cipants expressed concerns that use of flus map may have degraded their performance to an unknown degree Although there were disclaimers about the WordNet-Hector map, it nonetheless stands as a usable 
gold standard for efforts to compare lexical resources Moreover, we have a usable baseline (a word overlap method suggested m (Lesk, 1986)) against which to compare whether we are able to make improvements m the mapping (since flus method has been shown to perform not as well as expected (Krovetz, 1992)) We first describe the lextcal resources used m the study (Hector, WordNet, other dicUonanes, and a lex~cal knowledge base), first characterizing them in terms ofpolysemy and the types of leracal mformaUon each contmns (syntacUc properties and features, semantac components and relaUons, and collocaUonal properties) We then present results of perfornung the word overlap analysis of the 18 verbs used m Senseval, analyzing the definitions m WordNet and Hector We then expand our analysis to include other dictionaries We describe our methods of analysis, particularly the methods of parsing defimtaons and identff)qng semantic relations (semrels) based on defimng patterns, essentially takang first steps m Implementing the program described by Atkms and focusmg on the use of""meamng"" full mformataon rather than statistical mformaUon We identify the results that have been achieved thus far and outline further steps that may add more ""meanmg"" to the analysis IAll analyses described m this paper were performed automatically using functlonahty incorporated m DIMAP (Dictionary Maintenance Programs) (available for immediate download at (CL Research, 1999a)) This includes automatac extracuon of WordNet reformation for the selected words (mtegrated m DIMAP) Hector defimtlons were uploaded into DIMAP dicUonanes after use of a conversmn program Defimtlons for other 30 The Lexical Resources Tlus analysis focuses on the mmn verb senses used In Senseval (not ichoms and phrases), specifically the followmg AMAZE, BAND, BET, BOTHER, BURY, CALCULATE, CONSUME, DERIVE, FLOAT, HURDLE, INVADE, PROMISE, SACK, SANCTION, SCRAP, SEIZE, SHAKE, SLIGHT The Hector database used In Senseval consists 
of a tree of senses, each of which contains defimttons, syntactic properties, example usages, and ""clues"" (collocational information about the syntactic and semantic enwronment in wluch a word appears in the spectfic sense) The WordNet database contmns synonyms (synsets), perhaps a defimtton or example usages (gloss), some syntactic mformaUon (verb frames), hypernyms, hyponyms, and some other semrels (ENTAILS, CAUSES) To extend our analysis In order to look at other issues of lexacal resource comparison, we have included the defirauons or leracal information from the following additional sources Webster's 3 ra New International Dictionary (W3) Oxford Advanced l.earners D~ctlonary (OALD) American Hentage DlcUonary (AI-ID) Dorr's Lexacal Knowledge Base (Dorr) We used only the defimuons from W3, OALD, and AHD (which also contmn sample usages and some collocattonal information m the form of usage notes, not used at the present tame) Dorr's database contains thematic grids wluch characterize the thematic roles of obligatory and optional semanuc components, frequently identifying accompanying preposmons (Olsen, et al, 1998) The following table identities the number of senses and average overall polysemy for each of these resources dictionaries were entered by hand Word amaze band bet bother bury calculate consume denve float hurdle invade pronuse sack sanction scrap seize shake shght Average Polysemy o o o 1 2 4 2 3 1 II 4 4 2 5 5 7 6 9 7 12 6 14 5 5 5 10 9 6 6 8 8 6 5 15 5 16 4 41 14 2 1 4 3 6 2 10 5 5 4 7 4 4 4 6 3 2 2 5 2 3 1 3 3 11 6 21 13 8 8 37 17 1 1 6 3 O 1 2 2 4 1 3 4 4 8 1 3 1 3 1 3 2 10 5 1 0 3 1 3 2 2 0 1 1 1 0 7 1 7 12 I 0 57 37 120 62 34 22 Word Overlap Analysis We first estabhsh a baseline for automatic replication of the lexicographer's mappmg from WordNet 1 6 to Hector, using a s~mple word overlap analysis smular to (Lesk, 1986) The lextcographer mapped the 66 WordNet senses (each synset m which a test occurred) Into 102 Hector senses A total of 86 
assignments were made, 9 WordNet senses were gwen no assignments, 40 recewed exactly one, and 17 senses received 2 or 3 asssgnments The WordNet senses contained 348 words (about half of wluch were common words appeanng on our stop list, which contained 165 words, mostly preposmons, pronouns, and conjunctions) The Hector senses selected m the word overlap analysis contained about 960 words (all Hector senses contained 1878 words) We performed a strict word overlap analysts (with and wsthout a stop hst) between tile definlUons in WordNet and the Hector senses, that is, we did not attempt to ldenttfy root forms of Inflected words We took each word m a WordNet sense and determined whether ~t appeared in a Hector sense, we selected a Hector sense based on the highest percentage of words over all Hector senses An 31 empty selection was made ff all the words in the WordNet sense did not appear in any Hector sense, only content words were considered when the stop hst was used For example, for bet, WordNet sense 2 (stake (money) on the outcome of an issue) mapped into Hector sense 4 ((of a person) to risk (a sum of money or property) m thts way) In this case, there was an overlap on two words (money, 039 in the Hector defimtlon (0 13 of its 15 words) without the stop list When the stop list was invoked, there was an overlap of only one word (money, 0 07 of the Hector defimtion) In this case, the lexicographer had made three assignments (Hector senses 2, 3, and 4), our scoring method treated flus as only 1 out of 3 correct (not using the relaxed method employed in Senseval of treating flus as completely correct) Without the stop hst, our selections matched the lexicographer's in 28 of 86 cases (32 6%), using the stop list, we were successful in 31 of 86 cases (36 1%) The improvement arising when the stop list was used is deceptive, where 8 cases were due to empty assignments (so that only 23 cases, 26 7%, were due to matching content words) Overall, only 41 content words 
were involved in these 23 successes when the stop list was used, an average of I 8 content words To summanze the word overlap analysis (1) despite a ncher set of defimtions in Hector, 9 of 66 WordNet senses (13 6%) could not be assigned, (2) despite the greater detail in Hector senses compared to WordNet senses (2 8 times as many words), only 1 8 content words participated in the assignments, and (3) therefore, the defimng vocabulary between these two definition sets seems to be somewhat divergent Although it might appear as if the word overlap analysis does not perform well, this is not the case The analysis provides a broad overview of the defimuon companson process between two definmon sets and frames a deeper analysis of the differences Moreover, it appears that the accuracy of a ""gold standard"" mapping is not crucially important The quality of the mapping may help frame the subsequent analysis more precisely, but it seems sufficient that any reasonable mapping will suffice This will be discussed further after presenting the results of the componentlal analysis of the defimtlons 32 Meaning-Full Analysis of Definitions The deeper analysis of the mapping between two defimtion sets relies primarily on two major steps (1) parsing definitions and using defimng patterns to identify semrels present m the definitions and (2) relaxing values to these relations by allowing ""synonymic"" substitution (using WordNet) Thus, for example, ffwe identify hypernyms or instruments from parsing a defimtion, we would say that the defimtions are ""equal"" not just ffthe hypernym or instrument is the same word, but also Lf the hypernyms or instruments are members of the same synset This approach is based on the finding (Litkowski, 1978) that a dictionary induces a semantic network where nodes represent ""concepts"" that may be lexicahzed and verbalized in more than one way This finding implies, in general, the absence of true synonyms, and instead the kind of ""concept"" embodied 
in WordNet synsets (with several lexical items and phraseologles) A slmdar approach, parsing defimtlons and relaxing semrel values, was followed in (Dolan, 1994) for clnstenng related senses w~thin a single dictionary The ideal toward which this approach strives is a complete identification of the meamng components included in a defimtion The meaning components can include syntactic features and charactenstlcs (including subcategonzation patterns), semantm components (realized through identification of semrels), selectional restrictions, and coUocational specifications The first stage of the analysis parses the definitions (CL Research, 1999b, Litkowski, to appear) and uses the parse results to extract (via defining patterns) semrels Since definitions have many idiosyncrasies (that do not follow ordinary text), an important first step in this stage is preprocessmg the definition text to put it into a sentence frame that facilitates the extraction of semrels 2 2Note that the stop hst is not applicable to the definition parsing The parser is a full-scale sentence parser, where prepositmns and other words on the stop list are necessary for successful parsing Moreover, inclusion of the prepositions is cmcml to the method, since they are the bearers of much semrel information The extractmn of semrels examines the parse results, a e, a tree whose mtermedaate nodes represent non-ternunals and whose leaves represent the lextcal atems that compnse the defimuons, where any node may also include annotations such as characterizations of number and tense For all noun or verb defimttons, flus includes Identification of the head noun (with recogmtton of""empty"" heads) or verb, for verbs, we signal whether the defimtaon contmned any selecttonal restnctmus (that as, pamcular parenthesazed expressaons) for the subject and object We then exanune preposattonal phrases In the defimUon and deterrmne whether we have a ""defining pattern"" for the preposaUon whach we can use as mdacaUve 
of a particular semrel. We also identify adverbs in the parse tree and look these up in WordNet to identify an adjective synset from which they are derived (if one is given).

The defining patterns are actually part of the dictionary used by the parser. That is, we do not have to develop specific routines to look for specific patterns. A defining pattern is a regular expression that articulates a syntactic pattern to be matched. Thus, to recognize a "manner" semrel, we have the following entry for "in":

in(dpat((~ rep0 1(det(0)) adj manner(0) st(manner))))

This allows us to recognize "in" as possibly giving rise to a "manner" component, where we recognize "in" (the tilde, which allows us to specify particular elements before the "in" as well), with a noun phrase that consists of 0 or 1 determiner, an adjective, and the literal "manner". The '0' after the determiner and the literal indicate that these words are not copied into the value for a "manner" role, so that the value of the "manner" semrel becomes only the adjective that is recognized.

The second stage of the analysis uses the populated lexical database to compare senses and make the selections. This process follows the general methodology used in Senseval (Litkowski, to appear). Specifically, in the definition comparison, we first examine exclusion criteria to rule out specific mappings. These criteria include syntactic properties (e.g., a verb sense that is only transitive cannot map into one that is only intransitive) and collocational properties (e.g., a sense that is used with a particle cannot map into one that uses a different particle). At the present time, these are used only minimally.

We next score each viable sense based on its semrels. We increment the score if the senses have a common hypernym or if a sense's hypernyms belong to the same synset as the other sense's hypernyms. If a particular sense contains a large number of synonyms (that is, no differentiae on the hypernym) and they overlap considerably in the synsets they
evoke, the score can be increased substantially. Currently, we add 5 points for each match.3 We increment the score based on common semrels. In this initial implementation, we have defining patterns (usually quite minimal) for recognizing instrument, means, location, purpose, source, manner, has-constituents, has-members, is-part-of, locale, and goal.4 We increment the score by 2 points when we have a common semrel and then by another 5 points when the value is identical or in the same synset. After all possible increments to the scores have been made, we then select the sense(s) with the highest score. Finally, we compare our selection with that of the gold standard to assess our mapping over all senses.

Another way in which our methodology follows the Senseval process is that it proceeds incrementally. Thus, it is not necessary to have a "final" perfect parse and mapping routine. We can make continual refinements at any stage of the process and examine the overall effect. As in Senseval, we may make changes to deal with a particular phenomenon with the result that overall performance declines, but with a sounder basis for making subsequent improvements.

Results of Componential Analysis

The "gold standard" analysis involves mapping 66 WordNet senses with 348 words into 102 Hector senses with 1878 words. Using the method described above, we obtained 35 out of 86 correct mappings (40.7%), a slight improvement over the 31 correct assignments using the stop-list word overlap technique. However, as mentioned above, the stop-list technique had achieved 8 of its successes by matching null assignments. Considered on this basis, it seems that the componential analysis technique provides substantial improvement.

3At the present time, we use WordNet to identify semrels. We envision using the full semantic network created by parsing all a dictionary's definitions. This would include a richer set of semrels than currently included in WordNet.

4The defining patterns are developed by hand. We have only just begun this effort, so the current set is somewhat impoverished.
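As a rough illustration of the scoring scheme described above (5 points per hypernym match, 2 points per common semrel, 5 more when the values agree, then selection of the highest-scoring sense), the following sketch uses a deliberately simplified sense representation, a set of hypernym strings plus a dict of semrels. This is an assumption for exposition, not the actual DIMAP data structure, and synset equivalence is approximated here by string equality.

```python
def score_sense(source_sense, target_sense):
    """Score one candidate sense mapping.

    Hypothetical simplification: a sense is a dict with a set of
    'hypernyms' and a dict of 'semrels'. Synset membership is
    approximated by plain string equality.
    """
    score = 0
    # +5 for each hypernym the two senses share
    score += 5 * len(source_sense["hypernyms"] & target_sense["hypernyms"])
    # +2 for each semrel both senses carry, +5 more when values match
    for rel, value in source_sense["semrels"].items():
        if rel in target_sense["semrels"]:
            score += 2
            if target_sense["semrels"][rel] == value:
                score += 5
    return score

def best_mapping(source_sense, target_senses):
    """Return the id(s) of the highest-scoring target sense(s)."""
    scores = {tid: score_sense(source_sense, s)
              for tid, s in target_senses.items()}
    top = max(scores.values())
    return [tid for tid, sc in scores.items() if sc == top]
```

A sense sharing one hypernym and one semrel with an identical value would score 5 + 2 + 5 = 12 under this sketch.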
In addition, our technique "erred" on 4 cases by making assignments where none were made by the lexicographer. We suggest that these cases do contain some common elements of meaning and may conceivably not be construed as errors.

The mapping from WordNet to Hector had relatively few empty mappings, senses for which it was not possible to make an assignment. These are the cases where it appears that the dictionaries do not overlap, and thus they provide a tentative indication of where two dictionaries may have different coverage. The cases of multiple assignments indicate the degree of ambiguity in the mapping. The averages in both directions between Hector and WordNet were dominated by the inability to obtain good discrimination for the word "seize". Thus, this method identifies individual words where the discriminative ability needs to be further refined.

Perhaps more importantly, the componential analysis method exploits considerably more information than the word overlap methods. Whereas the stop-list word overlap mapping was based on only 41 content words, the componential approach (in the selected mappings) had 228 hits in developing its scores, with only a small number of defining patterns.

Comparison of Dictionaries

We next examined the nature of the interrelations between pairs of dictionaries without use of a "gold standard" to assess the process of mapping. For this purpose, we mapped in both directions between the pairs {WordNet, Hector}, {W3, OALD}, and {W3, AHD}. We examine Dorr's lexical knowledge base for the implications it may have in the mapping process.

Neither WordNet nor Hector are properly viewed as dictionaries, since there was no intention to publish them as such. WordNet "glosses" are generally smaller (5.3 words per sense) compared to Hector (18.4 words per sense), which contains many words specifying selectional restrictions on the subject and object of the verbs. Hector was used primarily for a large-scale sense tagging project.
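For comparison, a Lesk-style stop-list word overlap score of the kind referred to above can be sketched in a few lines. The stop list below is a small illustrative stand-in, not the list actually used, and tokenization is reduced to whitespace splitting.

```python
# Illustrative stand-in for the stop list of function words;
# the real list is larger and includes all the prepositions
# that the defining-pattern method brings back into play.
STOP_LIST = {"a", "an", "the", "of", "in", "to", "or", "and",
             "by", "as", "with", "for", "on", "at"}

def overlap_score(def1: str, def2: str) -> int:
    """Count content words shared by two definitions, ignoring
    words on the stop list (a Lesk-style overlap baseline)."""
    words1 = {w for w in def1.lower().split() if w not in STOP_LIST}
    words2 = {w for w in def2.lower().split() if w not in STOP_LIST}
    return len(words1 & words2)
```

Two senses would then be mapped to whichever counterpart definition yields the highest overlap count, with ties and zero overlaps producing the multiple and empty assignments discussed below.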
The three formal dictionaries were subject to rigorous publishing and style standards. The average number of words per sense was 8.7 (OALD), 7.1 (AHD), and 9.9 (W3), with an average of 3.4, 6.2, and 12.0 senses per word. Each table shows the average number of senses being mapped, the average number of assignments in the target dictionary, the average number of senses for which no assignment could be made, the average number of multiple assignments per word, and the average score of the assignments that were made.

Mapping     Senses  Assigned  Empty  Multiple  Score
WN-Hector     3.7      4.7     0.6     1.7     11.9
Hector-WN     5.7      6.4     1.4     2.2     11.3

These points are further emphasized in the mapping between W3 and OALD, where the disparity between the empty and multiple assignments indicates that we are mapping between quite disparate dictionaries. This tends to be the case not only for the entire set of words, but is also evident for individual words where there is a considerable disparity in the number of senses, which then dominates the overall disparity. Thus, for example, W3 has 41 definitions for "float", while OALD has 10. We tend to be unable to find the specific sense in going from W3 to OALD, because it is likely that we have many more specific definitions that are not present. In the other direction, we are likely to have considerable ambiguity and multiple assignments.

Mapping     Senses  Assigned  Empty  Multiple  Score
W3-OALD      12.0      7.8     6.0     1.8      9.9
OALD-W3       3.4      6.0     0.7     3.2      8.6

Between W3 and AHD, there is less overall disparity between the definition sets, although since W3 is unabridged, we still have a relatively high number of senses in W3 that do not appear to be present in AHD. Finally, it should be noted that the scores for the published dictionaries tend to be a little lower than for WordNet and Hector. This reflects the likelihood that we have not extracted as much information as we did in parsing and analyzing the definition sets used in Senseval.

Mapping     Senses  Assigned  Empty  Multiple  Score
W3-AHD       12.0     11.5     4.0     3.6      9.0
AHD-W3        6.2      9.1     1.2     4.1      9.1
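The per-word figures behind the tables above (total assignments, empty mappings, multiple assignments) can be computed from any judgmental mapping. A minimal sketch follows, assuming the mapping for one word is given as a dict from each source sense to its set of target senses; averaging over words and the scoring itself are omitted.

```python
def mapping_summary(mapping):
    """Summarize one word's sense mapping.

    `mapping` is assumed to be {source_sense: set_of_target_senses}.
    Returns the counts reported per direction in the tables:
    senses mapped, assignments made, empty mappings, and
    multiple assignments.
    """
    return {
        "senses": len(mapping),
        "assignments": sum(len(t) for t in mapping.values()),
        "empty": sum(1 for t in mapping.values() if not t),
        "multiple": sum(1 for t in mapping.values() if len(t) > 1),
    }
```

A word like "seize", which resists discrimination, would show up here with a high multiple-assignment count relative to its number of senses.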
We next considered Dorr's lexical database. We first transformed her theta grids to syntactic specifications (transitive or intransitive) and identification of semrels (e.g., where she identified an instr component, we added such a semrel to the DIMAP sense). We were able to identify a mapping from WordNet to her senses for two words ("float" and "shake") for which Dorr has several entries. However, since she has considerably more semantic components than we are currently able to recognize, we did not pursue this avenue any further at this time.

More important than just mapping between two words, Dorr's data indicates the possibility of further exploitation of a richer set of semantic components. Specifically, as reported in (Olsen, et al., 1998), in describing procedures for automatically acquiring thematic grids for Mandarin Chinese, it was noted that "verbs that incorporate thematic elements in their meaning would not allow that element to appear in the complement structure". Thus, by using Dorr's thematic grids when verbs are parsed in definitions, it is possible to identify where particular semantic components are lexicalized and which others are transmitted through to the thematic grid (complement or subcategorization pattern) for the definiendum.

The transmission of semantic components to the thematic grid is also reflected overtly in many definitions. For example, shake has one definition, "to bring to a specified condition by or as if by repeated quick jerky movements". We would thus expect that the thematic grid for this definition should include a "goal". And, indeed, Dorr's database has two senses which require a "goal" as part of their thematic grid. Similarly, for many definitions in the sample set, we identified a source defining pattern based on the word "from"; frequently, the object of the preposition was the word "source" itself, indicating that the subcategorization properties of the definiendum should include a source component.
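Defining patterns like the "from" source pattern just mentioned can be approximated, very loosely, as regular expressions over the definition text. The sketch below is a hypothetical re-implementation for illustration only, not the parser's dpat formalism, which operates over parse trees rather than raw strings.

```python
import re

# Hypothetical string-level stand-ins for two defining patterns:
# "from <det>? X"        -> source: X
# "in <det>? ADJ manner" -> manner: ADJ
SOURCE_PATTERN = re.compile(r"\bfrom\s+(?:(?:a|an|the)\s+)?(\w+)")
MANNER_PATTERN = re.compile(r"\bin\s+(?:(?:a|an|the)\s+)?(\w+)\s+manner\b")

def extract_semrels(definition: str) -> dict:
    """Extract 'source' and 'manner' semrels from a definition string."""
    semrels = {}
    m = SOURCE_PATTERN.search(definition)
    if m:
        semrels["source"] = m.group(1)
    m = MANNER_PATTERN.search(definition)
    if m:
        semrels["manner"] = m.group(1)
    return semrels
```

As in the dpat entry shown earlier, the determiner and any literal anchor word are matched but excluded from the semrel value; only the content word is kept.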
Discussion

While the improvement in mapping by using the componential analysis technique (over the word overlap methods) is modest, we consider these results quite significant in view of the very small number of defining patterns we have implemented. Most of the improvement stems from the word substitution principle described earlier (as evidenced by the preponderance of 5-point scores). This technique also provides a mechanism for bringing back the stop words, viz., the prepositions, which are the carriers of information about semrels (the 2-point scores).

The more general conclusion (from the word substitution) is that the success arises from no longer considering a definition in isolation. The proper context for a word and its definitions consists not just of the words that make up the definition, but also the total semantic network represented by the dictionary. We have achieved our results by exploiting only a small part of that network. We have moved only a few steps into that network beyond the individual words and their definitions. We would expect that further expansion, first by the addition of further and improved semrel defining patterns, and second, through the identification of more primitive semantic components, will add considerably to our ability to map between lexical resources. We also expect improvements from consideration of other techniques, such as attempts at ontology alignment (Hovy, 1998).

Although the definition analysis provided here was performed on definitions within a single language, the various meaning components correspond to those used in an interlingua. The extraction method (developed in order to characterize verbs in another language, Chinese) can fruitfully be applied here as well.

Two further observations about this process can be made. The first is that reliance on a well-established semantic network such as WordNet is not necessary. The componential analysis method relies on the local neighborhood of words in the definitions, not on the completeness of the network. Indeed, the network itself can be bootstrapped based on the parsing results. The method can work with any semantic network or ontology and
may be used to refine or flesh out the network or ontology. The second observation is that it is not necessary to have a well-established "gold standard". Any mapping will do. All that is necessary is for any investigator (lexicographer or not) to create a judgmental mapping. The methods employed here can then quantify this mapping based on a word overlap analysis and then further examine it based on the componential analysis. The componential analysis method can then be used to examine underlying subtleties and nuances in the definitions, which a lexicographer or analyst can then examine in further detail to assess the mapping.

Future Work

This work has marked the first time that all the necessary infrastructure has been combined in a rudimentary form. Because of its rudimentary status, the opportunities for improvement are quite extensive. In addition, there are many opportunities for using the techniques described here in further NLP applications.

First, the techniques described here have immediate applicability as part of a lexicographer's workstation. When definitions are parsed and semrels are identified, the resulting data structures can be applied against a corpus of instances for particular words (as in Senseval) for improving word-sense disambiguation. The techniques will also permit comparing an entry with itself to determine the interrelationships among its definitions, and comparing the definitions of two "synonyms" to determine the amount of overlap between them on a definition-by-definition basis.

Although the analysis here has focused on the parsing of definitions, the development of defining patterns clearly extends to generalized text parsing. Since the defining patterns have been incorporated into the same dictionary used for parsing free text, the patterns can be used directly to identify the presence of particular semrels among sentential constituents. We are working to integrate this functionality into our word-sense disambiguation techniques (both the defining patterns and the semrels). Even further, it
seems that matching defining patterns in free text can be used for lexical acquisition. Textual material that contains these patterns could conceivably be flagged as providing definitional material, which can then be compared to existing definitions to assess whether their use is consistent with these definitions, and if not, at least to flag the inconsistency.

The techniques described here can be applied directly to the fields of ontology development and analysis of terminological databases. For ontologies, with or without definitions, the methods employed can be used to compare entries in different ontologies based primarily on the relations in the ontology, both hierarchical and other. For terminological databases, the methods described here can be used to examine the set of conceptual relations implied by the definitions. The definition parsing will facilitate the development of the terminological network in the particular field covered by the database.

The componential analysis methods result in a richer semantic network that can be used in other applications. Thus, for example, it is possible to extend the lexical chaining methods described in (Green, 1997), which are based on the semrels used in WordNet. The semrels developed with the componential analysis method would provide additional detail available for application of lexical cohesion methods. In particular, additional relations would permit some structuring within the individual lexical chains, rather than just considering each chain as an amorphous set (Green, 1999).

Finally, we are currently investigating the use of the componential analysis technique for information extraction. The technique identifies (from definitions) slots that can be used as slots or fields in template generation. Once these slots are identified, we will be attempting to extract slot values from items in large catalog databases (millions of items).

In conclusion, it would seem that, instead of a paucity of information allowing us to compare lexical resources, by bringing in the full semantic network
of the lexicon, we are overwhelmed with a plethora of data.

Acknowledgments

I would like to thank Bonnie Dorr, Christiane Fellbaum, Steve Green, Ed Hovy, Ramesh Krishnamurthy, Bob Krovetz, Thomas Potter, Lucy Vanderwende, and an anonymous reviewer for their comments on an earlier draft of this paper.

References

Atkins, B. T. S. (1991). Building a lexicon: The contribution of lexicography. International Journal of Lexicography, 4(3), 167-204.
CL Research (1999a). CL Research Demos. http://www.clres.com/Demo.html
CL Research (1999b). Dictionary Parsing Project. http://www.clres.com/dpp.html
Dolan, W. B. (1994, 5-9 Aug). Word Sense Ambiguation: Clustering Related Senses. COLING-94, The 15th International Conference on Computational Linguistics. Kyoto, Japan.
Green, S. J. (1997). Automatically generating hypertext by computing semantic similarity [Diss.]. Toronto, Canada: University of Toronto.
Green, S. J. (sjgreen@mn.mq.edu.au) (1999, 1 June). (Rich semantic networks).
Hovy, E. (1998, May). Combining and Standardizing Large-Scale, Practical Ontologies for Machine Translation and Other Uses. Language Resources and Evaluation Conference. Granada, Spain.
Kilgarriff, A. (1998). SENSEVAL Home Page. http://www.itri.bton.ac.uk/events/senseval/
Krovetz, R. (1992, June). Sense-Linking in a Machine Readable Dictionary. 30th Annual Meeting of the Association for Computational Linguistics. Newark, Delaware: Association for Computational Linguistics.
Lesk, M. (1986). Automatic Sense Disambiguation Using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone. Proceedings of SIGDOC.
Litkowski, K. C. (1978). Models of the semantic structure of dictionaries. American Journal of Computational Linguistics, Mf. 81, 25-74.
Litkowski, K. C. (to appear). SENSEVAL: The CL Research Experience. Computers and the Humanities.
Miller, G. A., Beckwith, R., Fellbaum, C., Gross, D., & Miller, K. J. (1990). Introduction to WordNet: An on-line lexical database. International Journal of Lexicography, 3(4), 235-244.
Olsen, M. B., Dorr, B. J., & Thomas, S. C. (1998, 28-31 October).
Enhancing Automatic Acquisition of Thematic Structure in a Large-Scale Lexicon for Mandarin Chinese. Third Conference of the Association for Machine Translation in the Americas, AMTA-98. Langhorne, PA.
C08-1051,C98-2122,o,"Others proposed distributional similarity measures between words (Hindle, 1990; Lin, 1998; Lee, 1999; Weeds et al., 2004)." C08-1051,C98-2122,o,"405 PRF 1 proposed .383 .437 .408 multinomial mixture .360 .374 .367 Newman (2004) .318 .353 .334 cosine .603 .114 .192 -skew divergence (Lee, 1999) .730 .155 .255 Lins similarity (Lin, 1998) .691 .096 .169 CBC (Lin and Pantel, 2002) .981 .060 .114 Table 3: Precision, recall, and F-measure." C08-1051,C98-2122,o,"Applications of word clustering include language modeling (Brown et al., 1992), text classification (Baker and McCallum, 1998), thesaurus construction (Lin, 1998) and so on." C08-1054,C98-2122,o,(2005) applied the distributional similarity proposed by Lin (1998) to coordination disambiguation. C08-1058,C98-2122,o,"One is automatic thesaurus acquisition, that is, to identify synonyms or topically related words from corpora based on various measures of similarity (e.g. Riloff and Shepherd, 1997; Lin, 1998; Caraballo, 1999; Thelen and Riloff, 2002; You and Chen, 2006)." C08-1086,C98-2122,o,"By no means an exhaustive list, the most commonly cited ranking and scoring algorithms are HITS (Kleinberg 1998) and PageRank (Page et al. 1998), which rank hyperlinked documents using the concepts of hubs and authorities." C08-1086,C98-2122,o,"Within the NLP community, n-best list ranking has been looked at carefully in parsing, extractive summarization (Barzilay et al. 1999; Hovy and Lin 1998), and machine translation (Zhang et al. 2006), to name a few." C08-1086,C98-2122,o,"Following Lin (1998), we use syntactic dependencies between words to model their semantic properties." C08-1100,C98-2122,o,"For each word in the LDV, we consulted three existing thesauri: Rogets Thesaurus (Roget, 1995), Collins COBUILD Thesaurus (Collins, 2002), and WordNet (Fellbaum, 1998)." C08-1100,C98-2122,o,"Various methods (Hindle, 1990; Lin, 1998) of automatically acquiring synonyms have been proposed." 
C08-1100,C98-2122,p,"4.1 Features We used a dependency structure as the context for words because it is the most widely used and one of the best performing contextual information in the past studies (Ruge, 1997; Lin, 1998)." C08-1107,C98-2122,o,"Given a wordq, its set of featuresFq and feature weightswq(f) for f Fq, a common symmetric similarity measure is Lin similarity (Lin, 1998a): Lin(u,v) = summationtext fFuFv[wu(f)+wv(f)]summationtext fFu wu(f)+ summationtext fFv wv(f) where the weight of each feature is the pointwise mutual information (pmi) between the word and the feature: wq(f) =log[Pr(f|q)Pr(f) ]." C08-1107,C98-2122,o,"Texts are represented by dependency parse trees (using the Minipar parser (Lin, 1998b)) and templates by parse sub-trees." C08-1117,C98-2122,p,"Among these measures, the most important are Wu & Palmers (Wu and Palmer, 1994), Resniks (Resnik, 1995) and Lins (Lin, 1998)." C08-1117,C98-2122,o,"Where Pantel and Lin use Lins (1998) measure, we use Wu and Palmers (1994) measure." C08-1117,C98-2122,p,One of the most important is Lins (1998). D08-1007,C98-2122,o,"4 Experiments and Results 4.1 Set up We parsed the 3 GB AQUAINT corpus (Voorhees, 2002) using Minipar (Lin, 1998b), and collected verb-object and verb-subject frequencies, building an empirical MI model from this data." D08-1007,C98-2122,o,"Lin (1998a)s similar word list for eat misses these but includes sleep (ranked 6) and sit (ranked 14), because these have similar subjects to eat." D08-1007,C98-2122,o,"Discriminative, context-specific training seems to yield a better set of similar predicates, e.g. 
the highest-ranked contexts for DSPcooc on the verb join,3 lead 1.42, rejoin 1.39, form 1.34, belong to 1.31, found 1.31, quit 1.29, guide 1.19, induct 1.19, launch (subj) 1.18, work at 1.14 give a better SIMS(join) for Equation (1) than the top similarities returned by (Lin, 1998a): participate 0.164, lead 0.150, return to 0.148, say 0.143, rejoin 0.142, sign 0.142, meet 0.142, include 0.141, leave 0.140, work 0.137 Other features are also weighted intuitively." D08-1007,C98-2122,o,"We also test an MI model inspired by Erk (2007): MISIM(n,v) = log summationdisplay nSIMS(n) Sim(n,n) Pr(v,n ) Pr(v)Pr(n) We gather similar words using Lin (1998a), mining similar verbs from a comparable-sized parsed corpus, and collecting similar nouns from a broader 10 GB corpus of English text.4 We also use Keller and Lapata (2003)s approach to obtaining web-counts." D08-1007,C98-2122,p,Erk (2007) compared a number of techniques for creating similar-word sets and found that both the Jaccard coefficient and Lin (1998a)s information-theoretic metric work best. D08-1048,C98-2122,o,"They have been successfully applied in several tasks, such as information retrieval (Salton et al., 1975) and harvesting thesauri (Lin, 1998)." D08-1048,C98-2122,o,"Two LUs close in the space are likely to be in a paradigmatic relation, i.e. to be close in a is-a hierarchy (Budanitsky and Hirst, 2006; Lin, 1998; Pado, 2007)." 
D08-1084,C98-2122,p,"This similarity score is computed as a max over a number of component scoring functions, some based on external lexical resources, including: various string similarity functions, of which most are applied to word lemmas measures of synonymy, hypernymy, antonymy, and semantic relatedness, including a widelyused measure due to Jiang and Conrath (1997), based on manually constructed lexical resources such as WordNet and NomBank a function based on the well-known distributional similarity metric of Lin (1998), which automatically infers similarity of words and phrases from their distributions in a very large corpus of English text The ability to leverage external lexical resources both manually and automatically constructedis critical to the success of MANLI." D08-1103,C98-2122,o,"Distributional measures of distance, such as those proposed by Lin (1998), quantify how similar the two sets of contexts of a target word pair are." D08-1103,C98-2122,o,"For each word pair from the antonym set, we calculated the distributional distance between each of their senses using Mohammad and Hirsts (2006) method of concept distance along with the modified form of Lins (1998) distributional measure (equation 2)." D08-1103,C98-2122,o,Again we used Mohammad and Hirsts (2006) method along with Lins (1998) distributional measure to determine the distributional closeness of two thesaurus concepts. D09-1028,C98-2122,o,Curran (2002) and Lin (1998) use syntactic features in the vector definition. D09-1084,C98-2122,o,"Accurate measurement of semantic similarity between lexical units such as words or phrases is important for numerous tasks in natural language processing such as word sense disambiguation (Resnik, 1995), synonym extraction (Lin, 1998a), and automatic thesauri generation (Curran, 2002)." D09-1084,C98-2122,o,Method Correlation Edge-counting 0.664 Jiang & Conrath (1998) 0.848 Lin (1998a) 0.822 Resnik (1995) 0.745 Li et al. 
D09-1084,C98-2122,o,"(Strube and Ponzetto, 2006) 0.19-0.48 Leacock & Chodrow (1998) 0.36 Lin (1998b) 0.36 Resnik (1995) 0.37 Proposed 0.504 7 Conclusion We proposed a relational model to measure the semantic similarity between two words." D09-1084,C98-2122,o,Lin (1998b) defined the similarity between two concepts as the information that is in common to both concepts and the information contained in each individual concept. D09-1089,C98-2122,o,"Pereira et al.(1993), Curran and Moens (2002) and Lin (1998) use syntactic features in the vector definition." E09-1077,C98-2122,o,Wiebe (2000) uses Lin (1998a) style distributionally similar adjectives in a cluster-and-label process to generate sentiment lexicon of adjectives. E09-1077,C98-2122,o,"3http://www.openoffice.org Another corpora based method due to Turney and Littman (2003) tries to measure the semantic orientation O(t) for a term t by O(t) = summationdisplay tiS+ PMI(t,ti) summationdisplay tjS PMI(t,tj) where S+ and S are minimal sets of polar terms that contain prototypical positive and negative terms respectively, and PMI(t,ti) is the pointwise mutual information (Lin, 1998b) between the terms t and ti." I08-1021,C98-2122,o,"Our approach to STC uses a thesaurus based on corpus statistics (Lin, 1998) for real-valued similarity calculation." I08-1060,C98-2122,o,"Some researchers (Hindle, 1990; Grefenstette, 1994; Lin, 1998) classify terms by similarities based on their distributional syntactic patterns." I08-1072,C98-2122,o,"A wide range of contextual information, such as surrounding words (Lowe and McDonald, 2000; Curran and Moens, 2002a), dependency or case structure (Hindle, 1990; Ruge, 1997; Lin, 1998), and dependency path (Lin and Pantel, 2001; Pado and Lapata, 2007), has been utilized for similarity calculation, and achieved considerable success." 
I08-1072,C98-2122,o,"3.1 Context Extraction We adopted dependency structure as the context of words since it is the most widely used and wellperforming contextual information in the past studies (Ruge, 1997; Lin, 1998)." I08-1072,C98-2122,o,"For each word in LDV, three existing thesauri are consulted: Rogets Thesaurus (Roget, 1995), Collins COBUILD Thesaurus (Collins, 2002), and WordNet (Fellbaum, 1998)." I08-1073,C98-2122,o,"We propose using distributional similarity (using (Lin, 1998)) as an approximation of semantic distancebetweenthewordsinthetwoglosses,rather than requiring an exact match." I08-1073,C98-2122,o,We adopt the similarity score proposed by Lin (1998) as the distributional similarity score and use 50 nearest neighbours in line with McCarthy et al. For the random baseline we select one word sense at random for each word token and average the precision over 100 trials. I08-1073,C98-2122,o,"2 Related Work ThisworkbuildsuponthatofMcCarthyetal.(2004) which acquires predominant senses for target words from a large sample of text using distributional similarity (Lin, 1998) to provide evidence for predominance." I08-1073,C98-2122,o,"In this approach we extend the denition overlap by considering the distributional similarity (Lin, 1998) rather than identify of the words in the two denitions." I08-1073,C98-2122,o,McCarthy et al. use a distributional similarity thesaurus acquired from corpus data using the method of Lin (1998) for nding the predominant sense of a word where the senses are dened by WordNet. I08-1073,C98-2122,o,"Let w be a target word and Nw = fn1,n2nkg be the ordered set of the top scoring k neighbours of w from the thesaurus with associated distributional similarity scores fdss(w,n1),dss(w,n2),dss(w,nk)g using (Lin, 1998)." I08-2102,C98-2122,o,We use the similarity proposed by Lin (1998). 
N09-2059,C98-2122,o,"The thesaurus was produced using the metric described by Lin (1998) with input from the grammatical relation data extracted using the 90 million words of written English from the British National Corpus (BNC) (Leech, 1992) using the RASP parser (Briscoe and Carroll, 2002)." P09-1031,C98-2122,o,"The common types of features include contextual (Lin, 1998), co-occurrence (Yang and Callan, 2008), and syntactic dependency (Pantel and Lin, 2002; Pantel and Ravichandran, 2004)." P09-1031,C98-2122,o,"Inspired by the conjunction and appositive structures, Riloff and Shepherd (1997), Roark and Charniak (1998) used cooccurrence statistics in local context to discover sibling relations." P09-1031,C98-2122,o,"Clustering-based approaches usually represent word contexts as vectors and cluster words based on similarities of the vectors (Brown et al., 1992; Lin, 1998)." P09-1051,C98-2122,o,"The second uses Lin dependency similarity, a syntacticdependency based distributional word similarity resource described in (Lin, 1998a)9." P09-1051,C98-2122,o,"While Kazama and Torisawa used a chunker, we parsed the definition sentence using Minipar (Lin, 1998b)." P09-1052,C98-2122,o,"Syntactic context information is used (Hindle, 1990; Ruge, 1992; Lin, 1998) to compute term similarities, based on which similar words to a particular word can directly be returned." P09-2062,C98-2122,o,"Semantic DSN: The construction of this network is inspired by (Lin, 1998)." W08-1901,C98-2122,o,"corpora and corpus query tools has been particularly significant in the area of compiling and developing lexicographic materials (Kilgarriff and Rundell, 2002) and in the area of creating various kinds of lexical resources, such as WordNet (Fellbaum, 1998) and FrameNet (Atkins et al., 2003; Fillmore et al., 2003)." W08-1901,C98-2122,o,"This approach is similar to conventional techniques for automatic thesaurus construction (Lin, 1998)." 
W08-1902,C98-2122,o,"Our next steps will be to take a closer look at the following work: clustering of similar words (Lin, 1998), topic signatures (Lin and Hovy, 2000) and Kilgarriff's sketch engine (Kilgarriff et al., 2004)." W08-2005,C98-2122,o,"The earliest works in this direction are those of (Hindle, 1990), (Lin, 1998), (Dagan et al., 1999), (Chen and Chen, 2000), (Geffet and Dagan, 2004) and (Weeds and Weir, 2005)." W08-2005,C98-2122,o,Lin (1998) proposed a word similarity measure based on the distributional pattern of words which allows one to construct a thesaurus using a parsed corpus. W08-2211,C98-2122,o,(2004) we use k = 50 and obtain our thesaurus using the distributional similarity metric described by Lin (1998). W08-2211,C98-2122,o,"Thus we rank each sense wsi ∈ WSw using Prevalence Score(wsi) = Σ_{nj ∈ Nw} dss(nj) × wnss(wsi,nj) / Σ_{wsi′ ∈ WSw} wnss(wsi′,nj) (11), where the WordNet similarity score (wnss) is defined as: wnss(wsi,nj) = max_{nsx ∈ NSnj} (wnss(wsi,nsx)) 2.2 Building the Thesaurus The thesaurus was acquired using the method described by Lin (1998)." W08-2211,C98-2122,o,"For every pair of nouns, where each noun had a total frequency in the triple data of 10 or more, we computed their distributional similarity using the measure given by Lin (1998)." W09-0201,C98-2122,o,"Concept similarity is often measured by vectors of co-occurrence with context words that are typed with dependency information (Lin, 1998; Curran and Moens, 2002)." W09-0203,C98-2122,p,"Whereas dependency based semantic spaces have been shown to surpass other word space models for a number of problems (Padó and Lapata, 2007; Lin, 1998), for the task of categorisation simple pattern based spaces have been shown to perform equally good if not better (Poesio and Almuhareb, 2005b; Almuhareb and Poesio, 2005b)." W09-0203,C98-2122,o,In particular we work with dependency paths that can reach beyond direct dependencies as opposed to Lin (1998) but in the line of Padó and Lapata (2007).
W09-0203,C98-2122,o,As a basis mapping function we used a generalisation of the one used by Grefenstette (1994) and Lin (1998). W09-0805,C98-2122,o,"Example of such algorithms are (Pereira et al., 1993) and (Lin, 1998) that use syntactic features in the vector definition." W09-1108,C98-2122,o,"Pereira (1993), Curran (2002) and Lin (1998) use syntactic features in the vector definition." W09-1316,C98-2122,o,"In particular, this method has been used for word sense disambiguation (Lin, 1997) and thesaurus construction (Lin, 1998)." D07-1008,D07-1001,o,"We used an implementation of McDonald (2006)forcomparisonofresults(ClarkeandLapata, 2007)." D08-1057,D07-1001,o,"More recently, Clarke and Lapata (2007) use Centering Theory (Grosz et al., 1995) and Lexical Chains (Morris and Hirst, 1991) to identify which information to prune." N09-2058,D07-1001,o,"(Elhadad et al., 2001; Clarke and Lapata, 2007; Madnani et al., 2007))." P09-1024,D07-1001,o,"This framework is 211 commonly used in generation and summarization applications where the selection process is driven by multiple constraints (Marciniak and Strube, 2005; Clarke and Lapata, 2007)." P09-1024,D07-1001,o,"In prior research, ILP was used as a postprocessing step to remove redundancy and make other global decisions about parameters (McDonald, 2007; Marciniak and Strube, 2005; Clarke and Lapata, 2007)." P09-2026,D07-1001,o,Clarke and Lapata (2007) included discourse level features in their framework to leverage context for enhancing coherence. P09-2026,D07-1001,o,"4.1 Corpora Sentence compression systems have been tested on product review data from the Ziff-Davis (ZD, henceforth) Corpus by Knight and Marcu (2000), general news articles by Clarke and Lapata (CL, henceforth) corpus (2007) and biomedical articles (Lin and Wilbur, 2007)." 
D09-1024,D07-1006,o,"In the first set of experiments, we compare two settings of our UALIGN system with other aligners, GIZA++ (Union) (Och and Ney, 2003) and LEAF (with 2 iterations) (Fraser and Marcu, 2007)." D09-1024,D07-1006,o,"Besides precision, recall and (balanced) F-measure, we also include an F-measure variant strongly biased towards recall (α=0.1), which (Fraser and Marcu, 2007) found to be best to tune their LEAF aligner for maximum MT accuracy." D09-1024,D07-1006,o,"1 Introduction Word alignment is a critical component in training statistical machine translation systems and has received a significant amount of research, for example, (Brown et al., 1993; Ittycheriah and Roukos, 2005; Fraser and Marcu, 2007), including work leveraging syntactic parse trees, e.g., (Cherry and Lin, 2006; DeNero and Klein, 2007; Fossum et al., 2008)." D09-1076,D07-1006,o,"The training data is aligned using the LEAF technique (Fraser and Marcu, 2007)." W08-0306,D07-1006,o,"1.2 Related Work Recently, discriminative methods for alignment have rivaled the quality of IBM Model 4 alignments (Liu et al., 2005; Ittycheriah and Roukos, 2005; Taskar et al., 2005; Moore et al., 2006; Fraser and Marcu, 2007b)." W08-0306,D07-1006,p,"However, except for (Fraser and Marcu, 2007b), none of these advances in alignment quality has improved translation quality of a state-of-the-art system." W08-0306,D07-1006,o,"In contrast to the semi-supervised LEAF alignment algorithm of (Fraser and Marcu, 2007b), which requires 1,500-2,000 CPU days per iteration to align 8.4M Chinese-English sentences (anonymous, p.c.), link deletion requires only 450 CPU hours to re-align such a corpus (after initial alignment by GIZA++, which requires 20-24 CPU days)." W08-0306,D07-1006,o,"However, (Fraser and Marcu, 2007a) show that, in phrase-based translation, improvements in AER or f-measure do not necessarily correlate with improvements in BLEU score."
W08-0306,D07-1006,o,"They propose two modifications to f-measure: varying the precision/recall tradeoff, and fully-connecting the alignment links before computing f-measure. Weighted Fully-Connected F-Measure: Given a hypothesized set of alignment links H and a gold-standard set of alignment links G, we define H+ = fullyConnect(H) and G+ = fullyConnect(G), and then compute: f-measure(H+) = 1 / (α/precision(H+) + (1−α)/recall(H+)) For phrase-based Chinese-English and Arabic-English translation tasks, (Fraser and Marcu, 2007a) obtain the closest correlation between weighted fully-connected alignment f-measure and BLEU score using α=0.5 and α=0.1, respectively." W09-1804,D07-1006,o,"Probabilistic generative models like IBM 1-5 (Brown et al., 1993), HMM (Vogel et al., 1996), ITG (Wu, 1997), and LEAF (Fraser and Marcu, 2007) define formulas for P(f | e) or P(e, f). (Figure 1: Word alignment exercise (Knight, 1997).)" C08-1041,D07-1007,o,"Carpuat and Wu (2007b) integrated a WSD system into a phrase-based SMT system, Pharaoh (Koehn, 2004a)." C08-1041,D07-1007,o,"Furthermore, they extended WSD to phrase sense disambiguation (PSD) (Carpuat and Wu, 2007a)." D08-1010,D07-1007,o,Carpuat and Wu (2007b) and Chan et al.
D08-1010,D07-1007,p,"Similar to WSD, Carpuat and Wu (2007a) used contextual information to solve the ambiguity problem for phrases." D08-1039,D07-1007,p,"Recently, word-sense disambiguation (WSD) methods have been shown to improve translation quality (Chan et al., 2007; Carpuat and Wu, 2007)." D08-1039,D07-1007,p,"In Carpuat and Wu (2007), another state-of-the-art WSD engine (a combination of naive Bayes, maximum entropy, boosting and Kernel PCA models) is used to dynamically determine the score of a phrase pair under consideration and, thus, let the phrase selection adapt to the context of the sentence." D08-1105,D07-1007,p,"WSD is one of the fundamental problems in natural language processing and is important for applications such as machine translation (MT) (Chan et al., 2007a; Carpuat and Wu, 2007), information retrieval (IR), etc. WSD is typically viewed as a classification problem where each ambiguous word is assigned a sense label (from a pre-defined sense inventory) during the disambiguation process." D09-1022,D07-1007,o,"Another WSD approach incorporating context-dependent phrasal translation lexicons is given in (Carpuat and Wu, 2007) and has been evaluated on several translation tasks." D09-1022,D07-1007,o,"Second, instead of disambiguating phrase senses as in (Carpuat and Wu, 2007), we model word selection independently of the phrases used in the MT models." D09-1046,D07-1007,o,"The senses are: (1) material from cellulose, (2) report, (3) publication, (4) medium for writing, (5) scientific, (6) publishing firm, (7) physical object. [...] inventory is suitable for which application, other than cross-lingual applications where the inventory can be determined from parallel data (Carpuat and Wu, 2007; Chan et al., 2007)."
I08-1073,D07-1007,p,"There has been considerable skepticism over whether WSD will actually improve performance of applications, but we are now starting to see improvement in performance due to WSD in cross-lingual information retrieval (Clough and Stevenson, 2004; Vossen et al., 2006) and machine translation (Carpuat and Wu, 2007; Chan et al., 2007) and we hope that other applications such as question-answering, text simplification and summarisation might also benefit as WSD methods improve." P08-1024,D07-1007,p,"Promising features might include those over source side reordering rules (Wang et al., 2007) or source context features (Carpuat and Wu, 2007)." P08-1049,D07-1007,p,"On the other hand, integrating an additional component into a baseline SMT system is notoriously tricky as evident in the research on integrating word sense disambiguation (WSD) into SMT systems: different ways of integration lead to conflicting conclusions on whether WSD helps MT performance (Chan et al., 2007; Carpuat and Wu, 2007)." P08-1087,D07-1007,o,Carpuat and Wu (2007) approached the issue as a Word Sense Disambiguation problem. W08-0302,D07-1007,o,Carpuat and Wu (2007) and Chan et al. W08-0404,D07-1007,o,"Maximum entropy estimation for translation of individual words dates back to Berger et al. (1996), and the idea of using multi-class classifiers to sharpen predictions normally made through relative frequency estimates has been recently reintroduced under the rubric of word sense disambiguation and generalized to substrings (Chan et al. 2007; Carpuat and Wu 2007a; Carpuat and Wu 2007b)." W08-0404,D07-1007,o,4 are equivalent to a maximum entropy variant of the phrase sense disambiguation approach studied by Carpuat & Wu (2007b). W09-2404,D07-1007,p,"In Statistical Machine Translation (SMT), recent work shows that WSD helps translation quality when the WSD system directly uses translation candidates as sense inventories (Carpuat and Wu, 2007; Chan et al., 2007; Giménez and Màrquez, 2007)."
W09-2404,D07-1007,o,"Even the recent generation of SMT models that explicitly use WSD modeling to perform lexical choice rely on sentence context rather than wider document context and translate sentences in isolation (Carpuat and Wu, 2007; Chan et al., 2007; Giménez and Màrquez, 2007; Stroppa et al., 2007; Specia et al., 2008)." W09-2410,D07-1007,p,"We are starting to see the beginnings of a positive effect of WSD in NLP applications such as Machine Translation (Carpuat and Wu, 2007; Chan et al., 2007)." W09-2412,D07-1007,o,"Unlike a full blown machine translation task (Carpuat and Wu, 2007), annotators and systems will not be required to translate the whole context but just the target word." W09-2413,D07-1007,p,"Several studies have demonstrated that for instance Statistical Machine Translation (SMT) benefits from incorporating a dedicated WSD module (Chan et al., 2007; Carpuat and Wu, 2007)." C08-1015,D07-1013,o,"A simple example is shown in Figure 1, where the arc between a and hat indicates that hat is the head of a. Current statistical dependency parsers perform better if the dependency lengths are shorter (McDonald and Nivre, 2007)." C08-1081,D07-1013,o,"The corresponding unlabeled figures are 73.3 and 33.4. This confirms the results of previous studies showing that the pseudo-projective parsing technique used by MaltParser tends to give high precision but rather low recall, given that non-projective dependencies are among the most difficult to parse correctly (McDonald and Nivre, 2007)." C08-1081,D07-1013,o,"3 MaltParser MaltParser (Nivre et al., 2007b) is a language-independent system for data-driven dependency parsing, based on a transition-based parsing model (McDonald and Nivre, 2007)."
D07-1096,D07-1013,o,"We then describe the two main paradigms for learning and inference, in this year's shared task as well as in last year's, which we call transition-based parsers (section 5.2) and graph-based parsers (section 5.3), adopting the terminology of McDonald and Nivre (2007). Finally, we give an overview of the domain adaptation methods that were used (section 5.4)." D07-1097,D07-1013,o,"As shown by McDonald and Nivre (2007), the Single Malt parser tends to suffer from two problems: error propagation due to the deterministic parsing strategy, typically affecting long dependencies more than short ones, and low precision on dependencies originating in the artificial root node due to fragmented parses. The question is which of these problems is alleviated by the multiple views given by the component parsers in the Blended system." D08-1017,D07-1013,p,"A solution that leverages the complementary strengths of these two approaches, described in detail by McDonald and Nivre (2007), was recently and successfully explored by Nivre and McDonald (2008)." D08-1059,D07-1013,o,"However, they make different types of errors, which can be seen as a reflection of their theoretical differences (McDonald and Nivre, 2007)." D08-1059,D07-1013,o,(2007) and Nivre and McDonald (2008) can be seen as methods to combine separately defined models. D08-1059,D07-1013,o,"The terms graph-based and transition-based were used by McDonald and Nivre (2007) to describe the difference between MSTParser (McDonald and Pereira, 2006), which is a graph-based parser with an exhaustive search decoder, and MaltParser (Nivre et al., 2006), which is a transition-based parser with a greedy search decoder." D08-1059,D07-1013,o,McDonald and Nivre (2007) showed that the MSTParser and MaltParser produce different errors. D09-1121,D07-1013,o,"In the field of parsing, McDonald and Nivre (2007) compared parsing errors between graph-based and transition-based parsers."
D09-1121,D07-1013,o,"In examining the combination of the two types of parsing, McDonald and Nivre (2007) utilized similar approaches to our empirical analysis." E09-1023,D07-1013,o,"also McDonald and Nivre, 2007)." I08-1012,D07-1013,o,"F1 = 2 × precision × recall / (precision + recall). In the figure, we find that the F1 score decreases when dependency length increases, as (McDonald and Nivre, 2007) found." I08-1012,D07-1013,o,"The reason may be that shorter dependencies are often modifier of nouns such as determiners or adjectives or pronouns modifying their direct neighbors, while longer dependencies typically represent modifiers of the root or the main verb in a sentence (McDonald and Nivre, 2007)." I08-1012,D07-1013,o,"However, current statistical dependency parsers provide worse results if the dependency length becomes longer (McDonald and Nivre, 2007)." I08-2097,D07-1013,o,"sentence length: The longer the sentence is, the poorer the parser performs (McDonald and Nivre, 2007)." I08-2097,D07-1013,o,"dependency lengths: Long-distance dependencies exhibit bad performance (McDonald and Nivre, 2007)." J08-4003,D07-1013,o,"Looking first at learning times, it is obvious that learning time depends primarily on the number of training instances, which is why we can observe a difference of several orders of magnitude in learning time between the biggest training set (Czech) and the smallest training set (Slovene). This is shown by Nivre and Scholz (2004) in comparison to the iterative, arc-standard algorithm of Yamada and Matsumoto (2003) and by McDonald and Nivre (2007) in comparison to the spanning tree algorithm of McDonald, Lerman, and Pereira (2006)."
N09-2066,D07-1013,o,"1 Introduction Deterministic transition-based Shift/Reduce dependency parsers often make mistakes in the analysis of long span dependencies (McDonald & Nivre, 2007)." P08-1108,D07-1013,o,"Practically all data-driven models that have been proposed for dependency parsing in recent years can be described as either graph-based or transition-based (McDonald and Nivre, 2007)." P08-1108,D07-1013,o,"In order to get a better understanding of these matters, we replicate parts of the error analysis presented by McDonald and Nivre (2007), where parsing errors are related to different structural properties of sentences and their dependency graphs." P08-1108,D07-1013,o,"As expected, Malt and MST have very similar accuracy for short sentences but Malt degrades more rapidly with increasing sentence length because of error propagation (McDonald and Nivre, 2007)." P08-1108,D07-1013,o,"First, the graph-based models have better precision than the transition-based models when predicting long arcs, which is compatible with the results of McDonald and Nivre (2007)." P08-1108,D07-1013,o,"Again, we find the clearest patterns in the graphs for precision, where Malt has very low precision near the root but improves with increasing depth, while MST shows the opposite trend (McDonald and Nivre, 2007)." P08-1108,D07-1013,o,"As expected, we see that MST does better than Malt for all categories except nouns and pronouns (McDonald and Nivre, 2007)."
P08-1108,D07-1013,o,"Both models have been used to achieve state-of-the-art accuracy for a wide range of languages, as shown in the CoNLL shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007), but McDonald and Nivre (2007) showed that a detailed error analysis reveals important differences in the distribution of errors associated with the two models." P08-1108,D07-1013,o,"This difference was highlighted in the study of McDonald and Nivre (2007), which showed that the difference is reflected directly in the error distributions of the parsers." P08-1110,D07-1013,o,"An alternative framework that formally describes some dependency parsers is that of transition systems (McDonald and Nivre, 2007)." P09-1007,D07-1013,o,"The experimental results in (McDonald and Nivre, 2007) show a negative impact on the parsing accuracy from overly long dependency relations." P09-1007,D07-1013,o,"4 Dependency Parsing: Baseline 4.1 Learning Model and Features According to (McDonald and Nivre, 2007), all data-driven models for dependency parsing that have been proposed in recent years can be described as either graph-based or transition-based." P09-3002,D07-1013,o,"(Kuhlmann and Möhl, 2007; McDonald and Nivre, 2007; Nivre et al., 2007) Hindi is a verb-final, flexible word order language and, therefore, has frequent occurrences of non-projectivity in its dependency structures." W07-2220,D07-1013,o,"The majority of these systems used models belonging to one of the two dominant approaches in data-driven dependency parsing in recent years (McDonald and Nivre, 2007): In graph-based models, every possible dependency graph for a given input sentence is given a score that decomposes into scores for the arcs of the graph."
W07-2220,D07-1013,o,"Acknowledgments I want to thank my fellow organizers of the shared task, Johan Hall, Sandra Kübler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret, who are also co-authors of the longer paper on which this paper is partly based (Nivre et al., 2007)." W08-2104,D07-1013,o,"There are also attempts at a more fine-grained analysis of accuracy, targeting specific linguistic constructions or grammatical functions (Carroll and Briscoe, 2002; Kübler and Prokić, 2006; McDonald and Nivre, 2007)." W09-1104,D07-1013,o,"5 Data-driven Dependency Parsing Models for data-driven dependency parsing can be roughly divided into two paradigms: graph-based and transition-based models (McDonald and Nivre, 2007)." D07-1015,D07-1014,o,"Two other groups of authors have independently and simultaneously proposed adaptations of the Matrix-Tree Theorem for structured inference on directed spanning trees (McDonald and Satta, 2007; Smith and Smith, 2007)." D07-1015,D07-1014,o,"Second, McDonald and Satta (2007) propose an O(n^5) algorithm for computing the marginals, as opposed to the O(n^3) matrix-inversion approach used by Smith and Smith (2007) and ourselves." D07-1015,D07-1014,o,"For example, both papers propose minimum-risk decoding, and McDonald and Satta (2007) discuss unsupervised learning and language modeling, while Smith and Smith (2007) define hidden-variable models based on spanning trees." D07-1015,D07-1014,o,Similar adaptations of the Matrix-Tree Theorem have been developed independently and simultaneously by Smith and Smith (2007) and McDonald and Satta (2007); see Section 5 for more discussion. D07-1070,D07-1014,o,"For nonprojective parsing, the analogy to the inside algorithm is the O(n^3) matrix-tree algorithm, which is dominated asymptotically by a matrix determinant (Smith and Smith, 2007; Koo et al., 2007; McDonald and Satta, 2007)."
D07-1102,D07-1014,o,"We can sum over all non-projective spanning trees by taking the determinant of the Kirchhoff matrix of the graph defined above, minus the row and column corresponding to the root node (Smith and Smith, 2007)." D08-1016,D07-1014,o,(2007) and Smith and Smith (2007) show how to employ the matrix-tree theorem. D08-1065,D07-1014,p,"The approach has been shown to give improvements over the MAP classifier in many areas of natural language processing including automatic speech recognition (Goel and Byrne, 2000), machine translation (Kumar and Byrne, 2004; Zhang and Gildea, 2008), bilingual word alignment (Kumar and Byrne, 2002), and parsing (Goodman, 1996; Titov and Henderson, 2006; Smith and Smith, 2007)." D09-1058,D07-1014,o,"It is often straightforward to obtain large amounts of unlabeled data, making semi-supervised approaches appealing; previous work on semisupervised methods for dependency parsing includes (Smith and Eisner, 2007; Koo et al., 2008; Wang et al., 2008)." D09-1058,D07-1014,o,"We used a non-projective model, trained using an application of the matrix-tree theorem (Koo et al., 2007; Smith and Smith, 2007; McDonald and Satta, 2007) for the first-order Czech models, and projective parsers for all other models." D09-1058,D07-1014,o,"Note that it is straightforward to calculate these expected counts using a variant of the inside-outside algorithm (Baker, 1979) applied to the (Eisner, 1996) dependency-parsing data structures (Paskin, 2001) for projective dependency structures, or the matrix-tree theorem (Koo et al., 2007; Smith and Smith, 2007; McDonald and Satta, 2007) for nonprojective dependency structures." P09-1041,D07-1014,o,"Then, the method of Smith and Smith (2007) can be used to compute the probability of every possible edge conditioned on the presence of k→i, p(y_i′ = k′ | y_i = k, x), using K^{-1}_{k→i}. Multiplying this probability by p(y_i = k | x) yields the desired two-edge marginal."
P09-1041,D07-1014,n,"Unfortunately, there is no straightforward generalization of the method of Smith and Smith (2007) to the two-edge marginal problem." P09-1041,D07-1014,o,"(Smith and Smith, 2007))." P09-1041,D07-1014,o,"This weak supervision has been encoded using priors and initializations (Klein and Manning, 2004; Smith, 2006), specialized models (Klein and Manning, 2004; Seginer, 2007; Bod, 2006), and implicit negative evidence (Smith, 2006)." P09-1041,D07-1014,o,"In this paper we use a non-projective dependency tree CRF (Smith and Smith, 2007)." P09-1041,D07-1014,o,"This generates tens of millions of features, so we prune those features that occur fewer than 10 total times, as in (Smith and Eisner, 2007)." P09-1041,D07-1014,o,Smith and Eisner (2007) apply entropy regularization to dependency parsing. P09-1041,D07-1014,o,"(McDonald and Satta, 2007; Smith and Smith, 2007)." P09-1041,D07-1014,p,Smith and Smith (2007) describe a more efficient algorithm that can compute all edge expectations in O(n^3) time using the inverse of the Kirchhoff matrix K^{-1}. P09-1064,D07-1014,p,"Minimizing risk has been shown to improve performance for MT (Kumar and Byrne, 2004), as well as other language processing tasks (Goodman, 1996; Goel and Byrne, 2000; Kumar and Byrne, 2002; Titov and Henderson, 2006; Smith and Smith, 2007)." W07-2216,D07-1014,o,"Following the work of Koo et al. (2007) and Smith and Smith (2007), it is possible to compute all expectations in O(n^3 + |L|n^2) through matrix inversion." W07-2216,D07-1014,o,(2007) and Smith and Smith (2007) showed that the Matrix-Tree Theorem can be used to train edge-factored log-linear models of dependency parsing. D07-1070,D07-1015,o,"For nonprojective parsing, the analogy to the inside algorithm is the O(n^3) matrix-tree algorithm, which is dominated asymptotically by a matrix determinant (Smith and Smith, 2007; Koo et al., 2007; McDonald and Satta, 2007)."
D09-1058,D07-1015,o,"It is often straightforward to obtain large amounts of unlabeled data, making semi-supervised approaches appealing; previous work on semisupervised methods for dependency parsing includes (Smith and Eisner, 2007; Koo et al., 2008; Wang et al., 2008)." D09-1058,D07-1015,o,"We used a non-projective model, trained using an application of the matrix-tree theorem (Koo et al., 2007; Smith and Smith, 2007; McDonald and Satta, 2007) for the first-order Czech models, and projective parsers for all other models." D09-1058,D07-1015,o,"Note that it is straightforward to calculate these expected counts using a variant of the inside-outside algorithm (Baker, 1979) applied to the (Eisner, 1996) dependency-parsing data structures (Paskin, 2001) for projective dependency structures, or the matrix-tree theorem (Koo et al., 2007; Smith and Smith, 2007; McDonald and Satta, 2007) for nonprojective dependency structures." N09-3002,D07-1020,o,"For example, the topics Sport and Education are important cues for differentiating mentions of Michael Jordan, which may refer to a basketball player, a computer science professor, etc. Second, as noted in the top WePS run (Chen and Martin, 2007), feature development is important in achieving good coreference performance." P09-1047,D07-1020,o,"(Mann and Yarowsky, 2003; Chen and Martin, 2007; Baron and Freedman, 2008)." P09-2090,D07-1020,o,"We base our work partly on previous work done by Bagga and Baldwin (Bagga and Baldwin, 1998), which has also been used in later work (Chen and Martin, 2007)." P09-3011,D07-1020,o,Chen and Martin (2007) explored the use of a range of syntactic and semantic features in unsupervised clustering of documents. W07-2024,D07-1020,o,"For more detail, see Chen & Martin (2007)." W07-2024,D07-1020,o,"Chen & Martin (2007) introduced one of those similarity schemes, 'two-level SoftTFIDF'."
C08-1008,D07-1031,o,"Standard sequence prediction models are highly effective for supertagging, including Hidden Markov Models (Bangalore and Joshi, 1999; Nielsen, 2002), Maximum Entropy Markov Models (Clark, 2002; Hockenmaier et al., 2004; Clark and Curran, 2007), and Conditional Random Fields (Blunsom and Baldwin, 2006)." C08-1008,D07-1031,o,"Recent work considers a damaged tag dictionary by assuming that tags are known only for words that occur more than once or twice (Toutanova and Johnson, 2007)." C08-1008,D07-1031,o,"Other work aims to do truly unsupervised learning of taggers, such as Goldwater and Griffiths (2007) and Johnson (2007)." C08-1008,D07-1031,o,"Dirichlet priors can be used to bias HMMs toward more skewed distributions (Goldwater and Griffiths, 2007; Johnson, 2007), which is especially useful in the weakly supervised setting considered here." C08-1008,D07-1031,o,"Following Johnson (2007), I use variational Bayes EM (Beal, 2003) during the M-step for the transition distribution: θ^{l+1}_{j|i} = f(E[n_{i,j}] + α_i) / f(E[n_i] + |C|·α_i) (3), where f(v) = exp(ψ(v)) (4) and ψ(v) ≈ log(v − 1/2) if v > 7, ψ(v) = ψ(v+1) − 1/v otherwise." C08-1042,D07-1031,o,"1 Introduction There has been a great deal of recent interest in the unsupervised discovery of syntactic structure from text, both parts-of-speech (Johnson, 2007; Goldwater and Griffiths, 2007; Biemann, 2006; Dasgupta and Ng, 2007) and deeper grammatical structure like constituency and dependency trees (Klein and Manning, 2004; Smith, 2006; Bod, 2006; Seginer, 2007; Van Zaanen, 2001)." C08-1042,D07-1031,o,"For an HMM with a set of states T and a set of output symbols V: ∀t ∈ T: φ_t ~ Dir(α·1^{|T|}) (1); ∀t ∈ T: θ_t ~ Dir(β·1^{|V|}) (2); t_i | t_{i−1}, φ_{t_{i−1}} ~ Multi(φ_{t_{i−1}}) (3); w_i | t_i, θ_{t_i} ~ Multi(θ_{t_i}) (4). One advantage of the Bayesian approach is that the prior allows us to bias learning toward sparser structures, by setting the Dirichlet hyperparameters α, β to a value less than one (Johnson, 2007; Goldwater and Griffiths, 2007)."
C08-1042,D07-1031,o,"There is evidence that this leads to better performance on some part-of-speech induction metrics (Johnson, 2007; Goldwater and Griffiths, 2007)." C08-1042,D07-1031,o,"Johnson (2007) evaluates both estimation techniques on the Bayesian bitag model; Goldwater and Griffiths (2007) emphasize the advantage in the MCMC approach of integrating out the HMM parameters in a tritag model, yielding a tagging supported by many different parameter settings." C08-1042,D07-1031,o,"Following the setup in Johnson (2007), we initialize the transition and emission distributions to be uniform with a small amount of noise, and run EM and VB for 1000 iterations." C08-1042,D07-1031,o,"In our VB experiments we set α_i = β_j = 0.1, ∀i ∈ {1, ..., |T|}, ∀j ∈ {1, ..., |V|}, which yielded the best performance on most reported metrics in Johnson (2007)." C08-1042,D07-1031,o,"We use maximum marginal decoding, which Johnson (2007) reports performs better than Viterbi decoding." C08-1042,D07-1031,o,"One option is what Johnson (2007) calls many-to-one (M-to-1) accuracy, in which each induced tag is labeled with its most frequent gold tag." C08-1042,D07-1031,o,"In cases where the number of gold tags is different than the number of induced tags, some must necessarily remain unassigned (Johnson, 2007)." D08-1036,D07-1031,o,"Finally, following Haghighi and Klein (2006) and Johnson (2007) we can instead insist that at most one HMM state can be mapped to any part-of-speech tag." D08-1036,D07-1031,o,The studies presented by Goldwater and Griffiths (2007) and Johnson (2007) differed in the number of states that they used. D08-1036,D07-1031,o,"Goldwater and Griffiths (2007) evaluated against the reduced tag set of 17 tags developed by Smith and Eisner (2005), while Johnson (2007) evaluated against the full Penn Treebank tag set."
D08-1036,D07-1031,o,"The largest corpus that Goldwater and Griffiths (2007) studied contained 96,000 words, while Johnson (2007) used all of the 1,173,766 words in the full Penn WSJ treebank." D08-1036,D07-1031,o,"We ran each estimator with the eight different combinations of values for the hyperparameters α and α′ listed below, which include the optimal values for the hyperparameters found by Johnson (2007), and report results for the best combination for each estimator below." D08-1036,D07-1031,o,"(α, α′): (1, 1); (1, 0.5); (0.5, 1); (0.5, 0.5); (0.1, 0.1); (0.1, 0.0001); (0.0001, 0.1); (0.0001, 0.0001). Further, we ran each setting of each estimator at least 10 times (from randomly jittered initial starting points) for at least 1,000 iterations, as Johnson (2007) showed that some estimators require many iterations to converge." D08-1036,D07-1031,o,"Expectation Maximization does surprisingly well on larger data sets and is competitive with the Bayesian estimators at least in terms of cross-validation accuracy, confirming the results reported by Johnson (2007)." D08-1036,D07-1031,o,"Monte Carlo sampling methods and Variational Bayes are two kinds of approximate inference methods that have been applied to Bayesian inference of unsupervised HMM POS taggers (Goldwater and Griffiths, 2007; Johnson, 2007)." D08-1036,D07-1031,o,"Johnson (2007) compared two Bayesian inference algorithms, Variational Bayes and what we call here a point-wise collapsed Gibbs sampler, and found that Variational Bayes produced the best solution, and that the Gibbs sampler was extremely slow to converge and produced a worse solution than EM." D08-1036,D07-1031,o,The samplers that Goldwater and Griffiths (2007) and Johnson (2007) describe are pointwise collapsed Gibbs samplers. D08-1109,D07-1031,o,"Recent advances in these approaches include the use of a fully Bayesian HMM (Johnson, 2007; Goldwater and Griffiths, 2007)."
D09-1071,D07-1031,o,"Recent work (Johnson, 2007; Goldwater and Griffiths, 2007; Gao and Johnson, 2008) explored the task of part-of-speech tagging (PoS) using unsupervised Hidden Markov Models (HMMs) with encouraging results." D09-1071,D07-1031,o,"Recent work (Goldwater and Griffiths, 2007; Johnson, 2007; Gao and Johnson, 2008) on this task explored a variety of methodologies to address this issue." D09-1071,D07-1031,o,Johnson (2007) and Gao & Johnson (2008) assume that words are generated by a hidden Markov model and find that the resulting states strongly correlate with POS tags. D09-1071,D07-1031,o,The fact that different authors use different versions of the same gold standard to evaluate similar experiments (e.g. Goldwater & Griffiths (2007) versus Johnson (2007)) supports this claim. D09-1071,D07-1031,o,"Johnson (2007) reports results for different numbers of hidden states but it is unclear how to make this choice a priori, while Goldwater & Griffiths (2007) leave this question as future work." D09-1071,D07-1031,p,"Given the parameters {π0, π, φ, K} of the HMM, the joint distribution over hidden states s and observations y can be written (with s0 = 0): p(s, y | π0, π, φ, K) = ∏_{t=1}^{T} p(st | st−1) p(yt | st). As Johnson (2007) clearly explained, training the HMM with EM leads to poor results in PoS tagging." D09-1075,D07-1031,o,"4.1 Variational Bayes Beal (2003) and Johnson (2007) describe variational Bayes for hidden Markov model in detail, which can be directly applied to our bilingual model." D09-1075,D07-1031,o,Johnson (2007) and Zhang et al. E09-1042,D07-1031,o,"Importantly, this Bayesian approach facilitates the incorporation of sparse priors that result in a more practical distribution of tokens to lexical categories (Johnson, 2007)." E09-1042,D07-1031,o,"Similar to Goldwater and Griffiths (2007) and Johnson (2007), Toutanova and Johnson (2007) also use Bayesian inference for POS tagging."
E09-1042,D07-1031,o,"Nevertheless, EM sometimes fails to find good parameter values. The reason is that EM tries to assign roughly the same number of word tokens to each of the hidden states (Johnson, 2007)." N09-1009,D07-1031,o,"There has been an increased interest recently in employing Bayesian modeling for probabilistic grammars in different settings, ranging from putting priors over grammar probabilities (Johnson et al., 2007) to putting non-parametric priors over derivations (Johnson et al., 2006) to learning the set of states in a grammar (Finkel et al., 2007; Liang et al., 2007)." N09-1009,D07-1031,o,"Most commonly, variational (Johnson, 2007; Kurihara and Sato, 2006) or sampling techniques are applied (Johnson et al., 2006)." N09-1009,D07-1031,o,"They are most commonly used for parsing and linguistic analysis (Charniak and Johnson, 2005; Collins, 2003), but are now commonly seen in applications like machine translation (Wu, 1997) and question answering (Wang et al., 2007)." N09-1009,D07-1031,o,"For example, if we make a mean-field assumption, with respect to hidden structure and weights, the variational algorithm for approximately inferring the distribution over θ and trees y resembles the traditional EM algorithm very closely (Johnson, 2007)." N09-1069,D07-1031,o,"For instance, on unsupervised part-of-speech tagging, EM requires over 100 iterations to reach its peak performance on the Wall-Street Journal (Johnson, 2007)." P08-1012,D07-1031,n,"Unlike Johnson (2007), who found optimal performance when α was approximately 10−4, we observed monotonic increases in performance as α dropped." P08-1012,D07-1031,o,3 Variational Bayes for ITG Goldwater and Griffiths (2007) and Johnson (2007) show that modifying an HMM to include a sparse prior over its parameters and using Bayesian estimation leads to improved accuracy for unsupervised part-of-speech tagging.
P08-1012,D07-1031,o,"However, in experiments in unsupervised POS tag learning using HMM structured models, Johnson (2007) shows that VB is more effective than Gibbs sampling in approaching distributions that agree with Zipf's law, which is prominent in natural languages." P08-1012,D07-1031,o,"As pointed out by Johnson (2007), in effect this expression adds to c a small value that asymptotically approaches 0.5 as c approaches ∞, and 0 as c approaches 0." P08-1100,D07-1031,o,"Bayesian approaches can also improve performance (Goldwater and Griffiths, 2007; Johnson, 2007; Kurihara and Sato, 2006)." P09-1056,D07-1031,p,"Recent projects in semisupervised (Toutanova and Johnson, 2007) and unsupervised (Biemann et al., 2007; Smith and Eisner, 2005) tagging also show significant progress." P09-1056,D07-1031,o,"HMMs have been used many times for POS tagging and chunking, in supervised, semisupervised, and in unsupervised settings (Banko and Moore, 2004; Goldwater and Griffiths, 2007; Johnson, 2007; Zhou, 2004)." P09-1057,D07-1031,o,"6 Smaller Tagset and Incomplete Dictionaries Previously, researchers working on this task have also reported results for unsupervised tagging with a smaller tagset (Smith and Eisner, 2005; Goldwater and Griffiths, 2007; Toutanova and Johnson, 2008; Goldberg et al., 2008)." P09-1057,D07-1031,o,"The overall POS tag distribution learnt by EM is relatively uniform, as noted by Johnson (2007), and it tends to assign an equal number of tokens to each tag label whereas the real tag distribution is highly skewed." E09-1048,D07-1047,p,"This is also the main reason why most summarization systems applied to news articles do not outperform a simple baseline that just uses the first 100 words of an article (Svore et al., 2007; Nenkova, 2005)." E09-1048,D07-1047,n,"Since the test data of (Svore et al., 2007) is not publicly available, we were unable to carry out a more detailed comparison."
E09-1048,D07-1047,n,"Our approach not only outperformed a notoriously difficult baseline but also achieved similar performance to the approach of (Svore et al., 2007), without requiring their third-party data resources." D08-1104,D07-1050,o,"They are not used in LN, but they are known to be useful for WSD (Tanaka et al., 2007; Magnini et al., 2002)." D09-1124,D07-1060,o,"Although to a lesser extent, measures of word relatedness have also been applied on other languages, including German (Zesch et al., 2007; Zesch et al., 2008; Mohammad et al., 2007), Chinese (Wang et al., 2008), Dutch (Heylen et al., 2008) and others." D09-1124,D07-1060,o,"Also related are the areas of word alignment for machine translation (Och and Ney, 2000), induction of translation lexicons (Schafer and Yarowsky, 2002), and cross-language annotation projections to a second language (Riloff et al., 2002; Hwa et al., 2002; Mohammad et al., 2007)." D09-1124,D07-1060,o,"Measures of cross-language relatedness are useful for a large number of applications, including cross-language information retrieval (Nie et al., 1999; Monz and Dorr, 2005), cross-language text classification (Gliozzo and Strapparava, 2006), lexical choice in machine translation (Och and Ney, 2000; Bangalore et al., 2007), induction of translation lexicons (Schafer and Yarowsky, 2002), cross-language annotation and resource projections to a second language (Riloff et al., 2002; Hwa et al., 2002; Mohammad et al., 2007)." C08-1036,D07-1061,o,"2 Related Work The most commonly used similarity measures are based on the WordNet lexical database (eg Budanitsky and Hirst 2006, Hughes and Ramage 2007) and a number of such measures have been made publicly available (Pedersen et-al 2004)." D08-1095,D07-1061,o,"et al., 2004; Collins-Thompson and Callan, 2005; Hughes and Ramage, 2007)." 
D08-1095,D07-1061,o,"For instance, Hughes and Ramage (2007) constructed a graph which represented various types of word relations from WordNet, and compared random-walk similarity to similarity assessments from human-subject trials." D09-1124,D07-1061,o,"[Figure 5: Number of interlanguage links vs. vector length for the Miller-Charles data set. Figure 6: Number of interlanguage links vs. vector length for the WordSimilarity-353 data set.] …knowledge bases (Lesk, 1986; Wu and Palmer, 1994; Resnik, 1995; Jiang and Conrath, 1997; Hughes and Ramage, 2007) or on large corpora (Salton et al., 1997; Landauer et al., 1998; Turney, 2001; Gabrilovich and Markovitch, 2007)." D09-1124,D07-1061,o,"The dataset is available only in English and has been widely used in previous semantic relatedness evaluations (e.g., (Resnik, 1995; Hughes and Ramage, 2007; Zesch et al., 2008))." N09-1003,D07-1061,o,"Method / Source / Spearman: (Strube and Ponzetto, 2006) Wikipedia 0.19–0.48; (Jarmasz, 2003) WordNet 0.33–0.35; (Jarmasz, 2003) Roget's 0.55; (Hughes and Ramage, 2007) WordNet 0.55; (Finkelstein et al., 2002) Web corpus, WN 0.56; (Gabrilovich and Markovitch, 2007) ODP 0.65; (Gabrilovich and Markovitch, 2007) Wikipedia 0.75; SVM Web corpus, WN 0.78. Table 9: Comparison with previous work for WordSim353." N09-1003,D07-1061,n,"We want to note that our WordNet-based method outperforms that of Hughes and Ramage (2007), which uses a similar method." N09-1003,D07-1061,p,"Our similarity method is similar to, but simpler than, that used by (Hughes and Ramage, 2007), which report very good results on similarity datasets."
N09-1003,D07-1061,o,"The techniques used to solve this problem can be roughly classified into two main categories: those relying on pre-existing knowledge resources (thesauri, semantic networks, taxonomies or encyclopedias) (Alvarez and Lim, 2007; Yang and Powers, 2005; Hughes and Ramage, 2007) and those inducing distributional properties of words from corpora (Sahami and Heilman, 2006; Chen et al., 2006; Bollegala et al., 2007)." N09-2060,D07-1061,o,Hughes and Ramage (2007) present a lexical similarity model based on random walks on graphs derived from WordNet; Rao et al. N09-2060,D07-1061,o,This is similar to the graph construction method of Hughes and Ramage (2007) and Rao et al. W08-2006,D07-1061,o,"7.1.3 Similarity via pagerank Pagerank (Page et al., 1998) is the celebrated citation ranking algorithm that has been applied to several natural language problems from summarization (Erkan and Radev, 2004) to opinion mining (Esuli and Sebastiani, 2007) to our task of lexical relatedness (Hughes and Ramage, 2007)." W08-2006,D07-1061,o,"We further note that our results are different from that of (Hughes and Ramage, 2007) as they use extensive feature engineering and weight tuning during the graph generation process that we have not been able to reproduce." W09-1126,D07-1061,o,"(Hughes and Ramage, 2007) described the use of a biased PageRank over the WordNet graph to compute word pair semantic relatedness using the divergence of the probability values over the graph created by each word." E09-1041,D07-1068,o,"4 Semantic Class Induction from Wikipedia Wikipedia has recently been used as a knowledge source for various language processing tasks, including taxonomy construction (Ponzetto and Strube, 2007a), coreference resolution (Ponzetto and Strube, 2007b), and English NER (e.g., Bunescu and Pasca (2006), Cucerzan (2007), Kazama and Torisawa (2007), Watanabe et al."
D09-1058,D07-1070,o,"It is often straightforward to obtain large amounts of unlabeled data, making semi-supervised approaches appealing; previous work on semisupervised methods for dependency parsing includes (Smith and Eisner, 2007; Koo et al., 2008; Wang et al., 2008)." D09-1058,D07-1070,o,"Note that it is straightforward to calculate these expected counts using a variant of the inside-outside algorithm (Baker, 1979) applied to the (Eisner, 1996) dependency-parsing data structures (Paskin, 2001) for projective dependency structures, or the matrix-tree theorem (Koo et al., 2007; Smith and Smith, 2007; McDonald and Satta, 2007) for nonprojective dependency structures." D09-1086,D07-1070,o,"One option would be to leverage unannotated text (McClosky et al., 2006; Smith and Eisner, 2007)." P09-1041,D07-1070,o,"This generates tens of millions of features, so we prune those features that occur fewer than 10 total times, as in (Smith and Eisner, 2007)." P09-1041,D07-1070,o,Smith and Eisner (2007) apply entropy regularization to dependency parsing. D08-1082,D07-1071,o,"Finally, recent work has explored learning to map sentences to lambda-calculus meaning representations (Wong and Mooney, 2007; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007)." D09-1001,D07-1071,o,"Recently, a number of machine learning approaches have been proposed (Zettlemoyer and Collins, 2005; Mooney, 2007)." D09-1001,D07-1071,o,"For example, when applying their approach to a different domain with somewhat less rigid syntax, Zettlemoyer and Collins (2007) need to introduce new combinators and new forms of candidate lexical entries." E09-1052,D07-1071,o,"There has thus been a trend recently towards robust wide-coverage semantic construction (e.g., (Bos et al., 2004; Zettlemoyer and Collins, 2007))."
P08-1038,D07-1071,o,"It has been used for a variety of tasks, such as wide-coverage parsing (Hockenmaier and Steedman, 2002; Clark and Curran, 2007), sentence realization (White, 2006), learning semantic parsers (Zettlemoyer and Collins, 2007), dialog systems (Kruijff et al., 2007), grammar engineering (Beavers, 2004; Baldridge et al., 2007), and modeling syntactic priming (Reitter et al., 2006)." P09-1011,D07-1071,o,"1 Introduction Recent work in learning semantics has focused on mapping sentences to meaning representations (e.g., some logical form) given aligned sentence/meaning pairs as training data (Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Lu et al., 2008)." P09-1069,D07-1071,o,"available): SCISSOR (Ge and Mooney, 2005), an integrated syntactic-semantic parser; KRISP (Kate and Mooney, 2006), an SVM-based parser using string kernels; WASP (Wong and Mooney, 2006; Wong and Mooney, 2007), a system based on synchronous grammars; Z&C (Zettlemoyer and Collins, 2007), a probabilistic parser based on relaxed CCG grammars; and LU (Lu et al., 2008), a generative model with discriminative reranking." P09-1069,D07-1071,o,"A number of systems for automatically learning semantic parsers have been proposed (Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008)." P09-1110,D07-1071,o,"1 Introduction Recently, researchers have developed algorithms that learn to map natural language sentences to representations of their underlying meaning (He and Young, 2006; Wong and Mooney, 2007; Zettlemoyer and Collins, 2005)." W09-0508,D07-1071,o,"Our starting point is the work done by Zettlemoyer and Collins on parsing using relaxed CCG grammars (Zettlemoyer and Collins, 2007) (ZC07)." W09-0508,D07-1071,o,"Practically, the grammar relaxation is done via the introduction of non-standard CCG rules (Zettlemoyer and Collins, 2007)."
W09-0508,D07-1071,p,"Albeit simple, the algorithm has proven to be very efficient and accurate for the task of parse selection (Collins and Roark, 2004; Collins, 2004; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007)." D08-1091,D07-1072,o,"The parameters of the refined productions Ax → By Cz, where Ax is a subcategory of A, By of B, and Cz of C, can then be estimated in various ways; past work has included both generative (Matsuzaki et al., 2005; Liang et al., 2007) and discriminative approaches (Petrov and Klein, 2008)." D09-1071,D07-1072,o,"A different approach in evaluating nonparametric Bayesian models for NLP is state-splitting (Finkel et al., 2007; Liang et al., 2007)." N09-1009,D07-1072,o,"There has been an increased interest recently in employing Bayesian modeling for probabilistic grammars in different settings, ranging from putting priors over grammar probabilities (Johnson et al., 2007) to putting non-parametric priors over derivations (Johnson et al., 2006) to learning the set of states in a grammar (Finkel et al., 2007; Liang et al., 2007)." N09-1019,D07-1072,o,"In addition to the block sampler used by Bhattacharya and Getoor (2006), we are investigating general-purpose split-merge samplers (Jain and Neal, 2000) and the permutation sampler (Liang et al., 2007a)." N09-1019,D07-1072,o,"(General grammars with infinite numbers of nonterminals were studied by (Liang et al., 2007b))." N09-1062,D07-1072,o,"Our work differs from these previous approaches in that we explicitly model a prior over grammars within a Bayesian framework. Models of grammar refinement (Petrov et al., 2006; Liang et al., 2007; Finkel et al., 2007) also aim to automatically learn latent structure underlying treebanked data." P08-1046,D07-1072,o,"We'd like to learn the number of paradigm classes from the data, but doing this would probably require extending adaptor grammars to incorporate the kind of adaptive state-splitting found in the iHMM and iPCFG (Liang et al., 2007)."
P08-1046,D07-1072,o,"First, we can construct an infinite number of more specialized PCFGs by splitting or refining the PCFG's nonterminals into increasingly finer states; this leads to the iPCFG or infinite PCFG (Liang et al., 2007)." P09-2085,D07-1072,o,"Recently, methods from nonparametric Bayesian statistics have been gaining popularity as a way to approach unsupervised learning for a variety of tasks, including language modeling, word and morpheme segmentation, parsing, and machine translation (Teh et al., 2006; Goldwater et al., 2006a; Goldwater et al., 2006b; Liang et al., 2007; Finkel et al., 2007; DeNero et al., 2008)." W08-0704,D07-1072,o,"First, we can let the number of nonterminals grow unboundedly, as in the Infinite PCFG, where the nonterminals of the grammar can be indefinitely refined versions of a base PCFG (Liang et al., 2007)." E09-1041,D07-1073,o,"4 Semantic Class Induction from Wikipedia Wikipedia has recently been used as a knowledge source for various language processing tasks, including taxonomy construction (Ponzetto and Strube, 2007a), coreference resolution (Ponzetto and Strube, 2007b), and English NER (e.g., Bunescu and Pasca (2006), Cucerzan (2007), Kazama and Torisawa (2007), Watanabe et al." E09-1064,D07-1073,p,Wikipedia first sentence (WikiFS): Kazama and Torisawa (2007) used Wikipedia as an external knowledge to improve Named Entity Recognition. E09-1064,D07-1073,o,"Recently, Wikipedia is emerging as a source for extracting semantic relationships (Suchanek et al., 2007; Kazama and Torisawa, 2007)." E09-1070,D07-1073,p,Kazama and Torisawa (2007) improve their F-score by 3% by including a Wikipedia-based feature in their machine learner. I08-2126,D07-1073,o,"4.1 Extraction from Definition Sentences Definition sentences in the Wikipedia article were used for acquiring hyponymy relations by (Kazama and Torisawa, 2007) for named entity recognition."
I08-2126,D07-1073,o,"Hyponymy relations were extracted from definition sentences (Herbelot and Copestake, 2006; Kazama and Torisawa, 2007)." P08-1001,D07-1073,o,"The most relevant to our work are Kazama and Torisawa (2007), Toral and Muñoz (2006), and Cucerzan (2007)." P08-1001,D07-1073,o,"Similarly, Kazama and Torisawa (2007) used Wikipedia, particularly the first sentence of each article, to create lists of entities." P08-1047,D07-1073,n,"Although this Wikipedia gazetteer is much smaller than the English version used by Kazama and Torisawa (2007) that has over 2,000,000 entries, it is the largest gazetteer that can be freely used for Japanese NER." P08-1047,D07-1073,o,"We follow the method used by Kazama and Torisawa (2007), which encodes the matching with a gazetteer entity using IOB tags, with the modification for Japanese." P08-1047,D07-1073,o,"The small differences from their work are: (1) We used characters as the unit as we described above, (2) While Kazama and Torisawa (2007) checked only the word sequences that start with a capitalized word and thus exploited the characteristics of the English language, we checked the matching at every character, (3) We used a TRIE to make the look-up efficient." P08-1047,D07-1073,p,"For instance, Kazama and Torisawa (2007) used the hyponymy relations extracted from Wikipedia for the English NER, and reported improved accuracies with such a gazetteer." P08-1047,D07-1073,o,"First, the Wikipedia gazetteer improved the accuracy as expected, i.e., it reproduced the result of Kazama and Torisawa (2007) for Japanese NER." P08-1047,D07-1073,o,"6 Related Work and Discussion There are several studies that used automatically extracted gazetteers for NER (Shinzato et al., 2006; Talukdar et al., 2006; Nadeau et al., 2006; Kazama and Torisawa, 2007)." P08-1047,D07-1073,o,"On the other hand, Kazama and Torisawa (2007) extracted hyponymy relations, which are independent of the NE categories, from Wikipedia and utilized it as a gazetteer."
P08-1047,D07-1073,o,"The previous studies, with the exception of Kazama and Torisawa (2007), used smaller gazetteers than ours." P08-1047,D07-1073,o,"We also compared the cluster gazetteers with the Wikipedia gazetteer constructed by following the method of (Kazama and Torisawa, 2007)." P08-1047,D07-1073,o,"Kazama and Torisawa (2007) extracted hyponymy relations from the first sentences (i.e., defining sentences) of Wikipedia articles and then used them as a gazetteer for NER." P08-1047,D07-1073,o,"The method described by Kazama and Torisawa (2007) is to first extract the first (base) noun phrase after the first is, was, are, or were in the first sentence of a Wikipedia article." P09-1049,D07-1073,o,"Some regarded Wikipedia as the corpora and applied hand-crafted or machine-learned rules to acquire semantic relations (Herbelot and Copestake, 2006; Kazama and Torisawa, 2007; Ruiz-Casado et al., 2005; Nastase and Strube, 2008; Sumida et al., 2008; Suchanek et al., 2007)." P09-1051,D07-1073,o,"The second baseline is our implementation of the relevant part of the Wikipedia extraction in (Kazama and Torisawa, 2007), taking the first noun after a be verb in the definition sentence, denoted as WikiBL." P09-1051,D07-1073,o,Kazama and Torisawa (2007) explores the first sentence of an article and identifies the first noun phrase following the verb be as a label for the article title. P09-1051,D07-1073,o,"As the most concise definition we take the first sentence of each article, following (Kazama and Torisawa, 2007)." P09-1051,D07-1073,o,"Be-Comp Following the general idea in (Kazama and Torisawa, 2007), we identify the ISA pattern in the definition sentence by extracting nominal complements of the verb be, taking"
W09-1119,D07-1073,o,"It turns out that while problems of coverage and ambiguity prevent straightforward lookup, injection of gazetteer matches as features in machine-learning based approaches is critical for good performance (Cohen, 2004; Kazama and Torisawa, 2007a; Toral and Munoz, 2006; Florian et al., 2003)." W09-1119,D07-1073,p,"Recently, (Toral and Munoz, 2006; Kazama and Torisawa, 2007a) have successfully constructed high quality and high coverage gazetteers from Wikipedia." W09-1119,D07-1073,o,"For example, the entry about the Microsoft in Wikipedia has the following categories: Companies listed on NASDAQ; Cloud computing vendors; etc. Both (Toral and Munoz, 2006) and (Kazama and Torisawa, 2007a) used the free-text description of the Wikipedia entity to reason about the entity type." W09-1119,D07-1073,o,"NER proves to be a knowledge-intensive task, and it was reassuring to observe that [System / Resources Used / F1: LBJ-NER (Wikipedia, Nonlocal Features, Word-class Model) 90.80; (Suzuki and Isozaki, 2008) (Semi-supervised on 1G-word unlabeled data) 89.92; (Ando and Zhang, 2005) (Semi-supervised on 27M-word unlabeled data) 89.31; (Kazama and Torisawa, 2007a) (Wikipedia) 88.02; (Krishnan and Manning, 2006) (Non-local Features) 87.24; (Kazama and Torisawa, 2007b) (Non-local Features) 87.17; (Finkel et al., 2005) (Non-local Features) 86.86. Table 7: Results for CoNLL03 data reported in the literature.]" W09-1119,D07-1073,o,"Systems based on perceptron have been shown to be competitive in NER and text chunking (Kazama and Torisawa, 2007b; Punyakanok and Roth, 2001; Carreras et al., 2003). We specify the model and the features with the LBJ (Rizzolo and Roth, 2007) modeling language." D07-1047,D07-1074,o,"We perform term disambiguation on each document using an entity extractor (Cucerzan, 2007)."
D09-1029,D07-1074,o,"Even if the idea of using Wikipedia links for disambiguation is not novel (Cucerzan, 2007), it is applied for the first time to FrameNet lexical units, considering a frame as a sense definition." D09-1056,D07-1074,o,"Some researchers (Cucerzan, 2007; Nguyen and Cao, 2008) have explored the use of Wikipedia information to improve the disambiguation process." E09-1007,D07-1074,o,"However, most of them do not build a NEs resource but exploit external gazetteers (Bunescu and Pasca, 2006), (Cucerzan, 2007)." E09-1035,D07-1074,p,"An important aspect of web search is to be able to narrow down search results by distinguishing among people with the same name leading to multiple efforts focusing on web person name disambiguation in the literature (Mann and Yarowsky, 2003; Artiles et al., 2007, Cucerzan, 2007)." E09-1041,D07-1074,o,"4 Semantic Class Induction from Wikipedia Wikipedia has recently been used as a knowledge source for various language processing tasks, including taxonomy construction (Ponzetto and Strube, 2007a), coreference resolution (Ponzetto and Strube, 2007b), and English NER (e.g., Bunescu and Pasca (2006), Cucerzan (2007), Kazama and Torisawa (2007), Watanabe et al." I08-1071,D07-1074,o,"Pasca (2006) and Cucerzan (2007), in mining relationships between named entities, or in extracting useful facet terms from news articles (e.g., Dakka and Ipeirotis, 2008)." I08-1071,D07-1074,o,"Some of these have been previously employed for various tasks by Gabrilovich and Markovitch (2006); Overell and Ruger (2006), Cucerzan (2007), and Suchanek et al." N09-1019,D07-1074,p,"Much later work (Evans, 2003; Etzioni et al., 2005; Cucerzan, 2007; Pasca, 2004) relies on the use of extremely large corpora which allow very precise, but sparse features." P08-1001,D07-1074,o,"The most relevant to our work are Kazama and Torisawa (2007), Toral and Muñoz (2006), and Cucerzan (2007)."
P08-1001,D07-1074,o,"Cucerzan (2007), by contrast to the above, used Wikipedia primarily for Named Entity Disambiguation, following the path of Bunescu and Pasca (2006)." W08-2231,D07-1074,o,"Wu and Weld (2007) and Cucerzan (2007) calculate the overlap between contexts of named entities and candidate articles from Wikipedia, using overlap ratios or similarity scores in a vector space model, respectively." P08-1076,D07-1083,o,"[supervised CRF (baseline)] 97.18 / 97.21. Table 7: POS tagging results of the previous top systems for PTB III data, evaluated by label accuracy. System / test / additional resources: JESS-CM (CRF/HMM) 95.15 (1G-word unlabeled data), 94.67 (15M-word unlabeled data); (Ando and Zhang, 2005) 94.39 (15M-word unlabeled data); (Suzuki et al., 2007) 94.36 (17M-word unlabeled data); (Zhang et al., 2002) 94.17 (full parser output); (Kudo and Matsumoto, 2001) 93.91; [supervised CRF (baseline)] 93.88. Table 8: Syntactic chunking results of the previous top systems for CoNLL00 shared task data (F=1 score). 30-31 Aug. 1996 and 6-7 Dec. 1996 Reuters news articles, respectively." P08-1076,D07-1083,o,"System / test / additional resources: JESS-CM (CRF/HMM) 94.48 / 89.92 (1G-word unlabeled data), 93.66 / 89.36 (37M-word unlabeled data); (Ando and Zhang, 2005) 93.15 / 89.31 (27M-word unlabeled data); (Florian et al., 2003) 93.87 / 88.76 (own large gazetteers, 2M-word labeled data); (Suzuki et al., 2007) N/A / 88.41 (27M-word unlabeled data); [sup." P08-1076,D07-1083,o,"As our approach for incorporating unlabeled data, we basically follow the idea proposed in (Suzuki et al., 2007)." P08-1076,D07-1083,o,"Following this idea, a parameter estimation approach for non-generative approaches that can effectively incorporate unlabeled data has been introduced (Suzuki et al., 2007)." P08-1076,D07-1083,o,"In addition, the calculation cost for estimating parameters of embedded joint PMs (HMMs) is independent of the number of HMMs, J, that we used (Suzuki et al., 2007)."
P08-1076,D07-1083,o,"2.4 Comparison with Hybrid Model SSL based on a hybrid generative/discriminative approach proposed in (Suzuki et al., 2007) has been defined as a log-linear model that discriminatively combines several discriminative models, pDi, and generative models, pGj, such that: R(y|x; Λ, Θ, Γ) = [∏i pDi(y|x; θi)^λi · ∏j pGj(xj, y; θj)^λj] / [Σy ∏i pDi(y|x; θi)^λi · ∏j pGj(xj, y; θj)^λj], where Λ = {λi}i=1..I, and Θ = {{θi}i=1..I, {θj}j=I+1..I+J}." P08-1076,D07-1083,o,"As a solution, a given amount of labeled training data is divided into two distinct sets, i.e., 4/5 for estimating Θ, and the remaining 1/5 for estimating Λ (Suzuki et al., 2007)." P08-1076,D07-1083,n,"Surprisingly, although JESS-CM is a simpler version of the hybrid model in terms of model structure and parameter estimation procedure, JESS-CM provides F-scores of 94.45 and 88.03 for CoNLL00 and 03 data, respectively, which are 0.15 and 0.83 points higher than those reported in (Suzuki et al., 2007) for the same configurations."
W08-2103,D07-1083,o,"Networks (Toutanova et al., 2003) 97.24; SVM (Gimenez and Màrquez, 2003) 97.05; ME based bidirectional inference (Tsuruoka and Tsujii, 2005) 97.15; Guided learning for bidirectional sequence classification (Shen et al., 2007) 97.33; AdaBoost.SDF with candidate features (=2, =1, =100, W-dist) 97.32; AdaBoost.SDF with candidate features (=2, =10, =10, F-dist) 97.32; SVM with candidate features (C=0.1, d=2) 97.32. Text Chunking (F=1): Regularized Winnow + full parser output (Zhang et al., 2001) 94.17; SVM-voting (Kudo and Matsumoto, 2001) 93.91; ASO + unlabeled data (Ando and Zhang, 2005) 94.39; CRF + Reranking (Kudo et al., 2005) 94.12; ME based bidirectional inference (Tsuruoka and Tsujii, 2005) 93.70; LaSo (Approximate Large Margin Update) (Daume III and Marcu, 2005) 94.4; HySOL (Suzuki et al., 2007) 94.36; AdaBoost.SDF with candidate features (=2, =1, =, W-dist) 94.32; AdaBoost.SDF with candidate features (=2, =10, =10, W-dist) 94.30; SVM with candidate features (C=1, d=2) 94.31. One of the reasons that boosting-based classifiers realize faster classification speed is sparseness of rules." D09-1014,D07-1087,o,"In the first, a separate language model is trained on each column of the database and these models are then used to segment and label a given text sequence (Agichtein and Ganti, 2004; Canisius and Sporleder, 2007)." D09-1014,D07-1087,o,"These records are also known as field books and reference sets in literature (Canisius and Sporleder, 2007; Michelson and Knoblock, 2008)." D09-1014,D07-1087,o,Both Agichtein and Ganti (2004) and Canisius and Sporleder (2007) train a language model for each database column. C08-2005,D07-1090,o,"In the second pass, 5-gram and 6-gram zero-cutoff stupid-backoff (Brants et al., 2007) language models estimated using 4.7 billion words of English newswire text are used to generate lattices for phrasal segmentation model rescoring."
D07-1005,D07-1090,o,"5-gram word language models in English are trained on a variety of monolingual corpora (Brants et al., 2007)." D07-1105,D07-1090,o,"For instance, word alignment models are often trained using the GIZA++ toolkit (Och and Ney, 2003); error minimizing training criteria such as the Minimum Error Rate Training (Och, 2003) are employed in order to learn feature function weights for log-linear models; and translation candidates are produced using phrase-based decoders (Koehn et al., 2003) in combination with n-gram language models (Brants et al., 2007)." D08-1044,D07-1090,o,"Of course, many applications require smoothing of the estimated distributionsthis problem also has known solutions in MapReduce (Brants et al., 2007)." D08-1085,D07-1090,p,"We conclude by noting that English language models currently used in speech recognition (Chelba and Jelinek, 1999) and automated language translation (Brants et al., 2007) are much more powerful, employing, for example, 7-gram word models (not letter models) trained on trillions of words." D09-1078,D07-1090,o,"Since that time, however, increasingly large amounts of language model training data have become available ranging from approximately one billion words (the Gigaword corpora from the Linguistic Data Consortium) to trillions of words (Brants et al., 2007)." D09-1079,D07-1090,o,"All the TB-LMs and O-RLMs were unpruned 5-gram models and used Stupid-backoff smoothing (Brants et al., 2007) with the backoff parameter set to 0.4 as suggested." D09-1079,D07-1090,o,(2007) looked at Golomb Coding and Brants et al. D09-1093,D07-1090,o,"3.3 Language Model We estimate P(s) using n-gram LMs trained on data from the Web, using Stupid Backoff (Brants et al., 2007)." E09-1019,D07-1090,o,"This was expected, as it has been observed before that very simple smoothing techniques can perform well on large data sets, such as web data (Brants et al., 2007)."
E09-1044,D07-1090,o,"We build sentence-specific zero-cutoff stupid-backoff (Brants et al., 2007) 5-gram language models, estimated using 4.7B words of English newswire text, and apply them to rescore each 10000-best list."
I08-2089,D07-1090,o,"A recent trend is to store the LM in a distributed cluster of machines, which are queried via network requests (Brants et al., 2007; Emami et al., 2007)."
N09-1049,D07-1090,o,"We build sentence-specific zero-cutoff stupid-backoff (Brants et al., 2007) 5-gram language models, estimated using 4.7B words of English newswire text, and apply them to rescore either 10000-best lists generated by HCP or word lattices generated by HiFST."
N09-1058,D07-1090,o,"In NLP community, it has been shown that having more data results in better performance (Ravichandran et al., 2005; Brants et al., 2007; Turney, 2008)."
N09-1058,D07-1090,o,"(Brants et al., 2007; Emami et al., 2007) built 5-gram LMs over web using distributed cluster of machines and queried them via network requests."
N09-1059,D07-1090,p,"1 Introduction Very large corpora obtained from the Web have been successfully utilized for many natural language processing (NLP) applications, such as prepositional phrase (PP) attachment, other-anaphora resolution, spelling correction, confusable word set disambiguation and machine translation (Volk, 2001; Modjeska et al., 2003; Lapata and Keller, 2005; Atterer and Schutze, 2006; Brants et al., 2007)."
P08-1058,D07-1090,p,"To scale LMs to larger corpora with higher-order dependencies, researchers have considered alternative parameterizations such as class-based models (Brown et al., 1992), model reduction techniques such as entropy-based pruning (Stolcke, 1998), novel representation schemes such as suffix arrays (Emami et al., 2007), Golomb Coding (Church et al., 2007) and distributed language models that scale more readily (Brants et al., 2007)."
P08-1058,D07-1090,p,"Here we choose to work with stupid backoff smoothing (Brants et al., 2007) since this is significantly more efficient to train and deploy in a distributed framework than a context-dependent smoothing scheme such as Kneser-Ney."
P08-1058,D07-1090,o,"Previous work (Brants et al., 2007) has shown it to be appropriate to large-scale language modeling."
P08-1058,D07-1090,o,"Table 2 shows the total space and number of bytes required per n-gram to encode the model under different schemes: LDC gzipd is the size of the files as delivered by LDC; Trie uses a compact trie representation (e.g., (Clarkson et al., 1997; Church et al., 2007)) with 3 byte word ids, 1 byte values, and 3 byte indices; Block encoding is the encoding used in (Brants et al., 2007); and randomized uses our novel randomized scheme with 12 error bits."
P08-1058,D07-1090,o,"(Emami et al., 2007), (Brants et al., 2007), (Church et al., 2007)."
P08-1075,D07-1090,o,"There is a vast literature on language modeling; see, e.g., (Rosenfeld, 2000; Chen and Goodman, 1999; Brants et al., 2007; Roark et al., 2007)."
P08-1086,D07-1090,o,"We use the distributed training and application infrastructure described in (Brants et al., 2007) with modifications to allow the training of predictive class-based models and their application in the decoder of the machine translation system."
P08-1086,D07-1090,o,"For all models used in our experiments, both word- and class-based, the smoothing method used was Stupid Backoff (Brants et al., 2007)."
P08-1086,D07-1090,o,"Class-based n-gram models have also been shown to benefit from their reduced number of parameters when scaling to higher-order n-grams (Goodman and Gao, 2000), and even despite the increasing size and decreasing sparsity of language model training corpora (Brants et al., 2007), class-based n-gram models might lead to improvements when increasing the n-gram order."
P09-1087,D07-1090,p,"Indeed, researchers have shown that gigantic language models are key to state-of-the-art performance (Brants et al., 2007), and the ability of phrase-based decoders to handle large-size, high-order language models with no consequence on asymptotic running time during decoding presents a compelling advantage over CKY decoders, whose time complexity grows prohibitively large with higher-order language models."
P09-2086,D07-1090,o,"Either pruning (Stolcke, 1998; Church et al., 2007) or lossy randomizing approaches (Talbot and Brants, 2008) may result in a compact representation for the application run-time."
P09-2086,D07-1090,o,"To support distributed computation (Brants et al., 2007), we further split the N-gram data into shards by hash values of the first bigram."
P09-2086,D07-1090,o,"We implemented an N-gram indexer/estimator using MPI inspired by the MapReduce implementation of N-gram language model indexing/estimation pipeline (Brants et al., 2007)."
W08-0302,D07-1090,o,"Phrase-based MT systems are straightforward to train from parallel corpora (Koehn et al., 2003) and, like the original IBM models (Brown et al., 1990), benefit from standard language models built on large monolingual, target-language corpora (Brants et al., 2007)."
W08-0302,D07-1090,o,"The recent emphasis on improving these components of a translation system (Brants et al., 2007) is likely due in part to the widespread availability of NLP tools for the language that is most frequently the target: English."
W08-0316,D07-1090,o,"For our contrast submission, we rescore the first-pass translation lattices with a large zero-cutoff stupid-backoff (Brants et al., 2007) language model estimated over approximately five billion words of newswire text."
W08-0402,D07-1090,o,"It is therefore desirable to have dedicated servers to load parts of the LM, an idea that has been exploited by (Zhang et al., 2006; Emami et al., 2007; Brants et al., 2007)."
W09-0423,D07-1090,o,"These findings are somehow surprising since it was eventually believed by the community that adding large amounts of bitexts should improve the translation model, as it is usually observed for the language model (Brants et al., 2007)."
W09-1505,D07-1090,o,"We have also used TPTs to encode n-gram count databases such as the Google 1T web n-gram database (Brants and Franz, 2006), but are not able to provide detailed results within the space limitations of this paper.4 5.1 Perplexity computation with 5-gram language models We compared the performance of TPT-encoded language models against three other language model implementations: the SRI language modeling toolkit (Stolcke, 2002), IRSTLM (Federico and Cettolo, 2007), and the language model implementation currently used in the Portage SMT system (Badr et al., 2007), which uses a pointer-based implementation but is able to perform fast LM filtering at load time."
W09-2012,D07-1090,o,"In this study, we use the Google Web 1T 5gram Corpus (Brants et al., 2007)."
C08-1127,D07-1091,o,"2 Related Work There have been various efforts to integrate linguistic knowledge into SMT systems, either from the target side (Marcu et al., 2006; Hassan et al., 2007; Zollmann and Venugopal, 2006), the source side (Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006) or both sides (Eisner, 2003; Ding et al., 2005; Koehn and Hoang, 2007), just to name a few."
D07-1049,D07-1091,o,"Decoding is carried-out using the Moses decoder (Koehn and Hoang, 2007)."
D09-1008,D07-1091,o,"In (Koehn and Hoang, 2007), shallow syntactic analysis such as POS tagging and morphological analysis were incorporated in a phrasal decoder."
D09-1079,D07-1091,o,"5 SMT Experiments 5.1 Experimental Setup We used publicly available resources for all our tests: for decoding we used Moses (Koehn and Hoang, 2007) and our parallel data was taken from the Spanish-English section of Europarl."
E09-1011,D07-1091,o,"Koehn and Hoang (2007) propose Factored Translation Models, which extend phrase-based statistical machine translation by allowing the integration of additional morphological features at the word level."
E09-1043,D07-1091,o,"We extend the factored translation model (Koehn and Hoang, 2007) to allow translations of longer phrases composed of factors such as POS and morphological tags to act as templates for the selection and reordering of surface phrase translation."
E09-1043,D07-1091,o,"2.4 Factor Model Decomposition Factored translation models (Koehn and Hoang, 2007) extend the phrase-based model by integrating word level factors into the decoding process."
E09-1043,D07-1091,o,"(Koehn and Hoang, 2007) describes various strategies for the decomposition of the decoding into multiple translation models using the Moses decoder."
E09-1043,D07-1091,o,"4.1 Training The training procedure is identical to the factored phrase-based training described in (Koehn and Hoang, 2007)."
E09-3008,D07-1091,o,"In a factored translation model other factors than surface form can be used, such as lemma or part-of-speech (Koehn and Hoang, 2007)."
I08-1067,D07-1091,o,"Recent work by Koehn and Hoang (2007) proposes factored translation models that combine feature functions to handle syntactic, morphological, and other linguistic information in a log-linear model."
N09-1021,D07-1091,o,"In particular, we adopt the approach of phrase-based statistical machine translation (Koehn et al., 2003; Koehn and Hoang, 2007)."
N09-1021,D07-1091,o,"The reader is referred to (Koehn and Hoang, 2007; Koehn et al., 2007) for detailed information about phrase-based statistical machine translation."
N09-1058,D07-1091,o,"The publicly available Moses decoder is used for training and decoding (Koehn and Hoang, 2007)."
N09-2019,D07-1091,o,"Factored models are introduced in (Koehn and Hoang, 2007) for better integration of morphosyntactic information."
P07-1065,D07-1091,o,"Decoding is carried-out using the Moses decoder (Koehn and Hoang, 2007)."
P07-2045,D07-1091,o,"Initial results show the potential benefit of factors for statistical machine translation, (Koehn et al. 2006) and (Koehn and Hoang 2007)."
P08-1059,D07-1091,o,"In recent work, Koehn and Hoang (2007) proposed a general framework for including morphological features in a phrase-based SMT system by factoring the representation of words into a vector of morphological features and allowing a phrase-based MT system to work on any of the factored representations, which is implemented in the Moses system."
P08-1059,D07-1091,o,"Though our motivation is similar to that of Koehn and Hoang (2007), we chose to build an independent component for inflection prediction in isolation rather than folding morphological information into the main translation model."
P08-1087,D07-1091,o,"In their presentation of the factored SMT models, Koehn and Hoang (2007) describe experiments for translating from English to German, Spanish and Czech, using morphology tags added on the morphologically rich side, along with POS tags."
P08-1087,D07-1091,o,"The model is defined mathematically (Koehn and Hoang, 2007) as following: p(f|e) = (1/Z) exp sum_{i=1}^{n} lambda_i h_i(f,e) (1) where lambda is a vector of weights determined during a tuning process, and h_i is the feature function."
P08-1114,D07-1091,o,"Chiang (2005) distinguishes statistical MT approaches that are syntactic in a formal sense, going beyond the finite-state underpinnings of phrase-based models, from approaches that are syntactic in a linguistic sense, i.e. taking advantage of a priori language knowledge in the form of annotations derived from human linguistic analysis or treebanking.1 The two forms of syntactic modeling are doubly dissociable: current research frameworks include systems that are finite state but informed by linguistic annotation prior to training (e.g., (Koehn and Hoang, 2007; Birch et al., 2007; Hassan et al., 2007)), and also include systems employing context-free models trained on parallel text without benefit of any prior linguistic analysis (e.g."
P08-2039,D07-1091,o,"We also report on applying Factored Translation Models (Koehn and Hoang, 2007) for English-to-Arabic translation."
P08-2039,D07-1091,o,Koehn and Hoang (2007) present Factored Translation Models as an extension to phrase-based statistical machine translation models.
P09-1090,D07-1091,o,"Koehn and Hoang (2007) propose factored translation models that combine feature functions to handle syntactic, morphological, and other linguistic information in a log-linear model."
W08-0305,D07-1091,o,"Any way to enforce linguistic constraints will result in a reduced need for data, and ultimately in more complete models, given the same amount of data (Koehn and Hoang, 2007)."
W08-0310,D07-1091,o,"4.1 Overview In this work, factored models (Koehn and Hoang, 2007) are experimented with three factors: the surface form, the lemma and the part of speech (POS)."
W08-0310,D07-1091,o,"Therefore, including a model based on surface forms, as suggested (Koehn and Hoang, 2007), is also necessary."
W08-0318,D07-1091,o,"+ truecase 20.7 (+0.4) 27.8 (+0.2) Table 2: Impact of truecasing on case-sensitive BLEU In a more integrated approach, factored translation models (Koehn and Hoang, 2007) allow us to consider grammatical coherence in form of part-of-speech language models."
W08-0319,D07-1091,o,"3.2.1 Factored Treelet Translation Labels of nodes at the t-layer are not atomic but consist of more than 20 attributes representing various linguistic features.3 We can consider the attributes as individual factors (Koehn and Hoang, 2007)."
W08-0319,D07-1091,o,"In order to generate a value for each target-side factor, we use a sequence of mapping steps similar to Koehn and Hoang (2007)."
W08-0322,D07-1091,p,"Furthermore, the BLEU score performance suggests that our model is not very powerful, but some interesting hints can be found in Table 3 when we compare our method with a 5-gram language model to a state-of-the-art system Moses (Koehn and Hoang, 2007) based on various evaluation metrics, including BLEU score, NIST score (Doddington, 2002), METEOR (Banerjee and Lavie, 2005), TER (Snover et al., 2006), WER and PER."
W08-0410,D07-1091,p,"For example, factored translation models (Koehn and Hoang, 2007) retain the simplicity of phrase-based SMT while adding the ability to incorporate additional features."
W08-0510,D07-1091,o,Some research into factored machine translation has been published by (Koehn and Hoang 2007).
W08-2119,D07-1091,o,"We believe that other kinds of translation unit such as n-gram (Jos et al., 2006), factored phrasal translation (Koehn and Hoang, 2007), or treelet (Quirk et al., 2005) can be used in this method."
W09-0418,D07-1091,o,"A tight integration of morphosyntactic information into the translation model was proposed by (Koehn and Hoang, 2007) where lemma and morphological information are translated separately, and this information is combined on the output side to generate the translation."
W09-0427,D07-1091,o,"Unlike with factored models (Koehn and Hoang, 2007) or additional translation lexicons (Schwenk et al., 2008), we do not generate the surface form back from the lemma translation, which means that tense, gender and number information are news-dev2009a representation OOV % METEOR BLEU NIST baseline surface form only 2.24 49.05 20.45 6.135 decoding lemma backoff 2.13 49.12 20.44 6.143 word alignment lemma+POS for all 2.24 48.87 20.36 6.145 lemma+POS for adj 2.25 48.94 20.46 6.131 lemma+POS for verbs 2.21 49.05 20.47 6.137 decoding + alignment backoff + all 2.10 48.97 20.36 6.147 backoff + adj 2.12 49.05 20.48 6.140 backoff + verbs 2.08 49.15 20.50 6.148 news-dev2009b representation OOV % METEOR BLEU NIST baseline surface form only 2.52 49.60 21.10 6.211 decoding lemma backoff 2.43 49.66 21.02 6.210 word alignment lemma+POS for all 2.53 49.56 21.03 6.199 lemma+POS for adj 2.52 49.74 21.00 6.213 lemma+POS for verbs 2.47 49.73 21.10 6.217 decoding+alignment backoff + all 2.44 49.59 20.92 6.194 backoff + adj 2.43 49.80 21.03 6.217 backoff + verbs 2.39 49.80 21.03 6.217 Table 2: Evaluation of the decoding backoff strategy, the modified word alignment strategy and their combination Input Même s'il démissionnait, la situation ne changerait pas."
W09-0427,D07-1091,o,"Many strategies have been proposed to integrate morphology information in SMT, including factored translation models (Koehn and Hoang, 2007), adding a translation dictionary containing inflected forms to the training data (Schwenk et al., 2008), entirely replacing surface forms by representations built on lemmas and POS tags (Popovic and Ney, 2004), morphemes learned in an unsupervised manner (Virpojia et al., 2007), and using Porter stems and even 4-letter prefixes for word alignment (Watanabe et al., 2006)."
W09-0429,D07-1091,o,"part-of-speech language model We use factored translation models (Koehn and Hoang, 2007) to also output part-of-speech tags with each word in a single phrase mapping and run a second n-gram model over them."
C08-1015,D07-1112,o,"4.3 Adaptation for unknown word The unknown word problem is an important issue for domain adaptation (Dredze et al., 2007)."
C08-1015,D07-1112,o,"Without specific knowledge of the target domain's annotation standards, significant improvement can not be made (Dredze et al., 2007)."
C08-1015,D07-1112,o,"(2007), and Dredze et al."
D07-1096,D07-1112,o,"Instead of assigning HEAD and DEPREL in a single step, some systems use a two-stage approach for attaching and labeling dependencies (Chen et al., 2007; Dredze et al., 2007)."
D07-1096,D07-1112,o,"In order to calculate a global score or probability for a transition sequence, two systems used a Markov chain approach (Duan et al., 2007; Sagae and Tsujii, 2007)."
D07-1096,D07-1112,o,"5.4 Domain Adaptation 5.4.1 Feature-Based Approaches One way of adapting a learner to a new domain without using any unlabeled data is to only include features that are expected to transfer well (Dredze et al., 2007)."
D07-1096,D07-1112,o,"Another technique used was to filter sentences of the out-of-domain corpus based on their similarity to the target domain, as predicted by a classifier (Dredze et al., 2007)."
D09-1086,D07-1112,o,"As with many domain adaptation problems, it is quite helpful to have some annotated target data, especially when annotation styles vary (Dredze et al., 2007)."
E09-3005,D07-1112,o,"The problem itself has started to get attention only recently (Roark and Bacchiani, 2003; Hara et al., 2005; Daume III and Marcu, 2006; Daume III, 2007; Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007)."
E09-3005,D07-1112,o,"In contrast, semi-supervised domain adaptation (Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007) is the scenario in which, in addition to the labeled source data, we only have unlabeled and no labeled target domain data."
E09-3005,D07-1112,o,"2 Motivation and Prior Work While several authors have looked at the supervised adaptation case, there are less (and especially less successful) studies on semi-supervised domain adaptation (McClosky et al., 2006; Blitzer et al., 2006; Dredze et al., 2007)."
E09-3005,D07-1112,o,"However, based on annotation differences in the datasets (Dredze et al., 2007) and a bug in their system (Shimizu and Nakagawa, 2007), their results are inconclusive.1 Thus, the effectiveness of SCL is rather unexplored for parsing."
I08-2097,D07-1112,o,"84.12 only PTB (baseline) 83.58 1st (Sagae and Tsujii, 2007) 83.42 2nd (Dredze et al., 2007) 83.38 3rd (Attardi et al., 2007) 83.08 third row lists the three highest scores of the domain adaptation track of the CoNLL 2007 shared task."
I08-2097,D07-1112,o,"This was a difficult challenge as many participants in the task failed to obtain any meaningful gains from unlabeled data (Dredze et al., 2007)."
I08-2097,D07-1112,o,"Dredze et al. yielded the second highest score in the domain adaptation track (Dredze et al., 2007)."
I08-2097,D07-1112,o,"Dredze et al. also indicated that unlabeled dependency parsing is not robust to domain adaptation (Dredze et al., 2007)."
P08-1082,D07-1112,o,"It is important to realize that the output of all mentioned processing steps is noisy and contains plenty of mistakes, since the data has huge variability in terms of quality, style, genres, domains etc., and domain adaptation for the NLP tasks involved is still an open problem (Dredze et al., 2007)."
W09-2205,D07-1112,o,"Based on annotation differences in the datasets (Dredze et al., 2007) and a bug in their system (Shimizu and Nakagawa, 2007), their results are inconclusive."
C08-1052,D07-1113,o,"As well as the sentiment expressions leading to evaluations, there are many semantic aspects to be extracted from documents which contain writers' opinions, such as subjectivity (Wiebe and Mihalcea, 2006), comparative sentences (Jindal and Liu, 2006), or predictive expressions (Kim and Hovy, 2007)."
C08-1060,D07-1113,o,"Specifically, Kim and Hovy (2007) identify which political candidate is predicted to win by an opinion posted on a message board and aggregate opinions to correctly predict an election result."
C08-1060,D07-1113,o,"Opinion forecasting differs from that of opinion analysis, such as extracting opinions, evaluating sentiment, and extracting predictions (Kim and Hovy, 2007)."
C08-1060,D07-1113,o,Kim and Hovy (2007) make a similar assumption.
C08-1101,D07-1113,o,An application of the idea of alternative targets can be seen in Kim and Hovy's (2007) work on election prediction.
P09-1026,D07-1113,o,Kim and Hovy (2007) predict the results of an election by analyzing forums discussing the elections.
D09-1031,D08-1027,o,"Examples of the latter include providing suggestions from a machine labeler and using extremely cheap human labelers, e.g. with the Amazon Mechanical Turk (Snow et al., 2008)."
P09-1032,D08-1027,p,"While this is certainly a daunting task, it is possible that for annotation studies that do not require expert annotators and extensive annotator training, the newly available access to a large pool of inexpensive annotators, such as the Amazon Mechanical Turk scheme (Snow et al., 2008),4 or embedding the task in an online game played by volunteers (Poesio et al., 2008; von Ahn, 2006) could provide some solutions."
P09-2078,D08-1027,o,"Previous work has shown that data collected through the Mechanical Turk service is reliable and comparable in quality with trusted sources (Snow et al., 2008)."
W09-1904,D08-1027,o,"Several recent papers have studied the use of annotations obtained from Amazon Mechanical Turk, a marketplace for recruiting online workers (Su et al., 2007; Kaisser et al., 2008; Kittur et al., 2008; Sheng et al., 2008; Snow et al., 2008; Sorokin and Forsyth, 2008)."
W09-1908,D08-1027,o,"With the success of collaborative sites like Amazon's Mechanical Turk (http://www.mturk.com/), one can provide the task of annotation to multiple oracles on the internet (Snow et al., 2008)."
D09-1008,D08-1039,o,"A few studies (Carpuat and Wu, 2007; Ittycheriah and Roukos, 2007; He et al., 2008; Hasan et al., 2008) addressed this defect by selecting the appropriate translation rules for an input span based on its context in the input sentence."
D09-1022,D08-1039,o,"We chose this inverse direction since it can be integrated directly into the decoder and, thus, does not rely on a two-pass approach using reranking, as it is the case for (Hasan et al., 2008)."
D09-1022,D08-1039,o,"The trigger-based lexicon model used in this work follows the training procedure introduced in (Hasan et al., 2008) and is integrated directly in the decoder instead of being applied in n-best list reranking."
D08-1067,H05-1013,o,"(2005), Ponzetto and Strube (2006)) and the exploitation of advanced techniques that involve joint learning (e.g., Daume III and Marcu (2005)) and joint inference (e.g., Denis and Baldridge (2007)) for coreference resolution and a related extraction task."
D08-1069,H05-1013,o,"While early machine learning approaches for the task relied on local, discriminative classifiers (Soon et al., 2001; Ng and Cardie, 2002b; Morton, 2000; Kehler et al., 2004), more recent approaches use joint and/or global models (McCallum and Wellner, 2004; Ng, 2004; Daume III and Marcu, 2005; Denis and Baldridge, 2007a)."
D09-1128,H05-1013,o,"The most similar work to ours is (Daume III and Marcu, 2005), in which two most common synsets from WordNet for all words in an NP and their hypernyms are extracted as features."
D09-1128,H05-1013,o,"Other similar work includes the mention detection (MD) task (Florian et al., 2006) and joint probabilistic model of coreference (Daume III and Marcu, 2005)."
N07-1010,H05-1013,o,"See (Luo and Zitouni, 2005) and (Daume III and Marcu, 2005)."
N07-1011,H05-1013,o,"7 Related Work There has been a recent interest in training methods that enable the use of first-order features (Paskin, 2002; Daume III and Marcu, 2005b; Richardson and Domingos, 2006)."
N07-1011,H05-1013,o,"Perhaps the most related is learning as search optimization (LASO) (Daume III and Marcu, 2005b; Daume III and Marcu, 2005a)."
P09-1024,H05-1013,o,"We implement this algorithm using the perceptron framework, as it can be easily modified for structured prediction while preserving convergence guarantees (Daume III and Marcu, 2005; Snyder and Barzilay, 2007)."
P09-1024,H05-1013,o,"Training Procedure Our algorithm is a modification of the perceptron ranking algorithm (Collins, 2002), which allows for joint learning across several ranking problems (Daume III and Marcu, 2005; Snyder and Barzilay, 2007)."
P09-5006,H05-1013,o,"We finally move on to present more complex models which attempt to model coreference as a global discourse phenomenon (Yang et al., 2003; Luo et al., 2004; Daume III & Marcu, 2005, inter alia)."
W06-1633,H05-1013,o,"In contrast, globally optimized clustering decisions were reported in (Luo et al., 2004) and (Daume III and Marcu, 2005a), where all clustering possibilities are considered by searching on a Bell tree representation or by using the Learning as Search Optimization (LaSO) framework (Daume III and Marcu, 2005b) respectively, but the first search is partial and driven by heuristics and the second one only looks back in text."
W06-1633,H05-1013,o,"(Daume III and Marcu, 2005a) use the Learning as Search Optimization framework to take into account the non-locality behavior of the coreference features."
D07-1014,H05-1064,o,"We have already shown in Section 3 how to solve (a); here we avoid (b) by maximizing conditional likelihood, marginalizing out the hidden variable, denoted z: max_theta sum_{x,y} p(x,y) log sum_z p_theta(y,z | x) (17) This sort of conditional training with hidden variables was carried out by Koo and Collins (2005), for example, in reranking; it is related to the information bottleneck method (Tishby et al., 1999) and contrastive estimation (Smith and Eisner, 2005)."
D08-1091,H05-1064,o,"Our method is based on the ones described in (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Fader et al., 2007). The objective of this paper is to dynamically rank speakers or participants in a discussion."
D08-1091,H05-1064,o,"Discriminative parsing has been investigated before, such as in Johnson (2001), Clark and Curran (2004), Henderson (2004), Koo and Collins (2005), Turian et al."
D09-1023,H05-1064,o,"Alignments are treated as a hidden variable to be marginalized out. Optimization problems of this form are by now widely known in NLP (Koo and Collins, 2005), and have recently been used for machine translation as well (Blunsom et al., 2008)."
E09-1037,H05-1064,o,"Meanwhile, some learning algorithms, like maximum likelihood for conditional log-linear models (Lafferty et al., 2001), unsupervised models (Pereira and Schabes, 1992), and models with hidden variables (Koo and Collins, 2005; Wang et al., 2007; Blunsom et al., 2008), require summing over the scores of many structures to calculate marginals."
P06-1096,H05-1064,o,"Discriminative training with hidden variables has been handled in this probabilistic framework (Quattoni et al., 2004; Koo and Collins, 2005), but we choose Equation 3 for efficiency."
P07-1078,H05-1064,o,"A reranking parser (see also (Koo and Collins, 2005)) is a layered model: the base layer is a generative statistical PCFG parser that creates a ranked list of k parses (say, 50), and the second layer is a reranker that reorders these parses using more detailed features."
P07-1078,H05-1064,p,"1 Introduction State of the art statistical parsers (Collins, 1999; Charniak, 2000; Koo and Collins, 2005; Charniak and Johnson, 2005) are trained on manually annotated treebanks that are highly expensive to create."
P07-1080,H05-1064,o,"5 Related Work There has not been much previous work on graphical models for full parsing, although recently several latent variable models for parsing have been proposed (Koo and Collins, 2005; Matsuzaki et al., 2005; Riezler et al., 2002)."
P07-1080,H05-1064,o,"In (Koo and Collins, 2005), an undirected graphical model is used for parse reranking."
P07-1080,H05-1064,o,"(Koo and Collins, 2005; Matsuzaki et al., 2005; Riezler et al., 2002))."
P08-1068,H05-1064,o,"Previous research in this area includes several models which incorporate hidden variables (Matsuzaki et al., 2005; Koo and Collins, 2005; Petrov et al., 2006; Titov and Henderson, 2007)."
W06-1666,H05-1064,o,"Use of probability estimates is not a serious limitation of this approach because in practice candidates are normally provided by some probabilistic model and its probability estimates are used as additional features in the reranker (Collins and Koo, 2005; Shen and Joshi, 2003; Henderson and Titov, 2005)."
W06-1666,H05-1064,o,"1 Introduction The reranking approach is widely used in parsing (Collins and Koo, 2005; Koo and Collins, 2005; Henderson and Titov, 2005; Shen and Joshi, 2003) as well as in other structured classification problems."
W06-1670,H05-1064,p,"In syntactic parse re-ranking supersenses have been used to build useful latent semantic features (Koo and Collins, 2005)."
W06-2902,H05-1064,o,"(Matsuzaki et al., 2005; Koo and Collins, 2005))."
W07-2217,H05-1064,o,"Collins and Koo (Collins & Koo, 2005) introduced an improved reranking model for parsing which includes a hidden layer of semantic features."
W07-2218,H05-1064,o,"Recently several latent variable models for constituent parsing have been proposed (Koo and Collins, 2005; Matsuzaki et al., 2005; Prescher, 2005; Riezler et al., 2002)."
W07-2218,H05-1064,o,"In (Koo and Collins, 2005), an undirected graphical model for constituent parse reranking uses dependency relations to define the edges."
C08-1121,H05-1083,o,"Some researchers (Lappin and Leass 1994; Kennedy and Boguraev 1996) use manually designed rules to take into account the grammatical role of the antecedent candidates as well as the governing relations between the candidate and the pronoun, while others use features determined over the parse tree in a machine-learning approach (Aone and Bennett 1995; Yang et al. 2004; Luo and Zitouni 2005)."
D08-1031,H05-1083,o,(2007) and Luo and Zitouni (2005).
N07-1010,H05-1083,o,"For example, syntactic features (Ng and Cardie, 2002b; Luo and Zitouni, 2005) can be computed this way and are used in our system."
N07-1010,H05-1083,o,"See (Luo and Zitouni, 2005) and (Daume III and Marcu, 2005)."
N09-2051,H05-1083,o,"The coreference resolution system employs a variety of lexical, semantic, distance and syntactic features (Luo et al., 2004; Luo and Zitouni, 2005)."
P06-1006,H05-1083,o,"These features are calculated by mining the parse trees, and then could be used for resolution by using manually designed rules (Lappin and Leass, 1994; Kennedy and Boguraev, 1996; Mitkov, 1998), or using machine-learning methods (Aone and Bennett, 1995; Yang et al., 2004; Luo and Zitouni, 2005)."
P06-1006,H05-1083,o,We also tested the flat syntactic feature set proposed in Luo and Zitouni (2005)'s work.
P06-1006,H05-1083,o,"In line with the reports in (Luo and Zitouni, 2005) we do observe the performance improvement against the baseline (NORM) for all the domains."
P06-1006,H05-1083,o,Luo and Zitouni (2005) proposed a coreference resolution approach which also explores the information from the syntactic parse trees. I08-2116,H05-1087,o,"The training methods of LRM-F and SVM-F were useful to improve the F_M-scores of LRM and SVM, as reported in (Jansche, 2005; Joachims, 2005)." I08-2116,H05-1087,o,"Recently, methods for training binary classifiers to maximize the F1-score have been proposed for SVM (Joachims, 2005) and LRM (Jansche, 2005)." I08-2116,H05-1087,o,"To estimate combination weights, we extend the F1-score maximization training algorithm for LRM described in (Jansche, 2005)." I08-2116,H05-1087,o,"2 F1-score Maximization Training of LRM We first review the F1-score maximization training method for linear models using a logistic function described in (Jansche, 2005)." I08-2116,H05-1087,o,"By contrast, in the training method proposed by (Jansche, 2005), the discriminative function f(x;w) is estimated to maximize the F1-score of training dataset D. This training method employs an approximate form of the F1-score obtained by using a logistic function." I08-2116,H05-1087,o,"C, A, and B are computed for training dataset D as C = Σ_{m=1}^{M} y^(m) ŷ^(m), A = Σ_{m=1}^{M} ŷ^(m), and B = Σ_{m=1}^{M} y^(m). In (Jansche, 2005), ŷ^(m) was approximated by using the discriminative and logistic functions shown in Eqs." P06-1028,H05-1087,o,"Moreover, an F-score optimization method for logistic regression has also been proposed (Jansche, 2005)." P07-1093,H05-1087,o,"(2006) and Jansche (2005), who discuss maximum expected F-score training of decision trees and logistic regression models." P07-1093,H05-1087,o,"For example, the constrained optimization method of (Mozer et al., 2001) relies on approximations of sensitivity (which they call CA) and specificity (their CR); related techniques (Gao et al.
, 2006; Jansche, 2005) rely on approximations of true positives, false positives, and false negatives, and, indirectly, recall and precision." P07-1093,H05-1087,o,"(2001), whose constrained optimization technique is similar to those in (Gao et al., 2006; Jansche, 2005)." D07-1070,J04-3004,o,"In his analysis of Yarowsky (1995), Abney (2004) formulates several variants of bootstrapping." D07-1070,J04-3004,o,"Drawing on Abney's (2004) analysis of the Yarowsky algorithm, we perform bootstrapping by entropy regularization: we maximize a linear combination of conditional likelihood on labeled data and confidence (negative Renyi entropy) on unlabeled data." D08-1106,J04-3004,o,Abney (2004) presented a thorough discussion on the Yarowsky algorithm. D09-1134,J04-3004,o,"This approach, however, does not have a theoretical guarantee on optimality unless certain nontrivial conditions are satisfied (Abney, 2004)." P06-1027,J04-3004,o,"5.1 Comparison to self-training For completeness, we also compared our results to the self-learning algorithm, which has commonly been referred to as bootstrapping in natural language processing and originally popularized by the work of Yarowsky in word sense disambiguation (Abney 2004; Yarowsky 1995)." P07-1004,J04-3004,o,"3 The Framework 3.1 The Algorithm Our transductive learning algorithm, Algorithm 1, is inspired by the Yarowsky algorithm (Yarowsky, 1995; Abney, 2004)." P07-1004,J04-3004,o,"Under certain precise conditions, as described in (Abney, 2004), we can analyze Algorithm 1 as minimizing the entropy of the distribution over translations of U. However, this is true only when the functions Estimate, Score and Select have very prescribed definitions." P08-1061,J04-3004,o,"More recently, Haffari and Sarkar (2007) have extended the work of Abney (2004) and given a better mathematical understanding of self-training algorithms."
W06-2207,J04-3004,o,"Although a rich literature covers bootstrapping methods applied to natural language problems (Yarowsky, 1995; Riloff, 1996; Collins and Singer, 1999; Yangarber et al., 2000; Yangarber, 2003; Abney, 2004) several questions remain unanswered when these methods are applied to syntactic or semantic pattern acquisition." W06-2207,J04-3004,o,"Previous studies called the class of algorithms illustrated in Figure 2 cautious or sequential because in each iteration they acquire 1 or a small set of rules (Abney, 2004; Collins and Singer, 1999)." W06-2207,J04-3004,o,"This paper focuses on the framework introduced in Figure 2 for two reasons: (a) cautious algorithms were shown to perform best for several NLP problems (including acquisition of IE patterns), and (b) it has nice theoretical properties: Abney (2004) showed that, regardless of the selection procedure, sequential bootstrapping algorithms converge to a local minimum of K, where K is an upper bound of the negative log likelihood of the data." W07-2060,J04-3004,n,"Unsupervised methods have been developed for WSD, but despite modest success have not always been well understood statistically (Abney, 2004)." C08-1082,J05-4002,o,"Many similarity measures and weighting functions have been proposed for distributional vectors; comparative studies include Lee (1999), Curran (2003) and Weeds and Weir (2005)." C08-1082,J05-4002,o,The linear kernel derived from the L1 distance is the same as the difference-weighted token-based similarity measure of Weeds and Weir (2005). D07-1052,J05-4002,o,"Weeds and Weir (2005) discuss the influence of bias towards high- or low-frequency items for different tasks (correlation with WordNet-derived neighbour sets and pseudoword disambiguation), and it would not be surprising if the different high-frequency bias were leading to different results." D07-1052,J05-4002,o,See Weeds and Weir (2005) for an overview of other measures.
D07-1061,J05-4002,o,"A variety of other measures of semantic relatedness have been proposed, including distributional similarity measures based on co-occurrence in a body of text; see (Weeds and Weir, 2005) for a survey." J06-1003,J05-4002,o,"Formally, by distributional similarity (or co-occurrence similarity) of two words w1 and w2, we mean that they tend to occur in similar contexts, for some definition of context; or that the set of words that w1 tends to co-occur with is similar to the set that w2 tends to co-occur with; or that if w1 is substituted for w2 in a context, its plausibility (Weeds 2003; Weeds and Weir 2005) is unchanged." J06-1003,J05-4002,o,"For example, Weeds (2003; Weeds and Weir, 2005) (see below) took verbs as contexts for nouns in object position: so they regarded two nouns to be similar to the extent that they occur as direct objects of the same set of verbs." J06-1003,J05-4002,o,"If distributional similarity is conceived of as substitutability, as Weeds and Weir (2005) and Lee (1999) emphasize, then asymmetries arise when one word appears in a subset of the contexts in which the other appears; for example, the adjectives that typically modify apple are a subset of those that modify fruit, so fruit substitutes for apple better than apple substitutes for fruit." J07-4005,J05-4002,p,"However, the study of Weeds and Weir (2005) provides interesting insights into what makes a good distributional similarity measure in the contexts of semantic similarity prediction and language modeling." J07-4005,J05-4002,o,"Further, it has been shown (Weeds et al. 2005; Weeds and Weir 2005) that performance of Lin's distributional similarity score decreases more significantly than other measures for low frequency nouns." N07-1024,J05-4002,o,"We then compute the weight of a context word w in context c, W(w, c), using mutual information and t-test, which were reported by Weeds and Weir (2005) to perform the best on a pseudo-disambiguation task."
P06-1046,J05-4002,o,"This is important when considering that different tasks may require different weights and measures (Weeds and Weir, 2005)." P07-2011,J05-4002,o,"Following initial work by (Sparck Jones, 1964) and (Grefenstette, 1994), an early, online distributional thesaurus presented in (Lin, 1998) has been widely used and cited, and numerous authors since have explored thesaurus properties and parameters: see survey component of (Weeds and Weir, 2005)." P07-2011,J05-4002,p,"It is explored extensively in (Curran, 2004; Weeds and Weir, 2005)." W06-1104,J05-4002,o,"2 Evaluating SR measures Various approaches for computing semantic relatedness of words or concepts have been proposed, e.g. dictionary-based (Lesk, 1986), ontology-based (Wu and Palmer, 1994; Leacock and Chodorow, 1998), information-based (Resnik, 1995; Jiang and Conrath, 1997) or distributional (Weeds and Weir, 2005)." W08-2005,J05-4002,o,"The earliest work in this direction are those of (Hindle, 1990), (Lin, 1998), (Dagan et al., 1999), (Chen and Chen, 2000), (Geffet and Dagan, 2004) and (Weeds and Weir, 2005)." W09-1706,J05-4002,o,"They generally perform less well on low-frequency words (Weeds and Weir, 2005; van der Plas, 2008)." D08-1057,J05-4004,o,This upper bound is consistent with the upper limit of 50% found by Daume III and Marcu (2005) which takes into account stemming differences. D08-1057,J05-4004,o,"The closest work is that of Jing and McKeown (1999) and Daume III and Marcu (2005), in which multiple sentences are processed, with fragments within them being recycled to generate the novel generated text." D08-1057,J05-4004,o,Daume III and Marcu (2005) propose a model that encodes how likely it is that different sized spans of text are skipped to reach words and phrases to recycle.
C08-1082,J06-3003,o,"3.2 Compound Noun Interpretation The task of interpreting the semantics of noun compounds is one which has recently received considerable attention (Lauer, 1995; Girju et al., 2005; Turney, 2006)." C08-1114,J06-3003,p,"The best previous result is an accuracy of 56.1% (Turney, 2006)." C08-1114,J06-3003,o,"The average senior high school student achieves 57% correct (Turney, 2006)." C08-1114,J06-3003,o,Turney (2006) later addressed the same problem using 8000 automatically generated patterns. C08-1114,J06-3003,o,"PairClass is most similar to the algorithm of Turney (2006), but it differs in the following ways: PairClass does not use a lexicon to find synonyms for the input word pairs." C08-1114,J06-3003,o,"PairClass generates probability estimates, whereas Turney (2006) uses a cosine measure of similarity." C08-1114,J06-3003,n,"The automatically generated patterns in PairClass are slightly more general than the patterns of Turney (2006)." C08-1114,J06-3003,n,"The morphological processing in PairClass (Minnen et al., 2001) is more sophisticated than in Turney (2006)." C08-1114,J06-3003,p,"Veale (2004) used WordNet to answer 374 multiple-choice SAT analogy questions, achieving an accuracy of 43%, but the best corpus-based approach attains an accuracy of 56% (Turney, 2006)." C08-1114,J06-3003,o,"The template we use here is similar to Turney (2006), but we have added extra context words before the X and after the Y. Our morphological processing also differs from Turney (2006)." C08-1114,J06-3003,o,"Turney (2006) also selects patterns based on the number of pairs that generate them, but the number of selected patterns is a constant (8000), independent of the number of input word pairs." C08-1114,J06-3003,o,Turney (2006) used a corpus-based algorithm.
E09-1071,J06-3003,o,"One such relational reasoning task is the problem of compound noun interpretation, which has received a great deal of attention in recent years (Girju et al., 2005; Turney, 2006; Butnariu and Veale, 2008)." E09-1071,J06-3003,o,"Turney (2006) describes a method (Latent Relational Analysis) that extracts subsequence patterns for noun pairs from a large corpus, using query expansion to increase the recall of the search and feature selection and dimensionality reduction to reduce the complexity of the feature space." E09-1071,J06-3003,o,"The distinction between lexical and relational similarity for word pair comparison is recognised by Turney (2006) (he calls the former attributional similarity), though the methods he presents focus on relational similarity." W09-0201,J06-3003,o,"Table 5: A fragment of the CCxL space. We use this space to measure relational similarity (Turney, 2006) of concept pairs, e.g., finding that the relation between teachers and handbooks is more similar to the one between soldiers and guns, than to the one between teachers and schools." W09-0201,J06-3003,o,"The Attr cells summarize the performance of the 6 models on the wiki table that are based on attributional similarity only (Turney, 2006)." W09-0201,J06-3003,o,"In particular, we need to develop a backoff strategy for unseen pairs in the relational similarity tasks, that, following Turney (2006), could be based on constructing surrogate pairs of taxonomically similar words found in the CxLC space." W09-0201,J06-3003,o,We solve SAT analogies with a simplified version of the method of Turney (2006). W09-0201,J06-3003,o,"1 Introduction Corpus-derived distributional semantic spaces have proved valuable in tackling a variety of tasks, ranging from concept categorization to relation extraction to many others (Sahlgren, 2006; Turney, 2006; Pado and Lapata, 2007)."
W09-0205,J06-3003,o,"The literature on relational similarity, on the other hand, has focused on pairs of words, devising various methods to compare how similar the contexts in which target pairs appear are to the contexts of other pairs that instantiate a relation of interest (Turney, 2006; Pantel and Pennacchiotti, 2006)." W09-0205,J06-3003,p,"1 Introduction Co-occurrence statistics extracted from corpora lead to good performance on a wide range of tasks that involve the identification of the semantic relation between two words or concepts (Sahlgren, 2006; Turney, 2006)." W09-1111,J06-3003,o,"In computational linguistics, our pattern discovery procedure extends over previous approaches that use surface patterns as indicators of semantic relations between nouns or verbs ((Hearst, 1998; Chklovski and Pantel, 2004; Etzioni et al., 2004; Turney, 2006; Davidov and Rappoport, 2008) inter alia)." A92-1013,J90-1003,o,"Congress of the Italian Association for Artificial Intelligence, Palermo, 1991 B. Boguraev, Building a Lexicon: the Contribution of Computers, IBM Report, T.J. Watson Research Center, 1991 M. Brent, Automatic Acquisition of Subcategorization frames from Untagged Texts, in (ACL, 1991) N. Calzolari, R. Bindi, Acquisition of Lexical Information from Corpus, in (COLING 1990) K. W. Church, P. Hanks, Word Association Norms, Mutual Information, and Lexicography, Computational Linguistics, vol." A92-1013,J90-1003,o,"The results of these studies have important applications in lexicography, to detect lexicosyntactic regularities (Church and Hanks, 1990), (Calzolari and Bindi, 1990), such as, for example, support verbs (e.g. ""make-decision"") prepositional verbs (e.g. ""rely-upon"") idioms, semantic relations (e.g. ""part_of"") and fixed expressions (e.g. ""kick the bucket"")." A92-1013,J90-1003,o,"In (Calzolari and Bindi, 1990), (Church and Hanks, 1990) the significance of an association (x,y) is measured by the mutual information I(x,y), i.e.
the probability of observing x and y together, compared with the probability of observing x and y independently." A94-1006,J90-1003,o,"In particular, mutual information (Church and Hanks, 1990; Wu and Su, 1993) and other statistical methods such as (Smadja, 1993) and frequency-based methods such as (Justeson and Katz, 1993) exclude infrequent phrases because they tend to introduce too much noise." A97-1021,J90-1003,o,"Previous research in automatic acquisition focuses primarily on the use of statistical techniques, such as bilingual alignment (Church and Hanks, 1990; Klavans and Tzoukermann, 1995; Wu and Xia, 1995) or extraction of syntactic constructions from online dictionaries and corpora (Brent, 1993)." A97-1045,J90-1003,o,Church and Hanks (1990) introduced a statistical measurement called mutual information for extracting strongly associated or collocated words. C00-1059,J90-1003,o,Word association norms based on co-occurrence information have been proposed by (Church and Hanks 1990). C00-1059,J90-1003,o,"2.1.3 Correlation analysis As a correlation measure between terms, we use mutual information (Church and Hanks 1990)." C00-2128,J90-1003,o,"A large corpus is valuable as a source of such nouns (Church and Hanks, 1990; Brown et al., 1992)." C02-1033,J90-1003,o,"Collocations were extracted according to the method described in (Church and Hanks, 1990) by moving a window on texts." C02-1086,J90-1003,o,"Mutual information MI(x,y) is defined as following (Church and Hanks, 1990): MI(x,y) = log2 (p(x,y) / (p(x) p(y))) = log2 (N f(x,y) / (f(x) f(y))) (4) where f(x) and f(y) are frequency of term x and term y, respectively." C02-1086,J90-1003,o,"One way of resolving query ambiguities is to use the statistics, such as mutual information (Church and Hanks, 1990), to measure associations of query terms, on the basis of existing corpora (Jang et al, 1999)."
C04-1105,J90-1003,o,The mutual information of a pair of words is defined in terms of their co-occurrence frequency and respective occurrence frequencies (Church and Hanks 1990). C04-1141,J90-1003,o,"5 Related Work Although there have been many studies on collocation extraction and mining using only statistical approaches (Church and Hanks, 1990; Ikehara et al., 1996), there has been much less work on collocation acquisition which takes into account the linguistic properties typically associated with collocations." C04-1194,J90-1003,o,"As (Church and Hanks, 1990), we adopted an evaluation of mutual information as a cohesion measure of each cooccurrence." C08-1117,J90-1003,o,"The initial vectors to be clustered are adapted with pointwise mutual information (Church and Hanks, 1990)." C92-1033,J90-1003,o,Hindle and Rooth (1991) and Church and Hanks (1990) used partial parses generated by Fidditch to study word occurrence patterns in syntactic contexts. C94-1074,J90-1003,o,In (Zernik 1990; Calzolari and Bindi 1990; Smadja 1989; Church and Hanks 1990) associations are detected in a 5 window. C94-1084,J90-1003,o,"In the field of computational linguistics, mutual information [Brown et al., 1988], [Church and Hanks, 1990], or a likelihood ratio test [Dunning, 1993] are suggested." C96-1055,J90-1003,o,"Previous research in automatic acquisition focuses primarily on the use of statistical techniques, such as bilingual alignment (Church and Hanks, 1990; Klavans and Tzoukermann, 1996; Wu and Xia, 1995), or extraction of syntactic constructions from online dictionaries and corpora (Brent, 1993; Dorr, Garman, and Weinberg, 1995)." C96-1083,J90-1003,o,"Hindle uses the observed frequencies within a specific syntactic pattern (subject/verb, and verb/object) to derive a cooccurrence score which is an estimate of mutual information (Church and Hanks, 1990)."
C96-1083,J90-1003,p,"In the past five years, important research on the automatic acquisition of word classes based on lexical distribution has been published (Church and Hanks, 1990; Hindle, 1990; Smadja, 1993; Grefenstette, 1994; Grishman and Sterling, 1994)." C96-1097,J90-1003,o,"There are many methods proposed to extract rigid expressions from corpora such as a method of focusing on the binding strength of two words (Church and Hanks 1990); the distance between words (Smadja and McKeown 1990); and the number of combined words and frequency of appearance (Kita 1993, 1994)." C96-2100,J90-1003,o,"(Church & Hanks, 1990:p.24) Merkel, Nilsson, & Ahrenberg (1994) have constructed a system that uses frequency of recurrent segments to determine long phrases." C96-2100,J90-1003,o,"Given this, the mutual information ratio (Church & Hanks, 1990; Church & Mercer, 1993; Steier & Belew, 1991) is expressed by Formula 1." C96-2163,J90-1003,o,"In the field of statistical analysis of natural language data, it is common to use measures of lexical association, such as the information-theoretic measure of mutual information, to extract useful relationships between words (e.g. Church and Hanks (1990))." C96-2208,J90-1003,o,"More rare words rather than common words are found even in standard dictionaries (Church and Hanks, 1990)." D07-1039,J90-1003,o,"pointwise mutual information (Church and Hanks, 1990), 3." D08-1007,J90-1003,o,"We measure this association using pointwise Mutual Information (MI) (Church and Hanks, 1990)." D08-1044,J90-1003,o,"This task is quite common in corpus linguistics and provides the starting point to many other algorithms, e.g., for computing statistics such as pointwise mutual information (Church and Hanks, 1990), for unsupervised sense clustering (Schutze, 1998), and more generally, a large body of work in lexical semantics based on distributional profiles, dating back to Firth (1957) and Harris (1968)."
D09-1051,J90-1003,o,"Many studies on collocation extraction are carried out based on co-occurring frequencies of the word pairs in texts (Choueka et al., 1983; Church and Hanks, 1990; Smadja, 1993; Dunning, 1993; Pearce, 2002; Evert, 2004)." E95-1037,J90-1003,o,", 1989), e.g., lexicography (Church and Hanks, 1990), information retrieval (Salton, 1986a), text input (Yamashina and Obashi, 1988), etc. This paper will touch on its feasibility in topic identification." E99-1005,J90-1003,n,"Mutual information, though potentially of interest as a measure of collocational status, was not tested due to its well-known property of overemphasising the significance of rare events (Church and Hanks, 1990)." E99-1013,J90-1003,o,"Mutual information compares the probability of the co-occurrence of words a and b with the independent probabilities of occurrence of a and b (Church and Hanks, 1990)." H92-1040,J90-1003,o,IC function is a derivative of Fano's mutual information formula recently used by Church and Hanks (1990) to compute word co-occurrence patterns in a 44 million word corpus of Associated Press news stories. H92-1047,J90-1003,o,"Using techniques described in Church and Hindle (1990), Church and Hanks (1990), and Hindle and Rooth (1991), below are some examples of the most frequent V-O pairs from the AP corpus." H93-1049,J90-1003,o,"Church, K. and Hanks, P. (1990) ""Word Association Norms, Mutual Information, and Lexicography,"" Computational Linguistics Vol." I08-1014,J90-1003,o,"2 Related Works Some of the most common measures of unithood include pointwise mutual information (MI) (Church and Hanks, 1990) and log-likelihood ratio (Dunning, 1994)."
J00-3001,J90-1003,o,"While we have observed reasonable results with both G2 and Fisher's exact test, we have not yet discussed how these results compare to the results that can be obtained with a technique commonly used in corpus linguistics based on the mutual information (MI) measure (Church and Hanks 1990): I(x,y) = log2 (P(x,y) / (P(x)P(y))) (4) In (4), y is the seed term and x a potential target word." J00-3001,J90-1003,p,"Church and Hanks (1990) use mutual information to identify collocations, a method they claim is reasonably effective for words with a frequency of not less than five." J00-3001,J90-1003,o,"Given the definition of Mutual Information (Church and Hanks 1990), I(x,y) = log2 (P(x,y) / (P(x)P(y))), we consider the distribution of a window word according to the contingency table (a) in Table 4." J02-2003,J90-1003,o,"Equation (10) is of interest because the ratio p(C | v, r)/p(C | r) can be interpreted as a measure of association between the verb v and class C. This ratio is similar to pointwise mutual information (Church and Hanks 1990) and also forms part of Resnik's association score, which will be introduced in Section 6." J04-3002,J90-1003,o,"While it is common in studies of collocations to omit low-frequency words and expressions from analysis, because they give rise to invalid or unrealistic statistical measures (Church and Hanks, 1990), we are able to identify higher-precision collocations by including placeholders for unique words (i.e., the ugen-n-grams)." J09-3004,J90-1003,o,"This does not seem to be the case, however, for common feature weighting functions, such as Point-wise Mutual Information (Church and Patrick 1990; Hindle 1990)."
J09-3004,J90-1003,p,"Probably the most widely used feature weighting function is (point-wise) Mutual Information (MI) (Church and Patrick 1990; Hindle 1990; Luk 1995; Lin 1998; Gauch, Wang, and Rachakonda 1999; Dagan 2000; Baroni and Vegnaduzzo 2004; Chklovski and Pantel 2004; Pantel and Ravichandran 2004; Pantel, Ravichandran, and Hovy 2004; Weeds, Weir, and McCarthy 2004), defined by: weight_MI(w,f) = log2 (P(w,f) / (P(w)P(f))) (1) We calculate the MI weights by the following statistics in the space of co-occurrence instances S: weight_MI(w,f) = log2 (count(w,f) nrels / (count(w) count(f))) (2) where count(w,f) is the frequency of the co-occurrence pair (w,f) in S, count(w) and count(f) are the independent frequencies of w and f in S, and nrels is the size of S. High MI weights are assumed to correspond to strong word-feature associations." J92-1001,J90-1003,o,Church and Hanks 1990; Smadja and McKeown 1990). J93-1006,J90-1003,o,"This discussion could also be cast in an information theoretic framework using the notion of ""mutual information"" (Fano 1961), estimating the variance of the degree of match in order to find a frequency-threshold (see Church and Hanks 1990)." J93-2005,J90-1003,o,"Using techniques described in Church and Hindle (1990), Church and Hanks (1990), and Hindle and Rooth (1991), Figure 4 shows some examples of the most frequent V-O pairs from the AP corpus." J93-2005,J90-1003,o,"Church and Hanks 1990; Klavans, Chodorow, and Wacholder 1990; Wilks et al. 1993; Smadja 1991a, 1991b; Calzolari and Bindi 1990)." J93-3004,J90-1003,o,"For example, Church and Hanks (1990) describe the use of the mutual information index for this purpose (cf." J93-3004,J90-1003,o,"These tools are important in that the strongest collocational associations often represent different word senses, and thus 'they provide a powerful set of suggestions to the lexicographer for what needs to be accounted for in choosing a set of semantic tags' (Church and Hanks 1990, p. 28)."
J93-3004,J90-1003,o,Church and Hanks (1990; Church et al. 1991) thus emphasize the importance of human judgment used in conjunction with these tools. J94-4003,J90-1003,o,The use of such relations (mainly relations between verbs or nouns and their arguments and modifiers) for various purposes has received growing attention in recent research (Church and Hanks 1990; Zernik and Jacobs 1990; Hindle 1990; Smadja 1993). J94-4003,J90-1003,o,"Statistics on co-occurrence of words in a local context were used recently for monolingual word sense disambiguation (Gale, Church, and Yarowsky 1992b, 1993; Schütze 1992, 1993) (see Section 7 for more details and Church and Hanks 1990; Smadja 1993, for other applications of these statistics)." J94-4005,J90-1003,o,"Lexical collocation functions, especially those determined statistically, have recently attracted considerable attention in computational linguistics (Calzolari and Bindi 1990; Church and Hanks 1990; Sekine et al. 1992; Hindle and Rooth 1993) mainly, though not exclusively, for use in disambiguation." N03-1032,J90-1003,o,"The window size may vary, Church and Hanks (1990) used windows of size 2 and 5." N03-1032,J90-1003,o,2.1.1 Pointwise Mutual Information This measure for word similarity was first used in this context by Church and Hanks (1990). N03-1032,J90-1003,o,"1 Introduction Many different statistical tests have been proposed to measure the strength of word similarity or word association in natural language texts (Dunning, 1993; Church and Hanks, 1990; Dagan et al., 1999)." N09-1032,J90-1003,o,"The value of fj is calculated by Mutual Information (Church and Hanks, 1990) between xi and fj." P01-1059,J90-1003,o,"Strength of association between subject i and verb j is measured using mutual information (Church and Hanks 1990): MI(i,j) = ln(N tfij / (tfi tfj)).
Here tfij is the maximum frequency of subject-verb pair ij in the Reuters corpus, tfi is the frequency of subject head noun i in the corpus, tfj is the frequency of verb j in the corpus, and N is the number of terms in the corpus." P04-1022,J90-1003,o,"The former extracts collocations within a fixed window (Church and Hanks 1990; Smadja, 1993)." P04-3019,J90-1003,n,"Hanks and Church (1990) proposed using pointwise mutual information to identify collocations in lexicography; however, the method may result in unacceptable collocations for low-count pairs." P05-1014,J90-1003,p,"The most widely used association weight function is (point-wise) Mutual Information (MI) (Church and Hanks, 1990; Lin, 1998; Dagan, 2000; Weeds et al., 2004)." P05-1014,J90-1003,o,"Concrete similarity measures compare a pair of weighted context feature vectors that characterize two words (Church and Hanks, 1990; Ruge, 1992; Pereira et al., 1993; Grefenstette, 1994; Lee, 1997; Lin, 1998; Pantel and Lin, 2002; Weeds and Weir, 2003)." P05-1075,J90-1003,o,"METRIC / FORMULA: Frequency (Guiliano, 1964): f_xy; Pointwise Mutual Information [PMI] (Church & Hanks, 1990): log2 (P_xy / (P_x P_y)); True Mutual Information [TMI] (Manning, 1999): P_xy log2 (P_xy / (P_x P_y)); Chi-Squared (χ2) (Church and Gale, 1991); T-Score (Church & Hanks, 1990): (x1 - x2) / sqrt(s1^2/n1 + s2^2/n2); C-Values (Frantzi, Anadiou & Mima 2000), where α is the candidate string, f(α) is its frequency in the corpus, T_α is the set of candidate terms that contain α, and P(T_α) is the number of these candidate terms. 1,700 of the three-word phrases are attested in the Lexile corpus."
P05-1075,J90-1003,o,"A variety of methods have been applied, ranging from simple frequency (Justeson & Katz 1995), modified frequency measures such as c-values (Frantzi, Anadiou & Mima 2000, Maynard & Anadiou 2000) and standard statistical significance tests such as the t-test, the chi-squared test, and loglikelihood (Church and Hanks 1990, Dunning 1993), and information-based methods, e.g. pointwise mutual information (Church & Hanks 1990)." P06-1036,J90-1003,o,"To this end we follow the method introduced by (Church and Hanks, 1990), i.e. by sliding a window of a given size over some texts." P06-1036,J90-1003,o,"Like (Church and Hanks, 1990), we used mutual information to measure the cohesion between two words." P06-2033,J90-1003,o,"The information content of this set is defined as mutual information I(F(w)) (Church and Hanks, 1990)." P06-2069,J90-1003,o,"One can also examine the distribution of character or word ngrams, e.g. Language Modeling (Croft and Lafferty, 2003), phrases (Church and Hanks, 1990; Lewis, 1992), and so on." P08-1013,J90-1003,o,"To extract such word clusters we used suffix arrays proposed in Yamamoto and Church (2001) and the pointwise mutual information measure, see Church and Hanks (1990)." P08-2046,J90-1003,o,"To examine the effects of including some known AMs on the performance, the following AMs had a 50% chance of being included in the initial population: pointwise mutual information (Church and Hanks, 1990), the Dice coefficient, and the heuristic measure defined in (Petrovic et al., 2006): H(a,b,c) = 2 log (f(abc) / (f(a)f(c))) if POS(b) = X, log (f(abc) / (f(a)f(b)f(c))) otherwise." P09-1072,J90-1003,o,"Computational linguists have demonstrated that a word's meaning is captured to some extent by the distribution of words and phrases with which it commonly co-occurs (Church & Hanks, 1990)."
P09-1072,J90-1003,o,"4 Using vector-based models of semantic representation to account for the systematic variances in neural activity 4.1 Lexical Semantic Representation Computational linguists have demonstrated that a word's meaning is captured to some extent by the distribution of words and phrases with which it commonly co-occurs (Church and Hanks, 1990)." P91-1017,J90-1003,o,"The use of such relations (mainly relations between verbs or nouns and their arguments and modifiers) for various purposes has received growing attention in recent research (Church and Hanks, 1990; Zernik and Jacobs, 1990; Hindle, 1990)." P91-1017,J90-1003,o,"In this case it is possible to perform the correct selection if we used only statistics about the cooccurrences of 'corruption' with either 'investigator' or 'researcher', without looking for any syntactic relation (as in Church and Hanks (1990))." P91-1019,J90-1003,o,"INTRODUCTION Word associations have been studied for some time in the fields of psycholinguistics (by testing human subjects on words), linguistics (where meaning is often based on how words co-occur with each other), and more recently, by researchers in natural language processing (Church and Hanks, 1990; Hindle and Rooth, 1990; Dagan, 1990; McDonald et al. , 1990; Wilks et al. , 1990) using statistical measures to identify sets of associated words for use in various natural language processing tasks." P91-1027,J90-1003,o,"Three recent papers in this area are Church and Hanks (1990), Hindle (1990), and Smadja and McKeown (1990)." P92-1014,J90-1003,o,"In addition, IC is stable even for relatively low frequency words, which can be contrasted with Fano's mutual information formula recently used by Church and Hanks (1990) to compute word cooccurrence patterns in a 44 million word corpus of Associated Press news stories." P92-1052,J90-1003,p,"Researchers such as (Evans et al.
1991) and (Church and Hanks 1990) have applied robust grammars and statistical techniques over large corpora to extract interesting noun phrases and subject-verb, verb-object pairs." P93-1022,J90-1003,o,"Statistical data about these various cooccurrence relations is employed for a variety of applications, such as speech recognition (Jelinek, 1990), language generation (Smadja and McKeown, 1990), lexicography (Church and Hanks, 1990), machine translation (Brown et al. , ; Sadler, 1989), information retrieval (Maarek and Smadja, 1989) and various disambiguation tasks (Dagan et al. , 1991; Hindle and Rooth, 1991; Grishman et al. , 1986; Dagan and Itai, 1990)." P93-1022,J90-1003,o,"The mutual information of a cooccurrence pair, which measures the degree of association between the two words (Church and Hanks, 1990), is defined as (Fano, 1961): I(x,y) = log2 [P(x,y) / (P(x)P(y))] = log2 [P(x|y) / P(x)] = log2 [P(y|x) / P(y)] (1) where P(x) and P(y) are the probabilities of the events x and y (occurrences of words, in our case) and P(x, y) is the probability of the joint event (a cooccurrence pair)." P93-1043,J90-1003,o,"In the results we describe here, we use mutual information (Fano 1961, 27-28; Church and Hanks 1990) as the metric for neighbourhood pruning, pruning which occurs as the network is being generated." P97-1007,J90-1003,o,"Thus, given a hyponym definition (O) and a set of candidate hypernym definitions, this method selects the candidate hypernym definition (E) which returns the maximum score given by formula (1): SC(O, E) = sum over wi in O, wj in E of cw(wi, wj) (1) The cooccurrence weight (cw) between two words can be given by Cooccurrence Frequency, Mutual Information (Church and Hanks, 1990) or Association Ratio (Resnik, 1992)."
P97-1024,J90-1003,o,"In each experiment, performance 1 Mutual Information provides an estimate of the magnitude of the ratio between the joint probability P(verb/noun, preposition) and the joint probability assuming independence P(verb/noun)P(preposition), see (Church and Hanks, 1990)." P98-1065,J90-1003,o,The collocations have been calculated according to the method described in Church and Hanks (1990) by moving a window on the texts. P98-1065,J90-1003,o,The cohesion between two words is measured as in Church and Hanks (1990) by an estimation of the mutual information based on their collocation frequency. P98-1100,J90-1003,o,"Collocation: Collocations were extracted from a seven million word sample of the Longman English Language Corpus using the association ratio (Church and Hanks, 1990) and outputted to a lexicon." P98-2231,J90-1003,o,"For instance, Church and Hanks (1990) calculated SA in terms of mutual information between two words w1 and w2: I(w1, w2) = log2 [N * f(w1,w2) / (f(w1) f(w2))] (1) here N is the size of the corpus used in the estimation, f(w1, w2) is the frequency of the cooccurrence, f(w1) and f(w2) that of each word." P98-2243,J90-1003,o,"The cohesion between words has been evaluated with the mutual information measure, as in (Church and Hanks, 1990)." P99-1004,J90-1003,p,"Arguably the most widely used is the mutual information (Hindle, 1990; Church and Hanks, 1990; Dagan et al. , 1995; Luk, 1995; D. Lin, 1998a)." P99-1029,J90-1003,o,"We then propose a relatively simple yet effective method for resolving translation disambiguation using mutual information (MI) (Church and Hanks, 1990) statistics obtained only from the target document collection." P99-1029,J90-1003,o,"The mutual information MI(x,y) is defined as the following formula (Church and Hanks, 1990)."
P99-1051,J90-1003,o,"We preferred the log-likelihood ratio to other statistical scores, such as the association ratio (Church and Hanks, 1990) or X2, since it adequately takes into account the frequency of the co-occurring words and is less sensitive to rare events and corpus size (Dunning, 1993; Daille, 1996)." W00-1106,J90-1003,o,"For mutual information (MI), we use two different equations: one for two-element compound nouns (Church and Hanks, 1990) and the other for three-element compound nouns (Su et al. , 1994)." W00-1313,J90-1003,o,"The association relationship between two words can be indicated by their mutual information, which can be further used to discover phrases [Church & Hanks (1990)]." W01-0513,J90-1003,o,"We then rank-order the [...] Table 1: Probabilistic Approaches. METHOD FORMULA: Frequency (Guiliano, 1964): f_XY; Pointwise Mutual Information (MI) (Fano, 1961; Church and Hanks, 1990): log2(P_XY / (P_X P_Y)); Selectional Association (Resnik, 1996); Symmetric Conditional Probability (Ferreira and Pereira, 1999): P_XY^2 / (P_X P_Y); Dice Formula (Dice, 1945): 2 f_XY / (f_X + f_Y); Log-likelihood (Dunning, 1993; Daille, 1996)." W01-0513,J90-1003,o,"Since we need knowledge-poor induction, we cannot use human-suggested filtering [...] Chi-squared (X2) (Church and Gale, 1991); Z-Score (Smadja, 1993; Fontenelle, et al. , 1994); Student's t-Score (Church and Hanks, 1990) [...] n-gram list in accordance to each probabilistic algorithm."
W02-1115,J90-1003,p,"There are several distance measures suitable for this purpose, such as the mutual information (Church and Hanks, 1990), the dice coefficient (Manning and Schuetze 8.5, 1999), the phi coefficient (Manning and Schuetze 5.3.3, 1999), the cosine measure (Manning and Schuetze 8.5, 1999) and the confidence (Agrawal and Srikant, 1995)." W03-1805,J90-1003,o,"3 Related work Word collocation Various collocation metrics have been proposed, including mean and variance (Smadja, 1994), the t-test (Church et al. , 1991), the chi-square test, pointwise mutual information (MI) (Church and Hanks, 1990), and binomial loglikelihood ratio test (BLRT) (Dunning, 1993)." W03-1805,J90-1003,o,"[formula garbled in extraction: compares p(the, the) with p(the) p(the)]. Also note that in the case of phraseness of a bigram, the equation looks similar to pointwise mutual information (Church and Hanks, 1990), but they are different." W04-1113,J90-1003,o,"Study in collocation extraction using lexical statistics has gained some insights to the issues faced in collocation extraction (Church and Hanks 1990, Smadja 1993, Choueka 1993, Lin 1998)." W04-1113,J90-1003,o,Church and Hanks (Church and Hanks 1990) employed mutual information to extract both adjacent and distant bi-grams that tend to co-occur within a fixed-size window. W04-1114,J90-1003,o,The typical problems like doctor-nurse (Church and Hanks 1990) could be avoided by using such information. W04-2105,J90-1003,o,"There are several basic methods for evaluating associations between words: based on frequency counts (Choueka, 1988; Wettler and Rapp, 1993), information theoretic (Church and Hanks, 1990) and statistical significance (Smadja, 1993)."
W05-0101,J90-1003,o,"After building the chunker, students were asked to choose a verb and then analyze verb-argument structure (they were provided with two relevant papers (Church and Hanks, 1990; Chklovski and Pantel, 2004))." W05-0829,J90-1003,o,"In the following sections, we will use X2 statistics to measure the mutual translation likelihood (Church and Hanks, 1990)." W06-0308,J90-1003,o,"PMI (Church and Hanks, 1990) between two phrases is defined as: log2 [prob(ph1 is near ph2) / (prob(ph1) prob(ph2))]. PMI is positive when two phrases tend to co-occur and negative when they tend to be in a complementary distribution." W06-1101,J90-1003,o,"Such studies follow the empiricist approach to word meaning summarized best in the famous dictum of the British linguist J.R. Firth: You shall know a word by the company it keeps. (Firth, 1957, p. 11) Context similarity has been used as a means of extracting collocations from corpora, e.g. by Church & Hanks (1990) and by Dunning (1993), of identifying word senses, e.g. by Yarowsky (1995) and by Schutze (1998), of clustering verb classes, e.g. by Schulte im Walde (2003), and of inducing selectional restrictions of verbs, e.g. by Resnik (1993), by Abe & Li (1996), by Rooth et al." W06-1605,J90-1003,p,"Usually in 1 In our experiments, we set negative PMI values to 0, because Church and Hanks (1990), in their seminal paper on word association ratio, show that negative PMI values are not expected to be accurate unless co-occurrence counts are made from an extremely large corpus." W06-3307,J90-1003,o,"To compute the degree of interaction between two proteins [protein names garbled in extraction], we use the information-theoretic measure of pointwise mutual information (Church and Hanks, 1990; Manning and Schutze, 1999), which is computed based on the following quantities: 1." W07-2201,J90-1003,o,"Pointwise mutual information (Fano, 1961) was used to measure strength of selection restrictions for instance by Church and Hanks (1990)."
W07-2201,J90-1003,o,"and w2 is computed using an association score based on pointwise mutual information, as defined by Fano (1961) and used for a similar purpose in Church and Hanks (1990), as well as in many other studies in corpus linguistics." W08-1901,J90-1003,o,"Indeed, as Sinopalnikova and Pavel (2004) note, Deese (1965) was the first to conduct linguistic analyses of word association norms, such as measurements of semantic similarity based on his convictions that similar words evoke similar word association responses, an approach that is somewhat reminiscent of Church and Hanks' (1990) notion of mutual information." W08-1914,J90-1003,o,"Following Church & Hanks (1990), Rapp (2004), and Wettler et al." W08-2005,J90-1003,o,"Mutual Information Church and Hanks (1990) discussed the use of the mutual information statistics as a way to identify a variety of interesting linguistic phenomena, ranging from semantic relations of the doctor/nurse type (content word/content word) to lexico-syntactic co-occurrence preferences between verbs and prepositions (content word/function word)." W09-0211,J90-1003,o,"The tensor has been adapted with a straightforward extension of pointwise mutual information (Church and Hanks, 1990) for three-way cooccurrences, following equation 4." W09-0304,J90-1003,o,"The first adaptation includes the swap-operation (Wagner and Lowrance, 1975), while the second adaptation includes phonetic segment distances, which are generated by applying an iterative pointwise mutual information (PMI) procedure (Church and Hanks, 1990)." W09-0304,J90-1003,o,"We used pointwise mutual information (PMI; Church and Hanks, 1990) to obtain these distances."
W91-0211,J90-1003,o,"A broad view of the possible scope of lexical semantics would thus be one which tries to chart out the systematic, generalizable aspects of word meanings, and of the relations between words, drawing on readily accessible sources of lexical knowledge, such as machine readable dictionaries, encyclopedias, and representative corpora, coupled with the kind of analytic apparatus that is needed to fruitfully explore such sources, for instance custom-built parsers to cope with dictionary definitions (Vossen 1990), statistical programs to deal with the distributional properties of lexical items in large corpora (Church & Hanks 1990) etc. At the same time this kind of massive data-acquisition should be made sensitive to the borders between perceptual experience, lexical knowledge and expert knowledge." W93-0111,J90-1003,o,"In our approach, we take into account both the relative positions of the nearby context words as well as the mutual information (Church & Hanks, 1990) associated with the occurrence of a particular context word." W93-0310,J90-1003,o,"(1989), Wettler & Rapp (1989) and Church & Hanks (1990) describe algorithms which do this." W95-0111,J90-1003,o,Other representative collocation research can be found in Church and Hanks (1990) and Smadja (1993). W95-0111,J90-1003,p,"Unlike Choueka (1988), Church and Hanks (1990) identify as collocations both interrupted and uninterrupted sequences of words." W95-0111,J90-1003,n,"Unlike Church and Hanks (1990), Smadja (1993) goes beyond the ""two-word"" limitation and deals with ""collocations of arbitrary length""." W95-0111,J90-1003,o,"Following Church and Hanks (1990), they use mutual information to select significant two-word patterns, but, at the same time, a lexical inductive process is incorporated which, as they claim, can improve the collection of domain-specific terms." 
W96-0103,J90-1003,o,"We used *TH*=3 following ""a very rough rule of thumb"" used for word-based mutual information in (Church and Hanks, 1990)." W96-0103,J90-1003,o,"Most of the previously proposed methods to extract compounds or to measure word association using mutual information (MI) either ignore or penalize items with low co-occurrence counts (Church and Hanks 1990, Su, Wu and Chang 1994), because MI becomes unstable when the co-occurrence counts are very small." W96-0306,J90-1003,o,"Previous research in automatic acquisition focuses primarily on the use of statistical techniques, such as bilingual alignment (Church and Hanks, 1990; Klavans and Tzoukermann, 1996; Wu and Xia, 1995), or extraction of syntactic constructions from online dictionaries and corpora (Brent, 1993; Dorr, Garman, and Weinberg, 1995)." W97-0205,J90-1003,o,"The classifier uses mutual information (MI) scores rather than the raw frequencies of the occurring patterns (Church and Hanks, 1990)." W97-0709,J90-1003,o,"i.e., the window to consider when extracting words related to word w should span from position w-5 to w+5. Maarek also defines the resolving power of a pair in a document d as p = -P_d log P_c, where P_d is the observed probability of appearance of the pair in document d, P_c the observed probability of the pair in the corpus, and -log P_c the quantity of information associated to the pair. It is easily seen that p will be higher, the higher the frequency of the pair in the document and the lower its frequency in the corpus, which agrees with the idea presented at the beginning of this section. Church and Hanks (1990) propose the application of the concept of mutual information I(x,y) = log2 [P(x,y) / (P(x)P(y))] to the retrieval, in a corpus, of pairs of lexically related words. They also consider a word span of +/-5 words and observe that ""interesting"" pairs generally present a mutual information above 3. Salton and Allan (1995) focus on paragraph level. Each paragraph is represented by a weighted vector, where each element is a term (typically."
W97-0711,J90-1003,o,"of the works of (Kupiec, Pedersen, and Chen, 1995) and (Brandow, Mitze, and Rau, 1995), and advances summarization technology by applying corpus-based statistical NLP techniques, robust information extraction, and readily available on-line resources. Our preliminary experiments with combining different summarization features have been reported, and our current effort to learn to combine these features to produce the best summaries has been described. The features derived by these robust NLP techniques were also utilized in presenting multiple summary views to the user in a novel way. References: Advanced Research Projects Agency. 1995. Proceedings of Sixth Message Understanding Conference (MUC-6). Morgan Kaufmann Publishers. Brandow, Ron, Karl Mitze, and Lisa Rau. 1995. Automatic condensation of electronic publications by sentence selection. Information Processing and Management, 31, forthcoming. Brill, Eric. 1993. A Corpus-based Approach to Language Learning. Ph.D. thesis, University of Pennsylvania. Church, Kenneth and Patrick Hanks. 1990. Word Association Norms, Mutual Information, and Lexicography. Computational Linguistics, 16(1). Church, Kenneth W. 1995. One term or two? In Proceedings of the 17th Annual International SIGIR Conference on Research and Development in Information Retrieval, pages 310-318. Edmundson, H. P. 1969. New methods in automatic abstracting. Journal of the ACM, 16(2):264-285. Fum, Danilo, Giovanni Guida, and Carlo Tasso. 1985. Evaluating importance: A step towards text summarization. In IJCAI-85, pages 840-844. IJCAI, AAAI. Hahn, Udo. 1990. Topic parsing: Accounting for text macro structures in full-text analysis. Information Processing and Management, 26(1):135-170. Harman, Donna. 1991. How effective is suffixing? Journal of the American Society for Information Science, 42(1):7-15. Harman, Donna. 1996. Overview of the fifth text retrieval conference (TREC-5). In TREC-5 Conference Proceedings. Jing, Y. and B. Croft. 1994. An Association Thesaurus for Information Retrieval. UMass Technical Report 94-17. Center for Intelligent Information Retrieval, University of Massachusetts. Johnson, F. C., C. D. Paice, W. J. Black, and A. P. Neal. 1993."
W97-0711,J90-1003,p,"robust information extraction, and readily-available on-line NLP resources. These techniques and resources allow us to create a richer indexed source of linguistic and domain knowledge than other frequency approaches. Our approach attempts to approximate text discourse structure through these multiple layers of information, obtained from automated methods in contrast to labor-intensive, discourse-based approaches. Moreover, our planned training methodology will also allow us to exploit this productive infrastructure in ways which model human performance while avoiding hand-crafting domain-dependent rules of the knowledge-based approaches. Our ultimate goal is to make our summarization system scalable and portable by learning summarization rules from easily extractable text features. 2 System Description Our summarization system DimSum consists of the Summarization Server and the Summarization Client. The Server extracts features (the Feature Extractor) from a document using various robust NLP techniques, described in Section 2.1, and combines these features (the Feature Combiner) to baseline multiple combinations of features, as described in Section 2.2. Our work in progress to automatically train the Feature Combiner based upon user and application needs is presented in Section 2.2.2. The Java-based Client, which will be discussed in Section 4, provides a graphical user interface (GUI) for the end user to customize the summarization preferences and see multiple views of generated summaries. 2.1 Extracting Summarization Features In this section, we describe how we apply robust NLP technology to extract summarization features. Our goal is to add more intelligence to frequency-based approaches, to acquire domain knowledge in a more automated fashion, and to approximate text structure by recognizing sources of discourse cohesion and coherence. 2.1.1 Going Beyond a Word Frequency-based summarization systems typically use a single word string as a unit for counting frequencies. While such a method is very robust, it ignores the semantic content of words and their potential membership in multi-word phrases. For example, it does not distinguish between ""bill"" in ""Bill Clinton"" and ""bill"" in ""reform bill"". (Table 1: Collocations with ""chips"": {potato, tortilla, corn, chocolate, bagel} chips; {computer, pentium, Intel, microprocessor, memory} chips; {wood, oak, plastic} chips; bargaining chips; blue chips; mr chips.) This may introduce noise in frequency counting as the same strings are treated uniformly no matter how the context may have disambiguated the sense or regardless of membership in multi-word phrases. For DimSum, we use term frequency based on tf*idf (Salton and McGill, 1983; Brandow, Mitze, and Rau, 1995) to derive signature words as one of the summarization features. If single words were the sole basis of counting for our summarization application, noise would be introduced both in term frequency and inverse document frequency. However, recent advances in statistical NLP and information extraction make it possible to utilize features which go beyond the single word level. Our approach is to extract multi-word phrases automatically with high accuracy and use them as the basic unit in the summarization process, including frequency calculation. First, just as word association methods have proven effective in lexical analysis, e.g. (Church and Hanks, 1990), we are exploring whether frequently occurring collocational information can improve on simple word-based approaches. We have preprocessed about 800 MB of LA Times/Washington Post newspaper articles using a POS tagger (Brill, 1993) and derived two-word noun collocations using mutual information. The." W98-1104,J90-1003,o,"RIDF is like MI, but different. References: Church, K. and P.
Hanks (1990) Word association norms, mutual information, and lexicography. Computational Linguistics, 16:1, pp." W98-1104,J90-1003,o,"I(x;y) = log (P(x,y) / (P(x)P(y))) MI has been used to identify a variety of interesting linguistic phenomena, ranging from semantic relations of the doctor/nurse type to lexico-syntactic co-occurrence preferences of the save/from type (Church and Hanks, 1990)." H05-1025,J92-1002,o,"A standard solution is to use a weighted linear mixture of N-gram models, 1 <= n <= N, (Brown et al. , 1992)." H05-1025,J92-1002,o,"Previous studies have shed light on the predictability of the next unix command that a user will enter (Motoda and Yoshida, 1997; Davison and Hirsch, 1998), the next keystrokes on a small input device such as a PDA (Darragh and Witten, 1992), and of the translation that a human translator will choose for a given foreign sentence (Nepveu et al. , 2004)." I08-5010,J92-1002,o,"This is due to the reason that Telugu (Entropy=15.625 bits per character) (Bharati et al., 1998) is comparatively a higher entropy language than English (Brown and Pietra, 1992)." J93-1001,J92-1002,o,"As a result, the empirical approach has been adopted by almost all contemporary part-of-speech programs: Bahl and Mercer (1976), Leech, Garside, and Atwell (1983), Jelinek (1985), Deroualt and Merialdo (1986), Garside, Leech, and Sampson (1987), Church (1988), DeRose (1988), Hindle (1989), Kupiec (1989, 1992), Ayuso et al." J93-1001,J92-1002,o,"Model / Bits per Character: ASCII, 8; Huffman code each char, 5; Lempel-Ziv (Unix TM compress), 4.43; Unigram (Huffman code each word), 2.1 (Brown, personal communication); Trigram, 1.76 (Brown et al. 1992); Human Performance, 1.25 (Shannon 1951). The cross entropy, H, of a code and a source is given by: H(source, code) = - sum over s, h of Pr(s, h | source) log2 Pr(s | h, code), where Pr(s, h | source) is the joint probability of a symbol s following a history h given the source."
J96-2003,J92-1002,o,"Illustrative clusterings of this type can also be found in Pereira, Tishby, and Lee (1993), Brown, Della Pietra, Mercer, Della Pietra, and Lai (1992), Kneser and Ney (1993), and Brill et al." J96-2003,J92-1002,o,"Successful approaches aimed at trying to overcome the sparse data limitation include backoff (Katz 1987), Turing-Good variants (Good 1953; Church and Gale 1991), interpolation (Jelinek 1985), deleted estimation (Jelinek 1985; Church and Gale 1991), similarity-based models (Dagan, Pereira, and Lee 1994; Essen and Steinbiss 1992), Pos-language models (Derouault and Merialdo 1986) and decision tree models (Bahl et al. 1989; Black, Garside, and Leech 1993; Magerman 1994)." J96-2003,J92-1002,o,"Much research has been carried out recently in this area (Hughes and Atwell 1994; Finch and Chater 1994; Redington, Chater, and Finch 1993; Brill et al. 1990; Kiss 1973; Pereira and Tishby 1992; Resnik 1993; Ney, Essen, and Kneser 1994; Matsukawa 1993)." J96-2003,J92-1002,o,"Introduction Many applications that process natural language can be enhanced by incorporating information about the probabilities of word strings; that is, by using statistical language model information (Church et al. 1991; Church and Mercer 1993; Gale, Church, and Yarowsky 1992; Liddy and Paik 1992)." P09-2087,J92-1002,o,"Dependency models (Rosenfeld, 2000) use the parsed dependency structure of sentences to build the language model as in grammatical trigrams (Lafferty et al., 1992), structured language models (Chelba and Jelinek, 2000), and dependency language models (Chelba et al., 1997)." P99-1036,J92-1002,o,"In order to estimate the entropy of English, (Brown et al. , 1992) approximated P(k) by a Poisson distribution whose parameter is the average word length lambda in the training corpus, and P(c1 ... ck | k) by the product of character zerogram probabilities." W97-0506,J92-1002,o,"It is sometimes assumed that estimates of entropy (e.g.
, Shannon's estimate that English is 75% redundant, Brown et al.'s (1992) upper bound of 1.75 bits per character for printed English) are directly 3 There are some cases where words are deliberately misspelled in order to get better output from the synthesizer, such as coyote spelled kiote." W97-0506,J92-1002,o,"Work at the University of Dundee (e.g. , Alm et al., 1992; Todman and Alm, this volume) has shown that the extensive use of fixed text for sequences such as greetings and prestored narratives is beneficial in AAC." W98-1217,J92-1002,o,"(Farach et al. , 1995; Wyner, in press) describe a novel algorithm for entropy estimation for which they claim very fast convergence time; using no more than about five pages of text, they can achieve nearly the same accuracy as (Brown et al. , 1992)."
C04-1146,J92-4003,p,"Similarity-based smoothing (Brown et al. , 1992; Dagan et al. , 1999) is an intuitively appealing approach to this problem where probabilities of unseen co-occurrences are estimated from probabilities of seen co-occurrences of distributionally similar events." C08-1017,J92-4003,o,(1992) describe one application of MI to identify word collocations; Kashioka et al. C08-1017,J92-4003,o,"We believe the benefit to limiting the size of n is connected to Brown et al.'s (1992: 470) observation that as n increases, the accuracy of an n-gram model increases, but the reliability of our parameter estimates, drawn as they must be from a limited training text, decreases." C08-1051,J92-4003,o,"Applications of word clustering include language modeling (Brown et al., 1992), text classification (Baker and McCallum, 1998), thesaurus construction (Lin, 1998) and so on." C94-2198,J92-4003,o,"We have used a state-of-the-art Chinese handwriting recognizer (Li et al. , 1992) developed by ATC, CCL, ITRI, Taiwan as the basis of our experiments." C94-2198,J92-4003,o,"For a class bigram model, find phi : V -> C to maximize L(T) = product over i = 1 to L of p(wi | phi(wi)) p(phi(wi) | phi(wi-1)). Alternatively, perplexity (Jardino and Adda, 1993) or average mutual information (Brown et al. , 1992) can be used as the characteristic value for optimization." C94-2198,J92-4003,o,"INTRODUCTION Class-based language models (Brown et al. , 1992) have been proposed for dealing with two problems confronted by the well-known word n-gram language models (1) data sparseness: the amount of training data is insufficient for estimating the huge number of parameters; and (2) domain robustness: the model is not adaptable to new application domains." C96-1003,J92-4003,o,"have been proposed (Hindle, 1990; Brown et al. , 1992; Pereira et al. , 1993; Tokunaga et al. , 1995)." C96-1036,J92-4003,o,"Language models, such as N-gram class models (Brown et al. , 1992) and Ergodic Hidden Markov Models (Kuhn et al.
, 1994) were proposed and used in applications such as syntactic class (POS) tagging for English (Cutting et al. , 1992), clustering and scoring of recognizer sentence hypotheses." C96-2205,J92-4003,o,"Brown et al. proposed a class-based n-gram model, which generalizes the n-gram model, to predict a word from previous words in a text (Brown et al. , 1992)." D08-1006,J92-4003,o,"In future work we plan to experiment with richer representations, e.g. including long-range n-grams (Rosenfeld, 1996), class n-grams (Brown et al., 1992), grammatical features (Amaya and Benedi, 2001), etc." D08-1096,J92-4003,o,"Of particular relevance are class-based language models (e.g., (Saul and Pereira, 1997; Brown et al., 1992))." D09-1003,J92-4003,o,"(2008) who employ clusters of related words constructed by the Brown clustering algorithm (Brown et al., 1992) for syntactic processing of texts." D09-1003,J92-4003,n,"This method was shown to outperform the class based model proposed in (Brown et al., 1992) and can thus be expected to discover better clusters of words." D09-1058,J92-4003,o,"First, hierarchical word clusters are derived from unlabeled data using the Brown et al. clustering algorithm (Brown et al., 1992)." D09-1116,J92-4003,o,"Models of this type include: (Brown et al., 1992; Zitouni, 2007), which use semantic word clustering, and (Bahl et al., 1990), which uses variable-length context." E06-1050,J92-4003,o,"Many methods exist for clustering, e.g., (Brown et al. , 1990; Cutting et al. , 1992; Pereira et al. , 1993; Karypis et al. , 1999)." E95-1039,J92-4003,o,"Introduction There has been considerable recent interest in the use of statistical methods for grouping words in large on-line corpora into categories which capture some of our intuitions about the reference of the words we use and the relationships between them (e.g. Brown et al. , 1992; Schutze, 1993)." E99-1010,J92-4003,o,"Various clustering techniques have been proposed (Brown et al.
, 1992; Jardino and Adda, 1993; Martin et al. , 1998) which perform automatic word clustering optimizing a maximum-likelihood criterion with iterative clustering algorithms." H05-1026,J92-4003,o,"decades like n-gram back-off word models (Katz, 1987), class models (Brown et al. , 1992), structured language models (Chelba and Jelinek, 2000) or maximum entropy language models (Rosenfeld, 1996)." H05-1028,J92-4003,o,"More specifically, we use a class-based bigram model from (Brown et al., 1992): P(w_i|w_{i-1}) = P(w_i|c_i)P(c_i|c_{i-1}) (3) In Equation (3), c_i is the class of the word w_i, which could be a syntactic class or a semantic class." H93-1036,J92-4003,n,"This is in contrast to purely statistical systems (e.g. , [Brown et al. , 1992]), which are difficult to inspect and modify." H93-1036,J92-4003,o,"There has been considerable use in the NLP community of both WordNet (e.g. , [Lehman et al. , 1992; Resnik, 1992]) and LDOCE (e.g., [Liddy et al., 1992; Wilks et al. , 1990]), but no one has merged the two in order to combine their strengths." J02-3004,J92-4003,o,"The smoothing methods proposed in the literature (overviews are provided by Dagan, Lee, and Pereira (1999) and Lee (1999)) can be generally divided into three types: discounting (Katz 1987), class-based smoothing (Resnik 1993; Brown et al. 1992; Pereira, Tishby, and Lee 1993), and distance-weighted averaging (Grishman and Sterling 1994; Dagan, Lee, and Pereira 1999)." J02-3004,J92-4003,o,"Classes can be induced directly from the corpus using distributional clustering (Pereira, Tishby, and Lee 1993; Brown et al. 1992; Lee and Pereira 1999) or taken from a manually crafted taxonomy (Resnik 1993)." J03-1005,J92-4003,o,"For this purpose, we present a data-driven beam search algorithm similar to the one used in speech recognition search algorithms (Ney et al. 1992)."
J03-1005,J92-4003,o,The distortion probabilities are class-based: They depend on the word class F(f) of a covered source word f as well as on the word class E(e) of the previously generated target word e. The classes are automatically trained (Brown et al. 1992). J05-4002,J92-4003,p,"Similarity-based smoothing (Hindle 1990; Brown et al. 1992; Dagan, Marcus, and Markovitch 1993; Pereira, Tishby, and Lee 1993; Dagan, Lee, and Pereira 1999) provides an intuitively appealing approach to language modeling." J05-4002,J92-4003,o,"5.2 Pseudo-Disambiguation Task Pseudo-disambiguation tasks have become a standard evaluation technique (Gale, Church, and Yarowsky 1992; Schütze 1992; Pereira, Tishby, and Lee 1993; Schütze 1998; Lee 1999; Dagan, Lee, and Pereira 1999; Golding and Roth 1999; Rooth et al. 1999; Even-Zohar and Roth 2000; Lee 2001; Clark and Weir 2002) and, in the current setting, we may use a noun's neighbors to decide which of two co-occurrences is the most likely." J95-3002,J92-4003,o,"In addition, explicitly using the left context symbols allows easy use of smoothing techniques, such as deleted interpolation (Bahl, Jelinek, and Mercer 1983), clustering techniques (Brown et al. 1992), and model refinement techniques (Lin, Chiang, and Su 1994) to estimate the probabilities more reliably by changing the window sizes of the context and weighting the various estimates dynamically." J95-3002,J92-4003,o,"Previous work has demonstrated that this scoring function is able to provide high discrimination power for a variety of applications (Su, Chiang, and Lin 1992; Chen et al. 1991; Su and Chang 1990)." J95-3002,J92-4003,o,"This scoring function has been successfully applied to resolve ambiguity problems in an English-to-Chinese machine translation system (BehaviorTran) (Chen et al. 1991) and a spoken language processing system (Su, Chiang, and Lin 1991; 1992)."
J96-2003,J92-4003,o,"Illustrative clusterings of this type can also be found in Pereira, Tishby, and Lee (1993), Brown, Della Pietra, Mercer, Della Pietra, and Lai (1992), Kneser and Ney (1993), and Brill et al." J96-2003,J92-4003,o,"Successful approaches aimed at trying to overcome the sparse data limitation include backoff (Katz 1987), Turing-Good variants (Good 1953; Church and Gale 1991), interpolation (Jelinek 1985), deleted estimation (Jelinek 1985; Church and Gale 1991), similarity-based models (Dagan, Pereira, and Lee 1994; Essen and Steinbiss 1992), POS language models (Derouault and Merialdo 1986) and decision tree models (Bahl et al. 1989; Black, Garside, and Leech 1993; Magerman 1994)." J96-2003,J92-4003,o,"Much research has been carried out recently in this area (Hughes and Atwell 1994; Finch and Chater 1994; Redington, Chater, and Finch 1993; Brill et al. 1990; Kiss 1973; Pereira and Tishby 1992; Resnik 1993; Ney, Essen, and Kneser 1994; Matsukawa 1993)." J96-2003,J92-4003,o,"Introduction Many applications that process natural language can be enhanced by incorporating information about the probabilities of word strings; that is, by using statistical language model information (Church et al. 1991; Church and Mercer 1993; Gale, Church, and Yarowsky 1992; Liddy and Paik 1992)." J96-4003,J92-4003,o,"Furthermore, our model is not necessarily nativist; these biases may be innate, but they may also be the product of some other earlier learning algorithm, as the results of Ellison (1992) and Brown et al." J97-2004,J92-4003,o,Notice that most in-context and dictionary translations of source words are bounded within the same category in a typical thesaurus such as the LLOCE (McArthur 1992) and CILIN (Mei et al. 1993). J98-1001,J92-4003,o,"Several authors (for example, Krovetz and Croft [1989], Guthrie et al.
[1991], Slator [1992], Cowie, Guthrie, and Guthrie [1992], Janssen [1992], Braden-Harder [1993], Liddy and Paik [1993]) have attempted to improve results by using supplementary fields of information in the electronic version of the Longman Dictionary of Contemporary English (LDOCE), in particular, the box codes and subject codes provided for each sense." J98-1001,J92-4003,o,"Since then, supervised learning from sense-tagged corpora has been used by several researchers: Zernik (1990, 1991), Hearst (1991), Leacock, Towell, and Voorhees (1993), Gale, Church, and Yarowsky (1992d, 1993), Bruce and Wiebe (1994), Miller et al." J98-1001,J92-4003,o,"A similar view underlies the class-based methods cited in Section 2.4.3 (Brown et al. 1992; Pereira and Tishby 1992; Pereira, Tishby, and Lee 1993)." J98-1004,J92-4003,o,"This set of context vectors is then clustered into a predetermined number of coherent clusters or context groups using Buckshot (Cutting et al. 1992), a combination of the EM algorithm and agglomerative clustering." J98-1004,J92-4003,o,"Regardless of whether it takes the form of dictionaries (Lesk 1986; Guthrie et al. 1991; Dagan, Itai, and Schwall 1991; Karov and Edelman 1996), thesauri (Yarowsky 1992; Walker and Amsler 1986), bilingual corpora (Brown et al. 1991; Church and Gale 1991), or hand-labeled training sets (Hearst 1991; Leacock, Towell, and Voorhees 1993; Niwa and Nitta 1994; Bruce and Wiebe 1994), providing information for sense definitions can be a considerable burden." J98-1004,J92-4003,o,"Another body of related work is the literature on word clustering in computational linguistics (Brown et al. 1992; Finch 1993; Pereira, Tishby, and Lee 1993; Grefenstette 1994a) and document clustering in information retrieval (van Rijsbergen 1979; Willett 1988; Sparck-Jones 1991; Cutting et al. 1992)." J98-2002,J92-4003,o,"The second approach (Sekine et al.
1992; Chang, Luo, and Su 1992; Resnik 1993a; Grishman and Sterling 1994; Alshawi and Carter 1994) takes triples (verb, prep, noun2) and (noun1, prep, noun2), like those in Table 10, as training data for acquiring semantic knowledge and performs PP-attachment disambiguation on quadruples." J98-2002,J92-4003,o,"It is potentially useful in other natural language processing tasks, such as the problem of estimating n-gram models (Brown et al. 1992) or the problem of semantic tagging (Cucchiarelli and Velardi 1997)." J99-4003,J92-4003,o,"2.4 Intonation Annotations For our intonation annotation, we have annotated the intonational phrase boundaries, using the ToBI (Tones and Break Indices) definition (Silverman et al. 1992)." J99-4003,J92-4003,o,(1992) and Magerman (1994) used the clustering algorithm of Brown et al. J99-4003,J92-4003,o,"For handling word identities, one could follow the approach used for handling the POS tags (e.g. , Black et al. 1992; Magerman 1994) and view the POS tags and word identities as two separate sources of information." N03-1032,J92-4003,o,"In information retrieval, word similarity can be used to identify terms for pseudo-relevance feedback (Harman, 1992; Buckley et al. , 1995; Xu and Croft, 2000; Vechtomova and Robertson, 2000)." N04-4034,J92-4003,o,"A refinement of this model is the class-based n-gram where the words are partitioned into equivalence classes (Brown et al. , 1992)." N04-4034,J92-4003,o,"In addition, we developed a word clustering procedure (based on a standard approach (Brown et al. , 1992)) that optimizes conditional word clusters."
N04-4034,J92-4003,o,"Table 2: Three types of class-based MSLMs on Switchboard-I (swbd) and ICSI Meeting (mr) corpora # of swbd mr classes BROWN MMI MCMI BROWN MMI MCMI 100 68.9 0.3 68.4 0.3 68.2 0.3 78.9 3.0 77.3 2.8 76.8 2.8 500 68.9 0.3 68.3 0.3 67.9 0.3 78.7 3.1 77.1 2.8 76.7 2.8 1000 68.9 0.3 68.2 0.3 67.9 0.3 79.0 3.1 77.2 2.7 76.9 2.8 1500 69.0 0.3 68.2 0.3 68.0 0.3 79.6 3.1 77.4 2.7 77.4 2.7 2000 69.0 0.3 68.3 0.3 68.0 0.3 80.1 3.1 77.6 2.7 77.9 2.7 |V| 68.5 0.3 78.3 2.7 Table 3: Class-based MSLM on Switchboard Eval-2003 size 100 500 1000 1500 2000 |V| 3-gram 4-gram ppl 65.8 65.5 65.6 65.7 66.1 67.9 72.1 76.3 % reduction 8.6 8.9 8.8 8.7 8.3 5.8 0 -5.8 Class-based language models (Brown et al. , 1992; Whittaker and Woodland, 2003) yield great benefits when data sparseness abounds." N04-4034,J92-4003,o,"SRILM (Stolcke, 2002) can produce classes to maximize the mutual information between the classes I(C(w_t);C(w_{t-1})), as described in (Brown et al. , 1992)." N04-4034,J92-4003,o,"To compare different clustering algorithms, results with the standard method of (Brown et al. , 1992) (SRILM's ngram-class) are also reported." N06-1058,J92-4003,o,"4.1.3 Alternative Paraphrasing Techniques To investigate the effect of paraphrase quality on automatic evaluation, we consider two alternative paraphrasing resources: Latent Semantic Analysis (LSA), and Brown clustering (Brown et al. , 1992)." N06-2001,J92-4003,o,"This can also be interpreted as a generalization of standard class-based models (Brown et al. , 1992)." N09-1051,J92-4003,o,"(4) can be used to motivate a novel class-based language model and a regularized version of minimum discrimination information (MDI) models (Della Pietra et al., 1992)."
N09-1051,J92-4003,o,"We consider three class models, models S, M, and L, defined as p_S(c_j|c_1...c_{j-1},w_1...w_{j-1}) = p_ng(c_j|c_{j-2}c_{j-1}) p_S(w_j|c_1...c_j,w_1...w_{j-1}) = p_ng(w_j|c_j) p_M(c_j|c_1...c_{j-1},w_1...w_{j-1}) = p_ng(c_j|c_{j-2}c_{j-1},w_{j-2}w_{j-1}) p_M(w_j|c_1...c_j,w_1...w_{j-1}) = p_ng(w_j|w_{j-2}w_{j-1}c_j) p_L(c_j|c_1...c_{j-1},w_1...w_{j-1}) = p_ng(c_j|w_{j-2}c_{j-2}w_{j-1}c_{j-1}) p_L(w_j|c_1...c_j,w_1...w_{j-1}) = p_ng(w_j|w_{j-2}c_{j-2}w_{j-1}c_{j-1}c_j) Model S is an exponential version of the class-based n-gram model from (Brown et al., 1992); model M is a novel model introduced in (Chen, 2009); and model L is an exponential version of the model indexpredict from (Goodman, 2001)." N09-1051,J92-4003,o,"4.2 Models with Prior Distributions Minimum discrimination information models (Della Pietra et al., 1992) are exponential models with a prior distribution q(y|x): p(y|x) = q(y|x) exp(sum_{i=1}^{F} lambda_i f_i(x,y)) / Z(x) (14) The central issue in performance prediction for MDI models is whether q(y|x) needs to be accounted for." N09-1051,J92-4003,o,"The most popular non-data-splitting methods for predicting test set cross-entropy (or likelihood) are AIC and variants such as AICc, quasi-AIC (QAIC), and QAICc (Akaike, 1973; Hurvich and Tsai, 1989; Lebreton et al., 1992)." N09-1053,J92-4003,o,"We compare the following model types: conventional (i.e., non-exponential) word n-gram models; conventional IBM class n-gram models interpolated with conventional word n-gram models (Brown et al., 1992); and model M. All conventional n-gram models are smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1998), except we also evaluate word n-gram models with Katz smoothing (Katz, 1987)."
N09-1053,J92-4003,o,"While we can only compare class models with word models on the largest training set, for this training set model M outperforms the baseline Katz-smoothed word trigram model by 1.9% absolute. 4 Domain Adaptation In this section, we introduce another heuristic for improving exponential models and show how this heuristic can be used to motivate a regularized version of minimum discrimination information (MDI) models (Della Pietra et al., 1992)." P01-1046,J92-4003,o,"(1999) and Lee (1999)) can be generally divided into three types: discounting (Katz, 1987), class-based smoothing (Resnik, 1993; Brown et al. , 1992; Pereira et al. , 1993), and distance-weighted averaging (Grishman and Sterling, 1994; Dagan et al. , 1999)." P01-1046,J92-4003,o,"Classes can be induced directly from the corpus (Pereira et al. , 1993; Brown et al. , 1992) or taken from a manually crafted taxonomy (Resnik, 1993)." P01-1068,J92-4003,o,"And we consider that word pairs that have a small distance between vectors also have similar word neighboring characteristics (Brown et al. , 1992) (Bai et al. , 1998)." P02-1016,J92-4003,o,"Words are encoded through an automatic clustering algorithm (Brown et al. , 1992) while tags, labels and extensions are normally encoded using diagonal bits." P02-1024,J92-4003,o,"Recent research [Yamamoto et al. , 2001] shows that using different clusters for predicted and conditional words can lead to cluster models that are superior to classical cluster models, which use the same clusters for both words [Brown et al. , 1992]." P02-1024,J92-4003,o,"2 Related Work A large amount of previous research on clustering has been focused on how to find the best clusters [Brown et al. , 1992; Kneser and Ney, 1993; Yamamoto and Sagisaka, 1999; Ueberla, 1996; Pereira et al. , 1993; Bellegarda et al. , 1996; Bai et al. , 1998]." P02-1024,J92-4003,o,"Many traditional clustering techniques [Brown et al.
, 1992] attempt to maximize the average mutual information of adjacent clusters I(W_1,W_2) = sum_{W_1,W_2} P(W_1 W_2) log [P(W_2|W_1) / P(W_2)], (2) where the same clusters are used for both predicted and conditional words." P02-1030,J92-4003,o,"Recently, semantic resources have also been used in collocation discovery (Pearce, 2001), smoothing and model estimation (Brown et al. , 1992; Clark and Weir, 2001) and text classification (Baker and McCallum, 1998)." P02-1030,J92-4003,o,"Most systems extract co-occurrence and syntactic information from the words surrounding the target term, which is then converted into a vector-space representation of the contexts that each target term appears in (Brown et al. , 1992; Pereira et al. , 1993; Ruge, 1997; Lin, 1998b)." P03-1006,J92-4003,o,"In many applications, it is natural and convenient to construct class-based language models, that is models based on classes of words (Brown et al. , 1992)." P06-1038,J92-4003,o,"Agglomerative clustering (e.g. , (Brown et al., 1992; Li, 1996)) can produce hierarchical word categories from an unannotated corpus." P06-1096,J92-4003,n,"For example, we would like to know that if a (JJ, JJ) We also tried using word clusters (Brown et al. , 1992) instead of POS but found that POS was more helpful." P06-2069,J92-4003,o,"In class-based n-gram modeling (Brown et al. , 1992) for example, class-based n-grams are used to determine the probability of occurrence of a POS class, given its preceding classes, and the probability of a particular word, given its own POS class." P07-1094,J92-4003,o,"Distributional clustering and dimensionality reduction techniques are typically applied when linguistically meaningful classes are desired (Schütze, 1995; Clark, 2000; Finch et al. , 1995); probabilistic models have been used to find classes that can improve smoothing and reduce perplexity (Brown et al. , 1992; Saul and Pereira, 1997)."
P08-1047,J92-4003,o,"For example, we can use automatically extracted hyponymy relations (Hearst, 1992; Shinzato and Torisawa, 2004), or automatically induced MN clusters (Rooth et al., 1999; Torisawa, 2001)." P08-1047,J92-4003,n,"In addition, the clustering methods used, such as HMMs and Brown's algorithm (Brown et al., 1992), seem unable to adequately capture the semantics of MNs since they are based only on the information of adjacent words." P08-1047,J92-4003,o,"They constructed word clusters by using HMMs or Brown's clustering algorithm (Brown et al., 1992), which utilize only information from neighboring words." P08-1058,J92-4003,o,"To scale LMs to larger corpora with higher-order dependencies, researchers have considered alternative parameterizations such as class-based models (Brown et al., 1992), model reduction techniques such as entropy-based pruning (Stolcke, 1998), novel representation schemes such as suffix arrays (Emami et al., 2007), Golomb Coding (Church et al., 2007) and distributed language models that scale more readily (Brants et al., 2007)." P08-1068,J92-4003,o,"2.2 Brown clustering algorithm In order to provide word clusters for our experiments, we used the Brown clustering algorithm (Brown et al., 1992)." P09-1015,J92-4003,o,"To group the letters into classes, we employ a hierarchical clustering algorithm (Brown et al., 1992)." P09-1015,J92-4003,o,"5 Active learning Whereas a passive supervised learning algorithm is provided with a collection of training examples that are typically drawn at random, an active learner has control over the labelled data that it obtains (Cohn et al., 1992)." P09-1031,J92-4003,o,"We have (11) Hypernym Patterns based on patterns proposed by (Hearst, 1992) and (Snow et al., 2005), (12) Sibling Patterns which are basically conjunctions, and (13) Part-of Patterns based on patterns proposed by (Girju et al., 2003) and (Cimiano and Wenderoth, 2007)."
P09-1031,J92-4003,o,"5.3 Performance of Taxonomy Induction In this section, we compare the following automatic taxonomy induction systems: HE, the system by Hearst (1992) with 6 hypernym patterns; GI, the system by Girju et al." P09-1031,J92-4003,o,"Pattern-based approaches are known for their high accuracy in recognizing instances of relations if the patterns are carefully chosen, either manually (Berland and Charniak, 1999; Kozareva et al., 2008) or via automatic bootstrapping (Hearst, 1992; Widdows and Dorow, 2002; Girju et al., 2003)." P09-1031,J92-4003,o,"Clustering-based approaches usually represent word contexts as vectors and cluster words based on similarities of the vectors (Brown et al., 1992; Lin, 1998)." P09-1031,J92-4003,o,"Agglomerative clustering (Brown et al., 1992; Caraballo, 1999; Rosenfeld and Feldman, 2007; Yang and Callan, 2008) iteratively merges the most similar clusters into bigger clusters, which need to be labeled." P09-1116,J92-4003,o,"Previous approaches, e.g., (Miller et al. 2004) and (Koo et al. 2008), have all used the Brown algorithm for clustering (Brown et al. 1992)." P93-1022,J92-4003,o,"5.2 A data recovery task In the second evaluation, the estimation method had to distinguish between members of two sets of 8 It should be emphasized that the TWS method uses only a monolingual target corpus, and not a bilingual corpus as in other methods ((Brown et al. , 1991; Gale et al. , 1992))." P93-1022,J92-4003,o,"Class based models (Brown et al. , ; Pereira et al. , 1993; Hirschman, 1986; Resnik, 1992) distinguish between unobserved cooccurrences using classes of ""similar"" words." P93-1023,J92-4003,o,"However, only recently has work been done on the automatic computation of such relationships from text, quantifying similarity between words and clustering them ((Brown et al., 1992), (Pereira et al. , 1993))." P93-1023,J92-4003,o,"One other published model for grouping semantically related words (Brown et al.
, 1992), is based on a statistical model of bigrams and trigrams and produces word groups using no linguistic knowledge, but no evaluation of the results is reported." P93-1034,J92-4003,o,(Brown et al. 1992) where the same idea of improving generalization and accuracy by looking at word classes instead of individual words is used. P93-1043,J92-4003,p,"The notion of incrementally merging classes of lexical items is intuitively satisfying and is explored in detail in (Brown et al. 1992)." P94-1038,J92-4003,o,"Similarity-based estimation was first used for language modeling in the cooccurrence smoothing method of Essen and Steinbiss (1992), derived from work on acoustic model smoothing by Sugawara et al." P95-1025,J92-4003,o,"The class-based approaches (Brown et al. , 1992; Resnik, 1992; Pereira et al. , 1993) calculate co-occurrence data of words belonging to different classes, rather than individual words, to enhance the co-occurrence data collected and to cover words which have low occurrence frequencies." P95-1025,J92-4003,o,"On the other hand, the thesaurus-based method of Yarowsky (1992) may suffer from loss of information (since it is semi-class-based) as well as data sparseness (since classes used in Resnik (1992) are based on the WordNet taxonomy while classes of Brown et al." P95-1025,J92-4003,o,"1 Introduction Previous corpus-based sense disambiguation methods require substantial amounts of sense-tagged training data (Kelly and Stone, 1975; Black, 1988 and Hearst, 1991) or aligned bilingual corpora (Brown et al. , 1991; Dagan, 1991 and Gale et al. 1992)." P95-1037,J92-4003,o,"These 30 questions are determined by growing a classification tree on the word vocabulary as described in (Brown et al. , 1992)." P96-1004,J92-4003,o,"Finally, inducing lexical semantics from distributional data (e.g. , (Brown et al. , 1992; Church et al. , 1989)) is also a form of surface cueing." P97-1008,J92-4003,o,"Class-based methods (Brown et al.
, 1992; Pereira, Tishby, and Lee, 1993; Resnik, 1992) cluster words into classes of similar words, so that one can base the estimate of a word pair's probability on the averaged cooccurrence probability of the classes to which the two words belong." P97-1023,J92-4003,o,"Given that semantically similar words can be identified automatically on the basis of distributional properties and linguistic cues (Brown et al. , 1992; Pereira et al. , 1993; Hatzivassiloglou and McKeown, 1993), identifying the semantic orientation of words would allow a system to further refine the retrieved semantic similarity relationships, extracting antonyms." P97-1033,J92-4003,o,"(Wang and Hirschberg, 1992; Wightman and Ostendorf, 1994; Stolcke and Shriberg, 1996a; Kompe et al. , 1994; Mast et al. , 1996)) and on speech repair detection and correction (e.g." P97-1033,J92-4003,o,"Since there is no well-agreed to definition of what an utterance is, we instead focus on intonational phrases (Silverman et al. , 1992), which end with an acoustically signaled boundary tone." P97-1033,J92-4003,o,"(Black et al. , 1992; Magerman, 1995)), we treat the word identities as a further refinement of the POS tags; thus we build a word classification tree for each POS tag." P97-1056,J92-4003,o,"An exception is the use of similarity for alleviating the sparse data problem in language modeling (Essen & Steinbiss, 1992; Brown et al. , 1992; Dagan et al. , 1994)." P98-1016,J92-4003,o,"Clustering can be done statistically by analyzing text corpora (Wilks et al. , 1989; Brown et al. , 1992; Pereira et al. , 1995) and usually results in a set of words or word senses." P98-1047,J92-4003,o,"In our experiments, the class assignment is performed by maximizing the mutual information between adjacent phrases, following the line described in (Brown et al. , 1992), with only the modification that candidates to clustering are phrases instead of words."
P98-1119,J92-4003,o,"In some cases, class (or part of speech) n-grams are used instead of word n-grams (Brown et al. , 1992; Chang and Chen, 1996)." P98-2124,J92-4003,o,"There have been a number of methods proposed in the literature to address the word clustering problem (e.g. , (Brown et al. , 1992; Pereira et al. , 1993; Li and Abe, 1996))." P98-2124,J92-4003,n,"Our method is a natural extension of those proposed in (Brown et al. , 1992) and (Li and Abe, 1996), and overcomes their drawbacks while retaining their advantages." P98-2148,J92-4003,o,"To cope with this problem we use the concept of class proposed for a word n-gram model (Brown et al. , 1992)." P98-2148,J92-4003,o,"To avoid this problem we use the concept of class proposed for a word n-gram model (Brown et al. , 1992)." P98-2180,J92-4003,o,"Syntagmatic strategies for determining similarity have often been based on statistical analyses of large corpora that yield clusters of words occurring in similar bigram and trigram contexts (e.g. , Brown et al. 1992, Yarowsky 1992), as well as in similar predicate-argument structure contexts (e.g. , Grishman and Sterling 1994)." P98-2221,J92-4003,o,"The mutual information clustering algorithm (Brown et al. , 1992) was used for this." P99-1005,J92-4003,o,"Furthermore, early work on class-based language models was inconclusive (Brown et al. , 1992)." W00-0725,J92-4003,o,"In this spirit, we introduce a generalization of the classic k-gram models, widely used for string processing (Brown et al. , 1992; Ney et al. , 1995), to the case of trees." W00-1305,J92-4003,o,",(Brown et al. , 1992))." W00-1311,J92-4003,o,"For better probability estimation, the model was extended to work with (hidden) word classes (Brown et al. , 1992, Ward and Issar, 1996)." W01-1004,J92-4003,o,"In the literature approaches to construction of taxonomies of concepts have been proposed (Brown et al. 1992, McMahon and Smith 1996, Sanderson and Croft 1999)."
W02-0908,J92-4003,o,"These tasks include collocation discovery (Pearce, 2001), smoothing and model estimation (Brown et al. , 1992; Clark and Weir, 2001) and text classification (Baker and McCallum, 1998)." W02-1029,J92-4003,o,"2 Automatic Thesaurus Extraction The development of large thesauri and semantic resources, such as WordNet (Fellbaum, 1998), has allowed lexical semantic information to be leveraged to solve NLP tasks, including collocation discovery (Pearce, 2001), model estimation (Brown et al. , 1992; Clark and Weir, 2001) and text classification (Baker and McCallum, 1998)." W02-1032,J92-4003,o,"The clusters were found automatically by attempting to minimize perplexity (Brown et al. , 1992)." W03-0416,J92-4003,o,"The second type has clear interpretation as a probability model, but no criteria to determine the number of clusters (Brown et al. , 1992; Kneser and Ney, 1993)." W03-0416,J92-4003,o,"The idea of word class (Brown et al. , 1992) gives a general solution to this problem." W03-0416,J92-4003,o,"Examples have been class-based n-gram models (Brown et al. , 1992; Kneser and Ney, 1993), smoothing techniques for structural disambiguation (Li and Abe, 1998) and word sense disambiguation (Schütze, 1998)." W03-1710,J92-4003,o,"Among various language modeling approaches, n-gram modeling has been widely used in many applications, such as speech recognition, machine translation (Katz 1987; Jelinek 1989; Gale and Church 1990; Brown et al. 1992; Yang et al. 1996; Bai et al 1998; Zhou et al 1999; Rosenfeld 2000; Gao et al 2002)." W03-2907,J92-4003,o,"(Brown et al., 1992) is one of the first works to use statistical methods of distributional analysis to induce clusters of words." W03-2907,J92-4003,n,"While we have shown an increase in performance over a purely syntactic baseline model (the algorithm of (Brown et al., 1992)), there are a number of avenues to pursue in extending this work."
W03-2907,J92-4003,o,"The corpus used for training our models was on the order of 100,000 words, whereas that used by (Brown et al., 1992) was around 1,000 times this size." W03-2907,J92-4003,o,"In this article, we used the algorithm of (Brown et al., 1992) to initialize the model." W03-2907,J92-4003,o,"Since there is no practical way of determining the classification which maximizes this quantity for a given corpus, (Brown et al., 1992) use a greedy algorithm which proceeds from the initial classification, performing the merge which results in the least loss in mutual information at each stage." W03-2907,J92-4003,o,"The observation probabilities for a given state, representing a certain word class, are determined by the relative frequencies of words belonging to that class (as determined by the algorithm of (Brown et al., 1992)); the probabilities of other words are set to a small initial value." W03-2907,J92-4003,o,"Since these morphological generalizations are based on the initial categorization provided by the algorithm of (Brown et al., 1992), we hope that they will foster speedy convergence of HNN training." W04-2410,J92-4003,p,"For example the class-based language model of (Brown et al. , 1992) is defined as: p(w2|w1) = p(w2|c2)p(c2|c1) (1) This helps solve the sparse data problem since the number of classes is usually much smaller than the number of words." W04-2410,J92-4003,o,"(1994) uses the mutual information clustering algorithm described in (Brown et al. , 1992)." W04-2602,J92-4003,o,"It has been known for some years that good performance can be realized with partial tagging and a hidden Markov model (Cutting et al. , 1992)." W04-2602,J92-4003,o,"Fortunately, using distributional characteristics of term contexts, it is feasible to induce part-of-speech categories directly from a corpus of sufficient size, as several papers have made clear (Brown et al. , 1992; Schütze, 1993; Clark, 2000)."
W04-2602,J92-4003,o,"Our approach to inducing syntactic clusters is closely related to that described in Brown et al. (1992) which is one of the earliest papers on the subject." W04-2602,J92-4003,o,"Clark (2000) reports results on a corpus containing 12 million terms, Schütze (1993) on one containing 25 million terms, and Brown et al. (1992) on one containing 365 million terms." W05-0503,J92-4003,p,"This paper is heavily indebted to prior work on unsupervised learning of position categories such as Brown et al 1992, Schütze 1997, Higgins 2002, and others cited there." W05-0609,J92-4003,o,"A key example is that of class-based language models (Brown et al. , 1992; Dagan et al. , 1999) where clustering approaches are used in order to partition words, determined to be similar, into sets." W05-0617,J92-4003,o,"This approach to term clustering is closely related to others from the literature (Brown et al. , 1992; Clark, 2000).2 Recall that the mutual information between random variables X and Y can be written: I(X;Y) = sum_{x,y} p(x,y) log [p(x,y) / (p(x)p(y))] (1) Here, X and Y correspond to term and context clusters, respectively, each event x and y the observation of some term and contextual term in the corpus." W05-0708,J92-4003,o,"Many approaches for POS tagging have been developed in the past, including rule-based tagging (Brill, 1995), HMM taggers (Brants, 2000; Cutting and others, 1992), maximum-entropy models (Ratnaparkhi, 1996), cyclic dependency networks (Toutanova et al. , 2003), memory-based learning (Daelemans et al. , 1996), etc. All of these approaches require either a large amount of annotated training data (for supervised tagging) or a lexicon listing all possible tags for each word (for unsupervised tagging)." W05-1011,J92-4003,o,"These problems include collocation discovery (Pearce, 2001), smoothing and estimation (Brown et al.
, 1992; Clark and Weir, 2001) and question answering (Pasca and Harabagiu, 2001)." W06-1615,J92-4003,o,"There are many choices for modeling co-occurrence data (Brown et al., 1992; Pereira et al., 1993; Blei et al., 2003)." W07-0735,J92-4003,o,"8 Related Research Class-based LMs (Brown et al., 1992) or factored LMs (Bilmes and Kirchhoff, 2003) are very similar to our T+C scenario." W09-0905,J92-4003,o,"This merging of contexts is different than clustering words (e.g., Clark, 2000; Brown et al., 1992), but is applicable, as word clustering relies on knowing which contexts identify the same category." W09-1119,J92-4003,o,"The technique is based on word class models, pioneered by (Brown et al., 1992), which hierarchically clusters words. Table 4: Utility of external knowledge. Component | CoNLL03 Test data | CoNLL03 Dev data | MUC7 Dev | MUC7 Test | Web pages: 1) Baseline | 83.65 | 89.25 | 74.72 | 71.28 | 71.41; 2) (1) + Gazetteer Match | 87.22 | 91.61 | 85.83 | 80.43 | 74.46; 3) (1) + Word Class Model | 86.82 | 90.85 | 80.25 | 79.88 | 72.26; 4) All External Knowledge | 88.55 | 92.49 | 84.50 | 83.23 | 74.44." W09-1119,J92-4003,o,"The approach is related, but not identical, to distributional similarity (for details, see (Brown et al., 1992) and (Liang, 2005))." W93-0113,J92-4003,o,"[Brown et al., 1992] Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer." W93-0113,J92-4003,o,"A number of knowledge-rich [Jacobs and Rau, 1990, Calzolari and Bindi, 1990, Mauldin, 1991] and knowledge-poor [Brown et al., 1992, Hindle, 1990, Ruge, 1991, Grefenstette, 1992] methods have been proposed for recognizing when words are similar." W94-0106,J92-4003,o,"Other researchers have also reported similar problems of excessive resource demands with the ""collect all neighbors"" model [Gale et al., 1992]." W94-0106,J92-4003,n,"Other statistical systems that address word classification problems do not emphasize the use of linguistic knowledge and do not deal with a specific word class [Brown et al. 
, 1992], or do not exploit as much linguistic knowledge as we do [Pereira et al., 1993]." W94-0106,J92-4003,o,"Similarly, the sense disambiguation problem is typically attacked by comparing the distribution of the neighbors of a word's occurrence to prototypical distributions associated with each of the word's senses [Gale et al., 1992, Schütze, 1992]." W94-0106,J92-4003,o,"It has been used for diverse problems such as machine translation and sense disambiguation [Gale et al., 1992, Schütze, 1992]." W95-0105,J92-4003,o,"3.1 Distributionally derived groupings Distributional cluster (Brown et al., 1992): head, body, hands, eye, voice, arm, seat, hair, mouth [WordNet sense inventories with disambiguation scores for each cluster word omitted: table residue from the paper's example listing] This group was among classes hand-selected by Brown et al. as ""particularly interesting""." W95-0105,J92-4003,o,"Distributional cluster (Brown et al. 
, 1992): tie, jacket, suit [WordNet sense listings with disambiguation scores for tie, jacket, and suit omitted: table residue] This cluster was derived by Brown et al. using a modification of their algorithm, designed to uncover ""semantically sticky"" clusters." W95-0105,J92-4003,o,"Distributional cluster (Brown et al. 
, 1992): cost, expense, risk, profitability, deferral, earmarks, capstone, cardinality, mintage, reseller [WordNet sense listings with disambiguation scores omitted: table residue; ""cardinality"" and ""reseller"" are not in WordNet] This cluster was one presented by Brown et al. as a randomly-selected class, rather than one hand-picked for its coherence." W95-0105,J92-4003,o,"5 Conclusions and Future Work The results of the evaluation are extremely encouraging, especially considering that disambiguating word senses to the level of fine-grainedness found in WordNet is quite a bit more difficult than disambiguation to the level of homographs (Hearst, 1991; Cowie et al., 1992)." W95-0105,J92-4003,o,"(Bensch and Savitch, 1992; Brill, 1991; Brown et al. 
, 1992; Grefenstette, 1994; McKeown and Hatzivassiloglou, 1993; Pereira et al., 1993; Schütze, 1993))." W96-0103,J92-4003,o,"2 Hierarchical Clustering of Words Several algorithms have been proposed for automatically clustering words based on a large corpus (Jardino and Adda 91, Brown et al. 1992, Kneser and Ney 1993, Martin et al. 1995, Ueberla 1995)." W96-0103,J92-4003,o,"The reader is referred to (Ushioda 1996) and (Brown et al. 1992) for details of MI clustering, but we will first briefly summarize the MI clustering and then describe our hierarchical clustering algorithm." W96-0213,J92-4003,o,"However, the aforementioned SDT techniques require word classes (Brown et al., 1992) to help prevent data fragmentation, and a sophisticated smoothing algorithm to mitigate the effects of any fragmentation that occurs." W97-0105,J92-4003,o,"In all other respects, our work departs from previous research on broad-coverage probabilistic parsing, which either attempts to learn to predict grammatical structure of test data directly from a training treebank (Brill, 1993; Collins, 1996; Eisner, 1996; Jelinek et al., 1994; Magerman, 1995; Sekine and Grishman, 1995; Sharman et al., 1990), or employs a grammar and sometimes a dictionary to capture linguistic expertise directly (Black et al., 1993a; Grinberg et al., 1995; Schabes, 1992), but arguably at a less detailed and informative level than in the research reported here." W97-0105,J92-4003,o,"For example, the sets of tags and rule labels have been clustered by our team grammarian, while a vocabulary of about 60,000 words has been clustered by machine (Brown et al., 1992; Ushioda, 1996a; Ushioda, 1996b)." W97-0127,J92-4003,o,"The concept of mutual information, taken from information theory, was proposed as a measure of word association (Church, 1990; Jelinek et al., 1990, 1992; Dagan, 1995)." 
W97-0210,J92-4003,o,"Semantic classification programs (Brown et al., 1992; Hatzivassiloglou and McKeown, 1993; Pereira et al., 1993) use statistical information based on cooccurrence with appropriate marker words to partition a set of words into semantic groups or classes." W97-0211,J92-4003,o,"Many authors claim that class-based methods are more robust against data sparseness problems (Dagan, 1994), (Pereira, 1993), (Brown et al., 1992)." W97-0213,J92-4003,o,"(Brown et al., 1992))." W97-0213,J92-4003,o,"In contrast, approaches to WSD attempt to take advantage of many different sources of information (e.g. see (McRoy, 1992; Ng and Lee, 1996; Bruce and Wiebe, 1994)); it seems possible to obtain benefit from sources ranging from local collocational clues (Yarowsky, 1993) to membership in semantically or topically related word classes (Yarowsky, 1992; Resnik, 1993) to consistency of word usages within a discourse (Gale et al., 1992); and disambiguation seems highly lexically sensitive, in effect requiring specialized disambiguators for each polysemous word." W97-0307,J92-4003,o,"Their weights are calculated by deleted interpolation (Brown et al., 1992)." W97-0307,J92-4003,o,"(Cutting et al., 1992; Feldweg, 1995)), the tagger for grammatical functions works with lexical and contextual probability measures Pq()." W97-0309,J92-4003,o,"Aggregate models based on higher-order n-grams (Brown et al., 1992) might be able to capture multi-word structures such as noun phrases." W97-0309,J92-4003,o,"In Section 2, we examine aggregate Markov models, or class-based bigram models (Brown et al., 1992) in which the mapping from words to classes is probabilistic." W97-0309,J92-4003,o,"2 Aggregate Markov models In this section we consider how to construct classbased bigram models (Brown et al., 1992)." W97-0309,J92-4003,o,"Though several algorithms (Brown et al. 
, 1992; Pereira, Tishby, and Lee, 1993) have been proposed [Figure 1: Plots of (a) training and (b) test perplexity versus number of iterations of the EM algorithm, for the aggregate Markov model with C = 32 classes; axis-tick residue omitted]" W97-0309,J92-4003,o,"Our approach differs in important ways from the use of hidden Markov models (HMMs) for classbased language modeling (Jelinek et al., 1992)." W97-0311,J92-4003,o,"Several authors have used mutual information and similar statistics as an objective function for word clustering (Dagan et al., 1993; Brown et al., 1992; Pereira et al., 1993; Wang et al., 1996), for automatic determination of phonemic baseforms (Lucassen & Mercer, 1984), and for language modeling for speech recognition (Ries et al., 1996)." W97-0311,J92-4003,o,"In practice, texts contain an enormous number of word sequences (Brown et al., 1992), only a tiny fraction of which are NCCs, and it takes considerable computational effort to induce each translation model." W97-1003,J92-4003,o,"Various methods are based on Mutual Information between classes, see (Brown et al., 1992, McMahon and Smith, 1996, Kneser and Ney, 1993, Jardino and Adda, 1993, Martin, Liermann, and Ney, 1995, Ueberla, 1995)." W97-1003,J92-4003,o,"Another application of hard clustering methods (in particular bottom-up variants) is that they can also produce a binary tree, which can be used for decision-tree based systems such as the SPATTER parser (Magerman, 1995) or the ATR Decision-Tree Part-Of-Speech Tagger (Black et al., 1992, Ushioda, 1996)." W97-1006,J92-4003,o,"Brown, (Brown et al., 1992) uses the same bigrams and by means of a greedy algorithm forms the hierarchical clusters of words." W98-1109,J92-4003,n,"As with similar work (e.g. Brown et al 1992), the size of the corpus makes preprocessing such as lemmatization, POS tagging or partial parsing, too costly." 
W98-1109,J92-4003,o,"While Schütze and Pedersen (1993), Brown et al (1992) and Futrelle and Gauch (1993) all demonstrate the ability of their systems to identify word similarity using clustering on the most frequently occurring words in their corpus, only Grefenstette (1992) demonstrates his system by generating word similarities with respect to a set of target words." W98-1109,J92-4003,o,"This is in contrast to work by researchers such as Schütze and Pedersen (1992), Brown et al (1992) and Futrelle and Gauch (1995), where it is often the most frequent words in the lexicon which are clustered, predominantly with the purpose of determining their grammatical classes." W98-1109,J92-4003,o,"While previous researchers have used agglomerative nesting clustering (e.g. Brown et al (1992), Futrelle and Gauch (1993)), comparisons with our work are difficult to draw, due to their use of the 1,000 commonest words from their respective corpora." W98-1109,J92-4003,o,"In Brown et al (1992), the authors provide some sample subtrees resulting from such a 1,000-word clustering." W98-1113,J92-4003,o,"Precursors to this work include (Pereira et al, 1993), (Brown et al. 1992), (Brill & Kapur, 1993), (Jelinek, 1990), and (Brill et al, 1990) and, as applied to child language acquisition, (Finch & Chater, 1992)." W98-1113,J92-4003,n,"Clustering algorithms have been previously shown to work fairly well for the classification of words into syntactic and semantic classes (Brown et al. 1992), but determining the optimum number of classes for a hierarchical cluster tree is an ongoing difficult problem, particularly without prior knowledge of the item classification." W98-1113,J92-4003,o,The fact that information consisting of nothing more than bigrams can capture syntactic information about English has already been noted by (Brown et al. 1992). W98-1117,J92-4003,o,"In the Link Grammar framework (Lafferty et al., 1992; Della Pietra et al. 
, 1994), strictly local contexts are naturally combined with long-distance information coming from long-range trigrams." W98-1122,J92-4003,o,"Most clustering schemes (Brown et al., 1992; Kneser and Ney, 1993; Pereira et al., 1993; McCandless and Glass, 1993; Bellegarda et al., 1996; Saul and Pereira, 1997) use the average entropy reduction to decide when two words fall into the same cluster." W98-1207,J92-4003,o,"(Cutting et al., 1992; Feldweg, 1995)), the tagger for grammatical functions works with lexical and contextual probability measures PQ() depending on the category of a mother node (Q). [Figure 2: Example sentence: (1) Selbst besucht hat Peter Sabine nie (ADV VVPP VAFIN NE NE ADV; gloss: himself visited has Peter Sabine never) 'Peter never visited Sabine himself']" W98-1207,J92-4003,o,"Their weights are calculated by deleted interpolation (Brown et al., 1992)." W99-0617,J92-4003,o,"(Black et al., 1992; Magerman, 1994)) and view the POS tags and word identities as two separate sources of information." A97-1053,J93-1003,p,"7Another related measure is Dunning (1993)'s likelihood ratio tests for binomial and multinomial distributions, which are claimed to be effective even with very much smaller volumes of text than is necessary for other tests based on assumed normal distributions." A97-1053,J93-1003,o,"For example, in the context of syntactic disambiguation, Black (1993) and Magerman (1995) proposed statistical parsing models based on decision-tree learning techniques, which incorporated not only syntactic but also lexical/semantic information in the decision-trees." A97-1053,J93-1003,o,"[garbled equation (23) omitted] Now, the problem of learning probabilistic subcategorization preference is stated as: for every verb-noun collocation e in C, estimating the probability distribution P((f1, ... 6 Resnik (1993) applies the idea of the KL distance to measuring the association of a verb v and its object noun class c. 
Our definition of ekt corresponds to an extension of Resnik's association score, which considers dependencies of more than one case-marker in a subcategorization frame." A97-2010,J93-1003,o,"Since the word ""facility"" is the subject of ""employ"" and is modified by ""new"" in (3), we retrieve other words that appeared in the same contexts and obtain the following two groups of selectors (the log λ column shows the likelihood ratios (Dunning, 1993) of these words in the local contexts): Subjects of ""employ"" with top-20 highest likelihood ratios: ORG 64 (50.4), plant 14 (31.0), company 27 (28.6), operation 8 (23.0), industry 9 (14.6), firm 8 (13.5), pirate 2 (12.1), unit 9 (9.32), shift 3 (8.48), postal service 2 (7.73), machine 3 (6.56), corporation 3 (6.47), manufacturer 3 (6.21), insurance company 2 (6.06), aerospace 2 (5.81), memory device 1 (5.79), department 3 (5.55), foreign office 1 (5.41), enterprise 2 (5.39), pilot 2 (5.37) (ORG includes all proper names recognized as organizations). Modifiees of ""new"" with top-20 highest likelihood ratios: post 432 (952.9), issue 805 (902.8), product 675 (888.6), rule 459 (875.8), law 356 (541.5), technology 237 (382.7), generation 150 (323.2), model 207 (319.3), job 260 (269.2), system 318 (251.8), bonds 223 (245.4), capital 178 (241.8), order 228 (236.5), version 158 (223.7), position 236 (207.3), high 152 (201.2), contract 279 (198.1), bill 208 (194.9), venture 123 (193.7), program 283 (183.8). Since the similarity between Sense 1 of ""facility"" and the selectors is greater than that of other senses, the word ""facility"" in (3) is tagged ""Sense 1"". The key innovation of our algorithm is that a polysemous word is disambiguated with past usages of other words." C00-1029,J93-1003,o,"The problem of choosing an appropriate level in the hierarchy at which to represent a particular noun sense (given a predicate and argument position) has been investigated by Resnik (1993), Li and Abe (1998) and Ribas (1995)." 
C00-1047,J93-1003,p,"For instance, mutual information (Church et al. 1990) and the log-likelihood (Dunning 1993) methods for extracting word bigrams have been widely used." C00-1058,J93-1003,o,"We adopted log-likelihood ratio (Dunning 1993), which gave the best performance among crude non-iterative methods in our test experiments." C00-1059,J93-1003,o,Mutual information involves a problem in that it is overestimated for low-frequency terms (Dunning 1993). C00-2100,J93-1003,o,"Several techniques and results have been reported on learning subcategorization frames (SFs) from text corpora (Webster and Marcus, 1989; Brent, 1991; Brent, 1993; Brent, 1994; Ushioda et al., 1993; Manning, 1993; Ersan and Charniak, 1996; Briscoe and Carroll, 1997; Carroll and Minnen, 1998; Carroll and Rooth, 1998)." C00-2100,J93-1003,o,"Using the values computed above: p1 = k1/n1, p2 = k2/n2, p = (k1+k2)/(n1+n2). Taking these probabilities to be binomially distributed, the log likelihood statistic (Dunning, 1993) is given by: -2 log λ = 2[log L(p1, k1, n1) + log L(p2, k2, n2) - log L(p, k1, n1) - log L(p, k2, n2)] where, log L(p, n, k) = k log p + (n - k) log(1 - p). According to this statistic, the greater the value of -2 log λ for a particular pair of observed frame and verb, the more likely that frame is to be valid SF of the verb." C00-2100,J93-1003,o,"5 Comparison with related work Preliminary work on SF extraction from corpora was done by (Brent, 1991; Brent, 1993; Brent, 1994) and (Webster and Marcus, 1989; Ushioda et al., 1993)." C02-1007,J93-1003,o,"However, Dunning (1993) pointed out that for the purpose of corpus statistics, where the sparseness of data is an important issue, it is better to use the log-likelihood ratio." C02-1016,J93-1003,o,"In the first step, the scores are initialized according to the G2 statistic (Dunning, 1993)." 
C02-1040,J93-1003,o,"We describe the experiment in greater detail.2 The particular verbs selected were looked up in (Levin, 1993) and the class for each verb in the classification system defined in (Stevenson and Merlo, 1997) was selected with some discussion with linguists." C02-1040,J93-1003,o,"For example, in John saw Mary yesterday at the station, only John and Mary are required arguments while the other constituents are optional (adjuncts).3 The problem of SF identification using statistical methods has had a rich discussion in the literature (Ushioda et al., 1993; Manning, 1993; Briscoe and Carroll, 1997; Brent, 1994) (also see the references cited in (Sarkar and Zeman, 2000))." C02-1040,J93-1003,o,"For further background on this method of hypothesis testing the reader is referred to (Bickel and Doksum, 1977; Dunning, 1993)." C02-1040,J93-1003,o,"Using the values computed above: p1 = k1/n1, p2 = k2/n2, p = (k1+k2)/(n1+n2). Taking these probabilities to be binomially distributed, the log likelihood statistic (Dunning, 1993) is given by: -2 log λ = 2[log L(p1; k1; n1) + log L(p2; k2; n2) - log L(p; k1; n1) - log L(p; k2; n2)] where, log L(p; n; k) = k log p + (n - k) log(1 - p). According to this statistic, the greater the value of -2 log λ for a particular pair of observed frame and verb, the more likely that frame is to be valid SF of the verb." C02-1065,J93-1003,o,"As the strength of relevance between a target compound noun t and its co-occurring word r, the feature value of r, w(t;r) is defined by the log likelihood ratio (Dunning, 1993) as follows." C02-1125,J93-1003,p,"For instance, the mutual information (Church et al. 1990) and log-likelihood ratio (Dunning 1993; Cohen 1995) have been widely used for extracting word bigrams." 
C02-1125,J93-1003,o,"The value of Dist(D(T)) can be defined in various ways, and they found that using log-likelihood ratio (see Dunning 1993) worked best, which is represented as follows: Dist(D(T)) = sum_{i=1..M} k_i log(k_i/#D(T)) - sum_{i=1..M} k_i log(K_i/#D_0), where k_i and K_i are the frequency of a word w_i in D(W) and D_0 respectively, and {w_1,...,w_M} is the set of all words in D_0. As stated in introduction, Dist(D(T)) is normalized by the baseline function, which is referred to as B_Dist() here." C02-1130,J93-1003,o,"In order to avoid this problem we implemented a simple bootstrapping procedure in which a seed data set of 100 instances of each of the eight categories was hand tagged and used to generate a decision list classifier using the C4.5 algorithm (Quinlan, 1993) with the word frequency and topic signature features described below." C02-1130,J93-1003,o,"The topic signatures are automatically generated for each specific term by computing the likelihood ratio (λ-score) between two hypotheses (Dunning, 1993)." C02-1130,J93-1003,o,"The scores were then weighted by the inverse of their height in the tree and then summed together, similarly to the procedure in (Resnik, 1993)." C02-1130,J93-1003,o,"Methods 4.1 Experiment 1: Held out data To examine the generalizability of classifiers trained on the automatically generated data, a C4.5 decision tree classifier (Quinlan, 1993) was trained and tested on the held out test set described above." C02-1166,J93-1003,o,"Each word i in the context vector of w is then weighted with a measure of its association with w. 
We chose the loglikelihood ratio test (Dunning, 1993) to measure this association; the context vectors of the target words are then translated with our general bilingual dictionary, leaving the weights unchanged (when several translations are proposed by the dictionary, we consider all of them with the same weight); the similarity of each source word s, for each target word t, is computed on the basis of the cosine measure; the similarities are then normalized to yield a probabilistic translation lexicon, P(t|s)." C02-2003,J93-1003,o,"3.1 The Likelihood Ratio We adopted a method for collocation discovery based on the likelihood ratio (Dunning, 1993)." C02-2005,J93-1003,o,"The starting point is the log likelihood ratio (log λ, Dunning 1993)." C02-2005,J93-1003,o,Its distribution is asymptotic to a χ2 distribution and can hence be used as a test statistic (Dunning 1993). C02-2005,J93-1003,p,"(3) -2 log λ = 2 (log LH(A) - log LH(0)) Problems for an unscaled log λ approach: Although log λ identifies collocations much better than competing approaches (Dunning 1993) in terms of its recall, it suffers from its relatively poor precision rates." C04-1088,J93-1003,o,We have begun experimenting with log likelihood ratio (Dunning 1993) as a thresholding technique. C04-1094,J93-1003,o,"In the iNeast system (Leuski et al., 2003), the identification of relevant terms is oriented towards multi-document summarization, and they use a likelihood ratio (Dunning, 1993) which favours terms which are representative of the set of documents as opposed to the full collection." C04-1111,J93-1003,o,We apply the log likelihood principle (Dunning 1993) to compute this score. C04-1121,J93-1003,o,The measure of predictiveness we employed is log likelihood ratio with respect to the target variable (Dunning 1993). 
C04-1136,J93-1003,o,"The candidates were then ranked according to the scores assigned by four association measures: the log-likelihood ratio G2 (Dunning, 1993), Pearson's chi-squared statistic X2 (Manning and Schütze, 1999, 169-172), the t-score statistic t (Church et al., 1991), and mere cooccurrence frequency f.4 TPs were identified according to the definition of Krenn (2000)." C04-1136,J93-1003,p,"The evaluation results also confirm the argument of Dunning (1993), who suggested G2 as a more robust alternative to X2." C04-1141,J93-1003,o,"Smadja (1993), which is the classic work on collocation extraction, uses a two-stage filtering model in which, in the first step, n-gram statistics determine possible collocations and, in the second step, these candidates are submitted to a syntactic validation.7 Of course, lexical material is always at least partially dependent on the domain in question." C04-1141,J93-1003,o,"Almost all of these measures can be grouped into one of the following three categories: frequency-based measures (e.g., based on absolute and relative co-occurrence frequencies); information-theoretic measures (e.g., mutual information, entropy); statistical measures (e.g., chi-square, t-test, log-likelihood, Dice's coefficient). The corresponding metrics have been extensively discussed in the literature both in terms of their mathematical properties (Dunning, 1993; Manning and Schütze, 1999) and their suitability for the task of collocation extraction (see Evert and Krenn (2001) and Krenn and Evert (2001) for recent evaluations)." C96-2098,J93-1003,o,"2.2 The Choice of Co-occurrence Measure and Matrix Distance There are many alternatives to measure cooccurrence between two words x and y (Church, 1990; Dunning, 1993)." D07-1052,J93-1003,o,"However, while similarity measures (such as WordNet distance or Lin's similarity metric) only detect cases of semantic similarity, association measures (such as the ones used by Poesio et al. 
, or by Garera and Yarowsky) also find cases of associative bridging like 1a,b; the result of this can be seen in table (2): while the similarity measures (Lin98, RFF) list substitutable terms (which behave like synonyms in many contexts), the association measures (Garera and Yarowsky's TheY measure, Pado and Lapata's association measure) also find non-compatible associations such as country-capital or drug-treatment, which is why they are commonly called relation-free. [Table 1: Similarity and association measures: most similar items. Five measures with their highest-ranked neighbours (very rare words removed) for German Land (country/state/land) and Medikament (medical drug); full word lists omitted as table residue. Legend: Lin98: Lin's distributional similarity measure (Lin, 1998); RFF: Geffet and Dagan's Relative Feature Focus measure (Geffet and Dagan, 2004); TheY: association measure introduced by Garera and Yarowsky (2006); TheY:G2: similar method using a log-likelihood-based statistic (see Dunning 1993), which has a preference for higher-frequency terms; PL03: semantic space association measure proposed by Pado and Lapata (2003).]" D08-1048,J93-1003,o,"As association measure we apply log-likelihood ratio (Dunning, 1993) to normalized frequency." D09-1040,J93-1003,o,"Beside simple cooccurrence counts within sliding windows, other SoA measures include functions based on TF/IDF (Fung and Yee, 1998), mutual information (PMI) (Lin, 1998), conditional probabilities (Schuetze and Pedersen, 1997), chi-square test, and the loglikelihood ratio (Dunning, 1993)." D09-1040,J93-1003,o,"1 Introduction Phrase-based systems, flat and hierarchical alike (Koehn et al., 2003; Koehn, 2004b; Koehn et al., 2007; Chiang, 2005; Chiang, 2007), have achieved a much better translation coverage than wordbased ones (Brown et al., 1993), but untranslated words remain a major problem in SMT." D09-1051,J93-1003,o,"Many studies on collocation extraction are carried out based on co-occurring frequencies of the word pairs in texts (Choueka et al., 1983; Church and Hanks, 1990; Smadja, 1993; Dunning, 1993; Pearce, 2002; Evert, 2004)." D09-1051,J93-1003,o,"Thus the alignment set is denoted as A = {(i, a_i) | i ∈ [1, l]}. We adapt the bilingual word alignment model, IBM Model 3 (Brown et al., 1993), to monolingual word alignment." D09-1066,J93-1003,p,"One popular and statistically appealing such measure is Log-Likelihood (LL) (Dunning, 1993)." D09-1081,J93-1003,o,"For example, in this work we use loglikelihood ratio (Dunning, 1993) to determine the SoA between a word sense and co-occurring words, and cosine to determine the distance between two DPWSs' log likelihood vectors (McDonald, 2000)." D09-1081,J93-1003,p,This further supports the claim by Dunning (1993) that loglikelihood ratio is much less sensitive than pmi to low counts. 
D09-1154,J93-1003,o,"We then scored each query pair (q1,q2) in this subset using the log-likelihood ratio (LLR, Dunning, 1993) between q1 and q2, which measures the mutual dependence within the context of web search queries (Jones et al., 2006a)." E06-1018,J93-1003,o,"The significance values are obtained using the loglikelihood measure assuming a binomial distribution for the unrelatedness hypothesis (Dunning, 1993)." E09-1074,J93-1003,o,"As a result, we can use collocation measures like point-wise mutual information (Church and Hanks, 1989) or the log-likelihood ratio (Dunning, 1993) to predict the strong association for a given cue." E09-2012,J93-1003,p,"By default, the log-likelihood ratio measure (LLR) is proposed, since it was shown to be particularly suited to language data (Dunning, 1993)." E95-1008,J93-1003,o,Collocation map that is first suggested in (Itan 1993) is a sigmoid belief network with words as probabilistic variables. E95-1008,J93-1003,p,This results also agree with Dunning's argument about overestimation on the infrequent occurrences in which many infrequent pairs tend to get higher estimation (Dunning 1993). E95-1008,J93-1003,o,The problem is due to the assumption of normality in naive frequency based statistics according to Dunning (1993). E99-1005,J93-1003,o,"Proceedings of EACL '99 Determinants of Adjective-Noun Plausibility Maria Lapata and Scott McDonald and Frank Keller School of Cognitive Science Division of Informatics, University of Edinburgh 2 Buccleuch Place, Edinburgh EH8 9LW, UK {mlap, scottm, keller} @cogsci.ed.ac.uk Abstract This paper explores the determinants of adjective-noun plausibility by using correlation analysis to compare judgements elicited from human subjects with five corpus-based variables: co-occurrence frequency of the adjective-noun pair, noun frequency, conditional probability of the noun given the adjective, the log-likelihood ratio, and Resnik's (1993) selectional association measure." 
E99-1005,J93-1003,o,"Conditional probability, the log-likelihood ratio, and Resnik's (1993) selectional association measure were also significantly correlated with plausibility ratings." E99-1005,J93-1003,o,The research presented in this paper is similar in motivation to Resnik's (1993) work on selectional restrictions. E99-1005,J93-1003,o,"We employ the loglikelihood ratio as a measure of the collocational status of the adjective-noun pair (Dunning, 1993; Daille, 1996)." E99-1005,J93-1003,o,"We estimated the probabilities P(c | pi) and P(c) similarly to Resnik (1993) by using relative frequencies from the BNC, together with WordNet (Miller et al., 1990) as a source of taxonomic semantic class information."
H05-1013,J93-1003,o,"The second attempts to instill knowledge of collocations in the data; we use the technique described by (Dunning, 1993) to compute multi-word expressions and then mark words that are commonly used as such with a feature that expresses this fact."
H05-1089,J93-1003,o,"1 Introduction Word associations (co-occurrences) have a wide range of applications including: Speech Recognition, Optical Character Recognition and Information Retrieval (IR) (Church and Hanks, 1991; Dunning, 1993; Manning and Schütze, 1999)." H05-1089,J93-1003,o,"Many studies focus on rare words (Dunning, 1993; Moore, 2004); butterflies are more interesting than moths."
H05-1113,J93-1003,o,"Several other measures like Log-Likelihood (Dunning, 1993), Pearson's χ2 (Church et al., 1991), Z-Score (Church et al., 1991), Cubic Association Ratio (MI3), etc., have also been proposed."
I08-1013,J93-1003,o,"the context vector, which gathers the set of co-occurrence units associated with the number of times that they occur together (cf. http://cl.cs.okayama-u.ac.jp/rsc/jacabit/). In order to identify specific words in the lexical context and to reduce word-frequency effects, we normalize context vectors using an association score such as Mutual Information (Fano, 1961) or Log-likelihood (Dunning, 1993)." I08-1013,J93-1003,o,"4 Pattern switching The compositional translation presents problems which have been reported by (Baldwin and Tanaka, 2004; Brown et al., 1993): Fertility SWTs and MWTs are not translated by a term of the same length."
I08-1038,J93-1003,o,"Many methods have been proposed to measure the co-occurrence relation between two words such as χ2 (Church and Mercer, 1993), mutual information (Church and Hanks, 1989; Pantel and Lin, 2002), t-test (Church and Hanks, 1989), and loglikelihood (Dunning, 1993)."
I08-1059,J93-1003,o,"Before training the classifiers, we perform feature ablation by imposing a count cutoff of 10, and by limiting the number of features to the top 75K features in terms of log likelihood ratio (Dunning 1993)."
I08-2134,J93-1003,p,"All the enumerated segment pairs are listed in the following table: AM1+1: (c1, c0); AM2+1: (c2c1, c0); AM1+2: (c1, c0c1); AM2+2: (c2c1, c0c1); AM1+3: (c1, c0c1c2); AM3+1: (c3c2c1, c0). We use Dunning's method (Dunning, 1993) because it does not depend on the assumption of normality and it allows comparisons to be made between the significance of the occurrences of both rare and common phenomena." I08-2134,J93-1003,o,"(Choueka, 1988) regarded MWE as connected collocations: a sequence of neighboring words whose exact meaning cannot be derived from the meaning or connotation of its components, which means that MWEs also have low ST. As some pioneers provide MWE identification methods which are based on association metrics (AM), such as likelihood ratio (Dunning, 1993)."
J00-2004,J93-1003,o,"A boundary-based model of co-occurrence assumes that both halves of the bitext have been segmented into s segments, so that segment Ui in one half of the bitext and segment Vi in the other half are mutual translations, 1 ≤ i ≤ s. Under the boundary-based model of co-occurrence, there are several ways to compute co-occurrence counts cooc(u, v) between word types u and v. In the models of Brown, Della Pietra, Della Pietra, and Mercer (1993), reviewed in Section 4.3, cooc(u, v) = Σ(i=1..s) ei(u) · fi(v) (12), where ei and fi are the unigram frequencies of u and v, respectively, in each aligned text segment i. For most translation models, this method produces suboptimal results, however, when ei(u) > 1 and fi(v) > 1." J00-2004,J93-1003,o,"Due to the parameter interdependencies introduced by the one-to-one assumption, we are unlikely to find a method for decomposing the assignments into parameters that can be estimated independently of each other as in Brown et al. [1993b, Equation 26]." J00-2004,J93-1003,p,"In informal experiments described elsewhere (Melamed 1995), I found that the G2 statistic suggested by Dunning (1993) slightly outperforms χ2." J00-2004,J93-1003,o,"Until now, translation models have been evaluated either subjectively (e.g. White and O'Connell 1993) or using relative metrics, such as perplexity with respect to other models (Brown et al. 1993b)." J00-2004,J93-1003,o,"Bilingual lexicographers can work with bilingual concordancing software that can point them to instances of any link type induced from a bitext and display these instances sorted by their contexts (e.g. Simard, Foster, and Perrault 1993)." J00-2004,J93-1003,o,"The performance of cross-language information retrieval with a uniform T is likely to be limited in the same way as the performance of conventional information retrieval without term-frequency information, i.e., where the system knows which terms occur in which documents, but not how often (Buckley 1993)."
J00-2004,J93-1003,o,"(1993b), this model is symmetric, because both word bags are generated together from a joint probability distribution."
J00-3001,J93-1003,o,"Dunning (1993) has called attention to the log-likelihood ratio, G2, as appropriate for the analysis of such contingency tables, especially when such contingency tables concern very low frequency words."
J02-2003,J93-1003,o,"Dunning (1993) argues for the use of G2 rather than X2, based on an analysis of the sampling distributions of G2 and X2, and results obtained when using the statistics to acquire highly associated bigrams." J02-2003,J93-1003,o,"An alternative formula for G2 is given in Dunning (1993), but the two are equivalent." J02-2003,J93-1003,o,"Dunning (1993) argues for the use of G2 rather than X2, based on the claim that the sampling distribution of G2 approaches the true chi-square distribution quicker than the sampling distribution of X2. However, Agresti (1996, page 34) makes the opposite claim: The sampling distributions of X2 and G2 get closer to chi-squared as the sample size n increases... The convergence is quicker for X2 than G2. In addition, Pedersen (2001) questions whether one statistic should be preferred over the other for the bigram acquisition task and cites Cressie and Read (1984), who argue that there are some cases where the Pearson statistic is more reliable than the log-likelihood statistic." J02-2003,J93-1003,o,"Alternative Class-Based Estimation Methods The approaches used for comparison are that of Resnik (1993, 1998), subsequently developed by Ribas (1995), and that of Li and Abe (1998), which has been adopted by McCarthy (2000)." J02-2003,J93-1003,o,"The X2 statistic is performing at least as well as G2, and the results show that the average level of generalization is slightly higher for G2 than X2.
This suggests a possible explanation for the results presented here and those in Dunning (1993): that the X2 statistic provides a less conservative test when counts in the contingency table are low." J02-4002,J93-1003,o,"(The example paper we use throughout the article is F. Pereira, N. Tishby, and L. Lee's Distributional Clustering of English Words [ACL-1993, cmp-lg/9408011]; it was chosen because it is the paper most often cited within our collection)." J02-4002,J93-1003,o,We measured associations using the log-likelihood measure (Dunning 1993) for each combination of target category and semantic class by converting each cell of the contingency table into a 2x2 contingency table. J02-4002,J93-1003,o,"There are good reasons for using such a hand-crafted, genre-specific verb lexicon instead of a general resource such as WordNet or Levin's (1993) classes: Many verbs used in the domain of scientific argumentation have assumed a specialized meaning, which our lexicon readily encodes." J06-1005,J93-1003,o,"Their system's output was an ordered list of possible parts according to some statistical metrics (e.g., the log-likelihood metric (Dunning 1993))." J06-1005,J93-1003,o,"The SemCor collection (Miller et al., 1993) is a subset of the Brown Corpus and consists of 352 news articles distributed into three sets in which the nouns, verbs, adverbs, and adjectives have been manually tagged with their corresponding WordNet senses and part-of-speech tags using Brill's tagger (1995)." J06-3001,J93-1003,o,Introduction The automated analysis of large corpora has many useful applications (Church and Mercer 1993). J06-3001,J93-1003,o,"Other corpus-based methods determine associations between words (Grefenstette 1992; Dunning 1993; Lin et al. 1998), which yields a basis for computing thesauri, or dictionaries of terminological expressions and multiword lexemes (Gaizauskas, Demetriou, and Humphreys 2000; Grefenstette 2001)."
J06-3001,J93-1003,o,"From multilingual texts, translation lexica can be generated (Gale and Church 1991; Kupiec 1993; Kumano and Hirakawa 1994; Boutsis, Piperidis, and Demiros 1999; Grefenstette 1999)." J06-4003,J93-1003,o,"In the usual case considered by Dunning (1993) and discussed by Manning and Schütze (1999), the right-hand side of the equation is larger than the left-hand side." J06-4003,J93-1003,o,"As has been pointed out by Dunning (1993), the calculation of log λ assumes a binomial distribution." J06-4003,J93-1003,o,"A period should therefore be interpreted as an abbreviation marker and not as a sentence boundary marker if the two tokens surrounding it can indeed be considered as a collocation according to Dunning's (1993) original log-likelihood ratio amended with the one-sidedness constraint introduced in Section 2.2." J06-4003,J93-1003,o,"For English, we have used sections 03-06 of the WSJ portion of the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993) distributed by the Linguistic Data Consortium (LDC), which have frequently been used to evaluate sentence boundary detection systems before; compare Section 7." J06-4003,J93-1003,o,The usefulness of likelihood ratios for collocation detection has been made explicit by Dunning (1993) and has been confirmed by an evaluation of various collocation detection methods carried out by Evert and Krenn (2001). J06-4003,J93-1003,o,2.1 Likelihood Ratios in the Type-based Stage The log-likelihood ratio by Dunning (1993) tests whether the probability of a word is dependent on the occurrence of the preceding word type. J07-2002,J93-1003,p,"In our experiments, we follow Lowe and McDonald (2000) in using the well-known log-likelihood ratio G2 (Dunning 1993)."
J07-3003,J93-1003,o,"One could use the estimated co-occurrences from a small sample to compute the test statistics, most commonly Pearson's chi-squared test, the likelihood ratio test, Fisher's exact test, cosine similarity, or resemblance (Jaccard coefficient) (Dunning 1993; Manning and Schütze 1999; Agresti 2002; Moore 2004)." J07-3003,J93-1003,o,"Word associations (co-occurrences, or joint frequencies) have a wide range of applications including: speech recognition, optical character recognition, and information retrieval (IR) (Salton 1989; Church and Hanks 1991; Dunning 1993; Baeza-Yates and Ribeiro-Neto 1999; Manning and Schütze 1999)." J94-4005,J93-1003,o,The third function is an original variant of the second; the fourth is original; and the fifth is prompted by the arguments of Dunning (1993). J94-4005,J93-1003,o,"Lexical collocation functions, especially those determined statistically, have recently attracted considerable attention in computational linguistics (Calzolari and Bindi 1990; Church and Hanks 1990; Sekine et al. 1992; Hindle and Rooth 1993) mainly, though not exclusively, for use in disambiguation." N03-1018,J93-1003,o,"translation lexicon entries were scored according to the log likelihood ratio (Dunning, 1993) (cf." N03-1032,J93-1003,o,"Dunning (1993) also used windows of size 2, which corresponds to word bigrams." N03-1032,J93-1003,o,"3.3 Syntax based approach An alternative to the Window and Document-oriented approach is to use syntactical information (Grefenstette, 1993)." N03-1032,J93-1003,o,Dunning (1993) used a likelihood ratio to test word similarity under the assumption that the words in text have a binomial distribution. N03-1032,J93-1003,o,"1 Introduction Many different statistical tests have been proposed to measure the strength of word similarity or word association in natural language texts (Dunning, 1993; Church and Hanks, 1990; Dagan et al., 1999)."
N04-1008,J93-1003,o,"The chunker is trained on the answer side of the Training corpus in order to learn 2- and 3-word collocations, defined using the likelihood ratio of Dunning (1993)." N04-1038,J93-1003,o,"BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship." N07-1014,J93-1003,o,"For comparing the sentence generator sample to the English sample, we compute log-likelihood statistics (Dunning, 1993) on neighboring words that at least co-occur twice." N07-1014,J93-1003,o,"Significant neighbor-based co-occurrence: As discussed in (Dunning 1993), it is possible to measure the amount of surprise to see two neighboring words in a corpus at a certain frequency under the assumption of independence." N07-1014,J93-1003,o,"We use the log-likelihood ratio for determining significance as in (Dunning, 1993), but other measures are possible as well." N07-3010,J93-1003,o,"Schütze, 1993) is not suited to highly skewed distributions omni-present in natural language." N07-3010,J93-1003,p,"Throughout, the likelihood ratio (Dunning, 1993) is used as significance measure because of its stable performance in various evaluations, yet many more measures are possible." N09-1022,J93-1003,o,"We then ranked the collected query pairs using log-likelihood ratio (LLR) (Dunning, 1993), which measures the dependence between q1 and q2 within the context of web queries (Jones et al., 2006b)." P01-1025,J93-1003,o,"The measures Mutual Information (MI) (Church and Hanks, 1989), the log-likelihood ratio test (Dunning, 1993), two statistical tests: t-test and χ2-test, and co-occurrence frequency are applied to two sets of data: adjective-noun (AdjN) pairs and preposition-noun-verb (PNV) triples, where the AMs are applied to (PN,V) pairs." P01-1025,J93-1003,o,"the remarks on the χ2 measure in (Dunning, 1993))."
P02-1054,J93-1003,o,"Substituting the probabilities in the PMI formula with the previously introduced Web statistics, we obtain PMI(Qsp, Asp) [formula garbled in extraction]. Maximal Likelihood Ratio (MLHR) is also used for word co-occurrence mining (Dunning, 1993)." P02-1058,J93-1003,o,"In a key step for locating important sentences, NeATS computes the likelihood ratio (Dunning, 1993) to identify key concepts in unigrams, bigrams, and trigrams, using the on-topic document collection as the relevant set and the off-topic document collection as the irrelevant set." P03-1012,J93-1003,o,"These methods often involve using a statistic such as φ2 (Gale and Church, 1991) or the log likelihood ratio (Dunning, 1993) to create a score to measure the strength of correlation between source and target words." P03-1012,J93-1003,o,"These constraints tie words in such a way that the space of alignments cannot be enumerated as in IBM models 1 and 2 (Brown et al., 1993)." P03-1017,J93-1003,o,"Each element of the resulting vector was replaced with its log-likelihood value (see Definition 10 in Section 2.3) which can be considered as an estimate of how surprising or distinctive a co-occurrence pair is (Dunning, 1993)." P03-1030,J93-1003,o,"Our approach was to identify a parallel corpus of manually and automatically transcribed documents, the TDT2 corpus, and then use a statistical approach (Dunning, 1993) to identify tokens with significantly ... [Table 5: Impact of recall and precision enhancing devices]." P03-1030,J93-1003,o,"Second, the significance of the K-S distance in case of the null hypothesis (data sets are drawn from same distribution) can be calculated (Press et al., 1993)."
P03-2021,J93-1003,o,"NeATS computes the likelihood ratio (Dunning, 1993) to identify key concepts in unigrams, bigrams, and trigrams and clusters these concepts in order to identify major subtopics within the main topic." P04-1022,J93-1003,o,"In addition to collocation translation, there is also some related work in acquiring phrase or term translations from parallel corpus (Kupiec, 1993; Yamamoto and Matsumoto 2000)." P04-1022,J93-1003,o,"We have: p(ctri|etri) = p(c1, c2, rc|etri) = p(c1|rc, etri) p(c2|rc, etri) p(rc|etri) (6). Assumption 2: For an English triple etri, assume that ci only depends on ei (i ∈ {1,2}), and rc only depends on re. Equation (6) is rewritten as: p(ctri|etri) = p(c1|rc, etri) p(c2|rc, etri) p(rc|etri) = p(c1|e1) p(c2|e2) p(rc|re) (7). Notice that p(c1|e1) and p(c2|e2) are translation probabilities within triples, they are different from the unrestricted probabilities such as the ones in IBM models (Brown et al., 1993)." P04-1066,J93-1003,o,"To test whether a better set of initial parameter estimates can improve Model 1 alignment accuracy, we use a heuristic model based on the loglikelihood-ratio (LLR) statistic recommended by Dunning (1993)." P04-3002,J93-1003,p,"In order to filter some noise caused by the error alignment links, we only retain those translation pairs whose translation probabilities are above a threshold θ1 or co-occurring frequencies are above a threshold θ2. When we train the IBM statistical word alignment model with a limited bilingual corpus in the specific domain, we build another translation dictionary with the same method as for the dictionary. But we adopt a different filtering strategy for the translation dictionary. We use log-likelihood ratio to estimate the association strength of each translation pair because Dunning (1993) proved that log-likelihood ratio performed very well on small-scale data."
P04-3019,J93-1003,o,"Smadja (1993) also detailed techniques for collocation extraction and developed a program called XTRACT, which is capable of computing flexible collocations based on elaborated statistical calculation." P04-3019,J93-1003,p,"Moreover, log likelihood ratios are regarded as a more effective method to identify collocations especially when the occurrence count is very low (Dunning, 1993)." P05-1058,J93-1003,o,"2 Statistical Word Alignment According to the IBM models (Brown et al., 1993), the statistical word alignment model can be generally represented as in Equation (1)." P05-1058,J93-1003,o,"Pr(f, a|e) = C(m − φ0, φ0) · p0^(m−2φ0) · p1^φ0 · Π(i=1..l) φi! n(φi|ei) · Π(j=1..m) t(fj|e_aj) · Π(j: aj ≠ 0) d(j|aj, l, m) (3). A cept is defined as the set of target words connected to a source word (Brown et al., 1993)." P05-1058,J93-1003,o,"In order to filter the noise caused by the error alignment links, we only retain those translation pairs whose log-likelihood ratio scores (Dunning, 1993) are above a threshold." P05-1075,J93-1003,o,"A variety of methods have been applied, ranging from simple frequency (Justeson & Katz 1995), modified frequency measures such as c-values (Frantzi, Ananiadou & Mima 2000, Maynard & Ananiadou 2000) and standard statistical significance tests such as the t-test, the chi-squared test, and loglikelihood (Church and Hanks 1990, Dunning 1993), and information-based methods, e.g. pointwise mutual information (Church & Hanks 1990)." P05-1075,J93-1003,o,Dunning 1993) or else (as with mutual information) eschew significance testing in favor of a generic information-theoretic approach.
P06-1011,J93-1003,o,"2.2 Using Log-Likelihood-Ratios to Estimate Word Translation Probabilities Our method for computing the probabilistic translation lexicon LLR-Lex is based on the Log-Likelihood-Ratio (LLR) statistic (Dunning, 1993), which has also been used by Moore (2004a; 2004b) and Melamed (2000) as a measure of word association (footnote: http://www.fjoch.com/GIZA++.html)." P06-1120,J93-1003,o,"Finally, the loglikelihood ratios test (henceforth LLR) (Dunning, 1993) is applied on each set of pairs." P06-2007,J93-1003,o,"This metric tests the hypothesis that the probability of phrase is the same whether phrase has been seen or not by calculating the likelihood of the observed data under a binomial distribution using probabilities derived using each hypothesis (Dunning, 1993)." P06-2016,J93-1003,o,"This method uses mutual information and loglikelihood, which Dunning (1993) used to calculate the dependency value between words." P06-2020,J93-1003,o,"To identify these terms, we use the log-likelihood statistic suggested by Dunning (Dunning 1993) and first used in summarization by Lin and Hovy (Hovy and Lin 2000)." P06-3002,J93-1003,o,"Partitioning 2: Medium and low frequency words As noted in (Dunning, 1993), log-likelihood statistics are able to capture word bi-gram regularities." P06-3014,J93-1003,o,"(ii) Apply some statistical tests such as the Binomial Hypothesis Test (Brent, 1993) and loglikelihood ratio score (Dunning, 1993) to SCCs to filter out false SCCs on the basis of their reliability and likelihood." P06-3014,J93-1003,o,"1 Introduction Robust statistical syntactic parsers, made possible by new statistical techniques (Collins, 1999; Charniak, 2000; Bikel, 2004) and by the availability of large, hand-annotated training corpora such as WSJ (Marcus et al., 1993) and Switchboard (Godefrey et al., 1992), have had a major impact on the field of natural language processing."
P07-1070,J93-1003,o,"Such measures as mutual information (Turney 2001), latent semantic analysis (Landauer et al., 1998), log-likelihood ratio (Dunning, 1993) have been proposed to evaluate word semantic similarity based on the co-occurrence information on a large corpus." P09-2017,J93-1003,o,"Log-likelihood ratio (G2) (Dunning, 1993) with respect to a large reference corpus, Web 1T 5-gram Corpus (Brants and Franz, 2006), is used to capture the contextually relevant nouns." P09-2062,J93-1003,o,"We compute log-likelihood significance between features and target nouns (as in (Dunning, 1993)) and keep only the most significant 200 features per target word." P95-1054,J93-1003,o,"Many researchers ((Smadja, 1991); (Srihari & Baltus, 1993)) have suggested that the informationtheoretic notion of mutual information score (MIS) directly captures the idea of context." P95-1054,J93-1003,o,"It forms a baseline for performance evaluations, but is prone to sparse data problems (Dunning, 1993)." P97-1009,J93-1003,o,"Table 1: Subjects of ""employ"" with highest likelihood ratio (word, freq, logA): *ORG 64 50.4; plant 14 31.0; company 27 28.6; operation 8 23.0; industry 9 14.6; firm 8 13.5; pirate 2 12.1; unit 9 9.32; shift 3 8.48; postal service 2 7.73; machine 3 6.56; corporation 3 6.47; manufacturer 3 6.21; insurance company 2 6.06; aerospace 2 5.81; memory device 1 5.79; department 3 5.55; foreign office 1 5.41; enterprise 2 5.39; pilot 2 5.37. *ORG includes all proper names recognized as organizations. The logA columns are their likelihood ratios (Dunning, 1993)." P97-1009,J93-1003,o,"The likelihood ratio is obtained by treating word and Ic as a bigram and computed with the formula in (Dunning, 1993)." P97-1046,J93-1003,o,"(1990, 1993), these models have non-uniform linguistically motivated structure, at present coded by hand." P97-1046,J93-1003,o,"Dunning 1993), make use of both positive and negative instances of performing a task."
P97-1046,J93-1003,o,5 Effectiveness Comparison 5.1 English-Chinese ATIS Models Both the transfer and transducer systems were trained and evaluated on English-to-Mandarin Chinese translation of transcribed utterances from the ATIS corpus (Hirschman et al. 1993). P97-1063,J93-1003,o,"(Macklovitch, 1994; Melamed, 1996b)), concordancing for bilingual lexicography (Catizone et al., 1993; Gale & Church, 1991), computer-assisted language learning, corpus linguistics (Melby." P97-1063,J93-1003,o,"The co-occurrence relation can also be based on distance in a bitext space, which is a more general representation of bitext correspondence (Dagan et al., 1993; Resnik & Melamed, 1997), or it can be restricted to word pairs that satisfy some matching predicate, which can be extrinsic to the model (Melamed, 1995; Melamed, 1997)." P97-1063,J93-1003,o,"For each co-occurring pair of word types u and v, these likelihoods are initially set proportional to their co-occurrence frequency n(u,v) and inversely proportional to their marginal frequencies n(u) and n(v), following (Dunning, 1993)." P97-1063,J93-1003,o,"1 Introduction Over the past decade, researchers at IBM have developed a series of increasingly sophisticated statistical models for machine translation (Brown et al., 1988; Brown et al., 1990; Brown et al., 1993a)." P97-1063,J93-1003,o,"Table look-up using an explicit translation lexicon is sufficient and preferable for many multilingual NLP applications, including ""crummy"" MT on the World Wide Web (Church & Hovy, 1993), certain machine-assisted translation tools (e.g." P98-1074,J93-1003,o,"1 Introduction Early works, (Gale and Church, 1993; Brown et al., 1993), and to a certain extent (Kay and Röscheisen, 1993), presented methods to extract bilingual" P98-1074,J93-1003,o,"Probabilities based on relative frequencies, or derived from the measure defined in (Dunning, 1993), for example, allow to take this fact into account."
P98-2182,J93-1003,o,"For the final ranking, we chose the log likelihood statistic outlined in Dunning (1993), which is based upon the co-occurrence counts of all nouns (see Dunning for details)." P99-1032,J93-1003,o,"In this work, model fit is reported in terms of the likelihood ratio statistic, G2, and its significance (Read and Cressie, 1988; Dunning, 1993)." P99-1041,J93-1003,o,"It is clear that Appendix B contains far fewer true non-compositional phrases than Appendix A. 7 Related Work There have been numerous previous studies on extracting collocations from corpora, e.g., (Choueka, 1988) and (Smadja, 1993)." P99-1041,J93-1003,o,"We parsed a 125-million word newspaper corpus with Minipar, a descendant of Principar (Lin, 1993; Lin, 1994), and extracted dependency relationships from the parsed corpus." P99-1041,J93-1003,o,"The frequency counts of dependency relationships are filtered with the loglikelihood ratio (Dunning, 1993)." P99-1041,J93-1003,o,"Not only are many combinations found in the corpus, many of them have very similar mutual information values to that of economic impact. [Table 2: verbs modifying ""impact"" (economic, financial, political, social, budgetary, ecological) and objects of ""economic"" (impact, effect, implication, consequence, significance, fallout, repercussion, potential, ramification, risk), with frequencies and mutual information values; garbled in extraction.] ... binomial distribution can be accurately approximated by a normal distribution (Dunning, 1993)." P99-1041,J93-1003,o,"A total of 216 collocations were extracted, shown in Appendix A. We compared the collocations in Appendix A with the entries for the above 10 words in the NTC's English Idioms Dictionary (henceforth NTC-EID) (Spears and Kirkpatrick, 1993), which contains approximately 6000 definitions of idioms."
P99-1051,J93-1003,o,Levin (1993) assumes that the syntactic realization of a verb's arguments is directly correlated with its meaning (cf. P99-1051,J93-1003,o,We also experimented with a method suggested by Brent (1993) which applies the binomial test on frame frequency data. P99-1051,J93-1003,o,"For instance, the to-PP frame is poorly represented in the syntactically annotated version of the Penn Treebank (Marcus et al., 1993)." P99-1051,J93-1003,p,"We preferred the log-likelihood ratio to other statistical scores, such as the association ratio (Church and Hanks, 1990) or χ2, since it adequately takes into account the frequency of the co-occurring words and is less sensitive to rare events and corpus size (Dunning, 1993; Daille, 1996)." P99-1067,J93-1003,n,It is faster and more mnemonic than the one in Dunning (1993). P99-1067,J93-1003,o,"However, in yet unpublished work we found that at least for the computation of synonyms and related words neither syntactical analysis nor singular value decomposition lead to significantly better results than the approach described here when applied to the monolingual case (see also Grefenstette, 1993), so we did not try to include these methods in our system." P99-1067,J93-1003,p,"They were based on mutual information (Church & Hanks, 1989), conditional probabilities (Rapp, 1996), or on some standard statistical tests, such as the chi-square test or the loglikelihood ratio (Dunning, 1993)." W00-0901,J93-1003,o,Dunning (1993) reports that we should not rely on the assumption of a normal distribution when performing statistical text analysis and suggests that parametric analysis based on the binomial or multinomial distributions is a better alternative for smaller texts. W00-1325,J93-1003,o,"The different approaches (e.g. Brent, 1991, 1993; Ushioda et al.
, 1993; Briscoe and Carroll, 1997; Manning, 1993; Carroll and Rooth, 1998; Gahl, 1998; Lapata, 1999; Sarkar and Zeman, 2000) vary largely according to the methods used and the number of SCFs being extracted." W00-1325,J93-1003,o,"According to one account (Briscoe and Carroll, 1997) the majority of errors arise because of the statistical filtering process, which is reported to be particularly unreliable for low frequency SCFs (Brent, 1993; Briscoe and Carroll, 1997; Manning, 1993; Manning and Schütze, 1999)." W00-1325,J93-1003,o,"Adopting the SCF acquisition system of Briscoe and Carroll, we have experimented with an alternative hypothesis test, the binomial log-likelihood ratio (LLR) test (Dunning, 1993)." W00-1325,J93-1003,o,"Brent (1993) estimated the error probabilities for each SCF experimentally from the behaviour of his SCF extractor, which detected simple morpho-syntactic cues in the corpus data." W00-1325,J93-1003,p,"2.2.2 The Binomial Log Likelihood Ratio as a Statistical Filter Dunning (1993) demonstrates the benefits of the LLR statistic, compared to Pearson's chisquared, on the task of ranking bigram data." W01-0513,J93-1003,o,"We then rank-order the ... [Table 1: Probabilistic Approaches — METHOD / FORMULA: Frequency (Guiliano, 1964): f_XY; Pointwise Mutual Information (MI) (Fano, 1961; Church and Hanks, 1990): log2(P_XY / (P_X P_Y)); Selectional Association (Resnik, 1996); Symmetric Conditional Probability (Ferreira and Pereira, 1999): P_XY^2 / (P_X P_Y); Dice Formula (Dice, 1945): 2 f_XY / (f_X + f_Y); Log-likelihood (Dunning, 1993; Daille, 1996)]." W01-0513,J93-1003,o,"Since we need knowledge-poor induction, we cannot use human-suggested filtering [further table rows: Chi-squared (χ2) (Church and Gale, 1991); Z-Score (Smadja, 1993; Fontenelle, et al. , 1994); Student's t-Score (Church and Hanks, 1990)] ... n-gram list in accordance to each probabilistic algorithm." W01-0513,J93-1003,o,"In particular, we use a randomly-selected corpus consisting of a 6.7 million word subset of the TREC databases (DARPA, 1993-1997)." W01-1411,J93-1003,o,"As a measure of association, we use the loglikelihood-ratio statistic recommended by Dunning (1993), which is the same statistic used by Melamed to initialize his models." W02-0906,J93-1003,o,[Table: Method / Number of frames / Number of verbs / Linguistic resources / F-Score (evaluation based on a gold standard) / Coverage on a corpus — C. Manning (1993): 19 frames, 200 verbs, POS tagger + simple finite state parser, 58; T. Briscoe & J. Carroll (1997): 161 frames, 14 verbs, full parser, 55; A. Sarkar & D. Zeman (2000): 137 frames, 914 verbs, annotated treebank, 88] D. Kawahara et al. W02-0909,J93-1003,o,"For each cell in the contingency table, the expected counts are: m_ij = (n_i+ n_+j) / n_++ . The measures are calculated as (Pedersen, 1996): χ2 = Σ_{i,j} (n_ij − m_ij)^2 / m_ij ; LL = 2 Σ_{i,j} n_ij log2(n_ij / m_ij) Log-likelihood ratios (Dunning, 1993) are more appropriate for sparse data than chi-square." W02-2001,J93-1003,o,"In the latter case, we use an unsupervised attachment disambiguation method, based on the log-likelihood ratio (""LLR"", Dunning (1993))." W02-2001,J93-1003,o,"One aspect of VPCs that makes them difficult to extract (cited in, e.g., Smadja (1993)) is that the verb and particle can be non-contiguous, e.g. hand the paper in and battle right on." W02-2001,J93-1003,o,"One of the earliest attempts at extracting ""interrupted collocations"" (i.e. non-contiguous collocations, including VPCs), was that of Smadja (1993)."
W02-2001,J93-1003,o,"2.2 Corpus occurrence In order to get a feel for the relative frequency of VPCs in the corpus targeted for extraction, namely the WSJ section of the Penn Treebank, we took a random sample of 200 VPCs from the Alvey Natural Language Tools grammar (Grover et al. , 1993) and did a manual corpus search for each. [Figure 1: Frequency distribution of VPCs in the WSJ. Table 1: POS-based extraction results — Tagger / correct / extracted / Prec / Rec / F(beta=1): Brill 135/135, 1.000, 0.177, 0.301; Penn 667/800, 0.834, 0.565, 0.673]" W02-2001,J93-1003,o,"4 Method-2: Simple Chunk-based Extraction To overcome the shortcomings of the Brill tagger in identifying particles, we next look to full chunk ... [footnote 2:] Note, this is the same as the maximum span length of 5 used by Smadja (1993), and above the maximum attested NP length of 3 from our corpus study (see Section 2.2)." W03-0201,J93-1003,o,"Likelihood ratios are particularly useful when comparing common and rare events (Dunning 1993; Plaunt and Norgard 1998), making them natural here given the rareness of most question categories and the frequency of contributions." W03-0314,J93-1003,o,"For this present work, we use Dunning's log-likelihood ratio statistics (Dunning, 1993) defined as follows: sim = a log a + b log b + c log c + d log d − (a+b) log(a+b) − (a+c) log(a+c) − (b+d) log(b+d) − (c+d) log(c+d) + (a+b+c+d) log(a+b+c+d) For each bilingual pattern EiJj, we compute its similarity score and qualify it as a bilingual sequence-to-sequence correspondence if no equally strong or stronger association for monolingual constituent is found." W03-1101,J93-1003,o,"NeATS computes the likelihood ratio (Dunning, 1993) to identify key concepts in unigrams, bigrams, and trigrams, and clusters these concepts in order to identify major subtopics within the main topic."
W03-1108,J93-1003,o,"First, word frequencies, context word frequencies in surrounding positions (here three-words window) are computed following a statistics-based metrics, the log-likelihood ratio (Dunning, 1993)." W03-1702,J93-1003,o,"To make sense tagging more precise, it is advisable to place constraint on the translation counterpart c of w. SWAT considers only those translations c that has been linked with w based on the Competitive Linking Algorithm (Melamed 1997) and logarithmic likelihood ratio (Dunning 1993)." W03-1702,J93-1003,o,"There is potential of developing Sense Definition Model to identify and represent semantic and stylistic differentiation reflected in the MRD glosses pointed out in DiMarco, Hirst and Stede (1993)." W03-1717,J93-1003,o,"The approach is in the spirit of Smadja (1993) on retrieving collocations from text corpora, but is more integrated with parsing." W03-1717,J93-1003,o,"To prune away those pairs, we used the log-likelihood-ratio algorithm (Dunning, 1993) to compute the degree of association between the verb and the noun in each pair." W03-1802,J93-1003,p,"a list of pilot terms ranked from the most representative of the corpus to the least thanks to the Loglikelihood coefficient introduced by (Dunning, 1993)."
W03-1805,J93-1003,o,"1 minority report 2 box office 3 scooby doo 4 sixth sense 5 national guard 6 bourne identity 7 air national guard 8 united states 9 phantom menace 10 special effects 11 hotel room 12 comic book 13 blair witch project 14 short story 15 real life 16 jude law 17 iron giant 18 bin laden 19 black people 20 opening weekend 21 bad guy 22 country bears 23 mans man 24 long time 25 spoiler space 26 empire strikes back 27 top ten 28 politically correct 29 white people 30 tv show 31 bad guys 32 freddie prinze jr 33 monsters ball 34 good thing 35 evil minions 36 big screen 37 political correctness 38 martial arts 39 supreme court 40 beautiful mind Figure 7: Result of re-ranking output from the phrase extension module 6.4 Revisiting unigram informativeness An alternative approach to calculate informativeness from the foreground LM and the background LM is just to take the ratio of likelihood scores, P_fg(·) / P_bg(·). This is a smoothed version of relative frequency ratio which is commonly used to find subject-specific terms (Damerau, 1993)." W03-1805,J93-1003,o,"3 Related work Word collocation Various collocation metrics have been proposed, including mean and variance (Smadja, 1994), the t-test (Church et al. , 1991), the chi-square test, pointwise mutual information (MI) (Church and Hanks, 1990), and binomial loglikelihood ratio test (BLRT) (Dunning, 1993)." W03-1805,J93-1003,o,"For our baseline, we have selected the method based on binomial loglikelihood ratio test (BLRT) described in (Dunning, 1993)." W03-1806,J93-1003,o,"For that purpose, syntactical (Didier Bourigault, 1993), statistical (Frank Smadja, 1993; Ted Dunning, 1993; Gaël Dias, 2002) and hybrid syntaxico-statistical methodologies (Béatrice Daille, 1996; Jean-Philippe Goldman et al. 2001) have been proposed."
W03-1806,J93-1003,o,"[Table 7: Number of extracted MWUs by frequency (Freq=2, 3, 4, 5, >2) for alpha from 0 to 1.0; TOTAL falls from 23670 at alpha=0 to 11805 at alpha=1.0] 6.2 Qualitative Analysis As many authors assess (Frank Smadja, 1993; John Justeson and Slava Katz, 1995), deciding whether a sequence of words is a multiword unit or not is a tricky problem." W03-1806,J93-1003,o,"On the other hand, purely statistical systems (Frank Smadja, 1993; Ted Dunning, 1993; Gaël Dias, 2002) extract discriminating MWUs from text corpora by means of association measure regularities." W04-1122,J93-1003,o,"Presently, many systems (Tan et al, 1999), (Liu, 2000), (Song, 1993), (Luo et al, 2001) focus on online recognition of proper nouns, and have achieved inspiring results in newscorpus but will be deteriorated in special text, such as spoken corpus, novels." W04-1122,J93-1003,o,"Many statistical metrics have been proposed, including pointwise mutual information (MI) (Church et al, 1990), mean and variance, hypothesis testing (t-test, chisquare test, etc.), log-likelihood ratio (LR) (Dunning, 1993), statistic language model (Tomokiyo, et al, 2003), and so on." W04-1122,J93-1003,o,"Relative frequency ratio (RFR) of terms between two different corpora can also be used to discover domain-oriented multi-word terms that are characteristic of a corpus when compared with another (Damerau, 1993)."
W04-1122,J93-1003,o,"3 Candidates extraction on Suffix array Suffix array (also known as String PATarray)(Manber et al, 1993) is a compact data structure to handle arbitrary-length strings and performs much powerful on-line string search operations such as the ones supported by PAT-tree, but has less space overhead." W04-1122,J93-1003,o,"[Table 2: Examples of candidates eliminated by GPWS — candidate terms and the GPWS segmentation result for one sentence in which each term appears; the non-Latin text was lost in extraction] 5 Relative frequency ratio against background corpus Relative frequency ratio (RFR) is a useful method to be used to discover characteristic linguistic phenomena of a corpus when compared with another (Damerau, 1993)." W04-1806,J93-1003,o,"Information extraction approaches that infer labeled relations either require substantial handcreated linguistic or domain knowledge, e.g., (Craven and Kumlien 1999) (Hull and Gomez 1993), or require human-annotated training data with relation information for each domain (Craven et al. 1998)." W04-1806,J93-1003,o,"We use the log likelihood ratio (LLR) (Dunning 1993) given by -2 log2(H_o(p; k_1, n_1, k_2, n_2) / H_a(p_1, p_2; n_1, k_1, n_2, k_2)) LLR measures the extent to which a hypothesized model of the distribution of cell counts, H_a, differs from the null hypothesis, H_o (namely, that the percentage of documents containing this term is the same in both corpora)." W04-2105,J93-1003,o,"The statistical significance often evaluate whether two words are independent using hypothesis tests such as t-score (Church et al. , 1991), the χ2, the log-likelihood (Dunning, 1993) and Fisher's exact test (Pedersen, 1996)."
W04-2105,J93-1003,o,"The various extraction measures have been discussed in great detail in the literature (Manning and Schütze, 1999; McKeown and Radev, 2000), their performance has been compared (Dunning, 1993; Pedersen, 1996; Evert and Krenn, 2001), and the methods have been combined to improve overall performance (Inkpen and Hirst, 2002)." W05-0801,J93-1003,o,3 The Log-Likelihood-Ratio Association Measure We base all our association-based word-alignment methods on the log-likelihood-ratio (LLR) statistic introduced to the NLP community by Dunning (1993). W05-0801,J93-1003,o,"(1993), sometimes augmented by an HMM-based model or Och and Ney's Model 6 (Och and Ney, 2003)." W05-0810,J93-1003,o,"A second pass aligns the sentences in a way similar to the algorithm described by Gale and Church (1993), but where the search space is constrained to be close to the one delimited by the word alignment." W05-0810,J93-1003,o,"First, we considered single sentences as documents, and tokens as sentences (we define a token as a sequence of characters delimited by ...) [footnote 1: In our case, the score we seek to globally maximize by dynamic programming is not only taking into account the length criteria described in (Gale and Church, 1993) but also a cognate-based one similar to (Simard et al. , 1992)." W05-0810,J93-1003,o,"In our case, we computed a likelihood ratio score (Dunning, 1993) for all pairs of English tokens and Inuktitut substrings of length ranging from 3 to 10 characters." W05-0810,J93-1003,o,"When efficient techniques have been proposed (Brown et al. , 1993; Och and Ney, 2003), they have been mostly evaluated on safe pairs of languages where the notion of word is rather clear." W05-1005,J93-1003,o,"Since in these LVCs the complement is a predicative noun in stem form identical to a verb, we form development and test expressions by combining give or take with verbs from selected semantic classes of Levin (1993), taken from Stevenson et al.
W05-1005,J93-1003,o,"PMI is subject to overestimation for low frequency items (Dunning, 1993), thus we require a minimum frequency of occurrence for the expressions under study." W06-1006,J93-1003,o,"The sets obtained are then ranked using the log-likelihood ratios test (Dunning, 1993)." W06-1006,J93-1003,o,"3.2 German German is the second most investigated language, thanks to the early work of Breidt (1993) and, more recently, to that of Krenn and Evert, such as (Krenn and Evert, 2001; Evert and Krenn, 2001; Evert, 2004) centered on evaluation." W06-1006,J93-1003,o,"Breidt (1993) also pointed out a couple of problems that makes extraction for German more difficult than for English: the strong inflection for verbs, the variable word-order, and the positional ambiguity of the arguments. She shows that even distinguishing subjects from objects is very difficult without parsing." W06-1101,J93-1003,o,"Such studies follow the empiricist approach to word meaning summarized best in the famous dictum of the British linguist J.R. Firth: You shall know a word by the company it keeps. (Firth, 1957, p. 11) Context similarity has been used as a means of extracting collocations from corpora, e.g. by Church & Hanks (1990) and by Dunning (1993), of identifying word senses, e.g. by Yarowsky (1995) and by Schütze (1998), of clustering verb classes, e.g. by Schulte im Walde (2003), and of inducing selectional restrictions of verbs, e.g. by Resnik (1993), by Abe & Li (1996), by Rooth et al." W06-1653,J93-1003,o,"Typicality was measured using the log-likelihood ratio test (Dunning, 1993)." W06-2403,J93-1003,o,"2 Related Work The issue of MWE processing has attracted much attention from the Natural Language Processing (NLP) community, including Smadja, 1993; Dagan and Church, 1994; Daille, 1995; 1995; McEnery et al. , 1997; Wu, 1997; Michiels and Dufour, 1998; Maynard and Ananiadou, 2000; Merkel and Andersson, 2000; Piao and McEnery, 2001; Sag et al. , 2001; Tanaka and Baldwin, 2003; Dias, 2003; Baldwin et al.
, 2003; Nivre and Nilsson, 2004; Pereira et al. W06-2405,J93-1003,o,"For each candidate triple, the log-likelihood (Dunning, 1993) and salience (Kilgarriff and Tugwell, 2001) scores were calculated." W06-2405,J93-1003,o,"The other 5 have been suggested for Dutch by (Hollebrandse, 1993)." W06-3307,J93-1003,o,"It is known that PMI gives undue importance to low frequency events (Dunning, 1993), therefore the evaluation considers only pairs of genes that occur at least 5 times in the whole corpus." W06-3804,J93-1003,o,"We use the likelihood ratio for a binomial distribution (Dunning 1993), which tests the hypothesis whether the term occurs independently in texts of biographical nature given a large corpus of biographical and non-biographical texts." W06-3812,J93-1003,o,"In this case, we use the log-likelihood measure as described in (Dunning 1993)." W06-3812,J93-1003,o,The outcomes of CW resemble those of MinCut (Wu & Leahy 1993): Dense regions in the graph are grouped into one cluster while sparsely connected regions are separated. W07-1002,J93-1003,o,"It was later applied by (Dunning, 1993) as a way to determine if a sequence of N words (Ngram) came from an independently distributed sample." W07-1108,J93-1003,o,"To model aspects of co-occurrence association that might be obscured by raw frequency, the log-likelihood ratio G2 (Dunning, 1993) was also used to transform the feature space." W07-1708,J93-1003,p,"For the current work, the Log-likelihood coefficient has been employed (Dunning, 1993), as it is reported to perform well among other scoring methods (Daille, 1995)." W07-1708,J93-1003,o,"In this study we have concentrated on the NPs' term extraction, which comprises the focus of interest in several studies (Jacquemin, 2001; Justeson & Katz, 1995; Voutanen, 1993)."
W08-0409,J93-1003,o,"Generative word alignment models, initially developed at IBM (Brown et al., 1993), and then augmented by an HMM-based model (Vogel et al., 1996), have provided powerful modeling capability for word alignment." W08-0409,J93-1003,p,"Pr(c_1^J, a_1^J | e_1^I) = p(J|I) / (I + 1)^J ∏_{j=1}^{J} p(c_j | e_{a_j}) (8) 3.1.2 Log-likelihood ratio The log-likelihood ratio statistic has been found to be accurate for modeling the associations between rare events (Dunning, 1993)." W08-1914,J93-1003,p,"Many previous studies have shown that the log-likelihood ratio is well suited for this purpose (Dunning, 1993)." W08-1914,J93-1003,p,"It can be expected that the log-likelihood ratio produces an accurate ranking of word pairs that highly correlates with human judgment (Dunning, 1993), although there are other measures which come close in performance (e.g. Rapp, 1998)." W09-0202,J93-1003,o,"corpus (Dunning, 1993; Scott, 1997; Rayson et al., 2004)." W09-0202,J93-1003,o,"2.1 Keywords As our starting point, we calculated the keywords of the Belgian corpus with respect to the Netherlandic corpus, both on the basis of a chi-square test (with Yates continuity correction) (Scott, 1997) and the log-likelihood ratio (Dunning, 1993)." W09-0202,J93-1003,o,"The most obvious comparison takes on the form of a keyword analysis, which looks for the words that are significantly more frequent in the one corpus as compared to the other (Dunning, 1993; Scott, 1997; Rayson et al., 2004)." W09-0203,J93-1003,o,"We worked with an implementation of the log likelihood ratio (g-Score) as proposed by Dunning (1993) and two variants of the t-score, one considering all values (t-score) and one where only positive values (t-score+) are kept following the results of Curran and Moens (2002)." W09-0426,J93-1003,o,"Rapp (1999), Dunning (1993)) but using cosine rather than cityblock distance to measure profile similarity."
W09-1705,J93-1003,o,"Given a contextual word cw that occurs in the paragraphs of bc, a log-likelihood ratio (G2) test is employed (Dunning, 1993), which checks if the distribution of cw in bc is similar to the distribution of cw in rc; p(cw|bc) = p(cw|rc) (null hypothesis)." W94-0103,J93-1003,o,"The algorithm is based on the Machine Learning method for word categorisation, inspired by the well known study on basic-level categories [Rosch, 1978], presented in [Basili et al, 1993a]." W94-0103,J93-1003,o,"The algorithm to acquire the lexicon, implemented in the ARIOSTQLEX system, has been extensively described in [Basili et al, 1993c]." W94-0103,J93-1003,o,Pustejovsky confronted with the problem of automatic acquisition more extensively in [Pustejovsky et al. 1993]. W94-0103,J93-1003,o,"The interested reader is referred to [Basili et al, 1993 b and c], for a summary of ARIOSTO, an integrated tool for extensive acquisition of lexical knowledge from corpora that we used to demonstrate and validate our approach." W94-0103,J93-1003,o,"The statistical methods are based on distributional analysis (we defined a measure called mutual conditioned plausibility, a derivation of the well known mutual information), and cluster analysis (a COBWEB-like algorithm for word classification is presented in [Basili et al, 1993a])." W97-0116,J93-1003,o,"4 Related Work The automatic extraction of English subcategorization frames has been considered in (Brent, 1991; Brent, 1993), where a procedure is presented that takes untamed text as input and generates a list of verbal subcategorization frames." W97-0116,J93-1003,o,"This statistic is given by -2 log λ = 2(log L(p_1, k_1, n_1) + log L(p_2, k_2, n_2) - log L(p, k_1, n_1) - log L(p, k_2, n_2)), where log L(p, k, n) = k log p + (n - k) log(1 - p), and p_1 = k_1/n_1, p_2 = k_2/n_2, p = (k_1+k_2)/(n_1+n_2); (For a detailed description of the statistic used, see (Dunning, 1993))."
W97-0118,J93-1003,o,"The Log-likelihood Ratio, G2, is a mathematically well-grounded and accurate method for calculating how ""surprising"" an event is (Dunning, 1993)." W97-0122,J93-1003,p,"(Dunning, 1993) and (Pedersen, 1996) shows how some of the methods which have been used in the past (particularly mutual information scores) are invalid for rare events, and introduce accurate measures of how 'surprising' rare events are." W97-0203,J93-1003,o,"Its roots are the same as computational linguistics (CL), but it has been largely ignored in CL until recently (Dunning, 1993; Carletta, 1996; Kilgarriff, 1996)." W97-0203,J93-1003,o,"This set of words (rooted primarily in the verbs of the set) corresponds to the (Levin, 1993) Characterize (class 29.2), Declare (29.4), Admire (31.2), and Judgment verbs (33) and hence may have particular syntactic and semantic patterning." W98-1119,J93-1003,o,3.1 The gender/animaticity statistics After we have identified the correct antecedents it is a simple counting procedure to compute P(p|wa) where wa is in the correct antecedent for the pronoun p (Note the pronouns are grouped by their gender): P(p|wa) = |wa in the antecedent for p| / |wa| When there are multiple relevant words in the antecedent we apply the likelihood test designed by Dunning (1993) on all the words in the candidate NP. W99-0631,J93-1003,o,"For C ⊆ C, p(C|v,r) is just the probability of the disjunction of the concepts in C; that is, p(C|v,r) = Σ_{c∈C} p(c|v,r) In order to see how p(c|v,r) relates to the input data, note that given a concept c, verb v and argument position r, a noun can be generated according to the distribution p(n|c, v, r), where Σ_{n∈syn(c)} p(n|c, v, r) = 1 Now we have a model for the input data: p(n, v, r) = p(v,r) p(n|v,r) = p(v,r) Σ_{c∈cn(n)} p(c|v, r) p(n|c, v, r) Note that for c ∉ cn(n), p(n|c, v, r) = 0.
The association norm (and similar measures such as the mutual information score) have been criticised (Dunning, 1993) because these scores can be greatly over-estimated when frequency counts are low." W99-0631,J93-1003,o,"Although this approach can give inaccurate estimates, the counts given to the incorrect senses will disperse randomly throughout the hierarchy as noise, and by accumulating counts up the hierarchy we will tend to gather counts from the correct senses of related words (Yarowsky, 1992; Resnik, 1993)." W99-0631,J93-1003,o,(This example is adapted from Resnik (1993)). W99-0631,J93-1003,o,"We use the log-likelihood χ2 statistic, rather than the Pearson's χ2 statistic, as this is thought to be more appropriate when the counts in the contingency table are low (Dunning, 1993)." A00-1042,J93-1007,o,"Smadja, Frank (1993) ""Retrieving collocations from text"", Computational Linguistics 19(1):143-177." A94-1006,J93-1007,o,"In particular, mutual information (Church and Hanks, 1990; Wu and Su, 1993) and other statistical methods such as (Smadja, 1993) and frequency-based methods such as (Justeson and Katz, 1993) exclude infrequent phrases because they tend to introduce too much noise." A94-1006,J93-1007,o,"have been used in statistical machine translation (Brown et al. , 1990), terminology research and translation aids (Isabelle, 1992; Ogden and Gonzales, 1993; van der Eijk, 1993), bilingual lexicography (Klavans and Tzoukermann, 1990; Smadja, 1992), word-sense disambiguation (Brown et al. , 1991b; Gale et al. , 1992) and information retrieval in a multilingual environment (Landauer and Littman, 1990)." A94-1006,J93-1007,o,"Some methods use sentence alignment and additional statistics to find candidate translations of terms (Smadja, 1992; van der Eijk, 1993)."
A94-1006,J93-1007,o,3.4 Related work and issues for future research Smadja (1992) and van der Eijk (1993) describe term translation methods that use bilingual texts that were aligned at the sentence level. A97-1026,J93-1007,o,"Manual processes, such as lexicon development could be automated in the future using standard context-based, word distribution methods (Smadja, 1993), or other corpus-based techniques." A97-1026,J93-1007,o,"Smadja, Frank. (1993)." A97-1045,J93-1007,p,"Tools like Xtract (Smadja 1993) were based on the work of Church and others, but made a step forward by incorporating various statistical measurements like z-score and variance of distribution, as well as shallow linguistic techniques like part-of-speech tagging and lemmatization of input data and partial parsing of raw output." A97-1050,J93-1007,o,"(Daille, 1996; Smadja, 1993)), less prior work exists for bilingual acquisition of domain-specific translations." A97-1054,J93-1007,o,"For instance, one might be interested in frequencies of co-occurrences of a word with other words and phrases (collocations) (Smadja, 1993), or one might be interested in inducing wordclasses from the text by collecting frequencies of the left and right context words for a word in focus (Finch & Chater, 1993)." C00-2113,J93-1007,o,"For comparison, we refer here to Smadja's method (1993) because this method and the proposed method have much in common." C00-2121,J93-1007,o,"Smadja, 1993): 1." C02-1007,J93-1007,o,"Algorithms for the computation of first-order associations have been used in lexicography for the extraction of collocations (Smadja, 1993) and in cognitive psychology for the simulation of associative learning (Wettler & Rapp, 1993)." C02-2003,J93-1007,o,"Sometimes, the notion of collocation is defined in terms of syntax (by possible part-of-speech patterns) or in terms of semantics (requiring collocations to exhibit non-compositional meaning) (Smadja, 1993)."
C04-1141,J93-1007,p,"Smadja (1993), which is the classic work on collocation extraction, uses a two-stage filtering model in which, in the first step, n-gram statistics determine possible collocations and, in the second step, these candidates are submitted to a syntactic validation. [Footnote 7:] Of course, lexical material is always at least partially dependent on the domain in question." C08-1030,J93-1007,o,"Future work will include: (i) applying the method to retrieve other types of collocations (Smadja, 1993), and (ii) evaluating the method using Internet directories." C94-1074,J93-1007,o,One example of the latter problem is the following: in (Smadja 1993) the nature of a syntactic link between two associated words is detected a posteriori. C94-1074,J93-1007,o,"of ACL 1990 (Smadja, 1993), F. Smadja, Retrieving collocations from text: XTRACT, (1993)." C94-1091,J93-1007,o,"We propose a corpus-based method (Biber,1993; Nagao,1993; Smadja,1993) which generates Noun Classifier Associations (NCA) to overcome the problems in classifier assignment and semantic construction of noun phrase." C94-1096,J93-1007,o,"Unlike Smadja (1993), the keyword may be part of a Chinese word." C94-1096,J93-1007,o,"The user can select characters by their frequencies (i.e. -f and -g options), the top or bottom N% (i.e. -m and -n options), their ranks (i.e. -r and -s options) and by their frequencies above two standard deviations plus the mean (Smadja, 1993) (i.e. -z option)." C94-1096,J93-1007,o,"Further enhancement of these utilities include compiling collocation statistics (Smadja, 1993) and semi-automatic glossary construction (Tong, 1993)." C94-2202,J93-1007,o,"For instance, there is a substantial body of papers on the extraction of ""frequently co-occurring words"" from corpora using statistical methods (e.g. , (Choueka et al. , 1983), (Church and Hanks, 1989), (Smadja, 1993) to list only a few)."
C96-1009,J93-1007,o,"Regarding this type of collocation, the approaches till now could be divided into two groups: those that do not refer to substrings of collocations as a particular problem, (Church and Hanks, 1990; Kim and Cho, 1993; Nagao and Mori, 1994), and those that do (Kita et al. , 1994; Smadja, 1993; Ikehara et al. , 1995; Kjellmer, 1994)." C96-1009,J93-1007,o,"From the extracted n-grams, those with a frequency of 3 or more were kept (other approaches get rid of n-grams of such low frequencies (Smadja, 1993))." C96-1009,J93-1007,o,"(Smadja, 1993), extracts uninterrupted as well as interrupted collocations (predicative relations, rigid noun phrases and phrasal templates)." C96-1009,J93-1007,o,"The common points regarding collocations appear to be, as (Smadja, 1993) suggests: they are arbitrary (it is not clear why to ""fall through"" means to ""fail""), they are domain-dependent (""interest rate"", ""stock market""), they are recurrent and cohesive lexical clusters: the presence of one of the." C96-1009,J93-1007,o,"(Smadja, 1993; Kita et al. , 1994; Ikehara et al. , 1995), mention about substrings of collocations." C96-1039,J93-1007,o,"Some papers (Fung & Wu, 1994; Wang et al. , 1994) based on Smadja's paradigm (1993) learned an aided dictionary from a corpus to reduce the possibility of unknown words." C96-1083,J93-1007,p,"In the past five years, important research on the automatic acquisition of word classes based on lexical distribution has been published (Church and Hanks, 1990; Hindle, 1990; Smadja, 1993; Grefenstette, 1994; Grishman and Sterling, 1994)." C96-1089,J93-1007,o,"They first extract English collocations using the Xtract system (Smadja, 1993), and then look for French counterparts."
C96-1097,J93-1007,o,"There are many method proposed to extract rigid expressions from corpora such as a method of focusing on the binding strength of two words (Church and Hanks 1990); the distance between words (Smadja and Makeown 1990); and the number of combined words and frequency of appearance (Kita 1993, 1994)." C96-1097,J93-1007,o,"Thus, conventional methods had to introduce some kinds of restrictions such as the limitation of the kind of chains or the length of chains to be extracted (Smadja 1993, Shinnou and Isahara 1995)." C96-2100,J93-1007,o,Smadja (1993)finds significant bigrams using an estimate of z-score (deviation from an expected mean). C96-2100,J93-1007,o,"(Smadja, 1993:p.168) Kita & al." E06-1026,J93-1007,o,"Baron and Hirst (2004) extracted collocations with Xtract (Smadja, 1993) and classified the collocations using the orientations of the words in the neighboring sentences." E06-1043,J93-1007,o,"Most previous work on compositionality of MWEs either treat them as collocations (Smadja, 1993), or examine the distributional similarity between the expression and its constituents (McCarthy et al. , 2003; Baldwin et al. , 2003; Bannard et al. , 2003)." E95-1003,J93-1007,o,"These measures have, in fact, been used previously in measuring term recognition (Smadja, 1993; Bourigault, 1994; Lauriston, 1994)." E95-1003,J93-1007,p,"One of the best efforts to quantify the performance of a term-recognition system (Smadja, 1993) does so only for one processing stage, leaving unassessed the text-to-output performance of the system." I08-1014,J93-1007,o,"Since Odds = P/(1 P), we multiply both sides of Definition 3 by (1P(U|E))1 to obtain, P(U|E) 1P(U|E) = P(E|U)P(U) P(E)(1P(U|E)) (7) By substituting Equation 6 in Equation 7 and later, applying the multiplication rule P(U|E)P(E) = P(E|U)P(U) to it, we will obtain: P(U|E) P(U|E) = P(E|U)P(U) P(E|U)P(U) (8) We proceed to take the log of the odds in Equation 8 (i.e. 
logit) to get: log(P(E|U)/P(E|¬U)) = log(P(U|E)/P(¬U|E)) - log(P(U)/P(¬U)) (9) While it is obvious that certain words tend to cooccur more frequently than others (i.e. idioms and collocations), such phenomena are largely arbitrary (Smadja, 1993)." J00-3001,J93-1007,o,"In Smadja's collocation algorithm Xtract, the lowest-frequency words are effectively discarded as well (Smadja 1993)." J94-4003,J93-1007,o,The use of such relations (mainly relations between verbs or nouns and their arguments and modifiers) for various purposes has received growing attention in recent research (Church and Hanks 1990; Zernik and Jacobs 1990; Hindle 1990; Smadja 1993). J94-4003,J93-1007,o,"Statistics on co-occurrence of words in a local context were used recently for monolingual word sense disambiguation (Gale, Church, and Yarowsky 1992b, 1993; Schütze 1992, 1993) (see Section 7 for more details and Church and Hanks 1990; Smadja 1993, for other applications of these statistics)." J98-2002,J93-1007,p,"For the extraction problem, there have been various methods proposed to date, which are quite adequate (Hindle and Rooth 1991; Grishman and Sterling 1992; Manning 1992; Utsuro, Matsumoto, and Nagao 1992; Brent 1993; Smadja 1993; Grefenstette 1994; Briscoe and Carroll 1997)." J98-2002,J93-1007,o,"As we remarked earlier, however, the input data required by our method (triples) could be generated automatically from unparsed corpora making use of existing heuristic rules (Brent 1993; Smadja 1993), although for the experiments we report here we used a parsed corpus." N07-1037,J93-1007,o,"Baron and Hirst (2004) extracted collocations with Xtract (Smadja, 1993) and classified the collocations using the orientations of the words in the neighboring sentences." P04-1022,J93-1007,o,"Some studies have been done for acquiring collocation translations using parallel corpora (Smadja et al, 1996; Kupiec, 1993; Echizen-ya et al., 2003)."
P04-1022,J93-1007,o,"The former extracts collocations within a fixed window (Church and Hanks 1990; Smadja, 1993)." P04-1022,J93-1007,o,"These range from two-word to multi-word, with or without syntactic structure (Smadja 1993; Lin, 1998; Pearce, 2001; Seretan et al. 2003)." P04-3019,J93-1007,o,"Smadja (1993) also detailed techniques for collocation extraction and developed a program called XTRACT, which is capable of computing flexible collocations based on elaborated statistical calculation." P05-1075,J93-1007,o,"3 Schone & Jurafsky's results indicate similar results for log-likelihood & T-score, and strong parallelism among information-theoretic measures such as ChiSquared, Selectional Association (Resnik 1996), Symmetric Conditional Probability (Ferreira and Pereira Lopes, 1999) and the Z-Score (Smadja 1993)." P05-1075,J93-1007,o,"It is true that various term extraction systems have been developed, such as Xtract (Smadja 1993), Termight (Dagan & Church 1994), and TERMS (Justeson & Katz 1995) among others (cf." P06-1120,J93-1007,o,"Parsing has been also used after extraction (Smadja, 1993) for filtering out invalid results." P06-1120,J93-1007,o,"This fact is being seriously challenged by current research (), and might not be true in the near future (Smadja, 1993, 151)." P95-1007,J93-1007,o,"Similarly, Smadja (1993) uses a six content word window to extract significant collocations." P95-1027,J93-1007,o,"Finally, knowledge of polarity can be combined with corpus-based collocation extraction methods (Smadja, 1993) to automatically produce entries for the lexical functions used in Meaning-Text Theory (Mel'čuk and Pertsov, 1987) for text generation." P97-1061,J93-1007,o,"4 Related work Algorithms for retrieving collocations has been described (Smadja, 1993) (Haruno et al., 1996)." P97-1061,J93-1007,o,"(Smadja, 1993) proposed a method to retrieve collocations by combining bigrams whose cooccurrences are greater than a given threshold 3."
P97-1061,J93-1007,o,"There has been a growing interest in corpus-based approaches which retrieve collocations from large corpora (Nagao and Mori, 1994), (Ikehara et al., 1996) (Kupiec, 1993), (Fung, 1995), (Kitamura and Matsumoto, 1996), (Smadja, 1993), (Smadja et al., 1996), (Haruno et al., 1996)." P98-1092,J93-1007,o,However morphosyntactic features alone cannot verify the terminological status of the units extracted since they can also select non terms (see Smadja 1993). P98-2125,J93-1007,o,"We then replaced ft with its associated z-score kf,t. kf,t is the strength of code frequency f at Lt, and represents the standard deviation above the average of frequency fave,t. Referring to Smadja's definition (Smadja, 1993), the standard deviation σt at Lt and strength kf,t of the code frequencies are defined as shown in formulas 1 and 2." P98-2127,J93-1007,o,"In (Smadja, 1993), automatically extracted collocations are judged by a lexicographer." P98-2176,J93-1007,o,"Some examples of language reuse include collocation analysis (Smadja, 1993), the use of entire factual sentences extracted from corpora (e.g., ""'Toy Story' is the Academy Award winning animated film developed by Pixar""), and summarization using sentence extraction (Paice, 1990; Kupiec et al., 1995)." P98-2216,J93-1007,o,"Other classes, such as the ones below can be extracted using lexico-statistical tools, such as in (Smadja, 1993), and then checked by a human." P98-2216,J93-1007,o,"It seems nevertheless that all 2Church and Hanks (1989), Smadja (1993) use statistics in their algorithms to extract collocations from texts." P99-1029,J93-1007,o,"For the correct identification of phrases in a Korean query, it would help to identify the lexical relations and produce statistical information on pairs of words in a text corpus as in Smadja (1993)." P99-1041,J93-1007,o,"It is clear that Appendix B contains far fewer true non-compositional phrases than Appendix A.
7 Related Work There have been numerous previous research on extracting collocations from corpus, e.g., (Choueka, 1988) and (Smadja, 1993)." P99-1043,J93-1007,o,"Co-occurrence information between neighboring words and words in the same sentence has been used in phrase extraction (Smadja, 1993; Fung and Wu, 1994), phrasal translation (Smadja et al., 1996; Kupiec, 1993; Wu, 1995; Dagan and Church, 1994), target word selection (Liu and Li, 1997; Tanaka and Iwasaki, 1996), domain word translation (Fung and Lo, 1998; Fung, 1998), sense disambiguation (Brown et al., 1991; Dagan et al., 1991; Dagan and Itai, 1994; Gale et al., 1992a; Gale et al., 1992b; Gale et al., 1992c; Schütze, 1992; Gale et al., 1993; Yarowsky, 1995), and even recently for query translation in cross-language IR as well (Ballesteros and Croft, 1998)." P99-1043,J93-1007,o,"Co-occurrence statistics is collected from either bilingual parallel and non-parallel corpora (Smadja et al., 1996; Kupiec, 1993; Wu, 1995; Tanaka and Iwasaki, 1996; Fung and Lo, 1998), or monolingual corpora (Smadja, 1993; Fung and Wu, 1994; Liu and Li, 1997; Schütze, 1992; Yarowsky, 1995)." W00-1203,J93-1007,o,"The recurrence property had been utilized to extract keywords or key-phrases from text (Chien 1999, Fung 1998, Smadja 1993)." W01-0513,J93-1007,o,"Since we need knowledge-poor (Daille, 1996) induction, we cannot use human-suggested filtering: Chi-squared (χ2) (Church and Gale, 1991), Z-Score (Smadja, 1993; Fontenelle, et al., 1994), Student's t-Score (Church and Hanks, 1990) n-gram list in accordance to each probabilistic algorithm." W02-0909,J93-1007,o,"The second method considers the means and variance of the distance between two words, and can compute flexible collocations (Smadja, 1993)." W02-1606,J93-1007,o,"To perform code generalization, Li adopted Smadja's work (Smadja, 1993) and defined the code strength using a code frequency and a standard deviation in each level of the concept hierarchy."
W02-2001,J93-1007,o,"One aspect of VPCs that makes them difficult to extract (cited in, e.g., Smadja (1993)) is that the verb and particle can be non-contiguous, e.g. hand the paper in and battle right on." W02-2001,J93-1007,o,"One of the earliest attempts at extracting ""interrupted collocations"" (i.e. non-contiguous collocations, including VPCs), was that of Smadja (1993)." W02-2001,J93-1007,o,"4 Method-2: Simple Chunk-based Extraction To overcome the shortcomings of the Brill tagger in identifying particles, we next look to full chunk 2Note, this is the same as the maximum span length of 5 used by Smadja (1993), and above the maximum attested NP length of 3 from our corpus study (see Section 2.2)." W03-1705,J93-1007,o,"There have been many statistical measures which estimate co-occurrence and the degree of association in previous researches, such as mutual information (Church 1990, Sproat 1990), t-score (Church 1991), dice matrix (Smadja 1993, 1996)." W03-1717,J93-1007,o,"The approach is in the spirit of Smadja (1993) on retrieving collocations from text corpora, but is more integrated with parsing." W03-1805,J93-1007,o,"3 Related work Word collocation Various collocation metrics have been proposed, including mean and variance (Smadja, 1994), the t-test (Church et al., 1991), the chi-square test, pointwise mutual information (MI) (Church and Hanks, 1990), and binomial loglikelihood ratio test (BLRT) (Dunning, 1993)." W03-1806,J93-1007,o,"For that purpose, syntactical (Didier Bourigault, 1993), statistical (Frank Smadja, 1993; Ted Dunning, 1993; Gaël Dias, 2002) and hybrid syntaxico-statistical methodologies (Béatrice Daille, 1996; Jean-Philippe Goldman et al. 2001) have been proposed."
W03-1806,J93-1007,o,"alpha 0 0.1 0.2 0.3 0.4 0.5 Freq=2 13555 13093 12235 11061 10803 10458 Freq=3 4203 3953 3616 3118 2753 2384 Freq=4 1952 1839 1649 1350 1166 960 Freq=5 1091 1019 917 743 608 511 Freq>2 2869 2699 2488 2070 1666 1307 TOTAL 23670 22603 20905 18342 16996 15620 alpha 0.6 0.7 0.8 0.9 1.0 Freq=2 10011 9631 9596 9554 9031 Freq=3 2088 1858 1730 1685 1678 Freq=4 766 617 524 485 468 Freq=5 392 276 232 202 189 Freq>2 1000 796 627 517 439 TOTAL 14257 13178 12709 12443 11805 Table 7: Number of extracted MWUs by frequency 6.2 Qualitative Analysis As many authors assess (Frank Smadja, 1993; John Justeson and Slava Katz, 1995), deciding whether a sequence of words is a multiword unit or not is a tricky problem." W03-1806,J93-1007,o,"On the other hand, purely statistical systems (Frank Smadja, 1993; Ted Dunning, 1993; Gaël Dias, 2002) extract discriminating MWUs from text corpora by means of association measure regularities." W03-1807,J93-1007,o,"For example, Smadja (1993) suggests a basic characteristic of collocations and multiword units is recurrent, domain-dependent and cohesive lexical clusters." W03-1807,J93-1007,o,"Related Works Generally speaking, approaches to MWE extraction proposed so far can be divided into three categories: a) statistical approaches based on frequency and co-occurrence affinity, b) knowledgebased or symbolic approaches using parsers, lexicons and language filters, and c) hybrid approaches combining different methods (Smadja 1993; Dagan and Church 1994; Daille 1995; McEnery et al. 1997; Wu 1997; Wermter et al. 1997; Michiels and Dufour 1998; Merkel and Andersson 2000; Piao and McEnery 2001; Sag et al. 2001a, 2001b; Biber et al. 2003)."
W03-1807,J93-1007,o,"In his Xtract system, Smadja (1993) first extracted significant pairs of words that consistently co-occur within a single syntactic structure using statistical scores called distance, strength and spread, and then examined concordances of the bi-grams to find longer frequent multiword units." W04-0407,J93-1007,o,"The group of collocations and compounds should be delimited using statistical approaches, such as Xtract (Smadja, 1993) or LocalMax (Silva et al., 1999), so that only the most relevant (those of higher frequency) are included in the database." W04-0412,J93-1007,p,"Many efficient techniques exist to extract multiword expressions, collocations, lexical units and idioms (Church and Hanks, 1989; Smadja, 1993; Dias et al., 2000; Dias, 2003)." W04-1113,J93-1007,p,"Study in collocation extraction using lexical statistics has gained some insights to the issues faced in collocation extraction (Church and Hanks 1990, Smadja 1993, Choueka 1993, Lin 1998)." W04-1113,J93-1007,o,"The precision rate using the lexical statistics approach can reach around 60% if both word bi-gram extraction and n-gram extractions are taking into account (Smadja 1993, Lin 1997 and Lu et al. 2003)." W04-1113,J93-1007,o,Smadja (Smadja 1993) proposed a statistical model by measuring the spread of the distribution of cooccurring pairs of words with higher strength. W04-2105,J93-1007,o,"There are several basic methods for evaluating associations between words: based on frequency counts (Choueka, 1988; Wettler and Rapp, 1993), information theoretic (Church and Hanks, 1990) and statistical significance (Smadja, 1993)." W05-1006,J93-1007,o,"In the work of Smadja (1993) on extracting collocations, preference was given to constructions whose constituents appear in a fixed order, a similar (and more generally implemented) version of our assumption here that asymmetric constructions are more idiomatic than symmetric ones."
W06-1006,J93-1007,o,"We can mention here only part of this work: (Berry-Rogghe, 1973; Church et al., 1989; Smadja, 1993; Lin, 1998; Krenn and Evert, 2001) for monolingual extraction, and (Kupiec, 1993; Wu, 1994; Smadja et al." W06-1006,J93-1007,o,"Morphosyntactic information has in fact been shown to significantly improve the extraction results (Breidt, 1993; Smadja, 1993; Zajac et al., 2003)." W06-1006,J93-1007,o,"Morphological tools such as lemmatizers and POS taggers are being commonly used in extraction systems; they are employed both for dealing with text variation and for validating the candidate pairs: combinations of function words are typically ruled out (Justeson and Katz, 1995), as are the ungrammatical combinations in the systems that make use of parsers (Church and Hanks, 1990; Smadja, 1993; Basili et al." W06-1006,J93-1007,o,"Given the motivations for performing a linguistically-informed extraction which were also put forth, among others, by Church and Hanks (1990, 25), Smadja (1993, 151) and Heid (1994), and given the recent development of linguistic analysis tools, it seems plausible that the linguistic structure will be more and more taken into account by collocation extraction systems." W06-1006,J93-1007,o,"3 Overview of Extraction Work 3.1 English As one might expect, the bulk of the collocation extraction work concerns the English language: (Choueka, 1988; Church et al., 1989; Church and Hanks, 1990; Smadja, 1993; Justeson and Katz, 1995; Kjellmer, 1994; Sinclair, 1995; Lin, 1998), among many others1." W06-1006,J93-1007,o,"Smadja (1993) employs the z-score in conjunction with several heuristics (e.g., the systematic occurrence of two lexical items at the same distance in text) and extracts predicative collocations, 1E.g., (Frantzi et al., 2000; Pearce, 2001; Goldman et al., 2001; Zaiu Inkpen and Hirst, 2002; Dias, 2003; Seretan et al." W06-1006,J93-1007,o,", 2004) applied extraction techniques similar to Xtract system (Smadja, 1993); Japanese: (Ikehara et al."
W06-2403,J93-1007,o,"2 Related Work The issue of MWE processing has attracted much attention from the Natural Language Processing (NLP) community, including Smadja, 1993; Dagan and Church, 1994; Daille, 1995; 1995; McEnery et al., 1997; Wu, 1997; Michiels and Dufour, 1998; Maynard and Ananiadou, 2000; Merkel and Andersson, 2000; Piao and McEnery, 2001; Sag et al., 2001; Tanaka and Baldwin, 2003; Dias, 2003; Baldwin et al., 2003; Nivre and Nilsson, 2004; Pereira et al." W06-2406,J93-1007,o,"We argue that linguistic knowledge could not only improve results (Krenn, 2000b; Smadja, 1993) but is essential when extracting collocations from certain languages: this knowledge provides other applications (or a lexicon user, respectively) with a fine-grained description of how the extracted collocations are to be used in context." W06-2406,J93-1007,o,"(Krenn, 2000b; Smadja, 1993))." W07-1511,J93-1007,o,There are some existing corpus linguistic researches on automatic extraction of collocations from electronic text (Smadja 1993; Lin 1998; Xu and Lu 2006). W07-1511,J93-1007,o,"Lastly, collocations are domain-dependent (Smadja 1993) and language-dependent." W94-0311,J93-1007,o,"The problem is that with such a definition of collocations, even when improved, one identifies not only collocations but free-combining pairs frequently appearing together such as lawyer-client; doctor-hospital, as pointed out by Smadja (1993)." W95-0111,J93-1007,o,Other representative collocation research can be found in Church and Hanks (1990) and Smadja (1993). W95-0111,J93-1007,o,"Unlike Church and Hanks (1990), Smadja (1993) goes beyond the ""two-word"" limitation and deals with ""collocations of arbitrary length""." W96-0103,J93-1007,n,"While several methods have been proposed to automatically extract compounds (Smadja 1993, Su et al. 1994), we know of no successful attempt to automatically make classes of compounds."
W96-0304,J93-1007,n,"Therefore, sublanguage techniques such as Sager (1981) and Smadja (1993) do not work." W97-0205,J93-1007,o,"MI is defined in general as follows: I(x,y) = log2 P(x,y)/(P(x)P(y)) We can use this definition to derive an estimate of the connectedness between words, in terms of collocations (Smadja, 1993), but also in terms of phrases and grammatical relations (Hindle, 1990)." W97-1004,J93-1007,o,"While bound compositions are not predictable, i.e., their reasonableness cannot be derived from the syntactic and semantic properties of the words in them (Smadja 1993)." W97-1004,J93-1007,o,"Now with the availability of large-scale corpus, automatic acquisition of word compositions, especially word collocations from them have been extensively studied (e.g., Choueka et al. 1988; Church and Hanks 1989; Smadja 1993)." W99-0610,J93-1007,o,"Based on this assumption, (Smadja, 1993) stored all bigrams of words along with their relative position, p (-5 < p ≤ 5)." W99-0610,J93-1007,o,"To continue explanations, we begin by mentioning the 'Xtract' tool by Smadja (Smadja, 1993)." A00-1004,J93-2003,o,"A number of alignment techniques have been proposed, varying from statistical methods (Brown et al., 1991; Gale and Church, 1991) to lexical methods (Kay and Röscheisen, 1993; Chen, 1993)." A00-1019,J93-2003,o,"2.1 The Evaluator The evaluator is a function p(t|t', s) which assigns to each target-text unit t an estimate of its probability given a source text s and the tokens t' which precede t in the current translation of s. 1 Our approach to modeling this distribution is based to a large extent on that of the IBM group (Brown et al., 1993), but it differs in one significant aspect: whereas the IBM model involves a ""noisy channel"" decomposition, we use a linear combination of separate predictions from a language model p(t|t') and a translation model p(t|s)."
A00-1019,J93-2003,o,"Techniques for weakening the independence assumptions made by the IBM models 1 and 2 have been proposed in recent work (Brown et al., 1993; Berger et al., 1996; Och and Weber, 98; Wang and Waibel, 98; Wu and Wong, 98)." A00-1019,J93-2003,o,"Furthermore, the underlying decoding strategies are too time consuming for our application. We therefore use a translation model based on the simple linear interpolation given in equation 2 which combines predictions of two translation models - Ms and M~ - both based on IBM-like model 2 (Brown et al., 1993)." A00-1019,J93-2003,o,"3.2 Mapping Mapping the identified units (tokens or sequences) to their equivalents in the other language was achieved by training a new translation model (IBM 2) using the EM algorithm as described in (Brown et al., 1993)." A94-1006,J93-2003,o,"3 Bilingual Task: An Application for Word Alignment 3.1 Sentence and word alignment Bilingual alignment methods (Warwick et al., 1990; Brown et al., 1991a; Brown et al., 1993; Gale and Church, 1991b; Gale and Church, 1991a; Kay and Röscheisen, 1993; Simard et al., 1992; Church, 1993; Kupiec, 1993a; Matsumoto et al., 1993; Dagan et al., 1993)." A94-1006,J93-2003,o,"have been used in statistical machine translation (Brown et al., 1990), terminology research and translation aids (Isabelle, 1992; Ogden and Gonzales, 1993; van der Eijk, 1993), bilingual lexicography (Klavans and Tzoukermann, 1990; Smadja, 1992), word-sense disambiguation (Brown et al., 1991b; Gale et al., 1992) and information retrieval in a multilingual environment (Landauer and Littman, 1990)." A94-1006,J93-2003,o,"Algorithms for the more difficult task of word alignment were proposed in (Gale and Church, 1991a; Brown et al., 1993; Dagan et al., 1993) and were applied for parameter estimation in the IBM statistical machine translation system (Brown et al., 1993)."
A94-1006,J93-2003,o,"We have been using the output of word_align, a robust alignment program that proved useful for bilingual concordancing of noisy texts (Dagan et al., 1993)." A94-1006,J93-2003,o,"Part-of-speech taggers are used in a few applications, such as speech synthesis (Sproat et al., 1992) and question answering (Kupiec, 1993b)." A94-1006,J93-2003,o,"Word alignment is newer, found only in a few places (Gale and Church, 1991a; Brown et al., 1993; Dagan et al., 1993)." A94-1012,J93-2003,o,"Unlike probabilistic parsing, proposed by (Fujisaki et al., 1989; Briscoe and Carroll, 1993)." A94-1012,J93-2003,n,"It also differs from previous proposals on lexical acquisition using statistical measures such as (Church et al., 1991; Brent, 1991; Brown et al., 1993) which either deny the prior existence of linguistic knowledge or use linguistic knowledge in ad hoc ways." A94-1012,J93-2003,o,"The computation mechanism of GP and LP bears a resemblance to the EM algorithm (Dempster et al., 1977; Brown et al., 1993), which iteratively computes maximum likelihood estimates from incomplete data." A97-1050,J93-2003,o,"On the other end of the spectrum, character-based bitext mapping algorithms (Church, 1993; Davis et al., 1995) are limited to language pairs where cognates are common; in addition, they may easily be misled by superficial differences in formatting and page layout and must sacrifice precision to be computationally tractable." C00-1064,J93-2003,p,"In statistical machine translation, IBM 1-5 models (Brown et al., 1993) based on the source-channel model have been widely used and revised for many language domains and applications." C00-1064,J93-2003,o,"Thus, a lot of alignment techniques have been suggested at the sentence (Gale et al., 1993), phrase (Shin et al., 1996), noun phrase (Kupiec, 1993), word (Brown et al., 1993; Berger et al.
, 1996; Melamed, 1997), collocation (Smadja et al., 1996) and terminology level." C00-1078,J93-2003,o,"In our Machine Translation system, transfer rules are generated automatically from parsed parallel text along the lines of (Matsumoto et al., 1993; Meyers et al., 1996; Meyers et al., 1998b)." C00-1078,J93-2003,o,"These transfer rules are pairs of corresponding rooted substructures, where a substructure (Matsumoto et al., 1993) is a connected set of arcs and nodes." C00-2123,J93-2003,o,"The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000)." C00-2162,J93-2003,o,"Many existing systems for SMT (Wang and Waibel, 1997; Nießen et al., 1998; Och and Weber, 1998) make use of a special way of structuring the string translation model (Brown et al., 1993): The correspondence between the words in the source and the target string is described by alignments that assign one target word position to each source word position." C00-2163,J93-2003,o,"In this paper we will describe extensions to the Hidden-Markov alignment model from (Vogel et al., 1996) and compare these to Models 1-4 of (Brown et al., 1993)." C00-2163,J93-2003,o,"3 Model 1 and Model 2 Replacing the dependence on aj-1 in the HMM alignment model by a dependence on j, we obtain a model which can be seen as a zero-order Hidden-Markov Model which is similar to Model 2 proposed by (Brown et al., 1993)." C00-2163,J93-2003,o,"Model 3 of (Brown et al., 1993) is a zero-order alignment model like Model 2 including in addition fertility parameters." C00-2163,J93-2003,o,"Model 4 of (Brown et al., 1993) is also a first-order alignment model (along the source positions) like the HMM, but includes also fertilities." C00-2163,J93-2003,o,"The full description of Model 4 (Brown et al.
, 1993) is rather complicated as there have to be considered the cases that English words have fertility larger than one and that English words have fertility zero." C00-2163,J93-2003,o,"Therefore, the Viterbi alignment is computed only approximately using the method described in (Brown et al., 1993)." C00-2163,J93-2003,o,"As in the HMM we easily can extend the dependencies in the alignment model of Model 4 using the word class of the previous English word E = G(ei-1), or the word class of the French word F = G(fj) (Brown et al., 1993)." C00-2163,J93-2003,o,"Most SMT models (Brown et al., 1993; Vogel et al., 1996) try to model word-to-word correspondences between source and target words using an alignment mapping from source position j to target position i = aj." C02-1002,J93-2003,o,"The assumptions we made were the following: a lexical token in one half of the translation unit (TU) corresponds to at most one non-empty lexical unit in the other half of the TU; this is the 1:1 mapping assumption which underlines the work of many other researchers (Ahrenberg et al (2000), Brew and McKelvie (1996), Hiemstra (1996), Kay and Röscheisen (1993), Tiedemann (1998), Melamed (2001) etc); a polysemous lexical token, if used several times in the same TU, is used with the same meaning; this assumption is explicitly used by Gale and Church (1991), Melamed (2001) and implicitly by all the previously mentioned authors; a lexical token in one part of a TU can be aligned to a lexical token in the other part of the TU only if the two tokens have compatible types (part-of-speech); in most cases, compatibility reduces to the same POS, but it is also possible to define other compatibility mappings (e.g.
participles or gerunds in English are quite often translated as adjectives or nouns in Romanian and vice-versa); although the word order is not an invariant of translation, it is not random either (Ahrenberg et al (2000)); when two or more candidate translation pairs are equally scored, the one containing tokens which are closer in relative position are preferred." C02-1008,J93-2003,p,"Another kind of popular approaches to dealing with query translation based on corpus-based techniques uses a parallel corpus containing aligned sentences whose translation pairs are corresponding to each other (Brown et al. , 1993; Dagan et al. , 1993; Smadja et al. , 1996)." C02-1011,J93-2003,o,"Related Work 2.1 Translation with Non-parallel Corpora A straightforward approach to word or phrase translation is to perform the task by using parallel bilingual corpora (e.g. , Brown et al, 1993)." C02-1050,J93-2003,o,"According to the Bayes Rule, the problem is transformed into the noisy channel model paradigm, where the translation is the maximum a posteriori solution of a distribution for a channel target text given a channel source text and a prior distribution for the channel source text (Brown et al. , 1993)." C04-1005,J93-2003,n,"For example, the statistical word alignment in IBM translation models (Brown et al. 1993) can only handle word to word and multi-word to word alignments." C04-1005,J93-2003,n,"2 Statistical Word Alignment Statistical translation models (Brown, et al. 1993) only allow word to word and multi-word to word alignments." C04-1005,J93-2003,o,1 Introduction Bilingual word alignment is first introduced as an intermediate result in statistical machine translation (SMT) (Brown et al. 1993). C04-1005,J93-2003,o,"Besides being used in SMT, it is also used in translation lexicon building (Melamed 1996), transfer rule learning (Menezes and Richardson 2001), example-based machine translation (Somers 1999), etc. 
In previous alignment methods, some researches modeled the alignments as hidden parameters in a statistical translation model (Brown et al. 1993; Och and Ney 2000) or directly modeled them given the sentence pairs (Cherry and Lin 2003)." C04-1006,J93-2003,o,"Word alignment models were first introduced in statistical machine translation (Brown et al., 1993)." C04-1006,J93-2003,p,"Using the IBM translation models IBM-1 to IBM-5 (Brown et al., 1993), as well as the Hidden-Markov alignment model (Vogel et al., 1996), we can produce alignments of good quality." C04-1006,J93-2003,p,"6 Related Work The popular IBM models for statistical machine translation are described in (Brown et al., 1993)." C04-1006,J93-2003,o,"These alignment models stem from the source-channel approach to statistical machine translation (Brown et al., 1993)." C04-1006,J93-2003,p,"A detailed description of the popular translation models IBM-1 to IBM-5 (Brown et al., 1993), as well as the Hidden-Markov alignment model (HMM) (Vogel et al., 1996) can be found in (Och and Ney, 2003)." C04-1015,J93-2003,o,"On the other hand, statistical MT employing IBM models (Brown et al., 1993) translates an input sentence by the combination of word transfer and word re-ordering." C04-1031,J93-2003,o,"Estimated clues are derived from the parallel data using, for example, measures of co-occurrence (e.g. the Dice coefficient (Smadja et al., 1996)), statistical alignment models (e.g. IBM models from statistical machine translation (Brown et al., 1993)), or string similarity measures (e.g. the longest common sub-sequence ratio (Melamed, 1995))." C04-1031,J93-2003,o,"(Brown et al., 1993; Vogel et al., 1996; García-Varea et al., 2002; Ahrenberg et al., 1998; Tiedemann, 1999; Tufis and Barbu, 2002; Melamed, 2000)." C04-1032,J93-2003,o,"Word alignment models were first introduced in statistical machine translation (Brown et al., 1993)." C04-1032,J93-2003,p,"Using the IBM translation models IBM-1 to IBM-5 (Brown et al.
, 1993), as well as the Hidden-Markov alignment model (Vogel et al., 1996), we can produce alignments of good quality." C04-1032,J93-2003,o,"6 Related Work A description of the IBM models for statistical machine translation can be found in (Brown et al., 1993)." C04-1032,J93-2003,o,"They are based on the source-channel approach to statistical machine translation (Brown et al., 1993)." C04-1032,J93-2003,p,"A detailed description of the popular translation/alignment models IBM-1 to IBM-5 (Brown et al., 1993), as well as the Hidden-Markov alignment model (HMM) (Vogel et al., 1996) can be found in (Och and Ney, 2003)." C04-1045,J93-2003,p,"2 Related Work The popular IBM models for statistical machine translation are described in (Brown et al., 1993) and the HMM-based alignment model was introduced in (Vogel et al., 1996)." C04-1045,J93-2003,o,"Detailed description of those models can be found in (Brown et al., 1993), (Vogel et al., 1996) and (Och and Ney, 2003)." C04-1045,J93-2003,o,"So far, most of the statistical machine translation systems are based on the single-word alignment models as described in (Brown et al., 1993) as well as the Hidden Markov alignment model (Vogel et al., 1996)." C04-1047,J93-2003,o,"For scoring MT outputs, the proposed RSCM uses a score based on a translation model called IBM4 (Brown et al., 1993) (TM-score) and a score based on a language model for the translation target language (LM-score)." C04-1051,J93-2003,o,"Giza++ is a freely available implementation of IBM Models 1-5 (Brown et al. 1993) and the HMM alignment (Vogel et al. 1996), along with various improvements and modifications motivated by experimentation by Och & Ney (2000)." C04-1059,J93-2003,o,"Statistical machine translation is based on the noisy channel model, where the translation hypothesis is searched over the space defined by a translation model and a target language model (Brown et al, 1993)."
C04-1090,J93-2003,o,"In word-based models, such as IBM Model 1-5 (Brown et al 1993), the probability P(T|S) is decomposed into statistical parameters involving words." C04-1091,J93-2003,o,"1 Introduction Decoding is one of the three fundamental problems in classical SMT (translation model and language model being the other two) as proposed by IBM in the early 1990s (Brown et al. , 1993)." C04-1091,J93-2003,o,"2 Decoding The decoding problem in SMT is one of finding the most probable translation e in the target language of a given source language sentence f in accordance with the Fundamental Equation of SMT (Brown et al. , 1993): e* = argmax_e Pr(f|e)Pr(e)." C04-1091,J93-2003,o,"In each iteration of local search, we look in the neighborhood of the current best alignment for a better alignment (Brown et al. , 1993)." C04-1168,J93-2003,o,"According to the statistical machine translation formalism (Brown et al. , 1993), the translation process is to search for the best sentence E* such that E* = argmax_E P(E|J) = argmax_E P(J|E)P(E), where P(J|E) is a translation model characterizing the correspondence between E and J; P(E), the English language model probability." C08-1014,J93-2003,o,"By introducing the hidden word alignment variable a (Brown et al., 1993), the optimal translation can be searched for based on the following criterion: (e*, a*) = argmax_{e,a} Σ_{m=1}^{M} λ_m h_m(e, f, a) (1) where e is a string of phrases in the target language, f is the source language string of phrases, h_m are feature functions, and weights λ_m are typically optimized to maximize the scoring function (Och, 2003)."
C08-1128,J93-2003,o,"Given a manually compiled lexicon containing words and their relative frequencies Ps(f'_j), the best segmentation f_1^J is the one that maximizes the joint probability of all words in the sentence, with the assumption that words are independent of each other: f_1^J = argmax_{f'_1^J'} Pr(f'_1^J' | c_1^K) ≈ argmax_{f'_1^J'} ∏_{j=1}^{J'} Ps(f'_j), where the maximization is taken over Chinese word sequences whose character sequence is c_1^K. 2.2 Translation system Once we have segmented the Chinese sentences into words, we train standard alignment models in both directions with GIZA++ (Och and Ney, 2002) using models of IBM-1 (Brown et al., 1993), HMM (Vogel et al., 1996) and IBM-4 (Brown et al., 1993)." C08-1136,J93-2003,o,"In the context of statistical machine translation (Brown et al., 1993), we may interpret E as an English sentence, F its translation in French, and A a representation of how the words correspond to each other in the two sentences." C08-1138,J93-2003,o,"Based on these grammars, a great number of SMT models have been recently proposed, including string-to-string model (Synchronous FSG) (Brown et al., 1993; Koehn et al., 2003), tree-to-string model (TSG-string) (Huang et al., 2006; Liu et al., 2006; Liu et al., 2007), string-to-tree model (string-CFG/TSG) (Yamada and Knight, 2001; Galley et al., 2006; Marcu et al., 2006), tree-to-tree model (Synchronous CFG/TSG, Data-Oriented Translation) (Chiang, 2005; Cowan et al., 2006; Eisner, 2003; Ding and Palmer, 2005; Zhang et al., 2007; Bod, 2007; Quirk et al., 2005; Poutsma, 2000; Hearne and Way, 2003) and so on." C94-2175,J93-2003,o,"For example, sentence alignment of bilingual texts are performed just by measuring sentence lengths in words or in characters (Brown et al. , 1991; Gale and Church, 1993), or by statistically estimating word level correspondences (Chen, 1993; Kay and Röscheisen, 1993)."
C94-2175,J93-2003,o,"Then, those structurally matched parallel sentences are used as a source for acquiring lexical knowledge such as verbal case frames (Utsuro et al. , 1992; Utsuro et al. , 1993)." C94-2175,J93-2003,o,"So far, we have implemented the following: sentence alignment based on word correspondence information, word correspondence estimation by cooccurrence-frequency-based methods in Gale and Church (1991) and Kay and Röscheisen (1993), structured matching of parallel sentences (Matsumoto et al. , 1993), and case frame acquisition of Japanese verbs (Utsuro et al. , 1993)." C94-2175,J93-2003,o,"Dynamic programming is applied to bilingual sentence alignment in most of previous works (Brown et al. , 1991; Gale and Church, 1993; Chen, 1993)." C94-2175,J93-2003,o,"The statistical approach involves the following: alignment of bilingual texts at the sentence level using statistical techniques (e.g. Brown, Lai and Mercer (1991), Gale and Church (1993), Chen (1993), and Kay and Röscheisen (1993)), statistical machine translation models (e.g. Brown, Cocke, Pietra, Pietra et al.
C94-2178,J93-2003,o,"Motivation There have been quite a number of recent papers on parallel text: Brown et al (1990, 1991, 1993), Chen (1993), Church (1993), Church et al (1993), Dagan et al (1993), Gale and Church (1991, 1993), Isabelle (1992), Kay and Röscheisen (1993), Klavans and Tzoukermann (1990), Kupiec (1993), Matsumoto (1991), Ogden and Gonzales (1993), Shemtov (1993), Simard et al (1992), Warwick-Armstrong and Russell (1990), Wu (to appear)." C94-2178,J93-2003,o,"Results This algorithm was applied to a fragment of the Canadian Hansards that has been used in a number of other studies: Church (1993) and Simard et al (1992)." C96-1037,J93-2003,o,"The resolution of alignment can vary from low to high: section, paragraph, sentence, phrase, and word (Gale and Church 1993; Matsumoto et al. 1993)." C96-1037,J93-2003,o,(McArthur 1992; Mei et al. 1993) Classification allows a word to align with a target word using the collective translation tendency of words in the same class. C96-1040,J93-2003,o,machine translation (Brown et al. 1993) but also in other applications such as word sense disambiguation (Brown et al. 1991) and bilingual lexicography (Klavans and Tzoukermann 1990). C96-1040,J93-2003,p,"of the position information of words at matching pairs of sentences, which turned out useful (Brown et al. 1993)." C96-1040,J93-2003,o,"the EM algorithm (Brown et al. 1993) (Dempster et al. 1977)." C96-1067,J93-2003,o,"This conclusion is supported by the fact that true IMT is not, to our knowledge, used in most modern translator's support environments, eg (Eurolang, 1995; Frederking et al. , 1993; IBM, 1995; Kugler et al. , 1991; Nirenburg, 1992; Trados, 1995)." C96-1067,J93-2003,o,"However, (Dagan et al. , 1993) have shown that knowledge of target-text length is not crucial to the model's performance."
C96-1078,J93-2003,o,"Some of this research has treated the sentences as unstructured word sequences to be aligned; this work has primarily involved the acquisition of bilingual lexical correspondences (Chen, 1993), although there has also been an attempt to create a full MT system based on such treatment (Brown et al. , 1993)." C96-2141,J93-2003,o,"A similar approach has been chosen by (Dagan et al. , 1993)." C96-2141,J93-2003,o,"In the recent years, there have been a number of papers considering this or similar problems: (Brown et al. , 1990), (Dagan et al. , 1993), (Kay et al. , 1993), (Fung et al. , 1993)." C96-2211,J93-2003,o,", 1993; Graham et al. , 1980) where K is the number of distinct nonterminal symbols in the grammar G. We can expect a very efficient parser for our patterns. The input string can also be scanned to reduce the number of relevant grammar rules before parsing. The combined process is also known as offline parsing in LTAG." D07-1003,J93-2003,o,"This sort of problem can be solved in principle by conditional variants of the Expectation-Maximization algorithm (Baum et al. , 1970; Dempster et al. , 1977; Meng and Rubin, 1993; Jebara and Pentland, 1999)." D07-1003,J93-2003,o,"Similarly, Murdock and Croft (2005) adopted a simple translation model from IBM model 1 (Brown et al. , 1990; Brown et al. , 1993) and applied it to QA." D07-1003,J93-2003,o,"The tree is produced by a state-of-the-art dependency parser (McDonald et al. , 2005) trained on the Wall Street Journal Penn Treebank (Marcus et al. , 1993)." D07-1005,J93-2003,o,"2 Word Alignment Framework A statistical translation model (Brown et al. , 1993; Och and Ney, 2003) describes the relationship between a pair of sentences in the source and target languages (f = f_1^J, e = e_1^I) using a translation probability P(f|e)." D07-1005,J93-2003,o,"Given a sentence-pair (f,e), the most likely (Viterbi) word alignment is found as (Brown et al. , 1993): a* = argmax_a P(f,a|e)."
D07-1005,J93-2003,o,"Given any word alignment model, posterior probabilities can be computed as (Brown et al. , 1993) P(a_j = i|e,f) = Σ_a P(a|f,e) δ(i,a_j), (1) where i ∈ {0,1,...,I}." D07-1005,J93-2003,p,"(2) We note that these posterior probabilities can be computed efficiently for some alignment models such as the HMM (Vogel et al. , 1996; Och and Ney, 2003), Models 1 and 2 (Brown et al. , 1993)." D07-1006,J93-2003,o,"5 Previous Work The LEAF model is inspired by the literature on generative modeling for statistical word alignment and particularly by Model 4 (Brown et al. , 1993)." D07-1006,J93-2003,o,"2.2 Unsupervised Parameter Estimation We can perform maximum likelihood estimation of the parameters of this model in a similar fashion to that of Model 4 (Brown et al. , 1993), described thoroughly in (Och and Ney, 2003)." D07-1006,J93-2003,o,"We use Viterbi training (Brown et al. , 1993) but neighborhood estimation (Al-Onaizan et al. , 1999; Och and Ney, 2003) or pegging (Brown et al. , 1993) could also be used." D07-1006,J93-2003,o,"(Brown et al. , 1993) defined two local search operations for their 1-to-N alignment models 3, 4 and 5." D07-1025,J93-2003,o,"Therefore, to make the phrase-based SMT system robust against data sparseness for the ranking task, we also make use of the IBM Model 4 (Brown et al. , 1993) in both directions." D07-1030,J93-2003,o,"SMT has evolved from the original word-based approach (Brown et al. , 1993) into phrase-based approaches (Koehn et al. , 2003; Och and Ney, 2004) and syntax-based approaches (Wu, 1997; Alshawi et al. , 2000; Yamada and Knight, 2001; Chiang, 2005)." D07-1038,J93-2003,o,"3.1 The traditional IBM alignment model IBM Model 4 (Brown et al.
, 1993) learns a set of 4 probability tables to compute p(f|e) given a foreign sentence f and its target translation e via the following (greatly simplified) generative story: [Figure 2: A (English tree, Chinese string) pair and three different sets of multilevel tree-to-string rules that can explain it; the first set is obtained from bootstrap alignments, the second from this paper's re-alignment procedure, and the third is a viable, if poor quality, alternative that is not learned.]" D07-1038,J93-2003,o,"However, searching the space of all possible alignments is intractable for EM, so in practice the procedure is bootstrapped by models with narrower search space such as IBM Model 1 (Brown et al. , 1993) or Aachen HMM (Vogel et al. , 1996)." D07-1045,J93-2003,o,"This approach is usually referred to as the noisy source-channel approach in statistical machine translation (Brown et al. , 1993)."
D07-1079,J93-2003,o,"Approaches include word substitution systems (Brown et al. , 1993), phrase substitution systems (Koehn et al. , 2003; Och and Ney, 2004), and synchronous context-free grammar systems (Wu and Wong, 1998; Chiang, 2005), all of which train on string pairs and seek to establish connections between source and target strings." D07-1090,J93-2003,o,"1 Introduction Given a source-language (e.g. , French) sentence f, the problem of machine translation is to automatically produce a target-language (e.g. , English) translation e. The mathematics of the problem were formalized by (Brown et al. , 1993), and re-formulated by (Och and Ney, 2004) in terms of the optimization e* = argmax_e Σ_{m=1}^{M} λ_m h_m(e,f) (1) where {h_m(e,f)} is a set of M feature functions and {λ_m} a set of weights." D07-1103,J93-2003,o,"These joint counts are estimated using the phrase induction algorithm described in (Koehn et al. , 2003), with symmetrized word alignments generated using IBM model 2 (Brown et al. , 1993)." D08-1026,J93-2003,o,"The words with the highest association probabilities are chosen as acquired words for entity e. 4.1 Base Model I Using the translation model I (Brown et al., 1993), where each word is equally likely to be aligned with each entity, we have p(w|e) = 1/(l + 1)^m ∏_{j=1}^{m} Σ_{i=0}^{l} p(w_j|e_i) (1) where l and m are the lengths of entity and word sequences respectively." D08-1026,J93-2003,o,"4.2 Base Model II Using the translation model II (Brown et al., 1993), where alignments are dependent on word/entity positions and word/entity sequence lengths, we have p(w|e) = ∏_{j=1}^{m} Σ_{i=0}^{l} p(a_j = i|j,m,l) p(w_j|e_i) (2) where a_j = i means that w_j is aligned with e_i." D08-1039,J93-2003,o,"One of the simplest models that can be seen in the context of lexical triggers is the IBM model 1 (Brown et al., 1993) which captures lexical dependencies between source and target words."
D08-1039,J93-2003,o,"3 Model As an extension to commonly used lexical word pair probabilities p(f|e) as introduced in (Brown et al., 1993), we define our model to operate on word triplets." D08-1039,J93-2003,o,"The resulting training procedure is analogous to the one presented in (Brown et al., 1993) and (Tillmann and Ney, 1997)." D08-1043,J93-2003,o,"Usually the IBM Model 1, developed in the statistical machine translation field (Brown et al., 1993), is used to construct translation models for retrieval purposes in practice." D08-1053,J93-2003,p,"Compared with clean parallel corpora such as ""Hansard"" (Brown et al. 1993), which consists of 505 French-English translations of political debates in the Canadian parliament, texts from the web are far more diverse and noisy." D08-1053,J93-2003,o,"1 Introduction Sentence-aligned parallel bilingual corpora have been essential resources for statistical machine translation (Brown et al. 1993), and many other multi-lingual natural language processing applications." D08-1082,J93-2003,o,"9.1 Training Methodology Given a training set, we first run a variant of IBM alignment model 1 (Brown et al., 1993) for 100 iterations, and then initialize Model I with the learned parameter values." D08-1082,J93-2003,o,"It acquires a set of synchronous lexical entries by running the IBM alignment model (Brown et al., 1993) and learns a log-linear model to weight parses." D08-1084,J93-2003,o,"Although we have argued (section 2) that this is unlikely to succeed, to our knowledge, we are the first to investigate the matter empirically. The best-known MT aligner is undoubtedly GIZA++ (Och and Ney, 2003), which contains implementations of various IBM models (Brown et al., 1993), as well as the HMM model of Vogel et al."
D08-1084,J93-2003,o,"The MT community has developed not only an extensive literature on alignment (Brown et al., 1993; Vogel et al., 1996; Marcu and Wong, 2002; DeNero et al., 2006), but also standard, proven alignment tools such as GIZA++ (Och and Ney, 2003)." D09-1014,J93-2003,o,"We use GIZA++ (Och and Ney, 2003) to train generative directed alignment models: HMM and IBM Model 4 (Brown et al., 1993) from training record-text pairs." D09-1014,J93-2003,o,"Traditionally, generative word alignment models have been trained on massive parallel corpora (Brown et al., 1993)." D09-1014,J93-2003,o,"The structure of the graphical model resembles IBM Model 1 (Brown et al., 1993) in which each target (record) word is assigned one or more source (text) words." D09-1014,J93-2003,n,"Furthermore, we provide a 63.8% error reduction compared to IBM Model 4 (Brown et al., 1993)." D09-1014,J93-2003,o,"3.1 Conditional Random Field for Alignment Our conditional random field (CRF) for alignment has a graphical model structure that resembles that of IBM Model 1 (Brown et al., 1993)." D09-1022,J93-2003,p,"In this work, we propose two models that can be categorized as extensions of standard word lexicons: A discriminative word lexicon that uses global, i.e. sentence-level source information to predict the target words using a statistical classifier and a trigger-based lexicon model that extends the well-known IBM model 1 (Brown et al., 1993) with a second trigger, allowing for a more fine-grained lexical choice of target words." D09-1022,J93-2003,o,"There are three major types of models: Heuristic models as in (Melamed, 2000), generative models as the IBM models (Brown et al., 1993) and discriminative models (Varea et al., 2001; Bangalore et al., 2006)." D09-1022,J93-2003,o,"One of the simplest models in the context of lexical triggers is the IBM model 1 (Brown et al., 1993) which captures lexical dependencies between source and target words."
D09-1023,J93-2003,n,"This is a problem with other direct translation models, such as IBM model 1 used as a direct model rather than a channel model (Brown et al., 1993)." D09-1024,J93-2003,o,"The GIZA++ aligner is based on IBM Model 4 (Brown et al., 1993)." D09-1024,J93-2003,o,"1 Introduction Word alignment is a critical component in training statistical machine translation systems and has received a significant amount of research, for example, (Brown et al., 1993; Ittycheriah and Roukos, 2005; Fraser and Marcu, 2007), including work leveraging syntactic parse trees, e.g., (Cherry and Lin, 2006; DeNero and Klein, 2007; Fossum et al., 2008)." D09-1039,J93-2003,o,"(Yamada and Knight, 2001) follow (Brown et al., 1993) in using the noisy channel model, by decomposing the translation decisions modeled by the translation model into different types, and inducing probability distributions via maximum likelihood estimation over each decision type." D09-1050,J93-2003,o,"Previous SMT systems (e.g., Brown et al., 1993) used a word-based translation model which assumes that a sentence can be translated into other languages by translating each word into one or more words in the target language." D09-1050,J93-2003,o,"However, since we are interested in the word counts that correlate to w, we adopt the concept of the translation model proposed by Brown et al (1993)." D09-1051,J93-2003,o,"Many studies on collocation extraction are carried out based on co-occurring frequencies of the word pairs in texts (Choueka et al., 1983; Church and Hanks, 1990; Smadja, 1993; Dunning, 1993; Pearce, 2002; Evert, 2004)." D09-1051,J93-2003,o,"The difference between MWA and bilingual word alignment (Brown et al., 1993) is that the MWA method works on monolingual parallel corpus instead of bilingual corpus used by bilingual word alignment." D09-1051,J93-2003,o,"Thus the alignment set is denoted as A = {(i, a_i) | a_i ∈ [1, l]}.
We adapt the bilingual word alignment model, IBM Model 3 (Brown et al., 1993), to monolingual word alignment." D09-1092,J93-2003,o,"In the early statistical translation model work at IBM, these representations were called cepts, short for concepts (Brown et al., 1993)." D09-1106,J93-2003,o,"Generative methods (Brown et al., 1993; Vogel and Ney, 1996) treat word alignment as a hidden process and maximize the likelihood of bilingual training corpus using the expectation maximization (EM) algorithm." D09-1117,J93-2003,n,"Numbers in the table correspond to the percentage of experiments in which the condition at the head of the column was true (for example figure in the first row and first column means that for 98.9 percent of the language pairs the BLEU score for the bidirectional decoder was better than that of the forward decoder) approach (Brown et al., 1993))." D09-1117,J93-2003,o,"Their experiments were performed using a decoder based on IBM Model 4 using the translation techniques developed at IBM (Brown et al., 1993)." D09-1136,J93-2003,o,"Because such approaches directly learn a generative model over phrase pairs, they are theoretically preferable to the standard heuristics for extracting the phrase pairs from the many-to-one word-level alignments produced by the IBM series models (Brown et al., 1993) or the Hidden Markov Model (HMM) (Vogel et al., 1996)." D09-1141,J93-2003,o,"We then built separate directed word alignments for English→X and X→English (X ∈ {Indonesian, Spanish}) using IBM model 4 (Brown et al., 1993), combined them using the intersect+grow heuristic (Och and Ney, 2003), and extracted phrase-level translation pairs of maximum length seven using the alignment template approach (Och and Ney, 2004)." E06-1004,J93-2003,o,"Increasingly, parallel corpora are becoming available for many language pairs and SMT systems have been built for French-English, German-English, Arabic-English, Chinese-English, Hindi-English and other language pairs (Brown et al.
, 1993), (Al-Onaizan et al. , 1999), (Udupa, 2004)." E06-1004,J93-2003,p,"In the classic work on SMT, Brown and his colleagues at IBM introduced the notion of alignment between a sentence f and its translation e and used it in the development of translation models (Brown et al. , 1993)." E06-1004,J93-2003,o,"An open question in SMT is whether there exists closed form expressions (whose representation is polynomial in the size of the input) for P(f|e) and the counts in the EM iterations for models 3-5 (Brown et al. , 1993)." E06-1004,J93-2003,o,"For a detailed introduction to IBM translation models, please see (Brown et al. , 1993)." E06-1004,J93-2003,o,"Expectation Evaluation is the soul of parameter estimation (Brown et al. , 1993), (Al-Onaizan et al. , 1999)." E06-1004,J93-2003,o,"Exact Decoding is the original decoding problem as defined in (Brown et al. , 1993) and Relaxed Decoding is the relaxation of the decoding problem typically used in practice." E06-1004,J93-2003,p,"In their seminal paper on SMT, Brown and his colleagues highlighted the problems we face as we go from IBM Models 1-2 to 3-5 (Brown et al. , 1993): As we progress from Model 1 to Model 5, evaluating the expectations that gives us counts becomes increasingly difficult." E06-1004,J93-2003,o,"1 Introduction Statistical Machine Translation is a data driven machine translation technique which uses probabilistic models of natural language for automatic translation (Brown et al. , 1993), (Al-Onaizan et al. , 1999)." E06-1004,J93-2003,o,"The parameters of the models are estimated by iterative maximum-likelihood training on a large parallel corpus of natural language texts using the EM algorithm (Brown et al. , 1993)." E06-1005,J93-2003,o,"We use the IBM Model 1 (Brown et al. , 1993) (uniform distribution) and the Hidden Markov Model (HMM, first-order dependency, (Vogel et al. , 1996)) to estimate the alignment model." E06-1019,J93-2003,o,"Alignment spaces can emerge from generative stories (Brown et al.
, 1993), from syntactic notions (Wu, 1997), or they can be imposed to create competition between links (Melamed, 2000)." E06-1019,J93-2003,o,"The IBM models (Brown et al. , 1993) search a version of permutation space with a one-to-many constraint." E06-1019,J93-2003,o,"The task originally emerged as an intermediate result of training the IBM translation models (Brown et al. , 1993)." E06-1020,J93-2003,p,The implementation of MEBA was strongly influenced by the notorious five IBM models described in (Brown et al. 1993). E06-1046,J93-2003,o,"First, a parsing-based approach attempts to recover partial parses from the parse chart when the input cannot be parsed in its entirety due to noise, in order to construct a (partial) semantic representation (Dowding et al. , 1993; Allen et al. , 2001; Ward, 1991)." E06-1046,J93-2003,o,"To this end, we adopt techniques from statistical machine translation (Brown et al. , 1993; Och and Ney, 2003) and use statistical alignment to learn the edit patterns." E06-1046,J93-2003,o,"We adopt an approach, similar to (Ciaramella, 1993; Boros et al. , 1996), in which the meaning representation, in our case XML, is transformed into a sorted flat list of attribute-value pairs indicating the core contentful concepts of each command." E06-2002,J93-2003,o,"By introducing the hidden word alignment variable a, the following approximate optimization criterion can be applied for that purpose: e* = argmax_e Pr(e | f) = argmax_e Σ_a Pr(e,a | f) ≈ argmax_{e,a} Pr(e,a | f). Exploiting the maximum entropy (Berger et al. , 1996) framework, the conditional distribution Pr(e,a | f) can be determined through suitable real valued functions (called features) h_r(e,f,a), r = 1...R, and takes the parametric form: p(e,a | f) ∝ exp{Σ_{r=1}^{R} λ_r h_r(e,f,a)} The ITC-irst system (Chen et al. , 2005) is based on a log-linear model which extends the original IBM Model 4 (Brown et al. , 1993) to phrases (Koehn et al. , 2003; Federico and Bertoldi, 2005)."
E09-1003,J93-2003,o,"This approach is usually referred to as the noisy source-channel approach in SMT (Brown et al., 1993)." E09-1018,J93-2003,p,"While EM has worked quite well for a few tasks, notably machine translations (starting with the IBM models 1-5 (Brown et al., 1993)), it has not had success in most others, such as part-of-speech tagging (Merialdo, 1991), named-entity recognition (Collins and Singer, 1999) and context-free-grammar induction (numerous attempts, too many to mention)." E09-1033,J93-2003,o,"System Train +base Test +base 1 Baseline 87.89 87.89 2 Contrastive (5 trials/fold) 88.70 +0.82 88.45 +0.56 3 Contrastive (greedy selection) 88.82 +0.93 88.55 +0.66 Table 1: Average F1 of 7-way cross-validation To generate the alignments, we used Model 4 (Brown et al., 1993), as implemented in GIZA++ (Och and Ney, 2003)." E09-1033,J93-2003,o,"Our test set is 3718 sentences from the English Penn treebank (Marcus et al., 1993) which were translated into German." E09-1050,J93-2003,o,"According to this model, when translating a string f in the source language into the target language, a string e is chosen out of all target language strings e if it has the maximal probability given f (Brown et al., 1993): e* = argmax_e {Pr(e|f)} = argmax_e {Pr(f|e)Pr(e)} where Pr(f|e) is the translation model and Pr(e) is the target language model." E09-1050,J93-2003,o,"The method uses a translation model based on IBM Model 1 (Brown et al., 1993), in which translation candidates of a phrase are generated by combining translations and transliterations of the phrase components, and matching the result against a large corpus." E09-1056,J93-2003,o,"One approach to translate terms consists in using a domain-specific parallel corpus with standard alignment techniques (Brown et al., 1993) to mine new translations." E09-1061,J93-2003,o,"Alignment is often used in training both generative and discriminative models (Brown et al., 1993; Blunsom et al., 2008; Liang et al., 2006)."
E09-1061,J93-2003,o,"Each item is associated with a stack whose signature [...] Specifically a B-hypergraph, equivalent to an and-or graph (Gallo et al., 1993) or context-free grammar (Nederhof, 2003)." E09-1061,J93-2003,o,"Logics for the IBM Models (Brown et al., 1993) would be similar to our logics for phrase-based models." E99-1010,J93-2003,o,"To model p(f_1^J|e_1^I) we assume the existence of an alignment a_1^J. We assume that every word f_j is produced by the word e_{a_j} at position a_j in the training corpus with the probability p(f_j|e_{a_j}): p(f_1^J|e_1^I) = ∏_{j=1}^{J} p(f_j|e_{a_j}) (7) The word alignment a_1^J is trained automatically using statistical translation models as described in (Brown et al. , 1993; Vogel et al. , 1996)." E99-1010,J93-2003,o,"Various clustering techniques have been proposed (Brown et al. , 1992; Jardino and Adda, 1993; Martin et al. , 1998) which perform automatic word clustering optimizing a maximum-likelihood criterion with iterative clustering algorithms." H05-1012,J93-2003,p,"The IBM models 1-5 (Brown et al. , 1993) produce word alignments with increasing algorithmic complexity and performance." H05-1021,J93-2003,o,"The 1000-best lists are augmented with IBM Model-1 (Brown et al. , 1993) scores and then rescored with a second set of MET parameters." H05-1021,J93-2003,o,"The IBM translation models (Brown et al. , 1993) describe word reordering via a distortion model defined over word positions within sentence pairs." H05-1021,J93-2003,o,"2 The WFST Reordering Model The Translation Template Model (TTM) is a generative model of phrase-based translation (Brown et al. , 1993)." H05-1023,J93-2003,o,"Word alignments traditionally are based on IBM Models 1-5 (Brown et al. , 1993) or on HMMs (Vogel et al. , 1996)." H05-1024,J93-2003,n,"2 Related Work One of the major problems with the IBM models (Brown et al. , 1993) and the HMM models (Vogel et al. , 1996) is that they are restricted to the alignment of each source-language word to at most one target-language word."
H05-1057,J93-2003,o,"There are five different IBM translation models (Brown et al. , 1993)." H05-1057,J93-2003,o,"Further details are in the original paper (Brown et al. , 1993)." H05-1057,J93-2003,p,"The IBM models have shown good performance in machine translation, and especially so within certain families of languages, for example in translating between French and English or between Sinhalese and Tamil (Brown et al. , 1993; Weerasinghe, 2004)." H05-1057,J93-2003,p,"This is a common technique in machine translation for which the IBM translation models are popular methods (Brown et al. , 1993)." H05-1057,J93-2003,o,"In the first of our methods we align manual transcripts and ASR sentences using the IBM translation model (Brown et al. , 1993) to obtain a probabilistic dictionary." H05-1057,J93-2003,o,"3.2 Details To learn alignments, translation probabilities, etc in the first method we used work that has been done in statistical machine translation (Brown et al. , 1993), where the translation process is considered to be equivalent to a corruption of the source language text to the target language text due to a noisy channel." H05-1061,J93-2003,p,One widely used model is the IBM model (Brown et al. 1993). H05-1095,J93-2003,o,"h_bp's strong tendency to overestimate the probability of rare bi-phrases; it is computed as in equation (2), except that bi-phrase probabilities are computed based on individual word translation probabilities, somewhat as in IBM model 1 (Brown et al. , 1993): Pr(t|s) = 1/|s|^|t| ∏_{t'∈t} Σ_{s'∈s} Pr(t'|s') The target language feature function h_tl: this is based on an N-gram language model of the target language." H05-1095,J93-2003,n,"While in traditional word-based statistical models (Brown et al.
, 1993) the atomic unit that translation operates on is the word, phrase-based methods acknowledge the significant role played in language by multiword expressions, thus incorporating in a statistical framework the insight behind Example-Based Machine Translation (Somers, 1999)." H05-1096,J93-2003,o,"Therefore, we determine the maximal translation probability of the target word e over the source sentence words: p_IBM1(e|f_1^J) = max_{j=0,…,J} p(e|f_j), (9) where f_0 is the empty source word (Brown et al. , 1993)." H05-1097,J93-2003,o,"For example, in the IBM Models (Brown et al. , 1993), each word t_i independently generates 0, 1, or more words in the source language. 2 Note that we refer to t as the target sentence, even though in the source-channel model, t is the source sentence which goes through the channel model P(s|t) to produce the observed sentence s." I05-2012,J93-2003,p,"It was initially proposed by (Brown et al. , 1993) and, more recently, have been intensively studied by several research groups (Germann et al. , 2001; Och et al. , 2003)." I05-2012,J93-2003,o,"For the given source text, S, it finds the most probable alignment set, A, and target text, T: p(T|S) = Σ_{a∈A} p(T,a|S) (1) Brown (Brown et al. , 1993) proposed five alignment models, called IBM Model, for an English-French alignment task based on equation (1)." I05-2012,J93-2003,o,"(Chen et al. , 1993; Gale et al. , 1993) proposed sentence alignment techniques based on dynamic programming, using sentence length and lexical mapping information." I05-2012,J93-2003,o,"(Haruno et al. , 1996; Kay et al. , 1993) applied iterative refinement algorithms to sentence level alignment tasks." I05-2012,J93-2003,o,"In this paper, we propose an alignment algorithm between English and Korean conceptual units (or between English and Korean term constituents) in English-Korean technical term pairs based on IBM Model (Brown et al. , 1993)."
I05-2014,J93-2003,o,"2 Overview 2.1 The word segmentation problem As statistical machine translation systems basically rely on the notion of words through their lexicon models (BROWN et al. , 1993), they are usually capable of outputting sentences already segmented into words when they translate into languages like Chinese or Japanese." I05-4010,J93-2003,o,"Large volumes of training data of this kind are indispensable for constructing statistical translation models (Brown et al. , 1993; Melamed, 2000), acquiring bilingual lexicon (Gale and Church, 1991; Melamed, 1997), and building example-based machine translation (EBMT) systems (Nagao, 1984; Carl and Way, 2003; Way and Gough, 2003)." I05-5001,J93-2003,o,"One promising approach extends standard Statistical Machine Translation (SMT) techniques (e.g. , Brown et al. , 1993; Och & Ney, 2000, 2003) to the problems of monolingual paraphrase identification and generation." I08-1013,J93-2003,o,"4 Pattern switching The compositional translation presents problems which have been reported by (Baldwin and Tanaka, 2004; Brown et al., 1993): Fertility SWTs and MWTs are not translated by a term of a same length." I08-1024,J93-2003,o,"These probabilities are estimated with IBM model 1 (Brown et al., 1993) on parallel corpora." I08-1033,J93-2003,o,"Most existing methods treat word tokens as basic alignment units (Brown et al., 1993; Vogel et al., 1996; Deng and Byrne, 2005), however, many languages have no explicit word boundary markers, such as Chinese and Japanese." 
I08-1068,J93-2003,o,"5.1.2 Learning Translation Model According to the standard statistical translation model (Brown et al., 1993), we can find the optimal model M by maximizing the probability of generating queries from documents or M = argmax_M ∏_{i=1}^{N} P(Q_i|D_i;M) Table 1: Sample user profile (qw, dw, P(qw|dw,u)): journal kdd 0.0176; journal conference 0.0123; journal journal 0.0176; journal sigkdd 0.0088; journal discovery 0.0211; journal mining 0.0017; journal acm 0.0088; music music 0.0375; music purchase 0.0090; music mp3 0.0090; music listen 0.0180; music mp3.com 0.0450; music free 0.0008. To find the optimal word translation probabilities P(qw|dw;M), we can use the EM algorithm." I08-1068,J93-2003,o,"The details of the algorithm can be found in the literature for statistical translation models, such as (Brown et al., 1993)." I08-1068,J93-2003,n,"IBM Model1 (Brown et al., 1993) is a simplistic model which takes no account of the subtler aspects of language translation including the way word order tends to differ across languages." I08-2104,J93-2003,o,"In the proposed method, the statistical machine translation (SMT) (Brown et al., 1993) is deeply incorporated into the question answering process, instead of using the SMT as the preprocessing before the mono-lingual QA process as in the previous work." I08-2104,J93-2003,o,"In this paper, we use IBM model 1 (Brown et al., 1993) in order to get the probability P(Q|DA) as follows." I08-6006,J93-2003,o,"Most current transliteration systems use a generative model for transliteration such as freely available GIZA++1 (Och and Ney , 2000), an implementation of the IBM alignment models (Brown et al., 1993)." J00-1004,J93-2003,o,Brown et al. 1993). J00-1004,J93-2003,n,"At the same time, we believe our method has advantages over the approach developed initially at IBM (Brown et al. 1990; Brown et al. 1993) for training translation systems automatically."
J00-2004,J93-2003,o,"Dagan, Church, and Gale (1993) expanded on this idea by replacing Brown et al.'s (1988) word alignment parameters, which were based on absolute word positions in aligned segments, with a much smaller set of relative offset parameters." J00-2004,J93-2003,n,"A word order correlation bias, as well as the phrase structure biases in Brown et al.'s (1993b) Models 4 and 5, would be less beneficial with noisier training bitexts or for language pairs with less similar word order." J00-2004,J93-2003,o,"Choosing the most advantageous, Hiemstra has published parts of the translational distributions of certain words, induced using both his method and Brown et al.'s (1993b) Model 1 from the same training bitext." J00-2004,J93-2003,o,"Due to the parameter interdependencies introduced by the one-to-one assumption, we are unlikely to find a method for decomposing the assignments into parameters that can be estimated independently of each other as in Brown et al. [1993b, Equation 26]." J00-2004,J93-2003,o,Brown et al. 1993). J00-2004,J93-2003,o,"Evaluation 6.1 Evaluation at the Token Level This section compares translation model estimation methods A, B, and C to each other and to Brown et al.'s (1993b) Model 1." J00-2004,J93-2003,o,"Until now, translation models have been evaluated either subjectively (e.g. White and O'Connell 1993) or using relative metrics, such as perplexity with respect to other models (Brown et al. 1993b)." J03-1002,J93-2003,o,"An analysis of the alignments shows that smoothing the fertility probabilities significantly reduces the frequently occurring problem of rare words forming garbage collectors in that they tend to align with too many words in the other language (Brown, Della Pietra, Della Pietra, Goldsmith, et al. 1993)." J03-1005,J93-2003,o,The fertility for the null word is treated specially (for details see Brown et al. [1993]). J03-1005,J93-2003,o,For placing the head the center function center(i) (Brown et al.
[1993] uses the notation ⊙_i) is used: the average position of the source words with which the target word e_i1 is aligned. J04-2003,J93-2003,o,"Many existing systems for statistical machine translation (García-Varea and Casacuberta 2001; Germann et al. 2001; Nießen et al. 1998; Och, Tillmann, and Ney 1999) implement models presented by Brown, Della Pietra, Della Pietra, and Mercer (1993): The correspondence between the words in the source and the target strings is described by alignments that assign target word positions to each source word position." J04-2003,J93-2003,o,"The translation models they presented in various papers between 1988 and 1993 (Brown et al. 1988; Brown et al. 1990; Brown, Della Pietra, Della Pietra, and Mercer 1993) are commonly referred to as IBM models 1-5, based on the numbering in Brown, Della Pietra, Della Pietra, and Mercer (1993)." J04-2004,J93-2003,o,"These results were achieved using the statistical alignments provided by model 5 (Brown et al. 1993; Och and Ney 2000) and smoothed 11-grams and 6-grams, respectively." J04-2004,J93-2003,o,In the following section we show how this drawback can be overcome using statistical alignments (Brown et al. 1993). J04-4002,J93-2003,p,"Yet the modeling, training, and search methods have also improved since the field of statistical machine translation was pioneered by IBM in the late 1980s and early 1990s (Brown et al. 1990; Brown et al. 1993; Berger et al. 1994)." J04-4002,J93-2003,o,"As an alternative to the often used source-channel approach (Brown et al. 1993), we directly model the posterior probability Pr(e_1^I | f_1^J) (Och and Ney 2002)." J05-3002,J93-2003,o,Knight and Marcu (2000) treat reduction as a translation process using a noisy-channel model (Brown et al. 1993). J05-4003,J93-2003,o,One such model is the IBM Model 1 (Brown et al. 1993).
J05-4004,J93-2003,o,"We compare against several competing systems, the first of which is based on the original IBM Model 4 for machine translation (Brown et al. 1993) and the HMM machine translation alignment model (Vogel, Ney, and Tillmann 1996) as implemented in the GIZA++ package (Och and Ney 2003)." J05-4004,J93-2003,n,"Our system outperforms competing approaches, including the standard machine translation alignment models (Brown et al. 1993; Vogel, Ney, and Tillmann 1996) and the state-of-the-art Cut and Paste summary alignment technique (Jing 2002)." J05-4004,J93-2003,o,"One obvious first approach would be to run a simpler model for the first iteration (for example, Model 1 from machine translation (Brown et al. 1993), which tends to be very recall oriented) and use this to see subsequent iterations of the more complex model." J05-4004,J93-2003,o,"In the context of headline generation, simple statistical models are used for aligning documents and headlines (Banko, Mittal, and Witbrock 2000; Berger and Mittal 2000; Schwartz, Zajic, and Dorr 2002), based on IBM Model 1 (Brown et al. 1993)." J06-4004,J93-2003,o,"For these first SMT systems, translation-model probabilities at the sentence level were approximated from word-based translation models that were trained by using bilingual corpora (Brown et al. 1993)." J06-4004,J93-2003,o,This feature is implemented by using the IBM-1 lexical parameters (Brown et al. 1993; Och et al. 2004). J06-4004,J93-2003,o,"More specifically, the latter system uses the IBM-1 lexical parameters (Brown et al. 1993) for computing the translation probabilities of two possible new tuples: the one resulting when the null-aligned-word is attached to Table 6 Evaluation results for experiments on n-gram size incidence." J06-4004,J93-2003,p,"According to our experience, the best performance is achieved when the union of the source-to-target and target-to-source alignment sets (IBM models; Brown et al. 
[1993]) is used for tuple extraction (some experimental results regarding this issue are presented in Section 4.2.2)." J06-4004,J93-2003,o,"The first SMT systems were developed in the early nineties (Brown et al. 1990, 1993)." J07-1003,J93-2003,o,"Therefore, we determine the maximal translation probability of the target word e over the source sentence words: p_ibm1(e|f_1^J) = max_{j=0,…,J} p(e|f_j) (18) where f_0 is the empty source word (Brown et al. 1993)." J07-2003,J93-2003,o,The basic phrase-based model is an instance of the noisy-channel approach (Brown et al. 1993). J07-3002,J93-2003,p,Introduction Automatic word alignment (Brown et al. 1993) is a vital component of all statistical machine translation (SMT) approaches. J09-1002,J93-2003,o,"Different approaches have been proposed for modeling Pr(s,a|t) in Equation (8): Zero-order models such as model 1, model 2, and model 3 (Brown et al. 1993) and the first-order models such as model 4, model 5 (Brown et al. 1993), hidden Markov model (Ney et al. 2000), and model 6 (Och and Ney 2003)." J09-1002,J93-2003,o,"Note that the translation direction is inverted from what would be normally expected; correspondingly the models built around this equation are often called inverted translation models (Brown et al. 1990, 1993)." J93-1001,J93-2003,o,"Four alternatives are proposed in these special issues: (1) Brent (1993), (2) Briscoe and Carroll (this issue), (3) Hindle and Rooth (this issue), and (4) Weischedel et al." J93-1001,J93-2003,o,"This is a particularly exciting area in computational linguistics as evidenced by the large number of contributions in these special issues: Biber (1993), Brent (1993), Hindle and Rooth (this issue), Pustejovsky et al."
J96-1001,J93-2003,o,"Related Work The recent availability of large amounts of bilingual data has attracted interest in several areas, including sentence alignment (Gale and Church 1991b; Brown, Lai, and Mercer 1991; Simard, Foster and Isabelle 1992; Gale and Church 1993; Chen 1993), word alignment (Gale and Church 1991a; Brown et al. 1993; Dagan, Church, and Gale 1993; Fung and McKeown 1994; Fung 1995b), alignment of groups of words (Smadja 1992; Kupiec 1993; van der Eijk 1993), and statistical translation (Brown et al. 1993)." J97-2004,J93-2003,o,Notice that most in-context and dictionary translations of source words are bounded within the same category in a typical thesaurus such as the LLOCE (McArthur 1992) and CILIN (Mei et al. 1993). J97-2004,J93-2003,o,The above observations can be stated formally from the perspective of Brown et al.'s (1993) Model 2. J97-2004,J93-2003,n,"In terms of alignment, this word-number difference means that multiword connections must be considered, a task which is beyond the reach of methods proposed in recent alignment works based on Brown et al.'s (1993) Model 1 and 2." J97-3002,J93-2003,o,"Parallel bilingual corpora have been shown to provide a rich source of constraints for statistical analysis (Brown et al. 1990; Gale and Church 1991; Gale, Church, and Yarowsky 1992; Church 1993; Brown et al. 1993; Dagan, Church, and Gale 1993)." J97-3002,J93-2003,o,"The usual Chinese NLP architecture first preprocesses input text through a word segmentation module (Chiang et al. 1992; Lin, Chiang, and Su 1992, 1993; Chang and Chen 1993; Wu and Tseng 1993; Sproat et al.
1994; Wu and Fung 1994), but, clearly, bilingual parsing will be hampered by any errors arising from segmentation ambiguities that could not be resolved in the isolated monolingual context because even if the Chinese segmentation is acceptable monolingually, it may not agree with the words present in the English sentence." J97-3002,J93-2003,o,The later IBM models are formulated to prefer collocations (Brown et al. 1993). J98-4003,J93-2003,o,"(p. 18) Whether this is a useful perspective for machine translation is debatable (Brown et al. 1993; Knoblock 1996)--however, it is a dead-on description of transliteration." J99-1003,J93-2003,o,Then they adapted Brown et al.'s (1993) statistical translation Model 2 to work with this model of cooccurrence. J99-1003,J93-2003,o,"A limitation of Church's method, and therefore also of Dagan, Church, and Gale's method, is that orthographic cognates exist only among languages with similar alphabets (Church et al. 1993)." J99-1003,J93-2003,o,"Although the above statement was made about translation problems faced by human translators, recent research (Brown et al. 1993; Melamed 1996b) suggests that it also applies to problems in machine translation." J99-1003,J93-2003,o,"For example, bilingual lexicographers can use bitexts to discover new cross-language lexicalization patterns (Catizone, Russell, and Warwick 1993; Gale and Church 1991b); students of foreign languages can use one half of a bitext to practice their reading skills, referring to the other half for translation when they get stuck (Nerbonne et al. 1997)." N03-1010,J93-2003,o,"1 Introduction Most of the current work in statistical machine translation builds on word replacement models developed at IBM in the early 1990s (Brown et al. , 1990, 1993; Berger et al. , 1994, 1996)." N03-1017,J93-2003,o,"For more information on these models, please refer to Brown et al. [1993]." 
N03-1017,J93-2003,o,"As the first method, we learn phrase alignments from a corpus that has been word-aligned by a training toolkit for a word-based translation model: the Giza++ [Och and Ney, 2000] toolkit for the IBM models [Brown et al. , 1993]." N03-1019,J93-2003,n,"The ATTM attempts to overcome the deficiencies of word-to-word translation models (Brown et al. , 1993) through the use of phrasal translations." N03-2036,J93-2003,n,"1 Phrase-based Unigram Model Various papers use phrase-based translation systems (Och et al. , 1999; Marcu and Wong, 2002; Yamada and Knight, 2002) that have shown to improve translation quality over single-word based translation systems introduced in (Brown et al. , 1993)." N03-4001,J93-2003,n,"By segmenting words into morphemes, we can improve the performance of natural language systems including machine translation (Brown et al. 1993) and information retrieval (Franz, M. and McCarley, S. 2002)." N04-1008,J93-2003,o,"For comparison purposes, we consider two different algorithms for our AnswerExtraction module: one that does not bridge the lexical chasm, based on N-gram cooccurrences between the question terms and the answer terms; and one that attempts to bridge the lexical chasm using Statistical Machine Translation inspired techniques (Brown et al. , 1993) in order to find the best answer for a given question." N04-1008,J93-2003,o,"The mapping of answer terms to question terms is modeled using Black et al.'s (1993) simplest model, called IBM Model 1." N04-1021,J93-2003,o,"We are given a source (Chinese) sentence f = f_1^J = f_1,…,f_j,…,f_J, which is to be translated into a target (English) sentence e = e_1^I = e_1,…,e_i,…,e_I. Among all possible target sentences, we will choose the sentence with the highest probability: ê_1^I = argmax_{e_1^I} {Pr(e_1^I|f_1^J)} (1) As an alternative to the often used source-channel approach (Brown et al.
, 1993), we directly model the posterior probability Pr(e_1^I|f_1^J) (Och and Ney, 2002) using a log-linear combination of feature functions." N04-1021,J93-2003,o,"4.1 Model 1 Score We used IBM Model 1 (Brown et al. , 1993) as one of the feature functions." N04-4003,J93-2003,o,"1 Introduction The statistical machine translation framework (SMT) formulates the problem of translating a sentence from a source language S into a target language T as the maximization problem of the conditional probability: T̂ = argmax_T p(S|T) · p(T), (1) where p(S|T) is called a translation model (TM), representing the generation probability from T into S, p(T) is called a language model (LM) and represents the likelihood of the target language (Brown et al. , 1993)." N04-4015,J93-2003,o,Introduction Translation of two languages with highly different morphological structures as exemplified by Arabic and English poses a challenge to successful implementation of statistical machine translation models (Brown et al. 1993). N04-4026,J93-2003,o,"The orientation model is related to the distortion model in (Brown et al. , 1993), but we do not compute a block alignment during training." N06-1002,J93-2003,o,"We use word probability tables p(t | s) and p(s | t) estimated by IBM Model 1 (Brown et al. 1993)."
N06-1003,J93-2003,n,"By increasing the size of the basic unit of translation, phrase-based machine translation does away with many of the problems associated with the original word-based formulation of statistical machine translation (Brown et al. , 1993). [Figure 1: Percent of unique unigrams, bigrams, trigrams, and 4-grams from the Europarl Spanish test sentences for which translations were learned in increasingly large training corpora]" N06-1003,J93-2003,p,"1 Introduction As with many other statistical natural language processing tasks, statistical machine translation (Brown et al. , 1993) produces high quality results when ample training data is available." N06-1013,J93-2003,n,"stance, the IBM models (Brown et al. , 1993) can be improved by adding more context dependencies into the translation model using a ME framework rather than using only p(f_j|e_i) (Garcia-Varea et al. , 2002)." N06-1013,J93-2003,o,"1 Introduction Word alignment, detection of corresponding words between two sentences that are translations of each other, is usually an intermediate step of statistical machine translation (MT) (Brown et al. , 1993; Och and Ney, 2003; Koehn et al. , 2003), but also has been shown useful for other applications such as construction of bilingual lexicons, word-sense disambiguation, projection of resources, and cross-language information retrieval." N06-1056,J93-2003,o,"More specifically, a statistical word alignment model (Brown et al. , 1993) is used to acquire a bilingual lexicon consisting of NL substrings coupled with their translations in the target MRL." N06-1056,J93-2003,o,"In this work, we use the GIZA++ implementation (Och and Ney, 2003) of IBM Model 5 (Brown et al. , 1993)." N06-2051,J93-2003,n,"Lexical relationships under the standard IBM models (Brown et al.
, 1993) do not account for many-to-many mappings, and phrase extraction relies heavily on the accuracy of the IBM word-to-word alignment." N06-4004,J93-2003,o,"Alignment quality can be further improved when the chunking procedure is based on translation lexicons from IBM Model-1 alignment model (Brown et al. , 1993)." N06-4004,J93-2003,o,"MTTK provides implementations of various alignment models including IBM Model-1, Model-2 (Brown et al. , 1993), HMM-based word-to-word alignment model (Vogel et al. , 1996; Och and Ney, 2003) and HMM-based word-to-phrase alignment model (Deng and Byrne, 2005)." N06-4004,J93-2003,o,"At the finest level, this involves the alignment of words and phrases within two sentences that are known to be translations (Brown et al. , 1993; Och and Ney, 2003; Vogel et al. , 1996; Deng and Byrne, 2005)." N07-1008,J93-2003,o,"1.2 Statistical modeling for translation Earlier work in statistical machine translation (Brown et al. , 1993) is based on the noisy-channel formulation where T̂ = argmax_T p(T|S) = argmax_T p(T)p(S|T) (1) where the target language model p(T) is further decomposed as p(T) ∝ ∏_i p(t_i|t_{i-1}, . . ., t_{i-k+1}) where k is the order of the language model and the translation model p(S|T) has been modeled by a sequence of five models with increasing complexity (Brown et al. , 1993)." N07-1008,J93-2003,o,"The translation model is estimated via the EM algorithm or approximations that are bootstrapped from the previous model in the sequence as introduced in (Brown et al. , 1993)." N07-1008,J93-2003,o,"3 A Categorization of Block Styles In (Brown et al. , 1993), multi-word cepts (which are realized in our block concept) are discussed and the authors state that when a target sequence is sufficiently different from a word by word translation, only then should the target sequence be promoted to a cept." N07-1008,J93-2003,o,"Following the perspective of (Brown et al.
, 1993), a minimal set of phrase blocks with lengths (m, n) where either m or n must be greater than zero results in the following types of blocks: 1." N07-1022,J93-2003,n,"Compared to earlier word-based methods such as IBM Models (Brown et al. , 1993), phrase-based methods such as PHARAOH are much more effective in producing idiomatic translations, and are currently the best performing methods in SMT (Koehn and Monz, 2006)." N07-1022,J93-2003,o,"These rules are learned using a word alignment model, which finds an optimal mapping from words to MR predicates given a set of training sentences and their correct MRs. Word alignment models have been widely used for lexical acquisition in SMT (Brown et al. , 1993; Koehn et al. , 2003)." N07-1046,J93-2003,o,"4.1.3 Letter Lexical Transliteration Similar to IBM Model-1 (Brown et al. , 1993), we use a bag-of-letter generative model within a block to approximate the lexical transliteration equivalence: P(f_j^{j+l}|e_i^{i+k}) = ∏_{j′=j}^{j+l} Σ_{i′=i}^{i+k} P(f_{j′}|e_{i′}) P(e_{i′}|e_i^{i+k}), (10) where P(e_{i′}|e_i^{i+k}) ≈ 1/(k+1) is approximated by a bag-of-word unigram." N07-1046,J93-2003,o,"Standard SMT alignment models (Brown et al. , 1993) are used to align letter-pairs within named entity pairs for transliteration." N07-1046,J93-2003,o,"3 Bi-Stream HMMs for Transliteration Standard IBM translation models (Brown et al. , 1993) can be used to obtain letter-to-letter translations." N07-1057,J93-2003,o,"We then train IBM models (Brown et al. , 1993) using the GIZA++ package (Och and Ney, 2000)." N07-1061,J93-2003,o,"1 Introduction The rapid and steady progress in corpus-based machine translation (Nagao, 1981; Brown et al. , 1993) has been supported by large parallel corpora such as the Arabic-English and Chinese-English parallel corpora distributed by the Linguistic Data Consortium and the Europarl corpus (Koehn, 2005), which consists of 11 European languages."
N07-1064,J93-2003,o,"To improve raw output from decoding, Portage relies on a rescoring strategy: given a list of n-best translations from the decoder, the system reorders this list, this time using a more elaborate log-linear model, incorporating more feature functions, in addition to those of the decoding model: these typically include IBM-1 and IBM-2 model probabilities (Brown et al. , 1993) and an IBM-1-based feature function designed to detect whether any word in one language appears to have been left without satisfactory translation in the other language; all of these feature functions can be used in both language directions, i.e. source-to-target and target-to-source." N07-2009,J93-2003,o,"3 GM Representation of IBM MT Models In this section we present a GM representation for IBM model 3 (Brown et al. , 1993) in fig." N07-2009,J93-2003,o,"We attribute the difference in M3/4 scores to the fact we use a Viterbi-like training procedure (i.e. , we consider a single configuration of the hidden variables in EM training) while GIZA uses pegging (Brown et al. , 1993) to sum over a set of likely hidden variable configurations in EM." N07-2010,J93-2003,o,"Similar to work in image retrieval (Barnard et al. , 2003), we cast the problem in terms of Machine Translation: given a paired corpus of words and a set of video event representations to which they refer, we make the IBM Model 1 assumption and use the expectation-maximization method to estimate the parameters (Brown et al. , 1993): p(word|video_C) = 1/(l+1) Σ_{j=1}^{m} p(word|video_{a_j}) (1) This paired corpus is created from a corpus of raw video by first abstracting each video into the feature streams described above." N07-2022,J93-2003,o,"1 Introduction In the first SMT systems (Brown et al. , 1993), word alignment was introduced as a hidden variable of the translation model." N07-2034,J93-2003,o,"A monotonous segmentation copes with monotonous alignments, that is, j < k ⇒ a_j < a_k following the notation of (Brown et al.
, 1993)." N09-1013,J93-2003,o,"Standard CI Model 1 training, initialised with a uniform translation table so that t(e|f) is constant for all source/target word pairs (f,e), was run on untagged data for 10 iterations in each direction (Brown et al., 1993; Deng and Byrne, 2005b)." N09-1013,J93-2003,o,(1993) introduce IBM Models 1-5 for alignment modelling; Vogel et al. N09-1013,J93-2003,o,"2.1 EM parameter estimation We train using Expectation Maximisation (EM), optimising the log probability of the training set {e^(s),f^(s)}_{s=1}^{S} (Brown et al., 1993)." N09-1013,J93-2003,o,"Then P(e_1^I|f_1^J) = Σ_{a_1^I} P(e_1^I,a_1^I|f_1^J) (Brown et al., 1993)." N09-1025,J93-2003,o,"Following previous work in statistical MT (Brown et al., 1993), we envision a noisy-channel model in which a language model generates English, and then a translation model transforms English trees into Chinese." N09-2002,J93-2003,o,"2 IBM Model 4 In this paper we focus on the translation model defined by IBM Model 4 (Brown et al., 1993)." N09-2005,J93-2003,o,"The triplet lexicon model presented in this work can also be interpreted as an extension of the standard IBM model 1 (Brown et al., 1993) with an additional trigger." N09-2024,J93-2003,o,"In this paper, sentence pairs are extracted by a simple model that is based on the so-called IBM Model1 (Brown et al., 1993)." N09-2055,J93-2003,o,"The translation problem can be statistically formulated as in (Brown et al., 1993)." P00-1041,J93-2003,o,"The work reported in this paper is most closely related to work on statistical machine translation, particularly the IBM-style work on CANDIDE (Brown et al. , 1993)." P01-1008,J93-2003,o,"Examples of such contexts are verb-object relations and noun-modifier relations, which were traditionally used in word similarity tasks from non-parallel corpora (Pereira et al. , 1993; Hatzivassiloglou and McKeown, 1993)."
P01-1008,J93-2003,o,"This characteristic of our corpus is similar to problems with noisy and comparable corpora (Veronis, 2000), and it prevents us from using methods developed in the MT community based on clean parallel corpora, such as (Brown et al. , 1993)." P01-1008,J93-2003,o,"We also record for each token its derivational root, using the CELEX (Baayen et al. , 1993) database." P01-1026,J93-2003,o,"P(d) P_L(d) (4) Statistical approaches to language modeling have been used in much NLP research, such as machine translation (Brown et al. , 1993) and speech recognition (Bahl et al. , 1983)." P01-1027,J93-2003,o,"Similar techniques are used in (Papineni et al. , 1996; Papineni et al. , 1998) for so-called direct translation models instead of those proposed in (Brown et al. , 1993)." P01-1027,J93-2003,o,"If we assign a probability Pr(e_1^I|f_1^J) to each pair of strings (e_1^I, f_1^J), then according to Bayes decision rule, we have to choose the target string that maximizes the product of the target language model Pr(e_1^I) and the string translation model Pr(f_1^J|e_1^I). Many existing systems for statistical machine translation (Berger et al. , 1994; Wang and Waibel, 1997; Tillmann et al. , 1997; Nießen et al. , 1998) make use of a special way of structuring the string translation model like proposed by (Brown et al. , 1993): The correspondence between the words in the source and the target string is described by alignments that assign one target word position to each source word position." P01-1027,J93-2003,o,"That is obtained using the Viterbi alignment provided by a translation model as described in (Brown et al. , 1993)." P01-1027,J93-2003,o,"This is exactly the standard lexicon probability p(f|e) employed in the translation model described in (Brown et al. , 1993) and in Section 2."
P01-1050,J93-2003,o,"In this framework, the source language, let's say English, is assumed to be generated by a noisy probabilistic source.1 Most of the current statistical MT systems treat this source as a sequence of words (Brown et al. , 1993)." P01-1050,J93-2003,o,"First, we show how one can use an existing statistical translation model (Brown et al. , 1993) in order to automatically derive a statistical TMEM." P01-1050,J93-2003,o,"2 The IBM Model 4 For the work described in this paper we used a modified version of the statistical machine translation tool developed in the context of the 1999 Johns Hopkins Summer Workshop (Al-Onaizan et al. , 1999), which implements IBM translation model 4 (Brown et al. , 1993)." P01-1050,J93-2003,o,"The rest of the factors denote distortion probabilities (d), which capture the probability that words change their position when translated from one language into another; the probability of some French words being generated from an invisible English NULL element (p1), etc. See (Brown et al. , 1993) or (Germann et al. , 2001) for a detailed discussion of this translation model and a description of its parameters." P01-1067,J93-2003,o,"Mathematical details are fully described in (Brown et al. , 1993)." P01-1067,J93-2003,o,"Let f_j^{j+l} = f_j … f_{j+l} be a substring of f from the word f_j with length l. Note this notation is different from (Brown et al. , 1993)." P01-1067,J93-2003,o,"Following (Brown et al. , 1993) and the other literature in TM, this paper only focuses the details of TM." P01-1067,J93-2003,o,"To make this paper comparable to (Brown et al. , 1993), we use English-French notation in this section." P02-1038,J93-2003,o,"1 perform the following maximization: ê_1^I = argmax_{e_1^I} {Pr(e_1^I)Pr(f_1^J|e_1^I)} (2) This approach is referred to as source-channel approach to statistical MT.
Sometimes, it is also referred to as the fundamental equation of statistical MT (Brown et al., 1993)." P02-1038,J93-2003,o,"If the language model Pr(e^I_1) = p_γ(e^I_1) depends on parameters γ and the translation model Pr(f^J_1 | e^I_1) = p_θ(f^J_1 | e^I_1) depends on parameters θ, then the optimal parameter values are obtained by maximizing the likelihood on a parallel training corpus f^S_1, e^S_1 (Brown et al., 1993): θ = argmax_θ ∏_{s=1}^{S} p_θ(f_s|e_s) (3) γ = argmax_γ ∏_{s=1}^{S} p_γ(e_s) (4)" P02-1039,J93-2003,p,"For the IBM models defined by a pioneering paper (Brown et al., 1993), a decoding algorithm based on a left-to-right search was described in (Berger et al., 1996)." P02-1051,J93-2003,o,"The score for a given candidate e is given by a modified IBM Model 1 probability (Brown et al., 1993) as follows: P(e|f) = α max_a P(e, a|f) (4) P(e, a|f) = α ∏_{j=1}^{m} t(f_j | e_{a_j}) (5) where l is the length of e, m is the length of f, α is a scaling factor based on the number of matches of e found, and a_j is the index of the English word aligned with f_j according to alignment a. The probability t(f_j | e_{a_j}) is a linear combination of the transliteration and translation score, where the translation score is a uniform probability over all dictionary entries for f_j. The scored matches form the list of translation candidates." P02-1052,J93-2003,o,"(Brown et al., 1990; Brown et al., 1993), a number of other algorithms have been developed." P03-1003,J93-2003,p,"Being inspired by the success of noisy-channel-based approaches in applications as diverse as speech recognition (Jelinek, 1997), part of speech tagging (Church, 1988), machine translation (Brown et al.
, 1993), information retrieval (Berger and Lafferty, 1999), and text summarization (Knight and Marcu, 2002), we develop a noisy channel model for QA." P03-1003,J93-2003,o,"(see Brown et al., 1993 for a detailed mathematical description of the model and the formula for computing the probability of an alignment and target string given a source string)." P03-1003,J93-2003,o,"To help our model learn that it is desirable to copy answer words into the question, we add to each corpus a list of identical dictionary word pairs w_i w_i. For each corpus, we use GIZA (Al-Onaizan et al., 1999), a publicly available SMT package that implements the IBM models (Brown et al., 1993), to train a QA noisy-channel model that maps flattened answer parse trees, obtained using the cut procedure described in Section 3.1, into questions." P03-1012,J93-2003,o,"These constraints tie words in such a way that the space of alignments cannot be enumerated as in IBM models 1 and 2 (Brown et al., 1993)." P03-1012,J93-2003,o,"1 Introduction Word alignments were first introduced as an intermediate result of statistical machine translation systems (Brown et al., 1993)." P03-1016,J93-2003,o,"Equation (2) is rewritten as: p(e_col|c_col) = p(e_1|c_1) p(e_2|c_2) p(r_e|r_c) (3) It is equal to a word translation model if we take the relation type in the collocations as an element like a word, which is similar to Model 1 in (Brown et al., 1993)." P03-1016,J93-2003,o,"2.3.4 Word Translation Probability Estimation Many methods are used to estimate word translation probabilities from unparallel or parallel bilingual corpora (Koehn and Knight, 2000; Brown et al., 1993)." P03-1039,J93-2003,o,"The next section briefly reviews the word alignment based statistical machine translation (Brown et al., 1993)." P03-1039,J93-2003,p,"The former term P(E) is called a language model, representing the likelihood of E.
The latter term P(J|E) is called a translation model, representing the generation probability from E into J. As an implementation of P(J|E), the word alignment based statistical translation (Brown et al., 1993) has been successfully applied to similar language pairs, such as French-English and German-English, but not to drastically different ones, such as Japanese-English." P03-1040,J93-2003,o,"As a baseline, we use an IBM Model 4 (Brown et al., 1993) system with a greedy decoder (Germann et al., 2001)." P03-1041,J93-2003,o,"Re-ordering effects across languages have been modeled in several ways, including word-based (Brown et al., 1993), template-based (Och et al., 1999) and syntax-based (Yamada, Knight, 2001)." P03-1041,J93-2003,o,"The traditional framework presented in (Brown et al., 1993) assumes a generative process where the source sentence is passed through a noisy stochastic process to produce the target sentence." P03-1041,J93-2003,p,"Within the generative model, the Bayes reformulation is used to estimate P(e|f) ∝ P(e) P(f|e), where P(e) is considered the language model, and P(f|e) is the translation model; the IBM (Brown et al., 1993) models being the de facto standard." P03-1050,J93-2003,o,"2.2 The Translation Model We adapted Model 1 (Brown et al., 1993) to our purposes." P03-1051,J93-2003,o,"(Darwish 2002), is not very useful for applications like statistical machine translation (Brown et al. 1993), for which an accurate word-to-word alignment between the source and the target languages is critical for high quality translations." P03-1051,J93-2003,n,"By segmenting words into morphemes, we can improve the performance of natural language systems including machine translation (Brown et al. 1993) and information retrieval (Franz, M. and McCarley, S. 2002)."
P03-2017,J93-2003,o,"4, we see strong parallels between TransType and ITU: language model enumerating word sequences vs. ... Initially statistical MT used a noisy-channel approach [Brown et al. 1993]; but recently [Och and Ney 2002] have introduced a more general framework based on the maximum-entropy principle, which shows nice prospects in terms of flexibility and learnability." P03-2017,J93-2003,o,"He then goes on to adapt the conventional noisy channel MT model of [Brown et al. 1993] to NLU, where extracting a semantic representation from an input text corresponds to finding: argmax(Sem) {p(Input|Sem) p(Sem)}, where p(Sem) is a model for generating semantic representations, and p(Input|Sem) is a model for the relation between semantic representations and corresponding texts." P04-1022,J93-2003,o,"Some studies have been done for acquiring collocation translations using parallel corpora (Smadja et al., 1996; Kupiec, 1993; Echizen-ya et al., 2003)." P04-1022,J93-2003,o,"Most previous research in translation knowledge acquisition is based on parallel corpora (Brown et al., 1993)." P04-1022,J93-2003,o,"We have: p(c_tri|e_tri) = p(c_1, c_2, r_c | e_tri) = p(c_1|e_tri) p(c_2|c_1, e_tri) p(r_c|c_1, c_2, e_tri) (6) Assumption 2: For an English triple e_tri, assume that c_i (i ∈ {1,2}) only depends on e_i, and r_c only depends on r_e. Equation (6) is rewritten as: p(c_tri|e_tri) = p(c_1|e_1) p(c_2|e_2) p(r_c|r_e) (7) Notice that p(c_1|e_1) and p(c_2|e_2) are translation probabilities within triples; they are different from the unrestricted probabilities such as the ones in IBM models (Brown et al., 1993)." P04-1022,J93-2003,o,"These range from two-word to multi-word, with or without syntactic structure (Smadja 1993; Lin, 1998; Pearce, 2001; Seretan et al. 2003)." P04-1023,J93-2003,o,"1 Introduction Machine translation systems based on probabilistic translation models (Brown et al., 1993) are generally trained using sentence-aligned parallel corpora."
P04-1063,J93-2003,o,"ALM does this by using alignment models from the statistical machine translation literature (Brown et al., 1993)." P04-1064,J93-2003,n,"Although the first three are particular cases where N=1 and/or M=1, the distinction is relevant, because most word-based translation models (e.g. IBM models (Brown et al., 1993)) can typically not accommodate general M-N alignments." P04-1064,J93-2003,o,"Note that our use of cepts differs slightly from that of (Brown et al., 1993, sec. 3), inasmuch as cepts may not overlap, according to our definition." P04-1064,J93-2003,o,"Obtaining a word-aligned corpus usually involves training a word-based translation model (Brown et al., 1993) in each direction and combining the resulting alignments." P04-1066,J93-2003,p,"1 Introduction IBM Model 1 (Brown et al., 1993a) is a word-alignment model that is widely used in working with parallel bilingual corpora." P04-1066,J93-2003,o,"The first of these nonstructural problems with Model 1, as standardly trained, is that rare words in the source language tend to act as garbage collectors (Brown et al., 1993b; Och and Ney, 2004), aligning to too many words in the target language." P04-1083,J93-2003,p,"Bootstrapping a PMTG from a lower-dimensional PMTG and a word-to-word translation model is similar in spirit to the way that regular grammars can help to estimate CFGs (Lari & Young, 1990), and the way that simple translation models can help to bootstrap more sophisticated ones (Brown et al., 1993)." P04-1083,J93-2003,o,"This kind of synchronizer stands in contrast to more ad-hoc approaches (e.g., Matsumoto, 1993; Meyers, 1996; Wu, 1998; Hwa et al., 2002)." P04-3002,J93-2003,n,"2 2.1 Word Alignment Adaptation Bi-directional Word Alignment In statistical translation models (Brown et al., 1993), only one-to-one and more-to-one word alignment links can be found."
P04-3002,J93-2003,o,"1 Introduction Bilingual word alignment is first introduced as an intermediate result in statistical machine translation (SMT) (Brown et al., 1993)." P04-3005,J93-2003,n,"For the results in this paper, we have used Pointwise Mutual Information (PMI) instead of IBM Model 1 (Brown et al., 1993), since (Rogati and Yang, 2004) found it to be as effective on Springer, but faster to compute." P04-3014,J93-2003,p,"Syntax-light alignment models such as the five IBM models (Brown et al., 1993) and their relatives have proved to be very successful and robust at producing word-level alignments, especially for closely related languages with similar word order and mostly local reorderings, which can be captured via simple models of relative word distortion." P05-1009,J93-2003,o,"In Machine Translation, for example, sentences are produced using application-specific decoders, inspired by work on speech recognition (Brown et al., 1993), whereas in Summarization, summaries are produced as either extracts or using task-specific strategies (Barzilay, 2003)." P05-1032,J93-2003,o,"translation including the joint probability phrase-based model (Marcu and Wong, 2002) and a variant on the alignment template approach (Och and Ney, 2004), and contrast them to the performance of the word-based IBM Model 4 (Brown et al., 1993)." P05-1032,J93-2003,n,"By increasing the size of the basic unit of translation, phrase-based machine translation does away with many of the problems associated with the original word-based formulation of statistical machine translation (Brown et al., 1993), in particular: The Brown et al." P05-1033,J93-2003,o,"The basic phrase-based model is an instance of the noisy-channel approach (Brown et al., 1993), in which the translation of a French sentence f into an English sentence e ... (Throughout this paper, we follow the convention of Brown et al. of designating the source and target languages as French and English, respectively.)"
P05-1057,J93-2003,o,"Statistical approaches, which depend on a set of unknown parameters that are learned from training data, try to describe the relationship between a bilingual sentence pair (Brown et al., 1993; Vogel and Ney, 1996)." P05-1057,J93-2003,o,"1 Introduction Word alignment, which can be defined as an object for indicating the corresponding words in a parallel text, was first introduced as an intermediate result of statistical translation models (Brown et al., 1993)." P05-1057,J93-2003,o,"If e has length l and f has length m, there are 2^{lm} possible alignments between e and f (Brown et al., 1993)." P05-1058,J93-2003,o,"2 Statistical Word Alignment According to the IBM models (Brown et al., 1993), the statistical word alignment model can be generally represented as in Equation (1)." P05-1058,J93-2003,o,"This simplified version does not take word classes into account as described in (Brown et al., 1993)." P05-1058,J93-2003,o,"Pr(f, a|e) = binom(m − φ_0, φ_0) p_0^{m − 2φ_0} p_1^{φ_0} ∏_{i=1}^{l} φ_i! n(φ_i|e_i) ∏_{j=1}^{m} t(f_j|e_{a_j}) ∏_{j: a_j ≠ 0} d(j|a_j, l, m) (3) A cept is defined as the set of target words connected to a source word (Brown et al., 1993)." P05-1058,J93-2003,o,"1 Introduction Word alignment was first proposed as an intermediate result of statistical machine translation (Brown et al., 1993)." P05-1066,J93-2003,o,"2 Background 2.1 Previous Work 2.1.1 Research on Phrase-Based SMT The original work on statistical machine translation was carried out by researchers at IBM (Brown et al., 1993)." P05-1066,J93-2003,n,"These methods go beyond the original IBM machine translation models (Brown et al., 1993), by allowing multi-word units (phrases) in one language to be translated directly into phrases in another language." P05-1067,J93-2003,p,"1 Introduction Statistical approaches to machine translation, pioneered by (Brown et al., 1993), achieved impressive performance by leveraging large amounts of parallel corpora."
P05-1067,J93-2003,o,"As a unified approach, we augment the SDIG by adding all the possible word pairs (e_i, f_j) as a parallel ET pair and using the IBM Model 1 (Brown et al., 1993) word-to-word translation probability as the ET translation probability." P05-1067,J93-2003,o,"In comparison, we deployed the GIZA++ MT modeling toolkit, which is an implementation of the IBM Models 1 to 4 (Brown et al., 1993; Al-Onaizan et al., 1999; Och and Ney, 2003)." P05-1067,J93-2003,o,"In our implementation, the IBM Model 1 (Brown et al., 1993) is used." P05-1068,J93-2003,o,"Most of the phrase-based translation models have adopted the noisy-channel based IBM style models (Brown et al., 1993): e^I_1 = argmax_{e^I_1} Pr(f^J_1|e^I_1) Pr(e^I_1) (1) In these models, we have two types of knowledge: the translation model, Pr(f^J_1|e^I_1), and the language model, Pr(e^I_1)." P05-1068,J93-2003,o,"3.1 Learning Chunk-based Translation We learn chunk alignments from a corpus that has been word-aligned by a training toolkit for word-based translation models: the Giza++ (Och and Ney, 2000) toolkit for the IBM models (Brown et al., 1993)." P05-1074,J93-2003,o,"The original formulation of statistical machine translation (Brown et al., 1993) was defined as a word-based operation." P05-2016,J93-2003,o,"The first work on SMT done at IBM (Brown et al., 1990; Brown et al., 1992; Brown et al., 1993; Berger et al., 1994), used a noisy-channel model, resulting in what Brown et al." P05-2022,J93-2003,p,"There are basically two kinds of systems working at these segmentation levels: the most widespread rely on statistical models, in particular the IBM ones (Brown et al., 1993); others combine simpler association measures with different kinds of linguistic information (Ahrenberg et al., 2000; Barbu, 2004)." P06-1002,J93-2003,o,"2 Related Work Starting with the IBM models (Brown et al.
, 1993), researchers have developed various statistical word alignment systems based on different models, such as hidden Markov models (HMM) (Vogel et al., 1996), log-linear models (Och and Ney, 2003), and similarity-based heuristic methods (Melamed, 2000)." P06-1009,J93-2003,o,"Most current SMT systems (Och and Ney, 2004; Koehn et al., 2003) use a generative model for word alignment such as the freely available GIZA++ (Och and Ney, 2003), an implementation of the IBM alignment models (Brown et al., 1993)." P06-1011,J93-2003,o,"The first one, GIZA-Lex, is obtained by running the GIZA++ implementation of the IBM word alignment models (Brown et al., 1993) on the initial parallel corpus." P06-1032,J93-2003,p,"In this paper, we show that a noisy channel model instantiated within the paradigm of Statistical Machine Translation (SMT) (Brown et al., 1993) can successfully provide editorial assistance for non-native writers." P06-1032,J93-2003,o,"Rather than learning how strings in one language map to strings in another, however, translation now involves learning how systematic patterns of errors in ESL learners' English map to corresponding patterns in native English. 2.2 A Noisy Channel Model of ESL Errors If ESL error correction is seen as a translation task, the task can be treated as an SMT problem using the noisy channel model of (Brown et al., 1993): here the L2 sentence produced by the learner can be regarded as having been corrupted by noise in the form of interference from his or her L1 model and incomplete language models internalized during language learning." P06-1062,J93-2003,o,"is combined with T^E_{i+1,j} to be aligned with T^F_{m,n}, then Pr(T^F_{[m,n]} | T^E_{[1,i]}, T^E_{[i+1,j]}, A) = ..., where K is the degree of N^E_i. Finally, the node translation probability is modeled as Pr(N^F_l | N^E_i) = Pr(l(N^F_l) | l(N^E_i)) Pr(t(N^F_l) | t(N^E_i)). And the text translation probability Pr(t^F | t^E) is modeled using IBM Model 1 (Brown et al. 1993)."
P06-1062,J93-2003,o,"However, current sentence alignment models, (Brown et al. 1991; Gale & Church 1991; Wu 1994; Chen 1993; Zhao and Vogel, 2002; etc)." P06-1067,J93-2003,o,"N-gram language models have also been used in Statistical Machine Translation (SMT) as proposed by (Brown et al., 1990; Brown et al., 1993)." P06-1067,J93-2003,o,"Distortion models were first proposed by (Brown et al., 1993) in the so-called IBM Models." P06-1077,J93-2003,n,"1 Introduction Phrase-based translation models (Marcu and Wong, 2002; Koehn et al., 2003; Och and Ney, 2004), which go beyond the original IBM translation models (Brown et al., 1993) by modeling translations of phrases rather than individual words, have been suggested to be the state-of-the-art in statistical machine translation by empirical evaluations." P06-1082,J93-2003,o,"Use of sententially aligned corpora for word alignment has already been recommended in (Brown et al., 1993)." P06-1082,J93-2003,o,"1 Introduction Several approaches including statistical techniques (Gale and Church, 1991; Brown et al., 1993), lexical techniques (Huang and Choi, 2000; Tiedemann, 2003) and hybrid techniques (Ahrenberg et al., 2000), have been pursued to design schemes for word alignment which aim at establishing links between words of a source language and a target language in a parallel corpus." P06-1091,J93-2003,o,"5 Discussion and Future Work The work in this paper substantially differs from previous work in SMT based on the noisy channel approach presented in (Brown et al., 1993)." P06-1097,J93-2003,p,"1 Introduction The most widely applied training procedure for statistical machine translation IBM model 4 (Brown et al., 1993) unsupervised training followed by post-processing with symmetrization heuristics (Och and Ney, 2003) yields low quality word alignments." P06-1097,J93-2003,o,"We first recast the problem of estimating the IBM models (Brown et al.
, 1993) in a discriminative framework, which leads to an initial increase in word-alignment accuracy." P06-1097,J93-2003,o,"4 Semi-Supervised Training for Word Alignments Intuitively, in approximate EM training for Model 4 (Brown et al., 1993), the E-step corresponds to calculating the probability of all alignments according to the current model estimate, while the M-step is the creation of a new model estimate given a probability distribution over alignments (calculated in the E-step)." P06-1098,J93-2003,o,"1 Introduction In a classical statistical machine translation, a foreign language sentence f^J_1 = f_1, f_2, ..., f_J is translated into another language, i.e. English, e^I_1 = e_1, e_2, ..., e_I by seeking a maximum likely solution of: e^I_1 = argmax_{e^I_1} Pr(e^I_1|f^J_1) (1) = argmax_{e^I_1} Pr(f^J_1|e^I_1) Pr(e^I_1) (2) The source channel approach in Equation 2 independently decomposes translation knowledge into a translation model and a language model, respectively (Brown et al., 1993)." P06-1122,J93-2003,p,"Aligning tokens in parallel sentences using the IBM Models (Brown et al., 1993), (Och and Ney, 2003) may require less information than full-blown translation since the task is constrained by the source and target tokens present in each sentence pair." P06-2005,J93-2003,o,"We thus propose to adapt the statistical machine translation model (Brown et al., 1993; Zens and Ney, 2004) for SMS text normalization." P06-2005,J93-2003,o,"Assuming that one SMS word is mapped exactly to one English word in the channel model under an alignment, we need to consider only two types of probabilities: the alignment probabilities denoted by P_m and the lexicon mapping probabilities denoted by ... (Brown et al. 1993)." P06-2014,J93-2003,o,"The IBM models (Brown et al., 1993) benefit from a one-to-many constraint, where each target word has exactly one ... (Figure 1: A cohesion constraint violation: 'the tax causes unrest' / 'l'impôt cause le malaise')."
P06-2014,J93-2003,o,"Originally introduced as a byproduct of training statistical translation models in (Brown et al., 1993), word alignment has become the first step in training most statistical translation systems, and alignments are useful to a host of other tasks." P06-2061,J93-2003,o,"4.7 Fertility-Based Transducer In (Brown et al., 1993), three alignment models are described that include fertility models; these are IBM Models 3, 4, and 5." P06-2061,J93-2003,o,"In (Brown et al., 1994), the authors proposed a method to integrate the IBM translation model 2 (Brown et al., 1993) with an ASR system." P06-2061,J93-2003,o,"We rescore the ASR N-best lists with the standard HMM (Vogel et al., 1996) and IBM (Brown et al., 1993) MT models." P06-2065,J93-2003,o,"This is similar to Model 3 of (Brown et al., 1993), but without null-generated elements or re-ordering." P06-2065,J93-2003,o,"Learned vowels include (in order of generation probability): e, a, o, u, i, y. Learned sonorous consonants include: n, s, r, l, m. Learned non-sonorous consonants include: d, c, t, l, b, m, p, q. The model bootstrapping is good for dealing with too many parameters; we see a similar approach in Brown et al.'s (1993) march from Model 1 to Model 5." P06-2065,J93-2003,o,"Machine translation has code-like characteristics, and indeed, the initial models of (Brown et al., 1993) took a word-substitution/transposition approach, trained on a parallel text." P06-2065,J93-2003,p,"Such methods have also been a key driver of progress in statistical machine translation, which depends heavily on unsupervised word alignments (Brown et al., 1993)." P06-2070,J93-2003,o,"The corpus is aligned at the word level using IBM Model 4 (Brown et al., 1993)." P06-2092,J93-2003,o,"The alignment of sentences can be done sufficiently well using cues such as sentence length (Gale and Church, 1993) or cognates (Simard et al., 1992)."
P06-2092,J93-2003,o,"Word alignment, however, is almost exclusively done using statistics (Brown et al., 1993; Hiemstra, 1996; Vogel et al., 1999; Toutanova et al., 2002)." P06-2092,J93-2003,o,"2.2 Word Alignment Aligning below the sentence level is usually done using statistical models for machine translation (Brown et al., 1991; Brown et al., 1993; Hiemstra, 1996; Vogel et al., 1999) where any word of the target language is taken to be a possible translation for each source language word." P06-2092,J93-2003,o,"3.2 Word Order Differences Another problem that has been noticed as early as 1993 with the first research on word alignment (Brown et al., 1993) concerns the differences in word order between source and target language." P06-2092,J93-2003,o,"While simple statistical alignment models like IBM-1 (Brown et al., 1993) and the symmetric alignment approach by Hiemstra (1996) treat sentences as unstructured bags of words, the more sophisticated IBM-models by Brown et al." P06-2092,J93-2003,o,"1 Introduction Aligning parallel text, i.e. automatically setting the sentences or words in one text into correspondence with their equivalents in a translation, is a very useful preprocessing step for a range of applications, including but not limited to machine translation (Brown et al., 1993), cross-language information retrieval (Hiemstra, 1996), dictionary creation (Smadja et al., 1996) and induction of NLP-tools (Kuhn, 2004)." P06-2093,J93-2003,o,"The classical Bayes relation is used to introduce a target language model (Brown et al., 1993): e = argmax_e Pr(e|f) = argmax_e Pr(f|e) Pr(e) where Pr(f|e) is the translation model and Pr(e) is the target language model." P06-2093,J93-2003,o,"2 Statistical Translation Engine A word-based translation engine is used based on the so-called IBM-4 model (Brown et al., 1993)." P06-2103,J93-2003,o,"(A similar intuition holds for the Machine Translation models generically known as the IBM models (Brown et al.
, 1993), which assume that certain words in a source language sentence tend to trigger the usage of certain words in a target language translation of that sentence.)" P06-2107,J93-2003,o,"The methodology used (Brown et al., 1993) is based on the definition of a function Pr(t^I_1|s^J_1) that returns the probability that t^I_1 is a ... (Figure 1: Example of CAT system interactions to translate the Spanish source sentence into English: source: Transferir documentos explorados a otro directorio; interaction-0: Move documents scanned to other directory; interaction-1: Move s canned documents to other directory; interaction-2: Move scanned documents to a nother directory; interaction-3: Move scanned documents to another f older; acceptance: Move scanned documents to another folder)." P06-2107,J93-2003,o,"Models of this kind assume that an input word is generated by only one output word (Brown et al., 1993)." P06-2107,J93-2003,o,"These alignments can be obtained from single-word models (Brown et al., 1993) using the available public software GIZA++ (Och and Ney, 2003)." P06-2111,J93-2003,o,"For this we used two resources: CELEX, a linguistically annotated dictionary of English, Dutch and German (Baayen et al., 1993), and the Dutch snowball stemmer implementing a suffix stripping algorithm based on the Porter stemmer." P06-2111,J93-2003,p,"For the word alignment, we apply standard techniques derived from statistical machine translation using the well-known IBM alignment models (Brown et al., 1993) implemented in the open-source tool GIZA++ (Och, 2003)." P06-2112,J93-2003,o,"1 Introduction Word alignment was first proposed as an intermediate result of statistical machine translation (Brown et al., 1993)." P06-2112,J93-2003,o,"3 Statistical Word Alignment According to the IBM models (Brown et al., 1993), the statistical word alignment model can be generally represented as in equation (1)." P06-2117,J93-2003,o,"2 Statistical Word Alignment Model According to the IBM models (Brown et al.
, 1993), the statistical word alignment model can be generally represented as in equation (1)." P06-2117,J93-2003,o,"A cept is defined as the set of target words connected to a source word (Brown et al., 1993)." P06-2117,J93-2003,o,"1 Introduction Word alignment was first proposed as an intermediate result of statistical machine translation (Brown et al., 1993)." P06-2124,J93-2003,o,"Most current approaches emphasize within-sentence dependencies such as the distortion in (Brown et al., 1993), the dependency of alignment in HMM (Vogel et al., 1996), and syntax mappings in (Yamada and Knight, 2001)." P06-2124,J93-2003,o,"2.1 Baseline: IBM Model-1 The translation process can be viewed as operations of word substitutions, permutations, and insertions/deletions (Brown et al., 1993) in a noisy-channel modeling scheme at the parallel sentence-pair level." P06-2124,J93-2003,o,"(1) We follow the notations in (Brown et al., 1993) for English-French, i.e., e → f, although our models are tested, in this paper, for English-Chinese." P07-1001,J93-2003,o,"For instance, the most relaxed IBM Model-1, which assumes that any source word can be generated by any target word equally regardless of distance, can be improved by demanding a Markov process of alignments as in HMM-based models (Vogel et al., 1996), or implementing a distribution of number of target words linked to a source word as in IBM fertility-based models (Brown et al., 1993)." P07-1001,J93-2003,o,"For the simple bag-of-word bilingual LSA as described in Section 2.2.1, after SVD on the sparse matrix using the toolkit SVDPACK (Berry et al., 1993), all source and target words are projected into a low-dimensional (R = 88) LSA-space." P07-1001,J93-2003,o,"It can be applied to complicated models such as IBM Model-4 (Brown et al., 1993)." P07-1001,J93-2003,o,"We shall take the HMM-based word alignment model (Vogel et al., 1996) as an example and follow the notation of (Brown et al., 1993)."
P07-1001,J93-2003,o,"Berry et al. (1993)) to yield W ≈ U S V^T as Figure 3 shows, where, for some order R ≪ min(M,N) of the decomposition, U is an M×R left singular matrix with rows u_i, i = 1, ..., M; S is an R×R diagonal matrix of singular values s_1 ≥ s_2 ≥ ... ≥ s_R ≫ 0; and V is an N×R right singular matrix with rows v_j, j = 1, ..., N. For each i, the scaled R-vector u_i S may be viewed as representing w_i, the i-th word in the vocabulary, and similarly the scaled R-vector v_j S as representing d_j, the j-th document in the corpus." P07-1004,J93-2003,o,"These lists are rescored with the following models: (a) the different models used in the decoder which are described above, (b) two different features based on IBM Model 1 (Brown et al., 1993), (c) posterior probabilities for words, phrases, n-grams, and sentence length (Zens and Ney, 2006; Ueffing and Ney, 2007), all calculated over the N-best list and using the sentence probabilities which the baseline system assigns to the translation hypotheses." P07-1011,J93-2003,o,"Second, it can be applied to control the quality of parallel bilingual sentences mined from the Web, which are critical sources for a wide range of applications, such as statistical machine translation (Brown et al., 1993) and cross-lingual information retrieval (Nie et al., 1999)." P07-1016,J93-2003,o,"By treating a letter/character as a word and a group of letters/characters as a phrase or token unit in SMT, one can easily apply the traditional SMT models, such as the IBM generative model (Brown et al., 1993) or the phrase-based translation model (Crego et al., 2005) to transliteration." P07-1020,J93-2003,o,"Most of the previous work on statistical machine translation, as exemplified in (Brown et al., 1993), employs a word-alignment algorithm (such as GIZA++ (Och and Ney, 2003)) that provides local associations between source and target words."
P07-1039,J93-2003,o,"4.3 Baseline We use a standard log-linear phrase-based statistical machine translation system as a baseline: the GIZA++ implementation of IBM word alignment model 4 (Brown et al., 1993; Och and Ney, 2003), the refinement and phrase-extraction heuristics described in (Koehn et al., 2003), minimum-error-rate training ... (More specifically, we choose the first English reference from the 7 references and the Chinese sentence to construct new sentence pairs.)" P07-1039,J93-2003,o,"To quickly (and approximately) evaluate this phenomenon, we trained the statistical IBM word-alignment model 4 (Brown et al., 1993), using the GIZA++ software (Och and Ney, 2003) for the following language pairs: Chinese-English, Italian-English, and Dutch-English, using the IWSLT-2006 corpus (Takezawa et al., 2002; Paul, 2006) for the first two language pairs, and the Europarl corpus (Koehn, 2005) for the last one." P07-1039,J93-2003,o,"They can be seen as extensions of the simpler IBM models 1 and 2 (Brown et al., 1993)." P07-1039,J93-2003,o,"Most current statistical models (Brown et al., 1993; Vogel et al., 1996; Deng and Byrne, 2005) treat the aligned sentences in the corpus as sequences of tokens that are meant to be words; the goal of the alignment process is to find links between source and target words." P07-1047,J93-2003,o,"This situation is very similar to the training process of translation models in statistical machine translation (Brown et al., 1993), where a parallel corpus is used to find the mappings between words from different languages by exploiting their co-occurrence patterns." P07-1047,J93-2003,p,"Finally, the translation model can be formalized as the following optimization problem: argmax_θ log Pr(D; θ) s.t. Σ_{j=1}^{m_w} Pr(w_j|o_k) = 1, ∀k This optimization problem can be solved by the EM algorithm (Brown et al., 1993)."
P07-1082,J93-2003,o,"(2004) argue that precise alignment can improve transliteration effectiveness, experimenting on English-Chinese data and comparing IBM models (Brown et al. , 1993) with phoneme-based alignments using direct probabilities." P07-1090,J93-2003,n,"In pursuit of better translation, phrase-based models (Och and Ney, 2004) have significantly improved the quality over classical word-based models (Brown et al. , 1993)." P07-1092,J93-2003,o,"1 Introduction Statistical machine translation (Brown et al. , 1993) has seen many improvements in recent years, most notably the transition from word- to phrase-based models (Koehn et al. , 2003)." P07-1108,J93-2003,n,"1 Introduction For statistical machine translation (SMT), phrase-based methods (Koehn et al. , 2003; Och and Ney, 2004) and syntax-based methods (Wu, 1997; Alshawi et al. 2000; Yamada and Knight, 2001; Melamed, 2004; Chiang, 2005; Quirk et al. , 2005; Mellebeek et al. , 2006) outperform word-based methods (Brown et al. , 1993)." P07-3010,J93-2003,o,"Similarity measures can be based on any level of linguistic analysis: semantic similarity relies on context vectors (Rapp, 1999), while syntactic similarity is based on the alignment of parallel corpora (Brown et al. , 1993)." P08-1010,J93-2003,o,"3.1 Model-based Phrase Pair Posterior In a statistical generative word alignment model (Brown et al., 1993), it is assumed that (i) a random variable a specifies how each target word f_j is generated by (therefore aligned to) a source word e_{a_j}; and (ii) the likelihood function f(f,a|e) specifies a generative procedure from the source sentence to the target sentence." P08-1012,J93-2003,o,"The traditional estimation method for word alignment models is the EM algorithm (Brown et al., 1993) which iteratively updates parameters to maximize the likelihood of the data."
P08-1019,J93-2003,o,"More specifically, by using translation probabilities, we can rewrite equation (11) and (12) as follow: [equations (13) and (14); symbols lost in extraction], where the translation probability denotes the probability that one topic term is the translation of another. In our experiments, to estimate this probability, we used the collections of question titles and question descriptions as the parallel corpus and the IBM model 1 (Brown et al., 1993) as the alignment model." P08-1082,J93-2003,o,"The text was split at the sentence level, tokenized and PoS tagged, in the style of the Wall Street Journal Penn TreeBank (Marcus et al., 1993)." P08-1082,J93-2003,o,"This probability is computed using IBM's Model 1 (Brown et al., 1993): P(Q|A) = prod_{q in Q} P(q|A) (3); P(q|A) = (1-lambda) P_ml(q|A) + lambda P_ml(q|C) (4); P_ml(q|A) = sum_{a in A} (T(q|a) P_ml(a|A)) (5), where the probability that the question term q is generated from answer A, P(q|A), is smoothed using the prior probability that the term q is generated from the entire collection of answers C, P_ml(q|C)." P09-1011,J93-2003,o,"This model is similar in spirit to IBM model 1 (Brown et al., 1993)." P09-1088,J93-2003,o,"We use the GIZA++ implementation of IBM Model 4 (Brown et al., 1993; Och and Ney, 2003) coupled with the phrase extraction heuristics of Koehn et al."
P09-1088,J93-2003,n,"1 Introduction The field of machine translation has seen many advances in recent years, most notably the shift from word-based (Brown et al., 1993) to phrase-based models which use token n-grams as translation units (Koehn et al., 2003)." P09-1098,J93-2003,o,"1 Introduction Bilingual data (including bilingual sentences and bilingual terms) are critical resources for building many applications, such as machine translation (Brown, 1993) and cross language information retrieval (Nie et al., 1999)." P09-2037,J93-2003,p,"This is an important feature from the MT viewpoint, since the decomposition into translation model and language model proved to be extremely useful in statistical MT since (Brown et al., 1993)." P09-2057,J93-2003,o,"1 is based on several real-valued feature functions f_i. Their computation is based on the so-called IBM Model-1 (Brown et al., 1993)." P09-2058,J93-2003,p,"Widely used alignment models, such as the IBM Model series (Brown et al., 1993) and HMM, all assume one-to-many alignments." P93-1002,J93-2003,o,"However, modeling word order under translation is notoriously difficult (Brown et al. , 1993), and it is unclear how much improvement in accuracy a good model of word order would provide." P93-1002,J93-2003,o,"The natural next step in sentence alignment is to account for word ordering in the translation model, e.g., the models described in (Brown et al. , 1993) could be used." P95-1033,J93-2003,o,"Since Chinese text is not orthographically separated into words, the standard methodology is to first preprocess input texts through a segmentation module (Chiang et al. 1992; Lin et al. 1992; Chang & Chen 1993; Lin et al. 1993; Wu & Tseng 1993; Sproat et al. 1994)." P95-1033,J93-2003,o,"1 Introduction Parallel corpora have been shown to provide an extremely rich source of constraints for statistical analysis (e.g. , Brown et al. 1990; Gale & Church 1991; Gale et al. 1992; Church 1993; Brown et al. 1993; Dagan et al.
1993; Dagan & Church 1994; Fung & Church 1994; Wu & Xia 1994; Fung & McKeown 1994)." P95-1033,J93-2003,o,"Aside from purely linguistic interest, bracket structure has been empirically shown to be highly effective at constraining subsequent training of, for example, stochastic context-free grammars (Pereira & Schabes 1992; Black et al. 1993)." P95-1033,J93-2003,o,"A simpler, related idea of penalizing distortion from some ideal matching pattern can be found in the statistical translation (Brown et al. 1990; Brown et al. 1993) and word alignment (Dagan et al. 1993; Dagan & Church 1994) models." P95-1034,J93-2003,n,"This approach addresses the problematic aspects of both pure knowledge-based generation (where incomplete knowledge is inevitable) and pure statistical bag generation (Brown et al. , 1993) (where the statistical system has no linguistic guidance)." P95-1034,J93-2003,o,"However, compositional approaches to lexical choice have been successful whenever detailed representations of lexical constraints can be collected and entered into the lexicon (e.g. , (Elhadad, 1993; Kukich et al. , 1994))." P96-1020,J93-2003,p,"Corpus-based or example-based MT (Sato and Nagao, 1990; Sumita and Iida, 1991) and statistical MT (Brown et al. , 1993) systems provide the easiest customizability, since users have only to supply a collection of source and target sentence pairs (a bilingual corpus)." P96-1021,J93-2003,o,"Estimation of the parameters has been described elsewhere (Brown et al. , 1993)." P96-1021,J93-2003,o,"Such linguistic-preprocessing techniques could [Footnote: Various models have been constructed by the IBM team (Brown et al. , 1993).]" P96-1023,J93-2003,o,"The node mapping function f for the entire tree thus has a different role from the alignment function in the IBM statistical translation model (Brown et al. 1990, 1993); the role of the latter includes the linear ordering of words in the target string."
P97-1022,J93-2003,o,"Model 1 is the word-pair translation model used in simple machine translation and understanding models (Brown et al. , 1993; Epstein et al. , 1996)." P97-1022,J93-2003,o,"In earlier IBM translation systems (Brown et al. , 1993) each English word would be generated by, or ""aligned to"", exactly one formal language word." P97-1022,J93-2003,o,"This paper extends the IBM Machine Translation Group's concept of fertility (Brown et al. , 1993) to the generation of clumps for natural language understanding." P97-1037,J93-2003,o,"Among all possible target strings, we will choose the one with the highest probability which is given by Bayes' decision rule (Brown et al. 1993): e_1^I = argmax {Pr(e_1^I | f_1^J)} = argmax {Pr(e_1^I) Pr(f_1^J | e_1^I)}." P97-1037,J93-2003,o,"Models describing these types of dependencies are referred to as alignment models (Brown et al. , 1993), (Dagan et al. 1993)." P97-1037,J93-2003,o,"The concept of these alignments is similar to the ones introduced by (Brown et al. , 1993), but we will use another type of dependence in the probability distributions." P97-1037,J93-2003,o,"Therefore the probability of alignment aj for position j should have a dependence on the previous alignment position a_{j-1}: P(a_j | a_{j-1}). A similar approach has been chosen by (Dagan et al. , 1993) and (Vogel et al. 1996)." P97-1037,J93-2003,o,"The IBM model 1 (Brown et al. , 1993) is used to find an initial estimate of the translation probabilities." P97-1039,J93-2003,o,"correspondence points associated with frequent token types (Church, 1993) or by deleting frequent token types from the bitext altogether (Dagan et al. , 1993)." P97-1039,J93-2003,o,"One important application of bitext maps is the construction of translation lexicons (Dagan et al. , 1993) and, as discussed, translation lexicons are an important information source for bitext mapping." P97-1039,J93-2003,o,"In addition to their use in machine translation (Sato & Nagao, 1990; Brown et al.
, 1993; Melamed, 1997), translation models can be applied to machine-assisted translation (Sato, 1992; Foster et al. , 1996), cross-lingual information retrieval (SIGIR, 1996), and gisting of World Wide Web pages (Resnik, 1997)." P97-1039,J93-2003,o,"Bitexts also play a role in less automated applications such as concordancing for bilingual lexicography (Catizone et al. , 1993; Gale & Church, 1991b), computer-assisted language learning, and tools for translators (e.g." P97-1046,J93-2003,o,5 Effectiveness Comparison 5.1 English-Chinese ATIS Models Both the transfer and transducer systems were trained and evaluated on English-to-Mandarin Chinese translation of transcribed utterances from the ATIS corpus (Hirschman et al. 1993). P97-1047,J93-2003,o,"Therefore, P(g|e) is the sum of the probabilities of generating g from e over all possible alignments A, in which the position i in the target sentence g is aligned to the position ai in the source sentence e: P(g|e) = sum_{a_1=0}^{l} ... sum_{a_m=0}^{l} prod_{j=1}^{m} t(g_j | e_{a_j}) a(a_j | j, l, m) = prod_{j=1}^{m} sum_{i=0}^{l} t(g_j | e_i) a(i | j, l, m) (3). (Brown et al. , 1993) also described how to use the EM algorithm to estimate the parameters a(i | j, l, m) and t(g | e) in the aforementioned model." P97-1047,J93-2003,o,"1.2 Decoding in Statistical Machine Translation (Brown et al. , 1993) and (Vogel, Ney, and Tillman, 1996) have discussed the first two of the three problems in statistical machine translation." P97-1047,J93-2003,n,"Although the authors of (Brown et al. , 1993) stated that they would discuss the search problem in a follow-up article, so far there have been no publications devoted to the decoding issue for statistical machine translation." P97-1047,J93-2003,o,"We dealt with this by either limiting the translation probability from the null word (Brown et al. , 1993) at the hypothetical 0-position (Brown et al.
, 1993) over a threshold during the EM training, or setting S_{H0}(j) to a small probability pi instead of 0 for the initial null hypothesis H0." P97-1063,J93-2003,o,"(Macklovitch, 1994; Melamed, 1996b)), concordancing for bilingual lexicography (Catizone et al. , 1993; Gale & Church, 1991), computer-assisted language learning, corpus linguistics (Melby." P97-1063,J93-2003,o,"for their models (Brown et al. , 1993b)." P97-1063,J93-2003,o,"The co-occurrence relation can also be based on distance in a bitext space, which is a more general representations of bitext correspondence (Dagan et al. , 1993; Resnik & Melamed, 1997), or it can be restricted to words pairs that satisfy some matching predicate, which can be extrinsic to the model (Melamed, 1995; Melamed, 1997)." P97-1063,J93-2003,o,"Models of translational equivalence that are ignorant of indirect associations have ""a tendency to be confused by collocates"" (Dagan et al. , 1993)." P97-1063,J93-2003,o,"It is analogous to the step in other translation model induction algorithms that sets all probabilities below a certain threshold to negligible values (Brown et al. , 1990; Dagan et al. , 1993; Chen, 1996)." P97-1063,J93-2003,p,"1 Introduction Over the past decade, researchers at IBM have developed a series of increasingly sophisticated statistical models for machine translation (Brown et al. , 1988; Brown et al. , 1990; Brown et al. , 1993a)." P98-1069,J93-2003,o,"This approach has also been used by (Dagan and Itai, 1994; Gale et al. , 1992; Schütze, 1992; Gale et al. , 1993; Yarowsky, 1995; Gale and Church, ...) [Footnote: Lunar is not an unknown word in English, Yeltsin finds its translation in the 4-th candidate.]" P98-1069,J93-2003,o,"Some of the early statistical terminology translation methods are (Brown et al. , 1993; Wu and Xia, 1994; Dagan and Church, 1994; Gale and Church, 1991; Kupiec, 1993; Smadja et al. , 1996; Kay and Röscheisen, 1993; Fung and Church, 1994; Fung, 1995b)."
P98-1069,J93-2003,o,"In the years since the appearance of the first papers on using statistical models for bilingual lexicon compilation and machine translation (Brown et al. , 1993; Brown et al. , 1991; Gale and Church, 1993; Church, 1993; Simard et al. , 1992), large amount of human effort and time has been invested in collecting parallel corpora of translated texts." P98-1074,J93-2003,o,"1 Introduction Early works, (Gale and Church, 1993; Brown et al. , 1993), and to a certain extent (Kay and Röscheisen, 1993), presented methods to extract bilingual" P98-1074,J93-2003,p,"(Brown et al. , 1993) then extended their method and established a sound probabilistic model series, relying on different parameters describing how words within parallel sentences are aligned to each other." P98-1074,J93-2003,o,"On the other hand, (Dagan et al. , 1993) proposed an algorithm, borrowed to the field of dynamic programming and based on the output of their previous work, to find the best alignment, subject to certain constraints, between words in parallel sentences." P98-2158,J93-2003,n,"(Vogel et al. , 1996) report better perplexity results on the Verbmobil Corpus with their HMM-based alignment model in comparison to Model 2 of (Brown et al. , 1993)." P98-2158,J93-2003,o,"1.2 Alignment with Mixture Distribution Several papers have discussed the first issue, especially the problem of word alignments for bilingual corpora (Brown et al. , 1993), (Dagan et al. , 1993), (Kay and Röscheisen, 1993), (Fung and Church, 1994), (Vogel et al. , 1996)." P98-2158,J93-2003,o,"In our search procedure, we use a mixture-based alignment model that slightly differs from the model introduced as Model 2 in (Brown et al. , 1993)." P98-2158,J93-2003,o,"It assumes that the distance of the positions relative to the diagonal of the (j, i) plane is the dominating factor: p(i|j, J, I) = r(i - j I/J) / sum_{i'=1}^{I} r(i' - j I/J) (7). As described in (Brown et al.
, 1993), the EM algorithm can be used to estimate the parameters of the model." P98-2158,J93-2003,o,"The underlying translation model is Model 2 from (Brown et al. , 1993)." P98-2162,J93-2003,o,"The simple model 1 (Brown et al. , 1993) for the translation of a SL sentence d = d_1...d_l in a TL sentence e = e_1...e_m assumes that every TL word is generated independently as a mixture of the SL words: P(e|d) ~ prod_{j=1}^{m} sum_{i=0}^{l} t(e_j|d_i) (2). In the equation above t(e_j|d_i) stands for the probability that e_j is generated by d_i." P98-2162,J93-2003,o,"In the refined model 2 (Brown et al. , 1993) alignment probabilities a(i|j, l, m) are included to model the effect that the position of a word influences the position of its translation." P98-2162,J93-2003,o,"The application of this algorithm to the basic problem using a parallel bilingual corpus aligned on the sentence level is described in (Brown et al. , 1993)." P98-2221,J93-2003,o,"1 Introduction Most (if not all) statistical machine translation systems employ a word-based alignment model (Brown et al. , 1993; Vogel, Ney, and Tillman, 1996; Wang and Waibel, 1997), which treats words in a sentence as independent entities and ignores the structural relationship among them." P98-2221,J93-2003,o,"The subset was the neighboring alignments (Brown et al. , 1993) of the Viterbi alignments discovered by Model 1 and Model 2." P98-2230,J93-2003,o,"Estimation of the parameters has been described elsewhere (Brown et al. , 1993)." P98-2230,J93-2003,o,"Various models have been constructed by the IBM team (Brown et al. , 1993)." P99-1027,J93-2003,o,"2 Translation Model The algorithm for fast translation, which has been described previously in some detail (McCarley and Roukos, 1998) and used with considerable success in TREC (Franz et al. , 1999), is a descendent of IBM Model 1 (Brown et al. , 1993)."
P99-1027,J93-2003,o,"This model is trained on approximately 5 million sentence pairs of Hansard (Canadian parliamentary) and UN proceedings which have been aligned on a sentence-by-sentence basis by the methods of (Brown et al. , 1991), and then further aligned on a word-by-word basis by methods similar to (Brown et al. , 1993)." W00-0507,J93-2003,o,"2.2.1 The evaluator The evaluator is a function p(t|t', s) which assigns to each target-text unit t an estimate of its probability given a source text s and the tokens t' which precede t in the current translation of s. Our approach to modeling this distribution is based to a large extent on that of the IBM group (Brown et al. , 1993), but it differs in one significant aspect: whereas the IBM model involves a ""noisy channel"" decomposition, we use a linear combination of separate predictions from a language model p(t|t') and a translation model p(t|s)." W00-0507,J93-2003,o,"Both models are based on IBM translation model 2 (Brown et al. , 1993) which has the property that it generates tokens independently." W00-0507,J93-2003,o,"This formula follows the convention of (Brown et al. , 1993) in letting s_0 designate the null state." W00-0508,J93-2003,o,"In (Knight and Al-Onaizan, 1998), finite-state machine translation is based on (Brown et al. , 1993) and is used for decoding the target language string." W00-0508,J93-2003,o,"The statistical machine translation approach is based on the noisy channel paradigm and the Maximum-A-Posteriori decoding algorithm (Brown et al. , 1993)." W00-0508,J93-2003,o,"The sequence W_S is thought as a noisy version of W_T and the best guess W_T is then computed as W_T = argmax_{W_T} P(W_T|W_S) = argmax_{W_T} P(W_S|W_T) P(W_T) (1). In (Brown et al. , 1993) they propose a method for maximizing P(W_T|W_S) by estimating P(W_T) and P(W_S|W_T) and solving the problem in equation 1." W00-0508,J93-2003,o,"Our approach to statistical machine translation differs from the model proposed in (Brown et al.
, 1993) in that: We compute the joint model P(W_S, W_T) from the bilanguage corpus to account for the direct mapping of the source sentence W_S into the target sentence W_T that is ordered according to the source language word order." W00-0707,J93-2003,p,"In previous work (Foster, 2000), I described a Maximum Entropy/Minimum Divergence (MEMD) model (Berger et al. , 1996) for p(w|h_i, s) which incorporates a trigram language model and a translation component which is an analog of the well-known IBM translation model 1 (Brown et al. , 1993)." W00-0707,J93-2003,o,"The model consists of a set of word-pair parameters p(t|s) and position parameters p(j|i,l); in model 1 (IBM1) the latter are fixed at 1/(l + 1), as each position, including the empty position 0, is considered equally likely to contain a translation for w. Maximum likelihood estimates for these parameters can be obtained with the EM algorithm over a bilingual training corpus, as described in (Brown et al. , 1993)." W00-0801,J93-2003,o,"It is an implementation of Models 1-4 of Brown et al. [1993], where each of these models produces a Viterbi alignment." W01-1208,J93-2003,o,"P(d) ~ P_L(d) (4) Statistical approaches to language modeling have been used in much NLP research, such as machine translation (Brown et al. , 1993) and speech recognition (Bahl et al. , 1983)." W01-1405,J93-2003,o,"In order to minimize the number of decision errors at the sentence level, we have to choose the sequence of target words e_1^I according to the equation (Brown et al. 1993): e_1^I = argmax_{e_1^I} {Pr(e_1^I | f_1^J)} = argmax_{e_1^I} {Pr(e_1^I) Pr(f_1^J | e_1^I)}. Here, the posterior probability Pr(e_1^I | f_1^J) is decomposed into the language model probability Pr(e_1^I) and the string translation probability Pr(f_1^J | e_1^I)." W01-1405,J93-2003,o,Models describing these types of dependencies are referred to as alignment mappings (Brown et al. 1993): alignment mapping: j ->
i = a_j, which assigns a source word f_j in position j to a target word e_i in position i = a_j. W01-1405,J93-2003,o,"As a result, the string translation probability can be decomposed into a lexicon probability and an alignment probability (Brown et al. 1993)." W01-1405,J93-2003,p,"3 Experimental Results Whereas stochastic modelling is widely used in speech recognition, there are so far only a few research groups that apply stochastic modelling to language translation (Berger et al. 1994; Brown et al. 1993; Knight 1999)." W01-1407,J93-2003,o,"If we assign a probability Pr(e_1^I | f_1^J) to each pair of strings (e_1^I, f_1^J), then according to Bayes decision rule, we have to choose the English string that maximizes the product of the English language model Pr(e_1^I) and the string translation model Pr(f_1^J | e_1^I). Many existing systems for statistical machine translation (Wang and Waibel, 1997; Nieen et al. , 1998; Och and Weber, 1998) make use of a special way of structuring the string translation model like proposed by (Brown et al. , 1993): The correspondence between the words in the source and the target string is described by alignments which assign one target word position to each source word position." W01-1407,J93-2003,o,"Table 2 summarizes the characteristics of the training corpus used for training the parameters of Model 4 proposed in (Brown et al. , 1993)." W01-1408,J93-2003,o,"2 IBM Model 4 Various statistical alignment models of the form Pr(f_1^J, a_1^J | e_1^I) have been introduced in (Brown et al. , 1993; Vogel et al. , 1996; Och and Ney, 2000a)." W01-1408,J93-2003,o,"In this paper we use the so-called Model 4 from (Brown et al. , 1993)." W01-1408,J93-2003,o,"For a detailed description for Model 4 the reader is referred to (Brown et al. , 1993)." W01-1408,J93-2003,o,"They developed a simple heuristic function for Model 2 from (Brown et al.
, 1993) which was non-admissible." W01-1408,J93-2003,o,"Many statistical translation models (Brown et al. , 1993; Vogel et al. , 1996; Och and Ney, 2000b) try to model word-to-word correspondences between source and target words." W01-1409,J93-2003,o,(1993); Brown et al. W01-1409,J93-2003,o,"[Table] input | pegging(a) | transfer | correct | partially correct(b) | incorrect: 1 raw, no, M4 decoding(c), 7, 4, 4; 2 stemmed, yes, M4 decoding, 8, 3, 4; 3 stemmed, no, M4 decoding, 13, 2, 0; 4 raw, no, gloss, 13, 1, 1; 5a stemmed, yes, gloss, 8, 3, 4; 5b stemmed, yes, gloss, 12, 2, 1; 6 stemmed, no, gloss, 11, 2, 2. Notes: (a) pegging causes the training algorithm to consider a larger search space; (b) correct top level category but incorrect sub-category; (c) translation by maximizing the IBM Model 4 probability of the source/translation pair (Brown et al. , 1993; Brown et al. , 1995); classification might be performed by automatic procedures rather than humans." W01-1409,J93-2003,o,"We trained IBM Translation Model 4 (Brown et al. , 1993) both on our corpus alone and on the augmented corpus, using the EGYPT toolkit (Knight et al. , 1999; Al-Onaizan et al. , 1999), and then translated a number of texts using different translation models and different transfer methods, namely glossing (replacing each Tamil word by the most likely candidate from the translation tables created with the EGYPT toolkit) and Model 4 decoding (Brown et al. , 1995; Germann et al. , 2001)." W01-1410,J93-2003,o,"6 Concluding remarks Our work presents a set of improvements on previous state of the art of Grammar Association: first, by providing better language models to the original system described in (Vidal et al. , 1993); second, by setting the technique into a rigorous statistical framework, clarifying which kind of probabilities have to be estimated by association models; third, by developing a novel and especially adequate association model: Loco C.
On the other hand, though experimental results are quite good, we find them particularly relevant for pointing out directions to follow for further improvement of the Grammar Association technique." W01-1410,J93-2003,o,"However, in the Grammar Association context, when developing (using Bayes decomposition) the basic equations of the system presented in (Vidal et al. , 1993), it is said that the reverse model for [probability expression lost in extraction] does not seem to admit a simple factorization which is also correct and convenient, so crude heuristics were adopted in the mathematical development of the expression to be maximized." W01-1410,J93-2003,o,"Moreover, it was (without imposing determinism) the inference technique employed in (Vidal et al. , 1993)." W01-1410,J93-2003,o,"We based our design on the IBM models 1 and 2 (Brown et al. , 1993), but taking into account that our model must generate correct derivations in a given grammar, not any se... [Figure 3 diagrams lost in extraction] Figure 3: Using a category <animals> for ""snakes"", ""rats"" and ""people"" in the example of Figure 1." W01-1410,J93-2003,o,"We carefully implemented the original Grammar Association system described in (Vidal et al. , 1993), tuned empirically a couple of smoothing parameters, trained the models and, finally, obtained an [accuracy figure lost in extraction] of correct translations. Then, we studied the impact of: (1) sorting, as proposed in Section 3, the set of sentences presented to ECGI; (2) making language models deterministic and minimum; (3) constraining the best translation search to those sentences whose lengths have been seen, in the training set, related to the length of the input sentence."
W01-1413,J93-2003,o,"One interesting approach to extending the current system is to introduce a statistical translation model (Brown et al. , 1993) to filter out irrelevant translation candidates and to extract the most appropriate subpart from a long English sequence as the translation by locally aligning the Japanese and English sequences." W02-1012,J93-2003,o,"(1993) and the HMM alignment model of (Vogel et al. , 1996)." W02-1012,J93-2003,o,"We refer to f_1^J as the source language string and e_1^I as the target language string in accordance with the noisy channel terminology used in the IBM models of (Brown et al. , 1993)." W02-1018,J93-2003,o,"Intuitively, if we allow any Source words to be aligned to any Target words, the best alignment that we can come up with is the one in Figure 1.c. Sentence pair (S2, T2) offers strong evidence that b c in language S means the same thing as x in language T. On the basis of this evidence, we expect the system to also learn from sentence pair (S1, T1) that a in language S means the same thing as y in language T. Unfortunately, if one works with translation models that do not allow Target words to be aligned to more than one Source word as it is the case in the IBM models (Brown et al. , 1993) it is impossible to learn that the phrase b c in language S means the same thing as word x in language T. The IBM Model 4 (Brown et al. , 1993), for example, converges to the word alignments shown in Figure 1.b and learns the translation probabilities shown in Figure 1.a. Since in the IBM model one cannot link a Target word to more than a Source word, the training procedure [Footnote: To train the IBM-4 model, we used Giza (Al-Onaizan et al. , 1999).]" W02-1018,J93-2003,o,"For example, in our previous work (Marcu, 2001), we have used a statistical translation memory of phrases in conjunction with a statistical translation model (Brown et al. , 1993)." W02-1018,J93-2003,o,"In contrast with many previous approaches (Brown et al.
, 1993; Och et al. , 1999; Yamada and Knight, 2001), our model does not try to capture how Source sentences can be mapped into Target sentences, but rather how Source and Target sentences can be generated simultaneously." W02-1018,J93-2003,o,"1 Motivation Most of the noisy-channel-based models used in statistical machine translation (MT) (Brown et al. , 1993) are conditional probability models." W02-1018,J93-2003,o,"A variety of methods are used to account for the re-ordering stage: word-based (Brown et al. , 1993), template-based (Och et al. , 1999), and syntax-based (Yamada and Knight, 2001), to name just a few." W02-1019,J93-2003,o,"2 Word-to-Word Bitext Alignment We will study the problem of aligning an English sentence to a French sentence and we will use the word alignment of the IBM statistical translation models (Brown et al. , 1993)." W02-1019,J93-2003,o,"5.4 IBM-3 Word Alignment Models Since the true distribution over alignments is not known, we used the IBM-3 statistical translation model (Brown et al. , 1993) to approximate . This model is specified through four components: Fertility probabilities for words; Fertility probabilities for NULL; Word Translation probabilities; and Distortion probabilities." W02-1020,J93-2003,o,"The translation component is an analog of the IBM model 2 (Brown et al. , 1993), with parameters that are optimized for use with the trigram." W02-1022,J93-2003,o,"Using alignment for grammar and lexicon induction has been an active area of research, both in monolingual settings (van Zaanen, 2000) and in machine translation (MT) (Brown et al. , 1993; Melamed, 2000; Och and Ney, 2000) -- interestingly, statistical MT techniques have been used to derive lexico-semantic mappings in the ""reverse"" direction of language understanding rather than generation (Papineni et al. , 1997; Macherey et al. , 2001)." W02-1039,J93-2003,o,"When an S alignment exists, there will always also exist a P alignment such that S is a subset of P.
The English sentences were parsed using a state-of-the-art statistical parser (Charniak, 2000) trained on the University of Pennsylvania Treebank (Marcus et al. , 1993)." W02-1039,J93-2003,o,"The first work in SMT, done at IBM (Brown et al. , 1993), developed a noisy-channel model, factoring the translation process into two portions: the translation model and the language model." W02-1405,J93-2003,o,"2 Our statistical engine 2.1 The statistical models In this study, we built an SMT engine designed to translate from French to English, following the noisy-channel paradigm first described by (Brown et al. , 1993b)." W02-1405,J93-2003,o,"Among them, (Brown et al. , 1993a) have proposed a way to exploit bilingual dictionaries at training time." W03-0301,J93-2003,o,"Four teams had approaches that relied (to varying degrees) on an IBM model of statistical machine translation (Brown et al. , 1993)." W03-0302,J93-2003,o,"ProAlign models P(A|E,F) directly, using a different decomposition of terms than the model used by IBM (Brown et al. , 1993)." W03-0302,J93-2003,o,"To avoid this problem, we sample from a space of probable alignments, as is done in IBM models 3 and above (Brown et al. , 1993), and weight counts based on the likelihood of each alignment sampled under the current probability model." W03-0303,J93-2003,o,"However, instead of estimating the probabilities for the production rules via EM as described in [Wu 1997], we assign the probabilities to the rules using the Model-1 statistical translation lexicon [Brown et al. 1993]." W03-0304,J93-2003,o,"Yet, the very nature of these alignments, as defined in the IBM modeling approach (Brown et al. , 1993), lead to descriptions of the correspondences between source-language (SL) and target-language (TL) words of a translation that are often unsatisfactory, at least from a human perspective." W03-0305,J93-2003,o,"2 Word Alignment algorithm We use IBM Model 4 (Brown et al. , 1993) as a basis for our word alignment system."
W03-0309,J93-2003,o,"(Brown et al. , 1993) introduced five statistical translation models (IBM Models 1–5)." W03-0309,J93-2003,o,"The Duluth Word Alignment System is a Perl implementation of IBM Model 2 (Brown et al. , 1993)." W03-0310,J93-2003,o,"This cost can often be substantial, as with the Penn Treebank (Marcus et al. , 1993)." W03-0313,J93-2003,o,"These methods are based on IBM statistical translation Model 2 (Brown et al. , 1993), but take advantage of certain characteristics of the segments of text that can typically be extracted from translation memories." W03-0315,J93-2003,p,"2.2 Statistical Translation Lexicon We use a statistical translation lexicon known as IBM Model-1 in (Brown et al. , 1993) for both efficiency and simplicity." W03-0315,J93-2003,o,"Given training data consisting of parallel sentences {(f^(s), e^(s)), 1 <= s <= S}, our Model-1 training for t(f|e) is as follows: t(f|e) = lambda_e^(-1) sum_{s=1..S} c(f|e; f^(s), e^(s)), where lambda_e^(-1) is a normalization factor such that sum_f t(f|e) = 1.0, and c(f|e; f^(s), e^(s)) denotes the expected number of times that word e connects to word f: c(f|e; f^(s), e^(s)) = (t(f|e) / sum_{k=1..l} t(f|e_k)) sum_{j=1..m} delta(f, f_j) sum_{i=1..l} delta(e, e_i). With the conditional probability t(f|e), the probability for an alignment of foreign string F given English string E is in (1): P(F|E) = (1/(l+1)^m) prod_{j=1..m} sum_{i=0..n} t(f_j|e_i) (1) The probability of alignment F given E, P(F|E), is shown to achieve the global maximum under this EM framework as stated in (Brown et al., 1993)." W03-0315,J93-2003,o,"In our approach, equation (1) is further normalized so that the probability for different lengths of F is comparable at the word level: P(F|E) = [(1/(l+1)^m) prod_{j=1..m} sum_{i=0..n} t(f_j|e_i)]^(1/m) (2) The alignment models described in (Brown et al. , 1993) are all based on the notion that an alignment aligns each source word to exactly one target word."
W03-0413,J93-2003,o,"The first model, referred to as Maxent1 below, is a loglinear combination of a trigram language model with a maximum entropy translation component that is an analog of the IBM translation model 2 (Brown et al. , 1993)." W03-0414,J93-2003,p,"(Brown et al. , 1990; Brown et al. , 1993)) are best known and studied." W03-0604,J93-2003,o,"We have developed a set of extensions to a probabilistic translation model (Brown et al. , 1993) that enable us to successfully merge oversegmented regions into coherent objects." W03-0604,J93-2003,o,"(1993) (as in Duygulu et al. , 2002), and extend it to structured shape descriptions of visual data." W03-0604,J93-2003,o,"Probabilistic translation models generally seek to find the translation string e that maximizes the probability Pr(e|f), given the source string f (where f referred to French and e to English in the original work, Brown et al. , 1993)." W03-0608,J93-2003,o,"Fortunately, there is a straightforward parallel between our object recognition formulation and the statistical machine translation problem of building a lexicon from an aligned bitext (Brown et al. , 1993; Al-Onaizan et al. , 1999)." W03-1001,J93-2003,o,"1 Introduction Various papers use phrase-based translation systems (Och et al. , 1999; Marcu and Wong, 2002; Yamada and Knight, 2002) that have been shown to improve translation quality over single-word based translation systems introduced in (Brown et al. , 1993)." W03-1002,J93-2003,p,"2 Prior Work Statistical machine translation, as pioneered by IBM (e.g. Brown et al. , 1993), is grounded in the noisy channel model." W03-1002,J93-2003,o,"POS tagging and phrase chunking in English were done using the trained systems provided with the fnTBL Toolkit (Ngai and Florian, 2001); both were trained from the annotated Penn Treebank corpus (Marcus et al. , 1993)." W03-1003,J93-2003,o,"Specifically, stochastic translation lexicons estimated using the IBM method (Brown et al.
, 1993) from a fairly large sentence-aligned Chinese-English parallel corpus are used in their approach; a considerable demand for a resource-deficient language." W03-1508,J93-2003,p,"The IBM source-channel model for statistical machine translation (P. Brown et al. , 1993) plays a central role in our system." W04-0857,J93-2003,o,"To solve this problem, we will adapt the idea of null generated words from machine translation (Brown et al. , 1993)." W04-1118,J93-2003,o,"The relationship between the translation model and the alignment model is given by: Pr(f_1^J | e_1^I) = sum_{a_1^J} Pr(f_1^J, a_1^J | e_1^I) (3) In this paper, we use the models IBM-1, IBM-4 from (Brown et al. , 1993) and the Hidden Markov alignment model (HMM) from (Vogel et al. , 1996)." W05-0612,J93-2003,o,"Our methods are most influenced by IBM's Model 1 (Brown et al. , 1993)." W05-0614,J93-2003,o,"Further, we can learn the channel probabilities in an unsupervised manner using a variant of the EM algorithm similar to machine translation (Brown et al. , 1993), and statistical language understanding (Epstein, 1996)." W05-0614,J93-2003,o,"We follow IBM Model 1 (Brown et al. , 1993) and assume that each word in an utterance is generated by exactly one role in the parallel frame. Using standard EM to learn the role to word mapping is only sufficient if one knows to which level in the tree the utterance should be mapped." W05-0712,J93-2003,n,"A word based approach depends upon traditional statistical machine translation techniques such as IBM Model 1 (Brown et al. , 1993) and may not always yield satisfactory results due to its inability to handle difficult many-to-many phrase translations." W05-0804,J93-2003,o,"Previous work from (Wang et al. , 1996) showed improvements in perplexity-oriented measures using mixture-based translation lexicon (Brown et al. , 1993)." W05-0806,J93-2003,o,"For detailed descriptions of SMT models see for example (Brown et al. , 1993; Och and Ney, 2003)."
W05-0809,J93-2003,n,"Several teams had approaches that relied (to varying degrees) on an IBM model of statistical machine translation (Brown et al. , 1993), with different improvements brought by different teams, consisting of new submodels, improvements in the HMM model, model combination for optimal alignment, etc. Several teams used symmetrization metrics, as introduced in (Och and Ney, 2003) (union, intersection, refined), most of the times applied on the alignments produced for the two directions source–target and target–source, but also as a way to combine different word alignment systems." W05-0810,J93-2003,o,"First, we considered single sentences as documents, and tokens as sentences (we define a token as a sequence of characters delimited by [...]). In our case, the score we seek to globally maximize by dynamic programming is not only taking into account the length criteria described in (Gale and Church, 1993) but also a cognate-based one similar to (Simard et al. , 1992)." W05-0810,J93-2003,p,"When efficient techniques have been proposed (Brown et al. , 1993; Och and Ney, 2003), they have been mostly evaluated on safe pairs of languages where the notion of word is rather clear." W05-0812,J93-2003,o,"IBM Model 4 parameters are then estimated over this partial search space as an approximation to EM (Brown et al. , 1993; Och and Ney, 2003)." W05-0812,J93-2003,p,"1 Introduction The most widely used alignment model is IBM Model 4 (Brown et al. , 1993)." W05-0814,J93-2003,o,"For these experiments, we have implemented an alignment package for IBM Model 4 using a hillclimbing search and Viterbi training as described in (Brown et al. , 1993), and extended this to use new submodels." W05-0814,J93-2003,p,"Turning off the extensions to GIZA++ and training p0 as in (Brown et al. , 1993) produces a substantial increase in AER." W05-0814,J93-2003,o,"We solve this using the local search defined in (Brown et al. , 1993)."
W05-0814,J93-2003,o,"The system used for baseline experiments is two runs of IBM Model 4 (Brown et al. , 1993) in the GIZA++ (Och and Ney, 2003) implementation, which includes smoothing extensions to Model 4." W05-0815,J93-2003,o,"The idea is that the translation of a sentence x into a sentence y can be performed in the following steps: (a) If x is small enough, IBM's model 1 (Brown et al. , 1993) is employed for the translation." W05-0816,J93-2003,o,"Word correspondence was further developed in IBM Model-1 (Brown et al. , 1993) for statistical machine translation." W05-0817,J93-2003,o,"The first one is a hypotheses testing approach (Gale and Church, 1991; Melamed, 2001; Tufis, 2002) while the second one is closer to a model estimating approach (Brown et al. , 1993; Och and Ney, 2000)." W05-0817,J93-2003,p,"A quite different approach from our hypotheses testing implemented in the TREQ-AL aligner is taken by the model-estimating aligners, most of them relying on the IBM models (1 to 5) described in the (Brown et al. 1993) seminal paper." W05-0823,J93-2003,o,"1 Introduction During the last decade, statistical machine translation (SMT) systems have evolved from the original word-based approach (Brown et al. , 1993) into phrase-based translation systems (Koehn et al. , 2003)." W05-0823,J93-2003,o,"Finally, the fourth and fifth feature functions corresponded to two lexicon models based on IBM Model 1 lexical parameters p(t|s) (Brown et al. , 1993)." W05-0825,J93-2003,o,"3 Length Model: Dynamic Programming Given the word fertility definitions in IBM Models (Brown et al. , 1993), we can compute a probability to predict phrase length: given the candidate target phrase (English) e_1^I, and a source phrase (French) of length J, the model gives the estimation of P(J|e_1^I) via a dynamic programming algorithm using the source word fertilities."
W05-0826,J93-2003,o,"Far from full syntactic complexity, we suggest to go back to the simpler alignment methods first described by (Brown et al. , 1993)." W05-0829,J93-2003,n,"1 Introduction In recent years, various phrase translation approaches (Marcu and Wong, 2002; Och et al. , 1999; Koehn et al. , 2003) have been shown to outperform word-to-word translation models (Brown et al. , 1993)." W05-0835,J93-2003,o,"Different models have been presented in the literature, see for instance (Brown et al. , 1993; Och and Ney, 2004; Vidal et al. , 1993; Vogel et al. , 1996)." W05-0835,J93-2003,o,"Most of them rely on the concept of alignment: a mapping from words or groups of words in a sentence into words or groups in the other (in the case of (Vidal et al. , 1993) the mapping goes from rules in a grammar for a language into rules of a grammar for the other language)." W05-0835,J93-2003,o,"This concept of alignment has been also used for tasks like automatic vocabulary derivation and corpus alignment (Dagan et al. , 1993)." W05-0835,J93-2003,o,"The elements of this set are pairs (x, y) where y is a possible translation for x. 4 IBM's model 1 IBM's model 1 is the simplest of a hierarchy of five statistical models introduced in (Brown et al. , 1993)." W05-0836,J93-2003,o,"Using the log-linear form to model p(e|f) gives us the flexibility to introduce overlapping features that can represent global context while decoding (searching the space of candidate translations) and rescoring (ranking a set of candidate translations before performing the argmax operation), albeit at the cost of the traditional source-channel generative model of translation proposed in (Brown et al. , 1993)." W05-1208,J93-2003,o,"This differs from typical generative settings for IR and MT (Ponte and Croft, 1998; Brown et al. , 1993), where all conditioned events are disjoint by construction."
W05-1208,J93-2003,o,"Alternatively, one can view (2) as inducing an alignment between terms in the h to the terms in the t, somewhat similar to alignment models in statistical MT (Brown et al. , 1993)." W06-1204,J93-2003,p,"State-of-the-art systems for doing word alignment use generative models like GIZA++ (Och and Ney, 2003; Brown et al. , 1993)." W06-1606,J93-2003,o,"Pr(pi, F, A) = sum_{theta_i : c(theta_i) = (pi, F, A)} prod_{r_j in theta_i} p(r_j) (4) In order to acquire the rules specific to our model and to induce their probabilities, we parse the English side of our corpus with an in-house implementation (Soricut, 2005) of Collins parsing models (Collins, 2003) and we word-align the parallel corpus with the Giza++ implementation of the IBM models (Brown et al. , 1993)." W06-1607,J93-2003,o,"To derive the joint counts c(s,t) from which p(s|t) and p(t|s) are estimated, we use the phrase induction algorithm described in (Koehn et al. , 2003), with symmetrized word alignments generated using IBM model 2 (Brown et al. , 1993)." W06-1609,J93-2003,o,"This feature, which is based on the lexical parameters of the IBM Model 1 (Brown et al. , 1993), provides a complementary probability for each tuple in the translation table." W06-1609,J93-2003,o,"1 Introduction During the last few years, SMT systems have evolved from the original word-based approach (Brown et al. , 1993) to phrase-based translation systems (Koehn et al. , 2003)." W06-1626,J93-2003,o,"1 Introduction Statistical language modeling has been widely used in natural language processing applications such as Automatic Speech Recognition (ASR), Statistical Machine Translation (SMT) (Brown et al. , 1993) and Information Retrieval (IR) (Ponte and Croft, 1998)." W06-1628,J93-2003,n,"1 Introduction Phrase-based approaches (Och and Ney, 2004) to statistical machine translation (SMT) have recently achieved impressive results, leading to significant improvements in accuracy over the original IBM models (Brown et al. , 1993)."
W06-2008,J93-2003,o,"GIZA++ consists of a set of statistical translation models of different complexity, namely the IBM ones (Brown et al. , 1993)." W06-2402,J93-2003,o,"1 Introduction Statistical machine translation (SMT) was originally focused on word to word translation and was based on the noisy channel approach (Brown et al. , 1993)." W06-3102,J93-2003,p,"1 Introduction The availability of large amounts of so-called parallel texts has motivated the application of statistical techniques to the problem of machine translation starting with the seminal work at IBM in the early 90s (Brown et al. , 1992; Brown et al. , 1993)." W06-3102,J93-2003,o,"Statistical machine translation views the translation process as a noisy-channel signal recovery process in which one tries to recover the input signal e, from the observed output signal f. Early statistical machine translation systems used a purely word-based approach without taking into account any of the morphological or syntactic properties of the languages (Brown et al. , 1993)." W06-3104,J93-2003,o,"Initial estimates of lexical translation probabilities came from the IBM Model 4 translation tables produced by GIZA++ (Brown et al. , 1993; Och and Ney, 2003)." W06-3104,J93-2003,o,"1.2 From Synchronous to Quasi-Synchronous Grammars Because our approach will let anything align to anything, it is reminiscent of IBM Models 1–5 (Brown et al. , 1993)." W06-3105,J93-2003,p,"Specifically, in the task of word alignment, heuristic approaches such as the Dice coefficient consistently underperform their re-estimated counterparts, such as the IBM word alignment models (Brown et al. , 1993)." W06-3107,J93-2003,p,"Statistical models for machine translation heavily depend on the concept of alignment, specifically, the well known IBM word based models (Brown et al. , 1993)." W06-3107,J93-2003,o,"Alignment models to structure the translation model are introduced in (Brown et al. , 1993)."
W06-3107,J93-2003,o,"Nevertheless, in the problem described in this article, the source and the target sentences are given, and we are focusing on the optimization of the alignment a. The translation probability Pr(f,a|e) can be rewritten as follows: Pr(f,a|e) = prod_{j=1..J} Pr(f_j, a_j | f_1^{j-1}, a_1^{j-1}, e_1^I) = prod_{j=1..J} Pr(a_j | f_1^{j-1}, a_1^{j-1}, e_1^I) Pr(f_j | f_1^{j-1}, a_1^j, e_1^I) (2) The probability Pr(f,a|e) can be estimated by using the word-based IBM statistical alignment models (Brown et al. , 1993)." W06-3109,J93-2003,p,"The most widely used single-word-based statistical alignment models (SAMs) have been proposed in (Brown et al. , 1993; Ney et al. , 2000)." W06-3111,J93-2003,o,"[Figure 1: Two Types of Alignment] The IBM model 1 (IBM-1) (Brown et al. , 1993) assumes that all alignments have the same probability by using a uniform distribution: p(f_1^J | e_1^I) = (1/I^J) prod_{j=1..J} sum_{i=1..I} p(f_j|e_i) (2) We use the IBM-1 to train the lexicon parameters p(f|e), the training software is GIZA++ (Och and Ney, 2003)." W06-3116,J93-2003,o,"A Viterbi alignment computed from an IBM model 4 (Brown et al. , 1993) was computed for each translation direction." W06-3119,J93-2003,o,"1 Introduction Recent work in machine translation has evolved from the traditional word (Brown et al. , 1993) and phrase based (Koehn et al. , 2003a) models to include hierarchical phrase models (Chiang, 2005) and bilingual synchronous grammars (Melamed, 2004)." W06-3123,J93-2003,o,"The original IBM Models (Brown et al. , 1993) learn word-to-word alignment probabilities which makes it computationally feasible to estimate model parameters from large amounts of training data." W06-3125,J93-2003,o,"Based on IBM Model 1 lexical parameters (Brown et al. , 1993), providing a complementary probability for each tuple in the translation table."
W06-3602,J93-2003,o,"4), it constitutes a bijection between source and target sentence positions, since the intersecting alignments are functions according to their definition in (Brown et al. , 1993)." W07-0212,J93-2003,o,"Appendix A: Derivation of the Probability of RWE We take a noisy channel approach, which is a common technique in NLP (for example (Brown et al. , 1993)), including spellchecking (Kernighan et al. , 1990)." W07-0403,J93-2003,o,"This alignment system is powered by the IBM translation models (Brown et al. , 1993), in which one sentence generates the other." W07-0409,J93-2003,n,"1 Introduction Recent works in statistical machine translation (SMT) show how phrase-based modeling (Och and Ney, 2000a; Koehn et al. , 2003) significantly outperforms the historical word-based modeling (Brown et al. , 1993)." W07-0412,J93-2003,o,"Systems based on word-to-word lexicons, such as the IBM systems (Brown et al. , 1990; Brown et al. , 1993), incorporate further devices that allow reordering of words (a distortion model) and ranking of alternatives (a monolingual language model)." W07-0703,J93-2003,o,"To generate phrase pairs from a parallel corpus, we use the ""diag-and"" phrase induction algorithm described in (Koehn et al., 2003), with symmetrized word alignments generated using IBM model 2 (Brown et al., 1993)." W07-0705,J93-2003,o,"In the original work (Brown et al. , 1993) the posterior probability p(e_1^I | f_1^J) is decomposed following a noisy-channel approach, but current state-of-the-art systems model the translation probability directly using a log-linear model (Och and Ney, 2002): p(e_1^I | f_1^J) = exp(sum_{m=1..M} lambda_m h_m(e_1^I, f_1^J)) / sum_{e'_1^I} exp(sum_{m=1..M} lambda_m h_m(e'_1^I, f_1^J)) (2), with h_m different models, lambda_m scaling factors and the denominator a normalization factor that can be ignored in the maximization process."
W07-0708,J93-2003,o,"A monotonic segmentation copes with monotonic alignments, that is, j < k ⇒ aj < ak, following the notation of (Brown et al. , 1993)." W07-0715,J93-2003,n,"(2006) tried a different generative phrase translation model analogous to IBM word-translation Model 3 (Brown et al. , 1993), and again found that the standard model outperformed their generative model." W07-0715,J93-2003,o,"The lexical scores are computed as the (unnormalized) log probability of the Viterbi alignment for a phrase pair under IBM word-translation Model 1 (Brown et al. , 1993)." W07-0717,J93-2003,o,"To derive the joint counts c(s̃,t̃) from which p(s̃|t̃) and p(t̃|s̃) are estimated, we use the phrase induction algorithm described in (Koehn et al. , 2003), with symmetrized word alignments generated using IBM model 2 (Brown et al. , 1993)." W07-0721,J93-2003,o,"This feature, which is based on the lexical parameters of the IBM Model 1 (Brown et al. , 1993), provides a complementary probability for each tuple in the translation table." W07-0724,J93-2003,o,"These lists are rescored with the different models described above, a character penalty, and three different features based on IBM Models 1 and 2 (Brown et al. , 1993) calculated in both translation directions." W07-1205,J93-2003,o,"Training of the phrase translation model builds on top of a standard statistical word alignment over the training corpus of parallel text (Brown et al. , 1993) for identifying corresponding word blocks, assuming no further linguistic analysis of the source or target language." W08-0306,J93-2003,o,"GIZA++ (Och and Ney, 2003), an implementation of the IBM (Brown et al., 1993) and HMM (?)" W08-0307,J93-2003,o,"The simple idea that words in a source chunk are typically aligned to words in a single possible target chunk is used to discard alignments which link words from [...]. We use IBM-1 to IBM-5 models (Brown et al., 1993) implemented with GIZA++ (Och and Ney, 2003)."
W08-0321,J93-2003,p,"In the well-known so-called IBM word alignment models (Brown et al., 1993), re-estimating the model parameters depends on the empirical probability P(ek,fk) for each sentence pair (ek,fk)." W08-0321,J93-2003,o,"The empirical probability for each sentence pair is estimated by maximum likelihood estimation over the training data (Brown et al., 1993)." W08-0326,J93-2003,o,"Assuming that the parameters P(etk|fsk) are known, the most likely alignment is computed by a simple dynamic-programming algorithm. Instead of using an Expectation-Maximization algorithm to estimate these parameters, as commonly done when performing word alignment (Brown et al., 1993; Och and Ney, 2003), we directly compute these parameters by relying on the information contained within the chunks." W08-0333,J93-2003,o,"The IBM models, together with a Hidden Markov Model (HMM), form a class of generative models that are based on a lexical translation model P(fj|ei) where each word fj in the foreign sentence f_1^m is generated by precisely one word ei in the sentence e_1^l, independently of the other translation decisions (Brown et al., 1993; Vogel et al., 1996; Och and Ney, 2000)." W08-0333,J93-2003,o,"Estimating the parameters for these models is more difficult (and more computationally expensive) than with the models considered in the previous section: rather than simply being able to count the word pairs and alignment relationships and estimate the models directly, we must use an existing model to compute the expected counts for all possible alignments, and then use these counts to update the new model. This training strategy is referred to as expectation-maximization (EM) and is guaranteed to always improve the quality of the prior model at each iteration (Brown et al., 1993; Dempster et al., 1977)."
W08-0409,J93-2003,o,"4.3 Baselines 4.3.1 Word Alignment We used the GIZA++ implementation of IBM word alignment model 4 (Brown et al., 1993; Och and Ney, 2003) for word alignment, and the heuristics described in (Och and Ney, 2003) to derive the intersection and refined alignment." W08-0409,J93-2003,p,"Generative word alignment models, initially developed at IBM (Brown et al., 1993), and then augmented by an HMM-based model (Vogel et al., 1996), have provided powerful modeling capability for word alignment." W08-0409,J93-2003,o,"The notation will assume Chinese–English word alignment and Chinese–English MT. Here we adopt a notation similar to (Brown et al., 1993)." W08-0509,J93-2003,o,"The word alignment models implemented in GIZA++, the so-called IBM (Brown et al., 1993) and HMM alignment models (Vogel et al., 1996) are typical implementations of the EM algorithm (Dempster et al., 1977)." W08-0509,J93-2003,o,"2.2 Implementation of GIZA++ GIZA++ is an implementation of ML estimators for several statistical alignment models, including IBM Model 1 through 5 (Brown et al., 1993), HMM (Vogel et al., 1996) and Model 6 (Och and Ney, 2003)." W08-0509,J93-2003,o,"For example, (Brown et al., 1993) suggested two different methods: using only the alignment with the maximum probability, the so-called Viterbi alignment, or generating a set of alignments by starting from the Viterbi alignment and making changes, which keep the alignment probability high." W09-0407,J93-2003,o,"We use the IBM Model 1 (Brown et al., 1993) and the Hidden Markov Model (HMM, (Vogel et al., 1996)) to estimate the alignment model." W09-0412,J93-2003,o,"We then built separate English-to-Spanish and Spanish-to-English directed word alignments using IBM model 4 (Brown et al., 1993), combined them using the intersect+grow heuristic (Och and Ney, 2003), and extracted phrase-level translation pairs of maximum length 7 using the alignment template approach (Och and Ney, 2004)."
W09-0420,J93-2003,o,"Word alignments were generated using Model 4 (Brown et al., 1993) using the multi-threaded implementation of GIZA++ (Och and Ney, 2003; Gao and Vogel, 2008)." W09-0430,J93-2003,o,"Then the two models and a search module are used to decode the best translation (Brown et al., 1993; Koehn et al., 2003)." W09-0430,J93-2003,o,"Several automatic sentence alignment approaches have been proposed based on sentence length (Brown et al., 1991) and lexical information (Kay and Röscheisen, 1993)." W09-0432,J93-2003,o,"To address this drawback, we proposed a new method to compute a more reliable and smoothed score in the undefined case, based on the IBM model 1 (Brown et al., 1993)." W09-0603,J93-2003,o,"The lexical acquisition phase uses the GIZA++ word-alignment tool, an implementation (Och and Ney, 2003) of IBM Model 5 (Brown et al., 1993) to construct an alignment of MRs with NL strings." W09-1804,J93-2003,o,"Probabilistic generative models like IBM 1-5 (Brown et al., 1993), HMM (Vogel et al., 1996), ITG (Wu, 1997), and LEAF (Fraser and Marcu, 2007) define formulas for P(f | e) or P(e, f), with [...] [Figure 1: Word alignment exercise (Knight, 1997)]" W09-1804,J93-2003,o,"Practical Model 4 systems therefore make substantial search approximations (Brown et al., 1993)."
W09-1804,J93-2003,o,"For now, we consider it to be one where: Every foreign word is aligned exactly once (Brown et al., 1993)." W09-1804,J93-2003,o,"We use these tuples to calculate a balanced f-score against the gold alignment tuples. Method / Dict size / f-score: Gold, 28, 100.0; Monotone, 39, 68.9; IBM-1 (Brown et al., 1993), 30, 80.3; IBM-4 (Brown et al., 1993), 29, 86.9; IP, 28, 95.9. The last line shows an average f-score over the 8 tied IP solutions." W09-2501,J93-2003,n,"One prominent constraint of the IBM word alignment models (Brown et al., 1993) is functional alignment, that is each target word is mapped onto at most one source word." W93-0301,J93-2003,n,"4 Conclusions Compared with other word alignment algorithms (Brown et al. , 1993; Gale and Church, 1991a), word_align does not require sentence alignment as input, and was shown to produce useful alignments for small and noisy corpora." W93-0301,J93-2003,n,"The program takes the output of char_align (Church, 1993), a robust alternative to sentence-based alignment programs, and applies word-level constraints using a version of Brown et al.'s Model 2 (Brown et al. , 1993), modified and extended to deal with robustness issues." W93-0301,J93-2003,n,"The method was intended as a replacement for sentence-based methods (e.g. , (Brown et al. , 1991a; Gale and Church, 1991b; Kay and Röscheisen, 1993)), which are very sensitive to noise." W93-0301,J93-2003,o,"2 The alignment Algorithm 2.1 Estimation of translation probabilities The translation probabilities are estimated using a method based on Brown et al.'s Model 2 (1993), which is summarized in the following subsection, 2.1.1." W93-0301,J93-2003,o,"1 Introduction Aligning parallel texts has recently received considerable attention (Warwick et al. , 1990; Brown et al. , 1991a; Gale and Church, 1991b; Gale and Church, 1991a; Kay and Röscheisen, 1993; Simard et al. , 1992; Church, 1993; Kupiec, 1993; Matsumoto et al. , 1993)."
W93-0301,J93-2003,o,"These methods have been used in machine translation (Brown et al. , 1990; Sadler, 1989), terminology research and translation aids (Isabelle, 1992; Ogden and Gonzales, 1993), bilingual lexicography (Klavans and Tzoukermann, 1990), collocation studies (Smadja, 1992), word-sense disambiguation (Brown et al. , 1991b; Gale et al. , 1992) and information retrieval in a multilingual environment (Landauer and Littman, 1990)." W94-0115,J93-2003,p,"Brute-force methods (ie those that exploit the massive raw computing power currently available cheaply) may well produce some useful results (eg Brown et al 1993)." W95-0106,J93-2003,o,"Numerous experiments have shown parallel bilingual corpora to provide a rich source of constraints for statistical analysis (e.g. , Brown et al. 1990; Gale & Church 1991; Gale et al. 1992; Church 1993; Brown et al. 1993; Dagan et al. 1993; Fung & Church 1994; Wu & Xia 1994; Fung & McKeown 1994)." W95-0106,J93-2003,o,"1 Introduction A number of empirical studies have found bracketing to be a useful type of corpus annotation (e.g. , Pereira & Schabes 1992; Black et al. 1993)." W95-0106,J93-2003,o,"It is interesting to contrast this method with the ""parse-parse-match"" approaches that have been reported recently for producing parallel bracketed corpora (Sadler & Vendelmans 1990; Kaji et al. 1992; Matsumoto et al. 1993; Cranias et al. 1994; Grishman 1994)." W97-0119,J93-2003,n,"1 Introduction Despite a surge in research using parallel corpora for various machine translation tasks (Brown et al. 1993), (Brown et al. 1991; Gale & Church 1993; Church 1993; Dagan & Church 1994; Simard et al. 1992; Chen 1993; Melamed 1995; Wu & Xia 1994; Wu 1994; Smadja et al." W97-0311,J93-2003,o,"(Brown et al., 1993) The heuristics in Section 6 are designed specifically to find the interesting features in that featureless desert."
W97-0311,J93-2003,o,"Several authors have used mutual information and similar statistics as an objective function for word clustering (Dagan et al. , 1993; Brown et al. , 1992; Pereira et al. , 1993; Wang et al. , 1996), for automatic determination of phonemic baseforms (Lucassen & Mercer, 1984), and for language modeling for speech recognition (Ries et al. , 1996)." W97-0311,J93-2003,o,"2 Translation Models A translation model can be constructed automatically from texts that exist in two languages (bitexts) (Brown et al. , 1993; Melamed, 1997)." W97-0405,J93-2003,o,"Pure statistical machine translation (Brown et al. , 1993) must in principle recover the most probable alignment out of all possible alignments between the input and a translation." W97-0408,J93-2003,o,"3.3 Model Construction The head transducer model was trained and evaluated on English-to-Mandarin Chinese translation of transcribed utterances from the ATIS corpus (Hirschman et al. 1993)." W97-1014,J93-2003,o,"In several papers (Bahl et al. , 1984, Lau and Rosenfeld, 1993, Tillmann and Ney, 1996), selection criteria for single word trigger pairs were studied." W97-1014,J93-2003,o,"1 Introduction In this paper, we study the use of so-called word trigger pairs (for short: word triggers) (Bahl et al. , 1984, Lau and Rosenfeld, 1993, Tillmann and Ney, 1996) to improve an existing language model, which is typically a trigram model in combination with a cache component (Ney and Essen, 1994)." W99-0602,J93-2003,o,"Bilingual alignments have so far shown that they can play multiple roles in a wide range of linguistic applications, such as computer assisted translation (Isabelle et al. , 1993; Brown et al. , 1990), terminology (Dagan and Church, 1994), lexicography (Langlois, 1996; Klavans and Tzoukermann, 1995; Melamed, 1996), and cross-language information retrieval (Nie et al.
)." W99-0602,J93-2003,o,"However, in the experiments described here, we focus on alignment at the level of sentences, this for a number of reasons: First, sentence alignments have so far proven their usefulness in a number of applications, e.g. bilingual lexicography (Langlois, 1996; Klavans and Tzoukermann, 1995; Dagan and Church, 1994), automatic translation verification (Macklovitch, 1995; Macklovitch, 1996) and the automatic acquisition of knowledge about translation (Brown et al. , 1993)." W99-0604,J93-2003,o,"Many statistical translation models (Vogel et al. , 1996; Tillmann et al. , 1997; Niessen et al. , 1998; Brown et al. , 1993) try to model word-to-word correspondences between source and target words." W99-0604,J93-2003,o,"This alignment representation is a generalization of the baseline alignments described in (Brown et al. , 1993) and allows for many-to-many alignments." A00-1025,J93-2004,o,"The approach is able to achieve 94% precision and recall for base NPs derived from the Penn Treebank Wall Street Journal (Marcus et al. , 1993)." A00-1031,J93-2004,o,"Recent comparisons of approaches that can be trained on corpora (van Halteren et al. , 1998; Volk and Schneider, 1998) have shown that in most cases statistical approaches (Cutting et al. , 1992; Schmid, 1995; Ratnaparkhi, 1996) yield better results than finite-state, rule-based, or memory-based taggers (Brill, 1993; Daelemans et al. , 1996)." A00-1031,J93-2004,o,"The annotation consists of four parts: 1) a context-free structure augmented with traces to mark movement and discontinuous constituents, 2) phrasal categories that are annotated as node labels, 3) a small set of grammatical functions that are annotated as extensions to the node labels, and 4) part-of-speech tags (Marcus et al. , 1993)."
A00-1031,J93-2004,o,"As two examples, (Rabiner, 1989) and (Charniak et al. , 1993) give good overviews of the techniques and equations used for Markov models and part-of-speech tagging, but they are not very explicit in the details that are needed for their application." A00-1031,J93-2004,o,"Additionally, we present results of the tagger on the NEGRA corpus (Brants et al. , 1999) and the Penn Treebank (Marcus et al. , 1993)." A00-2005,J93-2004,o,"2.3 Experiment The training set for these experiments was sections 01-21 of the Penn Treebank (Marcus et al. , 1993)." A00-2007,J93-2004,o,"The main data set consists of four sections (15-18) of the Wall Street Journal (WSJ) part of the Penn Treebank (Marcus et al. , 1993) as training material and one section (20) as test material 1." A00-2020,J93-2004,o,"We evaluate this method over the part of speech tagged portion of the Penn Treebank corpus (Marcus et al. , 1993)." A00-2023,J93-2004,o,"so they conform to the Penn Treebank corpus (Marcus et al. , 1993) annotation style, and then do experiments using models built with Treebank data." A00-2030,J93-2004,o,"However, because these estimates are too sparse to be relied upon, we use interpolated estimates consisting of mixtures of successively lower-order estimates (as in Placeway et al. 1993)." A00-2030,J93-2004,o,"We were already using a generative statistical model for part-of-speech tagging (Weischedel et al. 1993), and more recently, had begun using a generative statistical model for name finding (Bikel et al. 1997)." A00-2030,J93-2004,o,"Word features are introduced primarily to help with unknown words, as in (Weischedel et al. 1993)." A00-2033,J93-2004,o,"The PT grammar 2 was extracted from the Penn Treebank (Marcus et al. , 1993)." A94-1009,J93-2004,p,"Preparing tagged corpora either by hand is labour-intensive and potentially error-prone, and although a semi-automatic approach can be used (Marcus et al.
, 1993), it is a good thing to reduce the human involvement as much as possible." A97-1017,J93-2004,o,"2.2.2 ENGLISH TRAINING DATA For training in the English experiments, we used WSJ (Marcus et al. , 1993)." C00-1009,J93-2004,o,"The part of the 1Release 2 of this data set can be obtained from the Linguistic Data Consortium with Catalogue number LDC94T4B (http://www.ldc.upenn.edu/ldc/nofranm.html) 2There are 48 labels defined in (Marcus et al. , 1993), however, three of them do not appear in the corpus." C00-1011,J93-2004,o,"We compared this nonprobabilistic DOP model against the probabilistic DOP model (which estimates the most probable parse for each sentence) on three different domains: the Penn ATIS treebank (Marcus et al. 1993), the Dutch OVIS treebank (Bonnema et al. 1997) and the Penn Wall Street Journal (WSJ) treebank (Marcus et al. 1993)." C00-1011,J93-2004,o,"While this technique has been successfully applied to parsing the ATIS portion in the Penn Treebank (Marcus et al. 1993), it is extremely time consuming." C00-1011,J93-2004,o,"Experimental Comparison 4.1 Experiments on the ATIS corpus For our first comparison, we used 10 splits from the Penn ATIS corpus (Marcus et al. 1993) into training sets of 675 sentences and test sets of 75 sentences." C00-1034,J93-2004,o,"development of corpora with morpho-syntactic and syntactic annotation (Marcus et al. , 1993), (Sampson, 1995)." C00-1034,J93-2004,o,"The WSJ corpus (Marcus et al. , 1993)." C00-1041,J93-2004,o,"5.1 The Prague Dependency Tree Bank (PDT in the sequel), which has been inspired by the build-up of the Penn Treebank (Marcus, Santorini & Marcinkiewicz 1993; Marcus, Kim, Marcinkiewicz et al.
1994), is aimed at a complex annotation of (a part of) the Czech National Corpus (CNC in the sequel), the creation of which is under progress at the Department of Czech National Corpus at the Faculty of Philosophy, Charles University (the corpus currently comprises about 100 million tokens of word forms)." C00-1044,J93-2004,o,"(Marcus et al. , 1993) was manually annotated with subjectivity classifications." C00-2157,J93-2004,o,"(Carpenter, 1992), (Copestake, 1999), (Dörre and Dorna, 1993), (Dörre et al. , 1996), (Emele and Zajac, 1990), (Höhfeld and Smolka, 1988)), and to pick those ingredients which are known to be computationally 'tractable' in some sense." C00-2157,J93-2004,p,"1 Introduction Syntactically annotated corpora like the Penn Treebank (Marcus et al. , 1993), the NeGra corpus (Skut et al. , 1998) or the statistically disambiguated parses in (Bell et al. , 1999) provide a wealth of information, which can only be exploited with an adequate query language." C02-1100,J93-2004,o,"Without removing them, extracted rules cannot be triggered until completely the same strings appear in a text. 6 Performance Evaluation We measured the performance of our robust parsing algorithm by measuring coverage and degree of overgeneration for the Wall Street Journal in the Penn Treebank (Marcus et al. , 1993)." C02-1100,J93-2004,o,"2 Background Default unification has been investigated by many researchers (Bouma, 1990; Russell et al. , 1991; Copestake, 1993; Carpenter, 1993; Lascarides and Copestake, 1999) in the context of developing lexical semantics." C02-1100,J93-2004,o,"As other researchers pursued efficient default unification (Bouma, 1990; Russell et al. , 1991; Copestake, 1993), we also propose another definition of default unification, which we call lenient default unification." C02-1126,J93-2004,o,"For Penn Treebank II style annotation (Marcus et al.
, 1993), in which a nonterminal symbol is a category together with zero or more functional tags, we adopt the following scheme: the atomic pattern a matches any label with category a or functional tag a; moreover, we define Boolean operators ∧, ∨, and ¬." C02-1138,J93-2004,o,"One kind is the Penn Treebank (Marcus et al. , 1993)." C02-2024,J93-2004,o,"The data set consisting of 249,994 TFSs was generated by parsing the 800 bracketed sentences in the Wall Street Journal corpus (the first 800 sentences in Wall Street Journal 00) in the Penn Treebank (Marcus et al. , 1993) with the XHPSG grammar (Tateisi et al. , 1998)." C04-1004,J93-2004,o,"In recent years, HMMs have enjoyed great success in many tagging applications, most notably part-of-speech (POS) tagging (Church 1988; Weischedel et al 1993; Merialdo 1994) and named entity recognition (Bikel et al 1999; Zhou et al 2002)." C04-1004,J93-2004,o,Experimentation The corpus used in shallow parsing is extracted from the PENN TreeBank (Marcus et al. 1993) of 1 million words (25 sections) by a program provided by Sabine Buchholz from Tilburg University. C04-1010,J93-2004,p,"To some extent, this can probably be explained by the strong tradition of constituent analysis in Anglo-American linguistics, but this trend has been reinforced by the fact that the major treebank of American English, the Penn Treebank (Marcus et al. , 1993), is annotated primarily with constituent analysis." C04-1010,J93-2004,o,"The learning algorithm used is the IB1 algorithm (Aha et al. , 1991) with k = 5, i.e. classification based on 5 nearest neighbors.4 Distances are measured using the modified value difference metric (MVDM) (Stanfill and Waltz, 1986; Cost and Salzberg, 1993) for instances with a frequency of at least 3 (and the simple overlap metric otherwise), and classification is based on distance weighted class voting with inverse distance weighting (Dudani, 1976)."
C04-1040,J93-2004,o,"Figure 2: Module layers in the system. That is, we use the Penn Treebank's Wall Street Journal data (Marcus et al. , 1993)." C04-1055,J93-2004,o,"The TRIPS structure generally has more levels of structure (roughly corresponding to levels in X-bar theory) than the Penn Treebank analyses (Marcus et al. , 1993), in particular for base noun phrases." C04-1082,J93-2004,o,"The tagger described in this paper is based on the standard Hidden Markov Model architecture (Charniak et al. , 1993; Brants, 2000)." C04-1082,J93-2004,o,"3.1 Experiments The model described in section 2 has been tested on the Brown corpus (Francis and Kucera, 1982), tagged with the 45 tags of the Penn treebank tagset (Marcus et al. , 1993), which constitute the initial tagset T0." C04-1140,J93-2004,o,"The Brill tagger comes with an English default version also trained on general-purpose language corpora like the PENN TREEBANK (Marcus et al. , 1993)." C04-1140,J93-2004,o,"This is most prominently evidenced by the PENN TREEBANK (Marcus et al. , 1993)." C04-1197,J93-2004,o,"The training set is extracted from TreeBank (Marcus et al. , 1993) sections 15-18, the development set, used in tuning parameters of the system, from section 20, and the test set from section 21." C08-1012,J93-2004,o,"2 Data Sets for the Experiments 2.1 Coordination Annotation in the PENN TREEBANK For our experiments, we used the WSJ part of the PENN TREEBANK (Marcus et al., 1993)." C08-1025,J93-2004,n,"For instance, about 38% of verbs in the training sections of the Penn Treebank (PTB) (Marcus et al., 1993) occur only once; the lexical properties of these verbs (such as their most common subcategorization frames) cannot be represented accurately in a model trained exclusively on the Penn Treebank."
C08-1026,J93-2004,o,"For example, in the WSJ corpus, part of the Penn Treebank 3 release (Marcus et al., 1993), the string in (1) is a variation 12-gram since off is a variation nucleus that is tagged preposition (IN) in one corpus occurrence and particle (RP) in another.1 Dickinson (2005) shows that examining those cases with identical local context (in this case, looking at ward off) results in an estimated error detection precision of 92.5%." C08-1038,J93-2004,o,"1999), OpenCCG (White, 2004) and XLE (Crouch et al., 2007), or created semi-automatically (Belz, 2007), or fully automatically extracted from annotated corpora, like the HPSG (Nakanishi et al., 2005), LFG (Cahill and van Genabith, 2006; Hogan et al., 2007) and CCG (White et al., 2007) resources derived from the Penn-II Treebank (PTB) (Marcus et al., 1993)." C08-1050,J93-2004,o,"By habit, most systems for automatic role-semantic analysis have used Penn-style constituents (Marcus et al., 1993) produced by Collins (1997) or Charniak's (2000) parsers." C08-1050,J93-2004,o,"Statistical dependency parsers of English must therefore rely on dependency structures automatically converted from a constituent corpus such as the Penn Treebank (Marcus et al., 1993)." C08-1094,J93-2004,o,"Hence our classifier evaluation omits those two word positions, leading to n-2 classifications for a string of length n. Table 1 shows statistics from sections 2-21 of the Penn WSJ Treebank (Marcus et al., 1993)." C08-1113,J93-2004,o,"As mentioned in Section 2.2, there are words which have two or more candidate POS tags in the PTB corpus (Marcus et al., 1993)." C94-2149,J93-2004,o,"Of the 1600 IBM sentences that have been parsed (those available from the Penn Treebank [Marcus et al. , 1993]), only 67 overlapped with the IBM-manual treebank that was bracketed by University of Lancaster." C94-2149,J93-2004,o,"[Marcus et al. , 1993] Marcus, M. , Santorini, B. , and Marcinkiewicz, M.A."
C96-1003,J93-2004,o,"to estimate a model (clustering words), and measured the KL distance. The KL distance (relative entropy), which is widely used in information theory and statistics, is a measure of 'distance' between two distributions. 5.2 Experiment 2: Qualitative Evaluation We extracted roughly 180,000 case frames from the bracketed WSJ (Wall Street Journal) corpus of the Penn Tree Bank (Marcus et al. , 1993) as co-occurrence data." C96-1003,J93-2004,o,"In particular, we used this method with WordNet (Miller et al. , 1993) and using the same training data." C96-1003,J93-2004,o,"have been proposed (Hindle, 1990; Brown et al. , 1992; Pereira et al. , 1993; Tokunaga et al. , 1995)." C96-1020,J93-2004,o,"Treebanks have been used within the field of natural language processing as a source of training data for statistical part of speech taggers (Black et al. , 1992; Brill, 1994; Merialdo, 1994; Weischedel et al. , 1993) and for statistical parsers (Black et al. , 1993; Brill, 1993; Jelinek et al. , 1994; Magerman, 1995; Magerman and Marcus, 1991)." C96-1020,J93-2004,o,"All of the features of the ATR/Lancaster Treebank that are described below represent a radical departure from extant large-scale (Eyes and Leech, 1993; Garside and McEnery, 1993; Marcus et al. , 1993) treebanks." C96-1038,J93-2004,o,"4 Experiments The Penn Treebank (Marcus et al. , 1993) is used as the testing corpus." C96-1041,J93-2004,o,"There are three main approaches to the tagging problem: rule-based approach (Klein and Simmons 1963; Brodda 1982; Paulussen and Martin 1992; Brill et al. 1990), statistical approach (Church 1988; Merialdo 1994; Foster 1991; Weischedel et al. 1993; Kupiec 1992) and connectionist approach (Benello et al. 1989; Nakamura et al. 1989)." C96-2114,J93-2004,o,"(Marcus et al. 1993, 316)."
C96-2114,J93-2004,o,"The tagger used is thus one that does not need tagged and disambiguated material to be trained on, namely the XPOST originally constructed at Xerox Parc (Cutting et al. 1992, Cutting and Pedersen 1993)." C96-2125,J93-2004,p,"Recently, we can see an important development in natural language processing and computational linguistics towards the use of empirical learning methods (for instance, (Charniak, 1993; Marcus et al. , 1993; Wermter, 1995; Jones, 1995; Wermter et al. , 1996))." C96-2185,J93-2004,o,"4 Information Base 4.1 Text Corpus Text corpora are essential to statistical modeling, in developing formal theories of the grammars, investigating prosodic phenomena in speech, and evaluating or comparing the adequacy of parsing models (Marcus et al. , 1993)." C96-2187,J93-2004,p,"Successful examples of reuse of data resources include: the WordNet thesaurus (Miller et al. , 1993); the Penn Tree Bank (Marcus et al. , 1993); the Longmans Dictionary of Contemporary English (Summers, 1995)." D07-1003,J93-2004,o,"This sort of problem can be solved in principle by conditional variants of the Expectation-Maximization algorithm (Baum et al. , 1970; Dempster et al. , 1977; Meng and Rubin, 1993; Jebara and Pentland, 1999)." D07-1003,J93-2004,o,"Similarly, Murdock and Croft (2005) adopted a simple translation model from IBM model 1 (Brown et al. , 1990; Brown et al. , 1993) and applied it to QA." D07-1003,J93-2004,o,"The tree is produced by a state-of-the-art dependency parser (McDonald et al. , 2005) trained on the Wall Street Journal Penn Treebank (Marcus et al. , 1993)." D07-1018,J93-2004,o,"For example, given that each semantic class exhibits a particular syntactic behaviour, information on the semantic class should improve POS tagging for adjective-noun and adjective-participle ambiguities, probably the most difficult distinctions both for humans and computers (Marcus et al. , 1993; Brants, 2000)."
D07-1023,J93-2004,o,"We use as our English corpus the Wall Street Journal (WSJ) portion of the Penn Treebank (Marcus et al. , 1993)." D07-1028,J93-2004,o,"When tested on f-structures for all sentences from Section 23 of the Penn Wall Street Journal (WSJ) treebank (Marcus et al. , 1993), the techniques described in this paper improve BLEU score from 66.52 to 68.82." D07-1031,J93-2004,o,"2 Evaluation All of the experiments described below have the same basic structure: an estimator is used to infer a bitag HMM from the unsupervised training corpus (the words of Penn Treebank (PTB) Wall Street Journal corpus (Marcus et al. , 1993)), and then the resulting model is used to label each word of that corpus with one of the HMM's hidden states." D07-1058,J93-2004,o,"For a second set of parsing experiments, we used the WSJ portion of the Penn Tree Bank (Marcus et al. , 1993) and Helmut Schmid's enrichment program tmod (Schmid, 2006)." D07-1078,J93-2004,o,"1 Introduction Syntax-based translation models (Eisner, 2003; Galley et al. , 2006; Marcu et al. , 2006) are usually built directly from Penn Treebank (PTB) (Marcus et al. , 1993) style parse trees by composing treebank grammar rules." D07-1082,J93-2004,o,"6 Evaluation 6.1 Data The data used for our comparison experiments were developed as part of the OntoNotes project (Hovy et al. , 2006), which uses the WSJ part of the Penn Treebank (Marcus et al. , 1993)." D07-1096,J93-2004,o,"English For English we used the Wall Street Journal section of the Penn Treebank (Marcus et al. , 1993)." D07-1096,J93-2004,o,"3.2 Domain Adaptation Track As mentioned previously, the source data is drawn from a corpus of news, specifically the Wall Street Journal section of the Penn Treebank (Marcus et al. , 1993)." D07-1097,J93-2004,o,"1 Introduction In the multilingual track of the CoNLL 2007 shared task on dependency parsing, a single parser must be trained to handle data from ten different languages: Arabic (Hajic et al.
, 2004), Basque (Aduriz et al. , 2003), Catalan, (Mart et al. , 2007), Chinese (Chen et al. , 2003), Czech (Bohmova et al. , 2003), English (Marcus et al. , 1993; Johansson and Nugues, 2007), Greek (Prokopidis et al. , 2005), Hungarian (Csendes et al. , 2005), Italian (Montemagni et al. , 2003), and Turkish (Oflazer et al. , 2003).1 Our contribution is a study in multilingual parser optimization using the freely available MaltParser system, which performs 1For more information about the task and the data sets, see Nivre et al." D07-1099,J93-2004,o,"4 Experiments We evaluated the ISBN parser on all the languages considered in the shared task (Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bohmova et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Csendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. , 2003)." D07-1100,J93-2004,o,"We participated in the multilingual track of the CoNLL 2007 shared task (Nivre et al. , 2007), and evaluated the system on data sets of 10 languages (Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bohmova et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Csendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. , 2003)." D07-1101,J93-2004,o,"To train models, we used projectivized versions of the training dependency trees.2 1We are grateful to the providers of the treebanks that constituted the data for the shared task (Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bohmova et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Csendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. , 2003)." 
D07-1102,J93-2004,o,"This task evaluated parsing performance on 10 languages: Arabic, Basque, Catalan, Chinese, Czech, English, Greek, Hungarian, Italian, and Turkish using data originating from a wide variety of dependency treebanks, and transformations of constituency-based treebanks (Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bohmova et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Csendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. , 2003)." D07-1111,J93-2004,o,"In the multilingual parsing track, participants train dependency parsers using treebanks provided for ten languages: Arabic (Hajic et al. , 2004), Basque (Aduriz et al. 2003), Catalan (Mart et al. , 2007), Chinese (Chen et al. , 2003), Czech (Bhmova et al. , 2003), English (Marcus et al. , 1993; Johansson and Nugues, 2007), Greek (Prokopidis et al. , 2005), Hungarian (Czendes et al. , 2005), Italian (Montemagni et al. , 2003), and Turkish (Oflazer et al. , 2003)." D07-1111,J93-2004,o,"In the domain adaptation track, participants were provided with English training data from the Wall Street Journal portion of the Penn Treebank (Marcus et al. , 1993) converted to dependencies (Johansson and Nugues, 2007) to train parsers to be evaluated on material in the biological (development set) and chemical (test set) domains (Kulick et al. , 2004), and optionally on text from the CHILDES database (MacWhinney, 2000; Brown, 1973)." D07-1112,J93-2004,o,"We were given around 15K sentences of labeled text from the Wall Street Journal (WSJ) (Marcus et al. , 1993; Johansson and Nugues, 2007) as well as 200K unlabeled sentences." D07-1112,J93-2004,o,"The annotation guidelines for the Penn Treebank flattened noun phrases to simplify annotation (Marcus et al. , 1993), so there is no complex structure to NPs." D07-1119,J93-2004,o,"The following treebanks were used for training the parser: (Aduriz et al. , 2003; Bhmov et al. 
, 2003; Chen et al. , 2003; Haji et al. , 2004; Marcus et al. , 1993; Mart et al. , 2002; Montemagni et al. 2003; Oflazer et al. , 2003; Prokopidis et al. , 2005; Csendes et al. , 2005)." D07-1121,J93-2004,o,"(1993), Johansson and Nugues (2007), Prokopidis et al." D07-1122,J93-2004,o,"We took part the Multilingual Track of all ten languages provided by the CoNLL-2007 shared task organizers(Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bohmova et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Csendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. , 2003)." D07-1124,J93-2004,o,"5 Results and Discussion The system with online learning and Nivres parsing algorithm was trained on the data released by CoNLL Shared Task Organizers for all the ten languages (Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bohmova et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Csendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. , 2003)." D07-1125,J93-2004,o,"Building heavily on the ideas of History-based parsing (Black et al. , 1993; Nivre, 2006), training the parser means essentially running the parsing algorithms in a learning mode on the data in order to gather training instances for the memory-based learner." D07-1126,J93-2004,o,"The pchemtb-closed shared task (Marcus et al. , 1993; Johansson and Nugues, 2007; Kulick et al. , 2004) is used to illustrate our models." D07-1126,J93-2004,o,"3 Experimental Results and Discussion We test our parsing models on the CONLL-2007 (Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bohmova et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Csendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. 
, 2003) data set on various languages including Arabic, Basque, Catalan, Chinese, English, Italian, Hungarian, and Turkish." D07-1127,J93-2004,o,"3 Experiments and Results All experiments were conducted on the treebanks provided in the shared task (Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bhmov et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Csendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. , 2003)." D07-1128,J93-2004,o,"We have achieved average results in the CoNLL domain adaptation track open submission (Marcus et al. , 1993; Johansson and Nugues, 2007; Kulick et al. , 2004; MacWhinney, 2000; Brown, 1973)." D07-1128,J93-2004,o,"We use a hand-written competence grammar, combined with performance-driven disambiguation obtained from the Penn Treebank (Marcus et al. , 1993)." D07-1129,J93-2004,o,"4 Experiments Our experiments were conducted on CoNLL-2007 shared task domain adaptation track (Nivre et al. , 2007) using treebanks (Marcus et al. , 1993; Johansson and Nugues, 2007; Kulick et al. , 2004)." D07-1131,J93-2004,o,"In this year, CoNLL-2007 shared task (Nivre et al. , 2007) focuses on multilingual dependency parsing based on ten different languages (Hajic et al. , 2004; Aduriz et al. , 2003; Mart et al. , 2007; Chen et al. , 2003; Bhmova et al. , 2003; Marcus et al. , 1993; Johansson and Nugues, 2007; Prokopidis et al. , 2005; Czendes et al. , 2005; Montemagni et al. , 2003; Oflazer et al. , 2003) and domain adaptation for English (Marcus et al. , 1993; Johansson and Nugues, 2007; Kulick et al. , 2004; MacWhinney, 2000; Brown, 1973) without taking the languagespecific knowledge into consideration." D08-1008,J93-2004,o,"Most systems for automatic role-semantic analysis have used constituent syntax as in the Penn Treebank (Marcus et al., 1993), although there has also been much research on the use of shallow syntax (Carreras and Mrquez, 2004) in SRL." 
D08-1050,J93-2004,o,"1 Introduction Most state-of-the-art wide-coverage parsers are based on the Penn Treebank (Marcus et al., 1993), making such parsers highly tuned to newspaper text." D08-1056,J93-2004,o,"These categories were automatically generated using the labeled parses in Penn Treebank (Marcus et al., 1993) and the labeled semantic roles of PropBank (Kingsbury et al., 2002)." D08-1070,J93-2004,o,"The model was trained on sections 2-21 from the English Penn Treebank (Marcus et al., 1993)." D08-1071,J93-2004,o,"4.1 Data Sets Our results are based on syntactic data drawn from the Penn Treebank (Marcus et al., 1993), specifically the portion used by CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000)." D08-1071,J93-2004,o,"We use 3500 sentences from CoNLL (Tjong Kim Sang and De Meulder, 2003) as the NER data and sections 20-23 of the WSJ (Marcus et al., 1993; Ramshaw and Marcus, 1995) as the POS/chunk data (8936 sentences)." D08-1091,J93-2004,o,"Table 1: Corpora and standard experimental setups. ENGLISH-WSJ (Marcus et al., 1993): training set Sections 2-21, dev. set Section 22, test set Section 23; ENGLISH-BROWN (Francis et al. 2002): training set see ENGLISH-WSJ, dev. set 10% of the data, test set 10% of the data; FRENCH (Abeille et al., 2000): Sentences 1-18,609 / 18,610-19,609 / 19,609-20,610; GERMAN (Skut et al., 1997): Sentences 1-18,602 / 18,603-19,602 / 19,603-20,602." D08-1093,J93-2004,o,"The other recipe that is currently used on a large scale is to measure the performance of a parser on existing treebanks, such as WSJ (Marcus et al., 1993), and assume that the accuracy measure will carry over to the domains of interest." D08-1105,J93-2004,o,"Building on the annotations from the Wall Street Journal (WSJ) portion of the Penn Treebank (Marcus et al., 1993), the project added several new layers of semantic annotations, such as coreference information, word senses, etc.
In its first release (LDC2007T21) through the Linguistic Data Consortium (LDC), the project manually sense-tagged more than 40,000 examples belonging to hundreds of noun and verb types with an ITA of 90%, based on a coarse-grained sense inventory, where each word has an average of only 3.2 senses." D09-1015,J93-2004,o,"We removed all but the first two characters of each POS tag, resulting in a set of 57 tags which more closely resembles that of the Penn TreeBank (Marcus et al., 1993)." D09-1015,J93-2004,o,"Our trees look just like syntactic constituency trees, such as those in the Penn TreeBank (Marcus et al., 1993). Figure 1: An example of our tree representation over nested named entities." D09-1031,J93-2004,o,"For (1), the morphemes and labels for our task are: (2) kita NEG tINC inE1S chabe VT -j SC laj PREP inA1S yol S -j SC iin PRON We also consider POS-tagging for Danish, Dutch, English, and Swedish; the English is from sections 00-05 (as training set) and 19-21 (as development set) of the Penn Treebank (Marcus et al., 1993), and the other languages are from the CoNLL-X dependency parsing shared task (Buchholz and Marsi, 2006).1 We split the original training data into training and development sets." D09-1034,J93-2004,o,"The modified version of the Roark parser, trained on the Brown Corpus section of the Penn Treebank (Marcus et al., 1993), was used to parse the different narratives and produce the word by word measures." D09-1047,J93-2004,o,"The PropBank corpus adds a semantic layer to parse trees from the Wall Street Journal section of the Penn Treebank II corpus (Marcus et al., 1993)." D09-1059,J93-2004,p,"In the statistical NLP community, the most widely used grammatical resource is the Penn Treebank (Marcus et al., 1993)."
D09-1060,J93-2004,o,"For English, we used the Penn Treebank (Marcus et al., 1993) in our experiments and the tool Penn2Malt7 to convert the data into dependency structures using a standard set of head rules (Yamada and Matsumoto, 2003)." D09-1076,J93-2004,o,"Though this model uses trees in the formal sense, it does not create Penn Treebank (Marcus et al., 1993) style linguistic trees, but uses only one non-terminal label (X) to create those trees using six simple rule structures." D09-1088,J93-2004,o,"1 Introduction Parsing technology has come a long way since Charniak (1996) demonstrated that a simple treebank PCFG performs better than any other parser (with F1 75 accuracy) on parsing the WSJ Penn treebank (Marcus et al., 1993)." D09-1126,J93-2004,o,"Hockenmaier and Steedman (2007) showed that a CCG corpus could be created by adapting the Penn Treebank (Marcus et al., 1993)." D09-1161,J93-2004,o,Averaged Perceptron Algorithm 5 Experiments We evaluate our method on both Chinese and English syntactic parsing task with the standard division on Chinese Penn Treebank Version 5.0 and WSJ English Treebank 3.0 (Marcus et al. 1993) as shown in Table 1. E06-1015,J93-2004,o,"4.1 Experimental Set-up We used two different corpora: PropBank (www.cis.upenn.edu/ace) along with Penn Treebank 2 (Marcus et al. , 1993) and FrameNet." E06-1034,J93-2004,o,"For example, in the WSJ corpus, part of the Penn Treebank 3 release (Marcus et al. , 1993), the string in (1) is a variation 12-gram since off is a variation nucleus that in one corpus occurrence is tagged as a preposition (IN), while in another it is tagged as a particle (RP)." E09-1033,J93-2004,o,"Table 1: Average F1 of 7-way cross-validation. Baseline: 87.89 train, 87.89 test; Contrastive (5 trials/fold): 88.70 (+0.82) train, 88.45 (+0.56) test; Contrastive (greedy selection): 88.82 (+0.93) train, 88.55 (+0.66) test. To generate the alignments, we used Model 4 (Brown et al., 1993), as implemented in GIZA++ (Och and Ney, 2003)."
E09-1033,J93-2004,o,"Our test set is 3718 sentences from the English Penn treebank (Marcus et al., 1993) which were translated into German." E09-1060,J93-2004,o,"Corpora in various languages, such as the English Penn Treebank corpus (Marcus et al., 1993), the Swedish Stockholm-Umeå corpus (Ejerhed et al., 1992), and the Icelandic Frequency Dictionary (IFD) corpus (Pind et al., 1991), have been used to train (in the case of data-driven methods) and develop (in the case of linguistic rule-based methods) different taggers, and to evaluate their accuracy, e.g." E09-1079,J93-2004,o,"Purely syntactic categories lead to a smaller number of tags which also improves the accuracy of manual tagging (Marcus et al., 1993)." E09-1080,J93-2004,o,"4.3 Corpora The evaluations of the different models were carried out on the Penn Wall Street Journal corpus (Marcus et al., 1993) for English, and the Tiger treebank (Brants et al., 2002) for German." E09-1094,J93-2004,o,"Our model uses an exemplar memory that consists of 133566 verb-role-noun triples extracted from the Wall Street Journal and Brown parts of the Penn Treebank (Marcus et al., 1993)." E95-1015,J93-2004,o,"4.1 The test environment For our experiments, we used a manually corrected version of the Air Travel Information System (ATIS) spoken language corpus (Hemphill et al. , 1990) annotated in the Pennsylvania Treebank (Marcus et al. , 1993)." E95-1022,J93-2004,o,"ena or the linguist's abstraction capabilities (e.g. knowledge about what is relevant in the context), they tend to reach a 95-97% accuracy in the analysis of several languages, in particular English (Marshall 1983; Black et al. 1992; Church 1988; Cutting et al. 1992; de Marcken 1990; DeRose 1988; Hindle 1989; Merialdo 1994; Weischedel et al. 1993; Brill 1992; Samuelsson 1994; Eineborg and Gambäck 1994, etc.)." E95-1022,J93-2004,o,Note in passing that the ratio 1.04-1.08/99.7% compares very favourably with other systems; c.f. 3.0/99.3% by POST (Weischedel et al.
1993) and 1.04/97.6% or 1.09/98.6% by de Marcken (1990). E95-1029,J93-2004,o,"Our results agree, at least at the level of morphology, with (Leech and Eyes 1993; Marcus et al. 1993)." E95-1029,J93-2004,o,"A more optimistic view can be found in (Leech and Eyes 1993, p. 39; Marcus et al. 1993, p. 328); they argue that a near-100% interjudge agreement is possible, provided the part-of-speech annotation is done carefully by experts." E99-1031,J93-2004,o,"Our experiments created translation modules for two evaluation corpora: written news stories from the Penn Treebank corpus (Marcus et al. , 1993) and spoken task-oriented dialogues from the TRAINS93 corpus (Heeman and Allen, 1995)." E99-1050,J93-2004,o,"The tags sets we shall examine are the set used in the Penn Tree Bank (PTB) (Marcus et al. , 1993) and the C5 tag-set used by the CLAWS part-of-speech tagger (Garside, 1996)." H05-1035,J93-2004,o,"The difference in accuracy between a SVM model applied to RRR dataset (RRR-basic experiment) and the same experiment applied to TB2 dataset (TB2278 Description Accuracy Data Extra Supervision Always noun 55.0 RRR Most likely for each P 72.19 RRR Most likely for each P 72.30 TB2 Most likely for each P 81.73 FN Average human, headwords (Ratnaparkhi et al. , 1994) 88.2 RRR Average human, whole sentence (Ratnaparkhi et al. , 1994) 93.2 RRR Maximum Likelihood-based (Hindle and Rooth, 1993) 79.7 AP Maximum entropy, words (Ratnaparkhi et al. , 1994) 77.7 RRR Maximum entropy, words & classes (Ratnaparkhi et al. , 1994) 81.6 RRR Decision trees (Ratnaparkhi et al. , 1994) 77.7 RRR Transformation-Based Learning (Brill and Resnik, 1994) 81.8 WordNet Maximum-Likelihood based (Collins and Brooks, 1995) 84.5 RRR Maximum-Likelihood based (Collins and Brooks, 1995) 86.1 TB2 Decision trees & WSD (Stetina and Nagao, 1997) 88.1 RRR WordNet Memory-based Learning (Zavrel et al. 
, 1997) 84.4 RRR LexSpace Maximum entropy, unsupervised (Ratnaparkhi, 1998) 81.9 Maximum entropy, supervised (Ratnaparkhi, 1998) 83.7 RRR Neural Nets (Alegre et al. , 1999) 86.0 RRR WordNet Boosting (Abney et al. , 1999) 84.4 RRR Semi-probabilistic (Pantel and Lin, 2000) 84.31 RRR Maximum entropy, ensemble (McLauchlan, 2001) 85.5 RRR LSA SVM (Vanschoenwinkel and Manderick, 2003) 84.8 RRR Nearest-neighbor (Zhao and Lin, 2004) 86.5 RRR DWS FN dataset, w/o semantic features (FN-best-no-sem) 91.79 FN PR-WWW FN dataset, w/ semantic features (FN-best-sem) 92.85 FN PR-WWW TB2 dataset, best feature set (TB2-best) 93.62 TB2 PR-WWW Table 5: Accuracy of PP-attachment ambiguity resolution (our results in bold) basic experiment) is 2.9%." H05-1035,J93-2004,o,"But if one limits the information used for disambiguation of the PP-attachment to include only the verb, the noun representing its object, the preposition and the main noun in the PP, the accuracy for human decision degrades from 93.2% to 88.2% (Ratnaparkhi et al. , 1994) on a dataset extracted from Penn Treebank (Marcus et al. , 1993)." H05-1066,J93-2004,o,"In fact, the largest source of English dependency trees is automatically generated from the Penn Treebank (Marcus et al. , 1993) and is by convention exclusively projective." H05-1066,J93-2004,o,"Table 2 shows the results for English projective dependency trees extracted from the Penn Treebank (Marcus et al. , 1993) using the rules of Yamada and Matsumoto (2003)." H05-1070,J93-2004,o,"Examples are the Penn Treebank (Marcus et al. , 1993) for American English annotated at the University of Pennsylvania, the French treebank (Abeille and Clement, 1999) developed in Paris, the TIGER Corpus (Brants et al. , 2002) for German annotated at the Universities of Saarbrücken and This research was funded by a German Science Foundation grant (DFG SFB441-6)." H05-1078,J93-2004,o,"Statistical parsers trained on the Penn Treebank (PTB) (Marcus et al.
, 1993) produce trees annotated with bare phrase structure labels (Collins, 1999; Charniak, 2000)." H05-1083,J93-2004,o,"As for parser, we train three off-shelf maximum-entropy parsers (Ratnaparkhi, 1999) using the Arabic, Chinese and English Penn treebank (Maamouri and Bies, 2004; Xia et al. , 2000; Marcus et al. , 1993)." H05-1083,J93-2004,o,"This is possible because of the availability of statistical parsers, which can be trained on human-annotated treebanks (Marcus et al. , 1993; Xia et al. , 2000; Maamouri and Bies, 2004) for multiple languages; (2) The binding theory is used as a guideline and syntactic structures are encoded as features in a maximum entropy coreference system; (3) The syntactic features are evaluated on three languages: Arabic, Chinese and English (one goal is to see if features motivated by the English language can help coreference resolution in other languages)." H05-1099,J93-2004,o,"Li and Roth demonstrated that their shallow parser, trained to label shallow constituents along the lines of the well-known CoNLL2000 task (Sang and Buchholz, 2000), outperformed the Collins parser in correctly identifying these constituents in the Penn Wall Street Journal (WSJ) Treebank (Marcus et al. , 1993)." H94-1034,J93-2004,o,"In a test set of 756 utterances containing 26 repairs (Dowding et al. , 1993), they obtained a detection recall rate of 42% and a precision of 84.6%; for correction, they obtained a recall rate of 30% and a precision rate of 62%." H94-1034,J93-2004,o,"Good part-of-speech results can be obtained using only the preceding category (Weischedel et al. , 1993), which is what we will be using." I05-2019,J93-2004,o,"eBonsai first performs syntactic analysis of a sentence using a parser based on GLR algorithm (MSLR parser) (Tanaka et al. , 1993), and provides candidates of its syntactic structure." I05-2019,J93-2004,o,"The MSLR parser (Tanaka et al. , 1993) performs syntactic analysis of the sentence."
I05-2019,J93-2004,p,"Particularly, syntactically annotated corpora (treebanks), such as Penn Treebank (Marcus et al. , 1993), Negra Corpus (Skut et al. , 1997) and EDR Corpus (Jap, 1994), contribute to improve the performance of morpho-syntactic analysis systems." I05-2041,J93-2004,o,"The first approaches are used for Penn Treebank (Marcus et al. , 1993) and the KAIST language resource (Lee et al. , 1997; Choi, 2001)." I05-2041,J93-2004,o,"However, most parsers still tend to show low performance on the long sentences (Li et al. , 1990; Doi et al. , 1993; Kim et al. , 2000)." I05-2041,J93-2004,p,"This kind of corpus has served as an extremely valuable resource for computational linguistics applications such as machine translation and question answering (Lee et al. , 1997; Choi, 2001), and has also proved useful in theoretical linguistics research (Marcus et al. , 1993)." I05-3005,J93-2004,o,"(Ng and Low 2004, Toutanova et al, 2003, Brants 2000, Ratnaparkhi 1996, Samuelsson 1993)." I05-3016,J93-2004,o,"The implementation of the algorithm is one that has a core of code that can run on either the Penn Treebank (Marcus et al. , 1993) or on the Chinese Treebank." I05-4002,J93-2004,o,"For many languages, large-scale syntactically annotated corpora have been built (e.g. the Penn Treebank (Marcus et al. , 1993)), and many parsing algorithms using CFGs have been proposed." I05-6007,J93-2004,o,"Since the texts in the RST Treebank are taken from the syntactically annotated Penn Treebank (Marcus et al. , 1993), it is natural to ask what the relation is between the discourse structures in the RST Treebank and the syntactic structures of the Penn Treebank." I08-2096,J93-2004,o,"Evaluations are typically carried out on newspaper texts, i.e. on section 23 of the Penn Treebank (PTB) (Marcus et al., 1993)." 
I08-2099,J93-2004,p,"Some notable efforts in this direction for other languages have been the Penn Tree Bank (Marcus et al., 1993) for English and the Prague Dependency Bank (Hajicova, 1998) for Czech." I08-2099,J93-2004,o,"Other works based on this scheme like (Bharati et al., 1993; Bharati et al., 2002; Pedersen et al., 2004) have shown promising results." J00-4003,J93-2004,o,"Only recently have robust knowledge-based methods for some of these tasks begun to appear, and their performance is still not very good, as seen above in our discussion of using WordNet as a semantic network; as for checking the plausibility of a hypothesis on the basis of causal knowledge about the world, we now have a much better theoretical grasp of how such inferences could be made (see, for example, Hobbs et al. [1993] and Lascarides and Asher [1993]), but we are still quite a long way from a general inference engine." J01-2004,J93-2004,o,"It has been shown repeatedly--e.g. , Briscoe and Carroll (1993), Charniak (1997), Collins (1997), Inui et al." J01-4003,J93-2004,o,"Next we use the conclusions from two psycholinguistic experiments on ranking the Cf-list, the salience of discourse entities in prepended phrases (Gordon, Grosz, and Gilliom 1993) and the ordering of possessor and possessed in complex NPs (Gordon et al. 1999), to try to improve the performance of LRC." J06-1005,J93-2004,o,"5 The SemCor collection (Miller et al., 1993) is a subset of the Brown Corpus and consists of 352 news articles distributed into three sets in which the nouns, verbs, adverbs, and adjectives have been manually tagged with their corresponding WordNet senses and part-of-speech tags using Brill's tagger (1995)." J07-3004,J93-2004,p,"One of the largest and earliest such efforts is the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993; Marcus et al.
1994), which contains a one-million word" J08-3003,J93-2004,o,"Whereas most of the work on English has been based on constituency-based representations, partly influenced by the availability of data resources such as the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993), it has been argued that free constituent order languages can be analyzed more adequately using dependency-based representations, which is also the kind of annotation found, for example, in the Prague Dependency Treebank of Czech (Hajič et al. 2001)." J08-4003,J93-2004,o,"Dependency Treebank (Hajič et al. 2001; Bohmova et al. 2003), and in Figure 2, for an English sentence taken from the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993; Marcus et al. 1994)." J94-4005,J93-2004,o,"Lexical collocation functions, especially those determined statistically, have recently attracted considerable attention in computational linguistics (Calzolari and Bindi 1990; Church and Hanks 1990; Sekine et al. 1992; Hindle and Rooth 1993) mainly, though not exclusively, for use in disambiguation." J94-4005,J93-2004,o,"More specifically, the work on optimizing preference factors and semantic collocations was done as part of a project on spoken language translation in which the CLE was used for analysis and generation of both English and Swedish (Agnäs et al. 1993)." J95-2001,J93-2004,o,"Rule-based taggers (Brill 1992; Elenius 1990; Jacobs and Zernik 1988; Karlsson 1990; Karlsson et al. 1991; Voutilainen, Heikkilä, and Anttila 1992; Voutilainen and Tapanainen 1993) use POS-dependent constraints defined by experienced linguists." J95-2001,J93-2004,o,"Stochastic taggers use both contextual and morphological information, and the model parameters are usually defined or updated automatically from tagged texts (Cerf-Danon and El-Beze 1991; Church 1988; Cutting et al.
1992; Dermatas and Kokkinakis 1988, 1990, 1993, 1994; Garside, Leech, and Sampson 1987; Kupiec 1992; Maltese" J95-2001,J93-2004,o,"and Mancini 1991; Meteer, Schwartz, and Weischedel 1991; Merialdo 1991; Pelillo, Moro, and Refice 1992; Weischedel et al. 1993; Wothke et al. 1993)." J95-2001,J93-2004,o,"Recently, several solutions to the problem of tagging unknown words have been presented (Charniak et al. 1993; Meteer, Schwartz, and Weischedel 1991)." J95-2001,J93-2004,o,"Hypotheses for unknown words, both stochastic (Dermatas and Kokkinakis 1993, 1994; Maltese and Mancini 1991; Weischedel et al. 1993), and connectionist (Eineborg and Gambäck 1993; Elenius 1990) have been applied to unlimited vocabulary taggers." J95-2001,J93-2004,o,"When the training text is adequate to estimate the tagger parameters, more efficient stochastic taggers (Dermatas and Kokkinakis 1994; Maltese and Mancini 1991; Weischedel et al. 1993) and training methods can be implemented (Merialdo 1994)." J95-4004,J93-2004,o,"A number of part-of-speech taggers are readily available and widely used, all trained and retrainable on text corpora (Church 1988; Cutting et al. 1992; Brill 1992; Weischedel et al. 1993)." J95-4004,J93-2004,o,"Endemic structural ambiguity, which can lead to such difficulties as trying to cope with the many thousands of possible parses that a grammar can assign to a sentence, can be greatly reduced by adding empirically derived probabilities to grammar rules (Fujisaki et al. 1989; Sharman, Jelinek, and Mercer 1990; Black et al. 1993) and by computing statistical measures of lexical association (Hindle and Rooth 1993)."
J95-4004,J93-2004,o,"Part-of-speech tagging is an active area of research; a great deal of work has been done in this area over the past few years (e.g. , Jelinek 1985; Church 1988; Derose 1988; Hindle 1989; DeMarcken 1990; Merialdo 1994; Brill 1992; Black et al. 1992; Cutting et al. 1992; Kupiec 1992; Charniak et al. 1993; Weischedel et al. 1993; Schutze and Singer 1994)." J95-4004,J93-2004,o,Almost all recent work in developing automatically trained part-of-speech taggers has been on further exploring Markov-model based tagging (Jelinek 1985; Church 1988; Derose 1988; DeMarcken 1990; Merialdo 1994; Cutting et al. 1992; Kupiec 1992; Charniak et al. 1993; Weischedel et al. 1993; Schutze and Singer 1994). J97-3003,J93-2004,o,"Such methods can achieve better performance, reaching tagging accuracy of up to 85% on unknown words for English (Brill 1994; Weischedel et al. 1993)." J97-3003,J93-2004,o,"It has been noticed (as in [Weischedel et al. , 1993], for example) that capitalized and hyphenated words have a different distribution from other words." J98-2001,J93-2004,o,"In the past two or three years, this kind of verification has been attempted for other aspects of semantic interpretation: by Passonneau and Litman (1993) for segmentation and by Kowtko, Isard, and Doherty (1992) and Carletta et al." J98-2002,J93-2004,o,"Some of these methods make use of prior knowledge in the form of an existing thesaurus (Resnik 1993a, 1993b; Framis 1994; Almuallim et al. 1994; Tanaka 1996; Utsuro and Matsumoto 1997), while others do not rely on any prior knowledge (Pereira, Tishby, and Lee 1993; Grishman and Sterling 1994; Tanaka 1994)." J98-2002,J93-2004,o,"The second approach (Sekine et al. 1992; Chang, Luo, and Su 1992; Resnik 1993a; Grishman and Sterling 1994; Alshawi and Carter 1994) takes triples (verb, prep, noun2) and (noun1, prep, noun2), like those in Table 10, as training data for acquiring semantic knowledge and performs PP-attachment disambiguation on quadruples."
J99-2004,J93-2004,o,"2.2 Statistical Parsers Pioneered by the IBM natural language group (Fujisaki et al. 1989) and later pursued by, for example, Schabes, Roth, and Osborne (1993), Jelinek et al." J99-2004,J93-2004,o,Supertags Part-of-speech disambiguation techniques (POS taggers) (Church 1988; Weischedel et al. 1993; Brill 1993) are often used prior to parsing to eliminate (or substantially reduce) the part-of-speech ambiguity. J99-4003,J93-2004,o,These are the same distributions that are needed by previous POS-based language models (Equation 5) and POS taggers (Church 1988; Charniak et al. 1993). J99-4003,J93-2004,o,"This source is very important for repairs that do not have initial retracing, and is the mainstay of the ""parser-first"" approach (e.g. , Dowding et al. 1993)--keep trying alternative corrections until one of them parses." J99-4003,J93-2004,o,"In a test set containing 26 repairs (Dowding et al. 1993), they obtained a detection recall rate of 42% with a precision of 85%, and a correction recall rate of 31% with a precision of 62%." L08-1018,J93-2004,o,"(Voorhees and Harman, 1999), Message Understanding Conferences (MUC) (Chinchor et al, 1993), TIPSTER SUMMAC Text Summarization Evaluation (Mani et al, 1998), Document Understanding Conference (DUC) (DUC, 2004), and Text Summarization Challenge (TSC) (Fukushima and Okumura, 2001), have attested the importance of this topic." L08-1018,J93-2004,o,"Conferences (MUC) (Chinchor et al, 1993), TIPSTER SUMMAC Text Summarization Evaluation (Mani et al, 1998), Document Understanding Conference (DUC) (DUC, 2004), and Text Summarization Challenge (TSC) (Fukushima and Okumura, 2001), have attested the importance of this topic." L08-1018,J93-2004,o,"In acknowledgment of this fact, a series of conferences like Text Retrieval Conferences (TREC) (Voorhees and Harman, 1999), Message Understanding Conferences (MUC) (Chinchor et al, 1993), TIPSTER SUMMAC Text Summarization Evaluation (Mani et al, 1998), Document Understanding Conference (DUC) (DUC, 2004), and Text Summar" M98-1009,J93-2004,o,"Training Data Our source for syntactically annotated training data was the Penn Treebank (Marcus et al. , 1993)." N01-1023,J93-2004,o,"6 Experiment 6.1 Setup The experiments we report were done on the Penn Treebank WSJ Corpus (Marcus et al. , 1993)." N01-1023,J93-2004,o,"They train from the Penn Treebank (Marcus et al. , 1993); a collection of 40,000 sentences that are labeled with corrected parse trees (approximately a million word tokens)." N03-1013,J93-2004,o,"3 Building the CatVar The CatVar database was developed using a combination of resources and algorithms including the Lexical Conceptual Structure (LCS) Verb and Preposition Databases (Dorr, 2001), the Brown Corpus section of the Penn Treebank (Marcus et al. , 1993), an English morphological analysis lexicon developed for PC-Kimmo (Englex) (Antworth, 1990), NOMLEX (Macleod et al.
, 1998), Longman Dictionary of Contemporary English For a deeper discussion and classification of Porter stemmer's errors, see (Krovetz, 1993)." N03-1013,J93-2004,o,"Resources specifying the relations among lexical items such as WordNet (Fellbaum, 1998) and HowNet (Dong, 2000) (among others) have inspired the work of many researchers in NLP (Carpuat et al. , 2002; Dorr et al. , 2000; Resnik, 1999; Hearst, 1998; Voorhees, 1993)." N03-1014,J93-2004,o,"6 The Experimental Results We used the Penn Treebank (Marcus et al. , 1993) to perform empirical experiments on this parsing model." N03-1030,J93-2004,p,"1 Introduction By exploiting information encoded in human-produced syntactic trees (Marcus et al. , 1993), research on probabilistic models of syntax has driven the performance of syntactic parsers to about 90% accuracy (Charniak, 2000; Collins, 2000)." N03-1031,J93-2004,o,"1 Introduction Current state-of-the-art statistical parsers (Collins, 1999; Charniak, 2000) are trained on large annotated corpora such as the Penn Treebank (Marcus et al. , 1993)." N03-1033,J93-2004,o,"Secondly, while all taggers use lexical information, and, indeed, it is well-known that lexical probabilities are much more revealing than tag sequence probabilities (Charniak et al. , 1993), most taggers make quite limited use of lexical probabilities (compared with, for example, the bilexical probabilities commonly used in current statistical parsers)." N03-3006,J93-2004,o,"The parser has been trained, developed and tested on a large collection of syntactically analyzed sentences, the Penn Treebank (Marcus et al. , 1993)." N04-1016,J93-2004,o,"Table 6 shows 3An exception is Golding (1995), who uses the entire Brown corpus for training (1M words) and 3/4 of the Wall Street Journal corpus (Marcus et al. , 1993) for testing."
N04-1016,J93-2004,o,"6 Bracketing of Compound Nouns The first analysis task we consider is the syntactic disambiguation of compound nouns, which has received a fair amount of attention in the NLP literature (Pustejovsky et al. , 1993; Resnik, 1993; Lauer, 1995)." N04-1016,J93-2004,o,"The simplest model of compound noun disambiguation compares the frequencies of the two competing analyses and opts for the most frequent one (Pustejovsky et al. , Model Alta BNC Baseline 63.93 63.93 f (n1;n2) : f (n2;n3) 77.86 66.39 f (n1;n2) : f (n1;n3) 78.68# 65.57 f (n1;n2)= f (n1) : f (n2;n3)= f (n2) 68.85 65.57 f (n1;n2)= f (n2) : f (n2;n3)= f (n3) 70.49 63.11 f (n1;n2)= f (n2) : f (n1;n3)= f (n3) 80.32 66.39 f (n1;n2) : f (n2;n3) (NEAR) 68.03 63.11 f (n1;n2) : f (n1;n3) (NEAR) 71.31 67.21 f (n1;n2)= f (n1) : f (n2;n3)= f (n2) (NEAR) 61.47 62.29 f (n1;n2)= f (n2) : f (n2;n3)= f (n3) (NEAR) 65.57 57.37 f (n1;n2)= f (n2) : f (n1;n3)= f (n3) (NEAR) 75.40 68.03# Table 8: Performance of Altavista counts and BNC counts for compound bracketing (data from Lauer 1995) Model Accuracy Baseline 63.93 Best BNC 68.036 Lauer (1995): adjacency 68.90 Lauer (1995): dependency 77.50 Best Altavista 78.686 Lauer (1995): tuned 80.70 Upper bound 81.50 Table 9: Performance comparison with the literature for compound bracketing 1993)." N06-1020,J93-2004,o,"3.3 Corpora Our labeled data comes from the Penn Treebank (Marcus et al. , 1993) and consists of about 40,000 sentences from Wall Street Journal (WSJ) articles 153 annotated with syntactic information." N06-1021,J93-2004,o,"For English, we used the Penn Treebank version 3.0 (Marcus et al. , 1993) and extracted dependency relations by applying the head-finding rules of (Yamada and Matsumoto, 2003)." N06-1031,J93-2004,o,"The yield of this tree gives the target translation: the gunman was killed by police . The Penn English Treebank (PTB) (Marcus et al. , 1993) is our source of syntactic information, largely due to the availability of reliable parsers." 
N06-1040,J93-2004,o,"Finally, Section 4 reports the results of parsing experiments using our exhaustive k-best CYK parser with the concise PCFGs induced from the Penn WSJ treebank (Marcus et al. , 1993)." N06-1054,J93-2004,o,"We use data from the CoNLL-2004 shared taskthe PropBank (Palmer et al. , 2005) annotations of the Penn Treebank (Marcus et al. , 1993), with sections 1518 as the training set and section 20 as the development set." N06-2015,J93-2004,p,"2 Treebanking The Penn Treebank (Marcus et al. , 1993) is annotated with information to make predicate-argument structure easy to decode, including function tags and markers of empty categories that represent displaced constituents." N06-2019,J93-2004,o,"For our out-of-domain training condition, the parser was trained on sections 2-21 of the Wall Street Journal (WSJ) corpus (Marcus et al. , 1993)." N06-2025,J93-2004,o,"4.2 Experiments on SRL dataset We used two different corpora: PropBank (www.cis.upenn.edu/ace) along with Penn Treebank 2 (Marcus et al. , 1993) and FrameNet." N06-2026,J93-2004,o,"PropBank encodes propositional information by adding a layer of argument structure annotation to the syntactic structures of the Penn Treebank (Marcus et al. , 1993)." N06-2033,J93-2004,o,"Much of this work has been fueled by the availability of large corpora annotated with syntactic structures, especially the Penn Treebank (Marcus et al. , 1993)." N07-1049,J93-2004,o,"Occasionally, in 59 sentences out of 2416 on section 23 of the Wall Street Journal Penn Treebank (Marcus et al. , 1993), the shift-reduce parser fails to attach a node to a head, producing a disconnected graph." N07-1051,J93-2004,o,"ENGLISH GERMAN CHINESE (Marcus et al. , 1993) (Skut et al. , 1997) (Xue et al. , 2002) TrainSet Section 2-21 Sentences 1-18,602 Articles 26-270 DevSet Section 22 18,603-19,602 Articles 1-25 TestSet Section 23 19,603-20,602 Articles 271-300 Table 3: Experimental setup." 
N07-1058,J93-2004,p,The default training set of Penn Treebank (Marcus et al. 1993) was used for the parser because the domain and style of those texts actually matches fairly well with the domain and style of the texts on which a reading level predictor for second language learners might be used. N07-2045,J93-2004,o,"2.1 Training the model As with (Minnen et al. , 2000), we train the language model on the Penn Treebank (Marcus et al. , 1993)." N07-2045,J93-2004,o,"We also test our language model using leave-one-out cross-validation on the Penn Treebank (Marcus et al. , 1993) (WSJ), giving us 86.74% accuracy (see Table 1)." N09-1009,J93-2004,o,"4 Experiments Our experiments involve data from two treebanks: the Wall Street Journal Penn treebank (Marcus et al., 1993) and the Chinese treebank (Xue et al., 2004)." N09-1012,J93-2004,o,"6 Results We trained on the standard Penn Treebank WSJ corpus (Marcus et al., 1993)." N09-1013,J93-2004,o,"Standard CI Model 1 training, initialised with a uniform translation table so that t(ejf) is constant for all source/target word pairs (f,e), was run on untagged data for 10 iterations in each direction (Brown et al., 1993; Deng and Byrne, 2005b)." N09-1013,J93-2004,o,"2.1 EM parameter estimation We train using Expectation Maximisation (EM), optimising the log probability of the training setfe(s),f(s)gSs=1 (Brown et al., 1993)." N09-1013,J93-2004,o,"Then P(eI1jfj1) = summationtextaI 1 P(eI1,aI1jfj1) (Brown et al., 1993)." N09-1063,J93-2004,o,"We used the Berkeley Parser 2 to train such grammars on sections 2-21 of the Penn Treebank (Marcus et al., 1993)." N09-1073,J93-2004,o,"For this paper, we use an exact inference (exhaustive search) CYK parser, using a simple probabilistic context-free grammar (PCFG) induced from the Penn WSJ Treebank (Marcus et al., 1993)." 
N09-2032,J93-2004,o,"To determine the target distribution we classified 171 (approximately 5%) randomly selected utterances from the TownInfo data, that were used as a development set.2 In Table 1 we can see that 15.2 % of the trees in the artificial corpus will be NP NSUs.3 4 Data generation We constructed our artificial corpus from sections 2 to 21 of the Wall Street Journal (WSJ) section of the Penn Treebank corpus (Marcus et al., 1993) 2We discarded very short utterances (yes, no, and greetings) since they don't need parsing." N09-2044,J93-2004,o,"The standard features for genre classification models include words, part-of-speech (POS) tags, and punctuation (Kessler et al., 1997; Stamatatos et al., 2000; Lee and Myaeng, 2002; Biber, 1993), but constituent-based syntactic categories have also been explored (Karlgren and Cutting, 1994)." N09-2044,J93-2004,o,"A similar approach is used here, including a collapsed version of the Treebank POS tag set (Marcus et al., 1993), with additions for specific words (e.g. personal pronouns and filled pause markers), compound punctuation (e.g. multiple exclamation marks), and a general emoticon tag, resulting in a total of 41 tags." N09-2061,J93-2004,o,"2 Previous Work We briefly outline the most important existing methods and cite error rates on a standard English data set, sections 03-06 of the Wall Street Journal (WSJ) corpus (Marcus et al., 1993), containing nearly 27,000 examples." N09-3004,J93-2004,o,"4 Experimental Set-up For the experiments, we use the WSJ portion of the Penn tree bank (Marcus et al., 1993), using the standard train/development/test splits, viz 39,832 sentences from 2-21 sections, 2416 sentences from section 23 for testing and 1,700 sentences from section 22 for development." N09-3017,J93-2004,o,"O'Hara and Wiebe (2003) also make use of high level features, in their case the Penn Treebank (Marcus et al., 1993) and FrameNet (Baker et al., 1998) to classify prepositions."
N09-3017,J93-2004,o,"O'Hara and Wiebe (2003) make use of Penn Treebank (Marcus et al., 1993) and FrameNet (Baker et al., 1998) to classify prepositions." P00-1061,J93-2004,o,"In all of the cited approaches, the Penn Wall Street Journal Treebank (Marcus et al. , 1993) is used, the availability of which obviates the standard effort required for treebank training--hand-annotating large corpora of specific domains of specific languages with specific parse types." P01-1003,J93-2004,o,"4 Experimental Work A part of the Wall Street Journal (WSJ) which had been processed in the Penn Treebank Project (Marcus et al. , 1993) was used in the experiments." P01-1010,J93-2004,o,"While this technique has been successfully applied to parsing the ATIS portion in the Penn Treebank (Marcus et al. 1993), it is extremely time consuming." P01-1044,J93-2004,o,"We used treebank grammars induced directly from the local trees of the entire WSJ section of the Penn Treebank (Marcus et al. , 1993) (release 3)." P02-1018,J93-2004,o,"Evaluating the algorithm on the output of Charniak's parser (Charniak, 2000) and the Penn treebank (Marcus et al. , 1993) shows that the pattern-matching algorithm does surprisingly well on the most frequently occurring types of empty nodes given its simplicity." P02-1026,J93-2004,o,"Penn Treebank corpus (Marcus et al. , 1993) sections 0-20 were used for training, sections 21-24 for testing." P02-1034,J93-2004,o,The Penn Wall Street Journal treebank (Marcus et al. 1993) was used as training and test data. P02-1055,J93-2004,o,"task (Church, 1988; Brill, 1993; Ratnaparkhi, 1996; Daelemans et al. , 1996), and reported errors in the range of 2-6% are common." P02-1055,J93-2004,o,"In one experiment, it has to be performed on the basis of the gold-standard, assumed-perfect POS taken directly from the training data, the Penn Treebank (Marcus et al. , 1993), so as to abstract from a particular POS tagger and to provide an upper bound."
P02-1055,J93-2004,o,"Our chunks and functions are based on the annotations in the third release of the Penn Treebank (Marcus et al. , 1993)." P03-1013,J93-2004,o,"The annotation scheme (Skut et al. , 1997) is modeled to a certain extent on that of the Penn Treebank (Marcus et al. , 1993), with crucial differences." P03-1013,J93-2004,o,"However, most of the existing models have been developed for English and trained on the Penn Treebank (Marcus et al. , 1993), which raises the question whether these models generalize to other languages, and to annotation schemes that differ from the Penn Treebank markup." P03-1055,J93-2004,o,"We used the Wall Street Journal (WSJ) part of the Penn Treebank (Marcus et al. , 1993), where extraction is represented by co-indexing an empty terminal element (henceforth EE) to its antecedent." P03-1069,J93-2004,o,"It achieves 90.1% average precision/recall for sentences with maximum length 40 and 89.5% for sentences with maximum length 100 when trained and tested on the standard sections of the Wall Street Journal Treebank (Marcus et al. , 1993)." P03-2006,J93-2004,o,"We performed experiments with two statistical classifiers: the decision tree induction system C4.5 (Quinlan, 1993) and the Tilburg Memory-Based Learner (TiMBL) (Daelemans et al. , 2002)." P03-2006,J93-2004,o,"The corpus was automatically derived from the Penn Treebank II corpus (Marcus et al. , 1993), by means of the script chunklink.pl (Buchholz, 2002) that we modified to fit our purposes." P03-2036,J93-2004,o,"We performed a comparison between the existing CFG filtering techniques for LTAG (Poller and Becker, 1998) and HPSG (Torisawa et al. , 2000), using strongly equivalent grammars obtained by converting LTAGs extracted from the Penn Treebank (Marcus et al. , 1993) into HPSG-style." P03-2036,J93-2004,o,"(2003) from Sections 2-21 of the Wall Street Journal (WSJ) in the Penn Treebank (Marcus et al. 
, 1993) and its subsets.3 We then converted them into strongly equivalent HPSG-style grammars using the grammar conversion described in Section 2.1." P04-1006,J93-2004,o,"The first stage parser is a best-first PCFG parser trained on sections 2 through 22, and 24 of the Penn WSJ treebank (Marcus et al. , 1993)." P04-1013,J93-2004,o,"6 The Experiments We used the Penn Treebank (Marcus et al. , 1993) to perform empirical experiments on the proposed parsing models." P04-1043,J93-2004,o,"4.1 Corpora set-up The above kernels were experimented over two corpora: PropBank (www.cis.upenn.edu/ ace) along with Penn TreeBank5 2 (Marcus et al. , 1993) and FrameNet." P04-1052,J93-2004,o,"4 Evaluation As our algorithm works in open domains, we were able to perform a corpus-based evaluation using the Penn WSJ Treebank (Marcus et al. , 1993)." P04-1052,J93-2004,o,"Also, attribute classification is a hard problem and there is no existing classification scheme that can be used for open domains like newswire; for example, WordNet (Miller et al. , 1993) organises adjectives as concepts that are related by the non-hierarchical relations of synonymy and antonymy (unlike nouns that are related through hierarchical links such as hyponymy, hypernymy and metonymy)." P04-1082,J93-2004,o,"Using linguistic principles to recover empty categories Richard CAMPBELL Microsoft Research One Microsoft Way Redmond, WA 98052 USA richcamp@microsoft.com Abstract This paper describes an algorithm for detecting empty nodes in the Penn Treebank (Marcus et al. , 1993), finding their antecedents, and assigning them function tags, without access to lexical information such as valency." P04-1082,J93-2004,o,"In the Penn Treebank (Marcus et al. , 1993), null elements, or empty categories, are used to indicate non-local dependencies, discontinuous constituents, and certain missing elements." P05-1012,J93-2004,o,"3 Experiments We tested our methods experimentally on the English Penn Treebank (Marcus et al.
, 1993) and on the Czech Prague Dependency Treebank (Hajic, 1998)." P05-1023,J93-2004,o,"5 The Experimental Results We used the Penn Treebank WSJ corpus (Marcus et al. , 1993) to perform empirical experiments on the proposed parsing models." P05-1023,J93-2004,o,"The most sophisticated of these techniques (such as Support Vector Machines) are unfortunately too computationally expensive to be used on large datasets like the Penn Treebank (Marcus et al. , 1993)." P05-1025,J93-2004,o,"Unlabeled dependencies can be readily obtained by processing constituent trees, such as those in the Penn Treebank (Marcus et al. , 1993), with a set of rules to determine the lexical heads of constituents." P05-1036,J93-2004,o,"The K&M model creates a packed parse forest of all possible compressions that are grammatical with respect to the Penn Treebank (Marcus et al. , 1993)." P05-1038,J93-2004,o,"Compared to the Penn Treebank (PTB; Marcus et al. 1993), the POS tagset of the French Treebank is smaller (13 tags vs. 36 tags): all punctuation marks are represented as the single PONCT tag, there are no separate tags for modal verbs, wh-words, and possessives." P05-1039,J93-2004,o,"It is available in several formats, and in this paper, we use the Penn Treebank (Marcus et al. , 1993) format of NEGRA." P05-1040,J93-2004,o,"(2001) compare taggers trained and tested on the Wall Street Journal (WSJ, Marcus et al. , 1993) and the Lancaster-Oslo-Bergen (LOB, Johansson, 1986) corpora and find that the results for the WSJ perform significantly worse." P05-1040,J93-2004,o,"Given the estimated 3% error rate of the WSJ tagging (Marcus et al. , 1993), they argue that the difference in performance is not sufficient to establish which of the two taggers is actually better." P05-1040,J93-2004,o,"On the other hand, structural annotation such as that used in syntactic treebanks (e.g. , Marcus et al. , 1993) assigns a syntactic category to a contiguous sequence of corpus positions."
P05-1073,J93-2004,o,"In the February 2004 version of the PropBank corpus, annotations are done on top of the Penn TreeBank II parse trees (Marcus et al. , 1993)." P05-2004,J93-2004,o,"4.2 Data The data comes from the CoNLL 2000 shared task (Sang and Buchholz, 2000), which consists of sentences from the Penn Treebank Wall Street Journal corpus (Marcus et al. , 1993)." P05-2010,J93-2004,o,"4 Analysis of Experimental Data Most of the existing research in computational linguistics that uses human annotators is within the framework of classification, where an annotator decides, for every test item, on an appropriate tag out of the pre-specified set of tags (Poesio and Vieira, 1998; Webber and Byron, 2004; Hearst, 1997; Marcus et al. , 1993)." P05-3018,J93-2004,n,"a time-consuming process (Litman and Pan, 2002; Marcus et al. , 1993; Xia et al. , 2000; Wiebe, 2002)." P06-1021,J93-2004,o,"For instance, the Penn Treebank policy (Marcus et al. , 1993; Marcus et al. , 1994) is to annotate the lowest node that is unfinished with an -UNF tag as in Figure 4(a)." P06-1023,J93-2004,o,"In an evaluation on the PENN treebank (Marcus et al. , 1993), the parser outperformed other unlexicalized PCFG parsers in terms of labeled bracketing f-score." P06-1023,J93-2004,o,"1 Introduction Empty categories (also called null elements) are used in the annotation of the PENN treebank (Marcus et al. , 1993) in order to represent syntactic phenomena like constituent movement (e.g. wh-extraction), discontinuous constituents, and missing elements (PRO elements, empty complementizers and relative pronouns)." P06-1043,J93-2004,o,"But the lack of corpora has led to a situation where much of the current work on parsing is performed on a single domain using training data from that domain the Wall Street Journal (WSJ) section of the Penn Treebank (Marcus et al. , 1993)."
P06-1043,J93-2004,o,"3.2 Wall Street Journal Our out-of-domain data is the Wall Street Journal (WSJ) portion of the Penn Treebank (Marcus et al. , 1993) which consists of about 40,000 sentences (one million words) annotated with syntactic information." P06-1060,J93-2004,o,"There are cases, though, where the labels consist of several related, but not entirely correlated, properties; examples include mention detection (the task we are interested in), syntactic parsing with functional tag assignment (besides identifying the syntactic parse, also label the constituent nodes with their functional category, as defined in the Penn Treebank (Marcus et al. , 1993)), and, to a lesser extent, part-of-speech tagging in highly inflected languages.4 The particular type of mention detection that we are examining in this paper follows the ACE general definition: each mention in the text (a reference to a real-world entity) is assigned three types of information:5 An entity type, describing the type of the entity it points to (e.g. person, location, organization, etc) An entity subtype, further detailing the type (e.g. organizations can be commercial, governmental and non-profit, while locations can be a nation, population center, or an international region) A mention type, specifying the way the entity is realized: a mention can be named (e.g. John Smith), nominal (e.g. professor), or pronominal (e.g. she)." P06-1063,J93-2004,o,"Large treebanks are available for major languages, however these are often based on a specific text type or genre, e.g. financial newspaper text (the Penn-II Treebank (Marcus et al. , 1993))." P06-1064,J93-2004,o,"1 Introduction A number of wide-coverage TAG, CCG, LFG and HPSG grammars (Xia, 1999; Chen et al. , 2005; Hockenmaier and Steedman, 2002a; O'Donovan et al. , 2005; Miyao et al. , 2004) have been extracted from the Penn Treebank (Marcus et al.
, 1993), and have enabled the creation of wide-coverage parsers for English which recover local and non-local dependencies that approximate the underlying predicate-argument structure (Hockenmaier and Steedman, 2002b; Clark and Curran, 2004; Miyao and Tsujii, 2005; Shen and Joshi, 2005)." P06-1123,J93-2004,o,"3.3 Methods We parsed the English side of each bilingual bitext and both sides of each English/English bitext using an off-the-shelf syntactic parser (Bikel, 2004), which was trained on sections 02-21 of the Penn English Treebank (Marcus et al. , 1993)." P06-2002,J93-2004,o,"2.2 Generalization pseudocode In order to identify the portions in common between the patterns, and to generalize them, we apply the following pseudocode (Ruiz-Casado et al. , in press): 1All the PoS examples in this paper are done with Penn Treebank labels (Marcus et al. , 1993)." P06-2004,J93-2004,o,"With the exception of (Hindle and Rooth, 1993), most unsupervised work on PP attachment is based on superficial analysis of the unlabeled corpus without the use of partial parsing (Volk, 2001; Calvo et al. , 2005)." P06-2004,J93-2004,o,"The labeled corpus is the Penn Wall Street Journal treebank (Marcus et al. , 1993)." P06-2004,J93-2004,o,"1 Introduction The best performing systems for many tasks in natural language processing are based on supervised training on annotated corpora such as the Penn Treebank (Marcus et al. , 1993) and the prepositional phrase data set first described in (Ratnaparkhi et al. , 1994)." P06-2009,J93-2004,o,"4 Experiments and Results We use the standard corpus for this task, the Penn Treebank (Marcus et al. , 1993)." P06-2009,J93-2004,o,"Policy #Shift #Left #Right Start over 156545 26351 27918 Stay 117819 26351 27918 Step back 43374 26351 27918 Table 1: The number of actions required to build all the trees for the sentences in section 23 of Penn Treebank (Marcus et al. , 1993) as a function of the focus point placement policy."
P06-2010,J93-2004,o,"The data consist of sections of the Wall Street Journal (WSJ) part of the Penn TreeBank (Marcus et al. , 1993), with information on predicate-argument structures extracted from the PropBank corpus (Palmer et al. , 2005)." P06-2028,J93-2004,o,"Typically, the local context around the word to be sense-tagged is used to disambiguate the sense (Yarowsky, 1993), and it is common for linguistic resources such as WordNet (Li et al. , 1995; Mihalcea and Moldovan, 1998; Ramakrishnan and Prithviraj, 2004), or bilingual data (Li and Li, 2002) to be employed as well as more longrange context." P06-2067,J93-2004,o,"1 Introduction Robust statistical syntactic parsers, made possible by new statistical techniques (Collins, 1999; Charniak, 2000; Bikel, 2004) and by the availability of large, hand-annotated training corpora such as WSJ (Marcus et al. , 1993) and Switchboard (Godefrey et al. , 1992), have had a major impact on the field of natural language processing." P06-2069,J93-2004,o,"TB TBR JJ, JJR, JJS JJ RB,RBR,RBS RB CD, LS CD CC CC DT, WDT, PDT DT FW FW MD, VB, VBD, VBG, VBN, VBP, VBZ, VH, VHD, VHG, VHN, VHP, VHZ MD NN, NNS, NP, NPS NN PP, WP, PP$, WP$, EX, WRB PP IN, TO IN POS PO RP RP SYM SY UH UH VV, VVD, VVG, VVN, VVP, VVZ VB (Marcus et al. , 1993)." P06-2088,J93-2004,o,"The experiment used all 578 sentences in the ATIS corpus with a parse tree, in the Penn Treebank (Marcus et al. 1993)." P06-2089,J93-2004,p,"However, evaluations on the widely used WSJ corpus of the Penn Treebank (Marcus et al. , 1993) show that the accuracy of these parsers still lags behind the state-of-the-art." P06-2089,J93-2004,o,"4 Experiments We evaluated our classifier-based best-first parser on the Wall Street Journal corpus of the Penn Treebank (Marcus et al. , 1993) using the standard split: sections 2-21 were used for training, section 22 was used for development and tuning of parameters and features, and section 23 was used for testing."
P06-3014,J93-2004,p,"1 Introduction Robust statistical syntactic parsers, made possible by new statistical techniques (Collins, 1999; Charniak, 2000; Bikel, 2004) and by the availability of large, hand-annotated training corpora such as WSJ (Marcus et al. , 1993) and Switchboard (Godefrey et al. , 1992), have had a major impact on the field of natural language processing." P07-1026,J93-2004,o,"The data consists of sections of the Wall Street Journal part of the Penn TreeBank (Marcus et al. , 1993), with information on predicate-argument structures extracted from the PropBank corpus (Palmer et al. , 2005)." P07-1031,J93-2004,p,"1 Introduction The Penn Treebank (Marcus et al. , 1993) is perhaps the most influential resource in Natural Language Processing (NLP)." P07-1031,J93-2004,o,"RECALL F-SCORE Brackets 89.17 87.50 88.33 Dependencies 96.40 96.40 96.40 Brackets, revised 97.56 98.03 97.79 Dependencies, revised 99.27 99.27 99.27 Table 1: Agreement between annotators few weeks, and increased to about 1000 words per hour after gaining more experience (Marcus et al. , 1993)." P07-1035,J93-2004,o,"For both experiments, we used dependency trees extracted from the Penn Treebank (Marcus et al. , 1993) using the head rules and dependency extractor from Yamada and Matsumoto (2003)." P07-1062,J93-2004,o,"The RST-DT consists of 385 documents from the Wall Street Journal, about 176,000 words, which overlaps with the Penn Wall St. Journal (WSJ) Treebank (Marcus et al. , 1993)." P07-1071,J93-2004,o,"The current version of the dataset gives semantic tags for the same sentences as in the Penn Treebank (Marcus et al. , 1993), which are excerpts from the Wall Street Journal." P07-1079,J93-2004,o,"We created a dependency training corpus based on the Penn Treebank (Marcus et al. , 1993), or more specifically on the HPSG Treebank generated from the Penn Treebank (see section 2.2)."
P07-1079,J93-2004,o,"4 Experiments We evaluate the accuracy of HPSG parsing with dependency constraints on the HPSG Treebank (Miyao et al. , 2003), which is extracted from the Wall Street Journal portion of the Penn Treebank (Marcus et al. , 1993)1." P07-1080,J93-2004,o,"We used the Penn Treebank WSJ corpus (Marcus et al. , 1993) to perform the empirical evaluation of the considered approaches." P07-1120,J93-2004,o,"First, we trained a finitestate shallow parser on base phrases extracted from the Penn Wall St. Journal (WSJ) Treebank (Marcus et al. , 1993)." P08-1039,J93-2004,n,"This is because their training data, the Penn Treebank (Marcus et al., 1993), does not fully annotate NP structure." P08-1042,J93-2004,o,"Treebank (Marcus et al., 1993), six of which are errors." P08-1061,J93-2004,o,"For experiment on English, we used the English Penn Treebank (PTB) (Marcus et al., 1993) and the constituency structures were converted to dependency trees using the same rules as (Yamada and Matsumoto, 2003)." P08-1067,J93-2004,o,"5 Experiments We compare the performance of our forest reranker against n-best reranking on the Penn English Treebank (Marcus et al., 1993)." P08-1068,J93-2004,o,"We show that our semi-supervised approach yields improvements for fixed datasets by performing parsing experiments on the Penn Treebank (Marcus et al., 1993) and Prague Dependency Treebank (Hajic, 1998; Hajic et al., 2001) (see Sections 4.1 and 4.3)." P08-1068,J93-2004,o,"The English experiments were performed on the Penn Treebank (Marcus et al., 1993), using a standard set of head-selection rules (Yamada and Matsumoto, 2003) to convert the phrase structure syntax of the Treebank to a dependency tree representation.6 We split the Treebank into a training set (Sections 2-21), a development set (Section 22), and several test sets (Sections 0,7 1, 23, and 24)."
P08-1082,J93-2004,o,"The text was split at the sentence level, tokenized and PoS tagged, in the style of the Wall Street Journal Penn TreeBank (Marcus et al., 1993)." P08-1082,J93-2004,o,"This probability is computed using IBM's Model 1 (Brown et al., 1993): P(Q|A) = prod_{q in Q} P(q|A) (3) P(q|A) = (1 - lambda) Pml(q|A) + lambda Pml(q|C) (4) Pml(q|A) = sum_{a in A} (T(q|a) Pml(a|A)) (5) where the probability that the question term q is generated from answer A, P(q|A), is smoothed using the prior probability that the term q is generated from the entire collection of answers C, Pml(q|C)." P08-1098,J93-2004,o,"The WSJ corpus is based on the WSJ part of the PENN TREEBANK (Marcus et al., 1993); we used the first 10,000 sentences of section 2-21 as the pool set, and section 00 as evaluation set (1,921 sentences)." P08-1098,J93-2004,o,"In the general language UPenn annotation efforts for the WSJ sections of the Penn Treebank (Marcus et al., 1993), sentences are annotated with POS tags, parse trees, as well as discourse annotation from the Penn Discourse Treebank (Miltsakaki et al., 2008), while verbs and verb arguments are annotated with Propbank rolesets (Palmer et al., 2005)." P08-1109,J93-2004,o,"5 Experiments For all experiments, we trained and tested on the Penn treebank (PTB) (Marcus et al., 1993)." P08-1117,J93-2004,o,"In (Bayraktar et al., 1998) the WSJ Penn TreeBank corpus (Marcus et al., 1993) is analyzed and a very detailed list of syntactic patterns that correspond to different roles of commas is created." P08-1117,J93-2004,o,"4 Corpus Annotation For our corpus, we selected 1,000 sentences containing at least one comma from the Penn Treebank (Marcus et al., 1993) WSJ section 00, and manually annotated them with comma information3." P08-2026,J93-2004,o,"One possible use for this technique is for parser adaptation initially training the parser on one type of data for which hand-labeled trees are available (e.g., Wall Street Journal (M.
Marcus et al., 1993)) and then self-training on a second type of data in order to adapt the parser to the second domain." P09-1006,J93-2004,o,"1 Introduction The last few decades have seen the emergence of multiple treebanks annotated with different grammar formalisms, motivated by the diversity of languages and linguistic theories, which is crucial to the success of statistical parsing (Abeille et al., 2000; Brants et al., 1999; Bohmova et al., 2003; Han et al., 2002; Kurohashi and Nagao, 1998; Marcus et al., 1993; Moreno et al., 2003; Xue et al., 2005)." P09-1033,J93-2004,o,"2.1 Data and Semantic Role Annotation Proposition Bank (Palmer et al., 2005) adds Levin's style predicate-argument annotation and indication of verbs' alternations to the syntactic structures of the Penn Treebank (Marcus et al., 1993)." P09-1043,J93-2004,o,"the Wall Street Journal (WSJ) sections of the Penn Treebank (Marcus et al., 1993) as training set, tests on BROWN Sections typically result in a 6-8% drop in labeled attachment scores, although the average sentence length is much shorter in BROWN than that in WSJ." P09-1055,J93-2004,o,"The unlabeled data for English we use is the union of the Penn Treebank tagged WSJ data (Marcus et al., 1993) and the BLLIP corpus.5 For the rest of the languages we use only the text of George Orwell's novel 1984, which is provided in morphologically disambiguated form as part of MultextEast (but we don't use the annotations)." P09-1056,J93-2004,o,"3.2 Rare Word Accuracy For these experiments, we use the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993)." P09-1059,J93-2004,o,"Penn Treebank (Marcus et al., 1993), the HPSG LinGo Redwoods Treebank (Oepen et al., 2002), and a smaller dependency treebank (Buchholz and Marsi, 2006)."
P09-1059,J93-2004,o,"1 Introduction Much of statistical NLP research relies on some sort of manually annotated corpora to train their models, but these resources are extremely expensive to build, especially at a large scale, for example in treebanking (Marcus et al., 1993)." P09-1108,J93-2004,o,"We used the Berkeley Parser4 to learn such grammars from Sections 2-21 of the Penn Treebank (Marcus et al., 1993)." P09-2010,J93-2004,o,"3.1 Data The English data set consists of the Wall Street Journal sections 2-24 of the Penn treebank (Marcus et al., 1993), converted to dependency format." P09-4003,J93-2004,p,"1 Introduction Research in language processing has benefited greatly from the collection of large annotated corpora such as Penn PropBank (Kingsbury and Palmer, 2002) and Penn Treebank (Marcus et al., 1993)." P94-1034,J93-2004,p,"One major resource for corpus-based research is the treebanks available in many research organizations [Marcus et al. 1993], which carry skeletal syntactic structures or 'brackets' that have been manually verified." P94-1034,J93-2004,o,"Several frameworks for finding translation equivalents or translation units in machine translation, such as [Chang and Su 1993, Isabelle et al. 1993] and other example-based MT approaches, might be used to select the preferred mapping." P94-1044,J93-2004,o,"In addition, corpus-based stochastic modelling of lexical patterns (see Weischedel et al. , 1993) may provide information about word sense frequency of the kind advocated since (Ford et al. , 1982)." P96-1043,J93-2004,o,"A similar approach was taken in (Weischedel et al. , 1993) where an unknown word was guessed given the probabilities for an unknown word to be of a particular POS, its capitalisation feature and its ending." P96-1043,J93-2004,o,"These texts were not seen at the training phase which means that neither the 6Since Brill's tagger was trained on the Penn tag-set (Marcus et al. , 1993) we provided an additional mapping."
P97-1024,J93-2004,o,"3.5 Adding Context to the Model Next, we added a stochastic POS tagger (Charniak et al. , 1993) to provide a model of context." P97-1024,J93-2004,o,"3.3 Accuracy Results (Weischedel et al. , 1993) describe a model for unknown words that uses four features, but treats the features as independent." P97-1062,J93-2004,o,"with parse action sequences for 40,000 Wall Street Journal sentences derived from the Penn Treebank (Marcus et al. , 1993)." P98-1034,J93-2004,o,"The bracketed portions of Figure 1, for example, show the base NPs in one sentence from the Penn Treebank Wall Street Journal (WSJ) corpus (Marcus et al. , 1993)." P98-1083,J93-2004,o,"The main reason behind this lies in the difference between the two corpora used: Penn Treebank (Marcus et al. , 1993) and EDR corpus (EDR, 1995)." P98-1083,J93-2004,p,"Penn Treebank (Marcus et al. , 1993) was also used to induce part-of-speech (POS) taggers because the corpus contains very precise and detailed POS markers as well as bracket annotations." P98-2140,J93-2004,o,"The simplest ""period-space-capital_letter"" approach works well for simple texts but is rather unreliable for texts with many proper names and abbreviations at the end of sentence as, for instance, the Wall Street Journal (WSJ) corpus ( (Marcus et al. , 1993) )." P98-2182,J93-2004,o,"To identify conjunctions, lists, and appositives, we first parsed the corpus, using an efficient statistical parser (Charniak et al. , 1998), trained on the Penn Wall Street Journal Treebank (Marcus et al. , 1993)." P98-2184,J93-2004,o,"The probabilistic relation between verbs and their arguments plays an important role in modern statistical parsers and supertaggers (Charniak 1995, Collins 1996/1997, Joshi and Srinivas 1994, Kim, Srinivas, and Trueswell 1997, Stolcke et al. 1997), and in psychological theories of language processing (Clifton et al. 1984, Ferreira & McClure 1997, Garnsey et al.
1997, Jurafsky 1996, MacDonald 1994, Mitchell & Holmes 1985, Tanenhaus et al. 1990, Trueswell et al. 1993)." P98-2184,J93-2004,o,"This can be done automatically with unparsed corpora (Briscoe and Carroll 1997, Manning 1993, Ushioda et al. 1993), from parsed corpora such as Marcus et al.'s (1993) Treebank (Merlo 1994, Framis 1994) or manually as was done for COMLEX (Macleod and Grishman 1994)." P98-2184,J93-2004,o,"1984), written discourse (Brown and WSJ from Penn Treebank Marcus et al. 1993), and conversational data (Switchboard Godfrey et al. 1992)." P98-2201,J93-2004,o,21418 examples of structures of the kind 'VB N1 PREP N2' were extracted from the Penn-TreeBank Wall Street Journal (Marcus et al. 1993). P98-2234,J93-2004,o,"6 Experiments 6.1 Data preparation Our experiments were conducted with data made available through the Penn Treebank annotation effort (Marcus et al. , 1993)." P98-2234,J93-2004,o,"By core phrases, we mean the kind of nonrecursive simplifications of the NP and VP that in the literature go by names such as noun/verb groups (Appelt et al. , 1993) or chunks, and base NPs (Ramshaw and Marcus, 1995)." P98-2251,J93-2004,o,"Charniak (Charniak et al. , 1993) gives a thorough explanation of the equations for an HMM model, and Kupiec (Kupiec, 1992) describes an HMM tagging system in detail." P98-2251,J93-2004,o,"Weischedel's group (Weischedel et al. , 1993) examines unknown words in the context of part-of-speech tagging." P98-2251,J93-2004,o,"An example set of tags can be found in the Penn Treebank project (Marcus et al. , 1993)." P99-1009,J93-2004,o,"1 To train their system, R&M used a 200k-word chunk of the Penn Treebank Parsed Wall Street Journal (Marcus et al. 
, 1993) tagged using a transformation-based tagger (Brill, 1995) and extracted base noun phrases from its parses by selecting noun phrases that contained no nested noun phrases and further processing the data with some heuristics (like treating the possessive marker as the first word of a new base noun phrase) to flatten the recursive structure of the parse." P99-1010,J93-2004,o,"The study is conducted on both a simple Air Travel Information System (ATIS) corpus (Hemphill et al. , 1990) and the more complex Wall Street Journal (WSJ) corpus (Marcus et al. , 1993)." P99-1016,J93-2004,o,"Some of the data comes from the parsed files 2-21 of the Wall Street Journal Penn Treebank corpus (Marcus et al. , 1993), and additional parsed text was obtained by parsing the 1987 Wall Street Journal text using the parser described in Charniak et al." P99-1018,J93-2004,o,"4 The Corpus We used two corpora for our analysis: hospital discharge summaries from 1991 to 1997 from the Columbia-Presbyterian Medical Center, and the January 1996 part of the Wall Street Journal corpus from the Penn TreeBank [Marcus et al. 1993]." P99-1018,J93-2004,o,"In the future, we will experiment with semantic (rather than positional) clustering of premodifiers, using techniques such as those proposed in [Hatzivassiloglou and McKeown 1993; Pereira et al. 1993]." P99-1021,J93-2004,o,"Both taggers used the Penn Treebank tagset and were trained on the Wall Street Journal corpus (Marcus et al. , 1993)." P99-1023,J93-2004,o,"Much research has been done to improve tagging accuracy using several different models and methods, including: hidden Markov models (HMMs) (Kupiec, 1992), (Charniak et al. , 1993); rule-based systems (Brill, 1994), (Brill, 1995); memory-based systems (Daelemans et al. , 1996); maximum-entropy systems (Ratnaparkhi, 1996); path voting constraint systems (Tür and Oflazer, 1998); linear separator systems (Roth and Zelenko, 1998); and majority voting systems (van Halteren et al. , 1998)."
P99-1023,J93-2004,o,"The tagger was tested on two corpora-the Brown corpus (from the Treebank II CDROM (Marcus et al. , 1993)) and the Wall Street Journal corpus (from the same source)." P99-1023,J93-2004,o,"The MBT (Daelemans et al. , 1996) 180 Tagger Type Standard Trigram (Weischedel et al. , 1993) MBT (Daelemans et al. , 1996) Rule-based (Brill, 1994) Maximum-Entropy (Ratnaparkhi, 1996) Full Second-Order HMM SNOW (Roth and Zelenko, 1998) Voting Constraints (Tür and Oflazer, 1998) Full Second-Order HMM Known Unknown Overall Open/Closed Lexicon?" P99-1023,J93-2004,o,"Most work in the area of unknown words and tagging deals with predicting part-of-speech information based on word endings and affixation information, as shown by work in (Mikheev, 1996), (Mikheev, 1997), (Weischedel et al. , 1993), and (Thede, 1998)." P99-1023,J93-2004,o,"The Penn Treebank documentation (Marcus et al. , 1993) defines a commonly used set of tags." P99-1032,J93-2004,o,"Two disjoint corpora are used in steps 2 and 5, both consisting of complete articles taken from the Wall Street Journal Treebank Corpus (Marcus et al. , 1993)." P99-1051,J93-2004,o,"For instance, the to-PP frame is poorly represented in the syntactically annotated version of the Penn Treebank (Marcus et al. , 1993)." P99-1054,J93-2004,o,"The grammars were induced from sections 2-21 of the Penn Wall St. Journal Treebank (Marcus et al. , 1993), and tested on section 23." P99-1079,J93-2004,o,"3 Evaluation of Algorithms All four algorithms were run on a 3900 utterance subset of the Penn Treebank annotated corpus (Marcus et al. , 1993) provided by Charniak and Ge (1998)." W00-0709,J93-2004,o,"4 Experiments The experiments described here were conducted using the Wall Street Journal Penn Treebank corpus (Marcus et al. , 1993)." W00-0716,J93-2004,o,"The syntactic and part-of-speech informations were obtained from the part of the corpus processed in the Penn Treebank project (Marcus et al. , 1993)."
W00-0721,J93-2004,o,"The data sets used are the standard data sets for this problem (Ramshaw and Marcus, 1995; Argamon et al. , 1999; Muñoz et al. , 1999; Tjong Kim Sang and Veenstra, 1999) taken from the Wall Street Journal corpus in the Penn Treebank (Marcus et al. , 1993)." W00-0725,J93-2004,o,"The experiments were performed using the Wall Street Journal (WSJ) corpus of the University of Pennsylvania (Marcus et al. , 1993) modified as described in (Charniak, 1996) and (Johnson, 1998)." W00-0726,J93-2004,o,"We have chosen to work with a corpus with parse information, the Wall Street Journal WSJ part of the Penn Treebank II corpus (Marcus et al. , 1993), and to extract chunk information from the parse trees in this corpus." W00-0735,J93-2004,o,"While the tag features, containing WSJ part-of-speech tags (Marcus et al. , 1993), have about 45 values, the word features have more than 10,000 values." W00-0905,J93-2004,o,"1 Data Data for 64 verbs (shown in Table 1) was collected from three corpora; The British National Corpus (BNC) (http://info.ox.ac.uk/bnc/index.html), the Penn Treebank parsed version of the Brown Corpus (Brown), and the Penn Treebank Wall Street Journal corpus (WSJ) (Marcus et al. 1993)." W00-0905,J93-2004,o,"Introduction Verb subcategorization probabilities play an important role in both computational linguistic applications (e.g. Carroll, Minnen, and Briscoe 1998, Charniak 1997, Collins 1996/1997, Joshi and Srinivas 1994, Kim, Srinivas, and Trueswell 1997, Stolcke et al. 1997) and psycholinguistic models of language processing (e.g. Boland 1997, Clifton et al. 1984, Ferreira & McClure 1997, Fodor 1978, Garnsey et al. 1997, Jurafsky 1996, MacDonald 1994, Mitchell & Holmes 1985, Tanenhaus et al. 1990, Trueswell et al. 1993)."
W00-1201,J93-2004,o,"The success of statistical methods in particular has been quite evident in the area of syntactic parsing, most recently with the outstanding results of (Charniak, 2000) and (Collins, 2000) on the now-standard English test set of the Penn Treebank (Marcus et al. , 1993)." W00-1205,J93-2004,p,Introduction The Penn Treebank (Marcus et al. 1993) initiated a new paradigm in corpus-based research. W00-1208,J93-2004,o,"By comparing derivation trees for parallel sentences in two languages, instances of structural divergences (Dorr, 1993; Dorr, 1994; Palmer et al. , 1998) can be automatically detected." W00-1208,J93-2004,o,"2.2 Three Treebanks The Treebanks that we used in this paper are the English Penn Treebank II (Marcus et al. , 1993), the Chinese Penn Treebank (Xia et al. , 2000b), and the Korean Penn Treebank (Chung-hye Han, 2000)." W00-1301,J93-2004,o,"The training and test set were derived by finding all instances of the confusable words in the Brown Corpus, using the Penn Treebank parts of speech and tokenization (Marcus, Santorini et al. 1993), and then dividing this set into 80% for training and 20% for testing." W00-1304,J93-2004,o,"The corpus consists of sections 15-18 and section 20 of the Penn Treebank (Marcus et al. , 1993), and is pre-divided into a 8936-sentence (211727 tokens) training set and a 2012-sentence (47377 tokens) test set." W00-1306,J93-2004,o,"This paper presents an empirical study measuring the effectiveness of our evaluation functions at selecting training sentences from the Wall Street Journal (WSJ) corpus (Marcus et al. , 1993) for inducing grammars." W00-1307,J93-2004,o,"3.5 The Experiments We have run LexTract on the one-million-word English Penn Treebank (Marcus et al. , 1993) and got two Treebank grammars." W00-1309,J93-2004,o,"The data used for all our experiments is extracted from the PENN"" WSJ Treebank (Marcus et al. 1993) by the program provided by Sabine Buchholz from Tilburg University."
W00-1320,J93-2004,o,"2.2 Motivation from previous work 2.2.1 Parsing In recent years, the success of statistical parsing techniques can be attributed to several factors, such as the increasing size of computing machinery to accommodate larger models, the availability of resources such as the Penn Treebank (Marcus et al. , 1993) and the success of machine learning techniques for lower-level NLP problems, such as part-of-speech tagging (Church, 1988; Brill, 1995), and PP-attachment (Brill and Resnik, 1994; Collins and Brooks, 1995)." W00-1320,J93-2004,o,"3.2 Probability structure of the original model We use p to denote the unlexicalized nonterminal corresponding to P, and similarly for li, ri and h. We now present the top-level generation probabilities, along with examples from [Footnote 4: The inclusion of the word feature in the BBN model was due to the work described in (Weischedel et al. , 1993), where word features helped reduce part of speech ambiguity for unknown words.]" W00-1427,J93-2004,o,"The analyser--and therefore the generator--includes exception lists derived from WordNet (version 1.5: Miller et al. , 1993)." W00-1427,J93-2004,o,"corpus (Garside et al. , 1987), the Penn Treebank (Marcus et al. , 1993), the SUSANNE corpus (Sampson, 1995), the Spoken English Corpus (Taylor and Knowles, 1988), the Oxford Psycholinguistic Database (Quinlan, 1992), and the ""Computer-Usable"" version of the Oxford Advanced Learner's Dictionary of Current English (OALDCE; Mitton, 1992)." W00-1427,J93-2004,o,"2.5 Evaluation Minnen and Carroll (Under review) report an evaluation of the accuracy of the morphological generator with respect to the CELEX lexical database (version 2.5; Baayen et al. , 1993)." W01-0702,J93-2004,o,"The system is tested on base noun-phrase (NP) chunking using the Wall Street Journal corpus (Marcus et al. , 1993)."
W01-0706,J93-2004,o,"First, it has been noted that in many natural language applications it is sufficient to use shallow parsing information; information such as noun phrases (NPs) and other syntactic sequences have been found useful in many large-scale language processing applications including information extraction and text summarization (Grishman, 1995; Appelt et al. , 1993)." W01-0706,J93-2004,o,"[Table: Parsers / Precision(%) / Recall(%) / Fβ=1(%): [KM00] 93.45 93.51 93.48; [Hal00] 93.13 93.51 93.32; [CSCL]* 93.41 92.64 93.02; [TKS00] 94.04 91.00 92.50; [ZST00] 91.99 92.25 92.12; [Dej00] 91.87 91.31 92.09; [Koe00] 92.08 91.86 91.97; [Osb00] 91.65 92.23 91.94; [VB00] 91.05 92.03 91.54; [PMP00] 90.63 89.65 90.14; [Joh00] 86.24 88.25 87.23; [VD00] 88.82 82.91 85.76; Baseline 72.58 82.14 77.07] 2.2 Data Training was done on the Penn Treebank (Marcus et al. , 1993) Wall Street Journal data, sections 02-21." W01-0712,J93-2004,o,"We have used three different algorithms: the nearest neighbour algorithm IB1IG, which is part of the Timbl software package (Daelemans et al. , 1999), the decision tree learner IGTREE, also from Timbl, and C5.0, a commercial version of the decision tree learner C4.5 (Quinlan, 1993)." W01-0712,J93-2004,o,"It consists of sections 15-18 of the Wall Street Journal part of the Penn Treebank II (Marcus et al. , 1993) as training data (211727 tokens) and section 20 as test data (47377 tokens)." W01-0720,J93-2004,o,"Here, we present experiments performed using two complex corpora, C1 and C2, extracted from the Penn Treebank (Marcus et al. , 1993; Marcus et al. , 1994)." W01-0720,J93-2004,o,"CLL has then been applied to a corpus of declarative sentences from the Penn Treebank (Marcus et al. , 1993; Marcus et al. , 1994) on which it has been shown to perform comparatively well with respect to much less psychologically plausible systems, which are significantly more supervised and are applied to somewhat simpler problems."
W01-0904,J93-2004,o,"For example, the Penn Treebank (Marcus et al. , 1993; Marcus et al. , 1994; Bies et al. , 1994) provides a large corpus of syntactically annotated examples mostly from the Wall Street Journal." W01-0904,J93-2004,o,"Hockenmaier et al (Hockenmaier et al. , 2000), although to some extent following the approach of Xia (Xia, 1999) where LTAGs are extracted, have pursued an alternative by extracting Combinatory Categorial Grammar (CCG) (Steedman, 1993; Wood, 1993) lexicons from the Penn Treebank." W01-0904,J93-2004,o,"3.1 The Corpus The systems are applied to examples from the Penn Treebank (Marcus et al. , 1993; Marcus et al. , 1994; Bies et al. , 1994) a corpus of over 4.5 million words of American English annotated with both part-of-speech and syntactic tree information." W01-0904,J93-2004,o,"However, as Categorial Grammar formalisms do not usually change the lexical entries of words to deal with movement, but use further rules (Wood, 1993; Steedman, 1993; Hockenmaier et al. , 2000), the lexicons learned here will be valid over corpora with movement." W01-0904,J93-2004,o,"Firstly, there is also [Figure 2: A tree with constituents marked] the top-down method, which is a version of the algorithm described by Hockenmaier et al (Hockenmaier et al. , 2000), but used for translating into simple (AB) CG rather than Steedman's Combinatory Categorial Grammar (CCG) (Steedman, 1993)." W01-0908,J93-2004,o,"4.1 Data We used Penn-Treebank (Marcus et al. , 1993) data, presented in Table 1." W01-1605,J93-2004,o,"The resulting corpus contains 385 documents of American English selected from the Penn Treebank (Marcus et al. , 1993), annotated in the framework of Rhetorical Structure Theory."
W01-1605,J93-2004,o," Previous research has shown that RST trees can play a crucial role in building natural language generation systems (Hovy, 1993; Moore and Paris, 1993; Moore, 1995) and text summarization systems (Marcu, 2000); can be used to increase the naturalness of machine translation outputs (Marcu et al. 2000); and can be used to build essay-scoring systems that provide students with discourse-based feedback (Burstein et al. , 2001)." W01-1626,J93-2004,o,"One judge annotated all articles in four datasets of the Wall Street Journal Treebank corpus (Marcus et al. , 1993) (W9-4, W9-10, W9-22, and W9-33, each approximately 160K words) as well as the corpus of Wall Street Journal articles used in (Wiebe et al. , 1999) (called WSJ-SE below)." W01-1626,J93-2004,o,"3 Previous Work on Subjectivity Tagging In previous work (Wiebe et al. , 1999; Bruce and Wiebe, 1999), a corpus of sentences from the Wall Street Journal Treebank Corpus (Marcus et al. , 1993) was manually annotated with subjectivity classifications by multiple judges." W02-0817,J93-2004,o,"One of the first large scale hand tagging efforts is reported in (Miller et al. , 1993), where a subset of the Brown corpus was tagged with WordNet senses." W02-0817,J93-2004,o,"3.1 Data The starting corpus we use is formed by a mix of three different sources of data, namely the Penn Treebank corpus (Marcus et al. , 1993), the Los Angeles Times collection, as provided during TREC conferences1, and Open Mind Common Sense2, a collection of about 400,000 commonsense assertions in English as contributed by volunteers over the Web." W02-1009,J93-2004,o,"Each dataset consisted of a collection of flat rules such as Sput → NP put NP PP extracted from the Penn Treebank (Marcus et al. , 1993)." W02-1017,J93-2004,o,"In one set of experiments, we generated lexicons for PEOPLE and ORGANIZATIONS using 2500 Wall Street Journal articles from the Penn Treebank (Marcus et al. , 1993)."
W02-1028,J93-2004,o,"Even for relatively general texts, such as the Wall Street Journal (Marcus et al. , 1993) or terrorism articles (MUC4 Proceedings, 1992), Roark and Charniak (Roark and Charniak, 1998) reported that 3 of every 5 terms generated by their semantic lexicon learner were not present in WordNet." W02-1031,J93-2004,o,"We have observed in several experiments that the number of SuperARVs does not grow significantly as training set size increases; the moderate-sized Resource Management corpus (Price et al. , 1988) with 25,168 words produces 328 SuperARVs, compared to 538 SuperARVs for the 1 million word Wall Street Journal (WSJ) Penn Treebank set (Marcus et al. , 1993), and 791 for the 37 million word training set of the WSJ continuous speech recognition task." W02-1039,J93-2004,o,"When an S alignment exists, there will always also exist a P alignment such that P ⊇ S. The English sentences were parsed using a state-of-the-art statistical parser (Charniak, 2000) trained on the University of Pennsylvania Treebank (Marcus et al. , 1993)." W02-1039,J93-2004,o,"The first work in SMT, done at IBM (Brown et al. , 1993), developed a noisy-channel model, factoring the translation process into two portions: the translation model and the language model." W02-1504,J93-2004,o,"In this paper, we give an overview of NLPWin, a multi-application natural language analysis and generation system under development at Microsoft Research (Jensen et al. , 1993; Gamon et al. , 1997; Heidorn 2000), incorporating analysis systems for 7 languages (Chinese, English, French, German, Japanese, Korean and Spanish)." W02-1504,J93-2004,o,"consistency among raters who may have different levels of fluency in the source language, raters are not shown the original French or Spanish sentence (for similar methodologies, see Ringger et al. , 2001; White et al. , 1993)."
W02-1504,J93-2004,o,"The most common answer is component testing, where the component is compared against a standard of goodness, usually the Penn Treebank for English (Marcus et al. , 1993), allowing a numerical score of precision and recall (e.g. Collins, 1997)." W02-1507,J93-2004,o,"For instance (Chiang, 2000), (Xia, 2001) (Chen, 2001) all automatically acquire large TAGs for English from the Penn Treebank (Marcus et al. , 1993)." W02-1509,J93-2004,o,"With the availability of large natural language corpora annotated for syntactic structure, the treebanks, e.g., (Marcus et al. , 1993), automatic grammar extraction became possible (Chen and Vijay-Shanker, 2000; Xia, 1999)." W02-2001,J93-2004,o,"Any linguistic annotation required during the extraction process, therefore, is produced through automatic means, and it is only for reasons of accessibility and comparability with other research that we choose to work over the Wall Street Journal section of the Penn Treebank (Marcus et al. , 1993)." W02-2001,J93-2004,o,"2.2 Corpus occurrence In order to get a feel for the relative frequency of VPCs in the corpus targeted for extraction, namely [Figure 1: Frequency distribution of VPCs in the WSJ. Table 1: POS-based extraction results (Tagger, correct/extracted, Prec, Rec, Fβ=1): Brill 135/135 1.000 0.177 0.301; Penn 667/800 0.834 0.565 0.673] the WSJ section of the Penn Treebank, we took a random sample of 200 VPCs from the Alvey Natural Language Tools grammar (Grover et al. , 1993) and did a manual corpus search for each." W03-0310,J93-2004,n,"This cost can often be substantial, as with the Penn Treebank (Marcus et al. , 1993)." W03-0402,J93-2004,o,"In recent years, reranking techniques have been successfully applied to the so-called history-based models (Black et al. , 1993), especially to parsing (Collins, 2000; Collins and Duffy, 2002)."
W03-0505,J93-2004,o,"Table 3 compares precision, recall, and F scores for our system with CoNLL-2001 results training on sections 15-18 of the Penn Treebank and testing on section 21 (Marcus et al. , 1993)." W03-0806,J93-2004,n,"For example, 10 million words of the American National Corpus (Ide et al. , 2002) will have manually corrected POS tags, a tenfold increase over the Penn Treebank (Marcus et al. , 1993), currently used for training POS taggers." W03-0806,J93-2004,o,"Machine learning methods should be interchangeable: Transformation-based learning (TBL) (Brill, 1993) and Memory-based learning (MBL) (Daelemans et al. , 2002) have been applied to many different problems, so a single interchangeable component should be used to represent each method." W03-0902,J93-2004,o,"Our work so far has focused on data in the Penn Treebank (Marcus et al. , 1993), particularly the Brown corpus and some examples from the Wall Street Journal corpus." W03-1002,J93-2004,o,"2 Prior Work Statistical machine translation, as pioneered by IBM (e.g. Brown et al. , 1993), is grounded in the noisy channel model." W03-1002,J93-2004,o,"POS tagging and phrase chunking in English were done using the trained systems provided with the fnTBL Toolkit (Ngai and Florian, 2001); both were trained from the annotated Penn Treebank corpus (Marcus et al. , 1993)." W03-1006,J93-2004,o,"The PropBank superimposes an annotation of semantic predicate-argument structures on top of the Penn Treebank (PTB) (Marcus et al. , 1993; Marcus et al. , 1994)." W03-1009,J93-2004,o,"4.1 Experimental Setup We use the whole Penn Treebank corpus (Marcus et al. , 1993) as our data set." W03-1707,J93-2004,p,"The creation of the Penn English Treebank (Marcus et al. , 1993), a syntactically interpreted corpus, played a crucial role in the advances in natural language parsing technology (Collins, 1997; Collins, 2000; Charniak, 2000) for English." 
W03-1712,J93-2004,o,"Although few corpora annotated with semantic knowledge are available now, there are some valuable lexical databases describing the lexical semantics in dictionary form, for example English WordNet (Miller et al. , 1993) and Chinese HowNet (Dong and Dong, 2001)." W03-1712,J93-2004,o,"For example, the Penn Treebank (Marcus et al. , 1993) was annotated with skeletal syntactic structure, and many syntactic parsers were evaluated and compared on the corpus." W03-1903,J93-2004,o,"Ontologies are formal specifications of a conceptualization (Gruber, 1993) so that it seems straightforward to formalize annotation schemes as ontologies and make use of semantic annotation tools such as OntoMat (Handschuh et al. , 2001) for the purpose of linguistic annotation." W03-1903,J93-2004,o,"Part-of-Speech (POS) annotation for example can be seen as the task of choosing the appropriate tag for a word from an ontology of word categories (compare for example the Penn Treebank POS tagset as described in (Marcus et al. , 1993))." W03-2102,J93-2004,o,"3 Previous Work on Subjectivity Tagging In previous work (Wiebe et al., 1999), a corpus of sentences from the Wall Street Journal Treebank Corpus (Marcus et al., 1993) was manually annotated with subjectivity classifications by multiple judges." W04-0212,J93-2004,p,"1 Introduction Large scale annotated corpora such as the Penn TreeBank (Marcus et al. , 1993) have played a central role in speech and natural language research." W04-0212,J93-2004,o,"However, developing the PDTB may help facilitate the production of more such corpora, through an initial pass of automatic annotation, followed by manual correction, much as was done in developing the PTB (Marcus et al. , 1993)." W04-0214,J93-2004,o,"Since parsing is just an initial stage of natural language understanding, the project was focused not just on obtaining syntactic trees alone (as is done in many other parsed corpora, for example, Penn TreeBank (Marcus et al.
, 1993) or Tiger (Brants and Plaehn, 2000))." W04-0302,J93-2004,o,"The elementary trees were extracted from the parse trees in sections 02-21 of the Wall Street Journal in Penn Treebank (Marcus et al. , 1993), which is transformed by using parent-child annotation and left factoring (Roark and Johnson, 1999)." W04-0305,J93-2004,o,"6 The Experiments To investigate the effects of lookahead on our family of deterministic parsers, we ran empirical experiments on the standard Penn Treebank (Marcus et al. , 1993) datasets." W04-0508,J93-2004,o,"Although LDD annotation is actually provided in Treebanks such as the Penn Treebank (Marcus et al. , 1993) over which they are typically trained, most probabilistic parsers largely or fully ignore this information." W04-0707,J93-2004,o,"2 Detecting Discourse-New Definite Descriptions 2.1 Vieira and Poesio Poesio and Vieira (1998) carried out corpus studies indicating that in corpora like the Wall Street Journal portion of the Penn Treebank (Marcus et al. , 1993), around 52% of DDs are discourse-new (Prince, 1992), and another 15% or so are bridging references, for a total of about 66-67% first-mention." W04-1114,J93-2004,o,"Word association norms, mutual information, and lexicography, Computational Linguistics, 16(1): 22-29 Marcus, M. et al. 1993." W04-1114,J93-2004,o,"Collocation Dictionary of Modern Chinese Lexical Words, Business Publisher, China Yuan Liu, et al. 1993." W04-1114,J93-2004,o,"The segmentation is based on the guidelines, given in the Chinese national standard GB13715, (Liu et al. 1993) and the POS tagging specification was developed according to the Grammatical Knowledge-base of contemporary Chinese." W04-1501,J93-2004,o,"Also in the Penn Treebank ((Marcus et al. , 1993), (Marcus et al. , 1994)) a limited set of relations is placed over the constituency-based annotation in order to make explicit the (morpho-syntactic or semantic) roles that the constituents play." W04-1602,J93-2004,o,"(Marcus, et al.
, 1993), (Marcus, et al. , 1994) In addition to the usual issues involved with the complex annotation of data, we have come to terms with a number of issues that are specific to a highly inflected language with a rich history of traditional grammar." W04-1903,J93-2004,p,"Annotated reference corpora, such as the Brown Corpus (Kucera, Francis, 1967), the Penn Treebank (Marcus et al. , 1993), and the BNC (Leech et al. , 2001), have helped both the development of English computational linguistics tools and English corpus linguistics." W04-2002,J93-2004,o,"A quick search in the Penn Treebank (Marcus et al. , 1993) shows that about 17% of all sentences contain parentheticals or other sentence fragments, interjections, or unbracketable constituents." W04-2003,J93-2004,o,"Although grammatical function and empty nodes annotation expressing long-distance dependencies are provided in Treebanks such as the Penn Treebank (Marcus et al. , 1993), most statistical Treebank trained parsers fully or largely ignore them 1, which entails two problems: first, the training cannot profit from valuable annotation data." W04-2208,J93-2004,p,"On the other hand, high-quality treebanks such as the Penn Treebank (Marcus et al. , 1993) and the Kyoto University text corpus (Kurohashi and Nagao, 1997) have contributed to improving the accuracies of fundamental techniques for natural language processing such as morphological analysis and syntactic structure analysis." W04-2208,J93-2004,o,"The definitions of part-of-speech (POS) categories and syntactic labels follow those of the Treebank I style (Marcus et al. , 1993)." W04-2403,J93-2004,o,"4 The Experiments For the experiments, we used PropBank (www.cis.upenn.edu/ace) along with Penn TreeBank 2 (www.cis.upenn.edu/treebank) (Marcus et al. , 1993)." W04-2407,J93-2004,o,"Thus, the Penn Treebank of American English (Marcus et al.
, 1993) has been used to train and evaluate the best available parsers of unrestricted English text (Collins, 1999; Charniak, 2000)." W04-2412,J93-2004,o,"3 Data The data consists of six sections of the Wall Street Journal part of the Penn Treebank (Marcus et al. , 1993), and follows the setting of past editions of the CoNLL shared task: training set (sections 15-18), development set (section 20) and test set (section 21)." W04-2703,J93-2004,p,"The Penn TreeBank (PTB) is an example of such a resource with worldwide impact on natural language processing (Marcus et al. , 1993)." W04-2708,J93-2004,n,"Since Czech is a language with relatively high degree of word-order freedom, and its sentences contain certain syntactic phenomena, such as discontinuous constituents (non-projective constructions), which cannot be straightforwardly handled using the annotation scheme of Penn Treebank (Marcus et al. , 1993; Linguistic Data Consortium, 1999), based on phrase-structure trees, we decided to adopt for the PCEDT the dependency-based annotation scheme of the Prague Dependency Treebank PDT (Linguistic Data Consortium, 2001)." W05-0106,J93-2004,o,"A model was trained using Maximum Likelihood from the UPenn Treebank (Marcus et al. , 1993)." W05-0302,J93-2004,p,"Introduction The creation of the Penn Treebank (Marcus et al, 1993) and the word sense-annotated SEMCOR (Fellbaum, 1997) have shown how even limited amounts of annotated data can result in major improvements in complex natural language understanding systems." W05-0305,J93-2004,o,"1 Introduction The overall goal of the Penn Discourse Treebank (PDTB) is to annotate the million word WSJ corpus in the Penn TreeBank (Marcus et al. , 1993) with a layer of discourse annotations." W05-0307,J93-2004,o,"A third of this is syntactically parsed as part of the Penn Treebank (Marcus et al. , 1993) and has dialog act annotation (Shriberg et al. , 1998)." 
W05-0309,J93-2004,p,"1 Introduction There is a pressing need for a consensus on a task-oriented level of semantic representation that can enable the development of powerful new semantic analyzers in the same way that the Penn Treebank (Marcus et al. , 1993) enabled the development of statistical syntactic parsers (Collins, 1999; Charniak, 2001)." W05-0310,J93-2004,o,"6 Discussion Lack of interannotator agreement presents a significant problem in annotation efforts (see, e.g., Marcus et al. 1993)." W05-0310,J93-2004,o,"Post-editing of automatic annotation has been pursued in various projects (e.g. , Brants 2000, and Marcus et al. 1993)." W05-0310,J93-2004,o,"The latter group did an experiment early on in which they found that manual tagging took about twice as long as correcting [automated tagging], with about twice the interannotator disagreement rate and an error rate that was about 50% higher (Marcus et al. 1993)." W05-0402,J93-2004,o,"The list is obtained by first extracting the phrases with -TMP function tags from the Penn Treebank, and taking the words in these phrases (Marcus et al. , 1993)." W05-0404,J93-2004,o,"In our framework, we employ a simple HMM-based tagger, where the most probable tag sequence, T̂, given the words, W, is output (Weischedel et al. , 1993): T̂ = argmax_T P(T|W) = argmax_T P(W|T) P(T) Since we do not have enough data which is manually tagged with part-of-speech tags for our applications, we used Penn Treebank (Marcus et al. , 1994) as our training set." W05-0407,J93-2004,o,"As referring dataset, we used the PropBank corpora available at www.cis.upenn.edu/ace, along with the Penn TreeBank 2 (www.cis.upenn.edu/treebank) (Marcus et al. , 1993)."
W05-0620,J93-2004,o,"2.2 Closed Challenge Setting The organization provided training, development and test sets derived from the standard sections of the Penn TreeBank (Marcus et al. , 1993) and PropBank (Palmer et al. , 2005) corpora." W05-0620,J93-2004,o,"3 Data The data consists of sections of the Wall Street Journal part of the Penn TreeBank (Marcus et al. , 1993), with information on predicate-argument structures extracted from the PropBank corpus (Palmer et al. , 2005)." W05-1002,J93-2004,o,"PB, available at www.cis.upenn.edu/ace, is used along with the Penn TreeBank 2 (www.cis.upenn.edu/treebank) (Marcus et al. , 1993)." W05-1008,J93-2004,o,"4.4 Corpora We ran the three syntactic preprocessors over a total of three corpora, of varying size: the Brown corpus (460K tokens) and Wall Street Journal corpus (1.2M tokens), both derived from the Penn Treebank (Marcus et al. , 1993), and the written component of the British National Corpus (98M tokens: Burnard (2000))." W05-1506,J93-2004,o,"For this experiment, we used sections 02-21 of the Penn Treebank (PTB) (Marcus et al. , 1993) as the training data and section 23 (2416 sentences) for evaluation, as is now standard." W05-1506,J93-2004,o,"This paper, however, aims at the k-best tree algorithms whose packed representations are hypergraphs (Gallo et al. , 1993; Klein and Manning, 2001) (equivalently, and/or graphs or packed forests), which includes most parsers and parsing-based MT decoders." W05-1506,J93-2004,o,(1993) study the shortest hyperpath problem and Nielsen et al. W05-1506,J93-2004,o,"3 Formulation Following Klein and Manning (2001), we use weighted directed hypergraphs (Gallo et al. , 1993) as an abstraction of the probabilistic parsing problem." W05-1509,J93-2004,o,"State-of-the-art statistical parsers trained on the Penn Treebank (PTB) (Marcus et al.
, 1993) pro... [Figure 1: A sample syntactic structure with function labels: (S (NP-SBJ the authority) (VP (VBD dropped) (PP-TMP (IN at) (NP (NN midnight))) (NP-TMP (NNP Tuesday)) (PP-DIR (TO to) (NP (QP $ 2.80 trillion)))))]." W05-1510,J93-2004,p,"We evaluated the generator on the Penn Treebank (Marcus et al. , 1993), which is a highly reliable corpus consisting of real-world texts." W05-1511,J93-2004,o,"Probabilistic models where probabilities are assigned to the CFG backbone of the unification-based grammar have been developed (Kasper et al. , 1996; Briscoe and Carroll, 1993; Kiefer et al. , 2002), and the most probable parse is found by PCFG parsing." W05-1511,J93-2004,o,"Most of them were developed for exhaustive parsing, i.e., producing all parse results that are given by the grammar (Matsumoto et al. , 1983; Maxwell and Kaplan, 1993; van Noord, 1997; Kiefer et al. , 1999; Malouf et al. , 2000; Torisawa et al. , 2000; Oepen et al. , 2002; Penn and Munteanu, 2003)." W05-1512,J93-2004,o,"Data and Parameters To facilitate comparison with previous work, we trained our models on sections 2-21 of the WSJ section of the Penn tree-bank (Marcus et al. , 1993)." W05-1513,J93-2004,o,"We trained and tested the parser on the Wall Street Journal corpus of the Penn Treebank (Marcus et al. , 1993) using the standard split: sections 2-21 were used for training, section 22 was used for development and tuning of parameters and features, and section 23 was used for testing." W05-1619,J93-2004,o,"For instance, the HALOGEN statistical realizer [Langkilde-Geary, 2002] underwent the most comprehensive evaluation of any surface realizer, which was conducted by measuring sentences extracted from the Penn TreeBank [Marcus et al. , 1993], converting them into its input formalism, and then producing output strings."
W05-1619,J93-2004,o,"Since text planners cannot generate either the requisite syntactic variation or quantity of text, [Langkilde-Geary, 2002] developed an evaluation strategy for HALOGEN employing a substitute: sentence parses from the Penn TreeBank [Marcus et al. , 1993], a corpus that includes texts from newspapers such as the Wall Street Journal, and which have been hand-annotated for syntax by linguists." W06-0305,J93-2004,o,"2 The Penn Discourse TreeBank (PDTB) The PDTB contains annotations of discourse relations and their arguments on the Wall Street Journal corpus (Marcus et al. , 1993)." W06-0602,J93-2004,o,"This corpus contains annotations of semantic PASs superimposed on the Penn Treebank (PTB) (Marcus et al. , 1993; Marcus et al. , 1994)." W06-0609,J93-2004,o,"(Marcus, et al. 1993; Santorini 1990) The syntactic annotation task consists of marking constituent boundaries, inserting empty categories (traces of movement, PRO, pro), showing the relationships between constituents (argument/adjunct structures), and specifying a particular subset of adverbial roles." W06-0611,J93-2004,o,"Section 4 concludes the paper with a critical assessment of the proposed approach and a discussion of the prospects for application in the construction of corpora comparable in size and quality to existing treebanks (such as, for example, the Penn Treebank for English (Marcus et al. , 1993) or the TIGER Treebank for German (Brants et al. , 2002))." W06-1205,J93-2004,o,"1 Introduction A ""pain in the neck"" (Sag et al. , 2002) for NLP in languages of the Indo-Aryan family (e.g. Hindi-Urdu, Bangla and Kashmiri) is the fact that most verbs (nearly half of all instances in Hindi) occur as complex predicates multi-word complexes which function as a single verbal unit in terms of argument and event structure (Hook, 1993; Butt and Geuder, 2003; Raina and Mukerjee, 2005)." W06-1205,J93-2004,o,"4.2 Word alignment We have used IBM models proposed by Brown (Brown et al. 
, 1993) for word aligning the parallel corpus." W06-1601,J93-2004,o,"5 Datasets and Evaluation We train our models with verb instances extracted from three parsed corpora: (1) the Wall Street Journal section of the Penn Treebank (PTB), which was parsed by human annotators (Marcus et al. , 1993), (2) the Brown Laboratory for Linguistic Information Processing corpus of Wall Street Journal text (BLLIP), which was parsed automatically by the Charniak parser (Charniak, 2000), and (3) the Gigaword corpus of raw newswire text (GW), which we parsed ourselves with the Stanford parser." W06-1608,J93-2004,o,"The parser is trained on dependencies extracted from the English Penn Treebank version 3.0 (Marcus et al. , 1993) by using the head-percolation rules of (Yamada and Matsumoto, 2003)." W06-1612,J93-2004,o,"A third of the corpus is syntactically parsed as part of the Penn Treebank (Marcus et al. , 1993) 2This type corresponds to Prince's (1981; 1992) inferrables." W06-1615,J93-2004,o,"5 Data Sets and Supervised Tagger 5.1 Source Domain: WSJ We used sections 02-21 of the Penn Treebank (Marcus et al. , 1993) for training." W06-1615,J93-2004,o,"There are many choices for modeling co-occurrence data (Brown et al. , 1992; Pereira et al. , 1993; Blei et al. , 2003)." W06-1636,J93-2004,o,"1 Introduction and Previous Research It is by now commonplace knowledge that accurate syntactic parsing is not possible given only a context-free grammar with standard Penn Treebank (Marcus et al. , 1993) labels (e.g. , S, NP, etc)." W06-1638,J93-2004,o,"Jensen-Shannon divergence is defined as D(q,r) = (1/2) (D(q || (q+r)/2) + D(r || (q+r)/2)). These experiments are a kind of poor man's version of the deterministic annealing clustering algorithm (Pereira et al. , 1993; Rose, 1998), which gradually increases the number of clusters during the clustering process."
W06-1638,J93-2004,o,"We used sections 220 of the Penn Treebank 2 Wall Street Journal corpus (Marcus et al. , 1993) for training, section 22 as development set and section 23 for testing." W06-1652,J93-2004,o,"The OP data consists of 2,452 documents from the Penn Treebank (Marcus et al. , 1993)." W06-1666,J93-2004,o,"5 Experimental Evaluation To perform empirical evaluations of the proposed methods, we considered the task of parsing the Penn Treebank Wall Street Journal corpus (Marcus et al. , 1993)." W06-2110,J93-2004,o,"4 Data Collection We evaluated out method by running RASP over Brown Corpus and Wall Street Journal, as contained in the Penn Treebank (Marcus et al. , 1993)." W06-2112,J93-2004,o,"Neither (Hindle and Rooth, 1993) with 67% nor (Ratnaparkhi et al. , 1994) with 59% noun attachment were anywhere close to this figure." W06-2112,J93-2004,o,"But it makes obvious that (Ratnaparkhi et al. , 1994) were tackling a problem different from (Hindle and Rooth, 1993) given the fact that their baseline was at 59% guessing noun attachment (rather than 67% in the Hindle and Rooth experiments).3 Of course, the baseline is not a direct indicator of the difficulty of the disambiguation task." W06-2112,J93-2004,o,"3.1 Results for English We used sections 0 to 12 of the WSJ part of the Penn Treebank (Marcus et al. , 1993) with a total of 24,618 sentences for our experiments." W06-2303,J93-2004,o,"PropBank encodes propositional information by adding a layer of argument structure annotation to the syntactic structures of the Penn Treebank (Marcus et al. , 1993)." W06-2902,J93-2004,o,"We use the Penn Treebank Wall Street Journal corpus as the large corpus and individual sections of the Brown corpus as the target corpora (Marcus et al. , 1993)." W06-2902,J93-2004,o,"This research has focused mostly on the development of statistical parsers trained on large annotated corpora, in particular the Penn Treebank WSJ corpus (Marcus et al. , 1993)." 
W06-3122,J93-2004,o,"We retrained the parser on lowercased Penn Treebank II (Marcus et al. , 1993), to match the lowercased output of the MT decoder." W06-3327,J93-2004,o,"We measured the accuracy of the POS tagger trained in three settings: Original: The tagger is trained with the union of Wall Street Journal (WSJ) section of Penn Treebank (Marcus et al 1993), GENIA, and Penn BioIE." W06-3604,J93-2004,o,"As the third test set we selected all tokens of the Brown corpus part of the Penn Treebank (Marcus et al. , 1993), a selected portion of the original one-million word Brown corpus (Kucera and Francis, 1967), a collection of samples of American English in many different genres, from sources printed in 1961; we refer to this test set as BROWN." W07-0738,J93-2004,o,"Tag sets for English are derived from the Penn Treebank (Marcus et al. , 1993)." W07-1001,J93-2004,o,"The CDR (Morris, 1993) is assigned with access to clinical and cognitive test information, independent of performance on the battery of neuropsychological tests used for this research study, and has been shown to have high expert inter-annotator reliability (Morris et al. , 1997)." W07-1001,J93-2004,o,"Narrative retellings provide a natural, conversational speech sample that can be analyzed for many of the characteristics of speech and language that have been shown to discriminate between healthy and impaired subjects, including syntactic complexity (Kemper et al. , 1993; Lyons et al. , 1994) and mean pause duration (Singh et al. , 2001)." W07-1217,J93-2004,o,"Empirical evaluation has been done with the ERG on a small set of texts from the Wall Street Journal Section 22 of the Penn Treebank (Marcus et al. , 1993)." W07-1502,J93-2004,p,"After the success in syntactic (Penn TreeBank (Marcus et al. , 1993)) and propositional encodings (Penn PropBank (Palmer et al. , 2005)), more sophisticated semantic data (such as temporal (Pustejovsky et al. , 2003) or opinion annotations (Wiebe et al. 
, 2005)) and discourse data (e.g. , for anaphora resolution (van Deemter and Kibble, 2000) and rhetorical parsing (Carlson et al. , 2003)) are being generated." W07-1502,J93-2004,p,"While significant time savings have already been reported on the basis of automatic pre-tagging (e.g. , for POS and parse tree taggings in the Penn TreeBank (Marcus et al. , 1993), or named entity taggings for the Genia corpus (Ohta et al. , 2002)), this kind of pre-processing does not reduce the number of text tokens actually to be considered." W07-1505,J93-2004,o,"With respect to already available POS tagsets, the scheme allows corresponding extensions of the supertype POSTag to, e.g., PennPOSTag (for the Penn Tag Set (Marcus et al. , 1993)) or GeniaPOSTag (for the GENIA Tag Set (Ohta et al. , 2002))." W07-1505,J93-2004,o,"Currently, the scheme supports PhraseChunks with subtypes such as NP, VP, PP, or ADJP (Marcus et al. , 1993)." W07-1505,J93-2004,o,"The Dublin Core Metadata Initiative3 established a de facto standard for the Semantic Web.4 For (computational) linguistics proper, syntactic annotation schemes, such as the one from the Penn Treebank (Marcus et al. , 1993), or semantic annotations, such as the one underlying ACE (Doddington et al. , 2004), are increasingly being used in a quasi standard way." W07-1517,J93-2004,o,"The Penn Treebank annotation (Marcus et al. , 1993) was chosen to be the first among equals: it is the starting point for the merger and data from other annotations are attached at tree nodes." W07-1524,J93-2004,o,"Some of them are based upon syntactic structure, with PropBank (Kingsbury and Palmer, 2003) being one of the most relevant, building the annotation upon the syntactic representation of the TreeBank corpus (Marcus et al. , 1993)." W07-1530,J93-2004,o,"6 Penn Discourse Treebank (Bonnie Webber, Edinburgh) The Penn Discourse TreeBank (Miltsakaki et al. , 2004; Prasad et al. 
, 2004; Webber, 2005) annotates discourse relations over the Wall Street Journal corpus (Marcus et al. , 1993), in terms of discourse connectives and their arguments." W07-1530,J93-2004,n,"It has been difficult to identify all and only those cases where a token functions as a discourse connective, and in many cases, the syntactic analysis in the Penn TreeBank (Marcus et al. , 1993) provides no help." W07-1602,J93-2004,o,"For this reason, each preposition and verb was assigned a weight based on the proportion of occurrences of that word in the Penn Treebank (Marcus et al. , 1993) which are labelled with a spatial meaning." W07-2048,J93-2004,o,"We trained the parser on the Penn Treebank (Marcus et al. , 1993)." W07-2052,J93-2004,o,"We parsed the TimeEval data using MSTParser v0.2 (McDonald and Pereira, 2006), which is trained with all Penn Treebank (Marcus et al. , 1993) without dependency label." W07-2204,J93-2004,o,"The sentences included in the gold standard were chosen at random from the BNC, subject to the condition that they contain a verb which does not occur in the training sections of the WSJ section of the PTB (Marcus et al. , 1993)." W07-2211,J93-2004,o,"Examples of this work include a system by Liu et al (1990), and experiments by Hindle and Rooth (1993), and Resnik and Hearst (1993).2 These efforts had mixed success, suggesting that while multi-level preference scores are problematic, integrating some corpus data does not solve the problems." W07-2216,J93-2004,o,"Figure 1 gives an example dependency graph for the sentence Mr. Tomash will remain as a director emeritus, whichhasbeenextractedfromthe Penn Treebank (Marcus et al. , 1993)." W07-2217,J93-2004,o,"5 Parsing experiments 5.1 Data and setup We used the standard partitions of the Wall Street Journal Penn Treebank (Marcus et al. , 1993); i.e., sections 2-21 for training, section 22 for development and section 23 for evaluation." 
W08-0614,J93-2004,p,"1 Introduction Large scale annotated corpora, e.g., the Penn TreeBank (PTB) project (Marcus et al. 1993), have played an important role in text-mining." W08-0614,J93-2004,o,"The current release of PDTB2.0 contains the annotations of 1,808 Wall Street Journal articles (~1 million words) from the Penn TreeBank (Marcus et al. 1993) II distribution and a total of 40,600 discourse connective tokens (Prasad et al. 2008b)." W08-1008,J93-2004,o,"Other languagesfor which this is the case include English (with the Penn treebank (Marcus et al., 1993), the Susanne Corpus (Sampson, 1993), and the British section of the ICE Corpus (Wallis and Nelson, 2006)) and Italian (with ISST (Montegmagni et al., 2000) and TUT (Bosco et al., 2000))." W08-1301,J93-2004,o,"First, we noted how frequently WordNet (Fellbaum, 1998) gets used compared to other resources, such as FrameNet (Fillmore et al., 2003) or the Penn Treebank (Marcus et al., 1993)." W08-2101,J93-2004,o,"2 The Data Our experiments on joint syntactic and semantic parsing use data that is produced automatically by merging the Penn Treebank (PTB) with PropBank (PRBK) (Marcus et al., 1993; Palmer et al., 2005), as shown in Figure 1." W08-2121,J93-2004,o,"html 162 3.1.1 Penn Treebank 3 The Penn Treebank 3 corpus (Marcus et al., 1993) consists of hand-coded parses of the Wall Street Journal (test, development and training) and a small subset of the Brown corpus (W. N. Francis and H. Kucera, 1964) (test only)." W08-2121,J93-2004,o,"3.2 Conversion to Dependencies 3.2.1 Syntactic Dependencies There exists no large-scale dependency treebank for English, and we thus had to construct a dependency-annotated corpus automatically from the Penn Treebank (Marcus et al., 1993)." 
W08-2122,J93-2004,o,"In this vein, the CoNLL 2008 shared task sets the challenge of learning jointly both syntactic dependencies (extracted from the Penn Treebank (Marcus et al., 1993) ) and semantic dependencies (extracted both from PropBank (Palmer et al., 2005) c2008." W09-0103,J93-2004,o,"My guess is that the features used in e.g., the Collins (2003) or Charniak (2000) parsers are probably close to optimal for English Penn Treebank parsing (Marcus et al., 1993), but that other features might improve parsing of other languages or even other English genres." W09-0104,J93-2004,o,"I have made a preliminary analysis of the inventory of syntactic categories used in the tagging for labelling trees in the 18 Penn Treebank (Marcus et al., 1993), comparing them to the categories used in CGEL." W09-0608,J93-2004,o,"However, with their system trained on the medical corpus and then tested on the Wall Street Journal corpus (Marcus et al., 1993), they achieve an overall prediction accuracy of only 54%." W09-0905,J93-2004,o,"Due to its popularity for unsupervised POS induction research (e.g., Goldberg et al., 2008; Goldwater and Griffiths, 2007; Toutanova and Johnson, 2008) and its often-used tagset, for our initial research, we use the Wall Street Journal (WSJ) portion of the Penn Treebank (Marcus et al., 1993), with 36 tags (plus 9 punctuation tags), and we use sections 00-18, leaving held-out data for future experiments.4 Defining frequent frames as those occurring at 4Even if we wanted child-directed speech, the CHILDES database (MacWhinney, 2000) uses coarse POS tags." W09-1007,J93-2004,o,"For testing purposes, we used the Wall Street Journal part of the Penn Treebank corpus (Marcus et al., 1993)." W09-1117,J93-2004,o,"The Spanish corpus was parsed using the MST dependency parser (McDonald et al., 2005) trained using dependency trees generated from the the English Penn Treebank (Marcus et al., 1993) and Spanish CoNLL-X data (Buchholz and Marsi, 2006)." 
W09-1404,J93-2004,o,"The parser expresses distinctions that are especially important for a predicate-argument based deep syntactic representation, as far as they are expressed in the training data generated from the Penn Treebank (Marcus et al., 1993)." W09-1805,J93-2004,o,"A description of the flat featurized dependency-style syntactic representation we use is available in (Langkilde-Geary and Betteridge, 2006), which describes how the entire Penn Treebank (Marcus et al., 1993) was converted to this representation." W09-2310,J93-2004,o,"5.3 Experimental setup We used the Stanford Parser (Klein and Manning, 2003) for both languages, Penn English Treebank (Marcus et al., 1993) and Penn Arabic Treebank set (Kulick et al., 2006)." W09-2406,J93-2004,o,"3 Network Evaluation We present an evaluation which has been carried out on an initial set of annotations of English articles from The Wall Street Journal (covering those annotated at the syntactic level in the Penn Treebank (Marcus et al., 1993))." W09-2603,J93-2004,p,"The Penn Treebank (Marcus et al., 1993) has until recently been the only such corpus, covering 4.5M words in a single genre of financial reporting." W95-0101,J93-2004,o,"Unsupervised Learning: Results To test the effectiveness of the above unsupervised learning algorithm, we ran a number of experiments using two different corpora and part of speech tag sets: the Penn Treebank Wall Street Journal Corpus \[Marcus et al. , 1993\] and the original Brown Corpus \[Francis and Kucera, 1982\]." W95-0101,J93-2004,o,"\[Francis and Kucera, 1982; Marcus et al. , 1993\]), training on a corpus of one type and then applying the tagger to a corpus of a different type usually results in a tagger with low accuracy \[Weischedel et al. , 1993\]." 
W95-0101,J93-2004,o,"Transformation-based error-driven learning has been applied to a number of natural language problems, including part of speech tagging, prepositional phrase attachment disambiguation, speech generation and syntactic parsing \[Brill, 1992; Brill, 1994; Ramshaw and Marcus, 1994; Roche and Schabes, 1995; Brill and Resnik, 1994; Huang et al. , 1994; Brill, 1993a; Brill, 1993b\]." W95-0101,J93-2004,o,"Almost all of the work in the area of automatically trained taggers has explored Markov-model based part of speech tagging \[Jelinek, 1985; Church, 1988; Derose, 1988; DeMarcken, 1990; Cutting et al. , 1992; Kupiec, 1992; Charniak et al. , 1993; Weischedel et al. , 1993; Schutze and Singer, 1994; Lin et al. , 1994; Elworthy, 1994; Merialdo, 1995\]." W95-0101,J93-2004,o,"Below is an example of the initial-state tagging of a sentence from the Penn Treebank \[Marcus et al. , 1993\], where an underscore is to be read as or." W95-0104,J93-2004,o,"The performance figures given below are based on training each method on the 1-million-word Brown corpus \[Kucera and Francis, 1967\] and testing it on a 3/4-million-word corpus of Wall Street Journal text \[Marcus et al. , 1993\]." W95-0105,J93-2004,o,"(1993) found that direct annotation takes twice as long as automatic tagging plus correction, for part-of-speech annotation); and the output quality reflects the difficulty of the task (inter-annotator disagreement is on the order of 10%, as contrasted with the approximately 3% error rate reported for part-of-speech annotation by Marcus et al.)." W95-0105,J93-2004,o,"The traditional method of evaluating similarity in a semantic network by measuring the path length between two nodes (Lee et al. , 1993; Rada et al.
, 1989) also captures this, albeit indirectly, when the semantic network is just an IS-A hierarchy: if the minimal path of IS-A links between two nodes is long, that means it is necessary to go high in the taxonomy, to more abstract concepts, in order to find their least upper bound." W95-0105,J93-2004,o,"(Bensch and Savitch, 1992; Brill, 1991; Brown et al. , 1992; Grefenstette, 1994; McKeown and Hatzivassiloglou, 1993; Pereira et al. , 1993; Schutze, 1993))." W95-0112,J93-2004,o,"Furthermore, training corpora for information extraction are typically annotated with domain-specific tags, in contrast to general-purpose annotations such as part-of-speech tags or noun-phrase bracketing (e.g. , the Brown Corpus \[Francis and Kucera, 1982\] and the Penn Treebank \[Marcus et al. , 1993\])." W95-0112,J93-2004,o,"General purpose text annotations, such as part-of-speech tags and noun-phrase bracketing, are costly to obtain but have wide applicability and have been used successfully to develop statistical NLP systems (e.g. , \[Church, 1989; Weischedel et al. , 1993\])." W96-0111,J93-2004,o,"In previous work, we tested the DOP method on a cleaned-up set of analyzed part-of-speech strings from the Penn Treebank (Marcus et al. , 1993), achieving excellent test results (Bod, 1993a, b)." W96-0111,J93-2004,o,"The latter approach has become increasingly popular (e.g. Schabes et al. , 1993; Weischedel et al. , 1993; Briscoe, 1994; Magerman, 1995; Collins, 1996)." W96-0111,J93-2004,o,"To deal with this question, we use ATIS p-o-s trees as found in the Penn Treebank (Marcus et al. , 1993)." W96-0112,J93-2004,o,"1993; Chang et al. , 1992; Collins and Brooks, 1995; Fujisaki, 1989; Hindle and Rooth, 1991; Hindle and Rooth, 1993; Jelinek et al. , 1990; Magerman and Marcus, 1991; Magerman, 1995; Ratnaparkhi et al. , 1994; Resnik, 1993; Su and Chang, 1988)."
W96-0112,J93-2004,o,"We extracted 181,250 case frames from the WSJ (Wall Street Journal) bracketed corpus of the Penn Tree Bank (Marcus et al. , 1993)." W96-0112,J93-2004,o,"For subproblem (a), we have devised a new method, based on LPR, which has some good properties not shared by the methods proposed so far (Alshawi and Carter, 1995; Chang et al. , 1992; Collins and Brooks, 1995; Hindle and Rooth, 1991; Ratnaparkhi et al. , 1994; Resnik, 1993)." W96-0203,J93-2004,o,"Clusters are created by means of distributional techniques in (Ratnaparkhi et al, 1994), while in (Resnik and Hearst, 1993) low level synonym sets in WordNet are used." W96-0203,J93-2004,o,"This method is described hereafter, while the subsequent steps, that use deeper (rule-based) levels of knowledge, are implemented into the ARIOSTO_LEX lexical learning system, described in (Basili et al. , 1993b, 1993c and 1996)." W96-0203,J93-2004,o,"The class based disambiguation operator is the Mutual Conditioned Plausibility (MCPI) (Basili et al. , 1993a)." W96-0203,J93-2004,o,"In general the training set is the parsed Wall Street Journal (Marcus et al, 1993), with few exceptions, and the size of the training samples is around 10-20,000 test cases." W96-0203,J93-2004,o,"This incremental process can be iterated to the point that the system 1 It is not just a matter of time, but also of required linguistic skills (see for example (Marcus et al, 1993))." W96-0203,J93-2004,o,"These later inductive phases may rely on some level of a priori knowledge, like for example the naive case relations used in the ARIOSTO_LEX system (Basili et al, 1993c, 1996)." W96-0203,J93-2004,o,"To simplify, the plausibility of a detected esl is roughly inversely proportional to the number of mutually excluding syntactic structures in the text segment that generated the esl (see (Basili et al, 1993a) for details)."
W96-0208,J93-2004,o,"Learning to Disambiguate Word Senses Several recent research projects have taken a corpus-based approach to lexical disambiguation (Brown, Della-Pietra, Della-Pietra, & Mercer, 1991; Gale, Church, & Yarowsky, 1992b; Leacock et al. , 1993b; Lehman, 1994)." W96-0213,J93-2004,o,"[Figure 1: Distribution of Tags for the word ""about"" vs. Article#] [Table 10: Performance of Baseline & Specialized Model When Tested on Consistent Subset of Development Set. Training Size (words): 571,190; Test Size (words): 44,478; Baseline: 97.04%; Specialized: 97.13%] [Figure 2: Distribution of Tags for the word ""about"" vs. Annotator] (Weischedel et al., 1993) provide the results from a battery of ""tri-tag"" Markov Model experiments, in which the probability P(W,T) of observing a word sequence W = {w1, w2, ..., wn} together with a tag sequence T = {t1, t2, ..., tn} is given by: P(T|W)P(W) = p(t1) p(t2|t1) prod_{i=3}^{n} p(ti|ti-2, ti-1) prod_{i=1}^{n} p(wi|ti). Furthermore, p(wi|ti) for unknown words is computed by the following heuristic, which uses a set of 35 pre-determined endings: p(wi|ti) = p(unknown word|ti) x p(capital feature|ti) x p(endings, hyphenation|ti). This approximation works as well as the MaxEnt model, giving 85% unknown word accuracy (Weischedel et al., 1993) on the Wall St. Journal, but cannot be generalized to handle more diverse information sources." W96-0213,J93-2004,o,"Previous uses of this model include language modeling (Lau et al., 1993), machine translation (Berger et al., 1996), prepositional phrase attachment (Ratnaparkhi et al., 1994), and word morphology (Della Pietra et al., 1995)." W96-0213,J93-2004,o,"In practice, H is very large and the model's expectation E fj cannot be computed directly, so the following approximation (Lau et al., 1993) is used: E fj ~= sum_{i=1}^{n} p~(hi) p(ti|hi) fj(hi, ti), where p~(hi) is the observed probability of the history hi in the training set." W96-0213,J93-2004,o,"Comparison With Previous Work Most of the recent corpus-based POS taggers in the literature are either statistically based, and use Markov Model (Weischedel et al., 1993; Merialdo, 1994) or Statistical Decision Tree (Jelinek et al., 1994; Magerman, 1995) (SDT) techniques, or are primarily rule based, such as Brill's Transformation Based Learner (Brill, 1994) (TBL)." W97-0105,J93-2004,o,"Slrs Parse Base (Black et al. , 1993a) is 1.76." W97-0105,J93-2004,o,"In all other respects, our work departs from previous research on broad-coverage probabilistic parsing, which either attempts to learn to predict grammatical structure of test data directly from a training treebank (Brill, 1993; Collins, 1996; Eisner, 1996; Jelinek et al., 1994; Magerman, 1995; Sekine and Grishman, 1995; Sharman et al., 1990), or employs a grammar and sometimes a dictionary to capture linguistic expertise directly (Black et al., 1993a; Grinberg et al., 1995; Schabes, 1992), but arguably at a less detailed and informative level than in the research reported here." W97-0105,J93-2004,o,"Clearly the present research task is quite considerably harder than the parsing and tagging tasks undertaken in (Jelinek et al. , 1994; Magerman, 1995; Black et al. , 1993b), which would seem to be the closest work to ours, and any comparison between this work and ours must be approached with extreme caution." W97-0105,J93-2004,o,"Table 3 shows the differences between the treebanks utilized in (Jelinek et al., 1994) on the one hand, and in the work reported here, on the other. Table 4 shows relevant figures (figures for Average Sentence Length (Training Corpus) and Training Set Size, for the IBM Manuals Corpus, are approximate, and came from (Black et al., 1993a))."
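The tri-tag Markov model quoted in the W96-0213 excerpt above scores a tagged sentence as p(t1) p(t2|t1) times a product of trigram tag transitions and per-tag word emissions. A minimal sketch in Python follows; the probability-table layout and function names are illustrative assumptions, not code from the cited papers.

```python
import math

def tritag_log_prob(words, tags, p_t1, p_t2, p_trans, p_emit):
    # log P(W, T) = log p(t1) + log p(t2|t1)
    #   + sum_{i>=3} log p(ti | ti-2, ti-1) + sum_i log p(wi | ti)
    # Assumes len(tags) >= 2 and all needed probabilities are present and positive.
    lp = math.log(p_t1[tags[0]]) + math.log(p_t2[(tags[0], tags[1])])
    for i in range(2, len(tags)):
        lp += math.log(p_trans[(tags[i - 2], tags[i - 1], tags[i])])
    for w, t in zip(words, tags):
        lp += math.log(p_emit[(t, w)])
    return lp
```

Log probabilities are used so that the long product does not underflow on realistic sentence lengths.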
W97-0105,J93-2004,o,"By labelling Treebank nodes with Grammar rule names, and not with phrasal and clausal names, as in other (non-grammar-based) treebanks (Eyes and Leech, 1993; Garside and McEnery, 1993; Marcus et al., 1993), we gain access to all information provided by the Grammar regarding each treebank node." W97-0105,J93-2004,o,"For example, the feature (footnote: On the ATR English Grammar, see below; for a detailed description of a precursor to the Grammar, see (Black et al., 1993a))." W97-0121,J93-2004,o,"We collected training samples from the Brown Corpus distributed with the Penn Treebank (Marcus et al., 1993)." W97-0201,J93-2004,o,"3 The Effect of Training Corpus Size A number of past research work on WSD, such as (Leacock et al. , 1993; Bruce and Wiebe, 1994; Mooney, 1996), were tested on a small number of words like ""line"" and ""interest""." W97-0202,J93-2004,o,"edu Abstract This paper reports on our experience hand tagging the senses of 25 of the most frequent verbs in 12,925 sentences of the Wall Street Journal Treebank corpus (Marcus et al. 1993)." W97-0202,J93-2004,o,"1 Introduction This paper reports on our experience hand tagging the senses of 25 of the most frequent verbs in 12,925 sentences of the Wall Street Journal Treebank corpus (Marcus et al. 1993)."
W97-0209,J93-2004,o,"In marked contrast to annotated training material for part-of-speech tagging, (a) there is no coarse-level set of sense distinctions widely agreed upon (whereas part-of-speech tag sets tend to differ in the details); (b) sense annotation has a comparatively high error rate (Miller, personal communication, reports an upper bound for human annotators of around 90% for ambiguous cases, using a non-blind evaluation method that may make even this estimate overly optimistic); and (c) no fully automatic method provides high enough quality output to support the ""annotate automatically, correct manually"" methodology used to provide high volume annotation by data providers like the Penn Treebank project (Marcus et al. , 1993)." W97-0209,J93-2004,o,"Test and training materials were derived from the Brown corpus of American English, all of which has been parsed and manually verified by the Penn Treebank project (Marcus et al. , 1993) and parts of which have been manually sense-tagged by the WordNet group (Miller et al. , 1993)." W97-0209,J93-2004,o,"The approach combines statistical and knowledge-based methods, but unlike many recent corpus-based approaches to sense disambiguation (Yarowsky, 1993; Bruce and Wiebe, 1994; Miller et al. , 1994), it takes as its starting point the assumption that sense-annotated training text is not available." W97-0301,J93-2004,o,"3 Probability Model This paper takes a ""history-based"" approach (Black et al. , 1993) where each tree-building procedure uses a probability model p(a|b), derived from p(a, b), to weight any action a based on the available context, or history, b. First, we present a few simple categories of contextual predicates that capture any information in b that is useful for predicting a. Next, the predicates are used to extract a set of features from a corpus of manually parsed sentences."
W97-0308,J93-2004,o,"We have processed the Susanne corpus (Sampson, 1995) and Penn treebank (Marcus et al, 1993) to provide tables of word and subtree alignments." W97-1005,J93-2004,o,"Both data were extracted from the Penn Treebank Wall Street Journal (WSJ) Corpus (Marcus et al. , 1993)." W97-1005,J93-2004,o,"Statistical and information theoretic approaches (Hindle and Rooth, 1993), (Ratnaparkhi et al. , 1994),(Collins and Brooks, 1995), (Franz, 1996) Using lexical collocations to determine PPA with statistical techniques was first proposed by (Hindle and Rooth, 1993)." W97-1502,J93-2004,p,"Many mainstream systems and formalisms would satisfy these criteria, including ones such as the University of Pennsylvania Treebank (Marcus et al, 1993) which are purely syntactic (though of course, only syntactic properties could then be extracted)." W98-0701,J93-2004,o,"Because our algorithm does not consider the context given by the preceding sentences, we have conducted the following experiment to see to what extent the discourse context could improve the performance of the wordsense disambiguation: Using the semantic concordance files (Miller et al. , 1993), we have counted the occurrences of content words which previously appear in the same discourse file." W98-0701,J93-2004,o,"Both for the training and for the testing of our algorithm, we used the syntactically analysed sentences of the Brown Corpus (Marcus, 1993), which have been manually semantically tagged (Miller et al. , 1993) into semantic concordance files (SemCor)." W98-0717,J93-2004,o,"(1994) from the Penn Treebank (Marcus et al. , 1993) WSJ corpus." W98-1114,J93-2004,o,"Systems which are able to acquire a small number of verbal subcategorisation classes automatically from corpus text have been described by Brent (1991, 1993), and Ushioda et al." W98-1115,J93-2004,o,"4 The Experiment For our experiment, we used a tree-bank grammar induced from sections 2-21 of the Penn Wall Street Journal text (Marcus et al. 
, 1993), with section 22 reserved for testing." W98-1119,J93-2004,o,"This program differs from earlier work in its almost complete lack of hand-crafting, relying instead on a very small corpus of Penn Wall Street Journal Tree-bank text (Marcus et al. , 1993) that has been marked with co-reference information." W98-1121,J93-2004,o,"Table 1: Size of the Trains Corpus. Dialogs: 98; Speakers: 34; Turns: 6,163; Words: 58,298; Fragments: 756; Distinct Words: 859; Distinct Words/POS: 1,101; Singleton Words: 252; Singleton Words/POS: 350; Intonational Phrases: 10,947; Speech Repairs: 2,396. 2.1 POS Annotations Our POS tagset is based on the Penn Treebank tagset (Marcus et al., 1993), but modified to include tags for discourse markers and end-of-turns, and to provide richer syntactic information (Heeman, 1997)." W98-1121,J93-2004,o,"(Charniak et al. , 1993)) simplify these probability distributions, as given in Equations 9 and 10." W98-1126,J93-2004,o,"The data consists of 2,544 main clauses from the Wall Street Journal Treebank corpus (Marcus et al. , 1993)." W99-0104,J93-2004,o,"We use the finite-state parses of FASTUS (Appelt et al., 1993) for recognizing these entities, but the method extends to any basic phrasal parser." W99-0104,J93-2004,o,"This knowledge is represented in axiomatic form, using the notation proposed in (Hobbs et al. , 1993) and previously implemented in TACITUS." W99-0104,J93-2004,o,"Such methods were presented in (Hobbs et al., 1993) and (Wilensky, 1978)." W99-0104,J93-2004,o,"The first one makes use of the advances in the parsing technology or on the availability of large parsed corpora (e.g. Treebank (Marcus et al., 1993)) to produce algorithms inspired by Hobbs' baseline method (Hobbs, 1978)." W99-0204,J93-2004,o,"ing insights from EAGLES, Penn TreeBank [Marcus et al., 1993], and so forth."
W99-0301,J93-2004,o,"html] provided by Lynette Hirschman; syntactic structures in the style of the Penn TreeBank (Marcus et al., 1993) provided by Ann Taylor; and an alternative annotation for the F0 aspects of prosody, known as Tilt (Taylor, 1998) and provided by its inventor, Paul Taylor." W99-0502,J93-2004,p,"It is widely acknowledged that word sense disambiguation (WSD) is a central problem in natural language processing. In order for computers to be able to understand and process natural language beyond simple keyword matching, the problem of disambiguating word sense, or discerning the meaning of a word in context, must be effectively dealt with. Advances in WSD will have significant impact on applications like information retrieval and machine translation. For natural language subtasks like part-of-speech tagging or syntactic parsing, there are relatively well defined and agreed-upon criteria of what it means to have the ""correct"" part of speech or syntactic structure assigned to a word or sentence. For instance, the Penn Treebank corpus (Marcus et al., 1993) provides a large repository of texts annotated with part-of-speech and syntactic structure information. Two independent human annotators can achieve a high rate of agreement on assigning part-of-speech tags to words in a given sentence. Unfortunately, this is not the case for word sense assignment. Firstly, it is rarely the case that any two dictionaries will have the same set of sense definitions for a given word. Different dictionaries tend to carve up the ""semantic space"" in a different way, so to speak. Secondly, the list of senses for a word in a typical dictionary tends to be rather refined and comprehensive. This is especially so for the commonly used words which have a large number of senses. The sense distinctions between the different senses for a commonly used word in a dictionary like WordNet (Miller, 1990) tend to be rather fine. Hence, two human annotators may genuinely disagree in their sense assignment to a word in context.
The agreement rate between human annotators on word sense assignment is an important concern for the evaluation of WSD algorithms. One would prefer to define a disambiguation task for which there is reasonably high agreement between human annotators. The agreement rate between human annotators will then form the upper ceiling against which to compare the performance of WSD algorithms. For instance, the SENSEVAL exercise has performed a detailed study to find out the inter-annotator agreement among its lexicographers tagging the word senses (Kilgarriff, 1998c; Kilgarriff, 1998a; Kilgarriff, 1998b). 2 A Case Study In this paper, we examine the issue of inter-annotator agreement by comparing the agreement rate of human annotators on a large sense-tagged corpus of more than 30,000 instances of the most frequently occurring nouns and verbs of English. This corpus is the intersection of the WordNet Semcor corpus (Miller et al., 1993) and the DSO corpus (Ng and Lee, 1996; Ng, 1997), which has been independently tagged with the refined senses of WordNet by two separate groups of human annotators. The Semcor corpus is a subset of the Brown corpus tagged with WordNet senses, and consists of more than 670,000 words from 352 text files. Sense tagging was done on the content words (nouns, verbs, adjectives and adverbs) in this subset. The DSO corpus consists of sentences drawn from the Brown corpus and the Wall Street Journal. For each word w from a list of 191 frequently occurring words of English (121 nouns and 70 verbs), sentences containing w (in singular or plural form, and in its various inflectional verb forms) are selected and each word occurrence w is tagged with a sense from WordNet. There is a total of about 192,800 sentences in the DSO corpus in which one word occurrence has been sense-tagged in each sentence. The intersection of the Semcor corpus and the DSO corpus thus consists of Brown corpus sentences in which a word occurrence w is sense-tagged in each sentence, where w is one of the 191 frequently occurring English nouns or verbs. Since this common portion has been sense-tagged by two independent groups of human annotators, it serves as our data set for investigating inter-annotator agreement in this paper.
3 Sentence Matching To determine the extent of inter-annotator agreement, the first step is to match each sentence in Semcor to its corresponding counterpart in the DSO corpus. This step is complicated by the following factors: 1. Although the intersected portion of both corpora came from the Brown corpus, they adopted different tokenization conventions, and segmentation into sentences differed sometimes. 2. The latest version of Semcor makes use of the senses from WordNet 1.6, whereas the senses used in the DSO corpus were from WordNet 1.5 (actually, the WordNet senses used in the DSO corpus were from a slight variant of the official WordNet 1.5 release; this was brought to our attention after the public release of the DSO corpus). To match the sentences, we first converted the senses in the DSO corpus to those of WordNet 1.6. We ignored all sentences in the DSO corpus in which a word is tagged with sense 0 or -1 (a word is tagged with sense 0 or -1 if none of the given senses in WordNet applies). A sentence from Semcor is considered to match one from the DSO corpus if both sentences are exactly identical or if they differ only in the presence or absence of the characters ""."" (period) or ""-"" (hyphen). For each remaining Semcor sentence, taking into account word ordering, if 75% or more of the words in the sentence match those in a DSO corpus sentence, then a potential match is recorded. These potential matches are then manually verified to ensure that they are true matches and to weed out any false matches. Using this method of matching, a total of 13,188 sentence-pairs containing nouns and 17,127 sentence-pairs containing verbs are found to match from both corpora, yielding 30,315 sentences which form the intersected corpus used in our present study.
4 The Kappa Statistic Suppose there are N sentences in our corpus where each sentence contains the word w. Assume that w has M senses. Let A be the number of sentences which are assigned identical sense by two human annotators. Then a simple measure to quantify the agreement rate between two human annotators is Pa, where Pa = A/N. The drawback of this simple measure is that it does not take into account chance agreement between two annotators. The Kappa statistic kappa (Cohen, 1960) is a better measure of inter-annotator agreement which takes into account the effect of chance agreement. It has been used recently within computational linguistics to measure inter-annotator agreement (Bruce and Wiebe, 1998; Carletta, 1996; Veronis, 1998). Let Cj be the sum of the number of sentences which have been assigned sense j by annotator 1 and the number of sentences which have been assigned sense j by annotator 2. Then kappa = (Pa - Pe)/(1 - Pe), where Pe = sum_{j=1}^{M} (Cj/2N)^2, and Pe measures the chance agreement between two annotators. A Kappa value of 0 indicates that the agreement is purely due to chance agreement, whereas a Kappa value of 1 indicates perfect agreement. A Kappa value of 0.8 and above is considered as indicating good agreement (Carletta, 1996). Table 1 summarizes the inter-annotator agreement on the intersected corpus. The first (second) row denotes agreement on the nouns (verbs), while the last row denotes agreement on all words combined. The average kappa reported in the table is a simple average of the individual kappa value of each word. The agreement rate on the 30,315 sentences as measured by Pa is 57%. This tallies with the figure reported in our earlier paper (Ng and Lee, 1996) where we performed a quick test on a subset of 5,317 sentences in the intersection of both the Semcor corpus and the DSO corpus. Table 1: Raw inter-annotator agreement. Nouns: 121 words, A = 7,676, N = 13,188, Pa = 0.582, Avg kappa = 0.300. Verbs: 70 words, A = 9,520, N = 17,127, Pa = 0.555, Avg kappa = 0.347. All: 191 words, A = 17,196, N = 30,315, Pa = 0.567, Avg kappa = 0.317. 5 Algorithm Since the
rater-annotator agreement on the intersected corpus is not high, we would like to find out how the agreement rate would be affected if different sense classes were in use In this section, we present a greedy search algorithm that can automatmalb derive coarser sense classes based on the sense tags assigned by two human annotators The resulting derived coarse sense classes achmve a higher agreement rate but we still maintain as many of the original sense classes as possible The algorithm is given m Figure 1 The algorithm operates on a set of sentences where each sentence contains an occurrence of the word w whmh has been sense-tagged by two human annotators At each Iteration of the algorithm, tt finds the pair of sense classes Ct and Cj such that merging these two sense classes results in the highest t~ value for the resulting merged group of sense classes It then proceeds to merge Cz and C~ Thin process Is repeated until the ~ value reaches a satisfactory value ~,~t,~, which we set as 0 8 Note that this algorithm is also applicable to deriving any coarser set of classes from a refined set for any NLP tasks in which prior human agreement rate may not be high enough Such NLP tasks could be discourse tagging, speech-act categorization, etc 6 Results For each word w from the list of 121 nouns and 70 verbs, ~e applied the greedy search algorithm to each set of sentences in the intersected corpus contaming w For a subset of 95 words (53 nouns and 42 verbs), the algorithm was able to derive a coarser set of 2 or more senses for each of these 95 words such that the resulting Kappa ~alue reaches 0 8 or higher For the other 96 words, m order for the Kappa value to reach 0 8 or higher, the algorithm collapses all senses of the ~ord to a single (trivial) class Table 2 and 3 summarizes the results for the set of 53 nouns and 42 ~erbs, respectively Table 2 md~cates that before the collapse of sense classes, these 53 nouns have an average of 7 6 senses per noun There is a total 
of 5,339 sentences in the intersected corpus containing these nouns, of which 3,387 sentences were assigned the same sense by the two groups of human annotators The average Kappa statistic (computed as a simple average of the Kappa statistic of ~he mdlwdual nouns) is 0 463 After the collapse of sense classes by the greedy search algorithm, the average number of senses per noun for these 53 nouns drops to 40 Howe~er, the number of sentences which have been asmgned the same coarse sense by the annotators increases to 5,033 That is, about 94 3% of the sentences have been assigned the same coarse sense, and that the average Kappa statistic has improved to 0 862, mgmfymg high rater-annotator agreement on the derived coarse senses Table3 gl~es the analogous figures for the 42 verbs, agmn mdmatmg that high agreement is achieved on the coarse sense classes den~ed for verbs 7 Discussion Our findings on rater-annotator agreement for word sense tagging indicate that for average language users, it is quite dl~cult to achieve high agreement when they are asked to assign refned sense tags (such as those found in WORDNET) given only the scanty definition entries m the WORDNET dlctionary and a few or no example sentences for the usage of each word sense Thin observation agrees wlth that obtmned m a recent study done by (Veroms, 1998), where the agreement on sense-tagging by naive users was also not hlgh Thus It appears that an average language user is able to process language wlthout needing to perform the task of dlsamblguatmg word sense to a very fine-grained resolutmn as formulated m a tradltlonal dmtlonary In contrast, expert lexicographers tagged the ~ ord sense in the sentences used m the SENSEVAL exerclse, where high rater-annotator agreement was reported There are also fuller dlctlonary entries m the HECTOR dlctlonary used and more e ~* then ~"" +~(C~,,C~_t), z* +~, ~* +end for merge the sense class C,." W99-0606,J93-2004,o,Ralph Weischedel et al. 1993. 
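The kappa computation and the greedy sense-class merging procedure quoted in the W99-0502 context above can be sketched as follows. This is a minimal illustration under an assumed data layout (a list of (annotator-1 sense, annotator-2 sense) pairs, one per sentence); the function names are ours, not the cited paper's code:

```python
from collections import Counter
from itertools import combinations

def kappa(pairs):
    """Kappa as in the context above: Pa = A/N observed agreement,
    Pe = sum_j (Cj / 2N)^2 chance agreement, kappa = (Pa-Pe)/(1-Pe)."""
    n = len(pairs)
    pa = sum(1 for a, b in pairs if a == b) / n
    # Cj: sense j's count summed over both annotators
    counts = Counter(a for a, b in pairs) + Counter(b for a, b in pairs)
    pe = sum((c / (2 * n)) ** 2 for c in counts.values())
    if pe == 1.0:  # single sense class left: agreement is trivially perfect
        return 1.0
    return (pa - pe) / (1 - pe)

def merge_greedily(pairs, kappa_min=0.8):
    """Repeatedly merge the pair of sense classes whose merger yields the
    highest kappa, until kappa reaches kappa_min (0.8 in the paper)."""
    # each sense starts in its own class, represented as a frozenset
    cls = {s: frozenset([s]) for ab in pairs for s in ab}
    relabel = lambda: [(cls[a], cls[b]) for a, b in pairs]
    while kappa(relabel()) < kappa_min and len(set(cls.values())) > 1:
        best_k, best = float("-inf"), None
        for ci, cj in combinations(set(cls.values()), 2):
            merged = {s: (ci | cj if c in (ci, cj) else c)
                      for s, c in cls.items()}
            k = kappa([(merged[a], merged[b]) for a, b in pairs])
            if k > best_k:
                best_k, best = k, (ci, cj)
        ci, cj = best
        cls = {s: (ci | cj if c in (ci, cj) else c) for s, c in cls.items()}
    return cls, kappa(relabel())
```

For example, if the two annotators systematically confuse senses 1 and 2 of a word but agree on sense 3, the first merge collapses {1, 2} into one coarse class and kappa rises above the 0.8 threshold, exactly the behavior the context describes.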
W99-0606,J93-2004,o,"3 Tagging 3.1 Corpus To facilitate comparison with previous results, we used the UPenn Treebank corpus (Marcus et al. , 1993)." W99-0611,J93-2004,o,"(1998) present a probabilistic model for pronoun resolution trained on a small subset of the Penn Treebank Wall Street Journal corpus (Marcus et al. , 1993)." W99-0621,J93-2004,o,"These data sets were based on the Wall Street Journal corpus in the Penn Treebank (Marcus et al. , 1993)." W99-0622,J93-2004,o,"As an example, consider the flat NP structures that are in the Penn Treebank (Marcus et al. , 1993)." W99-0623,J93-2004,o,"These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al. , 1993)." W99-0628,J93-2004,o,"Some works \[Woods et al, 1972\], \[Boguraev, 1979\], \[Marcus et al. 1993\] suggested several strategies that based their decision-making on the relationships existing between predicates and arguments, what \[Katz and Fodor, 1963\] called selectional restrictions." W99-0628,J93-2004,o,"A very important [flattened results table: Author / Best -- Hindle and Rooth (1993): 80.0 %; Resnik and Hearst (1993): 83.9 % WN; Resnik and Hearst (1993): 75.0 %] Ratnaparkhi et al." W99-0628,J93-2004,o,In this data set the 4-tuples of the test and training sets were extracted from Penn Treebank Wall Street Journal \[Marcus et al. 1993\]. W99-0629,J93-2004,o,"The data for all our experiments was extracted from the Penn Treebank II Wall Street Journal (WSJ) corpus (Marcus et al. , 1993)." W99-0701,J93-2004,o,"Experiments We have conducted a series of lexical acquisition experiments with the above algorithm on large-scale English corpora, e.g., the Brown corpus \[Francis and Kucera 1982\] and the PTB WSJ corpus \[Marcus et al. 1993\]."
W99-0704,J93-2004,o,"The WSJNPVP set consists of part-of-speech tagged Wall Street Journal material (Marcus, Santorini & Marcinkiewicz, 1993), supplemented with syntactic tags indicating noun phrase and verb phrase boundaries (Daelemans et al, 1999iii)." W99-0706,J93-2004,o,"The figures given above were the original (1998) results for the system in \[Argamon et al. , 1998\], which came from training and testing on data derived from the Penn Treebank corpus \[Marcus et al. , 1993\] in which the added null elements (like null subjects) were left in." W99-0706,J93-2004,p,"Our training and test corpora, for instance, are less-than-gargantuan compared to such collections as the Penn Treebank \[Marcus et al. , 1993\]." W99-0706,J93-2004,o,"Many systems (e.g. , the KERNEL system \[Palmer et al. , 1993\]) use these relationships as an intermediate form when determining the semantics of syntactically parsed text." W99-0707,J93-2004,o,"The approach is evaluated by cross-validation on the WSJ treebank corpus \[Marcus et al. , 1993\]." X98-1014,J93-2004,o,"Training Data Our source for syntactically annotated training data was the Penn Treebank (Marcus et al. , 1993)." A00-1019,J96-1002,o,"Techniques for weakening the independence assumptions made by the IBM models 1 and 2 have been proposed in recent work (Brown et al. , 1993; Berger et al. , 1996; Och and Weber, 98; Wang and Waibel, 98; Wu and Wong, 98)." A00-2026,J96-1002,o,"Our approach differs from the corpus-based surface generation approaches of (Langkilde and Knight, 1998) and (Berger et al. , 1996)." A00-2026,J96-1002,o,"There are more sophisticated surface generation packages, such as FUF/SURGE (Elhadad and Robin, 1996), KPML (Bateman, 1996), MUMBLE (Meteer et al. , 1987), and RealPro (Lavoie and Rambow, 1997), which produce natural language text from an abstract semantic representation."
A00-2026,J96-1002,o,"The only trainable approaches (known to the author) to surface generation are the purely statistical machine translation (MT) systems such as (Berger et al. , 1996) and the corpus-based generation system described in (Langkilde and Knight, 1998)." A00-2026,J96-1002,o,"The MT systems of (Berger et al. , 1996) learn to generate text in the target language straight from the source language, without the aid of an explicit semantic representation." A00-2026,J96-1002,o,"The form of the maximum entropy probability model is identical to the one used in (Berger et al. , 1996; Ratnaparkhi, 1998): p(wi|wi-1, wi-2, attri) = prod_{j=1}^{k} alpha_j^{f_j(wi, wi-1, wi-2, attri)} / Z(wi-1, wi-2, attri), where wi ranges over V union {*stop*}." A00-2026,J96-1002,o,"The features used in NLG2 are described in the next section, and the feature weights alpha_j, obtained from the Improved Iterative Scaling algorithm (Berger et al. , 1996), are set to maximize the likelihood of the training data." A00-2031,J96-1002,o,"This is concordant with the usage in the maximum entropy literature (Berger et al. , 1996)." A97-1056,J96-1002,o,"However, the Naive Bayes classifier has been found to perform well for word-sense disambiguation both here and in a variety of other works (e.g. , (Bruce and Wiebe, 1994a), (Gale et al. , 1992), (Leacock et al. , 1993), and (Mooney, 1996))." A97-1056,J96-1002,o,"Maximum Entropy models have been used to express the interactions among multiple feature variables (e.g. , (Berger et al. , 1996)), but within this framework no systematic study of interactions has been proposed." A97-1056,J96-1002,o,"(Pedersen et al. , 1996) and (Zipf, 1935))." A97-1056,J96-1002,o,"Because their joint distributions have such closed-form expressions, the parameters can be estimated directly from the training data without the need for an iterative fitting procedure (as is required, for example, to estimate the parameters of maximum entropy models; (Berger et al. , 1996))."
A97-1056,J96-1002,o,"The significance of G2 based on the exact conditional distribution does not rely on an asymptotic approximation and is accurate for sparse and skewed data samples (Pedersen et al. , 1996) 4.2 Information criteria The family of model evaluation criteria known as information criteria has the following expression: IC_lambda = G2 - lambda x dof (3) where G2 and dof are defined above." A97-1056,J96-1002,o,"5 Experimental Data The sense-tagged text and feature set used in these experiments are the same as in (Bruce et al. , 1996)." C00-1060,J96-1002,o,"We report that our parsing framework achieved high accuracy (88.6%) in dependency analysis of Japanese with a combination of an underspecified HPSG-based Japanese grammar, SLUNG (Mitsuishi et al. , 1998) and the maximum entropy method (Berger et al. , 1996)." C00-1060,J96-1002,o,"2.2 Statistical Approaches with a grammar There have been many proposals for statistical frameworks particularly designed for parsers with hand-crafted grammars (Schabes, 1992; Briscoe and Carroll, 1993; Abney, 1996; Inui et al. , 1997)." C00-1061,J96-1002,o,"1, 2 show the examples of various transliterations in KTSET 2.0 (Park et al. , 1996)." C00-1064,J96-1002,o,"4 Maximum Entropy To explain our method, we briefly describe the concept of maximum entropy. Recently, many approaches based on the maximum entropy model have been applied to natural language processing (Berger et al. , 1994; Berger et al. , 1996; Pietra et al. , 1997)." C00-1064,J96-1002,o,"We referred to the studies of (Berger et al. , 1996; Pietra et al. , 1997)." C00-1064,J96-1002,o,Wu (1996) adopted channels that eliminate syntactically unlikely alignments and Wang et al. C00-1064,J96-1002,o,"Thus, a lot of alignment techniques have been suggested at the sentence (Gale et al. , 1993), phrase (Shin et al. , 1996), noun phrase (Kupiec, 1993), word (Brown et al. , 1993; Berger et al. , 1996; Melamed, 1997), collocation (Smadja et al.
, 1996) and terminology level." C00-1082,J96-1002,o,"3.2 Maximum-entropy method The maximum-entropy method is useful with sparse data conditions and has been used by many researchers (Berger et al. , 1996; Ratnaparkhi, 1996; Ratnaparkhi, 1997; Borthwick et al. , 1998; Uchimoto et al. , 1999)." C00-2124,J96-1002,o,"For every class the weights of the active features are combined and the best scoring class is chosen (Berger et al. , 1996)." C00-2126,J96-1002,o,"This allows us to compute the conditional probability as follows (Berger et al. , 1996): P(f|h) = prod_i alpha_i^{g_i(h,f)} / Z(h) (2)." C02-1064,J96-1002,o,"We implemented these models within a maximum entropy framework (Berger et al. , 1996; Ristad, 1997; Ristad, 1998)." C02-1143,J96-1002,o,"We used a maximum-matching algorithm and a dictionary compiled from the CTB (Sproat et al. , 1996; Xue, 2001) to do segmentation, and trained a maximum entropy part-of-speech tagger (Ratnaparkhi, 1998) and TAG-based parser (Bikel and Chiang, 2000) on the CTB to do tagging and parsing. Then the same feature extraction and model-training was done for the PDN corpus as for the CTB." C02-1143,J96-1002,o,"Under the maximum entropy framework (Berger et al. , 1996), evidence from different features can be combined with no assumptions of feature independence." C02-2019,J96-1002,o,"One is to find unknown words from corpora and put them into a dictionary (e.g. , (Mori and Nagao, 1996)), and the other is to estimate a model that can identify unknown words correctly (e.g. , (Kashioka et al. , 1997; Nagata, 1999))." C02-2019,J96-1002,o,"(1) Here has(h,x) is a binary function that returns true if the history h has feature x. In our experiments, we focused on such information as whether or not a string is found in a dictionary, the length of the string, what types of characters are used in the string, and what part-of-speech the adjacent morpheme is. Given a set of features and some training data, the M.E.
estimation process produces a model, which is represented as follows (Berger et al. , 1996; Ristad, 1997; Ristad, 1998): P(f|h) = prod_i alpha_i^{g_i(h,f)} / Z(h) (2), where Z(h) = sum_f prod_i alpha_i^{g_i(h,f)}." C04-1017,J96-1002,o,"In previous research on splitting sentences, many methods have been based on word-sequence characteristics like N-gram (Lavie et al. , 1996; Berger et al. , 1996; Nakajima and Yamamoto, 2001; Gupta et al. , 2002)." C04-1067,J96-1002,o,"The candidates of unknown words can be generated by heuristic rules (Matsumoto et al. , 2001) or statistical word models which predict the probabilities for any strings to be unknown words (Sproat et al. , 1996; Nagata, 1999)." C04-1067,J96-1002,o,"In the above equation, P(ti) and P(wi|ti) are estimated by the maximum-likelihood method, and the probability of a POC tag ti, given a character wi (P(ti|wi; ti in TPOC)) is estimated using ME models (Berger et al. , 1996)." C04-1112,J96-1002,o,"The statistical classifier used in the experiments reported in this paper is a maximum entropy classifier (Berger et al. , 1996; Ratnaparkhi, 1997b)." C04-1112,J96-1002,p,"Furthermore, good results have been produced in other areas of NLP research using maximum entropy techniques (Berger et al. , 1996; Koeling, 2001; Ratnaparkhi, 1997a)." C04-1179,J96-1002,o,"3 Maximum Entropy ME models implement the intuition that the best model is the one that is consistent with the set of constraints imposed by the evidence, but otherwise is as uniform as possible (Berger et al. 1996)." C04-1204,J96-1002,o,"Following recent research about disambiguation models on linguistic grammars (Abney, 1997; Johnson et al. , 1999; Riezler et al. , 2002; Clark and Curran, 2003; Miyao et al. , 2003; Malouf and van Noord, 2004), we apply a log-linear model or maximum entropy model (Berger et al. , 1996) on HPSG derivations."
C08-1041,J96-1002,p,"The maximum entropy approach (Berger et al., 1996) is known to be well suited to solve the classification problem." C08-1079,J96-1002,o,"3 Implementation 3.1 Pronoun resolution model We built a machine learning based pronoun resolution engine using a Maximum Entropy ranker model (Berger et al., 1996), similar with Denis and Baldridge's model (Denis and Baldridge, 2007)." C08-1083,J96-1002,o,"Preparing an aligned abbreviation corpus, we obtain the optimal combination of the features by using the maximum entropy framework (Berger et al., 1996)." C08-1083,J96-1002,o,"We directly model the conditional probability of the alignment a, given x and y, using the maximum entropy framework (Berger et al., 1996), P(a|x,y) = exp{F(a,x,y)} / sum_{a' in C(x,y)} exp{F(a',x,y)}." C08-1142,J96-1002,o,"We utilize maximum entropy (MaxEnt) model (Berger et al., 1996) to design the basic classifier used in active learning for WSD and TC tasks." C08-1143,J96-1002,o,"6.2 Experimental Settings We utilize a maximum entropy (ME) model (Berger et al., 1996) to design the basic classifier for WSD and TC tasks." C08-2016,J96-1002,o,"When we have a junction tree for each document, we can efficiently perform belief propagation in order to compute argmax in Equation (1), or the marginal probabilities of cliques and labels, necessary for the parameter estimation of machine learning classifiers, including perceptrons (Collins, 2002), and maximum entropy models (Berger et al., 1996)." C08-2016,J96-1002,o,"In the following experiments, we run two machine learning classifiers: Bayes Point Machines (BPM) (Herbrich et al., 2001), and the maximum entropy model (ME) (Berger et al., 1996)." D07-1019,J96-1002,o,"One is how to learn a statistical model to estimate the conditional probability , and the other is how to generate confusion set C of a given query q. 4.1 Maximum Entropy Model for Query Spelling Correction We take a feature-based approach to model the posterior probability .
Specifically we use the maximum entropy model (Berger et al. , 1996) for this task: P(c|q) = exp(sum_{i=1}^{N} lambda_i f_i(q,c)) / sum_{c'} exp(sum_{i=1}^{N} lambda_i f_i(q,c')) (2) where the denominator is the normalization factor; f_i(q,c) is a feature function defined over query q and correction candidate c, while lambda_i is the corresponding feature weight." D07-1051,J96-1002,o,"optimization approaches which aim at selecting those examples that optimize some (algorithm-dependent) objective function, such as prediction variance (Cohn et al. , 1996), and heuristic methods with uncertainty sampling (Lewis and Catlett, 1994) and query-by-committee (QBC) (Seung et al. , 1992) just to name the most prominent ones." D07-1051,J96-1002,o,"AL has already been applied to several NLP tasks, such as document classification (Schohn and Cohn, 2000), POS tagging (Engelson and Dagan, 1996), chunking (Ngai and Yarowsky, 2000), statistical parsing (Thompson et al. , 1999; Hwa, 2000), and information extraction (Lewis and Catlett, 1994; Thompson et al. , 1999)." D07-1051,J96-1002,o,"4.2 Classifier and Features For our AL framework we decided to employ a Maximum Entropy (ME) classifier (Berger et al. , 1996)." D07-1077,J96-1002,o,"2 Related Work A number of researchers (Brown et al. , 1992; Berger et al. , 1996; Niessen and Ney, 2004; Xia and McCord, 2004; Collins et al. , 2005) have described approaches that preprocess the source language input in SMT systems." D07-1082,J96-1002,o,"We utilize a maximum entropy (ME) model (Berger et al. , 1996) to design the basic classifier used in active learning for WSD." D07-1111,J96-1002,o,"The first LR model for each language uses maximum entropy classification (Berger et al. , 1996) to determine possible parser actions and their probabilities." D08-1047,J96-1002,o,"(1) Here, the candidate generator gen(s) enumerates candidates of destination (correct) strings, and the scorer P(t|s) denotes the conditional probability of the string t for the given s.
The scorer was modeled by a noisy-channel model (Shannon, 1948; Brill and Moore, 2000; Ahmad and Kondrak, 2005) and maximum entropy framework (Berger et al., 1996; Li et al., 2006; Chen et al., 2007)." D08-1063,J96-1002,p,"The classification is performed with a statistical approach, built around the maximum entropy (MaxEnt) principle (Berger et al., 1996), that has the advantage of combining arbitrary types of information in making a classification decision." D08-1063,J96-1002,o,"The {lambda_j}_{j=1}^{m} weights are estimated during the training phase to maximize the likelihood of the data (Berger et al., 1996)." D08-1097,J96-1002,p,"2.2 Maximum Entropy Models Maximum entropy (ME) models (Berger et al., 1996; Manning and Klein, 2003), also known as log-linear and exponential learning models, provide a general purpose machine learning technique for classification and prediction which has been successfully applied to natural language processing including part of speech tagging, named entity recognition etc. Maximum entropy models can integrate features from many heterogeneous information sources for classification." D09-1003,J96-1002,p,"To estimate the parameters of the MEMM+pred model we turn to the successful Maximum Entropy (Berger et al., 1996) parameter estimation method." D09-1057,J96-1002,p,"2 Maximum Entropy Models Maximum entropy (ME) models (Berger et al., 1996; Manning and Klein, 2003), also known as log-linear and exponential learning models, provide a general purpose machine learning technique for classification and prediction which has been successfully applied to natural language processing including part of speech tagging, named entity recognition etc. Maximum entropy models can integrate features from many heterogeneous information sources for classification." D09-1069,J96-1002,o,"(11)-(13) (Berger et al., 1996)."
D09-1069,J96-1002,o,"(14), where lambda_i is the parameter to be estimated and f_i(a,b) is a feature function corresponding to lambda_i (Berger et al., 1996; Ratnaparkhi, 1997): P(E_P|E_G) ≈ prod_i P(ep_i | ep_{i-k..i-1}, eg_{i-k..i+k}) (11); P(C_P|E_G,E_P) ≈ prod_i P(cp_i | cp_{i-k..i-1}, eg, ep_{i-k..i+k}) (12); P(C_G|E_G,E_P,C_P) ≈ prod_i P(cg_i | cg_{i-k..i-1}, eg, ep, cp_{i-k..i+k}) (13); P(b|a) = exp(sum_i lambda_i f_i(a,b)) / sum_{b'} exp(sum_i lambda_i f_i(a,b')) (14). f_i(a,b) is a binary function returning TRUE or FALSE based on context a and output b. If f_i(a,b)=1, its corresponding model parameter lambda_i contributes toward conditional probability P(b|a) (Berger et al., 1996; Ratnaparkhi, 1997)." D09-1123,J96-1002,o,"DTM2, introduced in (Ittycheriah and Roukos, 2007), expresses the phrase-based translation task in a unified log-linear probabilistic framework consisting of three components: (i) a prior conditional distribution P0(.|S), (ii) a number of feature functions phi_i() that capture the translation and language model effects, and (iii) the weights of the features lambda_i that are estimated under MaxEnt (Berger et al., 1996), as in (1): P(T|S) = (P0(T,J|S)/Z) exp(sum_i lambda_i phi_i(T,J,S)) (1) Here J is the skip reordering factor for the phrase pair captured by phi_i() and represents the jump from the previous source word, and Z is the per source sentence normalization term." D09-1128,J96-1002,o,"3.4 Learning algorithm Maximum entropy (ME) models (Berger et al., 1996; Manning and Klein, 2003), also known as log-linear and exponential learning models, have been adopted in the SC classification task."
D09-1154,J96-1002,o,"As described in Section 4, we define the problem of term variation identification as a binary classification task, and build two types of classifiers according to the maximum entropy model (Berger et al., 1996) and the MART algorithm (Friedman, 2001), where all term similarity metrics are incorporated as features and are jointly optimized." D09-1160,J96-1002,o,"l1-regularized log-linear models (l1-LLMs), on the other hand, provide sparse solutions, in which weights of irrelevant features are exactly zero, by assuming a Laplacian prior on the weights (Tibshirani, 1996; Kazama and Tsujii, 2003; Goodman, 2004; Gao et al., 2007)." D09-1160,J96-1002,o,"2.1 Log-Linear Models The log-linear model (LLM), also known as the maximum-entropy model (Berger et al., 1996), is a linear classifier widely used in the NLP literature." E06-2002,J96-1002,o,"By introducing the hidden word alignment variable a, the following approximate optimization criterion can be applied for that purpose: e* = argmax_e Pr(e|f) = argmax_e sum_a Pr(e,a|f) ≈ argmax_{e,a} Pr(e,a|f). Exploiting the maximum entropy (Berger et al. , 1996) framework, the conditional distribution Pr(e,a|f) can be determined through suitable real valued functions (called features) h_r(e,f,a), r = 1..R, and takes the parametric form: p(e,a|f) ∝ exp{sum_{r=1}^{R} lambda_r h_r(e,f,a)}. The ITC-irst system (Chen et al. , 2005) is based on a log-linear model which extends the original IBM Model 4 (Brown et al. , 1993) to phrases (Koehn et al. , 2003; Federico and Bertoldi, 2005)." E06-2002,J96-1002,o,"Hence, either the best translation hypothesis is directly extracted from the word graph and output, or an N-best list of translations is computed (Tran et al. , 1996)." E06-2015,J96-1002,o,"2.2 Learning Algorithm For learning coreference decisions, we used a Maximum Entropy (Berger et al. , 1996) model."
E09-1012,J96-1002,p,"In order to estimate the conditional distributions shown in Table 1, we use the general technique of choosing the MaxEnt distribution that properly estimates the average of each feature over the training data (Berger et al., 1996)." E09-1012,J96-1002,o,"These feature vectors and the associated parser actions are used to train maximum entropy models (Berger et al., 1996)." E09-1022,J96-1002,o,"We chose to train maximum entropy models (Berger et al., 1996)." E09-3005,J96-1002,o,"So far, most previous work on domain adaptation for parsing has focused on data-driven systems (Gildea, 2001; Roark and Bacchiani, 2003; McClosky et al., 2006; Shimizu and Nakagawa, 2007), i.e. systems employing (constituent or dependency based) treebank grammars (Charniak, 1996)." E09-3005,J96-1002,o,"The Maximum Entropy model (Berger et al., 1996; Ratnaparkhi, 1997; Abney, 1997) is a conditional model that assigns a probability to every possible parse pi for a given sentence s. The model consists of a set of m feature functions f_j(pi) that describe properties of parses, together with their associated weights theta_j. The denominator is a normalization term where Y(s) is the set of parses with yield s: p(pi|s; theta) = exp(sum_{j=1}^{m} theta_j f_j(pi)) / sum_{y in Y(s)} exp(sum_{j=1}^{m} theta_j f_j(y)) (1) The parameters (weights) theta_j can be estimated efficiently by maximizing the regularized conditional likelihood of a training corpus (Johnson et al., 1999; van Noord and Malouf, 2005): theta* = argmax_theta (log L(theta) - sum_{j=1}^{m} theta_j^2 / (2 sigma^2)) (2) where L(theta) is the likelihood of the training data." E99-1026,J96-1002,o,"Other methods that have been proposed are one based on using the gain (Berger et al. , 1996) and an approximate method for selecting informative features (Shirai et al. , 1998a), and several criteria for feature selection were proposed and compared with other criteria (Berger and Printz, 1998)." E99-1026,J96-1002,o,"This allows us to compute the conditional probability as follows (Berger et al.
, 1996): P(f|h) = prod_i alpha_i^{g_i(h,f)} / Z(h) (2), where Z(h) = sum_f prod_i alpha_i^{g_i(h,f)} (3). The maximum entropy estimation technique guarantees that for every feature gi, the expected value of gi according to the M.E. model will equal the empirical expectation of gi in the training corpus." H05-1012,J96-1002,o,"These IBM models and more recent refinements (Moore, 2004) as well as algorithms that bootstrap from these models like the HMM algorithm described in (Vogel et al. , 1996) are unsupervised algorithms." H05-1012,J96-1002,o,"(Berger et al. , 1996)). We are overloading the word state to mean Arabic word position." H05-1022,J96-1002,o,"We use a simple, single-parameter distribution, with eta = 8.0 throughout: P(K|m,e) = P(K|m,l) ∝ eta^K. Word-to-Phrase Alignment Alignment is a Markov process that specifies the lengths of phrases and their alignment with source words: P(a_1^K, h_1^K, phi_1^K | K, m, e) = prod_{k=1}^{K} P(a_k, h_k, phi_k | a_{k-1}, phi_{k-1}, e) = prod_{k=1}^{K} p(a_k | a_{k-1}, h_k; l) d(h_k) n(phi_k; e_{a_k}). The actual word-to-phrase alignment (a_k) is a first-order Markov process, as in HMM-based word-to-word alignment (Vogel et al. , 1996)." H05-1022,J96-1002,o,"The bigram translation probability t2(f|f',e) specifies the likelihood that target word f is to follow f' in a phrase generated by source word e. 2.1 Properties of the Model and Prior Work The formulation of the WtoP alignment model was motivated by both the HMM word alignment model (Vogel et al. , 1996) and IBM Model-4 with the goal of building on the strengths of each." H05-1022,J96-1002,o,"In fact, the WtoP model is a segmental Hidden Markov Model (Ostendorf et al. , 1996), in which states emit observation sequences." H05-1022,J96-1002,p,"The bigram translation probability relies on word context, known to be helpful in translation (Berger et al. , 1996), to improve the identification of target phrases." H05-1059,J96-1002,o,"A common choice for the local probabilistic classifier is maximum entropy classifiers (Berger et al. , 1996)."
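Several of the contexts above (C00-2126, C02-2019, E99-1026, I05-2046) quote the same multiplicative form of the Berger et al. (1996) model, P(f|h) = prod_i alpha_i^{g_i(h,f)} / Z(h), with binary features g_i. A minimal sketch of evaluating that form; the feature set, weights, and class labels below are made-up illustrations, not from any cited system:

```python
def maxent_prob(h, f, classes, features, alpha):
    """P(f|h) = prod_i alpha_i^{g_i(h,f)} / Z(h): multiply in the weight
    alpha_i of every active binary feature, then normalize by Z(h),
    the same product summed over all candidate classes f'."""
    def score(f_prime):
        p = 1.0
        for g, a in zip(features, alpha):
            if g(h, f_prime):   # g_i(h, f') = 1, so multiply by alpha_i
                p *= a
        return p
    z = sum(score(fp) for fp in classes)  # normalizer Z(h)
    return score(f) / z
```

With two hypothetical features firing on different classes (weights 3.0 and 2.0), a context that triggers only the first feature gives that class probability 3/(3+1) = 0.75, exactly "multiplying the weights of active features" as the I05-2046 context puts it.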
H05-1059,J96-1002,o,"3 Maximum Entropy Classifier For local classifiers, we used a maximum entropy model which is a common choice for incorporating various types of features for classification problems in natural language processing (Berger et al., 1996)." H05-1083,J96-1002,o,"2 Statistical Coreference Resolution Model Our coreference system uses a binary entity-mention model P_L(·|e, m) (henceforth link model) to score the action of linking a mention m to an entity e. In our implementation, the link model is computed as P_L(L = 1|e, m) ≈ max_{m′∈e} P_L(L = 1|e, m′, m), (1) where m′ is one mention in entity e, and the basic model building block P_L(L = 1|e, m′, m) is an exponential or maximum entropy model (Berger et al., 1996): P_L(L|e, m′, m) = exp{Σ_i λ_i g_i(e, m′, m, L)} / Z(e, m′, m), (2) where Z(e, m′, m) is a normalizing factor to ensure that P_L(·|e, m′, m) is a probability, {g_i(e, m′, m, L)} are features and {λ_i} are feature weights." I05-2046,J96-1002,o,"Given a set of features and a training corpus, the ME estimation process produces a model in which every feature f_i has a weight α_i. From (Berger et al., 1996), we can compute the conditional probability as: p(o|h) = (1/Z(h)) Π_i α_i^{f_i(h,o)} (2) Z(h) = Σ_o Π_i α_i^{f_i(h,o)} (3) The probability is given by multiplying the weights of active features (i.e., those f_i(h,o) = 1)." I05-2046,J96-1002,o,"The MBT POS tagger (Daelemans et al., 1996) is used to provide POS information." I05-3031,J96-1002,o,"As the task is an important precursor to many natural language processing systems, it receives a lot of attention in the literature for the past decade (Wu and Tseng, 1993; Sproat et al., 1996)."
I08-1008,J96-1002,o,"3 MaxEnt Model and Features 3.1 MaxEnt Model for NOR The principle of maximum entropy (MaxEnt) model is that given a collection of facts, choose a model consistent with all the facts, but otherwise as uniform as possible (Berger et al., 1996)." I08-1048,J96-1002,o,"We utilize a maximum entropy (ME) model (Berger et al., 1996) to design the basic classifier used in active learning for WSD." I08-1060,J96-1002,o,"There are other types of variations for phrases; for example, insertion, deletion or substitution of words, and permutation of words such as view point and point of view are such variations (Daille et al., 1996)." I08-1060,J96-1002,o,"Then, we build a classifier learned by training data, using a maximum entropy model (Berger et al., 1996) and the features related to spelling variations in Table 3." I08-2122,J96-1002,o,"Uses Maximum Entropy (Berger et al., 1996) classification, trained on JNLPBA (Kim et al., 2004) (NER)." J00-3003,J96-1002,o,"(1996), Warnke et al." J00-3003,J96-1002,o,"The idea caught on very quickly: Suhm and Waibel (1994), Mast et al. (1996), Warnke et al." J00-3003,J96-1002,o,"Computational approaches to prosodic modeling of DAs have aimed to automatically extract various prosodic parameters--such as duration, pitch, and energy patterns--from the speech signal (Yoshimura et al. [1996]; Taylor et al. [1997]; Kompe [1997], among others)." J00-3003,J96-1002,o,"Suhm and Waibel (1994) and Eckert, Gallwitz, and Niemann (1996) each condition a recognizer LM on left-to-right DA predictions and are able to show reductions in word error rate of 1% on task-oriented corpora." J00-3003,J96-1002,o,Automatic segmentation of spontaneous speech is an open research problem in its own right (Mast et al. 1996; Stolcke and Shriberg 1996). J04-4002,J96-1002,o,"Here, we use the hidden Markov model (HMM) alignment model (Vogel, Ney, and Tillmann 1996) and Model 4 of Brown et al."
J05-1003,J96-1002,o,"Feature selection methods have been proposed in the maximum-entropy literature by several authors (Ratnaparkhi, Roukos, and Ward 1994; Berger, Della Pietra, and Della Pietra 1996; Della Pietra, Della Pietra, and Lafferty 1997; Papineni, Roukos, and Ward 1997, 1998; McCallum 2003; Zhou et al. 2003; Riezler and Vasserman 2004)." J05-1003,J96-1002,o,"6.4 Feature Selection Methods A number of previous papers (Berger, Della Pietra, and Della Pietra 1996; Ratnaparkhi 1998; Della Pietra, Della Pietra, and Lafferty 1997; McCallum 2003; Zhou et al. 2003; Riezler and Vasserman 2004) describe feature selection approaches for log-linear models applied to NLP problems." J05-1003,J96-1002,n,"More recent work (McCallum 2003; Zhou et al. 2003; Riezler and Vasserman 2004) has considered methods for speeding up the feature selection methods described in Berger, Della Pietra, and Della Pietra (1996), Ratnaparkhi (1998), and Della Pietra, Della Pietra, and Lafferty (1997)." J99-1004,J96-1002,o,"The theory has been applied in probabilistic language modeling (Mark, Miller, and Grenander 1996; Mark et al. 1996; Johnson 1998), natural language processing (Berger, Della Pietra, and Della Pietra 1996; Della Pietra, Della Pietra, and Lafferty 1997), as well as computational vision (Zhu, Wu, and Mumford 1997)." J99-1004,J96-1002,o,"Among the most widely studied is the Gibbs distribution (Mark, Miller, and Grenander 1996; Mark et al. 1996; Mark 1997; Abney 1997)." N03-1004,J96-1002,o,"These distributions are modeled using a maximum entropy formulation (Berger et al., 1996), using training data which consists of human judgments of question answer pairs." N03-1028,J96-1002,o,"The sequential classification approach can handle many correlated features, as demonstrated in work on maximum-entropy (McCallum et al., 2000; Ratnaparkhi, 1996) and a variety of other linear classifiers, including winnow (Punyakanok and Roth, 2001), AdaBoost (Abney et al.
, 1999), and support-vector machines (Kudo and Matsumoto, 2001)." N03-1028,J96-1002,o,"(2001) used iterative scaling algorithms for CRF training, following earlier work on maximum-entropy models for natural language (Berger et al., 1996; Della Pietra et al., 1997)." N03-2008,J96-1002,o,"(Berger et al., 1996)." N04-1001,J96-1002,o,"For mention detection we use approaches based on Maximum Entropy (MaxEnt henceforth) (Berger et al., 1996) and Robust Risk Minimization (RRM henceforth) 1For a description of the ACE program see http://www.nist.gov/speech/tests/ace/." N04-1001,J96-1002,o,"Algorithm 1: The RRM Decoding Algorithm. Somewhat similarly, the MaxEnt algorithm has an associated set of weights, which are estimated during the training phase so as to maximize the likelihood of the data (Berger et al., 1996)." N04-1037,J96-1002,o,"Maximum Entropy Modeling As previously indicated, the weight-based scheme of L&L suggests MaxEnt modeling (Berger et al., 1996) as a particularly natural choice for a machine learning approach." N04-1039,J96-1002,o,"1 Introduction Conditional Maximum Entropy (maxent) models have been widely used for a variety of tasks, including language modeling (Rosenfeld, 1994), part-of-speech tagging, prepositional phrase attachment, and parsing (Ratnaparkhi, 1998), word selection for machine translation (Berger et al., 1996), and finding sentence boundaries (Reynar and Ratnaparkhi, 1997)." N04-2003,J96-1002,o,"A major issue in MaxEnt training is how to select proper features and determine the feature targets (Berger et al., 1996; Jebara and Jaakkola, 2000)."
N04-2003,J96-1002,o,3 Feature selection Berger et al. (1996) proposed an iterative procedure of adding new features to the feature set driven by data. N06-1013,J96-1002,o,"Given a collection of facts, ME chooses a model consistent with all the facts, but otherwise as uniform as possible (Berger et al., 1996)." N06-1013,J96-1002,o,"Maximum entropy (ME) models have been used in bilingual sense disambiguation, word reordering, and sentence segmentation (Berger et al., 1996), parsing, POS tagging and PP attachment (Ratnaparkhi, 1998), machine translation (Och and Ney, 2002), and FrameNet classification (Fleischman et al., 2003)." N06-1025,J96-1002,o,"3.2 Learning Algorithm For learning coreference decisions, we used a Maximum Entropy (Berger et al., 1996) model." N06-1026,J96-1002,o,Maximum Entropy models implement the intuition that the best model is the one that is consistent with the set of constraints imposed by the evidence but otherwise is as uniform as possible (Berger et al. 1996). N06-2036,J96-1002,o,The algorithm employs the OpenNLP MaxEnt implementation of the maximum entropy classification algorithm (Berger et al. 1996) to develop word sense recognition signatures for each lemma which predicts the most likely sense for the lemma according to the context in which the lemma occurs. N07-1001,J96-1002,o,"The best prosodic label sequence is then L = argmax_L Π_{i=1}^n P(l_i|·) (6) To estimate the conditional distribution P(l_i|·) we use the general technique of choosing the maximum entropy (maxent) distribution that estimates the average of each feature over the training data (Berger et al., 1996)." N07-1001,J96-1002,o,"We report results on the Boston University (BU) Radio Speech Corpus (Ostendorf et al., 1995) and Boston Directions Corpus (BDC) (Hirschberg and Nakatani, 1996), two publicly available speech corpora with manual ToBI annotations intended for experiments in automatic prosody labeling."
N07-1009,J96-1002,n,"But without the global normalization, the maximum-likelihood criterion motivated by the maximum entropy principle (Berger et al., 1996) is no longer a feasible option as an optimization criterion." N07-1010,J96-1002,o,"3 Implementation 3.1 Feature Structure To implement the twin model, we adopt the log linear or maximum entropy (MaxEnt) model (Berger et al., 1996) for its flexibility of combining diverse sources of information." N07-1010,J96-1002,o,"Once the set of feature functions are selected, algorithms such as improved iterative scaling (Berger et al., 1996) or sequential conditional generalized iterative scaling (Goodman, 2002) can be used to find the optimal parameter values." N07-1030,J96-1002,o,"Model parameters are estimated using maximum entropy (Berger et al., 1996)." N07-1046,J96-1002,o,"With hand-labeled data, the parameters can be learnt via the generalized iterative scaling algorithm (GIS) (Darroch and Ratcliff, 1972) or improved iterative scaling (IIS) (Berger et al., 1996)." N07-1046,J96-1002,o,"This sequential property is well suited to HMMs (Vogel et al., 1996), in which the jumps from the current aligned position can only be forward." N07-2043,J96-1002,o,"To reduce the knowledge engineering burden on the user in constructing and porting an IE system, unsupervised learning has been utilized, e.g. Riloff (1996), Yangarber et al." N07-2043,J96-1002,o,"For the classifier, we used the OpenNLP MaxEnt implementation (maxent.sourceforge.net) of the maximum entropy classification algorithm (Berger et al. 1996)." N09-1013,J96-1002,o,"Since it is not feasible to maximise the likelihood of the observations directly, we maximise the expected log likelihood by considering the EM auxiliary function, in a similar manner to that used for modelling contextual variations of phones for ASR (Young et al., 1994; Singer and Ostendorf, 1996)."
N09-1022,J96-1002,o,"3.5 Maximum Entropy Model In order to build a unified probabilistic query alteration model, we used the maximum entropy approach of (Berger et al., 1996), which Li et al." N09-1046,J96-1002,p,"We decided to use the class of maximum entropy models, which are probabilistically sound, can make use of possibly many overlapping features, and can be trained efficiently (Berger et al., 1996)." N09-1065,J96-1002,o,"2.1 The Standard Machine Learning Approach We use maximum entropy (MaxEnt) classification (Berger et al., 1996) in conjunction with the 33 features described in Ng (2007) to acquire a model, PC, for determining the probability that two mentions, mi and mj, are coreferent." N09-1065,J96-1002,o,"Research in the first category aims to identify specific types of nonanaphoric phrases, with some identifying pleonastic it (using heuristics [e.g., Paice and Husk (1987), Lappin and Leass (1994), Kennedy and Boguraev (1996)], supervised approaches [e.g., Evans (2001), Muller (2006), Versley et al." N09-2014,J96-1002,o,"Our approach is to use maximum entropy models (Berger et al., 1996) to learn a suitable mapping from features derived from the words in the ASR output to semantic frames." N09-2026,J96-1002,o,"In (Teevan et al., 1996) it was observed that a significant percent of the queries made by a user in a search engine are associated to a repeated search." N09-3017,J96-1002,p,"2.3 Classifier Training We chose maximum entropy (Berger et al., 1996) as our primary classifier, since it had been successfully applied by the highest performing systems in both the SemEval-2007 preposition sense disambiguation task (Ye and Baldwin, 2007) and the general word sense disambiguation task (Tratz et al., 2007)."
N09-3017,J96-1002,p,"2.3 Classifier Training We chose maximum entropy (Berger et al., 1996) as our primary classifier because the highest performing systems in both the SemEval-2007 preposition sense disambiguation task (Ye and Baldwin, 2007) and the general word sense disambiguation task (Tratz et al., 2007) used it." P01-1003,J96-1002,o,"Using the ME principle, we can combine information from a variety of sources into the same language model (Berger et al., 1996; Rosenfeld, 1996)." P01-1027,J96-1002,o,"(Berger et al., 1996) applies this approach to the so-called IBM Candide system to build context dependent models, compute automatic sentence splitting and to improve word reordering in translation." P01-1027,J96-1002,o,"Similar techniques are used in (Papineni et al., 1996; Papineni et al., 1998) for so-called direct translation models instead of those proposed in (Brown et al., 1993)." P01-1027,J96-1002,o,"Other authors have applied this approach to language modeling (Rosenfeld, 1996; Martin et al., 1999; Peters and Klakow, 1999)." P01-1027,J96-1002,o,"The resulting model has an exponential form with free parameters λ_1, …, λ_M. The parameter values which maximize the likelihood for a given training corpus can be computed with the so-called GIS algorithm (general iterative scaling) or its improved version IIS (Pietra et al., 1997; Berger et al., 1996)." P01-1027,J96-1002,o,"In this work we use the following contextual information: Target context: As in (Berger et al., 1996) we consider a window of 3 words to the left and to the right of the target word considered." P01-1042,J96-1002,o,"In statistical computational linguistics, maximum conditional likelihood estimators have mostly been used with general exponential or maximum entropy models because standard maximum likelihood estimation is usually computationally intractable (Berger et al., 1996; Della Pietra et al., 1997; Jelinek, 1997)."
P02-1002,J96-1002,o,"1 Introduction Conditional Maximum Entropy models have been used for a variety of natural language tasks, including Language Modeling (Rosenfeld, 1994), part-of-speech tagging, prepositional phrase attachment, and parsing (Ratnaparkhi, 1998), word selection for machine translation (Berger et al., 1996), and finding sentence boundaries (Reynar and Ratnaparkhi, 1997)." P02-1025,J96-1002,o,"One solution would be to apply the maximum entropy estimation technique (MaxEnt (Berger et al., 1996)) to all of the three components of the SLM, or at least to the CONSTRUCTOR." P02-1038,J96-1002,o,"An especially well-founded framework for doing this is maximum entropy (Berger et al., 1996)." P02-1063,J96-1002,o,"Various learning models have been studied such as Hidden Markov models (HMMs) (Rabiner and Juang, 1993), decision trees (Breiman et al., 1984) and maximum entropy models (Berger et al., 1996)." P03-1012,J96-1002,p,"Maximum entropy can be used to improve IBM-style translation probabilities by using features, such as improvements to P(f|e) in (Berger et al., 1996)." P03-1012,J96-1002,o,"For example, alignments can be used to learn translation lexicons (Melamed, 1996), transfer rules (Carbonell et al., 2002; Menezes and Richardson, 2001), and classifiers to find safe sentence segmentation points (Berger et al., 1996)." P03-1012,J96-1002,o,"It has been observed that words close to each other in the source language tend to remain close to each other in the translation (Vogel et al., 1996; Ker and Chang, 1997)." P03-1015,J96-1002,o,"We used the Maximum Entropy approach5 (Berger et al., 1996) as a machine learner for this task." P03-1055,J96-1002,o,"The other main difference is the apparently nonlocal nature of the problem, which motivates our choice of a Maximum Entropy (ME) model for the tagging task (Berger et al., 1996)." P03-1061,J96-1002,o,"One is to find unknown words from corpora and put them into a dictionary (e.g.
, (Mori and Nagao, 1996)), and the other is to estimate a model that can identify unknown words correctly (e.g., (Kashioka et al., 1997; Nagata, 1999))." P03-1061,J96-1002,o,"We implemented this model within an ME modeling framework (Jaynes, 1957; Jaynes, 1979; Berger et al., 1996)." P04-1014,J96-1002,o,"They use a conditional model, based on Collins (1996), which, as the authors acknowledge, has a number of theoretical deficiencies; thus the results of Clark et al. provide a useful baseline for the new models presented here." P04-1014,J96-1002,o,"Setting the gradient to zero yields the usual maximum entropy constraints (Berger et al., 1996), except that in this case the empirical values are themselves expectations (over all derivations leading to each gold standard dependency structure)." P04-1018,J96-1002,o,"We use maximum entropy model (Berger et al., 1996) for both the mention-pair model (9) and the entity-mention model (8): P(L|m′, m) = exp{Σ_i λ_i g_i(m′, m, L)} / Z(m′, m) (10) P(L|e, m) = exp{Σ_i λ_i g_i(e, m, L)} / Z(e, m) (11) where g_i is a feature and λ_i is its weight; Z is a normalizing factor to ensure that (10) or (11) is a probability." P04-1018,J96-1002,p,"Effective training algorithms exist (Berger et al., 1996) once the set of features {g_i} is selected." P04-1020,J96-1002,o,"(In our experiments, we use maximum entropy classification (MaxEnt) (Berger et al., 1996) to train this probability model)." P04-1085,J96-1002,o,"We use maximum entropy modeling (Berger et al.
, 1996) to directly model the conditional probability P(s_i|O), where each o_i in O = (o_1, …, o_n) is an observation associated with the corresponding speaker s_i. o_i is represented here by only one variable for notational ease, but it possibly represents several lexical, durational, structural, and acoustic observations." P04-1085,J96-1002,o,"Speaker ranking accuracy Table 2 summarizes the accuracy of our statistical ranker on the test data with different feature sets: the performance is 89.39% when using all feature sets, and reaches 90.2% after applying Gaussian smoothing and using incremental feature selection as described in (Berger et al., 1996) and implemented in the yasmetFS package.6 Note that restricting ourselves to only backward looking features decreases the performance significantly, as we can see in Table 2." P05-1017,J96-1002,p,"Another interesting point is the relation to maximum entropy model (Berger et al., 1996), which is popular in the natural language processing community." P05-1020,J96-1002,o,"We consider three learning algorithms, namely, the C4.5 decision tree induction system (Quinlan, 1993), the RIPPER rule learning algorithm (Cohen, 1995), and maximum entropy classification (Berger et al., 1996)." P05-1027,J96-1002,o,"The Maximum Entropy Principle (Berger et al., 1996) is to find a model p* = argmax_{p∈C} H(p), which means a probability model p(y|x) that maximizes entropy H(p)." P05-1031,J96-1002,o,"MAXENT, Zhang Le's C++ implementation8 of maximum entropy modelling (Berger et al., 1996)." P05-1037,J96-1002,p,"5.4 Maximum Entropy Maximum entropy has been proven to be an effective method in various natural language processing applications (Berger et al., 1996)."
P05-1057,J96-1002,o,"Statistical approaches, which depend on a set of unknown parameters that are learned from training data, try to describe the relationship between a bilingual sentence pair (Brown et al., 1993; Vogel and Ney, 1996)." P05-1057,J96-1002,o,"Heuristic approaches obtain word alignments by using various similarity functions between the types of the two languages (Smadja et al., 1996; Ker and Chang, 1997; Melamed, 2000)." P05-1057,J96-1002,p,"An especially well-founded framework is maximum entropy (Berger et al., 1996)." P05-1061,J96-1002,o,"We use a standard maximum entropy classifier (Berger et al., 1996) implemented as part of MALLET (McCallum, 2002)." P05-1066,J96-1002,o,"For this reason there is currently a great deal of interest in methods which incorporate syntactic information within statistical machine translation systems (e.g., see (Alshawi, 1996; Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Och et al., 2004; Xia and McCord, 2004))." P05-1066,J96-1002,o,"2.1.2 Research on Syntax-Based SMT A number of researchers (Alshawi, 1996; Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Galley et al., 2004) have proposed models where the translation process involves syntactic representations of the source and/or target languages." P05-1066,J96-1002,o,"A number of other researchers (Berger et al., 1996; Niessen and Ney, 2004; Xia and McCord, 2004) have described previous work on preprocessing methods." P05-1066,J96-1002,o,"(Berger et al., 1996) describe an approach that targets translation of French phrases of the form NOUN de NOUN (e.g., conflit d'interet)." P05-2024,J96-1002,o,"We employ log-linear models (Berger et al., 1996) for the disambiguation."
P06-1026,J96-1002,o,"However, in order to cope with the prediction errors of the classifier, we approximate it with an n-gram language model on sequences of the refined tag labels (equations (2) and (3)). In order to estimate the conditional distribution we use the general technique of choosing the maximum entropy (maxent) distribution that properly estimates the average of each feature over the training data (Berger et al., 1996)." P06-1042,J96-1002,o,"However, the algorithm shares many common points with iterative algorithms that are known to converge and that have been proposed to find maximum entropy probability distributions under a set of constraints (Berger et al., 1996)." P06-1071,J96-1002,o,"2.1 Conditional Maximum Entropy Model The goal of CME is to find the most uniform conditional distribution of y given observation x, p(y|x), subject to constraints specified by a set of features f_i(x, y), where features typically take the value of either 0 or 1 (Berger et al., 1996)." P06-1071,J96-1002,o,"This leads to a good amount of work in this area (Ratnaparkhi et al., 1994; Berger et al., 1996; Pietra et al., 1997; Zhou et al., 2003; Riezler and Vasserman, 2004) In the most basic approach, such as Ratnaparkhi et al." P06-1071,J96-1002,o,"1 Introduction Conditional Maximum Entropy (CME) modeling has received a great amount of attention within natural language processing community for the past decade (e.g., Berger et al., 1996; Reynar and Ratnaparkhi, 1997; Koeling, 2000; Malouf, 2002; Zhou et al., 2003; Riezler and Vasserman, 2004)."
P06-1073,J96-1002,o,"The MaxEnt algorithm associates a set of weights (λ_ij), i = 1…n, j = 1…m, with the features, which are estimated during the training phase to maximize the likelihood of the data (Berger et al., 1996)." P06-1073,J96-1002,o,"Our approach is based on Maximum Entropy (MaxEnt henceforth) technique (Berger et al., 1996)." P06-1089,J96-1002,o,"There have been many studies on POS guessing of unknown words (Mori and Nagao, 1996; Mikheev, 1997; Chen et al., 1997; Nagata, 1999; Orphanos and Christodoulakis, 1999)." P06-1089,J96-1002,o,"p0(t|w) is calculated by ME models as follows (Berger et al., 1996): p0(t|w) = (1/Y(w)) exp{Σ_{h=1}^H λ_h g_h(w, t)} (20) Language Features English Prefixes of w0 up to four characters, suffixes of w0 up to four characters, w0 contains Arabic numerals, w0 contains uppercase characters, w0 contains hyphens." P06-1089,J96-1002,o,"The features we use are shown in Table 2, which are based on the features used by Ratnaparkhi (1996) and Uchimoto et al." P06-1112,J96-1002,p,"(Berger et al., 1996) gave a good description of ME model." P06-1129,J96-1002,p,"The maximum entropy model (Berger et al., 1996) provides us with a well-founded framework for this purpose, which has been extensively used in natural language processing tasks ranging from part-of-speech tagging to machine translation." P06-2018,J96-1002,o,"4.2 Cast3LB Function Tagging For the task of Cast3LB function tag assignment we experimented with three generic machine learning algorithms: a memory-based learner (Daelemans and van den Bosch, 2005), a maximum entropy classifier (Berger et al., 1996) and a Support Vector Machine classifier (Vapnik, 1998)." P06-2063,J96-1002,o,"Maximum Entropy models implement the intuition that the best model is the one that is consistent with the set of constraints imposed by the evidence but otherwise is as uniform as possible (Berger et al., 1996)."
P06-2089,J96-1002,o,"One such approach is maximum entropy classification (Berger et al., 1996), which we use in the form of a library implemented by Tsuruoka1 and used in his classifier-based parser (Tsuruoka and Tsujii, 2005)." P06-2093,J96-1002,o,"Several algorithms have been proposed in the literature that try to find the best splits, see for instance (Berger et al., 1996)." P06-2109,J96-1002,o,"2.2 Maximum Entropy Model The maximum entropy model (Berger et al., 1996) estimates a probability distribution from training data." P07-1020,J96-1002,o,"This logistic regression is also called Maxent as it finds the distribution with maximum entropy that properly estimates the average of each feature over the training data (Berger et al., 1996)." P07-1079,J96-1002,o,"The disambiguation model of this parser is based on a maximum entropy model (Berger et al., 1996)." P07-1079,J96-1002,o,"1 Introduction Several efficient, accurate and robust approaches to data-driven dependency parsing have been proposed recently (Nivre and Scholz, 2004; McDonald et al., 2005; Buchholz and Marsi, 2006) for syntactic analysis of natural language using bilexical dependency relations (Eisner, 1996)." P07-1096,J96-1002,o,"Following (Ratnaparkhi, 1996; Collins, 2002; Toutanova et al., 2003; Tsuruoka and Tsujii, 2005), we cut the PTB into the training, development and test sets as shown in Table 1. Table 2 (Experiments on the development data with beam width of 3) lists Feature Sets, Templates and Error%: A: Ratnaparkhi's, 3.05; B: A + [t0,t1],[t0,t1,t1],[t0,t1,t2], 2.92; C: B + [t0,t2],[t0,t2],[t0,t2,w0],[t0,t1,w0],[t0,t1,w0],[t0,t2,w0],[t0,t2,t1,w0],[t0,t1,t1,w0],[t0,t1,t2,w0], 2.84; D: C + [t0,w1,w0],[t0,w1,w0], 2.78; E: D + [t0, X = prefix or suffix of w0, 4 < |X| ≤ 9], 2.72."
P07-1096,J96-1002,o,"Table 4 (Comparison with the previous works) lists System, Beam and Error%: (Ratnaparkhi, 1996), 5, 3.37; (Tsuruoka and Tsujii, 2005), 1, 2.90; (Collins, 2002), 2.89; Guided Learning, feature B, 3, 2.85; (Tsuruoka and Tsujii, 2005), all, 2.85; (Gimenez and Màrquez, 2004), 2.84; (Toutanova et al., 2003), 2.76; Guided Learning, feature E, 1, 2.73; Guided Learning, feature E, 3, 2.67. According to the experiments shown above, we build our best system by using feature set E with beam width B = 3." P07-1113,J96-1002,p,"We use maximum entropy models (Berger et al., 1996), which are particularly well-suited for tasks (like ours) with many overlapping features, to harness these linguistic insights by using features in our models which encode, directly or indirectly, the linguistic correlates to SE types." P08-1002,J96-1002,o,"For classification, we use a maximum entropy model (Berger et al., 1996), from the logistic regression package in Weka (Witten and Frank, 2005), with all default parameter settings." P08-1033,J96-1002,o,"2.4 Maximum Entropy Classifier Maximum Entropy Models (Berger et al., 1996) seek to maximise the conditional probability of classes, given certain observations (features)." P08-1056,J96-1002,o,"These belong to two main categories based on machine learning (Bikel et al., 1997; Borthwick, 1999; McCallum and Li, 2003) and language or domain specific rules (Grishman, 1995; Wakao et al., 1996)." P08-1056,J96-1002,o,"Given a set of features and a training corpus, the MaxEnt estimation process produces a model in which every feature f_i has a weight α_i. We can compute the conditional probability as (Berger et al., 1996): p(o|h) = (1/Z(h)) Π_i α_i^{f_i(h,o)} (1) Z(h) = Σ_o Π_i α_i^{f_i(h,o)} (2) The conditional probability of the outcome is the product of the weights of all active features, normalized over the products of all the features."
P08-1115,J96-1002,o,"Formally, the approach we take can be thought of as a noisier channel, where an observed signal o gives rise to a set of source-language strings f′ ∈ F(o) and we seek e* = argmax_e max_{f′∈F(o)} Pr(e, f′|o) (2) = argmax_e max_{f′∈F(o)} Pr(e) Pr(f′|e, o) (3) = argmax_e max_{f′∈F(o)} Pr(e) Pr(f′|e) Pr(o|f′). (4) Following Och and Ney (2002), we use the maximum entropy framework (Berger et al., 1996) to directly model the posterior Pr(e, f′|o) with parameters tuned to minimize a loss function representing the quality only of the resulting translations." P08-2001,J96-1002,o,"The modeling approach here described is discriminative, and is based on maximum entropy (ME) models, firstly applied to natural language problems in (Berger et al., 1996)." P09-1005,J96-1002,o,"For the identification and labeling steps, we train a maximum entropy classifier (Berger et al., 1996) over sections 02-21 of a version of the CCGbank corpus (Hockenmaier and Steedman, 2007) that has been augmented by projecting the Propbank semantic annotations (Boxwell and White, 2008)." P09-3007,J96-1002,o,"4.1 Evaluation of Different Features and Models In pilot experiments on a subset of the features, we provide a comparison of HM-SVM with other two learning models, maximum entropy (MaxEnt) model (Berger et al., 1996) and SVM model (Kudo, 2001), to test the effectiveness of HM-SVM on function labeling task, as well as the generality of our hypothesis on different learning models. Table 3: Features used in each experiment round." P98-2140,J96-1002,o,"We adopted the stop condition suggested in (Berger et al., 1996): the maximization of the likelihood on a cross-validation set of samples which is unseen at the parameter estimation." P98-2140,J96-1002,o,"To make feature ranking computationally tractable in (Della Pietra et al., 1995) and (Berger et al.
, 1996) a simplified process was proposed: at the feature ranking stage when adding a new feature to the model, all previously computed parameters are kept fixed and, thus, we have to fit only one new constraint imposed by the candidate feature." P98-2140,J96-1002,o,"We also do not require a newly added feature to be either atomic or a collocation of an atomic feature with a feature already included into the model as it was proposed in (Della Pietra et al. , 1995) (Berger et al. , 1996)." P98-2191,J96-1002,o,"Therefore, estimating a natural language model based on the maximum entropy (ME) method (Pietra et al. , 1995; Berger et al. , 1996) has been highlighted recently." P98-2191,J96-1002,o,"Then, to solve p* in C in equation (8) is equivalent to solving the lambda* that maximizes the log-likelihood (10): lambda* = argmax_lambda L(lambda). Such lambda* can be solved by one of the numerical algorithms called the Improved Iterative Scaling Algorithm (Berger et al. , 1996)." P98-2191,J96-1002,o,"We build a subset S of F incrementally by iterating to adjoin a feature f in F which maximizes loglikelihood of the model to S. This algorithm is called the Basic Feature Selection (Berger et al. , 1996)." P98-2214,J96-1002,o,"As a model learning method, we adopt the maximum entropy model learning method (Della Pietra et al. , 1997; Berger et al. , 1996)." W00-0704,J96-1002,o,"We will provide a more detailed and systematic comparison between MAXIMUM ENTROPY MODELING (Ratnaparkhi, 1996) and MEMORY BASED LEARNING (Daelemans et al. , 1996) for morpho-syntactic disambiguation and we investigate whether earlier observed differences in tagging accuracy can be attributed to algorithm bias, information source issues or both." W00-0704,J96-1002,o,"A word is considered to be known when it has an ambiguous tag (henceforth ambitag) attributed to it in the LEXICON, which is compiled in the same way as for the MBT-tagger (Daelemans et al. , 1996)." 
W00-0707,J96-1002,o,"In previous work (Foster, 2000), I described a Maximum Entropy/Minimum Divergence (MEMD) model (Berger et al. , 1996) for p(w\[hi, s) which incorporates a trigram language model and a translation component which is an analog of the well-known IBM translation model 1 (Brown et al. , 1993)." W00-0707,J96-1002,o,"For a given choice of q and f, the IIS algorithm (Berger et al. , 1996) can be used to find maximum likelihood values for the parameters ~." W00-0714,J96-1002,o,"We have used the Improved Iterative Scaling algorithm (IIS) (Berger et al. , 1996)." W00-0729,J96-1002,o,"In the last few years there has been an increasing interest in applying MaxEnt models for NLP applications (Ratnaparkhi, 1998; Berger et al. , 1996; Rosenfeld, 1994; Ristad, 1998)." W01-0712,J96-1002,o,"For every class the weights of the active features are combined and the best scoring class is chosen (Berger et al. , 1996)." W02-0301,J96-1002,o,"We use the maximum entropy tagging method described in (Kazama et al. , 2001) for the experiments, which is a variant of (Ratnaparkhi, 1996) modified to use HMM state features." W02-0301,J96-1002,p,"Support Vector Machines (SVMs) (Vapnik, 1995) and Maximum Entropy (ME) method (Berger et al. , 1996) are powerful learning methods that satisfy such requirements, and are applied successfully to other NLP tasks (Kudo and Matsumoto, 2000; Nakagawa et al. , 2001; Ratnaparkhi, 1996)." W02-0811,J96-1002,o,"For the maximum entropy classifier, we estimate the weights by maximizing the likelihood of a heldout set, using the standard IIS algorithm (Berger et al. , 1996)." W02-0813,J96-1002,o,"Under the maximum entropy framework (Berger et al. , 1996), evidence from different features can be combined with no assumptions of feature independence." W02-1002,J96-1002,o,"Unconstrained CL corresponds exactly to a conditional maximum entropy model (Berger et al. , 1996; Lafferty et al. , 2001)." 
W02-1011,J96-1002,o,"However, feature/class functions are traditionally deflned as binary (Berger et al. , 1996); hence, explicitly incorporating frequencies would require difierent functions for each count (or count bin), making training impractical." W02-1011,J96-1002,p,"5.2 Maximum Entropy Maximum entropy classiflcation (MaxEnt, or ME, for short) is an alternative technique which has proven efiective in a number of natural language processing applications (Berger et al. , 1996)." W02-2018,J96-1002,o,"A conditional maximum entropy model q(xjw) for p has the parametric form (Berger et al. , 1996; Chi, 1998; Johnson et al. , 1999): q(xjw) = exp T f (x) y2Y(w) exp(T f (y)) (1) where is a d-dimensional parameter vector and T f (x) is the inner product of the parameter vector and a feature vector." W02-2018,J96-1002,o,"In natural language processing, recent years have seen ME techniques used for sentence boundary detection, part of speech tagging, parse selection and ambiguity resolution, and stochastic attribute-value grammars, to name just a few applications (Abney, 1997; Berger et al. , 1996; Ratnaparkhi, 1998; Johnson et al. , 1999)." W02-2018,J96-1002,o,"Finally, it should be noted that in the current implementation, we have not applied any of the possible optimizations that appear in the literature (Lafferty and Suhm, 1996; Wu and Khudanpur, 2000; Lafferty et al. , 2001) to speed up normalization of the probability distribution q. These improvements take advantage of a models structure to simplify the evaluation of the denominator in (1)." W02-2019,J96-1002,p,"Maximum entropy models (Jaynes, 1957; Berger et al. , 1996; Della Pietra et al. , 1997) are a class of exponential models which require no unwarranted independence assumptions and have proven to be very successful in general for integrating information from disparate and possibly overlapping sources." 
W03-0401,J96-1002,o,"A possible solution to this problem is to directly estimate p(A|w) by applying a maximum entropy model (Berger et al. , 1996)." W03-0401,J96-1002,o,"The parsing algorithm was CKY-style parsing with beam thresholding, which was similar to ones used in (Collins, 1996; Clark et al. , 2002)." W03-0401,J96-1002,o,"Recently used machine learning methods including maximum entropy models (Berger et al. , 1996) and support vector machines (Vapnik, 1995) provide grounds for this type of modeling, because it allows various dependent features to be incorporated into the model without the independence assumption." W03-0417,J96-1002,p,"State-of-theart machine learning techniques including Support Vector Machines (Vapnik, 1995), AdaBoost (Schapire and Singer, 2000) and Maximum Entropy Models (Ratnaparkhi, 1998; Berger et al. , 1996) provide high performance classifiers if one has abundant correctly labeled examples." W03-0420,J96-1002,o,"Thus, we obtain the following second-order model: a36a39a38a41a40 a17 a5a7 a42a4 a5a7 a44 a8 a5a57 a15a27a58 a7 a36a39a38a41a40 a17a20a15a59a42a17 a15a41a49 a7 a7 a60 a4 a5a7 a44 a8 ma61a63a62a65a64a33a66 a5a57 a15a27a58 a7a68a67 a40 a17 a15 a42a17 a15a50a49 a7 a15a50a49a51a48 a60 a4 a15a27a47a55a48 a15a50a49a54a48 a44 a11 A well-founded framework for directly modeling the posterior probability a67 a40 a17 a15 a42a17 a15a50a49 a7 a15a50a49a54a48 a60 a4 a15a12a47a55a48 a15a50a49a54a48 a44 is maximum entropy (Berger et al. , 1996)." W03-0420,J96-1002,o,"1 Introduction In this paper, we present an approach for extracting the named entities (NE) of natural language inputs which uses the maximum entropy (ME) framework (Berger et al. , 1996)." W03-0425,J96-1002,o,"The model weights are trained using the improved iterative scaling algorithm (Berger et al. , 1996)." W03-0425,J96-1002,o,"(1999), a robust risk minimization classifier, based on a regularized winnow method (Zhang et al. 
, 2002) (henceforth RRM) and a maximum entropy classifier (Darroch and Ratcliff, 1972; Berger et al. , 1996; Borthwick, 1999) (henceforth MaxEnt)." W03-0505,J96-1002,o,"The first two phases are approached as straightforward classification in a maximum entropy framework (Berger et al. , 1996)." W03-1007,J96-1002,o,"3.2 Maximum Entropy ME models implement the intuition that the best model will be the one that is consistent with the set of constraints imposed by the evidence, but otherwise is as uniform as possible (Berger et al. , 1996)." W03-1013,J96-1002,o,"We have implemented a parallel version of our GIS code using the MPICH library (Gropp et al. , 1996), an open-source implementation of the Message Passing Interface (MPI) standard." W03-1018,J96-1002,p,"1 Introduction The maximum entropy model (Berger et al. , 1996; Pietra et al. , 1997) has attained great popularity in the NLP field due to its power, robustness, and successful performance in various NLP tasks (Ratnaparkhi, 1996; Nigam et al. , 1999; Borthwick, 1999)." W03-1020,J96-1002,p,"A more refined algorithm, the incremental feature selection algorithm by Berger et al. (1996), allows one feature being added at each selection and at the same time keeps estimated parameter values for the features selected in the previous stages." W03-1020,J96-1002,o,"In contrast to what is shown in Berger et al. (1996)'s paper, here is how the different values in this variant of the IFS algorithm are computed." 
W03-1020,J96-1002,o,"The goal of each selection stage is to select the feature f that maximizes the gain of the log likelihood, where the a and gain of f are derived through following steps: Let the log likelihood of the model be -= yx xZysump pL,, )(/|log()( ~ and the empirical expectation of feature f be E p (f)= p (x,y)f(x,y) x,y With the approximation assumption in Berger et al (1996)s paper, the un-normalized component and the normalization factor of the model have the following recursive forms: )|()|( aa exysumxysum SfS = | Z f + The approximate gain of the log likelihood is computed by G Sf (a)L(p Sf a )-L(p S ) =- p (x)(logZ Sf,a (x) x /Z S (x)) +aE p (f) (1) The maximum approximate gain and its corresponding a are represented as: )(max),(~ a fS GfSL =D maxarg f 3 A Fast Feature Selection Algorithm The inefficiency of the IFS algorithm is due to the following reasons." W03-1020,J96-1002,o,"1 Introduction Maximum Entropy (ME) modeling has received a lot of attention in language modeling and natural language processing for the past few years (e.g. , Rosenfeld, 1994; Berger et al 1996; Ratnaparkhi, 1998; Koeling, 2000)." W03-1021,J96-1002,o,"We should note from equation 4 that the neural network model is similar in functional form to the maximum entropy model (Berger et al. , 1996) except that the neural network learns the feature functions by itself from the training data." W03-1025,J96-1002,o,"There are multiple studies (Wu and Fung, 1994; Sproat et al. , 1996; Luo and Roukos, 1996) showing that the agreement between two (untrained) native speakers is about upper a15 a12a14a7 to lower a0a4a12a14a7." W03-1025,J96-1002,o,"Chinese word segmentation is a well-known problem that has been studied extensively (Wu and Fung, 1994; Sproat et al. , 1996; Luo and Roukos, 1996) and it is known that human agreement is relatively low." 
W03-1025,J96-1002,p,"Each component model takes the exponential form: a37a55a38a57a56 a51 a42a6a44a59a58a60a56 a61 a51a64a63a65a53a67a66 a53 a45a46a70 a71a16a72a21a73a75a74a77a76a79a78a81a80 a78a16a82a11a78 a38a83a44a59a58a60a56a84a61 a51a64a63a65a53a67a66 a53 a58a60a56 a51 a45a86a85 a87 a38a83a44a59a58a60a56a84a61 a51a64a63a65a53a67a66 a53 a45 a58 (2) where a87 a38a83a44a59a58a60a56 a61 a51a41a63a65a53a67a66 a53 a45 is a normalization term to ensure that a37a55a38a57a56 a51a42a6a44a88a58a60a56a62a61 a51a41a63a65a53a67a66 a53 a45 is a probability, a82a11a78 a38a83a44a59a58a60a56 a61 a51a64a63a65a53a67a66 a53 a58a60a56 a51 a45 is a feature function (often binary) and a80 a78 is the weight ofa82a21a78 . Given a set of features and a corpus of training data, there exist ef cient training algorithms (Darroch and Ratcliff, 1972; Berger et al. , 1996) to nd the optimal parameters a89 a80 a78a14a90 . The art of building a maximum entropy parser then reduces to choosing good features." W03-1718,J96-1002,o,"The training algorithm we used is the improved iterative scaling (IIS) described in (Berger et al, 1996)3." W04-0701,J96-1002,o,"models implement the intuition that the best model will be the one that is consistent with the set of constrains imposed by the evidence, but otherwise is as uniform as possible (Berger et al. , 1996)." W04-0859,J96-1002,o,"Our systems use both corpus-based and knowledge-based approaches: Maximum Entropy(ME) (Lau et al. , 1993; Berger et al. , 1996; Ratnaparkhi, 1998) is a corpus-based and supervised method based on linguistic features; ME is the core of a bootstrapping algorithm that we call re-training inspired This paper has been partially supported by the Spanish Government (CICyT) under project number TIC-2003-7180 and the Valencia Government (OCyT) under project number CTIDIB-2002-151 by co-training (Blum and Mitchell, 1998); Relevant Domains (RD) (Montoyo et al. 
, 2003) is a resource built from WordNet Domains (Magnini and Cavaglia, 2000) that is used in an unsupervised method that assigns domain and sense labels; Specification Marks (SP) (Montoyo and Palomar, 2000) exploits the relations between synsets stored in WordNet (Miller et al. , 1993) and does not need any training corpora; Commutative Test (CT) (Nica et al. , 2003), based on the Sense Discriminators device derived from EWN (Vossen, 1998), disambiguates nouns inside their syntactic patterns, with the help of information extracted from raw corpus." W04-0860,J96-1002,o,"The supervised methods are based on Maximum Entropy (ME) (Lau et al. , 1993; Berger et al. , 1996; Ratnaparkhi, 1998), neural network using the Learning Vector Quantization algorithm (Kohonen, 1995) and Specialized Hidden Markov Models (Pla, 2000)." W04-1007,J96-1002,o,"First, two maximum entropy classifiers (Berger et al. , 1996) are applied, where the first predicts clause start labels and the second predicts clause end labels." W04-1802,J96-1002,o,"Figures 1 and 2 present best results in the learning experiments for the complete set of patterns used in the collocation approach, over two of our evaluation corpora (Type, Positions, Tags/Words, Features, Accuracy, Precision, Recall: GIS, 1, W, 1254, 0.97, 0.96, 0.98; IIS, 1, T, 136, 0.95, 0.96, 0.94; NB, 1, T, 136, 0.88, 0.97, 0.84). See Rish (2001), Ratnaparkhi (1997) and Berger et al. (1996) for a formal description of these algorithms." W05-0509,J96-1002,o,"It can be proven that the probability distribution p satisfying the above assumption is the one with the highest entropy, is unique and has the following exponential form (Berger et al. 1996): p(a|c) = (1/Z(c)) prod_{j=1}^{k} alpha_j^{f_j(a,c)} (1), where Z(c) is a normalization factor, f_j(a,c) are the values of the k features of the pair (a,c) and correspond to the linguistic cues of c that are relevant to predict the outcome a. 
Features are extracted from the training data and define the constraints that the probabilistic model p must satisfy." W05-0612,J96-1002,o,"When labeled training data is available, we can use the Maximum Entropy principle (Berger et al. , 1996) to optimize the weights." W05-0627,J96-1002,o,"In our SRL system, we select maximum entropy (Berger et al. , 1996) as a classifier to implement the semantic role labeling system." W05-0709,J96-1002,o,"The principle of maximum entropy states that when one searches among probability distributions that model the observed data (evidence), the preferred one is the one that maximizes the entropy (a measure of the uncertainty of the model) (Berger et al. , 1996)." W05-0709,J96-1002,o,"where mk is one mention in entity e, and the basic model building block PL(L = 1|e, mk, m) is an exponential or maximum entropy model (Berger et al. , 1996)." W05-0709,J96-1002,o,"Both systems are built around from the maximum-entropy technique (Berger et al. , 1996)." W05-0709,J96-1002,o,"[Figure 1: Illustration of dictionary based segmentation finite state transducer] 3.1 Bootstrapping In addition to the model based upon a dictionary of stems and words, we also experimented with models based upon character n-grams, similar to those used for Chinese segmentation (Sproat et al. , 1996)." W05-1304,J96-1002,o,"In this paper we adopt a maximum entropy model (Berger et al. , 1996) to estimate the local probabilities, since it can incorporate diverse types of features with reasonable computational cost." W05-1505,J96-1002,o,"For a more detailed introduction to maximum entropy estimation see (Berger et al. , 1996)." W05-1510,J96-1002,o,"The forest representation was obtained by adopting chart generation (Kay, 1996; Carroll et al. 
, 1999) where ambiguous candidates are packed into an equivalence class and mapping a chart into a forest in the same way as parsing." W05-1510,J96-1002,o,"2.3 Probabilistic models for generation with HPSG Some existing studies on probabilistic models for HPSG parsing (Malouf and van Noord, 2004; Miyao and Tsujii, 2005) adopted log-linear models (Berger et al. , 1996)." W05-1511,J96-1002,o,"Probabilistic models where probabilities are assigned to the CFG backbone of the unification-based grammar have been developed (Kasper et al. , 1996; Briscoe and Carroll, 1993; Kiefer et al. , 2002), and the most probable parse is found by PCFG parsing." W05-1511,J96-1002,o,"Previous studies (Abney, 1997; Johnson et al. , 1999; Riezler et al. , 2000; Miyao et al. , 2003; Malouf and van Noord, 2004; Kaplan et al. , 2004; Miyao and Tsujii, 2005) defined a probabilistic model of unification-based grammars as a log-linear model or maximum entropy model (Berger et al. , 1996)." W05-1514,J96-1002,o,"6 Phrase Recognition with a Maximum Entropy Classifier For the candidates which are not filtered out in the above two phases, we perform classification with maximum entropy classifiers (Berger et al. , 1996)." W05-1520,J96-1002,o,"2.2 Maximum Entropy Our next approach is the Maximum Entropy (Berger et al. , 1996) classification approach." W06-0301,J96-1002,o,"As a learning algorithm for our classification model, we used Maximum Entropy (Berger et al. , 1996)." W06-1314,J96-1002,o,"Research on DA classification initially focused on two-party conversational speech (Mast et al. , 1996; Stolcke et al. , 1998; Shriberg et al. , 1998) and, more recently, has extended to multi-party audio recordings like the ICSI corpus (Shriberg et al. , 2004)." W06-1314,J96-1002,o,"We apply a maximum entropy (maxent) model (Berger et al. , 1996) to this task." W06-1617,J96-1002,p,"Since its introduction to the Natural Language Processing (NLP) community (Berger et al. 
, 1996), ME-based classifiers have been shown to be effective in various NLP tasks." W06-1619,J96-1002,o,"Previous studies (Abney, 1997; Johnson et al. , 1999; Riezler et al. , 2000; Malouf and van Noord, 2004; Kaplan et al. , 2004; Miyao and Tsujii, 2005) defined a probabilistic model of unification-based grammars including HPSG as a log-linear model or maximum entropy model (Berger et al. , 1996)." W06-1633,J96-1002,o,"Based on the data seen, a maximum entropy model (Berger et al. , 1996) offers an expression (1) for the probability that there exists coreference C between a mention mi and a mention mj." W06-1643,J96-1002,o,"We performed feature selection by incrementally growing a log-linear model with order-0 features f(x,yt) using a forward feature selection procedure similar to (Berger et al. , 1996)." W06-2601,J96-1002,n,"Despite ME theory and its related training algorithm (Darroch and Ratcliff, 1972) do not set restrictions on the range of feature functions, popular NLP text books (Manning and Schutze, 1999) and research papers (Berger et al. , 1996) seem to limit them to binary features." W06-2601,J96-1002,p,"1 Introduction The Maximum Entropy (ME) statistical framework (Darroch and Ratcliff, 1972; Berger et al. , 1996) has been successfully deployed in several NLP tasks." W06-2601,J96-1002,o,"6 Parameter Estimation From the duality of ME and maximum likelihood (Berger et al. , 1996), optimal parameters lambda* for model (3) can be found by maximizing the log-likelihood function over a training sample {(xt,yt) : t = 1,...,N}, i.e.: lambda* = argmax_lambda sum_{t=1}^{N} log p(yt|xt)." W06-2922,J96-1002,o,"Using Maximum Entropy (Berger, et al. 1996) classifiers I built a parser that achieves a throughput of over 200 sentences per second, with a small loss in accuracy of about 2-3%." W06-2928,J96-1002,o,"4 The Dependency Labeler 4.1 Classifier We used a maximum entropy classifier (Berger et al. 
, 1996) to assign labels to the unlabeled dependencies produced by the Bayes Point Machine." W06-3108,J96-1002,p,"In the case of two orientation classes, c_{j,j'} is defined as: c_{j,j'} = left if j' < j, right if j' > j (4). Then, the reordering model has the form p(c_{j,j'}|f_1^J,e_1^I,i,j). A well-founded framework for directly modeling the probability p(c_{j,j'}|f_1^J,e_1^I,i,j) is maximum entropy (Berger et al. , 1996)." W07-0401,J96-1002,o,"Many reordering constraints have been used for word reorderings, such as ITG constraints (Wu, 1996), IBM constraints (Berger et al. , 1996) and local constraints (Kanthak et al. , 2005)." W07-0413,J96-1002,o,"The probability distributions of these binary classifiers are learnt using maximum entropy model (Berger et al. , 1996; Haffner, 2006)." W07-0604,J96-1002,p,"(2006), but we use a maximum entropy classifier (Berger et al. , 1996) to determine parser actions, which makes parsing extremely fast." W07-1027,J96-1002,o,"Maximum Entropy Modeling (MaxEnt) (Berger et al. , 1996) and Support Vector Machine (SVM) (Vapnik, 1995) were used to build the classifiers in our solution." W07-1033,J96-1002,o,"is the previous BIO tag, S is the target sentence, and f_j and lambda_j are feature functions and parameters of a log-linear model (Berger et al. , 1996)." W07-1110,J96-1002,o,"5.2 Maximum Entropy Model We use the Maximum Entropy (ME) Model (Berger et al. , 1996) for our classification task." W07-1110,J96-1002,o,"(Dahl et al. , 1987; Hull and Gomez, 1996) use hand-coded slot-filling rules to determine the semantic roles of the arguments of a nominalization." W07-2057,J96-1002,o,"We utilize the OpenNLP MaxEnt implementation of the maximum entropy classification algorithm (Berger et al. , 1996) to train classification models for each lemma and part-of-speech combination in the training corpus." 
W07-2059,J96-1002,p,"Exponential family models are a mainstay of modern statistical modeling (Brown, 1986) and they are widely and successfully used for example in text classification (Berger et al. , 1996)." W07-2202,J96-1002,o,"The disambiguation model of Enju is based on a feature forest model (Miyao and Tsujii, 2002), which is a log-linear model (Berger et al. , 1996) on packed forest structure." W07-2208,J96-1002,o,"This was overcome by a probabilistic model which provides probabilities of discriminating a correct parse tree among candidates of parse trees in a log-linear model or maximum entropy model (Berger et al. , 1996) with many features for parse trees (Abney, 1997; Johnson et al. , 1999; Riezler et al. , 2000; Malouf and van Noord, 2004; Kaplan et al. , 2004; Miyao and Tsujii, 2005)." W07-2208,J96-1002,o,"Previous studies (Abney, 1997; Johnson et al. , 1999; Riezler et al. , 2000; Malouf and van Noord, 2004; Kaplan et al. , 2004; Miyao and Tsujii, 2005) defined a probabilistic model of unification-based grammars including HPSG as a log-linear model or maximum entropy model (Berger et al. , 1996)." W08-0206,J96-1002,o,"For instance, for Maximum Entropy, I picked (Berger et al., 1996; Ratnaparkhi, 1997) for the basic theory, (Ratnaparkhi, 1996) for an application (POS tagging in this case), and (Klein and Manning, 2003) for more advanced topics such as optimization and smoothing." W08-0404,J96-1002,o,"Maximum entropy estimation for translation of individual words dates back to Berger et al (1996), and the idea of using multi-class classifiers to sharpen predictions normally made through relative frequency estimates has been recently reintroducedundertherubricofwordsensedisambiguation and generalized to substrings (Chan et al 2007; Carpuat and Wu 2007a; Carpuat and Wu 2007b)." W08-0504,J96-1002,p,"(2006), but we use a maximum entropy classifier (Berger et al., 1996) to determine parser actions, which makes parsing considerably faster." 
W08-1130,J96-1002,o,"These feature functions fi were used to train a maximum entropy classifier (Berger et al., 1996) (Le, 2004)thatassignsaprobabilitytoaREregiven context cx as follows: p(re| cx) = Z(cx)exp nsummationdisplay i=1 ifi(cx,re) where Z(cx) is a normalizing sum and the i are the parameters (feature weights) learned." W08-1130,J96-1002,o,"We use discourse-level feature predicates in a maximum entropy classifier (Berger et al., 1996) with binary and n-class classification to select referring expressions from a list." W08-1302,J96-1002,o,"2 Background: MaxEnt Models Maximum Entropy (MaxEnt) models are widely used in Natural Language Processing (Berger et al., 1996; Ratnaparkhi, 1997; Abney, 1997)." W08-2130,J96-1002,o,"In this paper a discriminative parser is proposed to implement maximum entropy (ME) models (Berger, et al., 1996) to address the learning task." W08-2139,J96-1002,o,"The maximum entropy classier (Berger et al, 1996) used is Le Zhang's Maximum Entropy Modeling Toolkit and the L-BFGS parameter estimation algorithm with gaussian prior smoothing (Chen and Rosenfeld, 1999)." W09-0435,J96-1002,o,Wu (1996) and Berger et al. W09-0706,J96-1002,o,"1.2 Recent work A few publications, so far, deal with POS-tagging of Northern Sotho; most prominently, de Schryver and de Pauw (2007) have presented the MaxTag method, a tagger based on Maximum Entropy 38 Learning (Berger et al., 1996) as implemented in the machine learning package Maxent (Le, 2004)." W09-1118,J96-1002,o,"(2007): The committee consists of k = 3 Maximum Entropy (ME) classifiers (Berger et al., 1996)." 
W09-1207,J96-1002,o,"During the SRC stage, a Maximum entropy (Berger et al., 1996) classifier is used to predict the probabilities of a word in the sentence Language No-duplicated-roles Catalan arg0-agt, arg0-cau, arg1-pat, arg2-atr, arg2-loc Chinese A0, A1, A2, A3, A4, A5, Czech ACT, ADDR, CRIT, LOC, PAT, DIR3, COND English A0, A1, A2, A3, A4, A5, German A0, A1, A2, A3, A4, A5, Japanese DE, GA, TMP, WO Spanish arg0-agt, arg0-cau, arg1-pat, arg1-tem, arg2-atr, arg2-loc, arg2-null, arg4-des, argL-null, argMcau, argM-ext, argM-fin Table 1: No-duplicated-roles for different languages to be each semantic role." W09-2309,J96-1002,o,"IBM constraints (Berger et al., 1996), the lexical word reordering model (Tillmann, 2004), and inversion transduction grammar (ITG) constraints (Wu, 1995; Wu, 1997) belong to this type of approach." W96-0213,J96-1002,o,"Previous uses of this model include language modeling(Lau et al. , 1993), machine translation(Berger et al. , 1996), prepositional phrase attachment(Ratnaparkhi et al. , 1994), and word morphology(Della Pietra et al. , 1995)." W97-0121,J96-1002,o,"To make feature ranking computationally tractable in Della Pietra et al. 1995 and Berger et al. 1996 a simplified process proposed: at the feature ranking stage when adding a new feature to the model all previously computed parameters are kept fixed and, thus, we have to fit only one new constraint imposed by a candidate feature." W97-0121,J96-1002,o,First as the configuration space we can use only the reference nodes (w) from the lattice which makes it similar to the method of Berger et al. 1996 described in section 2.1. W97-0121,J96-1002,o,We adopted the stop condition suggested in Berger et al. 1996 the maximization of the likelihood on a cross-validation set of samples which is unseen at the parameter esti~_tion. W97-0121,J96-1002,o,Our method uses assumptions similar to Berger et al. 1996 but is naturally suitable for distributed parallel computations. 
W97-0121,J96-1002,o,"Berger et al. 1996 presented a way of computing conditional maximum entropy models directly by modifying equation 6 as follows (now instead of w we will explicitly use (x, y) ): i ~Cx~) = ~ f~(~, y) * ~(~, y) ~ ~ .~(~, y) * ~(~) * pCy I ~) = p(xk) (9) x6X yEY xEX yEY where ~(x, y) is an empirical probability of a joint configuration (w) of certain instantiated factor I variables with certain instantiated behavior variables." W97-0301,J96-1002,o,"6 Comparison With Previous Work The two parsers which have previously reported the best accuracies on the Penn Treebank Wall St. Journal are the bigram parser described in (Collins, 1996) and the SPATTER parser described in (Jelinek et al. , 1994; Magerman, 1995)." W97-0319,J96-1002,o,"164 and Itai, 1990; Dagan et al. , 1995; Kennedy and Boguraev, 1996a; Kennedy and Boguraev, 1996b)." W97-0319,J96-1002,o,"(1996) show that this model is a member of an exponential family with one parameter for each constraint, specifically a model of the form 1 ~ I~ (x,~) p(yl ) = E' in which z(x) = eZ, Y The parameters A1, , An are Lagrange multipliers that impose the constraints corresponding to the chosen features fl, -,fnThe term Z(x) normalizes the probabilities by summing over all possible outcomes y. Berger et al." W97-0319,J96-1002,o,"Figure 1 exhibits this scenario with a typical IE system such as SRI's FASTUS system (Hobbs et al. , 1996)." W97-1005,J96-1002,o,"Statistical and information theoretic approaches (Hindle and Rooth, 1993), (Ratnaparkhi et al. , 1994),(Collins and Brooks, 1995), (Franz, 1996) Using lexical collocations to determine PPA with statistical techniques was first proposed by (Hindle and Rooth, 1993)." W97-1005,J96-1002,o,"The approach made use of a maximum entropy model (Berger et al. , 1996) formulated from frequency information for various combinations of the observed features." 
W98-0701,J96-1002,o,", i.e.: (ll) Lj = ~ maz(zi(j, u)) i=I where xi(j,u)E Qi and max(xi(j,u)) is the highest score in the line of the matrix Qi which corresponds to the head word sense j. n is the number of modifiers of the head word h at the current tree level, and k i Lj = j~l Lj where k is the number of senses of the head word h. The reason why gj (I0) is calculated as a sum of the best scores (ll), rather than by using the traditional maximum likelihood estimate (Berger et al. , 1996)(Gah eta\[." W98-0701,J96-1002,o,"To determine the tree head-word we used a set of rules similar to that described by (Magerman, 1995)(Jelinek et al. , 1994) and also used by (Collins, 1996), which we modified in the following way: The head of a prepositional phrase (PP-IN NP) was substituted by a function the name of which corresponds to the preposition, and its sole argument corresponds to the head of the noun phrase NP." W98-1117,J96-1002,o,"Its applications range from sentence boundary disambiguation (Reynar and Ratnaparkhi, 1997) to part-of-speech tagging (Ratnaparkhi, 1996), parsing (Ratnaparkhi, 1997) and machine translation (Berger et al. , 1996)." W98-1118,J96-1002,p,"Clearly a more sophisticated feature selection routine such as the ones in (Berger et al. , 1996), or (Berger and Printz, 1998) would be required in this case." W98-1118,J96-1002,o,"Other recent work has applied M.E. to language modeling (Rosenfeld, 1994), machine translation (Berger et al. , 1996), and reference resolution (Kehler, 1997)." W98-1118,J96-1002,o,"This allows us to compute the conditional probability as follows (Berger et al. , 1996): P(flh) = ~i~ '(h'I) (2) Z~(h) Z~(h) = ~I~I~ '(h'~) (a) ff i The maximum entropy estimation technique guarantees that for every feature gi, the expected value of gi according to the M.E. model will equal the empirical expectation of gi in the training corpus." W98-1118,J96-1002,p,"More complete discussions of M.E. 
as applied to computational linguistics, including a description of the M.E. estimation procedure can be found in (Berger et al. , 1996) and (Della Pietra et al. , 1995)." A00-1007,J96-2004,o,"1 Introduction on measures for inter-rater reliability (Carletta, 1996), on frameworks for evaluating spoken dialogue agents (Walker et al. , 1998) and on the use of different corpora in the development of a particular system (The Carnegie-Mellon Communicator, Eskenazi et al." A00-1012,J96-2004,o,Carletta (1996) argues that the kappa statistic (K) should be adopted to judge annotator consistency for classification tasks in the area of discourse and dialogue analysis. A00-1012,J96-2004,o,It has been claimed that content analysis researchers usually regard K > .8 to demonstrate good reliability and .67 < K < .8 allows tentative conclusions to be drawn (see Carletta (1996)). A00-1012,J96-2004,o,"Carletta mentions this problem, asking what the difference would be if the kappa statistic were computed across ""clause boundaries, transcribed word boundaries, and transcribed phoneme boundaries"" (Carletta, 1996, p. 252) rather than the sentence boundaries she suggested." A00-2003,J96-2004,o,"len.: median length of sequences of co-specifying referring expressions with Cohen's kappa (Cohen, 1960; Carletta, 1996)." A97-1050,J96-2004,o,"4.5 Consistency of Annotations In order to assess the consistency of annotation, we follow Carletta (1996) in using Cohen's kappa, a chance-corrected measure of inter-rater agreement." C00-1039,J96-2004,o,"It evaluates the pairwise agreement among a set of coders making category judgments, correcting for expected chance agreement (Carletta, 1996)." C04-1020,J96-2004,o,"In order to determine inter-annotator agreement for the database of annotated texts, we computed kappa statistics (Carletta, 1996)." C04-1034,J96-2004,o,"The reliability for the two annotation tasks (kappa statistics (Carletta, 1996)) was of 0.94 and 0.90 respectively." 
C04-1035,J96-2004,o,"[KD1, 2371] 2.3 Reliability To evaluate the reliability of the annotation, we use the kappa coefficient (K) (Carletta, 1996), which measures pairwise agreement between a set of coders making category judgements, correcting for expected chance agreement." C04-1128,J96-2004,o,"The kappa statistic (Carletta, 1996) for identifying question segments is 0.68, and for linking question and answer segments given a question segment is 0.81." C04-1161,J96-2004,o,Carletta (1996) says that 0.67 < κ < 0.8 allows just tentative conclusions to be drawn. C08-1109,J96-2004,o,"While the need for annotation by multiple raters has been well established in NLP tasks (Carletta, 1996), most previous work in error detection has surprisingly relied on only one rater to either create an annotated corpus of learner errors, or to check the system's output." C96-1059,J96-2004,o,"To support a more rigorous analysis, however, we have followed Carletta's suggestion (1996) of using the K coefficient (Siegel and Castellan, 1988) as a measure of coder agreement." D08-1021,J96-2004,o,"We measured inter-annotator agreement with the Kappa statistic (Carletta, 1996) using the 1,391 items that two annotators scored in common." D09-1150,J96-2004,o,"3.1 Agreement for Emotion Classes The kappa coefficient of agreement is a statistic adopted by the Computational Linguistics community as a standard measure for this purpose (Carletta, 1996)." D09-1155,J96-2004,p,"As agreement measure we choose the Kappa coefficient (Fleiss, 1971; Siegel and Castellan, 1988), the agreement measure predominantly used in natural language processing research (Carletta, 1996)." E06-1007,J96-2004,o,"We then examined the inter-annotator reliability of the annotation by calculating the κ score (Carletta, 1996)." E99-1006,J96-2004,o,"After each step the annotations were compared using the κ statistic as reliability measure for all classification tasks (Carletta, 1996)."
E99-1015,J96-2004,o,"Kappa is a better measurement of agreement than raw percentage agreement (Carletta, 1996) because it factors out the level of agreement which would be reached by random annotators using the same distribution of categories as the real coders." H05-1031,J96-2004,o,5.2 Results on the Newsblaster data We measured how well the models trained on DUC data perform with current news labeled using human 4http://newsblaster.cs.columbia.edu 5κ (kappa) is a measure of inter-annotator agreement over and above what might be expected by pure chance (See Carletta (1996) for discussion of its use in NLP). κ = 1 if there is perfect agreement between annotators and κ = 0 if the annotators agree only as much as you would expect by chance. H05-1115,J96-2004,o,"Once an acceptable rate of interjudge agreement was verified on the first nine clusters (Kappa (Carletta, 1996) of 0.68), the remaining 11 clusters were annotated by one judge each." J00-3003,J96-2004,o,"As argued in Carletta (1996), Kappa values of 0.8 or higher are desirable for detecting associations between several coded variables; we were thus satisfied with the level of agreement achieved." J00-4003,J96-2004,o,Agreement among annotators was measured using the K statistic (Siegel and Castellan 1988; Carletta 1996). J01-3003,J96-2004,o,"Carletta (1996) cites the convention from the domain of content analysis indicating that .67 ≤ K < .8 indicates marginal agreement, while K > .8 is an indication of good agreement." J02-3004,J96-2004,p,"Although the Kappa coefficient has a number of advantages over percentage agreement (e.g. , it takes into account the expected chance interrater agreement; see Carletta (1996) for details), we also report percentage agreement as it allows us to compare straightforwardly the human performance and the automatic methods described below, whose performance will also be reported in terms of percentage agreement."
J02-4001,J96-2004,o,"Other commonly used measures include kappa (Carletta 1996) and relative utility (Radev, Jing, and Budzikowska 2000), both of which take into account the performance of a summarizer that randomly picks passages from the original document to produce an extract." J02-4002,J96-2004,o,"We use the kappa coefficient K (Siegel and Castellan 1988) to measure stability and reproducibility, following Carletta (1996)." J03-1004,J96-2004,o,"As Carletta (1996) notes, many tasks in computational linguistics are simply more difficult than the content analysis classifications addressed by Krippendorff, and according to Fleiss (1981), kappa values between .4 and .75 indicate fair to good agreement anyhow." J04-1005,J96-2004,o,Carletta (1996) deserves the credit for bringing to the attention of computational linguists. J04-3003,J96-2004,o,"One of our goals was to use for this study only information that could be annotated reliably (Passonneau and Litman 1993; Carletta 1996), as we believe this will make our results easier to replicate." J04-3003,J96-2004,o,"The agreement on identifying the boundaries of units, using the kappa statistic discussed in Carletta (1996), was K = .9 (for two annotators and 500 units); the agreement on features (two annotators and at least 200 units) was as follows: utype: K = .76; verbed: K = .9; finite: K = .81." J05-2005,J96-2004,o,"In order to determine interannotator agreement for step 2 of the coding procedure for the database of annotated texts, we calculated kappa statistics (Carletta 1996)."
J05-3001,J96-2004,o,"For example, the coding manual for the Switchboard DAMSL dialogue act annotation scheme (Jurafsky, Shriberg, and Biasca 1997, page 2) states that kappa is used to assess labelling accuracy, and Di Eugenio and Glass (2004) relate reliability to the objectivity of decisions, whereas Carletta (1996) regards reliability as the degree to which we understand the judgments that annotators are asked to make." J05-3001,J96-2004,o,"This is an unsuitable measure for inferring reliability, and it was the use of this measure that prompted Carletta (1996) to recommend chance-corrected measures." J05-3001,J96-2004,p,"Since Jean Carletta (1996) exposed computational linguists to the desirability of using chance-corrected agreement statistics to infer the reliability of data generated by applying coding schemes, there has been a general acceptance of their use within the field." J05-3001,J96-2004,o,The prevalent use of this criterion despite repeated advice that it is unlikely to be suitable for all studies (Carletta 1996; Di Eugenio and Glass 2004; Krippendorff 2004a) is probably due to a desire for a simple system that can be easily applied to a scheme. J07-1002,J96-2004,o,G-Theory and Agreement Indices Two well-known measures for capturing the quality of manual annotations are agreement percentages and the kappa statistic (Cohen 1960; Carletta 1996; Eugenio and Glass 2004). J08-3001,J96-2004,o,"He uses a specific reliability statistic, α, for his measurements, but Carletta (1996) implicitly assumes kappa-like metrics are similar enough in practice for the rule of thumb to apply to them as well. A detailed discussion on the differences and similarities of these, and other, measures is provided by Krippendorff (2004); in this article we will use Cohen's (1960) κ to investigate the value of the 0.8 reliability cut-off for computational linguistics."
J97-1002,J96-2004,o,"4.2 Interpreting reliability results It has been argued elsewhere (Carletta 1996) that since the amount of agreement one would expect by chance depends on the number and relative frequencies of the categories under test, reliability for category classifications should be measured using the kappa coefficient." J97-1003,J96-2004,o,6.1 Reader Judgments There is a growing concern surrounding issues of intercoder reliability when using human judgments to evaluate discourse-processing algorithms (Carletta 1996; Condon and Cech 1995). J97-1003,J96-2004,o,Proposals have recently been made for protocols for the collection of human discourse segmentation data (Nakatani et al. 1995) and for how to evaluate the validity of judgments so obtained (Carletta 1996; Isard and Carletta 1995; Rosé 1995; Passonneau and Litman 1993; Litman and Passonneau 1995). J97-1003,J96-2004,o,Carletta (1996) and Rosé (1995) point out the importance of taking into account the expected chance agreement among judges when computing whether or not judges agree significantly. J97-1003,J96-2004,o,"According to Carletta (1996), K measures pairwise agreement among a set of coders making category judgments, correcting for expected chance agreement as follows: K = (P(A) - P(E)) / (1 - P(E)), where P(A) is the proportion of times that the coders agree and P(E) is the proportion of times that they would be expected to agree by chance." J97-1003,J96-2004,o,"Carletta (1996) also states that in the behavioral sciences, K > .8 signals good replicability, and .67 < K < .8 allows tentative conclusions to be drawn." J97-1005,J96-2004,o,"Reliability metrics (Krippendorff 1980; Carletta 1996) are designed to give a robust measure of how well distinct sets of data agree with, or replicate, one another."
J97-1005,J96-2004,o,"In Hirschberg and Nakatani (1996), average reliability (measured using the kappa coefficient discussed in Carletta [1996]) of segment-initial labels among 3 coders on 9 monologues produced by the same speaker, labeled using text and speech, is .8 or above for both read and spontaneous speech; values of at least .8 are typically viewed as representing high reliability (see Section 3.2)." J98-2001,J96-2004,o,Our study is also different from these previous ones in that measuring the agreement among annotators became an issue (Carletta 1996). J98-2001,J96-2004,o,"Idiom 0 0 1 1 0 2 V. Doubt 3 0 4 0 0 7 Total A 294 160 546 39 1 1,040 In order to measure the agreement in a more precise way, we used the Kappa statistic (Siegel and Castellan 1988), recently proposed by Carletta as a measure of agreement for discourse analysis (Carletta 1996)." J98-2001,J96-2004,o,"And indeed, the agreement figures went up from K = 0.63 to K = 0.68 (ignoring doubts) when we did so, i.e., within the ""tentative"" margins of agreement according to Carletta (1996) (0.68 ≤ K < 0.8)." N01-1010,J96-2004,o,"11 This low agreement ratio is also reflected in a measure called the κ statistic (Carletta, 1996; Bruce and Wiebe, 1998; Ng et al. , 1999)." N01-1010,J96-2004,o,"Normally, κ > .8 is considered a good agreement (Carletta, 1996)." N01-1010,J96-2004,o,"The results are quite promising: our extraction method discovered 89% of the WordNet cousins, and the sense partitions in our lexicon yielded better κ values (Carletta, 1996) than arbitrary sense groupings on the agreement data." N03-1012,J96-2004,o,"The resulting Kappa statistics (Carletta, 1996) over the annotated data yields a0a2a1 a3a5a4a7a6, which seems to indicate that human annotators can reliably distinguish between coherent samples (as in Example (1a)) and incoherent ones (as in Example (1b))."
N04-1026,J96-2004,o,"The two annotators agreed on the annotations of 385/453 turns, achieving 84.99% agreement, with Kappa = 0.68.2 This inter-annotator agreement exceeds that of prior studies of emotion annotation in naturally occurring speech (Carletta, 1996)." N06-2040,J96-2004,o,"The kappa (Carletta, 1996) obtained on this feature was 0.93." N07-1072,J96-2004,o,"The metric we used is the kappa statistic (Carletta, 1996), which factors out the agreement that is expected by chance: κ = (P(A) - P(E)) / (1 - P(E)), where P(A) is the observed agreement among the raters, and P(E) is the expected agreement, i.e., the probability that the raters agree by chance." P00-1051,J96-2004,o,"One of our goals was to use for our study only information that could be annotated reliably (Passonneau and Litman, 1993; Carletta, 1996), as we believe this will make our results easier to replicate." P00-1051,J96-2004,o,"The agreement on identifying the boundaries of units, using the κ statistic discussed in (Carletta, 1996), was κ = .9 (for two annotators and 500 units); the agreement on features (2 annotators and at least 200 units) was as follows: Attribute κ Value utype .76 verbed .9 finite .81 subject .86 NPs Our instructions for identifying NP markables derive from those proposed in the MATE project scheme for annotating anaphoric relations (Poesio et al. , 1999)." P01-1032,J96-2004,o,"On the one hand, even the higher of the kappa coefficients mentioned above is significantly lower than the standard suggested for good reliability (κ > .8) or even the level where tentative conclusions may be drawn (.67 < κ < .8) (Carletta, 1996), (Krippendorff, 1980)."
P01-1038,J96-2004,o,"This information can be annotated reliably (a1a3a2a5a4a7a6a9a8 a10a12a11a14a13a16a15 and a1a17a2a5a4a19a18a20a8 a10a12a11a14a13a16a21 ).4 4Following (Carletta, 1996), we use the κ statistic to estimate reliability of annotation." P01-1051,J96-2004,o,"Analyze resulting findings to determine a progression of competence In (Michaud et al. , 2001) we discuss the initial steps we took in this process, including the development of a list of error codes documented by a coding manual, the verification of our manual and coding scheme by testing inter-coder reliability in a subset of the corpus (where we achieved a Kappa agreement score (Carletta, 1996) of a0 a1a3a2a5a4a7a6 )2, and the subsequent tagging of the entire corpus." P02-1045,J96-2004,o,"The reliability of the annotations was checked using the kappa statistic (Carletta, 1996)." P03-1008,J96-2004,o,"The annotation can be considered reliable (Krippendorff, 1980) with 95% agreement and a kappa (Carletta, 1996) of .88." P03-1048,J96-2004,o,"Co-selection measures include precision and recall of co-selected sentences, relative utility (Radev et al. , 2000), and Kappa (Siegel and Castellan, 1988; Carletta, 1996)." P03-1048,J96-2004,o,"3.1.2 Kappa Kappa (Siegel and Castellan, 1988) is an evaluation measure which is increasingly used in NLP annotation work (Krippendorff, 1980; Carletta, 1996)." P04-1049,J96-2004,o,"Inter-annotator agreement was determined for six pairs of two annotators each, resulting in kappa values (Carletta (1996)) ranging from 0.62 to 0.82 for the whole database (Carlson et al." P04-1088,J96-2004,o,"To support this claim, first, we used the κ coefficient (Krippendorff, 1980; Carletta, 1996) to assess the agreement between the classification made by FLSA and the classification from the corpora; see Table 8."
P05-1031,J96-2004,o,"5To test the reliability of the annotation scheme, we had a subset of the data annotated by two annotators and found a satisfactory κ-agreement (Carletta, 1996) of κ = 0.81." P05-2010,J96-2004,o,"We use the by now standard κ statistic (Di Eugenio and Glass, 2004; Carletta, 1996; Marcu et al. , 1999; Webber and Byron, 2004) to quantify the degree of above-chance agreement between multiple annotators, and the α statistic for analysis of sources of unreliability (Krippendorff, 1980)." P05-2014,J96-2004,o,"Labelling was carried out by three computational linguistics graduate students with 89% agreement resulting in a Kappa statistic of 0.87, which is a satisfactory indication that our corpus can be labelled with high reliability using our tag set (Carletta, 1996)." P06-1050,J96-2004,o,"The kappa statistic (Krippendorff, 1980; Carletta, 1996) has become the de facto standard to assess inter-annotator agreement." P07-3006,J96-2004,o,"Therefore, the results are more informative than a simple agreement average (Cohen, 1960; Carletta, 1996)." P08-2064,J96-2004,o,"Table 1 shows the percentage of agreement in classifying words as compounds or non-compounds (Compound Classification Agreement, CCA) for each language and the Kappa score (Carletta, 1996) obtained from it, and the percentage of words for which also the decomposition provided was identical (Decompounding Agreement, DA)." P09-1094,J96-2004,o,"Kappa is defined as K = (P(A) - P(E)) / (1 - P(E)) (Carletta, 1996), where P(A) is the proportion of times that the labels agree, and P(E) is the proportion of times that they may agree by chance." P09-1095,J96-2004,o,"To measure interannotator agreement, we compute Cohen's Kappa (Carletta, 1996) from the two sets of annotations, obtaining a Kappa value of only 0.43."
P09-1101,J96-2004,o,Kappa (Carletta 1996) is another method of comparing inter-annotator agreement. [Figure 2: number of annotators by number of dialogues completed.] P09-2044,J96-2004,o,"We then used Cohen's Kappa (κ) to determine the level of agreement (Carletta, 1996)." P97-1034,J96-2004,o,"We then used the kappa statistic (Siegel and Castellan, 1988; Carletta, 1996) to assess the level of agreement between the three coders with respect to the 2 An agent holds the task initiative during a turn as long as some utterance during the turn directly proposes how the agents should accomplish their goal, as in utterance (3c)." P97-1034,J96-2004,o,"Carletta suggests that content analysis researchers consider K > .8 as good reliability, with .67 < K < .8 allowing tentative conclusions to be drawn (Carletta, 1996)." P98-1052,J96-2004,o,"Table 1 reports values for the Kappa (K) coefficient of agreement (Carletta, 1996) for Forward and Backward Functions.6 The columns in the tables read as follows: if utterance Ui has tag X, do coders agree on the subtag?" P99-1032,J96-2004,o,"We perform a statistical analysis that provides information that complements the information provided by Cohen's Kappa (Cohen, 1960; Carletta, 1996)." P99-1068,J96-2004,o,"The table also shows Cohen's κ, an agreement measure that corrects for chance agreement (Carletta, 1996); the most important κ value in the table is the value of 0.7 for the two human judges, which can be interpreted as sufficiently high to indicate that the task is reasonably well defined."
W00-1302,J96-2004,o,"We measured stability (the degree to which the same annotator will produce an annotation after 6 weeks) and reproducibility (the degree to which two unrelated annotators will produce the same annotation), using the Kappa coefficient K (Siegel and Castellan, 1988; Carletta, 1996), which controls agreement P(A) for chance agreement P(E): K = (P(A) - P(E)) / (1 - P(E)). Kappa is 0 if agreement is only as would be expected by chance annotation following the same distribution as the observed distribution, and 1 for perfect agreement." W00-1403,J96-2004,o,"The agreement was statistically significant (Kappa = 0.65, p > 0.01 for Japanese and Kappa = 0.748, p > 0.01 for English (Carletta, 1996; Siegel and Castellan, 1988))." W00-1415,J96-2004,o,"In other words, (4b) can be used in substitution of (4a), whereas (5b) cannot, so easily. 4In (Carletta, 1996), a value of K between .8 and 1 indicates good agreement; a value between .6 and .8 indicates some agreement." W02-0207,J96-2004,o,"5 Reliability of Annotations 5.1 The Kappa Statistic To measure the reliability of annotations we used the Kappa statistic (Carletta, 1996)." W02-0226,J96-2004,o,"To test the reliability of group segmentation within GDM-IS, we calculate the kappa coefficient (κ) 8 (Carletta, 1996; Carletta et al. , 1997; Flammia, 1998) to measure pairwise agreement between the subject and the expert." W02-0226,J96-2004,o," From (Carletta, 1996) 9 Combined metric F = (β² + 1)PR / (β²P + R), from (Jurafsky and Martin, 2000, p.578), β = 1." W02-0806,J96-2004,o,"A detailed discussion on the use of kappa in natural language processing is presented in (Carletta, 1996)."
W02-0808,J96-2004,o,"We chose nouns that occur a minimum of 10 times in the corpus, have no undetermined translations and at least five different translations in the six non-English languages, and have the log likelihood score of at least 18; that is: LL(TT, TS) = 2 Σij nij log(nij n** / (ni* n*j)) ≥ 18, where nij stands for the number of times TT and TS have been seen together in aligned sentences, ni* and n*j stand for the number of occurrences of TT and TS, respectively, and n** represents the total 4 We computed raw percentages only; common measures of annotator agreement such as the Kappa statistic (Carletta, 1996) proved to be inappropriate for our two-category (yes-no) classification scheme." W02-0904,J96-2004,o,"The kappa value (Carletta, 1996) was used to evaluate the agreement among the judges and to estimate how difficult the evaluation task was." W02-1040,J96-2004,o,"The reliability of the annotations was checked using the kappa statistic (Carletta, 1996)." W03-0201,J96-2004,o,"Though inter-rater reliability using the kappa statistic (Carletta 1996) may be calculated for each group, the distribution of categories in the contribution group was highly skewed and warrants further discussion." W03-0902,J96-2004,o,"Overall % agreement among judges for 250 propositions 60.1 A commonly used metric for evaluating interrater reliability in categorization of data is the kappa statistic (Carletta, 1996)." W03-1010,J96-2004,o,"The κ statistic (Carletta, 1996) is recast as: κ(fs,w)(sys,sys) = (agr(fs,w)(sys,sys) - Σ agr(fs,·)(sys,sys)/N) / (1 - Σ agr(fs,·)(sys,sys)/N). In this modified form, κ(fs,w) represents the divergence in relative agreement wrt fs for target noun w, relative to the mean relative agreement wrt fs over all words." W03-1903,J96-2004,o,"In fact, it has been shown that the agreement of subjects annotating bridging (Poesio and Vieira, 1998) or discourse (Cimiano, 2003) relations can be too low for tentative conclusions to be drawn (Carletta, 1996)."
W03-1903,J96-2004,o,"In this sense, instead of measuring only the categorial agreement between annotators with the kappa statistic (Carletta, 1996) or the performance of a system in terms of precision/recall, we could take into account the hierarchical organization of the categories or concepts by making use of measures considering the hierarchical distance between two concepts such as proposed by (Hahn and Schnattinger, 1998) or (Mädche et al. , 2002)." W03-2802,J96-2004,o,"With the help of the kappa coefficient (Carletta, 1996) proposes to represent the dialog success independently from the task intrinsic complexity, thus opening the way to task generic comparative evaluation." W04-0204,J96-2004,o,"The intercoder reliability is a constant concern of everyone working with corpora to test linguistic hypotheses (Carletta, 1996), and the more so when one is coding for semanto-pragmatic interpretations, as in the case of the analysis of connectives." W04-0210,J96-2004,o,"The agreement on identifying the boundaries of units, using the κ statistic discussed in (Carletta, 1996), was κ = .9 (for two annotators and 500 units); the agreement on features (2 annotators and at least 200 units) was as follows: UTYPE: κ = .76; VERBED: κ = .9; FINITE: κ = .81." W04-0216,J96-2004,o,"6 Coding reliability The reliability of the annotation was evaluated using the kappa statistic (Carletta, 1996)." W04-0713,J96-2004,o,"The reliability for the two annotation tasks (κ-statistics (Carletta, 1996)) was of 0.94 and 0.90 respectively." W04-0807,J96-2004,o,"In addition to raw inter-tagger agreement, the kappa statistic, which removes from the agreement rate the amount of agreement that is expected by chance (Carletta, 1996), was also determined." W04-1211,J96-2004,o,Kappa coefficient is given in (1) (Carletta 1996): (1) Kappa = (P(A) - P(E)) / (1 - P(E)) where P(A) is the proportion of times the annotators actually agree and P(E) is the proportion of times the annotators are expected to agree due to chance 3.
W04-1211,J96-2004,o,"An acceptable agreement for most NLP classification tasks lies between 0.7 and 0.8 (Carletta 1996, Poesio and Vieira 1998)." W04-2312,J96-2004,n,"The class-based kappa statistic of (Cohen, 1960; Carletta, 1996) cannot be applied here, as the classes vary depending on the number of ambiguities per entry in the lexicon." W04-2323,J96-2004,o,"The κ coefficient is computed as follows: κ = (PA - PE) / (1 - PE). Carletta (1996) reports that content analysis researchers generally think of κ > .8 as good reliability, with .67 < κ < .8 allowing tentative conclusions to be drawn. All that remains is to define the chance agreement probability PE. Let Pb(s) and Pe(s) be the fraction of utterances that begin or end one or more segments in segmentation s respectively." W04-2326,J96-2004,o,"The two annotators agreed on the annotations of 385/453 turns, achieving 84.99% agreement (Kappa = 0.68 (Carletta, 1996))." W04-2703,J96-2004,o,"4 Data analysis To test the reliability of the annotation, we first considered the kappa statistic (Siegel and Castellan, 1988) which is used extensively in empirical studies of discourse (Carletta, 1996)." W04-2802,J96-2004,o,"Much like kappa statistics proposed by Carletta (1996), existing employments of majority class baselines assume an equal set of identical potential mark-ups, i.e. attributes and their values, for all markables." W05-0307,J96-2004,o,"We evaluated annotation reliability by using the Kappa statistic (Carletta, 1996)." W05-0901,J96-2004,o,"In the SUMMAC experiments, the Kappa score (Carletta, 1996; Eugenio and Glass, 2004) for interannotator agreement was reported to be 0.38 (Mani et al. , 2002)." W05-0906,J96-2004,o,"Computational linguistics research generally attaches great value to high kappa measures (Carletta, 1996), which indicate high human agreement on a particular task."
W05-1009,J96-2004,o,"The judges had an acceptable 0.74 mean agreement (Carletta, 1996) for the assignment of the primary class, but a meaningless 0.21 for the secondary class (they did not even agree on which lemmata were polysemous)." W05-1612,J96-2004,o,"The table also shows the κ-score, which is another commonly used measure for inter-annotator agreement [Carletta, 1996]." W06-0906,J96-2004,o,"In the literature on the kappa statistic, most authors address only category data; some can handle more general data, such as data in interval scales or ratio scales (Krippendorff, 1980; Carletta, 1996)." W06-0906,J96-2004,o,"The kappa statistic (Krippendorff, 1980; Carletta, 1996) has become the de facto standard to assess inter-annotator agreement." W06-1203,J96-2004,o,"4This was a straightforward task; two annotators annotated independently, with very high agreement: kappa score of over 0.95 (Carletta, 1996)." W06-1312,J96-2004,o,"7Following Carletta (1996), we measure agreement in Kappa, which follows the formula K = (P(A) - P(E)) / (1 - P(E)), where P(A) is observed, and P(E) expected agreement." W06-1314,J96-2004,o,"Inter-annotator agreement is typically measured by the kappa statistic (Carletta, 1996). [Figure 2: Distribution of κ (inter-annotator agreement) across the 54 ICSI meetings tagged by two annotators.]" W06-1318,J96-2004,o,"Agreement is sometimes measured as percentage of the cases on which the annotators agree, but more often expected agreement is taken into account in using the kappa statistic (Cohen, 1960; Carletta, 1996), which is given by: κ = (po - pe) / (1 - pe) (1), where po is the observed proportion of agreement and pe is the proportion of agreement expected by chance." W06-1318,J96-2004,n,"Ever since its introduction in general (Cohen, 1960) and in computational linguistics (Carletta, 1996), many researchers have pointed out that there are quite some problems in using κ (e.g.
W06-1318,J96-2004,o,"Following the suggestions in (Carletta, 1996), Core et al. consider kappa scores above 0.67 to indicate significant agreement and scores above 0.8 reliable agreement." W06-1602,J96-2004,o,"Inter-annotator agreement was assessed mainly using f-score and percentage agreement as well as the kappa statistics (K), where applicable (Carletta, 1996). [Table 1: Annotation examples of superlative adjectives; columns: example, sup span, det, num, car, mod, comp set. Example rows: The third-largest thrift institution in Puerto Rico also [] (2-2, def, sg, no, ord, 3-7); The Agriculture Department reported that feedlots in the 13 biggest ranch states held [] (9-10, def, pl, yes, no, 11-12); The failed takeover would have given UAL employees 75 % voting control of the nation's second-largest airline [] (17-17, pos, sg, no, ord, 14-18).]" W06-1612,J96-2004,o,"Inter-annotator agreement was measured using the kappa (K) statistics (Cohen, 1960; Carletta, 1996) on 1,502 instances (three Switchboard dialogues) marked by two annotators who followed specific written guidelines." W06-1613,J96-2004,o,"4Following Carletta (1996), we measure agreement in Kappa, which follows the formula K = (P(A) - P(E)) / (1 - P(E)), where P(A) is observed, and P(E) expected agreement." W06-2404,J96-2004,o,"7 For the most frequent 184 expressions, on the average, the agreement rate between two human annotators is 0.93 and the Kappa value is 0.73, which means allowing tentative conclusions to be drawn (Carletta, 1996; Ng et al. , 1999)." W06-2505,J96-2004,o,"5.1 Agreement between translators In an attempt to quantify the agreement between the two groups of translators, we computed the Kappa coefficient for annotation tasks, as defined by Carletta (1996)." W06-3328,J96-2004,o,"Secondly, we used the Kappa coefficient (Carletta, 1996), which has become the standard evaluation metric and the score obtained was 0.905." W06-3406,J96-2004,o,"The Kappa statistic (Carletta, 1996) is typically used to measure the human interrater agreement."
W07-0718,J96-2004,p,"6.1 Inter- and Intra-annotator agreement We measured pairwise agreement among annotators using the kappa coefficient (K) which is widely used in computational linguistics for measuring agreement in category judgments (Carletta, 1996)." W07-1602,J96-2004,o,"For these classifications, we calculated a kappa statistic of 0.528 (Carletta, 1996)." W07-1707,J96-2004,o,"Obtained percent agreement of 0.988 and κ coefficient (Carletta, 1996) of 0.975 suggest high convergence of both annotations." W07-2007,J96-2004,o,"Annotation was highly reliable with a kappa (Carletta, 1996) of 3https://www.cia.gov/cia/publications/ factbook/index.html 4Given that the task is not about standard Named Entity Recognition, we assume that the general semantic class of the name is already known." W08-0112,J96-2004,o,"3 Analysis Results 3.1 Kappa Statistic Kappa coefficient (Carletta, 1996) is commonly used as a standard to reflect inter-annotator agreement." W08-0309,J96-2004,o,"7.1 Inter- and Intra-annotator agreement We measured pairwise agreement among annotators using the kappa coefficient (K) which is widely used in computational linguistics for measuring agreement in category judgments (Carletta, 1996)." W09-2109,J96-2004,o,"The resulting intercoder reliability, measured with the Kappa statistic (Carletta, 1996), is considered excellent (κ = 0.80)." W96-0402,J96-2004,o,"The percentage agreement for each of the features is shown in the following table: feature percent agreement form 100% intentionality 74.9% awareness 93.5% safety 90.7% As advocated by Carletta (1996), we have used the Kappa coefficient (Siegel and Castellan, 1988) as a measure of coder agreement." W97-0113,J96-2004,o,"We will do this by examining how humans perform on summary extraction and evaluating the reliability of their performance, using the kappa statistic, a metric standardly used in the behavioral sciences (Jean Carletta, 1996; Sidney Siegel and N. John Castellan Jr. , 1988)."
W97-0113,J96-2004,o,"Measurement of Reliability The Kappa Statistic Following Jean Carletta (1996), we use the kappa statistic (Sidney Siegel and N. John Castellan Jr. , 1988) to measure degree of agreement among subjects." W97-0113,J96-2004,p,"As aptly pointed out in Jean Carletta (1996), agreement measures proposed so far in the computational linguistics literature have failed to ask an important question of whether results obtained using agreement data are in any way different from random data." W97-0203,J96-2004,o,"Its roots are the same as computational linguistics (CL), but it has been largely ignored in CL until recently (Dunning, 1993; Carletta, 1996; Kilgarriff, 1996)." W97-0320,J96-2004,o,"As in much recent empirical work in discourse processing (e.g. , Ahrenberg et al. 1995; Isard & Carletta 1995; Litman & Passonneau 1995; Moser & Moore 1995; Hirschberg & Nakatani 1996), we performed an intercoder reliability study investigating agreement in annotating the times." W97-0320,J96-2004,o,"Intercoder reliability was assessed using Cohen's Kappa statistic (κ) (Siegel & Castellan 1988, Carletta 1996)." W97-0320,J96-2004,o,"A κ value of 0.8 or greater indicates a high level of reliability among raters, with values between 0.67 and 0.8 indicating only moderate agreement (Hirschberg & Nakatani 1996; Carletta 1996)." W98-0317,J96-2004,o,"Cohen's Kappa κ (Bakeman and Gottman, 1986; Carletta, 1996)." W98-0319,J96-2004,o,"The labeling agreement was 84% (κ = .80; (Carletta, 1996))." W99-0305,J96-2004,o,"Such a coding procedure covers, for example, how segmentation of a corpus is performed, if multiple tagging is allowed and if so, is it unlimited or are there just certain combinations of tags not allowed, is look ahead permitted, etc. For further information on coding procedures we want to refer to [Dybkjær et al. 1998] and for good examples of coding books see, for example, [Carletta et al. 1996], [Alexandersson et al. 1998], or [Thymé-Gobbel and Levin 1998]."
W99-0305,J96-2004,o,"A CHECK move requests the partner to confirm information that the speaker has some reason to believe, but is not entirely sure about [Carletta et al. 1996]."
W99-0305,J96-2004,o,"However, CHECK moves are almost always about some information which the speaker has been told [Carletta et al. 1996] - a description that models the backward looking functionality of a dialogue act."
W99-0307,J96-2004,o,"It has been argued that the reliability of a coding schema can be assessed only on the basis of judgments made by naive coders (Carletta, 1996)."
W99-0307,J96-2004,o,"k = (P(A) - P(E)) / (1 - P(E)) (3) Carletta (1996) suggests that the units over which the kappa statistic is computed affects the outcome."
W99-0311,J96-2004,o,"The rationale for using Kappa is explained in (Carletta, 1996)."
W99-0502,J96-2004,o,"The Kappa statistic k (Cohen, 1960) is a better measure of inter-annotator agreement which takes into account the effect of chance agreement. It has been used recently within computational linguistics to measure inter-annotator agreement (Bruce and Wiebe, 1998; Carletta, 1996; Veronis, 1998). [...] A Kappa value of 0.8 and above is considered as indicating good agreement (Carletta, 1996)."
W99-0508,J96-2004,o,"Senses 2.3 and 1.4 have CIs of 1 because each of these senses exists in a single occurrence in the corpus, and have therefore been discarded from consideration of CIs for individual senses. We are currently investigating the use of the Kappa statistic (Carletta, 1996) to normalize these sparse data."
otComputer~ and the Humamttes, 33 4-5 Leacock, Claudia, Towell, Geoffrey and Voorhees, Ellen (1993) Corpus-based stattstlcal sense resolution Proceedtng~ of the ARPA Human Language Technology Worsl~shop, San Francisco, Morgan Kautman Melamed, I Dan (1997) Measuring Semantic Entropy ACL-SIGLEX Workshop Taggmg Tert wtth Lextcal Semanttcs Why, What, and How ~ April 4-5, 1997, Washington, D C, 41-46 Mtllet, George A, Beckwlth, Richard T Fellbaum." C02-1003,J97-3002,o,"1 A bilingual language model ITG Wu (1997) has proposed a bilingual language model called Inversion Transduction Grammar (ITG), which can be used to parse bilingual sentence pairs simultaneously." C02-1003,J97-3002,o,"For details please refer to (Wu 1995, Wu 1997)." C02-1003,J97-3002,o,"S BNP VP PP VP Mr./g1820g10995 Wu/g2568 plays/g6183 basketball/g12738g10711 on/e Sunday/g7155g7411g3837 S ./g452 Figure 1 Inversion transduction Grammar parsing Any ITG can be converted to a normal form, where all productions are either lexical productions or binary-fanout nonterminal productions(Wu 1997)." C02-1003,J97-3002,p,"Because the expressiveness characteristics of ITG naturally constrain the space of possible matching in a highly appropriate fashion, BTG achieves encouraging results for bilingual bracketing using a word-translation lexicon alone (Wu 1997)." C02-1003,J97-3002,o,The optimal bilingual parsing tree for a given sentence-pair can be computed using dynamic programming (DP) algorithm(Wu 1997). C02-1010,J97-3002,o,"415-458, Wu, Dekai (1997) Stochastic inversion transduction grammars and bilingual parsing of parallel corpora." C02-1010,J97-3002,o,"To deal with the difficulties in parse-to-parse matching, Wu (1997) utilizes inversion transduction grammar (ITG) for bilingual parsing." C02-1010,J97-3002,o,"2.2 The Crossing Constraint According to (Wu, 1997), crossing constraint can be defined in the following." 
C04-1005,J97-3002,o,"In addition, Wu (1997) used a stochastic inversion transduction grammar to simultaneously parse the sentence pairs to get the word or phrase alignments." C04-1006,J97-3002,o,"Bilingual bracketing methods were used to produce a word alignment in (Wu, 1997)." C04-1030,J97-3002,o,"3.2 ITG Constraints In this section, we describe the ITG constraints (Wu, 1995; Wu, 1997)." C04-1032,J97-3002,o,"Bilingual bracketing methods were used to produce a word alignment in (Wu, 1997)." C04-1060,J97-3002,o,"Wu (1997) modeled the reordering process with binary branching trees, where each production could be either in the same or in reverse order going from source to target language." C04-1060,J97-3002,o,"Zens and Ney (2003) compute the viterbi alignments for German-English and French-English sentences pairs using IBM Model 5, and then measure how many of the resulting alignments fall within the hard constraints of both Wu (1997) and Berger et al." C04-1060,J97-3002,o,"This gives the translation model more information about the structure of the source language, and further constrains the reorderings to match not just a possible bracketing as in Wu (1997), but the specific bracketing of the parse tree provided." C04-1060,J97-3002,o,"In this paper, we make a direct comparison of a syntactically unsupervised alignment model, based on Wu (1997), with a syntactically supervised model, based on Yamada and Knight (2001)." C04-1060,J97-3002,o,2 The Inversion Transduction Grammar The Inversion Transduction Grammar of Wu (1997) can be thought as a a generative process which simultaneously produces strings in both languages through a series of synchronous context-free grammar productions. C04-1060,J97-3002,o,"In our experiments we use a grammar with a start symbol S, a single preterminal C, and two nonterminals A and B used to ensure that only one parse can generate any given word-level alignment (ignoring insertions and deletions) (Wu, 1997; Zens and Ney, 2003)." 
C04-1060,J97-3002,o,"The trees may be learned directly from parallel corpora (Wu, 1997), or provided by a parser trained on hand-annotated treebanks (Yamada and Knight, 2001)." C04-1060,J97-3002,o,"Inversion Transduction Grammar (ITG) is the model of Wu (1997), Tree-to-String is the model of Yamada and Knight (2001), and Tree-to-String, Clone allows the node cloning operation described above." C04-1134,J97-3002,o,"Inspired by previous work on syntax-driven semantic parsing (Gildea and Jurafsky, 2002; Fleischman et al. , 2003), and syntax-based machine translation (Wu, 1997; Cuerzan and Yarowsky, 2002), we postulate that syntactically similar sentences with the same predicate also share similar semantic roles." C08-1127,J97-3002,o,"The straight-forward way is to first generate the best BTG tree for each sentence pair using the way of (Wu, 1997), then annotate each BTG node with linguistic elements by projecting source-side syntax tree to BTG tree, and finally extract rules from these annotated BTG trees." C08-1127,J97-3002,o,"1 Introduction Formal grammar used in statistical machine translation (SMT), such as Bracketing Transduction Grammar (BTG) proposed by (Wu, 1997) and the synchronous CFG presented by (Chiang, 2005), provides a natural platform for integrating linguistic knowledge into SMT because hierarchical structures produced by the formal grammar resemble linguistic structures." C08-1138,J97-3002,o,"Many grammars, such as finite-state grammars (FSG), bracket/inversion transduction grammars (BTG/ITG) (Wu, 1997), context-free grammar (CFG), tree substitution grammar (TSG) (Comon et al., 2007) and their synchronous versions, have been explored in SMT." 
C08-2026,J97-3002,o,"Coling 2008: Companion volume Posters and Demonstrations, pages 103-106 Manchester, August 2008 Range concatenation grammars for translation Anders Søgaard University of Potsdam soegaard@ling.uni-potsdam.de Abstract Positive and bottom-up non-erasing binary range concatenation grammars (Boullier, 1998) with at most binary predicates ((2,2)-BRCGs) is a O(|G|n^6) time strict extension of inversion transduction grammars (Wu, 1997) (ITGs)." C08-2026,J97-3002,o,"It is shown that (2,2)-BRCGs induce inside-out alignments (Wu, 1997) and cross-serial discontinuous translation units (CDTUs); both phenomena can be shown to occur frequently in many hand-aligned parallel corpora." C08-2026,J97-3002,o,"ITGs translate into simple (2,2)-BRCGs in the following way; see Wu (1997) for a definition of ITGs." C08-2026,J97-3002,n,"Inside-out alignments (Wu, 1997), such as the one in Example 1.3, cannot be induced by any of these theories; in fact, there seems to be no useful synchronous grammar formalisms available that handle inside-out alignments, with the possible exceptions of synchronous tree-adjoining grammars (Shieber and Schabes, 1990), Bertsch and Nederhof (2001) and generalized multitext grammars (Melamed et al., 2004), which are all way more complex than ITG, STSG and (2,2)-BRCG." D07-1006,J97-3002,o,"In designing LEAF, we were also inspired by dependency-based alignment models (Wu, 1997; Alshawi et al. , 2000; Yamada and Knight, 2001; Cherry and Lin, 2003; Zhang and Gildea, 2004)." D07-1030,J97-3002,o,"SMT has evolved from the original word-based approach (Brown et al. , 1993) into phrase-based approaches (Koehn et al. , 2003; Och and Ney, 2004) and syntax-based approaches (Wu, 1997; Alshawi et al. , 2000; Yamada and Knignt, 2001; Chiang, 2005)." D07-1038,J97-3002,o,"Construct a parse chart with a CKY parser simultaneously constrained on the foreign string and English tree, similar to the bilingual parsing of Wu (1997) 1."
D07-1091,J97-3002,o,"The goal of integrating syntactic information into the translation model has prompted many researchers to pursue tree-based transfer models (Wu, 1997; Alshawi et al. , 1998; Yamada and Knight, 2001; Melamed, 2004; Menezes and Quirk, 2005; Galley et al. , 2006), with increasingly encouraging results." D08-1012,J97-3002,o,"To follow related work and to focus on the effects of the language model, we present translation results under an inversion transduction grammar (ITG) translation model (Wu, 1997) trained on the Europarl corpus (Koehn, 2005), described in detail in Section 3, and using a trigram language model." D08-1012,J97-3002,o,"3 Inversion Transduction Grammars While our approach applies in principle to a variety of machine translation systems (phrase-based or syntactic), we will use the inversion transduction grammar (ITG) approach of Wu (1997) to facilitate comparison with previous work (Zens and Ney, 2003; Zhang and Gildea, 2008) as well as to focus on language model complexity." D08-1016,J97-3002,n,"String alignment with synchronous grammars is quite expensive even for simple synchronous formalisms like ITG (Wu, 1997), but Duchi et al." D08-1060,J97-3002,o,"(2007) are appealing, as they have rather simple structure, modeling only NP, VP and LCP via one-level sub-tree structure with two children, in the source parse-tree (a special case of ITG (Wu, 1997))." D08-1066,J97-3002,o,"We use binary Synchronous Context-Free Grammar (bSCFG), based on Inversion Transduction Grammar (ITG) (Wu, 1997; Chiang, 2005a), to define the set of eligible segmentations for an aligned sentence pair." D08-1066,J97-3002,o,"In particular, this holds for the SCFG implementing Inversion Transduction Grammar (Wu, 1997). (For two sequences of numbers, the notation y < z stands for: for all y in y and z in z, y < z.)"
D08-1089,J97-3002,p,"Coming from the other direction, such observations about phrase reordering between different languages are precisely the kinds of facts that parsing approaches to machine translation are designed to handle and do successfully handle (Wu, 1997; Melamed, 2003; Chiang, 2005)." D08-1089,J97-3002,o,"To be able identify that adjacent blocks (e.g., the development and and progress) can be merged into larger blocks, our model infers binary (non-linguistic) trees reminiscent of (Wu, 1997; Chiang, 2005)." D09-1021,J97-3002,o,"Early examples of this work include (Alshawi, 1996; Wu, 1997); more recent models include (Yamada and Knight, 2001; Eisner, 2003; Melamed, 2004; Zhang and Gildea, 2005; Chiang, 2005; Quirk et al., 2005; Marcu et al., 2006; Zollmann and Venugopal, 2006; Nesson et al., 2006; Cherry, 2008; Mi et al., 2008; Shen et al., 2008)." D09-1039,J97-3002,o,"Each linked fragment pair consists of a source-language side and a target-language side, similar to (Wu, 1997)." D09-1050,J97-3002,o,"Since many concepts are expressed by idiomatic multiword expressions instead of single words, and different languages may realize the same concept using different numbers of words (Ma et al., 2007; Wu, 1997), word alignment based methods, which are highly dependent on the probability information at the lexical level, are not well suited for this type of translation." D09-1073,J97-3002,o,"In this paper, we bring forward the first idea by studying the issue of how to utilize structured syntactic features for phrase reordering in a phrase-based SMT system with BTG (Bracketing Transduction Grammar) constraints (Wu, 1997)."
D09-1073,J97-3002,o,"Recently, many phrase reordering methods have been proposed, ranging from simple distance-based distortion model (Koehn et al., 2003; Och and Ney, 2004), flat reordering model (Wu, 1997; Zens et al., 2004), lexicalized reordering model (Tillmann, 2004; Kumar and Byrne, 2005), to hierarchical phrase-based model (Chiang, 2005; Setiawan et al., 2007) and classifier-based reordering model with linear features (Zens and Ney, 2006; Xiong et al., 2006; Zhang et al., 2007a; Xiong et al., 2008)." D09-1073,J97-3002,p,"1 Introduction Phrase-based method (Koehn et al., 2003; Och and Ney, 2004; Koehn et al., 2007) and syntax-based method (Wu, 1997; Yamada and Knight, 2001; Eisner, 2003; Chiang, 2005; Cowan et al., 2006; Marcu et al., 2006; Liu et al., 2007; Zhang et al., 2007c, 2008a, 2008b; Shen et al., 2008; Mi and Huang, 2008) represent the state-of-the-art technologies in statistical machine translation (SMT)." D09-1105,J97-3002,o,"Figure 1: A grammar for a large neighborhood of permutations, given one permutation pi of length n. The Si,k rules are instantiated for each 0 <= i < j < k <= n, and the Si-1,i rules for each 0 < i <= n (Wu, 1997)." E06-1019,J97-3002,o,"3.1 A simple solution Wu (1997) suggests that in order to have an ITG take advantage of a known partial structure, one can simply stop the parser from using any spans that would violate the structure." E06-1019,J97-3002,o,"Having a single, canonical tree structure for each possible alignment can help when flattening binary trees, as it indicates arbitrary binarization decisions (Wu, 1997)."
E06-1019,J97-3002,o,"Normally, one would eliminate the redundant structures produced by the grammar in (1) by replacing it with the canonical form grammar (Wu, 1997), which has the following form: S -> A | B | C; A -> [AB] | [BB] | [CB] | [AC] | [BC] | [CC]; B -> <AA> | <BA> | <CA> | <AC> | <BC> | <CC>; C -> e/f (2) By design, this grammar allows only one structure. Figure 3: An example of how dependency trees interact with ITGs." H01-1035,J97-3002,p,"Wu (1995, 1997) investigated the use of concurrent parsing of parallel corpora in a transduction inversion framework, helping to resolve attachment ambiguities in one language by the coupled parsing state in the second language." H05-1023,J97-3002,o,"In this respect, it resembles bilingual bracketing (Wu, 1997), but our model has more lexical items in the blocks with many-to-many word alignment freedom in both inner and outer parts." H05-1036,J97-3002,o,"These techniques included unweighted FS morphology, conditional random fields (Lafferty et al. , 2001), synchronous parsers (Wu, 1997; Melamed, 2003), lexicalized parsers (Eisner and Satta, 1999), partially supervised training à la (Pereira and Schabes, 1992), and grammar induction (Klein and Manning, 2002)." H05-1098,J97-3002,o,"Since one of these filters restricts the number of nonterminal symbols to two, our extracted grammar is equivalent to an inversion transduction grammar (Wu, 1997)." H05-1101,J97-3002,o,"Among the several proposals, we mention here the models presented in (Wu, 1997; Wu and Wong, 1998), (Alshawi et al. , 2000), (Yamada and Knight, 2001), (Gildea, 2003) and (Melamed, 2003)."
H05-1101,J97-3002,o,"This problem has been considered for instance in (Wu, 1997) for his inversion transduction grammars and has applications in the support of several tasks of automatic annotation of parallel corpora, as for instance segmentation, bracketing, phrasal and word alignment." I05-1023,J97-3002,o,"We present a new implication of Wu's (1997) Inversion Transduction Grammar (ITG) Hypothesis, on the problem of retrieving truly parallel sentence translations from large collections of highly non-parallel documents." I08-2087,J97-3002,o,"However, formally syntax-based methods propose simple but efficient ways to parse and translate sentences (Wu 1997; Chiang 2005)." I08-8001,J97-3002,p,"Some methods which can offer powerful reordering policies have been proposed like syntax based machine translation (Yamada and Knight, 2001) and Inversion Transduction Grammar (Wu, 1997)." J00-1004,J97-3002,o,"Concluding Remarks Formalisms for finite-state and context-free transduction have a long history (e.g. , Lewis and Stearns 1968; Aho and Ullman 1972), and such formalisms have been applied to the machine translation problem, both in the finite-state case (e.g. , Vilar et al. 1996) and the context-free case (e.g. , Wu 1997)." J07-2003,J97-3002,o,"This also makes our grammar weakly equivalent to an inversion transduction grammar (Wu 1997), although the conversion would create a very large number of new nonterminal symbols." J07-2003,J97-3002,o,"Because our system uses a synchronous CFG, it could be thought of as an example of syntax-based statistical machine translation (MT), joining a line of research (Wu 1997; Alshawi, Bangalore, and Douglas 2000; Yamada and Knight 2001) that has been fruitful but has not previously produced systems that can compete with phrase-based systems in large-scale translation tasks such as the evaluations held by NIST."
J07-2003,J97-3002,o,"At one extreme are those, exemplified by that of Wu (1997), that have no dependence on syntactic theory beyond the idea that natural language is hierarchical." N03-1017,J97-3002,o,"Another motivation to evaluate the performance of a phrase translation model that contains only syntactic phrases comes from recent efforts to built syntactic translation models [Yamada and Knight, 2001; Wu, 1997]." N03-2017,J97-3002,o,"Methods such as (Wu, 1997), (Alshawi et al. , 2000) and (Lopez et al. , 2002) employ a synchronous parsing procedure to constrain a statistical alignment." N03-2017,J97-3002,o,"More recently, there have been many proposals to introduce syntactic knowledge into SMT models (Wu, 1997; Alshawi et al. , 2000; Yamada and Knight, 2001; Lopez et al. , 2002)." N04-1014,J97-3002,o,"Recently, specific probabilistic tree-based models have been proposed not only for machine translation (Wu, 1997; Alshawi, Bangalore, and Douglas, 2000; Yamada and Knight, 2001; Gildea, 2003; Eisner, 2003), but also for This work was supported by DARPA contract F49620-001-0337 and ARDA contract MDA904-02-C-0450." N04-1023,J97-3002,o,Wu (1997) introduced constraints on alignments using a probabilistic synchronous context-free grammar restricted to Chomsky normal form. N04-1023,J97-3002,o,"(Wu, 1997) was an implicit or self-organizing syntax model as it did not use a Treebank." N04-1035,J97-3002,o,"One approach here is that of Wu (1997), in which word-movement is modeled by rotations at unlabeled, binary-branching nodes." N06-1031,J97-3002,o,"Some approaches have used syntax at the core (Wu, 1997; Alshawi et al. , 2000; Yamada and Knight, 2001; Gildea, 2003; Eisner, 2003; Hearne and Way, 2003; Melamed, 2004) while others have integrated syntax into existing phrase-based frameworks (Xia and McCord, 2004; Chiang, 2005; Collins et al. , 2005; Quirk et al. , 2005)."
N06-1033,J97-3002,o,"We use [] and <> for straight and inverted combinations respectively, following the ITG notation (Wu, 1997)." N06-1033,J97-3002,o,"One way around this difficulty is to stipulate that all rules must be binary from the outset, as in inversion-transduction grammar (ITG) (Wu, 1997) and the binary synchronous context-free grammar (SCFG) employed by the Hiero system (Chiang, 2005) to model the hierarchical phrases." N06-1033,J97-3002,o,"It has been shown by Shapiro and Stephens (1991) and Wu (1997, Sec." N06-1033,J97-3002,o,"Wu (1997) shows that parsing a binary SCFG is in O(|w|^6) while parsing SCFG is NP-hard in general (Satta and Peserico, 2005)." N06-1033,J97-3002,o,"This problem can be cast as an instance of synchronous ITG parsing (Wu, 1997)." N07-1052,J97-3002,o,"Then, h(s) <= h*(s) + Lmax, for all s in S. This epsilon-admissible heuristic (Ghallab and Allard, 1982) bounds our search error by Lmax. 3 Bitext Parsing In bitext parsing, one jointly infers a synchronous phrase structure tree over a sentence ws and its translation wt (Melamed et al. , 2004; Wu, 1997)." N07-1052,J97-3002,o,"We can, however, produce a useful surrogate: a pair of monolingual WCFGs with structures projected by G and weights that, when combined, underestimate the costs of G. Parsing optimally relative to a synchronous grammar using a dynamic program requires time O(n^6) in the length of the sentence (Wu, 1997)." N09-1009,J97-3002,o,"They are most commonly used for parsing and linguistic analysis (Charniak and Johnson, 2005; Collins, 2003), but are now commonly seen in applications like machine translation (Wu, 1997) and question answering (Wang et al., 2007)." N09-1026,J97-3002,o,"Meanwhile, translation grammars have grown in complexity from simple inversion transduction grammars (Wu, 1997) to general tree-to-string transducers (Galley et al., 2004) and have increased in size by including more synchronous tree fragments (Galley et al., 2006; Marcu et al., 2006; DeNeefe et al., 2007)."
P01-1067,J97-3002,o,Wu (1997) and Alshawi et al. P02-1039,J97-3002,n,"Other statistical machine translation systems such as (Wu, 1997) and (Alshawi et al. , 2000) also produce a tree given a sentence. Their models are based on mechanisms that generate two languages at the same time, so an English tree is obtained as a subproduct of parsing. However, their use of the LM is not mathematically motivated, since their models do not decompose into separate channel model and language model probabilities unlike the noisy channel model." P03-1011,J97-3002,p,Wu (1997) showed that restricting word-level alignments between sentence pairs to observe syntactic bracketing constraints significantly reduces the complexity of the alignment problem and allows a polynomial-time solution. P03-1012,J97-3002,o,"Methods such as (Wu, 1997), (Alshawi et al. , 2000) and (Lopez et al. , 2002) employ a synchronous parsing procedure to constrain a statistical alignment." P03-1019,J97-3002,o,"For this purpose, we adopt the view of the ITG constraints as a bilingual grammar as, e.g., in (Wu, 1997)." P03-1019,J97-3002,o,"Obviously, these productions are not in the normal form of an ITG, but with the method described in (Wu, 1997), they can be normalized." P03-1019,J97-3002,o,"The first constraints are based on inversion transduction grammars (ITG) (Wu, 1995; Wu, 1997)." P03-1019,J97-3002,o,"the parse trees of the simple grammar in (Wu, 1997)." P03-1019,J97-3002,o,"With this constraint, each of these binary trees is unique and equivalent to a parse tree of the canonical-form grammar in (Wu, 1997)." P03-1019,J97-3002,o,"In (Wu, 1997), these forbidden subsequences are called inside-out transpositions." P03-2041,J97-3002,o,"However, the binary-branching SCFGs used by Wu (1997) and Alshawi et al." P03-2041,J97-3002,o,"Previous work in statistical synchronous grammars has been limited to forms of synchronous context-free grammar (Wu, 1997; Alshawi et al. , 2000; Yamada and Knight, 2001)."
P04-1060,J97-3002,o,"(Wu, 1997) also includes a brief discussion of crossing constraints that can be derived from phrase structure correspondences." P04-1083,J97-3002,o,This normal form allows simpler algorithm descriptions than the normal forms used by Wu (1997) and Melamed (2003). P04-1083,J97-3002,o,"Figure 3: Logic C (C for CKY). These constraints are enforced by the d-span operators. Parser C is conceptually simpler than the synchronous parsers of Wu (1997), Alshawi et al." P04-1084,J97-3002,o,"Thus, GCNF is a more restrictive normal form than those used by Wu (1997) and Melamed (2003)." P04-1084,J97-3002,o,"Inversion Transduction Grammar (ITG) (Wu, 1997) and Syntax-Directed Translation Schema (SDTS) (Aho and Ullman, 1969) lack both of these properties." P04-3002,J97-3002,o,"In previous alignment methods, some researchers modeled the alignments with different statistical models (Wu, 1997; Och and Ney, 2000; Cherry and Lin, 2003)." P04-3032,J97-3002,o,"The simplest (Wu, 1997) uses constit(np,3,5,np,4,8) to denote a NP spanning positions 3-5 in the English string that is aligned with an NP spanning positions 4-8 in the Chinese string."
P05-1033,J97-3002,o,"In this respect it resembles Wu's bilingual bracketer (Wu, 1997), but ours uses a different extraction method that allows more than one lexical item in a rule, in keeping with the phrase-based philosophy." P05-1058,J97-3002,o,"In recent years, many researchers have employed statistical models (Wu, 1997; Och and Ney, 2003; Cherry and Lin, 2003) or association measures (Smadja et al. , 1996; Ahrenberg et al. , 1998; Tufis and Barbu, 2002) to build alignment links." P05-1059,J97-3002,o,"Wu (1997) demonstrated that for pairs of sentences that are less than 16 words, the ITG alignment space has a good coverage over all possibilities." P05-1059,J97-3002,o,"The ITG we apply in our experiments has more structural labels than the primitive bracketing grammar: it has a start symbol S, a single preterminal C, and two intermediate nonterminals A and B used to ensure that only one parse can generate any given word-level alignment, as discussed by Wu (1997) and Zens and Ney (2003)." P05-1059,J97-3002,o,1 Introduction The Inversion Transduction Grammar (ITG) of Wu (1997) is a syntactically motivated algorithm for producing word-level alignments of pairs of translationally equivalent sentences in two languages. P05-1066,J97-3002,o,"For this reason there is currently a great deal of interest in methods which incorporate syntactic information within statistical machine translation systems (e.g. , see (Alshawi, 1996; Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Och et al. , 2004; Xia and McCord, 2004))." P05-1066,J97-3002,o,"2.1.2 Research on Syntax-Based SMT A number of researchers (Alshawi, 1996; Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Galley et al. , 2004) have proposed models where the translation process involves syntactic representations of the source and/or target languages."
P05-1067,J97-3002,o,"(Wu, 1997) introduced a polynomial-time solution for the alignment problem based on synchronous binary trees." P06-1062,J97-3002,o,"For example, (Wu 1997; Alshawi, Bangalore, and Douglas, 2000; Yamada and Knight, 2001) have studied synchronous context free grammar." P06-1066,J97-3002,o,"Here, under the ITG constraint (Wu, 1997; Zens et al. , 2004), we need to consider just two kinds of reorderings, straight and inverted between two consecutive blocks." P06-1077,J97-3002,o,"Wu (1997) proposes Inversion Transduction Grammars, treating translation as a process of parallel parsing of the source and target language via a synchronized grammar." P06-1098,J97-3002,p,"In the hierarchical phrase-based model (Chiang, 2005), and an inversion transduction grammar (ITG) (Wu, 1997), the problem is resolved by restricting to a binarized form where at most two non-terminals are allowed in the righthand side." P06-1121,J97-3002,o,"7 Related work Similarly to (Poutsma, 2000; Wu, 1997; Yamada and Knight, 2001; Chiang, 2005), the rules discussed in this paper are equivalent to productions of synchronous tree substitution grammars." P06-1123,J97-3002,p,"Wu (1997) has been unable to find real examples of cases where hierarchical alignment would fail under these conditions, at least in fixed-word-order languages that are lightly inflected, such as English and Chinese. (p. 385)." P06-1123,J97-3002,o,"Following Wu (1997), the prevailing opinion in the research community has been that more complex patterns of word alignment in real bitexts are mostly attributable to alignment errors." P06-1123,J97-3002,o,"A hierarchical alignment algorithm is a type of synchronous parser where, instead of constraining inferences by the production rules of a grammar, the constraints come from word alignments and possibly other sources (Wu, 1997; Melamed and Wang, 2005)." P06-2014,J97-3002,o,"Some methods parse two flat strings at once using a bitext grammar (Wu, 1997)." 
P06-2014,J97-3002,p,"The Inversion Transduction Grammar or ITG formalism, described in (Wu, 1997), is well suited for our purposes." P06-2014,J97-3002,n,Wu (1997) provides anecdotal evidence that only incorrect alignments are eliminated by ITG constraints. P06-2014,J97-3002,p,"Fortunately, Wu (1997) provides a method to have an ITG respect a known partial structure." P06-2036,J97-3002,o,"Variations of SCFGs go back to Aho and Ullman (1972)'s Syntax-Directed Translation Schemata, but also include the Inversion Transduction Grammars in Wu (1997), which restrict grammar rules to be binary, the synchronous grammars in Chiang (2005), which use only a single nonterminal symbol, and the Multitext Grammars in Melamed (2003), which allow independent rewriting, as well as other tree-based models such as Yamada and Knight (2001) and Galley et al." P06-2112,J97-3002,o,"Many researchers build alignment links with bilingual corpora (Wu, 1997; Och and Ney, 2003; Cherry and Lin, 2003; Zhang and Gildea, 2005)." P06-2117,J97-3002,o,"In recent years, many researchers build alignment links with bilingual corpora (Wu, 1997; Och and Ney, 2003; Cherry and Lin, 2003; Wu et al. , 2005; Zhang and Gildea, 2005)." P06-2122,J97-3002,o,2 Bilexicalization of Inversion Transduction Grammar The Inversion Transduction Grammar of Wu (1997) models word alignment between a translation pair of sentences by assuming a binary synchronous tree on top of both sides. P06-2122,J97-3002,n,"Synchronous grammar formalisms that are capable of modeling such complex relationships while maintaining the context-free property in each language have been proposed for many years, (Aho and Ullman, 1972; Wu, 1997; Yamada and Knight, 2001; Melamed, 2003; Chiang, 2005), but have not been scaled to large corpora and long sentences until recently." P06-2122,J97-3002,o,"In this paper we focus on the second issue, constraining the grammar to the binary-branching Inversion Transduction Grammar of Wu (1997)."
P07-1002,J97-3002,o,"Alternatively, order is modelled in terms of movement of automatically induced hierarchical structure of sentences (Chiang, 2005; Wu, 1997)." P07-1003,J97-3002,o,"1 Introduction Syntactic methods are an increasingly promising approach to statistical machine translation, being both algorithmically appealing (Melamed, 2004; Wu, 1997) and empirically successful (Chiang, 2005; Galley et al., 2006)." P07-1020,J97-3002,o,"A few exceptions are the hierarchical (possibly syntax-based) transduction models (Wu, 1997; Alshawi et al., 1998; Yamada and Knight, 2001; Chiang, 2005) and the string transduction models (Kanthak et al., 2005)." P07-1039,J97-3002,o,"(Wu, 1997))." P07-1039,J97-3002,o,"We use a bootstrap approach in which we first extract 1-to-n word alignments using an existing word aligner, and then estimate the confidence of those alignments to decide whether or not the n words have to be grouped; if so, this group is conwould thus be completely driven by the bilingual alignment process (see also (Wu, 1997; Tiedemann, 2003) for related considerations)." P07-1039,J97-3002,o,"Note that the need to consider segmentation and alignment at the same time is also mentioned in (Tiedemann, 2003), and related issues are reported in (Wu, 1997)." P07-1090,J97-3002,o,"Instead of using Inversion Transduction Grammar (ITG) (Wu, 1997) directly, we will discuss an ITG extension to accommodate gapping." P07-1090,J97-3002,n,"The utility of ITG as a reordering constraint for most language pairs, is well-known both empirically (Zens and Ney, 2003) and analytically (Wu, 1997); however, ITG's straight (monotone) and inverted (reverse) rules exhibit strong cohesiveness, which is inadequate to express orientations that require gaps." P07-1108,J97-3002,p,"1 Introduction For statistical machine translation (SMT), phrase-based methods (Koehn et al., 2003; Och and Ney, 2004) and syntax-based methods (Wu, 1997; Alshawi et al.
2000; Yamada and Knight, 2001; Melamed, 2004; Chiang, 2005; Quick et al., 2005; Mellebeek et al., 2006) outperform word-based methods (Brown et al., 1993)." P07-1121,J97-3002,o,"Among the grammar formalisms successfully put into use in syntax-based SMT are synchronous context-free grammars (SCFG) (Wu, 1997) and synchronous tree-substitution grammars (STSG) (Yamada and Knight, 2001)." P08-1009,J97-3002,o,"methods for syntactic SMT held to this assumption in its entirety (Wu, 1997; Yamada and Knight, 2001)." P08-1012,J97-3002,o,"2 Phrasal Inversion Transduction Grammar We use a phrasal extension of Inversion Transduction Grammar (Wu, 1997) as the generative framework." P08-1023,J97-3002,o,"Depending on the type of input, these efforts can be divided into two broad categories: the string-based systems whose input is a string to be simultaneously parsed and translated by a synchronous grammar (Wu, 1997; Chiang, 2005; Galley et al., 2006), and the tree-based systems whose input is already a parse tree to be directly converted into a target tree or string (Lin, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006)." P08-1024,J97-3002,o,"This is an instance of the ITG alignment algorithm (Wu, 1997)." P08-1025,J97-3002,o,"Thus, we are focusing on Inversion Transduction Grammars (Wu, 1997) which are an important subclass of SCFG." P08-1064,J97-3002,p,"Recently, many syntax-based models have been proposed to address the above deficiencies (Wu, 1997; Chiang, 2005; Eisner, 2003; Ding and Palmer, 2005; Quirk et al., 2005; Cowan et al., 2006; Zhang et al., 2007; Bod, 2007; Yamada and Knight, 2001; Liu et al., 2006; Liu et al., 2007; Gildea, 2003; Poutsma, 2000; Hearne and Way, 2003)." P08-1064,J97-3002,o,The formally syntax-based model for SMT was first advocated by Wu (1997). P08-1064,J97-3002,o,"(2006) propose a MaxEnt-based reordering model for BTG (Wu, 1997) while Setiawan et al." P08-1114,J97-3002,o,"(Chiang, 2005; Chiang, 2007; Wu, 1997))."
P08-2021,J97-3002,o,"An alternative to tercom, considered in this paper, is to use the Inversion Transduction Grammar (ITG) formalism (Wu, 1997) which allows one to view the problem of alignment as a problem of bilingual parsing." P08-2038,J97-3002,p,"1 Introduction In recent years, Bracketing Transduction Grammar (BTG) proposed by (Wu, 1997) has been widely used in statistical machine translation (SMT)." P09-1009,J97-3002,p,"Research in this direction was pioneered by (Wu, 1997), who developed Inversion Transduction Grammars to capture crosslingual grammar variations such as phrase reorderings." P09-1036,J97-3002,o,"In this paper, we implement the SDB model in a state-of-the-art phrase-based system which adapts a binary bracketing transduction grammar (BTG) (Wu, 1997) to phrase translation and reordering, described in (Xiong et al., 2006)." P09-1053,J97-3002,o,"Most related to our approach, Wu (2005) used inversion transduction grammars, a synchronous context-free formalism (Wu, 1997), for this task." P09-1065,J97-3002,o,"(2006) develop a bottom-up decoder for BTG (Wu, 1997) that uses only phrase pairs." P09-1088,J97-3002,o,"Moreover, the inference procedure for each sentence pair is non-trivial, proving NP-complete for learning phrase based models (DeNero and Klein, 2008) or a high order polynomial (O(|f|^3|e|^3)) for a sub-class of weighted synchronous context free grammars (Wu, 1997)." P09-1088,J97-3002,o,"Following the broad shift in the field from finite state transducers to grammar transducers (Chiang, 2007), recent approaches to phrase-based alignment have used synchronous grammar formalisms permitting polynomial time inference (Wu, 1997; Cherry and Lin, 2007; Zhang et al., 2008b; Blunsom et al., 2008)." P09-1104,J97-3002,p,"This source of overcounting is considered and fixed by Wu (1997) and Zens and Ney (2003), which we briefly review here."
P09-1104,J97-3002,o,"Null productions are also a source of double counting, as there are many possible orders. Figure 2: Illustration of two unambiguous forms of ITG grammars: (a) normal domain rules, (b) inverted domain rules, (c) normal domain with null rules, (d) inverted domain with null rules; in (a) and (b), we illustrate the normal grammar without nulls (presented in Wu (1997) and Zens and Ney (2003))." P09-1104,J97-3002,o,"Because of this, Wu (1997) and Zens and Ney (2003) introduced a normal form ITG which avoids this over-counting." P09-1104,J97-3002,o,2.2 Inversion Transduction Grammar Wu (1997)'s inversion transduction grammar (ITG) is a synchronous grammar formalism in which derivations of sentence pairs correspond to alignments. P09-1104,J97-3002,o,"The set of such ITG alignments, AITG, are a strict subset of A1-1 (Wu, 1997)." P09-1104,J97-3002,o,"1 Introduction Inversion transduction grammar (ITG) constraints (Wu, 1997) provide coherent structural constraints on the relationship between a sentence and its translation." P09-2032,J97-3002,o,"1 Introduction The use of various synchronous grammar based formalisms has been a trend for statistical machine translation (SMT) (Wu, 1997; Eisner, 2003; Galley et al., 2006; Chiang, 2007; Zhang et al., 2008)."
P09-2036,J97-3002,o,"There are rules, though rare, that cannot be binarized synchronously at all (Wu, 1997), but can be incorporated in two-stage decoding with asynchronous binarization." P98-1006,J97-3002,n,"The work reported in Wu (1997), which uses an inside-outside type of training algorithm to learn statistical context-free transduction, has a similar motivation to the current work, but the models we describe here, being fully lexical, are more suitable for direct statistical modelling." P98-1074,J97-3002,o,"Unfortunately, this is not always the case, and the above methodology suffers from the weaknesses pointed out by (Wu, 1997) concerning parse-parse-match procedures." P98-1076,J97-3002,o,"Consequently, the mainstream research in the literature has been focused on the modeling and utilization of local and sentential contexts, either linguistically in a rule-based framework or statistically in a searching and optimization set-up (Gan, Palmer and Lua 1996; Sproat, Shih, Gale and Chang 1996; Wu 1997; Gut 1997)." P98-2230,J97-3002,o,"The model employs a stochastic version of an inversion transduction grammar or ITG (Wu, 1995c; Wu, 1995d; Wu, 1997)." P98-2230,J97-3002,o,"cally, and experimentally (Wu, 1995b; Wu, 1997)." P98-2230,J97-3002,o,"If the target CFG is purely binary branching, then the previous theoretical and linguistic analyses (Wu, 1997) suggest that much of the requisite constituent and word order transposition may be accommodated without change to the mirrored ITG." W00-0508,J97-3002,o,"There are other approaches to statistical machine translation where translation is achieved through transduction of source language structure to target language structure (Alshawi et al., 1998b; Wu, 1997)." W02-1039,J97-3002,p,"Several studies have reported alignment or translation performance for syntactically augmented translation models (Wu, 1997; Wang, 1998; Alshawi et al.
, 2000; Yamada and Knight, 2001; Jones and Havrilla, 1998) and these results have been promising." W03-0303,J97-3002,o,"2 Bilingual Bracketing In [Wu 1997], the Bilingual Bracketing PCFG was introduced, which can be simplified as the following production rules: A -> [AA] (1) A -> <AA> (2) A -> f/e (3) A -> f/null (4) A -> null/e (5) where f and e are words in the target vocabulary Vf and source vocabulary Ve respectively." W03-0303,J97-3002,o,"However, instead of estimating the probabilities for the production rules via EM as described in [Wu 1997], we assign the probabilities to the rules using the Model-1 statistical translation lexicon [Brown et al. 1993]." W03-0303,J97-3002,o,Bilingual Bracketing [Wu 1997] is one of the bilingual shallow parsing approaches studied for Chinese-English word alignment. W03-0303,J97-3002,p,"More suitable ways could be bilingual chunk parsing, and refining the bracketing grammar as described in [Wu 1997]." W03-0304,J97-3002,o,"This contrasts with alternative alignment models such as those of Melamed (1998) and Wu (1997), which impose a one-to-one constraint on alignments." W03-0313,J97-3002,o,"This contrasts with alternative alignment models such as those of Melamed (1998) and Wu (1997), which impose a one-to-one constraint on alignments." W03-1002,J97-3002,o,Wu (1997) and Jones and Havrilla (1998) have sought to more closely tie the allowed motion of constituents between languages to those syntactic transductions supported by the independent rotation of parse tree constituents. W03-1807,J97-3002,o,"Related Works Generally speaking, approaches to MWE extraction proposed so far can be divided into three categories: a) statistical approaches based on frequency and co-occurrence affinity, b) knowledge-based or symbolic approaches using parsers, lexicons and language filters, and c) hybrid approaches combining different methods (Smadja 1993; Dagan and Church 1994; Daille 1995; McEnery et al. 1997; Wu 1997; Wermter et al.
1997; Michiels and Dufour 1998; Merkel and Andersson 2000; Piao and McEnery 2001; Sag et al. 2001a, 2001b; Biber et al. 2003)." W03-1807,J97-3002,o,"For example, Wu (1997) used an English-Chinese bilingual parser based on stochastic transduction grammars to identify terms, including multiword expressions." W04-1513,J97-3002,o,"5 Synchronous DIG 5.1 Definition (Wu, 1997) introduced synchronous binary trees and (Shieber, 1990) introduced synchronous tree adjoining grammars, both of which view the translation process as a synchronous derivation process of parallel trees." W04-1513,J97-3002,o,"Syntax-based statistical MT approaches began with (Wu 1997), who introduced a polynomial-time solution for the alignment problem based on synchronous binary trees." W04-1513,J97-3002,o,"At the same time, grammar theoreticians have proposed various generative synchronous grammar formalisms for MT, such as Synchronous Context Free Grammars (S-CFG) (Wu, 1997) or Synchronous Tree Adjoining Grammars (S-TAG) (Shieber and Schabes, 1990)." W05-0803,J97-3002,o,"The only requirement will be that a parallel corpus exist for the language under consideration and one or more other languages. Induction of grammars from parallel corpora is rarely viewed as a promising task in its own right; in work that has addressed the issue directly (Wu, 1997; Melamed, 2003; Melamed, 2004), the synchronous grammar is mainly viewed as instrumental in the process of improving the translation model in a noisy channel approach to statistical MT. In the present paper, we provide an important prerequisite for parallel corpus-based grammar induction work: an efficient algorithm for synchronous parsing of sentence pairs, given a word alignment." W05-0803,J97-3002,o,"Graphically speaking, parsing amounts to identifying rectangular crosslinguistic constituents by assembling smaller rectangles that will together cover the full string spans in both dimensions (compare (Wu, 1997; Melamed, 2003))."
W05-0803,J97-3002,o,"Since each inference rule contains six free variables over string positions (i1, j1, k1, i2, j2, k2), we get a parsing complexity of order O(n^6) for unlexicalized grammars (where n is the number of words in the longer of the two strings from language L1 and L2) (Wu, 1997; Melamed, 2003)." W05-0815,J97-3002,o,"This model shares some similarities with the stochastic inversion transduction grammars (SITG) presented by Wu in (Wu, 1997)." W05-0830,J97-3002,p,"Whereas language generation has benefited from syntax [Wu, 1997; Alshawi et al., 2000], the performance of statistical phrase-based machine translation when relying solely on syntactic phrases has been reported to be poor [Koehn et al., 2003]." W05-0831,J97-3002,o,"4.5 ITG Constraints Another type of reordering can be obtained using Inversion Transduction Grammars (ITG) (Wu, 1997)." W05-0835,J97-3002,o,"This model shares some similarities with the stochastic inversion transduction grammars (SITG) presented by Wu in (Wu, 1997)." W05-1205,J97-3002,o,"We present first results using paraphrase as well as textual entailment data to test the language universal constraint posited by Wu's (1995, 1997) Inversion Transduction Grammar (ITG) hypothesis." W05-1205,J97-3002,o,"Let W1, W2 be the vocabulary sizes of the two languages, and N = {A1, ..., AN} be the set of nonterminals with indices 1, ..., N. Wu (1997) also showed that ITGs can equivalently be defined in two other ways."
W05-1205,J97-3002,p,"Moreover, for reasons discussed by Wu (1997), ITGs possess an interesting intrinsic combinatorial property of permitting roughly up to four arguments of any frame to be transposed freely, but not more." W05-1205,J97-3002,o,"The result in Wu (1997) implies that for the special case of Bracketing ITGs, the time complexity of the algorithm is O(T^3 V^3) where T and V are the lengths of the two sentences." W05-1205,J97-3002,o,"1 Introduction The Inversion Transduction Grammar or ITG formalism, which historically was developed in the context of translation and alignment, hypothesizes strong expressiveness restrictions that constrain paraphrases to vary word order only in certain allowable nested permutations of arguments (Wu, 1997)." W05-1507,J97-3002,o,"This is the same complexity as the ITG alignment algorithm used by Wu (1997) and others, meaning complete Viterbi decoding is possible without pruning for realistic-length sentences." W05-1507,J97-3002,o,2 Machine Translation using Inversion Transduction Grammar The Inversion Transduction Grammar (ITG) of Wu (1997) is a type of context-free grammar (CFG) for generating two languages synchronously. W06-1504,J97-3002,o,"A related example would be a version of synchronous CFG that allows only one pair of linked nonterminals and any number of unlinked nonterminals, which could be bitext-parsed in O(n^5) time, whereas inversion transduction grammar (Wu, 1997) takes O(n^6)." W06-1627,J97-3002,o,"Alignment, whether for training a translation model using EM or for finding the Viterbi alignment of test data, is O(n^6) (Wu, 1997), while translation (decoding) is O(n^7) using a bigram language model, and O(n^11) with trigrams." W06-1627,J97-3002,o,1 Introduction The Inversion Transduction Grammar (ITG) of Wu (1997) is a syntactically motivated algorithm for producing word-level alignments of pairs of translationally equivalent sentences in two languages.
W06-1628,J97-3002,o,Wu (1997) and Alshawi (1996) describe early work on formalisms that make use of transductive grammars; Graehl and Knight (2004) describe methods for training tree transducers. W06-2403,J97-3002,o,"2 Related Work The issue of MWE processing has attracted much attention from the Natural Language Processing (NLP) community, including Smadja, 1993; Dagan and Church, 1994; Daille, 1995; McEnery et al., 1997; Wu, 1997; Michiels and Dufour, 1998; Maynard and Ananiadou, 2000; Merkel and Andersson, 2000; Piao and McEnery, 2001; Sag et al., 2001; Tanaka and Baldwin, 2003; Dias, 2003; Baldwin et al., 2003; Nivre and Nilsson, 2004; Pereira et al." W06-3104,J97-3002,o,"It differs from the many approaches where (1) is defined by a stochastic synchronous grammar (Wu, 1997; Alshawi et al., 2000; Yamada and Knight, 2001; Eisner, 2003; Gildea, 2003; Melamed, 2004) and from transfer-based systems defined by context-free grammars (Lavie et al., 2003)." W06-3108,J97-3002,o,"The approach presented here has some resemblance to the bracketing transduction grammars (BTG) of (Wu, 1997), which have been applied to a phrase-based machine translation system in (Zens et al., 2004)." W06-3111,J97-3002,n,"An alternative method (Wu, 1997) makes decisions at the end but has a high computational requirement." W06-3117,J97-3002,o,"In this work, we focus on learning bilingual word phrases by using Stochastic Inversion Transduction Grammars (SITGs) (Wu, 1997)." W06-3117,J97-3002,o,"3 Stochastic Inversion Transduction Grammars Stochastic Inversion Transduction Grammars (SITGs) (Wu, 1997) can be viewed as a restricted subset of Stochastic Syntax-Directed Transduction Grammars." W06-3117,J97-3002,p,"An efficient Viterbi-like parsing algorithm that is based on a Dynamic Programming Scheme is proposed in (Wu, 1997)." W06-3117,J97-3002,o,"A Normal Form for SITGs can be defined (Wu, 1997) by analogy to the Chomsky Normal Form for Stochastic Context-Free Grammars."
W06-3117,J97-3002,o,"In this work, we study a method for obtaining word phrases that is based on Stochastic Inversion Transduction Grammars that was proposed in (Wu, 1997)." W06-3601,J97-3002,n,"Besides, our model, as being linguistically motivated, is also more expressive than the formally syntax-based models of Chiang (2005) and Wu (1997)." W06-3602,J97-3002,p,"The efficient block alignment algorithm in Section 4 is related to the inversion transduction grammar approach to bilingual parsing described in (Wu, 1997): in both cases the number of alignments is drastically reduced by introducing appropriate re-ordering restrictions." W07-0401,J97-3002,o,"(Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Galley et al., 2006)." W07-0403,J97-3002,o,"In the meantime, synchronous parsing methods efficiently process the same bitext phrases while building their bilingual constituents, but continue to be employed primarily for word-to-word analysis (Wu, 1997)." W07-0403,J97-3002,p,"Inversion transduction grammar (Wu, 1997), or ITG, is a well-studied synchronous grammar formalism." W07-0403,J97-3002,o,"This ITG constraint is characterized by the two forbidden structures shown in Figure 1 (Wu, 1997)." W07-0403,J97-3002,o,"Stochastic ITGs are parameterized like their PCFG counterparts (Wu, 1997); productions A -> X are assigned probability Pr(X|A)." W07-0403,J97-3002,o,"Wu (1997) used a binary bracketing ITG to segment a sentence while simultaneously word-aligning it to its translation, but the model was trained heuristically with a fixed segmentation."
W07-0403,J97-3002,o,"The similarities become more apparent when we consider the canonical-form binary-bracketing ITG (Wu, 1997) shown here: S -> A | B | C, A -> [AB] | [BB] | [CB] | [AC] | [BC] | [CC], B -> <AA> | <BA> | <CA> | <AC> | <BC> | <CC>, C -> e/f (3). (3) is employed in place of (2) to reduce redundant alignments and clean up EM expectations. More importantly for our purposes, it introduces a preterminal C, which generates all phrase pairs or cepts." W07-0404,J97-3002,o,"Wu (1997) demonstrates the case of binary SCFG parsing, where six string boundary variables, three for each language as in monolingual CFG parsing, interact with each other, yielding an O(N^6) dynamic programming algorithm, where N is the string length, assuming the two paired strings are comparable in length." W07-0404,J97-3002,o,"Wu (1997)'s Inversion Transduction Grammar, as well as tree-transformation models of translation such as Yamada and Knight (2001), Galley et al." W07-0406,J97-3002,o,"Machine translation based on a deeper analysis of the syntactic structure of a sentence has long been identified as a desirable objective in principle (consider (Wu, 1997; Yamada and Knight, 2001))." W07-0408,J97-3002,p,"Synchronous parsing models have been explored with moderate success (Wu, 1997; Quirk et al., 2005)." W07-0410,J97-3002,o,"Actually, now that SMT has reached some maturity, we see several attempts to integrate more structure into these systems, ranging from simple hierarchical alignment models (Wu 1997, Chiang 2005) to syntax-based statistical systems (Yamada and Knight 2001, Zollmann and Venugopal 2006)." W07-0413,J97-3002,o,"A few exceptions are the hierarchical (possibly syntax-based) transduction models (Wu, 1997; Alshawi et al., 1998; Yamada and Knight, 2001; Chiang, 2005) and the string transduction models (Kanthak et al., 2005)." W07-0414,J97-3002,o,"Other models (Wu (1997), Xiong et al."
W07-0414,J97-3002,o,"2.3 ITG Constraints The Inversion Transduction Grammar (ITG) (Wu, 1997), a derivative of the Syntax Directed Transduction Grammars (Aho and Ullman, 1972), constrains the possible permutations of the input string by defining rewrite rules that indicate permutations of the string." W07-0709,J97-3002,o,"Syntax-based MT approaches began with Wu (1997), who introduced the Inversion Transduction Grammars." W07-1205,J97-3002,o,"The other form of hybridization, a statistical MT model that is based on a deeper analysis of the syntactic structure of a sentence, has also long been identified as a desirable objective in principle (consider (Wu, 1997; Yamada and Knight, 2001))." W08-0308,J97-3002,o,The idea of synchronous SSMT can be traced back to Wu (1997)'s Stochastic Inversion Transduction Grammars. W08-0401,J97-3002,o,"IBM constraints (Berger et al., 1996), lexical word reordering model (Tillmann, 2004), and inversion transduction grammar (ITG) constraints (Wu, 1995; Wu, 1997) belong to this type of approach." W08-0403,J97-3002,o,"Examples include Wu's (Wu, 1997) ITG and Chiang's hierarchical models (Chiang, 2007)." W08-0403,J97-3002,o,"2 Related Work Syntax-based translation models engaged with SCFG have been actively investigated in the literature (Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Galley et al., 2004; Satta and Peserico, 2005)." W08-0408,J97-3002,o,"Recent work on reordering has been on trying to find smart ways to decide word order, using syntactic features such as POS tags (Lee and Ge 2005), parse trees (Zhang et al. 2007, Wang et al. 2007, Collins et al. 2005, Yamada and Knight 2001) to name just a few, and synchronized CFG (Wu 1997, Chiang 2005), again to name just a few." W08-0409,J97-3002,o,"Deeper syntax, e.g.
phrase or dependency structures, has been shown useful in generative models (Wang and Zhou, 2004; Lopez and Resnik, 2005), heuristic-based models (Ayan et al., 2004; Ozdowska, 2004) and even for syntactically motivated models such as ITG (Wu, 1997; Cherry and Lin, 2006)." W08-0411,J97-3002,o,"The underlying formalisms used have been quite broad and include simple formalisms such as ITGs (Wu, 1997), hierarchical synchronous rules (Chiang, 2005), string to tree models by (Galley et al., 2004) and (Galley et al., 2006), synchronous CFG models such as (Xia and McCord, 2004) (Yamada and Knight, 2001), synchronous Lexical Functional Grammar inspired approaches (Probst et al., 2002) and others." W09-0434,J97-3002,o,"Modeling reordering as the inversion in order of two adjacent blocks is similar to the approach taken by the Inverse Transduction Model (ITG) (Wu, 1997), except that here we are not limited to a binary tree." W09-1804,J97-3002,o,"Probabilistic generative models like IBM 1-5 (Brown et al., 1993), HMM (Vogel et al., 1996), ITG (Wu, 1997), and LEAF (Fraser and Marcu, 2007) define formulas for P(f | e) or P(e, f). Figure 1: Word alignment exercise (Knight, 1997)."
W09-1804,J97-3002,o,"4 Related Work (Zhang et al., 2003) and (Wu, 1997) tackle the problem of segmenting Chinese while aligning it to English." W09-1908,J97-3002,o,"While traditional approaches to syntax based MT were dependent on availability of manual grammar, more recent approaches operate within the resources of PB-SMT and induce hierarchical or linguistic grammars from existing phrasal units, to provide better generality and structure for reordering (Yamada and Knight, 2001; Chiang, 2005; Wu, 1997)." W09-2303,J97-3002,o,"production rules are typically learned from alignment structures (Wu, 1997; Zhang and Gildea, 2004; Chiang, 2007) or from alignment structures and derivation trees for the source string (Yamada and Knight, 2001; Zhang and Gildea, 2004)." W09-2303,J97-3002,o,"Related work includes Wu (1997), Zens and Ney (2003) and Wellington et al." W09-2303,J97-3002,o,"They are also used for inducing alignments (Wu, 1997; Zhang and Gildea, 2004)." W09-2303,J97-3002,o,"The production rules in ITGs are of the following form (Wu, 1997), with a notation similar to what is typically used for SDTSs and SCFGs in the right column: A -> [BC] (A -> B1C2, B1C2); A -> <BC> (A -> B1C2, C2B1); A -> e | f (A -> e, f); A -> e | ε (A -> e, ε); A -> ε | f (A -> ε, f). It is important to note that RHSs of production rules have at most one source-side and one target-side terminal symbol." W09-2303,J97-3002,o,"In this paper it is shown that the synchronous grammars used in Wu (1997), Zhang et al." W09-2303,J97-3002,o,"The empirical adequacy of synchronous context-free grammars of rank two (2-SCFGs) (Satta and Peserico, 2005), used in syntax-based machine translation systems such as Wu (1997), Zhang et al." W09-2303,J97-3002,o,"(2006) and Chiang (2007), in terms of what alignments they induce, has been discussed in Wu (1997) and Wellington et al."
W09-2303,J97-3002,o,"2 Inside-out alignments Wu (1997) identified so-called inside-out alignments, two alignment configurations that cannot be induced by binary synchronous context-free grammars; these alignment configurations, while infrequent in language pairs such as English-French (Cherry and Lin, 2006; Wellington et al., 2006), have been argued to be frequent in other language pairs, incl." W09-2306,J97-3002,o,"To overcome these limitations, many syntax-based SMT models have been proposed (Wu, 1997; Chiang, 2007; Ding et al., 2005; Eisner, 2003; Quirk et al., 2005; Liu et al., 2007; Zhang et al., 2007; Zhang et al., 2008a; Zhang et al., 2008b; Gildea, 2003; Galley et al., 2004; Marcu et al., 2006; Bod, 2007)." W09-2308,J97-3002,o,"In this paper, two synchronous grammar formalisms are discussed, inversion transduction grammars (ITGs) (Wu, 1997) and two-variable binary bottom-up non-erasing range concatenation grammars ((2,2)-BRCGs) (Søgaard, 2008)." W09-2308,J97-3002,o,It is known that ITGs do not induce the class of inside-out alignments discussed in Wu (1997). W09-2308,J97-3002,o,"The complexities of 15 restricted alignment problems in two very different synchronous grammar formalisms of syntax-based machine translation, inversion transduction grammars (ITGs) (Wu, 1997) and a restricted form of range concatenation grammars ((2,2)-BRCGs) (Søgaard, 2008), are investigated." W09-2308,J97-3002,o,"2 Inversion transduction grammars Inversion transduction grammars (ITGs) (Wu, 1997) are a notational variant of binary syntax-directed translation schemas (Aho and Ullman, 1972) and are usually presented with a normal form: A -> [BC], A -> <BC>, A -> e|f, A -> e|ε, A -> ε|f, where A,B,C ∈ N and e,f ∈ T. The first production rule, intuitively, says that the subtree [[]B[]C]A in the source language translates into a subtree [[]B[]C]A, whereas the second production rule inverts the order in the target language, i.e. [[]C[]B]A.
The universal recognition problem of ITGs can be solved in time O(n^6|G|) by a CYK-style parsing algorithm with two charts." W09-2309,J97-3002,o,"IBM constraints (Berger et al., 1996), the lexical word reordering model (Tillmann, 2004), and inversion transduction grammar (ITG) constraints (Wu, 1995; Wu, 1997) belong to this type of approach." A00-1005,J99-3003,p,"However, the only known work which automates part of a customer service center using natural language dialogue is the one by Chu-Carroll and Carpenter (1999)." A00-1014,J99-3003,o,"Using a vector-based topic identification process (Salton, 1971; Chu-Carroll and Carpenter, 1999), these keywords are used to determine a set of likely values (including null) for that attribute." A00-2028,J99-3003,o,"Research prototypes exist for applications such as personal email and calendars, travel and restaurant information, and personal banking (Baggia et al., 1998; Walker et al., 1998; Seneff et al., 1995; Sanderman et al., 1998; Chu-Carroll and Carpenter, 1999) inter alia." A00-2028,J99-3003,o,"2 Experimental System and Data HMIHY is a spoken dialogue system based on the notion of call routing (Gorin et al., 1997; Chu-Carroll and Carpenter, 1999)." P04-1010,J99-3003,o,"Adapting a vector-based approach reported by Chu-Carroll and Carpenter (1999), the Task ID Frame Agent is domain-independent and automatically trained." W00-0310,J99-3003,o,"Specifically, MIMIC uses an n-dimensional call router front-end (Chu-Carroll, 2000), which is a generalization of the vector-based call-routing paradigm of semantic interpretation (Chu-Carroll and Carpenter, 1999); that is, instead of detecting one concept per utterance, MIMIC's semantic interpretation engine detects multiple (n) concepts or classes conveyed by a single utterance, by using n call routers in parallel."
W01-1602,J99-3003,o,Our approach thus provides an even more extreme version of automatic confirmation generation than that used by Chu-Carroll and Carpenter (1999) where only a small effort is required by the developer. W03-2126,J99-3003,o,"Their approaches include the use of a vector-based information retrieval technique (Lee et al., 2000; Chu-Carroll and Carpenter, 1999). Our domains are more varied, which may results in more recognition errors." W05-0404,J99-3003,o,"An alternative would be using a vector space model for classification where calltypes and utterances are represented as vectors including word n-grams (Chu-Carroll and Carpenter, 1999)." W05-0404,J99-3003,o,"This step can be seen as a multi-label, multi-class call classification problem for customer care applications (Gorin et al. , 1997; Chu-Carroll and Carpenter, 1999; Gupta et al. , To appear, among others)." W06-1303,J99-3003,o,"An alternative is to create an automatic system that uses a set of training question-answer pairs to learn the appropriate question-answer matching algorithm (Chu-Carroll and Carpenter, 1999)." W07-0310,J99-3003,o,"Chu-Carroll and Carpenter (1999) describe a method of disambiguation, where disambiguation questions are dynamically constructed on the basis of an analysis of the differences among the closest routing destination vectors." P09-1032,L08-1329,o,"While this is certainly a daunting task, it is possible that for annotation studies that do not require expert annotators and extensive annotator training, the newly available access to a large pool of inexpensive annotators, such as the Amazon Mechanical Turk scheme (Snow et al., 2008), or embedding the task in an online game played by volunteers (Poesio et al., 2008; von Ahn, 2006) could provide some solutions." C04-1051,N03-1003,o,"Lee & Barzilay (2003), for example, use Multi-Sequence Alignment (MSA) to build a corpus of paraphrases involving terrorist acts."
C04-1051,N03-1003,o,"Mean number of instances of paraphrase phenomena per sentence (such as Multiple Sequence Alignment, as employed by Barzilay & Lee 2003)." C04-1051,N03-1003,n,"While the idea of exploiting multiple news reports for paraphrase acquisition is not new, previous efforts (for example, Shinyama et al. 2002; Barzilay and Lee 2003) have been restricted to at most two news sources." C04-1051,N03-1003,o,"1 Introduction The importance of learning to manipulate monolingual paraphrase relationships for applications like summarization, search, and dialog has been highlighted by a number of recent efforts (Barzilay & McKeown 2001; Shinyama et al. 2002; Lee & Barzilay 2003; Lin & Pantel 2001)." C08-1029,N03-1003,o,"Multiple translations of the same text (Barzilay and McKeown, 2001), corresponding articles from multiple news sources (Barzilay and Lee, 2003; Quirk et al., 2004; Dolan et al., 2004), and bilingual corpus (Bannard and Callison-Burch, 2005) have been utilized." C08-1107,N03-1003,o,"Some works focused on learning rules from comparable corpora, containing comparable documents such as different news articles from the same date on the same topic (Barzilay and Lee, 2003; Ibrahim et al., 2003)." C08-1110,N03-1003,o,"This is related to the well-studied problem of identifying paraphrases (Barzilay and Lee, 2003; Pang et al., 2003) and the more general variant of recognizing textual entailment, which explores whether information expressed in a hypothesis can be inferred from a given premise." D09-1122,N03-1003,o,"2 Related Work Previous studies on entailment, inference rules, and paraphrase acquisition are roughly classified into those that require comparable corpora (Shinyama et al., 2002; Barzilay and Lee, 2003; Ibrahim et al., 2003) and those that do not (Lin and Pantel, 2001; Weeds and Weir, 2003; Geffet and Dagan, 2005; Pekar, 2006; Bhagat et al., 2007; Szpektor and Dagan, 2008)."
D09-1122,N03-1003,o,Barzilay and Lee (2003) also used newspaper articles on the same event as comparable corpora to acquire paraphrases. E09-1082,N03-1003,o,(2006) propose using a statistical word alignment algorithm as a more robust way of aligning (monolingual) outputs into a confusion network for system combination. Barzilay and Lee (2003) construct lattices over paraphrases using an iterative pairwise multiple sequence alignment (MSA) algorithm. I05-5001,N03-1003,o,"Barzilay & Lee (2003) employ Multiple Sequence Alignment (MSA, e.g., Durbin et al. , 1998) to align strings extracted from closely related news articles." I05-5001,N03-1003,o,"The word-based edit distance heuristic yields pairs that are relatively clean but offer relatively minor rewrites in generation, especially when compared to the MSA model of (Barzilay & Lee, 2003)." I05-5001,N03-1003,o,"A growing body of recent research has focused on the problems of identifying and generating paraphrases, e.g., Barzilay & McKeown (2001), Lin & Pantel (2002), Shinyama et al. (2002), Barzilay & Lee (2003), and Pang et al." I05-5001,N03-1003,o,Barzilay & Lee (2003) and Quirk et al. I05-5002,N03-1003,p,"2 Motivation The success of Statistical Machine Translation (SMT) has sparked a successful line of investigation that treats paraphrase acquisition and generation essentially as a monolingual machine translation problem (e.g. , Barzilay & Lee, 2003; Pang et al. , 2003; Quirk et al. , 2004; Finch et al. , 2004)." I05-5004,N03-1003,o,"Some studies exploit topically related articles derived from multiple news sources (Barzilay and Lee, 2003; Shinyama and Sekine, 2003; Quirk et al. , 2004; Dolan et al. , 2004)." I05-5007,N03-1003,o,"Generation of paraphrase examples was also investigated (Barzilay and Lee, 2003; Quirk et al. , 2004)." I05-5008,N03-1003,p,"Such a method alleviates the problem of creating templates from examples which would be used in an ulterior phase of generation (BARZILAY and LEE, 2003)."
I08-1070,N03-1003,o,"The other utilizes a sort of parallel texts, such as multiple translation of the same text (Barzilay and McKeown, 2001; Pang et al., 2003), corresponding articles from multiple news sources (Barzilay and Lee, 2003; Dolan et al., 2004), and bilingual corpus (Wu and Zhou, 2003; Bannard and Callison-Burch, 2005)." I08-2110,N03-1003,o,"Recently, some work has been done on corpus-based paraphrase extraction (Lin and Pantel, 2001; Barzilay and Lee, 2003)." N03-1024,N03-1003,o,"It's still possible to use MSA if, for example, the input is pre-clustered to have the same constituent ordering (Barzilay and Lee (2003))." N04-1015,N03-1003,o,"But because we want the insertion state s_I to model digressions or unseen topics, we take the novel step of forcing its language model to be complementary to those of the other states by setting p_{s_I}(w'|w) = (1 − max_{s≠s_I} p_s(w'|w)) / Σ_u (1 − max_{s≠s_I} p_s(u|w)). Following Barzilay and Lee (2003), proper names, numbers and dates are (temporarily) replaced with generic tokens to help ensure that clusters contain sentences describing the same event type, rather than same actual event." N04-1031,N03-1003,o,"Although a large number of studies have been made on learning paraphrases, for example (Barzilay and Lee, 2003), there are only a few studies which address the connotational difference of paraphrases." N04-1031,N03-1003,o,"There are several works that try to learn paraphrase pairs from parallel or comparable corpora (Barzilay and McKeown, 2001; Shinyama et al. , 2002; Barzilay and Lee, 2003; Pang et al. , 2003)." N06-1008,N03-1003,o,"Previous attempts have used, for instance, the similarities between case frames (Lin and Pantel, 2001), anchor words (Barzilay and Lee, 2003; Shinyama et al. , 2002; Szepektor et al.
, 2004), and a web-based method (Szepektor et al. , 2004; Geffet and Dagan, 2005)." N06-1058,N03-1003,n,"This can explain why previous attempts to use WordNet for generating sentence-level paraphrases (Barzilay and Lee, 2003; Quirk et al. , 2004) were unsuccessful." N06-1058,N03-1003,o,"2 Related Work Automatic Paraphrasing and Entailment Our work is closely related to research in automatic paraphrasing, in particular, to sentence level paraphrasing (Barzilay and Lee, 2003; Pang et al. , 2003; Quirk et al. , 2004)." N09-3008,N03-1003,o,"The use of Profile HMMs for multiple sequence alignment also presents applications to the acquisition of mapping dictionaries (Barzilay and Lee, 2002) and sentence-level paraphrasing (Barzilay and Lee, 2003)." P04-1077,N03-1003,o,Paraphrases can also be automatically acquired using statistical methods as shown by Barzilay and Lee (2003). P04-2006,N03-1003,o,Barzilay & Lee (2003) also identify paraphrases in their paraphrased sentence generation system. P05-1074,N03-1003,o,"Past work (Barzilay and McKeown, 2001; Barzilay and Lee, 2003; Pang et al. , 2003; Ibrahim et al. , 2003) has examined the use of monolingual parallel corpora for paraphrase extraction." P05-1074,N03-1003,o,"2 Extracting paraphrases Much previous work on extracting paraphrases (Barzilay and McKeown, 2001; Barzilay and Lee, 2003; Pang et al. , 2003) has focused on finding identifying contexts within aligned monolingual sentences from which divergent text can be extracted, and treated as paraphrases." P06-1034,N03-1003,o,"5 Related Work Automatically finding sentences with the same meaning has been extensively studied in the field of automatic paraphrasing using parallel corpora and corpora with multiple descriptions of the same events (Barzilay and McKeown, 2001; Barzilay and Lee, 2003)."
P06-1114,N03-1003,o,"Table 4: Performance of Alignment Classifier (Classifier, Training Set, Precision, Recall, F-Measure): Linear, 10K pairs, 0.837, 0.774, 0.804; Maximum Entropy, 10K pairs, 0.881, 0.851, 0.866; Maximum Entropy, 450K pairs, 0.902, 0.944, 0.922. 3.2 Paraphrase Acquisition Much recent work on automatic paraphrasing (Barzilay and Lee, 2003) has used relatively simple statistical techniques to identify text passages that contain the same information from parallel corpora." P06-1114,N03-1003,o,"In order to increase the likelihood that only true paraphrases were considered as phrase-level alternations for an example, extracted sentences were clustered using complete-link clustering using a technique proposed in (Barzilay and Lee, 2003)." P06-2027,N03-1003,o,"The procedure of substituting named entities with their respective tags previously proved to be useful for various tasks (Barzilay and Lee, 2003; Sudo et al. , 2003; Filatova and Prager, 2005)." P06-2027,N03-1003,o,"Many of the current approaches of domain modeling collapse together different instances and make the decision on what information is important for a domain based on this generalized corpus (Collier, 1998; Barzilay and Lee, 2003; Sudo et al. , 2003)." P06-2070,N03-1003,n,"If we consider these probabilities as a vector, the similarities of two English words can be obtained by computing the dot product of their corresponding vectors. The formula is described below: similarity(e_i, e_j) = Σ_{k=1}^{N} p(e_i|f_k) p(e_j|f_k) (3) Paraphrasing methods based on monolingual parallel corpora such as (Pang et al. , 2003; Barzilay and Lee, 2003) can also be used to compute the similarity ratio of two words, but they don't have as rich training resources as the bilingual methods do."
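The dot-product formula quoted in the P06-2070 excerpt above can be sketched in a few lines. This is an illustrative sketch only; the toy translation-probability tables and the function name are ours, not from the cited paper:

```python
# similarity(e_i, e_j) = sum_k p(e_i|f_k) * p(e_j|f_k)
# Each English word is represented as a vector of translation
# probabilities over foreign words f_1..f_N; similarity is the dot product.

def similarity(p_ei, p_ej):
    """p_ei, p_ej: dicts mapping foreign word f_k -> p(e|f_k)."""
    shared = set(p_ei) & set(p_ej)  # only shared f_k contribute to the sum
    return sum(p_ei[f] * p_ej[f] for f in shared)

# Toy example with made-up probabilities:
p_big = {"grand": 0.6, "gros": 0.3}
p_large = {"grand": 0.5, "vaste": 0.4}
print(similarity(p_big, p_large))  # only "grand" is shared: 0.6 * 0.5
```

Words that share no likely foreign translations get similarity 0, which is the intended behavior of the metric.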
P06-2096,N03-1003,o,"Previous work aligns a group of sentences into a compact word lattice (Barzilay and Lee, 2003), a finite state automaton representation that can be used to identify commonality or variability among comparable texts and generate paraphrases." P06-2096,N03-1003,o,"2 Related work Our work is closest in spirit to the two papers that inspired us (Barzilay and Lee, 2003) and (Pang et al. , 2003)." P07-1058,N03-1003,o,"2.2 Evaluation of Acquisition Algorithms Many methods for automatic acquisition of rules have been suggested in recent years, ranging from distributional similarity to finding shared contexts (Lin and Pantel, 2001; Ravichandran and Hovy, 2002; Shinyama et al. , 2002; Barzilay and Lee, 2003; Szpektor et al. , 2004; Sekine, 2005)." P07-1058,N03-1003,o,"Indeed, the prominent approach for evaluating the quality of rule acquisition algorithms is by human judgment of the learned rules (Lin and Pantel, 2001; Shinyama et al. , 2002; Barzilay and Lee, 2003; Pang et al. , 2003; Szpektor et al. , 2004; Sekine, 2005)." P07-1058,N03-1003,o,"Indeed, only few earlier works reported inter-judge agreement level, and those that did reported rather low Kappa values, such as 0.54 (Barzilay and Lee, 2003) and 0.55–0.63 (Szpektor et al. , 2004)." P08-1077,N03-1003,o,(2004) and Barzilay and Lee (2003) used comparable news articles to obtain sentence level paraphrases. P08-1089,N03-1003,o,"Some methods only extract paraphrase patterns using news articles on certain topics (Shinyama et al., 2002; Barzilay and Lee, 2003), while some others need seeds as initial input (Ravichandran and Hovy, 2002)." P08-1089,N03-1003,o,"In paraphrase generation, a text unit that matches a pattern P can be rewritten using the paraphrase patterns of P. A variety of methods have been proposed on paraphrase patterns extraction (Lin and Pantel, 2001; Ravichandran and Hovy, 2002; Shinyama et al., 2002; Barzilay and Lee, 2003; Ibrahim et al., 2003; Pang et al., 2003; Szpektor et al., 2004)."
P08-1089,N03-1003,o,The preci781 start Palestinian suicide bomberblew himself up in SLOT1 on SLOT2 killing SLOT3 other people and injuring wounding SLOT4 end detroit the *e* a s *e* building buildingin detroit flattened ground levelled to blasted leveled *e* was reduced razed leveled to down rubble into ashes *e* to *e* (1) (2) Figure 1: Examples of paraphrase patterns extracted by Barzilay and Lee (2003) and Pang et al. P08-1089,N03-1003,o,Barzilay and Lee (2003) applied multi-sequence alignment (MSA) to parallel news sentences and induced paraphrase patterns for generating new sentences (Figure 1 (1)). P08-1116,N03-1003,o,"3 Monolingual comparable corpus: Similar to the methods in (Shinyama et al., 2002; Barzilay and Lee, 2003), we construct a corpus of comparable documents from a large corpus D of news articles." P08-1116,N03-1003,o,"Different news articles reporting on the same event are commonly used as monolingual comparable corpora, from which both paraphrase patterns and phrasal paraphrases can be derived (Shinyama et al., 2002; Barzilay and Lee, 2003; Quirk et al., 2004)." P08-1116,N03-1003,o,"For example, Barzilay and Lee (2003) applied multiple-sequence alignment (MSA) to parallel news sentences and induced paraphrasing patterns for generating new sentences." P09-1053,N03-1003,o,"For natural language engineers, the problem bears on information management systems like abstractive summarizers that must measure semantic overlap between sentences (Barzilay and Lee, 2003), question answering modules (Marsi and Krahmer, 2005) and machine translation (Callison-Burch et al., 2006)." P09-1094,N03-1003,o,"Some researchers then tried to automatically extract paraphrase rules (Lin and Pantel, 2001; Barzilay and Lee, 2003; Zhao et al., 2008b), which facilitates the rule-based PG methods." P09-2063,N03-1003,o,"For instance, automatic summary can be seen as a particular paraphrasing task (Barzilay and Lee, 2003) with the aim of selecting the shortest paraphrase." 
P09-3004,N03-1003,o,"In another generation approach, Barzilay and Lee (2002; 2003) look for pairs of slotted word lattices that share many common slot fillers; the lattices are generated by applying a multiple-sequence alignment algorithm to a corpus of multiple news articles about the same events." W03-1602,N03-1003,o,"(Barzilay and McKeown, 2001; Shinyama et al. , 2002; Barzilay and Lee, 2003)." W03-1605,N03-1003,o,"For this reason, paraphrase poses a great challenge for many Natural Language Processing (NLP) tasks, just as ambiguity does, notably in text summarization and NL generation (Barzilay and Lee, 2003; Pang et al. , 2003)." W03-1608,N03-1003,o,"Similar to the work of Barzilay and Lee (2003), who have applied paraphrase generation techniques to comparable corpora consisting of different newspaper articles about the same event, we are currently attempting to solve the data sparseness problem by extending our approach to non-parallel corpora." W04-0910,N03-1003,o,"Similarly, (Barzilay and Lee, 2003) and (Shinyanma et al. , 2002) learn sentence level paraphrase templates from a corpus of news articles stemming from different news source." W05-1210,N03-1003,o,"Such transformations are typically denoted as paraphrases in the literature, where a wealth of methods for their automatic acquisition were proposed (Lin and Pantel, 2001; Shinyama et al. , 2002; Barzilay and Lee, 2003; Szpektor et al. , 2004)." W06-1403,N03-1003,o,"Our experience suggests that disjunctive LFs are an important capability, especially as one seeks to make grammars reusable across applications, and to employ domain-specific, sentence-level paraphrases (Barzilay and Lee, 2003)." W06-1603,N03-1003,o,"Barzilay and Lee (2003) proposed to apply multiple-sequence alignment (MSA) for traditional, sentence-level PR."
W07-0716,N03-1003,o,"At the sentence level, (Barzilay and Lee, 2003) employed an unsupervised learning approach to cluster sentences and extract lattice pairs from comparable monolingual corpora." W07-0716,N03-1003,o,"Most previous work on paraphrase has focused on high quality rather than coverage (Barzilay and Lee, 2003; Quirk et al. , 2004), but generating artificial references for MT parameter tuning in our setting has two unique properties compared to other paraphrase applications." W07-0909,N03-1003,o,"Automatically Learning Entailment Rules from the Web Many algorithms for automatically learning paraphrases and entailment rules have been explored in recent years (Lin and Pantel, 2001; Ravichandran and Hovy, 2002; Shinyama et al. , 2002; Barzilay and Lee, 2003; Sudo et al. , 2003; Szpektor et al. , 2004; Satoshi, 2005)." W07-1424,N03-1003,o,"Most of the reported work on paraphrase generation from arbitrary input sentences uses machine learning techniques trained on sentences that are known or can be inferred to be paraphrases of each other (Bannard and Callison-Burch, 2005; Barzilay and Lee, 2003; Barzilay and McKeown, 2001; Callison-Burch et al. , 2006; Dolan et al. , 2004; Ibrahim et al. , 2003; Lin and Pantel, 2001; Pang et al. , 2003; Quirk et al. , 2004; Shinyama et al. , 2002)." W07-1425,N03-1003,o,"The third estimates the equivalence based on word alignment composed using templates or translation probabilities derived from a set of parallel text (Barzilay and Lee, 2003; Brockett and Dolan, 2005)." W07-1429,N03-1003,o,"Second, we will discuss the work done by (Barzilay & Lee, 2003) who use clustering of paraphrases to induce rewriting rules."
W07-1429,N03-1003,o,"2 Related Work Two different approaches have been proposed for Sentence Compression: purely statistical methodologies (Barzilay & Lee, 2003; Le Nguyen & Ho, 2004) and hybrid linguistic/statistic methodologies (Knight & Marcu, 2002; Shinyama et al. , 2002; Daelemans et al. , 2004; Marsi & Krahmer, 2005; Unno et al. , 2006)." W07-1429,N03-1003,n,"Experiments, by using 4 algorithms and through visualization techniques, revealed that clustering is a worthless effort for paraphrase corpora construction, contrary to the literature claims (Barzilay & Lee, 2003)." W07-1429,N03-1003,o,"As our work is based on the first paradigm, we will focus on the works proposed by (Barzilay & Lee, 2003) and (Le Nguyen & Ho, 2004)." W07-1429,N03-1003,o,"(Barzilay & Lee, 2003) present a knowledge-lean algorithm that uses multiple-sequence alignment to learn to generate sentence-level paraphrases essentially from unannotated corpus data alone." W07-1429,N03-1003,o,"Comparatively, (Barzilay & Lee, 2003) propose to use the N-gram Overlap metric to capture similarities between sentences and automatically create paraphrase corpora." W07-1429,N03-1003,o,"Unlike (Le Nguyen & Ho, 2004), one interesting idea proposed by (Barzilay & Lee, 2003) is to cluster similar pairs of paraphrases to apply multiple-sequence alignment." W07-1429,N03-1003,o,"Second, we discuss the work done by (Barzilay & Lee, 2003) who use clustering of paraphrases to induce rewriting rules." W07-1429,N03-1003,o,"3.1 Paraphrase Identification A few unsupervised metrics have been applied to automatic paraphrase identification and extraction (Barzilay & Lee, 2003; Dolan & Brockett, 2004)." W07-1429,N03-1003,o,"However, these unsupervised methodologies show a major drawback by extracting quasi-exact or even exact match pairs of sentences as they rely on classical string similarity measures such as the Edit Distance in the case of (Dolan & Brockett, 2004) and word N-gram overlap for (Barzilay & Lee, 2003)."
W07-1429,N03-1003,o,"In particular, it shows systematically better F-Measure and Accuracy measures over all other metrics showing an improvement of (1) at least 2.86% in terms of F-Measure and 3.96% in terms of Accuracy and (2) at most 6.61% in terms of F-Measure and 6.74% in terms of Accuracy compared to the second best metric which is also systematically the word N-gram overlap similarity measure used by (Barzilay & Lee, 2003)." W07-1429,N03-1003,o,"On one hand, as (Barzilay & Lee, 2003) evidence, clusters of paraphrases can lead to better learning of text-to-text rewriting rules compared to just pairs of paraphrases." W07-1429,N03-1003,o,"However, as (Barzilay & Lee, 2003) do not propose any evaluation of which clustering algorithm should be used, we experiment a set of clustering algorithms and present the comparative results." W07-1429,N03-1003,n,"Table 2: Figures about clustering algorithms (Algorithm, # Sentences/# Clusters): S-HAC 6.23; C-HAC 2.17; QT 2.32; EM 4.16. In fact, table 2 shows that most of the clusters have less than 6 sentences which leads to question the results presented by (Barzilay & Lee, 2003) who only keep the clusters that contain more than 10 sentences." W07-1429,N03-1003,o,"Sentence Compression takes an important place for Natural Language Processing (NLP) tasks where specific constraints must be satisfied, such as length in summarization (Barzilay & Lee, 2002; Knight & Marcu, 2002; Shinyama et al. , 2002; Barzilay & Lee, 2003; Le Nguyen & Ho, 2004; Unno et al. , 2006), style in text simplification (Marsi & Krahmer, 2005) or sentence simplification for subtitling (Daelemans et al. , 2004)." W07-1429,N03-1003,o,"These results confirm the observed figures in the previous subsection and reinforce the sight that clustering is a worthless effort for automatic paraphrase corpora construction, contrarily to what (Barzilay & Lee, 2003) suggest."
W08-0906,N03-1003,o,"In order to be able to compare the edit distance with the other metrics, we have used the following formula (Wen et al., 2002) which normalises the minimum edit distance by the length of the longest question and transforms it into a similarity metric: normalised edit distance = 1 − edit_dist(q1,q2) / max(|q1|,|q2|). Word N-gram Overlap: this metric compares the word n-grams in both questions: ngram overlap = (1/N) Σ_{n=1}^{N} |G_n(q1) ∩ G_n(q2)| / min(|G_n(q1)|,|G_n(q2)|), where G_n(q) is the set of n-grams of length n in question q and N usually equals 4 (Barzilay and Lee, 2003; Cordeiro et al., 2007)." W08-0906,N03-1003,o,"While word and phrasal paraphrases can be assimilated to the well-studied notion of synonymy, sentence-level paraphrasing is more difficult to grasp and cannot be equated with word-for-word or phrase-by-phrase substitution since it might entail changes in the structure of the sentence (Barzilay and Lee, 2003)." W08-0906,N03-1003,o,"There exist many different string similarity measures: word overlap (Tomuro and Lytinen, 2004), longest common subsequence (Islam and Inkpen, 2007), Levenshtein edit distance (Dolan et al., 2004), word n-gram overlap (Barzilay and Lee, 2003) etc. Semantic similarity measures are obtained by first computing the semantic similarity of the words contained in the sentences being compared." W08-1911,N03-1003,o,"Barzilay and Lee (Barzilay and Lee, 2003) learned paraphrasing patterns as pairs of word lattices, which are then used to produce sentence level paraphrases." W09-0604,N03-1003,o,"3.4 Perspectives for automatic paraphrase extraction There is a growing amount of work on automatic extraction of paraphrases from text corpora (Lin and Pantel, 2001; Barzilay and Lee, 2003; Ibrahim et al., 2003; Dolan et al., 2004)." W09-2805,N03-1003,o,"A few unsupervised metrics have been applied to automatic paraphrase identification and extraction (Barzilay & Lee, 2003; Dolan et al., 2004)."
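The two W08-0906 similarity metrics quoted above, normalised edit distance and word n-gram overlap, can be sketched as follows. This is a minimal word-level implementation under our own naming; the guard against empty n-gram sets for long n is our addition:

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (wa != wb)))   # substitution
        prev = curr
    return prev[-1]

def normalised_edit_similarity(q1, q2):
    """1 - edit_dist(q1, q2) / max(|q1|, |q2|), on word tokens."""
    t1, t2 = q1.split(), q2.split()
    return 1 - edit_distance(t1, t2) / max(len(t1), len(t2))

def ngram_overlap(q1, q2, N=4):
    """(1/N) * sum_n |Gn(q1) & Gn(q2)| / min(|Gn(q1)|, |Gn(q2)|)."""
    t1, t2 = q1.split(), q2.split()
    ngrams = lambda t, n: {tuple(t[i:i + n]) for i in range(len(t) - n + 1)}
    total = 0.0
    for n in range(1, N + 1):
        g1, g2 = ngrams(t1, n), ngrams(t2, n)
        if g1 and g2:  # skip n larger than either question
            total += len(g1 & g2) / min(len(g1), len(g2))
    return total / N

print(normalised_edit_similarity("where is the bank", "where is a bank"))  # 0.75
print(ngram_overlap("where is the bank", "where is a bank"))
```

Both metrics return 1.0 for identical questions; the n-gram overlap drops faster as word order diverges, which is why the cited papers prefer it for paraphrase identification.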
W09-2805,N03-1003,o,"However, these unsupervised methodologies show a major drawback by extracting quasi-exact or even exact match pairs of sentences as they rely on classical string similarity measures such as the Edit Distance in the case of (Dolan et al., 2004) and Word N-gram Overlap for (Barzilay & Lee, 2003)." C04-1090,N03-1017,o,"However, (Koehn et al 2003) found that it is actually harmful to restrict phrases to constituents in parse trees, because the restriction would cause the system to miss many reliable translations, such as the correspondence between there is in English and es gibt (it gives) in German." C08-1005,N03-1017,o,"grow-diag-final (Koehn et al., 2003))." C08-1014,N03-1017,o,"Our MT baseline system is based on Moses decoder (Koehn et al., 2007) with word alignment obtained from GIZA++ (Och et al., 2003)." C08-1014,N03-1017,p,"1 Introduction State-of-the-art Statistical Machine Translation (SMT) systems usually adopt a two-pass search strategy (Och, 2003; Koehn, et al., 2003) as shown in Figure 1." C08-1017,N03-1017,o,"However, Moore's Law, the driving force of change in computing since then, has opened the way for recent progress in the field, such as Statistical Machine Translation (SMT) (Koehn et al. 2003)." C08-1027,N03-1017,p,"1 Introduction The emergence of phrase-based statistical machine translation (PSMT) (Koehn et al., 2003) has been one of the major developments in statistical approaches to translation." C08-1041,N03-1017,o,"Then the word alignment is refined by performing grow-diag-final method (Koehn et al., 2003)." C08-1064,N03-1017,o,"Except where noted, each system was trained on 27 million words of newswire data, aligned with GIZA++ (Och and Ney, 2003) and symmetrized with the grow-diag-final-and heuristic (Koehn et al., 2003)." C08-1064,N03-1017,o,"Sum of logarithms of source-to-target lexical weighting (Koehn et al., 2003)."
C08-1064,N03-1017,o,"4.3 Relaxing Length Restrictions Increasing the maximum phrase length in standard phrase-based translation does not improve BLEU (Koehn et al., 2003; Zens and Ney, 2007)." C08-1064,N03-1017,o,"Our results are similar to those for conventional phrase-based models (Koehn et al., 2003; Zens and Ney, 2007)." C08-1064,N03-1017,o,"It compares favorably with conventional phrase-based translation (Koehn et al., 2003) on Chinese-English news translation (Chiang, 2007)." C08-1064,N03-1017,o,"Our baseline uses Giza++ alignments (Och and Ney, 2003) symmetrized with the grow-diag-final-and heuristic (Koehn et al., 2003)." C08-1127,N03-1017,n,"With these linguistic annotations, we expect the LABTG to address two traditional issues of standard phrase-based SMT (Koehn et al., 2003) in a more effective manner." C08-1127,N03-1017,o,"2 Related Work There have been various efforts to integrate linguistic knowledge into SMT systems, either from the target side (Marcu et al., 2006; Hassan et al., 2007; Zollmann and Venugopal, 2006), the source side (Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006) or both sides (Eisner, 2003; Ding et al., 2005; Koehn and Hoang, 2007), just to name a few." C08-1127,N03-1017,o,"Firstly, we run GIZA++ (Och and Ney, 2000) on the training corpus in both directions and then apply the 'grow-diag-final' refinement rule (Koehn et al., 2003) to obtain many-to-many word alignments."
C08-1138,N03-1017,o,"Based on these grammars, a great number of SMT models have been recently proposed, including string-to-string model (Synchronous FSG) (Brown et al., 1993; Koehn et al., 2003), tree-to-string model (TSG-string) (Huang et al., 2006; Liu et al., 2006; Liu et al., 2007), string-to-tree model (string-CFG/TSG) (Yamada and Knight, 2001; Galley et al., 2006; Marcu et al., 2006), tree-to-tree model (Synchronous CFG/TSG, Data-Oriented Translation) (Chiang, 2005; Cowan et al., 2006; Eisner, 2003; Ding and Palmer, 2005; Zhang et al., 2007; Bod, 2007; Quirk et al., 2005; Poutsma, 2000; Hearne and Way, 2003) and so on." C08-1144,N03-1017,o,"Starting with bilingual phrase pairs extracted from automatically aligned parallel text (Och and Ney, 2004; Koehn et al., 2003), these PSCFG approaches augment each contiguous (in source and target words) phrase pair with a left-hand-side symbol (like the VP in the example above), and perform a generalization procedure to form rules that include nonterminal symbols." C08-1144,N03-1017,o,"Phrase pairs are extracted up to a fixed maximum length, since very long phrases rarely have a tangible impact during translation (Koehn et al., 2003)." C08-2005,N03-1017,o,"1 Introduction In phrase-based statistical machine translation (Koehn et al., 2003) phrases extracted from word-aligned parallel data are the fundamental unit of translation." C08-2032,N03-1017,o,"This paper proposes a method for building a bilingual lexicon through a pivot language by using phrase-based statistical machine translation (SMT) (Koehn et al., 2003)." C08-2032,N03-1017,o,"Let us suppose that we have two bilingual lexicons L_f-L_p and L_p-L_e. We obtain word alignments of these lexicons by applying GIZA++ (Och and Ney, 2003), and grow-diag-final heuristics (Koehn et al., 2007)." D07-1006,N03-1017,o,"This operation does not change the collection of phrases or rules extracted from a hypothesized alignment, see, for instance, (Koehn et al. , 2003)."
D07-1006,N03-1017,p,"For French/English translation we use a state of the art phrase-based MT system similar to (Och and Ney, 2004; Koehn et al. , 2003)." D07-1006,N03-1017,o,"(Och and Ney, 2003) invented heuristic symmetrization of the output of a 1-to-N model and a M-to-1 model resulting in a M-to-N alignment, this was extended in (Koehn et al. , 2003). [Table 3: Experimental Results]" D07-1008,N03-1017,o,"Our corpora were automatically aligned with Giza++ (Och et al. , 1999) in both directions between source and target and symmetrised using the intersection heuristic (Koehn et al. , 2003)." D07-1030,N03-1017,o,"SMT has evolved from the original word-based approach (Brown et al. , 1993) into phrase-based approaches (Koehn et al. , 2003; Och and Ney, 2004) and syntax-based approaches (Wu, 1997; Alshawi et al. , 2000; Yamada and Knight, 2001; Chiang, 2005)." D07-1030,N03-1017,o,"3.1 Phrase-Based Models According to the translation model presented in (Koehn et al. , 2003), given a source sentence f, the best target translation e_best can be obtained using the following model: e_best = argmax_e p(e|f) = argmax_e p(f|e) p_LM(e) omega^length(e) (1), where the translation model p(f|e) can be decomposed into p(f_1^I|e_1^I) = prod_{i=1}^{I} phi(f_i|e_i) d(a_i - b_{i-1}) p_w(f_i|e_i, a) (2), where phi(f_i|e_i) is the phrase translation probability." D07-1036,N03-1017,o,"In training process, we use GIZA++ toolkit for word alignment in both translation directions, and apply grow-diag-final method to refine it (Koehn et al. , 2003)." D07-1056,N03-1017,o,"In phrase-based SMT systems (Koehn et al. , 2003; Koehn, 2004), foreign sentences are firstly segmented into phrases which consists of adjacent words."
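The D07-1030 decoding objective above, e_best = argmax_e p(f|e) p_LM(e) omega^length(e), can be illustrated with a toy log-space scorer. Everything below (candidate strings, probability values, function names) is invented for illustration and is not from the cited papers:

```python
import math

def best_translation(candidates, log_p_f_given_e, log_p_lm, log_omega=0.0):
    """Pick argmax_e  log p(f|e) + log p_LM(e) + length(e) * log(omega).

    Working in log space avoids numerical underflow when the
    probabilities of long sentences are multiplied together.
    """
    def total(e):
        return log_p_f_given_e[e] + log_p_lm[e] + len(e.split()) * log_omega
    return max(candidates, key=total)

# Toy example: equal translation-model scores, so the language model decides.
cands = ["the house", "house the"]
tm = {"the house": math.log(0.4), "house the": math.log(0.4)}    # p(f|e)
lm = {"the house": math.log(0.05), "house the": math.log(0.001)}  # p_LM(e)
print(best_translation(cands, tm, lm))  # "the house"
```

The word-count factor omega^length(e) (here log_omega = 0, i.e. omega = 1) is the length penalty that real decoders tune to counteract the language model's preference for short outputs.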
D07-1056,N03-1017,o,"There have been considerable amount of efforts to improve the reordering model in SMT systems, ranging from the fundamental distance-based distortion model (Och and Ney, 2004; Koehn et al. , 2003), flat reordering model (Wu, 1996; Zens et al. , 2004; Kumar et al. , 2005), to lexicalized reordering model (Tillmann, 2004; Kumar et al. , 2005; Koehn et al. , 2005), hierarchical phrase-based model (Chiang, 2005), and maximum entropy-based phrase reordering model (Xiong et al. , 2006)." D07-1079,N03-1017,o,"Approaches include word substitution systems (Brown et al. , 1993), phrase substitution systems (Koehn et al. , 2003; Och and Ney, 2004), and synchronous context-free grammar systems (Wu and Wong, 1998; Chiang, 2005), all of which train on string pairs and seek to establish connections between source and target strings." D07-1080,N03-1017,o,"Second, the word alignment is refined by a grow-diag-final heuristic (Koehn et al. , 2003)." D07-1080,N03-1017,n,"Such a quasi-syntactic structure can naturally capture the reordering of phrases that is not directly modeled by a conventional phrase-based approach (Koehn et al. , 2003)." D07-1080,N03-1017,p,"1 Introduction The recent advances in statistical machine translation have been achieved by discriminatively training a small number of real-valued features based either on (hierarchical) phrase-based translation (Och and Ney, 2004; Koehn et al. , 2003; Chiang, 2005) or syntax-based translation (Galley et al. , 2006)." D07-1103,N03-1017,o,"These joint counts are estimated using the phrase induction algorithm described in (Koehn et al. , 2003), with symmetrized word alignments generated using IBM model 2 (Brown et al. , 1993)." D07-1103,N03-1017,o,"The features used are: the length of t; a single-parameter distortion penalty on phrase reordering in a, as described in (Koehn et al. 
, 2003); phrase translation model probabilities; and 4-gram language model probabilities log p(t), using Kneser-Ney smoothing as implemented in the SRILM toolkit (Stolcke, 2002)." D07-1104,N03-1017,o,"So far, these techniques have focused on phrase-based models using contiguous phrases (Koehn et al. , 2003; Och and Ney, 2004)." D07-1104,N03-1017,o,"We symmetrized bidirectional alignments using the grow-diag-final heuristic (Koehn et al. , 2003)." D07-1105,N03-1017,o,"For instance, word alignment models are often trained using the GIZA++ toolkit (Och and Ney, 2003); error minimizing training criteria such as the Minimum Error Rate Training (Och, 2003) are employed in order to learn feature function weights for log-linear models; and translation candidates are produced using phrase-based decoders (Koehn et al. , 2003) in combination with n-gram language models (Brants et al. , 2007)." D08-1010,N03-1017,o,"Then the word alignment is refined by performing grow-diag-final method (Koehn et al., 2003)." D08-1021,N03-1017,o,"They give a probabilistic formulation of paraphrasing which naturally falls out of the fact that they use techniques from phrase-based statistical machine translation: e2 = argmax_{e2: e2 != e1} p(e2|e1) (1) where p(e2|e1) = sum_f p(f|e1) p(e2|f,e1) (2) ~ sum_f p(f|e1) p(e2|f) (3) Phrase translation probabilities p(f|e1) and p(e2|f) are commonly calculated using maximum likelihood estimation (Koehn et al., 2003): p(f|e) = count(e,f) / sum_f count(e,f) (4) where the counts are collected by enumerating all bilingual phrase pairs that are consistent with the Figure 1: The interaction of the phrase extraction heuristic with unaligned English words means that the Spanish phrase la igualdad aligns with equal, create equal, and to create equal."
D08-1024,N03-1017,o,"5.1 Experimental setup The baseline model was Hiero with the following baseline features (Chiang, 2005; Chiang, 2007): two language models; phrase translation probabilities p(f|e) and p(e|f); lexical weighting in both directions (Koehn et al., 2003); word penalty; penalties for: automatically extracted rules, identity rules (translating a word into itself), two classes of number/name translation rules, glue rules. The probability features are base-100 log-probabilities." D08-1051,N03-1017,p,"One of the most popular instantiations of log-linear models is that including phrase-based (PB) models (Zens et al., 2002; Koehn et al., 2003)." D08-1059,N03-1017,p,"Beam-search has been successful in many NLP tasks (Koehn et al., 2003; Collins and Roark, 2004), and can achieve accuracy that is close to exact inference." D08-1066,N03-1017,o,"From this aligned training corpus, we extract the phrase pairs according to the heuristics in (Koehn et al., 2003)." D08-1066,N03-1017,o,"(Koehn et al., 2003; Och and Ney, 2004))." D08-1066,N03-1017,o,"These heuristics define a phrase pair to consist of a source and target ngrams of a word-aligned source-target sentence pair such that if one end of an alignment is in the one ngram, the other end is in the other ngram (and there is at least one such alignment) (Och and Ney, 2004; Koehn et al., 2003)." D08-1066,N03-1017,o,"1 Motivation A major component in phrase-based statistical Machine translation (PBSMT) (Zens et al., 2002; Koehn et al., 2003) is the table of conditional probabilities of phrase translation pairs."
D08-1066,N03-1017,o,"The pervading method for estimating these probabilities is a simple heuristic based on the relative frequency of the phrase pair in the multi-set of the phrase pairs extracted from the word-aligned corpus (Koehn et al., 2003)." D08-1078,N03-1017,o,"The automatic alignments were extracted by appending the manually aligned sentences on to the respective Europarl v3 corpora and aligning them using GIZA++ (Och and Ney, 2003) and the grow-diag-final algorithm (Koehn et al., 2003)." D08-1089,N03-1017,n,"1 Introduction Statistical phrase-based systems (Och and Ney, 2004; Koehn et al., 2003) have consistently delivered state-of-the-art performance in recent machine translation evaluations, yet these systems remain weak at handling word order changes." D09-1006,N03-1017,p,"1 Introduction Many state-of-the-art machine translation (MT) systems over the past few years (Och and Ney, 2002; Koehn et al., 2003; Chiang, 2007; Koehn et al., 2007; Li et al., 2009) rely on several models to evaluate the goodness of a given candidate translation in the target language." D09-1021,N03-1017,o,"The method thereby retains the full set of lexical entries of phrase-based systems (e.g., (Koehn et al., 2003)).1 The model allows a straightforward integration of lexicalized syntactic language models, for example the models of (Charniak, 2001), in addition to a surface language model." D09-1021,N03-1017,o,"The future score is based on the source-language words that are still to be translated; this can be directly inferred from the item's bit-string; this is similar to the use of future scores in Pharaoh (Koehn et al., 2003), and in fact we use Pharaoh's future scores in our model." D09-1021,N03-1017,o,"We used Pharaoh (Koehn et al., 2003) as a baseline system for comparison; the s-phrases used in our system include all phrases, with the same scores, as those used by Pharaoh, allowing a direct comparison."
D09-1021,N03-1017,o,"In our experiments we use standard methods in phrase-based systems (Koehn et al., 2003) to define the set of phrase entries for each sentence in training data." D09-1023,N03-1017,o,"These estimates are usually heuristic and inconsistent (Koehn et al., 2003)." D09-1023,N03-1017,o,"(Koehn et al., 2003); they can overlap.5 Additionally, since phrase features can be any function of words and alignments, we permit features that consider phrase pairs in which a target word outside the target phrase aligns to a source word inside the source phrase, as well as phrase pairs with gaps (Chiang, 2005; Ittycheriah and Roukos, 2007)." D09-1023,N03-1017,o,"2.4 Reordering Reordering features take many forms in MT. In phrase-based systems, reordering is accomplished both within phrase pairs (local reordering) as well as through distance-based distortion models (Koehn et al., 2003) and lexicalized reordering models (Koehn et al., 2007)." D09-1023,N03-1017,p,"1 Introduction We have seen rapid recent progress in machine translation through the use of rich features and the development of improved decoding algorithms, often based on grammatical formalisms.1 If we view MT as a machine learning problem, features and formalisms imply structural independence assumptions, which are in turn exploited by efficient inference algorithms, including decoders (Koehn et al., 2003; Yamada and Knight, 2001)." D09-1037,N03-1017,o,"These heuristics are extensions of those developed for phrase-based models (Koehn et al., 2003), and involve symmetrising two directional word alignments followed by a projection step which uses the alignments to find a mapping between source words and nodes in the target parse trees (Galley et al., 2004)."
D09-1037,N03-1017,o,"The rules are then treated as events in a relative frequency estimate.4 We used Giza++ Model 4 to obtain word alignments (Och and Ney, 2003), using the grow-diag-final-and heuristic to symmetrise the two directional predictions (Koehn et al., 2003)." D09-1037,N03-1017,o,"2.1 Heuristic Grammar Induction Grammar based SMT models almost exclusively follow the same two-stage approach to grammar induction developed for phrase-based methods (Koehn et al., 2003)." D09-1037,N03-1017,o,"The production weights are estimated either by heuristic counting (Koehn et al., 2003) or using the EM algorithm." D09-1037,N03-1017,n,"In contrast, standard phrase-based models (Koehn et al., 2003) assume a mostly monotone mapping between source and target, and therefore cannot adequately model these phenomena." D09-1040,N03-1017,n,"1 Introduction Phrase-based systems, flat and hierarchical alike (Koehn et al., 2003; Koehn, 2004b; Koehn et al., 2007; Chiang, 2005; Chiang, 2007), have achieved a much better translation coverage than wordbased ones (Brown et al., 1993), but untranslated words remain a major problem in SMT." D09-1050,N03-1017,o,"(Och et al., 1999; Koehn et al., 2003; Liang et al., 2006)." D09-1050,N03-1017,o,"Computing the phrase translation probability is trivial in the training corpora, but lexical weighting (Koehn et al., 2003) needs lexical-level alignment." D09-1073,N03-1017,o,"Recently, many phrase reordering methods have been proposed, ranging from simple distancebased distortion model (Koehn et al., 2003; Och and Ney, 2004), flat reordering model (Wu, 1997; Zens et al., 2004), lexicalized reordering model (Tillmann, 2004; Kumar and Byrne, 2005), to hierarchical phrase-based model (Chiang, 2005; Setiawan et al., 2007) and classifier-based reordering model with linear features (Zens and Ney, 2006; Xiong et al., 2006; Zhang et al., 2007a; Xiong et al., 2008)." 
D09-1073,N03-1017,p,"1 Introduction Phrase-based method (Koehn et al., 2003; Och and Ney, 2004; Koehn et al., 2007) and syntax-based method (Wu, 1997; Yamada and Knight, 2001; Eisner, 2003; Chiang, 2005; Cowan et al., 2006; Marcu et al., 2006; Liu et al., 2007; Zhang et al., 2007c, 2008a, 2008b; Shen et al., 2008; Mi and Huang, 2008) represent the state-of-the-art technologies in statistical machine translation (SMT)." D09-1106,N03-1017,o,"Word-aligned corpora have been found to be an excellent source for translation-related knowledge, not only for phrase-based models (Och and Ney, 2004; Koehn et al., 2003), but also for syntax-based models (e.g., (Chiang, 2007; Galley et al., 2006; Shen et al., 2008; Liu et al., 2006))." D09-1106,N03-1017,o,"Then, we used the refinement technique grow-diag-final-and (Koehn et al., 2003) to all 50 50 bidirectional alignment pairs." D09-1106,N03-1017,o,"The methods for calculating relative frequencies (Och and Ney, 2004) and lexical weights (Koehn et al., 2003) are also adapted for the weighted matrix case." D09-1106,N03-1017,p,"Besides relative frequencies, lexical weights (Koehn et al., 2003) are widely used to estimate how well the words in f translate the words in e. To do this, one needs first to estimate a lexical translation probability distribution w(e|f) by relative frequency from the same word alignments in the training corpus: w(e|f) = count(f,e) / sum_e count(f,e) (3) Note that a special source NULL token is added to each source sentence and aligned to each unaligned target word." D09-1107,N03-1017,o,"While theoretically sound, this approach is computationally challenging both in practice (DeNero et al., 2008) and in theory (DeNero and Klein, 2008), may suffer from reference reachability problems (DeNero et al., 2006), and in the end may lead to inferior translation quality (Koehn et al., 2003)."
D09-1111,N03-1017,o,"Substring-based transliteration with a generative hybrid model is very similar to existing solutions for phrasal SMT (Koehn et al., 2003), operating on characters rather than words." D09-1115,N03-1017,o,"Then, we apply a grow-diag-final algorithm which is widely used in bilingual phrase extraction (Koehn et al., 2003) to monolingual alignments." D09-1123,N03-1017,o,"The prior probability P0 is the prior distribution for the phrase probability which is estimated using the phrase normalized counts commonly used in conventional phrase-based SMT systems, e.g., (Koehn et al., 2003)." D09-1136,N03-1017,o,"Table 3: BLEU-4 scores (test set) of systems based on GIZA++ word alignments Table 4: BLEU-4 scores (test set) of the union alignment, using TTS templates up to a certain size, in terms of the number of leaves in their LHSs 4.1 Baseline Systems GHKM (Galley et al., 2004) is used to generate the baseline TTS templates based on the word alignments computed using GIZA++ and different combination methods, including union and the diagonal growing heuristic (Koehn et al., 2003)." D09-1136,N03-1017,o,"The word alignment is computed using GIZA++2 for the selected 73,597 sentence pairs in the FBIS corpus in both directions and then combined using union and heuristic diagonal growing (Koehn et al., 2003)." E06-2002,N03-1017,o,"By introducing the hidden word alignment variable a, the following approximate optimization criterion can be applied for that purpose: e = argmax_e Pr(e|f) = argmax_e sum_a Pr(e,a|f) ~ argmax_{e,a} Pr(e,a|f) Exploiting the maximum entropy (Berger et al.
, 1996) framework, the conditional distribution Pr(e,a | f) can be determined through suitable real valued functions (called features) h_r(e,f,a), r = 1,...,R, and takes the parametric form: p(e,a|f) proportional to exp{sum_{r=1}^{R} lambda_r h_r(e,f,a)} The ITC-irst system (Chen et al. , 2005) is based on a log-linear model which extends the original IBM Model 4 (Brown et al. , 1993) to phrases (Koehn et al. , 2003; Federico and Bertoldi, 2005)." E09-1003,N03-1017,o,"It is today common practice to use phrases as translation units (Koehn et al., 2003; Och and Ney, 2003) instead of the original word-based approach." E09-1033,N03-1017,o,"These were combined using the Grow Diag Final And symmetrization heuristic (Koehn et al., 2003)." E09-1043,N03-1017,n,"The problem is typically presented in log-space, which simplifies computations, but otherwise does not change the problem due to the monotonicity of the log function (h_m = log h'_m) log p(t|s) = sum_m lambda_m h_m(t,s) (3) Phrase-based models (Koehn et al., 2003) are limited to the mapping of small contiguous chunks of text." E09-1049,N03-1017,o,"(2003), or in more recent implementation, the MOSES MT system1 (Koehn et al., 2007)." E09-1063,N03-1017,o,"5.3 Baseline System We conducted experiments using different segmenters with a standard log-linear PB-SMT model: GIZA++ implementation of IBM word alignment model 4 (Och and Ney, 2003), the refinement and phrase-extraction heuristics described in (Koehn et al., 2003), minimum-error-rate training (Och, 2003), a 5-gram language model with Kneser-Ney smoothing trained with SRILM (Stolcke, 2002) on the English side of the training data, and Moses (Koehn et al., 2007; Dyer et al., 2008) to translate both single best segmentation and word lattices."
H05-1009,N03-1017,o,"We computed precision, recall and error rate on the entire set of sentence pairs for each data set.5 To evaluate NeurAlign, we used GIZA++ in both directions (E-to-F and F-to-E, where F is either Chinese (C) or Spanish (S)) as input and a refined alignment approach (Och and Ney, 2000) that uses a heuristic combination method called grow-diag-final (Koehn et al. , 2003) for comparison." H05-1021,N03-1017,o,"Phrase-pairs are then extracted from the word alignments (Koehn et al. , 2003)." H05-1022,N03-1017,o,"5 Phrase Pair Induction A common approach to phrase-based translation is to extract an inventory of phrase pairs (PPI) from bitext (Koehn et al. , 2003). For example, in the phrase-extract algorithm (Och, 2002), a word alignment a_1^m is generated over the bitext, and all word subsequences e_{i1}^{i2} and f_{j1}^{j2} are found that satisfy: a_1^m : a_j in [i1,i2] iff j in [j1,j2]." H05-1023,N03-1017,p,"1 Introduction Today's statistical machine translation systems rely on high quality phrase translation pairs to acquire state-of-the-art performance, see (Koehn et al. , 2003; Zens and Ney, 2004; Och and Ney, 2003)." H05-1024,N03-1017,o,"We computed precision, recall and error rate on the entire set for each data set.6 For an initial alignment, we used GIZA++ in both directions (E-to-F and F-to-E, where F is either Chinese (C) or Spanish (S)), and also two different combined alignments: intersection of E-to-F and F-to-E; and RA using a heuristic combination approach called grow-diag-final (Koehn et al. , 2003)." H05-1024,N03-1017,o,"The standard method to overcome this problem to use the model in both directions (interchanging the source and target languages) and applying heuristic-based combination techniques to produce a refined alignment (Och and Ney, 2000; Koehn et al. , 2003)henceforth referred to as RA. Several researchers have proposed algorithms for improving word alignment systems by injecting additional knowledge or combining different alignment models."
H05-1024,N03-1017,p,"For our experiments, we chose GIZA++ (Och and Ney, 2000) and the RA approach (Koehn et al. , 2003), the best known alignment combination technique, as our initial aligners.1 4.2 TBL Templates Our templates consider consecutive words (of size 1, 2 or 3) in both languages." H05-1096,N03-1017,p,"Nowadays, most of the state-of-the-art SMT systems are based on bilingual phrases (Bertoldi et al. , 2004; Koehn et al. , 2003; Och and Ney, 2004; Tillmann, 2003; Vogel et al. , 2004; Zens and Ney, 2004)." H05-1098,N03-1017,o,"The basic model uses the following features, analogous to Pharaoh's default feature set: P(.|.) and P(.|.); the lexical weights Pw(.|.) and Pw(.|.) (Koehn et al. , 2003);1 a phrase penalty exp(1); a word penalty exp(l), where l is the number of terminals." H05-1098,N03-1017,o,"The feature weights are learned by maximizing the BLEU score (Papineni et al. , 2002) on held-out data, using minimum-error-rate training (Och, 2003) as implemented by Koehn." H05-1098,N03-1017,o,"The need for some way to model aspects of syntactic behavior, such as the tendency of constituents to move together as a unit, is widely recognized; the role of syntactic units is well attested in recent systematic studies of translation (Fox, 2002; Hwa et al. , 2002; Koehn and Knight, 2003), and their absence in phrase-based models is quite evident when looking at MT system output." I08-1033,N03-1017,o,"The phrase-based machine translation (Koehn et al., 2003) uses the grow-diag-final heuristic to extend the word alignment to phrase alignment by using the intersection result." I08-1033,N03-1017,o,"However for remedy, many of the current word alignment methods combine the results of both alignment directions, via intersection or grow-diag-final heuristic, to improve the alignment reliability (Koehn et al., 2003; Liang et al., 2006; Ayan et al., 2006; DeNero et al., 2007)."
I08-1064,N03-1017,p,"Although bi-alignments are known to exhibit high precision (Koehn et al., 2003), in the face of sparse annotations we use unidirectional alignments as a fallback, as has been proposed in the context of phrase-based machine translation (Koehn et al., 2003; Tillmann, 2003)." I08-1067,N03-1017,p,"Phrases extracted using these heuristics are also shown to perform better than syntactically motivated phrases, the joint model, and IBM model 4 (Koehn et al., 2003)." I08-1067,N03-1017,o,"The phrase translation table is learnt in the following manner: The parallel corpus is word-aligned bidirectionally, and using various heuristics (see (Koehn et al., 2003) for details) phrase correspondences are established." I08-2088,N03-1017,o,"We used the preprocessed data to train the phrase-based translation model by using GIZA++ (Och and Ney, 2003) and the Pharaoh tool kit (Koehn et al., 2003)." I08-2088,N03-1017,o,"3.2.2 Features We used eight features (Och and Ney, 2003; Koehn et al., 2003) and their weights for the translations." I08-8001,N03-1017,n,"However, reordering models in traditional phrase-based systems are not sufficient to treat such complex cases when we translate long sentences (Koehn et al., 2003)." J07-1003,N03-1017,p,"Nowadays, most state-of-the-art SMT systems are based on bilingual phrases (Och, Tillmann, and Ney 1999; Koehn, Och, and Marcu 2003; Tillmann 2003; Bertoldi et al. 2004; Vogel et al. 2004; Zens and Ney 2004; Chiang 2005)." J07-2003,N03-1017,o,"Above the phrase level, some models perform no reordering (Zens and Ney 2004; Kumar, Deng, and Byrne 2006), some have a simple distortion model that reorders phrases independently of their content (Koehn, Och, and Marcu 2003; Och and Ney 2004), and some, for example, the Alignment Template System (Och et al. 2004; Thayer et al. 2004), hereafter ATS, and the IBM phrase-based system (Tillmann 2004; Tillmann and Zhang 2005), have phrase-reordering models that add some lexical sensitivity."
N04-1033,N03-1017,o,"In (Koehn et al. , 2003), various aspects of phrase-based systems are compared, e.g. the phrase extraction method, the underlying word alignment model, or the maximum phrase length." N04-1035,N03-1017,p,"Along this line, (Koehn et al. , 2003) present convincing evidence that restricting phrasal translation to syntactic constituents yields poor translation performance the ability to translate nonconstituent phrases (such as there are, note that, and according to) turns out to be critical and pervasive." N04-4026,N03-1017,p,"1 Introduction In recent years, phrase-based systems for statistical machine translation (Och et al. , 1999; Koehn et al. , 2003; Venugopal et al. , 2003) have delivered state-of-the-art performance on standard translation tasks." N06-1002,N03-1017,o,"As an additional baseline, we compare against a phrasal SMT decoder, Pharaoh (Koehn et al. 2003)." N06-1002,N03-1017,o,"We used the heuristic combination described in (Och and Ney 2003) and extracted phrasal translation pairs from this combined alignment as described in (Koehn et al. , 2003)." N06-1003,N03-1017,p,"2 The Problem of Coverage in SMT Statistical machine translation made considerable advances in translation quality with the introduction of phrase-based translation (Marcu and Wong, 2002; Koehn et al. , 2003; Och and Ney, 2004)." N06-1004,N03-1017,o,"Phrase tables were learned from the training corpus using the diag-and method (Koehn et al. , 2003), and using IBM model 2 to produce initial word alignments (these authors found this worked as well as IBM4)." N06-1004,N03-1017,o,"1 Introduction: Defining SCMs The work presented here was done in the context of phrase-based MT (Koehn et al. , 2003; Och and Ney, 2004)." N06-1013,N03-1017,o,"Based on the observations in (Koehn et al. , 2003), we also limited the phrase length to 3 for computational reasons." 
N06-1013,N03-1017,n,"For comparison purposes, three additional heuristically-induced alignments are generated for each system: (1) Intersection of both directions (Aligner(int)); (2) Union of both directions (Aligner(union)); and (3) The previously best-known heuristic combination approach called grow-diag-final (Koehn et al. , 2003) (Aligner(gdf))." N06-1013,N03-1017,o,"1 Introduction Word alignment, detection of corresponding words between two sentences that are translations of each other, is usually an intermediate step of statistical machine translation (MT) (Brown et al. , 1993; Och and Ney, 2003; Koehn et al. , 2003), but also has been shown useful for other applications such as construction of bilingual lexicons, word-sense disambiguation, projection of resources, and cross-language information retrieval." N06-1014,N03-1017,o,"Using GIZA++ model 4 alignments and Pharaoh (Koehn et al. , 2003), we achieved a BLEU score of 0.3035." N06-1014,N03-1017,o,"1 Introduction Word alignment is an important component of a complete statistical machine translation pipeline (Koehn et al. , 2003)." N06-1015,N03-1017,p,"We view this as a particularly promising aspect of our work, given that phrase-based systems such as Pharaoh (Koehn et al. , 2003) perform better with higher recall alignments." N06-1031,N03-1017,o,"1 Introduction Recent work in statistical machine translation (MT) has sought to overcome the limitations of phrase-based models (Marcu and Wong, 2002; Koehn et al. , 2003; Och and Ney, 2004) by making use of syntactic information." N06-1032,N03-1017,o,"1 Introduction Recent approaches to statistical machine translation (SMT) piggyback on the central concepts of phrase-based SMT (Och et al. , 1999; Koehn et al. , 2003) and at the same time attempt to improve some of its shortcomings by incorporating syntactic knowledge in the translation process."
N07-1007,N03-1017,p,"Most state-of-the-art SMT systems treat grammatical elements in exactly the same way as content words, and rely on general-purpose phrasal translations and target language models to generate these elements (e.g. , Och and Ney, 2002; Koehn et al. , 2003; Quirk et al. , 2005; Chiang, 2005; Galley et al. , 2006)." N07-1022,N03-1017,o,"In this paper we present results on using a recent phrase-based SMT system, PHARAOH (Koehn et al. , 2003), for NLG.1 (1: We also tried IBM Model 4/REWRITE (Germann, 2003), a word-based SMT system, but it gave much worse results.)" N07-1022,N03-1017,n,"Like WASP1, the phrase extraction algorithm of PHARAOH is based on the output of a word alignment model such as GIZA++ (Koehn et al. , 2003), which performs poorly when applied directly to MRLs (Section 3.2)." N07-1022,N03-1017,o,"To remedy this situation, we can borrow the probabilistic model of PHARAOH, and define the parsing model as: Pr(d|e(d)) = prod_{r in d} w(r) (4) which is the product of the weights of the rules used in a derivation d. The rule weight, w(X -> <.,.>), is in turn defined as: P(.|.)^lambda1 P(.|.)^lambda2 Pw(.|.)^lambda3 Pw(.|.)^lambda4 exp(|.|)^lambda5 where P(.|.) and P(.|.) are the relative frequencies, and Pw(.|.) and Pw(.|.) are the lexical weights (Koehn et al. , 2003)." N07-1022,N03-1017,o,"Following the phrase extraction phase in PHARAOH, we eliminate word gaps by incorporating unaligned words as part of the extracted NL phrases (Koehn et al. , 2003)." N07-1022,N03-1017,o,"3.1 Generation using PHARAOH PHARAOH (Koehn et al. , 2003) is an SMT system that uses phrases as basic translation units." N07-1022,N03-1017,o,"These rules are learned using a word alignment model, which finds an optimal mapping from words to MR predicates given a set of training sentences and their correct MRs. Word alignment models have been widely used for lexical acquisition in SMT (Brown et al. , 1993; Koehn et al. , 2003)."
N07-1061,N03-1017,o,"2 Phrase-based SMT We use a phrase-based SMT system, Pharaoh, (Koehn et al. , 2003; Koehn, 2004), which is based on a log-linear formulation (Och and Ney, 2002)." N07-1061,N03-1017,o,"For details on these feature functions, please refer to (Koehn et al. , 2003; Koehn, 2004; Koehn et al. , 2005)." N07-1061,N03-1017,o,"That is, phrases are heuristically extracted from word-level alignments produced by doing GIZA++ training on the corresponding parallel corpora (Koehn et al. , 2003)." N07-1061,N03-1017,o,"The definitions of the phrase and lexical translation probabilities are as follows (Koehn et al. , 2003)." N07-1062,N03-1017,o,"Even a length limit of 3, as proposed by (Koehn et al. , 2003), would result in almost optimal translation quality." N07-1062,N03-1017,o,"We have investigated this and our results are in line with (Koehn et al. , 2003) showing that the translation quality does not improve if we utilize phrases beyond a certain length." N07-1063,N03-1017,o,"Grammar rules were induced with the syntax-based SMT system SAMT described in (Zollmann and Venugopal, 2006), which requires initial phrase alignments that we generated with GIZA++ (Koehn et al. , 2003), and syntactic parse trees of the target training sentences, generated by the Stanford Parser (D. Klein, 2003) pre-trained on the Penn Treebank." N07-2007,N03-1017,o,"The baseline we measure against in all of these experiments is the state-of-the-art grow-diag-final (gdf) alignment refinement heuristic commonly used in phrase-based SMT (Koehn et al. , 2003)." N07-2008,N03-1017,o,"They have been employed in word sense disambiguation (Diab and Resnik, 2002), automatic construction of bilingual dictionaries (McEwan et al. , 2002), and inducing statistical machine translation models (Koehn et al. , 2003)." N07-2009,N03-1017,o,"For comparison, we use the MT training program, GIZA++ (Och and Ney, 2003), the phrase-based decoder, Pharaoh (Koehn et al.
, 2003), and the word-based decoder, Rewrite (Germann, 2003)." N07-2015,N03-1017,o,"Many research groups use a decoder based on a log-linear approach incorporating phrases as main paradigm (Koehn et al. , 2003)." N07-2053,N03-1017,o,"(2006), modified from (Koehn et al. , 2003), which is an average of pairwise word translation probabilities." N07-2053,N03-1017,p,"They provide pairs of phrases that are used to construct a large set of potential translations for each input sentence, along with feature values associated with each phrase pair that are used to select the best translation from this set.1 The most widely used method for building phrase translation tables (Koehn et al. , 2003) selects, from a word alignment of a parallel bilingual training corpus, all pairs of phrases (up to a given length) that are consistent with the alignment." N09-1013,N03-1017,o,"3.2.2 Alignment Error Rate Since MT systems are usually built on the union of the two sets of alignments (Koehn et al., 2003), we consider the union of alignments in the two directions as well as those in each direction." N09-1021,N03-1017,o,"In particular, we adopt the approach of phrase-based statistical machine translation (Koehn et al., 2003; Koehn and Hoang, 2007)." N09-1029,N03-1017,o,"We obtain aligned parallel sentences and the phrase table after the training of Moses, which includes running GIZA++ (Och and Ney, 2003), grow-diagonal-final symmetrization and phrase extraction (Koehn et al., 2005)." N09-1029,N03-1017,o,"For example, in phrase-based SMT systems (Koehn et al., 2003; Koehn, 2004), distortion model is used, in which reordering probabilities depend on relative positions of target side phrases between adjacent blocks."
N09-1029,N03-1017,o,"Therefore, while phrase-based SMT moves from words to phrases as the basic unit of translation, implying effective local reordering within phrases, it suffers when determining phrase reordering, especially when phrases are longer than three words (Koehn et al., 2003)." N09-1046,N03-1017,o,"Word alignment was carried out by running Giza++ implementation of IBM Model 4 initialized with 5 iterations of Model 1, 5 of the HMM aligner, and 3 iterations of Model 4 (Och and Ney, 2003) in both directions and then symmetrizing using the grow-diag-final-and heuristic (Koehn et al., 2003)." N09-1046,N03-1017,o,"Unfortunately, determining the optimal segmentation is challenging, typically requiring extensive experimentation (Koehn and Knight, 2003; Habash and Sadat, 2006; Chang et al., 2008)." N09-1046,N03-1017,o,"The features used by the decoder were the English language model log probability, log f(e|f), the lexical translation log probabilities in both directions (Koehn et al., 2003), and a word count feature." N09-2024,N03-1017,o,"Typically, a phrase-based SMT system includes a feature that scores phrase pairs using lexical weights (Koehn et al., 2003) which are computed for two directions: source to target and target to source." N09-2055,N03-1017,o,"Each model can represent an important feature for the translation, such as phrase-based, language, or lexical models (Koehn et al., 2003)." N09-3016,N03-1017,o,"The transcription probabilities can then be easily learnt from the alignments induced by GIZA++, using a scoring function (Koehn et al., 2003)." N09-3016,N03-1017,o,"We used minimum error rate training (Och, 2003) and the A* beam search decoder implemented by Koehn (Koehn et al., 2003)." N09-3016,N03-1017,o,"5.1 Exploring the Parameters The parameters which have a major influence on the performance of a phrase-based SMT model are the alignment heuristics, the maximum phrase length (MPR) and the order of the language model (Koehn et al., 2003)."
P04-1023,N03-1017,o,"The phrase-based decoder extracts phrases from the word alignments produced by GIZA++, and computes translation probabilities based on the frequency of one phrase being aligned with another (Koehn et al. , 2003)." P04-1060,N03-1017,o,"For each span in the chart, we get a weight factor that is multiplied with the parameter-based expectations.9 4 Experiments We applied GIZA++ (Al-Onaizan et al. , 1999; Och and Ney, 2003) to word-align parts of the Europarl corpus (Koehn, 2002) for English and all other 10 languages." P04-1060,N03-1017,o,"We use the Europarl corpus (Koehn, 2002), and the statistical word alignment was performed with the GIZA++ toolkit (Al-Onaizan et al. , 1999; Och and Ney, 2003).1 For the current experiments we assume no preexisting parser for any of the languages, contrary to the information projection scenario." P04-1060,N03-1017,o,"(Koehn et al. , 2003) show that exploiting all contiguous word blocks in phrase-based alignment is better than focusing on syntactic constituents only." P04-1064,N03-1017,o,"It is important because a wordaligned corpus is typically used as a first step in order to identify phrases or templates in phrase-based Machine Translation (Och et al. , 1999), (Tillmann and Xia, 2003), (Koehn et al. , 2003, sec." P05-1033,N03-1017,p,"We compared a baseline system, the state-of-the-art phrase-based system Pharaoh (Koehn et al. , 2003; Koehn, 2004a), against our system." P05-1033,N03-1017,o,"5.1 Baseline The baseline system we used for comparison was Pharaoh (Koehn et al. , 2003; Koehn, 2004a), as publicly distributed." P05-1033,N03-1017,o,"Above the phrase level, these models typically have a simple distortion model that reorders phrases independently of their content (Och and Ney, 2004; Koehn et al. , 2003), or not at all (Zens and Ney, 2004; Kumar et al. , 2005)." P05-1033,N03-1017,o,"When we run a phrase-based system, Pharaoh (Koehn et al. 
, 2003; Koehn, 2004a), on this sentence (using the experimental setup described below), we get the following phrases with translations: (4) [Aozhou] [shi] [yu] [Bei Han] [you] [bangjiao]1 [de shaoshu guojia zhiyi] [Australia] [is] [dipl." P05-1033,N03-1017,o,"For our experiments we used the following features, analogous to Pharaoh's default feature set: P(γ | α) and P(α | γ), the latter of which is not found in the noisy-channel model, but has been previously found to be a helpful feature (Och and Ney, 2002); the lexical weights Pw(γ | α) and Pw(α | γ) (Koehn et al., 2003), which estimate how well the words in α translate the words in γ; a phrase penalty exp(1), which allows the model to learn a preference for longer or shorter derivations, analogous to Koehn's phrase penalty (Koehn, 2003)." P05-1033,N03-1017,o,"To do this, we first identify initial phrase pairs using the same criterion as previous systems (Och and Ney, 2004; Koehn et al., 2003): Definition 1." P05-1066,N03-1017,o,"In experiments with the system of (Koehn et al., 2003) we have found that in practice a large number of complete translations are completely monotonic (i.e., have 0 skips), suggesting that the system has difficulty learning exactly what points in the translation should allow reordering." P05-1066,N03-1017,o,"Our baseline is the phrase-based MT system of (Koehn et al., 2003)." P05-1066,N03-1017,p,"Results using the method show an improvement from 25.2% Bleu score to 26.8% Bleu score (a statistically significant improvement), using a phrase-based system (Koehn et al., 2003) which has been shown in the past to be a highly competitive SMT system." P05-1066,N03-1017,p,"More recently, phrase-based models (Och et al., 1999; Marcu and Wong, 2002; Koehn et al., 2003) have been proposed as a highly successful alternative to the IBM models." P05-1066,N03-1017,o,"In this paper we use the phrase-based system of (Koehn et al., 2003) as our underlying model."
P05-1066,N03-1017,o,"Reranking methods have also been proposed as a method for using syntactic information (Koehn and Knight, 2003; Och et al., 2004; Shen et al., 2004)." P05-1066,N03-1017,o,"1 Introduction Recent research on statistical machine translation (SMT) has led to the development of phrase-based systems (Och et al., 1999; Marcu and Wong, 2002; Koehn et al., 2003)." P05-1068,N03-1017,p,"Recently, various works have improved the quality of statistical machine translation systems by using phrase translation (Koehn et al., 2003; Marcu et al., 2002; Och et al., 1999; Och and Ney, 2000; Zens et al., 2004)." P05-1069,N03-1017,o,"3.4 Lexical Weighting The lexical weight p(S | T) of the block b = (S, T) is computed similarly to (Koehn et al., 2003), but the lexical translation probability p(s | t) is derived from the block set itself rather than from a word alignment, resulting in a simplified training." P05-1069,N03-1017,o,"Two block sets are derived for each of the training sets using a phrase-pair selection algorithm similar to (Koehn et al., 2003; Tillmann and Xia, 2003)." P05-1069,N03-1017,o,"2 Block Orientation Bigrams This section describes a phrase-based model for SMT similar to the models presented in (Koehn et al., 2003; Och et al., 1999; Tillmann and Xia, 2003)." P05-1069,N03-1017,o,"Lexical Weighting: (e) the lexical weight p(S | T) of the block b = (S, T) is computed similarly to (Koehn et al., 2003), details are given in Section 3.4." P05-1074,N03-1017,o,"Our method for identifying paraphrases is an extension of recent work in phrase-based statistical machine translation (Koehn et al., 2003)." P05-1074,N03-1017,o,"Koehn (2004), Tillmann (2003), and Vogel et al." P05-2016,N03-1017,o," Statistical Phrase-based Translation (Koehn et al.
, 2003): Here phrase-based means subsequence-based, as there is no guarantee that the phrases learned by the model will have any relation to what we would think of as syntactic phrases." P06-1009,N03-1017,o,"Most current SMT systems (Och and Ney, 2004; Koehn et al., 2003) use a generative model for word alignment such as the freely available GIZA++ (Och and Ney, 2003), an implementation of the IBM alignment models (Brown et al., 1993)." P06-1066,N03-1017,o,"One is distortion model (Och and Ney, 2004; Koehn et al., 2003) which penalizes translations according to their jump distance instead of their content." P06-1067,N03-1017,o,"However, their decoder is outperformed by phrase-based decoders such as (Koehn, 2004), (Och et al., 1999), and (Tillmann and Ney, 2003)." P06-1067,N03-1017,o,"Similarly, (Koehn et al., 2003) propose a relative distortion model to be used with a phrase decoder." P06-1077,N03-1017,o,"5.1 Pharaoh The baseline system we used for comparison was Pharaoh (Koehn et al., 2003; Koehn, 2004), a freely available decoder for phrase-based translation models: p(e|f) = p(f|e)^λφ pLM(e)^λLM pD(e,f)^λD ω^(λW length(e)) (10) We ran GIZA++ (Och and Ney, 2000) on the training corpus in both directions using its default setting, and then applied the refinement rule diag-and described in (Koehn et al., 2003) to obtain a single many-to-many word alignment for each sentence pair." P06-1077,N03-1017,o,"h1(e_1^I, f_1^J) = log ∏_{k=1}^{K} N(z) δ(T(z), T_k) / N(T(z)); h2(e_1^I, f_1^J) = log ∏_{k=1}^{K} N(z) δ(T(z), T_k) / N(S(z)); h3(e_1^I, f_1^J) = log ∏_{k=1}^{K} lex(T(z)|S(z)) δ(T(z), T_k); h4(e_1^I, f_1^J) = log ∏_{k=1}^{K} lex(S(z)|T(z)) δ(T(z), T_k); h5(e_1^I, f_1^J) = K; h6(e_1^I, f_1^J) = log ∏_{i=1}^{I} p(e_i | e_{i-2}, e_{i-1}); h7(e_1^I, f_1^J) = I. 4When computing lexical weighting features (Koehn et al., 2003), we take only terminals into account." P06-1077,N03-1017,p,"1 Introduction Phrase-based translation models (Marcu and Wong, 2002; Koehn et al.
, 2003; Och and Ney, 2004), which go beyond the original IBM translation models (Brown et al., 1993) by modeling translations of phrases rather than individual words, have been suggested to be the state-of-the-art in statistical machine translation by empirical evaluations." P06-1090,N03-1017,o,"Here, ppicker shows the accuracy when phrases are extracted by using the N-best phrase alignment method described in Section 4.1, while grow-diag-final shows the accuracy when phrases are extracted using the standard phrase extraction algorithm described in (Koehn et al., 2003)." P06-1090,N03-1017,o,"The translation model used in (Koehn et al., 2003) is the product of translation probability φ(f_i | e_i) and distortion probability d(a_i - b_{i-1}): p(f_1^I | e_1^I) = ∏_{i=1}^{I} φ(f_i | e_i) d(a_i - b_{i-1}) (1) where a_i denotes the start position of the source phrase translated into the i-th target phrase, and b_{i-1} denotes the end position of the source phrase translated into the (i-1)-th target phrase." P06-1090,N03-1017,o,"(Koehn et al.
, 2003) used the following distortion model, which simply penalizes nonmonotonic phrase alignments based on the word distance of successively translated source phrases with an appropriate value for the parameter α: d(a_i - b_{i-1}) = α^|a_i - b_{i-1} - 1| (3) [Figure 1: Phrase alignment and reordering] [Figure 2: Four types of reordering patterns] 3 The Global Phrase Reordering Model Figure 1 shows an example of Japanese-English phrase alignment that consists of four phrase pairs." P06-1090,N03-1017,o,"For comparison, we also implemented a different N-best phrase alignment method, where [Figure 4: N-best phrase alignments] phrase pairs are extracted using the standard phrase extraction method described in (Koehn et al., 2003)." P06-1090,N03-1017,o,"Standard phrase-based translation systems use a word distance-based reordering model in which non-monotonic phrase alignment is penalized based on the word distance between successively translated source phrases without considering the orientation of the phrase alignment or the identities of the source and target phrases (Koehn et al., 2003; Och and Ney, 2004)." P06-1091,N03-1017,o,"The block set is generated using a phrase-pair selection algorithm similar to (Koehn et al., 2003; Al-Onaizan et al., 2004), which includes some heuristic filtering." P06-1091,N03-1017,o,"Word-based features are used as well, e.g.
a feature that captures word-to-word translation dependencies similar to the use of Model 1 probabilities in (Koehn et al., 2003)." P06-1096,N03-1017,n,"The process of phrase extraction is difficult to optimize in a non-discriminative setting: many heuristics have been proposed (Koehn et al., 2003), but it is not obvious which one should be chosen for a given language pair." P06-1096,N03-1017,o,"The discrepancy between DEV performance and TEST performance is due to temporal distance from TRAIN and high variance in BLEU score. We also compared our model with Pharaoh (Koehn et al., 2003)." P06-1096,N03-1017,o,"At the end we ran our models once on TEST to get final numbers. 4 Models Our experiments used phrase-based models (Koehn et al., 2003), which require a translation table and language model for decoding and feature computation." P06-1096,N03-1017,o,"In the future, we plan to explore our discriminative framework on a full distortion model (Koehn et al., 2003) or even a hierarchical model (Chiang, 2005)." P06-1098,N03-1017,o,"A phrase-based translation model is one of the modern approaches which exploits a phrase, a contiguous sequence of words, as a unit of translation (Koehn et al., 2003; Zens and Ney, 2003; Tillman, 2004)." P06-1098,N03-1017,o,"Many-to-many word alignments are induced by running a one-to-many word alignment model, such as GIZA++ (Och and Ney, 2003), in both directions and by combining the results based on a heuristic (Koehn et al., 2003)." P06-1098,N03-1017,o,"Second, phrase translation pairs are extracted from the word alignment corpus (Koehn et al., 2003)." P06-1122,N03-1017,p,"4 Experiments Phrase-based SMT systems have been shown to outperform word-based approaches (Koehn et al., 2003)."
P06-1122,N03-1017,o,"4.1 Applications to phrase-based SMT A phrase-based translation model can be estimated in two stages: first a parallel corpus is aligned at the word-level and then phrase pairs are extracted (Koehn et al., 2003)." P06-1123,N03-1017,o,"is relevant to finite-state phrase-based models that use no parse trees (Koehn et al., 2003), tree-to-string models that rely on one parse tree (Yamada and Knight, 2001), and tree-to-tree models that rely on two parse trees (Groves et al., 2004, e.g.)." P06-1139,N03-1017,o,"Automatic Creation of WIDL-expressions for MT. We generate WIDL-expressions from Chinese strings by exploiting a phrase-based translation table (Koehn et al., 2003)." P06-1139,N03-1017,o,"When evaluated against the state-of-the-art, phrase-based decoder Pharaoh (Koehn, 2004), using the same experimental conditions translation table trained on the FBIS corpus (7.2M Chinese words and 9.2M English words of parallel text), trigram language model trained on 155M words of English newswire, interpolation weights (Equation 2) trained using discriminative training (Och, 2003) (on the 2002 NIST MT evaluation set), probabilistic beam set to 0.01, histogram beam set to 10 and BLEU (Papineni et al., 2002) as our metric, the WIDL-NGLM-A* algorithm produces translations that have a BLEU score of 0.2570, while Pharaoh translations have a BLEU score of 0.2635." P06-2005,N03-1017,o,"The normalization is visualized as a translation problem where messages in the SMS language are to be translated to normal English using a similar phrase-based statistical MT method (Koehn et al., 2003)." P06-2101,N03-1017,o,"and score the alignment template model's phrases (Koehn et al., 2003)." P06-2107,N03-1017,o,"The second one is heuristic and tries to use a word-aligned corpus (Zens et al., 2002; Koehn et al., 2003)." P07-1001,N03-1017,o,"Our decoder is a phrase-based multi-stack implementation of the log-linear model similar to Pharaoh (Koehn et al.
, 2003)." P07-1005,N03-1017,p,"To perform translation, state-of-the-art MT systems use a statistical phrase-based approach (Marcu and Wong, 2002; Koehn et al., 2003; Och and Ney, 2004) by treating phrases as the basic units of translation." P07-1005,N03-1017,p,"Recently, Cabezas and Resnik (2005) experimented with incorporating WSD translations into Pharaoh, a state-of-the-art phrase-based MT system (Koehn et al., 2003)." P07-1039,N03-1017,o,"4.3 Baseline We use a standard log-linear phrase-based statistical machine translation system as a baseline: GIZA++ implementation of IBM word alignment model 4 (Brown et al., 1993; Och and Ney, 2003), the refinement and phrase-extraction heuristics described in (Koehn et al., 2003), minimum-error-rate training (Footnote: More specifically, we choose the first English reference from the 7 references and the Chinese sentence to construct new sentence pairs.)" P07-1039,N03-1017,o,"[Table 2: Chinese-English corpus statistics (running words 1,864 / 14,437; vocabulary size 569 / 1,081)] (Och, 2003) using Phramer (Olteanu et al., 2006), a 3-gram language model with Kneser-Ney smoothing trained with SRILM (Stolcke, 2002) on the English side of the training data and Pharaoh (Koehn, 2004) with default settings to decode." P07-1039,N03-1017,o,"To quickly (and approximately) evaluate this phenomenon, we trained the statistical IBM word-alignment model 4 (Brown et al., 1993), using the GIZA++ software (Och and Ney, 2003) for the following language pairs: Chinese-English, Italian-English, and Dutch-English, using the IWSLT-2006 corpus (Takezawa et al., 2002; Paul, 2006) for the first two language pairs, and the Europarl corpus (Koehn, 2005) for the last one." P07-1059,N03-1017,o,"We present two approaches to SMT-based query expansion, both of which are implemented in the framework of phrase-based SMT (Och and Ney, 2004; Koehn et al., 2003)."
P07-1059,N03-1017,o,"4 SMT-Based Query Expansion Our SMT-based query expansion techniques are based on a recent implementation of the phrase-based SMT framework (Koehn et al., 2003; Och and Ney, 2004)." P07-1083,N03-1017,o,"A similar use of the term phrase exists in machine translation, where phrases are often pairs of word sequences consistent with word-based alignments (Koehn et al., 2003)." P07-1089,N03-1017,o,"We ran GIZA++ (Och and Ney, 2000) on the training corpus in both directions using its default setting, and then applied the refinement rule diag-and described in (Koehn et al., 2003) to obtain a single many-to-many word alignment for each sentence pair." P07-1089,N03-1017,o,"We compared our system Lynx against a freely available phrase-based decoder Pharaoh (Koehn et al., 2003)." P07-1090,N03-1017,o,"The basic phrase reordering model is a simple unlexicalized, context-insensitive distortion penalty model (Koehn et al., 2003)." P07-1090,N03-1017,o,"However, the pb features yield no noticeable improvement unlike in the perfect lexical choice scenario; this is similar to the findings in (Koehn et al., 2003)." P07-1091,N03-1017,o,"The translation table is obtained as described in (Koehn et al., 2003), i.e. the alignment tool GIZA++ is run over the training data in both translation directions, and the two align... [Table 2: Experiment Baseline. B1 standard phrase-based SMT: 29.22; B2 (B1) + clause splitting: 29.13] [Table 3: Tests on Various Reordering Models (BLEU for 2-ary / 2,3-ary). 1 rule: 29.77 / 30.31; 2 ME (phrase label): 29.93 / 30.49; 3 ME (left,right): 30.10 / 30.53; 4 ME ((3)+head): 30.24 / 30.71; 5 ME ((3)+phrase label): 30.12 / 30.30; 6 ME ((4)+context): 30.24 / 30.76] The 3rd column comprises the BLEU scores obtained by reordering binary nodes only, the 4th column the scores by reordering both binary and 3-ary nodes." P07-1091,N03-1017,o,"The implementation is similar to the idea of lexical weight in (Koehn et al.
, 2003): all points in the alignment matrices of the entire training corpus are collected to calculate the probabilistic distribution, P(t|s), of some TL word. (Footnote 3: Some readers may prefer the expression 'the subtree rooted at node N' to 'node N'. The latter term is used in this paper for simplicity.)" P07-1091,N03-1017,o,"For example, the distance-based reordering model (Koehn et al., 2003) allows a decoder to translate in non-monotonous order, under the constraint that the distance between two phrases translated consecutively does not exceed a limit known as distortion limit." P07-1092,N03-1017,o,"The translation models and lexical scores were estimated on the training corpus which was automatically aligned using Giza++ (Och et al., 1999) in both directions between source and target and symmetrised using the growing heuristic (Koehn et al., 2003)." P07-1092,N03-1017,o,"This represents the translation probability of a phrase when it is decomposed into a series of independent word-for-word translation steps (Koehn et al., 2003), and has proven a very effective feature (Zens and Ney, 2004; Foster et al., 2006)." P07-1092,N03-1017,o,"As with conventional smoothing methods (Koehn et al., 2003; Foster et al., 2006), triangulation increases the robustness of phrase translation estimates." P07-1092,N03-1017,p,"1 Introduction Statistical machine translation (Brown et al., 1993) has seen many improvements in recent years, most notably the transition from word- to phrase-based models (Koehn et al., 2003)." P07-1108,N03-1017,o,"3 Phrase-Based SMT According to the translation model presented in (Koehn et al.
, 2003), given a source sentence f, the best target translation e_best can be obtained according to the following model: e_best = argmax_e p(e|f) = argmax_e p(f|e) p_LM(e) ω^length(e) (1) where the translation model p(f|e) can be decomposed into p(f_1^I | e_1^I) = ∏_{i=1}^{I} φ(f_i | e_i) d(a_i - b_{i-1}) (2) where φ(f_i | e_i) and d(a_i - b_{i-1}) denote phrase translation probability and distortion probability, respectively." P07-1108,N03-1017,o,"Thus, equation (3) can be rewritten as φ(f_i | e_i) = Σ_{p_i} φ(f_i | p_i) p(p_i | e_i) (4) 4.2 Lexical Weight Given a phrase pair (f, e) and a word alignment a between the source word positions i = 1, ..., n and the target word positions j = 1, ..., m, the lexical weight can be estimated according to the following method (Koehn et al., 2003)." P07-1108,N03-1017,o,"1 Introduction For statistical machine translation (SMT), phrase-based methods (Koehn et al., 2003; Och and Ney, 2004) and syntax-based methods (Wu, 1997; Alshawi et al. 2000; Yamada and Knight, 2001; Melamed, 2004; Chiang, 2005; Quirk et al., 2005; Mellebeek et al., 2006) outperform word-based methods (Brown et al., 1993)." P07-1119,N03-1017,o,"The phrase-based approach developed for statistical machine translation (Koehn et al., 2003) is designed to overcome the restrictions on many-to-many mappings in word-based translation models." P07-1119,N03-1017,o,"Starting from a word-based alignment for each pair of sentences, the training for the algorithm accepts all contiguous bilingual phrase pairs (up to a predetermined maximum length) whose words are only aligned with each other (Koehn et al., 2003)." P07-2045,N03-1017,p,1 Motivation Phrase-based statistical machine translation (Koehn et al. 2003) has emerged as the dominant paradigm in machine translation research. P07-2046,N03-1017,o,"It is an extension of Pharaoh (Koehn et al., 2003), and supports factor training and decoding."
P08-1009,N03-1017,p,"Phrase-based decoding (Koehn et al., 2003) is a dominant formalism in statistical machine translation." P08-1009,N03-1017,o,"Early experiments with syntactically-informed phrases (Koehn et al., 2003), and syntactic reranking of K-best lists (Och et al., 2004) produced mostly negative results." P08-1009,N03-1017,o,"Restricting phrases to syntactic constituents has been shown to harm performance (Koehn et al., 2003), so we tighten our definition of a violation to disregard cases where the only point of overlap is obscured by our phrasal resolution." P08-1010,N03-1017,p,"The most widely used approach derives phrase pairs from word alignment matrix (Och and Ney, 2003; Koehn et al., 2003)." P08-1010,N03-1017,o,"4.1 Training and Translation Setup Our decoder is a phrase-based multi-stack implementation of the log-linear model similar to Pharaoh (Koehn et al., 2003)." P08-1010,N03-1017,o,"Since most phrases appear only a few times in training data, a phrase pair translation is also evaluated by lexical weights (Koehn et al., 2003) or term weighting (Zhao et al., 2004) as additional features to avoid overestimation." P08-1010,N03-1017,o,"The commonly used phrase extraction approach based on word alignment heuristics (referred as ViterbiExtract algorithm for comparison in this paper) as described in (Och, 2002; Koehn et al., 2003) is a special case of the algorithm, where candidate phrase pairs are restricted to those that respect word alignment boundaries." P08-1011,N03-1017,o,"In most statistical machine translation (SMT) models (Och et al., 2004; Koehn et al., 2003; Chiang, 2005), some of measure words can be generated without modification or additional processing." P08-1011,N03-1017,o,"We ran GIZA++ (Och and Ney, 2000) on the training corpus in both directions with IBM model 4, and then applied the refinement rule described in (Koehn et al., 2003) to obtain a many-to-many word alignment for each sentence pair." 
P08-1024,N03-1017,o,"The standard solution is to approximate the maximum probability translation using a single derivation (Koehn et al., 2003)." P08-1024,N03-1017,o,"(Koehn et al., 2003)." P08-1049,N03-1017,o,"While the research in statistical machine translation (SMT) has made significant progress, most SMT systems (Koehn et al., 2003; Chiang, 2007; Galley et al., 2006) rely on parallel corpora to extract translation entries." P08-1049,N03-1017,p,"However, since most of statistical translation models (Koehn et al., 2003; Chiang, 2007; Galley et al., 2006) are symmetrical, it is relatively easy to train a translation system to translate from English to Chinese, except that we need to train a Chinese language model from the Chinese monolingual data." P08-1064,N03-1017,o,"However, most of them fail to utilize non-syntactic phrases well that are proven useful in the phrase-based methods (Koehn et al., 2003)." P08-1064,N03-1017,p,"1 Introduction Phrase-based modeling method (Koehn et al., 2003; Och and Ney, 2004a) is a simple, but powerful mechanism to machine translation since it can model local reorderings and translations of multiword expressions well." P08-1089,N03-1017,o,"LW was originally used to validate the quality of a phrase translation pair in MT (Koehn et al., 2003)." P08-1114,N03-1017,o,"Rules have the form X → ⟨e, f⟩, where e and f are phrases containing terminal symbols (words) and possibly co-indexed instances of the nonterminal symbol X. Associated with each rule is a set of translation model features, φ_i(f, e); for example, one intuitively natural feature of a rule is the phrase translation (log-)probability φ(f, e) = log p(e|f), directly analogous to the corresponding feature in non-hierarchical phrase-based models like Pharaoh (Koehn et al., 2003)."
P08-1114,N03-1017,o,"In addition to this phrase translation probability feature, Hiero's feature set includes the inverse phrase translation probability log p(f|e), lexical weights lexwt(f|e) and lexwt(e|f), which are estimates of translation quality based on word-level correspondences (Koehn et al., 2003), and a rule penalty allowing the model to learn a preference for longer or shorter derivations; see (Chiang, 2007) for details." P08-1115,N03-1017,o,"Models that support non-monotonic decoding generally include a distortion cost, such as |a_i - b_{i-1} - 1|, where a_i is the starting position of the foreign phrase f_i and b_{i-1} is the ending position of phrase f_{i-1} (Koehn et al., 2003)." P08-1116,N03-1017,o,"Given phrase p1 and its paraphrase p2, we compute Score3(p1,p2) by relative frequency (Koehn et al., 2003): Score3(p1,p2) = p(p2|p1) = count(p2,p1) / Σ_{p'} count(p',p1) (7) People may wonder why we do not use the same method on the monolingual parallel and comparable corpora." P08-2041,N03-1017,n,"1 Introduction Currently, most of the phrase-based statistical machine translation (PBSMT) models (Marcu and Wong, 2002; Koehn et al., 2003) adopt full matching strategy for phrase translation, which means that a phrase pair (f̃, ẽ) can be used for translating a source phrase f, only if f̃ = f. Due to lack of generalization ability, the full matching strategy has some limitations." P09-1036,N03-1017,n,"In such a process, original phrase-based decoding (Koehn et al., 2003) does not take advantage of any linguistic analysis, which, however, is broadly used in rule-based approaches." P09-1036,N03-1017,o,"This, unfortunately, significantly jeopardizes performance (Koehn et al., 2003; Xiong et al., 2008) because by integrating syntactic constraint into decoding as a hard constraint, it simply prohibits any other useful non-syntactic translations which violate constituent boundaries."
P09-1038,N03-1017,o,"5.2 Translation experiments with a bigram language model In this section we consider two real translation tasks, namely, translation from English to French, trained on Europarl (Koehn et al., 2003) and translation from German to Spanish training on the NewsCommentary corpus." P09-1038,N03-1017,p,"1 Introduction Phrase-based systems (Koehn et al., 2003) are probably the most widespread class of Statistical Machine Translation systems, and arguably one of the most successful." P09-1063,N03-1017,o,"We obtained word alignments of the training data by first running GIZA++ (Och and Ney, 2003) and then applying the refinement rule grow-diag-final-and (Koehn et al., 2003)." P09-1065,N03-1017,o,"We obtained word alignments of training data by first running GIZA++ (Och and Ney, 2003) and then applying the refinement rule grow-diag-final-and (Koehn et al., 2003)." P09-1065,N03-1017,o,"On the other hand, other authors (e.g., (Och and Ney, 2004; Koehn et al., 2003; Chiang, 2007)) do use the expression phrase-based models." P09-1067,N03-1017,o,"They recover additional latent variables, so-called nuisance variables, that are not of interest to the user. (Footnote 1: These nuisance variables may be annotated in training data, but it is more common for them to be latent even there, i.e., there is no supervision as to their correct values.) For example, though machine translation (MT) seeks to output a string, typical MT systems (Koehn et al., 2003; Chiang, 2007)" P09-1067,N03-1017,o,"2.3 Viterbi Approximation To approximate the intractable decoding problem of (2), most MT systems (Koehn et al., 2003; Chiang, 2007) use a simple Viterbi approximation, ŷ = argmax_{y∈T(x)} p_Viterbi(y|x) (4) = argmax_{y∈T(x)} max_{d∈D(x,y)} p(y,d|x) (5) = Y(argmax_{d∈D(x)} p(y,d|x)) (6) Clearly, (5) replaces the sum in (2) with a max."
P09-1087,N03-1017,p,"1 Introduction Hierarchical approaches to machine translation have proven increasingly successful in recent years (Chiang, 2005; Marcu et al., 2006; Shen et al., 2008), and often outperform phrase-based systems (Och and Ney, 2004; Koehn et al., 2003) on target-language fluency and adequacy." P09-1088,N03-1017,o,"These word-based models are used to find the latent word alignments between bilingual sentence pairs, from which a weighted string transducer can be induced (either finite state (Koehn et al., 2003) or synchronous context free grammar (Chiang, 2007))." P09-1088,N03-1017,o,"We use the GIZA++ implementation of IBM Model 4 (Brown et al., 1993; Och and Ney, 2003) coupled with the phrase extraction heuristics of Koehn et al." P09-1088,N03-1017,p,"1 Introduction The field of machine translation has seen many advances in recent years, most notably the shift from word-based (Brown et al., 1993) to phrase-based models which use token n-grams as translation units (Koehn et al., 2003)." P09-1094,N03-1017,o,"Actually, it is defined similarly to the translation model in SMT (Koehn et al., 2003)." P09-1103,N03-1017,o,"Between them, the phrase-based approach (Marcu and Wong, 2002; Koehn et al., 2003; Och and Ney, 2004) allows local reordering and contiguous phrase translation." P09-2031,N03-1017,o,"(Footnote 1: LDC2002E18 (4,000 sentences), LDC2002T01, LDC2003E07, LDC2003E14, LDC2004T07, LDC2005T10, LDC2004T08 HK Hansards (500,000 sentences). Footnote 2: http://www.statmt.org/wmt07/shared-task.html) For both the tasks, the word alignments were trained by GIZA++ in two translation directions and refined by the grow-diag-final method (Koehn et al., 2003)." P09-2037,N03-1017,o,"(Koehn et al., 2003)), in which translation and language models are trainable separately too." P09-2058,N03-1017,o,"This is applied to maximize coverage, which is similar as the final in (Koehn et al., 2003)."
P09-2058,N03-1017,o,"Our decoder is a phrase-based multi-stack implementation of the log-linear model similar to Pharaoh (Koehn et al., 2003)." P09-2058,N03-1017,o,"The next two methods are heuristic (H) in (Och and Ney, 2003) and grow-diagonal (GD) proposed in (Koehn et al., 2003)." P09-2058,N03-1017,o,"It is a fundamental and often a necessary step before linguistic knowledge acquisitions, such as training a phrase translation table in phrasal machine translation (MT) system (Koehn et al., 2003), or extracting hierarchical phrase rules or synchronized grammars in syntax-based translation framework." P09-2060,N03-1017,o,"1 Introduction Phrase-based translation (Koehn et al., 2003) and hierarchical phrase-based translation (Chiang, 2005) are the state of the art in statistical machine translation (SMT) techniques." P09-4005,N03-1017,o,"4 Options from the Translation Table Phrase-based statistical machine translation methods acquire their translation knowledge in form of large phrase translation tables automatically from large amounts of translated texts (Koehn et al., 2003)." W03-1001,N03-1017,o,"A word link extension algorithm similar to the one presented in this paper is given in (Koehn et al., 2003)." W05-0820,N03-1017,o,"(2004)), better language-specific preprocessing (Koehn and Knight, 2003) and restructuring (Collins et al., 2005), additional feature functions such as word class language models, and minimum error rate training (Och, 2003) to optimize parameters." W05-0823,N03-1017,o,"1 Introduction During the last decade, statistical machine translation (SMT) systems have evolved from the original word-based approach (Brown et al., 1993) into phrase-based translation systems (Koehn et al., 2003)." W05-0826,N03-1017,o,"See (Och and Ney, 2000), (Yamada and Knight, 2001), (Koehn and Knight, 2002), (Koehn et al., 2003), (Schafer and Yarowsky, 2003) and (Gildea, 2003)."
W05-0829,N03-1017,p,"1 Introduction In recent years, various phrase translation approaches (Marcu and Wong, 2002; Och et al. , 1999; Koehn et al. , 2003) have been shown to outperform word-to-word translation models (Brown et al. , 1993)."
W05-0830,N03-1017,o,"Whereas language generation has benefited from syntax [Wu, 1997; Alshawi et al. , 2000], the performance of statistical phrase-based machine translation when relying solely on syntactic phrases has been reported to be poor [Koehn et al. , 2003]."
W05-0830,N03-1017,o,"The inclusion of phrases longer than three words in translation resources has been avoided, as it has been shown not to have a strong impact on translation performance [Koehn et al. , 2003]."
W05-0833,N03-1017,o,"(Koehn et al. , 2003); (Och, 2003))."
W05-0833,N03-1017,o,"Accordingly, in this section we describe a set of experiments which extends the work of (Way and Gough, 2005) by evaluating the Marker-based EBMT system of (Gough & Way, 2004b) against a phrase-based SMT system built using the following components: Giza++, to extract the word-level correspondences; The Giza++ word alignments are then refined and used to extract phrasal alignments ((Och & Ney, 2003); or (Koehn et al. , 2003) for a more recent implementation); Probabilities of the extracted phrases are calculated from relative frequencies; The resulting phrase translation table is passed to the Pharaoh phrase-based SMT decoder which along with SRI language modelling toolkit5 performs translation."
W05-0836,N03-1017,o,"Under a phrase based translation model (Koehn et al. , 2003; Marcu and Wong, 2002), this distinction is important and will be discussed in more detail."
W05-0836,N03-1017,o,"The first system is the Pharaoh decoder provided by (Koehn et al. , 2003) for the shared data task."
W05-0836,N03-1017,o,"For further information on these parameter settings, confer (Koehn et al. , 2003)."
W05-0908,N03-1017,o,"In the area of statistical machine translation (SMT), recently a combination of the BLEU evaluation metric (Papineni et al. , 2001) and the bootstrap method for statistical significance testing (Efron and Tibshirani, 1993) has become popular (Och, 2003; Kumar and Byrne, 2004; Koehn, 2004b; Zhang et al. , 2004)."
W06-1606,N03-1017,p,"1 Introduction During the last four years, various implementations and extentions to phrase-based statistical models (Marcu and Wong, 2002; Koehn et al. , 2003; Och and Ney, 2004) have led to significant increases in machine translation accuracy."
W06-1607,N03-1017,o,"Traditionally, maximum-likelihood estimation from relative frequencies is used to obtain conditional probabilities (Koehn et al. , 2003), e.g., p(s|t) = c(s,t) / ∑_s c(s,t) (since the estimation problems for p(s|t) and p(t|s) are symmetrical, we will usually refer only to p(s|t) for brevity)."
W06-1607,N03-1017,o,"The features used in this study are: the length of t; a single-parameter distortion penalty on phrase reordering in a, as described in (Koehn et al. , 2003); phrase translation model probabilities; and trigram language model probabilities logp(t), using Kneser-Ney smoothing as implemented in the SRILM toolkit (Stolcke, 2002)."
W06-1607,N03-1017,o,"To derive the joint counts c(s,t) from which p(s|t) and p(t|s) are estimated, we use the phrase induction algorithm described in (Koehn et al. , 2003), with symmetrized word alignments generated using IBM model 2 (Brown et al. , 1993)."
W06-1607,N03-1017,o,"This is the traditional approach for glass-box smoothing (Koehn et al. , 2003; Zens and Ney, 2004)."
W06-1608,N03-1017,o,"In English-to-German, this result produces results very comparable to a phrasal SMT system (Koehn et al. , 2003) trained on the same data."
W06-1608,N03-1017,o,"This dependency graph is partitioned into treelets; like (Koehn et al. , 2003), we assume a uniform probability distribution over all partitions."
W06-1608,N03-1017,o,"It has been shown that phrasal machine translation systems are not affected by the quality of the input word alignments (Koehn et al. , 2003)."
W06-1609,N03-1017,p,"1 Introduction During the last few years, SMT systems have evolved from the original word-based approach (Brown et al. , 1993) to phrase-based translation systems (Koehn et al. , 2003)."
W06-3102,N03-1017,o,"Table 2: The set of tags used to mark explicit morphemes in English Tag Meaning JJR Adjective, comparative JJS Adjective, superlative NNS Noun, plural POS Possessive ending RBR Adverb, comparative RBS Adverb, superlative VB Verb, base form VBD Verb, past tense VBG Verb, gerund or present participle VBN Verb, past participle VBP Verb, non3rd person singular present VBZ Verb, 3rd person singular present Figure 2: Morpheme alignment between a Turkish and an English sentence 4 Experiments We proceeded with the following sequence of experiments: (1) Baseline: As a baseline system, we used a pure word-based approach and used Pharaoh Training tool (2004), to train on the 22,500 sentences, and decoded using Pharaoh (Koehn et al. , 2003) to obtain translations for a test set of 50 sentences."
W06-3106,N03-1017,o,"PP-model We collected the PP parameters by simply reading the alignment matrices resulting from the word alignment, in a way similar to the one described in (Koehn et al. , 2003)."
W06-3106,N03-1017,o,"This includes the standard notion of phrase, popular with phrase-based SMT (Koehn et al. , 2003; Vogel et al. , 2003) as well as sequences of words that contain gaps (possibly of arbitrary size)."
W06-3106,N03-1017,n,"It has the advantage of naturally capturing local reorderings and is shown to outperform word-based machine translation (Koehn et al. , 2003)."
W06-3109,N03-1017,o,"On the other hand, models that deal with structures or phrases instead of single words have also been proposed: the syntax translation models are described in (Yamada and Knight, 2001), alignment templates are used in (Och, 2002), and the alignment template approach is re-framed into the so-called phrase based translation (PBT) in (Marcu and Wong, 2002; Zens et al. , 2002; Koehn et al. , 2003; Tomas and Casacuberta, 2003)."
W06-3112,N03-1017,o,"Word alignment and phrase extraction We used the GIZA++ word alignment software 3 to produce initial word alignments for our miniature bilingual corpus consisting of the source French file and the English reference file, and the refined word alignment strategy of (Och and Ney, 2003; Koehn et al. , 2003; Tiedemann, 2004) to obtain improved word and phrase alignments."
W06-3113,N03-1017,p,"The current state of the art is represented by the so-called phrase-based translation approach (Och and Ney, 2004; Koehn et al. , 2003)."
W06-3115,N03-1017,o,"In a phrase-based statistical translation (Koehn et al. , 2003), a bilingual text is decomposed as K phrase translation pairs (e1, fa1), (e2, fa2), ...: The input foreign sentence is segmented into phrases fK1, mapped into corresponding English eK1, then, reordered to form the output English sentence according to a phrase alignment index mapping a. In a hierarchical phrase-based translation (Chiang, 2005), translation is modeled after a weighted synchronous-CFG consisting of production rules whose right-hand side is paired (Aho and Ullman, 1969): X → ⟨γ, α⟩, where X is a non-terminal, and γ and α are strings of terminals and non-terminals."
W06-3115,N03-1017,o,"Second, phrase translation pairs are extracted from the word aligned corpus (Koehn et al. , 2003)."
W06-3115,N03-1017,o,"The decoding process is very similar to those described in (Koehn et al. , 2003): It starts from an initial empty hypothesis."
W06-3115,N03-1017,o,"2.3 Feature Functions Our phrase-based model uses a standard pharaoh feature functions listed as follows (Koehn et al. , 2003): Relative-count based phrase translation probabilities in both directions."
W06-3115,N03-1017,o,"For each differently tokenized corpus, we computed word alignments by a HMM translation model (Och and Ney, 2003) and by a word alignment refinement heuristic of grow-diag-final (Koehn et al. , 2003)."
W06-3115,N03-1017,o,"One is a phrase-based translation in which a phrasal unit is employed for translation (Koehn et al. , 2003)."
W06-3119,N03-1017,o,"2 Rule Generation We start with phrase translations on the parallel training data using the techniques and implementation described in (Koehn et al. , 2003a)."
W06-3119,N03-1017,o,"We use the following features for our rules: source- and target-conditioned neg-log lexical weights as described in (Koehn et al. , 2003b) neg-log relative frequencies: left-hand-side-conditioned, target-phrase-conditioned, source-phrase-conditioned Counters: n.o. rule applications, n.o. target words Flags: IsPurelyLexical (i.e. , contains only terminals), IsPurelyAbstract (i.e. , contains only nonterminals), IsXRule (i.e. , non-syntactical span), IsGlueRule Penalties: rareness penalty exp(1 RuleFrequency); unbalancedness penalty |MeanTargetSourceRatio n.o. source words n.o. target words| 4 Parsing Our SynCFG rules are equivalent to a probabilistic context-free grammar and decoding is therefore an application of chart parsing."
W06-3119,N03-1017,o,"5 Results We present results that compare our system against the baseline Pharaoh implementation (Koehn et al. , 2003a) and MER training scripts provided for this workshop."
W06-3119,N03-1017,o,"Baseline Pharaoh with phrases extracted from IBM Model 4 training with maximum phrase length 7 and extraction method diag-growth-final (Koehn et al. , 2003a) Lex Phrase-decoder simulation: using only the initial lexical rules from the phrase table, all with LHS X, the Glue rule, and a binary reordering rule with its own reordering-feature XCat All nonterminals merged into a single X nonterminal: simulation of the system Hiero (Chiang, 2005)."
W06-3119,N03-1017,o,"1 Introduction Recent work in machine translation has evolved from the traditional word (Brown et al. , 1993) and phrase based (Koehn et al. , 2003a) models to include hierarchical phrase models (Chiang, 2005) and bilingual synchronous grammars (Melamed, 2004)."
W06-3119,N03-1017,o,"The hierarchical translation operations introduced in these methods call for extensions to the traditional beam decoder (Koehn et al. , 2003a)."
W06-3120,N03-1017,o,"The huge increase in computational and storage cost of including longer phrases does not provide a significant improvement in quality (Koehn et al. , 2003) as the probability of reappearance of larger phrases decreases."
W06-3121,N03-1017,o,"We generated for each phrase pair in the translation table 5 features: phrase translation probability (both directions), lexical weighting (Koehn et al. , 2003) (both directions) and phrase penalty (constant value)."
W06-3122,N03-1017,o,"It generates a vector of 5 numeric values for each phrase pair: phrase translation probability: φ(f|e) = count(f, e) / count(e), φ(e|f) = count(f, e) / count(f) 2http://www.phramer.org/ Java-based open-source phrase based SMT system 3http://www.isi.edu/licensed-sw/carmel/ 4http://www.speech.sri.com/projects/srilm/ 5http://www.iccs.inf.ed.ac.uk/pkoehn/training.tgz lexical weighting (Koehn et al. , 2003): lex(f|e,a) = ∏_{i=1}^{n} 1/|{j|(i,j) ∈ a}| ∑_{(i,j)∈a} w(fi|ej), lex(e|f,a) = ∏_{j=1}^{m} 1/|{i|(i,j) ∈ a}| ∑_{(i,j)∈a} w(ej|fi) phrase penalty: φ(f|e) = e; log(φ(f|e)) = 1 2.2 Decoding We used the Pharaoh decoder for both the Minimum Error Rate Training (Och, 2003) and test dataset decoding."
W06-3123,N03-1017,o,"For the future, the joint model would benefit from lexical weighting like that used in the standard model (Koehn et al. , 2003)."
W06-3123,N03-1017,o,"2 Translation Models 2.1 Standard Phrase-based Model Most phrase-based translation models (Och, 2003; Koehn et al. , 2003; Vogel et al. , 2003) rely on a pre-existing set of word-based alignments from which they induce their parameters."
W06-3123,N03-1017,o,"On smaller data sets (Koehn et al. , 2003) the joint model shows performance comparable to the standard model, however the joint model does not reach the level of performance of the standard EN-ES ES-EN Joint 3-gram, dl4 20.51 26.64 5-gram, dl6 26.34 27.17 + lex."
W06-3125,N03-1017,p,"This translation model differs from the well known phrase-based translation approach (Koehn et al. , 2003) in two basic issues: first, training data is monotonously segmented into bilingual units; and second, the model considers n-gram probabilities instead of relative frequencies."
W06-3601,N03-1017,n,"2 Previous Work It is helpful to compare this approach with recent efforts in statistical MT. Phrase-based models (Koehn et al. , 2003; Och and Ney, 2004) are good at learning local translations that are pairs of (consecutive) sub-strings, but often insufficient in modeling the reorderings of phrases themselves, especially between language pairs with very different word-order."
W07-0403,N03-1017,o,"4.1 Translation Modeling We can test our model's utility for translation by transforming its parameters into a phrase table for the phrasal decoder Pharaoh (Koehn et al. , 2003)."
W07-0403,N03-1017,o,"Pharaoh also includes lexical weighting parameters that are derived from the alignments used to induce its phrase pairs (Koehn et al. , 2003)."
W07-0403,N03-1017,o,"2 Background 2.1 Phrase Table Extraction Phrasal decoders require a phrase table (Koehn et al. , 2003), which contains bilingual phrase pairs and scores indicating their utility."
W07-0403,N03-1017,o,"Two are conditionalized phrasal models, each EM trained until performance degrades: C-JPTM3 as described in (Birch et al. , 2006) Phrasal ITG as described in Section 4.1 Three provide alignments for the surface heuristic: GIZA++ with grow-diag-final (GDF) Viterbi Phrasal ITG with and without the noncompositional constraint We use the Pharaoh decoder (Koehn et al. , 2003) with the SMT Shared Task baseline system (Koehn and Monz, 2006)."
W07-0403,N03-1017,o,"It extracts all consistent phrase pairs from word-aligned bitext (Koehn et al. , 2003)."
W07-0403,N03-1017,o,"The grow-diag-final (GDF) combination heuristic (Koehn et al. , 2003) adds links so that each new link connects a previously unlinked token."
W07-0406,N03-1017,p,"However, attempts to retrofit syntactic information into the phrase-based paradigm have not met with enormous success (Koehn et al. , 2003; Och et al. , 2003)1, and purely phrase-based machine translation systems continue to outperform these syntax/phrase-based hybrids."
W07-0409,N03-1017,p,"1 Introduction Recent works in statistical machine translation (SMT) shows how phrase-based modeling (Och and Ney, 2000a; Koehn et al. , 2003) significantly outperform the historical word-based modeling (Brown et al. , 1993)."
W07-0412,N03-1017,p,"And again, we see this insight informing statistical machine translation systems, for instance, in the phrase-based approaches of Och (2003) and Koehn et al."
W07-0701,N03-1017,o,"We compared our system to Pharaoh, a leading phrasal SMT decoder (Koehn et al. , 2003), and our treelet system."
W07-0701,N03-1017,p,"1 Introduction Modern phrasal SMT systems such as (Koehn et al. , 2003) derive much of their power from being able to memorize and use long phrases."
W07-0703,N03-1017,o,"Portage is a statistical phrase-based SMT system similar to Pharaoh (Koehn et al, 2003)."
W07-0703,N03-1017,o,"To generate phrase pairs from a parallel corpus, we use the ""diag-and"" phrase induction algorithm described in (Koehn et al, 2003), with symmetrized word alignments generated using IBM model 2 (Brown et al, 1993)."
W07-0704,N03-1017,o,"We employ the phrase-based SMT framework (Koehn et al. , 2003), and use the Moses toolkit (Koehn et al. , 2007), and the SRILM language modelling toolkit (Stolcke, 2002), and evaluate our decoded translations using the BLEU measure (Papineni et al. , 2002), using a single reference translation."
W07-0709,N03-1017,o,"5.1 The baseline System used for comparison was Pharaoh (Koehn et al. , 2003; Koehn, 2004), which uses a beam search algorithm for decoding."
W07-0711,N03-1017,o,"1 Introduction Word alignment is an important step of most modern approaches to statistical machine translation (Koehn et al. , 2003)."
W07-0715,N03-1017,o,"2 Previous Approaches Koehn, et al.'s (2003) method of estimating phrase-translation probabilities is very simple."
W07-0716,N03-1017,o,"Initial phrase pairs are identified following the procedure typically employed in phrase based systems (Koehn et al. , 2003; Och and Ney, 2004)."
W07-0716,N03-1017,o,"1 Introduction Viewed at a very high level, statistical machine translation involves four phases: language and translation model training, parameter tuning, decoding, and evaluation (Lopez, 2007; Koehn et al. , 2003)."
W07-0716,N03-1017,o,"We use the following features in our induced English-to-English grammar:3 3Hiero also uses lexical weights (Koehn et al. , 2003) in both directions. The joint probability of the two English hierarchical paraphrases, conditioned on the nonterminal symbol, as defined by this formula: p(e1, e2|x) = c(X → ⟨e1, e2⟩) / ∑_{e1′, e2′} c(X → ⟨e1′, e2′⟩)"
W07-0717,N03-1017,o,"2 Phrase-based Statistical MT Our baseline is a standard phrase-based SMT system (Koehn et al. , 2003)."
W07-0717,N03-1017,o,"The features used in this study are: the length of t; a single-parameter distortion penalty on phrase reordering in a, as described in (Koehn et al. , 2003); phrase translation model probabilities; and 4-gram language model probabilities logp(t), using Kneser-Ney smoothing as implemented in the SRILM toolkit."
W07-0717,N03-1017,o,"To derive the joint counts c(s,t) from which p(s|t) and p(t|s) are estimated, we use the phrase induction algorithm described in (Koehn et al. , 2003), with symmetrized word alignments generated using IBM model 2 (Brown et al. , 1993)."
W07-0719,N03-1017,o,"2.1 Baseline System The baseline system is a phrase-based SMT system (Koehn et al. , 2003), built almost entirely using freely available components."
W07-0719,N03-1017,n,"1 Introduction Translations tables in Phrase-based Statistical Machine Translation (SMT) are often built on the basis of Maximum-likelihood Estimation (MLE), being one of the major limitations of this approach that the source sentence context in which phrases occur is completely ignored (Koehn et al. , 2003)."
W07-0721,N03-1017,o,"1 Introduction Nowadays, statistical machine translation is mainly based on phrases (Koehn et al. , 2003)."
W07-0724,N03-1017,o,"They are generated from the training corpus via the ""diag-and"" method (Koehn et al. , 2003) and smoothed using Kneser-Ney smoothing (Foster et al. , 2006), one or several n-gram language model(s) trained with the SRILM toolkit (Stolcke, 2002); in the baseline experiments reported here, we used a trigram model, a distortion model which assigns a penalty based on the number of source words which are skipped when generating a new target phrase, a word penalty."
W07-0725,N03-1017,o,"2 Architecture of the system The goal of statistical machine translation (SMT) is to produce a target sentence e from a source sentence f. It is today common practice to use phrases as translation units (Koehn et al. , 2003; Och and Ney, 2003) and a log linear framework in order to introduce several models explaining the translation process: e* = argmax p(e|f) = argmax_e {exp(∑_i λ_i h_i(e,f))} (1) The feature functions hi are the system models and the λi weights are typically optimized to maximize a scoring function on a development set (Och and Ney, 2002)."
W07-0731,N03-1017,o,"As in phrase-based translation model estimation, ? also contains two lexical weights (Koehn et al. , 2003), counters for number of target terminals generated."
W07-0731,N03-1017,o,"Table 1 shows the impact of increasing reordering window length (Koehn et al. , 2003) on translation quality for the ""dev06"" data.2 Increasing the reordering window past 2 has minimal impact on translation quality, implying that most of the reordering effects across Spanish and English are well modeled at the local or phrase level."
W07-1512,N03-1017,o,"However, many of these models are not applicable to parallel treebanks because they assume translation units where either the source text, the target text or both are represented as word sequences without any syntactic structure (Galley et al. , 2004; Marcu et al. , 2006; Koehn et al. , 2003)."
W08-0301,N03-1017,o,"4 Experiments 4.1 Experiment Settings A series of experiments were run to compare the performance of the three SWD models against the baseline, which is the standard phrase-based approach to SMT as elaborated in (Koehn et al., 2003)."
W08-0301,N03-1017,o,"The subsequent construction of translation table was done in exactly the same way as explained 4 in (Koehn et al., 2003)."
W08-0302,N03-1017,o,"Baseline We use the Moses MT system (Koehn et al., 2007) as a baseline and closely follow the example training procedure given for the WMT-07 and WMT-08 shared tasks.4 In particular, we perform word alignment in each direction using GIZA++ (Och and Ney, 2003), apply the grow-diag-final-and heuristic for symmetrization and use a maximum phrase length of 7."
W08-0302,N03-1017,o,"(2003), in which we translate a source-language sentence f into the target-language sentence e that maximizes a linear combination of features and weights:1 e*, a* = argmax_{e,a} score(e,a,f) (1) = argmax_{e,a} ∑_{m=1}^{M} λ_m h_m(e,a,f) (2) where a represents the segmentation of e and f into phrases and a correspondence between phrases, and each hm is a R-valued feature with learned weight λm. The translation is typically found using beam search (Koehn et al., 2003)."
W08-0302,N03-1017,p,"Phrase-based MT systems are straightforward to train from parallel corpora (Koehn et al., 2003) and, like the original IBM models (Brown et al., 1990), benefit from standard language models built on large monolingual, target-language corpora (Brants et al., 2007)."
W08-0303,N03-1017,o,"For the first two tasks, all heuristics of the Pharaoh-Toolkit (Koehn et al., 2003) as well as the refined heuristic (Och and Ney, 2003) to combine both IBM4-alignments were tested and the best ones are shown in the tables."
W08-0305,N03-1017,o,"The de-facto answer came during the 1990s from the research community on Statistical Machine Translation, who made use of statistical tools based on a noisy channel model originally developed for speech recognition (Brown et al., 1994; Och and Weber, 1998; R.Zens et al., 2002; Och and Ney, 2001; Koehn et al., 2003)."
W08-0306,N03-1017,o,"GIZA++ refined alignments have been used in state-of-the-art phrase-based statistical MT systems such as (Och, 2004); variations on the refined heuristic have been used by (Koehn et al., 2003) (diag and diag-and) and by the phrase-based system Moses (grow-diag-final) (Koehn et al., 2007)."
W08-0309,N03-1017,o,"The phrases in the translations were located using standard phrase extraction techniques (Koehn et al., 2003)."
W08-0310,N03-1017,o,"translation systems (Och and Ney, 2004; Koehn et al., 2003) and use Moses (Koehn et al., 2007) to search for the best target sentence."
W08-0313,N03-1017,o,"2 Architecture of the system The goal of statistical machine translation (SMT) is to produce a target sentence e from a source sentence f. It is today common practice to use phrases as translation units (Koehn et al., 2003; Och and Ney, 2003) and a log linear framework in order to introduce several models explaining the translation process: e* = argmax p(e|f) = argmax_e {exp(∑_i λ_i h_i(e,f))} (1) The feature functions hi are the system models and the λi weights are typically optimized to maximize a scoring function on a development set (Och and Ney, 2002)."
W08-0314,N03-1017,p,"3 System Overview 3.1 Translation model The system developed for this year's shared task is a state-of-the-art, two-pass phrase-based statistical machine translation system based on a log-linear translation model (Koehn et al, 2003)."
W08-0316,N03-1017,o,"After unioning the Viterbi alignments, the stems were replaced with their original words, and phrase-pairs of up to five foreign words in length were extracted in the usual fashion (Koehn et al., 2003)."
W08-0318,N03-1017,o,"For all language pairs, we used the Moses decoder (Koehn et al., 2007), which follows the phrase-based statistical machine translation approach (Koehn et al., 2003), with default settings as a starting point."
W08-0322,N03-1017,o,"Our system is actually designed as a hybrid of the classic phrase-based SMT model (Koehn et al., 2003) and the kernel regression model as follows: First, for each source sentence a small relevant set of sentence pairs are retrieved from the large-scale parallel corpus."
W08-0326,N03-1017,o,"For example, our system configuration for the shared task incorporates a wrapper around GIZA++ (Och and Ney, 2003) for word alignment and a wrapper around Moses (Koehn et al., 2007) for decoding."
W08-0333,N03-1017,o,"In this paper we present MapReduce implementations of training algorithms for two kinds of models commonly used in statistical MT today: a phrase-based translation model (Koehn et al., 2003) and word alignment models based on pairwise lexical translation trained using expectation maximization (Dempster et al., 1977)."
W08-0333,N03-1017,o,"4 Phrase-Based Translation In phrase-based translation, the translation process is modeled by splitting the source sentence into phrases (a contiguous string of words) and translating the phrases as a unit (Och et al., 1999; Koehn et al., 2003)."
W08-0335,N03-1017,o,"The training and decoding system of our SMT used the publicly available Pharaoh (Koehn et al., 2003)2."
W08-0336,N03-1017,p,"2.2 Phrase-based Chinese-to-English MT The MT system used in this paper is Moses, a state-of-the-art phrase-based system (Koehn et al., 2003)."
W08-0403,N03-1017,o,"Our baseline model follows Chiang's hierarchical model (Chiang, 2007) in conjunction with additional features: conditional probabilities in both directions: P(γ|α) and P(α|γ); lexical weights (Koehn et al., 2003) in both directions: Pw(γ|α) and Pw(α|γ); word counts |e|; rule counts |D|; target n-gram language model PLM(e); glue rule penalty to learn preference of nonterminal rewriting over serial combination through Eq."
W08-0404,N03-1017,o,"by diag-and symmetrization (Koehn et al., 2003)."
W08-0404,N03-1017,o,"Consider the lexical model pw(ry|rx), defined following Koehn et al (2003), with a denoting the most frequent word alignment observed for the rule in the training set."
W08-0405,N03-1017,o,"2 Baseline DP Decoder The translation model used in this paper is a phrase-based model (Koehn et al., 2003), where the translation units are so-called blocks: a block b is a pair consisting of a source phrase s and a target phrase t which are translations of each other."
W08-0406,N03-1017,p,"1 Introduction The emergence of phrase-based statistical machine translation (PSMT) (Koehn et al., 2003) has been one of the major developments in statistical approaches to translation."
W08-0409,N03-1017,o,"alignment and phrase-extraction heuristics described in (Koehn et al., 2003), minimum-error-rate training (Och, 2003), a trigram language model with Kneser-Ney smoothing trained with SRILM (Stolcke, 2002) on the English side of the training data, and Moses (Koehn et al., 2007) to decode."
W08-0411,N03-1017,p,"1 Introduction Phrase-based Statistical MT (PB-SMT) (Koehn et al., 2003) has become the predominant approach to Machine Translation in recent years."
W08-1501,N03-1017,o,"To generate the n-best lists, a phrase based SMT (Koehn et al., 2003) was used."
W08-1911,N03-1017,o,"4 Experiments and evaluation We carried out an evaluation on the local rephrasing of French sentences, using English as the pivot language.2 We extracted phrase alignments of up to 7 word forms using the Giza++ alignment tool (Och and Ney, 2003) and the grow-diag-final-and heuristics described in (Koehn et al., 2003) on 948,507 sentences of the French-English part of the Europarl corpus (Koehn, 2005) and obtained some 42 million phrase pairs for which probabilities were estimated using maximum likelihood estimation."
W08-1911,N03-1017,o,"(Och and Ney, 2003)), and the phrase-based approach to Statistical Machine Translation (Koehn et al., 2003) has led to the development of heuristics for obtaining alignments between phrases of any number of words."
W08-2119,N03-1017,o,"We use the same features as (Koehn et al., 2003)."
W08-2119,N03-1017,o,"Our method does not suppose a uniform distribution over all possible phrase segmentations as (Koehn et al., 2003) since each phrase tree has a probability."
W08-2119,N03-1017,o,"This system uses all features of conventional phrase-based SMT as in (Koehn et al., 2003)."
W09-0423,N03-1017,o,"4 Architecture of the SMT system The goal of statistical machine translation (SMT) is to produce a target sentence e from a source sentence f. It is today common practice to use phrases as translation units (Koehn et al., 2003; Och and Ney, 2003) and a log linear framework in order to introduce several models explaining the translation process: e* = argmax p(e|f) = argmax_e {exp(∑_i λ_i h_i(e,f))} (1) The feature functions hi are the system models and the λi weights are typically optimized to maximize a scoring function on a development set (Och and Ney, 2002)."
W09-0424,N03-1017,n,"In such tasks, feature calculation is also very expensive in terms of time required; huge sets of extracted rules must be sorted in two directions for relative frequency calculation of such features as the translation probability p(f|e) and reverse translation probability p(e|f) (Koehn et al., 2003)."
W09-0430,N03-1017,o,"Then the two models and a search module are used to decode the best translation (Brown et al., 1993; Koehn et al., 2003)."
W09-0434,N03-1017,p,"Phrase-based models (Och and Ney, 2004; Koehn et al., 2003) have been a major paradigm in statistical machine translation in the last few years, showing state-of-the-art performance for many language pairs."
W09-0436,N03-1017,p,"4 Machine Translation Experiments 4.1 Experimental Setting For our MT experiments, we used a reimplementation of Moses (Koehn et al., 2003), a state-of-the-art phrase-based system."
W09-0437,N03-1017,o,"The corpus was aligned with GIZA++ (Och and Ney, 2003) and symmetrized with the grow-diag-final-and heuristic (Koehn et al., 2003)."
W09-0439,N03-1017,o,"1 Introduction Most recent approaches in SMT, eg (Koehn et al., 2003; Chiang, 2005), use a log-linear model to combine probabilistic features."
W09-0809,N03-1017,o,"Separating the scoring from the source language reordering also has the advantage that the approach in essence is compatible with other approaches such as a traditional PSMT system (Koehn et al., 2003b) or a hierarchical phrase system (Chiang, 2005)."
W09-0809,N03-1017,o,"In addition to the manual alignment supplied with these data, we create an automatic word alignment for them using GIZA++ (Och and Ney, 2003) and the grow-diag-final (GDF) symmetrization algorithm (Koehn et al., 2005)."
W09-0809,N03-1017,o,"We are also interested in examining the approach within a standard phrase-based decoder such as Moses (Koehn et al., 2003b) or a hierarchical phrase system (Chiang, 2005)."
W09-0809,N03-1017,p,"1 Introduction The emergence of phrase-based statistical machine translation (PSMT) (Koehn et al., 2003a) has been one of the major developments in statistical approaches to translation."
W09-1104,N03-1017,o,"By using only the bidirectional word alignment links, one can implement a very robust such filter, as the bidirectional links are generally reliable, even though they have low recall for overall translational correspondences (Koehn et al., 2003)."
W09-1114,N03-1017,o,"Our technique is based on a novel Gibbs sampler that draws samples from the posterior distribution of a phrase-based translation model (Koehn et al., 2003) but operates in linear time with respect to the number of input words (Section 2)."
W09-1117,N03-1017,o,"1 Introduction Recent trends in machine translation illustrate that highly accurate word and phrase translations can be learned automatically given enough parallel training data (Koehn et al., 2003; Chiang, 2007)."
W09-1908,N03-1017,n,"While the amount of parallel data required to build such systems is orders of magnitude smaller than corresponding phrase based statistical systems (Koehn et al., 2003), the variety of linguistic annotation required is greater."
W09-1908,N03-1017,p,"We conclude with some challenges that still remain in applying proactive learning for MT. 2 Syntax Based Machine Translation In recent years, corpus based approaches to machine translation have become predominant, with Phrase Based Statistical Machine Translation (PBSMT) (Koehn et al., 2003) being the most actively progressing area." W09-2301,N03-1017,n,"1 Introduction The dominance of traditional phrase-based statistical machine translation (PBSMT) models (Koehn et al., 2003) has recently been challenged by the development and improvement of a number of new models that explicitly take into account the syntax of the sentences being translated." W09-2306,N03-1017,p,"1 Introduction Phrase-based statistical machine translation models (Marcu and Wong, 2002; Koehn et al., 2003; Och and Ney, 2004; Koehn, 2004; Koehn et al., 2007) have achieved significant improvements in translation accuracy over the original IBM word-based model." W09-2307,N03-1017,o,"Our MT experiments use a re-implementation of Moses (Koehn et al., 2003) called Phrasal, which provides an easier API for adding features." W09-2307,N03-1017,o,"2 Discriminative Reordering Model Basic reordering models in phrase-based systems use linear distance as the cost for phrase movements (Koehn et al., 2003)." W09-2310,N03-1017,p,"The state-of-the-art SMT system Moses implements a distance-based reordering model (Koehn et al., 2003) and a distortion model, operating with rewrite patterns extracted from a phrase alignment table (Tillman, 2004)." C04-1112,N03-5008,o,"4.2 Smoothing: Gaussian Priors Since NLP maximum entropy models usually have lots of features and lots of sparseness (e.g. features seen in testing not occurring in training), smoothing is essential as a way to optimize the feature weights (Chen and Rosenfeld, 2000; Klein and Manning, 2003)."
C08-1145,N03-5008,o,"As machine learners we used SVM-light (Joachims, 1998) and the MaxEnt decider from the Stanford Classifier (Manning and Klein, 2003)." D08-1097,N03-5008,p,"2.2 Maximum Entropy Models Maximum entropy (ME) models (Berger et al., 1996; Manning and Klein, 2003), also known as log-linear and exponential learning models, provide a general purpose machine learning technique for classification and prediction which has been successfully applied to natural language processing including part of speech tagging, named entity recognition etc. Maximum entropy models can integrate features from many heterogeneous information sources for classification." D08-1097,N03-5008,o,"In this paper, we adopt Stanford Maximum Entropy (Manning and Klein, 2003) implementation in our experiments." D08-1097,N03-5008,o,"There are accurate parsers available such as Charniak parser (Charniak and Johnson, 2005), Stanford parser (Klein and Manning, 2003) and Berkeley parser (Petrov and Klein, 2007), among which we use the Berkeley parser to help identify the head word." D08-1097,N03-5008,o,"Collins head words finder rules have been modified to extract semantic head word (Klein and Manning, 2003)." D09-1057,N03-5008,o,"5.5 Dependency validity features Like (Cui et al., 2004), we extract the dependency path from the question word to the common word (existing in both question and sentence), and the path from candidate answer (such as CoNLL NE and numerical entity) to the common word for each pair of question and candidate sentence using Stanford dependency parser (Klein and Manning, 2003; Marneffe et al., 2006)."
D09-1057,N03-5008,p,"2 Maximum Entropy Models Maximum entropy (ME) models (Berger et al., 1996; Manning and Klein, 2003), also known as log-linear and exponential learning models, provide a general purpose machine learning technique for classification and prediction which has been successfully applied to natural language processing including part of speech tagging, named entity recognition etc. Maximum entropy models can integrate features from many heterogeneous information sources for classification." D09-1158,N03-5008,o,"We used the implementation of MaxEnt classifier described in (Manning and Klein, 2003)." W04-2328,N03-5008,o,"Correspondences between MALTUS and other tagsets (Klein and Soria, 1998) were also provided (Popescu-Belis, 2003)." W04-2328,N03-5008,p,"5.2 Results We use a Maximum Entropy (ME) classifier (Manning and Klein, 2003) which allows an efficient combination of many overlapping features." W08-0206,N03-5008,o,"For instance, for Maximum Entropy, I picked (Berger et al., 1996; Ratnaparkhi, 1997) for the basic theory, (Ratnaparkhi, 1996) for an application (POS tagging in this case), and (Klein and Manning, 2003) for more advanced topics such as optimization and smoothing." C08-1136,N04-1035,o,"From word-level alignments, such systems extract the grammar rules consistent either with the alignments and parse trees for one of languages (Galley et al., 2004), or with the word-level alignments alone without reference to external syntactic analysis (Chiang, 2005), which is the scenario we address here." C08-1138,N04-1035,o,"It reconfirms that only allowing sibling nodes reordering as done in SCFG may be inadequate for translational equivalence modeling (Galley et al., 2004). 3) All the three models on the FBIS corpus show much lower performance than that on the other two corpora." C08-1138,N04-1035,n,"This implies that the complexity of structure divergence between two languages is higher than suggested in literature (Fox, 2002; Galley et al., 2004)."
C08-1138,N04-1035,o,"However, as discussed in prior arts (Galley et al., 2004) and this paper, linguistically-informed SCFG is an inadequate model for parallel corpora due to its nature that only allowing child-node reorderings." C08-1138,N04-1035,o,"Fox (2002), Galley et al. (2004) and Wellington et al." D08-1021,N04-1035,o,"Other recent work has incorporated constituent and dependency subtrees into the translation rules used by phrase-based systems (Galley et al., 2004; Quirk et al., 2005)." D08-1064,N04-1035,o,"We trained three Arabic-English syntax-based statistical MT systems (Galley et al., 2004; Galley et al., 2006) using max-B training (Och, 2003): one on a newswire development set, one on a weblog development set, and one on a combined development set containing documents from both genres." D08-1093,N04-1035,o,"As a result, they are being used in a variety of applications, such as question answering (Hermjakob, 2001), speech recognition (Chelba and Jelinek, 1998), language modeling (Roark, 2001), language generation (Soricut, 2006) and, most notably, machine translation (Charniak et al., 2003; Galley et al., 2004; Collins et al., 2005; Marcu et al., 2006; Huang et al., 2006; Avramidis and Koehn, 2008)." D09-1076,N04-1035,n,"Current tree-based models that integrate linguistics and statistics, such as GHKM (Galley et al., 2004), are not able to generalize well from a single phrase pair." D09-1108,N04-1035,o,"The tree-to-string model (Galley et al. 2004; Liu et al. 2006) views the translation as a structure mapping process, which first breaks the source syntax tree into many tree fragments and then maps each tree fragment into its corresponding target translation using translation rules, finally combines these target translations into a complete sentence."
D09-1108,N04-1035,p,"1 Introduction Recently linguistically-motivated syntax-based translation method has achieved great success in statistical machine translation (SMT) (Galley et al., 2004; Liu et al., 2006, 2007; Zhang et al., 2007, 2008a; Mi et al., 2008; Mi and Huang 2008; Zhang et al., 2009)." D09-1127,N04-1035,o,"We envision the use of a clever data structure would reduce the complexity, but leave this to future work, as the experiments (Table 8) show that Our definition implies that we only consider faithful spans to be contiguous (Galley et al., 2004)." D09-1127,N04-1035,o,"For example, Smith and Smith (2004) and Burkett and Klein (2008) show that joint parsing (or reranking) on a bitext improves accuracies on either or both sides by leveraging bilingual constraints, which is very promising for syntax-based machine translation which requires (good-quality) parse trees for rule extraction (Galley et al., 2004; Mi and Huang, 2008)." D09-1127,N04-1035,o,"To make things worse, languages are non-isomorphic, i.e., there is no 1to-1 mapping between tree nodes, thus in practice one has to use more expressive formalisms such as synchronous tree-substitution grammars (Eisner, 2003; Galley et al., 2004)." D09-1136,N04-1035,o,"[Table 3: BLEU-4 scores (test set) of systems based on GIZA++ word alignments; Table 4: BLEU-4 scores (test set) of the union alignment, using TTS templates up to a certain size, in terms of the number of leaves in their LHSs] 4.1 Baseline Systems GHKM (Galley et al., 2004) is used to generate the baseline TTS templates based on the word alignments computed using GIZA++ and different combination methods, including union and the diagonal growing heuristic (Koehn et al., 2003)."
D09-1136,N04-1035,p,"This algorithm is referred to as GHKM (Galley et al., 2004) and is widely used in SSMT systems (Galley et al., 2006; Liu et al., 2006; Huang et al., 2006)." N06-1001,N04-1035,o,"To this end, the translational correspondence is described within a translation rule, i.e., (Galley et al. , 2004) (or a synchronous production), rather than a translational phrase pair; and the training data will be derivation forests, instead of the phrase-aligned bilingual corpus." N06-1031,N04-1035,o,"Step 2 involves extracting minimal xRS rules (Galley et al. , 2004) from the set of string/tree/alignments triplets." N06-1031,N04-1035,o,"In this work, we employ a syntax-based model that applies a series of tree/string (xRS) rules (Galley et al. , 2004; Graehl and Knight, 2004) to a source language string to produce a target language phrase structure tree." N06-1033,N04-1035,o,"1 Introduction Several recent syntax-based models for machine translation (Chiang, 2005; Galley et al. , 2004) can be seen as instances of the general framework of synchronous grammars and tree transducers." N06-3004,N04-1035,p,"However, to be more expressive and flexible, it is often easier to start with a general SCFG or tree-transducer (Galley et al. , 2004)." N06-3004,N04-1035,p,"Experiments show that the resulting rule set significantly improves the speed and accuracy over monolingual binarization (see Table 1) in a state-of-the-art syntax-based machine translation system (Galley et al. , 2004)." N06-3004,N04-1035,o,"These rules can be learned from a parallel corpus using English parse trees, Chinese strings, and word alignment (Galley et al. , 2004)." N09-1025,N04-1035,o,"From this data, we use the GHKM minimal-rule extraction algorithm of (Galley et al., 2004) to yield rules like: NP-C(x0:NPB PP(IN(of x1:NPB))$x1 de x0 Though this rule can be used in either direction, here we use it right-to-left (Chinese to English)."
N09-1026,N04-1035,o,"Meanwhile, translation grammars have grown in complexity from simple inversion transduction grammars (Wu, 1997) to general tree-to-string transducers (Galley et al., 2004) and have increased in size by including more synchronous tree fragments (Galley et al., 2006; Marcu et al., 2006; DeNeefe et al., 2007)." N09-1026,N04-1035,o,"constituent alignments (Galley et al., 2004)." N09-1058,N04-1035,o,"Language modeling (Chen and Goodman, 1996), noun-clustering (Ravichandran et al., 2005), constructing syntactic rules for SMT (Galley et al., 2004), and finding analogies (Turney, 2008) are examples of some of the problems where we need to compute relative frequencies." P05-1066,N04-1035,o,"2.1.2 Research on Syntax-Based SMT A number of researchers (Alshawi, 1996; Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Galley et al. , 2004) have proposed models where the translation process involves syntactic representations of the source and/or target languages." P05-3025,N04-1035,o,"(2004) describe how to learn hundreds of millions of tree-transformation rules from a parsed, aligned Chinese/English corpus, and Galley et al." P06-1077,N04-1035,o,"Similarly to (Galley et al. , 2004), the tree-to-string alignment templates discussed in this paper are actually transformation rules." P06-1121,N04-1035,o,"We contrast our work with (Galley et al. , 2004), highlight some severe limitations of probability estimates computed from single derivations, and demonstrate that it is critical to account for many derivations for each sentence pair." P06-1121,N04-1035,n,"Finally, we show that our contextually richer rules provide a 3.63 BLEU point increase over those of (Galley et al. , 2004)." P06-1121,N04-1035,o,"8 Conclusions In this paper, we developed probability models for the multi-level transfer rules presented in (Galley et al.
, 2004), showed how to acquire larger rules that crucially condition on more syntactic context, and how to pack multiple derivations, including interpretations of unaligned words, into derivation forests." P06-1121,N04-1035,n,"We presented some theoretical arguments for not limiting extraction to minimal rules, validated them on concrete examples, and presented experiments showing that contextually richer rules provide a 3.63 BLEU point increase over the minimal rules of (Galley et al. , 2004)." P06-1121,N04-1035,o,"Formally, transformational rules ri presented in (Galley et al. , 2004) are equivalent to 1-state xRs transducers mapping a given pattern (subtree to match in pi) to a right hand side string." P06-1121,N04-1035,o,"In this paper, we take the framework for acquiring multi-level syntactic translation rules of (Galley et al. , 2004) from aligned tree-string pairs, and present two main extensions of their approach: first, instead of merely computing a single derivation that minimally explains a sentence pair, we construct a large number of derivations that include contextually richer rules, and account for multiple interpretations of unaligned words." P06-1123,N04-1035,o,"We would expect the opposite effect with hand-aligned data (Galley et al. , 2004)." P06-1123,N04-1035,o,"Analogous techniques for tree-structured translation models involve either allowing each nonterminal to generate both terminals and other nonterminals (Groves et al. , 2004; Chiang, 2005), or, given a constraining parse tree, to flatten it (Fox, 2002; Zens and Ney, 2003; Galley et al. , 2004)." 
P08-1023,N04-1035,o,"Depending on the type of input, these efforts can be divided into two broad categories: the string-based systems whose input is a string to be simultaneously parsed and translated by a synchronous grammar (Wu, 1997; Chiang, 2005; Galley et al., 2006), and the tree-based systems whose input is already a parse tree to be directly converted into a target tree or string (Lin, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006)." P09-1020,N04-1035,o,"4 Training This section discusses how to extract our translation rules given a triple nullnull,null null ,nullnull . As we know, the traditional tree-to-string rules can be easily extracted from nullnull,null null ,nullnull using the algorithm of Mi and Huang (2008). We would like Mi and Huang (2008) extend the tree-based rule extraction algorithm (Galley et al., 2004) to forest-based by introducing non-deterministic mechanism." P09-1020,N04-1035,o,"From the above discussion, we can see that traditional tree sequence-based method uses single tree as translation input while the forest-based model uses single sub-tree as the basic translation unit that can only learn tree-to-string (Galley et al. 2004; Liu et al., 2006) rules." P09-1063,N04-1035,o,"While Galley (2004) describes extracting tree-to-string rules from 1-best trees, Mi and Huang et al." P09-1064,N04-1035,o,"The synchronous grammar rules are extracted from word aligned sentence pairs where the target sentence is annotated with a syntactic parse (Galley et al., 2004)." P09-1108,N04-1035,o,"3.3 Tree Transducer Grammars Syntactic machine translation (Galley et al., 2004) uses tree transducer grammars to translate sentences." W05-0803,N04-1035,o,"(Gildea, 2003) and (Galley et al. , 2004) discuss different ways of generalizing the tree-level crosslinguistic correspondence relation, so it is not confined to single tree nodes, thereby avoiding a continuity assumption."
W06-3601,N04-1035,o,"Besides being linguistically motivated, the need for EDL is also supported by empirical findings in MT that one-level rules are often inadequate (Fox, 2002; Galley et al. , 2004)." W07-1512,N04-1035,n,"However, many of these models are not applicable to parallel treebanks because they assume translation units where either the source text, the target text or both are represented as word sequences without any syntactic structure (Galley et al. , 2004; Marcu et al. , 2006; Koehn et al. , 2003)." W08-0411,N04-1035,o,"(Chiang, 2005) (Imamura et al., 2004) (Galley et al., 2004)." W08-0411,N04-1035,o,"The underlying formalisms used has been quite broad and include simple formalisms such as ITGs (Wu, 1997), hierarchical synchronous rules (Chiang, 2005), string to tree models by (Galley et al., 2004) and (Galley et al., 2006), synchronous CFG models such as (Xia and McCord, 2004) (Yamada and Knight, 2001), synchronous Lexical Functional Grammar inspired approaches (Probst et al., 2002) and others." W08-0411,N04-1035,o,"Our process of extraction of rules as synchronous trees and then converting them to synchronous CFG rules is most similar to that of (Galley et al., 2004)." W09-2306,N04-1035,o,"To overcome these limitations, many syntax-based SMT models have been proposed (Wu, 1997; Chiang, 2007; Ding et al., 2005; Eisner, 2003; Quirk et al., 2005; Liu et al., 2007; Zhang et al., 2007; Zhang et al., 2008a; Zhang et al., 2008b; Gildea, 2003; Galley et al., 2004; Marcu et al., 2006; Bod, 2007)." W09-2306,N04-1035,o,"Firstly, they classify all the GHKM rules (Galley et al., 2004; Galley et al., 2006) into two categories: lexical rules and non-lexical rules." W09-2306,N04-1035,o,"We guess it is an acronym for the authors of (Galley et al., 2004): Michel Galley, Mark Hopkins, Kevin Knight and Daniel Marcu."
H05-1003,N04-1038,o,"Recently Bean and Riloff (2004) have sought to acquire automatically some semantic patterns that can be used as contextual information to improve reference resolution, using techniques adapted from information extraction." P05-1020,N04-1038,o,"(2001)) and unsupervised approaches (e.g. , Cardie and Wagstaff (1999), Bean and Riloff (2004))." P05-1020,N04-1038,o,"3.1 Selecting Coreference Systems A learning-based coreference system can be defined by four elements: the learning algorithm used to train the coreference classifier, the method of creating training instances for the learner, the feature set Examples of such scoring functions include the Dempster-Shafer rule (see Kehler (1997) and Bean and Riloff (2004)) and its variants (see Harabagiu et al." P05-1021,N04-1038,o,"Recently, Bean and Riloff (2004) presented an unsupervised approach to coreference resolution, which mined the co-referring NP pairs with similar predicate arguments from a large corpus using a bootstrapping method." P06-1005,N04-1038,o,"Since no such corpus exists, researchers have used coarser features learned from smaller sets through supervised learning (Soon et al. , 2001; Ng and Cardie, 2002), manually-defined coreference patterns to mine specific kinds of data (Bean and Riloff, 2004; Bergsma, 2005), or accepted the noise inherent in unsupervised schemes (Ge et al. , 1998; Cherry and Bergsma, 2005)." P06-1005,N04-1038,o,"Bean and Riloff (2004) used bootstrapping to extend their semantic compatibility model, which they called contextual-role knowledge, by identifying certain cases of easily-resolved anaphors and antecedents." P07-1067,N04-1038,o,Bean and Riloff (2004) present a system called BABAR that uses contextual role knowledge to do coreference resolution. P07-1068,N04-1038,o,"(2004)) or Wikipedia (Ponzetto and Strube, 2006), and the contextual role played by an NP (see Bean and Riloff (2004))."
P08-1090,N04-1038,o,Bean and Riloff (2004) proposed the use of caseframe networks as a kind of contextual role knowledge for anaphora resolution. P09-1068,N04-1038,o,"In this paper we extend this work to represent sets of situation-specific events not unlike scripts, caseframes (Bean and Riloff, 2004), and FrameNet frames (Baker et al., 1998)." P09-1074,N04-1038,o,"3.5 Anaphoricity Determination Finally, several coreference systems have successfully incorporated anaphoricity determination modules (e.g. Ng and Cardie (2002a) and Bean and Riloff (2004))." W05-0612,N04-1038,o,"There are also approaches to anaphora resolution using unsupervised methods to extract useful information, such as gender and number (Ge et al. , 1998), or contextual role-knowledge (Bean and Riloff, 2004)." W06-0206,N04-1038,o,"It has shown promise in improving the performance of many tasks such as name tagging (Miller et al. , 2004), semantic class extraction (Lin et al. , 2003), chunking (Ando and Zhang, 2005), coreference resolution (Bean and Riloff, 2004) and text classification (Blum and Mitchell, 1998)." D09-1160,N04-1039,o,"ℓ1-regularized log-linear models (ℓ1-LLMs), on the other hand, provide sparse solutions, in which weights of irrelevant features are exactly zero, by assuming a Laplacian prior on the weights (Tibshirani, 1996; Kazama and Tsujii, 2003; Goodman, 2004; Gao et al., 2007)." N09-1051,N04-1039,o,"Goodman, 2004) and ℓ2 regularization (Lau, 1994; Chen and Rosenfeld, 2000; Lebanon and Lafferty, 2001)." P06-1124,N04-1039,n,"Our interpretation is more useful than past interpretations involving marginal constraints (Kneser and Ney, 1995; Chen and Goodman, 1998) or maximum-entropy models (Goodman, 2004) as it can recover the exact formulation of interpolated Kneser-Ney, and actually produces superior results."
P07-1104,N04-1039,o,"L1 or Lasso regularization of linear models, introduced by Tibshirani (1996), embeds feature selection into regularization so that both an assessment of the reliability of a feature and the decision about whether to remove it are done in the same framework, and has generated a large amount of interest in the NLP community recently (e.g. , Goodman 2003; Riezler and Vasserman 2004)." D07-1061,N04-3012,o,"We used the WordNet::Similarity package (Pedersen et al. , 2004) to compute baseline scores for several existing measures, noting that one word pair was not processed in WS-353 because one of the words was missing from WordNet." D07-1107,N04-3012,o,"We consider the outputs of the top 3 all-words WSD systems that participated in Senseval-3: Gambl (Decadt et al. , 2004), SenseLearner (Mihalcea and Faruque, 2004), and KOC University (Yuret, 2004). [Table 4: Feature ablation study; F-score difference obtained by removal of the single feature]" D07-1107,N04-3012,o,"A variety of synset similarity measures based on properties of WordNet itself have been proposed; nine such measures are discussed in (Pedersen et al. , 2004), including gloss-based heuristics (Lesk, 1986; Banerjee and Pedersen, 2003), information-content based measures (Resnik, 1995; Lin, 1998; Jiang and Conrath, 1997), and others."
D07-1107,N04-3012,o,"We use eight similarity measures implemented within the WordNet::Similarity package, described in (Pedersen et al. , 2004); these include three measures derived from the paths between the synsets in WordNet: HSO (Hirst and St-Onge, 1998), LCH (Leacock and Chodorow, 1998), and WUP (Wu and Palmer, 1994); three measures based on information content: RES (Resnik, 1995), LIN (Lin, 1998), and JCN (Jiang and Conrath, 1997); the gloss-based Extended Lesk Measure LESK (Banerjee and Pedersen, 2003), and finally the gloss vector similarity measure VECTOR (Patwardhan, 2003)." E09-1077,N04-3012,o,"is a WordNet based relatedness measure (Pedersen et al., 2004)." I08-1055,N04-3012,o,"We could also use the value of semantic similarity and relatedness measures (Pedersen et al., 2004) or the existence of hypernym or hyponym relations as features." I08-2105,N04-3012,o,"Selectional preferences are estimated using grammatical collocation information from the British National Corpus (BNC), obtained with the Word Sketch Engine (WSE) (Kilgarriff et al., 2004)." I08-2105,N04-3012,o,"Relatedness scores are computed for each pair of senses of the grammatically linked pair of words (w1; w2; GR), using the WordNet-Similarity-1.03 package and the lesk option (Pedersen et al., 2004)." N06-1025,N04-3012,o,"In addition, we use the measure from Resnik (1995), which is computed using an intrinsic information content measure relying on the hierarchical structure of the category tree (Seco et al. , 2004)." N06-1025,N04-3012,o,"1 Introduction The last years have seen a boost of work devoted to the development of machine learning based coreference resolution systems (Soon et al. , 2001; Ng & Cardie, 2002; Yang et al. , 2003; Luo et al. , 2004, inter alia)." N06-1025,N04-3012,o,"We enrich the semantic information available to the classifier by using semantic similarity measures based on the WordNet taxonomy (Pedersen et al. , 2004)."
N06-2004,N04-3012,o,"1 Introduction Estimating the degree of semantic relatedness between words in a text is deemed important in numerous applications: word-sense disambiguation (Banerjee and Pedersen, 2003), story segmentation (Stokes et al. , 2004), error correction (Hirst and Budanitsky, 2005), summarization (Barzilay and Elhadad, 1997; Gurevych and Strube, 2004)." N06-2004,N04-3012,o,"We use the default configuration of the measure in WordNet::Similarity-0.12 package (Pedersen et al. , 2004), and, with a single exception, the measure performed below Gic; see BP in table 1." N06-2004,N04-3012,o,"See formula in appendix B. We use (Pedersen et al. , 2004) implementation with a minor alteration; see Beigman Klebanov (2006)." N09-5005,N04-3012,o,"SR-AW finds the sense of each word that is most related or most similar to those of its neighbors in the sentence, according to any of the ten measures available in WordNet::Similarity (Pedersen et al., 2004)." P05-3002,N04-3012,p,"Extensive research concerning the integration of semantic knowledge into NLP for the English language has been arguably fostered by the emergence of WordNet::Similarity package (Pedersen et al. , 2004) (http://www.d.umn.edu/~tpederse/similarity.html). In its turn, the development of the WordNet based semantic similarity software has been facilitated by the availability of tools to easily retrieve data from WordNet, e.g. WordNet::QueryData, jwnl. Research integrating semantic knowledge into NLP for languages other than English is scarce." P05-3019,N04-3012,o,"The disambiguation algorithms also require that the semantic relatedness measures WordNet::Similarity (Pedersen et al. , 2004) be installed." P06-1051,N04-3012,o,"Unfortunately, a counterexample illustrated in (Boughorbel et al. , 2004) shows that the max function does not produce valid kernels in general." P06-1051,N04-3012,o,"The wn::similarity package (Pedersen et al.
, 2004) to compute the Jiang&Conrath (J&C) distance (Jiang and Conrath, 1997) as in (Corley and Mihalcea, 2005)." P07-1070,N04-3012,o,"The measures vary from simple edge-counting to attempt to factor in peculiarities of the network structure by considering link direction, relative path, and density, such as vector, lesk, hso, lch, wup, path, res, lin and jcn (Pedersen et al. , 2004)." P07-1126,N04-3012,o,"The WordNet::Similarity package (Pedersen et al. , 2004) implements this distance measure and was used by the authors." P07-2013,N04-3012,o,"The implementation includes path-length (Rada et al. , 1989; Wu & Palmer, 1994; Leacock & Chodorow, 1998), information-content (Resnik, 1995; Seco et al. , 2004) and text-overlap (Lesk, 1986; Banerjee & Pedersen, 2003) measures, as described in Strube & Ponzetto (2006)." P07-2013,N04-3012,p,"We believe that the extensive usage of such measures derives also from the availability of robust and freely available software that allows to compute them (Pedersen et al. , 2004, WordNet::Similarity)." P07-2033,N04-3012,o,"2 WordNet-based semantic relatedness measures 2.1 Basic measures Two similarity/distance measures from the Perl package WordNet-Similarity written by (Pedersen et al. , 2004) are used." W06-1659,N04-3012,o,"The approaches proposed to the ACE RDC task such as kernel methods (Zelenko et al. , 2002) and Maximum Entropy methods (Kambhatla, 2004) required the availability of large set of human annotated corpora which are tagged with relation instances." W06-3806,N04-3012,o,"We also used the following resources: the Charniak parser (Charniak, 2000) to carry out the syntactic analysis; the wn::similarity package (Pedersen et al. , 2004) to compute the Jiang&Conrath (J&C) distance (Jiang and Conrath, 1997) needed to implement the lexical similarity siml(T,H) as defined in (Corley and Mihalcea, 2005); SVM-lightTK (Moschitti, 2004) to encode the basic tree kernel function, KT, in SVM-light (Joachims, 1999)."
W07-0211,N04-3012,o,"Such a similarity is calculated by using the WordNet::Similarity tool (Pedersen et al. , 2004), and, concretely, the Wu-Palmer measure, as defined in Equation 1 (Wu and Palmer, 1994)." W07-2086,N04-3012,o,"The system uses WordNet-based measures of semantic relatedness (http://senserelate.sourceforge.net) (Pedersen et al. , 2004) to measure the relatedness between the different senses of the target word and the words in its context." W07-2086,N04-3012,o,"The relatedness between two word senses is computed using a measure of semantic relatedness defined in the WordNet::Similarity software package (Pedersen et al. , 2004), which is a suite of Perl modules implementing a number of WordNet-based measures of semantic relatedness." W09-2403,N04-3012,o,"(2007) observe that their predominant sense method is not performing as well for We use the Lesk (overlap) similarity as implemented by the WordNet::similarity package (Pedersen et al., 2004)." W09-2404,N04-3012,o,"We measure semantic similarity using the shortest path length in WordNet (Fellbaum, 1998) as implemented in the WordNet Similarity package (Pedersen et al., 2004)." W09-2405,N04-3012,p,"The WordNet::Similarity package provides a flexible implementation of many of these measures (Pedersen et al., 2004)." C08-1064,N06-1002,o,"It is often argued that the ability to translate discontiguous phrases is important to modeling translation (Chiang, 2007; Simard et al., 2005; Quirk and Menezes, 2006), and it may be that this explains the results." E09-1082,N06-1002,o,"This definition is similar to that of minimal translation units as described in Quirk and Menezes (2006), although they allow null words on either side." N07-1008,N06-1002,p,"Also, slightly restating the advantages of phrase-pairs identified in (Quirk and Menezes, 2006), these blocks are effective at capturing context including the encoding of non-compositional phrase pairs, and capturing local reordering, but they lack variables (e.g.
embedding between ne ... pas in French), have sparsity problems, and lack a strategy for global reordering." N07-1008,N06-1002,p,"However, in (Quirk and Menezes, 2006), the authors investigate minimum translation units (MTU) which is a refinement over a similar approach by (Banchs et al. , 2005) to eliminate the overlap issue." P08-1064,N06-1002,o,"However, it cannot handle long-distance reorderings properly and does not exploit discontinuous phrases and linguistically syntactic structure features (Quirk and Menezes, 2006)." W09-2306,N06-1002,o,"One of the theoretical problems with phrase based SMT models is that they can not effectively model the discontiguous translations and numerous attempts have been made on this issue (Simard et al., 2005; Quirk and Menezes, 2006; Wellington et al., 2006; Bod, 2007; Zhang et al., 2007)." W07-0211,N06-1005,o,"Some authors have already designed similar matching techniques, such as the ones described in (MacCartney et al. , 2006) and (Snow et al. , 2006)." C08-1071,N06-1020,p,"David McClosky, Eugene Charniak, and Mark Johnson Brown Laboratory for Linguistic Information Processing (BLLIP) Brown University Providence, RI 02912 {dmcc|ec|mj}@cs.brown.edu Abstract Self-training has been shown capable of improving on state-of-the-art parser performance (McClosky et al., 2006) despite the conventional wisdom on the matter and several studies to the contrary (Charniak, 1997; Steedman et al., 2003)." C08-1071,N06-1020,o,"4 Testing the Four Hypotheses The question of why self-training helps in some cases (McClosky et al., 2006; Reichart and Rappoport, 2007) but not others (Charniak, 1997; Steedman et al., 2003) has inspired various theories." C08-1071,N06-1020,o,"The self-training protocol is the same as in (Charniak, 1997; McClosky et al., 2006; Reichart and Rappoport, 2007): we parse the entire unlabeled corpus in one iteration." D08-1071,N06-1020,o,"English (previously used for self-training of parsers (McClosky et al., 2006))."
D08-1071,N06-1020,p,"Its success stories range from parsing (McClosky et al., 2006) to machine translation (Ueffing, 2006)." D09-1087,N06-1020,p,"For English, self-training contributes 0.83% absolute improvement to the PCFG-LA parser, which is comparable to the improvement obtained from using semi-supervised training with the two-stage parser in (McClosky et al., 2006)." D09-1087,N06-1020,o,"McClosky et al. (2006) effectively utilized unlabeled data to improve parsing accuracy on the standard WSJ training set, but they used a two-stage parser comprised of Charniak's lexicalized probabilistic parser with n-best parsing and a discriminative reranking parser (Charniak and Johnson, 2005), and thus it would be better categorized as co-training (McClosky et al., 2008)." D09-1160,N06-1020,o,"1 Introduction Deep and accurate text analysis based on discriminative models is not yet efficient enough as a component of real-time applications, and it is inadequate to process Web-scale corpora for knowledge acquisition (Pantel, 2007; Saeger et al., 2009) or semi-supervised learning (McClosky et al., 2006; Spoustová et al., 2009)." D09-1160,N06-1020,o,"The feature combinations play an essential role in obtaining a classifier with state-of-the-art accuracy for several NLP tasks; recent examples include dependency parsing (Koo et al., 2008), parse re-ranking (McClosky et al., 2006), pronoun resolution (Nguyen and Kim, 2008), and semantic role labeling (Liu and Sarkar, 2007)." D09-1161,N06-1020,o,"The other is the self-training (McClosky et al. 2006) which first parses and reranks the NANC corpus, and then use them as additional training data to retrain the model." E09-1033,N06-1020,o,"Third, we hope that the improved parses of bitext will serve as higher quality training data for improving monolingual parsing using a process similar to self-training (McClosky et al., 2006)."
E09-1090,N06-1020,o,"A totally different approach to improving the accuracy of our parser is to use the idea of self-training described in (McClosky et al., 2006)." E09-3005,N06-1020,o,"The problem itself has started to get attention only recently (Roark and Bacchiani, 2003; Hara et al., 2005; Daume III and Marcu, 2006; Daume III, 2007; Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007)." E09-3005,N06-1020,o,"In contrast, semi-supervised domain adaptation (Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007) is the scenario in which, in addition to the labeled source data, we only have unlabeled and no labeled target domain data." E09-3005,N06-1020,p,"2 Motivation and Prior Work While several authors have looked at the supervised adaptation case, there are less (and especially less successful) studies on semi-supervised domain adaptation (McClosky et al., 2006; Blitzer et al., 2006; Dredze et al., 2007)." E09-3005,N06-1020,o,"So far, most previous work on domain adaptation for parsing has focused on data-driven systems (Gildea, 2001; Roark and Bacchiani, 2003; McClosky et al., 2006; Shimizu and Nakagawa, 2007), i.e. systems employing (constituent or dependency based) treebank grammars (Charniak, 1996)." E09-3005,N06-1020,o,"Parse selection constitutes an important part of many parsing systems (Johnson et al., 1999; Hara et al., 2005; van Noord and Malouf, 2005; McClosky et al., 2006)." I08-2097,N06-1020,p,"There are only a few successful studies, such as (Ando and Zhang, 2005) for chunking and (McClosky et al., 2006a; McClosky et al., 2006b) on constituency parsing." I08-2097,N06-1020,o,"They also applied self-training to domain adaptation of a constituency parser (McClosky et al., 2006b)."
N07-1070,N06-1020,o,"Recently there have been some improvements to the Charniak parser, use n-best re-ranking as reported in (Charniak and Johnson, 2005) and self-training and re-ranking using data from the North American News corpus (NANC) and adapts much better to the Brown corpus (McClosky et al. , 2006a; McClosky et al. , 2006b)." N07-1070,N06-1020,o,"The syntactic parser is the version that is self-trained using 2,500,000 sentences from NANC, and where the starting version is trained only on WSJ data (McClosky et al. , 2006b)." N07-2045,N06-1020,o,"As far as we know, language modeling always improves with additional training data, so we add data from the North American News Text Corpus (NANC) (Graff, 1995) automatically parsed with the Charniak parser (McClosky et al. , 2006) to train our language model on up to 20 million additional words." P06-1043,N06-1020,o,"While (McClosky et al. , 2006) showed that this technique was effective when testing on WSJ, the true distribution was closer to WSJ so it made sense to emphasize it." P06-1043,N06-1020,o,"Thus, the WSJ+NANC model has better oracle rates than the WSJ model (McClosky et al. , 2006) for both the WSJ and BROWN domains." P06-1043,N06-1020,p,"Recent work, (McClosky et al. , 2006), has shown that adding many millions of words of machine parsed and reranked LA Times articles does, in fact, improve performance of the parser on the closely related WSJ data." P06-1043,N06-1020,o,"To use the data from NANC, we use self-training (McClosky et al. , 2006)." P06-1043,N06-1020,p,"Furthermore, use of the self-training techniques described in (McClosky et al. , 2006) raise this to 87.8% (an error reduction of 28%) again without any use of labeled Brown data." P06-1043,N06-1020,o,"The trends are the same as in (McClosky et al. , 2006): Adding NANC data improves parsing performance on BROWN development considerably, improving the f-score from 83.9% to 86.4%." P06-1109,N06-1020,o,McClosky et al. 2006).
P07-1036,N06-1020,o,"Another way to look at the algorithm is from the self-training perspective (McClosky et al. , 2006)." P07-1049,N06-1020,o,"This can either be semi-supervised parsing, using both annotated and unannotated data (McClosky et al. , 2006) or unsupervised parsing, training entirely on unannotated text." P07-1051,N06-1020,n,"While most parsing methods are currently supervised or semi-supervised (McClosky et al. 2006; Henderson 2004; Steedman et al. 2003), they depend on hand-annotated data which are difficult to come by and which exist only for a few languages." P07-1078,N06-1020,o,"Unknown words were not identified in (McClosky et al. , 2006a) as a useful predictor for the benefit of self-training." P07-1078,N06-1020,o,"We also identified a length effect similar to that studied by (McClosky et al. , 2006a) for self-training (using a reranker and large seed, as detailed in Section 2)." P07-1078,N06-1020,o,"Indeed, in the II scenario, (Steedman et al. , 2003a; McClosky et al. , 2006a; Charniak, 1997) reported no improvement of the base parser for small (500 sentences, in the first paper) and large (40K sentences, in the last two papers) seed datasets respectively." P07-1078,N06-1020,p,"In the II, OO, and OI scenarios, (McClosky et al, 2006a; 2006b) succeeded in improving the parser performance only when a reranker was used to reorder the 50-best list of the generative parser, with a seed size of 40K sentences." P07-1078,N06-1020,p,"Recently, (McClosky et al. , 2006a; McClosky et al. , 2006b) have successfully applied self-training to various parser adaptation scenarios using the reranking parser of (Charniak and Johnson, 2005)." P07-1078,N06-1020,o,"McClosky et al (2006a) use sections 2-21 of the WSJ PennTreebank as seed data and between 50K to 2,500K unlabeled NANC corpus sentences as self-training data."
P07-1078,N06-1020,p,"As a result, the good results of (McClosky et al, 2006a; 2006b) with large seed sets do not immediately imply success with small seed sets." P07-1078,N06-1020,o,"For the Brown corpus, we based our division on (Bacchiani et al. , 2006; McClosky et al. , 2006b)." P08-1037,N06-1020,p,"Tighter integration of semantics into the parsing models, possibly in the form of discriminative reranking models (Collins and Koo, 2005; Charniak and Johnson, 2005; McClosky et al., 2006), is a promising way forward in this regard." P08-1067,N06-1020,o,"type system F1% D Collins (2000) 89.7 Henderson (2004) 90.1 Charniak and Johnson (2005) 91.0 updated (Johnson, 2006) 91.4 this work 91.7 G Bod (2003) 90.7 Petrov and Klein (2007) 90.1 S McClosky et al." P08-2026,N06-1020,o,"However more recent results have shown that it can indeed improve parser performance (Bacchiani et al., 2006; McClosky et al., 2006a; McClosky et al., 2006b)." P08-2026,N06-1020,o,(2006) and McClosky et al. P08-2026,N06-1020,o,"This second point is emphasized by the second paper on self-training for adaptation (McClosky et al., 2006b)." P09-1006,N06-1020,o,"Recently there have been some works on using multiple treebanks for domain adaptation of parsers, where these treebanks have the same grammar formalism (McClosky et al., 2006b; Roark and Bacchiani, 2003)." P09-1006,N06-1020,o,"4.3 Using Unlabeled Data for Parsing Recent studies on parsing indicate that the use of unlabeled data by self-training can help parsing on the WSJ data, even when labeled data is relatively large (McClosky et al., 2006a; Reichart and Rappoport, 2007)." P09-1006,N06-1020,o,"Our results on Chinese data confirm previous findings on English data shown in (McClosky et al., 2006a; Reichart and Rappoport, 2007)."
P09-1108,N06-1020,o,"Uses for k-best lists include minimum Bayes risk decoding (Goodman, 1998; Kumar and Byrne, 2004), discriminative reranking (Collins, 2000; Charniak and Johnson, 2005), and discriminative training (Och, 2003; McClosky et al., 2006)." W07-1033,N06-1020,p,"We also plan to apply self-training of n-best tagger which successfully boosted the performance of one of the best existing English syntactic parser (McClosky et al. , 2006)." W08-1122,N06-1020,o,"These parser output trees can be produced by a second parser in a co-training scenario (Steedman et al., 2003), or by the same parser with a reranking component in a type of self-training scenario (McClosky et al., 2006)." W09-1008,N06-1020,o,"(McClosky et al., 2006) uses self-training to perform this step) (2) smoothing, usually this is performed using a markovization procedure (Collins, 1999; Klein and Manning, 2003a) and (3) make the data more coarse (i.e. clustering)." W09-1104,N06-1020,o,"Research in the field of unsupervised and weakly supervised parsing ranges from various forms of EM training (Pereira and Schabes, 1992; Klein and Manning, 2004; Smith and Eisner, 2004; Smith and Eisner, 2005) over bootstrapping approaches like self-training (McClosky et al., 2006) to feature-based enhancements of discriminative reranking models (Koo et al., 2008) and the application of semi-supervised SVMs (Wang et al., 2008)." W09-2201,N06-1020,o,"Such approaches have shown promise in applications such as web page classification (Blum and Mitchell, 1998), named entity classification (Collins and Singer, 1999), parsing (McClosky et al., 2006), and machine translation (Ueffing, 2006)." W09-2205,N06-1020,o,"5 Conclusions and Future Work The paper compares Structural Correspondence Learning (Blitzer et al., 2006) with (various instances of) self-training (Abney, 2007; McClosky et al., 2006) for the adaptation of a parse selection model to Wikipedia domains."
W09-2205,N06-1020,o,"We examine Structural Correspondence Learning (SCL) (Blitzer et al., 2006) for this task, and compare it to several variants of Self-training (Abney, 2007; McClosky et al., 2006)." W09-2205,N06-1020,o,"Studies on self-training have focused mainly on generative, constituent based parsing (Steedman et al., 2003; McClosky et al., 2006; Reichart and Rappoport, 2007)." W09-2205,N06-1020,o,"Improvements are obtained (McClosky et al., 2006; McClosky and Charniak, 2008), showing that a reranker is necessary for successful self-training in such a high-resource scenario." W09-2205,N06-1020,o,"The techniques examined are Structural Correspondence Learning (SCL) (Blitzer et al., 2006) and Self-training (Abney, 2007; McClosky et al., 2006)." W09-2205,N06-1020,o,"1 Introduction and Motivation Parse selection constitutes an important part of many parsing systems (Hara et al., 2005; van Noord and Malouf, 2005; McClosky et al., 2006)." D07-1068,N06-1025,o,"Strube and Ponzetto explored the use of Wikipedia for measuring Semantic Relatedness between two concepts (2006), and for Coreference Resolution (2006)." D07-1073,N06-1025,o,"In fact, many studies that try to exploit Wikipedia as a knowledge source have recently emerged (Bunescu and Pasca, 2006; Toral and Munoz, 2006; Ruiz-Casado et al. , 2006; Ponzetto and Strube, 2006; Strube and Ponzetto, 2006; Zesch et al. , 2007)." D08-1067,N06-1025,o,"(2005), Ponzetto and Strube (2006)) and the exploitation of advanced techniques that involve joint learning (e.g., Daume III and Marcu (2005)) and joint inference (e.g., Denis and Baldridge (2007)) for coreference resolution and a related extraction task." 
D09-1081,N06-1025,o,"Also, on WS-353, our hybrid sense-filtered variants and word-cos-ll obtained a correlation score higher than published results using WordNet-based measures (Jarmasz and Szpakowicz, 2003) (.33 to .35) and Wikipedia-based methods (Ponzetto and Strube, 2006) (.19 to .48); and very close to the results obtained by thesaurus-based (Jarmasz and Szpakowicz, 2003) (.55) and LSA-based methods (Finkelstein et al., 2002) (.56)." D09-1101,N06-1025,o,"(2004), Ponzetto and Strube (2006))." E09-1051,N06-1025,o,"(Luo et al., 2004; Ponzetto and Strube, 2006) for other approaches with an evaluation based on true mentions only)." N07-3003,N06-1025,o,Ponzetto & Strube (2006) and Strube & Ponzetto (2006) aimed at showing that the encyclopedia that anyone can edit can be indeed used as a semantic resource for research in NLP. N07-3003,N06-1025,p,The novel idea presented in Strube & Ponzetto (2006) was to induce a semantic network from the Wikipedia categorization graph to compute measures of semantic relatedness. N07-3003,N06-1025,o,"Accordingly, in Ponzetto & Strube (2006) we used a machine learning based coreference resolution system to provide an extrinsic evaluation of the utility of WordNet and Wikipedia relatedness measures for NLP applications." N09-1065,N06-1025,o,"2 Baseline Coreference Resolution System Our baseline coreference system implements the standard machine learning approach to coreference resolution (see Ng and Cardie (2002b), Ponzetto and Strube (2006), Yang and Su (2007), for instance), which consists of probabilistic classification and clustering, as described below." P07-1067,N06-1025,p,"Recently, Ponzetto and Strube (2006) suggest to mine semantic relatedness from Wikipedia, which can deal with the data sparseness problem suffered by using WordNet."
P07-1068,N06-1025,o,"(2004)) or Wikipedia (Ponzetto and Strube, 2006), and the contextual role played by an NP (see Bean and Riloff (2004))." P07-1068,N06-1025,o,"Following Ponzetto and Strube (2006), we consider an anaphoric reference, NPi, correctly resolved if NPi and its closest antecedent are in the same coreference chain in the resulting partition." P07-1068,N06-1025,o,"(2001) and Ponzetto and Strube (2006)), we generate training instances as follows: a positive instance is created for each anaphoric NP, NPj, and its closest antecedent, NPi; and a negative instance is created for NPj paired with each of the intervening NPs, NPi+1, NPi+2, . . ., NPj-1." P08-4003,N06-1025,o,"It is based on code and ideas from the system of Ponzetto and Strube (2006), but also includes some ideas from GUITAR (Steinberger et al., 2007) and other coreference systems (Versley, 2006; Yang et al., 2006)." P09-5006,N06-1025,o,"We accordingly introduce approaches which attempt to include semantic information into the coreference models from a variety of knowledge sources, e.g. WordNet (Harabagiu et al., 2001), Wikipedia (Ponzetto & Strube, 2006) and automatically harvested patterns (Poesio et al., 2002; Markert & Nissim, 2005; Yang & Su, 2007)."
P08-1114,N06-1032,p,"Riezler and Maxwell (2006) do not achieve higher BLEU scores, but do score better according to human grammaticality judgments for in-coverage cases." W06-1628,N06-1032,o,Riezler and Maxwell (2006) describe a method for learning a probabilistic model that maps LFG parse structures in German into LFG parse structures in English. W08-0319,N06-1032,p,Riezler and Maxwell (2006) report an improvement in MT grammaticality on a very restricted test set: short sentences parsable by an LFG grammar without back-off rules. D07-1078,N06-1033,o,"We used a bottom-up, CKY-style decoder that works with binary xRs rules obtained via a synchronous binarization procedure (Zhang et al. , 2006)." D07-1078,N06-1033,o,"2 Related Research Several researchers (Melamed et al. , 2004; Zhang et al. , 2006) have already proposed methods for binarizing synchronous grammars in the context of machine translation." D07-1079,N06-1033,o,"Translation rules can: look like phrase pairs with syntax decoration: NPB(NNP(prime) NNP(minister) NNP(keizo) NNP(obuchi)) BUFDFKEUBWAZ carry extra contextual constraints: VP(VBD(said) x0:SBAR-C) DKx0 (according to this rule, DK can translate to said only if some Chinese sequence to the right of DK is translated into an SBAR-C) be non-constituent phrases: VP(VBD(said) SBAR-C(IN(that) x0:S-C)) DKx0 VP(VBD(pointed) PRT(RP(out)) x0:SBAR-C) DXGPx0 contain non-contiguous phrases, effectively phrases with holes: PP(IN(on) NP-C(NPB(DT(the) x0:NNP)) NN(issue)))) GRx0 EVABG6 PP(IN(on) NP-C(NPB(DT(the) NN(issue)) x0:PP)) GRx0 EVEVABABG6 be purely structural (no words): S(x0:NP-C x1:VP)x0 x1 re-order their children: NP-C(NPB(DT(the) x0:NN) PP(IN(of) x1:NP-C)) x1 DFx0 Decoding with this model produces a tree in the target language, bottom-up, by parsing the foreign string using a CYK parser and a binarized rule set (Zhang et al. , 2006)."
D08-1060,N06-1033,o,"A CYK-style decoder has to rely on binarization to preprocess the grammar as done in (Zhang et al., 2006) to handle multi-nonterminal rules." D08-1060,N06-1033,o,"Work in (Al-Onaizan and Kishore, 2006; Xiong et al., 2006; Zens et al., 2004; Kumar and Byrne, 2005; Tillmann and Zhang, 2005) modeled the limited information available at phrase-boundaries." D08-1066,N06-1033,o,"This is in line with earlier work on consistent estimation for similar models (Zollmann and Simaan, 2006), and agrees with the most up-to-date work that employs Bayesian priors over the estimates (Zhang et al., 2008)." D08-1066,N06-1033,o,"Our work expands on the general approach taken by (DeNero et al., 2006; Moore and Quirk, 2007) but arrives at insights similar to those of the most recent work (Zhang et al., 2006), albeit in a completely different manner." D08-1066,N06-1033,o,"3.1 Binarizable segmentations (a) Following (Zhang et al., 2006; Huang et al., 2008), every sequence of phrase alignments can be viewed 1For example, if the cut-off on phrase pairs is ten words, all sentence pairs smaller than ten words in the training data will be included as phrase pairs as well." D08-1066,N06-1033,o,"(Zhang et al., 2006; Huang et al., 2008)), a binarizable segmentation/permutation can be recognized by a binarized Synchronous Context-Free Grammar (SCFG), i.e., an SCFG in which the right hand sides of all non-lexical rules constitute binarizable permutations." D08-1066,N06-1033,o,"While this heuristic estimator gives good empirical results, it does not seem to optimize any intuitively reasonable objective function of the (word-aligned) parallel corpus (see e.g., (DeNero et al., 2006)). The mounting number of efforts attacking this problem over the last few years (DeNero et al., 2006; Marcu and Wong, 2002; Birch et al., 2006; Moore and Quirk, 2007; Zhang et al., 2008) exhibits its difficulty."
D09-1007,N06-1033,o,"Binarizing the grammars (Zhang et al., 2006) further increases the size of these sets, due to the introduction of virtual nonterminals." D09-1037,N06-1033,o,"Our decoder lacks certain features shown to be beneficial to synchronous grammar decoding, in particular rule binarisation (Zhang et al., 2006)." D09-1038,N06-1033,o,"The baseline system is based on the synchronous binarization (Zhang et al., 2006)." D09-1038,N06-1033,o,"4.2 Binarization Schemes Besides the baseline (Zhang et al., 2006) and iterative cost reduction binarization methods, we also perform right-heavy and random synchronous binarizations for comparison." D09-1038,N06-1033,o,"Given the following SCFG rule: VP VB NP JJR , VB NP will be JJR we can obtain a set of equivalent binary rules using the synchronous binarization method (Zhang et al., 2006) as follows: VP V1 JJR , V1 JJR V1 VB V2 , VB V2 V2 NP , NP will be This binarization is shown with the solid lines as binarization (a) in Figure 1." D09-1038,N06-1033,o,"Generally, two edges can be re-combined if they satisfy the following two constraints: 1) the LHS (left-hand side) nonterminals are identical and the sub-alignments are the same (Zhang et al., 2006); and 2) the boundary words on both sides of the partial translations are equal between the two edges (Chiang, 2007)." D09-1038,N06-1033,n,"Figure 1: Two different binarizations (a) and (b) of the same SCFG rule distinguished by the solid lines and dashed lines. Figure 2: Edge competitions caused by different binarizations. The edge competition problem for SMT decoding is not addressed in previous work (Zhang et al., 2006; Huang, 2007) in which each SCFG rule is binarized in a fixed way."
D09-1038,N06-1033,n,"The experimental results show that our method outperforms the synchronous binarization method (Zhang et al., 2006) with over 0.8 BLEU scores on both NIST 2005 and NIST 2008 Chinese-to-English evaluation data sets." D09-1038,N06-1033,o,"A synchronous binarization method is proposed in (Zhang et al., 2006) whose basic idea is to build a left-heavy binary synchronous tree (Shapiro and Stephens, 1991) with a left-to-right shift-reduce algorithm." D09-1038,N06-1033,n,"Although this method is comparatively easy to be implemented, it just achieves the same performance as the synchronous binarization method (Zhang et al., 2006) for syntax-based SMT systems." D09-1038,N06-1033,o,"3 Synchronous Binarization Optimization by Cost Reduction As discussed in Section 1, binarizing an SCFG in a fixed (left-heavy) way (Zhang et al., 2006) may lead to a large number of competing edges and consequently high risk of making search errors." D09-1038,N06-1033,o,"The time complexity of the CKY-based binarization algorithm is O(n3), which is higher than that of the linear binarization such as the synchronous binarization (Zhang et al., 2006)." D09-1038,N06-1033,o,"Iterative cost reduction algorithm Input: An SCFG Output: An equivalent binary SCFG of 1: Function ITERATIVECOSTREDUCTION( ) 2: 0 3: for each 0do 4: ( ) = , 0 5: while ( ) does not converge do 6: for each do 7: [ ] ( ) 8: for each ( ) do 9: for each , do 10: 1 11: ( ) CKYBINARIZATION( , ) 12: [ ] ( ) 13: for each ( ) do 14: for each , do 15: + 1 16: return In the iterative cost reduction algorithm, we first obtain an initial binary SCFG 0 using the synchronous binarization method proposed in (Zhang et al., 2006)." N06-3004,N06-1033,o,"We develop this intuition into a technique called synchronous binarization (Zhang et al. , 2006) which binarizes a synchronous production or tree-transduction rule on both source and target sides simultaneously." N07-1063,N06-1033,o,"(Zhang et al.
, 2006) binarize grammars into CNF normal form, while (Watanabe et al. , 2006) allow only Greibach-Normal form grammars." N09-1026,N06-1033,o,"Rule size and lexicalization affect parsing complexity whether the grammar is binarized explicitly (Zhang et al., 2006) or implicitly binarized using Earley-style intermediate symbols (Zollmann et al., 2006)." N09-1049,N06-1033,o,"Extensions to Hiero Several authors describe extensions to Hiero, to incorporate additional syntactic information (Zollmann and Venugopal, 2006; Zhang and Gildea, 2006; Shen et al., 2008; Marton and Resnik, 2008), or to combine it with discriminative latent models (Blunsom et al., 2008)." N09-1061,N06-1033,o,"Not only is this beneficial in terms of parsing complexity, but smaller rules can also improve a translation model's ability to generalize to new data (Zhang et al., 2006)." P06-1121,N06-1033,o,"Its rule binarization is described in (Zhang et al. , 2006)." P06-1123,N06-1033,o,"So unlike some other studies (Zens and Ney, 2003; Zhang et al. , 2006), we used manually annotated alignments instead of automatically generated ones." P06-1123,N06-1033,o,"Decomposing the translational equivalence relations in the training data into smaller units of knowledge can improve a model's ability to generalize (Zhang et al. , 2006)." P08-1023,N06-1033,o,"Compared with their string-based counterparts, tree-based systems offer some attractive features: they are much faster in decoding (linear time vs. cubic time, see (Huang et al., 2006)), do not require a binary-branching grammar as in string-based models (Zhang et al., 2006), and can have separate grammars for parsing and translation, say, a context-free grammar for the former and a tree substitution grammar for the latter (Huang et al., 2006)." P08-1069,N06-1033,o,"and Gildea, 2007; Zhang et al., 2006; Gildea, Satta, and Zhang, 2006)." P09-2036,N06-1033,o,"Past work has synchronously binarized such rules for efficiency (Zhang et al., 2006; Huang et al., 2008)."
P09-2036,N06-1033,o,"model reranking has also been established, both for synchronous binarization (Zhang et al., 2006) and for target-only binarization (Huang, 2007)." W06-1606,N06-1033,o,"The decoder uses a binarized representation of the rules, which is obtained via a synchronous binarization procedure (Zhang et al. , 2006)." W07-0403,N06-1033,o,"We can use a linear-time algorithm (Zhang et al. , 2006) to detect non-ITG movement in our high-confidence links, and remove the offending sentence pairs from our training corpus." W07-0405,N06-1033,p,"Synchronous binarization (Zhang et al. , 2006) solves this problem by simultaneously binarizing both source and target-sides of a synchronous rule, making sure of contiguous spans on both sides whenever possible." W07-0405,N06-1033,o,"Decoding with an SCFG (e.g. , translating from Chinese to English using the above grammar) can be cast as a parsing problem (see Section 3 for details), in which case we need to binarize a synchronous rule with more than two nonterminals to achieve polynomial time algorithms (Zhang et al. , 2006)." W07-0405,N06-1033,o,"Intuitively speaking, the gaps on the target-side will lead to exponential complexity in decoding with integrated language models (see Section 3), as well as synchronous parsing (Zhang et al. , 2006)." W07-0405,N06-1033,o,"This representation, being contiguous on both sides, successfully reduces the decoding complexity to a low polynomial and significantly improved the search quality (Zhang et al. , 2006)." W07-0412,N06-1033,o,"On the positive side, recent work exploring the automatic binarization of synchronous grammars (Zhang et al. , 2006) has indicated that non-binarizable constructions seem to be relatively rare in practice." W08-0403,N06-1033,p,"Recent work by (Zhang et al., 2006) shows a practically efficient approach that binarizes linguistically SCFG rules when possible."
C08-1008,N06-1041,o,This is the scenario considered by Haghighi and Klein (2006) for POS tagging: how to construct an accurate tagger given a set of tags and a few example words for each of those tags. C08-1042,N06-1041,o,"The mapping typically is made to try to give the most favorable mapping in terms of accuracy, typically using a greedy assignment (Haghighi and Klein, 2006)." D07-1023,N06-1041,o,"Haghighi and Klein's (2006) prototype-driven approach requires just a few prototype examples for each POS tag, exploiting these labeled words to constrain the labels of their distributionally similar words when training a generative log-linear model for POS tagging." D07-1031,N06-1041,o,"In fact, we found that it doesn't do so badly at all: the bitag HMM estimated by EM achieves a mean 1-to-1 tagging accuracy of 40%, which is approximately the same as the 41.3% reported by (Haghighi and Klein, 2006) for their sophisticated MRF model." D07-1031,N06-1041,o,"Most previous work exploiting unsupervised training data for inferring POS tagging models has focused on semi-supervised methods in which the learner is provided with a lexicon specifying the possible tags for each word (Merialdo, 1994; Smith and Eisner, 2005; Goldwater and Griffiths, 2007) or a small number of prototypes for each POS (Haghighi and Klein, 2006)." D07-1031,N06-1041,o,"Haghighi and Klein (2006) propose constraining the mapping from hidden states to POS tags so that at most one hidden state maps to any POS tag." D07-1031,N06-1041,o,"It is difficult to compare these with previous work, but Haghighi and Klein (2006) report that in a completely unsupervised setting, their MRF model, which uses a large set of additional features and a more complex estimation procedure, achieves an average 1-to-1 accuracy of 41.3%." D08-1004,N06-1041,o,"Haghighi and Klein (2006) do the reverse: for each class label y, they ask the annotators to propose a few prototypical features f such that p(y|f) is as high as possible."
D08-1036,N06-1041,o,"Finally, following Haghighi and Klein (2006) and Johnson (2007) we can instead insist that at most one HMM state can be mapped to any part-of-speech tag." D08-1109,N06-1041,o,"In addition, a number of approaches have focused on developing discriminative approaches for unsupervised and semi-supervised tagging (Smith and Eisner, 2005; Haghighi and Klein, 2006)." D09-1009,N06-1041,o,"We use the same feature processing as Haghighi and Klein (2006), with the addition of context features in a window of 3." D09-1009,N06-1041,n,"The 74.6% final accuracy on apartments is higher than any result obtained by Haghighi and Klein (2006) (the highest is 74.1%), higher than the supervised HMM results reported by Grenager et al." D09-1009,N06-1041,o,"For example, both Haghighi and Klein (2006) and Mann and McCallum (2008) have demonstrated results better than 66.1% on the apartments task described above using only a list of 33 highly discriminative features and the labels they indicate." D09-1009,N06-1041,o,"Consequently, we abstract away from specifying a distribution by allowing the user to assign labels to features (c.f. Haghighi and Klein (2006) , Druck et al." D09-1009,N06-1041,o,"Similarly, prototype-driven learning (PDL) (Haghighi and Klein, 2006) optimizes the joint marginal likelihood of data labeled with prototype input features for each label." D09-1072,N06-1041,o,"One other work that investigates the use of a limited lexicon is (Haghighi & Klein, 2006), which develops a prototype-driven approach to propagate the categorical property using distributional similarity features; using only three exemplars of each tag, they achieve a tagging accuracy of 80.5% using a somewhat larger dataset but also the full Penn tagset, which is much larger." D09-1134,N06-1041,o,"This sparse information, however, can be propagated across all data based on distributional similarity (Haghighi and Klein, 2006)."
D09-1134,N06-1041,o,"Prototype-driven learning (Haghighi and Klein, 2006) specifies prior knowledge by providing a few prototypes (i.e., canonical example words) for each label." D09-1134,N06-1041,o,"In this work, we use the prototype lists originally defined by Haghighi and Klein (2006) (HK06) and subsequently used by Chang et al." E09-1041,N06-1041,o,"unsupervised induction techniques that have been successfully developed for English (e.g., Schutze (1995), Clark (2003)), including the recently proposed prototype-driven approach (Haghighi and Klein, 2006) and Bayesian approach (Goldwater and Griffiths, 2007)." E09-1042,N06-1041,o,"Haghighi and Klein (2006) develop a prototype-driven approach, which requires just a few prototype examples for each POS tag and exploits these labeled words to constrain the labels of their distributionally similar words." I08-1069,N06-1041,o,"Still, however, such techniques often require seeds, or prototypes (c.f., (Haghighi and Klein, 2006)) which are used to prune search spaces or direct learners." I08-1069,N06-1041,p,Haghighi and Klein (2006) showed that adding a small set of prototypes to the unlabeled data can improve tagging accuracy significantly. I08-1069,N06-1041,o,"For instance, the frequency collected from the data can be used to bias initial transition and emission probabilities in an HMM model; the tagged words in IGT can be used to label the resulting clusters produced by the word clustering approach; the frequent and unambiguous words in the target lines can serve as prototype examples in the prototype-driven approach (Haghighi and Klein, 2006)." I08-2093,N06-1041,o,"The recent work of (Haghighi and Klein, 2006) and (Quirk et al., 2005) were also sources of inspiration."
I08-2093,N06-1041,o,"In some recent grammar induction and MT work (Haghighi and Klein, 2006; Quirk et al., 2005) it has been shown that even a small amount of knowledge about a language, in the form of grammar fragments, treelets or prototypes, can go a long way in helping with the induction of a grammar from raw text or with alignment of parallel corpora." N07-1057,N06-1041,o,"The order of constituents, for instance, can be used to inform prototype-driven learning strategies (Haghighi and Klein, 2006), which can then be applied to raw corpora." N07-1057,N06-1041,o,"In particular, knowing a little about the structure of a language can help in developing annotated corpora and tools, since a little knowledge can go a long way in inducing accurate structure and annotations (Haghighi and Klein, 2006)." N09-1034,N06-1041,o,"This has been shown both in supervised settings (Roth and Yih, 2004; Riedel and Clarke, 2006) and unsupervised settings (Haghighi and Klein, 2006; Chang et al., 2007) in which constraints are used to bootstrap the model." N09-3010,N06-1041,o,Haghighi and Klein (2006) ask the user to suggest a few prototypes (examples) for each class and use those as features. N09-3010,N06-1041,o,"Supervision for simple features has been explored in the literature (Raghavan et al., 2006; Druck et al., 2008; Haghighi and Klein, 2006)." P07-1035,N06-1041,o,"First, we use the standard approach of greedily assigning each of the learned classes to the POS tag with which it has the greatest overlap, and then computing tagging accuracy (Smith and Eisner, 2005; Haghighi and Klein, 2006). Additionally, we compute the mutual information of the learned clusters with the gold tags, and we compute the cluster F-score (Ghosh, 2003)." P07-1035,N06-1041,o,"For comparison, Haghighi and Klein (2006) report an unsupervised baseline of 41.3%, and a best result of 80.5% from using hand-labeled prototypes and distributional similarity."
P07-1036,N06-1041,o,"In many cases, improving semi-supervised models was done by seeding these models with domain information taken from dictionaries or ontology (Cohen and Sarawagi, 2004; Collins and Singer, 1999; Haghighi and Klein, 2006; Thelen and Riloff, 2002)." P07-1036,N06-1041,o,"(Grenager et al., 2005) and (Haghighi and Klein, 2006) also report results for semi-supervised learning for these domains." P07-1036,N06-1041,o,"(Haghighi and Klein, 2006) also worked on one of our data sets." P07-1036,N06-1041,o,"(Haghighi and Klein, 2006) extends the dictionary-based approach to sequential labeling tasks by propagating the information given in the seeds with contextual word similarity." P07-1036,N06-1041,o,"We implement some global constraints and include unary constraints which were largely imported from the list of seed words used in (Haghighi and Klein, 2006)." P07-1094,N06-1041,o,Haghighi and Klein (2006) use a small list of labeled prototypes and no dictionary. P08-1085,N06-1041,p,"In particular, (Haghighi and Klein, 2006) presents very strong results using a distributional-similarity module and achieve impressive tagging accuracy while starting with a mere 116 prototypical words." P08-1085,N06-1041,o,"We also report state-of-the-art results for Hebrew full morphology. Another notable work, though within a slightly different framework, is the prototype-driven method proposed by (Haghighi and Klein, 2006), in which the dictionary is replaced with a very small seed of prototypical examples." P08-1099,N06-1041,o,"We achieve competitive performance in comparison to alternate model families, in particular generative models such as MRFs trained with EM (Haghighi and Klein, 2006) and HMMs trained with soft constraints (Chang et al., 2007)." P08-1099,N06-1041,p,"This type of input information (features + majority label) is a powerful and flexible model for specifying alternative inputs to a classifier, and has been additionally used by Haghighi and Klein (2006)."
P08-1099,N06-1041,o,"(2005) and compare with results reported by HK06 (Haghighi and Klein, 2006) and CRR07 (Chang et al., 2007)." D08-1016,N06-1054,o,"As do constraint relaxation (Tromble and Eisner, 2006) and forest reranking (Huang, 2008)." P08-1035,N06-1054,o,"Semantic features are used for classifying entities into semantic types such as name of person, organization, or place, while syntactic features characterize the kinds of dependency. It is worth noting that the present approach can be recast into one based on constraint relaxation (Tromble and Eisner, 2006)." C08-1075,N06-1060,o,"This approach will generally take advantage of language-specific (e.g. in (Freeman et al., 2006)) and domain-specific knowledge, of any external resources (e.g. database, names dictionaries, etc.), and of any information about the entities to process, e.g. their type (person name, organization, etc.), or internal structure (e.g. in (Prager et al., 2007))." W08-1402,N06-1060,o,"2 Basic Approaches 2.1 Cross-Lingual Approach Our cross-lingual approach (called MLEV) is based on (Freeman et al. 2006), who used a modified Levenshtein string edit-distance algorithm to match Arabic script person names against their corresponding English versions." W08-1402,N06-1060,o,"For this study, the Levenshtein edit-distance score (where a perfect match scores zero) is Roman Chinese (Pinyin) Alignment Score LEV ashburton ashenbodu | a s h b u r t o n | | a s h e n b o d u | 0.67 MLEV ashburton ashenbodu | a s h b u r t o n | | a s h e n b o d u | 0.72 MALINE asVburton aseCnpotu | a sV b < u r t o | n | a s eC n p o t u | 0.48 normalized to a similarity score as in (Freeman et al. 2006), where the score ranges from 0 to 1, with 1 being a perfect match." W08-1402,N06-1060,o,"In Table 1, the MALINE row shows that the English name has a palato-alveolar modification. As (Freeman et al., 2006) point out, these insights are not easy to come by: These rules are based on first author Dr.
Andrew Freeman's experience with reading and translating Arabic language texts for more than 16 years (Freeman et al., 2006, p. 474)." I08-2119,N07-1015,o,"Early work employed a diverse range of features in a linear classifier (commonly referred to as feature-based approaches), including lexical features, syntactic parse features, dependency features and semantic features (Jiang and Zhai, 2007; Kambhatla, 2004; Zhou et al., 2005)." I08-2119,N07-1015,o,"Feature-based methods (Jiang and Zhai, 2007; Kambhatla, 2004; Zhou et al., 2005) use pre-defined feature sets to extract features to train classification models." I08-2119,N07-1015,o,"Jiang & Zhai (2007) gave a systematic examination of the efficacy of unigram, bigram and trigram features drawn from different representations: surface text, constituency parse tree and dependency parse tree." P08-2023,N07-1015,o,"Jiang and Zhai (2007) then systematically explored a large space of features and evaluated the effectiveness of different feature subspaces corresponding to sequence, syntactic parse tree and dependency parse tree." P09-1114,N07-1015,o,"The model presented above is based on our previous work (Jiang and Zhai, 2007c), which bears the same spirit of some other recent work on multitask learning (Ando and Zhang, 2005; Evgeniou and Pontil, 2004; Daume III, 2007)." P09-1114,N07-1015,o,"While transfer learning was proposed more than a decade ago (Thrun, 1996; Caruana, 1997), its application in natural language processing is still a relatively new territory (Blitzer et al., 2006; Daume III, 2007; Jiang and Zhai, 2007a; Arnold et al., 2008; Dredze and Crammer, 2008), and its application in relation extraction is still unexplored." P09-1114,N07-1015,o,"We systematically explored the feature space for relation extraction (Jiang and Zhai, 2007b). Kernel methods allow a large set of features to be used without being explicitly extracted."
P09-1114,N07-1015,o,"Following our previous work (Jiang and Zhai, 2007b), we extract features from a sequence representation and a parse tree representation of each relation instance." W08-0603,N07-1015,o,"A systematic exploration of a set of such features for protein-protein interaction extraction was recently provided by Jiang and Zhai (2007), who also used features derived from the Collins parser." N07-4013,N07-1016,o,"For a full discussion of previous work, please see (Banko et al., 2007), or see (Yates and Etzioni, 2007) for work relating to synonym resolution." N07-4013,N07-1016,o,"(Yates and Etzioni, 2007) 4." P08-1004,N07-1016,o,"Following extraction, O-CRF applies the RESOLVER algorithm (Yates and Etzioni, 2007) to find relation synonyms, the various ways in which a relation is expressed in text." D07-1023,N07-1020,p,"In particular, we have implemented an unsupervised morphological analyzer that outperforms Goldsmith's (2001) Linguistica and Creutz and Lagus's (2005) Morfessor for our English and Bengali datasets and compares favorably to the best-performing morphological parsers in MorphoChallenge 2005 (see Dasgupta and Ng (2007))." D09-1070,N07-1020,o,"Letter successor variety (LSV) models (Hafer and Weiss, 1974; Gaussier, 1999; Bernhard, 2005; Bordag, 2005; Keshava and Pitler, 2005; Hammarstrom, 2006; Dasgupta and Ng, 2007; Demberg, 2007) use the hypothesis that there is less certainty when predicting the next character at morpheme boundaries." E09-1015,N07-1020,o,"Relative frequencies of word-forms have been used in previous work to detect incorrect affix attachments in Bengali and English (Dasgupta and Ng, 2007)." I08-1003,N07-1020,o,"There has been recent work on discovering allomorphic phenomena automatically (Dasgupta and Ng, 2007; Demberg, 2007)."
N09-1024,N07-1020,o,"Unsupervised approaches are attractive due to the availability of large quantities of unlabeled text, and unsupervised morphological segmentation has been extensively studied for a number of languages (Brent et al., 1995; Goldsmith, 2001; Dasgupta and Ng, 2007; Creutz and Lagus, 2007)." N09-1024,N07-1020,o,"Some adopt a pipeline approach (Schone and Jurafsky, 2001; Dasgupta and Ng, 2007; Demberg, 2007), which works by first extracting candidate affixes and stems, and then segmenting the words based on the candidates." W08-2117,N07-1020,n,"Allomorphs (e.g., deni and deny) are also automatically identified in (Dasgupta, 2007), but the general problem of recognizing highly irregular forms is examined more extensively in (Yarowsky and Wicentowski, 2000)." W09-0106,N07-1020,o,"A second example of subtle language dependence comes from Dasgupta and Ng (2007), who present an unsupervised morphological segmentation algorithm meant to be language-independent." W09-0106,N07-1020,n,"However, it seems unrealistic to expect a one-size-fits-all approach to achieve uniformly high performance across varied languages, and, in fact, it doesn't. Though the system presented in (Dasgupta and Ng, 2007) outperforms the best systems in the 2006 PASCAL challenge for Turkish and Finnish, it still does significantly worse on these languages than English (F-scores of 66.2 and 66.5, compared to 79.4)." W09-0805,N07-1020,p,"Dasgupta and Ng (2007) improves over (Creutz, 2003) by suggesting a simpler approach." E09-1045,N07-1025,o,"Thus, some research has been focused on deriving different word-sense groupings to overcome the fine-grained distinctions of WN (Hearst and Schutze, 1993), (Peters et al., 1998), (Mihalcea and Moldovan, 2001), (Agirre and LopezDeLaCalle, 2003), (Navigli, 2006) and (Snow et al., 2007)." E09-1045,N07-1025,o,"In this way, Wikipedia provides a new very large source of annotated data, constantly expanded (Mihalcea, 2007)."
N09-1004,N07-1025,o,"Generally, WSD methods use the context of a word for its sense disambiguation, and the context information can come from either annotated/unannotated text or other knowledge resources, such as WordNet (Fellbaum, 1998), SemCor (SemCor, 2008), Open Mind Word Expert (Chklovski and Mihalcea, 2002), eXtended WordNet (Moldovan and Rus, 2001), Wikipedia (Mihalcea, 2007), parallel corpora (Ng, Wang, and Chan, 2003)." W08-2231,N07-1025,o,Mihalcea (2007) shows that Wikipedia can indeed be used as a sense inventory for sense disambiguation. W09-2403,N07-1025,o,"Mihalcea (2007) demonstrates that manual mappings can be created for a small number of words with relative ease, but for a very large number of words the effort involved in mapping would be considerable." N09-1014,N07-1026,o,"In natural language processing, label propagation has been used for document classification (Zhu, 2005), word sense disambiguation (Niu et al., 2005; Alexandrescu and Kirchhoff, 2007), and sentiment categorization (Goldberg and Zhu, 2006)." D09-1020,N07-1039,o,"They may rely only on this information (e.g., (Turney, 2002; Whitelaw et al., 2005; Riloff and Wiebe, 2003)), or they may combine it with additional information as well (e.g., (Yu and Hatzivassiloglou, 2003; Kim and Hovy, 2004; Bloom et al., 2007; Wilson et al., 2005a))." N09-1055,N07-1039,p,"With the in-depth study of opinion mining, researchers committed their efforts for more accurate results: the research of sentiment summarization (Philip et al., 2004; Hu et al., KDD 2004), domain transfer problem of the sentiment analysis (Kanayama et al., 2006; Tan et al., 2007; Blitzer et al., 2007; Tan et al., 2008; Andreevskaia et al., 2008; Tan et al., 2009) and fine-grained opinion mining (Hatzivassiloglou et al., 2000; Takamura et al., 2007; Bloom et al., 2007; Wang et al., 2008; Titov et al., 2008) are the main branches of the research of opinion mining."
P09-1026,N07-1039,o,"A number of works in product review mining (Hu and Liu, 2004; Popescu et al., 2005; Kobayashi et al., 2005; Bloom et al., 2007) automatically find features of the reviewed products." C08-1051,N07-1043,o,"One of the most relevant work is (Bollegala et al., 2007), which proposed to integrate various patterns in order to measure semantic similarity between words." C08-1051,N07-1043,p,"In addition to the classical window-based technique, some studies investigated the use of lexico-syntactic patterns (e.g., X or Y) to get more accurate co-occurrence statistics (Chilovski and Pantel, 2004; Bollegala et al., 2007)." W08-2113,N07-1043,p,"However, the most interesting work is certainly proposed by (Bollegala et al., 2007) who extract patterns in two steps." W08-2113,N07-1043,o,"On the other hand, works done by (Snow et al., 2005; Snow et al., 2006; Sang and Hofmann, 2007; Bollegala et al., 2007) have proposed methodologies to automatically acquire these patterns mostly based on supervised learning to leverage manual work." C04-1015,P02-1040,o,"BLEU: Automatic evaluation by BLEU score (Papineni et al., 2002)." C04-1016,P02-1040,p,"Automated metrics such as BLEU (Papineni et al., 2002), RED (Akiba et al., 2001), Weighted N-gram model (WNM) (Babych, 2004), syntactic relation / semantic vector model (Rajman and Hartley, 2001) have been shown to correlate closely with scoring or ranking by different human evaluation parameters." C04-1016,P02-1040,p,"It was found to produce automated scores, which strongly correlate with human judgements about translation fluency (Papineni et al., 2002)." C04-1030,P02-1040,o,"This score measures the precision of unigrams, bigrams, trigrams and fourgrams with respect to a reference translation with a penalty for too short sentences (Papineni et al., 2002)." C04-1064,P02-1040,o,"As an example of its application, N-gram co-occurrence is used for evaluating machine translations (Papineni et al., 2002)."
C04-1114,P02-1040,o,"Both calculate the precision of a translation by comparing it to a reference translation and incorporating a length penalty (Doddington, 2001; Papineni et al., 2002)." C04-1168,P02-1040,o,"The following four metrics were used specifically in this study: BLEU (Papineni et al., 2002): A weighted geometric mean of the n-gram matches between test and reference sentences multiplied by a brevity penalty that penalizes short translation sentences." C08-1014,P02-1040,o,"Our evaluation metrics are BLEU (Papineni et al., 2002) and NIST, which are to perform case-insensitive matching of n-grams up to n = 4." C08-1038,P02-1040,o,"5.2 Experimental Results Following (Langkilde, 2002) and other work on general-purpose generators, BLEU score (Papineni et al., 2002), average NIST simple string accuracy (SSA) and percentage of exactly matched sentences are adopted as evaluation metrics." C08-1041,P02-1040,o,"The translation quality is evaluated by BLEU metric (Papineni et al., 2002), as calculated by mteval-v11b.pl with case-insensitive matching of n-grams, where n = 4." C08-1064,P02-1040,o,"Optimization and measurement were done with the NIST implementation of case-insensitive BLEU 4n4r (Papineni et al., 2002). 4.1 Baseline We compared translation by pattern matching with a conventional exact model representation using external prefix trees (Zens and Ney, 2007)." C08-1128,P02-1040,o,"We evaluated performance by measuring WER (word error rate), PER (position-independent word error rate), BLEU (Papineni et al., 2002) and TER (translation error rate) (Snover et al., 2006) using multiple references." C08-1138,P02-1040,o,"The evaluation metric is case-sensitive BLEU-4 (Papineni et al., 2002)." C08-1141,P02-1040,p,"The state-of-the-art methods for automatic MT evaluation are using an n-gram based metric represented by BLEU (Papineni et al., 2002) and its variants."
C08-1144,P02-1040,o,"Translation quality is automatically evaluated by the IBM-BLEU metric (Papineni et al., 2002) (case-sensitive, using length of the closest reference translation) on the following publicly" D07-1007,P02-1040,p,"In addition to the widely used BLEU (Papineni et al., 2002) and NIST (Doddington, 2002) scores, we also evaluate translation quality with the recently proposed Meteor (Banerjee and Lavie, 2005) and four edit-distance style metrics, Word Error Rate (WER), Position-independent word Error Rate (PER) (Tillmann et al., 1997), CDER, which allows block reordering (Leusch et al., 2006), and Translation Edit Rate (TER) (Snover et al., 2006)." D07-1008,P02-1040,o,"To counteract this, we introduce two brevity penalty measures (BP) inspired by BLEU (Papineni et al., 2002) which we incorporate into the loss function, using a product, loss = 1 - Prec * BP: BP1 = exp(1 - max(1, r/c)) (6) BP2 = exp(1 - max(c/r, r/c)) where r is the reference length and c is the candidate length." D07-1030,P02-1040,o,"In our experiments using BLEU (Papineni et al., 2002) as the metric, the interpolated synthetic model achieves a relative improvement of 11.7% over the best RBMT system that is used to produce the synthetic bilingual corpora." D07-1030,P02-1040,p,"The translation quality is evaluated using a well-established automatic measure: BLEU score (Papineni et al., 2002)." D07-1036,P02-1040,o,"The translation quality is evaluated by BLEU metric (Papineni et al., 2002), as calculated by mteval-v11b.pl with case-sensitive matching of n-grams." D07-1049,P02-1040,o,"All evaluation is in terms of the BLEU score on our test set (Papineni et al., 2002)." D07-1054,P02-1040,o,"This approach gave an improvement of 2.7 in BLEU (Papineni et al., 2002) score on the IWSLT05 Japanese to English evaluation corpus (improving the score from 52.4 to 55.1)."
D07-1055,P02-1040,o,"There exists a variety of different metrics, e.g., word error rate, position-independent word error rate, BLEU score (Papineni et al., 2002), NIST score (Doddington, 2002), METEOR (Banerjee and Lavie, 2005), GTM (Turian et al., 2003)." D07-1055,P02-1040,p,"A popular metric for evaluating machine translation quality is the Bleu score (Papineni et al., 2002)." D07-1077,P02-1040,o,"The reordered sentence is then re-tokenized to be consistent with the baseline system, which uses a different tokenization scheme that is more friendly to the MT system. We use BLEU scores as the performance measure in our evaluation (Papineni et al., 2002)." D07-1080,P02-1040,o,"The algorithm is slightly different from other online training algorithms (Tillmann and Zhang, 2006; Liang et al., 2006) in that we keep and update oracle translations, which is a set of good translations reachable by a decoder according to a metric, i.e. BLEU (Papineni et al., 2002)." D07-1080,P02-1040,o,"4.2 Approximated BLEU We used the BLEU score (Papineni et al., 2002) as the loss function computed by: BLEU(E; E') = exp((1/N) sum_{n=1}^{N} log p_n(E, E')) * BP(E, E') (7) where p_n(.) is the n-gram precision of hypothesized translations E = {e_t}_{t=1}^{T} given reference translations E' = {e'_t}_{t=1}^{T} and BP(.) is a brevity penalty." D07-1080,P02-1040,o,"The translation quality is evaluated by case-sensitive NIST (Doddington, 2002) and BLEU (Papineni et al., 2002)." D07-1090,P02-1040,o,"For each training data size, we report the size of the resulting language model, the fraction of 5-grams from the test data that is present in the language model, and the BLEU score (Papineni et al., 2002) obtained by the machine translation system." D07-1092,P02-1040,o,"Results in terms of word-error-rate (WER) and BLEU score (Papineni et al., 2002) are reported in Table 4 for those sentences that contain at least one unknown word."
D08-1010,P02-1040,o,"The translation quality is evaluated by BLEU metric (Papineni et al., 2002), as calculated by mteval-v11b.pl with case-insensitive matching of n-grams, where n = 4." D08-1011,P02-1040,o,"In the following experiments, the NIST BLEU score is used as the evaluation metric (Papineni et al., 2002), which is reported as a percentage in the following sections." D08-1023,P02-1040,o,"Model performance is evaluated using the standard BLEU metric (Papineni et al., 2002) which measures average n-gram precision, n <= 4, and we use the NIST definition of the brevity penalty for multiple reference test sets." D08-1028,P02-1040,o,"There are however other similarity metrics (e.g. BLEU (Papineni et al., 2002)) which could be used equally well." D08-1033,P02-1040,o,"scored with lowercased, tokenized NIST BLEU, and exact match METEOR (Papineni et al., 2002; Lavie and Agarwal, 2007)." D08-1051,P02-1040,o,"EsEn 63.0±0.9 59.2±0.9 6.0±1.4 EnEs 63.8±0.9 60.5±1.0 5.2±1.6 DeEn 71.6±0.8 69.0±0.9 3.6±1.3 EnDe 75.9±0.8 73.5±0.9 3.2±1.2 FrEn 62.9±0.9 59.2±1.0 5.9±1.6 EnFr 63.4±0.9 60.0±0.9 5.4±1.4 combined in a log-linear fashion by adjusting a weight for each of them by means of the MERT (Och, 2003) procedure, optimising the BLEU (Papineni et al., 2002) score obtained on the development partition." D08-1051,P02-1040,o,"To this purpose, different authors (Papineni et al., 1998; Och and Ney, 2002) propose the use of the so-called log-linear models, where the decision rule is given by the expression y* = argmax_y sum_{m=1}^{M} lambda_m h_m(x,y) (3) where h_m(x,y) is a score function representing an important feature for the translation of x into y, M is the number of models (or features) and lambda_m are the weights of the log-linear combination." D08-1060,P02-1040,o,"We also report the result of our translation quality in terms of both BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) against four human reference translations."
D08-1063,P02-1040,o,"7 Experiments To show the effectiveness of cross-language mention propagation information in improving mention detection system performance in Arabic, Chinese and Spanish, we use three SMT systems with very competitive performance in terms of BLEU (Papineni et al., 2002)." D08-1066,P02-1040,o,"Performance is measured by computing the BLEU scores (Papineni et al., 2002) of the systems' translations, when compared against a single reference translation per sentence." D08-1078,P02-1040,o,"6.1 Evaluation of Translation Performance We use the BLEU score (Papineni et al., 2002) to evaluate our systems." D08-1090,P02-1040,o,"All conditions were optimized using BLEU (Papineni et al., 2002) and evaluated using both BLEU and Translation Edit Rate (TER) (Snover et al., 2006)." D09-1006,P02-1040,p,"2.1 The BLEU Metric The metric most often used with MERT is BLEU (Papineni et al., 2002), where the score of a candidate c against a reference translation r is: BLEU = BP(len(c), len(r)) * exp(sum_{n=1}^{4} (1/4) log p_n), where p_n is the n-gram precision and BP is a brevity penalty meant to penalize short outputs, to discourage improving precision at the expense of recall." D09-1006,P02-1040,p,"It is the most widely reported metric in MT research, and has been shown to correlate well with human judgment (Papineni et al., 2002; Coughlin, 2003)." D09-1007,P02-1040,o,"We report BLEU scores (Papineni et al., 2002) on untokenized, recapitalized output." D09-1021,P02-1040,o,"For efficiency reasons we report results on sentences of length 30 words or less. The syntax-based method gives a BLEU (Papineni et al., 2002) score of 25.04, a 0.46 BLEU point gain over Pharoah."
D09-1024,P02-1040,o,"For this we aligned 170,863 pairs of Arabic/English newswire sentences from LDC, trained a state-of-the-art syntax-based statistical machine translation system (Galley et al., 2006) on these sentences and alignments, and measured BLEU scores (Papineni et al., 2002) on a separate set of 1298 newswire test sentences." D09-1030,P02-1040,o,"Instead, researchers routinely use automatic metrics like Bleu (Papineni et al., 2002) as the sole evidence of improvement to translation quality." D09-1037,P02-1040,o,"5.2 Translation In order to test the translation performance of the grammars induced by our model and the GHKM method we report BLEU (Papineni et al., 2002) scores on sentences of up to twenty words in length from the MT03 NIST evaluation." D09-1039,P02-1040,o,"Similarly to MERT, Tillmann and Zhang estimate the parameters of a weight vector on a linear combination of (binary) features using a global objective function correlated with BLEU (Papineni et al., 2002)." D09-1042,P02-1040,o,"Following the evaluation methodology of Wong and Mooney (2007), we performed 4 runs of the standard 10-fold cross validation and report the averaged performance in this section using the standard automatic evaluation metric BLEU (Papineni et al., 2002) and NIST (Doddington, 2002)." D09-1043,P02-1040,p,"The full model yields a state-of-the-art BLEU (Papineni et al., 2002) score of 0.8506 on Section 23 of the CCGbank, which is to our knowledge the best score reported to date using a reversible, corpus-engineered grammar." D09-1050,P02-1040,o,"using the BLEU metric (Papineni et al., 2002)." D09-1073,P02-1040,o,"Besides the case-sensitive BLEU-4 (Papineni et al., 2002) used in the two experiments, we design another evaluation metric, Reordering Accuracy (RAcc), for forced decoding evaluation." D09-1073,P02-1040,o,"Similar to BLEU score, we also use the similar Brevity Penalty BP (Papineni et al., 2002) to penalize the short translations in computing RAcc."
D09-1075,P02-1040,o,"For practical reasons, the maximum size of a token was set at three for Chinese, and four for Korean. Minimum error rate training (Och, 2003) was run on each system afterwards and BLEU score (Papineni et al., 2002) was calculated on the test sets." D09-1105,P02-1040,o,"Demonstrating the inadequacy of such approaches, Al-Onaizan and Papineni (2006) showed that even given the words in the reference translation, and their alignment to the source words, a decoder of this sort charged with merely rearranging them into the correct target-language order could achieve a BLEU score (Papineni et al., 2002) of at best 69% and that only when restricted to keep most words very close to their source positions." D09-1106,P02-1040,o,"We evaluated the translation quality using case-insensitive BLEU metric (Papineni et al., 2002)." D09-1123,P02-1040,o,"We show that our DDTM system provides significant improvements in BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) scores over the already extremely competitive DTM2 system." D09-1136,P02-1040,o,"The model weights of the transducer are tuned based on the development set using a grid-based line search, and the translation results are evaluated based on a single Chinese reference using BLEU-4 (Papineni et al., 2002)." D09-1141,P02-1040,o,"We set all weights by optimizing Bleu (Papineni et al., 2002) using minimum error rate training (MERT) (Och, 2003) on a separate development set of 2,000 sentences (Indonesian or Spanish), and we used them in a beam search decoder (Koehn et al., 2007) to translate 2,000 test sentences (Indonesian or Spanish) into English." E06-1005,P02-1040,p,"3.2 Evaluation Criteria Well-established objective evaluation measures like the word error rate (WER), position-independent word error rate (PER), and the BLEU score (Papineni et al., 2002) were used to assess the translation quality." E06-1031,P02-1040,p,"State-of-the-art measures such as BLEU (Papineni et al., 2002) or NIST (Doddington, 2002) aim at measuring the translation quality rather on the document level than on the level of single sentences." E06-1032,P02-1040,o,"1 Introduction Over the past five years progress in machine translation, and to a lesser extent progress in natural language generation tasks such as summarization, has been driven by optimizing against n-gram-based evaluation metrics such as Bleu (Papineni et al., 2002)." E06-1040,P02-1040,p,"While studies have shown that ratings of MT systems by BLEU and similar metrics correlate well with human judgments (Papineni et al., 2002; Doddington, 2002), we are not aware of any studies that have shown that corpus-based evaluation metrics of NLG systems are correlated with human judgments; correlation studies have been made of individual components (Bangalore et al., 2000), but not of systems." E06-1040,P02-1040,p,"The BLEU metric (Papineni et al., 2002) in MT has been particularly successful; for example MT05, the 2005 NIST MT evaluation exercise, used BLEU-4 as the only method of evaluation." E06-1040,P02-1040,p,"Properly calculated BLEU scores have been shown to correlate reliably with human judgments (Papineni et al., 2002)." E06-1040,P02-1040,p,"Some NLG researchers are impressed by the success of the BLEU evaluation metric (Papineni et al., 2002) in Machine Translation (MT), which has transformed the MT field by allowing researchers to quickly and cheaply evaluate the impact of new ideas, algorithms, and data sets." E09-1017,P02-1040,o,"For example, in machine translation evaluation, approaches such as BLEU (Papineni et al., 2002) use n-gram overlap comparisons with a model to judge overall goodness, with higher n-grams meant to capture fluency considerations." E09-1017,P02-1040,o,"Evaluation metrics such as BLEU (Papineni et al., 2002) have a built-in preference for shorter translations." E09-1063,P02-1040,o,"The performance of PB-SMT system is measured with BLEU score (Papineni et al., 2002)."
E09-1097,P02-1040,o,"Success is indicated by the proportion of the original sentence regenerated, as measured by any string comparison method: in our case, using the BLEU metric (Papineni et al., 2002)."
E09-3008,P02-1040,o,"5.1 Evaluation of Translation Translations are evaluated on two automatic metrics: Bleu (Papineni et al., 2002) and PER, position independent error-rate (Tillmann et al., 1997)."
E09-3008,P02-1040,o,"This system was worse than the baseline on Bleu (Papineni et al., 2002), but an error analysis showed some improvements."
H05-1005,P02-1040,p,"The table also shows the popular BLEU (Papineni et al. , 2002) and NIST2 MT metrics."
H05-1019,P02-1040,n,"They reported that their method is superior to BLEU (Papineni et al. , 2002) in terms of the correlation between human assessment and automatic evaluation."
H05-1023,P02-1040,o,"We report case sensitive Bleu (Papineni et al. , 2002) score BleuC for all experiments."
H05-1049,P02-1040,o,"3 Semantic Representation 3.1 The Need for Dependencies Perhaps the most common representation of text for assessing content is Bag-Of-Words or Bag-of-NGrams (Papineni et al. , 2002)."
H05-1095,P02-1040,n,"Unfortunately, this is not the case for such widely used MT evaluation metrics as BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002)."
H05-1098,P02-1040,o,"The feature weights are learned by maximizing the BLEU score (Papineni et al. , 2002) on held-out data, using minimum-error-rate training (Och, 2003) as implemented by Koehn."
H05-1098,P02-1040,o,"5 Analysis Over the last few years, several automatic metrics for machine translation evaluation have been introduced, largely to reduce the human cost of iterative system evaluation during the development cycle (Lin and Och, 2004; Melamed et al. , 2003; Papineni et al. , 2002)."
H05-1109,P02-1040,o,"For extrinsic evaluation of machine translation, we use the BLEU metric (Papineni et al. , 2002)."
H05-1117,P02-1040,p,"3 Previous Work The idea of employing n-gram co-occurrence statistics to score the output of a computer system against one or more desired reference outputs was first successfully implemented in the BLEU metric for machine translation (Papineni et al. , 2002)."
H05-2007,P02-1040,o,"We can incorporate each model into the system in turn, and rank the results on a test corpus using BLEU (Papineni et al. , 2002)."
H05-2007,P02-1040,o,"1 Introduction Over the last few years, several automatic metrics for machine translation (MT) evaluation have been introduced, largely to reduce the human cost of iterative system evaluation during the development cycle (Papineni et al. , 2002; Melamed et al. , 2003)."
I05-2021,P02-1040,o,"However, recent progress in machine translation and the continuous improvement on evaluation metrics such as BLEU (Papineni et al. , 2002) suggest that SMT systems are already very good at choosing correct word translations."
I05-2039,P02-1040,o,"Because it is not feasible here to have humans judge the quality of many sets of translated data, we rely on an array of well known automatic evaluation measures to estimate translation quality : BLEU (Papineni et al. 2002) is the geometric mean of the n-gram precisions in the output with respect to a set of reference translations."
I05-2042,P02-1040,n,"Although, there are various manual/automatic evaluation methods for these systems, e.g., BLEU (Papineni et al. 2002), these methods are basically incapable of dealing with an MT system and a w/p-MT-system at the same time, as they have different output forms."
I05-5008,P02-1040,o,"Automatic measures like BLEU (PAPINENI et al. , 2001) or NIST (DODDINGTON, 2002) do so by counting sequences of words in such paraphrases."
I08-1030,P02-1040,o,"The translations are evaluated in terms of BLEU score (Papineni et al., 2002)."
I08-2088,P02-1040,o,"The horizontal axis represents the weight for the out-of-domain translation model, and the vertical axis (Figure 2: Results of data selection and linear interpolation (BLEU)) represents the automatic metric of translation quality (BLEU score (Papineni et al., 2002) in Fig."
J05-3002,P02-1040,n,"This restriction is necessary because the problem of optimizing many-to-many alignments 5 Our preliminary experiments with n-gram-based overlap measures, such as BLEU (Papineni et al. 2002) and ROUGE (Lin and Hovy 2003), show that these metrics do not correlate with human judgments on the fusion task, when tested against two reference outputs."
J05-4003,P02-1040,o,5 Translation performance was measured using the automatic BLEU evaluation metric (Papineni et al. 2002) on four reference translations.
J06-4002,P02-1040,o,"For instance, several studies have shown that BLEU correlates with human ratings on machine translation quality (Papineni et al. 2002; Doddington 2002; Coughlin 2003)."
J06-4002,P02-1040,o,"However, they can be usefully employed during system development, for example, for quickly assessing modeling ideas or for comparing across different system configurations (Papineni et al. 2002; Bangalore, Rambow, and Whittaker 2000)."
J06-4004,P02-1040,o,"Translation accuracy is measured in terms of the BLEU score (Papineni et al. 2002), which is computed here for translations generated by using the tuple n-gram model alone, in the case of Table 2, and by using the tuple n-gram model along with the additional four feature functions described in Section 3.2, in the case of Table 3."
J06-4004,P02-1040,o,"In our SMT system implementation, this optimization procedure is performed by using a tool developed in-house, which is based on a simplex method (Press et al. 2002), and the BLEU score (Papineni et al. 2002) is used as a translation quality measurement."
J07-1003,P02-1040,o,"The translation quality on the TransType2 task in terms of WER, PER, BLEU score (Papineni et al. 2002), and NIST score (NIST 2002) is given in Table 4."
J07-2003,P02-1040,o,"Finally, the parameters i of the log-linear model (18) are learned by minimum-error-rate training (Och 2003), which tries to set the parameters so as to maximize the BLEU score (Papineni et al. 2002) of a development set."
J07-2003,P02-1040,o,"Our evaluation metric is case-insensitive BLEU-4 (Papineni et al. 2002), as defined by NIST, that is, using the shortest (as opposed to closest) reference sentence length for the brevity penalty."
J07-2003,P02-1040,o,"9 The definition of BLEU used in this training was the original IBM definition (Papineni et al. 2002), which defines the effective reference length as the reference length that is closest to the test sentence length."
N03-1003,P02-1040,o,"This could, for example, aid machine-translation evaluation, where it has become common to evaluate systems by comparing their output against a bank of several reference translations for the same sentences (Papineni et al. , 2002)."
N03-1010,P02-1040,o,"distance (MSD) and the maximum swap segment size (MSSS) ranging from 0 to 10 and evaluated the translations with the BLEU7 metric (Papineni et al. , 2002)."
N03-1013,P02-1040,o,"7For details about the Bleu evaluation metric, see (Papineni et al. , 2002)."
N03-2013,P02-1040,o,"Expansion of the equivalent sentence set can be applied to automatic evaluation of machine translation quality (Papineni et al. , 2002; Akiba et al. , 2001), for example."
N03-2016,P02-1040,o,"For the evaluation of translation quality, we used the BLEU metric (Papineni et al. , 2002), which measures the n-gram overlap between the translated output and one or more reference translations."
N03-2036,P02-1040,o,"The third column reports the BLEU score (Papineni et al. , 2002) along with 95% confidence interval."
N04-1008,P02-1040,p,"4.4.1 N-gram Co-Occurrence Statistics for Answer Extraction N-gram co-occurrence statistics have been successfully used in automatic evaluation (Papineni et al. 2002, Lin and Hovy 2003), and more recently as training criteria in statistical machine translation (Och 2003)."
N04-1019,P02-1040,p,"In machine translation, the rankings from the automatic BLEU method (Papineni et al. , 2002) have been shown to correlate well with human evaluation, and it has been widely used since and has even been adapted for summarization (Lin and Hovy, 2003)."
N04-4003,P02-1040,o,"Word Error Rate (WER), which penalizes the edit distance against reference translations (Su et al. , 1992) BLEU: the geometric mean of n-gram precision for the translation results found in reference translations (Papineni et al. , 2002) Translation Accuracy (ACC): subjective evaluation ranks ranging from A to D (A: perfect, B: fair, C: acceptable and D: nonsense), judged blindly by a native speaker (Sumita et al. , 1999) In contrast to WER, higher BLEU and ACC scores indicate better translations."
N04-4015,P02-1040,o,"Translation qualities are measured by uncased BLEU (Papineni et al. 2002) with 4 reference translations, sysids: ahb, ahc, ahd, ahe."
N06-1003,P02-1040,o,"To set the weights, m, we performed minimum error rate training (Och, 2003) on the development set using Bleu (Papineni et al. , 2002) as the objective function."
N06-1004,P02-1040,o,"2 Disperp and Distortion Corpora 2.1 Defining Disperp The ultimate reason for choosing one SCM over another will be the performance of an MT system containing it, as measured by a metric like BLEU (Papineni et al. , 2002)."
N06-1013,P02-1040,o,"MT output is evaluated using the standard MT evaluation metric BLEU (Papineni et al. , 2002)."
N06-1031,P02-1040,o,"In the final step, we score our translations with 4-gram BLEU (Papineni et al. , 2002)."
N06-1058,P02-1040,o,"4.2 Impact of Paraphrases on Machine Translation Evaluation The standard way to analyze the performance of an evaluation metric in machine translation is to compute the Pearson correlation between the automatic metric and human scores (Papineni et al. , 2002; Koehn, 2004; Lin and Och, 2004; Stent et al. , 2005)."
N06-1058,P02-1040,n,"This strategy is commonly used in MT evaluation, because of BLEU's well-known problems with documents of small size (Papineni et al. , 2002; Koehn, 2004)."
N06-1058,P02-1040,o,"The Pearson correlation is calculated over these ten pairs (Papineni et al. , 2002; Stent et al. , 2005)."
N06-1058,P02-1040,o,"Our scores fall within the range of previous researchers (Papineni et al. , 2002; Lin and Och, 2004)."
N06-1058,P02-1040,o,"Automatic Evaluation Measures A variety of automatic evaluation methods have been recently proposed in the machine translation community (NIST, 2002; Melamed et al. , 2003; Papineni et al. , 2002)."
N06-2029,P02-1040,o,"For evaluation, we used the BLEU metrics, which calculates the geometric mean of n-gram precision for the MT outputs found in reference translations (Papineni et al. , 2002)."
N06-2051,P02-1040,o,"We optimized separately for both the NIST (Doddington, 2002) and the BLEU metrics (Papineni et al. , 2002)."
N07-1005,P02-1040,o,"Many methods for calculating the similarity have been proposed (Niessen et al. , 2000; Akiba et al. , 2001; Papineni et al. , 2002; NIST, 2002; Leusch et al. , 2003; Turian et al. , 2003; Babych and Hartley, 2004; Lin and Och, 2004; Banerjee and Lavie, 2005; Gimenez et al. , 2005)."
N07-1005,P02-1040,o,"In our research, 23 scores, namely BLEU (Papineni et al. , 2002) with maximum n-gram lengths of 1, 2, 3, and 4, NIST (NIST, 2002) with maximum n-gram lengths of 1, 2, 3, 4, and 5, GTM (Turian et al. , 2003) with exponents of 1.0, 2.0, and 3.0, METEOR (exact) (Banerjee and Lavie, 2005), WER (Niessen et al. , 2000), PER (Leusch et al. , 2003), and ROUGE (Lin, 2004) with n-gram lengths of 1, 2, 3, and 4 and 4 variants (LCS, S, SU, W-1.2), were used to calculate each similarity S i . Therefore, the value of m in Eq."
N07-1005,P02-1040,o,"In recent years, many researchers have tried to automatically evaluate the quality of MT and improve the performance of automatic MT evaluations (Niessen et al. , 2000; Akiba et al. , 2001; Papineni et al. , 2002; NIST, 2002; Leusch et al. , 2003; Turian et al. , 2003; Babych and Hartley, 2004; Lin and Och, 2004; Banerjee and Lavie, 2005; Gimenez et al. , 2005) because improving the performance of automatic MT evaluation is expected to enable us to use and improve MT systems efficiently."
N07-1006,P02-1040,o,"2 Three New Features for MT Evaluation Since our source-sentence constrained n-gram precision and discriminative unigram precision are both derived from the normal n-gram precision, it is worth describing the original n-gram precision metric, BLEU (Papineni et al. , 2002)."
N07-1006,P02-1040,n,"The most commonly used metric, BLEU, correlates well over large test sets with human judgments (Papineni et al. , 2002), but does not perform as well on sentence-level evaluation (Blatz et al. , 2003)."
N07-1007,P02-1040,o,"We also show that integrating our case prediction model improves the quality of translation according to BLEU (Papineni et al. , 2002) and human evaluation."
N07-1021,P02-1040,o,"BLEU (Papineni et al. , 2002) is a precision metric that assesses the quality of a translation in terms of the proportion of its word n-grams (n ≤ 4 has become standard) that it shares with several reference translations."
N07-1022,P02-1040,o,"#Reference: If our player 2, 3, 7 or 5 has the ball and the ball is close to our goal line PHARAOH++: If player 3 has the ball is in 2 5 the ball is in the area near our goal line WASP1++: If players 2, 3, 7 and 5 has the ball and the ball is near our goal line Figure 4: Sample partial system output in the ROBOCUP domain ROBOCUP GEOQUERY BLEU NIST BLEU NIST PHARAOH 0.3247 5.0263 0.2070 3.1478 WASP1 0.4357 5.4486 0.4582 5.9900 PHARAOH++ 0.4336 5.9185 0.5354 6.3637 WASP1++ 0.6022 6.8976 0.5370 6.4808 Table 1: Results of automatic evaluation; bold type indicates the best performing system (or systems) for a given domain-metric pair (p < 0.05) 5.1 Automatic Evaluation We performed 4 runs of 10-fold cross validation, and measured the performance of the learned generators using the BLEU score (Papineni et al. , 2002) and the NIST score (Doddington, 2002)."
N07-1029,P02-1040,o,"The NIST BLEU-4 is a variant of BLEU (Papineni et al. , 2002) and is computed as BLEU4(e, e') = BP(e, e') * exp((1/4) * sum_{n=1..4} log p_n(e, e')) (2) where p_n(e, e') is the precision of n-grams in the hypothesis e given the reference e' and BP(e, e') <= 1 is a brevity penalty."
N07-1046,P02-1040,o,"Therefore, having correct transliterations would give only small improvements in terms of BLEU (Papineni et al. , 2002) and NIST scores."
N07-1061,P02-1040,o,"To set the weights, m, we carried out minimum error rate training (Och, 2003) using BLEU (Papineni et al. , 2002) as the objective function."
N07-1063,P02-1040,o,"We present results in the form of search error analysis and translation quality as measured by the BLEU score (Papineni et al. , 2002) on the IWSLT 06 text translation task (Eck and Hori, 2005)1, comparing Cube Pruning with our two-pass approach."
N07-2006,P02-1040,o,"The baseline score using all phrase pairs was 59.11 (BLEU, Papineni et al. , 2002) with a 95% confidence interval of [57.13, 61.09]."
N07-2013,P02-1040,o,"BLEU score In order to measure the extent to which whole chunks of text from the prompt are reproduced in the student essays, we used the BLEU score, known from studies of machine translation (Papineni et al. 2002)."
N07-2037,P02-1040,o,"We measure translation performance by the BLEU score (Papineni et al, 2002) with one reference for each hypothesis."
N09-1014,P02-1040,o,"Here, we compare two similarity measures: the familiar BLEU score (Papineni et al., 2002) and a score based on string kernels."
N09-1027,P02-1040,o,"Feature weights vector are trained discriminatively in concert with the language model weight to maximize the BLEU (Papineni et al., 2002) automatic evaluation metric via Minimum Error Rate Training (MERT) (Och, 2003)."
N09-1029,P02-1040,o,"Our evaluation metric is BLEU (Papineni et al., 2002) with case-insensitive matching from unigram to four-gram."
N09-1046,P02-1040,o,"The feature weights were tuned on a heldout development set so as to maximize an equally weighted linear combination of BLEU and 1-TER (Papineni et al., 2002; Snover et al., 2006) using the minimum error training algorithm on a packed forest representation of the decoder's hypothesis space (Macherey et al., 2008)."
N09-1048,P02-1040,o,"The results were evaluated using the character/pinyin-based 4-gram BLEU score (Papineni et al., 2002), word error rate (WER), position independent word error rate (PER), and exact match (EMatch)."
N09-1058,P02-1040,o,"The final SMT system performance is evaluated on a uncased test set of 3071 sentences using the BLEU (Papineni et al., 2002), NIST (Doddington, 2002) and METEOR (Banerjee and Lavie, 2005) scores."
N09-2001,P02-1040,o,"Results are reported using lowercase BLEU (Papineni et al., 2002)."
N09-2003,P02-1040,o,"In this case, one is often required to find the translation(s) in the hypergraph that are most similar to the desired translations, with similarity computed via some automatic metric such as BLEU (Papineni et al., 2002)."
N09-2006,P02-1040,n,"Due to limited variations in the N-Best list, the nature of ranking, and more importantly, the non-differentiable objective functions used for MT (such as BLEU (Papineni et al., 2002)), one often found only local optimal solutions to , with no clue to walk out of the riddles."
N09-2024,P02-1040,o,"Including about 1.4 million sentence pairs extracted from the Gigaword data, we obtain a statistically significant improvement from 42.3 to 45.6 in BLEU (Papineni et al., 2002)."
N09-2038,P02-1040,o,"Day 1 Day 2 No ASR adaptation 29.39 27.41 Unsupervised ASR adaptation 31.55 27.66 Supervised ASR adaptation 32.19 27.65 Table 2: Impact of ASR adaptation to SMT Table 2 shows the impact of ASR adaptation on the performance of the translation system in BLEU (Papineni et al., 2002)."
N09-2055,P02-1040,o,"The automatic assessment of the translation quality has been carried out using the BiLingual Evaluation Understudy (BLEU) (Papineni et al., 2002), and the Translation Error Rate (TER) (Snover et al., 2006)."
N09-2056,P02-1040,o,"For the evaluation of translation quality, we applied standard automatic evaluation metrics, i.e., BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005)."
P02-1039,P02-1040,o,"As an overall decoding performance measure, we used the BLEU metric (Papineni et al. , 2002)."
P03-1039,P02-1040,o,"BLEU: BLEU score, which computes the ratio of n-gram for the translation results found in reference translations (Papineni et al. , 2002)."
P03-1040,P02-1040,o,"Performance is also measured by the BLEU score (Papineni et al. , 2002), which measures similarity to the reference translation taken from the English side of the parallel corpus."
P03-1057,P02-1040,o,"Another current topic of machine translation is automatic evaluation of MT quality (Papineni et al. , 2002; Yasuda et al. , 2001; Akiba et al. , 2001)."
P03-1057,P02-1040,o,"3 Automatic Evaluation of MT Quality We utilize BLEU (Papineni et al. , 2002) for the automatic evaluation of MT quality in this paper."
P04-1027,P02-1040,o,"From this point of view, some of the measures used in the evaluation of Machine Translation systems, such as BLEU (Papineni et al. , 2002), have been imported into the summarization task."
P04-1063,P02-1040,o,"Regressive FLM (rFLM) h(FLM(e,j)) = w1 FLM(e,j)+b Regressive ALM (rALM) h(ALM(e,j)) = w1 ALM(e,j)+b Notice that h() here is supposed to relate FLM or ALM to some independent evaluation metric such as BLEU (Papineni et al. , 2002), not the log likelihood of a translation."
P04-1078,P02-1040,p,"1 Introduction With the introduction of the BLEU metric for machine translation evaluation (Papineni et al, 2002), the advantages of doing automatic evaluation for various NLP applications have become increasingly appreciated: they allow for faster implement-evaluate cycles (by by-passing the human evaluation bottleneck), less variation in evaluation performance due to errors in human assessor judgment, and, not least, the possibility of hill-climbing on such metrics in order to improve system performance (Och 2003)."
P04-1078,P02-1040,o,"For comparison purposes, we also computed the value of R^2 for fluency using the BLEU score formula given in (Papineni et al. , 2002), for the 7 systems using the same one reference, and we obtained a similar value, 78.52%; computing the value of R^2 for fluency using the BLEU scores computed with all 4 references available yielded a lower value for R^2, 64.96%, although BLEU scores obtained with multiple references are usually considered more reliable."
P04-1078,P02-1040,o,"For comparison purposes, we also computed the value of R^2 for adequacy using the BLEU score formula given in (Papineni et al. , 2002), for the 7 systems using the same one reference, and we obtain a similar value, 83.91%; computing the value of R^2 for adequacy using the BLEU scores computed with all 4 references available also yielded a lower value for R^2, 62.21%."
P04-1079,P02-1040,o,"On the one hand using 1 human reference with uniform results is essential for our methodology, since it means that there is no more trouble with Recall (Papineni et al. , 2002:314): a system's ability to avoid under-generation of N-grams can now be reliably measured."
P04-1079,P02-1040,o,"A similar observation was made in (Papineni et al. , 2002: 313)."
P04-1079,P02-1040,n,"Automatic evaluation methods such as BLEU (Papineni et al. , 2002), RED (Akiba et al. , 2001), or the weighted N-gram model proposed here may be more consistent in judging quality as compared to human evaluators, but human judgments remain the only criteria for metaevaluating the automatic methods."
P04-1079,P02-1040,o,"Besides saving cost, the ability to dependably work with a single human translation has an additional advantage: it is now possible to create Recall-based evaluation measures for MT, which has been problematic for evaluation with multiple reference translations, since only one of the choices from the reference set is used in translation (Papineni et al. 2002:314)."
P04-1079,P02-1040,o,"Some of them use human reference translations, e.g., the BLEU method (Papineni et al. , 2002), which is based on comparison of N-gram models in MT output and in a set of human reference translations."
P05-1009,P02-1040,o,"We evaluate accuracy performance using two automatic metrics: an identity metric, ID, which measures the percent of sentences recreated exactly, and BLEU (Papineni et al. , 2002), which gives the geometric average of the number of uni-, bi-, tri-, and four-grams recreated exactly."
P05-1018,P02-1040,o,"Existing automatic evaluation measures such as BLEU (Papineni et al. , 2002) and ROUGE (Lin 2The collections are available from http://www.csail."
P05-1032,P02-1040,o,"We calculated the translation quality using Bleu's modified n-gram precision metric (Papineni et al. , 2002) for n-grams of up to length four."
P05-1032,P02-1040,o,"They used the Bleu evaluation metric (Papineni et al. , 2002), but capped the n-gram precision at 4-grams."
P05-1033,P02-1040,o,"Our evaluation metric was BLEU (Papineni et al. , 2002), as calculated by the NIST script (version 11a) with its default settings, which is to perform case-insensitive matching of n-grams up to n = 4, and to use the shortest (as opposed to nearest) reference sentence for the brevity penalty."
P05-1048,P02-1040,o,"Using our WSD model to constrain the translation candidates given to the decoder hurts translation quality, as measured by the automated BLEU metric (Papineni et al. , 2002)."
P05-1066,P02-1040,o,"We use BLEU scores (Papineni et al. , 2002) to measure translation accuracy."
P05-1067,P02-1040,o,"Our MT system was evaluated using the n-gram based Bleu (Papineni et al. , 2002) and NIST machine translation evaluation software."
P05-1069,P02-1040,o,"Experimental results are reported in Table 2: here cased BLEU results are reported on MT03 Arabic-English test set (Papineni et al. , 2002)."
P05-1074,P02-1040,o,"Examples of monolingual parallel corpora that have been used are multiple translations of classical French novels into English, and data created for machine translation evaluation methods such as Bleu (Papineni et al. , 2002) which use multiple reference translations."
P05-3026,P02-1040,n,"METEOR was chosen since, unlike the more commonly used BLEU metric (Papineni et al. , 2002), it provides reasonably reliable scores for individual sentences."
P06-1002,P02-1040,o,"Other metrics assess the impact of alignments externally, e.g., different alignments are tested by comparing the corresponding MT outputs using automated evaluation metrics (e.g. , BLEU (Papineni et al. , 2002) or METEOR (Banerjee and Lavie, 2005))."
P06-1002,P02-1040,o,"MT output was evaluated using the standard evaluation metric BLEU (Papineni et al. , 2002).2 The parameters of the MT System were optimized for BLEU metric on NIST MTEval2002 test sets using minimum error rate training (Och, 2003), and the systems were tested on NIST MTEval2003 test sets for both languages."
P06-1011,P02-1040,o,"Translation performance is measured using the automatic BLEU (Papineni et al. , 2002) metric, on one reference translation."
P06-1067,P02-1040,o,"This new model leads to significant improvements in MT quality as measured by BLEU (Papineni et al. , 2002)."
P06-1077,P02-1040,o,"We evaluated the translation quality using the BLEU metric (Papineni et al. , 2002), as calculated by mteval-v11b.pl with its default setting except that we used case-sensitive matching of n-grams."
P06-1090,P02-1040,p,"We report results using the well-known automatic evaluation metrics Bleu (Papineni et al. , 2002)."
P06-1091,P02-1040,o,"We show translation results in terms of the automatic BLEU evaluation metric (Papineni et al. , 2002) on the MT03 Arabic-English DARPA evaluation test set consisting of a212a89a212a89a87 sentences with a98a89a212a161a213a89a214a89a215 Arabic words with a95 reference translations."
P06-1119,P02-1040,p,"First, we compared our system output to human reference translations using Bleu (Papineni, et al. , 2002), a widely accepted objective metric for evaluation of machine translations."
P06-1130,P02-1040,o,"4.2 String-Based Evaluation We evaluate the output of our generation system against the raw strings of Section 23 using the Simple String Accuracy and BLEU (Papineni et al. , 2002) evaluation metrics."
P06-1139,P02-1040,o,"When evaluated against the state-of-the-art, phrase-based decoder Pharaoh (Koehn, 2004), using the same experimental conditions translation table trained on the FBIS corpus (7.2M Chinese words and 9.2M English words of parallel text), trigram language model trained on 155M words of English newswire, interpolation weights a65 (Equation 2) trained using discriminative training (Och, 2003) (on the 2002 NIST MT evaluation set), probabilistic beam a90 set to 0.01, histogram beam a58 set to 10 and BLEU (Papineni et al. , 2002) as our metric, the WIDL-NGLM-Aa86 a129 algorithm produces translations that have a BLEU score of 0.2570, while Pharaoh translations have a BLEU score of 0.2635."
P06-2005,P02-1040,o,"For evaluation, we use IBM's BLEU score (Papineni et al. , 2002) to measure the performance of the SMS normalization."
P06-2005,P02-1040,o,"We use IBM's BLEU score (Papineni et al. , 2002) to measure the performance of SMS text normalization."
P06-2070,P02-1040,o,"2 Recap of BLEU, ROUGE-W and METEOR The most commonly used automatic evaluation metrics, BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002), are based on the assumption that The closer a machine translation is to a professional human translation, the better it is (Papineni et al. , 2002). (Figure 1: Alignment Example for ROUGE-W)"
P06-2070,P02-1040,p,"BLEU and NIST have been shown to correlate closely with human judgments in ranking MT systems with different qualities (Papineni et al. , 2002; Doddington, 2002)."
P06-2101,P02-1040,n,"The ongoing evaluation literature is perhaps most obvious in the machine translation community's efforts to better BLEU (Papineni et al. , 2002)."
P06-2103,P02-1040,p,"One of the most successful metrics for judging machine-generated text is BLEU (Papineni et al. , 2002)."
P06-2109,P02-1040,o,"4 Experiment 4.1 Evaluation Method We evaluated each sentence compression method using word F-measures, bigram F-measures, and BLEU scores (Papineni et al. , 2002)."
P06-2124,P02-1040,o,"For word alignment accuracy, F-measure is reported, i.e., the harmonic mean of precision and recall against a gold-standard reference set; for translation quality, Bleu (Papineni et al. , 2002) and its variation of NIST scores are reported."
P07-1001,P02-1040,o,"We measure translation performance by the BLEU score (Papineni et al. , 2002) and Translation Error Rate (TER) (Snover et al. , 2006) with one reference for each hypothesis."
P07-1004,P02-1040,o,"Evaluation Metrics We evaluated the generated translations using three different evaluation metrics: BLEU score (Papineni et al. , 2002), mWER (multi-reference word error rate), and mPER (multi-reference position-independent word error rate) (Niessen et al. , 2000)."
P07-1005,P02-1040,o,"Following (Chiang, 2005), we used the version 11a NIST BLEU script with its default settings to calculate the BLEU scores (Papineni et al. , 2002) based on case-insensitive n-gram matching, where n is up to 4."
P07-1038,P02-1040,p,"The well-known BLEU (Papineni et al. , 2002) is based on the number of common n-grams between the translation hypothesis and human reference translations of the same sentence."
P07-1038,P02-1040,o,"Reference-based metrics such as BLEU (Papineni et al. , 2002) have rephrased this subjective task as a somewhat more objective question: how closely does the translation resemble sentences that are known to be good translations for the same source?"
P07-1039,P02-1040,o,"The quality of the translation output is evaluated using BLEU (Papineni et al. , 2002)."
P07-1040,P02-1040,p,"2 Evaluation Metrics Currently, the most widely used automatic MT evaluation metric is the NIST BLEU-4 (Papineni et al. , 2002)."
P07-1044,P02-1040,o,"BLEU (Papineni et al. , 2002) is a canonical example: in matching n-grams in a candidate translation text with those in a reference text, the metric measures faithfulness by counting the matches, and fluency by implicitly using the reference n-grams as a language model."
P07-1066,P02-1040,o,"During evaluation two performance metrics, BLEU (Papineni et al. , 2002) and NIST, were computed."
P07-1089,P02-1040,o,"Our evaluation metric is BLEU-4 (Papineni et al. , 2002), as calculated by the script mteval-v11b.pl with its default setting except that we used case-sensitive matching of n-grams."
P07-1091,P02-1040,o,"(Case-sensitive) BLEU-4 (Papineni et al. , 2002) is used as the evaluation metric."
P07-1092,P02-1040,o,"The parameters, j, were trained using minimum error rate training (Och, 2003) to maximise the BLEU score (Papineni et al. , 2002) on a 150 sentence development set."
P07-1108,P02-1040,o,"Using BLEU (Papineni et al. , 2002) as a metric, our method achieves an absolute improvement of 0.06 (22.13% relative) as compared with the standard model trained with 5,000 L f -L e sentence pairs for French-Spanish translation."
P07-1108,P02-1040,p,"The translation quality was evaluated using a well-established automatic measure: BLEU score (Papineni et al. , 2002)."
P07-1111,P02-1040,o,"Since the introduction of BLEU (Papineni et al. , 2002) the basic n-gram precision idea has been augmented in a number of ways."
P07-2026,P02-1040,o,"3.3 BLEU Score The BLEU score (Papineni et al. , 2002) measures the agreement between a hypothesis eI1 generated by the MT system and a reference translation eI1."
P07-2045,P02-1040,o,It also contains tools for tuning these models using minimum error rate training (Och 2003) and evaluating the resulting translations using the BLEU score (Papineni et al. 2002).
P08-1007,P02-1040,o,"2.1 BLEU BLEU (Papineni et al., 2002) is essentially a precision-based metric and is currently the standard metric for automatic evaluation of MT performance."
P08-1007,P02-1040,p,"Among all the automatic MT evaluation metrics, BLEU (Papineni et al., 2002) is the most widely used." P08-1009,P02-1040,o,"4.2 Automatic Evaluation We first present our soft cohesion constraints effect on BLEU score (Papineni et al., 2002) for both our dev-test and test sets." P08-1010,P02-1040,o,"We measure translation performance by the BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) scores with multiple translation references." P08-1010,P02-1040,o,"Other possibilities for the weighting include assigning constant one or the exponential of the final score etc. One of the advantages of the proposed phrase training algorithm is that it is a parameterized procedure that can be optimized jointly with the trans82 lation engine to minimize the final translation errors measured by automatic metrics such as BLEU (Papineni et al., 2002)." P08-1011,P02-1040,o,"In addition to precision and recall, we also evaluate the Bleu score (Papineni et al., 2002) changes before and after applying our measure word generation method to the SMT output." P08-1022,P02-1040,o,"Moreover, the overall BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores, as well as numbers of exact string matches (as measured against to the original sentences in the CCGbank) are higher for the hypertagger-seeded realizer than for the preexisting realizer." P08-1064,P02-1040,o,"The evaluation metric is case-sensitive BLEU-4 (Papineni et al., 2002)." P08-1071,P02-1040,o,"For example, in machine translation, BLEU score (Papineni et al., 2002) is developed to assess the quality of machine translated sentences." P08-1086,P02-1040,o,"Instead we report BLEU scores (Papineni et al., 2002) of the machine translation system using different combinations of wordand classbased models for translation tasks from English to Arabic and Arabic to English." 
P08-1112,P02-1040,o,"Unfortunately, as was shown by Fraser and Marcu (2007) AER can have weak correlation with translation performance as measured by BLEU score (Papineni et al., 2002), when the alignments are used to train a phrase-based translation system." P08-2015,P02-1040,o,"We use the standard NIST MTEval data sets for the years 2003, 2004 and 2005 (henceforth MT03, MT04 and MT05, respectively).6 We report results in terms of case-insensitive 4gram BLEU (Papineni et al., 2002) scores." P08-2020,P02-1040,o,"1.2 Evaluation In this paper we report results using the BLEU metric (Papineni et al., 2002), however as the evaluation criterion in GALE is HTER (Snover et al., 2006), we also report in TER (Snover et al., 2005)." P08-2040,P02-1040,o,"Our evaluation metric is BLEU (Papineni et al., 2002)." P09-1018,P02-1040,o,"In this paper, we modify the method in Albrecht and Hwa (2007) to only prepare human reference translations for the training examples, and then evaluate the translations produced by the subject systems against the references using BLEU score (Papineni et al., 2002)." P09-1020,P02-1040,o,"Our evaluation metrics is casesensitive BLEU-4 (Papineni et al., 2002)." P09-1021,P02-1040,o,"Afterwards, we select and remove a subset of highly informative sentences from U, and add those sentences together with their human-provided translations to L. This process is continued iteratively until a certain level of translation quality is met (we use the BLEU score, WER and PER) (Papineni et al., 2002)." P09-1034,P02-1040,p,"Since human evaluation is costly and difficult to do reliably, a major focus of research has been on automatic measures of MT quality, pioneered by BLEU (Papineni et al., 2002) and NIST (Doddington, 2002)." P09-1036,P02-1040,o,"Our experimental results display that our SDB model achieves a substantial improvement over the baseline and significantly outperforms XP+ according to the BLEU metric (Papineni et al., 2002)." 
P09-1064,P02-1040,o,"1 Introduction In statistical machine translation, output translations are evaluated by their similarity to human reference translations, where similarity is most often measured by BLEU (Papineni et al., 2002)." P09-1065,P02-1040,o,"We evaluated the translation quality using case-insensitive BLEU metric (Papineni et al., 2002)." P09-1089,P02-1040,o,"For example, (Kauchak and Barzilay, 2006) paraphrase references to make them closer to the system translation in order to obtain more reliable results when using automatic evaluation metrics like BLEU (Papineni et al., 2002)." P09-1092,P02-1040,o,"We evaluate the string chosen by the log-linear model against the original treebank string in terms of exact match and BLEU score (Papineni et al., 2002)."
P09-1093,P02-1040,o,"For MCE learning, we selected the reference compression that maximize the BLEU score (Papineni et al., 2002) (=argmax rR BLEU(r, R\r)) from the set of reference compressions and used it as correct data for training." P09-1093,P02-1040,o,"For automatic evaluation, we employed BLEU (Papineni et al., 2002) by following (Unno et al., 2006)."
P09-1094,P02-1040,o,"Methods have been proposed for automatic evaluation in MT (e.g., BLEU (Papineni et al., 2002))." P09-1099,P02-1040,o,"Automated evaluation metrics that rate system behaviour based on automatically computable properties have been developed in a number of other fields: widely used measures include BLEU (Papineni et al., 2002) for machine translation and ROUGE (Lin, 2004) for summarisation, for example." P09-1103,P02-1040,o,"The evaluation metric is casesensitive BLEU-4 (Papineni et al., 2002)." P09-1106,P02-1040,o,"4.3 Experiments results Our evaluation metric is BLEU (Papineni et al., 2002), which are to perform case-insensitive matching of n-grams up to n = 4." P09-2034,P02-1040,p,"It could be shown that such methods, of which BLEU (Papineni et al., 2002) is the most common, can deliver evaluation results that show a high agreement with human judgments (Papineni et al., 2002; Coughlin, 2003; Koehn & Monz, 2006)." P09-2034,P02-1040,n,"By doing so we must emphasize that, as described in the previous section, the BLEU score was not designed to deliver satisfactory results at the sentence level (Papineni et al., 2002), and this also applies to the closely related NIST score." P09-2035,P02-1040,o,"We evaluate our results with case-sensitive BLEU-4 metric (Papineni et al., 2002)." P09-2058,P02-1040,o,"We tune all feature weights automatically (Och, 2003) to maximize the BLEU (Papineni et al., 2002) score on the dev set." P09-3004,P02-1040,o,"The measures are: word overlap, length difference (in words), BLEU (Papineni et al., 2002), dependency relation overlap (i.e., R1 and R2 but not FR1,R2), and dependency tree edit distance." W02-1022,P02-1040,o,"While recent proposals for evaluation of MT systems have involved multi-parallel corpora (Thompson and Brew, 1996; Papineni et al. , 2002), statistical MT algorithms typically only use one-parallel data." 
W03-0501,P02-1040,o,"5.2 Bleu: Automatic Evaluation BLEU (Papineni et al, 2002) is a system for automatic evaluation of machine translation." W03-1001,P02-1040,o,"These blocks are used to compute the results in the fourth column: the BLEU score (Papineni et al. , 2002) with a153 reference translation using a153 -grams along with 95% confidence interval is reported 4." W03-1612,P02-1040,o,"BLEU (Papineni et al. , 2002b) is one of the methods for automatic evaluation of translation quality." W03-1612,P02-1040,p,"High correlation is reported between the BLEU score and human evaluations for translations from Arabic, Chinese, French, and Spanish to English (Papineni et al. , 2002a)." W03-1612,P02-1040,o,"2 Background: Overview of BLEU This section briefly describes the original BLEU (Papineni et al. , 2002b)1, which was designed for English translation evaluation, so English sentences are used as examples in this section." W03-1612,P02-1040,p,"Empirically the BLEU score has a high correlation with human evaluation when N = 4 for English translation evaluations (Papineni et al. , 2002b)." W03-2804,P02-1040,o,"BLEU Score: BLEU is an automatic metric designed by IBM, which uses several references (Papineni et al., 2002)." W04-1014,P02-1040,o,"To evaluate sentence automatically generated with taking consideration word concatenation into by using references varied among humans, various metrics using n-gram precision and word accuracy have been proposed: word string precision (Hori and Furui, 2000b) for summarization through word extraction, ROUGE (Lin and Hovy, 2003) for abstracts, and BLEU (Papineni et al. , 2002) for machine translation." W04-1016,P02-1040,o,"Work in this area includes that of Lin and Hovy (2003) and Pastra and Saggion (2003), both of whom inspect the use of Bleu-like metrics (Papineni et al. , 2002) in summarization." 
W04-1708,P02-1040,o,"The core technology of the proposed method, i.e., the automatic evaluation of translations, was developed in research aiming at the efficient development of Machine Translation (MT) technology (Su et al. , 1992; Papineni et al. , 2002; NIST, 2002)." W04-1708,P02-1040,o,"The unit of utterance corresponds to the unit of segment in the original BLEU and NIST studies (Papineni et al. , 2002; NIST, 2002)." W04-2203,P02-1040,o,"3.1 Golden-standard-based criteria In the domain of machine translation systems, an increasingly accepted way to measure the quality of a system is to compare the outputs it produces with a set of reference translations, considered as an approximation of a golden standard (Papineni et al. , 2002; Hovy et al. , 2002)." W05-0712,P02-1040,o,"The BLEU score (Papineni et al. , 2002) with a single reference translation was deployed for evaluation." W05-0806,P02-1040,o,"4.2 Translation Results The evaluation metrics used in our experiments are WER (Word Error Rate), PER (Positionindependent word Error Rate) and BLEU (BiLingual Evaluation Understudy) (Papineni et al. , 2002)." W05-0820,P02-1040,o,"Translation performance was measured using the BLEU score (Papineni et al. , 2002), which measures n-gram overlap with a reference translation." W05-0822,P02-1040,o,"Once this is accomplished, a variant of Powells algorithm is used to find weights that optimize BLEU score (Papineni et al, 2002) over these hypotheses, compared to reference translations." W05-0823,P02-1040,o,"This algorithm adjusts the log-linear weights so that BLEU (Papineni et al. , 2002) is maximized over a given development set." W05-0828,P02-1040,o,"3.2 Results and Discussion The BLEU scores (Papineni et al. , 2002) for 10 direct translations and 4 sets of heuristic selections 4Admittedly, in typical instances of such chains, English would appear earlier."
W05-0831,P02-1040,o,"5.2 Evaluation Criteria For the automatic evaluation, we used the criteria from the IWSLT evaluation campaign (Akiba et al. , 2004), namely word error rate (WER), positionindependent word error rate (PER), and the BLEU and NIST scores (Papineni et al. , 2002; Doddington, 2002)." W05-0833,P02-1040,o,"We provide results using a range of automatic evaluation metrics: BLEU (Papineni et al. , 2002), Precision and Recall (Turian et al. , 2003), and Wordand Sentence Error Rates." W05-0833,P02-1040,o,"In order to create the necessary SMT language and translation models, they used: Giza++ (Och & Ney, 2003);2 the CMU-Cambridge statistical toolkit;3 the ISI ReWrite Decoder.4 Translation was performed from EnglishFrench and FrenchEnglish, and the resulting translations were evaluated using a range of automatic metrics: BLEU (Papineni et al. , 2002), Precision and Recall 2http://www.isi.edu/och/Giza++.html 3http://mi.eng.cam.ac.uk/prc14/toolkit.html 4http://www.isi.edu/licensed-sw/rewrite-decoder/ 185 (Turian et al. , 2003), and Wordand Sentence Error Rates." W05-0836,P02-1040,o,"5.3 Evaluation Metric This paper focuses on the BLEU metric as presented in (Papineni et al. , 2002)." W05-0836,P02-1040,o," The piecewise linearity observation made in (Papineni et al. , 2002) is no longer applicable since we cannot move the log operation into the expected value." W05-0904,P02-1040,p,"BLEU and NIST have been shown to correlate closely with human judgments in ranking MT systems with different qualities (Papineni et al. , 2002; Doddington, 2002)." W05-0904,P02-1040,o,"The most commonly used automatic evaluation metrics, BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002), are based on the assumption that The closer a machine translation is to a professional human translation, the better it is (Papineni et al. , 2002)." 
W05-0906,P02-1040,o,"This idea of employing n-gram co-occurrence statistics to score the output of a computer system against one or more desired reference outputs has its roots in the BLEU metric for machine translation (Papineni et al. , 2002) and the ROUGE (Lin and Hovy, 2003) metric for summarization." W05-0909,P02-1040,p,"1 Introduction Automatic Metrics for machine translation (MT) evaluation have been receiving significant attention in the past two years, since IBM's BLEU metric was proposed and made available (Papineni et al 2002)." W05-0909,P02-1040,o,"2 The METEOR Metric 2.1 Weaknesses in BLEU Addressed in METEOR The main principle behind IBMs BLEU metric (Papineni et al, 2002) is the measurement of the 66 overlap in unigrams (single words) and higher order n-grams of words, between a translation being evaluated and a set of one or more reference translations." W05-1203,P02-1040,o,"Text similarity has been also used for relevance feedback and text classification (Rocchio, 1971), word sense disambiguation (Lesk, 1986), and more recently for extractive summarization (Salton et al. , 1997b), and methods for automatic evaluation of machine translation (Papineni et al. , 2002) or text summarization (Lin and Hovy, 2003)." W05-1204,P02-1040,o,"Consequently, here we employ multiple references to evaluate MT systems like BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002)." W05-1510,P02-1040,o,"The accuracy of the generator outputs was evaluated by the BLEU score (Papineni et al. , 2001), which is commonly used for the evaluation of machine translation and recently used for the evaluation of generation (Langkilde-Geary, 2002; Velldal and Oepen, 2005)." W06-1112,P02-1040,n,"They are a bit controversial in a proper machine translation, where the popular BLEU score (Papineni et al. 
, 2002), although widely accepted as a measure of translation accuracy, seems to favor stochastic approaches based on 91 an n-gram model over other MT methods (see the results in (Nist, 2001))." W06-1608,P02-1040,o,"3.2 Translation quality Table 2 presents the impact of parse quality on a treelet translation system, measured using BLEU (Papineni et al. , 2002)." W06-3101,P02-1040,p,"The most widely used are Word Error Rate (WER), Position Independent Word Error Rate (PER), the BLEU score (Papineni et al. , 2002) and the NIST score (Doddington, 2002)." W06-3101,P02-1040,o,"2 Related Work There is a number of publications dealing with various automatic evaluation measures for machine translation output, some of them proposing new measures, some proposing improvements and extensions of the existing ones (Doddington, 2002; Papineni et al. , 2002; Babych and Hartley, 2004; Matusov et al. , 2005)." W06-3102,P02-1040,o,"Although the BLEU (Papineni et al. , 2002) score from Finnish to English is 21.8, the score in the reverse direction is reported as 13.0 which is one of the lowest scores in 11 European languages scores (Koehn, 2005)." W06-3103,P02-1040,o,"5.2 Evaluation Metrics The commonly used criteria to evaluate the translation results in the machine translation community are: WER (word error rate), PER (positionindependent word error rate), BLEU (Papineni et al. , 2002), and NIST (Doddington, 2002)." W06-3106,P02-1040,o,"Two error rates: the sentence error rate (SER) and the word error rate (WER) that we seek to minimize, and BLEU (Papineni et al. , 2002), that we seek to maximize." W06-3108,P02-1040,o,"5.3 Translation Results For the translation experiments on the BTEC task, we report the two accuracy measures BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002) as well as the two error rates: word error rate (WER) and position-independent word error rate (PER)." W06-3110,P02-1040,o,"To measure the translation quality, we use the BLEU score (Papineni et al. 
, 2002) and the NIST score (Doddington, 2002)." W06-3111,P02-1040,o,"(Papineni et al. , 2002)." W06-3112,P02-1040,n,"Even the creators of BLEU point out that it may not correlate particularly well with human judgment at the sentence level (Papineni et al. , 2002), a problem also noted by (Och et al. , 2003) and (Russo-Lassner et al. , 2005)." W06-3112,P02-1040,o,"1 Introduction Since their appearance, BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002) have been the standard tools used for evaluating the quality of machine translation." W06-3121,P02-1040,o,"The release has implementations for BLEU (Papineni et al. , 2002), WER and PER error criteria and it has decoding interfaces for Phramer and Pharaoh." W06-3122,P02-1040,o,"Although Phramer provides decoding functionality equivalent to Pharaohs, we preferred to use Pharaoh for this task because it is much faster than Phramer between 2 and 15 times faster, depending on the configuration and preliminary tests showed that there is no noticeable difference between the output of these two in terms of BLEU (Papineni et al. , 2002) score." W06-3508,P02-1040,o,"What, therefore, has to be explored are various similarity metrics, defining similarity in a concrete way and evaluate the results against human annotations (see Papineni et al. , 2002)." W07-0401,P02-1040,o,"6.2 Translation Results For the translation experiments, we report the two accuracy measures BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002) as well as the two error rates word error rate (WER) and positionindependent word error rate (PER)." W07-0403,P02-1040,o,"Results on the provided 2000sentence development set are reported using the BLEU metric (Papineni et al. , 2002)." W07-0409,P02-1040,o,"2.2 Weight optimization A common criterion to optimize the coefficients of the log-linear combination of feature functions is to maximize the BLEU score (Papineni et al. , 2002) on a development set (Och and Ney, 2002)." 
W07-0410,P02-1040,o,"On the other hand, both BLEU (Papineni et al. , 2002) and NIST (Doddington 2002) scores are higher for the baseline system (mteval-v11b.pl)." W07-0411,P02-1040,n,"Even the creators of BLEU point out that it may not correlate particularly well with human judgment at the sentence level (Papineni et al. , 2002)." W07-0411,P02-1040,o,"1 Introduction Since their appearance, string-based evaluation metrics such as BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002) have been the standard tools used for evaluating MT quality." W07-0701,P02-1040,o,"6 Experiments We evaluated the translation quality of the system using the BLEU metric (Papineni et al. , 2002)." W07-0703,P02-1040,o,"BLEU (Papineni et al, 2002) was devised to provide automatic evaluation of MT output." W07-0704,P02-1040,o,"We employ the phrase-based SMT framework (Koehn et al. , 2003), and use the Moses toolkit (Koehn et al. , 2007), and the SRILM language modelling toolkit (Stolcke, 2002), and evaluate our decoded translations using the BLEU measure (Papineni et al. , 2002), using a single reference translation." W07-0707,P02-1040,p,"The most widely used are Word Error Rate (WER), Position independent word Error Rate (PER), the BLEU score (Papineni et al. , 2002) and the NIST score (Doddington, 2002)." W07-0707,P02-1040,p,"The BLEU metric (Papineni et al. , 2002) and the closely related NIST metric (Doddington, 2002) along with WER and PER 48 have been widely used by many machine translation researchers." W07-0710,P02-1040,o,"2.2.1 BLEU Evaluation The BLEU score (Papineni et al. , 2002) was defined to measure overlap between a hypothesized translation and a set of human references." W07-0710,P02-1040,p,"1 Introduction In recent years, statistical machine translation have experienced a quantum leap in quality thanks to automatic evaluation (Papineni et al. , 2002) and errorbased optimization (Och, 2003)." 
W07-0711,P02-1040,o,"84 5.2 Machine translation on Europarl corpus We further tested our WDHMM on a phrase-based machine translation system to see whether our improvement on word alignment can also improve MT accuracy measured by BLEU score (Papineni et al. , 2002)." W07-0713,P02-1040,p,"The most widely known are the Word Error Rate (WER), the Position independent word Error Rate (PER), the NIST score (Doddington, 2002) and, especially in recent years, the BLEU score (Papineni et al. , 2002) and the Translation Error Rate (TER) (Snover et al. , 2005)." W07-0714,P02-1040,n,"Even the 3 A demo of the parser can be found at http://lfgdemo.computing.dcu.ie/lfgparser.html creators of BLEU point out that it may not correlate particularly well with human judgment at the sentence level (Papineni et al. , 2002)." W07-0714,P02-1040,o,"1 Introduction Since the creation of BLEU (Papineni et al. , 2002) and NIST (Doddington, 2002), the subject of automatic evaluation metrics for MT has been given quite a lot of attention." W07-0715,P02-1040,o,"Since this trade-off is also affected by the settings of various pruning parameters, we compared decoding time and translation quality, as measured by BLEU score (Papineni et al, 2002), for the two models on our first test set over a broad range of settings for the decoder pruning parameters." W07-0716,P02-1040,o,"Och showed thatsystemperformanceisbestwhenparametersare optimizedusingthesameobjectivefunctionthatwill be used for evaluation; BLEU (Papineni et al. , 2002) remains common for both purposes and is often retained for parameter optimization even when alternative evaluation measures are used, e.g., (Banerjee and Lavie, 2005; Snover et al. , 2006)." W07-0729,P02-1040,o,"Translation scores are reported using caseinsensitive BLEU (Papineni et al. , 2002) with a single reference translation." W07-0734,P02-1040,o,"The most commonly used MT evaluation metric in recent years has been IBM?s Bleu metric (Papineni et al. , 2002)." 
W07-0735,P02-1040,o,"To further emphasize the importance of morphology in MT to Czech, we compare the standard BLEU (Papineni et al. , 2002) of a baseline phrasebased translation with BLEU which disregards word forms (lemmatized MT output is compared to lemmatized reference translation)." W07-0737,P02-1040,o,"We further assume that the degree of difficulty of a phrase is directly correlated with the quality of the translation produced by the MT system, which can be approximated using an automatic evaluation metric, such as BLEU (Papineni et al. , 2002)." W08-0301,P02-1040,o,"(Case-insensitive) BLEU-4 (Papineni et al., 2002) is used as the evaluation metric." W08-0302,P02-1040,o,"Evaluation We evaluate translation output using three automatic evaluation measures: BLEU (Papineni et al., 2002), NIST (Doddington, 2002), and METEOR (Banerjee and Lavie, 2005, version 0.6).5 All measures used were the case-sensitive, corpuslevel versions." W08-0302,P02-1040,o,"The weights 1,,M are typically learned to directly minimize a standard evaluation criterion on development data (e.g., the BLEU score; Papineni et al., (2002)) using numerical search (Och, 2003)." W08-0304,P02-1040,o,"This approach attempts to improve translation quality by optimizing an automatic translation evaluation metric, such as the BLEU score (Papineni et al., 2002)." W08-0306,P02-1040,o,"BLEU For all translation tasks, we report caseinsensitive NIST BLEU scores (Papineni et al., 2002) using 4 references per sentence." W08-0307,P02-1040,o,"5http://opennlp.sourceforge.net/ We use the standard four-reference NIST MTEval data sets for the years 2003, 2004 and 2005 (henceforth MT03, MT04 and MT05, respectively) for testing and the 2002 data set for tuning.6 BLEU4 (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005) and multiple-reference Word Error Rate scores are reported." 
W08-0308,P02-1040,o,"TheChinesesentencefromtheselected pair is used as the single reference to tune and evaluate the MT system with word-based BLEU-4 (Papineni et al., 2002)." W08-0309,P02-1040,o,"The automatic metrics that were evaluated in this years shared task were the following: Bleu (Papineni et al., 2002)Bleu remains the de facto standard in machine translation evaluation." W08-0312,P02-1040,p,"3 Extending Bleu and Ter with Flexible Matching Many widely used metrics like Bleu (Papineni et al., 2002) and Ter (Snover et al., 2006) are based on measuring string level similarity between the reference translation and translation hypothesis, just like Meteor . Most of them, however, depend on finding exact matches between the words in two strings." W08-0312,P02-1040,o,"The most commonly used MT evaluation metric in recent years has been IBMs Bleu metric (Papineni et al., 2002)." W08-0317,P02-1040,o,"De-En En-De Baseline 26.95 20.16 Factored baseline 27.43 20.27 Submitted system 27.63 20.46 Table 1: Bleu scores for Europarl (test2007) De-En En-De Baseline 19.54 14.31 Factored baseline 20.16 14.37 Submitted system 20.61 14.77 Table 2: Bleu scores for News Commentary (nc-test2007) 5 Results Case-sensitive Bleu scores4 (Papineni et al., 2002) for the Europarl devtest set (test2007) are shown in table 1." W08-0320,P02-1040,o,"We used these weights in a beam search decoder to produce translations for the test sentences, which we compared to the WMT07 gold standard using Bleu (Papineni et al., 2002)." W08-0321,P02-1040,o,"Table 2 shows results in lowercase BLEU (Papineni et al., 2002) for both the baseline (B) and the improved baseline systems (B5) on development and held151 out evaluation sets." W08-0322,P02-1040,o,"The results evaluated by BLEU score (Papineni et al., 2002) is shown in Table 2." 
W08-0324,P02-1040,o,"3 Evaluation We trained our model parameters on a subset of the provided dev2006 development set, optimizing for case-insensitive IBM-style BLEU (Papineni et al., 2002) with several iterations of minimum error rate training on n-best lists." W08-0324,P02-1040,o,"We report case-insensitive scores for version 0.6 of METEOR (Lavie and Agarwal, 2007) with all modules enabled, version 1.04 of IBM-style BLEU (Papineni et al., 2002), and version 5 of TER (Snover et al., 2006)." W08-0328,P02-1040,o,"Table 1 shows the evaluation of all the systems in terms of BLEU score (Papineni et al., 2002) with the best score highlighted." W08-0329,P02-1040,o,"The translation quality is measured by three MT evaluation metrics: TER (Snover et al., 2006), BLEU (Papineni et al., 2002), and METEOR (Lavie and Agarwal, 2007)." W08-0401,P02-1040,o,"4 5 Experiments 5.1 Evaluation Measures We evaluated the proposed method using four evaluation measures, BLEU (Papineni et al., 2002), NIST (Doddington 2002), WER(word error rate), and PER(position independent word error rate)." W08-0402,P02-1040,o,"As shown in Table 1, the JAVA decoder (without explicit parallelization) is 22 times faster than the PYTHON decoder, while achieving slightly better translation quality as measured by BLEU-4 (Papineni et al., 2002)." W08-0405,P02-1040,o,"Translation results are given in terms of the automatic BLEU evaluation metric (Papineni et al., 2002) as well as the TER metric (Snover et al., 2006)." W08-0409,P02-1040,o,"The translation output is measured using BLEU (Papineni et al., 2002)." W08-0509,P02-1040,o,"To compare the performance of system, we recorded the total training time and the BLEU score, which is a standard automatic measurement of the translation quality(Papineni et al., 2002)."
W08-0903,P02-1040,o,"A summary of the differences between our proposed approach and that of (Papineni et al., 2002) would include: The reliance of BLEU on the diversity of multiple reference translations in order to capture some of the acceptable alternatives in both word choice and word ordering that we have shown above." W08-0903,P02-1040,o,"Techniques that analyze n-gram precision such as BLEU score (Papineni et al., 2002) have been developed with the goal of comparing candidate translations against references provided by human experts in order to determine accuracy; although in our application the candidate translator is a student and not a machine, the principle is the same, and we wish to adapt their technique to our context." W08-1112,P02-1040,o,"Following (Langkilde, 2002) and other work on general-purpose generators, we adopt BLEU score (Papineni et al., 2002), average simple string accuracy (SSA) and percentage of exactly matched sentences for accuracy evaluation.6 For coverage evaluation, we measure the percentage of input fstructures that generate a sentence." W08-1113,P02-1040,o,"Such metrics have been introduced in other fields, including PARADISE (Walker et al., 1997) for spoken dialogue systems, BLEU (Papineni et al., 2002) for machine translation,1 and ROUGE (Lin, 2004) for summarisation." W08-2118,P02-1040,o,"To optimize the parameters of the decoder, we performed minimum error rate training on IWSLT04 optimizing for the IBM-BLEU metric (Papineni et al., 2002)." W09-0401,P02-1040,o,"In this years shared task we evaluated a number of different automatic metrics: Bleu (Papineni et al., 2002)Bleu remains the de facto standard in machine translation evaluation." 
W09-0402,P02-1040,o,"2 Syntactic-oriented evaluation metrics We investigated the following metrics oriented on the syntactic structure of a translation output: POSBLEU The standard BLEU score (Papineni et al., 2002) calculated on POS tags instead of words; POSP POS n-gram precision: percentage of POS ngrams in the hypothesis which have a counterpart in the reference; POSR Recall measure based on POS n-grams: percentage of POS n-grams in the reference which are also present in the hypothesis; POSF POS n-gram based F-measure: takes into account all POS n-grams which have a counterpart, both in the reference and in the hypothesis." W09-0403,P02-1040,p,"After a brief period following the introduction of generally accepted and widely used metrics, BLEU (Papineni et al., 2002) and NIST (Doddington, 2002), when it seemed that this persistent problem has finally been solved, the researchers active in the field of machine translation (MT) started to express their worries that although these metrics are simple, fast and able to provide consistent results for a particular system during its development, they are not sufficiently reliable for the comparison of different systems or different language pairs." W09-0404,P02-1040,o,"We combine different parametrization of (smoothed) BLEU (Papineni et al., 2002), NIST (Doddington, 2002), and TER (Snover et al., 2006), to give a total of roughly 100 features." W09-0404,P02-1040,o,"BLEU (Papineni et al., 2002), NIST (Doddington, 2002)." W09-0407,P02-1040,o,"For the WMT 2009 Workshop, we selected a linear combination of BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) as optimization criterion, := argmax{(2BLEU)TER}, based on previous experience (Mauser et al., 2008)." W09-0408,P02-1040,o,"Of these, only feature weights can be trained, for which we used minimum error rate training with version 1.04 of IBM-style BLEU (Papineni et al., 2002) in case-insensitive mode."
W09-0408,P02-1040,o,"We scored systems and our own output using case-insensitive IBM-style BLEU 1.04 (Papineni et al., 2002), METEOR 0.6 (Lavie and Agarwal, 2007) with all modules, and TER 5 (Snover et al., 2006)."
W09-0409,P02-1040,o,"The system combination weights (one for each system), LM weight, and word and NULL insertion penalties were tuned to maximize the BLEU (Papineni et al., 2002) score on the tuning set (newssyscomb2009)."
W09-0412,P02-1040,o,"We set all feature weights by optimizing Bleu (Papineni et al., 2002) directly using minimum error rate training (MERT) (Och, 2003) on the tuning part of the development set (dev-test2009a)."
W09-0416,P02-1040,o,"Instead of using a single system output as the skeleton, we employ a minimum Bayes-risk decoder to select the best single system output from the merged N-best list by minimizing the BLEU (Papineni et al., 2002) loss."
W09-0418,P02-1040,o,"In this paper, translation quality is evaluated according to (1) the BLEU metrics which calculates the geometric mean of n-gram precision by the system output with respect to reference translations (Papineni et al., 2002), and (2) the METEOR metrics that calculates unigram overlaps between translations (Banerjee and Lavie, 2005)."
W09-0421,P02-1040,o,"In this paper we report case-insensitive Bleu scores (Papineni et al., 2002), unless otherwise stated, calculated with the NIST tool, and case-insensitive Meteor-ranking scores, without WordNet (Agarwal and Lavie, 2008)."
W09-0425,P02-1040,o,"For each, we give case-insensitive scores on version 0.6 of METEOR (Lavie and Agarwal, 2007) with all modules enabled, version 1.04 of IBM-style BLEU (Papineni et al., 2002), and version 5 of TER (Snover et al., 2006)."
W09-0426,P02-1040,o,"2.2 Automatic evaluation metric Since the official evaluation criterion for WMT09 is human sentence ranking, we chose to minimize a linear combination of two common evaluation metrics, BLEU and TER (Papineni et al., 2002; Snover et al., 2006), during system development and tuning: (TER - BLEU)/2. Although we are not aware of any work demonstrating that this combination of metrics correlates better than either individually in sentence ranking, Yaser Al-Onaizan (personal communication) reports that it correlates well with the human evaluation metric HTER."
W09-0427,P02-1040,o,"The log-linear model feature weights were learned using minimum error rate training (MERT) (Och, 2003) with BLEU score (Papineni et al., 2002) as the objective function."
W09-0431,P02-1040,o,"As expected, as we double the size of the data, the BLEU score (Papineni et al., 2002) increases."
W09-0432,P02-1040,o,"5http://www.statmt.org/wmt08 the BLEU score (Papineni et al., 2002), and tested on test2008."
W09-0437,P02-1040,o,"1 Introduction Most empirical work in translation analyzes models and algorithms using BLEU (Papineni et al., 2002) and related metrics."
W09-0441,P02-1040,o,"1 Introduction Since the introduction of the BLEU metric (Papineni et al., 2002), statistical MT systems have moved away from human evaluation of their performance and towards rapid evaluation using automatic metrics."
W09-0441,P02-1040,o,"We compare TERp with BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and TER (Snover et al., 2006)."
W09-1114,P02-1040,o,"Translation quality is reported using case-insensitive BLEU (Papineni et al., 2002)."
W09-2301,P02-1040,o,"We report case-insensitive scores on version 0.6 of METEOR (Lavie and Agarwal, 2007) with all modules enabled, version 1.04 of IBM-style BLEU (Papineni et al., 2002), and version 5 of TER (Snover et al., 2006)."
W09-2309,P02-1040,o,"To tune the decoder parameters, we conducted minimum error rate training (Och, 2003) with respect to the word BLEU score (Papineni et al., 2002) using 2.0K development sentence pairs."
W09-2310,P02-1040,o,"The highest BLEU score (Papineni et al., 2002) was chosen as the optimization criterion."
W09-2404,P02-1040,o,"5.2 Impact on translation quality As reported in Table 3, small increases in METEOR (Banerjee and Lavie, 2005), BLEU (Papineni et al., 2002) and NIST scores (Doddington, 2002) suggest that SMT output matches the references better after postprocessing or decoding with the suggested lemma translations."
D07-1009,P02-1047,o,"4 Features Features used in our experiments are inspired by previous work on corpus-based approaches for discourse analysis (Marcu and Echihabi, 2002; Lapata, 2003; Elsner et al. , 2007)."
D08-1103,P02-1047,o,"Antonyms often indicate the discourse relation of contrast (Marcu and Echihabi, 2002)."
D08-1103,P02-1047,o,"As Marcu and Echihabi (2002) point out, WordNet does not encode antonymy across part-of-speech (for example, legally-embargo)."
D09-1036,P02-1047,o,Marcu and Echihabi (2002) demonstrated that word pairs extracted from the respective text spans are a good signal of the discourse relation between arguments.
D09-1036,P02-1047,o,"Note that Row 3 of Table 3 corresponds to Marcu and Echihabi (2002)'s system which applies only word pair features."
D09-1036,P02-1047,o,2 Related Work One of the first works that use statistical methods to detect implicit discourse relations is that of Marcu and Echihabi (2002).
I05-6007,P02-1047,o,"Syntactic criteria are relevant, but clearly not decisive, as can be observed in (Marcu and Echihabi, 2002)."
J03-4002,P02-1047,o,"In showing how DLTAG and an interpretative process on its derivations operate, we must, of necessity, gloss over how inference triggered by adjacency or associated with a structural connective provides the intended relation between adjacent discourse units: It may be a matter simply of statistical inference, as in Marcu and Echihabi (2002), or of more complex inference, as in Hobbs et al."
N04-1020,P02-1047,o,A similar approach has been advocated for the interpretation of discourse relations by Marcu and Echihabi (2002).
N04-1020,P02-1047,o,"Apart from the fact that we present an alternative model, our work differs from Marcu and Echihabi (2002) in two important ways."
N06-2034,P02-1047,o,(Marcu and Echihabi 2002) proposed a method to identify discourse relations between text segments using Naïve Bayes classifiers trained on a huge corpus.
N06-2034,P02-1047,o,"When we consider the frequency of discourse relations, i.e. 43% for ELABORATION, 32% for CONTRAST etc. , the weighted accuracy was 53% using only lexical information, which is comparable to the similar experiment by (Marcu and Echihabi 2002) of 49.7%."
N07-1038,P02-1047,o,The effectiveness of these features for recognition of discourse relations has been previously shown by Marcu and Echihabi (2002).
N07-1038,P02-1047,o,"Since this relation can often be determined automatically for a given text (Marcu and Echihabi, 2002), we can readily use it to improve rank prediction."
N07-1054,P02-1047,o,We draw on and extend the work of Marcu and Echihabi (2002).
N07-1054,P02-1047,o,"Marcu and Echihabi (2002) use a pattern-based approach in mining instances of RSRs such as Contrast and Elaboration from large, unannotated corpora."
N07-1054,P02-1047,o,"Marcu and Echihabi (2002) filter training instances based on Part-of-Speech (POS) tags, and Soricut and Marcu (2003) use syntactic features to identify sentence-internal RST structure."
N07-1054,P02-1047,o,"We adopt the approach of Marcu and Echihabi (2002), using a small set of patterns to build relation models, and extend their work by refining the training and classification process using parameter optimization, topic segmentation and syntactic parsing."
N07-1054,P02-1047,o,"3 The M&E Framework We model two RSRs, Cause and Contrast, adopting the definitions of Marcu and Echihabi (2002) (henceforth M&E) for their Cause-Explanation-Evidence and Contrast relations, respectively."
P04-1087,P02-1047,o,"4.1.1 Lexical co-occurrences Lexical co-occurrences have previously been shown to be useful for discourse level learning tasks (Lapata and Lascarides, 2004; Marcu and Echihabi, 2002)."
P04-1087,P02-1047,o,"As such, discourse markers play an important role in the parsing of natural language discourse (Forbes et al. , 2001; Marcu, 2000), and their correspondence with discourse relations can be exploited for the unsupervised learning of discourse relations (Marcu and Echihabi, 2002)."
P05-1019,P02-1047,o,"For such cases, unsupervised approaches have been developed for predicting relations, by using sentences containing discourse connectives as training data (Marcu and Echihabi, 2002; Lapata and Lascarides, 2004)."
P08-1118,P02-1047,n,"Presently, there exist methods for learning oppositional terms (Marcu and Echihabi, 2002) and paraphrase learning has been thoroughly studied, but successfully extending these techniques to learn incompatible phrases poses difficulties because of the data distribution."
P09-1076,P02-1047,o,"7 Automated Sense Labelling of Discourse Connectives The focus here is on automated sense labelling of discourse connectives (Elwell and Baldridge, 2008; Marcu and Echihabi, 2002; Pitler et al., 2009; Wellner and Pustejovsky, 2007; Wellner, 2008). [Figure 4: Distribution of Explicit Intra-Sentential Connectives; flattened table residue omitted.]"
P09-1076,P02-1047,o,"Section 7 considers recent efforts to induce effective procedures for automated sense labelling of discourse relations that are not lexically marked (Elwell and Baldridge, 2008; Marcu and Echihabi, 2002; Pitler et al., 2009; Wellner and Pustejovsky, 2007; Wellner, 2008)."
W06-3309,P02-1047,o,"Although this study falls under the general topic of discourse modeling, our work differs from previous attempts to characterize text in terms of domain-independent rhetorical elements (McKeown, 1985; Marcu and Echihabi, 2002)."
W06-3401,P02-1047,p,"A novel approach was described in (Marcu and Echihabi, 2002), which used an unsupervised training technique, extracting relations that were explicitly and unambiguously signalled and automatically labelling those examples as the training set."
C04-1071,P02-1053,n,"2 Previous work on Sentiment Analysis Some prior studies on sentiment analysis focused on the document-level classification of sentiment (Turney, 2002; Pang et al. , 2002) where a document is assumed to have only a single sentiment, thus these studies are not applicable to our goal."
C04-1121,P02-1053,o,"Much research is also being directed at acquiring affect lexica automatically (Turney 2002, Turney and Littman 2002)."
C04-1145,P02-1053,o,"SO can be used to classify reviews (e.g. , movie reviews) as positive or negative (Turney, 2002), and applied to subjectivity analysis such as recognizing hostile messages, classifying emails, mining reviews (Wiebe et al. , 2001)."
C04-1200,P02-1053,o,"Recent computational work either focuses on sentence subjectivity (Wiebe et al. 2002; Riloff et al. 2003), concentrates just on explicit statements of evaluation, such as of films (Turney 2002; Pang et al. 2002), or focuses on just one aspect of opinion, e.g., (Hatzivassiloglou and McKeown 1997) on adjectives."
C08-1024,P02-1053,o,"Another line of research closely related to our work is the recognition of semantic orientation and sentiment analysis (Turney, 2002; Takamura et al., 2006; Kaji and Kitsuregawa, 2006)."
C08-1031,P02-1053,o,"One major focus is sentiment classification and opinion mining (e.g., Pang et al 2002; Turney 2002; Hu and Liu 2004; Wilson et al 2004; Kim and Hovy 2004; Popescu and Etzioni 2005)."
C08-1031,P02-1053,n,"Point-wise mutual information (PMI) is commonly used for computing the association of two terms (e.g., Turney 2002), which is defined as: PMI(t1, t2) = log( Pr(t1, t2) / (Pr(t1) Pr(t2)) ). However, we argue that PMI is not a suitable measure for our purpose."
C08-1031,P02-1053,o,"Sentiment classification at the document level investigates ways to classify each evaluative document (e.g., product review) as positive or negative (Pang et al 2002; Turney 2002)."
C08-1052,P02-1053,o,"The acquisition of clues is a key technology in these research efforts, as seen in learning methods for document-level SA (Hatzivassiloglou and McKeown, 1997; Turney, 2002) and for phrase-level SA (Wilson et al., 2005; Kanayama and Nasukawa, 2006)."
C08-1103,P02-1053,o,"(2002), Turney (2002)), we are interested in fine-grained subjectivity analysis, which is concerned with subjectivity at the phrase or clause level."
C08-1111,P02-1053,o,"[Figure 3: An example of a word-polarity lattice.] Various methods have already been proposed for sentiment polarity classification, ranging from the use of co-occurrence with typical positive and negative words (Turney, 2002) to bag of words (Pang et al., 2002) and dependency structure (Kudo and Matsumoto, 2004)."
C08-1111,P02-1053,o,"(2007) apply the theory of (Hatzivassiloglou and McKeown, 1997) and (Turney, 2002) to emotion classification and propose a method based on the co-occurrence distribution over content words and six emotion words (e.g. joy, fear)."
C08-1135,P02-1053,o,Another possible comparison could be with a version of Turney's (2002) sentiment classification method applied to Chinese.
C08-1135,P02-1053,o,Turney (2002) describes a method of sentiment classification using two human-selected seed words (the words poor and excellent) in conjunction with a very large text corpus; the semantic orientation of phrases is computed as their association with the seed words (as measured by pointwise mutual information).
D07-1113,P02-1053,o,(2002) and Turney (2002) classified sentiment polarity of reviews at the document level.
D07-1114,P02-1053,o,"This direction has been forming the mainstream of research on opinion-sensitive text processing (Pang et al. , 2002; Turney, 2002, etc.)."
D07-1115,P02-1053,o,"This idea is the same as (Turney, 2002)."
D07-1115,P02-1053,p,"(Turney, 2002) is one of the most famous work that discussed learning polarity from corpus."
D07-1115,P02-1053,o,"The polarity value proposed by (Turney, 2002) is as follows."
D07-1115,P02-1053,o,"Typically, a small set of seed polar phrases are prepared, and new polar phrases are detected based on the strength of co-occurrence with the seeds (Hatzivassiloglou and McKeown, 1997; Turney, 2002; Kanayama and Nasukawa, 2006)."
D07-1115,P02-1053,o,"In summary, the strength of our approach is to exploit extremely precise structural clues, and to use 5 Semantic Orientation in (Turney, 2002)."
D07-1115,P02-1053,o,"In Turney's work, the co-occurrence is considered as the appearance in the same window (Turney, 2002)."
D07-1115,P02-1053,o,"For example, if the lexicon contains an adjective excellent, it matches every adjective phrase that includes excellent such as view-excellent etc. As a baseline, we built lexicon similarly by using polarity value of (Turney, 2002)."
D07-1115,P02-1053,o,"Its size is compatible to (Turney and Littman, 2002)."
D07-1115,P02-1053,n,"Turney's method did not work well although they reported 80% accuracy in (Turney and Littman, 2002)."
D08-1058,P02-1053,o,"Turney (2002) predicates the sentiment orientation of a review by the average semantic orientation of the phrases in the review that contain adjectives or adverbs, which is denoted as the semantic oriented method."
D09-1017,P02-1053,o,"Sentiment summarization has been well studied in the past decade (Turney, 2002; Pang et al., 2002; Dave et al., 2003; Hu and Liu, 2004a, 2004b; Carenini et al., 2006; Liu et al., 2007)."
D09-1019,P02-1053,o,"There are many research directions, e.g., sentiment classification (classifying an opinion document as positive or negative) (e.g., Pang, Lee and Vaithyanathan, 2002; Turney, 2002), subjectivity classification (determining whether a sentence is subjective or objective, and its associated opinion) (Wiebe and Wilson, 2002; Yu and Hatzivassiloglou, 2003; Wilson et al, 2004; Kim and Hovy, 2004; Riloff and Wiebe, 2005), feature/topic-based sentiment analysis (assigning positive or negative sentiments to topics or product features) (Hu and Liu 2004; Popescu and Etzioni, 2005; Carenini et al., 2005; Ku et al., 2006; Kobayashi, Inui and Matsumoto, 2007; Titov and McDonald."
D09-1019,P02-1053,o,"One of the main directions is sentiment classification, which classifies the whole opinion document (e.g., a product review) as positive or negative (e.g., Pang et al, 2002; Turney, 2002; Dave et al, 2003; Ng et al. 2006; McDonald et al, 2007)."
D09-1020,P02-1053,o,"They may rely only on this information (e.g., (Turney, 2002; Whitelaw et al., 2005; Riloff and Wiebe, 2003)), or they may combine it with additional information as well (e.g., (Yu and Hatzivassiloglou, 2003; Kim and Hovy, 2004; Bloom et al., 2007; Wilson et al., 2005a))."
D09-1061,P02-1053,p,Turney's (2002) work is perhaps one of the most notable examples of unsupervised polarity classification.
D09-1063,P02-1053,o,"Automatic methods for this often make use of lexicons of words tagged with positive and negative semantic orientation (Turney, 2002; Wilson et al., 2005; Pang and Lee, 2008)."
E06-1025,P02-1053,o,"determining document orientation (or polarity), as in deciding if a given Subjective text expresses a Positive or a Negative opinion on its subject matter (Pang and Lee, 2004; Turney, 2002); 3."
E06-1025,P02-1053,p,"The conceptually simplest approach to this latter problem is probably Turney's (2002), who has obtained interesting results on Task 2 by considering the algebraic sum of the orientations of terms as representative of the orientation of the document they belong to; but more sophisticated approaches are also possible (Hatzivassiloglou and Wiebe, 2000; Riloff et al. , 2003; Wilson et al. , 2004)."
E06-1026,P02-1053,o,"This problem will be solved by incorporating other resources such as thesaurus or a dictionary, or combining our method with other methods using external wider contexts (Suzuki et al. , 2006; Turney, 2002; Baron and Hirst, 2004)."
E06-1026,P02-1053,o,"Turney (2002) applied an internet-based technique to the semantic orientation classification of phrases, which had originally been developed for word sentiment classification."
E09-1004,P02-1053,o,"2 Literature Survey The task of sentiment analysis has evolved from document level analysis (e.g., (Turney., 2002); (Pang and Lee, 2004)) to sentence level analysis (e.g., (Hu and Liu., 2004); (Kim and Hovy., 2004); (Yu and Hatzivassiloglou, 2003))."
H05-1043,P02-1053,n,"While other systems, such as (Hu and Liu, 2004; Turney, 2002), have addressed these tasks to some degree, OPINE is the first to report results."
H05-1043,P02-1053,o,"PMI++ is an extended version of (Turney, 2002)'s method for finding the SO label of a phrase (as an attempt to deal with context-sensitive words)."
H05-1043,P02-1053,o,"Subjective phrases are used by (Turney, 2002; Pang and Vaithyanathan, 2002; Kushal et al. , 2003; Kim and Hovy, 2004) and others in order to classify reviews or sentences as positive or negative."
H05-1043,P02-1053,o,"As a result, the problem of opinion mining has seen increasing attention over the last three years from (Turney, 2002; Hu and Liu, 2004) and many others."
H05-1044,P02-1053,o,"7 Related Work Much work on sentiment analysis classifies documents by their overall sentiment, for example determining whether a review is positive or negative (e.g. , (Turney, 2002; Dave et al. , 2003; Pang and Lee, 2004; Beineke et al. , 2004))."
H05-1044,P02-1053,o,"A number of researchers have explored learning words and phrases with prior positive or negative polarity (another term is semantic orientation) (e.g. , (Hatzivassiloglou and McKeown, 1997; Kamps and Marx, 2002; Turney, 2002))."
H05-1045,P02-1053,o,"(2002), Turney (2002), Dave et al."
H05-1071,P02-1053,o,"6 Conclusions and Future Directions In previous work, statistical NLP computation over large corpora has been a slow, offline process, as in KNOWITALL (Etzioni et al. , 2005) and also in PMI-IR applications such as sentiment classification (Turney, 2002)."
H05-1073,P02-1053,o,"(Turney, 2002), (Bai, Padman and Airoldi, 2004), (Beineke, Hastie and Vaithyanathan, 2003), (Mullen and Collier, 2003), (Pang and Lee, 2003)."
H05-1116,P02-1053,o,"(2002), Turney (2002), Dave et al."
I05-2011,P02-1053,o,Turney (2002) and Wiebe (2000) focused on learning adjectives and adjectival phrases and Wiebe et al.
I05-2030,P02-1053,o,"(Dave et al. , 2003; Pang and Lee, 2004; Turney, 2002))."
I08-1040,P02-1053,o,"(Turney, 2002; Pang et al., 2002; Dave at al., 2003)."
I08-1040,P02-1053,o,"2 RelatedWork 2.1 Sentiment Classification Most previous work on the problem of categorizing opinionated texts has focused on the binary classification of positive and negative sentiment (Turney, 2002; Pang et al., 2002; Dave at al., 2003)."
I08-1040,P02-1053,o,But it is close to the paradigm described by Yarowsky (1995) and Turney (2002) as it also employs self-training based on a relatively small seed data set which is incrementally enlarged with unlabelled samples.
J06-3003,P02-1053,o,"Measures of attributional similarity have been studied extensively, due to their applications in problems such as recognizing synonyms (Landauer and Dumais 1997), information retrieval (Deerwester et al. 1990), determining semantic orientation (Turney 2002), grading student essays (Rehder et al. 1998), measuring textual cohesion (Morris and Hirst 1991), and word sense disambiguation (Lesk 1986)."
J06-3003,P02-1053,o,Many researchers have argued that metaphor is the heart of human thinking (Lakoff and Johnson 1980; Hofstadter and the Fluid Analogies Research Group 1995; Gentner et al. 2001; French 2002).
N03-1025,P02-1053,o,"We are currently investigating more challenging problems like multiple category classification using the Reuters-21578 data set (Lewis, 1992) and subjective sentiment classification (Turney, 2002)."
N03-1025,P02-1053,o,"Such techniques are currently being applied in many areas, including language identification, authorship attribution (Stamatatos et al. , 2000), text genre classification (Kesseler et al. , 1997; Stamatatos et al. , 2000), topic identification (Dumais et al. , 1998; Lewis, 1992; McCallum, 1998; Yang, 1999), and subjective sentiment classification (Turney, 2002)."
N06-1026,P02-1053,o,(2002) and Turney (2002) classified sentiment polarity of reviews at the document level.
N07-1037,P02-1053,o,"Turney (2002) applied an internet-based technique to the semantic orientation classification of phrases, which had originally been developed for word sentiment classification."
N07-1038,P02-1053,o,"2 Related Work Sentiment Classification Traditionally, categorization of opinion texts has been cast as a binary classification task (Pang et al. , 2002; Turney, 2002; Yu and Hatzivassiloglou, 2003; Dave et al. , 2003)."
N07-1038,P02-1053,o,"1 Introduction Previous work on sentiment categorization makes an implicit assumption that a single score can express the polarity of an opinion text (Pang et al. , 2002; Turney, 2002; Yu and Hatzivassiloglou, 2003)."
N07-1039,P02-1053,o,"Much of this work has utilized the fundamental concept of semantic orientation, (Turney, 2002); however, sentiment analysis still lacks a unified field theory."
N07-1039,P02-1053,o,"Sentiment analysis includes a variety of different problems, including: sentiment classification techniques to classify reviews as positive or negative, based on bag of words (Pang et al. , 2002) or positive and negative words (Turney, 2002; Mullen and Collier, 2004); classifying sentences in a document as either subjective or objective (Riloff and Wiebe, 2003; Pang and Lee, 2004); identifying or classifying appraisal targets (Nigam and Hurst, 2004); identifying the source of an opinion in a text (Choi et al. , 2005), whether the author is expressing the opinion, or whether he is attributing the opinion to someone else; and developing interactive and visual opinion mining methods (Gamon et al. , 2005; Popescu and Etzioni, 2005)."
N07-2048,P02-1053,o,Turney (2002) has presented an unsupervised opinion classification algorithm called SO-PMI (Semantic Orientation Using Pointwise Mutual Information).
N07-2048,P02-1053,o,"2 Details of the SO-PMI Algorithm The SO-PMI algorithm (Turney, 2002) is used to estimate the semantic orientation (SO) of a phrase by 1http://www.epinions.com"
N07-4010,P02-1053,o,"In related work (Chaovalit, 2005; Turney, 2002), both supervised and unsupervised approaches have been shown to have their pros and cons."
N09-1001,P02-1053,o,"2 Related Work There has been a large and diverse body of research in opinion mining, with most research at the text (Pang et al., 2002; Pang and Lee, 2004; Popescu and Etzioni, 2005; Ounis et al., 2006), sentence (Kim and Hovy, 2005; Kudo and Matsumoto, 2004; Riloff et al., 2003; Yu and Hatzivassiloglou, 2003) or word (Hatzivassiloglou and McKeown, 1997; Turney and Littman, 2003; Kim and Hovy, 2004; Takamura et al., 2005; Andreevskaia and Bergler, 2006; Kaji and Kitsuregawa, 2007) level."
N09-1002,P02-1053,o,"3 Related Work Many methods have been developed for automatically identifying subjective (opinion, sentiment, attitude, affect-bearing, etc.) words, e.g., (Turney, 2002; Riloff and Wiebe, 2003; Kim and Hovy, 2004; Taboada et al., 2006; Takamura et al., 2006)."
N09-1055,P02-1053,o,"The research of opinion mining began in 1997, the early research results mainly focused on the polarity of opinion words (Hatzivassiloglou et al., 1997) and treated the text-level opinion mining as a classification of either positive or negative on the number of positive or negative opinion words in one text (Turney et al., 2003; Pang et al., 2002; Zagibalov et al., 2008)."
N09-1056,P02-1053,o,"Examples of such early work include (Turney, 2002; Pang et al., 2002; Dave et al., 2003; Hu and Liu, 2004; Popescu and Etzioni, 2005)."
N09-2046,P02-1053,o,"1 Introduction In the community of sentiment analysis (Turney 2002; Pang et al., 2002; Tang et al., 2009), transferring a sentiment classifier from one source domain to another target domain is still far from a trivial work, because sentiment expression often behaves with strong domain-specific nature."
N09-3013,P02-1053,o,"In general, previous work in opinion mining includes document level sentiment classification using supervised (Chaovalit and Zhou, 2005) and unsupervised methods (Turney, 2002), machine learning techniques and sentiment classification considering rating scales (Pang, Lee and Vaithyanathan, 2002), and scoring of features (Dave, Lawrence and Pennock, 2003)."
P04-1034,P02-1053,o,"A contrasting approach (Turney, 2002) relies only upon documents whose labels are unknown."
P04-1034,P02-1053,o,"Accuracy on sentiment classification in other domains exceeds 80% (Turney, 2002)."
P04-1034,P02-1053,o,"In Turney (2002), features are selected according to part-of-speech labels."
P04-1035,P02-1053,o,"Second, movie reviews are apparently harder to classify than reviews of other products (Turney, 2002; Dave, Lawrence, and Pennock, 2003)."
P04-3025,P02-1053,o,"2 Motivation In the past, work has been done in the area of characterizing words and phrases according to their emotive tone (Turney and Littman, 2003; Turney, 2002; Kamps et al. , 2002; Hatzivassiloglou and Wiebe, 2000; Hatzivassiloglou and McKeown, 2002; Wiebe, 2000), but in many domains of text, the values of individual phrases may bear little relation to the overall sentiment expressed by the text."
P04-3025,P02-1053,o,"In the present work, the approach taken by Turney (2002) is used to derive such values for selected phrases in the text."
P05-1015,P02-1053,o,"Also, even the two-category version of the rating-inference problem for movie reviews has proven quite challenging for many automated classification techniques (Pang, Lee, and Vaithyanathan, 2002; Turney, 2002)."
P05-1015,P02-1053,o,"Most prior work on the specific problem of categorizing expressly opinionated text has focused on the binary distinction of positive vs. negative (Turney, 2002; Pang, Lee, and Vaithyanathan, 2002; Dave, Lawrence, and Pennock, 2003; Yu and Hatzivassiloglou, 2003)."
P05-1015,P02-1053,o,"(Term-based versions of this premise have motivated much sentiment-analysis work for over a decade (Das and Chen, 2001; Tong, 2001; Turney, 2002))."
P05-2008,P02-1053,o,"Turney (2002) noted that the unigram unpredictable might have a positive sentiment in a movie review (e.g. unpredictable plot), but could be negative in the review of an automobile (e.g. unpredictable steering)."
P06-1034,P02-1053,o,"(Turney, 2002))."
P06-1134,P02-1053,o,"The first is identifying words and phrases that are associated with subjectivity, for example, that think is associated with private states and that beautiful is associated with positive sentiments (e.g. , (Hatzivassiloglou and McKeown, 1997; Wiebe, 2000; Kamps and Marx, 2002; Turney, 2002; Esuli and Sebastiani, 2005))."
P06-1134,P02-1053,o,"The third exploits automatic subjectivity analysis in applications such as review classification (e.g. , (Turney, 2002; Pang and Lee, 2004)), mining texts for product reviews (e.g. , (Yi et al. , 2003; Hu and Liu, 2004; Popescu and Etzioni, 2005)), summarization (e.g. , (Kim and Hovy, 2004)), information extraction (e.g. , (Riloff et al. , 2005)), 1Note that sentiment, the focus of much recent work in the area, is a type of subjectivity, specifically involving positive or negative opinion, emotion, or evaluation."
P06-2059,P02-1053,p,"Turney also reported good result without domain customization (Turney, 2002)."
P06-2059,P02-1053,o,"6.3 Unsupervised sentiment classification Turney proposed the unsupervised method for sentiment classification (Turney, 2002), and similar method is utilized by many other researchers (Yu and Hatzivassiloglou, 2003)."
P06-2063,P02-1053,o,"Identifying subjectivity helps separate opinions from fact, which may be useful in question answering, summarization, etc. Semantic orientation classification is a task of determining positive or negative sentiment of words (Hatzivassiloglou and McKeown, 1997; Turney, 2002; Esuli and Sebastiani, 2005)."
P06-2063,P02-1053,o,"Document level sentiment classification is mostly applied to reviews, where systems assign a positive or negative sentiment for a whole review document (Pang et al. , 2002; Turney, 2002)."
P06-2079,P02-1053,o,"(2002), Turney (2002)), a sentence (e.g. , Liu et al."
P06-2079,P02-1053,o,"Much work has been performed on learning to identify and classify polarity terms (i.e. , terms expressing a positive sentiment (e.g. , happy) or a negative sentiment (e.g. , terrible)) and exploiting them to do polarity classification (e.g. , Hatzivassiloglou and McKeown (1997), Turney (2002), Kim and Hovy (2004), Whitelaw et al."
P06-2079,P02-1053,o,"For instance, instead of representing the polarity of a term using a binary value, Mullen and Collier (2004) use Turney's (2002) method to assign a real value to represent term polarity and introduce a variety of numerical features that are aggregate measures of the polarity values of terms selected from the document under consideration."
P06-2081,P02-1053,o,"For instance, both Pang and Lee (2002) and Turney (2002) consider the thumbs up/thumbs down decision: is a film review positive or negative?"
P06-2081,P02-1053,o,"Work focusses on analysing subjective features of text or speech, such as sentiment, opinion, emotion or point of view (Pang et al. , 2002; Turney, 2002; Dave et al. , 2003; Liu et al. , 2003; Pang and Lee, 2005; Shanahan et al. , 2005)."
P07-1053,P02-1053,o,"We follow the approach by Turney (2002), who note that the semantic orientation of an adjective depends on the noun that it modifies and suggest using adjective-noun or adverb-verb pairs to extract semantic orientation."
P07-1053,P02-1053,o,"However, we do not rely on linguistic resources (Kamps and Marx, 2002) or on search engines (Turney and Littman, 2003) to determine the semantic orientation, but rather rely on econometrics for this task."
P07-1053,P02-1053,o,"To evaluate the polarity and strength of opinions, most of the existing approaches rely either on training from human-annotated data (Hatzivassiloglou and McKeown, 1997), or use linguistic resources (Hu and Liu, 2004; Kim and Hovy, 2004) like WordNet, or rely on co-occurrence statistics (Turney, 2002) between words that are unambiguously positive (e.g. , excellent) and unambiguously negative (e.g. , horrible)." P07-1055,P02-1053,o,"Previous work on sentiment analysis has covered a wide range of tasks, including polarity classification (Pang et al. , 2002; Turney, 2002), opinion extraction (Pang and Lee, 2004), and opinion source assignment (Choi et al. , 2005; Choi et al. , 2006)." P07-1055,P02-1053,o,"Furthermore, these systems have tackled the problem at different levels of granularity, from the document level (Pang et al. , 2002), sentence level (Pang and Lee, 2004; Mao and Lebanon, 2006), phrase level (Turney, 2002; Choi et al. , 2005), as well as the speaker level in debates (Thomas et al. , 2006)." P07-1056,P02-1053,o,The work most similar in spirit to ours is that of Turney (2002). P07-1056,P02-1053,n,"While we do not have a direct comparison, we note that Turney (2002) performs worse on movie reviews than on his other datasets, the same type of data as the polarity dataset." P07-1056,P02-1053,o,"1 Introduction Sentiment detection and classification has received considerable attention recently (Pang et al. , 2002; Turney, 2002; Goldberg and Zhu, 2004)." P07-1123,P02-1053,o,"2 Motivation Automatic subjectivity analysis methods have been used in a wide variety of text processing applications, such as tracking sentiment timelines in online forums and news (Lloyd et al. , 2005; Balog et al. , 2006), review classification (Turney, 2002; Pang et al. , 2002), mining opinions from product reviews (Hu and Liu, 2004), automatic expressive text-to-speech synthesis (Alm et al.
, 2005), text semantic analysis (Wiebe and Mihalcea, 2006; Esuli and Sebastiani, 2006), and question answering (Yu and Hatzivassiloglou, 2003)." P07-1124,P02-1053,o,"Others, such as Turney (2002), Pang and Vaithyanathan (2002), have examined the positive or negative polarity, rather than presence or absence, of affective content in text." P07-1124,P02-1053,o,"Much of the work in sentiment analysis in the computational linguistics domain has focused either on short segments, such as sentences (Wilson et al. , 2005), or on longer documents with an explicit polarity orientation like movie or product reviews (Turney, 2002)." P07-3007,P02-1053,o,"(2002), Turney (2002), Kim and Hovy (2004) and others), however, the research described in this paper uses the information retrieval (IR) paradigm which has also been used by some researchers." P08-1034,P02-1053,o,"3.1 Level of Analysis Research on sentiment annotation is usually conducted at the text (Aue and Gamon, 2005; Pang et al., 2002; Pang and Lee, 2004; Riloff et al., 2006; Turney, 2002; Turney and Littman, 2003) or at the sentence levels (Gamon and Aue, 2005; Hu and Liu, 2004; Kim and Hovy, 2005; Riloff et al., 2006)." P08-1034,P02-1053,o,"For example, it has been observed that texts often contain multiple opinions on different topics (Turney, 2002; Wiebe et al., 2001), which makes assignment of the overall sentiment to the whole document problematic." P08-1036,P02-1053,o,"Sentiment classification is a well studied problem (Wiebe, 2000; Pang et al., 2002; Turney, 2002) and in many domains users explicitly We use the term aspect to denote properties of an object that can be rated by a user as in Snyder and Barzilay (2007)." P09-1027,P02-1053,o,"Turney (2002) predicates the sentiment orientation of a review by the average semantic orientation of the phrases in the review that contain adjectives or adverbs, which is denoted as the semantic oriented method."
P09-1028,P02-1053,o,"Methods focussing on the use and generation of dictionaries capturing the sentiment of words have ranged from manual approaches of developing domain-dependent lexicons (Das and Chen, 2001) to semi-automated approaches (Hu and Liu, 2004; Zhuang et al., 2006; Kim and Hovy, 2004), and even an almost fully automated approach (Turney, 2002)." P09-1079,P02-1053,p,Turney's (2002) work is perhaps one of the most notable examples of unsupervised polarity classification. P09-2041,P02-1053,o,"3 Method 3.1 Standard text classification approach We take our starting point from topic-based text classification (Dumais et al., 1998; Joachims, 1998) and sentiment classification (Turney, 2002; Pang and Lee, 2008)." P09-2043,P02-1053,o,"1 Introduction Sentiment analysis have been widely conducted in several domains such as movie reviews, product reviews, news and blog reviews (Pang et al., 2002; Turney, 2002)." W02-1011,P02-1053,o,"Since adjectives have been a focus of previous work in sentiment detection (Hatzivassiloglou and Wiebe, 2000; Turney, 2002), we looked at the performance of using adjectives alone." W02-1011,P02-1053,o,"In terms of relative performance, Naive Bayes tends to do the worst and SVMs tend to do the best. Turney's (2002) unsupervised algorithm uses bigrams containing an adjective or an adverb." W02-1011,P02-1053,o,"(Turney (2002) makes a similar point, noting that for reviews, 'the whole is not necessarily the sum of the parts')." W02-1011,P02-1053,o,"Some of this work focuses on classifying the semantic orientation of individual words or phrases, using linguistic heuristics or a pre-selected set of seed words (Hatzivassiloglou and McKeown, 1997; Turney and Littman, 2002)."
W02-1011,P02-1053,o,"Turney's (2002) work on classification of reviews is perhaps the closest to ours. He applied a specific unsupervised learning technique based on the mutual information between document phrases and the words 'excellent' and 'poor', where the mutual information is computed using statistics gathered by a search engine." W02-1011,P02-1053,o,"We also note that Turney (2002) found movie reviews to be the most ... Indeed, although our choice of title was completely independent of his, our selections were eerily similar." W03-0404,P02-1053,o,"Some work identifies inflammatory texts (e.g. , (Spertus, 1997)) or classifies reviews as positive or negative ((Turney, 2002; Pang et al. , 2002))." W03-0404,P02-1053,o,"Researchers have focused on learning adjectives or adjectival phrases (Turney, 2002; Hatzivassiloglou and McKeown, 1997; Wiebe, 2000) and verbs (Wiebe et al. , 2001), but no previous work has focused on learning nouns." W03-0404,P02-1053,o,"(Turney, 2002) used patterns representing part-of-speech sequences, (Hatzivassiloglou and McKeown, 1997) recognized adjectival phrases, and (Wiebe et al. , 2001) learned N-grams." W03-1014,P02-1053,o,"Some existing resources contain lists of subjective words (e.g. , Levin's desire verbs (1993)), and some empirical methods in NLP have automatically identified adjectives, verbs, and N-grams that are statistically associated with subjective language (e.g. , (Turney, 2002; Hatzivassiloglou and McKeown, 1997; Wiebe, 2000; Wiebe et al. , 2001))." W03-1014,P02-1053,o,"For example, (Spertus, 1997) developed a system to identify inflammatory texts and (Turney, 2002; Pang et al. , 2002) developed methods for classifying reviews as positive or negative." W03-1017,P02-1053,o,"Turney (2002) showed that it is possible to use only a few of those semantically oriented words (namely, excellent and poor) to label other phrases co-occurring with them as positive or negative."
W03-1017,P02-1053,o,"For determining whether an opinion sentence is positive or negative, we have used seed words similar to those produced by (Hatzivassiloglou and McKeown, 1997) and extended them to construct a much larger set of semantically oriented words with a method similar to that proposed by (Turney, 2002)." W03-1017,P02-1053,n,"Our focus is on the sentence level, unlike (Pang et al. , 2002) and (Turney, 2002); we employ a significantly larger set of seed words, and we explore as indicators of orientation words from syntactic classes other than adjectives (nouns, verbs, and adverbs)." W03-1017,P02-1053,o,"The approach is based on the hypothesis that positive words co-occur more than expected by chance, and so do negative words; this hypothesis was validated, at least for strong positive/negative words, in (Turney, 2002)." W03-1017,P02-1053,o,"In earlier work (Turney, 2002) only singletons were used as seed words; varying their number allows us to test whether multiple seed words have a positive effect in detection performance." W05-0408,P02-1053,o,"1 Introduction The field of sentiment classification has received considerable attention from researchers in recent years (Pang and Lee 2002, Pang et al. 2004, Turney 2002, Turney and Littman 2002, Wiebe et al. 2001, Bai et al. 2004, Yu and Hatzivassiloglou 2003 and many others)." W05-0408,P02-1053,o,"Movie and product reviews have been the main focus of many of the recent studies in this area (Pang and Lee 2002, Pang et al. 2004, Turney 2002, Turney and Littman 2002)." W05-0408,P02-1053,o,"[Table 7: Classification accuracy on the movie review domain. Method / accuracy / training data: Turney (2002) 66% unsupervised; Pang & Lee (2004) 87.15% supervised; Aue & Gamon (2005) 91.4% supervised; SO 73.95% unsupervised; SM+SO to increase seed words, then SO 74.85% weakly supervised.] Turney (2002) achieves 66% accuracy on the movie review domain using the PMI-IR algorithm to gather association scores from the web."
W05-0408,P02-1053,o,We describe an extension to the technique for the automatic identification and labeling of sentiment terms described in Turney (2002) and Turney and Littman (2002). W05-0408,P02-1053,o,Turney (2002) and Turney and Littman (2002) exploit the first two generalizations for unsupervised sentiment classification of movie reviews. W05-0408,P02-1053,o,Turney (2002) starts from a small (2 word) set of terms with known orientation (excellent and poor). W05-0408,P02-1053,o,"Given a set of terms with unknown sentiment orientation, Turney (2002) then uses the PMI-IR algorithm (Turney 2001) to issue queries to the web and determine, for each of these terms, its pointwise mutual information (PMI) with the two seed words across a large set of documents." W05-0408,P02-1053,o,"We can then use this newly identified set to: (1) use Turney's method to find the orientation for the terms and employ the terms and their scores in a classifier, and (2) use Turney's method to find the orientation for the terms and add the new terms as additional seed terms for a second iteration. As opposed to Turney (2002), we do not use the web as a resource to find associations, rather we apply the method directly to in-domain data." W05-0408,P02-1053,o,"It is worth noting, however, that even in Turney (2002) the choice of seed words is explicitly motivated by domain properties of movie reviews." W06-0301,P02-1053,o,"Identifying subjectivity helps separate opinions from fact, which may be useful in question answering, summarization, etc.
Sentiment detection is the task of determining positive or negative sentiment of words (Hatzivassiloglou and McKeown, 1997; Turney, 2002; Esuli and Sebastiani, 2005), phrases and sentences (Kim and Hovy, 2004; Wilson et al. , 2005), or documents (Pang et al. , 2002; Turney, 2002)." W06-0302,P02-1053,o,"(2002), Turney (2002), Dave et al." W06-0305,P02-1053,o,"Most of the annotation approaches tackling these issues, however, are aimed at performing classifications at either the document level (Pang et al. , 2002; Turney, 2002), or the sentence or word level (Wiebe et al. , 2004; Yu and Hatzivassiloglou, 2003)." W06-0306,P02-1053,o,"In analyzing opinions (Cardie et al. , 2003; Wilson et al. , 2004), judging document-level subjectivity (Pang et al. , 2002; Turney, 2002), and answering opinion questions (Cardie et al. , 2003; Yu and Hatzivassiloglou, 2003), the output of a sentence-level subjectivity classification can be used without modification." W06-0308,P02-1053,o,"The five part-of-speech (POS) patterns from (Turney, 2002) were used for the extraction of indicators, all involving at least one adjective or adverb." W06-1613,P02-1053,o,(2002) and Turney (2002). W06-1639,P02-1053,o,"In particular, since we treat each individual speech within a debate as a single document, we are considering a version of document-level sentiment-polarity classification, namely, automatically distinguishing between positive and negative documents (Das and Chen, 2001; Pang et al. , 2002; Turney, 2002; Dave et al. , 2003)." W06-1640,P02-1053,o,"(2002), Turney (2002), Dave et al." W06-1641,P02-1053,o,"For example, the adjective unpredictable may have a negative orientation in an automotive review, in a phrase such as unpredictable steering, but it could have a positive orientation in a movie review, in a phrase such as unpredictable plot, as mentioned in (Turney, 2002) in the context of his sentiment word detection."
W06-1641,P02-1053,o,"C3BTC5 and CCCDCA were used in (Kamps and Marx, 2002) and (Turney and Littman, 2003), respectively." W06-1642,P02-1053,o,"The token precision is higher than 90% in all of the corpora, including the movie domain, which is considered to be difficult for SA (Turney, 2002)." W06-1642,P02-1053,o,Turney (2002) used collocation with excellent or poor to obtain positive and negative clues for document classification. W06-1650,P02-1053,n,"In the thriving area of research on automatic analysis and processing of product reviews (Hu and Liu 2004; Turney 2002; Pang and Lee 2005), little attention has been paid to the important task studied here assessing review helpfulness." W06-1650,P02-1053,o,(2002) and Turney (2002) classified sentiment polarity of reviews at the document level. W06-1652,P02-1053,o,"Lexical cues of differing complexities have been used, including single words and Ngrams (e.g. , (Mullen and Collier, 2004; Pang et al. , 2002; Turney, 2002; Yu and Hatzivassiloglou, 2003; Wiebe et al. , 2004)), as well as phrases and lexico-syntactic patterns (e.g, (Kim and Hovy, 2004; Hu and Liu, 2004; Popescu and Etzioni, 2005; Riloff and Wiebe, 2003; Whitelaw et al. , 2005))." W06-1664,P02-1053,o,"Also, PMI-IR is useful for calculating semantic orientation and rating reviews (Turney, 2002)." W06-3301,P02-1053,o,"For example, researchers (Turney 2002; Yu and Hatzivassiloglou 2003) have identified semantic correlation between words and views: positive words tend to appear more frequently in positive movie and product reviews and newswire article sentences that have a positive semantic orientation and vice versa for negative reviews or sentences with a negative semantic orientation." W06-3808,P02-1053,o,"1 Introduction Sentiment analysis of text documents has received considerable attention recently (Shanahan et al. , 2005; Turney, 2002; Dave et al. , 2003; Hu and Liu, 2004; Chaovalit and Zhou, 2005)." 
W07-1515,P02-1053,o,"The authors apply SO-PMI-IR (Turney, 2002) to extract and determine the polarity of adjectives." W07-2064,P02-1053,o,"As comparison, Turney and Littman (2003) used seed sets consisting of 7 words in their word valence annotation experiments, while Turney (2002) used minimal seed sets consisting of only one positive and one negative word (excellent and poor) in his experiments on review classification." W07-2072,P02-1053,o,"2 Related work Our approach for emotion classification is based on the idea of (Hatzivassiloglou and McKeown, 1997) and is similar to those of (Turney, 2002) and (Turney and Littman, 2003)." W07-2072,P02-1053,o,The idea of tracing polarity through adjective cooccurrence is adopted by Turney (2002) for the binary (positive and negative) classification of text reviews. W07-2072,P02-1053,o,"Following Hatzivassiloglou and McKeown (1997) and Turney (2002), we decided to observe how often the words from the headline co-occur with each one of the six emotions." W07-2072,P02-1053,o,"Some of the differences between our approach and those of Turney (2002) are mentioned below: objectives: Turney (2002) aims at binary text classification, while our objective is six class classification of one-liner headlines." W07-2072,P02-1053,n,"word class: Turney (2002) measures polarity using only adjectives, however in our approach we consider the noun, the verb, the adverb and the adjective content words." W07-2072,P02-1053,o,"search engines: Turney (2002) uses the Altavista web browser, while we consider and combine the frequency information acquired from three web search engines." W07-2072,P02-1053,o,"word proximity: For the web searches, Turney (2002) uses the NEAR operator and considers only those documents that contain the adjectives within a specific proximity."
W07-2072,P02-1053,o,"queries: The queries of Turney (2002) are made up of a pair of adjectives, and in our approach the query contains the content words of the headline and an emotion." W09-1606,P02-1053,o,Turney (2002) suggested comparing the frequency of phrase co-occurrences with words predetermined by the sentiment lexicon. W09-1703,P02-1053,o,"Our work builds upon Turney's work on semantic orientation (Turney, 2002) and synonym learning (Turney, 2001), in which he used a PMI-IR algorithm to measure the similarity of words and phrases based on Web queries." W09-1703,P02-1053,o,"Turney (Turney, 2001; Turney, 2002) reported that the NEAR operator outperformed simple page co-occurrence for his purposes; our early experiments informally showed the same for this work." W09-1904,P02-1053,o,"Previous research has focused on classifying subjective-versus-objective expressions (Wiebe et al., 2004), and also on accurate sentiment polarity assignment (Turney, 2002; Yi et al., 2003; Pang and Lee, 2004; Sindhwani and Melville, 2008; Melville et al., 2009)." C08-1018,P02-1057,o,"Finally, we plan to apply the model to other paraphrasing tasks including fully abstractive document summarisation (Daume III and Marcu, 2002)." D08-1057,P02-1057,o,"For content selection, discourse-level considerations were proposed by Daume III and Marcu (2002), who explored the use of Rhetorical Structure Theory (Mann and Thompson, 1988)." J05-4004,P02-1057,o,"Between these two extremes, there has been a relatively modest amount of work in sentence simplification (Chandrasekar, Doran, and Bangalore 1996; Mahesh 1997; Carroll et al. 1998; Grefenstette 1998; Jing 2000; Knight and Marcu 2002) and document compression (Daume III and Marcu 2002; Daume III and Marcu 2004; Zajic, Dorr, and Schwartz 2004) in which words, phrases, and sentences are selected in an extraction process."
J05-4004,P02-1057,o,"In our own work on document compression models (Daume III and Marcu 2002; Daume III and Marcu 2004), both of which extend the sentence compression model of Knight and Marcu (2002), we assume that sentences and documents can be summarized exclusively through deletion of contiguous text segments." W04-1016,P02-1057,o,"It has been further observed that simply compressing sentences individually and concatenating the results leads to suboptimal summaries (Daume III and Marcu, 2002)." W04-1016,P02-1057,o,"The third baseline, COMP is the document compression system developed by Daume III and Marcu (2002), which compresses documents by cutting out constituents in a combined syntax and discourse tree." W04-1016,P02-1057,o,"A few researchers have focused on other aspects of summarization, including single sentence (Knight and Marcu, 2002), paragraph or short document (Daume III and Marcu, 2002), query-focused (Berger and Mittal, 2000), or speech (Hori et al. , 2003)." C04-1135,P03-1001,o,"1 Introduction Hyponymy relations can play a crucial role in various NLP systems, and there have been many attempts to develop automatic methods to acquire hyponymy relations from text corpora (Hearst, 1992; Caraballo, 1999; Imasumi, 2001; Fleischman et al. , 2003; Morin and Jacquemin, 2003; Ando et al. , 2003)." C04-1188,P03-1001,o,"The row labelled Precision shows the precision of the extracted information (i.e. , how many entries are correct, according to a human annotator) estimated by random sampling and manual evaluation of 1% of the data for each table, similar to (Fleischman et al. , 2003)." C04-1188,P03-1001,o,"(Fleischman et al. , 2003; Jijkoun et al. , 2003)." C04-1188,P03-1001,p,"In our future work we plan to investigate the effect of more sophisticated and, probably, more accurate filtering methods (Fleischman et al. , 2003) on the QA results." 
C04-1188,P03-1001,o,"The precision of the extracted information can be improved significantly by using machine learning methods to filter out noise (Fleischman et al. , 2003)." C04-1188,P03-1001,o,"The recall problem is usually addressed by increasing the amount of text data for extraction (taking larger collections (Fleischman et al. , 2003)) or by developing more surface patterns (Soubbotin and Soubbotin, 2002)." C04-1188,P03-1001,o,"The usual recall and precision metrics (e.g. , how many of the interesting bits of information were detected, and how many of the found bits were actually correct) require either a test corpus previously annotated with the required information, or manual evaluation (Fleischman et al. , 2003)." C04-1188,P03-1001,o,"3.2 Questions and Corpus To get a clear picture of the impact of using different information extraction methods for the offline construction of knowledge bases, similarly to (Fleischman et al. , 2003), we focused only on questions about persons, taken from the TREC8 through TREC 2003 question sets." H05-1013,P03-1001,o,"In particular, we use the name/instance lists described by (Fleischman et al. , 2003) and available on Fleischman's web page to generate features between names and nominals (this list contains a110a111a85 pairs mined from a112a73a96 GBs of news data)." H05-1075,P03-1001,o,"2 Related Work Question Answering has attracted much attention from the areas of Natural Language Processing, Information Retrieval and Data Mining (Fleischman et al. , 2003; Echihabi et al. , 2003; Yang et al. , 2003; Hermjakob et al. , 2002; Dumais et al. , 2002; Hermjakob et al. , 2000)." H05-1075,P03-1001,o,"1 Motivation Question Answering has emerged as a key area in natural language processing (NLP) to apply question parsing, information extraction, summarization, and language generation techniques (Clark et al. , 2004; Fleischman et al. , 2003; Echihabi et al. , 2003; Yang et al. , 2003; Hermjakob et al. , 2002; Dumais et al.
, 2002)." I08-2126,P03-1001,o,"1 Introduction The goal of this study has been to automatically extract a large set of hyponymy relations, which play a critical role in many NLP applications, such as Q&A systems (Fleischman et al., 2003)." N04-1010,P03-1001,o,"We do not use particular lexicosyntactic patterns, as previous attempts have (Hearst, 1992; Caraballo, 1999; Imasumi, 2001; Fleischman et al. , 2003; Morin and Jacquemin, 2003; Ando et al. , 2003)." N04-1041,P03-1001,o,We compared our system with the concepts in WordNet and Fleischman et al.'s instance/concept relations (Fleischman et al. 2003). N04-1041,P03-1001,o,2 Previous Work There have been several approaches to automatically discovering lexico-semantic information from text (Hearst 1992; Riloff and Shepherd 1997; Riloff and Jones 1999; Berland and Charniak 1999; Pantel and Lin 2002; Fleischman et al. 2003; Girju et al. 2003). P06-1102,P03-1001,p,"In contrast, the idea of bootstrapping for relation and information extraction was first proposed in (Riloff and Jones, 1999), and successfully applied to the construction of semantic lexicons (Thelen and Riloff, 2002), named entity recognition (Collins and Singer, 1999), extraction of binary relations (Agichtein and Gravano, 2000), and acquisition of structured data for tasks such as Question Answering (Lita and Carbonell, 2004; Fleischman et al. , 2003)." W04-2709,P03-1001,o,"After that, several million instances of people, locations, and other facts were added (Fleischman et al. , 2003)." C04-1030,P03-1021,o,"Alternatively, one can train them with respect to the final translation quality measured by some error criterion (Och, 2003)." C04-1072,P03-1021,o,"To simulate real world scenario, we use n-best lists from ISI's state-of-the-art statistical machine translation system, AlTemp (Och 2003), and the 2002 NIST Chinese-English evaluation corpus as the test corpus."
C04-1072,P03-1021,o,"For example, a statistical machine translation system such as ISI's AlTemp SMT system (Och 2003) can generate a list of n-best alternative translations given a source sentence." C04-1072,P03-1021,o,"A natural fit to the existing statistical machine translation framework: A metric that ranks a good translation high in an n-best list could be easily integrated in a minimal error rate statistical machine translation training framework (Och 2003)." C04-1168,P03-1021,o,"The training of IBM model 4 was implemented by the GIZA++ package (Och and Ney, 2003)." C04-1168,P03-1021,o,"We adopted an N-best hypothesis approach (Och, 2003) to train." C04-1168,P03-1021,o,"Indeed, the proposed speech translation paradigm of log-linear models have been shown effective in many applications (Beyerlein, 1998) (Vergyri, 2000) (Och, 2003)." C04-1168,P03-1021,o,"Powell's algorithm used in this work is similar to the one from (Press et al. , 2000) but we modified the line optimization codes, a subroutine of Powell's algorithm, with reference to (Och, 2003)." C08-1005,P03-1021,o,"minimum error rate training (MERT) (Och, 2003) to maximize BLEU score (Papineni et al., 2002)." C08-1014,P03-1021,o,"By introducing the hidden word alignment variable a (Brown et al., 1993), the optimal translation can be searched for based on the following criterion: e* = argmax_e sum_{m=1}^{M} lambda_m h_m(e, f, a) (1), where e is a string of phrases in the target language, f is the source language string of phrases, h_m are feature functions, and weights lambda_m are typically optimized to maximize the scoring function (Och, 2003)." C08-1014,P03-1021,o,"Our MT baseline system is based on Moses decoder (Koehn et al., 2007) with word alignment obtained from GIZA++ (Och et al., 2003)." C08-1014,P03-1021,p,"1 Introduction State-of-the-art Statistical Machine Translation (SMT) systems usually adopt a two-pass search strategy (Och, 2003; Koehn, et al., 2003) as shown in Figure 1."
C08-1041,P03-1021,o,"We use minimum error rate training (Och, 2003) to tune the feature weights for the log-linear model." C08-1064,P03-1021,o,"Except where noted, each system was trained on 27 million words of newswire data, aligned with GIZA++ (Och and Ney, 2003) and symmetrized with the grow-diag-final-and heuristic (Koehn et al., 2003)." C08-1064,P03-1021,o,"In all experiments that follow, each system configuration was independently optimized on the NIST 2003 Chinese-English test set (919 sentences) using minimum error rate training (Och, 2003) and tested on the NIST 2005 Chinese-English task (1082 sentences)." C08-1064,P03-1021,o,"This may be because their system was not tuned using minimum error rate training (Och, 2003)." C08-1064,P03-1021,o,"We use deterministic sampling, which is useful for reproducibility and for minimum error rate training (Och, 2003)." C08-1064,P03-1021,o,"Our baseline uses Giza++ alignments (Och and Ney, 2003) symmetrized with the grow-diag-final-and heuristic (Koehn et al., 2003)." C08-1074,P03-1021,o,"Och's (2003) minimum error rate training (MERT) procedure is the most commonly used method for training feature weights in statistical machine translation (SMT) models." C08-1074,P03-1021,p,"1 Introduction Och (2003) introduced minimum error rate training (MERT) for optimizing feature weights in statistical machine translation (SMT) models, and demonstrated that it produced higher translation quality scores than maximizing the conditional likelihood of a maximum entropy model using the same features."
C08-1125,P03-1021,o,"We run the decoder with its default settings, then use Moses' implementation of minimum error rate training (Och, 2003) to tune the feature weights on the development set." C08-1125,P03-1021,o,"3 Baseline MT System The phrase-based SMT system used in our experiments is Moses, where phrase translation probabilities, reordering probabilities, and language model probabilities are combined in the log-linear model to obtain the best translation e_best of the source sentence f: e_best = argmax_e p(e|f) = argmax_e sum_{m=1}^{M} lambda_m h_m(e, f) (2). The weights are set by a discriminative training method using a held-out data set as described in (Och, 2003)." C08-1127,P03-1021,o,"For the efficiency of minimum-error-rate training (Och, 2003), we built our development set (580 sentences) using sentences not exceeding 50 characters from the NIST MT-02 evaluation test data." C08-1127,P03-1021,o,"This wrong translation of content words is similar to the incorrect omission reported in (Och et al., 2003), which both hurt translation adequacy." C08-1127,P03-1021,o,"Firstly, we run GIZA++ (Och and Ney, 2000) on the training corpus in both directions and then apply the 'grow-diag-final' refinement rule (Koehn et al., 2003) to obtain many-to-many word alignments." C08-1144,P03-1021,o,"Starting with bilingual phrase pairs extracted from automatically aligned parallel text (Och and Ney, 2004; Koehn et al., 2003), these PSCFG approaches augment each contiguous (in source and target words) phrase pair with a left-hand-side symbol (like the VP in the example above), and perform a generalization procedure to form rules that include nonterminal symbols."
C08-1144,P03-1021,o,"2 Summary of approaches Given a source language sentence f, statistical machine translation defines the translation task as selecting the most likely target translation e under a model P(e|f), i.e.: e(f) = argmax_e P(e|f) = argmax_e sum_{i=1}^{m} lambda_i h_i(e, f), where the argmax operation denotes a search through a structured space of translation outputs in the target language, h_i(e,f) are bilingual features of e and f and monolingual features of e, and weights lambda_i are trained discriminatively to maximize translation quality (based on automatic metrics) on held out data (Och, 2003)." C08-5001,P03-1021,o,"The k-best list is also frequently used in discriminative learning to approximate the whole set of candidates which is usually exponentially large (Och, 2003; McDonald et al., 2005)." D07-1005,P03-1021,o,"Minimum Error Rate Training (MERT) (Och, 2003) under BLEU criterion is used to estimate 20 feature function weights over the larger development set (dev1)." D07-1005,P03-1021,o,"Our human word alignments do not distinguish between Sure and Probable links (Och and Ney, 2003)." D07-1005,P03-1021,o,Such an approach contrasts with the log-linear HMM/Model-4 combination proposed by Och and Ney (2003). D07-1005,P03-1021,o,"2 Word Alignment Framework A statistical translation model (Brown et al. , 1993; Och and Ney, 2003) describes the relationship between a pair of sentences in the source and target languages (f = f_1^J, e = e_1^I) using a translation probability P(f|e)." D07-1005,P03-1021,p,"(2) We note that these posterior probabilities can be computed efficiently for some alignment models such as the HMM (Vogel et al. , 1996; Och and Ney, 2003), Models 1 and 2 (Brown et al. , 1993)." D07-1005,P03-1021,o,"High quality word alignments can yield more accurate phrase-pairs which improve quality of a phrase-based SMT system (Och and Ney, 2003; Fraser and Marcu, 2006b)."
D07-1005,P03-1021,o,"Much of the recent work in word alignment has focussed on improving the word alignment quality through better modeling (Och and Ney, 2003; Deng and Byrne, 2005; Martin et al. , 2005) or alternative approaches to training (Fraser and Marcu, 2006b; Moore, 2005; Ittycheriah and Roukos, 2005)." D07-1006,P03-1021,o,"4.2 Experiments To build all alignment systems, we start with 5 iterations of Model 1 followed by 4 iterations of HMM (Vogel et al. , 1996), as implemented in GIZA++ (Och and Ney, 2003)." D07-1006,P03-1021,o,"For all non-LEAF systems, we take the best performing of the union, refined and intersection symmetrization heuristics (Och and Ney, 2003) to combine the 1-to-N and M-to-1 directions resulting in a M-to-N alignment." D07-1006,P03-1021,o,"For French/English translation we use a state of the art phrase-based MT system similar to (Och and Ney, 2004; Koehn et al. , 2003)." D07-1006,P03-1021,o,"(Och and Ney, 2003) invented heuristic symmetrization of the output of a 1-to-N model and a M-to-1 model resulting in a M-to-N alignment; this was extended in (Koehn et al. , 2003)." D07-1006,P03-1021,o,"Our work is most similar to work using discriminative log-linear models for alignment, which is similar to discriminative log-linear models used for the SMT decoding (translation) problem (Och and Ney, 2002; Och, 2003)." D07-1006,P03-1021,o,"(Och and Ney, 2003) presented results suggesting that the additional parameters required to ensure that a model is not deficient result in inferior performance, but we plan to study whether this is the case for our generative model in future work."
D07-1006,P03-1021,p,"2.2 Unsupervised Parameter Estimation We can perform maximum likelihood estimation of the parameters of this model in a similar fashion to that of Model 4 (Brown et al. , 1993), described thoroughly in (Och and Ney, 2003)." D07-1006,P03-1021,o,"We use Viterbi training (Brown et al. , 1993) but neighborhood estimation (Al-Onaizan et al. , 1999; Och and Ney, 2003) or pegging (Brown et al. , 1993) could also be used." D07-1006,P03-1021,p,"(Och and Ney, 2003) discussed efficient implementation." D07-1007,P03-1021,o,"The phrase bilexicon is derived from the intersection of bidirectional IBM Model 4 alignments, obtained with GIZA++ (Och and Ney, 2003), augmented to improve recall using the grow-diag-final heuristic." D07-1007,P03-1021,o,"The loglinear model weights are learned using Chiang's implementation of the maximum BLEU training algorithm (Och, 2003), both for the baseline, and the WSD-augmented system." D07-1029,P03-1021,o,"(3) λ's in Equation 1 are the weights of different feature functions, learned to maximize development set BLEU scores using a method similar to (Och, 2003)." D07-1030,P03-1021,o,"SMT has evolved from the original word-based approach (Brown et al. , 1993) into phrase-based approaches (Koehn et al. , 2003; Och and Ney, 2004) and syntax-based approaches (Wu, 1997; Alshawi et al. , 2000; Yamada and Knight, 2001; Chiang, 2005)." D07-1030,P03-1021,o,"We run the decoder with its default settings (maximum phrase length 7) and then use Koehn's implementation of minimum error rate training (Och, 2003) to tune the feature weights on the development set." D07-1036,P03-1021,o,"For the log-linear model training, we take minimum-error-rate training method as described in (Och, 2003)." D07-1038,P03-1021,o,"We obtain weights for the combinations of the features by performing minimum error rate training (Och, 2003) on held-out data."
D07-1054,P03-1021,o,"The translation models were phrase-based (Zens et al. , 2002), created using the GIZA++ toolkit (Och et al. , 2003)." D07-1054,P03-1021,o,"For tuning of the decoder's parameters, including the language model weight, minimum error training (Och 2003) with respect to the BLEU score was conducted using the development corpus." D07-1055,P03-1021,n,"Note that the minimum error rate training (Och, 2003) uses only the target sentence with the maximum posterior probability whereas, here, the whole probability distribution is taken into account." D07-1055,P03-1021,n,"We will show that some achieve significantly better results than the standard minimum error rate training of (Och, 2003)." D07-1055,P03-1021,p,"The current state-of-the-art is to optimize these parameters with respect to the final evaluation criterion; this is the so-called minimum error rate training (Och, 2003)." D07-1055,P03-1021,p,"The current state-of-the-art is to use minimum error rate training (MERT) as described in (Och, 2003)." D07-1055,P03-1021,o,"Therefore, (Och and Ney, 2002; Och, 2003) defined the translation candidate with the minimum word-error rate as pseudo reference translation." D07-1055,P03-1021,o,"However, as pointed out in (Och, 2003), there is no reason to believe that the resulting parameters are optimal with respect to translation quality measured with the Bleu score." D07-1056,P03-1021,o,"3.3 Features Similar to the default features in Pharaoh (Koehn, Och and Marcu 2003), we used the following features to estimate the weight of our grammar rules." D07-1056,P03-1021,o,"We just assign these rules a constant score trained using our implementation of Minimum Error Rate Training (Och, 2003b), which is 0.7 in our system." D07-1056,P03-1021,o,"6 Training Similar to most state-of-the-art phrase-based SMT systems, we use the SRI toolkit (Stolcke, 2002) for language model training and Giza++ toolkit (Och and Ney, 2003) for word alignment."
D07-1056,P03-1021,o,"Based on the word alignment results, if the aligned target words of any two adjacent foreign linguistic phrases can also be formed into two valid adjacent phrases according to constraints proposed in the phrase extraction algorithm by Och (2003a), they will be extracted as a reordering training sample." D07-1056,P03-1021,o,"There have been considerable efforts to improve the reordering model in SMT systems, ranging from the fundamental distance-based distortion model (Och and Ney, 2004; Koehn et al. , 2003), flat reordering model (Wu, 1996; Zens et al. , 2004; Kumar et al. , 2005), to lexicalized reordering model (Tillmann, 2004; Kumar et al. , 2005; Koehn et al. , 2005), hierarchical phrase-based model (Chiang, 2005), and maximum entropy-based phrase reordering model (Xiong et al. , 2006)." D07-1079,P03-1021,o,"Tuning was done using Maximum BLEU hill-climbing (Och, 2003)." D07-1079,P03-1021,o,"A superset of the parallel data was word aligned by GIZA union (Och and Ney, 2003) and EMD (Fraser and Marcu, 2006)." D07-1079,P03-1021,o,"Approaches include word substitution systems (Brown et al. , 1993), phrase substitution systems (Koehn et al. , 2003; Och and Ney, 2004), and synchronous context-free grammar systems (Wu and Wong, 1998; Chiang, 2005), all of which train on string pairs and seek to establish connections between source and target strings." D07-1080,P03-1021,o,"The hierarchical phrase translation pairs are extracted in a standard way (Chiang, 2005): First, the bilingual data are word alignment annotated by running GIZA++ (Och and Ney, 2003) in two directions." D07-1080,P03-1021,o,"The baseline hierarchical phrase-based system is trained using standard max-BLEU training (MERT) without sparse features (Och, 2003)."
D07-1080,P03-1021,o,"2 Statistical Machine Translation We use a log-linear approach (Och, 2003) in which a foreign language sentence f is translated into another language, for example English, e, by seeking a maximum solution: ê = argmax_e w^T h(f, e) (1) where h(f, e) is a large-dimension feature vector." D07-1080,P03-1021,o,"1 Introduction The recent advances in statistical machine translation have been achieved by discriminatively training a small number of real-valued features based either on (hierarchical) phrase-based translation (Och and Ney, 2004; Koehn et al. , 2003; Chiang, 2005) or syntax-based translation (Galley et al. , 2006)." D07-1091,P03-1021,o,"The feature weights λ_i in the log-linear model are determined using a minimum error rate training method, typically Powell's method (Och, 2003)." D07-1103,P03-1021,o,"To model p(t,a|s), we use a standard loglinear approach: p(t,a|s) ∝ exp[ Σ_i λ_i f_i(s,t,a) ] where each f_i(s,t,a) is a feature function, and weights λ_i are set using Och's algorithm (Och, 2003) to maximize the system's BLEU score (Papineni et al., 2002)."
D07-1105,P03-1021,o,"3.2 Non-Uniform System Prior Weights As pointed out in Section 2.1, a useful property of the MBR-like system selection method is that system prior weights can easily be trained using the Minimum Error Rate Training (Och, 2003)." D07-1105,P03-1021,o,"Note that all systems were optimized using a non-deterministic implementation of the Minimum Error Rate Training described in (Och, 2003)." D07-1105,P03-1021,o,"For instance, word alignment models are often trained using the GIZA++ toolkit (Och and Ney, 2003); error minimizing training criteria such as the Minimum Error Rate Training (Och, 2003) are employed in order to learn feature function weights for log-linear models; and translation candidates are produced using phrase-based decoders (Koehn et al. , 2003) in combination with n-gram language models (Brants et al. , 2007)." D07-1105,P03-1021,p,"For instance, changing the training procedure for word alignment models turned out to be most beneficial; for details see (Och and Ney, 2003)."
D07-1105,P03-1021,p,"Using the components of the row-vector b_m as feature function values for the candidate translation e_m (m = 1, …, M), the system prior weights can easily be trained using the Minimum Error Rate Training described in (Och, 2003)." D08-1010,P03-1021,o,"We perform minimum error rate training (Och, 2003) to tune the feature weights for the log-linear model to maximize the system's BLEU score on the development set." D08-1012,P03-1021,o,"When different decoder settings are applied to the same model, MERT weights (Och, 2003) from the unprojected single pass setup are used and are kept constant across runs." D08-1022,P03-1021,o,"These parameters λ_1–λ_8 are tuned by minimum error rate training (Och, 2003) on the dev sets." D08-1023,P03-1021,o,"We benchmark our results against a model (Hiero) which was directly trained to optimise BLEUNIST using the standard MERT algorithm (Och, 2003) and the full set of translation and lexical weight features described for the Hiero model (Chiang, 2007)." D08-1023,P03-1021,o,"Most work on discriminative training for SMT has focussed on linear models, often with margin based algorithms (Liang et al., 2006; Watanabe et al., 2006), or rescaling a product of sub-models (Och, 2003; Ittycheriah and Roukos, 2007)." D08-1024,P03-1021,o,"2 Learning algorithm The translation model is a standard linear model (Och and Ney, 2002), which we train using MIRA (Crammer and Singer, 2003; Crammer et al., 2006), following Watanabe et al." D08-1024,P03-1021,p,"1 Introduction Since its introduction by Och (2003), minimum error rate training (MERT) has been widely adopted for training statistical machine translation (MT) systems."
D08-1033,P03-1021,o,"5.1 Baseline System We trained Moses on all Spanish-English Europarl sentences up to length 20 (177k sentences) using GIZA++ Model 4 word alignments and the grow-diag-final-and combination heuristic (Koehn et al., 2007; Och and Ney, 2003; Koehn, 2002), which performed better than any alternative combination heuristic. The baseline estimates (Heuristic) come from extracting phrases up to length 7 from the word alignment." D08-1033,P03-1021,o,"The parameters for each phrase table were tuned separately using minimum error rate training (Och, 2003)." D08-1051,P03-1021,p,"An important contribution to interactive CAT technology was carried out around the TransType (TT) project (Langlais et al., 2002; Foster et al., 2002; Foster, 2002; Och et al., 2003)." D08-1051,P03-1021,o,"In the present work, we decided to use WSR instead of Key Stroke Ratio (KSR), which is used in other works on IMT such as (Och et al., 2003)." D08-1051,P03-1021,o,"combined in a log-linear fashion by adjusting a weight for each of them by means of the MERT (Och, 2003) procedure, optimising the BLEU (Papineni et al., 2002) score obtained on the development partition." D08-1051,P03-1021,o,"In (Och et al., 2003), the use of a word graph is proposed as interface between an alignment-template SMT model and the IMT engine." D08-1051,P03-1021,o,"This tolerant search uses the well known concept of Levenshtein distance in order to obtain the most similar string for the given prefix (see (Och et al., 2003) for more details)." D08-1060,P03-1021,o,"The standard Minimum Error Rate training (Och, 2003) was applied to tune the weights for all feature types." D08-1060,P03-1021,o,"We use MER (Och, 2003) to tune the decoder's parameters using a development data set."
D08-1065,P03-1021,o,"For each language pair, we use two development sets: one for Minimum Error Rate Training (Och, 2003; Macherey et al., 2008), and the other for tuning the scale factor for MBR decoding." D08-1065,P03-1021,o,"We then train word alignment models (Och and Ney, 2003) using 6 Model-1 iterations and 6 HMM iterations." D08-1066,P03-1021,o,"The heuristic estimator employs word-alignment (Giza++) (Och and Ney, 2003) and a few thumb rules for defining phrase pairs, and then extracts a multi-set of phrase pairs and estimates their conditional probabilities based on the counts in the multi-set." D08-1066,P03-1021,o,"The feature weights are optimized by Minimum-Error Training (MERT) (Och, 2003)." D08-1066,P03-1021,o,"For evaluation we use a state-of-the-art baseline system (Moses) (Hoang and Koehn, 2008) which works with a log-linear interpolation of feature functions optimized by MERT (Och, 2003)." D08-1066,P03-1021,o,"(Koehn et al., 2003; Och and Ney, 2004)." D08-1066,P03-1021,o,"These heuristics define a phrase pair to consist of source and target ngrams of a word-aligned source-target sentence pair such that if one end of an alignment is in the one ngram, the other end is in the other ngram (and there is at least one such alignment) (Och and Ney, 2004; Koehn et al., 2003)." D08-1076,P03-1021,o,"A class of training criteria that provides a tighter connection between the decision rule and the final error metric is known as Minimum Error Rate Training (MERT) and has been suggested for SMT in (Och, 2003)." D08-1076,P03-1021,o,"6 Related Work As suggested in (Och, 2003), an alternative method for the optimization of the unsmoothed error count is Powell's algorithm combined with a grid-based line optimization (Press et al., 2007, p. 509)."
D08-1076,P03-1021,o,"Assuming that the corpus-based error count for some translations e_1^S is additively decomposable into the error counts of the individual sentences, i.e., E(r_1^S, e_1^S) = Σ_{s=1}^{S} E(r_s, e_s), the MERT criterion is given as: λ̂_1^M = argmin_{λ_1^M} { Σ_{s=1}^{S} E(r_s, ê(f_s; λ_1^M)) } = argmin_{λ_1^M} { Σ_{s=1}^{S} Σ_{k=1}^{K} E(r_s, e_{s,k}) δ(ê(f_s; λ_1^M), e_{s,k}) } (3) with ê(f_s; λ_1^M) = argmax_e { Σ_{m=1}^{M} λ_m h_m(e, f_s) } (4) In (Och, 2003), it was shown that linear models can effectively be trained under the MERT criterion using a special line optimization algorithm." D08-1076,P03-1021,o,"Starting from an initial point λ_1^M, computing the most probable sentence hypothesis out of a set of K candidate translations C_s = {e_1, …, e_K} along the line λ_1^M + γ·d_1^M results in the following optimization problem (Och, 2003): ê(f_s; γ) = argmax_{e ∈ C_s} { (λ_1^M + γ·d_1^M)^T · h_1^M(e, f_s) } = argmax_{e ∈ C_s} { Σ_m λ_m h_m(e, f_s) [= a(e, f_s)] + γ · Σ_m d_m h_m(e, f_s) [= b(e, f_s)] } = argmax_{e ∈ C_s} { a(e, f_s) + γ · b(e, f_s) } (5) Hence, the total score (5) for any candidate translation corresponds to a line in the plane with γ as the independent variable." D08-1076,P03-1021,o,"The upper envelope is a convex hull and can be inscribed with a convex polygon whose edges are the segments of a piecewise linear function in γ (Papineni, 1999; Och, 2003): Env(f) = max_{e ∈ C} { a(e, f) + γ · b(e, f) }, γ ∈ R (6) Figure 1: The upper envelope (bold, red curve) for a set of lines is the convex hull which consists of the topmost line segments." D08-1088,P03-1021,o,"This operation can be used in applications like Minimum Error Rate Training (Och, 2003), or optimizing system combination as described by Hillard et al."
D08-1089,P03-1021,o,"Parameters were tuned with minimum error-rate training (Och, 2003) on the NIST evaluation set of 2006 (MT06) for both C-E and A-E." D08-1089,P03-1021,o,"1 Introduction Statistical phrase-based systems (Och and Ney, 2004; Koehn et al., 2003) have consistently delivered state-of-the-art performance in recent machine translation evaluations, yet these systems remain weak at handling word order changes." D08-1093,P03-1021,o,"Moreover, rather than predicting an intrinsic metric such as the PARSEVAL F-score, the metric that the predictor learns to predict can be chosen to better fit the final metric on which an end-to-end system is measured, in the style of (Och, 2003)." D09-1005,P03-1021,o,"7.2 Minimum-Risk Training Adjusting θ changes the distribution p. Minimum error rate training (MERT) (Och, 2003) tries to tune θ to minimize the BLEU loss of a decoder that chooses the most probable output according to p." D09-1006,P03-1021,o,Och (2003) shows that setting those weights should take into account the evaluation metric by which the MT system will eventually be judged. D09-1006,P03-1021,o,"(1) Och (2003) provides evidence that λ should be chosen by optimizing an objective function based on the evaluation metric of interest, rather than likelihood." D09-1006,P03-1021,o,"1 Introduction Many state-of-the-art machine translation (MT) systems over the past few years (Och and Ney, 2002; Koehn et al., 2003; Chiang, 2007; Koehn et al., 2007; Li et al., 2009) rely on several models to evaluate the goodness of a given candidate translation in the target language." D09-1021,P03-1021,o,"Both Pharaoh and our system have weights trained using MERT (Och, 2003) on sentences of length 30 words or less, to ensure that training and test conditions are matched." D09-1021,P03-1021,o,"However, the approach raises two major challenges: In practice, MERT training (Och, 2003) will be used to train relative weights for the different model components."
D09-1023,P03-1021,o,"Our approach permits an alternative to minimum error-rate training (MERT; Och, 2003); it is discriminative but handles latent structure and regularization in more principled ways." D09-1023,P03-1021,o,"We perform word alignment using GIZA++ (Och and Ney, 2003), symmetrize the alignments using the grow-diag-final-and heuristic, and extract phrases up to length 3." D09-1023,P03-1021,o,"The same probabilities are also included using 50 hard word classes derived from the parallel corpus using the GIZA++ mkcls utility (Och and Ney, 2003)." D09-1037,P03-1021,o,"The rules are then treated as events in a relative frequency estimate. We used Giza++ Model 4 to obtain word alignments (Och and Ney, 2003), using the grow-diag-final-and heuristic to symmetrise the two directional predictions (Koehn et al., 2003)." D09-1037,P03-1021,o,"No artificial glue-rules or rule span limits were employed. The parameters of the translation system were trained to maximize BLEU on the MT02 test set (Och, 2003)." D09-1039,P03-1021,o,"There has been some previous work on accuracy-driven training techniques for SMT, such as MERT (Och, 2003) and the Simplex Armijo Downhill method (Zhao and Chen, 2009), which tune the parameters in a linear combination of various phrase scores according to a held-out tuning set." D09-1040,P03-1021,o,"Feature weights were set with minimum error rate training (Och, 2003) on a development set using BLEU (Papineni et al., 2002) as the objective function." D09-1042,P03-1021,o,"Furthermore, WASP1++ employs minimum error rate training (Och, 2003) to directly optimize the evaluation metrics." D09-1073,P03-1021,o,"For the MER training (Och, 2003), we modify Koehn's MER trainer (Koehn, 2004) to train our system."
D09-1073,P03-1021,o,"Recently, many phrase reordering methods have been proposed, ranging from simple distance-based distortion model (Koehn et al., 2003; Och and Ney, 2004), flat reordering model (Wu, 1997; Zens et al., 2004), lexicalized reordering model (Tillmann, 2004; Kumar and Byrne, 2005), to hierarchical phrase-based model (Chiang, 2005; Setiawan et al., 2007) and classifier-based reordering model with linear features (Zens and Ney, 2006; Xiong et al., 2006; Zhang et al., 2007a; Xiong et al., 2008)." D09-1073,P03-1021,p,"1 Introduction Phrase-based method (Koehn et al., 2003; Och and Ney, 2004; Koehn et al., 2007) and syntax-based method (Wu, 1997; Yamada and Knight, 2001; Eisner, 2003; Chiang, 2005; Cowan et al., 2006; Marcu et al., 2006; Liu et al., 2007; Zhang et al., 2007c, 2008a, 2008b; Shen et al., 2008; Mi and Huang, 2008) represent the state-of-the-art technologies in statistical machine translation (SMT)." D09-1075,P03-1021,o,"Default parameters were used for all experiments except for the number of iterations for GIZA++ (Och and Ney, 2003)." D09-1075,P03-1021,o,"For practical reasons, the maximum size of a token was set at three for Chinese, and four for Korean. Minimum error rate training (Och, 2003) was run on each system afterwards and BLEU score (Papineni et al., 2002) was calculated on the test sets." D09-1076,P03-1021,o,"We train our feature weights using max-BLEU (Och, 2003) and decode with a CKY-based decoder that supports language model scoring directly integrated into the search." D09-1079,P03-1021,o,"We held out 300 sentences for minimum error rate training (MERT) (Och, 2003) and optimised the parameters of the feature functions of the decoder for each experimental run." D09-1105,P03-1021,o,"We used GIZA++ (Och and Ney, 2003) to align approximately 751,000 sentences from the German-English portion of the Europarl corpus (Koehn, 2005), in both the German-to-English and English-to-German directions."
D09-1105,P03-1021,o,"Moses used the development data for minimum error-rate training (Och, 2003) of its small number of parameters." D09-1108,P03-1021,o,"We use GIZA++ (Och and Ney, 2003) to do m-to-n word-alignment and adopt heuristic grow-diag-final-and to do refinement." D09-1108,P03-1021,o,"The feature weights are tuned by the modified Koehn's MER (Och, 2003; Koehn, 2007) trainer." D09-1111,P03-1021,o,"Their transliteration probability is: P(t|s) ∝ P_E(s|t) · max[P_T(t), P_L(t)] (1) Inspired by the linear models used in SMT (Och, 2003), we can discriminatively weight the components of this generative model, producing: w_E·log P_E(s|t) + w_T·log P_T(t) + w_L·log P_L(t) with weights w learned by perceptron training." D09-1111,P03-1021,p,"However, this is not unprecedented: discriminatively weighted generative models have been shown to outperform purely discriminative competitors in various NLP classification tasks (Raina et al., 2004; Toutanova, 2006), and remain the standard approach in statistical translation modeling (Och, 2003)." D09-1111,P03-1021,o,"Note that generative hybrids are the norm in SMT, where translation scores are provided by a discriminative combination of generative models (Och, 2003)." D09-1114,P03-1021,o,"Since we also adopt a linear scoring function in Equation (3), the feature weights of our combination model can also be tuned on a development data set to optimize the specified evaluation metrics using the standard Minimum Error Rate Training (MERT) algorithm (Och 2003)." D09-1114,P03-1021,o,"Parameters were tuned with MERT algorithm (Och, 2003) on the NIST evaluation set of 2003 (MT03) for both the baseline systems and the system combination model." D09-1114,P03-1021,o,"GIZA++ toolkit (Och and Ney, 2003) is used to perform word alignment in both directions with default settings, and the intersect-diag-grow method is used to generate symmetric word alignment refinement."
D09-1117,P03-1021,o,"The system was trained in a standard manner, using a minimum error-rate training (MERT) procedure (Och, 2003) with respect to the BLEU score (Papineni et al., 2001) on held-out development data to optimize the loglinear model weights." D09-1125,P03-1021,o,"Then the same system weights are applied to both IncHMM and Joint Decoding -based approaches, and the feature weights of them are trained using the max-BLEU training method proposed by Och (2003) and refined by Moore and Quirk (2008)." D09-1141,P03-1021,o,"We then built separate directed word alignments for English→X and X→English (X ∈ {Indonesian, Spanish}) using IBM model 4 (Brown et al., 1993), combined them using the intersect+grow heuristic (Och and Ney, 2003), and extracted phrase-level translation pairs of maximum length seven using the alignment template approach (Och and Ney, 2004)." D09-1141,P03-1021,o,"We set all weights by optimizing Bleu (Papineni et al., 2002) using minimum error rate training (MERT) (Och, 2003) on a separate development set of 2,000 sentences (Indonesian or Spanish), and we used them in a beam search decoder (Koehn et al., 2007) to translate 2,000 test sentences (Indonesian or Spanish) into English." D09-1147,P03-1021,n,"The ubiquitous minimum error rate training (MERT) approach optimizes Viterbi predictions, but does not explicitly boost the aggregated posterior probability of desirable n-grams (Och, 2003)." D09-1147,P03-1021,o,"We extract a phrase table using the Moses pipeline, based on Model 4 word alignments generated from GIZA++ (Och and Ney, 2003)." E06-1006,P03-1021,o,"Phrases are then extracted from the word alignments using the method described in (Och and Ney, 2003)." E06-1006,P03-1021,o,"The score combination weights are trained by a minimum error rate training procedure similar to (Och and Ney, 2003)."
E06-1032,P03-1021,p,"The remaining six entries were all fully automatic machine translation systems; in fact, they were all phrase-based statistical machine translation systems that had been trained on the same parallel corpus and most used Bleu-based minimum error rate training (Och, 2003) to optimize the weights of their log-linear models' feature functions (Och and Ney, 2002)." E06-1032,P03-1021,o,"For example, work which failed to detect improvements in translation quality with the integration of word sense disambiguation (Carpuat and Wu, 2005), or work which attempted to integrate syntactic information but which failed to improve Bleu (Charniak et al. , 2003; Och et al. , 2004) may deserve a second look with a more targeted manual evaluation." E06-1032,P03-1021,p,"The statistical machine translation community relies on the Bleu metric for the purposes of evaluating incremental system changes and optimizing systems through minimum error rate training (Och, 2003)." E06-2002,P03-1021,o,"This preprocessing step can be accomplished by applying the GIZA++ toolkit (Och and Ney, 2003) that provides Viterbi alignments based on IBM Model-4." E06-2002,P03-1021,o,"Starting from the parallel training corpus, provided with direct and inverted alignments, the so-called union alignment (Och and Ney, 2003) is computed." E09-1011,P03-1021,o,"We tune using Och's algorithm (Och, 2003) to optimize weights for the distortion model, language model, phrase translation model and word penalty over the BLEU metric (Papineni et al., 2001)." E09-1033,P03-1021,p,Och (2003) has described an efficient exact one-dimensional accuracy maximization technique for a similar search problem in machine translation. E09-1033,P03-1021,o,"Due to space we do not describe step 8 in detail (see (Och, 2003))."
E09-1033,P03-1021,o,"Table 1: Average F1 of 7-way cross-validation To generate the alignments, we used Model 4 (Brown et al., 1993), as implemented in GIZA++ (Och and Ney, 2003)." E09-1044,P03-1021,o,"MET (Och, 2003) iterative parameter estimation under IBM BLEU is performed on the development set." E09-1063,P03-1021,o,"5.3 Baseline System We conducted experiments using different segmenters with a standard log-linear PB-SMT model: GIZA++ implementation of IBM word alignment model 4 (Och and Ney, 2003), the refinement and phrase-extraction heuristics described in (Koehn et al., 2003), minimum-error-rate training (Och, 2003), a 5-gram language model with Kneser-Ney smoothing trained with SRILM (Stolcke, 2002) on the English side of the training data, and Moses (Koehn et al., 2007; Dyer et al., 2008) to translate both single best segmentation and word lattices." E09-3008,P03-1021,o,"The tools used are the Moses toolkit (Koehn et al., 2007) for decoding and training, GIZA++ for word alignment (Och and Ney, 2003), and SRILM (Stolcke, 2002) for language models." E09-3008,P03-1021,o,"To tune feature weights minimum error rate training is used (Och, 2003), optimized against the Neva metric (Forsbom, 2003)." H05-1012,P03-1021,p,"Current state of the art machine translation systems (Och, 2003) use phrasal (n-gram) features extracted automatically from parallel corpora." H05-1012,P03-1021,o,"Although there is a modest cost associated with annotating data, we show that a reduction of 40% relative in alignment error (AER) is possible over the GIZA++ aligner (Och and Ney, 2003)." H05-1021,P03-1021,o,"For the combined set (ALL), we also show the 95% BLEU confidence interval computed using bootstrap resampling (Och, 2003)."
H05-1021,P03-1021,o,"Finally we use Minimum Error Training (MET) (Och, 2003) to train log-linear scaling factors that are applied to the WFSTs in Equation 1." H05-1022,P03-1021,o,"5 Phrase Pair Induction A common approach to phrase-based translation is to extract an inventory of phrase pairs (PPI) from bitext (Koehn et al. , 2003). For example, in the phrase-extract algorithm (Och, 2002), a word alignment a_1^m is generated over the bitext, and all word subsequences e_{i1}^{i2} and f_{j1}^{j2} are found that satisfy: ∀ a_j ∈ a_1^m : a_j ∈ [i1, i2] iff j ∈ [j1, j2]." H05-1022,P03-1021,o,"Pooling the sets to form two large CE and AE test sets, the AE system improvements are significant at a 95% level (Och, 2003); the CE systems are only equivalent." H05-1022,P03-1021,o,"The hallucination process is motivated by the use of NULL alignments into Markov alignment models as done by (Och and Ney, 2003)." H05-1022,P03-1021,o,"Alignment performance is measured by the Alignment Error Rate (AER) (Och and Ney, 2003) AER(B; B′) = 1 − 2|B ∩ B′| / (|B| + |B′|) where B is the set of reference word links, and B′ are the word links generated automatically." H05-1027,P03-1021,o,The line search is an extension of that described in (Och 2003; Quirk et al. 2005). H05-1027,P03-1021,o,3.3 Grid Line Search Our implementation of a grid search is a modified version of that proposed in (Och 2003). H05-1027,P03-1021,o,"The modifications are made to deal with the efficiency issue due to the fact that there is a very large number of features and training samples in our task, compared to only 8 features used in (Och 2003)." H05-1034,P03-1021,o,MSR thus adopts the method proposed by Och (2003). H05-1087,P03-1021,o,"This is analogous, and in a certain sense equivalent, to empirical risk minimization, which has been used successfully in related areas, such as speech recognition (Rahim and Lee, 1997), language modeling (Paciorek and Rosenfeld, 2000), and machine translation (Och, 2003)."
H05-1095,P03-1021,o,"A first family of libraries was based on a word alignment A, produced using the Refined method described in (Och and Ney, 2003) (combination of two IBM-Viterbi alignments): we call these the A libraries." H05-1095,P03-1021,o,"The first is to align the words using a standard word alignment technique, such as the Refined Method described in (Och and Ney, 2003) (the intersection of two IBM Viterbi alignments, forward and reverse, enriched with alignments from the union) and then generate bi-phrases by combining together individual alignments that co-occur in the same pair of sentences." H05-1095,P03-1021,o,"This is the strategy that is usually adopted in other phrase-based MT approaches (Zens and Ney, 2003; Och and Ney, 2004)." H05-1095,P03-1021,o,"Instead, and as suggested by Och (2003), we chose to maximize directly the quality of the translations produced by the system, as measured with a machine translation evaluation metric." H05-1095,P03-1021,o,"1 Introduction Possibly the most remarkable evolution of recent years in statistical machine translation is the step from word-based models to phrase-based models (Och et al. , 1999; Marcu and Wong, 2002; Yamada and Knight, 2002; Tillmann and Xia, 2003)." H05-1096,P03-1021,o,"Nowadays, most of the state-of-the-art SMT systems are based on bilingual phrases (Bertoldi et al. , 2004; Koehn et al. , 2003; Och and Ney, 2004; Tillmann, 2003; Vogel et al. , 2004; Zens and Ney, 2004)." H05-1096,P03-1021,o,"The model scaling factors λ1, …, λ5 and the word and phrase penalties are optimized with respect to some evaluation criterion (Och, 2003), e.g. BLEU score." H05-1098,P03-1021,o,"The feature weights are learned by maximizing the BLEU score (Papineni et al. , 2002) on held-out data, using minimum-error-rate training (Och, 2003) as implemented by Koehn."
H05-1098,P03-1021,o,"5 Analysis Over the last few years, several automatic metrics for machine translation evaluation have been introduced, largely to reduce the human cost of iterative system evaluation during the development cycle (Lin and Och, 2004; Melamed et al. , 2003; Papineni et al. , 2002)." I05-2039,P03-1021,o,"It has a lower bound of 0, no upper bound, better scores indicate better translations, and it tends to be highly correlated with the adequacy of outputs ; mWER (Och 2003) or Multiple Word Error Rate is the edit distance in words between the system output and the closest reference translation in a set." I08-1030,P03-1021,o,"2 Phrase-based statistical machine translation Phrase-based SMT uses a framework of log-linear models (Och, 2003) to integrate multiple features." I08-1030,P03-1021,o,"In the training phase, bilingual parallel sentences are preprocessed and aligned using alignment algorithms or tools such as GIZA++ (Och and Ney, 2003)." I08-1067,P03-1021,o,"The weights for the various components of the model (phrase translation model, language model, distortion model etc.) are set by minimum error rate training (Och, 2003)." I08-2087,P03-1021,o,The corresponding weight is trained through minimum error rate method (Och 2003). I08-2087,P03-1021,o,"(2003), bilingual sentences are trained by GIZA++ (Och and Ney 2003) in two directions (from source to target and target to source)." I08-2088,P03-1021,o,"We used the preprocessed data to train the phrase-based translation model by using GIZA++ (Och and Ney, 2003) and the Pharaoh tool kit (Koehn et al., 2003)." I08-2088,P03-1021,o,"3.2.2 Features We used eight features (Och and Ney, 2003; Koehn et al., 2003) and their weights for the translations." 
I08-2088,P03-1021,p,"Target language model probability (weight = 0.5) According to a previous study, the minimum error rate training (MERT) (Och, 2003), which is the optimization of feature weights by maximizing the BLEU score on the development set, can improve the performance of a system." I08-4028,P03-1021,o,"The decision rule here is: Ŵ = argmax_W {Pr(W|C)} = argmax_W {Σ_{m=1}^{M} λ_m h_m(W, C)} (3) The parameters λ_1^M of this model can be optimized by standard approaches, such as the Minimum Error Rate Training used in machine translation (Och, 2003)." J04-4002,P03-1021,o,"A comparison of the two approaches can be found in Koehn, Och, and Marcu (2003)." J04-4002,P03-1021,p,An efficient algorithm for performing this tuning for a larger number of model parameters can be found in Och (2003). J04-4002,P03-1021,o,"Looking at the results of the recent machine translation evaluations, this approach seems currently to give the best results, and an increasing number of researchers are working on different methods for learning phrase translation lexica for machine translation purposes (Marcu and Wong 2002; Venugopal, Vogel, and Waibel 2003; Tillmann 2003; Koehn, Och, and Marcu 2003)." J04-4002,P03-1021,o,An alternative training criterion therefore directly optimizes translation quality as measured by an automatic evaluation criterion (Och 2003). J04-4002,P03-1021,o,(1993) and Och and Ney (2003). J04-4002,P03-1021,o,"The alignment a_1^J that has the highest probability (under a certain model) is also called the Viterbi alignment (of that model): â_1^J = argmax_{a_1^J} p(f_1^J, a_1^J | e_1^I) (8) A detailed comparison of the quality of these Viterbi alignments for various statistical alignment models compared to human-made word alignments can be found in Och and Ney (2003)." J05-4003,P03-1021,o,"Using this alignment strategy, we follow (Och and Ney 2003) and compute one alignment for each translation direction (f→e and e→f), and then combine them."
J05-4003,P03-1021,o,All our MT systems were trained using a variant of the alignment template model described in (Och 2003). J05-4005,P03-1021,o,"It is also related to (log-)linear models described in Berger, Della Pietra, and Della Pietra (1996), Xue (2003); Och (2003), and Peng, Feng, and McCallum (2004)." J06-4002,P03-1021,o,"Furthermore, statistical generation systems (Lapata 2003; Barzilay and Lee 2004; Karamanis and Manurung 2002; Mellish et al. 1998) could use as a means of directly optimizing information ordering, much in the same way MT systems optimize model parameters using BLEU as a measure of translation quality (Och 2003)." J07-1003,P03-1021,o,These weights or scaling factors can be optimized with respect to some evaluation criterion (Och 2003). J07-1003,P03-1021,o,"Nowadays, most state-of-the-art SMT systems are based on bilingual phrases (Och, Tillmann, and Ney 1999; Koehn, Och, and Marcu 2003; Tillmann 2003; Bertoldi et al. 2004; Vogel et al. 2004; Zens and Ney 2004; Chiang 2005)." J07-1003,P03-1021,o,"The model scaling factors λ1, …, λ5 and the word and phrase penalties are optimized with respect to some evaluation criterion (Och 2003) such as BLEU score." J07-2003,P03-1021,o,"4.2 Features For our experiments, we use a feature set analogous to the default feature set of Pharaoh (Koehn, Och, and Marcu 2003)." J07-2003,P03-1021,o,"The rules extracted from the training bitext have the following features: • P(γ|α) and P(α|γ), the latter of which is not found in the noisy-channel model, but has been previously found to be a helpful feature (Och and Ney 2002); 210 Chiang Hierarchical Phrase-Based Translation • the lexical weights P_w(γ|α) and P_w(α|γ), which estimate how well the words in α translate the words in γ (Koehn, Och, and Marcu 2003); 4 • a penalty exp(1) for extracted rules, analogous to Koehns phrase penalty (Koehn 2003), which allows the model to learn a preference for longer or shorter derivations."
J07-2003,P03-1021,o,"Finally, the parameters i of the log-linear model (18) are learned by minimumerror-rate training (Och 2003), which tries to set the parameters so as to maximize the BLEU score (Papineni et al. 2002) of a development set." J07-2003,P03-1021,o,"But Koehn, Och, and Marcu (2003) find that phrases longer than three words improve performance little for training corpora of up to 20 million words, suggesting that the data may be too sparse to learn longer phrases." J07-2003,P03-1021,o,"Above the phrase level, some models perform no reordering (Zens and Ney 2004; Kumar, Deng, and Byrne 2006), some have a simple distortion model that reorders phrases independently of their content (Koehn, Och, and Marcu 2003; Och and Ney 2004), and some, for example, the Alignment Template System (Och et al. 2004; Thayer et al. 2004), hereafter ATS, and the IBM phrase-based system (Tillmann 2004; Tillmann and Zhang 2005), have phrase-reordering models that add some lexical sensitivity." J07-2003,P03-1021,o,"Phrases of up to 10 in length on the French side were extracted from the parallel text, and minimum-error-rate training (Och 2003) was 8 We can train on the full training data shown if tighter constraints are placed on rule extraction for the United Nations data." J07-2003,P03-1021,p,Other insights borrowed from the current state of the art include minimum-error-rate training of log-linear models (Och and Ney 2002; Och 2003) and use of an m-gram language model. J07-3002,P03-1021,o,Some of the alignment sets also have links which are not Sure links but are Possible links (Och and Ney 2003). J07-3002,P03-1021,o,"We also have an additional held-out translation set, the development set, which is employed by the MT system to train the weights of its log-linear model to maximize BLEU (Och 2003)." 
J07-3002,P03-1021,o,"The training data for the French/English data set is taken from the LDC Canadian Hansard data set, from which the word aligned data (presented in Och and Ney 2003) was also taken." J07-3002,P03-1021,o,"294 Fraser and Marcu Measuring Word Alignment Quality for Statistical Machine Translation 2.2 Measuring Translation Performance Changes Caused By Alignment In phrased-based SMT (Koehn, Och, and Marcu 2003) the knowledge sources which vary with the word alignment are the phrase translation lexicon (which maps source phrases to target phrases using counts from the word alignment) and some of the word level translation parameters (sometimes called lexical smoothing)." J07-3002,P03-1021,o,"The weights of the different knowledge sources in the log-linear model used by our system are trained using Maximum BLEU (Och 2003), which we run for 25 iterations individually for each system." J07-3002,P03-1021,o,"To generate word alignments we use GIZA++ (Och and Ney 2003), which implements both the IBM Models of Brown et al." J07-3002,P03-1021,o,The output of GIZA++ is then post-processed using the three symmetrization heuristics described in Och and Ney (2003). J07-3002,P03-1021,o,Word Alignment Quality Metrics 3.1 Alignment Error Rate is Not a Useful Measure We begin our study of metrics for word alignment quality by testing AER (Och and Ney 2003). J07-3002,P03-1021,o,Och and Ney (2003) state that AER is derived from F-Measure. N04-1008,P03-1021,p,"4.4.1 N-gram Co-Occurrence Statistics for Answer Extraction N-gram co-occurrence statistics have been successfully used in automatic evaluation (Papineni et al. 2002, Lin and Hovy 2003), and more recently as training criteria in statistical machine translation (Och 2003)." N04-1021,P03-1021,o,"However, certain properties of the BLEU metric can be exploited to speed up search, as described in detail by Och (2003)." 
N04-1022,P03-1021,o,"For all performance metrics, we show the 70% confidence interval with respect to the MAP baseline computed using bootstrap resampling (Press et al. , 2002; Och, 2003)." N04-1022,P03-1021,o,Och (2003) developed a training procedure that incorporates various MT evaluation criteria in the training procedure of log-linear MT models. N04-1023,P03-1021,p,"Recently so-called reranking techniques, such as maximum entropy models (Och and Ney, 2002) and gradient methods (Och, 2003), have been applied to machine translation (MT), and have provided significant improvements." N04-1023,P03-1021,o,"The minimum error training (Och, 2003) was used on the development data for parameter estimation." N04-1023,P03-1021,o,"Six features from (Och, 2003) were used as baseline features." N04-1023,P03-1021,o,"In our experiments, we will use 4 different kinds of feature combinations: a157 Baseline: The 6 baseline features used in (Och, 2003), such as cost of word penalty, cost of aligned template penalty." N04-1023,P03-1021,o,Och (2003) described the use of minimum error training directly optimizing the error rate on automatic MT evaluation metrics such as BLEU. N04-1023,P03-1021,o,"SMT Team (2003) also used minimum error training as in Och (2003), but used a large number of feature functions." N04-1023,P03-1021,o,"By reranking a 1000-best list generated by the baseline MT system from Och (2003), the BLEU (Papineni et al. , 2001) score on the test dataset was improved from 31.6% to 32.9%." N04-1033,P03-1021,o,"The model scaling factors are optimized on the development corpus with respect to mWER similar to (Och, 2003)." N04-1033,P03-1021,n,"This method has the advantage that it is not limited to the model scaling factors as the method described in (Och, 2003)." N04-1033,P03-1021,o,"Alternatively, one can train them with respect to the final translation quality measured by some error criterion (Och, 2003)." 
N06-1002,P03-1021,o,"Word alignments were produced by GIZA++ (Och and Ney 2003) with a standard training regimen of five iterations of Model 1, five iterations of the HMM Model, and five iterations of Model 4, in both directions." N06-1002,P03-1021,o,"Finally we trained model weights by maximizing BLEU (Och 2003) and set decoder optimization parameters (n-best list size, timeouts 14 etc) on a development test set of 200 held-out sentences each with a single reference translation." N06-1002,P03-1021,o,"We used the heuristic combination described in (Och and Ney 2003) and extracted phrasal translation pairs from this combined alignment as described in (Koehn et al. , 2003)." N06-1002,P03-1021,o,Model weights were also trained following Och (2003). N06-1003,P03-1021,o,"2 The Problem of Coverage in SMT Statistical machine translation made considerable advances in translation quality with the introduction of phrase-based translation (Marcu and Wong, 2002; Koehn et al. , 2003; Och and Ney, 2004)." N06-1003,P03-1021,o,"To set the weights, m, we performed minimum error rate training (Och, 2003) on the development set using Bleu (Papineni et al. , 2002) as the objective function." N06-1004,P03-1021,o,"Weights on the components were assigned using the (Och, 2003) method for max-BLEU training on the development set." N06-1004,P03-1021,o,"1 Introduction: Defining SCMs The work presented here was done in the context of phrase-based MT (Koehn et al. , 2003; Och and Ney, 2004)." N06-1013,P03-1021,o,"The parameters of the MT system were optimized on MTEval02 data using minimum error rate training (Och, 2003)." N06-1013,P03-1021,p,"In a later study, Och and Ney (2003) present a loglinear combination of the HMM and IBM Model 4 that produces better alignments than either of those." 
N06-1013,P03-1021,o,"1 Introduction Word alignmentdetection of corresponding words between two sentences that are translations of each otheris usually an intermediate step of statistical machine translation (MT) (Brown et al. , 1993; Och and Ney, 2003; Koehn et al. , 2003), but also has been shown useful for other applications such as construction of bilingual lexicons, word-sense disambiguation, projection of resources, and crosslanguage information retrieval." N06-1013,P03-1021,o,"Maximum entropy (ME) models have been used in bilingual sense disambiguation, word reordering, and sentence segmentation (Berger et al. , 1996), parsing, POS tagging and PP attachment (Ratnaparkhi, 1998), machine translation (Och and Ney, 2002), and FrameNet classification (Fleischman et al. , 2003)." N06-1032,P03-1021,o,"number of words in target string These statistics are combined into a log-linear model whose parameters are adjusted by minimum error rate training (Och, 2003)." N06-1032,P03-1021,o,Minimum-error-rate training was done using Koehns implementation of Ochs (2003) minimum-error-rate model. N06-1032,P03-1021,o,"(2003), and component weights are adjusted by minimum error rate training (Och, 2003)." N06-1032,P03-1021,o,"1 Introduction Recent approaches to statistical machine translation (SMT) piggyback on the central concepts of phrasebased SMT (Och et al. , 1999; Koehn et al. , 2003) and at the same time attempt to improve some of its shortcomings by incorporating syntactic knowledge in the translation process." N06-2013,P03-1021,o,"Decoding weights are optimized using Ochs algorithm (Och, 2003) to set weights for the four components of the log-linear model: language model, phrase translation model, distortion model, and word-length feature." N06-3004,P03-1021,o,"This is also true for reranking and discriminative training, where the k-best list of candidates serves as an approximation of the full set (Collins, 2000; Och, 2003; McDonald et al. , 2005)." 
N07-1005,P03-1021,o,"Many methods for calculating the similarity have been proposed (Niessen et al. , 2000; Akiba et al. , 2001; Papineni et al. , 2002; NIST, 2002; Leusch et al. , 2003; Turian et al. , 2003; Babych and Hartley, 2004; Lin and Och, 2004; Banerjee and Lavie, 2005; Gimenez et al. , 2005)." N07-1005,P03-1021,o,"In recent years, many researchers have tried to automatically evaluate the quality of MT and improve the performance of automatic MT evaluations (Niessen et al. , 2000; Akiba et al. , 2001; Papineni et al. , 2002; NIST, 2002; Leusch et al. , 2003; Turian et al. , 2003; Babych and Hartley, 2004; Lin and Och, 2004; Banerjee and Lavie, 2005; Gimenez et al. , 2005) because improving the performance of automatic MT evaluation is expected to enable us to use and improve MT systems efficiently." N07-1005,P03-1021,o,"For example, Och reported that the quality of MT results was improved by using automatic MT evaluation measures for the parameter tuning of an MT system (Och, 2003)." N07-1006,P03-1021,p,"This type of direct optimization is known as Minimum Error Rate Training (Och, 2003) in the MT community, and is an essential component in building the stateof-art MT systems." N07-1007,P03-1021,o,"(2003), a trigram target language model, an order model, word count, phrase count, average phrase size functions, and whole-sentence IBM Model 1 logprobabilities in both directions (Och et al. 2004)." N07-1007,P03-1021,o,The weights of these models are determined using the max-BLEU method described in Och (2003). N07-1007,P03-1021,o,"Most stateof-the-art SMT systems treat grammatical elements in exactly the same way as content words, and rely on general-purpose phrasal translations and target language models to generate these elements (e.g. , Och and Ney, 2002; Koehn et al. , 2003; Quirk et al. , 2005; Chiang, 2005; Galley et al. , 2006)." 
N07-1008,P03-1021,o,"Unlike MaxEnt training, the method (Och, 2003) used for estimating the weight vector for BLEU maximization are not computationally scalable for a large number of feature functions." N07-1008,P03-1021,o,"The f are trained using a held-out corpus using maximum BLEU training (Och, 2003)." N07-1022,P03-1021,o,"The model parameters are trained using minimum error-rate training (Och, 2003)." N07-1022,P03-1021,o,"In WASP, GIZA++ (Och and Ney, 2003) is used to obtain the best alignments from the training examples." N07-1029,P03-1021,p,"The modified Powells method has been previously used in optimizing the weights of a standard feature-based MT decoder in (Och, 2003) where a more efficient algorithm for log-linear models was proposed." N07-1029,P03-1021,o,"If the alignments are not available, they can be automatically generated; e.g., using GIZA++ (Och and Ney, 2003)." N07-1061,P03-1021,o,"This is the shared task baseline system for the 2006 NAACL/HLT workshop on statistical machine translation (Koehn and Monz, 2006) and consists of the Pharaoh decoder (Koehn, 2004), SRILM (Stolcke, 2002), GIZA++ (Och and Ney, 2003), mkcls (Och, 1999), Carmel,1 and a phrase model training code." N07-1061,P03-1021,o,"2 Phrase-based SMT We use a phrase-based SMT system, Pharaoh, (Koehn et al. , 2003; Koehn, 2004), which is based on a log-linear formulation (Och and Ney, 2002)." N07-1061,P03-1021,o,"To set the weights, m, we carried out minimum error rate training (Och, 2003) using BLEU (Papineni et al. , 2002) as the objective function." N07-1062,P03-1021,o,"The model scaling factors are optimized using minimum error rate training (Och, 2003)." N07-1063,P03-1021,o,"Parameters used to calculate P(D) are trained using MER training (Och, 2003) on development data." N07-1064,P03-1021,o,"Feature function weights in the loglinear model are set using Ochs minium error rate algorithm (Och, 2003)." 
N07-2022,P03-1021,o,"In order to improve translation quality, this tuning can be effectively performed by minimizing translation error over a development corpus for which manually translated references are available (Och, 2003)." N07-2022,P03-1021,o,"Unsupervised systems (Och and Ney, 2003; Liang et al. , 2006) are based on generative models trained with the EM algorithm." N07-2047,P03-1021,o,"Whilst, the parameters for the maximum entropy model are developed based on the minimum error rate training method (Och, 2003)." N07-2053,P03-1021,p,"Finally, to estimate the parameters i of the weighted linear model, we adopt the popular minimum error rate training procedure (Och, 2003) which directly optimizes translation quality as measured by the BLEU metric." N09-1013,P03-1021,o,"MET (Och, 2003) was carried out using a development set, and the BLEU score evaluated on two test sets." N09-1015,P03-1021,o,"The way a decoder constructs translation hypotheses is directly related to the weights for different model features in a SMT system, which are usually optimized for a given set of models with minimum error rate training (MERT) (Och, 2003) to achieve better translation performance." N09-1025,P03-1021,o,"The models are trained using the Margin Infused Relaxed Algorithm or MIRA (Crammer et al., 2006) instead of the standard minimum-error-rate training or MERT algorithm (Och, 2003)." N09-1027,P03-1021,o,"Feature weights vector are trained discriminatively in concert with the language model weight to maximize the BLEU (Papineni et al., 2002) automatic evaluation metric via Minimum Error Rate Training (MERT) (Och, 2003)." N09-1029,P03-1021,o,"We obtain aligned parallel sentences and the phrase table after the training of Moses, which includes running GIZA++ (Och and Ney, 2003), grow-diagonal-final symmetrization and phrase extraction (Koehn et al., 2005)." 
N09-1029,P03-1021,o,"To tune all lambda weights above, we perform minimum error rate training (Och, 2003) on the development set described in Section 7." N09-1047,P03-1021,o,"Their weights are optimized w.r.t. BLEU score using the algorithm described in (Och, 2003)." N09-1049,P03-1021,o,"Standard MET (Och, 2003) iterative parameter estimation under IBM BLEU (Papineni et al., 2001) is performed on the corresponding development set." N09-2001,P03-1021,o,"The component features are weighted to minimize a translation error criterion on a development set (Och, 2003)." N09-2001,P03-1021,o,"3 Experiments We built baseline systems using GIZA++ (Och and Ney, 2003), Moses phrase extraction with grow-diag-finalend heuristic (Koehn et al., 2007), a standard phrasebased decoder (Vogel, 2003), the SRI LM toolkit (Stolcke, 2002), the suffix-array language model (Zhang and Vogel, 2005), a distance-based word reordering model Algorithm 5 Rich Interruption Constraints (Coh5) Input: Source tree T, previous phrase fh, current phrase fh+1, coverage vector HC 1: Interruption False 2: ICount,VerbCount,NounCount 0 3: F the left and right-most tokens of fh 4: for each of f F do 5: Climb the dependency tree from f until you reach the highest node n such that fh+1 / T(n)." N09-2001,P03-1021,o,"All model weights were trained on development sets via minimum-error rate training (MERT) (Och, 2003) with 200 unique n-best lists and optimizing toward BLEU." N09-2006,P03-1021,o,"Starting from a N-Best list generated from a translation decoder, an optimizer, such as Minimum Error Rate (MER) (Och, 2003) training, proposes directions to search for a better weight-vector to combine feature functions." P04-1059,P03-1021,o,An alternative to linear models is the log-linear models suggested by Och (2003). P04-1059,P03-1021,o,"It is also related to loglinear models for machine translation (Och, 2003)." P04-1059,P03-1021,o,"For each feature function, there is a model parameter i . 
The best word segmentation W* is determined by the decision rule as W* = argmax_W Score(W, S, λ) = argmax_W Σ_{i=0}^{M} λ_i f_i(W, S) (2) Below we describe how to optimize λs. Our method is a discriminative approach inspired by the Minimum Error Rate Training method proposed in Och (2003)." P04-1078,P03-1021,o,"1 Introduction With the introduction of the BLEU metric for machine translation evaluation (Papineni et al, 2002), the advantages of doing automatic evaluation for various NLP applications have become increasingly appreciated: they allow for faster implement-evaluate cycles (by by-passing the human evaluation bottleneck), less variation in evaluation performance due to errors in human assessor judgment, and, not least, the possibility of hill-climbing on such metrics in order to improve system performance (Och 2003)." P05-1033,P03-1021,o,"We ran the trainer with its default settings (maximum phrase length 7), and then used Koehns implementation of minimum-error-rate training (Och, 2003) to tune the feature weights to maximize the systems BLEU score on our development set, yielding the values shown in Table 2." P05-1033,P03-1021,o,"Above the phrase level, these models typically have a simple distortion model that reorders phrases independently of their content (Och and Ney, 2004; Koehn et al. , 2003), or not at all (Zens and Ney, 2004; Kumar et al. , 2005)." P05-1033,P03-1021,o,"For our experiments we used the following features, analogous to Pharaohs default feature set: P(γ|α) and P(α|γ), the latter of which is not found in the noisy-channel model, but has been previously found to be a helpful feature (Och and Ney, 2002); the lexical weights Pw(γ|α) and Pw(α|γ) (Koehn et al. , 2003), which estimate how well the words in α translate the words in γ;2 a phrase penalty exp(1), which allows the model to learn a preference for longer or shorter derivations, analogous to Koehns phrase penalty (Koehn, 2003)."
P05-1033,P03-1021,o,"(2003), which is based on that of Och and Ney (2004)." P05-1033,P03-1021,o,"To do this, we first identify initial phrase pairs using the same criterion as previous systems (Och and Ney, 2004; Koehn et al. , 2003): Definition 1." P05-1057,P03-1021,o,"We used GIZA++ package (Och and Ney, 2003) to train IBM translation models." P05-1057,P03-1021,o,"After that, we used three types of methods for performing a symmetrization of IBM models: intersection, union, and refined methods (Och and Ney, 2003)." P05-1057,P03-1021,p,"Studies reveal that statistical alignment models outperform the simple Dice coefficient (Och and Ney, 2003)." P05-1057,P03-1021,p,"It is promising to optimize the model parameters directly with respect to AER as suggested in statistical machine translation (Och, 2003)." P05-1057,P03-1021,o,"Och and Ney (2003) proposed Model 6, a log-linear combination of IBM translation models and HMM model." P05-1057,P03-1021,o,"In order to incorporate a new dependency which contains extra information other than the bilingual sentence pair, we modify Eq.2 by adding a new variable v: Pr(a|e,f,v) = exp[Σ_{m=1}^{M} λ_m h_m(a,e,f,v)] / Σ_{a′} exp[Σ_{m=1}^{M} λ_m h_m(a′,e,f,v)] (4) Accordingly, we get a new decision rule: â = argmax_a {Σ_{m=1}^{M} λ_m h_m(a,e,f,v)} (5) Note that our log-linear models are different from Model 6 proposed by Och and Ney (2003), which defines the alignment problem as finding the alignment a that maximizes Pr(f, a|e) given e. 3 Feature Functions In this paper, we use IBM translation Model 3 as the base feature of our log-linear models." P05-1066,P03-1021,o,"In practice, when training the parameters of an SMT system, for example using the discriminative methods of (Och, 2003), the cost for skips of this kind is typically set to a very high value."
P05-1066,P03-1021,o,"For this reason there is currently a great deal of interest in methods which incorporate syntactic information within statistical machine translation systems (e.g. , see (Alshawi, 1996; Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Och et al. , 2004; Xia and McCord, 2004))." P05-1066,P03-1021,o,"More recently, phrase-based models (Och et al. , 1999; Marcu and Wong, 2002; Koehn et al. , 2003) have been proposed as a highly successful alternative to the IBM models." P05-1066,P03-1021,o,"Reranking methods have also been proposed as a method for using syntactic information (Koehn and Knight, 2003; Och et al. , 2004; Shen et al. , 2004)." P05-1066,P03-1021,o,"1 Introduction Recent research on statistical machine translation (SMT) has lead to the development of phrasebased systems (Och et al. , 1999; Marcu and Wong, 2002; Koehn et al. , 2003)." P05-1069,P03-1021,o,"Instead of directly minimizing error as in earlier work (Och, 2003), we decompose the decoding process into a sequence of local decision steps based on Eq." P05-1069,P03-1021,o,"As far as the log-linear combination of float features is concerned, similar training procedures have been proposed in (Och, 2003)." P05-1069,P03-1021,o,"2 Block Orientation Bigrams This section describes a phrase-based model for SMT similar to the models presented in (Koehn et al. , 2003; Och et al. , 1999; Tillmann and Xia, 2003)." P06-1001,P03-1021,o,"Decoding weights are optimized using Ochs algorithm (Och, 2003) to set weights for the four components of the loglinear model: language model, phrase translation model, distortion model, and word-length feature." P06-1002,P03-1021,o,"2 Related Work Starting with the IBM models (Brown et al. , 1993), researchers have developed various statistical word alignment systems based on different models, such as hidden Markov models (HMM) (Vogel et al. 
, 1996), log-linear models (Och and Ney, 2003), and similarity-based heuristic methods (Melamed, 2000)." P06-1002,P03-1021,o,"MT output was evaluated using the standard evaluation metric BLEU (Papineni et al. , 2002).2 The parameters of the MT System were optimized for BLEU metric on NIST MTEval2002 test sets using minimum error rate training (Och, 2003), and the systems were tested on NIST MTEval2003 test sets for both languages." P06-1028,P03-1021,o,"Several non-linear objective functions, such as F-score for text classification (Gao et al. , 2003), and BLEU-score and some other evaluation measures for statistical machine translation (Och, 2003), have been introduced with reference to the framework of MCE criterion training." P06-1032,P03-1021,o,"N-best results for phrasal alignment and ordering models in the decoder were optimized by lambda training via Maximum Bleu, along the lines described in (Och, 2003)." P06-1066,P03-1021,o,"One is distortion model (Och and Ney, 2004; Koehn et al. , 2003) which penalizes translations according to their jump distance instead of their content." P06-1066,P03-1021,o,Line 4 and 5 are similar to the phrase extraction algorithm by Och (2003b). P06-1066,P03-1021,o,"The k-best list is very important for the minimum error rate training (Och, 2003a) which is used for tuning the weights for our model." P06-1077,P03-1021,o,"5.1 Pharaoh The baseline system we used for comparison was Pharaoh (Koehn et al. , 2003; Koehn, 2004), a freely available decoder for phrase-based translation models: p(e|f) = p(f|e) pLM(e)LM pD(e,f)D length(e)W(e) (10) We ran GIZA++ (Och and Ney, 2000) on the training corpus in both directions using its default setting, and then applied the refinement rule diagand described in (Koehn et al. , 2003) to obtain a single many-to-many word alignment for each sentence pair." 
P06-1077,P03-1021,o,"To perform minimum error rate training (Och, 2003) to tune the feature weights to maximize the system's BLEU score on development set, we used optimizeV5IBMBLEU.m (Venugopal and Vogel, 2005)." P06-1077,P03-1021,p,"1 Introduction Phrase-based translation models (Marcu and Wong, 2002; Koehn et al. , 2003; Och and Ney, 2004), which go beyond the original IBM translation models (Brown et al. , 1993) 1 by modeling translations of phrases rather than individual words, have been suggested to be the state-of-the-art in statistical machine translation by empirical evaluations." P06-1091,P03-1021,p,"While error-driven training techniques are commonly used to improve the performance of phrase-based translation systems (Chiang, 2005; Och, 2003), this paper presents a novel block sequence translation approach to SMT that is similar to sequential natural language annotation problems such as part-of-speech tagging or shallow parsing, both in modeling and parameter training." P06-1091,P03-1021,o,"The current approach does not use specialized probability features as in (Och, 2003) in any stage during decoder parameter training." P06-1091,P03-1021,o,"The novel algorithm differs computationally from earlier work in discriminative training algorithms for SMT (Och, 2003) as follows: No computationally expensive N-best lists are generated during training: for each input sentence a single block sequence is generated on each iteration over the training data." P06-1091,P03-1021,o,"Although the training algorithm can handle real-valued features as used in (Och, 2003; Tillmann and Zhang, 2005) the current paper intentionally excludes them." P06-1096,P03-1021,n,"Unlike minimum error rate training (Och, 2003), our system is able to exploit large numbers of specific features in the same manner as static reranking systems (Shen et al. , 2004; Och et al. , 2004)."
P06-1096,P03-1021,o,"We tuned Pharaoh's four parameters using minimum error rate training (Och, 2003) on DEV.12 We obtained an increase of 0.8 9As in the POS features, we map each phrase pair to its majority constellation." P06-1096,P03-1021,o,"The first approach is to reuse the components of a generative model, but tune their relative weights in a discriminative fashion (Och and Ney, 2002; Och, 2003; Chiang, 2005)." P06-1097,P03-1021,o,"We run Maximum BLEU (Och, 2003) for 25 iterations individually for each system." P06-1097,P03-1021,o,"However, union and refined alignments, which are many-to-many, are what are used to build competitive phrasal SMT systems, because intersection performs poorly, despite having been shown to have the best AER scores for the French/English corpus we are using (Och and Ney, 2003)." P06-1097,P03-1021,o,"An additional translation set called the Maximum BLEU set is employed by the SMT system to train the weights associated with the components of its log-linear model (Och, 2003)." P06-1097,P03-1021,o,"For each training direction, we run GIZA++ (Och and Ney, 2003), specifying 5 iterations of Model 1, 4 iterations of the HMM model (Vogel et al. , 1996), and 4 iterations of Model 4." P06-1097,P03-1021,o,"We use the union, refined and intersection heuristics defined in (Och and Ney, 2003) which are used in conjunction with IBM Model 4 as the baseline in virtually all recent work on word alignment." P06-1097,P03-1021,n,"1 Introduction The most widely applied training procedure for statistical machine translation IBM model 4 (Brown et al. , 1993) unsupervised training followed by post-processing with symmetrization heuristics (Och and Ney, 2003) yields low quality word alignments." P06-1097,P03-1021,p,"Och (2003) has described an efficient exact one-dimensional error minimization technique for a similar search problem in machine translation."
P06-1098,P03-1021,o,"Feature function scaling factors m are optimized based on a maximum likely approach (Och and Ney, 2002) or on a direct error minimization approach (Och, 2003)." P06-1098,P03-1021,o,"Many-to-many word alignments are induced by running a one-to-many word alignment model, such as GIZA++ (Och and Ney, 2003), in both directions and by combining the results based on a heuristic (Koehn et al. , 2003)." P06-1139,P03-1021,o,"When evaluated against the state-of-the-art, phrase-based decoder Pharaoh (Koehn, 2004), using the same experimental conditions translation table trained on the FBIS corpus (7.2M Chinese words and 9.2M English words of parallel text), trigram language model trained on 155M words of English newswire, interpolation weights a65 (Equation 2) trained using discriminative training (Och, 2003) (on the 2002 NIST MT evaluation set), probabilistic beam a90 set to 0.01, histogram beam a58 set to 10 and BLEU (Papineni et al. , 2002) as our metric, the WIDL-NGLM-Aa86 a129 algorithm produces translations that have a BLEU score of 0.2570, while Pharaoh translations have a BLEU score of 0.2635." P06-1139,P03-1021,o,"The interpolation weights a65 (Equation 2) are trained using discriminative training (Och, 2003) using ROUGEa129 as the objective function, on the development set." P06-2061,P03-1021,o,"A statistical prediction engine provides the completions to what a human translator types (Foster et al. , 1997; Och et al. , 2003)." P06-2061,P03-1021,o,"In the post-editing step, a prediction engine helps to decrease the amount of human interaction (Och et al. , 2003)." P06-2061,P03-1021,o,"For instance, the resulting word graph can be used in the prediction engine of a CAT system (Och et al. , 2003)." P06-2061,P03-1021,o,"The model scaling factors M1 are trained on a development corpus according to the final recognition quality measured by the word error rate (WER)(Och, 2003)." 
P06-2101,P03-1021,o,"To find the optimal coefficients for a loglinear combination of these experts, we use separate development data, using the following procedure due to Och (2003): 1." P06-2101,P03-1021,o,"Despite these difficulties, some work has shown it worthwhile to minimize error directly (Och, 2003; Bahl et al. , 1988)." P06-2101,P03-1021,o,"Och (2003) observed, however, that the piecewise-constant property could be exploited to characterize the function exhaustively along any line in parameter space, and hence to minimize it globally along that line." P06-2101,P03-1021,o,"Och (2003) found that such smoothing during training gives almost identical results on translation metrics." P06-2103,P03-1021,o,"The solution we employ here is the discriminative training procedure of Och (2003)." P06-2103,P03-1021,o,"There are two necessary ingredients to implement Och's (2003) training procedure." P06-2103,P03-1021,o,"In contrast, more recent research has focused on stochastic approaches that model discourse coherence at the local lexical (Lapata, 2003) and global levels (Barzilay and Lee, 2004), while preserving regularities recognized by classic discourse theories (Barzilay and Lapata, 2005)." P07-1004,P03-1021,o,"Their weights are optimized w.r.t. BLEU score using the algorithm described in (Och, 2003)." P07-1005,P03-1021,o,"To perform translation, state-of-the-art MT systems use a statistical phrase-based approach (Marcu and Wong, 2002; Koehn et al. , 2003; Och and Ney, 2004) by treating phrases as the basic units of translation." P07-1005,P03-1021,o,"6.1 Hiero Results Using the MT 2002 test set, we ran the minimum error rate training (MERT) (Och, 2003) with the decoder to tune the weights for each feature." P07-1024,P03-1021,o,"This setting is reminiscent of the problem of optimizing feature weights for reranking of candidate machine translation outputs, and we employ an optimization technique similar to that used by Och (2003) for machine translation."
P07-1037,P03-1021,o,"The NIST MT03 test set is used for development, particularly for optimizing the interpolation weights using Minimum Error Rate training (Och, 2003)." P07-1037,P03-1021,o,"Firstly, rather than induce millions of xRS rules from parallel data, we extract phrase pairs in the standard way (Och & Ney, 2003) and associate with each phrase-pair a set of target language syntactic structures based on supertag sequences." P07-1037,P03-1021,o,"The bidirectional word alignment is used to obtain phrase translation pairs using heuristics presented in (Och & Ney, 2003) and (Koehn et al. , 2003), and the Moses decoder was used for phrase extraction and decoding.3 Let t and s be the target and source language sentences respectively." P07-1037,P03-1021,o,"The bidirectional word alignment is used to obtain lexical phrase translation pairs using heuristics presented in (Och & Ney, 2003) and (Koehn et al. , 2003)." P07-1039,P03-1021,o,"4.3 Baseline We use a standard log-linear phrase-based statistical machine translation system as a baseline: GIZA++ implementation of IBM word alignment model 4 (Brown et al. , 1993; Och and Ney, 2003),8 the refinement and phrase-extraction heuristics described in (Koehn et al. , 2003), minimum-error-rate training (Footnote 7: More specifically, we choose the first English reference from the 7 references and the Chinese sentence to construct new sentence pairs.)" P07-1039,P03-1021,o,"Running words 1,864 14,437 Vocabulary size 569 1,081 Table 2: Chinese-English corpus statistics (Och, 2003) using Phramer (Olteanu et al. , 2006), a 3-gram language model with Kneser-Ney smoothing trained with SRILM (Stolcke, 2002) on the English side of the training data and Pharaoh (Koehn, 2004) with default settings to decode." P07-1039,P03-1021,o,"To quickly (and approximately) evaluate this phenomenon, we trained the statistical IBM word alignment model 4 (Brown et al.
, 1993),1 using the GIZA++ software (Och and Ney, 2003) for the following language pairs: Chinese-English, Italian-English, and Dutch-English, using the IWSLT-2006 corpus (Takezawa et al. , 2002; Paul, 2006) for the first two language pairs, and the Europarl corpus (Koehn, 2005) for the last one." P07-1039,P03-1021,o,"Figure 2: Examples of entries from the manually developed dictionary 4 Experimental Setting 4.1 Evaluation The intrinsic quality of word alignment can be assessed using the Alignment Error Rate (AER) metric (Och and Ney, 2003), that compares a system's alignment output to a set of gold-standard alignments." P07-1040,P03-1021,o,"The same Powell's method has been used to estimate feature weights of a standard feature-based phrasal MT decoder in (Och, 2003)." P07-1040,P03-1021,o,"In (Matusov et al. , 2006), different word orderings are taken into account by training alignment models by considering all hypothesis pairs as a parallel corpus using GIZA++ (Och and Ney, 2003)." P07-1059,P03-1021,o,"We present two approaches to SMT-based query expansion, both of which are implemented in the framework of phrase-based SMT (Och and Ney, 2004; Koehn et al. , 2003)." P07-1059,P03-1021,o,"4 SMT-Based Query Expansion Our SMT-based query expansion techniques are based on a recent implementation of the phrasebased SMT framework (Koehn et al. , 2003; Och and Ney, 2004)." P07-1059,P03-1021,o,"as follows: p(syn_1^I|trg_1^I) = ( ∏_{i=1}^{I} p(syn_i|trg_i)^λ p'(trg_i|syn_i)^λ' p_w(syn_i|trg_i)^λ_w p'_w(trg_i|syn_i)^λ'_w p_d(syn_i,trg_i)^λ_d ) lw(syn_1^I)^λ_l c(syn_1^I)^λ_c p_LM(syn_1^I)^λ_LM (4) For estimation of the feature weights vector defined in equation (4) we employed minimum error rate (MER) training under the BLEU measure (Och, 2003)."
P07-1089,P03-1021,o,"To perform minimum error rate training (Och, 2003) to tune the feature weights to maximize the system's BLEU score on development set, we used the script optimizeV5IBMBLEU.m (Venugopal and Vogel, 2005)." P07-1089,P03-1021,o,"We ran GIZA++ (Och and Ney, 2000) on the training corpus in both directions using its default setting, and then applied the refinement rule diag-and described in (Koehn et al. , 2003) to obtain a single many-to-many word alignment for each sentence pair." P07-1091,P03-1021,o,"All the feature weights (λs) were trained using our implementation of Minimum Error Rate Training (Och, 2003)." P07-1091,P03-1021,o,"We use the Stanford parser (Klein and Manning, 2003) with its default Chinese grammar, the GIZA++ (Och and Ney, 2000) alignment package with its default settings, and the ME tool developed by (Zhang, 2004)." P07-1092,P03-1021,o,"The parameters, λ_j, were trained using minimum error rate training (Och, 2003) to maximise the BLEU score (Papineni et al. , 2002) on a 150 sentence development set." P07-1092,P03-1021,o,"The translation models and lexical scores were estimated on the training corpus which was automatically aligned using Giza++ (Och et al. , 1999) in both directions between source and target and symmetrised using the growing heuristic (Koehn et al. , 2003)." P07-1092,P03-1021,o,"A single translation is then selected by finding the candidate that yields the best overall score (Och and Ney, 2001; Utiyama and Isahara, 2007) or by cotraining (Callison-Burch and Osborne, 2003)." P07-1092,P03-1021,o,"As an alternative to linear interpolation, we also employ a weighted product for phrase-table combination: p(s|t) ∝ ∏_j p_j(s|t)^λ_j (3) This has the same form used for log-linear training of SMT decoders (Och, 2003), which allows us to treat each distribution as a feature, and learn the mixing weights automatically." P07-1108,P03-1021,o,"1 Introduction For statistical machine translation (SMT), phrase-based methods (Koehn et al.
, 2003; Och and Ney, 2004) and syntax-based methods (Wu, 1997; Alshawi et al. 2000; Yamada and Knight, 2001; Melamed, 2004; Chiang, 2005; Quick et al. , 2005; Mellebeek et al. , 2006) outperform word-based methods (Brown et al. , 1993)." P07-1108,P03-1021,o,"We run the decoder with its default settings and then use Koehn's implementation of minimum error rate training (Och, 2003) to tune the feature weights on the development set." P07-1111,P03-1021,o,"We want to avoid training a metric that assigns a higher than deserving score to a sentence that just happens to have many n-gram matches against the target-language reference corpus. (Footnote 5: Or, in a less adversarial setting, a system may be performing minimum error-rate training (Och, 2003).)" P07-1111,P03-1021,o,"Metrics in the Rouge family allow for skip n-grams (Lin and Och, 2004a); Kauchak and Barzilay (2006) take paraphrasing into account; metrics such as METEOR (Banerjee and Lavie, 2005) and GTM (Melamed et al. , 2003) calculate both recall and precision; METEOR is also similar to SIA (Liu and Gildea, 2006) in that word class information is used." P07-2026,P03-1021,o,"The model scaling factors M1 are optimized with respect to the BLEU score as described in (Och, 2003)." P07-2045,P03-1021,o,"It also contains tools for tuning these models using minimum error rate training (Och 2003) and evaluating the resulting translations using the BLEU score (Papineni et al. 2002)." P07-2045,P03-1021,o,"Moses uses standard external tools for some of the tasks to avoid duplication, such as GIZA++ (Och and Ney 2003) for word alignments and SRILM for language modeling." P07-2046,P03-1021,o,"The weighting parameters of these features were optimized in terms of BLEU by the approach of minimum error rate training (Och, 2003)."
P07-2046,P03-1021,p,"1 Introduction Raw parallel data need to be preprocessed in the modern phrase-based SMT before they are aligned by alignment algorithms, one of which is the wellknown tool, GIZA++ (Och and Ney, 2003), for training IBM models (1-4)." P08-1009,P03-1021,o,"Word alignments are provided by GIZA++ (Och and Ney, 2003) with grow-diag-final combination, with infrastructure for alignment combination and phrase extraction provided by the shared task." P08-1009,P03-1021,o,"Candidate translations are scored by a linear combination of models, weighted according to Minimum Error Rate Training or MERT (Och, 2003)." P08-1009,P03-1021,o,"Early experiments with syntactically-informed phrases (Koehn et al., 2003), and syntactic reranking of K-best lists (Och et al., 2004) produced mostly negative results." P08-1012,P03-1021,o,"We also trained a baseline model with GIZA++ (Och and Ney, 2003) following a regimen of 5 iterations of Model 1, 5 iterations of HMM, and 5 iterations of Model 4." P08-1012,P03-1021,o,"Minimum Error Rate training (Och, 2003) over BLEU was used to optimize the weights for each of these models over the development test data." P08-1023,P03-1021,o,"We use the standard minimum error-rate training (Och, 2003) to tune the feature weights to maximize the systems BLEU score on the dev set." P08-1024,P03-1021,o,"However, while discriminative models promise much, they have not been shown to deliver significant gains 1We class approaches using minimum error rate training (Och, 2003) frequency count based as these systems re-scale a handful of generative features estimated from frequency counts and do not support large sets of non-independent features." P08-1049,P03-1021,o,"Once we obtain the augmented phrase table, we should run the minimum-error-rate training (Och, 2003) with the augmented phrase table such that the model parameters are properly adjusted." 
P08-1049,P03-1021,o,"The feature functions are combined under a log-linear framework, and the weights are tuned by the minimum-error-rate training (Och, 2003) using BLEU (Papineni et al., 2002) as the optimization metric." P08-1049,P03-1021,o,"4.5.2 BLEU on NIST MT Test Sets We use MT02 as the development set4 for minimum error rate training (MERT) (Och, 2003)." P08-1049,P03-1021,o,"Moreover, our approach integrates the abbreviation translation component into the baseline system in a natural way, and thus is able to make use of the minimum-error-rate training (Och, 2003) to automatically adjust the model parameters to reflect the change of the integrated system over the baseline system." P08-1059,P03-1021,o,"The features are similar to the ones used in phrasal systems, and their weights are trained using max-BLEU training (Och, 2003)." P08-1064,P03-1021,o,"For the MER training (Och, 2003), we modified Koehn's MER trainer (Koehn, 2004) for our tree sequence-based system." P08-1064,P03-1021,o,"1 Introduction Phrase-based modeling method (Koehn et al., 2003; Och and Ney, 2004a) is a simple, but powerful mechanism to machine translation since it can model local reorderings and translations of multiword expressions well." P08-1066,P03-1021,o,"Given sentence-aligned bi-lingual training data, we first use GIZA++ (Och and Ney, 2003) to generate word level alignment." P08-1066,P03-1021,o,"Following (Och, 2003), the k-best results are accumulated as the input of the optimizer." P08-1066,P03-1021,o,"Hierarchical rules were extracted from a subset which has about 35M/41M words5, and the rest of the training data were used to extract phrasal rules as in (Och, 2003; Chiang, 2005)." P08-1086,P03-1021,o,"The weights are trained using minimum error rate training (Och, 2003) with BLEU score as the objective function." P08-1087,P03-1021,o,"A Greek model was trained on 440,082 aligned sentences of Europarl v.3, tuned with Minimum Error Training (Och, 2003)."
P08-1102,P03-1021,o,"To obtain their corresponding weights, we adapted the minimum-error-rate training algorithm (Och, 2003) to train the outside-layer model." P08-1114,P03-1021,o,"Each λ_i is a weight associated with feature i, and these weights are typically optimized using minimum error rate training (Och, 2003)." P08-2010,P03-1021,o,"This shows that hypothesis features are either not discriminative enough, or that the reranking model is too weak. This performance gap can be mainly attributed to two problems: optimization error and modeling error (see Figure 1).1 Much work has focused on developing better algorithms to tackle the optimization problem (e.g. MERT (Och, 2003)), since MT evaluation metrics such as BLEU and PER are riddled with local minima and are difficult to differentiate with respect to re-ranker parameters." P08-2038,P03-1021,o,"For the efficiency of minimum-error-rate training (Och, 2003), we built our development set (580 sentences) using sentences not exceeding 50 characters from the NIST MT-02 evaluation test data." P08-2041,P03-1021,o,"We perform minimum-error-rate training (Och, 2003) to tune the feature weights of the translation model to maximize the BLEU score on development set." P09-1018,P03-1021,o,"We ran the decoder with its default settings and then used Moses' implementation of minimum error rate training (Och, 2003) to tune the feature weights on the development set." P09-1018,P03-1021,o,"λ_sp and λ_pt are feature weights set by performing minimum error rate training as described in Och (2003)." P09-1019,P03-1021,p,"Two popular techniques that incorporate the error criterion are Minimum Error Rate Training (MERT) (Och, 2003) and Minimum Bayes-Risk (MBR) decoding (Kumar and Byrne, 2004)." P09-1019,P03-1021,o,"A path in a translation hypergraph induces a translation hypothesis E along with its sequence of SCFG rules D = r1, r2, ..., rK which, if applied to the start symbol, derives E.
The sequence of SCFG rules induced by a path is also called a derivation tree for E. 3 Minimum Error Rate Training Given a set of source sentences F_1^S with corresponding reference translations R_1^S, the objective of MERT is to find a parameter set λ_1^M which minimizes an automated evaluation criterion under a linear model: λ_1^M = argmin_{λ_1^M} Σ_{s=1}^{S} Err(R_s, Ê(F_s; λ_1^M)), where Ê(F_s; λ_1^M) = argmax_E Σ_m λ_m h_m(E, F_s). In the context of statistical machine translation, the optimization procedure was first described in Och (2003) for N-best lists and later extended to phrase-lattices in Macherey et al." P09-1020,P03-1021,o,"GIZA++ (Och and Ney, 2003) and the heuristics grow-diag-final-and are used to generate m-to-n word alignments." P09-1020,P03-1021,o,"For the MER training (Och, 2003), Koehn's MER trainer (Koehn, 2007) is modified for our system." P09-1021,P03-1021,o,"The number of weights wi is 3 plus the number of source languages, and they are trained using minimum error-rate training (MERT) to maximize the BLEU score (Och, 2003) on a development set." P09-1021,P03-1021,o,"(Ueffing et al., 2007; Haffari et al., 2009) show that treating U+ as a source for a new feature function in a loglinear model for SMT (Och and Ney, 2004) allows us to maximally take advantage of unlabeled data by finding a weight for this feature using minimum error-rate training (MERT) (Och, 2003)." P09-1034,P03-1021,o,"We are currently investigating caching and optimizations that will enable the use of our metric for MT parameter tuning in a Minimum Error Rate Training setup (Och, 2003)." P09-1036,P03-1021,o,"These constituent matching/violation counts are used as a feature in the decoder's log-linear model and their weights are tuned via minimal error rate training (MERT) (Och, 2003)." P09-1064,P03-1021,o,"The model was trained using minimum error rate training for Arabic (Och, 2003) and MIRA for Chinese (Chiang et al., 2008)."
P09-1065,P03-1021,p,"4 Extended Minimum Error Rate Training Minimum error rate training (Och, 2003) is widely used to optimize feature weights for a linear model (Och and Ney, 2002)." P09-1065,P03-1021,o,"Instead of computing all intersections, Och (2003) only computes critical intersections where highest-score translations will change." P09-1065,P03-1021,o,"We obtained word alignments of training data by first running GIZA++ (Och and Ney, 2003) and then applying the refinement rule grow-diag-final-and (Koehn et al., 2003)." P09-1065,P03-1021,o,"As multiple derivations are used for finding optimal translations, we extend the minimum error rate training (MERT) algorithm (Och, 2003) to tune feature weights with respect to BLEU score for max-translation decoding (Section 4)." P09-1065,P03-1021,o,"While they train the parameters using a maximum a posteriori estimator, we extend the MERT algorithm (Och, 2003) to take the evaluation metric into account." P09-1065,P03-1021,o,"On the other hand, other authors (e.g., (Och and Ney, 2004; Koehn et al., 2003; Chiang, 2007)) do use the expression phrase-based models." P09-1066,P03-1021,o,"2.5 Model Training We adapt the Minimum Error Rate Training (MERT) (Och, 2003) algorithm to estimate parameters for each member model in co-decoding." P09-1067,P03-1021,o,"In the geometric interpolation above, the weight λ_n controls the relative veto power of the n-gram approximation and can be tuned using MERT (Och, 2003) or a minimum risk procedure (Smith and Eisner, 2006)." P09-1067,P03-1021,p,"The NIST MT03 set is used to tune model weights (e.g. those of (16)) and the scaling factor (Footnote 17: We have also experimented with MERT (Och, 2003), and found that the deterministic annealing gave results that were more consistent across runs and often better.)" P09-1087,P03-1021,o,"Parameter tuning was done with minimum error rate training (Och, 2003), which was used to maximize BLEU (Papineni et al., 2001)."
P09-1087,P03-1021,n,"1 Introduction Hierarchical approaches to machine translation have proven increasingly successful in recent years (Chiang, 2005; Marcu et al., 2006; Shen et al., 2008), and often outperform phrase-based systems (Och and Ney, 2004; Koehn et al., 2003) on target-language fluency and adequacy." P09-1088,P03-1021,o,"We use the GIZA++ implementation of IBM Model 4 (Brown et al., 1993; Och and Ney, 2003) coupled with the phrase extraction heuristics of Koehn et al." P09-1088,P03-1021,o,"The parameters of the NIST systems were tuned using Och's algorithm to maximize BLEU on the MT02 test set (Och, 2003)." P09-1090,P03-1021,o,"Tuning (learning the values discussed in section 4.1) was done using minimum error rate training (Och, 2003)." P09-1094,P03-1021,p,"3.6 Parameter Estimation To estimate parameters λ_k (1 ≤ k ≤ K), λ_lm, and λ_um, we adopt the approach of minimum error rate training (MERT) that is popular in SMT (Och, 2003)." P09-1104,P03-1021,o,"The pipeline extracts a Hiero-style synchronous context-free grammar (Chiang, 2007), employs suffix-array based rule extraction (Lopez, 2007), and tunes model parameters with minimum error rate training (Och, 2003)." P09-1106,P03-1021,o,"The weights of feature functions are optimized to maximize the scoring measure (Och, 2003)." P09-1106,P03-1021,o,"(2006, 2008) proposed using GIZA++ (Och and Ney, 2003) to align words between the backbone and hypothesis." P09-1108,P03-1021,o,"Uses for k-best lists include minimum Bayes risk decoding (Goodman, 1998; Kumar and Byrne, 2004), discriminative reranking (Collins, 2000; Charniak and Johnson, 2005), and discriminative training (Och, 2003; McClosky et al., 2006)." P09-2035,P03-1021,o,"We also use minimum error-rate training (Och, 2003) to tune our feature weights." P09-2058,P03-1021,o,"(2003) grow the set of word links by appending neighboring points, while Och and Ney (2003) try to avoid both horizontal and vertical neighbors."
P09-2058,P03-1021,o,"We train IBM Model-4 using GIZA++ toolkit (Och and Ney, 2003) in two translation directions and perform different word alignment combination." P09-2058,P03-1021,o,"The next two methods are heuristic (H) in (Och and Ney, 2003) and grow-diagonal (GD) proposed in (Koehn et al., 2003)." P09-2058,P03-1021,o,"We tune all feature weights automatically (Och, 2003) to maximize the BLEU (Papineni et al., 2002) score on the dev set." W04-1513,P03-1021,p,"By having the advantage of leveraging large parallel corpora, the statistical MT approach outperforms the traditional transfer based approaches in tasks for which adequate parallel corpora is available (Och, 2003)." W05-0814,P03-1021,o,"We applied the union, intersection and refined symmetrization metrics (Och and Ney, 2003) to the final alignments output from training, as well as evaluating the two final alignments directly." W05-0814,P03-1021,p,"We wish to minimize this error function, so we select λ accordingly: argmin_λ Σ_a E(a) δ(a, argmax_a p_λ(a, f|e)) (4) Maximizing performance for all of the weights at once is not computationally tractable, but (Och, 2003) has described an efficient one-dimensional search for a similar problem." W05-0814,P03-1021,o,"The discriminative training regimen is otherwise similar to (Och, 2003)." W05-0814,P03-1021,o,"The system used for baseline experiments is two runs of IBM Model 4 (Brown et al. , 1993) in the GIZA++ (Och and Ney, 2003) implementation, which includes smoothing extensions to Model 4." W05-0814,P03-1021,p,"For symmetrization, we found that Och and Ney's refined technique described in (Och and Ney, 2003) produced the best AER for this data set under all experimental conditions." W05-0820,P03-1021,p,"The field of statistical machine translation has been blessed with a long tradition of freely available software tools such as GIZA++ (Och and Ney, 2003) and parallel corpora such as the Canadian Hansards2."
W05-0820,P03-1021,o,"In addition, we also made a word alignment available, which was derived using a variant of the current default method for word alignment: Och and Ney (2003)'s refined method." W05-0820,P03-1021,o,"(2004)), better language-specific preprocessing (Koehn and Knight, 2003) and restructuring (Collins et al. , 2005), additional feature functions such as word class language models, and minimum error rate training (Och, 2003) to optimize parameters." W05-0822,P03-1021,o,"To set weights on the components of the loglinear model, we implemented Och's algorithm (Och, 2003)." W05-0833,P03-1021,o,"(Koehn et al. , 2003); (Och, 2003))." W05-0833,P03-1021,o,"In order to create the necessary SMT language and translation models, they used: Giza++ (Och & Ney, 2003; http://www.isi.edu/och/Giza++.html); the CMU-Cambridge statistical toolkit (http://mi.eng.cam.ac.uk/prc14/toolkit.html); the ISI ReWrite Decoder (http://www.isi.edu/licensed-sw/rewrite-decoder/). Translation was performed from English-French and French-English, and the resulting translations were evaluated using a range of automatic metrics: BLEU (Papineni et al. , 2002), Precision and Recall (Turian et al. , 2003), and Word- and Sentence Error Rates." W05-0833,P03-1021,o,"Accordingly, in this section we describe a set of experiments which extends the work of (Way and Gough, 2005) by evaluating the Marker-based EBMT system of (Gough & Way, 2004b) against a phrase-based SMT system built using the following components: Giza++, to extract the word-level correspondences; The Giza++ word alignments are then refined and used to extract phrasal alignments ((Och & Ney, 2003); or (Koehn et al. , 2003) for a more recent implementation); Probabilities of the extracted phrases are calculated from relative frequencies; The resulting phrase translation table is passed to the Pharaoh phrase-based SMT decoder which along with SRI language modelling toolkit5 performs translation."
W05-0834,P03-1021,o,"More details on these standard criteria can be found for instance in (Och, 2003)." W05-0834,P03-1021,o,"(Och et al. , 2003)." W05-0834,P03-1021,o,"The model scaling factors are optimized with respect to some evaluation criterion (Och, 2003)." W05-0836,P03-1021,o,"In this paper we will compare and evaluate several aspects of these techniques, focusing on Minimum Error Rate (MER) training (Och, 2003) and Minimum Bayes Risk (MBR) decision rules, within a novel training environment that isolates the impact of each component of these methods." W05-0836,P03-1021,o,"2.1 Minimum Error Rate Training The predominant approach to reconciling the mismatch between the MAP decision rule and the evaluation metric has been to train the parameters of the exponential model to correlate the MAP choice with the maximum score as indicated by the evaluation metric on a development set with known references (Och, 2003)." W05-0836,P03-1021,o,"In the following, we summarize the optimization algorithm for the unsmoothed error counts presented in (Och, 2003) and the implementation detailed in (Venugopal and Vogel, 2005)." W05-0836,P03-1021,o,"As discussed in (Och, 2003), the direct translation model represents the probability of target sentence English e = e_1 ... e_I being the translation for a source sentence French f = f_1 ... f_J through an exponential, or log-linear model p(e|f) = exp(Σ_{k=1}^{m} λ_k h_k(e,f)) / Σ_{e'∈E} exp(Σ_{k=1}^{m} λ_k h_k(e',f)) (1) where e is a single candidate translation for f from the set of all English translations E, λ is the parameter vector for the model, and each h_k is a feature function of e and f. In practice, we restrict E to the set Gen(f) which is a set of highly likely translations discovered by a decoder (Vogel et al. , 2003)."
W05-0908,P03-1021,o,"In the area of statistical machine translation (SMT), recently a combination of the BLEU evaluation metric (Papineni et al., 2001) and the bootstrap method for statistical significance testing (Efron and Tibshirani, 1993) has become popular (Och, 2003; Kumar and Byrne, 2004; Koehn, 2004b; Zhang et al., 2004)."
W05-0908,P03-1021,o,"Our system is a re-implementation of the phrase-based system described in Koehn (2003), and uses publicly available components for word alignment (Och and Ney, 2003)1, decoding (Koehn, 2004a)2, language modeling (Stolcke, 2002)3 and finite-state processing (Knight and Al-Onaizan, 1999)4."
W05-1506,P03-1021,o,"For example, Och (2003) shows how to train a log-linear translation model not by maximizing the likelihood of training data, but maximizing the BLEU score (among other metrics) of the model on the data."
W06-1606,P03-1021,o,"The weights of the models are computed automatically using a variant of the Maximum Bleu training procedure proposed by Och (2003)."
W06-1606,P03-1021,o,"The decoder is capable of producing n-best derivations and n-best lists (Knight and Graehl, 2005), which are used for Maximum Bleu training (Och, 2003)."
W06-1606,P03-1021,o,"We concatenate the lists and we learn a new combination of weights that maximizes the Bleu score of the combined n-best list using the same development corpus we used for tuning the individual systems (Och, 2003)."
W06-1606,P03-1021,p,"1 Introduction During the last four years, various implementations and extensions to phrase-based statistical models (Marcu and Wong, 2002; Koehn et al., 2003; Och and Ney, 2004) have led to significant increases in machine translation accuracy."
W06-1607,P03-1021,o,"To model p(t,a|s), we use a standard log-linear approach: p(t,a|s) ∝ exp[Σ_i λ_i f_i(s,t,a)] where each f_i(s,t,a) is a feature function, and weights λ_i are set using Och's algorithm (Och, 2003) to maximize the system's BLEU score (Papineni et al., 2001) on a development corpus."
W06-1607,P03-1021,o,"In fact, a limitation of the experiments described in this paper is that the log-linear weights for the glass-box techniques were optimized for BLEU using Och's algorithm (Och, 2003), while the linear weights for black-box techniques were set heuristically."
W06-1608,P03-1021,o,"The weights for these models are determined using the method described in (Och, 2003)."
W06-1615,P03-1021,p,"Furthermore, end-to-end systems like speech recognizers (Roark et al., 2004) and automatic translators (Och, 2003) use increasingly sophisticated discriminative models, which generalize well to new data that is drawn from the same distribution as the training data."
W06-2606,P03-1021,o,"Alternatively, one can train them with respect to the final translation quality measured by an error criterion (Och, 2003)."
W06-3103,P03-1021,o,"The model scaling factors λ_1^M are trained with respect to the final translation quality measured by an error criterion (Och, 2003)."
W06-3108,P03-1021,o,"The model scaling factors λ_1^M are trained with respect to the final translation quality measured by an error criterion (Och, 2003)."
W06-3108,P03-1021,o,"We train IBM Model 4 with GIZA++ (Och and Ney, 2003) in both translation directions."
W06-3108,P03-1021,o,"Then the alignments are symmetrized using a refined heuristic as described in (Och and Ney, 2003)."
W06-3110,P03-1021,o,"The model scaling factors λ_1^M are trained with respect to the final translation quality measured by an error criterion (Och, 2003)."
W06-3115,P03-1021,o,"Feature function scaling factors λ_m are optimized based on a maximum likelihood approach (Och and Ney, 2002) or on a direct error minimization approach (Och, 2003)."
W06-3115,P03-1021,o,"First, many-to-many word alignments are induced by running a one-to-many word alignment model, such as GIZA++ (Och and Ney, 2003), in both directions and by combining the results based on a heuristic (Och and Ney, 2004)."
W06-3115,P03-1021,o,"For each differently tokenized corpus, we computed word alignments by a HMM translation model (Och and Ney, 2003) and by a word alignment refinement heuristic of grow-diag-final (Koehn et al., 2003)."
W06-3119,P03-1021,o,"Given a source sentence f, the preferred translation output is determined by computing the lowest-cost derivation (combination of hierarchical and glue rules) yielding f as its source side, where the cost of a derivation R_1,...,R_n with respective feature vectors v_1,...,v_n ∈ R^m is given by Σ_{i=1}^{m} λ_i Σ_{j=1}^{n} (v_j)_i. Here, λ_1,...,λ_m are the parameters of the log-linear model, which we optimize on a held-out portion of the training set (2005 development data) using minimum-error-rate training (Och, 2003)."
W06-3121,P03-1021,p,"The MERT module is a highly modular, efficient and customizable implementation of the algorithm described in (Och, 2003)."
W06-3121,P03-1021,o,"The software also required the GIZA++ word alignment tool (Och and Ney, 2003)."
W06-3121,P03-1021,o,"In this paper, we present Phramer, an open-source system that embeds a phrase-based decoder, a minimum error rate training (Och, 2003) module and various tools related to Machine Translation (MT)."
W06-3122,P03-1021,o,"It generates a vector of 5 numeric values for each phrase pair: phrase translation probability: φ(f̄|ē) = count(f̄,ē)/count(ē), φ(ē|f̄) = count(f̄,ē)/count(f̄); lexical weighting (Koehn et al., 2003): lex(f̄|ē,a) = Π_{i=1}^{n} 1/|{j|(i,j)∈a}| Σ_{(i,j)∈a} w(f_i|e_j), lex(ē|f̄,a) = Π_{j=1}^{m} 1/|{i|(i,j)∈a}| Σ_{(i,j)∈a} w(e_j|f_i); phrase penalty: φ(f̄|ē) = e; log(φ(f̄|ē)) = 1. 2.2 Decoding We used the Pharaoh decoder for both the Minimum Error Rate Training (Och, 2003) and test dataset decoding. 2http://www.phramer.org/ Java-based open-source phrase based SMT system 3http://www.isi.edu/licensed-sw/carmel/ 4http://www.speech.sri.com/projects/srilm/ 5http://www.iccs.inf.ed.ac.uk/pkoehn/training.tgz"
W06-3122,P03-1021,n,"The size of the development set used to generate λ_1 and λ_2 (1000 sentences) compensates the tendency of the unsmoothed MERT algorithm to overfit (Och, 2003) by providing a high ratio between number of variables and number of parameters to be estimated."
W06-3601,P03-1021,o,"Feature weights of both systems are tuned on the same data set.3 For Pharaoh, we use the standard minimum error-rate training (Och, 2003); and for our system, since there are only two independent features (as we always fix = 1), we use a simple grid-based line-optimization along the language-model weight axis."
W06-3601,P03-1021,o,"2 Previous Work It is helpful to compare this approach with recent efforts in statistical MT. Phrase-based models (Koehn et al., 2003; Och and Ney, 2004) are good at learning local translations that are pairs of (consecutive) sub-strings, but often insufficient in modeling the reorderings of phrases themselves, especially between language pairs with very different word-order."
W06-3602,P03-1021,o,"The real-valued features include the following: a block translation score derived from phrase occurrence statistics, a trigram language model to predict target words, a lexical weighting score for the block internal words, a distortion model, as well as the negative target phrase length. The transition cost is computed as a weighted sum over these feature scores, where the weight vector sums up to 1. The weights are trained using a procedure similar to (Och, 2003) on held-out test data."
W07-0401,P03-1021,o,"Alternatively, one can train them with respect to the final translation quality measured by an error criterion (Och, 2003)."
W07-0401,P03-1021,o,"Here, we train word alignments in both directions with GIZA++ (Och and Ney, 2003)."
W07-0403,P03-1021,o,"We report precision, recall and balanced F-measure (Och and Ney, 2003)."
W07-0403,P03-1021,o,"Weights for the log-linear model are set using the 500-sentence tuning set provided for the shared task with minimum error rate training (Och, 2003) as implemented by Venugopal and Vogel (2005)."
W07-0403,P03-1021,o,"The surface heuristic can define consistency according to any word alignment; but most often, the alignment is provided by GIZA++ (Och and Ney, 2003)."
W07-0403,P03-1021,o,"Many-to-many alignments can be created by combining two GIZA++ alignments, one where English generates Foreign and another with those roles reversed (Och and Ney, 2003)."
W07-0410,P03-1021,o,"Different optimization techniques are available, like the Simplex algorithm or the special Minimum Error Training as described in (Och 2003)."
W07-0701,P03-1021,o,"The comparison phrasal system was constructed using the same GIZA++ alignments and the heuristic combination described in (Och & Ney, 2003)."
W07-0701,P03-1021,o,"Model weights were trained separately for all 3 systems using minimum error rate training to maximize BLEU (Och, 2003) on the development set (dev)."
W07-0702,P03-1021,o,"The factored translation model combines features in a log-linear fashion (Och, 2003)."
W07-0703,P03-1021,o,"Weights on the log-linear features are set using Och's algorithm (Och, 2003) to maximize the system's BLEU score on a development corpus."
W07-0706,P03-1021,o,"We selected 580 short sentences of length at most 50 characters from the 2002 NIST MT Evaluation test set as our development corpus and used it to tune λs by maximizing the BLEU score (Och, 2003), and used the 2005 NIST MT Evaluation test set as our test corpus."
W07-0710,P03-1021,o,"We use the n-best generation scheme interleaved with optimization as described in (Och, 2003)."
W07-0710,P03-1021,p,"2.2.4 Minimum Error Rate Training A good way of training is to minimize empirical top-1 error on training data (Och, 2003)."
W07-0710,P03-1021,p,"1 Introduction In recent years, statistical machine translation have experienced a quantum leap in quality thanks to automatic evaluation (Papineni et al., 2002) and error-based optimization (Och, 2003)."
W07-0711,P03-1021,o,"In the experiment, only the first 500 sentences were used to train the log-linear model weight vector, where minimum error rate (MER) training was used (Och, 2003)."
W07-0713,P03-1021,o,"Still, a confidence range for BLEU can be estimated by bootstrapping (Och, 2003; Zhang and Vogel, 2004)."
W07-0715,P03-1021,o,"The feature weights for the overall translation models were trained using Och's (2003) minimum-error-rate training procedure."
W07-0716,P03-1021,o,"Och (2003) introduced minimum error rate training (MERT), a technique for optimizing log-linear model parameters relative to a measure of translation quality."
W07-0716,P03-1021,o,"Initial phrase pairs are identified following the procedure typically employed in phrase based systems (Koehn et al., 2003; Och and Ney, 2004)."
W07-0716,P03-1021,o,"Once training has taken place, minimum error rate training (Och, 2003) is used to tune the parameters λ_i. Finally, decoding in Hiero takes place using a CKY synchronous parser with beam search, augmented to permit efficient incorporation of language model scores (Chiang, 2007)."
W07-0717,P03-1021,o,"To model p(t,a|s), we use a standard log-linear approach: p(t,a|s) ∝ exp[Σ_i λ_i f_i(s,t,a)] (1) where each f_i(s,t,a) is a feature function, and weights λ_i are set using Och's algorithm (Och, 2003) to maximize the system's BLEU score (Papineni et al., 2001) on a development corpus."
W07-0724,P03-1021,o,"Their weights are optimized w.r.t. BLEU score using the algorithm described in (Och, 2003)."
W07-0726,P03-1021,o,"4 Implementation Details 4.1 Alignment of MT output The input text and the output text of the MT systems was aligned by means of GIZA++ (Och and Ney, 2003), a tool with which statistical models for alignment of parallel texts can be trained. 3see http://www.statmt.org/moses/"
W07-0726,P03-1021,o,"The optimal weights for the different columns can then be assigned with the help of minimum error rate training (Och, 2003)."
W07-0727,P03-1021,o,"To optimize the system towards a maximal BLEU or NIST score, we use Minimum Error Rate (MER) Training as described in (Och, 2003)."
W07-0729,P03-1021,o,"Feature weight tuning was carried out using minimum error rate training, maximizing BLEU scores on a held-out development set (Och, 2003)."
W07-0730,P03-1021,n,"Unfortunately, longer sentences (up to 100 tokens, rather than 40), longer phrases (up to 10 tokens, rather than 7), two LMs (rather than just one), higher-order LMs (order 7, rather than 3), multiple higher-order lexicalized re-ordering models (up to 3), etc. all contributed to increased system's complexity, and, as a result, time limitations prevented us from performing minimum-error-rate training (MERT) (Och, 2003) for ucb3, ucb4 and ucb5."
W07-0731,P03-1021,o,"The feature weights λ_i are trained in concert with the LM weight via minimum error rate (MER) training (Och, 2003)."
W07-0733,P03-1021,o,"are combined in a log-linear model to obtain the score for the translation e for an input sentence f: score(e,f) = exp Σ_i λ_i h_i(e,f) (1) The weights of the components λ_i are set by a discriminative training method on held-out development data (Och, 2003)."
W07-0734,P03-1021,p,"Bleu is fast and easy to run, and it can be used as a target function in parameter optimization training procedures that are commonly used in state-of-the-art statistical MT systems (Och, 2003)."
W07-0735,P03-1021,o,"In all experiments, word alignment was obtained using the grow-diag-final heuristic for symmetrizing GIZA++ (Och and Ney, 2003) alignments."
W07-0735,P03-1021,o,"3.1 Evaluation Measure and MERT We evaluate our experiments using the (lowercase, tokenized) BLEU metric and estimate the empirical confidence using the bootstrapping method described in Koehn (2004b).6 We report the scores obtained on the test section with model parameters tuned using the tuning section for minimum error rate training (MERT, (Och, 2003))."
W08-0127,P03-1021,o,"We also plan to employ this evaluation metric as feedback in building dialogue coherence models as is done in machine translation (Och, 2003)."
W08-0302,P03-1021,o,"Baseline We use the Moses MT system (Koehn et al., 2007) as a baseline and closely follow the example training procedure given for the WMT-07 and WMT-08 shared tasks.4 In particular, we perform word alignment in each direction using GIZA++ (Och and Ney, 2003), apply the grow-diag-final-and heuristic for symmetrization and use a maximum phrase length of 7."
W08-0302,P03-1021,o,"Minimum error-rate (MER) training (Och, 2003) was applied to obtain weights (λ_m in Equation 2) for these features."
W08-0302,P03-1021,o,"The weights λ_1,...,λ_M are typically learned to directly minimize a standard evaluation criterion on development data (e.g., the BLEU score; Papineni et al., (2002)) using numerical search (Och, 2003)."
W08-0302,P03-1021,o,"The mixture coefficients are trained in the usual way (minimum error-rate training, Och, 2003), so that the additional context is exploited when it is useful and ignored when it isn't. The paper proceeds as follows."
W08-0302,P03-1021,o,"To combine the many differently-conditioned features into a single model, we provide them as features to the linear model (Equation 2) and use minimum error-rate training (Och, 2003) to obtain interpolation weights λ_m. This is similar to an interpolation of backed-off estimates, if we imagine that all of the different contexts are differently-backed-off estimates of the complete context."
W08-0304,P03-1021,o,"Och (2003) claimed that this approximation achieved essentially equivalent performance to that obtained when directly using the loss as the objective, O = ℓ."
W08-0304,P03-1021,o,"(2003) of running GIZA++ (Och & Ney, 2003) in both directions and then merging the alignments using the grow-diag-final heuristic."
W08-0304,P03-1021,o,"The first is a novel stochastic search strategy that appears to make better use of Och (2003)'s algorithm for finding the global minimum along any given search direction than either coordinate descent or Powell's method."
W08-0304,P03-1021,o,"However, by exploiting the fact that the underlying scores assigned to competing hypotheses, w(e,h,f), vary linearly w.r.t. changes in the weight vector, w, Och (2003) proposed a strategy for finding the global minimum along any given search direction."
W08-0304,P03-1021,o,"1 Introduction Och (2003) introduced minimum error rate training (MERT) as an alternative training regime to the conditional likelihood objective previously used with log-linear translation models (Och & Ney, 2002)."
W08-0304,P03-1021,o,"This is seen in that each time we check for the nearest intersection to the current 1-best for some n-best list l, we must calculate its intersection with all other candidate translations that have yet to be selected as the 1-best. [Algorithm 1: Och (2003)'s line search method to find the global minimum in the loss, ℓ, when starting at the point w and searching along the direction d using the candidate translations given in the collection of n-best lists L; pseudocode omitted.]"
W08-0304,P03-1021,o,"The first, Powell's method, was advocated by Och (2003) when MERT was first introduced for statistical machine translation."
W08-0304,P03-1021,p,"While the former is piecewise constant and thus cannot be optimized using gradient techniques, Och (2003) provides an approach that performs such training efficiently."
W08-0305,P03-1021,o,"The de-facto answer came during the 1990s from the research community on Statistical Machine Translation, who made use of statistical tools based on a noisy channel model originally developed for speech recognition (Brown et al., 1994; Och and Weber, 1998; R. Zens et al., 2002; Och and Ney, 2001; Koehn et al., 2003)."
W08-0305,P03-1021,o,"These models can be tuned using minimum error rate training (Och, 2003)."
W08-0305,P03-1021,o,"Moses uses standard external tools for some of these tasks, such as GIZA++ (Och and Ney, 2003) for word alignments and SRILM (Stolcke, 2002) for language modeling."
W08-0306,P03-1021,o,"We show that link 1For a complete discussion of alignment symmetrization heuristics, including union, intersection, and refined, refer to (Och and Ney, 2003)."
W08-0306,P03-1021,o,"GIZA++ (Och and Ney, 2003), an implementation of the IBM (Brown et al., 1993) and HMM (?)"
W08-0306,P03-1021,o,"After maximum BLEU tuning (Och, 2003a) on a held-out tuning set, we evaluate translation quality on a held-out test set."
W08-0306,P03-1021,n,"3.2 Evaluation Metrics AER (Alignment Error Rate) (Och and Ney, 2003) is the most widely used metric of alignment quality, but requires gold-standard alignments labelled with sure/possible annotations to compute; lacking such annotations, we can compute alignment f-measure instead."
W08-0306,P03-1021,o,"GIZA++ refined alignments have been used in state-of-the-art phrase-based statistical MT systems such as (Och, 2004); variations on the refined heuristic have been used by (Koehn et al., 2003) (diag and diag-and) and by the phrase-based system Moses (grow-diag-final) (Koehn et al., 2007)."
W08-0306,P03-1021,o,"The feature weights are tuned using minimum error rate training (Och and Ney, 2003) to optimize BLEU score on a held-out development set."
W08-0309,P03-1021,o,"The word alignments were created with Giza++ (Och and Ney, 2003) applied to a parallel corpus containing the complete Europarl training data, plus sets of 4,051 sentence pairs created by pairing the test sentences with the reference translations, and the test sentences paired with each of the system translations."
W08-0309,P03-1021,o,"A large database of human judgments might also be useful as an objective function for minimum error rate training (Och, 2003) or in other system development tasks."
W08-0310,P03-1021,o,"translation systems (Och and Ney, 2004; Koehn et al., 2003) and use Moses (Koehn et al., 2007) to search for the best target sentence."
W08-0310,P03-1021,o,"These fourteen scores are weighted and linearly combined (Och and Ney, 2002; Och, 2003); their respective weights are learned on development data so as to maximize the BLEU score."
W08-0312,P03-1021,p,"Bleu is fast and easy to run, and it can be used as a target function in parameter optimization training procedures that are commonly used in state-of-the-art statistical MT systems (Och, 2003)."
W08-0316,P03-1021,o,"Word alignments were generated using GIZA++ (Och and Ney, 2003) over a stemmed version of the parallel text."
W08-0316,P03-1021,o,"3.1 System Tuning Minimum error training (Och, 2003) under BLEU (Papineni et al., 2001) was used to optimise the feature weights of the decoder with respect to the dev2006 development set."
W08-0319,P03-1021,o,"We use the minimum-error rate training procedure by Och (2003) as implemented in the Moses toolkit to set the weights of the various translation and language models, optimizing for BLEU."
W08-0320,P03-1021,o,"We set the feature weights by optimizing the Bleu score directly using minimum error rate training (Och, 2003) on the development set."
W08-0321,P03-1021,o,"Following the guidelines of the workshop we built baseline systems, using the lower-cased Europarl parallel corpus (restricting sentence length to 40 words), GIZA++ (Och and Ney, 2003), Moses (Koehn et al., 2007), and the SRI LM toolkit (Stolcke, 2002) to build 5-gram LMs."
W08-0321,P03-1021,o,"Instead of interpolating the two language models, we explicitly used them in the decoder and optimized their weights via minimum-error-rate (MER) training (Och, 2003)."
W08-0321,P03-1021,o,"For example, in IBM Model 1 the lexicon probability of source word f given target word e is calculated as (Och and Ney, 2003): p(f|e) = Σ_k c(f|e; e^k, f^k) / Σ_{k,f} c(f|e; e^k, f^k) (1), c(f|e; e^k, f^k) = Σ_{e^k,f^k} P(e^k,f^k) Σ_a P(a|e^k,f^k) Σ_j δ(f, f^k_j) δ(e, e^k_{a_j}) (2). Therefore, the distribution of P(e^k,f^k) will affect the alignment results."
W08-0326,P03-1021,o,"For example, our system configuration for the shared task incorporates a wrapper around GIZA++ (Och and Ney, 2003) for word alignment and a wrapper around Moses (Koehn et al., 2007) for decoding."
W08-0326,P03-1021,o,"Assuming that the parameters P(etk|fsk) are known, the most likely alignment is computed by a simple dynamic-programming algorithm.1 Instead of using an Expectation-Maximization algorithm to estimate these parameters, as commonly done when performing word alignment (Brown et al., 1993; Och and Ney, 2003), we directly compute these parameters by relying on the information contained within the chunks."
W08-0326,P03-1021,o,"We tuned our system on the development set devtest2006 for the EuroParl tasks and on nc-test2007 for Czech-English, using minimum error-rate training (Och, 2003) to optimise BLEU score."
W08-0328,P03-1021,o,"This set of 800 sentences was used for Minimum Error Rate Training (Och, 2003) to tune the weights of our system with respect to BLEU score."
W08-0328,P03-1021,o,"This setup provides an elegant solution to the fairly complex task of integrating multiple MT results that may differ in word order using only standard software modules, in particular GIZA++ (Och and Ney, 2003) for the identification of building blocks and Moses for the recombination, but the authors were not able to observe improvements in terms of BLEU score. 1see http://www.statmt.org/moses/"
W08-0334,P03-1021,o,"Decoding Conditions For tuning of the decoder's parameters, minimum error training (Och 2003) with respect to the BLEU score was conducted using the respective development corpus."
W08-0335,P03-1021,o,"The feature weights were optimized against the BLEU scores (Och, 2003)."
W08-0336,P03-1021,o,"We build phrase translations by first acquiring bidirectional GIZA++ (Och and Ney, 2003) alignments, and using Moses grow-diag alignment symmetrization heuristic.1 We set the maximum phrase length to a large value (10), because some segmenters described later in this paper will result in shorter 1In our experiments, this heuristic consistently performed better than the default, grow-diag-final."
W08-0336,P03-1021,o,"We tuned the parameters of these features with Minimum Error Rate Training (MERT) (Och, 2003) on the NIST MT03 Evaluation data set (919 sentences), and then test the MT performance on NIST MT03 and MT05 Evaluation data (878 and 1082 sentences, respectively)."
W08-0401,P03-1021,o,"For phrase-based translation model training, we used the GIZA++ toolkit (Och et al., 2003)."
W08-0401,P03-1021,o,"For tuning of decoder parameters, we conducted minimum error training (Och 2003) with respect to the BLEU score using 916 development sentence pairs."
W08-0401,P03-1021,o,"One of the popular statistical machine translation paradigms is the phrase-based model (PBSMT) (Marcu et al., 2002; Koehn et al., 2003; Och et al., 2004)."
W08-0402,P03-1021,o,"We use the GIZA toolkit (Och and Ney, 2000), a suffix-array architecture (Lopez, 2007), the SRILM toolkit (Stolcke, 2002), and minimum error rate training (Och et al., 2003) to obtain word alignments, a translation model, language models, and the optimal weights for combining these models, respectively."
W08-0402,P03-1021,o,"Furthermore, techniques such as iterative minimum error-rate training (Och et al., 2003) as well as web-based MT services require the decoder to translate a large number of source-language sentences per unit time."
W08-0403,P03-1021,o,"Minimum-error-rate training (Och, 2003) are conducted on dev-set to optimize feature weights maximizing the BLEU score up to 4-grams, and the obtained feature weights are blindly applied on the test-set."
W08-0404,P03-1021,o,"The decision rule was based on the standard log-linear interpolation of several models, with weights tuned by MERT on the development set (Och, 2003)."
W08-0404,P03-1021,n,"While minimum error training (Och, 2003) has by now become a standard tool for interpolating a small number of aggregate scores, it is not well suited for learning in high-dimensional feature spaces."
W08-0409,P03-1021,o,"4.3 Baselines 4.3.1 Word Alignment We used the GIZA++ implementation of IBM word alignment model 4 (Brown et al., 1993; Och and Ney, 2003) for word alignment, and the heuristics described in (Och and Ney, 2003) to derive the intersection and refined alignment."
W08-0409,P03-1021,o,"alignment and phrase-extraction heuristics described in (Koehn et al., 2003), minimum-error-rate training (Och, 2003), a trigram language model with Kneser-Ney smoothing trained with SRILM (Stolcke, 2002) on the English side of the training data, and Moses (Koehn et al., 2007) to decode."
W08-0409,P03-1021,o,"Slightly differently from (Och and Ney, 2003), we use possible alignments in computing recall."
W08-0409,P03-1021,o,"Since manual word alignment is an ambiguous task, we also explicitly allow for ambiguous alignments, i.e. the links are marked as sure (S) or possible (P) (Och and Ney, 2003)."
W08-0510,P03-1021,o,"These include scripts for creating alignments from a parallel corpus, creating phrase tables and language models, binarizing phrase tables, scripts for weight optimization using MERT (Och 2003), and testing scripts."
W08-0510,P03-1021,p,"GIZA++ (Och and Ney 2003) is a very popular system within SMT for creating word alignment from parallel corpus; in fact, the Moses training scripts use it."
W09-0404,P03-1021,o,"However, this may still be too expensive as part of an MT model that directly optimizes some performance measure, e.g., minimum error rate training (Och, 2003)."
W09-0405,P03-1021,o,"Then, we run GIZA++ (Och and Ney, 2003) on the corpus to obtain word alignments in both directions."
W09-0405,P03-1021,o,"We use Minimal Error Rate Training (Och, 2003) to maximize BLEU on the complete development data."
W09-0412,P03-1021,o,"We then built separate English-to-Spanish and Spanish-to-English directed word alignments using IBM model 4 (Brown et al., 1993), combined them using the intersect+grow heuristic (Och and Ney, 2003), and extracted phrase-level translation pairs of maximum length 7 using the alignment template approach (Och and Ney, 2004)."
W09-0412,P03-1021,o,"We set all feature weights by optimizing Bleu (Papineni et al., 2002) directly using minimum error rate training (MERT) (Och, 2003) on the tuning part of the development set (dev-test2009a)."
W09-0416,P03-1021,o,"The features we used are as follows: word posterior probability (Fiscus, 1997); 3, 4-gram target language model; word length penalty; Null word length penalty; Also, we use MERT (Och, 2003) to tune the weights of confusion network."
W09-0417,P03-1021,o,"2.6 Tuning procedure The Moses-based systems were tuned using the implementation of minimum error rate training (MERT) (Och, 2003) distributed with the Moses decoder, using the development corpus (dev2009a)."
W09-0418,P03-1021,o,"4.1 Baseline Our baseline system is a fairly typical phrase-based machine translation system (Finch and Sumita, 2008a) built within the framework of a feature-based exponential model containing the following features: source-target phrase translation probability, inverse phrase translation probability, source-target lexical weighting probability, inverse lexical weighting probability, phrase penalty, language model probability, lexical reordering probability, simple distance-based distortion model, and word penalty. [Table 1: Language Resources and Table 2: Testset 2009 (corpus statistics) omitted.] For the training of the statistical models, standard word alignment (GIZA++ (Och and Ney, 2003)) and language modeling (SRILM (Stolcke, 2002)) tools were used."
W09-0418,P03-1021,o,"Minimum error rate training (MERT) with respect to BLEU score was used to tune the decoder's parameters, and performed using the technique proposed in (Och, 2003)."
W09-0421,P03-1021,o,"The translation system is a factored phrase-based translation system that uses the Moses toolkit (Koehn et al., 2007) for decoding and training, GIZA++ for word alignment (Och and Ney, 2003), and SRILM (Stolcke, 2002) for language models."
W09-0421,P03-1021,o,"Minimum error rate training was used to tune the model feature weights (Och, 2003)."
W09-0424,P03-1021,o,"Deterministic Annealing: In this system, instead of using the regular MERT (Och, 2003) whose training objective is to minimize the one-best error, we use the deterministic annealing training procedure described in Smith and Eisner (2006), whose objective is to minimize the expected error (together with the entropy regularization technique)."
W09-0424,P03-1021,o,"The toolkit also implements suffix-array grammar extraction (Callison-Burch et al., 2005; Lopez, 2007) and minimum error rate training (Och, 2003)."
W09-0424,P03-1021,o,"The toolkit also implements suffix-array grammar extraction (Lopez, 2007) and minimum error rate training (Och, 2003)."
W09-0424,P03-1021,p,"The search across a dimension uses the efficient method of Och (2003)."
W09-0424,P03-1021,o,"3.2 Translation Scores The translation scores for four different systems are reported in Table 1.5 Baseline: In this system, we use the GIZA++ toolkit (Och and Ney, 2003), a suffix-array architecture (Lopez, 2007), the SRILM toolkit (Stolcke, 2002), and minimum error rate training (Och, 2003) to obtain word-alignments, a translation model, language models, and the optimal weights for combining these models, respectively."
W09-0426,P03-1021,o,"The preprocessed training data was filtered for length and aligned using the GIZA++ implementation of IBM Model 4 (Och and Ney, 2003) in both directions and symmetrized using the grow-diag-final-and heuristic."
W09-0426,P03-1021,o,"2.3 Forest minimum error training To tune the feature weights of our system, we used a variant of the minimum error training algorithm (Och, 2003) that computes the error statistics from the target sentences from the translation search space (represented by a packed forest) that are exactly those that are minimally discriminable by changing the feature weights along a single vector in the dimensions of the feature space (Macherey et al., 2008)."
W09-0427,P03-1021,o,"The log-linear model feature weights were learned using minimum error rate training (MERT) (Och, 2003) with BLEU score (Papineni et al., 2002) as the objective function." W09-0431,P03-1021,o,"We use GIZA++ (Och and Ney, 2003) for word alignment (http://iit-iti.nrc-cnrc.gc.ca/projects-projets/portage_e.html), and the Pharaoh system suite to build the phrase table and decode (Koehn, 2004)." W09-0431,P03-1021,o,"Tuning is done for each experimental condition using Och's Minimum Error Training (Och, 2003)." W09-0433,P03-1021,o,"4 Experiment Our baseline system is a popular phrase-based SMT system, Moses (Koehn et al., 2007), with 5-gram SRILM language model (Stolcke, 2002), tuned with Minimum Error Training (Och, 2003)." W09-0436,P03-1021,o,"Parameter tuning is done with Minimum Error Rate Training (MERT) (Och, 2003)." W09-0437,P03-1021,o,"The corpus was aligned with GIZA++ (Och and Ney, 2003) and symmetrized with the grow-diag-final-and heuristic (Koehn et al., 2003)." W09-0437,P03-1021,o,"Systems were optimized on the WMT08 French-English development data (2000 sentences) using minimum error rate training (Och, 2003) and tested on the WMT08 test data (2000 sentences)." W09-0439,P03-1021,p,"Och's procedure is the most widely-used version of MERT for SMT (Och, 2003)." W09-0439,P03-1021,p,"Although they obtained consistent and stable performance gains for MT, these were inferior to the gains yielded by Och's procedure in (Och, 2003)." W09-1114,P03-1021,o,"3.2 Translation performance For the experiments reported in this section, we used feature weights trained with minimum error rate training (MERT; Och, 2003). Because MERT ignores the denominator in Equation 1, it is invariant with respect to the scale of the weight vector; the Moses implementation simply normalises the weight vector it finds by its ℓ1-norm." W09-2307,P03-1021,o,"Parameter tuning is done with Minimum Error Rate Training (MERT) (Och, 2003)."
W09-2309,P03-1021,o,"A popular statistical machine translation paradigm is the phrase-based model (Koehn et al., 2003; Och and Ney, 2004)." W09-2309,P03-1021,o,"For phrase-based translation model training, we used the GIZA++ toolkit (Och and Ney, 2003), and 1.0M bilingual sentences." W09-2309,P03-1021,o,"To tune the decoder parameters, we conducted minimum error rate training (Och, 2003) with respect to the word BLEU score (Papineni et al., 2002) using 2.0K development sentence pairs." W09-2309,P03-1021,o,"This involves running GIZA++ (Och and Ney, 2003) on the corpus in both directions, and applying refinement rules (the variant they designate is final-and) to obtain a single many-to-many word alignment for each sentence." C08-2012,P04-1015,o,"As to analysis of NPs, there has been a lot of work on statistical techniques for lexical dependency parsing of sentences (Collins and Roark, 2004; McDonald et al., 2005), and these techniques potentially can be used for analysis of NPs if appropriate resources for NPs are available." D07-1009,P04-1015,o,"In fact, when the perceptron update rule of (Dekel et al., 2004) which modifies the weights of every divergent node along the predicted and true paths is used in the ranking framework, it becomes virtually identical with the standard, flat, ranking perceptron of Collins (2002).5 In contrast, our approach shares the idea of (Cesa-Bianchi et al., 2006a) that if a parent class has been predicted wrongly, then errors in the children should not be taken into account. We also view this as one of the key ideas of the incremental perceptron algorithm of (Collins and Roark, 2004), which searches through a complex decision space step-by-step and is immediately updated at the first wrong move." D07-1033,P04-1015,o,"7 Discussion As we mentioned, there are some algorithms similar to ours (Collins and Roark, 2004; Daume III and Marcu, 2005; McDonald and Pereira, 2006; Liang et al., 2006)."
D07-1033,P04-1015,o,Collins and Roark (2004) proposed an approximate incremental method for parsing. D07-1033,P04-1015,o,"Collins and Roark (2004) used the averaged perceptron (Collins, 2002a)." D07-1033,P04-1015,o,"With regard to the local update, (B), in Algorithm 4.2, early updates (Collins and Roark, 2004) and y-good requirement in (Daume III and Marcu, 2005) resemble our local update in that they tried to avoid the situation where the correct answer cannot be output." D07-1033,P04-1015,o,"Recently, several methods (Collins and Roark, 2004; Daume III and Marcu, 2005; McDonald and Pereira, 2006) have been proposed with similar motivation to ours." D07-1129,P04-1015,o,"2.3 Online Learning Again following (McDonald et al., 2005), we have used the single best MIRA (Crammer and Singer, 2003), which is a margin-aware variant of perceptron (Collins, 2002; Collins and Roark, 2004) for structured prediction." D07-1129,P04-1015,o,"We discriminatively trained our parser in an on-line fashion using a variant of the voted perceptron (Collins, 2002; Collins and Roark, 2004; Crammer and Singer, 2003)." D08-1052,P04-1015,p,"Some recent work on incremental parsing (Collins and Roark, 2004; Shen and Joshi, 2005) showed another way to handle this problem." D08-1052,P04-1015,p,"Variants of this method have been successfully used in many NLP tasks, like shallow processing (Daume III and Marcu, 2005), parsing (Collins and Roark, 2004; Shen and Joshi, 2005) and word alignment (Moore, 2005)." D08-1052,P04-1015,o,"We still use complex structures to represent the partial analyses, so as to employ both top-down and bottom-up information as in (Collins and Roark, 2004; Shen and Joshi, 2005)."
D08-1059,P04-1015,o,"Beam-search has been successful in many NLP tasks (Koehn et al., 2003; Collins and Roark, 2004), and can achieve accuracy that is close to exact inference." D08-1059,P04-1015,o,"During training, the early update strategy of Collins and Roark (2004) is used: when the correct state item falls out of the beam at any stage, parsing is stopped immediately, and the model is updated using the current best partial item." D09-1034,P04-1015,p,"Incremental top-down and left-corner parsers have been shown to effectively (and efficiently) make use of non-local features from the left-context to yield very high accuracy syntactic parses (Roark, 2001; Henderson, 2003; Collins and Roark, 2004), and we will use such rich models to derive our scores." D09-1043,P04-1015,o,"These findings are in line with Collins & Roark's (2004) results with incremental parsing with perceptrons, where it is suggested that a generative baseline feature provides the perceptron algorithm with a much better starting point for learning." D09-1127,P04-1015,o,"In shift-reduce parsing, further mistakes are often caused by previous ones, so only the first mistake in each sentence (if there is one) is easily identifiable; this is also the argument for early update in applying perceptron learning to these incremental parsing algorithms (Collins and Roark, 2004) (see also Section 2)." D09-1127,P04-1015,o,"Following Collins and Roark (2004) we also use the early-update strategy, where an update happens whenever the gold-standard action-sequence falls off the beam, with the rest of the sequence neglected."
E06-1011,P04-1015,o,"Online learning algorithms have been shown to be robust even with approximate rather than exact inference in problems such as word alignment (Moore, 2005), sequence analysis (Daume and Marcu, 2005; McDonald et al., 2005a) and phrase-structure parsing (Collins and Roark, 2004)." H05-1102,P04-1015,o,"We also employ the voted perceptron algorithm (Freund and Schapire, 1999) and the early update technique as in (Collins and Roark, 2004)." H05-1102,P04-1015,o,"Both left-corner strategy (Ratnaparkhi, 1997; Roark, 2001; Prolo, 2003; Henderson, 2003; Collins and Roark, 2004) and head-corner strategy (Henderson, 2000; Yamada and Matsumoto, 2003) were employed in incremental parsing." J07-4004,P04-1015,o,"Another alternative for future work is to compare the dynamic programming approach taken here with the beam-search approach of Collins and Roark (2004), which allows more global features." N06-1045,P04-1015,o,"The lists may be used with annotation and a tuning process, such as in (Collins and Roark, 2004), to iteratively alter feature weights and improve results." N07-1011,P04-1015,o,Collins and Roark (2004) present an incremental perceptron algorithm for parsing that uses early update to update the parameters when an error is encountered. P05-1012,P04-1015,o,Our approach is related to those of Collins and Roark (2004) and Taskar et al. P05-1012,P04-1015,o,Collins and Roark (2004) presented a linear parsing model trained with an averaged perceptron algorithm. P05-1012,P04-1015,o,"Discriminatively trained parsers that score entire trees for a given sentence have only recently been investigated (Riezler et al., 2002; Clark and Curran, 2004; Collins and Roark, 2004; Taskar et al., 2004)." P05-1023,P04-1015,o,"In particular, most of the work on parsing with kernel methods has focussed on kernels over parse trees (Collins and Duffy, 2002; Shen and Joshi, 2003; Shen et al., 2003; Collins and Roark, 2004)."
P05-1023,P04-1015,o,"For comparison to previous results, table 2 lists the results on the testing set for our best model (TOP-Efficient-Freq20) and several other statistical parsers (Collins, 1999; Collins and Duffy, 2002; Collins and Roark, 2004; Henderson, 2003; Charniak, 2000; Collins, 2000; Shen and Joshi, 2004; Shen et al., 2003; Henderson, 2004; Bod, 2003)." P05-1023,P04-1015,n,"When compared to other kernel methods, our approach performs better than those based on the Tree kernel (Collins and Duffy, 2002; Collins and Roark, 2004), and is only 0.2% worse than the best results achieved by a kernel method for parsing (Shen et al., 2003; Shen and Joshi, 2004)." P06-1096,P04-1015,p,"2.2 Perceptron-based training To tune the parameters w of the model, we use the averaged perceptron algorithm (Collins, 2002) because of its efficiency and past success on various NLP tasks (Collins and Roark, 2004; Roark et al., 2004)." P06-1110,P04-1015,p,"Collins and Roark (2004) saw a LFMS improvement of 0.8% over their baseline discriminative parser after adding punctuation features, one of which encoded the sentence-final punctuation." P06-1110,P04-1015,o,"The left-to-right parser would likely improve if we were to use a left-corner transform (Collins & Roark, 2004)." P06-1110,P04-1015,o,Collins and Roark (2004) and Taskar et al. P06-1110,P04-1015,n,"Although generating training examples in advance without a working parser (Turian & Melamed, 2005) is much faster than using inference (Collins & Roark, 2004; Henderson, 2004; Taskar et al., 2004), our training time can probably be decreased further by choosing a parsing strategy with a lower branching factor." P06-1110,P04-1015,o,"Successful discriminative parsers have relied on generative models to reduce training time and raise accuracy above generative baselines (Collins & Roark, 2004; Henderson, 2004; Taskar et al., 2004)."
P07-1069,P04-1015,o,"For decoding, loc is averaged over the training iterations as in Collins and Roark (2004)." P07-1069,P04-1015,p,"Similar models have been successfully applied in the past to other tasks including parsing (Collins and Roark, 2004), chunking (Daume and Marcu, 2005), and machine translation (Cowan et al., 2006)." P07-1069,P04-1015,o,"This linear model is learned using a variant of the incremental perceptron algorithm (Collins and Roark, 2004; Daume and Marcu, 2005)." P07-1096,P04-1015,o,"In (Daume III and Marcu, 2005), as well as other similar works (Collins, 2002; Collins and Roark, 2004; Shen and Joshi, 2005), only left-to-right search was employed." P07-1096,P04-1015,o,"In (Collins and Roark, 2004; Shen and Joshi, 2005), a search stops if there is no hypothesis compatible with the gold standard in the queue of candidates." P07-1096,P04-1015,o,"We proposed a Perceptron-like learning algorithm (Collins and Roark, 2004; Daume III and Marcu, 2005) for guided learning." P07-1106,P04-1015,o,Hence we use a beam-search decoder during training and testing; our idea is similar to that of Collins and Roark (2004) who used a beam-search decoder as part of a perceptron parsing model. P09-1032,P04-1015,o,"This algorithm and its many variants are widely used in the computational linguistics community (Collins, 2002a; Collins and Duffy, 2002; Collins, 2002b; Collins and Roark, 2004; Henderson and Titov, 2005; Viola and Narasimhan, 2005; Cohen et al., 2004; Carreras et al., 2005; Shen and Joshi, 2005; Ciaramita and Johnson, 2003)." P09-1059,P04-1015,p,"It is an online training algorithm and has been successfully used in many NLP tasks, such as POS tagging (Collins, 2002), parsing (Collins and Roark, 2004), Chinese word segmentation (Zhang and Clark, 2007; Jiang et al., 2008), and so on."
P09-2011,P04-1015,o,"To tackle this problem, we defined [...] The best results of Collins and Roark (2004) (LR=88.4%, LP=89.1% and F=88.8%) are achieved when the parser utilizes the information about the final punctuation and the look-ahead." P09-2011,P04-1015,o,"2 Incremental Parsing This section gives a description of Collins and Roark's incremental parser (Collins and Roark, 2004) and discusses its problem." P09-2011,P04-1015,o,"3 Incremental Parsing Method Based on Adjoining Operation In order to avoid the problem of infinite local ambiguity, the previous works have adopted the following approaches: (1) a beam search strategy (Collins and Roark, 2004; Roark, 2001; Roark, 2004), (2) limiting the allowable chains to those actually observed in the treebank (Collins and Roark, 2004), and (3) transforming the parse trees with a selective left-corner transformation (Johnson and Roark, 2000) before inducing the allowable chains and allowable triples (Collins and Roark, 2004)." P09-2011,P04-1015,o,"Several incremental parsing methods have been proposed so far (Collins and Roark, 2004; Roark, 2001; Roark, 2004)." P09-2011,P04-1015,o,"The limited contexts used in this model are similar to the previous methods (Collins and Roark, 2004; Roark, 2001; Roark, 2004)." P09-2011,P04-1015,p,"To achieve efficient parsing, we use a beam search strategy like the previous methods (Collins and Roark, 2004; Roark, 2001; Roark, 2004)." P09-2011,P04-1015,o,"Each queue Hi stores only the N-best partial parse trees." W04-0303,P04-1015,p,"This approach has been shown to be accurate, relatively efficient, and robust using both generative and discriminative models (Roark, 2001; Roark, 2004; Collins and Roark, 2004)."
W04-0303,P04-1015,o,"Beam-search parsing using an unnormalized discriminative model, as in Collins and Roark (2004), requires a slightly different search strategy than the original generative model described in Roark (2001; 2004)." W04-0303,P04-1015,o,"A generative parsing model can be used on its own, and it was shown in Collins and Roark (2004) that a discriminative parsing model can be used on its own." W05-1505,P04-1015,p,"1 Introduction Statistical parsing models have been shown to be successful in recovering labeled constituencies (Collins, 2003; Charniak and Johnson, 2005; Roark and Collins, 2004) and have also been shown to be adequate in recovering dependency relationships (Collins et al., 1999; Levy and Manning, 2004; Dubey and Keller, 2003)." W05-1515,P04-1015,p,"It's also worth noting that Collins and Roark (2004) saw a LFMS improvement of 0.8% over their baseline discriminative parser after adding punctuation features, one of which encoded the sentence-final punctuation." W05-1515,P04-1015,o,"Training discriminative parsers is notoriously slow, especially if it requires generating examples by repeatedly parsing the treebank (Collins & Roark, 2004; Taskar et al., 2004)." W06-1628,P04-1015,o,"This combination of the perceptron algorithm with beam-search is similar to that described by Collins and Roark (2004).5 The perceptron algorithm is a convenient choice because it converges quickly, usually taking only a few iterations over the training set (Collins, 2002; Collins and Roark, 2004)." W06-2936,P04-1015,o,"Using a variant of the voted perceptron (Collins, 2002; Collins and Roark, 2004; Crammer and Singer, 2003), we discriminatively trained our parser in an on-line fashion." W06-2936,P04-1015,o,"3 Online Learning Again following (McDonald et al., 2005), we have used the single best MIRA (Crammer and Singer, 2003), which is a variant of the voted perceptron (Collins, 2002; Collins and Roark, 2004) for structured prediction."
W06-3603,P04-1015,n,"Although generating training examples in advance without a working parser (Sagae & Lavie, 2005) is much faster than using inference (Collins & Roark, 2004; Henderson, 2004; Taskar et al., 2004), our training time can probably be decreased further by choosing a parsing strategy with a lower branching factor." W06-3603,P04-1015,o,"Successful discriminative parsers have used generative models to reduce training time and raise accuracy above generative baselines (Collins & Roark, 2004; Henderson, 2004; Taskar et al., 2004)." W07-1202,P04-1015,o,"Parsing research has also begun to adopt discriminative methods from the Machine Learning literature, such as the perceptron (Freund and Schapire, 1999; Collins and Roark, 2004) and the large-margin methods underlying Support Vector Machines (Taskar et al., 2004; McDonald, 2006)." W07-1202,P04-1015,o,The existing work most similar to ours is Collins and Roark (2004). W07-1202,P04-1015,o,"1 Introduction A recent development in data-driven parsing is the use of discriminative training methods (Riezler et al., 2002; Taskar et al., 2004; Collins and Roark, 2004; Turian and Melamed, 2006)." W07-2211,P04-1015,o,"5.1 Relationship to ""supervised"" training To illustrate the relationship between the above symbolic training method for preference scoring and corpus-based methods, perhaps the easiest way is to compare it to an adaptation (Collins and Roark, 2004) of the perceptron training method to the problem of obtaining a best parse (either directly, or for parse reranking), because the two methods are analogous in a number of ways." W08-2129,P04-1015,o,"Here, it might be useful to relax the strict linear control regime by exploring beam search strategies, e.g. along the lines of Collins and Roark (2004)."
W09-0508,P04-1015,o,"It is possible to prove that, provided the training set (xi,zi) is separable with margin > 0, the algorithm is assured to converge after a finite number of iterations to a model with zero training errors (Collins and Roark, 2004)." W09-0508,P04-1015,o,"Albeit simple, the algorithm has proven to be very efficient and accurate for the task of parse selection (Collins and Roark, 2004; Collins, 2004; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007)." C08-1101,P04-1035,o,"6 Related work Evidence from the surrounding context has been used previously to determine if the current sentence should be subjective/objective (Riloff et al., 2003; Pang and Lee, 2004) and adjacency pair information has been used to predict congressional votes (Thomas et al., 2006)." C08-1104,P04-1035,o,"Movie-domain Subjectivity Data Set (Movie): Pang and Lee (2004) used a collection of labeled subjective and objective sentences in their work on review classification.5 The data set contains 5000 subjective sentences, extracted from movie reviews collected from the Rotten Tomatoes website." C08-1104,P04-1035,o,"2 Related Work There has been extensive research in opinion mining at the document level, for example on product and movie reviews (Pang et al., 2002; Pang and Lee, 2004; Dave et al., 2003; Popescu and Etzioni, 2005)." C08-1135,P04-1035,o,Pang and Lee (2004) use a graph-based technique to identify and analyze only subjective parts of texts. C08-2004,P04-1035,o,"Within NLP, applications include sentiment-analysis problems (Pang and Lee, 2004; Agarwal and Bhattacharyya, 2005; Thomas et al., 2006) and content selection for text generation (Barzilay and Lapata, 2005)." D07-1035,P04-1035,o,"(2003), Pang and Lee (2004, 2005)."
D08-1004,P04-1035,o,"(2007), we introduced the Movie Review Polarity Dataset Enriched with Annotator Rationales.8 It is based on the dataset of Pang and Lee (2004),9 which consists of 1000 positive and 1000 negative movie reviews, tokenized and divided into 10 folds (F0F9)." D08-1004,P04-1035,o,"We use the same set of binary features as in previous work on this dataset (Pang et al., 2002; Pang and Lee, 2004; Zaidan et al., 2007)." D08-1004,P04-1035,o,"We collect substring rationales for a sentiment classification task (Pang and Lee, 2004) and use them to obtain significant accuracy improvements for each annotator." D08-1058,P04-1035,o,"(2002), various classification models and linguistic features have been proposed to improve the classification performance (Pang and Lee, 2004; Mullen and Collier, 2004; Wilson et al., 2005a; Read, 2005)." D09-1017,P04-1035,o,"With this model, we can provide not only qualitative textual summarization such as good food and bad service, but also a numerical scoring of sentiment, i.e., how good the food is and how bad the service is. 2 Related Work There have been many studies on sentiment classification and opinion summarization (Pang and Lee, 2004, 2005; Gamon et al., 2005; Popescu and Etzioni, 2005; Liu et al., 2005; Zhuang et al., 2006; Kim and Hovy, 2006)." D09-1018,P04-1035,o,"Others use sentence cohesion (Pang and Lee, 2004), agreement/disagreement between speakers (Thomas et al., 2006; Bansal et al., 2008), or structural adjacency." D09-1018,P04-1035,o,"et al., 2007)) and unigrams (used by many researchers, e.g., (Pang and Lee, 2004))." D09-1020,P04-1035,p,"Second, benefits for sentiment analysis can be realized by decomposing the problem into S/O (or neutral versus polar) and polarity classification (Yu and Hatzivassiloglou, 2003; Pang and Lee, 2004; Wilson et al., 2005a; Kim and Hovy, 2006)." 
E06-1025,P04-1035,o,"This amounts to performing binary text categorization under categories Objective and Subjective (Pang and Lee, 2004; Yu and Hatzivassiloglou, 2003); 2." E06-1025,P04-1035,o,"determining document orientation (or polarity), as in deciding if a given Subjective text expresses a Positive or a Negative opinion on its subject matter (Pang and Lee, 2004; Turney, 2002); 3." E09-1004,P04-1035,o,"2 Literature Survey The task of sentiment analysis has evolved from document level analysis (e.g., (Turney, 2002); (Pang and Lee, 2004)) to sentence level analysis (e.g., (Hu and Liu, 2004); (Kim and Hovy, 2004); (Yu and Hatzivassiloglou, 2003))." E09-1077,P04-1035,o,"Mincuts have been used in semi-supervised learning for various tasks, including document level sentiment analysis (Pang and Lee, 2004)." H05-1042,P04-1035,o,"This formulation is similar to the energy minimization framework, which is commonly used in image analysis (Besag, 1986; Boykov et al., 1999) and has been recently applied in natural language processing (Pang and Lee, 2004)." H05-1044,P04-1035,o,"7 Related Work Much work on sentiment analysis classifies documents by their overall sentiment, for example determining whether a review is positive or negative (e.g., (Turney, 2002; Dave et al., 2003; Pang and Lee, 2004; Beineke et al., 2004))." H05-1045,P04-1035,o,"(2003), Pang and Lee (2004))." H05-1045,P04-1035,o,"(2003), Pang and Lee (2004), Wilson et al." H05-1073,P04-1035,o,"(Turney, 2002), (Bai, Padman and Airoldi, 2004), (Beineke, Hastie and Vaithyanathan, 2003), (Mullen and Collier, 2003), (Pang and Lee, 2003)."
H05-1115,P04-1035,p,"Recently, graph-based methods have proved useful for a number of NLP and IR tasks such as document re-ranking in ad hoc IR (Kurland and Lee, 2005) and analyzing sentiments in text (Pang and Lee, 2004)." H05-1116,P04-1035,o,"(2003), Pang and Lee (2004))." H05-1116,P04-1035,o,"(2004), Pang and Lee (2004), Wilson et al." I05-2030,P04-1035,o,"(Dave et al., 2003; Pang and Lee, 2004; Turney, 2002))." I08-1039,P04-1035,o,"5 Evaluation 5.1 Datasets We used two datasets, customer reviews (Hu and Liu, 2004) and movie reviews (Pang and Lee, 2005) to evaluate sentiment classification of sentences." I08-1039,P04-1035,o,Pang and Lee (2004) proposed to eliminate objective sentences before the sentiment classification of documents. I08-1040,P04-1035,o,"There has also been previous work on determining whether a given text is factual or expresses opinion (Yu & Hatzivassiloglou, 2003; Pang & Lee, 2004); again this work uses a binary distinction, and supervised rather than unsupervised approaches." I08-1041,P04-1035,p,"SVM has been shown to be useful for text classification tasks (Joachims, 1998), and has previously given good performance in sentiment classification experiments (Kennedy and Inkpen, 2006; Mullen and Collier, 2004; Pang and Lee, 2004; Pang et al., 2002)." N06-1027,P04-1035,o,"Inspired by the idea of graph based algorithms to collectively rank and select the best candidate, research efforts in the natural language community have applied graph-based approaches on keyword selection (Mihalcea and Tarau, 2004), text summarization (Erkan and Radev, 2004; Mihalcea, 2004), word sense disambiguation (Mihalcea et al., 2004; Mihalcea, 2005), sentiment analysis (Pang and Lee, 2004), and sentence retrieval for question answering (Otterbacher et al., 2005)." N07-1013,P04-1035,o,"For examples, see (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Pang and Lee, 2004)."
N07-1026,P04-1035,o,"2 Background Several graph-based learning techniques have recently been developed and applied to NLP problems: minimum cuts (Pang and Lee, 2004), random walks (Mihalcea, 2005; Otterbacher et al., 2005), graph matching (Haghighi et al., 2005), and label propagation (Niu et al., 2005)." N07-1033,P04-1035,o,"(2002) and Pang and Lee (2004) in merely using binary unigram features, corresponding to the 17,744 unstemmed word or punctuation types with count 4 in the full 2000-document corpus." N07-1033,P04-1035,p,"We chose a dataset that would be enjoyable to reannotate: the movie review dataset of (Pang et al., 2002; Pang and Lee, 2004).3 The dataset consists of 1000 positive and 1000 negative movie reviews obtained from the Internet Movie Database (IMDb) review archive, all written before 2002 by a total of 312 authors, with a cap of 20 reviews per author [...]. (Taking Ccontrast to be constant means that all rationales are equally valuable.)" N07-1039,P04-1035,o,"Sentiment analysis includes a variety of different problems, including: sentiment classification techniques to classify reviews as positive or negative, based on bag of words (Pang et al., 2002) or positive and negative words (Turney, 2002; Mullen and Collier, 2004); classifying sentences in a document as either subjective or objective (Riloff and Wiebe, 2003; Pang and Lee, 2004); identifying or classifying appraisal targets (Nigam and Hurst, 2004); identifying the source of an opinion in a text (Choi et al., 2005), whether the author is expressing the opinion, or whether he is attributing the opinion to someone else; and developing interactive and visual opinion mining methods (Gamon et al., 2005; Popescu and Etzioni, 2005)."
N09-1001,P04-1035,o,"2 Related Work There has been a large and diverse body of research in opinion mining, with most research at the text (Pang et al., 2002; Pang and Lee, 2004; Popescu and Etzioni, 2005; Ounis et al., 2006), sentence (Kim and Hovy, 2005; Kudo and Matsumoto, 2004; Riloff et al., 2003; Yu and Hatzivassiloglou, 2003) or word (Hatzivassiloglou and McKeown, 1997; Turney and Littman, 2003; Kim and Hovy, 2004; Takamura et al., 2005; Andreevskaia and Bergler, 2006; Kaji and Kitsuregawa, 2007) level." N09-1001,P04-1035,o,"Graph-based algorithms for classification into subjective/objective or positive/negative language units have been mostly used at the sentence and document level (Pang and Lee, 2004; Agarwal and Bhattacharyya, 2005; Thomas et al., 2006), instead of aiming at dictionary annotation as we do." N09-1001,P04-1035,o,"We also cannot use prior graph construction methods for the document level (such as physical proximity of sentences, used in Pang and Lee (2004)) at the word sense level." N09-1001,P04-1035,o,"W(S,T) = Σ_{u∈S, v∈T} w(u,v). Globally optimal minimum cuts can be found in polynomial time and near-linear running time in practice, using the maximum flow algorithm (Pang and Lee, 2004; Cormen et al., 2002)." N09-1002,P04-1035,o,"In fact, researchers in sentiment analysis have realized benefits by decomposing the problem into S/O and polarity classification (Yu and Hatzivassiloglou, 2003; Pang and Lee, 2004; Wilson et al., 2005; Kim and Hovy, 2006)." N09-1065,P04-1035,o,The description of the minimum cut framework in Section 4.1 was inspired by Pang and Lee (2004). N09-3010,P04-1035,o,"3.1 Data and Experimental Setup The data set by Pang and Lee (2004) consists of 2000 movie reviews (1000-pos, 1000-neg) from the IMDb review archive."
P05-1015,P04-1035,p,"All reviews were automatically preprocessed to remove both explicit rating indicators and objective sentences; the motivation for the latter step is that it has previously aided positive vs. negative classification (Pang and Lee, 2004)." P05-1015,P04-1035,p,"Interestingly, previous sentiment analysis research found that a minimum-cut formulation for the binary subjective/objective distinction yielded good results (Pang and Lee, 2004)." P05-2008,P04-1035,p,"A later study (Pang and Lee, 2004) found that performance increased to 87.2% when considering only those portions of the text deemed to be subjective." P06-1133,P04-1035,o,"There is also research work on automatically classifying movie or product reviews as positive or negative (Nasukawa and Yi, 2003; Mullen and Collier, 2004; Beineke et al., 2004; Pang and Lee, 2004; Hu and Liu, 2004)." P06-1134,P04-1035,o,"The third exploits automatic subjectivity analysis in applications such as review classification (e.g., (Turney, 2002; Pang and Lee, 2004)), mining texts for product reviews (e.g., (Yi et al., 2003; Hu and Liu, 2004; Popescu and Etzioni, 2005)), summarization (e.g., (Kim and Hovy, 2004)), information extraction (e.g., (Riloff et al., 2005)). (Note that sentiment, the focus of much recent work in the area, is a type of subjectivity, specifically involving positive or negative opinion, emotion, or evaluation.)" P06-2079,P04-1035,o,"4.1 Experimental Setup Like several previous work (e.g., Mullen and Collier (2004), Pang and Lee (2004), Whitelaw et al." P06-2079,P04-1035,p,"Note that our result on Dataset A is as strong as that obtained by Pang and Lee (2004) via their subjectivity summarization algorithm, which retains only the subjective portions of a document." P06-2079,P04-1035,o,"Next, we learn our polarity classifier using positive and negative reviews taken from two movie review datasets, one assembled by Pang and Lee (2004) and the other by ourselves."
P06-2079,P04-1035,p,"Indeed, recent work has shown that benefits can be made by first separating facts from opinions in a document (e.g., Yu and Hatzivassiloglou (2003)) and classifying the polarity based solely on the subjective portions of the document (e.g., Pang and Lee (2004))." P07-1053,P04-1035,o,"Finally, other approaches rely on reviews with numeric ratings from websites (Pang and Lee, 2002; Dave et al., 2003; Pang and Lee, 2004; Cui et al., 2006) and train (semi-)supervised learning algorithms to classify reviews as positive or negative, or in more fine-grained scales (Pang and Lee, 2005; Wilson et al., 2006)." P07-1055,P04-1035,o,"In both cases there [...]. Alternatively, decisions from the sentence classifier can guide which input is seen by the document level classifier (Pang and Lee, 2004)." P07-1055,P04-1035,o,"In fact, it has already been established that sentence level classification can improve document level analysis (Pang and Lee, 2004)." P07-1055,P04-1035,o,Cascaded models for fine-to-coarse sentiment analysis were studied by Pang and Lee (2004). P07-1055,P04-1035,o,"For instance, in Pang and Lee (2004), yd would be the polarity of the document and ysi would indicate whether sentence si is subjective or objective." P07-1055,P04-1035,o,The local dependencies between sentiment labels on sentences are similar to the work of Pang and Lee (2004) where soft local consistency constraints were created between every sentence in a document and inference was solved using a min-cut algorithm. P07-1055,P04-1035,o,"Previous work on sentiment analysis has covered a wide range of tasks, including polarity classification (Pang et al., 2002; Turney, 2002), opinion extraction (Pang and Lee, 2004), and opinion source assignment (Choi et al., 2005; Choi et al., 2006)." P07-1055,P04-1035,o,"Furthermore, these systems have tackled the problem at different levels of granularity, from the document level (Pang et al.
, 2002), sentence level (Pang and Lee, 2004; Mao and Lebanon, 2006), phrase level (Turney, 2002; Choi et al. , 2005), as well as the speaker level in debates (Thomas et al. , 2006)." P07-1123,P04-1035,p,"First, even when sentiment is the desired focus, researchers in sentiment analysis have shown that a two-stage approach is often beneficial, in which subjective instances are distinguished from objective ones, and then the subjective instances are further classified according to polarity (Yu and Hatzivassiloglou, 2003; Pang and Lee, 2004; Wilson et al. , 2005; Kim and Hovy, 2006)." P07-3007,P04-1035,o,It is worth noting that we observed the same relation between subjectivity detection and polarity classification accuracy as described by Pang and Lee (2004) and Eriksson (2006). P08-1034,P04-1035,o,Pang and Lee (2004) applied two different classifiers to perform sentiment annotation in two sequential steps: the first classifier separated subjective (sentiment-laden) texts from objective (neutral) ones and then they used the second classifier to classify the subjective texts into positive and negative. P08-1034,P04-1035,o,"3.1 Level of Analysis Research on sentiment annotation is usually conducted at the text (Aue and Gamon, 2005; Pang et al., 2002; Pang and Lee, 2004; Riloff et al., 2006; Turney, 2002; Turney and Littman, 2003) or at the sentence levels (Gamon and Aue, 2005; Hu and Liu, 2004; Kim and Hovy, 2005; Riloff et al., 2006)." P08-1034,P04-1035,p,"Table 1: Datasets 3.3 Establishing a Baseline for a Corpus-based System (CBS) Supervised statistical methods have been very successful in sentiment tagging of texts: on movie review texts they reach accuracies of 85-90% (Aue and Gamon, 2005; Pang and Lee, 2004)." P08-1034,P04-1035,o,"It has been shown that both Naïve Bayes and SVMs perform with similar accuracy on different sentiment tagging tasks (Pang and Lee, 2004)."
P08-1041,P04-1035,o,"In many applications, it has been shown that sentences with subjective meanings are paid more attention than factual ones (Pang and Lee, 2004) (Esuli and Sebastiani, 2006)." P08-2004,P04-1035,o,"(Wilson et al., 2005; Pang and Lee, 2004)), and emotion studies (e.g." P09-1027,P04-1035,o,"(2002), various classification models and linguistic features have been proposed to improve the classification performance (Pang and Lee, 2004; Mullen and Collier, 2004; Wilson et al., 2005; Read, 2005)." P09-1028,P04-1035,p,"A two-tier scheme (Pang and Lee, 2004) where sentences are first classified as subjective versus objective, and then applying the sentiment classifier on only the subjective sentences further improves performance." P09-1078,P04-1035,p,"And 20NG is a collection of approximately 20,000 20-category documents. In sentiment text classification, we also use two data sets: one is the widely used Cornell movie-review dataset (Pang and Lee, 2004) and one dataset from product reviews of domain DVD (Blitzer et al., 2007)." P09-1078,P04-1035,n,"(2006) examine the FS of the weighted log-likelihood ratio (WLLR) on the movie review dataset and achieves an accuracy of 87.1%, which is higher than the result reported by Pang and Lee (2004) with the same dataset." P09-1079,P04-1035,o,"For instance, Pang and Lee (2004) train an independent subjectivity classifier to identify and remove objective sentences from a review prior to polarity classification." W05-0408,P04-1035,o,"1 Introduction The field of sentiment classification has received considerable attention from researchers in recent years (Pang and Lee 2002, Pang et al. 2004, Turney 2002, Turney and Littman 2002, Wiebe et al. 2001, Bai et al. 2004, Yu and Hatzivassiloglou 2003 and many others)." W05-0408,P04-1035,o,"Movie and product reviews have been the main focus of many of the recent studies in this area (Pang and Lee 2002, Pang et al. 2004, Turney 2002, Turney and Littman 2002)."
W05-0408,P04-1035,o,"accuracy Training data Turney (2002) 66% unsupervised Pang & Lee (2004) 87.15% supervised Aue & Gamon (2005) 91.4% supervised SO 73.95% unsupervised SM+SO to increase seed words, then SO 74.85% weakly supervised Table 7: Classification accuracy on the movie review domain Turney (2002) achieves 66% accuracy on the movie review domain using the PMI-IR algorithm to gather association scores from the web." W05-0408,P04-1035,o,Pang and Lee (2004) report 87.15% accuracy using a unigram-based SVM classifier combined with subjectivity detection. W06-0302,P04-1035,o,"(2003), Pang and Lee (2004))." W06-0302,P04-1035,o,"(2004), Pang and Lee (2004), Wilson et al." W06-0303,P04-1035,o,"For process (2), existing methods aim to distinguish between subjective and objective descriptions in texts (Kim and Hovy, 2004; Pang and Lee, 2004; Riloff and Wiebe, 2003)." W06-0303,P04-1035,o,"For process (3), machine-learning methods are usually used to classify subjective descriptions into bipolar categories (Dave et al. , 2003; Beineke et al. , 2004; Hu and Liu, 2004; Pang and Lee, 2004) or multipoint scale categories (Kim and Hovy, 2004; Pang and Lee, 2005)." W06-0304,P04-1035,o,"The focus of much of the automatic sentiment analysis research is on identifying the affect bearing words (words with emotional content) and on measurement approaches for sentiment (Turney & Littman, 2003; Pang & Lee, 2004; Wilson et al. , 2005)." W06-0304,P04-1035,o,"(Pang & Lee, 2004; Aue & Gamon, 2005)." W06-1639,P04-1035,p,"As has been previously observed and exploited in the NLP literature (Pang and Lee, 2004; Agarwal and Bhattacharyya, 2005; Barzilay and Lapata, 2005), the above optimization function, unlike many others that have been proposed for graph or set partitioning, can be solved exactly in an provably efficient manner via methods for finding minimum cuts in graphs." 
W06-1640,P04-1035,o,"In contrast to the opinion extracts produced by Pang and Lee (2004), our summaries are not text extracts, but rather explicitly identify and characterize the relations between opinions and their sources." W06-1642,P04-1035,o,"Inter-sentential contexts as in our approach were used as a clue also for subjectivity analysis (Riloff and Wiebe, 2003; Pang and Lee, 2004), which is two-fold classification into subjective and objective sentences." W06-1652,P04-1035,o,"3 Data Sets We used three opinion-related data sets for our analyses and experiments: the OP data set created by (Wiebe et al. , 2004), the Polarity data set created by (Pang and Lee, 2004), and the MPQA data set created by (Wiebe et al. , 2005). The OP and Polarity data sets involve document-level opinion classification, while the MPQA data set involves sentence-level classification. (Polarity version v2.0, available at: http://www.cs.cornell.edu/people/pabo/movie-review-data/; MPQA available at http://www.cs.pitt.edu/mpqa/databaserelease/)" W07-2013,P04-1035,o,"Unlike previous annotations of sentiment or subjectivity (Wiebe et al. , 2005; Pang and Lee, 2004), which typically relied on binary 0/1 annotations, we decided to use a finer-grained scale, hence allowing the annotators to select different degrees of emotional load." W07-2022,P04-1035,p,"Sentence-level subjectivity detection, where training data is easier to obtain than for positive vs. negative classification, has been successfully performed using supervised statistical methods alone (Pang and Lee, 2004) or in combination with a knowledge-based approach (Riloff et al. , 2006)."
W07-2022,P04-1035,p,"3 CLaC-NB System: Naïve Bayes Supervised statistical methods have been very successful in sentiment tagging of texts and in subjectivity detection at sentence level: on movie review texts they reach an accuracy of 85-90% (Aue and Gamon, 2005; Pang and Lee, 2004) and up to 92% accuracy on classifying movie review snippets into subjective and objective using both Naïve Bayes and SVM (Pang and Lee, 2004)." W08-0122,P04-1035,o,"5 Related Work Evidence from the surrounding context has been used previously to determine if the current sentence should be subjective/objective (Riloff et al., 2003; Pang and Lee, 2004) and adjacency pair information has been used to predict congressional votes (Thomas et al., 2006)." W09-1606,P04-1035,o,"3.3 Language Model (LM) As a second baseline we use the classification based on the language model using overlapping ngram sequences (n was set to 8) as suggested by Pang & Lee (2004, 2005) for the English language." W09-1606,P04-1035,o,Pang & Lee (2004) propose the use of language models for sentiment analysis task and subjectivity extraction. W09-1904,P04-1035,o,"Previous research has focused on classifying subjective-versus-objective expressions (Wiebe et al., 2004), and also on accurate sentiment polarity assignment (Turney, 2002; Yi et al., 2003; Pang and Lee, 2004; Sindhwani and Melville, 2008; Melville et al., 2009)." W09-2804,P04-1035,o,Pang and Lee (2004) frame the problem of detecting subjective sentences as finding the minimum cut in a graph representation of the sentences. C08-1038,P04-1041,o,"(2007) present a chart generator using wide-coverage PCFG-based LFG approximations automatically acquired from treebanks (Cahill et al., 2004)." C08-1038,P04-1041,o,"Our approach is data-driven: following the methodology in (Cahill et al., 2004; Guo et al., 2007), we automatically convert the English PennII treebank and the Chinese Penn Treebank (Xue et al., 2005) into f-structure banks."
C08-1038,P04-1041,o,"1999), OpenCCG (White, 2004) and XLE (Crouch et al., 2007), or created semi-automatically (Belz, 2007), or fully automatically extracted from annotated corpora, like the HPSG (Nakanishi et al., 2005), LFG (Cahill and van Genabith, 2006; Hogan et al., 2007) and CCG (White et al., 2007) resources derived from the Penn-II Treebank (PTB) (Marcus et al., 1993)." D07-1027,P04-1041,o,"In addition to CFG-oriented approaches, a number of richer treebank-based grammar acquisition and parsing methods based on HPSG (Miyao et al. , 2003), CCG (Clark and Hockenmaier, 2002), LFG (Riezler et al. , 2002; Cahill et al. , 2004) and Dependency Grammar (Nivre and Nilsson, 2005) incorporate non-local dependencies into their deep syntactic or semantic representations." D07-1027,P04-1041,o,"F (Cahill et al. , 2004) overall 95.98 57.86 72.20 73.00 40.28 51.91 90.16 54.35 67.82 65.54 36.16 46.61 args only 98.64 42.03 58.94 82.69 30.54 44.60 86.36 36.80 51.61 66.08 24.40 35.64 Basic Model overall 92.44 91.28 91.85 63.87 62.15 63.00 63.12 62.33 62.72 42.69 41.54 42.10 args only 89.42 92.95 91.15 60.89 63.45 62.15 47.92 49.81 48.84 31.41 32.73 32.06 Basic Model with Subject Path Constraint overall 92.16 91.36 91.76 63.72 62.20 62.95 75.96 75.30 75.63 50.82 49.61 50.21 args only 89.04 93.08 91.02 60.69 63.52 62.07 66.15 69.15 67.62 42.77 44.76 44.76 Table 7: Evaluation of trace insertion and antecedent recovery for C04 algorithm, our basic algorithm and basic algorithm with the subject path constraint." D07-1027,P04-1041,o,"We also combine our basic algorithm (Section 4.2) with (Cahill et al. , 2004)s algorithm in order to resolve the modifier-function traces." D07-1027,P04-1041,o,"Inspired by (Cahill et al. 
, 2004)'s methodology which was originally designed for English and Penn-II treebank, our approach to Chinese non-local dependency recovery is based on Lexical-Functional Grammar (LFG), a formalism that involves both phrase structure trees and predicate-argument structures." D07-1027,P04-1041,o,"Our method revises and considerably extends the approach of (Cahill et al. , 2004) originally designed for English, and, to the best of our knowledge, is the first NLD recovery algorithm for Chinese." D07-1027,P04-1041,n,"The evaluation shows that our algorithm considerably outperforms (Cahill et al. , 2004)'s with respect to Chinese data." D07-1027,P04-1041,o,"In Section 3 we review (Cahill et al. , 2004)'s method for recovering English NLDs in treebank-based LFG approximations." D07-1027,P04-1041,o,"3.2 F-Structure Based NLD Recovery (Cahill et al. , 2004) presented a NLD recovery algorithm operating at LFG f-structure for treebank-based LFG approximations." D07-1027,P04-1041,o,"(Cahill et al. , 2004)'s approach for English resolves three LDD types in parser output trees without traces and coindexation (Figure 2(b)), i.e. topicalisation (TOPIC), wh-movement in relative clauses (TOPIC REL) and interrogatives (FOCUS)." D07-1027,P04-1041,o,"Inspired by (Cahill et al. , 2004; Burke et al. , 2004), we have implemented an f-structure annotation algorithm to automatically obtain f-structures from CFG-trees in the CTB5.1." D07-1027,P04-1041,o,"4.2 Adaptation to Chinese (Cahill et al. , 2004)'s algorithm (Section 3.2) only resolves certain NLDs with known types of antecedents (TOPIC, TOPIC REL and FOCUS) at f-structures." D07-1027,P04-1041,o,"In order to resolve all Chinese NLDs represented in the CTB, we modify and substantially extend the (Cahill et al.
, 2004) (henceforth C04 for short) algorithm as follows: Given the set of subcat frames s for the word w, and a set of paths p for the trace t, the algorithm traverses the f-structure f to: predict a dislocated argument t at a sub-f-structure h by comparing the local PRED:w to w's subcat frames s t can be inserted at h if h together with t is complete and coherent relative to subcat frame s traverse f starting from t along the path p link t to its antecedent a if p's ending GF a exists in a sub-f-structure within f; or leave t without an antecedent if an empty path for t exists In the modified algorithm, we condition the probability of NLD path p (including the empty path without an antecedent) on the GF associated of the trace t rather than the antecedent a as in C04." D07-1028,P04-1041,o,"The LFG annotation algorithm of (Cahill et al. , 2004) was used to produce the f-structures for development, test and training sets." D09-1085,P04-1041,o,"It has also obtained competitive scores on general GR evaluation corpora (Cahill et al., 2004)." D09-1085,P04-1041,o,"3.2 The parsers The parsers that we chose to evaluate are the C&C CCG parser (Clark and Curran, 2007), the Enju HPSG parser (Miyao and Tsujii, 2005), the RASP parser (Briscoe et al., 2006), the Stanford parser (Klein and Manning, 2003), and the DCU postprocessor of PTB parsers (Cahill et al., 2004), based on LFG and applied to the output of the Charniak and Johnson reranking parser." E06-1010,P04-1041,o,"Most of this work has so far focused either on post-processing to recover non-local dependencies from context-free parse trees (Johnson, 2002; Jijkoun and De Rijke, 2004; Levy and Manning, 2004; Campbell, 2004), or on incorporating nonlocal dependency information in nonterminal categories in constituency representations (Dienes and Dubey, 2003; Hockenmaier, 2003; Cahill et al. , 2004) or in the categories used to label arcs in dependency representations (Nivre and Nilsson, 2005)."
J05-3003,P04-1041,o,"However, more recent work (Cahill et al. 2002; Cahill, McCarthy, et al. 2004) has presented efforts in evolving and scaling up annotation techniques to the Penn-II Treebank (Marcus et al. 1994), containing more than 1,000,000 words and 49,000 sentences." J05-3003,P04-1041,o,"Our approach is based on earlier work on LFG semantic form extraction (van Genabith, Sadler, and Way 1999) and recent progress in automatically annotating the Penn-II and Penn-III Treebanks with LFG f-structures (Cahill et al. 2002; Cahill, McCarthy, et al. 2004)." J05-3003,P04-1041,o,"We have also applied our more general unification grammar acquisition methodology to the TIGER Treebank (Brants et al. 2002) and Penn Chinese Treebank (Xue, Chiou, and Palmer 2002), extracting wide-coverage, probabilistic LFG grammar 361 Computational Linguistics Volume 31, Number 3 approximations and lexical resources for German (Cahill et al. 2003) and Chinese (Burke, Lam, et al. 2004)." J07-3004,P04-1041,o,"Because treebank annotation for individual formalisms is prohibitively expensive, there have been a number of efforts to extract TAGs, LFGs, and, more recently, HPSGs, from the Penn Treebank (Xia 1999; Chen and Vijay-Shanker 2000; Xia, Palmer, and Joshi 2000; Xia 2001; Cahill et al. 2002; Miyao, Ninomiya, and Tsujii 2004; ODonovan et al. 2005; Shen and Joshi 2005; Chen, Bangalore, and Vijay-Shanker 2006)." J07-3004,P04-1041,o,"For the Penn Treebank, our research and the work of others (Xia 1999; Chen and Vijay-Shanker 2004; Chiang 2000; Cahill et al. 2002) have shown that such a correspondence exists in most cases." J07-4004,P04-1041,o,"Statistical parsers have been developed for TAG (Chiang 2000; Sarkar and Joshi 2003), LFG (Riezler et al. 2002; Kaplan et al. 2004; Cahill et al. 2004), and HPSG (Toutanova et al. 2002; Toutanova, Markova, and Manning 2004; Miyao and Tsujii 2004; Malouf and van Noord 2004), among others." 
N06-1019,P04-1041,o,"Even robust parsers using linguistically sophisticated formalisms, such as TAG (Chiang, 2000), CCG (Clark and Curran, 2004b; Hockenmaier, 2003), HPSG (Miyao et al. , 2004) and LFG (Riezler et al. , 2002; Cahill et al. , 2004), often use training data derived from the Penn Treebank." N07-2031,P04-1041,o,The feasibility of such post-parse deepening (for a statistical parser) is demonstrated by Cahill et al (2004). P04-1047,P04-1041,o,"Our approach is based on earlier work on LFG semantic form extraction (van Genabith et al. , 1999) and recent progress in automatically annotating the Penn-II treebank with LFG f-structures (Cahill et al. , 2004b)." P04-1047,P04-1041,o,"In this paper we show how the extraction process can be scaled to the complete Wall Street Journal (WSJ) section of the Penn-II treebank, with about 1 million words in 50,000 sentences, based on the automatic LFG f-structure annotation algorithm described in (Cahill et al. , 2004b)." P04-1047,P04-1041,o,"We are already using the extracted semantic forms in parsing new text with robust, wide-coverage PCFG-based LFG grammar approximations automatically acquired from the f-structure annotated Penn-II treebank (Cahill et al. , 2004a)." P04-1047,P04-1041,o,"We utilise the automatic annotation algorithm of (Cahill et al. , 2004b) to derive a version of Penn-II where each node in each tree is annotated with an LFG functional annotation (i.e. an attribute value structure equation)." P04-1047,P04-1041,o,"(Cahill et al. , 2004b) provide four sets of annotation principles, one for non-coordinate configurations, one for coordinate configurations, one for traces (long distance dependencies) and a final catch all and clean up phase." P04-1047,P04-1041,o,"The algorithm of (Cahill et al. , 2004b) translates the traces into corresponding re-entrancies in the f-structure representation (Figure 1)." P04-1047,P04-1041,o,"(Cahill et al. 
, 2004b) measure annotation quality in terms of precision and recall against manually constructed, gold-standard f-structures for 105 randomly selected trees from section 23 of the WSJ section of Penn-II." P05-1013,P04-1041,o,"Finally, since non-projective constructions often involve long-distance dependencies, the problem is closely related to the recovery of empty categories and non-local dependencies in constituency-based parsing (Johnson, 2002; Dienes and Dubey, 2003; Jijkoun and de Rijke, 2004; Cahill et al. , 2004; Levy and Manning, 2004; Campbell, 2004)." P06-1130,P04-1041,o,"Recent work on the automatic acquisition of multilingual LFG resources from treebanks for Chinese, German and Spanish (Burke et al. , 2004; Cahill et al. , 2005; ODonovan et al. , 2005) has shown that given a suitable treebank, it is possible to automatically acquire high quality LFG resources in a very short space of time." P06-1130,P04-1041,o,"c2006 Association for Computational Linguistics Robust PCFG-Based Generation using Automatically Acquired LFG Approximations Aoife Cahill1 and Josef van Genabith1,2 1 National Centre for Language Technology (NCLT) School of Computing, Dublin City University, Dublin 9, Ireland 2 Center for Advanced Studies, IBM Dublin, Ireland {acahill,josef}@computing.dcu.ie Abstract We present a novel PCFG-based architecture for robust probabilistic generation based on wide-coverage LFG approximations (Cahill et al. , 2004) automatically extracted from treebanks, maximising the probability of a tree given an f-structure." P06-1130,P04-1041,o,"In this paper we present a novel PCFG-based architecture for probabilistic generation based on wide-coverage, robust Lexical Functional Grammar (LFG) approximations automatically extracted from treebanks (Cahill et al. , 2004)." P06-2018,P04-1041,o,"The f-structure annotation algorithm used for inducing LFG resources from the Penn-II treebank for English (Cahill et al. 
, 2004) uses configurational, categorial, function tag and trace information." P06-2018,P04-1041,o,"It has been shown that the methods can be ported to other languages and treebanks (Burke et al. , 2004; Cahill et al. , 2003), including Cast3LB (O'Donovan et al. , 2005)." P06-2018,P04-1041,o,"1 Introduction The research presented in this paper forms part of an ongoing effort to develop methods to induce wide-coverage multilingual Lexical-Functional Grammar (LFG) (Bresnan, 2001) resources from treebanks by means of automatically associating LFG f-structure information with constituency trees produced by probabilistic parsers (Cahill et al. , 2004)." P07-1032,P04-1041,o,"1 Introduction Parsers have been developed for a variety of grammar formalisms, for example HPSG (Toutanova et al. , 2002; Malouf and van Noord, 2004), LFG (Kaplan et al. , 2004; Cahill et al. , 2004), TAG (Sarkar and Joshi, 2003), CCG (Hockenmaier and Steedman, 2002; Clark and Curran, 2004b), and variants of phrase-structure grammar (Briscoe et al. , 2006), including the phrase-structure grammar implicit in the Penn Treebank (Collins, 2003; Charniak, 2000)." W04-2003,P04-1041,o,"Our approach is to use finite-state approximations of long-distance dependencies, as they are described in (Schneider, 2003a) for Dependency Grammar (DG) and (Cahill et al. , 2004) for Lexical Functional Grammar (LFG)." W07-0411,P04-1041,o,"The translation and reference files are analyzed by a treebank-based, probabilistic Lexical-Functional Grammar (LFG) parser (Cahill et al. , 2004), which produces a set of dependency triples for each input." W07-0714,P04-1041,o,"The translation and reference files are analyzed by a treebank-based, probabilistic LFG parser (Cahill et al. , 2004), which produces a set of dependency triples for each input." W07-2206,P04-1041,o,"1 Introduction A recent theme in parsing research has been the application of statistical methods to linguistically motivated grammars, for example LFG (Kaplan et al.
, 2004; Cahill et al. , 2004), HPSG (Toutanova et al. , 2002; Malouf and van Noord, 2004), TAG (Sarkar and Joshi, 2003) and CCG (Hockenmaier and Steedman, 2002; Clark and Curran, 2004b)." W07-2211,P04-1041,o,"Methods for doing so, for stochastic parser output, are described by Johnson (2002) and Cahill et al (2004)." W08-1122,P04-1041,o,"The f-structures are created automatically by annotating nodes in the gold standard WSJ trees with LFG functional equations and then passing these equations through a constraint solver (Cahill et al., 2004)." W08-1306,P04-1041,o,"(Cahill et al., 2004) managed to extract LFG subcategorisation frames and paths linking long distance dependencies reentrancies from f-structures generated automatically for the PennII treebank trees and used them in a long distance dependency resolution algorithm to parse new text." W09-2605,P04-1041,o,(2004) and Cahill et al. P06-2031,P04-1048,o,"6 Related Work The most relevant previous works include word sense translation and translation disambiguation (Li & Li 2003; Cao & Li 2002; Koehn and Knight 2000; Kikui 1999; Fung et al. , 1999), frame semantic induction (Green et al. , 2004; Fung & Chen 2004), and bilingual semantic mapping (Fung & Chen 2004; Huang et al. 2004; Ploux & Ji, 2003, Ngai et al. , 2002; Palmer & Wu 1995)." P06-2031,P04-1048,o,"It would be necessary to apply either semiautomatic or automatic methods such as those in (Burchardt et al. 2005, Green et al 2004) to extend FrameNet coverage for final application to machine translation tasks." W05-1007,P04-1048,p,"This paper demonstrates several of the characteristics and benefits of SemFrame (Green et al. , 2004; Green and Dorr, 2004), a system that produces such a resource." C08-1106,P05-1010,o,"When the data has distinct sub-structures, models that exploit hidden state variables are advantageous in learning (Matsuzaki et al. 2005; Petrov et al. 2007)."
D07-1014,P05-1010,o,"More recently, EM has been used to learn hidden variables in parse trees; these can be head-child annotations (Chiang and Bikel, 2002), latent head features (Matsuzaki et al. , 2005; Prescher, 2005; Dreyer and Eisner, 2006), or hierarchically split nonterminal states (Petrov et al. , 2006)." D07-1014,P05-1010,p,"6 Discussion Noting that adding latent features to nonterminals in unlexicalized context-free parsing has been very successful (Chiang and Bikel, 2002; Matsuzaki et al. , 2005; Prescher, 2005; Dreyer and Eisner, 2006; Petrov et al. , 2006), we were surprised not to see a 3Czech experiments were not done, since the number of features (more than 14 million) was too high to multiply out by clusters." D07-1072,P05-1010,o,"We compare an ordinary PCFG estimated with maximum likelihood (Matsuzaki et al. , 2005) and the HDP-PCFG estimated using the variational inference algorithm described in Section 2.6." D07-1072,P05-1010,o,"Unlexicalized methods refine the grammar in a more conservative fashion, splitting each non-terminal or pre-terminal symbol into a much smaller number of subsymbols (Klein and Manning, 2003; Matsuzaki et al. , 2005; Petrov et al. , 2006)." D08-1016,P05-1010,o,"We could also introduce new variables, e.g., nonterminal refinements (Matsuzaki et al., 2005), or secondary links Mij (not constrained by TREE/PTREE) that augment the parse with representations of control, binding, etc." D08-1091,P05-1010,o,"The parameters of the refined productions Ax → By Cz, where Ax is a subcategory of A, By of B, and Cz of C, can then be estimated in various ways; past work has included both generative (Matsuzaki et al., 2005; Liang et al., 2007) and discriminative approaches (Petrov and Klein, 2008)." D08-1091,P05-1010,o,"The resulting memory limitations alone can prevent the practical learning of highly split grammars (Matsuzaki et al., 2005)."
D08-1091,P05-1010,o,"1 Introduction In latent variable approaches to parsing (Matsuzaki et al., 2005; Petrov et al., 2006), one models an observed treebank of coarse parse trees using a grammar over more refined, but unobserved, derivation trees." D09-1087,P05-1010,p,"2 Parsing Model The Berkeley parser (Petrov et al., 2006; Petrov and Klein, 2007) is an efficient and effective parser that introduces latent annotations (Matsuzaki et al., 2005) to refine syntactic categories to learn better PCFG grammars." D09-1119,P05-1010,p,"This was recently followed by (Matsuzaki et al., 2005; Petrov et al., 2006) who introduce state-of-the-art nearly unlexicalized PCFG parsers." D09-1135,P05-1010,o,"Then, some manual and automatic symbol splitting methods are presented, which get comparable performance with lexicalized parsers (Klein and Manning, 2003; Matsuzaki et al., 2005)." D09-1161,P05-1010,p,The latent-annotation model (Matsuzaki et al. 2005; Petrov et al. 2006) is one of the most effective un-lexicalized models. D09-1161,P05-1010,o,"In general, they can be divided into two major categories, namely lexicalized models (Collins 1997, 1999; Charniak 1997, 2000) and un-lexicalized models (Klein and Manning 2003; Matsuzaki et al. 2005; Petrov et al. 2006; Petrov and Klein 2007)." E09-1088,P05-1010,p,"1 Introduction When data have distinct sub-structures, models exploiting latent variables are advantageous in learning (Matsuzaki et al., 2005; Petrov and Klein, 2007; Blunsom et al., 2008)." N07-1051,P05-1010,o,"The refined grammar is estimated using a variant of the forward-backward algorithm (Matsuzaki et al. , 2005)." N07-1051,P05-1010,p,"Previous work has shown that high-quality unlexicalized PCFGs can be learned from a treebank, either by manual annotation (Klein and Manning, 2003) or automatic state splitting (Matsuzaki et al. , 2005; Petrov et al. , 2006)." 
N09-2054,P05-1010,o,"Rather than explicit annotation, we could use latent annotations to split the POS tags, similarly to the introduction of latent annotations to PCFG grammars (Matsuzaki et al., 2005; Petrov et al., 2006)." N09-2054,P05-1010,o,"Building upon the large body of research to improve tagging performance for various languages using various models (e.g., (Thede and Harper, 1999; Brants, 2000; Tseng et al., 2005b; Huang et al., 2007)) and the recent work on PCFG grammars with latent annotations (Matsuzaki et al., 2005; Petrov et al., 2006), we will investigate the use of fine-grained latent annotations for Chinese POS tagging." P06-1055,P05-1010,n,"As one can see in Table 4, the resulting parser ranks among the best lexicalized parsers, beating those of Collins (1999) and Charniak and Johnson (2005). Its F1 performance is a 27% reduction in error over Matsuzaki et al." P06-1055,P05-1010,o,"(2005) 86.6 86.7 1.19 61.1 Collins (1999) 88.7 88.5 0.92 66.7 Charniak and Johnson (2005) 90.1 90.1 0.74 70.1 This Paper 90.3 90.0 0.78 68.5 all sentences LP LR CB 0CB Klein and Manning (2003) 86.3 85.1 1.31 57.2 Matsuzaki et al." P07-1022,P05-1010,o,"For example, incremental CFG parsing algorithms can be used with the CFGs produced by this transform, as can the Inside-Outside estimation algorithm (Lari and Young, 1990) and more exotic methods such as estimating adjoined hidden states (Matsuzaki et al. , 2005; Petrov et al. , 2006)." P07-1080,P05-1010,o,"5 Related Work There has not been much previous work on graphical models for full parsing, although recently several latent variable models for parsing have been proposed (Koo and Collins, 2005; Matsuzaki et al. , 2005; Riezler et al. , 2002)." P07-1080,P05-1010,o,"(Koo and Collins, 2005; Matsuzaki et al. , 2005; Riezler et al. , 2002))." P07-2052,P05-1010,o,"splitting tags (Matsuzaki et al. , 2005; Petrov et al. , 2006)."
P07-2052,P05-1010,o,"Unlexicalized parsers, on the other hand, achieved accuracies almost equivalent to those of lexicalized parsers (Klein and Manning, 2003; Matsuzaki et al. , 2005; Petrov et al. , 2006)." P08-1006,P05-1010,o,"Figure 3: Predicate argument structure [...] optimized automatically by assigning latent variables to each nonterminal node and estimating the parameters of the latent variables by the EM algorithm (Matsuzaki et al., 2005)." P08-1068,P05-1010,o,"Previous research in this area includes several models which incorporate hidden variables (Matsuzaki et al., 2005; Koo and Collins, 2005; Petrov et al., 2006; Titov and Henderson, 2007)." P08-2054,P05-1010,o,"CFGs extracted from such structures were then annotated with hidden variables encoding the constraints described in the previous section and trained until convergence by means of the Inside-Outside algorithm defined in (Pereira and Schabes, 1992) and applied in (Matsuzaki et al., 2005)." P08-2054,P05-1010,o,"Such methods stand in sharp contrast to partially supervised techniques that have recently been proposed to induce hidden grammatical representations that are finer-grained than those that can be read off the parsed sentences in treebanks (Henderson, 2003; Matsuzaki et al., 2005; Prescher, 2005; Petrov et al., 2006)." P09-1067,P05-1010,o,"Indeed, our methods were inspired by past work on variational decoding for DOP (Goodman, 1996) and for latent-variable parsing (Matsuzaki et al., 2005)." W06-1636,P05-1010,o,"Instead researchers condition parsing decisions on many other features, such as parent phrase-marker, and, famously, the lexical-head of the phrase (Magerman, 1995; Collins, 1996; Collins, 1997; Johnson, 1998; Charniak, 2000; Henderson, 2003; Klein and Manning, 2003; Matsuzaki et al. , 2005) (and others)."
W06-1636,P05-1010,o,"In retrospect, however, there are perhaps even greater similarities to that of (Magerman, 1995; Henderson, 2003; Matsuzaki et al. , 2005)."
W06-2902,P05-1010,o,"(Matsuzaki et al. , 2005; Koo and Collins, 2005))."
W06-2903,P05-1010,p,"Compared to a basic treebank grammar (Charniak, 1996), the grammars of highaccuracy parsers weaken independence assumptions by splitting grammar symbols and rules with either lexical (Charniak, 2000; Collins, 1999) or nonlexical (Klein and Manning, 2003; Matsuzaki et al. , 2005) conditioning information."
W07-2218,P05-1010,o,"Recently several latent variable models for constituent parsing have been proposed (Koo and Collins, 2005; Matsuzaki et al. , 2005; Prescher, 2005; Riezler et al. , 2002)."
W07-2218,P05-1010,o,"In (Matsuzaki et al. , 2005) non-terminals in a standard PCFG model are augmented with latent variables."
W07-2218,P05-1010,n,"While the model of (Matsuzaki et al. , 2005) significantly outperforms the constrained model of (Prescher, 2005), they both are well below the state-of-the-art in constituent parsing."
W07-2219,P05-1010,o,"3.1 A Note on State-Splits Recent studies (Klein and Manning, 2003; Matsuzaki et al. , 2005; Prescher, 2005; Petrov et al. , 2006) suggest that category-splits help in enhancing the performance of treebank grammars, and a previous study on MH (Tsarfaty, 2006) outlines specific POS-tags splits that improve MH parsing accuracy."
W08-1005,P05-1010,o,"2 Latent Variable Parsing In latent variable parsing (Matsuzaki et al., 2005; Prescher, 2005; Petrov et al., 2006), we learn rule probabilities on latent annotations that, when marginalized out, maximize the likelihood of the unannotated training trees."
W09-1008,P05-1010,o,"This leads to 49 methods that use semi-supervised techniques on a treebank-infered grammar backbone, such as (Matsuzaki et al., 2005; Petrov et al., 2006)."
W09-1008,P05-1010,o,"Solving this first methodological issue, has led to solutions dubbed hereafter as unlexicalized statistical parsing (Johnson, 1998; Klein and Manning, 2003a; Matsuzaki et al., 2005; Petrov et al., 2006)."
W09-1008,P05-1010,o,"A further development has been first introduced by (Matsuzaki et al., 2005) who recasts the problem of adding latent annotations as an unsupervised learning problem: given an observed PCFG induced from the treebank, the latent grammar is generated by combining every non terminal of the observed grammar to a predefined set H of latent symbols."
C08-1061,P05-1045,o,"In this paper, Stanford Named Entity Recognizer (Finkel et al. 2005) is used to classify noun phrases into four semantic categories: PERSON, LOCATION, ORGANIZARION and MISC."
D07-1033,P05-1045,o,"For example, non-local features such as same phrases in a document do not have different entity classes were shown to be useful in named entity recognition (Sutton and McCallum, 2004; Bunescu and Mooney, 2004; Finkel et al. , 2005; Krishnan and Manning, 2006)."
D07-1033,P05-1045,n,"Although several methods have already been proposed to incorporate non-local features (Sutton and McCallum, 2004; Bunescu and Mooney, 2004; Finkel et al. , 2005; Roth and Yih, 2005; Krishnan and Manning, 2006; Nakagawa and Matsumoto, 2006), these present a problem that the types of non-local features are somewhat constrained."
D07-1033,P05-1045,o,"The performance of the related work (Finkel et al. , 2005; Krishnan and Manning, 2006) is listed in Table4."
D07-1033,P05-1045,o,"Method dev test Finkel et al. , 2005 (Finkel et al. , 2005) baseline CRF 85.51 + non-local features 86.86 Krishnan and Manning, 2006 (Krishnan and Manning, 2006) baseline CRF 85.29 + non-local features 87.24 Table 5: Summary of performance with POS/chunk tags by TagChunk."
D07-1033,P05-1045,o,"However, the achieved accuracy was not better than that of related work (Finkel et al. , 2005; Krishnan and Manning, 2006) based on CRFs."
D09-1016,P05-1045,o,"The named-entity features are generated by the freely available Stanford NER tagger (Finkel et al., 2005)."
D09-1057,P05-1045,n,"However, due to the lack of a fine grained NER tool at hand, we employ the Stanford NER package (Finkel et al., 2005) which identifies only four types of named entities."
D09-1057,P05-1045,o,"5.1 CoNLL named entities presence feature We use Stanford named entity recognizer (NER) (Finkel et al., 2005) to identify CoNLL style NEs7 as possible answer strings in a candidate sentence for a given type of question."
D09-1101,P05-1045,o,"Semantic (1): The named entity (NE) tag of wi obtained using the Stanford CRF-based NE recognizer (Finkel et al., 2005)."
D09-1119,P05-1045,p,"As discussed above, all state-of-the-art published methods rely on lexical features for such tasks (Zhang et al., 2001; Sha and Pereira, 2003; Finkel et al., 2005; Ratinov and Roth, 2009)."
D09-1120,P05-1045,o,"Instead, we opt to utilize the Stanford NER tagger (Finkel et al., 2005) over the sentences in a document and annotate each NP with the NER label assigned to that mention head."
D09-1158,P05-1045,o,"4.1 NER features We used the features generated by the CRF package (Finkel et al., 2005)."
E09-1007,P05-1045,o,"F-me. 1 CBC-NER system M 71.67 23.47 35.36CBC-NER system A 70.66 32.86 44.86 2 XIP NER 77.77 56.55 65.48 XIP + CBC M 78.41 60.26 68.15 XIP + CBC A 76.31 60.48 67.48 3 Stanford NER 67.94 68.01 67.97 Stanford + CBC M 69.40 71.07 70.23 Stanford + CBC A 70.09 72.93 71.48 4 GATE NER 63.30 56.88 59.92 GATE + CBC M 66.43 61.79 64.03 GATE + CBC A 66.51 63.10 64.76 5 Stanford + XIP 72.85 75.87 74.33 Stanford + XIP + CBC M 72.94 77.70 75.24 Stanford + XIP + CBC A 73.55 78.93 76.15 6 GATE + XIP 69.38 66.04 67.67 GATE + XIP + CBC M 69.62 67.79 68.69 GATE + XIP + CBC A 69.87 69.10 69.48 7 GATE + Stanford 63.12 69.32 66.07 GATE + Stanford + CBC M 65.09 72.05 68.39 GATE + Stanford + CBC A 65.66 73.25 69.25 Table 1: Results given by different hybrid NER systems and coupled with the CBC-NER system corpora (CoNLL, MUC6, MUC7 and ACE): ner-eng-ie.crf-3-all2008-distsim.ser.gz (Finkel et al., 2005) (line 3 in Table 1), GATE NER or in short GATE (Cunningham et al., 2002) (line 4 in Table 1), and several hybrid systems which are given by the combination of pairs taken among the set of the three last-mentioned NER systems (lines 5 to 7 in Table 1)."
E09-1011,P05-1045,o,"We parse the data using the Collins Parser (Collins, 1997), and then tag person, location and organization names using the Stanford Named Entity Recognizer (Finkel et al., 2005)."
E09-1037,P05-1045,o,"Some stem from work on graphical models,includingloopybeliefpropagation(Suttonand McCallum, 2004; Smith and Eisner, 2008), Gibbs sampling (Finkel et al., 2005), sequential Monte Carlo methods such as particle filtering (Levy et al., 2008), and variational inference (Jordan et al., 1999; MacKay, 1997; Kurihara and Sato, 2006)."
E09-1091,P05-1045,o,"In all the experiments, our source side language is English, and the Stanford Named Entity Recognizer (Finkel et al, 2005) was used to extract NEs from the source side article."
I08-4013,P05-1045,o,"In the first approach, heuristic rules are used to find the dependencies (Bunescu and Mooney, 2004) or penalties for label inconsistency are required to handset ad-hoc (Finkel et al., 2005)."
I08-6004,P05-1045,o,"Corpus Time Period Size Articles Words New Indian Express (English) 2007.01.01 to 2007.08.31 2,359 347,050 Dinamani (Tamil) 2007.01.01 to 2007.08.31 2,359 256,456 Table 1: Statistics on Comparable Corpora From the above corpora, we first extracted all the NEs from the English side, using the Stanford NER tool [Finkel et al, 2005]."
N06-1054,P05-1045,o,"That is a significant shortcoming, because in many domains, hard or soft global constraints on the label sequence are motivated by common sense: For named entity recognition, a phrase that appears multiple times should tend to get the same label each time (Finkel et al. , 2005)."
N06-1054,P05-1045,o,"should appear with at most one value in each announcement, although the field and value may be repeated (Finkel et al. , 2005)."
N06-1054,P05-1045,o,"Such techniques include Gibbs sampling (Finkel et al. , 2005), a general-purpose Monte Carlo method, and integer linear programming (ILP), (Roth and Yih, 2005), a general-purpose exact framework for NP-complete problems."
N09-1037,P05-1045,o,"For the named entity features, we used a fairly standard feature set, similar to those described in (Finkel et al., 2005)."
N09-1068,P05-1045,o,"Our features were based on those in (Finkel et al., 2005)."
P06-1059,P05-1045,o,"Many of the previous studies of Bio-NER tasks have been based on machine learning techniques including Hidden Markov Models (HMMs) (Bikel et al. , 1997), the dictionary HMM model (Kou et al. , 2005) and Maximum Entropy Markov Models (MEMMs) (Finkel et al. , 2004)."
P06-1059,P05-1045,o,"However, other types of nonlocal information have also been shown to be effective (Finkel et al. , 2005) and we will examine the effectiveness of other non-local information which can be embedded into label information."
P06-1059,P05-1045,o,"information about the previous state (Finkel et al. , 2005)."
P06-1059,P05-1045,n,"In a recent study by Finkel et al. , (2005), nonlocal information is encoded using an independence model, and the inference is performed by Gibbs sampling, which enables us to use a stateof-the-art factored model and carry out training efficiently, but inference still incurs a considerable computational cost."
P06-1089,P05-1045,o,"Global information is known to be useful in other NLP tasks, especially in the named entity recognition task, and several studies successfully used global features (Chieu and Ng, 2002; Finkel et al. , 2005)."
P06-1141,P05-1045,o," Most existing work to capture labelconsistency, has attempted to create all parenleftbign2parenrightbig pairwise dependencies between the different occurrences of an entity, (Finkel et al. , 2005; Sutton and McCallum, 2004), where n is the number of occurrences of the given entity."
P06-1141,P05-1045,o," Most work has looked to model non-local dependencies only within a document (Finkel 1125 et al. , 2005; Chieu and Ng, 2002; Sutton and McCallum, 2004; Bunescu and Mooney, 2004)."
P06-1141,P05-1045,n,"The simplicity of our approach makes it easy to incorporate dependencies across the whole corpus, which would be relatively much harder to incorporate in approaches like (Bunescu and Mooney, 2004) and (Finkel et al. , 2005)."
P06-1141,P05-1045,o,"Additionally, our approach makes it possible to do inference in just about twice the inference time with a single sequential CRF; in contrast, approaches like Gibbs Sampling that model the dependencies directly can increase inference time by a factor of 30 (Finkel et al. , 2005)."
P06-1141,P05-1045,n,"We also compare our performance against (Bunescu and Mooney, 2004) and (Finkel et al. , 2005) and find that we manage higher relative improvement than existing work despite starting from a very competitive baseline CRF."
P06-1141,P05-1045,o,"A very common case of this in the CoNLL dataset is that of documents containing references to both The China Daily, a newspaper, and China, the country (Finkel et al. , 2005)."
P06-2054,P05-1045,o,"An additional consistent edge of a linear-chain conditional random field (CRF) explicitly models the dependencies between distant occurrences of similar words (Sutton and McCallum, 2004; Finkel et al. , 2005)."
P08-4003,P05-1045,o,"Starting out with a chunking pipeline, which uses a classical combination of tagger and chunker, with the Stanford POS tagger (Toutanova et al., 2003), the YamCha chunker (Kudoh and Matsumoto, 2000) and the Stanford Named Entity Recognizer (Finkel et al., 2005), the desire to use richer syntactic representations led to the development of a parsing pipeline, which uses Charniak and Johnsons reranking parser (Charniak and Johnson, 2005) to assign POS tags and uses base NPs as chunk equivalents, while also providing syntactic trees that can be used by feature extractors."
P09-1113,P05-1045,o,"We perform named entity tagging using the Stanford four-class named entity tagger (Finkel et al., 2005)."
P09-2041,P05-1045,o,"To implement this method, we rst use the Stanford Named Entity Recognizer4 (Finkel et al., 2005)toidentifythesetofpersonandorganisation entities, E, from each article in the corpus."
W06-1643,P05-1045,o,"Most previous work with CRFs containing nonlocal dependencies used approximate probabilistic inference techniques, including TRP (Sutton and McCallum, 2004) and Gibbs sampling (Finkel et al. , 2005)."
W06-1655,P05-1045,o,"4 Relation to Previous Work There is a significant volume of work exploring the use of CRFs for a variety of chunking tasks, including named-entity recognition, gene prediction, shallow parsing and others (Finkel et al. , 2005; Culotta et al. , 2005; Sha and Pereira, 2003)."
W07-2058,P05-1045,o,"We extract the named entities from the web pages using the Stanford Named Entity Recognizer (Finkel et al. , 2005)."
W09-0422,P05-1045,o,"One of the steps in the analysis of English is named entity recognition using Stanford Named Entity Recognizer (Finkel et al., 2005)."
W09-1119,P05-1045,o,"The results we obtained on the CoNLL03 test set were consistent with what was reported in (Finkel et al., 2005)."
W09-1119,P05-1045,o,"NER proves to be a knowledgeintensive task, and it was reassuring to observe that System Resources Used F1 + LBJ-NER Wikipedia, Nonlocal Features, Word-class Model 90.80 (Suzuki and Isozaki, 2008) Semi-supervised on 1Gword unlabeled data 89.92 (Ando and Zhang, 2005) Semi-supervised on 27Mword unlabeled data 89.31 (Kazama and Torisawa, 2007a) Wikipedia 88.02 (Krishnan and Manning, 2006) Non-local Features 87.24 (Kazama and Torisawa, 2007b) Non-local Features 87.17 + (Finkel et al., 2005) Non-local Features 86.86 Table 7: Results for CoNLL03 data reported in the literature."
W09-1218,P05-1045,o,"Use of global features for structured prediction problem has been explored by several NLP applications such as sequential labeling (Finkel et al., 2005; Krishnan and Manning, 2006; Kazama and Torisawa, 2007) and dependency parsing (Nakagawa, 2007) with a great deal of success."
N07-2005,P05-3024,o,"K-best suffix arrays have been used in autocomplete applications (Church and Thiesson, 2005)."
D07-1107,P06-1014,o,"Finally, we compare against the mapping from WordNet to the Oxford English Dictionary constructed in (Navigli, 2006), equivalent to clustering based solely on the OED feature."
D07-1107,P06-1014,o,"Of the methods we compare against, only the WordNet-based similarity measures, (Mihalcea and Moldovan, 2001), and (Navigli, 2006) provide a method for predicting verb similarities; our learned measure widely outperforms these methods, achieving a 13.6% F-score improvement over the LESK similarity measure."
D07-1107,P06-1014,o,"Only the measures provided by LESK, HSO, VEC, (Mihalcea and Moldovan, 2001), and (Navigli, 2006) provide a method for predicting adjective similarities; of these, only LESK and VEC outperform the uninformed baseline on adjectives, while our learned measure achieves a 4.0% improvement over the LESK measure on adjectives."
D07-1107,P06-1014,o,"(Navigli, 2006) presents an automatic approach for mapping between sense inventories; here similarities in gloss definition and structured relations between the two sense inventories are exploited in order to map between WordNet senses and distinctions made within the coarser-grained Oxford English Dictionary."
D07-1107,P06-1014,o,"Finally, we use as a feature the mappings produced in (Navigli, 2006) of WordNet senses to Oxford English Dictionary senses."
D09-1020,P06-1014,o,"Several researchers (e.g., (Palmer et al., 2004; Navigli, 2006; Snow et al., 2007; Hovy et al., 2006)) work on reducing the granularity of sense inventories for WSD."
D09-1046,P06-1014,o,"Such coarse-grained inventories can be produced manually from scratch (Hovy et al., 2006) or by automatically relating (McCarthy, 2006) or clustering (Navigli, 2006; Navigli et al., 2007) existing word senses."
D09-1046,P06-1014,o,"WordNet has been criticized for being overly finegrained (Navigli et al., 2007; Ide and Wilks, 2006), we are using it here because it is the sense inventory used by Erk et al."
D09-1081,P06-1014,o,"WordNet sense information has been criticized to be too fine grained (Agirre and Lopez de Lacalle Lekuona, 2003; Navigli, 2006)."
E09-1045,P06-1014,o,"Thus, some research has been focused on deriving different word-sense groupings to overcome the finegrained distinctions of WN (Hearst and Schutze, 1993), (Peters et al., 1998), (Mihalcea and Moldovan, 2001), (Agirre and LopezDeLaCalle, 2003), (Navigli, 2006) and (Snow et al., 2007)."
E09-1092,P06-1014,o,"The first-sense heuristic can be thought of as striving for maximal specificity at the risk of precluding some admissible senses (reduced recall), 7Allowing for multiple fine-grained senses to be judged as appropriate in a given context goes back at least to Sussna (1993); discussed more recently by, e.g., Navigli (2006)."
P08-2063,P06-1014,o,Navigli (2006) has induced clusters by mapping WordNet senses to a more coarse-grained lexical resource.
P08-2063,P06-1014,n,"Although ITA rates and system performance both significantly improve with coarse-grained senses (Duffield et al., 2007; Navigli, 2006), the question about what level of granularity is needed remains."
P08-2063,P06-1014,n,"WSD systems have been far more successful in distinguishing coarsegrained senses than fine-grained ones (Navigli, 2006), but does that approach neglect necessary meaning differences?"
W07-1404,P06-1014,o,"This clustering was created automatically with the aid of a methodology described in (Navigli, 2006)."
W07-2006,P06-1014,o,"2.2 Creation of a Coarse-Grained Sense Inventory To tackle the granularity issue, we produced a coarser-grained version of the WordNet sense inventory3 based on the procedure described by Navigli (2006)."
W07-2059,P06-1014,o,"However, in the coarse-grained task, the sense inventory was first clustered semi-automatically with each cluster representing an equivalence class over senses (Navigli, 2006)."
D07-1070,P06-1027,o,"In particular, Abney defines a function K that is an upper bound on the negative log-likelihood, and shows his bootstrapping algorithms locally minimize K. We now present a generalization of Abneys K function and relate it to another semi-supervised learning technique, entropy regularization (Brand, 1999; Grandvalet and Bengio, 2005; Jiao et al. , 2006)."
D07-1070,P06-1027,o,"We thus introduce a multiplier to form the actual objective function that we minimize with respect to :4 summationdisplay iL logp,i(yi ) + Nsummationdisplay inegationslashL H(p,i) (4) One may regard as a Lagrange multiplier that is used to constrain the classifiers uncertainty H to be low, as presented in the work on entropy regularization (Brand, 1999; Grandvalet and Bengio, 2005; Jiao et al. , 2006)."
D07-1083,P06-1027,o,"In fact, many attempts have recently been made to develop semi-supervised SOL methods (Zhu et al. , 2003; Li and McCallum, 2005; Altun et al. , 2005; Jiao et al. , 2006; Brefeld and Scheffer, 2006)."
D07-1083,P06-1027,o,"5.3 Comparison with SS-CRF-MER When we consider semi-supervised SOL methods, SS-CRF-MER (Jiao et al. , 2006) is the most competitive with HySOL, since both methods are defined based on CRFs."
D07-1083,P06-1027,o,"In fact, we still have a question as to whether SS-CRF-MER is really scalable in practical time for such a large amount of unlabeled data as used in our experiments, which is about 680 times larger than that of (Jiao et al. , 2006)."
D07-1083,P06-1027,o,"Semi-supervised conditional random fields (CRFs) based on a minimum entropy regularizer (SS-CRF-MER) have been proposed in (Jiao et al. , 2006)."
D07-1088,P06-1027,p,"Recent work includes improved model variants (e.g. , Jiao et al. , 2006; Okanohara et al. , 2006) and applications such as web data extraction (Pinto et al. , 2003), scientific citation extraction (Peng and McCallum, 2004), and word alignment (Blunsom and Cohn, 2006)."
D09-1005,P06-1027,o,"The variance semiring is essential for many interesting training paradigms such as deterministic 40 annealing (Rose, 1998), minimum risk (Smith and Eisner, 2006), active and semi-supervised learning (Grandvalet and Bengio, 2004; Jiao et al., 2006)."
D09-1009,P06-1027,o,"We use Entropy Regularization (ER) (Jiao et al., 2006) to leverage unlabeled instances.7 We weight the ER term by choosing the best8 weight in {103,102,101,1,10} multiplied by #labeled#unlabeled for each data set and query selection method."
D09-1134,P06-1027,o,"For example, minimum entropy regularization (Grandvalet and Bengio, 2004; Jiao et al., 2006), aims to maximize the conditional likelihood of labeled data while minimizing the conditional entropy of unlabeled data: summationdisplay i logp(y(i)|x(i)) 122bardblbardbl2H(y|x) (3) This approach generally would result in sharper models which can be data-sensitive in practice."
I08-2124,P06-1027,o,"Pattern-based IE approaches employ seed data to learn useful patterns to pinpoint required fields values (e.g. Ravichandran and Hovy, 2002; Mann and Yarowsky, 2005; Feng et al., 2006)."
I08-2124,P06-1027,o,"Reported work includes improved model variants (e.g., Jiao et al., 2006) and applications such as web data extraction (Pinto et al., 2003), scientific citation extraction (Peng and McCallum, 2004), word alignment (Blunsom and Cohn, 2006), and discourselevel chunking (Feng et al., 2007)."
P08-1099,P06-1027,o,"High values of fall into the minimal entropy trap, while low values ofhave no effect on the model (see (Jiao et al., 2006) for an example)."
W09-2208,P06-1027,o,"Jiao et al. propose semi-supervised conditional random fields (Jiao et al., 2006) that try to maximize the conditional log-likelihood on the training data and simultaneously minimize the conditional entropy of the class labels on the unlabeled data."
D07-1083,P06-1028,o,"(Suzuki et al. , 2006) 88.02 (+0.82) + unlabeled data (17M 27M words) 88.41 (+0.39) + supplied gazetters 88.90 (+0.49) + add dev."
D07-1083,P06-1028,o,"(Suzuki et al. , 2006) 94.36 (+0.06) Table 8: The HySOL performance with the F-score optimization technique on Chunking (CoNLL-2000) experiments from unlabeled data appear different from each other."
D07-1083,P06-1028,o,"5.5 Applying F-score Optimization Technique In addition, we can simply apply the F-score optimization technique for the sequence labeling tasks proposed in (Suzuki et al. , 2006) to boost the HySOL performance since the base discriminative models pD(y|x) and discriminative combination, namely Equation (3), in our hybrid model basically uses the same optimization procedure as CRFs."
P07-1093,P06-1028,o,"More specialized methods also exist, for example for support vector machines (Musicant et al. , 2003) and for conditional random fields (Gross et al. , 2007; Suzuki et al. , 2006)."
W08-0303,P06-1028,o,"We follow (Gao et al., 2006; Suzuki et al., 2006) and approximate the metrics using the sigmoid function."
D07-1032,P06-1053,o,"Dubey et al. proposed an unlexicalized PCFG parser that modied PCFG probabilities to condition the existence of syntactic parallelism (Dubey et al. , 2006)."
W06-1637,P06-1053,o,"The results have demonstrated the existence of priming effects in corpus data: they occur for specific syntactic constructions (Gries, 2005; Szmrecsanyi, 2005), consistent with the experimental literature, but also generalize to syntactic rules across the board, which repeated more often than expected by chance (Reitter et al. , 2006b; Dubey et al. , 2006)."
C08-1001,P06-1079,o,"The other intriguing issue is how our anchor-based method for shared argument identification can benefit from recent advances in coreference and zero-anaphora resolution (Iida et al., 2006; Komachi et al., 2007, etc.)."
C08-1121,P06-1079,o,We follow (Yang et al. 2006; Iida et al. 2006) in using a tree kernel to represent structural information using the subtree that covers a pronoun and its antecedent candidate.
D08-1055,P06-1079,o,"There have been many studies of zero-pronoun identification (Walker et al., 1994) (Nakaiwa, 1997) (Iida et al., 2006)."
D08-1055,P06-1079,o,"We divided these case roles into four types by location in the article as in (Iida et al., 2006), i) the case role depends on the predicate or the predicate depends on the case role in the intra-sentence (dependency relations), ii) the case role does not depend on the predicate and the predicate does not depend on the case role in the intra-sentence (zeroanaphoric (intra-sentential)), iii) the case role is not in the sentence containing the predicate (zeroanaphoric (inter-sentential)), and iv) the case role and the predicate are in the same phrase (in same phrase)."
I08-1065,P06-1079,p,"One possible approach is to employ state-of-the-art techniques for coreference and zeroanaphora resolution (Iida et al., 2006; Komachi et al., 2007, etc.) in preprocessing cooccurrence samples."
D07-1025,P06-1091,o,"Both Liang, et al (2006), and Tillmann and Zhang (2006) report on effective machine translation (MT) models involving large numbers of features with discriminatively trained weights."
D07-1055,P06-1091,o,Tillmann and Zhang (2006) describe a perceptron style algorithm for training millions of features.
D07-1080,P06-1091,o,"The algorithm is slightly different from other online training algorithms (Tillmann and Zhang, 2006; Liang et al. , 2006) in that we keep and update oracle translations, which is a set of good translations reachable by a decoder according to a metric, i.e. BLEU (Papineni et al. , 2002)."
D07-1080,P06-1091,o,Tillmann and Zhang (2006) avoided the problem by precomputing the oracle translations in advance.
D07-1080,P06-1091,o,"Tillmann and Zhang (2006) used a different update style based on a convex loss function: = L(e, e; et)max parenleftBig 0, 1 parenleftBig si( f t, e)si( f t, e) parenrightBigparenrightBig 768 Table 1: Experimental results obtained by varying normalized tokens used with surface form."
D07-1080,P06-1091,o,Tillmann and Zhang (2006) and Liang et al.
D07-1080,P06-1091,o,Tillmann and Zhang (2006) trained their feature set using an online discriminative algorithm.
D07-1080,P06-1091,o,Online discriminative training has already been studied by Tillmann and Zhang (2006) and Liang et al.
D07-1080,P06-1091,o,"Tillmann and Zhang (2006), Liang et al."
D08-1024,P06-1091,o,"This paper continues a line of research on online discriminative training (Tillmann and Zhang, 2006; Liang et al., 2006; Arun and Koehn, 2007), extending that of Watanabe et al."
D08-1024,P06-1091,o,"The second uses the decoder to search for the highest-B translation (Tillmann and Zhang, 2006), which Arun and Koehn (2007) call max-B updating."
D09-1008,P06-1091,o,"One is to use a stochastic gradient descent (SGD) or Perceptron like online learning algorithm to optimize the weights of these features directly for MT (Shen et al., 2004; Liang et al., 2006; Tillmann and Zhang, 2006)."
D09-1039,P06-1091,o,Tillmann and Zhang (2006) present a procedure to directly optimize the global scoring function used by a phrasebased decoder on the accuracy of the translations.
I08-2087,P06-1091,o,"2 Related Work This method is similar to block-orientation modeling (Tillmann and Zhang 2005) and maximum entropy based phrase reordering model (Xiong et al. 2006), in which local orientations (left/right) of phrase pairs (blocks) are learned via MaxEnt classifiers."
I08-2087,P06-1091,o,The use of structured prediction to SMT is also investigated by (Liang et al. 2006; Tillmann and Zhang 2006; Watanabe et al. 2007).
N07-1008,P06-1091,o,"Recently, there have been several discriminative approaches at training large parameter sets including (Tillmann and Zhang, 2006) and (Liang et al. , 2006)."
N07-1008,P06-1091,o,"In (Tillmann and Zhang, 2006) the model is optimized to produce a block orientation and the target sentence is used only for computing a sentence level BLEU."
N09-1025,P06-1091,o,"Others have introduced alternative discriminative training methods (Tillmann and Zhang, 2006; Liang et al., 2006; Turian et al., 2007; Blunsom et al., 2008; Macherey et al., 2008), in which a recurring challenge is scalability: to train many features, we need many train218 ing examples, and to train discriminatively, we need to search through all possible translations of each training example."
P07-1020,P06-1091,o,"Discriminative training has been used mainly for translation model combination (Och and Ney, 2002) and with the exception of (Wellington et al. , 2006; Tillmann and Zhang, 2006), has not been used to directly train parameters of a translation model."
P08-1010,P06-1091,o,The translation probability can also be discriminatively trained such as in Tillmann and Zhang (2006).
P09-1054,P06-1091,o,"SGD was recently used for NLP tasks including machine translation (Tillmann and Zhang, 2006) and syntactic parsing (Smith and Eisner, 2008; Finkel et al., 2008)."
W07-0414,P06-1091,o,"If the input consists of sevWe also adopt the approximation that treats every sentence with its reference as a separate corpus (Tillmann and Zhang, 2006) so that ngram counts are not accumulated, and parallel processing of sentences becomes possible."
W07-0414,P06-1091,o,Tillmann and Zhang (2006) use a BLEU oracle decoder for discriminative training of a local reordering model.
W07-0414,P06-1091,o,"They can be used for discriminative training of reordering models (Tillmann and Zhang, 2006)."
W07-0716,P06-1091,o,"where they are expected to be maximally discriminative (Tillmann and Zhang, 2006)."
W07-0716,P06-1091,o,"This might prove beneficial for various discriminative training methods (Tillmann and Zhang, 2006)."
W07-0717,P06-1091,o,"This makes it suitable for discriminative SMT training, which is still a challenge for large parameter sets (Tillmann and Zhang, 2006; Liang et al. , 2006)."
W07-0719,P06-1091,o,"However, at the short term, the incorporation of these type of features will force us to either build a new decoder or extend an existing one, or to move to a new MT architecture, for instance, in the fashion of the architectures suggested by Tillmann and Zhang (2006) or Liang et al."
W08-0404,P06-1091,o,"Several studies have shown that large-margin methods can be adapted to the special complexities of the task (Liang et al., 2006; Tillmann and Zhang, 2006; Cowan et al., 2006) . However, the capacity of these algorithms to improve over state-of-the-art baselines is currently limited by their lack of robust dimensionality reduction."
D07-1025,P06-1096,p,"Both Liang, et al (2006), and Tillmann and Zhang (2006) report on effective machine translation (MT) models involving large numbers of features with discriminatively trained weights."
D07-1080,P06-1096,o,"The algorithm is slightly different from other online training algorithms (Tillmann and Zhang, 2006; Liang et al. , 2006) in that we keep and update oracle translations, which is a set of good translations reachable by a decoder according to a metric, i.e. BLEU (Papineni et al. , 2002)."
D07-1080,P06-1096,o,Tillmann and Zhang (2006) and Liang et al.
D07-1080,P06-1096,o,Online discriminative training has already been studied by Tillmann and Zhang (2006) and Liang et al.
D07-1080,P06-1096,o,"In this method, each training sentence is decoded and weights are updated at every iteration (Liang et al. , 2006)."
D07-1080,P06-1096,o,"When updating model parameters, we employ a memorizationvariant of a local updating strategy (Liang et al. , 2006) in which parameters are optimized toward a set of good translations found in the k-best list across iterations."
D07-1080,P06-1096,o,"Tillmann and Zhang (2006), Liang et al."
D08-1023,P06-1096,o,"Most work on discriminative training for SMT has focussed on linear models, often with margin based algorithms (Liang et al., 2006; Watanabe et al., 2006), or rescaling a product of sub-models (Och, 2003; Ittycheriah and Roukos, 2007)."
D08-1024,P06-1096,o,"This paper continues a line of research on online discriminative training (Tillmann and Zhang, 2006; Liang et al., 2006; Arun and Koehn, 2007), extending that of Watanabe et al."
D08-1024,P06-1096,n,"Sentence-level approximations to B exist (Lin and Och, 2004; Liang et al., 2006), but we found it most effective to perform B computations in the context of a setOof previously-translated sentences, following Watanabe et al."
D08-1064,P06-1096,o,"Moreover, this evaluation concern dovetails with a frequent engineering concern, that sentence-level scores are useful at various points in the MT pipeline: for example, minimum Bayes risk decoding (Kumar and Byrne, 2004), selecting oracle translations for discriminative reranking (Liang 614 et al., 2006; Watanabe et al., 2007), and sentenceby-sentence comparisons of outputs during error analysis."
D09-1008,P06-1096,o,"One is to use a stochastic gradient descent (SGD) or Perceptron like online learning algorithm to optimize the weights of these features directly for MT (Shen et al., 2004; Liang et al., 2006; Tillmann and Zhang, 2006)."
D09-1107,P06-1096,o,"In (Liang et al., 2006) a standard phrase-based model is augmented with more than a million features whose weights are trained discriminatively by a variant of the perceptron algorithm."
D09-1111,P06-1096,o,"By building the entire system on the derivation level, we side-step issues that can occur when perceptron training with hidden derivations (Liang et al., 2006), but we also introduce the need to transform our training source-target pairs into training derivations." D09-1127,P06-1096,o,"So we will engineer more such features, especially with lexicalization and soft alignments (Liang et al., 2006), and study the impact of alignment quality on parsing improvement." E09-1056,P06-1096,p,"Online votedperceptrons have been reported to work well in a number of NLP tasks (Collins, 2002; Liang et al., 2006)." E09-1061,P06-1096,o,"Alignment is often used in training both generative and discriminative models (Brown et al., 1993; Blunsom et al., 2008; Liang et al., 2006)." E09-1061,P06-1096,o,"item form: [i,j,ueve] goal: [I,j,ue] rules: [i,j,ue] R(fifiprime/ejejprime) [iprime,j,ejejprime] [i,j,ueejve] [i,j + 1,ueejve] ej+1 = rj+1 (Logic MONOTONE-ALIGN) Under the boolean semiring, this (minimal) logic decides if a training example is reachable by the model, which is required by some discriminative training regimens (Liang et al., 2006; Blunsom et al., 2008)." I08-2087,P06-1096,o,The use of structured prediction to SMT is also investigated by (Liang et al. 2006; Tillmann and Zhang 2006; Watanabe et al. 2007). N07-1008,P06-1096,o,"Recently, there have been several discriminative approaches at training large parameter sets including (Tillmann and Zhang, 2006) and (Liang et al. , 2006)." N07-1008,P06-1096,o,"(Liang et al. , 2006) demonstrates a discriminatively trained system for machine translation that has the following characteristics: 1) requires a varying update strategy (local vs. bold) depending on whether the reference sentence is reachable or not, 2) uses sentence level BLEU as a criterion for selecting which output to update towards, and 3) only trains on limited length (5-15 words) sentences." 
N07-1008,P06-1096,n,"This latter point is a critical difference that contrasts to the major weakness of the work of (Liang et al. , 2006) which uses a top-N list of translations to select the maximum BLEU sentence as a target for training (so called local update)." N09-1025,P06-1096,o,"Others have introduced alternative discriminative training methods (Tillmann and Zhang, 2006; Liang et al., 2006; Turian et al., 2007; Blunsom et al., 2008; Macherey et al., 2008), in which a recurring challenge is scalability: to train many features, we need many training examples, and to train discriminatively, we need to search through all possible translations of each training example." P07-1055,P06-1096,o,"Work on learning with hidden variables can be used for both CRFs (Quattoni et al. , 2004) and for inference based learning algorithms like those used in this work (Liang et al. , 2006)." P07-1055,P06-1096,o,"These algorithms are usually applied to sequential labeling or chunking, but have also been applied to parsing (Taskar et al. , 2004; McDonald et al. , 2005), machine translation (Liang et al. , 2006) and summarization (Daume III et al. , 2006)." P08-1024,P06-1096,o,"For this reason, to our knowledge, all discriminative models proposed to date either side-step the problem by choosing simple model and feature structures, such that spurious ambiguity is lessened or removed entirely (Ittycheriah and Roukos, 2007; Watanabe et al., 2007), or else ignore the problem and treat derivations as translations (Liang et al., 2006; Tillmann and Zhang, 2007)." P08-1024,P06-1096,n,"To our knowledge no systems directly address Problem 1, instead choosing to ignore the problem by using one or a small handful of reference derivations in an n-best list (Liang et al., 2006; Watanabe et al., 2007), or else making local independence assumptions which side-step the issue (Ittycheriah and Roukos, 2007; Tillmann and Zhang, 2007; Wellington et al., 2006)."
P08-1024,P06-1096,n,"Both the global models (Liang et al., 2006; Watanabe et al., 2007) use fairly small training sets, and there is no evidence that their techniques will scale to larger data sets." P08-2007,P06-1096,o,"Forced decoding arises in online discriminative training, where model updates are made toward the most likely derivation of a gold translation (Liang et al., 2006)." W07-0717,P06-1096,o,"This makes it suitable for discriminative SMT training, which is still a challenge for large parameter sets (Tillmann and Zhang, 2006; Liang et al. , 2006)." W07-0719,P06-1096,o,"However, at the short term, the incorporation of these type of features will force us to either build a new decoder or extend an existing one, or to move to a new MT architecture, for instance, in the fashion of the architectures suggested by Tillmann and Zhang (2006) or Liang et al." W08-0306,P06-1096,o,"In general, Agold / Acandidates; following (Collins, 2000) and (Charniak and Johnson, 2005) for parse reranking and (Liang et al., 2006) for translation reranking, we define Aoracle as alignment in Acandidates that is most similar to Agold.8 We update each feature weight i as follows: i = i + hAoraclei hA1-besti .9 Following (Moore, 2005), after each training pass, we average all the feature weight vectors seen during the pass, and decode the discriminative training set using the vector of averaged feature weights." W08-0306,P06-1096,o,"9(Liang et al., 2006) report that, for translation reranking, such local updates (towards the oracle) outperform bold updates (towards the gold standard)." W08-0404,P06-1096,n,"Several studies have shown that large-margin methods can be adapted to the special complexities of the task (Liang et al., 2006; Tillmann and Zhang, 2006; Cowan et al., 2006) . However, the capacity of these algorithms to improve over state-of-the-art baselines is currently limited by their lack of robust dimensionality reduction." 
W08-0410,P06-1096,o,"As modern systems move toward integrating many features (Liang et al., 2006), resources such as this will become increasingly important in improving translation quality." W08-0510,P06-1096,o,"Research have also been made into alternatives to the current log-linear scoring model such as discriminative models with millions of features (Liang et al. 2006), or kernel based models (Wang et al. 2007)." W09-2211,P06-1096,p,"Perhaps more importantly, discriminative models have been shown to offer competitive performance on a variety of sequential and structured learning tasks in NLP that are traditionally tackled via generative models , such as letter-to-phoneme conversion (Jiampojamarn et al., 2008), semantic role labeling (Toutanova et al., 2005), syntactic parsing (Taskar et al., 2004), language modeling (Roark et al., 2004), and machine translation (Liang et al., 2006)." D07-1005,P06-1097,o,"We observe that AER is loosely correlated to BLEU ( = 0.81) though the relation is weak, as observed earlier by Fraser and Marcu (2006a)." D07-1005,P06-1097,o,"High quality word alignments can yield more accurate phrase-pairs which improve quality of a phrase-based SMT system (Och and Ney, 2003; Fraser and Marcu, 2006b)." D07-1005,P06-1097,o,"Much of the recent work in word alignment has focussed on improving the word alignment quality through better modeling (Och and Ney, 2003; Deng and Byrne, 2005; Martin et al. , 2005) or alternative approaches to training (Fraser and Marcu, 2006b; Moore, 2005; Ittycheriah and Roukos, 2005)." D07-1006,P06-1097,p,"We compare semisupervised LEAF with a previous state of the art semi-supervised system (Fraser and Marcu, 2006b)." 
D07-1006,P06-1097,o,"We ran the baseline semisupervised system for two iterations (line 2), and in contrast with (Fraser and Marcu, 2006b) we found that the best symmetrization heuristic for this system was union, which is most likely due to our use of fully linked alignments which was discussed at the end of Section 3." D07-1006,P06-1097,o,"FRENCH/ENGLISH ARABIC/ENGLISH SYSTEM F-MEASURE ( = 0.4) BLEU F-MEASURE ( = 0.1) BLEU GIZA++ 73.5 30.63 75.8 51.55 (FRASER AND MARCU, 2006B) 74.1 31.40 79.1 52.89 LEAF UNSUPERVISED 74.5 72.3 LEAF SEMI-SUPERVISED 76.3 31.86 84.5 54.34 Table 3: Experimental Results (Och and Ney, 2003) invented heuristic symmetrization of the output of a 1-to-N model and a M-to-1 model resulting in a M-to-N alignment, this was extended in (Koehn et al. , 2003)." D07-1006,P06-1097,o,"(Fraser and Marcu, 2006b) described symmetrized training of a 1-to-N log-linear model and a M-to-1 log-linear model." D07-1006,P06-1097,o,"We use the semi-supervised EMD algorithm (Fraser and Marcu, 2006b) to train the model." D07-1006,P06-1097,o,"We then perform the D-step following (Fraser and Marcu, 2006b). Figure 2: Two alignments with the same translational correspondence" D07-1006,P06-1097,o,"(Fraser and Marcu, 2006a) established that it is important to tune (the trade-off between Precision and Recall) to maximize performance." D07-1038,P06-1097,o,"For an alignment model, most of these use the Aachen HMM approach (Vogel et al. , 1996), the implementation of IBM Model 4 in GIZA++ (Och and Ney, 2000) or, more recently, the semi-supervised EMD algorithm (Fraser and Marcu, 2006)."
D07-1038,P06-1097,o,"If human-aligned data is available, the EMD algorithm provides higher baseline alignments than GIZA++ that have led to better MT performance (Fraser and Marcu, 2006)." D07-1038,P06-1097,o,"We follow the approach of bootstrapping from a model with a narrower parameter space as is done in, e.g. Och and Ney (2000) and Fraser and Marcu (2006)." D07-1079,P06-1097,o,"A superset of the parallel data was word aligned by GIZA union (Och and Ney, 2003) and EMD (Fraser and Marcu, 2006)." J07-3002,P06-1097,o,"F-Measure with an appropriate setting of will be useful during the development process of new alignment models, or as a maximization criterion for discriminative training of alignment models (Cherry and Lin 2003; Ayan, Dorr, and Monz 2005; Ittycheriah and Roukos 2005; Liu, Liu, and Lin 2005; Fraser and Marcu 2006; Lacoste-Julien et al. 2006; Moore, Yih, and Bode 2006)." N07-2007,P06-1097,p,"2 Related Work Recently, several successful attempts have been made at using supervised machine learning for word alignment (Liu et al. , 2005; Taskar et al. , 2005; Ittycheriah and Roukos, 2005; Fraser and Marcu, 2006)." N07-2007,P06-1097,o,"With the exception of Fraser and Marcu (2006), these previous publications do not entirely discard the generative models in that they integrate IBM model predictions as features." N07-2022,P06-1097,o,"Recently some alignment evaluation metrics have been proposed which are more informative when the alignments are used to extract translation units (Fraser and Marcu, 2006; Ayan and Dorr, 2006)." P07-1001,P06-1097,o,"It has been shown that human knowledge, in the form of a small amount of manually annotated parallel data to be used to seed or guide model training, can significantly improve word alignment F-measure and translation performance (Ittycheriah and Roukos, 2005; Fraser and Marcu, 2006)."
P07-1004,P06-1097,o,"Along similar lines, (Fraser and Marcu, 2006) combine a generative model of word alignment with a log-linear discriminative model trained on a small set of hand aligned sentences." P08-4006,P06-1097,o,"Consequently, considerable effort has gone into devising and improving automatic word alignment algorithms, and into evaluating their performance (e.g., Och and Ney, 2003; Taskar et al., 2005; Moore et al., 2006; Fraser and Marcu, 2006, among many others)." W07-0403,P06-1097,o,"Method Prec Rec F-measure GIZA++ Intersect 96.7 53.0 68.5 GIZA++ Union 82.5 69.0 75.1 GIZA++ GDF 84.0 68.2 75.2 Phrasal ITG 50.7 80.3 62.2 Phrasal ITG + NCC 75.4 78.0 76.7 Following the lead of (Fraser and Marcu, 2006), we hand-aligned the first 100 sentence pairs of our training set according to the Blinker annotation guidelines (Melamed, 1998)." W07-0407,P06-1097,o,"(Fraser and Marcu, 2006) have proposed an algorithm for doing word alignment which applies a discriminative step at every iteration of the traditional Expectation-Maximization algorithm used in IBM models." W07-1520,P06-1097,o,"Because of its central role in building machine translation systems and because of the complexity of the task, sub-sentential alignment of parallel corpora continues to be an active area of research (e.g. , Moore et al. , 2006; Fraser and Marcu, 2006), and this implies a continuing demand for manually created or human-verified gold standard alignments for development and evaluation purposes." W09-0421,P06-1097,o,"5 Augmenting the corpus with an extracted dictionary Previous research (Callison-Burch et al., 2004; Fraser and Marcu, 2006) has shown that including word aligned data during training can improve translation results." W09-1804,P06-1097,o,"EMD training (Fraser and Marcu, 2006) combines generative and discriminative elements." D08-1056,P06-1101,o,"(Snow et al., 2006; Nakov & Hearst, 2008)." 
D09-1089,P06-1101,o,"Due to the importance of WN for NLP tasks, substantial research was done on direct or indirect automated extension of the English WN (e.g., (Snow et al., 2006)) or WN in other languages (e.g., (Vintar and Fiser, 2008))." D09-1089,P06-1101,o,"The majority of this research was done on extending the tree structure (finding new synsets (Snow et al., 2006) or enriching WN with new relationships (Cuadros and Rigau, 2008)) rather than improving the quality of existing concept/synset nodes." D09-1156,P06-1101,o,"Although some early systems for web-page analysis induce rules at character-level (e.g., such as WIEN (Kushmerick et al., 1997) and DIPRE (Brin, 1998)), most recent approaches for set expansion have used either tokenized and/or parsed free-text (Carlson et al., 2009; Talukdar et al., 2006; Snow et al., 2006; Pantel and Pennacchiotti, 2006), or have incorporated heuristics for exploiting HTML structures that are likely to encode lists and tables (Nadeau et al., 2006; Etzioni et al., 2005)." D09-1156,P06-1101,o,"5 Related Work In recent years, many research has been done on extracting relations from free text (e.g., (Pantel and Pennacchiotti, 2006; Agichtein and Gravano, 2000; Snow et al., 2006)); however, almost all of them require some language-dependent parsers or taggers for English, which restrict the language of their extractions to English only (or languages that have these parsers)." E09-1064,P06-1101,o,"Beyond WordNet (Fellbaum, 1998), a wide range of resources has been developed and utilized, including extensions to WordNet (Moldovan and Rus, 2001; Snow et al., 2006) and resources based on automatic distributional similarity methods (Lin, 1998; Pantel and Lin, 2002)."
E09-1068,P06-1101,o,"Finally, methods in the literature more focused on a specific disambiguation task include statistical methods for the attachment of hyponyms under the most likely hypernym in the WordNet taxonomy (Snow et al., 2006), structural approaches based on semantic clusters and distance metrics (Pennacchiotti and Pantel, 2006), supervised machine learning methods for the disambiguation of meronymy relations (Girju et al., 2003), etc. 6 Conclusions In this paper we presented a novel approach to disambiguate the glosses of computational lexicons and machine-readable dictionaries, with the aim of alleviating the knowledge acquisition bottleneck." N07-1016,P06-1101,o,"Second, we follow Snow et al.'s work (2006) on taxonomy induction in incorporating transitive closure constraints in our probability calculations, as explained below." N07-1017,P06-1101,p,"The state of the art technology for relation extraction primarily relies on pattern-based approaches (Snow et al. , 2006)." P07-1072,P06-1101,o,"Other researchers (Pantel and Pennacchiotti, 2006), (Snow et al. , 2006) use clustering techniques coupled with syntactic dependency features to identify IS-A relations in large text collections." P07-2042,P06-1101,o,"Recently, Snow, Jurafsky and Ng (2005) generated tens of thousands of hypernym patterns and combined these with noun clusters to generate high-precision suggestions for unknown noun insertion into WordNet (Snow et al. , 2006)." P08-1003,P06-1101,o,"4 Related Work 4.1 Acquisition of Classes of Instances Although some researchers focus on re-organizing or extending classes of instances already available explicitly within manually-built resources such as Wikipedia (Ponzetto and Strube, 2007) or WordNet (Snow et al., 2006) or both (Suchanek et al., 2007), a large body of previous work focuses on compiling sets of instances, not necessarily labeled, from unstructured text."
P08-1003,P06-1101,o,"1 Introduction Current methods for large-scale information extraction take advantage of unstructured text available from either Web documents (Banko et al., 2007; Snow et al., 2006) or, more recently, logs of Web search queries (Pasca, 2007) to acquire useful knowledge with minimal supervision." P08-1003,P06-1101,o,"(Ponzetto and Strube, 2007; Snow et al., 2006)), can be summarized as: [] C [such as|including] I [and|,|.], where I is a potential instance (e.g., Venezuelan equine encephalitis) and C is a potential class label for the instance (e.g., zoonotic diseases), for example in the sentence: The expansion of the farms increased the spread of zoonotic diseases such as Venezuelan equine encephalitis []." P08-1027,P06-1101,o,"Since (Hearst, 1992), numerous works have used patterns for discovery and identification of instances of semantic relationships (e.g., (Girju et al., 2006; Snow et al., 2006; Banko et al., 2007))." P08-1048,P06-1101,o,"Some work has been done on adding new terms and relations to WordNet (Snow et al., 2006) and FACTOTUM (OHara and Wiebe, 2003)." P08-1079,P06-1101,o,"2.1 Relationship Types There is a large body of related work that deals with discovery of basic relationship types represented in useful resources such as WordNet, including hypernymy (Hearst, 1992; Pantel et al., 2004; Snow et al., 2006), synonymy (Davidov and Rappoport, 2006; Widdows and Dorow, 2002) and meronymy (Berland and Charniak, 1999; Girju et al., 2006)." P09-1031,P06-1101,o,"To have a fair comparison, for PR, we estimate the conditional probability of a relation given the evidence P(Rij|Eij), as in (Snow et al. 2006), by using the same set of features as in ME. Table 3 shows precision, recall, and F1-measure of each system for WordNet hypernyms (is-a), WordNet meronyms (part-of) and ODP hypernyms (is-a)." P09-1031,P06-1101,o,"We compare system performance between (Snow et al., 2006) and our framework in Section 5."
P09-1050,P06-1101,o,"5.3 (Snow et al., 2006) Snow (Snow et al., 2006) has extended the WordNet 2.1 by adding thousands of entries (synsets) at a relatively high precision." P09-1050,P06-1101,n,"We have also illustrated that ASIA outperforms three other English systems (Kozareva et al., 2008; Pasca, 2007b; Snow et al., 2006), even though many of these use more input than just a semantic class name." P09-1050,P06-1101,n,"We also compare ASIA on twelve additional benchmarks to the extended Wordnet 2.1 produced by Snow et al. (Snow et al., 2006), and show that for these twelve sets, ASIA produces more than five times as many set instances with much higher precision (98% versus 70%)." P09-1050,P06-1101,o,"Snow et al. (Snow et al., 2006) use known hypernym/hyponym pairs to generate training data for a machine-learning system, which then learns many lexico-syntactic patterns." P09-1051,P06-1101,o,"4.3 Scoring All-N Rules We observed that the likelihood of nouns mentioned in a definition to be referred by the concept title depends greatly on the syntactic path connecting them (which was exploited also in (Snow et al., 2006))." P09-1051,P06-1101,o,"An extension to WordNet was presented by (Snow et al., 2006)." P09-1051,P06-1101,o,"4.1 Judging Rule Correctness Following the spirit of the fine-grained human evaluation in (Snow et al., 2006), we randomly sampled 800 rules from our rule-base and presented them to an annotator who judged them for correctness, according to the lexical reference notion specified above." P09-1070,P06-1101,o,"6 Related Work A large body of previous work exists on extending WORDNET with additional concepts and instances (Snow et al., 2006; Suchanek et al., 2007); these methods do not address attributes directly."
W07-1527,P06-1101,p,"Currently, the best-performing English NP interpretation methods in computational linguistics focus mostly on two consecutive noun instances (noun compounds) and are either (weakly) supervised, knowledge-intensive (Rosario and Hearst, 2001), (Rosario et al. , 2002), (Moldovan et al. , 2004), (Pantel and Pennacchiotti, 2006), (Pennacchiotti and Pantel, 2006), (Kim and Baldwin, 2006), (Snow et al. , 2006), (Girju et al. , 2005; Girju et al. , 2006), or use statistical models on large collections of unlabeled data (Berland and Charniak, 1999), (Lapata and Keller, 2004), (Nakov and Hearst, 2005), (Turney, 2006)." W08-2207,P06-1101,o,"Obviously, all these semantic resources have been acquired using a very different set of processes (Snow et al., 2006), tools and corpora." W09-0209,P06-1101,o,"Given the probabilistic taxonomy learning model introduced by (Snow et al., 2006), we leverage on the computation of logistic regression to exploit singular value decomposition (SVD) as unsupervised feature selection." W09-0209,P06-1101,o,"First, we need to determine whether or not the positive effect of SVD feature selection is preserved in more complex feature spaces such as syntactic feature spaces as those used in (Snow et al., 2006)." W09-0209,P06-1101,o,"In Section 3 we then describe the probabilistic taxonomy learning model introduced by (Snow et al., 2006)." W09-0209,P06-1101,o,"3.4) 3.1 Probabilistic model In the probabilistic formulation (Snow et al., 2006), the task of learning taxonomies from a corpus is seen as a probability maximization problem." W09-0209,P06-1101,o,"Given a set of evidences E over all the relevant word pairs, in (Snow et al., 2006), the probabilistic taxonomy learning task is defined as the problem of finding the taxonomy T̂ that maximizes the probability of having the evidences E, i.e.: T̂ = arg max_T P(E|T) In (Snow et al., 2006), this maximization problem is solved with a local search."
W09-0209,P06-1101,p,"This increase of probabilities is defined as multiplicative change Δ(N) as follows: Δ(N) = P(E|T′)/P(E|T) (2) The main innovation of the model in (Snow et al., 2006) is the possibility of adding at each step the best relation N = {Ri,j} as well as N = I(Ri,j) that is Ri,j with all the relations by the existing taxonomy." W09-0209,P06-1101,o,"The last important fact is that it is possible to demonstrate that Δ(Ei,j) = k · P(Ri,j ∈ T | ei,j)/(1 − P(Ri,j ∈ T | ei,j)) = k · odds(Ri,j) where k is a constant (see (Snow et al., 2006)) that will be neglected in the maximization process." W09-0209,P06-1101,p,"Automatically creating or extending taxonomies for specific domains is then a very interesting area of research (O'Sullivan et al., 1995; Magnini and Speranza, 2001; Snow et al., 2006)." W09-1109,P06-1101,p,"Because of this property, vector space models have been used successfully both in computational linguistics (Manning et al., 2008; Snow et al., 2006; Gorman and Curran, 2006; Schutze, 1998) and in cognitive science (Landauer and Dumais, 1997; Lowe and McDonald, 2000; McDonald and Ramscar, 2001)." W09-1109,P06-1101,o,"In NLP, vector space models have featured most prominently in information retrieval (Manning et al., 2008), but have also been used for ontology learning (Lin, 1998; Snow et al., 2006; Gorman and Curran, 2006) and word sense-related tasks (McCarthy et al., 2004; Schutze, 1998)." W09-1122,P06-1101,o,"We have adopted the evaluation method of Snow et al (2006): compare the generated hypernyms with hypernyms present in a lexical resource, in our case the Dutch part of EuroWordNet (1998)." W09-2504,P06-1101,o,"6 Related Work Several works attempt to extend WordNet with additional lexical semantic information (Moldovan and Rus, 2001; Snow et al., 2006; Suchanek et al., 2007; Clark et al., 2008)."
W09-2508,P06-1101,o,"As our basic data source, we use 500 000 sentences from the Wikipedia XML corpus (Denoyer and Gallinari, 2006); this is the corpus used by Akhmatova and Dras (2007), and related to one used in one set of experiments by Snow et al." C08-1138,P06-1123,o,"Thus, it may not suffer from the issues of non-isomorphic structure alignment and non-syntactic phrase usage heavily (Wellington et al., 2006)." C08-1138,P06-1123,o,"1 Introduction Translational equivalence is a mathematical relation that holds between linguistic expressions with the same meaning (Wellington et al., 2006)." N07-1057,P06-1123,n,"However, to what extent that assumption holds is tested only on a small number of language pairs using hand aligned data (Fox, 2002; Hwa et al. , 2002; Wellington et al. , 2006)." N07-1063,P06-1123,o,"(Wellington et al. , 2006) argue that these restrictions reduce our ability to model translation equivalence effectively." P07-1002,P06-1123,o,"Figure 1(b) shows several orders of the sentence which violate this constraint.1 Previous studies have shown that if both the source and target dependency trees represent linguistic constituency, the alignment between subtrees in the two languages is very complex (Wellington et al. , 2006)." P08-1064,P06-1123,o,"565 es (Wellington et al., 2006)." W07-0404,P06-1123,o,"We use the same alignment data for the five language pairs Chinese/English, Romanian/English, Hindi/English, Spanish/English, and French/English (Wellington et al. , 2006)." W07-0405,P06-1123,o,"However, this method is more sophisticated to implement than the previous method and binarizability ratio decreases on freer word-order languages (Wellington et al. , 2006)." W07-0405,P06-1123,o,"More importantly, the ratio of binarizability, as expected, decreases on freer word-order languages (Wellington et al. , 2006)." W09-2303,P06-1123,o,"It is for all three reasons, i.e. 
translation, induction from alignment structures and induction of alignment structures, important that the synchronous grammars are expressive enough to induce all the alignment structures found in hand-aligned gold standard parallel corpora (Wellington et al., 2006)." W09-2303,P06-1123,o,"(2006) and Chiang (2007), in terms of what alignments they induce, has been discussed in Wu (1997) and Wellington et al." W09-2303,P06-1123,o,"2 Inside-out alignments Wu (1997) identified so-called inside-out alignments, two alignment configurations that cannot be induced by binary synchronous context-free grammars; these alignment configurations, while infrequent in language pairs such as EnglishFrench (Cherry and Lin, 2006; Wellington et al., 2006), have been argued to be frequent in other language pairs, incl." W09-2303,P06-1123,o,"EnglishChinese (Wellington et al., 2006) and EnglishSpanish (Lepage and Denoual, 2005)." W09-2306,P06-1123,o,"One of the theoretical problems with phrase based SMT models is that they can not effectively model the discontiguous translations and numerous attempts have been made on this issue (Simard et al., 2005; Quirk and Menezes, 2006; Wellington et al., 2006; Bod, 2007; Zhang et al., 2007)." D08-1033,P06-1124,o,"Gibbs sampling is not new to the natural language processing community (Teh, 2006; Johnson et al., 2007)." N09-1009,P06-1124,p,"Nonparametricmodels (Teh, 2006) may be appropriate." P08-2036,P06-1124,o,"The relationship between Kneser-Ney smoothing to the Bayesian approach have been explored in (Goldwater et al., 2006; Teh, 2006) using Pitman-Yor processes." W09-0210,P06-1124,o,"Recent work has applied Bayesian non-parametric models to anaphora resolution (Haghighi and Klein, 2007), lexical acquisition (Goldwater, 2007) and language modeling (Teh, 2006) with good results." C08-1038,P06-1130,o,The most direct comparison is between our system and those presented in Cahill and van Genabith (2006) and Hogan et al. 
C08-1038,P06-1130,o,"within LFG includes the XLE, Cahill and van Genabith (2006), Hogan et al." C08-1038,P06-1130,o,Cahill and van Genabith (2006) and Hogan et al. C08-1038,P06-1130,o,"1999), OpenCCG (White, 2004) and XLE (Crouch et al., 2007), or created semi-automatically (Belz, 2007), or fully automatically extracted from annotated corpora, like the HPSG (Nakanishi et al., 2005), LFG (Cahill and van Genabith, 2006; Hogan et al., 2007) and CCG (White et al., 2007) resources derived from the Penn-II Treebank (PTB) (Marcus et al., 1993)." D07-1028,P06-1130,o,"Cahill and van Genabith (2006), which do not rely on handcrafted grammars and thus can easily be ported to new languages." D07-1028,P06-1130,o,"As in (Cahill and van Genabith, 2006) f-structures are generated from the (now altered) treebank and from this data, along with the treebank trees, the PCFG-based grammar, which is used for training the generation model, is extracted." D07-1028,P06-1130,o,"In Table 10, Baseline gives the results of the generation algorithm of (Cahill and van Genabith, 2006)." D07-1028,P06-1130,o,In the LFG-based generation algorithm presented by Cahill and van Genabith (2006) complex named entities (i.e. those consisting of more than one word token) and other multi-word units can be fragmented in the surface realization. D07-1028,P06-1130,o,"We take the generator of (Cahill and van Genabith, 2006) as our baseline generator." D07-1028,P06-1130,o,"These rules can be handcrafted grammar rules, such as those of (Langkilde-Geary, 2002; Carroll and Oepen, 2005), created semi-automatically (Belz, 2007) or, alternatively, extracted fully automatically from treebanks (Bangalore and Rambow, 2000; Nakanishi et al. , 2005; Cahill and van Genabith, 2006)." D07-1028,P06-1130,o,"3 Surface Realisation from f-Structures Cahill and van Genabith (2006) present a probabilistic surface generation model for LFG (Kaplan, 1995)."
D07-1028,P06-1130,o,"The up-arrows and down-arrows are shorthand for φ(M(ni)) = φ(ni) where ni is the c-structure node annotated with the equation. Tree_best := argmax_Tree P(Tree|F-Str) (1) P(Tree|F-Str) := ∏_{X→Y in Tree} P(X→Y | X, Feats), where Feats = {ai | ∃vj : (φ(X)) ai = vj} (2) The generation model of (Cahill and van Genabith, 2006) maximises the probability of a tree given an f-structure (Eqn." D07-1028,P06-1130,o,Cahill and van Genabith (2006) note that conditioning f-structure annotated generation rules on local features (Eqn. D07-1028,P06-1130,o,"To solve the problem, Cahill and van Genabith (2006) apply an automatic generation grammar transformation to their training data: they automatically label CFG nodes with additional case information and the model now learns the new improved generation rules of Tables 4 and 5." D07-1028,P06-1130,o,"F-Struct Feats Grammar Rules {PRED=PRO,NUM=SG PER=3, GEN=FEM} PRP-nom(↑=↓) she {PRED=PRO,NUM=SG PER=3, GEN=FEM} PRP-acc(↑=↓) her Table 5: Lexical item rules with case markings 4 A History-Based Generation Model The automatic generation grammar transform presented in (Cahill and van Genabith, 2006) provides a solution to coarse-grained and (in fact) inappropriate independence assumptions in the basic generation model."
D07-1028,P06-1130,o,"Note, that for our example the effect of the uniform additional conditioning on mother grammatical function has the same effect as the generation grammar transform of (Cahill and van Genabith, 2006), but without the need for the grammar transform. F-Struct Feats Grammar Rules {PRED=PRO,NUM=SG PER=3, GEN=FEM, SUBJ} PRP(↑=↓) she {PRED=PRO,NUM=SG PER=3, GEN=FEM, OBJ} PRP(↑=↓) her Table 7: Lexical item rules." D07-1028,P06-1130,n,"In addition, uniform conditioning on mother grammatical function is more general than the case-phenomena specific generation grammar transform of (Cahill and van Genabith, 2006), in that it applies to each and every sub-part of a recursive input f-structure driving generation, making available relevant generation history (context) to guide local generation decisions." N07-1021,P06-1130,o,"Existing statistical NLG (i) uses corpus statistics to inform heuristic decisions in what is otherwise symbolic generation (Varges and Mellish, 2001; White, 2004; Paiva and Evans, 2005); (ii) applies n-gram models to select the overall most likely realisation after generation (HALOGEN family); or (iii) reuses an existing parsing grammar or treebank for surface realisation (Velldal et al. , 2004; Cahill and van Genabith, 2006)." N09-3004,P06-1130,o,"There are other approaches in which the generation grammars are extracted semiautomatically (Belz, 2007) or automatically (such as HPSG (Nakanishi and Miyao, 2005), LFG (Cahill and van Genabith, 2006; Hogan et al., 2007) and CCG (White et al., 2007))." P08-1022,P06-1130,n,"Even with the current incomplete set of semantic templates, the hypertagger brings realizer performance roughly up to state-of-the-art levels, as our overall test set BLEU score (0.6701) slightly exceeds that of Cahill and van Genabith (2006), though at a coverage of 96% instead of 98%." P08-1022,P06-1130,o,(2005) and Cahill and van Genabith (2006) with HPSG and LFG grammars.
W08-1111,P06-1130,o,"One possible strategy is to exploit a wide-coverage realizer that aims for applicability in multiple application domains (White et al., 2007; Cahill and van Genabith, 2006; Zhong and Stent, 2005; Langkilde-Geary, 2002; Langkilde and Knight, 1998; Elhadad, 1991)." W08-1112,P06-1130,o,"From the same treebank, Cahill and van Genabith (2006) automatically extracted wide-coverage LFG approximations for a PCFG-based generation model." W08-1112,P06-1130,n,"Our model improves the baseline provided by (Cahill and van Genabith, 2006): (i) accuracy is increased by creating a lexicalised PCFG grammar and enriching conditioning context with parent f-structure features; and (ii) coverage is increased by providing lexical smoothing and fuzzy matching techniques for rule smoothing." W08-1112,P06-1130,o,"Based on this theoretical cornerstone, Cahill and van Genabith (2006) presented a PCFG-based chart generator using wide-coverage LFG approximations automatically extracted from the Penn-II treebank." W08-1112,P06-1130,o,"T_best = argmax_T P(T|F) (1) P(T|F) = ∏_{X→Y in T} P(X→Y | X, Feats), Feats = {a_i | a_i ∈ φ(X)} (2) 3 Disambiguation Models The basic generation model presented in (Cahill and van Genabith, 2006) used simple probabilistic context-free grammars." W08-1112,P06-1130,n,"(2007) presented a history-based generation model to overcome some of the inappropriate independence assumptions in the basic generation model of (Cahill and van Genabith, 2006)." W08-1122,P06-1130,o,"(Cahill and van Genabith, 2006), and the third type is a mixture of the first and second type, employing n-gram and grammar-based features, e.g." W08-1122,P06-1130,o,"The generator used in our experiments is an instance of the second type, using a probability model defined over Lexical Functional Grammar c-structure and f-structure annotations (Cahill and van Genabith, 2006; Hogan et al., 2007)."
W08-1122,P06-1130,o,2 Background The natural language generator used in our experiments is the WSJ-trained system described in Cahill and van Genabith (2006) and Hogan et al. W08-1122,P06-1130,o,Cahill and van Genabith (2006) attain 98.2% coverage and a BLEU score of 0.6652 on the standard WSJ test set (Section 23). W09-0806,P06-1130,o,"From this LFG annotated treebank, large-scale unification grammar resources were automatically extracted and used in parsing (Cahill and al., 2008) and generation (Cahill and van Genabith, 2006)." W09-0806,P06-1130,o,"This approach was subsequently extended to other languages including German (Cahill and al., 2003), Chinese (Burke, 2004), (Guo and al., 2007), Spanish (ODonovan, 2004), (Chrupala and van Genabith, 2006) and French (Schluter and van Genabith, 2008)." W09-0806,P06-1130,o,"c2009 Association for Computational Linguistics Automatic Treebank-Based Acquisition of Arabic LFG Dependency Structures Lamia Tounsi Mohammed Attia NCLT, School of Computing, Dublin City University, Ireland {lamia.tounsi, mattia, josef}@computing.dcu.ie Josef van Genabith Abstract A number of papers have reported on methods for the automatic acquisition of large-scale, probabilistic LFG-based grammatical resources from treebanks for English (Cahill and al., 2002), (Cahill and al., 2004), German (Cahill and al., 2003), Chinese (Burke, 2004), (Guo and al., 2007), Spanish (ODonovan, 2004), (Chrupala and van Genabith, 2006) and French (Schluter and van Genabith, 2008)." D09-1024,P07-1003,o,"1 Introduction Word alignment is a critical component in training statistical machine translation systems and has received a significant amount of research, for example, (Brown et al., 1993; Ittycheriah and Roukos, 2005; Fraser and Marcu, 2007), including work leveraging syntactic parse trees, e.g., (Cherry and Lin, 2006; DeNero and Klein, 2007; Fossum et al., 2008)." 
D09-1105,P07-1003,o,"The second alternative used BerkeleyAligner (Liang et al., 2006; DeNero and Klein, 2007), which shares information between the two alignment directions to improve alignment quality." D09-1136,P07-1003,o,DeNero and Klein (2007) use a syntax-based distance in an HMM word alignment model to favor syntax-friendly alignments. D09-1136,P07-1003,o,"The word alignment used in GHKM is usually computed independent of the syntactic structure, and as DeNero and Klein (2007) and May and Knight (2007) have noted, Ch-En En-Ch Union Heuristic 28.6% 33.0% 45.9% 20.1% Table 1: Percentage of corpus used to generate big templates, based on different word alignments 9-12 13-20 21 Ch-En 18.2% 17.4% 64.4% En-Ch 15.9% 20.7% 63.4% Union 9.8% 15.1% 75.1% Heuristic 24.6% 27.9% 47.5% Table 2: In the selected big templates, the distribution of words in the templates of different sizes, which are measured based on the number of symbols in their RHSs is not the best for SSMT systems." P09-1104,P07-1003,o,"When we trained external Chinese models, we used the same unlabeled data set as DeNero and Klein (2007), including the bilingual dictionary." P09-1104,P07-1003,o,"For example, the HMM aligner achieves an AER of 20.7 when using the competitive thresholding heuristic of DeNero and Klein (2007)." P09-1104,P07-1003,o,We also trained an HMM aligner as described in DeNero and Klein (2007) and used the posteriors of this model as features. P09-1104,P07-1003,o,"thresholding (DeNero and Klein, 2007)." W08-0306,P07-1003,o,"(Lopez and Resnik, 2005) and (Denero and Klein, 2007) modify the distortion model of the HMM alignment model (Vogel et al., 1996) to reflect tree distance rather than string distance; (Cherry and Lin, 2006) modify an ITG aligner by introducing a penalty for induced parses that violate syntactic bracketing constraints."
W08-0308,P07-1003,o,"For example, the word alignment computed by GIZA++ and used as a basis to extract the TTS templates in most SSMT systems has been observed to be a problem for SSMT (DeNero and Klein, 2007; May and Knight, 2007), due to the fact that the word-based alignment models are not aware of the syntactic structure of the sentences and could produce many syntax-violating word alignments." W08-0308,P07-1003,o,"Approaches have been proposed recently towards getting better word alignment and thus better TTS templates, such as encoding syntactic structure information into the HMM-based word alignment model DeNero and Klein (2007), and building a syntax-based word alignment model May and Knight (2007) with TTS templates." W08-0308,P07-1003,n,"DeNero and Klein (2007) focus on alignment and do not present MT results, while May and Knight (2007) takes the syntactic re-alignment as an input to an EM algorithm where the unaligned target words are inserted into the templates and minimum templates are combined into bigger templates (Galley et al., 2006)." N09-1067,P07-1009,o,We use the version extracted and preprocessed by Daume III and Campbell (2007). C08-1029,P07-1010,o,"One way of obtaining a suitable granularity of nodes is to introduce latent classes, such as the Semi-Markov class model (Okanohara and Tsujii, 2007)." D08-1006,P07-1010,o,"More recently, however, Okanohara and Tsujii (2007) showed that a 1 Conditional maximum entropy models (Rosenfeld, 1996) provide somewhat of a counter-example, but there, too, many kinds of global and non-local features are difficult to use (Rosenfeld, 1997)." D08-1006,P07-1010,o,"Unfortunately, as shown in (Okanohara and Tsujii, 2007), with the representation of sentences that we use, linear classifiers cannot discriminate real sentences from sentences sampled from a trigram, which is the model we use as a baseline, so here we resort to a non-linear large-margin classifier (see section 3 for details)."
D08-1006,P07-1010,o,"As shown in (Okanohara and Tsujii, 2007), using this representation, a linear classifier cannot distinguish sentences sampled from a trigram and real sentences." D08-1007,P07-1010,o,Our technique of generating negative examples is similar to the approach of Okanohara and Tsujii (2007). P08-2056,P07-1010,o,"Artificial ungrammaticalities have been used in various NLP tasks (Smith and Eisner, 2005; Okanohara and Tsujii, 2007). The idea of an automatically generated ungrammatical treebank was proposed by Foster (2007)." W09-2112,P07-1010,o,"Examples are Andersen (2006; 2007), Okanohara and Tsujii (2007), Sun et al." W09-2112,P07-1010,o,Both Okanohara and Tsujii (2007) and Wagner et al. W09-2112,P07-1010,o,Okanohara and Tsujii (2007) generate ill-formed sentences by sampling a probabilistic language model and end up with pseudo-negative examples which resemble machine translation output more than they do learner texts. C08-1144,P07-1019,o,"Adaptations to the algorithms in the presence of ngram LMs are discussed in (Chiang, 2007; Venugopal et al., 2007; Huang and Chiang, 2007)." D08-1012,P07-1019,o,"Huang and Chiang (2007) searches with the full model, but makes assumptions about the amount of reordering the language model can trigger in order to limit exploration." D08-1022,P07-1019,o,"The forest concept is also used in machine translation decoding, for example to characterize the search space of decoding with integrated language models (Huang and Chiang, 2007)." D09-1007,P07-1019,o,"These include cube pruning (Chiang, 2007), cube growing (Huang and Chiang, 2007), early pruning (Moore and Quirk, 2007), closing spans (Roark and Hollingshead, 2008; Roark and Hollingshead, 2009), coarse-to-fine methods (Petrov et al., 2008), pervasive laziness (Pust and Knight, 2009), and many more."
D09-1108,P07-1019,p,"In the SMT research community, the second step has been well studied and many methods have been proposed to speed up the decoding process, such as node-based or span-based beam search with different pruning strategies (Liu et al., 2006; Zhang et al., 2008a, 2008b) and cube pruning (Huang and Chiang, 2007; Mi et al., 2008)." D09-1123,P07-1019,o,"To circumvent these computational limitations, various pruning techniques are usually needed, e.g., (Huang and Chiang, 2007)." D09-1147,P07-1019,p,"To speed our computations, we use the cube pruning method of Huang and Chiang (2007) with a fixed beam size." D09-1147,P07-1019,o,"3.1 Translation Model Form We first assume the general hypergraph setting of Huang and Chiang (2007), namely, that derivations under our translation model form a hypergraph." E09-1044,P07-1019,p,Hiero Search Refinements Huang and Chiang (2007) offer several refinements to cube pruning to improve translation speed. E09-1061,P07-1019,n,"Huang and Chiang (2007) give an informal example, but do not elaborate on it." N09-1026,P07-1019,p,"Recent innovations have greatly improved the efficiency of language model integration through multipass techniques, such as forest reranking (Huang and Chiang, 2007), local search (Venugopal et al., 2007), and coarse-to-fine pruning (Petrov et al., 2008; Zhang and Gildea, 2008)." N09-1026,P07-1019,o,"As an alternative, Huang and Chiang (2007) describes a forest-based reranking algorithm called cube growing, which also employs beam search, but focuses computation only where necessary in a top-down pass through a parse forest." N09-1026,P07-1019,o,"Huang and Chiang (2007) describes the cube growing algorithm in further detail, including the precise form of the successor function for derivations."
N09-1027,P07-1019,o,"In a second top-down pass similar to Huang and Chiang (2007), we can recalculate psyn(d) for alternative derivations in the hypergraph; potentially correcting search errors made in the first pass." N09-1049,P07-1019,p,Hiero Search Refinements Huang and Chiang (2007) offer several refinements to cube pruning to improve translation speed. N09-2003,P07-1019,p,"1 Introduction A hypergraph, as demonstrated by Huang and Chiang (2007), is a compact data-structure that can encode an exponential number of hypotheses generated by a regular phrase-based machine translation (MT) system (e.g., Koehn et al." N09-2036,P07-1019,o,"Taken together with cube pruning (Chiang, 2007), k-best tree extraction (Huang and Chiang, 2005), and cube growing (Huang and Chiang, 2007), these results provide evidence that lazy techniques may penetrate deeper yet into MT decoding and other NLP search problems." N09-2036,P07-1019,o,"Huang and Chiang (2007) … Figure 3: Number of edges produced by the decoder, versus model cost of 1-best decodings." P08-1023,P07-1019,o,"For 1-best search, we use the cube pruning technique (Chiang, 2007; Huang and Chiang, 2007) which approximately intersects the translation forest with the LM." P08-1025,P07-1019,o,But we did not use any LM estimate to achieve early stopping as suggested by Huang and Chiang (2007). P08-1025,P07-1019,o,The cube-pruning by Chiang (2007) and the lazy cube-pruning of Huang and Chiang (2007) turn the computation of beam pruning of CYK decoders into a top-k selection problem given two columns of translation hypotheses that need to be combined. P08-1067,P07-1019,o,"So we propose forest reranking, a technique inspired by forest rescoring (Huang and Chiang, 2007) that approximately reranks the packed forest of exponentially many parses."
P08-1067,P07-1019,o,"For non-local features, we adapt cube pruning from forest rescoring (Chiang, 2007; Huang and Chiang, 2007), since the situation here is analogous to machine translation decoding with integrated language models: we can view the scores of unit nonlocal features as the language model cost, computed on-the-fly when combining sub-constituents." P09-1020,P07-1019,p,We also use Cube Pruning algorithm (Huang and Chiang 2007) to speed up the translation process. P09-1063,P07-1019,o,"6 Related Work In machine translation, the concept of packed forest is first used by Huang and Chiang (2007) to characterize the search space of decoding with language models." P09-1065,P07-1019,p,"Hypergraphs have been successfully used in parsing (Klein and Manning., 2001; Huang and Chiang, 2005; Huang, 2008) and machine translation (Huang and Chiang, 2007; Mi et al., 2008; Mi and Huang, 2008)." P09-1067,P07-1019,o,"A hypergraph is analogous to a parse forest (Huang and Chiang, 2007)." P09-2035,P07-1019,o,"4 Sub Translation Combining For sub translation combining, we mainly use the best-first expansion idea from cube pruning (Huang and Chiang, 2007) to combine subtranslations and generate the whole k-best translations." P09-2035,P07-1019,o,"Decoding time of our experiments (h means hours) language model for rescoring (Huang and Chiang, 2007)." P09-2036,P07-1019,o,"Recent work has explored two-stage decoding, which explicitly decouples decoding into a source parsing stage and a target language model integration stage (Huang and Chiang, 2007)." P09-2036,P07-1019,o,"We rerank derivations with cube growing, a lazy beam search algorithm (Huang and Chiang, 2007)." P09-2036,P07-1019,o,Forest reranking with a language model can be performed over this n-ary forest using the cube growing algorithm of Huang and Chiang (2007). W07-0701,P07-1019,o,"Since we approach decoding as xR transduction, the process is identical to that of constituency-based algorithms (e.g.
Huang and Chiang, 2007)." W08-0402,P07-1019,o,"However, with the algorithms proposed in (Huang and Chiang, 2005; Chiang, 2007; Huang and Chiang, 2007), it is possible to develop a general-purpose decoder that can be used by all the parsing-based systems." W08-0402,P07-1019,o,"In our decoder, we incorporate two pruning techniques described by (Chiang, 2007; Huang and Chiang, 2007)." W09-0429,P07-1019,o,"Note that this early discarding is related to ideas behind cube pruning (Huang and Chiang, 2007), which generates the top n most promising hypotheses, but in our method the decision not to generate hypotheses is guided by the quality of hypotheses on the result stack." W09-0439,P07-1019,o,"Decoding used beam search with the cube pruning algorithm (Huang and Chiang, 2007)." E09-1034,P07-1021,o,"In order to prove this induction step, we use the concept of order annotations (Kuhlmann, 2007; Kuhlmann and Mohl, 2007), which are strings that lexicalise the precedence relation between the nodes of a dependency tree." N09-1038,P07-1021,o,"The dependency trees induced when each rewrite rule in an i-th order LCFRS distinguish a unique head can similarly be characterized by being of gap-degree i, so that i is the maximum number of gaps that may appear between contiguous substrings of any subtree in the dependency tree (Kuhlmann and Mohl, 2007)." P09-3002,P07-1021,o,"(Kuhlmann and Mohl, 2007; McDonald and Nivre, 2007; Nivre et al., 2007) Hindi is a verb final, flexible word order language and therefore, has frequent occurrences of non-projectivity in its dependency structures." C08-1003,P07-1033,p,"In the supervised setting, a recent paper by Daume III (2007) shows that, using a very simple feature augmentation method coupled with Support Vector Machines, he is able to effectively use both labeled target and source data to provide the best results in a number of NLP tasks." 
C08-1003,P07-1033,o,"In order to build models that perform well in new (target) domains we usually find two settings (Daume III, 2007): In the semi-supervised setting the goal is to improve the system trained on the source domain using unlabeled data from the target domain, and the baseline is that of the system c2008." C08-1015,P07-1033,o,"There are two tasks(Daume III, 2007) for the domain adaptation problem." C08-1059,P07-1033,o,"This is comparable to the accuracy of 96.29% reported by (Daume III, 2007) on the newswire domain." D08-1105,P07-1033,o,"In particular, we use a feature augmentation technique recently introduced by Daume III (2007), and active learning (Lewis and Gale, 1994) to perform domain adaptation of WSD systems." D08-1105,P07-1033,o,"5 Combining In-Domain and Out-of-Domain Data for Training In this section, we will first introduce the AUGMENT technique of Daume III (2007), before showing the performance of our WSD system with and without using this technique." D08-1105,P07-1033,p,5.1 The AUGMENT technique for Domain Adaptation The AUGMENT technique introduced by Daume III (2007) is a simple yet very effective approach to performing domain adaptation. D08-1105,P07-1033,o,"In the English all-words task of the previous SENSEVAL evaluations (SENSEVAL-2, SENSEVAL3, SemEval-2007), the best performing English all-words task systems with the highest WSD accuracy were trained on SEMCOR (Mihalcea and Moldovan, 2001; Decadt et al., 2004; Chan et al., 2007b)." D09-1086,P07-1033,o,"Many adaptation methods operate by simple augmentations of the target feature space, as we have donehere(DaumeIII,2007)." D09-1158,P07-1033,o,"(Blitzer et al., 2006; Jiang and Zhai, 2007; Daume III, 2007; Finkel and Manning, 2009), or [S+T-], where no labeled target domain data is available, e.g." D09-1158,P07-1033,o,"Daume III (Daume III, 2007) divided features into three classes: domainindependent features, source-domain features and target-domain features." 
E09-1006,P07-1033,o,"The last row shows the results for the feature augmentation algorithm (Daume III, 2007)." E09-1006,P07-1033,p,"In the supervised setting, a recent paper by Daume III (2007) shows that a simple feature augmentation method for SVM is able to effectively use both labeled target and source data to provide the best domain adaptation results in a number of NLP tasks." E09-1006,P07-1033,o,"In order to build models that perform well in new (target) domains we usually find two settings (Daume III, 2007)." E09-3005,P07-1033,o,"The problem itself has started to get attention only recently (Roark and Bacchiani, 2003; Hara et al., 2005; Daume III and Marcu, 2006; Daume III, 2007; Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007)." E09-3005,P07-1033,o,"We distinguish two main approaches to domain adaptation that have been addressed in the literature (Daume III, 2007): supervised and semi-supervised." E09-3005,P07-1033,o,"In supervised domain adaptation (Gildea, 2001; Roark and Bacchiani, 2003; Hara et al., 2005; Daume III, 2007), besides the labeled source data, we have access to a comparably small, but labeled amount of target data." E09-3005,P07-1033,p,"Studies on the supervised task have shown that straightforward baselines (e.g. models based on source only, target only, or the union of the data) achieve a relatively high performance level and are surprisingly difficult to beat (Daume III, 2007)." E09-3005,P07-1033,o,"Thus, one conclusion from that line of work is that as soon as there is a reasonable (often even small) amount of labeled target data, it is often more fruitful to either just use that, or to apply simple adaptation techniques (Daume III, 2007; Plank and van Noord, 2008)."
E09-3005,P07-1033,o,"Therefore, whenever we have access to a large amount of labeled data from some source (out-of-domain), but we would like a model that performs well on some new target domain (Gildea, 2001; Daume III, 2007), we face the problem of domain adaptation." I08-2097,P07-1033,o,"There are many possible methods for combining unlabeled and labeled data (Daume III, 2007), but we simply concatenate unlabeled data with labeled data to see the effectiveness of the selected reliable parses." I08-2097,P07-1033,o,"This was a difficult challenge as many participants in the task failed to obtain any meaningful gains from unlabeled data (Dredze et al., 2007)." I08-2097,P07-1033,o,"For the multilingual dependency parsing track, which was the other track of the shared task, Nilsson et al. achieved the best performance using an ensemble method (Hall et al., 2007)." N09-1032,P07-1033,o,Daume III (2007) further augments the feature space on the instances of both domains. N09-1068,P07-1033,o,"Because Daume III (2007) views the adaptation as merely augmenting the feature space, each of his features has the same prior mean and variance, regardless of whether it is domain specific or independent." N09-1068,P07-1033,o,"Trained and tested using the same technique as (Daume III, 2007)." N09-1068,P07-1033,o,"5 Related Work We already discussed the relation of our work to (Daume III, 2007) in Section 2.4." N09-1068,P07-1033,o,"We also show that the domain adaptation work of (Daume III, 2007), which is presented as an ad-hoc preprocessing step, is actually equivalent to our formal model." N09-1068,P07-1033,o,"However, our representation of the model conceptually separates some of the hyperparameters which are not separated in (Daume III, 2007), and we found that setting these hyperparameters with different values from one another was critical for improving performance."
N09-1068,P07-1033,o,"We show that the method of (Daume III, 2007), which was presented as a simple preprocessing step, is actually equivalent, except our representation explicitly separates hyperparameters which were tied in his work." N09-1068,P07-1033,o,"We demonstrate that allowing different values for these hyperparameters significantly improves performance over both a strong baseline and (Daume III, 2007) within both a conditional random field sequence model for named entity recognition and a discriminatively trained dependency parser." N09-1068,P07-1033,o,"2.4 Formalization of (Daume III, 2007) As mentioned earlier, our model is equivalent to that presented in (Daume III, 2007), and can be viewed as a formal version of his model. In his presentation, the adaptation is done through feature augmentation." N09-1068,P07-1033,o,"Recall that the log likelihood of our model is: Σ_d ( L_orig(D_d; θ_d) − Σ_i (θ_{d,i} − θ_{*,i})² / (2σ_d²) ) − Σ_i (θ_{*,i})² / (2σ_*²). We now introduce a new variable ψ_d = θ_d − θ_*, and plug it into the equation for log likelihood: Σ_d ( L_orig(D_d; ψ_d + θ_*) − Σ_i (ψ_{d,i})² / (2σ_d²) ) − Σ_i (θ_{*,i})² / (2σ_*²). The result is the model of (Daume III, 2007), where the ψ_d are the domain-specific feature weights, and θ_* are the domain-independent feature weights." P08-1029,P07-1033,o,"Other techniques have tried to quantify the generalizability of certain features across domains (Daume III and Marcu, 2006; Jiang and Zhai, 2006), or tried to exploit the common structure of related problems (Ben-David et al., 2007; Scholkopf et al., 2005)." P08-1029,P07-1033,o,"Daume allows an extra degree of freedom among the features of his domains, implicitly creating a two-level feature hierarchy with one branch for general features, and another for domain specific ones, but does not extend his hierarchy further (Daume III, 2007))."
P09-1056,P07-1033,o,"Unlike our technique, in most cases researchers have focused on the scenario where labeled training data is available in both the source and the target domain (e.g., (Daume III, 2007; Chelba and Acero, 2004; Daume III and Marcu, 2006))." P09-1059,P07-1033,o,"(2006) and Daume III (2007) (and see below for discussions), so in this paper we focus on the less studied, but equally important problem of annotation-style adaptation." P09-1059,P07-1033,o,"This method is very similar to some ideas in domain adaptation (Daume III and Marcu, 2006; Daume III, 2007), but we argue that the underlying problems are quite different." P09-1059,P07-1033,o,"ald, 2008), and is also similar to the Pred baseline for domain adaptation in (Daume III and Marcu, 2006; Daume III, 2007)." P09-1087,P07-1033,p,"For example, (Daume III, 2007) shows that training a learning algorithm on the weighted union of different data sets (which is basically what we did) performs almost as well as more involved domain adaptation approaches." P09-1114,P07-1033,o,"The model presented above is based on our previous work (Jiang and Zhai, 2007c), which bears the same spirit of some other recent work on multitask learning (Ando and Zhang, 2005; Evgeniou and Pontil, 2004; Daume III, 2007)." P09-1114,P07-1033,o,"While transfer learning was proposed more than a decade ago (Thrun, 1996; Caruana, 1997), its application in natural language processing is still a relatively new territory (Blitzer et al., 2006; Daume III, 2007; Jiang and Zhai, 2007a; Arnold et al., 2008; Dredze and Crammer, 2008), and its application in relation extraction is still unexplored." P09-1114,P07-1033,o,Daume III (2007) proposed a simple feature augmentation method to achieve domain adaptation.
P09-2079,P07-1033,o,"Also, the aspect of generalizing features across different products is closely related to fully supervised domain adaptation (Daume III, 2007), and we plan to combine our approach with the idea from Daume III (2007) to gain insights into whether the composite back-off features exhibit different behavior in domain-general versus domain-specific feature sub-spaces." W09-2420,P07-1033,o,"Differences in behavior of WSD systems when applied to lexical-sample and all-words datasets have been observed on previous Senseval and Semeval competitions (Kilgarriff, 2001; Mihalcea et al., 2004; Pradhan et al., 2007): supervised systems attain results on the high 80s and beat the most frequent baseline by a large margin for lexical-sample datasets, but results on the all-words datasets were much more modest, on the low 70s, and a few points above the most frequent baseline." W09-2420,P07-1033,o,"1 Introduction Word Sense Disambiguation (WSD) competitions have focused on general domain texts, as attested in the last Senseval and Semeval competitions (Kilgarriff, 2001; Mihalcea et al., 2004; Pradhan et al., 2007)." W09-2420,P07-1033,p,"For instance, (Daume III, 2007) shows that a simple feature augmentation method for SVM is able to effectively use both labeled target and source data to provide the best domain adaptation results in a number of NLP tasks." C08-1014,P07-1040,o,"This technique is called system combination (Bangalore et al., 2001; Matusov et al., 2006; Sim et al., 2007; Rosti et al., 2007a; Rosti et al., 2007b)." C08-1014,P07-1040,o,"Re-decoding (Rosti et al., 2007a) based regeneration re-decodes the source sentence using original LM as well as new translation and reordering models that are trained on the source-to-target N-best translations generated in the first pass." C08-1014,P07-1040,o,"Confusion network and re-decoding have been well studied in the combination of different MT systems (Bangalore et al., 2001; Matusov et al., 2006; Sim et al., 2007; Rosti et al., 2007a; Rosti et al., 2007b)." C08-1014,P07-1040,o,"(Rosti et al., 2007a) also used re-decoding to do system combination by extracting sentence-specific phrase translation tables from the outputs of different MT systems and running a phrase-based decoding with this new translation table." C08-1014,P07-1040,o,"3.1 Regeneration with Re-decoding One way of regeneration is by running the decoding again to obtain new hypotheses through a re-decoding process (Rosti et al., 2007a)." C08-1014,P07-1040,o,"(2007), Rosti et al." C08-1014,P07-1040,o,"(2007a), and Rosti et al." C08-1014,P07-1040,o,"(2007), Rosti et al." C08-1014,P07-1040,o,"(2007a), and Rosti et al." D08-1011,P07-1040,o,(2007) and Rosti et al. D08-1011,P07-1040,o,"Similar to (Rosti et al., 2007), each word in the confusion network is associated with a word posterior probability." D08-1011,P07-1040,p,"2 Confusion-network-based MT system combination The current state-of-the-art is confusion-network-based MT system combination as described by Rosti and colleagues (Rosti et al., 2007a, Rosti et al., 2007b)." D08-1011,P07-1040,o,"Recently, confusion-network-based system combination algorithms have been developed to combine outputs of multiple machine translation (MT) systems to form a consensus output (Bangalore, et al. 2001, Matusov et al., 2006, Rosti et al., 2007, Sim et al., 2007)."
D09-1114,P07-1040,n,"Although various approaches to SMT system combination have been explored, including enhanced combination model structure (Rosti et al., 2007), better word alignment between translations (Ayan et al., 2008; He et al., 2008) and improved confusion network construction (Rosti et al., 2008), most previous work simply used the ensemble of SMT systems based on different models and paradigms at hand and did not tackle the issue of how to obtain the ensemble in a principled way." D09-1114,P07-1040,o,"3.2 System Combination Scheme In our work, we use a sentence-level system combination model to select best translation hypothesis from the candidate pool ( ) . This method can also be viewed to be a hypotheses reranking model since we only use the existing translations instead of performing decoding over a confusion network as done in the word-level combination method (Rosti et al., 2007)." D09-1115,P07-1040,o,"ps(arc) is increased by 1/(k+1) if the hypothesis ranking k in the system s contains the arc (Rosti et al., 2007a; He et al., 2008)." D09-1115,P07-1040,p,"In recent several years, the system combination methods based on confusion networks developed rapidly (Bangalore et al., 2001; Matusov et al., 2006; Sim et al., 2007; Rosti et al., 2007a; Rosti et al., 2007b; Rosti et al., 2008; He et al., 2008), which show state-of-the-art performance in benchmarks." D09-1125,P07-1040,o,"al 2006, Rosti, et al. 2007a)." N09-2019,P07-1040,p,"It is very likely that even greater gains can be achieved by more complicated combination schemes (Rosti et al., 2007), although significantly more effort in tuning would be required." P09-1065,P07-1040,o,"5.3 Comparison with System Combination We re-implemented a state-of-the-art system combination method (Rosti et al., 2007)."
P09-1065,P07-1040,p,"In machine translation, confusion-network based combination techniques (e.g., (Rosti et al., 2007; He et al., 2008)) have achieved the state-of-the-art performance in MT evaluations." P09-1065,P07-1040,p,"Recent several years have witnessed the rapid development of system combination methods based on confusion networks (e.g., (Rosti et al., 2007; He et al., 2008)), which show state-of-the-art performance in MT benchmarks." P09-1106,P07-1040,o,"Among the four steps, the hypothesis alignment presents the biggest challenge to the method due to the varying word orders between outputs from different MT systems (Rosti et al, 2007)." P09-1106,P07-1040,o,"Similar to (Rosti et al., 2007a), each word in the hypothesis is assigned with a rank-based score of 1/(1+r), where r is the rank of the hypothesis." P09-1106,P07-1040,o,"(2007), Rosti et al." P09-1106,P07-1040,o,"(2007a), and Rosti et al." P09-1106,P07-1040,o,"We follow the work of (Sim et al., 2007; Rosti et al., 2007a; Rosti et al., 2007b; He et al., 2008) and choose the hypothesis that best agrees with other hypotheses on average as the backbone by applying Minimum Bayes Risk (MBR) decoding (Kumar and Byrne, 2004)." P09-1106,P07-1040,p,"Confusion network based system combination for machine translation has shown promising advantage compared with other techniques based system combination, such as sentence level hypothesis selection by voting and source sentence re-decoding using the phrases or translation models that are learned from the source sentences and target hypotheses pairs (Rosti et al., 2007a; Huang and Papineni, 2007)." P09-1106,P07-1040,o,"TER-based: TER-based word alignment method (Sim et al., 2007; Rosti et al., 2007a; Rosti et al., 2007b) is an extension of multiple string matching algorithm based on Levenshtein edit distance (Bangalore et al., 2001)."
W08-0309,P07-1040,o,"ID Participant BBN-COMBO BBN system combination (Rosti et al., 2008) CMU-COMBO Carnegie Mellon University system combination (Jayaraman and Lavie, 2005) CMU-GIMPEL Carnegie Mellon University Gimpel (Gimpel and Smith, 2008) CMU-SMT Carnegie Mellon University SMT (Bach et al., 2008) CMU-STATXFER Carnegie Mellon University Stat-XFER (Hanneman et al., 2008) CU-TECTOMT Charles University TectoMT (Zabokrtsky et al., 2008) CU-BOJAR Charles University Bojar (Bojar and Hajic, 2008) CUED Cambridge University (Blackwood et al., 2008) DCU Dublin City University (Tinsley et al., 2008) LIMSI LIMSI (Dechelotte et al., 2008) LIU Linkoping University (Stymne et al., 2008) LIUM-SYSTRAN LIUM / Systran (Schwenk et al., 2008) MLOGIC Morphologic (Novak et al., 2008) PCT a commercial MT provider from the Czech Republic RBMT16 Babelfish, Lingenio, Lucy, OpenLogos, ProMT, SDL (ordering anonymized) SAAR University of Saarbruecken (Eisele et al., 2008) SYSTRAN Systran (Dugast et al., 2008) UCB University of California at Berkeley (Nakov, 2008) UCL University College London (Wang and Shawe-Taylor, 2008) UEDIN University of Edinburgh (Koehn et al., 2008) UEDIN-COMBO University of Edinburgh system combination (Josh Schroeder) UMD University of Maryland (Dyer, 2007) UPC Universitat Politecnica de Catalunya, Barcelona (Khalilov et al., 2008) UW University of Washington (Axelrod et al., 2008) XEROX Xerox Research Centre Europe (Nikoulina and Dymetman, 2008) Table 2: Participants in the shared translation task." W08-0329,P07-1040,o,"As in (Rosti et al., 2007), confusion networks built around all skeletons are joined into a lattice which is expanded and rescored with language models." W08-0329,P07-1040,o,"Other scores for the word arc are set as in (Rosti et al., 2007)."
W08-0329,P07-1040,o,"The recent approaches used pair-wise alignment algorithms based on symmetric alignments from a HMM alignment model (Matusov et al., 2006) or edit distance alignments allowing shifts (Rosti et al., 2007)." W08-0329,P07-1040,o,"to the pair-wise TER alignment described in (Rosti et al., 2007)." W09-0405,P07-1040,o,"Previous work on building hybrid systems includes, among others, approaches using reranking, regeneration with an SMT decoder (Eisele et al., 2008; Chen et al., 2007), and confusion networks (Matusov et al., 2006; Rosti et al., 2007; He et al., 2008)." W09-0407,P07-1040,n,"In contrast to existing approaches (Jayaraman and Lavie, 2005; Rosti et al., 2007), the context of the whole corpus rather than a single sentence is considered in this iterative, unsupervised procedure, yielding a more reliable alignment." W09-0407,P07-1040,p,"In our experience, this approach is advantageous in terms of translation quality, e.g. by 0.7% in BLEU compared to a minimum Bayes risk primary (Rosti et al., 2007)." W09-0409,P07-1040,p,"The availability of the TER software has made it easy to build a high performance system combination baseline (Rosti et al., 2007)." W09-0409,P07-1040,o,"The hypothesis scores and tuning are identical to the setup used in (Rosti et al., 2007)." W09-0411,P07-1040,o,"Besides continued research on improving MT techniques, one line of research is dedicated to better exploitation of existing methods for the combination of their respective advantages (Macherey and Och, 2007; Rosti et al., 2007a)." W09-0411,P07-1040,o,"This can be seen as a simplified version of (Rosti et al., 2007b)." W09-0441,P07-1040,o,"Such a technique has been used with TER to combine the output of multiple translation systems (Rosti et al., 2007)." D07-1102,P07-1050,o,"Recent work shows that k-best maximum spanning tree (MST) parsing and reranking is also viable (Hall, 2007)." 
D07-1102,P07-1050,o,"2.1.4 Model Features Our MST models are based on the features described in (Hall, 2007); specifically, we use features based on a dependency node's form, lemma, coarse and fine part-of-speech tag, and morphological string attributes." D07-1102,P07-1050,o,"The tree-based reranker includes the features described in (Hall, 2007) as well as features based on non-projective edge attributes explored in (Havelka, 2007a; Havelka, 2007b)." D07-1102,P07-1050,o,"3 Results and Analysis Hall (2007) shows that the oracle parsing accuracy of a k-best edge-factored MST parser is considerably higher than the one-best score of the same parser, even when k is small." D08-1059,P07-1050,o,"Nakagawa (2007) and Hall (2007) also showed the effectiveness of global features in improving the accuracy of graph-based parsing, using the approximate Gibbs sampling method and a reranking approach, respectively." D08-1059,P07-1050,o,"An existing method to combine multiple parsing algorithms is the ensemble approach (Sagae and Lavie, 2006a), which was reported to be useful in improving dependency parsing (Hall et al., 2007)." P08-1108,P07-1050,o,"Thus, Nakagawa (2007) and Hall (2007) both try to overcome the limited feature scope of graph-based models by adding global features, in the former case using Gibbs sampling to deal with the intractable inference problem, in the latter case using a re-ranking scheme." D08-1013,P07-1055,o,(McDonald et al 2007; Ivan et al 2008) proposed a structured model based on CRFs for jointly classifying the sentiment of text at varying levels of granularity.
D09-1019,P07-1055,o,"There are many research directions, e.g., sentiment classification (classifying an opinion document as positive or negative) (e.g., Pang, Lee and Vaithyanathan, 2002; Turney, 2002), subjectivity classification (determining whether a sentence is subjective or objective, and its associated opinion) (Wiebe and Wilson, 2002; Yu and Hatzivassiloglou, 2003; Wilson et al, 2004; Kim and Hovy, 2004; Riloff and Wiebe, 2005), feature/topic-based sentiment analysis (assigning positive or negative sentiments to topics or product features) (Hu and Liu 2004; Popescu and Etzioni, 2005; Carenini et al., 2005; Ku et al., 2006; Kobayashi, Inui and Matsumoto, 2007; Titov and McDonald." D09-1019,P07-1055,o,"One of the main directions is sentiment classification, which classifies the whole opinion document (e.g., a product review) as positive or negative (e.g., Pang et al, 2002; Turney, 2002; Dave et al, 2003; Ng et al. 2006; McDonald et al, 2007)." D09-1019,P07-1055,o,"Another important direction is classifying sentences as subjective or objective, and classifying subjective sentences or clauses as positive or negative (Wiebe et al, 1999; Wiebe and Wilson, 2002, Yu and Hatzivassiloglou, 2003; Wilson et al, 2004; Kim and Hovy, 2004; Riloff and Wiebe, 2005; Gamon et al 2005; McDonald et al, 2007)." D09-1019,P07-1055,o,"Several researchers also studied feature/topicbased sentiment analysis (e.g., Hu and Liu, 2004; Popescu and Etzioni, 2005; Ku et al, 2006; Carenini et al, 2006; Mei et al, 2007; Ding, Liu and Yu, 2008; Titov and R. McDonald, 2008; Stoyanov and Cardie, 2008; Lu and Zhai, 2008)." D08-1072,P07-1056,o,"Domain adaptation deals with these feature distribution changes (Blitzer et al., 2007; Jiang and Zhai, 2007)." D08-1072,P07-1056,o,"5 Datasets For evaluation we selected two domain adaptation datasets: spam (Jiang and Zhai, 2007) and sentiment (Blitzer et al., 2007)." 
D09-1061,P07-1056,o,"We use five sentiment classification datasets, including the widely-used movie review dataset [MOV] (Pang et al., 2002) as well as four datasets containing reviews of four different types of products from Amazon [books (BOO), DVDs (DVD), electronics (ELE), and kitchen appliances (KIT)] (Blitzer et al., 2007)." D09-1061,P07-1056,o,"However, such methods require the existence of either a parallel corpus/machine translation engine for projecting/translating annotations/lexica from a resource-rich language to the target language (Banea et al., 2008; Wan, 2008), or a domain that is similar enough to the target domain (Blitzer et al., 2007)." E09-3005,P07-1056,o,"The problem itself has started to get attention only recently (Roark and Bacchiani, 2003; Hara et al., 2005; Daume III and Marcu, 2006; Daume III, 2007; Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007)." E09-3005,P07-1056,o,"In contrast, semi-supervised domain adaptation (Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007) is the scenario in which, in addition to the labeled source data, we only have unlabeled and no labeled target domain data." E09-3005,P07-1056,n,"2 Motivation and Prior Work While several authors have looked at the supervised adaptation case, there are less (and especially less successful) studies on semi-supervised domain adaptation (McClosky et al., 2006; Blitzer et al., 2006; Dredze et al., 2007)." E09-3005,P07-1056,p,"While SCL has been successfully applied to PoS tagging and Sentiment Analysis (Blitzer et al., 2006; Blitzer et al., 2007), its effectiveness for parsing was rather unexplored." E09-3005,P07-1056,p,"Similarly, Structural Correspondence Learning (Blitzer et al., 2006; Blitzer et al., 2007; Blitzer, 2008) has proven to be successful for the two tasks examined, PoS tagging and Sentiment Classification." 
E09-3005,P07-1056,o,"So far, SCL has been applied successfully in NLP for Part-of-Speech tagging and Sentiment Analysis (Blitzer et al., 2006; Blitzer et al., 2007)." E09-3005,P07-1056,o,"4 Structural Correspondence Learning SCL (Structural Correspondence Learning) (Blitzer et al., 2006; Blitzer et al., 2007; Blitzer, 2008) is a recently proposed domain adaptation technique which uses unlabeled data from both source and target domain to learn correspondences between features from different domains." E09-3005,P07-1056,o,"So far, pivot features on the word level were used (Blitzer et al., 2006; Blitzer et al., 2007; Blitzer, 2008), e.g. Does the bigram not buy occur in this document? (Blitzer, 2008)." N09-1055,P07-1056,p,"With the in-depth study of opinion mining, researchers committed their efforts for more accurate results: the research of sentiment summarization (Philip et al., 2004; Hu et al., KDD 2004), domain transfer problem of the sentiment analysis (Kanayama et al., 2006; Tan et al., 2007; Blitzer et al., 2007; Tan et al., 2008; Andreevskaia et al., 2008; Tan et al., 2009) and finegrained opinion mining (Hatzivassiloglou et al., 2000; Takamura et al., 2007; Bloom et al., 2007; Wang et al., 2008; Titov et al., 2008) are the main branches of the research of opinion mining." N09-1056,P07-1056,o,"On a separate note, previous research has explicitly studied sentiment analysis as an application of transfer learning (Blitzer et al., 2007)." P08-2059,P07-1056,o,"We selected four binary NLP datasets for evaluation: 20 Newsgroups1 and Reuters (Lewis et al., 2004) (used by Tong and Koller) and sentiment classification (Blitzer et al., 2007) and spam (Bickel, 2006)." P08-2065,P07-1056,o,"As the training data from DVDs is much more similar to books than that from kitchen (Blitzer et al., 2007), we should give the data from DVDs a higher weight." 
P09-1001,P07-1056,o,"Various machine learning strategies have been proposed to address this problem, including semi-supervised learning (Zhu, 2007), domain adaptation (Wu and Dietterich, 2004; Blitzer et al., 2006; Blitzer et al., 2007; Arnold et al., 2007; Chan and Ng, 2007; Daume, 2007; Jiang and Zhai, 2007; Reichart and Rappoport, 2007; Andreevskaia and Bergler, 2008), multi-task learning (Caruana, 1997; Reichart et al., 2008; Arnold et al., 2008), self-taught learning (Raina et al., 2007), etc. A commonality among these methods is that they all require the training data and test data to be in the same feature space." P09-1027,P07-1056,o,"Training Set (Labeled English Reviews): There are many labeled English corpora available on the Web and we used the corpus constructed for multi-domain sentiment classification (Blitzer et al., 2007), because the corpus was large-scale and it was within similar domains as the test set." P09-1027,P07-1056,o,"We will employ the structural correspondence learning (SCL) domain adaptation algorithm used in (Blitzer et al., 2007) for linking the translated text and the natural text." P09-1028,P07-1056,o,"Amazon Reviews: The dataset contains product reviews taken from Amazon.com from 4 product types: Kitchen, Books, DVDs, and Electronics (Blitzer et al., 2007)." P09-1028,P07-1056,o,"Finally, recent efforts have also looked at transfer learning mechanisms for sentiment analysis, e.g., see (Blitzer et al., 2007)." P09-1078,P07-1056,o,"And 20NG is a collection of approximately 20,000 20-category documents. In sentiment text classification, we also use two data sets: one is the widely used Cornell movie-review dataset (Pang and Lee, 2004) and one dataset from product reviews of domain DVD (Blitzer et al., 2007)."
P09-1079,P07-1056,o,"4 Evaluation 4.1 Experimental Setup For evaluation, we use five sentiment classification datasets, including the widely-used movie review dataset [MOV] (Pang et al., 2002) as well as four datasets that contain reviews of four different types of product from Amazon [books (BOO), DVDs (DVD), electronics (ELE), and kitchen appliances (KIT)] (Blitzer et al., 2007)." P09-2080,P07-1056,o,"The second one needs no labeled data for the new domain (Blitzer et al., 2007; Tan et al., 2007; Andreevskaia and Bergler, 2008; Tan et al., 2008; Tan et al., 2009)." P09-2080,P07-1056,o,"We also compare our algorithm to Structural Correspondence Learning (SCL) (Blitzer et al., 2007)." P09-2080,P07-1056,o,"Seen from Table 2, our result about SCL is in accord with that in (Blitzer et al., 2007) on the whole." W08-0804,P07-1056,p,"3 Experiments We evaluated the effect of random feature mixing on four popular learning methods: Perceptron, MIRA (Crammer et al., 2006), SVM and Maximum entropy; with 4 NLP datasets: 20 Newsgroups1, Reuters (Lewis et al., 2004), Sentiment (Blitzer et al., 2007) and Spam (Bickel, 2006)." W09-2205,P07-1056,o,"5 Conclusions and Future Work The paper compares Structural Correspondence Learning (Blitzer et al., 2006) with (various instances of) self-training (Abney, 2007; McClosky et al., 2006) for the adaptation of a parse selection model to Wikipedia domains." W09-2205,P07-1056,o,"We examine Structural Correspondence Learning (SCL) (Blitzer et al., 2006) for this task, and compare it to several variants of Self-training (Abney, 2007; McClosky et al., 2006)." W09-2205,P07-1056,p,"2 Previous Work So far, Structural Correspondence Learning has been applied successfully to PoS tagging and Sentiment Analysis (Blitzer et al., 2006; Blitzer et al., 2007)." W09-2205,P07-1056,o,"The techniques examined are Structural Correspondence Learning (SCL) (Blitzer et al., 2006) and Self-training (Abney, 2007; McClosky et al., 2006)." 
W09-2205,P07-1056,o,"SCL for Discriminative Parse Selection So far, pivot features on the word level were used (Blitzer et al., 2006; Blitzer et al., 2007)." W09-2211,P07-1056,o,"Labeled data for one domain might be used to train a initial classifier for another (possibly related) domain, and then bootstrapping can be employed to learn new knowledge from the new domain (Blitzer et al., 2007)." D07-1049,P07-1065,o,"5.3 Analysis of BF-LM framework We refer to (Talbot and Osborne, 2007) for empirical results establishing the performance of the log-frequency BF-LM: overestimation errors occur with a probability that decays exponentially in the size of the overestimation error." D07-1049,P07-1065,o,"We hope the present work will, together with Talbot and Osborne (2007), establish the Bloom filter as a practical alternative to conventional associative data structures used in computational linguistics." D07-1049,P07-1065,o,"In this paper, we build on recent work (Talbot and Osborne, 2007) that demonstrated how the Bloom filter (Bloom (1970); BF), a space-efficient randomised data structure for representing sets, could be used to store corpus statistics efficiently." D07-1049,P07-1065,o,"Our framework makes use of the log-frequency Bloom filter presented in (Talbot and Osborne, 2007), and described briefly below, to compute smoothed conditional n-gram probabilities on the fly."
D07-1049,P07-1065,p,"3 Language modelling with Bloom filters Recent work (Talbot and Osborne, 2007) presented a scheme for associating static frequency information with a set of n-grams in a BF efficiently. 3.1 Log-frequency Bloom filter The efficiency of the scheme for storing n-gram statistics within a BF presented in Talbot and Osborne (2007) relies on the Zipf-like distribution of n-gram frequencies: most events occur an extremely small number of times, while a small number are very frequent." D07-1049,P07-1065,o,"As noted in Talbot and Osborne (2007), errors for this log-frequency BF scheme are one-sided: frequencies will never be underestimated." D07-1049,P07-1065,o,3.2.1 Proxy items There is a potential risk of redundancy if we represent related statistics using the log-frequency BF scheme presented in Talbot and Osborne (2007). I08-2089,P07-1065,o,"Also the use of lossy data structures based on Bloom filters has been demonstrated to be effective for LMs (Talbot and Osborne, 2007a; Talbot and Osborne, 2007b)." N09-1058,P07-1065,p,"There also have been prior work on maintaining approximate counts for higher-order language models (LMs) ((Talbot and Osborne, 2007a; Talbot and Osborne, 2007b; Talbot and Brants, 2008)) operates under the model that the goal is to store a compressed representation of a disk-resident table of counts and use this compressed representation to answer count queries approximately." N09-1058,P07-1065,p,"Since the use of cluster of machines is not always practical, (Talbot and Osborne, 2007b; Talbot and Osborne, 2007a) showed a randomized data structure called Bloom filter, that can be used to construct space efficient language models for SMT."
N09-1058,P07-1065,p,"3 Space-Efficient Approximate Frequency Estimation Prior work on approximate frequency estimation for language models provide a no-false-negative guarantee, ensuring that counts for n-grams in the model are returned exactly, while working to make sure the false-positive rate remains small (Talbot and Osborne, 2007a)." P08-1058,P07-1065,p,"Following (Talbot and Osborne, 2007a) we can avoid unnecessary false positives by not querying for the longer n-gram in such cases." P08-1058,P07-1065,p,"Recent work (Talbot and Osborne, 2007b) has demonstrated that randomized encodings can be used to represent n-gram counts for LMs with significant space-savings, circumventing information-theoretic constraints on lossless data structures by allowing errors with some small probability." P08-1058,P07-1065,o,"However, if we are willing to accept that occasionally our model will be unable to distinguish between distinct n-grams, then it is possible to store each parameter in constant space independent of both n and the vocabulary size (Carter et al., 1978), (Talbot and Osborne, 2007a)." P08-1058,P07-1065,o,"2.3 Previous Randomized LMs Recent work (Talbot and Osborne, 2007b) has used lossy encodings based on Bloom filters (Bloom, 1970) to represent logarithmically quantized corpus statistics for language modeling." P08-1058,P07-1065,o,"Note that unlike the constructions in (Talbot and Osborne, 2007b) and (Church et al., 2007) no errors are possible for n-grams stored in the model." W09-0420,P07-1065,p,"RANDLM (Talbot and Osborne, 2007) performs well and scaled to the full data with improvement (resulting in our best overall system)." W09-0424,P07-1065,o,"We have also implemented a Bloom Filter LM in Joshua, following Talbot and Osborne (2007)."
C08-1071,P07-1080,o,"1 Introduction Supervised statistical parsers attempt to capture patterns of syntactic structure from a labeled set of examples for the purpose of annotating new sentences with their structure (Bod, 2003; Charniak and Johnson, 2005; Collins and Koo, 2005; Petrov et al., 2006; Titov and Henderson, 2007)." D07-1099,P07-1080,p,"We use a recently proposed dependency parser (Titov and Henderson, 2007b)1 which has demonstrated state-of-the-art performance on a selection of languages from the 1The ISBN parser will be soon made downloadable from the authors web-page." D07-1099,P07-1080,p,"When conditioning on words, we treated each word feature individually, as this proved to be useful in (Titov and Henderson, 2007b)." D07-1099,P07-1080,o,"ISBNs, originally proposed for constituent parsing in (Titov and Henderson, 2007a), use vectors of binary latent variables to encode information about the parse history." D07-1099,P07-1080,o,"In fact, in (Titov and Henderson, 2007a) it was shown that this neural network can be viewed as a coarse approximation to the corresponding ISBN model." D07-1099,P07-1080,o,"In our experiments we use the same definition of structural locality as was proposed for the ISBN dependency parser in (Titov and Henderson, 2007b)." D07-1099,P07-1080,p,"3 Parsing Exact inference in ISBN models is not tractable, but effective approximations were proposed in (Titov and Henderson, 2007a)." D07-1099,P07-1080,o,"Unlike (Titov and Henderson, 2007b), in the shared task we used only the simplest feed-forward approximation, which replicates the computation of a neural network of the type proposed in (Henderson, 2003)." D07-1099,P07-1080,o,"We would expect better performance with the more accurate approximation based on variational inference proposed and evaluated in (Titov and Henderson, 2007a)."
D07-1099,P07-1080,o,"To search for the most probable parse, we use the heuristic search algorithm described in (Titov and Henderson, 2007b), which is a form of beam search." D07-1099,P07-1080,o,"As was demonstrated in (Titov and Henderson, 2007b), even a minimal set of local explicit features achieves results which are non-significantly different from a carefully chosen set of explicit features, given the language independent definition of locality described in section 2." D07-1099,P07-1080,o,"This curve plots the average labeled attachment score over Basque, Chinese, English, and Turkish as a function of parsing time per token.4 Accuracy of only 1% below the maximum can be achieved with average processing time of 17 ms per token, or 60 tokens per second.5 We also refer the reader to (Titov and Henderson, 2007b) for more detailed analysis of the ISBN dependency parser results, where, among other things, it was shown that the ISBN model is especially accurate at modeling long dependencies." N09-2032,P07-1080,o,"5.1 The statistical parser The parsing model is the one proposed in Merlo and Musillo (2008), which extends the syntactic parser of Henderson (2003) and Titov and Henderson (2007) with annotations which identify semantic role labels, and has competitive performance." N09-2032,P07-1080,o,"The probabilities of derivation decisions are modelled using the neural network approximation (Henderson, 2003) to a type of dynamic Bayesian Network called an Incremental Sigmoid Belief Network (ISBN) (Titov and Henderson, 2007)." P08-1068,P07-1080,o,"Previous research in this area includes several models which incorporate hidden variables (Matsuzaki et al., 2005; Koo and Collins, 2005; Petrov et al., 2006; Titov and Henderson, 2007)." 
W07-2218,P07-1080,o,"It is based on Incremental Sigmoid Belief Networks (ISBNs), a class of directed graphical model for structure prediction problems recently proposed in (Titov and Henderson, 2007), where they were demonstrated to achieve competitive results on the constituent parsing task." W07-2218,P07-1080,p,"As discussed in (Titov and Henderson, 2007), computing the conditional probabilities which we need for parsing is in general intractable with ISBNs, but they can be approximated efficiently in several ways." W07-2218,P07-1080,o,"We expect that the mean field approximation should demonstrate better results than feed-forward approximation on this task as it is theoretically expected and confirmed on the constituent parsing task (Titov and Henderson, 2007)." W07-2218,P07-1080,o,"As discussed in (Titov and Henderson, 2007), undirected graphical models do not seem to be suitable for history-based parsing models." W07-2218,P07-1080,o,"The extension of dynamic SBNs with incrementally specified model structure (i.e. Incremental Sigmoid Belief Networks, used in this paper) was proposed and applied to constituent parsing in (Titov and Henderson, 2007)." W07-2218,P07-1080,o,"2 The Latent Variable Architecture In this section we will begin by briefly introducing the class of graphical models we will be using, Incremental Sigmoid Belief Networks (Titov and Henderson, 2007)." W07-2218,P07-1080,p,"They are latent variable models which are not tractable to compute exactly, but two approximations exist which have been shown to be effective for constituent parsing (Titov and Henderson, 2007)." W07-2218,P07-1080,o,"Incremental Sigmoid Belief Networks (Titov and Henderson, 2007) differ from simple dynamic SBNs in that they allow the model structure to depend on the output variable values." W07-2218,P07-1080,o,"2.3 Approximating ISBNs (Titov and Henderson, 2007) proposes two approximations for inference in ISBNs, both based on variational methods."
W07-2218,P07-1080,o,"(Titov and Henderson, 2007) proposes two approximate models based on the variational approach." W07-2218,P07-1080,o,"The second approximation proposed in (Titov and Henderson, 2007) takes into consideration the fact that, after each decision is made, all the preceding latent variables should have their means i updated." W07-2218,P07-1080,p,"For the mean field approximation, propagating the error all the way back through the structure of the graphical model requires a more complicated calculation, but it can still be done efficiently (see (Titov and Henderson, 2007) for details)." W08-2101,P07-1080,p,"3 The Syntactic and Semantic Parser Architecture To achieve the complex task of joint syntactic and semantic parsing, we extend a current state-of-theart statistical parser (Titov and Henderson, 2007) to learn semantic role annotation as well as syntactic structure." W08-2101,P07-1080,o,"Following (Titov and Henderson, 2007), we describe the original parsing architecture and our modifications to it as a Dynamic Bayesian network." W08-2101,P07-1080,o,"For more detail, explanations and experiments see (Titov and Henderson, 2007)." W08-2101,P07-1080,o,"parsing (Titov and Henderson, 2007)." W08-2122,P07-1080,p,"Our probabilistic model is based on Incremental Sigmoid Belief Networks (ISBNs), a recently proposed latent variable model for syntactic structure prediction, which has shown very good behaviour for both constituency (Titov and Henderson, 2007a) and dependency parsing (Titov and Henderson, 2007b)." W08-2122,P07-1080,o,"2.1 Synchronous derivations The derivations for syntactic dependency trees are the same as specified in (Titov and Henderson, 2007b), which are based on the shift-reduce style parser of (Nivre et al., 2006)." W08-2122,P07-1080,o,"P(ctd|C1,,Ct1) =producttextiP(Did|Dbtdd ,,Di1d ,C1,,Ct1) (3) The actions are also sometimes split into a sequence of elementary decisions Di = di1,,din, as discussed in (Titov and Henderson, 2007a)." 
W08-2122,P07-1080,o,"As with many dependency parsers (Nivre et al., 2006; Titov and Henderson, 2007b), we handle non-projective (i.e. crossing) arcs by transforming them into non-crossing arcs with augmented labels.1 Because our syntactic derivations are equivalent to those of (Nivre et al., 2006), we use their HEAD methods to projectivise the syntactic dependencies." W08-2122,P07-1080,o,"3 The Learning Architecture The synchronous derivations described above are modelled with an Incremental Sigmoid Belief Network (ISBN) (Titov and Henderson, 2007a)." W08-2122,P07-1080,o,"We use the neural network approximation (Titov and Henderson, 2007a) to perform inference in our model." W08-2122,P07-1080,o,"P(c_t^d | C_1,...,C_{t-1}) = prod_i P(D_i^d | D_{b_td}^d,...,D_{i-1}^d, C_1,...,C_{t-1}) (3) The actions are also sometimes split into a sequence of elementary decisions D_i = d_i1,...,d_in, as discussed in (Titov and Henderson, 2007a)." W09-0438,P07-1080,o,"Neural networks have been used in NLP in the past, e.g. for machine translation (Asuncion Castano et al., 1997) and constituent parsing (Titov and Henderson, 2007)." P08-1078,P07-1088,o,"Recently, some generic methods were proposed to handle context-sensitive inference (Dagan et al., 2006; Pantel et al., 2007; Downey et al., 2007; Connor and Roth, 2007), but these usually treat only a single aspect of context matching (see Section 6)." P08-1078,P07-1088,o,"(Downey et al., 2007) use HMM-based similarity for the same purpose." P09-1056,P07-1088,o,"REALM uses an HMM trained on a large corpus to help determine whether the arguments of a candidate relation are of the appropriate type (Downey et al., 2007)." D08-1052,P07-1096,o,"Similar to bidirectional labelling in (Shen et al., 2007), there are two learning tasks in this model." D08-1052,P07-1096,o,"The idea of bidirectional parsing is related to the bidirectional sequential classification method described in (Shen et al., 2007)." D08-1052,P07-1096,o,"The learning algorithm for level-0 dependency is similar to the guided learning algorithm for labelling as described in (Shen et al., 2007)."
E09-1087,P07-1096,p,"The state-of-the-art taggers are using feature sets described in the corresponding articles ((Collins, 2002), (Gimenez and Màrquez, 2004), (Toutanova et al., 2003) and (Shen et al., 2007)), Morce supervised and Morce semi-supervised are using feature set described in section 4." E09-1087,P07-1096,n,"The combination is significantly better than (Shen et al., 2007) at a very high level, but more importantly, Shen's results (currently representing the replicable state-of-the-art in POS tagging) have been significantly surpassed also by the semi-supervised Morce (at the 99 % confidence level)." E09-1087,P07-1096,n,"In addition, the semi-supervised Morce performs (on single CPU and development data set) 77 times faster than the combination and 23 times faster than (Shen et al., 2007)." E09-1087,P07-1096,o,"Finally, it would be nice to merge some of the approaches by (Toutanova et al., 2003) and (Shen et al., 2007) with the ideas of semi-supervised learning introduced here, since they seem orthogonal in at least some aspects (e.g., to replace the rudimentary lookahead features with full bidirectionality)." E09-1087,P07-1096,p,"For English, after a relatively big jump achieved by (Collins, 2002), we have seen two significant improvements: (Toutanova et al., 2003) and (Shen et al., 2007) pushed the results by a significant amount each time.1 1In our final comparison, we have also included the results of (Gimenez and Màrquez, 2004), because it has surpassed (Collins, 2002) as well and we have used this tagger in the data preparation phase." E09-1087,P07-1096,p,"As a result of this tuning, our (fully supervised) version of the Morce tagger gives the best accuracy among all single taggers for Czech and also very good results for English, being beaten only by the tagger (Shen et al., 2007) (by 0.10 % absolute) and (not significantly) by (Toutanova et al., 2003)."
E09-1087,P07-1096,o,"3 The data 3.1 The supervised data For English, we use the same data division of Penn Treebank (PTB) parsed section (Marcus et al., 1994) as all of (Collins, 2002), (Toutanova et al., 2003), (Gimenez and Màrquez, 2004) and (Shen et al., 2007) do; for details, see Table 1." E09-1087,P07-1096,o,"In the following sections, we present the best performing set of feature templates as determined on the development data set using only the supervised training setting; our feature templates have thus not been influenced nor extended by the unsupervised data. The full list of tags, as used by (Shen et al., 2007), also makes the underlying Viterbi algorithm unbearably slow." E09-1087,P07-1096,n,"Most recently, (Suzuki and Isozaki, 2008) published their Semi-supervised sequential labelling method, whose results on POS tagging seem to be optically better than (Shen et al., 2007), but no significance tests were given and the tool is not available for download, i.e. for repeating the results and significance testing." E09-1087,P07-1096,p,"For English, we use three state-of-the-art taggers: the taggers of (Toutanova et al., 2003) and (Shen et al., 2007) in Step 1, and the SVM tagger (Gimenez and Màrquez, 2004) in Step 3." P08-1076,P07-1096,o,"test additional resources JESS-CM (CRF/HMM) 97.35 97.40 1G-word unlabeled data (Shen et al., 2007) 97.28 97.33 (Toutanova et al., 2003) 97.15 97.24 crude company name detector [sup." P08-1076,P07-1096,p,"5 Comparison with Previous Top Systems and Related Work In POS tagging, the previous best performance was reported by (Shen et al., 2007) as summarized in Table 7." P08-1076,P07-1096,o,"For our POS tagging experiments, we used the Wall Street Journal in PTB III (Marcus et al., 1994) with the same data split as used in (Shen et al., 2007)."
P08-2009,P07-1096,o,"5 Bidirectional Sequence Classification Bidirectional POS tagging (Shen et al., 2007), the current state of the art for English, has some properties that make it appropriate for Icelandic." P09-1054,P07-1096,o,"Shen et al. (2007) report an accuracy of 97.33% on the same data set using a perceptron-based bidirectional tagging model." W08-2103,P07-1096,o,"Networks (Toutanova et al., 2003) 97.24 SVM (Gimenez and Màrquez, 2003) 97.05 ME based a bidirectional inference (Tsuruoka and Tsujii, 2005) 97.15 Guided learning for bidirectional sequence classification (Shen et al., 2007) 97.33 AdaBoost.SDF with candidate features (=2,=1,=100, W-dist) 97.32 AdaBoost.SDF with candidate features (=2,=10,=10, F-dist) 97.32 SVM with candidate features (C=0.1, d=2) 97.32 Text Chunking F=1 Regularized Winnow + full parser output (Zhang et al., 2001) 94.17 SVM-voting (Kudo and Matsumoto, 2001) 93.91 ASO + unlabeled data (Ando and Zhang, 2005) 94.39 CRF+Reranking (Kudo et al., 2005) 94.12 ME based a bidirectional inference (Tsuruoka and Tsujii, 2005) 93.70 LaSo (Approximate Large Margin Update) (Daume III and Marcu, 2005) 94.4 HySOL (Suzuki et al., 2007) 94.36 AdaBoost.SDF with candidate features (=2,=1,=, W-dist) 94.32 AdaBoost.SDF with candidate features (=2,=10,=10, W-dist) 94.30 SVM with candidate features (C=1, d=2) 94.31 One of the reasons that boosting-based classifiers realize faster classification speed is sparseness of rules." D09-1160,P07-1104,o,"L1-regularized log-linear models (L1-LLMs), on the other hand, provide sparse solutions, in which weights of irrelevant features are exactly zero, by assuming a Laplacian prior on the weights (Tibshirani, 1996; Kazama and Tsujii, 2003; Goodman, 2004; Gao et al., 2007)." E09-1090,P07-1104,o,"The L1 or L2 norm is commonly used in statistical natural language processing (Gao et al., 2007)."
N09-2025,P07-1104,o,"In other words, learning with L1 regularization naturally has an intrinsic effect of feature selection, which results in an efficient and interpretable inference with almost the same performance as L2 regularization (Gao et al., 2007)." P09-1054,P07-1104,p,"There is usually not a considerable difference between the two methods in terms of the accuracy of the resulting model (Gao et al., 2007), but L1 regularization has a significant advantage in practice." W08-0404,P07-1104,o,"3.5 Regularization We apply L1 regularization (Ng, 2004; Gao et al., 2007) to make learning more robust to noise and control the effective dimensionality of the feature space by subtracting a weighted sum of absolute values of parameter weights from the log-likelihood of the training data: w = argmax_w LL(w) - Σ_i C_i|w_i| (6). We optimize the objective using a variant of the orthant-wise limited-memory quasi-Newton algorithm proposed by Andrew & Gao (2007). All values C_i are set to 1 in most of the experiments below, although we apply stronger regularization (C_i = 3) to reordering features." C08-1079,P07-1107,o,"Since Soon (Soon et al., 2001) started the trend of using the machine learning approach by using a binary classifier in a pairwise manner for solving co-reference resolution problem, many machine learning-based systems have been built, using both supervised and unsupervised learning methods (Haghighi and Klein, 2007)." D08-1033,P07-1107,o,"CRP-based samplers have served the community well in related language tasks, such as word segmentation and coreference resolution (Goldwater et al., 2006; Haghighi and Klein, 2007)."
D08-1067,P07-1107,o,"Salience Feature Pronoun Name Nominal TOP 0.75 0.17 0.08 HIGH 0.55 0.28 0.17 MID 0.39 0.40 0.21 LOW 0.20 0.45 0.35 NONE 0.00 0.88 0.12 Table 2: Posterior distribution of mention type given salience (taken from Haghighi and Klein (2007)) 3.3 Modifications to the H&K Model Next, we discuss the potential weaknesses of H&K's model and propose three modifications to it." D08-1067,P07-1107,n,"For comparison purposes, we revisit a fully-generative Bayesian model for unsupervised coreference resolution recently introduced by Haghighi and Klein (2007), discuss its potential weaknesses and consequently propose three modifications to their model (Section 3)." D08-1067,P07-1107,o,"First, the addition of each modification improves the F-score for both true and system mentions. The H&K results shown here are not directly comparable with those reported in Haghighi and Klein (2007), since H&K evaluated their system on the ACE 2004 coreference corpus." D08-1067,P07-1107,n,Experimental results indicate that our model outperforms Haghighi and Klein's (2007) coreference model by a large margin on the ACE data sets and compares favorably to a modified version of their model. D08-1067,P07-1107,n,"For comparison purposes, we revisit Haghighi and Klein's (2007) fully-generative Bayesian model for unsupervised coreference resolution, discuss its potential weaknesses and consequently propose three modifications to their model." D08-1067,P07-1107,o,"3 Haghighi and Klein's Coreference Model To gauge the performance of our model, we compare it with a Bayesian model for unsupervised coreference resolution that was recently proposed by Haghighi and Klein (2007)." D08-1069,P07-1107,o,"More recently, Haghighi and Klein (2007) use the distinction between pronouns, nominals and proper nouns in their unsupervised, generative model for coreference resolution; for their model, this is absolutely critical for achieving better accuracy."
D08-1069,P07-1107,o,This therefore suggests that better parameters are likely to be learned. Haghighi and Klein's (2007) generative coreference model mirrors this in the posterior distribution which it assigns to mention types given their salience (see their Table 1). D09-1120,P07-1107,n,Poon and Domingos (2008) outperformed Haghighi and Klein (2007). D09-1120,P07-1107,o,"While much research (Ng and Cardie, 2002; Culotta et al., 2007; Haghighi and Klein, 2007; Poon and Domingos, 2008; Finkel and Manning, 2008) has explored how to reconcile pairwise decisions to form coherent clusters, we simply take the transitive closure of our pairwise decision (as in Ng and Cardie (2002) and Bengston and Roth (2008)) which can and does cause system errors." E09-1018,P07-1107,o,"The probabilities are ordered according to, at least my, intuition with pronoun being the most likely (0.094), followed by proper nouns (0.057), followed by common nouns (0.032), a fact also noted by (Haghighi and Klein, 2007)." E09-1018,P07-1107,p,"In addition, their system does not classify non-anaphoric pronouns. A third paper that has significantly influenced our work is that of (Haghighi and Klein, 2007)." N09-1019,P07-1107,o,The model of Haghighi and Klein (2007) incorporated a latent variable for named entity class. N09-1019,P07-1107,o,"5 Discussion As stated above, we aim to build an unsupervised generative model for named entity clustering, since such a model could be integrated with unsupervised coreference models like Haghighi and Klein (2007) for joint inference." N09-1019,P07-1107,o,"Named entities also pose another problem with the Haghighi and Klein (2007) coreference model; since it models only the heads of NPs, it will fail to resolve some references to named entities: (Ford Motor Co., Ford), while erroneously merging others: (Ford Motor Co., Lockheed Martin Co.)."
N09-1019,P07-1107,n,"Our system improves over the latent named-entity tagging in Haghighi and Klein (2007), from 61% to 87%." N09-1019,P07-1107,o,"Like Haghighi and Klein (2007), we give our model information about the basic types of pronouns in English." P08-1002,P07-1107,n,"Secondly, while most pronoun resolution evaluations simply exclude non-referential pronouns, recent unsupervised approaches (Cherry and Bergsma, 2005; Haghighi and Klein, 2007) must deal with all pronouns in unrestricted text, and therefore need robust modules to automatically handle non-referential instances." W09-0210,P07-1107,p,"In terms of applying non-parametric Bayesian approaches to NLP, Haghighi and Klein (2007) evaluated the clustering properties of DPMMs by performing anaphora resolution with good results." W09-0210,P07-1107,p,"Recent work has applied Bayesian non-parametric models to anaphora resolution (Haghighi and Klein, 2007), lexical acquisition (Goldwater, 2007) and language modeling (Teh, 2006) with good results." E09-1070,P08-1001,o,Richman and Schone (2008) used a method similar to Nothman et al. N09-1032,P08-1001,o,"5.2.1 Generate English Annotated Corpus from Wikipedia Wikipedia provides a variety of data resources for NER and other NLP research (Richman and Schone, 2008)." D09-1101,P08-1002,o,"(2008)), and distributional methods (e.g., Bergsma et al." D09-1102,P08-1002,o,"More recently, the problem has been tackled using statistics-based (e.g., Bean and Riloff 1999; Bergsma et al 2008) and learning-based (e.g. Evans 2001; Ng and Cardie 2002a; Ng 2004; Yang et al 2005; Denis and Balbridge 2007) methods." D09-1102,P08-1002,o,"2 Related Work Given its potential usefulness in coreference resolution, anaphoricity determination has been studied fairly extensively in the literature and can be classified into three categories: heuristic rule-based (e.g. 
Paice and Husk 1987; Lappin and Leass 1994; Kennedy and Boguraev 1996; Denber 1998; Vieira and Poesio 2000), statistics-based (e.g., Bean and Riloff 1999; Cherry and Bergsma 2005; Bergsma et al 2008) and learning-based (e.g. Evans 2001; Ng and Cardie 2002a; Ng 2004; Yang et al 2005; Denis and Balbridge 2007)." D09-1102,P08-1002,o,"Bergsma et al (2008) proposed a distributional method in detecting non-anaphoric pronouns by first extracting the surrounding textual context of the pronoun, then gathering the distribution of words that occurred within that context from a large corpus and finally learning to classify these distributions as representing either anaphoric or non-anaphoric pronoun instances." N09-1065,P08-1002,o,"(2008)], and distributional methods [e.g., Bergsma et al." D08-1002,P08-1004,o,"Instead of analyzing sentences directly, AUCONTRAIRE relies on the TEXTRUNNER Open Information Extraction system (Banko et al., 2007; Banko and Etzioni, 2008) to map each sentence to one or more tuples that represent the entities in the sentences and the relationships between them (e.g., was born in(Mozart,Salzburg))." D09-1152,P08-1004,o,"Recent research in open information extraction (Banko and Etzioni, 2008; Davidov and Rappoport, 2008) has shown that we can extract large amounts of relational data from open-domain text with high accuracy." E09-1073,P08-1004,o,"1 Introduction Motivation: Sharing basic intuitions and long-term goals with other tasks within the area of Web-based information extraction (Banko and Etzioni, 2008; Davidov and Rappoport, 2008), the task of acquiring class attributes relies on unstructured text available on the Web, as a data source for extracting generally-useful knowledge." P09-1114,P08-1004,o,"Banko and Etzioni (2008) studied open domain relation extraction, for which they manually identified several common relation patterns."
P09-1093,P08-1035,p,"For Japanese, dependency trees are trimmed instead of full parse trees (Takeuchi and Matsumoto, 2001; Oguro et al., 2002; Nomoto, 2008). (Hereafter, we refer to these compression processes as tree trimming.) This parsing approach is reasonable because the compressed output is grammatical if the input is grammatical, but it offers only moderate compression rates." P09-1093,P08-1035,o,"For Japanese sentences, instead of using full parse trees, existing sentence compression methods trim dependency trees by the discriminative model (Takeuchi and Matsumoto, 2001; Nomoto, 2008) through the use of simple linear combined features (Oguro et al., 2002)." D08-1058,P08-1036,o,"2005; Choi et al., 2006; Ku et al., 2006; Titov and McDonald, 2008)." D09-1017,P08-1036,o,"Specifically, aspect rating as an interesting topic has also been widely studied (Titov and McDonald, 2008a; Snyder and Barzilay, 2007; Goldberg and Zhu, 2006)." D09-1017,P08-1036,o,"Titov and McDonald (2008b) proposed a joint model of text and aspect ratings which utilizes a modified LDA topic model to build topics that are representative of ratable aspects, and builds a set of sentiment predictors." D09-1019,P08-1036,o,"Several researchers also studied feature/topic-based sentiment analysis (e.g., Hu and Liu, 2004; Popescu and Etzioni, 2005; Ku et al, 2006; Carenini et al, 2006; Mei et al, 2007; Ding, Liu and Yu, 2008; Titov and R. McDonald, 2008; Stoyanov and Cardie, 2008; Lu and Zhai, 2008)." E09-1059,P08-1036,o,"For example, aspects of a digital camera could include picture quality, battery life, size, color, value, etc. Finding such aspects is a challenging research problem that has been addressed in a number of ways (Hu and Liu, 2004b; Gamon et al., 2005; Carenini et al., 2005; Zhuang et al., 2006; Branavan et al., 2008; Blair-Goldensohn et al., 2008; Titov and McDonald, 2008b; Titov and McDonald, 2008a)."
P09-1027,P08-1036,o,"In recent years, sentiment classification has drawn much attention in the NLP field and it has many useful applications, such as opinion mining and summarization (Liu et al., 2005; Ku et al., 2006; Titov and McDonald, 2008)." P09-2043,P08-1036,o,"Aspect-based sentiment analysis summarizes sentiments with diverse attributes, so that customers may have to look more closely into analyzed sentiments (Titov and McDonald, 2008)." D08-1037,P08-1045,o,"Identifying transliteration pairs is an important component in many linguistic applications which require identifying out-of-vocabulary words, such as machine translation and multilingual information retrieval (Klementiev and Roth, 2006b; Hermjakob et al., 2008)." D09-1024,P08-1045,o,"We finally also include as alignment candidates those word pairs that are transliterations of each other to cover rare proper names (Hermjakob et al., 2008), which is important for language pairs that don't share the same alphabet such as Arabic and English." E09-1050,P08-1045,o,"Identification of Terms To-be Transliterated (TTT) must not be confused with recognition of Named Entities (NE) (Hermjakob et al., 2008)." N09-1005,P08-1045,o,"There are many techniques for transliteration and back-transliteration, and they vary along a number of dimensions: phoneme substitution vs. character substitution; heuristic vs. generative vs. discriminative models; manual vs. automatic knowledge acquisition. We explore the third dimension, where we see several techniques in use: Manually-constructed transliteration models, e.g., (Hermjakob et al., 2008)." N09-1034,P08-1045,o,"Automatic NE transliteration is an important component in many cross-language applications, such as Cross-Lingual Information Retrieval (CLIR) and Machine Translation (MT) (Hermjakob et al., 2008; Klementiev and Roth, 2006a; Meng et al., 2001; Knight and Graehl, 1998)."
C08-1051,P08-1047,o,"Furthermore, recent studies revealed that word clustering is useful for semi-supervised learning in NLP (Miller et al., 2004; Li and McCallum, 2005; Kazama and Torisawa, 2008; Koo et al., 2008)." D08-1056,P08-1052,o,"(2006) and Nakov and Hearst (2008), among others, look at using a large amount of unlabeled data to classify relations between words." D08-1056,P08-1052,o,"(Snow et al., 2006; Nakov & Hearst, 2008)." N09-1059,P08-1052,p,Nakov and Hearst (2008) solved relational similarity problems using the Web as a corpus. N09-1059,P08-1052,o,"(Nakov and Hearst, 2005; Gledson and Keane, 2008))." W09-2415,P08-1052,o,The patterns will be manually constructed following the approach of Hearst (1992) and Nakov and Hearst (2008). The example collection for each relation R will be passed to two independent annotators. W09-2415,P08-1052,o,"They propose a two-level hierarchy, with 5 classes at the first level and 30 classes at the second one; other researchers (Kim and Baldwin, 2005; Nakov and Hearst, 2008; Nastase et al., 2006; Turney, 2005; Turney and Littman, 2005) have used their class scheme and data set." W09-2415,P08-1052,o,"As a first step, SemEval-2007 Task 4 offered many useful insights into the performance of different approaches to semantic relation classification; it has also motivated follow-up research (Davidov and Rappoport, 2008; Katrenko and Adriaans, 2008; Nakov and Hearst, 2008; Ó Séaghdha and Copestake, 2008)." W09-2416,P08-1052,o,"Pearson's correlation coefficient is a standard measure of the correlation strength between two distributions; it can be calculated as follows: ρ = (E(XY) - E(X)E(Y)) / (√(E(X^2) - [E(X)]^2) √(E(Y^2) - [E(Y)]^2)) (1), where X = (x1, ..., xn) and Y = (y1, ..., yn) are vectors of numerical scores for each paraphrase provided by the humans and the competing systems, respectively, n is the number of paraphrases to score, and E(X) is the expectation of X.
Cosine correlation coefficient is another popular alternative and was used by Nakov and Hearst (2008); it can be seen as an uncentered version of Pearson's correlation coefficient: ρ = X·Y / (‖X‖ ‖Y‖) (2). Spearman's rank correlation coefficient is suitable for comparing rankings of sets of items; it is a special case of Pearson's correlation, derived by considering rank indices (1, 2, ...) as item scores. It is defined as follows: ρ = (n Σ x_i y_i - (Σ x_i)(Σ y_i)) / (√(n Σ x_i^2 - (Σ x_i)^2) √(n Σ y_i^2 - (Σ y_i)^2)) (3). One problem with using Spearman's rank coefficient for the current task is the assumption that swapping any two ranks has the same effect." W09-2416,P08-1052,o,"The SemEval-2010 task we present here builds on the work of Nakov (Nakov and Hearst, 2006; Nakov, 2007; Nakov, 2008b), where NCs are paraphrased by combinations of verbs and prepositions." W09-2416,P08-1052,o,"Paraphrases of this kind have been shown to be useful in applications such as machine translation (Nakov, 2008a) and as an intermediate step in inventory-based classification of abstract relations (Kim and Baldwin, 2006; Nakov and Hearst, 2008)." P09-1062,P08-1054,o,"In practice, we used MMR in our experiments, since the original MEAD considers also sentence positions, which can always be added later as in (Penn and Zhu, 2008)." P09-1062,P08-1054,o,"The usefulness of position varies significantly in different genres (Penn and Zhu, 2008)." P09-1062,P08-1054,o,"This obviously does not preclude using the audio-based system together with other features such as utterance position, length, speakers' roles, and most others used in the literature (Penn and Zhu, 2008)." P09-1062,P08-1054,p,"These models have achieved state-of-the-art performance in transcript-based speech summarization (Zechner, 2001; Penn and Zhu, 2008)."
P09-1062,P08-1054,o,"Audio data amenable to summarization include meeting recordings (Murray et al., 2005), telephone conversations (Zhu and Penn, 2006; Zechner, 2001), news broadcasts (Maskey and Hirschberg, 2005; Christensen et al., 2004), presentations (He et al., 2000; Zhang et al., 2007; Penn and Zhu, 2008), etc. Although extractive summarization is not as ideal as abstractive summarization, it outperforms several comparable alternatives." P09-1062,P08-1054,o,"The usefulness of prosody was found to be very limited by itself, if the effect of utterance length is not considered (Penn and Zhu, 2008)." D08-1024,P08-1058,o,"Both were 5-gram models with modified Kneser-Ney smoothing, lossily compressed using a perfect-hashing scheme similar to that of Talbot and Brants (2008) but using minimal perfect hashing (Botelho et al., 2005)." D09-1079,P08-1058,o,"This fact, along with the observation that machine translation quality improves as the amount of monolingual training material increases, has led to the introduction of randomised techniques for representing large LMs in small space (Talbot and Osborne, 2007; Talbot and Brants, 2008)." D09-1079,P08-1058,o,We set our space usage to match the 3.08 bytes per n-gram reported in Talbot and Brants (2008) and held out just over 1M unseen n-grams to test the error rates of our models. D09-1079,P08-1058,o,It is a variant of the batch-based Bloomier filter LM of Talbot and Brants (2008) which we refer to as the TB-LM henceforth. D09-1079,P08-1058,o,"Any encoding scheme, such as the packed representation of Talbot and Brants (2008), is viable here."
D09-1079,P08-1058,o,"As with other randomised models we construct queries with the appropriate sanity checks to lower the error rate efficiently (Talbot and Brants, 2008)." D09-1079,P08-1058,o,Talbot and Brants (2008) used a Bloomier filter to encode a LM. D09-1079,P08-1058,o,"The Bloomier filter LM (Talbot and Brants, 2008) has a precomputed matching of keys shared between a constant number of cells in the filter array." N09-1058,P08-1058,o,"There has also been prior work on maintaining approximate counts for higher-order language models (LMs) ((Talbot and Osborne, 2007a; Talbot and Osborne, 2007b; Talbot and Brants, 2008)), which operates under the model that the goal is to store a compressed representation of a disk-resident table of counts and use this compressed representation to answer count queries approximately." N09-1058,P08-1058,o,"(Talbot and Brants, 2008) presented a randomized language model based on perfect hashing combined with entropy pruning to achieve further memory reductions." N09-1058,P08-1058,o,"A problem mentioned in (Talbot and Brants, 2008) is that the algorithm that computes the compressed representation might need to retain the entire database in memory; in their paper, they design strategies to work around this problem." P09-2086,P08-1058,o,"Either pruning (Stolcke, 1998; Church et al., 2007) or lossy randomizing approaches (Talbot and Brants, 2008) may result in a compact representation for the application run-time." P09-2086,P08-1058,o,"By using 8-bit floating point quantization, N-gram language models are compressed into 10 GB, which is comparable to a lossy representation (Talbot and Brants, 2008)." W09-1505,P08-1058,o,"Talbot and Brants (2008) show that Bloomier filters (Chazelle et al., 2004) can be used to create perfect hash functions for language models."
N09-1035,P08-1065,o,"Although some work has been done on syllabifying orthographic forms (Muller et al., 2000; Bouma, 2002; Marchand and Damper, 2007; Bartlett et al., 2008), syllables are, technically speaking, phonological entities that can only be composed of strings of phonemes." P09-1014,P08-1065,o,"Stress is an attribute of syllables, but syllabification is a non-trivial task in itself (Bartlett et al., 2008)." W09-0106,P08-1065,o,"(Jiampojamarn et al., 2008) and (Bartlett et al., 2008) do worse on the English test data than they do on German, Dutch, or French." D09-1008,P08-1066,o,"Thus, we can compute the source dependency LM score in the same way we compute the target side score, using a procedure described in (Shen et al., 2008)." D09-1008,P08-1066,o,"Due to the lack of a good Arabic parser compatible with the Sakhr tokenization that we used on the source side, we did not test the source dependency LM for Arabic-to-English MT. When extracting rules with source dependency structures, we applied the same well-formedness constraint on the source side as we did on the target side, using a procedure described by (Shen et al., 2008)." D09-1008,P08-1066,o,"In (Post and Gildea, 2008; Shen et al., 2008), target trees were employed to improve the scoring of translation theories." D09-1008,P08-1066,o,"1.2.2 Baseline System and Experimental Setup We take BBN's HierDec, a string-to-dependency decoder as described in (Shen et al., 2008), as our baseline for the following two reasons: It provides a strong baseline, which ensures the validity of the improvement we would obtain." D09-1008,P08-1066,o,"2 Linguistic and Context Features 2.1 Non-terminal Labels In the original string-to-dependency model (Shen et al., 2008), a translation rule is composed of a string of words and non-terminals on the source side and a well-formed dependency structure on the target side."
D09-1021,P08-1066,o,"Early examples of this work include (Alshawi, 1996; Wu, 1997); more recent models include (Yamada and Knight, 2001; Eisner, 2003; Melamed, 2004; Zhang and Gildea, 2005; Chiang, 2005; Quirk et al., 2005; Marcu et al., 2006; Zollmann and Venugopal, 2006; Nesson et al., 2006; Cherry, 2008; Mi et al., 2008; Shen et al., 2008)." D09-1021,P08-1066,o,"Other factors that distinguish us from previous work are the use of all phrases proposed by a phrase-based system, and the use of a dependency language model that also incorporates constituent information (although see (Charniak et al., 2003; Shen et al., 2008) for related approaches)." D09-1023,P08-1066,p,"There is also substantial work in the use of target-side syntax (Galley et al., 2006; Marcu et al., 2006; Shen et al., 2008)." D09-1023,P08-1066,o,"Features that consider only target-side syntax and words without considering s can be seen as syntactic language model features (Shen et al., 2008)." D09-1073,P08-1066,p,"1 Introduction Phrase-based method (Koehn et al., 2003; Och and Ney, 2004; Koehn et al., 2007) and syntax-based method (Wu, 1997; Yamada and Knight, 2001; Eisner, 2003; Chiang, 2005; Cowan et al., 2006; Marcu et al., 2006; Liu et al., 2007; Zhang et al., 2007c, 2008a, 2008b; Shen et al., 2008; Mi and Huang, 2008) represent the state-of-the-art technologies in statistical machine translation (SMT)." D09-1106,P08-1066,o,"Word-aligned corpora have been found to be an excellent source for translation-related knowledge, not only for phrase-based models (Och and Ney, 2004; Koehn et al., 2003), but also for syntax-based models (e.g., (Chiang, 2007; Galley et al., 2006; Shen et al., 2008; Liu et al., 2006))." D09-1123,P08-1066,o,"Recently, (Shen et al., 2008) introduced an approach for incorporating a dependency-based language model into SMT."
D09-1123,P08-1066,o,"Firstly, (Shen et al., 2008) resorted to heuristics to extract the String-to-Dependency trees, whereas our approach employs the well formalized CCG grammatical theory." D09-1123,P08-1066,n,"Thirdly, (Shen et al., 2008) deploys the dependency language model to augment the lexical language model probability between two head words but never seek a full dependency graph." E09-1044,P08-1066,n,"This is in direct contrast to recent reported results in which other filtering strategies lead to degraded performance (Shen et al., 2008; Zollmann et al., 2008)." N09-1049,P08-1066,o,"Extensions to Hiero Several authors describe extensions to Hiero, to incorporate additional syntactic information (Zollmann and Venugopal, 2006; Zhang and Gildea, 2006; Shen et al., 2008; Marton and Resnik, 2008), or to combine it with discriminative latent models (Blunsom et al., 2008)." P09-1042,P08-1066,o,"Dependency representation has been used for language modeling, textual entailment and machine translation (Haghighi et al., 2005; Chelba et al., 1997; Quirk et al., 2005; Shen et al., 2008), to name a few tasks." P09-1063,P08-1066,o,"They can be roughly divided into three categories: string-to-tree models (e.g., (Galley et al., 2006; Marcu et al., 2006; Shen et al., 2008)), tree-to-string models (e.g., (Liu et al., 2006; Huang et al., 2006)), and tree-to-tree models (e.g., (Eisner, 2003; Ding and Palmer, 2005; Cowan et al., 2006; Zhang et al., 2008))." P09-1065,P08-1066,o,"On the contrary, a string-to-tree decoder (e.g., (Galley et al., 2006; Shen et al., 2008)) is a parser that applies string-to-tree rules to obtain a target parse for the source string." P09-1087,P08-1066,n,"This provides a compelling advantage over previous dependency language models for MT (Shen et al., 2008), which use a 5-gram LM only during reranking."
P09-1087,P08-1066,o,"Dependency models have recently gained considerable interest in many NLP applications, including machine translation (Ding and Palmer, 2005; Quirk et al., 2005; Shen et al., 2008)." P09-1087,P08-1066,p,"1 Introduction Hierarchical approaches to machine translation have proven increasingly successful in recent years (Chiang, 2005; Marcu et al., 2006; Shen et al., 2008), and often outperform phrase-based systems (Och and Ney, 2004; Koehn et al., 2003) on target-language fluency and adequacy." W09-2423,P08-1066,o,"Finally, we are investigating several avenues for using this system output for Machine Translation (MT) including: (1) aiding word alignment for other MT systems (Wang et al., 2007); and (2) aiding the creation of various MT models involving analyzed text, e.g., (Gildea, 2004; Shen et al., 2008)." E09-1021,P08-1079,o,"In the concept extension part of our algorithm we adapt our concept acquisition framework (Davidov and Rappoport, 2006; Davidov et al., 2007; Davidov and Rappoport, 2008a; Davidov and Rappoport, 2008b) to suit diverse languages, including ones without explicit word segmentation." W09-0805,P08-1079,o,"While in this paper we evaluated our framework on the discovery of concepts, we have recently proposed fully unsupervised frameworks for the discovery of different relationship types (Davidov et al., 2007; Davidov and Rappoport, 2008a; Davidov and Rappoport, 2008b)." W09-1111,P08-1079,o,"In computational linguistics, our pattern discovery procedure extends over previous approaches that use surface patterns as indicators of semantic relations between nouns or verbs ((Hearst, 1998; Chklovski and Pantel, 2004; Etzioni et al., 2004; Turney, 2006; Davidov and Rappoport, 2008) inter alia)." W09-1111,P08-1079,o,"This approach is similar to that of seed words (e.g., (Hearst, 1998)) or hook words (e.g., (Davidov and Rappoport, 2008)) in previous work."
D09-1054,P08-1081,o,"Note that apart from previous work (Ding et al., 2008) we use complete skip-chain (context-answer) edges in hc(x,y)." D09-1054,P08-1081,o,"We made use of the same data set as introduced in (Cong et al., 2008; Ding et al., 2008)." D09-1054,P08-1081,o,"The suffixes C* and V* denote the models using incomplete skip-chain edges and vertical sequential edges proposed in (Ding et al., 2008), as shown in Figures 2(a) and 2(c)." D09-1054,P08-1081,n,"In comparison, the 2D model in Figure 2(c) used in previous work (Ding et al., 2008) can only model the interaction between adjacent questions." D09-1054,P08-1081,n,"Our graphical representation has two advantages over previous work (Ding et al., 2008): unifying sentence relations and incorporating question interactions." D09-1054,P08-1081,n,"Previous work (Ding et al., 2008) performs the extraction of contexts and answers in multiple passes of the thread (with each pass corresponding to one question), which cannot address the interactions well." D09-1054,P08-1081,o,"We design special inference algorithms, instead of general-purpose inference algorithms used in previous works (Cong et al., 2008; Ding et al., 2008), by taking advantage of special properties of our task." D09-1054,P08-1081,o,"1 Introduction Recently, extracting questions, contexts and answers from post discussions of online forums incurs increasing academic attention (Cong et al., 2008; Ding et al., 2008)." C08-1026,P08-1085,o,"Even for many unsupervised situations, this is available from a lexicon (e.g., Banko and Moore, 2004; Goldberg et al., 2008)." C08-1026,P08-1085,o,"Thus, an orthogonal line of research can involve inducing classes for words which are more general than single categories, i.e., something akin to ambiguity classes (see, e.g., the discussion of ambiguity class guessers in Goldberg et al., 2008)."
C08-1026,P08-1085,p,"4.1 Complete ambiguity classes Ambiguity classes capture the relevant property we are interested in: words with the same category possibilities are grouped together. And ambiguity classes have been shown to be successfully employed, in a variety of ways, to improve POS tagging (e.g., Cutting et al., 1992; Daelemans et al., 1996; Dickinson, 2007; Goldberg et al., 2008; Tseng et al., 2005)." E09-1038,P08-1085,o,"Traditionally, such unsupervised EM-trained HMM taggers are thought to be inaccurate, but (Goldberg et al., 2008) showed that by feeding the EM process with sufficiently good initial probabilities, accurate taggers (> 91% accuracy) can be learned for both English and Hebrew, based on a (possibly incomplete) lexicon and a large amount of raw text." P09-1057,P08-1085,o,"6 Smaller Tagset and Incomplete Dictionaries Previously, researchers working on this task have also reported results for unsupervised tagging with a smaller tagset (Smith and Eisner, 2005; Goldwater and Griffiths, 2007; Toutanova and Johnson, 2008; Goldberg et al., 2008)." P09-1057,P08-1085,o,"The table in Figure 9 shows a comparison of different systems for which tagging accuracies have been reported previously for the 17-tagset case (Goldberg et al., 2008)." P09-1057,P08-1085,o,"Some previous approaches (Toutanova and Johnson, 2008; Goldberg et al., 2008) handle unknown words explicitly using ambiguity class components conditioned on various morphological features, and this has been shown to produce good tagging results, especially when dealing with incomplete dictionaries." P09-1057,P08-1085,o,"EM-HMM tagger provided with good initial conditions (Goldberg et al., 2008) 91.4* (*uses linguistic constraints and manual adjustments to the dictionary) Figure 1: Previous results on unsupervised POS tagging using a dictionary (Merialdo, 1994) on the full 45-tag set."
W09-0905,P08-1085,o,"Due to its popularity for unsupervised POS induction research (e.g., Goldberg et al., 2008; Goldwater and Griffiths, 2007; Toutanova and Johnson, 2008) and its often-used tagset, for our initial research, we use the Wall Street Journal (WSJ) portion of the Penn Treebank (Marcus et al., 1993), with 36 tags (plus 9 punctuation tags), and we use sections 00-18, leaving held-out data for future experiments.4 Defining frequent frames as those occurring at 4Even if we wanted child-directed speech, the CHILDES database (MacWhinney, 2000) uses coarse POS tags." N09-1031,P08-1092,o,"Regression has also been used to order sentences in extractive summarization (Biadsy et al., 2008)." P09-1024,P08-1092,o,"These domains have been commonly used in prior work on summarization (Weischedel et al., 2004; Zhou et al., 2004; Filatova and Prager, 2005; Demner-Fushman and Lin, 2007; Biadsy et al., 2008)." P09-1024,P08-1092,o,"Instead, we follow a simplified form of previous work on biography creation, where a classifier is trained to distinguish biographical text (Zhou et al., 2004; Biadsy et al., 2008)." P09-1024,P08-1092,o,"For instance, some approaches coarsely discriminate between biographical and non-biographical information (Zhou et al., 2004; Biadsy et al., 2008), while others go beyond binary distinction by identifying atomic events e.g., occupation and marital status that are typically included in a biography (Weischedel et al., 2004; Filatova and Prager, 2005; Filatova et al., 2006)." P09-1024,P08-1092,o,"Instances of this work include information extraction, ontology induction and resource acquisition (Wu and Weld, 2007; Biadsy et al., 2008; Nastase, 2008; Nastase and Strube, 2008)." N09-1066,P08-1093,o,Citation texts have also been used to create summaries of single scientific articles in Qazvinian and Radev (2008) and Mei and Zhai (2008). P09-1023,P08-1093,o,"By analyzing rhetorical discourse structure of aim, background, solution, etc.
or citation context, we can obtain appropriate abstracts and the most influential contents from scientific articles (Teufel and Moens, 2002; Mei and Zhai, 2008)." D08-1079,P08-1094,o,Nenkova and Louis (2008) investigate how summary length and the characteristics of the input influence the summary quality in multi-document summarization. E09-1062,P08-1094,o,"For the first set of experiments, we divide all inputs based on the mean value of the average system scores as in (Nenkova and Louis, 2008)." E09-1062,P08-1094,o,"Only recently the issue has drawn attention: (Nenkova and Louis, 2008) present an initial analysis of the factors that influence system performance in content selection." E09-1062,P08-1094,o,"4 Features For our experiments we use the features proposed, motivated and described in detail by (Nenkova and Louis, 2008)." P09-1023,P08-1094,o,"4.2.2 Correlation between TREC nuggets and non-text features Analyzing the features used could let us understand summarization better (Nenkova and Louis, 2008)." E09-1031,P08-1101,o,"Finally, Zhang and Clark (2008) achieve an SF of 95.90% and a TF of 91.34% by 10-fold cross validation using CTB data." P09-1058,P08-1101,o,0.9595 0.9590 0.9611 0.9085 0.9134 0.9152 Table 8: Comparison of F1 results of our baseline model with Nakagawa and Uchimoto (2007) and Zhang and Clark (2008) on CTB 3.0. P09-1058,P08-1101,o,Zhang and Clark (2008) (Z&C08) generated CTB 3.0 from CTB 4.0. P09-1058,P08-1101,o,Zhang and Clark (2008) indicated that their results cannot directly compare to the results of Shi and Wang (2007) due to different experimental settings. P09-1058,P08-1101,o,(2008a; 2008b) on CTB 5.0 and Zhang and Clark (2008) on CTB 4.0 since they reported the best performances on joint word segmentation and POS tagging using the training materials only derived from the corpora. P09-1058,P08-1101,o,"Following Zhang and Clark (2008), we first generated CTB 3.0 from CTB 4.0 using sentence IDs 110364."
P09-1058,P08-1101,o,Table 8 compares the F1 results of our baseline model with Nakagawa and Uchimoto (2007) and Zhang and Clark (2008) on CTB 3.0. P09-1058,P08-1101,o,"For example, a perceptron algorithm is used for joint Chinese word segmentation and POS tagging (Zhang and Clark, 2008; Jiang et al., 2008a; Jiang et al., 2008b)." P09-1058,P08-1101,p,"Word segmentation and POS tagging in a joint process have received much attention in recent research and have shown improvements over a pipelined fashion (Ng and Low, 2004; Nakagawa and Uchimoto, 2007; Zhang and Clark, 2008; Jiang et al., 2008a; Jiang et al., 2008b)." P09-1058,P08-1102,o,"In this paper, we used CTB 5.0 (LDC2005T01) as our main corpus, defined the training, development and test sets according to (Jiang et al., 2008a; Jiang et al., 2008b), and designed our experiments to explore the impact of the training corpus size on our approach." P09-1058,P08-1102,o,"(Jiang et al., 2008a; Jiang et al., 2008b)." P09-1058,P08-1102,o,"For example, a perceptron algorithm is used for joint Chinese word segmentation and POS tagging (Zhang and Clark, 2008; Jiang et al., 2008a; Jiang et al., 2008b)." P09-1058,P08-1102,p,"Word segmentation and POS tagging in a joint process have received much attention in recent research and have shown improvements over a pipelined fashion (Ng and Low, 2004; Nakagawa and Uchimoto, 2007; Zhang and Clark, 2008; Jiang et al., 2008a; Jiang et al., 2008b)." P09-1059,P08-1102,n,"In addition, the performance of the adapted model for Joint S&T obviously surpass that of (Jiang et al., 2008), which achieves an F1 of 93.41% for Joint S&T, although with more complicated models and features." P09-1059,P08-1102,o,"following our previous work (Jiang et al., 2008)." 
P09-1059,P08-1102,p,"It is an online training algorithm and has been successfully used in many NLP tasks, such as POS tagging (Collins, 2002), parsing (Collins and Roark, 2004), Chinese word segmentation (Zhang and Clark, 2007; Jiang et al., 2008), and so on." W09-1120,P08-1117,o,"Many NLP systems use the output of supervised parsers (e.g., (Kwok et al., 2001) for QA, (Moldovan et al., 2003) for IE, (Punyakanok et al., 2008) for SRL, (Srikumar et al., 2008) for Textual Inference and (Avramidis and Koehn, 2008) for MT)." D08-1002,P08-1118,o,"1 Introduction and Motivation Detecting contradictory statements is an important and challenging NLP task with a wide range of potential applications including analysis of political discourse, of scientific literature, and more (de Marneffe et al., 2008; Condoravdi et al., 2003; Harabagiu et al., 2006)." D08-1103,P08-1118,o,"Automatically determining the degree of antonymy between words has many uses including detecting and generating paraphrases (The dementors caught Sirius Black / Black could not escape the dementors) and detecting contradictions (Marneffe et al., 2008; Voorhees, 2008) (Kyoto has a predominantly wet climate / It is mostly dry in Kyoto)." D09-1082,P08-1118,o,"Some other researchers also work on detecting negative cases, i.e. contradiction, instead of entailment (de Marneffe et al., 2008)." E09-1025,P08-1118,o,"This can be the base of a principled method for detecting structural contradictions (de Marneffe et al., 2008)." A88-1026,P85-1008,o,"By associating natural language with concepts as they are entered into a knowledge A Model Of Semantic Analysis All of the following discussion is based on a model of semantic analysis similar to that proposed in (Hobbs, 1985)." A88-1032,P85-1008,o,"Hobbs, Jerry (1985) ""Ontological Promiscuity"", Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, Illinois, pp." 
A88-1034,P85-1008,o,"Stage 2 processing is then free to assign to the compound any bracketing for which it 3The design of this level of Lucy is influenced by Hobbs (1985), which advocates a level of ""surfacy"" logical form with predicates close to actual English words and a structure similar to the syntactic structure of the sentence." C90-2008,P85-1008,o,"(1) a) ∃ x e' ∃ y ?read(e' x y) & book(y) b) ∃ x ∃ e e' y past(e) & enjoy(e x e') & ?read(e' x y) & book(y) c) ∃ e e' y past(e) & enjoy(e j e') & ?read(e' j y) & book(y) We follow Hobbs (1985), Alshawi et al." C90-2008,P85-1008,o,"Thus, we are led to an 'ontologically promiscuous' semantics (Hobbs, 1985)." H86-1013,P85-1008,o,"Independently, in AI an effort arose to encode large amounts of commonsense knowledge (Hayes, 1979; Hobbs and Moore, 1985; Hobbs et al. 1985)." H86-1013,P85-1008,o,"We can stipulate the time line to be linearly ordered (although it is not in approaches that build ignorance of relative times into the representation of time (e.g. , Hobbs, 1974) nor in approaches using branching futures (e.g. , McDermott, 1985)), and we can stipulate it to be dense (although it is not in the situation calculus)." H86-1013,P85-1008,o,"We are encoding the knowledge as axioms in what is for the most part first-order logic, described in Hobbs (1985a), although quantification over predicates is sometimes convenient." H86-1013,P85-1008,o,"Since so many concepts used in discourse are grain-dependent, a theory of granularity is also fundamental (see Hobbs 1985b)." J03-4002,P85-1008,o,"Essentially, we follow Hobbs (1985) in using a rich ontology and a representation scheme that makes explicit all the individuals and abstract objects (i.e. , propositions, facts/beliefs, and eventualities) (Asher 1993) involved in the LF interpretation of an utterance."
J87-3004,P85-1008,o,"We can stipulate the time line to be linearly ordered (although it is not in approaches that build ignorance of relative times into the representation of time (e.g. , Hobbs, 1974) nor in approaches employing branching futures (e.g. , McDermott, 1985)), and we can stipulate it to be dense (although it is not in the situation calculus)." J87-3004,P85-1008,o,"Independently, in artificial intelligence an effort arose to encode large amounts of commonsense knowledge (Hayes, 1979; Hobbs and Moore, 1985; Hobbs et al. 1985)." J87-3004,P85-1008,o,"We are encoding the knowledge as axioms in what is for the most part a first-order logic, described by Hobbs (1985a), although quantification over predicates is sometimes convenient." J87-3004,P85-1008,o,"Since so many concepts used in discourse are grain-dependent, a theory of granularity is also fundamental (see Hobbs 1985b)." J98-4001,P85-1008,o,"The separation of these two requirements 7 A more precise account of what it means to be able to identify an object is beyond the scope of this paper; for further details, see the discussions by Hobbs (1985), Appelt (1985), Kronfeld (1986, 1990), and Morgenstern (1988)." M93-1013,P85-1008,o,"The proxy slot denotes a semantic individual which serves the role of an event instance in a partially Davidsonian scheme, as in (Hobbs 1985) or (Bayer and Vilain 1991)." P86-1035,P85-1008,o,"Independently, in AI an effort arose to encode large amounts of commonsense knowledge (Hayes, 1979; Hobbs and Moore, 1985; Hobbs et al. 1985)." P86-1035,P85-1008,o,"We can stipulate the time line to be linearly ordered (although it is not in approaches that build ignorance of relative times into the representation of time (e.g. , Hobbs, 1974) nor in approaches using branching futures (e.g. , McDermott, 1985)), and we can stipulate it to be dense (although it is not in the situation calculus)."
P86-1035,P85-1008,o,"Since so many concepts used in discourse are grain-dependent, a theory of granularity is also fundamental (see Hobbs 1985b)." P88-1012,P85-1008,o,"A subst(req, cons(c, argo)) st ^ rel(c, z) s2 ~(i,k,=,;~z\[p~(:) ^ ~(~)\]) (Vi,j,w)n(i,j,w) D (3z)cn(i,j,z,w) (Vi,j, k, w, z, c, rel)prep(i, j, w) ^ np(j, k, x) A rel(c, z) In 3 ptXi, k,,~z\[w(c, z)\], , Req(w)) For example, the first axiom says that there is a sentence from point i to point k asserting eventuality e if there is a noun phrase from i to j referring to z and a verb phrase from j to k denoting predicate p with arguments args and having an associated requirement req, and there is (or, for $3, can be assumed to be) an eventuality e of p's being true of , where c is related to or coercible from x (with an assumability cost of $20), and the requirement req associated with p can be proved or, for $10, assumed to hold of the arguments of p. The symbol c&el denotes the conjunction of eventualities e and el (See Hobbs (1985b), p. 35)." P88-1012,P85-1008,o,5See Hobbs (1985a) for explanation of this notation for events. P88-1012,P85-1008,o,"4For justification for this kind of logical form for sentences with quantifiers and intensional operators, see Hobbs(1983) and Hobbs (1985a)." P96-1027,P85-1008,o,"They have made semantic formalisms like those now usually associated with Davidson (Davidson, 1980, Parsons, 1990) attractive in artificial intelligence for many years (Hobbs 1985, Kay, 1970)." P97-1026,P85-1008,o,"First, we adopt an ONTOLOGICALLY PROMISCUOUS representation (Hobbs, 1985) that includes a wide variety of types of entities." W03-0906,P85-1008,p,"We do not completely rule out the possibility that some more sophisticated, ontologically promiscuous, first-order analysis (perhaps along the lines of (Hobbs, 1985)) might account for these kinds of monotonicity inferences."
W03-0906,P85-1008,p,"More sophisticated first-order accounts (Hirst, 1991; Hobbs, 1985) may be extendable to bear this load." W03-2806,P85-1008,o,"The MLFs use reification to achieve flat expressions, very much in the line of Davidson (1967), Hobbs (1985), and Copestake et al." W04-2803,P85-1008,o,"Note that the predicate language representation utilized by Carmel-Tools is in the style of Davidsonian event based semantics (Hobbs, 1985)." W04-2803,P85-1008,o,"After the parser produces a semantic feature structure representation of the sentence, predicate mapping rules then match against that representation in order to produce a predicate language representation in the style of Davidsonian event based semantics (Davidson, 1967; Hobbs, 1985), as mentioned above." W05-1609,P85-1008,o,"We adopt their idea of an utterance as a description, generated from a communicative goal, and also use an ontologically promiscuous formalism for representing meaning [Hobbs, 1985]." W05-1609,P85-1008,o,"2 Background 2.1 Hybrid Logic Dependency Semantics Hybrid Logic Dependency Semantics (HLDS; [Kruijff, 2001; Baldridge and Kruijff, 2002]) is an ontologically promiscuous [Hobbs, 1985] framework for representing the propositional content (or meaning) of an expression as an ontologically richly sorted, relational structure." W07-1430,P85-1008,o,"Nevertheless, as (Hobbs, 1985) and others have argued, semantic representations for natural language need not be higher-order in that ontological promiscuity can solve the problem." W07-1430,P85-1008,o,"Moreover, as stated in (Hobbs, 1985), we assume that the alleged predicate is existentially opaque in its second argument." W07-1431,P85-1008,o,"Doing inference with representations close to natural language has also been advocated by Jerry Hobbs, as in (Hobbs, 1985)." 
W96-0410,P85-1008,o,"Second, in keeping with ontological promiscuity (Hobbs, 1985), we represent the importance of attributes by the salience of events and states in the discourse model--these states and events now have the same status in the discourse model as any other entities." W96-0410,P85-1008,o,"First, as originally advocated by Hobbs (1985), we adopt an ONTOLOGICALLY PROMISCUOUS representation that includes a wide variety of types of entities." C88-1026,P86-1010,n,"Previous literature on GB parsing /Wehrli, 1984; Sharp, 1985; Kashket, 1986; Kuhns, 1986; Abney, 1986/has not addressed the issue of implementation of the Binding theory) The present paper intends in part to fill this gap." J90-4003,P86-1010,n,"Formal complexity analysis has not been carried out, but my algorithm is simpler, at least conceptually, than the variable-word-order parsers of Johnson (1985), Kashket (1986), and Abramson and Dahl (1989)." P87-1007,P86-1010,n,"Although the parser is not yet complete, we expect that its breadth of coverage of the language will be substantially larger than that of other Government-binding parsers recently reported in the literature (Kashket (1986), Kuhns (1986), Sharp (1985), and Wehrli (1984))." P93-1015,P86-1010,o,"There are similarities with dependency grammars here because such constraint graphs are also produced by dependency grammars (Covington, 1990) (Kashket, 1986)." C90-2071,P88-1012,o,"The construction is defined in Fillmore's (1988) Construction Grammar as ""a pairing of a syntactic pattern with a meaning structure""; they are similar to signs in HPSG (Pollard & Sag 1987) and pattern-concept pairs (Wilensky & Arens 1980; Wilensky et al. 1988)." C90-3028,P88-1012,o,"(See also Kaplan et al. , 1988, on the latter point)." C90-3028,P88-1012,o,"It has been implemented in the TACITUS System (Hobbs et al. , 1988, 1990; Stickel, 1989) and has been applied to several varieties of text."
C90-3028,P88-1012,p,"Recently, an elegant approach to inference in discourse interpretation has been developed at a number of sites (e.g. , Hobbs et al. , 1988; Charniak and Goldman, 1988; Norvig, 1987), all based on the notion of abduction, and we have begun to explore its potential application to machine translation." C90-3040,P88-1012,o,"Probability Based Commensurability Charniak and Goldman (1988) started out with a model very similar to Hobbs et al. , but became concerned with the lack of theoretical grounding for the numbers in rules, much as we were." C92-2108,P88-1012,o,"We suggest two ways to do it: a version of Hobbs et al's \[1988, 1990\] Generation as Abduction; and the Interactive Defaults strategy introduced by Joshi et al \[1984a, 1984b, 1986\]." C92-2108,P88-1012,o,"Modulo more minor differences, these notions are close to the ideas of interpretation as abduction (Hobbs et al \[1988\]) and generation as abduction (Hobbs et al \[1990:26-28\]), where we take abduction, in the former case for instance, to be a process returning a temporal-causal structure which can explain the utterance in context." H90-1012,P88-1012,o,"Ordinary Prolog-style, backchaining deduction is augmented with the capability of making assumptions and of factoring two goal literals that are unifiable (see Hobbs et al. , 1988)." J90-2003,P88-1012,o,We borrow this useful term from the Core Language Engine project (Alshawi et al. 1988; 1989). J90-2003,P88-1012,o,"(1972); later elaborations and refinements have been implemented in a number of systems, notably CHAT-80 (Pereira 1983), TEAM (Grosz et al. 1986), and CLE (Moran 1988; Alshawi et al. 1989)." J90-2003,P88-1012,o,"1.2.2 SPECIFIC SYNTACTIC AND SEMANTIC ASSUMPTIONS The basic scheme, or some not too distant relative, is the one used in many large-scale implemented systems; as examples, we can quote TEAM (Grosz et al. 1987), PUNDIT (Dahl et al. 1987), TACITUS (Hobbs et al. 1988), MODL (McCord 1987), CLE (Alshawi et al.
1989), and SNACK-85 (Rayner and Banks 1986)." J90-2003,P88-1012,o,It also has close links with theoretical work in situation semantics (Pollard and Sag 1988; Fenstad et al. 1987). J91-4003,P88-1012,o,Walker et al. \[forthcoming\] and Boguraev and Briscoe \[1988\]). J91-4003,P88-1012,o,Additional evidence for this distinction is given in Pustejovsky and Anick (1988) and Briscoe et al. J91-4003,P88-1012,o,Hobbs et al. 1988; Charniak and Goldman 1988). J95-3001,P88-1012,o,"(1980), Walker (1978), Fink and Biermann (1986), Mudler and Paulus (1988), Carbonell and Pierrel (1988), Young (1990), and Young et al." J95-4001,P88-1012,o,"Abduction has been applied to the solution of local pragmatics problems (Hobbs et al. 1988, 1993) and to story understanding (Charniak and Goldman 1988)." N06-1006,P88-1012,o,"(2005), is to translate dependency parses into neo-Davidsonian-style quasilogical forms, and to perform weighted abductive theorem proving in the tradition of (Hobbs et al. , 1988)." P93-1012,P88-1012,o,"Volume 17, Number 1 March 1991 References Lakoff, George and Johnson, Mark Metaphors We Live By University of Chicago Press 1980 MADCOW Committee (Hirschman, Lynette et al) Multi-Site Data Collection for a Spoken Language Corpus in Proceedings Speech and Natural Language Workshop February 1992 Grice, H. P. Logic and Conversation in P. Cole and J. L. Morgan, Speech Acts, New York: Academic Press, 1975 Pustejovsky, James The Generative Lexicon Computational Linguistics Volume 17, Number 4 December 1991 Hobbs, Jerry R. and Stickel, Mark Interpretation as Abduction in Proceedings of the 26th ACL June 1988 Bobrow, R. , Ingria, R. and Stallard, D. The Mapping Unit Approach to Subcategorization in Proceedings Speech and Natural Language Workshop February 1991 Hobbs, Jerry R. , and Martin, Paul Local Pragmatics in Proceedings, 10th International Joint Conference on Artificial Intelligence (IJCAI-87)." P94-1030,P88-1012,o,"This is known as cost-based abduction (Hobbs et al.
, 1988)." P94-1030,P88-1012,p,"The abduction-based approach (Hobbs et al. , 1988) has provided a simple and elegant way to realize such a task." W02-0211,P88-1012,o,"Two main extensions from that work that we are making use of are: 1) proofs falling below a user defined cost threshold halt the search 2) a simple variable typing system reduces the number of axioms written and the size of the search space (Hobbs et al. , 1988, pg 102)." W02-0211,P88-1012,o,"The domain axioms will bind the body variables to their most likely referents during unification with facts, and previously assumed and proven propositions similarly to (Hobbs et al. , 1988)." W94-0101,P88-1012,o,"A is a set of assumptions sufficient to support the interpretation given S and R. In other words, this is 'interpretation as abduction' (Hobbs et al. 1988), since abduction, not deduction, is needed to arrive at the assumptions." A92-1013,P90-1034,o,"D. Hindle, Noun classification from predicate argument structures, in (ACL,1990)." A92-1013,P90-1034,o,"In (Hindle,1990; Zernik, 1989; Webster and Marcus, 1989) cooccurrence analyses augmented with syntactic parsing is used for the purpose of word classification." A92-1013,P90-1034,o,"(Hindle, 1990; Hindle and Rooth, 1991) and (Smadja, 1991) use syntactic markers to increase the significance of the data." A92-1013,P90-1034,o,"Combining statistical and parsing methods has been done by (Hindle, 1990; Hindle and Rooth, 1991) and (Smadja and McKeown, 1990; Smadja,1991)." A94-1011,P90-1034,p,"Typical examples of linguistically sophisticated annotation include tagging words with their syntactic category (although this has not been found to be effective for IR), lemma of the word (e.g. ""corpus"" for ""corpora""), phrasal information (e.g. identifying noun groups and phrases (Lewis 1992c, Church 1988)), and subject-predicate identification (e.g. Hindle 1990)."
C00-2104,P90-1034,o,Hindle (1990) classified nouns on the basis of co-occurring patterns of subject-verb and verb-object pairs. C04-1036,P90-1034,p,"Probably the most widely used association weight function is (point-wise) Mutual Information (MI) (Church et al. , 1990), (Hindle, 1990), (Lin, 1998), (Dagan, 2000), defined by: MI(w,f) = log2 \[P(w,f) / (P(w)P(f))\]. A known weakness of MI is its tendency to assign high weights for rare features." C04-1036,P90-1034,o,"1 Introduction Distributional Similarity has been an active research area for more than a decade (Hindle, 1990), (Ruge, 1992), (Grefenstette, 1994), (Lee, 1997), (Lin, 1998), (Dagan et al. , 1999), (Weeds and Weir, 2003)." C04-1111,P90-1034,o,"2.2 Co-occurrence-based approaches The second class of algorithms uses cooccurrence statistics (Hindle 1990, Lin 1998)." C04-1116,P90-1034,o,"Our method is similar to (Hindle, 1990), (Lin, 1998), and (Gasperin, 2001) in the use of dependency relationships as the word features." C04-1116,P90-1034,o,"The words we want to aggregate for text analysis are not rigorous synonyms, but the role is the same, so we have to consider the syntactic relation based on the assumptions that words with the same role tend to modify or be modified by similar words (Hindle, 1990; Strzalkowski, 1992)." C04-1165,P90-1034,o,"Hindle (1990) used noun-verb syntactic relations, and Hatzivassiloglou and McKeown (1993) used coordinated adjective-adjective modifier pairs." C08-1051,P90-1034,o,"Others proposed distributional similarity measures between words (Hindle, 1990; Lin, 1998; Lee, 1999; Weeds et al., 2004)." C92-2082,P90-1034,o,"There has recently been work in the detection of semantically related nouns via, for example, shared argument structures (Hindle 1990), and shared dictionary definition context (Wilks et al. 1990)." C94-1074,P90-1034,n,"Among the applications of collocational analysis for lexical acquisition are: the derivation of syntactic disambiguation cues (Basili et al.
1991, 1993a; Hindle and Rooth 1991,1993; Sekine 1992) (Boggess et al. 1992), sense preference (Yarowsky 1992), acquisition of selectional restrictions (Basili et al. 1992b, 1993b; Utsuro et al. 1993), lexical preference in generation (Smadja 1991), word clustering (Pereira 1993; Hindle 1990; Basili et al. 1993c), etc. In the majority of these papers, even though the (precedent or subsequent) statistical processing reduces the number of accidental associations, very large corpora (10,000,000 words) are necessary to obtain reliable data on a ""large enough"" number of words." C96-1003,P90-1034,o,"have been proposed (Hindle, 1990; Brown et al. , 1992; Pereira et al. , 1993; Tokunaga et al. , 1995)." C96-1083,P90-1034,o,"4 Towards an adequate similarity estimation for the building of ontologies The comparison with the similarity score of (Hindle, 1990) shows that SYCLADE similarity indicator is specifically relevant for ontology bootstrap and tuning." C96-1083,P90-1034,o,"Hindle uses the observed frequencies within a specific syntactic pattern (subject/verb, and verb/object) to derive a cooccurrence score which is an estimate of mutual information (Church and Hanks, 1990)." C96-1083,P90-1034,o,"In the past five years, important research on the automatic acquisition of word classes based on lexical distribution has been published (Church and Hanks, 1990; Hindle, 1990; Smadja, 1993; Grefenstette, 1994; Grishman and Sterling, 1994)." C96-1083,P90-1034,o,"Section 4 compares our results to Hindle's ones (Hindle, 1990)." C96-1083,P90-1034,o,"However, Harris' methodology implies also to simplify and transform each parse tree 2, so as to obtain so-called ""elementary sentences"" exhibiting the main conceptual classes for the domain (Sager). 2For instance, Hindle (Hindle, 1990) needs a six million word corpus in order to extract noun similarities from predicate-argument structures."
C96-2205,P90-1034,o,"2.3 Measuring the similarity between classes (step 3) In step 3, we measure the similarity between two primitive classes by using the method given by Hindle (Hindle, 1990)." C96-2205,P90-1034,o,"Since a handmade thesaurus is not suitable for machine use, and expensive to compile, automatic construction of a thesaurus has been attempted using corpora (Hindle, 1990)." E09-1086,P90-1034,o,"Distributional approaches, on the other hand, rely on text corpora, and model relatedness by comparing the contexts in which two words occur, assuming that related words occur in similar context (e.g., Hindle (1990), Lin (1998), Mohammad and Hirst (2006))." E91-1038,P90-1034,p,"Semantic collocations are harder to extract than cooccurrence patterns--the state of the art does not enable us to find semantic collocations automatically. This paper however argues that if we take advantage of lexical paradigmatic behavior underlying the lexicon, we can at least achieve semi-automatic extraction of semantic collocations (see also Calzolari and Bindi (1990)). 1But note the important work by Hindle \[Hindle90\] on extracting semantically similar nouns based on their substitutability in certain verb contexts." E99-1013,P90-1034,n,"Our syntactic-relation-based thesaurus is based on the method proposed by Hindle (1990), although Hindle did not apply it to information retrieval." E99-1013,P90-1034,o,"Words appearing in similar grammatical contexts are assumed to be similar, and therefore classified into the same class (Lin, 1998; Grefenstette, 1994; Grefenstette, 1992; Ruge, 1992; Hindle, 1990)." H93-1049,P90-1034,o,"Hindle, D. , (1990) ""Noun Classification from Predicate-Argument Structures,"" Proceedings of the 28th Annual Meeting of the ACL, pp." I08-1060,P90-1034,o,"Some researchers (Hindle, 1990; Grefenstette, 1994; Lin, 1998) classify terms by similarities based on their distributional syntactic patterns."
I08-1072,P90-1034,p,"A wide range of contextual information, such as surrounding words (Lowe and McDonald, 2000; Curran and Moens, 2002a), dependency or case structure (Hindle, 1990; Ruge, 1997; Lin, 1998), and dependency path (Lin and Pantel, 2001; Pado and Lapata, 2007), has been utilized for similarity calculation, and achieved considerable success." J04-3002,P90-1034,o,"Features identified using distributional similarity have previously been used for syntactic and semantic disambiguation (Hindle 1990; Dagan, Pereira, and Lee 1994) and to develop lexical resources from corpora (Lin 1998; Riloff and Jones 1999)." J05-4002,P90-1034,o,"Similarity-based smoothing (Hindle 1990; Brown et al. 1992; Dagan, Marcus, and Markovitch 1993; Pereira, Tishby, and Lee 1993; Dagan, Lee, and Pereira 1999) provides an intuitively appealing approach to language modeling." J05-4002,P90-1034,o,"4.5 Hindle's Measure Hindle (1990) proposed an MI-based measure, which he used to show that nouns could be reliably clustered based on their verb co-occurrences." J05-4002,P90-1034,o,This hypothesized relationship between distributional similarity and semantic similarity has given rise to a large body of work on automatic thesaurus generation (Hindle 1990; Grefenstette 1994; Lin 1998a; Curran and Moens 2002; Kilgarriff 2003). J93-2002,P90-1034,o,"Most work on corpora of naturally occurring language either uses no a priori grammatical knowledge (Brill and Marcus 1992; Ellison 1991; Finch and Chater 1992; Pereira and Schabes 1992), or else it relies on a large and complex grammar (Hindle 1990, 1991)." J93-2002,P90-1034,o,Many other projects have used statistics in a way that summarizes facts about the text but does not draw any explicit conclusions from them (Finch and Chater 1992; Hindle 1990).
J93-2005,P90-1034,o,"We have found, however, that collocational evidence can be employed to suggest which noun compounds reflect taxonomic relationships, using a strategy similar to that employed by Hindle (1990) for detecting synonyms." J93-2005,P90-1034,o,Hindle 1990). J93-2005,P90-1034,o,"Using techniques described in Church and Hindle (1990), Church and Hanks (1990), and Hindle and Rooth (1991), Figure 4 shows some examples of the most frequent V-O pairs from the AP corpus." J93-2005,P90-1034,o,"Hindle (1990) reports interesting results of this kind based on literal collocations, where he parses the corpus (Hindle 1983) into predicate-argument structures and applies a mutual information measure (Fano 1961; Magerman and Marcus 1990) to weigh the association between the predicate and each of its arguments." J94-4003,P90-1034,o,The use of such relations (mainly relations between verbs or nouns and their arguments and modifiers) for various purposes has received growing attention in recent research (Church and Hanks 1990; Zernik and Jacobs 1990; Hindle 1990; Smadja 1993). J94-4003,P90-1034,o,"More specifically, two recent works have suggested using statistical data on lexical relations for resolving ambiguity of prepositional phrase attachment (Hindle and Rooth 1991) and pronoun references (Dagan and Itai 1990, 1991)." J94-4003,P90-1034,p,His results may be improved if more sophisticated methods and larger corpora are used to establish similarity between words (such as in Hindle 1990). J98-4002,P90-1034,o,"Predicate argument structures, which consist of complements (case filler nouns and case markers) and verbs, have also been used in the task of noun classification (Hindle 1990)." N03-1015,P90-1034,o,"similar distribution patterns (Hindle, 1990; Pereira et al. , 1993; Grefenstette, 1994)."
N03-4011,P90-1034,o,There have been many approaches to compute the similarity between words based on their distribution in a corpus (Hindle 1990; Landauer and Dumais 1997; Lin 1998). N04-1041,P90-1034,o,One approach constructs automatic thesauri by computing the similarity between words based on their distribution in a corpus (Hindle 1990; Lin 1998). N07-1016,P90-1034,o,"This second source of evidence is sometimes referred to as distributional similarity (Hindle, 1990)." N09-3007,P90-1034,o,English nouns first appeared in Hindle (1990). P05-1016,P90-1034,o,Researchers have mostly looked at representing words by their surrounding words (Lund and Burgess 1996) and by their syntactical contexts (Hindle 1990; Lin 1998). P05-1077,P90-1034,o,"4 Building Noun Similarity Lists A lot of work has been done in the NLP community on clustering words according to their meaning in text (Hindle, 1990; Lin, 1998)." P06-1015,P90-1034,o,"To date, researchers have harvested, with varying success, several resources, including concept lists (Lin and Pantel 2002), topic signatures (Lin and Hovy 2000), facts (Etzioni et al. 2005), and word similarity lists (Hindle 1990)." P06-1045,P90-1034,p,"For example, Hindle (1990) used cooccurrences between verbs and their subjects and objects, and proposed a similarity metric based on mutual information, but no exploration concerning the effectiveness of other kinds of word relationship is provided, although it is extendable to any kinds of contextual information." P06-1045,P90-1034,o,"Various methods (Hindle, 1990; Lin, 1998; Hagiwara et al. , 2005) have been proposed for synonym acquisition." P06-1072,P90-1034,o,The only difference is that we 5See also work on partial parsing as a task in its own right: Hindle (1990) inter alia. P06-1100,P90-1034,o,"1 Introduction NLP researchers have developed many algorithms for mining knowledge from text and the Web, including facts (Etzioni et al.
2005), semantic lexicons (Riloff and Shepherd 1997), concept lists (Lin and Pantel 2002), and word similarity lists (Hindle 1990)." P06-1101,P90-1034,o,"3.2 (m,n)-cousin Classification The classifier for learning coordinate terms relies on the notion of distributional similarity, i.e., the idea that two words with similar meanings will be used in similar contexts (Hindle, 1990)." P06-1102,P90-1034,o,"Many methods have been proposed to compute distributional similarity between words, e.g., (Hindle, 1990), (Pereira et al. , 1993), (Grefenstette, 1994) and (Lin, 1998)." P06-1116,P90-1034,o,"We use the cosine similarity measure for window-based contexts and the following commonly used similarity measures for the syntactic vector space: Hindle's (1990) measure, the weighted Lin measure (Wu and Zhou, 2003), the α-Skew divergence measure (Lee, 1999), the Jensen-Shannon (JS) divergence measure (Lin, 1991), Jaccard's coefficient (van Rijsbergen, 1979) and the Confusion probability (Essen and Steinbiss, 1992)." P07-1028,P90-1034,o,"We will be using the similarity metrics shown in Table 1: Cosine, the Dice and Jaccard coefficients, and Hindle's (1990) and Lin's (1998) mutual information-based metrics." P07-1057,P90-1034,o,"Hindle (1990) uses a mutual-information based metric derived from the distribution of subject, verb and object in a large corpus to classify nouns." P08-1002,P90-1034,o,"Our method is thus related to previous work based on Harris (1985)'s distributional hypothesis. It has been used to determine both word and syntactic path similarity (Hindle, 1990; Lin, 1998a; Lin and Pantel, 2001)." P08-2008,P90-1034,o,Hindle (1990) grouped nouns into thesaurus-like lists based on the similarity of their syntactic contexts. P08-3001,P90-1034,o,"A number of researches which utilized distributional similarity have been conducted, including (Hindle, 1990; Lin, 1998; Geffet and Dagan, 2004) and many others."
P09-1052,P90-1034,o,"Syntactic context information is used (Hindle, 1990; Ruge, 1992; Lin, 1998) to compute term similarities, based on which similar words to a particular word can directly be returned." P09-2018,P90-1034,o,"This has been now an active research area for a couple of decades (Hindle, 1990; Lin, 1998; Weeds and Weir, 2003)." P91-1017,P90-1034,o,"The use of such relations (mainly relations between verbs or nouns and their arguments and modifiers) for various purposes has received growing attention in recent research (Church and Hanks, 1990; Zernik and Jacobs, 1990; Hindle, 1990)." P91-1017,P90-1034,o,"More specifically, two recent works have suggested to use statistical data on lexical relations for resolving ambiguity cases of PP-attachment (Hindle and Rooth, 1990) and pronoun references (Dagan and Itai, 1990a; Dagan and Itai, 1990b)." P91-1017,P90-1034,o,"His results may be improved if more sophisticated techniques and larger corpora are used to establish similarity between words (such as in (Hindle, 1990))." P91-1027,P90-1034,o,"Three recent papers in this area are Church and Hanks (1990), Hindle (1990), and Smadja and McKeown (1990)." P91-1027,P90-1034,o,"(1) a. I expected \[NP the man who smoked NP\] to eat ice-cream b. I doubted \[NP the man who liked to eat ice-cream NP\] Current high-coverage parsers tend to use either custom, hand-generated lists of subcategorization frames (e.g. , Hindle, 1983), or published, hand-generated lists like the Oxford Advanced Learner's Dictionary of Contemporary English, Hornby and Covey (1973) (e.g. , DeMarcken, 1990)." P92-1028,P90-1034,p,"Interestingly, in work on the automated classification of nouns, (Hindle, 1990) also noted problems with ""empty"" words that depend on their complements for meaning." P92-1028,P90-1034,o,"In comparison, most corpus-based algorithms employ substantially larger corpora (e.g.
, 1 million words (de Marcken, 1990), 2.5 million words (Brent, 1991), 6 million words (Hindle, 1990), 13 million words (Hindle & Rooth, 1991))." P93-1022,P90-1034,o,"Statistical data about these various cooccurrence relations is employed for a variety of applications, such as speech recognition (Jelinek, 1990), language generation (Smadja and McKeown, 1990), lexicography (Church and Hanks, 1990), machine translation (Brown et al. , ; Sadler, 1989), information retrieval (Maarek and Smadja, 1989) and various disambiguation tasks (Dagan et al. , 1991; Hindle and Rooth, 1991; Grishman et al. , 1986; Dagan and Itai, 1990)." P93-1022,P90-1034,o,"The search is based on the property that when computing sim(w1, w2), words that have high mutual information values. The nominator in our metric resembles the similarity metric in (Hindle, 1990)." P93-1024,P90-1034,o,"Hindle (1990) proposed dealing with the sparseness problem by estimating the likelihood of unseen events from that of ""similar"" events that have been seen." P94-1032,P90-1034,o,"Some researchers apply shallow or partial parsers (Smadja, 1991; Hindle, 1990) to acquiring specific patterns from texts." P97-1066,P90-1034,o,"In fact, we are considering ""word usage rather than word meaning"" (Zernik, 1990) following in this the distributional point of view, see (Harris, 1968), (Hindle, 1990)." P97-1066,P90-1034,o,"Statistical or probabilistic methods are often used to extract semantic clusters from corpora in order to build lexical resources for ANLP tools (Hindle, 1990), (Zernik, 1990), (Resnik, 1993), or for automatic thesaurus generation (Grefenstette, 1994)." P98-1082,P90-1034,o,"Works on word similarity and word sense disambiguation are generally based on statistical methods designed for large or even very large corpora (Hindle, 1990; Agirre and Rigau, 1996)." P98-2127,P90-1034,o,"In (Hindle, 1990), a small set of sample results are presented."
P98-2127,P90-1034,o,"When the value of ||w, r, w'|| is unknown, we assume that A and C are conditionally independent given B. The probability of A, B and C cooccurring is estimated by PMLE(B) PMLE(A|B) PMLE(C|B), where PMLE is the maximum likelihood estimation of a probability distribution and PMLE(B) = ||*, r, *|| / ||*, *, *||, PMLE(A|B) = ||w, r, *|| / ||*, r, *||, PMLE(C|B) = ||*, r, w'|| / ||*, r, *||. When the value of ||w, r, w'|| is known, we can obtain PMLE(A, B, C) directly: PMLE(A, B, C) = ||w, r, w'|| / ||*, *, *||. Let I(w, r, w') denote the amount of information contained in ||w, r, w'|| = c. [Figure 1: Other Similarity Measures: simHindle(w1, w2) = Σ over (r, w) ∈ T(w1) ∩ T(w2), r ∈ {subj-of, obj-of}, of min(I(w1, r, w), I(w2, r, w)); simHindle_r(w1, w2) = Σ over (r, w) ∈ T(w1) ∩ T(w2) of min(I(w1, r, w), I(w2, r, w)); simcosine(w1, w2) = |T(w1) ∩ T(w2)| / √(|T(w1)| |T(w2)|); simDice(w1, w2) = 2 |T(w1) ∩ T(w2)| / (|T(w1)| + |T(w2)|); simJacard(w1, w2) = |T(w1) ∩ T(w2)| / (|T(w1)| + |T(w2)| − |T(w1) ∩ T(w2)|).] Its value can be computed as follows: I(w, r, w') = log (||w, r, w'|| ||*, r, *|| / (||w, r, *|| ||*, r, w'||)). It is worth noting that I(w,r,w') is equal to the mutual information between w and w' (Hindle, 1990)." P98-2127,P90-1034,o,"The measure simHindle is the same as the similarity measure proposed in (Hindle, 1990), except that it does not use dependency triples with negative mutual information." P98-2127,P90-1034,o,"Ours is similar to (Grefenstette, 1994; Hindle, 1990; Ruge, 1992) in the use of dependency relationship as the word features, based on which word similarities are computed." P99-1004,P90-1034,p,"Arguably the most widely used is the mutual information (Hindle, 1990; Church and Hanks, 1990; Dagan et al. , 1995; Luk, 1995; D. Lin, 1998a)."
W02-1107,P90-1034,o,"To extract semantic information of words such as synonyms and antonyms from corpora, previous research used syntactic structures (Hindle 1990, Hatzivassiloglou 1993 and Tokunaga 1995), response time to associate synonyms and antonyms in psychological experiments (Gross 1989), or extracting related words automatically from corpora (Grefenstette 1994)." W03-1610,P90-1034,o,"The most frequently used resource for synonym extraction is large monolingual corpora (Hindle, 1990; Crouch and Yang, 1992; Grefenstette, 1994; Park and Choi, 1997; Gasperin et al. , 2001 and Lin, 1998)." W04-1216,P90-1034,o,"For example, the words corruption and abuse are similar because both of them can be subjects of verbs like arouse, become, betray, cause, continue, cost, exist, force, go on, grow, have, increase, lead to, and persist, etc, and both of them can modify nouns like accusation, act, allegation, appearance, and case, etc. Many methods have been proposed to compute distributional similarity between words, e.g., (Hindle, 1990), (Pereira et al. 1993), (Grefenstette 1994) and (Lin 1998)." W05-1504,P90-1034,o,"If the bound is too tight to allow the correct parse of some sentence, we would still like to allow an accurate partial parse: a sequence of accurate parse fragments (Hindle, 1990; Abney, 1991; Appelt et al. , 1993; Chen, 1995; Grefenstette, 1996)." W05-1516,P90-1034,o,"For example, the words test and exam are similar because both of them follow verbs such as administer, cancel, cheat on, conduct, and both of them can be preceded by adjectives such as academic, comprehensive, diagnostic, difficult, Many methods have been proposed to compute distributional similarity between words (Hindle, 1990; Pereira et al. , 1993; Grefenstette, 1994; Lin, 1998)." W06-2904,P90-1034,o,"For example, the words test and exam are similar because both of them can follow verbs such as administer, cancel, cheat on, conduct, etc.
Many methods have been proposed to compute distributional similarity between words, e.g., (Hindle, 1990; Pereira et al. , 1993; Grefenstette, 1994; Lin, 1998)." W93-0107,P90-1034,o,"More recent papers Hindle (1990), Pereira and Tishby (1992) proposed to cluster nouns on the basis of a metric derived from the distribution of subject, verb and object in the texts." W93-0113,P90-1034,o,"A number of knowledge-rich \[Jacobs and Rau, 1990, Calzolari and Bindi, 1990, Mauldin, 1991\] and knowledge-poor \[Brown et al. , 1992, Hindle, 1990, Ruge, 1991, Grefenstette, 1992\] methods have been proposed for recognizing when words are similar." W97-0205,P90-1034,o,"MI is defined in general as follows: I(x, y) = log2 [P(x, y) / (P(x) P(y))] We can use this definition to derive an estimate of the connectedness between words, in terms of collocations (Smadja, 1993), but also in terms of phrases and grammatical relations (Hindle, 1990)." W97-0803,P90-1034,o,"This criticism leads us to automatic approaches for building thesauri from large corpora \[Hirschman et al. , 1975; Hindle, 1990; Hatzivassiloglou and McKeown, 1993; Pereira et al. , 1993; Tokunaga et al. , 1995; Ushioda, 1996\]." W98-0704,P90-1034,n,"Our predicate-argument structure-based thesaurus is based on the method proposed by Hindle (Hindle, 1990), although Hindle did not apply it to information retrieval." A00-1039,P93-1022,o,"This is similar to work by several other groups which aims to induce semantic classes through syntactic co-occurrence analysis (Riloff and Jones, 1999; Pereira et al. , 1993; Dagan et al. , 1993; Hirschman et al. , 1975), although in our case the contexts are limited to selected patterns, relevant to the scenario." C96-2205,P93-1022,o,"Dagan et al. proposed a similarity-based model in which each word is generalized, not to its own specific class, but to a set of words which are most similar to it (Dagan et al. , 1993)."
E99-1028,P93-1022,o,"We say that wp and nq are semantically related if w'i and nq are semantically related and (wp, nq) and (w'i, nq) are semantically similar (Dagan et al. , 1993)." J02-2001,P93-1022,o,"There are many different similarity measures, which variously use taxonomic lexical hierarchies or lexical-semantic networks, large text corpora, word definitions in machine-readable dictionaries or other semantic formalisms, or a combination of these (Dagan, Marcus, and Markovitch 1993; Kozima and Furugori 1993; Pereira, Tishby, and Lee 1993; Church et al. 1994; Grefenstette 1994; Resnik 1995; McMahon and Smith 1996; Jiang and Conrath 1997; Schütze 1998; Lin 1998; Resnik and Diab 2000; Budanitsky 1999; Budanitsky and Hirst 2001, 2002)." J05-4002,P93-1022,p,"Similarity-based smoothing (Hindle 1990; Brown et al. 1992; Dagan, Marcus, and Markovitch 1993; Pereira, Tishby, and Lee 1993; Dagan, Lee, and Pereira 1999) provides an intuitively appealing approach to language modeling." J05-4002,P93-1022,o,"5.2 Pseudo-Disambiguation Task Pseudo-disambiguation tasks have become a standard evaluation technique (Gale, Church, and Yarowsky 1992; Schütze 1992; Pereira, Tishby, and Lee 1993; Schütze 1998; Lee 1999; Dagan, Lee, and Pereira 1999; Golding and Roth 1999; Rooth et al. 1999; Even-Zohar and Roth 2000; Lee 2001; Clark and Weir 2002) and, in the current setting, we may use a noun's neighbors to decide which of two co-occurrences is the most likely." J94-4003,P93-1022,p,"A promising approach may be to use aligned bilingual corpora, especially for augmenting existing lexicons with domain-specific terminology (Brown et al. 1993; Dagan, Church, and Gale 1993)."
J96-1001,P93-1022,o,"Related Work The recent availability of large amounts of bilingual data has attracted interest in several areas, including sentence alignment (Gale and Church 1991b; Brown, Lai, and Mercer 1991; Simard, Foster and Isabelle 1992; Gale and Church 1993; Chen 1993), word alignment (Gale and Church 1991a; Brown et al. 1993; Dagan, Church, and Gale 1993; Fung and McKeown 1994; Fung 1995b), alignment of groups of words (Smadja 1992; Kupiec 1993; van der Eijk 1993), and statistical translation (Brown et al. 1993)." J96-2003,P93-1022,p,"Successful approaches aimed at trying to overcome the sparse data limitation include backoff (Katz 1987), Turing-Good variants (Good 1953; Church and Gale 1991), interpolation (Jelinek 1985), deleted estimation (Jelinek 1985; Church and Gale 1991), similarity-based models (Dagan, Pereira, and Lee 1994; Essen and Steinbiss 1992), Pos-language models (Derouault and Merialdo 1986) and decision tree models (Bahl et al. 1989; Black, Garside, and Leech 1993; Magerman 1994)." J98-1002,P93-1022,o,"This can be done by smoothing the observed frequencies 7 (Church and Mercer 1993) or by class-based methods (Brown et al. 1991; Pereira and Tishby 1992; Pereira, Tishby, and Lee 1993; Hirschman 1986; Resnik 1992; Brill et al. 1990; Dagan, Marcus, and Markovitch 1993)." J98-1004,P93-1022,n,"Regardless of whether it takes the form of dictionaries (Lesk 1986; Guthrie et al. 1991; Dagan, Itai, and Schwall 1991; Karov and Edelman 1996), thesauri (Yarowsky 1992; Walker and Amsler 1986), bilingual corpora (Brown et al. 1991; Church and Gale 1991), or hand-labeled training sets (Hearst 1991; Leacock, Towell, and Voorhees 1993; Niwa and Nitta 1994; Bruce and Wiebe 1994), providing information for sense definitions can be a considerable burden." P95-1025,P93-1022,o,"In the similaritybased approaches (Dagan et al. , 1993 & 1994; Grishman et al. 
, 1993), rather than a class, each word is modelled by its own set of similar words derived from statistical data collected from corpora." P98-2127,P93-1022,o,"In (Dagan et al. , 1993) and (Pereira et al. , 1993), clusters of similar words are evaluated by how well they are able to recover data items that are removed from the input corpus one at a time." W96-0104,P93-1022,o,"This can be done by smoothing the observed frequencies (Church and Mercer, 1993), or by class-based methods (Brown et al. , 1991; Pereira and Tishby, 1992; Pereira et al. , 1993; Hirschman, 1986; Resnik, 1992; Brill et al. , 1990; Dagan et al. , 1993)." W97-0311,P93-1022,o,"Several authors have used mutual information and similar statistics as an objective function for word clustering (Dagan et al. , 1993; Brown et al. , 1992; Pereira et al. , 1993; Wang et al. , 1996), for automatic determination of phonemic baseforms (Lucassen & Mercer, 1984), and for language modeling for speech recognition (Ries et al. , 1996)." A00-3006,P95-1026,o,"(1991), Yarowsky (1995))." A97-2010,P95-1026,o,"Previous corpus-based Word Sense Disambiguation (WSD) algorithms (Hearst, 1991; Bruce and Wiebe, 1994; Leacock et al. , 1996; Ng and Lee, 1996; Yarowsky, 1992; Yarowsky, 1995) determine the meanings of polysemous words by exploiting their local contexts." C00-1023,P95-1026,o,"Statistical techniques, both supervised learning from tagged corpora (Yarowsky, 1992), (Ng and Lee, 1996), and unsupervised learning (Yarowsky, 1995), (Resnik, 1997), have been investigated." C00-1023,P95-1026,o,"The model can be seen as a bootstrapping learning process for disambiguation, where the information gained from one part (selectional preference) is used to improve the other (disambiguation) and vice versa, reminiscent of the work by Riloff and Jones (1999) and Yarowsky (1995)."
C00-2094,P95-1026,o,"However, the best performing statistical approaches to lexical ambiguity resolution themselves rely on complex information sources such as ""lemmas, inflected forms, parts of speech and arbitrary word classes [...] local and distant collocations, trigram sequences, and predicate argument association"" (Yarowsky (1995), p. 190) or large context-windows up to 1000 neighboring words (Schütze, 1992)." C00-2094,P95-1026,o,The SENSEVAL standard is clearly beaten by the earlier results of Yarowsky (1995) (96.5 % precision) and Schütze (1992) (92 % precision). C02-1058,P95-1026,o,"A variety of unsupervised WSD methods, which use a machine-readable dictionary or thesaurus in addition to a corpus, have also been proposed (Yarowsky 1992; Yarowsky 1995; Karov and Edelman 1998)." C02-1088,P95-1026,o,David Yarowsky (1995) showed it was accurate in the word sense disambiguation. C02-1097,P95-1026,o,"Distance from a target word is used for this purpose and it is calculated by the assumption that the target words in the context window have the same sense (Yarowsky, 1995)." C02-1127,P95-1026,p,"Recent work emphasizes corpus-based unsupervised approach (Dagan and Itai, 1994; Yarowsky, 1992; Yarowsky, 1995) that avoids the need for costly truthed training data." C04-1071,P95-1026,o,"Many techniques which have been studied for the purpose of machine translation, such as word sense disambiguation (Dagan and Itai, 1994; Yarowsky, 1995), anaphora resolution (Mitamura et al. , 2002), and automatic pattern extraction from corpora (Watanabe et al. , 2003), can accelerate the further enhancement of sentiment analysis, or other NLP tasks." C04-1133,P95-1026,o,"However, following the work of Yarowsky (1992), Yarowsky (1995), many supervised WSD systems use minimal information about syntactic structures, for the most part restricting the notion of context to topical and local features."
C08-1135,P95-1026,o,"Yarowsky (1995) describes a 'semi-unsupervised' approach to the problem of sense disambiguation of words, also using a set of initial seeds, in this case a few high quality sense annotations." C96-2157,P95-1026,o,"An alternative method we considered was to estimate certain conditional probabilities, similarly to the formula used in (Yarowsky, 1995): SW(t) = log [P(p ∈ A|t) / P(p ∈ R|t)] ≈ log [f(t, A) f(A) / (f(t, R) f(R))] (2) Here f(A) is (an estimate of) the probability that any given candidate phrase will be accepted by the spotter, and f(R) is the probability that this phrase is rejected, i.e., f(R) = 1 − f(A)." C96-2157,P95-1026,o,"In addition, (Yarowsky, 1995), (Gale, Church & Yarowsky, 1992) point out that there is a strong tendency for words to occur in one sense within any given discourse (""one sense per discourse"")." D07-1070,P95-1026,o,"In his analysis of Yarowsky (1995), Abney (2004) formulates several variants of bootstrapping." D07-1070,P95-1026,p,"Although we see statistically significant improvements (at the .05 level on a paired permutation test), the quality of the parsers is still quite poor, in contrast to other applications of bootstrapping which rival supervised methods (Yarowsky, 1995)." D07-1070,P95-1026,o,"Our observation is that this situation is ideal for so-called bootstrapping, co-training, or minimally supervised learning methods (Yarowsky, 1995; Blum and Mitchell, 1998; Yarowsky and Wicentowski, 2000)." D08-1108,P95-1026,o,"An alternative approach to extracting the informal phrases is to use a bootstrapping algorithm (e.g., Yarowsky (1995))." D09-1070,P95-1026,p,"Constraining learning by using document boundaries has been used quite effectively in unsupervised word sense disambiguation (Yarowsky, 1995)." D09-1070,P95-1026,o,Our intuition comes from an observation by Yarowsky (1995) regarding multiple tokens of words in documents.
D09-1095,P95-1026,o,The benefits of using grammatical information for automatic WSD were first explored by Yarowsky (1995) and Resnik (1996) in unsupervised approaches to disambiguating single words in context. D09-1134,P95-1026,o,"One heuristic approach is to adapt the self-training algorithm (Yarowsky, 1995) to our model." D09-1134,P95-1026,o,"This process is repeated for a number of iterations in a self-training fashion (Yarowsky, 1995)." D09-1148,P95-1026,o,We propose a method similar to Yarowsky (1995) to generalize beyond the training set. D09-1149,P95-1026,n,"Although previous work (Yarowsky, 1995; Blum and Mitchell, 1998; Abney, 2000; Zhang, 2004) has tackled the bootstrapping approach from both the theoretical and practical point of view, many key problems still remain unresolved, such as the selection of initial seed set." E06-1018,P95-1026,p,"However, as also pointed out by Yarowsky (1995), this observation does not hold uniformly over all possible co-occurrences of two words." E06-2018,P95-1026,o,"This can be done in a supervised (Yarowsky, 1994), a semi-supervised (Yarowsky, 1995) or a fully unsupervised way (Pantel & Lin, 2002)." E06-2018,P95-1026,p,"Although the relative success of previous disambiguation systems (e.g. Yarowsky, 1995) suggests that this should be the case, the effect has usually not been quantified as the emphasis was on a task-based evaluation." E06-3004,P95-1026,o,"(Yarowsky, 1995) and (Mihalcea and Moldovan, 2001) utilized bootstrapping for word sense disambiguation." E99-1024,P95-1026,o,"Our method is based on a decision list proposed by Yarowsky (Yarowsky, 1994; Yarowsky, 1995)." 
E99-1028,P95-1026,o,"[Table 2: The result of disambiguation experiment (two senses)] Yarowsky used an unsupervised learning procedure to perform noun WSD (Yarowsky, 1995)." E99-1028,P95-1026,p,"One of the major approaches to disambiguate word senses is supervised learning (Gale et al. , 1992), (Yarowsky, 1992), (Bruce and Janyce, 1994), (Miller et al. , 1994), (Niwa and Nitta, 1994), (Luk, 1995), (Ng and Lee, 1996), (Wilks and Stevenson, 1998)." H05-1017,P95-1026,o,"Within the machine learning paradigm, IL has been incorporated as a technique for bootstrapping an extensional learning algorithm, as in (Yarowsky, 1995; Collins and Singer, 1999; Liu et al. , 2004)." H05-1017,P95-1026,o,"It is possible to recognize a common structure of these works, based on a typical bootstrap schema (Yarowsky, 1995; Collins and Singer, 1999): Step 1: Initial unsupervised categorization." H05-1046,P95-1026,o,"There has of course been a large amount of work on the more general problem of word-sense disambiguation, e.g., (Yarowsky 1995) (Kilgarriff and Edmonds 2002)." H05-1050,P95-1026,o,"We extracted all examples of each word from the 14-million-word English portion of the Hansards. Note that this is considerably smaller than Yarowsky's (1995) corpus of 460 million words, so bootstrapping will not perform as well, and may be more sensitive to the choice of seed."
H05-1050,P95-1026,o,"In the supervised condition, we used just 2 additional task instances, plant and tank, each with 4000 handannotated instances drawn from a large balanced corpus (Yarowsky, 1995)." H05-1050,P95-1026,n,"6 Conclusions In this paper, we showed that it is sometimes possible indeed, preferableto eliminate the initial bit of supervision in bootstrapping algorithms such as the Yarowsky (1995) algorithm for word sense disambiguation." H05-1050,P95-1026,p,2.1 The Yarowsky algorithm Yarowsky (1995) sparked considerable interest in bootstrapping with his successful method for word sense disambiguation. H05-1050,P95-1026,n,"Our experiments on the Canadian Hansards show that our unsupervised technique is significantly more effective than picking seeds by hand (Yarowsky, 1995), which in turn is known to rival supervised methods." H05-1107,P95-1026,o,Yarowsky (1995) used this method for word sense disambiguation. H05-1114,P95-1026,o,"Many corpus based statistical methods have been proposed to solve this problem, including supervised learning algorithms (Leacock et al. , 1998; Towel and Voorheest, 1998), weakly supervised learning algorithms (Dagan and Itai, 1994; Li and Li, 2004; Mihalcea, 2004; Niu et al. , 2005; Park et al. , 2000; Yarowsky, 1995), unsupervised learning algorithms (or word sense discrimination) (Pedersen and Bruce, 1997; Schutze, 1998), and knowledge based algorithms (Lesk, 1986; McCarthy et al. , 2004)." I05-3009,P95-1026,o,"It is appreciated that multi-sense words appearing in the same document tend to be tagged with the same word sense if they belong to the same common domain in the semantic hierarchy (Yarowsky, 1995)." I08-1040,P95-1026,o,But it is close to the paradigm described by Yarowsky (1995) and Turney (2002) as it also employs self-training based on a relatively small seed data set which is incrementally enlarged with unlabelled samples. 
J01-3001,P95-1026,o,Yarowsky (1995) dealt with this problem largely by producing an unsupervised learning algorithm that generates probabilistic decision list models of word senses from seed collocates. J01-3001,P95-1026,o,"Currently, machine learning methods (Yarowsky 1995; Rigau, Atserias, and Agirre 1997) and combinations of classifiers (McRoy 1992) have been popular." J01-3001,P95-1026,o,"Some researchers have concentrated on producing WSD systems that base results on a limited number of words, for example Yarowsky (1995) and Schütze (1992) who quoted results for 12 words, and a second group, including Leacock, Towell, and Voorhees (1993) and Bruce and Wiebe (1994), who gave results for just one, namely interest." J01-3001,P95-1026,o,"(1991), Yarowsky (1995) and others." J02-3002,P95-1026,o,"Since then this idea has been applied to several tasks, including word sense disambiguation (Yarowsky 1995) and named-entity recognition (Cucerzan and Yarowsky 1999)." J03-3002,P95-1026,o,"This approach to minimally supervised classifier construction has been widely studied (Yarowsky 1995), especially in cases in which the features of interest are orthogonal in some sense (e.g. , Blum and Mitchell 1998; Abney 2002)." J04-1001,P95-1026,o,Yarowsky (1995) has proposed a bootstrapping method for word sense disambiguation. J04-1001,P95-1026,o,This implementation is exactly the one proposed in Yarowsky (1995). J04-1001,P95-1026,o,"We viewed the seed word as a classified sentence, following a similar proposal in Yarowsky (1995)." J04-1001,P95-1026,o,We also conducted translation on seven of the twelve English words studied in Yarowsky (1995). J04-1001,P95-1026,o,"Note that the results of MB-D here cannot be directly compared with those in Yarowsky (1995), because the data used are different." J04-1001,P95-1026,o,"Yarowsky (1995) proposed such a method for word sense disambiguation, which we refer to as monolingual bootstrapping."
J04-1001,P95-1026,o,"After line 17, we can employ the one-sense-per-discourse heuristic to further classify unclassified data, as proposed in Yarowsky (1995)." J04-1003,P95-1026,p,"A variety of classifiers have been employed for this task (see Mooney [1996] and Ide and Veronis [1998] for overviews), the most popular being decision lists (Yarowsky 1994, 1995) and naive Bayesian classifiers (Pedersen 2000; Ng 1997; Pedersen and Bruce 1998; Mooney 1996; Cucerzan and Yarowsky 2002)." J04-3004,P95-1026,p,The Yarowsky (1995) algorithm was one of the first bootstrapping algorithms to become widely known in computational linguistics. J06-2003,P95-1026,o,The algorithm we implemented is inspired by the work of Yarowsky (1995) on word sense disambiguation. J98-1001,P95-1026,o,"(1992), Pereira and Tishby (1992), and Pereira, Tishby, and Lee (1993) propose methods that derive classes from the distributional properties of the corpus itself, while other authors use external information sources to define classes: Resnik (1992) uses the taxonomy of WordNet; Yarowsky (1992) uses the categories of Roget's Thesaurus, Slator (1992) and Liddy and Paik (1993) use the subject codes in the LDOCE; Luk (1995) uses conceptual sets built from the LDOCE definitions." J98-1001,P95-1026,o,"Aware of this problem, Resnik and Yarowsky suggest creating the sense distance matrix based on results in experimental psychology such as Miller and Charles (1991) or Resnik (1995b)." J98-1002,P95-1026,o,"[Table: word senses, sample sizes, feedback-set sizes, and % correct] the feedback sets) consisted of a few dozen examples, in comparison to thousands of examples needed in other corpus-based methods (Schütze 1992; Yarowsky 1995)."
J98-1002,P95-1026,o,"Recently, Yarowsky (1995) combined an MRD and a corpus in a bootstrapping process." J98-1003,P95-1026,o,"Roget's has been used as the sense division in two recent WSD works (Yarowsky 1992; Luk 1995) more or less as is, except for a small number of senses added to fill gaps." J98-1003,P95-1026,o,"WSD has received increasing attention in recent literature on computational linguistics (Lesk 1986; Schütze 1992; Gale, Church, and Yarowsky 1992; Yarowsky 1992, 1995; Bruce and Wiebe 1995; Luk 1995; Ng and Lee 1996; Chang et al. 1996)." J98-1003,P95-1026,o,"1995), (3) thesaurus categories (Yarowsky 1992; Chen and Chang 1994), (4) translation in another language (Gale, Church, and Yarowsky 1992; Dagan, Itai, and Schwall 1991; Dagan and Itai 1994), (5) automatically induced clusters with sublexical representation (Schütze 1992), and (6) hand-crafted lexicons (McRoy 1992)." J98-1003,P95-1026,o,"Lacking an automatic method, recent WSD works (Bruce and Wiebe 1995; Luk 1995; Yarowsky 1995) still resort to human intervention to identify and group closely related senses in an MRD." J98-1003,P95-1026,o,Using thesaurus categories directly as a coarse sense division may seem to be a viable alternative (Yarowsky 1995). J98-1003,P95-1026,o,TopSense is tested on 20 words extensively investigated in recent WSD literature (Schütze 1992; Yarowsky 1992; Luk 1995). J98-1004,P95-1026,o,The fact that the error rate more than doubles when the seeds in Yarowsky's (1995) experiments are reduced from a sense's best collocations to just one word per sense suggests that the error rate would increase further if no seeds were provided. J98-1004,P95-1026,p,Yarowsky has proposed an algorithm that requires as little user input as one seed word per sense to start the training process (Yarowsky 1995).
J98-1005,P95-1026,o,"At each training-set size, a new copy of the network is trained under each of the following conditions: (1) using SULU, (2) using SULU but supplying only the labeled training examples to synthesize, (3) standard network training, (4) using a re-implementation of an algorithm proposed by Yarowsky (1995), and (5) using standard network training but with all training examples labeled to establish an upper bound." J98-1006,P95-1026,o,Several artificial techniques have been used so that classifiers can be developed and tested without having to invest in manually tagging the data: Yarowsky (1993) and Schütze (1995) have acquired training and testing materials by creating pseudowords from existing nonhomographic forms. J98-1006,P95-1026,o,"Yarowsky (1995) has proposed automatically augmenting a small set of experimenter-supplied seed collocations (e.g. , manufacturing plant and plant life for two different senses of the noun plant) into a much larger set of training materials." J98-4002,P95-1026,o,"Various corpus-based approaches to word sense disambiguation have been proposed (Bruce and Wiebe 1994; Charniak 1993; Dagan and Itai 1994; Fujii et al. 1996; Hearst 1991; Karov and Edelman 1996; Kurohashi and Nagao 1994; Li, Szpakowicz, and Matwin 1995; Ng and Lee 1996; Niwa and Nitta 1994; Schütze 1992; Uramoto 1994b; Yarowsky 1995)." J98-4002,P95-1026,o,External information such as the discourse or domain dependency of each word sense (Guthrie et al. 1991; Nasukawa 1993; Yarowsky 1995) is expected to lead to system improvement. J98-4002,P95-1026,o,"Iterating between these two 1 Note that these problems are associated with corpus-based approaches in general, and have been identified by a number of researchers (Engelson and Dagan 1996; Lewis and Gale 1994; Uramoto 1994a; Yarowsky 1995)."
N01-1023,P95-1026,o,"1998; Goldman and Zhou, 2000) that has been used previously to train classifiers in applications like word-sense disambiguation (Yarowsky, 1995), document classification (Blum and Mitchell, 1998) and named-entity recognition (Collins and Singer, 1999) and apply this method to the more complex domain of statistical parsing." N01-1023,P95-1026,o,"Our approach is closely related to previous Co-Training methods (Yarowsky, 1995; Blum and Mitchell, 1998; Goldman and Zhou, 2000; Collins and Singer, 1999)." N01-1023,P95-1026,o,"(Yarowsky, 1995) first introduced an iterative method for increasing a small set of seed data used to disambiguate dual word senses by exploiting the constraint that in a segment of discourse only one sense of a word is used." N01-1023,P95-1026,o,"Co-Training has been used before in applications like word-sense disambiguation (Yarowsky, 1995), web-page classification (Blum and Mitchell, 1998) and named-entity identification (Collins and Singer, 1999)." N01-1023,P95-1026,o,"Co-training (Blum and Mitchell, 1998; Yarowsky, 1995) can be informally described in the following manner: • Pick two (or more) views of a classification problem." N03-2025,P95-1026,o,The tag propagation/elimination scheme is adopted from [Yarowsky 1995]. N03-3004,P95-1026,p,"The best example of such an approach is (Yarowsky, 1995), who proposes a method that automatically identifies collocations that are indicative of the sense of a word, and uses those to iteratively label more examples." N03-4012,P95-1026,o,"This iterative optimiser, derived from a word disambiguation technique (Yarowsky, 1995), finds the nearest local maximum in the lexical cooccurrence network from each concept seed." N04-2003,P95-1026,o,"Two major research topics in this field are Named Entity Recognition (NER) (N. Wacholder and Choi, 1997; Cucerzan and Yarowsky, 1999) and Word Sense Disambiguation (WSD) (Yarowsky, 1995; Wilks and Stevenson, 1999)."
N06-2014,P95-1026,p,"To alleviate this effort, various semi-supervised learning algorithms such as self-training (Yarowsky, 1995), co-training (Blum and Mitchell, 1998; Goldman and Zhou, 2000), transductive SVM (Joachims, 1999) and many others have been proposed and successfully applied under different assumptions and settings." N07-1025,P95-1026,o,"This includes the automatic generation of sense-tagged data using monosemous relatives (Leacock et al. , 1998; Mihalcea and Moldovan, 1999; Agirre and Martinez, 2004), automatically bootstrapped disambiguation patterns (Yarowsky, 1995; Mihalcea, 2002), parallel texts as a way to point out word senses bearing different translations in a second language (Diab and Resnik, 2002; Ng et al. , 2003; Diab, 2004), and the use of volunteer contributions over the Web (Chklovski and Mihalcea, 2002)." N07-1025,P95-1026,p,"This method, initially proposed by (Yarowsky, 1995), was successfully evaluated in the context of the SENSEVAL framework (Mihalcea, 2002)." N07-1025,P95-1026,p,"Among the various knowledge-based (Lesk, 1986; Galley and McKeown, 2003; Navigli and Velardi, 2005) and data-driven (Yarowsky, 1995; Ng and Lee, 1996; Pedersen, 2001) word sense disambiguation methods that have been proposed to date, supervised systems have been constantly observed as leading to the highest performance." N07-1032,P95-1026,o,"A variety of algorithms (e.g. , bootstrapping (Yarowsky, 1995), co-training (Blum and Mitchell, 1998), alternating structure optimization (Ando and Zhang, 2005), etc)." N09-1004,P95-1026,o,"To overcome the knowledge acquisition bottleneck problem suffered by supervised methods, these methods make use of a small annotated corpus as seed data in a bootstrapping process (Hearst, 1991) (Yarowsky, 1995)."
N09-1004,P95-1026,p,"Disambiguation of a limited number of words is not hard, and necessary context information can be carefully collected and hand-crafted to achieve high disambiguation accuracy as shown in (Yarowsky, 1995)." P01-1005,P95-1026,o,"Numerous approaches have been explored for exploiting situations where some amount of annotated data is available and a much larger amount of data exists unannotated, e.g. Marialdo's HMM part-of-speech tagger training (1994), Charniak's parser retraining experiment (1996), Yarowsky's seeds for word sense disambiguation (1995) and Nigam et al's (1998) topic classifier learned in part from unlabelled documents." P01-1005,P95-1026,o,"The more recent set of techniques includes multiplicative weight-update algorithms (Golding and Roth, 1998), latent semantic analysis (Jones and Martin, 1997), transformation-based learning (Mangu and Brill, 1997), differential grammars (Powers, 1997), decision lists (Yarowsky, 1994), and a variety of Bayesian classifiers (Gale et al. , 1993, Golding, 1995, Golding and Schabes, 1996)." P01-1008,P95-1026,o,"This method of co-training has been previously applied to a variety of natural language tasks, such as word sense disambiguation (Yarowsky, 1995), lexicon construction for information extraction (Riloff and Jones, 1999), and named entity classification (Collins and Singer, 1999)." P01-1026,P95-1026,o,"In addition, since word senses are often associated with domains (Yarowsky, 1995), word senses can be consequently distinguished by way of determining the domain of each description." P02-1044,P95-1026,o,"6.2 Experiment 2: Yarowsky's Words We also conducted translation on seven of the twelve English words studied in (Yarowsky, 1995)." P02-1044,P95-1026,o,"Note that the results of MB-D here cannot be directly compared with those in (Yarowsky, 1995), mainly because the data used are different."
P02-1044,P95-1026,o,"Yarowsky (1995) proposes a method for word sense disambiguation, which is based on Monolingual Bootstrapping." P02-1044,P95-1026,o,", Yarowsky 1995) after using an ensemble of NBCs." P02-1044,P95-1026,o,"This implementation is exactly the one proposed in (Yarowsky 1995), and we will denote it as MB-D hereafter." P02-1044,P95-1026,o,"1 Yarowsky (1995) proposes a method for word sense (translation) disambiguation that is based on a bootstrapping technique, which we refer to here as Monolingual Bootstrapping (MB)." P02-1044,P95-1026,o,"This way of creating classified data is similar to that in (Yarowsky, 1995)." P02-1046,P95-1026,o,"Then the initial precision is 1. (Yarowsky, 1995), citing (Yarowsky, 1994), actually uses a superficially different score that is, however, a monotone transform of precision, hence equivalent to precision, since it is used only for sorting." P02-1046,P95-1026,o,"Current work has been spurred by two papers, (Yarowsky, 1995) and (Blum and Mitchell, 1998)." P02-1064,P95-1026,p,"In order to overcome this, some unsupervised learning methods and minimally-supervised methods, e.g., (Yarowsky, 1995; Yarowsky and Wicentowski, 2000), have been proposed." P02-1064,P95-1026,o,"However, few papers in the field of computational linguistics have focused on this approach (Dagan and Engelson, 1995; Thompson et al. , 1999; Ngai and Yarowsky, 2000; Hwa, 2000; Banko and Brill, 2001)." P03-1008,P95-1026,o,"All features encountered in the training data are ranked in the DL (best evidence first) according to the following log-likelihood ratio (Yarowsky, 1995): log [ Pr(reading_i | feature_k) / Σ_{j≠i} Pr(reading_j | feature_k) ]. We estimated probabilities via maximum likelihood, adopting a simple smoothing method (Martinez and Agirre, 2000): 0.1 is added to both the denominator and numerator." P03-1042,P95-1026,o,The word sense disambiguation method proposed in Yarowsky (1995) can also be viewed as a kind of co-training.
P03-1043,P95-1026,o,The tag propagation/elimination scheme is adopted from (Yarowsky 1995). P03-1044,P95-1026,o,"One example is the algorithm for word sense disambiguation in (Yarowsky, 1995)." P04-1037,P95-1026,n,"Supervised approaches which make use of a small hand-labeled training set (Bruce and Wiebe, 1994; Yarowsky, 1993) typically outperform unsupervised approaches (Agirre et al. , 2000; Litkowski, 2000; Lin, 2000; Resnik, 1997; Yarowsky, 1992; Yarowsky, 1995), but tend to be tuned to a specific corpus and are constrained by scarcity of labeled data." P04-1039,P95-1026,o,"Two more recent investigations are by Yarowsky, (Yarowsky, 1995), and later, Mihalcea, (Mihalcea, 2002)." P04-1062,P95-1026,p,"Some tasks can thrive on a nearly pure diet of unlabeled data (Yarowsky, 1995; Collins and Singer, 1999; Cucerzan and Yarowsky, 2003)." P04-3026,P95-1026,o,"Starting from the list of 12 ambiguous words provided by Yarowsky (1995) which is shown in table 2, we created a concordance for each word, with the lines in the concordances each relating to a context window of 20 words." P04-3026,P95-1026,o,"In an attempt to provide a quantitative evaluation of our results, for each of the 12 ambiguous words shown in table 1 we manually assigned the top 30 first-order associations to one of the two senses provided by Yarowsky (1995)." P05-1001,P95-1026,o,"A number of bootstrapping methods have been proposed for NLP tasks (e.g. Yarowsky (1995), Collins and Singer (1999), Riloff and Jones (1999))." P05-1044,P95-1026,n,"Unlike well-known bootstrapping approaches (Yarowsky, 1995), EM and CE have the possible advantage of maintaining posteriors over hidden labels (or structure) throughout learning; bootstrapping either chooses, for each example, a single label, or remains completely agnostic."
P05-1049,P95-1026,o,"They roughly fall into three categories according to what is used for supervision in learning process: (1) using external resources, e.g., thesaurus or lexicons, to disambiguate word senses or automatically generate sense-tagged corpus, (Lesk, 1986; Lin, 1997; McCarthy et al. , 2004; Seo et al. , 2004; Yarowsky, 1992), (2) exploiting the differences between mapping of words to senses in different languages by the use of bilingual corpora (e.g. parallel corpora or untagged monolingual corpora in two languages) (Brown et al. , 1991; Dagan and Itai, 1994; Diab and Resnik, 2002; Li and Li, 2004; Ng et al. , 2003), (3) bootstrapping sense-tagged seed examples to overcome the bottleneck of acquisition of large sense-tagged data (Hearst, 1991; Karov and Edelman, 1998; Mihalcea, 2004; Park et al. , 2000; Yarowsky, 1995)." P05-1049,P95-1026,o,"It has been shown that one sense per discourse property can improve the performance of bootstrapping algorithm (Li and Li, 2004; Yarowsky, 1995)." P05-1049,P95-1026,p,"3.2 Comparison between SVM, Bootstrapping and LP For WSD, SVM is one of the state of the art supervised learning algorithms (Mihalcea et al. , 2004), while bootstrapping is one of the state of the art semi-supervised learning algorithms (Li and Li, 2004; Yarowsky, 1995)." P05-1049,P95-1026,o,"Many methods have been proposed to deal with this problem, including supervised learning algorithms (Leacock et al. , 1998), semi-supervised learning algorithms (Yarowsky, 1995), and unsupervised learning algorithms (Schutze, 1998)."
P06-1027,P95-1026,o,"To compare the performance of different taggers learned by different mechanisms, one can measure the precision, recall and F-measure, given by precision = # correct predictions / # predicted gene mentions, recall = # correct predictions / # true gene mentions, F-measure = 2 × precision × recall / (precision + recall). In our evaluation, we compared the proposed semi-supervised learning approach to the state of the art supervised CRF of McDonald and Pereira (2005), and also to self-training (Celeux and Govaert 1992; Yarowsky 1995), using the same feature set as (McDonald and Pereira 2005)." P06-1027,P95-1026,o,"Many approaches have been proposed for semi-supervised learning in the past, including: generative models (Castelli and Cover 1996; Cohen and Cozman 2006; Nigam et al. 2000), self-learning (Celeux and Govaert 1992; Yarowsky 1995), co-training (Blum and Mitchell 1998), information-theoretic regularization (Corduneanu and Jaakkola 2006; Grandvalet and Bengio 2004), and graph-based transductive methods (Zhou et al. 2004; Zhou et al. 2005; Zhu et al. 2003)." P06-1027,P95-1026,p,"5.1 Comparison to self-training For completeness, we also compared our results to the self-learning algorithm, which has commonly been referred to as bootstrapping in natural language processing and originally popularized by the work of Yarowsky in word sense disambiguation (Abney 2004; Yarowsky 1995)." P06-1031,P95-1026,o,"Equation (3) reads If the target noun appears, then it is distinguished by the majority. The log-likelihood ratio (Yarowsky, 1995) decides in which order rules are applied to the target noun in novel context." P06-1056,P95-1026,o,"Determining the sense of an ambiguous word, using bootstrapping and texts from a different language was done by Yarowsky (1995), Hearst (1991), Diab (2002), and Li and Li (2004)." P06-1056,P95-1026,o,Yarowsky (1995) has used a few seeds and untagged sentences in a bootstrapping algorithm based on decision lists.
P06-1056,P95-1026,o,"Unlike Yarowsky (1995), we use automatic collection of seeds." P06-1058,P95-1026,o,"Yarowsky (1994 and 1995), Mihalcea and Moldovan (2000), and Mihalcea (2002) have made further research to obtain large corpus of higher quality from an initial seed corpus." P06-1072,P95-1026,p,"Annealing resembles the popular bootstrapping technique (Yarowsky, 1995), which starts out aiming for high precision, and gradually improves coverage over time." P06-1089,P95-1026,o,Yarowsky (1995) studied a method for word sense disambiguation using unlabeled data. P06-2022,P95-1026,o,"Yarowsky (1995) uses a conceptually similar technique for WSD that learns from a small set of seed examples and then increases recall by bootstrapping, evaluated on 12 idiosyncratically polysemous words." P06-2065,P95-1026,o,"In cases like (Yarowsky, 1995), unsupervised methods offer accuracy results that rival supervised methods (Yarowsky, 1994) while requiring only a fraction of the data preparation effort." P06-2071,P95-1026,p,"(Yarowsky, 1995) demonstrated that semi-supervised WSD could be successful." P06-2071,P95-1026,o,"Most importantly, whereas the one-sense-per-discourse assumption (Yarowsky, 1995) also applies to discriminating images, there is no guarantee of a local collocational or co-occurrence context around the target image." P06-2071,P95-1026,o,"2 Data and annotation Yahoo!'s image query API was used to obtain a corpus of pairs of semantically ambiguous images, in thumbnail and true size, and their corresponding web sites for three ambiguous keywords inspired by (Yarowsky, 1995): BASS, CRANE, and SQUASH." P06-2077,P95-1026,o,"In this method, the decision list (DL) learning algorithm (Yarowsky, 1995) is used." P06-2077,P95-1026,o,"This improvement is close to that of one sense per discourse (Yarowsky, 1995) (improvement ranging from 1.3% to 1.7%), which seems to be a sensible upper bound of the proposed method."
P06-2077,P95-1026,o,"These instances can be retagged with their countability by using the proposed method and some kind of bootstrapping (Yarowsky, 1995)." P06-2077,P95-1026,o,"Yarowsky (1995) tested the claim on about 37,000 examples and found that when a polysemous word appeared more than once in a discourse, they took on the majority sense for the discourse 99.8% of the time on average." P06-2077,P95-1026,o,"Note that although the source of the data is the same as in Section 5, as Yarowsky (1995) did." P06-2117,P95-1026,o,"Consequently, semi-supervised learning, which combines both labeled and unlabeled data, has been applied to some NLP tasks such as word sense disambiguation (Yarowsky, 1995; Pham et al. , 2005), classification (Blum and Mitchell, 1998; Thorsten, 1999), clustering (Basu et al. , 2004), named entity classification (Collins and Singer, 1999), and parsing (Sarkar, 2001)." P07-1004,P95-1026,o,"3 The Framework 3.1 The Algorithm Our transductive learning algorithm, Algorithm 1, is inspired by the Yarowsky algorithm (Yarowsky, 1995; Abney, 2004)." P07-1006,P95-1026,o,"2 Related Work WSD approaches can be classified as (a) knowledge-based approaches, which make use of linguistic knowledge, manually coded or extracted from lexical resources (Agirre and Rigau, 1996; Lesk 1986); (b) corpus-based approaches, which make use of shallow knowledge automatically acquired from corpus and statistical or machine learning algorithms to induce disambiguation models (Yarowsky, 1995; Schütze 1998); and (c) hybrid approaches, which mix characteristics from the two other approaches to automatically acquire disambiguation models from corpus supported by linguistic knowledge (Ng and Lee 1996; Stevenson and Wilks, 2001)." P07-1109,P95-1026,o,"Thus, we propose a bootstrapping approach (Yarowsky, 1995) to train the stochastic transducer iteratively as it extracts transliterations from a bitext."
P07-1109,P95-1026,p,"In order to overcome this problem, we look to the bootstrapping method outlined in (Yarowsky, 1995)." P07-1125,P95-1026,o,Early work by Yarowsky (1995) falls within this framework. P08-1030,P95-1026,o,"7 Related Work The trigger labeling task described in this paper is in part a task of word sense disambiguation (WSD), so we have used the idea of sense consistency introduced in (Yarowsky, 1995), extending it to operate across related documents." P08-1030,P95-1026,o,"c2008 Association for Computational Linguistics Refining Event Extraction through Cross-document Inference Heng Ji Ralph Grishman Computer Science Department New York University New York, NY 10003, USA (hengji, grishman)@cs.nyu.edu Abstract We apply the hypothesis of One Sense Per Discourse (Yarowsky, 1995) to information extraction (IE), and extend the scope of discourse from one single document to a cluster of topically-related documents." P08-1061,P95-1026,o,"Self-training is a commonly used technique for semi-supervised learning that has been applied to several natural language processing tasks (Yarowsky, 1995; Charniak, 1997; Steedman et al., 2003)." P08-1088,P95-1026,o,"In our context, bootstrapping has a similar motivation to the annealing approach of Smith and Eisner (2006), which also tries to alter the space of hidden outputs in the E-step over time to facilitate learning in the M-step, though of course the use of bootstrapping in general is quite widespread (Yarowsky, 1995)." P09-1095,P95-1026,p,One of the most notable examples is Yarowsky's (1995) bootstrapping algorithm for word sense disambiguation. P09-1117,P95-1026,o,"Self-training (Yarowsky, 1995) is a form of semi-supervised learning." P09-2017,P95-1026,o,"To reduce it we exploit the one sense per collocation property (Yarowsky, 1995)."
P96-1006,P95-1026,o,"Most recently, Yarowsky used an unsupervised learning procedure to perform WSD (Yarowsky, 1995), although this is only tested on disambiguating words into binary, coarse sense distinction." P97-1007,P95-1026,o,"Some of them have been fully tested in real size texts (e.g. statistical methods (Yarowsky, 1992), (Yarowsky, 1994), (Miller and Teibel, 1991), knowledge based methods (Sussna, 1993), (Agirre and Rigau, 1996), or mixed methods (Richardson et al. , 1994), (Resnik, 1995))." P97-1007,P95-1026,o,"(Yarowsky, 1995) reports a success rate of 96% disambiguating twelve words with two clear sense distinctions each one)." P97-1007,P95-1026,o,"Furthermore, it is not possible to apply the powerful ""one sense per discourse"" property (Yarowsky, 1995) because there is no discourse in dictionaries." P97-1009,P95-1026,o,"In Yarowsky's experiment (Yarowsky, 1995), an average of 3936 examples were used to disambiguate between two senses." P97-1009,P95-1026,o,"Yarowsky (Yarowsky, 1995) proposed an unsupervised method that used heuristics to obtain seed classifications and expanded the results to the other parts of the corpus, thus avoided the need to hand-annotate any examples." P98-1037,P95-1026,o,"Evidence have shown that by exploiting the constraint of so-called ""one sense per discourse,"" (Gale, Church and Yarowsky 1992b) and the strategy of bootstrapping (Yarowsky 1995), it is possible to boost coverage, while maintaining about the same level of precision." P98-1037,P95-1026,o,The adaptive approach is somehow similar to their idea of incremental learning and to the bootstrap approach proposed by Yarowsky (1995). P98-1069,P95-1026,o,"This approach has also been used by (Dagan and Itai, 1994; Gale et al. , 1992; Schütze, 1992; Gale et al. , 1993; Yarowsky, 1995; Gale and Church, 1Lunar is not an unknown word in English, Yeltsin finds its translation in the 4-th candidate."
P98-2182,P95-1026,p,"Extracting semantic information from word co-occurrence statistics has been effective, particularly for sense disambiguation (Schütze, 1992; Gale et al. , 1992; Yarowsky, 1995)." P98-2228,P95-1026,p,"Decision lists have already been successfully applied to lexical ambiguity resolution by (Yarowsky, 1995) where they performed well." P98-2228,P95-1026,o,"First, researchers are divided between a general method (that attempts to apply WSD to all the content words of texts, the option taken in this paper) and one that is applied only to a small trial selection of texts words (for example (Schütze, 1992) (Yarowsky, 1995))." P99-1020,P95-1026,o,"WSD that use information gathered from raw corpora (unsupervised training methods) (Yarowsky, 1995) (Resnik, 1997)." P99-1020,P95-1026,p,"Some of the best results were reported in (Yarowsky, 1995) who uses a large training corpus." P99-1043,P95-1026,o,"The results are consistent with the idea in (Gale and Church, 1994; Schütze, 1992; Yarowsky, 1995)." P99-1043,P95-1026,o,"Co-occurrence information between neighboring words and words in the same sentence has been used in phrase extraction (Smadja, 1993; Fung and Wu, 1994), phrasal translation (Smadja et al. , 1996; Kupiec, 1993; Wu, 1995; Dagan and Church, 1994), target word selection (Liu and Li, 1997; Tanaka and Iwasaki, 1996), domain word translation (Fung and Lo, 1998; Fung, 1998), sense disambiguation (Brown et al. , 1991; Dagan et al. , 1991; Dagan and Itai, 1994; Gale et al. , 1992a; Gale et al. , 1992b; Gale et al. , 1992c; Schütze, 1992; Gale et al. , 1993; Yarowsky, 1995), and even recently for query translation in cross-language IR as well (Ballesteros and Croft, 1998)." P99-1043,P95-1026,o,"Co-occurrence statistics is collected from either bilingual parallel and non-parallel corpora (Smadja et al.
, 1996; Kupiec, 1993; Wu, 1995; Tanaka and Iwasaki, 1996; Fung and Lo, 1998), or monolingual corpora (Smadja, 1993; Fung and Wu, 1994; Liu and Li, 1997; Schütze, 1992; Yarowsky, 1995)." P99-1043,P95-1026,o,"(Schütze, 1992; Yarowsky, 1995) all use multiple context words as discriminating features." W00-1320,P95-1026,o,"Finally, we would like to investigate the incorporation of unsupervised methods for WSD, such as the heuristically-based methods of (Stetina and Nagao, 1997) and (Stetina et al. , 1998), and the theoretically purer bootstrapping method of (Yarowsky, 1995)." W00-1320,P95-1026,o,"(Yarowsky, 1995) also uses wide context, but incorporates the one-sense-per-discourse and one-sense-per-collocation constraints, using an unsupervised learning technique." W00-1326,P95-1026,p,"They have been successfully applied to accent restoration, word sense disambiguation and homograph disambiguation (Yarowsky, 1994; 1995; 1996)." W01-1208,P95-1026,o,"Since word senses are often associated with domains (Yarowsky, 1995), word senses can be consequently distinguished by way of determining the domain of each description." W02-0903,P95-1026,o,"In the last decade or so research on lexical semantics has focused more on sub-problems like word sense disambiguation (Yarowsky, 1995; Stevenson and Wilks, 2001), named entity recognition (Collins and Singer, 1999), and vocabulary construction for information extraction (Riloff, 1996)." W02-0903,P95-1026,o,"Collocations have been widely used for tasks such as word sense disambiguation (WSD) (Yarowsky, 1995), information extraction (IE) (Riloff, 1996), and named-entity recognition (Collins and Singer, 1999)." W02-1304,P95-1026,o,"In another line of research, (Yarowsky, 1995) and (Blum and Mitchell, 1998) have shown that it is possible to reduce the need for supervision with the help of large amounts of unannotated data."
W03-0106,P95-1026,o,Recent work emphasizes a corpus-based unsupervised approach [Dagan and Itai 1994; Yarowsky 1992; Yarowsky 1995] that avoids the need for costly truthed training data. W03-0107,P95-1026,o,"Bootstrapping methods similar to ours have been shown to be competitive in word sense disambiguation (Yarowsky and Florian, 2003; Yarowsky, 1995)." W03-0406,P95-1026,p,"To overcome this problem, unsupervised learning methods using huge unlabeled data to boost the performance of rules learned by small labeled data have been proposed recently (Blum and Mitchell, 1998)(Yarowsky, 1995)(Park et al. , 2000)(Li and Li, 2002)." W03-0406,P95-1026,o,"Yarowsky proposed the unsupervised learning method for WSD (Yarowsky, 1995)." W03-0407,P95-1026,o,"1 Introduction Co-training (Blum and Mitchell, 1998), and several variants of co-training, have been applied to a number of NLP problems, including word sense disambiguation (Yarowsky, 1995), named entity recognition (Collins and Singer, 1999), noun phrase bracketing (Pierce and Cardie, 2001) and statistical parsing (Sarkar, 2001; Steedman et al. , 2003)." W03-0417,P95-1026,o,Yarowsky (1995) presented an approach that significantly reduces the amount of labeled data needed for word sense disambiguation. W03-0427,P95-1026,o,"Not unlike (Yarowsky, 1995) we use confidence of our classifier on unannotated data to enrich itself; that is, by adding confidently-classified instances to the memory." W03-0601,P95-1026,o,", 1998; Traupman and Wilensky, 2003; Yarowsky, 1995)." W03-1015,P95-1026,o,See Yarowsky (1995) for details. W03-1027,P95-1026,o,"In order to overcome this, several methods are proposed, including minimally-supervised learning methods (e.g. , (Yarowsky, 1995; Blum and Mitchell, 1998)), and active learning methods (e.g. , (Thompson et al. , 1999; Sassano, 2002))." W03-1302,P95-1026,o,Yarowsky (1995) used the one sense per collocation property as an essential ingredient for an unsupervised Word-Sense Disambiguation algorithm.
W03-1315,P95-1026,o,"For the former we made use of Decision Lists similar to Yarowsky's method for Word Sense Disambiguation (WSD) (Yarowsky, 1995)." W03-1315,P95-1026,o,"For this reason, name classification has been studied in solving the named entity extraction task in the NLP and information extraction communities (see, for example, (Collins and Singer, 1999; Cucerzan and Yarowsky, 1999) and various approaches reported in the MUC conferences (MUC-6, 1995))." W03-1315,P95-1026,o,"In the WSD work involving the use of context, we can find two approaches: one that uses few strong contextual evidences for disambiguation purposes, as exemplified by (Yarowsky, 1995); and the other that uses weaker evidences but considers a combination of a number of them, as exemplified by (Gale et al. , 1992)." W03-1611,P95-1026,p,"To solve this problem, we adopt an idea one sense per collocation which was introduced in word sense disambiguation research (Yarowsky, 1995)." W03-1702,P95-1026,p,"The approach is very general and modular and can work in conjunction with a number of learning strategies for word sense disambiguation (Yarowsky, 1995; Li and Li, 2002)." W03-1702,P95-1026,o,Yarowsky (1995) showed that the learning strategy of bootstrapping from small tagged data led to results rivaling supervised training methods. W04-0813,P95-1026,o,"The Decision List (DL) algorithm is described in (Yarowsky, 1995b)." W04-0813,P95-1026,p,"4.1 Methods and Parameters DL: On Senseval-2 data, we observed that DL improved significantly its performance with a smoothing technique based on (Yarowsky, 1995a)." W04-0846,P95-1026,o,These include the bootstrapping approach [Yarowsky 1995] and the context clustering approach [Schutze 1998]. W04-2312,P95-1026,o,Yarowsky (1995) used both supervised and unsupervised WSD for correct phonetization of words in speech synthesis.
W04-2312,P95-1026,o,"2.1 Data-based Methods Data-based approaches extract their information directly from texts and are divided into supervised and unsupervised methods (Yarowsky, 1995; Stevenson, 2003)." W04-2402,P95-1026,p,"We also note that there are a number of bootstrapping methods successfully applied to text e.g., word sense disambiguation (Yarowsky, 1995), named entity instance classification (Collins and Singer, 1999), and the extraction of parts word given the whole word (Berland and Charniak, 1999)." W04-2808,P95-1026,o,Yarowsky (1995) used both supervised and unsupervised WSD for correct phonetization of words in speech synthesis. W04-2808,P95-1026,o,"Data-based Methods Data-based approaches extract their information directly from texts and are divided into supervised and unsupervised methods (Yarowsky, 1995; Stevenson, 2003)." W04-2808,P95-1026,o,"Many of these tasks have been addressed in other fields, for example, hypothesis verification in the field of machine translation (Tran et al. , 1996), sense disambiguation in speech synthesis (Yarowsky, 1995), and relation tagging in information retrieval (Marsh and Perzanowski, 1999)." W05-0605,P95-1026,o,These include the bootstrapping approach (Yarowsky 1995) and the context clustering approach (Schütze 1998). W05-0605,P95-1026,o,"For example, (Yarowsky 1995) only requires sense number and a few seeds for each sense of an ambiguous word (hereafter called keyword)." W05-1006,P95-1026,o,"Even for semantically predictable phrases, the fact that the words occur in fixed patterns can be very useful for the purposes of disambiguation, as demonstrated by (Yarowsky, 1995)." W06-0208,P95-1026,p,"7 Related Work Unannotated texts have been used successfully for a variety of NLP tasks, including named entity recognition (Collins and Singer, 1999), subjectivity classification (Wiebe and Riloff, 2005), text classification (Nigam et al. , 2000), and word sense disambiguation (Yarowsky, 1995)."
W06-0208,P95-1026,o,"Many recent approaches in natural language processing (Yarowsky, 1995; Collins and Singer, 1999; Riloff and Jones, 1999; Nigam et al. , 2000; Wiebe and Riloff, 2005) have recognized the need to use unannotated data to improve performance." W06-0505,P95-1026,o,"This task is closely related to both named entity recognition (NER), which traditionally assigns nouns to a small number of categories and word sense disambiguation (Agirre and Rigau, 1996; Yarowsky, 1995), where the sense for a word is chosen from a much larger inventory of word senses." W06-1649,P95-1026,o,"The information for semi-supervised sense disambiguation is usually obtained from bilingual corpora (e.g. parallel corpora or untagged monolingual corpora in two languages) (Brown et al. , 1991; Dagan and Itai, 1994), or sense-tagged seed examples (Yarowsky, 1995)." W06-1649,P95-1026,o,"Many corpus based methods have been proposed to deal with the sense disambiguation problem when given definition for each possible sense of a target word or a tagged corpus with the instances of each possible sense, e.g., supervised sense disambiguation (Leacock et al. , 1998), and semi-supervised sense disambiguation (Yarowsky, 1995)." W06-1665,P95-1026,o,"The principle of our approach is more similar to (Yarowsky, 1995)." W06-1665,P95-1026,o,"The proposed approach follows the same principle as (Yarowsky, 1995), which tried to determine the appropriate word sense according to one relevant context word." W06-2204,P95-1026,o,"Several approaches for learning from both labeled and unlabeled data have been proposed (Yarowsky, 1995; Blum and Mitchell, 1998; Collins and Singer, 1999) where the unlabeled data is utilised to boost the performance of the algorithm." W06-2207,P95-1026,n,"Although a rich literature covers bootstrapping methods applied to natural language problems (Yarowsky, 1995; Riloff, 1996; Collins and Singer, 1999; Yangarber et al.
, 2000; Yangarber, 2003; Abney, 2004) several questions remain unanswered when these methods are applied to syntactic or semantic pattern acquisition." W06-2207,P95-1026,o,"Several approaches have been proposed in the context of word sense disambiguation (Yarowsky, 1995), named entity (NE) classification (Collins and Singer, 1999), pattern acquisition for IE (Riloff, 1996; Yangarber, 2003), or dimensionality reduction for text categorization (TC) (Yang and Pedersen, 1997)." W06-2207,P95-1026,o,"Similarly to (Collins and Singer, 1999; Yarowsky, 1995), we define the strength of a pattern p in a category y as the precision of p in the set of documents labeled with category y, estimated using Laplace smoothing: strength(p,y) = (count(p,y) + ε) / (count(p) + kε) (3) where count(p,y) is the number of documents labeled y containing pattern p, count(p) is the overall number of labeled documents containing p, and k is the number of domains." W07-2051,P95-1026,o,3.1 Collocation Features The collocation features were inspired by the one-sense-per-collocation heuristic proposed by Yarowsky (1995). W09-1116,P95-1026,p,The notion that nouns have only one sense per discourse/collocation was also exploited by Yarowsky (1995) in his seminal work on bootstrapping for word sense disambiguation. W09-1705,P95-1026,o,"The intuition is that the produced clusters will be less sense-conflating than those produced by other graph-based approaches, since collocations provide strong and consistent clues to the senses of a target word (Yarowsky, 1995)." W09-1705,P95-1026,o,"We observe that the tagging method exploits the one sense per collocation property (Yarowsky, 1995), which means that WSD based on collocations is probably finer than WSD based on simple words, since ambiguity is reduced (Klapaftis and Manandhar, 2008)."
W09-2207,P95-1026,o,"Bootstrapping techniques have been used for such diverse NLP problems as: word sense disambiguation (Yarowsky, 1995), named entity classification (Collins and Singer, 1999), IE pattern acquisition (Riloff, 1996; Yangarber et al., 2000; Yangarber, 2003; Stevenson and Greenwood, 2005), document classification (Surdeanu et al., 2006), fact extraction from the web (Pasca et al., 2006) and hyponymy relation extraction (Kozareva et al., 2008)." W09-2207,P95-1026,o,"(Yarowsky, 1995) used bootstrapping to train decision list classifiers to disambiguate between two senses of a word, achieving impressive classification accuracy." W09-2208,P95-1026,o,"Algorithms such as co-training (Blum and Mitchell, 1998)(Collins and Singer, 1999)(Pierce and Cardie, 2001) and the Yarowsky algorithm (Yarowsky, 1995) make assumptions about the data that permit such an approach." W09-2208,P95-1026,o,"2 Related Work The Yarowsky algorithm (Yarowsky, 1995), originally proposed for word sense disambiguation, makes the assumption that it is very unlikely for two occurrences of a word in the same discourse to have different senses." W09-2208,P95-1026,o,"Collins et al. (Collins and Singer, 1999) proposed two algorithms for NER by modifying Yarowsky's method (Yarowsky, 1995) and the framework suggested by (Blum and Mitchell, 1998)." W09-2404,P95-1026,p,Yarowsky (1995) successfully used this observation as an approximate annotation technique in an unsupervised WSD model. W09-2404,P95-1026,p,"(1992b) has proved to be a simple yet powerful observation and has been successfully used in word sense disambiguation (WSD) and related tasks (e.g., Yarowsky (1995); Agirre and Rigau)."
W96-0104,P95-1026,o,"The original training set (before the addition of the feedback sets) consisted of a few dozen examples, in comparison to thousands of examples needed in other corpus-based methods (Schutze, 1992; Yarowsky, 1995)." W96-0104,P95-1026,o,"In comparison, (Yarowsky, 1995) achieved ... [Table 1: A summary of the experimental results on four polysemous words]." W96-0104,P95-1026,o,"Recently, Yarowsky (1995) combined a MRD and a corpus in a bootstrapping process." W97-0108,P95-1026,o,"Attempts to alleviate this tag bottleneck include bootstrapping (Te~ ot ill, 1996; Hearst, 1991) and unsupervised algorithms (Yarowsky, 1995). Dictionary-based approaches rely on linguistic knowledge sources such as machine-readable dictionaries (Luk, 1995; Veronis and Ide, 1990) and WordNet (Agirre and Rigau, 1996; Resnik, 1995) and exploit these for word sense disambiguation." W97-0108,P95-1026,p,"Unsupervised algorithms such as (Yarowsky, 1995) have reported good accuracy that rivals that of supervised algorithms." W97-0201,P95-1026,o,"Similarly, if the task is to distinguish between binary, coarse sense distinction, then current WSD techniques can achieve very high accuracy (in excess of 96% when tested on a dozen words in (Yarowsky, 1995))." W97-0201,P95-1026,o,"Similarly, (Yarowsky, 1995) tested his WSD algorithm on a dozen words." W97-0208,P95-1026,p,"The best examples of this approach has been the recent work of Yarowsky (Yarowsky, 1992), (Yarowsky, 1993), (Yarowsky, 1995)." W97-0321,P95-1026,o,"Recently, some kinds of learning techniques have been applied to cumulatively acquire exemplars from large corpora (Yarowsky, 1994, 1995)." W97-0321,P95-1026,o,"Introduction Word sense disambiguation has long been one of the major concerns in natural language processing area (e.g. , Bruce et al. , 1994; Choueka et al. , 1985; Gale et al.
, 1993; McRoy, 1992; Yarowsky 1992, 1994, 1995), whose aim is to identify the correct sense of a word in a particular context, among all of its senses defined in a dictionary or a thesaurus." W97-0322,P95-1026,o,"In future work, we will expand all of the above types of features and employ techniques to reduce dimensionality along the lines suggested in (Duda and Hart, 1973) and (Gale, Church, and Yarowsky, 1995)." W97-0322,P95-1026,o,"A more recent bootstrapping approach is described in (Yarowsky, 1995)." W97-0322,P95-1026,p,"While (Yarowsky, 1995) does not discuss distinguishing more than 2 senses of a word, there is no immediate reason to doubt that the ""one sense per collocation"" rule (Yarowsky, 1993) would still hold for a larger number of senses." W97-0322,P95-1026,p,"(Yarowsky, 1995) compares his method to (Schütze, 1992) and shows that for four words the former performs significantly better in distinguishing between two senses." W97-0322,P95-1026,o,"7.3 EM algorithm The only other application of the EM algorithm to word-sense disambiguation is described in (Gale, Church, and Yarowsky, 1995)." W97-0808,P95-1026,o,The other approach selected was Yarowsky's unsupervised algorithm (1995). W97-0812,P95-1026,o,"(1992), Yarowsky (1995), and Karov & Edelman (1996) where strong reliance on statistical techniques for the calculation of word and context similarity commands large source corpora." W97-1004,P95-1026,o,"1 Introduction Word compositions have long been a concern in lexicography (Benson et al. 1986; Miller et al. 1995), and now as a specific kind of lexical knowledge, it has been shown that they have an important role in many areas in natural language processing, e.g., parsing, generation, lexicon building, word sense disambiguation, and information retrieving, etc. (e.g. , Abney 1989, 1990; Benson et al. 1986; Yarowsky 1995; Church and Hanks 1989; Church, Gale, Hanks, and Hindle 1989)."
W98-0701,P95-1026,o,"6 Discourse Context (Yarowsky, 1995) pointed out that the sense of a target word is highly consistent within any given document (one sense per discourse)." W98-0701,P95-1026,p,"(Yarowsky, 1995), whose training corpus for the noun drug was 9 times bigger than that of Karov and Edelman, reports 91.4% correct performance improved to impressive 93.9% when using the ""one sense per discourse"" constraint." W98-0703,P95-1026,o,WSD that use information gathered from raw corpora (unsupervised training methods) (Yarowsky 1995) (Resnik 1997). W99-0903,P95-1026,n,"However, our system is the unsupervised learning with small POS-tagged corpus, and we do not restrict the word's sense set within either binary senses (Yarowsky, 1995; Karov, 1998) or dictionary's homograph level (Wilks, 1997)." W99-0903,P95-1026,o,"Recently, many works combined a MRD and a corpus for word sense disambiguation (Karov, 1998; Luk, 1995; Ng, 1996; Yarowsky, 1995)." W99-0903,P95-1026,o,"In (Yarowsky, 1995), the definition words were used as initial sense indicators, automatically tagging the target word examples containing them." W99-0905,P95-1026,p,"Among them, the unsupervised algorithm using decision trees (Yarowsky, 1995) has achieved promising performance." W99-0908,P95-1026,o,The preliminary labeling by keyword matching used in this paper is similar to the seed collocations used by Yarowsky (1995). A00-2005,P97-1003,o,"The parser induction algorithm used in all of the experiments in this paper was a distribution of Collins's model 2 parser (Collins, 1997)." A00-2016,P97-1003,o,"Training on about 40,000 sentences (Collins, 1997) achieves a crossing brackets rate of 1.07, a better value than our 1.63 value for regular parsing or the 1.13 value assuming perfect segmentation/tagging, but even for similar text types, comparisons across languages are of course problematic."
A00-2030,P97-1003,o,"1 Introduction Since 1995, a few statistical parsing algorithms (Magerman, 1995; Collins, 1996 and 1997; Charniak, 1997; Ratnaparkhi, 1997) demonstrated a breakthrough in parsing accuracy, as measured against the University of Pennsylvania TREEBANK as a gold standard." A00-2030,P97-1003,o,"Finally, our newly constructed parser, like that of (Collins 1997), was based on a generative statistical model." A00-2030,P97-1003,o,"7 Model Structure In our statistical model, trees are generated according to a process similar to that described in (Collins 1996, 1997)." A00-2031,P97-1003,o,"[Figure 1: Penn treebank function tags; Figure 2: Categories of function tags and their relative frequencies] one project that used them at all: (Collins, 1997) defines certain constituents as complements based on a combination of label and function tag information." A00-2031,P97-1003,o,"1 Introduction Parsing sentences using statistical information gathered from a treebank was first examined a decade ago in (Chitrao and Grishman, 1990) and is by now a fairly well-studied problem ((Charniak, 1997), (Collins, 1997), (Ratnaparkhi, 1997))."
A00-2036,P97-1003,n,"Bilexical context-free grammars have been presented in (Eisner and Satta, 1999) as an abstraction of language models that have been adopted in several recent real-world parsers, improving state-of-the-art parsing accuracy (Alshawi, 1996; Eisner, 1996; Charniak, 1997; Collins, 1997)." C00-1011,P97-1003,o,"40,000 sentences) and section 23 for testing (see Collins 1997, 1999; Charniak 1997, 2000; Ratnaparkhi 1999); we only tested on sentences ≤ 40 words (2245 sentences)." C00-1011,P97-1003,o,"As in other work, we collapsed ADVP and PRT to the same label when calculating these scores (see Collins 1997; Ratnaparkhi 1999; Charniak 1997)." C00-1011,P97-1003,n,"These scores are higher than those of several other parsers (e.g. Collins 1997, 99; Charniak 1997), but remain behind the scores of Charniak (2000) who obtains 90.1% LP and 90.1% LR for sentences ≤ 40 words." C00-1011,P97-1003,o,"It also shows that DOP's frontier lexicalization is a viable alternative to constituent lexicalization (as proposed in Charniak 1997; Collins 1997, 99; Eisner 1997)." C00-1023,P97-1003,o,"research on developing SDS for home-care and tele-care applications. Examples include scheduling appointments over the phone (Zajicek et al. 2004, Wolters et al., submitted), interactive reminder systems (Pollack, 2005), symptom management systems (Black et al. 2005) or environmental control systems (Clarke et al. 2005)." L08-1063,W04-1013,o,"home-care and tele-care applications. Examples include scheduling appointments over the phone (Zajicek et al. 2004, Wolters et al., submitted), interactive reminder systems (Pollack, 2005), symptom management systems (Black et al. 2005) or environmental control systems (Clarke et al. 2005)." N06-2006,W04-1013,o,"3.2 ROUGE Version 1.5.5 of the ROUGE scoring algorithm (Lin, 2004) is also used for evaluating results." N07-1005,W04-1013,o,"Many methods for calculating the similarity have been proposed (Niessen et al. , 2000; Akiba et al. , 2001; Papineni et al. , 2002; NIST, 2002; Leusch et al. , 2003; Turian et al. , 2003; Babych and Hartley, 2004; Lin and Och, 2004; Banerjee and Lavie, 2005; Gimenez et al. , 2005)." N07-1005,W04-1013,o,"In our research, 23 scores, namely BLEU (Papineni et al. , 2002) with maximum n-gram lengths of 1, 2, 3, and 4, NIST (NIST, 2002) with maximum n-gram lengths of 1, 2, 3, 4, and 5, GTM (Turian et al. , 2003) with exponents of 1.0, 2.0, and 3.0, METEOR (exact) (Banerjee and Lavie, 2005), WER (Niessen et al. , 2000), PER (Leusch et al. , 2003), and ROUGE (Lin, 2004) with n-gram lengths of 1, 2, 3, and 4 and 4 variants (LCS, S, SU, W-1.2), were used to calculate each similarity S_i. Therefore, the value of m in Eq." N07-1005,W04-1013,o,"In recent years, many researchers have tried to automatically evaluate the quality of MT and improve the performance of automatic MT evaluations (Niessen et al. , 2000; Akiba et al. , 2001; Papineni et al. , 2002; NIST, 2002; Leusch et al. , 2003; Turian et al. , 2003; Babych and Hartley, 2004; Lin and Och, 2004; Banerjee and Lavie, 2005; Gimenez et al. , 2005) because improving the performance of automatic MT evaluation is expected to enable us to use and improve MT systems efficiently."
N09-1041,W04-1013,o,"Note that sentence extraction does not solve the problem of selecting and ordering summary sentences to form a coherent summary. There are several approaches to modeling document content: simple word frequency-based methods (Luhn, 1958; Nenkova and Vanderwende, 2005), graph-based approaches (Radev, 2004; Wan and Yang, 2006), as well as more linguistically motivated techniques (Mckeown et al., 1999; Leskovec et al., 2005; Harabagiu et al., 2007)." N09-1041,W04-1013,o,"All topic models utilize Gibbs sampling for inference (Griffiths, 2002; Blei et al., 2004)." N09-1041,W04-1013,o,"Automated evaluation will utilize the standard DUC evaluation metric ROUGE (Lin, 2004) which represents recall over various n-gram statistics from a system-generated summary against a set of human-generated peer summaries. We compute ROUGE scores with and without stop words removed from peer and proposed summaries." N09-1041,W04-1013,o,"Official DUC scoring utilizes the jackknife procedure and assesses significance using bootstrapping resampling (Lin, 2004)." N09-1041,W04-1013,o,"This result is presented as 0.053 with the official ROUGE scorer (Lin, 2004)." N09-1066,W04-1013,o,"6.1.1 Nugget-Based Pyramid Evaluation For our first approach we used a nugget-based evaluation methodology (Lin and Demner-Fushman, 2006; Nenkova and Passonneau, 2004; Hildebrandt et al., 2004; Voorhees, 2003)." N09-1066,W04-1013,o,"6.1.2 ROUGE evaluation Table 4 presents ROUGE scores (Lin, 2004) of each of human-generated 250-word surveys against each other." N09-1066,W04-1013,o,"Our aim is not only to determine the utility of citation texts for survey creation, but also to examine the quality distinctions between this form of input and others such as abstracts and full texts, comparing the results to human-generated surveys using both automatic and nugget-based pyramid evaluation (Lin and Demner-Fushman, 2006; Nenkova and Passonneau, 2004; Lin, 2004)."
P04-1077,W04-1013,p,"ROUGE-L, ROUGE-W, and ROUGE-S have also been applied in automatic evaluation of summarization and achieved very promising results (Lin 2004)." P04-1077,W04-1013,o,"In Lin and Och (2004), we proposed a framework that automatically evaluated automatic MT evaluation metrics using only manual translations without further human involvement." P06-1139,W04-1013,o,"This evaluation shows that our WIDL-based approach to generation is capable of obtaining headlines that compare favorably, in both content and fluency, with extractive, state-of-the-art results (Zajic et al. , 2004), while it outperforms a previously-proposed abstractive system by a wide margin (Zhou and Hovy, 2003)." P06-1139,W04-1013,o,"When evaluated against the state-of-the-art, phrase-based decoder Pharaoh (Koehn, 2004), using the same experimental conditions: translation table trained on the FBIS corpus (7.2M Chinese words and 9.2M English words of parallel text), trigram language model trained on 155M words of English newswire, interpolation weights a65 (Equation 2) trained using discriminative training (Och, 2003) (on the 2002 NIST MT evaluation set), probabilistic beam a90 set to 0.01, histogram beam a58 set to 10 and BLEU (Papineni et al. , 2002) as our metric, the WIDL-NGLM-Aa86 a129 algorithm produces translations that have a BLEU score of 0.2570, while Pharaoh translations have a BLEU score of 0.2635." P06-1139,W04-1013,o,"We automatically measure performance by comparing the produced headlines against one reference headline produced by a human using ROUGEa129 (Lin, 2004)." P06-2078,W04-1013,o,"using Spearman's rank correlation coefficient and Pearson's rank correlation coefficient (Lin et al. , 2003, Lin, 2004, Hirao et al. , 2005)." P06-2078,W04-1013,o,"(Donaway et al. , 2000, Hirao et al. , 2005, Lin et al. , 2003, Lin, 2004, Hori et al. , 2003) and manual methods" P06-2078,W04-1013,o,"We also tested other automatic methods: content-based evaluation, BLEU (Papineni et al.
, 2001) and ROUGE-1 (Lin, 2004), and compared their results with that of evaluation by revision as reference." P06-2078,W04-1013,o,"We tested several measures, such as ROUGE (Lin, 2004) and the cosine distance." P06-2078,W04-1013,o,"ROUGE-N (Lin, 2004) This measure compares n-grams of two summaries, and counts the number of matches." P06-2078,W04-1013,o,"ROUGE-L (Lin, 2004) This measure evaluates summaries by longest common subsequence (LCS) defined by Equation 4." P06-2078,W04-1013,o,"ROUGE-S (Lin, 2004) Skip-bigram is any pair of words in their sentence order, allowing for arbitrary gaps." P06-2078,W04-1013,o,"In the following, ROUGE-SN denotes ROUGE-S with maximum skip distance N. ROUGE-SU (Lin, 2004) This measure is an extension of ROUGE-S; it adds a unigram as a counting unit." P06-2109,W04-1013,o,"ROUGE (Lin, 2004) is a set of recall-based criteria that is mainly used for evaluating summarization tasks." P06-2109,W04-1013,o,"ROUGE-L and ROUGE-1 are supposed to be appropriate for the headline generation task (Lin, 2004)." P07-2049,W04-1013,o,"For evaluation we use ROUGE (Lin, 2004) SU4 recall metric, which was among the official automatic evaluation metrics for DUC." P08-1094,W04-1013,o,"The idea of topic signature terms was introduced by Lin and Hovy (Lin and Hovy, 2000) in the context of single document summarization, and was later used in several multi-document summarization systems (Conroy et al., 2006; Lacatusu et al., 2004; Gupta et al., 2007)." P08-1094,W04-1013,p,"The routinely used tool for automatic evaluation ROUGE was adopted exactly because it was demonstrated it is highly correlated with the manual DUC coverage scores (Lin and Hovy, 2003a; Lin, 2004)." P08-2003,W04-1013,p,"We carried out automatic evaluation of our summaries using ROUGE (Lin, 2004) toolkit, which has been widely adopted by DUC for automatic summarization evaluation."
P08-2005,W04-1013,o,"We use ROUGE (Lin, 2004) to assess summary quality using common n-gram counts and longest common subsequence (LCS) measures." P08-2005,W04-1013,p,"We report on ROUGE-1 (unigrams), ROUGE-2 (bigrams), ROUGE W-1.2 (weighted LCS), and ROUGE-S* (skip bigrams) as they have been shown to correlate well with human judgments for longer multidocument summaries (Lin, 2004)." P08-2051,W04-1013,o,"Different approaches have been proposed to measure matches using words or more meaningful semantic units, for example, ROUGE (Lin, 2004), factoid analysis (Teufel and Halteren, 2004), pyramid method (Nenkova and Passonneau, 2004), and Basic Element (BE) (Hovy et al., 2006)." P08-2051,W04-1013,o,"3.2 Automatic ROUGE Evaluation ROUGE (Lin, 2004) measures the n-gram match between system generated summaries and human summaries." P08-2051,W04-1013,p,"ROUGE (Lin, 2004) has been widely used for summarization evaluation." P08-2051,W04-1013,p,"In the news article domain, ROUGE scores have been shown to be generally highly correlated with human evaluation in content match (Lin, 2004)." P08-2052,W04-1013,o,"With our best performing features, we get ROUGE-2 (Lin, 2004) scores of 0.11 and 0.0925 on 2007 and 2006. This threshold was derived experimentally with previous data." P09-1022,W04-1013,o,"(2004) applied to the output of the reranking parser of Charniak and Johnson (2005), whereas in BE (in the version presented here) dependencies are generated by the Minipar parser (Lin, 1995)." P09-1022,W04-1013,n,"Despite relying on the same concept, our approach outperforms BE in most comparisons, and it often achieves higher correlations with human judgments than the string-matching metric ROUGE (Lin, 2004)." P09-1022,W04-1013,o,"In TAC 2008 Summarization track, all submitted runs were scored with the ROUGE (Lin, 2004) and Basic Elements (BE) metrics (Hovy et al., 2005)."
P09-1024,W04-1013,o,"These domains have been commonly used in prior work on summarization (Weischedel et al., 2004; Zhou et al., 2004; Filatova and Prager, 2005; Demner-Fushman and Lin, 2007; Biadsy et al., 2008)." P09-1024,W04-1013,o,"We use the publicly available ROUGE toolkit (Lin, 2004) to compute recall, precision, and F-score for ROUGE-1." P09-1024,W04-1013,o,"There has been a sizable amount of research on structure induction ranging from linear segmentation (Hearst, 1994) to content modeling (Barzilay and Lee, 2004)." P09-1062,W04-1013,o,"As such, we quantify success based on ROUGE (Lin, 2004) scores." P09-1099,W04-1013,o,"Automated evaluation metrics that rate system behaviour based on automatically computable properties have been developed in a number of other fields: widely used measures include BLEU (Papineni et al., 2002) for machine translation and ROUGE (Lin, 2004) for summarisation, for example." P09-2025,W04-1013,o,"ROUGE (Lin, 2004) is an evaluation metric designed to evaluate automatically generated summaries." P09-2027,W04-1013,p,"The ROUGE (Lin, 2004) suite of metrics are n-gram overlap based metrics that have been shown to highly correlate with human evaluations on content responsiveness." P09-2066,W04-1013,o,"For a comparison, we also include the ROUGE-1 F-scores (Lin, 2004) of each system output against the human compressed sentences." P09-2083,W04-1013,o,"2 Automatic Annotation Schemes Using ROUGE Similarity Measures ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is an automatic tool to determine the quality of a summary using a collection of measures ROUGE-N (N=1,2,3,4), ROUGE-L, ROUGE-W and ROUGE-S which count the number of overlapping units such as n-gram, word-sequences, and word-pairs between the extract and the abstract summaries (Lin, 2004)." P09-2083,W04-1013,o,"We evaluate the system generated summaries using the automatic evaluation toolkit ROUGE (Lin, 2004)." W05-0901,W04-1013,o,"(Lin, 2004; Lin and Och, 2004)."
W05-0907,W04-1013,o,"5 Related work The methodology which is closest to our framework is ORANGE (Lin, 2004a), which evaluates a similarity metric using the average ranks obtained by reference items within a baseline set." W05-0907,W04-1013,o,"(Lin, 2004b)." W06-0707,W04-1013,o,"Additionally, automatic evaluation of content coverage using ROUGE (Lin, 2004) was explored in 2004." W06-1401,W04-1013,p,"We can credit DUC with the emergence of automatic methods for evaluation such as ROUGE (Lin and Hovy, 2003; Lin, 2004) which allow quick measurement of systems during development and enable evaluation of larger amounts of data." W06-1643,W04-1013,p,"Two metrics have become quite popular in multi-document summarization, namely the Pyramid method (Nenkova and Passonneau, 2004b) and ROUGE (Lin, 2004)." W06-1643,W04-1013,o,"Empirical evaluations using two standard summarization metrics, the Pyramid method (Nenkova and Passonneau, 2004b) and ROUGE (Lin, 2004), show that the best performing system is a CRF incorporating both order-2 Markov dependencies and skip-chain dependencies, which achieves 91.3% of human performance in Pyramid score, and outperforms our best-performing non-sequential model by 3.9%." W06-1643,W04-1013,o,"To find these pairs automatically, we trained a non-sequential log-linear model that achieves a .902 accuracy (Galley et al. , 2004)." W06-1643,W04-1013,o,"Most previous work with CRFs containing nonlocal dependencies used approximate probabilistic inference techniques, including TRP (Sutton and McCallum, 2004) and Gibbs sampling (Finkel et al. , 2005)." W07-1411,W04-1013,o,"We have implemented them as defined in (Lin, 2004)." W08-0127,W04-1013,o,"e.g. BLEU (Papineni et al., 2001) for machine translation, ROUGE (Lin, 2004) for summarization."
W08-1106,W04-1013,o,"There are also automatic methods for summary evaluation, such as ROUGE (Lin, 2004), which gives a score based on the similarity in the sequences of words between a human-written model summary and the machine summary." W08-1113,W04-1013,o,"Such metrics have been introduced in other fields, including PARADISE (Walker et al., 1997) for spoken dialogue systems, BLEU (Papineni et al., 2002) for machine translation, and ROUGE (Lin, 2004) for summarisation." W08-1406,W04-1013,n,"In what concerns the evaluation process, although ROUGE (Lin, 2004) is the most common evaluation metric for the automatic evaluation of summarization, since our approach might introduce in the summary information that it is not present in the original input source, we found that a human evaluation was more adequate to assess the relevance of that additional information." W08-1407,W04-1013,o,"5 Results The model summaries were compared against 24 summaries generated automatically using SUMMA by calculating ROUGE-1 to ROUGE-4, ROUGE-L and ROUGE-W-1.2 recall metrics (Lin, 2004)." W08-1407,W04-1013,o,"We use SUMMA (Saggion and Gaizauskas, 2005) to generate generic and query-based multi-document summaries and evaluate them using ROUGE evaluation metrics (Lin, 2004) relative to human generated summaries." W08-1808,W04-1013,n,"We considered a variety of tools like ROUGE (Lin, 2004) and METEOR (Lavie and Agarwal, 2007) but decided they were unsuitable for this task." W08-2008,W04-1013,o,"Finally, in order to formally evaluate the method and the different heuristics, a large-scale evaluation on the BioMed Corpus is under way, based on computing the ROUGE measures (Lin, 2004)." W09-1607,W04-1013,o,"The summaries from the above algorithm for the QF-MDS were evaluated based on ROUGE metrics (Lin, 2004)." W09-1802,W04-1013,o,"In particular, ROUGE-2 is the recall in bigrams with a set of human-written abstractive summaries (Lin, 2004)."
W09-2804,W04-1013,o,"4.2 Building a Human Performance Model We adopt the evaluation approach that a good content selection strategy should perform similarly to humans, which is the view taken by existing summarization evaluation schemes such as ROUGE (Lin, 2004) and the Pyramid method (Nenkova et al., 2007)." W09-2806,W04-1013,o,"This view is supported by Lin (2004a), who concludes that correlations to human judgments were increased by using multiple references but using single reference summary with enough number of samples was a valid alternative." W09-2806,W04-1013,o,"Interestingly, similar conclusions were also reached in the area of Machine Translation evaluation; in their experiments, Zhang and Vogel (2004) show that adding an additional reference translation compensates the effects of removing 10-15% of the testing data, and state that, therefore, it seems more cost effective to have more test sentences but fewer reference translations." W09-2806,W04-1013,o,"All submitted runs were evaluated with the automatic metrics: ROUGE (Lin, 2004b), which calculates the proportion of n-grams shared between the candidate summary and the reference summaries, and Basic Elements (Hovy et al., 2005), which compares the candidate to the models in terms of head-modifier pairs." W09-2806,W04-1013,o,"2.2 Automatic metrics Similarly to the Pyramid method, ROUGE (Lin, 2004b) and Basic Elements (Hovy et al., 2005) require multiple topics and model summaries to produce optimal results." W09-2806,W04-1013,o,"Our question here is not only what this relation looks like (as it was examined on the basis of Document Understanding Conference data in Lin (2004a)), but also how it compares to the reliability of other metrics."
E09-1017,W04-1016,o,"Considerations of sentence fluency are also key in sentence simplification (Siddharthan, 2003), sentence compression (Jing, 2000; Knight and Marcu, 2002; Clarke and Lapata, 2006; McDonald, 2006; Turner and Charniak, 2005; Galley and McKeown, 2007), text re-generation for summarization (Daume III and Marcu, 2004; Barzilay and McKeown, 2005; Wan et al., 2005) and headline generation (Banko et al., 2000; Zajic et al., 2007; Soricut and Marcu, 2007)." P08-2049,W04-1016,o,Daume III & Marcu (2004) argue that generic sentence fusion is an ill-defined task. I08-1042,W05-0904,o,"For instance, we may find metrics based on full constituent parsing (Liu and Gildea, 2005), and on dependency parsing (Liu and Gildea, 2005; Amigo et al., 2006; Mehay and Brew, 2007; Owczarzak et al., 2007)." N07-1006,W05-0904,o,"The stochastic word mapping is trained on a French-English parallel corpus containing 700,000 sentence pairs, and, following Liu and Gildea (2005), we only keep the top 100 most similar words for each English word." N07-1006,W05-0904,o," Metrics based on syntactic similarities such as the head-word chain metric (HWCM) (Liu and Gildea, 2005)." P06-2003,W05-0904,o,"Secondly, we explore the possibility of designing complementary similarity metrics that exploit linguistic information at levels further than lexical. Inspired in the work by Liu and Gildea (2005), who introduced a series of metrics based on constituent/dependency syntactic matching, we have designed three subgroups of syntactic similarity metrics." P06-2070,W05-0904,o,"Liu and Gildea (2005) also pointed out that due to the limited references for every MT output, using the overlapping ratio of n-grams longer than 2 did not improve sentence level evaluation performance of BLEU." P06-2070,W05-0904,o,"This confirms Liu and Gildea (2005)'s finding that in sentence level evaluation, long n-grams in BLEU are not beneficial."
P07-1011,W05-0904,o,"Syntactic Score (SC) Some erroneous sentences often contain words and concepts that are locally correct but cannot form coherent sentences (Liu and Gildea, 2005)." P07-1038,W05-0904,o,"Also relevant is previous work that applied machine learning approaches to MT evaluation, both with human references (Corston-Oliver et al. , 2001; Kulesza and Shieber, 2004; Albrecht and Hwa, 2007; Liu and Gildea, 2007) and without (Gamon et al. , 2005)." P07-1038,W05-0904,o,"The HWC metrics compare dependency and constituency trees for both reference and machine translations (Liu and Gildea, 2005)." P07-1038,W05-0904,o,"In addition to adapting the idea of Head Word Chains (Liu and Gildea, 2005), we also compared the input sentences' argument structures against the treebank for certain syntactic categories." P07-1111,W05-0904,o,"Metrics in the Rouge family allow for skip n-grams (Lin and Och, 2004a); Kauchak and Barzilay (2006) take paraphrasing into account; metrics such as METEOR (Banerjee and Lavie, 2005) and GTM (Melamed et al. , 2003) calculate both recall and precision; METEOR is also similar to SIA (Liu and Gildea, 2006) in that word class information is used." P07-1111,W05-0904,o,"For example, Liu and Gildea (2005) developed the Sub-Tree Metric (STM) over constituent parse trees and the Head-Word Chain Metric (HWCM) over dependency parse trees." P07-1111,W05-0904,o,"Figure 1: This scatter plot compares classifiers' accuracy with their corresponding metrics' correlations with human assessments, as has been previously observed by Liu and Gildea (2005)."
P08-3005,W05-0904,o,"In comparison we introduce several metrics coefficients reported in Albrecht and Hwa (2007) including smoothed BLEU (Lin and Och, 2004), METEOR (Banerjee and Lavie, 2005), HWCM (Liu and Gildea 2005), and the metric proposed in Albrecht and Hwa (2007) using the full feature set." P08-3005,W05-0904,o,"Then we compute the same ratio of machine translation sentence to source sentence, and take the output of p-norm function as a feature: f(t) = Pnorm(length_of_tc / length_of_s_src) (7) Features based on parse score The usual practice to model the wellformedness of a sentence is to employ the n-gram language model or compute the syntactic structure similarity (Liu and Gildea 2005)." P09-1034,W05-0904,o,(2008); Liu and Gildea (2005)). P09-1034,W05-0904,o,"(2008) to LFG parses, and by Liu and Gildea (2005) to features derived from phrase-structure trees." P09-1035,W05-0904,o,"But there is also extensive research focused on including linguistic knowledge in metrics (Owczarzak et al., 2006; Reeder et al., 2001; Liu and Gildea, 2005; Amigo et al., 2006; Mehay and Brew, 2007; Gimenez and Màrquez, 2007; Owczarzak et al., 2007; Popovic and Ney, 2007; Gimenez and Màrquez, 2008b) among others." W07-0714,W05-0904,o,"This finding has been previously reported, among others, in Liu and Gildea (2005)." W07-0714,W05-0904,o,"While Liu and Gildea (2005) calculate n-gram matches on non-labelled head-modifier sequences derived by head-extraction rules from syntactic trees, we automatically evaluate the quality of translation by calculating an f-score on labelled dependency structures produced by a Lexical-Functional Grammar (LFG) parser." W07-0714,W05-0904,o,"These dependencies differ from those used by Liu and Gildea (2005), in that they are extracted according to the rules of the LFG grammar and they are labelled with a type of grammatical relation that connects the head and the modifier, such as subject, determiner, etc. 
The presence of grammatical relation labels adds another layer of important linguistic information into the comparison and allows us to account for partial matches, for example when a lexical item finds itself in a correct relation but with an incorrect partner." W07-0714,W05-0904,p,"The use of dependencies in MT evaluation has not been extensively researched before (one exception here would be Liu and Gildea (2005)), and requires more research to improve it, but the method shows potential to become an accurate evaluation metric." W07-0714,W05-0904,n,"Although evaluated on a different test set, our method also outperforms the correlation with human scores reported in Liu and Gildea (2005)." W07-0714,W05-0904,n,"Our method, extending this line of research with the use of labelled LFG dependencies, partial matching, and n-best parses, allows us to considerably outperform Liu and Gildea's (2005) highest correlations with human judgement (they report 0.144 for the correlation with human fluency judgement, 0.202 for the correlation with human overall judgement), although it has to be kept in mind that such comparison is only tentative, as their correlation is calculated on a different test set." W07-0714,W05-0904,o,"Our method follows and substantially extends the earlier work of Liu and Gildea (2005), who use syntactic features and unlabelled dependencies to evaluate MT quality, outperforming BLEU on segment-level correlation with human judgement." W07-0738,W05-0904,o,CP-STM(i)-l This metric corresponds to the STM metric presented by Liu and Gildea (2005). W07-0738,W05-0904,o,"2.4 Syntactic Similarity We have incorporated, with minor modifications, some of the syntactic metrics described by Liu and Gildea (2005) and Amigo et al."
W07-0738,W05-0904,o,Similarities are captured from different viewpoints: DP-HWC(i)-l This metric corresponds to the HWC metric presented by Liu and Gildea (2005). W08-0331,W05-0904,o,"For instance, BLEU and ROUGE (Lin and Och, 2004) are based on n-gram precisions, METEOR (Banerjee and Lavie, 2005) and STM (Liu and Gildea, 2005) use word-class or structural information, Kauchak (2006) leverages on paraphrases, and TER (Snover et al., 2006) uses edit-distances." W08-0332,W05-0904,o,"We use three different kinds of metrics: DR-STM Semantic Tree Matching, a la Liu and Gildea (2005), but over DRS instead of over constituency trees." W09-0440,W05-0904,o,"Three kinds of metrics have been defined: DR-STM-l (Semantic Tree Matching) These metrics are similar to the Syntactic Tree Matching metric defined by Liu and Gildea (2005), in this case applied to DRSs instead of constituency trees." W09-0440,W05-0904,o,"For instance, we may find metrics which compute similarities over shallow syntactic structures/sequences (Gimenez and Màrquez, 2007; Popovic and Ney, 2007), constituency trees (Liu and Gildea, 2005) and dependency trees (Liu and Gildea, 2005; Amigo et al., 2006; Mehay and Brew, 2007; Owczarzak et al., 2007)." C08-1141,W05-0909,o,"(Banerjee and Lavie, 2005) calculated the scores by matching the unigrams on the surface forms, stemmed forms and senses." C08-3006,W05-0909,o,"This is confirmed by the translation experiments in which the evaluation data sets were translated using the servers' translation engines and the translation quality was evaluated using the standard automatic evaluation metrics BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) where scores range between 0 (worst) and 1 (best)." D07-1007,W05-0909,o,"In addition to the widely used BLEU (Papineni et al. 
, 2002) and NIST (Doddington, 2002) scores, we also evaluate translation quality with the recently proposed Meteor (Banerjee and Lavie, 2005) and four edit-distance style metrics, Word Error Rate (WER), Position-independent word Error Rate (PER) (Tillmann et al. , 1997), CDER, which allows block reordering (Leusch et al. , 2006), and Translation Edit Rate (TER) (Snover et al. , 2006)." D07-1055,W05-0909,o,"There exists a variety of different metrics, e.g., word error rate, position-independent word error rate, BLEU score (Papineni et al. , 2002), NIST score (Doddington, 2002), METEOR (Banerjee and Lavie, 2005), GTM (Turian et al. , 2003)." D07-1105,W05-0909,o,"Experimental results were only reported for the METEOR metric (Banerjee and Lavie, 2005)." D08-1064,W05-0909,o,"Table 1 shows the results along with BLEU and the three metrics that achieved higher correlations than BLEU: semantic role overlap (Gimenez and Marquez, 2007), ParaEval recall (Zhou et al., 2006), and METEOR (Banerjee and Lavie, 2005)." D08-1064,W05-0909,o,"1 Introduction BLEU (Papineni et al., 2002) was one of the first automatic evaluation metrics for machine translation (MT), and despite being challenged by a number of alternative metrics (Melamed et al., 2003; Banerjee and Lavie, 2005; Snover et al., 2006; Chan and Ng, 2008), it remains the standard in the statistical MT literature. Callison-Burch et al. (2006) have subjected BLEU to a searching criticism, with two real-world case studies of significant failures of correlation between BLEU and human adequacy/fluency judgments. Both cases involve comparisons between statistical MT systems and other translation methods (human post-editing and a rule-based MT system), and they recommend that the use of BLEU be restricted to comparisons between related systems or different versions of the same systems." D08-1064,W05-0909,o,"In none of these cases did we repeat minimum-error-rate training; all these systems were trained using max-BLEU. 
The metrics we tested were: METEOR (Banerjee and Lavie, 2005), version 0.6, using the exact, Porter-stemmer, and WordNet synonymy stages, and the optimized parameters α = 0.81, β = 0.83, γ = 0.28 as reported in (Lavie and Agarwal, 2007)." D09-1023,W05-0909,o,"We evaluate translation output using case-insensitive BLEU (Papineni et al., 2001), as provided by NIST, and METEOR (Banerjee and Lavie, 2005), version 0.6, with Porter stemming and WordNet synonym matching." D09-1117,W05-0909,o,"Therefore, we also carried out evaluations using the NIST (Doddington, 2002), METEOR (Banerjee and Lavie, 2005), WER (Hunt, 1989), PER (Tillmann et al., 1997) and TER (Snover et al., 2005) machine translation evaluation techniques." E06-1031,W05-0909,o,"Examples of such methods are the introduction of information weights as in the NIST measure or the comparison of stems or synonyms, as in METEOR (Banerjee and Lavie, 2005)." E06-1032,W05-0909,o,"Banerjee and Lavie (2005) introduce the Meteor metric, which also incorporates recall on the unigram level and further provides facilities incorporating stemming, and WordNet synonyms as a more flexible match." E06-1032,W05-0909,o,"Meteor (Banerjee and Lavie, 2005), Precision and Recall (Melamed et al. , 2003), and other such automatic metrics may also be affected to a greater or lesser degree because they are all quite rough measures of translation similarity, and have inexact models of allowable variation in translation." E09-1063,W05-0909,o,"The quality of the translation output is mainly evaluated using BLEU, with NIST (Doddington, 2002) and METEOR (Banerjee and Lavie, 2005) as complementary metrics." I08-1042,W05-0909,p,"Other well-known metrics are WER (Nießen et al., 2000), NIST (Doddington, 2002), GTM (Melamed et al., 2003), ROUGE (Lin and Och, 2004a), METEOR (Banerjee and Lavie, 2005), and TER (Snover et al., 2006), just to name a few."
N06-1058,W05-0909,o,"Examples of such knowledge sources include stemming and TF-IDF weighting (Babych and Hartley, 2004; Banerjee and Lavie, 2005)." N07-1005,W05-0909,o,"Many methods for calculating the similarity have been proposed (Niessen et al. , 2000; Akiba et al. , 2001; Papineni et al. , 2002; NIST, 2002; Leusch et al. , 2003; Turian et al. , 2003; Babych and Hartley, 2004; Lin and Och, 2004; Banerjee and Lavie, 2005; Gimenez et al. , 2005)." N07-1005,W05-0909,o,"In our research, 23 scores, namely BLEU (Papineni et al. , 2002) with maximum n-gram lengths of 1, 2, 3, and 4, NIST (NIST, 2002) with maximum n-gram lengths of 1, 2, 3, 4, and 5, GTM (Turian et al. , 2003) with exponents of 1.0, 2.0, and 3.0, METEOR (exact) (Banerjee and Lavie, 2005), WER (Niessen et al. , 2000), PER (Leusch et al. , 2003), and ROUGE (Lin, 2004) with n-gram lengths of 1, 2, 3, and 4 and 4 variants (LCS, S,SU, W-1.2), were used to calculate each similarity S i . Therefore, the value of m in Eq." N07-1005,W05-0909,o,"In recent years, many researchers have tried to automatically evaluate the quality of MT and improve the performance of automatic MT evaluations (Niessen et al. , 2000; Akiba et al. , 2001; Papineni et al. , 2002; NIST, 2002; Leusch et al. , 2003; Turian et al. , 2003; Babych and Hartley, 2004; Lin and Och, 2004; Banerjee and Lavie, 2005; Gimenez et al. , 2005) because improving the performance of automatic MT evaluation is expected to enable us to use and improve MT systems efficiently." N07-1006,W05-0909,o," Metrics based on word alignment between MT outputs and the references (Banerjee and Lavie, 2005)." N09-1058,W05-0909,o,"The final SMT system performance is evaluated on a uncased test set of 3071 sentences using the BLEU (Papineni et al., 2002), NIST (Doddington, 2002) and METEOR (Banerjee and Lavie, 2005) scores." 
P06-1002,W05-0909,o,"Other metrics assess the impact of alignments externally, e.g., different alignments are tested by comparing the corresponding MT outputs using automated evaluation metrics (e.g. , BLEU (Papineni et al. , 2002) or METEOR (Banerjee and Lavie, 2005))." P06-2037,W05-0909,o,"We have computed the BLEU score (accumulated up to 4-grams) (Papineni et al. , 2001), the NIST score (accumulated up to 5-grams) (Doddington, 2002), the General Text Matching (GTM) F-measure (e = 1,2) (Melamed et al. , 2003), and the METEOR measure (Banerjee and Lavie, 2005)." P06-2070,W05-0909,o,"In order to improve sentence-level evaluation performance, several metrics have been proposed, including ROUGE-W, ROUGE-S (Lin and Och, 2004) and METEOR (Banerjee and Lavie, 2005)." P07-1038,W05-0909,o,"METEOR uses the Porter stemmer and synonym matching via WordNet to calculate recall and precision more accurately (Banerjee and Lavie, 2005)." P07-1040,W05-0909,p,"It has been argued that METEOR correlates better with human judgment due to higher weight on recall than precision (Banerjee and Lavie, 2005)." P07-1111,W05-0909,o,"Metrics in the Rouge family allow for skip n-grams (Lin and Och, 2004a); Kauchak and Barzilay (2006) take paraphrasing into account; metrics such as METEOR (Banerjee and Lavie, 2005) and GTM (Melamed et al. , 2003) calculate both recall and precision; METEOR is also similar to SIA (Liu and Gildea, 2006) in that word class information is used." P08-1007,W05-0909,p,"The results show that, as compared to BLEU, several recently proposed metrics such as Semantic-role overlap (Gimenez and Marquez, 2007), ParaEval-recall (Zhou et al., 2006), and METEOR (Banerjee and Lavie, 2005) achieve higher correlation."
P08-1007,W05-0909,o,"For METEOR, when used with its originally proposed parameter values of (α=0.9, β=3.0, γ=0.5), which the METEOR researchers mentioned were based on some early experimental work (Banerjee and Lavie, 2005), we obtain an average correlation value of 0.915, as shown in the row METEOR." P08-1007,W05-0909,o,"2.4 METEOR Given a pair of strings to compare (a system translation and a reference translation), METEOR (Banerjee and Lavie, 2005) first creates a word alignment between the two strings." P08-1007,W05-0909,o,"To address this, standard measures like precision and recall could be used, as in some previous research (Banerjee and Lavie, 2005; Melamed et al., 2003)." P08-1010,W05-0909,o,"We measure translation performance by the BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) scores with multiple translation references." P08-3005,W05-0909,o,"In comparison we introduce several metrics coefficients reported in Albrecht and Hwa (2007) including smoothed BLEU (Lin and Och, 2004), METEOR (Banerjee and Lavie, 2005), HWCM (Liu and Gildea 2005), and the metric proposed in Albrecht and Hwa (2007) using the full feature set." P09-1022,W05-0909,n,"In Owczarzak (2008), the method achieves equal or higher correlations with human judgments than METEOR (Banerjee and Lavie, 2005), one of the best-performing automatic MT evaluation metrics." P09-1034,W05-0909,o,"(2006)), or by using linguistic evidence, mostly lexical similarity (METEOR, Banerjee and Lavie (2005); MaxSim, Chan and Ng (2008)), or syntactic overlap (Owczarzak et al." P09-1034,W05-0909,o,"Banerjee and Lavie (2005) and Chan and Ng (2008) use WordNet, and Zhou et al." P09-1035,W05-0909,o,"In a different work, Banerjee and Lavie (2005) argued that the measured reliability of metrics can be due to averaging effects but might not be robust across translations."
P09-1035,W05-0909,o,"This result supports the intuition in (Banerjee and Lavie, 2005) that correlation at segment level is necessary to ensure the reliability of metrics in different situations." W06-3101,W05-0909,o,"An automatic metric which uses base forms and synonyms of the words in order to correlate better to human judgements has been proposed in (Banerjee and Lavie, 2005)." W06-3112,W05-0909,o,"Others try to accommodate both syntactic and lexical differences between the candidate translation and the reference, like CDER (Leusch et al. , 2006), which employs a version of edit distance for word substitution and reordering; METEOR (Banerjee and Lavie, 2005), which uses stemming and WordNet synonymy; and a linear regression model developed by (Russo-Lassner et al. , 2005), which makes use of stemming, WordNet synonymy, verb class synonymy, matching noun phrase heads, and proper name matching." W06-3126,W05-0909,o,"For evaluation we have selected a set of 8 metric variants corresponding to seven different families: BLEU (n = 4) (Papineni et al. , 2001), NIST (n = 5) (Lin and Hovy, 2002), GTM F1-measure (e = 1,2) (Melamed et al. , 2003), 1-WER (Nießen et al. , 2000), 1-PER (Leusch et al. , 2003), ROUGE (ROUGE-S*) (Lin and Och, 2004) and METEOR (Banerjee and Lavie, 2005)." W07-0411,W05-0909,o,"Comparing the LFG-based evaluation method with other popular metrics: BLEU, NIST, General Text Matcher (GTM) (Turian et al. , 2003), Translation Error Rate (TER) (Snover et al. , 2006), and METEOR (Banerjee and Lavie, 2005), we show that combining dependency representations with paraphrases leads to a more accurate evaluation that correlates better with human judgment." W07-0411,W05-0909,o,"Others try to accommodate both syntactic and lexical differences between the candidate translation and the reference, like CDER (Leusch et al. 
, 2006), which employs a version of edit distance for word substitution and reordering; or METEOR (Banerjee and Lavie, 2005), which uses stemming and WordNet synonymy." W07-0704,W05-0909,o,"Note that using stems and their synonyms as used in METEOR (Banerjee and Lavie, 2005) could also be considered for word similarity." W07-0707,W05-0909,o,"A new automatic metric METEOR (Banerjee and Lavie, 2005) uses stems and synonyms of the words." W07-0714,W05-0909,p,"In an experiment on 16,800 sentences of Chinese-English newswire text with segment-level human evaluation from the Linguistic Data Consortium's (LDC) Multiple Translation project, we compare the LFG-based evaluation method with other popular metrics like BLEU, NIST, General Text Matcher (GTM) (Turian et al. , 2003), Translation Error Rate (TER) (Snover et al. , 2006), and METEOR (Banerjee and Lavie, 2005), and we show that combining dependency representations with synonyms leads to a more accurate evaluation that correlates better with human judgment." W07-0714,W05-0909,o,"Others try to accommodate both syntactic and lexical differences between the candidate translation and the reference, like CDER (Leusch et al. , 2006), which employs a version of edit distance for word substitution and reordering; or METEOR (Banerjee and Lavie, 2005), which uses stemming and WordNet synonymy." W07-0716,W05-0909,o,"Och showed that system performance is best when parameters are optimized using the same objective function that will be used for evaluation; BLEU (Papineni et al. , 2002) remains common for both purposes and is often retained for parameter optimization even when alternative evaluation measures are used, e.g., (Banerjee and Lavie, 2005; Snover et al. , 2006)." W07-0716,W05-0909,o,"(2006) propose a new metric that extends n-gram matching to include synonyms and paraphrases; and Lavie's METEOR metric (Banerjee and Lavie, 2005) can be used with additional knowledge such as WordNet in order to support inexact lexical matches."
W07-0718,W05-0909,o,"They are: Meteor (Banerjee and Lavie, 2005). Meteor measures precision and recall of unigrams when comparing a hypothesis translation [Table 2: The number of items that were judged for each task during the manual evaluation] against a reference." W07-0718,W05-0909,p,"While these are based on a relatively few number of items, and while we have not performed any tests to determine whether the differences in ρ are statistically significant, the results (the Czech-English conditions were excluded since there were so few systems) are nevertheless interesting, since three metrics have higher correlation than Bleu: Semantic role overlap (Gimenez and Màrquez, 2007), which makes its debut in the proceedings of this workshop; ParaEval measuring recall (Zhou et al. , 2006), which has a model of allowable variation in translation that uses automatically generated paraphrases (Callison-Burch, 2007); Meteor (Banerjee and Lavie, 2005) which also allows variation by introducing synonyms and by flexibly matching words using stemming." W07-0719,W05-0909,o,"We might find better suited metrics, such as METEOR (Banerjee and Lavie, 2005), which is oriented towards word selection." W07-0734,W05-0909,o,"Previous publications on Meteor (Lavie et al. 
, 2004; Banerjee and Lavie, 2005) have described the details underlying the metric and have extensively compared its performance with Bleu and several other MT evaluation metrics." W07-0734,W05-0909,o,"4 Optimizing Metric Parameters The original version of Meteor (Banerjee and Lavie, 2005) has instantiated values for three parameters in the metric: one for controlling the relative weight of precision and recall in computing the Fmean score (α); one governing the shape of the penalty as a function of fragmentation (β) and one for the relative weight assigned to the fragmentation penalty (γ)." W07-0737,W05-0909,o,"al. 2006), we are interested in applying alternative metrics such as Meteor (Banerjee and Lavie 2005)." W07-0738,W05-0909,o,"in that order (Banerjee and Lavie, 2005)." W08-0301,W05-0909,o,"It is dubious whether SWD is useful regarding recall-oriented metrics like METEOR (Banerjee and Lavie, 2005), since SWD removes information in source sentences." W08-0302,W05-0909,o,"Evaluation We evaluate translation output using three automatic evaluation measures: BLEU (Papineni et al., 2002), NIST (Doddington, 2002), and METEOR (Banerjee and Lavie, 2005, version 0.6). All measures used were the case-sensitive, corpus-level versions." W08-0305,W05-0909,o,"Moses provides BLEU (K. Papineni et al., 2001) and NIST (Doddington, 2002), but Meteor (Banerjee and Lavie, 2005; Lavie and Agarwal, 2007) and TER (Snover et al., 2006) can easily be used instead." W08-0307,W05-0909,o,"We use the standard four-reference NIST MTEval data sets for the years 2003, 2004 and 2005 (henceforth MT03, MT04 and MT05, respectively) for testing and the 2002 data set for tuning. BLEU4 (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005) and multiple-reference Word Error Rate scores are reported."
W08-0312,W05-0909,o,"Previous publications on Meteor (Lavie et al., 2004; Banerjee and Lavie, 2005; Lavie and Agarwal, 2007) have described the details underlying the metric and have extensively compared its performance with Bleu and several other MT evaluation metrics." W08-0312,W05-0909,o,"Many researchers (Banerjee and Lavie, 2005; Liu and Gildea, 2006), have observed consistent gains by using more flexible matching criteria." W08-0322,W05-0909,o,"Furthermore, the BLEU score performance suggests that our model is not very powerful, but some interesting hints can be found in Table 3 when we compare our method with a 5-gram language model to a state-of-the-art system Moses (Koehn and Hoang, 2007) based on various evaluation metrics, including BLEU score, NIST score (Doddington, 2002), METEOR (Banerjee and Lavie, 2005), TER (Snover et al., 2006), WER and PER." W08-0331,W05-0909,o,"For instance, BLEU and ROUGE (Lin and Och, 2004) are based on n-gram precisions, METEOR (Banerjee and Lavie, 2005) and STM (Liu and Gildea, 2005) use word-class or structural information, Kauchak (2006) leverages on paraphrases, and TER (Snover et al., 2006) uses edit-distances." W08-0334,W05-0909,o,"These were: BLEU (Papineni, 2001), NIST (Doddington, 2002), WER (Word Error Rate), PER (Position-independent WER), GTM (General Text Matcher), and METEOR (Banerjee and Lavie, 2005)." W08-0913,W05-0909,o,"machine translation evaluation (e.g., Banerjee and Lavie, 2005; Lin and Och, 2004), paraphrase recognition (e.g., Brockett and Dolan, 2005; Hatzivassiloglou et al., 1999), and automatic grading (e.g., Leacock, 2004; Marn, 2004)." W09-0403,W05-0909,o,"(Banerjee and Lavie, 2005)) ." W09-0404,W05-0909,o,"However, there is little agreement on what types of knowledge are helpful: Some suggestions concentrate on lexical information, e.g., by the integration of word similarity information as in Meteor (Banerjee and Lavie, 2005) or MaxSim (Chan and Ng, 2008)."
W09-0405,W05-0909,o,"3.3 System evaluation Since both the system translations and the reference translations are available for the tuning set, we first compare each output to the reference translation using BLEU (Papineni et al., 2001) and METEOR (Banerjee and Lavie, 2005) and a combined scoring scheme provided by the ULC toolkit (Gimenez and Marquez, 2008)." W09-0408,W05-0909,o,"2.1 Alignment Sentences from different systems are aligned in pairs using a modified version of the METEOR (Banerjee and Lavie, 2005) matcher." W09-0418,W05-0909,o,"In this paper, translation quality is evaluated according to (1) the BLEU metrics which calculates the geometric mean of ngram precision by the system output with respect to reference translations (Papineni et al., 2002), and (2) the METEOR metrics that calculates unigram overlaps between translations (Banerjee and Lavie, 2005)." W09-0420,W05-0909,o,"Experiments are presented in table 1, using BLEU (Papineni et al., 2001) and METEOR (Banerjee and Lavie, 2005), and we also show the length ratio (ratio of hypothesized tokens to reference tokens)." W09-2310,W05-0909,o,"Apart from BLEU, a standard automatic measure METEOR (Banerjee and Lavie, 2005) was used for evaluation." W09-2404,W05-0909,o,"5.2 Impact on translation quality As reported in Table 3, small increases in METEOR (Banerjee and Lavie, 2005), BLEU (Papineni et al., 2002) and NIST scores (Doddington, 2002) suggest that SMT output matches the references better after postprocessing or decoding with the suggested lemma translations." D07-1070,W05-1508,o,"Since the lexical translations and dependency paths are typically not labeled in the English corpus, a given pair must be counted fractionally according to its posterior probability of satisfying these conditions, given models of contextual translation and English parsing. Similarly, Jansche (2005) imputes missing trees by using comparable corpora."
I08-1002,W06-0115,o,"The output by each approach will be evaluated using benchmark data sets of Bakeoff-3 (Levow, 2006)." I08-1002,W06-0115,o,"4 Evaluation The evaluation is conducted with all four corpora from Bakeoff-3 (Levow, 2006), as summarized in Table 1 with corpus size in number of characters." I08-4009,W06-0115,p,"1 Introduction Chinese Word Segmentation (CWS) has been witnessed a prominent progress in the last three Bakeoffs (Sproat and Emerson, 2003), (Emerson, 2005), (Levow, 2006)." I08-4010,W06-0115,o,"SIGHAN, the Special Interest Group for Chinese Language Processing of the Association for Computational Linguistics, conducted three prior word segmentation bakeoffs, in 2003, 2005 and 2006 (Sproat and Emerson, 2003; Emerson, 2005; Levow, 2006), which established benchmarks for word segmentation and named entity recognition." I08-4013,W06-0115,o,"Taking SIGHAN Bakeoff 2006 (Levow, 2006) as an example, the recall is lower about 5% than the precision for each submitted system on MSRA and CityU closed track." I08-4013,W06-0115,o,"The flow using non-local features in two-stage architecture 2.4 Results We employ BIOE1 label scheme for the NER task because we found it performs better than IOB2 on Bakeoff 2006 (Levow, 2006) NER MSRA and CityU corpora." I08-4015,W06-0115,o,"Thus, as a powerful sequence tagging model, CRF became the dominant method in the Bakeoff 2006 (Levow, 2006)." I08-4015,W06-0115,n,"To analyze our methods on IV and OOV words, we use a more detailed evaluation metric than Bakeoff 2006 (Levow, 2006) which includes Foov and Fiv." I08-4017,W06-0115,o,"We tested the techniques described above with the previous Bakeoffs data (Sproat and Emerson, 2003; Emerson, 2005; Levow, 2006)."
I08-4027,W06-0115,o,"Since the word support model and triple context matching model have been proposed in our previous work (Tsai, 2005, 2006a and 2006b) at the SIGHAN bakeoff 2005 (Thomas, 2005) and 2006 (Levow, 2006), the major descriptions of this paper is on the WBT model." N07-1068,W06-0115,o,"In this paper, we employed the Chinese word segmentation tool (Wu et al. , 2006) that achieved about 0.93-0.96 recall/precision rates in the SIGHAN-3 word segmentation task (Levow, 2006)." W08-0336,W06-0115,o,"Detail of the Bakeoff data sets is in (Levow, 2006)." C08-1103,W06-0301,o,"(2005), Kim and Hovy (2006)), source extraction (e.g. Bethard et al." C08-1103,W06-0301,p,A notable exception is the work of Kim and Hovy (2006). D07-1114,W06-0301,o,"In open-domain opinion extraction, some approaches use syntactic features obtained from parsed input sentences (Choi et al. , 2006; Kim and Hovy, 2006), as is commonly done in semantic role labeling." D07-1114,W06-0301,o,"Kim and Hovy (2006) proposed a method for extracting opinion holders, topics and opinion words, in which they use semantic role labeling as an intermediate step to label opinion holders and topics." D07-1114,W06-0301,o,"Open-domain opinion extraction is another trend of research on opinion extraction, which aims to extract a wider range of opinions from such texts as newspaper articles (Yu and Hatzivassiloglou, 2003; Kim and Hovy, 2004; Wiebe et al. , 2005; Choi et al. , 2006)." W07-2072,W06-0301,o,Kim and Hovy (2006) integrated verb information from FrameNet and incorporated it into semantic role labeling. C08-1031,W06-0302,o,"Sentiment classification at the sentence-level has also been studied (e.g., Riloff and Wiebe 2003; Kim and Hovy 2004; Wilson et al 2004; Gamon et al 2005; Stoyanov and Cardie 2006)."
N09-3013,W06-0302,o,"Other research has been conducted in analysing sentiment at a sentence level using bootstrapping techniques (Riloff and Wiebe, 2003), finding strength of opinions (Wilson, Wiebe and Hwa, 2004), summing up orientations of opinion words in a sentence (Kim and Hovy, 2004), and identifying opinion holders (Stoyanov and Cardie, 2006)." W06-1640,W06-0302,o,More details on the different parameter settings and instance selection algorithms as well as trends in the performance of different settings can be found in Stoyanov and Cardie (2006). W06-1640,W06-0302,o,More details about why heuristics are needed and the process used to map sources to NPs can be found in Stoyanov and Cardie (2006). N09-1055,W06-0303,o,"Some researchers (Fujii and Ishikawa, 2006) targeted nouns, noun phrases and verb phrases." P09-1026,W06-0303,o,Fujii and Ishikawa (2006) also work with arguments. P09-2020,W06-0305,o,"Reported and direct speech are certainly important in discourse (Prasad et al., 2006); we do not believe, however, that they enter discourse relations of the type that RST attempts to capture." W08-0122,W06-0305,o,"Considering the discourse relation annotations in the PDTB (Prasad et al., 2006), there can be alignment between discourse relations (like contrast) and our opinion frames when the frames represent dominant relations between two clauses." W09-1324,W06-0305,o,"Table 2: Corpora and Modalities CORPUS MODALITY ACE asserted, or other TIMEML must, may, should, would, or could Prasad et al., 2006 assertion, belief, facts or eventualities Saurí et al., 2007 certain, probable, possible, or other Inui et al., 2008 affirm, infer, doubt, hear, intend, ask, recommend, hypothesize, or other THIS STUDY S/O, necessity, hope, possible, recommend, intend Table 3: Markup Scheme (Tags and Definitions) Tag Definition (Examples) R Remedy, Medical operation (e.g. 
radiotherapy) T Medical test, Medical examination (e.g., CT, MRI) D Deasese, Symptom (e.g., Endometrial cancer, headache) M Medication, administration of a drug (e.g., Levofloxacin, Flexeril) A patient action (e.g., admitted to a hospital) V Other verb (e.g., cancer spread to ) 2 Related Works 2.1 Previous Markup Schemes In the NLP field, fact identification has not been studied well to date." D07-1070,W06-1615,o,"However, another approach is to train a separate out-of-domain parser, and use this to generate additional features on the supervised and unsupervised in-domain data (Blitzer et al. , 2006)." D07-1096,W06-1615,o,(2006) and Blitzer et al. D07-1129,W06-1615,o,"In this paper, we investigate the effectiveness of structural correspondence learning (SCL) (Blitzer et al. , 2006) in the domain adaptation task given by the CoNLL 2007." D07-1129,W06-1615,o,"c2007 Association for Computational Linguistics Structural Correspondence Learning for Dependency Parsing Nobuyuki Shimizu Information Technology Center University of Tokyo Tokyo, Japan shimizu@r.dl.itc.u-tokyo.ac.jp Hiroshi Nakagawa Information Technology Center University of Tokyo Tokyo, Japan nakagawa@dl.itc.u-tokyo.ac.jp Abstract Following (Blitzer et al. , 2006), we present an application of structural correspondence learning to non-projective dependency parsing (McDonald et al. , 2005)." D07-1129,W06-1615,o,"3 Domain Adaptation Following (Blitzer et al. , 2006), we present an application of structural correspondence learning (SCL) to non-projective dependency parsing (McDonald et al. , 2005)." D09-1058,W06-1615,o,"Note that there are some similarities between our two-stage semi-supervised learning approach and the semi-supervised learning method introduced by (Blitzer et al., 2006), which is an extension of the method described by (Ando and Zhang, 558 2005)." 
D09-1158,W06-1615,o,"(Blitzer et al., 2006; Jiang and Zhai, 2007; Daume III, 2007; Finkel and Manning, 2009), or [S+T-], where no labeled target domain data is available, e.g."
D09-1158,W06-1615,o,"(Blitzer et al., 2006; Jiang and Zhai, 2007)."
E09-3005,W06-1615,o,"4.2 Further practical issues of SCL In practice, there are more free parameters and model choices (Ando and Zhang, 2005; Ando, 2006; Blitzer et al., 2006; Blitzer, 2008) besides the ones discussed above."
E09-3005,W06-1615,o,"The problem itself has started to get attention only recently (Roark and Bacchiani, 2003; Hara et al., 2005; Daume III and Marcu, 2006; Daume III, 2007; Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007)."
E09-3005,W06-1615,o,"Due to the positive results in Ando (2006), Blitzer et al."
E09-3005,W06-1615,o,"In contrast, semi-supervised domain adaptation (Blitzer et al., 2006; McClosky et al., 2006; Dredze et al., 2007) is the scenario in which, in addition to the labeled source data, we only have unlabeled and no labeled target domain data."
E09-3005,W06-1615,o,"We can confirm that changing the dimensionality parameter h has rather little effect (Table 4), which is in line with previous findings (Ando and Zhang, 2005; Blitzer et al., 2006)."
E09-3005,W06-1615,n,"2 Motivation and Prior Work While several authors have looked at the supervised adaptation case, there are less (and especially less successful) studies on semi-supervised domain adaptation (McClosky et al., 2006; Blitzer et al., 2006; Dredze et al., 2007)."
E09-3005,W06-1615,o,"c2009 Association for Computational Linguistics Structural Correspondence Learning for Parse Disambiguation Barbara Plank Alfa-informatica University of Groningen, The Netherlands b.plank@rug.nl Abstract The paper presents an application of Structural Correspondence Learning (SCL) (Blitzer et al., 2006) for domain adaptation of a stochastic attribute-value grammar (SAVG)."
E09-3005,W06-1615,n,"While SCL has been successfully applied to PoS tagging and Sentiment Analysis (Blitzer et al., 2006; Blitzer et al., 2007), its effectiveness for parsing was rather unexplored."
E09-3005,W06-1615,p,"Similarly, Structural Correspondence Learning (Blitzer et al., 2006; Blitzer et al., 2007; Blitzer, 2008) has proven to be successful for the two tasks examined, PoS tagging and Sentiment Classification."
E09-3005,W06-1615,o,"Parse selection constitutes an important part of many parsing systems (Johnson et al., 1999; Hara et al., 2005; van Noord and Malouf, 2005; McClosky et al., 2006)."
E09-3005,W06-1615,p,"So far, SCL has been applied successfully in NLP for Part-of-Speech tagging and Sentiment Analysis (Blitzer et al., 2006; Blitzer et al., 2007)."
E09-3005,W06-1615,p,"We examine the effectiveness of Structural Correspondence Learning (SCL) (Blitzer et al., 2006) for this task, a recently proposed adaptation technique shown to be effective for PoS tagging and Sentiment Analysis."
E09-3005,W06-1615,o,"4 Structural Correspondence Learning SCL (Structural Correspondence Learning) (Blitzer et al., 2006; Blitzer et al., 2007; Blitzer, 2008) is a recently proposed domain adaptation technique which uses unlabeled data from both source and target domain to learn correspondences between features from different domains."
E09-3005,W06-1615,o,"Pivots are features occurring frequently and behaving similarly in both domains (Blitzer et al., 2006)."
E09-3005,W06-1615,o,"Intuitively, if we are able to find good correspondences among features, then the augmented labeled source domain data should transfer better to a target domain (where no labeled data is available) (Blitzer et al., 2006)."
E09-3005,W06-1615,o,"Figure 1: SCL algorithm (Blitzer et al., 2006)."
E09-3005,W06-1615,o,"Applying the projection WTx (where x is a training instance) would give us m new features, however, for both computational and statistical reasons (Blitzer et al., 2006; Ando and Zhang, 2005) a low-dimensional approximation of the original feature space is computed by applying Singular Value Decomposition (SVD) on W (step 4)."
E09-3005,W06-1615,o,"So far, pivot features on the word level were used (Blitzer et al., 2006; Blitzer et al., 2007; Blitzer, 2008), e.g. Does the bigram not buy occur in this document? (Blitzer, 2008)."
N07-3002,W06-1615,o,"Furthermore, I plan to apply my parsers in other domains (e.g. , biomedical data) (Blitzer et al. , 2006) besides treebank data, to investigate the effectiveness and generality of my approaches."
N09-2046,W06-1615,o,"For each pivot feature k, we use a loss function L k , () 2 1)( wxwxpL i i T ikk += (1) where the function p k (x i ) indicates whether the pivot feature k occurs in the instance x i , otherwise xif xp ik ik 0 1 1 )( > = , where the weight vector w encodes the correspondence of the non-pivot features with the pivot feature k (Blitzer et al., 2006)."
N09-2046,W06-1615,o,"For transfer-learning baseline, we implement traditional SCL model (T-SCL) (Blitzer et al., 2006)."
N09-2046,W06-1615,p,"Among these techniques, SCL (Structural Correspondence Learning) (Blitzer et al., 2006) is regarded as a promising method to tackle transfer-learning problem."
P07-1034,W06-1615,o,"Following (Blitzer et al. , 2006), we call the first the source domain, and the second the target domain."
P07-1034,W06-1615,o,"The POS data set and the CTS data set have previously been used for testing other adaptation methods (Daume III and Marcu, 2006; Blitzer et al. , 2006), though the setup there is different from ours."
P07-1034,W06-1615,o,"Recently there have been some studies addressing domain adaptation from different perspectives (Roark and Bacchiani, 2003; Chelba and Acero, 2004; Florian et al. , 2004; Daume III and Marcu, 2006; Blitzer et al. , 2006)."
P07-1056,W06-1615,o,"We augment each labeled target instance xj with the label assigned by the source domain classifier (Florian et al. , 2004; Blitzer et al. , 2006)."
P07-1056,W06-1615,o,"As we noted in Section 5, we are able to significantly outperform basic structural correspondence learning (Blitzer et al. , 2006)."
P07-1056,W06-1615,o,"440 respondence learning (SCL) domain adaptation algorithm (Blitzer et al. , 2006) for use in sentiment classification."
P07-1056,W06-1615,o,"Then, it models the correlations between the pivot features and all other features by training linear pivot predictors to predict occurrences of each pivot in the unlabeled data from both domains (Ando and Zhang, 2005; Blitzer et al. , 2006)."
P08-1029,W06-1615,o,"Most of this prior work deals with supervised transfer learning, and thus requires labeled source domain data, though there are examples of unsupervised (Arnold et al., 2007), semi-supervised (Grandvalet and Bengio, 2005; Blitzer et al., 2006), and transductive approaches (Taskar et al., 2003)."
P09-1001,W06-1615,o,"Various machine learning strategies have been proposed to address this problem, including semi-supervised learning (Zhu, 2007), domain adaptation (Wu and Dietterich, 2004; Blitzer et al., 2006; Blitzer et al., 2007; Arnold et al., 2007; Chan and Ng, 2007; Daume, 2007; Jiang and Zhai, 2007; Reichart and Rappoport, 2007; Andreevskaia and Bergler, 2008), multi-task learning (Caruana, 1997; Reichart et al., 2008; Arnold et al., 2008), self-taught learning (Raina et al., 2007), etc. A commonality among these methods is that they all require the training data and test data to be in the same feature space."
P09-1056,W06-1615,o,"For our POS tagging experiments, we use 561 MEDLINE sentences (9576 words) from the Penn BioIE project (PennBioIE, 2005), a test set previously used by Blitzer et al.(2006)."
P09-1056,W06-1615,o,"Performance also degrades when the domain of the test set differs from the domain of the training set, in part because the test set includes more OOV words and words that appear only a few times in the training set (henceforth, rare words) (Blitzer et al., 2006; Daume III and Marcu, 2006; Chelba and Acero, 2004)."
P09-1056,W06-1615,n,"HMM-smoothing improves on the most closely related work, the Structural Correspondence Learning technique for domain adaptation (Blitzer et al., 2006), in experiments."
P09-1114,W06-1615,o,"While transfer learning was proposed more than a decade ago (Thrun, 1996; Caruana, 1997), its application in natural language processing is still a relatively new territory (Blitzer et al., 2006; Daume III, 2007; Jiang and Zhai, 2007a; Arnold et al., 2008; Dredze and Crammer, 2008), and its application in relation extraction is still unexplored."
W07-2202,W06-1615,o,"Therefore, domain adaptation methods have recently been proposed in several NLP areas, e.g., word sense disambiguation (Chan and Ng, 2006), statistical parsing (Lease and Charniak, 2005; McClosky et al. , 2006), and lexicalized-grammar parsing (Johnson and Riezler, 2000; Hara et al. , 2005)."
W09-2205,W06-1615,o,"5 Conclusions and Future Work The paper compares Structural Correspondence Learning (Blitzer et al., 2006) with (various instances of) self-training (Abney, 2007; McClosky et al., 2006) for the adaptation of a parse selection model to Wikipedia domains."
W09-2205,W06-1615,o,"We examine Structural Correspondence Learning (SCL) (Blitzer et al., 2006) for this task, and compare it to several variants of Self-training (Abney, 2007; McClosky et al., 2006)."
W09-2205,W06-1615,p,"2 Previous Work So far, Structural Correspondence Learning has been applied successfully to PoS tagging and Sentiment Analysis (Blitzer et al., 2006; Blitzer et al., 2007)."
W09-2205,W06-1615,o,"37 3 Semi-supervised Domain Adaptation 3.1 Structural Correspondence Learning Structural Correspondence Learning (Blitzer et al., 2006) exploits unlabeled data from both source and target domain to find correspondences among features from different domains."
W09-2205,W06-1615,o,"The techniques examined are Structural Correspondence Learning (SCL) (Blitzer et al., 2006) and Self-training (Abney, 2007; McClosky et al., 2006)."
W09-2205,W06-1615,o,"Pivots are features occurring frequently and behaving similarly in both domains (Blitzer et al., 2006)."
W09-2205,W06-1615,o,"Intuitively, if we are able to find good correspondences through linking pivots, then the augmented source data should transfer better to a target domain (Blitzer et al., 2006)."
W09-2205,W06-1615,o,"Algorithm 1 SCL (Blitzer et al., 2006) 1: Select m pivot features."
W09-2205,W06-1615,o,"SCL for Discriminative Parse Selection So far, pivot features on the word level were used (Blitzer et al., 2006; Blitzer et al., 2007)."
C08-1101,W06-1639,o,"6 Related work Evidence from the surrounding context has been used previously to determine if the current sentence should be subjective/objective (Riloff et al., 2003; Pang and Lee, 2004) and adjacency pair information has been used to predict congressional votes (Thomas et al., 2006)."
D07-1069,W06-1639,o,"In (Thomas et al. , 2006), the authors use the transcripts of debates from the US Congress to automatically classify speeches as supporting or opposing a given topic by taking advantage of the voting records of the speakers."
D07-1069,W06-1639,o,"2 Data 2.1 The US Congressional Speech Corpus The text used in the experiments is from the United States Congressional Speech corpus (Monroe et al. , 2006), which is an XML formatted version of the electronic United States Congressional Record from the Library of Congress1."
D07-1069,W06-1639,o,"The capitalization and punctuation is then removed from the text as in (Monroe et al. , 2006) and then the 1http://thomas.loc.gov 659 text stemmed using Porters Snowball II stemmer2."
D09-1018,W06-1639,o,"Others use sentence cohesion (Pang and Lee, 2004), agreement/disagreement between speakers (Thomas et al., 2006; Bansal et al., 2008), or structural adjacency."
N09-1001,W06-1639,o,"Graph-based algorithms for classification into subjective/objective or positive/negative language units have been mostly used at the sentence and document level (Pang and Lee, 2004; Agarwal and Bhattacharyya, 2005; Thomas et al., 2006), instead of aiming at dictionary annotation as we do."
P07-1055,W06-1639,o,"Furthermore, these systems have tackled the problem at different levels of granularity, from the document level (Pang et al. , 2002), sentence level (Pang and Lee, 2004; Mao and Lebanon, 2006), phrase level (Turney, 2002; Choi et al. , 2005), as well as the speaker level in debates (Thomas et al. , 2006)."
P07-1056,W06-1639,o,"While movie reviews have been the most studied domain, sentiment analysis has extended to a number of new domains, ranging from stock message boards to congressional floor debates (Das and Chen, 2001; Thomas et al. , 2006)."
W08-0122,W06-1639,o,"5 Related Work Evidence from the surrounding context has been used previously to determine if the current sentence should be subjective/objective (Riloff et al., 2003; Pang and Lee, 2004)) and adjacency pair information has been used to predict congressional votes (Thomas et al., 2006)."
N07-1009,W06-1640,o,"For the MUC6 data set, we extract noun phrases (mentions) automatically, but for MPQA, we assume mentions for coreference resolution are given as in Stoyanov and Cardie (2006)."
W06-0302,W06-1640,o,"As a follow-up to the work described in this paper we developed a method that utilizes the unlabeled NPs in the corpus using a structured rule learner (Stoyanov and Cardie, 2006)."
W06-0302,W06-1640,o,"The latter problem of developing methods that can work with incomplete supervisory information is addressed in a subsequent effort (Stoyanov and Cardie, 2006)."
D08-1058,W06-1641,o,"Polarity orientation identification has many useful applications, including opinion summarization (Ku et al., 2006) and sentiment retrieval (Eguchi and Lavrenko, 2006)."
P07-3007,W06-1641,o,Several sentiment information retrieval models were proposed in the framework of probabilistic language models by Eguchi and Lavrenko (2006).
W09-1606,W06-1641,o,Eguchi & Lavrenko (2006) propose the use of probabilistic language models for ranking the results not only by sentiment but also by the topic relevancy.
C08-1031,W06-1642,o,Discovering orientations of context dependent opinion comparative words is related to identifying domain opinion words (Hatzivassiloglou and McKeown 1997; Kanayama and Nasukawa 2006).
C08-1052,W06-1642,o,"The acquisition of clues is a key technology in these research efforts, as seen in learning methods for document-level SA (Hatzivassiloglou and McKeown, 1997; Turney, 2002) and for phraselevel SA (Wilson et al., 2005; Kanayama and Nasukawa, 2006)."
D07-1115,W06-1642,o,"(Kanayama and Nasukawa, 2006) reported that it was appropriate in 72.2% of cases."
D07-1115,W06-1642,o,"Typically, a small set of seed polar phrases are prepared, and new polar phrases are detected based on the strength of co-occurrence with the seeds (Hatzivassiloglous and McKeown, 1997; Turney, 2002; Kanayama and Nasukawa, 2006)."
D07-1115,W06-1642,o,"Kanayama and Nasukawa used both intraand inter-sentential co-occurrence to learn polarity of words and phrases (Kanayama and Nasukawa, 2006)."
D07-1115,W06-1642,o,"In Kanayamas method, the co-occurrence is considered as the appearance in intraor inter-sentential context (Kanayama and Nasukawa, 2006)."
D07-1115,W06-1642,o,"See Table 4 in (Kanayama and Nasukawa, 2006) for the detail."
D08-1014,W06-1642,o,"Much of the work in subjectivity analysis has been applied to English data, though work on other languages is growing: e.g., Japanese data are used in (Kobayashi et al., 2004; Suzuki et al., 2006; Takamura et al., 2006; Kanayama and Nasukawa, 2006), Chinese data are used in (Hu et al., 2005), and German data are used in (Kim and Hovy, 2006)."
D09-1018,W06-1642,o,"2 Related Work Previous work on polarity disambiguation has used contextual clues and reversal words (Wilson et al., 2005; Kennedy and Inkpen, 2006; Kanayama and Nasukawa, 2006; Devitt and Ahmad, 2007; Sadamitsu et al., 2008)."
D09-1019,W06-1642,o,"It is possible that there is a better automated method for finding such phrases, such as the methods in (Kanayama and Nasukawa, 2006; Breck, Choi and Cardie, 2007)."
D09-1019,W06-1642,o,"These words and phrases are usually compiled using different approaches (Hatzivassiloglou and McKeown, 1997; Kaji and Kitsuregawa, 2006; Kanayama and Nasukawa, 2006; Esuli and Sebastiani, 2006; Breck et al, 2007; Ding, Liu and Yu."
D09-1062,W06-1642,o,"(2005)), while exploring word-to-expression (inter-expression) relations has connections to techniques that employ more of a global-view of corpus statistics (e.g., Kanayama and Nasukawa (2006)).1 While most previousresearch exploits only one or the other type of relation, we propose a unified method that can exploit both types of semantic relation, while adapting a general purpose polarity lexicon into a domain specific one."
D09-1063,W06-1642,o,"Automatic approaches to creating a semantic orientation lexicon and, more generally, approaches for word-level sentiment annotation can be grouped into two kinds: (1) those that rely on manually created lexical resourcesmost of which use WordNet (Strapparava and Valitutti, 2004; Hu and Liu, 2004; Kamps et al., 2004; Takamura et al., 2005; Esuli and Sebastiani, 2006; An1http://www.wjh.harvard.edu/ inquirer 599 dreevskaia and Bergler, 2006; Kanayama and Nasukawa, 2006); and (2) those that rely on text corpora (Hatzivassiloglou and McKeown, 1997; Turney and Littman, 2003; Yu and Hatzivassiloglou, 2003; Grefenstette et al., 2004)."
N07-1037,W06-1642,o,"Hyperparameter is automatically selected from 2Although Kanayama and Nasukawa (2006) that for their dataset similar to ours was 0.83, this value cannot be directly compared with our value because their dataset includes both individual words and pairs of words."
N07-1037,W06-1642,o,"In addition to individual seed words, Kanayama and Nasukawa (2006) used more complicated syntactic patterns that were manually created."
P07-1123,W06-1642,o,"While work on subjectivity analysis in other languages is growing (e.g. , Japanese data are used in (Takamura et al. , 2006; Kanayama and Nasukawa, 2006), Chinese data are used in (Hu et al. , 2005), and German data are used in (Kim and Hovy, 2006)), much of the work in subjectivity analysis has been applied to English data."
D08-1081,W06-1643,o,"(Maskey and Hirschberg, 2005; Murray et al., 2005a; Galley, 2006))."
D08-1081,W06-1643,o,"Galley (2006) used skip-chain Conditional Random Fields to model pragmatic dependencies between paired meeting utterances (e.g. QUESTION-ANSWER relations), and used a combination of lexical, prosodic, structural and discourse features to rank utterances by importance."
D09-1054,W06-1643,o,"The skip-chain CRFs (Sutton and McCallum, 2004; Galley, 2006) model the long distance dependency between context and answer sentences and the 2D CRFs (Zhu et al., 2005) model the dependency between contiguous questions."
P08-1054,W06-1643,o,"Interestingly, the interannotator agreement on SWITCHBOARD (a0a2a1 a3a5a4a7a6a9a8a9a6 ) is higher than on the lecture corpus (0.372) and higher than the a0 -score reported by Galley (2006) for the ICSI meeting data used by Murray et al."
P08-1081,W06-1643,o,"Skipchain CRF model is applied for entity extraction and meeting summarization (Sutton and McCallum, 2006; Galley, 2006)."
P08-2051,W06-1643,o,"ROUGE has been used in meeting summarization evaluation (Murray et al., 2005; Galley, 2006), yet the question remained whether ROUGE is a good metric for the meeting domain."
P08-2051,W06-1643,o,"For this study, we used the same 6 test meetings as in (Murray et al., 2005; Galley, 2006)."
P08-2051,W06-1643,o,"We used four different system summaries for each of the 6 meetings: one based on the MMR method in MEAD (Carbonell and Goldstein, 1998; et al., 2003), the other three are the system output from (Galley, 2006; Murray et al., 2005; Xie and Liu, 2008)."
P08-2051,W06-1643,o,"(Galley, 2006) considered some location constrains in meeting summarization evaluation, which utilizes speaker information to some extent."
P09-2066,W06-1643,o,"Supervised methods include hidden Markov model (HMM), maximum entropy, conditional random fields (CRF), and support vector machines (SVM) (Galley, 2006; Buist et al., 2005; Xie et al., 2008; Maskey and Hirschberg, 2006)."
W08-0112,W06-1643,o,"A variety of approaches have been investigated for speech summarization, for example, maximum entropy, conditional random fields, latent semantic analysis, support vector machines, maximum marginal relevance (Maskey and Hirschberg, 2003; Hori et al., 2003; Buist et al., 2005; Galley, 2006; Murray et al., 2005; Zhang et al., 2007; Xie and Liu, 2008)."
D07-1088,W06-1671,o,"(Wick et al. , 2006) report extracting database records by learning record field compatibility."
N07-1018,W06-1673,o,"However, much recent work in machine learning and statistics has turned away from maximum-likelihood in favor of Bayesian methods, and there is increasing interest in Bayesian methods in computational linguistics as well (Finkel et al. , 2006)."
N07-1018,W06-1673,p,"This algorithm appears fairly widely known: it was described by Goodman (1998) and Finkel et al (2006) and used by Ding et al (2005), and is very similar to other dynamic programming algorithms for CFGs, so we only summarize it here."
N09-1037,W06-1673,o,"Previous work on linguistic annotation pipelines (Finkel et al., 2006; Hollingshead and Roark, 2007) has enforced consistency from one stage to the next."
P08-1059,W06-1673,o,"(Finkel et al., 2006), and in some cases, to factor the translation problem so that the baseline MT system can take advantage of the reduction in sparsity by being able to work on word stems."
P08-1101,W06-1673,o,"Different methods have been proposed to reduce error propagation between pipelined tasks, both in general (Sutton et al., 2004; Daume III and Marcu, 2005; Finkel et al., 2006) and for specific problems such as language modeling and utterance classification (Saraclar and Roark, 2005) and labeling and chunking (Shimizu and Haas, 2006)."
P09-1055,W06-1673,o,"Doing joint inference instead of taking a pipeline approach has also been shown useful for other problems (e.g., (Finkel et al., 2006; Cohen and Smith, 2007))."
W09-1114,W06-1673,o,"Our use of Gibbs sampling follows from its increasing use in Bayesian inference problems in NLP (Finkel et al., 2006; Johnson et al., 2007b)."
D07-1003,W06-3104,o,This model is very similar to Smith and Eisner (2006).
D07-1003,W06-3104,o,"Our story makes use of a weighted formalism known as quasi-synchronous grammar (hereafter, QG), originally developed by D. Smith and Eisner (2006) for machine translation."
D07-1003,W06-3104,p,"2 Related Work To model the syntactic transformation process, researchers in these fieldsespecially in machine translationhave developed powerful grammatical formalisms and statistical models for representing and learning these tree-to-tree relations (Wu and Wong, 1998; Eisner, 2003; Gildea, 2003; Melamed, 2004; Ding and Palmer, 2005; Quirk et al. , 2005; Galley et al. , 2006; Smith and Eisner, 2006, inter alia)."
D07-1003,W06-3104,o,We propose Smith and Eisners (2006) quasi-synchronous grammar (Section 3) as a general solution and the Jeopardy model (Section 4) as a specific instance.
D07-1003,W06-3104,o,"We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional)lexical-semantics-drivenlog-linear model."
D07-1003,W06-3104,p,"3 Quasi-Synchronous Grammar For a formal description of QG, we recommend Smith and Eisner (2006)."
D07-1003,W06-3104,o,"Here, following Smith and Eisner (2006), we use a weighted, quasi-synchronous dependency grammar. Apart from the obvious difference in application task, there are a few important differences with their model."
D07-1070,W06-3104,o,"A similar soft projection of dependencies was used in supervised machine translation by Smith and Eisner (2006), who used a source sentences dependency paths to bias the generation of its translation."
D09-1086,W06-3104,o,"Thus, our generative model is a quasi-synchronous grammar, exactly as in (Smith and Eisner, 2006a).3 When training on target sentences w, therefore, we tune the model parameters to maximize notsummationtextt p(t,w) as in ordinary EM, but rather 3Our task here is new; they used it for alignment."
D09-1086,W06-3104,o,"The simplest version, called Dependency Model with Valence (DMV), has been used in isolation and in combination with other models (Klein and Manning, 2004; Smith and Eisner, 2006b)."
D09-1086,W06-3104,o,"Bilingual configurations that condition on tprime,wprime (2) are incorporated into the generative process as in Smith and Eisner (2006a)."
D09-1086,W06-3104,o,"Much previous work on unsupervised grammar induction has used gold-standard partof-speech tags (Smith and Eisner, 2006b; Klein and Manning, 2004; Klein and Manning, 2002)."
D09-1086,W06-3104,o,"One option would be to leverage unannotated text (McClosky et al., 2006; Smith and Eisner, 2007)."
D09-1086,W06-3104,o,"Ourmodelisthusa form of quasi-synchronous grammar (QG) (Smith and Eisner, 2006a)."
P08-2037,W06-3104,o,"(2007) explored the use a formalism called quasisynchronous grammar (Smith and Eisner, 2006) in order to find a more explicit model for matching the set of dependencies, and yet still allow for looseness in the matching."
P09-1053,W06-3104,o,"Following Smith and Eisner (2006), we adopt the view that the syntactic structure of sentences paraphrasing some sentence s should be inspired by the structure of s. Because dependency syntax is still only a crude approximation to semantic structure, we augment the model with a lexical semantics component, based on WordNet (Miller, 1995), that models how words are probabilistically altered in generating a paraphrase."
P09-1053,W06-3104,o,"The model cleanly incorporates both syntax and lexical semantics using quasi-synchronous dependency grammars (Smith and Eisner, 2006)."
P09-1053,W06-3104,o,3.1 Background Smith and Eisner (2006) introduced the quasisynchronous grammar formalism.
P09-1053,W06-3104,o,"Since it loosely links the two sentences syntactic structures, QG is well suited for problems like word alignment for MT (Smith and Eisner, 2006) and question answering (Wang et al., 2007)."
P09-1053,W06-3104,o,Smith and Eisner (2006) used a quasisynchronous grammar to discover the correspondence between words implied by the correspondence between the trees.
P09-1053,W06-3104,o,"These are identical to prior work (Smith and Eisner, 2006; Wang et al., 2007), except that we add a root configuration that aligns the target parent-child pair to null and the head word of the source sentence, respectively."
P06-1110,W06-3603,o,"We estimate loss gradients (Equation 13) using a sample of the inference set, which gives a 100-fold increase in training speed (Turian & Melamed, 2006)."
D07-1117,W06-3607,o,"Reranking approaches (Charniak and Johnson, 2005; Chen et al. , 2002; Collins and Koo, 2005; Ji et al. , 2006; Roark et al. , 2006) have been successfully applied to many NLP applications, including parsing, named entity recognition, sentence boundary detection, etc. To the best of our knowledge, reranking approaches have not been used for POS tagging, possibly due to the already high levels of accuracy for English, which leave little room for further improvement."
P06-2055,W06-3607,o,"More details about the re-ranking algorithm are presented in (Ji et al. , 2006)."
P06-2055,W06-3607,o,"Specifically, we will consider a system which was developed for the ACE (Automatic Content Extraction) task 3 and includes the following stages: name structure parsing, coreference, semantic relation extraction and event extraction (Ji et al. , 2006)."
P09-1035,W07-0714,o,"But there is also extensive research focused on including linguistic knowledge in metrics (Owczarzak et al., 2006; Reeder et al., 2001; Liu and Gildea, 2005; Amigo et al., 2006; Mehay and Brew, 2007; Gimenez and M`arquez, 2007; Owczarzak et al., 2007; Popovic and Ney, 2007; Gimenez and M`arquez, 2008b) among others."
W08-1707,W08-0502,o,"The query tions, the syntax, semantics, and abstract knowledge representation have type declarations (Crouch and King, 2008) which help to detect malformed representations."
W08-1707,W08-0502,o,"The Bridge system uses the XLE (Crouch et al., 2008) parser to produce syntactic structures and then the XLE ordered rewrite system to produce linguistic semantics (Crouch and King, 2006) and abstract knowledge representations."
D09-1160,W08-0804,p,"When we run our classifiers on resource-tight environments such as cell-phones, we can use a random feature mixing technique (Ganchev and Dredze, 2008) or a memory-efficient trie implementation based on a succinct data structure (Jacobson, 1989; Delpratt et al., 2006) to reduce required memory usage."
C94-2110,W91-0208,o,"(1) a. Please move your car Her sadness moves him b. John enjoys the book John enjoys reading the book e. The two alibis do not accord They accorded him a warm welcome d. John swam for hours John swam across the channel Although the precise nrechanisms which govern lexical knowledge are still largely unknown, there is strong evidence that word sense extensibi\[ity is not arbitrary (Atkins &: Levin, 1991; Pustejovsky, 1991, 1994; Ostler Atkius, 1991)."
J98-1003,W91-0208,o,5.3 Systematic Sense Shift Ostler and Atkins (1991) contend that there is strong evidence to suggest that a large part of word sense ambiguity is not arbitrary but follows regular patterns.
P06-1031,W91-0208,o,"or cooking, which agrees with the knowledge presented in previous work (Ostler and Atkins, 1991)."
P96-1005,W91-0208,o,"It has been lately incorporated into computational lexicography in (Atkins, 1991), (Ostler and Atkins, 1992), (Briscoe and Copestake, 1991), (Copestake and Briscoe, 1992), (Briscoe et al. , 1993))."
W96-0308,W91-0208,o,"Within this class would fall the Lexical Implication Rules (LIRs) of Ostler and Atkins (1991), the lexical rules of Copestake and Briscoe (1991), the Generative Lexicon of Pustejovsky (1995), and the ellipsis recovery procedUres of Viegas and Nirenburg (1995)."
W96-0308,W91-0208,o,These are most directly presented in Ostler and Atkins (1991).
C02-1007,W93-0113,o,"Let us now compare our results to those obtained using shallow parsing, as previously done by Grefenstette (1993)."
C02-1090,W93-0113,o,"Its previous applications (e.g. , Grefenstette 1993, Hearst and Schuetze 1993, Takunaga et al 1997, Lin 1998, Caraballo 1999) demonstrated that cooccurrence statistics on a target word is often sufficient for its automatical classification into one of numerous classes such as synsets of WordNet."
P99-1067,W93-0113,o,"It can also be considered as an extension from the monolingual to the bilingual case of the well-established methods for semantic or syntactic word clustering as proposed by Schtitze (1993), Grefenstette (1994), Ruge (1995), Rapp (1996), Lin (1998), and others."
P99-1067,W93-0113,o,"However, in yet unpublished work we found that at least for the computation of synonyms and related words neither syntactical analysis nor singular value decomposition lead to significantly better results than the approach described here when applied to the monolingual case (see also Grefenstette, 1993), so we did not try to include these methods in our system."
W04-2103,W93-0113,o,"Grefenstette (1993) studied two context delineation methods of English nouns: the window-based and the syntactic, whereby all the different types of syntactic dependencies of the nouns were used in the same feature space."
W04-2103,W93-0113,o,"The typical practice of preprocessing distributional data is to remove rare word co-occurrences, thus aiming to reduce noise from idiosyncratic word uses and linguistic processing errors and at the same time form more compact word representations (e.g. , Grefenstette, 1993; Ciaramita, 2002)." A00-1011,W95-0107,o,"First, it recognizes non-recursive Base Noun Phrase (BNP) (our specifications for BNP resemble those in Ramshaw and Marcus 1995)." A00-2007,W95-0107,o,"section 20 Majority voting (Mufioz et al. , 1999) (Tjong Kim Sang and Veenstra~ 1999) (Ramshaw and Marcus, 1995) (Argarnon et al. , 1998) accuracy precision O:98.10% C:98.29% 93.63% O:98.1% C:98.2% 93.1% 97.58% 92.50% 97.37% 91.80% 91.6% recall FZ=I 92.89% 93.26 92.4% 92.8 92.25% 92.37 92.27% 92.03 91.6% 91.6 section 00 accuracy precision Majority voting 0:98.59% C:98.65% 95.04% r (Tjong Kim Sang and Veenstra, 1999) 98.04% 93.71% (Ramshaw and Marcus, 1995) 97.8% 93.1% recall FB=I 94.75% 94.90 93.90% 93.81 93.5% 93.3 Table 3: The results of majority voting of different data representations applied to the two standard data sets put forward by (Ramshaw and Marcus, 1995) compared with earlier work." A00-2007,W95-0107,o,"We have applied it to the two data sets mentioned in (Ramshaw and Marcus, 1995)." A00-2007,W95-0107,o,"(Ramshaw and Marcus, 1995) have build a chunker by applying transformation-based learning to sections of the Penn Treebank." A00-2007,W95-0107,o,"They compare two data representations and report that a representation with bracket structures outperforms the IOB tagging representation introduced by (Ramshaw and Marcus, 1995)." A00-2007,W95-0107,o,"Two baseNP data sets have been put forward by (Ramshaw and Marcus, 1995)." A00-2007,W95-0107,o,"The noun phrases in this data set are the same as in the Treebank and therefore the baseNPs in this data set are slightly different from the ones in the (Ramshaw and Marcus, 1995) data sets." 
A00-2007,W95-0107,o,"And third, 1This (Ramshaw and Marcus, 1995) baseNP data set is available via ftp://ftp.cis.upenn.edu/pub/chunker/ 2Software for generating the data is available from http://lcg-www.uia.ac.be/conl199/npb/ 50 with the Fβ=1 rate which is equal to (2*precision*recall)/(precision+recall)." A00-2007,W95-0107,o,"An alternative representation for baseNPs has been put forward by (Ramshaw and Marcus, 1995)." A00-2007,W95-0107,o,"They have used the (Ramshaw and Marcus, 1995) representation as well (IOB1)." A00-2007,W95-0107,o,"Like the data used by (Ramshaw and Marcus, 1995), this data was retagged by the Brill tagger in order to obtain realistic part-of-speech (POS) tags 3." A00-2007,W95-0107,o,"The data was segmented into baseNP parts and nonbaseNP parts in a similar fashion as the data used by (Ramshaw and Marcus, 1995)." C00-1034,W95-0107,o,"We can find the same typology in other works (Ramshaw and Marcus, 1995), (Cardie and Pierce, 1998)." C00-1034,W95-0107,o,"development of corpora with morpho-syntactic and syntactic annotation (Marcus et al. , 1993), (Sampson, 1995)." C00-1034,W95-0107,o,"(Muñoz et al. , 1999) showed that this representation tends to provide better results than the representation used in (Ramshaw and Marcus, 1995) where each word is tagged with a tag I(inside), O(outside), or B(breaker)." C00-1082,W95-0107,o,"Bunsetsu identification is a problem similar to chunking (Ramshaw and Marcus, 1995; Sang and Veenstra, 1999) in other languages." C00-2089,W95-0107,o,"Precision and recall rates were 92.4% on the same data used in (Ramshaw and Marcus, 1995)." C00-2102,W95-0107,p,"This means that the problem of recognizing named entities in those cases can be solved by incorporating techniques of base noun phrase chunking (Ramshaw and Marcus, 1995)." 
C00-2102,W95-0107,o,3.2.1 Inside/Outside Encoding The Inside/Outside scheme of encoding chunking states of base noun phrases was studied in Ramshaw and Marcus (1995). C00-2105,W95-0107,p,"Ramshaw and Marcus (Ramshaw and Marcus, 1995) successfully applied Eric Brill's transformation-based learning method to the chunking problem." C00-2124,W95-0107,o,"Like the data used by Ramshaw and Marcus (1995), this data was retagged by the Brill tagger in order to obtain realistic part-of-speech (POS) tags 5." C00-2124,W95-0107,o,The data was segmented into baseNP parts and non-baseNP parts in a similar fashion as the data used by Ramshaw and Marcus (1995). C00-2124,W95-0107,o,(1999) O:98.1% C:98.2% 92.4% 93.1% Ramshaw and Marcus (1995) IOB1:97.37% 91.80% 92.27% Argamon et al. C00-2124,W95-0107,o,(1999) 91.6% 91.6% Fβ=1 93.86 93.26 92.8 92.03 91.6 Table 3: The overall performance of the majority voting combination of our best five systems (selected on tuning data performance) applied to the standard data set put forward by Ramshaw and Marcus (1995) together with an overview of earlier work. C00-2124,W95-0107,o,"the data put forward by Ramshaw and Marcus (1995)." C00-2124,W95-0107,o,been put forward by Ramshaw and Marcus (1995). C00-2124,W95-0107,o,"The data contains words, their part-of-speech 1This Ramshaw and Marcus (1995) baseNP data set is available via ftp://ftp.cis.upenn.edu/pub/chunker/ 857 (POS) tags as computed by the Brill tagger and their baseNP segmentation as derived from the Treebank (with some modifications)." C00-2124,W95-0107,o,An alternative representation for baseNPs has been put forward by Ramshaw and Marcus (1995). C00-2124,W95-0107,o,He used the Ramshaw and Marcus (1995) representation as well (IOB1). C00-2138,W95-0107,o,Ramshaw and Marcus (1995) introduced a baseNP which is a non-recursive NP. They used transformation-based learning to identify non-recursive baseNPs in a sentence. 
C04-1066,W95-0107,o,"Table 2 shows the unknown word tags for chunking, which are known as the IOB2 model (Ramshaw and Marcus, 1995)." C04-1067,W95-0107,o,"This approach is also used in base-NP chunking (Ramshaw and Marcus, 1995) and named entity recognition (Sekine et al. , 1998) as well as word segmentation." C08-1061,W95-0107,o,"The noun phrase chunking (NP chunking) module uses the basic NP chunker software from 483 (Ramshaw and Marcus, 1995) to recognize the noun phrases in the question." C08-1106,W95-0107,o,data set (Sang & Buchholz 2000; Ramshow & Marcus 1995). C08-2031,W95-0107,o,"This is confirmed by a comparison between our baseline result (F=1=55.4%) and some baseline results of English base-NP chunking task (e.g. precision=81.9%, recall=78.2%, F=1=80.0% (Ramshaw and Marcus, 1995))." D07-1033,W95-0107,o,"We adopted IOB (IOB2) labeling (Ramshaw and Marcus, 1995), where the rst word of an entity of class C is labeled B-C, the words in the entity are labeled I-C, and other words are labeled O." D07-1084,W95-0107,o,"Training and testing were performed using the noun phrase chunking corpus described in Ramshaw & Marcus (1995) (Ramshaw and Marcus, 1995)." D08-1063,W95-0107,o,"Similarly to classical NLP tasks such as text chunking (Ramshaw and Marcus, 1995) and named entity recognition (Tjong Kim Sang, 2002), we formulate mention detection as a sequence classification problem, by assigning a label to each token in the text, indicating whether it starts a specific mention, is inside a specific mention, or is outside any mentions." D08-1071,W95-0107,o,"Co-training (Yarowsky, 1995; Blum and Mitchell, 1998) is related to self-training, in that an algorithm is trained on its own predictions." D08-1071,W95-0107,o,"We use 3500 sentences from CoNLL (Tjong Kim Sang and De Meulder, 2003) as the NER data and section 20-23 of the WSJ (Marcus et al., 1993; Ramshaw and Marcus, 1995) as the POS/chunk data (8936 sentences)." 
D08-1075,W95-0107,o,"Our conception of the task is inspired by Ramshaw and Marcus' representation of text chunking as a tagging problem (Ramshaw and Marcus, 1995). The information that can be used to train the system appears in columns 1 to 8 of Table 1." D08-1112,W95-0107,o,"All corpora are formatted in the IOB sequence representation (Ramshaw and Marcus, 1995)." D09-1119,W95-0107,o,"5.2 NP Chunking The goal of this task (Marcus and Ramshaw, 1995) is the identification of non-recursive NPs." D09-1119,W95-0107,o,"1142 We show that by using a variant of SVM Anchored SVM Learning (Goldberg and Elhadad, 2007) with a polynomial kernel, one can learn accurate models for English NP-chunking (Marcus and Ramshaw, 1995), base-phrase chunking (CoNLL 2000), and Dutch Named Entity Recognition (CoNLL 2002), on a heavily pruned feature space." D09-1153,W95-0107,o,"With IOB2 representation (Ramshaw and Marcus, 1995), the problem of Chinese chunking can be regarded as a sequence labeling task." E06-1046,W95-0107,o,"These tags are drawn from a tagset which is constructed by 363 extending each argument label by three additional symbols a80a44a81a83a82a84a81a86a85, following (Ramshaw and Marcus, 1995)." E99-1016,W95-0107,o,"Results for chunking Penn Treebank data were previously presented by several authors (Ramshaw and Marcus, 1995; Argamon et al. , 1998; Veenstra, 1998; Cardie and Pierce, 1998)." E99-1023,W95-0107,o,"We have used the optimal experiment configurations that we had obtained from the fourth experiment series for processing the complete (Ramshaw and Marcus, 1995) data set." E99-1023,W95-0107,n,"Again the best result was obtained with IOB1 (Fβ=1=92.37) which is an improvement of the best reported Fβ=1 rate for this data set ((Ramshaw and Marcus, 1995): 92.03)." 
E99-1023,W95-0107,o,"We would like to apply our learning approach to the large data set mentioned in (Ramshaw and Marcus, 1995): Wall Street Journal corpus sections 2-21 as training material and section 0 as test material." E99-1023,W95-0107,n,"This time the chunker achieved a Fβ=1 score of 93.81 which is half a point better than the results obtained by (Ramshaw and Marcus, 1995): 93.3 (other chunker rates for this data: accuracy: 98.04%; precision: 93.71%; recall: 93.90%)." E99-1023,W95-0107,o,"Ramshaw and Marcus used transformation-based learning (TBL) for developing two chunkers (Ramshaw and Marcus, 1995)." E99-1023,W95-0107,p,"(Ramshaw and Marcus, 1995) shows that baseNP recognition (Fβ=1=92.0) is easier than finding both NP and VP chunks (Fβ=1=88.1) and that increasing the size of the training data increases the performance on the test set." E99-1023,W95-0107,p,"It performed slightly worse on baseNP recognition than the (Ramshaw and Marcus, 1995) experiments (Fβ=1=91.6)." E99-1023,W95-0107,o,"177 Proceedings of EACL '99 IOB1 IOB2 IOE1 IOE2 \[+\] \[+ IO IO +\] (Ramshaw and Marcus, 1995) (Veenstra, 1998) (Argamon et al. , 1998) (Cardie and Pierce, 1998) accuracy 97.58% 96.50% 97.58% 96.77% 97.37% 97.2% precision 92.50% 91.24% 92.41% 91.93% 93.66% 91.47% 91.25% 91.80% 89.0% 91.6 % 90.7% recall Fβ=1 92.25% 92.37 92.32% 91.78 92.04% 92.23 92.46% 92.20 90.81% 92.22 92.61% 92.04 92.54% 91.89 92.27% 92.03 94.3% 91.6 91.6% 91.6 91.1% 90.9 Table 6: The Fβ=1 scores for the (Ramshaw and Marcus, 1995) test set after training with their training data set." E99-1023,W95-0107,n,"With all but two formats IB1-IG achieves better Fβ=1 rates than the best published result in (Ramshaw and Marcus, 1995)." E99-1023,W95-0107,o,"However, they use the (Ramshaw and Marcus, 1995) data set in a different training-test division (10-fold cross validation) which makes it difficult to compare their results with others." 
E99-1023,W95-0107,p,"The IOB1 format, introduced in (Ramshaw and Marcus, 1995), consistently came out as the best format." E99-1023,W95-0107,p,"(Ramshaw and Marcus, 1995) have introduced a ""convenient"" data representation for chunking by converting it to a tagging task." E99-1023,W95-0107,o,"2.1 Data representation We have compared four complete and three partial data representation formats for the baseNP recognition task presented in (Ramshaw and Marcus, 1995)." E99-1023,W95-0107,o,"in their treatment of chunk-initial and chunk-final \[ + \] words: IOB1 IOB2 IOE1 IOE2 The first word inside a baseNP immediately following another baseNP receives a B tag (Ramshaw and Marcus, 1995)." E99-1023,W95-0107,o,"In (Ramshaw and Marcus, 1995) a set of transformational rules is used for modifying the classification of words." E99-1023,W95-0107,o,"IB1-IG is a part of the TiMBL software package which is available from http://ilk.kub.nl 3 Results We have used the baseNP data presented in (Ramshaw and Marcus, 1995) 2." E99-1023,W95-0107,o,"(Ramshaw and Marcus, 1995) describe an error-driven transformation-based learning (TBL) method for finding NP chunks in texts." E99-1023,W95-0107,o,"The chunking classification was made by (Ramshaw and Marcus, 1995) based on the parsing information in the WSJ corpus." E99-1023,W95-0107,o,"All formats 2The data described in (Ramshaw and Marcus, 1995) is available from ftp://ftp.cis.upenn.edu/pub/chunker/ 175 Proceedings of EACL '99 word/POS context chunk tag context IOB1 L=2/R=1 IOB2 L=2/R=1 IOE1 L=1/R=2 IOE2 L=1/R=2 \[ +\] L=2/R=1 + L=0/R=2 \[ + IO L=2/R=0 + L=1/R=1 IO +\] L=1/R=1+L=0/R=2 Fβ=1 1/2 90.12 1/0 89.30 1/2 89.55 0/1 89.73 0/0 + 0/0 89.32 0/0 + 1/1 89.78 1/1 + 0/0 89.86 Table 3: Results second experiment series: the best Fβ=1 scores for different left (L) and right (R) chunk tag context sizes for the seven representation formats using 5-fold cross-validation on section 15 of the WSJ corpus." 
H05-1033,W95-0107,o,"We adopted the chunk representation proposed by Ramshaw and Marcus (1995) and used four different tags: B-NUC and B-SAT for nucleus and satellite-initial tokens, and I-NUC and I-SAT for non-initial tokens, i.e., tokens inside a nucleus and satellite span." H05-1048,W95-0107,o,"If one reduces the problem of entity mention detection to the detection of its head, the nature of the problem changes and the annotation of data becomes at; The [GPE Jordanian] [ORG military] [PER spokesman] This allows us to consider the problem as a tagging/chunking problem and describe each word as beginning (B) an entity mention, inside (I) an entity mention or outside (O) an entity mention (Ramhsaw and Marcus, 1995; Sang and Veenstra, 1999)." H05-1099,W95-0107,o,"2 Evaluating Heterogeneous Parser Output Two commonly reported shallow parsing tasks are Noun-Phrase (NP) Chunking (Ramshaw and Marcus, 1995) and the CoNLL-2000 Chunking task (Sang and Buchholz, 2000), which extends the NPChunking task to recognition of 11 phrase types1 annotated in the Penn Treebank." H05-1099,W95-0107,o,"(2002) 94.17 Li and Roth (2001) 93.02 94.64 Table 2: Baseline results on three shallow parsing tasks: the NP-Chunking task (Ramshaw and Marcus, 1995); the CoNLL-2000 Chunking task (Sang and Buchholz, 2000); and the Li & Roth task (Li and Roth, 2001), which is the same as CoNLL-2000 but with more training data and a different test section." H05-1099,W95-0107,o,"1 Introduction Finite-state parsing (also called chunking or shallow parsing) has typically been motivated as a fast firstpass for or approximation to more expensive context-free parsing (Abney, 1991; Ramshaw and Marcus, 1995; Abney, 1996)." H05-1099,W95-0107,o,"task, originally introduced in Ramshaw and Marcus (1995) and also described in (Collins, 2002; Sha and Pereira, 2003), brackets just base NP constituents5." 
H05-1099,W95-0107,o,"They mention that the resulting shallow parse tags are somewhat different than those used by Ramshaw and Marcus (1995), but that they found no significant accuracy differences in training on either set." I05-2022,W95-0107,o,"(Ramshaw and Marcus, 1995) used transformation based learning using a large annotated corpus for English." I05-6003,W95-0107,o,"Among the chunk types, NP chunking is the first to receive the attention (Ramshaw and Marcus, 1995), than other chunk types, such as VP and PP chunking (Veenstra, 1999)." I05-6010,W95-0107,o,"Many statistical taggers and parsers have been trained on it, e.g. Ramshaw and Marcus (1995), Srinivas (1997) and Alshawi and Carter (1994)." I08-1050,W95-0107,o,"Meanwhile, it is common for NP chunking tasks to represent a chunk (e.g., NP) with two labels, the begin (e.g., B-NP) and inside (e.g., I-NP) of a chunk (Ramshaw and Marcus, 1995)." I08-2079,W95-0107,o,"On the base of the chunk scheme proposed by Abney (1991) and the BIO tagging system proposed in Ramshaw and Marcus(1995), many machine learning techniques are used to deal with the problem." I08-4029,W95-0107,o,"3.1 Word Sequence Classification Similar to English text chunking (Ramshaw and Marcus, 1995; Lee and Wu, 2007), the word sequence classification model aims to classify each word via encoding its context features." I08-5010,W95-0107,o,"The general label sequence ln1 has the highest probability of occuring for the word sequence W n1 among all possible label sequences, that is Ln1 = argmax {Pr (Ln1 | W n1 ) } 3.2 Tagging Scheme We followed the IOB tagging scheme (Ramshaw and Marcus, 1995) for all the three languages (English, Hindi and Telugu)." I08-5010,W95-0107,o,"This tagging scheme is the IOB scheme originally put forward by Ramshaw and Marcus (Ramshaw and Marcus, 1995)." N01-1025,W95-0107,o,"Various machine learning approaches have been proposed for chunking (Ramshaw and Marcus, 1995; Tjong Kim Sang, 2000a; Tjong Kim Sang et al. 
, 2000; Tjong Kim Sang, 2000b; Sassano and Utsuro, 2000; van Halteren, 2000)." N01-1025,W95-0107,o,"Base NP standard data set (baseNP-S) This data set was first introduced by (Ramshaw and Marcus, 1995), and taken as the standard data set for baseNP identification task2." N01-1025,W95-0107,o,"Inside/Outside This representation was first introduced in (Ramshaw and Marcus, 1995), and has been applied for base NP chunking." N03-1002,W95-0107,o,"Five chunk tag sets, IOB1, IOB2, IOE1, IOE2 (Ramshaw and Marcus, 1995) and SE (Uchimoto et al. , 2000), are commonly used." N03-1028,W95-0107,p,"The pioneering work of Ramshaw and Marcus (1995) introduced NP chunking as a machine-learning problem, with standard datasets and evaluation metrics." N03-1028,W95-0107,o,"In contrast, generative models are trained to maximize the joint probability of the training data, which is 1Ramshaw and Marcus (1995) used transformation-based learning (Brill, 1995), which for the present purposes can be thought of as a classification-based method." N03-1028,W95-0107,o,"Following Ramshaw and Marcus (1995), the input to the NP chunker consists of the words in a sentence annotated automatically with part-of-speech (POS) tags." N03-1028,W95-0107,o,"4.1 Data Preparation NP chunking results have been reported on two slightly different data sets: the original RM data set of Ramshaw and Marcus (1995), and the modified CoNLL-2000 version of Tjong Kim Sang and Buchholz (2000)." N03-1035,W95-0107,o,Toward a Task-based Gold Standard for Evaluation of NP Chunks and Technical Terms Nina Wacholder Rutgers University nina@scils.rutgers.edu Peng Song Rutgers University psong@paul.rutgers.edu Abstract We propose a gold standard for evaluating two types of information extraction output -noun phrase (NP) chunks (Abney 1991; Ramshaw and Marcus 1995) and technical terms (Justeson and Katz 1995; Daille 2000; Jacquemin 2002). 
N03-1035,W95-0107,o,NP chunks (Abney 1991; Ramshaw and Marcus 1995; Evans and Zhai 1996; Frantzi and Ananiadou 1996) and technical terms (Dagan and Church 1994; Justeson and Katz 1995; Daille 1996; Jacquemin 2001; Bourigault et al. 2002) fall into this difficult-toassess category. N03-1035,W95-0107,o,"To evaluate the performance of a parser, NP chunks can usefully be evaluated by a gold standard; many systems (e.g. , Ramshaw and Marcus 1995 and Cardie and Pierce 1988) use the Penn Treebank for this type of evaluation." N04-1001,W95-0107,o,"Similarly to classical NLP tasks such as base noun phrase chunking (Ramshaw and Marcus, 1994), text chunking (Ramshaw and Marcus, 1995) or named entity recognition (Tjong Kim Sang, 2002), we formulate the mention detection problem as a classification problem, by assigning to each token in the text a label, indicating whether it starts a specific mention, is inside a specific mention, or is outside any mentions." N04-1005,W95-0107,o,"These tags are drawn from a tagset which is constructed by extending each argument label by three additional symbols a11 a24 a35 a24a4a12, following (Ramshaw and Marcus, 1995)." N04-1029,W95-0107,o,We then apply Brills rule-based tagger (Brill 1995) and BaseNP noun phrase chunker (Ramshaw and Marcus 1995) to extract noun phrases from these sentences. N04-4037,W95-0107,o,"Then the words are tagged as inside a phrase (I), outside a phrase (O) or beginning of a phrase (B) (Ramhsaw and Marcus, 1995)." P00-1015,W95-0107,o,This second expression is similar to that used in [Marcus 1995]. P02-1055,W95-0107,o,"5 Related Research Ramshaw and Marcus (1995), Munoz et al." P02-1055,W95-0107,o,"For the chunk part of the code, we adopt the Inside, Outside, and Between (IOB) encoding originating from (Ramshaw and Marcus, 1995)." P03-1004,W95-0107,o,"We use a standard data set (Ramshaw and Marcus, 1995) consisting of sections 15-19 of the WSJ corpus as training and section 20 as testing." 
P03-1060,W95-0107,p,"The simplest one is the BIO representation scheme (Ramshaw and Marcus, 1995), where a B denotes the first item of an element and an I any non-initial item, and a syllable with tag O is not a part of any element." P03-1063,W95-0107,o,"1 Introduction Text chunking has been one of the most interesting problems in natural language learning community since the first work of (Ramshaw and Marcus, 1995) using a machine learning method." P03-1064,W95-0107,o,"(Ramshaw and Marcus, 1995) approached chucking by using Transformation Based Learning(TBL)." P03-1064,W95-0107,o,"On the same dataset as that of (Chen et al. , 1999), our new supertagger achieves an accuracy of a2a4a3a6a5a8a7a10a9a12a11 . Compared with the supertaggers with the same decoding complexity (Chen, 2001), our algorithm achieves an error reduction of a22a23a5a26a9a12a11 . We repeat Ramshaw and Marcus Transformation Based NP chunking (Ramshaw and Marcus, 1995) test by substituting supertags for POS tags in the dataset." P03-1064,W95-0107,o,"We repeat Ramshaw and Marcus Transformation Based NP chunking (Ramshaw and Marcus, 1995) algorithm by substituting supertags for POS tags in the dataset." P05-1027,W95-0107,o,"The class labeling system in our experiment is IOB2 (Sang, 2000), which is a variation of IOB (Ramshaw and Marcus, 1995)." P05-2004,W95-0107,o,"This segmentation task can be achieved by assigning words in a sentence to one of three tokens: B for Begin-NP, I for Inside-NP, or O for OutsideNP (Ramshaw and Marcus, 1995)." P06-1028,W95-0107,o,"These tasks are generally treated as sequential labeling problems incorporating the IOB tagging scheme (Ramshaw and Marcus, 1995)." P06-1087,W95-0107,o,"The results were evaluated using the CoNLL shared task evaluation tools 5 . The approaches tested were Error Driven Pruning (EDP) (Cardie and Pierce, 1998) and Transformational Based Learning of IOB tagging (TBL) (Ramshaw and Marcus, 1995)." 
P06-1087,W95-0107,o,"For the Transformation Based method, we have used both the PoS tag and the word itself, with the same templates as described in (Ramshaw and Marcus, 1995)." P06-1087,W95-0107,p,"4.2 Support Vector Machines We chose to adopt a tagging perspective for the Simple NP chunking task, in which each word is to be tagged as either B, I or O depending on wether it is in the Beginning, Inside, or Outside of the given chunk, an approach first taken by Ramshaw and Marcus (1995), and which has become the de-facto standard for this task." P06-1087,W95-0107,o,"Conjunctions are a major source of errors for English chunking as well (Ramshaw and Marcus, 1995, Cardie and Pierce, 1998)9, and we plan to address them in future work." P06-1087,W95-0107,o,"The NP chunks in the shared task data are base-NP chunks which are non-recursive NPs, a definition first proposed by Ramshaw and Marcus (1995)." P06-1087,W95-0107,o,"3 Hebrew Simple NP Chunks The standard definition of English base-NPs is any noun phrase that does not contain another noun phrase, with possessives treated as a special case, viewing the possessive marker as the first word of a new base-NP (Ramshaw and Marcus, 1995)." P06-2013,W95-0107,o,"Ramshaw and Marcus(Ramshaw and Marcus, 1995) first represented base noun phrase recognition as a machine learning problem." P06-2013,W95-0107,o,"We only describe these models briefly since full details are presented elsewhere(Kudo and Matsumoto, 2001; Sha and Pereira, 2003; Ramshaw and Marcus, 1995; Sang, 2002)." P06-2054,W95-0107,o,"Following (Ramshaw and Marcus, 1995), the slot labels are drawn from a set of classes constructed by extending each label by three additional symbols, Beginning/Inside/Outside (B/I/O)." P06-2098,W95-0107,o,"(Ramshaw and Marcus, 1995) To reduce the inference time, following (McCallum et al, 2003), we collapsed the 45 different POS labels contained in the original data." 
P06-3006,W95-0107,o,"We annotated with the BIO tagging scheme used in syntactic chunkers (Ramshaw and Marcus, 1995)." P07-1029,W95-0107,p,"Following Ramshaw and Marcus (1995), the current dominant approach is formulating chunking as a classification task, in which each word is classified as the (B)eginning, (I)nside or (O)outside of a chunk." P07-1029,W95-0107,o,"NP chunks in the shared task data are BaseNPs, which are non-recursive NPs, a definition first proposed by Ramshaw and Marcus (1995)." P07-1029,W95-0107,o,"For the English experiments, we use the now-standard training and test sets that were introduced in (Marcus and Ramshaw, 1995)2." P07-1031,W95-0107,o,"240 2 Motivation Many approaches to identifying base noun phrases have been explored as part of chunking (Ramshaw and Marcus, 1995), but determining sub-NP structure is rarely addressed." P09-3007,W95-0107,o,"As in the work of (Ramshaw and Marcus, 1995), each word or punctuation mark within a sentence is labeled with IOB tag together with its function type." P97-1041,W95-0107,o,"Transformation-based learning has also been successfully applied to text chunking (Ramshaw and Marcus, 1995), morphological disambiguation (Oflazer and Tur, 1996), and phrase parsing (Vilain and Day, 1996)." P98-1010,W95-0107,o,"We used the NP data prepared by Ramshaw and Marcus (1995), hereafter RM95." P98-1010,W95-0107,o,The last line shows the results of Ramshaw and Marcus (1995) (recognizing NP's) with the same train/test data. P98-1010,W95-0107,o,"Vilain and Day (1996) identify (and classify) name phrases such as company names, locations, etc. Ramshaw and Marcus (1995) detect noun phrases, by classifying each word as being inside a phrase, outside or on the boundary between phrases." P98-1010,W95-0107,o,"Surprisingly, though, rather little work has been devoted to learning local syntactic patterns, mostly noun phrases (Ramshaw and Marcus, 1995; Vilain and Day, 1996)." 
P98-1034,W95-0107,o,"More recently, Ramshaw & Marcus (In press) apply transformation-based learning (Brill, 1995) to the problem." P98-2234,W95-0107,o,"By core phrases, we mean the kind of nonrecursive simplifications of the NP and VP that in the literature go by names such as noun/verb groups (Appelt et al. , 1993) or chunks, and base NPs (Ramshaw and Marcus, 1995)." P99-1009,W95-0107,o,"Much previous work has been done on this problem and many different methods have been used: Church's PARTS (1988) program uses a Markov model; Bourigault (1992) uses heuristics along with a grammar; Voutilainen's NPTool (1993) uses a lexicon combined with a constraint grammar; Juteson and Katz (1995) use repeated phrases; Veenstra (1998), Argamon, Dagan & Krymolowski(1998) and Daelemaus, van den Bosch & Zavrel (1999) use memory-based systems; Ramshaw & Marcus (In Press) and Cardie & Pierce (1998) use rule-based systems." P99-1009,W95-0107,o,"1 To train their system, R&M used a 200k-word chunk of the Penn Treebank Parsed Wall Street Journal (Marcus et al. , 1993) tagged using a transformation-based tagger (Brill, 1995) and extracted base noun phrases from its parses by selecting noun phrases that contained no nested noun phrases and further processing the data with some heuristics (like treating the possessive marker as the first word of a new base noun phrase) to flatten the recursive structure of the parse." P99-1009,W95-0107,p,"Among the machine learning algorithms studied, rule based systems have proven effective on many natural language processing tasks, including part-of-speech tagging (Brill, 1995; Ramshaw and Marcus, 1994), spelling correction (Mangu and Brill, 1997), word-sense disambiguation (Gale et al. , 1992), message understanding (Day et al. , 1997), discourse tagging (Samuel et al. 
, 1998), accent restoration (Yarowsky, 1994), prepositional-phrase attachment (Brill and Resnik, 1994) and base noun phrase identification (Ramshaw and Marcus, In Press; Cardie and Pierce, 1998; Veenstra, 1998; Argamon et al. , 1998)." W00-0713,W95-0107,o,"(Veenstra, 1998) used the Base-NP tag set as presented in (Ramshaw and Marcus, 1995): I for inside a Base-NP, O for outside a Base-NP, and B for the first word in a Base-NP following another Base-NP." W00-0721,W95-0107,o,"Our goal is to come up with a mechanism that, given an input string, identifies the phrases in this string, this is a fundamental task with applications in natural language (Church, 1988; Ramshaw and Marcus, 1995; Muñoz et al. , 1999; Cardie and Pierce, 1998)." W00-0721,W95-0107,o,"The data sets used are the standard data sets for this problem (Ramshaw and Marcus, 1995; Argamon et al. , 1999; Muñoz et al. , 1999; Tjong Kim Sang and Veenstra, 1999) taken from the Wall Street Journal corpus in the Penn Treebank (Marcus et al. , 1993)." W00-0726,W95-0107,o,Ramshaw and Marcus (1995) approached chunking by using a machine learning method. W00-0726,W95-0107,o,3.1 NP Our NP chunks are very similar to the ones of Ramshaw and Marcus (1995). W00-0726,W95-0107,o,"Adverbs/adverbial phrases become part of the VP chunk (as long as they are in front of the main verb): (VP could (ADVP very well) (VP show )) → \[VP could very well show \] In contrast to Ramshaw and Marcus (1995), predicative adjectives of the verb are not part of the VP chunk, e.g. in ""\[NP they \] \[VP are \] \[ADJP unhappy \]""." W00-0726,W95-0107,o,There has been a large interest in recognizing non-overlapping noun phrases (Ramshaw and Marcus (1995) and follow-up papers) but relatively little has been written about identifying phrases of other syntactic categories. 
W00-0726,W95-0107,o,"The first solution might also introduce errors elsewhere As Ramshaw and Marcus (1995) already noted: ""While this automatic derivation process introduced a small percentage of errors on its own, it was the only practical way both to provide the amount of training data required and to allow for fully-automatic testing""." W00-0726,W95-0107,p,"4 Data and Evaluation For the CoNLL shared task, we have chosen to work with the same sections of the Penn Treebank as the widely used data set for base noun phrase recognition (Ramshaw and Marcus, 1995): WSJ sections 15-18 of the Penn Treebank as training material and section 20 as test material 3." W00-0726,W95-0107,o,B-X I-X 0 first word of a chunk of type X non-initial word in an X chunk word outside of any chunk This representation type is based on a representation proposed by Ramshaw and Marcus (1995) for noun phrase chunks. W00-0731,W95-0107,o,"1 Introduction Shallow parsing has received a reasonable amount of attention in the last few years (for example (Ramshaw and Marcus, 1995))." W00-0733,W95-0107,o,"Chunks can be represented with bracket structures but alternatively one can use a tagging representation which classifies words as being inside a chunk (I), outside a chunk (O) or at a chunk boundary (B) (Ramshaw and Marcus, 1995)." W00-0736,W95-0107,o,"The first is a baseline of sorts, our own version of the ""chunking as tagging"" approach introduced by Ramshaw and Marcus (Ramshaw and Marcus, 1995)." 
W00-0737,W95-0107,o,"00: the current input token and the previous one have the same parent 90: one ancestor of the current input token and the previous input token have the same parent 09: the current input token and one ancestor of the previous input token have the same parent 99: one ancestor of the current input token and one ancestor of the previous input token have the same parent Compared with the B-Chunk and I-Chunk used in Ramshaw and Marcus (1995), structural relations 99 and 90 correspond to B-Chunk which represents the first word of the chunk, and structural relations 00 and 09 correspond to I-Chunk which represents each other in the chunk while 90 also means the beginning of the sentence and 09 means the end of the sentence." W00-0744,W95-0107,o,"Ramshaw and Marcus (Ramshaw and Marcus, 1995) views chunking as a tagging problem." W00-1309,W95-0107,n,"Type Precision Recall Fβ=1 Overall 96.40 96.47 96.44 NP 96.49 96.99 96.74 VP 97.13 97.36 97.25 ADJP 89.92 88.15 89.03 ADVP 91.52 87.57 89.50 PP 97.13 97.36 97.25 Table 16: Results of 25-fold cross-validation chunking experiments with the merged context-dependent lexicon Tables 14 and 16 shows that our new chunk tagger greatly outperforms other reported chunk taggers on the same training data and test data by 2%-3%.(Buchholz S. , Veenstra J. and Daelemans W.(1999), Ramshaw L.A. and Marcus M.P.(1995), Daelemans W. , Buchholz S. and Veenstra J.(1999), and Veenstra J.(1999))." W00-1309,W95-0107,o,"Ramshaw and Marcus(1995) used transformation-based learning, an error-driven learning technique introduced by Eric Brill (1993), to locate chunks in the tagged corpus." 
W00-1309,W95-0107,o,"NULL) Compared with the B-Chunk and I-Chunk used in Ramshaw and Marcus(1995), structural relations 99 and 90 correspond to B-Chunk which represents the first word of the chunk, and structural relations 00 and 09 correspond to I-Chunk which represents each other in the chunk while 90 also means the beginning of the sentence and 09 means the end of the sentence." W01-0702,W95-0107,o,"5 The task: Base NP chunking The task is base NP chunking on section 20 of the Wall Street Journal corpus, using sections 15 to 18 of the corpus as training data as in (Ramshaw and Marcus, 1995)." W01-0706,W95-0107,p,"Thus, over the past few years, along with advances in the use of learning and statistical methods for acquisition of full parsers (Collins, 1997; Charniak, 1997a; Charniak, 1997b; Ratnaparkhi, 1997), significant progress has been made on the use of statistical learning methods to recognize shallow parsing patterns syntactic phrases or words that participate in a syntactic relationship (Church, 1988; Ramshaw and Marcus, 1995; Argamon et al. , 1998; Cardie and Pierce, 1998; Munoz et al. , 1999; Punyakanok and Roth, 2001; Buchholz et al. , 1999; Tjong Kim Sang and Buchholz, 2000)." W01-0712,W95-0107,o,Standard data sets for machine learning approaches to this task were put forward by Ramshaw and Marcus (1995). W01-0712,W95-0107,o,"The original Ramshaw and Marcus (1995) publication evaluated their NP chunker on two data sets, the second holding a larger amount of training data (Penn Treebank sections 02-21) while using 00 as test data." W01-0712,W95-0107,o,The data set that has become standard for evaluating machine learning approaches is the one first used by Ramshaw and Marcus (1995). W01-0719,W95-0107,o,"For extracting simple noun phrases we first used Ramshaw and Marcus's base NP chunker (Ramshaw and Marcus, 1995)." 
W01-0908,W95-0107,o,Wall-Street Journal (WSJ) Sections 15-18 and 20 were used by Ramshaw and Marcus (1995) as training and test data respectively for evaluating their base-NP chunker. W01-1011,W95-0107,o,The noun phrase extraction module uses Brill's POS tagger [Brill (1992)] and a base NP chunker [Ramshaw and Marcus (1995)]. W01-1011,W95-0107,o,3.1 Candidate NPs Noun phrases were extracted using Ramshaw and Marcus's base NP chunker [Ramshaw and Marcus (1995)]. W02-0301,W95-0107,o,"Several representations to encode region information are proposed and examined (Ramshaw and Marcus, 1995; Uchimoto et al. , 2000; Kudo and Matsumoto, 2001)." W02-2024,W95-0107,o,The tagging scheme is a variant of the IOB scheme originally put forward by Ramshaw and Marcus (1995). W03-0419,W95-0107,o,This tagging scheme is the IOB scheme originally put forward by Ramshaw and Marcus (1995). W03-0613,W95-0107,o,"We split the returned documents into classes encompassing n-grams (terms of word length n), adjectives (using a part-of-speech tagger (Brill, 1992)) and noun phrases (using a lexical chunker (Ramshaw and Marcus, 1995))." W03-1706,W95-0107,o,"Like baseNP chunking(Church, 1988; Ramshaw & Marcus 1995), content chunk parsing is also a kind of shallow parsing." W04-1107,W95-0107,o,"(Ramshaw and Marcus, 1995) represent chunking as tagging problem and the CoNLL2000 shared task (Kim Sang and Buchholz, 2000) is now the standard evaluation task for chunking English." W04-2416,W95-0107,o,"2 System Description 2.1 Data Representation In this paper, we change the representation of the original data as follows: Bracketed representation of roles is converted into IOB2 representation (Ramshaw and Marcus, 1995; Sang and Veenstra, 1995) Word tokens are collapsed into base phrase (BP) tokens." 
W05-0611,W95-0107,o,"The examples represent seven-word windows of words and their respective (predicted) part-of-speech tags, and each example is labeled with a class using the IOB type of segmentation coding as introduced by Ramshaw and Marcus (1995), marking whether the middle word is inside (I), outside (O), or at the beginning (B) of a chunk." W05-0629,W95-0107,o,"Bracketed representation of roles was converted into IOB2 representation (Ramshaw and Marcus, 1995) (Sang and Veenstra, 1999)." W05-0634,W95-0107,o,"This is referred to as an IOB representation (Ramshaw and Marcus, 1995)." W05-1514,W95-0107,o,"Sang used the IOB tagging method proposed by Ramshaw (Ramshaw and Marcus, 1995) and memory-based learning for each level of chunking and achieved an f-score of 80.49 on the Penn Treebank corpus." W06-0112,W95-0107,o,Ramshaw and Marcus (1995) introduced a transformation-based learning method which considered chunking as a kind of tagging problem. W06-0112,W95-0107,o,"2 Task Description 2.1 Data Representation Ramshaw and Marcus (1995) gave mainly two kinds of base NPs representation: the open/close bracketing and IOB tagging." W06-0113,W95-0107,o,Ramshaw and Marcus (1995) first introduced the machine learning techniques to chunking problem. W06-0113,W95-0107,o,"6 Related works After the work of Ramshaw and Marcus (1995), many machine learning techniques have been applied to the basic chunking task, such as Support Vector Machines (Kudo and Matsumoto, 2001), Hidden Markov Model(Molina and Pla 2002), Memory Based Learning (Sang, 2002), Conditional Random Fields (Sha and Pereira, 2003), and so on." W06-0139,W95-0107,o,"2.1 Word Sequence Classification Similar to English text chunking (Ramshaw and Marcus, 1995; Wu et al. , 2006a), the word sequence classification model aims to classify each word via encoding its context features." 
W06-0505,W95-0107,o,"The sentences in the training and testing sets were already (perfectly) POS-tagged and noun chunked, and that in a real-life situation additional preprocessing by a POS-tagger (such as the LT-POS-tagger4) and noun chunker (such as described in (Ramshaw and Marcus, 1995)) which will introduce additional errors." W06-1618,W95-0107,o,"al. 2003b) is (B)eginning, (I)nside or (O)utside of a chunk (Ramshaw & Marcus, 1995)." W06-2602,W95-0107,o,"Each token is labelled with a class using the IOB type of segmentation coding as introduced by Ramshaw and Marcus (1995), marking whether the middle word is inside (I), outside (O), or at the beginning (B) of a chunk, or named entity." W07-0812,W95-0107,o,"Moreover, since BPC had been cast as a classification problem by Ramshaw and Marcus (1995), the task is performed with greater efficiency and is easily portable to new languages in a supervised manner (Diab et al. , 2004; Diab et al. , 2007)." W07-0812,W95-0107,o,"It was first cast as a classification problem by Ramshaw and Marcus (1995), as a problem of NP chunking." W07-0812,W95-0107,o,"A la Ramshaw and Marcus (1995), they represent the words as a sequence of labeled words with IOB annotations, where the B marks a word at the beginning of a chunk, I marks a word inside a chunk, and O marks those words (and punctuation) that are outside chunks." W07-0812,W95-0107,o,"A la Ramshaw and Marcus (1995), and Kudo and Matsumoto (2000), we use the IOB tagging style for modeling and classification." W07-1009,W95-0107,o,"The difficulty of this task is that the standard method for converting NER to a sequence tagging problem with BIO encoding (Ramshaw and Marcus, 1995), where each token is assigned a tag to indicate whether it is at the beginning (B), inside (I), or outside (O) of an entity, is not directly applicable when tokens belong to more than one entity." 
W07-1022,W95-0107,o,"The concept of baseNP has undergone a number of revisions (Ramshaw and Marcus, 1995; Tjong Kim Sang and Buchholz, 2000) but has previously always been tied to extraction from a more completely annotated treebank, whose annotations are subject to other pressures than just initial material up to the head. To our knowledge, our figures for inter-annotator agreement on the baseNP task itself (i.e. not derived from a larger annotation task) are the first to be reported." W07-1022,W95-0107,o,"Ramshaw and Marcus (1995) state that a baseNP aims to identify essentially the initial portions of nonrecursive noun phrases up to the head, including determiners but not including postmodifying prepositional phrases or clauses. However, work on baseNPs has essentially always proceeded via algorithmic extraction from fully parsed corpora such as the Penn Treebank." W07-1022,W95-0107,o,"1 Introduction Base noun phrases (baseNPs), broadly the initial portions of non-recursive noun phrases up to the head (Ramshaw and Marcus, 1995), are valuable pieces of linguistic structure which minimally extend beyond the scope of named entities." W07-1509,W95-0107,o,"Given a weight vector w, the score w·f(x,y) ranks possible labelings of x, and we denote by Yk,w(x) the set of k top scoring labelings for x. We use the standard B,I,O encoding for named entities (Ramshaw and Marcus, 1995)." W08-0113,W95-0107,o,"We hence chose transformation-based learning to create this (shallow) segmentation grammar, converting the segmentation task into a tagging task (as is done in (Ramshaw and Marcus, 1995), inter alia)." W08-0113,W95-0107,o,"Apart from this, the module is a straightforward implementation of (Ramshaw and Marcus, 1995), which in turn adapts (Brill, 1993) for syntactic chunking." 
W09-1317,W95-0107,o,"4.4 Text chunking Next, a rule-based text chunker (Ramshaw and Marcus, 1995) is applied on the tagged sentences to further identify phrasal units, such as base noun phrases NP and verbal units VB." W09-1412,W95-0107,o,"All our experiments used the standard BIO encoding (Ramshaw and Marcus, 1995) with different feature sets and learning procedures." W97-0104,W95-0107,o,\[RM95\] Lance A. Ramshaw & Mitchell P. Marcus (1995). W99-0621,W95-0107,o,"We do not consider mixed features between words and POS tags as in (Ramshaw and Marcus, 1995), that is, a single feature consists of either words or tags." W99-0621,W95-0107,o,"4 Methodology 4.1 Data In order to be able to compare our results with the results obtained by other researchers, we worked with the same data sets already used by (Ramshaw and Marcus, 1995; Argamon et al. , 1998) for NP and SV detection." W99-0621,W95-0107,o,"Instead of using the NP bracketing information present in the tagged Treebank data, Ramshaw and Marcus modified the data so as to include bracketing information related only to the non-recursive, base NPs present in each sentence while the subject verb phrases were taken as is. The data sets include POS tag information generated by Ramshaw and Marcus using Brill's transformational part-of-speech tagger (Brill, 1995)." W99-0621,W95-0107,o,"The results are comparable to other results reported using the Inside/Outside method (Ramshaw and Marcus, 1995) (see Table 7)." W99-0621,W95-0107,o,"This is similar to results in the literature (Ramshaw and Marcus, 1995)." W99-0621,W95-0107,o,"Perhaps this was not observed earlier since (Ramshaw and Marcus, 1995) studied only base NPs, most of which are short." W99-0621,W95-0107,o,"These problem formulations are similar to those studied in (Ramshaw and Marcus, 1995) and (Church, 1988; Argamon et al. , 1998), respectively." 
W99-0621,W95-0107,o,"Of the several slightly different definitions of a base NP in the literature we use for the purposes of this work the definition presented in (Ramshaw and Marcus, 1995) and used also by (Argamon et al. , 1998) and others." W99-0621,W95-0107,o,"For example, the sentence I went to California last May would be marked for base NPs as: I went to California last May I O O I B I indicating that the NPs are I, California and last May. This approach has been studied in (Ramshaw and Marcus, 1995)." W99-0621,W95-0107,o,"The observation that shallow syntactic information can be extracted using local information by examining the pattern itself, its nearby context and the local part-of-speech information has motivated the use of learning methods to recognize these patterns (Church, 1988; Ramshaw and Marcus, 1995; Argamon et al. , 1998; Cardie and Pierce, 1998)." W99-0629,W95-0107,o,"Ramshaw and Marcus (1995) first assigned a chunk tag to each word in the sentence: I for inside a chunk, O for outside a chunk, and B for inside a chunk, but the preceding word is in another chunk." W99-0705,W95-0107,o,"Introduction Since Eric Brill first introduced the method of Transformation-Based Learning (TBL) it has been used to learn rules for many natural language processing tasks, such as part-of-speech tagging \[Brill, 1995\], PP-attachment disambiguation \[Brill and Resnik, 1994\], text chunking \[Ramshaw and Marcus, 1995\], spelling correction \[Mangu and Brill, 1997\], dialogue act tagging \[Samuel et al. , 1998\] and ellipsis resolution \[Hardt, 1998\]." W99-0707,W95-0107,o,"Chunking For NP chunking, \[Argamon et al. , 1998\] used data extracted from section 15-18 of the WSJ as a fixed train set and section 20 as a fixed test set, the same data as \[Ramshaw and Marcus, 1995\]." 
W99-0707,W95-0107,o,"Since part of the chunking errors could be caused by POS errors, we also compared the same baseNP chunker on the same corpus tagged with i) the Brill tagger as used in \[Ramshaw and Marcus, 1995\], ii) the Memory-Based Tagger (MBT) as described in \[Daelemans et al. , 1996\]." W99-0707,W95-0107,o,"We also present the results of \[Argamon et al. , 1998\], \[Ramshaw and Marcus, 1995\] and \[Cardie and Pierce, 1998\] in Table 4." W99-0707,W95-0107,o,"Introduction Recently, there has been an increased interest in approaches to automatically learning to recognize shallow linguistic patterns in text \[Ramshaw and Marcus, 1995, Vilain and Day, 1996, Argamon et al. , 1998, Buchholz, 1998, Cardie and Pierce, 1998, Veenstra, 1998, Daelemans et al." A00-1031,W96-0213,p,"Recent comparisons of approaches that can be trained on corpora (van Halteren et al. , 1998; Volk and Schneider, 1998) have shown that in most cases statistical approaches (Cutting et al. , 1992; Schmid, 1995; Ratnaparkhi, 1996) yield better results than finite-state, rule-based, or memory-based taggers (Brill, 1993; Daelemans et al. , 1996)." A00-1031,W96-0213,n,"For the Penn Treebank, (Ratnaparkhi, 1996) reports an accuracy of 96.6% using the Maximum Entropy approach, our much simpler and therefore faster HMM approach delivers 96.7%." A00-1031,W96-0213,o,"According to current tagger comparisons (van Halteren et al. , 1998; Zavrel and Daelemans, 1999), and according to a comparison of the results presented here with those in (Ratnaparkhi, 1996), the Maximum Entropy framework seems to be the only other approach yielding comparable results to the one presented here." A00-1031,W96-0213,o,"The Penn Treebank results reported here for the Markov model approach are at least equivalent to those reported for the Maximum Entropy approach in (Ratnaparkhi, 1996)." 
A00-2013,W96-0213,o,"1 Full Morphological Tagging English Part of Speech (POS) tagging has been widely described in the recent past, starting with the (Church, 1988) paper, followed by numerous others using various methods: neural networks (Julian Benello and Anderson, 1989), HMM tagging (Merialdo, 1992), decision trees (Schmid, 1994), transformation-based error-driven learning (Brill, 1995), and maximum entropy (Ratnaparkhi, 1996), to select just a few." A00-2013,W96-0213,o,"This is the way the Maximum Entropy tagger (Ratnaparkhi, 1996) runs if one uses the binary version from the website (see the comparison in Section 5)." A00-2013,W96-0213,p,"We have chosen the Maximum Entropy tagger (Ratnaparkhi, 1996) for a comparison with our universal tagger, since it achieved (by a small margin) the best overall result on Slovene as reported there (86.360% on all tokens) of taggers available to us (MBT, the best overall, was not freely available to us at the time of writing)." A00-2020,W96-0213,o,Adwait Ratnaparkhi (1996) estimates a probability distribution for tagging using a maximum entropy approach. A00-2020,W96-0213,o,"Regarding error detection in corpora, Ratnaparkhi (1996) discusses inconsistencies in the Penn Treebank and relates them to interannotator differences in tagging style." A97-1004,W96-0213,o,"4 Maximum Entropy The model used here for sentence-boundary detection is based on the maximum entropy model used for POS tagging in (Ratnaparkhi, 1996)." C00-1082,W96-0213,p,"a.2 Maximum-entropy method The maximum-entropy method is useful with sparse data conditions and has been used by many researchers (Berger et al. , 1996; Ratnaparkhi, 1996; Ratnaparkhi, 1997; Borthwick et al. , 1998; Uchimoto et al. , 1999)." C00-2089,W96-0213,o,"We can find some other machine-learning approaches that use more sophisticated LMs, such as Decision Trees (Màrquez and Rodríguez, 1998)(Magerman, 1996), memory-based approaches to learn special decision trees (Daelemans et al. 
, 1996), maximum entropy approaches that combine statistical information from different sources (Ratnaparkhi, 1996), finite state automata inferred using Grammatical Inference (Pla and Prieto, 1998), etc. The comparison among different approaches is difficult due to the multiple factors that can be considered: the language, the number and type of the tags, the size of the vocabulary, the ambiguity, the difficulty of the test set, etc." C02-1040,W96-0213,o,"We prepare the corpus by passing it through Adwait Ratnaparkhi's part-of-speech tagger (Ratnaparkhi, 1996) (trained on the Penn Treebank WSJ corpus) and then running Steve Abney's chunker (Abney, 1997) over the entire text." C02-1101,W96-0213,o,"Many studies and improvements have been conducted for POS tagging, and major methods of POS tagging achieve an accuracy of 96-97% on the Penn Treebank WSJ corpus, but obtaining higher accuracies is difficult (Ratnaparkhi, 1996)." C02-1101,W96-0213,o,"It is mentioned that the limitation is largely caused by inconsistencies in the corpus (Ratnaparkhi, 1996; Padró and Màrquez, 1998; van Halteren et al. , 2001)." C02-1101,W96-0213,o,"For example, many statistical part-of-speech (POS) taggers have been developed and they use corpora as the training data to obtain statistical information or rules (Brill, 1995; Ratnaparkhi, 1996)." C02-1145,W96-0213,o,Our POS tagger is essentially the maximum entropy tagger by Ratnaparkhi (1996) retrained on the CTB-I data. C04-1040,W96-0213,o,"According to the document, it is the output of Ratnaparkhi's tagger (Ratnaparkhi, 1996)." 
C04-1124,W96-0213,o," POS tagger: The maximum entropy POS tagger developed by Ratnaparkhi (Ratnaparkhi, 1996) and the rule-based POS tagger developed by Brill (Brill, 1994) are trained with 1200 abstracts extracted from the GENIA corpus, which achieve accuracies of 97.97% and 98.06% respectively, when testing on the rest 800 abstract of the GENIA corpus." C04-1134,W96-0213,o,"We use two state-of-the-art POS taggersa maximum entropy based English POS tagger (Ratnaparkhi, 1996), and an HMM based Chinese POS tagger." C04-1140,W96-0213,o,"Second, several tagging experiments on newspaper language, whether statistical (Ratnaparkhi, 1996; Brants, 2000) or rule-based (Brill, 1995), report that the tagging accuracy for unknown words is much lower than the overall accuracy.2 Thus, the lower percentage of unknown words in medical texts seems to be a sublanguage feature beneficial to POS taggers, whereas the higher proportion of unknown words in newspaper language seems to be a prominent source of tagging errors." C08-1106,W96-0213,o,"To accommodate multiple overlapping features on observations, some other approaches view the sequence labeling problem as a sequence of classification problems, including support vector machines (SVMs) (Kudo & Matsumoto 2001) and a variety of other classifiers (Punyakanok & Roth 2001; Abney et al. 1999; Ratnaparkhi 1996)." D07-1003,W96-0213,o,"We tokenized sentences using the standard treebank tokenization script, and then we performed part-of-speech tagging using MXPOST tagger (Ratnaparkhi, 1996)." D07-1041,W96-0213,o,"In contrast, the C&C tagger, which is based on that of Ratnaparkhi (1996), utilizes a wide range of features and a larger contextual window including the previous two tags and the two previous and two following words." D07-1086,W96-0213,o,"We then tagged the search queries using a maximum entropy part-of-speech tagger (Ratnaparkhi, 1996)." 
D07-1108,W96-0213,o,"We use MXPOST tagger (Adwait, 1996) for POS tagging, Charniak parser (Charniak, 2000) for extracting syntactic relations, SVMlight1 for SVM classifier and David Blei's version of LDA2 for LDA training and inference." D07-1117,W96-0213,p,"The state-of-the-art systems have achieved an accuracy of 97% for English on the Wall Street Journal (WSJ) corpus (which contains 4.5M words) using various models (Brants, 2000; Ratnaparkhi, 1996; Thede and Harper, 1999)." D07-1117,W96-0213,p,"Hidden Markov models are simple and effective, but unlike discriminative models, such as Maximum Entropy models (Ratnaparkhi, 1996) and Conditional Random Fields (John Lafferty, 2001), they have more difficulty utilizing a rich set of conditionally dependent features." D08-1070,W96-0213,o,"The CRF tagger was implemented in MALLET (McCallum, 2002) using the original feature templates from (Ratnaparkhi, 1996)." D08-1110,W96-0213,o,"Some examples of POS taggers that perform reasonably well on monolingual text of each language can be found in (Brants, 2000; Brill, 1992; Carreras and Padró, 2002; Charniak, 1993; Ratnaparkhi, 1996; Schmid, 1994)." D09-1003,W96-0213,o,"This model can be seen as an extension of the standard Maximum Entropy Markov Model (MEMM, see (Ratnaparkhi, 1996)) with an extra dependency on the predicate label, we will henceforth refer to this model as MEMM+pred." D09-1058,W96-0213,o,"English POS tags were assigned by MXPOST (Ratnaparkhi, 1996), which was trained on the training data described in Section 4.1." D09-1060,W96-0213,o,"(2008), we used the MXPOST (Ratnaparkhi, 1996) tagger trained on training data to provide part-of-speech tags for the development and the test set, and we used 10-way jackknifing to generate tags for the training set." 
E06-1042,W96-0213,o,"The target set is built using the 88-89 Wall Street Journal Corpus (WSJ) tagged using the (Ratnaparkhi, 1996) tagger and the (Bangalore & Joshi, 1999) SuperTagger; the feedback sets are built using WSJ sentences con- Algorithm 1 KE-train: (Karov & Edelman, 1998) algorithm adapted to literal/nonliteral classification Require: S: the set of sentences containing the target word Require: L: the set of literal seed sentences Require: N: the set of nonliteral seed sentences Require: W: the set of words/features, w ∈ s means w is in sentence s, s owner w means s contains w Require: ε: threshold that determines the stopping condition 1: w-sim0(wx,wy) := 1 if wx = wy, 0 otherwise 2: s-simI0(sx,sy) := 1, for all sx,sy ∈ S × S where sx = sy, 0 otherwise 3: i := 0 4: while (true) do 5: s-simLi+1(sx,sy) := ∑wx∈sx p(wx,sx) maxwy∈sy w-simi(wx,wy), for all sx,sy ∈ S × L 6: s-simNi+1(sx,sy) := ∑wx∈sx p(wx,sx) maxwy∈sy w-simi(wx,wy), for all sx,sy ∈ S × N 7: for wx,wy ∈ W × W do 8: w-simi+1(wx,wy) := { i = 0: ∑sx owner wx p(wx,sx) maxsy owner wy s-simIi(sx,sy); else: ∑sx owner wx p(wx,sx) maxsy owner wy max{s-simLi(sx,sy), s-simNi(sx,sy)} 9: end for 10: if ∀wx, maxwy {w-simi+1(wx,wy) − w-simi(wx,wy)} ≤ ε then 11: break # algorithm converges in 1/ε steps." E09-1011,W96-0213,o,"We then proceed to split the data into smaller sentences and tag them using Ratnaparkhi's Maximum Entropy Tagger (Ratnaparkhi, 1996)." E09-1041,W96-0213,o,"(2007), it is much higher than the 2.6% unknown word rate in the test set for Ratnaparkhi's (1996) English POS tagging experiments." E09-1060,W96-0213,o,"Thus, we used the five taggers, MBL (Daelemans et al., 1996), MXPOST (Ratnaparkhi, 1996), fnTBL (Ngai and Florian, 2001), TnT, and IceTagger3, in the same manner as described in (Loftsson, 2006), but with the following minor changes." 
E09-1087,W96-0213,o,"16In fact, we have experimented with other tagger combinations and configurations as well, with the TnT (Brants, 2000), MaxEnt (Ratnaparkhi, 1996) and TreeTagger (Schmid, 1994), with or without the Morce tagger in the pack; see below for the winning combination." H05-1009,W96-0213,o,"We generate POS tags using the MXPOST tagger (Ratnaparkhi, 1996) for English and Chinese, and Connexor for Spanish." H05-1024,W96-0213,o,"Specifically, three features are used to instantiate the templates: POS tags on both sides: We assign POS tags using the MXPOST tagger (Ratnaparkhi, 1996) for English and Chinese, and Connexor for Spanish." H05-1058,W96-0213,o,"Previous work used all possible prefixes and suffixes ranging in length from 1 to k characters, with k = 4 (Ratnaparkhi, 1996), and k = 10 (Toutanova et al. , 2003)." H05-1058,W96-0213,o,"Previous authors have used numerous HMM-based models (Banko and Moore, 2004; Collins, 2002; Lee et al. , 2000; Thede and Harper, 1999) and other types of networks including maximum entropy models (Ratnaparkhi, 1996), conditional Markov models (Klein and Manning, 2002; McCallum et al. , 2000), conditional random fields (CRF) (Lafferty et al. , 2001), and cyclic dependency networks (Toutanova et al. , 2003)." H05-1058,W96-0213,o,"This is based on the idea from (Ratnaparkhi, 1996) that rare words in the training set are similar to unknown words in the test set, and can be used to learn how to tag the unknown words that will be encountered during testing." H05-1071,W96-0213,o,"Both systems rely on the OpenNlp maximum-entropy part-of-speech tagger and chunker (Ratnaparkhi, 1996), but KNOWITALL applies them to pages downloaded from the Web based on the results of Google queries, whereas KNOWITNOW applies them once to crawled and indexed pages.6 Overall, each of the above elements of KNOWITALL and KNOWITNOW are the same to allow for controlled experiments." 
H05-1098,W96-0213,o,"We used MXPOST (Ratnaparkhi, 1996), and in order to discover more general patterns, we map the tag set down after tagging, e.g. NN, NNP, NNPS and NNS all map to NN." H05-1107,W96-0213,o,"Given the parallel corpus, we tagged the English words with a publicly available maximum entropy tagger (Ratnaparkhi, 1996), and we used an implementation of the IBM translation model (AlOnaizan et al. , 1999) to align the words." H05-1124,W96-0213,o,"Models of that form include hidden Markov models (Rabiner, 1989; Bikel et al. , 1999) as well as discriminative tagging models based on maximum entropy classification (Ratnaparkhi, 1996; McCallum et al. , 2000), conditional random fields (Lafferty et al. , 2001; Sha and Pereira, 2003), and large-margin techniques (Kudo and Matsumoto, 2001; Taskar et al. , 2003)." H05-2007,W96-0213,o,"MXPOST (Ratnaparkhi, 1996), and in order to discover more general patterns, we map the tag set down after tagging, e.g. NN, NNP, NNPS and NNS all map to NN." I05-3005,W96-0213,o,"The unknown word tokens are with respect to Training I. Data set Sect'ns Token Unknown Training I 26-270, 600-931 213986 Training II 600-931, 500-527, 1001-1039 204701 Training III 001-270, 301-527, 590-593, 600-1039, 1043-1151 485321 Devset 23839 2849 XH 001-025 7844 381 HKSAR 500-527 8202 1168 SM 590-593, 1001-1002 7793 1300 Test set 23522 2957 XH 271-300 8008 358 HKSAR 528-554 7153 1020 SM 594-596, 1040-1042 8361 1579 5.2 The model Our model builds on research into loglinear models by Ng and Low (2004), Toutanova et al. , (2003) and Ratnaparkhi (1996)." I05-3005,W96-0213,o,"(Ng and Low 2004, Toutanova et al, 2003, Brants 2000, Ratnaparkhi 1996, Samuelsson 1993)." I05-3005,W96-0213,o,"Previous work on POS tagging of unknown words has proposed a number of features based on prefixes and suffixes and spelling cues like capitalization (Toutanova et al. 2003, Brants 2000, Ratnaparkhi 1996)." 
I05-3005,W96-0213,o,"(2002), who retrain the Ratnaparkhi (1996) tagger and reach accuracies of 93% using CTB-I." I05-3031,W96-0213,o,"V. J. Della Pietra, 1996; Ratnaparkhi, 1996) was proposed in the original work to solve the LMR Tagging problem." I08-2097,W96-0213,o,"The features are the same as those in (Ratnaparkhi, 1996)." I08-2130,W96-0213,o,"Statistic-based algorithms based on Belief Network (Murphy, 2001) such as Hidden-Markov-Model (HMM) (Cutting, 1992)(Thede, 1999), Lexicalized HMM (Lee, 2000) and Maximal-Entropy model (Ratnaparkhi, 1996) use the statistical information of a manually tagged corpus as background knowledge to tag new sentences." I08-4024,W96-0213,o,"The models are based on a maximum entropy framework (Ratnaparkhi, 1996; Xue and Shen, 2003)." I08-4024,W96-0213,o,"2 Maximum Entropy In this bakeoff, our basic model is based on the framework described in the work of Ratnaparkhi (1996) which was applied for English POS tagging." I08-4024,W96-0213,o,"4 POS Tagger and Named Entity Recognizer For the POS tagging task, the tagger is built based on the work of Ratnaparkhi (1996) which was applied for English POS tagging." J01-2002,W96-0213,o,"7 For a more detailed discussion, see Berger, Della Pietra, and Della Pietra (1996) and Ratnaparkhi (1996)." J01-2002,W96-0213,o,"Tagging can also be done using maximum entropy modeling (see Section 2.4): a maximum entropy tagger, called MXPOST, was developed by Ratnaparkhi (1996) (we will refer to this tagger as MXP below)." J01-2002,W96-0213,o,Ratnaparkhi 1996). J01-3003,W96-0213,o,Our method was applied to 23 million words of the WSJ that were automatically tagged with Ratnaparkhi's maximum entropy tagger (Ratnaparkhi 1996) and chunked with the partial parser CASS (Abney 1996). J02-3002,W96-0213,o,"We see no good reason, however, why such text spans should necessarily be sentences, since the majority of tagging paradigms (e.g. 
, Hidden Markov Model [HMM] [Kupiec 1992], Brill's [Brill 1995a], and MaxEnt [Ratnaparkhi 1996]) do not attempt to parse an entire sentence and operate only in the local window of two to three tokens." J03-4003,W96-0213,o,Words in test data that have not been seen in training are deterministically assigned the POS tag that is assigned by the tagger described in Ratnaparkhi (1996). J03-4003,W96-0213,o,"17 The justification for this is that there is an estimated 3% error rate in the hand-assigned POS tags in the treebank (Ratnaparkhi 1996), and we didn't want this noise to contribute to dependency errors." J03-4003,W96-0213,o,"Of particular relevance is other work on parsing the Penn WSJ Treebank (Jelinek et al. 1994; Magerman 1995; Eisner 1996a, 1996b; Collins 1996; Charniak 1997; Goodman 1997; Ratnaparkhi 1997; Chelba and Jelinek 1998; Roark 2001)." J04-4004,W96-0213,o,"7.3 Unknown Words and Parts of Speech When the parser encounters an unknown word, the first-best tag delivered by Ratnaparkhi's (1996) tagger is used." J07-3004,W96-0213,o,Ratnaparkhi (1996) estimates a POS tagging error rate of 3% in the Treebank. J07-4004,W96-0213,o,"Introduction Log-linear models have been applied to a number of problems in NLP, for example, POS tagging (Ratnaparkhi 1996; Lafferty, McCallum, and Pereira 2001), named entity recognition (Borthwick 1999), chunking (Koeling 2000), and parsing (Johnson et al. 1999)." J07-4004,W96-0213,o,"Clark and Curran (2004a) describe the supertagger, which uses log-linear models to define a distribution over the lexical category set for each local five-word context containing the target word (Ratnaparkhi 1996)." J07-4004,W96-0213,p,A key component of the parsing system is a Maximum Entropy CCG supertagger (Ratnaparkhi 1996; Curran and Clark 2003) which assigns lexical categories to words in a sentence. 
J07-4004,W96-0213,o,"Related Work The first application of log-linear models to parsing is the work of Ratnaparkhi and colleagues (Ratnaparkhi, Roukos, and Ward 1994; Ratnaparkhi 1996, 1999)."
N01-1023,W96-0213,o,"Also, we used Adwait Ratnaparkhi's part-of-speech tagger (Ratnaparkhi, 1996) to tag unknown words in the test data."
N03-1014,W96-0213,o,"We then tested the best models for each vocabulary size on the testing set.4 Standard measures of performance are shown in table 1.5 3We used a publicly available tagger (Ratnaparkhi, 1996) to provide the tags used in these experiments, rather than the hand-corrected tags which come with the corpus."
N03-1028,W96-0213,o,"The sequential classification approach can handle many correlated features, as demonstrated in work on maximum-entropy (McCallum et al., 2000; Ratnaparkhi, 1996) and a variety of other linear classifiers, including winnow (Punyakanok and Roth, 2001), AdaBoost (Abney et al., 1999), and support-vector machines (Kudo and Matsumoto, 2001)."
N03-1033,W96-0213,o,"The per-state models in this paper are log-linear models, building upon the models in (Ratnaparkhi, 1996) and (Toutanova and Manning, 2000), though some models are in fact strictly simpler."
N03-1033,W96-0213,o,"High-performance taggers typically also include joint three-tag counts in some way, either as tag trigrams (Brants, 2000) or tag-triple features (Ratnaparkhi, 1996; Toutanova and Manning, 2000)."
N03-1033,W96-0213,o,"Words surrounding the current word have been occasionally used in taggers, such as (Ratnaparkhi, 1996), Brill's transformation based tagger (Brill, 1995), and the HMM model of Lee et al."
N03-1033,W96-0213,o,"3.3 Unknown word features Most of the models presented here use a set of unknown word features basically inherited from (Ratnaparkhi, 1996), which include using character n-gram prefixes and suffixes (for n up to 4), and detectors for a few other prominent features of words, such as capitalization, hyphens, and numbers."
N03-1033,W96-0213,o,"At any rate, regularized conditional log-linear models have not previously been applied to the problem of producing a high quality part-of-speech tagger: Ratnaparkhi (1996), Toutanova and Manning (2000), and Collins (2002) all present unregularized models."
N03-1033,W96-0213,o,"Whereas Ratnaparkhi (1996) used feature support cutoffs and early stopping to stop overfitting of the model, and Collins (2002) contends that including low support features harms a maximum entropy model, our results show that low support features are useful in a regularized maximum entropy model."
N03-1033,W96-0213,o,"2 Bidirectional Dependency Networks When building probabilistic models for tag sequences, we often decompose the global probability of sequences using a directed graphical model (e.g., an HMM (Brants, 2000) or a conditional Markov model (CMM) (Ratnaparkhi, 1996))."
N03-2003,W96-0213,o,"We then piped the text through a maximum entropy sentence boundary detector (Ratnaparkhi, 1996) and performed text normalization using NSW tools (Sproat et al, 2001)."
N03-2027,W96-0213,o,"The best result known to us is achieved by Toutanova [2002] by enriching the feature representation of the MaxEnt approach [Ratnaparkhi, 1996]."
N03-3001,W96-0213,o,"However, after several advances in tasks such as automatic tagging of text with high level semantics such as parts-of-speech (Ratnaparkhi, 1996), named-entities (Bikel et al., 1999), sentence-parsing (Charniak, 1997), etc., there is increasing hope that one could leverage this information into IR techniques."
N04-1013,W96-0213,o,"For example, since the Collins parser depends on a prior part-of-speech tagger (Ratnaparkhi, 1996), we included the time for POS tagging in our Collins measurements."
N04-1018,W96-0213,o,"The POS tag features were produced by first predicting the tags with Ratnaparkhi's Maximum Entropy Tagger (Ratnaparkhi, 1996) and then clustered by hand into a smaller number of groups based on their syntactic role."
N04-2003,W96-0213,p,"Maximum Entropy (MaxEnt) principle has been successfully applied in many classification and tagging tasks (Ratnaparkhi, 1996; K. Nigam and A. McCallum, 1999; A. McCallum and Pereira, 2000)."
N04-4040,W96-0213,o,"2We use a POS tagger (Ratnaparkhi, 1996) trained on switchboard data with the additional tags of FP (filled pause) and FRAG (word fragment)."
N06-1042,W96-0213,o,"Examples of statistical and machine learning approaches that have been used for tagging include transformation based learning (Brill, 1995), memory based learning (Daelemans et al., 1996), and maximum entropy models (Ratnaparkhi, 1996)."
N07-1026,W96-0213,o,"We used the MXPOST tagger (Ratnaparkhi, 1996) for POS annotation."
N09-1013,W96-0213,o,"The Chinese text was tagged using the MXPOST maximum-entropy part of speech tagging tool (Ratnaparkhi, 1996) trained on the Penn Chinese Treebank 5.1; the English text was tagged using the TnT part of speech tagger (Brants, 2000) trained on the Wall Street Journal portion of the English Penn treebank."
P01-1035,W96-0213,p,"(Hakkani-Tur et al., 2000)), and Basque (Ezeiza et al., 1998), which pose quite different and in the end less severe problems, there have been attempts at solving this problem for some of the highly inflectional European languages, such as (Daelemans et al., 1996), (Erjavec et al., 1999) (Slovenian), (Hajic and Hladka, 1997), (Hajic and Hladka, 1998) (Czech) and (Hajic, 2000) (five Central and Eastern European languages), but so far no system has reached in the absolute terms a performance comparable to English tagging (such as (Ratnaparkhi, 1996)), which stands around or above 97%."
P01-1039,W96-0213,p,"It has been used in a variety of difficult classification tasks such as part-of-speech tagging (Ratnaparkhi, 1996), prepositional phrase attachment (Ratnaparkhi et al., 1994) and named entity tagging (Borthwick et al., 1998), and achieves state of the art performance."
P01-1039,W96-0213,o,"We use the beam search technique of (Ratnaparkhi, 1996) to search the space of all hypotheses."
P02-1021,W96-0213,o,"(Berger 1996, Ratnaparkhi 1996, 1998, Mikheev 1998, 2000)."
P02-1021,W96-0213,o,"3.1 Maximum Entropy This section presents a brief description of ME. A more detailed and informative description can be found in Berger (1996) 4, Ratnaparkhi (1998), Manning and Schütze (2000) to name just a few."
P02-1034,W96-0213,o,"As a baseline model we used a maximum entropy tagger, very similar to the one described in (Ratnaparkhi 1996)."
P02-1034,W96-0213,p,"Maximum entropy taggers have been shown to be highly competitive on a number of tagging tasks, such as part-of-speech tagging (Ratnaparkhi 1996), and named-entity recognition (Borthwick et."
P02-1042,W96-0213,o,"We have explained elsewhere (Clark, 2002) how suitable features can be defined in terms of the ⟨word, pos-tag⟩ pairs in the context, and how maximum entropy techniques can be used to estimate the probabilities, following Ratnaparkhi (1996)."
P02-1043,W96-0213,o,"We therefore ran the dependency model on a test corpus tagged with the POS-tagger of Ratnaparkhi (1996), which is trained on the original Penn Treebank (see HWDep (+ tagger) in Table 3)."
P02-1055,W96-0213,o,"In modern lexicalized parsers, POS tagging is often interleaved with parsing proper instead of being a separate preprocessing module (Collins, 1996; Ratnaparkhi, 1997)."
P02-1055,W96-0213,o,"task (Church, 1988; Brill, 1993; Ratnaparkhi, 1996; Daelemans et al., 1996), and reported errors in the range of 2–6% are common."
P02-1055,W96-0213,o,Chunks as a separate level have also been used in Collins (1996) and Ratnaparkhi (1997).
P02-1062,W96-0213,p,"Max-ent taggers have been shown to be highly competitive on a number of tagging tasks, such as part-of-speech tagging (Ratnaparkhi 1996), named-entity recognition (Borthwick et."
P02-1062,W96-0213,o,"For example, Animal would be mapped to Aa, G.M. would again be mapped to A.A The tagger was applied and trained in the same way as described in (Ratnaparkhi 1996)."
P02-1062,W96-0213,o,"Following (Ratnaparkhi 1996), we only include features which occur 5 times or more in training data."
P03-1038,W96-0213,o,"Many machine learning techniques have been developed to tackle such random process tasks, which include Hidden Markov Models (HMMs) (Rabiner, 1989), Maximum Entropy Models (MEs) (Ratnaparkhi, 1996), Support Vector Machines (SVMs) (Vapnik, 1998), etc. Among them, SVMs have high memory capacity and show high performance, especially when the target classification requires the consideration of various features."
P03-1046,W96-0213,o,The input is POS-tagged using the tagger of Ratnaparkhi (1996).
P03-1055,W96-0213,o,"Templates for local features are similar to the ones employed by Ratnaparkhi (1996) for POS-tagging (Table 3), though as our input already includes POS-tags, we can make use of part-of-speech information as well."
P03-1064,W96-0213,o,"4.4 Related Work (Chen, 2001) implemented an MEMM model for supertagging which is analogous to the POS tagging model of (Ratnaparkhi, 1996)."
P04-1013,W96-0213,o,"In each case the input to the network is a sequence of tag-word pairs.5 5We used a publicly available tagger (Ratnaparkhi, 1996) to provide the tags."
P04-1013,W96-0213,o,"However, the fact that the DGSSN uses a large-vocabulary tagger (Ratnaparkhi, 1996) as a preprocessing stage may compensate for its smaller vocabulary."
P04-1030,W96-0213,o,Collins (1999) falls back to the POS tagging of Ratnaparkhi (1996) for words seen fewer than 5 times in the training corpus.
P04-1030,W96-0213,n,"As the tagger of Ratnaparkhi (1996) cannot tag a word lattice, we cannot back off to this tagging."
P04-1085,W96-0213,p,"Conditional Markov models (CMM) (Ratnaparkhi, 1996; Klein and Manning, 2002) have been successfully used in sequence labeling tasks incorporating rich feature sets."
P04-1085,W96-0213,o,"The algorithm is exactly the same as the one described in (Ratnaparkhi, 1996) to find the most probable part-of-speech sequence."
P05-1012,W96-0213,o,Our system assumes POS tags as input and uses the tagger of Ratnaparkhi (1996) to provide tags for the development and evaluation sets.
P05-1023,W96-0213,o,"In each case the input to the network is a sequence of tag-word pairs.2 We report results for two different vocabulary sizes, varying in the frequency with which tag-word pairs must 2We used a publicly available tagger (Ratnaparkhi, 1996) to provide the tags."
P05-1065,W96-0213,o,"Motivated by our goal of representing syntax, we used part-of-speech (POS) tags as labeled by a maximum entropy tagger (Ratnaparkhi, 1996)."
P05-2001,W96-0213,o,"It will also be relevant to apply advanced statistical models that can incorporate various useful information to this task, e.g., the maximum entropy model (Ratnaparkhi, 1996)."
P05-2004,W96-0213,p,"The most notable of these include the trigram HMM tagger (Brants, 2000), maximum entropy tagger (Ratnaparkhi, 1996), transformation-based tagger (Brill, 1995), and cyclic dependency networks (Toutanova et al., 2003)."
P06-1021,W96-0213,o,"This test set was tagged using MXPOST (Ratnaparkhi, 1996) which was itself trained on Switchboard."
P06-1073,W96-0213,o,"We use a statistical POS tagging system built on Arabic Treebank data with MaxEnt framework (Ratnaparkhi, 1996)."
P06-1088,W96-0213,o,"One possible conclusion from the POS tagging literature is that accuracy is approaching the limit, and any remaining improvement is within the noise of the Penn Treebank training data (Ratnaparkhi, 1996; Toutanova et al., 2003)."
P06-1088,W96-0213,o,"The POS tagger uses the same contextual predicates as Ratnaparkhi (1996); the supertagger adds contextual predicates corresponding to POS tags and bigram combinations of POS tags (Curran and Clark, 2003)."
P06-1089,W96-0213,o,"The features we use are shown in Table 2, which are based on the features used by Ratnaparkhi (1996) and Uchimoto et al."
P06-1110,W96-0213,o,POS tag the text using the tagger of Ratnaparkhi (1996).
P06-1110,W96-0213,o,"parser (Bikel, 2004), the only one that we were able to train and test under exactly the same experimental conditions (including the use of POS tags from the tagger of Ratnaparkhi (1996))."
P06-1110,W96-0213,o,"The initial state contains terminal items, whose labels are the POS tags given by the tagger of Ratnaparkhi (1996)."
P06-2028,W96-0213,o,"The basic engine used to perform the tagging in these experiments is a direct descendent of the maximum entropy (ME) tagger of (Ratnaparkhi, 1996) which in turn is related to the taggers of (Kupiec, 1992) and (Merialdo, 1994)."
P06-2028,W96-0213,o,"5 External Knowledge Sources 5.1 Lexical Dependencies Features derived from n-grams of words and tags in the immediate vicinity of the word being tagged have underpinned the world of POS tagging for many years (Kupiec, 1992; Merialdo, 1994; Ratnaparkhi, 1996), and have proven to be useful features in WSD (Yarowsky, 1993)."
P06-2028,W96-0213,o,"In these experiments we used the MXPOST tagger (Ratnaparkhi, 1996) combined with Collins parser (Collins, 1996) to assign parse trees to the corpus."
P06-2067,W96-0213,o,Tag test data using the POS-tagger described in Ratnaparkhi (1996).
P06-2100,W96-0213,o,"For English there are many POS taggers, employing machine learning techniques like transformation-based error-driven learning (Brill, 1995), decision trees (Black et al., 1992), markov model (Cutting et al. 1992), maximum entropy methods (Ratnaparkhi, 1996) etc. There are also taggers which are hybrid using both stochastic and rule-based approaches, such as CLAWS (Garside and Smith, 1997)."
P06-3014,W96-0213,o,Tag test data using the POS-tagger described in Ratnaparkhi (1996).
P07-1006,W96-0213,o,"Verbs and possible senses in our corpus Both corpora were lemmatized and part-of-speech (POS) tagged using Minipar (Lin, 1993) and Mxpost (Ratnaparkhi, 1996), respectively."
P07-1080,W96-0213,o,"The standard split of the corpus into training (sections 2–22, 9,753 sentences), validation (section 24, 321 sentences), and testing (section 23, 603 sentences) was performed.2 As in (Henderson, 2003; Turian and Melamed, 2006) we used a publicly available tagger (Ratnaparkhi, 1996) to provide the part-of-speech tag for each word in the sentence."
P07-1104,W96-0213,o,"Following previous work (Ratnaparkhi, 1996), we assume that the tag of a word is independent of the tags of all preceding words given the tags of the previous two words (i.e., =2 in the equation above)."
P07-2009,W96-0213,o,"3 Maximum Entropy Taggers The taggers are based on Maximum Entropy tagging methods (Ratnaparkhi, 1996), and can all be trained on new annotated data, using either GIS or BFGS training code."
P07-2053,W96-0213,n,"Though taggers based on dependency networks (Toutanova et al., 2003), SVM (Gimenez and Màrquez, 2003), MaxEnt (Ratnaparkhi, 1996), CRF (Smith et al., 2005), and other methods may reach slightly better results, their train/test cycle is orders of magnitude longer."
P08-1007,W96-0213,o,"57 Given a pair of English sentences to be compared (a system translation against a reference translation), we perform tokenization2, lemmatization using WordNet3, and part-of-speech (POS) tagging with the MXPOST tagger (Ratnaparkhi, 1996)."
P08-1044,W96-0213,o,"Classes were identified using a POS tagger (Ratnaparkhi, 1996) trained on the tagged Switchboard corpus."
P08-1068,W96-0213,o,"The part of speech tags for the development and test data were automatically assigned by MXPOST (Ratnaparkhi, 1996), where the tagger was trained on the entire training corpus; to generate part of speech tags for the training data, we used 10-way jackknifing.8 English word clusters were derived from the BLLIP corpus (Charniak et al., 2000), which contains roughly 43 million words of Wall Street Journal text.9 The Czech experiments were performed on the Prague Dependency Treebank 1.0 (Hajic, 1998; Hajic et al., 2001), which is directly annotated with dependency structures."
P08-1101,W96-0213,o,"During training, the baseline POS tagger stores special word-tag pairs into a tag dictionary (Ratnaparkhi, 1996)."
P08-1101,W96-0213,p,"This method led to improvement in the decoding speed as well as the output accuracy for English POS tagging (Ratnaparkhi, 1996)."
P08-1101,W96-0213,p,"It worked well for word segmentation alone (Zhang and Clark, 2007), even with an agenda size as small as 8, and a simple beam search algorithm also works well for POS tagging (Ratnaparkhi, 1996)."
P08-1102,W96-0213,o,"Several models were introduced for these problems, for example, the Hidden Markov Model (HMM) (Rabiner, 1989), Maximum Entropy Model (ME) (Ratnaparkhi and Adwait, 1996), and Conditional Random Fields (CRFs) (Lafferty et al., 2001)."
P09-1053,W96-0213,o,"8http://svmlight.joachims.org 9Our replication of the Wan et al. model is approximate, because we used different preprocessing tools: MXPOST for POS tagging (Ratnaparkhi, 1996), MSTParser for parsing (McDonald et al., 2005), and Dan Bikel's interface (http://www.cis.upenn.edu/dbikel/software.html#wn) to WordNet (Miller, 1995) for lemmatization information."
P09-1053,W96-0213,o,"In our experiments these were obtained automatically using MXPOST (Ratnaparkhi, 1996) and BBN's Identifinder (Bikel et al., 1999)."
P09-1054,W96-0213,o,"The applications range from simple classification tasks such as text classification and history-based tagging (Ratnaparkhi, 1996) to more complex structured prediction tasks such as part-of-speech (POS) tagging (Lafferty et al., 2001), syntactic parsing (Clark and Curran, 2004) and semantic role labeling (Toutanova et al., 2005)."
P09-1058,W96-0213,p,"Their idea has proven effective for estimating the statistics of unknown words in previous studies (Ratnaparkhi, 1996; Nagata, 1999; Nakagawa, 2004)."
P97-1056,W96-0213,o,"There is a large number of potentially informative features that could play a role in correctly predicting the tag of an unknown word (Ratnaparkhi, 1996; Weischedel et al., 1993; Daelemans et al., 1996)."
P97-1056,W96-0213,o,"More recently, the integration of information sources, and the modeling of more complex language processing tasks in the statistical framework has increased the interest in smoothing methods (Collins & Brooks, 1995; Ratnaparkhi, 1996; Magerman, 1994; Ng & Lee, 1996; Collins, 1996)."
P98-1029,W96-0213,o,"In Ratnaparkhi (1996), a maximum entropy tagger is presented."
P98-1029,W96-0213,o,"Since the advent of manually tagged corpora such as the Brown Corpus and the Penn Treebank (Francis (1982), Marcus (1993)), the efficacy of machine learning for training a tagger has been demonstrated using a wide array of techniques, including: Markov models, decision trees, connectionist machines, transformations, nearest-neighbor algorithms, and maximum entropy (Weischedel (1993), Black (1992), Schmid (1994), Brill (1995), Daelemans (1995), Ratnaparkhi (1996))."
P98-2177,W96-0213,o,"The tagger from (Ratnaparkhi, 1996) first annotates sentences of raw text with a sequence of part-of-speech tags."
P98-2251,W96-0213,o,"Other methods include rule-based systems (Brill, 1995), maximum entropy models (Ratnaparkhi, 1996), and memory-based models (Daelemans et al., 1996)."
P98-2251,W96-0213,o,"Entropy, used in some part-of-speech tagging systems (Ratnaparkhi, 1996), is a measure of how much information is necessary to separate data."
P99-1023,W96-0213,o,"Much research has been done to improve tagging accuracy using several different models and methods, including: hidden Markov models (HMMs) (Kupiec, 1992), (Charniak et al., 1993); rule-based systems (Brill, 1994), (Brill, 1995); memory-based systems (Daelemans et al., 1996); maximum-entropy systems (Ratnaparkhi, 1996); path voting constraint systems (Tür and Oflazer, 1998); linear separator systems (Roth and Zelenko, 1998); and majority voting systems (van Halteren et al., 1998)."
P99-1023,W96-0213,o,"The MBT (Daelemans et al., 1996) 180 Tagger Type Standard Trigram (Weischedel et al., 1993) MBT (Daelemans et al., 1996) Rule-based (Brill, 1994) Maximum-Entropy (Ratnaparkhi, 1996) Full Second-Order HMM SNOW (Roth and Zelenko, 1998) Voting Constraints (Tür and Oflazer, 1998) Full Second-Order HMM Known Unknown Overall Open/Closed Lexicon?"
P99-1036,W96-0213,o,"Some statistical model to estimate the part of speech of unknown words from the case of the first letter and the prefix and suffix is proposed (Weischedel et al., 1993; Brill, 1995; Ratnaparkhi, 1996; Mikheev, 1997)."
P99-1036,W96-0213,p,"To improve the unknown word model, feature-based approach such as the maximum entropy method (Ratnaparkhi, 1996) might be useful, because we don't have to divide the training data into several disjoint sets (like we did by part of speech and word type) and we can incorporate more linguistic and morphological knowledge into the same probabilistic framework."
P99-1046,W96-0213,o,"Our named entity recognizer used a maximum entropy model, built with Adwait Ratnaparkhi's tools (Ratnaparkhi, 1996) to label word sequences as either person, place, company or none of the above based on local cues including the surrounding words and whether honorifics (e.g. Mrs. or Gen)."
P99-1082,W96-0213,o,"The tool set for TEA is constantly being extended, recent additions include a prototype symbolic classifier, shallow parser (Choi, Forthcoming), sentence segmentation algorithm (Reynar and Ratnaparkhi, 1997) and a POS tagger (Ratnaparkhi, 1996)."
W00-0731,W96-0213,o,"2 The Tagger We used Ratnaparkhi's maximum entropy-based POS tagger (Ratnaparkhi, 1996)."
W00-0731,W96-0213,o,"For our experiments, we used the binary-only distribution of the tagger (Ratnaparkhi, 1996)."
W00-0904,W96-0213,o,"We use a tagger based on Adwait Ratnaparkhi's method (Ratnaparkhi, 1996)."
W00-1308,W96-0213,n,"A maximum entropy approach has been applied to part-of-speech tagging before (Ratnaparkhi 1996), but the approach's ability to incorporate nonlocal and non-HMM-tagger-type evidence has not been fully explored."
W00-1308,W96-0213,o,1 The Baseline Maximum Entropy Model We started with a maximum entropy based tagger that uses features very similar to the ones proposed in Ratnaparkhi (1996).
W00-1308,W96-0213,n,"Ratnaparkhi (1996: 134) suggests use of an approximation summing over the training data, which does not sum over possible tags: E f_j = Σ_{i=1}^{n} p̃(h_i) p(t_i|h_i) f_j(h_i, t_i). However, we believe this passage is in error: such an estimate is ineffective in the iterative scaling algorithm."
W00-1308,W96-0213,o,The features that define the constraints on the model are obtained by instantiation of feature templates as in Ratnaparkhi (1996).
W00-1308,W96-0213,o,They are a subset of the features used in Ratnaparkhi (1996).
W00-1308,W96-0213,p,"Among recent top performing methods are Hidden Markov Models (Brants 2000), maximum entropy approaches (Ratnaparkhi 1996), and transformation-based learning (Brill 1994)."
W00-1308,W96-0213,o,"The feature templates in Ratnaparkhi (1996) that were left out were the ones that look at the previous word, the word two positions before the current, and the word two positions after the current."
W00-1308,W96-0213,o,"Model / Overall Accuracy / Unknown Word Accuracy: Baseline 96.72% 84.5%; Ratnaparkhi (1996) 96.63% 85.56% (Table 3: Baseline model performance). This table also shows the results reported in Ratnaparkhi (1996: 142) for convenience."
W00-1308,W96-0213,o,"This may stem from the differences between the two models' feature templates, thresholds, and approximations of the expected values for the features, as discussed in the beginning of the section, or may just reflect differences in the choice of training and test sets (which are not precisely specified in Ratnaparkhi (1996))."
W00-1308,W96-0213,n,One conclusion that we can draw is that at present the additional word features used in Ratnaparkhi (1996) looking at words more than one position away from the current do not appear to be helping the overall performance of the models.
W00-1308,W96-0213,n,"Some are the result of inconsistency in labeling in the training data (Ratnaparkhi 1996), which usually reflects a lack of linguistic clarity or determination of the correct part of speech in context."
W02-0301,W96-0213,o,"We use the maximum entropy tagging method described in (Kazama et al., 2001) for the experiments, which is a variant of (Ratnaparkhi, 1996) modified to use HMM state features."
W02-0301,W96-0213,p,"Support Vector Machines (SVMs) (Vapnik, 1995) and Maximum Entropy (ME) method (Berger et al., 1996) are powerful learning methods that satisfy such requirements, and are applied successfully to other NLP tasks (Kudo and Matsumoto, 2000; Nakagawa et al., 2001; Ratnaparkhi, 1996)."
W02-0903,W96-0213,o,"A possible solution to this problem might be the use of more general morphological rules like those used in part-of-speech tagging models (e.g. Ratnaparkhi (1996)), where all suffixes up to a certain length are included."
W02-1006,W96-0213,o,"For these words, we first used a POS tagger (Ratnaparkhi, 1996) to determine the correct POS."
W02-1006,W96-0213,o,"A token can be a word or a punctuation symbol, and each of these neighboring tokens must be in the same sentence as w. We use a sentence segmentation program (Reynar and Ratnaparkhi, 1997) and a POS tagger (Ratnaparkhi, 1996) to segment the tokens surrounding w into sentences and assign POS tags to these tokens."
W02-1019,W96-0213,o,"To obtain these distances, Ratnaparkhi's part-of-speech (POS) tagger (Ratnaparkhi, 1996) and Collins parser (Collins, 1999) were used to obtain parse trees for the English side of the test corpus."
W02-1019,W96-0213,o,"If POS denotes the POS of the English word, we can define the word-to-word distance measure (Equation 4) as POS POS (15) Ratnaparkhi's POS tagger (Ratnaparkhi, 1996) was used to obtain POS tags for each word in the English sentence."
W02-1041,W96-0213,o,"(2001) discuss three approaches: hand-crafted rules; grammatical inference of subsequential transducers; and log-linear classifiers with bigram and trigram features used as taggers (Ratnaparkhi, 1996)."
W02-1116,W96-0213,o,"For this work, an off-the-shelf maximum entropy tagger 10 (Ratnaparkhi, 1996) was used."
W02-1815,W96-0213,o,2 Combining Classifiers for Chinese word segmentation The two machine-learning models we use in this work are the maximum entropy model (Ratnaparkhi 1996) and the error-driven transformation-based learning model (Brill 1994). We use the former as the main workhorse and the latter to correct some of the errors produced by the former.
W02-1815,W96-0213,o,2.2 The maximum entropy tagger The maximum entropy model used in POS tagging is described in detail in Ratnaparkhi (1996) and the POC tagger here uses the same probability model.
W02-2010,W96-0213,o,"The leader of the pack is the MXPOST tagger (Ratnaparkhi, 1996)."
W03-0407,W96-0213,o,"3 The POS taggers The two POS taggers used in the experiments are TNT, a publicly available Markov model tagger (Brants, 2000), and a reimplementation of the maximum entropy (ME) tagger MXPOST (Ratnaparkhi, 1996)."
W03-0424,W96-0213,o,2 The ME Tagger The ME tagger is based on Ratnaparkhi (1996)'s POS tagger and is described in Curran and Clark (2003).
W03-0428,W96-0213,o,"Finally, in section 4 we add additional features to the maxent model, and chain these models into a conditional markov model (CMM), as used for tagging (Ratnaparkhi, 1996) or earlier NER work (Borthwick, 1999)."
W03-0430,W96-0213,p,"There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; Borthwick et al., 1998)."
W03-0806,W96-0213,o,"For instance, implementing an efficient version of the MXPOST POS tagger (Ratnaparkhi, 1996) will simply involve composing and configuring the appropriate text file reading component, with the sequential tagging component, the collection of feature extraction components and the maximum entropy model component."
W03-1018,W96-0213,p,"1 Introduction The maximum entropy model (Berger et al., 1996; Pietra et al., 1997) has attained great popularity in the NLP field due to its power, robustness, and successful performance in various NLP tasks (Ratnaparkhi, 1996; Nigam et al., 1999; Borthwick, 1999)."
W03-1201,W96-0213,o,"We used MXPOST (Ratnaparkhi, 1996), a maximum entropy based POS tagger."
W03-1728,W96-0213,o,"The Maximum Entropy Markov Model used in POS-tagging is described in detail in (Ratnaparkhi, 1996) and the LMR tagger here uses the same probability model."
W03-2909,W96-0213,p,"This approach allows to combine strengths of generality of context attributes as in n-gram models (Brants, 2000; Megyesi, 2001) with their specificity as for binary features in MaxEnt taggers (Ratnaparkhi, 1996; Hajic and Hladká, 1998)."
W04-0305,W96-0213,o,"We determined appropriate training parameters and network size based on intermediate validation 1We used a publicly available tagger (Ratnaparkhi, 1996) to provide the tags."
W04-0814,W96-0213,o,"Every sentence was part-of-speech tagged using a maximum entropy tagger (Ratnaparkhi, 1996) and parsed using a state-of-the-art wide coverage phrase structure parser (Collins, 1999)."
W04-0834,W96-0213,o,"3.1 Part-of-Speech (POS) of Neighboring Words We use 7 features to encode this knowledge source: P-3, P-2, P-1, P0, P1, P2, P3, where P-i (Pi) is the POS of the i-th token to the left (right) of w, and P0 is the POS of w. A token can be a word or a punctuation symbol, and each of these neighboring tokens must be in the same sentence as w. We use a sentence segmentation program (Reynar and Ratnaparkhi, 1997) and a POS tagger (Ratnaparkhi, 1996) to segment the tokens surrounding w into sentences and assign POS tags to these tokens."
W05-0104,W96-0213,o,The original intention of assignment 2 was that students then use this maxent classifier as a building block of a maxent part-of-speech tagger like that of Ratnaparkhi (1996).
W05-0201,W96-0213,o,"We assign tags of part-of-speech (POS) to the words with MXPOST that adopts the Penn Treebank tag set (Ratnaparkhi, 1996)."
W05-0611,W96-0213,o,Direct feedback loops that copy a predicted output label to the input representation of the next example have been used in symbolic machine-learning architectures such as the maximum-entropy tagger described by Ratnaparkhi (1996) and the memory-based tagger (MBT) proposed by Daelemans et al.
W05-0611,W96-0213,o,"Output sequence optimization Rather than basing classifications only on model parameters estimated from co-occurrences between input and output symbols employed for maximizing the likelihood of point-wise single-label predictions at the output level, classifier output may be augmented by an optimization over the output sequence as a whole using optimization techniques such as beam searching in the space of a conditional markov model's output (Ratnaparkhi, 1996) or hidden markov models (Skut and Brants, 1998)."
W05-0806,W96-0213,o,"Therefore, the base forms have been introduced manually and the POS tags have been provided partly manually and partly automatically using a statistical maximum-entropy based POS tagger similar to the one described in (Ratnaparkhi, 1996)."
W05-0821,W96-0213,o,"For the factored language models, a feature-based word representation was obtained by tagging the text with Ratnaparkhi's maximum-entropy tagger (Ratnaparkhi, 1996) and by stemming words using the Porter stemmer (Porter, 1980)."
W05-1011,W96-0213,o,"Context extraction begins with a Maximum Entropy POS tagger and chunker (Ratnaparkhi, 1996)."
W05-1514,W96-0213,o,"4 Filtering with the CFG Rule Dictionary We use an idea that is similar to the method proposed by Ratnaparkhi (Ratnaparkhi, 1996) for part-of-speech tagging."
W05-1515,W96-0213,o,POS tag the text using Ratnaparkhi (1996).
W05-1515,W96-0213,n,"Both Charniak (2000) and Bikel (2004) were trained using the gold-standard tags, as this produced higher accuracy on the development set than using Ratnaparkhi (1996)'s tags."
W06-0123,W96-0213,o,"T* = argmax_T P(T|S) (1) Then we assume that the tagging of one character is independent of each other, and modify formula 1 as T* = argmax_{t_1 t_2 ... t_n} P(t_1 t_2 ... t_n | c_1 c_2 ... c_n) = argmax_{t_1 t_2 ... t_n} Π_{i=1}^{n} P(t_i|c_i) (2) Beam search (n=3) (Ratnaparkhi, 1996) is applied for tag sequence searching, but we only search the valid sequences to ensure the validity of searching result."
W06-1615,W96-0213,o,We used the same 58 feature types as Ratnaparkhi (1996). W06-1615,W96-0213,o,"Finally, we show in Section 7.3 that our SCL PoS tagger improves the performance of a dependency parser on the target domain. [Figure 5: PoS tagging results with no target labeled training data; accuracy on the 561-sentence MEDLINE test set (all words / unknown words): Ratnaparkhi (1996) 87.2 / 65.2, supervised 87.9 / 68.4, semi-ASO 88.4 / 70.9, SCL 88.9 / 72.0.]" W06-1615,W96-0213,n,"For unknown words, SCL gives a relative reduction in error of 19.5% over Ratnaparkhi (1996), even with 40,000 sentences of source domain training data." W06-1615,W96-0213,o,"[Figure 1: part-of-speech-tagged sentences from both corpora.] We investigate its use in part of speech (PoS) tagging (Ratnaparkhi, 1996; Toutanova et al., 2003)." W06-1615,W96-0213,p,"Discriminative taggers and chunkers have been the state-of-the-art for more than a decade (Ratnaparkhi, 1996; Sha and Pereira, 2003)."
W06-1618,W96-0213,o,"Part-of-speech tags are assigned by the MXPOST maximum-entropy based part-of-speech tagger (Ratnaparkhi, 1996)." W06-1666,W96-0213,o,"We used a publicly available tagger (Ratnaparkhi, 1996) to provide the part-of-speech tags for each word in the sentence." W06-1701,W96-0213,o,"Our first model (MA-ME) is based on disambiguating the MA output in the maximum entropy (ME) framework (Ratnaparkhi, 1996)." W06-3327,W96-0213,p,"2 Method Maximum Entropy Markov Models (MEMMs) (Ratnaparkhi 1996) and their extensions (Toutanova et al. 2003, Tsuruoka et al. 2005) have been successfully applied to English POS tagging." W06-3603,W96-0213,o,"We use the same preprocessing steps as Turian and Melamed (2005): during both training and testing, the parser is given text POS-tagged by the tagger of Ratnaparkhi (1996), with capitalization stripped and outermost punctuation removed." W06-3603,W96-0213,o,"(2004), the only one that we were able to train and test under exactly the same experimental conditions (including the use of POS tags from Ratnaparkhi (1996))." W06-3603,W96-0213,o,"The initial state contains terminal items, whose labels are the POS tags given by Ratnaparkhi (1996)." W07-1202,W96-0213,o,"It uses a log-linear model to define a distribution over the lexical category set for each word and the previous two categories (Ratnaparkhi, 1996) and the forward-backward algorithm efficiently sums over all histories to give a distribution for each word." W07-1209,W96-0213,o,"So, we pre-tagged the input to the Bikel parser using the MXPOST tagger (Ratnaparkhi, 1996)."
W07-1516,W96-0213,p,"More recent work has achieved state-of-the-art results with Maximum entropy conditional Markov models (MaxEnt CMMs, or MEMMs for short) (Ratnaparkhi, 1996; Toutanova & Manning, 2000; Toutanova et al., 2003)." W07-2053,W96-0213,o,"We use MXPOST tagger (Adwait, 1996) for POS tagging, Charniak parser (Charniak, 2000) for extracting syntactic relations, and David Blei's version of LDA1 for LDA training and inference." W07-2206,W96-0213,o,"The supertagger uses a log-linear model to define a distribution over the lexical category set for each word and the previous two categories (Ratnaparkhi, 1996) and the forward-backward algorithm efficiently sums over all histories to give a distribution for each word." W08-0206,W96-0213,o,"Hw6: Implement beam search and replicate the POS tagger described in (Ratnaparkhi, 1996)." W08-0206,W96-0213,o,"For Hw6, students compared their POS tagging results with the ones reported in (Ratnaparkhi, 1996)." W08-0206,W96-0213,o,"For instance, for Maximum Entropy, I picked (Berger et al., 1996; Ratnaparkhi, 1997) for the basic theory, (Ratnaparkhi, 1996) for an application (POS tagging in this case), and (Klein and Manning, 2003) for more advanced topics such as optimization and smoothing." W08-0409,W96-0213,o,"We tagged all the sentences in the training and devset3 using a maximum entropy-based POS tagger MXPOST (Ratnaparkhi, 1996), trained on the Penn English and Chinese Treebanks." W08-0611,W96-0213,o,"A maximum-entropy-based part of speech tagger was used (Ratnaparkhi, 1996) without the adaptation to the biomedical domain."
W09-0416,W96-0213,o,"The features we used are as follows: Direct and inverse IBM model; 3, 4-gram target language model; 3, 4, 5-gram POS language model (Ratnaparkhi, 1996; Schmid, 1994); Sentence length posterior probability (Zens and Ney, 2006); N-gram posterior probabilities within the N-best list (Zens and Ney, 2006); Minimum Bayes Risk probability; Length ratio between source and target sentence; The weights are optimized via the MERT algorithm." W09-0715,W96-0213,o,"Using a Maximum Entropy approach to POS tagging, Ratnaparkhi (1996) reports a tagging accuracy of 96.6% on the Wall Street Journal." W96-0111,W96-0213,o,"Ratnaparkhi, 1996), a single inconsistency in a test set tree will very likely yield a zero percent parse accuracy for the particular test set sentence." W97-0301,W96-0213,o,"The maximum entropy models used here are similar in form to those in (Ratnaparkhi, 1996; Berger, Della Pietra, and Della Pietra, 1996; Lau, Rosenfeld, and Roukos, 1993)." W97-0301,W96-0213,o,"The training samples are respectively used to create the models PTAG, PCHUNK, PBUILD, and PCHECK, all of which have the form: p(a, b) = π ∏_{j=1}^{k} α_j^{f_j(a,b)} (1), where a is some action, b is some context, π is a normalization constant, and α_1, ..., α_k are the model parameters. [Table of model categories and feature templates: TAG uses the templates of (Ratnaparkhi, 1996); CHUNK, BUILD, and CHECK use templates such as chunkandpostag(n), chunkandpostag(m, n), cons(n), cons(m, n), cons(m, n, p), punctuation, checkcons(n), checkcons(m, n), production, and surround(n), defined over the word, POS tag, and chunk tag of the nth leaf.]" W97-0301,W96-0213,o,"The search also uses a Tag Dictionary constructed from training data, described in (Ratnaparkhi, 1996), that reduces the number of actions explored by the tagging model." W98-1116,W96-0213,p,"Models that can handle non-independent lexical features have given very good results both for part-of-speech and structural disambiguation (Ratnaparkhi, 1996; Ratnaparkhi, 1997; Ratnaparkhi, 1998)."
W98-1117,W96-0213,o,"Its applications range from sentence boundary disambiguation (Reynar and Ratnaparkhi, 1997) to part-of-speech tagging (Ratnaparkhi, 1996), parsing (Ratnaparkhi, 1997) and machine translation (Berger et al., 1996)." W98-1118,W96-0213,p,"He has achieved state-of-the-art results by applying M.E. to parsing (Ratnaparkhi, 1997a), part-of-speech tagging (Ratnaparkhi, 1996), and sentence-boundary detection (Reynar and Ratnaparkhi, 1997)." W99-0606,W96-0213,o,"B = (Brill and Wu, 1998); M = (Magerman, 1995); O = our data; R = (Ratnaparkhi, 1996); W = (Weischedel and others, 1993)." W99-0607,W96-0213,o,"The model we use is similar to that of (Ratnaparkhi, 1996)." W99-0607,W96-0213,p,"Our model exploits the same kind of tag-n-gram information that forms the core of many successful tagging models, for example, (Kupiec, 1992), (Merialdo, 1994), (Ratnaparkhi, 1996)." W99-0608,W96-0213,o,"In that table, TBL stands for Brill's transformation-based error-driven tagger (Brill, 1995), ME stands for a tagger based on the maximum entropy modelling (Ratnaparkhi, 1996), SPATTER stands for a statistical parser based on decision trees (Magerman, 1996), IGTREE stands for the memory-based tagger by Daelemans et al."