text (string, lengths 0–30.2k) |
---|
W03-0430 W96-0213 p "There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; Borthwick et al. , 1998)." |
W03-0806 W96-0213 o "For instance, implementing an efficient version of the MXPOST POS tagger (Ratnaparkhi, 1996) will simply involve composing and configuring the appropriate text file reading component, with the sequential tagging component, the collection of feature extraction components and the maximum entropy model component." |
W03-1018 W96-0213 p "1 Introduction The maximum entropy model (Berger et al. , 1996; Pietra et al. , 1997) has attained great popularity in the NLP field due to its power, robustness, and successful performance in various NLP tasks (Ratnaparkhi, 1996; Nigam et al. , 1999; Borthwick, 1999)." |
W03-1201 W96-0213 o "We used MXPOST (Ratnaparkhi, 1996), a maximum entropy based POS tagger." |
W03-1728 W96-0213 o "The Maximum Entropy Markov Model used in POS-tagging is described in detail in (Ratnaparkhi, 1996) and the LMR tagger here uses the same probability model." |
W03-2909 W96-0213 p "This approach allows one to combine strengths of generality of context attributes as in n-gram models (Brants, 2000; Megyesi, 2001) with their specificity as for binary features in MaxEnt taggers (Ratnaparkhi, 1996; Hajič and Hladká, 1998)." |
W04-0305 W96-0213 o "We determined appropriate training parameters and network size based on intermediate validation. We used a publicly available tagger (Ratnaparkhi, 1996) to provide the tags." |
W04-0814 W96-0213 o "Every sentence was part-of-speech tagged using a maximum entropy tagger (Ratnaparkhi, 1996) and parsed using a state-of-the-art wide coverage phrase structure parser (Collins, 1999)." |
W04-0834 W96-0213 o "3.1 Part-of-Speech (POS) of Neighboring Words We use 7 features to encode this knowledge source: P_{-3}, P_{-2}, P_{-1}, P_0, P_1, P_2, P_3, where P_{-i} (P_i) is the POS of the i-th token to the left (right) of w, and P_0 is the POS of w. A token can be a word or a punctuation symbol, and each of these neighboring tokens must be in the same sentence as w. We use a sentence segmentation program (Reynar and Ratnaparkhi, 1997) and a POS tagger (Ratnaparkhi, 1996) to segment the tokens surrounding w into sentences and assign POS tags to these tokens." |
W05-0104 W96-0213 o The original intention of assignment 2 was that students then use this maxent classifier as a building block of a maxent part-of-speech tagger like that of Ratnaparkhi (1996). |
W05-0201 W96-0213 o "We assign tags of part-of-speech (POS) to the words with MXPOST that adopts the Penn Treebank tag set (Ratnaparkhi, 1996)." |
W05-0611 W96-0213 o Direct feedback loops that copy a predicted output label to the input representation of the next example have been used in symbolic machine-learning architectures such as the maximum-entropy tagger described by Ratnaparkhi (1996) and the memory-based tagger (MBT) proposed by Daelemans et al.
W05-0611 W96-0213 o "Output sequence optimization Rather than basing classifications only on model parameters estimated from co-occurrences between input and output symbols employed for maximizing the likelihood of point-wise single-label predictions at the output level, classifier output may be augmented by an optimization over the output sequence as a whole using optimization techniques such as beam searching in the space of a conditional Markov model's output (Ratnaparkhi, 1996) or hidden Markov models (Skut and Brants, 1998)." |
W05-0806 W96-0213 o "Therefore, the base forms have been introduced manually and the POS tags have been provided partly manually and partly automatically using a statistical maximum-entropy based POS tagger similar to the one described in (Ratnaparkhi, 1996)." |
W05-0821 W96-0213 o "For the factored language models, a feature-based word representation was obtained by tagging the text with Ratnaparkhi's maximum-entropy tagger (Ratnaparkhi, 1996) and by stemming words using the Porter stemmer (Porter, 1980)." |
W05-1011 W96-0213 o "Context extraction begins with a Maximum Entropy POS tagger and chunker (Ratnaparkhi, 1996)." |
W05-1514 W96-0213 o "4 Filtering with the CFG Rule Dictionary We use an idea that is similar to the method proposed by Ratnaparkhi (Ratnaparkhi, 1996) for part-of-speech tagging." |
W05-1515 W96-0213 o POS tag the text using Ratnaparkhi (1996). |
W05-1515 W96-0213 n "Both Charniak (2000) and Bikel (2004) were trained using the gold-standard tags, as this produced higher accuracy on the development set than using Ratnaparkhi (1996)'s tags." |
W06-0123 W96-0213 o "$T^* = \arg\max_T P(T \mid S)$ (1) Then we assume that the tagging of one character is independent of each other, and modify formula 1 as $T^* = \arg\max_{T = t_1 t_2 \dots t_n} P(t_1 t_2 \dots t_n \mid c_1 c_2 \dots c_n) = \arg\max_{T = t_1 t_2 \dots t_n} \prod_{i=1}^{n} P(t_i \mid c_i)$ (2) Beam search (n=3) (Ratnaparkhi, 1996) is applied for tag sequence searching, but we only search the valid sequences to ensure the validity of searching result." |
W06-1615 W96-0213 o We used the same 58 feature types as Ratnaparkhi (1996). |
W06-1615 W96-0213 o "Finally, we show in Section 7.3 that our SCL PoS tagger improves the performance of a dependency parser on the target domain. [Figure 5: PoS tagging results with no target labeled training data. (b) Accuracy on the 561-sentence MEDLINE test set (All / Unknown words): Ratnaparkhi (1996) 87.2 / 65.2, supervised 87.9 / 68.4, semi-ASO 88.4 / 70.9, SCL 88.9 / 72.0. (c) McNemar's p-values (all words): semi-ASO vs. super 0.0015, SCL vs. super 2.1e-12, SCL vs. semi-ASO 0.0003. Figure 6: (b) 500 target domain training sentences, testing accuracy: nosource 94.5, 1k-super 94.5, 1k-SCL 95.0, 40k-super 95.6, 40k-SCL 96.1. (c) McNemar's p-values (500 training sentences): 1k-super vs. nosource 0.732, 1k-SCL vs. 1k-super 0.0003, 40k-super vs. nosource 1.9e-12, 40k-SCL vs. 40k-super 6.5e-7.]" |
W06-1615 W96-0213 n "For unknown words, SCL gives a relative reduction in error of 19.5% over Ratnaparkhi (1996), even with 40,000 sentences of source domain training data." |
W06-1615 W96-0213 o "[Figure 1: part-of-speech-tagged sentences from both corpora; (b) MEDLINE: The/DT oncogenic/JJ mutated/VBN forms/NNS of/IN the/DT ras/NN proteins/NNS are/VBP constitutively/RB active/JJ and/CC interfere/VBP with/IN normal/JJ signal/NN transduction/NN ./.] we investigate its use in part of speech (PoS) tagging (Ratnaparkhi, 1996; Toutanova et al. , 2003)." |
W06-1615 W96-0213 p "Discriminative taggers and chunkers have been the state-of-the-art for more than a decade (Ratnaparkhi, 1996; Sha and Pereira, 2003)." |
W06-1618 W96-0213 o "Part-of-speech tags are assigned by the MXPOST maximum-entropy based part-of-speech tagger (Ratnaparkhi, 1996)." |
W06-1666 W96-0213 o "We used a publicly available tagger (Ratnaparkhi, 1996) to provide the part-of-speech tags for each word in the sentence." |
W06-1701 W96-0213 o "Our first model (MA-ME) is based on disambiguating the MA output in the maximum entropy (ME) framework (Ratnaparkhi, 1996)." |
W06-3327 W96-0213 p "2 Method Maximum Entropy Markov Models (MEMMs) (Ratnaparkhi 1996) and their extensions (Toutanova et al 2003, Tsuruoka et al 2005) have been successfully applied to English POS tagging." |
W06-3603 W96-0213 o "We use the same preprocessing steps as Turian and Melamed (2005): during both training and testing, the parser is given text POS-tagged by the tagger of Ratnaparkhi (1996), with capitalization stripped and outermost punctuation removed." |
W06-3603 W96-0213 o "[Table of per-step training times (mean, stddev, % of total): 1.5 Sample 1.5s, 0.07s, 0.7%; 1.6 Extraction 38.2s, 0.13s, 18.6%; 1.7 Build tree 127.6s, 27.60s, 62.3%; 1.8 Percolation 31.4s, 4.91s, 15.3%; 1.9-11 Leaf updates 6.2s, 1.75s, 3.0%; 1.5-11 Total 204.9s, 32.6s, 100.0%] ... 2004), the only one that we were able to train and test under exactly the same experimental conditions (including the use of POS tags from Ratnaparkhi (1996))." |
W06-3603 W96-0213 o "The initial state contains terminal items, whose labels are the POS tags given by Ratnaparkhi (1996)." |
W07-1202 W96-0213 o "It uses a log-linear model to define a distribution over the lexical category set for each word and the previous two categories (Ratnaparkhi, 1996) and the forward backward algorithm efficiently sums over all histories to give a distribution for each word." |
W07-1209 W96-0213 o "So, we pre-tagged the input to the Bikel parser using the MXPOST tagger (Ratnaparkhi, 1996)." |
W07-1516 W96-0213 p "More recent work has achieved state-of-the-art results with Maximum entropy conditional Markov models (MaxEnt CMMs, or MEMMs for short) (Ratnaparkhi, 1996; Toutanova & Manning, 2000; Toutanova et al. , 2003)." |
W07-2053 W96-0213 o "We use MXPOST tagger (Adwait, 1996) for POS tagging, Charniak parser (Charniak, 2000) for extracting syntactic relations, and David Blei's version of LDA for LDA training and inference." |
W07-2206 W96-0213 o "The supertagger uses a log-linear model to define a distribution over the lexical category set for each word and the previous two categories (Ratnaparkhi, 1996) and the forward backward algorithm efficiently sums over all histories to give a distribution for each word." |
W08-0206 W96-0213 o "Hw6: Implement beam search and reduplicate the POS tagger described in (Ratnaparkhi, 1996)." |
W08-0206 W96-0213 o "For Hw6, students compared their POS tagging results with the ones reported in (Ratnaparkhi, 1996)." |
W08-0206 W96-0213 o "For instance, for Maximum Entropy, I picked (Berger et al., 1996; Ratnaparkhi, 1997) for the basic theory, (Ratnaparkhi, 1996) for an application (POS tagging in this case), and (Klein and Manning, 2003) for more advanced topics such as optimization and smoothing." |
W08-0409 W96-0213 o "We tagged all the sentences in the training and devset3 using a maximum entropy-based POS tagger MXPOST (Ratnaparkhi, 1996), trained on the Penn English and Chinese Treebanks." |
W08-0611 W96-0213 o "A maximum-entropy-based part of speech tagger was used (Ratnaparkhi, 1996) without the adaptation to the biomedical domain." |
W09-0416 W96-0213 o "The features we used are as follows: Direct and inverse IBM model; 3, 4-gram target language model; 3, 4, 5-gram POS language model (Ratnaparkhi, 1996; Schmid, 1994); Sentence length posterior probability (Zens and Ney, 2006); N-gram posterior probabilities within the N-best list (Zens and Ney, 2006); Minimum Bayes Risk probability; Length ratio between source and target sentence; The weights are optimized via MERT algorithm." |
W09-0715 W96-0213 o "Using a Maximum Entropy approach to POS tagging, Ratnaparkhi (1996) reports a tagging accuracy of 96.6% on the Wall Street Journal." |
W96-0111 W96-0213 o "Ratnaparkhi, 1996), a single inconsistency in a test set tree will very likely yield a zero percent parse accuracy for the particular test set sentence." |
W97-0301 W96-0213 o "The maximum entropy models used here are similar in form to those in (Ratnaparkhi, 1996; Berger, Della Pietra, and Della Pietra, 1996; Lau, Rosenfeld, and Roukos, 1993)." |
W97-0301 W96-0213 o "The training samples are respectively used to create the models $P_{TAG}$, $P_{CHUNK}$, $P_{BUILD}$, and $P_{CHECK}$, all of which have the form: $p(a, b) = \pi \prod_{j=1}^{k} \alpha_j^{f_j(a,b)}$ (1), where $a$ is some action, $b$ is some context, and $\pi$ is a normalization constant. [Table of model categories and feature templates used: TAG — see (Ratnaparkhi, 1996); CHUNK — chunkandpostag(n)*, chunkandpostag(m, n)*; BUILD — cons(n), cons(m, n)*, cons(m, n, p), punctuation; CHECK — checkcons(n)*, checkcons(m, n)*, production, surround(n)*; chunkandpostag(n) denotes the word, POS tag, and chunk tag of the nth leaf.]" |
W97-0301 W96-0213 o "The search also uses a Tag Dictionary constructed from training data, described in (Ratnaparkhi, 1996), that reduces the number of actions explored by the tagging model." |
W98-1116 W96-0213 p "Models that can handle non-independent lexical features have given very good results both for part-of-speech and structural disambiguation (Ratnaparkhi, 1996; Ratnaparkhi, 1997; Ratnaparkhi, 1998)." |
W98-1117 W96-0213 o "Its applications range from sentence boundary disambiguation (Reynar and Ratnaparkhi, 1997) to part-of-speech tagging (Ratnaparkhi, 1996), parsing (Ratnaparkhi, 1997) and machine translation (Berger et al. , 1996)." |
W98-1118 W96-0213 p "He has achieved state-of-the-art results by applying M.E. to parsing (Ratnaparkhi, 1997a), part-of-speech tagging (Ratnaparkhi, 1996), and sentence-boundary detection (Reynar and Ratnaparkhi, 1997)." |
W99-0606 W96-0213 o "B = (Brill and Wu, 1998); M = (Magerman, 1995); O = our data; R = (Ratnaparkhi, 1996); W = (Weischedel and others, 1993)." |
W99-0607 W96-0213 o "The model we use is similar to that of (Ratnaparkhi, 1996)." |
W99-0607 W96-0213 p "Our model exploits the same kind of tag-n-gram information that forms the core of many successful tagging models, for example, (Kupiec, 1992), (Merialdo, 1994), (Ratnaparkhi, 1996)." |
W99-0608 W96-0213 o "In that table, TBL stands for Brill's transformation-based error-driven tagger (Brill, 1995), ME stands for a tagger based on the maximum entropy modelling (Ratnaparkhi, 1996), SPATTER stands for a statistical parser based on decision trees (Magerman, 1996), IGTREE stands for the memory-based tagger by Daelemans et al."