A large number of current language processing systems use a part-of-speech tagger for pre-processing. The tagger assigns a (unique or ambiguous) part-of-speech tag to each token in the input and passes its output to the next processing level, usually a parser. Furthermore, there is a large interest in part-of-speech tagging for corpus annotation projects, which create valuable linguistic resources by a combination of automatic processing and human correction. For both applications, a tagger with the highest possible accuracy is required.

The debate about which paradigm solves the part-of-speech tagging problem best is not finished. Recent comparisons of approaches that can be trained on corpora (van Halteren et al., 1998; Volk and Schneider, 1998) have shown that in most cases statistical approaches (Cutting et al., 1992; Schmid, 1995; Ratnaparkhi, 1996) yield better results than finite-state, rule-based, or memory-based taggers (Brill, 1993; Daelemans et al., 1996). They are only surpassed by combinations of different systems, forming a "voting tagger". Among the statistical approaches, the maximum entropy framework has a very strong position. Nevertheless, a recent independent comparison of seven taggers (Zavrel and Daelemans, 1999) has shown that another approach works even better: Markov models combined with a good smoothing technique and with handling of unknown words. This tagger, TnT, not only yielded the highest accuracy, it was also the fastest in both training and tagging. The tagger comparison was organized as a "black-box test": set the same task to every tagger and compare the outcomes.

This paper describes the models and techniques used by TnT together with their implementation. The reader will be surprised how simple the underlying model is. The result of the tagger comparison seems to support the maxim "the simplest is the best". However, in this paper we clarify a number of details that are omitted in major previous publications on tagging with Markov models. As two examples, (Rabiner, 1989) and (Charniak et al., 1993) give good overviews of the techniques and equations used for Markov models and part-of-speech tagging, but they are not very explicit about the details that are needed for their application. We argue that it is not only the choice of the general model that determines the result of the tagger but also the various "small" decisions on alternatives.

The aim of this paper is to give a detailed account of the techniques used in TnT. Additionally, we present results of the tagger on the NEGRA corpus (Brants et al., 1999) and the Penn Treebank (Marcus et al., 1993). The Penn Treebank results reported here for the Markov model approach are at least equivalent to those reported for the maximum entropy approach in (Ratnaparkhi, 1996). For a comparison to other taggers, the reader is referred to (Zavrel and Daelemans, 1999). TnT is freely available to universities and related organizations for research purposes (see http://www.coli.uni-sb.de/~thorsten/tnt).

We have shown that a tagger based on Markov models yields state-of-the-art results, despite contrary claims found in the literature.
For example, the Markov model tagger used in the comparison of (van Halteren et al., 1998) yielded worse results than all other taggers. It is a very interesting topic for future research to determine the advantages of each of these approaches, to find the reasons for their high accuracies, and to find a good combination of both.
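The approach described above rests on a trigram Markov model whose transition probabilities are smoothed and which must also assign probabilities to unknown words. The sketch below illustrates only that general idea and is not TnT's implementation: it interpolates unigram, bigram, and trigram relative frequencies with fixed, hand-set weights (whereas TnT estimates its weights by deleted interpolation), it falls back to a small constant for unseen words (whereas TnT performs a suffix analysis), and the function names and toy corpus are illustrative assumptions.

```python
# Minimal sketch of a trigram Markov-model tagger (not TnT itself):
# smoothed transition probabilities p(t_i | t_{i-2}, t_{i-1}), lexical
# probabilities p(w_i | t_i), and Viterbi decoding over tag-pair states.
from collections import defaultdict
from itertools import product
import math

def train(tagged_sentences, l1=0.1, l2=0.3, l3=0.6):
    """Estimate interpolated transition and emission probabilities."""
    uni, bi, tri = defaultdict(int), defaultdict(int), defaultdict(int)
    emit, tags = defaultdict(int), set()
    for sent in tagged_sentences:
        for w, t in sent:
            emit[(w, t)] += 1
            tags.add(t)
        seq = [("<s>", "<s>"), ("<s>", "<s>")] + sent   # boundary padding
        for i in range(2, len(seq)):
            t2, t1, t0 = seq[i - 2][1], seq[i - 1][1], seq[i][1]
            uni[t0] += 1
            bi[(t1, t0)] += 1
            tri[(t2, t1, t0)] += 1
    total = sum(uni.values())

    def trans(t2, t1, t0):
        # Fixed-weight linear interpolation of uni-, bi-, and trigram estimates.
        p1 = uni[t0] / total if total else 0.0
        p2 = bi[(t1, t0)] / uni[t1] if uni[t1] else 0.0
        p3 = tri[(t2, t1, t0)] / bi[(t2, t1)] if bi[(t2, t1)] else 0.0
        return l1 * p1 + l2 * p2 + l3 * p3

    def emis(w, t):
        # Crude unknown-word fallback; TnT instead uses a suffix analysis.
        return emit[(w, t)] / uni[t] if emit[(w, t)] else 1e-6

    return tags, trans, emis

def viterbi(words, tags, trans, emis):
    """Most probable tag sequence; each state is the last two tags."""
    scores = {("<s>", "<s>"): 0.0}        # log-probability of best path
    back = []                             # one back-pointer table per word
    for w in words:
        new_scores, pointers = {}, {}
        for (t2, t1), t0 in product(scores, tags):
            p = trans(t2, t1, t0) * emis(w, t0)
            if p <= 0.0:
                continue
            s = scores[(t2, t1)] + math.log(p)
            if (t1, t0) not in new_scores or s > new_scores[(t1, t0)]:
                new_scores[(t1, t0)] = s
                pointers[(t1, t0)] = (t2, t1)
        scores = new_scores
        back.append(pointers)
    state = max(scores, key=scores.get)   # best final tag pair
    result = [state[1]]
    for pointers in reversed(back[1:]):   # follow back-pointers to the start
        state = pointers[state]
        result.append(state[1])
    return list(reversed(result))

if __name__ == "__main__":
    corpus = [[("the", "DT"), ("dog", "NN"), ("barks", "VBZ")],
              [("a", "DT"), ("cat", "NN"), ("sleeps", "VBZ")]]
    tags, trans, emis = train(corpus)
    print(viterbi(["the", "cat", "barks"], tags, trans, emis))  # ['DT', 'NN', 'VBZ']
```

Running the toy example tags the sentence with the tags seen in training; a real system along the lines sketched above would add, at minimum, sentence-boundary handling in decoding, properly estimated interpolation weights, and the suffix-based unknown-word model that TnT describes.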