# This corpus consists of 8,736 citation sentences which have been manually annotated with sentiment.
# The citation sentences have been extracted from the ACL Anthology Network corpus.
# The file format is:
# Source_Paper Target_Paper Sentiment Citation_Text
# where Sentiment is one of: o (objective/neutral), p (positive), n (negative)
#
# If you use this corpus, please cite the following paper:
#
# @InProceedings{athar:2011:SS,
# author = {Athar, Awais},
# title = {Sentiment Analysis of Citations using Sentence Structure-Based Features},
# booktitle = {Proceedings of the ACL 2011 Student Session},
# month = {June},
# year = {2011},
# address = {Portland, OR, USA},
# publisher = {Association for Computational Linguistics},
# pages = {81--87},
# url = {http://www.aclweb.org/anthology/P11-3015}
# }
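The rows below follow the format described in the header: three whitespace-delimited fields (source paper ID, target paper ID, sentiment label) followed by the citation text, which may be wrapped in double quotes with inner quotes doubled. A minimal parsing sketch under those assumptions (the function names `parse_line` and `load_corpus` are illustrative, not part of the corpus distribution):

```python
def parse_line(line):
    """Split one corpus row into (source, target, sentiment, citation_text).

    Assumes the first three fields contain no spaces and the citation
    text may be surrounded by double quotes, with embedded quotes
    escaped by doubling ("" -> ").
    """
    source, target, sentiment, text = line.rstrip("\n").split(" ", 3)
    if len(text) >= 2 and text.startswith('"') and text.endswith('"'):
        text = text[1:-1].replace('""', '"')
    return source, target, sentiment, text


def load_corpus(path):
    """Read the corpus file, skipping blank lines and '#' header comments."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.rstrip("\n")
            if not stripped or stripped.startswith("#"):
                continue
            rows.append(parse_line(stripped))
    return rows
```

For example, a row labeled `n` parses into a 4-tuple whose last element is the unquoted citation sentence, so the 8,736 rows can be loaded directly into a sentiment-classification pipeline.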
A00-1043 A00-2024 o "We analyzed a set of articles and identified six major operations that can be used for editing the extracted sentences, including removing extraneous phrases from an extracted sentence, combining a reduced sentence with other sentences, syntactic transformation, substituting phrases in an extracted sentence with their paraphrases, substituting phrases with more general or specific descriptions, and reordering the extracted sentences (Jing and McKeown, 1999; Jing and McKeown, 2000)."
H05-1033 A00-2024 o "Table 3: Example compressions Compression AvgLen Rating Baseline 9.70 1.93 BT-2-Step 22.06 3.21 Spade 19.09 3.10 Humans 20.07 3.83 Table 4: Mean ratings for automatic compressions nally, we added a simple baseline compression algorithm proposed by Jing and McKeown (2000) which removed all prepositional phrases, clauses, toinfinitives, and gerunds."
I05-2009 A00-2024 o "5.3 Related works and discussion Our two-step model essentially belongs to the same category as the works of (Mani et al. , 1999) and (Jing and McKeown, 2000)."
I05-2009 A00-2024 o "(1999) proposed a summarization system based on the draft and revision. Jing and McKeown (2000) proposed a system based on extraction and cut-and-paste generation. Our abstractors performed the same cut-and-paste operations that Jing and McKeown noted in their work, and we think that our two-step model will be a reasonable starting point for our subsequent research."
I05-2009 A00-2024 o "We found that the deletion of lead parts did not occur very often in our summary, unlike the case of Jing and McKeown (2000)."
I08-1016 A00-2024 o "Automatic text summarization approaches have offered reasonably well-performing approximations for identifiying important sentences (Lin and Hovy, 2002; Schiffman et al., 2002; Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Daume III and Marcu, 2006) but, not surprisingly, text (re)generation has been a major challange despite some work on sub-sentential modification (Jing and McKeown, 2000; Knight and Marcu, 2000; Barzilay and McKeown, 2005)."
I08-2101 A00-2024 p "al., 1994), compression of sentences with Automatic Translation approaches (Knight and Marcu, 2000), Hidden Markov Model (Jing and McKeown, 2000), Topic Signatures based methods (Lin and Hovy, 2000, Lacatusu et al., 2006) are among the most popular techniques that have been used in the summarization systems of this category."
J02-4002 A00-2024 o "Because of this, it is generally accepted that some kind of postprocessing should be performed to improve the final result, by shortening, fusing, or otherwise revising the material (Grefenstette 1998; Mani, Gates, and Bloedorn 1999; Jing and McKeown 2000; Barzilay et al. 2000; Knight and Marcu 2000)."
J02-4004 A00-2024 o "Additionally, some research has explored cutting and pasting segments of text from the full document to generate a summary (Jing and McKeown 2000)."
J02-4005 A00-2024 p "But in fact, the issue of editing in text summarization has usually been neglected, notable exceptions being the works by Jing and McKeown (2000) and Mani, Gates, and Bloedorn (1999)."
J02-4005 A00-2024 o Jing and McKeown (2000) and Jing (2000) propose a cut-and-paste strategy as a computational process of automatic abstracting and a sentence reduction strategy to produce concise sentences.
J02-4005 A00-2024 o Our work in sentence reformulation is different from cut-and-paste summarization (Jing and McKeown 2000) in many ways.
J02-4005 A00-2024 n "Jing and McKeown (2000) have proposed a rule-based algorithm for sentence combination, but no results have been reported."
J05-3002 A00-2024 o "As previously observed in the literature (Mani, Gates, and Bloedorn 1999; Jing and McKeown 2000), such components include a clause in the clause conjunction, relative clauses, and some elements within a clause (such as adverbs and prepositions)."
J05-3002 A00-2024 o "In addition to sentence fusion, compression algorithms (Chandrasekar, Doran, and Bangalore 1996; Grefenstette 1998; Mani, Gates, and Bloedorn 1999; Knight and Marcu 2002; Jing and McKeown 2000; Reizler et al. 2003) and methods for expansion of a multiparallel corpus (Pang, Knight, and Marcu 2003) are other instances of such methods."
J05-3002 A00-2024 o "While earlier approaches for text compression were based on symbolic reduction rules (Grefenstette 1998; Mani, Gates, and Bloedorn 1999), more recent approaches use an aligned corpus of documents and their human written summaries to determine which constituents can be reduced (Knight and Marcu 2002; Jing and McKeown 2000; Reizler et al. 2003)."
J05-3002 A00-2024 o "While this approach exploits only syntactic and lexical information, Jing and McKeown (2000) also rely on cohesion information, derived from word distribution in a text: Phrases that are linked to a local context are retained, while phrases that have no such links are dropped."
J05-3002 A00-2024 o "In addition to reducing the original sentences, Jing and McKeown (2000) use a number of manually compiled rules to aggregate reduced sentences; for example, reduced clauses might be conjoined with and."
W02-0404 A00-2024 o "Previous research has addressed revision in single-document summaries [Jing & McKeown, 2000] [Mani et al, 1999] and has suggested that revising summaries can make them more informative and correct errors."
W02-0404 A00-2024 o "To contrast, [Jing & McKeown, 2000] concentrated on analyzing human-written summaries in order to determine how professionals construct summaries."
W03-1004 A00-2024 o "1 Introduction Text-to-text generation is an emerging area of research in NLP (Chandrasekar and Bangalore, 1997; Caroll et al. , 1999; Knight and Marcu, 2000; Jing and McKeown, 2000)."
W03-1102 A00-2024 p "The recent approach for editing extracted text spans (Jing and McKeown, 2000) may also produce improvement for our algorithm."
W09-0604 A00-2024 o "First, splitting and merging of sentences (Jing and McKeown, 2000), which seems related to content planning and aggregation."
W09-0604 A00-2024 o "1 Introduction The task of sentence compression (or sentence reduction) can be defined as summarizing a single sentence by removing information from it (Jing and McKeown, 2000)."
W09-0604 A00-2024 o "One of the applications is in automatic summarization in order to compress sentences extracted for the summary (Lin, 2003; Jing and McKeown, 2000)."
W09-2807 A00-2024 o "In cut-and-paste summarization (Jing and McKeown, 2000), sentence combination operations were implemented manually following the study of a set of professionally written abstracts; however the particular pasting operation presented here was not implemented."
W09-2807 A00-2024 o "Close to the problem studied here is Jing and McKeowns (Jing and McKeown, 2000) cut-and-paste method founded on EndresNiggemeyers observations."
W09-2808 A00-2024 o Jing and McKeown (1999; 2000) found that human summarization can be traced back to six cut-andpaste operations of a text and proposed a revision method consisting of sentence reduction and combination modules with a sentence extraction part.
W09-2808 A00-2024 o Like the work of Jing and McKeown (2000) and Mani et al.
A00-1026 A92-1018 o The SPECIALIST minimal commitment parser relies on the SPECIALIST Lexicon as well as the Xerox stochastic tagger (Cutting et al. 1992).
A00-1031 A92-1018 p "Recent comparisons of approaches that can be trained on corpora (van Halteren et al. , 1998; Volk and Schneider, 1998) have shown that in most cases statistical aproaches (Cutting et al. , 1992; Schmid, 1995; Ratnaparkhi, 1996) yield better results than finite-state, rule-based, or memory-based taggers (Brill, 1993; Daelemans et al. , 1996)."
A94-1008 A92-1018 o "The two systems we use are ENGCG (Karlsson et al. , 1994) and the Xerox Tagger (Cutting et al. , 1992)."
A94-1008 A92-1018 o "2.2 Xerox Tagger The Xerox Tagger 1, XT, (Cutting et al. , 1992) is a statistical tagger made by Doug Cutting, Julian Kupiec, Jan Pedersen and Penelope Sibun in Xerox PARC."
A94-1009 A92-1018 p "One of the most effective taggers based on a pure HMM is that developed at Xerox (Cutting et al. , 1992)."
A94-1009 A92-1018 o "The Xerox experiments (Cutting et al. , 1992) correspond to something between D1 and D2, and between TO and T1, in that there is some initial biasing of the probabilities."
A94-1027 A92-1018 o "All 8,907 articles were tagged by the Xerox Part-ofSpeech Tagger (Cutting et al. , 1992) 4."
A97-1004 A92-1018 o "(Cutting et al. , 1992))."
A97-1014 A92-1018 o "(Cutting et al. , 1992) and (Feldweg, 1995))."
A97-1017 A92-1018 o "For Czech, we created a prototype of the first step of this process -the part-of-speech (POS) tagger -using Rank Xerox tools (Tapanainen, 1995), (Cutting et al. , 1992)."
C00-1004 A92-1018 o "5 Related work Cutting introduced grouping of words into equiva.lence classes based on the set of possible tags to reduce the number of the parameters (Cutting et al. , 1992) . Schmid used tile equivaleuce classes for smoothing."
C08-1026 A92-1018 p "4.1 Complete ambiguity classes Ambiguity classes capture the relevant property we are interested in: words with the same category possibilities are grouped together.4 And ambiguity classes have been shown to be successfully employed, in a variety of ways, to improve POS tagging (e.g., Cutting et al., 1992; Daelemans et al., 1996; Dickinson, 2007; Goldberg et al., 2008; Tseng et al., 2005)."
C94-1027 A92-1018 o "In tabh; 2, the accuracy rate of the Net-Tagger is cOrolLated to that of a trigram l)msed tagger (Kempe, 1993) and a lIidden Markov Model tagger (Cutting et al. , 1992) which were."
C94-1027 A92-1018 o "In this paper, a new part-of-speech tagging method hased on neural networks (Net-Tagger) is presented and its performance is compared to that of a llMM-tagger (Cutting et al. , 1992) and a trigrambased tagger (Kempe, 1993)."
C94-1027 A92-1018 o "The performance of tl,e presented tagger is measured and compared to that of two other taggers (Cutting et al. , 1992; Kempe, 1993)."
C94-1027 A92-1018 o "No documentation of tile construction algorithm of the su\[lix lexicon in (Cutting et al. , 1992) was available."
C96-1036 A92-1018 o "Language models, such as N-gram class models (Brown et al. , 1992) and Ergodic Hidden Markov Models (Kuhn el, al. , 1994) were proposed and used in applications such as syntactic class (POS) tagging for English (Cutting et al. , 1992), clustering and scoring of recognizer sentence hypotheses."
C96-2114 A92-1018 o "The tagger used is thus one that does not need tagged and disambiguated material to be trained on, namely the XPOST originally constructed at Xerox Parc (Cutting et al. 1992, Cutting and Pedersen 1993)."
C96-2136 A92-1018 o "It is used,as tagging mode\[ in English (Church, 1988; Cutting et al. , 1992) and morphological analysis nlodel (word segmentation and tagging) in Japanese (Nagata, 1994)."
C96-2136 A92-1018 o "It is a natural extension of the Viteri>i algorithm (Church, 1<,)88; Cutting et al. , 1992) for those languages that do not have delimiters between words, and it can generate N-best morphological analysis hypotheses, like tree trellis search (Soong and l\[uang, 1991)."
C96-2192 A92-1018 o (DeRose 1988; Cutting et al 1992; Merialdo 1994).
E06-1034 A92-1018 o "5.2 Assigning complex ambiguity tags In the tagging literature (e.g. , Cutting et al (1992)) an ambiguity class is often composed of the set of every possible tag for a word."
E95-1014 A92-1018 o "The corpus lines retained are part-of-speech tagged (Cutting et al. , 1992)."
E95-1014 A92-1018 o "This text was part-of-speech tagged using the Xerox HMM tagger (Cutting et al. , 1992)."
E95-1020 A92-1018 o "No pretagged text is necessary for Hidden Markov Models (Jelinek, 1985; Cutting et al. , 1991; Kupiec, 1992)."
E95-1020 A92-1018 o "We obtained 47,025 50-dimensional reduced vectors from the SVD and clustered them into 200 classes using the fast clustering algorithm Buckshot (Cutting et al. , 1992) (group average agglomeration applied to a sample)."
E95-1021 A92-1018 o "3 The statistical model We use the Xerox part-of-speech tagger (Cutting et al. , 1992), a statistical tagger made at the Xerox Palo Alto Research Center."
E95-1022 A92-1018 o "This corpus-based information typically concerns sequences of 1-3 tags or words (with some well-known exceptions, e.g. Cutting et al. 1992)."
E95-1022 A92-1018 o "157 ena or the linguist's abstraction capabilities (e.g. knowledge about what is relevant in the context), they tend to reach a 95-97% accuracy in the analysis of several languages, in particular English (Marshall 1983; Black et aL 1992; Church 1988; Cutting et al. 1992; de Marcken 1990; DeRose 1988; Hindle 1989; Merialdo 1994; Weischedel et al. 1993; Brill 1992; Samuelsson 1994; Eineborg and Gamb~ick 1994, etc.)."
E99-1018 A92-1018 o "As a common strategy, POS guessers examine the endings of unknown words (Cutting et al. 1992) along with their capitalization, or consider the distribution of unknown words over specific parts-of-speech (Weischedel et aL, 1993)."
E99-1018 A92-1018 o "On the other hand, according to the data-driven approach, a frequency-based language model is acquired from corpora and has the forms of ngrams (Church, 1988; Cutting et al. , 1992), rules (Hindle, 1989; Brill, 1995), decision trees (Cardie, 1994; Daelemans et al. , 1996) or neural networks (Schmid, 1994)."
H05-1052 A92-1018 o "In the absence of an annotated corpus, dependencies can be derived by other means, e.g. part413 of-speech probabilities can be approximated from a raw corpus as in (Cutting et al. , 1992), word-sense dependencies can be derived as definition-based similarities, etc. Label dependencies are set as weights on the arcs drawn between corresponding labels."
I08-3015 A92-1018 o "There are many POS taggers developed using different techniques for many major languages such as transformation-based error-driven learning (Brill, 1995), decision trees (Black et al., 1992), Markov model (Cutting et al., 1992), maximum entropy methods (Ratnaparkhi, 1996) etc for English."
J02-1004 A92-1018 o Our statistical tagging model is modified from the standard bigrams (Cutting et al. 1992) using Viterbi search plus onthe-fly extra computing of lexical probabilities for unknown morphemes.
J02-1004 A92-1018 o "POS disambiguation has usually been performed by statistical approaches, mainly using the hidden Markov model (HMM) in English research communities (Cutting et al. 1992; Kupiec 1992; Weischedel et al. 1993)."
J93-1002 A92-1018 o "The main application of these techniques to written input has been in the robust, lexical tagging of corpora with part-of-speech labels (e.g. Garside, Leech, and Sampson 1987; de Rose 1988; Meteer, Schwartz, and Weischedel 1991; Cutting et al. 1992)."
J94-2001 A92-1018 o "Two main approaches have generally been considered: rule-based (Klein and Simmons 1963; Brodda 1982; Paulussen and Martin 1992; Brill et al. 1990) probabilistic (Bahl and Mercer 1976; Debili 1977; Stolz, Tannenbaum, and Carstensen 1965; Marshall 1983; Leech, Garside, and Atwell 1983; Derouault and Merialdo 1986; DeRose 1988; Church 1989; Beale 1988; Marcken 1990; Merialdo 1991; Cutting et al. 1992)."
J95-2001 A92-1018 o "Stochastic taggers use both contextual and morphological information, and the model parameters are usually defined or updated automatically from tagged texts (Cerf-Danon and E1-Beze 1991; Church 1988; Cutting et al. 1992; Dermatas and Kokkinakis 1988, 1990, 1993, 1994; Garside, Leech, and Sampson 1987; Kupiec 1992; Maltese * Department of Electrical Engineering, Wire Communications Laboratory (WCL), University of Patras, 265 00 Patras, Greece."
J95-2004 A92-1018 o "Unlike stochastic approaches to part-of-speech tagging (Church 1988; Kupiec 1992; Cutting et al. 1992; Merialdo 1990; DeRose 1988; Weischedel et al. 1993), up to now the knowledge found in finite-state taggers has been handcrafted and was not automatically acquired."
J95-2004 A92-1018 o "7 Independently, Cutting et aL (1992) quote a performance of 800 words per second for their part-of-speech tagger based on hidden Markov models."
J95-3004 A92-1018 o "These methods have reported performance in the range of 95-99% ""correct"" by word (DeRose 1988; Cutting et al. 1992; Jelinek, Mercer, and Roukos 1992; Kupiec 1992)."
J95-4004 A92-1018 p "A number of part-of-speech taggers are readily available and widely used, all trained and retrainable on text corpora (Church 1988; Cutting et al. 1992; Brill 1992; Weischedel et al. 1993)."
J95-4004 A92-1018 o "Part-of-speech tagging is an active area of research; a great deal of work has been done in this area over the past few years (e.g. , Jelinek 1985; Church 1988; Derose 1988; Hindle 1989; DeMarcken 1990; Merialdo 1994; Brill 1992; Black et al. 1992; Cutting et al. 1992; Kupiec 1992; Charniak et al. 1993; Weischedel et al. 1993; Schutze and Singer 1994)."
J95-4004 A92-1018 o Almost all recent work in developing automatically trained part-of-speech taggers has been on further exploring Markovmodel based tagging (Jelinek 1985; Church 1988; Derose 1988; DeMarcken 1990; Merialdo 1994; Cutting et al. 1992; Kupiec 1992; Charniak et al. 1993; Weischedel et al. 1993; Schutze and Singer 1994).
J97-3003 A92-1018 o "As the baseline standard, we took the ending-guessing rule set supplied with the Xerox tagger (Cutting et al. 1992)."
J97-3003 A92-1018 o "The Xerox tagger (Cutting et al. 1992) comes with a set of rules that assign an unknown word a set of possible pos-tags (i.e. , POS-class) on the basis of its ending segment."
N01-1023 A92-1018 p "(Cutting et al. , 1992) reported very high results (96% on the Brown corpus) for unsupervised POS tagging using Hidden Markov Models (HMMs) by exploiting hand-built tag dictionaries and equivalence classes."
N06-1042 A92-1018 o "It is also possible to train statistical models using unlabeled data with the expectation maximization algorithm (Cutting et al. , 1992)."
P06-2100 A92-1018 o "For English there are many POS taggers, employing machine learning techniques like transformation-based error-driven learning (Brill, 1995), decision trees (Black et al. , 1992), markov model (Cutting et al. 1992), maximum entropy methods (Ratnaparkhi, 1996) etc. There are also taggers which are hybrid using both stochastic and rule-based approaches, such as CLAWS (Garside and Smith, 1997)."
P07-2056 A92-1018 o "In such cases, additional information may be coded into the HMM model to achieve higher accuracy (Cutting et al. , 1992)."
P07-2056 A92-1018 p "Stochastic models (Cutting et al. , 1992; Dermatas et al. , 1995; Brants, 2000) have been widely used in POS tagging for simplicity and language independence of the models."
P93-1003 A92-1018 o "This situation is very similar to that involved in training HMM text taggers, where joint probabilities are computed that a particular word corresponds to a particular part-ofspeech, and the rest of the words in the sentence are also generated (e.g. \[Cutting et al. , 1992\])."