Titles | Abstracts | Years | Categories |
---|---|---|---|
Syndromic classification of Twitter messages | Recent studies have shown strong correlation between social networking data
and national influenza rates. We expanded upon this success to develop an
automated text mining system that classifies Twitter messages in real time into
six syndromic categories based on key terms from a public health ontology.
10-fold cross validation tests were used to compare Naive Bayes (NB) and
Support Vector Machine (SVM) models on a corpus of 7431 Twitter messages. SVM
performed better than NB on 4 out of 6 syndromes. The best performing
classifiers showed moderately strong F1 scores: respiratory = 86.2 (NB);
gastrointestinal = 85.4 (SVM polynomial kernel degree 2); neurological = 88.6
(SVM polynomial kernel degree 1); rash = 86.0 (SVM polynomial kernel degree 1);
constitutional = 89.3 (SVM polynomial kernel degree 1); hemorrhagic = 89.9
(NB). The resulting classifiers were deployed together with an EARS C2
aberration detection algorithm in an experimental online system.
| 2011 | Computation and Language |
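The model comparison described in the abstract above (Naive Bayes against polynomial-kernel SVMs, scored by F1 under 10-fold cross-validation) can be reproduced in outline with scikit-learn. This is only a minimal sketch: the toy tweets, the bag-of-words features and the binary respiratory/other labels are placeholders, not the authors' corpus or feature set.

```python
# Sketch: compare Naive Bayes and polynomial-kernel SVMs with 10-fold CV,
# scoring by F1, roughly as described in the abstract above. The synthetic
# tweets and bag-of-words features are placeholders, not the authors' setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# 20 "respiratory" and 20 "other" toy messages so that 10-fold CV is possible.
texts = [f"bad cough and fever tonight case {i}" for i in range(20)] + \
        [f"great game at the stadium today {i}" for i in range(20)]
labels = [1] * 20 + [0] * 20          # 1 = respiratory syndrome, 0 = other

models = {
    "NB": MultinomialNB(),
    "SVM poly d=1": SVC(kernel="poly", degree=1),
    "SVM poly d=2": SVC(kernel="poly", degree=2),
}
for name, clf in models.items():
    pipe = make_pipeline(CountVectorizer(), clf)
    f1 = cross_val_score(pipe, texts, labels, cv=10, scoring="f1").mean()
    print(f"{name}: mean F1 = {f1:.3f}")
```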
Positive words carry less information than negative words | We show that the frequency of word use is not only determined by the word
length \cite{Zipf1935} and the average information content
\cite{Piantadosi2011}, but also by its emotional content. We have analyzed
three established lexica of affective word usage in English, German, and
Spanish, to verify that these lexica have a neutral, unbiased, emotional
content. Taking into account the frequency of word usage, we find that words
with a positive emotional content are more frequently used. This lends support
to the Pollyanna hypothesis \cite{Boucher1969} that there should be a positive bias
in human expression. We also find that negative words contain more information
than positive words, as the informativeness of a word increases uniformly with
its valence decrease. Our findings support earlier conjectures about (i) the
relation between word frequency and information content, and (ii) the impact of
positive emotions on communication and social links.
| 2012 | Computation and Language |
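The quantities discussed in the abstract above can be made concrete: taking a word's information content as the negative log of its relative frequency, the claimed effect is a negative correlation between valence and informativeness. The sketch below uses made-up frequencies and valence scores, not the lexica analyzed in the paper.

```python
# Sketch: information content from word frequency, correlated with valence.
# The toy frequencies and valence scores are illustrative only.
import math

# word: (corpus frequency, valence on a -1..+1 scale) -- made-up numbers
lexicon = {
    "love":  (120_000,  0.9),
    "good":  (200_000,  0.7),
    "fine":  (80_000,   0.3),
    "bad":   (60_000,  -0.6),
    "hate":  (30_000,  -0.8),
    "awful": (10_000,  -0.9),
}
total = sum(freq for freq, _ in lexicon.values())

def information_content(freq: int) -> float:
    """Self-information in bits: -log2 of the word's relative frequency."""
    return -math.log2(freq / total)

pairs = [(val, information_content(freq)) for freq, val in lexicon.values()]

# Pearson correlation between valence and informativeness (expected negative).
n = len(pairs)
mx = sum(v for v, _ in pairs) / n
my = sum(i for _, i in pairs) / n
cov = sum((v - mx) * (i - my) for v, i in pairs)
sx = math.sqrt(sum((v - mx) ** 2 for v, _ in pairs))
sy = math.sqrt(sum((i - my) ** 2 for _, i in pairs))
print("correlation(valence, information) =", round(cov / (sx * sy), 3))
```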
Ideogram Based Chinese Sentiment Word Orientation Computation | This paper presents a novel algorithm to compute sentiment orientation of
Chinese sentiment words. The algorithm uses ideograms, which are a distinguishing
feature of the Chinese language. The proposed algorithm can be applied to any
sentiment classification scheme. To compute a word's sentiment orientation
using the proposed algorithm, only the word itself and a precomputed character
ontology are required, rather than a corpus. The influence of three parameters
on the algorithm's performance is analyzed and verified experimentally.
Experiments also show that the proposed algorithm achieves an F-measure of
85.02%, outperforming an existing ideogram-based algorithm.
| 2011 | Computation and Language |
Individual and Domain Adaptation in Sentence Planning for Dialogue | One of the biggest challenges in the development and deployment of spoken
dialogue systems is the design of the spoken language generation module. This
challenge arises from the need for the generator to adapt to many features of
the dialogue domain, user population, and dialogue context. A promising
approach is trainable generation, which uses general-purpose linguistic
knowledge that is automatically adapted to the features of interest, such as
the application domain, individual user, or user group. In this paper we
present and evaluate a trainable sentence planner for providing restaurant
information in the MATCH dialogue system. We show that trainable sentence
planning can produce complex information presentations whose quality is
comparable to the output of a template-based generator tuned to this domain. We
also show that our method easily supports adapting the sentence planner to
individuals, and that the individualized sentence planners generally perform
better than models trained and tested on a population of individuals. Previous
work has documented and utilized individual preferences for content selection,
but to our knowledge, these results provide the first demonstration of
individual preferences for sentence planning operations, affecting the content
order, discourse structure and sentence structure of system responses. Finally,
we evaluate the contribution of different feature sets, and show that, in our
application, n-gram features often do as well as features based on higher-level
linguistic representations.
| 2007 | Computation and Language |
Algebras over a field and semantics for context based reasoning | This paper introduces context algebras and demonstrates their application to
combining logical and vector-based representations of meaning. Other approaches
to this problem attempt to reproduce aspects of logical semantics within new
frameworks. The approach we present here is different: We show how logical
semantics can be embedded within a vector space framework, and use this to
combine distributional semantics, in which the meanings of words are
represented as vectors, with logical semantics, in which the meaning of a
sentence is represented as a logical form.
| 2011 | Computation and Language |
Genetic Algorithm (GA) in Feature Selection for CRF Based Manipuri
Multiword Expression (MWE) Identification | This paper deals with the identification of Multiword Expressions (MWEs) in
Manipuri, a highly agglutinative Indian language listed in the Eighth Schedule
of the Indian Constitution. MWEs play an important role in Natural Language
Processing (NLP) applications such as Machine Translation, Part-of-Speech
tagging, Information Retrieval and Question Answering. Feature selection is an
important factor in the recognition of Manipuri MWEs using Conditional Random
Fields (CRF). The drawback of manually selecting appropriate features for the
CRF motivates the use of a Genetic Algorithm (GA), with which we are able to
find an optimal feature set for the CRF. We ran fifty generations of feature
selection with three-fold cross-validation as the fitness function. This model
achieves a Recall (R) of 64.08%, Precision (P) of 86.84% and F-measure (F) of
73.74%, an improvement over CRF-based Manipuri MWE identification without GA.
| 2011 | Computation and Language |
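The GA-driven feature selection described above can be sketched generically: individuals are binary masks over the candidate feature set, and the fitness of a mask is the cross-validated score of a model trained on the selected features. In the sketch below a logistic regression on random data stands in for the CRF on Manipuri MWE features, and crossover is omitted for brevity; none of it is the authors' actual setup.

```python
# Sketch: genetic-algorithm feature selection with k-fold CV as the fitness
# function, in the spirit of the abstract above. Mutation-only GA for brevity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))            # 12 candidate features (synthetic)
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # only some features are informative

def fitness(mask: np.ndarray) -> float:
    """Mean 3-fold CV accuracy of a model trained on the masked features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=200)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))       # population of masks
for generation in range(50):                           # "fifty generations"
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the fittest half
    children = parents[rng.integers(0, 10, 10)].copy()
    flips = rng.random(children.shape) < 0.1           # mutation
    children[flips] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```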
ESLO: from transcription to speakers' personal information annotation | This paper presents preliminary work to put a French oral corpus and its
transcription online. The corpus is the Socio-Linguistic Survey in Orleans,
carried out in 1968. First, we digitized the corpus, then transcribed it by
hand with the Transcriber software, adding different tags about speakers, time,
noise, etc. Each document (the audio file and the XML file of the
transcription) is described by a set of metadata stored in XML format to allow
easy consultation. Second, we added further levels of annotation: recognition
of named entities and annotation of personal information about speakers. These
two annotation tasks used the CasSys system of transducer cascades. We used and
modified a first cascade to recognize named entities, then built a second
cascade to annotate the designating entities, i.e. information about the
speaker. This second cascade parses the corpus already annotated with named
entities. The objective is to locate information about the speaker and to
determine what kind of information can designate him/her. Both cascades were
evaluated with precision and recall measures.
| 2011 | Computation and Language |
\'Evaluation de lexiques syntaxiques par leur int\'egration dans
l'analyseur syntaxique FRMG | In this paper, we evaluate various French lexica with the parser FRMG: the
Lefff, LGLex, the lexicon built from the tables of the French Lexicon-Grammar,
the lexicon DICOVALENCE and a new version of the verbal entries of the Lefff,
obtained by merging with DICOVALENCE and partial manual validation. For this,
all these lexica were converted to the format of the Lefff, the Alexina format.
The evaluation was carried out on the part of the EASy corpus used in the first
Passage evaluation campaign.
| 2011 | Computation and Language |
Construction du lexique LGLex \`a partir des tables du Lexique-Grammaire
des verbes du grec moderne | In this paper, we summarize the work done on the Modern Greek resources of
the Lexicon-Grammar of verbs. We detail the definitional features of each
table, and all changes made to the names of features to make them consistent.
Through the development of the table of classes, including all the features, we
have carried out the conversion of the tables into a syntactic lexicon: LGLex. The
lexicon, in plain text format or XML, is generated by the LGExtract tool
(Constant & Tolone, 2010). This format is directly usable in applications of
Natural Language Processing (NLP).
| 2011 | Computation and Language |
Extending the adverbial coverage of a NLP oriented resource for French | This paper presents a work on extending the adverbial entries of LGLex: a NLP
oriented syntactic resource for French. Adverbs were extracted from the
Lexicon-Grammar tables of both simple adverbs ending in -ment '-ly' (Molinier
and Levrier, 2000) and compound adverbs (Gross, 1986; 1990). This work relies
on the exploitation of fine-grained linguistic information provided in existing
resources. Various features are encoded in both sets of LG tables but have not
yet been exploited. They describe the relations of deleting, permuting, intensifying
and paraphrasing that associate, on the one hand, the simple and compound
adverbs and, on the other hand, different types of compound adverbs. The
resulting syntactic resource is manually evaluated and freely available under
the LGPL-LR license.
| 2011 | Computation and Language |
Question Answering in a Natural Language Understanding System Based on
Object-Oriented Semantics | Algorithms of question answering in a computer system oriented to the input and
logical processing of text information are presented. The knowledge domain under
consideration is social behavior of a person. A database of the system includes
an internal representation of natural language sentences and supplemental
information. The answer {\it Yes} or {\it No} is formed for a general question.
A special question containing an interrogative word or group of interrogative
words makes it possible to find the subject, object, place, time, cause,
purpose and manner of an action or event. Answer generation is based on
algorithms for identifying persons, organizations, machines, things, places,
and times. The proposed
algorithms of question answering can be realized in information systems closely
connected with text processing (criminology, operation of business, medicine,
document systems).
| 2011 | Computation and Language |
Rule based Part of speech Tagger for Homoeopathy Clinical realm | A tagger is a mandatory component of most text analysis systems, as it
assigns a syntactic class (e.g., noun, verb, adjective, or adverb) to every
word in a sentence. In this paper, we present a simple part-of-speech tagger
for homoeopathy clinical language. The tagger exploits standard patterns for
evaluating sentences; an untagged clinical corpus of 20085 words is used, from
which we selected 125 sentences (2322 tokens). The problem of tagging in
natural language processing is to find a way to tag every word in a text as a
particular part of speech. The basic idea is to apply a set of rules to
clinical sentences and to each word. Accuracy is the leading factor in
evaluating any POS tagger, so the accuracy of the proposed tagger is also
discussed.
| 2011 | Computation and Language |
Exploring Twitter Hashtags | Twitter messages often contain so-called hashtags to denote keywords related
to them. Using a dataset of 29 million messages, I explore relations among
these hashtags with respect to co-occurrences. Furthermore, I present an
attempt to classify hashtags into five intuitive classes, using a
machine-learning approach. The overall outcome is an interactive Web
application to explore Twitter hashtags.
| 2011 | Computation and Language |
Statistical Sign Language Machine Translation: from English written text
to American Sign Language Gloss | This work aims to design a statistical machine translation system from English
text to American Sign Language (ASL). The system is based on the Moses toolkit with some
modifications and the results are synthesized through a 3D avatar for
interpretation. First, we translate the input text to gloss, a written form of
ASL. Second, we pass the output to the WebSign Plug-in to play the sign.
The contributions of this work are the use of a new language pair, English/ASL,
and an improvement of statistical machine translation based on string matching
using the Jaro distance.
| 2011 | Computation and Language |
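The string-matching improvement mentioned in the abstract above relies on the Jaro distance. A small self-contained implementation of Jaro similarity (matching characters within a sliding window, then transpositions) is sketched below; how it is plugged into Moses is not specified in the abstract, so only the measure itself is shown.

```python
# Sketch: Jaro similarity between two strings, the measure referred to in the
# abstract above. Higher values mean more similar (1.0 = identical).
def jaro(s1: str, s2: str) -> float:
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(len(s1), len(s2)) // 2 - 1
    match1 = [False] * len(s1)
    match2 = [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):                      # count matching characters
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    transpositions = 0
    k = 0
    for i in range(len(s1)):                        # count half-transpositions
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    t = transpositions / 2
    m = matches
    return (m / len(s1) + m / len(s2) + (m - t) / m) / 3

print(jaro("MARTHA", "MARHTA"))   # classic example, approx. 0.944
```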
Grammatical Relations of Myanmar Sentences Augmented by
Transformation-Based Learning of Function Tagging | In this paper we describe function tagging for Myanmar using
Transformation-Based Learning (TBL), a method that extends a previous
statistics-based function tagger. Contextual and lexical rules (developed using
TBL) were critical in achieving good results. First, we describe a method for
expressing lexical relations in function tagging that statistical function
taggers are currently unable to express. Function tagging is the preprocessing
step to show grammatical relations of the sentences. Then we use the context
free grammar technique to clarify the grammatical relations in Myanmar
sentences or to output the parse trees. The grammatical relations are the
functional structure of a language. They rely very much on the function tag of
the tokens. We augment the grammatical relations of Myanmar sentences with
transformation-based learning of function tagging.
| 2011 | Computation and Language |
Creating a Live, Public Short Message Service Corpus: The NUS SMS Corpus | Short Message Service (SMS) messages are largely sent directly from one
person to another from their mobile phones. They represent a means of personal
communication that is an important communicative artifact in our current
digital era. As most existing studies have used private access to SMS corpora,
comparative studies using the same raw SMS data have not been possible up to
now. We describe our efforts to collect a public SMS corpus to address this
problem. We use a battery of methodologies to collect the corpus, paying
particular attention to privacy issues to address contributors' concerns. Our
live project collects new SMS message submissions, checks their quality and
adds the valid messages, releasing the resultant corpus as XML and as SQL
dumps, along with corpus statistics, every month. We opportunistically collect
as much metadata about the messages and their sender as possible, so as to
enable different types of analyses. To date, we have collected about 60,000
messages, focusing on English and Mandarin Chinese.
| 2012 | Computation and Language |
Visualization and Analysis of Frames in Collections of Messages: Content
Analysis and the Measurement of Meaning | A step-to-step introduction is provided on how to generate a semantic map
from a collection of messages (full texts, paragraphs or statements) using
freely available software and/or SPSS for the relevant statistics and the
visualization. The techniques are discussed in the various theoretical contexts
of (i) linguistics (e.g., Latent Semantic Analysis), (ii) sociocybernetics and
social systems theory (e.g., the communication of meaning), and (iii)
communication studies (e.g., framing and agenda-setting). We distinguish
between the communication of information in the network space (social network
analysis) and the communication of meaning in the vector space. The vector
space can be considered as an architecture generated by the network of
relations in the network space; words are then not only related, but also
positioned. These positions are expected rather than observed and therefore one
can communicate meaning. Knowledge can be generated when these meanings can
recursively be communicated and therefore also further codified.
| 2012 | Computation and Language |
Proof nets for the Lambek-Grishin calculus | Grishin's generalization of Lambek's Syntactic Calculus combines a
non-commutative multiplicative conjunction and its residuals (product, left and
right division) with a dual family: multiplicative disjunction, right and left
difference. Interaction between these two families takes the form of linear
distributivity principles. We study proof nets for the Lambek-Grishin calculus
and the correspondence between these nets and unfocused and focused versions of
its sequent calculus.
| 2011 | Computation and Language |
Formalization of semantic network of image constructions in electronic
content | A formal theory based on a binary operator of directional associative
relation is constructed in the article and an understanding of an associative
normal form of image constructions is introduced. A model of a commutative
semigroup, which provides a presentation of a sentence as three components of
an interrogative linguistic image construction, is considered.
| 2012 | Computation and Language |
Toward a Motor Theory of Sign Language Perception | Research on signed languages still strongly dissociates linguistic issues
related to phonological and phonetic aspects from gesture studies for
recognition and synthesis purposes. This paper focuses on the imbrication of
motion and meaning for the analysis, synthesis and evaluation of sign language
gestures. We discuss the relevance and interest of a motor theory of perception
in sign language communication. According to this theory, we consider that
linguistic knowledge is mapped onto sensory-motor processes, and propose a
methodology based on the principle of a synthesis-by-analysis approach, guided
by an evaluation process that aims to validate some hypotheses and concepts of
this theory. Examples from existing studies illustrate the different concepts
and provide avenues for future work.
| 2012 | Computation and Language |
Recognizing Bangla Grammar using Predictive Parser | We describe a Context Free Grammar (CFG) for the Bangla language and propose a
Bangla parser based on this grammar. Our approach is general enough to apply to
Bangla sentences, and the method is well established for parsing a language
from its grammar. The proposed parser is a predictive parser, and we
construct the parse table for recognizing Bangla grammar. Using the parse table
we recognize syntactical mistakes of Bangla sentences when there is no entry
for a terminal in the parse table. If a natural language can be successfully
parsed then grammar checking from this language becomes possible. The proposed
scheme is based on Top down parsing method and we have avoided the left
recursion of the CFG using the idea of left factoring.
| 2012 | Computation and Language |
Du TAL au TIL | Historically two types of NLP have been investigated: fully automated
processing of language by machines (NLP) and autonomous processing of natural
language by people, i.e. the human brain (psycholinguistics). We believe that
there is room and need for another kind, INLP: interactive natural language
processing. This intermediate approach starts from people's needs, trying to
bridge the gap between their actual knowledge and a given goal. Given the fact
that people's knowledge is variable and often incomplete, the aim is to build
bridges linking a given knowledge state to a given goal. We present some
examples, trying to show that this goal is worth pursuing, achievable and at a
reasonable cost.
| 2012 | Computation and Language |
Wikipedia Arborification and Stratified Explicit Semantic Analysis | [This is the translation of paper "Arborification de Wikip\'edia et analyse
s\'emantique explicite stratifi\'ee" submitted to TALN 2012.]
We present an extension of the Explicit Semantic Analysis method by
Gabrilovich and Markovitch. Using their semantic relatedness measure, we weight
the Wikipedia categories graph. Then, we extract a minimal spanning tree, using
Chu-Liu & Edmonds' algorithm. We define a notion of stratified tfidf where the
strata, for a given Wikipedia page and a given term, are the classical tfidf
and categorical tfidfs of the term in the ancestor categories of the page
(ancestors in the sense of the minimal spanning tree). Our method is based on
this stratified tfidf, which adds extra weight to terms that "survive" when
climbing up the category tree. We evaluate our method by a text classification
on the WikiNews corpus: it increases precision by 18%. Finally, we provide
hints for future research.
| 2012 | Computation and Language |
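One plausible reading of the stratified tfidf idea in the abstract above is that a term's score for a page combines its ordinary tfidf with its categorical tfidf in each ancestor category along the spanning tree, so terms that keep appearing while climbing the category tree accumulate weight. The exact combination rule is defined in the paper, not in the abstract; the sketch below simply sums the strata on toy data.

```python
# Sketch: an assumed "sum of strata" reading of stratified tfidf, with a toy
# page and two ancestor-category pseudo-documents standing in for Wikipedia.
from collections import Counter
import math

page = "syntax parser grammar parser".split()
ancestors = ["syntax grammar linguistics".split(),      # direct category
             "linguistics language science".split()]    # its parent, etc.
corpus = [page] + ancestors

def tfidf(term: str, doc: list[str]) -> float:
    tf = Counter(doc)[term] / len(doc)
    df = sum(term in d for d in corpus)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

def stratified_tfidf(term: str) -> float:
    strata = [tfidf(term, page)] + [tfidf(term, cat) for cat in ancestors]
    return sum(strata)        # assumed combination: plain sum of the strata

for term in ("parser", "grammar", "linguistics"):
    print(term, round(stratified_tfidf(term), 3))
```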
Inference and Plausible Reasoning in a Natural Language Understanding
System Based on Object-Oriented Semantics | Algorithms of inference in a computer system oriented to input and semantic
processing of text information are presented. Such inference is necessary for
logical questions when the direct comparison of objects from a question and
database cannot give a result. The following classes of problems are
considered: a check of hypotheses for persons and non-typical actions, the
determination of persons and circumstances for non-typical actions, planning
actions, the determination of event cause and state of persons. To form an
answer, both deduction and plausible reasoning are used. As the knowledge domain
under consideration is the social behavior of persons, plausible reasoning is based
on laws of social psychology. Proposed algorithms of inference and plausible
reasoning can be realized in computer systems closely connected with text
processing (criminology, operation of business, medicine, document systems).
| 2012 | Computation and Language |
Considering a resource-light approach to learning verb valencies | Here we describe work on learning the subcategories of verbs in a
morphologically rich language using only minimal linguistic resources. Our goal
is to learn verb subcategorizations for Quechua, an under-resourced
morphologically rich language, from an unannotated corpus. We compare results
from applying this approach to an unannotated Arabic corpus with those achieved
by processing the same text in treebank form. The original plan was to use only
a morphological analyzer and an unannotated corpus, but experiments suggest
that this approach by itself will not be effective for learning the
combinatorial potential of Arabic verbs in general. The lower bound on
resources for acquiring this information is somewhat higher, apparently
requiring a part-of-speech tagger and chunker for most languages, and a
morphological disambiguator for Arabic.
| 2012 | Computation and Language |
Beyond Sentiment: The Manifold of Human Emotions | Sentiment analysis predicts the presence of positive or negative emotions in
a text document. In this paper we consider higher dimensional extensions of the
sentiment concept, which represent a richer set of human emotions. Our approach
goes beyond previous work in that our model contains a continuous manifold
rather than a finite set of human emotions. We investigate the resulting model,
compare it to psychological observations, and explore its predictive
capabilities. Besides obtaining significant improvements over a baseline
without manifold, we are also able to visualize different notions of positive
sentiment in different domains.
| 2013 | Computation and Language |
Realisation d'un systeme de reconnaissance automatique de la parole
arabe base sur CMU Sphinx | This paper presents the continuation of the work by Satori et al. [SCH07]
through the realization of an automatic speech recognition (ASR) system for the
Arabic language based on the SPHINX 4 system. The previous work was limited to
the recognition of the first ten digits, whereas the present work extends this
to continuous Arabic speech recognition, with a recognition rate of around
96%.
| 2010 | Computation and Language |
A Lexical Analysis Tool with Ambiguity Support | Lexical ambiguities naturally arise in languages. We present Lamb, a lexical
analyzer that produces a lexical analysis graph describing all the possible
sequences of tokens that can be found within the input string. Parsers can
process such lexical analysis graphs and discard any sequence of tokens that
does not produce a valid syntactic sentence, therefore performing, together
with Lamb, a context-sensitive lexical analysis in lexically-ambiguous language
specifications.
| 2012 | Computation and Language |
The Horse Raced Past: Gardenpath Processing in Dynamical Systems | I pinpoint an interesting similarity between a recent account of rational
parsing and the treatment of sequential decision problems in a dynamical
systems approach. I argue that expectation-driven search heuristics aiming at
fast computation resemble a high-risk decision strategy in favor of large
transition velocities. Hale's rational parser, combining generalized
left-corner parsing with informed $\mathrm{A}^*$ search to resolve processing
conflicts, explains gardenpath effects in natural sentence processing by
misleading estimates of future processing costs that are to be minimized. On
the other hand, minimizing the duration of cognitive computations in
time-continuous dynamical systems can be described by combining vector space
representations of cognitive states by means of filler/role decompositions and
subsequent tensor product representations with the paradigm of stable
heteroclinic sequences. Maximizing transition velocities according to a
high-risk decision strategy could account for a fast race even between states
that are apparently remote in representation space.
| 2012 | Computation and Language |
Modelling Social Structures and Hierarchies in Language Evolution | Language evolution might have preferred certain prior social configurations
over others. Experiments conducted with models of different social structures
(varying subgroup interactions and the role of a dominant interlocutor) suggest
that having isolated agent groups, rather than an interconnected agent
population, is more advantageous for the emergence of a social communication
system. Distinctive groups that are closely connected by communication yield
systems less like natural language than fully isolated groups inhabiting the
same world. Furthermore, the addition of a dominant male who is asymmetrically
favoured as a hearer and equally likely to be a speaker, has no positive
influence on the disjoint groups.
| 2011 | Computation and Language |
Establishing linguistic conventions in task-oriented primeval dialogue | In this paper, we claim that language is likely to have emerged as a
mechanism for coordinating the solution of complex tasks. To confirm this
thesis, computer simulations are performed based on the coordination task
presented by Garrod & Anderson (1987). The role of success in task-oriented
dialogue is analytically evaluated with the help of performance measurements
and a thorough lexical analysis of the emergent communication system.
Simulation results confirm a strong effect of success mattering on both
reliability and dispersion of linguistic conventions.
| 2011 | Computation and Language |
Statistical Function Tagging and Grammatical Relations of Myanmar
Sentences | This paper describes context free grammar (CFG) based grammatical relations
for Myanmar sentences, combined with a corpus-based function tagging system. Part
of the challenge of statistical function tagging for Myanmar sentences comes
from the fact that Myanmar has free-phrase-order and a complex morphological
system. Function tagging is a pre-processing step to show grammatical relations
of Myanmar sentences. In the task of function tagging, which tags the function
of Myanmar sentences with correct segmentation, POS (part-of-speech) tagging
and chunking information, we use Naive Bayesian theory to disambiguate the
possible function tags of a word. We apply context free grammar (CFG) to find
out the grammatical relations of the function tags. We also create a functional
annotated tagged corpus for Myanmar and propose the grammar rules for Myanmar
sentences. Experiments show that our analysis achieves a good result with
simple sentences and complex sentences.
| 2012 | Computation and Language |
Distributional Measures of Semantic Distance: A Survey | The ability to mimic human notions of semantic distance has widespread
applications. Some measures rely only on raw text (distributional measures) and
some rely on knowledge sources such as WordNet. Although extensive studies have
been performed to compare WordNet-based measures with human judgment, the use
of distributional measures as proxies to estimate semantic distance has
received little attention. Even though they have traditionally performed poorly
when compared to WordNet-based measures, they lay claim to certain uniquely
attractive features, such as their applicability in resource-poor languages and
their ability to mimic both semantic similarity and semantic relatedness.
Therefore, this paper presents a detailed study of distributional measures.
Particular attention is paid to flesh out the strengths and limitations of both
WordNet-based and distributional measures, and how distributional measures of
distance can be brought more in line with human notions of semantic distance.
We conclude with a brief discussion of recent work on hybrid measures.
| 2012 | Computation and Language |
Distributional Measures as Proxies for Semantic Relatedness | The automatic ranking of word pairs as per their semantic relatedness and
ability to mimic human notions of semantic relatedness has widespread
applications. Measures that rely on raw data (distributional measures) and
those that use knowledge-rich ontologies both exist. Although extensive studies
have been performed to compare ontological measures with human judgment, the
distributional measures have primarily been evaluated by indirect means. This
paper is a detailed study of some of the major distributional measures; it
lists their respective merits and limitations. New measures that overcome these
drawbacks, that are more in line with the human notions of semantic
relatedness, are suggested. The paper concludes with an exhaustive comparison
of the distributional and ontology-based measures. Along the way, significant
research problems are identified. Work on these problems may lead to a better
understanding of how semantic relatedness is to be measured.
| 2012 | Computation and Language |
Categories of Emotion names in Web retrieved texts | The categorization of emotion names, i.e., the grouping of emotion words that
have similar emotional connotations together, is a key tool of Social
Psychology used to explore people's knowledge about emotions. Without
exception, the studies following that research line were based on the gauging
of the perceived similarity between emotion names by the participants of the
experiments. Here we propose and examine a new approach to study the categories
of emotion names - the similarities between target emotion names are obtained
by comparing the contexts in which they appear in texts retrieved from the
World Wide Web. This comparison does not account for any explicit semantic
information; it simply counts the number of common words or lexical items used
in the contexts. This procedure allows us to write the entries of the
similarity matrix as dot products in a linear vector space of contexts. The
properties of this matrix were then explored using Multidimensional Scaling
Analysis and Hierarchical Clustering. Our main findings, namely, the underlying
dimension of the emotion space and the categories of emotion names, were
consistent with those based on people's judgments of emotion names
similarities.
| 2012 | Computation and Language |
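The pipeline sketched in the abstract above — context vectors for each emotion name built from co-occurring words, a similarity matrix of dot products, then Multidimensional Scaling and Hierarchical Clustering — maps directly onto standard tooling. The snippet below is a toy illustration with invented context snippets rather than Web-retrieved text, and its crude similarity-to-distance conversion is an assumption, not the paper's procedure.

```python
# Sketch: dot-product similarities between emotion-name context vectors,
# followed by MDS and hierarchical clustering, mirroring the abstract above.
# The "retrieved contexts" below are invented stand-ins for Web text.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

contexts = {   # hypothetical contexts in which each emotion name was found
    "joy":     "smile laugh party friends sunshine gift",
    "delight": "smile gift surprise party laugh",
    "anger":   "shout fight argument slam rage",
    "fury":    "rage fight shout slam storm",
    "sadness": "tears loss alone rain grief",
}
words = list(contexts)
X = CountVectorizer().fit_transform(list(contexts.values())).toarray().astype(float)

sim = X @ X.T                          # dot products in the space of contexts
dist = np.max(sim) - sim               # crude conversion to a dissimilarity
np.fill_diagonal(dist, 0.0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
clusters = fcluster(linkage(coords, method="average"), t=2, criterion="maxclust")
for w, c in zip(words, clusters):
    print(w, "-> cluster", c)
```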
A Cross-cultural Corpus of Annotated Verbal and Nonverbal Behaviors in
Receptionist Encounters | We present the first annotated corpus of nonverbal behaviors in receptionist
interactions, and the first nonverbal corpus (excluding the original video and
audio data) of service encounters freely available online. Native speakers of
American English and Arabic participated in a naturalistic role play at
reception desks of university buildings in Doha, Qatar and Pittsburgh, USA.
Their manually annotated nonverbal behaviors include gaze direction, hand and
head gestures, torso positions, and facial expressions. We discuss possible
uses of the corpus and envision it to become a useful tool for the human-robot
interaction community.
| 2012 | Computation and Language |
Fault detection system for Arabic language | The study of natural language, especially Arabic, and of mechanisms for
implementing its automatic processing is a fascinating field of study, with
various potential applications. The importance of natural language processing
tools arises from the need for applications that can effectively handle the
vast mass of information available nowadays in electronic form. Among these
tools, driven mainly by the need for fast writing at the pace of daily life,
our interest is in writing checkers. The morphological and syntactic properties
of Arabic make it a difficult language to master, and they explain the lack of
processing tools for that language. Among these properties, we can mention: the
complex structure of the Arabic word, its agglutinative nature, the lack of
vocalization, the segmentation of the text, linguistic richness, etc.
| 2013 | Computation and Language |
Toward an example-based machine translation from written text to ASL
using virtual agent animation | Modern computational linguistic software cannot produce important aspects of
sign language translation. From previous research we deduce that the majority
of automatic sign language translation systems ignore many aspects when they
generate animation; as a result, the interpretation loses part of the true
meaning of the information. Our goals are: to translate written text from any
language to ASL animation; to model as much raw information as possible using
machine learning and computational techniques; and to produce a more adapted
and expressive form for natural-looking and understandable ASL animations. Our
methods include linguistic annotation of the initial text and semantic
orientation to generate facial expressions. We use genetic algorithms coupled
to learning/recognition systems to produce the most natural form. To detect
emotion, we rely on fuzzy logic to produce the degree of interpolation between
facial expressions. In short, we present a new expressive language, Text
Adapted Sign Modeling Language (TASML), that describes the main aspects of a
natural sign language interpretation. This paper is organized as follows: the
next section presents the effect on comprehension of using the Space/Time/SVO
form in ASL animation, based on experimentation. In section 3, we describe our
technical considerations. We present the general approach we adopted to develop
our tool in section 4. Finally, we give some perspectives and future work.
| 2012 | Computation and Language |
An Accurate Arabic Root-Based Lemmatizer for Information Retrieval
Purposes | In spite of its robust syntax, semantic cohesion, and lower ambiguity,
lemma-level analysis and generation has not yet been a focus of the Arabic NLP
literature. In the current research, we propose the first non-statistical
accurate Arabic lemmatizer algorithm that is suitable for information retrieval
(IR) systems. The proposed lemmatizer makes use of different Arabic language
knowledge resources to generate an accurate lemma form and its relevant
features that support IR purposes. Used as a POS tagger, the proposed algorithm
achieves a maximum experimental accuracy of 94.8%. For first-seen documents, an
accuracy of 89.15% is achieved, compared to 76.7% for the up-to-date accurate
Stanford Arabic model on the same dataset.
| 2012 | Computation and Language |
SignsWorld; Deeping Into the Silence World and Hearing Its Signs (State
of the Art) | Automatic speech processing systems are employed more and more often in real
environments. Although the underlying speech technology is mostly language
independent, differences between languages with respect to their structure and
grammar have a substantial effect on recognition system performance. In this
paper, we present a review of the latest developments in the sign language
recognition research in general and in the Arabic sign language (ArSL) in
specific. This paper also presents a general framework for improving the deaf
community communication with the hearing people that is called SignsWorld. The
overall goal of the SignsWorld project is to develop a vision-based technology
for recognizing and translating continuous Arabic sign language ArSL.
| 2012 | Computation and Language |
Arabic Keyphrase Extraction using Linguistic knowledge and Machine
Learning Techniques | In this paper, a supervised learning technique for extracting keyphrases of
Arabic documents is presented. The extractor is supplied with linguistic
knowledge to enhance its efficiency instead of relying only on statistical
information such as term frequency and distance. During analysis, an annotated
Arabic corpus is used to extract the required lexical features of the document
words. The knowledge also includes syntactic rules based on part of speech tags
and allowed word sequences to extract the candidate keyphrases. In this work,
the abstract form of Arabic words is used instead of the stem form to represent
the candidate terms. The abstract form hides most of the inflections found in
Arabic words. The paper introduces new keyphrase features based on
linguistic knowledge, to capture titles and subtitles of a document. A simple
ANOVA test is used to evaluate the validity of selected features. Then, the
learning model is built using the LDA - Linear Discriminant Analysis - and
training documents. Although the presented system is trained using documents
in the IT domain, experiments carried out show that it performs significantly
better than existing Arabic extractor systems, with precision and recall values
reaching double their corresponding values in the other systems, especially for
lengthy and non-scientific articles.
| 2012 | Computation and Language |
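The two statistical steps named in the abstract above — a one-way ANOVA test to check that a candidate feature separates keyphrases from non-keyphrases, and a Linear Discriminant Analysis model trained on the retained features — look roughly like the following. Synthetic feature vectors stand in for the annotated Arabic documents; nothing here is the authors' actual feature set.

```python
# Sketch: ANOVA screening of candidate features followed by an LDA classifier,
# the two statistical steps named in the abstract above. Feature values are
# synthetic stand-ins for the Arabic keyphrase features.
import numpy as np
from scipy.stats import f_oneway
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, n)                        # 1 = keyphrase, 0 = not
X = np.column_stack([
    y + rng.normal(0, 0.8, n),                   # informative feature
    rng.normal(0, 1.0, n),                       # uninformative feature
    2 * y + rng.normal(0, 1.0, n),               # informative feature
])

kept = []
for j in range(X.shape[1]):
    stat, p = f_oneway(X[y == 1, j], X[y == 0, j])   # one-way ANOVA per feature
    if p < 0.05:
        kept.append(j)
print("features kept by ANOVA:", kept)

lda = LinearDiscriminantAnalysis().fit(X[:, kept], y)
print("training accuracy:", round(lda.score(X[:, kept], y), 3))
```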
Reduplicated MWE (RMWE) helps in improving the CRF based Manipuri POS
Tagger | This paper gives a detailed overview of the modified feature selection in
CRF (Conditional Random Field) based Manipuri POS (Part of Speech) tagging.
Feature selection is so important in CRF that the better the features, the
better the outputs. This work is an attempt to make the previous work more
efficient. Multiple new features are tried for running the CRF, and the
Reduplicated Multiword Expression (RMWE) is then tried as an additional
feature. The CRF is run with RMWE because Manipuri is rich in RMWEs, and their
identification becomes necessary to improve the results of POS tagging. The new
CRF system shows a Recall of 78.22%, Precision of 73.15% and F-measure of
75.60%. Identifying RMWEs and using them as a feature improves this to a Recall
of 80.20%, Precision of 74.31% and F-measure of 77.14%.
| 2012 | Computation and Language |
Analysing Temporally Annotated Corpora with CAVaT | We present CAVaT, a tool that performs Corpus Analysis and Validation for
TimeML. CAVaT is an open source, modular checking utility for statistical
analysis of features specific to temporally-annotated natural language corpora.
It provides reporting, highlights salient links between a variety of general
and time-specific linguistic features, and also validates a temporal annotation
to ensure that it is logically consistent and sufficiently annotated. Uniquely,
CAVaT provides analysis specific to TimeML-annotated temporal information.
TimeML is a standard for annotating temporal information in natural language
text. In this paper, we present the reporting part of CAVaT, and then its
error-checking ability, including the workings of several novel TimeML document
verification methods. This is followed by the execution of some example tasks
using the tool to show relations between times, events, signals and links. We
also demonstrate inconsistencies in a TimeML corpus (TimeBank) that have been
detected with CAVaT.
| 2010 | Computation and Language |
Using Signals to Improve Automatic Classification of Temporal Relations | Temporal information conveyed by language describes how the world around us
changes through time. Events, durations and times are all temporal elements
that can be viewed as intervals. These intervals are sometimes temporally
related in text. Automatically determining the nature of such relations is a
complex and unsolved problem. Some words can act as "signals" which suggest a
temporal ordering between intervals. In this paper, we use these signal words
to improve the accuracy of a recent approach to classification of temporal
links.
| 2012 | Computation and Language |
USFD2: Annotating Temporal Expressions and TLINKs for TempEval-2 | We describe the University of Sheffield system used in the TempEval-2
challenge, USFD2. The challenge requires the automatic identification of
temporal entities and relations in text. USFD2 identifies and anchors temporal
expressions, and also attempts two of the four temporal relation assignment
tasks. A rule-based system picks out and anchors temporal expressions, and a
maximum entropy classifier assigns temporal link labels, based on features that
include descriptions of associated temporal signal words. USFD2 identified
temporal expressions successfully, and correctly classified their type in 90%
of cases. Determining the relation between an event and time expression in the
same sentence was performed at 63% accuracy, the second highest score in this
part of the challenge.
| 2010 | Computation and Language |
An Annotation Scheme for Reichenbach's Verbal Tense Structure | In this paper we present RTMML, a markup language for the tenses of verbs and
temporal relations between verbs. There is a richness to tense in language that
is not fully captured by existing temporal annotation schemata. Following
Reichenbach we present an analysis of tense in terms of abstract time points,
with the aim of supporting automated processing of tense and temporal relations
in language. This allows for precise reasoning about tense in documents, and
the deduction of temporal relations between the times and verbal events in a
discourse. We define the syntax of RTMML, and demonstrate the markup in a range
of situations.
| 2011 | Computation and Language |
A Corpus-based Study of Temporal Signals | Automatic temporal ordering of events described in discourse has been of
great interest in recent years. Event orderings are conveyed in text via
various linguistic mechanisms including the use of expressions such as "before",
"after" or "during" that explicitly assert a temporal relation -- temporal
signals. In this paper, we investigate the role of temporal signals in temporal
relation extraction and provide a quantitative analysis of these expressions
in the TimeBank annotated corpus.
| 2011 | Computation and Language |
USFD at KBP 2011: Entity Linking, Slot Filling and Temporal Bounding | This paper describes the University of Sheffield's entry in the 2011 TAC KBP
entity linking and slot filling tasks. We chose to participate in the
monolingual entity linking task, the monolingual slot filling task and the
temporal slot filling tasks. We set out to build a framework for
experimentation with knowledge base population. This framework was created, and
applied to multiple KBP tasks. We demonstrated that our proposed framework is
effective and suitable for collaborative development efforts, as well as useful
in a teaching environment. Finally, we present results that, while very modest,
represent an improvement of an order of magnitude over our 2010 attempt.
| 2012 | Computation and Language |
Massively Increasing TIMEX3 Resources: A Transduction Approach | Automatic annotation of temporal expressions is a research challenge of great
interest in the field of information extraction. Gold standard
temporally-annotated resources are limited in size, which makes research using
them difficult. Standards have also evolved over the past decade, so not all
temporally annotated data is in the same format. We vastly increase available
human-annotated temporal expression resources by converting older format
resources to TimeML/TIMEX3. This task is difficult due to differing annotation
methods. We present a robust conversion tool and a new, large temporal
expression resource. Using this, we evaluate our conversion process by using it
as training data for an existing TimeML annotation tool, achieving a 0.87 F1
measure -- better than any system in the TempEval-2 timex recognition exercise.
| 2012 | Computation and Language |
A Data Driven Approach to Query Expansion in Question Answering | Automated answering of natural language questions is an interesting and
useful problem to solve. Question answering (QA) systems often perform
information retrieval at an initial stage. Information retrieval (IR)
performance, provided by engines such as Lucene, places a bound on overall
system performance. For example, no answer bearing documents are retrieved at
low ranks for almost 40% of questions.
In this paper, answer texts from previous QA evaluations held as part of the
Text REtrieval Conferences (TREC) are paired with queries and analysed in an
attempt to identify performance-enhancing words. These words are then used to
evaluate the performance of a query expansion method.
Data driven extension words were found to help in over 70% of difficult
questions. These words can be used to improve and evaluate query expansion
methods. Simple blind relevance feedback (RF) was correctly predicted as
unlikely to help overall performance, and a possible explanation is provided
for its low value in IR for QA.
| 2008 | Computation and Language |
Post-Editing Error Correction Algorithm for Speech Recognition using
Bing Spelling Suggestion | ASR, short for Automatic Speech Recognition, is the process of converting
spoken speech into text that can be manipulated by a computer. Although ASR has
several applications, it is still erroneous and imprecise, especially if used
in a harsh environment where the input speech is of low quality. This paper
proposes a post-editing ASR error correction method and algorithm based on
Bing's online spelling suggestion. In this approach, the ASR recognized output
text is spell-checked using Bing's spelling suggestion technology to detect and
correct misrecognized words. More specifically, the proposed algorithm breaks
down the ASR output text into several word-tokens that are submitted as search
queries to Bing search engine. A returned spelling suggestion implies that a
query is misspelled; and thus it is replaced by the suggested correction;
otherwise, no correction is performed and the algorithm continues with the next
token until all tokens get validated. Experiments carried out on various
speeches in different languages indicated a successful decrease in the number
of ASR errors and an improvement in the overall error correction rate. Future
research can improve upon the proposed algorithm so much so that it can be
parallelized to take advantage of multiprocessor computers.
| 2012 | Computation and Language |
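The core loop of the post-editing method above — tokenize the ASR output, query each token for a spelling suggestion, and substitute the suggestion when one is returned — can be written independently of the particular suggestion service. `get_spelling_suggestion` below is a hypothetical stand-in for the Bing query step, which the abstract describes but whose API details it does not give; a tiny local lookup table keeps the example runnable.

```python
# Sketch of the post-editing loop described above: each token of the ASR
# output is checked against a spelling-suggestion service and replaced when a
# suggestion comes back. get_spelling_suggestion() is a hypothetical stand-in
# for the Bing query, backed here by a toy dictionary.
from typing import Optional

_TOY_SUGGESTIONS = {"wether": "weather", "recieve": "receive"}  # stand-in data

def get_spelling_suggestion(token: str) -> Optional[str]:
    """Return a suggested correction, or None if the token looks fine.
    In the paper's setting this would query Bing's spelling suggestion
    service; the lookup table above is only a placeholder."""
    return _TOY_SUGGESTIONS.get(token.lower())

def post_edit(asr_output: str) -> str:
    corrected = []
    for token in asr_output.split():            # token-by-token validation
        suggestion = get_spelling_suggestion(token)
        corrected.append(suggestion if suggestion else token)
    return " ".join(corrected)

print(post_edit("i did not recieve the wether report"))
# -> "i did not receive the weather report"
```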
ASR Context-Sensitive Error Correction Based on Microsoft N-Gram Dataset | At the present time, computers are employed to solve complex tasks and
problems ranging from simple calculations to intensive digital image processing
and intricate algorithmic optimization problems to computationally-demanding
weather forecasting problems. ASR, short for Automatic Speech Recognition, is yet
another type of computational problem whose purpose is to recognize human
spoken speech and convert it into text that can be processed by a computer.
Although ASR has many versatile and pervasive real-world applications, it is
still relatively erroneous and not perfectly solved as it is prone to produce
spelling errors in the recognized text, especially if the ASR system is
operating in a noisy environment, its vocabulary size is limited, and its input
speech is of bad or low quality. This paper proposes a post-editing ASR error
correction method based on the Microsoft N-Gram dataset for detecting and correcting
spelling errors generated by ASR systems. The proposed method comprises an
error detection algorithm for detecting word errors; a candidate corrections
generation algorithm for generating correction suggestions for the detected
word errors; and a context-sensitive error correction algorithm for selecting
the best candidate for correction. The virtue of using the Microsoft N-Gram
dataset is that it contains real-world data and word sequences extracted from
the web, which can mimic a comprehensive dictionary of words having a large and
all-inclusive vocabulary. Experiments conducted on numerous speeches, performed
by different speakers, showed a remarkable reduction in ASR errors. Future
research can improve upon the proposed algorithm so much so that it can be
parallelized to take advantage of multiprocessor and distributed systems.
| 2012 | Computation and Language |
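The three algorithms named in the abstract above can be sketched as one small pipeline: flag a word as an error when it is unknown, generate candidates within edit distance one, and pick the candidate whose surrounding n-gram is most frequent. The trigram counts below are a toy stand-in for the Microsoft N-Gram dataset, and the exact scoring rule is an assumption, not the paper's.

```python
# Sketch: detection, candidate generation and context-sensitive selection in
# the spirit of the abstract above. Toy vocabulary and trigram counts stand in
# for the Microsoft N-Gram dataset.
import string

VOCAB = {"the", "weather", "whether", "is", "cold", "report"}
TRIGRAM_COUNTS = {("the", "weather", "is"): 900,       # toy counts
                  ("the", "whether", "is"): 15}

def edits1(word: str) -> set[str]:
    """All strings at edit distance one (deletions, substitutions, insertions)."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {L + R[1:] for L, R in splits if R}
    subs = {L + c + R[1:] for L, R in splits if R for c in letters}
    inserts = {L + c + R for L, R in splits for c in letters}
    return deletes | subs | inserts

def correct(tokens: list[str]) -> list[str]:
    out = list(tokens)
    for i, w in enumerate(tokens):
        if w in VOCAB:                              # detection: unknown word?
            continue
        candidates = edits1(w) & VOCAB              # candidate generation
        if not candidates:
            continue
        prev = out[i - 1] if i > 0 else "<s>"
        nxt = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
        # context-sensitive selection: prefer the candidate whose trigram with
        # the neighbouring words is most frequent in the (toy) n-gram table.
        out[i] = max(candidates,
                     key=lambda c: TRIGRAM_COUNTS.get((prev, c, nxt), 0))
    return out

print(correct("the wather is cold".split()))  # -> ['the', 'weather', 'is', 'cold']
```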
Exploring Text Virality in Social Networks | This paper aims to shed some light on the concept of virality - especially in
social networks - and to provide new insights on its structure. We argue that:
(a) virality is a phenomenon strictly connected to the nature of the content
being spread, rather than to the influencers who spread it, (b) virality is a
phenomenon with many facets, i.e. under this generic term several different
effects of persuasive communication are comprised and they only partially
overlap. To give ground to our claims, we provide initial experiments in a
machine learning framework to show how various aspects of virality can be
independently predicted according to content features.
| 2011 | Computation and Language |
Tree Transducers, Machine Translation, and Cross-Language Divergences | Tree transducers are formal automata that transform trees into other trees.
Many varieties of tree transducers have been explored in the automata theory
literature, and more recently, in the machine translation literature. In this
paper I review T and xT transducers, situate them among related formalisms, and
show how they can be used to implement rules for machine translation systems
that cover all of the cross-language structural divergences described in Bonnie
Dorr's influential article on the topic. I also present an implementation of xT
transduction, suitable and convenient for experimenting with translation rules.
| 2012 | Computation and Language |
You had me at hello: How phrasing affects memorability | Understanding the ways in which information achieves widespread public
awareness is a research question of significant interest. We consider whether,
and how, the way in which the information is phrased --- the choice of words
and sentence structure --- can affect this process. To this end, we develop an
analysis framework and build a corpus of movie quotes, annotated with
memorability information, in which we are able to control for both the speaker
and the setting of the quotes. We find that there are significant differences
between memorable and non-memorable quotes in several key dimensions, even
after controlling for situational and contextual factors. One is lexical
distinctiveness: in aggregate, memorable quotes use less common word choices,
but at the same time are built upon a scaffolding of common syntactic patterns.
Another is that memorable quotes tend to be more general in ways that make them
easy to apply in new contexts --- that is, more portable. We also show how the
concept of "memorable language" can be extended across domains.
| 2012 | Computation and Language |
Information Retrieval Systems Adapted to the Biomedical Domain | The terminology used in Biomedicine shows lexical peculiarities that have
required the elaboration of terminological resources and information retrieval
systems with specific functionalities. The main characteristics are the high
rates of synonymy and homonymy, due to phenomena such as the proliferation of
polysemic acronyms and their interaction with common language. Information
retrieval systems in the biomedical domain use techniques oriented to the
treatment of these lexical peculiarities. In this paper we review some of the
techniques used in this domain, such as the application of Natural Language
Processing (BioNLP), the incorporation of lexical-semantic resources, and the
application of Named Entity Recognition (BioNER). Finally, we present the
evaluation methods adopted to assess the suitability of these techniques for
retrieving biomedical resources.
| 2010 | Computation and Language |
Roget's Thesaurus as a Lexical Resource for Natural Language Processing | WordNet proved that it is possible to construct a large-scale electronic
lexical database on the principles of lexical semantics. It has been accepted
and used extensively by computational linguists ever since it was released.
Inspired by WordNet's success, we propose as an alternative a similar resource,
based on the 1987 Penguin edition of Roget's Thesaurus of English Words and
Phrases.
Peter Mark Roget published his first Thesaurus over 150 years ago. Countless
writers, orators and students of the English language have used it.
Computational linguists have employed Roget's for almost 50 years in Natural
Language Processing; however, they hesitated to fully adopt Roget's Thesaurus
because a proper machine-tractable version was not available.
This dissertation presents an implementation of a machine-tractable version
of the 1987 Penguin edition of Roget's Thesaurus - the first implementation of
its kind to use an entire current edition. It explains the steps necessary for
taking a machine-readable file and transforming it into a tractable system.
This involves converting the lexical material into a format that can be more
easily exploited, identifying data structures and designing classes to
computerize the Thesaurus. Roget's organization is studied in detail and
contrasted with WordNet's.
We show two applications of the computerized Thesaurus: computing semantic
similarity between words and phrases, and building lexical chains in a text.
The experiments are performed using well-known benchmarks and the results are
compared to those of other systems that use Roget's, WordNet and statistical
techniques. Roget's has turned out to be an excellent resource for measuring
semantic similarity; lexical chains are easily built but more difficult to
evaluate. We also explain ways in which Roget's Thesaurus and WordNet can be
combined.
| 2012 | Computation and Language |
Parallel Spell-Checking Algorithm Based on Yahoo! N-Grams Dataset | Spell-checking is the process of detecting and sometimes providing
suggestions for incorrectly spelled words in a text. Basically, the larger the
dictionary of a spell-checker is, the higher is the error detection rate;
otherwise, misspellings would pass undetected. Unfortunately, traditional
dictionaries suffer from out-of-vocabulary and data sparseness problems as they
do not encompass large vocabulary of words indispensable to cover proper names,
domain-specific terms, technical jargons, special acronyms, and terminologies.
As a result, spell-checkers will incur low error detection and correction rate
and will fail to flag all errors in the text. This paper proposes a new
parallel shared-memory spell-checking algorithm that uses rich real-world word
statistics from Yahoo! N-Grams Dataset to correct non-word and real-word errors
in computer text. Essentially, the proposed algorithm can be divided into three
sub-algorithms that run in a parallel fashion: The error detection algorithm
that detects misspellings, the candidates generation algorithm that generates
correction suggestions, and the error correction algorithm that performs
contextual error correction. Experiments conducted on a set of text articles
containing misspellings, showed a remarkable spelling error correction rate
that resulted in a radical reduction of both non-word and real-word errors in
electronic text. In a further study, the proposed algorithm is to be optimized
for message-passing systems so as to become more flexible and less costly to
scale over distributed machines.
| 2,012 | Computation and Language |
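A minimal sketch of the shared-memory parallelization idea in the preceding abstract: the token stream is split into chunks and non-word detection runs concurrently on each chunk. The chunking scheme, the toy `lexicon` set, and the use of `ProcessPoolExecutor` are illustrative assumptions, not the paper's actual implementation, which also performs candidate generation and contextual correction against Yahoo! N-Grams statistics.

```python
from concurrent.futures import ProcessPoolExecutor

def detect_errors(args):
    # flag tokens absent from the unigram lexicon as candidate non-word errors
    offset, tokens, lexicon = args
    return [(offset + i, t) for i, t in enumerate(tokens) if t.lower() not in lexicon]

def detect_errors_parallel(tokens, lexicon, workers=4):
    # split the token stream into chunks and scan them concurrently;
    # a sketch of the parallel detection step only
    chunk = max(1, (len(tokens) + workers - 1) // workers)
    jobs = [(i, tokens[i:i + chunk], lexicon) for i in range(0, len(tokens), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [hit for part in pool.map(detect_errors, jobs) for hit in part]

if __name__ == "__main__":
    lexicon = {"the", "cat", "sat", "on", "mat"}
    print(detect_errors_parallel("the cta sat on teh mat".split(), lexicon))
```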
OCR Context-Sensitive Error Correction Based on Google Web 1T 5-Gram
Data Set | Since the dawn of the computing era, information has been represented
digitally so that it can be processed by electronic computers. Paper books and
documents were abundant and widely published at that time, and hence there was a need to convert them into digital format. OCR, short for Optical
Character Recognition was conceived to translate paper-based books into digital
e-books. Regrettably, OCR systems are still error-prone, producing misspellings in the recognized text, especially when the source document is of low print quality. This paper proposes a post-processing OCR
context-sensitive error correction method for detecting and correcting non-word
and real-word OCR errors. The cornerstone of this proposed approach is the use
of Google Web 1T 5-gram data set as a dictionary of words to spell-check OCR
text. The Google data set incorporates a very large vocabulary and word
statistics entirely reaped from the Internet, making it a reliable source to
perform dictionary-based error correction. The core of the proposed solution is
a combination of three algorithms: The error detection, candidate spellings
generator, and error correction algorithms, which all exploit information
extracted from the Google Web 1T 5-gram data set. Experiments conducted on scanned documents in different languages showed a substantial improvement in the
OCR error correction rate. As future developments, the proposed algorithm is to
be parallelised so as to support parallel and distributed computing
architectures.
| 2,012 | Computation and Language |
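A hedged sketch of the context-sensitive selection step described in the abstract above: candidate corrections for an OCR error are ranked by the frequency of the 5-gram windows they form with the surrounding words. The `ngram_counts` lookup stands in for a Web-1T-style frequency table and is an assumption; the paper's detection and candidate-generation algorithms are not reproduced here.

```python
def best_correction(candidates, left_context, right_context, ngram_counts):
    """Pick the candidate whose surrounding 5-gram windows are most frequent.

    ngram_counts: dict mapping 5-token tuples to corpus frequencies
    (a stand-in for a Google Web 1T style lookup).
    """
    def context_score(word):
        window = left_context[-4:] + [word] + right_context[:4]
        return sum(ngram_counts.get(tuple(window[i:i + 5]), 0)
                   for i in range(max(1, len(window) - 4)))
    return max(candidates, key=context_score)

counts = {("to", "the", "best", "of", "our"): 120000,
          ("to", "the", "best", "or", "our"): 0}
print(best_correction(["of", "or"], ["to", "the", "best"], ["our", "knowledge"], counts))
```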
OCR Post-Processing Error Correction Algorithm using Google Online
Spelling Suggestion | With the advent of digital optical scanners, a lot of paper-based books,
textbooks, magazines, articles, and documents are being transformed into an
electronic version that can be manipulated by a computer. For this purpose,
OCR, short for Optical Character Recognition was developed to translate scanned
graphical text into editable computer text. Unfortunately, OCR is still
imperfect as it occasionally mis-recognizes letters and falsely identifies
scanned text, leading to misspellings and linguistic errors in the OCR output
text. This paper proposes a post-processing context-based error correction
algorithm for detecting and correcting OCR non-word and real-word errors. The
proposed algorithm is based on Google's online spelling suggestion which
harnesses an internal database containing a huge collection of terms and word
sequences gathered from all over the web, making it well suited to suggesting possible replacements for words that have been misspelled during the OCR process.
Experiments carried out revealed a significant improvement in OCR error
correction rate. Future research could extend the proposed algorithm so that it can be parallelized and executed on multiprocessing platforms.
| 2,012 | Computation and Language |
Roget's Thesaurus and Semantic Similarity | We have implemented a system that measures semantic similarity using a
computerized 1987 Roget's Thesaurus, and evaluated it by performing a few
typical tests. We compare the results of these tests with those produced by
WordNet-based similarity measures. One of the benchmarks is Miller and Charles'
list of 30 noun pairs to which human judges had assigned similarity measures.
We correlate these measures with those computed by several NLP systems. The 30
pairs can be traced back to Rubenstein and Goodenough's 65 pairs, which we have
also studied. Our Roget's-based system gets correlations of .878 for the
smaller and .818 for the larger list of noun pairs; this is quite close to the
.885 that Resnik obtained when he employed humans to replicate the Miller and
Charles experiment. We further evaluate our measure by using Roget's and
WordNet to answer 80 TOEFL, 50 ESL and 300 Reader's Digest questions: the
correct synonym must be selected amongst a group of four words. Our system gets
78.75%, 82.00% and 74.33% of the questions right, respectively.
| 2,003 | Computation and Language |
Keyphrase Extraction : Enhancing Lists | This paper proposes some modest improvements to Extractor, a state-of-the-art
keyphrase extraction system, by using a terabyte-sized corpus to estimate the
informativeness and semantic similarity of keyphrases. We present two techniques for organizing lists of keyphrases and removing outliers. The first is a simple ordering according to their occurrences in
the corpus; the second is clustering according to semantic similarity.
Evaluation issues are discussed. We present a novel technique of comparing
extracted keyphrases to a gold standard which relies on semantic similarity
rather than string matching or an evaluation involving human judges.
| 2,012 | Computation and Language |
Not As Easy As It Seems: Automating the Construction of Lexical Chains
Using Roget's Thesaurus | Morris and Hirst present a method of linking significant words that are about
the same topic. The resulting lexical chains are a means of identifying
cohesive regions in a text, with applications in many natural language
processing tasks, including text summarization. The first lexical chains were
constructed manually using Roget's International Thesaurus. Morris and Hirst
wrote that automation would be straightforward given an electronic thesaurus.
All applications so far have used WordNet to produce lexical chains, perhaps
because adequate electronic versions of Roget's were not available until
recently. We discuss the building of lexical chains using an electronic version
of Roget's Thesaurus. We implement a variant of the original algorithm, and
explain the necessary design decisions. We include a comparison with other
implementations.
| 2,003 | Computation and Language |
Roget's Thesaurus: a Lexical Resource to Treasure | This paper presents the steps involved in creating an electronic lexical
knowledge base from the 1987 Penguin edition of Roget's Thesaurus. Semantic
relations are labelled with the help of WordNet. The two resources are compared
in a qualitative and quantitative manner. Differences in the organization of
the lexical material are discussed, as well as the possibility of merging both
resources.
| 2,001 | Computation and Language |
A practical approach to language complexity: a Wikipedia case study | In this paper we present statistical analysis of English texts from
Wikipedia. We try to address the issue of language complexity empirically by
comparing the simple English Wikipedia (Simple) to comparable samples of the
main English Wikipedia (Main). Simple is supposed to use a more simplified
language with a limited vocabulary, and editors are explicitly requested to
follow this guideline, yet in practice the vocabulary richness of both samples is at the same level. Detailed analysis of longer units (n-grams of words and
part of speech tags) shows that the language of Simple is less complex than
that of Main primarily due to the use of shorter sentences, as opposed to
drastically simplified syntax or vocabulary. Comparing the two language
varieties by the Gunning readability index supports this conclusion. We also
report on the topical dependence of language complexity, e.g. that the language
is more advanced in conceptual articles compared to person-based (biographical)
and object-based articles. Finally, we investigate the relation between
conflict and language complexity by analyzing the content of the talk pages
associated with controversial and peacefully developing articles, concluding that
controversy has the effect of reducing language complexity.
| 2,012 | Computation and Language |
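The readability comparison in the abstract above relies on the Gunning readability index. A minimal sketch of that computation follows; the syllable counter is a crude vowel-group heuristic and the thresholds follow the standard definition of the Gunning fog index (complex words have three or more syllables), which may differ in detail from the paper's exact setup.

```python
import re

def count_syllables(word):
    # crude heuristic: count groups of vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100.0 * len(complex_words) / len(words))

print(gunning_fog("Simple sentences are short. Complicated expositions accumulate polysyllabic terminology."))
```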
Segmentation Similarity and Agreement | We propose a new segmentation evaluation metric, called segmentation
similarity (S), that quantifies the similarity between two segmentations as the
proportion of boundaries that are not transformed when comparing them using
edit distance, essentially using edit distance as a penalty function and
scaling penalties by segmentation size. We propose several adapted
inter-annotator agreement coefficients, based on S, that are suitable for
segmentation. We show that S is configurable enough to suit a wide variety of
segmentation evaluations, and is an improvement upon the state of the art. We
also propose using inter-annotator agreement coefficients to evaluate automatic
segmenters in terms of human performance.
| 2,012 | Computation and Language |
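A simplified sketch of the idea behind S described above: segmentations are compared through a boundary edit distance in which full misses cost one edit and near misses (boundaries off by at most n units) count as cheaper transpositions, and the total is scaled by the number of potential boundary positions. This only illustrates the stated intuition under assumed costs; it is not the exact metric or its agreement coefficients.

```python
def seg_to_boundaries(masses):
    # masses: segment sizes, e.g. [3, 2, 4]; boundaries fall after cumulative positions
    pos, bounds = 0, set()
    for m in masses[:-1]:
        pos += m
        bounds.add(pos)
    return bounds, sum(masses)

def segmentation_similarity(masses_a, masses_b, n=2, transposition_cost=0.5):
    a, total = seg_to_boundaries(masses_a)
    b, _ = seg_to_boundaries(masses_b)
    only_a, only_b = a - b, b - a
    edits = 0.0
    for x in sorted(only_a):
        near = [y for y in only_b if abs(x - y) <= n]
        if near:                       # near miss: count as one (cheaper) transposition
            only_b.remove(min(near, key=lambda y: abs(x - y)))
            edits += transposition_cost
        else:                          # full miss
            edits += 1.0
    edits += len(only_b)
    potential = total - 1              # potential boundary positions
    return 1.0 - edits / potential

print(segmentation_similarity([3, 2, 4], [4, 1, 4]))
```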
Indus script corpora, archaeo-metallurgy and Meluhha (Mleccha) | Jules Bloch's work on formation of the Marathi language has to be expanded
further to provide for a study of evolution and formation of Indian languages
in the Indian language union (sprachbund). The paper analyses the stages in the
evolution of early writing systems which began with the evolution of counting
in the ancient Near East. A stage anterior to the stage of syllabic representation of the sounds of a language is identified. The number of unique geometric token shapes required to categorize objects became too large to handle once hundreds of categories of goods and metallurgical processes had to be abstracted during the production of bronze-age goods. About 3500 BCE, Indus script as a writing
system was developed to use hieroglyphs to represent the 'spoken words'
identifying each of the goods and processes. A rebus method of representing
similar sounding words of the lingua franca of the artisans was used in Indus
script. This method is recognized and consistently applied for the lingua
franca of the Indian sprachbund. That the ancient languages of India,
constituted a sprachbund (or language union) is now recognized by many
linguists. The sprachbund area is proximate to the area where most of the Indus
script inscriptions were discovered, as documented in the corpora. That
hundreds of Indian hieroglyphs continued to be used in metallurgy is evidenced
by their use on early punch-marked coins. This explains the combined use of
syllabic scripts such as Brahmi and Kharoshti together with the hieroglyphs on
Rampurva copper bolt, and Sohgaura copper plate from about 6th century
BCE. Indian hieroglyphs constitute a writing system for the Meluhha language and are
rebus representations of archaeo-metallurgy lexemes. The rebus principle was
employed by the early scripts and can legitimately be used to decipher the
Indus script, after secure pictorial identification.
| 2,015 | Computation and Language |
ILexicOn: toward an ECD-compliant interlingual lexical ontology
described with semantic web formalisms | We are interested in bridging the world of natural language and the world of
the semantic web in particular to support natural multilingual access to the
web of data. In this paper we introduce a new type of lexical ontology called
interlingual lexical ontology (ILexicOn), which uses semantic web formalisms to
make each interlingual lexical unit class (ILUc) support the projection of its
semantic decomposition on itself. After a short overview of existing lexical
ontologies, we briefly introduce the semantic web formalisms we use. We then
present the three layered architecture of our approach: i) the interlingual
lexical meta-ontology (ILexiMOn); ii) the ILexicOn where ILUcs are formally
defined; iii) the data layer. We illustrate our approach with a standalone
ILexicOn, and introduce and explain a concise human-readable notation to
represent ILexicOns. Finally, we show how semantic web formalisms enable the
projection of a semantic decomposition on the decomposed ILUc.
| 2,011 | Computation and Language |
Ecological Evaluation of Persuasive Messages Using Google AdWords | In recent years there has been a growing interest in crowdsourcing
methodologies to be used in experimental research for NLP tasks. In particular,
evaluation of systems and theories about persuasion is difficult to accommodate
within existing frameworks. In this paper we present a cheap and fast methodology that allows rapid experiment building and evaluation with fully automated analysis at low cost. The central idea is to exploit existing
commercial tools for advertising on the web, such as Google AdWords, to measure
message impact in an ecological setting. The paper includes a description of
the approach, tips for how to use AdWords for scientific research, and results
of pilot experiments on the impact of affective text variations which confirm
the effectiveness of the approach.
| 2,015 | Computation and Language |
Context-sensitive Spelling Correction Using Google Web 1T 5-Gram
Information | In computing, spell checking is the process of detecting and sometimes
providing spelling suggestions for incorrectly spelled words in a text.
Basically, a spell checker is a computer program that uses a dictionary of
words to perform spell checking. The bigger the dictionary, the higher the error detection rate. Because spell checkers are based on regular dictionaries, they suffer from a data sparseness problem: they cannot capture the large vocabulary of words that includes proper names, domain-specific terms, technical jargon, special acronyms, and terminologies. As a result, they exhibit a low error detection rate and often fail to catch major errors in the
text. This paper proposes a new context-sensitive spelling correction method
for detecting and correcting non-word and real-word errors in digital text
documents. The approach hinges around data statistics from Google Web 1T 5-gram
data set which consists of a big volume of n-gram word sequences, extracted
from the World Wide Web. Fundamentally, the proposed method comprises an error
detector that detects misspellings, a candidate spellings generator based on a
character 2-gram model that generates correction suggestions, and an error
corrector that performs contextual error correction. Experiments conducted on a
set of text documents from different domains and containing misspellings showed an outstanding spelling error correction rate and a drastic reduction of
both non-word and real-word errors. In a further study, the proposed algorithm
is to be parallelized so as to lower the computational cost of the error
detection and correction processes.
| 2,012 | Computation and Language |
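A minimal sketch of a character 2-gram candidate generator like the one mentioned in the abstract above: candidate spellings are lexicon words whose character-bigram profile overlaps the misspelling's, ranked by the Dice coefficient. The lexicon here is a toy set; the paper draws its vocabulary and the subsequent contextual ranking from the Google Web 1T 5-gram data set, which is not reproduced here.

```python
def char_bigrams(word):
    padded = f"#{word}#"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def dice(a, b):
    return 2.0 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

def candidate_spellings(misspelling, lexicon, top_n=5):
    # rank lexicon entries by character-bigram overlap with the misspelling
    mis = char_bigrams(misspelling)
    ranked = sorted(lexicon, key=lambda w: dice(mis, char_bigrams(w)), reverse=True)
    return ranked[:top_n]

lexicon = {"correction", "connection", "correlation", "collection", "direction"}
print(candidate_spellings("corection", lexicon))
```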
A Corpus-based Evaluation of a Domain-specific Text to Knowledge Mapping
Prototype | The aim of this paper is to evaluate a Text to Knowledge Mapping (TKM)
Prototype. The prototype is domain-specific, the purpose of which is to map
instructional text onto a knowledge domain. The context of the knowledge domain
is the DC electrical circuit. During development, the prototype has been tested
with a limited data set from the domain. The prototype reached a stage where it
needs to be evaluated with a representative linguistic data set called corpus.
A corpus is a collection of text drawn from typical sources which can be used
as a test data set to evaluate NLP systems. As there is no available corpus for
the domain, we developed and annotated a representative corpus. The evaluation
of the prototype considers two of its major components- lexical components and
knowledge model. Evaluation on lexical components enriches the lexical
resources of the prototype like vocabulary and grammar structures. This leads
the prototype to parse a reasonable amount of sentences in the corpus. While
dealing with the lexicon was straight forward, the identification and
extraction of appropriate semantic relations was much more involved. It was
necessary, therefore, to manually develop a conceptual structure for the domain
to formulate a domain-specific framework of semantic relations. The framework
of semantic relationsthat has resulted from this study consisted of 55
relations, out of which 42 have inverse relations. We also conducted rhetorical
analysis on the corpus to prove its representativeness in conveying semantic.
Finally, we conducted a topical and discourse analysis on the corpus to analyze
the coverage of discourse by the prototype.
| 2,012 | Computation and Language |
Rule-weighted and terminal-weighted context-free grammars have identical
expressivity | Two formalisms, both based on context-free grammars, have recently been
proposed as a basis for a non-uniform random generation of combinatorial
objects. The former, introduced by Denise et al, associates weights with
letters, while the latter, recently explored by Weinberg et al in the context
of random generation, associates weights with transitions. In this short note, we use a simple modification of the Greibach Normal Form transformation algorithm, due to Blum and Koch, to show the equivalent expressivity, in terms of their induced distributions, of these two formalisms.
| 2,012 | Computation and Language |
Characterizing Ranked Chinese Syllable-to-Character Mapping Spectrum: A
Bridge Between the Spoken and Written Chinese Language | One important aspect of the relationship between spoken and written Chinese
is the ranked syllable-to-character mapping spectrum, which is the ranked list
of syllables by the number of characters that map to the syllable. Previously,
this spectrum was analyzed for more than 400 syllables without distinguishing the four tones. In the current study, the spectrum with 1280 toned
syllables is analyzed by logarithmic function, Beta rank function, and
piecewise logarithmic function. Out of the three fitting functions, the
two-piece logarithmic function fits the data the best, both by the smallest sum
of squared errors (SSE) and by the lowest Akaike information criterion (AIC)
value. The Beta rank function is the close second. By sampling from a Poisson
distribution whose parameter value is chosen from the observed data, we
empirically estimate the $p$-value for testing the hypothesis that the two-piece logarithmic function is better than the Beta rank function, obtaining a value of 0.16. For practical purposes, the piecewise logarithmic
function and the Beta rank function can be considered a tie.
| 2,013 | Computation and Language |
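A sketch of how one might fit and compare the two-piece logarithmic model on a ranked spectrum, as discussed in the abstract above: each candidate breakpoint splits the ranks in two, each piece is fit by least squares in log-rank, and models are compared by SSE and AIC. The synthetic data, the AIC formula used (n·ln(SSE/n) + 2k), and the grid search over breakpoints are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def fit_log(r, y):
    # least-squares fit of y = a + b * ln(r); returns coefficients and SSE
    X = np.column_stack([np.ones_like(r), np.log(r)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(resid @ resid)

def fit_two_piece_log(r, y):
    best = None
    for cut in range(3, len(r) - 3):          # grid search over breakpoints
        _, sse1 = fit_log(r[:cut], y[:cut])
        _, sse2 = fit_log(r[cut:], y[cut:])
        if best is None or sse1 + sse2 < best[1]:
            best = (cut, sse1 + sse2)
    return best

def aic(sse, n, k):
    return n * np.log(sse / n) + 2 * k

rank = np.arange(1, 101, dtype=float)
counts = np.where(rank <= 30, 40 - 8 * np.log(rank), 25 - 3 * np.log(rank))
counts += np.random.default_rng(0).normal(0, 0.5, rank.size)

_, sse_one = fit_log(rank, counts)
cut, sse_two = fit_two_piece_log(rank, counts)
print("one-piece AIC:", aic(sse_one, rank.size, 2))
print(f"two-piece AIC (breakpoint at rank {cut}):", aic(sse_two, rank.size, 5))
```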
Parsing of Myanmar sentences with function tagging | This paper describes the use of Naive Bayes to address the task of assigning
function tags and context free grammar (CFG) to parse Myanmar sentences. Part
of the challenge of statistical function tagging for Myanmar sentences comes
from the fact that Myanmar has free-phrase-order and a complex morphological
system. Function tagging is a pre-processing step for parsing. In the task of
function tagging, we use a functionally annotated corpus and tag Myanmar
sentences with correct segmentation, POS (part-of-speech) tagging and chunking
information. We propose Myanmar grammar rules and apply context free grammar
(CFG) to derive the parse tree of function-tagged Myanmar sentences.
Experiments show that our analysis achieves a good result with parsing of
simple sentences and three types of complex sentences.
| 2,012 | Computation and Language |
Spectral Analysis of Projection Histogram for Enhancing Close matching
character Recognition in Malayalam | The success rates of Optical Character Recognition (OCR) systems for printed
Malayalam documents are quite impressive, with state-of-the-art accuracy levels in the range of 85-95%. However, for real applications, further enhancement of these accuracy levels is required. One of the bottlenecks in further enhancement of the accuracy is identified as close-matching
characters. In this paper, we delineate the close matching characters in
Malayalam and report the development of a specialised classifier for these
close-matching characters. The output of a state-of-the-art OCR is taken, and characters falling into the close-matching character set are further fed into
this specialised classifier for enhancing the accuracy. The classifier is based
on support vector machine algorithm and uses feature vectors derived out of
spectral coefficients of projection histogram signals of close-matching
characters.
| 2,012 | Computation and Language |
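A rough sketch of the feature pipeline suggested by the abstract above: projection histograms of a binarized character image are computed row- and column-wise, their low-order Fourier magnitudes form the feature vector, and an SVM separates the close-matching classes. The image size, number of coefficients, kernel, and normalization are all illustrative assumptions, and the toy random "glyphs" stand in for real segmented characters.

```python
import numpy as np
from sklearn.svm import SVC

def spectral_features(glyph, n_coeffs=16):
    # glyph: 2-D binary array (character image)
    feats = []
    for axis in (0, 1):
        proj = glyph.sum(axis=axis).astype(float)       # projection histogram
        spec = np.abs(np.fft.rfft(proj - proj.mean()))  # spectral coefficients
        spec = np.pad(spec[:n_coeffs], (0, max(0, n_coeffs - len(spec[:n_coeffs]))))
        feats.append(spec / (np.linalg.norm(spec) + 1e-9))
    return np.concatenate(feats)

# toy training run on random arrays; real use would feed segmented character images
rng = np.random.default_rng(0)
glyphs = rng.integers(0, 2, size=(40, 32, 32))
labels = rng.integers(0, 2, size=40)
clf = SVC(kernel="rbf").fit(np.stack([spectral_features(g) for g in glyphs]), labels)
print(clf.predict([spectral_features(glyphs[0])]))
```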
Multilingual Topic Models for Unaligned Text | We develop the multilingual topic model for unaligned text (MuTo), a
probabilistic model of text that is designed to analyze corpora composed of
documents in two languages. From these documents, MuTo uses stochastic EM to
simultaneously discover both a matching between the languages and multilingual
latent topics. We demonstrate that MuTo is able to find shared topics on
real-world multilingual corpora, successfully pairing related documents across
languages. MuTo provides a new framework for creating multilingual topic models
without needing carefully curated parallel corpora and allows applications
built using the topic model formalism to be applied to a much wider class of
corpora.
| 2,012 | Computation and Language |
A Model-Driven Probabilistic Parser Generator | Existing probabilistic scanners and parsers impose hard constraints on the
way lexical and syntactic ambiguities can be resolved. Furthermore, traditional
grammar-based parsing tools are limited in the mechanisms they allow for taking
context into account. In this paper, we propose a model-driven tool that allows
for statistical language models with arbitrary probability estimators. Our work
on model-driven probabilistic parsing is built on top of ModelCC, a model-based
parser generator, and enables the probabilistic interpretation and resolution
of anaphoric, cataphoric, and recursive references in the disambiguation of
abstract syntax graphs. In order to prove the expressive power of ModelCC, we
describe the design of a general-purpose natural language parser.
| 2,012 | Computation and Language |
Arabic Language Learning Assisted by Computer, based on Automatic Speech
Recognition | This work consists of creating a Computer Assisted Language Learning (CALL) system based on an Automatic Speech Recognition (ASR) system for the Arabic language, built with the CMU Sphinx3 toolkit [1] and the HMM approach. For this work, we constructed a corpus of six hours of speech recordings from nine speakers. Robustness to noise is one of the grounds for choosing the HMM approach [2]. The results achieved are encouraging given that our corpus was produced by only nine speakers, and they open the door to further improvement work.
| 2,012 | Computation and Language |
Task-specific Word-Clustering for Part-of-Speech Tagging | While the use of cluster features became ubiquitous in core NLP tasks, most
cluster features in NLP are based on distributional similarity. We propose a
new type of clustering criteria, specific to the task of part-of-speech
tagging. Instead of distributional similarity, these clusters are based on the
behavior of a baseline tagger when applied to a large corpus. These cluster features provide similar gains in accuracy to those achieved by distributional-similarity derived clusters. Using both types of cluster features together further improves tagging accuracies. We show that the method
is effective for both the in-domain and out-of-domain scenarios for English,
and for French, German and Italian. The effect is larger for out-of-domain
text.
| 2,012 | Computation and Language |
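A sketch of one way to derive behavior-based clusters as described in the abstract above: each word is represented by the distribution of tags a baseline tagger assigns it across a large raw corpus, and the words are grouped with k-means. The minimum count, number of clusters, and the use of k-means are assumptions for illustration; the paper's actual clustering criterion may differ.

```python
from collections import Counter, defaultdict
import numpy as np
from sklearn.cluster import KMeans

def behavior_clusters(tagged_tokens, n_clusters=8, min_count=3):
    # tagged_tokens: (word, tag) pairs produced by a baseline tagger on raw text
    dist = defaultdict(Counter)
    for word, tag in tagged_tokens:
        dist[word][tag] += 1
    tags = sorted({t for c in dist.values() for t in c})
    words = [w for w, c in dist.items() if sum(c.values()) >= min_count]
    X = np.array([[dist[w][t] / sum(dist[w].values()) for t in tags] for w in words])
    labels = KMeans(n_clusters=min(n_clusters, len(words)), n_init=10,
                    random_state=0).fit_predict(X)
    return dict(zip(words, labels))

toy = [("can", "MD"), ("can", "MD"), ("can", "NN"), ("dog", "NN"), ("dog", "NN"),
       ("dog", "NN"), ("run", "VB"), ("run", "VB"), ("run", "NN")]
print(behavior_clusters(toy, n_clusters=2, min_count=3))
```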
Precision-biased Parsing and High-Quality Parse Selection | We introduce precision-biased parsing: a parsing task which favors precision
over recall by allowing the parser to abstain from decisions deemed uncertain.
We focus on dependency-parsing and present an ensemble method which is capable
of assigning parents to 84% of the text tokens while being over 96% accurate on
these tokens. We use the precision-biased parsing task to solve the related
high-quality parse-selection task: finding a subset of high-quality (accurate)
trees in a large collection of parsed text. We present a method for choosing
over a third of the input trees while keeping unlabeled dependency parsing
accuracy of 97% on these trees. We also present a method which is not based on
an ensemble but rather on directly predicting the risk associated with
individual parser decisions. In addition to its efficiency, this method
demonstrates that a parsing system can provide reasonable estimates of
confidence in its predictions without relying on ensembles or aggregate corpus
counts.
| 2,012 | Computation and Language |
FASTSUBS: An Efficient and Exact Procedure for Finding the Most Likely
Lexical Substitutes Based on an N-gram Language Model | Lexical substitutes have found use in areas such as paraphrasing, text
simplification, machine translation, word sense disambiguation, and part of
speech induction. However the computational complexity of accurately
identifying the most likely substitutes for a word has made large scale
experiments difficult. In this paper I introduce a new search algorithm,
FASTSUBS, that is guaranteed to find the K most likely lexical substitutes for
a given word in a sentence based on an n-gram language model. The computation
is sub-linear in both K and the vocabulary size V. An implementation of the
algorithm and a dataset with the top 100 substitutes of each token in the WSJ
section of the Penn Treebank are available at http://goo.gl/jzKH0.
| 2,012 | Computation and Language |
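FASTSUBS avoids scoring the full vocabulary; for contrast, a naive O(V) baseline is sketched below, scoring every vocabulary word by the n-gram log-probabilities of the windows it participates in. The `lm.logprob(word, context)` interface and the toy bigram model are hypothetical, introduced only to make the sketch self-contained; they are not the interface of the released implementation.

```python
import heapq

class ToyBigramLM:
    # hypothetical n-gram LM interface: logprob(word, context_tuple)
    def __init__(self, logprobs, floor=-10.0):
        self.logprobs, self.floor, self.order = logprobs, floor, 2
    def logprob(self, word, context):
        return self.logprobs.get((context, word), self.floor)

def top_k_substitutes_naive(lm, tokens, position, vocab, k=3):
    scored = []
    for cand in vocab:                      # FASTSUBS avoids this full scan over V
        s = list(tokens)
        s[position] = cand
        score = sum(lm.logprob(s[i], tuple(s[max(0, i - lm.order + 1):i]))
                    for i in range(len(s)))
        scored.append((score, cand))
    return heapq.nlargest(k, scored)

lm = ToyBigramLM({(("the",), "cat"): -1.0, (("the",), "dog"): -1.2, (("cat",), "sat"): -0.5,
                  (("dog",), "sat"): -0.6, ((), "the"): -0.3})
print(top_k_substitutes_naive(lm, ["the", "cat", "sat"], 1, ["cat", "dog", "sofa"]))
```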
Syst\`eme d'aide \`a l'acc\`es lexical : trouver le mot qu'on a sur le
bout de la langue | The study of the Tip of the Tongue phenomenon (TOT) provides valuable clues
and insights concerning the organisation of the mental lexicon (meaning, number
of syllables, relation with other words, etc.). This paper describes a tool
based on psycho-linguistic observations concerning the TOT phenomenon. We've
built it to enable a speaker/writer to find the word he is looking for, a word he may know but is unable to access in time. We try to simulate the TOT
phenomenon by creating a situation where the system knows the target word, yet
is unable to access it. In order to find the target word we make use of the
paradigmatic and syntagmatic associations stored in the linguistic databases.
Our experiment allows the following conclusion: a tool like SVETLAN, capable of automatically structuring a dictionary by domains, can be used successfully to help the speaker/writer find the word he is looking for, if it is combined with a database rich in paradigmatic links such as EuroWordNet.
| 2,012 | Computation and Language |
Language Acquisition in Computers | This project explores the nature of language acquisition in computers, guided
by techniques similar to those used in children. While existing natural
language processing methods are limited in scope and understanding, our system
aims to gain an understanding of language from first principles, with minimal initial input. The first portion of our system was implemented in Java
and is focused on understanding the morphology of language using bigrams. We
use frequency distributions and differences between them to define and
distinguish languages. English and French texts were analyzed to determine a
difference threshold of 55 before the texts are considered to be in different
languages, and this threshold was verified using Spanish texts. The second
portion of our system focuses on gaining an understanding of the syntax of a
language using a recursive method. The program uses one of two possible methods
to analyze given sentences based on either sentence patterns or surrounding
words. Both methods have been implemented in C++. The program is able to
understand the structure of simple sentences and learn new words. In addition,
we have provided some suggestions regarding future work and potential
extensions of the existing program.
| 2,012 | Computation and Language |
Automated Word Puzzle Generation via Topic Dictionaries | We propose a general method for automated word puzzle generation. Contrary to
previous approaches in this novel field, the presented method does not rely on
highly structured datasets obtained with serious human annotation effort: it
only needs an unstructured and unannotated corpus (i.e., document collection)
as input. The method builds upon two additional pillars: (i) a topic model,
which induces a topic dictionary from the input corpus (examples include e.g.,
latent semantic analysis, group-structured dictionaries or latent Dirichlet
allocation), and (ii) a semantic similarity measure of word pairs. Our method
can (i) automatically generate a large number of proper word puzzles of different types, including the odd one out, choose the related word, and separate the topics puzzles. (ii) It can easily create domain-specific puzzles
by replacing the corpus component. (iii) It is also capable of automatically
generating puzzles with parameterizable levels of difficulty suitable for,
e.g., beginners or intermediate learners.
| 2,012 | Computation and Language |
UNL Based Bangla Natural Text Conversion - Predicate Preserving Parser
Approach | Universal Networking Language (UNL) is a declarative formal language that is
used to represent semantic data extracted from natural language texts. This
paper presents a novel approach to converting Bangla natural language text into
UNL using a method known as the Predicate Preserving Parser (PPP) technique. PPP performs morphological, syntactic, semantic, and lexical analysis of text
synchronously. This analysis produces a semantic-net like structure represented
using UNL. We demonstrate how Bangla texts are analyzed following the PPP
technique to produce UNL documents which can then be translated into any other
suitable natural language, facilitating the development of a universal language translation method via UNL.
| 2,012 | Computation and Language |
Hedge detection as a lens on framing in the GMO debates: A position
paper | Understanding the ways in which participants in public discussions frame
their arguments is important in understanding how public opinion is formed. In
this paper, we adopt the position that it is time for more
computationally-oriented research on problems involving framing. In the
interests of furthering that goal, we propose the following specific,
interesting and, we believe, relatively accessible question: In the controversy
regarding the use of genetically-modified organisms (GMOs) in agriculture, do
pro- and anti-GMO articles differ in whether they choose to adopt a
"scientific" tone?
Prior work on the rhetoric and sociology of science suggests that hedging may
distinguish popular-science text from text written by professional scientists
for their colleagues. We propose a detailed approach to studying whether hedge
detection can be used to understanding scientific framing in the GMO debates,
and provide corpora to facilitate this study. Some of our preliminary analyses
suggest that hedges occur less frequently in scientific discourse than in
popular text, a finding that contradicts prior assertions in the literature. We
hope that our initial work and data will encourage others to pursue this
promising line of inquiry.
| 2,012 | Computation and Language |
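As a first, deliberately crude step toward the study proposed in the abstract above, one could compare hedge rates across the pro- and anti-GMO corpora with a simple lexicon match. The hedge word list below is a small illustrative sample, not a validated hedge lexicon, and lexicon matching is far weaker than the trained hedge detectors the paper has in mind.

```python
HEDGES = {"may", "might", "could", "appears", "appear", "suggests", "suggest",
          "possibly", "likely", "perhaps", "seems", "seem", "presumably"}

def hedge_rate(tokens):
    # hedges per 1000 tokens; a lexicon-based proxy, not a trained hedge detector
    hits = sum(1 for t in tokens if t.lower() in HEDGES)
    return 1000.0 * hits / max(1, len(tokens))

pro = "These results suggest the trait may possibly reduce pesticide use".split()
anti = "The modified crop contaminates nearby fields and harms biodiversity".split()
print(hedge_rate(pro), hedge_rate(anti))
```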
Developing a model for a text database indexed pedagogically for
teaching the Arabic language | In this thesis we designed an indexing model for the Arabic language, adapting the standards used for describing learning resources (the LOM and its application profiles) to learning conditions such as students' educational levels and levels of understanding, and to the pedagogical context, while taking into account representative elements of the text, such as its length. In particular, we highlight the specificity of the Arabic language, which is a complex language characterized by its inflection, its vowelization and its agglutination.
| 2,012 | Computation and Language |
Temporal expression normalisation in natural language texts | Automatic annotation of temporal expressions is a research challenge of great
interest in the field of information extraction. In this report, I describe a
novel rule-based architecture, built on top of a pre-existing system, which is
able to normalise temporal expressions detected in English texts. Gold standard
temporally-annotated resources are limited in size and this makes research
difficult. The proposed system outperforms the state-of-the-art systems with
respect to TempEval-2 Shared Task (value attribute) and achieves substantially
better results with respect to the pre-existing system on top of which it has
been developed. I will also introduce a new free corpus consisting of 2822
unique annotated temporal expressions. Both the corpus and the system are
freely available on-line.
| 2,012 | Computation and Language |
BADREX: In situ expansion and coreference of biomedical abbreviations
using dynamic regular expressions | BADREX uses dynamically generated regular expressions to annotate term
definition-term abbreviation pairs, and corefers unpaired acronyms and
abbreviations back to their initial definition in the text. Against the
Medstract corpus BADREX achieves precision and recall of 98% and 97%, and
against a much larger corpus, 90% and 85%, respectively. BADREX yields improved
performance over previous approaches, requires no training data and allows
runtime customisation of its input parameters. BADREX is freely available from
https://github.com/philgooch/BADREX-Biomedical-Abbreviation-Expander as a
plugin for the General Architecture for Text Engineering (GATE) framework and
is licensed under the GPLv3.
| 2,012 | Computation and Language |
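The general idea of pairing a definition with its parenthesized abbreviation, as described in the abstract above, can be illustrated with a dynamically built regular expression: one word per abbreviation letter, each starting with that letter, with a little slack for intervening function words. This sketch only mirrors the flavor of the approach; it is not BADREX's actual pattern-generation code, and it assumes purely alphabetic abbreviations.

```python
import re

def definition_pattern(abbrev):
    # one word per letter, each starting with that letter (case-insensitive),
    # allowing up to two intervening filler words between them
    parts = [rf"\b[{c.upper()}{c.lower()}]\w+" for c in abbrev]
    body = r"\W+(?:\w+\W+){0,2}?".join(parts)
    return re.compile(rf"({body})\s*\(\s*{re.escape(abbrev)}\s*\)")

text = "Patients with chronic obstructive pulmonary disease (COPD) were enrolled."
match = definition_pattern("COPD").search(text)
print(match.group(1) if match else "no definition found")
```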
TempEval-3: Evaluating Events, Time Expressions, and Temporal Relations | We describe the TempEval-3 task which is currently in preparation for the
SemEval-2013 evaluation exercise. The aim of TempEval is to advance research on
temporal information processing. TempEval-3 follows on from previous TempEval
events, incorporating: a three-part task structure covering event, temporal
expression and temporal relation extraction; a larger dataset; and single
overall task quality scores.
| 2,014 | Computation and Language |
Keyphrase Based Arabic Summarizer (KPAS) | This paper describes a computationally inexpensive and efficient generic
summarization algorithm for Arabic texts. The algorithm belongs to extractive
summarization family, which reduces the problem into representative sentences
identification and extraction sub-problems. Important keyphrases of the
document to be summarized are identified employing combinations of statistical
and linguistic features. The sentence extraction algorithm exploits keyphrases
as the primary attributes to rank a sentence. The present experimental work demonstrates different techniques for achieving various summarization goals
including: informative richness, coverage of both main and auxiliary topics,
and keeping redundancy to a minimum. A scoring scheme is then adopted that
balances between these summarization goals. To evaluate the resulting Arabic summaries against well-established systems, aligned English/Arabic texts are used throughout the experiments.
| 2,012 | Computation and Language |
Two Step CCA: A new spectral method for estimating vector models of
words | Unlabeled data is often used to learn representations which can be used to
supplement baseline features in a supervised learner. For example, for text
applications where the words lie in a very high dimensional space (the size of
the vocabulary), one can learn a low rank "dictionary" by an
eigen-decomposition of the word co-occurrence matrix (e.g. using PCA or CCA).
In this paper, we present a new spectral method based on CCA to learn an
eigenword dictionary. Our improved procedure computes two sets of CCAs, the
first one between the left and right contexts of the given word and the second
one between the projections resulting from this CCA and the word itself. We
prove theoretically that this two-step procedure has lower sample complexity
than the simple single step procedure and also illustrate the empirical
efficacy of our approach and the richness of representations learned by our Two
Step CCA (TSCCA) procedure on the tasks of POS tagging and sentiment
classification.
| 2,012 | Computation and Language |
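A toy sketch of the two-step procedure described above, on small dense matrices: a first CCA between left- and right-context representations of word occurrences, then a second CCA between the concatenated context projections and the word indicator matrix, whose y-side weights serve as the eigenword dictionary. The matrix sizes are tiny and random, the use of scikit-learn's CCA and its `y_weights_` attribute is an implementation assumption, and scaling to real corpora (sparse matrices, randomized projections) is outside the scope of the sketch.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_tokens, vocab, ctx_dim, k = 200, 30, 40, 5

L = rng.normal(size=(n_tokens, ctx_dim))             # left-context features per occurrence
R = rng.normal(size=(n_tokens, ctx_dim))             # right-context features per occurrence
W = np.eye(vocab)[rng.integers(0, vocab, n_tokens)]  # one-hot word indicators

# step 1: CCA between left and right contexts
cca_lr = CCA(n_components=k).fit(L, R)
Lp, Rp = cca_lr.transform(L, R)

# step 2: CCA between the projected context and the word itself
cca_cw = CCA(n_components=k).fit(np.hstack([Lp, Rp]), W)
eigenwords = cca_cw.y_weights_                        # vocab x k dictionary
print(eigenwords.shape)
```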
A Joint Model of Language and Perception for Grounded Attribute Learning | As robots become more ubiquitous and capable, it becomes ever more important
to enable untrained users to easily interact with them. Recently, this has led
to study of the language grounding problem, where the goal is to extract
representations of the meanings of natural language tied to perception and
actuation in the physical world. In this paper, we present an approach for
joint learning of language and perception models for grounded attribute
induction. Our perception model includes attribute classifiers, for example to
detect object color and shape, and the language model is based on a
probabilistic categorial grammar that enables the construction of rich,
compositional meaning representations. The approach is evaluated on the task of
interpreting sentences that describe sets of objects in a physical workspace.
We demonstrate accurate task performance and effective latent-variable concept
induction in physically grounded scenes.
| 2,012 | Computation and Language |
A Fast and Simple Algorithm for Training Neural Probabilistic Language
Models | In spite of their superior performance, neural probabilistic language models
(NPLMs) remain far less widely used than n-gram models due to their notoriously
long training times, which are measured in weeks even for moderately-sized
datasets. Training NPLMs is computationally expensive because they are
explicitly normalized, which leads to having to consider all words in the
vocabulary when computing the log-likelihood gradients.
We propose a fast and simple algorithm for training NPLMs based on
noise-contrastive estimation, a newly introduced procedure for estimating
unnormalized continuous distributions. We investigate the behaviour of the
algorithm on the Penn Treebank corpus and show that it reduces the training
times by more than an order of magnitude without affecting the quality of the
resulting models. The algorithm is also more efficient and much more stable
than importance sampling because it requires far fewer noise samples to perform
well.
We demonstrate the scalability of the proposed approach by training several
neural language models on a 47M-word corpus with an 80K-word vocabulary,
obtaining state-of-the-art results on the Microsoft Research Sentence
Completion Challenge dataset.
| 2,012 | Computation and Language |
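The noise-contrastive objective at the heart of the training scheme described above can be written compactly: each data word and k noise words drawn from a known noise distribution are scored by the model, and a logistic loss classifies data against noise, with the model's unnormalized score playing the role of a log-probability. The sketch below is a generic NCE loss in PyTorch under these assumptions; the paper's exact parameterization (for example, treating the normalizing constants as parameters fixed to 1) is not reproduced.

```python
import torch
import torch.nn.functional as F

def nce_loss(score_data, data_ids, score_noise, noise_ids, noise_probs, k):
    """score_data: (B,) unnormalized log-scores s(w, h) for the observed words
    score_noise: (B, k) scores for k sampled noise words per context
    noise_probs: (V,) noise distribution (e.g. unigram) the noise words were drawn from
    """
    log_k_pn_data = torch.log(k * noise_probs[data_ids] + 1e-12)
    log_k_pn_noise = torch.log(k * noise_probs[noise_ids] + 1e-12)
    # P(D=1 | w, h) = sigmoid(s(w, h) - log(k * Pn(w)))
    return -(F.logsigmoid(score_data - log_k_pn_data).mean()
             + F.logsigmoid(-(score_noise - log_k_pn_noise)).sum(dim=1).mean())

V, B, k = 1000, 8, 25
noise_probs = torch.full((V,), 1.0 / V)
data_ids = torch.randint(0, V, (B,))
noise_ids = torch.randint(0, V, (B, k))
print(nce_loss(torch.randn(B), data_ids, torch.randn(B, k), noise_ids, noise_probs, k))
```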
Cross Language Text Classification via Subspace Co-Regularized
Multi-View Learning | In many multilingual text classification problems, the documents in different
languages often share the same set of categories. To reduce the labeling cost
of training a classification model for each individual language, it is
important to transfer the label knowledge gained from one language to another
language by conducting cross language classification. In this paper we develop
a novel subspace co-regularized multi-view learning method for cross language
text classification. This method is built on parallel corpora produced by
machine translation. It jointly minimizes the training error of each classifier
in each language while penalizing the distance between the subspace
representations of parallel documents. Our empirical study on a large set of
cross language text classification tasks shows the proposed method consistently
outperforms a number of inductive methods, domain adaptation methods, and
multi-view learning methods.
| 2,012 | Computation and Language |
Elimination of Spurious Ambiguity in Transition-Based Dependency Parsing | We present a novel technique to remove spurious ambiguity from transition
systems for dependency parsing. Our technique chooses a canonical sequence of
transition operations (computation) for a given dependency tree. Our technique
can be applied to a large class of bottom-up transition systems, including for
instance Nivre (2004) and Attardi (2006).
| 2,012 | Computation and Language |
Adversarial Evaluation for Models of Natural Language | We now have a rich and growing set of modeling tools and algorithms for
inducing linguistic structure from text that is less than fully annotated. In
this paper, we discuss some of the weaknesses of our current methodology. We
present a new abstract framework for evaluating natural language processing
(NLP) models in general and unsupervised NLP models in particular. The central
idea is to make explicit certain adversarial roles among researchers, so that
the different roles in an evaluation are more clearly defined and performers of
all roles are offered ways to make measurable contributions to the larger goal.
Adopting this approach may help to characterize model successes and failures by
encouraging earlier consideration of error analysis. The framework can be
instantiated in a variety of ways, simulating some familiar intrinsic and
extrinsic evaluations as well as some new evaluations.
| 2,012 | Computation and Language |
Applying Deep Belief Networks to Word Sense Disambiguation | In this paper, we applied a novel learning algorithm, namely, Deep Belief
Networks (DBN) to word sense disambiguation (WSD). DBN is a probabilistic
generative model composed of multiple layers of hidden units. DBN uses
Restricted Boltzmann Machines (RBMs) to greedily pretrain the network layer by layer. Then, a separate fine-tuning step is employed to improve the
discriminative power. We compared DBN with various state-of-the-art supervised
learning algorithms in WSD such as Support Vector Machine (SVM), Maximum
Entropy model (MaxEnt), Naive Bayes classifier (NB) and Kernel Principal
Component Analysis (KPCA). We used all words in the given paragraph,
surrounding context words and part-of-speech of surrounding words as our
knowledge sources. We conducted our experiment on the SENSEVAL-2 data set. We
observed that DBN outperformed all other learning algorithms.
| 2,012 | Computation and Language |
Learning to Map Sentences to Logical Form: Structured Classification
with Probabilistic Categorial Grammars | This paper addresses the problem of mapping natural language sentences to
lambda-calculus encodings of their meaning. We describe a learning algorithm
that takes as input a training set of sentences labeled with expressions in the
lambda calculus. The algorithm induces a grammar for the problem, along with a
log-linear model that represents a distribution over syntactic and semantic
analyses conditioned on the input sentence. We apply the method to the task of
learning natural language interfaces to databases and show that the learned
parsers outperform previous methods in two benchmark database domains.
| 2,012 | Computation and Language |
Finding Structure in Text, Genome and Other Symbolic Sequences | The statistical methods derived and described in this thesis provide new ways
to elucidate the structural properties of text and other symbolic sequences.
Generically, these methods allow detection of a difference in the frequency of
a single feature, the detection of a difference between the frequencies of an
ensemble of features and the attribution of the source of a text. These three
abstract tasks suffice to solve problems in a wide variety of settings.
Furthermore, the techniques described in this thesis can be extended to provide
a wide range of additional tests beyond the ones described here.
A variety of applications for these methods are examined in detail. These
applications are drawn from the area of text analysis and genetic sequence
analysis. The textually oriented tasks include finding interesting collocations
and cooccurent phrases, language identification, and information retrieval. The
biologically oriented tasks include species identification and the discovery of
previously unreported long range structure in genes. In the applications
reported here where direct comparison is possible, the performance of these new
methods substantially exceeds the state of the art.
Overall, the methods described here provide new and effective ways to analyse
text and other symbolic sequences. Their particular strength is that they deal
well with situations where relatively little data are available. Since these
methods are abstract in nature, they can be applied in novel situations with
relative ease.
| 2,012 | Computation and Language |
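One statistic widely used for the first abstract task named above, detecting a difference in the frequency of a single feature (for example, a candidate collocation versus chance), is the log-likelihood ratio over a 2x2 contingency table. The sketch below computes it via the entropy formulation; it is offered as a representative example of this family of tests, not as the exact derivation given in the thesis.

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def llr_2x2(k11, k12, k21, k22):
    """Log-likelihood ratio for a 2x2 table, e.g. for a bigram (a, b):
    k11 = count(a b), k12 = count(a, not b), k21 = count(not a, b), k22 = rest."""
    cells = [k11, k12, k21, k22]
    n = sum(cells)
    return 2.0 * n * (entropy([k11 + k12, k21 + k22])
                      + entropy([k11 + k21, k12 + k22])
                      - entropy(cells))

# a pair that co-occurs far more often than chance predicts scores highly
print(round(llr_2x2(150, 1000, 800, 98050), 2))
```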