Titles | Abstracts | Years | Categories
---|---|---|---|
Linguistic complexity: English vs. Polish, text vs. corpus | We analyze the rank-frequency distributions of words in selected English and
Polish texts. We show that for the lemmatized (basic) word forms the
scale-invariant regime breaks after about two decades, while it might be
consistent for the whole range of ranks for the inflected word forms. We also
find that for a corpus consisting of texts written by different authors the
basic scale-invariant regime is broken more strongly than in the case of
a comparable corpus consisting of texts written by the same author. Similarly,
for a corpus consisting of texts translated into Polish from other languages
the scale-invariant regime is broken more strongly than for a comparable corpus
of native Polish texts. Moreover, we find that if the words are tagged with
their proper part of speech, only verbs show a rank-frequency distribution that
is almost scale-invariant.
| 2,010 | Computation and Language |
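The rank-frequency analysis described above is straightforward to reproduce in outline. The sketch below is not the authors' code; the toy sentence and the rank window passed to `zipf_slope` are illustrative assumptions.

```python
# Count word frequencies, sort them by rank, and estimate the slope of the
# Zipf plot (log frequency vs. log rank) over a chosen rank window.
from collections import Counter
import numpy as np

def rank_frequency(tokens):
    """Return 1-based ranks and the corresponding frequencies, descending."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    return ranks, freqs

def zipf_slope(ranks, freqs, lo=1, hi=100):
    """Least-squares slope of log f vs. log r over the rank window [lo, hi)."""
    sel = (ranks >= lo) & (ranks < hi)
    return np.polyfit(np.log(ranks[sel]), np.log(freqs[sel]), 1)[0]

tokens = "the cat sat on the mat and the dog sat on the cat".split()
r, f = rank_frequency(tokens)
print(zipf_slope(r, f, 1, len(r) + 1))   # roughly -1 for Zipfian data
```

A break in the scale-invariant regime of the kind the abstract describes would show up as different slopes fitted over different rank windows.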
Inflection system of a language as a complex network | We investigate the inflection structure of a synthetic language using Latin as an
example. We construct a bipartite graph in which one group of vertices
corresponds to dictionary headwords and the other to inflected forms
encountered in a given text. Each inflected form is connected to its
corresponding headword, which in some cases is non-unique. The resulting sparse
graph decomposes into a large number of connected components, to be called word
groups. We then show how the concept of the word group can be used to construct
coverage curves of selected Latin texts. We also investigate a version of the
inflection graph in which all theoretically possible inflected forms are
included. The distribution of sizes of connected components of this graph
resembles the cluster distribution in lattice percolation near the critical
point.
| 2,009 | Computation and Language |
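The bipartite headword/inflected-form graph and its "word groups" can be sketched with networkx; the toy Latin pairs below are illustrative assumptions, not data from the paper.

```python
import networkx as nx

pairs = [                     # (inflected form, dictionary headword)
    ("rosam", "rosa"), ("rosae", "rosa"),
    ("amat", "amo"), ("amant", "amo"),
    ("canis", "canis"), ("cani", "canis"),
    ("cani", "canus"),        # an ambiguous form linked to two headwords
]

G = nx.Graph()
for form, lemma in pairs:
    G.add_node(("form", form), bipartite=0)
    G.add_node(("lemma", lemma), bipartite=1)
    G.add_edge(("form", form), ("lemma", lemma))

# Connected components are the "word groups"; their size distribution is what
# the abstract compares to cluster sizes in lattice percolation.
sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
print(sizes)
```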
Distinguishing Fact from Fiction: Pattern Recognition in Texts Using
Complex Networks | We establish concrete mathematical criteria to distinguish between different
kinds of written storytelling, fictional and non-fictional. Specifically, we
constructed a semantic network from both novels and news stories, with $N$
independent words as vertices or nodes, and edges or links allotted to words
occurring within $m$ places of a given vertex; we call $m$ the word distance.
We then used measures from complex network theory to distinguish between news
and fiction, studying the minimal text length needed as well as the optimized
word distance $m$. The literature samples were found to be most effectively
represented by their corresponding power laws over degree distribution $P(k)$
and clustering coefficient $C(k)$; we also studied the mean geodesic distance,
and found all our texts were small-world networks. We observed a natural
break-point at $k=\sqrt{N}$ where the power law in the degree distribution
changed, leading to separate power law fit for the bulk and the tail of $P(k)$.
Our linear discriminant analysis yielded a $73.8 \pm 5.15%$ accuracy for the
correct classification of novels and $69.1 \pm 1.22%$ for news stories. We
found an optimal word distance of $m=4$ and a minimum text length of 100 to 200
words $N$.
| 2,010 | Computation and Language |
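The network construction described above (words as nodes, edges between words occurring within word distance m) can be sketched as follows; the whitespace tokenisation, the toy text and the choice m=4 are assumptions for illustration.

```python
from collections import Counter
import networkx as nx

def cooccurrence_network(tokens, m=4):
    """Link each word to every distinct word within m positions to its right."""
    G = nx.Graph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + 1 + m]:
            if v != w:
                G.add_edge(w, v)
    return G

text = "the quick brown fox jumps over the lazy dog and the fox runs".split()
G = cooccurrence_network(text, m=4)

degree_histogram = Counter(dict(G.degree()).values())   # unnormalised P(k)
print(degree_histogram)
print(nx.average_clustering(G), nx.average_shortest_path_length(G))
```

From networks like this one can fit power laws to the degree distribution and clustering coefficient, and check the small-world property via the mean geodesic distance, as the abstract describes.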
Symmetric categorial grammar: residuation and Galois connections | The Lambek-Grishin calculus is a symmetric extension of the Lambek calculus:
in addition to the residuated family of product, left and right division
operations of Lambek's original calculus, one also considers a family of
coproduct, right and left difference operations, related to the former by an
arrow-reversing duality. Communication between the two families is implemented
in terms of linear distributivity principles. The aim of this paper is to
complement the symmetry between (dual) residuated type-forming operations with
an orthogonal opposition that contrasts residuated and Galois connected
operations. Whereas the (dual) residuated operations are monotone, the Galois
connected operations (and their duals) are antitone. We discuss the algebraic
properties of the (dual) Galois connected operations, and generalize the
(co)product distributivity principles to include the negative operations. We
give a continuation-passing-style translation for the new type-forming
operations, and discuss some linguistic applications.
| 2,010 | Computation and Language |
Space and the Synchronic A-Ram | Space is a circuit oriented, spatial programming language designed to exploit
the massive parallelism available in a novel formal model of computation called
the Synchronic A-Ram, and physically related FPGA and reconfigurable
architectures. Space expresses variable grained MIMD parallelism, is modular,
strictly typed, and deterministic. Barring operations associated with memory
allocation and compilation, modules cannot access global variables, and are
referentially transparent. At a high level of abstraction, modules exhibit a
small, sequential state transition system, aiding verification. Space deals
with communication, scheduling, and resource contention issues in parallel
computing, by resolving them explicitly in an incremental manner, module by
module, whilst ascending the ladder of abstraction. Whilst the Synchronic A-Ram
model was inspired by linguistic considerations, it is also put forward as a
formal model for reconfigurable digital circuits. A programming environment has
been developed, that incorporates a simulator and compiler that transform Space
programs into Synchronic A-Ram machine code, consisting of only three bit-level
instructions, and a marking instruction. Space and the Synchronic A-Ram point
to novel routes out of the parallel computing crisis.
| 2,010 | Computation and Language |
For the sake of simplicity: Unsupervised extraction of lexical
simplifications from Wikipedia | We report on work in progress on extracting lexical simplifications (e.g.,
"collaborate" -> "work together"), focusing on utilizing edit histories in
Simple English Wikipedia for this task. We consider two main approaches: (1)
deriving simplification probabilities via an edit model that accounts for a
mixture of different operations, and (2) using metadata to focus on edits that
are more likely to be simplification operations. We find our methods to
outperform a reasonable baseline and yield many high-quality lexical
simplifications not included in an independently-created manually prepared
list.
| 2,010 | Computation and Language |
Don't 'have a clue'? Unsupervised co-learning of downward-entailing
operators | Researchers in textual entailment have begun to consider inferences involving
'downward-entailing operators', an interesting and important class of lexical
items that change the way inferences are made. Recent work proposed a method
for learning English downward-entailing operators that requires access to a
high-quality collection of 'negative polarity items' (NPIs). However, English
is one of the very few languages for which such a list exists. We propose the
first approach that can be applied to the many languages for which there is no
pre-existing high-precision database of NPIs. As a case study, we apply our
method to Romanian and show that our method yields good results. Also, we
perform a cross-linguistic analysis that suggests interesting connections to
some findings in linguistic typology.
| 2,010 | Computation and Language |
Lexical Co-occurrence, Statistical Significance, and Word Association | Lexical co-occurrence is an important cue for detecting word associations. We
present a theoretical framework for discovering statistically significant
lexical co-occurrences from a given corpus. In contrast with the prevalent
practice of giving weightage to unigram frequencies, we focus only on the
documents containing both the terms (of a candidate bigram). We detect biases
in span distributions of associated words, while being agnostic to variations
in global unigram frequencies. Our framework has the fidelity to distinguish
different classes of lexical co-occurrences, based on strengths of the document
and corpus-level cues of co-occurrence in the data. We perform extensive
experiments on benchmark data sets to study the performance of various
co-occurrence measures currently known in the literature. We find that a
relatively obscure measure called Ochiai, and a newly introduced measure, CSA,
capture the notion of lexical co-occurrence best, followed by LLR, Dice,
and TTest, while another popular measure, PMI, surprisingly performs poorly in
the context of lexical co-occurrence.
| 2,010 | Computation and Language |
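Several of the compared measures can be written down directly from document-level counts. The formulas below follow standard definitions of PMI, Dice and Ochiai and may differ in detail from the exact variants used in the paper; the counts are toy values.

```python
import math

def measures(n_xy, n_x, n_y, n_docs):
    """n_xy: docs containing both terms; n_x, n_y: docs containing each term."""
    p_xy, p_x, p_y = n_xy / n_docs, n_x / n_docs, n_y / n_docs
    pmi = math.log2(p_xy / (p_x * p_y)) if n_xy else float("-inf")
    dice = 2 * n_xy / (n_x + n_y)
    ochiai = n_xy / math.sqrt(n_x * n_y)
    return {"PMI": pmi, "Dice": dice, "Ochiai": ochiai}

print(measures(n_xy=30, n_x=50, n_y=80, n_docs=10_000))
```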
Emotional State Categorization from Speech: Machine vs. Human | This paper presents our investigations on emotional state categorization from
speech signals with a psychologically inspired computational model against
human performance under the same experimental setup. Based on psychological
studies, we propose a multistage categorization strategy which allows
establishing an automatic categorization model flexibly for a given emotional
speech categorization task. We apply the strategy to the Serbian Emotional
Speech Corpus (GEES) and the Danish Emotional Speech Corpus (DES), where human
performance was reported in previous psychological studies. Our work is the
first attempt to apply machine learning to the GEES corpus, for which only
human recognition rates were available prior to our study. Unlike the previous
work on the DES corpus, our work focuses on a comparison to human performance
under the same experimental settings. Our studies suggest that
psychology-inspired systems yield behaviours that, to a great extent, resemble
what humans perceived, and their performance is close to that of humans under
the same experimental setup. Furthermore, our work also uncovers some
differences between machine and humans in terms of emotional state recognition
from speech.
| 2,010 | Computation and Language |
Constructions définitoires des tables du Lexique-Grammaire | Lexicon-Grammar tables are a very rich syntactic lexicon for the French
language. This linguistic database is nevertheless not directly suitable for
use by computer programs, as it is incomplete and lacks consistency. Tables are
defined on the basis of features which are not explicitly recorded in the
lexicon. These features are only described in the literature. Our aim is to
define these essential properties for each table, to make the tables usable in
various Natural Language Processing (NLP) applications, such as parsing.
| 2,010 | Computation and Language |
Tableaux for the Lambek-Grishin calculus | Categorial type logics, pioneered by Lambek, seek a proof-theoretic
understanding of natural language syntax by identifying categories with
formulas and derivations with proofs. We typically observe an intuitionistic
bias: a structural configuration of hypotheses (a constituent) derives a single
conclusion (the category assigned to it). Acting upon suggestions of Grishin to
dualize the logical vocabulary, Moortgat proposed the Lambek-Grishin calculus
(LG) with the aim of restoring symmetry between hypotheses and conclusions. We
develop a theory of labeled modal tableaux for LG, inspired by the
interpretation of its connectives as binary modal operators in the relational
semantics of Kurtonina and Moortgat. As a linguistic application of our method,
we show that grammars based on LG are context-free through use of an
interpolation lemma. This result complements that of Melissen, who proved that
LG augmented with mixed associativity and commutativity exceeds LTAG in
expressive power.
| 2,010 | Computation and Language |
Niche as a determinant of word fate in online groups | Patterns of word use both reflect and influence a myriad of human activities
and interactions. Like other entities that are reproduced and evolve, words
rise or decline depending upon a complex interplay between their intrinsic
properties and the environments in which they function. Using Internet
discussion communities as model systems, we define the concept of a word niche
as the relationship between the word and the characteristic features of the
environments in which it is used. We develop a method to quantify two important
aspects of the size of the word niche: the range of individuals using the word
and the range of topics it is used to discuss. Controlling for word frequency,
we show that these aspects of the word niche are strong determinants of changes
in word frequency. Previous studies have already indicated that word frequency
itself is a correlate of word success at historical time scales. Our analysis
of changes in word frequencies over time reveals that the relative sizes of
word niches are far more important than word frequencies in the dynamics of the
entire vocabulary at shorter time scales, as the language adapts to new
concepts and social groupings. We also distinguish endogenous versus exogenous
factors as additional contributors to the fates of words, and demonstrate the
force of this distinction in the rise of novel words. Our results indicate that
short-term nonstationarity in word statistics is strongly driven by individual
proclivities, including inclinations to provide novel information and to
project a distinctive social identity.
| 2,011 | Computation and Language |
A probabilistic top-down parser for minimalist grammars | This paper describes a probabilistic top-down parser for minimalist grammars.
Top-down parsers have the great advantage of having a certain predictive power
during the parsing, which takes place in a left-to-right reading of the
sentence. Such parsers have already been well-implemented and studied in the
case of Context-Free Grammars, which are already top-down, but these are
difficult to adapt to Minimalist Grammars, which generate sentences bottom-up.
I propose here a way of rewriting Minimalist Grammars as Linear Context-Free
Rewriting Systems, which makes it easy to create a top-down parser. This
rewriting also makes it possible to place a probabilistic field on these
grammars, which can be used to accelerate the parser. Finally, I propose a
method of refining the probabilistic field using algorithms from data compression.
| 2,010 | Computation and Language |
Learning Taxonomy for Text Segmentation by Formal Concept Analysis | In this paper the problems of deriving a taxonomy from a text and
concept-oriented text segmentation are approached. The Formal Concept Analysis
(FCA) method is applied to solve both of these linguistic problems. The
proposed segmentation method offers a conceptual view for text segmentation,
using a context-driven clustering of sentences. The Concept-oriented Clustering
Segmentation algorithm (COCS) is based on k-means linear clustering of the
sentences. Experimental results obtained using the COCS algorithm are presented.
| 2,010 | Computation and Language |
Stabilizing knowledge through standards - A perspective for the
humanities | It is usual to consider that standards generate mixed feelings among
scientists. They are often seen as not really reflecting the state of the art
in a given domain and a hindrance to scientific creativity. Still, scientists
should theoretically be in the best position to bring their expertise into
standards development, being even more neutral on issues that may typically be
related to competing industrial interests. Even if developing standards in the
humanities might seem still more complex, we
will show how this can be made feasible through the experience gained both
within the Text Encoding Initiative consortium and the International
Organisation for Standardisation. By taking the specific case of lexical
resources, we will try to show how this brings about new ideas for designing
future research infrastructures in the human and social sciences.
| 2,011 | Computation and Language |
A PDTB-Styled End-to-End Discourse Parser | We have developed a full discourse parser in the Penn Discourse Treebank
(PDTB) style. Our trained parser first identifies all discourse and
non-discourse relations, locates and labels their arguments, and then
classifies their relation types. When appropriate, the attribution spans to
these relations are also determined. We present a comprehensive evaluation from
both component-wise and error-cascading perspectives.
| 2,014 | Computation and Language |
Emoticonsciousness | A temporal analysis of emoticon use in Swedish, Italian, German and English
asynchronous electronic communication is reported. Emoticons are classified as
positive, negative and neutral. Postings to newsgroups over a 66 week period
are considered. The aggregate analysis of emoticon use in newsgroups for
science and politics tends on the whole to be consistent over the entire time
period. Where possible, events that coincide with divergences from trends in
language-subject pairs are noted. Political discourse in Italian over the
period shows marked use of negative emoticons, and in Swedish, positive
emoticons.
| 2,010 | Computation and Language |
Integration of Agile Ontology Mapping towards NLP Search in I-SOAS | In this research paper we address the importance of Product Data Management
(PDM) with respect to its contributions in industry. Moreover, we present some
of the major challenges currently facing PDM communities and, targeting some of
these challenges, we present an approach, I-SOAS, and briefly discuss how it
can help solve the problems faced by the PDM community. Furthermore, limiting
the scope of this research to one challenge, we focus on the implementation of
a semantics-based search mechanism in PDM systems. Going into the details, we
first describe the relevant field, i.e. Language Technology (LT), which
contributes to natural language processing, and how it can be used to implement
a search engine capable of extracting the semantics of natural-language search
queries. Then we discuss how we can practically take advantage of LT by
implementing its concepts in the form of a software application using semantic
web technology, i.e. ontology. Finally, at the end of this research paper, we
briefly present a prototype application developed using concepts from LT for
semantics-based search.
| 2,010 | Computation and Language |
Motifs de graphe pour le calcul de dépendances syntaxiques complètes | This article describes a method to build syntactic dependencies starting
from the phrase-structure parsing process. The goal is to obtain all the
information needed for a detailed semantic analysis. Interaction Grammars
are used for parsing; the saturation of polarities, which is the core of this
formalism, can be mapped to dependency relations. Formally, graph patterns are
used to express the set of constraints which control dependency creation.
| 2,010 | Computation and Language |
Opinion Polarity Identification through Adjectives | "What other people think" has always been an important piece of information
during various decision-making processes. Today people frequently make their
opinions available via the Internet, and as a result, the Web has become an
excellent source for gathering consumer opinions. There are now numerous Web
resources containing such opinions, e.g., product reviews forums, discussion
groups, and Blogs. But, due to the large amount of information and the wide
range of sources, it is essentially impossible for a customer to read all of
the reviews and make an informed decision on whether to purchase the product.
It is also difficult for the manufacturer or seller of a product to accurately
monitor customer opinions. For this reason, mining customer reviews, or opinion
mining, has become an important issue for research in Web information
extraction. One of the important topics in this research area is the
identification of opinion polarity. The opinion polarity of a review is usually
expressed with values 'positive', 'negative' or 'neutral'. We propose a
technique for identifying polarity of reviews by identifying the polarity of
the adjectives that appear in them. Our evaluation shows the technique can
provide accuracy in the area of 73%, which is well above the 58%-64% provided
by naive Bayesian classifiers.
| 2,010 | Computation and Language |
La réduction de termes complexes dans les langues de spécialité | Our study applies statistical methods to French and Italian corpora to
examine the phenomenon of multi-word term reduction in specialty languages.
There are two kinds of reduction: anaphoric and lexical. We show that anaphoric
reduction depends on the discourse type (vulgarization, pedagogical,
specialized) but is independent of both domain and language; that lexical
reduction depends on domain and is more frequent in technical, rapidly evolving
domains; and that anaphoric reductions tend to follow full terms rather than
precede them. We define the notion of the anaphoric tree of the term and study
its properties. Concerning lexical reduction, we attempt to prove statistically
that there is a notion of term lifecycle, where the full form is progressively
replaced by a lexical reduction. ----- Nous étudions par des méthodes
statistiques sur des corpus français et italiens, le phénomène de
réduction des termes complexes dans les langues de spécialité. Il existe
deux types de réductions : anaphorique et lexicale. Nous montrons que la
réduction anaphorique dépend du type de discours (de vulgarisation,
pédagogique, spécialisé) mais ne dépend ni du domaine, ni de la langue,
alors que la réduction lexicale dépend du domaine et est plus fréquente
dans les domaines techniques à évolution rapide. D'autre part, nous
montrons que la réduction anaphorique a tendance à suivre la forme pleine
du terme, nous définissons une notion d'arbre anaphorique de terme et nous
étudions ses propriétés. Concernant la réduction lexicale, nous tentons
de démontrer statistiquement qu'il existe une notion de cycle de vie de
terme, où la forme pleine est progressivement remplacée par une réduction
lexicale.
| 2,011 | Computation and Language |
The semantic mapping of words and co-words in contexts | Meaning can be generated when information is related at a systemic level.
Such a system can be an observer, but also a discourse, for example,
operationalized as a set of documents. The measurement of semantics as
similarity in patterns (correlations) and latent variables (factor analysis)
has been enhanced by computer techniques and the use of statistics; for
example, in "Latent Semantic Analysis". This communication provides an
introduction, an example, pointers to relevant software, and summarizes the
choices that can be made by the analyst. Visualization ("semantic mapping") is
thus made more accessible.
| 2,011 | Computation and Language |
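A minimal sketch of the kind of latent-variable analysis the abstract points to: a term-document matrix factored with a truncated SVD, as in Latent Semantic Analysis, whose low-rank word coordinates can then be plotted as a semantic map. The toy documents are assumptions for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "stocks rose as markets rallied",
    "investors sold stocks in falling markets",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)                      # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)
word_coords = svd.components_.T                  # terms x 2 latent dimensions

for word, xy in zip(vec.get_feature_names_out(), word_coords):
    print(f"{word:10s} {xy[0]:+.2f} {xy[1]:+.2f}")
```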
MUDOS-NG: Multi-document Summaries Using N-gram Graphs (Tech Report) | This report describes the MUDOS-NG summarization system, which applies a set
of language-independent and generic methods for generating extractive
summaries. The proposed methods are mostly combinations of simple operators on
a generic character n-gram graph representation of texts. This work defines the
set of used operators upon n-gram graphs and proposes using these operators
within the multi-document summarization process in such subtasks as document
analysis, salient sentence selection, query expansion and redundancy control.
Furthermore, a novel chunking methodology is used, together with a novel way to
assign concepts to sentences for query expansion. The experimental results of
the summarization system, performed upon widely used corpora from the Document
Understanding and the Text Analysis Conferences, are promising and provide
evidence for the potential of the generic methods introduced. This work aims to
designate core methods exploiting the n-gram graph representation, providing
the basis for more advanced summarization systems.
| 2,010 | Computation and Language |
Categorial Minimalist Grammar | We first recall some basic notions on minimalist grammars and on categorial
grammars. Next we shortly introduce partially commutative linear logic, and our
representation of minimalist grammars within this categorial system, the
so-called categorial minimalist grammars. Thereafter we briefly present
$\lambda\mu$-DRT (Discourse Representation Theory), an extension of $\lambda$-DRT
(compositional DRT) in the framework of the $\lambda\mu$-calculus: it avoids type
raising and derives different readings from a single semantic representation,
in a setting which follows discourse structure. We run a complete example which
illustrates the various structures and rules that are needed to derive a
semantic representation from the categorial view of a transformational
syntactic analysis.
| 2,010 | Computation and Language |
Annotated English | This document presents Annotated English, a system of diacritical symbols
which turns English pronunciation into a precise and unambiguous process. The
annotations are defined and located in such a way that the original English
text is not altered (not even a letter), thus allowing for a consistent reading
and learning of the English language with and without annotations. The
annotations are based on a set of general rules that make the frequency of
annotations not dramatically high. This makes the reader easily associate
annotations with exceptions, and makes it possible to shape, internalise and
consolidate some rules for the English language which otherwise are weakened by
the enormous amount of exceptions in English pronunciation. The advantages of
this annotation system are manifold. Any existing text can be annotated without
a significant increase in size. This means that we can get an annotated version
of any document or book with the same number of pages and fontsize. Since no
letter is affected, the text can be perfectly read by a person who does not
know the annotation rules, since annotations can be simply ignored. The
annotations are based on a set of rules which can be progressively learned and
recognised, even in cases where the reader has no access or time to read the
rules. This means that a reader can understand most of the annotations after
reading a few pages of Annotated English, and can take advantage of that
knowledge for any other annotated document she may read in the future.
| 2,011 | Computation and Language |
Concrete Sentence Spaces for Compositional Distributional Models of
Meaning | Coecke, Sadrzadeh, and Clark (arXiv:1003.4394v1 [cs.CL]) developed a
compositional model of meaning for distributional semantics, in which each word
in a sentence has a meaning vector and the distributional meaning of the
sentence is a function of the tensor products of the word vectors. Abstractly
speaking, this function is the morphism corresponding to the grammatical
structure of the sentence in the category of finite dimensional vector spaces.
In this paper, we provide a concrete method for implementing this linear
meaning map, by constructing a corpus-based vector space for the type of
sentence. Our construction method is based on structured vector spaces whereby
meaning vectors of all sentences, regardless of their grammatical structure,
live in the same vector space. Our proposed sentence space is the tensor
product of two noun spaces, in which the basis vectors are pairs of words each
augmented with a grammatical role. This enables us to compare meanings of
sentences by simply taking the inner product of their vectors.
| 2,011 | Computation and Language |
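A hedged numerical sketch of the construction described above, with the sentence space taken to be the tensor product of two noun spaces. The dimension, the random noun vectors, and the way the verb tensor is assembled here (a sum of outer products of example subject/object vectors) are illustrative assumptions, not the paper's corpus-trained model.

```python
import numpy as np

dim = 4
rng = np.random.default_rng(0)
subj, obj = rng.random(dim), rng.random(dim)                # toy noun vectors
verb = sum(np.outer(rng.random(dim), rng.random(dim)) for _ in range(3))

# The sentence vector lives in N (x) N: weight the verb tensor pointwise by
# the outer product of this sentence's subject and object vectors.
sentence = np.outer(subj, obj) * verb

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

other = np.outer(rng.random(dim), rng.random(dim)) * verb   # another sentence
print(cosine(sentence, other))                              # sentence similarity
```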
A Context-theoretic Framework for Compositionality in Distributional
Semantics | Techniques in which words are represented as vectors have proved useful in
many applications in computational linguistics; however, there is currently no
general semantic formalism for representing meaning in terms of vectors. We
present a framework for natural language semantics in which words, phrases and
sentences are all represented as vectors, based on a theoretical analysis which
assumes that meaning is determined by context.
In the theoretical analysis, we define a corpus model as a mathematical
abstraction of a text corpus. The meaning of a string of words is assumed to be
a vector representing the contexts in which it occurs in the corpus model.
Based on this assumption, we can show that the vector representations of words
can be considered as elements of an algebra over a field. We note that in
applications of vector spaces to representing meanings of words there is an
underlying lattice structure; we interpret the partial ordering of the lattice
as describing entailment between meanings. We also define the context-theoretic
probability of a string, and, based on this and the lattice structure, a degree
of entailment between strings.
We relate the framework to existing methods of composing vector-based
representations of meaning, and show that our approach generalises many of
these, including vector addition, component-wise multiplication, and the tensor
product.
| 2,015 | Computation and Language |
Geometric representations for minimalist grammars | We reformulate minimalist grammars as partial functions on term algebras for
strings and trees. Using filler/role bindings and tensor product
representations, we construct homomorphisms for these data structures into
geometric vector spaces. We prove that the structure-building functions as well
as simple processors for minimalist languages can be realized by piecewise
linear operators in representation space. We also propose harmony, i.e. the
distance of an intermediate processing step from the final well-formed state in
representation space, as a measure of processing complexity. Finally, we
illustrate our findings by means of two particular arithmetic and fractal
representations.
| 2,012 | Computation and Language |
Developing a New Approach for Arabic Morphological Analysis and
Generation | Arabic morphological analysis is one of the essential stages in Arabic
Natural Language Processing. In this paper we present an approach for Arabic
morphological analysis. This approach is based on Arabic morphological
automaton (AMAUT). The proposed technique uses a morphological database
realized using the XMODEL language. Arabic morphology represents a special type
of morphological system because it is based on the concept of the scheme to
represent Arabic words. We use this concept to develop the Arabic morphological
automata. The proposed approach has a standardization aspect for development. It can be
exploited by NLP applications such as syntactic and semantic analysis,
information retrieval, machine translation and orthographical correction. The
proposed approach is compared with Xerox Arabic Analyzer and Smrz Arabic
Analyzer.
| 2,011 | Computation and Language |
Polarized Montagovian Semantics for the Lambek-Grishin calculus | Grishin proposed enriching the Lambek calculus with multiplicative
disjunction (par) and coresiduals. Applications to linguistics were discussed
by Moortgat, who spoke of the Lambek-Grishin calculus (LG). In this paper, we
adapt Girard's polarity-sensitive double negation embedding for classical logic
to extract a compositional Montagovian semantics from a display calculus for
focused proof search in LG. We seize the opportunity to illustrate our approach
alongside an analysis of extraction, providing linguistic motivation for linear
distributivity of tensor over par, thus answering a question of
Kurtonina & Moortgat. We conclude by comparing our proposal to the continuation
semantics of Bernardi & Moortgat, corresponding to call-by-name and
call-by-value evaluation strategies.
| 2,011 | Computation and Language |
Malagasy Dialects and the Peopling of Madagascar | The origin of Malagasy DNA is half African and half Indonesian, nevertheless
the Malagasy language, spoken by the entire population, belongs to the
Austronesian family. The language most closely related to Malagasy is Maanyan
(Greater Barito East group of the Austronesian family), but related languages
are also in Sulawesi, Malaysia and Sumatra. For this reason, and because
Maanyan is spoken by a population which lives along the Barito river in
Kalimantan and which does not possess the necessary skill for long maritime
navigation, the ethnic composition of the Indonesian colonizers is still
unclear.
There is a general consensus that Indonesian sailors reached Madagascar by a
maritime trek, but the time, the path and the landing area of the first
colonization are all disputed. In this research we try to answer these problems
together with other ones, such as the historical configuration of Malagasy
dialects, by types of analysis related to lexicostatistics and glottochronology
which draw upon the automated method recently proposed by the authors
(Serva 2008; Holman 2008; Petroni 2008; Bakker 2009). The data were
collected by the first author at the beginning of 2010 with the invaluable help
of Joselinà Soafara Néré and consist of Swadesh lists of 200 items for 23
dialects covering all areas of the Island.
| 2,011 | Computation and Language |
The effect of linguistic constraints on the large scale organization of
language | This paper studies the effect of linguistic constraints on the large scale
organization of language. It describes the properties of linguistic networks
built using texts of written language with the words randomized. These
properties are compared to those obtained for a network built over the text in
natural order. It is observed that the "random" networks too exhibit
small-world and scale-free characteristics. They also show a high degree of
clustering. This is indeed a surprising result - one that has not been
addressed adequately in the literature. We hypothesize that many of the network
statistics studied here are in fact functions of the distribution of
the underlying data from which the network is built, and may not be indicative
of the nature of the network concerned.
| 2,011 | Computation and Language |
Universal Higher Order Grammar | We examine the class of languages that can be defined entirely in terms of
provability in an extension of the sorted type theory (Ty_n) by embedding the
logic of phonologies, without introduction of special types for syntactic
entities. This class is proven to precisely coincide with the class of
logically closed languages that may be thought of as functions from expressions
to sets of logically equivalent Ty_n terms. For a specific sub-class of
logically closed languages that are described by finite sets of rules or rule
schemata, we find effective procedures for building a compact Ty_n
representation, involving a finite number of axioms or axiom schemata. The
proposed formalism is characterized by some useful features unavailable in a
two-component architecture of a language model. A further specialization and
extension of the formalism with a context type enable effective account of
intensional and dynamic semantics.
| 2,011 | Computation and Language |
Recognizing Uncertainty in Speech | We address the problem of inferring a speaker's level of certainty based on
prosodic information in the speech signal, which has application in
speech-based dialogue systems. We show that using phrase-level prosodic
features centered around the phrases causing uncertainty, in addition to
utterance-level prosodic features, improves our model's level of certainty
classification. In addition, our models can be used to predict which phrase a
person is uncertain about. These results rely on a novel method for eliciting
utterances of varying levels of certainty that allows us to compare the utility
of contextually-based feature sets. We elicit level of certainty ratings from
both the speakers themselves and a panel of listeners, finding that there is
often a mismatch between speakers' internal states and their perceived states,
and highlighting the importance of this distinction.
| 2,011 | Computation and Language |
Self reference in word definitions | Dictionaries are inherently circular in nature. A given word is linked to a
set of alternative words (the definition) which in turn point to further
descendants. Iterating through definitions in this way, one typically finds
that definitions loop back upon themselves. The graph formed by such
definitional relations is our object of study. By eliminating those links which
are not in loops, we arrive at a core subgraph of highly connected nodes.
We observe that definitional loops are conveniently classified by length,
with longer loops usually emerging from semantic misinterpretation. By breaking
the long loops in the graph of the dictionary, we arrive at a set of
disconnected clusters. We find that the words in these clusters constitute
semantic units, and moreover tend to have been introduced into the English
language at similar times, suggesting a possible mechanism for language
evolution.
| 2,011 | Computation and Language |
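The definitional-loop structure described above can be sketched as a directed graph whose cycles form the core; the mini-dictionary below is made up for illustration, not real dictionary data.

```python
import networkx as nx

definitions = {
    "large": ["big"], "big": ["large"],
    "happy": ["feeling", "joy"], "joy": ["happy", "feeling"],
    "feeling": ["emotion"], "emotion": ["feeling"],
    "dog": ["animal"], "animal": ["organism"],
}

G = nx.DiGraph()
G.add_edges_from((head, w) for head, words in definitions.items() for w in words)

cycles = list(nx.simple_cycles(G))          # definitional loops, by length
core = {n for cyc in cycles for n in cyc}   # nodes that lie on some loop
print(sorted(cycles, key=len))
print(sorted(core))
```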
Fitting Ranked English and Spanish Letter Frequency Distribution in U.S.
and Mexican Presidential Speeches | The limited range in its abscissa of ranked letter frequency distributions
causes multiple functions to fit the observed distribution reasonably well. In
order to critically compare various functions, we apply the statistical model
selections on ten functions, using the texts of U.S. and Mexican presidential
speeches in the last 1-2 centuries. Despite minor switching of the ranking order of
certain letters during the temporal evolution for both datasets, the letter
usage is generally stable. The best fitting function, judged by either
least-square-error or by AIC/BIC model selection, is the Cocho/Beta function.
We also use a novel method to discover clusters of letters by their
observed-over-expected frequency ratios.
| 2,011 | Computation and Language |
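A hedged sketch of the fitting step: the Cocho/Beta form used here, f(r) = C (N + 1 - r)^b / r^a, follows the two-parameter family commonly used for ranked data and may differ from the paper's exact parameterisation; the letter counts are toy values, and the AIC is the usual Gaussian-error form up to a constant.

```python
import numpy as np
from scipy.optimize import curve_fit

freqs = np.array([812, 640, 591, 512, 430, 322, 288, 261, 201, 143,
                  120, 98, 77, 60, 41, 30, 22, 15, 9, 4], dtype=float)
ranks = np.arange(1, len(freqs) + 1)
N = len(freqs)

def cocho_beta(r, C, a, b):
    # Two-parameter rank-frequency form with overall scale C.
    return C * (N + 1 - r) ** b / r ** a

popt, _ = curve_fit(cocho_beta, ranks, freqs, p0=[freqs[0], 0.5, 0.5])
rss = float(np.sum((freqs - cocho_beta(ranks, *popt)) ** 2))
aic = N * np.log(rss / N) + 2 * len(popt)
print(popt, aic)
```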
Codeco: A Grammar Notation for Controlled Natural Language in Predictive
Editors | Existing grammar frameworks do not work out particularly well for controlled
natural languages (CNL), especially if they are to be used in predictive
editors. I introduce in this paper a new grammar notation, called Codeco, which
is designed specifically for CNLs and predictive editors. Two different parsers
have been implemented and a large subset of Attempto Controlled English (ACE)
has been represented in Codeco. The results show that Codeco is practical,
adequate and efficient.
| 2,010 | Computation and Language |
Materials to the Russian-Bulgarian Comparative Dictionary "EAD" | This article presents a fragment of a new comparative dictionary "A
comparative dictionary of names of expansive action in Russian and Bulgarian
languages". Main features of the new web-based comparative dictionary are
placed, the principles of its formation are shown, primary links between the
word-matches are classified. The principal difference between translation
dictionaries and the model of double comparison is also shown. The
classification scheme of the pages is proposed. New concepts and keywords have
been introduced. The real prototype of the dictionary with a few key pages is
published. The broad debate about the possibility of this prototype to become a
version of Russian-Bulgarian comparative dictionary of a new generation is
available.
| 2,011 | Computation and Language |
A Universal Part-of-Speech Tagset | To facilitate future research in unsupervised induction of syntactic
structure and to standardize best-practices, we propose a tagset that consists
of twelve universal part-of-speech categories. In addition to the tagset, we
develop a mapping from 25 different treebank tagsets to this universal set. As
a result, when combined with the original treebank data, this universal tagset
and mapping produce a dataset consisting of common parts-of-speech for 22
different languages. We highlight the use of this resource via two experiments,
including one that reports competitive accuracies for unsupervised grammar
induction without gold standard part-of-speech tags.
| 2,015 | Computation and Language |
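The tagset mapping idea can be illustrated with a small hand-picked Penn Treebank fragment; the official mappings for the 25 treebanks are distributed by the authors, so the dictionary below is an assumption for illustration only.

```python
PENN_TO_UNIVERSAL = {
    "NN": "NOUN", "NNS": "NOUN", "NNP": "NOUN",
    "VB": "VERB", "VBD": "VERB", "VBP": "VERB", "VBZ": "VERB",
    "JJ": "ADJ", "RB": "ADV", "DT": "DET", "IN": "ADP",
    "CC": "CONJ", "PRP": "PRON", "CD": "NUM", ".": ".",
}

def to_universal(tagged_sentence):
    """Map (word, treebank tag) pairs to coarse universal tags; X = unknown."""
    return [(w, PENN_TO_UNIVERSAL.get(t, "X")) for w, t in tagged_sentence]

print(to_universal([("Dogs", "NNS"), ("bark", "VBP"), (".", ".")]))
```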
Seeking Meaning in a Space Made out of Strokes, Radicals, Characters and
Compounds | Chinese characters can be compared to a molecular structure: a character is
analogous to a molecule, radicals are like atoms, calligraphic strokes
correspond to elementary particles, and when characters form compounds, they
are like molecular structures. In chemistry the conjunction of all of these
structural levels produces what we perceive as matter. In language, the
conjunction of strokes, radicals, characters, and compounds produces meaning.
But when does meaning arise? We all know that radicals are, in some sense, the
basic semantic components of Chinese script, but what about strokes?
Considering the fact that many characters are made by adding individual strokes
to (combinations of) radicals, we can legitimately ask the question whether
strokes carry meaning, or not. In this talk I will present my project of
extending traditional NLP techniques to radicals and strokes, aiming to obtain
a deeper understanding of the way ideographic languages model the world.
| 2,011 | Computation and Language |
Phylogeny and geometry of languages from normalized Levenshtein distance | The idea that the distance among pairs of languages can be evaluated from
lexical differences seems to have its roots in the work of the French explorer
Dumont D'Urville. He collected comparative words lists of various languages
during his voyages aboard the Astrolabe from 1826 to 1829 and, in his work
about the geographical division of the Pacific, he proposed a method to measure
the degree of relation between languages.
The method used by the modern lexicostatistics, developed by Morris Swadesh
in the 1950s, measures distances from the percentage of shared cognates, which
are words with a common historical origin. The weak point of this method is
that subjective judgment plays a relevant role.
Recently, we have proposed a new automated method which is motivated by the
analogy with genetics. The new approach avoids any subjectivity and results can
be easily replicated by other scholars. The distance between two languages is
defined by considering a renormalized Levenshtein distance between pairs of
words with the same meaning and averaging over the words contained in a list. The
renormalization, which takes into account the length of the words, plays a
crucial role, and no sensible results can be found without it.
In this paper we give a short review of our automated method and we
illustrate it by considering the cluster of Malagasy dialects. We show that it
sheds new light on their kinship relation and also that it furnishes a lot of
new information concerning the modalities of the settlement of Madagascar.
| 2,011 | Computation and Language |
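The normalised Levenshtein distance between word lists can be sketched directly; the dialect forms below are invented for illustration, and dividing by the longer word's length is one common normalisation, not necessarily the exact renormalisation used by the authors.

```python
def levenshtein(a, b):
    """Standard edit distance computed with a rolling row of the DP table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def language_distance(list1, list2):
    """Average normalised edit distance over aligned same-meaning words."""
    norm = [levenshtein(w1, w2) / max(len(w1), len(w2))
            for w1, w2 in zip(list1, list2)]
    return sum(norm) / len(norm)

dialect_a = ["rano", "vato", "maso", "lanitra"]   # illustrative forms only
dialect_b = ["rano", "vatu", "masu", "landitra"]
print(language_distance(dialect_a, dialect_b))
```

A matrix of such pairwise distances is what the lexicostatistical clustering and glottochronological dating are then run on.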
Performance Evaluation of Statistical Approaches for Text Independent
Speaker Recognition Using Source Feature | This paper presents a performance evaluation of statistical approaches
for a text-independent speaker recognition system using a source feature. The
linear prediction (LP) residual is used as a representation of the excitation
information in speech. The speaker-specific information in the excitation of
voiced speech is captured using statistical approaches such as Gaussian Mixture
Models (GMMs) and Hidden Markov Models (HMMs). The decrease in error during
training, and the close to 100 percent accuracy in recognizing speakers during
the testing phase, demonstrate that the excitation component of speech contains
speaker-specific information and that it is captured more effectively by a
continuous ergodic HMM than by a GMM. The performance of the speaker
recognition system is evaluated on a GMM and a 2-state ergodic HMM with
different numbers of mixture components and test speech durations. We
demonstrate the speaker recognition studies on the TIMIT database for both the
GMM and the ergodic HMM.
| 2,010 | Computation and Language |
Mark My Words! Linguistic Style Accommodation in Social Media | The psycholinguistic theory of communication accommodation accounts for the
general observation that participants in conversations tend to converge to one
another's communicative behavior: they coordinate in a variety of dimensions
including choice of words, syntax, utterance length, pitch and gestures. In its
almost forty years of existence, this theory has been empirically supported
exclusively through small-scale or controlled laboratory studies. Here we
address this phenomenon in the context of Twitter conversations. Undoubtedly,
this setting is unlike any other in which accommodation was observed and, thus,
challenging to the theory. Its novelty comes not only from its size, but also
from the non-real-time nature of conversations, from the 140-character length
restriction, from the wide variety of social relation types, and from a design
that was initially not geared towards conversation at all. Given such
constraints, it is not clear a priori whether accommodation is robust enough to
occur in this new environment. To investigate this, we
develop a probabilistic framework that can model accommodation and measure its
effects. We apply it to a large Twitter conversational dataset specifically
developed for this task. This is the first time the hypothesis of linguistic
style accommodation has been examined (and verified) in a large scale, real
world setting. Furthermore, when investigating concepts such as stylistic
influence and symmetry of accommodation, we discover a complexity of the
phenomenon which was never observed before. We also explore the potential
relation between stylistic influence and network features commonly associated
with social status.
| 2,009 | Computation and Language |
English-Lithuanian-English Machine Translation lexicon and engine:
current state and future work | This article overviews the current state of the English-Lithuanian-English
machine translation system. The first part of the article describes the
problems that the system poses today and what actions will be taken to solve
them in the future. The second part of the article tackles the main issue of
the translation process and briefly overviews the word sense disambiguation
technique for MT using Google.
| 2,006 | Computation and Language |
Multilingual lexicon design tool and database management system for MT | The paper presents the design and development of an English-Lithuanian-English
dictionary-lexicon tool and a lexicon database management system for MT. The
system is oriented to support two main requirements: to be open to the user and
to describe many more attributes of the parts of speech than a regular
dictionary, as required for MT. The Java programming language and the MySQL
database management system are used to implement the design tool and the
lexicon database, respectively. This solution makes it easy to deploy the
system on the Internet. The system is able to run on various operating systems,
such as Windows, Linux, Mac OS, and any other OS where the Java Virtual Machine
is supported. Since a modern lexicon database management system is used,
several users can access the same database without difficulty.
| 2,005 | Computation and Language |
A Compositional Distributional Semantics, Two Concrete Constructions,
and some Experimental Evaluations | We provide an overview of the hybrid compositional distributional model of
meaning, developed in Coecke et al. (arXiv:1003.4394v1 [cs.CL]), which is based
on the categorical methods also applied to the analysis of information flow in
quantum protocols. The mathematical setting stipulates that the meaning of a
sentence is a linear function of the tensor products of the meanings of its
words. We provide concrete constructions for this definition and present
techniques to build vector spaces for meaning vectors of words, as well as that
of sentences. The applicability of these methods is demonstrated via a toy
vector space as well as real data from the British National Corpus and two
disambiguation experiments.
| 2,011 | Computation and Language |
Perception of Personality and Naturalness through Dialogues by Native
Speakers of American English and Arabic | Linguistic markers of personality traits have been studied extensively, but
few cross-cultural studies exist. In this paper, we evaluate how native
speakers of American English and Arabic perceive personality traits and
naturalness of English utterances that vary along the dimensions of verbosity,
hedging, lexical and syntactic alignment, and formality. The utterances are the
turns within dialogue fragments that are presented as text transcripts to the
workers of Amazon's Mechanical Turk. The results of the study suggest that all
four dimensions can be used as linguistic markers of all personality traits by
both language communities. A further comparative analysis shows cross-cultural
differences for some combinations of measures of personality traits and
naturalness, the dimensions of linguistic variability and dialogue acts.
| 2,011 | Computation and Language |
A statistical learning algorithm for word segmentation | In natural speech, the speaker does not pause between words, yet a human
listener somehow perceives this continuous stream of phonemes as a series of
distinct words. The detection of boundaries between spoken words is an instance
of a general capability of the human neocortex to remember and to recognize
recurring sequences. This paper describes a computer algorithm that is designed
to solve the problem of locating word boundaries in blocks of English text from
which the spaces have been removed. This problem avoids the complexities of
speech processing but requires similar capabilities for detecting recurring
sequences. The algorithm relies entirely on statistical relationships between
letters in the input stream to infer the locations of word boundaries. A
Viterbi trellis is used to simultaneously evaluate a set of hypothetical
segmentations of a block of adjacent words. This technique improves accuracy
but incurs a small latency between the arrival of letters in the input stream
and the sending of words to the output stream. The source code for a C++
version of this algorithm is presented in an appendix.
| 2,011 | Computation and Language |
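A compact sketch of the segmentation idea (not the author's C++ program): a unigram word model scored over all segmentations of the unspaced input by dynamic programming, which stands in for the Viterbi trellis mentioned above. The word probabilities are toy assumptions; a real model would be estimated from the input stream itself.

```python
import math

WORD_LOGP = {
    "the": math.log(0.07), "cat": math.log(0.01), "sat": math.log(0.008),
    "on": math.log(0.03), "mat": math.log(0.005), "a": math.log(0.05),
}
UNKNOWN = math.log(1e-10)          # penalty for any substring not in the model

def segment(text, max_len=10):
    best = [0.0] + [float("-inf")] * len(text)   # best log-prob of each prefix
    back = [0] * (len(text) + 1)                 # start index of the last word
    for end in range(1, len(text) + 1):
        for start in range(max(0, end - max_len), end):
            score = best[start] + WORD_LOGP.get(text[start:end], UNKNOWN)
            if score > best[end]:
                best[end], back[end] = score, start
    words, end = [], len(text)
    while end > 0:                               # recover the best segmentation
        words.append(text[back[end]:end])
        end = back[end]
    return list(reversed(words))

print(segment("thecatsatonthemat"))
```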
Quantum-Like Uncertain Conditionals for Text Analysis | Simple representations of documents based on the occurrences of terms are
ubiquitous in areas like Information Retrieval, and also frequent in Natural
Language Processing. In this work we propose a logical-probabilistic approach
to the analysis of natural language text based on the concept of the Uncertain
Conditional, on top of a formulation of lexical measurements inspired by the
theoretical concept of ideal quantum measurements. The proposed concept can be
used for generating topic-specific representations of text, aiming to match in
a simple way the perception of a user with a pre-established idea of what the
usage of terms in the text should be. A simple example is developed with two
versions of a text in two languages, showing how regularities in the use of
terms are detected and easily represented.
| 2,011 | Computation and Language |
Computational Approach to Anaphora Resolution in Spanish Dialogues | This paper presents an algorithm for identifying noun-phrase antecedents of
pronouns and adjectival anaphors in Spanish dialogues. We believe that anaphora
resolution requires numerous sources of information in order to find the
correct antecedent of the anaphor. These sources can be of different kinds,
e.g., linguistic information, discourse/dialogue structure information, or
topic information. For this reason, our algorithm uses various different kinds
of information (hybrid information). The algorithm is based on linguistic
constraints and preferences and uses an anaphoric accessibility space within
which the algorithm finds the noun phrase. We present some experiments related
to this algorithm and this space using a corpus of 204 dialogues. The algorithm
is implemented in Prolog. According to this study, 95.9% of antecedents were
located in the proposed space, a precision of 81.3% was obtained for pronominal
anaphora resolution, and 81.5% for adjectival anaphora.
| 2,001 | Computation and Language |
Chameleons in imagined conversations: A new approach to understanding
coordination of linguistic style in dialogs | Conversational participants tend to immediately and unconsciously adapt to
each other's language styles: a speaker will even adjust the number of articles
and other function words in their next utterance in response to the number in
their partner's immediately preceding utterance. This striking level of
coordination is thought to have arisen as a way to achieve social goals, such
as gaining approval or emphasizing difference in status. But has the adaptation
mechanism become so deeply embedded in the language-generation process as to
become a reflex? We argue that fictional dialogs offer a way to study this
question, since authors create the conversations but don't receive the social
benefits (rather, the imagined characters do). Indeed, we find significant
coordination across many families of function words in our large movie-script
corpus. We also report suggestive preliminary findings on the effects of gender
and other features; e.g., surprisingly, for articles, on average, characters
adapt more to females than to males.
| 2,011 | Computation and Language |
Experimental Support for a Categorical Compositional Distributional
Model of Meaning | Modelling compositional meaning for sentences using empirical distributional
methods has been a challenge for computational linguists. We implement the
abstract categorical model of Coecke et al. (arXiv:1003.4394v1 [cs.CL]) using
data from the BNC and evaluate it. The implementation is based on unsupervised
learning of matrices for relational words and applying them to the vectors of
their arguments. The evaluation is based on the word disambiguation task
developed by Mitchell and Lapata (2008) for intransitive sentences, and on a
similar new experiment designed for transitive sentences. Our model matches the
results of its competitors in the first experiment, and betters them in the
second. The general improvement in results with increase in syntactic
complexity showcases the compositional power of our model.
| 2,011 | Computation and Language |
Acquiring Word-Meaning Mappings for Natural Language Interfaces | This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted
Examples), that acquires a semantic lexicon from a corpus of sentences paired
with semantic representations. The lexicon learned consists of phrases paired
with meaning representations. WOLFIE is part of an integrated system that
learns to transform sentences into representations such as logical database
queries. Experimental results are presented demonstrating WOLFIE's ability to
learn useful lexicons for a database interface in four different natural
languages. The usefulness of the lexicons learned by WOLFIE is compared to that
of lexicons acquired by a similar system, with results favorable to WOLFIE. A second
set of experiments demonstrates WOLFIE's ability to scale to larger and more
difficult, albeit artificially generated, corpora. In natural language
acquisition, it is difficult to gather the annotated data needed for supervised
learning; however, unannotated data is fairly plentiful. Active learning
methods attempt to select for annotation and training only the most informative
examples, and therefore are potentially very useful in natural language
applications. However, most results to date for active learning have only
considered standard classification tasks. To reduce annotation effort while
maintaining accuracy, we apply active learning to semantic lexicons. We show
that active learning can significantly reduce the number of annotated examples
required to achieve a given level of performance.
| 2,003 | Computation and Language |
Translation of Pronominal Anaphora between English and Spanish:
Discrepancies and Evaluation | This paper evaluates the different tasks carried out in the translation of
pronominal anaphora in a machine translation (MT) system. The MT interlingua
approach named AGIR (Anaphora Generation with an Interlingua Representation)
improves upon other proposals presented to date because it is able to translate
intersentential anaphors, detect co-reference chains, and translate Spanish
zero pronouns into English---issues hardly considered by other systems. The
paper presents the resolution and evaluation of these anaphora problems in AGIR
with the use of different kinds of knowledge (lexical, morphological,
syntactic, and semantic). The translation of English and Spanish anaphoric
third-person personal pronouns (including Spanish zero pronouns) into the
target language has been evaluated on unrestricted corpora. We have obtained a
precision of 80.4% and 84.8% in the translation of Spanish and English
pronouns, respectively. Although we have only studied the Spanish and English
languages, our approach can be easily extended to other languages such as
Portuguese, Italian, or Japanese.
| 2,003 | Computation and Language |
Acquiring Correct Knowledge for Natural Language Generation | Natural language generation (NLG) systems are computer software systems that
produce texts in English and other human languages, often from non-linguistic
input data. NLG systems, like most AI systems, need substantial amounts of
knowledge. However, our experience in two NLG projects suggests that it is
difficult to acquire correct knowledge for NLG systems; indeed, every knowledge
acquisition (KA) technique we tried had significant problems. In general terms,
these problems were due to the complexity, novelty, and poorly understood
nature of the tasks our systems attempted, and were worsened by the fact that
people write so differently. This meant in particular that corpus-based KA
approaches suffered because it was impossible to assemble a sizable corpus of
high-quality consistent manually written texts in our domains; and structured
expert-oriented KA techniques suffered because experts disagreed and because we
could not get enough information about special and unusual cases to build
robust systems. We believe that such problems are likely to affect many other
NLG systems as well. In the long term, we hope that new KA techniques may
emerge to help NLG system builders. In the shorter term, we believe that
understanding how individual KA techniques can fail, and using a mixture of
different KA techniques with different strengths and weaknesses, can help
developers acquire NLG knowledge that is mostly correct.
| 2,003 | Computation and Language |
Entropy of Telugu | This paper presents an investigation of the entropy of the Telugu script.
Since this script is syllabic, and not alphabetic, the computation of entropy
is somewhat complicated.
| 2,011 | Computation and Language |
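Because the abstract above notes that entropy computation for a syllabic script is less straightforward than for an alphabet, a small sketch may help: the snippet below estimates the zeroth-order (unigram) entropy of a text once it has been segmented into syllables. The sample strings are invented, and a real Telugu implementation would need a proper syllabifier over Unicode aksharas.

```python
import math
from collections import Counter

def unigram_entropy(units):
    """Zeroth-order entropy H = -sum p(u) * log2 p(u) over observed units."""
    counts = Counter(units)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy example: pretend these strings are syllables produced by a syllabifier.
syllables = ["te", "lu", "gu", "bha", "sha", "te", "lu", "gu", "li", "pi"]
print(round(unigram_entropy(syllables), 3), "bits per syllable")
```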
On the origin of ambiguity in efficient communication | This article studies the emergence of ambiguity in communication through the
concept of logical irreversibility and within the framework of Shannon's
information theory. This leads us to a precise and general expression of the
intuition behind Zipf's vocabulary balance in terms of a symmetry equation
between the complexities of the coding and the decoding processes that imposes
an unavoidable amount of logical uncertainty in natural communication.
Accordingly, the emergence of irreversible computations is required if the
complexities of the coding and the decoding processes are balanced in a
symmetric scenario, which means that the emergence of ambiguous codes is a
necessary condition for natural communication to succeed.
| 2,013 | Computation and Language |
Notes on Electronic Lexicography | These notes are a continuation of topics covered by V. Selegej in his article
"Electronic Dictionaries and Computational lexicography". How can an electronic
dictionary have as its object the description of closely related languages?
Obviously, such a question allows multiple answers.
| 2,015 | Computation and Language |
Experimenting with Transitive Verbs in a DisCoCat | Formal and distributional semantic models offer complementary benefits in
modeling meaning. The categorical compositional distributional (DisCoCat) model
of meaning of Coecke et al. (arXiv:1003.4394v1 [cs.CL]) combines aspects of
both to provide a general framework in which meanings of words, obtained
distributionally, are composed using methods from the logical setting to form
sentence meaning. Concrete consequences of this general abstract setting and
applications to empirical data are under active study (Grefenstette et al.,
arxiv:1101.0309; Grefenstette and Sadrzadeh, arXiv:1106.4058v1 [cs.CL]). In
this paper, we extend this study by examining transitive verbs, represented as
matrices in a DisCoCat. We discuss three ways of constructing such matrices,
and evaluate each method in a disambiguation task developed by Grefenstette and
Sadrzadeh (arXiv:1106.4058v1 [cs.CL]).
| 2,011 | Computation and Language |
The settlement of Madagascar: what dialects and languages can tell | The dialects of Madagascar belong to the Greater Barito East group of the
Austronesian family and it is widely accepted that the Island was colonized by
Indonesian sailors after a maritime trek which probably took place around 650
CE. The language most closely related to the Malagasy dialects is Maanyan, but
Malay is also strongly related, especially as concerns navigation terms. Since
the Maanyan Dayaks live along the Barito river in Kalimantan (Borneo) and they
do not possess the necessary skills for long maritime navigation, they were
probably brought as subordinates by Malay sailors.
In a recent paper we compared 23 different Malagasy dialects in order to
determine the time and the landing area of the first colonization. In this
research we use new data and new methods to confirm that the landing took place
on the south-east coast of the Island. Furthermore, we are able to state here
that it is unlikely that there were multiple settlements and, therefore,
colonization consisted of a single founding event.
To reach our goal we determine the internal kinship relations among all 23
Malagasy dialects, as well as the kinship degrees of the 23 dialects with
respect to Malay and Maanyan. The method used is an automated version of
the lexicostatistic approach. The data concerning Madagascar were collected by
the author at the beginning of 2010 and consist of Swadesh lists of 200 items
for 23 dialects covering all areas of the Island. The lists for Maanyan and
Malay were obtained from published datasets supplemented by the author's interviews.
| 2,015 | Computation and Language |
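The automated lexicostatistic approach mentioned above compares Swadesh lists across language varieties. As a hedged illustration, the sketch below computes a simple pairwise distance between two word lists as the average normalized Levenshtein distance over aligned items; the exact distance measure and the clustering step used in the study may differ, and the toy word lists are invented.

```python
def levenshtein(a, b):
    """Classic edit distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lexicostatistic_distance(list_a, list_b):
    """Average normalized edit distance over aligned Swadesh items."""
    dists = [levenshtein(x, y) / max(len(x), len(y))
             for x, y in zip(list_a, list_b) if x and y]
    return sum(dists) / len(dists)

# Invented toy lists standing in for 200-item Swadesh lists of two dialects.
dialect_1 = ["rano", "afo", "vato"]
dialect_2 = ["ranu", "api", "watu"]
print(round(lexicostatistic_distance(dialect_1, dialect_2), 3))
```

Pairwise distances of this kind, computed over all dialect pairs plus Malay and Maanyan, can then be fed to a standard clustering or tree-building routine to obtain kinship relations.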
Finding Deceptive Opinion Spam by Any Stretch of the Imagination | Consumers increasingly rate, review and research products online.
Consequently, websites containing consumer reviews are becoming targets of
opinion spam. While recent work has focused primarily on manually identifiable
instances of opinion spam, in this work we study deceptive opinion
spam---fictitious opinions that have been deliberately written to sound
authentic. Integrating work from psychology and computational linguistics, we
develop and compare three approaches to detecting deceptive opinion spam, and
ultimately develop a classifier that is nearly 90% accurate on our
gold-standard opinion spam dataset. Based on feature analysis of our learned
models, we additionally make several theoretical contributions, including
revealing a relationship between deceptive opinions and imaginative writing.
| 2,011 | Computation and Language |
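The classifiers compared in the abstract above combine psycholinguistic and n-gram features. As a hedged sketch, the snippet below shows a generic unigram-plus-bigram text classifier of the kind such studies typically evaluate (here a linear SVM via scikit-learn); the tiny labelled examples are invented placeholders, and the paper's actual feature sets and gold-standard data are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy data: 1 = deceptive, 0 = truthful.
reviews = [
    "I had an absolutely amazing luxurious stay, best hotel ever",
    "Room was clean, staff polite, breakfast a bit slow",
    "My husband and I loved every single perfect moment here",
    "The elevator was out of order but the location is convenient",
]
labels = [1, 0, 1, 0]

# Unigrams and bigrams with tf-idf weighting feeding a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reviews, labels)
print(model.predict(["the most wonderful magical hotel experience imaginable"]))
```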
Fence - An Efficient Parser with Ambiguity Support for Model-Driven
Language Specification | Model-based language specification has applications in the implementation of
language processors, the design of domain-specific languages, model-driven
software development, data integration, text mining, natural language
processing, and corpus-based induction of models. Model-based language
specification decouples language design from language processing and, unlike
traditional grammar-driven approaches, which constrain language designers to
specific kinds of grammars, it needs general parser generators able to deal
with ambiguities. In this paper, we propose Fence, an efficient bottom-up
parsing algorithm with lexical and syntactic ambiguity support that enables the
use of model-based language specification in practice.
| 2,011 | Computation and Language |
A Semantic Relatedness Measure Based on Combined Encyclopedic,
Ontological and Collocational Knowledge | We describe a new semantic relatedness measure combining the Wikipedia-based
Explicit Semantic Analysis measure, the WordNet path measure and the mixed
collocation index. Our measure achieves the highest results to date on the
WS-353 test: a Spearman rho coefficient of 0.79 (vs. 0.75 in (Gabrilovich and
Markovitch, 2007)) when applying the measure directly, and a value of 0.87 (vs.
0.78 in (Agirre et al., 2009)) when using the prediction of a polynomial SVM
classifier trained on our measure.
In the appendix we discuss the adaptation of ESA to 2011 Wikipedia data, as
well as various unsuccessful attempts to enhance ESA by filtering at word,
sentence, and section level.
| 2,011 | Computation and Language |
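A small sketch of the evaluation setting described above: given per-pair scores from several relatedness measures (here random placeholders standing in for ESA, the WordNet path measure and a collocation index), one can either use a single measure directly or train a learner on all of them, and report Spearman's rho against the human judgements of a benchmark such as WS-353. The combination below (a polynomial-kernel SVR) is an assumption for illustration, not the paper's exact classifier, and a real evaluation would use cross-validation rather than scoring on the training pairs.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_pairs = 100

# Placeholder scores standing in for ESA, WordNet path and collocation measures.
features = rng.random((n_pairs, 3))
human_judgements = features @ np.array([0.6, 0.3, 0.1]) + 0.05 * rng.random(n_pairs)

# Direct use of a single measure vs. a polynomial SVR trained on all three.
rho_direct, _ = spearmanr(features[:, 0], human_judgements)
svr = SVR(kernel="poly", degree=2).fit(features, human_judgements)
rho_combined, _ = spearmanr(svr.predict(features), human_judgements)
print(round(rho_direct, 2), round(rho_combined, 2))
```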
Design of Arabic Diacritical Marks | Diacritical marks play a crucial role in meeting the criteria of usability of
typographic text, such as homogeneity, clarity and legibility. Changing the
diacritic of a letter in a word can completely change its meaning. The
situation is very complicated with multilingual text. Indeed, the problem of
design becomes more difficult by the presence of diacritics that come from
various scripts; they are used for different purposes, and are controlled by
various typographic rules. It is quite challenging to adapt rules from one
script to another. This paper aims to study the placement and sizing of
diacritical marks in Arabic script, with a comparison to the Latin case.
The Arabic script is cursive and runs from right-to-left; its criteria and
rules are quite distinct from those of the Latin script. We first compare the
difficulty of processing diacritics in both scripts. We then study the limits
of Latin resolution strategies when applied to Arabic. Finally, we propose an
approach to the problem of positioning and resizing
diacritics. This strategy includes creating an Arabic font, designed in
OpenType format, along with suitable justification in TEX.
| 2,011 | Computation and Language |
Use Pronunciation by Analogy for text to speech system in Persian
language | The interest in text-to-speech synthesis has increased worldwide.
Text-to-speech systems have been developed for many popular languages such as
English, Spanish and French, and much research and development has been devoted
to those languages. Persian, on the other hand, has received little attention
compared to other languages of similar importance, and research on Persian is
still in its infancy. The Persian language possesses many difficulties and
exceptions that increase the complexity of text-to-speech systems; for example,
short vowels are absent from written text, and homograph words exist. In this
paper we propose a new method for Persian text-to-phonetic conversion based on
pronunciation by analogy of words, semantic relations and grammatical rules for
finding the proper phonetics. Keywords: PbA, text to speech, Persian language, FPbA
| 2,011 | Computation and Language |
NEMO: Extraction and normalization of organization names from PubMed
affiliation strings | We propose NEMO, a system for extracting organization names in the
affiliation and normalizing them to a canonical organization name. Our parsing
process involves multi-layered rule matching with multiple dictionaries. The
system achieves more than 98% f-score in extracting organization names. Our
process of normalization involves clustering based on local sequence
alignment metrics and local learning based on finding connected components. A
high precision was also observed in normalization. NEMO is the missing link in
associating each biomedical paper and its authors to an organization name in
its canonical form and the Geopolitical location of the organization. This
research could potentially help in analyzing large social networks of
organizations for landscaping a particular topic, improving performance of
author disambiguation, adding weak links in the co-author network of authors,
augmenting NLM's MARS system for correcting errors in OCR output of affiliation
field, and automatically indexing the PubMed citations with the normalized
organization name and country. Our system is available as a graphical user
interface that can be downloaded along with this paper.
| 2,010 | Computation and Language |
BioSimplify: an open source sentence simplification engine to improve
recall in automatic biomedical information extraction | BioSimplify is an open source tool written in Java that introduces and
facilitates the use of a novel model for sentence simplification tuned for
automatic discourse analysis and information extraction (as opposed to sentence
simplification for improving human readability). The model is based on a
"shot-gun" approach that produces many different (simpler) versions of the
original sentence by combining variants of its constituent elements. This tool
is optimized for processing biomedical scientific literature such as the
abstracts indexed in PubMed. We tested our tool's impact on the task of
protein-protein interaction (PPI) extraction, where it improved the f-score of the PPI tool by around 7%, with
an improvement in recall of around 20%. The BioSimplify tool and test corpus
can be downloaded from https://biosimplify.sourceforge.net.
| 2,010 | Computation and Language |
An Effective Approach to Biomedical Information Extraction with Limited
Training Data | Overall, the two main contributions of this work include the application of
sentence simplification to association extraction as described above, and the
use of distributional semantics for concept extraction. The proposed work on
concept extraction amalgamates, for the first time, two diverse research areas:
distributional semantics and information extraction. This approach retains all
the advantages offered by other semi-supervised machine learning systems and,
unlike other proposed semi-supervised approaches, it can be used on top of
different basic frameworks and algorithms.
http://gradworks.umi.com/34/49/3449837.html
| 2,011 | Computation and Language |
Cross-moments computation for stochastic context-free grammars | In this paper we consider the problem of efficient computation of
cross-moments of a vector random variable represented by a stochastic
context-free grammar. Two types of cross-moments are discussed. The sample
space for the first one is the set of all derivations of the context-free
grammar, and the sample space for the second one is the set of all derivations
which generate a string belonging to the language of the grammar. In the past,
this problem was widely studied, but mainly for the cross-moments of scalar
variables and up to the second order. This paper presents new algorithms for
computing the cross-moments of an arbitrary order, and the previously developed
ones are derived as special cases.
| 2,013 | Computation and Language |
Serialising the ISO SynAF Syntactic Object Model | This paper introduces an XML format developed to serialise the object model
defined by the ISO Syntactic Annotation Framework SynAF. Based on widespread
best practices we adapt a popular XML format for syntactic annotation,
TigerXML, with additional features to support a variety of syntactic phenomena
including constituent and dependency structures, binding, and different node
types such as compounds or empty elements. We also define interfaces to other
formats and standards including the Morpho-syntactic Annotation Framework MAF
and the ISOCat Data Category Registry. Finally a case study of the German
Treebank TueBa-D/Z is presented, showcasing the handling of constituent
structures, topological fields and coreference annotation in tandem.
| 2,014 | Computation and Language |
A Concise Query Language with Search and Transform Operations for
Corpora with Multiple Levels of Annotation | The usefulness of annotated corpora is greatly increased if there is an
associated tool that can allow various kinds of operations to be performed in a
simple way. Different kinds of annotation frameworks and many query languages
for them have been proposed, including some to deal with multiple layers of
annotation. We present here an easy to learn query language for a particular
kind of annotation framework based on 'threaded trees', which are somewhere
between the complete order of a tree and the anarchy of a graph. Through
'typed' threads, they can allow multiple levels of annotation in the same
document. Our language has a simple, intuitive and concise syntax and high
expressive power. It not only allows searching for complicated patterns with
short queries but also supports data manipulation and the specification of
arbitrary return values. Many of the commonly used tasks that otherwise require
writing programs can be performed with one or more queries. We compare the language
with some others and try to evaluate it.
| 2,011 | Computation and Language |
Using Inverse lambda and Generalization to Translate English to Formal
Languages | We present a system to translate natural language sentences to formulas in a
formal or a knowledge representation language. Our system uses two inverse
lambda-calculus operators which, taking as input the semantic
representation of some words, phrases and sentences, derive the
semantic representation of other words and phrases. Our inverse lambda operator
works on many formal languages including first order logic, database query
languages and answer set programming. Our system uses a syntactic combinatorial
categorial parser both to parse natural language sentences and to construct
their semantic meaning as directed by the parse. In addition to the
inverse lambda-calculus operators, our
system uses a notion of generalization to learn semantic representation of
words from the semantic representation of other words that are of the same
category. Together with this, we use an existing statistical learning approach
to assign weights to deal with multiple meanings of words. Our system produces
improved results on standard corpora on natural language interfaces for robot
command and control and database queries.
| 2,011 | Computation and Language |
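To make the inverse-lambda idea above concrete, here is a schematic worked example (a toy instance, not taken from the paper's data): given the meaning of a whole expression and the meaning of one of its parts, the inverse operator solves for the meaning of the remaining part.

```latex
\[
\begin{aligned}
\text{``John takes Kim''} &\;:\; H = \mathit{takes}(\mathit{john},\mathit{kim}),\\
\text{``Kim''} &\;:\; G = \mathit{kim},\\
\mathrm{Inverse}_{L}(H, G) &= \lambda x.\,\mathit{takes}(\mathit{john}, x)
\quad\text{(so that } \mathrm{Inverse}_{L}(H,G)\,@\,G = H\text{),}
\end{aligned}
\]
```

so the phrase "John takes" receives the meaning of the lambda abstraction that, applied to the known meaning of "Kim", reproduces the known sentence meaning.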
Language understanding as a step towards human level intelligence -
automatizing the construction of the initial dictionary from example
sentences | For a system to understand natural language, it needs to be able to take
natural language text and answer questions given in natural language with
respect to that text; it also needs to be able to follow instructions given in
natural language. To achieve this, a system must be able to process natural
language and be able to capture the knowledge within that text. Thus it needs
to be able to translate natural language text into a formal language. We
discuss our approach to do this, where the translation is achieved by composing
the meaning of words in a sentence. Our initial approach uses an inverse lambda
method that we developed (and other methods) to learn meaning of words from
meaning of sentences and an initial lexicon. We then present an improved method
where the initial lexicon is also learned by analyzing the training sentence
and meaning pairs. We evaluate our methods and compare them with other existing
methods on corpora of database querying and robot command and control.
| 2,011 | Computation and Language |
Solving puzzles described in English by automated translation to answer
set programming and learning how to do that translation | We present a system capable of automatically solving combinatorial logic
puzzles given in (simplified) English. It involves translating the English
descriptions of the puzzles into answer set programming (ASP) and using ASP
solvers to provide solutions of the puzzles. To translate the descriptions, we
use a lambda-calculus based approach using Probabilistic Combinatorial
Categorial Grammars (PCCG) where the meanings of words are associated with
parameters to be able to distinguish between multiple meanings of the same
word. The meanings of many words and the parameters are learned. The puzzles are
represented in ASP using an ontology which is applicable to a large set of
logic puzzles.
| 2,011 | Computation and Language |
Query Expansion: Term Selection using the EWC Semantic Relatedness
Measure | This paper investigates the efficiency of the EWC semantic relatedness
measure in an ad-hoc retrieval task. This measure combines the Wikipedia-based
Explicit Semantic Analysis measure, the WordNet path measure and the mixed
collocation index. In the experiments, the open source search engine Terrier
was utilised as a tool to index and retrieve data. The proposed technique was
tested on the NTCIR data collection. The experiments demonstrated promising
results.
| 2,011 | Computation and Language |
Why is language well-designed for communication? (Commentary on
Christiansen and Chater: 'Language as shaped by the brain') | Selection through iterated learning explains no more than other
non-functional accounts, such as universal grammar, why language is so
well-designed for communicative efficiency. It does not predict several
distinctive features of language like central embedding, large lexicons or the
lack of iconicity, which seem to serve communicative purposes at the expense of
learnability.
| 2,008 | Computation and Language |
An S-DRT Based Analysis for Modelling Pathological Dialogues | In this article, we present a corpus of dialogues between a schizophrenic
speaker and an interlocutor who drives the dialogue. We identified
discontinuities specific to paranoid schizophrenics, and we propose a modeling
of these discontinuities with S-DRT (its pragmatic part).
| 2,011 | Computation and Language |
Event in Compositional Dynamic Semantics | We present a framework which constructs an event-style discourse semantics.
The discourse dynamics are encoded in continuation semantics and various
rhetorical relations are embedded in the resulting interpretation of the
framework. We assume that discourse and sentence are distinct semantic objects
that play different roles in meaning evaluation. Moreover, two sets of composition
functions, for handling different discourse relations, are introduced. The
paper first gives the necessary background and motivation for event and dynamic
semantics; the framework is then introduced with detailed examples.
| 2,011 | Computation and Language |
Encoding Phases using Commutativity and Non-commutativity in a Logical
Framework | This article presents an extension of Minimalist Categorial Grammars (MCG)
to encode Chomsky's phases. These grammars are based on Partially Commutative
Logic (PCL) and encode properties of Stabler's Minimalist Grammars (MG). The
first implementation of MCG used both non-commutative properties (to
respect the linear word order in an utterance) and commutative ones (to model
features of different constituents). Here, we propose to add Chomsky's
phases with the non-commutative tensor product of the logic. We can then
account for the PIC simply by using logical properties of the framework.
| 2,011 | Computation and Language |
Minimalist Grammars and Minimalist Categorial Grammars, definitions
toward inclusion of generated languages | Stabler proposes an implementation of the Chomskyan Minimalist Program
(Chomsky 95) with Minimalist Grammars (MG, Stabler 97). This framework inherits a
long linguistic tradition. But the semantic calculus is more easily added if
one uses the Curry-Howard isomorphism. Minimalist Categorial Grammars (MCG),
based on an extension of the Lambek calculus, the mixed logic, were introduced
to provide a theoretically-motivated syntax-semantics interface (Amblard 07). In
this article, we give full definitions of MG with algebraic tree descriptions
and of MCG, and take the first steps towards giving a proof of inclusion of
their generated languages.
| 2,011 | Computation and Language |
Emotional Analysis of Blogs and Forums Data | We perform a statistical analysis of emotionally annotated comments in two
large online datasets, examining chains of consecutive posts in the
discussions. Using comparisons with randomised data we show that there is a
high level of correlation for the emotional content of messages.
| 2,012 | Computation and Language |
Inter-rater Agreement on Sentence Formality | Formality is one of the most important dimensions of writing style variation.
In this study we conducted an inter-rater reliability experiment for assessing
sentence formality on a five-point Likert scale, and obtained good agreement
results as well as different rating distributions for different sentence
categories. We also performed a difficulty analysis to identify the bottlenecks
of our rating procedure. Our main objective is to design an automatic scoring
mechanism for sentence-level formality, and this study is important for that
purpose.
| 2,014 | Computation and Language |
Building Ontologies to Understand Spoken Tunisian Dialect | This paper presents a method to understand spoken Tunisian dialect based on
lexical semantics. This method takes into account the specificity of the
Tunisian dialect, which has no linguistic processing tools. The method is
ontology-based, which allows exploiting ontological concepts for semantic
annotation and ontological relations for speech interpretation. This
combination increases the rate of comprehension and limits the dependence on
linguistic resources. This paper also details the process of building the
ontology used for annotation and interpretation of Tunisian dialect in the
context of speech understanding in dialogue systems for restricted domain.
| 2,011 | Computation and Language |
LexRank: Graph-based Lexical Centrality as Salience in Text
Summarization | We introduce a stochastic graph-based method for computing relative
importance of textual units for Natural Language Processing. We test the
technique on the problem of Text Summarization (TS). Extractive TS relies on
the concept of sentence salience to identify the most important sentences in a
document or set of documents. Salience is typically defined in terms of the
presence of particular important words or in terms of similarity to a centroid
pseudo-sentence. We consider a new approach, LexRank, for computing sentence
importance based on the concept of eigenvector centrality in a graph
representation of sentences. In this model, a connectivity matrix based on
intra-sentence cosine similarity is used as the adjacency matrix of the graph
representation of sentences. Our system, based on LexRank, ranked in first place
in more than one task in the recent DUC 2004 evaluation. In this paper we
present a detailed analysis of our approach and apply it to a larger data set
including data from earlier DUC evaluations. We discuss several methods to
compute centrality using the similarity graph. The results show that
degree-based methods (including LexRank) outperform both centroid-based methods
and other systems participating in DUC in most of the cases. Furthermore, the
LexRank with threshold method outperforms the other degree-based techniques
including continuous LexRank. We also show that our approach is quite
insensitive to the noise in the data that may result from an imperfect topical
clustering of documents.
| 2,004 | Computation and Language |
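Since the abstract above spells out the core computation (a cosine-similarity adjacency matrix over sentences whose eigenvector centrality gives sentence salience), a compact sketch is shown below. It uses TF-IDF cosine similarity, an optional threshold and power iteration over the row-normalised matrix; details such as damping and the exact similarity weighting in the original system may differ.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexrank(sentences, threshold=0.1, damping=0.85, iters=100):
    """Score sentences by eigenvector centrality of their similarity graph."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    if threshold is not None:            # 'LexRank with threshold' variant
        sim = (sim >= threshold).astype(float)
    np.fill_diagonal(sim, 0.0)
    # Row-normalise to get a stochastic matrix, then apply power iteration.
    row_sums = sim.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    P = sim / row_sums
    n = len(sentences)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * (P.T @ scores)
    return scores

docs = [
    "The cat sat on the mat.",
    "A cat was sitting on a mat in the hall.",
    "Stock markets fell sharply on Monday.",
]
print(lexrank(docs).round(3))
```

Passing threshold=None gives a continuous variant in which the weighted similarities themselves are normalised, mirroring the contrast between the threshold and continuous methods discussed in the abstract.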
Combining Knowledge- and Corpus-based Word-Sense-Disambiguation Methods | In this paper we concentrate on the resolution of the lexical ambiguity that
arises when a given word has several different meanings. This specific task is
commonly referred to as word sense disambiguation (WSD). The task of WSD
consists of assigning the correct sense to words using an electronic dictionary
as the source of word definitions. We present two WSD methods based on two main
methodological approaches in this research area: a knowledge-based method and a
corpus-based method. Our hypothesis is that word-sense disambiguation requires
several knowledge sources in order to solve the semantic ambiguity of the
words. These sources can be of different kinds --- for example, syntagmatic,
paradigmatic or statistical information. Our approach combines various sources
of knowledge, through combinations of the two WSD methods mentioned above.
Mainly, the paper concentrates on how to combine these methods and sources of
information in order to achieve good results in the disambiguation. Finally,
this paper presents a comprehensive study and experimental work on evaluation
of the methods and their combinations.
| 2,005 | Computation and Language |
Learning Content Selection Rules for Generating Object Descriptions in
Dialogue | A fundamental requirement of any task-oriented dialogue system is the ability
to generate object descriptions that refer to objects in the task domain. The
subproblem of content selection for object descriptions in task-oriented
dialogue has been the focus of much previous work and a large number of models
have been proposed. In this paper, we use the annotated COCONUT corpus of
task-oriented design dialogues to develop feature sets based on Dale and
Reiter's (1995) incremental model, Brennan and Clark's (1996) conceptual pact
model, and Jordan's (2000b) intentional influences model, and use these feature
sets in a machine learning experiment to automatically learn a model of content
selection for object descriptions. Since Dale and Reiter's model requires a
representation of discourse structure, the corpus annotations are used to
derive a representation based on Grosz and Sidner's (1986) theory of the
intentional structure of discourse, as well as two very simple representations
of discourse structure based purely on recency. We then apply the
rule-induction program RIPPER to train and test the content selection component
of an object description generator on a set of 393 object descriptions from the
corpus. To our knowledge, this is the first reported experiment of a trainable
content selection component for object description generation in dialogue.
Three separate content selection models, each based on one of the three theoretical
models, independently achieve accuracies significantly above the majority
class baseline (17%) on unseen test data, with the intentional influences model
(42.4%) performing significantly better than either the incremental model
(30.4%) or the conceptual pact model (28.9%). But the best performing models
combine all the feature sets, achieving accuracies near 60%. Surprisingly, a
simple recency-based representation of discourse structure does as well as one
based on intentional structure. To our knowledge, this is also the first
empirical comparison of a representation of Grosz and Sidner's model of
discourse structure with a simpler model for any generation task.
| 2,005 | Computation and Language |
From Contracts in Structured English to CL Specifications | In this paper we present a framework to analyze conflicts of contracts
written in structured English. A contract that has been manually rewritten in
structured English is automatically translated into a formal language using the
Grammatical Framework (GF). In particular we use the contract language CL as a
target formal language for this translation. In our framework CL specifications
could then be input into the tool CLAN to detect the presence of conflicts
(whether there are contradictory obligations, permissions, and prohibitions). We
also use GF to get a version in (restricted) English of CL formulae. We discuss
the implementation of such a framework.
| 2,011 | Computation and Language |
A Probabilistic Approach to Pronunciation by Analogy | The relationship between written and spoken words is convoluted in languages
with a deep orthography such as English and therefore it is difficult to devise
explicit rules for generating the pronunciations for unseen words.
Pronunciation by analogy (PbA) is a data-driven method of constructing
pronunciations for novel words from concatenated segments of known words and
their pronunciations. PbA performs relatively well with English and outperforms
several other proposed methods. However, the best published word accuracy of
65.5% (for the 20,000 word NETtalk corpus) suggests there is much room for
improvement.
Previous PbA algorithms have used several different scoring strategies such
as the product of the frequencies of the component pronunciations of the
segments, or the number of different segmentations that yield the same
pronunciation, and different combinations of these methods, to evaluate the
candidate pronunciations. In this article, we instead propose to use a
probabilistically justified scoring rule. We show that this principled approach
alone yields better accuracy (66.21% for the NETtalk corpus) than any
previously published PbA algorithm. Furthermore, combined with certain ad hoc
modifications motivated by earlier algorithms, the performance climbs up to
66.6%, and further improvements are possible by combining this method with
other methods.
| 2,011 | Computation and Language |
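As a hedged illustration of the scoring strategies discussed above, the sketch below scores candidate pronunciations of an unseen word by (a) the number of distinct segmentations that yield the candidate and (b) the product of the frequencies of the segments used, two of the heuristic rules the article compares against its probabilistic alternative. The tiny candidate table is invented, and the article's own probabilistically justified scoring rule is not reproduced here.

```python
from collections import defaultdict
from math import prod

# Invented candidates: pronunciation -> list of (segmentation, segment frequencies).
candidates = {
    "/f o n/":  [(("f", "o n"),  (12, 7)),
                 (("f o", "n"),  (5, 20))],
    "/f oo n/": [(("f", "oo n"), (12, 2))],
}

scores = defaultdict(dict)
for pron, segmentations in candidates.items():
    # (a) count of segmentations producing this pronunciation
    scores[pron]["n_segmentations"] = len(segmentations)
    # (b) best product of segment frequencies over its segmentations
    scores[pron]["max_freq_product"] = max(prod(freqs) for _, freqs in segmentations)

best = max(scores, key=lambda p: (scores[p]["n_segmentations"],
                                  scores[p]["max_freq_product"]))
print(best, scores[best])
```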
Automatic transcription of 17th century English text in Contemporary
English with NooJ: Method and Evaluation | Since 2006 we have undertaken to describe the differences between 17th
century English and contemporary English thanks to NLP software. Studying a
corpus spanning the whole century (tales of English travellers in the Ottoman
Empire in the 17th century, Mary Astell's essay A Serious Proposal to the
Ladies and other literary texts) has enabled us to highlight various lexical,
morphological or grammatical singularities. Thanks to the NooJ linguistic
platform, we created dictionaries indexing the lexical variants and their
transcription in CE. The latter is often the result of the validation of forms
recognized dynamically by morphological graphs. We also built syntactical
graphs aimed at transcribing certain archaic forms in contemporary English. Our
previous research implied a succession of elementary steps alternating textual
analysis and result validation. We managed to provide examples of
transcriptions, but we have not created a global tool for automatic
transcription. Therefore we need to focus on the results we have obtained so
far, study the conditions for creating such a tool, and analyze possible
difficulties. In this paper, we will be discussing the technical and linguistic
aspects we have not yet covered in our previous work. We are using the results
of previous research and proposing a transcription method for words or
sequences identified as archaic.
| 2,011 | Computation and Language |
Object-oriented semantics of English in natural language understanding
system | A new approach to the problem of natural language understanding is proposed.
The knowledge domain under consideration is the social behavior of people.
English sentences are translated into a set of predicates of a semantic database,
which describe persons, occupations, organizations, projects, actions, events,
messages, machines, things, animals, location and time of actions, relations
between objects, thoughts, cause-and-effect relations, abstract objects. There
is a knowledge base containing the description of semantics of objects
(functions and structure), actions (motives and causes), and operations.
| 2,015 | Computation and Language |
User-level sentiment analysis incorporating social networks | We show that information about social relationships can be used to improve
user-level sentiment analysis. The main motivation behind our approach is that
users that are somehow "connected" may be more likely to hold similar opinions;
therefore, relationship information can complement what we can extract about a
user's viewpoints from their utterances. Employing Twitter as a source for our
experimental data, and working within a semi-supervised framework, we propose
models that are induced either from the Twitter follower/followee network or
from the network in Twitter formed by users referring to each other using "@"
mentions. Our transductive learning results reveal that incorporating
social-network information can indeed lead to statistically significant
sentiment-classification improvements over the performance of an approach based
on Support Vector Machines having access only to textual features.
| 2,011 | Computation and Language |
A Comparison of Different Machine Transliteration Models | Machine transliteration is a method for automatically converting words in one
language into phonetically equivalent ones in another language. Machine
transliteration plays an important role in natural language applications such
as information retrieval and machine translation, especially for handling
proper nouns and technical terms. Four machine transliteration models --
grapheme-based transliteration model, phoneme-based transliteration model,
hybrid transliteration model, and correspondence-based transliteration model --
have been proposed by several researchers. To date, however, there has been
little research on a framework in which multiple transliteration models can
operate simultaneously. Furthermore, there has been no comparison of the four
models within the same framework and using the same data. We addressed these
problems by 1) modeling the four models within the same framework, 2) comparing
them under the same conditions, and 3) developing a way to improve machine
transliteration through this comparison. Our comparison showed that the hybrid
and correspondence-based models were the most effective and that the four
models can be used in a complementary manner to improve machine transliteration
performance.
| 2,006 | Computation and Language |
Learning Sentence-internal Temporal Relations | In this paper we propose a data intensive approach for inferring
sentence-internal temporal relations. Temporal inference is relevant for
practical NLP applications which either extract or synthesize temporal
information (e.g., summarisation, question answering). Our method bypasses the
need for manual coding by exploiting the presence of markers like "after", which
overtly signal a temporal relation. We first show that models trained on main
and subordinate clauses connected with a temporal marker achieve good
performance on a pseudo-disambiguation task simulating temporal inference
(during testing the temporal marker is treated as unseen and the models must
select the right marker from a set of possible candidates). Secondly, we assess
whether the proposed approach holds promise for the semi-automatic creation of
temporal annotations. Specifically, we use a model trained on noisy and
approximate data (i.e., main and subordinate clauses) to predict
intra-sentential relations present in TimeBank, a corpus annotated with rich
temporal information. Our experiments compare and contrast several
probabilistic models differing in their feature space, linguistic assumptions
and data requirements. We evaluate performance against gold standard corpora
and also against human subjects.
| 2,006 | Computation and Language |
Product Review Summarization based on Facet Identification and Sentence
Clustering | Product reviews have nowadays become an important source of information, not
only for customers to find opinions about products easily and share their
reviews with peers, but also for product manufacturers to get feedback on their
products. As the number of product reviews grows, it becomes difficult for
users to search and utilize these resources in an efficient way. In this work,
we build a product review summarization system that can automatically process a
large collection of reviews and aggregate them to generate a concise summary.
More importantly, the drawback of existing product summarization systems is
that they cannot provide the underlying reasons to justify users' opinions. In
our method, we solve this problem by applying clustering, prior to selecting
representative candidates for summarization.
| 2,011 | Computation and Language |
A Constraint-Satisfaction Parser for Context-Free Grammars | Traditional language processing tools constrain language designers to
specific kinds of grammars. In contrast, model-based language specification
decouples language design from language processing. As a consequence,
model-based language specification tools need general parsers able to parse
unrestricted context-free grammars. As languages specified following this
approach may be ambiguous, parsers must deal with ambiguities. Model-based
language specification also allows the definition of associativity, precedence,
and custom constraints. Therefore parsers generated by model-driven language
specification tools need to enforce constraints. In this paper, we propose
Fence, an efficient bottom-up chart parser with lexical and syntactic ambiguity
support that allows the specification of constraints and, therefore, enables
the use of model-based language specification in practice.
| 2,012 | Computation and Language |
Data formats for phonological corpora | The goal of the present chapter is to explore the possibility of providing
the research (but also the industrial) community that commonly uses spoken
corpora with a stable portfolio of well-documented standardised formats that
allow a high re-use rate of annotated spoken resources and, as a consequence,
better interoperability across tools used to produce or exploit such resources.
| 2,012 | Computation and Language |
NP Animacy Identification for Anaphora Resolution | In anaphora resolution for English, animacy identification can play an
integral role in the application of agreement restrictions between pronouns and
candidates, and as a result, can improve the accuracy of anaphora resolution
systems. In this paper, two methods for animacy identification are proposed and
evaluated using intrinsic and extrinsic measures. The first method is a
rule-based one which uses information about the unique beginners in WordNet to
classify NPs on the basis of their animacy. The second method relies on a
machine learning algorithm which exploits a WordNet enriched with animacy
information for each sense. The effect of word sense disambiguation on the two
methods is also assessed. The intrinsic evaluation reveals that the machine
learning method reaches human levels of performance. The extrinsic evaluation
demonstrates that animacy identification can be beneficial in anaphora
resolution, especially in the cases where animate entities are identified with
high precision.
| 2,007 | Computation and Language |
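The rule-based method above classifies NPs as animate or inanimate using WordNet's unique beginners. A minimal sketch of that idea, using NLTK's WordNet interface and its lexicographer file names as a stand-in for the unique beginners, is given below; the exact rule set, the handling of ambiguity and the sense-disambiguation step of the paper are not reproduced.

```python
# Requires: pip install nltk; then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

ANIMATE_LEXNAMES = {"noun.person", "noun.animal"}  # assumed animate classes

def is_animate(noun, all_senses=False):
    """Heuristic animacy test via WordNet lexicographer files.
    By default the noun is animate if its first sense denotes a person or animal."""
    synsets = wn.synsets(noun, pos=wn.NOUN)
    if not synsets:
        return None  # unknown to WordNet
    senses = synsets if all_senses else synsets[:1]
    return any(s.lexname() in ANIMATE_LEXNAMES for s in senses)

for w in ["teacher", "dog", "table", "committee"]:
    print(w, is_animate(w))
```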
Towards cross-lingual alerting for bursty epidemic events | Background: Online news reports are increasingly becoming a source for event
based early warning systems that detect natural disasters. Harnessing the
massive volume of information available from multilingual newswire presents as
many challenges as opportunities due to the patterns of reporting complex
spatiotemporal events. Results: In this article we study the problem of
utilising correlated event reports across languages. We track the evolution of
16 disease outbreaks using 5 temporal aberration detection algorithms on
text-mined events classified according to disease and outbreak country. Using
ProMED reports as a silver standard, comparative analysis of news data for 13
languages over a 129 day trial period showed improved sensitivity, F1 and
timeliness across most models using cross-lingual events. We report a detailed
case study analysis for cholera in Angola in 2010, which highlights the challenges
faced in correlating news events with the silver standard. Conclusions: The
results show that automated health surveillance using multilingual text mining
has the potential to turn low value news into high value alerts if informed
choices are used to govern the selection of models and data sources. An
implementation of the C2 alerting algorithm using multilingual news is
available at the BioCaster portal http://born.nii.ac.jp/?page=globalroundup.
| 2,011 | Computation and Language |
OMG U got flu? Analysis of shared health messages for bio-surveillance | Background: Micro-blogging services such as Twitter offer the potential to
crowdsource epidemics in real-time. However, Twitter posts ('tweets') are often
ambiguous and reactive to media trends. In order to ground user messages in
epidemic response we focused on tracking reports of self-protective behaviour
such as avoiding public gatherings or increased sanitation as the basis for
further risk analysis. Results: We created guidelines for tagging self
protective behaviour based on Jones and Salath\'e (2009)'s behaviour response
survey. Applying the guidelines to a corpus of 5283 Twitter messages related to
influenza like illness showed a high level of inter-annotator agreement (kappa
0.86). We employed supervised learning using unigrams, bigrams and regular
expressions as features with two supervised classifiers (SVM and Naive Bayes)
to classify tweets into 4 self-reported protective behaviour categories plus a
self-reported diagnosis. In addition to classification performance we report
moderately strong Spearman's Rho correlation by comparing classifier output
against WHO/NREVSS laboratory data for A(H1N1) in the USA during the 2009-2010
influenza season. Conclusions: The study adds to evidence supporting a high
degree of correlation between pre-diagnostic social media signals and
diagnostic influenza case data, pointing the way towards low cost sensor
networks. We believe that the signals we have modelled may be applicable to a
wide range of diseases.
| 2,011 | Computation and Language |
What's unusual in online disease outbreak news? | Background: Accurate and timely detection of public health events of
international concern is necessary to help support risk assessment and response
and save lives. Novel event-based methods that use the World Wide Web as a
signal source offer potential to extend health surveillance into areas where
traditional indicator networks are lacking. In this paper we address the issue
of systematically evaluating online health news to support automatic alerting
using daily disease-country counts text mined from real world data using
BioCaster. For 18 data sets produced by BioCaster, we compare 5 aberration
detection algorithms (EARS C2, C3, W2, F-statistic and EWMA) for performance
against expert moderated ProMED-mail postings. Results: We report sensitivity,
specificity, positive predictive value (PPV), negative predictive value (NPV),
mean alerts/100 days and F1, at 95% confidence interval (CI) for 287
ProMED-mail postings on 18 outbreaks across 14 countries over a 366 day period.
Results indicate that W2 had the best F1 with a slight benefit for day of week
effect over C2. In drill down analysis we indicate issues arising from the
granular choice of country-level modeling, sudden drops in reporting due to day
of week effects and reporting bias. Automatic alerting has been implemented in
BioCaster available from http://born.nii.ac.jp. Conclusions: Online health news
alerts have the potential to enhance manual analytical methods by increasing
throughput, timeliness and detection rates. Systematic evaluation of health
news aberrations is necessary to push forward our understanding of the complex
relationship between news report volumes and case numbers and to select the
best performing features and algorithms.
| 2,010 | Computation and Language |
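For readers unfamiliar with the aberration detection algorithms compared above, the sketch below implements a simple version of the EARS C2 statistic: each day's count is compared with the mean and standard deviation of a 7-day baseline separated by a 2-day guard band, and an alert is raised when the standardised score exceeds a threshold (commonly 3). The exact parameter choices and variance floor used in the BioCaster evaluation may differ, and the daily counts are invented.

```python
import statistics

def ears_c2(counts, baseline=7, guard=2, threshold=3.0):
    """Return a list of (day_index, score, alert) for a daily count series."""
    results = []
    for t in range(baseline + guard, len(counts)):
        window = counts[t - guard - baseline : t - guard]
        mu = statistics.mean(window)
        sigma = statistics.stdev(window) or 1.0  # crude floor to avoid division by zero
        score = (counts[t] - mu) / sigma
        results.append((t, round(score, 2), score > threshold))
    return results

daily_reports = [1, 0, 2, 1, 1, 0, 1, 2, 1, 9, 1, 0]  # invented outbreak counts
for day, score, alert in ears_c2(daily_reports):
    print(day, score, "ALERT" if alert else "")
```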