charlieoneill/embedding-saes
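If this card accompanies a dataset published on the Hugging Face Hub under the same repository id, the records below can be loaded with the `datasets` library. A minimal sketch — the repo id comes from the card header; the `train` split name is an assumption, not confirmed by the card:

```python
# Minimal loading sketch. Assumes the records shown in the table below are
# hosted on the Hugging Face Hub as "charlieoneill/embedding-saes" and that
# the split is named "train" (an assumption, not confirmed by this card).
from datasets import load_dataset

ds = load_dataset("charlieoneill/embedding-saes", split="train")

# Each record follows the schema in the table header: categories, doi, id,
# year, venue, link, updated, published, title, abstract, authors.
record = ds[0]
print(record["title"])
print(record["abstract"][:200])
print(record["authors"])
```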
categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---
null | null | 0001004 | null | null | http://arxiv.org/pdf/cs/0001004v1 | 2000-01-07T06:20:53Z | 2000-01-07T06:20:53Z | Multiplicative Algorithm for Orthogonal Groups and Independent Component Analysis | The multiplicative Newton-like method developed by the author et al. is extended to the situation where the dynamics is restricted to the orthogonal group. A general framework is constructed without specifying the cost function. Though the restriction to the orthogonal groups makes the problem somewhat complicated, an explicit expression for the amount of individual jumps is obtained. This algorithm is exactly second-order-convergent. The global instability inherent in the Newton method is remedied by a Levenberg-Marquardt-type variation. The method thus constructed can readily be applied to the independent component analysis. Its remarkable performance is illustrated by a numerical simulation. | ['Toshinao Akuzawa'] |
null | null | 0001008 | null | null | http://arxiv.org/pdf/cs/0001008v3 | 2003-06-20T14:20:48Z | 2000-01-12T20:57:59Z | Predicting the expected behavior of agents that learn about agents: the CLRI framework | We describe a framework and equations used to model and predict the behavior of multi-agent systems (MASs) with learning agents. A difference equation is used for calculating the progression of an agent's error in its decision function, thereby telling us how the agent is expected to fare in the MAS. The equation relies on parameters which capture the agent's learning abilities, such as its change rate, learning rate and retention rate, as well as relevant aspects of the MAS such as the impact that agents have on each other. We validate the framework with experimental results using reinforcement learning agents in a market system, as well as with other experimental results gathered from the AI literature. Finally, we use PAC-theory to show how to calculate bounds on the values of the learning parameters. | ['Jose M. Vidal' 'Edmund H. Durfee'] |
null | null | 0001027 | null | null | http://arxiv.org/pdf/cs/0001027v1 | 2000-01-29T01:23:54Z | 2000-01-29T01:23:54Z | Pattern Discovery and Computational Mechanics | Computational mechanics is a method for discovering, describing and quantifying patterns, using tools from statistical physics. It constructs optimal, minimal models of stochastic processes and their underlying causal structures. These models tell us about the intrinsic computation embedded within a process---how it stores and transforms information. Here we summarize the mathematics of computational mechanics, especially recent optimality and uniqueness results. We also expound the principles and motivations underlying computational mechanics, emphasizing its connections to the minimum description length principle, PAC theory, and other aspects of machine learning. | ['Cosma Rohilla Shalizi' 'James P. Crutchfield'] |
null | null | 0002006 | null | null | http://arxiv.org/abs/cs/0002006v1 | 2000-02-09T06:44:28Z | 2000-02-09T06:44:28Z | Multiplicative Nonholonomic/Newton-like Algorithm | We construct new algorithms from scratch, which use the fourth order cumulant of stochastic variables for the cost function. The multiplicative updating rule here constructed is natural from the homogeneous nature of the Lie group and has numerous merits for the rigorous treatment of the dynamics. As one consequence, the second order convergence is shown. For the cost function, functions invariant under the componentwise scaling are chosen. By identifying points which can be transformed to each other by the scaling, we assume that the dynamics is in a coset space. In our method, a point can move toward any direction in this coset. Thus, no prewhitening is required. | ['Toshinao Akuzawa' 'Noboru Murata'] |
null | null | 0003072 | null | null | http://arxiv.org/pdf/cs/0003072v1 | 2000-03-22T12:49:38Z | 2000-03-22T12:49:38Z | MOO: A Methodology for Online Optimization through Mining the Offline Optimum | Ports, warehouses and courier services have to decide online how an arriving task is to be served in order that cost is minimized (or profit maximized). These operators have a wealth of historical data on task assignments; can these data be mined for knowledge or rules that can help the decision-making? MOO is a novel application of data mining to online optimization. The idea is to mine (logged) expert decisions or the offline optimum for rules that can be used for online decisions. It requires little knowledge about the task distribution and cost structure, and is applicable to a wide range of problems. This paper presents a feasibility study of the methodology for the well-known k-server problem. Experiments with synthetic data show that optimization can be recast as classification of the optimum decisions; the resulting heuristic can achieve the optimum for strong request patterns, consistently outperforms other heuristics for weak patterns, and is robust despite changes in cost model. | ['Jason W. H. Lee' 'Y. C. Tay' 'Anthony K. H. Tung'] |
null | null | 0004001 | null | null | http://arxiv.org/pdf/cs/0004001v1 | 2000-04-03T06:16:16Z | 2000-04-03T06:16:16Z | A Theory of Universal Artificial Intelligence based on Algorithmic Complexity | Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how the AIXI model can formally solve them. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXI-tl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIXI-tl is of the order t x 2^l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem and relations of the AIXI theory to other AI approaches. | ['Marcus Hutter'] |
null | null | 0004057 | null | null | http://arxiv.org/pdf/physics/0004057v1 | 2000-04-24T15:22:30Z | 2000-04-24T15:22:30Z | The information bottleneck method | We define the relevant information in a signal $x \in X$ as being the information that this signal provides about another signal $y \in Y$. Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal $x$ requires more than just predicting $y$, it also requires specifying which features of $X$ play a role in the prediction. We formalize this problem as that of finding a short code for $X$ that preserves the maximum information about $Y$. That is, we squeeze the information that $X$ provides about $Y$ through a 'bottleneck' formed by a limited set of codewords $\tilde{X}$. This constrained optimization problem can be seen as a generalization of rate distortion theory in which the distortion measure $d(x,\tilde{x})$ emerges from the joint statistics of $X$ and $Y$. This approach yields an exact set of self consistent equations for the coding rules $X \to \tilde{X}$ and $\tilde{X} \to Y$. Solutions to these equations can be found by a convergent re-estimation method that generalizes the Blahut-Arimoto algorithm. Our variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere. | ['Naftali Tishby' 'Fernando C. Pereira' 'William Bialek'] |
null | null | 0005021 | null | null | http://arxiv.org/pdf/cs/0005021v1 | 2000-05-14T14:35:20Z | 2000-05-14T14:35:20Z | Modeling the Uncertainty in Complex Engineering Systems | Existing procedures for model validation have been deemed inadequate for many engineering systems. The reason for this inadequacy is the high degree of complexity of the mechanisms that govern these systems. It is proposed in this paper to shift the attention from modeling the engineering system itself to modeling the uncertainty that underlies its behavior. A mathematical framework for modeling the uncertainty in complex engineering systems is developed. This framework uses the results of computational learning theory. It is based on the premise that a system model is a learning machine. | ['A. Guergachi'] |
null | null | 0005027 | null | null | http://arxiv.org/abs/cs/0005027v1 | 2000-05-26T20:24:48Z | 2000-05-26T20:24:48Z | A Bayesian Reflection on Surfaces | The topic of this paper is a novel Bayesian continuous-basis field representation and inference framework. Within this paper several problems are solved: the maximally informative inference of continuous-basis fields, that is where the basis for the field is itself a continuous object and not representable in a finite manner; the tradeoff between accuracy of representation in terms of information learned, and memory or storage capacity in bits; the approximation of probability distributions so that a maximal amount of information about the object being inferred is preserved; an information theoretic justification for multigrid methodology. The maximally informative field inference framework is described in full generality and denoted the Generalized Kalman Filter. The Generalized Kalman Filter allows the update of field knowledge from previous knowledge at any scale, and new data, to new knowledge at any other scale. An application example instance, the inference of continuous surfaces from measurements (for example, camera image data), is presented. | ['David R. Wolf'] |
null | null | 0006025 | null | null | http://arxiv.org/abs/nlin/0006025v1 | 2000-06-16T17:01:39Z | 2000-06-16T17:01:39Z | Information Bottlenecks, Causal States, and Statistical Relevance Bases: How to Represent Relevant Information in Memoryless Transduction | Discovering relevant, but possibly hidden, variables is a key step in constructing useful and predictive theories about the natural world. This brief note explains the connections between three approaches to this problem: the recently introduced information-bottleneck method, the computational mechanics approach to inferring optimal models, and Salmon's statistical relevance basis. | ['Cosma Rohilla Shalizi' 'James P. Crutchfield'] |
null | null | 0006233 | null | null | http://arxiv.org/pdf/math/0006233v3 | 2001-10-09T17:53:45Z | 2000-06-30T17:19:06Z | Algorithmic Statistics | While Kolmogorov complexity is the accepted absolute measure of information content of an individual finite object, a similarly absolute notion is needed for the relation between an individual data sample and an individual model summarizing the information in the data, for example, a finite set (or probability distribution) where the data sample typically came from. The statistical theory based on such relations between individual objects can be called algorithmic statistics, in contrast to classical statistical theory that deals with relations between probabilistic ensembles. We develop the algorithmic theory of statistic, sufficient statistic, and minimal sufficient statistic. This theory is based on two-part codes consisting of the code for the statistic (the model summarizing the regularity, the meaningful information, in the data) and the model-to-data code. In contrast to the situation in probabilistic statistical theory, the algorithmic relation of (minimal) sufficiency is an absolute relation between the individual model and the individual data sample. We distinguish implicit and explicit descriptions of the models. We give characterizations of algorithmic (Kolmogorov) minimal sufficient statistic for all data samples for both description modes--in the explicit mode under some constraints. We also strengthen and elaborate earlier results on the "Kolmogorov structure function" and "absolutely non-stochastic objects"--those rare objects for which the simplest models that summarize their relevant information (minimal sufficient statistics) are at least as complex as the objects themselves. We demonstrate a close relation between the probabilistic notions and the algorithmic ones. | ['Peter Gacs' 'John Tromp' 'Paul Vitanyi'] |
null | null | 0007026 | null | null | http://arxiv.org/pdf/cs/0007026v1 | 2000-07-14T00:33:12Z | 2000-07-14T00:33:12Z | Integrating E-Commerce and Data Mining: Architecture and Challenges | We show that the e-commerce domain can provide all the right ingredients for successful data mining and claim that it is a killer domain for data mining. We describe an integrated architecture, based on our experience at Blue Martini Software, for supporting this integration. The architecture can dramatically reduce the pre-processing, cleaning, and data understanding effort often documented to take 80% of the time in knowledge discovery projects. We emphasize the need for data collection at the application server layer (not the web server) in order to support logging of data and metadata that is essential to the discovery process. We describe the data transformation bridges required from the transaction processing systems and customer event streams (e.g., clickstreams) to the data warehouse. We detail the mining workbench, which needs to provide multiple views of the data through reporting, data mining algorithms, visualization, and OLAP. We conclude with a set of challenges. | ['Suhail Ansari' 'Ron Kohavi' 'Llew Mason' 'Zijian Zheng'] |
null | null | 0007070 | null | null | http://arxiv.org/pdf/physics/0007070v3 | 2001-01-23T20:02:27Z | 2000-07-20T00:45:11Z | Predictability, complexity and learning | We define *predictive information* $I_{\mathrm{pred}}(T)$ as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times $T$: $I_{\mathrm{pred}}(T)$ can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then $I_{\mathrm{pred}}(T)$ grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power-law growth is associated, for example, with the learning of infinite parameter (or nonparametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in learning theory and in the analysis of physical systems through statistical mechanics and dynamical systems theory. Further, in the same way that entropy provides the unique measure of available information consistent with some simple and plausible conditions, we argue that the divergent part of $I_{\mathrm{pred}}(T)$ provides the unique measure for the complexity of dynamics underlying a time series. Finally, we discuss how these ideas may be useful in different problems in physics, statistics, and biology. | ['William Bialek' 'Ilya Nemenman' 'Naftali Tishby'] |
null | null | 0008009 | null | null | http://arxiv.org/pdf/cs/0008009v1 | 2000-08-15T15:20:18Z | 2000-08-15T15:20:18Z | Data Mining to Measure and Improve the Success of Web Sites | For many companies, competitiveness in e-commerce requires a successful presence on the web. Web sites are used to establish the company's image, to promote and sell goods and to provide customer support. The success of a web site affects and reflects directly the success of the company in the electronic market. In this study, we propose a methodology to improve the "success" of web sites, based on the exploitation of navigation pattern discovery. In particular, we present a theory, in which success is modelled on the basis of the navigation behaviour of the site's users. We then exploit WUM, a navigation pattern discovery miner, to study how the success of a site is reflected in the users' behaviour. With WUM we measure the success of a site's components and obtain concrete indications of how the site should be improved. We report on our first experiments with an online catalog, the success of which we have studied. Our mining analysis has shown very promising results, on the basis of which the site is currently undergoing concrete improvements. | ['Myra Spiliopoulou' 'Carsten Pohle'] |
null | null | 0008019 | null | null | http://arxiv.org/pdf/cs/0008019v1 | 2000-08-22T11:20:14Z | 2000-08-22T11:20:14Z | An Experimental Comparison of Naive Bayesian and Keyword-Based Anti-Spam Filtering with Personal E-mail Messages | The growing problem of unsolicited bulk e-mail, also known as "spam", has generated a need for reliable anti-spam e-mail filters. Filters of this type have so far been based mostly on manually constructed keyword patterns. An alternative approach has recently been proposed, whereby a Naive Bayesian classifier is trained automatically to detect spam messages. We test this approach on a large collection of personal e-mail messages, which we make publicly available in "encrypted" form contributing towards standard benchmarks. We introduce appropriate cost-sensitive measures, investigating at the same time the effect of attribute-set size, training-corpus size, lemmatization, and stop lists, issues that have not been explored in previous experiments. Finally, the Naive Bayesian filter is compared, in terms of performance, to a filter that uses keyword patterns, and which is part of a widely used e-mail reader. | ['Ion Androutsopoulos' 'John Koutsias' 'Konstantinos V. Chandrinos' 'Constantine D. Spyropoulos'] |
null | null | 0008022 | null | null | http://arxiv.org/pdf/cs/0008022v1 | 2000-08-22T21:37:50Z | 2000-08-22T21:37:50Z | A Learning Approach to Shallow Parsing | A SNoW based learning approach to shallow parsing tasks is presented and studied experimentally. The approach learns to identify syntactic patterns by combining simple predictors to produce a coherent inference. Two instantiations of this approach are studied and experimental results for Noun-Phrases (NP) and Subject-Verb (SV) phrases that compare favorably with the best published results are presented. In doing that, we compare two ways of modeling the problem of learning to recognize patterns and suggest that shallow parsing patterns are better learned using open/close predictors than using inside/outside predictors. | ['Marcia Muñoz' 'Vasin Punyakanok' 'Dan Roth' 'Dav Zimak'] |
null | null | 0009001 | null | null | http://arxiv.org/pdf/cs/0009001v3 | 2002-02-26T01:51:09Z | 2000-09-05T18:54:58Z | Complexity analysis for algorithmically simple strings | Given a reference computer, Kolmogorov complexity is a well defined function on all binary strings. In the standard approach, however, only the asymptotic properties of such functions are considered because they do not depend on the reference computer. We argue that this approach can be more useful if it is refined to include an important practical case of simple binary strings. Kolmogorov complexity calculus may be developed for this case if we restrict the class of available reference computers. The interesting problem is to define a class of computers which is restricted in a *natural* way modeling the real-life situation where only a limited class of computers is physically available to us. We give an example of what such a natural restriction might look like mathematically, and show that under such restrictions some error terms, even logarithmic in complexity, can disappear from the standard complexity calculus. Keywords: Kolmogorov complexity; Algorithmic information theory. | ['Andrei N. Soklakov'] |
null | null | 0009007 | null | null | http://arxiv.org/pdf/cs/0009007v1 | 2000-09-13T21:09:47Z | 2000-09-13T21:09:47Z | Robust Classification for Imprecise Environments | In real-world environments it usually is difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid actually can surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid also is efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier indeed is needed for many real-world problems. | ['Foster Provost' 'Tom Fawcett'] |
null | null | 0009009 | null | null | http://arxiv.org/pdf/cs/0009009v1 | 2000-09-18T14:05:13Z | 2000-09-18T14:05:13Z | Learning to Filter Spam E-Mail: A Comparison of a Naive Bayesian and a Memory-Based Approach | We investigate the performance of two machine learning algorithms in the context of anti-spam filtering. The increasing volume of unsolicited bulk e-mail (spam) has generated a need for reliable anti-spam filters. Filters of this type have so far been based mostly on keyword patterns that are constructed by hand and perform poorly. The Naive Bayesian classifier has recently been suggested as an effective method to construct automatically anti-spam filters with superior performance. We investigate thoroughly the performance of the Naive Bayesian filter on a publicly available corpus, contributing towards standard benchmarks. At the same time, we compare the performance of the Naive Bayesian filter to an alternative memory-based learning approach, after introducing suitable cost-sensitive evaluation measures. Both methods achieve very accurate spam filtering, outperforming clearly the keyword-based filter of a widely used e-mail reader. | ['Ion Androutsopoulos' 'Georgios Paliouras' 'Vangelis Karkaletsis' 'Georgios Sakkis' 'Constantine D. Spyropoulos' 'Panagiotis Stamatopoulos'] |
null | null | 0009027 | null | null | http://arxiv.org/pdf/cs/0009027v1 | 2000-09-28T14:25:51Z | 2000-09-28T14:25:51Z | A Classification Approach to Word Prediction | The eventual goal of a language model is to accurately predict the value of a missing word given its context. We present an approach to word prediction that is based on learning a representation for each word as a function of words and linguistic predicates in its context. This approach raises a few new questions that we address. First, in order to learn good word representations it is necessary to use an expressive representation of the context. We present a way that uses external knowledge to generate expressive context representations, along with a learning method capable of handling the large number of features generated this way that can, potentially, contribute to each prediction. Second, since the number of words "competing" for each prediction is large, there is a need to "focus the attention" on a smaller subset of these. We exhibit the contribution of a "focus of attention" mechanism to the performance of the word predictor. Finally, we describe a large scale experimental study in which the approach presented is shown to yield significant improvements in word prediction tasks. | ['Yair Even-Zohar' 'Dan Roth'] |
null | null | 0009032 | null | null | http://arxiv.org/pdf/physics/0009032v1 | 2000-09-08T23:30:26Z | 2000-09-08T23:30:26Z | Information theory and learning: a physical approach | We try to establish a unified information theoretic approach to learning and to explore some of its applications. First, we define *predictive information* as the mutual information between the past and the future of a time series, discuss its behavior as a function of the length of the series, and explain how other quantities of interest studied previously in learning theory - as well as in dynamical systems and statistical mechanics - emerge from this universally definable concept. We then prove that predictive information provides the *unique measure for the complexity* of dynamics underlying the time series and show that there are classes of models characterized by *power-law growth of the predictive information* that are qualitatively more complex than any of the systems that have been investigated before. Further, we investigate numerically the learning of a nonparametric probability density, which is an example of a problem with power-law complexity, and show that the proper Bayesian formulation of this problem provides for the 'Occam' factors that punish overly complex models and thus allow one *to learn not only a solution within a specific model class, but also the class itself* using the data only and with very few a priori assumptions. We study a possible *information theoretic method* that regularizes the learning of an undersampled discrete variable, and show that learning in such a setup goes through stages of very different complexities. Finally, we discuss how all of these ideas may be useful in various problems in physics, statistics, and, most importantly, biology. | ['Ilya Nemenman'] |
null | null | 0009165 | null | null | http://arxiv.org/abs/cond-mat/0009165v2 | 2002-02-05T00:04:38Z | 2000-09-11T22:51:53Z | Occam factors and model-independent Bayesian learning of continuous distributions | Learning of a smooth but nonparametric probability density can be regularized using methods of Quantum Field Theory. We implement a field theoretic prior numerically, test its efficacy, and show that the data and the phase space factors arising from the integration over the model space determine the free parameter of the theory ("smoothness scale") self-consistently. This persists even for distributions that are atypical in the prior and is a step towards a model-independent theory for learning continuous distributions. Finally, we point out that a wrong parameterization of a model family may sometimes be advantageous for small data sets. | ['Ilya Nemenman' 'William Bialek'] |
null | null | 0010001 | null | null | http://arxiv.org/pdf/cs/0010001v1 | 2000-09-30T11:47:42Z | 2000-09-30T11:47:42Z | Design of an Electro-Hydraulic System Using Neuro-Fuzzy Techniques | Increasing demands in performance and quality make drive systems fundamental parts in the progressive automation of industrial processes. Their conventional models become inappropriate and have limited scope if one requires a precise and fast performance. So, it is important to incorporate learning capabilities into drive systems in such a way that they improve their accuracy in real time, becoming more autonomous agents with some degree of intelligence. To investigate this challenge, this chapter presents the development of a learning control system that uses neuro-fuzzy techniques in the design of a tracking controller for an experimental electro-hydraulic actuator. We begin the chapter by presenting the neuro-fuzzy modeling process of the actuator. This part surveys the learning algorithm, describes the laboratory system, and presents the modeling steps: the choice of representative actuator variables, the acquisition of training and testing data sets, and the acquisition of the neuro-fuzzy inverse-model of the actuator. In the second part of the chapter, we use the extracted neuro-fuzzy model and its learning capabilities to design the actuator position controller based on the feedback-error-learning technique. Through a set of experimental results, we show the generalization properties of the controller, its learning capability in updating the initial neuro-fuzzy inverse-model in real time, and its compensation action improving the electro-hydraulic tracking performance. | ['P. J. Costa Branco' 'J. A. Dente'] |
null | null | 0010002 | null | null | http://arxiv.org/pdf/cs/0010002v1 | 2000-09-30T14:37:23Z | 2000-09-30T14:37:23Z | Noise Effects in Fuzzy Modelling Systems | Noise is a source of ambiguity for fuzzy systems. Although it is an important aspect, the effects of noise in fuzzy modeling have been little investigated. This paper presents a set of tests using three well-known fuzzy modeling algorithms. These evaluate perturbations in the extracted rule-bases caused by noise polluting the learning data, and the corresponding deformations in each learned functional relation. We present results to show: 1) how these fuzzy modeling systems deal with noise; 2) how the established fuzzy model structure influences the noise sensitivity of each algorithm; and 3) which characteristics of the learning algorithms are relevant to noise attenuation. | ['P. J. Costa Branco' 'J. A. Dente'] |
null | null | 0010003 | null | null | http://arxiv.org/abs/cs/0010003v1 | 2000-09-30T15:31:16Z | 2000-09-30T15:31:16Z | Torque Ripple Minimization in a Switched Reluctance Drive by Neuro-Fuzzy Compensation | A simple power electronic drive circuit and fault tolerance of the converter are specific advantages of SRM drives, but excessive torque ripple has limited their use to special applications. It is well known that controlling the current shape adequately can minimize the torque ripple. This paper presents a new method for shaping the motor currents to minimize the torque ripple, using a neuro-fuzzy compensator. In the proposed method, a compensating signal is added to the output of a PI controller, in a current-regulated speed control loop. Numerical results are presented in this paper, with an analysis of the effects of changing the form of the membership function of the neuro-fuzzy compensator. | ['L. Henriques' 'L. Rolim' 'W. Suemitsu' 'P. J. Costa Branco' 'J. A. Dente'] |
null | null | 0010004 | null | null | http://arxiv.org/pdf/cs/0010004v1 | 2000-09-30T15:42:55Z | 2000-09-30T15:42:55Z | A Fuzzy Relational Identification Algorithm and Its Application to Predict the Behaviour of a Motor Drive System | Fuzzy relational identification builds a relational model describing a system's behaviour by a nonlinear mapping between its variables. In this paper, we propose a new fuzzy relational algorithm based on a simplified max-min relational equation. The algorithm presents an adaptation method, applied to the gravity center of each fuzzy set, based on the integral of the error between measured and predicted system output, and uses the concept of time-variant universes of discourse. The identification algorithm also includes a method to attenuate noise influence in the extracted relational model using a fuzzy filtering mechanism. The algorithm is applied to one-step forward prediction of a simulated and an experimental motor drive system. The identified model has its input-output variables (stator-reference current and motor speed signal) treated as fuzzy sets, whereas the relations existing between them are described by means of a matrix R defining the relational model extracted by the algorithm. The results show the good potential of the algorithm in predicting the behaviour of the system and in attenuating, through the fuzzy filtering method, possible noise distortions in the relational model. | ['P. J. Costa Branco' 'J. A. Dente'] |
null | null | 0010006 | null | null | http://arxiv.org/pdf/cs/0010006v1 | 2000-10-02T12:16:17Z | 2000-10-02T12:16:17Z | Applications of Data Mining to Electronic Commerce | Electronic commerce is emerging as the killer domain for data mining technology. The following are five desiderata for success. Seldom are they all present in one data mining application. 1. Data with rich descriptions. For example, wide customer records with many potentially useful fields allow data mining algorithms to search beyond obvious correlations. 2. A large volume of data. The large model spaces corresponding to rich data demand many training instances to build reliable models. 3. Controlled and reliable data collection. Manual data entry and integration from legacy systems both are notoriously problematic; fully automated collection is considerably better. 4. The ability to evaluate results. Substantial, demonstrable return on investment can be very convincing. 5. Ease of integration with existing processes. Even if pilot studies show potential benefit, deploying automated solutions to previously manual processes is rife with pitfalls. Building a system to take advantage of the mined knowledge can be a substantial undertaking. Furthermore, one often must deal with social and political issues involved in the automation of a previously manual business process. | ['Ron Kohavi' 'Foster Provost'] |
null | null | 0010010 | null | null | http://arxiv.org/abs/cs/0010010v1 | 2000-10-03T17:54:38Z | 2000-10-03T17:54:38Z | Fault Detection using Immune-Based Systems and Formal Language Algorithms | This paper describes two approaches for fault detection: an immune-based mechanism and a formal language algorithm. The first one is based on the ability of immune systems to distinguish any foreign cell from the body's own cells. The formal language approach treats the system as a linguistic source capable of generating a certain language, characterised by a grammar. Each algorithm has particular characteristics, which are analysed in the paper, namely the cases in which each can be used to advantage. To test their practicality, both approaches were applied to the problem of fault detection in an induction motor. | ['J. F. Martins' 'P. J. Costa Branco' 'A. J. Pires' 'J. A. Dente'] |
null | null | 0010022 | null | null | http://arxiv.org/pdf/cs/0010022v1 | 2000-10-15T20:14:08Z | 2000-10-15T20:14:08Z | Noise-Tolerant Learning, the Parity Problem, and the Statistical Query Model | We describe a slightly sub-exponential time algorithm for learning parity functions in the presence of random classification noise. This results in a polynomial-time algorithm for the case of parity functions that depend on only the first O(log n log log n) bits of input. This is the first known instance of an efficient noise-tolerant algorithm for a concept class that is provably not learnable in the Statistical Query model of Kearns. Thus, we demonstrate that the set of problems learnable in the statistical query model is a strict subset of those problems learnable in the presence of noise in the PAC model. In coding-theory terms, what we give is a poly(n)-time algorithm for decoding linear k by n codes in the presence of random noise for the case of k = c log n log log n for some c > 0. (The case of k = O(log n) is trivial since one can just individually check each of the 2^k possible messages and choose the one that yields the closest codeword.) A natural extension of the statistical query model is to allow queries about statistical properties that involve t-tuples of examples (as opposed to single examples). The second result of this paper is to show that any class of functions learnable (strongly or weakly) with t-wise queries for t = O(log n) is also weakly learnable with standard unary queries. Hence this natural extension to the statistical query model does not increase the set of weakly learnable functions. | ['Avrim Blum' 'Adam Kalai' 'Hal Wasserman'] |
null | null | 0011032 | null | null | http://arxiv.org/pdf/cs/0011032v1 | 2000-11-21T21:51:01Z | 2000-11-21T21:51:01Z | Top-down induction of clustering trees | An approach to clustering is presented that adapts the basic top-down induction of decision trees method towards clustering. To this aim, it employs the principles of instance based learning. The resulting methodology is implemented in the TIC (Top down Induction of Clustering trees) system for first order clustering. The TIC system employs the first order logical decision tree representation of the inductive logic programming system Tilde. Various experiments with TIC are presented, in both propositional and relational domains. | ['Hendrik Blockeel' 'Luc De Raedt' 'Jan Ramon'] |
null | null | 0011033 | null | null | http://arxiv.org/pdf/cs/0011033v1 | 2000-11-22T09:41:53Z | 2000-11-22T09:41:53Z | Web Mining Research: A Survey | With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. Web mining research is at the crossroads of research from several research communities, such as databases, information retrieval, and, within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusion when comparing research efforts from different points of view. In this paper, we survey the research in the area of Web mining, point out some confusion regarding the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we use representation issues, the process, the learning algorithm, and the application of the recent works as the criteria. We conclude the paper with some research issues. | ['Raymond Kosala' 'Hendrik Blockeel'] |
null | null | 0011038 | null | null | http://arxiv.org/pdf/cs/0011038v1 | 2000-11-23T14:48:53Z | 2000-11-23T14:48:53Z | Provably Fast and Accurate Recovery of Evolutionary Trees through Harmonic Greedy Triplets | We give a greedy learning algorithm for reconstructing an evolutionary tree based on a certain harmonic average on triplets of terminal taxa. After the pairwise distances between terminal taxa are estimated from sequence data, the algorithm runs in O(n^2) time using O(n) work space, where n is the number of terminal taxa. These time and space complexities are optimal in the sense that the size of an input distance matrix is n^2 and the size of an output tree is n. Moreover, in the Jukes-Cantor model of evolution, the algorithm recovers the correct tree topology with high probability using sample sequences of length polynomial in (1) n, (2) the logarithm of the error probability, and (3) the inverses of two small parameters. | ['Miklos Csuros' 'Ming-Yang Kao'] |
null | null | 0011044 | null | null | http://arxiv.org/pdf/cs/0011044v1 | 2000-11-29T12:14:50Z | 2000-11-29T12:14:50Z | Scaling Up Inductive Logic Programming by Learning from Interpretations | When comparing inductive logic programming (ILP) and attribute-value learning techniques, there is a trade-off between expressive power and efficiency. Inductive logic programming techniques are typically more expressive but also less efficient. Therefore, the data sets handled by current inductive logic programming systems are small according to general standards within the data mining community. The main source of inefficiency lies in the assumption that several examples may be related to each other, so they cannot be handled independently. Within the learning from interpretations framework for inductive logic programming this assumption is unnecessary, which allows existing ILP algorithms to be scaled up. In this paper we explain this learning setting in the context of relational databases. We relate the setting to propositional data mining and to the classical ILP setting, and show that learning from interpretations corresponds to learning from multiple relations and thus extends the expressiveness of propositional learning, while maintaining its efficiency to a large extent (which is not the case in the classical ILP setting). As a case study, we present two alternative implementations of the ILP system Tilde (Top-down Induction of Logical DEcision trees): Tilde-classic, which loads all data in main memory, and Tilde-LDS, which loads the examples one by one. We experimentally compare the implementations, showing that Tilde-LDS can handle large data sets (in the order of 100,000 examples or 100 MB) and indeed scales up linearly in the number of examples. | ['Hendrik Blockeel' 'Luc De Raedt' 'Nico Jacobs' 'Bart Demoen'] |
null | null | 0011122 | null | null | http://arxiv.org/pdf/quant-ph/0011122v2 | 2000-12-20T14:54:39Z | 2000-11-30T14:23:55Z | Algorithmic Theories of Everything | The probability distribution P from which the history of our universe is sampled represents a theory of everything or TOE. We assume P is formally describable. Since most (uncountably many) distributions are not, this imposes a strong inductive bias. We show that P(x) is small for any universe x lacking a short description, and study the spectrum of TOEs spanned by two Ps, one reflecting the most compact constructive descriptions, the other the fastest way of computing everything. The former derives from generalizations of traditional computability, Solomonoff's algorithmic probability, Kolmogorov complexity, and objects more random than Chaitin's Omega, the latter from Levin's universal search and a natural resource-oriented postulate: the cumulative prior probability of all x incomputable within time t by this optimal algorithm should be 1/t. Between both Ps we find a universal cumulatively enumerable measure that dominates traditional enumerable measures; any such CEM must assign low probability to any universe lacking a short enumerating program. We derive P-specific consequences for evolving observers, inductive reasoning, quantum physics, philosophy, and the expected duration of our universe. | ['Juergen Schmidhuber'] |
null | null | 0012011 | null | null | http://arxiv.org/pdf/cs/0012011v1 | 2000-12-16T09:38:13Z | 2000-12-16T09:38:13Z | Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory | Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown distribution. We unify both theories and give strong arguments that the resulting universal AIXI model behaves optimally in any computable environment. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXI^tl, which is still superior to any other time t and space l bounded agent. The computation time of AIXI^tl is of the order t x 2^l. | ['Marcus Hutter'] |
null | null | 0012163 | null | null | http://arxiv.org/pdf/math/0012163v2 | 2000-12-19T07:17:10Z | 2000-12-18T10:35:00Z | Learning Complexity Dimensions for a Continuous-Time Control System | This paper takes a computational learning theory approach to a problem of linear systems identification. It is assumed that input signals have only a finite number k of frequency components, and systems to be identified have dimension no greater than n. The main result establishes that the sample complexity needed for identification scales polynomially with n and logarithmically with k. | ['Pirkko Kuusela' 'Daniel Ocone' 'Eduardo D. Sontag'] |
null | null | 0101019 | null | null | http://arxiv.org/pdf/cs/0101019v2 | 2001-09-19T09:12:56Z | 2001-01-21T17:19:37Z | General Loss Bounds for Universal Sequence Prediction | The Bayesian framework is ideally suited for induction problems. The probability of observing $x_t$ at time $t$, given past observations $x_1 \dots x_{t-1}$, can be computed with Bayes' rule if the true distribution $\mu$ of the sequences $x_1 x_2 x_3 \dots$ is known. The problem, however, is that in many cases one does not even have a reasonable estimate of the true distribution. In order to overcome this problem a universal distribution $\xi$ is defined as a weighted sum of distributions $\mu_i \in M$, where $M$ is any countable set of distributions including $\mu$. This is a generalization of Solomonoff induction, in which $M$ is the set of all enumerable semi-measures. Systems which predict $y_t$, given $x_1 \dots x_{t-1}$, and which receive loss $l_{x_t y_t}$ if $x_t$ is the true next symbol of the sequence are considered. It is proven that using the universal $\xi$ as a prior is nearly as good as using the unknown true distribution $\mu$. Furthermore, games of chance, defined as a sequence of bets, observations, and rewards are studied. The time needed to reach the winning zone is bounded in terms of the relative entropy of $\mu$ and $\xi$. Extensions to arbitrary alphabets, partial and delayed prediction, and more active systems are discussed. | ['Marcus Hutter'] |
null | null | 0102015 | null | null | http://arxiv.org/pdf/cs/0102015v1 | 2001-02-20T13:08:15Z | 2001-02-20T13:08:15Z | Non-convex cost functionals in boosting algorithms and methods for panel selection | In this document we propose a new improvement for boosting techniques as proposed in Friedman '99 by the use of a non-convex cost functional. The idea is to introduce a correlation term to better deal with forecasting of additive time series. The problem is discussed in a theoretical way to prove the existence of a minimizing sequence, and in a numerical way to propose a new "ArgMin" algorithm. The model has been used to forecast tourist presence for the winter season 1999/2000 in Trentino (Italian Alps). | ['Marco Visentin'] |
null | null | 0102018 | null | null | http://arxiv.org/pdf/cs/0102018v1 | 2001-02-21T20:52:28Z | 2001-02-21T20:52:28Z | An effective Procedure for Speeding up Algorithms | The provably asymptotically fastest algorithm within a factor of 5 for formally described problems will be constructed. The main idea is to enumerate all programs provably equivalent to the original problem by enumerating all proofs. The algorithm could be interpreted as a generalization and improvement of Levin search, which is, within a multiplicative constant, the fastest algorithm for inverting functions. Blum's speed-up theorem is avoided by taking into account only programs for which a correctness proof exists. Furthermore, it is shown that the fastest program that computes a certain function is also one of the shortest programs provably computing this function. To quantify this statement, the definition of Kolmogorov complexity is extended, and two new natural measures for the complexity of a function are defined. | ['Marcus Hutter'] |
null | null | 0103003 | null | null | http://arxiv.org/pdf/cs/0103003v1 | 2001-03-02T01:55:46Z | 2001-03-02T01:55:46Z | Learning Policies with External Memory | In order for an agent to perform well in partially observable domains, it is usually necessary for actions to depend on the history of observations. In this paper, we explore a *stigmergic* approach, in which the agent's actions include the ability to set and clear bits in an external memory, and the external memory is included as part of the input to the agent. In this case, we need to learn a reactive policy in a highly non-Markovian domain. We explore two algorithms: SARSA($\lambda$), which has had empirical success in partially observable domains, and VAPS, a new algorithm due to Baird and Moore, with convergence guarantees in partially observable domains. We compare the performance of these two algorithms on benchmark problems. | ['Leonid Peshkin' 'Nicolas Meuleau' 'Leslie Kaelbling'] |
null | null | 0103015 | null | null | http://arxiv.org/pdf/cs/0103015v1 | 2001-03-14T18:40:32Z | 2001-03-14T18:40:32Z | Fitness Uniform Selection to Preserve Genetic Diversity | In evolutionary algorithms, the fitness of a population increases with time by mutating and recombining individuals and by a biased selection of more fit individuals. The right selection pressure is critical in ensuring sufficient optimization progress on the one hand and in preserving genetic diversity to be able to escape from local optima on the other. We propose a new selection scheme, which is uniform in the fitness values. It generates selection pressure towards sparsely populated fitness regions, not necessarily towards higher fitness, as is the case for all other selection schemes. We show that the new selection scheme can be much more effective than standard selection schemes. | ['Marcus Hutter'] |
null | null | 0104005 | null | null | http://arxiv.org/pdf/cs/0104005v1 | 2001-04-03T14:09:12Z | 2001-04-03T14:09:12Z | Bootstrapping Structure using Similarity | In this paper a new similarity-based learning algorithm, inspired by string edit-distance (Wagner and Fischer, 1974), is applied to the problem of bootstrapping structure from scratch. The algorithm takes a corpus of unannotated sentences as input and returns a corpus of bracketed sentences. The method works on pairs of unstructured sentences or sentences partially bracketed by the algorithm that have one or more words in common. It finds parts of sentences that are interchangeable (i.e. the parts of the sentences that are different in both sentences). These parts are taken as possible constituents of the same type. While this corresponds to the basic bootstrapping step of the algorithm, further structure may be learned from comparison with other (similar) sentences. We used this method for bootstrapping structure from the flat sentences of the Penn Treebank ATIS corpus, and compared the resulting structured sentences to the structured sentences in the ATIS corpus. Similarly, the algorithm was tested on the OVIS corpus. We obtained 86.04% non-crossing brackets precision on the ATIS corpus and 89.39% non-crossing brackets precision on the OVIS corpus. | ['Menno van Zaanen'] |
null | null | 0104006 | null | null | http://arxiv.org/pdf/cs/0104006v1 | 2001-04-03T14:20:26Z | 2001-04-03T14:20:26Z | ABL: Alignment-Based Learning | This paper introduces a new type of grammar learning algorithm, inspired by string edit distance (Wagner and Fischer, 1974). The algorithm takes a corpus of flat sentences as input and returns a corpus of labelled, bracketed sentences. The method works on pairs of unstructured sentences that have one or more words in common. When two sentences are divided into parts that are the same in both sentences and parts that are different, this information is used to find parts that are interchangeable. These parts are taken as possible constituents of the same type. After this alignment learning step, the selection learning step selects the most probable constituents from all possible constituents. This method was used to bootstrap structure on the ATIS corpus (Marcus et al., 1993) and on the OVIS (Openbaar Vervoer Informatie Systeem (OVIS) stands for Public Transport Information System.) corpus (Bonnema et al., 1997). While the results are encouraging (we obtained up to 89.25% non-crossing brackets precision), this paper will point out some of the shortcomings of our approach and will suggest possible solutions. | ['Menno van Zaanen'] |
null | null | 0104007 | null | null | http://arxiv.org/pdf/cs/0104007v1 | 2001-04-03T15:03:16Z | 2001-04-03T15:03:16Z | Bootstrapping Syntax and Recursion using Alignment-Based Learning | This paper introduces a new type of unsupervised learning algorithm, based on the alignment of sentences and Harris's (1951) notion of interchangeability. The algorithm is applied to an untagged, unstructured corpus of natural language sentences, resulting in a labelled, bracketed version of the corpus. Firstly, the algorithm aligns all sentences in the corpus in pairs, resulting in a partition of the sentences consisting of parts of the sentences that are similar in both sentences and parts that are dissimilar. This information is used to find (possibly overlapping) constituents. Next, the algorithm selects (non-overlapping) constituents. Several instances of the algorithm are applied to the ATIS corpus (Marcus et al., 1993) and the OVIS (Openbaar Vervoer Informatie Systeem (OVIS) stands for Public Transport Information System.) corpus (Bonnema et al., 1997). Apart from the promising numerical results, the most striking result is that even the simplest algorithm based on alignment learns recursion. | ['Menno van Zaanen'] |
null | null | 0105025 | null | null | http://arxiv.org/pdf/cs/0105025v1 | 2001-05-15T19:07:28Z | 2001-05-15T19:07:28Z | Market-Based Reinforcement Learning in Partially Observable Worlds | Unlike traditional reinforcement learning (RL), market-based RL is in principle applicable to worlds described by partially observable Markov Decision Processes (POMDPs), where an agent needs to learn short-term memories of relevant previous events in order to execute optimal actions. Most previous work, however, has focused on reactive settings (MDPs) instead of POMDPs. Here we reimplement a recent approach to market-based RL and for the first time evaluate it in a toy POMDP setting. | ['Ivo Kwee' 'Marcus Hutter' 'Juergen Schmidhuber'] |
null | null | 0105027 | null | null | http://arxiv.org/pdf/cs/0105027v1 | 2001-05-17T18:33:56Z | 2001-05-17T18:33:56Z | Bounds on sample size for policy evaluation in Markov environments | Reinforcement learning means finding the optimal course of action in Markovian environments without knowledge of the environment's dynamics. Stochastic optimization algorithms used in the field rely on estimates of the value of a policy. Typically, the value of a policy is estimated from results of simulating that very policy in the environment. This approach requires a large amount of simulation as different points in the policy space are considered. In this paper, we develop value estimators that utilize data gathered when using one policy to estimate the value of using another policy, resulting in much more data-efficient algorithms. We consider the question of accumulating a sufficient experience and give PAC-style bounds. | ['Leonid Peshkin' 'Sayan Mukherjee'] |
null | null | 0105032 | null | null | http://arxiv.org/pdf/cs/0105032v1 | 2001-05-25T02:52:07Z | 2001-05-25T02:52:07Z | Learning to Cooperate via Policy Search | Cooperative games are those in which both agents share the same payoff structure. Value-based reinforcement-learning algorithms, such as variants of Q-learning, have been applied to learning cooperative games, but they only apply when the game state is completely observable to both agents. Policy search methods are a reasonable alternative to value-based methods for partially observable environments. In this paper, we provide a gradient-based distributed policy-search method for cooperative games and compare the notion of local optimum to that of Nash equilibrium. We demonstrate the effectiveness of this method experimentally in a small, partially observable simulated soccer domain. | ['Leonid Peshkin' 'Kee-Eung Kim' 'Nicolas Meuleau' 'Leslie Pack Kaelbling'] |
null | null | 0105235 | null | null | http://arxiv.org/pdf/math/0105235v3 | 2001-12-03T02:25:00Z | 2001-05-29T02:20:17Z | Mathematics of learning | We study the convergence properties of a pair of learning algorithms (learning with and without memory). This leads us to study the dominant eigenvalue of a class of random matrices. This turns out to be related to the roots of the derivative of random polynomials (generated by picking their roots uniformly at random in the interval [0, 1], although our results extend to other distributions). This, in turn, requires the study of the statistical behavior of the harmonic mean of random variables as above, which leads us to the delicate question of the rate of convergence to stable laws and tail estimates for stable laws. The reader can find the proofs of most of the results announced here in the paper entitled "Harmonic mean, random polynomials, and random matrices", by the same authors. | ['Natalia Komarova' 'Igor Rivin'] |
null | null | 0105236 | null | null | http://arxiv.org/pdf/math/0105236v2 | 2001-12-03T02:22:12Z | 2001-05-29T02:25:23Z | Harmonic mean, random polynomials and stochastic matrices | Motivated by a problem in learning theory, we are led to study the dominant eigenvalue of a class of random matrices. This turns out to be related to the roots of the derivative of random polynomials (generated by picking their roots uniformly at random in the interval [0, 1], although our results extend to other distributions). This, in turn, requires the study of the statistical behavior of the harmonic mean of random variables as above, and that, in turn, leads us to delicate question of the rate of convergence to stable laws and tail estimates for stable laws. | [
"['Natalia Komarova' 'Igor Rivin']"
] |
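The two entries above study the same objects. The sketch below, under the stated uniform-roots assumption, simply computes them numerically with NumPy: roots drawn uniformly in [0, 1], the roots of the polynomial's derivative (which interlace the original roots), and the harmonic mean of the roots. The degree and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

# A random polynomial generated by picking its n roots uniformly in [0, 1].
roots = rng.uniform(0.0, 1.0, size=n)
coeffs = np.polynomial.polynomial.polyfromroots(roots)  # ascending powers

# Roots of the derivative: the critical points studied in these papers.
deriv = np.polynomial.polynomial.polyder(coeffs)
crit = np.polynomial.polynomial.polyroots(deriv)

# Harmonic mean of the roots, the statistic whose behavior drives the analysis.
harmonic_mean = n / np.sum(1.0 / roots)
print("max root:       ", roots.max())
print("max critical pt:", crit.real.max())
print("harmonic mean:  ", harmonic_mean)
```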
null | null | 0106016 | null | null | http://arxiv.org/pdf/cs/0106016v1 | 2001-06-10T14:56:51Z | 2001-06-10T14:56:51Z | File mapping Rule-based DBMS and Natural Language Processing | This paper describes a system for the storage, extraction, and processing of information structured similarly to natural language. For recursive inference the system uses rules that have the same representation as the data. The information storage environment is provided by the operating system's file-mapping (SHM) mechanism. The paper states the main principles for constructing the dynamic data structure and the language for recording inference rules, considers the features of the available implementation, and describes an application that performs semantic information retrieval in natural language. | [
"['Vjacheslav M. Novikov']"
] |
null | null | 0106036 | null | null | http://arxiv.org/pdf/cs/0106036v1 | 2001-06-15T09:12:51Z | 2001-06-15T09:12:51Z | Convergence and Error Bounds for Universal Prediction of Nonbinary
Sequences | Solomonoff's uncomputable universal prediction scheme $\xi$ allows one to predict the next symbol $x_k$ of a sequence $x_1 \dots x_{k-1}$ for any Turing computable, but otherwise unknown, probabilistic environment $\mu$. This scheme will be generalized to arbitrary environmental classes, which, among others, allows the construction of computable universal prediction schemes $\xi$. Convergence of $\xi$ to $\mu$ in a conditional mean squared sense and with $\mu$-probability 1 is proven. It is shown that the average number of prediction errors made by the universal $\xi$ scheme rapidly converges to those made by the best possible informed $\mu$ scheme. The schemes, theorems and proofs are given for general finite alphabet, which results in additional complications as compared to the binary case. Several extensions of the presented theory and results are outlined. They include general loss functions and bounds, games of chance, infinite alphabet, partial and delayed prediction, classification, and more active systems. | [
"['Marcus Hutter']"
] |
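A toy version of the mixture construction in the entry above, with the class of environments cut down to a finite grid of Bernoulli distributions (an assumption made purely to keep the sketch finite; the paper's classes are far more general). The mixture $\xi$ predicts with posterior-weighted candidates and its predictions converge to those of the true $\mu$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Finite class of candidate environments: Bernoulli(p) for a grid of p.
grid = np.linspace(0.05, 0.95, 19)
w = np.full(len(grid), 1.0 / len(grid))   # prior weights of the mixture xi
mu = 0.7                                  # true (unknown) environment

for t in range(200):
    xi_pred = np.dot(w, grid)             # xi's probability that x_t = 1
    x = rng.random() < mu                 # next symbol drawn from mu
    lik = grid if x else (1.0 - grid)     # each candidate's likelihood of x
    w = w * lik
    w /= w.sum()                          # Bayesian posterior update

print("xi's final prediction:", np.dot(w, grid))   # close to mu = 0.7
```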
null | null | 0106044 | null | null | http://arxiv.org/pdf/cs/0106044v1 | 2001-06-20T19:01:41Z | 2001-06-20T19:01:41Z | A Sequential Model for Multi-Class Classification | Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general-purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach -- a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidate set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in part-of-speech tagging. | [
"['Yair Even-Zohar' 'Dan Roth']"
] |
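A rough sketch of the sequential restriction idea from the entry above: a first-stage classifier keeps only the k most probable classes for each example, so the true outcome survives with high probability, and a second-stage classifier decides among the survivors. The synthetic data, the choice of models, and k are all assumptions, not the authors' experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, n_informative=15,
                           n_classes=10, random_state=0)
Xtr, Xte, ytr, yte = X[:2500], X[2500:], y[:2500], y[2500:]

# Stage 1: a cheap classifier restricts each example to its k likeliest classes.
k = 3
stage1 = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
top_k = np.argsort(stage1.predict_proba(Xte), axis=1)[:, -k:]
coverage = np.mean([yte[i] in top_k[i] for i in range(len(yte))])
print(f"true class survives restriction: {coverage:.2%}")

# Stage 2: a stronger classifier decides among the surviving candidates only.
stage2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
probs = stage2.predict_proba(Xte)
masked = np.full_like(probs, -np.inf)
rows = np.arange(len(yte))[:, None]
masked[rows, top_k] = probs[rows, top_k]
print(f"accuracy after restriction: {np.mean(masked.argmax(axis=1) == yte):.2%}")
```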
null | null | 0107032 | null | null | http://arxiv.org/pdf/cs/0107032v1 | 2001-07-23T11:06:45Z | 2001-07-23T11:06:45Z | Coupled Clustering: a Method for Detecting Structural Correspondence | This paper proposes a new paradigm and computational framework for identification of correspondences between sub-structures of distinct composite systems. For this, we define and investigate a variant of traditional data clustering, termed coupled clustering, which simultaneously identifies corresponding clusters within two data sets. The presented method is demonstrated and evaluated for detecting topical correspondences in textual corpora. | [
"['Zvika Marx' 'Ido Dagan' 'Joachim Buhmann']"
] |
null | null | 0107033 | null | null | http://arxiv.org/pdf/cs/0107033v1 | 2001-07-25T15:50:43Z | 2001-07-25T15:50:43Z | Yet another zeta function and learning | We study the convergence speed of the batch learning algorithm, and compare its speed to that of the memoryless learning algorithm and of learning with memory (as analyzed in joint work with N. Komarova). We obtain precise results and show in particular that the batch learning algorithm is never worse than the memoryless learning algorithm (at least asymptotically). Its performance vis-a-vis learning with full memory is less clearcut, and depends on certain probabilistic assumptions. These results necessitate the introduction of the moment zeta function of a probability distribution and the study of some of its properties. | [
"['Igor Rivin']"
] |
null | null | 0108018 | null | null | http://arxiv.org/pdf/cs/0108018v1 | 2001-08-27T13:07:44Z | 2001-08-27T13:07:44Z | Bipartite graph partitioning and data clustering | Many data types arising from data mining applications can be modeled as bipartite graphs; examples include terms and documents in a text corpus, customers and purchased items in market basket analysis, and reviewers and movies in a movie recommender system. In this paper, we propose a new data clustering method based on partitioning the underlying bipartite graph. The partition is constructed by minimizing a normalized sum of edge weights between unmatched pairs of vertices of the bipartite graph. We show that an approximate solution to the minimization problem can be obtained by computing a partial singular value decomposition (SVD) of the associated edge weight matrix of the bipartite graph. We point out the connection of our clustering algorithm to correspondence analysis used in multivariate analysis. We also briefly discuss the issue of assigning data objects to multiple clusters. In the experimental results, we apply our clustering algorithm to the problem of document clustering to illustrate its effectiveness and efficiency. | [
"['H. Zha' 'X. He' 'C. Ding' 'M. Gu' 'H. Simon']"
] |
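A compact sketch of the spectral relaxation described above: degree-normalize the edge weight matrix, take a partial SVD, and read the bipartition off the signs of the scaled second singular vectors. The toy block matrix is an assumption; the paper's experiments used term-document matrices.

```python
import numpy as np
from scipy.sparse.linalg import svds

# Toy edge-weight matrix: rows = terms, cols = documents, two hidden blocks.
rng = np.random.default_rng(4)
A = np.zeros((8, 6))
A[:4, :3] = rng.uniform(1, 2, (4, 3))   # block 1
A[4:, 3:] = rng.uniform(1, 2, (4, 3))   # block 2
A += 0.05 * rng.uniform(size=A.shape)   # light noise across the blocks

# Degree normalization: An = D1^{-1/2} A D2^{-1/2}.
d1 = np.sqrt(A.sum(axis=1))
d2 = np.sqrt(A.sum(axis=0))
An = A / d1[:, None] / d2[None, :]

# Partial SVD; the second singular vectors carry the partition information.
U, s, Vt = svds(An, k=2)
order = np.argsort(s)[::-1]             # svds does not order singular values
u2, v2 = U[:, order[1]], Vt[order[1], :]

print("term partition:", (u2 / d1 > 0).astype(int))
print("doc partition: ", (v2 / d2 > 0).astype(int))
```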
null | null | 0109034 | null | null | http://arxiv.org/pdf/cs/0109034v1 | 2001-09-19T08:07:38Z | 2001-09-19T08:07:38Z | Relevant Knowledge First - Reinforcement Learning and Forgetting in
Knowledge Based Configuration | In order to solve complex configuration tasks in technical domains, various knowledge-based methods have been developed. However, their application often fails in practice because of their low efficiency. One reason for this is that (parts of) the problems have to be solved again and again, instead of being "learnt" from preceding processes. Learning processes, however, bring with them the problem of conservatism, for in technical domains innovation is a deciding factor in competition. On the other hand, a certain amount of conservatism is often desired, since uncontrolled innovation is, as a rule, also detrimental. This paper proposes the heuristic RKF (Relevant Knowledge First) for making decisions in configuration processes based on the so-called relevance of objects in a knowledge base. The underlying relevance function has two components, one based on reinforcement learning and the other based on forgetting (fading). The relevance of an object increases with its successful use and decreases with age when it is not used. RKF has been developed to speed up the configuration process and to improve the quality of the solutions relative to the reward value given by users. | [
"['Ingo Kreuz' 'Dieter Roller']"
] |
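The abstract above specifies the two components of the relevance function but not its exact form. The sketch below assumes one simple instantiation: additive reinforcement on successful use plus exponential fading with the age of disuse; the constants and the class interface are invented for illustration.

```python
import math

class Relevance:
    """Toy relevance score: reinforcement on success, exponential fading."""

    def __init__(self, reinforce=0.2, fade=0.01):
        self.value = 0.0
        self.last_used = 0
        self.reinforce = reinforce
        self.fade = fade

    def reward(self, now, success_reward=1.0):
        # Successful use: fade what remains, then reinforce.
        self.value = self._faded(now) + self.reinforce * success_reward
        self.last_used = now

    def score(self, now):
        return self._faded(now)

    def _faded(self, now):
        # Relevance decays with the time elapsed since the last use.
        return self.value * math.exp(-self.fade * (now - self.last_used))

r = Relevance()
r.reward(now=0)                    # object used successfully at t = 0
print(round(r.score(now=10), 3))   # relevance has faded by t = 10
r.reward(now=10)                   # reinforced again on renewed success
print(round(r.score(now=10), 3))
```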
null | null | 0110036 | null | null | http://arxiv.org/pdf/cs/0110036v1 | 2001-10-17T15:45:23Z | 2001-10-17T15:45:23Z | Efficient algorithms for decision tree cross-validation | Cross-validation is a useful and generally applicable technique often employed in machine learning, including decision tree induction. An important disadvantage of straightforward implementation of the technique is its computational overhead. In this paper we show that, for decision trees, the computational overhead of cross-validation can be reduced significantly by integrating the cross-validation with the normal decision tree induction process. We discuss how existing decision tree algorithms can be adapted to this aim, and provide an analysis of the speedups these adaptations may yield. The analysis is supported by experimental results. | [
"['Hendrik Blockeel' 'Jan Struyf']"
] |
null | null | 0110053 | null | null | http://arxiv.org/abs/cs/0110053v1 | 2001-10-26T09:27:48Z | 2001-10-26T09:27:48Z | Machine Learning in Automated Text Categorization | The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert manpower, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely document representation, classifier construction, and classifier evaluation. | [
"['Fabrizio Sebastiani']"
] |
null | null | 0111003 | null | null | http://arxiv.org/pdf/cs/0111003v1 | 2001-11-01T03:02:19Z | 2001-11-01T03:02:19Z | The Use of Classifiers in Sequential Inference | We study the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. In particular, we develop two general approaches for an important subproblem: identifying phrase structure. The first is a Markovian approach that extends standard HMMs to allow the use of a rich observation structure and of general classifiers to model state-observation dependencies. The second is an extension of constraint satisfaction formalisms. We develop efficient combination algorithms under both models and study them experimentally in the context of shallow parsing. | [
"['Vasin Punyakanok' 'Dan Roth']"
] |
null | null | 0201005 | null | null | http://arxiv.org/pdf/cs/0201005v2 | 2002-10-10T17:23:57Z | 2002-01-08T16:44:10Z | Sharpening Occam's Razor | We provide a new representation-independent formulation of Occam's razor theorem, based on Kolmogorov complexity. This new formulation allows us to: (i) Obtain better sample complexity than both length-based and VC-based versions of Occam's razor theorem, in many applications. (ii) Achieve a sharper reverse of Occam's razor theorem than previous work. Specifically, we weaken the assumptions made in an earlier publication, and extend the reverse to superpolynomial running times. | [
"['Ming Li' 'John Tromp' 'Paul Vitanyi']"
] |
null | null | 0201009 | null | null | http://arxiv.org/pdf/cs/0201009v1 | 2002-01-14T18:38:55Z | 2002-01-14T18:38:55Z | The performance of the batch learner algorithm | We completely analyze the convergence speed of the \emph{batch learning algorithm}, and compare its speed to that of the memoryless learning algorithm and of learning with memory. We show that the batch learning algorithm is never worse than the memoryless learning algorithm (at least asymptotically). Its performance \emph{vis-a-vis} learning with full memory is less clearcut, and depends on certain probabilistic assumptions. | [
"['Igor Rivin']"
] |
null | null | 0201014 | null | null | http://arxiv.org/pdf/cs/0201014v1 | 2002-01-17T13:42:23Z | 2002-01-17T13:42:23Z | The Dynamics of AdaBoost Weights Tells You What's Hard to Classify | The dynamical evolution of weights in the AdaBoost algorithm contains useful information about the role that the associated data points play in the building of the AdaBoost model. In particular, the dynamics induces a bipartition of the data set into two (easy/hard) classes. Easy points have little influence on the making of the model, while the varying relevance of hard points can be gauged in terms of an entropy value associated with their evolution. Smooth approximations of entropy highlight regions where classification is most uncertain. Promising results are obtained when the proposed methods are applied in the Optimal Sampling framework. | [
"['Bruno Caprile' 'Cesare Furlanello' 'Stefano Merler']"
] |
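A sketch of how the weight dynamics above can be tracked: run plain discrete AdaBoost with decision stumps, record the sample weights each round, and score each point by the entropy of its normalized weight trajectory. The stump learner, the trajectory normalization, and this particular entropy reading are assumptions; the paper's exact definitions may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, flip_y=0.1, random_state=0)
y = 2 * y - 1                                  # labels in {-1, +1}

n, rounds = len(y), 50
w = np.full(n, 1.0 / n)
history = np.empty((rounds, n))

for t in range(rounds):                        # plain discrete AdaBoost
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum()
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
    history[t] = w

# Per-point entropy of its weight trajectory across the rounds.
traj = history / history.sum(axis=0, keepdims=True)
entropy = -(traj * np.log(traj + 1e-12)).sum(axis=0)

print("hardest points:", np.argsort(entropy)[-5:])   # persistent weight
print("easiest points:", np.argsort(entropy)[:5])    # weight decays early
```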
null | null | 0201021 | null | null | http://arxiv.org/pdf/cs/0201021v1 | 2002-01-23T11:58:17Z | 2002-01-23T11:58:17Z | Learning to Play Games in Extensive Form by Valuation | A valuation for a player in a game in extensive form is an assignment of numeric values to the player's moves. The valuation reflects the desirability of the moves. We assume a myopic player, who chooses a move with the highest valuation. Valuations can also be revised, and hopefully improved, after each play of the game. Here, a very simple valuation revision is considered, in which the moves made in a play are assigned the payoff obtained in the play. We show that by adopting such a learning process a player who has a winning strategy in a win-lose game can almost surely guarantee a win in a repeated game. When a player has more than two payoffs, a more elaborate learning procedure is required. We consider one that associates with each move the average payoff in the rounds in which this move was made. When all players adopt this learning procedure, with some perturbations, then, with probability 1, strategies that are close to subgame perfect equilibrium are played after some time. A single player who adopts this procedure can guarantee only her individually rational payoff. | [
"['Philippe Jehiel' 'Dov Samet']"
] |
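The first learning rule in the entry above (assign to every move made the payoff of that play, then choose myopically) can be simulated in a few lines. The tiny one-shot win-lose game below, in which move 'a' wins against every reply, is an assumption made for illustration.

```python
import random

random.seed(5)

# Win-lose game: the player moves, then the opponent replies; payoff in {0, 1}.
# Move 'a' wins against every reply; move 'b' wins only against reply 'x'.
PAYOFF = {('a', 'x'): 1, ('a', 'y'): 1, ('b', 'x'): 1, ('b', 'y'): 0}

value = {'a': 0.0, 'b': 0.0}           # initial valuation of the two moves

wins = 0
for t in range(1000):
    best = max(value.values())
    move = random.choice([m for m in value if value[m] == best])  # myopic
    reply = random.choice(['x', 'y'])
    payoff = PAYOFF[(move, reply)]
    value[move] = payoff               # revision: the move gets the payoff
    wins += payoff

print(value, wins)
```

After finitely many plays the valuation locks onto 'a', matching the almost-sure-win result for a player who has a winning strategy.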
null | null | 0202383 | null | null | http://arxiv.org/pdf/cond-mat/0202383v1 | 2002-02-21T18:25:29Z | 2002-02-21T18:25:29Z | Extended Comment on Language Trees and Zipping | This is the extended version of a Comment submitted to Physical Review Letters. I first point out the inappropriateness of publishing a Letter unrelated to physics. Next, I give experimental results showing that the technique used in the Letter is 3 times worse and 17 times slower than a simple baseline. And finally, I review the literature, showing that the ideas of the Letter are not novel. I conclude by suggesting that Physical Review Letters should not publish Letters unrelated to physics. | [
"['Joshua Goodman']"
] |
null | null | 0203010 | null | null | http://arxiv.org/pdf/cs/0203010v1 | 2002-03-07T10:16:25Z | 2002-03-07T10:16:25Z | On Learning by Exchanging Advice | One of the main questions concerning learning in Multi-Agent Systems is: (How) can agents benefit from mutual interaction during the learning process? This paper describes the study of an interactive advice-exchange mechanism as a possible way to improve agents' learning performance. The advice-exchange technique, discussed here, uses supervised learning (backpropagation), where reinforcement is not directly coming from the environment but is based on advice given by peers with a better performance score (higher confidence), to enhance the performance of a heterogeneous group of Learning Agents (LAs). The LAs are facing similar problems, in an environment where only reinforcement information is available. Each LA applies a different, well-known learning technique: Random Walk (hill-climbing), Simulated Annealing, Evolutionary Algorithms and Q-Learning. The problem used for evaluation is a simplified traffic-control simulation. Initial results indicate that advice-exchange can improve learning speed, although bad advice and/or blind reliance can disturb the learning performance. | [
"['L. Nunes' 'E. Oliveira']"
] |
null | null | 0203011 | null | null | http://arxiv.org/pdf/cs/0203011v1 | 2002-03-08T15:58:23Z | 2002-03-08T15:58:23Z | Capturing Knowledge of User Preferences: ontologies on recommender
systems | Tools for filtering the World Wide Web exist, but they are hampered by the difficulty of capturing user preferences in such a dynamic environment. We explore the acquisition of user profiles by unobtrusive monitoring of browsing behaviour and application of supervised machine-learning techniques coupled with an ontological representation to extract user preferences. A multi-class approach to paper classification is used, allowing the paper topic taxonomy to be utilised during profile construction. The Quickstep recommender system is presented and two empirical studies evaluate it in a real work setting, measuring the effectiveness of using a hierarchical topic ontology compared with an extendable flat list. | [
"['S. E. Middleton' 'D. C. De Roure' 'N. R. Shadbolt']"
] |
null | null | 0203012 | null | null | http://arxiv.org/pdf/cs/0203012v1 | 2002-03-09T01:28:33Z | 2002-03-09T01:28:33Z | Interface agents: A review of the field | This paper reviews the origins of interface agents, discusses challenges that exist within the interface agent field and presents a survey of current attempts to find solutions to these challenges. A history of agent systems from their birth in the 1960's to the current day is described, along with the issues they try to address. A taxonomy of interface agent systems is presented, and today's agent systems categorized accordingly. Lastly, an analysis of the machine learning and user modelling techniques used by today's agents is presented. | [
"['Stuart E. Middleton']"
] |
null | null | 0204012 | null | null | http://arxiv.org/pdf/cs/0204012v1 | 2002-04-08T10:56:26Z | 2002-04-08T10:56:26Z | Exploiting Synergy Between Ontologies and Recommender Systems | Recommender systems learn about user preferences over time, automatically finding things of similar interest. This reduces the burden of creating explicit queries. Recommender systems do, however, suffer from cold-start problems where no initial information is available early on upon which to base recommendations. Semantic knowledge structures, such as ontologies, can provide valuable domain knowledge and user information. However, acquiring such knowledge and keeping it up to date is not a trivial task and user interests are particularly difficult to acquire and maintain. This paper investigates the synergy between a web-based research paper recommender system and an ontology containing information automatically extracted from departmental databases available on the web. The ontology is used to address the recommender system's cold-start problem. The recommender system addresses the ontology's interest-acquisition problem. An empirical evaluation of this approach is conducted and the performance of the integrated systems measured. | [
"['Stuart E. Middleton' 'Harith Alani' 'David C. De Roure']"
] |
null | null | 0204040 | null | null | http://arxiv.org/pdf/cs/0204040v1 | 2002-04-17T10:46:00Z | 2002-04-17T10:46:00Z | Self-Optimizing and Pareto-Optimal Policies in General Environments
based on Bayes-Mixtures | The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle $t$ action $y_t$ results in perception $x_t$ and reward $r_t$, where all quantities in general may depend on the complete history. The perception $x_t$ and reward $r_t$ are sampled from the (reactive) environmental probability distribution $\mu$. This very general setting includes, but is not limited to, (partially observable, $k$-th order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if $\mu$ is known. Reinforcement learning is usually used if $\mu$ is unknown. In the Bayesian approach one defines a mixture distribution $\xi$ as a weighted sum of distributions $\nu \in \mathcal{M}$, where $\mathcal{M}$ is any class of distributions including the true environment $\mu$. We show that the Bayes-optimal policy $p^\xi$ based on the mixture $\xi$ is self-optimizing in the sense that the average value converges asymptotically for all $\mu \in \mathcal{M}$ to the optimal value achieved by the (infeasible) Bayes-optimal policy $p^\mu$ which knows $\mu$ in advance. We show that the necessary condition that $\mathcal{M}$ admits self-optimizing policies at all is also sufficient. No other structural assumptions are made on $\mathcal{M}$. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies. Furthermore, we show that $p^\xi$ is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in \emph{all} environments $\nu \in \mathcal{M}$ and a strictly higher value in at least one. | [
"['Marcus Hutter']"
] |
null | null | 0204043 | null | null | http://arxiv.org/pdf/cs/0204043v1 | 2002-04-20T05:02:53Z | 2002-04-20T05:02:53Z | Learning from Scarce Experience | Searching the space of policies directly for the optimal policy has been one popular method for solving partially observable reinforcement learning problems. Typically, with each change of the target policy, its value is estimated from the results of following that very policy. This requires a large number of interactions with the environment as different policies are considered. We present a family of algorithms based on likelihood ratio estimation that use data gathered when executing one policy (or collection of policies) to estimate the value of a different policy. The algorithms combine estimation and optimization stages. The former utilizes experience to build a non-parametric representation of an optimized function. The latter performs optimization on this estimate. We show positive empirical results and provide the sample complexity bound. | [
"['Leonid Peshkin' 'Christian R. Shelton']"
] |
null | null | 0204052 | null | null | http://arxiv.org/pdf/cs/0204052v1 | 2002-04-26T14:33:29Z | 2002-04-26T14:33:29Z | Required sample size for learning sparse Bayesian networks with many
variables | Learning joint probability distributions on $n$ random variables requires exponential sample size in the generic case. Here we consider the case that a temporal (or causal) order of the variables is known and that the (unknown) graph of causal dependencies has bounded in-degree $\Delta$. Then the joint measure is uniquely determined by the probabilities of all $(2\Delta+1)$-tuples. Upper bounds on the sample size required for estimating their probabilities can be given in terms of the VC-dimension of the set of corresponding cylinder sets. The sample size grows less than linearly with $n$. | [
"['Pawel Wocjan' 'Dominik Janzing' 'Thomas Beth']"
] |
null | null | 0205025 | null | null | http://arxiv.org/pdf/cs/0205025v1 | 2002-05-16T12:35:00Z | 2002-05-16T12:35:00Z | Bootstrapping Structure into Language: Alignment-Based Learning | This thesis introduces a new unsupervised learning framework, called Alignment-Based Learning, which is based on the alignment of sentences and Harris's (1951) notion of substitutability. Instances of the framework can be applied to an untagged, unstructured corpus of natural language sentences, resulting in a labelled, bracketed version of that corpus. Firstly, the framework aligns all sentences in the corpus in pairs, partitioning each pair into parts that are equal in both sentences and parts that are unequal. Unequal parts of sentences can be seen as being substitutable for each other, since substituting one unequal part for the other results in another valid sentence. The unequal parts of the sentences are thus considered to be possible (possibly overlapping) constituents, called hypotheses. Secondly, the selection learning phase considers all hypotheses found by the alignment learning phase and selects the best of these. The hypotheses are selected based on the order in which they were found, or based on a probabilistic function. The framework can be extended with a grammar extraction phase. This extended framework is called parseABL. Instead of returning a structured version of the unstructured input corpus, like the ABL system, this system also returns a stochastic context-free or tree substitution grammar. Different instances of the framework have been tested on the English ATIS corpus, the Dutch OVIS corpus and the Wall Street Journal corpus. One of the interesting results, apart from the encouraging numerical results, is that all instances can (and do) learn recursive structures. | [
"['Menno M. van Zaanen']"
] |
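The alignment step of the framework above can be sketched with Python's difflib, which here stands in for the thesis's alignment algorithms (an assumption; the thesis considers several alignment methods): equal parts anchor the comparison, and unequal parts become the substitutable constituent hypotheses.

```python
from difflib import SequenceMatcher

def hypotheses(s1, s2):
    """Align two sentences; return the unequal (substitutable) parts."""
    t1, t2 = s1.split(), s2.split()
    sm = SequenceMatcher(a=t1, b=t2)
    out = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != 'equal':          # unequal parts are mutually substitutable
            out.append((t1[i1:i2], t2[j1:j2]))
    return out

pairs = hypotheses("show me flights from Boston to Denver",
                   "show me flights from Dallas to Atlanta")
for h1, h2 in pairs:
    print(h1, "<->", h2)           # e.g. ['Boston'] <-> ['Dallas']
```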
null | null | 0205070 | null | null | http://arxiv.org/pdf/cs/0205070v1 | 2002-05-28T02:01:55Z | 2002-05-28T02:01:55Z | Thumbs up? Sentiment Classification using Machine Learning Techniques | We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging. | [
"['Bo Pang' 'Lillian Lee' 'Shivakumar Vaithyanathan']"
] |
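A minimal reproduction of the experimental setup above, assuming a tiny placeholder corpus in place of the movie review data and binary presence features (both assumptions); maximum entropy classification is omitted for brevity.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus; the paper used positive and negative movie reviews.
docs = ["a gripping, superb film", "wonderful acting and direction",
        "dull, predictable and far too long", "a complete waste of time",
        "an instant classic", "painfully bad dialogue"] * 50
labels = [1, 1, 0, 0, 1, 0] * 50

for clf in (MultinomialNB(), LinearSVC()):
    pipe = make_pipeline(CountVectorizer(binary=True), clf)
    scores = cross_val_score(pipe, docs, labels, cv=3)
    print(type(clf).__name__, scores.mean())
```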
null | null | 0205072 | null | null | http://arxiv.org/pdf/cs/0205072v1 | 2002-05-29T17:48:48Z | 2002-05-29T17:48:48Z | Unsupervised Learning of Morphology without Morphemes | The first morphological learner based upon the theory of Whole Word Morphology (Ford et al., 1997) is outlined, and preliminary evaluation results are presented. The program, Whole Word Morphologizer, takes a POS-tagged lexicon as input, induces morphological relationships without attempting to discover or identify morphemes, and is then able to generate new words beyond the learning sample. The accuracy (precision) of the generated new words is as high as 80% using the pure Whole Word theory, and 92% after a post-hoc adjustment is added to the routine. | [
"['Sylvain Neuvel' 'Sean A. Fulop']"
] |
null | null | 0206006 | null | null | http://arxiv.org/pdf/cs/0206006v1 | 2002-06-03T16:00:55Z | 2002-06-03T16:00:55Z | Robust Feature Selection by Mutual Information Distributions | Mutual information is widely used in artificial intelligence, in a descriptive way, to measure the stochastic dependence of discrete random variables. In order to address questions such as the reliability of the empirical value, one must consider sample-to-population inferential approaches. This paper deals with the distribution of mutual information, as obtained in a Bayesian framework by a second-order Dirichlet prior distribution. The exact analytical expression for the mean and an analytical approximation of the variance are reported. Asymptotic approximations of the distribution are proposed. The results are applied to the problem of selecting features for incremental learning and classification of the naive Bayes classifier. A fast, newly defined method is shown to outperform the traditional approach based on empirical mutual information on a number of real data sets. Finally, a theoretical development is reported that allows one to efficiently extend the above methods to incomplete samples in an easy and effective way. | [
"['Marco Zaffalon' 'Marcus Hutter']"
] |
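The paper above derives the posterior mean and variance of mutual information analytically; the sketch below gets at the same distribution by brute force, sampling joint distributions from a Dirichlet posterior. The uniform Dirichlet prior, the toy counts, and Monte Carlo in place of the closed forms are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def mi(p):
    """Mutual information (nats) of a joint probability table p."""
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    with np.errstate(divide='ignore', invalid='ignore'):
        t = p * np.log(p / (px * py))
    return np.nansum(t)

# Observed counts for two binary variables (the data).
counts = np.array([[12., 3.],
                   [4., 11.]])

# The posterior over the joint is Dirichlet(counts + prior); sample it.
prior = 1.0
samples = rng.dirichlet((counts + prior).ravel(), size=20_000)
mis = np.array([mi(s.reshape(2, 2)) for s in samples])

print("posterior mean MI:", mis.mean())
print("posterior std  MI:", mis.std())
print("empirical plug-in MI:", mi(counts / counts.sum()))
```

The gap between the posterior mean and the plug-in value illustrates why the paper's inferential treatment matters for feature selection on small samples.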
null | null | 0206017 | null | null | http://arxiv.org/pdf/cs/0206017v1 | 2002-06-10T16:02:36Z | 2002-06-10T16:02:36Z | The Prioritized Inductive Logic Programs | The limit behavior of inductive logic programs has not been explored, but when considering incremental or online inductive learning algorithms, which typically run continually, such behavior of the programs should be taken into account. An example is given to show that some inductive learning algorithm may not be correct in the long run if the limit behavior is not considered. An inductive logic program is convergent if given an increasing sequence of example sets, the program produces a corresponding sequence of the Horn logic programs which has the set-theoretic limit, and is limit-correct if the limit of the produced sequence of the Horn logic programs is correct with respect to the limit of the sequence of the example sets. It is shown that the GOLEM system is not limit-correct. Finally, a limit-correct inductive logic system, called the prioritized GOLEM system, is proposed as a solution. | [
"['Shilong Ma' 'Yuefei Sui' 'Ke Xu']"
] |
null | null | 0207097 | null | null | http://arxiv.org/pdf/cs/0207097v2 | 2002-12-23T14:11:16Z | 2002-07-31T14:33:11Z | Optimal Ordered Problem Solver | We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal nonincremental universal search to build an incremental universal learner that is able to improve itself through experience. In illustrative experiments, our self-improver becomes the first general system that learns to solve all n disk Towers of Hanoi tasks (solution size 2^n-1) for n up to 30, profiting from previously solved, simpler tasks involving samples of a simple context free language. | [
"['Juergen Schmidhuber']"
] |
null | null | 0210025 | null | null | http://arxiv.org/pdf/cs/0210025v3 | 2002-11-27T00:56:43Z | 2002-10-29T00:33:26Z | An Algorithm for Pattern Discovery in Time Series | We present a new algorithm for discovering patterns in time series and other sequential data. We exhibit a reliable procedure for building the minimal set of hidden, Markovian states that is statistically capable of producing the behavior exhibited in the data -- the underlying process's causal states. Unlike conventional methods for fitting hidden Markov models (HMMs) to data, our algorithm makes no assumptions about the process's causal architecture (the number of hidden states and their transition structure), but rather infers it from the data. It starts with assumptions of minimal structure and introduces complexity only when the data demand it. Moreover, the causal states it infers have important predictive optimality properties that conventional HMM states lack. We introduce the algorithm, review the theory behind it, prove its asymptotic reliability, use large deviation theory to estimate its rate of convergence, and compare it to other algorithms which also construct HMMs from data. We also illustrate its behavior on an example process, and report selected numerical results from an implementation. | [
"['Cosma Rohilla Shalizi' 'Kristina Lisa Shalizi' 'James P. Crutchfield']"
] |
null | null | 0211003 | null | null | http://arxiv.org/pdf/cs/0211003v1 | 2002-11-01T18:09:56Z | 2002-11-01T18:09:56Z | Evaluation of the Performance of the Markov Blanket Bayesian Classifier
Algorithm | The Markov Blanket Bayesian Classifier is a recently-proposed algorithm for construction of probabilistic classifiers. This paper presents an empirical comparison of the MBBC algorithm with three other Bayesian classifiers: Naive Bayes, Tree-Augmented Naive Bayes and a general Bayesian network. All of these are implemented using the K2 framework of Cooper and Herskovits. The classifiers are compared in terms of their performance (using simple accuracy measures and ROC curves) and speed, on a range of standard benchmark data sets. It is concluded that MBBC is competitive in terms of speed and accuracy with the other algorithms considered. | [
"['Michael G. Madden']"
] |
null | null | 0211006 | null | null | http://arxiv.org/pdf/cs/0211006v1 | 2002-11-07T06:44:54Z | 2002-11-07T06:44:54Z | Maximizing the Margin in the Input Space | We propose a novel criterion for support vector machine learning: maximizing the margin in the input space, not in the feature (Hilbert) space. This criterion is a discriminative version of the principal curve proposed by Hastie et al. The criterion is appropriate in particular when the input space is already a well-designed feature space with rather small dimensionality. The definition of the margin is generalized in order to represent prior knowledge. The derived algorithm consists of two alternating steps to estimate the dual parameters. Firstly, the parameters are initialized by the original SVM. Then one set of parameters is updated by a Newton-like procedure, and the other set is updated by solving a quadratic programming problem. The algorithm converges in a few steps to a local optimum under mild conditions and it preserves the sparsity of support vectors. Although the complexity of calculating temporary variables increases, the complexity of solving the quadratic programming problem at each step does not change. It is also shown that the original SVM can be seen as a special case. We further derive a simplified algorithm which enables us to use the existing code for the original SVM. | [
"['Shotaro Akaho']"
] |
null | null | 0211007 | null | null | http://arxiv.org/pdf/cs/0211007v1 | 2002-11-07T07:21:58Z | 2002-11-07T07:21:58Z | Approximating Incomplete Kernel Matrices by the em Algorithm | In biological data, it is often the case that observed data are available only for a subset of samples. When a kernel matrix is derived from such data, we have to leave the entries for unavailable samples as missing. In this paper, we make use of a parametric model of kernel matrices, and estimate missing entries by fitting the model to existing entries. The parametric model is created as a set of spectral variants of a complete kernel matrix derived from another information source. For model fitting, we adopt the em algorithm based on the information geometry of positive definite matrices. We will report promising results on bacteria clustering experiments using two marker sequences: 16S and gyrB. | [
"['Koji Tsuda' 'Shotaro Akaho' 'Kiyoshi Asai']"
] |
null | null | 0212008 | null | null | http://arxiv.org/pdf/cs/0212008v1 | 2002-12-07T18:51:12Z | 2002-12-07T18:51:12Z | Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent
Space Alignment | Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements. | [
"['Zhenyue Zhang' 'Hongyuan Zha']"
] |
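The algorithm described in the entry above, local tangent space alignment (LTSA), is available in scikit-learn as a method of LocallyLinearEmbedding, so a demonstration takes only a few lines. The swiss-roll data and neighborhood size below are assumptions; the paper's own experiments also used face images.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A 3-D swiss roll sampled with noise from an underlying 2-D manifold.
X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# LTSA: fit a tangent space in each neighborhood, then align them globally
# through a partial eigendecomposition of the alignment matrix.
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method='ltsa')
Y = ltsa.fit_transform(X)

print(Y.shape)                     # (1500, 2) global internal coordinates
print(ltsa.reconstruction_error_)
```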
null | null | 0212011 | null | null | http://arxiv.org/pdf/cs/0212011v1 | 2002-12-08T18:52:33Z | 2002-12-08T18:52:33Z | Mining the Web for Lexical Knowledge to Improve Keyphrase Extraction:
Learning from Labeled and Unlabeled Data | Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. Good performance on this task has been obtained by approaching it as a supervised learning problem. An input document is treated as a set of candidate phrases that must be classified as either keyphrases or non-keyphrases. To classify a candidate phrase as a keyphrase, the most important features (attributes) appear to be the frequency and location of the candidate phrase in the document. Recent work has demonstrated that it is also useful to know the frequency of the candidate phrase as a manually assigned keyphrase for other documents in the same domain as the given document (e.g., the domain of computer science). Unfortunately, this keyphrase-frequency feature is domain-specific (the learning process must be repeated for each new domain) and training-intensive (good performance requires a relatively large number of training documents in the given domain, with manually assigned keyphrases). The aim of the work described here is to remove these limitations. In this paper, I introduce new features that are derived by mining lexical knowledge from a very large collection of unlabeled data, consisting of approximately 350 million Web pages without manually assigned keyphrases. I present experiments that show that the new features result in improved keyphrase extraction, although they are neither domain-specific nor training-intensive. | [
"['Peter D. Turney']"
] |
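A crude sketch of the two baseline features named in the entry above, candidate phrase frequency and location of first occurrence; candidate generation by raw n-grams and the feature encoding are assumptions (the web-mined lexical features that are the paper's contribution are not reproduced here).

```python
import re
from collections import Counter

def candidate_features(text, max_len=3):
    """Frequency and first-occurrence position for each candidate phrase."""
    words = re.findall(r"[a-z']+", text.lower())
    feats = {}
    for n in range(1, max_len + 1):
        grams = [' '.join(words[i:i + n]) for i in range(len(words) - n + 1)]
        counts = Counter(grams)
        for i, g in enumerate(grams):
            if g not in feats:     # keep the position of the first occurrence
                feats[g] = {'freq': counts[g],
                            'first_pos': i / max(len(grams), 1)}
    return feats

doc = ("Keyphrase extraction selects keyphrases from the text of a document. "
       "Keyphrase extraction is approached as supervised learning.")
feats = candidate_features(doc)
print(feats['keyphrase extraction'])   # high frequency, early first position
```

A classifier trained on such feature vectors, with human-assigned keyphrases as labels, completes the supervised pipeline the abstract describes.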
null | null | 0212012 | null | null | http://arxiv.org/pdf/cs/0212012v1 | 2002-12-08T19:06:08Z | 2002-12-08T19:06:08Z | Unsupervised Learning of Semantic Orientation from a
Hundred-Billion-Word Corpus | The evaluative character of a word is called its semantic orientation. A positive semantic orientation implies desirability (e.g., "honest", "intrepid") and a negative semantic orientation implies undesirability (e.g., "disturbing", "superfluous"). This paper introduces a simple algorithm for unsupervised learning of semantic orientation from extremely large corpora. The method involves issuing queries to a Web search engine and using pointwise mutual information to analyse the results. The algorithm is empirically evaluated using a training corpus of approximately one hundred billion words -- the subset of the Web that is indexed by the chosen search engine. Tested with 3,596 words (1,614 positive and 1,982 negative), the algorithm attains an accuracy of 80%. The 3,596 test words include adjectives, adverbs, nouns, and verbs. The accuracy is comparable with the results achieved by Hatzivassiloglou and McKeown (1997), using a complex four-stage supervised learning algorithm that is restricted to determining the semantic orientation of adjectives. | [
"['Peter D. Turney' 'Michael L. Littman']"
] |
null | null | 0212013 | null | null | http://arxiv.org/pdf/cs/0212013v1 | 2002-12-08T19:27:56Z | 2002-12-08T19:27:56Z | Learning to Extract Keyphrases from Text | Many academic journals ask their authors to provide a list of about five to fifteen key words, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a surprisingly wide variety of tasks for which keyphrases are useful, as we discuss in this paper. Recent commercial software, such as Microsoft's Word 97 and Verity's Search 97, includes algorithms that automatically extract keyphrases from documents. In this paper, we approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for this task. The third set of experiments examines the performance of GenEx on the task of metadata generation, relative to the performance of Microsoft's Word 97. The fourth and final set of experiments investigates the performance of GenEx on the task of highlighting, relative to Verity's Search 97. The experimental results support the claim that a specialized learning algorithm (GenEx) can generate better keyphrases than a general-purpose learning algorithm (C4.5) and the non-learning algorithms that are used in commercial software (Word 97 and Search 97). | [
"['Peter D. Turney']"
] |
null | null | 0212014 | null | null | http://arxiv.org/pdf/cs/0212014v1 | 2002-12-08T19:40:42Z | 2002-12-08T19:40:42Z | Extraction of Keyphrases from Text: Evaluation of Four Algorithms | This report presents an empirical evaluation of four algorithms for automatically extracting keywords and keyphrases from documents. The four algorithms are compared using five different collections of documents. For each document, we have a target set of keyphrases, which were generated by hand. The target keyphrases were generated for human readers; they were not tailored for any of the four keyphrase extraction algorithms. Each of the algorithms was evaluated by the degree to which the algorithm's keyphrases matched the manually generated keyphrases. The four algorithms were (1) the AutoSummarize feature in Microsoft's Word 97, (2) an algorithm based on Eric Brill's part-of-speech tagger, (3) the Summarize feature in Verity's Search 97, and (4) NRC's Extractor algorithm. For all five document collections, NRC's Extractor yields the best match with the manually generated keyphrases. | [
"['Peter D. Turney']"
] |
null | null | 0212020 | null | null | http://arxiv.org/pdf/cs/0212020v1 | 2002-12-10T15:30:56Z | 2002-12-10T15:30:56Z | Learning Algorithms for Keyphrase Extraction | Many academic journals ask their authors to provide a list of about five to fifteen keywords, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a wide variety of tasks for which keyphrases are useful, as we discuss in this paper. We approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. We evaluate the performance of nine different configurations of C4.5. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for automatically extracting keyphrases from text. The experimental results support the claim that a custom-designed algorithm (GenEx), incorporating specialized procedural domain knowledge, can generate better keyphrases than a general-purpose algorithm (C4.5). Subjective human evaluation of the keyphrases generated by Extractor suggests that about 80% of the keyphrases are acceptable to human readers. This level of performance should be satisfactory for a wide variety of applications. | [
"['Peter D. Turney']"
] |
null | null | 0212023 | null | null | http://arxiv.org/pdf/cs/0212023v1 | 2002-12-10T18:19:54Z | 2002-12-10T18:19:54Z | How to Shift Bias: Lessons from the Baldwin Effect | An inductive learning algorithm takes a set of data as input and generates a hypothesis as output. A set of data is typically consistent with an infinite number of hypotheses; therefore, there must be factors other than the data that determine the output of the learning algorithm. In machine learning, these other factors are called the bias of the learner. Classical learning algorithms have a fixed bias, implicit in their design. Recently developed learning algorithms dynamically adjust their bias as they search for a hypothesis. Algorithms that shift bias in this manner are not as well understood as classical algorithms. In this paper, we show that the Baldwin effect has implications for the design and analysis of bias shifting algorithms. The Baldwin effect was proposed in 1896, to explain how phenomena that might appear to require Lamarckian evolution (inheritance of acquired characteristics) can arise from purely Darwinian evolution. Hinton and Nowlan presented a computational model of the Baldwin effect in 1987. We explore a variation on their model, which we constructed explicitly to illustrate the lessons that the Baldwin effect has for research in bias shifting algorithms. The main lesson is that it appears that a good strategy for shift of bias in a learning algorithm is to begin with a weak bias and gradually shift to a strong bias. | [
"['Peter D. Turney']"
] |
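A compact variation of the Hinton and Nowlan model discussed above, under assumed constants: genomes mix hard-wired '0'/'1' alleles with learnable '?' alleles, lifetime learning guesses the '?' loci, and earlier success yields higher fitness. Over the generations, selection tends to replace '?' with correct hard-wired alleles, which is the shift from weak to strong bias the paper draws its lesson from.

```python
import random

random.seed(7)
L, POP, TRIALS, GENS = 12, 200, 100, 50

def fitness(genome):
    """Lifetime learning: each trial guesses the '?' loci; earlier is fitter."""
    if '0' in genome:
        return 1.0                     # a wrong hard-wired allele is fatal
    unknown = genome.count('?')
    for t in range(TRIALS):
        if random.random() < 0.5 ** unknown:   # all guesses correct at once
            return 1.0 + 19.0 * (TRIALS - t) / TRIALS
    return 1.0

# Alleles: '?' is learnable (p = 1/2), '0'/'1' are hard-wired (p = 1/4 each).
pop = [''.join(random.choice('01??') for _ in range(L)) for _ in range(POP)]

for g in range(GENS):
    ranked = sorted(pop, key=fitness, reverse=True)
    parents = ranked[:POP // 2]
    pop = []
    while len(pop) < POP:              # one-point crossover, no mutation
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, L)
        pop.append(a[:cut] + b[cut:])
    if g % 10 == 0:
        q = sum(s.count('?') for s in pop) / (POP * L)
        ones = sum(s.count('1') for s in pop) / (POP * L)
        print(f"gen {g:2d}: '?' {q:.2f}  '1' {ones:.2f}")
```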
null | null | 0212024 | null | null | http://arxiv.org/pdf/cs/0212024v1 | 2002-12-10T21:59:15Z | 2002-12-10T21:59:15Z | Unsupervised Language Acquisition: Theory and Practice | In this thesis I present various algorithms for the unsupervised machine learning of aspects of natural languages using a variety of statistical models. The scientific object of the work is to examine the validity of the so-called Argument from the Poverty of the Stimulus advanced in favour of the proposition that humans have language-specific innate knowledge. I start by examining an a priori argument based on Gold's theorem, which purports to prove that natural languages cannot be learned, and some formal issues related to the choice of statistical grammars rather than symbolic grammars. I present three novel algorithms for learning various parts of natural languages: first, an algorithm for the induction of syntactic categories from unlabelled text using distributional information, that can deal with ambiguous and rare words; secondly, a set of algorithms for learning morphological processes in a variety of languages, including languages such as Arabic with non-concatenative morphology; thirdly an algorithm for the unsupervised induction of a context-free grammar from tagged text. I carefully examine the interaction between the various components, and show how these algorithms can form the basis for an empiricist model of language acquisition. I therefore conclude that the Argument from the Poverty of the Stimulus is unsupported by the evidence. | [
"['Alexander Clark']"
] |
null | null | 0212028 | null | null | http://arxiv.org/pdf/cs/0212028v1 | 2002-12-11T15:50:41Z | 2002-12-11T15:50:41Z | Technical Note: Bias and the Quantification of Stability | Research on bias in machine learning algorithms has generally been concerned with the impact of bias on predictive accuracy. We believe that there are other factors that should also play a role in the evaluation of bias. One such factor is the stability of the algorithm; in other words, the repeatability of the results. If we obtain two sets of data from the same phenomenon, with the same underlying probability distribution, then we would like our learning algorithm to induce approximately the same concepts from both sets of data. This paper introduces a method for quantifying stability, based on a measure of the agreement between concepts. We also discuss the relationships among stability, predictive accuracy, and bias. | [
"['Peter D. Turney']"
] |
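A bare-bones version of the stability measurement described above: induce concepts from two data sets drawn from the same phenomenon and measure their agreement on fresh data. The raw agreement rate used here is an assumption; the paper develops a more careful measure of agreement between concepts.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, random_state=0)
# Two training sets from the same phenomenon, plus fresh evaluation data.
X1, y1 = X[:1000], y[:1000]
X2, y2 = X[1000:2000], y[1000:2000]
Xe = X[2000:]

c1 = DecisionTreeClassifier(random_state=0).fit(X1, y1)
c2 = DecisionTreeClassifier(random_state=0).fit(X2, y2)

# Stability: how often the two induced concepts agree on unseen cases.
agreement = np.mean(c1.predict(Xe) == c2.predict(Xe))
print(f"agreement between induced concepts: {agreement:.2%}")
```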
null | null | 0212029 | null | null | http://arxiv.org/pdf/cs/0212029v1 | 2002-12-11T16:08:36Z | 2002-12-11T16:08:36Z | A Theory of Cross-Validation Error | This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-based learning. | [
"['Peter D. Turney']"
] |
null | null | 0212030 | null | null | http://arxiv.org/pdf/cs/0212030v1 | 2002-12-11T17:36:00Z | 2002-12-11T17:36:00Z | Theoretical Analyses of Cross-Validation Error and Voting in
Instance-Based Learning | This paper begins with a general theory of error in cross-validation testing of algorithms for supervised learning from examples. It is assumed that the examples are described by attribute-value pairs, where the values are symbolic. Cross-validation requires a set of training examples and a set of testing examples. The value of the attribute that is to be predicted is known to the learner in the training set, but unknown in the testing set. The theory demonstrates that cross-validation error has two components: error on the training set (inaccuracy) and sensitivity to noise (instability). This general theory is then applied to voting in instance-based learning. Given an example in the testing set, a typical instance-based learning algorithm predicts the designated attribute by voting among the k nearest neighbors (the k most similar examples) to the testing example in the training set. Voting is intended to increase the stability (resistance to noise) of instance-based learning, but a theoretical analysis shows that there are circumstances in which voting can be destabilizing. The theory suggests ways to minimize cross-validation error, by insuring that voting is stable and does not adversely affect accuracy. | [
"['Peter D. Turney']"
] |
null | null | 0212031 | null | null | http://arxiv.org/pdf/cs/0212031v1 | 2002-12-11T18:30:59Z | 2002-12-11T18:30:59Z | Contextual Normalization Applied to Aircraft Gas Turbine Engine
Diagnosis | Diagnosing faults in aircraft gas turbine engines is a complex problem. It involves several tasks, including rapid and accurate interpretation of patterns in engine sensor data. We have investigated contextual normalization for the development of a software tool to help engine repair technicians with interpretation of sensor data. Contextual normalization is a new strategy for employing machine learning. It handles variation in data that is due to contextual factors, rather than the health of the engine. It does this by normalizing the data in a context-sensitive manner. This learning strategy was developed and tested using 242 observations of an aircraft gas turbine engine in a test cell, where each observation consists of roughly 12,000 numbers, gathered over a 12 second interval. There were eight classes of observations: seven deliberately implanted classes of faults and a healthy class. We compared two approaches to implementing our learning strategy: linear regression and instance-based learning. We have three main results. (1) For the given problem, instance-based learning works better than linear regression. (2) For this problem, contextual normalization works better than other common forms of normalization. (3) The algorithms described here can be the basis for a useful software tool for assisting technicians with the interpretation of sensor data. | [
"['Peter D. Turney' 'Michael Halasz']"
] |
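A sketch of contextual normalization as described above: standardize each reading within its context group rather than globally, so variation due to context does not mask variation due to engine health. The pandas encoding, the 'context' column, and the toy values are assumptions.

```python
import pandas as pd

# Toy sensor data: the same engine health looks different per weather context.
df = pd.DataFrame({
    'context': ['cold'] * 4 + ['warm'] * 4,
    'sensor':  [10.0, 11.0, 9.0, 30.0,     # 30.0 is anomalous for 'cold'
                20.0, 21.0, 19.0, 20.5],
})

# Contextual normalization: z-score within each context group.
grp = df.groupby('context')['sensor']
df['normalized'] = (df['sensor'] - grp.transform('mean')) / grp.transform('std')

print(df)   # the anomalous cold reading gets the largest normalized score
```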
null | null | 0212032 | null | null | http://arxiv.org/pdf/cs/0212032v1 | 2002-12-11T18:57:42Z | 2002-12-11T18:57:42Z | Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised
Classification of Reviews | This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews. | [
"['Peter D. Turney']"
] |
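The classification rule in the entry above is explicit enough to sketch directly. The hits() function below is a hypothetical stand-in for the search-engine queries (the paper issued AltaVista NEAR queries); the counts and the smoothing constant are invented for illustration.

```python
import math

def so_pmi(phrase, hits):
    """SO(phrase) = log2 [hits(phrase NEAR excellent) * hits(poor)
                          / (hits(phrase NEAR poor) * hits(excellent))]."""
    num = hits(phrase, 'excellent') * hits('poor')
    den = hits(phrase, 'poor') * hits('excellent')
    return math.log2((num + 0.01) / (den + 0.01))   # smoothing for zero hits

# Hypothetical hit counts standing in for search-engine query results.
COUNTS = {('subtle nuances', 'excellent'): 120, ('subtle nuances', 'poor'): 10,
          ('very cavalier', 'excellent'): 8,   ('very cavalier', 'poor'): 90,
          ('excellent',): 1_000_000, ('poor',): 800_000}

def hits(*terms):
    return COUNTS.get(terms, 1)

for phrase in ('subtle nuances', 'very cavalier'):
    print(phrase, round(so_pmi(phrase, hits), 2))
```

A review is then classified as recommended when the average of these scores over its extracted phrases is positive, which is exactly the decision rule the abstract states.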
null | null | 0212033 | null | null | http://arxiv.org/pdf/cs/0212033v1 | 2002-12-11T19:17:06Z | 2002-12-11T19:17:06Z | Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL | This paper presents a simple unsupervised learning algorithm for recognizing synonyms, based on statistical data acquired by querying a Web search engine. The algorithm, called PMI-IR, uses Pointwise Mutual Information (PMI) and Information Retrieval (IR) to measure the similarity of pairs of words. PMI-IR is empirically evaluated using 80 synonym test questions from the Test of English as a Foreign Language (TOEFL) and 50 synonym test questions from a collection of tests for students of English as a Second Language (ESL). On both tests, the algorithm obtains a score of 74%. PMI-IR is contrasted with Latent Semantic Analysis (LSA), which achieves a score of 64% on the same 80 TOEFL questions. The paper discusses potential applications of the new unsupervised learning algorithm and some implications of the results for LSA and LSI (Latent Semantic Indexing). | [
"['Peter D. Turney']"
] |
null | null | 0212034 | null | null | http://arxiv.org/pdf/cs/0212034v1 | 2002-12-11T19:42:14Z | 2002-12-11T19:42:14Z | Types of Cost in Inductive Concept Learning | Inductive concept learning is the task of learning to assign cases to a discrete set of classes. In real-world applications of concept learning, there are many different types of cost involved. The majority of the machine learning literature ignores all types of cost (unless accuracy is interpreted as a type of cost measure). A few papers have investigated the cost of misclassification errors. Very few papers have examined the many other types of cost. In this paper, we attempt to create a taxonomy of the different types of cost that are involved in inductive concept learning. This taxonomy may help to organize the literature on cost-sensitive learning. We hope that it will inspire researchers to investigate all types of cost in inductive concept learning in more depth. | [
"['Peter D. Turney']"
] |
null | null | 0212035 | null | null | http://arxiv.org/pdf/cs/0212035v1 | 2002-12-12T19:40:50Z | 2002-12-12T19:40:50Z | Exploiting Context When Learning to Classify | This paper addresses the problem of classifying observations when features are context-sensitive, specifically when the testing set involves a context that is different from the training set. The paper begins with a precise definition of the problem, then general strategies are presented for enhancing the performance of classification algorithms on this type of problem. These strategies are tested on two domains. The first domain is the diagnosis of gas turbine engines. The problem is to diagnose a faulty engine in one context, such as warm weather, when the fault has previously been seen only in another context, such as cold weather. The second domain is speech recognition. The problem is to recognize words spoken by a new speaker, not represented in the training set. For both domains, exploiting context results in substantially more accurate classification. | [
"['Peter D. Turney']"
] |
null | null | 0212036 | null | null | http://arxiv.org/pdf/cs/0212036v1 | 2002-12-11T21:34:18Z | 2002-12-11T21:34:18Z | Myths and Legends of the Baldwin Effect | This position paper argues that the Baldwin effect is widely misunderstood by the evolutionary computation community. The misunderstandings appear to fall into two general categories. Firstly, it is commonly believed that the Baldwin effect is concerned with the synergy that results when there is an evolving population of learning individuals. This is only half of the story. The full story is more complicated and more interesting. The Baldwin effect is concerned with the costs and benefits of lifetime learning by individuals in an evolving population. Several researchers have focussed exclusively on the benefits, but there is much to be gained from attention to the costs. This paper explains the two sides of the story and enumerates ten of the costs and benefits of lifetime learning by individuals in an evolving population. Secondly, there is a cluster of misunderstandings about the relationship between the Baldwin effect and Lamarckian inheritance of acquired characteristics. The Baldwin effect is not Lamarckian. A Lamarckian algorithm is not better for most evolutionary computing problems than a Baldwinian algorithm. Finally, Lamarckian inheritance is not a better model of memetic (cultural) evolution than the Baldwin effect. | [
"['Peter D. Turney']"
] |
null | null | 0212037 | null | null | http://arxiv.org/pdf/cs/0212037v1 | 2002-12-12T18:14:38Z | 2002-12-12T18:14:38Z | The Management of Context-Sensitive Features: A Review of Strategies | In this paper, we review five heuristic strategies for handling context-sensitive features in supervised machine learning from examples. We discuss two methods for recovering lost (implicit) contextual information. We mention some evidence that hybrid strategies can have a synergetic effect. We then show how the work of several machine learning researchers fits into this framework. While we do not claim that these strategies exhaust the possibilities, it appears that the framework includes all of the techniques that can be found in the published literature on context-sensitive learning. | [
"['Peter D. Turney']"
] |
null | null | 0212038 | null | null | http://arxiv.org/pdf/cs/0212038v1 | 2002-12-12T18:29:02Z | 2002-12-12T18:29:02Z | The Identification of Context-Sensitive Features: A Formal Definition of
Context for Concept Learning | A large body of research in machine learning is concerned with supervised learning from examples. The examples are typically represented as vectors in a multi-dimensional feature space (also known as attribute-value descriptions). A teacher partitions a set of training examples into a finite number of classes. The task of the learning algorithm is to induce a concept from the training examples. In this paper, we formally distinguish three types of features: primary, contextual, and irrelevant features. We also formally define what it means for one feature to be context-sensitive to another feature. Context-sensitive features complicate the task of the learner and potentially impair the learner's performance. Our formal definitions make it possible for a learner to automatically identify context-sensitive features. After context-sensitive features have been identified, there are several strategies that the learner can employ for managing the features; however, a discussion of these strategies is outside of the scope of this paper. The formal definitions presented here correct a flaw in previously proposed definitions. We discuss the relationship between our work and a formal definition of relevance. | [
"['Peter D. Turney']"
] |