841
Proving a binary heap has $\lceil n/2 \rceil$ leaves
<p>I'm trying to prove that a <a href="http://en.wikipedia.org/wiki/Binary_heap">binary heap</a> with $n$ nodes has exactly $\left\lceil \frac{n}{2} \right\rceil$ leaves, given that the heap is built in the following way:</p>&#xA;&#xA;<p>Each new node is inserted via <a href="http://en.wikipedia.org/wiki/Binary_heap#Insert">percolate up</a>. This means that each new node must be created at the next available child. What I mean by this is that children are filled level-down, and left to right. For example, the following heap:</p>&#xA;&#xA;<pre><code> 0&#xA; / \&#xA; 1 2&#xA;</code></pre>&#xA;&#xA;<p>would <b>have</b> to have been built in this order: 0, 1, 2. (The numbers are just indices; they give no indication of the actual data held in that node.) </p>&#xA;&#xA;<p>This has two important implications:</p>&#xA;&#xA;<ol>&#xA;<li><p>There can exist no node on level $k+1$ without level $k$ being completely filled</p></li>&#xA;<li><p>Because children are built left to right, there can be no "empty spaces" between the nodes on level $k+1$, or situations like the following: </p>&#xA;&#xA;<pre><code> 0&#xA; / \&#xA; 1 2&#xA; / \ \&#xA;3 4 6&#xA;</code></pre></li>&#xA;</ol>&#xA;&#xA;<p>(This would be an illegal heap by my definition.) Thus, a good way to think of this heap is an <a href="http://en.wikipedia.org/wiki/Binary_heap#Heap_implementation">array implementation</a> of a heap, where there can't be any "jumps" in indices of the array.</p>&#xA;&#xA;<p>So, I was thinking induction would probably be a good way to do this... Perhaps something having to deal with even and odd cases for $n$. For example, some induction using the fact that heaps built in this fashion must have exactly one internal node with a single child when $n$ is even, and no such node when $n$ is odd. Ideas?</p>&#xA;
data structures binary trees
1
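The leaf-count claim in the question above is easy to sanity-check empirically. In a 1-indexed array representation of such a heap, node $i$ has a child iff $2i \le n$, so the leaves are exactly the nodes with $2i > n$. A quick Python check of the $\lceil n/2 \rceil$ formula (illustrative evidence only, not a proof):

```python
import math

def leaf_count(n: int) -> int:
    # In a 1-indexed array heap with n nodes, node i has a child iff 2*i <= n,
    # so the leaves are exactly the nodes i with 2*i > n.
    return sum(1 for i in range(1, n + 1) if 2 * i > n)

# Check the claimed formula ceil(n/2) for small heaps.
for n in range(1, 1000):
    assert leaf_count(n) == math.ceil(n / 2)
```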
842
Can a dynamic language like Ruby/Python reach C/C++ like performance?
<p>I wonder whether it is possible to build compilers for dynamic languages like Ruby that have performance comparable to C/C++. From what I understand about compilers, take Ruby for instance: compiling Ruby code can't ever be efficient, because the way Ruby handles reflection, features such as automatic type conversion from integer to big integer, and the lack of static typing make building an efficient compiler for Ruby extremely difficult.</p>&#xA;&#xA;<p>Is it possible to build a compiler that can compile Ruby or any other dynamic language to a binary that performs very close to C/C++? Is there a fundamental reason why JIT compilers, such as PyPy/Rubinius, will eventually or will never match C/C++ in performance?</p>&#xA;&#xA;<p>Note: I do understand that “performance” can be vague, so to clear that up, I mean: if you can do X in C/C++ with performance Y, can you do X in Ruby/Python with performance close to Y? Where X is everything from device drivers and OS code to web applications.</p>&#xA;
programming languages compilers
0
850
Is the shadow ray in a Whitted ray tracer occluded by transparent objects?
<p>In a Whitted ray tracer, each ray-object intersection spawns a transmitted ray (if the object was translucent), a reflected ray and a shadow ray. The shadow ray contributes the direct lighting component.</p>&#xA;&#xA;<p>But what happens if the shadow ray intersects a transparent object? Is the direct lighting component ignored? How will diffuse objects submerged in water be lit if they don't get any direct light contributions from the shadow ray?</p>&#xA;
graphics
1
851
Polymorphism and Inductive datatypes
<p>I'm curious. I've been working on this datatype in <em>OCaml</em>:</p>&#xA;&#xA;<pre><code>type 'a exptree =&#xA; | Epsilon&#xA; | Delta of 'a exptree * 'a exptree&#xA; | Omicron of 'a&#xA; | Iota of 'a exptree exptree&#xA;</code></pre>&#xA;&#xA;<p>Which can be manipulated using explicitly typed recursive functions (a feature that has been added quite recently). Example:</p>&#xA;&#xA;<pre><code>let rec map : 'a 'b. ('a -&gt; 'b) -&gt; 'a exptree -&gt; 'b exptree =&#xA; fun f -&gt;&#xA; begin function&#xA; | Epsilon -&gt; Epsilon&#xA; | Delta (t1, t2) -&gt; Delta (map f t1, map f t2)&#xA; | Omicron t -&gt; Omicron (f t)&#xA; | Iota tt -&gt; Iota (map (map f) tt)&#xA; end&#xA;</code></pre>&#xA;&#xA;<p>But I've never been able to define it in <em>Coq</em>:</p>&#xA;&#xA;<pre><code>Inductive exptree a :=&#xA; | epsilon : exptree a&#xA; | delta : exptree a -&gt; exptree a -&gt; exptree a&#xA; | omicron : a -&gt; exptree a&#xA; | iota : exptree (exptree a) -&gt; exptree a&#xA;.&#xA;</code></pre>&#xA;&#xA;<p><em>Coq</em> is whining. 
It doesn't like the last constructor, and says something I don't completely understand or agree with:</p>&#xA;&#xA;<pre><code>Error: Non strictly positive occurrence of "exptree" in "exptree (exptree a) -&gt; exptree a".&#xA;</code></pre>&#xA;&#xA;<p>What I can understand is that inductive types using a negation inside their definition like <code>type 'a term = Constructor ('a term -&gt; …)</code> are rejected, because they would lead to ugly non well-founded beasts like (untyped) λ-terms.&#xA;However this particular <code>exptree</code> datatype seems innocuous enough: looking at its <em>OCaml</em> definition, its argument <code>'a</code> is never used in negative positions.</p>&#xA;&#xA;<p>It seems that <em>Coq</em> is overcautious here.&#xA;So is there really a problem with this particular inductive datatype?&#xA;Or could <em>Coq</em> be slightly more permissive here?</p>&#xA;&#xA;<p>Also, what about other proof assistants, are they able to cope with such an inductive definition (in a natural way)?</p>&#xA;
logic programming languages coq inductive datatypes
1
857
How to fool the plot inspection heuristic?
<p>Over <a href="https://cs.stackexchange.com/a/825/98">here</a>, Dave Clarke proposed that in order to compare asymptotic growth you should plot the functions at hand. As a theoretically inclined computer scientist, I call(ed) this voodoo, as a plot is never a proof. On second thought, I have to agree that this is a very useful approach that is even sometimes underused; a plot is an efficient way to get first ideas, and sometimes that is all you need.</p>&#xA;&#xA;<p>When teaching TCS, there is always the student who asks: "What do I need formal proof for if I can just do X, which always works?" It is up to their teacher(s) to point out and illustrate the fallacy. There is a brilliant set of examples of <a href="https://math.stackexchange.com/q/111440/3330">apparent patterns that eventually fail</a> over at math.SE, but those are fairly mathematical scenarios.</p>&#xA;&#xA;<p>So, how do you fool the plot inspection heuristic? There are some cases where differences are hard to tell apart, e.g.</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/cXBip.png" alt="example">&#xA;<img src="https://i.stack.imgur.com/tFr3Z.png" alt="example">&#xA;<img src="https://i.stack.imgur.com/qzOHT.png" alt="example"><br>&#xA;<sup>[<a href="https://github.com/akerbos/sesketches/blob/gh-pages/src/cs_857.gnuplot" rel="noreferrer">source</a>]</sup></p>&#xA;&#xA;<p>Make a guess, and then check the source for the real functions. But those are not as spectacular as I would hope for, in particular because the real relations are easy to spot from the functions alone, even for a beginner.</p>&#xA;&#xA;<p>Are there examples of (relative) asymptotic growth where the truth is not obvious from the function definition, and plot inspection for reasonably large $n$ gives you a completely wrong idea? Mathematical functions and real data sets (e.g. runtime of a specific algorithm) are both welcome; please refrain from piecewise defined functions, though.</p>&#xA;
asymptotics didactics
1
864
Reducing directed hamiltonian cycle to graph coloring
<p>The 3-SAT problem can be reduced to both the graph coloring and the directed hamiltonian cycle problem, but is there any chain of reductions which reduce directed hamiltonian cycle to graph coloring in polynomial time?</p>&#xA;
complexity theory np complete reductions
1
868
Types of Automated Theorem Provers
<p><sup><em>I am learning <a href="http://en.wikipedia.org/wiki/Automated_theorem_proving" rel="nofollow noreferrer">Automated Theorem Proving</a> / <a href="http://en.wikipedia.org/wiki/Satisfiability_Modulo_Theories" rel="nofollow noreferrer">SMT solvers</a> / <a href="http://en.wikipedia.org/wiki/Proof_assistant" rel="nofollow noreferrer">Proof Assistants</a> by myself and post a series of questions about the process, starting <a href="https://cs.stackexchange.com/questions/820/learning-automated-theorem-proving">here</a>.</em></sup></p>&#xA;&#xA;<p>Which are the relevant automated theorem provers? I found <a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;frm=1&amp;source=web&amp;cd=6&amp;ved=0CGgQFjAF&amp;url=http%3A%2F%2Fwww.cs.cornell.edu%2FNuprl%2FPRLSeminar%2FPRLSeminar01_02%2FNogin%2FPRLseminar7b.pdf&amp;ei=Nkx0T-XMMqjc0QGv-Nj_Ag&amp;usg=AFQjCNGslr0mgMKpFQg1NdtEmA-BxY-eTA" rel="nofollow noreferrer">A Review of Theorem Provers</a></p>&#xA;&#xA;<p>Is this still current?</p>&#xA;&#xA;<p>Which ones are still very active, i.e. which are currently used beyond the group that created it?</p>&#xA;&#xA;<p><sup><em>Find the series' next question <a href="https://cs.stackexchange.com/questions/879/why-do-some-inference-engines-need-human-assistance-while-others-dont">here</a>.</em></sup></p>&#xA;
logic automated theorem proving proof assistants
1
878
Issue in comparing classifiers for pattern recognition
<p>I have designed a classifier M which recognizes gestures and always classifies each one under some category. A gesture is classified based on the Hamming distance between the sample time series y and the training time series x. The results of the classifier are probabilistic values. There are 3 classes/categories with labels A, B, C for hand gestures, where there are 100 samples for each class which are to be classified (single feature and data length = 100). The data are different time series (x coordinate vs time). The training set is used to assign probabilities indicating which gesture has occurred how many times. So, out of 10 training samples, if gesture A appeared 6 times, then the probability that a gesture falls under category A is </p>&#xA;&#xA;<blockquote>&#xA; <p>P(A)=0.6&#xA; similarly &#xA; P(B)=0.3</p>&#xA;</blockquote>&#xA;&#xA;<p>and</p>&#xA;&#xA;<blockquote>&#xA; <p>P(C)=0.1</p>&#xA;</blockquote>&#xA;&#xA;<p>Now, I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, principal component analysis (PCA) and a neural network. </p>&#xA;&#xA;<ol>&#xA;<li>On what basis, parameters and method should I compare them if I consider ROC or cross-validation? Since the features for my classifier are the probabilistic values for the ROC plot, what should the features be for k-NN, Bayes classification and PCA?</li>&#xA;<li>Is there code for this that would be useful?</li>&#xA;<li>What should the value of k be if there are 3 classes of gestures?</li>&#xA;</ol>&#xA;&#xA;<p>Please help. I am in a fix. </p>&#xA;
machine learning pattern recognition
1
879
Why do some inference engines need human assistance while others don't?
<p><sup><em>I am learning <a href="http://en.wikipedia.org/wiki/Automated_theorem_proving" rel="nofollow noreferrer">Automated Theorem Proving</a> / <a href="http://en.wikipedia.org/wiki/Satisfiability_Modulo_Theories" rel="nofollow noreferrer">SMT solvers</a> / <a href="http://en.wikipedia.org/wiki/Proof_assistant" rel="nofollow noreferrer">Proof Assistants</a> by myself and post a series of questions about the process, starting <a href="https://cs.stackexchange.com/questions/820/learning-automated-theorem-proving">here</a>.</em></sup></p>&#xA;&#xA;<p>Why is it that automated theorem provers, i.e. <a href="http://en.wikipedia.org/wiki/ACL2" rel="nofollow noreferrer">ACL2</a>, and SMT solvers do not need human assistance while proof assistants, i.e. <a href="http://en.wikipedia.org/wiki/Isabelle_%28theorem_prover%29" rel="nofollow noreferrer">Isabelle</a> and <a href="http://en.wikipedia.org/wiki/Coq" rel="nofollow noreferrer">Coq</a>, do?</p>&#xA;&#xA;<p><sup><em>Find the series' next question <a href="https://cs.stackexchange.com/questions/882/why-is-unification-so-important-to-inference-engines">here</a>.</em></sup></p>&#xA;
logic proof assistants automated theorem proving smt solvers
1
880
Priority queue with both decrease-key and increase-key operations
<p>A <a href="http://en.wikipedia.org/wiki/Fibonacci_heap#Summary_of_running_times">Fibonacci heap</a> supports the following operations:</p>&#xA;&#xA;<ul>&#xA;<li><code>insert(key, data)</code> : adds a new element to the data structure</li>&#xA;<li><code>find-min()</code> : returns a pointer to the element with minimum key</li>&#xA;<li><code>delete-min()</code> : removes the element with minimum key</li>&#xA;<li><code>delete(node)</code> : deletes the element pointed to by <code>node</code></li>&#xA;<li><code>decrease-key(node)</code> : decreases the key of the element pointed to by <code>node</code></li>&#xA;</ul>&#xA;&#xA;<p>All non-delete operations are $O(1)$ (amortized) time, and the delete operations are $O(\log n)$ amortized time.</p>&#xA;&#xA;<p>Are there any implementations of a priority queue which also support <code>increase-key(node)</code> in $O(1)$ (amortized) time?</p>&#xA;
data structures priority queues
1
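For context on the question above: in practice both key updates can be supported in $O(\log n)$ amortized time with a binary heap plus lazy invalidation; the open part of the question is getting increase-key down to $O(1)$. A Python sketch of the lazy-invalidation baseline (the class and method names are my own, not from any particular library):

```python
import heapq
import itertools

class KeyedPQ:
    """Priority queue with both decrease-key and increase-key via lazy
    invalidation. Both key updates cost O(log n) amortized -- this does NOT
    achieve the O(1) increase-key asked about; it is just a practical baseline.
    """
    def __init__(self):
        self._heap = []            # entries: [key, seq, item]
        self._entry = {}           # item -> its live heap entry
        self._seq = itertools.count()  # tie-breaker so items never compare

    def insert(self, item, key):
        entry = [key, next(self._seq), item]
        self._entry[item] = entry
        heapq.heappush(self._heap, entry)

    def update_key(self, item, key):
        # Works for both increases and decreases: tombstone the old entry
        # and push a fresh one; dead entries are skipped on pop.
        old = self._entry.pop(item)
        old[-1] = None
        self.insert(item, key)

    def delete_min(self):
        while True:
            key, _, item = heapq.heappop(self._heap)
            if item is not None:   # skip tombstones
                del self._entry[item]
                return item, key
```

Usage: after `pq.insert("a", 5)` and `pq.update_key("a", 1)`, `pq.delete_min()` yields `("a", 1)`.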
882
Why is unification so important to inference engines?
<p><sup><em>I am learning <a href="http://en.wikipedia.org/wiki/Automated_theorem_proving" rel="noreferrer">Automated Theorem Proving</a> / <a href="http://en.wikipedia.org/wiki/Satisfiability_Modulo_Theories" rel="noreferrer">SMT solvers</a> / <a href="http://en.wikipedia.org/wiki/Proof_assistant" rel="noreferrer">Proof Assistants</a> by myself and post a series of questions about the process, starting <a href="https://cs.stackexchange.com/questions/820/learning-automated-theorem-proving">here</a>.</em></sup></p>&#xA;&#xA;<p>I keep reading about the <a href="http://en.wikipedia.org/wiki/Unification_(computer_science)" rel="noreferrer">Unification Algorithm</a>. </p>&#xA;&#xA;<ul>&#xA;<li>What is it and why is so important to <a href="http://en.wikipedia.org/wiki/Inference_engine" rel="noreferrer">Inference Engines</a>?</li>&#xA;<li>Why is it so important to Computer Science?</li>&#xA;</ul>&#xA;
logic proof assistants automated theorem proving smt solvers unification
1
888
Computer Science Book for Young Adults
<p>What is a good beginner computer science book for a young adult, say, a 15 year old? I want to get started in CS, but have no idea where to start. I have limited experience in programming.</p>&#xA;
education reference request
0
897
Does spanning tree make sense for DAG?
<p>Why can't I find any information about spanning trees for DAGs? I must be misunderstanding something.</p>&#xA;
graphs spanning trees
1
899
Why is using a lexer/parser on binary data so wrong?
<p>I often work with <a href="http://en.wikipedia.org/wiki/Lexical_analysis">lexers</a>/<a href="http://en.wikipedia.org/wiki/Parsing">parsers</a>, as opposed to parser combinators, and I see people who never took a class in parsing ask about parsing binary data. Typically the data is not only binary but also context-sensitive. This basically leads to having only one type of token, a token per byte. </p>&#xA;&#xA;<p>Can someone explain why parsing binary data with a lexer/parser is so wrong, with enough clarity for a CS student who hasn't taken a parsing class, but with a footing in theory?</p>&#xA;
programming languages compilers parsers
1
909
Knapsack problem -- NP-complete despite dynamic programming solution?
<p>Knapsack problems are easily solved by dynamic programming. Dynamic programming runs in polynomial time; that is why we do it, right?</p>&#xA;&#xA;<p>I have read it is actually an NP-complete problem, though, which would mean that solving the problem in polynomial time is probably impossible.</p>&#xA;&#xA;<p>Where is my mistake?</p>&#xA;
complexity theory np complete dynamic programming
1
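The crux of the question above is that the standard dynamic program is pseudo-polynomial: its running time is polynomial in the numeric value of the capacity, but exponential in the number of bits used to encode that capacity in the input. A sketch of the usual 0/1 knapsack DP, to make the cost explicit:

```python
def knapsack(values, weights, capacity):
    # dp[w] = best total value achievable with total weight <= w.
    # Runs in O(n * capacity) time: polynomial in the *magnitude* of
    # capacity, but exponential in its encoding length (pseudo-polynomial).
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        for w in range(capacity, wt - 1, -1):  # reverse scan: each item used once
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]
```

For example, `knapsack([60, 100, 120], [10, 20, 30], 50)` returns `220`: doubling the bit-length of `capacity` squares the table size, which is why this does not contradict NP-completeness.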
915
Solving Recurrences via Characteristic Polynomial with Complex Roots
<p>In algorithm analysis you often have to solve recurrences. In addition to the Master Theorem, substitution and iteration methods, there is one using <em>characteristic polynomials</em>.</p>&#xA;&#xA;<p>Say I have concluded that a characteristic polynomial $x^2 - 2x + 2$ has <em>complex</em> roots, namely $x_1 = 1+i$ and $x_2 = 1-i$. Then I cannot use</p>&#xA;&#xA;<p>$\qquad c_1\cdot x_1^n + c_2\cdot x_2^n$</p>&#xA;&#xA;<p>to obtain the solution, right? How should I proceed in this case?</p>&#xA;
algorithms algorithm analysis recurrence relation
0
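On the question above: the closed form $c_1 x_1^n + c_2 x_2^n$ still holds with complex roots; the standard way to obtain a real-valued solution is to rewrite the conjugate pair in polar form and take real combinations (a sketch, with $c_1, c_2$ real constants fixed by the initial conditions):

```latex
x_{1,2} = 1 \pm i = \sqrt{2}\, e^{\pm i\pi/4}
\quad\Longrightarrow\quad
a_n = \big(\sqrt{2}\big)^{n} \left( c_1 \cos\frac{n\pi}{4} + c_2 \sin\frac{n\pi}{4} \right).
```

In general, roots $r e^{\pm i\theta}$ contribute terms $r^n \cos(n\theta)$ and $r^n \sin(n\theta)$ to the real general solution.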
923
Examples of sophisticated recursive algorithms
<p>I was explaining the famous deterministic <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm">linear-time selection algorithm</a> (median of medians algorithm) to a friend.</p>&#xA;&#xA;<p>The recursion in this algorithm (while being very simple) is quite sophisticated. There are two recursive calls, each with different parameters.</p>&#xA;&#xA;<p>I was trying to find other examples of such interesting recursive algorithms, but could not find any. All of the recursive algorithms I could come up with are either simple tail-recursions or simple divide and conquer (where the two calls are "the same").</p>&#xA;&#xA;<p>Can you give some examples of sophisticated recursion?</p>&#xA;
algorithms recursion
0
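For reference on the question above, a compact Python sketch of the median-of-medians selection it mentions; note the two recursive calls playing different roles (one picks a pivot from the group medians, one recurses into a partition of the input):

```python
def select(a, k):
    """Return the k-th smallest (0-indexed) element of a.

    Sketch of median-of-medians: two recursive calls with different
    parameters -- one on the list of group medians, one on a partition.
    """
    if len(a) <= 25:
        return sorted(a)[k]
    # Pivot = median of the medians of groups of 5 (first recursive call).
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    eq = len(a) - len(lo) - len(hi)
    # Second recursive call, on whichever side holds rank k.
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + eq:
        return pivot
    return select(hi, k - len(lo) - eq)
```

The pivot recursion runs on a list of length $\lceil n/5 \rceil$ while the partition recursion runs on at most roughly $7n/10$ elements, which is what makes the total work linear.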
930
Adding elements to a sorted array
<p>What would be the fastest way of doing this (from an algorithmic perspective, as well as a practical matter)?</p>&#xA;&#xA;<p>I was thinking something along the following lines.</p>&#xA;&#xA;<p>I could add to the end of an array and then use bubblesort as it has a best case (totally sorted array at start) that is close to this, and has linear running time (in the best case).</p>&#xA;&#xA;<p>On the other hand, if I know that I start out with a sorted array, I can use a binary search to find out the insertion point for a given element.</p>&#xA;&#xA;<p>My hunch is that the second way is nearly optimal, but I'm curious to see what is out there.</p>&#xA;&#xA;<p>How can this best be done?</p>&#xA;
algorithms efficiency arrays sorting
1
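On the question above: the binary-search approach is indeed the standard one for a plain array, but note that while the search takes $O(\log n)$ comparisons, shifting elements to make room still costs $O(n)$, so a single insertion into a contiguous array is $\Theta(n)$ worst case either way. A Python sketch using the standard-library `bisect` module:

```python
import bisect

def insert_sorted(a, x):
    # Binary search finds the insertion point in O(log n) comparisons,
    # but list.insert still shifts elements, so the overall cost is O(n).
    i = bisect.bisect_right(a, x)
    a.insert(i, x)
    return a

a = [1, 3, 4, 7]
insert_sorted(a, 5)   # a becomes [1, 3, 4, 5, 7]
```

Avoiding the linear shift requires a different structure entirely (e.g. a balanced search tree or skip list), at which point insertion drops to $O(\log n)$.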
933
Computational power of nondeterministic type-2 min-heap automata
<p>I have asked a series of questions concerning capabilities of a certain class of exotic automata which I have called <em>min-heap automata</em>; the original question, and links to others, can be found <a href="https://cs.stackexchange.com/q/110/69">here</a>.</p>&#xA;&#xA;<p>This question concerns the computational power of type-2 min-heap automata, which were suggested by Raphael as a more natural kind of computing device. The class of languages which can be accepted by such automata is a proper superset of the set of context-free languages, which leads me to my question.</p>&#xA;&#xA;<blockquote>&#xA; <p>Is $HAL_2$ (the set of languages accepted by nondeterministic type-2 min-heap automata) a proper subset of $CSL$ (the set of context-sensitive languages), or not? Note that it is already known (shown by Raphael in the linked question) that there are $CSL$ languages not in $HAL_2$.</p>&#xA;</blockquote>&#xA;&#xA;<p>This is one of the last questions I intend to ask about these automata. If a good answer can be found to these (and other) questions, my curiosity will be completely satisfied. Thanks in advance and for all the hard work so far.</p>&#xA;
formal languages automata
0
934
Computational power of nondeterministic type-1 min-heap automata with multiple heaps
<p>I have asked a series of questions concerning capabilities of a certain class of exotic automata which I have called <em>min-heap automata</em>; the original question, and links to others, can be found <a href="https://cs.stackexchange.com/q/110/69">here</a>.</p>&#xA;&#xA;<p>This question concerns the power of type-1 min-heap automata, which represent my initial idea for how these machines would operate. The class of languages which can be accepted by such automata is incomparable with the set of context-free languages (i.e., it is neither a proper subset nor a proper superset of it).</p>&#xA;&#xA;<p>Push-down automata, which possess a single stack for data storage, accept the set of context-free languages, in the same way that min-heap automata, which possess a single heap for data storage, accept the set $HAL_1$ of languages accepted by nondeterministic type-1 min-heap automata. Push-down automata with two stacks are equivalent to Turing machines in computational power; they can simulate Turing machines, and vice versa, which leads me to my question:</p>&#xA;&#xA;<blockquote>&#xA; <p>Does adding another heap to non-deterministic type-1 min-heap automata make them equivalent in terms of computing ability to Turing machines, in the sense that they are able to simulate Turing machines? If not, does it increase their computational power at all, in the sense that two-heap nondeterministic type-1 min-heap automata can accept a set of languages of which $HAL_1$ is a proper subset? If so, does adding additional heaps increase computational power, i.e., can nondeterministic min-heap automata with $k+1$ heaps accept more languages than automata with $k$ heaps, for any $k$? </p>&#xA;</blockquote>&#xA;&#xA;<p>This is one of the last questions I plan to ask about these automata; if good answers can be had for these (and other) questions, my curiosity will be completely satisfied. Thanks in advance and for all the hard work so far.</p>&#xA;
formal languages automata
1
938
From the LR(1) parsing table, can we deduce that it is also an LALR and SLR table?
<p>There is this question I read somewhere but could not answer myself.</p>&#xA;&#xA;<p>Assume I have an LR(1) parsing table. Is there any way, just by looking at it and its items, to deduce that it is also a table for LALR and SLR?</p>&#xA;
compilers parsers
1
939
Optimization version of decision problems
<p>It is known that each optimization/search problem has an equivalent decision problem. For example the shortest path problem</p>&#xA;&#xA;<blockquote>&#xA; <ul>&#xA; <li><strong>optimization/search version:</strong>&#xA; Given an undirected unweighted graph $G = (V, E)$ and two vertices $v,u\in V$, find a shortest path between $v$ and $u$.</li>&#xA; <li><strong>decision version:</strong> &#xA; Given an undirected unweighted graph $G = (V, E)$, two vertices $v,u\in V$, and a non-negative integer $k$, is there a path in $G$ between $u$ and $v$ whose length is at most $k$?</li>&#xA; </ul>&#xA;</blockquote>&#xA;&#xA;<p>In general, "Find $x^*\in X$ s.t. $f(x^*) = \min\{f(x)\mid x\in X\}$!" becomes "Is there $x\in X$ s.t. $f(x) \leq k$?". </p>&#xA;&#xA;<p>But is the reverse also true, i.e. is there an equivalent optimization problem for every decision problem? If not, what is an example of a decision problem that has no equivalent optimization problem?</p>&#xA;
complexity theory optimization search problem decision problem
0
944
Power of nondeterministic type-1 min-heap automaton with both a heap and a stack
<p>I have asked a series of questions concerning capabilities of a certain class of exotic automata which I have called min-heap automata; the original question, and links to others, can be found <a href="https://cs.stackexchange.com/q/110/69">here</a>.</p>&#xA;&#xA;<p>Two of my last questions seem to have been quickly dispatched; one completely, and the other mostly (I have edited it to make it more viable). In either event, I actually had one other question I meant to ask, and would be interested to lay this subject to rest for good and all. So here it is:</p>&#xA;&#xA;<blockquote>&#xA; <p>A two-stack PDA can simulate a Turing machine. A $k$-heap nondeterministic type-1 min-heap automaton cannot (it seems; see the linked question). What about a $k$-heap nondeterministic type-1 min-heap automaton augmented with a stack (similar to that of a PDA)? Can it simulate a Turing machine? If not, does an augmented $(k+1)$-heap nondeterministic type-1 min-heap automaton accept a class of languages which is a proper superset of languages accepted by augmented automata with only $k$ heaps?</p>&#xA;</blockquote>&#xA;&#xA;<p>Thanks, and I promise this is the last of these questions.</p>&#xA;
formal languages automata
1
945
Difference between "information" and "useful information" in algorithmic information theory
<p>According to <a href="http://en.wikipedia.org/wiki/Algorithmic_information_theory#Overview">Wikipedia</a>:</p>&#xA;&#xA;<blockquote>&#xA; <p>Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the shortest possible self-contained representation of that string.</p>&#xA;</blockquote>&#xA;&#xA;<p>What is the analogous informal definition of "useful information"? Why is "useful information" not taken as the more natural or more fundamental concept? Naively, it seems a purely random string must by definition contain zero information, so I'm trying to get my head around the fact that it is considered to have maximal information by the standard definition.</p>&#xA;
information theory terminology kolmogorov complexity
1
971
Quantum lambda calculus
<p>Classically, there are 3 popular ways to think about computation: Turing machine, circuits, and lambda-calculus (I use this as a catch all for most functional views). All 3 have been fruitful ways to think about different types of problems, and different fields use different formulation for this reason.</p>&#xA;<p>When I work with quantum computing, however, I only ever think about the circuit model. Originally, QC was defined in terms of <a href="https://cs.stackexchange.com/q/125/55">quantum Turing machines</a> but as far as I understand, this definition (although equivalent to quantum circuits if both are formulated carefully) has not been nearly as fruitful. The 3rd formulation (in terms of lambda-calculus or similar functional settings) I am completely unfamiliar with. Hence my questions:</p>&#xA;<ul>&#xA;<li><p><strong>What are useful definitions of quantum lambda-calculus (or other functional paradigms)?</strong></p>&#xA;</li>&#xA;<li><p><strong>What subfields of QIP gain deeper insight from using this formulation instead of the circuit model?</strong></p>&#xA;</li>&#xA;</ul>&#xA;<hr />&#xA;<h3>Notes</h3>&#xA;<p>I am aware that I am ignoring many other popular formalisms like cellular automata, RAM-models, etc. I exclude these mostly because I don't have experience with thinking in terms of these models classically, let alone <a href="https://cstheory.stackexchange.com/q/6932/1037">quantumly</a>.</p>&#xA;<p>I am also aware that there are popular alternatives in the quantum setting, such as measurement-based, topological, and adiabatic. I do not discuss them because I am not familiar with the classical counterparts.</p>&#xA;
lambda calculus quantum computing reference request computation models
1
974
How to prove that $n(\log_3(n))^5 = O(n^{1.2})$?
<p>This is a homework question from Udi Manber's book. Any hint would be nice :)</p>&#xA;&#xA;<p>I must show that:</p>&#xA;&#xA;<blockquote>&#xA; <p>$n(\log_3(n))^5 = O(n^{1.2})$</p>&#xA;</blockquote>&#xA;&#xA;<p>I tried using Theorem 3.1 of the book:</p>&#xA;&#xA;<blockquote>&#xA; <p>$f(n)^c = O(a^{f(n)})$ (for $c &gt; 0$, $a &gt; 1$)</p>&#xA;</blockquote>&#xA;&#xA;<p>Substituting:</p>&#xA;&#xA;<p>$(\log_3(n))^5 = O(3^{\log_3(n)}) = O(n) $</p>&#xA;&#xA;<p>but $n(\log_3(n))^5 = O(n\cdot n) = O(n^2) \ne O(n^{1.2})$</p>&#xA;&#xA;<p>Thank you for any help.</p>&#xA;
asymptotics landau notation mathematical analysis
1
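A hint for the question above (my own sketch, not from Manber's book): the bound obtained there is too coarse because the theorem was applied with the exponent base $n$ itself. Since $\log_3 n$ grows more slowly than any positive power of $n$, one can instead write, for example:

```latex
\log_3 n = O\!\left(n^{0.04}\right)
\;\Longrightarrow\;
(\log_3 n)^5 = O\!\left(n^{0.2}\right)
\;\Longrightarrow\;
n(\log_3 n)^5 = O\!\left(n \cdot n^{0.2}\right) = O\!\left(n^{1.2}\right).
```

Any exponent $\epsilon \le 0.04$ works in the first step; the only requirement is $5\epsilon \le 0.2$.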
982
"NP-complete" optimization problems
<p>I am slightly confused by some terminology I have encountered regarding the complexity of optimization problems. In an algorithms class, I had the <a href="http://en.wikipedia.org/wiki/Maximum_parsimony_%28phylogenetics%29#Problems_with_maximum_parsimony_phylogeny_estimation">large parsimony</a> problem described as NP-complete. However, I am not exactly sure what the term NP-complete means in the context of an optimization problem. Does this just mean that the corresponding decision problem is NP-complete? And does that mean that the optimization problem may in fact be harder (perhaps outside of NP)?</p>&#xA;&#xA;<p>In particular, I am concerned about the fact that while an NP-complete decision problem is polynomial time verifiable, a solution to a corresponding optimization problem does not appear to be polynomial time verifiable. Does that mean that the problem is not really in NP, or is polynomial time verifiability only a characteristic of NP decision problems?</p>&#xA;
complexity theory np complete terminology
1
983
Pattern classification: what goes into the sample?
<p>I am trying to compare the performance of a classification result with a Bayes classifier, k-NN, and principal component analysis (PCA). I have doubts regarding the following (please excuse my lack of programming skills; I am a biologist and not a programmer, and thus find the Matlab documentation hard to follow).</p>&#xA;&#xA;<p>In the Matlab code </p>&#xA;&#xA;<pre><code> Class = knnclassify(Sample, Training, Group, k)&#xA; Group = [1;2;3] % where 1,2,3 represent Classes A,B,C respectively.&#xA;</code></pre>&#xA;&#xA;<p>What goes into <code>Sample</code>, given that my data is 100 rows by 1 column for each of the classes? So Group 1 contains data like $[0.9;0.1;......n]$ where $n=100$. Would the sample be a vector containing random mixtures of the data points from the three classes? Same question for the <code>Training</code> matrix.</p>&#xA;
machine learning modelling pattern recognition
0
987
In Constraint Programming, are there any models that take into account the number of variable changes?
<p>Consider a CSP model where changing the value of a particular variable is expensive. Is there any work where the objective function also considers the number of changes in the value of the variable during the search process?</p>&#xA;&#xA;<p>An example: The expensive-to-change variable may be in the control of some other agent and there is some overhead of involving that agent to change the variable. Another example: The variable participates in one of the constraints, and the satisfaction of this constraint involves calling an expensive function (such as, a simulator), e.g. $z = f(x, y)$ is the constraint, and $f$ is an expensive-to-compute function. Therefore, $x$ and $y$ are expensive-to-change variables.</p>&#xA;
algorithms constraint programming
0
991
Are there minimum criteria for a programming language being Turing complete?
<p>Is there a minimal set of constructs that a programming language must provide in order to be considered Turing complete?</p>&#xA;&#xA;<p>From what I can tell from <a href="http://en.wikipedia.org/wiki/Turing_completeness">Wikipedia</a>, the language needs to support recursion, or, seemingly, must be able to run without halting. Is this all there is to it?</p>&#xA;
computability programming languages turing machines turing completeness
1
996
About Codd's reduction algorithm
<p><a href="http://dictionary.reference.com/browse/Codd%27s+reduction+algorithm" rel="nofollow noreferrer">Codd's Algorithm</a> converts an expression in tuple relational calculus to Relational Algebra.</p>&#xA;&#xA;<ol>&#xA;<li>Is there a standard implementation of the algorithm? </li>&#xA;<li>Is this algorithm used anywhere? (It seems that the industry only needs SQL and variants, I'm not sure about database theorists in academia.)</li>&#xA;<li>What's the complexity of the reduction?</li>&#xA;</ol>&#xA;&#xA;<p><sub> This was posted on <a href="https://stackoverflow.com/questions/4149840/about-codds-reduction-algorithm">SO</a> over a year ago, but it didn't receive a good answer. </sub> </p>&#xA;
algorithms database theory
1
998
distributed alpha beta pruning
<p>I am looking for an efficient algorithm that lets me process the minimax search tree for chess with <a href="http://en.wikipedia.org/wiki/Alpha-beta_pruning">alpha-beta pruning</a> on a distributed architecture. The algorithms I have found (PVS, YBWC, DTS; see below) are all quite old (1990 being the latest). I assume there have been many substantial advancements since then. What is the current standard in this field?</p>&#xA;&#xA;<p>Also please point me to an idiot's explanation of DTS as I can't understand it from the research papers that I have read.</p>&#xA;&#xA;<p>The algorithms mentioned above:</p>&#xA;&#xA;<ul>&#xA;<li>PVS: Principal Variation Splitting</li>&#xA;<li>YBWC: Young Brothers Wait Concept</li>&#xA;<li>DTS: Dynamic Tree Splitting</li>&#xA;</ul>&#xA;&#xA;<p>are all discussed <a href="http://chessprogramming.wikispaces.com/Parallel+Search">here</a>.</p>&#xA;
algorithms distributed systems board games
0
999
Complexity of Towers of Hanoi
<p>I ran into the following doubts on the complexity of <a href="http://en.wikipedia.org/wiki/Tower_of_Hanoi">Towers of Hanoi</a>, on which I would like your comments.</p>&#xA;&#xA;<ul>&#xA;<li><p><b>Is it in NP?</b> &#xA;Attempted answer: Suppose Peggy (prover) solves the problem &amp; submits it to Victor (verifier). Victor can easily see that the final state of the solution is right (in linear time) but he'll have no option but to go through each of Peggy's moves to make sure she didn't make an illegal move. Since Peggy has to make a minimum of $2^{|\mathrm{disks}|} - 1$ moves (provable), Victor too has to follow suit. Thus Victor has no polynomial-time verification (the definition of NP), and hence the problem can't be in NP.</p></li>&#xA;<li><p><b>Is it in PSPACE?</b> Seems so, but I can't think of how to extend the above reasoning. </p></li>&#xA;<li><p><b>Is it PSPACE-complete?</b> Seems not, but I have only a vague idea. Automated Planning, of which ToH is a specific instance, is PSPACE-complete. I think that Planning has many more hard instances than ToH. </p></li>&#xA;</ul>&#xA;&#xA;<p><b>Updated</b>: Input = $n$, the number of disks; Output = disk configuration at each step. After updating this, I realized that this input/output format doesn't fit a decision problem. I'm not sure about the right formalization to capture the notions of NP, PSPACE, etc. for this kind of problem.</p>&#xA;&#xA;<p><b>Update #2</b>: After Kaveh's and Jeff's comments, I'm forced to make the problem more precise: </p>&#xA;&#xA;<blockquote>&#xA; <p>Let the input be the pair of ints $(n,i)$ where $n$ is the number of disks. If the sequence of moves taken by the disks is written down in the format (disk-number, from-peg, to-peg)(disk-number, from-peg, to-peg)... from the first move to the last, and encoded in binary, output the $i$th bit.</p>&#xA;</blockquote>&#xA;&#xA;<p>Let me know if I need to be more specific about the encoding. I suppose Kaveh's comment applies in this case? </p>&#xA;
complexity theory time complexity towers of hanoi
1
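The $2^{|\mathrm{disks}|} - 1$ move count used in the argument above can be checked directly by generating an optimal move sequence; here is a minimal Python sketch (the function name and tuple format are my own):

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the optimal move sequence as (disk, from_peg, to_peg) tuples."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (hanoi(n - 1, src, dst, aux)
            + [(n, src, dst)]
            + hanoi(n - 1, aux, src, dst))

# The optimal solution for n disks always has exactly 2^n - 1 moves.
print(len(hanoi(10)))  # 1023
```

This also makes the verifier's dilemma concrete: the certificate itself is exponentially long in $n$.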
1,001
Optimal algorithm for finding the girth of a sparse graph?
<p>I wonder how to find the <a href="http://en.wikipedia.org/wiki/Girth_%28graph_theory%29">girth</a> of a sparse undirected graph. By sparse I mean $|E|=O(|V|)$. By optimal I mean the lowest time complexity.</p>&#xA;&#xA;<p>I thought about some modification of <a href="http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm">Tarjan's algorithm</a> for undirected graphs, but I didn't find good results. Actually I thought that if I could find the 2-connected components in $O(|V|)$, then I could find the girth, by some sort of induction which can be achieved from the first part. I may be on the wrong track, though. Any algorithm asymptotically better than $\Theta(|V|^2)$ (i.e. $o(|V|^2)$) is welcome.</p>&#xA;
algorithms time complexity graphs
1
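For reference, the textbook baseline (BFS from every vertex, taking the minimum cycle length witnessed by a non-tree edge) runs in $O(|V| \cdot |E|)$, which for $|E| = O(|V|)$ is exactly the $\Theta(|V|^2)$ bound the question wants to beat. A sketch of that baseline, my own code and not a sub-quadratic algorithm:

```python
from collections import deque

def girth(adj):
    """adj: adjacency lists of a simple undirected graph.
    Returns the length of a shortest cycle (inf if the graph is a forest)."""
    n = len(adj)
    best = float("inf")
    for s in range(n):
        dist = [-1] * n
        parent = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    queue.append(v)
                elif v != parent[u]:
                    # Non-tree edge closes a walk through s of this length,
                    # which contains a cycle at most this long; the minimum
                    # over all start vertices s is exactly the girth.
                    best = min(best, dist[u] + dist[v] + 1)
    return best

print(girth([[1, 2], [0, 2], [0, 1]]))          # triangle: 3
print(girth([[1, 3], [0, 2], [1, 3], [2, 0]]))  # 4-cycle: 4
```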
1,002
Showing that a problem in X is not X-Complete
<p>The <a href="http://en.wikipedia.org/wiki/Existential_theory_of_the_reals" rel="noreferrer">Existential Theory of the Reals</a> is in <strong>PSPACE</strong>, but I don't know whether it is <strong>PSPACE-Complete</strong>. If I believe that it is not the case, how could I prove it?</p>&#xA;&#xA;<p>More generally, given a problem in some complexity class <strong>X</strong>, how can I show that it is <em>not</em> <strong>X-Complete</strong>? For example, <strong>X</strong> could be <strong>NP</strong>, <strong>PSPACE</strong>, <strong>EXPTIME</strong>.</p>&#xA;
complexity theory proof techniques
1
1,004
Most efficient known way to match orders
<p>Consider two 2D arrays $B_{ij}$ (the buy array) and $S_{ij}$ (the sell array) where each $i^{th}$ element is associated with an array of floating-point values and each of the floating-point values, in turn, is associated with an array of integers.</p>&#xA;&#xA;<p>For example</p>&#xA;&#xA;<pre><code>B = [ &#xA; 0001 =&gt; [ 32.5 =&gt; {10, 15, 20}, &#xA; 45.2 =&gt; {48, 16, 19}, &#xA; ...,&#xA; k1&#xA; ], &#xA; 0002 =&gt; [ 35.6 =&gt; {17, 35, 89}, &#xA; 68.7 =&gt; {18, 43, 74}, &#xA; ...,&#xA; k2&#xA; ] &#xA;] &#xA;</code></pre>&#xA;&#xA;<p>and similarly for the sell array.</p>&#xA;&#xA;<p>This is akin to an order association system of a stock/commodity exchange:</p>&#xA;&#xA;<pre><code>BuyOrderBook = [&#xA; CompanyName =&gt; [&#xA; Price1 =&gt; [Qty1, Qty2...],&#xA; Price2 =&gt; [Qty1, Qty2...]&#xA; ]&#xA; SecondCompany = [...]&#xA; ]&#xA;</code></pre>&#xA;&#xA;<p>What is the fastest way known to solve the following problem:</p>&#xA;&#xA;<blockquote>&#xA; <p><strong>Input:</strong> Buy array $B$, Sell array $S$<br>&#xA; <strong>Problem:</strong> Decide whether there are $(c_1 \Rightarrow p_1 \Rightarrow q_1) \in B$ and $(c_2 \Rightarrow p_2 \Rightarrow q_2) \in S$ with $c_1 = c_2$, $q_1, q_2 &gt; 0$ and $p_1 \geq p_2$.</p>&#xA;</blockquote>&#xA;&#xA;<p>In short, what is the fastest way of matching orders for an exchange?</p>&#xA;&#xA;<p><strong>Update in response to comments</strong></p>&#xA;&#xA;<p>Let's say MSFT has 25 shares @ \$60 to be sold and there is a buyer who is willing to offer \$61 for 10 shares of MSFT. Then the buyer gets 10 shares @ \$60 and the buy order book becomes empty while the sell order book is now updated with the new quantity: 15 shares @ \$60.</p>&#xA;&#xA;<p>Now take the reverse case: MSFT has 25 shares @ \$60 to be bought and there is a seller who is willing to receive \$61 for 10 shares of MSFT. Then the trade will not be executed because the seller is demanding a minimum of \$61 and the buyer is offering a maximum of \$60. 
The orders are now stored and wait until new orders are received.</p>&#xA;&#xA;<p>(This is the <a href="http://en.wikipedia.org/wiki/Order_%28exchange%29#Limit_order" rel="nofollow">limit order principle</a>, where the seller specifies the minimum price at which he is willing to sell and the buyer specifies the maximum price at which he is willing to buy.)</p>&#xA;&#xA;<p>In the first case, after execution the sell order book will be (25-10) = 15 shares @ \$60 while the buy order book will be empty, (10-10) = 0.</p>&#xA;
algorithms
0
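A common baseline is to keep each side of the book in a priority queue, so the best bid and best ask are available in $O(1)$ and each match costs $O(\log n)$. A minimal single-company sketch in Python (the names, and the choice of executing at the resting ask price, are my own assumptions, not a specification of any real exchange):

```python
import heapq

def match_orders(buys, sells):
    """buys/sells: lists of (price, qty) limit orders for one company.
    Matches while the best bid is at least the best ask; returns
    executed trades as (price, qty) pairs."""
    bids = [(-price, qty) for price, qty in buys]   # max-heap via negation
    asks = [(price, qty) for price, qty in sells]   # min-heap
    heapq.heapify(bids)
    heapq.heapify(asks)
    trades = []
    while bids and asks and -bids[0][0] >= asks[0][0]:
        bid_price, bid_qty = heapq.heappop(bids)
        ask_price, ask_qty = heapq.heappop(asks)
        qty = min(bid_qty, ask_qty)
        trades.append((ask_price, qty))  # assumption: trade at the resting ask
        if bid_qty > qty:
            heapq.heappush(bids, (bid_price, bid_qty - qty))
        if ask_qty > qty:
            heapq.heappush(asks, (ask_price, ask_qty - qty))
    return trades

# The MSFT example above: sell 25 @ $60, buy 10 @ $61.
print(match_orders([(61, 10)], [(60, 25)]))  # [(60, 10)]
```

In the reverse case (buy @ \$60, sell @ \$61) the loop condition fails immediately and no trade is executed, matching the limit-order behaviour described above.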
1,006
Regular sets have linear growth?
<p>Is it true that the set $\{ 0^{n^2} \mid n \in\mathbb{N} \}$ is not regular because it does not grow linearly?</p>&#xA;&#xA;<blockquote>&#xA; <p>Regular sets are called regular because if you have a regular set then you can always pump it up (pumping lemma) in regular intervals and get other things in the set. They string out at very linear intervals. That's why anything that grows in other than linear intervals is not regular.</p>&#xA;</blockquote>&#xA;&#xA;<p>Therefore, $\{ a^{n^2} b^n \mid n \in\mathbb{N} \}$ is not regular, right? Also, I know that $\{ a^n b^n \mid n \in\mathbb{N} \}$ is not regular, but what about $\{ a^{cn} b^{dn} \mid n \in\mathbb{N} \}$ for any integer coefficients $c$ and $d$?</p>&#xA;
formal languages regular languages
0
1,012
Lambda Calculus Evaluation
<p>I know this is a simple question but can someone show me how&#xA;$(\lambda y. \lambda x. \lambda y.y) (\lambda x. \lambda y. y)$ reduces to $\lambda x. \lambda y. y$.</p>&#xA;
logic lambda calculus
1
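One way to see the reduction step by step: the argument is substituted for the outermost bound $y$, but the body's only $y$ is re-bound by the inner $\lambda y$, so the substitution has no effect:

```latex
(\lambda y.\, \lambda x.\, \lambda y.\, y)\ (\lambda x.\, \lambda y.\, y)
  \;\to_\beta\; (\lambda x.\, \lambda y.\, y)[\,y := \lambda x.\, \lambda y.\, y\,]
  \;=\; \lambda x.\, \lambda y.\, y
```

since the occurrence of $y$ in the body $\lambda x.\lambda y.y$ is bound by the inner $\lambda y$, not by the $\lambda y$ being applied.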
1,016
Branch and Bound explanation
<p>I have a test about the <a href="http://en.wikipedia.org/wiki/Branch_and_bound">branch and bound</a> algorithm. I understand theoretically how this algorithm works but I couldn't find examples that illustrate how this algorithm can be implemented practically. </p>&#xA;&#xA;<p>I found some examples such as <a href="http://optlab-server.sce.carleton.ca/POAnimations2007/BranchAndBound.html">this one</a>&#xA;but I'm still confused about it. I also looked at the travelling salesman problem and I couldn't understand it.</p>&#xA;&#xA;<p>What I need are some example problems and explanations of how they can be solved using branch and bound.</p>&#xA;
algorithms optimization branch and bound
0
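As one concrete worked example of the kind asked for, here is a small branch and bound solver for the 0/1 knapsack problem, where the bound is the usual fractional (greedy) relaxation. The structure — incumbent, bound, prune — is the point; the code is my own illustrative sketch (positive weights assumed):

```python
def knapsack_bb(items, capacity):
    """items: list of (value, weight) with weight > 0. Branch and bound."""
    # Sort by value density so the fractional bound is greedy-optimal.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Upper bound: fill the remaining room greedily, splitting one item.
        b = value
        while i < len(items) and items[i][1] <= room:
            b += items[i][0]
            room -= items[i][1]
            i += 1
        if i < len(items) and room > 0:
            b += items[i][0] * room / items[i][1]
        return b

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)  # update the incumbent
        if i == len(items) or bound(i, value, room) <= best:
            return  # prune: the relaxation cannot beat the incumbent
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # take item i
        branch(i + 1, value, room)              # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([(60, 10), (100, 20), (120, 30)], 50))  # 220
```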
1,017
Regular Expression for the language that requires one symbol to occur at least once
<p>I am trying to figure out the simplest way to do this using a regular expression. </p>&#xA;&#xA;<ul>&#xA;<li>Three symbols a, b, c.</li>&#xA;<li>The sequence length is unlimited, i.e. *.</li>&#xA;<li>The symbol a must be somewhere in the sequence at least once, but can appear more than once. </li>&#xA;<li>The sequence may have only a.</li>&#xA;</ul>&#xA;&#xA;<p>More formally, $\{ w \in \{a,b,c\}^* ~|~ \#_a(w)\ge 1 \}$, where $\#_a(w)$ is the number&#xA;of $a$s in $w$. </p>&#xA;&#xA;<p>The best I get is</p>&#xA;&#xA;<blockquote>&#xA; <p>$( ( b \mid c )^*\, a\, ( b \mid c )^* )^+$</p>&#xA;</blockquote>&#xA;&#xA;<p>Is that the simplest way?</p>&#xA;
formal languages regular expressions
1
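The proposed pattern can be sanity-checked mechanically against an arguably simpler candidate, $(b \mid c)^*\, a\, (a \mid b \mid c)^*$ (my suggestion, not a verified minimal form), by exhaustively testing all short strings over the alphabet:

```python
import re
from itertools import product

original = re.compile(r"((b|c)*a(b|c)*)+")
simpler = re.compile(r"(b|c)*a(a|b|c)*")

# Both should accept exactly the strings over {a,b,c} containing at least one 'a'.
for n in range(6):
    for w in map("".join, product("abc", repeat=n)):
        expected = "a" in w
        assert bool(original.fullmatch(w)) == expected
        assert bool(simpler.fullmatch(w)) == expected
print("patterns agree on all strings up to length 5")
```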
1,018
Non-Parametric Methods Like K-Nearest-Neighbours in High Dimensional Feature Space
<p>The main idea of <a href="https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm">k-Nearest-Neighbour</a> takes into account the $k$ nearest points and decides the classification of the data by majority vote. If so, then it should not have problems in higher dimensional data because methods like <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Locality_sensitive_hashing">locality sensitive hashing</a> can efficiently find nearest neighbours.</p>&#xA;&#xA;<p>In addition, feature selection with Bayesian networks can reduce the dimension of data and make learning easier.</p>&#xA;&#xA;<p>However, this <a href="http://repository.cmu.edu/cgi/viewcontent.cgi?article=2038&amp;context=compsci&amp;sei-redir=1&amp;referer=http%3A%2F%2Fscholar.google.com.hk%2Fscholar_url%3Fhl%3Den%26q%3Dhttp%3A%2F%2Frepository.cmu.edu%2Fcgi%2Fviewcontent.cgi%253Farticle%253D2038%2526context%253Dcompsci%26sa%3DX%26scisig%3DAAGBfm2_yYj4fmPTEDqBmHIZN-g3FWN7BA%26oi%3Dscholarr%26ei%3DzeB7T6qOIOmwiQeHj8GKCQ%26ved%3D0CBsQgAMoADAA#search=%22http%3A%2F%2Frepository.cmu.edu%2Fcgi%2Fviewcontent.cgi%3Farticle%3D2038%26context%3Dcompsci%22">review paper</a> by John Lafferty in statistical learning points out that non-parametric learning in high dimensional feature spaces is still a challenge and unsolved. </p>&#xA;&#xA;<p>What is going wrong?</p>&#xA;
machine learning artificial intelligence
1
1,020
How to implement AO* algorithm?
<p>I have noticed that different data structures are used when we implement search algorithms. For example, we use queues to implement breadth-first search, stacks to implement depth-first search and min-heaps to implement the <a href="https://en.wikipedia.org/wiki/A%2a_algorithm">A* algorithm</a>. In these cases, we do not need to construct the search tree explicitly.</p>&#xA;&#xA;<p>But I cannot find a simple data structure to simulate the searching process of the <a href="http://www.cs.cf.ac.uk/Dave/AI2/node26.html">AO* algorithm</a>. Is constructing the search tree explicitly the only way to implement the AO* algorithm? Can anybody provide an efficient implementation?</p>&#xA;
algorithms graphs data structures search algorithms
0
1,027
Using Pumping Lemma to prove language $L = \{(01)^m 2^m \mid m \ge0\}$ is not regular
<p>I'm trying to use the pumping lemma to prove that $L = \{(01)^m 2^m \mid m \ge 0\}$ is not regular.</p>&#xA;&#xA;<p>This is what I have so far: Assume $L$ is regular and let $p$ be the pumping length, so $w = (01)^p 2^p$. Consider any pumping decomposition $w = xyz$ such that $|y| &gt; 0$ and $|xy| \le p$.</p>&#xA;&#xA;<p>I'm not sure what to do next.</p>&#xA;&#xA;<p>Am I on the right track? Or am I way off?</p>&#xA;
formal languages regular languages pumping lemma
0
1,029
Reduce-reduce and shift-reduce conflicts in an LALR grammar
<p>I have to write a grammar for Pascal, and there is just one thing that is causing problems. </p>&#xA;&#xA;<p>Let's say we have these operators (sorted by priority from low to high):</p>&#xA;&#xA;<ol>&#xA;<li><em>Postfix <code>^</code></em>.</li>&#xA;<li><em>Prefix <code>^</code></em>.</li>&#xA;<li><em><code>[ ]</code></em> and <code>.</code> (same priority and left-associative).</li>&#xA;<li>The only terminal is <code>id</code>, which is any lowercase letter.</li>&#xA;</ol>&#xA;&#xA;<p>Now let's say that an expression is:</p>&#xA;&#xA;<ol>&#xA;<li>Any id.</li>&#xA;<li>Any expression with the <em>Postfix <code>^</code></em> operator.</li>&#xA;<li>Any expression with the <em>Prefix <code>^</code></em> operator.</li>&#xA;<li>Any expression with <code>.</code> followed by <code>id</code>.</li>&#xA;<li>Any expression with <code>[</code> and another expression and <code>]</code>.</li>&#xA;</ol>&#xA;&#xA;<p>Now I would like to know how I can make an LALR grammar without shift-reduce and reduce-reduce conflicts, or, if that can't be done, how I can prove that it can't be done.</p>&#xA;&#xA;<p>Some examples:</p>&#xA;&#xA;<pre><code>good:&#xA;a.b.c.d &#xA;a.b^.c&#xA;^a.b^&#xA;a.b^^[c]^^.d.e &#xA;^^a.b^.d.e^[]&#xA;&#xA;bad:&#xA;a.^b.c&#xA;</code></pre>&#xA;&#xA;<p>Without the prefix <code>^</code>, this problem is easy to solve, but the prefix sign keeps tripping me up. Can anyone help? My solutions so far:</p>&#xA;&#xA;<pre><code>// this works without the prefix but it does not produce a.b^.c which is wrong.&#xA;A ::= B | A ^ ;&#xA;B ::= C | ^ B ;&#xA;C ::= id | C [ A ] | C . id;&#xA;</code></pre>&#xA;&#xA;<p>So I thought that the prefix can only occur before the first dot, and between dots, there can only be a postfix <code>^</code> and brackets. So I came up with this:</p>&#xA;&#xA;<pre><code>A ::= B | A ^ ;&#xA;B ::= C | ^ B ;&#xA;C ::= id | C [ A ] | id D;&#xA;D ::= id E;&#xA;F ::= E | F ^;&#xA;E ::= id | F . id;&#xA;</code></pre>&#xA;&#xA;<p>But this causes 3 conflicts.</p>&#xA;
programming languages parsers formal grammars
0
1,031
How to prove that a language is not regular?
<p>We learned about the class of regular languages $\mathrm{REG}$. It is characterised by any one concept among regular expressions, finite automata and left-linear grammars, so it is easy to show that a given language is regular.</p>&#xA;&#xA;<p>How do I show the opposite, though? My TA has been adamant that in order to do so, we would have to show for all regular expressions (or for all finite automata, or for all left-linear grammars) that they can not describe the language at hand. This seems like a big task!</p>&#xA;&#xA;<p>I have read about some pumping lemma but it looks really complicated.</p>&#xA;&#xA;<p><em><sup>This is intended to be a reference question collecting usual proof methods and application examples. See <a href="https://cs.stackexchange.com/q/265/98">here</a> for the same question on context-free languages.</sup></em></p>&#xA;
formal languages regular languages proof techniques reference question
1
1,039
Number of words in the regular language $(00)^*$
<p><a href="http://en.wikipedia.org/wiki/Regular_language#The_number_of_words_in_a_regular_language">According to Wikipedia</a>, for any regular language $L$ there exist constants $\lambda_1,\ldots,\lambda_k$ and polynomials $p_1(x),\ldots,p_k(x)$ such that for every $n$ the number $s_L(n)$ of words of length $n$ in $L$ satisfies the equation </p>&#xA;&#xA;<p>$\qquad \displaystyle s_L(n)=p_1(n)\lambda_1^n+\dots+p_k(n)\lambda_k^n$.</p>&#xA;&#xA;<p>The language $L =\{ 0^{2n} \mid n \in\mathbb{N} \}$ is regular ($(00)^*$ matches it). $s_L(n) = 1$ iff $n$ is even, and $s_L(n) = 0$ otherwise.</p>&#xA;&#xA;<p>However, I cannot find the $\lambda_i$ and $p_i$ (that have to exist by the above). As $s_L(n)$ has to be differentiable and is not constant, it must somehow behave like a wave, and I can't see how you can possibly do that with polynomials and exponential functions without ending up with an infinite number of summands like in a Taylor expansion. Can anyone enlighten me?</p>&#xA;
formal languages regular languages combinatorics word combinatorics
1
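For this particular $L$, the resolution is that the $\lambda_i$ may be negative (complex in general): taking $k = 2$, $\lambda_1 = 1$, $\lambda_2 = -1$ and constant polynomials $p_1 = p_2 = \tfrac{1}{2}$ gives

```latex
s_L(n) = \tfrac{1}{2} \cdot 1^n + \tfrac{1}{2} \cdot (-1)^n =
\begin{cases}
1 & n \text{ even},\\
0 & n \text{ odd},
\end{cases}
```

so no infinite Taylor-style sum is needed; the wave-like behaviour comes entirely from the sign of $\lambda_2$.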
1,045
Number of words of a given length in a regular language
<p>Is there an algebraic characterization of the number of words of a given length in a regular language?</p>&#xA;&#xA;<p><a href="http://en.wikipedia.org/wiki/Regular_language#The_number_of_words_in_a_regular_language" rel="nofollow noreferrer">Wikipedia</a> states a result somewhat imprecisely:</p>&#xA;&#xA;<blockquote>&#xA; <p>For any regular language $L$ there exist constants $\lambda_1,\,\ldots,\,\lambda_k$ and polynomials $p_1(x),\,\ldots,\,p_k(x)$&#xA; such that for every $n$ the number $s_L(n)$ of words of length $n$ in $L$ satisfies the equation&#xA; $s_L(n)=p_1(n)\lambda_1^n+\dotsb+p_k(n)\lambda_k^n$.</p>&#xA;</blockquote>&#xA;&#xA;<p>It's not stated what space the $\lambda$'s live in ($\mathbb{C}$, I presume) and whether the function is required to have nonnegative integer values over all of $\mathbb{N}$. I would like a precise statement, and a sketch or reference for the proof.</p>&#xA;&#xA;<p>Bonus question: is the converse true, i.e. given a function of this form, is there always a regular language whose number of words per length is equal to this function?</p>&#xA;&#xA;<p><sub> This question generalizes <a href="https://cs.stackexchange.com/questions/1039/question-about-the-number-of-words-in-a-regular-language">Number of words in the regular language $(00)^*$</a> </sub> </p>&#xA;
formal languages regular languages word combinatorics
1
1,050
Multiples of n is a regular language
<blockquote>&#xA; <p><strong>Possible Duplicate:</strong><br>&#xA; <a href="https://cs.stackexchange.com/questions/640/language-of-the-values-of-an-affine-function">Language of the values of an affine function</a> </p>&#xA;</blockquote>&#xA;&#xA;&#xA;&#xA;<p>Let $C_n = \{x\mid x \text{ is a binary number that is a multiple of } n\}$. Show that for each $n$, the language $C_n$ is regular.</p>&#xA;&#xA;<p>Just provide a generic recipe (i.e., a formal definition for arbitrary $n$).</p>&#xA;
formal languages regular languages
0
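The generic recipe asked for is the remainder-tracking DFA: states $\{0, \dots, n-1\}$, start and accept state $0$, and transition $\delta(r, b) = (2r + b) \bmod n$ when bits arrive most significant first. A small Python simulation of that DFA (my own sketch):

```python
def is_multiple(n, bits):
    """Simulate the mod-n DFA on a binary string (most significant bit first)."""
    r = 0  # current remainder; this is the DFA state
    for b in bits:
        r = (2 * r + int(b)) % n  # appending a bit doubles the value and adds b
    return r == 0  # accept iff the number read so far is divisible by n

print(is_multiple(3, "110"))  # 6 is a multiple of 3 -> True
print(is_multiple(3, "101"))  # 5 is not -> False
```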
1,053
Subset Sum Requirements
<p>Consider the following problem.</p>&#xA;&#xA;<blockquote>&#xA; <p>Given a set $S$ of integers, a function $f : \mathbb{Z} \to \mathbb{Z}$ and $k \in \mathbb{Z}$, decide whether there is $X \subseteq S$ such that $f\left(\sum_{x\in X}x\right)=k$.</p>&#xA;</blockquote>&#xA;&#xA;<p>Is this still considered a <a href="https://en.wikipedia.org/wiki/Subset_sum">subset-sum problem</a>?</p>&#xA;&#xA;<p>For instance, given</p>&#xA;&#xA;<p>$\qquad \displaystyle S=\{ −7, −3, −2, 5, 8\}$</p>&#xA;&#xA;<p>and $k=0$, find a subset $X$ such that $f\left(\sum_{x\in X}x\right)=0$ for $f(y)=-3+y$. In this case, a solution is $X=\{ -3,-2,8 \}$.</p>&#xA;
complexity theory
0
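For intuition, the example instance can be checked by brute force; note that for an invertible $f$ such as $f(y) = -3 + y$ the problem is just ordinary subset sum with target $f^{-1}(k)$. A naive exponential check in Python (the function name is mine):

```python
from itertools import combinations

def satisfiable(S, f, k):
    """Naive O(2^n) search: does some subset X of S have f(sum(X)) == k?"""
    return any(f(sum(X)) == k
               for r in range(len(S) + 1)
               for X in combinations(S, r))

# The example: S = {-7, -3, -2, 5, 8}, f(y) = -3 + y, k = 0.
print(satisfiable([-7, -3, -2, 5, 8], lambda y: -3 + y, 0))  # True via X = {-3, -2, 8}
```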
1,058
Supporting data structures for SAT local search
<p><a href="http://en.wikipedia.org/wiki/WalkSAT" rel="nofollow noreferrer">WalkSAT and GSAT</a> are well-known and simple local search algorithms for solving the Boolean satisfiability problem. The pseudocode for the GSAT algorithm is copied from the question <a href="https://cs.stackexchange.com/questions/219/implementing-the-gsat-algorithm-how-to-select-which-literal-to-flip">Implementing the GSAT algorithm - How to select which literal to flip?</a> and presented below.</p>&#xA;&#xA;<pre><code>procedure GSAT(A,Max_Tries,Max_Flips)&#xA; A: is a CNF formula&#xA; for i:=1 to Max_Tries do&#xA; S &lt;- instantiation of variables&#xA; for j:=1 to Max_Flips do&#xA; if A satisfiable by S then&#xA; return S&#xA; endif&#xA; V &lt;- the variable whose flip yields the largest increase in the number of satisfied clauses;&#xA; S &lt;- S with V flipped;&#xA; endfor&#xA; endfor&#xA; return the best instantiation found&#xA;end GSAT&#xA;</code></pre>&#xA;&#xA;<p>Here we flip the variable that maximizes the number of satisfied clauses. How is this done efficiently? The naive method is to try flipping every variable and, for each, step through all clauses and calculate how many of them get satisfied. Even if a clause could be queried for satisfiability in constant time, the naive method would still run in $O(VC)$ time, where $V$ is the number of variables and $C$ the number of clauses. I'm sure we can do better, hence the question:</p>&#xA;&#xA;<blockquote>&#xA; <p>Many local search algorithms flip the variable's assignment that maximizes the number of satisfied clauses. In practice, with what data structures is this operation supported efficiently?</p>&#xA;</blockquote>&#xA;&#xA;<p>This is something I feel like textbooks often omit. One example is even the famous <a href="http://aima.cs.berkeley.edu/" rel="nofollow noreferrer">Russell &amp; Norvig book</a>.</p>&#xA;
algorithms data structures satisfiability
1
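One standard answer (used, as far as I know, in many WalkSAT-family solvers, though the exact structures vary) is to cache, per clause, the number of currently-true literals and, per variable, the net gain of flipping it; a flip then only touches the clauses containing that variable. A minimal sketch of the precomputation in Python, my own code rather than any particular solver's:

```python
def init_scores(clauses, assignment, n_vars):
    """clauses: lists of literals (+v / -v); assignment: bool list indexed 1..n_vars.
    Returns (true-literal count per clause, net flip gain per variable)."""
    true_count = []
    gain = [0] * (n_vars + 1)  # gain[v] = (#clauses made true) - (#made false)
    for clause in clauses:
        t = sum(1 for lit in clause if assignment[abs(lit)] == (lit > 0))
        true_count.append(t)
        if t == 0:
            for lit in clause:          # flipping any of its variables satisfies it
                gain[abs(lit)] += 1
        elif t == 1:
            for lit in clause:          # flipping the lone true literal breaks it
                if assignment[abs(lit)] == (lit > 0):
                    gain[abs(lit)] -= 1
    return true_count, gain

# (x1 or x2) and (not x1 or x2) under x1 = True, x2 = False:
tc, g = init_scores([[1, 2], [-1, 2]], [None, True, False], 2)
print(tc, g[1:])  # [1, 0] [0, 1]
```

After flipping a variable $v$, only the clauses containing $v$ need their counts and the affected gains updated, so a flip costs time proportional to $v$'s number of occurrences rather than $O(VC)$.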
1,060
Confluence proof for a simple rewriting system
<p>Assume we have a simple language that consists of the terms:</p>&#xA;&#xA;<ul>&#xA;<li>$\mathtt{true}$</li>&#xA;<li>$\mathtt{false}$</li>&#xA;<li>if $t_1,t_2,t_3$ are terms then so is $\mathtt{if}\: t_1 \:\mathtt{then}\: t_2 \:\mathtt{else}\: t_3$</li>&#xA;</ul>&#xA;&#xA;<p>Now assume the following logical evaluation rules:</p>&#xA;&#xA;<p>$$ \begin{gather*}&#xD;&#xA;\dfrac{}&#xD;&#xA; {\mathtt{if}\: \mathtt{true} \:\mathtt{then}\: t_2 \:\mathtt{else}\: t_3 \to t_2}&#xD;&#xA; \text{[E-IfTrue]} \quad&#xD;&#xA;\dfrac{}&#xD;&#xA; {\mathtt{if}\: \mathtt{false} \:\mathtt{then}\: t_2 \:\mathtt{else}\: t_3 \to t_3}&#xD;&#xA; \text{[E-IfFalse]} \\&#xD;&#xA;\dfrac{t_1 \to t_1&#39;}&#xD;&#xA; {\mathtt{if}\: t_1 \:\mathtt{then}\: t_2 \:\mathtt{else}\: t_3 \to \mathtt{if}\: t_1&#39; \:\mathtt{then}\: t_2 \:\mathtt{else}\: t_3}&#xD;&#xA; \text{[E-If]} \\&#xD;&#xA;\end{gather*} $$</p>&#xA;&#xA;<p>Suppose we also add the following funky rule:</p>&#xA;&#xA;<p>$$&#xD;&#xA;\dfrac{t_2 \to t_2&#39;}&#xD;&#xA; {\mathtt{if}\: t_1 \:\mathtt{then}\: t_2 \:\mathtt{else}\: t_3 \to \mathtt{if}\: t_1 \:\mathtt{then}\: t_2&#39; \:\mathtt{else}\: t_3}&#xD;&#xA; \text{[E-IfFunny]}&#xD;&#xA;$$</p>&#xA;&#xA;<p>For this simple language with the given evaluation rules I wish to prove the following:</p>&#xA;&#xA;<p><strong>Theorem: If $r \rightarrow s$ and $r \rightarrow t$ then there is some term $u$ such that $s \rightarrow u$ and $t \rightarrow u$.</strong></p>&#xA;&#xA;<p>I am proving this by induction on the structure of $r$. Here is my proof so far; it all worked out well, but I am stuck at the very last case. It seems like induction on the structure of $r$ does not suffice; can anyone help me out?</p>&#xA;&#xA;<p><em>Proof.</em> By induction on $r$, we separate the forms that $r$ can take:</p>&#xA;&#xA;<ol>&#xA;<li>$r$ is a constant; there is nothing to prove, since a normal form does not evaluate to anything.</li>&#xA;<li>$r=$ if true then $r_2$ else $r_3$. 
(a) both derivations were done with the E-IfTrue rule. In this case $s=t$, so there is nothing to prove. (b) one derivation was done with the E-IfTrue rule, the other with the E-IfFunny rule. Assume $r \rightarrow s$ was done with E-IfTrue; the other case is proven analogously. We now know that $s = r_2$. We also know that $t =$ if true then $r&#39;_2$ else $r_3$ and that there exists some derivation $r_2 \rightarrow r&#39;_2$ (the premise). If we now choose $u = r&#39;_2$, we conclude the case.</li>&#xA;<li>$r=$ if false then $r_2$ else $r_3$. Proven analogously to the previous case.</li>&#xA;<li>$r=$ if $r_1$ then $r_2$ else $r_3$ with $r_1 \neq $ true or false. (a) both derivations were done with the E-If rule. We now know that $s =$ if $r&#39;_1$ then $r_2$ else $r_3$ and $t =$ if $r&#39;&#39;_1$ then $r_2$ else $r_3$. We also know that there exist derivations $r_1 \rightarrow r&#39;_1$ and $r_1 \rightarrow r&#39;&#39;_1$ (the premises). We can now use the induction hypothesis to say that there exists some term $r&#39;&#39;&#39;_1$ such that $r&#39;_1 \rightarrow r&#39;&#39;&#39;_1$ and $r&#39;&#39;_1 \rightarrow r&#39;&#39;&#39;_1$. We now conclude the case by saying $u =$ if $r&#39;&#39;&#39;_1$ then $r_2$ else $r_3$ and noticing that $s \rightarrow u$ and $t \rightarrow u$ by the E-If rule. (b) one derivation was done by the E-If rule and one by the E-IfFunny rule.</li>&#xA;</ol>&#xA;&#xA;<p>This latter case, where one derivation was done by E-If and one by E-IfFunny, is the case I am missing... I can't seem to use the induction hypothesis.</p>&#xA;&#xA;<p>Help will be much appreciated.</p>&#xA;
logic semantics proof techniques term rewriting
1
1,062
How many shortest distances change when adding an edge to a graph?
<p>Let $G=(V,E)$ be some complete, weighted, undirected graph. We construct a second graph $G&#39;=(V, E&#39;)$ by adding edges one by one from $E$ to $E&#39;$. We add $\Theta(|V|)$ edges to $G&#39;$ in total.</p>&#xA;&#xA;<p>Every time we add one edge $(u,v)$ to $E&#39;$, we consider the shortest distances between all pairs in $(V, E&#39;)$ and $(V, E&#39; \cup \{ (u,v) \})$. We count how many of these shortest distances have changed as a consequence of adding $(u,v)$. Let $C_i$ be the number of shortest distances that change when we add the $i$th edge, and let $n$ be the number of edges we add in total.</p>&#xA;&#xA;<blockquote>&#xA; <p>How big is $C = \frac{\sum_i C_i}{n}$?</p>&#xA;</blockquote>&#xA;&#xA;<p>As $C_i = O(|V|^2)=O(n^2)$, $C=O(n^2)$ as well. Can this bound be improved? Note that I define $C$ to be the average over all edges that were added, so a single round in which a lot of distances change is not that interesting, though it proves that $C = \Omega(n)$.</p>&#xA;&#xA;<p>I have an algorithm for computing a geometric t-spanner greedily that works in $O(C n \log n)$ time, so if $C$ is $o(n^2)$, my algorithm is faster than the original greedy algorithm, and if $C$ is really small, potentially faster than the best known algorithm (though I doubt that).</p>&#xA;&#xA;<p>Some problem-specific properties that might help with a good bound: the edge $(u,v)$ that is added always has larger weight than any edge already in the graph (not necessarily strictly larger). Furthermore, its weight is shorter than the shortest path between $u$ and $v$.</p>&#xA;&#xA;<p>You may assume that the vertices correspond to points in a 2d plane and the distances between vertices are the Euclidian distances between these points. That is, every vertex $v$ corresponds to some point $(x,y)$ in the plane, and for an edge $(u,v)=((x_1,y_1),(x_2,y_2))$ its weight is equal to $\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2.}$</p>&#xA;
algorithms graphs shortest path
1
1,065
What is the difference between user-level threads and kernel-level threads?
<p>After reading several sources I'm still confused about user- and kernel-level threads. </p>&#xA;&#xA;<p>In particular:</p>&#xA;&#xA;<blockquote>&#xA; <p>Threads can exist at both the user level and the kernel level</p>&#xA;</blockquote>&#xA;&#xA;<p>What is the difference between the user level and kernel level? </p>&#xA;
operating systems terminology concurrency os kernel
1
1,069
What use are the minimum values on minimax trees?
<p>Consider a <a href="http://en.wikipedia.org/wiki/Minimax" rel="nofollow noreferrer">minimax</a> tree for an adversarial search problem. For example, in this picture (alpha-beta pruning):</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/NmKIO.jpg" alt="enter image description here"></p>&#xA;&#xA;<p>When marking the tree with $[\min,\max]$ values bottom-up, we first traverse node $3$ and assign $B.\max = 3$. Then we traverse $12$ and $8$, in this order, which confirms $B.\max = 3$.</p>&#xA;&#xA;<p>But why is $B.\min = 3$? What is the use of that value?</p>&#xA;
data structures trees game theory
0
1,071
Is there an algorithm which finds sorted subsequences of size three in $O(n)$ time?
<p>I want to prove or disprove the existence of an algorithm which, given an array $A$ of integers, finds three indices $i, j$ and $k$ such that $i &lt; j &lt; k$ and $A[i] &lt; A[j] &lt; A[k]$ (or finds that there is no such triple) in linear time.</p>&#xA;&#xA;<p>This is not a homework question; I saw it on a programming forum framed as “try to implement such an algorithm.” I suspect that it is impossible after various experiments. My intuition tells me so, but that does not really count for anything.</p>&#xA;&#xA;<p>I would like to prove it formally. How do you do it? I would ideally like to see a proof laid out step-by-step, and then if you are so inclined, some explanation of how to go about proving/disproving simple questions like this in general. If it helps, some examples:</p>&#xA;&#xA;<pre><code>[1,5,2,0,3] → (1,2,3)&#xA;[5,6,1,2,3] → (1,2,3)&#xA;[1,5,2,3] → (1,2,3)&#xA;[5,6,1,2,7] → (1,2,7)&#xA;[5,6,1,2,7,8] → (1,2,7)&#xA;[1,2,999,3] → (1,2,999)&#xA;[999,1,2,3] → (1,2,3)&#xA;[11,12,8,9,5,6,3,4,1,2,3] → (1,2,3)&#xA;[1,5,2,0,-5,-2,-1] → (-5,-2,-1)&#xA;</code></pre>&#xA;&#xA;<p>I supposed that one could iterate over $A$, and each time there is an $i &lt; j$ (our current $j$, that is), we make a new triple and push it onto an array. We continue stepping and comparing each triple until one of our triples is complete. So it's like <code>[1,5,2,0,-5,-2,-1] → 1..2.. -5.. -2.. -1</code>, <code>[1,5,2,0,-5,-2,3,-1] → 1..2.. -5.. -2.. 3</code>! But I think this is more complex than mere $\mathcal{O}(n)$ as the number of triples on our triple array would in the worst case correspond to the size of the input list.</p>&#xA;
algorithms arrays subsequences
1
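The suspicion can actually be tested against a classic linear-time candidate: scan once, keeping the smallest value seen so far and the smallest value that has something smaller before it; the first element exceeding the latter completes a triple. A sketch of that idea (it reports existence; recovering the exact indices takes a little extra bookkeeping):

```python
def has_increasing_triple(a):
    """True iff some i < j < k has a[i] < a[j] < a[k]. One pass, O(1) space."""
    first = second = float("inf")
    for x in a:
        if x <= first:
            first = x          # smallest value so far
        elif x <= second:
            second = x         # smallest value with a smaller element before it
        else:
            return True        # x caps a triple: (something < second) < second < x
    return False

print(has_increasing_triple([1, 5, 2, 0, 3]))   # True  (1, 2, 3)
print(has_increasing_triple([5, 4, 3, 2, 1]))   # False
```

The key invariant is that whenever `second` is finite, some earlier element is strictly smaller than it, so any later element above `second` certifies a triple.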
1,074
What is the purpose of M:N (Hybrid) threading?
<p>In other words, what advantages does <a href="http://en.wikipedia.org/wiki/Thread_%28computer_science%29#M%3aN_.28Hybrid_threading.29" rel="noreferrer">Hybrid threading</a> have over 1:1 (kernel only) and N:1 (user only) threading?</p>&#xA;&#xA;<p><sub>This is a follow-up to <a href="https://cs.stackexchange.com/questions/1065/what-is-the-difference-between-user-level-threads-and-kernel-level-threads">What is the difference between user-level threads and kernel-level threads?</a></sub></p>&#xA;
operating systems concurrency os kernel
1
1,076
How to approach Vertical Sticks challenge
<p>This problem is taken from <a href="https://www.interviewstreet.com/challenges/dashboard/#problem/4eed18ded76fe" rel="noreferrer">interviewstreet.com</a>.</p>&#xA;&#xA;<p>We are given an array of integers $Y=\{y_1,...,y_n\}$ that represents $n$ line segments such that the endpoints of segment $i$ are $(i, 0)$ and $(i, y_i)$. Imagine that from the top of each segment a horizontal ray is shot to the left, and this ray stops when it touches another segment or it hits the y-axis. We construct an array of $n$ integers, $v_1, ..., v_n$, where $v_i$ is equal to the length of the ray shot from the top of segment $i$. We define $V(y_1, ..., y_n)&#xA;= v_1 + ... + v_n$.</p>&#xA;&#xA;<p>For example, if we have $Y=[3,2,5,3,3,4,1,2]$, then $[v_1, ..., v_8] = [1,1,3,1,1,3,1,2]$, as shown in the picture below:</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/bZ04e.png" alt="enter image description here"></p>&#xA;&#xA;<p>For each permutation $p$ of $[1,...,n]$, we can calculate $V(y_{p_1}, ..., y_{p_n})$. If we choose a uniformly random permutation $p$ of $[1,...,n]$, what is the expected value of $V(y_{p_1}, ..., y_{p_n})$?</p>&#xA;&#xA;<p>If we solve this problem using the naive approach, it will not be efficient and will run practically forever for $n=50$. I believe we can approach this problem by independently calculating the expected value of $v_i$ for each stick, but I still need to know whether there is another efficient approach to this problem. On what basis can we calculate the expected value for each stick independently?</p>&#xA;
algorithms probability theory
0
1,079
When did commercial Speech Recognition first begin using grammar (sentence structure) for prediction?
<p>It seems as though modern speech recognition (e.g., through Android, iOS phones) make use of grammar or sentence structure. (e.g., it might have a tough time distinguishing between "grammar" and "grandma" but can distinguish between "I'm going to see grandma" and "I'm reading a book on english grammar". (yes, I just tried it with my Android phone with vLingo app)</p>&#xA;&#xA;<p>That is much improved (with Speaker Independent SR (i.e., no training)) over what I experienced with Dragon Dictate even using Speaker Dependent SR (with 30m of training).</p>&#xA;&#xA;<p>So, I'm wondering whether my guess is right: When did the commercially available SR software start using grammar and sentence structure to "guess" the right speech?</p>&#xA;
natural language processing history
0
1,084
How to implement the details of shotgun hill climbing to make it effective?
<p>I am currently working on a solution to a problem for which (after a bit of research) the use of a hill climbing, and more specifically a <em>shotgun</em> (or <em>random-restart</em>) <a href="http://en.wikipedia.org/wiki/Hill_climbing" rel="nofollow">hill climbing</a> algorithmic idea seems to be the best fit, as I have no clue how the best start value can be found.</p>&#xA;&#xA;<p>But there is not a lot of information about this type of algorithm except the <a href="http://en.wikipedia.org/wiki/Hill_climbing#Variants" rel="nofollow">rudimentary idea</a> behind it:</p>&#xA;&#xA;<blockquote>&#xA; <p>[Shotgun] hill climbing is a meta-algorithm built on top of the hill climbing algorithm. It iteratively does hill-climbing, each time with a random initial condition $x_0$. The best $x_m$ is kept: if a new run of hill climbing produces a better $x_m$ than the stored state, it replaces the stored state.</p>&#xA;</blockquote>&#xA;&#xA;<p>If I understand this correctly, this means something like this (assuming maximisation):</p>&#xA;&#xA;<pre><code>x = -infinity;&#xA;for ( i = 1 .. N ) {&#xA; x = max(x, hill_climbing(random_solution()));&#xA;}&#xA;return x;&#xA;</code></pre>&#xA;&#xA;<p>But how can you make this really effective, that is, better than normal hill climbing? It is hard to believe that using random start values helps a lot, especially for huge search spaces. More precisely, I wonder:</p>&#xA;&#xA;<ul>&#xA;<li>Is there a good strategy for choosing the $x_0$ (that is, implementing <code>random_solution</code>), in particular knowing (intermediate) results of former iterations?</li>&#xA;<li>How to choose $N$, that is, how many iterations are needed to be quite certain that the perfect solution is not missed (by much)?</li>&#xA;</ul>&#xA;
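A runnable Python sketch of the pseudocode above, assuming a one-dimensional real-valued search space; the step size, iteration count and restart count are free parameters chosen for illustration:

```python
import random

def hill_climb(x, f, step=0.25, iters=1000):
    """Greedy local search: propose a random neighbour of x and accept it
    only if it improves f."""
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) > f(x):
            x = cand
    return x

def shotgun_hill_climb(f, random_solution, restarts=20):
    """Random-restart wrapper: run hill climbing from many random starting
    points and keep the best local optimum found."""
    best = None
    for _ in range(restarts):
        x = hill_climb(random_solution(), f)
        if best is None or f(x) > f(best):
            best = x
    return best
```

On a smooth unimodal objective the restarts add little, which matches the doubt raised above; their value shows on multimodal objectives, where independent starts land in different basins of attraction.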
algorithms optimization heuristics
1
1,088
What happens to the cache contents on a context switch?
<p>In a multicore processor, what happens to the contents of a core's cache (say L1) when a context switch occurs on that core?</p>&#xA;&#xA;<p>Is the behaviour dependent on the architecture or is it a general behaviour followed by all chip manufacturers?</p>&#xA;
computer architecture operating systems cpu cache
1
1,092
Constraint-based Type Inference with Algebraic Data
<p>I am working on an expression based language of ML genealogy, so it naturally needs type inference >:)</p>&#xA;&#xA;<p>Now, I am trying to extend a constraint-based solution to the problem of inferring types, based on a simple implementation in EOPL (Friedman and Wand), but they elegantly side-step algebraic datatypes.</p>&#xA;&#xA;<p>What I have so far works smoothly; if an expression <code>e</code> is <code>a + b</code>, then <code>e : Int</code>, <code>a : Int</code> and <code>b : Int</code>. If <code>e</code> is a match,</p>&#xA;&#xA;<pre><code>match n with&#xA; | 0 -&gt; 1&#xA; | n' -&gt; n' * fac(n - 1)&#xA;</code></pre>&#xA;&#xA;<p>I can rightly infer that <code>t(e) = t(the whole match expression)</code>, <code>t(n) = t(0) = t(n')</code>, <code>t(match) = t(1) = t(n' * fac(n - 1))</code> and so on...</p>&#xA;&#xA;<p>But I am very unsure when it comes to algebraic datatypes. Suppose a function like filter:</p>&#xA;&#xA;<pre><code>let filter pred list =&#xA; match list with&#xA; | Empty -&gt; Empty&#xA; | Cons(e, ls') when pred e -&gt; Cons (e, filter pred ls')&#xA; | Cons(_, ls') -&gt; filter pred ls'&#xA;</code></pre>&#xA;&#xA;<p>For the list type to remain polymorphic, Cons needs to be of type <code>a * a list -&gt; a list</code>. So, in establishing these constraints, I obviously need to look up the types of my algebraic constructors - the problem I now have is the 'context-sensitivity' of multiple uses of algebraic constructors - how do I express in my constraint equations that the <code>a</code> in each case needs to be the same?</p>&#xA;&#xA;<p>I am having trouble finding a general solution to this, and I am unable to find much literature on this. Whenever I find something similar - expression based language with constraint-based type inference - they stop just short of algebraic datatypes and polymorphism.</p>&#xA;&#xA;<p>Any input is much appreciated!</p>&#xA;
programming languages type theory functional programming inductive datatypes typing
0
1,100
How are statistics being applied in computer science to evaluate accuracy in research claims?
<p>I have noticed in my short academic life that many published papers in our area sometimes do not have much rigor regarding statistics. This is not just an assumption; I have heard professors say the same. </p>&#xA;&#xA;<p>For example, in CS disciplines I see papers being published claiming that methodology X has been observed to be effective, and that this is proved by ANOVA and ANCOVA; however, I see no references for other researchers evaluating that the necessary constraints have been observed. It somewhat feels like as soon as some 'complex function and name' appears, the researcher is taken to be using some highly credible method and approach: 'he must know what he is doing, and it is fine if he does not describe the constraints', say, for that given distribution or approach, so that the community can evaluate it. </p>&#xA;&#xA;<p>Sometimes, there are excuses for justifying the hypothesis with such a small sample size. </p>&#xA;&#xA;<p>My question here is thus posed as a student of CS disciplines and an aspirant to learn more about statistics: How do computer scientists approach statistics? </p>&#xA;&#xA;<p>This question might seem like I am asking what I have already explained, but that is my <em>opinion</em>. I might be wrong, or I might be focusing on a group of practitioners whereas other groups of CS researchers might be doing something else that follows better practices with respect to statistical rigor. </p>&#xA;&#xA;<p>So specifically, what I want is "Our area is or is not into statistics because of the given facts (paper examples, books, or another discussion article about this are fine)". @Patrick's answer is closer to this. </p>&#xA;
software engineering empirical research statistics
1
1,106
Which queue does the long-term scheduler maintain?
<p>There are different queues of processes (in an operating system):</p>&#xA;&#xA;<p><em>Job Queue:</em> Each new process goes into the job queue. Processes in the job queue reside on mass storage and await the allocation of main memory.</p>&#xA;&#xA;<p><em>Ready Queue:</em> The set of all processes that are in main memory and are waiting for CPU time is kept in the ready queue.</p>&#xA;&#xA;<p><em>Waiting (Device) Queues:</em> The set of processes waiting for allocation of certain I/O devices is kept in the waiting (device) queue.</p>&#xA;&#xA;<p>The short-term scheduler (also known as the CPU scheduler) selects a process from the ready queue and yields control of the CPU to the process.</p>&#xA;&#xA;<p>In my lecture notes the long-term scheduler is partly described as maintaining a queue of new processes waiting to be admitted into the system. </p>&#xA;&#xA;<p>What is the name of the queue the long-term scheduler maintains? When it admits a process to the system, is the process placed in the ready queue? </p>&#xA;
operating systems terminology process scheduling
1
1,113
How to construct the found path in bidirectional search
<p>I am trying to implement bidirectional search in a graph. I am using two breadth first searches from the start node and the goal node. The states that have been checked are stored in two hash tables (closed lists).&#xA;How can I get the solution (the path from the start to the goal) when I find that a state checked by one of the searches is in the closed list of the other?</p>&#xA;&#xA;<p>EDIT </p>&#xA;&#xA;<p>Here are the explanations from the book:&#xA;<em>"Bidirectional search is implemented by having one or both of the searches check each&#xA;node before it is expanded to see if it is in the fringe of the other search tree; if so, a solution has been found... Checking a node for membership in the other search tree can be done in constant time with a hash table..."</em></p>&#xA;&#xA;<p>Some pages before: &#xA;<em>"A node is a bookkeeping data structure used to represent the search tree. A state corresponds to a configuration of the world... two different nodes can contain the same world state, if that state is generated via two different search paths."</em> So from that I conclude that if nodes are kept in the hash tables, then a node from the BFS started at the start node would not match a node constructed by the other BFS started at the goal node.</p>&#xA;&#xA;<p>And later, in the general graph search algorithm, the states are stored in the closed list, not the nodes; but it seems to me that even though the states are saved in the hash tables, the nodes are afterwards retrieved from there.</p>&#xA;
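One common bookkeeping scheme for the setup described in the question is to let each closed list map a state to its parent in that search's tree; when a state generated by one search is found in the other's closed list, the two parent chains are spliced together. A Python sketch (alternating the two BFS frontiers level by level; extra care would be needed to guarantee the returned path is shortest):

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbours):
    """BFS from both ends; the parent maps double as the closed lists,
    so the path can be rebuilt once the searches meet at a common state."""
    if start == goal:
        return [start]
    parents = ({start: None}, {goal: None})   # forward / backward
    queues = (deque([start]), deque([goal]))
    side = 0
    while queues[0] and queues[1]:
        this, other = parents[side], parents[1 - side]
        for _ in range(len(queues[side])):    # expand one full level
            u = queues[side].popleft()
            for v in neighbours(u):
                if v in this:
                    continue
                this[v] = u
                if v in other:                # frontiers meet at v
                    return _join(v, parents)
                queues[side].append(v)
        side = 1 - side
    return None

def _join(meet, parents):
    """Splice the forward chain start..meet with the backward chain
    meet..goal into one start-to-goal path."""
    fwd, bwd = parents
    path = []
    u = meet
    while u is not None:
        path.append(u)
        u = fwd[u]
    path.reverse()
    u = bwd[meet]
    while u is not None:
        path.append(u)
        u = bwd[u]
    return path
```

On the path graph 0-1-2-3-4 the two searches meet in the middle and the full path is reassembled.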
algorithms graphs shortest path
1
1,117
Machine learning algorithm to play Connect Four
<p>I'm currently reading about machine learning and wondered how to apply it to playing <a href="http://en.wikipedia.org/wiki/Connect_Four">Connect Four</a>.</p>&#xA;&#xA;<p>My current attempt is a simple multiclass classifier using a sigmoid function model and the one-vs-all method.</p>&#xA;&#xA;<p>In my opinion, the input features have to be the state (disc of player 1, disc of player 2, empty) of the 7x6=42 grid fields. </p>&#xA;&#xA;<p>The output would be the number of the column to put the disc into. Because that is a discrete number between 1 and 7, I guess this can be treated as a multiclass classification problem.</p>&#xA;&#xA;<p>But how do I generate training examples usable in supervised learning? </p>&#xA;&#xA;<p>The main goal is to win the game, but the outcome obviously isn't known when doing every turn but the last.&#xA;If I just let two players who decide randomly what to do play against each other thousands of times, will it be sufficient to simply take all turns made by the winner of each game round as training examples? Or do I have to do this in a completely different way?</p>&#xA;&#xA;<p><strong>Edit: As suggested in the comments I read a little about reinforcement learning.</strong>&#xA;From what I now understand, Q-learning should do the trick, i.e. I have to approximate a function Q of the current state and the action to take, giving the maximum cumulative reward obtainable beginning in that state. Then each step would be to choose the action which results in the maximum value of Q. However, this game has way too many states to do this with, e.g., a lookup table. So, what is an effective way to model this Q-function?</p>&#xA;
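For concreteness, the tabular Q-learning update mentioned in the edit is a one-line rule; the sketch below (names and defaults are illustrative) shows the update itself, while the question's point stands that a plain table over all Connect Four states is infeasible, so Q would have to be approximated instead:

```python
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict mapping (state, action) pairs to estimated returns;
    unseen pairs default to 0."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

A common approximation for large state spaces is to replace the dict lookup with a parametric function (e.g. a linear model or neural network over board features) trained toward the same target `r + gamma * max_a' Q(s', a')`.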
machine learning board games
0
1,118
Subset-sum and 3SAT
<p>Two things (this may be naive):</p>&#xA;&#xA;<ol>&#xA;<li><p>Does anyone believe there is a sub-exponential time algorithm for the <a href="http://en.wikipedia.org/wiki/Subset_sum_problem" rel="nofollow">Subset-sum problem</a>? It seems obvious to me that you would have to look through all possible subsets to prove (the negation of) an existential statement. It seems obvious in the same way that if somebody asked you "Is $x$ in this list of $n$ numbers?", it's obvious that you would have to look through all $n$ numbers.</p></li>&#xA;<li><p>If Subset-sum can be polynomial time reduced to <a href="http://en.wikipedia.org/wiki/K-SAT#3-satisfiability" rel="nofollow">3SAT</a> and we agree on (1), then doesn't that mean $NP \neq P$?</p></li>&#xA;</ol>&#xA;
complexity theory time complexity np complete reductions
0
1,121
Choosing taps for Linear Feedback Shift Register
<p>I am confused about how taps are chosen for Linear Feedback Shift Registers.</p>&#xA;<p>I have a diagram which shows a LFSR with connection polynomial <span class="math-container">$C(X) = X^5 + X^2 + 1$</span>. The five stages are labelled: <span class="math-container">$R4, R3, R2, R1$</span> and <span class="math-container">$R0$</span> and the taps come out of <span class="math-container">$R0$</span> and <span class="math-container">$R3$</span>.</p>&#xA;<p>How are these taps decided? When I am given a connection polynomial but no diagram, how do I know what values I should XOR?</p>&#xA;<p><img src="https://i.stack.imgur.com/ubz8P.jpg" alt="enter image description here" /></p>&#xA;
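To illustrate (stage-numbering conventions differ between textbooks, so this is just one common reading), a Fibonacci-style LFSR can be simulated by XOR-ing the stages named by the exponents of the connection polynomial. Since $C(X) = X^5 + X^2 + 1$ is primitive, every nonzero seed should cycle through all $2^5 - 1 = 31$ nonzero states, which is a quick sanity check that the taps were read off correctly:

```python
def lfsr_period(taps, n, seed=1):
    """Cycle length of `seed` in an n-stage Fibonacci LFSR whose feedback
    bit is the XOR of the stages listed in `taps` (1-based, matching the
    exponents of the connection polynomial, constant term omitted)."""
    mask = (1 << n) - 1

    def step(state):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1   # tap stage t
        return ((state << 1) | fb) & mask  # shift and feed the bit back

    state = step(seed)
    period = 1
    while state != seed:
        state = step(state)
        period += 1
    return period
```

For a primitive connection polynomial the period is maximal (here 31) regardless of the nonzero seed, which is exactly why taps are chosen from primitive polynomials in practice.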
cryptography pseudo random generators shift register
1
1,122
Is it viable to use an HMM to evaluate how well a catalogue is used?
<p>I was interested in evaluating a catalogue that students would be using, to observe probabilistically how it is being used. </p>&#xA;&#xA;<p>The catalogue works by choosing cells in a temporal sequence, so for example:</p>&#xA;&#xA;<ul>&#xA;<li>Student A has: ($t_1$,$Cell_3$),($t_2$,$Cell_4$)</li>&#xA;<li>Student B has: $(t_1,Cell_5),(t_2,Cell_3),(t_3,Cell_7)$. </li>&#xA;</ul>&#xA;&#xA;<p>Assume that the cells of the table are states of a <a href="https://en.wikipedia.org/wiki/Hidden_Markov_model" rel="nofollow">Hidden Markov Model</a>, so the transition between states would map in the real world to a student going from a given cell to another.</p>&#xA;&#xA;<p>Assuming that the catalogue is nothing more than guidance, a certain kind of phenomenon is expected to occur on a given artifact. Consider this artifact to be unique, say, for example a program. </p>&#xA;&#xA;<p>What happens to this program is a finite list of observations; thus, for a given cell we have a finite list of observations for following the suggestion mentioned in that cell. In an HMM this would then be the probability associated with a state generating a given observation in this artifact. </p>&#xA;&#xA;<p>Finally, consider the catalogue to be structured in a way that, initially, the probability to start in a given cell is expected to be equal. The catalogue does not suggest any starting point. </p>&#xA;&#xA;<ul>&#xA;<li><p><strong>Question 1</strong>: Is the mapping between the catalogue and the HMM appropriate?</p></li>&#xA;<li><p><strong>Question 2</strong>: Assuming question 1 holds true, consider now that we train the HMM using as entries $(t_1,Cell_1), (t_2,Cell_3) , ... (t_n,Cell_n)$ for the students. Would the trained HMM, once asked to generate the most likely sequence of state transitions, yield as a result the way the catalogue was most used by the people in a given experiment $\epsilon$? </p></li>&#xA;</ul>&#xA;
probability theory empirical research modelling hidden markov models
1
1,124
How to devise an algorithm that suggests feasible cooking recipes?
<p>I once had a veteran in my course who created an algorithm that would suggest cooking recipes. At first, all sorts of crazy recipes would come out. Then, she would train the cooking algorithm with real recipes and eventually it would suggest very good ones. </p>&#xA;&#xA;<p>I believe she used something related to Bayes' Theorem or clustering, but she is long gone and so is the algorithm. I have searched the internet, but looking for cooking recipes will yield all sorts of results, just not the one I am looking for. So, my question is:</p>&#xA;&#xA;<blockquote>&#xA; <p>What techniques can be used to devise an algorithm that (randomly) suggests feasible recipes (without using a database of fixed recipes)?</p>&#xA;</blockquote>&#xA;&#xA;<p>Why would I bother looking for a cooking algorithm? Well, it was a very good example of a real world application of the underlying concepts, and such an algorithm could be useful in different settings that are closer to the real world.</p>&#xA;
machine learning artificial intelligence modelling recommendation systems
0
1,134
How does the NegaScout algorithm work?
<p>On Alpha-Beta pruning, <a href="http://en.wikipedia.org/wiki/Negascout" rel="noreferrer">NegaScout</a> claims that it can accelerate the process by setting [Alpha,Beta] to [Alpha,Alpha-1].</p>&#xA;&#xA;<p>I do not understand the whole process of NegaScout.</p>&#xA;&#xA;<p>How does it work? What is its recovery mechanism when its guessing failed?</p>&#xA;
algorithms algorithm analysis search algorithms
0
1,137
What is the difference between "Decision" and "Verification" in complexity theory?
<p>In Michael Sipser's <em>Theory of Computation</em> on page 270 he writes:</p>&#xA;&#xA;<blockquote>&#xA; <p>P = the class of languages for which membership can be decided quickly.<br>&#xA; NP = the class of languages for which membership can be verified quickly.</p>&#xA;</blockquote>&#xA;&#xA;<p>What is the difference between "decided" and "verified"?</p>&#xA;
complexity theory terminology decision problem
1
1,138
For every computable function $f$ does there exist a problem that can be solved at best in $\Theta(f(n))$ time?
<p>For every computable function $f$ does there exist a problem that can be solved at best in $\Theta(f(n))$ time or is there a computable function $f$ such that every problem that can be solved in $O(f(n))$ can also be solved in $o(f(n))$ time?</p>&#xA;&#xA;<p>This question popped into my head yesterday. I've been thinking about it for a bit now, but can't figure it out. I don't really know how I'd google for this, so I'm asking here. Here's what I've come up with:</p>&#xA;&#xA;<p>My first thought was that the answer is yes: For every computable function $f$ the problem "Output $f(n)$ dots" (or create a string with $f(n)$ dots or whatever) can obviously not be solved in $o(f(n))$ time. So we only need to show that it can be solved in $O(f(n))$ time. No problem, just take the following pseudo code:</p>&#xA;&#xA;<pre><code>x = f(n)&#xA;for i from 1 to x:&#xA; output(".")&#xA;</code></pre>&#xA;&#xA;<p>Clearly that algorithm solves the stated problem. And its runtime is obviously in $\Theta(f(n))$, so problem solved. That was easy, right? Except no, it isn't, because you have to consider the cost of the first line. The above algorithm's runtime is only in $\Theta(f(n))$ if the time needed to calculate $f(n)$ is in $O(f(n))$. Clearly that's not true for all functions<sup>1</sup>.</p>&#xA;&#xA;<p>So this approach didn't get me anywhere. I'd be grateful for anyone pointing me in the right direction to figure this out properly.</p>&#xA;&#xA;<hr>&#xA;&#xA;<p><sup>1</sup> Consider for example the function $p(n) = \cases{1 &amp; \text{if $n$ is prime} \\ 2 &amp; \text{otherwise}}$. Clearly $O(p(n)) = O(1)$, but there is no algorithm that calculates $p$ in $O(1)$ time.</p>&#xA;
complexity theory
1
1,142
A continuous optimization problem that reduces to TSP
<p>Suppose I am given a finite set of points $p_1,p_2,\dots,p_n$ in the plane, and asked to draw a twice-differentiable curve $C(P)$ through the $p_i$'s, such that its perimeter is as small as possible. Assuming $p_i=(x_i,y_i)$ and $x_i&lt;x_{i+1}$, I can formalize this problem as:</p>&#xA;&#xA;<p><i> Problem 1 (edited in response to Suresh's comments) </i>Determine $C^2$ functions $x(t),y(t)$ of a parameter $t$ such that the arclength $L = \int_0^1 \sqrt{x'^2+y'^2}\,dt$ is minimized, with $x(0) = x_1$, $x(1) = x_n$, and for all $t_i$ with $x(t_i) = x_i$ we have $y(t_i)=y_i$. </p>&#xA;&#xA;<blockquote>&#xA; <p>How do I prove (or perhaps refute) that Problem 1 is NP-hard?</p>&#xA;</blockquote>&#xA;&#xA;<p><i> Why I suspect NP-hardness </i> Suppose the $C^2$ assumption is relaxed. Evidently, the function of minimal arclength is the Travelling Salesman tour of the $p_i$'s. Perhaps the $C^2$ constraint only makes the problem much harder?</p>&#xA;&#xA;<p><i> Context </i> A variant of this problem was posted on <a href="https://math.stackexchange.com/questions/23181/extremal-curve-passing-through-a-set-of-points">MSE</a>. It didn't receive an answer both there and on <a href="https://mathoverflow.net/questions/58885/extremal-curves-with-a-should-pass-through-constraint">MO</a>. Given that it's nontrivial to solve the problem, I want to establish how hard it is. </p>&#xA;
complexity theory np hard optimization computable analysis
1
1,143
Finding a 5-Pointed Star in polynomial time
<p><strong>I want to establish that this is part of my homework for a course I am currently taking. I am looking for some assistance in proceeding, NOT AN ANSWER.</strong></p>&#xA;&#xA;<p>This is the question in question:</p>&#xA;&#xA;<blockquote>&#xA; <p>A 5-pointed-star in an undirected graph is a 5-clique. Show that&#xA; 5-POINTED-STAR $\in P$, where 5-POINTED-STAR = $\{ &lt;G&gt;$ $: G$ contains a&#xA; 5-pointed-star as a subgraph $\}$.</p>&#xA;</blockquote>&#xA;&#xA;<p>Where a clique is CLIQUE = $\{(G, k) : G$ is an undirected graph $G$ with a $k$-clique $\}$.</p>&#xA;&#xA;<p>Now my problem is that this appears to be solving the CLIQUE problem, determining whether a graph contains a clique, with the additional constraint of having to determine that the clique forms a 5-pointed star. This seems to involve some geometric calculation based on knowledge of a <a href="http://www.ehow.com/about_4606571_geometry-fivepoint-star.html">5-pointed star</a>. However, in Michael Sipser's <em>Theory of Computation</em>, pg 268, there is a proof showing that CLIQUE is in $NP$, and on page 270 he notes that,</p>&#xA;&#xA;<blockquote>&#xA; <p><em>We have presented examples of languages, such as HAMPATH and CLIQUE,&#xA; <strong>that are members of NP but that are not known to be in $P$.</strong></em> [emphasis added]</p>&#xA;</blockquote>&#xA;&#xA;<p>If CLIQUE is not known to be in $P$, why would 5-POINTED-STAR be in $P$? Is there something I'm not seeing?&#xA;<strong>Remember, this is a HOMEWORK PROBLEM and A DIRECT ANSWER WOULD NOT BE APPRECIATED.</strong> Thanks!</p>&#xA;
complexity theory time complexity
1
1,147
Complexity of computing matrix powers
<p>I am interested in calculating the $n$'th power of an $n\times n$ matrix $A$. Suppose we have an algorithm for matrix multiplication which runs in $\mathcal{O}(M(n))$ time. Then, one can easily calculate $A^n$ in $\mathcal{O}(M(n)\log(n))$ time. Is it possible to solve this problem with lower time complexity?</p>&#xA;&#xA;<p>Matrix entries can, in general, be from a semiring, but you can assume additional structure if it helps.</p>&#xA;&#xA;<p>Note: I understand that in general computing $A^m$ in $o(M(n)\log(m))$ time would give a $o(\log m)$ algorithm for exponentiation. But a number of interesting problems reduce to the special case of matrix exponentiation where $m = \mathcal{O}(n)$, and I was not able to prove the same about this simpler problem.</p>&#xA;
algorithms complexity theory time complexity computer algebra
0
1,150
Is there evidence that using dynamic languages has an impact on productivity?
<p>I am wondering if there are any experiments that show the existence or non-existence of a correlation between the use of a dynamic language (such as Python, Ruby, or even languages that run on the Java platform, such as Groovy or Clojure) rather than a static language (such as C/C++), and a difference in productivity.</p>&#xA;
programming languages empirical research software engineering
0
1,151
Where to get graphs to test my search algorithms against?
<p>I am implementing a set of path finding algorithms such as Dijkstra's, Depth First, etc.</p>&#xA;&#xA;<p>At first I used a couple of self made graphs, but now I'd like to take the challenge a bit further and thus I'm looking for either</p>&#xA;&#xA;<ol>&#xA;<li>graphs used in benchmarks;</li>&#xA;<li>graphs of real world cities (or a way to download that kind of info off google maps, or any other kind of source, if possible).</li>&#xA;</ol>&#xA;&#xA;<p>I'd like those sources to either have or allow me to easily create frontiers such that I can try my algorithms for different sized sets of graphs, if possible.</p>&#xA;&#xA;<p>I'm looking for simple solutions, as I'd prefer not to be diverted from main goal (compare a set of different algorithms), so I'd need a quick way to convert that graph data into my own format (basically, a set of connected <code>(x, y)</code> points).</p>&#xA;&#xA;<p>To be more concrete, what I'm looking for are 2D cyclic graphs. If those graphs reflect real world city streets (taking into consideration one-way streets, two-way streets, etc, better yet!).</p>&#xA;
algorithms graphs data sets benchmarking
0
1,154
How can I measure the usability of a catalogue?
<p>This question might seem vague, but here's the context:</p>&#xA;&#xA;<p>When we are focusing on HCI, we would most likely be interested first in knowing how the user usually deals with a certain object. We then try to see how our system could take away one of the tasks he would do himself and try to do it itself. </p>&#xA;&#xA;<ul>&#xA;<li><p>The object of my interest here is a simple paper catalogue. How would you measure its usability (the paper one)? </p></li>&#xA;<li><p>Then, how would you map it to a system interface? How would you measure the usability now on the system?</p></li>&#xA;<li><p>How would you compare the two usability measures?</p></li>&#xA;</ul>&#xA;&#xA;<p>This question narrows down this approach, which is suggested in Stone's book - User Interface and Evaluation. </p>&#xA;&#xA;<p>What the catalogue is about is not the point; that's why I left it without a description: to avoid suggestions trying to measure what the catalogue is about. My focus here is on the particular mapping of this kind of object in the real world, as simple paper, and when it is mapped to a system interface. Assume the catalogue to consist of rows and columns, where each matching row and column gives you a suggestion, and you must first reason about each row and each column to see if it suits you (Perhaps you would suggest another template for the catalogue?).</p>&#xA;
empirical research modelling hci
1
1,156
Introductory books on nature sciences behind bioinformatics
<p>My question goes to those who are concerned with computational biology algorithmics. I'm going to take a course on bioinformatics this fall; the problem, however, is that I have too little background in biology and chemistry to feel prepared for that cycle of lectures (I was rather weak at these subjects in school).</p>&#xA;&#xA;<p>Could you recommend any books that would provide a good introduction to the questions of the natural sciences that bioinformatics focuses on?</p>&#xA;
algorithms reference request education bioinformatics
0
1,157
Loop invariant for an algorithm
<p>I have developed the following pseudocode for the sum of pairs problem:</p>&#xA;&#xA;<blockquote>&#xA; <p>Given an array $A$ of integers and an integer $b$, return YES if there are positions $i,j$ in $A$ with $A[i] + A[j] = b$, NO otherwise.</p>&#xA;</blockquote>&#xA;&#xA;<p>Now I should state a loop invariant that shows that my algorithm is correct. Can someone give me a hint of a valid loop invariant? </p>&#xA;&#xA;<pre><code>PAIRSUM(A,b):&#xA;YES := true;&#xA;NO := false;&#xA;n := length(A);&#xA;if n&lt;2 then&#xA; return NO;&#xA;&#xA;SORT(A);&#xA;i := 1;&#xA;j := n;&#xA;while i &lt; j do // Here I should state my invariant&#xA; currentSum := A[i] + A[j];&#xA; if currentSum = b then&#xA; return YES;&#xA; else &#xA; if currentSum &lt; b then&#xA; i := i + 1;&#xA; else&#xA; j := j – 1;&#xA;return NO;&#xA;</code></pre>&#xA;
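For reference, the pseudocode translates into the runnable sketch below (deliberately stating no invariant, since finding one is the exercise):

```python
def pair_sum(a, b):
    """Two-pointer scan over a sorted copy of a: returns True iff there
    are positions i < j with a[i] + a[j] == b."""
    a = sorted(a)
    if len(a) < 2:
        return False
    i, j = 0, len(a) - 1
    while i < j:
        s = a[i] + a[j]
        if s == b:
            return True
        if s < b:
            i += 1
        else:
            j -= 1
    return False
```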
algorithms proof techniques loop invariants
0
1,176
HALF CLIQUE - NP Complete Problem
<p>Let me start off by noting <strong>this is a homework problem, please provide only advice and related observations, NO DIRECT ANSWERS please</strong>. With that said, here is the problem I am looking at:</p>&#xA;&#xA;<blockquote>&#xA; <p>Let HALF-CLIQUE = { $\langle G \rangle$ | $G$ is an undirected graph having a complete&#xA; subgraph with at least $n/2$ nodes, where n is the number of nodes in $G$&#xA; }. Show that HALF-CLIQUE is NP-complete.</p>&#xA;</blockquote>&#xA;&#xA;<p>Also, I know the following:</p>&#xA;&#xA;<ul>&#xA;<li>In terms of this problem a <em>clique</em> is defined as an undirected subgraph of the input graph, wherein every two nodes are connected by an edge. A <em>$k$-clique</em> is a clique that contains $k$ nodes.</li>&#xA;<li>According to our textbook, Michael Sipser's "<em>Introduction to the Theory of Computation</em>", pg 268, the problem CLIQUE = {$\langle G,k\rangle$ | $G$ is an undirected graph with a $k$-clique} is in NP.</li>&#xA;<li>Furthermore, according to the same source (on pg 283), CLIQUE is NP-complete (thus also obviously in NP).</li>&#xA;</ul>&#xA;&#xA;<p>I think I have the kernel of an answer here, but I could use <em>some indication of what is wrong with it or any related points that might be relevant to an answer</em>. This is the general idea I have so far:</p>&#xA;&#xA;<blockquote>&#xA; <p>Ok, I'd first note that a certificate would simply be a clique of size $\geq n/2$. Now it appears that what I would need to do is to give a polynomial time reduction from CLIQUE (which we know is NP-complete) to HALF-CLIQUE. My idea would be that this would be done by creating a Turing machine which runs the Turing machine verifier in the book for CLIQUE with the additional constraint for HALF-CLIQUE.</p>&#xA;</blockquote>&#xA;&#xA;<p>This sounds correct to me, but I don't really trust myself yet in this subject. Once again, I would like to remind everyone <strong>this is a HOMEWORK PROBLEM</strong>, so please try to avoid answering the question. Any guidance which falls short of this would be most welcome! </p>&#xA;
complexity theory np complete reductions
1
1,200
How to deal with arrays during Hoare-style correctness proofs
<p>In the discussion around <a href="https://cs.stackexchange.com/q/1157/98">this question</a>, Gilles mentions correctly that any correctness proof of an algorithm that uses arrays has to prove that there are no out-of-bounds array accesses; depending on the runtime model, these would cause a runtime error or access to non-array elements.</p>&#xA;&#xA;<p>One common technique to perform such correctness proofs (at least in undergrad studies and probably in automated verification) is by using <a href="https://en.wikipedia.org/wiki/Hoare_logic" rel="nofollow noreferrer">Hoare logic</a>. I am not aware that the standard set of rules contains anything relating to arrays; they seem to be restricted to monadic variables.</p>&#xA;&#xA;<p>I can imagine adding axioms of the form</p>&#xA;&#xA;<p>$\qquad \displaystyle \frac{}{\{0 \leq i \lt A.\mathrm{length} \land {P[A[i]/E]} \}\ A[i] := E;\ \{P\}}$</p>&#xA;&#xA;<p>However, it is not clear to me how you would deal with an array access on the right hand side, i.e. if it is part of a complex expression $E$ in some statement $x := E$.</p>&#xA;&#xA;<blockquote>&#xA; <p>How can arrays accesses be modelled in Hoare logic so that the absence of invalid accesses can and has to be proven for program correctness?</p>&#xA;</blockquote>&#xA;&#xA;<p>Answers may assume that we disallow array elements to be used in statements other than $A[i] := E$ or as part of some $E$ in $x := E$ as this does not restrict expressiveness; we can always assign a temporary variable the desired value, i.e. write $t := A[i];\ \mathtt{if} ( t &gt; 0 ) \dots$ instead of $\mathtt{if} ( A[i] &gt; 0 )\dots$.</p>&#xA;
proof techniques semantics arrays hoare logic software verification
1
1,217
How to devise an algorithm to arrange (resizable) windows on the screen to cover as much space as possible?
<p>I would like to write a simple program that accepts a set of windows (width+height) and the screen resolution and outputs an arrangement of those windows on the screen such that the windows take the most space. It is permissible to resize a window, as long as <code>output size &gt;= initial size</code> and the aspect ratio are maintained. So for window $i$, I'd like the algorithm to return a tuple $(x, y, width, height)$.</p>&#xA;&#xA;<p>I believe this might be a variation of 2D Knapsack. I've tried going over results around the web, but they mostly had a lot of background (and no implementation), which made it hard for me to follow.</p>&#xA;&#xA;<p>I'm less interested in the fastest possible algorithm, and more in something that is practical for my specific need.</p>&#xA;
algorithms computational geometry packing user interface
0
1,218
Lower bounds of calculating a function of a set
<p>Having a set $A$ of $n$ elements, let's say I want to calculate a function $f(A)$ that is sensitive to all parts of the input, i.e. depends on every member of $A$ (i.e. it is possible to change any member of $A$ to something else to obtain a new input $A&#39;$ s.t. the values of $f$ on $A$ and $A&#39;$ are different).</p>&#xA;&#xA;<p>For example, $f$ could be the sum or the average.</p>&#xA;&#xA;<p>Is there a result that proves that, under some conditions, the time necessary for a deterministic Turing machine to compute $f$ will be $\Omega(n)$?</p>&#xA;
complexity theory
1
1,223
The space complexity of recognising Watson-Crick palindromes
<p>I have the following algorithmic problem:</p>&#xA;&#xA;<blockquote>&#xA; <p>Determine the space Turing complexity of recognizing DNA strings that are Watson-Crick palindromes. </p>&#xA;</blockquote>&#xA;&#xA;<p>Watson-Crick palindromes are strings whose reversed complement is the original string. The <em>complement</em> is defined letter-wise, inspired by DNA: A is the complement of T and C is the complement of G. A simple example of a WC-palindrome is ACGT.</p>&#xA;&#xA;<p>I've come up with two ways of solving this.</p>&#xA;&#xA;<p><strong>One requires $\mathcal{O}(n)$ space.</strong></p>&#xA;&#xA;<ul>&#xA;<li>Once the machine is done reading the input, the input tape is copied to the work tape in reverse order. </li>&#xA;<li>The machine then reads the input and work tapes from the left and compares each entry to verify that each cell of the work tape is the complement of the corresponding cell of the input. This requires $\mathcal{O}(n)$ space. </li>&#xA;</ul>&#xA;&#xA;<p><strong>The other requires $\mathcal{O}(\log n)$ space.</strong></p>&#xA;&#xA;<ul>&#xA;<li>While reading the input, count the number of entries on the input tape.</li>&#xA;<li>When the input tape has been read&#xA;<ul>&#xA;<li>copy the complement of the letter onto the work tape</li>&#xA;<li>copy the letter L to the end of the work tape</li>&#xA;</ul></li>&#xA;<li>(Loop point) If the counter = 0, clear the work tape and write yes, then halt</li>&#xA;<li>If the input tape reads L&#xA;<ul>&#xA;<li>Move the input head to the left by the number of times indicated by the counter (requires a second counter)</li>&#xA;</ul></li>&#xA;<li>If the input tape reads R&#xA;<ul>&#xA;<li>Move the input head to the right by the number of times indicated by the counter (requires a second counter)</li>&#xA;</ul></li>&#xA;<li>If the cell that holds the value on the work tape matches the current cell on the input tape&#xA;<ul>&#xA;<li>decrement the counter by two</li>&#xA;<li>Move one cell to the left or right, depending on whether R or L is on the work tape, respectively</li>&#xA;<li>copy the complement of L or R to the work tape in place of the current L or R</li>&#xA;<li>continue the loop</li>&#xA;</ul></li>&#xA;<li>If the values don't match, clear the work tape and write no, then halt</li>&#xA;</ul>&#xA;&#xA;<p>This comes out to about $2\log n+2$ space for storing both counters, the current complement, and the value L or R.</p>&#xA;&#xA;<p><strong>My issue</strong></p>&#xA;&#xA;<p>The first one requires both linear time and space. The second one requires $\frac{n^2}{2}$ time and $\log n$ space. I was given the problem quoted above and came up with these two approaches, but I don't know which one to go with. I just need to give the space complexity of the problem. </p>&#xA;&#xA;<p><strong>The reason I'm confused</strong></p>&#xA;&#xA;<p>I would tend to say the second one is the best option since it's better in terms of time, but that answer only comes from me getting lucky and coming up with an algorithm. 
It seems that determining the space complexity of a problem shouldn't hinge on luckily finding the right algorithm. Am I missing something? Do I even need to come up with a concrete algorithm in order to answer the space-complexity question?</p>&#xA;
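For concreteness, here is a sketch of the comparison logic that both machines implement (an illustration only — it checks the reverse-complement property directly rather than simulating tapes and counters):

```python
# Sketch of the WC-palindrome check itself (assumption: this mirrors the
# comparison logic of the machines above, not the tape mechanics).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def is_wc_palindrome(s: str) -> bool:
    """True iff the reversed complement of s equals s."""
    n = len(s)
    # Compare position i against position n-1-i, as the two-counter
    # machine does with its left/right head moves.
    return all(COMPLEMENT[s[i]] == s[n - 1 - i] for i in range(n))

print(is_wc_palindrome("ACGT"))  # True: the reverse complement of ACGT is ACGT
print(is_wc_palindrome("ACGG"))  # False
```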
algorithms algorithm analysis turing machines space complexity
1
1,225
Attack on hash functions that do not satisfy the one-way property
<p>I am revising for a computer security course and I am stuck on one of the past questions. Here it is:</p>&#xA;&#xA;<blockquote>&#xA; <p>Alice ($A$) wants to send a short message $M$ to Bob ($B$) using a shared secret $S_{ab}$ to authenticate that the message has come from her. She proposes to send a single message with two pieces:&#xA; $$ A \to B: \quad M, h(M \mathbin\parallel S_{ab})$$&#xA; where $h$ is a hash function and $\parallel$ denotes concatenation.</p>&#xA; &#xA; <ol>&#xA; <li>Explain carefully what Bob does to check that the message has come from Alice, and why (apart from properties of $h$) he may believe this.</li>&#xA; <li>Suppose that $h$ does not satisfy the one-way property and it is possible to generate pre-images. Explain what an attacker can do and how.</li>&#xA; <li>If generating pre-images is comparatively time-consuming, suggest a simple countermeasure to improve the protocol without changing $h$.</li>&#xA; </ol>&#xA;</blockquote>&#xA;&#xA;<p>I think I know the first one. Bob needs to take a hash of the received message along with his shared key and compare that hash with the hash received from Alice; if they match, this should prove Alice sent it.</p>&#xA;&#xA;<p>I am not sure about the last two questions, though. For the second one, would the answer be that an attacker can simply obtain the original message given a hash? I'm not sure how that would be done, though.</p>&#xA;
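The check described for part 1 can be sketched as follows (assuming SHA-256 stands in for $h$; the question leaves $h$ abstract):

```python
# Sketch of part 1 (assumption: h is SHA-256, which the question does not fix).
import hashlib

def tag(message: bytes, shared_secret: bytes) -> bytes:
    """Alice's authenticator: h(M || S_ab)."""
    return hashlib.sha256(message + shared_secret).digest()

def bob_verifies(message: bytes, received_tag: bytes, shared_secret: bytes) -> bool:
    # Bob recomputes h(M || S_ab) from the received M and his copy of S_ab,
    # then compares it with the tag Alice sent.
    return tag(message, shared_secret) == received_tag

secret = b"s3cret"
m = b"short message"
print(bob_verifies(m, tag(m, secret), secret))           # True
print(bob_verifies(b"tampered", tag(m, secret), secret)) # False
```

(In production code the comparison would use a constant-time function such as `hmac.compare_digest`, but that detail is beside the point of the exercise.)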
cryptography hash one way functions
1
1,226
Are asymptotic lower bounds relevant to cryptography?
<p>An asymptotic lower bound such as exponential hardness is generally thought to imply that a problem is "inherently difficult". Encryption that is "inherently difficult" to break is thought to be secure. </p>&#xA;&#xA;<p>However, an asymptotic lower bound does not rule out the possibility that a huge but finite class of problem instances are easy (e.g., all instances with size less than $10^{1000}$).</p>&#xA;&#xA;<p>Is there any reason to think that cryptography being based on asymptotic lower bounds would confer any particular level of security? Do security experts consider such possibilities, or are they simply ignored? </p>&#xA;&#xA;<p>An example is the use of trap-door functions based on the decomposition of large numbers into their prime factors. This was at one point thought to be inherently difficult (I think that exponential was the conjecture) but now many believe that there may be a polynomial algorithm (as there is for primality testing). No one seems to care very much about the lack of an exponential lower bound.</p>&#xA;&#xA;<p>I believe that other trap-door functions have been proposed that are thought to be NP-hard (see <a href="https://cs.stackexchange.com/q/356/98">related question</a>), and some may even have a proven lower bound. My question is more fundamental: does it matter what the asymptotic lower bound is? If not, is the practical security of any cryptographic code at all related to any asymptotic complexity result?</p>&#xA;
complexity theory cryptography asymptotics
0
1,229
Why does the splay tree rotation algorithm take into account both the parent and grandparent node?
<p>I don't quite understand why the rotation in the splay tree data structure takes into account not only the parent of the node being rotated, but also the grandparent (the zig-zag and zig-zig operations). Why would the following not work?</p>&#xA;&#xA;<p>As we insert, for instance, a new node into the tree, we check whether we insert into the left or right subtree. If we insert into the left, we rotate the result RIGHT, and vice versa for the right subtree. Recursively, it would be something like this:</p>&#xA;&#xA;<pre><code>Tree insert(Tree root, Key k){&#xA; if(k &lt; root.key){&#xA; root.setLeft(insert(root.getLeft(), k));&#xA; return rotateRight(root);&#xA; }&#xA; //vice versa for right subtree&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>That should avoid the whole "splay" procedure, don't you think?</p>&#xA;
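A quick experiment shows what goes wrong with the single-rotation ("rotate to root") scheme: inserting ascending keys leaves a degenerate path, so a later access costs $\Theta(n)$. A sketch (the function names are made up for illustration):

```python
# Sketch (assumption: a plain BST where every insert is followed by rotating
# the new node to the root, i.e. the single-rotation scheme from the question).
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert_rotate_to_root(root, key):
    """BST insert followed by rotating the new node all the way to the root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert_rotate_to_root(root.left, key)
        new_root = root.left            # single right rotation
        root.left = new_root.right
        new_root.right = root
    else:
        root.right = insert_rotate_to_root(root.right, key)
        new_root = root.right           # single left rotation
        root.right = new_root.left
        new_root.left = root
    return new_root

def depth_of(root, key):
    d = 0
    while root.key != key:
        root = root.left if key < root.key else root.right
        d += 1
    return d

root = None
for k in range(1, 1001):                # ascending inserts
    root = insert_rotate_to_root(root, k)
print(depth_of(root, 1))                # 999: the tree is a path, not balanced
```

Splaying with zig-zig steps roughly halves the depth of every node on the access path, which is exactly what single rotations fail to do here.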
algorithms data structures binary trees search trees
1
1,231
Efficiently selecting the median and elements to its left and right
<p>Suppose we have a set $S = \{ a_1,a_2,a_3,\ldots , a_N \}$ of $N$ coders.</p>&#xA;&#xA;<p>Each coder has a rating $R_i$ and a number of gold medals $E_i$ won so far.</p>&#xA;&#xA;<p>A software company wants to hire exactly three coders to develop an application.</p>&#xA;&#xA;<p>For hiring three coders, they developed the following strategy:</p>&#xA;&#xA;<ol>&#xA;<li>They first arrange the coders in ascending order of ratings and descending order of gold medals.</li>&#xA;<li>From this arranged list, they select the three middle coders.&#xA;E.g., if the arranged list is $(a_5,a_2,a_3,a_1,a_4)$ they select the coders $(a_2,a_3,a_1)$.</li>&#xA;</ol>&#xA;&#xA;<p>Now we have to help the company by writing a program for this task.</p>&#xA;&#xA;<p><strong>Input:</strong></p>&#xA;&#xA;<p>The first line contains $N$, i.e. the number of coders.</p>&#xA;&#xA;<p>The second line contains the rating $R_i$ of the $i$th coder.</p>&#xA;&#xA;<p>The third line contains the number of gold medals bagged by the $i$th coder.</p>&#xA;&#xA;<p><strong>Output:</strong></p>&#xA;&#xA;<p>Display only one line that contains the sum of gold medals earned by the three coders the company will select.</p>&#xA;
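The selection step can be sketched as follows (assuming "ascending ratings, descending medals" means sorting by the key (rating, −medals), and "three middle" means the middle three positions of the sorted list for odd $N$, as in the example):

```python
# Sketch (assumptions stated in the lead-in; N is odd as in the example).
def medals_of_middle_three(ratings, medals):
    # Sort coder indices by (rating ascending, medals descending).
    order = sorted(range(len(ratings)), key=lambda i: (ratings[i], -medals[i]))
    mid = len(order) // 2
    # Sum the medals of the three coders in the middle of the arranged list.
    return sum(medals[i] for i in order[mid - 1 : mid + 2])

# 5 coders a1..a5 whose ascending rating order is a5, a2, a3, a1, a4,
# matching the example; the middle three are then a2, a3, a1.
ratings = [4, 2, 3, 5, 1]
medals = [10, 20, 30, 40, 50]
print(medals_of_middle_three(ratings, medals))  # 60 = 20 + 30 + 10
```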
algorithms algorithm design
1
1,234
Classification of intractable/tractable satisfiability problem variants
<p>Recently I found in a paper [1] a special symmetric version of SAT called the <strong>2/2/4-SAT</strong>. But there are many $\text{NP}$-complete variants out there, for example: <strong>MONOTONE NAE-3SAT</strong>, <strong>MONOTONE 1-IN-3-SAT</strong>, ...</p>&#xA;&#xA;<p>Some other variants are tractable: $2$-$\text{SAT}$, Planar-NAE-$\text{SAT}$, ...</p>&#xA;&#xA;<p>Are there survey papers (or web pages) that classify all the (weird) $\text{SAT}$ variants that have been proved to be $\text{NP}$-complete (or in $\text{P}$) ?</p>&#xA;&#xA;<hr>&#xA;&#xA;<ol>&#xA;<li><a href="https://www.aaai.org/Papers/AAAI/1986/AAAI86-027.pdf">Finding a shortest solution for the $N$x$N$ extension of the 15-Puzzle is intractable</a> by D. Ratner and M. Warmuth (1986)</li>&#xA;</ol>&#xA;
complexity theory reference request satisfiability
1
1,236
How does variance in task completion time affect makespan?
<p>Let's say that we have a large collection of tasks $\tau_1, \tau_2, ..., \tau_n$ and a collection of identical (in terms of performance) processors $\rho_1, \rho_2, ..., \rho_m$ which operate completely in parallel. For scenarios of interest, we may assume $m \leq n$. Each $\tau_i$ takes some amount of time/cycles to complete once it is assigned to a processor $\rho_j$, and once it is assigned, it cannot be reassigned until completed (processors always eventually complete assigned tasks). Let's assume that each $\tau_i$ takes an amount of time/cycles $X_i$, not known in advance, taken from some discrete random distribution. For this question, we can even assume a simple distribution: $P(X_i = 1) = P(X_i = 5) = 1/2$, and all $X_i$ are pairwise independent. Therefore $\mu_i = 3$ and $\sigma^2 = 4$.</p>&#xA;&#xA;<p>Suppose that, statically, at time/cycle 0, all tasks are assigned as evenly as possible to all processors, uniformly at random; so each processor $\rho_j$ is assigned $n/m$ tasks (we can just as well assume $m | n$ for the purposes of the question). The makespan is the time/cycle at which the last processor to finish, call it $\rho^*$, completes its assigned work. First question:</p>&#xA;&#xA;<blockquote>&#xA; <p>As a function of $m$, $n$, and the $X_i$'s, what is the makespan $M$? Specifically, what is $E[M]$? $Var[M]$?</p>&#xA;</blockquote>&#xA;&#xA;<p>Second question:</p>&#xA;&#xA;<blockquote>&#xA; <p>Suppose $P(X_i = 2) = P(X_i = 4) = 1/2$, and all $X_i$ are pairwise independent, so $\mu_i = 3$ and $\sigma^2 = 1$. As a function of $m$, $n$, and these new $X_i$'s, what is the makespan? More interestingly, how does it compare to the answer from the first part?</p>&#xA;</blockquote>&#xA;&#xA;<p>Some simple thought experiments demonstrate the answer to the latter is that the higher-variance makespan is longer. But how can this be quantified? I will be happy to post an example if this is either (a) controversial or (b) unclear. 
Depending on the success with this one, I will post a follow-up question about a dynamic assignment scheme under these same assumptions. Thanks in advance!</p>&#xA;&#xA;<p><strong>Analysis of an easy case: $m = 1$</strong></p>&#xA;&#xA;<p>If $m = 1$, then all $n$ tasks are scheduled to the same processor. The makespan $M$ is just the time to complete $n$ tasks in a complete sequential fashion. Therefore,&#xA;$$\begin{align*}&#xD;&#xA; E[M]&#xD;&#xA; &amp;= E[X_1 + X_2 + ... + X_n] \\&#xD;&#xA; &amp;= E[X_1] + E[X_2] + ... + E[X_n] \\&#xD;&#xA; &amp;= \mu + \mu + ... + \mu \\&#xD;&#xA; &amp;= n\mu&#xD;&#xA;\end{align*}$$&#xA;and&#xA;$$\begin{align*}&#xD;&#xA; Var[M]&#xD;&#xA; &amp;= Var[X_1 + X_2 + ... + X_n] \\&#xD;&#xA; &amp;= Var[X_1] + Var[X_2] + ... + Var[X_n] \\&#xD;&#xA; &amp;= \sigma^2 + \sigma^2 + ... + \sigma^2 \\&#xD;&#xA; &amp;= n\sigma^2 \\&#xD;&#xA;\end{align*}$$</p>&#xA;&#xA;<p>It seems like it might be possible to use this result to answer the question for $m &gt; 1$; we simply need to find an expression (or close approximation) for $\max(Y_1, Y_2, ..., Y_m)$ where $Y_i = X_{i\frac{n}{m} + 1} + X_{i\frac{n}{m} + 2} + ... + X_{i\frac{n}{m} + \frac{n}{m}}$, a random variable with $\mu_Y = \frac{n}{m}\mu_X$ and $\sigma_Y^2 = \frac{n}{m}\sigma_X^2$. Is this heading in the right direction?</p>&#xA;
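The comparison between the two distributions can also be checked by Monte Carlo simulation, which is a useful sanity check on any closed-form attempt (assumptions: $m \mid n$, i.i.d. task times, static uniform assignment; the trial count is arbitrary):

```python
# Monte Carlo sketch of E[M] under the two distributions from the question.
import random

def simulated_mean_makespan(n, m, values, trials=20000, seed=0):
    rng = random.Random(seed)
    per_proc = n // m                    # assumes m | n, as in the question
    total = 0.0
    for _ in range(trials):
        tasks = [rng.choice(values) for _ in range(n)]
        rng.shuffle(tasks)               # uniformly random static assignment
        loads = [sum(tasks[j * per_proc:(j + 1) * per_proc]) for j in range(m)]
        total += max(loads)              # makespan = last processor to finish
    return total / trials

high_var = simulated_mean_makespan(100, 10, [1, 5])  # sigma^2 = 4
low_var = simulated_mean_makespan(100, 10, [2, 4])   # sigma^2 = 1
# Both exceed the per-processor mean load n*mu/m = 30, and the
# higher-variance distribution yields the larger expected makespan.
print(high_var, low_var)
```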
probability theory scheduling parallel computing
1
1,237
Complexity of Special Case Problems
<p>Often I see a sentence like this while reading texts on Computational Complexity:</p>&#xA;&#xA;<p>"For this special case of $\text{TSP}$" or</p>&#xA;&#xA;<p>"This is a special case of $\text{SAT}$" or</p>&#xA;&#xA;<p>"$k$-$\text{PARTITION}$ is the following special case of $\text{BIN PACKING}$" or</p>&#xA;&#xA;<p>"$\text{SUBSET SUM}$ is a special case of $\text{KNAPSACK}$" ad nauseam.</p>&#xA;&#xA;<p>What I find missing is the criterion for claiming that one problem is a special case of another.<br>&#xA;What is the necessity of classifying one problem as a special case of another? Does a special case always belong to the same complexity class as its 'unspecial' case?</p>&#xA;&#xA;<p>Often this is simply stated with no proof of the relation between these problems.</p>&#xA;&#xA;<p>What requirements must be met for a problem to be a special case of another?</p>&#xA;&#xA;<p>How can I prove that a new language for a problem is a special case of an already existing problem?</p>&#xA;
complexity theory
0
1,240
How do I construct reductions between problems to prove a problem is NP-complete?
<p>I am taking a complexity course and I am having trouble coming up with reductions between NPC problems. How can I find reductions between problems? Is there a general trick that I can use? How should I approach a problem that asks me to prove a problem is NPC?</p>&#xA;
complexity theory np complete proof techniques reductions
0
1,243
What is meant by "solvable by non deterministic algorithm in polynomial time"
<p>In many textbooks NP problems are defined as:</p>&#xA;&#xA;<blockquote>&#xA; <p>Set of all decision problems solvable by non deterministic algorithms in polynomial time</p>&#xA;</blockquote>&#xA;&#xA;<p>I couldn't understand the part "solvable by non deterministic algorithms". Could anyone please explain that?</p>&#xA;
complexity theory terminology nondeterminism
1
1,246
Where can I find good study material on Role Mining?
<p>I need to cover these topics in Role Mining. If anyone knows a good site that summarizes these topics well and explains the concepts clearly, please help out.</p>&#xA;&#xA;<p>Basic role mining problem<br>&#xA;• Delta-approx RMP<br>&#xA;• Min-noise RMP<br>&#xA;• Nature of the RMP problems<br>&#xA;• Mapping RMP to database tiling problem<br>&#xA;• Minimum tiling problem<br>&#xA;• Mapping min-noise RMP to database tiling problem<br>&#xA;• Mapping RMP to minimum biclique cover problem<br></p>&#xA;
education reference request security access control
1
1,255
Ordering elements so that some elements don't come between others
<p>Given an integer $n$ and set of triplets of distinct integers&#xA;$$S \subseteq \{(i, j, k) \mid 1\le i,j,k \le n, i \neq j, j \neq k, i \neq k\},$$&#xA;find an algorithm which either finds a permutation $\pi$ of the set $\{1, 2, \dots, n\}$ such that&#xA;$$(i,j,k) \in S \implies (\pi(j)&lt;\pi(i)&lt;\pi(k)) ~\lor~ (\pi(i)&lt;\pi(k)&lt;\pi(j))$$&#xA;or correctly determines that no such permutation exists. Less formally, we want to reorder the numbers 1 through $n$; each triple $(i,j,k)$ in $S$ indicates that $i$ must appear before $k$ in the new order, but $j$ must not appear between $i$ and $k$.</p>&#xA;&#xA;<p><strong>Example 1</strong></p>&#xA;&#xA;<p>Suppose $n=5$ and $S = \{(1,2,3), (2,3,4)\}$. Then</p>&#xA;&#xA;<ul>&#xA;<li><p>$\pi = (5, 4, 3, 2, 1)$ is <em>not</em> a valid permutation, because $(1, 2, 3)\in S$, but $\pi(1) &gt; \pi(3)$.</p></li>&#xA;<li><p>$\pi = (1, 2, 4, 5, 3)$ is <em>not</em> a valid permutation, because $(1, 2, 3) \in S$ but $\pi(1) &lt; \pi(3) &lt; \pi(5)$.</p></li>&#xA;<li><p>$(2, 4, 1, 3, 5)$ is a valid permutation.</p></li>&#xA;</ul>&#xA;&#xA;<p><strong>Example 2</strong></p>&#xA;&#xA;<p>If $n=5$ and $S = \{(1, 2, 3), (2, 1, 3)\}$, there is no valid permutation. Similarly, there is no valid permutation if $n=5$ and $S = \{(1,2,3), (3,4,5), (2,5,3), (2,1,4)\}$ (I think; may have made a mistake here).</p>&#xA;&#xA;<p><em>Bonus: What properties of $S$ determine whether a feasible solution exists?</em></p>&#xA;
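For small $n$, the formal condition can be checked by brute force over all permutations, which at least gives a reference against which a cleverer algorithm can be tested (a sketch, not an efficient solution):

```python
# Brute-force reference sketch (assumption: feasible for small n only).
from itertools import permutations

def find_valid_order(n, triples):
    """Return a valid permutation of 1..n, or None if none exists."""
    for perm in permutations(range(1, n + 1)):
        pos = {v: idx for idx, v in enumerate(perm)}
        # Each triple (i, j, k) demands: i before k, and j not between them,
        # which is exactly pi(j) < pi(i) < pi(k) or pi(i) < pi(k) < pi(j).
        if all(
            pos[i] < pos[k] and not (pos[i] < pos[j] < pos[k])
            for (i, j, k) in triples
        ):
            return perm
    return None

print(find_valid_order(5, {(1, 2, 3), (2, 3, 4)}))  # (1, 3, 2, 4, 5) is valid
print(find_valid_order(5, {(1, 2, 3), (2, 1, 3)}))  # None: infeasible
```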
algorithms optimization scheduling
1
1,257
Chinese Postman Problem: finding best connections between odd-degree nodes
<p>I am writing a program that solves the <a href="http://en.wikipedia.org/wiki/Route_inspection_problem" rel="nofollow">Chinese Postman Problem</a> (also known as the route inspection problem) in an undirected graph, and I am currently facing the problem of finding the best additional edges to connect the nodes with odd degree, so I can compute an Eulerian circuit.</p>&#xA;&#xA;<p>Considering the size of the graphs to be solved, there might be an enormous number of edge combinations that need to be computed and evaluated.</p>&#xA;&#xA;<p>As an example, suppose there are the odd-degree nodes $A, B, C, D, E, F, G, H$. The best combinations could be:</p>&#xA;&#xA;<ol>&#xA;<li>$AB$, $CD$, $EF$, $GH$</li>&#xA;<li>$AC$, $BD$, $EH$, $FG$</li>&#xA;<li>$AD$, $BC$, $EG$, $FH$</li>&#xA;<li>$AE$ ....</li>&#xA;</ol>&#xA;&#xA;<p>where $AB$ means "edge between node $A$ and node $B$".</p>&#xA;&#xA;<p>Therefore my question is: is there a known algorithm to solve that problem with a complexity better than pure brute force (computing and evaluating them all)?</p>&#xA;&#xA;<p>Edit: After some research effort I found <a href="http://web.mit.edu/urban_or_book/www/book/chapter6/6.4.4.html" rel="nofollow">this</a> article, which mentions "Edmonds' minimum-length matching algorithm", but I cannot find any pseudo-code or learner-friendly descriptions of this algorithm (or at least I do not recognize them, as Google offers a lot of hits on matching algorithms by J. Edmonds).</p>&#xA;
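Short of implementing Edmonds' blossom algorithm, the minimum-weight pairing of the odd-degree nodes can be found exactly by dynamic programming over subsets: $O(2^k k^2)$ for $k$ odd nodes, far better than enumerating all $(k-1)!!$ pairings. A sketch (assuming `dist` already holds the pairwise shortest-path distances between odd nodes, e.g. computed by Floyd–Warshall):

```python
# DP-over-subsets sketch for the minimum-weight perfect pairing of the
# odd-degree nodes (assumption: dist[u][v] are precomputed shortest paths).
from functools import lru_cache

def min_weight_pairing(odd_nodes, dist):
    k = len(odd_nodes)

    @lru_cache(maxsize=None)
    def best(mask):
        if mask == 0:
            return 0.0
        # Pair the lowest unmatched node with every remaining candidate.
        i = next(b for b in range(k) if mask & (1 << b))
        rest = mask & ~(1 << i)
        return min(
            dist[odd_nodes[i]][odd_nodes[j]] + best(rest & ~(1 << j))
            for j in range(k)
            if rest & (1 << j)
        )

    return best((1 << k) - 1)

# 4 odd nodes with pairwise distances; best pairing is (0,1) + (2,3) = 2 + 1.
d = {0: {1: 2, 2: 9, 3: 9}, 1: {0: 2, 2: 9, 3: 9},
     2: {0: 9, 1: 9, 3: 1}, 3: {0: 9, 1: 9, 2: 1}}
print(min_weight_pairing([0, 1, 2, 3], d))  # 3.0
```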
algorithms graphs
0
1,259
A lambda calculus evaluation involving Church numerals
<p>I understand that a <a href="http://en.wikipedia.org/wiki/Church_encoding">Church numeral</a> $c_n$ looks like $\lambda s. \lambda z. s$ (... n times ...) $s\;z$. This means nothing more than "the function $s$ applied $n$ times to the function $z$".</p>&#xA;&#xA;<p>A possible definition of the $\mathtt{times}$ function is the following: $\mathtt{times} = \lambda m. \lambda n. \lambda s. m \; (n\; s)$. Looking at the body, I understand the logic behind the function. However, when I start evaluating, I get stuck. I will illustrate it with an example:</p>&#xA;&#xA;<p>$$\begin{align*}&#xD;&#xA; (\lambda m. \lambda n. \lambda s. m \; (n\; s))(\lambda s.\lambda z.s\;s\;z)(\lambda s.\lambda z.s\;s\;s\;z) \mspace{-4em} \\&#xD;&#xA; \to^*&amp; \lambda s. (\lambda s.\lambda z.s\;s\;z) \; ((\lambda s.\lambda z.s\;s\;s\;z)\; s)) \\&#xD;&#xA; \to^*&amp; \lambda s. (\lambda s.\lambda z.s\;s\;z) \; (\lambda z.s\;s\;s\;z) \\&#xD;&#xA; \to^*&amp; \lambda s. \lambda z.(\lambda z.s\;s\;s\;z)\;(\lambda z.s\;s\;s\;z)\;z&#xD;&#xA;\end{align*}$$</p>&#xA;&#xA;<p>Now in this situation, if I first apply $(\lambda z.s\;s\;s\;z)\;z$, I get to the desired result. However, if I apply $(\lambda z.s\;s\;s\;z)\;(\lambda z.s\;s\;s\;z)$ first, as I should because application is associative from the left, I get a wrong result:</p>&#xA;&#xA;<p>$\lambda s. \lambda z.(\lambda z.s\;s\;s\;z)\;(\lambda z.s\;s\;s\;z)\;z \to \lambda s. \lambda z.(s\;s\;s\;(\lambda z.s\;s\;s\;z))\;\;z$</p>&#xA;&#xA;<p>I can no longer reduce this. What am I doing wrong? The result should be $\lambda s. \lambda z.s\;s\;s\;s\;s\;s\;z$</p>&#xA;
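Encoding the same terms as Python lambdas can help check a reduction mechanically (a sketch; `to_int` decodes a numeral by applying "add one" to 0):

```python
# Church numerals and times as Python lambdas, mirroring the question's
# definitions (assumption: Python's applicative-order evaluation is fine
# here because every reduction terminates).
zero = lambda s: lambda z: z
two = lambda s: lambda z: s(s(z))
three = lambda s: lambda z: s(s(s(z)))
times = lambda m: lambda n: lambda s: m(n(s))  # m applications of (n s)

def to_int(c):
    # Decode a Church numeral by applying the successor function c times to 0.
    return c(lambda x: x + 1)(0)

print(to_int(times(two)(three)))  # 6: s is applied 2 * 3 times
```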
lambda calculus church numerals
1
1,268
Efficient queriable data structure to represent a screen with windows on it
<p>(this is related to my other question, see <a href="https://cs.stackexchange.com/questions/1217/how-to-devise-an-algorithm-to-arrange-resizable-windows-on-the-screen-to-cover">here</a>)</p>&#xA;&#xA;<p>Imagine a screen, with 3 windows on it:</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/vVUl3.jpg" alt="enter image description here"></p>&#xA;&#xA;<p>I'd like to find an efficient data structure to represent this, while supporting these actions:</p>&#xA;&#xA;<ul>&#xA;<li>return a list of coordinates where a given window can be positioned without overlapping with others&#xA;&#xA;<ul>&#xA;<li>for the above example, if we want to insert a window of size 2x2, possible positions will be (8, 6), (8, 7), ..</li>&#xA;</ul></li>&#xA;<li>resizing a window on the screen without overlapping other windows while maintaining aspect ratio</li>&#xA;<li>insert window at position x, y (assuming it doesn't overlap)</li>&#xA;</ul>&#xA;&#xA;<p>Right now my naive approach is keeping an array of windows and going over all points on the screen, checking for each one if it's in any of the windows. This is $O(n\cdot m\cdot w)$ where $n, m$ are the width, height of the screen and $w$ is the number of windows in it. Note that in general $w$ will be small (say &lt; 10) where each window is taking a lot of space.</p>&#xA;
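The naive approach described above can be sketched as follows (assuming integer coordinates and axis-aligned windows given as (x, y, width, height) tuples):

```python
# Naive O(n * m * w) sketch from the question's description.
def overlaps(a, b):
    """Axis-aligned rectangle intersection test for (x, y, w, h) tuples."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def free_positions(screen_w, screen_h, windows, w, h):
    # Try every top-left corner on the screen; keep those where a w-by-h
    # window would overlap none of the existing windows.
    return [
        (x, y)
        for x in range(screen_w - w + 1)
        for y in range(screen_h - h + 1)
        if not any(overlaps((x, y, w, h), win) for win in windows)
    ]

# One 2x2 window at the origin of a 4x3 screen.
print(free_positions(4, 3, [(0, 0, 2, 2)], 2, 2))  # [(2, 0), (2, 1)]
```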
algorithms computational geometry user interface modelling
0
1,270
What is the average turnaround time?
<p>For the following jobs: </p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/rwOBN.png" alt="job table"></p>&#xA;&#xA;<p>The <strong>average wait time</strong> would be using a FCFS algorithm:</p>&#xA;&#xA;<p>(6-6)+(7-2)+(11-5)+(17-5)+(14-1) -> 0+5+6+10+13 -> 34/5 = 7 (6.8)</p>&#xA;&#xA;<p>What would the <strong>average turnaround time</strong> be? </p>&#xA;
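Since the job table itself is in the image, here is the general FCFS computation with hypothetical (arrival, burst) values — the numbers below are placeholders, not the table's. Turnaround = completion − arrival; waiting = turnaround − burst:

```python
# FCFS metrics sketch (assumption: the (arrival, burst) pairs below are
# hypothetical, as the actual table lives in the image above).
def fcfs_metrics(jobs):
    """jobs: list of (arrival, burst) tuples, already in FCFS order."""
    t, turnarounds, waits = 0, [], []
    for arrival, burst in jobs:
        t = max(t, arrival) + burst      # the CPU may idle until arrival
        turnarounds.append(t - arrival)  # turnaround = completion - arrival
        waits.append(t - arrival - burst)
    n = len(jobs)
    return sum(waits) / n, sum(turnarounds) / n

avg_wait, avg_turnaround = fcfs_metrics([(0, 3), (1, 5), (2, 2)])
print(avg_wait, avg_turnaround)  # about 2.67 and exactly 6.0 for this data
```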
algorithms operating systems process scheduling scheduling
1