url: stringlengths 17 to 172
text: stringlengths 44 to 1.14M
metadata: stringlengths 820 to 832
http://mathoverflow.net/questions/55867?sort=oldest
## The canonical divisor of the Hilbert scheme $Hilb^n P^2$? Hey everyone, I was wondering if anyone knows what the canonical divisor of the Hilbert scheme $Hilb^n P^2$ is -- $Hilb^n P^2$ is the Hilbert scheme of degree-$n$ zero-dimensional subschemes of the projective plane $P^2$. Any references? Many thanks in advance. - ## 2 Answers There is an easy formula for the canonical divisor on the Hilbert scheme of $n$ points on any smooth projective surface $X$. Let's first fix some notation. Denote by $X^{n}$ the $n$-fold product with projections $pr_i\colon X^{n}\to X$. We can consider line bundles of the form $$L^{[n]}=pr_1^* L \otimes\cdots \otimes pr_n^* L.$$ It is not hard to show that this descends to a line bundle on the symmetric product, and hence, pulling back along the Hilbert-Chow morphism, gives a line bundle on the Hilbert scheme $X^{[n]}$; this defines a homomorphism $Pic(X)\to Pic(X^{[n]})$. In this notation, the canonical line bundle is given by $$\omega_{X^{[n]}}=\omega_{X}^{[n]}.$$ I think this is in Göttsche's book on Hilbert schemes of points. - Thank you. It was very helpful. – Turkelli Feb 18 2011 at 18:37 $n=1$ already tells you that the anticanonical divisor is going to be nicer than the canonical, in that it's effective. There, the divisor given by the three coordinate lines is anticanonical. Next step, look at the Chow variety of $n$ points in $P^2$, with an anticanonical given by "some point is on some coordinate line". Then use the fact that the morphism from Hilb to Chow is crepant, to say that we can pull the anticanonical back. So: the divisor given by "some point is on some coordinate line" is again anticanonical up on the Hilbert scheme. - Oh, I see, thank you. In fact, I care about the anti-canonical divisor (rather than the canonical) as I am interested in Batyrev-Manin conjectures for Hilbert schemes. I wonder if the effective cone of the Hilbert schemes is known? – Turkelli Feb 19 2011 at 21:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9288250803947449, "perplexity_flag": "head"}
http://mathoverflow.net/questions/29652/what-are-the-most-elegant-proofs-that-you-have-learned-from-mo
## What are the most elegant proofs that you have learned from MO? One of the things that MO does best is provide clear, concise answers to specific mathematical questions. I have picked up ideas from areas of mathematics I normally wouldn't touch, simply because someone posted an eye-catching answer on MO. In particular, there have been some really elegant and surprising proofs. For example, this one by villemoes, when the questioner asked for a simple proof that there are uncountably many permutations of $\mathbb{N}$. The fact that any conditionally convergent series [and that such exists] can be rearranged to converge to any given real number x proves that there is an injection P from the reals to the permutations of $\mathbb{N}$. Or this one by André Henriques, when the questioner asked whether the Cantor set is the zero set of a continuous function: The continuous function is very easy to construct: it's the distance to the closed set. There must be many such proofs that most of us have missed, so I'd like to see a list, an MO Greatest Hits if you will. Please include a link to the answer, so that the author gets credit (and maybe a few more rep points), but also copy the proof, as it would be nice to see the proofs without having to move away from the page. (If anyone knows the best way to copy text with preservation of LaTeX, please advise.) I realize that one person's surprise may be another person's old hat, so that's why I'm asking for proofs that you learned from MO. You don't have to guarantee that the proof is original. - 1 Shortly: mathoverflow.net/questions/26520/…. – Wadim Zudilin Jun 27 2010 at 1:32 Also shortly: How to capture a sphere in a knot? mathoverflow.net/questions/8091/… – Gjergji Zaimi Jun 27 2010 at 2:10 2 I've tried in vain to answer this question, and I've come to realize that for me, learning slick proofs has not been the most attractive or memorable part of the MO experience (though I must have learned a few here). – Thierry Zell Apr 16 2011 at 23:00 ## 6 Answers In this fantastic answer, Ashutosh proved that the Axiom of Choice is equivalent to the assertion that every set admits a group structure. In ZF, the following are equivalent: (a) For every nonempty set there is a binary operation making it a group (b) Axiom of choice Non-trivial direction [(a) -> (b)]: The trick is the Hartogs construction, which gives for every set $X$ an ordinal $\aleph(X)$ such that there is no injection from $\aleph(X)$ into $X$. Assume for simplicity that $X$ has no ordinals. Let $\circ$ be a group operation on $X \cup \aleph(X)$. Now for any $x \in X$ there must be an $\alpha \in \aleph(X)$ such that $x \circ \alpha \in \aleph(X)$, since otherwise we get an injection of $\aleph(X)$ into $X$. Using $\circ$, therefore, one may inject $X$ into $(\aleph(X))^{2}$ by sending $x \in X$ to the $<$-least pair $(\alpha, \beta)$ in $(\aleph(X))^{2}$ such that $x \circ \alpha = \beta$. Here, $<$ is the lexicographic well-ordering on the product $(\aleph(X))^{2}$. This induces a well-ordering on $X$. (The argument is due originally to Hajnal and Kertész, 1973.) -
Unfortunately I can't find the link, but someone mentioned this proof that there are irrational numbers $a$ and $b$ such that $a^b$ is rational: if $\sqrt{2}^\sqrt{2}$ is rational then we are done; if it is irrational then $2 = (\sqrt{2}^\sqrt{2})^\sqrt{2}$ is an irrational raised to an irrational. - 16 @Wadim: Sorry but I don't understand your point. Obviously the statement is not deep or a difficult thing to prove using other means; I just found that proof to be very elegant. Different strokes for different folks, I guess. – Eric O. Korman Jun 27 2010 at 2:49 7 @Wadim: the point is that without Gelfond or Gelfond-Schneider etc etc it is actually a neat little puzzle to find irrational a,b with a^b rational. Your continuity argument doesn't work without more effort, because you have to check that the map $x\mapsto x^{\sqrt{2}}$ does not have the property that the pre-image of every rational is rational. Of course it doesn't---far from it---but the issue is finding a proof without invoking a transcendence theory sledgehammer. – Kevin Buzzard Jun 27 2010 at 8:16 21 Although this proof is pretty, I find it much more interesting as a demonstration of the meaning of the term "non-constructive proof", and I think this is the context in which it is usually presented. – Dan Piponi Jun 27 2010 at 17:23 15 Here's a simple, constructive proof: $\sqrt{2}^{\log_{\sqrt{2}} 3} = 3$ and $\log_{\sqrt{2}} 3$ is irrational since otherwise $2^p = 3^q$ for some positive integers $p,q$. It's not as pretty as the $\sqrt{2}^{\sqrt{2}}$ proof, but it shows that no "transcendence theory sledgehammer" is needed to provide an explicit example. – Mark Schwarzmann Apr 15 2011 at 15:59 3 sigfpe is right when he tacitly suggests that this argument is well-known and classical (which is why I wasn't much wowed by it myself). Come to think of it, sigfpe got everything about it right :-) – Todd Trimble Apr 15 2011 at 16:40 show 4 more comments I found several very nice proofs which I enjoyed: 1. Brilliant proof of the fundamental theorem of algebra by Gian Maria Dall'Ara http://mathoverflow.net/questions/10535/ways-to-prove-the-fundamental-theorem-of-algebra/10684#10684 2. Some proofs of quadratic reciprocity: http://mathoverflow.net/questions/1420/whats-the-best-proof-of-quadratic-reciprocity (I especially liked this one: http://mathoverflow.net/questions/1420/whats-the-best-proof-of-quadratic-reciprocity/1431#1431) 3. Proof that $\mathbb{R}^{2n+1}$ does NOT have a square root (quite elementary and beautiful) http://mathoverflow.net/questions/60375/is-r3-the-square-of-some-topological-space/60389#60389 4. Nullstellensatz using model theory http://mathoverflow.net/questions/9667/what-are-some-results-in-mathematics-that-have-snappy-proofs-using-model-theory/9693#9693 5. If in a ring R every countably generated ideal is principal then R is a PID http://mathoverflow.net/questions/8042/do-there-exist-non-pids-in-which-every-countably-generated-ideal-is-principal/8067#8067 6. An infinite-dimensional vector space has smaller dimension than its dual. http://mathoverflow.net/questions/13322/slick-proof-a-vector-space-has-the-same-dimension-as-its-dual-if-and-only-if-it/13372#13372 7. Topological proof that Z is a Bezout domain. http://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/64039#64039 - 12 "NUllstehlensatz" would be the zero stealing theorem. Instead, it's the zero point theorem, i.e. the "Nullstellensatz". 
– Alex Bartel Apr 16 2011 at 9:41 7 Alex, now I want to learn enough analysis that I can state and prove a Nullstehlensatz. :-) – L Spice Apr 16 2011 at 14:14 15 The Nullstehlensatz should be a security theorem about cryptographic protocols for digital cash. If you break such a protocol, that's a Positivstehlensatz. – Henry Cohn Apr 16 2011 at 15:46 1 I imagined it as some sort of complement to pole-pushing. – L Spice Apr 16 2011 at 18:16 I asked a question a while ago about proving that the real line is connected. http://mathoverflow.net/questions/26537/connectedness-and-the-real-line Omar Antolín-Camarena's answer and comment prove that the closed interval $[0,1]$ is connected iff it is compact. - That is a very nice proof! – David Roberts Jul 24 2011 at 23:18 My candidate is Jim Belk's one-line answer to the question about the existence of functions from $\Bbb{R}$ to $\Bbb{R}$ whose range is $\Bbb{R}$ on every open interval. I do wonder, however, if Jim Belk's solution was known to the founders of classical set theory (Cantor, Bernstein, Hausdorff, ...). - @Ali: I have no idea who first came up with that argument, but my first guess would be Sierpinski. Cantor, Bernstein, and Hausdorff undoubtedly knew the result, but they probably used a "construction" by transfinite induction, like the standard construction of a Bernstein set. – Andreas Blass Jul 25 2011 at 8:13 @Andreas: I agree that the existence of such a function must have been known to the "founding fathers" of set theory; it is amusing that even though the one-line proof uses AC, there are other proofs that are implementable in $ZF$. Indeed $ZF$ can produce such a function that is equal to $0$ almost everywhere. – Ali Enayat Jul 25 2011 at 13:36 My favorite is this proof by Bjorn Poonen that every finite Galois extension of $\mathbb{Q}$ has infinitely many completely split primes. Although Bjorn's proof does not give the density of such primes, as the proof using the Chebotarev Density Theorem does, it is refreshing to see that such an elementary proof exists. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246616363525391, "perplexity_flag": "middle"}
http://all-science-fair-projects.com/science_fair_projects_encyclopedia/Sound
# Sound "Sound is an alternation in pressure, particle displacement, or particle velocity propagated in an elastic material" (Olson 1957), or a series of mechanical compressions and rarefactions (longitudinal waves) that successively propagate through a medium that is at least a little compressible (solid, liquid or gas, but not vacuum). In sound waves, parts of the matter (molecules or groups of molecules) move in the direction in which the disturbance spreads (as opposed to transverse waves). The cause of sound waves is called the source of the waves, e.g. a violin string vibrating upon being bowed or plucked. A sound wave is usually represented graphically by a wavy, horizontal line; the upper part of the wave (the crest) indicates a compression and the lower part (the trough) indicates a rarefaction. ## Attributes of sound The characteristics of sound are frequency, wavelength, amplitude and velocity. ### Frequency and wavelength The frequency is the number of oscillations of a particular point in the course of the sound waves in a second. One single oscillatory cycle per second corresponds to 1 Hz (1/s). The wavelength is the distance between two successive crests and is the path that a wave travels in the time of one oscillatory cycle. In the case of longitudinal harmonic sound waves we can describe the wave with the equation $y(x,t) = y_0\sin\omega\left(t-\frac{x}{c}\right)$ where $y(x,t)$ is the displacement of particles from the rest position in the direction in which the wave spreads, $y_0$ is the amplitude, $x$ is the distance from the source, $c$ is the speed of the wave, $\omega$ is the angular frequency of the source, $x/c$ is the time the wave needs to travel the distance $x$, and $t$ is time. ### Amplitude The amplitude is the magnitude of the sound pressure change within the wave. It is the maximal displacement of the particles of matter, attained in compressions, where the particles of matter move towards each other and the pressure increases the most, and in rarefactions, where the pressure lessens the most. See also particle displacement and particle velocity. While the pressure can be measured in pascals, the amplitude is more often referred to as sound pressure level and measured in decibels, or dBSPL, sometimes written as dBspl or dB(SPL). When the measurement is adjusted based on how the human ear perceives loudness as a function of frequency, it is called dBA or A-weighting. See decibels for a more thorough discussion. ### Velocity The speed of this propagation depends on the type, temperature and pressure of the medium. Under normal conditions, however, because air is nearly a perfect gas, it does not depend on the air pressure. In dry air at 20 °C (68 °F) the speed of sound is approximately 343 m/s. A real-world estimate is roughly 1 meter every 3 milliseconds. ## Types of sounds Noises are irregular and disordered vibrations; they include all possible frequencies, and their waveform does not repeat in time. Noise is an aperiodic series of waves. Sounds that are sine waves with fixed frequency and amplitude are perceived as pure tones. 
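The quantities above can be tied together numerically: the wavelength is the distance covered in one oscillatory cycle, $\lambda = c/f$, and the displacement follows the harmonic formula just given. The sketch below is an illustrative addition, not part of the original article; the 440 Hz frequency and the displacement amplitude are assumed values chosen only for the example.

```python
import math

c = 343.0          # speed of sound in dry air at 20 C, m/s (value from the article)
f = 440.0          # assumed frequency of a pure tone, Hz
y0 = 1e-6          # assumed displacement amplitude, m

wavelength = c / f             # distance travelled during one oscillatory cycle
omega = 2 * math.pi * f        # angular frequency of the source

def displacement(x, t):
    """Particle displacement at distance x (m) from the source at time t (s)."""
    return y0 * math.sin(omega * (t - x / c))

print(f"wavelength = {wavelength:.3f} m")            # about 0.78 m for 440 Hz
print(f"y(1 m, 0.01 s) = {displacement(1.0, 0.01):.3e} m")
```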
While sound waves are usually visualised as sine waves, sound waves can have arbitrary shapes and frequency content, limited only by the apparatus that generates them and the medium through which they travel. In fact, most sound waves consist of multiple overtones or harmonics, and any sound can be thought of as being composed of sine waves (see additive synthesis). Waveforms commonly used to approximate harmonic sounds in nature include sawtooth waves, square waves and triangle waves. While a sound may still be referred to as being of a single frequency (for example, a piano striking the A above middle C is said to be playing a note at 440 Hz), the sound perceived by a listener will be colored by all of the sound wave's frequency components and their relative amplitudes (see timbre). For convenience in this article, however, it is best to think of sound waves as sine waves. ## Perception of sound The frequency range of sound audible to humans is approximately between 20 and 20,000 Hz. This range varies by individual and generally shrinks with age. It is also an uneven curve - sounds near 3,500 Hz are often perceived as louder than a sound with the same amplitude at a much lower or higher frequency. Above and below this range are ultrasound and infrasound, respectively. The amplitude range of sound for humans has a lower limit of 0 dBSPL, called the threshold of hearing. While there is technically no upper limit, sounds begin to damage ears at 85 dBSPL, and sounds above approximately 130 dBSPL (called the threshold of pain) cause pain. Again, this range varies by individual and changes with age. The perception of sound is the sense of hearing. In humans and many animals this is accomplished by the ears, but loud sounds and low-frequency sounds can be perceived by other parts of the body through the sense of touch. Sounds are used in several ways, most notably for communication through speech or, for example, music. Sound perception can also be used for acquiring information about the surrounding environment, such as spatial characteristics and the presence of other animals or objects. For example, bats use a form of echolocation, ships and submarines use sonar, and humans can determine spatial information by the way in which they perceive sounds. The study of sound is called acoustics and is performed by acousticians. A notable subset is psychoacoustics, which combines acoustics and psychology to study how people react to sounds. ## Reference • Olson (1957) cited in Roads, Curtis (2001). Microsound. MIT. ISBN 0262182157.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9283384680747986, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/43028/symmetries-of-spacetime-and-objects-over-it
# Symmetries of spacetime and objects over it I guess, following the usual mathematical presentation, we first think of spacetime as a set, reason about elements of its topology, and then equip it furthermore with a metric. Apparently it is this Riemannian metric, which people consider to be the object, that induces the minimal symmetry requirements of spacetime. 1) Regarding the relation between Riemannian geometry and the Hamiltonian formalism of classical mechanics: Does a setting for Riemannian geometry always already imply that it's possible to cook up a symplectic structure on the cotangent bundle? 2) Are there some more natural structures which physicists might be tempted to put on spacetime, which might then also be restricting regarding the (spacetime) symmetry structures? Is constructing quantum group symmetries (of non-commutative coordinate algebras, à la Connes?) just this? 3) I'm given a solution to a differential equation which can be thought of as resulting from a Lagrangian with a set of $n$ symmetries (e.g. $n=10$ for some spacetime models). Can this solution also be the result of a Lagrangian with fewer symmetries? Here, I'm basically asking to what extent I can reconstruct the symmetries from a solution or specific sets of solutions. It's kind of the inverse problem of the question "are there hidden/broken symmetries?". - ## 1 Answer The following description will be from the particle point of view, i.e., the space time manifold will refer to the configuration manifold on which a particle moves. Remark: My wrong answer to the first question was corrected following the comment by Qmechanic. 1) There is no need for a metric to define a symplectic structure on a cotangent bundle. A cotangent bundle has a canonical symplectic structure independent of any metric: $\omega = dx^i\wedge dp_i$ However, given a metric on the configuration manifold, the cotangent bundle of a wide class of manifolds (for example compact manifolds) can be given a Kähler structure. No explicit expression is known in the general case. However, there are implicit expressions for special cases such as Lie groups; please see the example of $T^{*}SU(2)$ in Hall's lectures. The advantage of having a Kähler structure on a cotangent bundle is that it enables quantization in terms of creation and annihilation operators, as in the case of a flat space. 2) A natural structure that one can put on a space time manifold is a principal bundle. In this case, given a metric on the base manifold and a connection on the principal bundle, a Poisson structure can be defined on this principal bundle. In this case, even with a vanishing Hamiltonian, there will be nontrivial dynamics determined by the constraints. The classical equations of motion are the Wong equations of a colored particle in a Yang-Mills field. Please see the following work by A. Duviryac for a clear exposition. Regarding the second part of the question: the quantization of this system leads to a quantum representation of the color group. The operator algebra of this representation has the structure of a noncommutative manifold. The best-known example of this type of algebra is in the case of $SU(2)$, where this manifold is a fuzzy sphere. 
- Comment to the answer(v1): If $p_j$ is supposed to transform as a co-vector under coordinate transformations $x\to x^{\prime}$, then the rhs. of the first eq. is not invariant under change of coordinates. – Qmechanic♦ Nov 6 '12 at 12:18 @Qmechanic Thank you – David Bar Moshe Nov 6 '12 at 13:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916456401348114, "perplexity_flag": "head"}
http://mathoverflow.net/questions/115636/properties-of-permutations-with-unknown-pattern-avoidance-descriptions/116699
## Background Many properties of permutations can be stated in terms of classical patterns. For example: • a permutation is stack-sortable if and only if it avoids 231 (Knuth 1975) • a permutation corresponds to a smooth Schubert variety if and only if it avoids 1324 and 2143 (Lakshmibai and Sandhya 1990) For other properties we need a stronger notion of a pattern, e.g., the mesh patterns introduced by Brändén and Claesson (2011). For example: • a permutation corresponds to a factorial Schubert variety if and only if it avoids 1324 and (2143,{(2,2)}) (These are the so-called forest-like permutations, Bousquet-Mélou and Butler 2007) • a permutation is sortable in two passes through a stack if and only if it avoids 2341 and (3241,{(1,4)}) (These are the so-called West-2-stack-sortable permutations, West 1990) There are also properties which have not been translated into patterns (to my knowledge): • meander permutations (http://theory.cs.uvic.ca/inf/perm/StampFolding.html) • the involutions in the symmetric group • ... ## The Question What permutation properties do you know that have not been described by the avoidance of patterns? ## Motivation I recently wrote an algorithm that, given a finite set of permutations, outputs the mesh patterns that the permutations avoid. This algorithm is called BiSC (derived from the last names of three people that inspired me to write the algorithm) and can conjecture the descriptions given in the first two lists above. It is available at http://staff.ru.is/henningu/programs/bisc/bisc.html and described in the paper http://arxiv.org/abs/1211.7110. This is a community wiki question since there is obviously not a single best answer - What does it mean for a permutation to avoid a pattern? – Alexander Chervov Dec 18 at 19:50 ## 4 Answers Here's one idea. For every permutation $\pi$ of length $n$, there are $n^2+1$ permutations of length $n+1$ containing $\pi$. However, once you look at permutations of length $n+2$, this quantity depends on $\pi$. Ray and West gave a proof that for $\pi$ of length $n$ the number of permutations of length $n+2$ containing $\pi$ is $$(n^4+2n^3+n^2+4n+4-2j)/2,$$ where $0\le j\le k-1$ depends on $\pi$. Perhaps you could give a description of this statistic in terms of patterns of $\pi$? References and a bit more discussion can be found in this paper: http://www.math.ufl.edu/~vatter/publications/pp2007-problems/ - Derangements. More generally, properties that allow superexponentially many permutations. - I hope I understood the question correctly. I have a feeling that questions on permutations of an algebraic, as opposed to combinatorial, nature could be candidates. Lakshmibai and Sandhya's theorem is a geometric question and it is a significant theorem because it reduces geometry to combinatorics. With this understanding of your question let me attempt to give four examples: (1) A permutation being of specific order $m$. Suppose we attempt pattern avoidance like: for any $k$ relatively prime to $m$ it should not have a length $k$ cycle. A permutation of order, for example $m^2$, will also satisfy that criterion and will be accepted wrongly. (2) Permutation being even. (avoidance criterion may not work: because the presence of an even number of cycles of any particular length, as opposed to an odd number of them, will be ok) (3) Some irreducible character vanishing on it. 
This is a conjugacy class question and can be argued similarly. (4) Commuting with another specific permutation. - A source of interesting examples may come from infinite groups with finite presentation, possibly extending your methods to words instead of just permutations (i.e. allowing repetitions). Given a set of generators $\{x,y,\dots,z\}$ of the group $G$, which words in the alphabet $\{x,\ x^{-1},y,\ y^{-1},\dots z,\ z^{-1}\}$ correspond to minimal-length representations of elements of $G$? In this generality, of course, the problem is intractable, but in principle one optimal answer could be given (and actually is, in some concrete cases) precisely in terms of avoidance of a list of patterns (starting, of course, from avoiding $xx^{-1}$). Clearly, an algorithm such as yours may prove very useful for formulating conjectures about patterns. -
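As an aside prompted by the comment asking what pattern avoidance means: a permutation $\pi$ contains a classical pattern $q$ if some subsequence of $\pi$ has the same relative order as $q$, and avoids $q$ otherwise. The brute-force check below is only an illustrative sketch of that definition; it is not the BiSC algorithm mentioned in the question, and the example permutations are arbitrary choices.

```python
from itertools import combinations

def contains(p, q):
    """True if the permutation p contains the classical pattern q."""
    k = len(q)
    q_order = sorted(range(k), key=lambda i: q[i])   # relative order (argsort) of q
    for idx in combinations(range(len(p)), k):
        sub = [p[i] for i in idx]
        if sorted(range(k), key=lambda i: sub[i]) == q_order:
            return True
    return False

def avoids(p, q):
    return not contains(p, q)

# 1324 avoids 231, so by the stack-sortability criterion quoted above it is stack-sortable;
# 2341 contains 231 (e.g. the subsequence 2, 3, 1), so it is not.
print(avoids((1, 3, 2, 4), (2, 3, 1)))   # True
print(avoids((2, 3, 4, 1), (2, 3, 1)))   # False
```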
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9013331532478333, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/109140-differentiability.html
# Thread: 1. ## Differentiability I don't understand this question. Could I get some help with it? (Problem is attached.) Thanks 2. It asks you to prove that the derivative of f(x,y) at (0,0) in the direction of a generic vector v (the directional derivative) exists. You can do this thinking of v as (a,b), for example, or as $(\cos\theta, \sin\theta)$. Then you have to prove that f(x,y) is not differentiable at that point. Do you understand why this would be "surprising", as the exercise says, and what two important concepts it is referring to?
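The attached problem itself is not reproduced above, so the actual function is unknown here. Purely to illustrate the phenomenon the reply describes (every directional derivative exists at the origin, yet the function is not differentiable there), a standard textbook example is $f(x,y)=x^2y/(x^4+y^2)$ with $f(0,0)=0$; the sketch below assumes that example and is not necessarily the function in the attachment.

```python
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b', real=True)
f = x**2 * y / (x**4 + y**2)             # assumed illustrative example, with f(0,0) := 0

# directional derivative at the origin along v = (a, b): lim_{t->0} f(t*a, t*b)/t
g = sp.simplify(f.subs({x: t*a, y: t*b}) / t)
print(g)                                 # a**2*b/(a**4*t**2 + b**2) (or an equivalent form)
print(sp.simplify(g.subs(t, 0)))         # a**2/b when b != 0 (and 0 along the x-axis), so the limit exists in every direction

# yet f is not even continuous at (0,0): along the parabola y = x**2 it is constantly 1/2
print(sp.simplify(f.subs(y, x**2)))      # 1/2
```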
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9655018448829651, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/129824-unbiased-estimator-theta.html
# Thread: 1. ## unbiased estimator of theta I'm trying to find an estimator for $\theta.$ I have the following fact, where $v_{ij}$ is an entry of a matrix that was obtained by multiplying a standard normal matrix and an unspecified data matrix: $P(sign(v_{1j}) = sign(v_{2j})) = 1 - \frac{\theta}{\pi}$ My problem is: how can I estimate $P(sign(v_{1j}) = sign(v_{2j}))?$ Is this probability dependent on the distribution of $v_{ij}?$ Or is the desired result simply $P(sign(v_1)=sign(v_2))= 1/3$, since the possible sign pairs are (+,-), (-,+), (0,+), (+,0), (-,0), (0,-), (+,+), (-,-), (0,0), and 3 of the 9 satisfy the condition? Thanks
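One reading of this setup, consistent with the quoted identity, is the random-projection sign-agreement fact: if $u$ and $w$ are fixed data vectors with angle $\theta$ between them and the $g_j$ are i.i.d. standard normal vectors, then $P(\operatorname{sign}(g_j\cdot u)=\operatorname{sign}(g_j\cdot w))=1-\theta/\pi$, so the probability is estimated by the empirical sign-agreement frequency (not by counting the nine sign pairs), and $\hat\theta=\pi(1-\text{agreement rate})$ estimates $\theta$. The Monte Carlo sketch below assumes that setup; the vectors and sample size are illustrative choices, not values from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.array([1.0, 0.0, 0.0])
w = np.array([np.cos(1.0), np.sin(1.0), 0.0])      # assumed data vectors, angle theta = 1 radian
theta = np.arccos(u @ w / (np.linalg.norm(u) * np.linalg.norm(w)))

n = 200_000
G = rng.standard_normal((n, 3))                    # one standard normal vector per column index j
v1, v2 = G @ u, G @ w                              # the two rows v_{1j}, v_{2j}

agree = np.mean(np.sign(v1) == np.sign(v2))        # empirical P(sign agreement)
theta_hat = np.pi * (1.0 - agree)                  # estimator suggested by the identity
print(theta, theta_hat)                            # both close to 1.0
```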
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8646955490112305, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/20717/how-to-find-solutions-of-linear-diophantine-ax-by-c/20727
# How to find solutions of linear Diophantine ax + by = c? I want to find a set of integer solutions of the Diophantine equation $ax + by = c$, and apparently $\gcd(a,b)|c$. Then what formula can I use to find $x$ and $y$? I tried to play around with it: $x = (c - by)/a$, hence $a|(c - by)$. $a$, $c$ and $b$ are known. So to obtain an integer solution for $x$, we need $c - by = ak$, and I am lost from here, because $y = (c - ak)/b$. I kept repeating this routine and could not find a way out of it. Any hint? Thanks, Chan - 2 – Qiaochu Yuan Feb 6 '11 at 19:21 Your condition is flipped; it's $\gcd(a,b)|c$, not the other way around. – Arturo Magidin Feb 6 '11 at 21:04 @Arturo Magidin: Thanks, edited. – Chan Feb 6 '11 at 21:37 ## 3 Answers The Diophantine equation $ax+by = c$ has solutions if and only if $\gcd(a,b)|c$. If so, it has infinitely many solutions, and any one solution can be used to generate all the other ones. To see this, note that the greatest common divisor of $a$ and $b$ divides both $ax$ and $by$, hence divides $c$ if there is a solution. This gives the necessity of the condition (which you have backwards). (fixed in edits) The converse is actually a constructive proof, that you can find in pretty much every elementary number theory course or book, and which is essentially the same as yunone's answer above (but without dividing through first). From the Extended Euclidean Algorithm, given any integers $a$ and $b$ you can find integers $s$ and $t$ such that $as+bt = \gcd(a,b)$; the numbers $s$ and $t$ are not unique, but you only need one pair. Once you find $s$ and $t$, since we are assuming that $\gcd(a,b)$ divides $c$, there exists an integer $k$ such that $\gcd(a,b)k = c$. Multiplying $as+bt=\gcd(a,b)$ through by $k$ you get $$a(sk) + b(tk) = \gcd(a,b)k = c.$$ So this gives one solution, with $x=sk$ and $y=tk$. Now suppose that $ax_1 + by_1 = c$ is a solution, and $ax+by=c$ is some other solution. Taking the difference between the two, we get $$a(x_1-x) + b(y_1-y) = 0.$$ Therefore, $a(x_1-x) = b(y-y_1)$. That means that $a$ divides $b(y-y_1)$, and therefore $\frac{a}{\gcd(a,b)}$ divides $y-y_1$. Therefore, $y = y_1 + r\frac{a}{\gcd(a,b)}$ for some integer $r$. Substituting into the equation $a(x_1-x) = b(y-y_1)$ gives $$a(x_1 - x) = rb\left(\frac{a}{\gcd(a,b)}\right)$$ which yields $$\gcd(a,b)a(x_1-x) = rba$$ or $x = x_1 - r\frac{b}{\gcd(a,b)}$. Thus, if $ax_1+by_1 = c$ is any solution, then all solutions are of the form $$x = x_1 - r\frac{b}{\gcd(a,b)},\qquad y = y_1 + r\frac{a}{\gcd(a,b)}$$ exactly as yunone said. To give you an example of this in action, suppose we want to find all integer solutions to $$258x + 147y = 369.$$ First, we use the Euclidean Algorithm to find $\gcd(147,258)$; the parenthetical equation on the far right is how we will use this equality after we are done with the computation. \begin{align*} 258 &= 147(1) + 111 &\quad&\mbox{(equivalently, $111=258 - 147$)}\\ 147 &= 111(1) + 36&&\mbox{(equivalently, $36 = 147 - 111$)}\\ 111 &= 36(3) + 3&&\mbox{(equivalently, $3 = 111-3(36)$)}\\ 36 &= 3(12). \end{align*} So $\gcd(147,258)=3$. Since $3|369$, the equation has integral solutions. Then we find a way of writing $3$ as a linear combination of $147$ and $258$, using the Euclidean algorithm computation above, and the equalities on the far right. We have: \begin{align*} 3 &= 111 - 3(36)\\ &= 111 - 3(147 - 111) = 4(111) - 3(147)\\ &= 4(258 - 147) - 3(147)\\ &= 4(258) -7(147). 
\end{align*} Then, we take $258(4) + 147(-7)=3$, and multiply through by $123$; why $123$? Because $3\times 123 = 369$. We get: $$258(492) + 147(-861) = 369.$$ So one solution is $x=492$ and $y=-861$. All other solutions will have the form \begin{align*} x &= 492 - \frac{147r}{3} = 492 - 49r,\\ y &= -861 + \frac{258r}{3} =86r - 861, &\qquad&r\in\mathbb{Z}. \end{align*} You can reduce those constants by making a simple change of variable. For example, if we let $r=t+10$, then \begin{align*} x &= 492 - 49(t+10) = 2 - 49t,\\ y &= 86(t+10) - 861 = 86t - 1,&\qquad&t\in\mathbb{Z}. \end{align*} - 2 +1, puts my answer to shame! – yunone Feb 6 '11 at 20:56 2 All I have to say is AMAZING ANSWER ^_^! – Chan Feb 6 '11 at 21:40 I think there was a typo on the line: $x = 592 - \frac{147r}{3} = 492 - 49r$. I believe it should be $492$ on the left hand side. – Chan Feb 28 '11 at 1:02 @Chan: Yes, thank you. – Arturo Magidin Feb 28 '11 at 1:05 I don't mean to bug you all these months later, but I believe there is an extraneous $t$ in the equation for $y$ right before the gray page break line. – yunone Jul 22 '11 at 2:15 show 2 more comments As others have mentioned one may employ the extended Euclidean algorithm. It deserves to be better known that this is most easily performed via row-reduction on an augmented matrix - analogous to methods used in linear algebra. See this excerpt from one of my old sci.math posts: ````For example, to solve mx + ny = gcd(x,y) one begins with two rows [m 1 0], [n 0 1], representing the two equations m = 1m + 0n, n = 0m + 1n. Then one executes the Euclidean algorithm on the numbers in the first column, doing the same operations in parallel on the other columns, Here is an example: d = x(80) + y(62) proceeds as: in equation form | in row form ---------------------+------------ 80 = 1(80) + 0(62) | 80 1 0 62 = 0(80) + 1(62) | 62 0 1 row1 - row2 -> 18 = 1(80) - 1(62) | 18 1 -1 row2 - 3 row3 -> 8 = -3(80) + 4(62) | 8 -3 4 row3 - 2 row4 -> 2 = 7(80) - 9(62) | 2 7 -9 row4 - 4 row5 -> 0 = -31(80) -40(62) | 0 -31 40 Above the row operations are those resulting from applying the Euclidean algorithm to the numbers in the first column, row1 row2 row3 row4 row5 namely: 80, 62, 18, 8, 2 = Euclidean remainder sequence | | for example 62-3(18) = 8, the 2nd step in Euclidean algorithm becomes: row2 -3 row3 = row4 on the identity-augmented matrix. In effect we have row-reduced the first two rows to the last two. The matrix effecting the reduction is in the bottom right corner. It starts as the identity, and is multiplied by each elementary row operation matrix, hence it accumulates the product of all the row operations, namely: [ 7 -9] [ 80 1 0] = [2 7 -9] [-31 40] [ 62 0 1] [0 -31 40] The 1st row is the particular solution: 2 = 7(80) - 9(62) The 2nd row is the homogeneous solution: 0 = -31(80) + 40(62), so the general solution is any linear combination of the two: n row1 + m row2 -> 2n = (7n-31m) 80 + (40m-9n) 62 The same row/column reduction techniques tackle arbitrary systems of linear Diophantine equations. Such techniques generalize easily to similar coefficient rings possessing a Euclidean algorithm, e.g. polynomial rings F[x] over a field, Gaussian integers Z[i]. There are many analogous interesting methods, e.g. search on keywords: Hermite / Smith normal form, invariant factors, lattice basis reduction, continued fractions, Farey fractions / mediants, Stern-Brocot tree / diatomic sequence. ```` - Thanks, I really like your Linear Algebra approach. 
– Chan Feb 6 '11 at 21:54 1 @Chan: It irks me that most textbooks in elementary number theory present more obfuscated approaches. If you go on to study algebra you will learn more about the underlying theory when you study Hermite Smith normal forms and other module-theoretic generalizations of linear algebra results. – Gone Feb 6 '11 at 22:06 This is actually discussed in Niven, Zuckerman, Montgomery. Just so you have a reference (pages 217-218 in the 5th edition). – Arturo Magidin Feb 6 '11 at 22:07 @Arturo. Thanks for the reference. I'm happy to see that it finally made it into an edition of a popular textbook, but I'm sad that the presentation there leaves much to be desired. – Gone Feb 6 '11 at 22:21 Do you mean $\gcd(a,b)$ divides $c$? If so, you can divide both sides of the equation to get $$\frac{a}{g}x+\frac{b}{g}y=\frac{c}{g}$$ where $g=\gcd(a,b)$. But since $\gcd(a/g,b/g)=1$, you can use the extended Euclidean algorithm to find a solution $(x_0,y_0)$ to the equation $$\frac{a}{g}x+\frac{b}{g}y=1.$$ Once you have that, the solution $(X,Y)=(\frac{c}{g}\cdot x_0,\frac{c}{g}\cdot y_0)$ is a solution to your original equation. Furthermore, the values $$x=X + \frac{b}{g} t\quad y=Y - \frac{a}{g} t$$ give all solutions when $t$ ranges over $\mathbb{Z}$, I believe. - @yuone: Yes, that gives all solutions. – Arturo Magidin Feb 6 '11 at 20:47
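For readers who would rather compute than derive, here is a small sketch in the spirit of the answers above (an added illustration, not code from the thread): the extended Euclidean algorithm produces one particular solution, and the general solution is then parametrised exactly as described.

```python
def ext_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and a*s + b*t = g (for nonnegative a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def solve_linear_diophantine(a, b, c):
    """Return (x0, y0, dx, dy): all solutions are (x0 + k*dx, y0 + k*dy) for k in Z."""
    g, s, t = ext_gcd(a, b)
    if c % g != 0:
        raise ValueError("no integer solutions: gcd(a, b) does not divide c")
    k = c // g
    return (s * k, t * k, b // g, -(a // g))

# The worked example from the accepted answer: 258 x + 147 y = 369
x0, y0, dx, dy = solve_linear_diophantine(258, 147, 369)
print(x0, y0, dx, dy)          # 492 -861 49 -86: the particular solution from the answer and the step sizes
print(258 * x0 + 147 * y0)     # 369, as a check
```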
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8716999888420105, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/269032/understanding-two-similar-definitions-frechet-urysohn-space-and-sequential-spac
# Understanding two similar definitions: Fréchet-Urysohn space and sequential space Here are the definitions: Fréchet-Urysohn space: A topological space $X$ where for every $A \subseteq X$ and every $x \in \text{cl}(A)$, there exists a sequence $(x_{n})_{n \in \mathbb{N}}$ in $A$ converging to $x$. Sequential space: A topological space $X$ where a set $A \subseteq X$ is closed iff $A$ contains the limit points of every sequence contained in it. As the title explains, I would like to know the difference between them. Thanks for any help. - ## 4 Answers Consider the following operation on a subset $A$ of a space $X$, defining a new subset of $X$: $$\mbox{s-cl}(A) = \{ x \in X \mid \mbox{ there exists a sequence } (x_n)_n \mbox{ from } A \mbox{ such that } x_n \rightarrow x \}\mbox{.}$$ This set, the sequential closure of $A$, contains $A$ (take constant sequences) and in all spaces $X$ it will be a subset of $\mbox{cl}(A)$, the closure of $A$ in $X$. We can define $\mbox{s-cl}^{0}(A) = A$ and for ordinals $\alpha > 0$ we define $\mbox{s-cl}^\alpha(A) = \mbox{s-cl}(\cup_{\beta < \alpha} \mbox{s-cl}^\beta(A))$, the so-called iterated sequential closure. A space is Fréchet-Urysohn when $\mbox{s-cl}(A) = \mbox{cl}(A)$ for all subsets $A$ of $X$, so the first iteration of the sequential closure is the closure. A space is sequential if some iteration $\mbox{s-cl}^\alpha(A)$ equals $\mbox{cl}(A)$, for all subsets $A$. So basically, by taking sequence limits we can eventually reach all points of the closure in a sequential space, but in a Fréchet-Urysohn space we are done after one step already. For more on the differences and the "canonical" example of a sequential non-Fréchet-Urysohn space (the Arens space), see this nice topology blog, and the links therein. - Both Frechet-Urysohn and sequential spaces are related to first-countable spaces. In fact, first-countable $\Rightarrow$ Frechet-Urysohn $\Rightarrow$ sequential. • Frechet-Urysohn gives a characterisation of what it means for a point to belong to the closure of a set: $x \in \overline{A}$ iff there is a sequence in $A$ converging to $x$. • Sequentiality gives a characterisation of the closed subsets of a space: A set is closed exactly when it contains the limits of all its convergent sequences. An example of a sequential space which is not Frechet-Urysohn is as follows (this is essentially taken from Engelking's text with some added details): For $i \geq 1$ define $X_i = \left\{ \frac 1i \right\} \cup \left\{ \frac 1i + \frac 1{i^2 + k} : k \geq 0 \right\}$, and let $X = \{ 0 \} \cup \bigcup_{i=1}^\infty X_i$. (Note that $X_i \cap X_j = \emptyset$ for $i \neq j$.) We topologise $X$ as follows: • all points of the form $\frac 1i + \frac 1{i^2+k}$ are isolated; • the basic open neighbourhoods of $\frac 1i$ are the cofinite subsets of $X_i$ containing $\frac 1i$; and • the basic open neighbourhoods of $0$ are of the form $\{ 0 \} \cup \bigcup_{i=1}^\infty Y_i$ where $Y_i \subseteq X_i$ for each $i$, and $Y_i \neq \emptyset$ for all but finitely many $i$, and if $Y_i \neq \emptyset$, then $\frac 1i \in Y_i$ and $Y_i$ is a cofinite subset of $X_i$. It is easy to see that $0 \in \overline{ X \setminus \left\{ 0 , \frac 11 , \frac 12 , \frac 13 , \ldots \right\} }$, but no sequence in this set converges to $0$: If $\{ x_j \}_{j=1}^\infty$ is any sequence in this set, note that if $X_i \cap \{ x_j : j \geq 1 \}$ is infinite for only finitely many $i$, then we can easily form a neighbourhood of $0$ containing no points of this sequence. 
If $X_i \cap \{ x_j : j \geq 1 \}$ is infinite for infinitely many $i$, enumerate them as $\{ i_k : k \geq 1 \}$. Inductively pick a sequence $\{ j_{k} \}_{k=1}^\infty$ so that $j_{k+1}$ is the least $j > j_k$ such that $x_j \in X_{i_k}$. Note that $X \setminus \{ x_{j_{k}} : k \geq 1 \}$ is a neighbourhood of $0$ which does not include a tail of the sequence. Nevertheless, $X$ is sequential. Suppose that $A \subseteq X$ contains the limits of all convergent sequences of its points. If $x \in \overline{A}$, note that if $x \neq 0$ then $x$ has a countable neighbourhood base, so it follows that there is a sequence in $A$ converging to $x$, meaning that $x \in A$. If $x = 0$, then assume that $0 \notin A$. Note that there must be a subsequence $\{ x_j \}_{j=1}^\infty$ of $\{ \frac 1 i \}_{i=1}^\infty$ such that every neighbourhood of each $x_j$ intersects $A$ (otherwise we could form a neighbourhood of $0$ which is disjoint from $A$). Then each $x_j \in A$ (since $x_j \in \overline{A}$, and we have observed above that for these points we can build a sequence in $A$ converging to $x_j$), and it follows that $\lim_j x_j = 0$ (every neighbourhood of $0$ contains all but finitely many points of the form $\frac 1i$). - A space is a sequential space iff every sequentially closed set is closed. A space is Fréchet-Urysohn if the sequential closure of a set and the usual closure coincide. - A space is a sequential space iff every sequentially closed set is closed. But there can be closed sets which are not sequentially closed. A space is Fréchet-Urysohn if the sequential closure of a set and the usual closure coincide. - That's awfully wrong! Any closed subset $A$ is sequentially closed since a limit point of a sequence in $A$ is always an adherence point (or limit point) of the set $A$. – Stefan H. Jan 2 at 17:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 93, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528118968009949, "perplexity_flag": "head"}
http://mathhelpforum.com/math-topics/199962-geometric-sequences-how-find-maximum-value.html
# Thread: 1. ## Geometric Sequences - How to find maximum value This question legitimately confuses me. I found that the formula for the geometric sequence is $a_n = 1800 \times 0.9^{n-1}$ How does one calculate the maximum value from that, however? I do not understand how there is a maximum value for geometric sequences. 2. ## Re: Geometric Sequences - How to find maximum value I think you mean a value which the sum of the sequence approaches but never reaches, called the sum to infinity. The formula is a/(1-r). So in your case a=1800 and r=0.9, so the sum to infinity = 1800/0.1 = 18000. r has to be between -1 and +1 for a sequence to have a sum to infinity. 3. ## Re: Geometric Sequences - How to find maximum value The maximum value of $a_n$ is simply the first term (assumed to be $a_1 = 1800$), because all subsequent terms get smaller.
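A quick numerical check (an added illustration, not part of the thread) confirms both replies: the largest term of $a_n = 1800 \times 0.9^{n-1}$ is the first one, while the partial sums approach, but never reach, $a/(1-r) = 18000$.

```python
a, r = 1800.0, 0.9

terms = [a * r**(n - 1) for n in range(1, 101)]
print(max(terms) == terms[0])              # True: the maximum term is a_1 = 1800

partial = 0.0
for n, term in enumerate(terms, start=1):
    partial += term
    if n in (10, 50, 100):
        print(n, round(partial, 2))        # partial sums creep toward 18000
print(a / (1 - r))                         # 18000.0, the sum to infinity
```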
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9208516478538513, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/112555/lp-and-l-infty
# $L^p$ and $L^\infty$ So I am trying to prove that for a set $E$ of finite measure, and for $1 \leq p < \infty$, $||f||_p \leq (m(E))^{1 - 1/p}||f||_{\infty}$. But I think I have proved the wrong thing. Can you help me see where I went wrong? My proof is something like $$||f||_p =\left(\int_E |f|^p\right)^{1/p} \leq \left(\int_E ||f||_{\infty}^p\right)^{1/p} = \left(||f||_{\infty}^p \int_E 1\right)^{1/p} = ||f||_{\infty} (m(E))^{1/p},$$ which is not what was asked for in the problem. Thanks! - ## 1 Answer What you get is true but not the wanted inequality. But you can write, assuming that $f\in L^{\infty}$ $|f|^p=|f|^{p-1}|f|\leq ||f||_{\infty}^{p-1}|f|$ then apply Hölder's inequality. - Oh right! Thanks :) For a while I thought the statements contradicted each other. – badatmath Feb 23 '12 at 19:40 Wait, but how can my statement be true? If $p \to \infty$, $||f||_p \to 0$, and assuming $||f||_p \to ||f||_\infty$ I think that's a contradiction. – badatmath Feb 23 '12 at 19:46 Why $||f||_p\to 0$? – Davide Giraudo Feb 23 '12 at 19:54 Wait, never mind, it's not :P – badatmath Feb 23 '12 at 19:56
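A small numerical illustration (not part of the thread) of both the inequality and the point raised in the comments: take $E=[0,1]$ and $f(x)=x$, so $\|f\|_\infty = 1$, $m(E)=1$, and $\|f\|_p = (1/(p+1))^{1/p}$ in closed form. The values stay below $m(E)^{1-1/p}\|f\|_\infty$ and tend to $\|f\|_\infty = 1$, not to $0$, as $p\to\infty$.

```python
# f(x) = x on E = [0, 1]: ||f||_p = (1/(p+1))**(1/p), ||f||_inf = 1, m(E) = 1
for p in (1, 2, 10, 100, 1000):
    norm_p = (1.0 / (p + 1)) ** (1.0 / p)
    bound = 1.0 ** (1 - 1.0 / p) * 1.0     # m(E)**(1 - 1/p) * ||f||_inf
    print(p, round(norm_p, 4), norm_p <= bound)   # norms increase toward 1 and never exceed the bound
```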
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483696818351746, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/167346/implicit-differentiation-question
# Implicit differentiation question Differentiate given $$\frac{y}{x-y}=x^2+1$$ Initially I wanted to use the quotient rule to solve this, but then I tried differentiating it as it is: $$\frac {y_\frac{dy}{dx}}{1-y_\frac{dy}{dx}}=2x$$ $$\frac{dy}{dx}(y y^{-1})=2x$$ $$\frac{dy}{dx}=\frac{2x}{yy^{-1}}$$ $$\frac{dy}{dx}=\frac{2xy}{y}$$ $$\frac{dy}{dx}=2x$$ I am wondering how I can check to see if this is a valid answer? - What is $y_{dy/dx}$? – Peter Tamaroff Jul 6 '12 at 5:44 @PeterTamaroff: Quite possibly an error on my part. I arrived at $y_\frac{dy}{dx}$ by applying the chain rule to y. If I understand correctly, y is considered to be a function of x, so we apply the chain rule to y (when using implicit differentiation). – Kurt Jul 6 '12 at 5:51 1 Explain to me what you mean by "$y_{dx/dy}$". In general, what do you mean by $f_g$, when $f$ and $g$ are functions? – Peter Tamaroff Jul 6 '12 at 5:57 I meant to express that y is a function of x. My aim was to differentiate the top of bottom of the rational expression. Since the expression is a relation versus a function, I thought I had to differentiate y using the chain rule. – Kurt Jul 6 '12 at 6:18 1 Careful. The top is $y(x)$. The chain rule is used for compositions, and I see none (Do you?). What you ought to be doing is using the quotient rule and treating $y$ as $y(x)$ implicitly. I still can't understand why would you say a function is a functions of its derivative (when the converse might make more sense) or how you arrived to the expression $y'(y y^{-1})$. If you write out your reasoning it might help. – Peter Tamaroff Jul 6 '12 at 6:24 show 1 more comment ## 3 Answers $$\frac{y}{x-y}=x^2+1$$ You claim that $$y'=2x$$ so that $y=x^2+C$ This means $$\frac{x^2+C}{x-x^2-C}=x^2+1$$ This is absurd, since the quotient of two second degree polynomials can't be a second degree polynomial. In fact you get two non vanishing terms $x^3$ and $x^4$ which are off. I don't understand what your procedure is, also. I would proceed as follows: $$\displaylines{ \frac{y}{{x - y}} = {x^2} + 1 \cr \frac{d}{{dx}}\left( {\frac{y}{{x - y}}} \right) = \frac{d}{{dx}}\left( {{x^2} + 1} \right) \cr \frac{{y'\left( {x - y} \right) - \left( {1 - y'} \right)y}}{{{{\left( {x - y} \right)}^2}}} = 2x \cr \frac{{y'x - yy' - y + yy'}}{{{{\left( {x - y} \right)}^2}}} = 2x \cr \frac{{y'x - y}}{{{{\left( {x - y} \right)}^2}}} = 2x \cr y'x = 2x{\left( {x - y} \right)^2} + y \cr y' = 2{\left( {x - y} \right)^2} + \frac{y}{x} \cr}$$ - An explicit approach: Rewrite as $y = (x-y)(x^2+1)$, and factor out $y$ to get $y = \frac{x^3+x}{x^2+2}$. This is straightforward to differentiate, yielding $\frac{d y}{d x} = \frac{x^4+5 x^2+2}{(x^2+2)^2}$. - You can integrate your final expression to get $y=x^2+c$ and see if this works in the original equation. -
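As a cross-check of the two approaches above (an added illustration, not taken from the thread), one can let a computer algebra system differentiate the relation implicitly and compare the result with the derivative of the explicit solution $y=(x^3+x)/(x^2+2)$.

```python
import sympy as sp

x = sp.symbols('x')
yf = sp.Function('y')(x)

relation = yf / (x - yf) - (x**2 + 1)
dy = sp.Derivative(yf, x)

# differentiate the relation with respect to x, treating y as y(x), and solve for y'(x)
dydx_implicit = sp.solve(sp.diff(relation, x), dy)[0]

# explicit solution from the second answer and its derivative
y_explicit = (x**3 + x) / (x**2 + 2)
dydx_explicit = sp.simplify(sp.diff(y_explicit, x))
print(dydx_explicit)                      # should agree with (x**4 + 5*x**2 + 2)/(x**2 + 2)**2 from the answer

# substituting the explicit y into the implicit formula gives the same derivative
check = sp.simplify(dydx_implicit.subs(yf, y_explicit) - dydx_explicit)
print(check)                              # 0, so the two answers are consistent (and y' = 2x is not)
```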
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481683969497681, "perplexity_flag": "head"}
http://mathoverflow.net/questions/114801/normal-subgroups-of-free-products
## Normal Subgroups of Free Products Let $G=A\ast \mathbb{Z}$ be the free product of a group $A$ and the cyclic group $\mathbb{Z}$ and suppose $K$ is a subgroup of $G$. By the Kurosh Subgroup Theorem we know that $K=F\ast (\ast_{i\in I}(K\cap A^{u_i}))$, where $F$ is a free group and the $u_i$ are some representatives of the double cosets $KxA$ in $G$. Now suppose further that $A$ has ACC on normal subgroups and $K$ is normal. Is it true that $K$ is finitely generated? (this will be true if we can show that $|I|$ and $rank\ F$ are finite). - Briefly, if $A$ has ACC on normal subgroups, show that $A\ast \mathbb{Z}$ also has ACC on normal subgroups, or give a counterexample. – M Shahryari Nov 28 at 18:35 1 In fact a non-trivial free product $G=A*B$ (with $|A|\ge 3$ and $|B|\ge 2$) never has ACC on normal subgroups. This is because $G$ is a non-elementary relatively hyperbolic group, and there is a version of small cancellation theory over such groups, which, in particular, implies that every non-elementary rel. hyperbolic group possesses a proper non-elementary rel. hyperbolic quotient. – Ashot Minasyan Nov 28 at 21:31 ## 1 Answer Set $A$ equal to $\mathbb{Z}$, which satisfies the ascending chain condition ("ACC", every strictly ascending chain of (normal) subgroups eventually terminates). Then $G=\mathbb{Z}\ast\mathbb{Z}=F_2$ and $F_2$ contains normal subgroups that are not finitely generated. Examples: 1) The commutator subgroup is normal and not finitely generated. 2) The subgroup generated by $\left\{b^k a b^{-k}\ |\ k\in\mathbb{Z}\right\}$ is normal and not finitely generated. - 2 In fact, Greenberg proved that every normal subgroup of $F_2$ is trivial, of finite index or infinitely generated. – HW Nov 28 at 20:53 This is true for all finitely generated free groups (Hatcher, p.87, problem 7): If $N\leq F_n$ is a nontrivial normal subgroup of infinite index then $N$ is not finitely generated. This is an easy exercise that involves covering theory. – Sebastian Meinert Nov 28 at 21:04 In a now-deleted answer, the OP says: Thank you for the answers. I was trying to prove some kind of Hilbert basis theorem for "Algebraic Geometry over Groups". Now, it is clear that there is no such generalization: It is not true to say that if a group $G$ has ACC on normal subgroups, then $G[X] = G \ast F(X)$ does too. Therefore there may exist $G$-groups which are not Equationally Noetherian. For Algebraic Geometry over Groups, see J. Alg. (Baumslag, Miasnikov, Remeslennikov). – S. Carnahan♦ Dec 4 at 14:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211549758911133, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/294660/trying-to-prove-let-e-be-a-hilbert-a-module-then-e-langle-e-e-rangle-i
# Trying to prove: Let $E$ be a Hilbert $A$-module. Then, $E\langle E,E\rangle$ is norm dense in $E$. Let $E$ be a Hilbert $A$-module. Then, $E\langle E,E\rangle$ is norm dense in $E$. I am having trouble proving this. I believe $\langle E,E\rangle$ is a $C^*$-algebra. If I can show this, then the proof is easy since all $C^*$-algebras have an approximate identity. It seems like it shouldn't be too hard, but I am having trouble showing it. Thank you. - 1 What precisely do the notations $E\langle E,E\rangle$ and $\langle E,E\rangle$ refer to? In any case, a stronger statement can be found here: math.stackexchange.com/questions/163485/… – Jonas Meyer Feb 4 at 17:14 ## 1 Answer They are in fact equal See lemma 2.2.3. of book Hilbert C*-Modules by M. Manuilov page20 But I only point out that $\langle E,E\rangle$ is a C*-subalgebra of A and so $E\langle E,E\rangle\subset E$; conversely since any x in E can be written as $x=y\langle y,y\rangle$ we have $E\subset E\langle E,E\rangle.$ - 1 For those who don't have that book, could you describe a bit about what that citation says? – robjohn♦ Feb 5 at 7:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236316680908203, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/117156/generators-and-subgroups-of-mathbf-z-15
# Generators and subgroups of $\mathbf Z_{15}$ Could you help me with following excercise? Find all generators of additive group Z15. Find all sub-groups of additive group Z15. Could you please explain how to do that and post a solution? Thanks, Mark - 1 In case this is a homework, please add the tag `(homework)`. In any ways, what did you try? In which step did you get stuck? – user2468 Mar 6 '12 at 17:33 1 To get started, can you think of one generator for $\mathbf Z_{15}$? Is $2$ a generator? Is $3$? – Dylan Moreland Mar 6 '12 at 17:35 2 It might be time consuming, but if you're completely lost try writing out the addition table. – you Mar 6 '12 at 17:39 Also, what do you know about the size of subgroups as compared to the size of the original group? – JavaMan Mar 6 '12 at 17:50 1 The answer is the $10$ (equivalence classes of) numbers from $0$ to $14$ that are relatively prime to $15$. This can be verified painfully, by hand, one at a time. There is of course a shortcut theorem that tells me this, but computing is good. It is easy to verify the others don't work. Start with $1$. Sure. What about $2$? Keep adding $2$ to itself, modulo $15$. For a mild shortcut, note that $2+2+\cdots +2$ (eight of them) is $1$. But since $1$ is a generator, $\dots$. – André Nicolas Mar 6 '12 at 18:05 show 1 more comment ## 2 Answers An element $\overline{a}$ in $\mathbf{Z}_m$ generates if and only if its order is $m$. If $0\leq a\lt m$, then the order of $\overline{a}$ is the least positive integer $k$ such that $m|ka$. Since $a|ka$ for every integer $k$, it follows that the order of $\overline{a}$ is the smallest integer $k$ such that $ka=\mathrm{lcm}(m,a)$. Under what conditions is $k=m$? Since $\mathbf{Z}_m$ is cyclic, every subgroup is cyclic. Can you show that if $\overline{a}$ and $\overline{b}$ have the same order, then they generate the same subgroup? - I'll work with $\mathbf Z_6$. The differences are superficial and translating everything to the situation of $\mathbf Z_{15}$ will be good practice. For $x \in \mathbf Z$ to be a generator for $\mathbf Z_6$, it is necessary and sufficient that some multiple of $x$ be congruent to $1 \bmod 6$, i.e. that $6 \mid (ax - 1)$ for some integer $a$. To expand this further, there exists an integer $b$ such that $b6 = ax - 1$, so $1 = ax - b6$. What does Bézout now tell you about $a$ and $6$? It should follow that the classes of $1$ and $5$ are the possible generators. For finding subgroups, you can do something analogous to how we characterize the subgroups of $\mathbf Z$. In fact, certain theorems make this more than an analogy. If $H$ is a subgroup of $\mathbf Z_6$, then let $y$ be the smallest integer among $\{1, \ldots, 6\}$ whose residue modulo $6$ is in $H$. Again using Bézout, show that $y$ must divide $6$ and that it generates $H$. You should find that there are four subgroups, generated by the classes of $1$, $2$, $3$, and $6$. -
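A brute-force illustration of both parts (my own snippet, plain Python, not taken from the answers): the generators are exactly the residues coprime to $15$, of which there are $\varphi(15)=8$, and there is one cyclic subgroup for each divisor of $15$.

```python
# Generators and subgroups of Z_15 by direct computation.
from math import gcd

n = 15
gens = [a for a in range(1, n) if gcd(a, n) == 1]
print(len(gens), gens)        # 8 generators: [1, 2, 4, 7, 8, 11, 13, 14]

# Every subgroup of a cyclic group is cyclic: one subgroup <d> per divisor d of 15
for d in (1, 3, 5, 15):
    H = sorted({(k * d) % n for k in range(n)})
    print(f"<{d}>: order {len(H)}, elements {H}")
```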
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246519804000854, "perplexity_flag": "head"}
http://mathoverflow.net/questions/9981?sort=newest
## Coarse moduli spaces over Z and F_p ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I would like to know to what extent it is possible to compare fibers over $\mathbb{F}_p$ of coarse moduli spaces over $\mathbb{Z}$, and coarse moduli spaces over $\mathbb{F}_p$. I ask a more precise question below. Let $\mathcal{M}_g^{\mathbb{Z}}$ be the moduli stack of smooth genus $g$ curves over $\mathbb{Z}$. Let $M_g^{\mathbb{Z}}$ be its coarse moduli space, and $(M_g^{\mathbb{Z}})_p$ the fiber of this coarse moduli space over $\mathbb{F}_p$. Let $\mathcal{M}_g^{\mathbb{F}_p}$ be the moduli stack of smooth genus $g$ curves over $\mathbb{F}_p$ and $M_g^{\mathbb{F}_p}$ its coarse moduli space. The universal property gives a map $\phi:M_g^{\mathbb{F}_p}\rightarrow(M_g^{\mathbb{Z}})_p$. My question is : is $\phi$ an isomorphism ? In fact, since $\phi$ is a bijection between geometric points, and $M_g^{\mathbb{F}_p}$ is normal, the question can be reformulated as : is $(M_g^{\mathbb{Z}})_p$ normal ? This shows that when $g$ is fixed, the answer is "yes" except for a finite number of primes $p$. - ## 1 Answer So you're asking if formation of coarse spaces commutes with (certain types of) base change. In general the answer is no; one needs the notion of a tame moduli space. A good starting point for this is Jarod Alper's paper "Good Moduli Spaces for Artin Stacks", available on his web page; he explains the notion and cites the relevant papers for tame moduli spaces. This should help you to work out your particular example (I don't know the answer off the top of my head). - Thanks for the reference ! However, I don't think it applies in this situation. Indeed, being "tame" is a condition on the automorphism groups of the geometric points. In the situation here, these groups are reduced, and I think the copndition is exactly "being of order prime to the characteristic". And there exist curves in characteristic $p$ with $p$-groups of automorphisms. Such arguments apply, however, for primes $p$ bigger than the known upper bounds for the order of the automorphism group of a smooth genus $g$ curve. – Olivier Benoist Dec 29 2009 at 23:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9157240390777588, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/27589/convert-state-vectors-to-bloch-sphere-angles?answertab=votes
# Convert state Vectors to Bloch Sphere angles I think this question is a bit low brow for the forum. I want to take a state vector $\alpha |0\rangle + \beta |1\rangle$ to the two bloch angles. What's the best way? I tried to just factor out the phase from $\alpha$, but then ended up with a divide by zero when trying to compute $\phi$ from $\beta$. - 1 This question should be migrated to physics.sx – Frédéric Grosshans May 4 '12 at 19:37 2 – Piotr Migdal May 8 '12 at 7:05 1 You have asked 13 questions, cast 0 votes, and marked only one question as accepted. Consider marking more questions as correct and definitely start voting on answers to your own questions, as well as other questions and answers that you have not provided. There is little incentive for anyone to answer your questions as it stands. – Mark S. Everitt May 10 '12 at 4:24 2 Sure, Ill do that more. I didnt understand that I could give back to people answering simply by upping their numbers. I like the community and will try to do better as a member – Ben Sprott May 17 '12 at 15:10 No problem. It can take a while to find your feet on SE. Sometimes we just need to make a little noise. ;) – Mark S. Everitt May 18 '12 at 9:08 ## 2 Answers You are probably dividing by $\alpha$ at some point to eliminate a global phase, leading to your divide by zero in some cases. It would be better to get the phase angles of $\alpha$ and $\beta$ with $\arg$, and set the relative phase $\phi=\arg(\beta)-\arg(\alpha)$. Angle $\theta$ is now simply extracted as $\theta = 2\cos^{-1}(|\alpha|)$ (note that the absolute value of $\alpha$ is used). This is all assuming that you want to get to $$|\psi\rangle = \cos(\theta/2)|0\rangle + \mathrm{e}^{i\phi}\sin(\theta/2)|1\rangle\,,$$ which neglects global phase. - 1 Your previous questions suggest that you are using Matlab, which has the `angle()` function for calculating `arg`. In other languages that support complex types, `arg` (or something similar such as `carg` for C99 complex doubles) is more common. – Mark S. Everitt May 10 '12 at 4:21 $\phi$ is the relative phase between $\alpha$ and $\beta$ (so the phase of $\alpha/\beta$). You will only get zero or divide-by-zero when $\alpha=0$ or $\beta=0$. But in that case, $\phi$ is arbitrary. And when $\alpha$ or $\beta$ are close to zero, you are near the poles of the Bloch sphere, and $\phi$ doesn't really matter that much. -
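A minimal sketch of the conversion described in the first answer (my own code, assuming NumPy; the convention for $|\psi\rangle$ is the one given there, and the global phase is discarded):

```python
# Amplitudes (alpha, beta) -> Bloch angles (theta, phi), global phase dropped.
import cmath
import numpy as np

def bloch_angles(alpha, beta):
    """Angles (theta, phi) for |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>."""
    theta = 2.0 * np.arccos(np.clip(abs(alpha), 0.0, 1.0))
    if abs(alpha) < 1e-12 or abs(beta) < 1e-12:
        phi = 0.0                      # phi is arbitrary at the poles; pick 0
    else:
        phi = cmath.phase(beta) - cmath.phase(alpha)
    return theta, phi

print(bloch_angles(1/np.sqrt(2), 1j/np.sqrt(2)))   # (pi/2, pi/2), i.e. the |+i> state
```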
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9662218689918518, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=472541
Physics Forums

## Kinetics of Particles work and energy; A moving car

1. The problem statement, all variables and given/known data

The 2 Mg (I assumed mega-gram, not sure if that is the correct term) car has a velocity of v = 100 km/h when the driver sees an obstacle in front of the car. If it takes 0.75 s for him to react and lock the brakes, causing the car to skid, determine the distance the car travels before it stops. The coefficient of kinetic friction between the tires and the road is Uk = 0.25.

2. Relevant equations

Work and energy for a system of particles: $\Sigma T_1 + \Sigma U = \Sigma T_2$. The T's represent the initial and final kinetic energy respectively, $\frac{1}{2}mv^2$, and U represents all work done by external and internal forces acting on the system. Work of a constant force along a straight line: $U = F\cos\vartheta\,\Delta s$.

3. The attempt at a solution

So I tried to apply my basic equation to the question: $\Sigma T_1 + \Sigma U = \Sigma T_2$. $T_2$ equals zero (I assumed) because the car will come to rest at the end of the question. $T_1$ will equal $\frac{1}{2}mv^2$, which is equal to 1/2 * (2 Mg * 100 km/h^2) = 10000. Since the only work acting in this question is the friction force caused by the car braking, $U = F\cos\vartheta\,\Delta s$, therefore U = Ff (force of friction) * $\Delta s$, which is $-4.905\,\Delta s$ (negative because the friction force acts in the negative direction). So to summarize I now have $10000 - 4905\,\Delta s = 0$. I solved for delta s and got 2038, but this is obviously incorrect, and I don't know how to apply the time delay into this question. For reference the correct answer is s = 178 m.

I just started this chapter and don't have my bearings yet, so if you could please explain clearly how to proceed it would be appreciated, thank you.

Without doing any of my own work, here are a few pointers: Be sure to account for the fact that the given velocity is in km/h and not m/s. Remember that the work done by a constant force can be written as F*x, where x is the distance along which the force did the work. Now, to account for the time delay, simply find the distance the car would have traveled in that time and add it to the final distance to find the distance the driver travels before stopping.

Use equations of motion to get this one.
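For reference, here is the arithmetic those pointers lead to, as a short script (my own illustration; it assumes g = 9.81 m/s² and converts 100 km/h to m/s):

```python
# Reaction distance plus skidding distance from the work-energy balance.
g = 9.81
mu_k = 0.25
v0 = 100 / 3.6          # 100 km/h in m/s
t_react = 0.75          # s

d_react = v0 * t_react                     # distance covered while reacting
d_skid = v0**2 / (2 * mu_k * g)            # from (1/2) m v0^2 = mu_k m g d
print(d_react + d_skid)                    # ~178 m, matching the book's answer
```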
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9231876730918884, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/48951-what-perpendicular-distance-pt-5-4-line-y-1-2x-6-a.html
# Thread:

1. ## What is the perpendicular distance from the pt 5, 4 to the line y = 1/2x + 6?

The title pretty much explains the question: What is the perpendicular distance from the pt (5, 4) to the line y = 1/2x + 6? If you could please tell the answer and explain how you got there that would be great. Thanks

2. Let's call line $y = \frac{1}{2}x + 6$: line K. Find the equation of the line whose slope is the negative reciprocal of line K and which includes the point (5, 4). Call this line J. You will need to use the point-slope form equation: $y - y_1 = m(x - x_1)$. Find the point of intersection of these 2 lines. The perpendicular distance from the pt (5, 4) to the line $y = \frac{1}{2}x + 6$ is the distance from (5, 4) to the point of intersection.

3. Hello, gobbajeezalus! What is the perpendicular distance from $P(5,4)$ to the line $L_1\colon y = \frac{1}{2}x + 6$? There is a formula for this problem, but I'll walk through it for you...

The given line $L_1$ has slope $\frac{1}{2}$. The line perpendicular to it, $L_2$, has slope $-2$. $L_2$ has point (5, 4) and slope -2, so its equation is: $y - 4 = -2(x - 5) \quad\Rightarrow\quad y = -2x + 14$.

Where do $L_1$ and $L_2$ intersect? $\frac{1}{2}x + 6 = -2x + 14 \quad\Rightarrow\quad \frac{5}{2}x = 8$. Hence $x = \frac{16}{5},\; y = \frac{38}{5}$, and they intersect at $Q\left(\frac{16}{5}, \frac{38}{5}\right)$.

The desired distance is:
$$PQ = \sqrt{\left(\frac{16}{5} - 5\right)^2 + \left(\frac{38}{5} - 4\right)^2} = \sqrt{\left(-\frac{9}{5}\right)^2 + \left(\frac{18}{5}\right)^2} = \sqrt{\frac{81}{25} + \frac{324}{25}} = \sqrt{\frac{405}{25}} = \sqrt{\frac{81\cdot 5}{25}} = \boxed{\frac{9\sqrt{5}}{5} \approx 4.025}$$

4. Originally Posted by gobbajeezalus: "The title pretty much explains the question: What is the perpendicular distance from the pt 5, 4 to the line y = 1/2x + 6? If you could please tell the answer and explain how you got there that would be great. thanks"

The perpendicular distance from a point $(x_1, y_1)$ to the line $ax + by + c = 0$ is given by the formula:
$$\frac{\left| ax_1 + by_1 + c \right|}{\sqrt{a^2 + b^2}}$$
So, the perpendicular distance from $(5, 4)$ to the line $\frac{1}{2}x - y + 6 = 0$ is given as:
$$\frac{\left| \frac{1}{2}(5) + (-1)(4) + 6 \right|}{\sqrt{\left(\frac{1}{2}\right)^2 + (-1)^2}} = \frac{4.5}{\sqrt{1.25}} \approx 4.025$$
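A quick numerical check of the formula in the last post (my own snippet, assuming NumPy):

```python
# Point-to-line distance, both numerically and via the exact value from post 3.
import numpy as np

a, b, c = 0.5, -1.0, 6.0        # the line y = x/2 + 6 written as ax + by + c = 0
x1, y1 = 5.0, 4.0
d = abs(a * x1 + b * y1 + c) / np.hypot(a, b)
print(d, 9 * np.sqrt(5) / 5)    # both ~4.0249
```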
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9112203121185303, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/53465/what-is-the-proof-that-a-force-applied-on-a-rigid-body-will-cause-it-to-rotate-a/53696
# What is the proof that a force applied on a rigid body will cause it to rotate around its center of mass? Say I have a rigid body in space. I've read that if I during some short time interval apply a force on the body at some point which is not in line with the center of mass, it would start rotating about an axis which is perpendicular to the force and which goes through the center of mass. What is the proof of this? - You mean like a mathematical proof on experimental proof? – Zetta Suro Feb 9 at 15:25 2 You could google for `cars crashing on ice site:youtube.com`. – dmckee♦ Feb 9 at 15:28 @phoenixheart6 A mathematical proof using the three laws of Newton for a particle as its axioms. – Alraxite Feb 9 at 15:31 @dmckee I know it's true, so an experimental poof isn't what I'm looking for. – Alraxite Feb 9 at 15:34 2 @joshphysics Yes, it isn't if the rod is fixed. That's why my object is in space. There is more than one force acting when it is fixed. – Alraxite Feb 10 at 0:53 show 5 more comments ## 2 Answers Assume a very small particle embedded in the Rigid body of mass $m$. Let us find out its Torque or moment of force $\vec{\tau}$ about an arbitrary point $p$. $\vec{\tau} = \vec{f} \times \vec{r}$ where $\vec{r}$ is a displacement of this particle from point $p$. The total Torque on the rigid body will be some of $\tau$ of all the particles. If this $\tau$ has a non-zero value then the body will be rotating. Lets find out the total Torque, $\Gamma$ $\Gamma = \Sigma{\tau}$ $=> \Gamma = \Sigma{ \vec{f} \times \vec{r}}$ $=> \Gamma = \Sigma{ m \, \vec{a} \times \vec{r}}$ As The body is said to be rigid, therefore all the points on this body will be having same accelerations at ever instance. Also, Cross product is distributive ref, therefore, we can take $\vec{a}$ out of summation. $=> \Gamma = \vec{a} \times \Sigma{ m \, \vec{r}}$ now, if point $p$ is center of mass then, $\Sigma{ m \, \vec{r}}$ is zero. ref Therefore, $\Gamma$ is zero and rigid body will not rotate at all. NOTE: $\times$ is the vector cross product operator. - "As The body is said to be rigid, therefore all the points on this body will be having same accelerations at ever instance." I don't think that's quite true. – Gugg Mar 5 at 17:34 It seems that you have put the conclusion to your answer in as a premise. – Gugg Mar 5 at 18:32 One can make reasonable assumptions to investigate the problem in a simple manner. Here is my reasoning about this question. For the sake of simplicity let us assume we have a spherical object of radius R in the outer space. Let there be a hook at the surface of the sphere from which we can attach a string. Imagine we are equipped with a rocket system that can give us momentum to move about. Now, we hold one end of the string and move away from the sphere in a direction that the string, when it becomes taut, is not parallel to the radius of the sphere. The force we exert on the sphere in that direction can be analysed into the tangent and the perpendicular to the surface of the sphere. If $\theta$ is the angle between the string and the normal to the sphere we have: Tangent component: $F_T=F\sin(\theta)$ Normal component: $F_N=F\cos(\theta)$. The normal component is parallel to the radius of the sphere and passes through the centre (CM) and has no moment. This component will pull the sphere in the normal direction. The tangent component has a moment with respect to the centre $M=FR\sin(\theta)$. This component would rotate the sphere, should the axis of the sphere be pivoted, but it is not! 
However, I believe that, due to the inertia of the mass of the sphere, it would be sufficient to give pivotal leverage for the tangent force to rotate the sphere. The law of conservation of energy must be written, for a short time interval of application of the force, in the form $\mathbf{F}\cdot\mathbf{x} = \frac{1}{2}mv^2 + \frac{1}{2}I\omega^2$, where $\mathbf{x}$ is the displacement of the sphere, while the first term on the RHS is the kinetic energy due to the linear motion, and the second is the kinetic energy due to the rotational motion. Note that, as the sphere has no fixed axis, it will rotate about the axis which is perpendicular to the great circle passing through the point of the hook, the circle to which $F_T$ is tangent. Hence the axis will be perpendicular to $F_T$ and $F_N$, and so it is perpendicular to the force $\mathbf{F}$. This will be the case for any direction of $\mathbf{F}$. Why should the axis of rotation pass through the CM? The point here is that the object is rotating freely; it is not constrained to rotate about an arbitrary axis. Without going into mathematics, a quick argument from a physics point of view is that, if the axis passed through another point, the rotational motion would be unstable. I mean that for a freely rotating object, there is a minimum state of energy, and this is when the axis of rotation passes through the CM. If it passed through some other point, then according to the parallel axis theorem, the inertia of the object would be higher, hence higher energy of the system. It is like you bring an object to a certain height near the surface of the earth and then you set it free. It will fall to the lowest energy state, and that is when it is on the ground. - I appreciate you writing the answer but consider removing it. This does not answer the question. You just gave an analysis of a force acting on a sphere and then you gave the expression for the kinetic and rotational energy for the sphere. That is all. – Alraxite Feb 12 at 1:39 You didn't give any arguments for why the axis should go through the center of mass even in the special case of a sphere. – Alraxite Feb 12 at 1:39 @Alraxite Sorry I did not have the chance to respond earlier. Thanks for bringing to my attention the CM part of the question. I have edited my answer to include an argument about that. Please read it. – JKL Feb 12 at 11:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408742785453796, "perplexity_flag": "head"}
http://en.wikiversity.org/wiki/Advanced_elasticity/Stress-strain_relation_for_thermoelasticity
# Advanced elasticity/Stress-strain relation for thermoelasticity From Wikiversity Relation between Cauchy stress and Green strain Show that, for thermoelastic materials, the Cauchy stress can be expressed in terms of the Green strain as $\boldsymbol{\sigma} = \rho~\boldsymbol{F}\cdot\frac{\partial e}{\partial \boldsymbol{E}}\cdot\boldsymbol{F}^T ~.$ Proof: Recall that the Cauchy stress is given by $\boldsymbol{\sigma} = \rho~\frac{\partial e}{\partial \boldsymbol{F}}\cdot\boldsymbol{F}^T \qquad \implies \qquad \sigma_{ij} = \rho~\frac{\partial e}{\partial F_{ik}}F^T_{kj} = \rho~\frac{\partial e}{\partial F_{ik}}F_{jk} ~.$ The Green strain $\boldsymbol{E} = \boldsymbol{E}(\boldsymbol{F}) = \boldsymbol{E}(\boldsymbol{U})$ and $e = e(\boldsymbol{F},\eta) = e(\boldsymbol{U},\eta)$. Hence, using the chain rule, $\frac{\partial e}{\partial \boldsymbol{F}} = \frac{\partial e}{\partial \boldsymbol{E}}:\frac{\partial \boldsymbol{E}}{\partial \boldsymbol{F}} \qquad \implies \qquad \frac{\partial e}{\partial F_{ik}} = \frac{\partial e}{\partial E_{lm}}~\frac{\partial E_{lm}}{\partial F_{ik}} ~.$ Now, $\boldsymbol{E} = \frac{1}{2}(\boldsymbol{F}^T\cdot\boldsymbol{F} - \boldsymbol{\mathit{1}}) \qquad \implies \qquad E_{lm} = \frac{1}{2}(F^T_{lp}~F_{pm} - \delta_{lm}) = \frac{1}{2}(F_{pl}~F_{pm} - \delta_{lm}) ~.$ Taking the derivative with respect to $\boldsymbol{F}$, we get $\frac{\partial \boldsymbol{E}}{\partial \boldsymbol{F}} = \frac{1}{2}\left(\frac{\partial \boldsymbol{F}^T}{\partial \boldsymbol{F}}\cdot\boldsymbol{F} + \boldsymbol{F}^T\cdot\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{F}}\right) \qquad \implies \qquad \frac{\partial E_{lm}}{\partial F_{ik}} = \frac{1}{2}\left(\frac{\partial F_{pl}}{\partial F_{ik}}~F_{pm} + F_{pl}~\frac{\partial F_{pm}}{\partial F_{ik}}\right) ~.$ Therefore, $\boldsymbol{\sigma} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial \boldsymbol{E}}: \left(\frac{\partial \boldsymbol{F}^T}{\partial \boldsymbol{F}}\cdot\boldsymbol{F} + \boldsymbol{F}^T\cdot\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{F}}\right)\right]\cdot\boldsymbol{F}^T \qquad \implies \qquad \sigma_{ij} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial E_{lm}} \left(\frac{\partial F_{pl}}{\partial F_{ik}}~F_{pm} + F_{pl}~\frac{\partial F_{pm}}{\partial F_{ik}}\right)\right]~F_{jk} ~.$ Recall, $\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}} \equiv \frac{\partial A_{ij}}{\partial A_{kl}} = \delta_{ik}~\delta_{jl} \qquad \text{and} \qquad \frac{\partial \boldsymbol{A}^T}{\partial \boldsymbol{A}} \equiv \frac{\partial A_{ji}}{\partial A_{kl}} = \delta_{jk}~\delta_{il} ~.$ Therefore, $\sigma_{ij} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial E_{lm}} \left(\delta_{pi}~\delta_{lk}~F_{pm} + F_{pl}~\delta_{pi}~\delta_{mk}\right)\right]~F_{jk} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial E_{lm}} \left(\delta_{lk}~F_{im} + F_{il}~\delta_{mk}\right)\right]~F_{jk}$ or, $\sigma_{ij} = \frac{1}{2}~\rho~\left[\frac{\partial e}{\partial E_{km}}~F_{im} + \frac{\partial e}{\partial E_{lk}}~F_{il}\right]~F_{jk} \qquad \implies \qquad \boldsymbol{\sigma} = \frac{1}{2}~\rho~\left[\boldsymbol{F}\cdot\left(\frac{\partial e}{\partial \boldsymbol{E}}\right)^T + \boldsymbol{F}\cdot\frac{\partial e}{\partial \boldsymbol{E}}\right]\cdot\boldsymbol{F}^T$ or, $\boldsymbol{\sigma} = \frac{1}{2}~\rho~\boldsymbol{F}\cdot\left[\left(\frac{\partial e}{\partial \boldsymbol{E}}\right)^T + \frac{\partial e}{\partial \boldsymbol{E}}\right]\cdot\boldsymbol{F}^T ~.$ From the symmetry of the Cauchy 
stress, we have $\boldsymbol{\sigma} = (\boldsymbol{F}\cdot\boldsymbol{A})\cdot\boldsymbol{F}^T \qquad \text{and} \qquad \boldsymbol{\sigma}^T = \boldsymbol{F}\cdot(\boldsymbol{F}\cdot\boldsymbol{A})^T = \boldsymbol{F}\cdot\boldsymbol{A}^T\cdot\boldsymbol{F}^T \qquad \text{and} \qquad \boldsymbol{\sigma} = \boldsymbol{\sigma}^T \implies \boldsymbol{A} = \boldsymbol{A}^T ~.$ Therefore, $\frac{\partial e}{\partial \boldsymbol{E}} = \left(\frac{\partial e}{\partial \boldsymbol{E}}\right)^T$ and we get ${ \boldsymbol{\sigma} = ~\rho~\boldsymbol{F}\cdot\frac{\partial e}{\partial \boldsymbol{E}}\cdot\boldsymbol{F}^T ~. }$
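A numerical sanity check of the result (my own sketch, not part of the original derivation): take the simple test energy $e(\boldsymbol{E}) = \operatorname{tr}(\boldsymbol{E}^2)$, set $\rho = 1$, and compare $\boldsymbol{F}\cdot(\partial e/\partial\boldsymbol{E})\cdot\boldsymbol{F}^T$ with $(\partial e/\partial\boldsymbol{F})\cdot\boldsymbol{F}^T$ computed by finite differences. It assumes NumPy.

```python
# Compare the two stress formulas numerically for a test stored energy e(E) = tr(E^2).
import numpy as np

rng = np.random.default_rng(1)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # a random deformation gradient

def energy(F):
    E = 0.5 * (F.T @ F - np.eye(3))                  # Green strain E(F)
    return np.trace(E @ E)                           # test stored energy

# Route 1: F . (de/dE) . F^T, using de/dE = 2E for this particular e
E = 0.5 * (F.T @ F - np.eye(3))
sigma_E = F @ (2 * E) @ F.T

# Route 2: (de/dF) . F^T, with de/dF obtained by central finite differences
h = 1e-6
dedF = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        dF = np.zeros((3, 3))
        dF[i, j] = h
        dedF[i, j] = (energy(F + dF) - energy(F - dF)) / (2 * h)
sigma_F = dedF @ F.T

print(np.allclose(sigma_E, sigma_F, atol=1e-6))      # True: the two formulas agree
```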
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8552948832511902, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/39042/proving-function-for-stirling-numbers-of-the-second-kind?answertab=active
# Proving function for Stirling Numbers of the Second Kind I need to proof the following formula for Stirling Numbers of the Second Kind: $\sum\limits_{n \geq 0} S(n,k) x^n = \frac{x^k}{(1-x)(1-2x)\cdots(1-kx)}$ It is used widely around formularies but I neither have found any proof nor was I able to figure it out on my own. Thank you in advance! - 2 How do you define Stirling numbers? – Phira May 14 '11 at 11:07 For $n \geq1, k \geq 0$ is $c(n,k)$ the quantity of permutations $\pi \in S_n$ having k cycles. c(0,0) := 1, c(0,k) := 0 for $k \geq 1$. Set for $m,n \geq 0$ $s(m,n) := (-1)^{m-n}c(m,n)$ The numbers $s(m,n)$ alre called Stirling Numbers of the First Kind. – muffel May 14 '11 at 11:15 @thewilli So why is there a capital S in your formula? – Phira May 14 '11 at 11:33 Hmm, our prof seems to use other conventions that the guy writing the lecture notes. It definitely is the same.. – muffel May 14 '11 at 11:39 No, it isn't.... – Phira May 14 '11 at 12:00 show 5 more comments ## 2 Answers This is set out in the initial part of section 1.6 of Geneatingfunctionology with the result in equation 1.6.5. $$S(n,k) = S(n-1,k-1) + kS(n-1,k)$$ with $S(0,0)=1$. Of the natural generation functions which might express this, the simplest which deals with the factor of $k$ is likely to be of the form $$B_k(x) = \sum_n S(n,k) x^n$$ which with the recurrence leads to $$B_k(x) = xB_{k-1}(x) + kxB_k(x) = \frac{x}{1-kx} B_{k-1}(x)$$ for $k \ge 1$; $B_0(x)=1$. That in turn leads to the desired formula by multiplying successive terms. - could you please explain me what you did in the last step ($\cdots = \frac{x}{1-kx}B_{k-1}(x)$)? – muffel May 15 '11 at 18:50 If $B_k(x) = xB_{k-1}(x) + kxB_k(x)$ then $B_k(x) - kxB_k(x)= xB_{k-1}(x)$, i.e $(1- kx)B_k(x)= xB_{k-1}(x)$, so $B_k(x) = \frac{x}{1-kx} B_{k-1}(x)$ – Henry May 15 '11 at 21:24 Hmm, I cannot see the problem which came up in the comments. If we assume simply an error in the notation and that the actually the Stirling numbers 2'nd kind are meant (as verbally exposed in the question) then the identity holds. This can even be checked simply using negative integer $x$ and computing Eulersums of appropriate order. Let's write S2 the matrix of that numbers as $\qquad \small \begin{array} {rrrrrrr} 1 & . & . & . & . & . & . & . \\ 1 & 1 & . & . & . & . & . & . \\ 1 & 3 & 1 & . & . & . & . & . \\ 1 & 7 & 6 & 1 & . & . & . & . \\ 1 & 15 & 25 & 10 & 1 & . & . & . \\ 1 & 31 & 90 & 65 & 15 & 1 & . & . \\ 1 & 63 & 301 & 350 & 140 & 21 & 1 & . \\ 1 & 127 & 966 & 1701 & 1050 & 266 & 28 & 1 \\ \ldots & \end{array}$ where we use zero-based row- and columnindexes. Then the problem can be restated as summing by building the dot product of one column by one row vector $V(x) = [1,x,x^2,x^3,...]$ with manageable (ideally infinite) dimension. The numbers along a column can be seen as composed by finite compositions of geometric series. Column 0 is $[1,1,1,1,1,...]$ and the dot-product with the V(x)-vector is then $V(x)*S2[,0] = {1 \over 1-x}$ Column 1 is $[1-1,2-1,4-1,8-1,16-1,...]$ and the dot-product with the V(x)-vector is then $V(x)*S2[,1] = {1 \over 1-2x} - {1 \over 1-x} = { (1-1x) - (1-2x) \over (1-1x)(1-2x) } = {x \over (1-1x)(1-2x) }$ One needs the simple composition of the other columns (see for instance in wikipedia) to see more examples for that decompositions, and also a general description for that compositions (where the text is:"Another explicit expanding of the recurrence-relation(...)"). 
I think the idea behind that homework assignment was that the student should find the compositions of powers such that the problem is converted into describing the (finite) composition of closed forms of geometric series. -
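The identity itself is easy to check by machine; here is a small SymPy sketch (my own, using SymPy's built-in `stirling`, which defaults to the second kind):

```python
# Check  sum_n S(n,k) x^n = x^k / ((1-x)(1-2x)...(1-kx))  as a power series up to order N.
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
N, k = 12, 3
lhs = sum(stirling(n, k) * x**n for n in range(N + 1))
rhs = x**k / sp.prod([1 - j * x for j in range(1, k + 1)])
print(sp.expand(sp.series(rhs, x, 0, N + 1).removeO() - lhs))   # 0
```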
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271489977836609, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/145248/limits-problem-in-integration
# Limits problem in Integration please look at the following question, Let $X$ denote the diameter of an armored electric cable and $Y$ denote the diameter of the ceramic mold that makes the cable. Both $X$ and $Y$ are scaled so that they range between $0$ and $1$. Suppose that $X$ and $Y$ have the joint density $$f(x, y) =\begin{cases} \frac1y,&0<x<y<1\\\\ 0,&\text{elsewhere} \end{cases}$$ when i solved it i got the following limits of x and y, 0->1/2 and 0->(1/2-x) however according to the book the correct limits are 0->1/4 and x->1/2-x I am just confused how to plot this function in order to find out where that 1/4 came from ? According to the book the solution is, $$\begin{align*} P\left(X+Y > \frac12\right)&=1-P\left(X+Y < \frac12\right)\\ &=1-\int_0^{1/4}\int_x^{1/2-x}\frac1y dy\,dx\\ &= 1-\int_0^{1/4}\left[\ln\left(\frac12-x\right)-\ln x\right]dx\\ &=1+\left.\left[\left(\frac12-x\right)\ln\left(\frac12-x\right)-x\ln x\right]\right\vert_0^{1/4}\\ &=1+\frac14\ln\left(\frac14\right)\\ &=0.6534. \end{align*}$$ please guide me where that 1/4 came from and how can i plot that question to clearly understand that ? - ## 2 Answers It all hinges on drawing the right picture. Once you do that, the rest is almost automatic. The joint density is $0$ except on or inside the triangle with vertices $(0,0)$, $(1,1)$, and $(0,1)$. We are interested in the integral of the joint density over the part of this triangle that has $x+y \gt 1/2$. So draw the line $x+y=1/2$. We will want to be "above" this line. Note that the line $x+y=1/2$ meets the line $y=x$ at $(1/4,1/4)$. So we want to integrate the joint density over the quadrilateral-shaped region that has the following corners: $(1/4,1/4)$, $(1,1)$, $(0,1)$ and $(0,1/2)$. The region is slightly ugly. Whether we integrate first with respect to $x$ or with respect to $y$, we will have to break up the region. It is tempting to integrate over the part of our big triangle that has $x+y \le 1/2$, and subtract the result from $1$. This "complementary" region is the triangle with corners $(0,0)$, $(1/4,1/4)$, and $(0,1/2)$. Why $(1/4,1/4)$? Because that's where $y=x$ and $x+y=1/2$ meet. If we integrate first with respect to $y$, there is no need to break up the integral. For then $y$ goes from $x$ to $1/2-x$. Then we integrate with respect to $x$. The rightmost point of our region is at $x=1/4$, $y=1/4$ so we will integrate from $x=0$ to $x=1/4$. that is the book's solution. Remark: The book's solution is not optimal. Despite the need to break up the integral, I would prefer to set things up so that I integrate first with respect to $x$. Since the density function does not mention $x$ explicitly, the first integration is trivial. We can integrate over the part of the triangle that has $x+y>1/2$, or over the complementary region. Let's find the answer directly. For $y=1/4$ to $y=1/2$, we want to integrate from $x=1/2-y$ to $x=y$. From $y=1/2$ to $y=1$, we want to integrate from $x=0$ to $x=y$. No integration of $\ln$ is needed. We get $$\int_{1/4}^{1/2} \left(2-\frac{1}{2y}\right)dy+\int_{1/2}^{1}dy,$$ which is easy to calculate. - This is mostly a rewording of things already in Andre's answer, but still it might be helpful. When you let $y$ run from $0$ to $(1/2)-x$, you are allowing $y\le x$, but $f(x,y)=0$ there, so if you want to wind up integrating $1/y$, you want $y$ to run from $x$ to $(1/2)-x$. Also, if you want $x+y\lt1/2$, then that, together with $0\lt x\lt y$, forces $x\lt1/4$; if $x\ge1/4$, then $y\gt x\ge1/4$, and $x+y\gt1/4+1/4=1/2$. 
That's where the $1/4$ comes from. -
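A Monte Carlo check of the book's value 0.6534 (my own snippet, assuming NumPy): since the marginal of $Y$ is uniform on $(0,1)$ and $X\mid Y=y$ is uniform on $(0,y)$ for this density, sampling is straightforward.

```python
# Monte Carlo estimate of P(X + Y > 1/2) for f(x, y) = 1/y on 0 < x < y < 1.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
y = rng.uniform(0.0, 1.0, n)        # marginal of Y is uniform on (0, 1)
x = rng.uniform(0.0, 1.0, n) * y    # given Y = y, X is uniform on (0, y)
print(np.mean(x + y > 0.5))         # ~0.6534, matching 1 + (1/4) ln(1/4)
```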
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9560387134552002, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/189127/what-is-bayes-theorem-in-simplest-way
# What is Baye's Theorem in simplest way I am currently planning to take a course on artificial intelligence, there Bayes theorem is the basic. Now I tried to understand bayes theorem so many times. But to understand that I have to understand conditional probability, joint probability and total probability. So can anyone answer my following questions in a simplest way? 1. What is difference between conditional probability and joint probability? I would be glad if someone can explain intuitively in real life problems and mathematically also. 2. What is Total probability? 3. Explain bayes theorem in simple logic and where I can use them? Real life examples would be better.... - ## 2 Answers I think Bayes' theorem is intuitive if you just multiply through with the denominator. The formula is $$P(A, B) = P(A|B)P(B) = P(B|A)P(A).$$ This is, in a way, the definition of conditional probability. Look at the first one: $$P(A, B) = P(B|A)P(A).$$ There are many equivalent ways to interpret this. Let me give you one example. Let $A$ be the event "I am hungry", and $B$ the event "I go to a restaurant". Note that it's possible for me to be hungry and not go to a restaurant, and it's also possible for me to go to a restaurant without being hungry. But there is a correlation like this: The chance of me going to restaurant increases if I'm hungry. That means $P(B|A) > P(B)$. (This is only true in this example!) $P(B|A)$ is called the conditional probability because it is conditioned on $A$. It is the probability of $B$ happening given the knowledge that $A$ happens. The joint probability is $P(A, B)$, the chance of both $A$ and $B$ happening. If I know $P(A)$ and $P(B|A)$, I can compute $P(A, B)$ as follows. First, think about how probable $A$ can happen. That is $P(A)$. And assuming $A$ happens, how probable is it that $B$ also happens? That is $P(B|A)$ by definition. Multiplying them together, I get the probability that both $A$ and $B$ happen. - What does this have to do with Bayes' theorem? – Dilip Sarwate Aug 31 '12 at 11:52 It's just the first equation. – Tunococ Sep 1 '12 at 0:09 I believe that the following article on Wikipedia answers your Question 3 quite nicely: http://en.wikipedia.org/wiki/Bayes'_theorem (see the Introductory Example). Bayes Theorem is about "reversing" the conditional probability, i.e. finding $P(A\mid B)$ given $P(B\mid A)$. This is sometimes easier than directly finding $P(A\mid B)$. Another nice website is : http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/. -
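A small worked "real life" example (my own, with made-up numbers for a diagnostic test) that uses both the total probability rule and Bayes' theorem:

```python
# Toy diagnostic-test example: P(disease | positive test) from the reverse conditional.
p_disease = 0.01            # P(A): prior probability of having the disease
p_pos_given_disease = 0.95  # P(B|A): probability the test is positive if you have it
p_pos_given_healthy = 0.05  # P(B|not A): false-positive rate

# Total probability: P(B) = P(B|A) P(A) + P(B|not A) P(not A)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
print(p_pos_given_disease * p_disease / p_pos)   # ~0.161: a positive test is far from certain
```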
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507700800895691, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/55865/why-does-this-method-for-solving-matrix-equations-work
Why does this method for solving matrix equations work? I have this assignment: Given: $A = \begin{pmatrix} 2 & 4 \\ 0 & 3 \end{pmatrix}$ $C = \begin {pmatrix} -1 & 2 \\ -6 & 3 \end{pmatrix}$ Find all B that satisfy $AB = C$. I know that one option is to say $B = \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)$ and multiply it with $A$. By making each member equal to the one in $C$, I have a system of linear equations which I can solve. However, I also know that I can set up a system like this: $$\left( \begin{array} {cc|cc} 2 & 4 & -1 & 2 \\ 0 & 3 & -6 & 3 \end{array} \right)$$ If I manipulate it like I would a system of linear equations (for example, by swapping rows, or adding a multiple of a row to another) to get the identity matrix $\left( \begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix} \right)$, then what I'm looking for (matrix $B$) will appear in the right hand side, like this: $$\left( \begin{array} {cc|cc} 1 & 0 & 7/2 & -1 \\ 0 & 1 & -2 & 1 \end{array} \right)$$ In this case, $B = \left( \begin{smallmatrix} 7/2 & -1 \\ -2 & 1\end{smallmatrix} \right)$. My question is, quite simply, how does this work? It looks like magic to me right now. - 2 – lhf Aug 5 '11 at 21:39 2 Answers You want a matrix $B$ that satisfies $AB = C$. That is, if $A$ is invertible, you can left multiply both sides by $A^{-1}$ and get $B = A^{-1}C$. Notice that simple row operations are just left multiplication by matrices. You may need a minute or two to convince yourself of this, but try it: left multiplying a matrix $A$ by $\begin{pmatrix}1&0 \\ 0&3\end{pmatrix}$ is just multiplying (row 2) by 3; left multiplying by $\begin{pmatrix} 1&2 \\ 0&1\end{pmatrix}$ is just adding 2*(row 2) to (row 1). So performing the same row operations to $A$ and $C$ is essentially left multiplying $A$ and $C$ by the same matrix. When you manipulate $A$ until it becomes the identity, you must have ended up left multiplying it by $A^{-1}$, so the matrix you get on the right is $A^{-1}C$, i.e. $B$. (In the same way, if you wanted to solve $BA = C$ for $B$, you could use column operations, because right multiplication by matrices is just column operations.) - That makes sense, thanks. – Javier Badia Aug 5 '11 at 23:33 All the operations you perform on A and C (and on the intermediary matrices you get after some operations) make you replace A and C by PA and PC for some matrices P. If in the end A is transformed into the identity matrix Id, this means that the product K of the matrices P used is such that KA=Id. Hence K is the inverse A-1 of A and the matrix you get on the right is KC=A-1C, as desired. -
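A quick numerical confirmation (my own snippet, assuming NumPy) that the augmented-matrix procedure computes $B = A^{-1}C$:

```python
# Solving AB = C for B; this is the same B that row-reducing [A | C] to [I | B] produces.
import numpy as np

A = np.array([[2.0, 4.0], [0.0, 3.0]])
C = np.array([[-1.0, 2.0], [-6.0, 3.0]])

B = np.linalg.solve(A, C)       # B = A^{-1} C, column by column
print(B)                        # [[ 3.5 -1. ], [-2.  1. ]]
print(np.allclose(A @ B, C))    # True
```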
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485363960266113, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=107476
Physics Forums

## Applications of Integration-Volume

I'm asked to find the volume generated by rotating the region bounded by the given curves about the y-axis (using the method of cylindrical shells). I'm given the functions $$y = 4(x-2)^2$$ and $$y = x^2 - 4x + 7$$.

I'm not sure how to word this properly... they don't give me the domain of the function to find the volume... as in, most of the questions (and of course, all of the examples in the text) have given domains: find such and such when x = 3 and x = 0, or something of the like. I can do all the problems where the domain or range (in some cases) is given, but I'm not entirely sure how to figure out my domains. Is that the intersection of the two? Because then I would get x = 1 and x = 3. And if that's true then I'm just screwing something else up.

Another general question I have is that when I'm making equations (and I'm consistently having this problem for areas etc.), I always seem to subtract the wrong function from the other one... in other words, I always seem to end up with a negative or incorrect area/volume. Say for volumes... $$\int_{a}^{b} 2 \pi x f(x)\,dx$$ for f(x) I always seem to subtract the wrong function from the wrong function!!! How can I tell which one is going to be the correct one? And sometimes both answers are positive and one is correct and one is not. I asked my professor and he told us just to put a (+/-) at the front and then change it once you know what it is...???!!! At first I thought it was which function was "on top" of the graphed functions, but that doesn't seem to work very well either!!!

Sorry for the super long post!!! It's been almost 4 years since I've done calc and now I have to take another course (calc II), so if my questions seem dumb, I'm sorry, but I'm still trying to catch up!!! Thanks!

For the domain, you are getting the right region: you set the functions equal to each other to find out where they intersect. This may help: http://mathdemos.gcsu.edu/shellmetho...y/gallery.html Furthermore, the formula for the volume of a cylindrical shell is V = 2*pi*(delta r)*h, so then 2*pi times the integral of radius times height. Here the radius is the distance from the y-axis to the center of the shell, which in this case is an x-distance. Now for the height: the height is the top function minus the lower function. Hope this helps.

Thanks again!!!! Those animations are awesome!!
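Here is the computation those hints lead to, as a short SymPy sketch (my own addition: the intersection points and the shell integral, with the top/bottom choice checked at a sample point):

```python
# Shell method about the y-axis for the region between the two given curves.
import sympy as sp

x = sp.symbols('x')
f = 4 * (x - 2)**2          # this curve is the lower one on the region (f(2) = 0)
g = x**2 - 4 * x + 7        # this one is on top (g(2) = 3)

a, b = sorted(sp.solve(sp.Eq(f, g), x))               # x = 1 and x = 3
V = 2 * sp.pi * sp.integrate(x * (g - f), (x, a, b))  # radius * height
print(a, b, V)                                        # 1 3 16*pi
```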
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277462363243103, "perplexity_flag": "middle"}
http://www.mathplanet.com/education/pre-algebra/probability-and-statistic/combinations-and-permutations
# Combinations and permutations

Before we discuss permutations we are going to have a look at what the words combination and permutation mean. A Waldorf salad is a mix of, among other things, celeriac, walnuts and lettuce. It doesn't matter in what order we add our ingredients, but if we have a combination to our padlock that is 4-5-6, then the order is extremely important.

If the order doesn't matter then we have a combination; if the order does matter then we have a permutation. One could say that a permutation is an ordered combination.

The number of permutations of n objects taken r at a time is determined by the following formula:

$$P(n,r)=\frac{n!}{(n-r)!}$$

n! is read "n factorial" and means all numbers from 1 to n multiplied together, e.g.

$$5!=5\cdot 4\cdot 3\cdot 2\cdot 1$$

This is read "five factorial". 0! is defined as 1:

$$0!=1$$

Example: A code has 4 digits in a specific order, and each digit is between 0 and 9. How many different permutations are there if each digit may only be used once?

A four-digit code could be anything from 0000 to 9999, hence there are 10,000 combinations if every digit could be used more than once. But since we are told in the question that each digit may only be used once, this limits our number of possibilities. In order to determine the correct number of permutations we simply plug our values into our formula:

$$P(10,4)=\frac{10!}{(10-4)!}=\frac{10\cdot9\cdot8\cdot 7\cdot 6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1 }{6\cdot5\cdot 4\cdot 3\cdot 2\cdot 1}=5040$$

In our example the order of the digits was important; if the order didn't matter we would have what is the definition of a combination. The number of combinations of n objects taken r at a time is determined by the following formula:

$$C(n,r)=\frac{n!}{(n-r)!r!}$$
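The two formulas translate directly into code; a small sketch (my own, Python 3.8+, which also ships `math.perm` and `math.comb` for comparison):

```python
# P(n, r) and C(n, r) from the factorial definitions, checked against the built-ins.
from math import comb, factorial, perm

def P(n, r):
    return factorial(n) // factorial(n - r)

def C(n, r):
    return factorial(n) // (factorial(n - r) * factorial(r))

print(P(10, 4), perm(10, 4))   # 5040 5040  -- the 4-digit code example
print(C(10, 4), comb(10, 4))   # 210 210
```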
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 5, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9245250225067139, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/19264/what-is-the-etymology-for-the-term-conductor
## What is the etymology for the term conductor? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) This is related to the previous question of how to define a conductor of an elliptic curve or a Galois representation. What motivated the use of the word "conductor" in the first place? A friend of mine once pointed out the amusing idea that one can think of the conductor of an elliptic curves as "someone" driving a train which lets you off at the level of the associated modular form. A similar statement can be made concerning Szpiro's conjecture, which provides asymptotic bounds on several invariants of an elliptic curve in terms of its conductor. Here one might think of the conductor as "someone" who controls this symphony of invariants consisting of the minimal discriminant, the real period, the modular degree, and the order of the Shafarevich-Tate group (assuming BSD). Was there some statement of this sort which motivated Artin's original definition of the conductor? Does anyone have a reference for the first appearance of the word conductor in this context? I apologize if this question is inappropriate for MO. - 5 I believe it is a translation of the German term 'Führer' (which must have led to some awkward conversations between number theorists in the late 1930's and early 1940's). I don't know how and when the German term originated. – François G. Dorais♦ Mar 25 2010 at 3:05 "conductor" must be appropriate for $\mho$, if not for MO ;-) – Noam D. Elkies Jan 29 at 20:47 ## 3 Answers It is a translation from the German Führer (which also is the reason that in older literature, as well as a fair bit of current literature, the conductor is denoted as f in various fonts). Originally the term conductor appeared in complex multiplication and class field theory: the conductor of an abelian extension is a certain ideal that controls the situation. Then it drifted off into other areas of number theory to describe parameters that control other situations. Of course in English we tend not to think of conductor as a leader in the strong sense of Führer, but more in a musical sense, so it seems like a weird translation. But back in the 1930s the English translation was leader rather than conductor, at least once: see the review of Fueter’s book on complex multiplication in the 1931 Bulletin of the Amer. Math. Society, page 655. The reviewer writes in the second paragraph "First there is a careful treatment of those ray class fields whose leaders are multiples of the ideal..." You can find the review yourself at http://www.ams.org/bull/1931-37-09/S0002-9904-1931-05214-9/S0002-9904-1931-05214-9.pdf. I stumbled onto that reference quite by chance (a couple of years ago). If anyone knows other places in older papers in English where conductors were called leaders, please post them as comments below. Thanks! Concerning Artin's conductor, he was generalizing to non-abelian Galois extensions the parameter already defined for abelian extensions and called the conductor. So it was natural to use the same name for it in the general case. Edit: I just did a google search on "leader conductor abelian" and the first hit is this answer. Incredible: it was posted less than 15 minutes ago! - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. "Es steht alles schon bei Dedekind", as Emmy Noether was fond of saying. 
In fact, • R. Dedekind, Über die Anzahl der Idealklassen in den verschiedenen Ordnungen eines endlichen Körpers, Gauss Festschrift 1877, defined the "Führer" of an order in a number field. [BTW: in German, Führer does not actually mean a strong leader but rather someone who guides you (as in tourist guide). But of course . . . ] Class groups of orders in quadratic number fields are ring class groups, which generalize immediately to ray class groups (Weber); from there the word spread to complex multiplication and class field theory. - The first time I ever saw a conductor defined is not in the sense mentioned above, but in linear algebra, from Hoffman & Kunze's book. In their chapter on elementary canonical forms they define the conductor of a vector $\alpha$ into a subspace $W$ with respect to a linear operator $T$ to be the ideal $S_T(\alpha;W) = \{ g \in F[x] \mid g(T)\alpha \in W \}$, where the ambient vector space is over the field $F$. Interestingly, they say that they themselves call this the 'stuffer' ideal (from German, das eistopfende Ideal), but claim that "Conductor" is more commonly used, and add that this term is "preferred by those who envision a less aggressive operator $g(T)$, gently leading the vector $\alpha$ into $W$." Hoffman & Kunze, Linear Algebra 2nd Edition, p. 201 -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9347158074378967, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/104583/exponentiating-4-by-4-matrix-analytically
## Exponentiating 4 by 4 matrix analytically ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Does there exist an analytical method by which I can exponentiate a 4 by 4 matrix, in the same way as the general 2 by 2 matrix case in the Pauli matrix basis? I have Dirac matrices (which are composed of direct products of Pauli matrices) as my basis for 4 by 4 matrices. I need an analytical way! Any reply is appreciated. Regards. - 3 What is the result you are alluding to for $2\times 2$ matrices? – Igor Rivin Aug 12 at 23:22 By "analytically" you mean "explicitly"? Put your matrix $X$ in Jordan form. Then $X=S+N$ where $S$ is diagonal and $N$ nilpotent and $SN=NS$, and $\exp(X)=\exp(S)\cdot\exp(N)$ which is quite explicit to compute... – Qfwfq Aug 12 at 23:47 @Qfwfq: that is computationally tractable, but NOT explicit (try writing a formula in terms of matrix elements of $X$) – Igor Rivin Aug 13 at 0:12 3 Also, why all the votes to close? – Igor Rivin Aug 13 at 0:12 ## 3 Answers Perhaps I misunderstand the question. When you say you have Dirac matrices, does that mean that you are computing the exponential of a linear combination of Dirac matrices? If so, then there is a very simple analytical formula in any dimension: just use the Clifford relations in the exponential series. More concretely, suppose that you would like to compute the exponential of a matrix $X := \sum_i x^i \Gamma_i$, where the Dirac matrices $\Gamma_i$ obey the Clifford relation $$\Gamma_i \Gamma_j + \Gamma_j \Gamma_i = - 2 g_{ij} I~,$$ with $I$ the identity matrix. Then it follows from this relation that $$X^2 = - x^2 I~,$$ where I have introduced the (indefinite, if $g_{ij}$ has indefinite signature) "squared norm" $$x^2 = \sum_{i,j} x^i x^j g_{ij}~.$$ If $x^2 = 0$, then $$\exp X = I + X$$ and if $x^2 \neq 0$, then letting $x = \sqrt{x^2}$ (which could be imaginary), $$\exp X = \cos x I + \frac{\sin x}{x} X~.$$ Added (for the "heathens") Qiaochu's comment is correct. Here are some more details. Let $V$ be a finite-dimensional real vector space with a non-degenerate inner product $\left<-,-\right>$. Let $Cl(V)$ be the corresponding Clifford algebra. Let $\rho: Cl(V) \to \operatorname{End}(M)$ be an irreducible representation of $Cl(V)$. Let $(e_i)$ be a basis for $V$. Then $\Gamma_i := \rho(e_i)$ are called Dirac matrices of $Cl(V)$ in the representation $M$. - For us heathens: what are Dirac matrices? – Igor Rivin Aug 13 at 13:35 @Igor Rivin: as far as I understand (which is not to say much) the first display (Clifford relation) is sort of the definition. – quid Aug 13 at 14:08 2 @Igor: they are a particular matrix representation of a certain Clifford algebra. – Qiaochu Yuan Aug 13 at 16:05 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. There is a completely explicit formula in this paper of Bensauod and Mouline (Rendiconti Palermo, 2005), which is quite compact for low dimensions. - (There seems to be a problem with the link.) – Andres Caicedo Aug 13 at 0:12 @Andres: should be fixed now... – Igor Rivin Aug 13 at 0:35 4 It's explicit in terms of the solution of a differential equation related to the characteristic polynomial. Of course, to solve that differential equation explicitly you need the eigenvalues...
– Robert Israel Aug 13 at 1:29 Thanks all. Igor, are you sure that the Dirac matrices satisfy this particular Clifford relation? I guess it's +/- depending upon the indices. I tried this derivation before and got stuck at the anticommutator relations in particular. -
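As a quick numerical check of the closed form above, here is a minimal Python sketch (not part of the original thread). It assumes one concrete choice of Dirac matrices, namely the three spatial gamma matrices of the Dirac representation, which satisfy $\Gamma_k \Gamma_l + \Gamma_l \Gamma_k = -2\delta_{kl}I$, i.e. the Clifford relation above with $g_{ij}=\delta_{ij}$.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Spatial Dirac matrices in the Dirac representation: Gamma_k = [[0, s_k], [-s_k, 0]].
# They satisfy Gamma_k Gamma_l + Gamma_l Gamma_k = -2 delta_{kl} I.
J = np.array([[0, 1], [-1, 0]], dtype=complex)
gammas = [np.kron(J, s) for s in (s1, s2, s3)]

coeffs = np.array([0.3, -1.2, 0.7])                 # the x^i (arbitrary test values)
X = sum(c * g for c, g in zip(coeffs, gammas))

x = np.sqrt(coeffs @ coeffs)                        # sqrt(x^2), since g_{ij} = delta_{ij} here
closed_form = np.cos(x) * np.eye(4) + (np.sin(x) / x) * X

print(np.allclose(X @ X, -(x**2) * np.eye(4)))      # Clifford check: X^2 = -x^2 I
print(np.allclose(expm(X), closed_form))            # exp X = cos(x) I + (sin x / x) X
```

Both checks print True for any real coefficient vector, since the only input to the derivation is the Clifford relation itself.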
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9124993681907654, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/251839/probabilities-for-unknown-finite-population-from-sample?answertab=active
# Probabilities for unknown finite population from sample? If I have a known population ($N$ marbles of which $M$ are black) and draw $n$ samples without replacement, the probability of drawing $x$ black marbles is given by the hypergeometric distribution. Is there a way to get probabilities for the total number of black marbles $M$ from the number of black marbles $x$ in my sample? - Under suitable assumptions one can. Assume for example that $N$ is fixed, and Alicia decided on how many of these will be black, $0$ to $N$, using a uniform distribution, or any other known distribution. Then based on sample proportion, we can calculate the probabilities Alicia decided to put in $k$ black. – André Nicolas Dec 5 '12 at 21:37 Can you get any quantitative results without assumptions on the distribution of the $M$ (i.e. Alicia's choice)? I want to find a maximum number of the remaining black marbles (i.e. $M-x$), that holds except with some small probability. – Tom S Dec 5 '12 at 21:47 – Jonathan Christensen Dec 5 '12 at 21:56 I do not have any insight on how to attack the problem without a prior. – André Nicolas Dec 5 '12 at 21:56 ## 1 Answer The probability of extracting exactly $x$ black marbles in a sample of size $n$ from a population of $N$ marbles of which $M$ are black can be calculated as: $$P(X=x|n,N,M) = \frac{\binom{M}{x}\binom{N-M}{n-x}}{\binom{N}{n}}$$ If you assume that all values of $M$ compatible with your result are equally likely (this would be André's prior distribution assumption, I believe), you can use the exact same formula with a different twist, considering the total number of black marbles, $M$, as the independent variable, and the number of black marbles in your sample, $x$, as a parameter instead: $$f(M) = P(X=x|n,N,M) = \frac{\binom{M}{x}\binom{N-M}{n-x}}{\binom{N}{n}}$$ The above function computes the relative likelihood of a value of $M$, and the value that maximizes it is the maximum likelihood estimator of $M$ for the population. If you plot the above function for all possible values, $M\in[x,N-n+x]$, after normalization you'll get a probability distribution which can be used to compute the probabilities you are after. - Right, this corresponds to a uniform prior on M. – Jonathan Christensen Dec 5 '12 at 22:23
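A small Python sketch of the normalisation described in the answer (illustrative only; the numbers $N=50$, $n=10$, $x=3$ and the uniform prior on $M$ are assumptions, not part of the question):

```python
import numpy as np
from scipy.stats import hypergeom

N, n, x = 50, 10, 3                       # population size, sample size, observed black marbles

# Likelihood of each admissible total M.  Note scipy's argument order:
# hypergeom.pmf(k, M_total, n_successes, N_draws), so the call below is pmf(x, N, M, n).
Ms = np.arange(x, N - n + x + 1)
lik = np.array([hypergeom.pmf(x, N, M, n) for M in Ms])

posterior = lik / lik.sum()               # normalised distribution over M (uniform prior)
M_mle = Ms[np.argmax(lik)]                # maximum-likelihood estimate of M
print(M_mle)                              # 15, roughly N * x / n
print(posterior.sum())                    # 1.0
```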
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9026869535446167, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/208726-group-theory-question-print.html
# Group theory question Printable View • November 29th 2012, 03:07 PM Ant Group theory question Hi, Let $H_{1}, H_{2}, H_{3}$ be subgroups of a group $G$ under addition. Moreover suppose $x\in H_{1}, y\in H_{2}, x+y \in H_{3}$ I'm wondering if it then follows that $H_{1} = H_{2} = H_{3}$ Could anyone offer any help as to whether or not this is true? Thanks! • November 29th 2012, 03:25 PM GJA Re: Group theory question Hi Ant, Take a look at $H_{1}=2\mathbb{Z}, H_{2}=3\mathbb{Z}$ and $H_{3}=5\mathbb{Z}$ as subgroups of $(\mathbb{Z}, +)$ to see if you can come up with a counterexample. If this is too cryptic let me know and I'll try to provide more details. Good luck! • November 29th 2012, 03:36 PM Ant Re: Group theory question Yes of course. $2 \in H_{1}$, $3 \in H_{2}$ and $2+3=5 \in H_{3}$ yet clearly these subgroups are not equal. Thanks! No wonder I couldn't prove it! • November 29th 2012, 03:45 PM Ant Re: Group theory question The problem I'm working is actually: Let R be a commutative ring with unity. Prove that if the sum of two non units is a non unit then R has a unique maximal ideal. My working so far: Let x,y be non zero non units. So the ideals they generate are proper subgroups of R. Furthermore the ideal that x+y generates is also proper. We also know that every proper ideal is contained in a maximal ideal. so $(x) \subset J_{1}$, $(y) \subset J_{2}$, $(x+y) \subset J_{3}$. For $J_{1}, J_{2}, J_{3}$ maximal ideals. Our goal is to prove that $J_{1} = J_{2} = J_{3}$ i.e. that There is unique maximal ideal. The only thing I can think of to do at the moment, is use the closure of ideals to show that the intersection of (x) and (y) will contain xy = yx. but I'm not sure how, if at all, that helps me... • November 29th 2012, 04:12 PM GJA Re: Group theory question Seems like a fun problem! I think looking at the ideal generated by a non unit is a good idea. Here's my two cents (for what it's worth): By way of contradiction suppose $R$ contains two distinct maximal ideals $M_{1}$ and $M_{2}$. Without loss of generality take $x\in M_{1}-M_{2}.$ Since $x\in M_{1}$ and $M_{1}\neq R$, $x$ is a non-unit. Now take a look at the ideal $(x)+M_{2}$ and see if you can use the assumption to get a contradiction. Good luck! • November 29th 2012, 04:22 PM Ant Re: Group theory question Thanks! I'll try that and see if I can come up with anything • November 29th 2012, 08:20 PM Deveno Re: Group theory question Quote: Originally Posted by GJA Seems like a fun problem! I think looking at the ideal generated by a non unit is a good idea. Here's my two cents (for what it's worth): By way of contradiction suppose $R$ contains two distinct maximal ideals $M_{1}$ and $M_{2}$. Without loss of generality take $x\in M_{1}-M_{2}.$ Since $x\in M_{1}$ and $M_{1}\neq R$, $x$ is a non-unit. Now take a look at the ideal $(x)+M_{2}$ and see if you can use the assumption to get a contradiction. Good luck! oh i like that! (x) + M2 is an ideal containing M2, and so we have two choices: a)(x) + M2 = R b)(x) + M2 = M2. b) is out of the question since x is in (x) + M2 (as the element 1x + 0) and by supposition, x is not in M2. the key to ruling out a) is that M2 is proper, and thus doesn't contain any units, and neither does (x). but certainly 1 is in R. • November 30th 2012, 12:51 AM Ant Re: Group theory question Quote: Originally Posted by Deveno oh i like that! (x) + M2 is an ideal containing M2, and so we have two choices: a)(x) + M2 = R b)(x) + M2 = M2. 
b) is out of the question since x is in (x) + M2 (as the element 1x + 0) and by supposition, x is not in M2. the key to ruling out a) is that M2 is proper, and thus doesn't contain any units, and neither does (x). but certainly 1 is in R. This seems to work perfectly. However, as far as I can see, at no point in this argument do we use the fact that if x,y are non units them so is their sum, x+y. This concerns me! • November 30th 2012, 04:47 AM Deveno Re: Group theory question sure we do. take any element of (x) (which is to say rx for some r in R). this cannot be a unit, for if so, we have, say rx = u, then we have: (u-1r)x = 1, contradicting the fact that x is not a unit (and we know x is not a unit, because x is in M1, and M1 ≠ R -this is using the fact that if an ideal of a commutative ring with unity contains a unit, it contains 1, and thus it is the entire ring). by the same reasoning, any element of M2 is ALSO not a unit. now if (x) + M2 = R, then: rx + m = 1, for some r in R, and some m in M2. so we have: non-unit + non-unit = unit, contradicting what we are given as a condition on R. thus any two maximal ideals of R cannot be distinct (the assumption that allowed us to assume x existed). • November 30th 2012, 06:20 AM Ant Re: Group theory question Ah okay, thanks. For some reason I was thinking that (x) + M2 was the union of (x) and M2. Which is why I thought we didn't need to use the closure under + of non units. BTW I've since realized that in fact considering the union isn't helpful as it may no even be an ideal. If anyone is interested, here's another proof (which I believe is also correct!): Consider the set $J$ of all non units in $R$. Claim 1: $J$ is an ideal of $R$. Proof: It's clear that $0$ is in $R$. Let $x$ be a non unit, assume $-x$ is a unit. So there exists $u$ s.t. $-xu = 1$ then $x(-u) = 1$ so $x$ is a unit. So $-x$ must be a non unit. Closure follows by assumption. so $J$ forms an abelian group under $+$. The product of two non units is clearly non unit, and so is the product of a unit with a non unit. (let u be a unit, $x$ be a non unit. Assume $ux$ is a unit. So there exists $w$ s.t $uxw=wux =1 = (wu)x$ So $wu$ is inverse of $x$ and thus $x$ is a unit. Contradiction proves $ux$ is non unit). So $J$ is an ideal. Claim 2: $J$ is unique maximal. Proof: $J \ne R$ because $R$ contains $1$. So we must still prove unique maximality. (Uniqueness) Consider an arbitrary proper ideal of $R$, $I$. $I$ is proper and therefore cannot contain any units. As $J$ is the set of all units, we have that $I \subset J$. (Maximality) Recall that $J$ contain all non units. This means that if we want to find an ideal of $R$ which is larger than $J$ we must include some non unit of $R$. But the inclusion of a non unit will immediately give us all of $R$. So $J$ is maximal. • November 30th 2012, 06:37 AM Deveno Re: Group theory question for claim 2 i would word it like so: let I be a maximal ideal of R. since I is a maximal ideal it is proper, and therefore contains no units. since J contains all non-units, I is contained in J, hence I = J (by the maximality of I, since J ≠ R). i would be curious to see what kind of ring R might have to be, since the integers don't qualify: -2 and 3 are not units, but -2+3 is. the ring Q[x] also doesn't appear to work: neither x nor 1-x are units, but their sum is. the only examples of such rings that spring to mind are fields (which have boring maximal ideals: {0}), but there might be others (i haven't thought about it too much). 
• November 30th 2012, 06:49 AM Ant Re: Group theory question Quote: Originally Posted by Deveno for claim 2 i would word it like so: let I be a maximal ideal of R. since I is a maximal ideal it is proper, and therefore contains no units. since J contains all non-units, I is contained in J, hence I = J (by the maximality of I, since J ≠ R). i would be curious to see what kind of ring R might have to be, since the integers don't qualify: -2 and 3 are not units, but -2+3 is. the ring Q[x] also doesn't appear to work: neither x nor 1-x are units, but their sum is. the only examples of such rings that spring to mind are fields (which have boring maximal ideals: {0}), but there might be others (i haven't thought about it too much). Yes, that's a bit more succinct. Apparently they're called "local rings" Local ring - Wikipedia, the free encyclopedia
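For what it's worth, the characterisation proved above (the non-units form an ideal exactly when they are closed under addition, and that ideal is then the unique maximal ideal) is easy to test in small rings such as $\mathbb{Z}/n\mathbb{Z}$. Here is a throwaway Python check; the choices $n=8$ (local) and $n=6$ (not local) are just examples.

```python
from math import gcd

def units(n):
    return {a for a in range(n) if gcd(a, n) == 1}

def nonunits_closed_under_sum(n):
    nu = set(range(n)) - units(n)
    return all((a + b) % n in nu for a in nu for b in nu)

print(nonunits_closed_under_sum(8))   # True:  Z/8 is local, maximal ideal {0, 2, 4, 6}
print(nonunits_closed_under_sum(6))   # False: 2 and 3 are non-units in Z/6, but 2 + 3 = 5 is a unit
```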
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 70, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9569063782691956, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/129361-vector-2-a.html
# Thread: 1. ## vector 2 the vector $\vec{a}=(2,3)$ is projected onto the x-axis. what is the scalar projection? what is the vector projection? what are the scalar and vector projections when $\vec{a}$ is projected onto the y-axis? 2. Originally Posted by william the vector $\vec{a}=(2,3)$ is projected onto the x-axis. what is the scalar projection? what is the vector projection? what are the scalar and vector projections when $\vec{a}$ is projected onto the y-axis? Have you thought about these at all yourself? These problems are about as trivial as you're going to see! Draw a picture. What does the projection of (2, 3) onto the x-axis look like?
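A small numpy sketch of the projections being asked about (illustrative only; it simply evaluates the standard dot-product formulas for the given vector):

```python
import numpy as np

a = np.array([2.0, 3.0])

e_x = np.array([1.0, 0.0])             # unit vector along the x-axis
scalar_x = a @ e_x                     # scalar projection onto the x-axis: 2
vector_x = scalar_x * e_x              # vector projection: (2, 0)

e_y = np.array([0.0, 1.0])             # unit vector along the y-axis
scalar_y = a @ e_y                     # 3
vector_y = scalar_y * e_y              # (0, 3)

print(scalar_x, vector_x, scalar_y, vector_y)
```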
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9663858413696289, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/tag/covariance-matrices/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Tag Archive You are currently browsing the tag archive for the ‘covariance matrices’ tag. ## Random covariance matrices: Universality of local statistics of eigenvalues 9 December, 2009 in math.PR, math.SP, paper | Tags: covariance matrices, Four Moment Theorem, universality, Van Vu, Wishart ensemble | by Terence Tao | 3 comments Van Vu and I have just uploaded to the arXiv our paper “Random covariance matrices: Universality of local statistics of eigenvalues“, to be submitted shortly. This paper draws heavily on the technology of our previous paper, in which we established a Four Moment Theorem for the local spacing statistics of eigenvalues of Wigner matrices. This theorem says, roughly speaking, that these statistics are completely determined by the first four moments of the coefficients of such matrices, at least in the bulk of the spectrum. (In a subsequent paper we extended the Four Moment Theorem to the edge of the spectrum.) In this paper, we establish the analogous result for the singular values of rectangular iid matrices ${M = M_{n,p}}$, or (equivalently) the eigenvalues of the associated covariance matrix ${\frac{1}{n} M M^*}$. As is well-known, there is a parallel theory between the spectral theory of random Wigner matrices and those of covariance matrices; for instance, just as the former has asymptotic spectral distribution governed by the semi-circular law, the latter has asymptotic spectral distribution governed by the Marcenko-Pastur law. One reason for the connection can be seen by noting that the singular values of a rectangular matrix ${M}$ are essentially the same thing as the eigenvalues of the augmented matrix $\displaystyle \begin{pmatrix} 0 & M \\ M^* & 0\end{pmatrix}$ after eliminating sign ambiguities and degeneracies. So one can view singular values of a rectangular iid matrix as the eigenvalues of a matrix which resembles a Wigner matrix, except that two diagonal blocks of that matrix have been zeroed out. The zeroing out of these elements prevents one from applying the entire Wigner universality theory directly to the covariance matrix setting (in particular, the crucial Talagrand concentration inequality for the magnitude of a projection of a random vector to a subspace does not work perfectly once there are many zero coefficients). Nevertheless, a large part of the theory (particularly the deterministic components of the theory, such as eigenvalue variation formulae) carry through without much difficulty. The one place where one has to spend a bit of time to check details is to ensure that the Erdos-Schlein-Yau delocalisation result (that asserts, roughly speaking, that the eigenvectors of a Wigner matrix are about as small in ${\ell^\infty}$ norm as one could hope to get) is also true for in the covariance matrix setting, but this is a straightforward (though somewhat tedious) adaptation of the method (which is based on the Stieltjes transform). 
As an application, we extend the sine kernel distribution of local covariance matrix statistics, first established in the case of Wishart ensembles (when the underlying variables are gaussian) by Nagao and Wadati, and later extended to gaussian-divisible matrices by Ben Arous and Peche, to any distributions which matches one of these distributions to up to four moments, which covers virtually all complex distributions with independent iid real and imaginary parts, with basically the lone exception of the complex Bernoulli ensemble. Recently, Erdos, Schlein, Yau, and Yin generalised their local relaxation flow method to also obtain similar universality results for distributions which have a large amount of smoothness, but without any matching moment conditions. By combining their techniques with ours as in our joint paper, one should probably be able to remove both smoothness and moment conditions, in particular now covering the complex Bernoulli ensemble. In this paper we also record a new observation that the exponential decay hypothesis in our earlier paper can be relaxed to a finite moment condition, for a sufficiently high (but fixed) moment. This is done by rearranging the order of steps of the original argument carefully.
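The relation mentioned above between the singular values of a rectangular matrix and the eigenvalues of the augmented matrix is easy to check numerically; here is a small, self-contained sketch (random test data, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 4
M = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))

# Augmented Hermitian matrix [[0, M], [M*, 0]]
aug = np.block([[np.zeros((n, n)), M],
                [M.conj().T, np.zeros((p, p))]])

sing = np.linalg.svd(M, compute_uv=False)          # singular values of M
eig = np.linalg.eigvalsh(aug)                      # eigenvalues of the augmented matrix

# The spectrum of aug is {+sigma_i, -sigma_i} together with |n - p| zeros,
# so its p largest eigenvalues are exactly the singular values of M.
print(np.allclose(np.sort(eig)[::-1][:p], np.sort(sing)[::-1]))   # True
```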
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8959894776344299, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/15611?sort=oldest
## To prove the Nullstellensatz, how can the general case of an arbitrary algebraically closed field be reduced to the easily-proved case of an uncountable algebraically closed field? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) In his answer to a question about simple proofs of the Nullstellensatz (http://mathoverflow.net/questions/15226/elementary-interesting-proofs-of-the-nullstellensatz), Qiaochu Yuan referred to a really simple proof for the case of an uncountable algebraically closed field. Googling, I found this construction also in Exercise 10 of a 2008 homework assignment from a course of J. Bernstein (see the last page of http://www.math.tau.ac.il/~bernstei/courses/2008%20spring/D-Modules_and_applications/pr/pr2.pdf). Interestingly, this exercise ends with the following (asterisked, hard) question: (*) Reduce the case of arbitrary field $k$ to the case of an uncountable field. After some tries to prove it myself, I gave up and returned to googling. I found several references to the proof provided by Qiaochu Yuan, but no answer to exercise (*) above. So, my question is: To prove the Nullstellensatz, how can the general case of an arbitrary algebraically closed field be reduced to the easily-proved case of an uncountable algebraically closed field? The exercise is from a course of Bernstein called 'D-modules and their applications.' One possibility is that the answer arises somehow when learning D-modules, but unfortunately I know nothing of D-modules. Hence, proofs avoiding D-modules would be particularly helpful. - 3 It seems natural to try to use the model completeness of the theory of algebraically closed fields. But if you're going to use model theory, it seems to me that you might as well prove the Nullstellensatz outright, which is possible: see the accepted answer to mathoverflow.net/questions/9667/…. – Pete L. Clark Feb 18 2010 at 3:01 1 It's possible that Bernstein had in mind a more direct reduction, although I can't imagine what it would look like. – Qiaochu Yuan Feb 18 2010 at 3:24 1 Is there a non-model-theory approach? – Harry Gindi Feb 18 2010 at 4:44 @PLC: Thank you very much for your comment. Given the context of the question in the homework assignment, I tend to believe (or at least to hope) that there is a proof from commutative algebra. Clearly, this should not be an obvious proof, but I am still hoping that someone familiar with Bernstein's work in other fields will come up with the proof. Less ambitiously, perhaps a student from that course will reveal the secret... – unknown (google) Feb 18 2010 at 6:27 2 Also, +1 for the long but extremely informative title. It's good for people to realize that nothing is gained by making their titles shorter. – Qiaochu Yuan Feb 18 2010 at 7:15 ## 5 Answers These logic/ZFC/model theory arguments seem out of proportion to the task at hand. Let $k$ be a field and $A$ a finitely generated $k$-algebra over a field $k$. We want to prove that there is a $k$-algebra map from $A$ to a finite extension of $k$. Pick an algebraically closed extension field $k'/k$ (e.g., algebraic closure of a massive transcendental extension, or whatever), and we want to show that if the result is known in general over $k'$ then it holds over $k$. We just need some very basic commutative algebra, as follows. 
Proof: We may replace $k$ with its algebraic closure $\overline{k}$ in $k'$ and $A$ with a quotient $\overline{A}$ of $A \otimes_k \overline{k}$ by a maximal ideal (since if the latter equals $\overline{k}$ then $A$ maps to an algebraic extension of $k$, with the image in a finite extension of $k$ since $A$ is finitely generated over $k$). All that matters is that now $k$ is perfect and infinite. By the hypothesis over $k'$, there is a $k'$-algebra homomorphism $$A' := k' \otimes_k A \rightarrow k',$$ or equivalently a $k$-algebra homomorphism $A \rightarrow k'$. By expressing $k'$ as a direct limit of finitely generated extension fields of $k$ such an algebra homomorphism lands in such a field (since $A$ is finitely generated over $k$). That is, there is a finitely generated extension field $k'/k$ such that the above kind of map exists. Now since $k$ is perfect, there is a separating transcendence basis $x_1, \dots, x_n$, so $k' = K[t]/(f)$ for a rational function field $K/k$ (in several variables) and a monic (separable) $f \in K[t]$ with positive degree. Considering coefficients of $f$ in $K$ as rational functions over $k$, there is a localization $$R = k[x_1,\dots,x_n][1/h]$$ so that $f \in R[t]$. By expressing $k'$ as the limit of such $R$ we get such an $R$ so that there is a $k$-algebra map $$A \rightarrow R[t]/(f).$$ But $k$ is infinite, so there are many $c \in k^n$ such that $h(c) \ne 0$. Pass to the quotient by $x_i \mapsto c_i$. QED I think the main point is twofold: (i) the principle of proving a result over a field by reduction to the case of an extension field with more properties (e.g., algebraically closed), and (ii) spreading out (descending through direct limits) and specialization are very useful for carrying out (i). - +1: This works nicely. – Pete L. Clark Feb 18 2010 at 6:05 1 @Brian: when you edit a post significantly, it is nice to give some indication of what you have changed. Was there something wrong with your previous argument? – Pete L. Clark Feb 18 2010 at 6:22 5 The previous post had an integrality argument that didn't apply when k'/k is not algebraic. The ironic thing is that my immediate reaction upon seeing the question was "Oh, it's just the old spread out and specialize business", and while typing that I thought I found an even "slicker" argument (the original post) which I realized was not right about 2 seconds after I posted it. So I went back to my original idea, which is correct. Better to follow one's instincts and not try to be too slick. :) – BCnrd Feb 18 2010 at 6:27 Wow! this looks like what I am looking for. It will take me some time to process this proof, though. – unknown (google) Feb 18 2010 at 7:53 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Well, this is the opposite of what you asked, but there is an easy reduction in the other direction. Namely, if the result is true for countable fields, then it is true for all fields. I can give two totally different proofs of this, both very soft, using elementary methods from logic. While we wait for a solution in the requested direction, let me describe these two proofs. Proof 1. Suppose k is any algebraically closed field, and J is an ideal in the polynomial ring k[x1,...,xn]. Consider the structure (k[x1,...,xn],k,J,+,.), which is the polynomial ring k[x1,...,xn], together with a predicate for the field k and for the ideal J. 
By the downward Loweheim-Skolem theorem, there is a countable elementary substructure, which must have the form (F[x1,...,xn],F,I,+,.), where F is a countable subfield of k, and I is a proper ideal in F[x1,...,xn]. The "elementarity" part means that any statement expressible in this language that is true in the subring is also true in the original structure. In particular, I is a proper ideal in F[x1,...,xn] and F is algebraically closed. Thus, by assumption, there is a1,...,an in F making all polynomials in I zero simultaneously. This is a fact about a1,...,an that is expressible in the smaller structure, and so it is also true in the upper structure. That is, every polynomial in J is zero at a1,...,an, as desired. Proof 2. The second proof is much quicker, for it falls right out of simple considerations in set theory. Suppose that we can prove (in ZFC) that the theorem holds for countable fields. Now, suppose that k is any field and that J is a proper ideal in the ring k[x1,...,xn]. If V is the set-theoretic universe, let V[G] be a forcing extension where k has become countable. (It is a remarkable fact about forcing that any set at all can become countable in a forcing extension.) We may consider k and k[x1,...,xn] and J inside the forcing extension V[G]. Moving to the forcing extension does not affect any of our assumptions about k or k[x1,...,xn] or J, except that now, in the forcing extension, k has become countable. Thus, by our assumption, there is a1,...,an in kn making all polynomials in J zero. This fact was true in V[G], but since the elements of k and J are the same in V and V[G], and the evaluations of polynonmials is the same, it follows that this same solution works back in V. So the theorem is true for k in V, as desired. But I know, it was the wrong reduction, since I am reducing from the uncountable to the countable, instead of from the countable to the uncountable, as you requested... Nevertheless, I suppose that both of these arguments could be considered as alternative very soft short proofs of the uncountable case (assuming one has a proof of the countable case). - 1 I am sure this is a naive question, but: In your introduction your hypothesis was "if the result is true for all countable fields", but then in Proof 2 the hypothesis is "suppose we can prove in ZFC that the theorem holds for countable fields". Does the former imply the latter? I guess it does, by the completeness of the theory of algebraically closed fields of each characteristic. But can you conclude this without appealing to the completeness of this particular theory? – Tom Church Feb 18 2010 at 3:00 It's not a naive question. In the forcing argument, one needs the theorem for countable fields to be true in V[G], rather than V. So if you assumed only that it was true (i.e. true in V), then the argument wouldn't quite work. If you assume it is provable in ZFC, then we get to use it in any model of ZFC, including V[G]. – Joel David Hamkins Feb 18 2010 at 3:09 I've realized that the assertion that the claim is true for countable fields has complexity only Pi^1_1, and so if it is true in V, it will also be true in V[G] by the Schoenfield Absoluteness theorem. So there is no need for me to have assumed that the claim was provable, but rather only that it was true. – Joel David Hamkins Feb 18 2010 at 14:31 I know a way to do this, but it involves some very heavy machinery... The first component are effective bounds on the degrees of the polynomials in the conclusion of the Weak Nullstellensatz. 
Such bounds are not that easy to get and there has been a lot of literature on the Effective Nullstellensatz. Perhaps the earliest effective bounds were found by Grete Hermann Die Frage der endlich vielen Schritte in der Theorie der Polynomideale (Mathematische Annalen 95, 1926), but there has been a lot of work on improving these bounds and also obtaining lower bounds over the years. [E.g., D. W. Brownawell, Bounds for the degrees in the Nullstellensatz, Ann. of Math. (2) 126 (1987), 577-591] It's interesting to read these papers, but I will only use the fact that effective bounds do exist. Using these bounds it is possible to find a sequence of first-order sentences $\phi_{n,k,r}$, which together are equivalent to the Weak Nullstellensatz; the sentence $\phi_{n,k,r}$ is a first order rendition of the following statement. If $p_1(\bar{x}),\dots,p_k(\bar{x})$ ($\bar{x} = x_1,\ldots,x_r$) are polynomials of degree at most $n$ without common zeros, then there are polynomials $q_1(\bar{x}),\dots,q_k(\bar{x})$ of degree at most $b(n,k,r)$ such that $p_1(\bar{x})q_1(\bar{x})+\cdots+p_k(\bar{x})q_k(\bar{x}) = 1$. The bounds $n$ and $b(n,k,r)$ are necessary so that the $p_i(\bar{x})$ and $q_i(\bar{x})$ have a bounded number of coefficients. Otherwise, we could not use a fixed number of variables for these coefficients. That said, the other piece of heavy machinery is the fact that the theory of algebraically closed fields of a given characteristic is complete, i.e. every first-order sentence is decided by the axioms. Therefore, if the above sentences $\phi_{n,k,r}$ are true in any algebraically closed field of a given characteristic, then they must be true in all algebraically closed fields of the same characteristic. In particular, the Weak Nullstellensatz for $\mathbb{C}$ implies the Weak Nullstellensatz for all algebraically closed fields of characteristic zero. From here, you can use the Rabinowitsch trick to get the Strong Nullstellensatz... PS: You do not need the Nullstellensatz to prove that the theory of algebraically closed fields of a given characteristic is complete. You implicitly need the Nullstellensatz to prove the effective upper bounds, but you only need them for the one field and you can think of them as wild guesses that turn out to be right. - After seeing Pete's comment, a simpler approach is to first prove quantifier elimination and use model completeness. (Well, I don't know which is easiest between getting very crude effective bounds and proving quantifier elimination.) However, there is a small benefit of my brute force approach, namely that the Nullstellensatz is actually expressible in first-order logic. – François G. Dorais♦ Feb 18 2010 at 3:37 Thank you very much for your answer. While I am hoping for a "trick" using only commutative algebra, this is still very interesting! – unknown (google) Feb 18 2010 at 6:27 This is a comment on Brian's answer, which is however a bit long to fit into the comment box. I wanted to remark that Brian's argument is ulimately not so different from the Noether normalization argument, nor is it so different to the argument linked to here, or to the argument in II.2 of Mumford--Oda using Chevalley's theorem. What they all have in common is the fact that any finite type variety can be projected to affine space with generically finite fibres and big image. 
On affine space (at least over an infinite field) we can find lots of points, and by the generic finiteness and big image assumptions we can even find such a point lying in the image of the original affine variety with finite fibres. Finding a point on this fibre then involves solving a finite degree polynomial, which we can do over the algebraic closure. Hence our original finite-type variety has a point. Here is a rewrite of Brian's argument which illustrates this: Following his reduction, we may assume that $k$ is infinite and perfect. We are given a non-zero finite type $k$-algebra $A$, and we want to show that Spec $A$ has a $\bar{k}$-point, i.e. that we can find a $k$-algebra homomorphism $A \to \bar{k}$. For this, we may as well replace $A$ by a quotient by a maximal ideal, and thus assume that $A$ is a field. As Brian notes, the theory of finitely generated field extensions allows us to write $A = k(X_1,\ldots,X_d)[t]/f(t)$ (because $k$ is perfect). We then observe that since $A$ is finite type over $k$, its generators involve only finitely many denominators, as do the coefficients of $f$, and so in fact $A = k[X_1,\ldots,X_d][1/h][t]/f(t)$ for some well-chosen non-zero $h$. Now because $k$ is infinite, $h$ is not identically zero on $k^d$, and so we are done: we choose a point $c_i$ where $h$ is non-zero, then solve $f(c_1,\ldots,c_d,t) = 0$ in $\bar{k}$. So one sees that the role of the theory of finitely generated field extensions is simply to provide a weaker version of the Noether normalization, with generic finiteness replacing finiteness. As I already wrote, the other "soft" arguments for the Nullstellensatz proceed along essentially the same lines. - Very helpful, +1 – Hailong Dao Feb 18 2010 at 16:04 I fixed your first link. – Harry Gindi Feb 18 2010 at 16:07 Thank you, fpqc – Emerton Feb 18 2010 at 16:36 1 Matt's formulation is more elegant, but I was trying to set up it to look more like a deduction from the case over a big extension field (hence my focus on the map to k' without changing A too much after the initial reduction step) since otherwise we'e just giving a direct proof of Nullstellensatz over the ground field, which seems to violate the spirit of the question. That said, in some sense the whole question is kind of pointless because we have so many nice direct proofs of Null. over any field. The above is yet another. :) – BCnrd Feb 18 2010 at 17:20 1 Dear Brian, Yes, I realized that you were trying to conform to the spirit of the question, which I happily disregarded! :) – Emerton Feb 18 2010 at 18:55 show 4 more comments The easiest way to reduce to the uncountable case may be as follows. Let $I$ be an ideal of $k[X_1,...,X_d]$ which does not contain $1$. Let $P_1,\dots,P_r$ be a generating family of $I$. Let $A=k^{\mathbf N}$ and let $m$ be a maximal ideal of $A$ which contains the ideal $N=k^{(\mathbf N)}$ of $A$. Then $K=A/m$ is an algebraically closed field which is has at least the power of the continuum. (Alternative description: let $K$ be an ultrapower of $k$, with respect to a non-principal ultrafilter.) Lemma. *For $i\in{1,\dots,r}$, let $a_i=(a_{i,n})\in A$. Assume that $(\bar a_1,\dots,\bar a_r)=0$ in $K^r$. Then the set of $n\in\mathbf N$ such that $(a_{1,n},\dots,a_{r,n})=0$ is infinite.* Proof. Assume otherwise. For every $n$ such that $(a_{1,n},...,a_{r,n}) \neq 0$, choose $(b_{1,n},\dots,b_{r,n})$ such that $\sum a_{i,n}b_{i,n}=1$, and let $b_i=(b_{i,n})_n\in A$. Then $\sum a_i b_i - 1$ belongs to $N^r$, hence $\sum \bar a_i \bar b_i=1$. 
Contradiction. Thanks to the lemma, one proves easily that the ideal $I_K$ of $K[X_1,...,X_d]$ generated by $I$ does not contain $1$. By the uncountable case, there exists $x=(x_1,...,x_d)\in K^d$ such that $P_j(x_1,...,x_d)=0$ for every $j$. For every $i$, let $a=(a_{n})\in A^d$ be such that $\bar a=x$. By the lemma again the set of integers $n$ such that $P_j(a_n) \neq 0$ for some $j$ is finite. In particular, there exists a point $y\in k^d$ such that $P_j(y)=0$ for every $j$. - I like this proof, it is completely elementary (and remark that also Brian's proof contains a choice of a maximal ideal). – Martin Brandenburg Jan 6 at 18:14
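As a computational footnote to the weak Nullstellensatz invoked throughout this discussion (an ideal of $k[X_1,\dots,X_d]$ with no common zeros over an algebraically closed field contains $1$), here is a small sympy sketch using Gröbner bases; the two example ideals are of course just illustrations:

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# x*y - 1 and x have no common zero over any field, so the ideal they generate is (1):
G1 = groebner([x*y - 1, x], x, y, order='lex')
print(G1.exprs)                     # [1]

# x*y - 1 and x - 1 do have the common zero (1, 1), so 1 is not in the ideal:
G2 = groebner([x*y - 1, x - 1], x, y, order='lex')
print(G2.exprs)                     # [x - 1, y - 1]
```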
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 130, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451663494110107, "perplexity_flag": "head"}
http://nrich.maths.org/6575/note?nomenu=1
## 'Fix Me or Crush Me' printed from http://nrich.maths.org/ ### Why do this problem? This problem gives students the opportunity to explore the effect of matrix multiplication on vectors, and lays the foundations for studying the eigenvectors and kernel of a matrix, ideas which are very important in higher level algebra with applications in science. ### Possible approach Start by asking students to work with the vector ${\bf F}$ to find a matrix which fixes it. Initially, let students find their own methods of working - some may choose to try to fit numbers in the matrix, some may straight away work with algebra. Once students have had a chance to try the task, allow some time to discuss methods, as well as the simplest and most complicated examples of matrices they have managed to find. Repeat the same process to find a matrix which crushes the vector $\bf Z$. The last part of the problem asks students to seek vectors which are fixed or crushed by each of the three matrices given. This works well if students are first given time to explore the properties of the matrices and to construct the conditions needed for a vector to be fixed or crushed by them. Then encourage discussion of their findings, particularly focussing on justification for matrices where appropriate vectors can't be found. ### Key questions What properties must a matrix have if it fixes $\bf F$? Or if it crushes $\bf Z$? What is the simplest matrix with these properties? What is the most general matrix you can write down? What properties must a vector have to be fixed or crushed by the three matrices given?
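A small numpy sketch of the two tasks in the problem (the particular vectors and matrices below are arbitrary choices, since the problem's own $\bf F$ and $\bf Z$ are not reproduced here):

```python
import numpy as np

F = np.array([3.0, 2.0])                    # stand-in for the vector F
Z = np.array([1.0, -2.0])                   # stand-in for the vector Z

fix_F = np.array([[1.0, 0.0],
                  [0.0, 1.0]])              # the identity fixes every vector, in particular F
crush_Z = np.array([[ 2.0,  1.0],
                    [-4.0, -2.0]])          # both rows are orthogonal to Z, so crush_Z @ Z = 0
print(fix_F @ F, crush_Z @ Z)

# Conversely, the vectors a given matrix fixes are its eigenvectors with eigenvalue 1,
# and the vectors it crushes span its null space.
M = np.array([[2.0, -1.0],
              [1.0,  0.0]])
vals, vecs = np.linalg.eig(M)
print(vals)                                 # both eigenvalues equal 1: M fixes the direction (1, 1)
```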
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946140468120575, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/253432/improving-newtons-iteration-where-the-derivative-is-near-zero/253466
# Improving Newton's iteration where the derivative is near zero? I'm implementing a root-solver for finding x coordinates of a function f(x), after I have an y-coordinate. The function is periodic, roughly sinusoidal with constant amplitude but non-linearly varying frequency; for an inverse I don't have a closed-form (it is an infinite series), so I use the Newton iteration to find the x-value at a given y beginning the iteration at $x_0$ which is rather near the true value by something like $x=newton(x=x_0,f(x)-y)$. In most cases this works fine, however if the y is in the near of a maximum (or minimum) of f, where the shape is very similar to the maximum of a sinus-curve, the newton-iteration does not converge. The wikipedia gives a bit of information about this, but not a workaround. The last way out would be to resort to binary search, but which I'd like to avoid since the computation of f(x) is (relatively) costly. Does someone know an improvement in the spirit of the Newton-iteration (which has often quadratic convergence) for this region of y-values? [update] hmmm... perhaps it needs only a tiny twist? It just occurs to me, that it might be possible to go along the way how I find the easily approximated x for the maximum y: here I use the Newton on the derivative of the function and search for the zero: $x_{max}=newton(x=x_0,f(x)') \qquad$ and this has the usual quadratic convergence. But how to apply this for some y in the near of the maximum? - You could approximate your function at $x$ by a parabola, using $f(x)$, $f'(x)$ and $f''(x)$, instead of a line using just the first two... – Jaime Dec 7 '12 at 23:33 Do you mean that the starting value is very close to a local extremum or that the actual zero of the function is also an extremum? In the latter case, newton for $\sqrt f$ might help ... – Hagen von Eitzen Dec 7 '12 at 23:34 @Hagen : the $y$ for which it is difficult to find the $x$ are in the near of $\sin(\pm \pi / 2)\cdot \alpha$ where $\alpha$ is the amplitude of my function $f(x)$ – Gottfried Helms Dec 7 '12 at 23:42 @Jaime : looks like a sort of idea which I'm looking for. Would you mind to elaborate this a bit more to help me step in? (Surely I'll also need time to test&adapt...) – Gottfried Helms Dec 7 '12 at 23:46 Maybe you could use quadratic interpolation along with bisection to (hopefully) converge faster? – copper.hat Dec 8 '12 at 0:05 ## 4 Answers Have you considered implementing a hybrid method? For example, at each step: IF a Newton step would result in an iteration that is outside the bounds where you have determined the root must lie, then take a bisection step (slower than Newton, but bisection always converges to a root and is not affected by extrema), or a step using a method other than Newton that is not prone to failing near extrema. ELSE proceed with a Newton step (since it converges quadratically, as you pointed out). - Yes, I'm considering bisection, but I'd like to find something else with better rate-of-convergence – Gottfried Helms Dec 7 '12 at 23:38 Then replace "bisection" with another method of your choosing? I chose bisection because it is reliable, and it will only be used at steps near a maximum/minimum. – Eric Angle Dec 7 '12 at 23:56 By popular demand from the OP... In Newton's method you are replacing your function $y=f(x)$ by a linear approximation around the point $x_0$, $y = f(x_0) + f'(x_0) (x-x_0)$, which intersects the x axis ($y=0$) at $x=x_0-f(x_0)/f'(x_0)$. 
You could instead approximate by a parabola as $y=f(x_0) + f'(x_0)(x-x_0) +\frac{1}{2}f''(x_0)(x-x_0)^2$, which intercepts the x-axis at $x = x_0 -\frac{f'(x_0)\mp\sqrt{f'(x_0)^2-2f(x_0)f''(x_0)}}{f''(x_0)}$. You will of course have the issue of having two, not one, possible next iteration points, but there are multiple ways to get around this: choose the closest one, always move up (or down), choose the one with the smallest $f(x_0)$... - Ahh, very nice - it looks, that with my update I was even near already... So I'll just try it. That shall need some time, then I'll come back to this. Thanks so far! – Gottfried Helms Dec 8 '12 at 0:14 Hmm, I couldn't manage to make this work with my specific application, maybe simply programming errors. Perhaps if I get the solver working by more detailed study of the process in the extreme cases I'll come back to this and try to locate my errors. Thanks anyway for that answer! – Gottfried Helms Jan 3 at 15:37 I couldn't tell you the cost of this idea, but maybe you could work it out: You are trying to solve for a root of $g$, where $g(x)=f(x)-y$ and you want your solution in a neighborhood of $x_0$. Let's call that solution $x_s$. The problem is that $g'(x)$ is too small near $x_s$, so in the Newton algorithm you divide by very small things, yielding big changes from $x_i$ to $x_{i+1}$; possibly so big that the algorithm converges to a different solution or not at all. So what if you engineered a substitute function $\tilde{g}$, which still satisfies the demand that $\tilde{g}(x_s)=0$, but has $\tilde{g}'(x_s)$ not so small. For example, $\tilde{g}(x)$ could equal $g(x)\cdot\ln|g(x)|$. This has $\lim_{x\to x_s}\tilde{g}(x)=0$ and has $\tilde{g}'(x)=g'(x)\cdot(1+\ln|g(x)|)$. The absolute value of $\tilde{g}'(x_s)$ will be quite a bit larger than that of $g'(x_s)$. - If the minimum/maximum is negative, then an $x_0$ such that $f(x_0)$ is positive is preferred, or if the minimum/maximum is positive then an $x_0$ such that $f(x_0)$ is negative is more reasonable, but if the root is at the minimum/maximum then I don't think there should be a problem. Something else with a better rate of convergence is the Secant Method. This has a convergence rate of $\cfrac{1+\sqrt 5}{2}=1.618...$ mostly due to the fact that it starts with two initial values. -
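A minimal Python sketch of the parabola step suggested above, applied to a sine-like test problem near a maximum (the test function, target value and starting point are made up for illustration; falling back to an ordinary Newton step when the discriminant is negative is one possible design choice among the several mentioned):

```python
import numpy as np

def f(x):   return np.sin(x)
def fp(x):  return np.cos(x)
def fpp(x): return -np.sin(x)

y_target = 0.999          # close to the maximum of sin, where a plain Newton step misbehaves
x0 = 1.40

def parabola_step(x, y):
    g, gp, gpp = f(x) - y, fp(x), fpp(x)
    disc = gp * gp - 2.0 * g * gpp           # note the square on g'(x)
    if disc < 0.0 or gpp == 0.0:
        return x - g / gp                    # fall back to an ordinary Newton step
    r1 = x + (-gp + np.sqrt(disc)) / gpp
    r2 = x + (-gp - np.sqrt(disc)) / gpp
    return r1 if abs(r1 - x) <= abs(r2 - x) else r2   # pick the nearer root of the parabola

x = x0
for _ in range(6):
    x = parabola_step(x, y_target)

print(x, f(x) - y_target)   # converges to a solution of sin(x) = 0.999
```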
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9509705305099487, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/55021/list
## Return to Answer Ian's answer is very elegant, but in case you're looking for a more computational approach, you could use the Seifert form. Namely, if you take a Seifert surface $\Sigma$ for a knot, look at the form $\Theta\colon H_1(\Sigma)\otimes H_1(\Sigma)\to \mathbb Z$ given by $\Theta(x,y)=lk(x^+,y)$ where $x^+$ is a push-off of $x$ along a consistently chosen positive normal direction. Then one can show that the Alexander polynomial is expressible as $\det(t\Theta-\Theta^T)$. Note that for a Whitehead double, there is an obvious Seifert surface with one band being a thickening of the original knot, and one band being a small twisted dual band. In particular, the Seifert form looks something like $$\left(\begin{array}{cc}0&1\\0&1\end{array}\right)$$ which yields a trivial Alexander polynomial, which is only well-defined in this formula up to powers of $t$. Or you could notice that the unknot has a Seifert surface with the same Seifert form as this, by Whitehead doubling the unknot!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481160640716553, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2012/02/09/the-propagation-velocity-of-electromagnetic-waves/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician

## The Propagation Velocity of Electromagnetic Waves

Now we've derived the wave equation from Maxwell's equations, and we have worked out the plane-wave solutions. But there's more to Maxwell's equations than just the wave equation. Still, let's take some plane-waves and see what we get.

First and foremost, what's the propagation velocity of our plane-wave solutions? Well, it's $c$ for the generic wave equation

$\displaystyle\frac{\partial^2F}{\partial t^2}-c^2\nabla^2F=0$

while our electromagnetic wave equation is

$\displaystyle\begin{aligned}\frac{\partial^2E}{\partial t^2}-\frac{1}{\epsilon_0\mu_0}\nabla^2E&=0\\\frac{\partial^2B}{\partial t^2}-\frac{1}{\epsilon_0\mu_0}\nabla^2B&=0\end{aligned}$

so we find the propagation velocity of waves in both electric and magnetic fields is

$\displaystyle c=\frac{1}{\sqrt{\epsilon_0\mu_0}}$

Hm. Conveniently, I already gave values for both $\epsilon_0$ and $\mu_0$:

$\displaystyle\begin{aligned}\epsilon_0&=8.85418782\times10^{-12}\frac{\mathrm{F}}{\mathrm{m}}&=8.85418782\times10^{-12}\frac{\mathrm{s}^2\cdot\mathrm{C}^2}{\mathrm{m}^3\cdot\mathrm{kg}}\\\mu_0&=1.2566370614\times10^{-6}\frac{\mathrm{H}}{\mathrm{m}}&=1.2566370614\times10^{-6}\frac{\mathrm{m}\cdot\mathrm{kg}}{\mathrm{C}^2}\end{aligned}$

Multiplying, we find:

$\displaystyle\epsilon_0\mu_0=8.85418782\times1.2566370614\times10^{-18}\frac{\mathrm{s}^2}{\mathrm{m}^2}=11.1265006\times10^{-18}\frac{\mathrm{s}^2}{\mathrm{m}^2}$

which means that

$\displaystyle c=\frac{1}{\sqrt{\epsilon_0\mu_0}}=0.299792458\times10^9\frac{\mathrm{m}}{\mathrm{s}}=299\,792\,458\frac{\mathrm{m}}{\mathrm{s}}$

And this is a number which should look very familiar: it's the speed of light. In an 1864 paper, Maxwell himself noted:

> The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.

Indeed, this supposition has been borne out in experiment after experiment over the last century and a half: light is an electromagnetic wave.

## 3 Comments »

1. You have done remarkable work so far to expound the principles of electrostatics. I have to re-read all the posts and absorb them. Here you have found the speed of light, and that is based on two other constants. I have to read the older posts to see if those constants were calculated from first principles; otherwise, there is some circularity here. I am still not sure whether, in all these posts, you proved Maxwell's equations from vector calculus alone (the generalized Stokes' theorem). If that is the case, it proves the power of vector calculus. Whatever questions linger in the mind, what you have done relating calculus and electrostatics is truly wonderful. Congratulations! I hope you continue this effort.

   Comment by Soma Murthy | February 9, 2012 | Reply

2. I didn't get into how $\epsilon_0$ and $\mu_0$ were calculated, but in fact they are determined from laboratory experiments which are specifically concerned with electric or magnetic phenomena, and not with light as such.

   Comment by | February 9, 2012 | Reply

3. [...] The Propagation Velocity of Electromagnetic Waves (unapologetic.wordpress.com) [...]

   Pingback by | February 26, 2012 | Reply
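As a quick sanity check of the arithmetic in the post above (an added snippet, not part of the original; it simply plugs in the two constants quoted there):

```python
from math import sqrt

eps0 = 8.85418782e-12    # vacuum permittivity, F/m
mu0 = 1.2566370614e-6    # vacuum permeability, H/m

c = 1.0 / sqrt(eps0 * mu0)
print(c)                 # ~2.99792458e8 m/s -- the speed of light
```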
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9311769008636475, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/40526/list
## Return to Answer

Here is a quick sketch (probably it can be made much cleaner). Using an inductive argument it should not be difficult to reduce to studying the case that $(X,<_1)$ is isomorphic to a cardinal, say $\kappa$. For convenience, let us identify $(\kappa,<)$ and $(X,<_1)$.

Let us now construct $Y$ using transfinite induction. Let $X'\subseteq X$ be the initial segment of $X$ (with respect to $<_2$) which, endowed with the order $<_2$, is isomorphic to $\kappa$. For $\alpha<\kappa$ define
$$y_\alpha=\min\{\beta\in X': \forall \alpha'<\alpha\; \beta> y_{\alpha'}\textrm{\ and\ }\beta>_2 y_{\alpha'}\}.$$
The set above is not empty, and so $y_\alpha$ is well defined, because $|\{\gamma\in X':\exists \alpha'<\alpha\; \gamma\leq y_{\alpha'}\}|<\kappa$ and also $|\{\gamma\in X':\exists \alpha'<\alpha\; \gamma\leq_2 y_{\alpha'}\}|<\kappa$ (this depends on $\{y_{\alpha'}\}_{\alpha'<\alpha}$ not being cofinal in $(\kappa,<)$ and $(X',<_2)$), so their complements in $X'$ intersect. The set $Y=\{y_\alpha\}_{\alpha<\kappa}$ is what we were looking for.
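The recursion can be seen concretely in a finite toy example (an added illustration only, not part of the answer: the cardinality estimates that keep the candidate set nonempty genuinely need $\kappa$ to be infinite, so on a finite set the greedy choice may stop early):

```python
import random

random.seed(0)
X = list(range(10))                          # <_1 is the usual order on 0..9
perm = random.sample(X, len(X))
rank2 = {x: r for r, x in enumerate(perm)}   # <_2 given by these ranks

Y = []
for beta in X:   # scanning in <_1 order = always taking the least candidate
    if all(beta > y and rank2[beta] > rank2[y] for y in Y):
        Y.append(beta)

print(Y)                                     # a subset increasing in both orders
```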
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.955096960067749, "perplexity_flag": "head"}

Dataset Card for "tiny_math"

More Information needed
