Transformations of Normal If we have $U$, $V$, and $W$ as i.i.d. normal RVs with mean $0$ and variance $\sigma^2$, then what are the following expressed as a transformation of a known distribution (if known): 1) $\frac{U}{V + W}$ I don't think this can be expressed as a known distribution. 2) $\frac{U^2}{V^2}$ Both $U^2$ and $V^2$ are $\chi^2$ and the quotient of $\chi^2$ is an F-distribution. I am just not sure what the degrees of freedom are... 3) $\frac{U^2 + V^2}{V^2}$ $U^2 + V^2$ is simply a $\chi^2$ with the sum of the degrees of freedom of $U$ and $V$. A $\chi^2$ divided by another $\chi^2$ is F. But I am wondering if this can be simplified to $\frac{U^2}{V^2} + 1$ (are you allowed to do that?), in which case it would be $1 + F$... 4) $\frac{U^2 + V^2}{W^2}$ I think it is the same as the one above... just an F distribution. Please help me out with my reasoning! Thanks.
1) $\frac{V+W}{\sigma} \sim N(0, 2)$, and the ratio of two independent standard normals is Cauchy(0,1). So $\frac{U/\sigma}{(V+W)/(\sigma\sqrt{2})} = \frac{\sqrt{2}\,U}{V+W} \sim \text{Cauchy}(0,1)$, from which you can derive that $\frac{U}{V+W} \sim \text{Cauchy}(0, 1/\sqrt{2})$. 2) Yes, they are $\chi^2$. Specifically, $\frac{U^2}{\sigma^2}$ and $\frac{V^2}{\sigma^2}$ are $\chi^2(1)$, and the $\sigma^2$ cancels in the quotient, so $\frac{U^2}{V^2} \sim F(1,1)$ (both degrees of freedom are $1$, so no further normalization is needed). 3) $U^2 + V^2$ is not independent of $V^2$, so you can't say their ratio is a standard F-distribution. So yes, you should write it as $\frac{U^2}{V^2} + 1$; because $\frac{U^2}{V^2} \sim F(1,1)$, the quantity $\frac{U^2}{V^2} + 1$ is an $F(1,1)$ variable shifted to the right by 1. 4) The sum of two independent $\chi^2(1)$ variables is $\chi^2(2)$, i.e., $\frac{U^2 + V^2}{\sigma^2} \sim \chi^2(2)$. Dividing each $\chi^2$ by its degrees of freedom gives $\frac{(U^2 + V^2)/2}{W^2} \sim F(2,1)$, so $\frac{U^2 + V^2}{W^2}$ is twice an $F(2,1)$ variable. All these relationships can be found in Wikipedia (normal, chi-square, Cauchy, F).
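A quick Monte Carlo sanity check of the first two parts (an editorial sketch, not part of the original answer). Since $\sqrt{2}\,U/(V+W)$ is standard Cauchy, $U/(V+W)$ has scale $1/\sqrt{2}$, so half of its samples should land in $[-1/\sqrt{2}, 1/\sqrt{2}]$; likewise $U^2/V^2$ should fall below $1$ half the time, since an $F(1,1)$ variable has median $1$:

```python
import random

random.seed(42)
sigma = 2.0            # any common variance works; it cancels in the ratios
N = 200_000

inside = 0             # |U/(V+W)| <= 1/sqrt(2): half the mass of Cauchy(0, 1/sqrt(2))
f_below_median = 0     # U^2/V^2 <= 1: F(1,1) has median 1 by symmetry in U, V

for _ in range(N):
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, sigma)
    w = random.gauss(0.0, sigma)
    if abs(u / (v + w)) <= 2 ** -0.5:
        inside += 1
    if u * u <= v * v:
        f_below_median += 1

frac_inside = inside / N          # should be close to 0.5
frac_f = f_below_median / N       # should be close to 0.5
print(frac_inside, frac_f)
```

With 200,000 draws the sampling error is around 0.001, so both fractions land well within 0.01 of one half.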
{ "language": "en", "url": "https://math.stackexchange.com/questions/34007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Group Which Isn't Solvable For a recent qualifier problem, I was to show that if a group $G$ has two solvable subgroups $H,K$ such that $HK=G$ and $H$ is normal, then $G$ is solvable. This is simply a matter of showing that $G/H$ is solvable, and I think is not too difficult. The next part of the question was to find an example of a group with all of the above conditions except the normality condition (i.e. for any two such solvable subgroups, neither is normal in $G$) and show that $G$ is no longer solvable. Does anyone know of a good example? I don't even know that many groups which aren't solvable. I have been told $A_5$ is not solvable, but that is quite a large group, and it seems like it would take a long time to show this in 20 minutes (the time I would have if I was doing a qualifier) if it is even true for $A_5$. I'd love to know what group to look at, so I can prove it myself. Thanks!
Here's a hint - I don't know what you already know, so if you don't understand, just ask for clarification! OK, so we're looking for two solvable subgroups $H$ and $K$ of a non-solvable group $G$, such that $HK = G$. The smallest non-solvable group is indeed $A_5$, and every smaller group is solvable. In particular, $A_4$ and $C_5$ are solvable. Can you find two subgroups of $A_5$, one isomorphic to $A_4$ and another isomorphic to $C_5$, which together generate $A_5$? Can you then show that not only $\langle H,K \rangle = G$ but also $HK = G$? EDIT: In the previous wrong version I was hinting towards two copies of $A_4$ which together generate $A_5$, but do not have $HK = G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/34114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I find the Fourier transform of a function that is separable into a radial and an angular part, $f(r, \theta, \phi)=R(r)A(\theta, \phi)$? Thanks in advance for any answers!
You can use the expansion of a plane wave in spherical waves. If you integrate the product of your function with such a plane wave, you get integrals over $R$ times spherical Bessel functions and $A$ times spherical harmonics; you'll need to be able to solve those in order to get the Fourier coefficients.
{ "language": "en", "url": "https://math.stackexchange.com/questions/34167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Field automorphisms of $\mathbb{Q}$ - shouldn't there be only one? $1$ has to map to $1$, right? So $n$ has to map to $n$ (for $n \in \mathbb{Z}$), and $\frac{1}{n}$ maps to $\frac{1}{n}$, so $\frac{n}{m}$ maps to itself, so the only possible automorphism is the identity. Is this true or am I deceiving myself? Because I feel like there should definitely be more automorphisms of $\mathbb{Q}$. Also, if you have some extension of $\mathbb{Q}$ (call it $E$), does every automorphism of $E$ automatically fix $\mathbb{Q}$?
When one says "automorphisms", it is important to specify automorphisms of what. There are a lot of automorphisms of $\mathbb{Q}$ as an abelian group (equivalently, as a $\mathbb{Q}$-vector space). However, there is one and only one field automorphism (equivalently, one and only one ring automorphism). Indeed: If you are doing a ring automorphism, then $1$ must map to an idempotent (an element equal to its square); there are only two idempotents in $\mathbb{Q}$, $1$ and $0$; but if you map $1$ to $0$, then you map everything to $0$ and the map is not an automorphism. So $1$ must map to $1$ (you can skip this step if your definition of "homomorphism of rings" requires you to map $1$ to $1$). Since $1$ maps to $1$, by induction you can show that for every natural number $n$, $n$ maps to $n$. Therefore, $-n$ must map to $-n$ (since the map sends additive inverses to additive inverses), and it must send $\frac{1}{n}$ to $\frac{1}{n}$ (because it maps $1 = n(\frac{1}{n})$ to $n$ times the image of $\frac{1}{n}$, and the only solution to $nx = 1$ in $\mathbb{Q}$ is $x=\frac{1}{n}$). And from here you get that any field automorphism of $\mathbb{Q}$ must be the identity. As to your second question, yes: if $E$ is an extension of $\mathbb{Q}$, then any field automorphism of $E$ restricts to the identity automorphism of $\mathbb{Q}$. More generally, if $E$ is any field, then any automorphism of $E$ restricts to the identity of its prime field (which is $\mathbb{Q}$ in the case of characteristic 0, and $\mathbb{F}_p$ in the case of characteristic $p$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/34217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 2, "answer_id": 1 }
Order of general- and special linear groups over finite fields. Let $\mathbb{F}_3$ be the field with three elements. Let $n\geq 1$. How many elements do the following groups have? 1) $\text{GL}_n(\mathbb{F}_3)$ 2) $\text{SL}_n(\mathbb{F}_3)$ Here GL is the general linear group, the group of invertible n×n matrices, and SL is the special linear group, the group of n×n matrices with determinant 1.
The determinant function is a surjective homomorphism from $GL(n, F)$ to $F^*$ with kernel $SL(n, F)$. Hence by the fundamental isomorphism theorem, $\frac{GL(n,F)}{SL(n,F)}$ is isomorphic to $F^*$, the multiplicative group of nonzero elements of $F$. Thus if $F$ is finite with $p$ elements, then $|GL(n,F)|=(p-1)|SL(n, F)|$.
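The answer reduces $|SL_n|$ to $|GL_n|$; the order of $GL_n(\mathbb{F}_q)$ itself comes from choosing each matrix row outside the span of the previous rows, giving $\prod_{i=0}^{n-1}(q^n - q^i)$. A small sketch of both counts (an editorial addition, not part of the original answer):

```python
def gl_order(n, q):
    """|GL_n(F_q)|: pick each row outside the span of the previous rows."""
    count = 1
    for i in range(n):
        count *= q ** n - q ** i   # q^n vectors minus the q^i already spanned
    return count

def sl_order(n, q):
    """|SL_n(F_q)| = |GL_n(F_q)| / (q - 1), via the determinant homomorphism."""
    return gl_order(n, q) // (q - 1)

print(gl_order(2, 3), sl_order(2, 3))  # 48 24
```

For the field with three elements this gives, e.g., $|GL_2(\mathbb{F}_3)| = (9-1)(9-3) = 48$ and $|SL_2(\mathbb{F}_3)| = 48/2 = 24$.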
{ "language": "en", "url": "https://math.stackexchange.com/questions/34271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 2, "answer_id": 0 }
Control / Feedback Theory I am more interested in the engineering perspective of this topic, but I realize that fundamentally this is a very interesting mathematical topic as well. Also, at an introductory level they would be very similar from both perspectives. So, what are some good introductory texts on Control/Feedback theory for an advanced undergraduate/early graduate student? Thanks!
Instead of any textbook, I strongly recommend the following survey paper: Åström, Karl J., and P. R. Kumar. 2014. "Control: A Perspective." Automatica 50 (1): 3–43. doi:10.1016/j.automatica.2013.10.012. Feedback is a key element throughout the paper. I would like to share the abstract: Feedback is an ancient idea, but feedback control is a young field. Nature long ago discovered feedback since it is essential for homeostasis and life. It was the key for harnessing power in the industrial revolution and is today found everywhere around us. Its development as a field involved contributions from engineers, mathematicians, economists and physicists. It is the first systems discipline; it represented a paradigm shift because it cut across the traditional engineering disciplines of aeronautical, chemical, civil, electrical and mechanical engineering, as well as economics and operations research. The scope of control makes it the quintessential multidisciplinary field. Its complex story of evolution is fascinating, and a perspective on its growth is presented in this paper. The interplay of industry, applications, technology, theory and research is discussed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/34322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 7, "answer_id": 4 }
How to prove these? This is a continuation of a previous question, but much more precise. After proving and understanding the basic formulas for a pair of straight lines, I am having some trouble with these: 1) If the equation $ax^2+by^2+2hxy+2gx+2fy+c=0$ represents a pair of parallel lines (which happens when $h^2 = ab$ and $bg^2=af^2$), then the distance between the parallel lines is $\large 2\sqrt{\frac{g^2-ac}{a^2+ab}}$ or $\large 2\sqrt{\frac{f^2-ac}{b^2+ab}}$. 2) The area of the triangle formed by $ax^2+2hxy+by^2=0$ and $lx+my+n=0$ is $ \large \frac{n^2\sqrt{h^2-ab}}{|am^2-2hlm+bl^2|}$. In my module no proof is given; they are just stated as formulas. I am very interested to know how we could prove them.
1) Multiply by $a$ (for a nicer computation) and write $a^2x^2+aby^2+2haxy+2gax+2fay+ac= (lx+my+n)(lx+my+r)$. You get $l=a$, $m=\pm h$, $r+n=2g$, $r+n=\pm 2fa/h$, $nr=ac$. To proceed you need $2g=\pm 2fa/h$, which is equivalent to $g^2=f^2 a^2/(ab)$, i.e. $bg^2=af^2$, which is given. So $r,n = g \pm \sqrt{g^2- ac}$. Now use your formula for the distance between parallel lines. 2) Notice that $ax^2+2hxy+by^2=0$ is equivalent (after multiplying by $a$) to $(ax+hy)^2=(h^2-ab)y^2$, so you get three lines: $lx+my+n=0$, $ax+hy=\sqrt{h^2-ab}\,y$ and $ax+hy=-\sqrt{h^2-ab}\,y$. You probably have a formula for calculating the area of this triangle.
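As a spot-check of the area formula (an editorial addition, using a configuration chosen here so the triangle can be solved by hand): with $a=1$, $h=0$, $b=-1$ the conic $x^2-y^2=0$ is the pair of lines $y=\pm x$, and cutting them with the line $y=1$ gives a triangle of area $1$, in agreement with the closed form:

```python
# Spot-check with a = 1, h = 0, b = -1 (x^2 - y^2 = 0, i.e. the lines y = ±x)
# and the line y = 1 (l = 0, m = 1, n = -1).
a, h, b = 1.0, 0.0, -1.0
l, m, n = 0.0, 1.0, -1.0

# Triangle vertices: the origin plus the intersections of y = 1 with y = ±x.
(x1, y1), (x2, y2), (x3, y3) = (0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)

# Shoelace formula for the area of the triangle.
shoelace = abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# The claimed closed form n^2 sqrt(h^2 - ab) / |a m^2 - 2 h l m + b l^2|.
formula = n ** 2 * (h * h - a * b) ** 0.5 / abs(a * m * m - 2 * h * l * m + b * l * l)
print(shoelace, formula)  # 1.0 1.0
```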
{ "language": "en", "url": "https://math.stackexchange.com/questions/34363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\operatorname{Gal}(\mathbb{Q}(\sqrt[8]{2}, i)/\mathbb{Q}(\sqrt{-2})) \cong Q_8$ I seem to have reached a contradiction. I am trying to prove that $\operatorname{Gal}(\mathbb{Q}(\sqrt[8]{2}, i)/\mathbb{Q}(\sqrt{-2})) \cong Q_8$. I could not think of a clever way to do this, so I decided to just list all the automorphisms of $\mathbb{Q}(\sqrt[8]{2}, i)$ that fix $\mathbb{Q}$ and hand-pick the ones that fix $i\sqrt{2}$. By the Fundamental Theorem of Galois Theory, those automorphisms should be a subgroup of the ones that fix $\mathbb{Q}$. I proved earlier that those automorphisms are given by $\sigma: \sqrt[8]{2} \mapsto \zeta^n\sqrt[8]{2}, i \mapsto \pm i$, where $n \in [0, 7]$ and $\zeta = e^\frac{2\pi i}{8}$. However, I am getting too many automorphisms. One automorphism that fixes $i\sqrt{2}$ is $\sigma: \sqrt[8]{2} \mapsto \zeta\sqrt[8]{2}, i \mapsto -i$. However, this means all powers of $\sigma$ fix $i\sqrt{2}$, and I know $Q_8$ does not contain a cyclic subgroup of order $8$. What am I doing wrong? (Please do not give me the answer. I have classmates for that.)
Would it be easier to notice that the extension $\mathbb{Q}(\sqrt[8]{2},i)$ is equal to $\mathbb{Q}(\sqrt[8]{2},\zeta)$, which is a cyclotomic extension followed by a Kummer extension? You can then work out which elements of its Galois group fix $\sqrt{-2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/34417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
RSA: Encrypting values bigger than the module Good morning! This may be a stupid one, but still, I couldn't google the answer, so please consider answering it in 5 seconds and gaining a piece of rep :-) I'm not doing well with mathematics, and there is a task for me to implement the RSA algorithm. In every paper I've seen, authors assume that the message $X$ encrypted is less than the module $N$, so that $X^e\quad mod\quad N$ allows to fully restore $X$ in the process of decryption. However, I'm really keen to know what if my message is BIGGER than the module? What's the right way to encrypt such a message?
The typical use of the RSA algorithm encrypts a symmetric key that is used to encrypt the actual message, and to decrypt it on the receiving end. Thus, only the symmetric key need be smaller than the modulus.
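A minimal sketch of that hybrid pattern (an editorial addition; the primes $61$ and $53$ and the SHA-256 XOR keystream are toy stand-ins chosen for illustration only, not a secure construction - real RSA uses padded keys of 2048+ bits and a real symmetric cipher):

```python
import hashlib

# Textbook RSA with the classic toy primes p = 61, q = 53 (illustration only).
n_mod, e, d = 3233, 17, 2753       # n = 61 * 53, e * d = 1 (mod phi(n) = 3120)

def keystream(key_int, length):
    """Derive a deterministic byte stream from the symmetric key (toy KDF)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(f"{key_int}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def xor_bytes(data, key_int):
    return bytes(b ^ k for b, k in zip(data, keystream(key_int, len(data))))

message = b"a message much bigger than the modulus 3233"
sym_key = 1234                     # short symmetric key; only this must be < n_mod

ciphertext = xor_bytes(message, sym_key)
wrapped_key = pow(sym_key, e, n_mod)     # RSA encrypts only the short key

# Receiver side: unwrap the key with the private exponent, then decrypt.
recovered_key = pow(wrapped_key, d, n_mod)
plaintext = xor_bytes(ciphertext, recovered_key)
print(plaintext == message)  # True
```

The message length is unconstrained; only the wrapped integer has to fit below the modulus.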
{ "language": "en", "url": "https://math.stackexchange.com/questions/34469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Euclidean algorithm to find the GCD I have to find the greatest common divisor of $a=78$ and $b=132$. I have worked out to $$\begin{align} 132 & = 78 \times 1 + 54 \\ 78 & = 54 \times 1 + 24 \\ 54 & = 24 \times 2 + 6 \\ 24 & = 6 \times 4 + 0 \end{align}$$ and back substituted $$\begin{align} 6 & = 54 - 24 \times 2 \\ & = 54 - (78 - 54) \times 2 \\ & = 3 \times 54 - 78 \times 2 \end{align}$$ However I don't seem to be able to work back to $132$ ? Can someone explain / help?
From the first equation you have $54=132-78$. By plugging this into the last one you get $6=3(132-78)-2\cdot78=3\cdot132-5\cdot78.$
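The back-substitution performed by hand here is exactly what the extended Euclidean algorithm automates; a compact recursive version (an editorial sketch):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # Back-substitute the coefficients from the recursive call.
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(132, 78)
print(g, x, y)  # 6 3 -5, i.e. 6 = 3*132 - 5*78
```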
{ "language": "en", "url": "https://math.stackexchange.com/questions/34529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Simple set exercise seems not so simple Exercise about sets from Birkhoff's "Modern Applied Algebra". Prove that for operation $\ \Delta $ , defined as $\ R \Delta S = (R \cap S^c) \cup (R^c \cap S) $ following is true: $\ R \Delta ( S \Delta T ) = ( R \Delta S ) \Delta T $ ($\ S^c $ is complement of $\ S $) It's meant to be very simple, being placed in the first exercise block of the book. When I started to expand both sides of the equation in order to prove that they're equal, I got this monster just for the left side: $\ R \Delta ( S \Delta T ) = \Bigl( R \cap \bigl( (S \cap T^c) \cup (S^c \cap T) \bigr)^c \Bigr) \cup \Bigl(R^c \cap \bigl( (S \cap T^c) \cup (S^c \cap T) \bigr) \Bigr) $ For the right: $\ ( R \Delta S ) \Delta T = \Bigl(\bigl( (R \cap S^c) \cup (R^c \cap S) \bigr) \cap T^c \Bigr) \cup \Bigl( \bigl( (R \cap S^c) \cup (R^c \cap S) \bigr)^c \cap T \Bigr) $ I've tried to simplify this expression, tried to somehow rearrange it, but no luck. Am I going the wrong way? Or what should I do with what I have?
This is the symmetric difference. It includes all elements that are in exactly one of the two sets. In binary terms, it's the XOR operation. Independent of the order of the operations on the three sets, the result will contain exactly the elements that are in an odd number of the sets.
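This view is easy to test concretely; Python's set type implements symmetric difference as the `^` operator (an editorial illustration, not part of the original answer):

```python
R, S, T = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

# Python's ^ on sets is exactly the symmetric difference, and it associates.
left = R ^ (S ^ T)
right = (R ^ S) ^ T

# Characterization: an element survives iff it lies in an odd number of the sets.
odd = {x for x in R | S | T
       if ((x in R) + (x in S) + (x in T)) % 2 == 1}

print(left, right, odd)
```

Here $1$ lies in one set, $3$ in all three, and $5$ in one, so all three computed sets are $\{1, 3, 5\}$.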
{ "language": "en", "url": "https://math.stackexchange.com/questions/34676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
an illustration question in Meyer's book I am reading the "Nilpotent matrices and Jordan structure" chapter of Meyer's "Matrix analysis and applied linear algebra". I just do not quite understand the figure illustrating how the bases $S_i$ are extended along the subspace chain of $M_i$'s. Could anyone give some explanation? If this were a 2D vector space, the subspaces (except the trivial ones) would be lines passing through the origin. A basis would then be a single vector along such a line (for example, the unit vector in that direction). So what do all those parallelograms/colored dots mean here?
It is not supposed to be a 2D space. It is supposed to illustrate a nested sequence of subspaces of the nullspace. All the subspaces $\mathcal M_4, \mathcal M_3, \mathcal M_2, \mathcal M_1, \mathcal M_0$ are subspaces of the nullspace of the matrix $L$, satisfying $\mathcal M_4 \subseteq \dots \subseteq \mathcal M_0 = N(L)$. Here $4$ is the largest exponent $k$ such that $L^k \neq 0$. Every dot in the figure is supposed to be a basis vector. The text describes it by first finding a basis for $\mathcal M_4$, then extending it to a basis for $\mathcal M_3$, etc., until you reach $\mathcal M_0 = N(L)$ and then have a basis for the whole nullspace. The next picture (7.7.2) then describes how these vectors are extended to a whole basis for $\mathbb C^n$. If the vector $b$ is represented by a dot in $\mathcal M_i$ (but not $\mathcal M_{i+1}$), you can build a chain "on top" of this vector by solving $b = L^i x$ and then taking $L^{i-1}x, L^{i-2}x, \dots, L x, x$ to be the chain formed by $b$. Thus, if $b$ is a basis vector for $\mathcal M_i$ (but not $\mathcal M_{i+1}$), it will have a tower of $i$ vectors on top of it (in fig 7.7.2).
{ "language": "en", "url": "https://math.stackexchange.com/questions/34753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Olympiad calculus problem This problem is from a qualifying round in a Colombian math Olympiad, I thought some time about it but didn't make any progress. It is as follows. Given a continuous function $f : [0,1] \to \mathbb{R}$ such that $$\int_0^1{f(x)\, dx} = 0$$ Prove that there exists $c \in (0,1) $ such that $$\int_0^c{xf(x) \, dx} = 0$$ I will appreciate any help with it.
This is a streamlined version of Thomas Andrews' proof: Put $F(x):=\int_0^x f(t)dt$ and consider the auxiliary function $\phi(x)={1\over x}\int_0^x F(t)dt$. Then $\phi(0)=0$, $\ \phi(1)=\int_0^1 F(t)dt=:\alpha$, and by partial integration one obtains $$\phi'(x)=-{1\over x^2}\int_0^xF(t)dt +{1\over x}F(x)={1\over x^2}\int_0^x t f(t)dt\ .$$ The mean value theorem provides a $\xi\in(0,1)$ with $\phi'(\xi)=\alpha$. If $\alpha$ happens to be $0$ we are done. Otherwise we invoke $F(1)=0$ and conclude that $\phi'(1)=-\alpha$. It follows that there is a $\xi'\in(\xi,1)$ with $\phi'(\xi')=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/34808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 3, "answer_id": 1 }
Inclusion-exclusion principle: Number of integer solutions to equations The problem is: Find the number of integer solutions to the equation $$ x_1 + x_2 + x_3 + x_4 = 15 $$ satisfying $$ \begin{align} 2 \leq &x_1 \leq 4, \\ -2 \leq &x_2 \leq 1, \\ 0 \leq &x_3 \leq 6, \text{ and,} \\ 3 \leq &x_4 \leq 8 \>. \end{align} $$ I have read some papers on this question, but none of them explain clearly enough. I am especially confused by the step where you subtract, from the total number of solutions to the equation (ignoring the upper-bound restrictions), the solutions that we don't want. How do we find the intersection of the sets that we don't want? Either way, in helping me with this, please explain this step.
If you don't get the larger question, start smaller first. 1) How many solutions to $x_1 + x_2 = 15$, no restrictions? (infinitely many, of course) 2) How many solutions where $0\le x_1$, $0\le x_2$? 3) How many solutions where $6\le x_1$, $0\le x_2$? 4) How many solutions where $6\le x_1$, $6\le x_2$? (these last questions don't really say anything about inclusion-exclusion yet) 5) How many solutions where $0\le x_1\le 5$, $0\le x_2$? Hint: exclude the complement. This is the first step of the exclusion. 6) How many solutions where $0\le x_1\le 5$, $0\le x_2\le7$? Hint: exclude both complements, but re-include the solutions where the two excluded ranges overlap (their intersection), because those were excluded twice. That is the gist of it. Now it gets harder, because you need to do it for 4 variables, not just 2. But that's the exercise: figuring out how to manage excluding, then including back what you threw away twice, then excluding again what you added back too much of.
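Applying that scheme to the original four-variable problem, one can check the inclusion-exclusion count against brute force (an editorial sketch; assumes Python 3.8+ for `math.comb`). Substituting $y_1=x_1-2$, $y_2=x_2+2$, $y_3=x_3$, $y_4=x_4-3$ turns the problem into nonnegative solutions of $y_1+y_2+y_3+y_4=12$ with caps $2, 3, 6, 5$:

```python
from itertools import product
from math import comb

# Brute force directly over the allowed ranges.
brute = sum(1 for x1, x2, x3, x4 in product(range(2, 5), range(-2, 2),
                                            range(0, 7), range(3, 9))
            if x1 + x2 + x3 + x4 == 15)

# Inclusion-exclusion after shifting every lower bound to 0.
caps = [2, 3, 6, 5]
TOTAL = 12

def nonneg(s, k=4):
    """Number of nonnegative integer solutions of y1 + ... + yk = s."""
    return comb(s + k - 1, k - 1) if s >= 0 else 0

count = 0
for mask in range(2 ** len(caps)):
    # Force y_i >= caps[i] + 1 for each i in the subset encoded by mask.
    over = sum(caps[i] + 1 for i in range(len(caps)) if (mask >> i) & 1)
    sign = (-1) ** bin(mask).count("1")
    count += sign * nonneg(TOTAL - over)

print(brute, count)  # 30 30
```

Both methods agree: the answer to the original exercise is $455 - 525 + 100 = 30$.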
{ "language": "en", "url": "https://math.stackexchange.com/questions/34871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Evaluating the limit $\lim \limits_{x \to \infty} \frac{x^x}{(x+1)^{x+1}}$ How do you evaluate the limit $$\lim_{x \to \infty} \frac{x^x}{(x+1)^{x+1}}?$$
I think we should be witty about how we write it. How about we consider instead the limit $$ \lim_{x \to \infty} \frac{x^x}{(x+1)^x (x+1)} = \lim_{x \to \infty} \left ( \frac{x}{x+1} \right )^{x} \cdot \frac{1}{x+1} $$ I think that this is suggestive of a proof?
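Numerically, the two factors behave as the rewriting suggests: $(x/(x+1))^x \to e^{-1}$ while $1/(x+1) \to 0$, so the product tends to $0$. A quick check (an editorial addition):

```python
import math

# (x/(x+1))^x tends to 1/e, and the extra factor 1/(x+1) drives the product to 0.
for x in (10.0, 1e3, 1e6):
    first = (x / (x + 1)) ** x          # -> exp(-1)
    whole = first / (x + 1)             # -> 0
    print(x, first, whole)
```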
{ "language": "en", "url": "https://math.stackexchange.com/questions/34983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
A Binomial Coefficient Sum: $\sum_{m = 0}^{n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l}$ In my work on $f$-vectors in polytopes, I ran across an interesting sum which has resisted all attempts of algebraic simplification. Does the following binomial coefficient sum simplify? \begin{align} \sum_{m = 0}^{n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l} \qquad l \geq 0 \end{align} Update: After some numerical work, I believe a binomial sum orthogonality identity is at work here because I see only $\pm 1$ and zeros. Any help would certainly be appreciated. I take $\binom{-1}{l} = (-1)^{l}$, $\binom{m-1}{l} = 0$ for $0 < m \leq l$ and the standard definition otherwise. Thanks!
$$\sum_{m=0}^n (-1)^{n-m} \binom{n}{m} \binom{m-1}{l} = (-1)^{l+n} + \sum_{l+1 \leq m \leq n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l}$$ So we need to compute this last sum. It is clearly zero if $l \geq n$, so we assume $l < n$. It is equal to $f(1)$ where $f(x)= \sum_{l+1 \leq m \leq n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l} x^{m-1-l}$. We have that $$\begin{eqnarray*} f(x) & = & \frac{1}{l!} \frac{d^l}{dx^l} \left( \sum_{l+1 \leq m \leq n} (-1)^{n-m} \binom{n}{m} x^{m-1} \right) \\ & = & \frac{1}{l!} \frac{d^l}{dx^l} \left( \frac{(-1)^{n+1}}{x} + \sum_{0 \leq m \leq n} (-1)^{n+1} \binom{n}{m} (-x)^{m-1} \right) \\ & = & \frac{1}{l!} \frac{d^l}{dx^l} \left( \frac{(-1)^{n+1}}{x} + \frac{(x-1)^n}{x} \right) \\ & = & \frac{(-1)^{n+1+l}}{x^{l+1}} + \frac{1}{l!} \sum_{k=0}^l \binom{l}{k} n(n-1) \ldots (n-k+1) (x-1)^{n-k} \frac{(-1)^{l-k} (l-k)!}{x^{1+l-k}} \end{eqnarray*}$$ (this last transformation thanks to Leibniz) and since $n>l$, $f(1)=(-1)^{l+n+1}$. In the end, your sum is equal to $(-1)^{l+n}$ if $l \geq n$, $0$ otherwise.
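One can confirm the closed form $(-1)^{l+n}$ for $l \geq n$ (and $0$ otherwise) by direct summation under the question's conventions (an editorial sketch; assumes Python 3.8+ for `math.comb`):

```python
from math import comb

def binom(m, l):
    """Binomial coefficient with the question's conventions:
    C(-1, l) = (-1)**l, and C(m, l) = 0 when 0 <= m < l."""
    if m == -1:
        return (-1) ** l
    return comb(m, l) if 0 <= l <= m else 0

def s(n, l):
    return sum((-1) ** (n - m) * comb(n, m) * binom(m - 1, l)
               for m in range(n + 1))

checks = all(s(n, l) == ((-1) ** (l + n) if l >= n else 0)
             for n in range(0, 8) for l in range(0, 12))
print(checks)  # True
```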
{ "language": "en", "url": "https://math.stackexchange.com/questions/35051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Working with Conditions or Assumptions in Mathematica with boolean operators I have the following code: $Assumptions = {x > 0} b[x_] := x^2 b'[x] > 0 In my (very basic) understanding of Mathematica, this should give me me the Output True, but I get 2 x > 0. I also tried b[x_] := x^2 /; x > 0 and Assuming[x > 0, b'[x] > 0]. I've searched the mathematica help, but without success. What's my basic error and how do I get the desired output? EDIT: The original question is answered, now I wanted to adapt this solution to two variables: c[x_, y_] := x^2 + y $Assumptions = {y > 0} $Assumptions = {x > 0} Simplify[c[x, y] > 0] It follows the same logic as the first case, where I now get the desired output, but why not here? I realize that these are probably typical beginners questions, so if you could explain the logic to me or give me a hint where to read up upon this stuff? Neither the Mathematica help nor my university's (very short) guidebook are sufficient for my understanding.
Your first code $Assumptions = {x > 0} b[x_] := x^2 b'[x] > 0 works fine if you apply Simplify to the result (2x > 0). Edit: For completeness, I also add the answer of J.M in the comment to the second question. $Assumptions = {x > 0} overwrites $Assumptions = {y > 0}. Try $Assumptions = x > 0 && y > 0.
{ "language": "en", "url": "https://math.stackexchange.com/questions/35383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving a property of homogeneous equation that is exact The following question was given to us in an exam: If $0=M dx + N dy$ is an exact equation, in addition to the fact that $\frac{M}{N} = f\Big(\frac{y}{x}\Big)$ is homogeneous, then $xM_x + yM_y = (xN_x + yN_y)f$. Now I had absolutely no idea how to prove this question. I tried doing $M = Nf$ and taking derivatives and multiplying by $x$ or $y$, and you get the required R.H.S. but with the extra term $N(\frac{-f_x}{x} + \frac{f_y}{x})$ added. How does one approach a question like that?? I have never encountered a question like that, not even when solving for different types of integrating factors to get an exact equation or when working with a homogeneous equation. Anyone got any ideas? Please don't post a complete solution. Thanks.
So the solution should be: since $\frac{M}{N} = f\Big(\frac{y}{x}\Big)$, the degrees of homogeneity of $M$ and $N$ must be equal. So $xM_x + yM_y = aM$ and $xN_x + yN_y = aN$ by Euler's homogeneity theorem, where $a$ is the common degree of homogeneity of $M$ and $N$. Dividing the first of these equations by the second, one gets $xM_x + yM_y = \frac{M}{N} (xN_x + yN_y) = f\Big(\frac{y}{x}\Big)(xN_x + yN_y )$. Is that correct?
{ "language": "en", "url": "https://math.stackexchange.com/questions/35437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is the difference between "family" and "set"? The definition of "family" on mathworld (http://mathworld.wolfram.com/Family.html) is a collection of objects of the form $\{a_i\}_{i \in I}$, where $I$ is an index set. But, I think a set can also be represented in this form. So, what is the difference between the concept family and the concept set? Is there any example of a collection of objects that is a family but not a set, or conversely? Many thanks!
A family is indeed a set, and it is defined by the indexing -- as you observed. Just as well, every set $A$ is a family of the form $\{i\}_{i\in A}$. However, often you want some property of the index set (i.e. some order relation, or some other structure) that you do not require from a general set. This additional structure on the index can help you define further properties of the family, or prove things using the properties of the family (its elements are disjoint, co-prime, increasing in some order, every two elements have a supremum, and so on).
{ "language": "en", "url": "https://math.stackexchange.com/questions/35462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 6, "answer_id": 5 }
Half angle formulas Say I have the trig identity $$ \tan \frac{\theta}{2} = \frac{ 1 - \cos \theta }{ \sin \theta } $$ And you have the 3,4,5 triangle: taking the angle $\theta$ as marked, why can't we just say $$ \tan \frac{\theta}{2} = \frac{ 1.5 }{ 4 } = \frac{ 3 }{ 8 } $$ (Like, if you halve $\theta$, then the opposite side length is halved too, right?) But this doesn't work or check out with the identity: $$ \tan \frac{\theta}{2} = \frac{ 1 - \frac{4}{5} }{ \frac{3}{5} } = \frac{ \frac{1}{5} }{ \frac{3}{5} } = \frac{1}{3} $$
Actually, if you bisect an angle, the bisector divides the opposite side proportionally to the two other sides (see http://en.wikipedia.org/wiki/Angle_bisector_theorem ). In your case, 3 would be divided into parts $4/3$ and $5/3$. So you get $\tan \theta/2=(4/3)/4= 1/3$. Everything works out fine.
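Both computations can be checked numerically for the 3-4-5 triangle (an editorial addition): the half-angle identity and the bisector argument give the same value $1/3$.

```python
import math

# The 3-4-5 right triangle: cos(theta) = 4/5, sin(theta) = 3/5, tan(theta) = 3/4.
theta = math.atan2(3.0, 4.0)

half_tan = math.tan(theta / 2)
identity = (1 - 4 / 5) / (3 / 5)   # the half-angle identity (1 - cos) / sin
bisector = (4 / 3) / 4             # bisector splits the side 3 into 4/3 and 5/3

print(half_tan, identity, bisector)  # all ~ 1/3
```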
{ "language": "en", "url": "https://math.stackexchange.com/questions/35531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why are addition and multiplication commutative, but not exponentiation? We know that the addition and multiplication operators are both commutative, and the exponentiation operator is not. My question is why. As background there are plenty of mathematical schemes that can be used to define these operators. One of these is hyperoperation where $H_0(a,b) = b+1$ (successor op) $H_1(a,b) = a+b$ (addition op) $H_2(a,b) = ab $ (multiplication op) $H_3(a,b) = a^b$ (exponentiation op) $H_4(a,b) = a\uparrow \uparrow b$ (tetration op: $a^{(a^{(...a)})}$ nested $b$ times ) etc. Here it is not obvious to me why $H_1(a,b)=H_1(b,a)$ and $H_2(a,b)=H_2(b,a)$ but not $H_3(a,b)=H_3(b,a)$ Can anyone explain why this symmetry breaks, in a reasonably intuitive fashion? Thanks.
When I first read your question, I expected that it must mean that addition would possess some obscure property that multiplication lacks; after all, both the additive structure and multiplicative structure are abelian groups, so you'd expect something like this to just generalize. But after some thinking, I realized that this wasn't the case, and instead that the problem is that we aren't generalizing properly. For if we define "applying an operator $f$, $n$ times, i.e. $f^n$" as the recursive procedure $ f^n(x) = \begin{cases} x & \text{if n = 0} \\ f^1(f^{n - 1}(x)) & \text{otherwise} \end{cases} $ then this definition actually uses addition, so if we'd want to generalize this procedure properly, we'd need to change our definition of "applying an operator $n$ times" as well. And indeed $a^n$ does equal $(a^2)^{n / 2}$, which induces a better generalization of commutativity.
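The hyperoperation ladder from the question can be sketched and probed for commutativity directly (an editorial addition; a naive recursion on positive integers, with $k \geq 4$ built by folding the previous level):

```python
def H(k, a, b):
    """Hyperoperation ladder on positive integers (naive recursive sketch)."""
    if k == 0:
        return b + 1
    if k == 1:
        return a + b
    if k == 2:
        return a * b
    if k == 3:
        return a ** b
    # k >= 4: fold the previous level b times, starting from a.
    result = a
    for _ in range(b - 1):
        result = H(k - 1, a, result)
    return result

pairs = [(2, 3), (2, 5), (3, 4)]
add_comm = all(H(1, a, b) == H(1, b, a) for a, b in pairs)
mul_comm = all(H(2, a, b) == H(2, b, a) for a, b in pairs)
exp_comm = all(H(3, a, b) == H(3, b, a) for a, b in pairs)
print(add_comm, mul_comm, exp_comm)  # True True False
```

Already $2^3 = 8 \neq 9 = 3^2$ breaks the symmetry at level 3, and the breakage persists up the ladder.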
{ "language": "en", "url": "https://math.stackexchange.com/questions/35598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73", "answer_count": 13, "answer_id": 0 }
Simplifying simple radicals $\sqrt{\frac{1}{a}}$ I'm having problems simplifying this apparently simple radical: $\sqrt{\frac{1}{a}}$ The book I'm working through gives the answer as: $\frac{1}{a}\sqrt{a}$ Could someone break down the steps used to get there? I've managed to answer all the other questions in this chapter right, but my brain refuses to lock onto this one and I'm feeling really dense.
What do you get if you square both expressions and then simplify?
{ "language": "en", "url": "https://math.stackexchange.com/questions/35745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Validate my reasoning for this logical equivalence I've basically worked out how to do this question but not sure about my reasoning: Question: Show 1) $(p \rightarrow q) \land (q \rightarrow (\lnot p \lor r))$ is logically equivalent to: 2) $p \rightarrow (q \land r)$ and I am given this equivalence as a hint: $u \rightarrow v$ is logically equivalent to $(\lnot u) \lor v$ My reasoning: From statement (1): $(\lnot p \lor r)$ is equivalent to $(p \rightarrow r)$ (by the hint given). Hence statement (1) becomes: $(p \rightarrow q) \land (q \rightarrow (p \rightarrow r))$. We assume $p$ is true; therefore $q$ is true. Since $q$ is true, $p \rightarrow r$ holds, and since $p$ is true, so does $r$. Therefore $p$ implies $q$ and $p$ also implies $r$. Hence $p \rightarrow (q \land r)$. I understand the basic ideas but I'm really confused as to how I can write it all down logically and clearly
There are several routes to a proof; I will list two: 1) You can make a list of all cases. Since you have three variables, there are 8 possibilities for them to have the values true/false. You can make a table with column titles $p,q,r,p \to q, \lnot p \lor q, \dots$ and enter the truth values, then compare the columns for the two expressions you want to be equivalent. 2) As indicated by the hint, you can transform all occurrences of $\to$ to $\lor$. Then you can use distributivity to bring both expressions to a normal form, for example to conjunctive normal form http://en.wikipedia.org/wiki/Conjunctive_normal_form
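Route (1) is easy to mechanize. A brute-force check over all 8 truth assignments (my own sketch, not part of the original exercise):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is (not a) or b.
    return (not a) or b

def lhs(p, q, r):
    return implies(p, q) and implies(q, (not p) or r)

def rhs(p, q, r):
    return implies(p, q and r)

# Compare the two expressions on all 2^3 = 8 assignments.
assert all(lhs(p, q, r) == rhs(p, q, r)
           for p, q, r in product([False, True], repeat=3))
```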
{ "language": "en", "url": "https://math.stackexchange.com/questions/35796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 4 }
two point line form in 3d the two-point form for a line in 2d is $$y-y_1 = \left(\frac{y_2-y_1}{x_2-x_1}\right)(x-x_1);$$ what is it for 3d lines/planes?
For lines, you need two equations, so it is just duplicated: $y-y_1 = \frac{y_2-y_1}{x_2-x_1} (x-x_1)$ and $z-z_1 = \frac{z_2-z_1}{x_2-x_1} (x-x_1)$ For planes, you need three points. Three approaches are shown in Wikipedia under "Define a plane through three points"
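The two duplicated equations are equivalent to the parametric form $P = P_1 + t(P_2 - P_1)$; a quick numeric check of that equivalence (function name is mine):

```python
def line_point(p1, p2, t):
    """Point on the line through p1 and p2 at parameter t (t=0 gives p1, t=1 gives p2)."""
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

p1, p2 = (1.0, 2.0, 3.0), (4.0, 6.0, 8.0)
x, y, z = line_point(p1, p2, 0.25)

# The point satisfies both two-point-form equations:
#   y - y1 = (y2-y1)/(x2-x1) * (x - x1)   and   z - z1 = (z2-z1)/(x2-x1) * (x - x1)
assert abs((y - 2.0) - (6.0 - 2.0) / (4.0 - 1.0) * (x - 1.0)) < 1e-12
assert abs((z - 3.0) - (8.0 - 3.0) / (4.0 - 1.0) * (x - 1.0)) < 1e-12
```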
{ "language": "en", "url": "https://math.stackexchange.com/questions/35857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
what name for a shape made from two intersecting circles of different sizes? what is the name of a shape made from two circles with different radii that intersect each other? Sort of like a snowman shape, made of a big and a small ball of snow, melted together a bit! :-) Thanks
I do know that a "figure 8" shape is known as a lemniscate: you can read more here: http://en.wikipedia.org/wiki/Lemniscate. But I'm not sure if that's what you're looking for. What you seem to describe is the union of two circles (of different size) which intersect at two points. Wikipedia has an interesting "taxonomy" of various shapes and variations of familiar shapes, etc.: http://en.wikipedia.org/wiki/List_of_geometric_shapes
{ "language": "en", "url": "https://math.stackexchange.com/questions/35915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Exponential Diophantine Equations for Beginners What would be some exponential Diophantine equations for the beginner to solve (which can demonstrate the techniques?) especially good if there are hints! Thank you very much!
The posed problem is tightly connected with FLT, which is not examined here; a pity! However, suppose a Fermat equality existed. Then, in the numeration system with prime base $n>2$, the next-to-last digits of the numbers $1^n$, $2^n$, ..., $(n-1)^n$ would be equal to 0, and therefore the two-digit ending of the number $S=1^n+2^n+...+(n-1)^n$ would equal that of the sum of the arithmetic progression $S'=1+2+...+(n-1)$, i.e. the number $d0$, where the digit $d$ is not zero. That contradicts the direct calculation of the ending of the number $S$ (it is equal to $00$, which is evident when grouping the terms of the sum $S$ into pairs: $S=[1^n+(n-1)^n]+[2^n+(n-2)^n]+...)$.
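The central claim here, that $S=1^n+2^n+\cdots+(n-1)^n$ ends in two zero digits base $n$, i.e. $S\equiv 0 \pmod{n^2}$ for odd prime $n$, is easy to verify numerically (a sketch of mine, not from the original post):

```python
def power_sum_mod_square(n):
    """S = 1^n + 2^n + ... + (n-1)^n reduced mod n^2."""
    return sum(pow(k, n, n * n) for k in range(1, n)) % (n * n)

# Pairing k^n with (n-k)^n kills every term mod n^2 when n is odd:
# (n-k)^n = -k^n + n*n*k^(n-1)*(...) by the binomial theorem.
for n in [3, 5, 7, 11, 13]:
    assert power_sum_mod_square(n) == 0
```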
{ "language": "en", "url": "https://math.stackexchange.com/questions/35987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
How to prove a function is positive or negative in $x \in \mathbb{R}$ A homework question: I know the solution but I don't know how to prove that the function is negative XOR positive for $x \in \mathbb{R}$ f is continuous in $\mathbb{R}$. $$\text{ prove that if } |f(x)| \ge x \text{ for } x \in \mathbb{R} \text { then: } \lim_{x\to\infty} f(x) = \infty \text{ or } \lim_{x\to\infty} f(x) = -\infty$$ Now once I prove that the function is negative XOR positive it's relatively simple to prove that the limits are at infinity. How do I prove that there is no $f(x) = 0$? Thanks!
The function, as you stated it, is not exclusively negative nor exclusively positive. There's a really simple counterexample. If $f(x)=x$ for all $x \in \mathbb{R}$, then $f$ is continuous everywhere, $|f(x)|\ge x$ everywhere, and even one of the limits is satisfied ($\lim_{x\to\infty} f(x) = \infty$). But clearly, $f(0)=0$, $f(-1)=-1$, and $f(1)=1$. So there is a point where $f(x)=0$, and $f$ is not exclusively positive nor exclusively negative everywhere. As to the question you were most likely asking, user6312 already answered it for you, but I'll type the same proof for completeness. (Also, does anyone know where I can find a guide on how to get latex to work properly in here? I can't get things like /mathbb{R} or /infinity or /in to work. Maybe the rules have changed since I last used latex...) If $f(1)\ge 1$, then $f$ is positive for all $x>1$. (Suppose there exists $b>1$ such that $f(b)<1$. Since $|f(x)|\ge x$ for all $x$ and $b>1$, we must have $f(b)\le -b<0$. But $f(1)\ge 1>0$ and $f(b)<0$, so by the Intermediate Value Theorem, the coolest theorem ever, there exists a point $c$ with $1<c<b$ such that $f(c)=0$. Since $c>0$ and $f(c)=0$, this contradicts $|f(c)|\ge c$.) Since $f(x)$ is positive for all $x>1$, we have $|f(x)|=f(x)$ for $x>1$. Thus $\lim_{x\to\infty} f(x) \ge \lim_{x\to\infty} x = \infty$. If $f(1)\le -1$, then let $g(x)=-f(x)$, do the same proof, and you'll end with $\lim_{x\to\infty} f(x) = -\infty$.
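The counterexample in the first paragraph can be checked mechanically (my own sketch):

```python
f = lambda x: x  # the proposed counterexample

xs = [i / 10 for i in range(-50, 51)]
assert all(abs(f(x)) >= x for x in xs)  # |f(x)| >= x at every sampled point
assert f(0) == 0                        # yet f vanishes at 0
assert f(-1) < 0 < f(1)                 # and f changes sign
```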
{ "language": "en", "url": "https://math.stackexchange.com/questions/36037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Functions, graphs, and adjacency matrices One naively thinks of (continuous) functions as of graphs1 (lines drawn in a 2-dimensional coordinate space). One often thinks of (countable) graphs2 (vertices connected by edges) as represented by adjacency matrices. That's what I learned from early on, but only recently I recognized that the "drawn" graphs1 are nothing but generalized - continuous - adjacency matrices, and thus graphs1 are more or less the same as graphs2. I'm quite sure that this is common (maybe implicit) knowledge among working mathematicians, but I wonder why I didn't learn this explicitly in any textbook on set or graph theory I've read. I would have found it enlightening. My questions are: Did I read my textbooks too superficially? Is the analogy above (between graphs1 and graphs2) misleading? Or is the analogy too obvious to be mentioned?
My opinion: the analogy is not misleading, is not too obvious to be mentioned, but is also not terribly useful. Have you found a use for it? EDIT: Here's another way to think about it. A $\it relation$ on a set $S$ is a subset of $S\times S$, that is, it's a set of ordered pairs of elements of $S$. A relation on $S$ can be viewed as a (directed) graph, with vertex set $S$ and edge set the relation. We draw this graph by drawing the vertices as points in the plane and the edges as (directed) line segments connecting pairs of points Now consider "graph" in the sense of "draw the graph of $x^2+y^2=1$." That equation is a relation on the set of real numbers, and the graph is obtained by drawing the members of this relation as points in the plane. So the two kinds of graph are two ways of drawing a picture to illustrate a relation on a set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If $n$ is any positive integer, prove that $\sqrt{4n-2}$ is irrational If $n$ is any positive integer, prove that $\sqrt{4n-2}$ is irrational. I've tried proving by contradiction but I'm stuck, here is my work so far: Suppose that $\sqrt{4n-2}$ is rational. Then we have $\sqrt{4n-2}$ = $\frac{p}{q}$, where $ p,q \in \mathbb{Z}$ and $q \neq 0$. From $\sqrt{4n-2}$ = $\frac{p}{q}$, I just rearrange it to: $n=\frac{p^2+2q^2}{4q^2}$. I'm having troubles from here, $n$ is obviously positive but I need to prove that it isn't an integer. Any corrections, advice on my progress and what I should do next?
$4n-2 = (a/b)^2$ with $\operatorname{gcd}(a,b) = 1$, so $b^2$ divides $a^2$; since $\operatorname{gcd}(a,b) = 1$, this forces $b = 1$. So now $a^2 = 4n-2$ is even, hence $2$ divides $a$; write $a = 2k$. Then by substitution $4n-2 = 4k^2$, i.e. $2n-1 = 2k^2$. Left side is odd but the right side is even. Contradiction!
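An empirical sanity check of the statement (mine, not part of the proof): $4n-2$ is never a perfect square, since $4n-2 \equiv 2 \pmod 4$ while squares are $0$ or $1 \pmod 4$.

```python
import math

def is_perfect_square(m):
    r = math.isqrt(m)
    return r * r == m

# 4n - 2 is congruent to 2 mod 4, but squares are 0 or 1 mod 4.
assert not any(is_perfect_square(4 * n - 2) for n in range(1, 100001))
```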
{ "language": "en", "url": "https://math.stackexchange.com/questions/36195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
What is the standard interpretation of order of operations for the basic arithmetic operations? What is the standard interpretation of the order of operations for an expression involving some combination of grouping symbols, exponentiation, radicals, multiplication, division, addition, and subtraction?
Any parts of an expression grouped with grouping symbols should be evaluated first, followed by exponents and radicals, then multiplication and division, then addition and subtraction. Grouping symbols may include parentheses/brackets, such as $()$ $[]$ $\{\}$, and vincula (singular vinculum), such as the horizontal bar in a fraction or the horizontal bar extending over the contents of a radical. Multiple exponentiations in sequence are evaluated right-to-left ($a^{b^c}=a^{(b^c)}$, not $(a^b)^c=a^{bc}$). It is commonly taught, though not necessarily standard, that ungrouped multiplication and division (or, similarly, addition and subtraction) should be evaluated from left to right. (The mnemonics PEMDAS and BEDMAS sometimes give students the idea that multiplication and division [or similarly, addition and subtraction] are evaluated in separate steps, rather than together at one step.) Implied multiplication (multiplication indicated by juxtaposition rather than an actual multiplication symbol) and the use of a $/$ to indicate division often cause ambiguity (or at least difficulty in proper interpretation), as evidenced by the $48/2(9+3)$ or $48÷2(9+3)$ meme. This is exacerbated by the existence of calculators (notably the obsolete Texas Instruments TI-81 and TI-85), which (at least in some instances) treated the $/$ division symbol as if it were a vinculum, grouping everything after it.
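Python happens to implement exactly these conventions (with `**` for exponentiation), which makes the ambiguity easy to demonstrate; note that `/` here is an ordinary left-to-right division, not a vinculum:

```python
# Left-to-right evaluation of ungrouped * and /:
assert 48 / 2 * (9 + 3) == 288.0   # (48/2) * 12, the left-to-right reading
assert 48 / (2 * (9 + 3)) == 2.0   # the "vinculum" reading needs explicit grouping

# Exponentiation associates right-to-left:
assert 2 ** 3 ** 2 == 512          # 2 ** (3 ** 2), not (2 ** 3) ** 2 == 64

# Addition and subtraction are evaluated together, left to right:
assert 10 - 4 + 3 == 9             # (10 - 4) + 3, not 10 - (4 + 3) == 3
```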
{ "language": "en", "url": "https://math.stackexchange.com/questions/36270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 1, "answer_id": 0 }
What does $\ll$ mean? I saw two less than signs on this Wikipedia article and I was wonder what they meant mathematically. http://en.wikipedia.org/wiki/German_tank_problem EDIT: It looks like this can use TeX commands. So I think this is the symbol: $\ll$
Perhaps not its original intention, but we (my collaborators and former advisor) use $X \gg Y$ to mean that $X \geq c Y$ for a sufficiently large constant $c$. Precisely, we usually use it when we write things like: $$ f(x) = g(x) + O(h(x)) \quad \Longrightarrow \quad f(x) = g(x) (1 + o(1)) $$ when $g(x) \gg h(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 8, "answer_id": 2 }
How to calculate the new intersection on the x-axis after rotation of a rectangle? I've been trying to calculate the new intersection on the x-axis after rotation of any given rectangle. The rectangle's center is the point $(0,0)$. What do I know: * *length of B (that is half of the width of the given rectangle) *angle of a (that is the rotation of the rectangle) What do I want to know: length of A (or value of point c on the x-axis).
Hint: Try to divide the cases. Referring to your image, after the rotation by the angle $a$, does the vertex on the left side of the rectangle pass the x-axis or not? Suppose now that your rectangle has one side of length 2B, and the other one "large", so the vertex on the left side doesn't pass the x-axis. Then using Pythagoras you get $A=\sqrt{B^2 + B^2 \sin^2(a)}$. What about the other case?
{ "language": "en", "url": "https://math.stackexchange.com/questions/36436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to compute homotopy classes of maps on the 2-torus? Let $\mathbb T^2$ be the 2-Torus and let $X$ be a topological space. Is there any way of computing $[\mathbb T^2,X]$, the set of homotopy class of continuous maps $\mathbb T^2\to X$ if I know, for instance, the homotopy groups of $X$? Actually, I am interested in the case $X=\mathbb{CP^\infty}$. I would like to classify $\mathbb T^1$-principal bundles over $\mathbb T^2$ (in fact $\mathbb T^2$-principal bundles, but this follows easily.)
This is a good chance to advertise the paper Ellis, G.J. Homotopy classification the J. H. C. Whitehead way. Exposition. Math. 6(2) (1988) 97-110. Graham Ellis is referring to Whitehead's paper "Combinatorial Homotopy II", not so well read as "Combinatorial Homotopy I". He writes:" Almost 40 years ago J.H.C. Whitehead showed in \cite{W49:CHII} that, for connected $CW$-complexes $X, Y$ with dim $X \le n$ and $\pi_i Y = 0$ for $2\le i \le \ n - 1$, the homotopy classification of maps $X \to Y$ can be reduced to a purely algebraic problem of classifying, up to an appropriate notion of homotopy, the $\pi_1$-equivariant chain homomorphisms $C_* \widetilde{X} \to C_* \widetilde{Y}$ between the cellular chain complexes of the universal covers. The classification of homotopy equivalences $Y \simeq Y$ can similarly be reduced to a purely algebraic problem. Moreover, the algebra of the cellular chains of the universal covers closely reflects the topology, and provides pleasant and interesting exercises. "These results ought to be a standard piece of elementary algebraic topology. Yet, perhaps because of the somewhat esoteric exposition given in \cite{W49:CHII}, and perhaps because of a lack of worked examples, they have remained largely ignored. The purpose of the present paper is to rectify this situation."
{ "language": "en", "url": "https://math.stackexchange.com/questions/36488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 0 }
Which simple puzzles have fooled professional mathematicians? Although I'm not a professional mathematician by training, I felt I should have easily been able to answer straight away the following puzzle: Three men go to a shop to buy a TV and the only one they can afford is £30 so they all chip in £10. Just as they are leaving, the manager comes back and tells the assisitant that the TV was only £25. The assistant thinks quickly and decides to make a quick profit, realising that he can give them all £1 back and keep £2. So the question is this: If he gives them all £1 back which means that they all paid £9 each and he kept £2, wheres the missing £1? 3 x £9 = £27 + £2 = £29...?? Well, it took me over an hour of thinking before I finally knew what the correct answer to this puzzle was and, I'm embarrassed. It reminds me of the embarrassement some professional mathematicians must have felt in not being able to give the correct answer to the famous Monty Hall problem answered by Marilyn Vos Savant: http://www.marilynvossavant.com/articles/gameshow.html Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what's behind the doors, opens another door, say #3, which has a goat. He says to you, "Do you want to pick door #2?" Is it to your advantage to switch your choice of doors? Yes; you should switch. It's also mentioned in the book: The Man Who Only loved Numbers, that Paul Erdos was not convinced the first time either when presented by his friend with the solution to the Monty Hall problem. So what other simple puzzles are there which the general public can understand yet can fool professional mathematicians?
Along the same lines as the Monty Hall Problem is the following (lifted from Devlin's Angle on MAA and quickly amended): I have two children, and (at least) one of them is a boy born on a Tuesday. What is the probability that I have two boys? Read a fuller analysis here.
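The puzzle's counterintuitive answer, 13/27 under the usual uniform model (my assumption about the intended model), can be checked by direct enumeration:

```python
from fractions import Fraction
from itertools import product

# A child is a (sex, day-of-week) pair; day 2 stands for Tuesday.
children = list(product(["boy", "girl"], range(7)))

# All 14 * 14 = 196 equally likely ordered two-child families.
families = [(c1, c2) for c1 in children for c2 in children]

tuesday_boy = [fam for fam in families if ("boy", 2) in fam]
two_boys = [fam for fam in tuesday_boy
            if fam[0][0] == "boy" and fam[1][0] == "boy"]

p = Fraction(len(two_boys), len(tuesday_boy))
assert p == Fraction(13, 27)
```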
{ "language": "en", "url": "https://math.stackexchange.com/questions/36545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 9, "answer_id": 1 }
Gram matrix invertible iff set of vectors linearly independent Given a set of vectors $v_1 \cdots v_n$, the $n\times n$ Gram matrix $G$ is defined as $G_{i,j}=v_i \cdot v_j$ Due to symmetry in the dot product, $G$ is Hermitian. I'm trying to remember why $|G|=0$ iff the vectors are not linearly independent.
Here's another way to look at it (for real vectors; in the complex case replace $A^T$ by the conjugate transpose $A^*$ throughout). If $A$ is the matrix with columns $v_1,\ldots,v_n$, and the columns are not linearly independent, it means there exists some vector $u \in \mathbb{R}^n$ where $u \neq 0$ such that $A u = 0$. Since $G = A^T A$, this means $G u = A^T A u = A^T 0 = 0$ or that there exists a vector $u \neq 0$ such that $G u = 0$. So $G$ is not of full rank. This proves the "if" part. The "only if" part -- i.e. if $|G| = 0$, the vectors are not linearly independent -- follows because $|G| = |A^T A| = |A|^2 = 0$ which implies that $|A| = 0$ and so $v_1,\ldots,v_n$ are not linearly independent.
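A small numeric illustration of both directions (my own sketch, real vectors, helper names mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram(vectors):
    # G[i][j] = v_i . v_j
    return [[dot(u, v) for v in vectors] for u in vectors]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

independent = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]
dependent = [(1, 2, 3), (2, 4, 6), (0, 1, 0)]  # second vector = 2 * first

assert det3(gram(independent)) != 0
assert det3(gram(dependent)) == 0
```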
{ "language": "en", "url": "https://math.stackexchange.com/questions/36580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 5, "answer_id": 1 }
Evenly distribute points along a path I have a user defined path which a user has hand drawn - the distance between the points which make up the path is likely to be variant. I would like to find a set of points along this path which are equally separated. Any ideas how to do this?
If you measure distance along the path it is no different from a straight line. If the length is $L$ and you want $n$ points (including the ends) you put a point at one end and every $\frac{L}{n-1}$ along the way. If you measure distance as straight lines between the points there is no guarantee of a solution, but you could just start with this guess (or something a bit smaller) and "swing a compass" from each point, finding where it cuts the curve (could be more than once; this is problematic), and see how close to the end you wind up. Then a one-dimensional rootfinder (the parameter is the length of the radius) will do as well as possible.
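The first (arc-length) approach can be sketched for a hand-drawn path stored as a polyline (function names are mine):

```python
import math

def resample(points, n):
    """Return n points evenly spaced by arc length along the polyline `points`."""
    # Cumulative arc length at each vertex.
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]

    out = []
    seg = 0
    for i in range(n):
        target = total * i / (n - 1)          # arc length of the i-th output point
        while seg < len(points) - 2 and cum[seg + 1] < target:
            seg += 1                          # advance to the segment containing target
        span = cum[seg + 1] - cum[seg]
        t = (target - cum[seg]) / span if span > 0 else 0.0
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

# Unevenly spaced samples of a straight segment come back evenly spaced:
pts = resample([(0, 0), (0.1, 0), (4, 0), (10, 0)], 6)
assert all(abs(x - 2 * i) < 1e-9 for i, (x, _) in enumerate(pts))
```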
{ "language": "en", "url": "https://math.stackexchange.com/questions/36652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Applications of the fact that a group is never the union of two of its proper subgroups It is well-known that a group cannot be written as the union of two its proper subgroups. Has anybody come across some consequences from this fact? The small one I know is that if H is a proper subgroup of G, then G is generated by the complement G-H.
A consequence is that if a finite group $G$ has only two proper subgroups, then the group itself must be cyclic. This is seen as follows: By the stated result the group has at least one element $g$ that does not belong to either of the proper subgroups. But if there are no other proper subgroups, then the subgroup generated by $g$ cannot be a proper one, and thus must be all of $G$. This gives an(other) easy proof of the cyclicity of the group of order $pq$, where $p<q$ are primes such that $q\not\equiv 1\pmod p$. By the Sylow theorems there is only one subgroup of order $p$ and only one of order $q$, so the above result applies.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 1 }
Showing that a level set is not a submanifold Is there a criterion to show that a level set of some map is not an (embedded) submanifold? In particular, an exercise in Lee's smooth manifolds book asks to show that the sets defined by $x^3 - y^2 = 0$ and $x^2 - y^2 = 0$ are not embedded submanifolds. In general, is it possible that a level set of a map which does not has constant rank on the set still defines a embedded submanifold?
It is certainly possible for a level set of a map which does not have constant rank on the set to still be an embedded submanifold. For example, the set defined by $x^3 - y^3 = 0$ is an embedded curve (it is the same as the line $y=x$), despite the fact that $F(x,y) = x^3 - y^3$ has a critical point at $(0,0)$. The set defined by $x^2 - y^2 = 0$ is not an embedded submanifold, because it is the union of the lines $y=x$ and $y=-x$, and is therefore not locally Euclidean at the origin. To prove that no neighborhood of the origin is homeomorphic to an open interval, observe that any open interval splits into exactly two connected components when a point is removed, but any neighborhood of the origin in the set $x^2 - y^2 = 0$ has at least four components after the point $(0,0)$ is removed. The set $x^3-y^2 = 0$ is an embedded topological submanifold, but it is not a smooth submanifold, since the embedding is not an immersion. There are many ways to prove that this set is not a smooth embedded submanifold, but one possibility is to observe that any smooth embedded curve in $\mathbb{R}^2$ must locally be of the form $y = f(x)$ or $x = f(y)$, where $f$ is some differentiable function. (This follows from the local characterization of smooth embedded submanifolds as level sets of submersions, together with the Implicit Function Theorem.) The given curve does not have this form, so it cannot be a smooth embedded submanifold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
What is the math behind the game Spot It? I just purchased the game Spot It. As per this site, the structure of the game is as follows: Game has 55 round playing cards. Each card has eight randomly placed symbols. There are a total of 50 different symbols through the deck. The most fascinating feature of this game is any two cards selected will always have ONE (and only one) matching symbol to be found on both cards. Is there a formula you can use to create a derivative of this game with different numbers of symbols displayed on each card. Assuming the following variables: * *S = total number of symbols *C = total number of cards *N = number of symbols per card Can you mathematically demonstrate the minimum number of cards (C) and symbols (S) you need based on the number of symbols per card (N)?
I have the game myself. I took the time to count out the appearance frequency of each object for each card. There are 55 cards, 57 objects, 8 per card. The interesting thing to me is that each object does not appear in equal frequency with others ... the minimum is 6, max 10, and mean 7.719. I am left curious why the makers of Spot It decided to take this approach. Apparently they favor the clover leaf over the flower, maple leaf, or snow man.
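The reported mean follows directly from the card counts (a quick check of mine): 55 cards of 8 symbols each give $55 \cdot 8 = 440$ symbol appearances spread over 57 symbols.

```python
total_appearances = 55 * 8        # 440 symbol slots across the deck
mean = total_appearances / 57     # average appearances per symbol

assert abs(mean - 7.719) < 5e-4
assert 6 * 57 <= total_appearances <= 10 * 57  # consistent with min 6, max 10
```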
{ "language": "en", "url": "https://math.stackexchange.com/questions/36798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76", "answer_count": 9, "answer_id": 0 }
A property of $J$-semisimple rings I'd like a little help on how to begin this problem. Show that a PID $R$ is Jacobson-semisimple $\Leftrightarrow$ $R$ is a field or $R$ contains infinitely many nonassociate irreducible elements. Thanks.
If $R$ is a PID and has infinitely many nonassociated irreducible elements, then given any nonunit $x\in R$ you can find an irreducible element that does not divide $x$; can you find a maximal ideal that does not contain $x$? If so, you will have proven that $x$ is not in the Jacobson radical of $R$. The case where $R$ is a field is pretty easy as well. Conversely, suppose $R$ is a PID that is not a field, but contains only finitely many nonassociated primes; can you exhibit an element that will necessarily lie in every maximal ideal of $R$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/36875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
About the factors of the product of prime numbers If a number is a product of unique prime numbers, are the factors of this number the used unique prime numbers ONLY? Example: 6 = 2 x 3, 15 = 3 x 5. But I don't know for large numbers. I will be using this in my code to speed up my checking on uniqueness of data. Thanks! :D Edit: I will be considering all unique PRIME factors only. For example, I will not generate 9 because its factors are both 3 (I don't consider 1 here), And also 24 (= 2 x 2 x 2 x 3). I want to know if it is TRUE if unique PRIME numbers are multiplied, the product's PRIME factors are only those PRIME factors that we multiplied in the first place. Sorry for not clarifying it earlier.
It is not quite clear what you are asking. A prime number has two factors: itself and $1$. E.g. $3$ has the factors $3$ and $1$. The product of two distinct prime numbers has four factors: itself, the two prime numbers and $1$. E.g. $6$ has the factors $6$, $3$, $2$ and $1$. You may not be interested in the first and last of these. The product of three distinct prime numbers has eight factors: itself, itself divided by one of the three prime numbers, the three prime numbers, and $1$. E.g. $30$ has the factors $30$, $15$, $10$, $6$, $5$, $3$, $2$ and $1$. You may also be interested in the fundamental theorem of arithmetic
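For the asker's intended use case, the fundamental theorem of arithmetic guarantees the answer is yes: the prime factors of a product of distinct primes are exactly those primes. A quick check (sketch, names mine):

```python
def prime_factors(m):
    """Return the set of prime factors of m by trial division."""
    factors = set()
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

primes = {2, 3, 5, 11}
n = 2 * 3 * 5 * 11  # 330

# The prime factors of a product of distinct primes are exactly those primes.
assert prime_factors(n) == primes
```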
{ "language": "en", "url": "https://math.stackexchange.com/questions/36927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
A Curious Binomial Sum Identity without Calculus of Finite Differences Let $f$ be a polynomial of degree $m$ in $t$. The following curious identity holds for $n \geq m$, \begin{align} \binom{t}{n+1} \sum_{j = 0}^{n} (-1)^{j} \binom{n}{j} \frac{f(j)}{t - j} = (-1)^{n} \frac{f(t)}{n + 1}. \end{align} The proof follows by transforming it into the identity \begin{align} \sum_{j = 0}^{n} \sum_{k = j}^{n} (-1)^{k-j} \binom{k}{j} \binom{t}{k} f(j) = \sum_{k = 0}^{n} \binom{t}{k} (\Delta^{k} f)(0) = f(t), \end{align} where $\Delta^{k}$ is the $k^{\text{th}}$ forward difference operator. However, I'd like to prove the aforementioned identity directly, without recourse to the calculus of finite differences. Any hints are appreciated! Thanks.
This is just Lagrange interpolation for the values $0, 1, \dots, n$. This means that after cancelling the denominators on the left you can easily check that the equality holds for $t=0, \dots, n$.
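The identity can also be spot-checked exactly with rational arithmetic (my own verification sketch; the choices of $f$, $n$, and $t \notin \{0,\dots,n\}$ are arbitrary):

```python
from fractions import Fraction
from math import comb

def binom(t, k):
    """Generalized binomial coefficient C(t, k) for rational t."""
    out = Fraction(1)
    for i in range(k):
        out *= (t - i)
        out /= (i + 1)
    return out

f = lambda x: x * x + 3 * x + 1   # degree 2 <= n, as the identity requires
n = 3
t = Fraction(7)

lhs = binom(t, n + 1) * sum((-1) ** j * comb(n, j) * f(j) / (t - j)
                            for j in range(n + 1))
rhs = (-1) ** n * f(t) / (n + 1)
assert lhs == rhs
```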
{ "language": "en", "url": "https://math.stackexchange.com/questions/36990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Prove $e^{i \pi} = -1$ Possible Duplicate: How to prove Euler's formula: $\exp(i t)=\cos(t)+i\sin(t)$ ? I recently heard that $e^{i \pi} = -1$. WolframAlpha confirmed this for me; however, I don't see how this works.
This identity follows from Euler's Theorem, \begin{align} e^{i \theta} = \cos \theta + i \sin \theta, \end{align} which has many proofs. The one that I like the most is the following (sketched). Define $f(\theta) = e^{-i \theta}(\cos \theta + i \sin \theta)$. Use the product rule to show that $f^{\prime}(\theta)= 0$, so $f(\theta)$ is constant in $\theta$. Evaluate $f(0)$ to prove that $f(\theta) = f(0)$ everywhere. Take $\theta = \pi$ for your claim.
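For completeness, the derivative step can be carried out with the product rule (a sketch):

```latex
f'(\theta) = -i e^{-i\theta}\left(\cos\theta + i\sin\theta\right)
           + e^{-i\theta}\left(-\sin\theta + i\cos\theta\right)
           = e^{-i\theta}\left[\left(\sin\theta - i\cos\theta\right)
           + \left(-\sin\theta + i\cos\theta\right)\right] = 0.
```

Since $f(0)=1$, it follows that $e^{-i\theta}(\cos\theta + i\sin\theta)=1$ for all $\theta$; multiplying by $e^{i\theta}$ gives Euler's formula, and $\theta = \pi$ yields $e^{i\pi} = -1$.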
{ "language": "en", "url": "https://math.stackexchange.com/questions/37052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How do we prove the existence of uncountably many transcendental numbers? I know how to prove the countability of sets using equivalence relations to other sets, but I'm not sure how to go about proving the uncountability of the transcendental numbers (i.e., numbers that are not algebraic).
If a number $t$ is algebraic, it is the root of some polynomial with integer coefficients. There are only countably many such polynomials (each having a finite number of roots), so there are only countably many such $t$. Since there are uncountably many real (or complex) numbers, and only countably many of them are algebraic, uncountably many of them must be transcendental.
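A concrete way to see the countability (my own sketch): enumerate integer polynomials by "height" (degree plus the sum of the absolute values of the coefficients); each height class is finite, so the algebraic numbers form a countable union of finite root sets.

```python
from itertools import product

def polys_of_height(h):
    """All integer coefficient tuples (a_0, ..., a_d), a_d != 0, with d + sum|a_i| == h."""
    out = []
    for d in range(h):
        for coeffs in product(range(-h, h + 1), repeat=d + 1):
            if coeffs[-1] != 0 and d + sum(abs(c) for c in coeffs) == h:
                out.append(coeffs)
    return out

# Every nonconstant integer polynomial appears at exactly one finite height class.
assert len(polys_of_height(1)) == 2       # the polynomials 1 and -1 (as constants of degree 0)
assert (0, 1) in polys_of_height(2)       # the polynomial x: degree 1 + |0| + |1| = 2
```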
{ "language": "en", "url": "https://math.stackexchange.com/questions/37121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Combination of smartphones' pattern password Have you ever seen this interface? Nowadays, it is used for locking smartphones. If you haven't, here is a short video on it. The rules for creating a pattern is as follows. * *We must use four nodes or more to make a pattern at least. *Once a node is visited, then the node can't be visited anymore. *You can start at any node. *A pattern has to be connected. *Cycle is not allowed. How many distinct patterns are possible?
I believe the answer can be found in OEIS. You have to add the paths of length $4$ through $9$ on a $3\times3$ grid, so $80+104+128+112+112+40=576$. I have validated the $80$ four-node paths. If we number the grid $$\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9 \end{array}$$ the paths starting $12$ are $1236, 1254, 1258, 1256$, and there are $8$ choices of corner/direction, so $32$ paths start at a corner. Starting at $2$, there are $2145,2147,2369,2365,2541,2547,2587,2589,2563,2569$, for $10$ paths, and there are $4$ edge cells, so $40$ start at an edge. Starting at $5$, there are $8$ paths: four choices of first direction and two choices of which way to turn. Added per user3123's comment that cycles are allowed: unfortunately in OEIS there are a huge number of series titled "Number of n-step walks on square lattice" and "Number of walks on square lattice", and there is no specific definition to tell one from another. For $4$ steps, it adds $32$ more paths: four squares to go around, four places to start in each square, and two directions to cycle. So the $4$-step count goes up to $112$. For longer paths, the increase will be larger. But there still will not be too many.
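The hand enumeration can be reproduced by brute force. A sketch of mine that counts directed self-avoiding walks with orthogonal moves only, matching the lattice model used in this answer (which is stricter than the real lock screen, where diagonal moves are also allowed):

```python
def count_walks(num_cells):
    """Directed self-avoiding walks visiting exactly num_cells cells of a 3x3 grid."""
    def neighbors(c):
        r, col = divmod(c, 3)
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, col + dc
            if 0 <= nr < 3 and 0 <= nc < 3:
                yield nr * 3 + nc

    def extend(path, visited):
        if len(path) == num_cells:
            return 1
        return sum(extend(path + [n], visited | {n})
                   for n in neighbors(path[-1]) if n not in visited)

    return sum(extend([start], {start}) for start in range(9))

assert count_walks(4) == 80   # matches the hand enumeration above
assert count_walks(9) == 40   # directed Hamiltonian paths on the 3x3 grid
```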
{ "language": "en", "url": "https://math.stackexchange.com/questions/37167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 6, "answer_id": 1 }
Are there any non-trivial rational integers in the $p$-adic closure of $\{1,q,q^2,q^3,...\}$? If $p$ is prime and not a divisor of $q$, are there any non-trivial rational integers in the $p$-adic closure of the set of powers of $q$? Edit: $q$ is also a (rational) integer, not a $p$-adic.
If $p>2$ then there is an integer $q$ not divisible by $p$, with the properties: $q^k\not\equiv 1$ mod $p$ for $1\leq k\leq p-2$ and $q^{p-1}\not\equiv 1$ mod $p^2$. Under these conditions the $p$-adic closure of $\{1,q,q^2,\dots\}$ is the whole $\mathbb{Z}_p^\times$ - in particular it contains all the rational integers not divisible by $p$. For $p=2$ and $q\equiv 5$ mod $8$ then the closure is $1+4\mathbb{Z}_2$ - i.e. it contains all the integers which are $1$ mod $4$.
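The density claim can be illustrated for a small case (my own sketch): take $p=5$, $q=2$; then $2$ is a primitive root mod $5$ and $2^4=16\not\equiv 1 \pmod{25}$, so powers of $2$ hit every unit modulo $5^k$.

```python
p, q, k = 5, 2, 3
mod = p ** k                                      # 125

units = {m for m in range(mod) if m % p != 0}     # the 100 units mod 5^3
powers = {pow(q, i, mod) for i in range(len(units))}

# 2 generates the full (cyclic) unit group mod 5^3.
assert powers == units
```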
{ "language": "en", "url": "https://math.stackexchange.com/questions/37215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If for every $v\in V$ $\langle v,v\rangle_{1} = \langle v,v \rangle_{2}$ then $\langle\cdot,\cdot \rangle_{1} = \langle\cdot,\cdot \rangle_{2}$ Let $V$ be a vector space with a finite Dimension above $\mathbb{C}$ or $\mathbb{R}$. How does one prove that if $\langle\cdot,\cdot\rangle_{1}$ and $\langle \cdot, \cdot \rangle_{2}$ are two Inner products and for every $v\in V$ $\langle v,v\rangle_{1}$ = $\langle v,v\rangle_{2}$ so $\langle\cdot,\cdot \rangle_{1} = \langle\cdot,\cdot \rangle_{2}$ The idea is clear to me, I just can't understand how to formalize it. Thank you.
You can use the polarization identity. $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$ induce the norms $\| \cdot \|_1$ and $\| \cdot \|_2$ respectively, i.e.: $$\begin{align} \| v \|_1 = \sqrt{\langle v, v \rangle_1} \\ \| v \|_2 = \sqrt{\langle v, v \rangle_2} \end{align}$$ From this it is obvious that $\|v\|_1 = \|v\|_2$ for all $v \in V$, so we can write $\| \cdot \|_1 = \| \cdot \|_2 = \| \cdot \|$. By the polarization identity we get (for complex spaces): $$\begin{align} \langle x, y \rangle_1 &=\frac{1}{4} \left(\|x + y \|^2 - \|x-y\|^2 +i\|x+iy\|^2 - i\|x-iy\|^2\right) \ \forall\ x,y \in V \ \\ \langle x, y \rangle_2 &=\frac{1}{4} \left(\|x + y \|^2 - \|x-y\|^2 +i\|x+iy\|^2 - i\|x-iy\|^2\right) \ \forall\ x,y \in V \end{align}$$ Since these expressions are equal, the inner products are equal.
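A numeric sanity check of that identity, for the standard inner product on $\mathbb{C}^n$ that is linear in the first slot (the convention matching the formula as written; sample vectors are arbitrary):

```python
# Verify the complex polarization identity for <x, y> = sum_i x_i * conj(y_i).

def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm_sq(x):
    return inner(x, x).real

def polarization(x, y):
    # 1/4 ( ||x+y||^2 - ||x-y||^2 + i||x+iy||^2 - i||x-iy||^2 )
    terms = []
    for c in (1, -1, 1j, -1j):
        terms.append(norm_sq([a + c * b for a, b in zip(x, y)]))
    n_plus, n_minus, n_i, n_mi = terms
    return (n_plus - n_minus + 1j * n_i - 1j * n_mi) / 4

x = [1 + 2j, 3 - 1j, 0.5j]
y = [2 - 1j, 1j, -1 + 1j]
assert abs(inner(x, y) - polarization(x, y)) < 1e-12
print("polarization identity recovers the inner product from the norm")
```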
{ "language": "en", "url": "https://math.stackexchange.com/questions/37252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Is the product of two continuous functions still continuous? Let $f:\mathbb{R}\rightarrow \mathbb{R}$ and $g:\mathbb{R}\rightarrow \mathbb{R}$ be continuous. Is $h:\mathbb{R}\rightarrow \mathbb{R}$, where $h(x): = f(x) \times g(x)$, still continuous? I guess it is, but I find it difficult to manipulate the absolute difference: $$|h(x_2)-h(x_1)|=|f(x_2)g(x_2)-f(x_1)g(x_1)| \dots $$ Thanks in advance!
Hint: $$\left| f(x+h)g(x+h) - f(x)g(x) \right| = \left| f(x+h)\left( g(x+h) - g(x) \right) + \left( f(x+h) - f(x) \right) g(x) \right|$$ Can you proceed from here?
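In case the last step is wanted: assuming continuity of $f$ and $g$ at $x$, the hint's right-hand side can be bounded as follows (a standard estimate, spelled out).

```latex
% Completing the hint, assuming f and g are continuous at x:
\begin{align*}
|f(x+h)g(x+h) - f(x)g(x)|
  &\le |f(x+h)|\,|g(x+h) - g(x)| + |f(x+h) - f(x)|\,|g(x)| \\
  &\le \bigl(|f(x)| + 1\bigr)\,|g(x+h) - g(x)|
     + |g(x)|\,|f(x+h) - f(x)|,
\end{align*}
% using |f(x+h)| <= |f(x)| + 1 for |h| small (continuity of f).
% Both remaining differences tend to 0 as h -> 0, so h = fg is
% continuous at x.
```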
{ "language": "en", "url": "https://math.stackexchange.com/questions/37312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Reference request: introduction to commutative algebra My goal is to pick up some commutative algebra, ultimately in order to be able to understand algebraic geometry texts like Hartshorne's. Three popular texts are Atiyah-Macdonald, Matsumura (Commutative Ring Theory), and Eisenbud. There are also other books by Reid, Kemper, Sharp, etc. Can someone outline the differences between these texts, their relative strengths, and their intended audiences? I am not listing my own background and strengths, on purpose, (a) so that the answers may be helpful to others, and (b) I might be wrong about myself, and I want to hear more general opinions than what might suit my narrow profile (e.g. If I said "I only like short books", then I might preclude useful answers about Eisenbud, etc.).
It's a bit late. But since no one has mentioned it, I would mention Gathmann's lecture notes on Commutative Algebra (https://www.mathematik.uni-kl.de/~gathmann/class/commalg-2013/commalg-2013.pdf). The exposition is excellent. The content is comparable to Atiyah-McDonald, but contains much more explanation. It emphasizes the geometric intuitions throughout the lectures. For example, the chapters on integral ring extension and Noetherian normalization have one of the best expositions of the geometric pictures behind these important algebraic concepts that I have read among several introductory books on commutative algebra. Chapters usually begin with a very good motivation and give many examples.
{ "language": "en", "url": "https://math.stackexchange.com/questions/37364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75", "answer_count": 6, "answer_id": 2 }
Lottery ball problem - How to go about solving? A woman works at a lottery ball factory. She's instructed to create lottery balls, starting from number 1, using the following steps: * *Open lottery ball package and remove red rubber ball. *Using two strips of digit stickers (0 through 9), create the current number by pasting the digits on the ball. *Digits not used in this way are put in a bowl where she may fish for other digits if she's missing some later. *Proceed to the next ball, incrementing the number by one. The lottery ball problem is, at what number will she arrive at before she's out of digits (granted, it's a large number, so assume these are basketball-sized rubber balls)? My question is not so much the solution as it is how to go about solving for this number? It seems evident that the first digit she'll run out of will be 1, since that's the number she starts with, however beyond that I wouldn't know how to go about determining that number. Any clues that could push me in the right direction would be greatly appreciated.
Yup, the first digit you run out of will be 1. As to how to solve it - try writing a formula for the number of $1$s in the decimal representations of the first $n$ numbers, and try and work out when it overtakes $2n$.
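To make the hint concrete: with two sticker strips per ball, the supply of each digit grows by $2$ per ball, so she stalls near the first $n$ where the number of $1$s in $1..n$ exceeds $2n$. That $n$ is astronomically large (roughly where numbers have $20$ digits), so brute force won't reach it; a closed-form digit count will. The bracketing values at the end are my own estimate of where the crossing lies:

```python
# ones(n) = how many 1s appear in the decimal representations of 1..n,
# computed digit position by digit position (no enumeration needed).

def count_ones(n):
    total, factor = 0, 1
    while factor <= n:
        higher = n // (factor * 10)
        cur = (n // factor) % 10
        lower = n % factor
        if cur > 1:
            total += (higher + 1) * factor
        elif cur == 1:
            total += higher * factor + lower + 1
        else:
            total += higher * factor
        factor *= 10
    return total

# Cross-check the formula against brute force on small n.
for n in (1, 9, 13, 99, 100, 555, 2000):
    assert count_ones(n) == sum(str(k).count("1") for k in range(1, n + 1))

# The supply of 1s is still fine at 10^19 but exhausted by 10^21,
# so the crossing 2n < ones(n) happens somewhere in between.
assert count_ones(10**19) < 2 * 10**19
assert count_ones(10**21) > 2 * 10**21
```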
{ "language": "en", "url": "https://math.stackexchange.com/questions/37413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exhibit an integral domain $R$ and a non-zero non-unit element of $R$ that is not a product of irreducibles. Exhibit an integral domain $R$ and a non-zero non-unit element of $R$ that is not a product of irreducibles. My thoughts so far: I don't really have a clue. Could anyone direct me on how to think about this? I'm struggling to get my head round irreducibles. Thanks.
Such an element can be factored, each factor can be factored, each factor can be factored, etc. Changing the problem into an additive one, you would want to find an element that can be written as a sum of two strictly smaller numbers, each of which can be written as a sum of two strictly smaller numbers, each of which... etc. Perhaps thinking along the lines of: $$1 = \frac{1}{2}+\frac{1}{2} = \left(\frac{1}{4}+\frac{1}{4}\right) + \left(\frac{1}{4}+\frac{1}{4}\right) = \cdots = \left(\frac{1}{2^n}+\frac{1}{2^n}\right) + \cdots + \left(\frac{1}{2^n}+\frac{1}{2^n}\right) = \cdots$$ Hmmm... Is there any way we could turn that into some kind of multiplicative, instead of additive, set of equalities?
{ "language": "en", "url": "https://math.stackexchange.com/questions/37485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
Counting trails in a triangular grid A triangular grid has $N$ vertices, labeled from 1 to $N$. Two vertices $i$ and $j$ are adjacent if and only if $|i-j|=1$ or $|i-j|=2$. See the figure below for the case $N = 7$. How many trails are there from $1$ to $N$ in this graph? A trail is allowed to visit a vertex more than once, but it cannot travel along the same edge twice. I wrote a program to count the trails, and I obtained the following results for $1 \le N \le 17$. $$1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044, 34989, 101695, 295642, 859566, 2499277$$ This sequence is not in the OEIS, but Superseeker reports that the sequence satisfies the fourth-order linear recurrence $$2 a(N) + 3 a(N + 1) - a(N + 2) - 3 a(N + 3) + a(N + 4) = 0.$$ Question: Can anyone prove that this equation holds for all $N$?
Regard the same graph, but add an edge from $n-1$ to $n$ with weight $x$ (that is, a path passing through this edge contributes $x$ instead of 1). The enumeration is clearly a linear polynomial in $x$, call it $a(n,x)=c_nx+d_n$ (and we are interested in $a(n,0)=d_n$). By regarding the three possible edges for the last step, we find $a(1,x)=1$, $a(2,x)=1+x$ and $$a(n,x)=a(n-2,1+2x)+a(n-1,x)+x\,a(n-1,1)$$ (If the last step passes through the ordinary edge from $n-1$ to $n$, you want a trail from 1 to $n-1$, but there is the ordinary edge from $n-2$ to $n-1$ and a parallel connection via $n$ that passes through the $x$ edge and is thus equivalent to a single edge of weight $x$, so we get $a(n-1,x)$. If the last step passes through the $x$-weighted edge this gives a factor $x$, and you want a trail from $1$ to $n-1$ and now the parallel connection has weight 1 which gives $x\,a(n-1,1)$. If the last step passes through the edge $n-2$ to $n$, then we search a trail to $n-2$ and now the parallel connection has the ordinary possibility $n-3$ to $n-2$ and two $x$-weighted possibilities $n-3$ to $n-1$ to $n$ to $n-1$ to $n-2$, in total this gives weight $2x+1$ and thus $a(n-2,2x+1)$.) Now, plug in the linear polynomial and compare coefficients to get two linear recurrences for $c_n$ and $d_n$. \begin{align} c_n&=2c_{n-2}+2c_{n-1}+d_{n-1}\\ d_n&=c_{n-2}+d_{n-2}+d_{n-1} \end{align} Express $c_n$ with the second one, eliminate it from the first and you find the recurrence for $d_n$. (Note that $c_n$ and $a(n,x)$ are solutions of the same recurrence.)
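The data feeding the recurrence is easy to regenerate by brute force (a sketch of the kind of program the question mentions: it counts edge-distinct walks from $1$ to $N$, counting every arrival at $N$ as one trail):

```python
# Count trails (walks with no repeated edge) from vertex 1 to vertex N
# in the graph where i ~ j iff |i - j| is 1 or 2.

def count_trails(n):
    edges = {frozenset((i, j))
             for i in range(1, n + 1) for j in range(1, n + 1)
             if abs(i - j) in (1, 2)}

    def dfs(v, used):
        total = 1 if v == n else 0      # every arrival at n is one trail
        for e in edges:
            if v in e and e not in used:
                w = next(iter(e - {v}))
                total += dfs(w, used | {e})
        return total

    return dfs(1, frozenset())

a = [count_trails(n) for n in range(1, 10)]
print(a)  # [1, 1, 2, 4, 9, 23, 62, 174, 497]

# Superseeker's conjectured recurrence, checked on the computed terms:
for n in range(len(a) - 4):
    assert 2*a[n] + 3*a[n+1] - a[n+2] - 3*a[n+3] + a[n+4] == 0
```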
{ "language": "en", "url": "https://math.stackexchange.com/questions/37553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 2, "answer_id": 0 }
Expected Value for summing over distinct random integers? Let $L=\{a_1,a_2,\ldots,a_k\}$ be a random (uniformly chosen) subset of size $k$ of the numbers $\{1,2,\ldots,n\}$. I want to find $E(X)$ where $X$ is the random variable that sums all numbers. We might want that $k < n$ too. My main problem is that I cannot get the function $q(a,k,n)$ that gives me the number of ways to write the number $a$ as the sum of exactly $k$ distinct addends less than or equal to $n$. This seems related but it doesn't limit the size of the numbers.
The expectation of each of the terms in the sum is $(n+1)/2$ so the expectation is $k(n+1)/2$. If you want to calculate the function $q(a,k,n)$ then you can use my Java applet here, by choosing "Partitions with distinct terms of:" $a$, "Exact number of terms:"$k$, "Each term no more than:" $n$, and then click on the "Calculate" button. If instead you start with "Compositions with distinct terms of:", then you will get a figure $k!$ times as big.
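The linearity argument is easy to confirm by exhaustively enumerating all $\binom{n}{k}$ subsets for small parameters (my own sketch; exact rational arithmetic avoids rounding):

```python
from itertools import combinations
from fractions import Fraction

def expected_sum(n, k):
    """Exact E(X) over all k-subsets of {1, ..., n}."""
    subsets = list(combinations(range(1, n + 1), k))
    return Fraction(sum(sum(s) for s in subsets), len(subsets))

for n in range(2, 9):
    for k in range(1, n + 1):
        assert expected_sum(n, k) == Fraction(k * (n + 1), 2)
print("E(X) = k(n+1)/2 verified exhaustively for n <= 8")
```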
{ "language": "en", "url": "https://math.stackexchange.com/questions/37614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Variance for summing over distinct random integers Let $L=\{a_1,a_2,\ldots,a_k\}$ be a random (uniformly chosen) subset of size $k$ of the numbers $\{1,2,\ldots,n\}$. I want to find $\operatorname{Var}(X)$ where $X$ is the random variable that sums all numbers with $k < n$. Earlier today I asked about the expected value, which I noticed was easier than I thought. But now I have been sitting on the variance for several hours and cannot make any progress. I see that $E(X_i)=\frac{n+1}{2}$ and $E(X)=k \cdot \frac{n+1}{2}$, I tried to use $\operatorname{Var}\left(\sum_{i=1}^na_iX_i\right)=\sum_{i=1}^na_i^2\operatorname{Var}(X_i)+2\sum_{i=1}^{n-1}\sum_{j=i+1}^na_ia_j\operatorname{Cov}(X_i,X_j)$ but especially the second sum is hard to evaluate by hand ( every time I do this I get a different result :-) ) and I have no idea how to simplify the Covariance term. Furthermore I know that $\operatorname{Var}(X)=\operatorname{E}\left(\left(X-\operatorname{E}(X)\right)^2\right)=\operatorname{E}\left(X^2\right)-\left(\operatorname{E}(X)\right)^2$, so the main problem is getting $\operatorname{E}\left(X^2\right)$. Maybe there is also an easier way than to use those formulas. I think I got the correct result via trial and error: $\operatorname{Var}(X)=(1/12) k (n - k) (n + 1)$ but not the way to get there.
So I actually assigned this problem to a class a couple weeks ago. You can do what you did, of course. But if you happen to know the "finite population correction" from statistics, it's useful here. This says that if you sample $k$ times from a population of size $n$, without replacement, the variance of the sum of your sample will be $(n-k)/(n-1)$ times the variance that you'd get summing with replacement. The variance if you sum with replacement is, of course, $k$ times the variance of a single element. So you get $Var(X) = k(n-k)/(n-1) \times Var(U)$, where $U$ is a uniform random variable on $\{1, 2, \ldots, n\}$. It's well-known that $Var(U) = (n^2-1)/12$ (and you can check this by doing the sums) which gives the answer. Of course this formula is derived by summing covariances, so in a sense I've just swept that under the rug...
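The finite-population-correction answer (and the asker's trial-and-error formula) can be verified exhaustively for small $n$, again with exact rationals:

```python
from itertools import combinations
from fractions import Fraction

def variance_of_sum(n, k):
    """Exact Var(X) over all k-subsets of {1, ..., n}."""
    sums = [sum(s) for s in combinations(range(1, n + 1), k)]
    m = Fraction(sum(sums), len(sums))
    return Fraction(sum((s - m) ** 2 for s in sums), len(sums))

for n in range(2, 9):
    for k in range(1, n + 1):
        assert variance_of_sum(n, k) == Fraction(k * (n - k) * (n + 1), 12)
print("Var(X) = k(n-k)(n+1)/12 verified exhaustively for n <= 8")
```

Note the formula correctly gives $0$ at $k = n$, where the sum is deterministic.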
{ "language": "en", "url": "https://math.stackexchange.com/questions/37683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to evaluate $\lim\limits_{h \to 0} \frac {3^h-1} {h}=\ln3$? How is $$\lim_{h \to 0} \frac {3^h-1} {h}=\ln3$$ evaluated?
There are at least two ways of doing this: Either you can use de l'Hôpital's rule, and as I pointed out in the comments the third example on Wikipedia gives the details. I think a better way of doing this (and Jonas seems to agree, as I saw after posting) is to write $f(h) = 3^{h} = e^{\log{3}\cdot h}$ and write the limit as $$\lim_{h \to 0} \frac{f(h) - f(0)}{h}$$ and recall the definition of a derivative. What comes out is $f'(0) = \log{3}$.
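A quick numeric look at the second (derivative) interpretation, with step sizes of my choosing:

```python
import math

# Difference quotient of f(h) = 3^h at h = 0; it should approach
# f'(0) = log 3 as h shrinks.
for h in (1e-2, 1e-4, 1e-6):
    print(h, (3**h - 1) / h)

assert abs((3**1e-6 - 1) / 1e-6 - math.log(3)) < 1e-5
```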
{ "language": "en", "url": "https://math.stackexchange.com/questions/37796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 3 }
Computing the integral of $\log(\sin x)$ How to compute the following integral? $$\int\log(\sin x)\,dx$$ Motivation: Since $\log(\sin x)'=\cot x$, the antiderivative $\int\log(\sin x)\,dx$ has the nice property $F''(x)=\cot x$. Can we find $F$ explicitly? Failing that, can we find the definite integral over one of intervals where $\log (\sin x)$ is defined?
Series expansion can be used for this integral too. We use the following identity: $$\log(\sin x)=-\log 2-\sum_{k\geq 1}\frac{\cos(2kx)}{k} \quad (0<x<\pi)$$ This identity gives $$\int_{a}^{b} \log(\sin x)dx=-(b-a)\log 2-\sum_{k\ge 1}\frac{\sin(2kb)-\sin(2ka)}{2k^2}$$ ($a, b<\pi$) For example, $$\int_{0}^{\pi/4}\log(\sin x)dx=-\frac{\pi}{4}\log 2-\sum_{k\ge 1}\frac{\sin(\pi k/2)}{2k^2}=-\frac{\pi}{4}\log 2-\frac{1}{2}K$$ $$\int_{0}^{\pi/2} \log(\sin x)dx=-\frac{\pi}{2}\log 2$$ $$\int_{0}^{\pi}\log(\sin x)dx=-\pi \log 2$$ ($K$: Catalan's constant, $\displaystyle K=\sum_{k\ge 1}\frac{(-1)^{k-1}}{(2k-1)^2}$)
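Those closed forms can be double-checked numerically. A crude midpoint rule suffices because the logarithmic singularity at $0$ is integrable and the rule never samples the endpoints (the point count and tolerance below are my guesses, not tuned values):

```python
import math

def midpoint_integral(f, a, b, n=200_000):
    """Composite midpoint rule; never evaluates f at a or b."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

val = midpoint_integral(lambda x: math.log(math.sin(x)), 0.0, math.pi / 2)
exact = -(math.pi / 2) * math.log(2)
print(val, exact)
assert abs(val - exact) < 1e-4
```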
{ "language": "en", "url": "https://math.stackexchange.com/questions/37829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 11, "answer_id": 7 }
Is there any mathematical operation on Integers that yields the same result as doing bitwise "AND"? I'll provide a little bit of a background so you guys can better understand my question: Let's say I have two positive, non-zero Binary Numbers.(Which can, obviously, be mapped to integers) I will then proceed to do an "AND" operation for each bit, (I think that's called a bitwise operation) which will yield yet another binary number. Ok. Now this new Binary number can, in turn, also be mapped to an Integer. My question is: Is there any Integer operation I can do on the mapped Integer values of the two original binary numbers that would yield the same result? Thanks in advance. EDIT : I forgot to mention that what I'm looking for is a mathematical expression using things like +,-,/,pow(base,exp) and the like. I'm not 100% sure (I'm a computer scientist) but I think what I'm looking for is an isomorphism. LAST EDIT: I think this will clear any doubts as to what sort of mathematical expression I'm looking for. I wanted something like: The bitwise AND of two Integers A and B is always equal to (AB)X(B)X(3). The general feeling I got is that it's not possible or extremely difficult to prove(either its validity or non-validity)
One way to do a bitwise AND would be to decompose each integer into a sequence of values in {0,1}, perform a Boolean AND on each pair of corresponding bits, and then recompose the result into an integer. A function for getting the $i$-th bit (zero-indexed, starting at the least significant bit) of an integer $n$ could be defined as $f(n, i) = \lfloor n/2^i\rfloor \bmod 2$; the bitwise AND of two integers $m$ and $n$ would then be $$\sum_{i=0}^\infty (f(n,i) \mbox{ AND } f(m,i)) 2^i$$ Expressing the simpler Boolean AND in terms of common mathematical functions is left as an exercise to the reader.
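The reader's exercise aside, here is the whole recipe in code, using only arithmetic operations (floor division, remainder, powers) and realizing the Boolean AND of single bits as multiplication:

```python
# f(n, i) = floor(n / 2^i) mod 2 extracts bit i; the AND of two
# 0/1 values is just their product; recompose with powers of 2.

def get_bit(n, i):
    return (n // 2**i) % 2

def and_via_arithmetic(m, n):
    total, i = 0, 0
    while 2**i <= m or 2**i <= n:          # finite sum in practice
        total += get_bit(m, i) * get_bit(n, i) * 2**i
        i += 1
    return total

for m in range(64):
    for n in range(64):
        assert and_via_arithmetic(m, n) == m & n
print("matches the built-in & on 0..63")
```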
{ "language": "en", "url": "https://math.stackexchange.com/questions/37877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 1 }
generators of the symplectic group In Masoud Kamgarpour's paper "Weil Representations" he uses a set of generators for the symplectic group, referring to a book by R. Steinberg which I do not have access to. If it matters at all, I am working in characteristic zero. After choosing a symplectic basis, the generators can be written \begin{equation} \left( \begin{array}{cc} A & 0 \newline 0 & (A^t)^{-1} \end{array} \right), \ \left( \begin{array}{cc} I & B \newline 0 & I \end{array} \right), \ \text{and} \ \left( \begin{array}{cc} 0 & I \newline -I & 0 \end{array} \right), \end{equation} where $A$ ranges through invertible matrices and $B$ ranges through symmetric matrices. Does anyone know of a reference or an explanation for this, especially a coordinate-free conceptual and/or geometric one?
I don't know if this precisely answers your question, but a study of generators by symplectic transvections for fields of characteristic $\ne 2$ was carried out by methods using graphs in R. Brown and S.P. Humphries, ``Orbits under symplectic transvections I'', Proc. London Math. Soc. (3) 52 (1986) 517-531. The main result is: for a symplectic space $V$ with symplectic form $\cdot$ and a subset $S$ of $V$, define a graph $G(S)$ with vertex set $S$ and an edge between $a$ and $b$ if and only if $a\cdot b \ne 0$. Then the transvections corresponding to the elements of $S$ generate the symplectic group of $(V, \cdot)$ if and only if $S$ spans $V$ and $G(S)$ is connected. (The immediately following sequel did the more complicated case of characteristic $2$.)
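Not a reference either, but the generator claim in the question is easy to sanity-check computationally: each generator $M$ should preserve the standard form, $M^t J M = J$. A sketch for $n = 2$, with one sample invertible $A$ and one symmetric $B$ of my choosing (integer matrices, so equality is exact):

```python
# Check M^t J M = J for the three generator types of Sp(4).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def block(A, B, C, D):
    """Assemble a 4x4 matrix from four 2x2 blocks [[A, B], [C, D]]."""
    return [a + b for a, b in zip(A, B)] + [c + d for c, d in zip(C, D)]

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
J = block(Z2, I2, [[-1, 0], [0, -1]], Z2)   # the standard symplectic form

A = [[1, 1], [0, 1]]
At_inv = [[1, 0], [-1, 1]]                  # (A^t)^{-1} for this A
B = [[1, 2], [2, 3]]                        # symmetric

gens = [
    block(A, Z2, Z2, At_inv),   # diag(A, (A^t)^{-1})
    block(I2, B, Z2, I2),       # upper unitriangular with symmetric B
    J,                          # the "Weyl" generator itself
]
for M in gens:
    assert matmul(matmul(transpose(M), J), M) == J
print("all three generators preserve the symplectic form")
```

For the middle type, symmetry of $B$ is exactly what makes the lower-right block of $M^t J M$ vanish ($B^t - B = 0$), which matches why $B$ ranges over symmetric matrices.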
{ "language": "en", "url": "https://math.stackexchange.com/questions/37947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Can I rebuild an NxN matrix if I know its covariance matrix? Can I rebuild an NxN matrix if I know its covariance matrix? If so, how would I go about it? Is there a Matlab function to do so?
If $C$ is a covariance matrix, then it is the product of some matrix $M$ with its transpose $M^t$ : $ C = M^t * M $ . Now there are matrices whose product with their own transpose equals the identity matrix: for example, any rotation matrix. Say $ T^t * T = I$ where $I$ is the identity and $T$ is some rotation matrix. Then $ C= M^t * M $ but also $ C = M^t * I * M = M^t * T^t * T * M = (T*M)^t * (T*M) = A^t * A$ where there are infinitely many $A$-matrices, all rotations of each other. Even more, $T^t$ can have as many columns as we like, as long as there are at least as many columns as rows. So the space spanned by the columns of $T^t$ is of arbitrary dimension. In short: the decomposition of $C$ is non-unique; there are infinitely many solutions.
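A small numeric illustration of this non-uniqueness (the particular $M$ and rotation angle are arbitrary choices of mine):

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

M = [[1.0, 2.0], [3.0, 4.0]]
c, s = math.cos(0.7), math.sin(0.7)
T = [[c, -s], [s, c]]                     # a rotation, so T^t T = I

C1 = matmul(transpose(M), M)              # M^t M
TM = matmul(T, M)
C2 = matmul(transpose(TM), TM)            # (TM)^t (TM)

for i in range(2):
    for j in range(2):
        assert abs(C1[i][j] - C2[i][j]) < 1e-9

print("M and T*M produce the same product M^t M")
```

So any algorithm "rebuilding" $M$ from $C$ (e.g. a Cholesky factorization) can only return one representative of this infinite family.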
{ "language": "en", "url": "https://math.stackexchange.com/questions/38003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Categorical description of algebraic structures There is a well-known description of a group as "a category with one object in which all morphisms are invertible." As I understand it, the Yoneda Lemma applied to such a category is simply a statement of Cayley's Theorem that every group G is isomorphic to a subset of the symmetric group on G (see the aside at the bottom of this post... I'm still a little confused on this). Assuming that I will make this clear in my own mind in the future, are there similar categorical descriptions of other algebraic object, eg rings, fields, modules, vector spaces? If so, what does the Yoneda Lemma tell us about the representability (or otherwise) of those objects? In particular, are there `nice' characterisations of other algebraic objects which correspond to the characterisation of a group arising from Cayley's Theorem as "subgroups of Sym(X) for some X"? Aside to (attempt to) work through the details of this: If $C$ is a category with one object $G$, then $h^G=\mathrm{Hom}(G,-)$ corresponds to the regular action of $G$ on itself (it takes $G$ to itself and takes the group element $f$ to the homomorphism $h_f(g)=f\circ g$). Any functor $F:C\to\mathbf{Set}$ with $F(G)=X$ gives a concrete model for the group, and the fact that natural transformations from $h^G$ to $F$ are 1-1 with elements of $X$ tells us that $G$ is isomorphic to a subgroup of $\mathrm{Sym}(X)$... somehow?
Let $C$ be a category, $C'$ the opposite category, and $S$ the category of sets. Recall that the natural embedding $e$ of $C$ in the category $S^{C'}$ of functors from $C'$ to $S$ is given by the following formulas. $\bullet$ If $c$ is an object of $C$, then $e(c)$ is the functor $C(\bullet,c)$ which attaches to each object $d$ of $C$ the set $C(d,c)$ of $C$-morphisms from $d$ to $c$. $\bullet$ If $x:c_1\to c_2$ is in $C(c_1,c_2)$, then $e(x)$ is the map from $C(c_2,d)$ to $C(c_1,d)$ defined by $$ e(x)(y)=yx. $$ In particular, if $C$ has exactly one object $c$, then $e$ is the Cayley isomorphism of the monoid $M:=C(c,c)$ onto the monoid opposite to the monoid of endomorphisms of $M$. One can also view $e$ as an anti-isomorphism of $M$ onto its monoid of endomorphisms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 2, "answer_id": 1 }
How to calculate hyperbola from data points? I have 4 data points, from which I want to calculate a hyperbola. It seems that the Excel trendline feature can't do it for me, so how do I find the relationship? The points are: (x,y) (3, 0.008) (6, 0.006) (10, 0.003) (13, 0.002) Thanks!
A hyperbola takes the form $y = k \frac{1}{x}$. This may be difficult to deal with. So instead, let's consider the reciprocals of our x values as J.M. suggested. For example, instead of looking at $(2.5, 0.007713)$, we consider $(\frac{1}{2.5}, 0.007713)$. Then since we have flipped all of our x values, we are looking to fit something of the form $y = k \dfrac{1}{ \frac{1}{x} } = k x$. This can be accomplished by doing any standard linear regression technique. This is just an extension of J.M.'s comment.
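Carrying out the suggested regression on the question's data: a least-squares fit through the origin in the variable $u = 1/x$ gives $k = \sum u_i y_i / \sum u_i^2$ (this sketch yields roughly $k \approx 0.027$; treat that as illustrative, not authoritative):

```python
# Fit y = k / x by regressing y on u = 1/x through the origin:
# k minimises sum (y_i - k*u_i)^2, so k = sum(u*y) / sum(u*u).

data = [(3, 0.008), (6, 0.006), (10, 0.003), (13, 0.002)]

u = [1 / x for x, _ in data]
y = [yi for _, yi in data]

k = sum(ui * yi for ui, yi in zip(u, y)) / sum(ui * ui for ui in u)
print(f"fitted k = {k:.5f}")      # about 0.027, i.e. y ~ 0.027 / x

for x, yi in data:
    print(x, yi, k / x)           # compare data to fitted values
```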
{ "language": "en", "url": "https://math.stackexchange.com/questions/38219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Edge of factoring technology? Schneier in 1996's Applied Cryptography says: "Currently, a 129-decimal-digit modulus is at the edge of factoring technology" In the intervening 15 years has anything much changed?
The same algorithm is used for factoring, the Number-Field Sieve. It's probably been optimized further. But the main difference between now and then is computing power. Since the asymptotic running-time of the NFS is "known", you can even extrapolate into the future (assuming your favorite version of Moore's law).
{ "language": "en", "url": "https://math.stackexchange.com/questions/38271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
$n$ lines cannot divide a plane region into $x$ regions, finding $x$ for $n$ I noticed that $3$ lines cannot divide the plane into $5$ regions (they can divide it into $2,3,4,6$ and $7$ regions). I have a line of reasoning for it, but it seems rather ad-hoc. I also noticed that there are no "gaps" in the number of divisions that can be made with $n=4$, i.e we can divide a plane into $\{2,3,\cdots,11\}$ using $4$ lines. Is there anyway to know that if there are $n$ lines with $x$ being the possible number of divisions $\{ 2,\ldots, x,\ldots x_{max} \}$ where $x_{max}$ is the maximum number of divisions of the plane with $n$ lines, then what are the $x \lt x_{max}$ which are not possible? For e.g, $n=3$, $x_{max}=7$, $x=5$ is not possible.
I don't have access to any papers, unfortunately, but I think I've found a handwavy proof sketch that shows there are no gaps other than $n = 3, x = 5$. Criticism is welcomed; I'm not sure how to make this argument rigorous, and I'm also curious if there's an article that already uses these constructions. Suppose $n - 1$ lines can divide the plane into 2 through $\frac{(n-1)n}{2} + 1$ regions. For sufficiently large $n$, we will show that $n$ lines can divide the plane into $\frac{n(n-1)}{2} + 2$ through $\frac{n(n-1)}{2} + n + 1 = \frac{n(n+1)}{2} + 1$ regions. Consider an arrangement of $n$ lines that splits the plane into $\frac{n(n+1)}{2} + 1$ regions, such that, for simplicity, lines are paired into groups of two, where each line in the $k$th pair has a root at $k$ and the negative slope of its partner. If $n$ is odd, there will be one line left over which can't be paired; put this line horizontally underneath the roots of the pairs (e.g. $y = -1$). If $n$ is even, take the last pair and put one line horizontally as described, and the other vertically at $x = 0$. We can hand-wave to "pull down" pairs one-by-one so their intersection rests on the horizontal line, subtracting one region for each pair "pulled down." This ends up removing $\frac{n-1}{2}$ regions for odd $n$, and $\frac{n}{2} - 1$ regions for even $n$. Then we can go through each pair of lines and adjust the line with negative slope to have the same slope as the next pair's positively sloped line, shaving one region off each time (and removing the same number of regions as the previous operation). So these operations will get us to $\frac{(n-1)n}{2} + 2$ for odd $n$, and $\frac{(n-1)n}{2} + 3$ for even $n$. To get to $\frac{(n-1)n}{2} + 2$ for even $n$, we take the last pair's positive line and put it parallel to the first two vertical lines (subtracting two regions), then nudge the first pair slightly above the horizontal line (adding back one). 
Now we have to consider when such operations fail, for both odd and even cases. We certainly can't "pull down" when $n \le 2$. For $n = 3$, we have just one pair above the horizontal line, so we can't adjust slopes as suggested, giving us a gap at $x = 5$. For $n = 4$, we have only one pair, and we can't make up the gap at $\frac{(n-1)n}{2} + 2$ — but luckily, not only can we cover up the 8-region gap using 3 parallel lines and one non-parallel one, but 4 parallel lines cover the 5-region gap introduced when $n = 3$. So we can use these techniques to complete the induction process for $n \ge 5$.
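For experimenting with small arrangements like the ones in this argument, the region count of $n$ distinct lines is $1 + n + \sum_p (m_p - 1)$, summing over intersection points $p$ met by $m_p$ lines (a standard consequence of Euler's formula). A sketch using exact rationals to detect concurrency reliably:

```python
from fractions import Fraction
from itertools import combinations

def regions(lines):
    """Regions cut by distinct lines a*x + b*y = c, given as (a, b, c)."""
    meets = {}                      # intersection point -> line indices
    for (i, (a1, b1, c1)), (j, (a2, b2, c2)) in combinations(enumerate(lines), 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue                # parallel: no intersection point
        x = Fraction(c1 * b2 - c2 * b1, det)
        y = Fraction(a1 * c2 - a2 * c1, det)
        meets.setdefault((x, y), set()).update((i, j))
    extra = sum(len(s) - 1 for s in meets.values())
    return 1 + len(lines) + extra

assert regions([(0, 1, 0), (0, 1, 1), (0, 1, 2)]) == 4   # 3 parallels
assert regions([(1, 0, 0), (0, 1, 0), (1, 1, 0)]) == 6   # 3 concurrent
assert regions([(1, 0, 0), (0, 1, 0), (1, 1, 3)]) == 7   # general position
# No choice of 3 lines produces 5 regions, matching the n = 3 gap.
```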
{ "language": "en", "url": "https://math.stackexchange.com/questions/38350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 1 }
Exact values of $\cos(2\pi/7)$ and $\sin(2\pi/7)$ What are the exact values of $\cos(2\pi/7)$ and $\sin(2\pi/7)$ and how do I work it out? I know that $\cos(2\pi/7)$ and $\sin(2\pi/7)$ are the real and imaginary parts of $e^{2\pi i/7}$ but I am not sure if that helps me...
There are various ways of construing and attacking your question. At the most basic level: it's no problem to write down a cubic polynomial satisfied by $\alpha = \cos(2 \pi/7)$ and hit it with Cardano's cubic formula. For instance, if we put $z = \zeta_7 = e^{2 \pi i/7}$, then $2\alpha = z + \overline{z} = z + \frac{1}{z}$. A little algebra leads to the polynomial $P(t) = t^3 + \frac{1}{2} t^2 - \frac{1}{2}t - \frac{1}{8}$ which is irreducible with $P(\alpha) = 0$. (Note that the noninteger coefficients of $P(t)$ imply that $\alpha$ is not an algebraic integer. In this respect, the quantity $2 \alpha$ is much better behaved, and it is often a good idea to work with $2 \alpha$ instead of $\alpha$.) To see what you get when you apply Cardano's formula, consult the other answers or just google for it: for instance I quickly found this page, among many others (including wikipedia) which does it. The expression is kind of a mess, which gives you the idea that having these explicit radical expressions for roots of unity (and related quantities like the values of the sine and cosine) may not actually be so useful: if I wanted to compute with $\alpha$ (and it has come up in my work!) I wouldn't get anything out of this formula that I didn't get from $2 \alpha = \zeta_7 + \zeta_7^{-1}$ or the minimal polynomial $P(t)$. On the other hand, if you know some Galois theory, you know that the Galois group of every cyclotomic polynomial is abelian, so there must exist a radical expression for $\zeta_n$ for any $n \in \mathbb{Z}^+$. (We will usually not be able to get away with only repeatedly extracting square roots; that could only be sufficient when Euler's totient function $\varphi(n)$ is a power of $2$, for instance, so not even when $n = 7$.) From this perspective, applying the cubic formula is a big copout, since there is no analogous formula in degree $d > 4$: the general polynomial of such a degree cannot be solved by radicals...but cyclotomic polynomials can. 
So what do you do in general? The answer was known to Gauss, and involves some classical algebra -- resolvents, Gaussian periods, etc. -- that is not very well remembered nowadays. In fact I have never gone through the details myself. But I cast around on the web for a while looking for a nice treatment, and I eventually found this writeup by Paul Garrett. I recommend it to those who want to learn more about this (not so useful, as far as I know, but interesting) classical problem: his notes are consistently excellent, and have the virtue of concision (which I admire especially for lack of ability to produce it myself).
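Both algebraic claims are easy to confirm numerically: $\alpha = \cos(2\pi/7)$ satisfies $P$, and the better-behaved $2\alpha$ satisfies the monic integer cubic $t^3 + t^2 - 2t - 1$ (its minimal polynomial, a standard fact):

```python
import math

alpha = math.cos(2 * math.pi / 7)

# P(t) = t^3 + (1/2) t^2 - (1/2) t - 1/8, with P(alpha) = 0:
P = lambda t: t**3 + t**2 / 2 - t / 2 - 1 / 8
assert abs(P(alpha)) < 1e-12

# 2*alpha = zeta_7 + zeta_7^{-1} is an algebraic integer:
Q = lambda t: t**3 + t**2 - 2 * t - 1
assert abs(Q(2 * alpha)) < 1e-12

# And sin(2*pi/7) then comes for free from alpha:
s = math.sin(2 * math.pi / 7)
assert abs(s - math.sqrt(1 - alpha**2)) < 1e-12
```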
{ "language": "en", "url": "https://math.stackexchange.com/questions/38414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 0 }
Connections between metrics, norms and scalar products (for understanding e.g. Banach and Hilbert spaces) I am trying to understand the differences between $$ \begin{array}{|l|l|l|} \textbf{vector space} & \textbf{general} & \textbf{+ completeness}\\\hline \text{metric}& \text{metric space} & \text{complete space}\\ \text{norm} & \text{normed} & \text{Banach space}\\ \text{scalar product} & \text{pre-Hilbert space} & \text{Hilbert space}\\\hline \end{array} $$ What I don't understand are the differences and connections between metric, norm and scalar product. Obviously, there is some kind of hierarchy but I don't get the full picture. Can anybody help with some good explanations/examples and/or readable references?
When is a normed subspace of a vector space a pre-Hilbert space? The analogous concept to orthogonal and orthonormal sequences in a normed space is perpendicular and perpnormal sequences. If the norm squared of the sum of a linear combination of non-zero vectors equals the sum of the norms squared of each of the components, then the set of vectors is perpendicular. For example, $x,y$ are perpendicular vectors in a complex normed space if, for arbitrary complex numbers $a,b$, $\|ax + by\|^2 = \|ax\|^2 + \|by\|^2 = |a|^2 \|x\|^2 + |b|^2 \|y\|^2$, and perpnormal if moreover $\|x\|^2 = \|y\|^2 = 1$, so that $\|ax + by\|^2 = |a|^2 + |b|^2$. Define the polarization product of two vectors $x,y$ in a normed space using the polarization identity from a pre-Hilbert space: $(x|y) = \frac{1}{4} \left\{ \|x + y\|^2 - \|x - y\|^2 + i \|x + iy\|^2 - i \|x - iy\|^2 \right\}$. Then a normed space having a sequence of perpnormal vectors (vectors that are perpendicular and unit vectors) is equivalent to all pairs of vectors in the normed space satisfying the parallelogram law.
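The parallelogram law $\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2$ is the computable test separating the rows of the table: it holds for norms induced by an inner product and fails otherwise. A quick numeric probe comparing the Euclidean norm with the max norm (my choice of test vectors):

```python
import math

def norm2(v):
    return math.sqrt(sum(t * t for t in v))

def norm_inf(v):
    return max(abs(t) for t in v)

def parallelogram_defect(norm, x, y):
    """||x+y||^2 + ||x-y||^2 - 2||x||^2 - 2||y||^2; zero iff the law holds."""
    add = [a + b for a, b in zip(x, y)]
    sub = [a - b for a, b in zip(x, y)]
    return norm(add)**2 + norm(sub)**2 - 2 * norm(x)**2 - 2 * norm(y)**2

x, y = [1.0, 0.0], [0.0, 1.0]
assert abs(parallelogram_defect(norm2, x, y)) < 1e-12     # inner-product norm
assert abs(parallelogram_defect(norm_inf, x, y)) > 1.9    # fails badly
print("Euclidean norm passes, max norm fails: only the former is pre-Hilbert")
```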
{ "language": "en", "url": "https://math.stackexchange.com/questions/38460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77", "answer_count": 3, "answer_id": 2 }
Showing the inequality $\frac{|f^{'}(z)|}{1-|f(z)|^{2}} \leq \frac{1}{1-|z|^{2}}$ I am trying to show if $|f(z)| \leq 1$, $|z| \leq 1$, then \begin{equation} \frac{|f^{'}(z)|}{1-|f(z)|^{2}} \leq \frac{1}{1-|z|^{2}} \end{equation}. I have used Cauchy's Inequality to derive $|f^{'}(z)| \leq \frac{1}{1-|z|}$ yet I still couldn't get the result I need. Also I am trying to find when equality would hold. Any tips or help would be much appreciated. Thanks!
That's the Schwarz–Pick theorem. The wikipedia page contains a proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Geometric distinction between real cubics with different Galois group? The following cubics have 3 real roots, but the first has Galois group $C_3$ and the second $S_3$: * *$x^3 - 3x + 1$ (red) *$x^3 - 4x + 2$ (green) Is there any geometric way to distinguish between the two cases? Obviously graphing this onto the real line does not help. It is not clear to me why you cannot transpose the red dots but you can transpose the green ones.
Almost all cubics (with integer coefficients and three real roots) have Galois group $S_3$. What exactly is meant by "almost all" is a little technical, but the phrase can be made precise, and the result rigorously proved. One consequence is that if you start with a $C_3$ cubic and perturb the roots the tiniest little bit then with probability $1$ you now have an $S_3$ cubic. So just looking at the red dots can't help you: it's guaranteed that there is a set of green dots so close by that you wouldn't be able to distinguish them with an electron microscope.
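As an aside (this is not part of the argument above, and I am supplying the criterion myself): for an irreducible cubic $x^3 + px + q$ the discriminant is $\Delta = -4p^3 - 27q^2$, and the Galois group is $C_3$ exactly when $\Delta$ is a nonzero perfect square. A quick sketch checking the two polynomials from the question:

```python
from math import isqrt

def cubic_disc(p, q):
    """Discriminant of the depressed cubic x^3 + p*x + q."""
    return -4 * p**3 - 27 * q**2

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

# x^3 - 3x + 1 (red): p = -3, q = 1
print(cubic_disc(-3, 1), is_square(cubic_disc(-3, 1)))   # 81 True  -> C3
# x^3 - 4x + 2 (green): p = -4, q = 2
print(cubic_disc(-4, 2), is_square(cubic_disc(-4, 2)))   # 148 False -> S3
```

This also shows why the green dots can be perturbed freely: being a perfect square is a measure-zero condition on the coefficients.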
{ "language": "en", "url": "https://math.stackexchange.com/questions/38552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Infinite area under a curve has finite volume of revolution? So I was thinking about the harmonic series, and how it diverges, even though every subsequent term tends toward zero. That meant that its integral from 1 to infinity should also diverge, but would the volume of revolution also diverge (for the function $y=1/x$)? I quickly realized that its volume is actually finite, because to find the volume of revolution the function being integrated has to be squared, which would give $1/x^2$, and, as we all know, that converges. So, my question is, are there other functions that share this property? The only family of functions that I know of that satisfies this is 1/x, 2/x, 3/x, etc.
$\frac{1}{x^p}$ with $\frac{1}{2} < p \leq 1$ all satisfy these properties. Then, by the limit comparison test, any positive function $f(x)$ with the property that there exists a $\frac{1}{2} < p \leq 1$ so that $$ \lim_{x \to \infty} x^p f(x) = C \in (0, \infty) \,$$ also has this property... This allows you to create lots and lots of examples: just add to $\frac{\alpha}{x^p}$ any "smaller" function (i.e. $o(\frac{1}{x^p})$).
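For instance, with $p = 3/4$ the area diverges while the volume of revolution converges to $2\pi$. A small numeric sketch (the antiderivatives $4(X^{1/4}-1)$ and $2\pi(1-X^{-1/2})$ are standard, supplied by me, not taken from the answer above):

```python
import math

def area(X):      # integral of x**(-3/4) from 1 to X
    return 4 * (X ** 0.25 - 1)

def volume(X):    # pi * integral of (x**(-3/4))**2 from 1 to X
    return 2 * math.pi * (1 - X ** -0.5)

for X in (10, 10**4, 10**8):
    print(X, area(X), volume(X))
# area keeps growing without bound; volume levels off near 2*pi
```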
{ "language": "en", "url": "https://math.stackexchange.com/questions/38611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Matrix Exponentiation for Recurrence Relations I know how to use Matrix Exponentiation to solve problems having linear Recurrence relations (for example Fibonacci sequence). I would like to know, can we use it for linear recurrence in more than one variable too? For example can we use matrix exponentiation for calculating ${}_n C_r$ which follows the recurrence C(n,k) = C(n-1,k) + C(n-1,k-1). Also how do we get the required matrix for a general recurrence relation in more than one variable?
@ "For example can we use matrix exponentiation for calculating nCr" There is a simple matrix as logarithm of P (which contains the binomial-coefficients): $\qquad \exp(L) = P $ where $ \qquad L = \small \begin{array} {rrrrrrr} 0 & . & . & . & . & . & . & . \\ 1 & 0 & . & . & . & . & . & . \\ 0 & 2 & 0 & . & . & . & . & . \\ 0 & 0 & 3 & 0 & . & . & . & . \\ 0 & 0 & 0 & 4 & 0 & . & . & . \\ 0 & 0 & 0 & 0 & 5 & 0 & . & . \\ 0 & 0 & 0 & 0 & 0 & 6 & 0 & . \\ 0 & 0 & 0 & 0 & 0 & 0 & 7 & 0 \end{array} $ and $ \qquad P =\small \begin{array} {rrrrrrr} 1 & . & . & . & . & . & . & . \\ 1 & 1 & . & . & . & . & . & . \\ 1 & 2 & 1 & . & . & . & . & . \\ 1 & 3 & 3 & 1 & . & . & . & . \\ 1 & 4 & 6 & 4 & 1 & . & . & . \\ 1 & 5 & 10 & 10 & 5 & 1 & . & . \\ 1 & 6 & 15 & 20 & 15 & 6 & 1 & . \\ 1 & 7 & 21 & 35 & 35 & 21 & 7 & 1 \end{array} $ L and P can be extended to arbitrary size in the obvious way
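Since $L$ is strictly lower triangular (hence nilpotent, $L^n = 0$), the exponential is the finite sum $\exp(L) = \sum_{k=0}^{n-1} L^k/k!$, which can be checked exactly with rational arithmetic. A minimal sketch for $n = 8$:

```python
from fractions import Fraction
from math import comb, factorial

n = 8
# L[i][j] = i on the subdiagonal (j == i-1), zero elsewhere
L = [[Fraction(i) if j == i - 1 else Fraction(0) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# exp(L) = sum_{k=0}^{n-1} L^k / k!  (the series terminates because L^n = 0)
P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
term = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
for k in range(1, n):
    term = matmul(term, L)                 # term now holds L^k
    P = [[P[i][j] + term[i][j] / factorial(k) for j in range(n)] for i in range(n)]

assert all(P[i][j] == comb(i, j) for i in range(n) for j in range(n))
print([int(x) for x in P[7]])  # [1, 7, 21, 35, 35, 21, 7, 1]
```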
{ "language": "en", "url": "https://math.stackexchange.com/questions/38659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Help to understand material implication This question comes from my algebra paper: $(p \rightarrow q)$ is logically equivalent to ... (then four options are given). The module states that the correct option is $(\sim p \lor q)$. That is: $$(p\rightarrow q) \iff (\sim p \lor q )$$ but I could not understand this problem or the solution. Could anybody help me?
$p \to q$ is only logically false if $p$ is true and $q$ is false. So if not-$p$ or $q$ (or both) are true, you do not have to worry about $p \to q$ being false. On the other hand, if both are false, then that's the same as saying $p$ is true and $q$ is false (De Morgan's Law), so $p \to q$ is false. Therefore, the two are logically equivalent.
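The equivalence can be checked exhaustively over the four truth assignments, a quick sketch:

```python
def implies(p, q):
    # material implication: false exactly when p is true and q is false
    return not (p and not q)

for p in (False, True):
    for q in (False, True):
        print(p, q, implies(p, q), (not p) or q)

# the two columns agree on every row
assert all(implies(p, q) == ((not p) or q)
           for p in (False, True) for q in (False, True))
```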
{ "language": "en", "url": "https://math.stackexchange.com/questions/38713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 2 }
How to find a polynomial from a given root? I was asked to find a polynomial with integer coefficients from a given root/solution. Lets say for example that the root is: $\sqrt{5} + \sqrt{7}$. * *How do I go about finding a polynomial that has this number as a root? *Is there a specific way of finding a polynomial with integer coefficients? Any help would be appreciated. Thanks.
One can start from the equation $x=\sqrt5+\sqrt7$ and try to get rid of the square roots one at a time. For example, $x-\sqrt5=\sqrt7$, squaring yields $(x-\sqrt5)^2=7$, developing the square yields $x^2-2=2x\sqrt5$, and squaring again yields $(x^2-2)^2=20x^2$, that is, $x^4-24x^2+4=0$.
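A quick numeric sanity check of the final polynomial at $x = \sqrt5 + \sqrt7$:

```python
x = 5 ** 0.5 + 7 ** 0.5
value = x**4 - 24 * x**2 + 4
print(value)  # ~0, up to floating-point error
assert abs(value) < 1e-8
```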
{ "language": "en", "url": "https://math.stackexchange.com/questions/38763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
proper differentiable maps of manifolds Is the following statement true?If yes, why? Let $f: M\to N$ be a proper morphism between smooth manifolds. Let $x$ be a point of $N$, and $U$ a nbhd of $f^{-1}(x)$ in $M$. Then there exists a nbhd $V$ of $x$ in $N$ such that $f^{-1}(V)\subset U$. Here, proper means that the preimage of any compact set is compact. It seems to me this was used in a expository article that I am reading. If it is true, I expect it to be true for proper morphism of locally compact topological spaces. But for some reason I wasn't able to find a proof. Thank you.
Suppose not. Then there is a sequence $(y_n)_{n\geq1}$ in $M\setminus U$ such that $f(y_n)\to x$. The set $S=\{f(y_n):n\geq1\}\cup\{x\}$ is compact, so its preimage $f^{-1}(S)$ is also compact. Since the sequence $(y_n)_{n\geq1}$ is contained in $f^{-1}(S)$, we can —by replacing it with one of its subsequences, if needed— assume that in fact $(y_n)_{n\geq1}$ converges to a point $y\in M$. Can you finish? (I am using that $x$ has a countable basis of neighborhoods here and that sequences in a compact subset of $M$ have convergent subsequences —to reduce to dealing with sequences— but more technology will remove that in order to generalize this to spaces other than manifolds)
{ "language": "en", "url": "https://math.stackexchange.com/questions/38841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Using binomial expansion to derive Pascal's rule $\displaystyle \binom{n}{k}=\binom{n-1}{k} + \binom{n-1}{k-1}$ $\displaystyle \left(1+x\right)^{n} = \left(1+x\right)\left(1+x\right)^{n-1}$ How do I use binomial expansion on the second equations for the right hand side and convert it to the first equation? The left hand side is obvious, but I'm not sure how to do the right hand side. Please give me some hints thanks
Binomial expansion of both sides of $$\left(1+x\right)^{n} = \left(1+x\right)\left(1+x\right)^{n-1}$$ gives $$\sum_{k=0}^n \binom{n}{k} x^k = \left(1+x\right)\sum_{k=0}^{n-1} \binom{n-1}{k} x^k$$ by distributivity on the right hand side we find $$\left(\sum_{k=0}^{n-1} \binom{n-1}{k} x^k \right)+\left(\sum_{k=0}^{n-1} \binom{n-1}{k} x^{k+1} \right) = \left(\sum_{k=0}^{n} \binom{n-1}{k} x^k \right)+\left(\sum_{k=0}^{n} \binom{n-1}{k-1} x^{k}\right)$$ the limits of the summations do not change the sum because $\binom{n-1}{n} = 0$, $\binom{n-1}{-1} = 0$. Thus we have $$\sum_{k=0}^n \binom{n}{k} x^k = \sum_{k=0}^{n} \left(\binom{n-1}{k} + \binom{n-1}{k-1}\right) x^k$$ and extracting the $x^k$ coefficients from both sides gives the identity $$\displaystyle \binom{n}{k}=\binom{n-1}{k} + \binom{n-1}{k-1}.$$
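The identity, with the convention that the binomial coefficient vanishes outside $0 \leq k \leq n$, is easy to spot-check:

```python
from math import comb

def c(n, k):
    # binomial coefficient with the convention used above: 0 outside 0 <= k <= n
    return comb(n, k) if 0 <= k <= n else 0

for n in range(1, 12):
    for k in range(0, n + 1):
        assert c(n, k) == c(n - 1, k) + c(n - 1, k - 1)
print("Pascal's rule holds for all n up to 11")
```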
{ "language": "en", "url": "https://math.stackexchange.com/questions/38900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does the cartesian product have a neutral element? Let $A$ be any set. Is there a set $E$ such that $A \times E = E \times A = A$? I thought of the empty set, but Wikipedia says otherwise. This operation changes dimension, so an isomorphism might be needed for such element to exist.
In some sense, the whole reason we have these things called addition and multiplication and the ring axioms is because of certain properties satisfied by the Cartesian product and disjoint union. Both are associative and commutative (up to natural isomorphism). One distributes over the other (up to natural isomorphism). Both have identity elements (up to natural isomorphism). Decategorify, restricting to finite sets, and you get the non-negative integers. Take the Grothendieck group, and you get the integers, and then at some point you are led to write down the ring axioms in general. But it's good to keep in mind where it all comes from.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
monomorphism from $\mathrm{GL}(n,q^2)$ to $\mathrm{GL}(2n,q)$ Construct an explicit monomorphism from $GL(n,q^2)$ to $GL(2n,q)$. Will $GL(n,q^m)$ always have a subgroup isomorphic with $GL(mn,q^2)$ for every $m,n\in \mathbb{N}$, and prime power $q$?
Can you see how $\mathrm{GL}_n(\mathbb{C})$ is naturally a subgroup of $\mathrm{GL}_{2n}(\mathbb{R})$? You may apply the same idea to the finite fields. Hint: A $\mathbb{C}$-linear operator is a priori $\mathbb{R}$ linear. I wonder if there is some typo in the second part of the question. As it stands now, it doesn't make much sense, since $\mathrm{GL}_{mn}(\mathbb{F}_{q^2})$ has larger cardinality than $\mathrm{GL}_{n}(\mathbb{F}_{q^m})$.
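The classical model behind the hint: replace each complex entry $a+bi$ by the real $2\times2$ block $\begin{pmatrix} a & -b \\ b & a\end{pmatrix}$; since that block map is a ring homomorphism $\mathbb C \to M_2(\mathbb R)$, the resulting map $\mathrm{GL}_n(\mathbb C)\to\mathrm{GL}_{2n}(\mathbb R)$ is an injective group homomorphism, and the same formula works over $\mathbb F_{q^2} = \mathbb F_q(\alpha)$. A small numeric sketch over $\mathbb C$ (my own illustration, pure Python):

```python
def cmul_block(z):
    # a + bi  ->  [[a, -b], [b, a]], the regular representation of C on R^2
    return [[z.real, -z.imag], [z.imag, z.real]]

def embed(M):
    n = len(M)
    R = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            blk = cmul_block(M[i][j])
            for r in range(2):
                for c in range(2):
                    R[2 * i + r][2 * j + c] = blk[r][c]
    return R

def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

M = [[1 + 2j, 0.5], [3j, -1 + 1j]]
N = [[2, 1 - 1j], [1j, 4]]
lhs = embed(matmul(M, N))          # embed(M N)
rhs = matmul(embed(M), embed(N))   # embed(M) embed(N)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(4) for j in range(4))
```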
{ "language": "en", "url": "https://math.stackexchange.com/questions/39044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does $c^n(n!+c^n)\lt (n+c^2)^n$ hold for all positive integers $n$ and $c\gt 0$? I am not sure whether the following inequality is true? Some small $n$ indicates it is true. Let $n$ be a positive integer and $c\gt0$, then $$c^n(n!+c^n)\lt(n+c^2)^n.$$
I believe this is false. Take $n=25, c = 5$. I came up with this example by setting $c = \sqrt{n}$ and using Stirling's approximation formula for $n!$. Wolfram alpha link showing the computation. In fact for $c = \sqrt{n}$, we have that $$\frac{c^n(n!+c^n)}{(n+c^2)^n} \sim \sqrt{2 \pi n}\ \left(\frac{n}{\sqrt{2e}}\right)^{n/2}$$ which goes to $\infty$ as $n \to \infty$.
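The counterexample can be confirmed with exact integer arithmetic:

```python
from math import factorial

n, c = 25, 5
lhs = c**n * (factorial(n) + c**n)   # c^n (n! + c^n)
rhs = (n + c**2) ** n                # (n + c^2)^n = 50^25
print(lhs > rhs)  # True: the claimed inequality fails here
```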
{ "language": "en", "url": "https://math.stackexchange.com/questions/39112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Application of Galois theory I have a question regarding roots of equations: find all $a$ such that the cubic polynomial $x^3-bx+a=0$ has three integer roots. How can this be solved using Galois theory? What do reducible polynomials, splitting fields, and field extensions have to do with it? Please explain each of them in detail, because this serves as an introduction to Galois theory for me. For example, take $b$ to be 3 and list all $a$ such that the equation has three integer roots.
Suppose the polynomial has three integer roots $r_1, r_2, r_3$. Then $(x - r_1)(x - r_2)(x - r_3) = x^3 - bx + a$, hence $$r_1 + r_2 + r_3 = 0$$ $$r_1 r_2 + r_2 r_3 + r_3 r_1 = -b$$ $$r_1 r_2 r_3 = -a.$$ Squaring the first equation gives $r_1^2 + r_2^2 + r_3^2 = 2b$, which immediately tells you that for fixed $b$ there are only finitely many possibilities for the roots, and from here it's casework for any fixed $b$. For example, for $b = 3$ we get $r_1^2 + r_2^2 + r_3^2 = 6$, which has solutions $(\pm 1, \pm 1, \pm 2)$ up to cyclic permutation, and of these solutions only $(-1, -1, 2)$ and $(1, 1, -2)$ add up to zero. Hence the possible values in this case are $a = \pm 2$.
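Since $r_1^2 + r_2^2 + r_3^2 = 2b$ bounds the roots, the casework can also be done by brute force, a sketch:

```python
def integer_root_as(b, bound=50):
    """All a such that x^3 - b*x + a has three integer roots with |r_i| <= bound."""
    found = set()
    for r1 in range(-bound, bound + 1):
        for r2 in range(r1, bound + 1):
            r3 = -(r1 + r2)                       # forces r1 + r2 + r3 = 0
            if r1 * r2 + r2 * r3 + r3 * r1 == -b:  # coefficient of x is -b
                found.add(-r1 * r2 * r3)           # constant term is a
    return found

print(sorted(integer_root_as(3)))  # [-2, 2]
```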
{ "language": "en", "url": "https://math.stackexchange.com/questions/39154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How many different ways can you distribute 5 apples and 8 oranges among six children? How many different ways can you distribute 5 apples and 8 oranges among six children if every child must receive at least one piece of fruit? If there was a way to solve this using Pólya-Redfield that would be great, but I cannot figure out the group elements.
I am too lazy to calculate the numbers of elements with $k$ cycles in $S_8$, but if you do that yourself a solution could work as follows. (I will use this version of Redfield-Polya and $[n]$ shall denote $\{1,\dots,n\}$.) Let us take $X = [13]$ the set of fruits, where $G= S_5 \times S_8$ acts on $X$ such that the first five apples and the later eight oranges are indistinguishable. Then $$K_n = |[n]^X/G|$$ is the number of ways to distribute these apples and oranges among $n$ distinguishable children. And, by inclusion-exclusion over the children who receive nothing, $$ N_n = \sum_{j=0}^{n} (-1)^j \binom{n}{j} K_{n-j}$$ is the number of ways to distribute these apples and oranges among $n$ distinguishable children such that every child must receive at least one piece of fruit. Now by the Theorem $$K_n = \frac{1}{|G|} \sum_{g\in G} n^{c(g)} = \frac{1}{5!\cdot 8!} \left(\sum_{g\in S_5} n^{c(g)}\right)\left(\sum_{g\in S_8} n^{c(g)}\right) = \frac{1}{5!\cdot 8!} \left(\sum_{i\in [5]} d_i n^{i}\right) \left(\sum_{i\in [8]} e_i n^{i}\right),$$ where $c(g)$ is the number of cycles of $g$, $d_i$ the number of permutations of $S_5$ with exactly $i$ cycles and $e_i$ the number of permutations of $S_8$ with exactly $i$ cycles. The number that we are looking for in the end is $N_6$.
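Since the numbers here stay small, the count can also be obtained by direct enumeration, which is a useful cross-check on any formula (this sketch and the final value 70608 are my own computation, so verify independently):

```python
from itertools import product
from math import comb

def compositions(total, parts):
    """All ways to write `total` as an ordered sum of `parts` nonnegative ints."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

apples = list(compositions(5, 6))    # C(10,5) = 252 apple distributions
oranges = list(compositions(8, 6))   # C(13,5) = 1287 orange distributions
count = sum(1 for a, o in product(apples, oranges)
            if all(ai + oi >= 1 for ai, oi in zip(a, o)))

# cross-check: inclusion-exclusion over which children stay empty
def stars_bars(n, m):   # n identical items to m children
    return comb(n + m - 1, m - 1) if m >= 1 else int(n == 0)

incl_excl = sum((-1) ** j * comb(6, j) * stars_bars(5, 6 - j) * stars_bars(8, 6 - j)
                for j in range(7))
print(count, incl_excl)  # both 70608
```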
{ "language": "en", "url": "https://math.stackexchange.com/questions/39284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Infinite shortest paths in graphs From Wikipedia: "If there is no path connecting two vertices, i.e., if they belong to different connected components, then conventionally the distance is defined as infinite." This seems to negate the possibility that there are graphs with vertices connected by an infinite shortest path (as opposed to being not connected). Why is it that for every (even infinite) path between two vertices there is a finite one? Note that infinite paths between vertices do exist - e.g. in the infinite complete graph -, but they are not the shortest.
To expand on my comment: It's clear that if an infinite path is defined as a map from $\mathbb N$ to the edge set such that consecutive edges share a vertex, then any vertices connected by such an infinite path are in fact connected by a finite section of the path. To make sense of the question nevertheless, one might ask whether it is possible to use a different ordinal than $\omega$, say, $\omega\cdot2$, to define an infinite path. But that doesn't make sense either, since there's no way (at least I don't see one) to make the two parts of such a path have anything to do with each other -- at each limit ordinal, the path can start wherever it wants, since there's no predecessor for applying the condition that consecutive edges share a vertex. Note that the situation is different in infinite trees, which can perfectly well contain infinite paths connecting the root to a node. This is because the definition of a path in an infinite tree is different; it explicitly attaches the nodes on levels corresponding to limit ordinals to entire sequences of nodes, not to individual nodes; such a concept doesn't exist in graphs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
A good book for learning mathematical trickery I've seen several question here on what book to read to learn writing and reading proofs. This question is not about that. I've been doing that for a while, and I'm quite comfortable with proofs. I am looking for resources (books, ideally) that can teach not the concept of proofs, but rather some of the specific mathematical tricks that are commonly employed in proofs: those that mostly include clever number manipulation, ad-hoc integration techniques, numerical methods and other thing you are likely never to learn in theory-oriented books. I come mainly from applied math and engineering, and when I look at proofs from Stochastic Processes, Digital Signal Processing, Non-Linear Systems and other applied subjects, I feel like I need to learn a new method to understand every proof I read. Is there any good literature on such mathematical tricks?
The Tricki ("Trick Wiki") is an attempt to catalogue such things, although it is somewhat less successful than was initially hoped.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
Looking for the name of a Rising/Falling Curve I'm looking for a particular curve algorithm that is similar to a bell curve/distribution, but instead of approaching zero at its ends, it stops at its length/limit. You specify the length of the curve and its maximum peak, and the plot will approach its peak at the midpoint of the length (the middle) and then curve downward to its end. As a math noob, I may not be making any sense. Here's an image of the curve I'm looking for:
The curve which you are looking for is a parabola. When I plugged in the equation $$f(x) = -(x-3.9)^{2} + 4$$ I got this figure, which some what resembles what you are looking for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why truth table is not used in logic? One day, I bought Principia Mathematica and saw a lot of proofs of logical equations, such as $\vdash p \implies p$ or $\vdash \lnot (p \wedge \lnot p)$. (Of course there's a bunch of proofs about relations and sets later.) After reading these proofs, I suddenly thought, "why don't they use the truth table?". I know this question is quite silly, but I don't know why it's silly either (my feeling just says that). My (discrete math) teacher says, "It's a hard question, and you may not understand until you're a university student," which I didn't expect (I thought the reason would be something easy). Why don't people use truth tables to prove logical equations? (Except for studying logic (e.g. questions like "prove this logical equation using a truth table"), of course.) PS. My teacher is the kind of person who thinks something makes sense iff it makes sense mathematically.
In Principia, the authors wanted to produce an explicit list of purely logical ideas, including an explicit finite list of axioms and rules of inference, from which all of mathematics could be derived. The method of truth tables is not such a finite list, and in any case would only deal with propositional logic. The early derivations of Principia are quite tedious, and could have been eliminated by adopting a more generous list of initial axioms. But for reasons of tradition, the authors wanted their list to be as small as possible. Remark: Principia is nowadays only of historical interest, since the subject has developed in quite different directions from those initiated by Russell and Whitehead. The idea of basing mathematics (including the development of the usual integers, reals, function spaces) purely on "logic" has largely been abandoned in favour of set-theory based formulations. And Principia does not have a clear separation between syntax and semantics. Such a separation is essential to the development of Model Theory in the past $80$ years.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Reference for a proof of the Hahn-Mazurkiewicz theorem The Hahn-Mazurkiewicz theorem states that a space $X$ is a Peano Space if and only if $X$ is compact, connected, locally connected, and metrizable. If anybody knows a book with a proof, please let me know. Thanks. P.S. (added by t.b.) A Peano space is a topological space which is the continuous image of the unit interval.
Read the section on Peano spaces in General Topology by Stephen Willard.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
How to calculate $\int_0^{2\pi} \sqrt{1 - \sin^2 \theta}\;\mathrm d\theta$ How to calculate: $$ \int_0^{2\pi} \sqrt{1 - \sin^2 \theta}\;\mathrm d\theta $$
Use Wolfram Alpha! Plug in "integrate sqrt(1-sin^2(x))". Then press "show steps". You can enter the bounds by hand... http://www.wolframalpha.com/input/?i=integrate+sqrt%281-sin%5E2%28x%29%29
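If you'd rather check it without Wolfram Alpha: since $\sqrt{1-\sin^2\theta} = |\cos\theta|$, the integral over $[0, 2\pi]$ is $4$ (four quarter-period humps of area 1 each), and a crude midpoint rule confirms it:

```python
import math

def midpoint_integral(f, a, b, steps=200000):
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# sqrt(1 - sin^2 x) = |cos x|, whose integral over [0, 2*pi] is 4
val = midpoint_integral(lambda x: math.sqrt(1 - math.sin(x) ** 2), 0, 2 * math.pi)
print(val)  # ≈ 4
assert abs(val - 4) < 1e-4
```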
{ "language": "en", "url": "https://math.stackexchange.com/questions/39643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Logic problem - what kind of logic is it? I would be most grateful if someone could verify my solution to this problem. 1) Of Aaron, Brian and Colin, only one man is smart. Aaron says truthfully: 1. If I am not smart, I will not pass Physics. 2. If I am smart, I will pass Chemistry. Brian says truthfully: 3. If I am not smart, I will not pass Chemistry. 4. If I am smart, I will pass Physics. Colin says truthfully: 5. If I am not smart, I will not pass Physics. 6. If I am smart, I will pass Physics. While I. The smart man is the only man to pass one particular subject. II. The smart man is also the only man to fail the other particular subject. Which one of the three men is smart? Why? I would say that it could have been any one of them, as the implications in every statement are not strong enough to disprove the statements I, II. But I'm not sure if my solution is enough, as I'm not sure what kind of logic it is.
You could just create a simple table of all the cases and read off the solution from it.
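The table can also be built mechanically: enumerate who is smart and every pass/fail assignment, keep the cases where statements 1-6 and conditions I-II all hold, and see who survives. A sketch (this is my own encoding of the puzzle, so read the constraints critically):

```python
from itertools import product

names = ["Aaron", "Brian", "Colin"]
solutions = set()
for smart in range(3):
    # passP[i] / passC[i]: does man i pass Physics / Chemistry?
    for bits in product([False, True], repeat=6):
        passP, passC = bits[:3], bits[3:]
        statements = [
            smart == 0 or not passP[0],   # 1: A not smart -> A fails Physics
            smart != 0 or passC[0],       # 2: A smart -> A passes Chemistry
            smart == 1 or not passC[1],   # 3: B not smart -> B fails Chemistry
            smart != 1 or passP[1],       # 4: B smart -> B passes Physics
            smart == 2 or not passP[2],   # 5: C not smart -> C fails Physics
            smart != 2 or passP[2],       # 6: C smart -> C passes Physics
        ]
        if not all(statements):
            continue
        # I & II: smart is sole passer of one subject and sole failer of the other
        for sole_pass, sole_fail in ((passP, passC), (passC, passP)):
            only_passer = all((i == smart) == sole_pass[i] for i in range(3))
            only_failer = all((i == smart) == (not sole_fail[i]) for i in range(3))
            if only_passer and only_failer:
                solutions.add(names[smart])
print(solutions)  # {'Brian'}
```

With this encoding only Brian is consistent: Aaron and Colin each force a contradiction between their truthful statements and conditions I-II.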
{ "language": "en", "url": "https://math.stackexchange.com/questions/39680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Equality for the Gradient We have that $f : \mathbb{R}^2 \mapsto \mathbb{R}, f \in C^2$ and $h= \nabla f = \left(\frac{\partial f}{\partial x_1 },\frac{\partial f}{\partial x_2 } \right)$, $x=(x_1,x_2)$. Now the proposition I try to show says that $$\int_0^1 \! \langle \nabla f(x \cdot t),x \rangle\,dt = \int_0^{x_1} \! h_1(t,0)\,dt +\int_0^{x_2} \! h_2(x_1,t)\,dt$$ I know that $\langle \nabla f(x \cdot t),x \rangle=d f(tx) \cdot x$ but it doesn't seem to help, maybe you have to make a clever substitution? (Because the range of integration changes). Thanks in advance.
Before you read this answer, fetch a piece of paper and draw the following three points on it: $(0,0)$, $(x_{1},0)$ and $(x_{1},x_{2})$. These are the corners of a right-angled triangle whose hypotenuse I'd like to call $\gamma$ whose side on the $x_1$-axis I call $\gamma_{1}$ and whose parallel to the $x_2$-axis I call $\gamma_2$. More formally, let $\gamma: [0,1] \to \mathbb{R}^2$ be the path $t \mapsto tx$. Similarly, let $\gamma_{1} : [0,1] \to \mathbb{R}^2$ be the path $t \mapsto (tx_{1},0)$ and $\gamma_{2} : [0,1] \to \mathbb{R}^2$ be the path $t \mapsto (x_{1}, tx_{2})$. The integral on the left hand side can be written as $$\int_{0}^{1} df(\gamma(t))\cdot\dot{\gamma}(t)\,dt = \int_{0}^{1} \frac{d}{dt}(f \circ \gamma)(t)\,dt = f(\gamma(1)) - f(\gamma(0)) = f(x_1, x_2) - f(0,0).$$ Similarly, after some simple manipulations the right hand side is equal to $$\int_{0}^{1} \frac{d}{dt} (f \circ \gamma_{1})(t)\,dt + \int_{0}^{1} \frac{d}{dt} (f \circ \gamma_2)(t)\,dt = \left( f(\gamma_{1}(1)) - f(\gamma_{1}(0))\right) + \left(f(\gamma_2 (1)) - f(\gamma_2(0))\right)$$ and as $\gamma_1 (1) = (x_1,0) = \gamma_2 (0)$ two terms cancel out and what remains is $f(x_{1},x_{2}) - f(0,0)$. Thus the left hand side and the right hand side are equal.
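For a concrete sanity check of the equality, take e.g. $f(x_1,x_2)=x_1^3+x_1x_2^2$ (my arbitrary choice of a $C^2$ function) and compare both sides numerically:

```python
def f(x1, x2):
    return x1 ** 3 + x1 * x2 ** 2          # a sample C^2 function

def h(x1, x2):                              # its gradient
    return (3 * x1 ** 2 + x2 ** 2, 2 * x1 * x2)

def midpoint(g, a, b, steps=20000):
    dh = (b - a) / steps
    return dh * sum(g(a + (i + 0.5) * dh) for i in range(steps))

x1, x2 = 1.3, -0.7
lhs = midpoint(lambda t: h(t * x1, t * x2)[0] * x1 + h(t * x1, t * x2)[1] * x2, 0, 1)
rhs = midpoint(lambda t: h(t, 0)[0], 0, x1) + midpoint(lambda t: h(x1, t)[1], 0, x2)
print(lhs, rhs)  # both equal f(x1, x2) - f(0, 0), matching the path argument above
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - (f(x1, x2) - f(0, 0))) < 1e-6
```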
{ "language": "en", "url": "https://math.stackexchange.com/questions/39731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
To sum $1+2+3+\cdots$ to $-\frac1{12}$ $$\sum_{n=1}^\infty\frac1{n^s}$$ only converges to $\zeta(s)$ if $\text{Re}(s)>1$. Why should analytically continuing to $\zeta(-1)$ give the right answer?
If the following were true: $$\sum_{n=1}^\infty{n}=-\frac1{12}\tag{hypothesis}$$ then we would expect the following: $$\lim_{n\to\infty}\sum_{i=1}^n{i}\\ =\lim_{n\to\infty}\frac{n(n+1)}2=-\frac1{12}\tag{expectation}$$ which is the formula for the infinite triangular number limit. Unfortunately this is a result that we do not get when the limit is correctly taken. The correct value is $$\lim_{n\to\infty}\frac{n(n+1)}{2}\\ =\lim_{n\to\infty}\frac{n^2+n}{2}=\infty\\ \neq-\frac{1}{12}$$ This sort of mathematical sleight of hand, smoke and mirrors, pulling a finite negative rabbit out of an empty positively infinite hat does not impress me; worse yet, it gives legitimate, observable, repeatable mathematics a bad name.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "449", "answer_count": 18, "answer_id": 4 }
Branch cut of the logarithm I have a function $F$ holomorphic on some open set, and I have $ F(0) = 1 $ and $ F $ is non-vanishing. I want to show that there is a holomorphic branch of $ \log(F(z)) $. Now, I'm getting confused. The principal branch of logarithm removes $ (-\infty, 0] $. But if the point 0 is missing from the plane, what happens when we take $ \log{F(0)} = \log{1} + 0 = 0 $? (I'm sure we can take the principal branch, because $ \exp(z) $ satisfies the conditions in question). Any help would be appreciated. Thanks
That only means $F(z)$ cannot be 0, while $z$ can be 0.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Complete induction of $10^n \equiv (-1)^n \pmod{11}$ To prove $10^n \equiv (-1)^n\pmod{11}$, $n\geq 0$, I started an induction. It's $$11|((-1)^n - 10^n) \Longrightarrow (-1)^n -10^n = k*11,\quad k \in \mathbb{Z}. $$ For $n = 0$: $$ (-1)^0 - (10)^0 = 0*11 $$ $n\Rightarrow n+1$ $$\begin{align*} (-1) ^{n+1} - (10) ^{n+1} &= k*11\\ (-1)*(-1)^n - 10*(10)^n &= k*11 \end{align*}$$ But I don't get the next step.
Since 10 ≡ -1 (mod 11), you can just raise both sides to the power of $n$.
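A one-liner confirms the congruence for many exponents at once:

```python
# 10^n mod 11 should match (-1)^n mod 11 (i.e. 1 for even n, 10 for odd n)
for n in range(30):
    assert pow(10, n, 11) == (-1) ** n % 11
print("10^n ≡ (-1)^n (mod 11) checked for n < 30")
```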
{ "language": "en", "url": "https://math.stackexchange.com/questions/39882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 12, "answer_id": 1 }
How to factor quadratic $ax^2+bx+c$? How do I shorten this? How do I have to think? $$ x^2 + x - 2$$ The answer is $$(x+2)(x-1)$$ I don't know how to get to the answer systematically. Could someone explain? Does anyone have a link to a site that teaches basic stuff like this? My book does not explain anything and I have no teacher; this is self-studies. Please help me out; thanks!
Given $$A: x^2 + x - 2$$ you're trying to do the 'magic' in your head in order to get backwards to $$B: (x+2)(x-1)$$ What is it that you are trying to do backwards? It's the original multiplication of $(x+2)(x-1)$. Note that * *the -2 in $A$ comes from multiplying the +2 and -1 in $B$ *the +1 (it's kind of invisible; it's the coefficient of $x$) in $A$ comes from: * *$x$ in the first part times -1 in the second, plus *+2 in the first part times $x$ in the second or $(-1)+2 = +1$. So that's how the multiplication works going forward. Now you have to think of that to go backwards. In $x^2 + x - 2$: * *where does the -2 come from? From two things that multiply to get -2. What could those possibly be? Usually we assume integers so the only possibilities are the two pairs 2, -1, and -2, 1. *of those two pairs, they have to -add- to the coefficient for $x$ or just plain positive 1. So the answer has to be the pair 2 and -1. Another example might help: given $$x^2-5x+6$$ what does this factor to? (that is, find $(x-a)(x-b)$ which equals $x^2 -5x + 6$). So the steps are: * *what are the factors of 6? (you should get 2 pairs, all negative). *for those pairs, which pair adds up to -5? The main difficulty is keeping track in your head of what is multiplying, what is adding, and what is positive and negative. The pattern for any sort of problem solving skill like this that seems like magic (but really is not) is to: * *Do more examples to get a speedier feel for it. *Check your work. Since you're going backwards, once you get a possible answer, you can do the non-magic (multiplying) to see if you can get the original item in the question.
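The two steps (divisor pairs of the constant term, then matching the middle coefficient) can be written out directly, a small sketch:

```python
def factor_monic_quadratic(b, c):
    """Find integers (p, q) with x^2 + b*x + c = (x + p)(x + q), if any."""
    for p in range(-abs(c) - 1, abs(c) + 2):
        if p != 0 and c % p == 0:   # p must divide the constant term c
            q = c // p
            if p + q == b:          # and the pair must add up to b
                return p, q
    return None

print(factor_monic_quadratic(1, -2))   # (-1, 2), i.e. (x - 1)(x + 2)
print(factor_monic_quadratic(-5, 6))   # (-3, -2), i.e. (x - 3)(x - 2)
```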
{ "language": "en", "url": "https://math.stackexchange.com/questions/39917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 8, "answer_id": 7 }
How do get combination sequence formula? What would be a closed-form formula that would determine the ith value of the sequence 1, 3, 11, 43, 171... where each value is one minus the product of the previous value and 4? Thanks!
The sequence can be written using the recurrence formula: $y_n = a+by_{n-1}$. Then using the first 3 terms, one gets $y_1=1, a=-1, b=4$. Sometimes it is easy to convert a recurrence formula to a closed form, by studying the structure of the math relations. $y_1 = 1$ $y_2 = a+by_1$ $y_3 = a+by_2 = a+b(a+by_1) = a+ab+b^2y_1$ $y_4 = a+by_3 = a+b(a+by_2) = a+ab+ab^2+b^3y_1$ $y_5 = a+ab+ab^2+ab^3+b^4y_1$ there is a geometric sequence $a+ab+ab^2.. = a(1+b+b^2...) $ $ (1+b+b^2...b^n) = (1-b^{n+1}) / (1-b) $ therefore the general formula is: $y_n = a+ab+ab^2+...+ab^{n-2}+b^{n-1}y_1$ $y_n = a(1-b^{n-1}) / (1-b) + b^{n-1}y_1 $ Using the above parameters, $y_1=1 , a=-1, b=4, $ $y_n = -1(1-4^{n-1})/(1-4) +4^{n-1}y_1$ $y_n = -\frac{4^{n-1}}{3} +\frac13 +4^{n-1}y_1$ $y_n = \frac{2}{3}4^{n-1} + \frac{1}{3} $
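The closed form can be checked against the recurrence with exact integer arithmetic:

```python
def closed_form(n):
    # y_n = (2/3) * 4**(n-1) + 1/3, derived above; exact with integers
    return (2 * 4 ** (n - 1) + 1) // 3

y = 1
for n in range(1, 15):
    assert closed_form(n) == y
    y = 4 * y - 1          # the recurrence y_n = 4*y_{n-1} - 1 (a = -1, b = 4)
print([closed_form(n) for n in range(1, 6)])  # [1, 3, 11, 43, 171]
```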
{ "language": "en", "url": "https://math.stackexchange.com/questions/40036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Push forward and pullback in products I am reading this Questions about Serre duality, and there is one part in the answer that I'd like to know how it works. But after many tries I didn't get anywhere. So here is the problem. Let $X$ and $B$ be algebraic varieties over an algebraically closed field, $\pi_1$ and $\pi_2$ be the projections from $X\times B$ onto $X$ and $B$, respectively. Then it was claimed that $R^q\pi_{2,*} \pi_1^* \Omega_X^p \cong H^q(X, \Omega^p_X)\otimes \mathcal{O}_B$. I am guessing it works for any (quasi)coherent sheaf on $X$. Basically, I have two tools available, either Proposition III8.1 of Hartsshorne or going through the definition of the derived functors. Thank you.
Using flat base-change (Prop. III.9.3 of Hartshorne), applied to the fiber square in which $\pi_2 : X\times B\to B$ is the base change of $f : X\to \operatorname{Spec} k$ along $g : B\to \operatorname{Spec} k$, one sees that $$R^q\pi_{2 *} \pi_1^*\Omega_X^p \cong g^* R^q f_* \Omega^p_X \cong H^q(X,\Omega^p_X)\otimes \mathcal O_B.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/40106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transitive groups Someone told me the only transitive subgroup of $A_6$ that contains a 3-cycle and a 5-cycle is $A_6$ itself. (1) What does it mean to be a "transitive subgroup?" I know that a transitive group action is one where if you have a group $G$ acting on a set $X$, you can get from any element of $X$ to any other element of $X$ by applying some element of $G$. Is a transitive subgroup just any group that acts transitively on a set? And if so, does its transitivity depend on the set it's acting on? (2) Why is $A_6$ the only transitive subgroup of $A_6$ that contains a 3-cycle and a 5-cycle? Thank you for your help :)
Let $H\leq A_6$ be transitive and contain a 3-cycle and a 5-cycle. Suppose, for contradiction, that $H\neq A_6$, and let us compute $|H|$. $|H|$ is divisible by 15 and divides $360=|A_6|$, so it is one of $\{15,30,45,60,90,120,180\}$. * *$|H|$ cannot be 90, 120, or 180, since otherwise we would get a subgroup of $A_6$ of index less than $6$; as $A_6$ is simple, the action on cosets would embed $A_6$ into some $S_k$ with $k<6$, which is impossible. *$|H|$ cannot be 15, since a group of order 15 is cyclic, so $A_6$ would have an element of order 15 — but a permutation of order 15 needs disjoint 3- and 5-cycles, requiring 8 points. *$|H|$ cannot be 45, since a group of order 45 is abelian and so contains an element of order 15. *$|H|$ cannot be 30, since a group of order 30 has a normal Sylow 5-subgroup, and so contains a subgroup of order 15, hence an element of order 15. Hence $|H|$ must be $60$. Now use the 3-cycle: since $H$ has order 60 and is transitive on the six points, each point stabilizer has order $60/6=10$. A 3-cycle in $H$ fixes three of the six points, so it lies in a point stabilizer — but a group of order 10 has no element of order 3. (This is where the 3-cycle hypothesis is really needed: the 5-cycles alone give no contradiction, since $A_5\cong PSL(2,5)$ does act transitively on 6 points and contains 5-cycles, though its order-3 elements act as products of two disjoint 3-cycles.) This contradiction rules out $|H|=60$, so we must have $H=A_6$.
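As an aside, the statement can be checked by brute force. Here is a small Python sketch (the particular generators, a 3-cycle $(1\,2\,3)$ and a 5-cycle $(2\,3\,4\,5\,6)$ chosen so that together they act transitively, are my own choice, not from the post): the group they generate has order $360=|A_6|$.

```python
# Permutations of {0,...,5} as tuples: p[i] is the image of i.
c3 = (1, 2, 0, 3, 4, 5)   # the 3-cycle (1 2 3) in 1-based cycle notation
c5 = (0, 2, 3, 4, 5, 1)   # the 5-cycle (2 3 4 5 6)

def compose(p, q):
    # Apply q first, then p.
    return tuple(p[q[i]] for i in range(6))

# Closure under multiplication by the generators; in a finite group this
# yields exactly the subgroup they generate (inverses are powers).
group = {c3, c5}
frontier = [c3, c5]
while frontier:
    g = frontier.pop()
    for h in (c3, c5):
        for w in (compose(g, h), compose(h, g)):
            if w not in group:
                group.add(w)
                frontier.append(w)

print(len(group))   # 360, i.e. all of A_6
```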
{ "language": "en", "url": "https://math.stackexchange.com/questions/40188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Characterization of linear independence by wedge product Let $V$ be a vector space of finite dimension. Show that $x_1,...,x_k$ is linearly independent iff $x_1\wedge ... \wedge x_k \neq 0$.
Hint for one direction: if there is a linear dependence, one of the $x_i$ is a linear combination of the others. Then substitute into $x_1\wedge\cdots \wedge x_k$. Hint for the other direction: You can do row operations $x_i\mapsto x_i+rx_j$ for $i\neq j$ without affecting the wedge $x_1\wedge\cdots\wedge x_k$. Similarly you can divide any $x_j$ by a nonzero scalar without affecting whether $x_1\wedge\cdots\wedge x_k$ is nonzero. I'm not sure what properties you already know about the wedge. If you know that the wedges $e_{i_1}\wedge\cdots \wedge e_{i_k}$, $i_1<i_2<\cdots<i_k$, form a basis for $\wedge^kV$ when $\{e_i\}$ is a basis for $V$, then you're home free.
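Filling in the first hint a bit: suppose, after relabeling, $x_1=\sum_{i=2}^{k}c_i x_i$. Then multilinearity and the alternating property give $$x_1\wedge x_2\wedge \cdots \wedge x_k=\sum_{i=2}^{k}c_i\,\bigl(x_i\wedge x_2\wedge \cdots \wedge x_k\bigr)=0,$$ since every summand contains the factor $x_i$ twice.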
{ "language": "en", "url": "https://math.stackexchange.com/questions/40263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Primitive polynomials of finite fields There are two primitive polynomials which I can use to construct $GF(2^3)=GF(8)$: $p_1(x) = x^3+x+1$ $p_2(x) = x^3+x^2+1$ $GF(8)$ created with $p_1(x)$: 0 1 $\alpha$ $\alpha^2$ $\alpha^3 = \alpha + 1$ $\alpha^4 = \alpha^3 \cdot \alpha=(\alpha+1) \cdot \alpha=\alpha^2+\alpha$ $\alpha^5 = \alpha^4 \cdot \alpha = (\alpha^2+\alpha) \cdot \alpha=\alpha^3 + \alpha^2 = \alpha^2 + \alpha + 1$ $\alpha^6 = \alpha^5 \cdot \alpha=(\alpha^2+\alpha+1) \cdot \alpha=\alpha^3+\alpha^2+\alpha=\alpha+1+\alpha^2+\alpha=\alpha^2+1$ $GF(8)$ created with $p_2(x)$: 0 1 $\alpha$ $\alpha^2$ $\alpha^3=\alpha^2+1$ $\alpha^4=\alpha \cdot \alpha^3=\alpha \cdot (\alpha^2+1)=\alpha^3+\alpha=\alpha^2+\alpha+1$ $\alpha^5=\alpha \cdot \alpha^4=\alpha \cdot(\alpha^2+\alpha+1)=\alpha^3+\alpha^2+\alpha=\alpha^2+1+\alpha^2+\alpha=\alpha+1$ $\alpha^6=\alpha \cdot (\alpha+1)=\alpha^2+\alpha$ So now let's say I want to add $\alpha^2 + \alpha^3$ in both fields. In field 1 I get $\alpha^2 + \alpha + 1$ and in field 2 I get $1$. Multiplication is the same in both fields ($\alpha^i \cdot \alpha^j = \alpha^{(i+j)\bmod(q-1)}$). So is it the case that when some $GF(q)$ is constructed with different primitive polynomials, the addition tables will vary while the multiplication tables stay the same? Or maybe one of the presented polynomials ($p_1(x), p_2(x)$) is not valid for constructing the field (although both are primitive)?
The generator $\alpha$ for your field with the first description cannot be equal to the generator $\beta$ for your field with the second description. An isomorphism between $\mathbb{F}_2(\alpha)$ and $\mathbb{F}_2(\beta)$ is given by taking $\alpha \mapsto \beta + 1$; you can check that $\beta + 1$ satisfies $p_1$ iff $\beta$ satisfies $p_2$.
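The final check can be done mechanically. In a short Python sketch (my own; polynomials over $\mathbb{F}_2$ stored as coefficient lists, constant term first) one verifies the identity $p_1(x+1)=p_2(x)$, which is exactly the statement that $\beta+1$ satisfies $p_1$ iff $\beta$ satisfies $p_2$:

```python
def pmul(a, b):
    # Multiply two polynomials over GF(2).
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] ^= ai & bj
    return r

def padd(a, b):
    # Add (= subtract) two polynomials over GF(2), padding to equal length.
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

def compose(p, q):
    # Evaluate the polynomial p at the polynomial q.
    result, power = [0], [1]
    for c in p:
        if c:
            result = padd(result, power)
        power = pmul(power, q)
    return result

p1 = [1, 1, 0, 1]   # 1 + x + x^3
p2 = [1, 0, 1, 1]   # 1 + x^2 + x^3
print(compose(p1, [1, 1]))   # p1(x+1) = [1, 0, 1, 1] = p2
```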
{ "language": "en", "url": "https://math.stackexchange.com/questions/40326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Finding a vector in Euclidean space that minimizes a loss function subject to some constraints I'm trying to solve the following minimization problem, and I'm sure there must be a standard methodology that I could use, but so far I couldn't find any good references. Please let me know if you have anything in mind that could help or any references that you think would be useful for tackling this problem. Suppose you are given $K$ points, $p_i \in R^n$, for $i \in \{1,\ldots,K\}$. Assume also that we are given $K$ constants $\delta_i$, for $i \in \{1,\ldots,K\}$. We want to find the vector $x$ that minimizes: $\min_{x \in R^n} \sum_{i=1,\ldots,K} || x - p_i ||^2$ subject to the following $K$ constraints: $\frac{ || x - p_i ||^2 } { \sum_{j=1,\ldots,K} ||x - p_j||^2} = \delta_i$ for all $i \in \{1,\ldots,K\}$. Any help is extremely welcome! Bruno edit: also, we know that $\sum_{i=1,\ldots,K} \delta_i = 1$.
You can try the method of Lagrange multipliers (see the Wikipedia article): form the Lagrangian of the objective with one multiplier per equality constraint and set its gradient with respect to $x$ and the multipliers to zero. Note that since the $\delta_i$ sum to 1, only $K-1$ of your constraints are independent.
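As a numerical illustration (entirely my own sketch, with made-up data; `scipy`'s SLSQP solver here stands in for solving the Lagrangian conditions by hand), one can feed the objective and the equality constraints directly to a constrained optimizer. The constraints may be infeasible for an arbitrary choice of $\delta$, so below $\delta$ is constructed from a known point; and since the $\delta_i$ sum to 1, the last constraint is implied by the others and is dropped.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
K, n = 4, 3
p = rng.normal(size=(K, n))            # the K given points (made-up data)

# Build delta from a known point so that the constraint set is nonempty.
x_feas = rng.normal(size=n)
d = ((x_feas - p) ** 2).sum(axis=1)
delta = d / d.sum()                    # sums to 1 by construction

sq = lambda x: ((x - p) ** 2).sum(axis=1)   # squared distances ||x - p_i||^2

# Equality constraints ||x - p_i||^2 - delta_i * sum_j ||x - p_j||^2 = 0;
# only K-1 of them, since the last follows from the others.
cons = [{"type": "eq",
         "fun": lambda x, i=i: sq(x)[i] - delta[i] * sq(x).sum()}
        for i in range(K - 1)]

res = minimize(lambda x: sq(x).sum(),
               x0=x_feas + 0.05 * rng.normal(size=n),
               constraints=cons, method="SLSQP")
print(res.x, res.fun)
```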
{ "language": "en", "url": "https://math.stackexchange.com/questions/40401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Subset sum problem is NP-complete? If I know correctly, the subset sum problem is NP-complete. Here you have an array of $n$ integers and you are given a target sum $t$; you have to return the numbers from the array which can sum up to the target (if possible). But can't this problem be solved in polynomial time by a dynamic programming method where we construct a table of size $n \times t$ and take cases: say the last number is surely included in the output, and then the target becomes $t - a[n]$; in the other case, the last number is not included, the target remains $t$ but the array becomes of size $n-1$. Hence this way we keep reducing the size of the problem. If this approach is correct, isn't the complexity $n \cdot t$, which is polynomial? And so if this belongs to P and is also NP-complete (from what I hear), then P=NP. Surely I am missing something here. Where is the loophole in this reasoning? Thanks,
If you express the inputs in unary you get a different running time than if you express them in a higher base (binary, most commonly). So the question is: for subset sum, what base is appropriate? In computer science we normally default to the following: * *If the input is a list or collection, we express its size as the number of items *If the input is an integer, we express its size as the number of bits (binary digits) The intuition here is that we want to take the more "compact" representation. So for subset sum, we have a list of size $n$ and a target integer of value $t$, which takes $k \approx \log_2 t$ bits to write down; equivalently, $t$ can be as large as $2^k$. So the running time is $O(n t) = O(n 2^k)$, which is exponential in $k$. But one could also say that $t$ is given in unary. Now the size of $t$'s representation is $t$ itself, and the running time $O(n t)$ is polynomial in $n$ and $t$. In reductions involving subset sum (and other related problems like partition, 3-partition, etc.) we must use a non-unary representation if we want to use it as an NP-hard problem to reduce from.
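For completeness, a sketch of the asker's $O(n\,t)$ dynamic program in Python (my own code, assuming positive integers; the dictionary of reachable sums plays the role of the $n \times t$ table, and the stored indices let us reconstruct a witness subset):

```python
def subset_sum(nums, t):
    # reachable[s] = index of the last number used to reach sum s.
    reachable = {0: None}
    for i, a in enumerate(nums):
        for s in list(reachable):          # snapshot: each number used at most once
            if s + a <= t and s + a not in reachable:
                reachable[s + a] = i
    if t not in reachable:
        return None
    # Walk the backpointers to recover one subset achieving the target.
    subset, s = [], t
    while s:
        i = reachable[s]
        subset.append(nums[i])
        s -= nums[i]
    return subset

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # [5, 4]
print(subset_sum([7, 14], 10))               # None
```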
{ "language": "en", "url": "https://math.stackexchange.com/questions/40454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
What is the symmetry between the definitions of the bounded universal/existential quantifiers? What is the symmetry between the definitions of the bounded universal/existential quantifiers? $\forall x \in A, B(x)$ means $\forall x (x \in A \rightarrow B(x))$ $\exists x \in A, B(x)$ means $\exists x (x \in A \land B(x))$ These make intuitive sense, but I would expect there to be some kind of symmetry between how the definitions of the bounded quantifiers work, and I can't see one. $A \rightarrow B$ means $\lnot A \lor B$ which doesn't seem to have a direct relationship with $A \land B$. What am I missing?
You might think of universal as a mega-intersection, and existential as a mega-union. E.g., if $A=\lbrace x_1,x_2,\dots\rbrace$ then your first formula is $B(x_1)$ and $B(x_2)$ and ..., while the second is $B(x_1)$ or $B(x_2)$ or .... There is also some sort of symmetry in noting that "not for all" is the same as "there exists ... not," and "not there exists" is the same as "for all ... not."
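To spell out the symmetry behind the second observation: the two bounded quantifiers are De Morgan duals of one another, and it is precisely the $\rightarrow$/$\land$ pairing that makes this work. Negating the bounded universal, $$\lnot\,\forall x\,(x\in A\rightarrow B(x))\;\equiv\;\exists x\,\lnot(\lnot(x\in A)\lor B(x))\;\equiv\;\exists x\,(x\in A\land\lnot B(x)),$$ which is exactly the bounded existential applied to $\lnot B$. Had the bounded existential been defined with $\rightarrow$ instead of $\land$, this duality would fail.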
{ "language": "en", "url": "https://math.stackexchange.com/questions/40564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Energy norm. Why is it called that way? Let $\Omega$ be an open subset of $\mathbb{R}^n$. The following $$\lVert u \rVert_{1, 2}^2=\int_{\Omega} \lvert u(x)\rvert^2\, dx + \int_{\Omega} \lvert \nabla u(x)\rvert^2\, dx$$ defines a norm on the $H^1(\Omega)$ space that is sometimes called the energy norm. I don't feel at ease with the physical meaning this name suggests. In particular, I see two quantities of different physical dimensions, $\lvert u(x)\rvert^2$ and $\lvert \nabla u(x)\rvert^2$, being summed together. How can this be physically consistent? Maybe some example could help me here. Thank you.
To expand on my comment (see e.g. http://online.itp.ucsb.edu/online/lnotes/balents/node10.html): The expression which you give can be interpreted as the energy of an $n$-dimensional elastic manifold being elongated in the $n+1$ dimension (e.g. for $n=2$, a membrane in three dimensions); $u$ is the displacement field. Let me put back the units $$E[u]= \frac{a}{2}\int_{\Omega} \lvert u(x)\rvert^2\, dx + \frac{b}{2} \int_{\Omega} \lvert \nabla u(x)\rvert^2\, dx.$$ The first term tries to bring the manifold back to equilibrium (with $u=0$); the second term penalizes fast changes in the displacement. The energy is not homogeneous and involves a characteristic length scale $$\ell_\text{char} = \sqrt{\frac{b}{a}}.$$ This is the scale over which the manifold returns to equilibrium (in space) if elongated at some point. With $b=0$, the manifold would return immediately: you elongate it at some point and infinitesimally close by the manifold is back at $u=0$. With $a=0$ the manifold would never return to $u=0$. Only the competition between $a$ and $b$ leads to the physics which we expect for an elastic manifold. This competition is intimately related to the fact that a characteristic length scale appears. It is important that physical laws are not homogeneous, in order to have characteristic length scales (like $\ell_\text{char}$ in your example, the Bohr radius for the hydrogen problem, $\sqrt{\hbar/m\omega}$ for the quantum harmonic oscillator, ...). The energy of a system only becomes scale invariant in the vicinity of a second order phase transition.
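To make the role of $\ell_\text{char}$ concrete, here is a numerical sketch (my own illustration with made-up parameters, not part of the original answer): minimizing the discretized 1D energy with the displacement pinned to $1$ at one end reproduces the relaxation $u(x)\approx e^{-x/\ell_\text{char}}$.

```python
import numpy as np

# Minimizing E[u] with u(0)=1, u(L)=0 solves the Euler-Lagrange equation
# a*u - b*u'' = 0, whose solution decays like exp(-x/l) with l = sqrt(b/a).
a, b = 1.0, 4.0                  # so l = 2
L, N = 30.0, 600
h = L / N

# Finite differences at interior nodes: a*u_i - b*(u_{i-1}-2u_i+u_{i+1})/h^2 = 0
main = a + 2 * b / h**2
off = -b / h**2
A = (np.diag(np.full(N - 1, main))
     + np.diag(np.full(N - 2, off), 1)
     + np.diag(np.full(N - 2, off), -1))
rhs = np.zeros(N - 1)
rhs[0] = -off                    # accounts for the boundary value u(0) = 1
u = np.linalg.solve(A, rhs)

l = np.sqrt(b / a)
i = int(round(l / h))            # grid index of x = l (u[0] sits at x = h)
print(u[i - 1], np.exp(-1))      # both approximately 0.368
```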
{ "language": "en", "url": "https://math.stackexchange.com/questions/40623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 2, "answer_id": 1 }
Does the converse of "uniform continuity implies preservation of Cauchy sequences" hold? We know that if a function $f$ is uniformly continuous on an interval $I$ and $(x_n)$ is a Cauchy sequence in $I$, then $(f(x_n))$ is a Cauchy sequence as well. Now, I would like to ask the following question: The function $g:(0,1) \rightarrow \mathbb{R}$ has the following property: for every Cauchy sequence $(x_n)$ in $(0,1)$, $(g(x_n))$ is also a Cauchy sequence. Prove that $g$ is uniformly continuous on $(0,1)$. How do we go about doing it?
You can also prove it by contradiction. Suppose that $f$ is not uniformly continuous. Then there exists an $\epsilon >0$ so that for each $\delta>0$ there exist $x,y \in (0,1)$ with $|x-y| < \delta$ and $|f(x)-f(y)| \geq \epsilon$. For each $n$ pick $x_n, y_n$ so that $|x_n-y_n| < \frac{1}{n}$ and $|f(x_n)-f(y_n)| \geq \epsilon$. Pick $x_{k_n}$ a Cauchy subsequence of $x_n$ (possible by Bolzano–Weierstrass, since $(0,1)$ is bounded) and $y_{l_n}$ a Cauchy subsequence of $y_{k_n}$. Then the alternating sequence $x_{l_1}, y_{l_1}, x_{l_2}, y_{l_2},\ldots, x_{l_n}, y_{l_n}, \ldots$ is Cauchy (both subsequences are Cauchy and $|x_{l_n}-y_{l_n}|\to 0$), but $$\left| f(x_{l_n}) - f(y_{l_n}) \right| \geq \epsilon \,,$$ so its image is not Cauchy.
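The classic example lurking behind this proof is worth keeping in mind: $g(x)=\sin(1/x)$ is continuous on $(0,1)$ but not uniformly continuous, and it maps a suitable Cauchy sequence to a non-Cauchy one. A small numerical sketch (my own, not from the answer):

```python
import math

g = lambda x: math.sin(1.0 / x)

# Interleave x_n = 1/(2*pi*n) and y_n = 1/(2*pi*n + pi/2): both tend to 0,
# so the merged sequence is Cauchy in (0,1), yet g sends it to ~0,1,0,1,...
seq = []
for n in range(1, 6):
    seq += [1 / (2 * math.pi * n), 1 / (2 * math.pi * n + math.pi / 2)]

images = [g(t) for t in seq]
print([round(v, 3) for v in images])   # alternates between (approximately) 0 and 1
```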
{ "language": "en", "url": "https://math.stackexchange.com/questions/40676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }