Constructing self-complementary graphs How does one go about systematically constructing a self-complementary graph, on say 8 vertices?
[Added: Maybe everyone else knows this already, but I had to look up my guess to be sure it was correct: a self-complementary graph is a simple graph which is isomorphic to its complement. --PLC]
| Here's a nice little algorithm that, starting from a self-complementary graph $H$ on $4k$ or $4k+1$ vertices, $k = 1, 2, \ldots$, constructs a self-complementary graph on four more vertices (e.g., from a self-complementary graph with $4$ vertices one can construct a self-complementary graph with $8$ vertices; from $5$ vertices, one with $9$ vertices).
See this PDF on constructing self-complementary graphs.
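For concreteness, here is a sketch of mine (Python with networkx) of one standard extension step of this kind, assuming the usual construction: add a path $a$-$b$-$c$-$d$ on four new vertices and join the two path endpoints to every vertex of $H$. The function name is my own, and the linked PDF may present the construction differently.
import networkx as nx

def extend_self_complementary(H):
    # Add a path a-b-c-d on four fresh vertices and join its endpoints
    # a and d to every vertex of H; the result is again self-complementary.
    G = H.copy()
    a, b, c, d = (("new", i) for i in range(4))
    G.add_edges_from([(a, b), (b, c), (c, d)])
    for v in H.nodes:
        G.add_edges_from([(a, v), (d, v)])
    return G

P4 = nx.path_graph(4)               # the 4-vertex path is self-complementary
G8 = extend_self_complementary(P4)  # self-complementary on 8 vertices
assert nx.is_isomorphic(G8, nx.complement(G8))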
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/40745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 2
} |
Convergence of infinite/finite 'root series' Let $S_n=a_1+a_2+a_3+\cdots$ be a series where ${a}_{k}\in \mathbb{R}$, and let $P = \{m \mid m \text{ is a property of } S_n\}$. Based on this information, what can be said of the corresponding root series $R_n=\sqrt{a_1} + \sqrt{a_2} + \sqrt{a_3} + \cdots$?
In particular, if $S_n$ is convergent/divergent then in what circumstances can we say that $R_n$ is also convergent/divergent?
EDIT (1)
Eg:
$$S_n = \frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots$$ we know that the series converges to $1$, while the corresponding root series $$R_n = \frac{\sqrt{1}}{\sqrt{2}}+\frac{\sqrt{1}}{\sqrt{4}}+\frac{\sqrt{1}}{\sqrt{8}}+\cdots$$ also converges (to $1+\sqrt2$, in fact).
We also know that this convergence cannot be generalised to all root series: the series $\sum \frac{1}{n^2}$ converges to $\frac{\pi^2}{6}$, while the corresponding root series $\sum \sqrt{\frac{1}{n^2}} = \sum \frac{1}{n}$ diverges.
My question is: is there a way to determine which 'root series' diverge or converge based only on information about the parent series?
| If $S_n$ is convergent you cannot say anything about $R_n$: for example, if $a_n=1/n^2$ then $R_n$ diverges, while if $a_n=1/2^n$ then $R_n$ converges too.
If $S_n$ diverges then $R_n$ will diverge too, because for $0 \le a < 1$ you have $a \le \sqrt{a}$, so the comparison test applies; and if $a_k \geq 1$ for infinitely many $k$, the terms $\sqrt{a_k}$ do not even tend to $0$. (This reasoning assumes that $a_k \geq 0$, which is needed anyway for $\sqrt{a_k}$ to be real.)
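A quick numerical illustration of the convergent example above (my own sketch; the 60-term cutoff is arbitrary):
S = sum(1 / 2**n for n in range(1, 61))        # partial sum of a_n = 1/2^n
R = sum(1 / 2**(n / 2) for n in range(1, 61))  # partial sum of sqrt(a_n)
print(S, R)                                    # ~1.0 and ~2.41421 = 1 + sqrt(2)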
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/40834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
How do I find roots of a single-variable polynomial whose integer coefficients are symmetric with respect to their respective powers Given a polynomial such as $X^4 + 4X^3 + 6X^2 + 4X + 1,$ where the coefficients are symmetrical, I know there's a trick to quickly find the zeros. Could someone please refresh my memory?
| Hint: This particular polynomial is very nice, and factors as $(X+1)^4$.
Take a look at Pascal's Triangle and the Binomial Theorem for more details.
Added: Overly complicated formula
The particular quartic you asked about had a nice solution, but let's find all the roots of the more general $$ax^{4}+bx^{3}+cx^{2}+bx+a.$$ Since $0$ is not a root, we are equivalently finding the zeros of
$$ax^{2}+bx+c+bx^{-1}+ax^{-2}.$$ Let $z=x+\frac{1}{x}$ (as suggested by Aryabhatta). Then $z^{2}=x^{2}+2+x^{-2}$, so that $$ax^{2}+bx+c+bx^{-1}+ax^{-2}=az^{2}+bz+\left(c-2a\right).$$ The roots of this are given by the quadratic formula: $$\frac{-b+\sqrt{b^{2}-4a\left(c-2a\right)}}{2a},\ \frac{-b-\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}.$$ Now, we then have $$x+\frac{1}{x}=\frac{-b\pm\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}$$
and hence we have the two quadratics $$x^{2}+\frac{b+\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}x+1=0,$$ $$x^{2}+\frac{b-\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}x+1=0.$$ This then gives the four roots:$$\frac{-b+\sqrt{b^{2}-4a\left(c-2a\right)}}{4a}\pm\sqrt{\frac{1}{4}\left(\frac{b-\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}\right)^2-1}$$
$$\frac{-b-\sqrt{b^{2}-4a\left(c-2a\right)}}{4a}\pm\sqrt{\frac{1}{4}\left(\frac{b+\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}\right)^2-1}.$$
If we plug in $a=1$, $b=4$, $c=6$, we find that all four of these are exactly $-1$, so our particular case does work out.
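As a sanity check, here is a small sketch of mine (Python/NumPy) comparing the closed-form roots above against a direct numerical solve, for an arbitrary palindromic quartic:
import numpy as np

a, b, c = 2.0, 3.0, 5.0                    # the quartic ax^4 + bx^3 + cx^2 + bx + a
disc = np.sqrt(b**2 - 4*a*(c - 2*a) + 0j)  # +0j so complex roots are handled too
roots = []
for s in (+1, -1):                         # the two quadratics x^2 + px + 1 = 0
    p = (b - s*disc) / (2*a)
    roots += [(-b + s*disc)/(4*a) + t*np.sqrt((p/2)**2 - 1) for t in (+1, -1)]
print(sorted(roots, key=abs))
print(sorted(np.roots([a, b, c, b, a]), key=abs))  # should agree up to ordering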
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/40864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 6,
"answer_id": 4
} |
Weak limit of an $L^1$ sequence We have functions $f_n\in L^1$ such that $\int f_ng$ has a limit for every $g\in L^\infty$. Does there exist a function $f\in L^1$ such that the limit equals $\int fg$? I think this is not true in general (really? - why?), then can this be true if we also know that $f_n$ belong to a certain subspace of $L^1$?
| Perhaps surprisingly, the answer is yes.
More generally, given any Banach space $X$, a sequence $\{x_n\} \subset X$ is said to be weakly Cauchy if, for every $\ell \in X^*$, the sequence $\{\ell(x_n)\} \subset \mathbb{R}$ (or $\mathbb{C}$) is Cauchy. If every weakly Cauchy sequence is weakly convergent, $X$ is said to be weakly sequentially complete.
Every reflexive Banach space is weakly sequentially complete (a nice exercise with the uniform boundedness principle). $L^1$ is not reflexive, but it turns out to be weakly sequentially complete anyway. This theorem can be found in P. Wojtaszczyk, Banach spaces for analysts, as Corollary 14 on page 140. It works for $L^1$ over an arbitrary measure space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/40920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Is the factorization problem harder than RSA factorization ($n = pq$)? Let $n \in \mathbb{N}$ be a composite number, and $n = pq$ where $p,q$ are distinct primes. Let $F : \mathbb{N} \rightarrow \mathbb{N} \times \mathbb{N}$ (*) be an algorithm which takes as an input $x \in \mathbb{N}$ and returns two primes $u, v$ such that $x = uv,$ or returns FAIL if there is no such factorization ($F$ uses, say, an oracle). That is, $F$ solves the RSA factorization problem. Note that whenever a prime factorization $x = uv$ exists for $x,$ $F$ is guaranteed to find it.
Can $F$ be used to solve the prime factorization problem in general? (i.e. given
$n \in \mathbb{N},$ find primes $p_i \in \mathbb{N},$ and integers $e_i \in \mathbb{N},$ such that $n = \prod_{i=0}^{k} p_{i}^{e_i}$)
If yes, how? A rephrased question would be: is the factorization problem harder than factoring $n = pq$?
(*) abuse of the function type notation. More appropriately, $F : \mathbb{N} \rightarrow (\mathbb{N} \times \mathbb{N}) \cup \{\mathrm{FAIL}\}$.
Edit 1: $F$ can determine $p,q,$ or FAIL in polynomial time. The general factoring algorithm is required to be polynomial time.
Edit 2: The question is now cross-posted on cstheory.SE.
| Two vague reasons I think the answer must be "no":
If there were any inductive reason that we could factor a number with $k$ prime factors in polynomial time given the ability to factor a number with $k-1$ prime factors in polynomial time, then the AKS primality test has already provided a base case. So semiprime factorization would have to be considered as a new base case for anything like this to work.
The expected number of prime factors is on the order of $\log(\log n)$, which is unbounded although it grows very slowly. So for sufficiently large $n$ there is unlikely to be a prime or a semiprime which differs from it by less than any given constant. For large enough $k$, it seems like the ability to factor $pq$ won't help us factor $pq+k$, similarly to how the ability to prove $p$ is prime won't help us factor $p+k$.
Interesting question. I hope someone more knowledgeable than me can answer this with a reference and a decisive statement.
EDIT: I found this paper entitled Breaking RSA May Be Easier Than Factoring which argues for a "no" answer and states the problem is open.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/40971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Simple (even toy) examples for uses of Ordinals? I want to describe Ordinals using as much low-level mathematics as possible, but I need examples in order to explain the general idea. I want to show how certain mathematical objects are constructed using transfinite recursion, but can't think of anything simple and yet not artificial looking. The simplest natural example I have are Borel sets, which can be defined via transfinite recursion, but I think it's already too much (another example are Conway's Surreal numbers, but that again may already be too much).
| Some accessible applications of transfinite induction could be the following (depending on what the audience already knows):
*Defining the addition, multiplication (or even exponentiation) of ordinal numbers by transfinite recursion and then showing some of their basic properties. (Probably most of the claims for addition and multiplication can be proved more easily in a non-inductive way.)
*$a\cdot a=a$ holds for every cardinal $a\ge\aleph_0$. E.g. Ciesielski: Set theory for the working mathematician, Theorem 5.2.4, p. 69. Using the result that any two cardinals are comparable, this implies $a\cdot b=a+b=\max\{a,b\}$. See e.g. here
*The proof that the Axiom of Choice implies Zorn's lemma. (This implication is understood as a theorem in ZF - in all other bullets we work in ZFC.)
*Proof of Steinitz theorem - every field has an algebraically closed extension. E.g. Antoine Chambert-Loir: A field guide to algebra, Theorem 2.3.3, proof is given on p.39-p.40.
*Some constructions of interesting subsets of the plane are given in Ciesielski's book, e.g. Theorem 6.1.1, in which a set $A\subseteq\mathbb R\times\mathbb R$ is constructed such that $A_x=\{y\in\mathbb R; (x,y)\in A\}$ is a singleton for each $x$ and $A^y=\{x\in\mathbb R; (x,y)\in A\}$ is dense in $\mathbb R$ for every $y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
} |
Conditional probability Given the events $A, B$ the conditional probability of $A$ supposing that $B$ happened is:
$$P(A | B)=\frac{P(A\cap B )}{P(B)}$$
Can we write that for the Events $A,B,C$, the following is true?
$$P(A | B\cap C)=\frac{P(A\cap B\cap C )}{P(B\cap C)}$$
I have couple of problems with the equation above; it doesn't always fit my logical solutions.
If it's not true, I'll be happy to hear why.
Thank you.
| Yes you can; I see no fault. If you put $K = B \cap C$, you recover the original definition: $P(A \mid K)=P(A\cap K)/P(K)$.
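As a quick sanity check (my own toy example), one can verify the identity by brute-force enumeration on two dice:
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # sample space: two fair dice
A = {o for o in outcomes if sum(o) == 7}         # the sum is 7
B = {o for o in outcomes if o[0] % 2 == 0}       # first die is even
C = {o for o in outcomes if o[1] >= 4}           # second die is at least 4
P = lambda E: len(E) / len(outcomes)
print(len(A & B & C) / len(B & C))  # condition by restricting the sample space
print(P(A & B & C) / P(B & C))      # the formula with K = B ∩ C; same value, 1/9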
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Compound angle formula confusion I'm working through my book, on the section about compound angle formulae. I've been made aware of the identity $\sin(A + B) \equiv \sin A\cos B + \cos A\sin B$. Next task was to replace B with -B to show $\sin(A - B) \equiv \sin A\cos B - \cos A \sin B$ which was fairly easy. I'm struggling with the following though:
"In the identity $\sin(A - B) \equiv \sin A\cos B - \cos A\sin B$, replace A by $(\frac{1}{2}\pi - A)$ to show that $\cos(A + B) \equiv \cos A\cos B - \sin A\sin B$."
I've got $\sin((\frac{\pi}{2} - A) - B) \equiv \cos A\cos B - \sin A\sin B$ by replacing $\sin(\frac{\pi}{2} - A)$ with $\cos A$ and $\cos(\frac{\pi}{2} - A)$ with $\sin A$ on the RHS of the identity. It's just the LHS I'm stuck with and don't know how to manipulate to make it $\cos(A + B)$.
P.S. I know I'm asking assistance on extremely trivial stuff, but I've been staring at this for a while and don't have a tutor so hope someone will help!
| Note that you can also establish:
$$\sin\left(\left(\frac{\pi}{2} - A\right) - B\right) =\sin\left(\frac{\pi}{2} - (A + B)\right) = \cos(A+B)$$ by using the second identity you figured out above, $\sin(A - B) \equiv \sin A\cos B - \cos A\sin B$, giving you:
$$\sin\left(\left(\frac{\pi}{2} - A\right) - B\right) = \sin\left(\frac{\pi}{2} - (A+B)\right)$$ $$ = \sin\left(\frac{\pi}{2}\right)\cos(A+B) - \cos\left(\frac{\pi}{2}\right)\sin(A+B)$$ $$= (1)\cos(A+B) - (0)\sin(A+B)$$ $$ = \cos(A+B)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
2D Epanechnikov Kernel What is the equation for the $2D$ Epanechnikov Kernel?
The following doesn't look right when I plot it.
$$K(x,y) = \frac{3}{4} \left(1 - \left(\left(\frac{x}{\sigma} \right)^2 + \left(\frac{y}{\sigma}\right)^2\right) \right)$$
I get this:
| I have an equation for some p-D Epanechnikov Kernel.
Maybe you will find it useful.
$$
K(\hat{x})=\begin{cases} \frac{1}{2}C_p^{-1}(p+2)\left(1-\|\hat{x}\|^2\right) & \|\hat{x}\|<1\\ 0 & \text{otherwise} \end{cases}
$$
where $\hat{x}$ is a vector with $p$ dimensions and $C_p$ is defined as:
$$C_1 = 2,\quad C_2=\pi,\quad C_3=\frac{4\pi}{3}.$$
I would like to see an equation for $C_p$ for every $p$.
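(For what it's worth, the three constants listed match the volume of the unit ball in $p$ dimensions, so presumably $C_p = \pi^{p/2}/\Gamma\left(\frac{p}{2}+1\right)$ in general.) A sketch of the kernel in Python — my own code, including the support cutoff that the formula in the question is missing:
import numpy as np
from math import gamma, pi

def epanechnikov(x):
    # Multivariate Epanechnikov kernel; x is a length-p vector.
    x = np.asarray(x, dtype=float)
    p = x.size
    C_p = pi**(p / 2) / gamma(p / 2 + 1)  # volume of the unit p-ball
    r2 = np.dot(x, x)
    return 0.5 * (p + 2) / C_p * (1 - r2) if r2 < 1 else 0.0

print(epanechnikov([0.0, 0.0]))  # 2/pi ~ 0.6366 at the origin for p = 2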
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Equivalent Definitions of Positive Definite Matrix As Wikipedia tells us, a real $n \times n$ symmetric matrix $G = [g_{ij}]$ is positive definite if $v^TGv >0$ for all $0 \neq v \in \mathbb{R}^n$. By a well-known theorem of linear algebra it can be shown that $G$ is positive definite if and only if the eigenvalues of $G$ are positive. Therefore, this gives us two distinct ways to say what it means for a matrix to be positive definite.
In Amann and Escher's Analysis II, exercise 7.1.8 seems to provide yet another way recognize a positive definite matrix. In this exercise, $G$ is defined to be positive definite if there exists a positive number $\gamma$ such that
$$
\sum\limits_{i,j = 1}^n g_{ij}v^iv^j \geq \gamma |v|^2
$$
I have not before seen this characterization of a positive definite matrix and I have not been successful at demonstrating that this characterization is equivalent to the other two characterizations listed above.
Can anyone provide a hint how one might proceed to demonstrate this apparent equivalence or suggest a reference that discusses it?
| Let's number the definitions:
*$v^T G v > 0$ for all nonzero $v$.
*$G$ has positive eigenvalues.
*$v^T G v > \gamma v^T v$ for some $\gamma > 0$.
You know that 1 and 2 are equivalent. It's not hard to see that 3 implies 1. So it remains to show that either 1 or 2 implies 3. A short proof: 2 implies 3 because we can take $\gamma$ to be, say, half the smallest eigenvalue of $G$.
Another short proof: 1 implies 3 because 3 is equivalent to the condition that $v^T G v > \gamma$ for all $v$ on the unit sphere. But the unit sphere is compact, so if $v^T G v$ is positive on the unit sphere, it attains a positive minimum.
(I'd like to take the time to complain about definition 2. It is a misleading definition in that the statement it is describing makes sense for all matrices, but it is not equivalent to the first definition in this generality. The problem is that positive-definiteness is a property of a bilinear form $V \times V \to \mathbb{R}$, whereas eigenvalues are a property of an endomorphism $V \to V$, and in complete generality there's no natural way to turn one into the other. To do this you really need something like an inner product.)
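A quick numerical illustration (my own sketch) that $\gamma$ can indeed be taken from the smallest eigenvalue:
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
G = M @ M.T + np.eye(5)                # symmetric positive definite
lam_min = np.linalg.eigvalsh(G).min()  # smallest eigenvalue
for _ in range(1000):
    v = rng.standard_normal(5)
    assert v @ G @ v >= lam_min * (v @ v) - 1e-9  # v^T G v >= lambda_min |v|^2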
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Limit of monotonic functions at infinity I understand that if a function is monotonic then the limit at infinity is either $\infty$,a finite number or $-\infty$.
If I know the derivative is bigger than $0$ for every $x$ in $[0, \infty)$ then I know that $f$ is monotonically increasing but I don't know whether the limit is finite or infinite.
If $f'(x) \geq c$ and $c \gt 0$ then I know the limit at infinity is infinity and not finite, but why? How do I say that if the limit of the derivative at infinity is greater than zero, then the limit is infinite?
| You can also prove it directly by the Mean Value Theorem:
$$f(x)-f(0)=f'(\alpha)(x-0) \geq cx$$ for some $\alpha \in (0, x)$.
Thus $f(x) \geq cx + f(0)$, which tends to $\infty$ as $x \to \infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Outer product of a vector with itself Is there a special name for an outer product of a vector with itself? Is it a special case of a Gramian? I've seen them a thousand times, but I have no idea if such product has a name.
Update:
The case of outer product I'm talking about is $\vec{u}\vec{u}^T$ where $\vec{u}$ is a column vector.
Does it have a name in the form of something of $\vec{u}$?
Cheers!
| In statistics, we call it the "sample autocorrelation matrix", which is like an estimation of autocorrelation matrix based on observed samples.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Symmetric and diagonalizable matrix-Jacob method: finding $p$ and $q$ Given this symmetric matrix-$A$:
$\begin{pmatrix}
14 &14 & 8 &12 \\
14 &17 &11 &14 \\
8& 11 &11 &10 \\
12 & 14 &10 & 12
\end{pmatrix}$
I need to find $p,q$ such that $p$ is the number of $1$'s and $q$ is the number of $-1$'s
in the diagonal matrix $D_{p,q} = \mathrm{Diag}\{1,1,\ldots,1,-1,-1,\ldots,-1,0,0,\ldots,0\}$,
where $D=P^{t}AP$ and $P$ is the matrix that contains the eigenvectors of $A$ as columns.
I tried to use the Jacobi method, but I found out that $|A|=0$, so I can't use it; I do know now that $0$ is an eigenvalue of $A$. So do I really need to compute $P$ in order to find $p$ and $q$? It's a very long and messy process.
Thank you
| The characteristic polynomial of $A$ is $P(x)= x^4 - 54x^3 + 262x^2 - 192x$. It has $0$ as a simple root, and the other three roots are positive. Therefore $A$ has three positive eigenvalues and one equal to zero. Since the signature can be obtained from the signs of the eigenvalues, we are done. Therefore $p=3,q=0$.
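One can confirm the signature numerically (a quick sketch, independent of the Jacobi method):
import numpy as np

A = np.array([[14, 14,  8, 12],
              [14, 17, 11, 14],
              [ 8, 11, 11, 10],
              [12, 14, 10, 12]])
print(np.linalg.eigvalsh(A))  # one eigenvalue ~0 and three positive, so p = 3, q = 0
print(np.poly(A))             # coefficients of the characteristic polynomial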
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Easy Proof Adjoint(Compact)=Compact I am looking for an easy proof that the adjoint of a compact operator on a Hilbert space is again compact. This makes the big characterization theorem for compact operators (i.e. compact iff image of unit ball is relatively compact iff image of unit ball is compact iff norm limit of finite rank operators) much easier to prove, provided that you have already developed spectral theory for C*-algebras.
By the way, I'm using the definition that an operator $T\colon H \to H$ is compact if and only if given any [bounded] sequence of vectors $(x_n)$, the image sequence $(Tx_n)$ has a convergent subsequence.
edited for bounded
| Here is an alternative proof, provided that you know that an operator is compact iff it is the operator-limit of a sequence of finite-rank operators.
Let $T: H \to H$ be a compact operator. Then $T= \lim_n T_n$, where the limit is with respect to the operator norm and each $T_n$ is a finite-rank operator. Using that the $*$-involution is continuous, we get
$$T^*= \lim_n T_n^*$$
where $T_n^*$ is also a finite rank operator for all $n$. Thus $T^*$ is the limit of finite-rank operators and it follows that $T^*$ is compact as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 0
} |
Doubt in Discrete-Event System Simulation by Jerry Banks,4th Edition I'm new to the Math forum here, so pardon my question if it seems juvenile to some. I've googled intensively,gone through wikipedia,wolfram and after hitting dead ends everywhere have resorted to this site.
My query is this-
In chapter 8, "Random-Variate Generation", the problems are required to use a sequence of random numbers obtained from Table A.1.
But I find no correlation between the random numbers used and the numbers in the table.
So how are the numbers generated exactly? Are they assumed??
Table A.1 is on page 501 in this link
http://books.google.com/books?id=b0lgHnfe3K0C&pg=PA501&lpg=PA501&dq=78166+82521&source=bl&ots=nR33GcAzGF&sig=9LQjAPyGxDDxz1QLsEeMwN_UytA&hl=en&ei=3TTeTbPyNoqJrAe6zPGOCg&sa=X&oi=book_result&ct=result&resnum=6&ved=0CDUQ6AEwBTgo#v=onepage&q&f=false
And the random numbers used in my problem are :
R1=0.8353
R2=0.9952
R3=0.8004
How do you get these values of R1, R2, R3 from the table in the link?
If you can't view the table from the link up there, the table is as in the image shown below:
| Here is a hypothesis. Since only three coefficients are obtained from a whole bunch of data, these could summarize some properties of the sample considered. Statisticians often use the symbol R2 for a coefficient of determination, which, roughly speaking, measures the proportion of variability in a data set.
On the positive side, these are by definition between 0 and 1, like yours. On the negative side, one would still have to understand how one sample gave rise to three coefficients; perhaps the whole sample was split into three. (I was not able to check the pages around Table A.1 because I have no preview access to this book on Google Books.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Intuitive explanation of the tower property of conditional expectation I understand how to define conditional expectation and how to prove that it exists.
Further, I think I understand what conditional expectation means intuitively. I can also prove the tower property, that is if $X$ and $Y$ are random variables (or $Y$ a $\sigma$-field) then we have that
$$\mathbb E[X] = \mathbb{E}[\mathbb E [X | Y]].$$
My question is: What is the intuitive meaning of this? It seems quite puzzling to me.
(I could find similar questions but not this one.)
| For simple discrete situations from which one obtains most basic intuitions, the meaning is clear.
I have a large bag of biased coins. Suppose that half of them favour heads with probability of head $0.7$, two-fifths with probability of head $0.8$, and the rest with probability of head $0.9$.
Pick a coin at random, toss it, say once. To find the expected number of heads, calculate the expectations, given the various biasing possibilities. Then average the answers, taking into consideration the proportions of the various types of coin.
It is intuitively clear that this formal procedure "should" give about the same answer as the highly informal process of say repeating the experiment $1000$ times, and dividing by $1000$. For if we do that, in about $500$ cases we will get the first type of coin, and out of these $500$ we will get about $350$ heads, and so on. The informal arithmetic mirrors exactly the more formal process described in the preceding paragraph.
If it is more persuasive, we can imagine tossing the chosen coin $12$ times.
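Concretely, with the numbers above (the rest make up one-tenth of the bag): $\mathbb E[X] = \mathbb E[\mathbb E[X \mid \text{coin}]] = 0.5(0.7) + 0.4(0.8) + 0.1(0.9) = 0.76$ heads per toss. A quick simulation sketch of mine agrees:
import random

def one_toss():
    p = random.choices([0.7, 0.8, 0.9], weights=[0.5, 0.4, 0.1])[0]  # pick a coin
    return random.random() < p                                       # toss it once

n = 10**6
print(sum(one_toss() for _ in range(n)) / n)  # ~0.76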
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 8,
"answer_id": 2
} |
Why can any affine transformation be constructed from a sequence of rotations, translations, and scalings? A book on CG says:
... we can construct any affine transformation from a sequence of rotations, translations, and scalings.
But I don't know how to prove it.
Even in a particular case, I found it still hard. For example, how to construct
a shear transformation from a sequence of rotations, translations, and scalings?
Can you please help? Thank you.
EDIT:
Axis scalings may use different scaling factors for the axes.
Is there a matrix representation or proof for this?
For example, to show that a two-dimensional rotation can be decomposed into three shear transformation, we can write
$$
\begin{pmatrix}
\cos\alpha & \sin\alpha\\
-\sin\alpha & \cos\alpha
\end{pmatrix}
=
\begin{pmatrix}
1 & \tan\frac{\alpha}{2}\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
-\sin\alpha & 1
\end{pmatrix}
\begin{pmatrix}
1 & \tan\frac{\alpha}{2}\\
0 & 1
\end{pmatrix}
$$
| Perhaps using the singular value decomposition?
For the homogeneous case (linear transformation), we can always write
$y = A x = U D V^t x$
for any square matrix $A$ with positive determinant, where $U$ and $V$ are orthogonal and $D$ is diagonal with positive real entries. $U$ and $V$ would then be the rotations and $D$ the scaling.
Some (trivial?) details to polish: what if $A$ has negative determinant, and what if $U$ and $V$ are not pure rotations but also involve axis reflections.
It only remains to add the independent term to get the affine transformation ($y = Ax +b$), and that would be the translation.
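For instance, here is a sketch (Python/NumPy) decomposing a shear — the example from the question — into rotation · scaling · rotation:
import numpy as np

A = np.array([[1.0, 1.0],   # a shear
              [0.0, 1.0]])
U, D, Vt = np.linalg.svd(A)                 # A = U @ diag(D) @ Vt
print(np.linalg.det(U), np.linalg.det(Vt))  # +/-1: rotations or reflections
print(D)                                    # the two scaling factors
print(U @ np.diag(D) @ Vt)                  # reassembles the shear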
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Slick way to define p.c. $f$ so that $f(e) \in W_{e}$ Is there a slick way to define a partial computable function $f$ so that $f(e) \in W_{e}$ whenever $W_{e} \neq \emptyset$? (Here $W_{e}$ denotes the $e^{\text{th}}$ c.e. set.) My only solution is to start by defining $g(e) = \mu s [W_{e,s} \neq \emptyset]$, where $W_{e,s}$ denotes the $s^{\text{th}}$ finite approximation to $W_{e}$, and then set
$$
f(e) = \begin{cases}
\mu y [y \in W_{e, g(e)}] &\text{if } W_{e} \neq \emptyset \\
\uparrow &\text{otherwise},
\end{cases}
$$
but this is ugly (and hence not slick).
| Perhaps the reason your solution seems ugly to you is that you appear to be excessively concerned with the formalism of representing your computable function in terms of the $\mu$ operator. The essence of computability, however, does not lie with this formalism, but rather with the idea of a computable procedure. It is much easier and more enlightening to see that a function is computable simply by describing an algorithm that computes it, and such kind of arguments are pervasive in computability theory. (One can view them philosophically as instances of the Church-Turing thesis.)
The set $W_e$ consists of the numbers that are eventually accepted by program $e$. These are the computably enumerable sets, in the sense that there is a uniform computable procedure to enumerate their elements.
We may now define the desired function $f$ by the following computable procedure: on input $e$, start enumerating $W_e$. When the first element appears, call it $f(e)$.
It is now clear both that $f$ is computable and that $f(e)\in W_e$ whenever $W_e$ is not empty, as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is an integer uniquely determined by its multiplicative order mod every prime Let $x$ and $y$ be nonzero integers and $\mathrm{ord}_p(w)$ be the multiplicative order of $w$ in $ \mathbb{Z} /p \mathbb{Z} $. If $\mathrm{ord}_p(x) = \mathrm{ord}_p(y)$ for all primes (Edit: not dividing $x$ or $y$), does this imply $x=y$?
| [This is an answer to the original form of the question. In the meantime the question has been clarified to refer to the multiplicative order; this seems like a much more interesting and potentially difficult question, though I'm pretty sure the answer must be yes.]
I may be missing something, but it seems the answer is a straightforward no. All non-identity elements in $\mathbb{Z} /p \mathbb{Z}$ have the same order $p$, which is different from the order $1$ of the identity element; so saying that all the orders are the same amounts to saying that $x$ and $y$ are divisible by the same primes. But different powers of the same prime, e.g. $x=2$ and $y=4$, are divisible by the same primes, and hence have the same orders.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 4,
"answer_id": 1
} |
Find a first order sentence in $\mathcal{L}=\{0,+\}$ which is satisfied by exactly one of $\mathbb{Z}\oplus \mathbb{Z}$ and $\mathbb{Z}$ I'm re-reading some material and came to a question, paraphrased below:
Find a first order sentence in $\mathcal{L}=\{0,+\}$ which is satisfied by exactly one of the structures $(\mathbb{Z}\oplus \mathbb{Z}, (0,0), +)$ and $(\mathbb{Z}, 0, +)$.
At first I was thinking about why they're not isomorphic as groups, but the reasons I can think of mostly come down to $\mathbb{Z}$ being generated by one element while $\mathbb{Z}\oplus \mathbb{Z}$ is generated by two, but I can't capture this with such a sentence.
I'm growing pessimistic about finding a sentence satisfied in $\mathbb{Z}\oplus \mathbb{Z}$ but not in the other, since every relation I've thought of between some vectors in the plane seems to just be satisfied by integers, seen by projecting down on an axis.
In any case, this is getting kind of frustrating because my guess is there should be some simple statement like "there exists three nonzero vectors that add to 0 in the plane, but there doesn't exist three nonzero numbers that add to 0 in the integers" (note: this isn't true).
| Here's one:
$$
(\forall x)(\forall y)\Bigl[(\exists z)(x=z+z) \lor (\exists z)(y=z+z) \lor (\exists z)(x+y=z+z)\Bigr]
$$
This sentence is satisfied in $\mathbb{Z}$, since one of the numbers $x$, $y$, and $x+y$ must be even. It isn't satisfied in $\mathbb{Z}\oplus\mathbb{Z}$, e.g. if $x=(1,0)$, $y=(0,1)$, and $x+y=(1,1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$\lim (a + b)\;$ when $\;\lim(b)\;$ does not exist? Suppose $a$ and $b$ are functions of $x$. Is it guaranteed that
$$
\lim_{x \to +\infty} a + b\text{ does not exist}
$$
when
$$
\lim_{x \to +\infty} a = c\quad\text{and}\quad
\lim_{x \to +\infty} b\text{ does not exist ?}
$$
| Suppose, to get a contradiction, that our limit exists. That is, suppose $$\lim_{x\rightarrow \infty} a(x)+b(x)=d$$ exists. Then since $$\lim_{x\rightarrow \infty} -a(x)=-c,$$ and as limits are additive, we conclude that $$\lim_{x\rightarrow \infty} a(x)+b(x)-a(x)=d-c$$ which means $$\lim_{x\rightarrow \infty} b(x)=d-c.$$ But this is impossible since we had that $b(x)$ did not tend to a limit.
Hope that helps,
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Sorting a deck of cards with Bogosort Suppose you have a standard deck of 52 cards which you would like to sort in a particular order. The notorious algorithm Bogosort works like this:
*Shuffle the deck
*Check if the deck is sorted. If it's not sorted, goto 1. If it's sorted, you're done.
Let $B(n)$ be the probability that Bogosort sorts the deck in $n$ shuffles or less. $B(n)$ is a monotonically increasing function which converges toward $1$. What is the smallest value of $n$ for which $B(n)$ exceeds, say, $0.9$?
If the question is computationally infeasible then feel free to reduce the number of cards in the deck.
| An estimate. The probability that Bogosort doesn't sort the deck in a particular shuffle is $1 - \frac{1}{52!}$, hence $1 - B(n) = \left( 1 - \frac{1}{52!} \right)^n$. Since
$$\left( 1 - \frac{x}{n} \right)^n \approx e^{-x}$$
for large $n$, the above is approximately equal to $e^{- \frac{n}{52!} }$, hence $B(n) \approx 0.9$ when
$$- \frac{n}{52!} \approx \log 0.1 \approx -2.30.$$
This gives
$$n \approx 2.30 \cdot 52! \approx 2.30 \cdot \sqrt{104\pi} \left( \frac{52}{e} \right)^{52} \approx 1.87 \times 10^{68}$$
by Stirling's approximation. By comparison, the current age of the universe is about $4.33 \times 10^{17}$ seconds, which is time for about $4.33 \times 10^{32}$ floating-point operations if your computer runs at $1$ petaflops.
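For a deck small enough to be feasible, the exact threshold can be computed directly (a sketch of mine; 52 cards is hopeless, so it uses 5):
from math import factorial, log, ceil

def shuffles_needed(cards, prob=0.9):
    # Smallest n with B(n) = 1 - (1 - 1/cards!)^n >= prob.
    p = 1 / factorial(cards)
    return ceil(log(1 - prob) / log(1 - p))

print(shuffles_needed(5))    # 276 shuffles suffice for a 5-card deck
print(2.30 * factorial(52))  # ~1.86e68, matching the estimate above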
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/41948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that any shape of 1 unit area can be placed on a tiled surface Given a surface of equal square tiles where each tile side is 1 unit long, prove that a single shape $A$, of any form but with area just less than 1 square unit, can be placed on the surface without touching a vertex of any tile. The shape $A$ may have holes.
| Project $A$ onto a single square by "stacking" all of the squares in the plane. Then translating $A$ on this square corresponds to moving $A$ on a torus with surface area one. As the area of $A$ is less than one, there must be some point which it does not cover. Then choose that point to be the four corners of the square, and unravel the torus.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Proving an integer $3n+2$ is odd if and only if the integer $9n+5$ is even How can I prove that the integer $3n+2$ is odd if and only if the integer $9n+5$ is even, where n is an integer?
I suppose I could set $9n+5 = 2k$, to prove it's even, and then do it again as $9n+5=2k+1$
Would this work?
| HINT: $\; 3(3n+2) - (9n+5) = 1$
Alternatively note that their sum $\rm\:12\:n + 7\:$ is odd, so they have opposite parity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Calculate Line Of Best Fit Using Exponential Weighting? I know how to calculate a line of best fit with a set of data.
I want to be able to exponentially weight the data that is more recent so that the more recent data has a greater effect on the line.
How can I do this?
| Most linear least squares algorithms let you set the measurement error of each point. Errors in point $i$ are then weighted by $\frac{1}{\sigma_i}$. So assign a smaller measurement error to more recent points. One algorithm is available for free in the obsolete version of Numerical Recipes, chapter 15.
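For example, a minimal sketch (Python/NumPy, with an arbitrary decay rate of my choosing) that weights recent points exponentially more:
import numpy as np

x = np.arange(20, dtype=float)  # older points first, newest last
y = 2.0 * x + 1.0 + np.random.default_rng(0).normal(0, 1, x.size)
decay = 0.9                     # smaller value -> old points matter less
w = decay ** (x.max() - x)      # weight 1 for the newest point
slope, intercept = np.polyfit(x, y, 1, w=w)  # polyfit multiplies residuals by w
print(slope, intercept)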
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proving that $\lim\limits_{x\to\infty}f'(x) = 0$ when $\lim\limits_{x\to\infty}f(x)$ and $\lim\limits_{x\to\infty}f'(x)$ exist I've been trying to solve the following problem:
Suppose that $f$ and $f'$ are continuous functions on $\mathbb{R}$, and that $\displaystyle\lim_{x\to\infty}f(x)$ and $\displaystyle\lim_{x\to\infty}f'(x)$ exist. Show that $\displaystyle\lim_{x\to\infty}f'(x) = 0$.
I'm not entirely sure what to do. Since there's not a lot of information given, I guess there isn't very much one can do. I tried using the definition of the derivative and showing that it went to $0$ as $x$ went to $\infty$ but that didn't really work out. Now I'm thinking I should assume $\displaystyle\lim_{x\to\infty}f'(x) = L \neq 0$ and try to get a contradiction, but I'm not sure where the contradiction would come from.
Could somebody point me in the right direction (e.g. a certain theorem or property I have to use?) Thanks
| Hint: If you assume $\lim _{x \to \infty } f'(x) = L \ne 0$, the contradiction would come from the mean value theorem (consider $f(x)-f(M)$ for a fixed but arbitrary large $M$, and let $x \to \infty$).
Explained: if the limit of $f(x)$ exists, then $f$ has a horizontal asymptote. Since the limit of $f'(x)$ is also assumed to exist, the slope must settle down to the asymptote's slope, which is $0$; by the mean value theorem, a nonzero limiting slope would force $f$ off to $\pm\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55",
"answer_count": 6,
"answer_id": 4
} |
Find control point on piecewise quadratic Bézier curve I need to write an OpenGL program to generate and display a piecewise quadratic Bézier curve that interpolates each set of data points:
$$(0.1, 0), (0, 0), (0, 5), (0.25, 5), (0.25, 0), (5, 0), (5, 5), (10, 5), (10, 0), (9.5, 0)$$
The curve should have continuous tangent directions, the tangent direction at each data point being a convex combination of the two adjacent chord directions.
I am not good at math; can anyone give me some suggestions about what formula I can use to calculate the control point for a Bézier curve if I have a starting point and an ending point.
Thanks in advance
| You can see that it will be difficult to solve this satisfactorily by considering the case where the points to be interpolated are at the extrema of a sinusoidal curve. Any reasonable solution should have horizontal tangents at the points, but this is not possible with quadratic curves.
Peter has described how to achieve continuity of the tangents with many arbitrary choices. You can reduce those choices to a single choice by requiring continuity in the derivatives, not just their directions (which determine the tangents). This looks nice formally, but it can lead to rather wild curves, since a single choice of control point at one end then determines all the control points (since you now have to take equal steps on both sides of the points in Peter's method), and these may end up quite far away from the original points – again, take the case of the extrema of a sinusoidal; this will cause the control points to oscillate more and more as you propagate them.
What I would try in order to get around these problems, if you really have to use quadratic Bézier curves, is to use some good interpolation method, e.g. cubic splines, and calculate intermediate points between the given points, along with tangent directions at the given points and the intermediate points. Then you can draw quadratic Bézier curves through all the points, given and intermediate, and determine control points by intersecting the tangents. This wouldn't work without the intermediate points, because the tangents might not intersect at reasonable points – again, think of the extrema of a sinusoidal, where the desired tangents are in fact parallel – but I think it should work with the intermediate points – for instance, in the sinusoidal example, the intermediate points would be at the inflection points of the sinusoidal, and the tangents would intersect at suitable control points.
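A sketch of that last step (my own code): given two endpoints with tangent directions, the middle control point of the quadratic Bézier segment is the intersection of the two tangent lines, which fails exactly when the tangents are parallel, as discussed above:
import numpy as np

def control_point(p0, d0, p2, d2):
    # Solve p0 + t*d0 = p2 + s*d2 for (t, s); singular if d0, d2 are parallel.
    t, s = np.linalg.solve(np.column_stack([d0, -d2]), p2 - p0)
    return p0 + t * d0

p0, d0 = np.array([0.0, 0.0]), np.array([1.0, 1.0])   # endpoint and tangent
p2, d2 = np.array([2.0, 0.0]), np.array([1.0, -1.0])
print(control_point(p0, d0, p2, d2))                  # [1. 1.]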
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Is -5 bigger than -1? In everyday language people often mix up "less than" and "smaller than" and in most situations it doesn't matter but when dealing with negative numbers this can lead to confusion.
I am a mathematics teacher in the UK and there are questions in national GCSE exams phrased like this:
Put these numbers in order from smallest to biggest: 3, -1, 7, -5, 13, 0.75
These questions are in exams designed for low ability students and testing their knowledge of place value and ordering numbers and the correct solution in the exam would be: -5, -1, 0.75, 3, 7, 13.
I think if the question says "smallest to biggest" the correct solution should be 0.75, -1, 3, -5, 7, 13. Even though it doesn't seem to bother most people, I think the precise mathematical language is important and "smallest to biggest" should be avoided but if it is used it should refer to the absolute value of the numbers.
So my question is: Which is bigger, -5 or -1?
| Like all too many test questions, the quoted question is a question not about things but about words.
Roughly speaking the same question will have appeared on these exams since before the students were born. And in their homework and quizzes, students will have seen the question repeatedly.
Let's assume that the student has a moderately comfortable knowledge of the relative sizes of positive integers. It is likely that the student has in effect been trained to use the following algorithm to deal with questions like the one quoted.
*Arrange the numbers without a $-$ (the "real" numbers, negatives are not really real) in the right order.
*Put all the things with a $-$ to the left of them, in the wrong order. Why? Because your answer is then said to be right.
*goto next question
Even if there has been a serious attempt by the teacher to discuss the "whys," at the test taking level, the whys play essentially no role.
The OP's suggestion that "size" might be more intuitively viewed as distance from $0$ is a very reasonable one. That is part of what gives the ordering question some bite. Students who follow their intuition can be punished for not following the rules.
Sadly, in our multiple choice world, questions are often designed to exploit vulnerabilities and ambiguities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 4
} |
Why are the periods of these permutations often 1560? I ran across a math puzzle that went like this:
Consider the list $1,9,9,3, \cdots$ where the next entry is equal to the sum mod 10 of the prior 4. So the list begins $1,9,9,3,2,3,7,\cdots$. Will the sequence $7,3,6,7$ ever occur?
(Feel free to pause here and solve this problem for your own amusement if you desire. Spoiler below.)
So the answer is "yes", and we can solve this by noticing that the function to derive the next digit is invertible so we can derive digits going to the left as well. Going left, we find $7,3,6,7$ pretty quickly.
I wrote a program and found that the period (equivalently the length of the permutation's cycle) is 1560. But surprisingly (to me) altering the starting sequence from 1,9,9,3 to most any other sequence left the period at 1560. There are a few cases where it changes; for example, starting with 4,4,4,4 we get a period of length only 312.
So, my question: what's special about 1560 here?
Note: This feels a lot like LFSRs, but I don't know much about them.
| Your recurrence is linear, in that you can add two sequences together term by term (mod 10) and still have a sequence satisfying the recurrence. The period of $(0,0,0,1)$ is 1560, so all periods will be divisors of that. To get 1560 you just have to avoid the shorter cycles.
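A quick sketch to reproduce those numbers:
def period(seed):
    # Cycle length of the "sum of the last 4, mod 10" map from a 4-digit state.
    state = start = tuple(seed)
    n = 0
    while True:
        state = state[1:] + (sum(state) % 10,)
        n += 1
        if state == start:
            return n

print(period([1, 9, 9, 3]))  # 1560
print(period([0, 0, 0, 1]))  # 1560
print(period([4, 4, 4, 4]))  # 312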
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Is it possible for a function $f : \mathbb{R} \to \mathbb{R}$ to have a maximum at every point in a countable dense subset of its domain? Is it possible for a function $f : \mathbb{R} \to \mathbb{R}$ to have a maximum at every point in a countable dense subset of its domain? The motivation for this question is that I have a sequence of functions $\{f_n\}$ where the number of maxima increases with $n$, and I am interested to know what happens to the sequence of functions.
PS: every function of the sequence has a finite number of maxima.
EDIT: $f$ should not be a constant function.
| Sample paths of Brownian motion have this property (with probability $1$), see here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Finding double coset representatives in finite groups of Lie type Is there a standard algorithm for finding the double coset representatives of $H_1$ \ $G/H_2$, where the groups are finite of Lie type?
Specifically, I need to compute the representatives when $G=Sp_4(\mathbb{F}_q)$ (I'm using $J$ the anti diagonal with top two entries $1$, and the other two $-1$), $H_1$ is the parabolic with $4=2+2$, and $H_2=SL_2(\mathbb{F}_q)\ltimes H$, where $H$ is the group of matrices of the form:
$$\begin{bmatrix} 1&x&y&z \\\\ 0&1&0&y \\\\ 0&0&1&-x \\\\ 0&0&0&1 \end{bmatrix}$$
which is isomorphic to the Heisenberg group, and $SL_2$ is embedded in $Sp_4$ as:
$$\begin{bmatrix} 1&&& \\\\ &a&b& \\\\ &c&d& \\\\ &&&1 \end{bmatrix}$$
| Many such questions yield to using Bruhat decompositions, and often succeed over arbitrary fields (which shows how non-computational it may be). Let $P$ be the parabolic with Levi component $GL(2)\times SL(2)$. Your second group misses being the "other" maximal proper parabolic $Q$ only insofar as it misses the $GL(1)$ part of the Levi component. Your double coset space fibers over $P\backslash G/Q$. It is not trivial, but it is true, that $P\backslash G/Q$ is in bijection with $W_P\backslash W/W_Q$, with $W$ the Weyl group and the subscripted versions the intersections with the two parabolics. This is perhaps the chief miracle here. Since the missing $GL(1)$ is normalized by the Weyl group, the fibering is trivial. Then some small bit of care is needed to identify the Weyl group double coset elements correctly (since double coset spaces do not behave as uniformly as "single" coset spaces). In this case, the two smaller Weyl groups happen to be generated by the reflections attached to the two simple roots, and the Weyl group has a reasonable description as words in these two generators.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/42995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Angle of a javelin at any given moment I am using the following formula to draw the trajectory of a javelin (this is very basic, I am not taking into consideration the drag, etc.).
speedX = Math.Cos(InitialAngle) * InitialSpeed;  // horizontal velocity (constant, no drag)
speedY = Math.Sin(InitialAngle) * InitialSpeed;  // initial vertical velocity
javelin.X = speedX * timeT;                      // horizontal position at time timeT
javelin.Y = speedY * timeT - 0.5 * g * Math.Pow(timeT, 2);  // vertical position under gravity
How do I know at what angle my javelin is for a given timeT?
| I am making the assumption that the javelin is pointed exactly in the direction of its motion. (This seems dubious, but may be a close enough approximation for your purposes).
The speed in the X direction is constant, but the speed in the Y direction is $\text{speedY} -g\cdot \text{timeT}$. So the direction of motion makes an angle $\theta$ with the positive X direction satisfying $$\tan\theta=\frac{\text{speedY}-g\cdot\text{timeT}}{\text{speedX}}.$$ If the initial angle is in $\left(0,\frac{\pi}{2}\right)$, then $\theta$ always lies in $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, and you can use the ordinary $\arctan$ function to get $\theta$.
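In code (a Python sketch mirroring the snippet in the question; math.atan2 also handles the vertical cases $\pm\frac{\pi}{2}$):
import math

def javelin_angle(initial_speed, initial_angle, g, t):
    # Angle of the velocity vector (radians) at time t, drag ignored.
    vx = math.cos(initial_angle) * initial_speed  # constant horizontal speed
    vy = math.sin(initial_angle) * initial_speed - g * t
    return math.atan2(vy, vx)

print(math.degrees(javelin_angle(20.0, math.radians(45), 9.81, 1.0)))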
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Real-world applications of prime numbers? I am going through the problems from Project Euler and I notice a strong insistence on Primes and efficient algorithms to compute large primes efficiently.
The problems are interesting per se, but I am still wondering what the real-world applications of primes would be.
What real tasks require the use of prime numbers?
Edit: A bit more context to the question:
I am trying to improve myself as a programmer, and having learned a few good algorithms for calculating primes, I am trying to figure out where I could apply them.
The explanations concerning cryptography are great, but is there nothing else that primes can be used for?
| Thought I'd mention an application (or more like an explicit effect, rather than a direct application) that prime numbers have on computing fast Fourier transforms (FFTs), which are of fundamental use to many fields (e.g. signal processing, electrical engineering, computer vision).
It turns out that most algorithms for computing FFTs go fastest on inputs of power-of-two size and slowest on those of prime size. This effect is not small; in fact, it is often recommended, when memory is not an issue compared to time, to pad one's input to a power of 2 (increasing the input size to earn a speedup).
Papers on this have been written: e.g. see Discrete Fourier transforms when the number of data samples is prime by Rader.
And github issues like this suggest it is still an issue.
Very specific algorithms (e.g. see this one using the Chinese remainder theorem for cases where the size is a product of relatively prime factors) have been developed that, in my opinion, constitute some relevancy of primality to these applications.
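A quick way to see the effect (a sketch; exact ratios depend on the library and machine — modern implementations fall back to algorithms such as Bluestein's for prime sizes, so the gap is real but not catastrophic):
import time
import numpy as np

for n in (2**20, 999983):  # a power of two vs a prime of similar size
    x = np.random.rand(n)
    t0 = time.perf_counter()
    np.fft.fft(x)
    print(n, time.perf_counter() - t0, "s")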
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 19,
"answer_id": 8
} |
Convex hull problem with a twist I have a 2D set of points and would like to determine the subset of points which, if joined together with lines, would result in an edge below which none of the points in the set exist.
This problem resembles the convex hull problem, but is fundamentally different in its definition.
One approach to determine these points might be to evaluate the cross-product of only $x_1$, $x_2$ and $x_3$, where $x_1$ is on the 'hull', $x_2$'s 'hull'-ness is being evaluated and $x_3$ is another point on the set (all other points in the set should yield positive cross products if $x_2$ is indeed on the hull), with the additional constraint that $x_1 < x_2$ in one dimension.
I realize that this algorithm is not entirely perfect; the plot below shows that some valid points would be missed as a result of the convex hull constraint. How else can I define this edge?
Hope the question is clear.
| It looks like you are looking for the lower [convex] hull. Some algorithms such as the Andrew's variant of Graham Scan actually compute this and compute the upper hull and then merge these two to obtain the convex hull. Andrew's algorithm can also be seen as a sweep algorithm, so if you want a quick implementation, you could just a vertical sweep algorithm (see the Wiki link for details).
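A compact sketch of the lower-hull half of Andrew's monotone chain:
def lower_hull(points):
    # Lower convex hull of 2D points, left to right (Andrew's monotone chain).
    hull = []
    for p in sorted(set(points)):
        # Pop the last kept point while it sits above the chain to the new point.
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0]-o[0])*(p[1]-o[1]) - (a[1]-o[1])*(p[0]-o[0])
            if cross <= 0:  # not a strict left turn: a is not on the lower hull
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

print(lower_hull([(0, 0), (1, 2), (2, -1), (3, 3), (4, 0)]))  # [(0, 0), (2, -1), (4, 0)]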
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A simple question about Iwasawa Theory There has been a lot of talk over the decades about Iwasawa Theory being a major player in number theory, and one of the most important object in said theory is the so-called Iwasawa polynomial. I have yet to see an example anywhere of such a polynomial. Is this polynomial hard/impossible to compute? I've read the definition in the standard literature, however, none of the texts/books/papers that I've seen provide any examples of this polynomial. Sigh...
Any sightings of those polynomials out there? I would appreciate some feedback on this. Thanks.
| Here is a function written for Pari/GP which computes Iwasawa polynomials. See in particular the note.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Evaluate $\sum\limits_{k=1}^n k^2$ and $\sum\limits_{k=1}^n k(k+1)$ combinatorially
$$\text{Evaluate } \sum_{k=1}^n k^2 \text{ and } \sum_{k=1}^{n}k(k+1) \text{ combinatorially.}$$
For the first one, I was able to express $k^2$ in terms of the binomial coefficients by considering a set $X$ of cardinality $2k$ and partitioning it into two subsets $A$ and $B$, each with cardinality $k$. Then, the number of ways of choosing 2-element subsets of $X$ is $$\binom{2k}{2} = 2\binom{k}{2}+k^2$$ So the sum is $$\sum_{k=1}^n k^2 =\sum_{k=1}^n \binom{2k}{2} -2\sum_{k=2}^n \binom{k}{2} $$ $$ \qquad\qquad = \color{red}{\sum_{k=1}^n \binom{2k}{2}} - 2 \binom{n+1}{3} $$ I am stuck at this point, at evaluating the first of the sums. How do I evaluate it?
I need to find a similar expression for $k(k+1)$ for the second sum highlighted above. I have been unsuccessful this far. (If the previous problem is done then so is this, but it would be nice to know if there are better approaches or identities that can be used.)
Update: I got the second one. Consider $$\displaystyle \binom{n+1}{r+1} = \binom{n}{r}+\binom{n-1}{r}+\cdots + \binom{r}{r}$$ Can be shown using recursive definition. Now multiply by $r!$ and set $r=2$
| For the first one, $\displaystyle \sum_{k=1}^{n} k^2$, you can probably try this way.
$$k^2 = \binom{k}{1} + 2 \binom{k}{2}$$
This can be proved using combinatorial argument by looking at drawing $2$ balls from $k$ balls with replacement.
The total number of ways to do this is $k^2$.
The other way to count it is as follows. There are two possible options either you draw the same ball on both trials or you draw different balls on both trials. The number of ways for the first option is $\binom{k}{1}$ and the number of ways for the second option is $\binom{k}{2} \times \left( 2! \right)$
Hence, we have that $$k^2 = \binom{k}{1} + 2 \binom{k}{2}$$
$$\displaystyle\sum_{k=1}^{n} k^2 = \sum_{k=1}^{n} \binom{k}{1} + 2 \sum_{k=1}^{n} \binom{k}{2} $$
The standard combinatorial arguments for $\displaystyle\sum_{k=1}^{n} \binom{k}{1}$ and $\displaystyle\sum_{k=1}^{n} \binom{k}{2}$ give us $\displaystyle \binom{n+1}{2}$ and $\displaystyle \binom{n+1}{3}$ respectively.
Hence, $$ \sum_{k=1}^{n} k^2 = \binom{n+1}{2} + 2 \binom{n+1}{3}$$
For the second case, it is much easier than the first case and in fact this suggests another method for the first case.
$k(k+1)$ is the total number of ways of drawing 2 balls from $k+1$ balls without replacement where the order is important. This is the same as $\binom{k+1}{2} \times \left(2! \right)$
Hence, $$\sum_{k=1}^{n} k(k+1) = 2 \sum_{k=1}^{n} \binom{k+1}{2} = 2 \times \binom{n+2}{3}$$
This suggests a method for the previous problem since $k^2 = \binom{k+1}{2} \times \left(2! \right) - \binom{k}{1}$
(It is easy to give a combinatorial argument for this by looking at drawing two balls from $k+1$ balls without replacement, where one of the balls is hidden during the first draw and added back for the second draw.)
and hence $$\sum_{k=1}^{n} k^2 = 2 \times \binom{n+2}{3} - \binom{n+1}{2} $$
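(All three identities are easy to confirm by machine — a quick sketch:)
from math import comb

for n in range(1, 30):
    assert sum(k*k for k in range(1, n+1)) == comb(n+1, 2) + 2*comb(n+1, 3)
    assert sum(k*k for k in range(1, n+1)) == 2*comb(n+2, 3) - comb(n+1, 2)
    assert sum(k*(k+1) for k in range(1, n+1)) == 2*comb(n+2, 3)
print("all identities check out")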
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 3
} |
Need a hint: prove that $[0, 1]$ and $(0, 1)$ are not homeomorphic I need a hint: prove that $[0, 1]$ and $(0, 1)$ are not homeomorphic without referring to compactness. This is an exercise in a topology textbook, and it comes far earlier than compactness is discussed.
So far my only idea is to show that a homeomorphism would be monotonic, so it would define a poset isomorphism. But there can be no such isomorphism, because there are a minimal and a maximal element in $[0, 1]$, but neither in $(0, 1)$. However, this doesn't seem like the elementary proof the book must be asking for.
| There is no continuous and bijective function $f:(0,1) \rightarrow [0,1]$. In fact, if $f:(0,1) \rightarrow [0,1]$ is continuous and surjective, then $f$ is not injective, as proved in my answer in Continuous bijection from $(0,1)$ to $[0,1]$. This is a consequence of the intermediate value theorem, which is a theorem about connectedness. Are you allowed to use that?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
How can I compute the integral $\int_{0}^{\infty} \frac{dt}{1+t^4}$? I have to compute this integral $$\int_{0}^{\infty} \frac{dt}{1+t^4}$$ to solve a problem in a homework. I have tried in many ways, but I'm stuck. A search in the web reveals me that it can be do it by methods of complex analysis. But I have not taken this course yet. Thanks for any help.
| Let the considered integral be $I$, i.e.
$$I=\int_0^{\infty} \frac{1}{1+t^4}\,dt$$
Under the transformation $t\mapsto 1/t$, the integral is:
$$I=\int_0^{\infty} \frac{t^2}{1+t^4}\,dt \Rightarrow 2I=\int_0^{\infty}\frac{1+t^2}{1+t^4}\,dt=\int_0^{\infty} \frac{1+\frac{1}{t^2}}{t^2+\frac{1}{t^2}}\,dt$$
$$2I=\int_0^{\infty} \frac{1+\frac{1}{t^2}}{\left(t-\frac{1}{t}\right)^2+2}\,dt$$
Next, use the substitution $t-1/t=u \Rightarrow (1+1/t^2)\,dt=du$ to get:
$$2I=\int_{-\infty}^{\infty} \frac{du}{u^2+2}\Rightarrow I=\int_0^{\infty} \frac{du}{u^2+2}=\boxed{\dfrac{\pi}{2\sqrt{2}}}$$
$\blacksquare$
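As a quick numerical cross-check of the boxed value (my addition, using SciPy; not part of the original proof):

    from math import pi, sqrt
    from scipy.integrate import quad

    val, err = quad(lambda t: 1.0 / (1.0 + t**4), 0, float("inf"))
    print(val, pi / (2 * sqrt(2)))  # both ~ 1.1107207345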
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 7,
"answer_id": 5
} |
Quick ways for approximating $\sum_{k=a_1}^{k=a_2}C_{100}^k(\frac{1}{2})^k(\frac{1}{2})^{100-k}$? Consider the following problem:
A fair coin is to be tossed 100 times, with each toss resulting in a head or a tail. Let
$$H:=\textrm{the total number of heads}$$
and
$$T:=\textrm{the total number of tails},$$
which of the following events has the greatest probability?
A. $H=50$
B. $T\geq 60$
C. $51\leq H\leq 55$
D. $H\geq 48$ and $T\geq 48$
E. $H\leq 5$ or $H\geq 95$
What I can think of is the direct calculation:
$$P(a_1\leq H\leq a_2)=\sum_{k=a_1}^{k=a_2}C_{100}^k(\frac{1}{2})^k(\frac{1}{2})^{100-k}$$
Here is my question:
Is there any quick way to solve this problem except the direct calculation?
| Chebyshev's inequality, combined with mixedmath's and some other observations, shows that the answer has to be D without doing the direct calculations.
First, rewrite D as $48 \leq H \leq 52$. A is a subset of D, and because the binomial distribution with $n = 100$ and $p = 0.5$ is symmetric about $50$, C is less likely than D. So, as mixedmath notes, A and C can be ruled out.
Now, estimate the probability of D. We have $P(H = 48) = \binom{100}{48} 2^{-100} > 0.07$. Since $H = 48$ and $H=52$ are equally probable and are the least likely outcomes in D, $P(D) > 5(0.07) = 0.35$.
Finally, $\sigma_H = \sqrt{100(0.5)(0.5)} = 5$. So the two-sided version of Chebyshev says that $P(E) \leq \frac{1}{9^2} = \frac{1}{81}$, since E asks for the probability that $H$ takes on a value 9 standard deviations away from the mean. The one-sided version of Chebyshev says that $P(B) \leq \frac{1}{1+2^2} = \frac{1}{5}$, since B asks for the probability that $H$ takes on a value 2 standard deviations smaller than the mean.
So D must be the most probable event.
Added: OP asks for more on why $P(C) < P(D)$. Since the binomial($100,50$) distribution is symmetric about $50$, $P(H = i) > P(H = j)$ when $i$ is closer to $50$ than $j$ is. Thus $$P(C) = P(H = 51) + P(H = 52) + P(H = 53) + P(H = 54) + P(H = 55)$$ $$< P(H = 50) + P(H=51) + P(H = 49) + P(H = 52) + P(H = 48) = P(D),$$ by directly comparing probabilities.
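For readers who do want the direct calculation for comparison, here is a short Python sketch (my addition) computing all five probabilities exactly; D should come out largest:

    from fractions import Fraction
    from math import comb

    def p_range(lo, hi):  # exact P(lo <= H <= hi) for H ~ Binomial(100, 1/2)
        return sum(Fraction(comb(100, k), 2**100) for k in range(lo, hi + 1))

    probs = {
        "A: H = 50": p_range(50, 50),
        "B: T >= 60": p_range(0, 40),
        "C: 51 <= H <= 55": p_range(51, 55),
        "D: 48 <= H <= 52": p_range(48, 52),
        "E: H <= 5 or H >= 95": p_range(0, 5) + p_range(95, 100),
    }
    for name, p in probs.items():
        print(name, float(p))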
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
How can I solve this infinite sum? I calculated (with the help of Maple) that the following infinite sum is equal to the fraction on the right side.
$$
\sum_{i=1}^\infty \frac{i}{\vartheta^{i}}=\frac{\vartheta}{(\vartheta-1)^2}
$$
However I don't understand how to derive it correctly. I've tried numerous approaches but none of them have worked out so far. Could someone please give me a hint on how to evaluate the infinite sum above and understand the derivation?
Thanks. :)
| Several good methods have been suggested. Here's one more. $$\eqalign{\sum{i\over\theta^i}&={1\over\theta}+{2\over\theta^2}+{3\over\theta^3}+{4\over\theta^4}+\cdots\cr&={1\over\theta}+{1\over\theta^2}+{1\over\theta^3}+{1\over\theta^4}+\cdots\cr&\qquad+{1\over\theta^2}+{1\over\theta^3}+{1\over\theta^4}+\cdots\cr&\qquad\qquad+{1\over\theta^3}+{1\over\theta^4}+\cdots\cr&\qquad\qquad\qquad+{1\over\theta^4}+\cdots\cr&={1/\theta\over1-(1/\theta)}+{1/\theta^2\over1-(1/\theta)}+{1/\theta^3\over1-(1/\theta)}+{1/\theta^4\over1-(1/\theta)}+\cdots\cr}$$ which is a geometric series which you can sum to get the answer.
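A quick numerical check of the closed form (my addition; note the series only converges for $|\vartheta|>1$, and $\vartheta=3$ below is an arbitrary test value):

    theta = 3.0
    partial = sum(i / theta**i for i in range(1, 200))
    closed = theta / (theta - 1) ** 2
    print(partial, closed)  # both 0.75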
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Bounding ${(2d-1)n-1\choose n-1}$ Claim: ${3n-1\choose n-1}\le 6.25^n$.
*

*Why?
*Can the proof be extended to obtain a bound on ${(2d-1)n-1\choose n-1}$, with the bound being $f(d)^n$ for some function $f$?
(These numbers describe the number of some $d$-dimensional combinatorial objects; claim 1 is the case $d=2$, and is not my claim).
| First, let's bound things as easily as possible. Consider the inequality $$\binom{n}{k}=\frac{n!}{k!\,(n-k)!}\leq\frac{n^{k}}{k!}\leq e^{k}\left(\frac{n}{k}\right)^{k}.$$ The $n^{k}$ comes from the fact that $n$ is bigger than each factor of the product $n(n-1)\cdots(n-k+1)$ in the numerator. Also, we know that $k!e^k>k^k$ by looking at the $k^{th}$ term in the Taylor series, as $e^k=1+k+\cdots +\frac{k^k}{k!}+\cdots $.
Now, let's look at the similar $3n$ and $n$ instead of $3n-1$ and $n-1$. Then we see that $$\binom{3n}{n}\leq e^{n}\left(3\right)^{n}=(3e)^{n}<\left(8.16\right)^{n}$$and then for any $k$ we would have $$\binom{kn}{n}\leq\left(ke\right)^{n}.$$
We could use Stirlings formula, and improve this more. What is the most that this can be improved? Apparently, according to Wolfram the best possible is $$\binom{(k+1)n}{n}\leq \left(\frac{(k+1)^{k+1}}{k^k}\right)^n.$$
(Notice that when $k=2$ we have $27/4 = 6.75$, close to the $6.25$ in the claim)
Hope that helps.
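If it helps to build intuition, here is a small Python sketch (my addition) computing $\binom{3n-1}{n-1}^{1/n}$ via log-gamma, for comparison with the constants above:

    from math import lgamma, exp

    def log_binom(a, b):
        return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

    for n in (5, 10, 50, 200, 1000):
        root = exp(log_binom(3 * n - 1, n - 1) / n)
        print(n, round(root, 4))  # approaches 27/4 = 6.75 from below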
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Descriptive examples for beta distribution Do you have descriptive/typical examples of processes whose results are described by a beta distribution? So far I only have one:
You have a population of constant size with N individuals and you observe a single gene (or gene locus).
The descendants in the next generation are drawn from a binomial distribution, so some individuals have several descendants, others have no descendants.
The gene can mutate at a rate $u$ (for example, blue eyes become brown eyes in $10^{-5}$ of the cases in which you draw an individual with blue eyes).
The rate at which brown-eyed individuals have blue-eyed descendants is the same.
The beta distribution describes how likely it is to find X% of the individuals having a certain eye colour. Here $2Nu$ is the value of both parameters of the beta distribution.
Do you have more examples? For which things is the beta distribution used?
Sven
| Completely elementary is the fact that for every positive integers $k\le n$, the distribution of the order statistics of rank $k$ in an i.i.d. sample of size $n$ uniform on the interval $(0,1)$ is beta $(k,n-k+1)$.
Slightly more sophisticated is the fact that, in Bayesian statistics, beta distributions provide a simple example of conjugate priors for binomial proportions. If $U$ has a beta $(\alpha,\beta)$ prior and $X$ conditionally on $U=u$ is binomial $(n,u)$ for every $u$ in $(0,1)$, then the distribution of $U$ conditionally on $X=x$ is beta $(\alpha+x,\beta+n-x)$, so the posterior stays in the beta family. This result is a special case of the multinomial Dirichlet conjugacy.
Still more sophisticated is the fact that beta distributions are stationary distributions of Dubins-Freedman processes. These are Markov chains $(X_t)$ on $(0,1)$ moving from $X_t=x$ to $X_{t+1}=xU_t$ with probability $p$ and to $X_{t+1}=x+(1-x)U_t$ with probability $1-p$, where $p$ is a fixed parameter in $(0,1)$ and the sequence $(U_t)$ is an i.i.d. sequence with values in $(0,1)$. If the distribution of $U_t$ is uniform on $(0,1)$, then $(X_t)$ is ergodic and its stationary distribution is beta $(1-p,p)$. The seminal paper on the subject is due to Dubins and Freedman in the Fifth Berkeley Symposium. Later on, Diaconis and Freedman wrote a very nice survey. And the specific result mentioned above was somewhat generalized here.
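A tiny simulation of the first (order statistics) fact, as a sketch (my addition; the values of $n$ and $k$ are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 10, 3  # the k-th smallest of n i.i.d. uniforms is beta(k, n-k+1)
    samples = np.sort(rng.uniform(size=(100_000, n)), axis=1)[:, k - 1]
    print(samples.mean(), k / (n + 1))  # beta(k, n-k+1) has mean k/(n+1)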
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
equivalent definitions of orientation I know two definitions of an orientation of a smooth n-manifold $M$:
1) A continuous pointwise orientation for $M$.
2) A continuous choice of generators for the groups $H_n(M,M-\{x\})=\mathbb{Z}$.
Why are these two definitions equivalent? In other words, why is a choice of basis of $\mathbb{R}^n$ equivalent to a choice of generator of $H_n(\mathbb{R}^n,\mathbb{R}^n-\{0\})=\mathbb{Z}$?
See comments for precise definitions.
Thanks!
| Recall that an element of $H_n(M,M-\{x\})$ is an equivalence class of singular $n$-chains, where the boundary of any chain in the class lies entirely in $M-\{x\}$. In particular, any generator of $H_n(M,M-\{x\})$ has a representative consisting of a single singular $n$-simplex $\sigma\colon \Delta^n\to M$, whose boundary lies in $M-\{x\}$. Moreover, the map $\sigma$ can be chosen to be a differentiable embedding. (Think of $\sigma$ as an oriented simplex in $M$ that contains $x$.)
Now, the domain $\Delta^n$ of $\sigma$ is the standard $n$-simplex, which has a canonical orientation as a subspace of $\mathbb{R^n}$. Since $\sigma$ is differentiable, we can push this orientation forward via the derivative of $\sigma$ onto the image of $\sigma$ in $M$. This gives a pointwise orientation on a neighborhood of $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Converting a QBF's Matrix into CNF, maintaining equisatisfiability I have a fully quantified boolean formula in Prenex Normal Form $\Phi = Q_1 x_1, \ldots Q_n x_n . f(x_1, \ldots, x_n)$. As most QBF solvers expect $f$ to be in CNF, I use Tseitin's transformation (denoted by $TT$). This does not give an equivalent, but an equisatisfiable formula. Which leads to my question:
Does $Q_1 x_1, \ldots Q_n x_n . f(x_1, \ldots, x_n) \equiv Q_1 x_1, \ldots Q_n x_n . TT(f(x_1, \ldots, x_n))$ hold?
| To use Tseitin's Transformation for predicate formulas, you'll need to add new predicate symbols of the form $A(x_1, ..., x_n)$. Then the formula $Q_1 x_1, ..., Q_n x_n TT(f(x_1,...,x_n))$ will imply "something" about these new predicate symbols, so the logical equivalence (which I assume is what is meant by $\equiv$) does not hold. However $Q_1 x_1 ,..., Q_n x_n TT(f(x_1,...,x_n))$ is a conservative extension of $Q_1 x_1, ..., Q_n x_n f(x_1,...,x_n)$; that is, everything provable from $Q_1 x_1, ..., Q_n x_n TT(f(x_1, ..., x_n))$ that does not use the extra symbols is already provable from $Q_1 x_1, ..., Q_n x_n f(x_1, ..., x_n)$.
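To make this concrete, here is a small worked propositional example (my addition, not from the original answer). Take $f = x_1 \vee (x_2 \wedge x_3)$ and introduce a fresh variable $y$ defined by $y \leftrightarrow (x_2 \wedge x_3)$, so that
$$TT(f) = (x_1 \vee y) \wedge (\lnot y \vee x_2) \wedge (\lnot y \vee x_3) \wedge (y \vee \lnot x_2 \vee \lnot x_3).$$
Then $f$ and $TT(f)$ are not logically equivalent (the latter also constrains $y$), but $f$ and $\exists y\, TT(f)$ are. Accordingly, in the QBF setting one uses $Q_1 x_1 \ldots Q_n x_n\, \exists y_1 \ldots \exists y_m\, TT(f)$, with the fresh Tseitin variables existentially quantified in the innermost block; this preserves the truth value of the original quantified formula.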
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Isomorphism on commutative diagrams of abelian groups Consider the following commutative diagram of homomorphisms of abelian groups $$\begin{array} 00&\stackrel{f_1}{\longrightarrow}&A& \stackrel{f_2}{\longrightarrow}&B& \stackrel{f_3}{\longrightarrow}&C&\stackrel{f_4}{\longrightarrow}& D &\stackrel{f_5}{\longrightarrow}&0\\
\downarrow{g_1}&&\downarrow{g_2}&&\downarrow{g_3}&&\downarrow{g_4}&&\downarrow{g_5}&&\downarrow{g_6}\\
0&\stackrel{h_1}{\longrightarrow}&0& \stackrel{h_2}{\longrightarrow}&E& \stackrel{h_3}{\longrightarrow}&F&\stackrel{h_4}{\longrightarrow} &0 &\stackrel{h_5}{\longrightarrow}&0
\end{array}
$$
Suppose the horizontal rows are exact ($\mathrm{ker}(f_{i+1})=\mathrm{Im}(f_i) $)
Suppose we know that $g_4:C\rightarrow F$ is an isomorphism.
How to deduce that $D=0$?
All what I could get is that $h_3:E\rightarrow F$ is an isomorphism and $f_4:C\rightarrow D$ is surjective.
| This is wrong. Consider
\begin{array}{ccccccccccc}
0 & \to & 0 & \to & 0 & \to & A & \to & A & \to & 0\\
\downarrow & & \downarrow & & \downarrow & & \downarrow & & \downarrow & & \downarrow\\
0 & \to & 0 & \to & A & \to & A & \to & 0 & \to & 0
\end{array}
where all maps $A \to A$ are the identity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If $f(xy)=f(x)f(y)$ then show that $f(x) = x^t$ for some t
Let $f(xy) =f(x)f(y)$ for all $x,y\geq 0$. Show that $f(x) = x^p$ for some $p$.
I am not very experienced with proofs. If we let $g(x)=\log (f(x))$ then this is the same as $g(xy) = g(x) + g(y)$
I looked up the hint and it says let $g(x) = \log f(a^x) $
The wikipedia page for functional equations only states the form of the solutions without proof.
Attempt
Using the hint (which was like pulling a rabbit out of the hat)
Restricting the codomain $f:(0,+\infty)\rightarrow (0,+\infty)$
so that we can define the real function $g(x) = \log f(a^x)$ and we
have $$g(x+y) = g(x)+ g(y)$$
i.e. $g(x) = xg(1)$ as $g(x)$ is continuous (assuming $f$ is).
Letting $\log_a f(a) = p$ we get $f(a^x) = a^{px} = (a^x)^p$. I do not have a rigorous argument but I think I can conclude that $f(x) = x^p$ (please fill any holes or unspecified assumptions). Different solutions are invited
| Both the answers above are very good and thorough, but given an assumption that the function is differentiable, the DE approach strikes me as the easiest.
$ \frac{\partial}{\partial y} f(x y) = x f'(xy) = f(x)f'(y) $
Evaluating y at 1 gives:
$ xf'(x) = f(x)f'(1) $
The above is a separable DE:
Let $ p = f'(1) $ and $ z = f(x) $
$ x\frac{dz}{dx} = pz \implies \int \frac{dz}{z} = p\int \frac{dx}{x}$
$ \therefore \ln|z| = p\ln|x| + C $
Let $ A = e^C $. $ \implies C = \ln(A) $
$ x > 0 \implies |x| = x $
$ \therefore \ln|z| = p\ln(x) + \ln(A) = \ln(x^p) + \ln(A) = \ln(Ax^p) $
Hence $ |z| = Ax^p $; $ z = \pm Ax^p = f(x)$
Let $ B = \pm A $ and now $ f(x) = Bx^p $
Now using the initial property:
$ f(x)f(y) = Bx^p By^p = B^2 (xy)^p = f(xy) = B (xy)^p $
$B^2 = B \implies B $ is $0$ or $1$.
If B is zero, that provides the constant function $ f(x) = 0 $, otherwise the solution is $ f(x) = x^p $.
As can be seen from the other answers, this does not capture all possible solutions, but sometimes that's the price of simplicity.
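As a sketch, the separable equation above can also be handed to a computer algebra system; here is a SymPy check (my addition; the symbol assumptions are just for convenience):

    import sympy as sp

    x, p = sp.symbols("x p", positive=True)
    f = sp.Function("f")

    # Solve x f'(x) = p f(x), the separable ODE derived above.
    print(sp.dsolve(sp.Eq(x * f(x).diff(x), p * f(x)), f(x)))  # f(x) = C1*x**p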
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/43964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 3,
"answer_id": 0
} |
Edge coloring a graph to find a monochromatic $K_{2,n}$ I am trying to prove or disprove the following statement: Let $n>1$ be a positive integer. Then there exists a graph $G$ of size $4n-1$ such that if the edges of $G$ are colored red or blue, no matter in which way, $G$ definitely contains a monochromatic $K_{2,n}$.
I tried to check a few cases in the hope of discovering a counter-example. For $n=2$, $G$ has to have size 7. The graph certainly is of the form "Square + 3 edges". Moreover, it should have the property that if any 3 of the 7 edges are deleted, the remaining graph is a square. I couldn't construct any such graph. Is there any justification why such a graph can't exist, thereby negating the statement?
| The claim does not hold for $n = 2$. Consider the following observations for any graph $G$ hoping to satisfy the claim.
*
*$G$ is a $K_{2,2}$ with three edges appended.
*Without loss of generality, $G$ is connected and has no leaves.
*$G$ has at least five vertices.
Draw a $K_{2,2}$ plus one more vertex. Since the vertex is not isolated and is not a leaf, it has two edges adjoining it to the $K_{2,2}$, which can be done in two nonisomorphic ways. Note now that we have only one edge remaining, so we can't add another vertex, as this would be a leaf. Thus, $G$ has exactly five vertices. The last edge can be added to the two nonisomorphic six-edge graphs in a total of four nonisomorphic ways (two for each - you can check the cases). For each of these candidates, it is easy to find an edge-coloring that avoids a monochromatic $K_{2,2}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Trouble with absolute value in limit proof As usual, I'm having trouble, not with the calculus, but the algebra. I'm using Calculus, 9th ed. by Larson and Edwards, which is somewhat known for racing through examples with little explanation of the algebra for those of us who are rusty.
I'm trying to prove $$\lim_{x \to 1}(x^2+1)=2$$ but I get stuck when I get to $|f(x)-L| = |(x^2+1)-2| = |x^2-1| = |x+1||x-1|$. The solution I found says "We have, in the interval (0,2), |x+1|<3, so we choose $\delta=\frac{\epsilon}{3}$."
I'm not sure where the interval (0,2) comes from.
Incidentally, can anyone recommend any good supplemental material to go along with this book?
| Because of the freedom in the choice of $\delta$, you can always assume $\delta \le 1$; since here $x_0 = 1$, the condition $|x-1|<\delta$ then implies that $x$ belongs to the interval $(0, 2)$.
Edit: $L$ is the limit of $f(x)$ for $x$ approaching $x_0$ iff for every $\epsilon > 0$ there exists a $\delta_\epsilon > 0$ such that:
$$\left\vert f(x) - L\right\vert < \epsilon$$
for each $x$ in the domain of $f$ satisfying $0 < \left\vert x - x_0\right\vert < \delta_\epsilon$.
Now if $\delta_\epsilon$ verifies the above condition, the same happens for each $\delta_\epsilon'$ such that $0 < \delta_\epsilon' < \delta_\epsilon$; therefore we can choose $\delta_\epsilon$ arbitrarily small, in particular less than 1.
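Putting the pieces together for the original problem (my completion, not part of the original answer): given $\epsilon>0$, choose $\delta=\min\{1,\epsilon/3\}$. Then $|x-1|<\delta$ forces $x\in(0,2)$, hence $|x+1|<3$, and so
$$|(x^2+1)-2| = |x+1|\,|x-1| < 3\cdot\frac{\epsilon}{3} = \epsilon.$$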
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How many points in the xy-plane do the graphs of $y=x^{12}$ and $y=2^x$ intersect? The question in the title is equivalent to finding the number of zeros of the function $$f(x)=x^{12}-2^x$$
Geometrically, it is not hard to determine that there is one intersection point in the second quadrant. And when $x>0$, $x^{12}=2^x$ is equivalent to $\log x=\frac{\log 2}{12}x$. There are two intersection points since $\frac{\log 2}{12}<\frac{1}{e}$.
Is there another, quicker way to show this?
Edit: This question is motivated by a GRE math subject test problem which is a multiple-choice one (A. None B. One C. Two D. Three E. Four). Usually, the ability of a student to solve such a problem as quickly as possible is valuable, at least for this kind of test. In this particular case, geometric intuition may be misleading if one simply sketches the curves of the two functions to find the possible intersections.
| If you are solving a multiple-choice test like the GRE you really need fast, intuitive, but certain thinking. I tried to put myself in this rushed frame of mind when I read your question and thought this way: think of $x^{12}$ as something like $x^2$ but growing faster, think of $2^x$ as $e^x$ similarly, and sketch both functions.
It is immediate to see an intersection point for $x<0$ and another for $0<x<b$, for some positive $b$, since the exponential grows more slowly for small $x$ for a while, as the sketched graph suggests. So the answer is at least $2$. In fact it is $3$: after the second intersection point you clearly see the graph of $x^{12}$ over $2^x$, but you should notice that $a^x\gg x^n$ at $+\infty$, and therefore the exponential must take over and give a third intersection point at a really big value of positive $x$. Once this happens the exponential function is growing so fast that a power function cannot catch up, so there are no further intersections.
(To quickly see that $a^x\gg x^n$ at $+\infty$ just calculate $\lim_{x\to\infty}\frac{a^x}{x^n}=+\infty$ using L'Hopital's rule or Taylor expanding the numerator, whose terms are of the form $\log^m(a)\,x^m/m!$).
More rigorously, maybe you can find a way to study the signs of $g(x)=x^{12}-2^x$ using derivatives and monotonicity. There are 4 intervals giving signs + - + -, resulting in 3 points of intersection by the intermediate value theorem. These intervals are straightforwardly seen, as reasoned above, just by sketching the function and taking into account the behavior for big values of $x$. To be sure that there is no other change of sign you must prove that $g'$ is monotone after the third point of intersection. Just after this last point, the graph of $2^x$ can easily be seen over $x^{12}$ and both subfunctions are monotone along with their derivatives: since $2^x>x^{12}\Rightarrow \log(2)2^x>12x^{11}$, which means $g'(x)=12x^{11}-\log(2)2^x$ is indeed monotone afterwards, and therefore there is no fourth intersection.
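For what it's worth, the three real roots are easy to pin down numerically once sign-change brackets are known; a short SciPy sketch (my addition; the brackets come from checking the sign of $g$ at the endpoints):

    from scipy.optimize import brentq

    g = lambda x: x**12 - 2**x
    for a, b in [(-1, 0), (1, 2), (70, 80)]:  # g changes sign on each bracket
        print(brentq(g, a, b))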
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 1
} |
Game theory textbooks/lectures/etc I am looking for good books/lecture notes/etc. to learn game theory. I do not fear the math, so I'm not looking for a "non-mathematical intro" or something like that. Any suggestions are welcome. Just put here any references you've seen and some brief description and/or review. Thanks.
Edit: I'm not constrained to any particular subject. Just want to get a feeling of the type of books out there. Then I can decide what to read. I would like to see here a long list of books on the subject and its applications, together with reviews or opinions of those books.
| Coursera.org offers an excellent game theory course by Dr.s Shoham, Leyton-Brown, and Jackson (https://www.coursera.org/course/gametheory).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 7,
"answer_id": 6
} |
How do I get the square root of a complex number? If I'm given a complex number (say $9 + 4i$), how do I calculate its square root?
| Here is a direct algebraic answer.
Suppose that $z=c+di$, and we want to find $\sqrt{z}=a+bi$ with $a\ge 0$, i.e. lying in the right half-plane. So what are $a$ and $b$?
Precisely we have
$$a=\sqrt{\frac{c+\sqrt{c^{2}+d^{2}}}{2}}$$ and
$$b=\frac{d}{|d|}\sqrt{\frac{-c+\sqrt{c^{2}+d^{2}}}{2}}.$$ (The factor of $\frac{d}{|d|}$ is used so that $b$ has the same sign as $d$; if $d=0$ then $z$ is real and the usual real square roots apply.) To find this, we can use brute force and the quadratic formula. Squaring, we would need to solve $$a^2-b^2 +2abi=c+di.$$ This gives two equations and two unknowns (separate into real and imaginary parts), which can then be solved by substitutions and the quadratic formula.
I hope that helps!
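Here is a quick check of these formulas on the number from the question, against the library square root (my addition; copysign plays the role of $d/|d|$ and also handles $d=0$):

    import cmath
    from math import sqrt, copysign

    def my_sqrt(z):
        c, d = z.real, z.imag
        r = sqrt(c * c + d * d)             # |z|
        a = sqrt((c + r) / 2)
        b = copysign(sqrt((r - c) / 2), d)  # same sign as d
        return complex(a, b)

    z = 9 + 4j
    print(my_sqrt(z), cmath.sqrt(z))  # the two agree
    print(my_sqrt(z) ** 2)            # recovers (9+4j)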
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "121",
"answer_count": 12,
"answer_id": 3
} |
Paths with DFA? My teacher made an example to explain DFAs; it was about paths (URL paths). The rules were as follows:
S ::= /
S ::= /O
O ::= [a-z]
O ::= [a-z]R
O ::= [a-z]S
R ::= [a-z]
R ::= [a-z]R
R ::= [a-z]S
Examples of paths could be: /foo, /foo/, foo/bar and so on.
However, I don't understand why you would need the R rules since they are equal to the O rules.
Can I write it without the R? If not, why?
| You don't need them, in fact. The grammar you wrote is equivalent to the one obtained by deleting the R rules and substituting the second O rule by
O ::= [a-z]O
... No idea why your teacher wrote it that way, sorry.
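If it helps, the simplified grammar can be compared against a regular expression. The pattern below is my own derivation from the rules, so treat it as an assumption to be checked:

    import re

    # Derived by hand: "/" followed by zero or more "word/" groups
    # and an optional trailing word.
    path = re.compile(r"^/([a-z]+/)*([a-z]+)?$")

    for s in ["/", "/foo", "/foo/", "/foo/bar", "/foo/bar/"]:
        assert path.match(s)
    for s in ["", "foo", "//", "/foo//bar"]:
        assert not path.match(s)
    print("all samples behave as expected")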
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Are the inverses of these matrices always tridiagonal? While putzing around with the linear algebra capabilities of my computing environment, I noticed that inverses of $n\times n$ matrices $\mathbf M$ associated with a sequence $a_i$, $i=1\dots n$ with $m_{ij}=a_{\max(i,j)}$, which take the form
$$\mathbf M=\begin{pmatrix}a_1&a_2&\cdots&a_n\\a_2&a_2&\cdots&a_n\\\vdots&\vdots&\ddots&\vdots\\a_n&a_n&\cdots&a_n\end{pmatrix}$$
(i.e., constant along "backwards L" sections of the matrix) are tridiagonal. (I have no idea if there's a special name for these matrices, so if they've already been studied in the literature, I'd love to hear about references.) How can I prove that the inverses of these special matrices are indeed tridiagonal?
| Let $B_j$ be the $n\times n$ matrix with $1$s in the upper-left hand $j\times j$ block and zeros elsewhere. The space of $L$-shaped matrices you're interested in is spanned by $B_1,B_2,\dots,B_n$. I claim that if $b_1,\dots,b_n$ are non-zero scalars, then the inverse of
$$ M=b_1B_1+b_2B_2+\dots + b_nB_n$$ is then the symmetric tridiagonal matrix $$N=c_1C_1+c_2C_2+\dots+c_nC_n$$ where $c_j=b_j^{-1}$ and $C_j$ is the matrix with zero entries except for a block matrix $\begin{pmatrix}1&-1\\-1&1\end{pmatrix}$ placed along the diagonal of $C_j$ in the $j$th and $j+1$th rows and columns, if $j<n$, and $C_n$ is the matrix with a single non-zero entry, $1$ in the $(n,n)$ position. The point is that $C_jB_k=0$ if $j\ne k$, and $C_jB_j$ is a matrix with at most two non-zero rows: the $j$th row is $(1,1,\dots,1,0,0,\dots)$, with $j$ ones, and if $j<n$ then the $j+1$th row is the negation of the $j$th row. So $NM=C_1B_1+\dots+C_nB_n=I$, so $N=M^{-1}$.
If one of the $b_j$'s is zero, then $M$ is not invertible, since it's arbitrarily close to matrices whose inverses have arbitrarily large entries.
Addendum: they're called type D matrices, and in fact the inverse of any irreducible nonsingular symmetric tridiagonal matrix is the entrywise product of a type D matrix and a "flipped" type D matrix (start the pattern in the lower right corner rather than the upper left corner). There's also a variant of this result characterising the inverse of arbitrary tridiagonal matrices. This stuff is mentioned in the introduction of this paper by Reinhard Nabben.
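The claim is also easy to test numerically; a small NumPy sketch (my addition; a random sequence $a_i$ makes all $b_j$ nonzero with probability 1):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    a = rng.uniform(1, 2, size=n)
    i, j = np.indices((n, n))
    M = a[np.maximum(i, j)]          # m_ij = a_max(i,j)
    Minv = np.linalg.inv(M)

    off = Minv.copy()
    for k in (-1, 0, 1):             # strip the tridiagonal band
        off -= np.diag(np.diag(Minv, k), k)
    print(np.abs(off).max())         # ~ 1e-15: the inverse is tridiagonal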
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
What is the smallest number of $45^\circ-60^\circ-75^\circ$ triangles that a square can be divided into? What is the smallest number of $45^\circ-60^\circ-75^\circ$ triangles that a square can be divided into?
The image below is a flawed example, from http://www.mathpuzzle.com/flawed456075.gif
Laczkovich gave a solution with many hundreds of triangles, but this was just a demonstration of existence, and not a minimal solution. ( Laczkovich, M. "Tilings of Polygons with Similar Triangles." Combinatorica 10, 281-306, 1990. )
I've offered a prize for this problem: In US dollars, (\$200-number of triangles).
NEW: The prize is won, with a 50 triangle solution by Lew Baxter.
| I have no answer to the question, but here's a picture resulting from some initial attempts to understand the constraints that exist on any solution.
[Image: a partial tiling configuration, grown outward from two central "split pentagons" by forced triangles]
This image was generated by considering what seemed to be the simplest possible configuration that might produce a tiling of a rectangle. Starting with the two “split pentagons” in the centre, the rest of the configuration is produced by triangulation. In this image, all the additional triangles are “forced”, and the configuration can be extended no further without violating the constraints of triangulation. If I had time, I'd move on to investigating the use of “split hexagons”.
The forcing criterion is that triangulation requires every vertex to be surrounded either (a) by six $60^\circ$ angles, three triangles being oriented one way and three the other, or else (b) by two $45^\circ$ angles, two $60^\circ$ angles and two $75^\circ$ angles, the triangles in each pair being of opposite orientations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "108",
"answer_count": 2,
"answer_id": 0
} |
One divided by Infinity? Okay, I'm not much of a mathematician (I'm an 8th grader in Algebra I), but I have a question about something that's been bugging me.
I know that $0.999 \cdots$ (repeating) = $1$. So wouldn't $1 - \frac{1}{\infty} = 1$ as well? Because $\frac{1}{\infty} $ would be infinitely close to $0$, perhaps as $1^{-\infty}$?
So $1 - 1^{-\infty}$, or $\frac{1}{\infty}$ would be equivalent to $0.999 \cdots$? Or am I missing something? Is infinity something that can even be used in this sort of mathematics?
| There is one issue that has not been raised in the fine answers given earlier. The issue is implicit in the OP's phrasing and it is worth making it explicit. Namely, the OP is assuming that, just as $0.9$ or $0.99$ or $0.999$ denote terminating decimals with a finite number of 9s, so also $0.999\ldots$ denotes a terminating decimal with an infinite number of 9s, the said infinite number being denoted $\infty$. Changing the notation from $\infty$ to $H$ for this infinite number so as to avoid a clash with traditional notation, we get that indeed that 0.999etc. with an infinite number $H$ of 9s falls infinitesimally short of $1$.
More specifically, it falls short of $1$ by the infinitesimal $\frac{1}{10^H}$, and there is no paradox. Here one does not especially need the hyperreal number system. It is sufficient to use the field of fractions of Skolem's nonstandard integers whose construction is completely constructive (namely does not use the axiom of choice or any of its weaker forms). As the OP points out, the infinitesimal $\frac{1}{H}$ (or more precisely $\frac{1}{10^H}$) is infinitely close to $0$ without being $0$ itself.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 5,
"answer_id": 4
} |
Can a circle truly exist? Is a circle more impossible than any other geometrical shape? Is a circle just an infinitely-sided equilateral parallelogram? Wikipedia says...
A circle is a simple shape of Euclidean geometry consisting of the set of points in a plane that are a given distance from a given point, the centre. The distance between any of the points and the centre is called the radius.
A geometric plane would need to have an infinite number of points in order to represent a circle, whereas, say, a square could actually be represented with a finite number of points, in which case any geometric calculations involving circles would involve similarly infinitely precise numbers ($\pi$, for example).
So when someone speaks of a circle as something other than a theory, are they really talking about a [ really big number ]-sided equilateral parallelogram? Or is there some way that they fit an infinite number of points on their geometric plane?
| In the same sense as you think a circle is impossible, a square with truly perfect sides can never exist because the lines would have to have infinitesimal width, and we can never measure a perfect right angle, etc.
You say that you think a square is physically possible to represent with 4 points, though. In this case, a circle is possible - you only need one point and a defined length. Then all the points of that length from the initial point define the circle, whether we can accurately delineate them or not. In fact, in this sense, I think a circle is more naturally and precisely defined than a given polygon.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Independence of sums of gaussian random variables Say, I have independent Gaussian random variables $t_1, t_2, t_3, t_4, t_5$ and I have two new random variables $S = t_1 + t_2 - t_3$ and $K = t_3 + t_4$.
Are $S$ and $K$ independent, or is there any theorem about independence of random variables formed by sums of independent Gaussians?
| In fact, the distribution of the $t_i$ plays no significant role here, and, moreover, existence of the covariance is not necessary.
Let $S=X-Y$ and $K=Y+Z$, where $X$, $Y$, and $Z$ are independent random variables generalizing the role of $t1+t2$, $t3$, and $t4$, respectively.
Note that, by independence of $X$, $Y$, and $Z$, for any $u_1,u_2 \in \mathbb{R}$ it holds
$$
{\rm E}[e^{iu_1 S + iu_2 K} ] = {\rm E}[e^{iu_1 X + iu_1 ( - Y) + iu_2 Y + iu_2 Z} ] = {\rm E}[e^{iu_1 X} ]{\rm E}[e^{iu_1 ( - Y) + iu_2 Y} ]{\rm E}[e^{iu_2 Z} ]
$$
and
$$
{\rm E}[e^{iu_1 S} ] {\rm E}[e^{iu_2 K} ] = {\rm E}[e^{iu_1 X} ]{\rm E}[e^{iu_1 (-Y)} ]{\rm E}[e^{iu_2 Y} ]{\rm E}[e^{iu_2 Z} ].
$$
The following basic theorem then shows that $S$ an $K$ are generally not independent.
Theorem. Random variables $\xi_1$ and $\xi_2$ are independent if and only if
$$
{\rm E}[e^{iu_1 \xi _1 + iu_2 \xi _2 } ] = {\rm E}[e^{iu_1 \xi _1 } ]{\rm E}[e^{iu_2 \xi _2 } ]
$$
for all $u_1,u_2 \in \mathbb{R}$.
(In particular, note that if $-Y$ and $Y$ are not independent, then there exist $u_1,u_2 \in \mathbb{R}$ such that
${\rm E}[e^{iu_1 ( - Y) + iu_2 Y} ] \ne {\rm E}[e^{iu_1 ( - Y)} ]{\rm E}[e^{iu_2 Y} ]$.)
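In the Gaussian setting of the question one can also just simulate; for jointly Gaussian variables, a nonzero correlation already rules out independence. A sketch (my addition; all $t_i$ standard normal):

    import numpy as np

    rng = np.random.default_rng(0)
    t = rng.normal(size=(5, 1_000_000))  # t1, ..., t5 i.i.d. standard normal
    S = t[0] + t[1] - t[2]
    K = t[2] + t[3]
    print(np.corrcoef(S, K)[0, 1])       # ~ -1/sqrt(6) = -0.408..., not 0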
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/44926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
bayesian networks for regression Would it be possible to use a Bayesian network for regression and/or prediction? I understand that it is a tool one can use to compute probabilities, but I haven't found much material about possible applications to forecasting.
| The Naive Bayes classifier is a type of classifier which is a Bayesian Network (BN). There are also extensions like Tree-Augmented Naive Bayes and more generally Augmented Naive Bayes.
So not only is it possible, but it has been done and there is lots of literature on it.
Most of the applications I see deal with classification rather than regression, but prediction of continuous values is also possible.
A prediction task is essentially a question of "what is $E(Y|X)$" where $Y$ is the variable you want to predict and $X$ is(are) the variable(s) that you observe, so yes you can (and people have) used BNs for it.
Note that a lot of the BN literature for those applications is in the Machine Learning domain.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\tan(\frac{\pi}{2}) = \infty~$?
Evaluate $\displaystyle \int\nolimits^{\pi}_{0} \frac{dx}{5 + 4\cos{x}}$ by using the substitution $t = \tan{\frac{x}{2}}$
For the question above, by changing variables, the integral can be rewritten as $\displaystyle \int \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}$, ignoring the upper and lower limits.
However, after changing variables from $dx$ to $dt$, when $x = 0~$,$~t = \tan{0} = 0~$ but when $ x = \pi~$, $~t = \tan{\frac{\pi}{2}}~$, so can the integral technically be written as $\displaystyle \int^{\tan{\frac{\pi}{2}}}_{0} \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}~$, and if so, is it also reasonable to write it as $\displaystyle \int^{\infty}_{0} \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}$
EDIT: In response to confusion, my question is: Is it technically correct to write the above integral in the form with an upper limit of $\tan{\frac{\pi}{2}}$, and furthermore, is it reasonable to equate $\tan{\frac{\pi}{2}}$ with $\infty$ and substitute it on the upper limit?
| Continuing from my comment, you have
$$\cos(x) = \cos^2(x/2) - \sin^2(x/2) = {1-t^2\over 1+ t^2}, \qquad t = \tan(x/2).$$
Restating the integral with the transformation gives
$$\int_0^\infty {1\over 5 + 4\left({1-t^2 \over 1 + t^2}\right)}{2\, dt\over 1 + t^2} = 2\int_0^\infty {dt\over 9 + t^2} = {2\over 3}\arctan\!\left({t\over 3}\right)\bigg|_0^\infty = {\pi\over 3}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Proving $\frac{1}{\sin^{2}\frac{\pi}{14}} + \frac{1}{\sin^{2}\frac{3\pi}{14}} + \frac{1}{\sin^{2}\frac{5\pi}{14}} = 24$ How do I show that:
$$\frac{1}{\sin^{2}\frac{\pi}{14}} + \frac{1}{\sin^{2}\frac{3\pi}{14}} + \frac{1}{\sin^{2}\frac{5\pi}{14}} = 24$$
This is actually problem B $4371$ given at this link. Looks like a very interesting problem.
My attempts: Well, I have been thinking about this for the whole day, and I have got some insights. I don't believe my insights will lead me to a $\text{complete}$ solution.
*
*First, I wrote $\sin\frac{5\pi}{14}$ as $\sin\frac{9 \pi}{14}$, so that if I put $A = \frac{\pi}{14}$ the given equation becomes $$\frac{1}{\sin^{2}{A}} + \frac{1}{\sin^{2}{3A}} + \frac{1}{\sin^{2}{9A}} =24$$ Then I tried working with this by taking $\text{lcm}$ and multiplying and doing something, which appeared futile.
*Next, I actually didn't work it out, but I think we have to look for an equation whose roots are these sines and then use $\text{sum of roots}$ formulas to get $24$. I think I haven't explained this clearly.
*
*$\text{Thirdly, is there a trick for proving such types of identities using Gauss sums?}$ One post related to this is: How to prove that: $\tan(3\pi/11) + 4\sin(2\pi/11) = \sqrt{11}$ I don't know how this will help as I haven't studied anything yet regarding Gauss sums.
| Using $\sin(x) = \cos(\frac{\pi}2 - x)$, we can rewrite this as:
$$\frac{1}{\cos^2 \frac{3\pi}{7}} + \frac{1}{\cos^2 \frac{2\pi}{7}} + \frac{1}{\cos^2 \frac{\pi}{7}}$$
Let $a_k = \frac{1}{\cos \frac{k\pi}7}$.
Let $f(x) = (x-a_1)(x-a_2)(x-a_3)(x-a_4)(x-a_5)(x-a_6)$.
Now, using that $a_k = - a_{7-k}$, this can be written as:
$$f(x) = (x^2-a_1^2)(x^2-a_2^2)(x^2-a_3^2)$$
Now, our problem is to find the sum $a_1^2 + a_2^2 + a_3^2$, which is just the negative of the coefficient of $x^4$ in the polynomial $f(x)$.
Let $U_6(x)$ be the Chebyshev polynomial of the second kind - that is:
$$U_6(\cos \theta) = \frac{\sin 7\theta }{\sin \theta}$$
It is a polynomial of degree $6$ with roots equal to $\cos(\frac{k\pi}7)$, for $k=1,...,6$.
So the polynomials $f(x)$ and $x^6U_6(1/x)$ have the same roots, so:
$$f(x) = C x^6 U_6(\frac{1}x)$$
for some constant $C$.
But $U_6(x) = 64x^6-80x^4+24x^2-1$, so $x^6 U_6(\frac{1}x) = -x^6 + 24 x^4 - 80x^2 + 64$. Since the coefficient of $x^6$ is $-1$, and it is $1$ in $f(x)$, $C=-1.$ So:
$$f(x) = x^6 - 24x^4 +80x^2 - 64$$
In particular, the sum you are looking for is $24$.
In general, if $n$ is odd, then the sum:
$$\sum_{k=1}^{\frac{n-1}2} \frac{1}{\cos^2 \frac{k\pi}{n}}$$
is the absolute value of the coefficient of $x^2$ in the polynomial $U_{n-1}(x)$, which turns out to have closed form $\frac{n^2-1}2$.
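A one-line numerical confirmation of the identity (my addition):

    from math import pi, sin

    print(sum(1 / sin(k * pi / 14) ** 2 for k in (1, 3, 5)))  # 24.000000...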
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 5,
"answer_id": 0
} |
Raising a square matrix to the k'th power: From real through complex to real again - how does the last step work? I am reading Applied linear algebra: the decoupling principle by Lorenzo Adlai Sadun (btw very recommendable!)
On page 69 it gives an example where a real, square matrix $A=[(a,-b),(b,a)]$ is raised to the k'th power: $$A^k.(1,0)^T$$ The result must be a real vector. Nevertheless it seems easier to do the calculation via the complex numbers:$$=((a+bi)^k+(a-bi)^k).(1,0)^T/2-i((a+bi)^k-(a-bi)^k).(0,1)^T/2$$ At this stage the result seems to be complex. But then comes the magic step and everything gets real again:$$=Re[(a+bi)^k].(1,0)^T+Im[(a+bi)^k].(0,1)^T$$ Now I did some experiments and made two observations: First, this step seems to yield the correct results - yet I don't know why. Second, the raising of this matrix to the k'th power even confuses CAS (e.g. WolframAlpha aka Mathematica, see e.g. the plots here) because they most of the time seem to think that the results are complex.
My question
Could you please give me a proof/explanation for the correctness of the last step. Perhaps you will even know why CAS are confused too (perhaps it is because their algorithms also go through the complex numbers and they too have difficulties in seeing that the end result will be real?)
| What you are using is that for a given complex number $z=a+bi$, we have $\frac{z+\overline{z}}{2}=a={\rm Re}(z)$ and $\frac{z-\overline{z}}{2}=ib=i{\rm Im}(z)$ (where $\overline{z}=a-bi$). Also check that $\overline{z^k}=\overline{z}^k$ for all $k \in \mathbb{N}$.
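A quick NumPy check of the "magic step" for arbitrary concrete values of $a$, $b$, $k$ (my addition):

    import numpy as np

    a, b, k = 0.8, 0.3, 7
    A = np.array([[a, -b], [b, a]])
    lhs = np.linalg.matrix_power(A, k) @ np.array([1.0, 0.0])
    z = (a + b * 1j) ** k
    print(lhs, [z.real, z.imag])  # A^k (1,0)^T equals (Re z^k, Im z^k)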
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Minimal free resolution I'm studying from the book "Cohen-Macaulay rings" by Bruns-Herzog
(Here's a link and an image of the page in question for those unable to use Google Books.)
On page 17 it talks about minimal free resolutions, but it doesn't give a proper definition (or I'm misunderstanding the one it gives); could you give me a definition?
And if $(R,\mathfrak{m},k)$ is a Noetherian local ring, $M$ a finite $R$-module and
$F.:\cdots\rightarrow F_n\rightarrow F_{n-1}\rightarrow\cdots\rightarrow F_1\rightarrow F_0\rightarrow 0$
a finite free resolution of $M$. Then it is minimal if and only if $\varphi_i(F_i)\subset\mathfrak{m}F_{i-1}$ for all $i\geq1$. Why?
| I can't see that book online, but let me paraphrase the definition from Eisenbud's book on commutative algebra. See Chapter 19 page 473-477 for details.
Let $R$ be a Noetherian local ring with maximal ideal $\mathfrak{m},$ then
Definition: A free resolution of a $R$-module $M$ is a complex
$$\mathcal{F}: ...\rightarrow F_i \rightarrow F_{i-1} \rightarrow ... \rightarrow F_1 \rightarrow F_0$$
with trivial homology such that $\mathbf{\text{coker}}(F_1 \rightarrow F_0) \cong M$ and each $F_i$ is a free $R$-module.
He then defines a minimal resolution as follows:
Definition: A complex
$$\mathcal{F}: ...\rightarrow F_i \rightarrow F_{i-1} \rightarrow ...$$
over $(R,\mathfrak{m})$ is minimal if the induced maps in the complex $\mathcal{F}\otimes R/\mathfrak{m}$ are each identically $0$. (Note that this is equivalent to the condition that $Im(F_i \rightarrow F_{i-1}) \subset \mathfrak{m}F_{i-1}$)
after this he proves the fact that a free resolution $\mathcal{F}$ is minimal if and only if a basis for $F_{i-1}$ maps into a minimal set of generators for $\mathbf{\text{coker}}(F_i \rightarrow F_{i-1}).$
The proof is via a straightforward appeal to Nakayama's Lemma.
From the way your question is worded, I would guess that your text has taken the latter of these equivalent conditions as the definition of a minimal free resolution and proved it to be equivalent to the former. But I can't be sure without seeing the text.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
} |
Weierstrass Equation and K3 Surfaces Let $a_{i}(t) \in \mathbb{Z}[t]$. We shall denote these by $a_{i}$. The equation $y^{2} + a_{1}xy + a_{3}y = x^{3} + a_{2}x^{2} + a_{4}x + a_{6}$ is the affine equation for the Weierstrass form of a family of elliptic curves. Under what conditions does this represent a K3 surface?
| A good reference for this would be Abhinav Kumar's PhD thesis, which you can find here. In particular, look at Chapter 5, and Section 5.1. If an elliptic surface $y^2+a_1(t)xy+a_3(t)y = x^3+a_2(t)x^2+a_4(t)x+a_6(t)$ is K3, then the degree of $a_i(t)$ must be $\leq 2i$.
I hope this helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proof that the set of incompressible strings is undecidable I would like to see a proof or a sketch of a proof that the set of incompressible strings is undecidable.
Definition: Let $x$ be a string; we say that $x$ is $c$-compressible if $K(x) \leq |x|-c$.
If $x$ is not $c$-compressible, we say that $x$ is incompressible by $c$. $K(x)$ represents the Kolmogorov complexity of a binary string $x$.
Theorem: incompressible strings of every length exist.
Proof: The number of binary strings of length n is $2^{n}$, but there exist
$\displaystyle\sum\limits_{i=0}^{n-1} 2^i = 2^{n}-1$ descriptions of length less than n. Since each description describes at most one string, there is at least one string of length n that is incompressible.
From here I feel it is natural to ask whether or not the set of incompressible strings is decidable; the answer is $\textit{no}$, but I would like to see the justification via a proof or proof sketch.
Edit: I would like to add I am already familiar/comfortable with the proof that Kolmogorov complexity is uncomputable.
| Roughly speaking, incompressibility is undecidable because of a version of the Berry paradox. Specifically, if incompressibility were decidable, we could specify "the lexicographically first incompressible string of length 1000" with the description in quotes, which has length less than 1000.
For a more precise proof of this, consider the Wikipedia proof that $K$ is not computable. We can modify this proof as follows to show that incompressibility is undecidable. Suppose we have a function IsIncompressible that checks whether a given string is incompressible. Since there is always at least one incompressible string of length $n$, the function GenerateComplexString that the Wikipedia article describes can be modified as follows:
function GenerateComplexString(int n)
    for each string s of length exactly n
        if IsIncompressible(s)
            return s
This function uses IsIncompressible to produce roughly the same result that the KolmogorovComplexity function is used for in the Wikipedia article. The argument that this leads to a contradiction now goes through almost verbatim.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How to compute the transition function in non-determinism finite accepter NFA? I'm currently teaching myself automata using Peter Linz's book - An Introduction to Formal Languages and Automata, 4th edition. While reading chapter 2 about NFAs, I got stuck on this example (page 51):
According to the author, the transition function is $$\delta^{*}(q_1,a) = \{q_0, q_1, q_2\},$$ and I have no idea how this works, since the definition given in the book is as follows:
For an nfa, the extended transition function is defined so that $\delta^{*}(q_i,w)$ contains $q_j$ if and only if there is a walk in the transition graph from $q_i$ to $q_j$ labeled $w$. This holds for all $q_i, q_j \in Q$ and $w \in \Sigma^{*}.$
From my understanding, there must be a walk labeled $a$ for a state $q_k$ to be in the set. In the example above, there is no such walk labeled $a$ from $q_1$ to $q_0$ or $q_2$. Perhaps I missed some important points, but I honestly don't understand how the author got that answer, i.e. $\{q_0, q_1, q_2\}$. Any suggestion?
Thank you,
Note: I already posted this question at https://cstheory.stackexchange.com/questions/7009/how-to-compute-the-transition-function-in-non-determinism-finite-accepter-nfa. However, it was closed because it's not at graduate research level.
| Be careful: your machine should read an 'a' somewhere along the walk to the destination state. In your NFA, before reading 'a', two $\lambda$-transitions can be taken: first to go to $q_2$, and second to go to $q_0$. After that your machine can read an 'a' and land on $q_1$. The transitions to $q_2$ and $q_0$ then take place by one and two $\lambda$-transitions, respectively.
Good luck
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Number of possible sets for given N How many possible valid collections are there for a given positive integer N given the following conditions:
All the sums from 1 to N should be possible to make by selecting some of the integers. Also, this has to be done in such a way that if any integer from 1 to N can be made in more than one way by combining selected integers, then that set of integers is not valid.
For example, with N = 7,
The valid collections are:{1,1,1,1,1,1,1},{1,1,1,4},{1,2,2,2},{1,2,4}
Invalid collections are:
{1,1,1,2,2} because the sum adds up to 7 but 2 can be made by {1,1} and {2}, 3 can be made by {1,1,1} and {1,2}, 4 can be made by {1,1,2} and {2,2} and similarly 5, 6 and 7 can also be made in multiple ways using the same set.
{1,1,3,6} because every value from 1 to 7 can be uniquely made, but the sum is not 7 (it's 11).
| The term I would use is "multiset". Note that your multiset must contain 1 (as this is the only way to get a sum of 1). Suppose there are $r$ different values $a_1 = 1, \ldots, a_r$ in the multiset, with $k_j$ copies of $a_j$. Then we must have $a_j = (k_{j-1}+1) a_{j-1}$ for $j = 2, \ldots, r$, and $N = (k_r + 1) a_r - 1$. Working backwards, if $A(N)$ is the number of valid multisets summing to $N$, for each factorization $N+1 = ab$ where $a$ and $b$ are positive integers with $b > 1$ you can take $a_r = a$, $k_r = b - 1$, together with any valid multiset summing to $a-1$. Thus $A(N) = \sum_{b | N+1, b > 1} A((N+1)/b - 1)$ for $N \ge 1$, with $A(0) = 1$. We then have, if I programmed it right, 1, 1, 2, 1, 3, 1, 4, 2, 3, 1, 8, 1, 3, 3, 8, 1, 8, 1, 8, 3 for $N$ from 1 to 20. This matches OEIS sequence A002033, "Number of perfect partitions of n".
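The recurrence is immediate to code; a short memoized Python sketch (my addition) reproduces the listed values:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def A(N):
        if N == 0:
            return 1
        return sum(A((N + 1) // b - 1)
                   for b in range(2, N + 2) if (N + 1) % b == 0)

    print([A(n) for n in range(1, 21)])
    # [1, 1, 2, 1, 3, 1, 4, 2, 3, 1, 8, 1, 3, 3, 8, 1, 8, 1, 8, 3]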
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
Why is an empty function considered a function? A function by definition is a set of ordered pairs, and also, according to Kuratowski, an ordered pair $(x,y)$ is defined to be $$\{\{x\}, \{x,y\}\}.$$
Given $A\neq \varnothing$, and $\varnothing\colon \varnothing \rightarrow A$. I know $\varnothing \subseteq \varnothing \times A$, but still an empty set is not an ordered pair. How do you explain that an empty function is a function?
| The empty set is a set of ordered pairs. It contains no ordered pairs but that's fine, in the same way that $\varnothing$ is a set of real numbers though $\varnothing$ does not contain a single real number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49",
"answer_count": 3,
"answer_id": 2
} |
How to find a positive semidefinite matrix $Y$ such that $YB =0$ where $B$ is given $B$ is an $n\times m$ matrix, $m\leq n$.
I have to find an $n\times n$ positive semidefinite matrix $Y$ such that $YB = 0$.
Please help me figure out how I can find the matrix $Y$.
| If $X$ is any (real) matrix with the property that $XB=0$, then $Y=X^TX$ will do the trick. Such a matrix $Y$ is always positive semidefinite. To see this note that for any (column) vector $v$ we have $v^TYv=(Xv)^T(Xv)=|Xv|^2\ge0$.
How to find such a matrix $X$? If $m=n$ and $\det B\neq0$, then there is no other choice but $Y=0$. Otherwise we can do the following. The rows of $X$ should be orthogonal to the columns of $B$. Let $v=(v_1,v_2,\ldots,v_n)$ be a vector of unknowns. From our assumptions it follows that the homogeneous linear system $B^Tv=0$ has non-trivial solutions: either $m<n$ or there are linear dependencies among the equations as $B$ has rank $<n$. Let $U$ be the set of solutions (use whatever methods you know to find a basis for $U$). Then any matrix $X$ with row vectors that (or rather their transposes) are from the space $U$ will work.
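A NumPy sketch of this recipe (my addition; the null space of $B^T$ is read off from an SVD):

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 6, 3
    B = rng.normal(size=(n, m))

    _, _, Vt = np.linalg.svd(B.T)        # rows of Vt span R^n
    X = Vt[np.linalg.matrix_rank(B):]    # basis of the null space of B^T
    Y = X.T @ X                          # positive semidefinite by construction

    print(np.abs(Y @ B).max())           # ~ 1e-15, so YB = 0
    print(np.linalg.eigvalsh(Y).min())   # >= 0 up to roundoff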
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Interesting integral related to the Omega Constant/Lambert W Function I ran across an interesting integral and I am wondering if anyone knows where I may find its derivation or proof. I looked through the site. If it is here and I overlooked it, I am sorry.
$$\displaystyle\frac{1}{\int_{-\infty}^{\infty}\frac{1}{(e^{x}-x)^{2}+{\pi}^{2}}dx}-1=W(1)=\Omega$$
$W(1)=\Omega$ is often referred to as the Omega Constant, which is the solution to $xe^{x}=1$, namely $x\approx .567$.
Thanks much.
EDIT: Sorry, I had the integral written incorrectly. Thanks for the catch.
I had also seen this:
$\displaystyle\int_{-\infty}^{\infty}\frac{dx}{(e^{x}-x)^{2}+{\pi}^{2}}=\frac{1}{1+W(1)}=\frac{1}{1+\Omega}\approx .638$
EDIT: I do not know what is wrong, but I am trying to respond and can not. All the buttons are unresponsive but this one. I have been trying to leave a greenie and add a comment, but neither will respond. I just wanted you to know this before you thought I was an ingrate.
Thank you. That is an interesting site.
| While this is by no means rigorous, it gives the correct solution. Any corrections to this are welcome!
Let
$$f(z) := \frac{1}{(e^z-z)^2+\pi^2}$$
Let $C$ be the canonical positively-oriented semicircular contour that traverses the real line from $-R$ to $R$ and all around $Re^{i \theta}$ for $0 \le \theta \le \pi$ (let this semicircular arc be called $C_R$), so
$$\oint_C f(z)\, dz = \int_{-R}^R f(z)\,dz + \int_{C_R}f(z)\, dz$$
To evaluate the latter integral, we see
$$
\left| \int_{C_R} \frac{1}{(e^z-z)^2+\pi^2}\, dz \right| \le
\int_{C_R} \left| \frac{1}{(e^z-z)^2+\pi^2}\right| \, dz \le
\int_{C_R} \frac{1}{(|e^z-z|)^2-\pi^2} \, dz \le
\int_{C_R} \frac{1}{(e^R-R)^2-\pi^2} \, dz
$$
and letting $R \to \infty$, the integral over the arc disappears (its length grows only like $\pi R$ while the bound on the integrand decays exponentially).
Looking at the denominator of $f$ for singularities:
$$(e^z-z)^2 + \pi^2 = 0 \implies e^z-z = \pm i \pi \implies z = -W (1)\pm i\pi$$
using this.
We now use the root with the positive $i\pi$ because when the sign is negative, the pole does not fall within the contour because $\Im (-W (1)- i\pi)<0$.
$$z_0 := -W (1)+i\pi$$
We calculate the beautiful residue $b_0$ at $z=z_0$:
$$
b_0=
\operatorname*{Res}_{z \to z_0} f(z) =
\lim_{z\to z_0} \frac{(z-z_0)}{(e^z-z)^2+\pi^2} =
\lim_{z\to z_0} \frac{1}{2(e^z-1)(e^z-z)} =
\frac{1}{2(-W(1) -1)(-W(1)+W(1)-i\pi)} =
\frac{1}{-2\pi i(-W(1) -1)} =
\frac{1}{2\pi i(W(1)+1)}
$$
using L'Hopital's rule to compute the limit.
And finally, with the residue theorem,
$$
\oint_C f(z)\, dz = \int_{-\infty}^\infty f(z)\,dz = 2 \pi i b_0 = \frac{2 \pi i}{2\pi i(W(1)+1)} =
\frac{1}{W(1)+1}
$$
An evaluation of this integral with real methods would also be intriguing.
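One can at least verify the value numerically; a SciPy sketch (my addition; np.exp overflows to inf for large $x$, which harmlessly sends the integrand to 0):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import lambertw

    f = lambda x: 1.0 / ((np.exp(x) - x) ** 2 + np.pi**2)
    val, err = quad(f, -np.inf, np.inf)
    print(val, 1.0 / (1.0 + lambertw(1).real))  # both ~ 0.6381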
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 3,
"answer_id": 0
} |
Calculate the area on a sphere of the intersection of two spherical caps Given a sphere of radius $r$ with two spherical caps on it defined by the radii ($a_1$ and $a_2$) of the bases of the spherical caps, given a separation of the two spherical caps by angle $\theta$, how do you calculate the surface area of that intersection?
To clarify, the area is that of the curved surface on the sphere defined by the intersection. At the extreme where both $a_1,a_2 = r$, we would be describing a spherical lune.
Alternatively define the spherical caps by the angles
$\Phi_1 = \arcsin(a_1/r)$ and $\Phi_2 = \arcsin(a_2/r)$.
| Here's a simplified formula as a function of your 3 variables, $a_1$, $a_2$, and $\theta$ (note that the formula treats $a_1$ and $a_2$ as the angular radii $\Phi_1$ and $\Phi_2$, and gives the area on a unit sphere, so scale by $r^2$ for radius $r$):
$$
2\cos(a_2)\arccos \left ( \frac{-\cos(a_1) + \cos(\theta)\cos(a_2)}{\sin(\theta)\sin(a_2)} \right ) \\
-2\cos(a_1)\arccos \left ( \frac{\cos(a_2) - \cos(\theta)\cos(a_1)}{\sin(\theta)\sin(a_1)} \right ) \\
-2\arccos \left ( \frac{-\cos(\theta) + \cos(a_1)\cos(a_2)}{\sin(a_1)\sin(a_2)} \right ) \\
-2\pi\cos(a_2)
$$
As previously stated, the caps must intersect and one cannot entirely contain the other.
This solution is copied from a graphics presentation by AMD, and is originally from a biochemistry paper. I have no proof that it works, so take it for what it's worth.
Tovchigrechko, A. and Vakser, I. A. 2001. How common is the funnel-like energy landscape in protein-protein interactions? Protein Sci. 10:1572-1583.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 1
} |
Can an element other than the neutral element be its own inverse? Take the following operation $*$ on the set $\{a, b\}$:
*
*$a * b = a$
*$b * a = a$
*$a * a = b$
*$b * b = b$
$b$ is the neutral element. Can $a$ also be its own inverse, even though it is not the neutral element? Or does the inverse property require that only the neutral element may be its own inverse, while every other element must have a different element as its inverse?
| Your set is isomorphic to the two-element group: $b=1$, $a=-1$, $*=$multiplication. So yes, $a$ can very well be its own inverse.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 0
} |
Subspace intersecting many other subspaces V is a vector space of dimension 7. There are 5 subspaces of dimension four. I want to find a two-dimensional subspace that intersects each of the 5 subspaces non-trivially.
Edit: All the 5 given subspaces are chosen randomly (with a very high probability, the intersection is a line).
If I take any two of the 5 subspaces, their intersection is (generically) a line. Similarly, we can take another two subspaces and get another line. These two lines span a two-dimensional subspace that intersects 4 of the 5 subspaces non-trivially. But can someone tell me how to find a two-dimensional subspace that intersects all 5 subspaces?
It would be very useful if you can tell what kind of concepts in mathematics can i look for to solve problems like this?
Thanks in advance.
Edit: the second paragraph is one way in which I tried the problem, but forcing the plane to contain two of the intersection lines over-constrains it, and that approach fails.
| Assuming your vector space is over $\mathbb R$, it looks to me like "generically" there should be a finite number of solutions, but I can't prove that this finite number is positive, nor do I have a counterexample. We can suppose your two-dimensional subspace $S$ has an orthonormal basis $\{ u, v \}$ where $u \cdot e_1 = 0$ (where $e_1$ is a fixed nonzero vector). There are 10 degrees of freedom for choosing $u$ and $v$. The five subspaces are
the kernels of five linear operators $F_j$ of rank 3; for $S$ to have nonzero intersection with ${\rm ker} F_j$ you need scalars $a_j$ and $b_j$ with $a_j^2 + b_j^2 = 1$ and $F_j (a_j u + b_j v) = 0$. This gives 5 more degrees of freedom for choosing points $(a_j, b_j)$ on the unit circle, minus 15 for the equations $F_j (a_j u + b_j v) = 0$, for a net of 0 degrees of freedom, and thus a discrete set of solutions (finite because the equations are polynomial).
For actually finding solutions in particular cases, I found Maple's numerical solver fsolve worked pretty well - the system seems too complicated for the symbolic solvers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
bijective morphism of affine schemes The following question occurred to me while doing exercises in Hartshorne. If $A \to B$ is a homomorphism of (commutative, unital) rings and $f : \text{Spec } B \to \text{Spec } A$ is the corresponding morphism on spectra, does $f$ bijective imply that $f$ is a homeomorphism? If not, can anyone provide a counterexample? The reason this seems reasonable to me is because I think that the inverse set map should preserve inclusions of prime ideals, which is the meaning of continuity in the Zariski topology, but I can't make this rigorous.
| No. Let $A$ be a DVR. Let $k$ be the residue field, $K$ the quotient field. There is a map $\mathrm{Spec}\, k \sqcup \mathrm{Spec}\, K \to \mathrm{Spec}\, A$ which is bijective, but not a homeomorphism (one side is discrete and the other is not). Note that $\mathrm{Spec}\, k \sqcup \mathrm{Spec}\, K = \mathrm{Spec}(k \times K)$, so this is an affine scheme.
As Matt E observes below in the comments, one can construct more geometric examples of this phenomenon (e.g. the coproduct of a punctured line plus a point mapping to a line): the point is that things can go very wrong with the topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/45954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Why are horizontal transformations of functions reversed? While studying graph transformations I came across horizontal and vertical scale and translations of functions. I understand the ideas below.
*$f(x+a)$ - grouped with $x$, horizontal translation, reversed: the $x$-coordinate shifts left (right for $-a$)
*$f(ax)$ - grouped with $x$, horizontal scaling, reversed: the $x$-coordinate scales by $1/a$
*$f(x)+a$ - not grouped with $x$, vertical translation: the $y$-coordinate shifts up (down for $-a$)
*$af(x)$ - not grouped with $x$, vertical scaling: the $y$-coordinate scales by $a$
I have mostly memorized this part but I am unable to figure out why the horizontal transformations are reversed/inverse?
Thanks for your help.
| For the horizontal shift:
The reason is easiest to see by comparing inputs. In the parent function $f(x)=x$ the graph passes through the origin $(0,0)$, while for the shifted function $f(x-2)=x-2$ the corresponding point is $(2,0)$. To make the inner expression equal to $0$, the parent function needs the input $0$, but the shifted function needs the input $2$: you must add $2$ to the input to reproduce the parent function's behaviour at $0$. That is why the graph moves right even though the formula contains $-2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 4
} |
Fractional part of $b \log a$ From the problem...
Find the minimal positive integer $b$ such that the first digits of $2^b$ are 2011
...I have been able to reduce the problem to the following instead:
Find minimal $b$ such that $\log_{10} (2.011) \leq \operatorname{frac}(b~\log_{10} (2)) < \log_{10} (2.012)$, where $b$ is a positive integer
Is there an algorithm that can be applied to solve this or would you need to step through all possible b until you find the right solution?
| You are looking for integers $b$ and $p$ such that $b\log_{10}2-\log_{10}(2.011)-p$ is small and positive. The general study of such things is called "inhomogeneous diophantine approximation," which search term should get you started, if you want something more analytical than a brute force search. As 6312 indicated, continued fractions come into it.
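For completeness, the brute-force search is tiny (my own sketch; the exact-integer string check at the end guards against floating-point edge effects near the interval endpoints):

```python
from math import log10

lo, hi = log10(2.011), log10(2.012)
b = 1
while not (lo <= (b * log10(2)) % 1 < hi):
    b += 1
print(b, str(2**b)[:4])   # minimal b, and a confirmation that 2**b starts "2011"
```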
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Effect of adding a constant to both Numerator and Denominator I was reading a text book and came across the following:
If a ratio $a/b$ is given such that $a \gt b$, and given $x$ is a positive integer, then
$$\frac{a+x}{b+x} \lt\frac{a}{b}\quad\text{and}\quad \frac{a-x}{b-x}\gt \frac{a}{b}.$$
If a ratio $a/b$ is given such that $a \lt b$, $x$ a positive integer, then
$$\frac{a+x}{b+x}\gt \frac{a}{b}\quad\text{and}\quad \frac{a-x}{b-x}\lt \frac{a}{b}.$$
I am looking for more of a logical deduction on why the above statements are true (than a mathematical "proof"). I also understand that I can always check the authenticity by assigning some values to a and b variables.
Can someone please provide a logical explanation for the above?
Thanks in advance!
| How about something along these lines: Think of a pot of money divided among the people in a room. In the beginning, there are $a$ dollars and $b$ persons. Initially, everyone gets $a/b>1$ dollars, since $a>b$. But new people are allowed into the room at a fee of 1 dollar per person, and the admission fees are put into the pot. The average always stays greater than 1, but since each new person is charged less than what he or she gets back, the average has to drop, and so
$$\frac{a+x}{b+x}<\frac{a}{b}.$$
Similar reasoning applies to the other inequalities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 8,
"answer_id": 6
} |
Reference book on measure theory I post this question with some personal specifications. I hope it does not overlap with old posted questions.
Recently I strongly feel that I have to review the knowledge of measure theory for the sake of starting my thesis.
I am not totally new with measure theory, since I have taken and past one course at the graduate level. Unfortunately, because the lecturer was not so good at teaching, I followed the course by self-study. Now I feel that all the knowledge has gone after the exam and still don’t have a clear overview on the structure of measure theory.
And here come my specified requirements for a reference book.
*I wish for the book to spell out its proofs in detail, since, sadly, I will again be reading it on my own. This is the most important criterion.
*I wish for the book to cover most of the topics in measure theory. Although the topic of my thesis is stochastic integration, I want to review measure theory at a more general level, meaning it should emphasize both the analysis and the probability sides. If that cannot be achieved, I'd rather it focus on probability.
*I wish for the book to treat convergence theorems and uniform integrability carefully, as Chung's probability book does.
My expectation is after thorough reading, I could have strong background to start a thesis on stochastic integration at an analytic level.
Sorry for such a tedious question.
P.S.: the textbook I used is Schilling's book, Measures, Integrals and Martingales. It is a pretty good textbook, but misprints really ruin the fun of reading.
| Donald L. Cohn-"Measure theory". Everything is detailed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86",
"answer_count": 16,
"answer_id": 1
} |
Book/tutorial recommendations: acquiring math-oriented reading proficiency in German I'm interested in others' suggestions/recommendations for resources to help me acquire reading proficiency (of current math literature, as well as classic math texts) in German.
I realize that German has evolved as a language, so ideally, the resource(s) I'm looking for take that into account, or else perhaps I'll need a number of resources to accomplish such proficiency. I suspect I'll need to include multiple resources (in multiple forms) in my efforts to acquire the level of reading proficiency I'd like to have.
I do like "hard copy" material, at least in part, from which to study. But I'm also very open to suggested websites, multimedia packages, etc.
In part, I'd like to acquire reading proficiency in German to meet a degree requirement, but as a native English speaker, I would also like to be able to study directly from significant original German sources.
Finally, there's no doubt that a sound/solid reference/translation dictionary (or two or three!) will be indispensable, as well. Any recommendations for such will be greatly appreciated, keeping in mind that my aim is to be proficient in reading mathematically-oriented German literature (though I've no objections to expanding from this base!).
| I realize this is a bit late, but I just saw by chance that the math department of Princeton has a list of German words online, seemingly for people who want to read German math papers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 6,
"answer_id": 2
} |
Many convergent sequences imply the initial sequence zero? In connection to this question, I found a similar problem in another Miklos Schweitzer contest:
Problem 8./2007 For $A=\{a_i\}_{i=0}^\infty$ a sequence of real numbers, denote by $SA=\{a_0,a_0+a_1,a_0+a_1+a_2,...\}$ the sequence of partial sums of the series $a_0+a_1+a_2+...$. Does there exist a non-identically zero sequence $A$ such that all the sequences $A,SA,SSA,SSSA,...$ are convergent?
If $SA$ is convergent then $A \to 0$. $SSA$ convergent implies $SA \to 0$.
We have
*$SSA=\{a_0,2a_0+a_1,3a_0+2a_1+a_2,4a_0+3a_1+2a_2+a_3...\}$
*$SSSA=\{a_0,3a_0+a_1,6a_0+3a_1+a_2,10a_0+6a_1+3a_2+a_3...\}$.
I suppose that as the number of iterations grows, the coefficients of the sequence grow very large, and that somehow we can get a contradiction if the initial sequence is not identically zero.
| I would suggest you try using the alternating harmonic series. It is conditionally convergent so you can try rearrangements that might pop out convergent to zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
} |
Alternative to imaginary numbers? In this video, starting at 3:45 the professor says
There are some superb papers written that discount the idea that we should ever use j (imaginary unit) on the grounds that it conceals some structure that we can explain by other means.
What is the "other means" that he is referring to?
| Maybe he meant the following: A complex number $z$ is in the first place an element of the field ${\mathbb C}$ of complex numbers, and not an $a+bi$. There are indeed structure elements which remain hidden when thinking in terms of real and imaginary parts only, e.g., the multiplicative structure of the set of roots of unity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 4
} |
Reduction formula for $I_{n}=\int {\cos{nx} \over \cos{x}}\rm{d}x$ What would be a simple method to compute a reduction formula for the following?
$\displaystyle I_{n}=\int {\cos{nx} \over \cos{x}} \rm{d}x~$ where $n$ is a positive integer
I understand that it may involve splitting the numerator into $\cos(n-2+2)x~$ (or something similar to this form...), but how would one intuitively recognize that manipulating the expression into such a random arrangement is the way to proceed on this question? Moreover, are there alternative methods, and possibly even some way of directly computing this integral without the need for a reduction formula?
| The complex exponential approach described by Gerry Myerson is very nice, very natural. Here are a couple of first-year calculus approaches. The first is kind of complicated, but introduces some useful facts. The second one, given at the very end, is quick.
Instead of doing a reduction formula directly, we separate out a fact that is far more important than our integral.
Lemma: There is a polynomial $P_n(x)$ such that
$$\cos(nx)=P_n(\cos x)$$
Moreover, $P_n$ contains only terms of odd degree if $n$ is odd, and only terms of even degree if $n$ is even.
Proof: The cases $n=1$ and $n=2$ are familiar.
Suppose we know the result for $n$. We establish the result for $n+2$.
Note that
$$\cos((n+2)x)=\cos(2x)\cos(nx)-\sin(2x)\sin(nx)$$
The $\cos(2x)\cos(nx)$ part is expressible as a polynomial in $\cos x$, by the induction hypothesis.
But $\sin(nx)$ is the derivative of $(-1/n)\cos(nx)$, so by the chain rule $\sin(nx)=(1/n)(\sin x)P_n'(\cos x)$.
Thus
$$\sin(2x)\sin(nx)=(1/n)(2\sin x\cos x)(\sin x)P_n'(\cos x),$$
and now we replace $\sin^2 x$ by $1-\cos^2 x$.
As we do the induction, we can easily check that all degrees are even or all are odd as claimed. Or else we can obtain the degree information afterwards from symmetry considerations.
Now to the integral!
If $n$ is odd, then $\cos(nx)=P_n(\cos x)$, where $P_n(x)$ has only terms of odd degree. Then $\frac{\cos(nx)}{\cos x}$ is a polynomial in $\cos x$, and can be integrated using the standard reduction procedure.
If $n$ is even, pretty much the same thing happens, except that $P_n(x)$ has a non-zero constant term. Divide as in the odd case. We end up with a polynomial in $\cos x$ plus a term of shape $k/(\cos x)$. The integral of $\sec x$, though mildly unpleasant, is standard.
Remark: If $n$ is odd, then $\sin(nx)$ is a polynomial in $\sin x$, with only terms of odd degree. If $n$ is even, then $\sin(nx)$ is $\cos x$ times a polynomial in $\sin x$, with all terms of odd degree.
Added: I should also give the simple reduction formula that was asked for, even at the risk people will not get interested in the polynomials.
Recall that
$$\cos(a-b)+\cos(a+b)=2\cos a \cos b$$
Take $a=(n-1)x$ and $b=x$, and rearrange a bit. We get
$$\cos(nx)=2\cos x\cos((n-1)x)-\cos((n-2)x)$$
Divide through by $\cos x$, and integrate.
$$\int\frac{\cos(nx)}{\cos x}dx =2\int \cos((n-1)x)dx-\int\frac{\cos((n-2)x)}{\cos x}dx $$
The first integral on the right is easy to evaluate, and we get our recurrence, and after a while arrive at the case $n=0$ or $n=1$. Now working "forwards" we can even express our integral as a simple explicit sum.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Connected planar simple Graph: number of edges a function of the number of vertices Suppose that a connected planar simple graph with $e$ edges and $v$ vertices contains no simple circuit of length four or less (i.e., its girth is at least five). Show that $$\frac 53 v -\frac{10}{3} \geq e$$
or, equivalently, $$5(v-2) \geq 3e$$
| As Joseph suggests, one of two formulas you'll want to use for this problem is Euler's formula, which you may know as
$$r = e - v + 2 \quad\text{(or}\quad v + r - e = 2)\qquad\qquad\quad (1)$$
where $r$ is the number of regions in a planar representation of $G$ (e: number of edges, v: number of vertices). (Note, for polyhedra which are clearly not planar, this translates into $r = F$, where $F$ is the number of faces of a polyhedron.)
Now, a connected planar simple graph drawn in the plane divides the plane into regions, say $r$ of them. The degree of each region, including the unbounded region, must be at least five (since $G$ is a connected planar graph with no simple circuit of length four or less).
For the second formula you'll need: remember that the sum of the degrees of the regions is exactly twice the number of edges in the graph, because each edge occurs on the boundary of a region exactly twice, either in two different regions, or twice in the same region. Because each region $r$ has degree greater than or equal to five, $$2e = \sum_{\text{all regions}\;R} \mathrm{deg}(R) \geq 5r\qquad\qquad\qquad\qquad (2)$$
which gives us $r \leq \large\frac 25 e$.
Now, using this result from (2), and substituting for r in Euler's formula, (1), we obtain $$e - v + 2 \leq \frac 25 e,$$
$$\frac 35 e \leq v - 2,$$
and hence, we have, as desired: $$e \leq \frac 53 v - \frac {10}{3} \quad\iff \quad \frac 53 v - \frac{10}{3} \geq e \quad \iff \quad 5(v-2) \geq 3e$$
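As an aside (my own illustration, not part of the exercise), this bound is exactly what shows that a girth-5 graph like the Petersen graph cannot be planar:

```python
# Petersen graph: v = 10 vertices, e = 15 edges, no cycle of length < 5,
# so planarity would force 3e <= 5(v - 2) -- which fails.
v, e = 10, 15
print(3*e, "<=", 5*(v - 2), "?", 3*e <= 5*(v - 2))   # 45 <= 40 ? False
```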
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why does a diagonalization of a matrix B with the basis of a commuting matrix A give a block diagonal matrix? I am trying to understand a proof concerning commuting matrices and simultaneous diagonalization of these.
It seems to be a well known result that when you take the eigenvectors of $A$ as a basis and diagonalize $B$ with it then you get a block diagonal matrix:
$$B=
\begin{pmatrix}
B_{1} & 0 & \cdots & 0 \\
0 & B_{2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & B_{m}
\end{pmatrix},$$
where each $B_{i}$ is an $m_{g}(\lambda_{i}) \times m_{g}(\lambda_{i})$ block ($m_{g}(\lambda_{i})$ being the geometric multiplicity of $\lambda_{i}$).
My questionWhy is this so? I calculated an example and, lo and behold, it really works :-) But I don't understand how it works out so neatly.
Can you please explain this result to me in an intuitive and step-by-step manner - Thank you!
| Suppose that $A$ and $B$ are matrices that commute. Let $\lambda$ be an eigenvalue for $A$, and let $E_{\lambda}$ be the eigenspace of $A$ corresponding to $\lambda$. Let $\mathbf{v}_1,\ldots,\mathbf{v}_k$ be a basis for $E_{\lambda}$.
I claim that $B$ maps $E_{\lambda}$ to itself; in particular, $B\mathbf{v}_i$ can be expressed as a linear combination of $\mathbf{v}_1,\ldots,\mathbf{v}_k$, for $i=1,\ldots,k$.
To show that $B$ maps $E_{\lambda}$ to itself, it is enough to show that $B\mathbf{v}_i$ lies in $E_{\lambda}$; that is, that if we apply $A$ to $B\mathbf{v}_i$, the result will be $\lambda(B\mathbf{v}_i)$. This is where the fact that $A$ and $B$ commute comes in. We have:
$$A\Bigl(B\mathbf{v}_i\Bigr) = (AB)\mathbf{v}_i = (BA)\mathbf{v}_i = B\Bigl(A\mathbf{v}_i\Bigr) = B(\lambda\mathbf{v}_i) = \lambda(B\mathbf{v}_i).$$
Therefore, $B\mathbf{v}_i\in E_{\lambda}$, as claimed.
So, now take the basis $\mathbf{v}_1,\ldots,\mathbf{v}_k$, and extend it to a basis for $\mathbf{V}$, $\beta=[\mathbf{v}_1,\ldots,\mathbf{v}_k,\mathbf{v}_{k+1},\ldots,\mathbf{v}_n]$. To find the coordinate matrix of $B$ relative to $\beta$, we compute $B\mathbf{v}_i$ for each $i$, write $B\mathbf{v}_i$ as a linear combination of the vectors in $\beta$, and then place the corresponding coefficients in the $i$th column of the matrix.
When we compute $B\mathbf{v}_1,\ldots,B\mathbf{v}_k$, each of these will lie in $E_{\lambda}$. Therefore, each of these can be expressed as a linear combination of $\mathbf{v}_1,\ldots,\mathbf{v}_k$ (since they form a basis for $E_{\lambda}$). So, to express them as linear combinations of $\beta$, we just add $0$s; we will have:
$$\begin{align*}
B\mathbf{v}_1 &= b_{11}\mathbf{v}_1 + b_{21}\mathbf{v}_2+\cdots+b_{k1}\mathbf{v}_k + 0\mathbf{v}_{k+1}+\cdots + 0\mathbf{v}_n\\
B\mathbf{v}_2 &= b_{12}\mathbf{v}_1 + b_{22}\mathbf{v}_2 + \cdots +b_{k2}\mathbf{v}_k + 0\mathbf{v}_{k+1}+\cdots + 0\mathbf{v}_n\\
&\vdots\\
B\mathbf{v}_k &= b_{1k}\mathbf{v}_1 + b_{2k}\mathbf{v}_2 + \cdots + b_{kk}\mathbf{v}_k + 0\mathbf{v}_{k+1}+\cdots + 0\mathbf{v}_n
\end{align*}$$
where $b_{ij}$ are some scalars (some possibly equal to $0$). So the matrix of $B$ relative to $\beta$ would start off something like:
$$\left(\begin{array}{ccccccc}
b_{11} & b_{12} & \cdots & b_{1k} & * & \cdots & *\\
b_{21} & b_{22} & \cdots & b_{2k} & * & \cdots & *\\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
b_{k1} & b_{k2} & \cdots & b_{kk} & * & \cdots & *\\
0 & 0 & \cdots & 0 & * & \cdots & *\\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & * & \cdots & *
\end{array}\right).$$
So, now suppose that you have a basis for $\mathbf{V}$ that consists entirely of eigenvectors of $A$; let $\beta=[\mathbf{v}_1,\ldots,\mathbf{v}_n]$ be this basis, with $\mathbf{v}_1,\ldots,\mathbf{v}_{m_1}$ corresponding to $\lambda_1$ (with $m_1$ the algebraic multiplicity of $\lambda_1$, which equals the geometric multiplicity of $\lambda_1$); $\mathbf{v}_{m_1+1},\ldots,\mathbf{v}_{m_1+m_2}$ the eigenvectors corresponding to $\lambda_2$, and so on until we get to $\mathbf{v}_{m_1+\cdots+m_{k-1}+1},\ldots,\mathbf{v}_{m_1+\cdots+m_k}$ corresponding to $\lambda_k$. Note that $\mathbf{v}_{1},\ldots,\mathbf{v}_{m_1}$ are a basis for $E_{\lambda_1}$; that $\mathbf{v}_{m_1+1},\ldots,\mathbf{v}_{m_1+m_2}$ are a basis for $E_{\lambda_2}$, etc.
By what we just saw, each of $B\mathbf{v}_1,\ldots,B\mathbf{v}_{m_1}$ lies in $E_{\lambda_1}$, and so when we express it as a linear combination of vectors in $\beta$, the only vectors with nonzero coefficients are $\mathbf{v}_1,\ldots,\mathbf{v}_{m_1}$, because they are a basis for $E_{\lambda_1}$. So in the first $m_1$ columns of $[B]_{\beta}^{\beta}$ (the coordinate matrix of $B$ relative to $\beta$), the only nonzero entries in the first $m_1$ columns occur in the first $m_1$ rows.
Likewise, each of $B\mathbf{v}_{m_1+1},\ldots,B\mathbf{v}_{m_1+m_2}$ lies in $E_{\lambda_2}$, so when we express them as linear combinations of $\beta$, the only places where you can have nonzero coefficients are in the coefficients of $\mathbf{v}_{m_1+1},\ldots,\mathbf{v}_{m_1+m_2}$. So the $(m_1+1)$st through $(m_1+m_2)$st column of $[B]_{\beta}^{\beta}$ can only have nonzero entries in the $(m_1+1)$st through $(m_1+m_2)$st rows. And so on.
That means that $[B]_{\beta}^{\beta}$ is in fact block-diagonal, with the blocks corresponding to the eigenspaces $E_{\lambda_i}$ of $A$, exactly as described.
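Here is a small numerical illustration of the result (every construction and name below is my own choice): build a symmetric $A$ with eigenvalues $(1,1,2)$, solve $AB=BA$ for a generic $B$ via the nullspace of the commutator operator, and display $B$ in the eigenbasis of $A$.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthonormal basis
A = Q @ np.diag([1.0, 1.0, 2.0]) @ Q.T         # eigenvalue 1 has multiplicity 2

# Column-major vec identity: vec(AB - BA) = (I (x) A - A^T (x) I) vec(B).
K = np.kron(np.eye(3), A) - np.kron(A.T, np.eye(3))
basis = null_space(K)                          # dim 5 = 2^2 + 1^2: the commutant
B = (basis @ rng.normal(size=basis.shape[1])).reshape(3, 3, order="F")

w, V = np.linalg.eigh(A)                       # eigh groups the repeated eigenvalue
print(np.round(V.T @ B @ V, 8))                # a 2x2 block and a 1x1 block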
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
} |
Proving Stewart's theorem without trig Stewart's theorem states that in the triangle shown below,
$$ b^2 m + c^2 n = a (d^2 + mn). $$
Is there any good way to prove this without using any trigonometry? Every proof I can find uses the Law of Cosines.
| Geometric equivalents of the Law of Cosines are already present in Book II of Euclid, in Propositions $12$ and $13$ (the first is the obtuse angle case, the second is the acute angle case).
Here are links to Proposition $12$, Book II, and to Proposition $13$.
There is absolutely no trigonometry in Euclid's proofs.
These geometric equivalents of the Law of Cosines can be used in a mechanical way as "drop in" replacements for the Law of Cosines in "standard" proofs of Stewart's Theorem. What in trigonometric approaches we think of as $2ab\cos\theta$ is, in Euclid, the area of a rectangle that is added to or subtracted from the combined area of two squares.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 4,
"answer_id": 0
} |
Atiyah-Macdonald, Exercise 8.3: Artinian iff finite k-algebra.
Atiyah Macdonald, Exercise 8.3. Let $k$ be a field and $A$ a finitely generated $k$-algebra. Prove that the following are equivalent:
(1) $A$ is Artinian.
(2) $A$ is a finite $k$-algebra.
I have a question in the proof of (1$\Rightarrow$2): By using the structure theorem, we may assume that $(A,m)$ is an Artin local ring. Then $A/m$ is a finite algebraic extension of $k$ by Zariski lemma. Since $A$ is Artinian, $m$ is the nilradical of $A$ and thus $m^n=0$ for some $n$. Thus we have a chain $A \supseteq m \supseteq m^2 \supseteq \cdots \supseteq m^n=0$. Since $A$ is Noetherian, $m$ is finitely generated and hence each $m^i/m^{i+1}$ is a finite dimensional $A/m$-vector space, hence a finite dimensional $k$-vector space.
But now how can I deduce that $A$ is a finite dimensional $k$-vector space?
| The claim also seems to follow from the Noether normalization lemma:
Let $B := k[x_1, \dotsc, x_n]$ with $k$ any field and let $I \subseteq B$ be any ideal.
Since $A$ is a finitely generated $k$-algebra you may let $A := B/I$. By the Noether normalization lemma it follows that there is a finite set of elements $y_1, \dotsc, y_d \in A$ with $d = \dim(A)$ (the Krull dimension) and the property that the subring $k[y_1, \dotsc, y_d] \subseteq A$ generated by the elements $y_i$ is a polynomial ring. The ring extension $k[y_1, \dotsc, y_d] \subseteq A$ is an integral extension of rings. If $d = 0$, it follows from the same lemma that the ring extension $k \subseteq A$ is integral, and since $A$ is finitely generated as a $k$-algebra by the elements $\overline{x_i}$ and each element $\overline{x_i}$ is integral over $k$, it follows that $\dim_k(A) < \infty$.
Question: “But now how can I deduce that $A$ is a finite dimensional $k$-vector space?”
Answer: It seems from the argument above you can use the Noether normalization lemma to give another proof of your implication, different from the proofs given above. Hence now you have two proofs of your result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 2
} |
How to prove the implicit function theorem fails Define $$F(x,y,u,v)= 3x^2-y^2+u^2+4uv+v^2$$ $$G(x,y,u,v)=x^2-y^2+2uv$$
Show that there is no open set in the $(u,v)$ plane such that $(F,G)=(0,0)$ defines $x$ and $y$ in terms of $u$ and $v$.
If (F,G) is equal to say (9,-3) you can just apply the Implicit function theorem and show that in a neighborhood of (1,1) $x$ and $y$ are defined in terms of $u$ and $v$. But this question seems to imply that some part of the assumptions must be necessary for such functions to exist?
I believe that since the partials exist and are continuous the determinant of $$\pmatrix{
\frac{\partial F}{\partial x}&\frac{\partial F}{\partial y}\cr
\frac{\partial G}{\partial x}&\frac{\partial G}{\partial y} }$$ must be non-zero in order for $x$ and $y$ to be implicitly defined on an open set near any point $(u,v)$; but since the above conditions force $x=y=0$, the determinant of the above matrix is $0$.
I have not found this in an analysis text but this paper http://www.u.arizona.edu/~nlazzati/Courses/Math519/Notes/Note%203.pdf claims it is necessary.
| To say $(F,G) = (0,0)$ is to say that $y^2 - 3x^2 = u^2 + 4uv + v^2$ and $y^2 - x^2 = 2uv$. By some algebra, this is equivalent to $x^2 = -{1 \over 2}(u + v)^2$ and $y^2 = -{1 \over 2}(u - v)^2$. So you are requiring the nonnegative quantities on the left to be equal to the nonpositive quantities on the right. Hence the solution set is only $(x,y,u,v) = (0,0,0,0)$, where everything is zero.
Suppose on the other hand you had equations $x^2 = {1 \over 2}(u + v)^2$ and $y^2 = {1 \over 2}(u - v)^2$. Then you could solve them, but there is no uniqueness now; you could take $(x,y) = (\pm {1 \over \sqrt{2}}(u + v),\pm {1 \over \sqrt{2}}(u - v))$ obtaining four distinct smooth solutions that come together at $(0,0,0,0)$.
So these are good examples showing that if the determinant is zero at $(0,0)$ you don't have to have existence of solutions, nor uniqueness when you do have existence.
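The "some algebra" here is a linear elimination in $x^2$ and $y^2$; as a symbolic confirmation, here is a small sketch of my own with sympy, using stand-in symbols $X=x^2$, $Y=y^2$:

```python
import sympy as sp

u, v, X, Y = sp.symbols('u v X Y', real=True)  # X, Y stand for x^2, y^2
sol = sp.solve([3*X - Y + u**2 + 4*u*v + v**2,  # F = 0
                X - Y + 2*u*v],                 # G = 0
               [X, Y])
print(sp.factor(sol[X]))   # -(u + v)**2/2
print(sp.factor(sol[Y]))   # -(u - v)**2/2
```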
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Permutation/Combinations in bit Strings I have a string of 10 letters, each of which can be a, b, or c. How many such strings can be made that have exactly 3 a's, or exactly 4 b's?
I thought that it would be C(7,2) + C(6,2), but that's wrong (the answer is 24,600).
| Hint: By the inclusion-exclusion principle, the answer is equal to
$$\begin{align} & \text{(number of strings with exactly 3 a's)}\\ + & \text{(number of strings with exactly 4 b's)}\\ - &\text{(number of strings with exactly 3 a's and 4 b's)} \end{align}$$
Suppose I want to make a string with exactly 3 a's. First, I need to choose where to put the a's; the number of ways of choosing 3 places to put the a's, out of 10 places, is $\binom{10}{3}$. Now, I need to choose how to fill in the other places with b's or c's; there are 2 choices of letters and 7 places left. Thus, the number of strings that have exactly 3 a's is equal to
$$\binom{10}{3}\cdot 2^7$$
You should be able to use similar reasoning to find the other numbers.
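Both the brute-force count and the inclusion-exclusion count are easy to run (my own sketch):

```python
from itertools import product
from math import comb

brute = sum(1 for s in product("abc", repeat=10)
            if s.count("a") == 3 or s.count("b") == 4)
formula = (comb(10, 3) * 2**7          # exactly three a's
           + comb(10, 4) * 2**6        # exactly four b's
           - comb(10, 3) * comb(7, 4)) # both: a's placed, b's placed, rest c's
print(brute, formula)                  # 24600 24600
```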
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Modus Operandi. Formulae for Maximum and Minimum of two numbers with a + b and $|a - b|$ I came across the following problem in my self-study of real analysis:
For any real numbers $a$ and $b$, show that $$\max \{a,b \} = \frac{1}{2}(a+b+|a-b|)$$ and $$\min\{a,b \} = \frac{1}{2}(a+b-|a-b|)$$
So $a \geq b$ iff $a-b \ge0$ and $b \ge a$ iff $b-a \ge 0$. At first glance, it seems like an average of distances. For the first case, go to the point $a+b$, add $|a-b|$ and divide by $2$. Similarly with the second case.
Would you just break it up in cases and verify the formulas? Or do you actually need to come up with the formulas?
| I know this is a little bit late, but here is another way to arrive at that formula.
If we want $\min(a,b)$, we can tell which argument is smaller from the sign of $b-a$. Define $sign(x)=\frac{x}{|x|}$ (for $x \neq 0$) and $msign(x)=\frac{sign(x)+1}{2}$, which takes the values $0$ or $1$: $msign(a-b)$ is $1$ when $a$ is bigger and $0$ when $a$ is smaller. Each argument should be picked up exactly when it is the smaller one, so we have
$$\min(a,b)=msign(b-a)a+msign(a-b)b$$
and
$$\max(a,b)=msign(a-b)a+msign(b-a)b$$
and simplifying
$$\min(a,b)=\frac{1}{2}\left(a+b-|a-b|\right)$$
$$\max(a,b)=\frac{1}{2}\left(a+b+|a-b|\right)$$
All this come from this equations:
$$\min(a,b)= \begin{cases}
a & msign(a-b)==0\\
b & msign(a-b)==1
\end{cases}
$$
$$\max(a,b)= \begin{cases}
a & msign(a-b)==1\\
b & msign(a-b)==0
\end{cases}
$$
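A quick machine check of the two closed forms (my own sketch; integers keep the arithmetic exact):

```python
import random

for _ in range(10_000):
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    assert 2 * min(a, b) == a + b - abs(a - b)
    assert 2 * max(a, b) == a + b + abs(a - b)
print("ok")
```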
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 4,
"answer_id": 3
} |
Group theory intricate problem This is Miklos Schweitzer 2009 Problem 6. It's a group theory problem hidden in a complicated language.
A set system $(S,L)$ is called a Steiner triple system if $L \neq \emptyset$, any pair $x,y \in S, x \neq y$ of points lie on a unique line $\ell \in L$, and every line $\ell \in L$ contains exactly three points. Let $(S,L)$ be a Steiner triple system, and let us denote by $xy$ the third point on a line determined by the points $x \neq y$. Let $A$ be a group whose factor by its center $C(A)$ is of prime power order. Let $f,h:S \to A$ be maps, such that $C(A)$ contains the range of $f$, and the range of $h$ generates $A$. Show that if
$$ f(x)=h(x)h(y)h(x)h(xy)$$ holds for all pairs of points $x \neq y$, then $A$ is commutative and there exists an element $k \in A$ such that $$ f(x)= k h(x),\ \forall x \in S $$
Here is what I've got:
*Because the image of $h$ generates $A$, for $A$ to be commutative is enough to prove that $h(x)h(y)=h(y)h(x)$ for every $x,y \in S$.
*For the last identity to be true (if we have proved the commutativity) it is enough to have that the product $h(x)h(y)h(xy)=k$ for every $x \neq y$.
*$h(y)h(x)h(xy)=h(xy)h(x)h(y)$
*I should use somewhere the fact that the factor $A /C(A)$ has prime power order.
| Let $g:S\rightarrow A$ be defined as $g(x) = h(x)^{-1} f(x)$.
Now, if $\{x,y,z\}\in L$, then $g(y) = h(z)g(x)h(z)^{-1}$. This means that the image of $g$ is closed under conjugation by elements of $A$ since $A$ is generated by the image of $h.$
Also, since this formula does not depend on the order of $x,y,z$, it means that $g(x)=h(z)g(y)h(z)^{-1}$. In particular, then $h(z)^2$ commutes with $g(x)$ for all $x$.
But since $f(x)$ is in the center of $A$, that means that $h(z)^2$ commutes with $h(x)$ for all $x, z\in S$. Hence $h(z)^2$ commutes with all of $A$ - that is $h(z)^2\in C(A)$, so $A/C(A)$ is generated by elements of order $2$, so by the condition of the problem, $A/C(A)$ must be of order $2^n$ for some $n$.
Now, since $g(x)=h(y)h(x)h(z) = h(z)h(x)h(y)$, we can see that:
$$g(x)^2 = h(y)h(x)h(z)h(z)h(x)h(y) = h(x)^2 h(y)^2 h(z)^2$$
Therefore, $g(x)^2 = g(y)^2 = g(z)^2$, and in particular, for all $x,y \in S$, $g(x)^2 = g(y)^2$. So there is some $K\in C(A)$ such that $\forall x\in S, g(x)^2=K$.
There are lots of things that can be concluded from knowing that $h(x)^2\in C(A)$. For example, that $f(x)f(y) {f(z)}^{-1}= h(x)^2 h(y)^2$. That can be used to show that $f(x)f(y)f(z) = h(x)^4h(y)^4h(z)^4 = K^2$.
Not sure where to go from here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/46982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
$F[a] \subseteq F(a)?$ I think this is probably an easy question, but I'd just like to check that I'm looking at it the right way.
Let $F$ be a field, and let $f(x) \in F[x]$ have a zero $a$ in some extension field $E$ of $F$. Define $F[a] = \left\{ f(a)\ |\ f(x) \in F[x] \right\}$. Then $F[a]\subseteq F(a)$.
The way I see this is that $F(a)$ contains all elements of the form $c_0 + c_1a + c_2a^2 + \cdots + c_na^n$ (finite sums, $c_i \in F$), hence it contains $F[a]$. Is that the "obvious" reason $F[a]$ is in $F(a)$?
And by the way, is $F[a]$ standard notation for the set just defined?
| (1) Yes, you are correct. Note that $F(a)=\{\frac{f(a)}{g(a)}:f,g\in F[x], g(a)\neq 0\}$; in other words, $F(a)$ is the field of fractions of $F[a]$ and therefore certainly contains $F[a]$.
(2) Yes, the notation $F[a]$ is standard for the set you described.
Exercise 1: Prove that if $a$ is algebraic over $F$, then $F[a]=F(a)$. (Hint: prove first that $\frac{1}{a}\in F[a]$ (if $a\neq 0$) using an algebraic equation of minimal degree of $a$ over $F$.)
Exercise 2: Prove that if $a$ is transcendental over $F$, then $F[a]\neq F(a)$. (Hint: Prove that $F[a]\cong F[x]$ where $F[x]$ denotes the polynomial ring in the variable $x$ over $F$. Note that $F[x]$ is never a field if $F$ is a field.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Is there a name for the matrix $X(X^tX)^{-1}X^{t}$? In my work, I have repeatedly stumbled across the matrix (with a generic matrix $X$ of dimensions $m\times n$ with $m>n$ given) $\Lambda=X(X^tX)^{-1}X^{t}$. It can be characterized by the following:
(1) If $v$ is in the span of the column vectors of $X$, then $\Lambda v=v$.
(2) If $v$ is orthogonal to the span of the column vectors of $X$, then $\Lambda v = 0$.
(we assume that $X$ has full rank).
I find this matrix neat, but for my work (in statistics) I need more intuition behind it. What does it mean in a probability context? We are deriving properties of linear regressions, where each row in $X$ is an observation.
Is this matrix known, and if so in what context (statistics would be optimal but if it is a celebrated operation in differential geometry, I'd be curious to hear as well)?
| This should be a comment, but I can't leave comments yet. As pointed out by Rahul Narain, this is the orthogonal projection onto the column space of $X$. In the statistics literature it is usually called the hat matrix, because it maps the observed response $y$ to the fitted values $\hat y = X\hat\beta$ in least-squares regression.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Null Sequences and Real Analysis I came across the following problem during the course of my study of real analysis:
Prove that $(x_n)$ is a null sequence iff $(x_{n}^{2})$ is null.
For all $\epsilon>0$, $|x_{n}| \leq \epsilon$ for $n > N_1$. Let $N_2 = \text{ceiling}(\sqrt{N_1})$. Then $(x_{n}^{2}) \leq \epsilon$ for $n > N_2$. If $(x_{n}^{2})$ is null then $|x_{n}^{2}| \leq \epsilon$ for $n>N$. Let $N_3 = N^2$. Then $|x_n| \leq \epsilon$ for $n> N_3$.
Is this correct? In general, we could say $(x_{n})$ is null iff $(x_{n}^{n})$ is null?
| You could use the following fact:
If a function $f:X\to Y$ between two topological spaces is continuous and $x_n\to x$, then $f(x_n)\to f(x)$.
(In case you have not learned it in this generality, you might at least know that this is true for real functions or for functions between metric spaces. In fact, in the case of real functions the above condition is equivalent to continuity.)
You can obtain your first claim by applying the fact to the continuous functions:
$f: \mathbb R\to\mathbb R$, $f(x)=x^2$ (one implication)
$f: \langle 0,\infty)\to \mathbb R$, $f(x)=\sqrt{x}$ (reverse implication)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
(Organic) Chemistry for Mathematicians Recently I've been reading "The Wild Book" which applies semigroup theory to, among other things, chemical reactions. If I google for mathematics and chemistry together, most of the results are to do with physical chemistry: cond-mat, fluids, QM of molecules, and analysis of spectra. I'm more interested in learning about biochemistry, molecular biology, and organic chemistry — and would prefer to learn from a mathematical perspective.
What other books aim to teach (bio- || organic) chemistry specifically to those with a mathematical background?
| Organic chemistry
S. Fujita's "Symmetry and combinatorial enumeration in chemistry" (Springer-Verlag, 1991) is one such endeavor. It mainly focuses on stereochemistry.
Molecular biology and biochemistry
A. Carbone and M. Gromov's "Mathematical slices of molecular biology" is recommended, although it is not strictly a book.
R. Phillips, J. Kondev and J. Theriot have published "Physical biology of the cell", which contains biochemical topics (such as structures of hemoglobin) and is fairly accessible to mathematicians in my opinion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 2,
"answer_id": 1
} |
Question about proof for $S_4 \cong V_4 \rtimes S_3$ In my book they give the following proof for $S_4 \cong V_4 \rtimes S_3$ :
Let $j: S_3 \rightarrow S_4: p \mapsto \left( \begin{array}{cccc}
1 & 2 & 3 & 4 \\
p(1) & p(2) & p(3) & 4 \end{array} \right)$
Clearly $j$ is an injective homomorphism, so $j(S_3)$ is a subgroup of $S_4$ isomorphic to $S_3$. We identify $S_3$ with $j(S_3)$.
Also $V_4 \triangleleft S_4$ and clearly $V_4 \cap S_3 = \{I\}$.
We now only have to show that $S_4 = V_4S_3$. Since $V_4\cap S_3 = \{I\}$, we know that $\#(V_4S_3) = \#V_4 \cdot \#S_3 = 4 \cdot 6 = 24 = \#S_4$, thus $S_4 = V_4S_3$, which implies that $S_4 \cong V_4 \rtimes S_3$.
However, I am wondering what the function $j$ is actually used for in the proof? (I do not see the connection.)
|
It is only used to identify the subgroup S3 of S4, and is only needed as a technicality.
If you view S3 as bijections from {1,2,3} to {1,2,3} and S4 as bijections from {1,2,3,4} to {1,2,3,4}, and you view functions as having domains and ranges (not just rules), then no element of S3 is an element of S4. The function j allows you to view elements of S3 as bijections of {1,2,3,4} that happen to leave 4 alone. Then the elements of S3 (really j(S3)) are elements of S4, and so you can talk about it being a subgroup.
The statement of the theorem appears to mention external semi-direct products, but the proof uses internal semi-direct products. To use an internal semi-direct product, you need subgroups.
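For readers who like machine verification, sympy can check the three facts the proof uses: $V_4 \trianglelefteq S_4$, $V_4 \cap j(S_3) = \{I\}$, and $V_4\,j(S_3) = S_4$ (my own sketch; the permutations act on $\{0,1,2,3\}$, with $j(S_3)$ fixing the last point):

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

S4 = SymmetricGroup(4)
V4 = PermutationGroup(Permutation([1, 0, 3, 2]),   # (0 1)(2 3)
                      Permutation([2, 3, 0, 1]))   # (0 2)(1 3)
S3 = PermutationGroup(Permutation([1, 0, 2, 3]),   # (0 1), fixes 3
                      Permutation([1, 2, 0, 3]))   # (0 1 2), fixes 3

print(V4.is_normal(S4))                                     # True
print(len(set(V4.elements) & set(S3.elements)))             # 1: the identity
prods = {v * s for v in V4.elements for s in S3.elements}
print(prods == set(S4.elements))                            # True: V4*S3 = S4
```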
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Qualitative interpretation of Hilbert transform the well-known Kramers-Kronig relations state that for a function satisfying certain conditions, its imaginary part is the Hilbert transform of its real part.
This often comes up in physics, where it can be used to related resonances and absorption. What one usually finds there is the following: Where the imaginary part has a peak, the real part goes through zero.
Is this a general rule?
And are there more general statements possible? For Fourier transforms, for example, I know the statement that a peak with width $\Delta$ in time domain corresponds to a peak with width $1/\Delta$ (missing some factors $\pi$, I am sure...) in frequency domain.
Is there some rule of thumb that tells me how the Hilbert transform of a function with finite support (e.g. with a bandwidth $W$) looks like, approximately?
Thanks,
Lagerbaer
| Never heard of the Kramers-Kronig relations, so I looked them up. They relate the real and imaginary parts of an analytic function on the upper half plane that satisfies certain growth conditions. This is a big area in complex analysis and there are many results. For example, in the case of a function with compact support, its Hilbert transform can never have compact support, or even vanish on a set of positive measure. Many books on analytic functions (especially ones on $H^p$ spaces and bounded analytic functions) cover this topic. Some books in signal processing also cover it, but from a different perspective, and in most cases less rigorously.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Involuted vs Idempotent What is the difference between an "involuted" and an "idempotent" matrix?
I believe that they both have to do with inverse, perhaps "self inverse" matrices.
Or do they happen to refer to the same thing?
| A matrix $A$ is an involution if it is its own inverse, ie if
$$A^2 = I$$
A matrix $B$ is idempotent if it squares to itself, ie if
$$B^2 = B$$
The only invertible idempotent matrix is the identity matrix, which can be seen by multiplying both sides of the above equation by $B^{-1}$. An idempotent matrix is also known as a projection.
Involutions and idempotents are related to one another. If $B$ is idempotent then $I - 2B$ is an involution, and if $A$ is an involution, then $\tfrac{1}{2}(I\pm A)$ is idempotent.
Finally, if $B$ is idempotent then $I-B$ is also idempotent, and if $A$ is an involution then $-A$ is also an involution.
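A short numerical illustration of these relations (my own sketch):

```python
import numpy as np

u = np.array([[1.0], [2.0]])
B = u @ u.T / (u.T @ u)                 # projection onto span{u}: idempotent
A = np.eye(2) - 2 * B                   # the associated involution

print(np.allclose(B @ B, B))            # True: B^2 = B
print(np.allclose(A @ A, np.eye(2)))    # True: A^2 = I
```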
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Show that $f \in \Theta(g)$, where $f(n) = n$ and $g(n) = n + 1/n$ I am a total beginner with the big theta notation. I need find a way to show that $f \in \Theta(g)$, where $f(n) = n$, $g(n) = n + 1/n$, and that $f, g : Z^+ \rightarrow R$. What confuses me with this problem is that I thought that "$g$" is always supposed to be "simpler" than "$f$." But I think I missed something here.
| You are sort of right about thinking that "$g$" is supposed to be simpler than "$f$", but not technically right. The formal definition says nothing about simpler.
However, in practice one is essentially always comparing something somewhat messy, on the left, with something whose behaviour is sort of clear(er) to the eye, on the right.
For the actual verifications in this exercise, it would have made no difference if the functions had been interchanged, so probably the "colloquially standard" version should have been used. But maybe not, once or twice. Now you know a little more about the symmetry of the notion.
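Concretely, here is the verification: for every $n \ge 1$ we have $$n \;\le\; n + \frac{1}{n} \;\le\; 2n,$$ so $\frac{1}{2}\,g(n) \le f(n) \le g(n)$, which witnesses $f \in \Theta(g)$ with constants $c_1 = \frac12$, $c_2 = 1$ and threshold $n_0 = 1$; reading the same inequalities the other way gives $g \in \Theta(f)$.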
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
"Counting Tricks": using combination to derive a general formula for $1^2 + 2^2 + \cdots + n^2$ I was reading an online article which confused me with the following. To find out $S(n)$, where $S(n) = 1^2 + 2^2 + \cdots + n^2$, one can first write out the first few terms:
0 1 5 14 30 55 91 140 204 285
Then, get the differences between adjacent terms until they're all zeroes:
0 1 5 14 30 55 91 140 204 285
1 4 9 16 25 36 49 64 81
3 5 7 9 11 13 15 17
2 2 2 2 2 2 2
all zeroes this row
Then it says that therefore we can use the following method to achieve $S(n)$:
$S(n) = 0 {n\choose 0} + 1 {n\choose 1} + 3 {n\choose 2} + 2 {n\choose 3}$.
I don't understand the underlying mechanism. Someone cares to explain?
| The key word here is finite differences. See Newton series.
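To make the mechanism concrete, here is a short sketch (my own code) that extracts the leading entries of the difference rows and uses them as coefficients of $\binom{n}{i}$, exactly as in the displayed formula:

```python
from math import comb

def newton_coeffs(values):
    """Leading entry of each successive difference row of the given values."""
    coeffs, row = [], list(values)
    while any(row):
        coeffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return coeffs

S = [sum(k*k for k in range(1, n+1)) for n in range(10)]   # 0, 1, 5, 14, 30, ...
c = newton_coeffs(S)
print(c)                                                   # [0, 1, 3, 2]
n = 25
print(sum(ci * comb(n, i) for i, ci in enumerate(c)),      # 5525 via the formula
      sum(k*k for k in range(1, n+1)))                     # 5525 directly
```

This works because $S(n)$ is a polynomial in $n$ (here of degree $3$), and any such polynomial equals $\sum_i (\Delta^i S)(0)\binom{n}{i}$, its Newton forward-difference expansion.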
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 1
} |
A double integral (differentiation under the integral sign) While working on a physics problem, I got the following double integral that depends on the parameter $a$:
$$I(a)=\int_{0}^{L}\int_{0}^{L}\sqrt{a}e^{-a(x-y+b)^2}dxdy$$
where $L$ and $b$ are constants.
Now, this integral obviously has no closed form in terms of elementary functions. However, it follows from physical considerations that the derivative of this integral $\frac{dI}{da}$ has a closed form solution in terms of exponential functions. Unfortunately, my mathematical abilities are not good enough to get this result directly from the integral. So, how does a mathematician solve this problem?
Nowadays many mathematicians (including me :-)) would be content to use some program to obtain
$$I'(a)=\frac{e^{-a (b+L)^2} \left(2 e^{a L (2 b+L)}-e^{4 a b L}-1\right)}{4 a^{3/2}}.$$
As for the proof, put $t=1/a$ and let $G(b,t)=e^{-b^2/t}/\sqrt{\pi t}\ $ be a fundamental solution of the heat equation $u_t-u_{bb}/4=0\ $. Then
$$
u(b,t)=I(1/a)/\sqrt\pi =\int_{0}^{L}\int_{0}^{L}G(b+x-y,t)\,dxdy.
$$
A bit of tinkering with what happens as $t\to+0$ shows that $u$ is a solution of the Cauchy problem with initial condition $u(b,0)=\psi(b)$, where $\psi(b)=L-|b|$ for $|b|\le L$ and $\psi(b)=0$ otherwise. So $u(b,t)=\int_{-\infty}^\infty G(b-z,t)\psi(z)\,dz$. Taking the Fourier transform with respect to $b$ we have
$$
\tilde u(\xi,t)=\tilde \psi(\xi) \tilde G(\xi,t)=-\frac{e^{-i L \xi} \left(-1+e^{i L \xi}\right)^2}{\sqrt{2 \pi } \xi^2} \frac{e^{-\frac{\xi ^2 t}{4}}}{\sqrt{2 \pi }}=
$$
$$
-\frac{\left(-1+e^{i L \xi}\right)^2 e^{-\frac{\xi ^2 t}{4}-i L \xi}}{2 \pi \xi^2},$$
$$
\tilde u_t(\xi,t)=\frac{\left(-1+e^{i L \xi }\right)^2 e^{-\frac{1}{4} \xi (\xi t+4 i L)}}{8 \pi }.
$$
Taking inverse Fourier transform etc. will give the answer above.
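As a sanity check on the displayed closed form for $I'(a)$, one can compare a central difference of the double integral against it (my own sketch; the values of $L$, $b$, $a$, $h$ are arbitrary test choices):

```python
import numpy as np
from scipy.integrate import dblquad

L, b = 1.3, 0.4

def I(a):
    # I(a) = sqrt(a) * double integral of exp(-a (x - y + b)^2) over [0, L]^2
    val, _ = dblquad(lambda y, x: np.sqrt(a) * np.exp(-a*(x - y + b)**2),
                     0, L, 0, L)
    return val

a, h = 0.7, 1e-4
numeric = (I(a + h) - I(a - h)) / (2 * h)
closed = (np.exp(-a*(b + L)**2)
          * (2*np.exp(a*L*(2*b + L)) - np.exp(4*a*b*L) - 1)) / (4 * a**1.5)
print(numeric, closed)   # agree to several decimal places
```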
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Completeness and Cauchy Sequences I came across the following problem on Cauchy Sequences:
Prove that every compact metric space is complete.
Suppose $X$ is a compact metric space. By definition, every sequence in $X$ has a convergent subsequence. We want to show that every Cauchy sequence in $X$ is convergent in $X$. Let $(x_n)$ be an arbitrary sequence in $X$ and $(x_{n_{k}})$ a subsequence that converges to $a$. Since $(x_{n_{k}}) \to a$ we have the following: $$(\forall \epsilon >0) \ \exists N \ni m,n \geq N \implies |x_{n_{m}}-x_{n_{n}}| < \epsilon$$
Using this, we can conclude that every Cauchy sequence in $X$ is convergent in $X$? Or do we inductively create subsequences and use Cauchy's criterion to show that it converges?
| Let $\epsilon > 0$. Since $(x_n)$ is Cauchy, there exists $\eta_1\in \mathbb N$ such that
$$ \left\vert x_n - x_m\right\vert < \frac \epsilon 2$$
for each pair $n, m > \eta_1$.
Since $x_{k_n} \to a$, there exists $\eta_2 \in \mathbb N$ such that
$$ \left\vert x_{k_n} - a\right\vert < \frac \epsilon 2$$
for each $n > \eta_2$.
Let $\eta = \max\{\eta_1, \eta_2\}$, if $n > \eta$ then $k_n \ge n > \eta$. Therefore we have
$$ \left\vert x_n - a\right\vert \le \left\vert x_n - x_{k_n}\right\vert + \left\vert x_{k_n} - a\right\vert < \frac \epsilon 2 + \frac \epsilon 2 = \epsilon$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |