Why does current decrease when voltage is increased? (transformers) Why is the current in the secondary coil less than in the primary when the voltage in that coil is greater than in the primary coil? (for transformers) Well, I know that energy should be conserved, but listen, this is not the cause I'm after! There must be some motion, forces or fields involved in the reason that finally results in high voltage and low current. And I also think that the solution will obey Ohm's law too, since I am not talking about non-ohmic conductors. If you think it doesn't, then explain why Ohm's law applies when you use a battery and a metal wire and why it does not apply for transformers.
If the output from the secondary of a transformer is connected to a fixed load (such as a resistor), an increased voltage will produce an increased current. This will require an increase in the current in the primary (in phase with the input voltage) to match the output power.
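To make the bookkeeping concrete, here is a minimal numeric sketch (not from the original answer; it assumes an ideal, lossless transformer, and the voltage, turns ratio and load value are made up): with a fixed resistor on the secondary, a higher secondary voltage draws more secondary current, and the primary current rises so that input power matches output power.

```python
# Ideal-transformer sketch: fixed resistive load on the secondary.
# All numbers are illustrative assumptions, not from the answer.
def transformer_currents(v_primary, turns_ratio, r_load):
    v_s = v_primary * turns_ratio      # secondary voltage steps up
    i_s = v_s / r_load                 # Ohm's law at the fixed load
    i_p = i_s * turns_ratio            # primary current for power balance
    return v_s, i_s, i_p

v_s, i_s, i_p = transformer_currents(230.0, 2.0, 100.0)
# Power into the primary equals power out of the secondary:
assert abs(230.0 * i_p - v_s * i_s) < 1e-9
print(v_s, i_s, i_p)
```

Doubling the turns ratio here doubles the secondary voltage, doubles the secondary current through the same resistor, and quadruples the primary current — exactly the "increase in the primary to match the output power" the answer describes.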
{ "language": "en", "url": "https://physics.stackexchange.com/questions/684921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Does Newtonian mechanics work in polar coordinates? Our teacher suggested that Newtonian mechanics only applies in Cartesian coordinates. Is this true? He gave this example. Suppose there is a train moving with constant velocity $\vec{v}=v_0\hat{x}$, with initial position vector $\vec{r}=(0, y_0)$, where $v_0,y_0$ are constants. He argued that Newton's second law would not hold in polar coordinates. Any ideas? (We can assume 2D or 3D cases as well, so spherical or polar, it doesn't really matter)
This is an example of how operators do not in general commute. That is: if $x$ and $y$ are variables, $xy=yx$, but if $f$ and $g$ are operators, $fg$ does not generally equal $gf$. An operator is a set of instructions for what to do to the expression that follows it. Consider as a simple example $f =$ "add 5" and $g =$ "multiply by 10". Then $fgx = 10x+5$ and $gfx = 10x + 50$. If we want to reverse the operators, we need a third operator, which has the effect of undoing the consequence of the order reversal. Suppose we started with $gx$ and wanted to operate on $x$ with $f$. In this case, we could introduce $h =$ "subtract 45". Then $fgx = hgfx$. Or we could introduce an operator that undid $g$, using $g^{-1} =$ "divide by 10". Then we can use the identity $gfg^{-1}gx=gfx$. Here, "convert Cartesian to polar" and "take the time derivative" are operators. Newton's mechanics are formulated in Cartesian coordinates, so if we want to operate with the time derivative and get Newtonian results, we need either a Cartesian coordinate expression, or a third operator. That is: either "convert polar to Cartesian" as $g^{-1}$ or "undo the consequence of operating on 'convert Cartesian to polar' with 'take the time derivative'" as $h$.
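The toy operators above can be checked directly; here is a small sketch where the functions just encode the answer's examples:

```python
# The answer's toy operators as Python functions.
f = lambda x: x + 5           # f = "add 5"
g = lambda x: 10 * x          # g = "multiply by 10"
h = lambda x: x - 45          # h = the "fix-up" operator
g_inv = lambda x: x / 10      # g^{-1} = "divide by 10"

x = 3
assert f(g(x)) == 10 * x + 5        # fgx
assert g(f(x)) == 10 * x + 50       # gfx -- order matters
assert f(g(x)) == h(g(f(x)))        # fgx = hgfx
assert g(f(g_inv(g(x)))) == g(f(x)) # gfg^{-1}gx = gfx
```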
{ "language": "en", "url": "https://physics.stackexchange.com/questions/684991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 10, "answer_id": 9 }
Difference in temperature due to height I observed a strange phenomenon today. I brought my milk and it was steaming hot, so I left it for a while so that it could cool down a bit. Once it had cooled enough there was cream on the surface, and I started to drink after removing the cream. When the glass was approx 1/8 full I found the milk was cold, and the difference in temperature between when I started to drink and now could clearly be felt. Then I swirled the milk (like we do with test tubes to mix chemicals) and found the milk was hot again. Is this phenomenon usual, or did it just happen to me? A few other observations: the base of the glass at the time I discovered this phenomenon was cold, unlike the body. The glass was made of steel.
As the vessel was steel it's possible that the heat was being conducted away through the base of the 'glass'. The base would be in contact with the table. This would cause cold milk at the bottom, it would stay at the bottom, as it's denser than the hot milk. The top of the glass would be hotter than the bottom. When you drank, you'd come to the cooler milk. When you swirled it, the milk would come into contact with the hot part of the glass, higher up, and get hot again.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/685138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to calculate the right path for this spaceship? A spaceship is moving with velocity $v$ in a line; it has a distance $P$ from a planet, with $$F=-\frac{m\gamma}{r^3},\qquad \gamma=\tfrac{8}{9}P^2v^2.$$ Now, how can I show that this spaceship turns around the planet B for one round, and find its closest distance to the planet B and its velocity at that point? As far as I know we should use $F=ma$, then from $a$ evaluate $r, \theta$ in polar coordinates, or something like this. What I get is: $$r(\theta)=\frac{1}{C_1\sin(\omega\theta)+C_2\cos(\omega\theta)}$$ for $\omega=\sqrt{1-m\mu/L^2}$, but the constants $C_1$ and $C_2$ can't be found. How can I solve what has been asked?
The equations of motion are: $$m\,\ddot r-m\,\dot\theta^2\,r+F_r=0\\ r^2\,\ddot\theta+2\,r\,\dot r\,\dot\theta=0\quad\Rightarrow\quad \dot\theta=\frac{h}{r^2}$$ where $~F_r=\frac{m\,\gamma}{r^3}$. From here you obtain $$r(\theta)=\left[C_1\sin(\omega\,\theta)+C_2\cos(\omega\,\theta)\right]^{-1}$$ where $~\omega=\frac{\sqrt{h^2-\gamma}}{h}$. The initial conditions are $$r\,\sin(\theta)\bigg|_{\theta=\pi/2}=p\\ \frac{dr}{d\theta}\bigg|_{\theta=0}=0$$ This is the result I obtained.
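A numerical sanity check of these equations of motion (a sketch with arbitrary initial values, not part of the derivation): integrating $\ddot r = r\dot\theta^2 - \gamma/r^3$ together with $\ddot\theta = -2\dot r\dot\theta/r$ with a simple RK4 stepper, the quantity $h = r^2\dot\theta$ should stay constant, which is exactly what the second equation of motion encodes.

```python
# Integrate the polar equations of motion with a hand-rolled RK4 stepper
# and check that h = r^2 * theta_dot is conserved along the trajectory.
# gamma and the initial state are arbitrary test values.
def deriv(s, gamma):
    r, vr, th, om = s
    return [vr, r*om*om - gamma/r**3, om, -2.0*vr*om/r]

def rk4_step(s, dt, gamma):
    def add(a, b, c):  # componentwise a + c*b
        return [x + c*y for x, y in zip(a, b)]
    k1 = deriv(s, gamma)
    k2 = deriv(add(s, k1, dt/2), gamma)
    k3 = deriv(add(s, k2, dt/2), gamma)
    k4 = deriv(add(s, k3, dt), gamma)
    return [x + dt/6*(p + 2*q + 2*u + w)
            for x, p, q, u, w in zip(s, k1, k2, k3, k4)]

gamma = 0.5
s = [1.0, 0.1, 0.0, 1.2]        # r, r_dot, theta, theta_dot
h0 = s[0]**2 * s[3]             # angular momentum per unit mass
for _ in range(2000):
    s = rk4_step(s, 1e-3, gamma)
h1 = s[0]**2 * s[3]
print(abs(h1 - h0))             # conserved to integrator precision
```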
{ "language": "en", "url": "https://physics.stackexchange.com/questions/685336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
I don't understand Wigner's friend paradox The Wigner's friend experiment goes like this: say Wigner instructed his friend to perform Schrödinger's cat experiment in a laboratory while he works from home; his friend makes the measurement and emails Wigner the result. The paradox is that the state of the cat is defined for his friend, since he took a peek, but before the email containing the result reaches Wigner, to Wigner the state of the cat is both alive and dead at the same time. I am now confused as to why Wigner knowing the result even matters. Please help me understand this paradox: because of Wigner's involvement there now seems to be a contradiction in the result.
The important part of the thought experiment is the time where Wigner's friend knows the cat is dead but Wigner himself has not yet read the email. After Wigner reads the email, he and his friend will agree that the system has collapsed and the paradox disappears. I would say that Wigner knowing the result eventually is not important for the paradox. The fact that at one time his friend knows for sure that the cat is dead but for Wigner the cat is in a superposition is the crucial part. While the email is on its way we have: Wigner's friend says: $$ |\psi\rangle = |\text{dead}\rangle $$ Wigner says: $$|\psi\rangle \propto (|\text{dead}\rangle + |\text{alive}\rangle)$$ Now who is right and when does the system collapse? In order for this paradoxical situation to occur, you need Wigner as an observer of the system that involves his friend and the cat.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/685492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is causing the diffraction pattern on my ceiling? When I wake up in the morning and look at my curtains, I see a pattern on the ceiling made by the light going through the gap between my curtains. I have added a picture of it below. I remember from high school that when a laser was shot through a very thin slit, you would get a similar pattern. However, I see two important differences here: (1) this is regular divergent sunlight instead of parallel single-wavelength light from a laser; (2) the gap in my curtains is orders of magnitude wider than the slit in that experiment. So if this is not a diffraction pattern caused by the wave-particle duality of light, then what is causing this pattern to appear on my ceiling? Edit: Okay, so I looked again at the light pattern and the building across the street, and now think that the vertical white beams are causing the pattern. The light pattern is just the horizontally mirrored reflection of those white beams. The part of the building in the front right has 6 beams, which correspond to the 6 bright beams of light. The back left has a lot more beams, which are further away and seem thinner because of it, so those we see on the right in the picture. Now I am wondering why the horizontal white beams are not showing up on my ceiling.
It looks as though it may be a reflection off the curtain rod.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/685704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 4, "answer_id": 0 }
Do light particles have thrust? I understand that nothing is faster than light and that it cannot escape a black hole. However, light particles may be fast, but perhaps light can't escape a black hole due to its lack of thrust power? I can't reasonably push an object with light. A rocket has thrust but can't go as fast as light, and light has speed but can't go through sheetrock. It just doesn't seem that light has much strength to it. Quasars spew light out due to a force pushing light out. Can this be explained to me?
As pointed out by joseph, light does indeed have momentum. It's extremely small, but still measurable. The origin of this property can be found in the wave-particle duality of electromagnetic radiation. In space, the momentum of photons is even utilized as a form of radiation-pressure thrust with the help of solar sails. This is possible because in space there is no air drag. There are also concepts in which space vehicles get their thrust from focused laser beams. When the mass of these objects is reasonably small, they could in theory reach a significant fraction of the speed of light itself. For further reading on this: https://en.wikipedia.org/wiki/Solar_sail
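The size of the effect can be estimated in a couple of lines (a sketch; the sail area is an illustrative assumption, not a real mission parameter): a photon carries momentum $p = E/c$, so a perfectly reflecting sail intercepting power $P$ feels a force $F = 2P/c$.

```python
# Radiation-pressure force on a small, perfectly reflecting solar sail.
# The 32 m^2 area is an assumed, illustrative value.
c = 299_792_458.0          # speed of light, m/s
P = 1361.0 * 32.0          # solar constant ~1361 W/m^2 on a 32 m^2 sail
F = 2 * P / c              # newtons, reflecting doubles momentum transfer
print(F)
```

The result is a few hundred micronewtons: tiny, but acting continuously and with no propellant, which is why solar sails work at all.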
{ "language": "en", "url": "https://physics.stackexchange.com/questions/685817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How is classical mechanics recovered when the commutator is zero? If $X$ and $P$ commute, then the rate of change of the expectation value of $X$ becomes zero, since $$\frac{d}{dt} \langle X \rangle \propto \langle [X, P^2+V(X)] \rangle=0.$$ This is not what classical mechanics says, is it?
One has to be careful in discussing the transition from quantum to classical mechanics. First, by Dirac quantization (see also this post): $$ [\hat A,\hat B]\to i\hbar \{A,B\}_{PB} +{\cal O}(\hbar^2) \tag{1} $$ where $\{A,B\}_{PB}$ is the Poisson bracket. Thus, if you naively set $\hbar\to 0$, you get nonsense. In particular you have no dynamics, as this comes out of the Poisson bracket of a function and the Hamiltonian. Note that, in (1), the left-hand side refers to the commutator of operators whereas the right-hand side refers to the PB of functions in phase space (of $p$ and $q$). Within the formalism of Wigner quasidistributions, which is probably the most natural one to investigate the quantum-classical transition, the classical limit is not obtained by setting $\hbar=0$ but by ignoring higher powers of $\hbar$ past the Poisson bracket in the expansion of the Moyal bracket. Even in the WKB formalism (which is an expansion in $\hbar$), the leading term, from which we extract the lowest order WKB approximation, still contains one power of $\hbar$. Thus recovering classical mechanics from quantum mechanics is a subtle business, and it is misleading to suggest that classical mechanics is obtained by simply setting $\hbar\to 0$.
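The Poisson-bracket side of (1) is easy to check numerically; here is a sketch (pure Python, central finite differences, with a harmonic-oscillator Hamiltonian chosen purely for illustration) verifying Hamilton's equations $\dot q = \{q,H\}_{PB}$ and $\dot p = \{p,H\}_{PB}$:

```python
# Poisson bracket {A,B} = dA/dq dB/dp - dA/dp dB/dq via central differences.
def pb(A, B, q, p, eps=1e-6):
    dAq = (A(q+eps, p) - A(q-eps, p)) / (2*eps)
    dAp = (A(q, p+eps) - A(q, p-eps)) / (2*eps)
    dBq = (B(q+eps, p) - B(q-eps, p)) / (2*eps)
    dBp = (B(q, p+eps) - B(q, p-eps)) / (2*eps)
    return dAq*dBp - dAp*dBq

H = lambda q, p: p*p/2 + q*q/2          # harmonic oscillator, m = k = 1
# Hamilton's equations: qdot = {q,H} = p, pdot = {p,H} = -q
assert abs(pb(lambda q, p: q, H, 0.3, 0.7) - 0.7) < 1e-6
assert abs(pb(lambda q, p: p, H, 0.3, 0.7) + 0.3) < 1e-6
```

This is the classical dynamics that survives the limit: it comes from the bracket itself, not from any leftover factor of $\hbar$.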
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to add angular velocity vectors? I was reading David Morin's mechanics book and came across this problem. And here is the solution provided: I am just wondering why you can express the total angular velocity of the coin with respect to the lab frame by simply adding the different angular velocity vectors. In the earlier part of the book, there is a theorem: Clearly in this case, the angular velocity vector from the rotation about the centre of the contact-point circle ($\mathbf{\Omega}\mathbf{\hat{z}}$) and the angular velocity vector from the rotation about $\mathbf{\hat{x}_3}$ do not share a common origin (if you extend a line from the centre of the coin perpendicular to the surface of the coin, it clearly will not always pass through the centre of the contact-point circle). I think my understanding is flawed. Why can you add angular velocities in this case?
Starting with the rotations matrix \begin{align*} &[\,_1^3\,\mathbf S\,]=[\,_1^2\,\mathbf S\,]\,[\,_2^3\,\mathbf S\,]\quad\Rightarrow\quad [\,_1^3\,\mathbf{\dot{S}}\,]=[\,_1^2\,\mathbf{\dot{S}}\,]\,[\,_2^3\,\mathbf S\,]+ [\,_1^2\,\mathbf S\,]\,[\,_2^3\,\mathbf{\dot{S}}\,]\\ &\text{with}\quad \mathbf{\dot{S}}=\mathbf{\tilde{\omega}}\,\mathbf S\quad \mathbf{\tilde{\omega}}= \left[ \begin {array}{ccc} 0&-\omega_{{z}}&\omega_{{y}} \\ \omega_{{z}}&0&-\omega_{{x}}\\ -\omega_{{y}}&\omega_{{x}}&0\end {array} \right]\quad\Rightarrow \\\\ &\mathbf{\tilde{\omega}}_{13}[\,_1^3\,\mathbf{{S}}\,]= \mathbf{\tilde{\omega}}_{12}[\,_1^2\,\mathbf{{S}}\,]\,[\,_2^3\,\mathbf S\,]+ [\,_1^2\,\mathbf S\,]\,\mathbf{\tilde{\omega}}_{23}[\,_2^3\,\mathbf{{S}}\,]\\\\ &\text{multiply from the right with}\quad [\,_3^1\,\mathbf{{S}}\,]\\\\ &\mathbf{\tilde{\omega}}_{13}= \mathbf{\tilde{\omega}}_{12}\underbrace{[\,_1^2\,\mathbf{{S}}\,]\,[\,_2^3\,\mathbf S\,][\,_3^1\,\mathbf{{S}}\,]}_{I_3}+ [\,_1^2\,\mathbf S\,]\,\mathbf{\tilde{\omega}}_{23}\underbrace{[\,_2^3\,\mathbf{{S}}\,][\,_3^1\,\mathbf{{S}}\,]} _{ [\,_2^1\,\mathbf S\,]}\\ &\text{thus the angular velocity vector}\\ &\mathbf\omega_{13}=\mathbf\omega_{12}+[\,_1^2\,\mathbf S\,]\mathbf\omega_{23} \end{align*} \begin{align*} &\text{with}\\ &[\,_1^2\,\mathbf S\,]=\left[ \begin {array}{ccc} \cos \left( \psi \right) &-\sin \left( \psi \right) &0\\ \sin \left( \psi \right) &\cos \left( \psi \right) &0\\ 0&0&1\end {array} \right] \quad, [\,_2^3\,\mathbf S\,]=\left[ \begin {array}{ccc} 1&0&0\\ 0&\cos \left( \phi \right) &-\sin \left( \phi \right) \\ 0 &\sin \left( \phi \right) &\cos \left( \phi \right) \end {array} \right] \\ &\mathbf\omega_{12}=\begin{bmatrix} 0 \\ 0 \\ \omega_\psi \\ \end{bmatrix}\quad \mathbf\omega_{23}=\begin{bmatrix} \omega_\phi \\ 0 \\ 0 \\ \end{bmatrix} \end{align*}
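The final identity can also be verified numerically; here is a sketch (angles and rates are arbitrary test values) that builds $S_{13}=R_z(\psi)\,R_x(\phi)$ as in the answer, extracts $\tilde\omega_{13}=\dot S_{13}S_{13}^T$ by finite differences, and compares the resulting vector with $\omega_{12}+R_z(\psi)\,\omega_{23}$:

```python
import math

# S12 = Rz(psi), S23 = Rx(phi), with psi = a*t and phi = b*t.
def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def Rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def mat(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k]*v[k] for k in range(3)) for i in range(3)]

def S13(t, a, b):
    return mat(Rz(a*t), Rx(b*t))

a, b, t, eps = 0.7, 1.3, 0.4, 1e-6
S = S13(t, a, b)
Sp, Sm = S13(t+eps, a, b), S13(t-eps, a, b)
Sdot = [[(Sp[i][j]-Sm[i][j])/(2*eps) for j in range(3)] for i in range(3)]
ST = [[S[j][i] for j in range(3)] for i in range(3)]
W = mat(Sdot, ST)                 # skew matrix omega~_13 = Sdot * S^T
w13 = [W[2][1], W[0][2], W[1][0]] # read off (wx, wy, wz)
# Predicted: w12 + Rz(psi) w23, with w12 = (0,0,a), w23 = (b,0,0)
w_pred = [x + y for x, y in zip([0, 0, a], matvec(Rz(a*t), [b, 0, 0]))]
assert all(abs(x - y) < 1e-5 for x, y in zip(w13, w_pred))
```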
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Will a planet rotate if it is the only being in the universe? As a senior student, I have been wondering what the word inertia means. Does inertia lie in the interaction between all objects, or is it the nature of space itself, even with nothing put into it? In everyday life it seems like the latter, since wherever you throw a stone it will follow a parabola. But that is not conclusive, for there are still the Earth and the Sun and all the distant galaxies that interact with the stone from outside its moving space. So suppose all the interactions are removed, and there is only a single planet in an otherwise empty universe. Will it rotate, and can we detect its rotation through, for example, a Foucault pendulum? If not, can we conclude that inertia relies on the interaction of objects, and is thus a consequence of universal gravitation?
Rotation is a type of acceleration, and acceleration can be detected in an absolute sense. If the planet were symmetrical and rotating in the way that the Earth rotates, then with the right instruments the inhabitants of the planet would be able to detect the rotation and also the axis of rotation. An object at either of the poles would weigh more than an object at the equator because of the centrifugal effect.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 2 }
Microcanonical ensemble probability density distribution In microcanonical ensemble the probability density function is postulated as $\rho(q,p)=const.\times\delta(E-E_0)$ so the probability of an ensemble being in an element of phase space $\mathrm{d} q \mathrm{d} p$ is $\mathrm{d} P = \rho(p,q) \mathrm{d} p \mathrm{d} q$. But since $\rho(p,q)$ is constant for a given energy $E_0$ of the ensemble, and we know that for example all gas particles being in one half of a container is highly unlikely but still allowed, does this mean that phase space $\mathrm{d} p \mathrm{d} q$ belonging to a state describing the aforementioned example is much smaller than the phase space belonging to the equilibrium state? Is my conclusion correct? And if it is, can I conclude it in a more rigorous way than pointing out the example? Thanks for answering
There are a few concepts that should be brought into better focus to formulate this question precisely. An ensemble of classical statistical mechanics is the set of all possible configurations in phase space, each configuration being characterized by the set of its Hamiltonian coordinates $q=(q_1,q_2,\dots,q_N)$ and $p=(p_1,p_2,\dots,p_N)$. Therefore, there is nothing like the probability of an ensemble being in an element of phase space. Instead, we can safely speak about the probability of a system of the ensemble being in a volume of the phase space. When such a volume is so small that the variations of the probability density over the volume are negligible, we can say that the probability of that microscopic state is ${\mathrm dP}=\rho(q,p)\,{\mathrm dq}\,{\mathrm dp}$. If $\rho(q,p)$ is constant over the hypersurface $H(q,p)=E$ (where $H$ is the Hamiltonian, and $E$ a possible value of the energy), all the subsets of the phase space on such a hypersurface with the same volume ${\mathrm dq}\,{\mathrm dp}$ have the same probability. This fact implies that each microstate is as probable as any other. However, a set of microstates may be overwhelmingly more probable than others. In particular, the collection of microstates such that all the particles occupy only half of the volume has a negligible probability compared to the set where there is almost the same number of particles in the two half-volumes.
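The last point can be made quantitative with a toy count (a sketch, not part of the answer): treat each of $N$ non-interacting particles as equally likely to sit in either half of the box. The "all in one half" macrostate corresponds to a single binomial outcome, while the "half and half" macrostate collects $\binom{N}{N/2}$ of them.

```python
# Counting microstates for N independent particles, each equally likely
# to be in either half of a box.  N = 60 is an arbitrary small example.
from math import comb

N = 60
p_all_left = 0.5**N                    # one microstate out of 2^N
p_half_half = comb(N, N//2) * 0.5**N   # most probable macrostate
print(p_all_left, p_half_half)
```

Already at $N=60$ the even split is about $10^{17}$ times more likely; for a mole of gas the ratio is astronomically larger, which is the rigorous version of "all gas in one half is allowed but never seen".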
{ "language": "en", "url": "https://physics.stackexchange.com/questions/686930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Why are most QM Operators defined as Identity minus Generator? I am currently reviewing my quantum mechanics lectures for the exam. I recognized that there is a pattern for different types of operators, such as the rotation operator, the time evolution operator and so on. The way they were presented in our course, they all look the following way: $R(d\Phi k) = I-\frac{i}{\hbar}d\Phi J_z$ And if you do this $N$ times for $N \rightarrow \infty$ we can write it as $R(\Phi k) = e^{-iJ_z\Phi/\hbar}$. So far I get that we take the identity and subtract something from it depending on the angle. But I can't figure out how to get to the primary equation.
The identity operator is the same as doing nothing. If you want to construct an operator that is 'small' it had better be close to the identity. The goal is to construct a 'big' (read: finite) operator by composing (infinitely) many 'small' operators. We want to do this because these small operators are easy to study and we can deduce many properties of the big group just by looking at the small group. You can't derive the first equation because it is a definition. When you insert an infinitesimal parameter in an operator$^\dagger$ you will get the identity operator + another infinitesimal $\times$ a matrix. That matrix is defined as the generator. In mathematics this is simply the definition, but in physics we like to slap a factor $\frac i\hbar$ in front. This way the generators become Hermitian and we can interpret them as physical observables. $\dagger$ this assumes that your operator becomes the identity when the parameter it depends on is zero, i.e. $R(\theta):\ R(0)=I$. This is a fundamental assumption in Lie groups.
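The "compose many small operators" picture can be seen numerically; here is a sketch using the $2\times2$ generator of plane rotations (the factor $i/\hbar$ is absorbed into $J$, so each small step is simply $I+(\theta/N)J$ with $J=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$):

```python
import math

theta, N = 0.9, 200_000

# One small step: multiply by (I + d*J), with J = [[0,-1],[1,0]].
def step(M, d):
    a, b, c, e = M[0][0], M[0][1], M[1][0], M[1][1]
    return [[a - d*c, b - d*e], [c + d*a, e + d*b]]

M = [[1.0, 0.0], [0.0, 1.0]]       # start from the identity
d = theta / N
for _ in range(N):
    M = step(M, d)

# Compare with the finite rotation exp(theta*J), the rotation matrix.
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
err = max(abs(M[i][j] - R[i][j]) for i in range(2) for j in range(2))
print(err)   # shrinks as N grows
```

The residual error scales like $1/N$, which is the numerical counterpart of the $N\to\infty$ limit in the question.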
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does anything in an incandescent bulb actually reach its color temperature (say 2700 K)? This question is inspired by a question about oven lightbulbs over on the DIY stack. It spawned a lengthy comment discussion about whether an incandescent lightbulb with a color temperature of 2500 K actually has a filament at a temperature of 2500 K. The articles I could Google are focused on explaining how other types of bulbs like LEDs are compared to an idealized blackbody to assign a color temperature, which makes sense to me. I couldn't find one that plainly answers my more basic question: Does any component in an incandescent lightbulb actually reach temperatures in the thousands of degrees? If so, how are things like the filament insulated from the filament leads or the glass, which stay so (comparatively) cool? Is this still true of bulbs with crazy high 20000 K color temp such as metal halide-aquatic? Do they actually sustain an arc that hot?
The filament reaches that temperature and acts as a black-body radiator. There is a type of measuring instrument used to measure temperatures in the incandescent range, called an "optical pyrometer" or, more specifically, a "disappearing filament pyrometer". A filament, much like the filament in a bulb, is optically overlaid on the material to be measured (which should be similar to a black body, with an emissivity close to 1). The filament current is adjusted by the operator, and when the filament "disappears" against the glowing target, the operator can assume the temperatures match and read off the measured temperature from a chart. I have used these instruments and they are capable of reasonable accuracy with care. Somewhat disturbingly, they are now called collectible museum pieces. Bulbs like mercury arc and fluorescent lamps have a different spectrum (nowhere close to black body), coming from phosphor excited by UV from the spectral lines of mercury. Similarly, "white" LED lamps usually use a phosphor or a mixture of phosphors (for warm white) in conjunction with a fairly monochromatic blue LED.
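As a quick cross-check of the filament temperature (a sketch using only physical constants): Wien's displacement law $\lambda_{\max} = b/T$ puts a 2700 K black body's peak emission in the near infrared, well redward of the Sun's, which is why incandescent light looks "warm".

```python
# Wien's displacement law for a 2700 K filament vs. the Sun's surface.
b = 2.897771955e-3                    # Wien displacement constant, m*K
peak_filament_nm = b / 2700.0 * 1e9   # ~1073 nm, near infrared
peak_sun_nm = b / 5772.0 * 1e9        # ~502 nm, green-blue
print(peak_filament_nm, peak_sun_nm)
```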
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56", "answer_count": 4, "answer_id": 2 }
How did Ernest Sternglass’ phenomenologically incorrect model of the neutral pion predict its mass and lifetime so accurately? In 1961, Ernest Sternglass published a paper where, using what seems to me to be a combination of relativistic kinematics and Bohr’s old quantisation procedure, he looked at the energy levels of a set of metastable electron-positron states, and found the lowest of these to have a mass surprisingly close to the measured mass of the neutral pion. He also calculated its lifetime, through what looks to me like a form of dimensional analysis, to be close to that of the neutral pion as well. We now know, of course, that this is not the correct model of the neutral pion, but how did his analysis manage to produce these curiously close results? Is it understandable in terms of our modern model of neutral pions, a mistake in the argument, a coincidence, or some combination of these?
Circular arguments, arguments based on existing assumptions, all demonstrating a reluctance to abandon what you have learned in school. There’s no coincidence here. The strong force is just compressed EM force. Think of this “coincidence” as one data point, Sternglass’ greatness lies in discovering all coincidences across the whole field of particle physics and cosmology. That’s why it’s no coincidence.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Why do we need the concept of Gravitational and Electric Potential? I understand that we need potential energy for the concept of energy conservation. However, why would we come up with a definition like 'energy required per unit mass/charge to bring the mass/charge from point A to B'? The part that says 'per unit mass/charge' is allegedly there to avoid mass/charge dependence, as the potential energy depends on the mass/charge. Why do we need to get rid of the mass/charge dependence and invent a new concept like 'potential' out of potential energy?
It is similar to a system of coordinates. When we want to know the distance between points in a room, it is easy to measure directly. But for the shortest airline route between two cities we can use the latitude and longitude of each, and calculate the distance. In the case of gravitational potential, knowing the level curves of a map near a river allows one to calculate the hydroelectric potential for a plant, for example. Also, points in an electrical circuit with assigned potentials can be used to know the energy available for some device. The advantage of the concept of potential is having a body of information that can be used to calculate things, many of which were not even under consideration when the mapping was made.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/687983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Force of photons from the Sun hitting a football field = weight of 1 dime? I read, I think some time ago, that the "weight" of photons from the Sun hitting an area the size of a football field at noon on a sunny day would be about the "weight" of a dime. I would appreciate it if someone could flesh that out and verify whether it is correct or false.
"Weight" can be understood as a type of force - standing on the floor, you impart a force on the floor. Light can impart force on a surface due to the transfer of momentum involved. In other words, if a photon with momentum $p$ strikes a surface and is reflected in the opposite direction, a total momentum of $2p$ is imparted on the surface. This is called radiation pressure. In Newtonian mechanics, you need mass to have momentum, but in relativistic mechanics, you only need energy to have momentum. And photons certainly carry energy. So how do we compare the radiation pressure on a football field to the weight of a dime? Consider the units: Pressure is force per area, as measured in newton per square meter, N/m$^2$. You would typically think about weight as measured in kilograms, but that is actually mass. The weight is the force on the object due to gravity, so if the dime has mass $m$ and we have gravitational acceleration $g$, the weight is $F = m g$. Force is also defined as change in momentum. So if we say $N$ photons with momentum $p$ are being reflected off a football field with area $A$ per time $t$, in total their momentum is changed by $2 N p / t$. The pressure on the football field is $2 N p/(t A)$. If we imagine a dime spread over the football field, the pressure from this dime would be $m g/A$. So, by saying that the weight of a football field of photons is the same as that of a dime, we are saying $$\frac{2 N p}{t} = m g.$$ As for whether it is true or false, that is a simple question of estimating the parameters in this equation, which I leave as an exercise for the reader.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/688085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 3, "answer_id": 1 }
Would light bend the other way, if I use antimatter instead? Imagine the following setup: an antimatter straw, an antimatter glass filled with antimatter water and we have antimatter atmosphere just in case. My question is: does Snell's law still apply here as though they are regular matter, if I were to observe the straw inside the water?
It's a no from me. For light to bend the other way, light in antiwater would have to have phase velocity greater than $c$. This is possible in some systems (called metamaterials) but the optical properties of antiwater would have to be completely different from ordinary water - which is ruled out by existing experiments which show that positrons and antiprotons attract very very similarly (at most) to electrons and protons. So it is JUST possible that the angle of bending would be slightly different (this is the sort of behaviour the CERN experiments are looking for) but nothing so drastic as complete reversal.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/688618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does SR intend to postulate the one- or two-way speed of light? I have read this question: It follows that the two-way speed of light is invariant (in the context of relativity, "invariant" is understood to mean "invariant with respect to Lorentz transformations"). Meaning and validity of the mass-energy equivalence valid if we don't know the one-way speed of light? The constancy of the one-way speed in any given inertial frame is the basis of his special theory of relativity https://en.wikipedia.org/wiki/One-way_speed_of_light Now the first answer specifically states that SR postulates the two-way speed, which can be (and has been) experimentally proven. The second one says otherwise, and is saying that it (assumedly the one-way speed of light) is a postulate that cannot be proven. However, when I look at the papers on SR itself, either on wiki or some original papers (I can only find very limited versions), the postulate itself nowhere mentions any specific one- or two-way version of the speed of light. It just simply says the speed of light. https://en.wikipedia.org/wiki/Special_relativity http://hermes.ffn.ub.es/luisnavarro/nuevo_maletin/Einstein_1905_relativity.pdf Question: Does SR intend to postulate the one- or two-way speed of light?
The Einstein synchronization convention produces a one-way speed of light that is c. So the second postulate is based on the one way speed. This is justified by the isotropy of the two way speed of light and the isotropy of all known laws of physics. In Einstein’s seminal paper he says “we establish by definition that the “time” required by light to travel from A to B equals the “time” it requires to travel from B to A.” Where those two times are the one-way times and setting them equal makes the assumption that the one way speed equals the two way speed.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/688753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Has the Rindler horizon already been tested experimentally? For an accelerated frame, there is a Rindler horizon at a distance of $$X = \frac{c^2}{a}$$ where $a$ is the proper acceleration. For $a = g$ it is about 1 light year. If one of those space telescopes, like the James Webb, accelerated for a little while in a direction opposed to the direction of the space being observed, all galaxies would disappear from the photos, as I understand it. And only a fraction of $g$ is necessary to check this effect, because they are millions of light years away. However, I didn't find on the internet any information about such an experiment in the past or being planned. It seems an interesting relativistic test, so maybe it has already been done. There is the cost of spending fuel, and I wonder whether vibrations from the engines could mess up any photos.
No one has done this experiment because the stars do not suddenly disappear for an accelerated observer - the Rindler observer "outruns" light emitted at a distance $c^2/a$ at the time when the observer is that distance away from it, but the faraway stars have been shining for a long time and there's plenty of light from them emitted a long time ago that's closer to the observer. Think of a line of photons stretching from the star to the observer - the observer is going to outrun the end of that line if they keep constantly accelerating, but all the rest of the photons in that line are going to catch up to them, increasingly redshifted as the observer accelerates away from their source. Therefore, an accelerated observer doesn't see stuff suddenly vanish behind a horizon, they see stuff behind them slowly redshift into oblivion. But that accelerated observers see redshift is something we already know from observation from earth, we don't need to send a telescope to space to make certain of that. The more interesting observation related to accelerated observers that is as-of-yet untested is that there should be Unruh radiation coming from the (apparent) horizon, but at ordinary accelerations the temperature for that is so low that we can't really hope to observe it with current tech.
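The "redshift into oblivion" can be made quantitative. After proper time $\tau$ of constant proper acceleration $a$, the observer's speed is $\beta = \tanh(a\tau/c)$, and the relativistic Doppler factor for light from a source directly behind then works out to exactly $e^{-a\tau/c}$. A minimal numerical sketch (the 1 g figure and numpy usage are illustrative, and light-travel-time corrections are ignored):

```python
import numpy as np

c = 2.998e8                        # m/s
a = 9.81                           # proper acceleration, about 1 g
tau = np.linspace(0.0, 3.0e7, 50)  # proper time, s (up to roughly one year)

u = a * tau / c                    # rapidity accumulated by the observer
beta = np.tanh(u)                  # instantaneous speed as a fraction of c
doppler = np.sqrt((1 - beta) / (1 + beta))  # receding-source Doppler factor

# The received frequency decays *exactly* exponentially in proper time
assert np.allclose(doppler, np.exp(-u))
assert doppler[0] == 1.0 and doppler[-1] < 0.5
```

So nothing crosses a sharp cutoff in what the observer sees; signals just fade exponentially.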
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Centre of mass reference frame of a particle and a photon If we consider the reaction of a gamma photon with a proton, e.g., $$\gamma + p \rightarrow p + \pi^0$$ I wonder what the linear momentum of the initial proton would be in the center-of-mass reference frame. I am confused because this reference frame should seemingly coincide with the proton, since the photon has no mass. However, by definition, in the center-of-mass reference frame the total momentum should be zero, whereas in this case it is not, since the photon carries a linear momentum $p=\frac{E}{c}$. Wouldn't the center of mass therefore be placed not at the proton, but between it and the photon? What would the linear momentum of the proton be if we assume it is at rest in the laboratory frame of reference?
Draw an energy-momentum diagram (adding the timelike 4-momentum of the proton and the lightlike 4-momentum photon (tip to tail) to get the timelike 4-momentum of the system). It'll look like a triangle... in fact like a Doppler-effect problem. Then, find the component of the proton 4-momentum that is orthogonal to the 4-momentum of the system. (The analogous component of the photon 4-momentum should be opposite this, leading to a total spatial momentum of zero in the center-of-momentum frame.) Follow the strategy at Lowest kinetic energy of particle for which reaction is possible (invariant mass) applied to that reaction in a recent question What is the minimum energy of a photon for the reaction to occur?. See also: Momentum diagram for two colliding Particles
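As a concrete cross-check of the construction: with the proton at rest in the lab and a photon of lab energy $E_\gamma$ (units with $c=1$), the invariant mass squared is $s = m_p^2 + 2m_pE_\gamma$, and boosting both 4-momenta shows each particle carries momentum of magnitude $m_pE_\gamma/\sqrt{s}$ in the center-of-momentum frame, oppositely directed. A rough sketch (the 200 MeV photon energy is an arbitrary illustrative choice):

```python
import numpy as np

m_p = 938.272        # proton mass in MeV (c = 1 units)
E_gamma = 200.0      # lab-frame photon energy in MeV (illustrative choice)

# Lab frame: proton at rest, photon momentum equals its energy
E_tot = m_p + E_gamma
p_tot = E_gamma
s = E_tot**2 - p_tot**2          # invariant mass squared of the system

beta = p_tot / E_tot             # velocity of the center-of-momentum frame
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Lorentz-boost each particle's momentum: p' = gamma * (p - beta * E)
p_proton_cm = gamma * (0.0 - beta * m_p)
p_photon_cm = gamma * (E_gamma - beta * E_gamma)

# Momenta are equal and opposite, each of magnitude m_p * E_gamma / sqrt(s)
assert np.isclose(p_proton_cm + p_photon_cm, 0.0)
assert np.isclose(abs(p_proton_cm), m_p * E_gamma / np.sqrt(s))
```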
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Symmetry associated to a part of a separable Hamiltonian The harmonic oscillator in 3D is: $$H=\frac{p_x^2+p_y^2+p_z^2}{2m}+ \frac{k}{2} (x^2+y^2+z^2) = H_x + H_y + H_z,$$ where $H_x$, $H_y$ and $H_z$ are all constants of motion (alongside $\vec{L}$). Time translation invariance implies the conservation of $H$. What is the symmetry associated to the conservation of $H_x$, $H_y$ and $H_z$? I guess it's linked to time invariance as well?
Well, quite generally for Hamiltonian systems, the infinitesimal symmetry behind a constant of motion (COM) is the Hamiltonian vector field generated by the COM itself, cf. this Phys.SE post.
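For this particular Hamiltonian the statement can be checked symbolically: $H_x$ Poisson-commutes with $H$, so the canonical flow $H_x$ generates (time evolution of the $x$ degree of freedom alone) is a symmetry. A small sketch (the sympy usage is illustrative):

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z', real=True)
m, k = sp.symbols('m k', positive=True)
qs, ps = (x, y, z), (px, py, pz)

H_x = px**2 / (2*m) + k*x**2 / 2
H = sum(p**2 / (2*m) for p in ps) + k*(x**2 + y**2 + z**2) / 2

def poisson(f, g):
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(qs, ps))

# {H_x, H} = 0: H_x is conserved, and the canonical flow it generates
# (evolving the x degree of freedom alone) is a symmetry of H.
assert sp.simplify(poisson(H_x, H)) == 0

# Sanity check that the bracket is not trivially zero:
assert sp.simplify(poisson(x, H) - px/m) == 0
```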
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the size of the image increasing as the observer moves away from the lens? I was using a convex lens and placed the object on the principal axis at a distance from the optical center less than the focal length (between $F_1$ and the optical center). Then I started observing the size of the image from the other side of the lens. At first I placed my eye close to $F_2$, between $F_2$ and $2F_2$, then moved it away towards $2F_2$. I found that as I moved away from the lens, the image got bigger and bigger. That's where my confusion comes in. What I understand is that the size of the image formed at any point depends only on the object's position and on the lens itself. It should not depend on the observer; if anything, the apparent size seen by the observer should get smaller as he moves away from the lens, just as a tree seen from a distance appears smaller than when viewed up close. Why is the size of the image increasing?
I suspect you are looking through the lens rather than looking at the physical image, aka "real image," that the lens would form. The thin-lens equation, $ \frac{1}{f} = \frac{1}{p} + \frac{1}{q},$ leads to a smaller image only for a real image (p and q both positive). Since you are looking through the lens, you would have to analyze a two-lens system comprising the convex lens plus your eye's lens. If you do that, e.g., with a simple ray trace, you will see why the image size grows. See, e.g., Ray trace matrix method for a simple way to calculate image magnification.
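The configuration in the question (object inside the focal length) is the magnifying-glass case: the thin-lens equation gives a negative image distance, i.e. a virtual, upright, enlarged image, which is what the eye sees through the lens. A minimal sketch (the specific f and p values are illustrative):

```python
# Thin-lens equation 1/f = 1/p + 1/q, with the convention that a negative
# image distance q means a virtual image on the object's side of the lens.
f = 0.10   # focal length, m (illustrative)
p = 0.05   # object distance, m -- inside the focal length, as in the question

q = 1.0 / (1.0 / f - 1.0 / p)   # image distance
m = -q / p                      # transverse magnification

assert q < 0                # virtual image: seen only by looking through the lens
assert abs(m - 2.0) < 1e-9  # upright (m > 0) and enlarged (m = +2 here)
```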
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is a single photon a plane wave or a wave packet? According to the definition a photon is monochromatic, so it has a unique frequency $\omega$ and thus it can be expressed as $\psi(x,t)=\exp i(kx-\omega t)$. This suggests that a photon is a plane wave which occupies the whole of space at the same time. But then why can we say that a photon travels from one place to another? In ordinary thinking a photon is more like a wave packet, and its probability density has a non-uniform distribution in space. So what is a photon, really?
One can derive the longitudinal and lateral intrinsic fields of a photon by equating the expectation values of the field operator in terms of the electric and magnetic field expectation values with the expectation value obtained with the wavefunction of two consecutive number states. One can have a monochromatic temporal pulse by using the mixed time-frequency representation which involves time-varying spectra, also known as the Wigner spectrum. This localizes the energy of the photon without the difficulties of the Fourier limit relation of time spread multiplied by the frequency spread. More mathematical and physical details can be found by searching online for "Instantaneous Quantum Description of Photonic Wavefronts and Applications"
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 5 }
How large does $N$ need to be for statistical mechanics to be a good approximation? About how many components ($N$) does a system need for statistical mechanics to apply to that system? I took stat mech and biophysics from the same professor in undergrad and I distinctly remember him saying that part of the reason that biophysics was so intractable is because systems were large, but not so large that the thermodynamic limit made sense. I think he said biophysical systems often had $N \sim 10^2-10^4$ components while stat mech really only made sense for systems with $N>10^6$(?), but I really don't remember the exact values for $N$.
I think it's unfair to ask for an exact value for $N$ to justify all statistical mechanics. There are very many different problems and applications of stat mech, and some of them might have intrinsically low variance and work fine for relatively small $N$, whereas in other problems really require $N \rightarrow \infty$. With that said, it's easy to see why statistical mechanics works so much better for (say) a gas of particles than for many biological systems. As you mention in your post, biological systems often deal with $N \sim 100$ or $1,000$ degrees of freedom, which is firmly in the "mesoscopic" regime. On the other hand, a reasonably sized box of air will contain something on the order of a mole of particles, ie $6\times 10^{23}$ particles, which is twenty orders of magnitude larger than the biological system. So you can see why you typically don't need to scratch your head over whether $10^6$ or $10^7$ particles is enough to justify stat mech: typically, the number of particles is so unimaginably huge that you can often (but not always!) approximate the system by taking the thermodynamic limit $N \rightarrow \infty$. Another nice semi-quantitative heuristic: very often one approximates particles as non-interacting in statistical mechanics. In this case, many observable quantities are given by a sum of independent and identically distributed random variables, such that the sum can be approximated to good precision as Gaussian distributed by the central limit theorem. Then, you expect that the standard deviation of the distribution will fall as $1/\sqrt{N}$ as $N$ becomes large. From this, you can get a rough idea of how good statistical mechanics will be as $N$ increases: for instance, $1/\sqrt{100}$ is $0.1$ for common biological systems, while $1/\sqrt{10^{23}}$ is $3 \times 10^{-12}$ for a box of gas particles.
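The $1/\sqrt{N}$ heuristic is easy to see in a toy simulation: the relative fluctuation of a sum of $N$ i.i.d. variables falls like $1/\sqrt{N}$, so increasing $N$ by a factor of 100 should shrink it by about 10. A quick sketch (the uniform distribution, seed, and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_fluctuation(N, trials=500):
    # Sum N i.i.d. uniform variables 'trials' times, then compare the
    # spread of that sum to its mean value.
    sums = rng.uniform(size=(trials, N)).sum(axis=1)
    return sums.std() / sums.mean()

r_small = relative_fluctuation(100)     # a "biophysics-sized" system
r_large = relative_fluctuation(10000)   # still tiny compared to a mole!

# 100x more components -> roughly 10x smaller relative fluctuations
assert 5.0 < r_small / r_large < 20.0
```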
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Phase difference between two waves in opposite directions Suppose I have two waves travelling along the positive and negative $x$ axis, and are given by : $$y_1=A\sin(kx-\omega t)\,\,\,\,,\,\,\,y_2=A\sin(kx+\omega t)$$ What would be the phase difference between these two waves at a particular point ? If I define the phase difference as the difference between the arguments, then I get : $$\Delta \phi=kx+\omega t-(kx-\omega t)=2\omega t$$ But, I could have easily defined the waves, by keeping a positive sign in front of $\omega t$ instead of $kx$. So in that case, my arguments would have become $\omega t-kx$ and $\omega t+kx$ instead. In this case, the phase difference at any point comes out to be : $$\Delta\phi=\omega t+kx-(\omega t-kx)=2kx$$ At any value of $x=x_0$, this phase difference is constant. So, I get two contradictory answers here. In the previous case, the phase difference at any point, varied over time. In the second case, this phase difference was constant at a given point, and varied from point to point. Which one is correct, and how should I know, which one to choose, in situations such as these ?
"I could have easily defined the waves, by keeping a positive sign in front of ωt instead of kx." Actually, in this case you cannot do that. Here you have defined the waves in terms of $\sin$ functions. So $\sin(kx-\omega t)$ is not the same wave as $\sin(\omega t - k x)$. However, you could have asked the question in terms of $\cos$, and in that case $\cos(kx-\omega t)$ is the same wave as $\cos(\omega t - k x)$. In that case you would indeed get your two different scenarios. "I get two contradictory answers here. In the previous case, the phase difference at any point, varied over time. In the second case, this phase difference was constant at a given point, and varied from point to point." So, using $\cos$ waves you do get two different answers, but they are not contradictory, they are completely equivalent. There is no difference between a wave with spatially varying phase whose amplitude changes in time and a wave with temporally varying phase whose amplitude changes in space. Those are just two equivalent ways of describing the same wave pattern.
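The cosine statement is easy to verify numerically: cosine is even, so $\cos(kx-\omega t)$ and $\cos(\omega t-kx)$ are literally the same wave, while the two sine forms differ by an overall sign. A quick check (the k, ω, t values are arbitrary):

```python
import numpy as np

k, w = 2.0, 5.0                  # arbitrary wavenumber and angular frequency
t = 0.7                          # any fixed instant
x = np.linspace(0.0, 10.0, 201)

# cosine is even: these two expressions describe the same wave
assert np.allclose(np.cos(k*x - w*t), np.cos(w*t - k*x))

# sine is odd: these differ by an overall sign, so they are different waves
assert np.allclose(np.sin(k*x - w*t), -np.sin(w*t - k*x))
```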
{ "language": "en", "url": "https://physics.stackexchange.com/questions/689913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How does potential energy increase with no work? If you're dragging an object up a hill at a constant velocity, work is technically 0 (as acceleration is 0), but potential energy constantly increases. How would you represent this situation mathematically, and how does the potential energy increase despite a lack of work?
The net force ends up being 0, but you are still applying a force because gravity is pulling down. Gravity exerts a force down the hill, so to keep the block at a constant velocity, you must exert a force opposite that. This means that work will be done. For example, suppose a block is falling down a vertical shaft. At the top end is a pulley (this is equivalent to pulling a block up a 90 degree incline). If the block has a mass of 10 kg, and we approximate Earth's gravitational acceleration as $10 \frac{m}{s^2}$, then by Newton's second law, 100 N of force is being applied down the shaft on the box. To counteract that you must pull up with 100 N of force (via the pulley). Suppose you pull it up 10 meters. This means the work done is 100 N * 10 m, or 1000 J.
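The bookkeeping in the pulley example can be laid out explicitly: the applied force does +1000 J, gravity does −1000 J, the net work is zero (hence constant speed), and the potential energy rises by exactly the work you did:

```python
m = 10.0      # kg
g = 10.0      # m/s^2, rounded as in the example
h = 10.0      # m lifted at constant velocity

F_applied = m * g          # 100 N, balancing gravity
W_applied = F_applied * h  # work done by you: 1000 J
W_gravity = -m * g * h     # work done by gravity: -1000 J
delta_U = m * g * h        # gain in gravitational potential energy

assert W_applied == 1000.0
assert W_applied + W_gravity == 0.0   # net work zero -> kinetic energy unchanged
assert W_applied == delta_U           # your work shows up as potential energy
```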
{ "language": "en", "url": "https://physics.stackexchange.com/questions/690037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Elastic potential energy and work equations Elastic potential energy is $\frac{1}{2} k x^2$ and work is $F \cdot d$. Why do these not evaluate to the same value in a problem? The change in potential energy is the work done on a spring: $W = \Delta U$. However, every time I do an example I get that the work is double the elastic potential energy. What am I missing? If it takes $2 \text{ N}$ of force to displace a spring by $0.2 \text{ m}$ with a spring constant of $10 \text{ N/m}$, then the work is $W_e = 2 \text{ N} \cdot 0.2 \text{ m} = 0.4 \text{ J}$. However, the elastic potential energy stored in the spring is $U_e = \frac{1}{2} \cdot 10 \text{ N/m} \cdot (0.2 \text{ m})^2 = 0.2 \text{ J}$.
You are missing that the force changes as the spring elongates. It is 0 at the equilibrium position and it is only equal to the final force at the final extension. The factor of 1/2 accounts for the variation in the force, essentially giving you the average force.
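Integrating the actual position-dependent force numerically reproduces the $\frac{1}{2}kx^2$ result and shows where the factor of two went. A quick check with the numbers from the question (numpy is used only for convenience):

```python
import numpy as np

k, x_max = 10.0, 0.2               # spring constant (N/m) and displacement (m)

x = np.linspace(0.0, x_max, 100001)
F = k * x                           # the force grows with extension
W = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(x))  # trapezoidal integral of F dx

U = 0.5 * k * x_max**2              # elastic potential energy formula

assert abs(W - U) < 1e-9            # 0.2 J, not 0.4 J
# Multiplying the *final* force by the whole displacement double-counts:
assert abs((k * x_max) * x_max - 2 * W) < 1e-9
```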
{ "language": "en", "url": "https://physics.stackexchange.com/questions/690807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What is the angle of a ray passing through a thin lens? Let's say I have a thin lens model of an optical system. When I have a ray that is parallel to the optical axis, the situation is quite standard - the ray refracts and passes the focal point f (see my bad drawing). From the triangle in the picture, I can calculate the angle $\beta$ by using the formula $\tan(\beta) = y/f$ and so $\beta = \arctan(y/f)$. But what if my ray is not parallel with the optical axis? How do I calculate the angle of the refracted ray with the opt. axis $\beta'$? I thought the ray might obey simply $\beta' = \beta + \delta$ = $\arctan(y/f) + \delta$, e.g. angle $\beta'$ could be calculated by simply adding the angle a parallel ray produces when refracted on a lens $\beta$ and an angle of deviation from being parallel with the optical axis $\delta$. On the other hand, I am not sure this approach is right. All in all, I am interested in a solution that does not involve the paraxial approximation (notice I use $\tan()$ in my equations) and I would like to know the following. How does one calculate the angle of refracting rays that are not parallel with the optical axis, in the thin lens model approximation?
I realized that a ray passing through the center of the lens (let's call it ray A) does not deviate from its path. And if another ray (ray B) comes in the lens with the same angle as ray A, but does not pass the center of the lens, it has to cross ray A at the back focal plane of the lens. I drew the situation on a graph. Here, we can calculate the variable $x$ by noticing the following orange triangle: From here, $\tan(\delta) = x/f$ and hence $x = f\tan(\delta)$. Next, we can notice another triangle, marked in blue. This one actually contains the angle $\beta'$ that we are interested in: From here, $\tan(\beta') = \frac{x+y}{f}$. The rest is just simple algebra. $\beta' = \arctan(\frac{x}{f} + \frac{y}{f}) = \arctan(\frac{f\tan(\delta)}{f} + \frac{y}{f}) = \arctan(\tan(\delta) + \frac{y}{f})$. All in all, when tracing a ray passing through a thin lens without paraxial approximation, I think its angle with the optical axis after refraction will be $\beta' = \arctan(\tan(\delta) + \frac{y}{f})$, where $y$ is the point measured from the center of the lens where the ray hits the lens, $f$ is the focal point of the lens and $\delta$ is the angle of the ray coming to the lens, measured from the optical axis.
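The geometry above can be cross-checked numerically: the undeviated ray through the lens center fixes the point on the back focal plane where all rays with incoming angle δ must converge, and the angle of the ray from (0, y) to that point reproduces $\arctan(\tan\delta + y/f)$. A small sketch (the f, y, δ values are illustrative; δ and y are taken on the same side of the axis as in the figures):

```python
import numpy as np

f = 0.1                    # focal length (illustrative)
y = 0.02                   # height where ray B meets the lens
delta = np.radians(10.0)   # incoming angle, measured from the optical axis

# Ray A passes through the lens center undeviated; at the back focal plane
# (a distance f behind the lens) it sits at height -f*tan(delta).  An ideal
# thin lens sends *every* ray with incoming angle delta through that point.
P_lens = np.array([0.0, y])
P_focal = np.array([f, -f * np.tan(delta)])

dx, dy = P_focal - P_lens
beta_geometric = np.arctan2(-dy, dx)  # angle of the refracted ray below the axis

beta_formula = np.arctan(np.tan(delta) + y / f)

assert np.isclose(beta_geometric, beta_formula)
```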
{ "language": "en", "url": "https://physics.stackexchange.com/questions/690925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Why are quantum numbers assigned to electrons? Specifically the principal quantum number ($n$), orbital quantum number ($l$) and orbital magnetic quantum number ($m_{l}$). For systems like the hydrogen atom, these quantum numbers arise from the Schrödinger equation, which involves: a potential energy function for the system of the electron and the nucleus, and a wave function, also characteristic of the same system, determined by the electron and its interaction with its environment. Wouldn't it then be more appropriate to assign quantum numbers to the electron-nucleus system rather than the electron itself? Several sources always describe the quantum numbers (particularly in multi-electron systems) as a characteristic of an electron, including the exclusion principle, so I am unsure if my reasoning is correct. Here's the paragraph from the Wikipedia page on the exclusion principle: it is impossible for two electrons of a poly-electron atom to have the same values of the four quantum numbers: n, the principal quantum number; ℓ, the azimuthal quantum number; $m_ℓ$, the magnetic quantum number; and $m_s$, the spin quantum number.
Yes, these numbers are assigned to the electron-nucleus system. Usually (as in classical mechanics) the treatment of a hydrogen atom starts with separating the motion of the center of mass of the atom and the relative motion of the electron and the proton, reducing a two-body problem to a one-body problem in an effective potential (and with an effective mass). This is easily overlooked in the quantum forest, but is actually found in many basic QM books. It is for the wave function of this relative motion that one defines the quantum numbers mentioned in the OP. The reason why one often speaks of electrons making transitions and so on, is because the proton is nearly two thousand times heavier than the electron, and consequently the center of mass pretty much coincides with that of the proton, while the internal motion of the atom is pretty much that of the electron.
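The effective mass mentioned above is the reduced mass $\mu = m_e m_p/(m_e + m_p)$, and plugging in numbers shows why the electron-centric language is harmless for hydrogen: $\mu$ differs from $m_e$ by only about 0.05%. A quick check:

```python
m_e = 9.1093837e-31    # electron mass, kg
m_p = 1.6726219e-27    # proton mass, kg

mu = m_e * m_p / (m_e + m_p)    # reduced mass of the relative motion

assert m_p / m_e > 1800                  # the proton is ~1836 times heavier
assert abs(mu / m_e - 1.0) < 1e-3        # mu and m_e differ by ~0.054%
```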
{ "language": "en", "url": "https://physics.stackexchange.com/questions/691462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Isotope that decays when ionized Some time ago, I read about a certain isotope that is stable when neutral but decays by electron emission (beta decay) when completely ionized, but I can't find which one it was. Which isotope decays when fully ionized?
I’m nearly certain you are thinking of beryllium-7, but that you have remembered the condition backwards. Neutral $\rm^7Be$ can decay to $\rm^7Li$ by electron capture, with energy about $\rm 860\,keV$. Positron-emission decays are always disfavored relative to electron-capture decays, because the final state with an electron missing is lower in mass than the final state with the positron added. Since the total $\rm^7Be$ decay energy is less than the mass difference $2m_\mathrm e = \rm 1022\,keV$, the positron-emission mode is completely forbidden. Beryllium-7 is not found in beryllium ores on Earth, but completely-ionized $\rm^7Be$ is a stable component of cosmic rays. A read through the NNDC datasets finds a number of other nuclei with electron-capture $Q$-values below $\rm1\,MeV$, starting with $\rm^{41}Ca$. However, cosmic ray populations are heavily skewed towards the low-mass end of the chart of isotopes; I’ve only ever heard people talk about beryllium-7 having this property. For the condition you describe, where an ionized parent nucleus can decay but the neutral parent atom is stable, the decay energy would have to be less than the electron binding energy for the daughter atom. If that were the case, the ionized nucleus could decay to a bound state of the daughter and the beta electron, with the antineutrino carrying away the energy. But the neutral atom would be “Pauli-blocked” from decaying, with its bound electrons already occupying the possible final states for the $\beta^-$. I believe there are no $\beta^-$ emitters with energies this low. If such a decay existed, it’d be an interesting place to try and measure the mass of the electron antineutrino, by doing precision mass spectrometry on the parent ion and the daughter ion, to be compared with recoil measurements on the daughter following the decay.
Instead, that experimental energy has gone into the KATRIN experiment, which analyzes the $\beta^-$ decay of tritium to helium-3, with endpoint energy $\rm18.6\,keV$. However, as revealed in the comments on another answer, my belief was incorrect. Neutral dysprosium-163 is stable against $\beta^-$ decay, with $Q$-value $\rm-2.6\,keV$; the linked paper observes the beta-decay of the bare nucleus. The next candidate would be $\rm^{148}Eu$, with $Q$-value $\rm-27\,keV$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/691553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
How Would a Car Move in Zero-G? Consider a car floating in a microgravity environment. Assuming the engine can still function (i.e. it is surrounded by normal atmosphere; fuel can still be pumped, etc.), in what ways (if any) will the car move when the accelerator is pressed? There is air moving into the intake and out of the exhaust, will that cause a net acceleration forward? Will air resistance with the wheels cause any sort of net acceleration? Will the torque from the engine cause the car to rotate at all?
The movement would depend on several factors, such as whether the engine was transverse or longitudinal. However, the most pronounced motion would be some compound form of rotation. Most of the moving parts in a car rotate, so in the absence of gravity the rotation of parts of the car would generate counter-rotations of the rest. If the car had a longitudinal engine, the rotation of the crankshaft and flywheel would cause a counter-rotation of the body around a lengthways axis. The rotation of the driving wheels would cause a counter-rotation of the body around a transverse axis. Switching on the windscreen wipers would cause an oscillation of the body. There would also be some longitudinal acceleration, not just from the exhaust but also from the radiator cooling fan and the heater/air-conditioning fans, which propel air rearwards.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/691962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Question about the Wave equation I have a question. I was looking at the wave equation (the first equation of this wikipedia page). I first saw a version of this equation during an acoustics course, where we obtained it for sound waves by combining the Euler equation, the continuity equation, and the general gas equation. So, how is a general wave equation, like the one described in wikipedia, derived? Is there a mathematical derivation behind it, or is it just a specific form of differential equation that was found to be the same for several scalar quantities, so that we have to take it "as it is"? Thank you in advance
If you had to write down a generic Lagrangian for a scalar field that is invariant under rotations and space-time translations it would look something like $$\mathcal L = \frac{1}{2}\left(\frac{\partial \phi}{\partial t}\right)^2 - \frac{1}{2}v^2 (\nabla\phi)^2 + A\phi+B\phi^2 +C\phi^3 + \mu (\nabla^2 \phi)^2 +... $$ The first two terms give us the standard wave equation while the other terms either give mass to the field or cause interactions, so if we want a massless field with no interactions then the Lagrangian is pretty much fixed. Notice how rotational, translational, and temporal symmetry forbids terms like $$\mathbf{a}\cdot\nabla\phi\qquad\rho(\mathbf{x})\nabla^2\phi\qquad\gamma(t)\phi^2.$$ Imposing these symmetries ensures angular momentum, linear momentum, and energy conservation respectively, which you kind of want for a typical physical system.
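For completeness, keeping only the first two terms (with the conventional relative minus sign between the time and space gradients), the field Euler–Lagrange equation yields the standard wave equation; a sketch:

```latex
\mathcal{L} = \frac{1}{2}\left(\frac{\partial \phi}{\partial t}\right)^{2}
            - \frac{v^{2}}{2}\,\bigl(\nabla\phi\bigr)^{2},
\qquad
\partial_t\!\left(\frac{\partial\mathcal{L}}{\partial(\partial_t\phi)}\right)
+ \nabla\!\cdot\!\left(\frac{\partial\mathcal{L}}{\partial(\nabla\phi)}\right)
- \frac{\partial\mathcal{L}}{\partial\phi} = 0
\;\Longrightarrow\;
\frac{\partial^{2}\phi}{\partial t^{2}} - v^{2}\,\nabla^{2}\phi = 0 .
```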
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
How to affirm whether a frame of reference is inertial or non-inertial? As far as I know, inertial frames of reference are the ones where all three of Newton's laws of motion hold. Having this definition, and given an inertial frame of reference to begin with, we can identify all other inertial frames by applying Newton's first law of motion, i.e.: if S is an inertial frame of reference, then we can conclude that S' is also an inertial frame of reference if the velocity of S' is uniform/constant with respect to S. From this, we can define a non-inertial frame of reference as a frame of reference where the laws of motion are not valid in their current form and need to be modified before they can be used (such as by the introduction of fictitious forces). Now the question: given a non-inertial frame of reference, what condition(s) are required to affirm whether another frame of reference (being observed from the current non-inertial frame) is inertial or non-inertial? I think a brief background to the question is required. I thought of this situation while considering the following case: suppose we are observing an observer (in space) from Earth; how may I claim that the reference frame attached to that observer is inertial or not? Clearly Earth is a non-inertial frame of reference, hence the question.
You don’t need a second frame to determine if a frame is inertial. Simply compare the coordinate acceleration in the frame to the proper acceleration measured by momentarily co-moving accelerometers. If they match then the frame is inertial. If they do not match then the frame is non-inertial and the difference between the coordinate acceleration and the proper acceleration is a fictitious force.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 0 }
What Lorentz symmetries do electric and magnetic fields break? When we turn on an external (non-dynamical) electric or magnetic field in (3+1)-dimensional Minkowski space we break rotational invariance because they pick out a special direction in spacetime. Does this also break boost invariance? What about in (2+1)-dimensions when the magnetic field is a scalar? Now the magnetic field does not seem to break rotations. Does it break boosts? How can I show this?
The electromagnetic field on spacetime is actually Lorentz invariant. It's this conflict between the symmetry group of electromagnetism and the symmetry group of classical mechanics, which is the Galilean group, that led Einstein to special relativity. The electromagnetic field on spacetime is a single field; it can't be covariantly split into electric and magnetic fields. It is when we choose a splitting of spacetime into space and time, which breaks Lorentz invariance, that we can split the electromagnetic field into an electric and a magnetic field. Conversely, such a splitting of the field defines a splitting of spacetime into space and time, and so it breaks Lorentz invariance.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Can Newton's laws be derived from each other in a specific order only (2nd from 1st only, and not from 3rd)? In my opinion we can derive Newton's laws only in a specific order: the 2nd from the 1st, and the 3rd from the 2nd and 1st. Let us suppose there is a body B which is in its initial state P(i). Now, as per Newton's 1st law, to take the body to some state P(j) in time t(ij) we need to apply a force to it. From the above we know that to change the state of a body a force must be applied, and hence we can define a force F as F = G(P(ij)), or per unit time t(ij), F = f(P(ij)/t(ij)).
This is a common misconception, that Newton's first law is unnecessary or that it can be derived from Newton's second law: "If we put the force equal to zero in Newton's second law then we get the first, hence the first law is redundant." But this is wrong. What is a force? Newton's first law defines what a force is! And Newton's second law describes how this defined force acts on an object.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/692849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can we derive Boyle's law out of nothing? My textbook states Boyle's law without a proof. I saw Feynman's proof of it but found it to be too handwavy and at the same time it uses Boltzmann's equipartition theorem from statistical mechanics which is too difficult for me now. So to state roughly what Boyle's law is, it states that at a constant temperature and mass of gas, $$PV=k$$ Where $P$ is pressure and $V$ is the volume and $k$ is constant in this case. Is there a proof for this that isn't based on any other gas law, perhaps based on Newtonian mechanics?
The law can be derived from the kinetic theory of gases. Several assumptions are made about the molecules, and Newton's laws are then applied. For $N$ molecules, each of mass $m$, moving in a container of volume $V$ with a root mean square speed of $c_{rms}$, the pressure, $p$, exerted on the walls by gas molecules colliding with them is given by $$pV=\tfrac 13 Nmc_{rms}^2.$$ Sir James Jeans (in The Kinetic Theory of Gases) has a simple argument involving molecules exchanging energy with a wall (modelled as spheres on springs!) to show that for gases at the same temperature, $mc_{rms}^2$ is the same. In other words, gas temperature is determined by $mc_{rms}^2$. So for a gas at constant temperature, $c_{rms}$ is constant, and if we keep $N$ constant, too, we deduce that $pV$ is constant.
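The displayed relation can be combined with the standard temperature relation $\frac{1}{2}mc_{rms}^2 = \frac{3}{2}k_BT$ (stated here as a known result, not derived above) into a numerical sanity check: $pV = \frac{1}{3}Nmc_{rms}^2$ then reduces to $pV = Nk_BT$, and one mole of $\rm N_2$ in about 24.5 L at room temperature indeed comes out near 1 atm (the molecular mass and volume below are illustrative round numbers):

```python
import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K
N = 6.022e23           # one mole of molecules
m = 4.65e-26           # mass of one N2 molecule, kg (about 28 u)
T = 298.0              # room temperature, K
V = 24.5e-3            # container volume, m^3 (molar volume near 1 atm)

# Temperature fixes the rms speed:  (1/2) m c_rms^2 = (3/2) k_B T
c_rms = np.sqrt(3.0 * k_B * T / m)

# Kinetic-theory pressure:  p V = (1/3) N m c_rms^2
p = N * m * c_rms**2 / (3.0 * V)

assert np.isclose(p, N * k_B * T / V)   # identical to the ideal-gas law
assert 0.9e5 < p < 1.1e5                # roughly 1 atmosphere
```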
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Why is amplitude going to infinity in forced damped oscillator at resonance? I'm trying to find the amplitude of steady state response of the following differential equation: $$\ddot{x}+2p\dot x + {\omega_0}^2x=\cos(\omega t)$$ A particular solution is $$x_p=\Re{\dfrac{e^{i\omega t}}{\omega_0^2 - \omega^2 + i2p\omega}} $$ The amplitude at steady state is then $$A=\dfrac{1}{\sqrt{(\omega_0^2 - \omega^2)^2 + (2p\omega)^2}}$$ The denominator has minimum value when $\omega^2 =\omega_0^2 - 2p^2 $: $$A=\dfrac{1}{2p\sqrt{\omega_0^2-p^2}}$$ This expression seems to suggest that the amplitude goes to infinity as $p$ approaches $\omega_0$. But amplitude has to be finite(from other examples of LRC tank circuit etc). Pretty sure I'm wrong but not able to see where. Any help?
Without math (or almost ;-): a system driven by an external forcing function at resonance is accepting energy input that the system cannot easily get rid of. This makes the energy pile up in the system, which makes the amplitude of the oscillations grow over time and get big enough to blow it up. In electrical systems like underdamped RLC circuits, the energy piles up to the point where it makes the system driver emit smoke, or shut itself down through the use of an automatic circuit called a fault detector or foldback which is intended to prevent smoke emission. This is a big deal because the process of catching the smoke and pumping it back into the system is difficult and expensive.
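To connect this back to the formula in the question: for any damping $p>0$ the steady-state amplitude $A(\omega) = 1/\sqrt{(\omega_0^2-\omega^2)^2 + (2p\omega)^2}$ stays finite, with its peak at $\omega^2 = \omega_0^2 - 2p^2$. A quick numeric check with made-up values $\omega_0 = 2$, $p = 0.3$:

```python
import math

w0, p = 2.0, 0.3   # illustrative values, p < w0 / sqrt(2)

def A(w):
    """Steady-state amplitude of x'' + 2p x' + w0^2 x = cos(w t)."""
    return 1.0 / math.sqrt((w0**2 - w**2)**2 + (2 * p * w)**2)

# brute-force scan for the resonance peak
ws = [i * 1e-4 for i in range(1, 40001)]         # 0 < w <= 4
w_peak = max(ws, key=A)

w_analytic = math.sqrt(w0**2 - 2 * p**2)         # peak location from the question
A_max = 1.0 / (2 * p * math.sqrt(w0**2 - p**2))  # finite peak height
```

Only the undamped limit $p \to 0$ gives a true divergence at resonance; and for $p^2 \ge \omega_0^2/2$ the condition $\omega^2 = \omega_0^2 - 2p^2$ has no real solution, the maximum of $A$ moves to $\omega = 0$, and the peak-height formula no longer applies, which resolves the apparent blow-up as $p$ approaches $\omega_0$.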
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How is conservation of momentum applied even if there is component of $mg$ acting as external force? In this question momentum is conserved along X' direction. But there is a component of mg along the plane which means that there is external force in X' direction. So how is conservation of momentum applied here??
Gravity is a non-impulsive force: it takes an appreciable time to act and does not change momentum in an instant. So in the short interval just before and just after the collision, momentum is conserved and gravity is neglected for such a small time. This works in all horizontal collisions like this one. BUT if the ball does not strike horizontally, then the normal force and/or friction are impulsive, so you cannot conserve momentum even for short durations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Easier way to calculate the expected value of $p^2$ I was doing some math with some wave functions when I stumbled upon this one: \begin{equation} \phi(x) = \sqrt{\frac{2 a^3}{\pi}} \frac{1}{a^2 + x^2} \mathrm{e}^{i k_0 x} \end{equation} I wanted to calculate the uncertainty in the momentum so I changed to $p$-representation obtaining $$\psi(p) = \sqrt{\frac{a}{\hbar}}\,\pi\,\exp \left(-\left|a \left(\frac{p}{h} - k_0\right)\right|\right),$$ but I don't know how to compute $\langle p^2\rangle$ correctly because if I use $p$-representation, I obtain big integrals but if I use $x$-representation, while computing $\langle p\rangle$ is very easy, the calculations for $\langle p^2\rangle$ are kind of complex. I was wondering if there was another way to calculate this expected value without it involving annoying integrals.
You don't have to change to the p-representation. Just use the fact that $p = \frac{\hbar}{i} \frac{d}{dx}$ and do the math. And your p-representation wave function is just an exponential (apart from the absolute value). Use the gamma function (it is really easy).
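As a cross-check of that "really easy" integral: with the substitution $u = p/\hbar - k_0$, the normalized momentum density $(a/\hbar)\,e^{-2a|p/\hbar - k_0|}$ (note the question's extra $\pi$ looks like a normalization slip) gives $\langle p\rangle = \hbar k_0$ and $\langle p^2\rangle = \hbar^2 k_0^2 + \hbar^2/(2a^2)$, so $\Delta p = \hbar/(a\sqrt 2)$. A crude numeric sketch, with $a = \hbar = 1$ and $k_0 = 2$ chosen arbitrarily:

```python
import math

a, hbar, k0 = 1.0, 1.0, 2.0   # arbitrary illustrative values

def rho(p):
    """Normalized momentum density |psi(p)|^2 = (a/hbar) exp(-2a |p/hbar - k0|)."""
    return (a / hbar) * math.exp(-2 * a * abs(p / hbar - k0))

# crude grid sum over a window wide enough that the exponential tails vanish
dp = 1e-3
grid = [k0 * hbar + (i - 20000) * dp for i in range(40001)]   # k0*hbar +/- 20
norm = sum(rho(p) for p in grid) * dp
p2 = sum(p * p * rho(p) for p in grid) * dp

analytic = hbar**2 * k0**2 + hbar**2 / (2 * a**2)   # expected <p^2> = 4.5 here
```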
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Standing wave in a string clamped at one end In the case of a standing wave formed with one end clamped (fixed), there is an anti-node at the free end irrespective of the overtone. My question is why there has to be an antinode. Is there any intuition behind it?
At the free end, the restoring force is zero (since it's free). This means the slope of the string at the end must be zero. Why? The restoring force comes from the vertical component of tension, and tension is tangent to the string always. So if the restoring force is zero, the slope must be zero. Only anti-nodes have the property of zero slope (max. deflection).
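The same statement in formulas: with the fixed end at $x=0$ the mode shape is $y \propto \sin(kx)$, and demanding zero slope at the free end $x=L$ forces $\cos(kL)=0$, i.e. $k=(2n+1)\pi/2L$; every such mode has maximum deflection (an antinode) exactly at $x=L$. A small check:

```python
import math

L = 1.0   # string length (arbitrary units)

# fixed end at x = 0  ->  mode shape y(x) ~ sin(k x)
# free end at x = L   ->  zero restoring force there, hence zero slope: cos(k L) = 0
slopes, end_amplitudes = [], []
for n in range(4):
    k = (2 * n + 1) * math.pi / (2 * L)          # allowed wavenumbers (overtones)
    slopes.append(k * math.cos(k * L))           # d/dx sin(k x) evaluated at x = L
    end_amplitudes.append(abs(math.sin(k * L)))  # 1 = maximum deflection at the end
```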
{ "language": "en", "url": "https://physics.stackexchange.com/questions/693896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the exact axioms to uniquely define the Minkowski metric tensor as a bilinear map? I have read that the definition of a metric tensor is a map with the following axioms: (1) a bilinear form from the tangent vector space (of a smooth manifold) to the real field, (2) symmetric, (3) nondegenerate. [Question] Now, from a purely mathematical perspective: given a map X (defined on a 4D tangent space), is it enough to say that (1) $X$ is a metric tensor, and (2) $X$ has signature $(-, +, +, +)$ or $(+, -, -, -)$, to deduce that X is the Minkowski metric tensor? Note: if the answer is yes, it would mean that Minkowski is the only metric tensor that as a bilinear form has the signature $(-, +, +, +)$. I think that these axioms are not enough, because in GR we work with metric tensors with the same signature (see this question). Therefore: [Subquestion part a] Which additional axioms should we include to uniquely define the Minkowski metric tensor as a map? [Subquestion part b] Would the additional axiom simply be explicitly stating that the coefficients of the bilinear form are all 1 (so -1,+1,+1,+1)?
Yes, that's enough. To be pedantic, though, a metric always has a positive definite signature, i.e. $(+,+,\dots,+)$, whilst a semi-metric can have arbitrary signature. A manifold with a metric is called a Riemannian manifold whilst a manifold with a semi-metric is called a semi-Riemannian manifold. Often the qualifier 'pseudo' is used instead of 'semi', but I prefer to not use that as the conventional understanding of pseudo means false or fake. A Lorentzian manifold is a semi-Riemannian manifold with signature $(-,+,+,\dots,+)$ or $(+,-,-,\dots,-)$ and this is what you are after. Minkowski space is simply a flat 4d Lorentzian manifold.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How to make the Moon spiral into Earth? I recently watched a video of what would happen if the Moon spiraled into Earth. But the video is pretty sketchy on the physics of just what would have to happen for that to occur. At first I thought I understood (just slow the Moon down enough), but my rudimentary orbital mechanics isn't enough to convince me that's sufficient (e.g., wouldn't the Moon just settle into a lower orbit?). What forces would have to be applied to the Moon to get it to spiral into the Earth, at what times? What basic physics are involved? (And why should I have already known this if I could simply remember my freshman Physics?)
The easiest way is to slow the moon down progressively until the atmosphere does the job. But it would be catastrophic, of course. Supposing that the moon is spiraling down on earth: 1 - Higher tides, huge waves and probable coast devastation. 2 - Once it gets close to the Roche limit (18,470 km) it would start to crumble and fall onto earth. 3 - We would get a beautiful set of rings, at least for a few years. 4 - Most of life on earth would vanish. 5 - Earth would end up like Venus, a moonless inhospitable place (at least for a few hundred million years).
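The quoted Roche distance can be roughly reproduced from the standard fluid-body estimate $d \approx 2.44\,R_{\oplus}\,(\rho_{\oplus}/\rho_{moon})^{1/3}$; the densities below are the usual textbook mean values:

```python
# fluid-body Roche limit: d ~ 2.44 * R_primary * (rho_primary / rho_satellite)^(1/3)
R_earth = 6371.0                         # km, mean radius
rho_earth, rho_moon = 5514.0, 3344.0     # kg/m^3, mean densities
d = 2.44 * R_earth * (rho_earth / rho_moon) ** (1.0 / 3.0)
# d comes out near 1.84e4 km, close to the ~18,470 km figure above
```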
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 5 }
How to use a piecewise acceleration function to get a position function? This should be a relatively easy problem but I think I am missing something somewhere. This problem consists of a object that is being thrown into the air at $t = 4s$ at a velocity $v_0$ here is my acceleration function: $a(n) = \begin{cases} 0, & t<t_1 \\ g, & t≥t_1 \end{cases} $ Where $g = - 9.8m/s^2$ $t_1 = 4s$ $x_0=0m$ $v_0$ is the velocity at which the object is being thrown up in the air at. When I derive the velocity function it seems to be correct from what I could find, $v(n) = \begin{cases} 0, & t<t_1 \\ gt-gt_1+v_0, & t≥t_1 \end{cases} $ But when I go to derive the position function I get lost. $y(t) - x_0= \int_{t_1}^tv(t)dt => [\frac{1}{2}gt^2-gt{t_1}+v_ot]_{t_1}^t =>\frac{1}{2}gt^2-gt{t_1}+v_ot -(\frac{1}{2}g{t_1}^2-g{t_1}^2+v_ot_1) $ When I then go apply this to the rest of the problem I get nonsense answers. Can someone please let me know where I've gone wrong. Sorry if this is an easy problem, I am a beginner to physics. PS: I know you can solve for this algebraically, you get $y(t) = \begin{cases} 0, & t>t_1 \\ \frac{1}{2}g(t+t_1)+v_0(t+t_1), & t≥t_1 \end{cases}$ but I would like to know the derivation based on the calculus of the problem as it is more relevant to the course I am following.
Your integration is correct. It just needs some completing-the-square TLC. $$ \frac{1}{2} g t^2 - g t t_1 + v_0 t - \frac{1}{2} g t_1^2 + g t_1^2 - v_0 t_1 $$ $$ \frac{1}{2} g\left( t^2 - 2 t t_1 + t_1^2 \right) + v_0 (t-t_1) $$ $$ \frac{1}{2} g\left(t - t_1 \right)^2 + v_0 (t - t_1) $$ which is what your PS should have said. Notice that we can, from the very beginning of the problem, make a change of variables $$ t' = t - t_1 $$ Then all integrals are from $t' = 0$ and we find $$v(t') = gt' + v_0$$ and $$x(t') = \frac{1}{2}g(t')^2 + v_0 t' $$
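The closed form can also be checked numerically against the integral definition; a small sketch with illustrative numbers ($g=-9.8$, $v_0=12$, $t_1=4$):

```python
g, v0, t1 = -9.8, 12.0, 4.0   # illustrative values

def v(t):
    """Velocity: zero before the throw at t1, then g (t - t1) + v0."""
    return g * (t - t1) + v0 if t >= t1 else 0.0

def x_numeric(t, n=100000):
    """Trapezoidal integration of v from t1 to t."""
    dt = (t - t1) / n
    total = 0.5 * (v(t1) + v(t))
    total += sum(v(t1 + i * dt) for i in range(1, n))
    return total * dt

def x_closed(t):
    """Result of completing the square: x = (1/2) g (t - t1)^2 + v0 (t - t1)."""
    return 0.5 * g * (t - t1)**2 + v0 * (t - t1)
```

The trapezoid rule is exact for a linear integrand, so the two agree to rounding error; the change of variables $t' = t - t_1$ at the end of the answer is exactly this shift.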
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Differential equation of a series $RLC$ circuit driven by a DC voltage source? From math below it seems no oscillations are possible and the steady state reaches instantly. I know this is wrong but I'm new to differential equations and don't see my mistake. Summary: For the initial conditions on capacitor $$v(0) = 0, \qquad i(0) = C v'(0) = 0$$ I'm getting the homogeneous solution constants $c_1 = 0$ and $c_2 = 0$. This means there is no transient response and the response reaches steady-state instantly. How is this possible? Let $v$ denote voltage across capacitor. Differential equation for series RLC circuit in terms of voltage across the capacitor is $$Ri + L\frac{di}{dt} + v = V$$ Since $i = C\frac{dv}{dt}$ it follows $$RC\frac{dv}{dt} + LC\frac{d^2v}{dt^2 } + v = V$$ $$\frac{d^2v}{dt^2} + \frac{R}{L}\frac{dv}{dt} + \frac{1}{LC}v = \frac{V}{LC}$$ With $2p = \frac{R}{L}$ and $\omega_0^2=\frac{1}{LC}$ the differential equation is $$v''(t) + 2pv'(t) + \omega_0^2 v(t) = \omega_0^2 V$$ The homogeneous solution for the above differential equation is $$v_h(t) = e^{-pt} ( c_1 \cos(\omega t) + c_2 \sin(\omega t) )$$ where $\omega^2 = \omega_0^2 - p^2$. From the initial conditions it follows $$v(0) = 0 \implies c_1 = 0$$ $$i(0) = v'(0) = 0 \implies c_2 = 0$$ This means that the differential equation has no transient response! A particular solution is $v_p(t) = V$ and this makes the complete solution! What am I doing wrong?
TL;DR In the procedure you posted you forgot to include particular solution. The homogeneous solution will always evaluate to $0$ when used as a solution to the general differential equation. Homogeneous solution The roots of the differential equation $$v''(t) + 2p v'(t) + \omega_0^2 v(t) = \omega_0^2 V$$ are $q_{1,2} = -p \pm j \sqrt{\omega_0^2 - p^2}$. For these roots there are three types of homogeneous solutions: $$v_h(t) = \left\{ \begin{array}{lll} e^{-pt} (A \cos(\omega t) + B \sin(\omega t)), & \omega_0^2 > p^2 & \qquad\text{(Underdamped response)} \\ e^{-pt} (A + B t), & \omega_0^2 = p^2 & \qquad\text{(Critically damped response)} \\ e^{-pt} (A e^{\omega_1 t} + B e^{-\omega_1 t}), & \omega_0^2 < p^2 & \qquad\text{(Overdamped response)} \end{array} \right. $$ where $A$ and $B$ are unknown constants, $\omega^2 = \omega_0^2 - p^2$ and $\omega_1^2 = -\omega^2$. We will now focus only on the underdamped response, as that is the one you analyze in your question. Particular solution The particular solution depends on the input (driver) function and the roots of the differential equation. In your case, the particular solution is $$v_p(t) = K$$ where $K$ is unknown constant. Solving for unknown constants The total solution is a linear combination of homogeneous and particular solutions $$v(t) = v_h(t) + v_p(t) = e^{-pt} \bigl( A \cos(\omega t) + B \sin(\omega t) \bigr) + K$$ If we use this as a solution to the general differential equation we get $$\underbrace{v_h''(t) + 2p v_h'(t) + \omega_0^2 v_h(t)}_{\text{always } 0} + \omega_0^2 v_p(t) = \omega_0^2 V$$ from which it follows $K = V$. The other two unknown constants are determined from initial conditions $v(0) = v_0$ and $v'(0) = v_0'$ $$A + K = v_0, \qquad -p A + \omega B = v_0'$$ from which it follows $A = v_0 - V$ and $B = \frac{1}{\omega} (v_0' + p v_0 - p V)$. 
Final solution In your special case, $v_0 = 0$ and $v_0' = 0$, and the final solution is $$v(t) = - V e^{-pt} \Bigl( \cos(\omega t) + \frac{p}{\omega} \sin(\omega t) \Bigr) + V$$ The above solution can be written in a more compact form $$\boxed{v(t) = V \Bigl( 1 - \frac{\omega_0}{\omega} e^{-pt} \cos\bigl( \omega t - \arctan \frac{p}{\omega} \bigr) \Bigr) }$$ Response overshoot Note that underdamped (oscillatory) responses naturally have an overshoot. This means that at some point the voltage on the capacitor will be higher than the input voltage. After the transient response, voltage on the capacitor settles to the input DC voltage $$v_f = \lim_{t \to \infty} v(t) = V$$ The response overshoot magnitude can be found as $$\text{PO} = \frac{v(t_m) - v_f}{v_f} \cdot 100\%$$ where $t_m$ is the time of the response maximum. From $v'(t_m) = 0$ it follows that $t_m = k \pi / \omega$ where $k$ is an odd positive number. The response overshoot magnitude is $$\boxed{\text{PO} = e^{-k \pi p/\omega} \cdot 100\%}$$
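The boxed solution and the overshoot formula can be sanity-checked by integrating the ODE directly; here is a rough sketch with made-up values $V=1$, $p=0.5$, $\omega_0=2$ (so the response is underdamped):

```python
import math

V, p, w0 = 1.0, 0.5, 2.0                # drive, damping p = R/2L, w0^2 = 1/LC
w = math.sqrt(w0**2 - p**2)             # underdamped since w0 > p

def v_closed(t):
    """Boxed closed-form solution for v(0) = v'(0) = 0."""
    return V * (1 - (w0 / w) * math.exp(-p * t) * math.cos(w * t - math.atan2(p, w)))

def v_numeric(t_end, dt=1e-4):
    """RK4 integration of v'' + 2p v' + w0^2 v = w0^2 V starting from rest."""
    def f(v, u):                        # state (v, v'), returns (v', v'')
        return u, w0**2 * (V - v) - 2 * p * u
    v, u, t = 0.0, 0.0, 0.0
    while t < t_end - 1e-12:
        k1v, k1u = f(v, u)
        k2v, k2u = f(v + 0.5 * dt * k1v, u + 0.5 * dt * k1u)
        k3v, k3u = f(v + 0.5 * dt * k2v, u + 0.5 * dt * k2u)
        k4v, k4u = f(v + dt * k3v, u + dt * k3u)
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        u += dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        t += dt
    return v

t_m = math.pi / w                       # time of the first maximum (k = 1)
overshoot = v_closed(t_m) - V           # should equal V * exp(-pi p / w)
```

At the first peak the closed form gives $v(t_m) = V(1 + e^{-\pi p/\omega})$, i.e. $\text{PO} = e^{-\pi p/\omega} \cdot 100\% \approx 44\%$ for these numbers.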
{ "language": "en", "url": "https://physics.stackexchange.com/questions/694867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Dirac spinor definition Is it right to say that the Dirac spinor is a mathematical representation of a wave-function that satisfies the Dirac equation? Or are there more requirements to it?
Honestly, I somehow dislike this point of view for the simple reason that in order to state the differential equation you must first have a definition of the object which should solve it. Let me state this differently. A differential equation is an equation of the form ${\scr D}\Psi=0$, where $\scr D$ is one differential operator. But to properly define and make sense of $\scr D$ you must know its domain first. Without defining a spinor you cannot write the Dirac eqution. That said, a Dirac spinor is one field $\Psi:\mathbb{R}^{1,3}\to \mathbb{C}^4$ on Minkowski spacetime which takes values in one vector space which carries one specific representation of the universal cover of the Lorentz group, ${\rm Spin}(1,3)\simeq {\rm SL}(2,\mathbb{C})$. The spin group ${\rm Spin}(1,3)$ has irreducible representations labelled by $(A,B)$ where $A$ and $B$ are integers or half-integers greater or equal to zero. In particular, the objects living in the representations $\left(\frac{1}{2},0\right)$ and $\left(0,\frac{1}{2}\right)$ are respectively called left-handed Weyl spinors and right-handed Weyl spinors. Finally, the objects living in the direct sum $\left(\frac{1}{2},0\right)\oplus \left(0,\frac{1}{2}\right)$ are called Dirac spinors. The representation space of the both left and right-handed Weyl spinors is $\mathbb{C}^2$, so the representation space of Dirac spinors is $\mathbb{C}^4$ as anticipated. After you have a proper definition of a spinor field you make sense of the operator appearing in the Dirac equation ${\scr D} = \gamma^\mu \partial_\mu + m$. To fully appreciate this story I like sections 5.4 and 5.6 of Weinberg's The Quantum Theory of Fields.
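The algebra that makes the operator $\gamma^\mu \partial_\mu + m$ well defined on this $\mathbb{C}^4$-valued field is the Clifford relation $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}\mathbb{1}$, and it can be verified directly for the standard Dirac-representation matrices; a plain-Python sketch (no libraries, mostly-minus metric convention assumed):

```python
I2 = [[1, 0], [0, 1]]
sigmas = [                       # Pauli matrices
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]
zero2 = [[0, 0], [0, 0]]

def neg(M):
    return [[-x for x in row] for row in M]

def block(A, B, C, D):
    """Assemble a 4x4 matrix from 2x2 blocks [[A, B], [C, D]]."""
    return [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = offdiag(sigma_i, -sigma_i)
gammas = [block(I2, zero2, zero2, neg(I2))] + \
         [block(zero2, s, neg(s), zero2) for s in sigmas]
eta = [1, -1, -1, -1]            # mostly-minus Minkowski metric

def anticommutator(mu, nu):
    XY, YX = mul(gammas[mu], gammas[nu]), mul(gammas[nu], gammas[mu])
    return [[XY[i][j] + YX[i][j] for j in range(4)] for i in range(4)]
```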
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do we find the centripetal forces of 3 planets revolving around a point given that they have the same mass? Let's say we have three planets revolving around a point. We know that the force of gravity acting on all of these planets can be taken from $g = G{m_1m_2 \over r^2}$. We can derive the velocity of these planets' revolutions through Centripetal force. How do we go about doing that?
For fun: The three body problem in general has to be solved numerically. The mathematical setup is given here.
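For the symmetric special case the question describes (three equal masses on an equilateral triangle, all circling the centroid), the centripetal balance does give a closed form: each mass feels a net inward pull $\sqrt{3}\,Gm^2/d^2$, the circle radius is $r = d/\sqrt{3}$, and $mv^2/r = \sqrt{3}\,Gm^2/d^2$ yields $v = \sqrt{Gm/d}$. A rough leapfrog sketch (units with $G = m = d = 1$) checks that this initial condition really stays on the circle:

```python
import math

G, m, d = 1.0, 1.0, 1.0          # equal masses on an equilateral triangle of side d
r = d / math.sqrt(3)             # circumradius of the triangle
v = math.sqrt(G * m / d)         # from m v^2 / r = sqrt(3) G m^2 / d^2

# place the bodies on the triangle with tangential velocities
pos, vel = [], []
for k in range(3):
    th = 2 * math.pi * k / 3
    pos.append([r * math.cos(th), r * math.sin(th)])
    vel.append([-v * math.sin(th), v * math.cos(th)])

def accels(pos):
    a = [[0.0, 0.0] for _ in range(3)]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            rr = math.hypot(dx, dy)
            a[i][0] += G * m * dx / rr**3
            a[i][1] += G * m * dy / rr**3
    return a

# leapfrog (kick-drift-kick) over one revolution, T = 2 pi r / v
dt, T = 1e-3, 2 * math.pi * r / v
acc = accels(pos)
for _ in range(int(T / dt)):
    for i in range(3):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]
    acc = accels(pos)
    for i in range(3):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
```

After a full period each body is still at distance $r$ from the center, confirming the circular-orbit speed; asymmetric initial data would need exactly this kind of numerical integration.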
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are kinetic energy of electrons and potential energy of electron - electron interaction universal operators? The time independent Schrödinger equation for a system (atom or molecule) consisting of N electrons can be written as (applying the Born - Oppenheimer approximation): $$ \left[\left(\sum_{i=1}^N - \frac {\hbar^2} {2m} \nabla _i ^2\right) + \sum_{i=1}^N V(r_i) + \sum_{i < j}^N U(r_i,r_j)\right] \Psi = E \Psi $$ Terms in the Hamiltonian are as follows: (1) kinetic energy of electrons, (2) potential energy of electron - nuclei interaction, (3) potential energy of electron - electron interaction. It is said that for an N electron system, the kinetic energy of electrons and the potential energy of electron - electron interaction are system independent, which means that their value depends only on the number of electrons $N$ (because of that they are called universal operators). The potential energy of electron - nuclei interaction depends on the specific system and isn't determined only by $N$. Source: DFT wikipedia, section: Derivation and Formalism, 2nd paragraph. https://en.wikipedia.org/wiki/Density_functional_theory. Second source is this page where the Hohenberg and Kohn theorems are proved; statements are made after equation 1.31. http://cmt.dur.ac.uk/sjc/thesis_ppr/node12.html Why is this? This is usually mentioned in DFT materials, but I didn't find any source which explains it.
The Born-Oppenheimer approximation introduces a dependency on the nuclear coordinates. The operator $\hat V = \hat V(R)$ can be seen as a function of these parameters. We solve the electronic time independent Schrödinger equation only for one particular choice of $R$, and in that sense the solution is not universal, since the operator itself isn't universal. The universal solution would be solving the Schrödinger equation without the Born-Oppenheimer approximation, i.e. treating the nuclei not as fixed but just like the electrons. In that case we wouldn't have to specify any nuclear coordinates and could use a universal potential operator. The electron interaction operator on the other hand is universal. $$ \hat U = \sum^N _{i=1}\sum^N_{j>i} \frac{e^2}{4\pi \varepsilon_0|\hat r_i- \hat r_j|} $$ It only depends on $N$, the number of electrons, but besides that it always has exactly this form. The interaction with the nuclei however is not a "pure" operator within the Born-Oppenheimer approximation: for $M$ nuclei, $$ \hat V(R) = \sum^M _{I=1}\sum^N_{j=1} \frac{-Z_I e^2}{4\pi \varepsilon_0|R_I- \hat r_j|} $$ This is different from the universal interaction $$ \hat V = \sum^M _{I=1}\sum^N_{j=1} \frac{-Z_I e^2}{4\pi \varepsilon_0|\hat R_I- \hat r_j|}. $$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Microwave inside-out cooking true/false The wikipedia article on microwave ovens says Another misconception is that microwave ovens cook food "from the inside out", meaning from the center of the entire mass of food outwards. It further says that with uniformly structured or reasonably homogenous food item, microwaves are absorbed in the outer layers of the item at a similar level to that of the inner layers. However, on more than one occasion I've microwaved a stick of butter, and the inside melts first, then the outside caves in releasing a flood of butter. (It may be relevant that my microwave turntable does not turn - but since I've done it more than once, I would not expect it to be a fluke of placement in the standing wave. And, the resulting butter-softness seemed very strongly correlated with depth, more than I'd expect from accident.) That sure seems consistent with the food absorbing more energy on the inside than on the outside. Given that this takes place over 30 seconds or so, I'd not expect much heat exchange to occur with the butter's environment (nor inside the butter itself), so that would forestall explanations of "the air cools off the outer layer of butter", unless I'm seriously underestimating the ability of air to cool off warm butter. So what's going on?
You guys are complicating the whole thing. Butter melts at a relatively low temperature. So too, the difference between "solid" and "liquid" is very small. Microwaves penetrate from around outside of a food stuff about ½". How thick is a stick of butter? Just over an inch. If microwaves pass into food ½" then they pass in from all sides ½". So where do they meet, combine or cancel each other out? In the center of a stick which is about 1" in cross section.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 7, "answer_id": 6 }
Why should a black hole be infinitely dense? I have been listening to this on Discovery for centuries of my childhood!! That when the heavy core of a star collapses under its own gravity, it shrinks to an infinitely dense point called "singularity". However, recently I was introduced to wave mechanics and The Schrodinger equation, so when the shrinking mass gets as small as the size of a hydrogen atom, quantum mechanics should be dominating the scenario, thereby eliminating all "singularities" - there should be a great great (but finite) density in that place. Maybe I took it all wrong, so please resolve my apparent paradoxical situation.
This is one of the currently unanswered questions in physics. The singularity of a black hole is a place where the spacetime curvature is very high (so general relativity is important), and where the size is very small (so quantum mechanics is important). Therefore, the general expectation is that we would need a quantum theory of gravity to tell us what happens in the singularity of a black hole. Since, at the present time, no one knows what the quantum theory of gravity is, no one can answer this question scientifically. As far as I know, within candidate theories of quantum gravity, such as string theory, it is not known how the singularity is resolved. As a caveat, there may be special cases where the singularity can be resolved in string theory which I don't know about, or other candidate theories of quantum gravity may have something to say about resolving the singularity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Stress tensor and equality of normal stresses on opposite faces Consider a body arbitrarily loaded as shown, At a particular point in the body, I take an element and show all the stresses acting on its faces. To specify a plane I will be using the axis which is perpendicular to it. For instance, the front face is the +z face and the face opposite to the +z face is the -z face. All the sources that I follow state that the normal stress acting on the +z and -z face will be equal. Similarly, the normal stress on the faces +x and -x as well as +y and -y will be equal. However, I feel that might not necessarily be the case. The normal stress on the +z and -z face can be different, but could be such that these normal stresses, along with the shear stresses acting along the z direction on the +y, -y and +x, -x faces, vectorially sum to zero, so that equilibrium is maintained along the z direction. $$\sigma_z - \sigma_z' + \tau_{xz} - \tau_{xz}' + \tau_{yz} - \tau_{yz}' = 0$$ Same arguments can hold true for equilibrium along the x and y directions. So, it might not be necessary that the normal stresses on opposite faces are equal, then why in the general state of stress at a point are they shown equal? A similar question was asked here
The point is that the claimed identity is valid in the limit of vanishing size $2\delta$ of the considered cubic element. If the element is centered on the origin and you consider two opposite faces, e.g., normal to the axis $z$, you have, assuming that the stress tensor is differentiable at the origin, $$\sigma(0,0,\pm\delta) \cdot {\bf e}_z= \sigma(0,0,0)\cdot {\bf e}_z + O_\pm(\delta)\:.$$ At this order of approximation, the stresses on the two opposite faces are equal in norm and have opposite directions (since the normal outward vectors are opposite: $\pm {\bf e}_z$). Since what I wrote above concerns the stresses as vectors, not only the normal components of the stresses are equal in norm, but also the tangential components are: $$\sigma_{az}(0,0,z)= \sigma_{az}(0,0,-z)+ O(\delta)$$ for $a=x,y,z$. Since this argument applies to each pair of opposite faces, in your equation $$(\sigma_z - \sigma_z') + (\tau_{xz} - \tau_{xz}') + (\tau_{yz} - \tau_{yz}') \simeq 0$$ the three differences vanish separately with the said approximation.
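The $O(\delta)$ statement is easy to see numerically: take any smooth (made-up) stress component and compare its values on the two opposite faces as the half-size $\delta$ shrinks; the mismatch dies off linearly:

```python
import math

def s_zz(x, y, z):
    """A smooth, completely made-up stress component sigma_zz(x, y, z)."""
    return 3.0 + 0.5 * x - 1.2 * z + 0.3 * math.sin(x + 2 * z)

# traction mismatch between the +z and -z faces of a cube of half-size delta
deltas = [1e-1, 1e-2, 1e-3]
diffs = [s_zz(0, 0, d) - s_zz(0, 0, -d) for d in deltas]
# each tenfold shrink of the cube shrinks the mismatch tenfold: O(delta)
```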
{ "language": "en", "url": "https://physics.stackexchange.com/questions/695914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Gravitational potential energy of a two body system from infinity In determining the gravitational energy of a two body system, we define it as the negative work done by the gravitational force in bringing those two bodies from infinity to a distance $r$ with respect to the first body. Now in doing this, we say that the work done by gravity in bringing the first body to a certain distance is $0$ because there is no gravitational field at our destination. And then we calculate the work done in bringing the second body due to the gravitational field created by the first body. But I didn't get why the work done on the first body is $0$. Because as the first body starts moving from infinity, the gravitational force still acts backwards between that body and the second body even though the first body is moving forward. In that case, how can the work done be $0$?
Gravitational potential energy in a two body system is a function of the separation of the two bodies, not their absolute locations. So if you want to use a work-energy argument to determine the potential energy at a separation $r$ then your initial condition is that the two bodies are separated by an infinite distance, when $PE=0$, and then brought closer together. Since gravity is a conservative force it does not matter which body is moved (or, indeed, if both are moved at the same time) or what paths they take. The only relevant points are their initial separation and their final separation. Your other error is to think of infinity as if it were a single location rather than a separation (or, strictly speaking, the limiting case of larger and larger separations). “At infinity” means “having an infinite separation from each other”, not “at a location that we call infinity”.
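The separation-only character of $U$ can be made concrete by actually accumulating the work done by gravity as the separation shrinks from a very large value down to $r$; the result matches $-GMm/r$ regardless of which body "moves". A numeric sketch with illustrative values:

```python
G, M, m = 6.674e-11, 5.97e24, 1.0   # SI units; Earth-like primary, 1 kg test body
r = 7.0e6                           # final separation in metres
r_far = 1.0e12                      # stands in for "infinite" separation

# work done by gravity as the separation shrinks from r_far to r:
# W = integral_r^{r_far} G M m / s^2 ds (attractive force, inward displacement)
n = 20000
q = (r_far / r) ** (1.0 / n)        # geometric grid suits the 1/s^2 falloff
W = 0.0
s = r
for _ in range(n):
    s_next = s * q
    f1, f2 = G * M * m / s**2, G * M * m / s_next**2
    W += 0.5 * (f1 + f2) * (s_next - s)   # trapezoid slice
    s = s_next

U_numeric = -W                      # PE defined as minus the work done by gravity
U_exact = -G * M * m / r
```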
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In radiotherapy, why do normal tissue or organ cells not die of radiation? In radiotherapy, why don't normal tissue cells or organ cells in the way of incoming radiation die, but tumours die instead?
There are two main reasons for this. First, there isn't a single direction the radiation is applied from. Instead, beams from multiple directions are directed at the affected body part. The part where all the beams overlap is the volume receiving the highest radiation dose. Ideally this is where the pathological tissue (e.g. a tumor) would be. Second, healthy tissue is better at regenerating from radiation damage than cancerous tissue. Therefore, over the course of many radiation sessions, the surrounding tissue can heal (to some degree), while the damage in the tumor accumulates over time (also to some degree).
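The first point is really a bookkeeping statement, and a toy calculation makes it concrete: to deliver a fixed dose at depth with exponentially attenuated beams, splitting the delivery over $n$ entry directions cuts the dose at each entry point by a factor $n$ while the tumour dose is unchanged. (All numbers below are invented for illustration, not clinical values.)

```python
import math

mu = 0.2            # made-up attenuation coefficient, per cm
depth = 10.0        # tumour depth along each beam, cm
n = 8               # number of beam directions

tumour_dose = 1.0   # required dose at the tumour (arbitrary units)

# single-beam plan: the one entry point takes the whole entrance dose
entrance_single = tumour_dose / math.exp(-mu * depth)

# n-beam plan: each beam only has to deliver 1/n of the tumour dose
entrance_multi = (tumour_dose / n) / math.exp(-mu * depth)

ratio = entrance_single / entrance_multi   # = n: entry-point dose drops n-fold
```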
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 1 }
Is power a cumulative quantity? Is the power needed to do a particular work cumulative? Like, the power needed to do work W for one second is P, is the power needed to do the same work for 2, 3, 4... seconds equal to 2P, 3P, 4P...?
Power is work divided by time, or the rate at which work is done. So the average power required to do the same amount of work in twice the time would be $\frac{P}{2}$. I don't know exactly what a cumulative quantity means, but I feel a cumulative quantity is something which adds up over time or space (like mass, distance, work). Power is not one such quantity. If work is analogous to the distance traveled, then power is analogous to the instantaneous speed. I believe you might be thinking of work as an instantaneous quantity. 1 J of work for 1 s and 1 J of work for 2 s are the same amount of work. Work is the "total" quantity here. (Work × Time) isn't a quantity that we calculate, or have a name for.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why does plane mirror form image of same size as object? Plane mirrors form images of the same size as the object. Also, if we need to see ourselves completely in a mirror, we would require a mirror of at least half our height. Assume I am 6 feet tall; then if I use a mirror 3 feet tall, how come my image and I have the same size? Should not my image be 3 feet tall, and if yes, then why do we say that plane mirrors form images of the same size as the object?
The optical ray diagram of a plane mirror may help here. Let's say you have a toy car, and it's sitting in front of a regular bathroom mirror. The distance between the car and mirror is called the object distance, and it's always positive. If you look at the image of the toy car in the mirror, it will appear to be the same distance behind the mirror as the real car is in front of the mirror, at the same height. It will also appear to be the same size as the real car. Since the image of the car looks like it's behind the mirror (and the light we see does not directly emerge from the image), we say that the image is upright and virtual, and that the image distance is negative. Because of the geometry of the optical rays, plotting them and measuring the sizes shows that plane mirror images have the same size as the original.
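The half-height mirror mentioned in the question also drops straight out of this geometry: by the law of reflection, to see a point at height $y$ your eye (at height $h_e$) needs a mirror point at the midway height $(y + h_e)/2$, so the mirror must span from $h_e/2$ up to $(h + h_e)/2$, a length of exactly $h/2$ whatever the eye height. A tiny check with example numbers:

```python
h, h_eye = 1.83, 1.70   # person height and eye height in metres (example values)

# to see a point at height y, the needed mirror point is midway between y and eye height
mirror_top = (h + h_eye) / 2        # ray from the top of the head
mirror_bottom = (0.0 + h_eye) / 2   # ray from the feet
mirror_length = mirror_top - mirror_bottom
# mirror_length equals h / 2 independently of h_eye; it is the mirror that can be
# half-size, not the image
```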
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Is potential difference always required for current? Say we use a cell to give rise to a current in a circuit and then remove the cell such that the circuit doesn't break. It means that no potential difference exists between any two points in the circuit since a circuit wire with current is neutral. So will the circuit wire have a decreasing current after I remove the battery, due to N1L and Galileo's definition of inertia that a body's state of rest or motion cannot be changed without an external force acting on it? Also does this mean that a superconductor will have constant current flowing through it even after removing the voltage source, since there is no resistance to slow it down?
. . . . we use a cell to give rise to a current in a circuit and then remove the cell such that the circuit doesn't break. The circuit, whether there is a break or not, can now be considered as having an inductance, resistance and capacitance, with the capacitance easier to visualise if there is a break in the circuit. However, even without a break there is capacitance, as explained in the Wikipedia article parasitic capacitance. What happens next depends on the relative values of capacitance, inductance and resistance, but basically it will be a damped LCR system which can be over-damped (the current falls to zero exponentially), critically-damped (the current falls to zero in the shortest time) or under-damped (the current executes damped simple harmonic motion). . . . . does this mean that a superconductor will have constant current flowing through it even after removing the voltage source since there is no resistance to slow it down? A way of exciting a superconducting magnet is explained below. The switch heater, the superconducting switch and the superconducting magnet are immersed in a cryogenic liquid below the temperature at which the superconducting switch and the superconducting magnet become superconducting. The heater is switched on and heats the superconducting switch to a temperature at which it is no longer a superconductor. The magnet power supply is switched on and the current, passing through the superconducting magnet and not the superconducting switch, is slowly increased to the required value. The heater is switched off and so the temperature of the superconducting switch drops until it becomes a superconductor. The magnet power supply is switched off and a constant current passes through the circuit consisting of the superconducting switch and the superconducting magnet.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/696886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
Nature of force between two permanent magnets When we put two permanent magnets close to each other they repel or attract each other, and this process increases their kinetic energy. I know that the magnetic force can't increase kinetic energy, so please explain what type of force this is.
I know that magnetic force can't increase kinetic energy so plz explain which type of force is this. Assuming you meant potential energy when you said, "kinetic energy." If two magnets are oriented so that they repel each other, then you increase the potential energy of that system when you push them closer together. Conversely, if they are oriented so that they attract each other, then once again, it is you who increases the potential energy of the system when you pull them apart.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Interpretation and units of propagators Quantum field theory is usually expressed in natural units in which $\hbar=c=1$. This simplifies equations and one can always get back to other units by inserting $\hbar$ and $c$ in appropriate places. However, to me this is not always straightforward. In the second edition of the book of Quantum Field Theory in a Nutshell by Zee we find on page 22 the equation for the Klein-Gordon (KG) equation \begin{equation} -(\partial^2+m^2)D(x-y) = \delta^{(4)}(x-y) \end{equation} One possible way to go to other units is to set \begin{equation} -(\partial^2+\left(\frac{mc}{\hbar}\right)^2)D(x-y) = \delta^{(4)}(x-y) \end{equation} in which case $D$ has dimensions of $L^{-2}$, where $L$ is length, but is this logical? On page 24 we learn that the propagator describes the amplitude for a disturbance in the field to propagate from $y$ to $x$. With this interpretation in mind, what should be the logical units of the KG or any other propagator? In quantum mechanics the wave function $\psi(\mathbf{r})$ has dimension $L^{-3/2}$ and is interpreted as a probability amplitude. This interpretation leads us to require that $\int\psi^{\ast}(\mathbf{r})\psi(\mathbf{r}) d^3\mathbf{r}=1$. Are there similar "sum rules" for propagators, reflecting an interpretation in terms of probabilities?
Yes, it's logical. With natural units $c=\hbar=1$ we usually work in mass dimension, so lengths and times are $-1$ (in other words, change signs in what follows if you prefer to think in terms of length), $\partial$ is $+1$, $\delta^{(4)}$ is $+4$, and $D$ is $+2$. This is the only way to determine the propagator dimension; in particular, there isn't an alternative rooted in unitarity, because the role of propagators is to invert differential operators. I appreciate the comparison to e.g. $(i\partial_t+\nabla^2/2m-V-E)\psi=0$, which can't determine $\psi$'s dimension, whereas $\int|\psi|^2d^3\vec{r}=1$ can. But therein lies the difference: $D$ satisfies an inhomogeneous equation with $\delta^{(4)}$ on the RHS, which is what sets its scale (i.e. prevents us from just doubling it or multiplying it by a length or whatever), not an integral equal to $1$. Of course, we can interpret it in integral terms viz.$$-\int(\partial^2+m^2)Df(x)dx=\int\delta^{(4)}(x-y)f(x)dx=f(y).$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is enthalpy discussed exclusively as $\Delta H$ and doesn't make sense as just $H$ For a chemical reaction, it is well-known that $$\Delta H = H_{\text{products}} -H_{\text{reactants}}$$ Are we physically unable to determine an absolute $H$?
Yes; since we are unable to determine an absolute energy $U$ and since enthalpy $H\equiv U+PV$, we are unable to determine an absolute enthalpy (or Helmholtz free energy or Gibbs free energy or chemical potential or anything that includes $U$).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the velocity of point of intersection? My approach: The velocity of the point of intersection, u, will be in the horizontal direction due to symmetry about the x-axis. The velocity v of the rod makes angle $\theta$ with the vertical. Thus, $v\sin \theta=u$. But the correct answer is A. Can anyone please point out where I am wrong?
TL;DR You have correctly identified that the intersection point does not travel along the vertical axis, but your conclusion about the horizontal velocity is not correct. I show here how to solve this problem by following: (i) a geometric approach, from which you will see what the problem with your conclusion is, and (ii) an algebraic approach, which is in my opinion much more straightforward for this particular problem. Geometric approach Let rod $A$ be the one with negative slope. If the positive horizontal axis goes along the dashed line to the right, then the velocity of rod $A$ is $$\vec{v}_{A/0} = v \angle \bar{\theta} = v ( \sin\theta \hat{\imath} + \cos\theta \hat{\jmath})$$ where $\bar{\theta} = 90^\circ - \theta$. Here you assumed that the intersection horizontal velocity is simply $v \sin\theta$, which is incorrect. That is the horizontal velocity of a point (particle) on the rod, which is not the same as the intersection. See the figure below for a geometric explanation. Figure: Geometric approach to calculate intersection velocity. The two red lines are of the same length. The intersection horizontal displacement $\Delta x$ can be calculated as $$\frac{\Delta x}{\sin 90^\circ} = \frac{v \Delta t}{\sin \theta} \qquad \rightarrow \qquad \boxed{\frac{\Delta x}{\Delta t} = \frac{v}{\sin\theta} = v \cdot \mathrm{cosec}(\theta)}$$ where $\Delta x / \Delta t$ equals the intersection velocity. Algebraic approach Let's describe the two rods as lines and place them in a (fixed) Cartesian coordinate system. The line equations are then $$y_1 = x \tan\theta_1 \qquad y_2 = -x \tan\theta_2$$ The lines travel at a certain velocity, but their slope remains constant. This means that the line offset changes in time (see figure below).
It takes only a little bit of geometry to show that the offset magnitude is $vt/\cos\theta$ and the line equations become $$y_1(t) = x(t) \tan\theta_1 - \frac{v_1 t}{\cos\theta_1} \qquad \text{and} \qquad y_2(t) = -x(t) \tan\theta_2 + \frac{v_2 t}{\cos\theta_2}$$ Your problem is actually a simple version with $\theta_1 = \theta_2 \equiv \theta$ and $v_1 = v_2 \equiv v$ $$y_1(t) = x(t) \tan\theta - \frac{v t}{\cos\theta} \qquad \text{and} \qquad y_2(t) = -x(t) \tan\theta + \frac{v t}{\cos\theta}$$ Figure: Algebraic approach to calculate intersection velocity. Line offset changes in time. Coordinates of the intersection are found from the condition $y_1(t) = y_2(t)$ $$x_i(t) = \frac{v t}{\sin\theta} \qquad \text{and} \qquad y_i(t) = 0$$ The intersection velocity components are $$v_{i,x} = \frac{d}{dt} x_i(t) = \frac{v}{\sin\theta} = v \cdot \mathrm{cosec}(\theta) \qquad \text{and} \qquad v_{i,y} = \frac{d}{dt} y_i(t) = 0$$ and the intersection velocity magnitude is $$\boxed{v_i = \sqrt{v_{i,x}^2 + v_{i,y}^2} = v \cdot \mathrm{cosec} (\theta)}$$
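The algebraic approach above can be checked numerically by solving the two line equations as a 2×2 linear system at successive times (the values of v and θ below are arbitrary):

```python
import numpy as np

def intersection_x(v, theta, t):
    """x coordinate of the intersection of the two moving lines
    y1 = x*tan(theta) - v*t/cos(theta) and y2 = -x*tan(theta) + v*t/cos(theta),
    found by writing each as a*x + b*y = c and solving the 2x2 system."""
    A = np.array([[np.tan(theta), -1.0],
                  [np.tan(theta),  1.0]])
    b = np.array([v * t / np.cos(theta),
                  v * t / np.cos(theta)])
    x, y = np.linalg.solve(A, b)
    return x
```

For v = 2 and θ = 30°, the intersection moves at v/sin θ = 4, in line with the boxed result.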
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Would a head-on collision between two stars create heavier elements? I was thinking about Przybylski's Star, and I was wondering how it was possible that so many heavy elements ended up in the star, such as einsteinium, californium, berkelium, etc. But there are unusually low amounts of iron and nickel. Also, I read about the LIGO detection of a neutron star merger. In my layman mind, it seems that a much more likely scenario for a neutron star merger is that the two become ensnared by each other's gravity and slowly collide, losing momentum from Chandrasekhar friction. This seems much more likely than two neutron stars heading in opposite directions, each traveling 200 km/s, meeting head-on. It's my guess that this is less likely, because I heard that when the Milky Way and Andromeda merge, it's likely not a single star will collide. So, all this together, I am wondering: if two neutron stars met in this high-speed, head-on manner, would that produce ultra-heavy elements like those seen in Przybylski's Star?
Whether we consider the usual gravitational in-spiral case or an unlikely head-on collision, two neutron stars that are close enough to collide will also have powerful gravitational forces between them, accelerating them to high speeds. By conservation of energy, you can expect their velocity to be roughly the escape velocity $v^{2} \sim \frac{GM}{r}$ where $M$ is the mass of a neutron star and $r$ is the distance between them. When they collide, $r$ will simply become the radius of the neutron star (when they are touching, their centers of mass will be roughly 2r apart). For the mass and radius of neutron stars, the escape velocity is tenths of the speed of light. So they slam together at a speed tenths the speed of light, providing the energy to eject some of the material. This neutron-rich material, freed from the pressures of being in a neutron star, undergoes the r-process and indeed creates heavier elements. I don't know about Przybylski's star in particular.
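A rough back-of-the-envelope check of the "tenths of the speed of light" claim, using typical (assumed) neutron-star numbers of 1.4 solar masses and a 10 km radius:

```python
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
M_SUN = 1.989e30      # kg
C_LIGHT = 2.998e8     # m/s

def collision_speed(mass_kg, radius_m):
    """Order-of-magnitude infall speed from v^2 ~ G M / r at contact."""
    return math.sqrt(G * mass_kg / radius_m)

v = collision_speed(1.4 * M_SUN, 1.0e4)   # ~1.4 solar masses, ~10 km radius
frac = v / C_LIGHT                        # fraction of the speed of light
```

The result is a few tenths of c, consistent with the estimate in the answer.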
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can observation by animals collapse the wave function? In Schrodinger's cat, somehow the cat is dead and alive at the same time until someone opens the box, observes that cat's state, and collapses the wave function. Of course, something can't be dead and alive at the same time (unless...), so this makes no sense. I still can't quite wrap my head around this idea. What if the cat is the observer and collapses the wave function while in the act of dying or staying alive? Is that a possibility? Which raises the question, can observation by other animals collapse the wave function? What counts as observation?
Let's suppose the cat will be killed when an atom undergoes radioactive decay. What happens is: * *A live cat goes in the box. *The cat knows it is alive. From the cat's point of view, the wavefunction has collapsed and the atom has not decayed. From the human's point of view, outside the box, the wavefunction has not collapsed and the cat might be alive or dead. *Sometimes, at a later point, the atom decays. If that happens, the cat knows it is dead and the cat's perspective shows a collapsed wavefunction. For the human, outside the box, the wavefunction has not collapsed and the cat might be alive or dead. *The box is opened. Now the cat and the human agree about the cat's status and they both observe a collapsed wavefunction. In summary, the cat can collapse the wavefunction for itself, but not for the rest of the universe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Covariant derivative with an upper index in terms of Christoffel symbols I have encountered expression $$\frac{1}{2}\left(2 \dot{g}_{\mu}{}^{\lambda ; \mu}-\dot{g}_{\mu}{}^{\mu ; \lambda}\right)$$ in a GR paper. Here we assume to be working with the de Sitter metric $g$ and $\dot{g}$ is some two tensor. I know that in general $$F_{\mu\nu;\kappa}=\partial_{\kappa} F_{\mu \nu}-\Gamma(g)_{\mu \kappa}^{\lambda} F_{\lambda \nu}-\Gamma(g)_{\nu \kappa}^{\lambda} F_{\mu \lambda},$$ but I am not sure how I can apply this to two terms where one index is at the bottom and the other one is at the top. I tried to lower everything as follows. Thus for instance for the first term, \begin{align} \dot{g}_{\mu}{}^{\lambda ; \mu} =\nabla^\mu \dot{g}_{\mu}{}^{\lambda}=g^{\mu \alpha}\nabla_{\alpha}(g^{\lambda \gamma}\dot{g}_{\mu \gamma}). \end{align} However, now I have to take the derivative of the product of two tensors which is not very nice. Is there a way to write a direct formula just like the one for $F$?
In General Relativity, the covariant derivative is always taken to be compatible with the metric. In other words, $\nabla_{\mu} g_{\nu\tau} = 0$ and $\nabla_{\mu} g^{\nu\tau} = 0$. This implies that $\nabla_{\alpha}(g^{\lambda \gamma}\dot{g}_{\mu \gamma}) = g^{\lambda \gamma} \nabla_{\alpha}\dot{g}_{\mu \gamma}$. As for the remaining steps of the calculation, I would do it in the same manner: bring all the derivative indices down and proceed as usual.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/697967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
QM perturbation degenerate case $$ \begin{aligned} 0 &=\left(E-H_{0}-\lambda V\right)|l\rangle \\ &=\left(E-E_{D}^{(0)}-\lambda V\right) P_{0}|l\rangle+\left(E-H_{0}-\lambda V\right) P_{1}|l\rangle \end{aligned} $$ We next separate (5.2.2) into two equations by projecting from the left on (5.2.2) with $P_{0}$ and $P_{1}$, $$ \begin{aligned} &\left(E-E_{D}^{(0)}-\lambda P_{0} V\right) P_{0}|l\rangle-\lambda P_{0} V P_{1}|l\rangle=0 \\ &-\lambda P_{1} V P_{0}|l\rangle+\left(E-H_{0}-\lambda P_{1} V\right) P_{1}|l\rangle=0 \end{aligned} $$ We can solve (5.2.4) in the $P_{1}$ subspace because $P_{1}\left(E-H_{0}-\lambda P_{1} V P_{1}\right)$ is not singular in this subspace since $E$ is close to $E_{D}^{(0)}$ and the eigenvalues of $P_{1} H_{0} P_{1}$ are all different from $E_{D}^{(0)}$. Hence we can write $$ P_{1}|l\rangle=P_{1} \frac{\lambda}{E-H_{0}-\lambda P_{1} V P_{1}} P_{1} V P_{0}|l\rangle $$ or written out explicitly to order $\lambda$ when $|l\rangle$ is expanded as $|l\rangle=\left|l^{(0)}\right\rangle+$ $\lambda\left|l^{(1)}\right\rangle+\cdots$. $$ P_{1}\left|l^{(1)}\right\rangle=\sum_{k \notin D} \frac{\left|k^{(0)}\right\rangle V_{k l}}{E_{D}^{(0)}-E_{k}^{(0)}} $$ Modern quantum mechanics, JJ Sakurai, page 299 My question is: there is an operator $P_1$ on the right side of $V$ in the denominator of the fifth equation. I used the fourth equation to derive the fifth, but I don't get that $P_1$ on the right side of $V$ in my calculation. Where does it come from?
On the LHS of your 4th equation, the last term is $-\lambda P_1 V P_1 |l\rangle$. Since $P_1^2=P_1$, this is equivalent to $-\lambda P_1 V P_1^2 |l\rangle$. Then the rest is just a rearrangement of your 4th equation: you move the 1st term to the RHS, and apply the inverse of the operator in the parenthesis (its inverse exists in the $P_1$ subspace, as described in the text) to both sides, with the extra $P_1$ we just inserted included in the parenthesis. Now you get the 5th equation except for the first $P_1$ on the RHS. Act with $P_1$ on both sides. On the LHS you get $P_1^2=P_1$ so nothing changes; on the RHS you get the RHS of the 5th equation. It is not necessary to insert the $P_1$; i.e., in the denominator of your 5th equation, the last term can just be $-\lambda P_1 V$, as you obtained, rather than the $-\lambda P_1 V P_1$ that appears in the text. However, including the inserted $P_1$ makes it clearer that the operator $-\lambda P_1 V$ is acting on the $P_1$ subspace, as seen from the last term on the LHS of the 4th equation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does the vapour bubble in an inkjet printhead collapse so fast? I am studying the inkjet printer in detail. I have come across thermal inkjet printing technology (bubble inkjet technology) and the short discussion below. We create a water vapour bubble by heating a resistor, which displaces ink and forms a drop. But after that, why does the bubble collapse in 10 to 20 microseconds? Bubble Lifetime The bubble lifetime can be determined from the reflectance measurements. If the bubble collapse is considered complete when the reflectance recovers to 0.75 of its initial value, then the typical lifetime for this printhead is about 11 μs, depending on the voltage applied to the heater. A higher voltage tends to lengthen the bubble lifetime, as seen in Figure 3b. The reflectance does not assume its initial value quickly until the heater cools to its steady state temperature. (screenshot of original)
I can't compete with Niels' years of experience in the field, but I'll add a note explaining why the bubble collapse is so fast once the heating is turned off. If you have ever inflated a party balloon then you will know that the tension in the rubber skin of the balloon exerts a pressure on the air inside. That's why if you let go of the balloon all the air rushes out. In this case the heater creates a bubble of steam inside the water, and the steam/water interface has an elasticity like the rubber skin of a balloon. This interfacial elasticity is called surface tension, and usually given the symbol $\gamma$. Just like a balloon the surface tension compresses the steam inside, and the pressure is given by: $$ P = \frac{2\gamma}{r} $$ where $r$ is the radius of the bubble. The surface tension of the steam/water interface is about $0.06$ N/m, so for a one micron radius bubble the pressure inside is about an atmosphere. That means as soon as the heating is switched off, and no more steam is being created the bubble is very quickly crushed out of existence by the one atmosphere pressure exerted on it by the surface tension. In a balloon the balloon collapses because the air rushes out of the nozzle. In a bubble of steam the bubble collapses because the steam is cooled by the water around it and condenses back into water.
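The Laplace-pressure estimate in the last paragraphs is easy to check directly (γ ≈ 0.06 N/m is the approximate surface-tension value quoted above):

```python
GAMMA = 0.06       # N/m, approximate surface tension of the steam/water interface
P_ATM = 1.013e5    # Pa, one atmosphere

def laplace_pressure(radius_m, gamma=GAMMA):
    """Excess pressure across a spherical bubble surface: P = 2*gamma/r."""
    return 2.0 * gamma / radius_m

p = laplace_pressure(1e-6)     # one-micron-radius bubble
ratio = p / P_ATM              # roughly one atmosphere, as stated above
```

So once the heater stops supplying steam, a micron-scale bubble sits under roughly an atmosphere of surface-tension pressure, which crushes it on microsecond timescales.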
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 2, "answer_id": 1 }
Derivatives of exponential operator I'm reading the paper (eq. (14) and eq. (10)) and got curious how the paper uses this equation: $\frac{\partial}{\partial c}\exp(-i\Delta t (X+cY)) = \exp(-i\Delta t (X+cY))(-iY\Delta t + \frac{\Delta t^2}{2}[X+cY, Y] + \frac{i\Delta t^3}{6}[X+cY, [X+cY,Y]]+ \cdots )$ Can anybody help me derive this equation?
The standard identity for the derivative of the exponential map is $$ \partial_c e^{M(c)}= e^{M} \left (1-\frac{1}{2}[M,\bullet]+ \frac{1}{6}[M,[M,\bullet]]+... \right ) \partial_c M, $$ where $\bullet$ pipes the argument on the right in case you were not familiar with the adjoint map. So, just plug in, $M= -i\Delta t (X+cY)$, $$ \partial_c e^{-i\Delta t (X+cY)} \\ = e^{-i\Delta t (X+cY)} \Delta t \Bigl (-i Y +[\Delta t (X+cY), Y ]/2 + i[\Delta t (X+cY) , [\Delta t (X+cY),Y]]/6 +...\Bigr ) , $$ amounting to your result.
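The identity can be verified numerically for real matrices by comparing the truncated series against a central finite difference of the matrix exponential. The matrices X and Y below are arbitrary illustrative choices; the paper's case is the same identity with M = −iΔt(X + cY):

```python
import numpy as np
from scipy.linalg import expm

def ad(A, B):
    """Adjoint action ad_A(B) = [A, B]."""
    return A @ B - B @ A

def dexpm_dc(X, Y, c, n_terms=25):
    """d/dc exp(X + c*Y) via exp(M) @ sum_k (-1)^k/(k+1)! ad_M^k(Y), M = X + c*Y."""
    M = X + c * Y
    term = Y
    total = np.array(Y, dtype=float)     # k = 0 term, coefficient 1/1!
    fact = 1.0
    for k in range(1, n_terms):
        term = ad(M, term)               # ad_M^k(Y)
        fact *= (k + 1)                  # fact = (k+1)!
        total = total + ((-1) ** k / fact) * term
    return expm(M) @ total

# Independent reference: central finite difference in c.
X = np.array([[0.2, 0.1], [0.0, -0.3]])
Y = np.array([[0.0, 0.4], [0.1, 0.2]])
c, h = 0.7, 1e-6
fd = (expm(X + (c + h) * Y) - expm(X + (c - h) * Y)) / (2 * h)
```

The truncated series and the finite difference agree to high precision, confirming the alternating-sign coefficients $1, -\tfrac{1}{2}, \tfrac{1}{6}, \dots$ of the adjoint expansion.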
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
If the creation operator has no eigenstates, then what happens when you "use" it? According to Is there a simple way of finding the eigenstates of the creation and annihilation operator in QM?, the creation operator has no eigenstates. But one postulate of QM says that the state of a system after measurement using an operator must be an eigenstate of that operator. How can one make sense of this?
Not all operators are observables. Only self-adjoint operators are observables. A property of self-adjoint operators is that they have real eigenvalues. The creation operator is not an observable and is not a self-adjoint operator. It's okay for an operator not to have eigenstates and not to be an observable. Only observables can be measured.
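A finite-dimensional illustration (a truncated Fock-space sketch, so only suggestive of the infinite-dimensional statement): the truncated creation operator is not equal to its own conjugate transpose, i.e. not self-adjoint — and it is nilpotent, so it has no nonzero eigenvalues in this truncation — while a self-adjoint combination of it has purely real eigenvalues:

```python
import numpy as np

N = 6                                        # truncation size (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # truncated annihilation operator
adag = a.conj().T                            # truncated creation operator

# adag is not self-adjoint, so it is not an observable.
# A self-adjoint (position-like) combination is an observable:
x = (a + adag) / np.sqrt(2)
evals = np.linalg.eigvals(x)                 # real, as expected for a self-adjoint operator
```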
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Ehrenfest theorem proof I'm using this resource along with Griffith's Introduction to Quantum Mechanics to try and reproduce the Ehrenfest theorem. From equation $(176)$ in the link above, we have: $$\frac{d\langle p\rangle}{dt}=\int_{-\infty}^{\infty}\left[\frac{-\hbar^2}{2m}\frac{\partial}{\partial x} \left( \frac{\partial \Psi^*}{\partial x} \frac{\partial \Psi}{\partial x} \right) +V(x)\frac{\partial|\Psi^2|}{\partial x} \right] dx$$ I am able to get to here without issues, but next we have to show that: $$\int_{-\infty}^{\infty}\left[\frac{-\hbar^2}{2m}\frac{\partial}{\partial x} \left( \frac{\partial \Psi^*}{\partial x} \frac{\partial \Psi}{\partial x} \right) \right] dx = 0$$ Which would only be true if: $$\left. \frac{\partial \Psi}{\partial x} \right|^{x=\infty}_{x=-\infty} = \left. \frac{\partial \Psi^*}{\partial x} \right|^{x=\infty}_{x=-\infty} = 0$$ Is there a way to know this generally? It's obviously true in certain cases of the wave function (e.g. $\Psi(x)=\exp[-x^2]$). In general, I thought the only condition for normalization was that: $$\left. \Psi \right|^{x=\infty}_{x=-\infty} = \left. \Psi^* \right|^{x=\infty}_{x=-\infty} = 0$$
Fun fact; it's not true in general! For example, this answer lists an example of a function that is totally square-integrable and therefore viable as a wave-function but whose derivatives do not have a well-defined limit at infinity. The real reason you can get away with doing this approximation is that we assume implicitly in quantum mechanics, perhaps with not enough forcefulness, that wave-functions have "compact support", i.e., the functions and their derivatives are only nonzero on a closed, bounded subset of space. Some toy examples of wave-functions eschew this requirement, such as the quantum free particle with exact momentum, but this is not a true wave-function as it is not square-integrable.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/698849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
In a capacitor, is there energy in the electric field, is there potential energy, or both? The electric field between two capacitor plates is very simple. $$ \vec{E} = \frac{Q}{\epsilon_0 A} \vec{e}_z $$ I can get the energy stored in the field by integrating the energy density, $u_e$, over the volume (between the plates). $$ U = \int_V u_e \; \text{d}^3\!x = \int_V \frac{\epsilon_0}{2} E^2 \; \text{d}^3\!x $$ Since the field is constant, if I pull the plates apart—say that I double the distance—the integration volume is now twice what it was, and the energy stored in the field doubles. Fine! Simultaneously, we can make an argument from the potential energy of the charges in the plates. The charges in each plate are attracted to the other, so when I pull them apart there is a force, and I'm doing work which gives the charges additional potential energy, in virtue of their increased separation. My question is: Are these two separate processes, with energy stored both in the field AND in the potential energy of the charges? Or are these two different ways of describing the same physical fact that the energy of the system is increasing? Cheers!
Energy stored in a capacitor is electrical potential energy, and it is thus related to the charge $Q$ and voltage $V$ on the capacitor. We must be careful when applying the equation for electrical potential energy $\Delta PE = q \Delta V$ to a capacitor. Remember that $\Delta PE$ is the potential energy of a charge q going through a voltage $\Delta V$. But the capacitor starts with zero voltage and gradually comes up to its full voltage as it is charged. The first charge placed on a capacitor experiences a change in voltage $\Delta V = 0$ since the capacitor has zero voltage when uncharged. The final charge placed on a capacitor experiences $\Delta V = V$, since the capacitor now has its full voltage $V$ on it. The average voltage on the capacitor during the charging process is $\frac{V}{2}$ , and so the average voltage experienced by the full charge $q$ is $\frac{V}{2}$ . Thus the energy stored in a capacitor is $Q \frac{V}{2}$ , where $Q$ is the charge on a capacitor with a voltage $V$ applied. (Note that the energy is not $Q V$, but $Q \frac{V}{2}$ ). Charge and voltage are related to the capacitance $C$ of a capacitor by $Q = C V$, and so the expression for $E_{cap}$ can be algebraically manipulated into three equivalent expressions: $$ E_{cap} = Q \frac{V}{2} = C \frac{V^{2}}{2} =\frac{Q^{2}}{2C}$$ where $Q$ is the charge and $V$ is the voltage on a capacitor $C$. The energy is in joules for a charge in coulombs, a voltage in volts, and capacitance in farads.
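The three equivalent expressions at the end can be checked with a short sketch (the 10 µF / 12 V values are arbitrary examples):

```python
# Three equivalent forms of the energy stored in a capacitor.
def cap_energy_from_CV(C, V):
    return 0.5 * C * V**2

def cap_energy_from_QV(Q, V):
    return 0.5 * Q * V

def cap_energy_from_QC(Q, C):
    return Q**2 / (2 * C)

C = 10e-6          # farads (arbitrary example: 10 uF)
V = 12.0           # volts (arbitrary example)
Q = C * V          # coulombs, from Q = C V
```

All three expressions give the same stored energy, as they must, since they differ only by substituting $Q = CV$.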
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
EFT unitarity violation Standard Model Effective Field Theory is said not to be a complete model because the presence of nonzero anomalous couplings, say Quartic Gauge Couplings, would violate tree-level unitarity at sufficiently high energy, i.e. if you don't constrain the energy considered. Can someone explain why exactly? Is unitarity a strict requirement? If so, where does it come from?
There are no couplings in the Standard Model that violate tree-level unitarity at any energy. The reason we know the Standard Model is incomplete is experimental, not theoretical: it does not contain gravity, or dark matter. In the Standard Model without the Higgs, there are couplings that violate tree-level unitarity; removing them is one reason why the Higgs field is introduced. Unitarity is the requirement that the time evolution operator is a unitary operator. This is necessary so that the norm of states in Hilbert space is preserved in time. This is equivalent to requiring that probability is conserved. We normalize the state at some initial time so the probability that we will get a possible outcome of an experiment is equal to $1$ (ie: there is probability $1$ that something happens). Unitarity guarantees that this probability remains $1$ when evolving the state in time. If unitarity was violated, quantum mechanics would not predict well-defined probabilities for events to occur, and so would be useless for making predictions. Note, there are some cases like particle decay where you can model some incomplete part of a system as evolving in a way that is not unitary. But, at a fundamental level, if you account for all relevant degrees of freedom, the time evolution must be unitary. You can read more about unitarity here: https://en.wikipedia.org/wiki/Unitarity_(physics)
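The statement that unitary time evolution preserves the norm of states can be illustrated numerically: build a random Hermitian "Hamiltonian", exponentiate it, and check that U is unitary and leaves the state norm equal to 1 (the matrix size and time value are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2          # Hermitian "Hamiltonian" (arbitrary)
U = expm(-1j * H * 0.37)          # time-evolution operator for an arbitrary time

psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi = psi / np.linalg.norm(psi)   # normalized initial state
psi_t = U @ psi                   # evolved state; its norm is still 1
```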
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Work of a spaceship in circular motion Say a spaceship is traveling through space in uniform circular motion. It's not orbiting any planet; it just flies in circles in empty space. The only force acting on the spaceship would be the centripetal force caused by the ship's engine. Thus, the work would be $0$, as the force would always be perpendicular to the ship's path. But that sounds counterintuitive to me; it would seem that the spaceship must do some work, otherwise it would just float in a straight line. Can anyone point out the error in my reasoning?
This is a continuation of Andrea's and Claudio's answers: From this link Work refers to an activity involving a force and movement in the direction of the force. A force of 20 newtons pushing an object 5 meters in the direction of the force does 100 joules of work. Energy is the capacity for doing work. You must have energy to accomplish work - it is like the "currency" for performing work. To do 100 joules of work, you must expend 100 joules of energy. italics mine Energy is a conserved quantity; work is not a conserved quantity, as its definition relies on the vector direction of the force, as Claudio states. So energy is conserved, as discussed in the answer by Andrea, but the work in this particular problem is zero if the loss of mass is ignored. If the loss of mass is not ignored, the magnitude of F should change because of the diminution of mass, so as to stay on the same circle of radius r, and then there is work done.
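The claim that the centripetal force does zero work can be checked by integrating the instantaneous power F·v over one full revolution (mass, speed and radius below are arbitrary):

```python
import numpy as np

def work_over_circle(m, v, r, n_steps=100000):
    """Numerically integrate the power F.v over one full revolution of
    uniform circular motion, with F the centripetal force. Result ~ 0."""
    omega = v / r
    t = np.linspace(0.0, 2 * np.pi / omega, n_steps)
    vel = np.stack([-v * np.sin(omega * t), v * np.cos(omega * t)])
    acc = np.stack([-v * omega * np.cos(omega * t), -v * omega * np.sin(omega * t)])
    power = m * np.sum(acc * vel, axis=0)              # F.v at each instant
    dt = t[1] - t[0]
    return np.sum((power[:-1] + power[1:]) / 2) * dt   # trapezoidal rule
```

Because the force is always perpendicular to the velocity, F·v vanishes at every instant, so the integrated work is zero to numerical precision.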
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
What does the Lorentz factor represent? How can the Lorentz factor $\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$ be understood? What does it mean? For example, what is the reason for the second power and square root? Why not $\frac{1}{1-\frac{v}{c}}$, or what would happen if it took that form? Can you point me to other physical laws that make use of $\sqrt{1-r^2}$, so as to "translate" it? Let $T_0$ be the local period and $L_0$ the local length * *Light-clock moves $\bot$ light $\rightarrow$ $T=\frac{T_0}{\sqrt{1-\frac{v^2}{c^2}}}$ *Light-clock moves $\parallel$ light $\rightarrow$ $T=\frac{2L}{c\left(1-\frac{v^2}{c^2}\right)}$ Referring to 1, how could the photons not miss the mirror? Q: How could I understand the Lorentz factor formula intuitively, or what is your concept of it?
The Lorentz factor can be understood as how much the measurements of time, length, and other physical properties change for an object while that object is moving. What you have named $r^2$ is indeed known as $\beta^2$, where $\beta$ is the ratio between the relative velocity of the inertial reference frames and $c$, the speed of light. It is also written as $$ \gamma = \frac{1}{\sqrt{1-\beta^{2}}}=\frac{d t}{d \tau} $$ where $t$ is coordinate time and $\tau$ is the proper time for an observer (measuring time intervals in the observer's own frame). This is also what you have written as equation (1) in your question. For the interpretation of (1), don't think of the mirror itself; just think of an infinitely big plane perpendicular to your light beam/direction of the photons. It is also related to many other quantities, as I mentioned when defining the factor. I will write a few examples below: * *Time Dilation: $\Delta t^{\prime}=\gamma \Delta t$ *Length Contraction: $\Delta x^{\prime}=\Delta x / \gamma$ *Relativistic mass: $m=\gamma m_{0}$ As you can see, all these quantities are $\gamma$-dependent and therefore evolve with the square-root factor you mentioned! For more examples, search Wikipedia; there are probably more of them
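The factor and the listed relations are easy to compute directly. A small sketch (with c normalized to 1, so speeds are fractions of the speed of light):

```python
import math

def lorentz_gamma(v, c=1.0):
    """Lorentz factor, with v expressed in the same units as c."""
    beta = v / c
    return 1.0 / math.sqrt(1.0 - beta**2)

def dilated_time(dt_proper, v, c=1.0):
    """Time dilation: Delta t' = gamma * Delta t."""
    return lorentz_gamma(v, c) * dt_proper

def contracted_length(L_proper, v, c=1.0):
    """Length contraction: Delta x' = Delta x / gamma."""
    return L_proper / lorentz_gamma(v, c)
```

For example, at $v = 0.6c$ the factor is exactly $1/\sqrt{1-0.36} = 1.25$, so a proper second dilates to 1.25 s and a proper meter contracts to 0.8 m.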
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 8, "answer_id": 0 }
Notation for rule of thumb, without breaking dimensional homogeneity? I'd like to know how to write rules of thumb in a concise way, without breaking dimensional homogeneity. For example, if a runner has an average speed of ~10 km / h, an approximation of the covered distance would be $\mathrm{distance} \approx \mathrm{duration} * 10 \frac{\mathrm{km}}{\mathrm{h}}$ Is there any shorter way to write it? The goal would be to make it clear that you can simply multiply the number of hours by 10, and you'd get the number of kilometers. $\mathrm{km} = 10 * \mathrm{h}$ is concise, but it's also obviously wrong because it breaks dimensional homogeneity. There was a question on bicycle.stackexchange ("How to convert calories to watts on Strava rides?"), and one of the answers was Calories(kcal) = Watts * Hours * 4. This rule of thumb doesn't break homogeneity, but it still looks weird because one kcal is 1.163Wh, and not 4Wh. What would be a better way to write it?
My preference for such things is $$\left[\frac{\mathrm{distance}}{1\ \mathrm {km}}\right] = 10\left[\frac{\mathrm{duration}}{1\ \mathrm{hr}}\right]$$ As another example, the electron plasma frequency is given by $\omega = \sqrt{ne^2/\epsilon_0 m}$. Since all but one of the quantities on the right-hand side are constants, this can be written as a very straightforward rule of thumb: $$ \left[\frac{\omega}{1\ \mathrm{Hz}}\right] = 5.64 \times 10^{4} \left[\frac{n}{1\ \mathrm{cm}^{-3}}\right]^{1/2}$$
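To illustrate how such a rule of thumb is used (and where its numerical constant comes from), here is a small sketch that evaluates the plasma-frequency example both from the full SI formula and from the dimensionless form; the density value is made up:

```python
import math

# CODATA SI constants
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m_e = 9.1093837015e-31   # electron mass, kg

def plasma_freq_si(n_per_m3):
    """omega = sqrt(n e^2 / (eps0 m_e)) with n in m^-3 (SI throughout)."""
    return math.sqrt(n_per_m3 * e**2 / (eps0 * m_e))

def plasma_freq_rule(n_per_cm3):
    """Rule of thumb: [omega / 1 Hz] = 5.64e4 * [n / 1 cm^-3]^(1/2)."""
    return 5.64e4 * math.sqrt(n_per_cm3)

n_cm3 = 1.0e10                        # an arbitrary density, cm^-3
exact = plasma_freq_si(n_cm3 * 1e6)   # 1 cm^-3 = 1e6 m^-3
approx = plasma_freq_rule(n_cm3)
print(exact, approx)  # agree to within the rounding of the 5.64e4 constant
```

The constant $5.64\times 10^4$ is just $\sqrt{e^2 \cdot 10^{6}\,\mathrm{m}^{-3}/(\epsilon_0 m_e)}$, i.e. the physical constants evaluated once, with the unit conversion folded in.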
{ "language": "en", "url": "https://physics.stackexchange.com/questions/699915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Regarding the action of Time reversal on Dirac spinors I'm inquiring about the difference between notions of time reversal found in Streater & Wightman's "PCT, Spin and Statistics, and All That", and this accepted answer from Chiral Anomaly. While both agree $\mathcal{T}$ is anti-unitary and $\mathcal{C}$ is unitary, and that $$\mathcal{C} \psi(t,x) \mathcal{C}^{-1} \propto \psi^{c}(t,x)$$ they disagree in that the book says on page $19$, section $(1-45)$, $$\mathcal{T} \psi(t,x) \mathcal{T}^{-1} \propto \psi^{c} (-t,x)$$ while the above answer says $$\mathcal{T} \psi(t,x) \mathcal{T}^{-1} \propto \psi(-t,x).$$ Is this merely a difference of conventions? Or is one incorrect?
There are two definitions of time reversal, one of which changes particles to antiparticles. The second, the Wigner definition, does not and is the one usually used these days.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quality factor of LCR circuit If we are given a parallel or series LCR circuit, we know the quality factor of these circuits (which we can see in many books). But suppose we are given an LCR circuit with the three components connected in series or parallel as we like: say, the resistor is connected in series with both the inductor and the capacitor, but the inductor and capacitor are connected in parallel with each other. Here we have a simple deviation from the traditional series or parallel circuit, so wouldn't this change in configuration also change the quality factor of the circuit? If yes, what factors do we need to consider to find the quality factor of any such arbitrary LCR circuit?
"Now here we have a simple deviation from our traditional series or parallel circuit, so wouldn't this change in configuration also change the quality factor of the circuit?" Yes, of course it would. It's not the same circuit anymore. There would be similarities, but you can't use the common equations for series or parallel LCR circuits. "If yes, what factors do we need to consider to find out the quality factor of any such arbitrary LCR circuits?" You'd need to do circuit analysis on the proposed configuration and calculate its bandwidth and attenuation. Any 2nd-year EE student could do this (should be able to do this...) This was touched on already; see here: $Q$ factor of parallel RLC circuit in series with a capacitor and resistor
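As a sketch of what that circuit analysis looks like for the configuration described in the question (a resistor in series with a parallel LC tank; the component values below are made up), one can evaluate the total impedance numerically. For this band-stop topology, expanding the tank impedance near resonance works out to a quality factor $Q = R\sqrt{C/L}$, in contrast to the series-RLC result $Q=\frac{1}{R}\sqrt{L/C}$, which is exactly the point: the configuration changes $Q$.

```python
import math

R, L, C = 100.0, 1e-3, 1e-6   # illustrative component values

def z_total(w):
    """R in series with the parallel combination of L and C."""
    zl = 1j * w * L
    zc = 1.0 / (1j * w * C)
    return R + (zl * zc) / (zl + zc)

w0 = 1.0 / math.sqrt(L * C)   # tank resonance: current through R dips
Q = R * math.sqrt(C / L)      # Q of this band-stop topology (sketch)

# Sanity check: near the approximate half-power frequency w0*(1 + 1/(2Q)),
# the total impedance magnitude is close to R*sqrt(2).
w_half = w0 * (1 + 1 / (2 * Q))
ratio = abs(z_total(w_half)) / (R * math.sqrt(2))
print(w0, Q, ratio)   # ratio close to 1 confirms the bandwidth estimate
```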
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the meaning of an object having an uncertainty of velocity of 2 $\rm m/s$? In several questions we are given the uncertainty in the velocity of an object and are asked to calculate the uncertainty in its position. My doubt is this: when we say that the uncertainty in position (of, say, an electron) is 2 nm, we mean that the electron could be found anywhere within that 2 nm; it could be at 1.2 nm, or at 1.5 nm, or anywhere within that 2 nm distance. But when we say that the uncertainty in the velocity of the electron is, say, 2 m/s (just for the sake of convenience), what does that literally mean, in the way it did in the case of position?
When you say that the electron's position is uncertain by $\pm 2\,\rm nm$, it means that whatever electron position you have calculated/measured, you may be off by a 2 nm offset. That is, in reality the electron could be anywhere within a 2 nm radius of your target position coordinate in position vector space. The same goes for speed uncertainty. Whatever speed of the electron you have calculated/measured, a $2\,\rm m/s$ uncertainty means that in the $v_x,v_y,v_z$ speed-vector 3D phase space, the electron's speed fluctuates within a radius of $2\,\rm m/s$ of your target value. In addition, each vector component may have its own uncertainty. That is, it can be that $$\Delta v_x \ne \Delta v_y \ne \Delta v_z,$$ due to the fact that your particle detector may be more sensitive along some measurement axes than others, etc.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Force between two protons Yesterday my teacher was teaching about the production of photons. He said that photons are produced when an electron moves from a higher energy level to a lower energy level. Then an idea suddenly struck me: if electrons are responsible for photons, and photons are responsible for the electromagnetic force, then how does the electromagnetic force arise between two individual protons? Are there more ways to generate photons?
Photons are the carriers of the electromagnetic force, so any charged particle exchanges photons with another charged particle to transmit the force. Electrons aren't the only particles that can emit virtual photons; any charged particle can do it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Why doesn't the non-degeneracy definition of the metric tensor assure $g(v,v)=0\implies v=0$? We know that a defining property of the metric tensor is that it is non-degenerate, meaning $\forall u,\, g(v,u)=0\implies v=0$. Yet from a textbook I read that $g(v,v)=0$ does not assure $v=0$. Why is this? Can't we simply let $v=u$ in the definition and obtain $g(v,v)=0\implies v=0$? Thanks.
I think this is a question of logic: Suppose $$g(v,v)= 0 \Longrightarrow v=0 \tag{1} $$ holds. Then we can conclude $$\forall u:\quad g(u,v)=0 \Longrightarrow v=0 \quad, \tag{2}$$ by choosing $u=v$. However, the converse need not be true: even if $(2)$ holds, we cannot conclude from $g(v,v)=0$ alone that $v=0$; the condition in $(2)$ must hold for all $u$. So while $(2)$ follows from $(1)$, the converse is in general not true. Counterexamples are provided in the other answers.
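A concrete counterexample can even be checked numerically. The sketch below uses the 2D Minkowski metric $g=\mathrm{diag}(-1,+1)$: the null vector $v=(1,1)$ satisfies $g(v,v)=0$ without vanishing, while non-degeneracy still holds because $g(u,v)\ne 0$ for some $u$:

```python
# 2D Minkowski metric g = diag(-1, +1): non-degenerate, yet it has
# nonzero null vectors.
g = [[-1.0, 0.0],
     [0.0, 1.0]]

def g_inner(u, v):
    """g(u, v) = sum over a, b of g[a][b] * u[a] * v[b]."""
    return sum(g[a][b] * u[a] * v[b] for a in range(2) for b in range(2))

v = [1.0, 1.0]                  # a null vector: nonzero, but g(v, v) = 0
print(g_inner(v, v))            # 0.0

# Non-degeneracy: g(u, v) = 0 cannot hold for *all* u unless v = 0;
# e.g. the basis vector u = (1, 0) already "sees" v:
print(g_inner([1.0, 0.0], v))   # -1.0
```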
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Electromagnetic forces due to a current carrying wire on a stationary charge due to length contraction Consider a current-carrying wire. There is a stationary charge $q$ at a distance $r$ from the wire. In the lab frame, when the electrons in the wire are moving at velocity $v_1$, we say that there is some linear charge density of positive ions equal to $\lambda_1$, and the linear charge density of the electrons, with the effect of length contraction, is equal to $-\lambda_1$. The electrons in the wire were moving and therefore length-contracted; their linear charge density became equal to the charge density of the positive ions due to length contraction. Therefore the net charge on the wire was $0$ and no electric force acted on $q$. Now, again in the lab frame, we increase the current in the wire and the velocity of the electrons becomes $v_2$. The electrons are now more length-contracted, and the magnitude of their linear charge density should increase from $|-\lambda_1|$. The density of positive charges is the same as before. This means there would be a net charge density on the wire and therefore a force on the stationary charge $q$. But it doesn't happen. Why? According to length contraction, the greater the velocity of the electrons, the greater their contraction, but this doesn't seem to work in the case of current-carrying wires. Why?
But it doesn't happen. Why? The electrons don’t have a fixed separation in their rest frame. So when you change the current you also change the separation in the electron’s rest frame. Changing the current in the wire is not a boost.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/700881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do we use different differential notation for heat and work? I have just recently started studying thermodynamics, and I am confused by something we were told. I understand that we use the inexact differential notation because work and heat are not state functions, but we are told that the '$df$' notation is only for functions and that the infinitesimal heat and work are 'not changes in anything'. Surely they can be expressed as functions of something? And they are still changes, as they do change? What is the thermodynamic reason for describing them as not being changes in anything?
$ df,\,\Delta f$ and $\delta f$ are associated with the idea of a change from an initial value to a final value. So $\Delta f$ or $\delta f$ are equal to $f_{\rm final}-f_{\rm initial}$, and $df$ when the change is infinitesimal. As you have pointed out, with work and heat there are no initial and final states, but it is useful to have some sort of notation for the amount of work done and heat supplied. Some people use $\delta f$, explaining that it represents a "small" amount, whereas others use the lowercase delta with a bar through it, often written đ (I cannot find the LaTeX symbol for this, although there is one for $h$, $\hbar$), to make the distinction more obvious. So "deltabar $Q$" might equal $m\,C\,\delta T$ and "deltabar $W$" might equal $F\,\Delta x$, where $\delta T = T_{\rm final}-T_{\rm initial}$ and $\Delta x = x_{\rm final}-x_{\rm initial}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
A question regarding excitation of electrons in atomic orbitals In Bohr's model of the atom, the formula used to find the energy between two orbits and the wavelength of the emitted photon is valid only for single-electron species like hydrogen. In the case of a multi-electron system like the one in the picture above, will the electron absorb a photon to go from 2s to 2p, and also re-emit a photon while de-exciting from 2p to 2s? There are also elements like sulphur with two excited states, thus showing variable covalency; but how do the electrons not de-excite from the higher-energy orbital in a short time, yet give enough of a time gap to show two excitation states? Is the de-excitation and re-emission of a photon a phenomenon that can only be seen when an electron goes from one shell to another, like from n=1 to n=2, or can it also be seen when an electron goes between orbitals and subshells, like 2s to 2p? Since there is an energy difference between the 2s and 2p subshells, there must be re-emission of a photon after excitation, but I did not find any online sources to verify this, so I need help.
The electron can go to any state allowed by the selection rules for the corresponding type of transitions considered (e.g. electric dipole transitions). This means there are in general different decay channels available to the electron. Which channel it takes is a random process, with a statistical weight given by the corresponding decay probability of the respective levels (which depends on the transition frequency and the overlap integral of the two wave functions of the states involved in the transition). Of course, an electron in a given atom can ultimately go only into one of the available states, but an electron in a different atom may go into a different state. This is why you can see more than one spectral line from an ensemble of many atoms. Having said this, in your example there is actually only one way the excited electron can go, namely back to where it came from.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do the components of a force written for a purpose actually exist? If you put a box on an inclined plane, the force of gravity $mg$ is written as the sum of two forces, $mg\sin\theta$ and $mg\cos\theta$, where $\theta$ is the angle the incline makes with the earth's surface. Do these forces $mg\sin\theta$ and $mg\cos\theta$ actually act on the object?
I am tempted to say that in the larger picture, given that the force $m\vec g$ comes from interaction with the earth, the components of the vector we are considering simply happen to be in the direction in which something of our interest is happening (the direction of the inclined plane in this context), and thus $mg\sin\theta$ and $mg\cos\theta$ themselves aren't individual forces, given that they haven't arisen from different sources. But as @AccidentalTaylorExpansion mentioned, this is indeed more of a philosophical question, of the same sort as asking "If I punch a person and my hand hurts, did they hurt me or did I hurt myself?" While the larger source of the action was you, the person being there obviously still was the cause of your hand hurting. At the end of the day, you might or might not treat components as human constructs, but they give mathematical form to the effect some vector quantity has in another direction, so they very much exist.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/701644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
Why is Avogadro constant used to calculate the number density? My book says: The number density of particles is $nN/V$, where $n$ is the total amount of substance (in moles) in the container of volume $V$ and $N$ is Avogadro's constant. I can do something with the concentration $n/V$: it tells me how many moles of particles I have in a certain volume, but why multiply by $N$? And another thing: what is the difference between number density (according to Wikipedia, $n/V$) and molar concentration $n/V$?
You are converting between the absolute number of particles per volume and moles of particles per volume. The total number of particles is obtained by multiplying the number of moles by Avogadro's constant. Let's say I have 1 mol per volume. How many particles in absolute numbers do I have per volume? To answer that question you multiply by Avogadro's number. An analogy would be converting between dozens per volume and absolute number per volume. Let's define the constant $d=\frac{12}{\text{[dozen]}}$. To convert a quantity specified in dozens per volume, you would simply multiply by the constant $d$ to obtain the absolute number of items per volume. Avogadro's number plays the same role: $N_A=\frac{6.02214076\cdot 10^{23}}{[\text{mol}]}$
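The two conversions, moles and dozens, follow exactly the same pattern, as a short sketch shows:

```python
N_A = 6.02214076e23   # Avogadro constant, particles per mol
DOZEN = 12            # the analogous constant: items per dozen

def particles_per_volume(molar_concentration):
    """mol per volume -> absolute number of particles per volume."""
    return molar_concentration * N_A

def items_per_volume(dozens_per_volume):
    """dozens per volume -> absolute number of items per volume."""
    return dozens_per_volume * DOZEN

print(particles_per_volume(1.0))   # 1 mol/L -> 6.02214076e23 particles/L
print(items_per_volume(2.5))       # 2.5 dozen/L -> 30.0 items/L
```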
{ "language": "en", "url": "https://physics.stackexchange.com/questions/702263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Difference between 2 points in a circuit I am asked to find the voltage difference between points A and B, namely $V_a - V_b$, of the following circuit: I don't exactly know how to approach this, however. Is it as simple as going counterclockwise from A to B (since this path doesn't have an opening) and counting the voltage changes, so +20V - 10V? Or am I missing something? Thanks for any help
I recommend you set up the potential-difference equations. From your schematic it follows that: $$\varphi_Y - \varphi_B = +10 \text{ V}$$ $$\varphi_Y - \varphi_A = +20 \text{ V}$$ From these you can easily find $\varphi_A - \varphi_B$.
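Carrying the subtraction out explicitly, with the voltage values as labelled above:

```python
# Potential-difference equations from the schematic:
#   phi_Y - phi_B = +10 V
#   phi_Y - phi_A = +20 V
v_YB = 10.0   # phi_Y - phi_B, volts
v_YA = 20.0   # phi_Y - phi_A, volts

# Subtracting eliminates the common node Y:
# (phi_Y - phi_B) - (phi_Y - phi_A) = phi_A - phi_B
v_AB = v_YB - v_YA
print(v_AB)   # -10.0, i.e. V_a - V_b = -10 V
```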
{ "language": "en", "url": "https://physics.stackexchange.com/questions/702409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is it possible to derive Navier-Stokes equations of fluid mechanics from the Standard Model? We know that the Standard Model is a theory of almost everything (except gravity), so it should be the basis of fluid mechanics, which is a macroscopic theory built from experience. Is it possible, then, to derive the equations of fluid mechanics from the Standard Model? If the answer is yes, please give a simple example. If the answer is no, what is the reason that prevents the derivation from being carried out?
The answer is no, here is why. The Standard Model lets us predict (among other things) experimental outcomes of tests run in particle accelerators, at length scales much smaller than a proton and truly gigantic energy scales (billions of electron volts), where the number of particles in the system is of order a few. It was not invented to tell us anything at all about the behavior of macroscopic objects like a bucketful of water/glycerine mixture, or honey flowing through pipes, or air flowing over a wing at supersonic speeds, where the typical length scale is of order one baseball diameter, the energy scale is of order a couple of electron volts, and the number of particles in the system is of order $10^{23}$. That said, if you had a superduper megacomputer that could model those $10^{23}$ particles individually and track their movements in 3-D space with one picosecond time resolution and one angstrom spatial resolution, you might be able to observe the emergence of macroscopic behavior patterns like viscosity, surface tension, heat capacity, shear stresses and so on, but then again you might not. That would be akin to painting the Golden Gate Bridge with a toothpick tip dipped in paint: not definitively ruled out by mathematics, but a fool's errand nonetheless.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/702695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 2 }
Why do the electric field lines not originate from a positive charge in the following situation? Consider two fixed positive point charges, each of magnitude $Q$, placed a finite distance apart. Let point $O$ be the midpoint between the two charges. We can see that the electric field at $O$ is zero, but for any other point on the perpendicular bisector of the two charges the electric field is nonzero, and it is directed along the perpendicular bisector, pointing away from point $O$. So it appears that there are electric field lines originating from point $O$. I have learnt that electric field lines originate only from positive charges, but there aren't any positive charges at point $O$, so where did I go wrong in my reasoning? A field line can only originate from one of the charges. But if we consider the field line along the perpendicular bisector, that field line is symmetric with respect to both charges, so how can we say that it originates from one of them?
Electric field lines do originate only at charges, but they can cancel out.

* Consider point O. From the top charge, a field line points downwards at this point. From the bottom charge, a field line points upwards at this point. Their effects cancel out and there is no net field at point O.
* Now look slightly to the side, horizontally from point O. At such a point the top charge causes a field at an angle, so downwards-and-a-bit-sideways. The bottom charge also causes a field at an angle, but upwards-and-a-bit-sideways. Now, note how the sideways components of the field from either charge are in the same direction, but the vertical components are opposite. The vertical components thus cancel out, whereas the sideways components add up. The final result is a net field directed perfectly horizontally, away from point O.

This does not mean that the field originates at point O; it is just the merged result of two fields originating at the charges, whose vertical parts happen to perfectly cancel out due to the ideal symmetry of this setup.
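This cancellation can be verified numerically. In the sketch below, the two equal charges sit at $(0,\pm d)$ and units are chosen so that $kQ = 1$ (all values are illustrative):

```python
def field_from(charge_pos, point):
    """Field at `point` of a unit positive charge at `charge_pos` (k*Q = 1)."""
    dx = point[0] - charge_pos[0]
    dy = point[1] - charge_pos[1]
    r3 = (dx**2 + dy**2) ** 1.5
    return (dx / r3, dy / r3)

def total_field(point, d=1.0):
    """Net field of two equal charges at (0, +d) and (0, -d)."""
    ex1, ey1 = field_from((0.0, +d), point)
    ex2, ey2 = field_from((0.0, -d), point)
    return (ex1 + ex2, ey1 + ey2)

print(total_field((0.0, 0.0)))   # (0.0, 0.0): perfect cancellation at O
print(total_field((0.5, 0.0)))   # vertical parts cancel; horizontal parts add
```

At the midpoint O the field vanishes exactly; at any point to the side along the bisector the vertical components cancel and only a horizontal field pointing away from O remains.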
{ "language": "en", "url": "https://physics.stackexchange.com/questions/702769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
More general propagator of a real scalar field I have some Lagrangian containing a real scalar field $\phi$ with mass $m$. Let $A \in \mathbb{R}$ be some constant. The Lagrangian takes the form: \begin{equation} \mathcal{L} = -\frac{A}{2} (\partial_\mu \phi)^2 - \frac{1}{2}m^2 \phi^2 + \mathcal{L}_{\phi \phi \phi} + \mathcal{L}_{\phi \phi \phi \phi}, \end{equation} where the last two terms indicate interaction terms. My question is whether it makes sense to compute the scattering amplitude for the case with $A = 0$?
* On one hand, if $A\equiv 0$, then the field is non-propagating, and one cannot construct a scattering theory. This is OP's case.
* On the other hand, if one makes a field redefinition $$ \phi^{\prime}~=~\sqrt{|A|}\phi, \quad m^{\prime}~=~\frac{m}{\sqrt{|A|}}, \quad g_3^{\prime}~=~\frac{g_3}{|A|^{3/2}}, \quad g_4^{\prime}~=~\frac{g_4}{|A|^2}, $$ and takes the limit $A\to 0$, then the field becomes infinitely massive, and the coupling constants become infinitely large.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/702921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Gravitational wave radiation power from dimensional analysis Let us try to find a formula for the power emitted through gravitational waves (GW) from a binary system in a quasi-circular orbit. The relevant quantities are Newton's constant $G_N$, the speed of light $c$, a mass scale $M$, and the orbital frequency $\omega$. So I write $P=G_N^a c^b M^c \omega^d$. Demanding that both sides have the same dimension gives \begin{align} P_{GW}=\frac{c^5}{G_N}\left(\frac{G_NM\omega}{c^3}\right)^d \end{align} with arbitrary exponent $d$. It turns out that $d=10/3$ from the quadrupole approximation. My question is the following: Is there a way to get $d=10/3$ from some sort of argument without having to get into the details of the computation? One possible answer is that we know from post-Newtonian theory that radiation effects start at $1/c^5$, and this implies $d=10/3$. But I thought maybe there is a better argument.
The short answer is no. The problem is that a simple dimensional analysis argument tells you that you need the power to be energy per unit time, given the mass $M$, frequency $\omega$, and $G$ and $c$. Well, the energy part is "easy" in the sense that $Mc^2$ has the right dimensions. However, the problem is that there are two quantities with dimensions of time \begin{equation} \frac{G M}{c^3}, \ \ \frac{1}{\omega} \end{equation} So dimensional analysis is only powerful enough to tell you that the answer must look like \begin{equation} P_{\rm GW} = \left(M c^2\right) \times \omega^d \times \left(\frac{c^3}{GM}\right)^{1-d} = \frac{c^5}{G} \left(\frac{GM\omega}{c^3}\right)^d \end{equation} for some $d$, as you said. Another way to express the same point is that dimensional analysis only fixes the dependence on dimensionful quantities, but there is a dimensionless ratio $GM \omega/c^3$ which dimensional analysis cannot help you with. In fact, in general the situation is even worse than this. If you allowed for the two objects in the binary to have different masses, there would even be another dimensionless quantity, which we can take to be the mass ratio $q=m_1/m_2$. The dependence on $q$ is also not fixed by dimensional analysis; doing a more careful calculation ends up telling you that the power depends on the chirp mass. And if you allow the black holes to have spins, you need even more associated dimensionless ratios to describe the system. So, the bottom line is that you do need information about the leading order scaling of the post-Newtonian corrections to fix $d$ (as well as mass ratio dependence, and spin dependence). The longer answer, though, is that it isn't too hard to get the leading order PN scaling to fix the frequency dependence, if you are willing to accept that gravity is a spin-2 field and so must couple to the quadrupole moment $Q$. (As @ProfRob pointed out in the comments, there are other ways you can justify the coupling to $Q$. 
For example: $Q$ is the lowest-order multipole that gravitational waves can couple to, given that the monopole and dipole moments correspond to the mass and linear momentum of the system, which are conserved and so cannot contribute to radiation). In order to estimate the frequency dependence, because of Kepler's laws, we also need to know the dependence on the size of the system, $R$. Given that $h \sim \ddot{Q}$ where $Q \sim R^2$, we have $h \sim \omega^2 R^2$. Kepler's laws give $R \sim \omega^{-2/3}$, so $h \sim \omega^{2/3}$. Therefore, $E \sim \int dt\, \dot{h}^2 \sim \omega^2 h^2/\omega \sim \omega^{7/3}$ and $P \sim \dot{E} \sim \omega E \sim \omega^{10/3}$.
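The exponent counting in this argument can be tracked mechanically, keeping the powers of $\omega$ as exact fractions:

```python
from fractions import Fraction

# Track powers of omega through the scaling chain above.
R_exp = Fraction(-2, 3)      # Kepler: R ~ omega^(-2/3)
Q_exp = 2 * R_exp            # quadrupole moment: Q ~ R^2
h_exp = 2 + Q_exp            # h ~ d^2Q/dt^2 ~ omega^2 Q  ->  omega^(2/3)
E_exp = 2 + 2 * h_exp - 1    # E ~ integral dt (dh/dt)^2 ~ omega^2 h^2 / omega
P_exp = 1 + E_exp            # P ~ dE/dt ~ omega E

print(h_exp, E_exp, P_exp)   # 2/3 7/3 10/3
```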
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Conservation of symmetrization in quantum mechanics I recently read about the symmetrization requirement, which my book states is axiomatic of quantum mechanics: $$ \psi(\mathbf r_1, \mathbf r_2) = \pm \psi(\mathbf r_2, \mathbf r_1). \tag{*} $$ It further states that if a system starts out in such a state, then it will remain in such a state. Is this conservation of symmetrization also an axiom, or can this be proven (like the conservation of normalization)? Moreover, does the time-dependent wave function $\Psi(\mathbf r_1,\mathbf r_2,t)$ also satisfy the symmetrization requirement? Intuitively, I don't see how it can follow from $(*)$ that $$ \Psi(\mathbf r_1,\mathbf r_2,t) = \sum_n \psi_n(\mathbf r_1, \mathbf r_2) e^{-iE_n t/\hbar} = \pm \Psi(\mathbf r_2,\mathbf r_1,t) = \pm \sum_n \psi_n(\mathbf r_2, \mathbf r_1) e^{-iE_n t/\hbar}, $$ and if the time-dependent wave function does not follow the symmetrization requirement, then what good is $(*)$?
If the Hamiltonian is symmetric in $r_1$ and $r_2$, then we can show that its eigenfunctions can be taken to be symmetric or antisymmetric. If we start with a symmetric state at some time $t_1$, then we can expand it over the symmetric eigenstates only. We see that it will remain symmetric at any other time $t$. The same is true if we start with an antisymmetric state, which will remain antisymmetric under time evolution.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determining initial conditions for central force question Question: A particle of unit mass is acted on by an attractive force of magnitude $k/r^2$ directed toward origin O. It is projected from infinity with speed $v$ along a line whose perpendicular distance from O is $d$. Find and sketch the path of the particle. I know that since we have a central force, $u''(\theta) + u(\theta) = k/h^2$ where $u = 1/r$ and $h$ is the angular momentum per unit mass, which is conserved. However, I am really unsure what my initial conditions are from the information given. I don't quite understand what being projected "from infinity" means, or how I am meant to determine $h$ or anything about $r(0)$ and $\dot r(0)$. I would appreciate some help as to how to understand the scenario and hence derive the initial conditions for my ODE.
You are given two quantities that are conserved throughout the motion: (a) The mechanical energy $E=\frac{1}{2}mv^2-\frac{k}{r}.$ (b) The angular momentum about the force center $L=mvd.$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Recommendations for Algebraic quantum mechanics book I am familiar with quantum mechanics and quantum information at the level of Sakurai and Preskill's lecture notes / Nielsen and Chuang. I want to study the $C^*$ algebraic formulation of quantum mechanics. Are there any good books on that? Mathematical rigour is not a problem, I am willing to push through it.
I can recommend "Araki: Mathematical Theory of Quantum Fields." The first two chapters describe an algebraic formulation of quantum mechanics (and can be read without knowledge of quantum field theory). The book begins with an outline/motivation of how an algebraic formulation of a quantum theory should look like, and then presents the mathematical theory in the second chapter. In my opinion, this text is the best compromise between a physical motivated and a rigorous mathematical presentation. In this book you find everything typically needed when working with an algebraic formulation of quantum mechanics including states and representations of a $C^*$-algebra, the GNS construction, symmetries from an algebraic point of view, etc. As prerequisites, you should know the standard quantum mechanical formulation and some functional analysis, but the book also has an appendix summarising the most important results from functional analysis required for the book.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/703992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does amplitude really go to infinity in resonance? I was recapping forced oscillations, and something troubled me. The equation concerning forced oscillation is: $$ x=\frac{F_0}{m(\omega_0^2-\omega^2)}\cos(\omega t) $$ I don't understand why this equation predicts that the amplitude will approach infinity as $\omega$ approaches $\omega_0$. One can make the argument that in the actual world there are damping forces, friction, etc. The trouble is, however, that even in the ideal world the amplitude wouldn't approach infinity, as the spring's restoring force would catch up with the driving force at some point, and the system would stay in equilibrium. What I'm wondering is:

* Is my suggestion in the last paragraph correct?
* If it is correct, what assumption led us to the erroneous model of $x$?
* If it is not correct, what am I missing?
It is instructive to analyze the problem in time domain. When the driving force is $F_0 \cos(\omega_0 t)$ and the mass is initially at rest at its equilibrium position, the solution is $$x =\frac{F_0}{2\omega_0m}t\sin(\omega_0t) $$ which represents oscillations that grow in amplitude over time. It isn't necessarily relevant how the magnitude of the restoring force compares to that of the driving force. Even for small amplitudes, in certain parts of the motion, the driving force will inevitably be smaller in magnitude than (or in the same direction as) the restoring force. This is why the mass periodically comes to a stop and starts accelerating toward the equilibrium position. It can be seen by integrating the driving force along the trajectory that the driving force does net work on the mass each period, causing the amplitude to grow every cycle. Another way to see that the mass will acquire energy over time is by integrating the sum of the driving and restoring forces between times $nT_0$ and $(n+1)T_0$ where $T_0=2\pi/\omega_0$ is the period. This gives the momentum change at the equilibrium position every cycle. Since the driving force is periodic, it causes no momentum change per cycle. However, the restoring force grows in magnitude and its integral is positive each cycle. This can be intuitively understood as follows. In the period between times $nT_0$ and $(n+1)T_0$, on average, the restoring force is smaller in magnitude for $x>0$ than for $x<0$, because of the growing amplitude. Therefore, there is a non-zero impulse transfer in the positive direction each cycle, causing the speed at $x=0$ to increase every cycle.
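The linearly growing envelope can also be checked by integrating the equation of motion directly. The sketch below uses unit parameters and a standard RK4 step (not taken from the answer, just a numerical cross-check of the closed-form solution):

```python
import math

# x'' + omega0^2 x = (F0/m) cos(omega0 t), with x(0) = x'(0) = 0,
# integrated with a classic RK4 step and compared against the
# closed-form resonant solution x = F0/(2 m omega0) * t * sin(omega0 t).
m, omega0, F0 = 1.0, 1.0, 1.0   # unit parameters, for illustration

def accel(t, x):
    return (F0 / m) * math.cos(omega0 * t) - omega0**2 * x

def rk4_step(t, x, v, h):
    k1x, k1v = v, accel(t, x)
    k2x, k2v = v + 0.5*h*k1v, accel(t + 0.5*h, x + 0.5*h*k1x)
    k3x, k3v = v + 0.5*h*k2v, accel(t + 0.5*h, x + 0.5*h*k2x)
    k4x, k4v = v + h*k3v, accel(t + h, x + h*k3x)
    return (x + h*(k1x + 2*k2x + 2*k3x + k4x)/6,
            v + h*(k1v + 2*k2v + 2*k3v + k4v)/6)

t, x, v, h = 0.0, 0.0, 0.0, 0.001
while t < 50.0:
    x, v = rk4_step(t, x, v, h)
    t += h

closed_form = F0 / (2 * m * omega0) * t * math.sin(omega0 * t)
print(x, closed_form)   # the two agree; the amplitude keeps growing with t
```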
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 9, "answer_id": 7 }
Formula For Finding Electric Field I am currently studying AP Physics and have a question about a formula. I know that from $E = \frac{F}{q}$, we get $dE = dF/q = \frac{1}{4 \pi \epsilon_0} \frac{dQ}{r^2}.$ However, I also see a formula: $$E = \frac{1}{4 \pi \epsilon_0} \int \frac{dq}{r^2} (\hat{x} \cos \theta + \hat{y} \sin{\theta})$$ and was wondering in what context it is useful. Also, how is this formula derived? Does it only work in certain circumstances?
Electric field and (Coulomb) force are vectors, meaning they have a direction. So from your first equation for the electric field you could have written $$d{\bf E}=\frac{1}{4\pi\epsilon_0}\frac{dQ}{r^2}{\bf\hat r}$$ where ${\bf\hat r}$ is a unit vector pointing in the direction of the electric field (or force). Now your main equation $$E = \frac{1}{4 \pi \epsilon_0} \int \frac{dq}{r^2} (\hat{x} \cos \theta + \hat{y} \sin{\theta})$$ could have been written $${\bf E} = \frac{1}{4 \pi \epsilon_0} \int \frac{dq}{r^2} {\bf\hat r}$$ where $${\bf\hat r}=\hat{x} \cos \theta + \hat{y} \sin{\theta}$$ is the same unit vector, but expressed in a two-dimensional Cartesian coordinate system. Note that ${\bf\hat r}=\frac{\bf r}{|{\bf r}|}$ and $|{\bf\hat r}|=1$, so the "usefulness" of this depends on the problem you are doing. Some authors will give you a problem in Cartesian coordinates and ask you to switch to polar, cylindrical or spherical coordinates, or maybe vice versa. A lot of computations in electrostatics have spherical symmetry, so the problem can be reduced to just one (the ${\bf r}$) coordinate. Others can be a little more subtle.
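To see the $(\hat{x}\cos\theta+\hat{y}\sin\theta)$ decomposition in action, here is a sketch (in Python; the charge $Q$ and radius $R$ are made-up illustrative values) that numerically integrates the field at the centre of a uniformly charged semicircular arc and compares it with the closed-form magnitude $E=2k_eQ/(\pi R^2)$:

```python
import math

# Field at the centre of a uniformly charged semicircular arc (theta from 0 to pi),
# built up from dE = k dq / R^2 times the unit vector (cos t, sin t).
# Q and R are arbitrary illustrative values, not from the original post.
k = 8.9875517923e9     # Coulomb constant, N m^2 / C^2
Q, R = 1e-9, 0.1       # 1 nC spread over an arc of radius 0.1 m
lam = Q / (math.pi * R)         # linear charge density
n = 20_000
dtheta = math.pi / n

Ex = Ey = 0.0
for i in range(n):
    t = (i + 0.5) * dtheta      # midpoint rule
    dq = lam * R * dtheta
    # the field at the origin points from the charge element toward the centre
    Ex += -k * dq / R**2 * math.cos(t)
    Ey += -k * dq / R**2 * math.sin(t)

E_exact = 2 * k * Q / (math.pi * R**2)
print(Ex, Ey)  # Ex cancels by symmetry; |Ey| matches the analytic magnitude
```

By symmetry the $\hat x$ components cancel, which is exactly what the $\cos\theta$ integral over $[0,\pi]$ tells you before doing any arithmetic.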
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the differential equations that model a self-propagating gravitational wave in space-time? Light is a self-propagating wave, but it's very complicated. Imagine, if you will, a wave in space-time that by assumption was self-propagating like light, except that it was a gravitational wave. What are the differential equations and boundary conditions that would govern the transfer of a wave between two absorption points? I'm familiar with differential equations, but not the specifics of differential geometry that might better address this.
Light as a self-propagating wave in vacuum is governed by the free Maxwell equations: $$\nabla^aF_{ab}=0$$ The electric and magnetic fields, which show the oscillatory behavior, are actually the components of the Maxwell tensor $F_{ab}$. One could ask: what are the analogues of $F_{ab}$ in GR? Note that one can express the vacuum Einstein field equations $R_{ab}=0$ as the divergence of the Weyl tensor: $$\nabla^aC_{abcd}=0$$ which looks quite similar to the free Maxwell equations. There are other representations of these vacuum equations; for instance, in the 2-spinor formalism one can define a free zero rest mass (z.r.m.) field equation for a general spin $n/2$, for example see here. However, unlike the free Maxwell equations, the vacuum Einstein equations $\nabla^aC_{abcd}=0$ are not exactly linear in $C_{abcd}$, since both the connection $\nabla^a$ and $C^{abcd}$ depend on the metric $g_{ab}$. Solutions of such non-linear differential equations are indeed hard to find. Some solutions for exact plane waves and spherical waves have been discussed here, here etc. If we are looking at small perturbations on a background Minkowski metric, then we can essentially linearize the vacuum equations as $$\partial^aK_{abcd}=0$$ where $K_{abcd}$ is the linearized Riemann curvature and is traceless (like the Weyl curvature). $K_{abcd}$ is invariant under a gauge transformation of the linearized metric only when we consider perturbations about the Minkowski metric.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Quantum and classical physics are reversible, yet quantum gates have to be reversible, whereas classical gates need not. Why? I've read in many books and articles that because Schrödinger's equation is reversible, quantum gates have to be reversible. OK. But classical physics is reversible, yet classical gates in classical computers are not all reversible! So the reversibility of Schrödinger's equation doesn't seem to be the right reason for quantum gates to be reversible. Or maybe it is because quantum computers compute "quantumly", whereas classical computers do not compute "Newtonly" but mathematically. Or, to put it another way: the classical equivalent of a "quantum computer" would be an "analog computer". But our computers are not analog computers. In an analog computer, the gates would have to be reversible. So in a way a quantum computer is an "analog quantum computer". But maybe I'm wrong. Thanks
Irreversibility comes from friction. Frictionless classical systems are (theoretically) reversible. Practical computer design exploits friction to remove unwanted perturbations to the computer's state. Imagine that 3V is supposed to represent a "1", but for some reason a "1" at a particular node in the circuit is at 2.5V. The circuitry drives the node toward 3V, while its friction damps out the oscillations around 3V that would occur in a frictionless, reversible circuit. In quantum systems, the analog of friction is called decoherence. For quantum computer designers, it's a problem. The whole point of quantum computing is that you theoretically get to explore a large number of state trajectories simultaneously in superposition. If the computer decoheres into a particular state prematurely, you lose the superposition. This is what makes quantum computers so fiddly: quantum computer designers cannot use irreversibility as a stabilizing tool.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 7, "answer_id": 3 }
Can positrons attract electrons? Now, it is established that positrons and electrons have the same mass but opposite charges. Since they have opposite charges, do they create a force of attraction and collide thus annihilating each other? Or do they just "happen" to interact? If an attraction exists, can it also occur through a conductor?
As per Coulomb's law, $$ |\mathbf {F} |=k_{\text{e}}{\frac {|q_{1}q_{2}|}{r^{2}}} $$ any charged particle $q_1$ affects any other charged particle $q_2$ (unless the distance between them is infinite). So the answer is yes, positrons are attracted by electrons, and upon collision they annihilate according to the reaction $$ e^{^+} + e^{^-} \to \gamma + \gamma $$ A pair of photons is produced due to momentum conservation. If, for example, a positron and an electron collide head-on with approximately the same momentum magnitude, then $$ \vec {p^{^+}} + \vec {p^{^-}} = 0 $$ must be satisfied. But if just one photon came out, it would carry non-zero momentum, contrary to what this equation requires, so in that case a pair of photons is produced going in opposite directions. (There are cases when just a single photon can be produced, but that involves a third charged particle to which the photon transfers the excess momentum, so the Feynman diagram would be a bit different.)
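For annihilation at rest, energy conservation fixes each photon's energy at the electron rest energy, $m_ec^2\approx 511\ \mathrm{keV}$. A quick check (the constants are standard CODATA values, rounded):

```python
# e+ e- annihilation at rest: the total energy 2 m_e c^2 is shared
# equally by the two back-to-back photons, so each carries m_e c^2.
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electron-volt

E_photon_J = m_e * c**2
E_photon_keV = E_photon_J / (1e3 * eV)
print(E_photon_keV)  # ~511 keV per photon
```

This 511 keV line is, incidentally, exactly what PET scanners detect.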
{ "language": "en", "url": "https://physics.stackexchange.com/questions/704940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 0 }
Why doesn’t horizon distance increase exactly in proportion to the height of the observer? For instance, if someone is 8 inches above the surface of the Earth, they can see approximately 1 mile to the horizon. However, if someone is viewing the horizon at an eye level of 5’5, they can only see about 3 miles out. If the height of the observer increases by a factor of about 8, from 8 inches to 65 inches, why does the distance they can see only increase by a factor of 3 (from 1 mile to 3 miles)?
On earth, the distance to the horizon, say $d_h$, and the height of an observer, say $h_o$, cannot have a linear relationship $$d_h=\text{constant}\cdot h_o$$ or the proportional relationship you speak of, since that would require the earth to have some geometry other than spherical. Instead, in reality, since the earth has curvature and is a sphere, you can show with a little trigonometry that $$d_h\propto \sqrt{h_o}$$ The exact relationship is $$d_h=r_e\cos^{-1}\left(\frac{r_e}{r_e+h_o}\right)$$ where $r_e$ is the radius of earth, and this equation is usually simplified to the approximate relationship $$d_h\approx 1.22\sqrt{h_o}$$ where $d_h$ is in miles and $h_o$ is in feet.
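Plugging the question's numbers into the exact formula confirms the square-root scaling (a sketch in Python; the mean Earth radius in miles is an assumed value, not from the original post):

```python
import math

R = 3958.8  # assumed mean Earth radius in miles

def horizon_miles(h_feet):
    """Arc distance to the horizon for an eye height h_feet above the surface."""
    h = h_feet / 5280.0                # feet -> miles
    return R * math.acos(R / (R + h))

d_8in = horizon_miles(8 / 12)      # eye height of 8 inches
d_5ft5 = horizon_miles(65 / 12)    # eye height of 5'5" = 65 inches
print(d_8in, d_5ft5)  # roughly 1 mile and 2.9 miles
```

An 8-fold increase in height gives a $\sqrt{8}\approx 2.8$-fold increase in horizon distance, matching the "about a factor of 3" observed in the question.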
{ "language": "en", "url": "https://physics.stackexchange.com/questions/705354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
No net charge in conductor or no charge at all? Electrostatics I understand that there can be no net charge in a conductor because any moving free electrons would induce a countering electric field that would then cause the net E field inside a conductor to be zero. I also understand that net charge is distributed on the surface of a conductor, because the surface charges all repel each other. But does this mean there is simply no net charge in a conductor? Or just no charge at all? Does all charge have to exist on the surface? Or only net charge?
Since conductors are made of atoms, which are made of particles with charge, there are still charges in conductors. There is no net charge within a conductor; any net charge resides on the surface(s).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/705887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Energy to momentum Is there any way to convert energy to motion IN SPACE? Let's say a satellite collects electric energy from the sun using a solar panel. Is it possible to convert it to linear motion? The only way I know to change linear motion in space is by throwing stuff out (ions, burned propellants, etc.).
Yes, light has momentum, so you could use your collected electrical energy to create a beam of light to propel the satellite. However, you need to bear two points in mind: firstly, the momentum carried by the beam (and hence the thrust) would be tiny compared with the energy taken to produce it, since a photon of energy $E$ carries momentum $p=E/c$; and secondly, the collection of light by a solar panel would involve the transfer of momentum from sunlight to the satellite, so you would be propelled away from the Sun in any event.
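The "tiny" part can be made quantitative: a perfectly collimated photon beam of power $P$ produces thrust $F = P/c$. A sketch with an assumed beam power of 1 kW (an illustrative number, not from the original post):

```python
# Photon-rocket thrust: each joule of light carries momentum 1/c kg m/s,
# so a beam of power P watts pushes back with force P/c newtons.
c = 2.99792458e8  # speed of light, m/s

P = 1000.0        # assumed beam power, W (1 kW from the solar panels)
F = P / c         # thrust, N
print(F)          # a few micronewtons for a full kilowatt of power
```

For comparison, chemical thrusters deliver newtons to meganewtons, which is why photon drives are only attractive where propellant mass, not thrust, is the binding constraint.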
{ "language": "en", "url": "https://physics.stackexchange.com/questions/706069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 0 }
Are there two work-energy theorems (rotational and translational) or just a single theorem for both? Suppose a body is able to rotate. If work is applied to it along a path $C$, the traditional work-kinetic energy theorem states that $$W_{\mathrm{translational}} = \int_{C} \vec{F} \cdot d\vec{r} = \Delta \left(\frac{m v^2}{2}\right)$$ But there is also the equivalent principle relating work done by a torque and the resulting change in rotational kinetic energy, $$W_\mathrm{rotational} = \int \vec{\tau} \cdot d\vec{\theta} = \Delta \left(\frac{I \omega^2}{2}\right)$$ My question is whether these two equations are valid separately for a body undergoing both translational and rotational motion, or is it only valid that the total work equals the total change in kinetic energy, i.e., $$W_{\mathrm{translational}} + W_\mathrm{rotational} = \Delta \left(\frac{m v^2}{2}\right) + \Delta \left(\frac{I \omega^2}{2}\right)$$ I hope the question is clear.
To make a clarification, I am unsure about relativistic effects, so I assume we are only considering non-relativistic speeds. To answer your question, it is important to define our translational and rotational kinetic energy. For a body undergoing both translational and rotational motion, a suitable definition is as follows. Translational kinetic energy: refers to the kinetic energy expression obtained when treating the object as a moving point mass, where the point is its center of mass. Rotational kinetic energy: refers to the kinetic energy expression obtained in an alternative inertial frame of reference where the total momentum of the object is 0 (or, as we commonly say, the centre-of-mass frame). Given these two explicit definitions, yes, both equations are separately valid for a body undergoing both motions.
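The "separately valid" claim can be checked on a concrete example: a uniform cylinder rolling without slipping down an incline (a sketch in Python with illustrative numbers). Friction supplies the torque, so its translational work enters the first theorem and its rotational work the second, and each balances independently:

```python
import math

# Illustrative values: a uniform cylinder released from rest on an incline.
m, R = 2.0, 0.05               # mass (kg) and radius (m)
g, alpha = 9.81, math.radians(30)
L = 1.5                        # distance travelled along the slope, m
I = 0.5 * m * R**2             # moment of inertia about the axis

# Rolling without slipping: a = g sin(alpha) / (1 + I/(m R^2)),
# and the friction force is f = I a / R^2
a = g * math.sin(alpha) / (1 + I / (m * R**2))
f = I * a / R**2
v = math.sqrt(2 * a * L)       # speed after distance L from rest
omega = v / R

W_trans = (m * g * math.sin(alpha) - f) * L   # net force along slope x distance
W_rot = (f * R) * (L / R)                     # torque x angle turned

dKE_trans = 0.5 * m * v**2
dKE_rot = 0.5 * I * omega**2
print(W_trans, dKE_trans)  # these match each other
print(W_rot, dKE_rot)      # and so do these, separately
```

Note that the two books balance separately even though a single force (friction) appears in both: it does negative translational work but equal-and-opposite-signed rotational work about the centre of mass.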
{ "language": "en", "url": "https://physics.stackexchange.com/questions/706701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Meaning of "$=$" in $\vec{F}=m\vec{a}$ (for example) I don't understand how the two could really be one and the same. E.g. we can exert forces $\vec{F}$ and $-\vec{F}$ on a body and its acceleration will not change. I don't think it makes sense to say that a body at rest is accelerating equally in all directions. So what does it mean to say that force and mass $\times$ acceleration are equal to each other? "For example" because I feel that my misunderstanding is more fundamental than just this.
I don't understand how the two could really be one and the same. E.g. we can exert forces $F$ and $-F$ on a body and it's acceleration will not change. $\vec{F}$ in the $\vec{F}=m\vec{a}$ is the net force acting on the body. In other words, Newton's second law of motion should be written $$\vec{F}_{net}=m\vec{a}$$ Hope this helps.
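The question's own example can be checked directly: the forces are summed componentwise before Newton's second law is applied (a trivial sketch, with made-up numbers):

```python
# Two opposite forces F and -F acting on the same body of mass m.
m = 3.0                               # kg, illustrative
forces = [(5.0, 2.0), (-5.0, -2.0)]   # newtons, illustrative

F_net = (sum(f[0] for f in forces), sum(f[1] for f in forces))
a = (F_net[0] / m, F_net[1] / m)
print(F_net, a)  # net force is zero, so the acceleration is zero
```

The body is not "accelerating equally in all directions"; the vector sum is taken first, and only that single net vector appears on the left of $\vec{F}_{net}=m\vec{a}$.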
{ "language": "en", "url": "https://physics.stackexchange.com/questions/706947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }