diff --git "a/train_with_llm_answers.csv" "b/train_with_llm_answers.csv" --- "a/train_with_llm_answers.csv" +++ "b/train_with_llm_answers.csv" @@ -360,7 +360,7 @@ In hierarchical systems the planets are arranged so that the system can be gravi Giant planets are found in mean-motion resonances more often than smaller planets. In interacting systems the planets' orbits are close enough together that they perturb the orbital parameters. In resonant systems the orbital periods of the planets are in integer ratios. In a star with a close-in hot Jupiter with another gas giant much further out, the star and hot Jupiter form a pair that appears as a single object to another planet that is far enough out. Other, as yet unobserved, orbital possibilities include: double planets; various co-orbital planets such as quasi-satellites, trojans and exchange orbits; and interlocking orbits maintained by precessing orbital planes. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance. Since 1992, over four thousand exoplanets have been discovered ( planets in planetary systems including multiple planetary systems as of ). -There are at least four sdB stars which may possess planetary systems. The Solar System could be described as weakly interacting- Some astronomers search for[SEP]Which of the following statements is true about the categorization of planetary systems according to their orbital dynamics?","['D', 'E', 'C']",1.0 +There are at least four sdB stars which may possess planetary systems. The Solar System could be described as weakly interacting- Some astronomers search for[SEP]Which of the following statements is true about the categorization of planetary systems according to their orbital dynamics?","['D', 'E', 'B']",1.0 What is the propagation constant in sinusoidal waves?,"The phase of the sinusoid varies with distance, which results in the propagation constant being a complex number, the imaginary part being caused by the phase change. 
==Alternative names== The term ""propagation constant"" is somewhat of a misnomer, as it usually varies strongly with ω. The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. Thus they are directly proportional to the frequency. :\alpha_d = \frac{\pi \sqrt{\varepsilon_r}}{\lambda} \tan \delta ===Optical fibre=== The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant. ==Phase constant== In electromagnetic theory, the phase constant, also called phase change constant, parameter or coefficient, is the imaginary component of the propagation constant for a plane wave. Note that in the field of transmission lines, the term transmission coefficient has a different meaning despite the similarity of name: it is the companion of the reflection coefficient. ==Definition== The propagation constant, symbol \gamma, for a given system is defined by the ratio of the complex amplitude at the source of the wave to the complex amplitude at some distance x, such that: : \frac{A_0}{A_x} = e^{\gamma x} Since the propagation constant is a complex quantity we can write: : \gamma = \alpha + i \beta\ where * \alpha, the real part, is called the attenuation constant * \beta, the imaginary part, is called the phase constant * i \equiv j \equiv \sqrt{ -1\ }\ ; j is more often used for electrical circuits. It is the real part of the propagation constant and is measured in nepers per metre. The propagation constant for conducting lines can be calculated from the primary line coefficients by means of the relationship : \gamma= \sqrt{ Z Y\ } where : Z = R + i\ \omega L\ , the series impedance of the line per unit length and, : Y = G + i\ \omega C\ , the shunt admittance of the line per unit length. 
===Plane wave=== The propagation factor of a plane wave traveling in a linear media in the x direction is given by P = e^{-\gamma x} where * \gamma = \alpha + i\ \beta = \sqrt{i\ \omega\ \mu\ (\sigma + i\ \omega \varepsilon)\ }\ * x = distance traveled in the x direction * \alpha =\ attenuation constant in the units of nepers/meter * \beta =\ phase constant in the units of radians/meter * \omega=\ frequency in radians/second * \sigma =\ conductivity of the media * \varepsilon = \varepsilon' - i\ \varepsilon''\ = complex permittivity of the media * \mu = \mu' - i\ \mu''\ = complex permeability of the media * i \equiv \sqrt{-1\ } The sign convention is chosen for consistency with propagation in lossy media. The propagation constant's value is expressed logarithmically, almost universally to the base e, rather than the more usual base 10 that is used in telecommunications in other situations. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next. Attenuation constant can be defined by the amplitude ratio :\left|\frac{A_0}{A_x}\right|=e^{\alpha x} The propagation constant per unit length is defined as the natural logarithm of the ratio of the sending end current or voltage to the receiving end current or voltage. ===Conductive lines=== The attenuation constant for conductive lines can be calculated from the primary line coefficients as shown above. Wavelength, phase velocity, and skin depth have simple relationships to the components of the propagation constant: \lambda = \frac {2 \pi}{\beta} \qquad v_p = \frac{\omega}{\beta} \qquad \delta = \frac{1}{\alpha} ==Attenuation constant== In telecommunications, the term attenuation constant, also called attenuation parameter or attenuation coefficient, is the attenuation of an electromagnetic wave propagating through a medium per unit distance from the source. 
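The plane-wave expression above lends itself to a quick numerical check. Below is an illustrative Python sketch (not part of the source passage; the material values are hypothetical round numbers) that evaluates \gamma = \sqrt{i\omega\mu(\sigma + i\omega\varepsilon)} and splits it into attenuation and phase constants:

```python
import cmath
import math

# Sketch (assumed values, not from the source): propagation constant of a
# plane wave in a lossy medium, gamma = sqrt(i*omega*mu*(sigma + i*omega*eps)).
MU0 = 4e-7 * math.pi   # permeability of free space, H/m
EPS0 = 8.854e-12       # permittivity of free space, F/m

def plane_wave_gamma(freq_hz, sigma, eps_r, mu_r=1.0):
    """Return (alpha, beta): attenuation (Np/m) and phase (rad/m) constants."""
    omega = 2 * math.pi * freq_hz
    gamma = cmath.sqrt(1j * omega * mu_r * MU0 * (sigma + 1j * omega * eps_r * EPS0))
    return gamma.real, gamma.imag

# Hypothetical lossless dielectric (sigma = 0, eps_r = 4) at 1 GHz:
alpha, beta = plane_wave_gamma(1e9, sigma=0.0, eps_r=4.0)
# With sigma = 0 the wave is unattenuated and beta reduces to omega*sqrt(mu*eps).
```

For a lossless medium the attenuation constant comes out zero and the phase constant matches \beta = \omega\sqrt{\mu\varepsilon}, consistent with the relations \lambda = 2\pi/\beta and v_p = \omega/\beta quoted above.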
The term sinusoidal thereby collectively refers to both sine waves and cosine waves with any phase offset. == Occurrence == [Figure: the cosine wave's fundamental relationship to the circle.] [Figure: a 3D complex plane model for visualizing translation between domains.] This wave pattern occurs often in nature, including wind waves, sound waves, and light waves. These include transmission parameter, transmission function, propagation parameter, propagation coefficient and transmission constant. It represents the change in phase per unit length along the path travelled by the wave at any instant and is equal to the real part of the angular wavenumber of the wave. In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc. ===Cascaded networks=== The ratio of output to input voltage for each network is given by (Matthaei et al., pp. 51–52): :\frac{V_1}{V_2}=\sqrt{\frac{Z_{I1}}{Z_{I2}}}e^{\gamma_1} :\frac{V_2}{V_3}=\sqrt{\frac{Z_{I2}}{Z_{I3}}}e^{\gamma_2} :\frac{V_3}{V_4}=\sqrt{\frac{Z_{I3}}{Z_{I4}}}e^{\gamma_3} The terms \sqrt{\frac{Z_{In}}{Z_{Im}}} are impedance scaling terms (Matthaei et al., pp. 37–38) and their use is explained in the image impedance article. The imaginary phase constant, \beta, can be added directly to the attenuation constant, \alpha, to form a single complex number that can be handled in one mathematical operation provided they are to the same base. This property leads to its importance in Fourier analysis and makes it acoustically unique. 
== General form == In general, the function may also have: * a spatial variable x that represents the position on the dimension on which the wave propagates, and a characteristic parameter k called wave number (or angular wave number), which represents the proportionality between the angular frequency ω and the linear speed (speed of propagation) ν; * a non-zero center amplitude D, which is *y(x, t) = A\sin(kx - \omega t + \varphi) + D, if the wave is moving to the right *y(x, t) = A\sin(kx + \omega t + \varphi) + D, if the wave is moving to the left. The formula of a sinusoidal plane wave can be written in several other ways: *: F(\vec x,t)=A \cos (2\pi[(\vec x \cdot \hat n)/\lambda - t/T] + \varphi) :Here \lambda = 1/\nu is the wavelength, the distance between two wavefronts where the field is equal to the amplitude A; and T = \lambda/c is the period of the field's variation over time, seen at any fixed point in space. A sine wave, sinusoidal wave, or just sinusoid is a mathematical curve defined in terms of the sine trigonometric function, of which it is the graph. The phase velocity equals :v_p=\frac{\omega}{\beta}=\frac{c}{\sqrt{1-\frac{\omega_\mathrm{c}^2}{\omega^2}}}>c ==Filters and two-port networks== The term propagation constant or propagation function is applied to filters and other two-port networks used for signal processing. 
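The relationship \gamma = \sqrt{ZY} from the definition section, together with \lambda = 2\pi/\beta and v_p = \omega/\beta, can be sketched numerically. This is an illustrative example only; the per-metre RLGC values below are hypothetical round numbers, not taken from the source:

```python
import cmath
import math

# Hedged sketch of gamma = sqrt(Z*Y) for a conductive line, with
# Z = R + i*omega*L (series impedance) and Y = G + i*omega*C (shunt
# admittance) per unit length, as in the text. RLGC values are made up.
def line_gamma(freq_hz, R, L, G, C):
    """Return the complex propagation constant for a uniform line."""
    omega = 2 * math.pi * freq_hz
    Z = R + 1j * omega * L   # series impedance per metre
    Y = G + 1j * omega * C   # shunt admittance per metre
    return cmath.sqrt(Z * Y)

gamma = line_gamma(1e6, R=0.1, L=250e-9, G=0.0, C=100e-12)
alpha, beta = gamma.real, gamma.imag          # Np/m and rad/m
wavelength = 2 * math.pi / beta               # lambda = 2*pi/beta
phase_velocity = 2 * math.pi * 1e6 / beta     # v_p = omega/beta
```

With these values the line is nearly lossless, so the phase velocity lands close to 1/sqrt(LC) = 2e8 m/s, as the quoted relations predict.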
",The propagation constant is a measure of the amplitude of the sinusoidal wave that varies with distance.,The propagation constant is a real number that remains constant with distance due to the phase change in the sinusoidal wave.,The propagation constant is a real number that varies with distance due to the phase change in the sinusoidal wave.,The propagation constant is a complex number that varies with distance due to the phase change in the sinusoidal wave.,The propagation constant is a complex number that remains constant with distance due to the phase change in the sinusoidal wave.,D,kaggle200,"Wavelength, phase velocity, and skin depth have simple relationships to the components of the propagation constant: The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant. The propagation constant is a useful concept in filter design, which invariably uses a cascaded section topology. In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc. @@ -496,7 +496,7 @@ Fibrous rings The right and left fibrous rings of heart (annuli fibrosi cordis) The term cardiac skeleton is sometimes considered synonymous with endomysium in the heart, but cardiac skeleton also refers to the combination of the endomysium and perimysium. In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through them. Simply put, the dense connective tissue within the cardiac skeleton does not conduct electricity and its deposition within the myocardial matrix is not accidental. 
In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through them. This provides crucial support and structure to the heart while also serving to electrically isolate the atria from the ventricles. The unique matrix of connective tissue within the cardiac skeleton isolates electrical influence within these defined chambers. The cardiac skeleton ensures that the electrical and autonomic energy generated above is ushered below and cannot return. The cardiac skeleton separates and partitions the atria (the smaller, upper two chambers) from the ventricles (the larger, lower two chambers). The heart's cardiac skeleton comprises four dense connective tissue rings that encircle the mitral and tricuspid atrioventricular (AV) canals and extend to the origins of the pulmonary trunk and aorta. Understood as such, the cardiac skeleton efficiently centers and robustly funnels electrical energy from the atria to the ventricles. -Fibrous rings The right and left fibrous rings of heart (annuli fibrosi cordis) surround the atrioventricular and arterial orifices. The cardiac skeleton does this by [SEP]What is the function of the fibrous cardiac skeleton?","['C', 'B', 'E']",1.0 +Fibrous rings The right and left fibrous rings of heart (annuli fibrosi cordis) surround the atrioventricular and arterial orifices. The cardiac skeleton does this by [SEP]What is the function of the fibrous cardiac skeleton?","['C', 'B', 'D']",1.0 What is the Carnot engine?,"Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs. 
There are other practical requirements that make the Carnot cycle hard to realize (e.g., fine control of the gas, and thermal contact with the surroundings, including the high and low temperature reservoirs), so the Carnot engine should be thought of as the theoretical limit of macroscopic scale heat engines rather than a practical device that could ever be built. ==See also== * Carnot heat engine * Reversible process (thermodynamics) ==References== ;Notes ;Sources :* Carnot, Sadi, Reflections on the Motive Power of Fire :* Ewing, J. A. (1910) The Steam-Engine and Other Engines edition 3, page 62, via Internet Archive :* American Institute of Physics, 2011. This is the Carnot heat engine working efficiency definition as the fraction of the work done by the system to the thermal energy received by the system from the hot reservoir per cycle. The Carnot engine is the most efficient heat engine which is theoretically possible. By Carnot's theorem, it provides an upper limit on the efficiency of any classical thermodynamic engine during the conversion of heat into work, or conversely, the efficiency of a refrigeration system in creating a temperature difference through the application of work to the system. A quantum Carnot engine is one in which the atoms in the heat bath are given a small bit of quantum coherence. Carnot defined work as “weight lifted through a height”. ==Carnot cycle== [Figure 2: A Carnot cycle acting as a heat engine, illustrated on a temperature-entropy diagram.] The Carnot cycle when acting as a heat engine consists of the following steps: # Reversible isothermal expansion of the gas at the ""hot"" temperature, T_H (isothermal heat addition or absorption). Hence, the efficiency of the real engine is always less than that of the ideal Carnot engine. 
In a Carnot cycle, a system or engine transfers energy in the form of heat between two thermal reservoirs at temperatures T_H and T_C (referred to as the hot and cold reservoirs, respectively), and a part of this transferred energy is converted to the work done by the system. A Carnot cycle is an ideal thermodynamic cycle proposed by French physicist Sadi Carnot in 1824 and expanded upon by others in the 1830s and 1840s. At this point the gas is in the same state as at the start of step 1. == Carnot's theorem == Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs. \eta_{I}=\frac{W}{Q_{\mathrm{H}}}=1-\frac{T_{\mathrm{C}}}{T_{\mathrm{H}}} This maximum efficiency \eta_\text{I} is defined as above: : W is the work done by the system (energy exiting the system as work), : Q_\text{H} is the heat put into the system (heat energy entering the system), : T_\text{C} is the absolute temperature of the cold reservoir, and : T_\text{H} is the absolute temperature of the hot reservoir. In a footnote, Carnot distinguishes the steam-engine (machine à vapeur) from the heat-engine in general. The work W done by the system or engine to the environment per Carnot cycle depends on the temperatures of the thermal reservoirs and the entropy transferred from the hot reservoir to the system \Delta S per cycle, such that W = (T_H - T_C) \Delta S = (T_H - T_C) \frac{Q_H}{T_H}, where Q_H is heat transferred from the hot reservoir to the system per cycle. ==Stages== A Carnot cycle as an idealized thermodynamic cycle performed by a heat engine (Carnot heat engine) consists of the following steps. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine. 
== Carnot's diagram == In the adjacent diagram, from Carnot's 1824 work, Reflections on the Motive Power of Fire, there are ""two bodies A and B, kept each at a constant temperature, that of A being higher than that of B."" This thermal energy is the cycle initiator. === Reversed Carnot cycle === The Carnot heat-engine cycle described is a totally reversible cycle. The first prototype of the diesel engine was based on the Carnot cycle. == Carnot heat engine as an impractical macroscopic construct == A Carnot heat engine is a heat engine performing a Carnot cycle, and its realization on a macroscopic scale is impractical. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient. A Carnot heat engine (in French, Carnot uses machine à feu, which Thurston translates as heat-engine or steam-engine). ",The Carnot engine is a theoretical engine that operates in the limiting mode of extreme speed known as dynamic. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,The Carnot engine is an ideal heat engine that operates in the limiting mode of extreme slowness known as quasi-static. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,The Carnot engine is a real heat engine that operates in the limiting mode of extreme speed known as dynamic. It represents the theoretical minimum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,The Carnot engine is a theoretical engine that operates in the limiting mode of extreme slowness known as quasi-static. 
It represents the theoretical minimum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,The Carnot engine is a real engine that operates in the limiting mode of extreme slowness known as quasi-static. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,B,kaggle200,"Carnot's theorem states that all heat engines operating between the same two thermal or heat reservoirs can't have efficiencies greater than a reversible heat engine operating between the same reservoirs. A corollary of this theorem is that every reversible heat engine operating between a pair of heat reservoirs is equally efficient, regardless of the working substance employed or the operation details. Since a Carnot heat engine is also a reversible engine, the efficiency of all the reversible heat engines is determined as the efficiency of the Carnot heat engine that depends solely on the temperatures of its hot and cold reservoirs. Carnot's theorem is a formal statement of this fact: ""No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs."" Thus, Equation gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that: ""All reversible engines operating between the same heat reservoirs are equally efficient."" Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. 
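The maximum-efficiency statement above, \eta = 1 - T_C/T_H, is simple enough to sketch numerically. A minimal illustration; the reservoir temperatures are arbitrary example values, not from the source:

```python
# Minimal sketch of the Carnot efficiency quoted in the text:
# eta = 1 - T_C/T_H = (T_H - T_C)/T_H, with absolute temperatures in kelvin.
def carnot_efficiency(t_hot, t_cold):
    if not 0 < t_cold < t_hot:
        raise ValueError("need 0 < T_C < T_H (absolute temperatures)")
    return 1.0 - t_cold / t_hot

# A hypothetical engine between 600 K and 300 K can convert at most half
# of the input heat to work, whatever the working substance.
eta = carnot_efficiency(600.0, 300.0)

# Lowering T_C by 50 K raises the ceiling more than raising T_H by 50 K:
assert carnot_efficiency(600.0, 250.0) > carnot_efficiency(650.0, 300.0)
```

The final assertion illustrates the asymmetry between the two reservoir temperatures that the surrounding text points out.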
Looking at this formula, an interesting fact becomes apparent: Lowering the temperature of the cold reservoir will have more effect on the ceiling efficiency of a heat engine than raising the temperature of the hot reservoir by the same amount. In the real world, this may be difficult to achieve since the cold reservoir is often an existing ambient temperature. For any heat engine, the exergy efficiency compares a given cycle to a Carnot heat engine with the cold side temperature in equilibrium with the environment. Note that a Carnot engine is the most efficient heat engine possible, but not the most efficient device for creating work. Fuel cells, for instance, can theoretically reach much higher efficiencies than a Carnot engine; their energy source is not thermal energy and so their exergy efficiency does not compare them to a Carnot engine. @@ -608,7 +608,7 @@ Thus in 1856 Norman Pogson of Oxford proposed that a logarithmic scale of 5√100 Scaled rounding This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. However, musical scales are based on a logarithmic scale for frequencies. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale. If only the ordinate or abscissa is scaled logarithmically, the plot is referred to as a semi-logarithmic plot. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale. Examples include the loudness of a sound (measured in decibels), the brightness of a star, and the Richter scale of earthquake intensity. Logarithmic magnitudes can be negative. Every interval of one magnitude equates to a variation in brightness of 5√100, or roughly 2.512 times. Therefore, we should describe the frequency in a logarithmic scale related to human hearing. 
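Pogson's ratio described above (each magnitude step is a brightness factor of 100^(1/5) ≈ 2.512, so five magnitudes are exactly a factor of 100) is easy to check numerically. An illustrative sketch:

```python
# Sketch of the logarithmic magnitude scale from the text: each step of
# one magnitude corresponds to a brightness ratio of 100**(1/5) ~ 2.512.
def brightness_ratio(delta_mag):
    """Brightness ratio corresponding to a magnitude difference."""
    return 100.0 ** (delta_mag / 5.0)

ratio_one_mag = brightness_ratio(1.0)    # ~2.512
ratio_five_mag = brightness_ratio(5.0)   # exactly 100
```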
The size of commas is commonly expressed and compared in terms of cents – fractions of an octave on a logarithmic scale. -This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. It displays frequencies on a uniform scale- Although the spectrogram is profoundly useful, it still has one drawback[SEP]What is the reason behind the adoption of a logarithmic scale of 5√100 ≈ 2.512 between magnitudes in astronomy?","['A', 'C', 'B']",1.0 +This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. It displays frequencies on a uniform scale- Although the spectrogram is profoundly useful, it still has one drawback[SEP]What is the reason behind the adoption of a logarithmic scale of 5√100 ≈ 2.512 between magnitudes in astronomy?","['A', 'C', 'E']",1.0 What is the spin quantum number?,"In physics, the spin quantum number is a quantum number (designated s) that describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particle. The phrase spin quantum number was originally used to describe the fourth of a set of quantum numbers (the principal quantum number n, the azimuthal quantum number ℓ, the magnetic quantum number m_l, and the spin magnetic quantum number m_s), which completely describe the quantum state of an electron in an atom. At a more advanced level where quantum mechanical operators or coupled spins are introduced, s is referred to as the spin quantum number, and m_s is described as the spin magnetic quantum number or as the z-component of spin. In atomic physics, a magnetic quantum number is a quantum number used to distinguish quantum states of an electron or other particle according to its angular momentum along a given axis in space. 
Some introductory chemistry textbooks describe m_s as the spin quantum number, and s is not mentioned since its value is a fixed property of the electron, sometimes using the variable s in place of m_s. The spin magnetic quantum number specifies the z-axis component of the spin angular momentum for a particle having spin quantum number s. Other magnetic quantum numbers are similarly defined, such as m_j for the z-axis component of the total electronic angular momentum, and m_I for the nuclear spin. The direction of spin is described by spin quantum number. * The particles having integral value (0, 1, 2...) of spin are called bosons. == Magnetic nature of atoms and molecules == The spin quantum number helps to explain the magnetic properties of atoms and molecules. Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electron. The component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written m_s. The azimuthal quantum number is the second of a set of quantum numbers that describe the unique quantum state of an electron (the others being the principal quantum number n, the magnetic quantum number m_l, and the spin quantum number m_s). 
The magnetic quantum number determines the energy shift of an atomic orbital due to an external magnetic field (the Zeeman effect) -- hence the name magnetic quantum number. Nuclear-spin quantum numbers are conventionally written for spin, and or for the -axis component. Quantum numbers often describe specifically the energy levels of electrons in atoms, but other possibilities include angular momentum, spin, etc. As a result of the different basis that may be arbitrarily chosen to form a complete set of commuting operators, different sets of quantum numbers may be used for the description of the same system in different situations. ==Electron in an atom== Four quantum numbers can describe an electron in an atom completely: *Principal quantum number () *Azimuthal quantum number () *Magnetic quantum number () *Spin quantum number () The spin–orbital interaction, however, relates these numbers. ",The spin quantum number is a measure of the distance between an elementary particle and the nucleus of an atom.,The spin quantum number is a measure of the size of an elementary particle.,The spin quantum number is a measure of the charge of an elementary particle.,The spin quantum number is a measure of the speed of an elementary particle's rotation around some axis.,"The spin quantum number is a dimensionless quantity obtained by dividing the spin angular momentum by the reduced Planck constant ħ, which has the same dimensions as angular momentum.",E,kaggle200,"In atomic physics, the spin quantum number is a quantum number (designated ) which describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particle. The phrase was originally used to describe the fourth of a set of quantum numbers (the principal quantum number , the azimuthal quantum number , the magnetic quantum number , and the spin quantum number ), which completely describe the quantum state of an electron in an atom. 
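The quantum-number bookkeeping above can be illustrated with a short sketch. This is standard quantum mechanics rather than code from the source, and the helper names are made up for illustration: the magnitude of the spin angular momentum is sqrt(s(s+1))ħ, and m_s runs from -s to +s in integer steps.

```python
import math

# Illustrative sketch (standard quantum mechanics, names hypothetical).
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def spin_magnitude(s):
    """Magnitude of the spin angular momentum, sqrt(s*(s+1)) * hbar."""
    return math.sqrt(s * (s + 1)) * HBAR

def m_s_values(s):
    """All allowed projections m_s = -s, -s+1, ..., +s (2s+1 of them)."""
    n = int(round(2 * s)) + 1
    return [-s + k for k in range(n)]

# An electron has s = 1/2, so m_s is either -1/2 or +1/2:
electron_states = m_s_values(0.5)   # [-0.5, 0.5]
```

For integer spin (bosons, as noted above) the same rule yields an odd number of projections, e.g. three values for s = 1.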
The name comes from a physical spinning of the electron about an axis, as proposed by Uhlenbeck and Goudsmit. The value of m_s is the component of spin angular momentum parallel to a given direction (the z–axis), which can be either +1/2 or –1/2 (in units of the reduced Planck constant). In general, the values of m_s range from −s to +s, where s is the spin quantum number, associated with the particle's intrinsic spin angular momentum: where m_s is the secondary spin quantum number, ranging from −s to +s in steps of one. This generates 2s + 1 different values of m_s. @@ -668,7 +668,7 @@ In the following table, it is assumed that for formula_1 the receiver and the so 37 Cancri is a star in the zodiac constellation of Cancer. Before this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motion. The observed frequency shift is a good indicator of the velocity of the illuminated moving particles. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red","The star is moving away from the Earth with a heliocentric radial velocity of +22 km/s, having come as close as some 2.7 million years ago. 
The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effect. In 1887, Vogel and Scheiner discovered the annual Doppler effect, the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the Earth. In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by this method. Only later was Doppler vindicated by verified redshift observations. Doppler correctly predicted that the phenomenon should apply to all waves, and in particular suggested that the varying colors of stars could be attributed to their motion with respect to the Earth. The effect is named after Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842. In particular, this has been used to determine the velocity distribution of interstellar gas clouds. In the following table, it is assumed that for formula_1 the receiver and the source are moving away from each other, formula_2 being the relative velocity and formula_3 the speed of light, and formula_4. 
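The use of spectral-line shifts to measure stellar radial velocity, as described above, follows the small-shift Doppler relation Δλ/λ ≈ v/c for v much less than c. A hedged sketch; the numbers are illustrative, chosen in the spirit of the 0.1 Å solar-rotation figure quoted in the text:

```python
# Sketch of the non-relativistic Doppler relation for radial velocities:
# delta_lambda / lambda_rest ~ v / c for v << c (positive v = receding).
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(delta_lam, lam_rest):
    """Velocity in m/s from a small wavelength shift (same units cancel)."""
    return C * delta_lam / lam_rest

# Hypothetical example: a 0.1 angstrom shift on a line near 6563 angstroms.
v = radial_velocity(0.1, 6563.0)   # a few km/s
```

The result is of order a few km/s, the right scale for rotation- and orbit-induced shifts discussed in the passage.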
-37 Cancri is a star in the zodiac constellation of Cancer. Before this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motion. The observed frequency shift is a good indicator of the velocity of the illuminated moving particles. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red[SEP]Who was the first to determine the velocity of a star moving away from the Earth using the Doppler effect?","['C', 'E', 'B']",0.3333333333333333 +37 Cancri is a star in the zodiac constellation of Cancer. Before this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motion. The observed frequency shift is a good indicator of the velocity of the illuminated moving particles. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red[SEP]Who was the first to determine the velocity of a star moving away from the Earth using the Doppler effect?","['C', 'D', 'E']",0.0 What is the information loss paradox in black holes?,"This perspective holds that Hawking's computation is reliable until the final stages of black-hole evaporation when information suddenly escapes. The information paradox appears when one considers a process in which a black hole is formed through a physical process and then evaporates away entirely through Hawking radiation. It is now generally believed that information is preserved in black-hole evaporation. Starting in the mid-1970s, Stephen Hawking and Jacob Bekenstein put forward theoretical arguments that suggested that black-hole evaporation loses information, and is therefore inconsistent with unitarity. 
As explained above, one way to frame the information paradox is that Hawking's calculation appears to show that the von Neumann entropy of Hawking radiation increases throughout the lifetime of the black hole. Moreover, the argument for information loss relied on the causal structure of the black-hole spacetime, which suggests that information in the interior should not affect any observation in the exterior including observations performed on the radiation emitted by the black hole. On the other hand, this idea implies that just before the sudden escape of information, a very small black hole must be able to store an arbitrary amount of information and have a very large number of internal states. According to the external observer, infalling information heats up the stretched horizon, which then reradiates it as Hawking radiation, with the entire evolution being unitary. *Information is stored in a large remnant This idea suggests that Hawking radiation stops before the black hole reaches the Planck size. Taken together these puzzles about black hole evaporation have implications for how gravity and quantum mechanics must be combined, leading to the information paradox remaining an active field of research within quantum gravity. == Relevant principles == In quantum mechanics, the evolution of the state is governed by the Schrödinger equation. Therefore Hawking's argument suggests that the process of black-hole evaporation cannot be described within the framework of unitary evolution. Within, what might be termed, the loop-quantum-gravity approach to black holes, it is believed that understanding this phase of evaporation is crucial to resolving the information paradox. Since the black hole never evaporates, information about its initial state can remain inside the black hole and the paradox disappears. 
However, if the black hole formed from a pure state with zero entropy, unitarity implies that the entropy of the Hawking radiation must decrease back to zero once the black hole evaporates completely. Once the black holes evaporate completely, in both cases, one will be left with a featureless gas of radiation. Hawking argued that the process of radiation would continue until the black hole had evaporated completely. In 2004, Hawking also conceded the 1997 bet, paying Preskill with a baseball encyclopedia ""from which information can be retrieved at will"" although Thorne refused to concede. == Solutions == Since the 1997 proposal of the AdS/CFT correspondence, the predominant belief among physicists is that information is indeed preserved in black hole evaporation. Hawking also argued that the detailed form of the radiation would be independent of the initial state of the black hole, and would depend only on its mass, electric charge and angular momentum. These scenarios are broadly called remnant scenarios since information does not emerge gradually but remains in the black-hole interior only to emerge at the end of black-hole evaporation. Therefore, Hawking argued that if the star or material that collapsed to form the black hole started in a specific pure quantum state, the process of evaporation would transform the pure state into a mixed state. ","Black holes have an infinite number of internal parameters, so all the information about the matter that went into forming the black hole is preserved. Regardless of the type of matter which goes into a black hole, it appears that all the information is conserved. As black holes evaporate by emitting Hawking radiation, the information is lost forever.","Black holes have an infinite number of internal parameters, so all the information about the matter that went into forming the black hole is preserved. Regardless of the type of matter which goes into a black hole, it appears that all the information is conserved. 
As black holes evaporate by emitting Hawking radiation, the information is lost temporarily but reappears once the black hole has fully evaporated.","Black holes have only a few internal parameters, so most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As black holes evaporate by emitting Hawking radiation, the information is lost forever.","Black holes have only a few internal parameters, so most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As black holes evaporate by emitting Hawking radiation, the information is preserved and reappears once the black hole has fully evaporated.","Black holes have only a few internal parameters, so most of the information about the matter that went into forming the black hole is preserved. Regardless of the type of matter which goes into a black hole, it appears that all the information is conserved. As black holes evaporate by emitting Hawking radiation, the information is preserved and reappears once the black hole has fully evaporated.",C,kaggle200,"Hawking radiation is black-body radiation that is predicted to be released by black holes, due to quantum effects near the event horizon. This radiation reduces the mass and energy of black holes, causing them to shrink and ultimately vanish. If black holes evaporate via Hawking radiation, a non-rotating and uncharged stupendously large black hole with a mass of will evaporate in around . Black holes formed during the predicted collapse of superclusters of galaxies in the far future with would evaporate over a timescale of up to . 
In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature ħc³/(8π""GM""k_B); this effect has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. There are three reasons the black holes in the ELIRGs could be massive. First, the embryonic black holes might be bigger than thought possible. Second, the Eddington limit was exceeded. When a black hole feeds, gas falls in and heats, emitting light. The pressure of the emitted light forces the gas outward, creating a limit to how fast the black hole can continuously absorb matter. If a black hole broke this limit, it could theoretically increase in size at a fast rate. Black holes have previously been observed breaking this limit; the black hole in the study would have had to repeatedly break the limit to grow this large. Third, the black holes might just be bending this limit, absorbing gas faster than thought possible, if the black hole is not spinning fast. If a black hole spins slowly, it will not repel its gas absorption as much. A slow-spinning black hole can absorb more matter than a fast-spinning black hole. The massive black holes in ELIRGs could be absorbing matter for a longer time. 
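The inverse temperature–mass relation described in this passage, T = ħc³/(8πGMk_B), can be checked numerically. A minimal sketch with approximate CODATA constants (the helper name and solar-mass example are mine):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23       # Boltzmann constant, J/K
M_SUN = 1.989e30         # solar mass, kg (approximate)

def hawking_temperature(mass_kg: float) -> float:
    """Hawking temperature of a Schwarzschild black hole: T = hbar*c^3 / (8*pi*G*M*k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# A solar-mass black hole comes out around 6e-8 K -- far colder than the CMB --
# and doubling the mass halves the temperature, so large black holes radiate less.
t_sun = hawking_temperature(M_SUN)
```

This makes the passage's claim concrete: evaporation only dominates once a black hole is small enough to be hotter than its surroundings.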
@@ -943,7 +943,7 @@ Atomristor Atomristor is defined as the electrical devices showing memristive be Layered memristor In 2014, Bessonov et al. These weights are adjusted during the training process, allowing the network to learn and","In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws. Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets. These atomristors offer forming-free switching and both unipolar and bipolar operation. High switching performance, demonstrated synaptic plasticity and sustainability to mechanical deformations promise to emulate the appealing characteristics of biological neural systems in novel computing technologies. Atomristor Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets. - In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. In the Akinwande group at the University of Texas, a universal memristive effect in single-layer TMD (MX2, M = Mo, W; and X = S, Se) atomic sheets based on vertical metal-insulator-metal (MIM) device structure was first reported. The network consists of layers of memristive devices, each of which is connected to other layers through a set of weights. Afterwards, taking advantage of the low ""on"" resistance and large on/off ratio, a high-performance zero-power RF switch is proved based on MoS2 or
h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systems. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws. 
-Layered memristor In 2014, Bessonov et al. These weights are adjusted during the training process, allowing the network to learn and[SEP]What is the definition of Atomristor?","['C', 'A', 'D']",1.0 
+Layered memristor In 2014, Bessonov et al. These weights are adjusted during the training process, allowing the network to learn and[SEP]What is the definition of Atomristor?","['C', 'D', 'A']",1.0 Who published the first theory that was able to encompass previously separate field theories to provide a unifying theory of electromagnetism?","Maxwell's equations for electromagnetism have been called the ""second great unification in physics"" where the first one had been realised by Isaac Newton. Chapters six through eight present the development of electromagnetism as a line from Faraday to Maxwell, including the development of theories of electricity and magnetism modelled on Newtonian mechanics. A History of the Theories of Aether and Electricity is any of three books written by British mathematician Sir Edmund Taylor Whittaker FRS FRSE on the history of electromagnetic theory, covering the development of classical electromagnetism, optics, and aether theories. The book covers the history of aether theories and the development of electromagnetic theory up to the 20th century. James Clerk Maxwell used Faraday's conceptualisation to help formulate his unification of electricity and magnetism in his electromagnetic theory. James Clerk Maxwell (13 June 1831 – 5 November 1879) was a Scottish mathematician and scientist responsible for the classical theory of electromagnetic radiation, which was the first theory to describe electricity, magnetism and light as different manifestations of the same phenomenon. 
In particular, unification of gravitation and electromagnetism was actively pursued by several physicists and mathematicians in the years between the two World Wars. Einstein was not alone in his attempts to unify electromagnetism and gravity; a large number of mathematicians and physicists, including Hermann Weyl, Arthur Eddington, and Theodor Kaluza also attempted to develop approaches that could unify these interactions. The work covers the development of optics, electricity, and magnetism, with some side-plots in the history of thermodynamics and gravitation, over three centuries, through the close of the nineteenth century. ====Overview (vol. 1)==== Volume I: The Classical Theories contents # Title 1 The theory of the aether to the death of Newton 2 Electric and magnetic science, prior to the introduction of the potentials 3 Galvanism, from Galvani to Ohm 4 The luminiferous medium from Bradley to Fresnel 5 The aether as an elastic solid 6 Faraday 7 The mathematical electricians of the middle of the nineteenth century 8 Maxwell 9 Models of the aether 10 The followers of Maxwell 11 Conduction in solutions and gases, from Faraday to the discovery of the electron 12 Classical radiation-theory 13 Classical theory in the age of Lorentz Chapter one of the first volume was renamed the theory of the aether to the death of Newton after being mostly rewritten, though it still focuses on René Descartes, Isaac Newton, Pierre de Fermat, Robert Hooke, and Christiaan Huygens, among others. Since the 19th century, some physicists, notably Albert Einstein, have attempted to develop a single theoretical framework that can account for all the fundamental forces of nature – a unified field theory. Although new ""classical"" unified field theories continue to be proposed from time to time, often involving non-traditional elements such as spinors or relating gravitation to an electromagnetic force, none have been generally accepted by physicists yet. 
==See also== *Affine gauge theory *Classical field theory *Gauge gravitation theory *Metric-affine gravitation theory ==References== Category:History of physics * Classical unified field theories But even after his Treatise and subsequent discovery of light as an electromagnetic wave, Maxwell continued to believe in the aether theory: > ""Another theory of electricity which I prefer denies action at a distance > and attributes electric action to tensions and pressures in an all-pervading > medium, these stresses being the same in kind with those familiar to > engineers, and the medium being identical with that in which light is > supposed to be propagated."" Faraday's insights into the behavior of magnetic fields would prove invaluable to James Clerk Maxwell's course to unite electricity and magnetism into one theory. Field theory had its origins in the 18th century in a mathematical formulation of Newtonian mechanics, but it was seen as deficient as it implied action at a distance. Faraday advanced what has been termed the molecular theory of electricity (A treatise on electricity, in theory and practice, Volume 1, by Auguste de La Rive). Current mainstream research on unified field theories focuses on the problem of creating a quantum theory of gravity and unifying with the other fundamental theories in physics, all of which are quantum field theories. 
This discovery gave a clue to the subsequently proved intimate relationship between electricity and magnetism which was promptly followed up by Ampère who some months later, in September 1820, presented the first elements of his new theory, which he developed in the following years culminating with the publication in his 1827 """" (Memoir on the Mathematical Theory of Electrodynamic Phenomena, Uniquely Deduced from Experience) announcing his celebrated theory of electrodynamics, relating to the force that one current exerts upon another, by its electro-magnetic effects, namely # Two parallel portions of a circuit attract one another if the currents in them are flowing in the same direction, and repel one another if the currents flow in the opposite direction. Perhaps the most original, and certainly the most permanent in their influence, were his memoirs on the theory of electricity and magnetism, which virtually created a new branch of mathematical physics. For a survey of current work toward creating a quantum theory of gravitation, see quantum gravity. ==Overview== The early attempts at creating a unified field theory began with the Riemannian geometry of general relativity, and attempted to incorporate electromagnetic fields into a more general geometry, since ordinary Riemannian geometry seemed incapable of expressing the properties of the electromagnetic field. Now Maxwell logically showed how these methods of calculation could be applied to the electro-magnetic field. In November 1847, Clerk Maxwell entered the University of Edinburgh, learning mathematics from Kelland, natural philosophy from J. D. Forbes, and logic from Sir W. R. Hamilton. ",Maxwell,Einstein,Galileo,Faraday,Newton,A,kaggle200,"In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetism. 
A further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outset, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped. The Novak–Tyson model, first published in the paper titled ""Numerical analysis of a comprehensive model of M-phase control in Xenopus oocyte extracts and intact embryos"", builds on the Goldbeter and Tyson 1991 models in order to generate a unifying theory, encapsulating the observed dynamics of the cyclin-MPF relationship. This area of research was summarized in terms understandable by the layperson in a 2008 article in New Scientist that offered a unifying theory of brain function. Friston makes the following claims about the explanatory power of the theory: @@ -1084,7 +1084,7 @@ An evaporator is a device used to turn a liquid into a gas. Pycnometer A pycnometer (from Ancient Greek: πυκνός, romanized: puknos, lit. 'dense'), also called pyknometer or specific gravity bottle, is a device used to determine the density of a liquid. The powder is added to the pycnometer, which is then weighed, giving the weight of the powder sample. The pycnometer is then filled with a liquid of known density, in which the powder is completely insoluble. The term has its origins in the Greek word πυκνός, meaning ""dense"". The density calculated from a volume measured using a gas pycnometer is often referred to as skeletal density, true density or helium density. For non-porous solids a pycnometer can be used to measure particle density. 
-An extreme example of the gas displacement principle for volume measurement is described in U.S- The Fahrenheit hydrometer is a device used to measure the density of a liquid. The particle density of a powder, to which the usual method of weighing cannot be applied, can also be determined with a pycnometer. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury, using an analytical balance. It was invented by Daniel Gabriel Fahrenheit (1686[SEP]What is a pycnometer?","['D', 'A', 'E']",1.0 
+An extreme example of the gas displacement principle for volume measurement is described in U.S- The Fahrenheit hydrometer is a device used to measure the density of a liquid. The particle density of a powder, to which the usual method of weighing cannot be applied, can also be determined with a pycnometer. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury, using an analytical balance. It was invented by Daniel Gabriel Fahrenheit (1686[SEP]What is a pycnometer?","['D', 'E', 'A']",1.0 What is the estimated redshift of CEERS-93316, a candidate high-redshift galaxy observed by the James Webb Space Telescope?","Spectroscopic observations by JWST's NIRSpec instrument in October 2022 confirmed the galaxy's redshift of z = 13.2 to a high accuracy, establishing it as the oldest and most distant spectroscopically-confirmed galaxy known, with a light-travel distance (lookback time) of 13.6 billion years. CEERS-93316 is a high-redshift galaxy with a spectroscopic redshift z=4.9. F200DB-045 is a candidate high-redshift galaxy, with an estimated redshift of approximately z = 20.4, corresponding to 168 million years after the Big Bang. Notably, the redshift that was initially reported was photometric (z = 16.4), and would have made CEERS-93316 the earliest and most distant known galaxy observed. 
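The powder procedure described in this passage reduces to a short volume balance: the filling liquid occupies everything in the pycnometer except the powder. A sketch with hypothetical weighings (all numbers and the function name are mine, for illustration only):

```python
def particle_density(m_powder_g: float, m_liquid_g: float,
                     pyc_volume_cm3: float, rho_liquid_g_cm3: float) -> float:
    """Particle density of an insoluble powder via a pycnometer:
    V_powder = V_pycnometer - m_liquid / rho_liquid, then rho = m / V."""
    v_powder = pyc_volume_cm3 - m_liquid_g / rho_liquid_g_cm3
    return m_powder_g / v_powder

# Hypothetical run: 5 g of powder in a 25 cm^3 pycnometer, topped up
# with 23 g of water (rho ~ 0.998 g/cm^3).
rho_powder = particle_density(5.0, 23.0, 25.0, 0.998)
```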
__NOTOC__ MACS0647-JD is a galaxy with a redshift of about z = 10.7, equivalent to a light travel distance of 13.26 billion light-years (4 billion parsecs). Nonetheless, the redshift value of the galaxy presented by the procedure in one study may differ from the values presented in other studies using different procedures. ==Discovery== The candidate high-redshift galaxy F200DB-045 was discovered within the data from the Early Release Observations (ERO) that was obtained using the Near Infrared Camera of the James Webb Space Telescope (JWST) in July 2022. (H0=67.4 and OmegaM=0.315 (see Table/Planck2018 at ""Lambda-CDM model#Parameters"")) ==Discovery== The candidate high-redshift galaxy CEERS-93316 (RA:14:19:39.48 DEC:+52:56:34.92), in the Boötes constellation, was discovered by the CEERS imaging observing program using the Near Infrared Camera of the James Webb Space Telescope (JWST) in July 2022. It was reported with a redshift of z~10 using Hubble and Spitzer Space Telescope photometric data, with later reports in 2012 suggesting a possibly higher redshift of z = 11.9. Although doubts were raised that this galaxy could instead be a low-redshift interloper with extreme spectral emission lines producing the appearance of a very high redshift source, later spectroscopic observations by the James Webb Space Telescope's NIRSpec instrument in 2022 confirmed the galaxy's high redshift to a spectroscopically confirmed estimate of z = 11.58. 
== Gallery == File:Hudf09z10nl.png|UDFj-39546284 File:UDFj-39546284.tif|UDFj-39546284 appears as a faint red blob == See also == * EGSY8p7 * Hubble Ultra-Deep Field * List of the most distant astronomical objects * MACS0647-JD * Reionization * UDFy-38135539 == References == == External links == * UDFj-39546284 on WikiSky 20110127 Category:Fornax Category:Dwarf galaxies Category:Hubble Space Telescope Category:Hubble Ultra-Deep Field CEERS-93316 has a light-travel distance (lookback time) of 12.6 billion years, and, due to the expansion of the universe, a present proper distance of 25.7 billion light-years. MACS0647-JD was announced in November 2012, but by the next month UDFj-39546284, which was previously thought to be z = 10.3, was said to be at z = 11.9 (Universe Today, ""Hubble Census Unveils Galaxies Shining Near Cosmic Dawn""), although more recent analyses have suggested the latter is likely to be at a lower redshift. This data included a nearby galaxy cluster SMACS J0723.3–7327, a massive cluster known as a possible ""cosmic telescope"" in amplifying background galaxies, including the F200DB-045 background galaxy. ==Distance== Only a photometric redshift has been determined for F200DB-045; follow-up spectroscopic measurements will be required to confirm the redshift (see spectroscopic redshift). Additional spectroscopic observations by JWST will be needed to accurately confirm the redshift of MACS0647-JD. == See also == * List of the most distant astronomical objects * Farthest galaxies ==References== ==External links== * * NASA Great Observatories Find Candidate for Most Distant Object in the Universe to Date * European Space Agency – Galaxy cluster MACS J0647.7+7015 Category:Galaxies Category:Camelopardalis Category:Dwarf galaxies If the distance estimate is correct, it formed about 427 million years after the Big Bang. ==Details== JD refers to J-band Dropout – the galaxy was not detected in the so-called J-band (F125W), nor in 14 bluer Hubble filters. 
F200DB-045 would have a light-travel distance (lookback time) of 13.7 billion years, and, due to the expansion of the universe, a present proper distance of 36.1 billion light-years. Due to the expansion of the universe, its present proper distance is 33.6 billion light-years. Infrared NIRCam imaging of MACS0647-JD by the James Webb Space Telescope (JWST) in September 2022 determined a photometric redshift of , in agreement with the previous Hubble estimate. CEERS stands for ""Cosmic Evolution Early Release Science Survey"", and is a deep- and wide-field sky survey program developed specifically for JWST image studies, and is conducted by the CEERS Collaboration. ==See also== * Earliest galaxies * F200DB-045 * GLASS-z12 * HD1 (galaxy) * JADES-GS-z13-0 * List of the most distant astronomical objects * Peekaboo Galaxy ==References== ==External links== * CEERS WebSite * IMAGE: CEERS-93316 galaxy (1 Aug 2022) * * Category:Astronomical objects discovered in 2022 Category:Boötes Category:Galaxies Category:Discoveries by the James Webb Space Telescope __NOTOC__ UDFj-39546284 is a high-redshift Lyman-break galaxy discovered by the Hubble Space Telescope in infrared Hubble Ultra-Deep Field (HUDF) observations in 2009. A paper in April 2023 suggests that JADES-GS-z13-0 isn't in fact a galaxy, but a dark star with a mass of around a million times that of the Sun. == See also == * List of the most distant astronomical objects * GN-z11 - Previous record holder from 2016 to 2022. (z = 10.957) == References == Category:Astronomical objects discovered in 2022 Category:Galaxies Category:Fornax Category:Discoveries by the James Webb Space Telescope JADES-GS-z13-0 is a high-redshift Lyman-break galaxy discovered by the James Webb Space Telescope (JWST) during NIRCam imaging for the JWST Advanced Deep Extragalactic Survey (JADES) on 29 September 2022. 
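A simple consequence of the redshift figures quoted in these passages, worth making explicit: cosmological expansion stretches every emitted wavelength by a factor (1 + z), which is why such early galaxies are infrared targets for JWST's NIRCam rather than optical ones. A sketch (the Lyman-alpha rest wavelength is standard; the function name is mine):

```python
def observed_wavelength_nm(rest_nm: float, z: float) -> float:
    """Observed wavelength of light emitted at rest_nm by a source at redshift z."""
    return rest_nm * (1.0 + z)

# Lyman-alpha (121.6 nm in the rest frame) from a z = 13.2 galaxy
# arrives near 1.73 micrometres, well into the near-infrared.
lam_nm = observed_wavelength_nm(121.6, 13.2)
```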
","Approximately z = 6.0, corresponding to 1 billion years after the Big Bang.","Approximately z = 16.7, corresponding to 235.8 million years after the Big Bang.","Approximately z = 3.0, corresponding to 5 billion years after the Big Bang.","Approximately z = 10.0, corresponding to 13 billion years after the Big Bang.","Approximately z = 13.0, corresponding to 30 billion light-years away from Earth.",B,kaggle200,"More than a million quasars have been found, with the nearest known being about 600 million light-years away from Earth. The record for the most distant known quasar continues to change. In 2017, the quasar ULAS J1342+0928 was detected at redshift ""z"" = 7.54. Light observed from this 800-million-solar-mass quasar was emitted when the universe was only 690 million years old. In 2020, the quasar Pōniuāʻena was detected from a time only 700 million years after the Big Bang, and with an estimated mass of 1.5 billion times the mass of the Sun. In early 2021, the quasar J0313–1806, with a 1.6-billion-solar-mass black hole, was reported at ""z"" = 7.64, 670 million years after the Big Bang. HD1 is a proposed high-redshift galaxy, and is considered, as of April 2022, to be one of the earliest and most distant known galaxies yet identified in the observable universe. The galaxy, with an estimated redshift of approximately z = 13.27, is seen as it was about 324 million years after the Big Bang, 13.787 billion years ago. It has a light-travel distance (lookback time) of 13.463 billion light-years from Earth, and, due to the expansion of the universe, a present proper distance of 33.288 billion light-years. Within two weeks of the first Webb images, several preprint papers described a wide range of early galaxies believed to date from 235 million years (z=16.7) to 280 million years after the Big Bang, far earlier than previously known. The results await peer review. 
On 17 August 2022, NASA released a large mosaic image of 690 individual frames taken by the Near Infrared Camera (NIRCam) on JWST of numerous very early galaxies. Some early galaxies observed by JWST like CEERS-93316, which has an estimated redshift of approximately z=16.7 corresponding to 235.8 million years after the Big Bang, are high redshift galaxy candidates. @@ -1175,7 +1175,7 @@ The last several chapters deal with a conundrum called the Ozma Problem, which e
The Ozma Problem The 18th chapter, ""The Ozma Problem"", poses a problem that Gardner claims would arise if Earth should ever enter into communication with life on another planet through Project Ozma. Gardner follows the thread of several false leads on the road to the solution of the problem, such as the magnetic poles of astronomical bodies and the chirality of life molecules, which could be arbitrary based on how life locally originated. The solution to the Ozma Problem was finally realized in the famous Wu experiment, conducted in 1956 by Chinese-American physicist Chien-Shiung Wu (1912–1997), involving the beta decay of cobalt-60. Implications for particle physics, theoretical physics and cosmology are covered and brought up to date (in later editions of the book) with regard to Grand Unified Theories, theories of everything, superstring theory and . An earlier example of asymmetry had actually been detected as early as 1928 in the decay of a radionuclide of radium, but its significance was not then realized. 
The last several chapters deal with a conundrum called the Ozma Problem, which examines whether there is any fundamental asymmetry to the universe. It was the first experiment to disprove the conservation of parity, and according to Gardner, one could use it to convey the meaning of left and right to remote extraterrestrials. See the Ozma Problem for an illustration of this. 
-The last several chapters deal with a conundrum called the Ozma Problem, which examines whet[SEP]What is the Ozma Problem?","['C', 'E', 'B']",1.0 
+The last several chapters deal with a conundrum called the Ozma Problem, which examines whet[SEP]What is the Ozma Problem?","['C', 'B', 'E']",1.0 What is a Hilbert space in quantum mechanics?","In quantum mechanics, the Hilbert space is the space of complex-valued functions belonging to L^2 (\mathbb{R}^3 , d^3x), where the simple \mathbb{R}^3 is the classical configuration space of a free particle which has finite degrees of freedom, and d^3 x is the Lebesgue measure on \mathbb{R}^3. Phase-space representation of quantum state vectors is a formulation of quantum mechanics elaborating the phase-space formulation with a Hilbert space. In mathematics and the foundations of quantum mechanics, the projective Hilbert space P(H) of a complex Hilbert space H is the set of equivalence classes of non-zero vectors v in H, for the relation \sim on H given by :w \sim v if and only if v = \lambda w for some non-zero complex number \lambda. For this purpose, the Hilbert space of a quantum system is enlarged by introducing an auxiliary quantum system. In quantum field theory, it is expected that the Hilbert space is also the L^2 space on the configuration space of the field, which is infinite dimensional, with respect to some Borel measure naturally defined. In the mathematical physics of quantum mechanics, Liouville space, also known as line space, is the space of operators on Hilbert space. 
The term Hilbert geometry may refer to several things named after David Hilbert: * Hilbert's axioms, a modern axiomatization of Euclidean geometry * Hilbert space, a space in many ways resembling a Euclidean space, but in important instances infinite-dimensional * Hilbert metric, a metric that makes a bounded convex subset of a Euclidean space into an unbounded metric space In quantum mechanics the domain space of the wave functions \psi is the classical configuration space \mathbb{R}^3. Liouville space is itself a Hilbert space under the Hilbert-Schmidt inner product. Liouville space underlies the density operator formalism and is a common computation technique in the study of open quantum systems. ==References== Category:Hilbert spaces Category:Linear algebra Category:Operator theory Category:Functional analysis This is the usual construction of projectivization, applied to a complex Hilbert space. ==Overview== The physical significance of the projective Hilbert space is that in quantum theory, the wave functions \psi and \lambda \psi represent the same physical state, for any \lambda \ne 0. The same construction can be applied also to real Hilbert spaces. Complex projective Hilbert space may be given a natural metric, the Fubini–Study metric, derived from the Hilbert space's norm. ==Product== The Cartesian product of projective Hilbert spaces is not a projective space. Relative-position state and relative-momentum state are defined in the extended Hilbert space of the composite quantum system and expressions of basic operators such as canonical position and momentum operators, acting on these states, are obtained."" For the finite-dimensional complex Hilbert space, one writes :P(H_{n})=\mathbb{C}P^{n-1} so that, for example, the projectivization of two-dimensional complex Hilbert space (the space describing one qubit) is the complex projective line \mathbb{C}P^{1}. 
Thus the intuitive expectation should be modified, and the concept of quantum configuration space should be introduced as a suitable enlargement of the classical configuration space so that an infinite dimensional measure, often a cylindrical measure, can be well defined on it. This symplectic Hilbert space is denoted by \mathcal{H}(\Gamma). In quantum field theory, the quantum configuration space, the domain of the wave functions \Psi, is larger than the classical configuration space. In the case where \psi(q,p)\propto W(q,p), worked out at the beginning of the section, the Oliveira approach and phase-space formulation are indistinguishable, at least for pure states. == Equivalence of representations == As was stated before, the first wave-function formulation of quantum mechanics was developed by Torres-Vega and Frederick; its phase-space operators are given by :\widehat{x}_{{}_\text{TV}}=\frac{1}{2}x+i\hbar\frac{\partial}{\partial p} , and :\widehat{p\,}_{{}_\text{TV}}=\frac{1}{2}p-i\hbar\frac{\partial}{\partial x} . Then \psi(x,p)\propto W(q,p). === Torres-Vega–Frederick representation === With the operators of position and momentum, a Schrödinger picture is developed in phase space :i\hbar\frac{\partial}{\partial t}\psi(x,p,t)=\widehat{H}_{{}_\text{TV}}\psi(x,p,t) . ",A complex vector space where the state of a classical mechanical system is described by a vector |Ψ⟩.,A physical space where the state of a classical mechanical system is described by a vector |Ψ⟩.,A physical space where the state of a quantum mechanical system is described by a vector |Ψ⟩.,A mathematical space where the state of a classical mechanical system is described by a vector |Ψ⟩.,A complex vector space where the state of a quantum mechanical system is described by a vector |Ψ⟩.,E,kaggle200,"Any system that can be described by a Pfaffian constraint and has a configuration space or state space of only two variables or one variable is holonomic. 
In a formal setup, any system in quantum mechanics is described by a state, which is a vector , residing in an abstract complex vector space, called a Hilbert space. It may be either infinite- or finite-dimensional. A usual presentation of that Hilbert space is a special function space, called , on certain set , that is either some configuration space or a discrete set. with the vector |""ψ""> representing the complete potential state in Hilbert space, co-efficients c for ""i"" = 1, ... , n numbers in the complex plane related to the probability of each corresponding vector and each vector |""i""> representing each n-indeterminate state forming an orthogonal basis spanning |""i"">. @@ -1205,7 +1205,7 @@ In 1905 Einstein postulated from the outset that the speed of light in vacuum, m The speed of light in vacuum is defined to be exactly 299 792 458 m/s (approxUsing this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum ""c"" featured as a fundamental constant, also appearing in contexts unrelated to lightThis made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time. The speed of light in vacuum is usually denoted by a lowercase , for ""constant"" or the Latin (meaning 'swiftness, celerity')Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum c featured as a fundamental constant, also appearing in contexts unrelated to lightAll forms of electromagnetic radiation move at exactly this same speed in vacuum. 
The speed of light in vacuum is usually denoted by a lowercase c, for ""constant"" or the Latin celeritas (meaning 'swiftness, celerity')This article uses c exclusively for the speed of light in vacuumThis article uses exclusively for the speed of light in vacuum.For instance, (the speed of light in vacuum, in metres per second) can be written as and then approximated as . -In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observerSpecial relativity In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observerIn 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used for a different constant that was later shown to equal times the speed of light in vacuumEinstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of l[SEP]What is the significance of the speed of light in vacuum?","['C', 'D', 'E']",1.0 +In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observerSpecial relativity In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observerIn 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used for a different constant that was later shown to equal times the speed of light in vacuumEinstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of l[SEP]What is the significance of the speed of light in vacuum?","['C', 'D', 'B']",1.0 What is the term used to describe the proportionality 
factor to the Stefan-Boltzmann law that is utilized in subsequent evaluations of the radiative behavior of grey bodies?,"The Stefan–Boltzmann law, also known as Stefan's law, describes the intensity of the thermal radiation emitted by matter in terms of that matter's temperature. A so-called grey body is a body for which the spectral emissivity is independent of wavelength, so that the total emissivity, \varepsilon, is a constant. For an ideal absorber/emitter or black body, the Stefan–Boltzmann law states that the total energy radiated per unit surface area per unit time (also known as the radiant exitance) is directly proportional to the fourth power of the black body's temperature, T: : M^{\circ} = \sigma\, T^{4}. The Stefan–Boltzmann law may be expressed as a formula for radiance as a function of temperature. The total emissivity, as applicable to the Stefan–Boltzmann law, may be calculated as a weighted average of the spectral emissivity, with the blackbody emission spectrum serving as the weighting function. The Stefan–Boltzmann law for the radiance of a black body is: : L^\circ_\Omega = \frac{M^{\circ}}\pi = \frac\sigma\pi\, T^{4}. However, the emissivity which appears in the non-directional form of the Stefan–Boltzmann law is the hemispherical total emissivity, which reflects emissions as totaled over all wavelengths, directions, and polarizations. In the general case, the Stefan–Boltzmann law for radiant exitance takes the form: : M = \varepsilon\,M^{\circ} = \varepsilon\,\sigma\, T^{4} where \varepsilon is the emissivity of the matter doing the emitting. The formula is given, where E is the radiant heat emitted from a unit of area per unit time, T is the absolute temperature, and is the Stefan–Boltzmann constant. 
==Equations== ===Planck's law of black-body radiation=== Planck's law states that :B_\nu(T) = \frac{2h\nu^3}{c^2}\frac{1}{e^{h\nu/kT} - 1}, where :B_{\nu}(T) is the spectral radiance (the power per unit solid angle and per unit of area normal to the propagation) density of frequency \nu radiation per unit frequency at thermal equilibrium at temperature T. Units: power / [area × solid angle × frequency]. :h is the Planck constant; :c is the speed of light in vacuum; :k is the Boltzmann constant; :\nu is the frequency of the electromagnetic radiation; :T is the absolute temperature of the body. The emitted energy flux density or irradiance B_\nu(T,E), is related to the photon flux density b_\nu(T,E) through :B_\nu(T,E) = E b_\nu(T,E) ===Wien's displacement law=== Wien's displacement law shows how the spectrum of black-body radiation at any temperature is related to the spectrum at any other temperature. A consequence of Wien's displacement law is that the wavelength at which the intensity per unit wavelength of the radiation produced by a black body has a local maximum or peak, \lambda_\text{peak}, is a function only of the temperature: :\lambda_\text{peak} = \frac{b}{T}, where the constant b, known as Wien's displacement constant, is equal to \frac{hc}{k}\frac{1}{5+W_0(-5e^{-5})} (where W_0 is the Lambert W function). The intensity of the light emitted from the blackbody surface is given by Planck's law, I(\nu,T) = \frac{2h\nu^3}{c^2}\frac{1}{e^{h\nu/(kT)}-1}, where *I(\nu,T) is the amount of power per unit surface area per unit solid angle per unit frequency emitted at a frequency \nu by a black body at temperature T. *h is the Planck constant *c is the speed of light, and *k is the Boltzmann constant. An emissivity of one corresponds to a black body. 
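The two laws above can be evaluated numerically. This is a minimal sketch assuming CODATA values for h, c, k and Wien's constant b (the numeric values are not given in the text): it computes the spectral radiance B_\nu(T) and the peak-emission wavelength \lambda_\text{peak} = b/T.

```python
import math

# Constant values are assumed (CODATA), not taken from the text above.
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light in vacuum, m/s
k = 1.380649e-23     # Boltzmann constant, J/K
b = 2.897771955e-3   # Wien's displacement constant, m K

def planck_radiance(nu, T):
    """Planck's law B_nu(T) in W m^-2 sr^-1 Hz^-1; expm1 avoids loss of
    precision when h*nu/(k*T) is small."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def wien_peak_wavelength(T):
    """Wien's displacement law: lambda_peak = b / T, in metres."""
    return b / T

# The Sun's effective temperature of about 5772 K peaks near 502 nm.
print(wien_peak_wavelength(5772.0))  # ~5.02e-07 m
```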
==Detailed explanation== The radiant exitance (previously called radiant emittance), M, has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre (J·s−1·m−2), or equivalently, watts per square metre (W·m−2). The constant of proportionality, \sigma, is called the Stefan–Boltzmann constant. The emissivity of a material specifies how well a real body radiates energy as compared with a black body. The Gebhart factors are used in radiative heat transfer; they describe the ratio of radiation absorbed by any other surface to the total radiation emitted from a given surface. For simpler cases it can also be formulated as a single expression. ==See also== * Radiosity * Thermal radiation * Black body == References == Category:Heat transfer Through Planck's law the temperature spectrum of a black body is proportionally related to the frequency of light and one may substitute the temperature (T) for the frequency in this equation. The law, including the theoretical prediction of the Stefan–Boltzmann constant as a function of the speed of light, the Boltzmann constant and the Planck constant, is a direct consequence of Planck's law as formulated in 1900. == Stefan–Boltzmann constant == The Stefan–Boltzmann constant, σ, is derived from other known physical constants: :\sigma = \frac{2 \pi^5 k^4}{15 c^2 h^3} where k is the Boltzmann constant, h is Planck's constant, and c is the speed of light in a vacuum. The wavelength at which the radiation is strongest is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law. ",Emissivity,Wien's displacement law,Reflectance,Black-body radiation,Albedo,A,kaggle200,"There is no consistent evidence that glomerulations are correlated to severity of urinary symptoms, quality of life, bladder inflammation, or bladder capacity. 
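The derivation above can be checked directly: σ follows from k, h and c via σ = 2π⁵k⁴/(15c²h³), and the grey-body exitance is M = εσT⁴. A minimal sketch, assuming CODATA constant values (not stated in the text):

```python
import math

# Constant values assumed (CODATA), not from the text.
k = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light in vacuum, m/s

# Stefan-Boltzmann constant from the formula in the text.
sigma = 2 * math.pi**5 * k**4 / (15 * c**2 * h**3)

def radiant_exitance(T, emissivity=1.0):
    """Grey-body radiant exitance M = eps * sigma * T^4 in W/m^2
    (emissivity 1.0 recovers the black-body case)."""
    return emissivity * sigma * T**4

print(sigma)                    # ~5.670e-08 W m^-2 K^-4
print(radiant_exitance(300.0))  # ~459 W/m^2 for a black body at 300 K
```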
One study suggests that the severity of glomerulations may change over time as seen in a few individuals who have either worsened or diminished glomerulations in their subsequent evaluations. The proportionality factor in the definition of Ross' time constant is dependent upon the magnitude of the disturbance on the plant and the specifications for feedback control. When there are no disturbances, Ross' -lemma shows that the open-loop optimal solution is the same as the closed-loop one. In the presence of disturbances, the proportionality factor can be written in terms of the Lambert W-function. formula_1 assuming the emission layer of the atmosphere radiates like a blackbody according to the Stefan-Boltzmann law. σ is the Stefan-Boltzmann constant. @@ -1275,7 +1275,7 @@ Hypercubes or measure polytopes, including the square and the cube. In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetryAn n-polytope is regular if any set consisting of a vertex, an edge containing it, a 2-dimensional face containing the edge, and so on up to n−1 dimensions, can be mapped to any other such set by a symmetry of the polytope.So for example, the cube is regular because if we choose a vertex of the cube, and one of the three edges it is on, and one of the two faces containing the edge, then this triplet, or flag, (vertex, edge, face) can be mapped to any other such flag by a suitable symmetry of the cubeThe symmetry group of a regular polytope acts transitively on its flags; hence, the dual polytope of a regular polytope is also regular. In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetry- For a regular abstract polytope, if the combinatorial automorphisms of the abstract polytope are realized by geometric symmetries then the geometric figure will be a regular polytope. 
A regular polytope can be represented by a Schläfli symbol of the form with regular facets as and regular vertex figures as -Regular polytopes have the highest degree of symmetry of all polytopesThus we can define a regular polytope very succinctly: A regular polytope is one whose symmetry group is transitive on its flags.In the 20th century, some important developments were madeAll its elements or -faces (for all , where is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension .All its elements or j-face[SEP]What is a regular polytope?","['C', 'A', 'B']",1.0 +Regular polytopes have the highest degree of symmetry of all polytopesThus we can define a regular polytope very succinctly: A regular polytope is one whose symmetry group is transitive on its flags.In the 20th century, some important developments were madeAll its elements or -faces (for all , where is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension .All its elements or j-face[SEP]What is a regular polytope?","['C', 'A', 'D']",1.0 What is the reason behind the largest externally observed electrical effects when two conductors are separated by the smallest distance without touching?,"In electromagnetics, proximity effect is a redistribution of electric current occurring in nearby parallel electrical conductors carrying alternating current flowing in the same direction which causes the current distribution in the conductor to concentrate on the side away from the nearby conductor. The proximity effect can significantly increase the AC resistance of adjacent conductors when compared to its resistance to a DC current. The result is that the current is concentrated in the areas of the conductor farthest away from nearby conductors carrying current in the same direction. 
Contact electrification is a phrase that describes the phenomenon whereby two surfaces become electrically charged when they contact and then separate. The concentration of current on the side of the conductor gets larger with increasing frequency. The Johnsen–Rahbek effect occurs when an electric potential is applied across the boundary between a metallic surface and the surface of a semiconducting material. While many aspects of contact electrification are now understood, and consequences have been extensively documented, there remain disagreements in the current literature about the underlying mechanisms. It is caused by eddy currents induced by the time-varying magnetic field of the other conductor. Similarly, in two adjacent conductors carrying alternating currents flowing in opposite directions, such as are found in power cables and pairs of bus bars, the current in each conductor is concentrated into a strip on the side facing the other conductor. == Effects == The additional resistance increases power losses which, in power circuits, can generate undesirable heating. Similarly, in adjacent conductors carrying AC flowing in opposite directions, the current will be redistributed to the side of the conductor closest to the other conductor. == Explanation == A changing magnetic field will influence the distribution of an electric current flowing within an electrical conductor, by electromagnetic induction. This ""current crowding"" effect causes the current to occupy a smaller effective cross-sectional area of the conductor, increasing current density and AC electrical resistance of the conductor. The Ferranti effect is more pronounced the longer the line and the higher the voltage applied.Line-Charging Current Interruption by HV and EHV Circuit Breakers, Carl-Ejnar Sölver, Ph. D. and Sérgio de A. Morais, M. Sc. The alternating magnetic field induces eddy currents in adjacent conductors, altering the overall distribution of current flowing through them. 
As mentioned above contact electrification is when two bodies contact then separate; triboelectricity includes sliding. The relative voltage rise is proportional to the square of the line length and the square of frequency.A Knowledge Base for Switching Surge Transients, A. I. Ibrahim and H. W. Dommel The Ferranti effect is much more pronounced in underground cables, even in short lengths, because of their high capacitance per unit length, and lower electrical impedance. thumb|right|Illustration of the Ferranti effect; addition of voltages across the line inductance In electrical engineering, the Ferranti effect is the increase in voltage occurring at the receiving end of a very long (> 200 km) AC electric power transmission line, relative to the voltage at the sending end, when the load is very small, or no load is connected. Under these conditions an attractive force appears, whose magnitude depends on the voltage and the specific materials involved. At higher frequencies, the AC resistance of a conductor can easily exceed ten times its DC resistance. == Example == For example, if two wires carrying the same alternating current lie parallel to one another, as would be found in a coil used in an inductor or transformer, the magnetic field of one wire will induce longitudinal eddy currents in the adjacent wire, that flow in long loops along the wire, in the same direction as the main current on the side of the wire facing away from the other wire, and back in the opposite direction on the side of the wire facing the other wire. It was first observed during the installation of underground cables in Sebastian Ziani de Ferranti's 10,000-volt AC power distribution system in 1887.J. F. Wilson, Ferranti and the British Electrical Industry, 1864-1930, Manchester University Press, 1988 page 44 The capacitive line charging current produces a voltage drop across the line inductance that is in-phase with the sending-end voltage, assuming negligible line resistance. 
The winding is usually limited to a single layer, and often the turns are spaced apart to separate the conductors. ","The surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the temperature between the surfaces.","The surface charge on a conductor depends on the magnitude of the magnetic field, which in turn depends on the distance between the surfaces.","The surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the angle between the surfaces.","The surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the distance between the surfaces.","The surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the pressure between the surfaces.",D,kaggle200,"where for simplicity, we assume an orthogonal lattice in which α only depends on ""m"", β only depends on ""n"" and γ only depends on ""p"". With this assumption, The operational definition of synonymy depends on the distinctions between these classes of sememes. For example, the differentiation between what some academics call cognitive synonyms and near-synonyms depends on these differences. The ampacity of a conductor depends on its ability to dissipate heat without damage to the conductor or its insulation. This is a function of the @@ -1375,7 +1375,7 @@ In mathematical physics, Minkowski space (or Minkowski spacetime) () combines in This supergroup has the following Lie superalgebraSuppose that formula_20 is Minkowski space (of dimension formula_21), and formula_22 is a finite sum of irreducible real spinor represen","Thus, the structure of Minkowski space is still essential in the description of general relativity. 
Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (3, 1) or (1, 3)- Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities)Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities)Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it ""was grown on experimental physical grounds."" Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalizedMinkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensionsElements of Minkowski space are called eventsIt is perhaps the simplest example of a pseudo-Riemannian manifold. In mathematical physics, Minkowski space (or Minkowski spacetime) () combines inertial space and time manifolds (x,y) with a non-inertial reference frame of space and time (x',t') into a four-dimensional model relating a position (inertial frame of reference) to the field (physics)More abstractly, we say that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski spaceMinkowski space is often denoted or to emphasize the chosen signature, or just Minkowski space is often denoted R3,1 or R1,3 to emphasize the chosen signature, or just MThus, the structure of Minkowski space is still essential in the description of general relativity. 
-This supergroup has the following Lie superalgebraSuppose that formula_20 is Minkowski space (of dimension formula_21), and formula_22 is a finite sum of irreducible real spinor represen[SEP]What is Minkowski space?","['B', 'E', 'C']",1.0 +This supergroup has the following Lie superalgebraSuppose that formula_20 is Minkowski space (of dimension formula_21), and formula_22 is a finite sum of irreducible real spinor represen[SEP]What is Minkowski space?","['B', 'E', 'D']",1.0 What is the Optical Signal-to-Noise Ratio (OSNR)?,"The OSNR is the ratio between the signal power and the noise power in a given bandwidth. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. OSNR is measured with an optical spectrum analyzer. ==Types and abbreviations== Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. OSNR, a four-letter acronym or abbreviation, may refer to: *Optical signal-to- noise ratio *Optical spectrum analyzer *Optical performance monitoring *Other / Signature Not Required - a delivery classification used by some shippers. Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Depending on whether the signal is a constant () or a random variable (), the signal-to-noise ratio for random noise becomes: : \mathrm{SNR} = \frac{s^2}{\mathrm{E}[N^2]} where E refers to the expected value, i.e. in this case the mean square of , or : \mathrm{SNR} = \frac{\mathrm{E}[S^2]}{\mathrm{E}[N^2]} If the noise has expected value of zero, as is common, the denominator is its variance, the square of its standard deviation . 
SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that instantaneous signal-to-noise ratios will be considerably different. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. Related measures are the ""contrast ratio"" and the ""contrast-to-noise ratio"". ==Modulation system measurements== ===Amplitude modulation=== Channel signal-to-noise ratio is given by :\mathrm{(SNR)_{C,AM}} = \frac{A_c^2 (1 + k_a^2 P)} {2 W N_0} where W is the bandwidth and k_a is the modulation index. Output signal-to-noise ratio (of AM receiver) is given by :\mathrm{(SNR)_{O,AM}} = \frac{A_c^2 k_a^2 P} {2 W N_0} ===Frequency modulation=== Channel signal-to-noise ratio is given by :\mathrm{(SNR)_{C,FM}} = \frac{A_c^2} {2 W N_0} Output signal-to-noise ratio is given by :\mathrm{(SNR)_{O,FM}} = \frac{A_c^2 k_f^2 P} {2 N_0 W^3} ==Noise reduction== All real measurements are disturbed by noise. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise. Other definitions of SNR may use different factors or bases for the logarithm, depending on the context and application. ==Definition== Signal-to-noise ratio is defined as the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input): : \mathrm{SNR} = \frac{P_\mathrm{signal}}{P_\mathrm{noise}}, where is average power. Audio uses RMS, Video P-P, which gave +9 dB more SNR for video. ==Optical signals== Optical signals have a carrier frequency (about and more) that is much higher than the modulation frequency. GSNR stands for geometric signal-to-noise ratio. Yet another alternative, very specific, and distinct definition of SNR is employed to characterize sensitivity of imaging systems; see Signal-to-noise ratio (imaging). 
Peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. In this case, the SNR is approximately : \mathrm{SNR_{dB}} \approx 20 \log_{10} (2^n {\textstyle\sqrt {3/2}}) \approx 6.02 \cdot n + 1.761 ===Floating point=== Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. Philadelphia: Lippincott Williams & Wilkins, 2006, p. 280. : \mathrm{SNR} = \frac{\mu}{\sigma} where \mu is the signal mean or expected value and \sigma is the standard deviation of the noise, or an estimate thereof.The exact methods may vary between fields. Substituting the definitions of SNR, signal, and noise in decibels into the above equation results in an important formula for calculating the signal to noise ratio in decibels, when the signal and noise are also in decibels: : \mathrm{SNR_{dB}} = {P_\mathrm{signal,dB} - P_\mathrm{noise,dB}}. Using the definition of SNR : \mathrm{SNR_{dB}} = 10 \log_{10} \left ( \frac{P_\mathrm{signal}}{P_\mathrm{noise}} \right ). 
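The decibel forms quoted above are easy to sketch in code: SNR_dB = 10·log₁₀(P_signal/P_noise), and the ideal n-bit quantization SNR of 20·log₁₀(2ⁿ·√(3/2)) ≈ 6.02·n + 1.761 dB. A minimal illustration (function names are mine, not from the text):

```python
import math

def snr_db(p_signal, p_noise):
    """SNR in decibels from signal and noise powers."""
    return 10 * math.log10(p_signal / p_noise)

def quantization_snr_db(n_bits):
    """Ideal SNR of an n-bit fixed-point quantizer,
    20*log10(2**n * sqrt(3/2)) ~ 6.02*n + 1.761 dB."""
    return 20 * math.log10(2**n_bits * math.sqrt(1.5))

print(snr_db(100.0, 1.0))       # 20.0 dB: signal power 100x the noise power
print(quantization_snr_db(16))  # ~98.09 dB for 16-bit audio
```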
","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the modulation frequency and the carrier frequency of an optical signal, used to describe the signal quality in systems where dynamic range is less than 6.02m.","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the signal power and the noise power in a given bandwidth, used to describe the signal quality without taking the receiver into account.","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the signal power and the noise power in a given bandwidth, used to describe the signal quality in situations where the dynamic range is less than 6.02m.","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the signal power and the noise power in a fixed bandwidth of 6.02m, used to describe the signal quality in systems where dynamic range is less than 6.02m.","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the signal power and the noise power in a given bandwidth, used to describe the signal quality in situations where the dynamic range is large or unpredictable.",B,kaggle200,"Note that the dynamic range is much larger than fixed-point, but at a cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms. Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. GSNR stands for geometric signal-to-noise ratio. SINR is the signal-to-interference-plus-noise ratio. 
The optical component used for this purpose in DWDM networks is known as optical performance monitor (OPM) or optical channel monitor (OCM), which measures channel power, wavelength, and optical signal-to-noise ratio (OSNR) for each channel. @@ -1511,7 +1511,7 @@ Earnshaw's theorem states that a collection of point charges cannot be maintaine Classical Electrodynamics Antidynamo theorems is a general category of theorems that restrict the type of magnetic fields that can be produced by dynamo action. Cowling's theorem states that an axisymmetric magnetic field cannot be maintained through a self-sustaining dynamo action by an axially symmetric current. Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges. -Non-Relativistic Quantum Mechanics and Quantum Information Bell's theorem Kochen–Specker theorem PBR theorem No-hiding theorem No-cloning theorem Quantum no-deleting theorem No-teleportation theor[SEP]What does Earnshaw's theorem state?","['D', 'A', 'B']",1.0 +Non-Relativistic Quantum Mechanics and Quantum Information Bell's theorem Kochen–Specker theorem PBR theorem No-hiding theorem No-cloning theorem Quantum no-deleting theorem No-teleportation theor[SEP]What does Earnshaw's theorem state?","['D', 'B', 'A']",1.0 What is radiosity in radiometry?,"In radiometry, radiosity is the radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area, and spectral radiosity is the radiosity of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. Radiosity may refer to: *Radiosity (radiometry), the total radiation (emitted plus reflected) leaving a surface, certainly including the reflected radiation and the emitted radiation. 
Radiosity is often called in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity. ==Mathematical definitions== ===Radiosity=== Radiosity of a surface, denoted Je (""e"" for ""energetic"", to avoid confusion with photometric quantities), is defined as :J_\mathrm{e} = \frac{\partial \Phi_\mathrm{e}}{\partial A} = J_\mathrm{e,em} + J_\mathrm{e,r} + J_\mathrm{e,tr}, where * ∂ is the partial derivative symbol * \Phi_e is the radiant flux leaving (emitted, reflected and transmitted) * A is the area * J_{e,em} = M_e is the emitted component of the radiosity of the surface, that is to say its exitance * J_{e,r} is the reflected component of the radiosity of the surface * J_{e,tr} is the transmitted component of the radiosity of the surface For an opaque surface, the transmitted component of radiosity Je,tr vanishes and only two components remain: :J_\mathrm{e} = M_\mathrm{e} + J_\mathrm{e,r}. Radiodensity (or radiopacity) is opacity to the radio wave and X-ray portion of the electromagnetic spectrum: that is, the relative inability of those kinds of electromagnetic radiation to pass through a particular material. The radiosity of an opaque, gray and diffuse surface is given by :J_\mathrm{e} = M_\mathrm{e} + J_\mathrm{e,r} = \varepsilon \sigma T^4 + (1 - \varepsilon) E_\mathrm{e}, where *ε is the emissivity of that surface; *σ is the Stefan–Boltzmann constant; *T is the temperature of that surface; *Ee is the irradiance of that surface. In such a case, the radiosity does not depend on the angle of incidence of reflecting radiation and this information is lost on a diffuse surface. In reality, however, the radiosity will have a specular component from the reflected radiation. In such an application, the radiosity must be calculated spectrally and then integrated over the range of radiation spectrum. 
Spectral radiosity in wavelength of a surface, denoted Je,λ, is defined as :J_{\mathrm{e},\lambda} = \frac{\partial J_\mathrm{e}}{\partial \lambda}, where λ is the wavelength. ==Radiosity method== thumb|400px|right|The two radiosity components of an opaque surface. In heat transfer, combining these two factors into one radiosity term helps in determining the net energy exchange between multiple surfaces. ===Spectral radiosity=== Spectral radiosity in frequency of a surface, denoted Je,ν, is defined as :J_{\mathrm{e},\nu} = \frac{\partial J_\mathrm{e}}{\partial \nu}, where ν is the frequency. The SI unit of radiosity is the watt per square metre (), while that of spectral radiosity in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral radiosity in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (). Materials that inhibit the passage of electromagnetic radiation are called radiodense or radiopaque, while those that allow radiation to pass more freely are referred to as radiolucent. If it is not, then the radiosity will vary as a function of position along the surface. Radiophysics (also modern writing ""radio physics""Radio Physics Solutions company official web page) is a branch of physics focused on the theoretical and experimental study of certain kinds of radiation, its emission, propagation and interaction with matter. The two main factors contributing to a material's radiopacity are density and atomic number. *Radiosity (computer graphics), a rendering algorithm which gives a realistic rendering of shadows and diffuse light. Radiopacity is one of the key considerations in the design of various devices such as guidewires or stents that are used during radiological intervention. Radiopaque volumes of material have white appearance on radiographs, compared with the relatively darker appearance of radiolucent volumes. 
These can be for instance, in the field of radiometry or the measurement of ionising radiation radiated from a source. ==Ionising radiation== thumb|400px|Graphic showing relationships between radioactivity and detected ionizing radiation. Though the term radiodensity is more commonly used in the context of qualitative comparison, radiodensity can also be quantified according to the Hounsfield scale, a principle which is central to X-ray computed tomography (CT scan) applications. ","Radiosity is the radiant flux entering a surface per unit area, including emitted, reflected, and transmitted radiation.","Radiosity is the radiant flux entering a surface per unit area, including absorbed, reflected, and transmitted radiation.","Radiosity is the radiant flux leaving a surface per unit area, including absorbed, reflected, and transmitted radiation.","Radiosity is the radiant flux leaving a surface per unit area, including emitted, reflected, and transmitted radiation.","Radiosity is the radiant flux leaving a surface per unit volume, including emitted, reflected, and transmitted radiation.",D,kaggle200,"In radiometry, irradiance is the radiant flux ""received"" by a ""surface"" per unit area. The SI unit of irradiance is the watt per square metre (W⋅m−2). The CGS unit erg per square centimetre per second (erg⋅cm−2⋅s−1) is often used in astronomy. Irradiance is often called intensity, but this term is avoided in radiometry where such usage leads to confusion with radiant intensity. In astrophysics, irradiance is called ""radiant flux"". In radiometry, radiance is the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area. Radiance is used to characterize diffuse emission and reflection of electromagnetic radiation, and to quantify emission of neutrinos and other particles. The SI unit of radiance is the watt per steradian per square metre (). 
It is a ""directional"" quantity: the radiance of a surface depends on the direction from which it is being observed. More correctly, radiosity ""B"" is the energy per unit area leaving the patch surface per discrete time interval and is the combination of emitted and reflected energy: @@ -1569,7 +1569,7 @@ In superconductors, there is a condensed-matter collective field ψ, which acts It was expected that a half-integer flux, that is, a spontaneous magnetization could only occur for a junction of ""d"" symmetry superconductorsTherefore, a gauge symmetry of formula_1 Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under considerationIf ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point. For the electroweak model, as explained earlier, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetryIn the experiment, the spontaneous magnetization was clearly observed in YBCO, which supported the ""d"" symmetry of the order parameter in YBCOAlso, they found that there was a pure ""d"" order parameter symmetry in the tetragonal TlBaCuO. 
-If formula_1 is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.But, even if the junction experiment is the strongest method to determine the symmetry of the HTS order parameter, the results have been ambiguousLike the ferromagnetic example, there is a phase transition at the electroweak temperature- A gauge symmetry of a Lagrangian formula_1 is defined as a differential operator on some vector bundle formula_2 taking its values in the linear space of (variational or exact) symmetries of formula_1So, by tuning their technique further, they found that there was an admixture of ""s"" symmetry in YBCO within about 3[SEP]What is the order parameter that breaks the electromagnetic gauge symmetry in superconductors?","['C', 'A', 'E']",1.0 +If formula_1 is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.But, even if the junction experiment is the strongest method to determine the symmetry of the HTS order parameter, the results have been ambiguousLike the ferromagnetic example, there is a phase transition at the electroweak temperature- A gauge symmetry of a Lagrangian formula_1 is defined as a differential operator on some vector bundle formula_2 taking its values in the linear space of (variational or exact) symmetries of formula_1So, by tuning their technique further, they found that there was an admixture of ""s"" symmetry in YBCO within about 3[SEP]What is the order parameter that breaks the electromagnetic gauge symmetry in superconductors?","['C', 'E', 'A']",1.0 What is the reason for the sun appearing slightly yellowish when viewed from Earth?,"A number of different atmospheric conditions can be responsible for this effect, all of which divert the sunlight in such a 
way as to allow it to reach the observer's eye, thereby giving the impression that the light comes directly from the Sun itself. A related phenomenon is gegenschein (or counterglow), sunlight backscattered from the interplanetary dust, appearing directly opposite to the Sun as a faint but slightly brighter oval glow. Yellow sun or Yellow Sun may refer to: *Yellow Sun (nuclear weapon), a British nuclear weapon *Yellow sun, a type of stellar classification *""Yellow Sun"", a song by The Raconteurs from their album Broken Boy Soldiers This is why it is most clearly visible near sunrise or sunset when the sun is blocked, but the dust particles nearest the line of sight to the sun are not. Depending on circumstances, these phenomena can give the impression of an actual sunset. Similarly to a false sunrise, other atmospheric circumstances may be responsible for the effect as well, such as simple reflection of the sunlight off the bottom of the clouds, or a type of mirage like the Novaya Zemlya effect. ==See also== *False sunrise *Halo (optical phenomenon) *Lower tangent arc *Mirage *Novaya Zemlya effect *Subsun *Sun pillar *Upper tangent arc ==References== Category:Atmospheric optical phenomena Up to now, the ""Blue Sky with a White Sun"" can still be seen in the emblem of the US Army 75th Ranger Regiment. The zodiacal light (also called false dawn when seen before sunrise) is a faint glow of diffuse sunlight scattered by interplanetary dust. The Blue Sky with a White Sun () serves as the design for the party flag and emblem of the Kuomintang, the canton of the flag of the Republic of China, the national emblem of the Republic of China, and as the naval jack of the ROC Navy. Several atmospheric phenomena that may alternatively be called a ""false sunrise"" are: * Simple reflection of the sunlight off the bottom of the clouds. 
There are several atmospheric conditions which may cause the effect, most commonly a type of halo, caused by the reflection and refraction of sunlight by small ice crystals in the atmosphere, often in the form of cirrostratus clouds. Consequently, its spectrum is the same as the solar spectrum. A false sunrise is any of several atmospheric optical phenomena in which the Sun appears to have risen, but is actually still some distance below the horizon. Depending on which variety of ""false sunset"" is meant, the halo has to appear either above the Sun (which itself is hidden below the horizon) or below it (in which case the real Sun is obstructed from view, e.g. by clouds or other objects), making the upper and lower tangent arc, upper and lower sun pillars and the subsun the most likely candidates. The spread of light can sometimes be deceivingly similar to a true sun. After the Northern Expedition it was replaced by the Blue Sky with a White Sun national emblem in 1928. ===Nationalist period=== Since 1928, under the KMT's political tutelage, the Blue Sky with a White Sun Flag shared the same prominence as the ROC flag. A false sunset can refer to one of two related atmospheric optical phenomena, in which either (1) the Sun appears to be setting into or to have set below the horizon while it is actually still some height above the horizon, or (2) the Sun has already set below the horizon, but still appears to be on or above the horizon (thus representing the reverse of a false sunrise). Like all halos, these phenomena are caused by the reflection and/or refraction of sunlight by ice crystals suspended in the atmosphere, often in the form of cirrus or cirrostratus clouds. The light scattered from extremely small dust particles is strongly forward scattering, although the zodiacal light actually extends all the way around the sky, hence it is brightest when observing at a small angle with the Sun. 
Thus it is possible to see more of the width at small angles toward the sun, and it appears wider near the horizon, closer to the sun under the horizon. == Origin == The source of the dust has been long debated. ",The sun appears yellowish due to a reflection of the Earth's atmosphere.,"The longer wavelengths of light, such as red and yellow, are not scattered away and are directly visible when looking towards the sun.","The sun appears yellowish due to the scattering of all colors of light, mainly blue and green, in the Earth's atmosphere.","The sun emits a yellow light due to its own spectrum, which is visible when viewed from Earth.","The atmosphere absorbs the shorter wavelengths of light, such as blue and red, leaving only the longer wavelengths of light, such as green and yellow, visible when looking towards the sun.",B,kaggle200,"A monochrome or red rainbow is an optical and meteorological phenomenon and a rare variation of the more commonly seen multicolored rainbow. Its formation process is identical to that of a normal rainbow (namely the reflection/refraction of light in water droplets), the difference being that a monochrome rainbow requires the sun to be close to the horizon; i.e., near sunrise or sunset. The low angle of the sun results in a longer distance for its light to travel through the atmosphere, causing shorter wavelengths of light, such as blue, green and yellow, to be scattered and leaving primarily red. White light from the sun consists of a continuous spectrum of colors which, when divided, forms the colors of the rainbow: violet, indigo blue, blue, green, yellow, orange, and red. In its interaction with the Earth's atmosphere, sunlight tends to scatter the shorter wavelengths, i.e. the blue photons, which is why the sky is perceived as blue. On the other hand, at sunset, when the atmosphere is denser, the light is less scattered, so that the longer wavelengths, red, are perceived. 
The Sun emits light across the visible spectrum, so its color is white, with a CIE color-space index near (0.3, 0.3), when viewed from space or when the Sun is high in the sky. The Solar radiance per wavelength peaks in the green portion of the spectrum when viewed from space. When the Sun is very low in the sky, atmospheric scattering renders the Sun yellow, red, orange, or magenta, and in rare occasions even green or blue. Despite its typical whiteness (white sunrays, white ambient light, white illumination of the Moon, etc.), some cultures mentally picture the Sun as yellow and some even red; the reasons for this are cultural and exact ones are the subject of debate. @@ -1734,7 +1734,7 @@ Another example comes from ancient Greece during the Geometric Age (1100–900 B In visual art, horror vacui (, ; ), also referred to as kenophobia (from ), is the filling of the entire surface of a space or an artwork with detailThe mature work of the French Renaissance engraver Jean Duvet consistently exhibits horror vacui. The artwork in the Where's Wally? series of children's books is a commonly known example of horror vacui, as are many of the small books written or illustrated by the macabre imagination of Edward Gorey. The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacuiThe mature work of the French Renaissance engraver Jean Duvet consistently exhibits horror vacui. 
-The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacuiIn physics, ""horror vacui"" reflects Aristotle's idea that ""nature abhors an empty space.""Other examples of horror vacui can be seen in the densely decorated carpet pages of Insular illuminated manuscripts, where intricate patterns and interwoven symbols may have served ""apotropaic as well as decorative functions."" The interest in meticulously filling empty spaces is also reflected in Arabesque decoration in Islamic ar[SEP]What is the meaning of the term ""horror vacui""?","['B', 'D', 'C']",1.0 +The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacuiIn physics, ""horror vacui"" reflects Aristotle's idea that ""nature abhors an empty space.""Other examples of horror vacui can be seen in the densely decorated carpet pages of Insular illuminated manuscripts, where intricate patterns and interwoven symbols may have served ""apotropaic as well as decorative functions."" The interest in meticulously filling empty spaces is also reflected in Arabesque decoration in Islamic ar[SEP]What is the meaning of the term ""horror vacui""?","['B', 'D', 'E']",1.0 What is the Droste effect?,"The Droste effect (), known in art as an example of mise en abyme, is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appear. The illustration reappears on the cocoa package held by the nurse, inducing a recursive visual effect known today as the Droste effect.Törnqvist, Egil. They devised a method of filling in the artwork's central void in an additional application of the Droste effect by successively rotating and shrinking an image of the artwork. === Advertising === In the 20th century, the Droste effect was used to market a variety of products. The effect has been a motif, too, for the cover of many comic books, where it was especially popular in the 1940s. 
== Effect == === Origins === The Droste effect is named after the image on the tins and boxes of Droste cocoa powder which displayed a nurse carrying a serving tray with a cup of hot chocolate and a box with the same image, designed by Jan Misset.""Bedenker van Droste-effect bekend"", Trouw, 1 August 1994. File:Droste 1260359-nevit.jpg|Droste effect by image manipulation (using GIMP). === Medieval art === The Droste effect was anticipated by Giotto early in the 14th century, in his Stefaneschi Triptych. File:Polittico Stefaneschi, dettaglio.jpg| ... who is holding the triptych itself. === M. C. Escher === The Dutch artist M. C. Escher made use of the Droste effect in his 1956 lithograph Print Gallery, which portrays a gallery containing a print which depicts the gallery, each time both reduced and rotated, but with a void at the centre of the image. The effect is seen in the Dutch artist M. C. Escher's 1956 lithograph Print Gallery, which portrays a gallery that depicts itself. Apart from advertising, the Droste effect is displayed in the model village at Bourton-on-the-Water: this contains a model of itself, with two further iterations. The image would proclaim the wholesome effect of chocolate milk and became inseparable from the Droste brand. Little Giant Comics #1 (July 1938) is said to be the first-published example of an infinity cover. 
== See also == * Beyond the Infinite Two Minutes, a movie prominently incorporating the effect * Chinese boxes * Dream within a dream * Fractal * Homunculus argument * Infinity mirror * Infinite regress * Matryoshka doll * Infinity * Quine * Scale invariance * Self-similarity * Story within a story § Fractal fiction * Video feedback == Notes == == References == == External links == * Escher and the Droste effect * The Math Behind the Droste Effect (article by Jos Leys summarizing the results of the Leiden study and article) * Droste Effect with Mathematica * Droste Effect from Wolfram Demonstrations Project Category:Artistic techniques Category:Recursion Category:Symmetry By making dynamic and progressive commercials for Droste, CSM provided a rejuvenation of Droste's image. The Droste effect is a theme in Russell Hoban's children's novel, The Mouse and His Child, appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself. The effect is named after a Dutch brand of cocoa, with an image designed by Jan Misset in 1904. File:JudgeMagazine19Jan1918.png|Judge cover, 19 January 1918 File:LibertyMagazine10May1924.png|Liberty cover, 10 May 1924 File:Royal Baking Powder.jpg|Royal Baking Powder, early 20th century === Comic books === The Droste effect has been a motif for the cover of comic books for many years, known as an ""infinity cover"". Droste B.V. () is a Dutch chocolate manufacturer. It is believed that this illustration was created by Jan (Johannes) Musset, being inspired by a pastel known as La Belle Chocolatière (""The Pretty Chocolate Girl""). After the turn of the century the company had been exporting its products to Belgium, Germany and France, and in 1905 it entered the American market. ===The nurse=== The famous illustration of the woman in nurse clothes, holding a plate with a cup of milk and a Droste cocoa package, first appeared on Droste products around the year 1900. 
This produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows. In the meantime, Droste's assortment had grown to numerous cocoa and chocolate products, the famous Dutch chocolate letters included. Drost is a Dutch occupational surname. ",The Droste effect is a type of optical illusion that creates the appearance of a three-dimensional image within a two-dimensional picture.,"The Droste effect is a type of packaging design used by a variety of products, named after a Dutch brand of cocoa, with an image designed by Jan Misset in 1904.","The Droste effect is a type of painting technique used by Dutch artist M. C. Escher in his 1956 lithograph Print Gallery, which portrays a gallery that depicts itself.","The Droste effect is a recursive image effect in which a picture appears within itself in a place where a similar picture would realistically be expected to appear. This creates a loop that can continue as far as the image's resolution allows, and is named after a Dutch brand of cocoa.",The Droste effect is a type of recursive algorithm used in computer programming to create self-referential images.,D,kaggle200,"The Dutch artist M. C. Escher made use of the Droste effect in his 1956 lithograph ""Print Gallery"", which portrays a gallery containing a print which depicts the gallery, each time both reduced and rotated, but with a void at the centre of the image. The work has attracted the attention of mathematicians including Hendrik Lenstra. They devised a method of filling in the artwork's central void in an additional application of the Droste effect by successively rotating and shrinking an image of the artwork. The Droste effect (), known in art as an example of ""mise en abyme"", is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appear. 
This produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows. The Droste effect is a theme in Russell Hoban's children's novel, ""The Mouse and His Child"", appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself. @@ -2100,7 +2100,7 @@ On occasion, a halonium atom will rearrange to a carbocationSubsequently, an alk In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formedFor the remainder of this article, the term carbonium ion will be used in this latter restricted sense, while non-classical carbocation will be used to refer to any carbocation with C–C and/or C–H σ-bonds delocalized by bridging. On occasion, a halonium atom will rearrange to a carbocationFor the remainder of this article, the term ""carbonium ion"" will be used in this latter restricted sense, while ""non-classical carbocation"" will be used to refer to any carbocation with C–C and/or C–H σ-bonds delocalized by bridging. In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formedthe pinacol is asymmetrical), then the one which creates a more stable carbocation participates in the reactionPrior to the observation of five-coordinate carbocations by Olah and coworkers, carbocation and carbonium ion were used interchangeablyThe migration of alkyl groups in this reaction occurs in accordance with their usual migratory aptitude, i.e.phenyl carbocation > hydride > tertiary carbocation (if formed by migration) > secondary carbocation (if formed by migration) > methyl carbocationThus the actual product no doubt consists of a mixture of enantiomers but the enantiomers with inverted configuration would predominate and complete racemization does not occurs. 
-According to the IUPAC, a ""carbocation"" is any cation containing an even number of electrons in which a significant portion of the positive charge resides on a carbon atomAlthough the initial carbocation is already tertiary, the oxygen can stabilize the positiv[SEP]What is the fate of a carbocation formed in crystalline naphthalene?","['E', 'D', 'A']",1.0 +According to the IUPAC, a ""carbocation"" is any cation containing an even number of electrons in which a significant portion of the positive charge resides on a carbon atomAlthough the initial carbocation is already tertiary, the oxygen can stabilize the positiv[SEP]What is the fate of a carbocation formed in crystalline naphthalene?","['E', 'A', 'D']",1.0 What is the main focus of the Environmental Science Center at Qatar University?,"The Environmental Science Center is a research center at Qatar University and was established in 1980 to promote environmental studies across the state of Qatar with main focus on marine science, atmospheric and biological sciences. The center also has 12 labs equipped with state-of-arts instruments. == See also == * Qatar University * Qatar University Library * Mariam Al Maadeed * Center for Advanced Materials (CAM) == External links == * Research and Graduate Studies Office at Qatar University * Qatar University Newsroom == References == Category:1980 establishments in Qatar Category:Organisations based in Doha Category:Research institutes in Qatar Category:Educational institutions established in 1980 Category:Qatar University Category:Education by subject Category:Human impact on the environment Category:Oceans Category:Fishing Category:Earth sciences Category:Nature Category:Biology For the past 18 years, ESC monitored and studied Hawksbill turtle nesting sites in Qatar. == History == * in 1980 it was named Scientific and Applied Research Center (SARC). * in 2005 it was restructured and renamed Environmental Studies Center (ESC). 
* in 2015, the business name was changed to Environmental Science Center (ESC) to better reflect the research-driven objectives. == Research clusters == The ESC has 3 major research clusters that cover areas of strategic importance to Qatar. According to the Qatar Foundation, its initiatives are oriented towards education, science and research, and community development. The Scientific Center of Kuwait, located in Salmiya, Kuwait, serves as a center for environmental education in the Persian Gulf region. The clusters are: * Atmospheric sciences cluster * Earth sciences cluster * Marine sciences cluster with 2 majors: ** Terrestrial Ecology ** Physical and Chemical Oceanography == UNESCO Chair in marine sciences == The first of its kind in the Arabian Gulf region, United Nations Educational, Scientific and Cultural Organization (UNESCO) have announced the establishment of the UNESCO Chair in marine sciences at QU's Environmental Science Center. It aims to build the educational, life and social experience of students. ===Student Clubs=== Student clubs are divided into three categories: *Departmental and College clubs such as the Statistics Club *Talent and skill clubs such as the Voice Club and the Poetry Club *Clubs and public associations, such as the Book Club == Research centers == Research is conducted in and across colleges and is buoyed by an increased research budget, a multimillion-dollar Research Complex and partnerships. 
;18 centers of research # Biomedical Research Center (BRC) # Center for Advanced Materials (CAM) # Environmental Science Center (ESC) # Social and Economic Survey Research Institute (SESRI) # Laboratory Animal Research Center (LARC) # Qatar University Young Scientists Center (QUYSC) # Ibn Khaldon Center for Humanities and Social Sciences # Central Lab Unit (CLU) # Center for Entrepreneurship (CFE) # Center for Sustainable Development (CSD) # Centre for Law and Development (CLD) # Early Childhood Center # Gas Processing Center (GPC) # Gulf Studies Center (GSC) # KINDI Center for Computing Research (KINDI) # National Center for Educational Development (NCED) # Qatar Mobility Innovation Center (QMIC) # Qatar Transportation and Traffic Safety Center (QTTSC) == Notable alumni == *Noor Al Mazroei, chef and activist *Abdulla bin Abdulaziz bin Turki Al Subaie, Qatari Minister of Municipality *Moza bint Nasser, consort of Hamad bin Khalifa Al Thani *Mohammed bin Abdulrahman bin Jassim Al Thani, Qatari Prime Minister *Jawaher bint Hamad bin Suhaim Al Thani, wife of the Emir of Qatar *Mariam Al Maadeed, Qatari scientist, Vice President for Research and Graduate Studies at Qatar University *Nasser Al-Khelaifi, businessman, president of Paris Saint-Germain *Saad Al Mohannadi, Qatari President of Public Works Authority Ashgal *Amal Al-Malki, academic *Abdulrahman bin Hamad bin Jassim bin Hamad Al Thani, Qatari Minister of Culture == See also == * Qatar University Library * Qatar University Stadium * Education in Qatar ==References== Category:Universities in Qatar Category:Educational institutions established in 1973 Category:Organisations based in Doha Category:1973 establishments in Qatar It is the largest college by both number of programs and student population at Qatar University, with a total of 2,383 students; 1,933 Arts majors and 450 Science majors. A QAR 20 million Scientific and Applied Research Center is under construction. 
==Colleges and Departments== ===College of Arts and Sciences=== thumb|The Women's College of Arts and Sciences at Qatar University in 2008 The College of Arts and Sciences was established in 2004 through the merging of two former colleges; the College of Humanities and Social Sciences, and the College of Science. Qatar University (; transliterated: Jami'at Qatar) is a public research university located on the northern outskirts of Doha, Qatar. US Education department investigated Georgetown University, Texas A&M;, and Cornell and Rutgers over their funding from Qatar. == Science and research == A program known as the Qatar Science Leadership Program was initiated in 2008 in order to help develop aspiring applied science students. Departments: *Department of Arabic Language **History *Department of Biological & Environmental Sciences **Biological Sciences **Environmental Sciences *Department of Chemistry & Earth Sciences **Chemistry Program accredited by the CSC *Department of English Literature and Linguistics *Department of Health Sciences **Biomedical Program accredited by the NAACLS **Human Nutrition Program **Public health *Department of Humanities *Department of Mass Communication **Mass Communication Program *Department of Mathematics, Statistics & Physics *Department of Social Sciences **Social Work **Psychology **Sociology **International Affairs **Policy, Planning and Development **Statistics *Sport Science Programs: *Arabic for Non-Native Speakers Program ===College of Business & Economics=== thumb|Men's College of Business & Economics at Qatar University in 2008 Founded in 1985, it has begun work on a new QR 185 million facility to accommodate its student body and provide resources.QU 2008/2009 Brochure Dr. Nitham M. Hindi was appointed as Dean in August 2010. The center will be housed and managed by the College of Engineering and its funding will be obtained from different sources including Qatar University, companies and government agencies. 
The services provided by the center have been designed to address the necessities and challenges of both Qatar University and the Qatari Industry. Research topics include Arabic language computer technologies, computer security and data analysis. ===Environmental initiatives=== In the environmental sciences, Qatar Foundation founded the Qatar Green Building Council in 2009, and the Qatar Environmental & Energy Research Institute (QEERI). ===Medicine initiatives=== In 2012, the Qatar Biomedical Research Institute (QBRI) was established to develop translational biomedical research and biotechnology, focusing on diabetes, cancer and cardiovascular diseases. The Program offers a Bachelor of Science degree which allows for one of 3 concentrations: *Sport Management *Exercise and Fitness *Physical Education ==Honors Program== Qatar University's Honors Program was established in 2009. to provide academic opportunities for high- achieving students. These centers sit alongside the Qatar Faculty of Islamic Studies which began its first graduate classes in the 2007–2008 academic year. For courses which are not offered as Honors, students may propose an ""Honors Contract"" to specify honors-level objectives and goals to be monitored by a sponsoring professor. ==Qatar University student clubs== Qatar University is the biggest and most popular university in Qatar, as stated by UniRank. The college began with a total of 150 students (93 women and 57 men) and was later expanded to become the University of Qatar in 1977 with four new colleges : Education, Humanities & Social Sciences, Sharia & Law & Islamic Studies, and Science. Qatar Foundation for Education, Science and Community Development () is a state-led non-profit organization in Qatar, founded in 1995 by then-emir Hamad bin Khalifa Al Thani and his second wife Moza bint Nasser Al-Missned. 
","Environmental studies, with a main focus on marine science, atmospheric and political sciences.","Environmental studies, with a main focus on marine science, atmospheric and physical sciences.","Environmental studies, with a main focus on marine science, atmospheric and social sciences.","Environmental studies, with a main focus on marine science, atmospheric and biological sciences.","Environmental studies, with a main focus on space science, atmospheric and biological sciences.",D,kaggle200,"The main focus of comprehensive crawls is to automatically harvest the biggest number of Czech web resources. The list of URLs is from organisation CZ.NIC. Another subgenre is called , in which sexual gratification of the player is the main focus of the game. The application of mediatization theory to the study of religion was initiated by Stig Hjarvard with a main focus on Northern Europe. @@ -2138,7 +2138,7 @@ A distinguishing characteristic of the class ""Mammalia"" is the presence of mam Mammals are divided into 3 groups: prototherians, metatherians, and eutheriansHistology A mammary gland is a specific type of apocrine gland specialized for manufacture of colostrum when giving birthConcerning metatherians and eutherians, only females have functional mammary glandsIn general most mammals develop mammary glands in pairs along these lines, with a number approximating the number of young typically birthed at a timeThese mammary glands are modified sweat glandsThese mammary glands are modified sebaceous glandsIn the case of udders, pairs of mammary glands comprise a single mass, with more than one nipple (or teat) hanging from itMammary glands can be identified as apocrine because they exhibit striking ""decapitation"" secretionMany sources assert that mammary glands are modified sweat glandsIn the case of breasts, each mammary gland has its own nipple (e.g., human mammary glands)Some authors dispute that and argue instead that they are sebaceous glands. 
General The breasts of female humans vary from most other mammals that tend to have less conspicuous mammary glandsThese mammary glands are modified sweat glands. Male mammals typically have rudimentary mammary glands and nipples, with a few exceptions: male mice do not have nipples, male marsupials do not have mammary glands, and male horses lack nipples and mammary glands- In females, H19 is expressed postnatally during puberty and pregnancy in the mammary glands, and in the uterus during pregnancy. -A distinguishing characteristic of the class ""Mammalia"" is the presence of mammary glandsIn the case of [SEP]What is the function of mammary glands in mammals?","['A', 'B', 'E']",1.0 +A distinguishing characteristic of the class ""Mammalia"" is the presence of mammary glandsIn the case of [SEP]What is the function of mammary glands in mammals?","['A', 'E', 'B']",1.0 What is the relationship between interstellar and cometary chemistry?,"The similarity between interstellar and cometary ices (as well as comparisons of gas phase compounds) have been invoked as indicators of a connection between interstellar and cometary chemistry. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula. == Research == thumb|Transition from atomic to molecular gas at the border of the Orion molecular cloud Research is progressing on the way in which interstellar and circumstellar molecules form and interact, e.g. by including non-trivial quantum mechanical phenomena for synthesis pathways on interstellar particles. The authors describe the scientific nature of comets, as well as their varying roles and perceptions throughout history. 
This research could have a profound impact on our understanding of the suite of molecules that were present in the molecular cloud when our solar system formed, which contributed to the rich carbon chemistry of comets and asteroids and hence the meteorites and interstellar dust particles which fall to the Earth by the ton every day. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). This has prompted a still ongoing search for interstellar molecules which are either of direct biological importance – such as interstellar glycine, discovered in a comet within our solar system in 2009 – or which exhibit biologically relevant properties like chirality – an example of which (propylene oxide) was discovered in 2016 – alongside more basic astrochemical research. == Spectroscopy == One particularly important experimental tool in astrochemistry is spectroscopy through the use of telescopes to measure the absorption and emission of light from molecules and atoms in various environments. The theoretical importance granted to these spectroscopic results was greatly expanded upon the development of quantum mechanics, as the theory allowed for these results to be compared to atomic and molecular emission spectra which had been calculated a priori. === History of astrochemistry === While radio astronomy was developed in the 1930s, it was not until 1937 that any substantial evidence arose for the conclusive identification of an interstellar molecule – up until this point, the only chemical species known to exist in interstellar space were atomic. Comets have appeared in numerous works of fiction. 
The evolution of human understanding of comets is also detailed, and thinkers and astronomers such as Edmond Halley, Immanuel Kant, and William Huggins are discussed. The word ""astrochemistry"" may be applied to both the Solar System and the interstellar medium. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form. == History == As an offshoot of the disciplines of astronomy and chemistry, the history of astrochemistry is founded upon the shared history of the two fields. By comparing astronomical observations with laboratory measurements, astrochemists can infer the elemental abundances, chemical composition, and temperatures of stars and interstellar clouds. In the thirty years afterwards, a small selection of other molecules were discovered in interstellar space: the most important being OH, discovered in 1963 and significant as a source of interstellar oxygen) and H2CO (formaldehyde), discovered in 1969 and significant for being the first observed organic, polyatomic molecule in interstellar space. The discovery of interstellar formaldehyde – and later, other molecules with potential biological significance, such as water or carbon monoxide – is seen by some as strong supporting evidence for abiogenetic theories of life: specifically, theories which hold that the basic molecular components of life came from extraterrestrial sources. In fact, CO is such a common interstellar molecule that it is used to map out molecular regions. The development of advanced observational and experimental spectroscopy has allowed for the detection of an ever-increasing array of molecules within solar systems and the surrounding interstellar medium. When it was discovered in 1939 it was not recognized as a comet and designated as asteroid 1939 TN. 
Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, as well as the structure of stellar interiors. Comet is a 1985 popular-science book by Carl Sagan and Ann Druyan. In July 2015, scientists reported that upon the first touchdown of the Philae lander on comet 67/P surface, measurements by the COSAC and Ptolemy instruments revealed sixteen organic compounds, four of which were seen for the first time on a comet, including acetamide, acetone, methyl isocyanate and propionaldehyde. thumb|center|upright=4.5|The chemical diversity in the different types of astronomical object is noteworthy. 
","Cometary chemistry is responsible for the formation of interstellar molecules, but there is no direct connection between the two.","Interstellar and cometary chemistry are the same thing, just with different names.","There is a possible connection between interstellar and cometary chemistry, as indicated by the similarity between interstellar and cometary ices and the analysis of organics from comet samples returned by the Stardust mission.","There is no relationship between interstellar and cometary chemistry, as they are two completely different phenomena.","Interstellar chemistry is responsible for the formation of comets, but there is no direct connection between the two.",C,kaggle200,"Several structures have been described as cometary knots or cometary globules that surround R Coronae Borealis, which is a peculiar star described as potentially the result of a white dwarf merger or final helium shell flash that periodically dims due to a build-up of carbon dust surrounding it, acting as a 'natural coronograph'. 
Project Hyperion, one of the projects of Icarus Interstellar has looked into various feasibility issues of crewed interstellar travel. Its members continue to publish on crewed interstellar travel in collaboration with the Initiative for Interstellar Studies. With the experiments onboard of the EXPOSE facilities, various aspects of astrobiology were investigated that could not be sufficiently approached by use of laboratory facilities on ground. The chemical set of experiments is designed to reach a better understanding of the role of interstellar, cometary and planetary chemistry in the origin of life. Comets and meteorites are interpreted as exogenous sources of prebiotic molecules on the early Earth. All data achieved from the astrobiological experiments on both EXPOSE missions will add to the understanding of the origin and evolution of life on Earth and on the possibility of its distribution in space or origin elsewhere. @@ -2550,7 +2550,7 @@ The primary anatomic components of BAOS include stenotic nares (pinched or narro In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe, caused by acoustic density waves in the primordial plasma of the early universeIn the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmology. 
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scalesThese are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe, and can be observed in the cosmic microwave background angular power spectrumThis is yet another indication that at least some of what is being called brachycephalic airway syndrome is not linked to skull shape and has previously been found to cause fluid retention and swelling Sky surveys and baryon acoustic oscillations Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scalesBAOs set up a preferred length scale for baryonsIn the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmologyBAO measurements help cosmologists understand more about the nature of dark energy (which causes the accelerating expansion of the universe) by constraining cosmological parameters. 
-The primary anatomic components of BAOS include stenotic nares (pinched or narrowed nostrils), and elongated soft palate, tracheal hypoplasia (reduced trachea size), and nasopharyngeal turbinates.Other risk factors for BAOS include a lower craniofacial ratio (shorter muzzle in com[SEP]What is the significance of Baryon Acoustic Oscillations (BAOs) in the study of the universe?","['A', 'D', 'E']",1.0 +The primary anatomic components of BAOS include stenotic nares (pinched or narrowed nostrils), and elongated soft palate, tracheal hypoplasia (reduced trachea size), and nasopharyngeal turbinates.Other risk factors for BAOS include a lower craniofacial ratio (shorter muzzle in com[SEP]What is the significance of Baryon Acoustic Oscillations (BAOs) in the study of the universe?","['A', 'E', 'D']",1.0 What can be inferred about the electronic entropy of insulators and metals based on their densities of states at the Fermi level?,"As the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy (where large may be approximately defined as ). == Application to different materials classes == Insulators have zero density of states at the Fermi level due to their band gaps. Metals have non-zero density of states at the Fermi level. Metals with free-electron-like band structures (e.g. alkali metals, alkaline earth metals, Cu, and Al) generally exhibit relatively low density of states at the Fermi level, and therefore exhibit fairly low electronic entropies. Electronic entropy is thus most relevant for the thermodynamics of condensed phases, where the density of states at the Fermi level can be quite large, and the electronic entropy can thus contribute substantially to thermodynamic behavior. 
Several other approximations can be made, but they all indicate that the electronic entropy should, to first order, be proportional to the temperature and the density of states at the Fermi level. Thus, the density of states-based electronic entropy is essentially zero in these systems. Electronic entropy is the entropy of a system attributable to electrons' probabilistic occupation of states. One can then re-write the entropy as: :S=-k_{\rm B} \int n(E) \left [ f \ln f +(1- f) \ln \left ( 1- f \right ) \right ]dE This is the general formulation of the density-of-states based electronic entropy. ===Useful approximation=== It is useful to recognize that the only states within ~ of the Fermi level contribute significantly to the entropy. As the entropy is given by a sum over the probabilities of occupation of those states, there is an entropy associated with the occupation of the various electronic states. However, when oxides are metallic (i.e. the Fermi level lies within an unfilled, flat set of bands), oxides exhibit some of the largest electronic entropies of any material. Transition metals, wherein the flat d-bands lie close to the Fermi level, generally exhibit much larger electronic entropies than the free-electron like metals. A second form of electronic entropy can be attributed to the configurational entropy associated with localized electrons and holes. To a first approximation (i.e. assuming that the charges are distributed randomly), the molar configurational electronic entropy is given by: :S \approx n_\text{sites} \left [ x \ln x + (1-x) \ln (1-x) \right ] where is the fraction of sites on which a localized electron/hole could reside (typically a transition metal site), and is the concentration of localized electrons/holes. Instead of engineering band filling, one may also engineer the shape of the band structure itself via introduction of nanostructures or quantum wells to the materials. 
==Configurational electronic entropy== Configurational electronic entropy is usually observed in mixed-valence transition metal oxides, as the charges in these systems are both localized (the system is ionic), and capable of changing (due to the mixed valency). The distinction between the valence and conduction bands is meaningless in metals, because conduction occurs in one or more partially filled bands that take on the properties of both the valence and conduction bands. == Band gap == In semiconductors and insulators the two bands are separated by a band gap, while in conductors the bands overlap. Switching from summing over individual states to integrating over energy levels, the entropy can be written as: :S=-k_{\rm B} \int n(E) \left [ p(E) \ln p(E) +(1- p(E)) \ln \left ( 1- p(E)\right ) \right ]dE where is the density of states of the solid. In nonmetals, the valence band is the highest range of electron energies in which electrons are normally present at absolute zero temperature, while the conduction band is the lowest range of vacant electronic states. Electronic entropy can substantially modify phase behavior, as in lithium ion battery electrodes, high temperature superconductors, and some perovskites. More specifically, thermoelectric materials are intentionally doped to exhibit only partially filled bands at the Fermi level, resulting in high electronic entropies. In solid-state physics, the valence band and conduction band are the bands closest to the Fermi level, and thus determine the electrical conductivity of the solid. 
Metals have non-zero density of states at the Fermi level, and thus, their electronic entropy should be proportional to the temperature and density of states at the Fermi level.","Insulators have non-zero density of states at the Fermi level, and therefore, their density of states-based electronic entropy is proportional to the temperature and density of states at the Fermi level. Metals have zero density of states at the Fermi level, and thus, their electronic entropy is essentially zero.","Insulators and metals have varying densities of states at the Fermi level, and thus, their electronic entropy may or may not be proportional to the temperature and density of states at the Fermi level.","Insulators and metals have non-zero density of states at the Fermi level, and thus, their electronic entropy should be proportional to the temperature and density of states at the Fermi level.",B,kaggle200,"It is useful to recognize that the only states within ~ of the Fermi level contribute significantly to the entropy. Other states are either fully occupied, , or completely unoccupied, . In either case, these states do not contribute to the entropy. If one assumes that the density of states is constant within of the Fermi level, one can derive that the electron heat capacity, equal to: Metals have non-zero density of states at the Fermi level. Metals with free-electron-like band structures (e.g. alkali metals, alkaline earth metals, Cu, and Al) generally exhibit relatively low density of states at the Fermi level, and therefore exhibit fairly low electronic entropies. Transition metals, wherein the flat d-bands lie close to the Fermi level, generally exhibit much larger electronic entropies than the free-electron like metals. Insulators have zero density of states at the Fermi level due to their band gaps. Thus, the density of states-based electronic entropy is essentially zero in these systems. 
@@ -2607,7 +2607,7 @@ Dielectric loss and non-zero DC conductivity in materials cause absorptionVarian Cole–Davidson equation This equation is used when the dielectric loss peak shows asymmetric broadening. Havriliak–Negami relaxation This equation considers both symmetric and asymmetric broadening. Kohlrausch–Williams–Watts function Fourier transform of stretched exponential function. -Curie–von Schweidler law [SEP]What is the relationship between dielectric loss and the transparency of a material?","['B', 'D', 'E']",1.0 +Curie–von Schweidler law [SEP]What is the relationship between dielectric loss and the transparency of a material?","['B', 'E', 'D']",1.0 What is the purpose of measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs) in ultra-low field MRI?,"Retrieved: 14 October 2010. ==Low-temperature superconductivity== === Magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR)=== The biggest application for superconductivity is in producing the large-volume, stable, and high-intensity magnetic fields required for MRI and NMR. By using a lock-in amplifier the device can read only the frequency corresponding to the magnetic field, ignoring many other sources of noise. ==Instrumentation== A Scanning SQUID Microscope is a sensitive near-field imaging system for the measurement of weak magnetic fields by moving a Superconducting Quantum Interference Device (SQUID) across an area. In condensed matter physics, scanning SQUID microscopy is a technique where a superconducting quantum interference device (SQUID) is used to image surface magnetic field strength with micrometre-scale resolution. Further description of the physics of SQUIDs and SQUID microscopy can be found elsewhere.""Current Imaging using Magnetic Field Sensors"" L.A. Knauss, S.I. Woods and A. OrozcoJ. A magnetic field image can be converted to a current density image in about 1 or 2 seconds. 
==Applications== The scanning SQUID microscope was originally developed for an experiment to test the pairing symmetry of the high-temperature cuprate superconductor YBCO. For magnetic current imaging systems, a small (about 30 µm wide) high temperature SQUID is used. With this post-processing of a magnetic image and the low noise present in SQUID images, it is possible to enhance the spatial resolution by factors of 5 or more over the near-field limited magnetic image. In addition such devices require extensive vibration dampening if precise height control is to be maintained. ===High temperature scanning SQUID microscope=== thumb|Scanning SQUID microscope A high temperature Scanning SQUID Microscope using a YBCO SQUID is capable of measuring magnetic fields as small as 20 pT (about 2 million times weaker than the earth's magnetic field). Kirtley, IEEE Spectrum p. 40, Dec. (1996) ===Magnetic field detection using SQUID=== Magnetic current imaging uses the magnetic fields produced by currents in electronic devices to obtain images of those currents. As the SQUID is the most sensitive detector of magnetic fields available and can be constructed at submicrometre widths via lithography, the scanning SQUID microscope allows magnetic fields to be measured with unparalleled resolution and sensitivity. Tsuei et al. used a scanning SQUID microscope to measure the local magnetic field at each of the devices in the figure, and observed a field in ring A approximately equal in magnitude Φ0/2A, where A was the area of the ring. As noted, the coordinate axes selected for this analysis are shown in Figure 1. ===Magnetic Current Imaging=== SQUIDs are the most sensitive magnetic sensors known. The Scanning SQUID Microscopy (SSM) data are current density images and current peak images. 
In the same property behind the scanning SQUID microscope, the phase of the wavefunction is also altered by the amount of magnetic flux passing through the junction, following the relationship Δφ=π(Φ0). With enough electrons moving, the aggregate magnetic field can be detected by superconducting sensors. As the SQUID material must be superconducting, measurements must be performed at low temperatures. To use the DC SQUID to measure standard magnetic fields, one must either count the number of oscillations in the voltage as the field is changed, which is very difficult in practice, or use a separate DC bias magnetic field parallel to the device to maintain a constant voltage and consequently constant magnetic flux through the loop. The SQUID itself can be used as the pickup coil for measuring the magnetic field, in which case the resolution of the device is proportional to the size of the SQUID. As a result, alone a SQUID can only be used to measure the change in magnetic field from some known value, unless the magnetic field or device size is very small such that Φ < Φ0. 
","To measure the magnetization in the same direction as the static magnetic field in T1 relaxation.","To create a T1-weighted image that is useful for assessing the cerebral cortex, identifying fatty tissue, and characterizing focal liver lesions.","To obtain sufficient signal quality in the microtesla-to-millitesla range, where MRI has been demonstrated recently.","To measure the independent relaxation processes of T1 and T2 in each tissue after excitation.","To change the repetition time (TR) and obtain morphological information in post-contrast imaging.",C,kaggle200,"To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging. Aluminium oxide is an electrical insulator used as a substrate (silicon on sapphire) for integrated circuits but also as a tunnel barrier for the fabrication of superconducting devices such as single-electron transistors and superconducting quantum interference devices (SQUIDs). 
@@ -2634,7 +2634,7 @@ Illuminance at a given distance in lux The lux (lx) is this SI unit for illumina The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of lightThe foot-candle is a non-metric unit of illuminance that is used in photography.Illuminance was formerly often called brightness, but this leads to confusion with other uses of the word, such as to mean luminanceIn the CGS system, the unit of illuminance is the phot, which is equal to 10000 lux- where both sides now have units of power "," The procedure for conversion from spectral radiance to luminance is standardized by the CIE and ISO.Brightness is the term for the subjective impression of the objective luminance measurement standard (see Objectivity (science) § Objectivity in measurement for the importance of this contrast). In photometry, illuminance is the total luminous flux incident on a surface, per unit areaLuminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction""Brightness"" should never be used for quantitative description, but only for nonquantitative references to physiological sensations and perceptions of light. 
Illuminance at a given distance in lux The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of lightLuminous emittance is also known as luminous exitance.In SI units illuminance is measured in lux (lx), or equivalently in lumens per square metre (lm·m−2)Similarly, luminous emittance is the luminous flux per unit area emitted from a surfaceIt is a measure of how much the incident light illuminates the surface, wavelength-weighted by the luminosity function to correlate with human brightness perceptionIt describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle. -The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of lightThe foot-candle is a non-metric unit of illuminance that is used in photography.Illuminance was formerly often called brightness, but this leads to confusion with other uses of the word, such as to mean luminanceIn the CGS system, the unit of illuminance is the phot, which is equal to 10000 lux- where both sides now have units of power [SEP]What is the difference between illuminance and luminance?","['D', 'B', 'E']",0.5 +The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of lightThe foot-candle is a non-metric unit of illuminance that is used in photography.Illuminance was formerly often called brightness, but this leads to confusion with other uses of the word, such as to mean luminanceIn 
the CGS system, the unit of illuminance is the phot, which is equal to 10000 lux- where both sides now have units of power [SEP]What is the difference between illuminance and luminance?","['B', 'D', 'E']",1.0 What is a magnetic monopole in particle physics?,"In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). A magnetic monopole, if it exists, would have the defining property of producing a magnetic field whose monopole term is non-zero. A true magnetic monopole would be a new elementary particle, and would violate Gauss's law for magnetism . (See below.) ==Poles and magnetism in ordinary matter== All matter isolated to date, including every atom on the periodic table and every particle in the Standard Model, has zero magnetic monopole charge. A magnetic monopole would have a net north or south ""magnetic charge"". In some theoretical models, magnetic monopoles are unlikely to be observed, because they are too massive to create in particle accelerators (see below), and also too rare in the Universe to enter a particle detector with much probability. Coleman, ""The Magnetic Monopole 50 years Later"", reprinted in Aspects of Symmetry The known elementary particles that have electric charge are electric monopoles. The hypothetical existence of a magnetic monopole would imply that the electric charge must be quantized in certain units; also, the existence of the electric charges implies that the magnetic charges of the hypothetical magnetic monopoles, if they exist, must be quantized in units inversely proportional to the elementary electric charge. 
Electric monopole, or object with non-zero divergency of electrical field may refer to: * Electric charge ==See also== * Magnetic monopole (non-zero divergency of magnetic field) In mathematics, a monopole is a connection over a principal bundle G with a section of the associated adjoint bundle. ==Physical interpretation== Physically, the section can be interpreted as a Higgs field, where the connection and Higgs field should satisfy the Bogomolny equations and be of finite action. == See also == * Nahm equations * Instanton * Magnetic monopole * Yang–Mills theory However, in the multipole expansion of a magnetic field, the ""monopole"" term is always exactly zero (for ordinary matter). While these should not be confused with hypothetical elementary monopoles existing in the vacuum, they nonetheless have similar properties and can be probed using similar techniques. For instance, a wide class of particles known as the X and Y bosons are predicted to mediate the coupling of the electroweak and strong forces, but these particles are extremely heavy and well beyond the capabilities of any reasonable particle accelerator to create. == Searches for magnetic monopoles == Experimental searches for magnetic monopoles can be placed in one of two categories: those that try to detect preexisting magnetic monopoles and those that try to create and detect new magnetic monopoles. This constitutes the first example of a quasi-magnetic monopole observed within a system governed by quantum field theory. 
==See also== * Bogomolny equations * Dirac string * Dyon * Felix Ehrenhaft * Flatness problem * Gauss's law for magnetism * Ginzburg–Landau theory * Halbach array * Horizon problem * Instanton * Magnetic monopole problem * Meron * Soliton * 't Hooft–Polyakov monopole * Wu–Yang monopole * Magnetic current ==Notes== ==References== ===Bibliography=== ==External links== Category:Hypothetical elementary particles Category:Magnetism Category:Gauge theories Category:Hypothetical particles Category:Unsolved problems in physics Multipole magnets are magnets built from multiple individual magnets, typically used to control beams of charged particles. A magnetic dipole is something whose magnetic field is predominantly or exactly described by the magnetic dipole term of the multipole expansion. Magnetism in bar magnets and electromagnets is not caused by magnetic monopoles, and indeed, there is no known experimental or observational evidence that magnetic monopoles exist. Further advances in theoretical particle physics, particularly developments in grand unified theories and quantum gravity, have led to more compelling arguments (detailed below) that monopoles do exist. Nevertheless, Pierre Curie pointed out in 1894 that magnetic monopoles could conceivably exist, despite not having been seen so far. ===Quantum mechanics=== The quantum theory of magnetic charge started with a paper by the physicist Paul Dirac in 1931. A magnetic monopole has never been observed in experiments (Magnetic Monopoles, report from the Particle Data Group, updated August 2015 by D. Milstead and E.J. Weinberg). 
",A hypothetical elementary particle that is an isolated electric charge with both positive and negative poles.,A hypothetical elementary particle that is an isolated magnet with no magnetic poles.,"A hypothetical elementary particle that is an isolated electric charge with only one electric pole, either a positive pole or a negative pole.",A hypothetical elementary particle that is an isolated magnet with both north and south poles.,"A hypothetical elementary particle that is an isolated magnet with only one magnetic pole, either a north pole or a south pole.",E,kaggle200,"A magnetic dipole is something whose magnetic field is predominantly or exactly described by the magnetic dipole term of the multipole expansion. The term ""dipole"" means ""two poles"", corresponding to the fact that a dipole magnet typically contains a ""north pole"" on one side and a ""south pole"" on the other side. This is analogous to an electric dipole, which has positive charge on one side and negative charge on the other. However, an electric dipole and magnetic dipole are fundamentally quite different. In an electric dipole made of ordinary matter, the positive charge is made of protons and the negative charge is made of electrons, but a magnetic dipole does ""not"" have different types of matter creating the north pole and south pole. Instead, the two magnetic poles arise simultaneously from the aggregate effect of all the currents and intrinsic moments throughout the magnet. Because of this, the two poles of a magnetic dipole must always have equal and opposite strength, and the two poles cannot be separated from each other. The axino is a hypothetical elementary particle predicted by some theories of particle physics. Peccei–Quinn theory attempts to explain the observed phenomenon known as the strong CP problem by introducing a hypothetical real scalar particle called the axion. 
Adding supersymmetry to the model predicts the existence of a fermionic superpartner for the axion, the axino, and a bosonic superpartner, the ""saxion"". They are all bundled up in a chiral superfield. The pole model usually treats magnetic charge as a mathematical abstraction, rather than a physical property of particles. However, a magnetic monopole is a hypothetical particle (or class of particles) that physically has only one magnetic pole (either a north pole or a south pole). In other words, it would possess a ""magnetic charge"" analogous to an electric charge. Magnetic field lines would start or end on magnetic monopoles, so if they exist, they would give exceptions to the rule that magnetic field lines neither start nor end. Some theories (such as Grand Unified Theories) have predicted the existence of magnetic monopoles, but so far, none have been observed.