A person learns the world around him through simplification and classification. The stars have attracted the explorers of the world since ancient times, and seemed mysterious due to their inaccessibility. But if at least one sense organ is capable of perceiving a phenomenon, we can describe it and try to classify it.
So did Hipparchus of Nicaea, an ancient Greek astronomer, mechanic, geographer and mathematician who lived about 2200 years ago and worked most of his life on the island of Rhodes. The mystery of the starry sky was extremely attractive to him, and in an effort to unravel it he compiled a star catalog in which he divided the stars into 6 classes according to their brightness. Stars of the 6th magnitude were those barely visible to the naked eye, and the brightest were assigned to the 1st magnitude. Each successive magnitude differed from the previous one in brightness by roughly a factor of two. Unfortunately, his catalog has not survived to this day in its original form, and we know about it only from the works of other great scientists of antiquity (Pappus, Strabo and Ptolemy).
However, this estimate was too imprecise, and in 1856 the English astronomer Norman Robert Pogson gave a more formal definition of stellar magnitude. Since then, a star of the 1st magnitude differs in brightness from a star of the 6th magnitude by exactly 100 times. This logarithmic scale is still in use today, and the apparent brightness of stars in the sky is measured with photodetectors. On this scale, stars of adjacent magnitudes differ in brightness by a factor of about 2.512 (the fifth root of 100). The brightest star in the night sky outside the Solar System (in the visible spectrum), Sirius, has an apparent magnitude of −1.46, while our Sun has an apparent magnitude of −26.74.
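Pogson's relation fits in a couple of lines of code. A minimal sketch in Python (the function name is mine, not a standard one):

```python
# Pogson's definition: a difference of 5 magnitudes is exactly a factor
# of 100 in brightness, so 1 magnitude is a factor of 100**(1/5) ≈ 2.512.

def brightness_ratio(m_brighter: float, m_fainter: float) -> float:
    """How many times brighter an object of magnitude m_brighter is
    than one of magnitude m_fainter (lower magnitude = brighter)."""
    return 100 ** ((m_fainter - m_brighter) / 5)

print(brightness_ratio(1, 6))  # 1st vs 6th magnitude: exactly 100.0
# The Sun (-26.74) vs Sirius (-1.46): roughly 1.3e10
print(f"{brightness_ratio(-26.74, -1.46):.3g}")
```

This is why the magnitude scale is convenient: ratios of brightness turn into simple differences of magnitudes.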
However, it is clear that the brightness of a distant star seen from the Earth is determined not only by its true brightness, but also by the distance to it, as well as by whatever lies in between – cosmic dust, interstellar matter, etc. Therefore, intrinsically brighter stars may appear dim to us simply because they are farther away than the others.
To compare the true brightness of stars, the absolute magnitude is used. It is the apparent magnitude an object would have if it were observed from a distance of exactly 10 parsecs (32.6 light years), with no interference such as the interstellar medium or cosmic dust between us and the object. By hypothetically placing different objects at the same distance from us, we can compare their brightness directly.
The scale is logarithmic, and a difference of 5 units on it corresponds to a change in brightness of 100 times. As with apparent magnitude, the lower the value, the brighter the object.
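The link between apparent and absolute magnitude is the distance modulus, M = m − 5·log10(d / 10 pc). A small sketch, assuming the distance is given in parsecs and ignoring extinction:

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Distance modulus: M = m - 5*log10(d / 10 pc).
    Ignores extinction by interstellar dust and gas."""
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# Sirius: apparent magnitude -1.46 at a distance of about 2.64 pc
print(round(absolute_magnitude(-1.46, 2.64), 1))  # ≈ 1.4
```

An object exactly 10 parsecs away has equal apparent and absolute magnitudes, since log10(1) = 0.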
Since stars shine in different wavelength ranges, the UBV photometric system is used to evaluate them, where U is the ultraviolet band of the spectrum, B is blue, and V is visual. The absolute magnitude of the Sun in the visual band is MV = +4.83. There is also the absolute bolometric magnitude of an object: its total brightness over the entire frequency range.
For very bright objects, the magnitude takes negative values. For example, MB of the Milky Way is −20.8. But since galaxies (and other large objects) are much larger than 10 parsecs, their absolute magnitude is defined as that of a point object that would emit as much light as the entire galaxy.
Some of the stars visible in our sky with the naked eye are actually so bright that, if they were 10 parsecs away from us, they would look brighter than the planets of the Solar System and would cast shadows. Among them are Rigel (−7.0), Deneb (−7.2) and Betelgeuse (−5.6). Sirius, which looks so bright, has an absolute magnitude of only +1.4, though that is still brighter than the Sun. The brightest objects in the visible part of the Universe are active galactic nuclei (for example, the quasar CTA-102), whose absolute magnitude can reach −32.
Adaptive and active optics
Twinkling stars in the night sky are beautiful and romantic, but terribly inconvenient for scientific observations. The atmosphere, thanks to which we exist on the surface of the Earth, is in constant motion due to temperature differences between its layers, which generate wind. The varying refractive index of different layers of the atmosphere causes the image of the sky to fluctuate constantly. The concept of astronomical seeing describes how much the Earth's atmosphere distorts the view of the stars at a given place and time.
Adaptive optics is used to combat these atmospheric distortions. Its development was started by the United States during the Cold War to monitor Soviet satellites; by 1991 the technology was declassified and began to be applied in civilian science.
To correct image fluctuations, telescopes use a wavefront sensor, a mirror with variable geometry, and a computer that controls it. If the observed object is too dim for the sensor to work correctly, a “reference star” – a bright light source located near the object – can be used to cancel out the fluctuations of its image.
Naturally, such a star cannot always be found, so some telescopes install a laser that creates an “artificial reference star.” The laser beam goes up, is reflected back by the layers of the atmosphere, and hits the sensor. Its fluctuations are analyzed, and a mirror based on microelectromechanical systems adjusts its shape in real time within small limits.
Do not confuse adaptive and active optics. Almost all modern telescopes are reflectors, and their main element is a large mirror. Mirrors for such telescopes used to be made heavy and thick so that they could resist deformation under external forces – wind and temperature changes. Since the 1980s, telescope mirrors have been made thin, or even split into segments. Actuators installed behind them maintain the required shape of the mirror using an image-quality sensor and a computer. This is active optics: the shape adjustment is relatively slow, typically performed once every few seconds.
Albedo
In relation to the planet Earth, albedo usually means the fraction of solar radiation, across all wavelengths, that is diffusely reflected by the Earth’s surface (or, for example, by the upper atmosphere due to clouds). In astronomy, albedo is used and defined in several different ways.
Astronomical photometry studies the optical albedo of planets, satellites and asteroids, its dependence on wavelength and illumination angle, and its variation over time. The absolute albedo of an astronomical object indicates whether there is ice on its surface, while the dependence of the albedo on the illumination angle provides information about the properties of its regolith.
The highest albedo belongs to Saturn’s moon Enceladus, at 0.99. Eris, the second largest dwarf planet after Pluto, also has a high albedo of 0.96. Small asteroids of the asteroid belt, on the contrary, have very low albedos, down to 0.05. Comet nuclei usually have an albedo of the order of 0.04. The Moon’s overall albedo is 0.14.
Since the albedo depends on the illumination angle, there are two types of optical albedo. The geometric albedo of a celestial body is measured when the light source is directly behind the observer. The Bond albedo is the fraction of the total incident electromagnetic energy that is reflected. These two values can differ significantly for the same body, which can cause confusion: for example, Neptune’s geometric albedo is 0.442, while its Bond albedo is 0.29.
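The two albedos are linked through the phase integral q, via the relation A_Bond = p·q, where p is the geometric albedo. A small sketch using the Neptune values quoted above:

```python
def phase_integral(bond_albedo: float, geometric_albedo: float) -> float:
    """Phase integral q from the relation A_Bond = p * q,
    where p is the geometric albedo."""
    return bond_albedo / geometric_albedo

# Neptune: Bond albedo 0.29, geometric albedo 0.442
print(round(phase_integral(0.29, 0.442), 2))  # ≈ 0.66
```

The phase integral encodes how the body's reflectivity varies with phase angle, which is why the two albedos differ.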
Radar astronomy studies the properties of celestial bodies using radar. The high accuracy of measurements of these properties is achieved due to the fact that all parameters of the initial beam are known. In this area of research, radar albedo is used – a normalized reflected signal with polarization opposite to the original one. The radar albedo of a smooth metal sphere would be one. The radar albedo of the Moon is 0.06.
Astronomical unit
Units of length appeared and improved in parallel with the progress of human civilization. Kilometers are enough for orientation on the surface of the Earth, but already within the Solar System this unit turns out to be too small. As a result, it is customary to measure such distances in terms of the distance from the Earth to the Sun – the astronomical unit, or AU.
Attempts to estimate the distance from the Earth to the Sun date back to the third century BC (Aristarchus of Samos), but the Dutch mechanic, physicist, mathematician, astronomer and inventor Christiaan Huygens was the first to determine it accurately. True, historians question his achievement because of the many controversial assumptions he made. After him, the astronomers Giovanni Domenico Cassini and Jean Richer came close to the answer. They measured the parallax of Mars during its close opposition of 1672 and from it calculated the ratio of the Earth–Sun and Mars–Sun distances. From that, the solar parallax could be found, which gives an estimate of the distance to the Sun. Direct measurements of the distances to Venus and Mars using radar became possible in the 1960s.
Initially, the astronomical unit was taken to be the average distance from the Earth to the Sun (since the Earth moves in an elliptical orbit, the distance varies by about 3% over the year) – roughly 150 million km, or about 8 light minutes. After several refinements driven by improved equipment and theory, in 2012 the astronomical unit was defined as exactly 149,597,870,700 meters. Precision matters here, since another fundamental astronomical unit of length, the parsec, is defined through the AU.
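The parsec is the distance from which a segment of 1 AU, perpendicular to the line of sight, subtends an angle of one arcsecond. With the exact 2012 value of the AU this is easy to compute:

```python
import math

AU_M = 149_597_870_700  # the astronomical unit in meters, exact since 2012

def parsec_in_meters() -> float:
    """Distance at which 1 AU subtends an angle of exactly 1 arcsecond."""
    one_arcsec = math.pi / (180 * 3600)  # one arcsecond in radians
    return AU_M / math.tan(one_arcsec)

pc = parsec_in_meters()
print(f"{pc:.4e} m")          # ≈ 3.0857e+16 m
print(f"{pc / AU_M:.0f} AU")  # ≈ 206265 AU
```

One parsec comes out to about 3.26 light years, which is why 10 parsecs corresponds to the 32.6 light years mentioned above.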
Adaptive and active optics
Baryon acoustic oscillations
Fast neutron capture process
Wolf–Rayet star
Open star cluster
Type Ia supernova
Type II supernova
Umbra and penumbra
The Big Bang Theory
Kirkwood gaps