36. The visible stars

When our ancestors looked up into the night sky, they beheld the wonder of the stars. With our ubiquitous electrical lighting, many of us don’t see the same view today – our city skies are too bright from artificial light (previously discussed under Skyglow). We can see the brightest handful of stars, but most of us have forgotten how to navigate the night sky by recognising the constellations and other features, such as the intricately structured band of the Milky Way and the Magellanic Clouds. There are features in the night sky other than stars (the moon, the planets, meteors, and comets), but we’re going to concentrate on the stars.

The night sky, showing the Milky Way

Composite image of the night sky from the European Southern Observatory at Cerro Paranal, Chile, showing the Milky Way (bright band) and the two Magellanic Clouds (far left). (Creative Commons Attribution 4.0 International image by the European Southern Observatory.)

The Milky Way counts because it is made of stars. To our ancestors, it resembled a stream of milk flung across the night sky, a continuous band of brightness. But a small telescope reveals that it is made up of millions of faint stars, packed so closely that they blend together to our naked eyes. The Milky Way is our galaxy, a collection of roughly 100 billion stars and their planets.

The stars appear fixed in place with respect to one another. (Unlike the moon, planets, meteors, and comets, which move relative to the stars, and are thereby distinguished from them.) The stars are not fixed in the sky relative to the Earth, though. Each night the stars wheel around in circles, moving over the hours as if they were stuck to the sky and the sky itself were rotating.

The stars move in their circles and come back to the same position in the sky approximately a day later. But not exactly a day later. The stars return to the same position after 23 hours, 56 minutes, and a little over 4 seconds, if you time it precisely. We measure our days by the sun, which appears to move through the sky in roughly the same way as the stars, but which moves more slowly, taking a full 24 hours (on average, over the course of a year) to return to the same position.

This difference is caused by the physical arrangement of the sun, Earth, and stars. Our Earth spins around on its axis once every 23 hours, 56 minutes, and 4 and a bit seconds. However in this time it has also moved in its orbit around the sun, by a distance of approximately one full orbit (which takes a year) divided by 365.24 (the average number of days in a year). This means that from the viewpoint of a person on Earth, the sun has moved a little bit relative to the stars, and it takes an extra (day/365.24) = 236 seconds for the Earth to rotate far enough for the sun to appear as though it has returned to the same position. This is why the solar day (the way we measure time with our clocks) is almost 4 minutes longer than the Earth’s rotation period (called the sidereal day, “sidereal” meaning “relative to the stars”).

Sidereal and solar days

Diagram showing the difference between a sidereal day (23 hours, 56 minutes, 4 seconds) when the Earth has rotated once, and a solar day (24 hours) when the sun appears in the same position to an observer on Earth.

Another way of looking at it is that in one year the Earth spins on its axis 366.24 times, but in that same time the Earth has moved once around the sun, so only 365.24 solar days have passed. The sidereal day is thus 365.24/366.24 = 99.727% of the length of the solar day.
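If you want to check this arithmetic yourself, here’s a quick Python sketch using only the figures quoted above. It recovers the length of the sidereal day and the roughly four-minute difference:

```python
# Sidereal vs solar day, using only the figures quoted above.
SOLAR_DAY = 86400.0        # seconds in a mean solar day (24 hours)
DAYS_PER_YEAR = 365.24     # mean number of solar days in a year

# In one year the Earth spins 366.24 times but only 365.24 solar days pass,
# so the sidereal day is 365.24/366.24 of a solar day.
sidereal_day = SOLAR_DAY * DAYS_PER_YEAR / (DAYS_PER_YEAR + 1)

print(f"Sidereal day: {sidereal_day:.1f} seconds")                             # ~86164 s
print(f"Shorter than a solar day by {SOLAR_DAY - sidereal_day:.0f} seconds")   # ~236 s
hours, rem = divmod(sidereal_day, 3600)
minutes, seconds = divmod(rem, 60)
print(f"That's {int(hours)} h {int(minutes)} min {seconds:.0f} s")             # 23 h 56 min 4 s
```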

The consequence of all this is that slowly, throughout the year, the stars we see at night change. On 1 January, some stars are hidden directly behind the sun, and we can’t see them or nearby stars, because they are in the sky during the day, when their light is drowned out by the light of the sun. But six months later, the Earth is on the other side of its orbit, and those stars are now high in the sky at midnight and easily visible, whereas some of the stars that were visible in January are now in the sky at daytime and obscured.

This change in visibility of the stars over the course of a year applies mostly to stars above the equatorial regions. If we imagine the equator of the Earth extended directly upwards (a bit like the rings of Saturn) towards the stars, it defines a plane cutting the sky in half. This plane is called the celestial equator.

However the sun doesn’t move along this path. The Earth’s axis is tilted relative to its orbit by an angle of approximately 23.5°. So the sun’s apparent path through the sky moves up and down by ±23.5° over the course of a year, which is what causes our seasons. When the sun is higher in the sky it is summer, when it’s lower, it’s winter.

So as well as the celestial equator, there is another plane bisecting the sky, the plane that the sun appears to follow around the Earth – or equivalently, the plane of the Earth’s (and other planets’) orbit around the sun. This plane is called the ecliptic. It’s the stars along and close to the ecliptic that appear the closest to and thus the most obscured by the sun throughout the year.

Celestial equator and ecliptic plane

Diagram of the celestial equator and the ecliptic plane relative to the Earth and sun (sizes and distances not to scale). The Earth revolves around the sun in the ecliptic plane. (Adapted from a public domain image by NASA, from Wikimedia Commons.)

The constellations of the ecliptic have another name: the zodiac. We’ve met this term before as part of the name of the zodiacal light. The zodiacal light occurs in the plane of the planetary orbits, the ecliptic, which is the same as the plane of the zodiac. As an aside, the constellations of the zodiac include those familiar to people through the pre-scientific tradition of Western astrology: Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpius (“Scorpio” in astrology), Ophiuchus (ignored in astrology), Sagittarius, Capricornus (“Capricorn” in astrology), Aquarius, and Pisces. The system of astrology abstracts these real-world constellations into 12 idealised segments of the sky, each covering exactly 30° of the circle (in fact the constellations cover different amounts), and assigns portentous meanings to the positions of the sun, moon, and planets within each segment.

The stars close to the zodiac are completely obscured by the sun for part of the year, while the stars near the celestial equator appear close to the sun but might still be visible (with difficulty) immediately after sunset or before dawn. The stars far from these planes, however, are more easily visible throughout the whole year. The north star, Polaris, is almost directly above the North Pole, and it and stars nearby are visible from most of the northern hemisphere year-round. There is no equivalent “south pole star”, but the most southerly constellations—such as the recognisable Crux, or Southern Cross—are similarly visible year-round through most of the southern hemisphere.

Axial tilt of Earth

Diagram showing the axial tilt of the Earth relative to the plane of the orbit (the ecliptic), and the positions of Polaris and stars in the zodiac and on the celestial equator. Sizes and distances are not to scale – in reality Polaris is so far away that the angle it makes between the June and December positions of Earth is only 0.007 seconds of arc (about two millionths of a degree).

Interestingly, Polaris is never visible from the southern hemisphere. Similarly, Crux is not visible from almost all of the northern hemisphere, except for a band close to the equator, from where it appears extremely low on the southern horizon. Crux is centred at around 60° south celestial latitude (usually known as declination), which means that it is below the horizon from all points north of latitude 30°N. (In practice, stars near the horizon are obscured by topography and the long path through the atmosphere, so it is difficult to spot Crux from anywhere north of about 20°N.)

In general, stars at a given declination can never be seen from Earth latitudes 90° or more away, and only with difficulty from 80°-90° away. The reason is straightforward enough. From our spherical Earth, if you are standing at latitude x°N, all parts of the sky from (90-x)°S declination to the south celestial pole are below the horizon. And similarly if you’re at x°S, all parts of the sky from (90-x)°N declination to the north celestial pole are below the horizon. The Earth itself is in the way.

On the other hand, if you are standing at latitude x°N, all parts of the sky north of declination (90-x)°N are visible every night of the year, while stars with declinations between (90-x)°N and (90-x)°S are visible only at certain times of the year.
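To make the geometry concrete, here’s a small Python sketch of these visibility rules. It uses the standard result that a star of declination d reaches a highest altitude of 90° − |latitude − d| and a lowest altitude of |latitude + d| − 90°, and it ignores the practical problems of refraction and topography near the horizon mentioned above:

```python
# Visibility of a star of declination `dec` for an observer at latitude `lat`
# (both in degrees, north positive). Ignores refraction and topography.
def star_visibility(lat: float, dec: float) -> str:
    if abs(lat - dec) >= 90:          # highest altitude is at or below the horizon
        return "never visible"
    if abs(lat + dec) >= 90:          # lowest altitude is at or above the horizon
        return "circumpolar (visible every night)"
    return "rises and sets (night-time visibility depends on the season)"

# Polaris is at declination ~ +89 degrees, Crux is centred around -60 degrees.
for lat in (65, 40, 20, -35):
    print(f"latitude {lat:+3d}:  Polaris -> {star_visibility(lat, 89):40s}"
          f"  Crux -> {star_visibility(lat, -60)}")
```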

Visibility of stars from globe Earth

Visibility of stars from parts of Earth is determined simply by sightlines from the surface of the globe.

With a spherical Earth, the geometry of the visibility of stars is readily understandable. On a flat Earth, however, there’s no obvious reason why some stars would be visible from some parts of the Earth and not others, let alone the details of how the visibilities change with latitude and throughout the year.

If we consider the usual flat Earth model, with the North Pole at the centre of a disc, and southern regions around the rim, it is difficult to imagine how Polaris can be seen from regions north of the equator but not south of it. And it is even more difficult to justify how southern stars such as those in Crux can be visible from Australia, southern Africa, and South America but not from anywhere near the centre of the disc. The southern stars can be seen in the night sky from any two of these locations simultaneously, but if you use a radio telescope during daylight you can observe the same stars from all three at once. Things get even worse with Antarctica. In the southern winter, it is night at virtually every location in Antarctica at the same time, and many of the same stars are visible, yet cannot be seen from the northern hemisphere.

Visibility of stars from flat Earth

Visibility of stars from a flat Earth. All stars must be above the plane, but why are some visible in some parts of the world but not others? Particularly the southern stars, which can be seen from widely separated locations but not from the regions between them.

In any flat Earth model, there should be a direct line of sight from every location to any object above the plane of the Earth. Attempting to explain why there isn’t requires special pleading: contrived circumstances such as otherwise undetectable objects blocking lines of sight, or light rays bending or dimming in ways inconsistent with known physics.

The fact that, when you look up at night, you can’t see all the stars visible from other parts of the Earth is a simple consequence of the Earth being a globe.

35. The Eötvös effect

In the opening years of the twentieth century, scientists in the field of geodesy (measuring the shape and gravitational field of the Earth) were interested in making measurements of the strength of gravity all over the Earth’s surface. To do this, they trekked to remote regions of the world with sensitive gravimeters, to take the readings. On land this was straightforward enough, but they also wanted measurements taken at sea.

Around 1900, teams from the Institute of Geodesy in Potsdam took voyages into the Atlantic, Indian, and Pacific Oceans on ships, and made measurements using their gravimeters. The collected data were brought back to Potsdam for analysis. There, the readings fell under the scrutinising eyes of the Hungarian physicist Loránd Eötvös, who specialised in studying the variation of Earth’s gravitational field with position on the surface. He noticed an odd thing about the readings.

Loránd Eötvös

Because of the impracticality of stopping the ship every time they wanted to take a reading, the scientists measured the Earth’s gravity while the ships were moving. There was no reason to suppose this would make any difference. But Eötvös found a systematic effect. Gravity measurements taken while the ship was moving eastward were lower than readings taken while the ship was moving westward.

Eötvös realised that this effect was being caused by the rotation of the Earth. The Earth’s equatorial circumference is 40,075 km, and it rotates eastward once every sidereal day (23 hours, 56 minutes). So the ground at the equator is moving at a linear speed of 465 metres per second. To keep an object on the surface moving in this circular path, rather than the straight line dictated by Newton’s First Law of Motion, gravity must supply a centripetal force. The necessary force is equal to the object’s mass times the velocity squared, divided by the radius of the circular path (6378 km). This comes to m×465²/6378000 = 0.0339m. So per kilogram of mass, a force of 0.0339 newtons is needed to enforce the circular path, an amount easily supplied by the Earth’s gravity. (This is why objects don’t get flung off the Earth by its rotation, a complaint of some spherical Earth sceptics.)

What this means is that the effective acceleration due to gravity measured for an object sitting on the equator is reduced by 0.0339 m/s² (the same units as 0.0339 N/kg) compared to if the Earth were not rotating. But if you’re on a ship travelling east at, say, 10 m/s, the centripetal force required to keep you on the Earth’s surface is greater, equal to 475²/6378000 = 0.0354 N/kg. This reduces the apparent measured gravity by a larger amount, making the measured value of gravity smaller. And if you’re on a ship travelling west at 10 m/s, the centripetal force is 455²/6378000 = 0.0324 N/kg, reducing the apparent gravity by a smaller amount and making the measured value of gravity greater. The difference in apparent gravity between the ships travelling east and west is 0.003 m/s², which is about 0.03% of the acceleration due to gravity. For a person of mass 70 kg, this is a difference in apparent weight of about 20 grams (strictly speaking, a difference in weight of 0.2 newtons, which is 20 grams multiplied by acceleration due to gravity).

Eötvös set out these theoretical calculations, and then organised an expedition to test his predictions. In 1908, the experiment was carried out in the Black Sea, with two separate ships travelling east and west past one another so that the measurements could be made at the same time. The results matched Eötvös’s predictions, confirming the effect.

In general (if you’re not at the equator), your linear speed caused by the rotation of the Earth is equal to 465 m/s times the cosine of your latitude, while the radius of your circular motion is also equal to 6378 km times the cosine of your latitude. The centripetal force formula uses the square of the velocity divided by the radius, so this results in a cosine(latitude) term in the final result. That is, the size of the Eötvös effect also varies as the cosine of the latitude. If you measure it at 60° latitude, either north or south, the difference in gravity between east and west travelling ships is half that measured at the equator.
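For the record, the standard Eötvös correction used in gravimetry, for a ship moving due east at speed u (with no northward motion), works out to 2Ωu·cos(latitude) + u²/R, where Ω is the Earth’s rotation rate and R its radius. Here’s a short Python sketch of that formula, using the example numbers from above; it reproduces both the ~0.003 m/s² east–west difference at the equator and the cosine dependence on latitude:

```python
import math

OMEGA = 2 * math.pi / 86164    # Earth's rotation rate in rad/s (one sidereal day)
R = 6378e3                     # equatorial radius in metres

def eotvos_reduction(u: float, lat_deg: float) -> float:
    """Reduction in measured gravity (m/s^2) for a ship moving east at u m/s
    (negative u = westward) at the given latitude, relative to sitting still.
    Standard Eotvos correction with no northward motion: 2*Omega*u*cos(lat) + u**2/R."""
    lat = math.radians(lat_deg)
    return 2 * OMEGA * u * math.cos(lat) + u**2 / R

for lat in (0, 30, 60):
    diff = eotvos_reduction(10, lat) - eotvos_reduction(-10, lat)
    print(f"latitude {lat:2d} deg: east-vs-west difference = {diff:.4f} m/s^2")
# latitude  0 deg: ~0.0029 m/s^2 (the ~0.003 quoted above)
# latitude 60 deg: half of that, following the cosine of the latitude
```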

The Eötvös effect is well known in the field of gravimetry, and is routinely corrected for when taking measurements of the Earth’s gravitational strength from moving ships[1], aircraft[2], or submarines[3]. The reference on submarines refers to a gravitational measurement module for use on military submarines to enhance their navigation capability as undersea instruments of warfare. This module includes an Eötvös effect correction for when the sub is moving east or west. You can bet your bottom dollar that no military force in the world would make such a correction to their navigation instruments if it weren’t necessary.

One paper I found reports measurements made of the detailed structure of gravitational anomalies over the Mariana Trough in the Pacific Ocean south of Japan. It states:

Shipboard free-air gravity anomalies were calculated by subtracting the normal gravity field data from observed gravity field data, with a correction applied for the Eötvös effect using Differential Global Positioning System (DGPS) data.[4]

The results look pretty cool:

Mariana Trough gravity anomalies map

Map of gravitational anomalies in the Mariana Trough region of the Pacific Ocean, as obtained by shipboard measurement, corrected for the Eötvös effect. (Figure reproduced from [4].)

Another paper shows the Eötvös effect more directly:

Gravity measurements from moving ship

Graph showing measurements of Earth’s gravitational field strength versus distance travelled by a ship in the South Indian Ocean. In the leftmost section (16), the ship is moving slowly westward. In the central section (17) the ship is moving at a faster speed westward, showing the increase in measured gravity. In the right section (18) the ship is moving eastward at slow speed, and the gravity readings are lower than the readings taken in similar positions while moving westward. (Figure reproduced from [5].)

If the Earth were flat, on the other hand, there would be no Eötvös effect at all. If the flat Earth is not rotating (as most models posit, with the sun moving above it in a circular path), obviously there is no centripetal acceleration happening at all. Even if you adopt a model where the flat Earth rotates about the North Pole, the centripetal acceleration at every point on the surface is parallel to the surface, towards the pole, not directed downwards. So an Eötvös-like effect would actually cause a slight deflection in the angle of gravity, but almost zero change in the magnitude of the gravity.

The Eötvös effect shows not only that the Earth is rotating, but that it is rotating about an axis that lies beneath the ground, not about a point somewhere on the surface. If you stand on the equator and face east, the rotation carries you in the direction you are facing, around a circle that curves downwards ahead of you, not around to the left or right. Furthermore, the cosine term shows that at equal latitudes both north and south, the rotation makes the same angle relative to the surface, which can only be the case if the Earth is symmetrical about the equator: i.e. spherical.

References:

[1] Rousset, D., Bonneville, A., Lenat, J.F. “Detailed gravity study of the offshore structure of Piton de la Fournaise volcano, Réunion Island”. Bulletin of Volcanology, 49(6), p. 713-722, 1987. https://doi.org/10.1007/BF01079822

[2] Thompson, L.G., LaCoste, L.J. “Aerial gravity measurements”. Journal of Geophysical Research, 65(1), p. 305-322, 1960. https://doi.org/10.1029/JZ065i001p00305

[3] Moryl, J., Rice, H., Shinners, S. “The universal gravity module for enhanced submarine navigation”. In IEEE 1998 Position Location and Navigation Symposium, p. 324-331, April 1998. https://doi.org/10.1109/PLANS.1998.670124

[4] Kitada, K., Seama, N., Yamazaki, T., Nogi, Y., Suyehiro, K., “Distinct regional differences in crustal thickness along the axis of the Mariana Trough, inferred from gravity anomalies”. Geochemistry, Geophysics, Geosystems, 7(4), 2006. https://doi.org/10.1029/2005GC001119

[5] Persson, A. “The Coriolis Effect: Four centuries of conflict between common sense and mathematics, Part I: A history to 1885”. History of Meteorology, 2, p.1-24, 2005. https://www.semanticscholar.org/paper/The-Coriolis-Effect%3A-Four-centuries-of-conflict-and-Persson/c9e72567af65e44384fba048bbf491d3ac3a30ff

34. Earth’s internal heat

Opening disclaimer: I’m going to be talking about “heat” a lot in this one. Formally, “heat” is defined as a process of energy flow, and not as an amount of thermal energy in a body. However to people who aren’t experts in thermodynamics (i.e. nearly everyone), “heat” is commonly understood as an “amount of hotness” or “amount of thermal energy”. To avoid the linguistic awkwardness of using the five-syllable phrase “thermal energy” in every single instance, I’m just going to use this colloquial meaning of “heat”. Even some of the papers I cite use “heat” in this colloquial sense. I’ve already done it in the title, which to be technically correct should be the more awkward and less pithy “Earth’s internal thermal energy”.

The interior of the Earth is hot. Miners know first hand that as you go deeper into the Earth, the temperature increases. The deepest mine on Earth is the TauTona gold mine in South Africa, reaching 3.9 kilometres below the surface. At this depth, the rock temperature is 60°C, and considerable cooling technology is required to bring the air temperature down to a level where the miners can survive. The Kola Superdeep Borehole in Russia reached a depth of 12.2 km, where it found the temperature to be 180°C.

Lava, Hawaii

Lava—molten rock—emerging from the Earth in Hawaii. (Public domain image by the United States Geological Survey, from Wikimedia Commons.)

Deeper in the Earth, the temperature gets hot enough to melt rock. The results are visible in the lava that emerges from volcanic eruptions. How did the interior of the Earth get that hot? And exactly how hot is it down there?

For many years, geologists have been measuring the amount of thermal energy flowing out of the Earth, at thousands of measuring stations across the planet. A 2013 paper analyses some 38,374 heat flow measurements across the globe to produce a map of the mean heat flow out of the Earth, shown below[1]:

Mean heat flow out of the Earth

Mean heat flow out of the Earth in milliwatts per square metre, as a function of location. (Figure reproduced from [1].)

From the map, you can see that most of Earth’s heat emerges at the mid-ocean ridges, deep underwater. This makes sense, as this is where rising plumes of magma from deep within the mantle are acting to bring new rock material to the crust. The coolest areas are generally geologically stable regions in the middle of tectonic plates.

Hydrothermal vent

Subterranean material (and heat) emerging from a hydrothermal vent on Eifuku Seamount, Marianas Trench Marine National Monument. (Public domain image by the United States National Oceanic and Atmospheric Administration, from Wikimedia Commons.)

Although the heat flow out of the Earth’s surface is of the order of milliwatts per square metre, the surface has a lot of square metres. The overall heat flow out of the Earth comes to a total of around 47 terawatts[2]. In contrast, the sun emits close to 4×10¹⁴ terawatts of energy in total, and the solar energy arriving at the Earth is 1360 watts per square metre, over 10,000 times as much as the heat energy leaking out of the Earth itself. So the sun dominates Earth’s heating and weather systems by roughly that factor.
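As a quick sanity check on those numbers, here’s a Python snippet that spreads the 47 TW over the Earth’s surface and compares it with the incoming sunlight, using the round figures quoted above:

```python
import math

R_EARTH = 6.371e6          # mean radius of the Earth in metres
INTERNAL_HEAT = 47e12      # total internal heat flow in watts (47 TW)
SOLAR_IRRADIANCE = 1360.0  # sunlight arriving at Earth, W per square metre

surface_area = 4 * math.pi * R_EARTH**2
internal_flux = INTERNAL_HEAT / surface_area

print(f"Surface area:       {surface_area:.2e} m^2")             # ~5.1e14 m^2
print(f"Internal heat flux: {internal_flux * 1000:.0f} mW/m^2")  # ~92 mW/m^2, matching the map's units
print(f"Sunlight is ~{SOLAR_IRRADIANCE / internal_flux:.0f} times stronger")  # ~15,000x
```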

So the Earth generates some 47 TW of thermal power. Where does this huge amount of energy come from? To answer that, we need to go all the way back to when the Earth was formed, some 4.5 billion years ago.

Our sun formed from the gaseous and dusty material distributed throughout the Galaxy. This material is not distributed evenly, and where there is a denser concentration, gravity acts to draw in more material. As the material is pulled in, any small motions are amplified into an overall rotation. The result is an accretion disc, with matter spiralling into a growing mass at the centre. When the central concentration accumulates enough mass, the pressure ignites nuclear reactions and a star is born. Some of the leftover material continues to orbit the new star and forms smaller accretions that eventually become planets or smaller bodies.

The process of accreting matter generates thermal energy. Gravitational potential energy reduces as matter pulls closer together, and the resulting collisions between matter particles convert it into thermal energy, heating up the accumulating mass. Our Earth was born hot. As the matter settled into a solid body, the shrinking further heated the core through the Kelvin-Helmholtz mechanism. The total heat energy from the initial formation of the Earth dissipates only very slowly into space, and that process is still going on today, 4.5 billion years later.

It’s not known precisely how much of this primordial heat is left in Earth or how much flows out, but various different studies suggest it is somewhere in the range of 12-30 TW, roughly a quarter to two-thirds of Earth’s total measured heat flux[3]. So that’s not the only source of the heat energy flowing out of the Earth.

The other source of Earth’s internal heat is radioactive decay. Some of the matter in the primordial gas and dust cloud that formed the sun and planets was produced in the supernova explosions of previous generations of stars. These explosions produce atoms of radioactively unstable isotopes. Many of these decay relatively rapidly and are essentially gone by now. But some isotopes have very long half-lives, most importantly: potassium-40 (1.25 billion years), thorium-232 (14.05 billion years), uranium-235 (703.8 million years), and uranium-238 (4.47 billion years). These isotopes still exist in significant quantities inside the Earth, where they continue to decay, releasing energy.

We have a way of probing how much radioactive energy is released inside the Earth. The decay reactions produce neutrinos (which we’ve met before), and because neutrinos travel almost unhindered through the Earth, they can be detected by neutrino observatories. These geoneutrinos have energy ranges that distinguish them from cosmic neutrino sources, and of course they always emerge from underground. The decay rates inferred from geoneutrino observations correspond to a total radiothermal energy production of 10-30 TW, of the same order as the primordial heat flux. (The neutrinos themselves also carry away part of the energy from the radioactive decays, roughly 5 TW, but this is an additional component not deposited as thermal energy inside the Earth.)

Radiothermal energy generated within the Earth over time

Approximate radiothermal energy generated within the Earth, plotted as a function of time, from the formation of the Earth 4.5 billion years ago, to the present. The four main isotopes are plotted separately, and the total is shown as the dashed line. (Public domain figure adapted from data in [4], from Wikimedia Commons.)

To within the uncertainties, the sum of the estimated primordial and measured radiothermal energy fluxes is equal to the total measured 47 TW flux. So that’s good.

Once you know how much heat is being generated inside the Earth, you can start to apply heat transfer equations, knowing the thermodynamic properties of rock and iron, how much conduction and convection can be expected, and cross-referencing it with our knowledge of the physical state of these materials under different temperature and pressure conditions. There’s also additional information about the internal structure of the Earth that we get from seismology, but that’s a story for a future article. Putting it all together, you end up with a linked series of equations which you can solve to determine the temperature profile of the Earth as a function of depth.

Temperature profile of the Earth’s interior

Temperature profile of the Earth’s interior, from the surface (left) to the centre of the core (right). Temperature units are not marked on the vertical axis, but the temperature of the surface (bottom left corner) is approximately 300 K, and the inner core (IC, right) is around 7000 K. UM is upper mantle, LM lower mantle, OC outer core. The calculated temperature profile is the solid line. The two solid dots are fixed points constrained by known phase transitions of rock and iron – the slopes of the curves between them are governed by the thermodynamic equations. The dashed lines are various components of the constraining equations. (Figure reproduced from [5].)

The results are all self-consistent, with observations such as the temperature of the rock in deep mine shafts and the rate of detection of geoneutrinos, with structural constraints provided by seismology, and with the temperature constraints and known modes of heat flow from the core to the surface of the Earth.

That is, they’re all consistent assuming the Earth is a spherical body of rock and iron. If the Earth were flat, the thermal transport equations would need to be changed to reflect the different geometry. As a first approximation, assume the flat Earth is relatively thin (i.e. a cylinder with the radius larger than the height). We still measure the same amount of heat flux emerging from the Earth’s surface, so the same amount of heat has to be either (a) generated inside it, or (b) being input from some external energy source underneath the flat Earth. However geoneutrino energy ranges indicate that they come from radioactive decay of Earthly minerals, so it makes sense to conclude that radiothermal heating is significant.

If radioactive decay is producing heat within the bulk of the flat Earth, then half of the produced neutrinos will emerge from the underside, and thus be undetectable. So the total heat production should be double that deduced from neutrino observations, or somewhere in the range 20-60 TW. To produce twice the energy, you need twice the mass of the Earth. If the flat Earth is a disc with radius 20,000 km (the distance from the North Pole to the South Pole), then to have the same volume as the spherical Earth it would need to be 859 km thick. But we need twice as much mass to produce the observed thermal energy flux, so it should be approximately 1720 km thick. Some fraction of the geoneutrinos will escape from the sides of the cylinder of this thickness, which means we need to add more rock to produce a bit more energy to compensate, so the final result will be a bit thicker.
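Here’s the disc arithmetic spelled out as a short Python sketch, under the same assumptions as above: a disc of radius 20,000 km, first with the same volume of rock as the spherical Earth, then with double the volume to produce double the radiothermal output.

```python
import math

R_SPHERE = 6371e3     # radius of the spherical Earth, metres
R_DISC = 20000e3      # flat-Earth disc radius (pole-to-pole distance), metres

sphere_volume = (4 / 3) * math.pi * R_SPHERE**3
disc_area = math.pi * R_DISC**2

thickness_same_volume = sphere_volume / disc_area   # same amount of rock as the globe
thickness_double = 2 * thickness_same_volume        # double the rock, double the heat

print(f"Same volume as the globe: {thickness_same_volume / 1000:.0f} km thick")  # ~860 km
print(f"Double the volume:        {thickness_double / 1000:.0f} km thick")       # ~1720 km
```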

There’s no obvious reason to suppose that a flat Earth can’t be a bit over 1700 km thick, as opposed to any other thickness. With over twice as much mass as our spherical Earth, the surface gravity of this thermodynamically correct flat Earth would be over 2 Gs (i.e. twice the gravity we experience), which is obviously wrong, but then many flat Earth models deny Newton’s law of gravity anyway (because it causes so many problems for the model).

But, just as in the spherical Earth model, the observed geoneutrino flux only accounts for roughly half of the observed surface heat flux. The other half could potentially come from primordial heat left over from the flat Earth’s formation – although, as we’ve already seen, what we know about planetary formation precludes the formation of a flat Earth in the first place. The other option is (b): that the missing half of the energy is coming from some source underneath the flat Earth, heating it like a hotplate. What this source of extra energy might be is mysterious. No flat Earth model that I’ve seen addresses this problem, let alone proposes a solution.

What’s more, if such a source of energy under the flat Earth existed, then it would most likely also radiate into space around the edges of the flat Earth, and have observable effects on the objects in the sky above us. What we’re left with, if we trust the sciences of radioactive decay and thermal energy transfer, is a strong constraint on the thickness of the flat Earth, plus a mysterious unspecified energy source underneath – neither of which is mentioned in standard flat Earth models.

References:

[1] Davies, J. H. “Global map of solid Earth surface heat flow”. Geochemistry, Geophysics, Geosystems, 14(10), p.4608-4622, 2013. https://doi.org/10.1002/ggge.20271

[2] Davies, J.H., Davies, D.R. “Earth’s surface heat flux”. Solid Earth, 1(1), p.5-24, 2010. https://doi.org/10.5194/se-1-5-2010

[3] Dye, S.T. “Geoneutrinos and the radioactive power of the Earth”. Reviews of Geophysics, 50(3), 2012. https://doi.org/10.1029/2012RG000400

[4] Arevalo Jr, R., McDonough, W.F., Luong, M. “The K/U ratio of the silicate Earth: Insights into mantle composition, structure and thermal evolution”. Earth and Planetary Science Letters, 278(3-4), p.361-369, 2009. https://doi.org/10.1016/j.epsl.2008.12.023

[5] Boehler, R. “Melting temperature of the Earth’s mantle and core: Earth’s thermal structure”. Annual Review of Earth and Planetary Sciences, 24(1), p.15-40, 1996. https://doi.org/10.1146/annurev.earth.24.1.15

33. Angle sum of a triangle

Differential geometry is the field of mathematics dealing with the geometry of surfaces, such as planes, curved surfaces, and also higher dimensional curved spaces. It’s used extensively in physics to deal with the space curvatures caused by gravity in the theory of general relativity, and also has applications in several other fields of science and engineering. In its simplest form, differential geometry deals with the shapes and mathematical properties of what we intuitively think of as “surfaces” – for example, a sheet of paper, a draped cloth, the surface of a ball, or the curved surface shape of something like a saddle.

One of the most important properties of a surface is the curvature, or more specifically the Gaussian curvature. Intuitively, this is just a measure of how curved the surface is, although in some cases the answer isn’t quite as intuitive as you might think. Imagine a flat surface, like a polished table top, or a completely flat, unbent sheet of paper. Straightforwardly enough, a flat surface like this has a Gaussian curvature value of zero.

Carl Friedrich Gauss

Portrait of Carl Friedrich Gauss. (Public domain image from Wikimedia Commons.)

One of the most important results in differential geometry is the Theorema Egregium, which is Latin for “remarkable theorem”, proven by the 19th century German mathematician and physicist Carl Friedrich Gauss. The Theorema Egregium states that the Gaussian curvature of a surface does not change if the surface is bent without stretching it. So let’s take our flat sheet of paper and roll it up into a cylinder – we can do this without stretching or crumpling the paper. The resulting cylinder has the same curvature as the flat sheet, namely zero.

That might sound a bit strange, but it’s a result of the way that the Gaussian curvature of a surface is defined. A two-dimensional surface can bend by different amounts in different directions, and the maximum and minimum amounts of bending – which always occur in perpendicular directions – are called the principal curvatures. Imagine drawing a straight line on a sheet of flat paper – the curvature along that direction is zero because the paper is flat. Now draw a line perpendicular to the first one – the curvature along that direction is also zero. The Gaussian curvature of the surface is the product of the two principal curvatures – in this case zero times zero.

Now if we roll the paper into a cylinder, we can draw a line around the circular part, creating a circle like a hoop around a barrel. This is the maximum curvature of the cylinder, so one of the principal curvatures, and is non-zero. It’s defined as a positive number equal to 1 divided by the radius r of the cylinder. As the radius gets smaller, this principal curvature 1/r gets bigger. But a cylinder has a second principal curvature, perpendicular to the first one. This is along a line running the length of the cylinder parallel to the axis, and this line is perfectly straight – not curved at all. So it has a principal curvature of zero. And the Gaussian curvature of the cylindrical surface is the product (1/r)×0 = 0.

Cylinder

A cylinder, as could be formed by rolling a sheet of paper. The blue line is a line of maximum curvature, wrapped around the cylinder. The red line, along the cylinder perpendicular to the blue line, has zero curvature.

So what surfaces have non-zero Gaussian curvature? By the Theorema Egregium, they must be surfaces that you can’t bend a sheet of paper into without stretching it. An example is the surface of a sphere. If you try to wrap a sheet of paper smoothly around a sphere, you can’t do it without stretching, scrunching, or tearing the paper. If we draw a line around a sphere (like an equator), that’s one principal curvature, equal to 1/r, similar to the cylinder, where r is now the radius of the sphere. A line perpendicular to that (like a line of longitude) has the same principal curvature, 1/r, due to the symmetry of the sphere. The Gaussian curvature of a sphere is then (1/r)×(1/r) = 1/r².

And then there are surfaces with a saddle shape, bending upwards in one direction and downwards in a perpendicular direction. An example is the surface on the inside of the hole in a torus (or doughnut shape). If you imagine standing on the surface here, in one direction it curves downwards with a radius s equal to that of the solid part of the torus, while in the perpendicular direction the surface curves upwards with radius h, the radius of the hole. Curving upwards is defined as a negative curvature, so the two principal curvatures are 1/s and -1/h, and the Gaussian curvature here is the product, -1/sh.

Torus showing radii

A torus, showing the solid radius s and the radius of the hole h. The point where the two circles intersect has Gaussian curvature -1/sh. (Image modified from public domain image from Wikimedia Commons.)

Here are examples of surfaces with negative, zero, and positive curvature, respectively a hyperboloid, cylinder, and sphere:

Surfaces with negative, zero, and positive curvatures

Illustration of surfaces with negative, zero, and positive Gaussian curvature: respectively a hyperboloid, cylinder, and sphere. (Image modified from public domain image from Wikimedia Commons.)

Another way to think about Gaussian curvature is to imagine wrapping a sheet of paper snugly onto the surface. If you can do it without stretching or tearing the paper (such as a cylinder), the curvature is zero. If you have to scrunch the paper up (like wrapping a sphere), the curvature is positive. If you have to stretch/tear the paper (like the saddle or hyperboloid), the curvature is negative. It’s also important to realise that the Gaussian curvature doesn’t need to be the same everywhere – it can vary across the surface. It’s zero at all points on a cylinder, and 1/r2 at all points on a sphere, but on a torus the curvature is negative on the inside of the hole and positive on the outside, with lines of zero curvature running around the top and bottom.

Torus showing positive and negative curvatures

Diagram of a torus, showing regions of positive (green) and negative (orange) Gaussian curvature. The boundary between the regions has zero curvature.
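Since Gaussian curvature is just the product of the two principal curvatures, the examples above boil down to one-line calculations. Here’s a tiny Python sketch with illustrative radii (the numerical values are made up for the example):

```python
# Gaussian curvature = product of the two principal curvatures.
def gaussian_curvature(k1: float, k2: float) -> float:
    return k1 * k2

r = 2.0           # radius of a cylinder or sphere (example value)
s, h = 0.5, 1.5   # torus: solid (tube) radius and hole radius (example values)

print("flat sheet:", gaussian_curvature(0.0, 0.0))                  # 0
print("cylinder:  ", gaussian_curvature(1 / r, 0.0))                # (1/r) * 0 = 0
print("sphere:    ", gaussian_curvature(1 / r, 1 / r))              # 1/r^2 = 0.25
print("inside of torus hole:", gaussian_curvature(1 / s, -1 / h))   # -1/(s*h) ~ -1.33
```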

A property of two-dimensional curvature is that it affects the geometry of two-dimensional shapes on the surface. A surface with zero Gaussian curvature is called Euclidean, and Euclidean geometry matches the familiar geometry we learn at primary and secondary school. This includes all those properties of circles and triangles and parallel lines that you learnt. In particular, let’s talk about triangles. Triangles have three internal angles and, as we learnt in school, if you add up the sizes of the angles you get 180°. In the angular unit known as radians, 180° is equal to π radians. (To convert from degrees to radians, divide by 180 and multiply by π.)

So, in a Euclidean geometry, the angle sum of a triangle equals π radians. This is the case for triangles drawn on a flat sheet of paper, and it also holds if you wrap the paper around a cylinder. The triangle bends around the cylinder in the positive principal curvature direction, but its Gaussian curvature remains zero (because of the Theorema Egregium). And if you measure the angles and add them up, they still add up to π radians (i.e. 180°).

However if you draw a triangle on a surface of negative curvature, the lines are locally straight but from a three-dimensional point of view they are bowed inwards by the curvature of the surface, pinching the angles to make them smaller.

Saddle shaped surface with triangle

A saddle shaped surface with negative curvature, with a triangle drawn on it. The angles become pinched in and smaller. (Image modified from public domain image from Wikimedia Commons.)

On the other hand, if you draw a triangle on the surface of a sphere, which has positive curvature, the lines seem to bow outwards, making the angles larger.

Spherical shaped surface with triangle

A spherical surface with positive curvature, with a triangle drawn on it. The angles become bulged out and larger. (Image modified from public domain image from Wikimedia Commons.)

Now, here’s the cool thing. On a negative curvature surface, the angle sum of a triangle is less than π radians, while on a positive curvature surface it’s greater than π radians. Imagine a really small triangle on either of these surfaces. Over a very small area, the curvature is not so evident, and the angle sum is only different from π radians by a small amount. But for a larger triangle, the curvature makes a bigger difference, and the angle sum differs from π radians by a larger amount. It turns out there’s a mathematical relationship between the Gaussian curvature of the surface, the size of the triangle, and the amount by which the angle sum differs from π radians:

The angle sum of a triangle = π radians + the integral of the Gaussian curvature over the area of the triangle. [Equation 1]

If you’re not familiar with calculus, the integral part basically means you take small patches of area within the triangle, multiply the Gaussian curvature in the patch by the area of the patch and add them all up. If the Gaussian curvature is constant (such as for a sphere), the integral is just equal to the curvature times the area of the triangle.

To take a concrete example, imagine a sphere of radius one unit. The surface area of the sphere is 4π square units. Now let’s draw a triangle on the sphere. If we imagine the sphere with lines of latitude and longitude like the Earth, we’ll take the equator as one of our triangle sides, and two lines of longitude running from the North Pole to the equator, 90° apart. The angle between the equator and any line of longitude is 90° (π/2 radians), and the angle at the North Pole between our chosen two lines of longitude is also 90° (by construction). So the angle sum of this triangle is 3π/2 radians, which is π/2 radians greater than π radians.

From equation 1, this means that the integral of the Gaussian curvature over the triangle equals π/2. The area of the triangle is one eighth the surface area of the whole sphere = 4π/8 = π/2 square units. The Gaussian curvature of a sphere is constant, so curvature×(π/2 square units) = π/2, which means the curvature is equal to 1. We said the sphere has a radius of one unit, and Gaussian curvature of a sphere is 1/r², so the curvature is just 1. It all works out!

Now imagine we’re looking at such a triangle on the Earth itself. Our edges are the equator, and we’ll take the lines of longitude 30° west (running through eastern Greenland) and 60° east (through Russia and Kazakhstan, among other places). The area of this triangle, if we measured it, turns out to be 63.8 million square kilometres.

A large triangle on Earth

A triangle on Earth, with each angle equal to 90°. (Image modified from public domain image from Wikimedia Commons.)

Applying equation 1:

Angle sum of triangle = π radians + integral of Gaussian curvature over the area of the triangle

3π/2 radians = π radians + Gaussian curvature × 63.8 million square kilometres

π/2 radians = Gaussian curvature × 63.8 million square kilometres

Gaussian curvature = (π/2)/(63.8×10⁶ km²)

1/r² = (π/2)/(63.8×10⁶ km²)

r² = (63.8×10⁶ km²)/(π/2)

r = √[63.8×10⁶/(π/2)] km

r = 6371 kilometres

This is the radius of the Earth. And it’s exactly right. So simply by measuring the angles of a triangle drawn on the surface of the Earth, and the area within that triangle, we can show that the surface of the Earth is not flat, but curved, and we can determine the radius of the Earth.
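Here’s the same calculation written out as a few lines of Python, using the 90°-90°-90° triangle and the 63.8 million km² area quoted above:

```python
import math

angle_sum = 3 * (math.pi / 2)   # three 90-degree angles, in radians
area = 63.8e6                   # area of the triangle in square kilometres

excess = angle_sum - math.pi       # how much the angle sum exceeds pi radians
curvature = excess / area          # Equation 1, with constant curvature
radius = math.sqrt(1 / curvature)  # Gaussian curvature of a sphere is 1/r^2

print(f"Gaussian curvature: {curvature:.2e} per km^2")
print(f"Radius of the Earth: {radius:.0f} km")
# ~6373 km with the rounded 63.8 million km^2 area; close to the true ~6371 km.
```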

Obviously I haven’t gone out and measured such a triangle in practice. It would take expensive surveying gear and an extensive travel budget, but in principle you can certainly do it. Because the effect of the curvature depends on the size of the triangle, you need to survey a large enough area to detect the Earth’s curvature. How large?

I did some searching for angular accuracy of large scale surveys, but didn’t find anything particularly convincing. As a first estimate, I guessed conservatively that you might be able to measure the angles of a very large triangle to an accuracy of a tenth of a degree. With three corners, this makes the necessary deviation of the angle sum from π equal to 0.005 radians. The necessary area to see the effect of curvature is this number times the square of Earth’s radius, which gives 203,000 square kilometres, about the area of Belarus, or Kyrgyzstan. If you surveyed a triangle that big, measuring the area accurately and the angles to within 0.1° accuracy, you could experimentally verify that the Earth was curved, not flat.

A reference on the accuracy of Global Navigation Satellite Systems used for geodetic surveying [1], gives an angular accuracy better than my guess, in the order of 2 minutes of arc (i.e. 1/30°) for this method. This gives us a necessary area of 20,300 square kilometres, about the area of Slovenia or Israel. Another reference on laser scanners used in surveying [2] gives an angular resolution of 3 mm over a range of 100 m, equivalent to 6 seconds of arc. If we can survey the angles of a triangle this accurately, we only need to measure an area of 1220 square kilometres, which is smaller than the Indian Ocean island nation of Comoros, and about the size of Gotland, Sweden’s largest island (circled in blue in the above figure).

Interestingly, Gauss was likely inspired to develop a mathematical treatment of curvature by his experience as a surveyor. In the 1820s, he was tasked with surveying the Kingdom of Hanover (now part of Germany). To check the calibration of his equipment, he surveyed a large triangle with corners on the tops of the mountains Brocken, Hoher Hagen, and Großer Inselsberg, encompassing an area of 3000 km². Each mountaintop had direct line of sight to the others, so this was not actually a survey of a curved triangle along the surface of the Earth, but rather a flat triangle through 3D space above the surface of the Earth. Gauss considered this a validation check on the accuracy of the equipment, rather than a test to see if the Earth was curved. He measured the angles and added them up, finding the sum to be 180° to within his measurement uncertainty. Although this was not the curvature experiment described above, Gauss later drew on his surveying experience to investigate the properties of curved surfaces.

This concludes the “Earth is a Globe” portion of this entry, but there are two other cool applications of differential geometry:

Firstly, curvature of this type applies not only to two-dimensional surfaces, but also to three-dimensional space. It’s possible that the 3D space we live in has a non-zero curvature. This sort of curvature is tied up in general relativity, gravity, and the expansion of the universe. We know the curvature of space is very close to zero, but not if it’s exactly zero – it may be slightly positive or negative. To measure the curvature of space directly, all we need to do is measure the angles of a large enough triangle. In this case, large enough means millions of light years. We can’t send surveyors out that far, but imagine if we contacted two alien civilisations by radio. It would take millions of years to coordinate, but we could ask them to measure the angles between our sun and the sun of the other civilisation at some predetermined time, and we could combine it with our own measurement, to determine the angle sum of this enormous triangle. If it doesn’t equal π radians, we’d have a direct measurement of the curvature of the universe.

Secondly, and perhaps more practically, the Theorema Egregium helps us eat pizza. If you take a long slice of pizza (and the base is not thick/crispy enough to be rigid), the tip can flop down messily.

A floppy slice of pizza

A slice of pizza flopping along its length. Danger of making a mess!

Differential geometry to the rescue! The slice begins flat, so has zero Gaussian curvature. It can bend in one direction, flopping down and making a mess. But if we fold the slice by pushing the ends of the crust upwards and together, this creates a non-zero principal curvature across the slice. By the Theorema Egregium, the Gaussian curvature (the product of the principal curvatures) must remain zero, so the principal curvature in the perpendicular direction along the slice is now fixed at zero, and the slice cannot flop down any more!

A rigid slice of pizza

A slice of pizza curved perpendicular to the length can no longer flop. Danger averted, thanks to differential geometry!

References:

[1] Correa-Muños, N. A., Cerón-Calderón, L. A. “Precision and accuracy of the static GNSS method for surveying networks used in Civil Engineering”. Ingeniería e Investigación, 38(1), p. 52-59, 2018. https://doi.org/10.15446/ing.investig.v38n1.64543

[2] Fröhlich, C. Mettenleiter, M. “Terrestrial laser scanning—new perspectives in 3D surveying”. International archives of photogrammetry, remote sensing and spatial information sciences, 36(8), p.W2, 2004. https://www.semanticscholar.org/paper/TERRESTRIAL-LASER-SCANNING-–-NEW-PERSPECTIVES-IN-3-Froehlich-Mettenleiter/4e117d837e43da8b9e281aec1ce9a8625430b6c3

32. Satellite laser ranging

Lasers are amazing things. However, when first invented, they were famously derided as “a solution looking for a problem”. The American physicist Theodore Maiman built the first laser in 1960, which is possibly earlier than you realised. This is because for several years nobody knew what to use them for, and there was no visible technology that made use of lasers. Their main use was as a device for science fiction, where authors imagined them being used as weapons.

This changed in the 1970s, when laser barcode scanners were invented. These essentially just use a laser as a narrow-beam source of light, which is scanned across the barcode using a rotating mirror. A light sensor detects the pattern of light and dark reflections from the barcode and circuitry turns that into digital data, which can then be processed by an attached computer, revealing information such as a product catalogue number. This is hardly a ground-breaking application; you can (and in fact manufacturers do) make barcode scanners using normal light sources as well.

The first consumer device to use lasers was the LaserDisc player in 1978, a home video format using technology that was the forerunner of the compact disc audio player released in 1982. These devices use precisely focused lasers to read tiny indentations on a reflective surface, turning them into data (analogue in the case of LaserDiscs, digital for CDs), in a way broadly similar to a barcode reader. However here the indentations are so small that doing the same with a normal light source would be prohibitively difficult. And so lasers finally found a widespread use.

Today lasers are used in so many applications and technologies that it would be difficult to imagine life without them. They are vital to modern optical fibre communications networks; have many uses in industry for cutting, welding, scanning, and manufacturing, including 3D printing; are used in many forms of surgery and cancer treatments; and have dozens of consumer uses from laser pointers to printers to entertainment.

A laser is a device that emits light through a process known as stimulated emission. This occurs when a population of atoms exists in an excited energy state, meaning that the energy of one or more electrons in some of the atoms is not at the usual minimum energy state. In such a case, an electron can drop back down to the minimum energy state, emitting the excess energy as a photon of light; this is known as spontaneous emission. The stimulated emission part occurs when a photon interacts with another excited atom, triggering it to also drop into the minimum energy state and release a photon of the same energy. This stimulated photon is emitted in the same direction and with the same phase as the original photon (meaning the peaks and troughs of the light waves are in synch). As more emission is stimulated, an intense beam of light of a single wavelength, all travelling in the same direction is generated, known as a coherent beam.

Stimulated emission

Diagram of stimulated emission. The electron energy levels are within the confines of an atom (not shown).

Mechanically, this can be produced by using a transparent medium such as a gas or crystal, in a long cylinder shape surrounded by a bright strobe tube to supply the energy to excite the atoms. One end of the cylinder is a mirror, and the other end is a partly reflective mirror which lets some of the beam out. The light that emerges is a laser beam. Because the light is coherent, it doesn’t spread out like normal light, but travels in a tight line, illuminating only a small spot when it hits something. This means a laser beam is capable of travelling far greater distances than a normal light source of the same intensity, while still being bright enough to be observed.

Diagram of a laser

One of the very first applications for lasers was invented in 1961, but was restricted to industry and research for a decade. If you aim a brief laser pulse at something and time how long it takes for the reflection to come back, you can multiply half that time by the speed of light to calculate the distance to the object. This is called lidar, a portmanteau of “light” and “radar”, as it’s the same principle applied to light instead of radio waves. Lidar works to a range of several kilometres for detecting normal objects that partially reflect the incident beam.
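The timing calculation is as simple as it sounds. Here’s a minimal Python sketch; the example round-trip times are just rough illustrative values:

```python
C = 299_792_458.0   # speed of light in m/s

def lidar_distance(round_trip_time: float) -> float:
    """Distance to a target from the round-trip time of a laser pulse.
    The pulse travels out and back, so the one-way distance is c * t / 2."""
    return C * round_trip_time / 2

# Rough illustrative round-trip times:
print(f"{lidar_distance(20e-6) / 1000:.0f} km")   # ~3 km: a typical lidar target
print(f"{lidar_distance(0.040) / 1000:.0f} km")   # ~6000 km: a satellite in medium-Earth orbit
print(f"{lidar_distance(2.5) / 1000:.0f} km")     # ~375,000 km: reflectors on the moon
```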

But we can do a lot better if we construct a special target that reflects back virtually all of the incident beam. This can be done with a retroreflector. A common design is three flat mirrors arranged around a 90° corner, like the corner of a box. The combination of reflection off all three surfaces means that any incoming beam of light will be reflected back exactly towards its source, no matter what angle it comes in at. If you shine a laser at one of these, you can detect the return pulse over a much greater range. This form of lidar is known as laser ranging.

Retroreflector diagram

A corner retroreflector. No matter which direction incident light arrives from, the reflected beam returns in the same direction. (Public domain image from Wikimedia Commons.)

In 1964, NASA launched the Explorer 22 satellite into near-Earth orbit, about 1000 kilometres altitude. Its main mission was to perform science on the Earth’s ionosphere, but it was also equipped with a retroreflector, and was the first object in space to have its distance measured using satellite laser ranging.

In 1976, NASA launched LAGEOS 1, a satellite designed specifically for laser ranging. LAGEOS 1 has no active components; it is simply a brass sphere, coated in aluminium, with 426 retroreflectors embedded in the surface, so that no matter which way the satellite tumbles, dozens of reflectors are always oriented towards Earth.

LAGEOS 1 model

Model of LAGEOS 1 satellite. (Public domain image by NASA, from nasa.gov.)

LAGEOS 1 is in medium-Earth orbit, at an altitude of nearly 6000 km. This orbit is far from any perturbing influences and so is extremely stable, meaning the satellite’s position at any time can be calculated to a small fraction of a millimetre. This makes it a useful reference point for measuring the distances to stations on the Earth’s surface, by aiming lasers at the satellite and timing the reflected signal.

Laser ranging from an observatory

Satellite laser ranging in action. Laser Ranging Facility at the Geophysical and Astronomical Observatory at NASA’s Goddard Spaceflight Center. The lasers are aimed at the Lunar Reconnaissance Orbiter, in orbit around the moon. (Public domain image by NASA from Wikimedia Commons.)

These measurements are so precise that they give the distance from the ground station to the satellite to an uncertainty of less than one millimetre. By using a reference point located away from Earth, this provides a method of checking motions of the Earth caused by weather systems, earthquakes, isostatic rebound (the slow rising of land in the millennia after glacial ice sheets melted), and tectonic drift. For example, geophysical tectonic modelling suggests that the Hawaiian Islands should be drifting northwards at approximately 70 mm per year. Measurement of the position of the Haleakala laser base station in Hawaii using LAGEOS and similar satellites shows this to be the case.

Laser ranging stations worldwide

Satellite laser ranging stations around the world. (Figure reproduced from [1].)

Laser ranging can also be (and is) used to measure the shape of the Earth. More specifically, it’s used to measure the shape of the geoid, which is the shape that corresponds to mean sea level (averaging out tides and weather) all over the Earth. More formally, this is defined as the equipotential surface of the Earth’s gravity field that coincides with mean sea level. Over land, this surface generally lies underground. The geoid is not perfectly spherical due to the uneven distribution of mass in the Earth. We’ve mentioned a few times that the Earth is approximately an ellipsoid, its rotation flattening it at the poles and producing a bulge at the equator. The geoid is almost an ellipsoid, but varies locally by up to approximately ±100 metres.

Diagram of the geoid

The geoid surface relative to an ellipsoid, shown as highly exaggerated relief. The darkest blue area below India is -106 m, while the red area near Iceland is +85 m. (Creative Commons Attribution 4.0 International image by the International Centre for Global Earth Models, from Wikimedia Commons.)

Besides LAGEOS 1 and 2, there are a handful of other similar retroreflector satellites. And there are also retroreflectors on the moon. Astronauts on NASA’s Apollo 11, 14, and 15 missions set up retroreflector arrays on the moon’s surface, and the Soviet unmanned probes Lunokhod 1 and 2 also carry retroreflectors.

Retroreflector on the moon

Retroreflector array set up on the lunar surface by Neil Armstrong and Buzz Aldrin during the Apollo 11 mission. (Public domain image by NASA from Wikimedia Commons.)

Since 1969, several lunar laser ranging experiments have been ongoing, making regular measurements of the distance between the Earth stations and the reflectors on the moon. These measurements can also determine the distance to better than one millimetre.

If you measure the distances from either an artificial satellite or the moon to different points on the Earth’s surface, it’s trivial to show that the points don’t lie even approximately on a flat plane, but that they lie on the surface of an approximately spherical body with the radius of the Earth. Finding an explicit statement such as “This demonstrates that the Earth is not flat, but spherical” in a published scientific article is difficult (because that result is neither surprising nor groundbreaking), but the following diagram shows the model that laser ranging scientists use to correct for effects such as atmospheric refraction, to enable them to get their measurements accurate down to a millimetre.

Model of Earth used for accurate laser ranging

Atmospheric refraction model used by laser ranging scientists. (Figure reproduced from [1].)

This shows clearly that laser ranging scientists—who have explicit and direct measurements of the shape of the Earth’s surface—assume the Earth is spherical in order to refine their calculations. They’d hardly do that if the Earth were flat.

References:

[1] Degnan, J. J. “Millimeter accuracy satellite laser ranging: a review”. Contributions of Space Geodesy to Geodynamics: Technology, 25, p.133-162, 1993. https://doi.org/10.1029/GD025p0133

[2] Murphy Jr., T. W. “Lunar Laser Ranging: The Millimeter Challenge”. Reports on Progress in Physics, 76(7), p. 076901, 2013. https://doi.org/10.1088/0034-4885/76/7/076901

31. Earth’s atmosphere

Earth’s atmosphere is held on by gravity, pulling it towards the centre of the planet. This means the air can move sideways around the planet in a relatively unrestricted manner, creating wind and weather systems, but it has trouble flying upwards into space.

It is possible for a planet’s atmosphere to leak away into space, if the gravity is too weak to hold it. Planets have an escape velocity, which is the speed an object fired directly upwards must have in order to fly off into space, rather than slow down and fall back down. For Earth, this escape velocity is 11.2 km/s. Almost nothing on Earth goes this fast – but there are some things that do. Gas molecules.
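As a quick check of that figure, here’s a minimal Python sketch using the standard formula v = √(2GM/R), with rounded textbook values for the constants:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

# Escape velocity: kinetic energy (1/2)mv^2 must match the energy
# GMm/R needed to climb all the way out of Earth's gravity well.
v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
print(f"{v_escape / 1000:.1f} km/s")  # ~11.2 km/s
```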

Air is made up of a mixture of molecules of different gases. The majority, around 78%, is nitrogen molecules, made of two atoms of nitrogen bonded together, followed by 21% oxygen molecules, similarly composed of two bonded oxygen atoms. Almost 1% is argon, which is a noble gas, its atoms going around as unbonded singletons. Then there are traces of carbon dioxide, helium, neon, methane, and a few others. On top of these is a variable amount of water vapour, which depending on local weather conditions can range from almost zero to around 3% of the total.

Gas is the state of matter in which the component atoms and molecules are separated and free to move mostly independently of one another, except for when they collide. This contrasts with a solid, in which the atoms are rigidly connected, a liquid, in which the atoms are in close proximity but able to flow and move past one another, and a plasma, in which the atoms are ionised and surrounded by a freely moving electrically charged cloud of electrons. The deciding factors on which state a material exists in are temperature and pressure.

Diagram of gas

Diagram of a gas. The gas particles are free to move anywhere and travel at high speeds.

Temperature is a measurable quantity related to the amount of thermal energy in an object. This is the form of energy which exists in the individual motion of atoms and molecules. In a solid, the atoms are vibrating slightly. As they increase in thermal energy they vibrate faster, until the energy overcomes the bonds holding them rigidly in place, and they start to flow past one another, becoming a liquid. As the temperature rises and more thermal energy is added, the molecules begin to fly off the mass of liquid completely, dispersing as a gas. And if more energy is added, it eventually strips the outer electrons off the atoms, ionising the gas into a plasma.

The speed at which molecules move in a gas is determined by the relationship between temperature and the kinetic energy of the molecules. The equipartition theorem of thermodynamics says that the average kinetic energy of molecules in a gas is equal to (3/2)kT, where T is the temperature and k is the Boltzmann constant. If T is measured in kelvins, the Boltzmann constant is about 1.38×10⁻²³ joules per kelvin. So the kinetic energy of the molecules depends linearly on the temperature, but kinetic energy equals (1/2)mv², where m is the mass of a molecule and v is the velocity. The typical speed of a gas molecule (strictly, the root-mean-square speed) is then √(3kT/m). This means that more massive molecules move more slowly.

For example, here are the molecular masses of some gases and the average speed of the molecules at room temperature:

Gas Molecular mass (g/mol) Average speed (m/s)
Hydrogen (H2) 2.016 1920
Helium 4.003 1362
Water vapour (H2O) 18.015 642
Neon 20.180 607
Nitrogen (N2) 28.014 515
Oxygen (O2) 32.000 482
Argon 39.948 431
Carbon dioxide (CO2) 44.010 411
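The speeds in this table can be reproduced in a few lines of Python using √(3kT/m); here’s a minimal sketch, taking room temperature to be 298 K:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro constant, 1/mol
T = 298.0             # room temperature, K

molar_masses = {      # g/mol, as in the table above
    "Hydrogen (H2)": 2.016, "Helium": 4.003, "Water vapour (H2O)": 18.015,
    "Neon": 20.180, "Nitrogen (N2)": 28.014, "Oxygen (O2)": 32.000,
    "Argon": 39.948, "Carbon dioxide (CO2)": 44.010,
}

for gas, molar_mass in molar_masses.items():
    m = molar_mass / 1000 / N_A            # mass of one molecule, kg
    v_rms = math.sqrt(3 * K_B * T / m)     # root-mean-square speed, m/s
    print(f"{gas:22s} {v_rms:5.0f} m/s")   # e.g. hydrogen ~1920, argon ~431
```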

Remember that these are the average speeds of the gas molecules. The speeds actually vary according to a statistical distribution known as the Maxwell-Boltzmann distribution. Most molecules have speeds around the average, but there are some with lower speeds all the way down to zero, and some with higher speeds. At the upper end, the speed distribution is not limited (except by the speed of light), although very few molecules have speeds more than 2 or 3 times the average.

Maxwell-Boltzmann distribution

Maxwell-Boltzmann distribution for helium, neon, argon, and xenon at room temperature. Although the average speed for helium atoms is 1362 m/s, a significant number of atoms have speeds well above 2500 m/s. For the heavier gases, the number of atoms moving this fast is extremely close to zero. (Public domain image from Wikimedia Commons.)

These speeds are low enough that essentially all the gas molecules are gravitationally bound to Earth. At least in the lower atmosphere. As you go higher the air rapidly gets thinner—gravity pulls it down towards the surface, but the gas pressure means it can’t all just pile up at ground level, so it thins out ever further with altitude. The pressure drops roughly exponentially with altitude: at 5 km the pressure is about half what it is at sea level, at 10 km about one quarter, at 15 km one eighth, and so on.
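That “halving every 5 km” rule of thumb is easy to play with in code. This is only a rough sketch – the real atmosphere’s scale height varies with temperature and composition:

```python
P0 = 101.325     # sea level pressure, kPa
H_HALF = 5.0     # altitude at which pressure roughly halves, km

def pressure_kpa(altitude_km):
    # Pressure falls by a factor of 2 for every H_HALF kilometres of altitude.
    return P0 * 2 ** (-altitude_km / H_HALF)

for h in (0, 5, 10, 15, 85):
    print(f"{h:2d} km: {pressure_kpa(h):8.3f} kPa")
```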

The physics of the atmosphere changes as you move to higher altitudes and lower pressures. Some 99.998% of the atmosphere by mass is below 85 km altitude. The gas above this altitude, in the thermosphere and exosphere, is so rarefied that it is virtually outer space. Incoming solar radiation heats the gas, and it is so thin that heat transport to lower layers is inefficient and slow. Above about 200 km the gas temperature is over 1000 K, although the gas is so thin that virtually no thermal energy is transferred to orbiting objects. At this temperature, molecules of hydrogen have an average speed of 3516 m/s and helium atoms 2496 m/s, while nitrogen molecules average 943 m/s.

Atmosphere diagram

Diagram of the layers of Earth’s atmosphere, with altitude plotted vertically, and temperature horizontally. The dashed line plots the electron density of the ionosphere, the regions of the atmosphere that are partly ionised by incident solar and cosmic radiation. (Public domain image from Wikimedia Commons.)

While these average speeds are still well below the escape velocity, a small fraction of molecules at the high end of the Maxwell-Boltzmann distribution do have speeds above escape velocity, and if moving in the right direction they fly off into space, never to return to Earth. Our atmosphere leaks hydrogen at a rate of about 3 kg/s, and helium at 50 g/s. The result of this is that any molecular hydrogen in Earth’s atmosphere leaks away rapidly, as does helium.
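You can estimate that escaping fraction directly from the Maxwell-Boltzmann distribution. Here’s a rough Python sketch, assuming a thermospheric temperature of 1000 K and ignoring the slightly reduced gravity at that altitude:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro constant, 1/mol
V_ESC = 11_200.0      # escape velocity, m/s
T = 1000.0            # rough thermosphere temperature, K

def fraction_above_escape(molar_mass_g):
    # Fraction of a Maxwell-Boltzmann distribution with speed > V_ESC:
    # erfc(x) + (2/sqrt(pi)) * x * exp(-x^2), where x = V_ESC / v_most_probable.
    m = molar_mass_g / 1000 / N_A
    v_p = math.sqrt(2 * K_B * T / m)
    x = V_ESC / v_p
    return math.erfc(x) + 2 / math.sqrt(math.pi) * x * math.exp(-x * x)

print(fraction_above_escape(2.016))   # hydrogen: roughly one molecule in a million
print(fraction_above_escape(28.014))  # nitrogen: effectively zero (~1e-91)
```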

There is virtually no molecular hydrogen in Earth’s atmosphere. Helium exists at an equilibrium concentration of about 0.0005%, at which the leakage rate is matched by the replacement of helium in the atmosphere produced by alpha decay of radioactive elements. Recall that in alpha decay, an unstable isotope emits an alpha particle, which is the nucleus of a helium atom. Radioactive decay is the only source of helium we have. Alpha particles from isotopes decaying underground can become trapped in petroleum and natural gas reservoirs, creating gas deposits containing up to a few percent helium; this is the source of all helium used by industry. Over the billions of years of Earth’s geological history, the planet has built up only enough recoverable helium to last our civilisation for another decade or two. Any helium that we use and release into the atmosphere will eventually be lost to space. It will become increasingly important to capture and recycle helium, lest we run out.

Because the Maxwell-Boltzmann distribution falls off so rapidly at high speeds, the leakage rates for nitrogen, oxygen, and heavier gases are far lower. Fortunately for us, these gases leak so slowly from our atmosphere that it would take billions of years for any appreciable loss to occur.

This is the case for a spherical Earth. What if the Earth were flat? Well, the atmosphere would spill over the sides and be lost very quickly. But wait – a common feature of flat Earth models is impassable walls of ice near the Antarctic rim to keep adventurous explorers (and presumably animals) from falling off the edge. Is it possible that such walls could hold the atmosphere in?

If they’re high enough, sure! Near the boundary between the thermosphere and the exosphere, the gas density is extremely low, and most (but not all) of the molecules that make it this high are hydrogen and helium. If the walls were this high, it would stop virtually all of the nitrogen and oxygen from escaping. However, if the walls were much lower, nitrogen and oxygen would start leaking at faster and faster rates. So how high do the walls need to be? Roughly 500-600 kilometres.

That’s well and truly impassable to any explorer using anything less than a spacecraft, so that’s good. But walls of ice 500 km high? We saw when discussing hydrostatic equilibrium that rock has the structural strength to be piled up only around 10 km high before it collapses under its own gravity. The compressive strength of ice, however, is of the order 5-25 megapascals[1][2], about a tenth that of granite.

Compressive strength of ice

Compressive yield (i.e. failure) strength of ice versus confining (applied) pressure, for varying rates of applied strain. The maximum yield strength ranges from around 3 MPa to 25 MPa. (Figure reproduced from [1].)

Ice is also less dense than rock, so a mountain of ice has a lot less mass than a mountain of granite. However, doing the sums shows that an Everest-sized pile of ice would produce a pressure of 30 MPa at its base, meaning it would collapse under its own weight. And the walls we need to keep the atmosphere in would be more than 50 times taller than Everest.
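As a crude order-of-magnitude check (treating the wall as a uniform vertical column of ice, which flatters it considerably), the hydrostatic pressure ρgh at the base of a 500 km wall comes out vastly beyond anything ice can support:

```python
RHO_ICE = 917.0          # density of ice, kg/m^3
G = 9.81                 # gravitational acceleration, m/s^2
WALL_HEIGHT = 500_000.0  # 500 km, the height needed to retain the atmosphere

pressure_mpa = RHO_ICE * G * WALL_HEIGHT / 1e6
print(f"{pressure_mpa:.0f} MPa")  # ~4500 MPa, versus ice's ~5-25 MPa strength
```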

So the fact that we can breathe is a consequence of our Earth being spherical. If it were flat, there would be no physically plausible way to keep the atmosphere in. (There are other models, such as the Earth being covered by a fixed firmament, like a roof, to which the stars are affixed, but these have even more physical problems – which will be discussed another day.)

References:

[1] Jones, S. J. “The confined compressive strength of polycrystalline ice”. Journal of Glaciology, 28 (98), p. 171-177, 1982. https://doi.org/10.1017/S0022143000011874

[2] Petrovic, J. J. “Review: Mechanical properties of ice and snow”. Journal of Materials Science, 38, p. 1-6, 2003. https://doi.org/10.1023/A:1021134128038

30. Pulsar timing

In our last entry on neutrino beams, we met James Chadwick, who discovered the existence of the neutron in 1932. The neutron explained radioactive beta decay as a process in which a neutron decays into a proton, an electron, and an electron antineutrino. This also means that a reverse process, known as electron capture, is possible: a proton and an electron may combine to form a neutron and an electron neutrino. This is sometimes also known as inverse beta decay, and occurs naturally for some isotopes with a relative paucity of neutrons in the nucleus.

Electron capture

Electron capture. A proton and electron combine to form a neutron. An electron neutrino is emitted in the process.

In most circumstances though, an electron will not approach a proton close enough to combine into a neutron, because there is a quantum mechanical energy barrier between them. The electron is attracted to the proton by electromagnetic force, but if it gets too close then its position becomes increasingly localised and by Heisenberg’s uncertainty principle its energy goes up correspondingly. The minimum energy state is the orbital distance where the electron’s probability distribution is highest. In electron capture, the weak nuclear force overcomes this energy barrier.

Electron capture energy diagram

Diagram of electron energy at different distances from a proton. Far away, electrostatic attraction pulls the electron closer, but if it gets too close, Heisenberg uncertainty makes the kinetic energy too large, so the electron settles around the minimum energy distance.

But you can also overcome the energy barrier by providing external energy in the form of pressure. Squeeze the electron and proton enough and you can push through the energy barrier, forcing them to combine into a neutron. In 1934 (less than 2 years after Chadwick discovered the neutron), astronomers Walter Baade and Fritz Zwicky proposed that this could happen naturally, in the cores of large stars following a supernova explosion (previously discussed in the article on supernova 1987A).

During a star’s lifetime, the enormous mass of the star is prevented from collapsing under its own gravity by the energy produced by nuclear fusion in the core. When the star begins to run out of nuclear fuel, that energy is no longer sufficient to prevent further gravitational collapse. Small stars collapse to a state known as a white dwarf, in which the minimal energy configuration has the atoms packed closely together, with electrons filling all available quantum energy states, so it’s not possible to compress the matter further. However, if the star has a mass greater than about 1.4 times the mass of our own sun, then the resulting pressure is so great that it overwhelms the nuclear energy barrier and forces the electrons to combine with protons, forming neutrons. The star collapses even further, until it is essentially a giant ball of neutrons, packed shoulder to shoulder.

These collapses, to a white dwarf or a so-called neutron star, are accompanied by a huge and sudden release of gravitational potential energy, which blows the outer layers of the star off in a tremendously violent explosion, which is what we can observe as a supernova. Baade and Zwicky proposed the existence of neutron stars based on the understanding of physics at the time. However, they could not imagine any method of ever detecting a neutron star. A neutron star would, they imagined, simply be a ball of dead neutrons in space. Calculations showed that a neutron star would have a radius of about 10 kilometres, making them amazingly dense, but correspondingly difficult to detect at interstellar distances. So neutron stars remained nothing but a theoretical possibility for decades.

In July 1967, Ph.D. astronomy student Jocelyn Bell was observing with the Interplanetary Scintillation Array at the Mullard Radio Astronomy Observatory in Cambridge, under the tutelage of her supervisor Antony Hewish. She was looking for quasars – powerful extragalactic radio sources which had recently been discovered using the new observation technique of radio astronomy. As the telescope direction passed through one particular patch of sky in the constellation of Vulpecula, Bell found some strange radio noise. Bell and Hewish had no idea what the signal was. At first they assumed it must be interference from some terrestrial or known spacecraft radio source, but over the next few days Bell noticed the signal appearing 4 minutes earlier each day. It was rising and setting with the stars, not in synch with anything on Earth. The signal was from outside our solar system.

Bell suggested running the radio signal strength plotter at faster speeds to try to catch more details of the signal. It took several months of persistent work, examining kilometres of paper plots. Hewish considered it a waste of time, but Bell persisted, until in November she saw the signal drawn on paper moving extremely rapidly through the plotter. The extraterrestrial radio source was producing extremely regular pulses, about 1 1/3 seconds apart.

PSR B1919+21 trace

The original chart recorder trace containing the detection signal of radio pulses from the sky position at right ascension 19 hours 19 minutes. The pulses are the regularly spaced downward deflections in the irregular line near the top. (Reproduced from [1].)

This was exciting! Bell and Hewish thought that it might possibly be a signal produced by alien life, but they wanted to test all possible natural sources before making any sort of announcement. Bell soon found another regularly pulsating radio source in a different part of the sky, which convinced them that it was probably a natural phenomenon.

They published their observations[2], speculating that the pulses might be caused by radial oscillation in either white dwarfs or neutron stars. Fellow astronomers Thomas Gold and Fred Hoyle, however, immediately recognised that the pulses could be produced by the rotation of a neutron star.

Stars spin, relatively leisurely, due to the angular momentum in the original clouds of gas from which they formed. Our own sun rotates approximately once every 24 days. During a supernova explosion, as the core of the star collapses to a white dwarf or neutron star, the moment of inertia decreases and the rotation rate must increase correspondingly to conserve angular momentum, in the same way that a spinning ice skater speeds up by pulling their arms inward. Collapsing from stellar size down to 10 kilometres produces an increase in rotation rate from once per several days to the incredible rate of about once per second. At the same time, the star’s magnetic field is pulled inward, greatly strengthening it. Far from being a dead ball of neutrons, a neutron star is rotating rapidly, and has one of the strongest magnetic fields in nature. And when a magnetic field oscillates, it produces electromagnetic radiation, in this case radio waves.

The magnetic poles of a neutron star are unlikely to line up exactly with the rotational axis. Radio waves are generated by the rotation and funnelled out along the magnetic poles, forming beams of radiation. So as the neutron star rotates, these radio beams sweep out in rotating circles, like lighthouse beacons. A detector in the path of a radio beam will see it flash briefly once per rotation, at regular intervals of the order of one second – exactly what Bell observed.

Pulsar diagram

Diagram of a pulsar. The neutron star at centre has a strong magnetic field, represented by field lines in blue. As the star rotates about a vertical axis, the magnetic field generates radio waves beamed in the directions shown by the purple areas, sweeping through space like lighthouse beacons. (Public domain image by NASA, from Wikimedia Commons.)

Radio-detectable neutron stars quickly became known as pulsars, and hundreds more were soon detected. For the discovery of pulsars, Antony Hewish was awarded the Nobel Prize in Physics in 1974, however Jocelyn Bell (now Jocelyn Bell Burnell after marriage) was overlooked, in what has become one of the most notoriously controversial decisions ever made by the Nobel committee.

Jocelyn Bell Building

Image of Jocelyn Bell Burnell on the Jocelyn Bell Building in the Parque Tecnológico de Álava, Araba, Spain. (Public domain image from Wikimedia Commons.)

Astronomers found pulsars in the middle of the Crab Nebula supernova remnant (recorded as a supernova by Chinese astronomers in 1054), the Vela supernova remnant, and several others, cementing the relationship between supernova explosions and the formation of neutron stars. Popular culture even got in on the act, with Joy Division’s iconic 1979 debut album cover for Unknown Pleasures featuring pulse traces from pulsar B1919+21, the very pulsar that Bell first detected.

By now, the strongest and most obvious pulsars have been discovered. To discover new pulsars, astronomers engage in pulsar surveys. A radio telescope is pointed at a patch of sky and the strength of radio signals received is recorded over time. The radio trace is noisy and often the pulsar signal is weaker than the noise, so it’s not immediately visible like B1919+21. To detect it, one method is to perform a Fourier transform on the signal, to look for a consistent signal at a specific repetition period. Unfortunately, this only works for relatively strong pulsars, as weak ones are still lost in the noise.

A more sensitive method is called epoch folding, which is performed by cutting the signal trace into pieces of equal time length and summing them all up. The noise, being random, tends to cancel out, but if a periodic signal is present at the same period as the sliced time length then it will stack on top of itself and become more prominent. Of course, if you don’t know the period of a pulsar present in the signal, you need to try doing this for a large range of possible periods, until you find it.
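Here’s a toy Python sketch of epoch folding (the sampling rate, period, and the fake pulsar are made up purely for illustration; real search pipelines are far more sophisticated):

```python
import numpy as np

def epoch_fold(signal, sample_rate, trial_period, n_bins=64):
    # Assign every sample to a phase bin within the trial period and average.
    # A real pulsar at this period stacks up; random noise averages away.
    t = np.arange(len(signal)) / sample_rate
    bins = ((t % trial_period) / trial_period * n_bins).astype(int)
    summed = np.bincount(bins, weights=signal, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return summed / np.maximum(counts, 1)

# Fake data: a weak pulse every 1.337 s, buried in much stronger noise.
rng = np.random.default_rng(0)
sample_rate, duration, period = 1000.0, 600.0, 1.337
t = np.arange(int(duration * sample_rate)) / sample_rate
signal = rng.normal(0.0, 1.0, t.size) + 0.1 * (t % period < 0.02)

profile = epoch_fold(signal, sample_rate, trial_period=period)
print(profile.max(), np.median(profile))  # the pulse bin stands well above the median
```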

To further increase the sensitivity, you can add in signals recorded at different radio frequencies as well – most radio telescopes can record signals at multiple frequencies at once. A complication is that the thin ionised gas of the interstellar medium slows down the propagation of radio waves slightly, and it slows them down by different amounts depending on the frequency. So as the radio waves propagate through space, the different frequencies slowly drift out of synch with one another, a phenomenon known as dispersion. The total amount of dispersion depends in a known way on the amount of plasma travelled through—known as the dispersion measure—so measuring the dispersion of a pulsar gives you an estimate of how far away it is. The estimate is a bit rough, because the interstellar medium is not uniform – denser regions slow down the waves more and produce greater dispersion.
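Correcting for this (“de-dispersion”) is conceptually simple: shift each frequency channel by the delay predicted for a trial dispersion measure, then sum the channels. A rough sketch, using the standard dispersion constant of about 4.15×10³ s MHz² per (pc cm⁻³):

```python
import numpy as np

K_DM = 4.149e3  # dispersion constant, s MHz^2 / (pc cm^-3)

def dedisperse(dynamic_spectrum, freqs_mhz, dm, sample_time):
    # dynamic_spectrum has shape (n_channels, n_samples).
    # Shift each channel to undo its dispersion delay relative to the
    # highest frequency, then sum the channels together.
    f_ref = freqs_mhz.max()
    summed = np.zeros(dynamic_spectrum.shape[1])
    for channel, f in zip(dynamic_spectrum, freqs_mhz):
        delay = K_DM * dm * (f ** -2 - f_ref ** -2)   # seconds
        shift = int(round(delay / sample_time))       # samples
        summed += np.roll(channel, -shift)            # real code pads rather than wraps
    return summed
```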

Pulsar dispersion

Dispersion of pulsar pulses. Each row is a folded and summed pulse profile over many observation periods, as seen at a different radio frequency. Note how the time position of the pulse drifts as the frequency varies. If you summed these up without correction for this dispersion, the signal would disappear. The bottom trace shows the summed signal after correction for the dispersion by shifting all the pulses to match phase. (Reproduced from [3].)

So to find a weak pulsar of unknown period and dispersion measure, you fold all the signals at some speculative period, then shift the frequencies by a speculative dispersion measure and add them together. Now we have a two-dimensional search space to go through. This approach takes a lot of computer time, trying many different time folding periods and dispersion measures, and has been farmed out as part of distributed “home science” computing projects. The pulsar J2007+2722 was the first pulsar to be discovered by a distributed home computing project[4].

But wait – there’s one more complication. The observed period of a pulsar is equal to the emission period if you observe it from a position in space that is not moving relative to the pulsar. If the observer is moving with respect to the pulsar, then the period experiences a Doppler shift. Imagine you are moving away from a pulsar that is pulsing exactly once per second. A first pulse arrives, but in the second that it takes the next pulse to arrive, you have moved further away, so the radio signal has to travel further, and it arrives a fraction of a second more than one second after the previous pulse. The difference is fairly small, but noticeable if you are moving fast enough.

The Earth moves around the sun at an orbital speed of 29.8 km/s. So if the Earth were moving directly away from a pulsar then, dividing this speed by the speed of light, each successive pulse would arrive about 0.1 milliseconds later than if the Earth were stationary. This would actually not be a problem, because instead of folding at a period of 1.0000 seconds, we could detect the pulsar by folding at a period of 1.0001 seconds. But the Earth doesn’t move in a straight line – it orbits the sun in an almost circular ellipse. On one side of the orbit the pulsar period is measured to be 1.0001 s, but six months later it appears to be 0.9999 s.

This doesn’t sound like much, but if you observe a pulsar for an hour, that’s 3600 seconds, and the cumulative error becomes 0.36 seconds, which is far more than enough to completely ruin your signal, smearing it out so that it becomes undetectable. Hewish and Bell, in their original pulsar detection paper, used the fact that they observed this timing drift consistent with Earth’s orbital velocity to narrow down the direction that the pulsar must lie in (their telescope received signals from a wide-ish area of sky, making pinpointing the direction difficult).
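The size of the effect is just a couple of lines of arithmetic:

```python
C = 299_792_458.0     # speed of light, m/s
V_ORBIT = 29_800.0    # Earth's orbital speed, m/s
PERIOD = 1.0          # true pulsar period, s

apparent_period = PERIOD * (1 + V_ORBIT / C)   # receding at full orbital speed
drift_per_pulse = apparent_period - PERIOD
print(drift_per_pulse)          # ~1e-4 s extra per pulse
print(drift_per_pulse * 3600)   # ~0.36 s of smearing over a one-hour fold
```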

Timing drift of pulsar B1919+21

Timing drift of pulsar B1919+21 from Hewish and Bell’s discovery paper. Cumulative period timing difference on the horizontal axis versus date on the vertical axis. If the Earth were not moving through space, all the detection periods for different dates would line up on the 0. With no other data at all, you can use this graph to work out the period of Earth’s orbit. (Figure reproduced from [2].)

What’s more, not just the orbit of the Earth, but also the rotation of the Earth affects the arrival times of pulses. When a pulsar is overhead, it is 6370 km (the radius of the Earth) closer than when it is on the horizon. Light takes over 20 milliseconds to travel that extra distance – a huge amount to consider when folding pulsar data. So if you observe a pulsar over a single six-hour session, the pulse arrival times can drift by more than 0.02 seconds due to the rotation of the Earth.

These timing drifts can be corrected in a straightforward manner, using the astronomical coordinates of the pulsar, the latitude and longitude of the observatory, and a bit of trigonometry. So in practice these are the steps to detect undiscovered pulsars:

  1. Observe a patch of sky at multiple radio frequencies for several hours, or even several days, to collect enough data.
  2. Correct the timing of all the data based on the astronomical coordinates, the latitude and longitude of the observatory, and the rotation and orbit of the Earth. This is a non-linear correction that stretches and compresses different parts of the observation timeline, to make it linear in the pulsar reference frame.
  3. Perform epoch folding with different values of period and dispersion measure, and look for the emergence of a significant signal above the noise.
  4. Confirm the result by observing with another observatory and folding at the same period and dispersion measure.
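To give a feel for the correction in step 2, here’s a simplified Python sketch of just the Earth-rotation term on a spherical Earth. Real pulsar timing software also applies the much larger orbital term plus relativistic corrections, so treat this purely as an illustration of the geometry:

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
R_EARTH = 6_371_000.0    # radius of the spherical Earth model, m

def pulsar_direction(ra, dec):
    # Unit vector towards the pulsar, from its equatorial coordinates (radians).
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def rotation_delay(latitude, longitude, greenwich_sidereal_angle, ra, dec):
    # Extra light travel time because the observatory sits on the surface of a
    # rotating spherical Earth rather than at its centre. Ranges over +/- ~21 ms.
    lst = greenwich_sidereal_angle + longitude   # local sidereal time, as an angle
    obs = R_EARTH * np.array([np.cos(latitude) * np.cos(lst),
                              np.cos(latitude) * np.sin(lst),
                              np.sin(latitude)])
    return np.dot(obs, pulsar_direction(ra, dec)) / C

# Each recorded pulse arrival time is shifted by this delay (and by the orbital
# equivalent) before epoch folding, so the fold is effectively done in a frame
# that is stationary with respect to the pulsar.
```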

This method has been wildly successful, and as of September 2019 there are 2796 known pulsars[5].

If step 2 above were omitted, then pulsars would not be detected. The timing drifts caused by the Earth’s orbit and rotation would smear the integrated signal out rather than reinforcing it, resulting in it being undetectable. The latitude and longitude of the observatory are needed to ensure the timing correction calculations are done correctly, depending on where on Earth the observatory is located. It goes almost without saying that the astronomers use a spherical Earth model to get these corrections right. If they used a flat Earth model, the method would not work at all, and we would not have detected nearly as many pulsars as we have.

Addendum: Pulsars are dear to my own heart, because I wrote my physics undergraduate honours degree essay on the topic of pulsars, and I spent a summer break before beginning my Ph.D. doing a student project at the Australia Telescope National Facility, taking part in a pulsar detection survey at the Parkes Observatory Radio Telescope, and writing code to perform epoch folding searches.

Some of the data I worked on included observations of pulsar B0540-69, which was first detected in x-rays by the Einstein Observatory in 1984[6], and then at optical wavelengths in 1985[7], flashing with a period of 0.0505697 seconds. I made observations and performed the data processing that led to the first radio detection of this pulsar[8]. (I’m credited as an author on the paper under my unmarried name.) I can personally guarantee you that I used timing corrections based on a spherical model of the Earth, and if the Earth were flat I would not have this publication to my name.

References:

[1] Lyne, A. G., Smith, F. G. Pulsar Astronomy. Cambridge University Press, Cambridge, 1990.

[2] Hewish, A., Bell, S. J., Pilkington, J. D. H., Scott, P. F., Collins, R. A. “Observation of a Rapidly Pulsating Radio Source”. Nature, 217, p. 709-713, 1968. https://doi.org/10.1038/217709a0

[3] Lorimer, D. R., Kramer, M. Handbook of Pulsar Astronomy. Cambridge University Press, Cambridge, 2012.

[4] Allen, B., Knispel, B., Cordes, J. et al. “The Einstein@Home Search for Radio Pulsars and PSR J2007+2722 Discovery”. The Astrophysical Journal, 773 (2), p. 91-122, 2013. https://doi.org/10.1088/0004-637X/773/2/91

[5] Hobbs, G., Manchester, R. N., Toomey, L. “ATNF Pulsar Catalogue v1.61”. Australia Telescope National Facility, 2019. https://www.atnf.csiro.au/people/pulsar/psrcat/ (accessed 2019-10-09).

[6] Seward, F. D., Harnden, F. R., Helfand, D. J. “Discovery of a 50 millisecond pulsar in the Large Magellanic Cloud”. The Astrophysical Journal, 287, p. L19-L22, 1984. https://doi.org/10.1086/184388

[7] Middleditch, J., Pennypacker, C. “Optical pulsations in the Large Magellanic Cloud remnant 0540–69.3”. Nature, 313 (6004), p. 659, 1985. https://doi.org/10.1038/313659a0

[8] Manchester, R. N., Mar, D. P., Lyne, A. G., Kaspi, V. M., Johnston, S. “Radio Detection of PSR B0540-69”. The Astrophysical Journal, 403, p. L29-L31, 1993. https://doi.org/10.1086/186714

29. Neutrino beams

We’ve met neutrinos before, when talking about supernova 1987A.

Historically, the early quantum physicist Wolfgang Pauli first proposed the existence of the neutrino in 1930, to explain a problem with the then-current understanding of radioactive beta decay. In beta decay, an atomic nucleus emits an electron, which has a negative electric charge, and the resulting nucleus increases in positive charge, transmuting into the element with the next highest atomic number. The law of conservation of energy applied to this nuclear reaction implied that the electron should be emitted from any given isotope with a specific energy, balancing the change in mass as given by Einstein’s famous E = mc² (energy equals mass times the speed of light squared). Alpha particles emitted during alpha decay and gamma rays emitted during gamma decay do appear at fixed energies like this.

beta decay, early conception

Illustration of beta decay. The nucleus at left emits an electron. (Public domain image from Wikimedia Commons.)

However, this was not what was observed for beta decay electrons. The ejected electrons had a maximum energy as predicted, but also appeared with a spread of lower energies. Pauli suggested that another particle was involved in the beta decay reaction, which carried off some of the energy. In a three-body reaction, the energy could be split between the electron and the new particle in a continuous fashion, thus explaining the spread of electron energies. Pauli suggested the new particle must be very light, so as to evade detection up to that time. He called it a “neutron”, a neutral particle following the word-ending convention of electron and proton.

However, in the same year German physicists Walther Bothe and Herbert Becker produced some strange radiation by bombarding light elements with alpha particles from radioactive polonium. This radiation had properties unlike other forms known at the time, and several experimenters tried to understand it. In 1932, James Chadwick performed experiments that demonstrated the radiation was made of neutral particles of about the same mass as a proton. The name “neutron” had been floating around nuclear physics for some time (Pauli wasn’t the first to use it; “neutron” appears in the literature as a name for proposed hypothetical neutral particles as early as 1899), but Chadwick was the first experimenter to demonstrate the existence of a neutral particle, so the name got attached to his discovery. Italian physicist Enrico Fermi responded by referring to Pauli’s proposed very light neutral particle as a “neutrino”, or “little neutron” in Italian coinage.

beta decay

Beta decay. A neutron decays to produce a proton, an electron, and an electron anti-neutrino. The neutrino produced has to be an antiparticle to maintain matter/antimatter balance, though it is often referred to simply as a “neutrino” rather than an anti-neutrino. (Public domain image from Wikimedia Commons.)

Detection of the neutrino had to wait until 1956, when a sensitive enough experiment could be performed, by American physicists Clyde Cowan and Frederick Reines, for which Reines received the 1995 Nobel Prize in Physics (Cowan had unfortunately died in 1974). In 1962, Leon Lederman, Melvin Schwartz, and Jack Steinberger, working at Brookhaven National Laboratory, discovered that muons—particles similar to electrons but with more mass—had their own associated neutrinos, distinct from electron neutrinos. They received the Nobel Prize for this discovery in 1988 (only a 26 year wait, unlike Reines’ 39 year wait). Finally, Martin Perl discovered a third, even more massive electron-like particle, named the tau lepton, in 1975, for which he shared that 1995 Prize with Reines. The tau lepton, like the electron and muon, has its own distinct associated neutrino.

Meanwhile, other researchers had been building neutrino detectors to observe neutrinos emitted by the sun’s nuclear reactions. Neutrinos interact only extremely weakly with matter, so although approximately 7×10¹⁴ solar neutrinos hit every square metre of Earth every second, almost none of them affect anything, and in fact almost all of them pass straight through the Earth and continue into space without stopping. To detect neutrinos you need a large volume of transparent material; water is usually used. Occasionally one neutrino of the trillions that pass through every second will interact, causing a single-atom nuclear reaction that produces a brief flash of light, which can then be seen by light detectors positioned around the periphery of the transparent material.

Daya Bay neutrino detector

Interior of the Daya Bay Reactor neutrino detector, China. The glassy bubbles house photodetectors to detect the flashes of light produced by neutrino interactions in the liquid filled interior (not filled in this photo). (Public domain image by U.S. Department of Energy, from Wikimedia Commons.)

When various solar neutrino detectors were turned on, there was a problem. They detected only about one third to one half of the number of neutrinos expected from models of how the sun works. The physics was thought to be well understood, so there was great trouble trying to reconcile the observations with theory. One of the least “break everything else we know about nuclear physics” proposals was that perhaps neutrinos could spontaneously and randomly change flavour, converting between electron, muon, and tau neutrinos. The neutrino detectors could only detect electron neutrinos, so if the neutrinos generated by the sun could change flavour (a process known as neutrino oscillation) during the time it took them to arrive at Earth, the result would be a roughly equal mix of the three flavours, so the detectors would only see about a third of them.

Another unanswered question about neutrinos was whether they had mass or not. Neutrinos have only ever been detected travelling at speeds so close to the speed of light that we weren’t sure if they were travelling at the speed of light (in which case they must be massless, like photons) or just a tiny fraction below it (in which case they must have a non-zero mass). Even the neutrinos detected from supernova 1987A, 168,000 light years away, arrived before the initial light pulse from the explosion (because the neutrinos passed immediately out of the star’s core, while the light had to contend with thousands of kilometres of opaque gas before being released), so we weren’t sure if they were travelling at the speed of light or just very close to it. Interestingly, the mass of neutrinos is tied to whether they can change flavour: if neutrinos are massless, then they can’t change flavour, whereas if they have mass, then they must be able to change flavour.

To test these properties, particle physicists began performing experiments to see if neutrinos could change flavour. To do this, you need to produce some neutrinos and then measure them a bit later to see if you can detect any that have changed flavour. But because neutrinos move at very close to the speed of light, you can’t detect them at the same place you create them; you need to have your detector a long way away. Preferably hundreds of kilometres or more.

The first such experiment was the KEK to Kamioka, or K2K experiment, running from 1999 to 2004. This involved the Japanese KEK laboratory in Tsukuba generating a beam of muon neutrinos and aiming the beam at the Super-Kamiokande neutrino detector at Kamioka, 250 kilometres away.

K2K map

Map of central Japan, showing the locations of KEK and Super-Kamiokande. (Figure reproduced from [1].)

The map is from the official website of KEK. Notice that Super-Kamiokande is on the other side of a mountain range from KEK. But this doesn’t matter, because neutrinos travel straight through solid matter! Interestingly, here’s another view of the neutrino path from the KEK website:

K2K cross section

Cross sectional view of neutrino beam path from KEK to Super-Kamiokande. (Figure reproduced from [1].)

You can see that the neutrino beam passes underneath the mountains from KEK to the underground location of the Super-Kamiokande detector, in a mine 1000 metres below Mount Ikeno (altitude 1360 m). KEK at Tsukuba is at an altitude of 35 m. Now because of the curvature of the Earth, the neutrino beam passes in a straight line about 1000 m below sea level at its middle point. With the radius of the Earth 6367 km, Pythagoras’ theorem tells us that the centre of the beam path is about 6365.8 km from the centre of the Earth, so roughly 1200 m below the mean altitude of KEK and Super-Kamiokande – the maths works out. Importantly, the neutrino beam cannot be fired horizontally; it has to be aimed at an angle of about 1° into the ground for it to emerge correctly at Super-Kamiokande.
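The geometry is simple enough to check in a few lines of Python: for two points a given surface distance apart on a sphere, how deep does the straight chord between them dip, and at what angle below the local horizontal must it be aimed? (This sketch ignores the modest altitude difference between the two sites.)

```python
import math

R = 6367.0  # Earth radius used in the text, km

def chord_geometry(surface_distance_km):
    # Half the central angle subtended by the two endpoints.
    half_angle = surface_distance_km / (2 * R)
    depth_km = R * (1 - math.cos(half_angle))   # depth of the chord's midpoint below the surface
    dip_degrees = math.degrees(half_angle)      # downward aim angle at each endpoint
    return depth_km, dip_degrees

print(chord_geometry(250))  # K2K:   midpoint ~1.2 km deep, aimed ~1.1 degrees down
print(chord_geometry(735))  # MINOS: midpoint ~10.6 km deep, aimed ~3.3 degrees down
```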

The K2K experiment succeeded in detecting a loss of muon neutrinos, establishing that some of them were oscillating into other neutrino flavours.

A follow up experiment, MINOS, began in 2005, this time using a neutrino beam generated at Fermilab in Illinois, firing at a detector located in the Soudan Mine in Minnesota, some 735 km away.

MINOS map and cross section

Map and sectional view of the MINOS experiment. (Figure reproduced from [2].)

In this case, the straight line neutrino path passes about 10 km below the surface of the Earth, requiring the beam to be aimed downwards at an angle of about 3.3° in order to successfully reach the detector. Another thing that MINOS did was to measure the time of flight of the neutrino beam between Fermilab and Soudan. When they sent a pulsed beam and measured the time taken for the pulse to arrive at Soudan, then divided the distance by that time, they concluded that the speed of the neutrinos was between 0.999976 and 1.000126 times the speed of light, which is consistent with them not violating special relativity by exceeding the speed of light[3].

If you measure the distance from Fermilab to Soudan along the curvature of the Earth, as you would do for normal means of travel (or if the Earth were flat), you get a distance about 410 metres (or 0.06%) longer than the straight line distance through the Earth that the neutrinos took. If the scientists had used that distance, then their neutrino speed measurements would have given values 0.06% higher: 1.00053 to 1.00068 times the speed of light. In other words, to get a result that doesn’t violate known laws of physics, you have to take account of the fact that the Earth is spherical, and not flat.
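The difference between the surface (arc) distance and the straight chord is also easy to compute; a minimal sketch:

```python
import math

R = 6_367_000.0      # Earth radius, m
C = 299_792_458.0    # speed of light, m/s

def arc_minus_chord(arc_m):
    # Extra length of the curved surface path compared to the straight
    # chord through the Earth between the same two points.
    theta = arc_m / R                      # central angle, radians
    chord = 2 * R * math.sin(theta / 2)
    return arc_m - chord

extra = arc_minus_chord(735e3)             # Fermilab to Soudan
print(extra)                               # ~406 m
print(extra / 735e3 * 100)                 # ~0.06% error in the speed if ignored
print(arc_minus_chord(732e3) / C * 1e9)    # CERN to Gran Sasso: ~1350 ns
```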

This result has been reproduced with reduced uncertainty bounds by the CERN Neutrinos to Gran Sasso (CNGS) experiment in Europe, which fires a neutrino beam from CERN in Switzerland to the OPERA detector at the Gran Sasso National Laboratory in Italy.

CNGS cross section

Sectional view of the CNGS experiment neutrino beam path. (Image is Copyright CERN, used under CERN Terms of Use, from [4].)

The difference between the neutrino travel time and the light travel time over the 732 km beam path was measured to be -1.9±3.7 nanoseconds, consistent with zero difference[5]. In this case, if a flat Earth model had been used, the beam path distance would be equal to the surface distance from CERN to Gran Sasso, again about 410 metres longer. This would have made the neutrinos appear to arrive an extra 410/c ≈ 1370 ns early, making them travel significantly faster than the speed of light.

All of these experiments have shown that neutrino oscillation does occur, which means neutrinos have a non-zero mass. But we still don’t know what that mass is. It must be small enough that in all our existing experiments we can’t detect any difference between the neutrino speed and the speed of light. More experiments are underway to try to pin down the nature of these elusive particles.

But importantly for our purposes, these neutrino beam experiments make no sense if the Earth is flat, and can only be interpreted correctly because we know the Earth is a globe.

References:

[1] “Long Baseline neutrino oscillation experiment, from KEK to Kamioka (K2K)”. KEK website. http://neutrino.kek.jp/intro/ (accessed 2019-10-01).

[2] Louis, W. C. “Viewpoint: The antineutrino vanishes differently”. Physics, 4, p. 54, 2011. https://physics.aps.org/articles/v4/54

[3] MINOS collaboration, Adamson, P. et al. “Measurement of neutrino velocity with the MINOS detectors and NuMI neutrino beam”. Physical Review D, 76, p. 072005, 2007. https://doi.org/10.1103/PhysRevD.76.072005

[4] “Old accelerators image gallery”. CERN. https://home.cern/resources/image/accelerators/old-accelerators-images-gallery (accessed 2019-10-01).

[5] The OPERA collaboration, Adam, T., Agafonova, N. et al. “Measurement of the neutrino velocity with the OPERA detector in the CNGS beam”. Journal of High Energy Physics, 2012, p. 93, 2012. https://doi.org/10.1007/JHEP10(2012)093

28. Stereo imaging

We see the world in 3D. What this means is that our visual system—comprising the eyes, optic nerves, and brain—takes the sensory input from our eyes and interprets it in a way that gives us the sensation that we exist in a three-dimensional world. This sensation is often called depth perception.

In practical terms, depth perception means that we are good at estimating the relative distances from ourselves to objects in our field of vision. You can tell if an object is nearer or further away just by looking at it (weird cases like optical illusions aside). A corollary of this is that you can tell the three-dimensional shape of an object just by looking at it (again, optical illusions aside). A basketball looks like a sphere, not like a flat circle. You can tell if a surface that you see is curved or flat.

To do this, our brain relies on various elements of visual data known as depth cues. The best known depth cue is stereopsis, which is interpreting the different views of the left and right eyes caused by the parallax effect, your brain innately using this information to triangulate distances. You can easily observe parallax by looking at something in the distance, holding up a finger at arm’s length, and alternately closing your left and right eyes. In the view from each eye, your finger appears to move left or right relative to the background. And with both eyes open, if you focus on the background, you see two images of your finger. This tells your brain that your finger is much closer than the background.

Parallax effect

Illustration of parallax. The dog is closer than the background scene. Sightlines from your left and right eyes passing through the dog project to different areas of the background. So the view seen by your left and right eyes show the dog in different positions relative to the background. (The effect is exaggerated here for clarity.)
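The triangulation itself is simple geometry: the distance to an object is roughly the separation of your eyes divided by the parallax angle in radians. A minimal sketch, assuming a typical 65 mm eye separation and using the small-angle approximation:

```python
import math

EYE_SEPARATION = 0.065  # metres, a typical human interpupillary distance

def distance_from_parallax(parallax_angle_degrees):
    # Small-angle triangulation: distance ~ baseline / angle (angle in radians).
    return EYE_SEPARATION / math.radians(parallax_angle_degrees)

print(distance_from_parallax(5.3))    # ~0.7 m: a finger at arm's length
print(distance_from_parallax(0.37))   # ~10 m: a much smaller parallax shift
```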

We’ll discuss stereopsis in more detail below, but first it’s interesting to know that stereopsis is not the only depth cue our brains use. There are many physically different depth cues, and most of them work even with a single eye.

Cover one eye and look at the objects nearby, such as on your desk. Reach out and try to touch them gently with a fingertip, as a test for how well you can judge their depth. For objects within an easy hand’s reach you can probably do pretty well; for objects you need to stretch to touch you might do a little worse, but possibly not as bad as you thought you might. The one eye that you have looking at nearby things needs to adjust the focus of its lens in order to keep the image focused on the retina. Muscles in your eye squeeze the lens to change its shape, thus adjusting the focus. Nerves send these muscle signals to your brain, which subconsciously uses them to help gauge distance to the object. This depth cue is known as accommodation, and is most accurate within a metre or two, because it is within this range that the greatest lens adjustments need to be made.

With one eye covered, look at objects further away, such as across the room. You can tell that some objects are closer and other objects further away (although you may have trouble judging the distances as accurately as if you used both eyes). Various cues are used to do this, including:

Perspective: Many objects in our lives have straight edges, and we use the convergence of straight lines in visual perspective to help judge distances.

Relative sizes: Objects that look smaller are (usually) further away. This is more reliable if we know from experience that certain objects are the same size in reality.

Occultation: Objects partially hidden behind other objects are further away. It seems obvious, but it’s certainly a cue that our brain uses to decide which object is nearer and which further away.

Texture: The texture on an object is more easily discernible when it is nearer.

Light and shadow: The interplay of light direction and the shading of surfaces provides cues. A featureless sphere such as a cue ball still looks like a sphere rather than a flat disc because of the gradual change in shading across the surface.

Shaded circle

A circle shaded to present the illusion that it is a sphere, using light and shadow as depth cues. If you squint your eyes so your screen becomes a bit fuzzy, the illusion of three dimensionality can become even stronger.

Motion parallax: With one eye covered, look at an object 2 or 3 metres away. You have some perception of its distance and shape from the above-mentioned cues, but not as much as if both your eyes were open. Now move your head from side to side. The addition of motion produces parallax effects as your eye moves and your brain integrates that information into its mental model of what you are seeing, which improves the depth perception. Pigeons, chickens, and some other birds have limited binocular vision due to their eyes being on the sides of their heads, and they use motion parallax to judge distances, which is why they bob their heads around so much.

Motion parallax animation

Demonstration of motion parallax. You get a strong sense of depth in this animation, even though it is presented on your flat computer screen. (Creative Commons Attribution 3.0 Unported image by Nathaniel Domek, from Wikimedia Commons.)

There are some other depth cues that work with a single eye as well – I don’t want to try to be exhaustive here.

If you uncover both eyes and look at the world around you, your sense of three dimensionality becomes stronger. Now instead of needing motion parallax, you get parallax effects simply by looking with two eyes in different positions. Stereopsis is one of the most powerful depth cues we have, and it can often be used to override or trick the other cues, giving us a sense of three-dimensionality where none exists. This is the principle behind 3D movies, as well as 3D images printed on flat paper or displayed on a flat screen. The trick is to have one eye see one image, and the other eye see a slightly different image of the same scene, from an appropriate parallax viewpoint.

In modern 3D movies this is accomplished by projecting two images onto the screen simultaneously through two different polarising filters, with the planes of polarisation oriented at 90° to one another. The glasses we wear contain matched polarising filters: the left eye filter blocks the right eye projection while letting the left eye projection through, and vice versa for the right eye. The result is that we see two different images, one with each eye, and our brains combine them to produce the sensation of depth.

Another important binocular depth cue is convergence. To look at an object nearby, your eyes have to point inwards so they are both focused on the same point. For an object further away, your eyes look more parallel. Like your lenses, the muscles that control this send signals to your brain, which it interprets as a distance measure. Convergence can be a problem with 3D movies and images if the image creator is not careful. Although stereopsis can provide the illusion of depth, if it’s not also matched with convergence then there can be conflicting depth cues to your brain. Another factor is that accommodation tells you that all objects are at the distance of the display screen. The resulting disconnects between depth cues are what makes some people feel nauseated or headachy when viewing 3D images.

To create 3D images using stereopsis, you need to have two images of the same scene, as seen from different positions. One method is to have two cameras side by side. This can be used for video too, and is the method used for live 3D broadcasts, such as sports. Interestingly, however, this is not the most common method of making 3D movies.

Coronet 3D camera

A 3D camera produced by the Coronet Camera Company. Note the two lenses at the front, separated by roughly the same spacing as human eyes. (Creative Commons Attribution 3.0 Unported image by Wikimedia Commons user Bilby, from Wikimedia Commons.)

3D movies are generally shot with a single camera, and then an artificial second image is made for each frame during the post-production phase. This is done by a skilled 3D artist, using software to model the depths to various objects in each shot, and then manipulate the pixels of the image by shifting them left or right by different amounts, and painting in any areas where pixel shifts leave blank pixels behind. The reason it’s done this way is that this gives the artist control over how extreme the stereo depth effect is, and this can be manipulated to make objects appear closer or further away than they were during shooting. It’s also necessary to match depth disparities of salient objects between scenes on either side of a scene cut, to avoid the jarring effect of the main character or other objects suddenly popping backwards and forwards across scene cuts. Finally, the depth disparity pixel shifts required for cinema projection are different to the ones required for home video on a TV screen, because of the different viewing geometries. So a high quality 3D Blu-ray of a movie will have different depth disparities to the cinematic release. Essentially, construction of the “second eye” image is a complex artistic and technical consideration of modern film making, which cannot simply be left to chance by shooting with two cameras at once. See “Nonlinear disparity mapping for stereoscopic 3D” by Lang et al.[1], for example, which discusses these issues in detail.

For a still photo however, shooting with two cameras at the same time is the best method. And for scientific shape measurement using stereographic imaging, two cameras taking real images is necessary. One application of this is satellite terrain mapping.

The French space agency CNES launched the SPOT 1 satellite in 1986 into a sun-synchronous polar orbit, meaning it orbits around the poles and maintains a constant angle to the sun, as the Earth rotates beneath it. This brought any point on the surface into the imaging field below the satellite every 26 days. SPOT 1 took multiple photos of areas of Earth in different orbital passes, from different locations in space. These images could then be analysed to match features and triangulate the distances to points on the terrain, essentially forming a stereoscopic image of the Earth’s surface. This reveals the height of topographic features: hills, mountains, and so on. SPOT 1 was the first satellite to produce directly imaged stereo altitude data for the Earth. It was later joined and replaced by SPOT 2 through 7, as well as similar imaging satellites launched by other countries.

Diagram of satellite stereo imaging

Diagram illustrating the principle of satellite stereo terrain mapping. As the satellite orbits Earth, it takes photos of the same region of Earth from different positions. These are then triangulated to give altitude data for the terrain. (Background image is a public domain photo from the International Space Station by NASA. Satellite diagram is a public domain image of GOES-8 satellite by U.S. National Oceanic and Atmospheric Administration.)

Now, if we’re taking photos of the Earth and using them to calculate altitude data, how important is the fact that the Earth is spherical? If you look at a small area, say a few city blocks, the curvature of the Earth is not readily apparent and you can treat the underlying terrain as flat, with modifications by strictly local topography, without significant error. But as you image larger areas, getting up to hundreds of kilometres, the 3D shape revealed by the stereo imaging consists of the local topography superimposed on a spherical surface, not on a flat plane. If you don’t account for the spherical baseline, you end up with progressively larger altitude errors as your imaged area increases.

A research paper on the mathematics of registering stereo satellite images to obtain altitude data includes the following passage[2]:

Correction of Earth Curvature

If the 3D-GK coordinate system X, Y, Z and the local Cartesian coordinate system Xg, Yg, Zg are both set with their origins at the scene centre, the difference in Xg and X or Yg and Y will be negligible, but for Z and Zg [i.e. the height coordinates] the difference will be appreciable as a result of Earth curvature. The height error at a ground point S km away from the origin is given by the well-known expression:

ΔZ = Y²/2R km

Where R = 6367 km. This effect amounts to 67 m in the margin of the SPOT scene used for the reported experiments.

The size of the test scene was 50×60 km, and at this scale you get altitude errors of up to 67 metres if you assume the Earth is flat, which is a large error!
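
You can check the size of this effect yourself with a few lines of Python, using the quoted formula (a back-of-envelope sketch only; the exact 67 m figure depends on the particular scene geometry):

```python
# Height error from neglecting Earth curvature, using the quoted
# expression with R = 6367 km. Input distance is measured from the
# scene centre, in km; output is in metres.
R_KM = 6367.0

def curvature_height_error_m(distance_km):
    return distance_km ** 2 / (2 * R_KM) * 1000

for d in (5, 10, 25, 30, 50, 100):
    print(f"{d:>4} km from scene centre: {curvature_height_error_m(d):7.1f} m")
# At ~30 km (the margin of a 50 x 60 km SPOT scene) the error is
# already around 70 m, in line with the 67 m reported in the paper.
```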

Another paper compares the mathematical solution of stereo satellite altitude data to that of aerial photography (from a plane)[3]:

Some of the approximations used for handling usual aerial photos are not acceptable for space images. The mathematical model is based on an orthogonal coordinate system and perspective image geometry. […] In the case of direct use of the national net coordinates, the effect of the earth curvature is respected by a correction of the image coordinates and the effect of the map projection is neglected. This will lead to [unacceptable] remaining errors for space images. […] The influence of the earth curvature correction is negligible for aerial photos because of the smaller flying height Zf. For a [satellite] flying height of 300 km we do have a scale error of the ground height of 1:20 or 5%.

So the terrain mappers using stereo satellite data need to be aware of and correct for the curvature of the Earth to get their data to come out accurately.

Terrain mapping is done on relatively small patches of Earth. But we’ve already seen, in our first proof, photos of Earth taken from far enough away that you can see (one side of) the whole planet, such as the Blue Marble photo. Can we do one better, and look at two photos of the Earth taken from different positions at the same time? Yes, we can!

The U.S. National Oceanic and Atmospheric Administration operates the Geostationary Operational Environmental Satellite (GOES) system, manufactured and launched by NASA. Since 1975, NASA has launched 17 GOES satellites, the last four of which are currently operational as Earth observation platforms. The GOES satellites are in geostationary orbit 35790 km above the equator, positioned over the Americas. GOES-16 is also known as GOES-East, providing coverage of the eastern USA, while GOES-17 is known as GOES-West, providing coverage of the western USA. This means that these two satellites can take images of Earth at the same time from two slightly different positions (“slightly” here means a few thousand kilometres).

This means we can get stereo views of the whole Earth. We could in principle use this to calculate the shape of the Earth by triangulation using some mathematics, but there’s an even cooler thing we can do. If we view a GOES-16 image with our right eye, while viewing a GOES-17 image taken at the same time with our left eye, we can get a 3D view of the Earth from space. Let’s try it!

The following images show cross-eyed and parallel viewing pairs of GOES-16/GOES-17 images. Depending on which viewing method works for you, you should be able to see a stereo 3D image of Earth. (Cross-eyed stereo viewing seems to be the most popular method on the Internet, but personally I’ve never been able to get it to work, whereas I find the parallel method fairly easy. I find it works best if I put my face very close to the screen to lock onto the initial image fusion, and then slowly pull my head backwards. Another option, if you have a VR viewer for your phone such as Google Cardboard, is to load the parallel image onto your phone and view it with your VR viewer.)

GOES stereo image, cross-eyed

Stereo pair images of Earth from NASA’s GOES-16 (left) and GOES-17 (right) satellites taken at the same time on the same date, 1400 UTC, 12 July 2018. This is a cross-eyed viewing pair: to see the 3D image, cross your eyes until three images appear, and focus on the middle image. It will probably be easier if you reduce the size of the image on your screen using your browser’s zoom function. (Public domain image by NASA, from [4].)

GOES stereo image, parallel

The same stereo pair presented with the GOES-16 view on the right and GOES-17 on the left. This is a parallel viewing pair: to see the 3D image, relax your eyes so the left eye views the left image and the right eye views the right image, until three images appear, and focus on the middle image. It will probably be easier if you reduce the size of the image on your screen using your browser’s zoom function. (Public domain image by NASA, from [4].)

Unfortunately these images are cropped, but if you managed to get the 3D viewing to work, you will have seen that your brain automatically does the distance calculations just as it would for a real object, and you can see for yourself, with your own eyes, that the Earth is rounded, not flat.

I’ve saved the best for last. The Japan Meteorological Agency operates the Himawari-8 weather satellite, and the Korea Meteorological Administration operates the GEO-KOMPSAT-2A satellite. Again, these are both in geostationary orbits above the equator, this time placed so that Himawari-8 has the best view of Japan, while GEO-KOMPSAT-2A has the best view of Korea, slightly to the west. And this time I found uncropped whole-Earth images from these two satellites taken at the same time, presented again as cross-eyed and then parallel viewing pairs:

Himawari-KOMPSAT stereo image, cross-eyed

Stereo pair images of Earth from Japan Meteorological Agency’s Himawari-8 (left) and Korea Meteorological Administration’s GEO-KOMPSAT-2A (right) satellites taken at the same time on the same date, 0310 UTC, 26 January 2019. This is a cross-eyed viewing pair. (Image reproduced and modified from [5].)

Himawari-KOMPSAT stereo image, parallel

The same stereo pair presented with Himawari-8 view on the right and GEO-KOMPSAT-2A on the left. This is a parallel viewing pair. (Image reproduced and modified from [5].)

For those who have trouble with free stereo viewing, I’ve also turned these photos into a red-cyan anaglyphic 3D image, which is viewable with red-cyan 3D glasses (the most common sort of coloured 3D glasses):

Himawari-KOMPSAT stereo image, anaglyph

The same stereo pair rendered as a red-cyan anaglyph. The stereo separation of the viewpoints is rather large, so it may be difficult to see the 3D effect at full size – it should help to reduce the image size using your browser’s zoom function, place your head close to your screen, and gently move side to side until the image fuses, then pull back slowly.
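
If you’re curious how such an anaglyph is put together, here’s a minimal sketch using the Python Imaging Library – not the exact processing used for the image above, just the basic idea. The left-eye view supplies the red channel and the right-eye view supplies the green and blue channels (the file names are placeholders for an aligned stereo pair of equal size):

```python
# Build a simple red-cyan anaglyph from an aligned stereo pair.
# "left.png" and "right.png" are placeholder filenames.
from PIL import Image

left = Image.open("left.png").convert("RGB")
right = Image.open("right.png").convert("RGB")

r, _, _ = left.split()       # red channel comes from the left-eye view
_, g, b = right.split()      # green and blue come from the right-eye view

Image.merge("RGB", (r, g, b)).save("anaglyph.png")
```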

Hopefully you managed to get at least one of these 3D images to work for you (unfortunately some people find viewing stereo 3D images difficult). If you did, well, I don’t need to point out what you saw. The Earth is clearly, as seen with your own eyes, shaped like a sphere, not a flat disc.

References:

[1] Lang, M., Hornung, A., Wang, O., Poulakos, S., Smolic, A., Gross, M. “Nonlinear disparity mapping for stereoscopic 3D”. ACM Transactions on Graphics, 29 (4), p. 75-84. ACM, 2010. http://dx.doi.org/10.1145/1833349.1778812

[2] Hattori, S., Ono, T., Fraser, C., Hasegawa, H. “Orientation of high-resolution satellite images based on affine projection”. International Archives of Photogrammetry and Remote Sensing, 33(B3/1; PART 3) p. 359-366, 2000. https://www.isprs.org/proceedings/Xxxiii/congress/part3/359_XXXIII-part3.pdf

[3] Jacobsen, K. “Geometric aspects of high resolution satellite sensors for mapping”. ASPRS The Imaging & Geospatial Information Society Annual Convention 1997 Seattle. 1100(305), p. 230, 1997. https://www.ipi.uni-hannover.de/uploads/tx_tkpublikationen/jac_97_geom_hrss.pdf

[4] CIMSS Satellite blog, Space Science and Engineering Center, University of Wisconsin-Madison, “Stereoscopic views of Convection using GOES-16 and GOES-17”. 2018-07-12. https://cimss.ssec.wisc.edu/goes/blog/archives/28920 (accessed 2019-09-26).

[5] CIMSS Satellite blog, Space Science and Engineering Center, University of Wisconsin-Madison, “First GEOKOMPSAT-2A imagery (in stereo view with Himawari-8)”. 2019-02-04. https://cimss.ssec.wisc.edu/goes/blog/archives/31559 (accessed 2019-09-26).

27. Camera image stabilisation

Cameras are devices for capturing photographic images. A camera is basically a box with an opening in one wall that lets light enter the box and form an image on the opposite wall. The earliest such “cameras” were what are now known as camera obscuras, which are closed rooms with a small hole in one wall. The name “camera obscura” comes from Latin: “camera” meaning “room” and “obscura” meaning “dark”. (Which is incidentally why in English “camera” refers to a photographic device, while in Italian “camera” means a room.)

A camera obscura works on the principle that light travels in straight lines. How it forms an image is easiest to see with reference to a diagram:

Camera obscura diagram

Diagram illustrating the principle of a camera obscura. (Public domain image from Wikimedia Commons.)

In the diagram, the room on the right is enclosed and light can only enter through the hole C. Light from the head region A of a person standing outside enters the hole C, travelling in a straight line, until it hits the far wall of the room near the floor. Light from the person’s feet B travels through the hole C and ends up hitting the far wall near the ceiling. Light from the person’s torso D hits the far wall somewhere in between. We can see that all of the light from the person that enters through the hole C ends up projected on the far wall in such a way that it creates an image of the person, upside down. The image is faint, so the room needs to be dark in order to see it.
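
Because the geometry is nothing more than straight lines through a point, it’s easy to compute where things land on the far wall. Here’s a tiny sketch (the distances are invented purely for illustration):

```python
# Minimal sketch of the camera obscura geometry described above:
# light travels in straight lines through the hole, so a point at
# height y relative to the hole, a distance u in front of it, lands
# at height -y * v / u on a wall a distance v behind the hole
# (the minus sign is the up/down inversion). Units are metres and
# the numbers below are made up for illustration.
def project(y, u, v):
    return -y * v / u

u, v = 5.0, 3.0                  # person 5 m outside, wall 3 m inside
hole_height = 1.0                # hole 1 m above the ground
for label, height in [("head", 1.7), ("torso", 0.9), ("feet", 0.0)]:
    y = height - hole_height     # height relative to the hole
    print(f"{label:>5}: lands {project(y, u, v):+.2f} m from hole level")
# head -> -0.42 m (towards the floor), feet -> +0.60 m (towards the
# ceiling): the projected image is upside down, as in the diagram.
```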

If you have a modern photographic camera, you can expose it for a long time to capture a photo of the faint projected image inside the room (which is upside down).

Camera obscura photo

A room turned into a camera obscura, at the Camden Arts Centre, London. (Creative Commons Attribution 2.0 image by Flickr user Kevan, from Flickr.)

The hole in the wall needs to be small to keep the image reasonably sharp. If the hole is large, the rays of light from a single point in the scene outside spread out and land on multiple points on the far wall, making the image blurry – the larger the hole, the brighter the image, but the blurrier it becomes. You can overcome this by placing a lens in the hole, which focuses the incoming light back down to a sharp image on the wall.

Camera obscura photo

Camera obscura using a lens to focus the incoming light for a brighter, sharper image. (Creative Commons Attribution 2.0 image by Flickr user Willi Winzig, from Flickr.)
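
As a rough sketch of the trade-off (ignoring diffraction, and with invented numbers): for a distant scene each point is smeared into a spot roughly the size of the hole itself, while the amount of light admitted grows with the hole’s area.

```python
# Sharpness versus brightness for a bare pinhole (no lens).
# Geometric blur spot ~ hole diameter scaled by (u + v) / u, where
# u is the object distance and v the hole-to-wall distance;
# brightness is proportional to the hole's area. Illustrative only.
def blur_spot_mm(hole_mm, u_m, v_m):
    return hole_mm * (u_m + v_m) / u_m

def relative_brightness(hole_mm, reference_mm=1.0):
    return (hole_mm / reference_mm) ** 2

for hole in (0.5, 1, 5, 20, 100):                    # hole diameter, mm
    print(f"{hole:6.1f} mm hole: ~{blur_spot_mm(hole, 10, 3):6.1f} mm blur, "
          f"{relative_brightness(hole):9.2f}x light")
```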

A photographic camera is essentially a small, portable camera obscura, using a lens to focus an image of the outside world onto the inside of the back of the camera. The critical difference is that where the image forms on the back wall, there is some sort of light-sensitive device that records the pattern of light, shadow, and colour. The first cameras used light-sensitive chemicals, coated onto a flat surface. The light causes chemical reactions that change physical properties of the chemicals, such as hardness or colour. Various processes can then be used to convert the chemically coated surface into an image that more or less resembles the scene that was projected into the camera. Modern chemical photography uses film as the chemical support medium, but glass was popular in the past and is still used for specialty purposes today.

More recently, photographic film has been largely displaced by digital electronic light sensors. Sensor manufacturers make silicon chips that contain millions of tiny individual light sensors arranged in a rectangular grid pattern. Each one records the amount of light that hits it, and that information is recorded as one pixel in a digital image file – the file holding millions of pixels that encode the image.

Camera cross section

Cross section of a modern camera, showing the light path through the lens to the digital image sensor. In this camera, a partially silvered fixed mirror reflects a fraction of the light to a dedicated autofocus sensor, and the viewfinder is electronic (this is not a single-lens reflex (SLR) design). (Photo by me.)

One important parameter in photography is the exposure time (also known as “shutter speed”). The hole where the light enters is covered by a shutter, which opens when you press the camera button and closes a little bit later, with the delay controlled by the camera settings. The longer you leave the shutter open, the more light is collected and the brighter the resulting image. In bright sunlight you might only need to expose the camera for a thousandth of a second or less. In dimmer conditions, such as indoors or at night, you need to leave the shutter open for longer, sometimes up to several seconds, to make a satisfactory image.

A problem is that people are not good at holding a camera still for more than a fraction of a second. Our hands shake by small amounts which, while insignificant for most purposes, are large enough to make a long exposure photograph blurry, because the camera points in slightly different directions during the exposure. Photographers use a rule of thumb to determine the longest shutter speed that can safely be used: for a standard 35 mm SLR camera, take the reciprocal of the focal length of the lens in millimetres, and that is the longest usable exposure time in seconds for hand-held photography. For example, when shooting with a 50 mm lens, your exposure should be 1/50 second or less to avoid blur caused by hand shake. Longer exposures will tend to be blurry.
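
The rule of thumb is easy to express as a tiny function (it’s a guideline for 35 mm-format cameras, not a law of optics):

```python
# Reciprocal rule: longest hand-holdable exposure is roughly 1/f
# seconds for a lens of focal length f mm on a 35 mm-format camera.
def max_handheld_exposure_s(focal_length_mm):
    return 1.0 / focal_length_mm

for f in (24, 50, 70, 200):
    print(f"{f:>4} mm lens: about 1/{f} s = {max_handheld_exposure_s(f):.4f} s")
```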

Camera shake

A photo I took with a long exposure (0.3 seconds) on a (stationary) train. Besides the movement of the people, the background is also blurred by the shaking of my hands; the signs above the door are blurred to illegibility.

The traditional solution has been to use a tripod to hold the camera still while taking a photo, but most people don’t want to carry around a tripod. Since the mid-1990s, another solution has become available: image stabilisation. Image stabilisation uses technology to mitigate or undo the effects of hand shake during image capture. There are two types of image stabilisation:

1. Optical image stabilisation was the first type invented. The basic principle is to move certain optical components of the camera to compensate for the shaking of the camera body, keeping the image in the same place on the sensor. Gyroscopes are used to measure the tilting of the camera body caused by hand shake, and servo motors physically move the lens elements or the image sensor (or both) to compensate. The motions are very small, but crucial, because the size of a pixel on a modern camera sensor is only a few micrometres, so if the image moves by more than a few micrometres it will become blurry. (A small numerical sketch of this geometry follows after this list.)

Image stabilised photo

Optically image stabilised photo of a dim lighthouse interior. The exposure is 0.5 seconds, even longer than the previous photo, but the image stabilisation system mitigates the effects of hand shake, and details in the photo remain relatively unblurred. (Photo by me.)

2. Digital image stabilisation is a newer technology, which relies on image processing rather than on moving physical components in the camera. Digital image processing can go some way towards removing the blur from an image, but it is never a perfect process, because blurring loses some of the information irretrievably. Another approach is to capture multiple shorter exposures and combine them afterwards. This produces a composite longer exposure, but each sub-image can be shifted slightly to compensate for any motion of the camera before they are added together. Although digital image stabilisation is fascinating, for this article we are actually concerned with optical image stabilisation, so I’ll say no more about the digital kind.
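
Here’s the small numerical sketch promised above, showing why the corrections in optical stabilisation are so tiny. For a small tilt of the camera by an angle θ, the image shifts on the sensor by roughly f·tan θ, where f is the focal length (the numbers below are my own illustrative choices, not from any particular camera):

```python
# How far does the image move on the sensor when the camera tilts?
# Shift is approximately focal_length * tan(tilt angle).
import math

def image_shift_um(focal_length_mm, tilt_deg):
    return focal_length_mm * 1000 * math.tan(math.radians(tilt_deg))

f_mm = 50
pixel_um = 5
for tilt in (0.001, 0.01, 0.05):              # plausible hand-shake tilts
    shift = image_shift_um(f_mm, tilt)
    print(f"tilt {tilt:5.3f} deg -> shift {shift:6.2f} um "
          f"(~{shift / pixel_um:4.1f} pixels)")
# Even a few hundredths of a degree of shake moves the image by
# several pixels, so the stabiliser has to counter-move the optics
# by matching micrometre-scale amounts.
```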

Early optical image stabilisation hardware could stabilise an image by about 2 to 3 stops of exposure. A “stop” is a term referring to an increase or decrease in exposure by a factor of 2. With 3 stops of image stabilisation, you can safely increase your exposure time by a factor of 2^3 = 8. So if using a 50 mm lens, rather than needing an exposure of 1/50 second or less, you can get away with about 1/6 second or less, a significant improvement.
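
In code, stops are just powers of two multiplying the safe exposure time from the reciprocal rule (a sketch, using a 50 mm lens as the baseline):

```python
# n stops of stabilisation multiply the safe exposure time by 2**n.
def stabilised_exposure_s(focal_length_mm, stops):
    return (1.0 / focal_length_mm) * 2 ** stops

for stops in (0, 2, 3, 6.5):
    print(f"{stops:>4} stops with a 50 mm lens: "
          f"up to ~{stabilised_exposure_s(50, stops):.3f} s hand-held")
# 3 stops -> 0.160 s (about 1/6 s); 6.5 stops -> ~1.8 s.
```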

Image stabilisation system diagram

Optical image stabilisation system diagram from a US patent by Canon. The symbols p and y refer to pitch and yaw, which are rotations as defined by the axes shown at 61. 63p and 63y are pitch and yaw sensors (i.e. gyroscopes), which send signals to electronics (65p and 65y) to control actuator motors (67p and 67y) to move the lens element 5, in order to keep the image steady on the sensor 69. 68p and 68y are position feedback sensors. (Figure reproduced from [1].)

Newer technology has improved optical image stabilisation to about 6.5 stops. This gives an improvement by a factor of 2^6.5 ≈ 91, so that 1/50 second exposure can now be stretched to almost 2 seconds without blurring. Will we soon see further improvements giving even more stops of optical stabilisation?

Interestingly, the answer is no. At least not without a fundamentally different technology. According to an interview with Setsuya Kataoka, Deputy Division Manager of the Imaging Product Development Division of Olympus Corporation, 6.5 stops is the theoretical upper limit of gyroscope-based optical image stabilisation. Why? In his words[2]:

6.5 stops is actually a theoretical limitation at the moment due to rotation of the earth interfering with gyro sensors.

Wait, what?

This is a professional camera engineer, saying that it’s not possible to further improve camera image stabilisation technology because of the rotation of the Earth. Let’s examine why that might be.

As calculated above, when we’re in the realm of 6.5 stops of image stabilisation, a typical exposure is going to be of the order of a second or so. The gyroscopes inside the camera are attempting to keep the camera’s optical system effectively stationary, compensating for the photographer’s shaky hands. However, in one second the Earth rotates by an angle of 0.0042° (equal to 360° divided by the sidereal rotation period of the Earth, 86164 seconds). And gyroscopes hold their position in an inertial frame, not in the rotating frame of the Earth. So if the camera is optically locked to the angle of the gyroscope at the start of the exposure, one second later it will be out by an angle of 0.0042°. So what?

Well, a typical digital camera sensor contains pixels of the order of 5 μm across. With a focal length of 50 mm, a pixel subtends an angle of 5/50000×(180/π) = 0.006°. That’s very close to the same angle. In fact if we change to a focal length of 70 mm (roughly the border between a standard and telephoto lens, so very reasonable for consumer cameras), the angles come out virtually the same.

What this means is that if we take a 1 second exposure with a 70 mm lens (or a 2 second exposure with a 35 mm lens, and so on), with an optically stabilised camera system that perfectly locks onto a gyroscopic stabilisation system, the rotation of the Earth will cause the image to drift by a full pixel on the image sensor. In other words, the image will become blurred. This theoretical limit to the performance of optical image stabilisation, as conceded by professional camera engineers, demonstrates that the Earth is rotating once per day.
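
Here’s the same back-of-envelope comparison as a short script, so you can try other focal lengths and exposure times (this is the worst-case drift, when the camera’s axes are oriented unfavourably relative to the Earth’s rotation axis):

```python
# Compare the image drift caused by the Earth's rotation during an
# exposure (for a gyro-locked camera) with the angle covered by one
# sensor pixel.
import math

SIDEREAL_DAY_S = 86164.0
EARTH_RATE_DEG_PER_S = 360.0 / SIDEREAL_DAY_S          # ~0.0042 deg/s

def pixel_angle_deg(pixel_um, focal_length_mm):
    return math.degrees(pixel_um / (focal_length_mm * 1000))

def drift_pixels(exposure_s, pixel_um, focal_length_mm):
    return (EARTH_RATE_DEG_PER_S * exposure_s
            / pixel_angle_deg(pixel_um, focal_length_mm))

for f, t in [(50, 1.0), (70, 1.0), (35, 2.0)]:
    print(f"{f:>3} mm lens, {t} s exposure: "
          f"drift ~{drift_pixels(t, 5, f):.2f} pixels")
# With a 70 mm lens and a 1 s exposure (or a 35 mm lens and 2 s) the
# worst-case drift is about one full pixel - the limit described above.
```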

To tie this in to our theme of comparing to a flat Earth, I’ll concede that this current limitation would also occur if the flat Earth rotated once per day. However, the majority of flat Earth models deny that the Earth rotates, preferring the cycle of day and night to be generated by the motion of a relatively small, nearby sun. The current engineering limitations of camera optical image stabilisation rule out the non-rotating flat Earth model.

You could in theory compensate for the angular error caused by the Earth’s rotation, but to do that you’d need to know which direction your camera was pointing relative to the Earth’s rotation axis. Photographers hold their cameras in all sorts of orientations, so you can’t assume this; you need to know both the direction of gravity relative to the camera and your latitude. There are devices that measure these (accelerometers and GPS receivers), so maybe some day soon camera engineers will include their data to further improve image stabilisation. At that point, the technology will rely on the fact that the Earth is spherical – because the orientation of gravity relative to the rotation axis changes with latitude, whereas on a rotating flat Earth gravity would always be at a constant angle to the rotation axis (parallel to it, in the simple case of a flat Earth spinning like a CD).
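
To sketch what such a correction would need to know (my own illustration, not a description of any existing camera firmware): at latitude φ, the Earth’s rotation vector expressed in the local east-north-up frame is Ω(0, cos φ, sin φ). A camera that knew its latitude and its orientation in that frame could project this vector onto its own pitch and yaw axes and subtract the resulting drift from its gyroscope signal.

```python
# Earth's rotation rate resolved into the local east-north-up frame
# at a given latitude. The split between horizontal and vertical
# components changes with latitude, which is why the correction has
# to know where on the globe the camera is.
import math

OMEGA_DEG_PER_S = 360.0 / 86164.0        # Earth's rotation rate

def earth_rotation_enu(latitude_deg):
    phi = math.radians(latitude_deg)
    return (0.0,                                   # about the east axis
            OMEGA_DEG_PER_S * math.cos(phi),       # about the north axis
            OMEGA_DEG_PER_S * math.sin(phi))       # about the vertical axis

for lat in (0, 35, 90):
    e, n, u = earth_rotation_enu(lat)
    print(f"latitude {lat:>2} deg: north {n:.5f} deg/s, up {u:.5f} deg/s")
```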

And the fact that your future camera can perform 7+ stops of image stabilisation will depend on the fact that the Earth is a globe.

References:

[1] Toyoda, Y. “Image stabilizer”. US Patent 6064827, filed 1998-05-12, granted 2000-05-16. https://pdfpiw.uspto.gov/.piw?docid=06064827

[2] Westlake, Andy. “Exclusive interview: Setsuya Kataoka of Olympus”. Amateur Photographer, 2016. https://www.amateurphotographer.co.uk/latest/photo-news/exclusive-interview-setsuya-kataoka-olympus-95731 (accessed 2019-09-18).