28. Stereo imaging

We see the world in 3D. What this means is that our visual system—comprising the eyes, optic nerves, and brain—takes the sensory input from our eyes and interprets it in a way that gives us the sensation that we exist in a three-dimensional world. This sensation is often called depth perception.

In practical terms, depth perception means that we are good at estimating the relative distances from ourselves to objects in our field of vision. You can tell if an object is nearer or further away just by looking at it (weird cases like optical illusions aside). A corollary of this is that you can tell the three-dimensional shape of an object just by looking at it (again, optical illusions aside). A basketball looks like a sphere, not like a flat circle. You can tell if a surface that you see is curved or flat.

To do this, our brain relies on various elements of visual data known as depth cues. The best known depth cue is stereopsis: the interpretation of the differing views of the left and right eyes caused by the parallax effect, which your brain innately uses to triangulate distances. You can easily observe parallax by looking at something in the distance, holding up a finger at arm’s length, and alternately closing your left and right eyes. In the view from each eye, your finger appears to move left or right relative to the background. And with both eyes open, if you focus on the background, you see two images of your finger. This tells your brain that your finger is much closer than the background.

Parallax effect

Illustration of parallax. The dog is closer than the background scene. Sightlines from your left and right eyes passing through the dog project to different areas of the background. So the views seen by your left and right eyes show the dog in different positions relative to the background. (The effect is exaggerated here for clarity.)
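To make the triangulation idea concrete, here is a minimal sketch in Python of how a distance can be recovered from the angular shift between two viewpoints. The eye separation and parallax angle below are illustrative numbers of my own, not measured values.

```python
import math

def distance_from_parallax(baseline_m, parallax_deg):
    """Estimate the distance to an object from the separation of two
    viewpoints (e.g. your two eyes) and the angular shift (parallax) of
    the object against a distant background, using the small-angle
    approximation: distance ~ baseline / parallax angle in radians."""
    return baseline_m / math.radians(parallax_deg)

# Hypothetical numbers: eyes about 6.5 cm apart, a finger at arm's length
# appearing to shift by about 6 degrees against the background.
print(distance_from_parallax(0.065, 6.0))  # ~0.62 m, roughly arm's length
```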

We’ll discuss stereopsis in more detail below, but first it’s interesting to know that stereopsis is not the only depth cue our brains use. There are many physically different depth cues, and most of them work even with a single eye.

Cover one eye and look at objects nearby, such as on your desk. Reach out and try to touch them gently with a fingertip, as a test of how well you can judge their depth. For objects within an easy hand’s reach you can probably do pretty well; for objects you need to stretch to touch you might do a little worse, but probably not as badly as you expected. Your uncovered eye needs to adjust the focus of its lens in order to keep the image sharp on the retina. Muscles in your eye squeeze the lens to change its shape, thus adjusting the focus. Nerves send these muscle signals to your brain, which subconsciously uses them to help gauge the distance to the object. This depth cue is known as accommodation, and it is most accurate within a metre or two, because it is within this range that the greatest lens adjustments need to be made.

With one eye covered, look at objects further away, such as across the room. You can tell that some objects are closer and other objects further away (although you may have trouble judging the distances as accurately as if you used both eyes). Various cues are used to do this, including:

Perspective: Many objects in our lives have straight edges, and we use the convergence of straight lines in visual perspective to help judge distances.

Relative sizes: Objects that look smaller are (usually) further away. This is more reliable if we know from experience that certain objects are the same size in reality.

Occultation: Objects partially hidden behind other objects are further away. It seems obvious, but it’s certainly a cue that our brain uses to decide which object is nearer and which further away.

Texture: The texture on an object is more easily discernible when it is nearer.

Light and shadow: The interplay of light direction and the shading of surfaces provides cues. A featureless sphere such as a cue ball still looks like a sphere rather than a flat disc because of the gradual change in shading across the surface.

Shaded circle

A circle shaded to present the illusion that it is a sphere, using light and shadow as depth cues. If you squint your eyes so your screen becomes a bit fuzzy, the illusion of three dimensionality can become even stronger.

Motion parallax: With one eye covered, look at an object 2 or 3 metres away. You have some perception of its distance and shape from the above-mentioned cues, but not as much as if both your eyes were open. Now move your head from side to side. The addition of motion produces parallax effects as your eye moves and your brain integrates that information into its mental model of what you are seeing, which improves the depth perception. Pigeons, chickens, and some other birds have limited binocular vision due to their eyes being on the sides of their heads, and they use motion parallax to judge distances, which is why they bob their heads around so much.

Motion parallax animation

Demonstration of motion parallax. You get a strong sense of depth in this animation, even though it is presented on your flat computer screen. (Creative Commons Attribution 3.0 Unported image by Nathaniel Domek, from Wikimedia Commons.)

There are some other depth cues that work with a single eye as well – I don’t want to try to be exhaustive here.

If you uncover both eyes and look at the world around you, your sense of three dimensionality becomes stronger. Now instead of needing motion parallax, you get parallax effects simply by looking with two eyes in different positions. Stereopsis is one of the most powerful depth cues we have, and it can often be used to override or trick the other cues, giving us a sense of three-dimensionality where none exists. This is the principle behind 3D movies, as well as 3D images printed on flat paper or displayed on a flat screen. The trick is to have one eye see one image, and the other eye see a slightly different image of the same scene, from an appropriate parallax viewpoint.

In modern 3D movies this is accomplished by projecting two images onto the screen simultaneously through two different polarising filters, with the planes of polarisation oriented at 90° to one another. The glasses we wear contain matched polarising filters: the left eye filter blocks the right eye projection while letting the left eye projection through, and vice versa for the right eye. The result is that we see two different images, one with each eye, and our brains combine them to produce the sensation of depth.

Another important binocular depth cue is convergence. To look at an object nearby, your eyes have to point inwards so they are both directed at the same point; for an object further away, your eyes look more nearly parallel. As with the lens-focusing muscles, the muscles that control convergence send signals to your brain, which interprets them as a distance measure. Convergence can be a problem with 3D movies and images if the image creator is not careful. Although stereopsis can provide the illusion of depth, if it’s not also matched with convergence then your brain receives conflicting depth cues. Another factor is that accommodation tells you that all objects are at the distance of the display screen. The resulting disconnects between depth cues are what make some people feel nauseated or headachy when viewing 3D images.

To create 3D images using stereopsis, you need to have two images of the same scene, as seen from different positions. One method is to have two cameras side by side. This can be used for video too, and is the method used for live 3D broadcasts, such as sports. Interestingly, however, this is not the most common method of making 3D movies.

Coronet 3D camera

A 3D camera produced by the Coronet Camera Company. Note the two lenses at the front, separated by roughly the same spacing as human eyes. (Creative Commons Attribution 3.0 Unported image by Wikimedia Commons user Bilby, from Wikimedia Commons.)

3D movies are generally shot with a single camera, and an artificial second image is made for each frame during the post-production phase. This is done by a skilled 3D artist, using software to model the depths of the various objects in each shot, then shifting the pixels of the image left or right by different amounts, and painting in any areas where the pixel shifts leave blank pixels behind. The reason it’s done this way is that it gives the artist control over how extreme the stereo depth effect is, which can be manipulated to make objects appear closer or further away than they were during shooting. It’s also necessary to match the depth disparities of salient objects on either side of a scene cut, to avoid the jarring effect of the main character or other objects suddenly popping backwards and forwards across cuts. Finally, the depth disparity pixel shifts required for cinema projection are different to the ones required for home video on a TV screen, because of the different viewing geometries. So a high quality 3D Blu-ray of a movie will have different depth disparities to the cinematic release. Essentially, construction of the “second eye” image is a complex artistic and technical part of modern film making, which cannot simply be left to chance by shooting with two cameras at once. See “Nonlinear disparity mapping for stereoscopic 3D” by Lang et al.[1], for example, which discusses these issues in detail.
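As a rough illustration of the pixel-shifting step described above – not the actual tools or algorithms used in post-production, which are far more sophisticated – a synthetic “second eye” view can be sketched by shifting each pixel horizontally by an amount taken from an assumed per-pixel disparity map; the gaps this leaves behind are the areas a 3D artist has to paint in.

```python
import numpy as np

def synthesize_second_eye(image, disparity):
    """Shift each pixel of `image` horizontally by the per-pixel amount in
    `disparity` (in pixels; nearer objects get larger shifts). Pixels left
    uncovered are set to -1 to mark the holes an artist would paint in.
    Both arrays have shape (height, width); this is a toy nearest-pixel
    version of the idea, not a production algorithm."""
    h, w = image.shape
    out = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(disparity[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# Tiny example: a 1x6 "image" whose middle pixels are nearer (disparity 2).
img = np.array([[10, 20, 30, 40, 50, 60]], dtype=float)
disp = np.array([[0, 0, 2, 2, 0, 0]], dtype=float)
print(synthesize_second_eye(img, disp))  # note the holes (-1) left behind
```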

For a still photo, however, shooting with two cameras at the same time is the best method. And for scientific shape measurement using stereoscopic imaging, real images taken from two different viewpoints are necessary. One application of this is satellite terrain mapping.

The French space agency CNES launched the SPOT 1 satellite in 1986 into a sun-synchronous polar orbit, meaning it orbits around the poles and maintains a constant angle to the sun, as the Earth rotates beneath it. This brought any point on the surface into the imaging field below the satellite every 26 days. SPOT 1 took multiple photos of areas of Earth in different orbital passes, from different locations in space. These images could then be analysed to match features and triangulate the distances to points on the terrain, essentially forming a stereoscopic image of the Earth’s surface. This reveals the height of topographic features: hills, mountains, and so on. SPOT 1 was the first satellite to produce directly imaged stereo altitude data for the Earth. It was later joined and replaced by SPOT 2 through 7, as well as similar imaging satellites launched by other countries.

Diagram of satellite stereo imaging

Diagram illustrating the principle of satellite stereo terrain mapping. As the satellite orbits Earth, it takes photos of the same region of Earth from different positions. These are then triangulated to give altitude data for the terrain. (Background image is a public domain photo from the International Space Station by NASA. Satellite diagram is a public domain image of GOES-8 satellite by U.S. National Oceanic and Atmospheric Administration.)

Now, if we’re taking photos of the Earth and using them to calculate altitude data, how important is the fact that the Earth is spherical? If you look at a small area, say a few city blocks, the curvature of the Earth is not readily apparent and you can treat the underlying terrain as flat, with modifications by strictly local topography, without significant error. But as you image larger areas, getting up to hundreds of kilometres, the 3D shape revealed by the stereo imaging consists of the local topography superimposed on a spherical surface, not on a flat plane. If you don’t account for the spherical baseline, you end up with progressively larger altitude errors as your imaged area increases.

A research paper on the mathematics of registering stereo satellite images to obtain altitude data includes the following passage[2]:

Correction of Earth Curvature

If the 3D-GK coordinate system X, Y, Z and the local Cartesian coordinate system Xg, Yg, Zg are both set with their origins at the scene centre, the difference in Xg and X or Yg and Y will be negligible, but for Z and Zg [i.e. the height coordinates] the difference will be appreciable as a result of Earth curvature. The height error at a ground point S km away from the origin is given by the well-known expression:

ΔZ = Y²/2R km

Where R = 6367 km. This effect amounts to 67 m in the margin of the SPOT scene used for the reported experiments.

The size of the test scene was 50×60 km, and at this scale you get altitude errors of up to 67 metres if you assume the Earth is flat, which is a large error!
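As a quick check on the quoted figure, here is the same calculation in Python. The ~30 km used below is my own estimate of the distance from the scene centre to its margin, based on the stated 50×60 km scene size.

```python
R = 6367.0  # Earth radius used in the paper, in km

def height_error_km(distance_km):
    """Height error from assuming a flat Earth, for a point a given
    distance from the scene centre: dZ = Y^2 / (2R)."""
    return distance_km ** 2 / (2 * R)

# Roughly 30 km from the centre of a 50 x 60 km SPOT scene to its margin.
print(height_error_km(30.0) * 1000)  # ~71 m, consistent with the quoted 67 m
```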

Another paper compares the mathematical solution of stereo satellite altitude data to that of aerial photography (from a plane)[3]:

Some of the approximations used for handling usual aerial photos are not acceptable for space images. The mathematical model is based on an orthogonal coordinate system and perspective image geometry. […] In the case of direct use of the national net coordinates, the effect of the earth curvature is respected by a correction of the image coordinates and the effect of the map projection is neglected. This will lead to [unacceptable] remaining errors for space images. […] The influence of the earth curvature correction is negligible for aerial photos because of the smaller flying height Zf. For a [satellite] flying height of 300 km we do have a scale error of the ground height of 1:20 or 5%.

So the terrain mappers using stereo satellite data need to be aware of and correct for the curvature of the Earth to get their data to come out accurately.

Terrain mapping is done on relatively small patches of Earth. But we’ve already seen, in our first proof, photos of Earth taken from far enough away that you can see (one side of) the whole planet, such as the Blue Marble photo. Can we do one better, and look at two photos of the Earth taken from different positions at the same time? Yes, we can!

The U.S. National Oceanic and Atmospheric Administration operates the Geostationary Operational Environmental Satellite (GOES) system, manufactured and launched by NASA. Since 1975, NASA has launched 17 GOES satellites, the last four of which are currently operational as Earth observation platforms. The GOES satellites are in geostationary orbit 35790 km above the equator, positioned over the Americas. GOES-16 is also known as GOES-East, providing coverage of the eastern USA, while GOES-17 is known as GOES-West, providing coverage of the western USA. This means that these two satellites can take images of Earth at the same time from two slightly different positions (“slightly” here means a few thousand kilometres).

This means we can get stereo views of the whole Earth. We could in principle use this to calculate the shape of the Earth by triangulation using some mathematics, but there’s an even cooler thing we can do. If we view a GOES-16 image with our right eye, while viewing a GOES-17 image taken at the same time with our left eye, we can get a 3D view of the Earth from space. Let’s try it!

The following images show cross-eyed and parallel viewing pairs for GOES-16/GOES-17 images. Depending on your ability to deal with these images, you should be able to view a stereo 3D image of Earth. (Cross-eyed stereo viewing seems to be the most popular method on the Internet, but personally I’ve never been able to get it to work for me, whereas I find the parallel method fairly easy. I find it works best if I put my face very close to the screen to lock onto the initial image fusion, and then slowly pull my head backwards. Another option if you have a VR viewer for your phone, like Google Cardboard, is to load the parallel image onto your phone and view it with your VR viewer.)

GOES stereo image, cross-eyed

Stereo pair images of Earth from NASA’s GOES-16 (left) and GOES-17 (right) satellites taken at the same time on the same date, 1400 UTC, 12 July 2018. This is a cross-eyed viewing pair: to see the 3D image, cross your eyes until three images appear, and focus on the middle image. It will probably be easier if you reduce the size of the image on your screen using your browser’s zoom function. (Public domain image by NASA, from [4].)

GOES stereo image, parallel

The same stereo pair presented with GOES-16 view on the right and GOES-17 on the left. This is a parallel viewing pair: to see the 3D image relax your eyes so the left eye views the left image and the right eye views the right image, until three images appear, and focus on the middle image. It will probably be easier if you reduce the size of the image on your screen using your browser’s zoom function. (Public domain image by NASA, from [4].)

Unfortunately these images are cropped, but if you managed to get the 3D viewing to work, you will have seen that your brain automatically does the distance calculation thing as it would with a real object, and you can see for yourself with your own eyes that the Earth is rounded, not flat.

I’ve saved the best for last. The Japan Meteorological Agency operates the Himawari-8 weather satellite, and the Korea Meteorological Administration operates the GEO-KOMPSAT-2A satellite. Again, these are both in geostationary orbits above the equator, this time placed so that Himawari-8 has the best view of Japan, while GEO-KOMPSAT-2A, situated slightly to the west, has the best view of Korea. And here I found uncropped whole-Earth images from these two satellites taken at the same time, presented again as cross-eyed and then parallel viewing pairs:

Himawari-KOMPSAT stereo image, cross-eyed

Stereo pair images of Earth from Japan Meteorological Agency’s Himawari-8 (left) and Korea Meteorological Administration’s GEO-KOMPSAT-2A (right) satellites taken at the same time on the same date, 0310 UTC, 26 January 2019. This is a cross-eyed viewing pair. (Image reproduced and modified from [5].)

Himawari-KOMPSAT stereo image, parallel

The same stereo pair presented with Himawari-8 view on the right and GEO-KOMPSAT-2A on the left. This is a parallel viewing pair. (Image reproduced and modified from [5].)

For those who have trouble with free stereo viewing, I’ve also turned these photos into a red-cyan anaglyphic 3D image, which is viewable with red-cyan 3D glasses (the most common sort of coloured 3D glasses).

Himawari-KOMPSAT stereo image, anaglyph

The same stereo pair rendered as a red-cyan anaglyph. The stereo separation of the viewpoints is rather large, so it may be difficult to see the 3D effect at full size – it should help to reduce the image size using your browser’s zoom function, place your head close to your screen, and gently move side to side until the image fuses, then pull back slowly.
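For the curious, constructing a red-cyan anaglyph from a stereo pair is conceptually simple: take the red channel from the left-eye image and the green and blue channels from the right-eye image. Below is a minimal sketch using the Pillow and NumPy libraries; the filenames are placeholders, and it assumes the two images are the same size.

```python
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, out_path):
    """Combine a stereo pair into a red-cyan anaglyph: the red channel comes
    from the left-eye image, green and blue from the right-eye image."""
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB")).copy()
    right[:, :, 0] = left[:, :, 0]  # replace red channel with the left eye's
    Image.fromarray(right).save(out_path)

# Placeholder filenames -- substitute the actual stereo pair images.
make_anaglyph("left_eye.png", "right_eye.png", "anaglyph.png")
```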

Hopefully you managed to get at least one of these 3D images to work for you (unfortunately some people find viewing stereo 3D images difficult). If you did, well, I don’t need to point out what you saw. The Earth is clearly, as seen with your own eyes, shaped like a sphere, not a flat disc.

References:

[1] Lang, M., Hornung, A., Wang, O., Poulakos, S., Smolic, A., Gross, M. “Nonlinear disparity mapping for stereoscopic 3D”. ACM Transactions on Graphics, 29 (4), p. 75-84. ACM, 2010. http://dx.doi.org/10.1145/1833349.1778812

[2] Hattori, S., Ono, T., Fraser, C., Hasegawa, H. “Orientation of high-resolution satellite images based on affine projection”. International Archives of Photogrammetry and Remote Sensing, 33(B3/1; PART 3) p. 359-366, 2000. https://www.isprs.org/proceedings/Xxxiii/congress/part3/359_XXXIII-part3.pdf

[3] Jacobsen, K. “Geometric aspects of high resolution satellite sensors for mapping”. ASPRS The Imaging & Geospatial Information Society Annual Convention 1997 Seattle. 1100(305), p. 230, 1997. https://www.ipi.uni-hannover.de/uploads/tx_tkpublikationen/jac_97_geom_hrss.pdf

[4] CIMSS Satellite blog, Space Science and Engineering Center, University of Wisconsin-Madison, “Stereoscopic views of Convection using GOES-16 and GOES-17”. 2018-07-12. https://cimss.ssec.wisc.edu/goes/blog/archives/28920 (accessed 2019-09-26).

[5] CIMSS Satellite blog, Space Science and Engineering Center, University of Wisconsin-Madison, “First GEOKOMPSAT-2A imagery (in stereo view with Himawari-8)”. 2019-02-04. https://cimss.ssec.wisc.edu/goes/blog/archives/31559 (accessed 2019-09-26).

27. Camera image stabilisation

Cameras are devices for capturing photographic images. A camera is basically a box with an opening in one wall that lets light enter the box and form an image on the opposite wall. The earliest such “cameras” were what are now known as camera obscuras, which are closed rooms with a small hole in one wall. The name “camera obscura” comes from Latin: “camera” meaning “room” and “obscura” meaning “dark”. (Which is incidentally why in English “camera” refers to a photographic device, while in Italian “camera” means a room.)

A camera obscura works on the principle that light travels in straight lines. How it forms an image is easiest to see with reference to a diagram:

Camera obscura diagram

Diagram illustrating the principle of a camera obscura. (Public domain image from Wikimedia Commons.)

In the diagram, the room on the right is enclosed and light can only enter through the hole C. Light from the head region A of a person standing outside enters the hole C, travelling in a straight line, until it hits the far wall of the room near the floor. Light from the person’s feet B travels through the hole C and ends up hitting the far wall near the ceiling. Light from the person’s torso D hits the far wall somewhere in between. We can see that all of the light from the person that enters through the hole C ends up projected on the far wall in such a way that it creates an image of the person, upside down. The image is faint, so the room needs to be dark in order to see it.

If you have a modern photographic camera, you can expose it for a long time to capture a photo of the faint projected image inside the room (which is upside down).

Camera obscura photo

A room turned into a camera obscura, at the Camden Arts Centre, London. (Creative Commons Attribution 2.0 image by Flickr user Kevan, from Flickr.)

The hole in the wall needs to be small to keep the image reasonably sharp. If the hole is large, the rays of light from a single point in the scene outside project to multiple points on the far wall, making the image blurry – the larger the hole, the brighter the image, but the blurrier it becomes. You can overcome this by placing a lens in the hole, which focuses the incoming light back down to a sharper image on the wall.

Camera obscura photo

Camera obscura using a lens to focus the incoming light for a brighter, sharper image. (Creative Commons Attribution 2.0 image by Flickr user Willi Winzig, from Flickr.)

A photographic camera is essentially a small, portable camera obscura, using a lens to focus an image of the outside world onto the inside of the back of the camera. The critical difference is that where the image forms on the back wall, there is some sort of light-sensitive device that records the pattern of light, shadow, and colour. The first cameras used light-sensitive chemicals coated onto a flat surface. The light causes chemical reactions that change physical properties of the chemicals, such as hardness or colour. Various processes can then be used to convert the chemically coated surface into an image that more or less resembles the scene that was projected into the camera. Modern chemical photography uses film as the chemical support medium, but glass was popular in the past and is still used for specialty purposes today.

More recently, photographic film has been largely displaced by digital electronic light sensors. Sensor manufacturers make silicon chips that contain millions of tiny individual light sensors arranged in a rectangular grid pattern. Each one records the amount of light that hits it, and that information is recorded as one pixel in a digital image file – the file holding millions of pixels that encode the image.

Camera cross section

Cross section of a modern camera, showing the light path through the lens to the digital image sensor. In this camera, a partially silvered fixed mirror reflects a fraction of the light to a dedicated autofocus sensor, and the viewfinder is electronic (this is not a single-lens reflex (SLR) design). (Photo by me.)

One important parameter in photography is the exposure time (also known as “shutter speed”). The hole where the light enters is covered by a shutter, which opens when you press the camera button and closes a little bit later, with the interval controlled by the camera settings. The longer you leave the shutter open, the more light can be collected and the brighter the resulting image is. In bright sunlight you might only need to expose the camera for a thousandth of a second or less. In dimmer conditions, such as indoors or at night, you need to leave the shutter open for longer, sometimes up to several seconds, to make a satisfactory image.

A problem is that people are not good at holding a camera still for more than a fraction of a second. Our hands shake by small amounts which, while insignificant for most things, are large enough to cause a long exposure photograph to be blurry because the camera points in slightly different directions during the exposure. Photographers use a rule of thumb to determine the longest shutter speed that can safely be used: For a standard 35 mm SLR camera, take the reciprocal of the focal length of the lens in millimetres, and that is the longest usable shutter speed for hand-held photography. For example, when shooting with a 50 mm lens, your exposure should be 1/50 second or less to avoid blur caused by hand shake. Longer exposures will tend to be blurry.
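The rule of thumb is simple enough to express directly; a minimal sketch (for a 35 mm-format camera, as stated above):

```python
def max_handheld_shutter(focal_length_mm):
    """Longest 'safe' hand-held exposure (in seconds) under the reciprocal
    rule of thumb for a 35 mm-format camera: 1 / focal length."""
    return 1.0 / focal_length_mm

print(max_handheld_shutter(50))   # 0.02 s, i.e. 1/50 second
print(max_handheld_shutter(200))  # 0.005 s, i.e. 1/200 second
```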

Camera shake

A photo I took with a long exposure (0.3 seconds) on a (stationary) train. Besides the movement of the people, the background is also blurred by the shaking of my hands; the signs above the door are blurred to illegibility.

The traditional solution has been to use a tripod to hold the camera still while taking a photo, but most people don’t want to carry around a tripod. Since the mid-1990s, another solution has become available: image stabilisation. Image stabilisation uses technology to mitigate or undo the effects of hand shake during image capture. There are two types of image stabilisation:

1. Optical image stabilisation was the first type invented. The basic principle is to move certain optical components of the camera to compensate for the shaking of the camera body, maintaining the image on the same location on the sensor. Gyroscopes are used to measure the tilting of the camera body caused by hand shake, and servo motors physically move the lens elements or the image sensor (or both) to compensate. The motions are very small, but crucial, because the size of a pixel on a modern camera sensor is only a few micrometres, so if the image moves more than a few micrometres it will become blurry.

Image stabilised photo

Optically image stabilised photo of a dim lighthouse interior. The exposure is 0.5 seconds, even longer than the previous photo, but the image stabilisation system mitigates the effects of hand shake, and details in the photo remain relatively unblurred. (Photo by me.)

2. Digital image stabilisation is a newer technology, which relies on image processing, rather than moving physical components in the camera. Digital image processing can go some way to remove the blur from an image, but this is never a perfect process because blurring loses some of the information irretrievably. Another approach is to capture multiple shorter exposure images and combine them after exposure. This produces a composite longer exposure, but each sub-image can be shifted slightly to compensate for any motion of the camera before adding them together. Although digital image stabilisation is fascinating, for this article we are actually concerned with optical image stabilisation, so I’ll say no more about digital.

Early optical image stabilisation hardware could stabilise an image by about 2 to 3 stops of exposure. A “stop” is a term referring to an increase or decrease in exposure by a factor of 2. With 3 stops of image stabilisation, you can safely increase your exposure by a factor of 2³ = 8. So if using a 50 mm lens, rather than needing an exposure of 1/50 second or less, you can get away with about 1/6 second or less, a significant improvement.
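Extending the earlier sketch, image stabilisation simply multiplies that rule-of-thumb limit by a factor of 2 per stop:

```python
def stabilised_max_shutter(focal_length_mm, stops):
    """Longest usable hand-held exposure (in seconds) with image
    stabilisation: the reciprocal rule of thumb times 2 per stop."""
    return (1.0 / focal_length_mm) * 2 ** stops

print(stabilised_max_shutter(50, 3))    # 0.16 s, i.e. about 1/6 second
print(stabilised_max_shutter(50, 6.5))  # ~1.8 s (see the 6.5-stop limit below)
```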

Image stabilisation system diagram

Optical image stabilisation system diagram from a US patent by Canon. The symbols p and y refer to pitch and yaw, which are rotations as defined by the axes shown at 61. 63p and 63y are pitch and yaw sensors (i.e. gyroscopes), which send signals to electronics (65p and 65y) to control actuator motors (67p and 67y) to move the lens element 5, in order to keep the image steady on the sensor 69. 68p and 68y are position feedback sensors. (Figure reproduced from [1].)

Newer technology has improved optical image stabilisation to about 6.5 stops. This gives a factor of 2^6.5 ≈ 91 times improvement, so that 1/50 second exposure can now be stretched to almost 2 seconds without blurring. Will we soon see further improvements giving even more stops of optical stabilisation?

Interestingly, the answer is no. At least not without a fundamentally different technology. According to an interview with Setsuya Kataoka, Deputy Division Manager of the Imaging Product Development Division of Olympus Corporation, 6.5 stops is the theoretical upper limit of gyroscope-based optical image stabilisation. Why? In his words[2]:

6.5 stops is actually a theoretical limitation at the moment due to rotation of the earth interfering with gyro sensors.

Wait, what?

This is a professional camera engineer, saying that it’s not possible to further improve camera image stabilisation technology because of the rotation of the Earth. Let’s examine why that might be.

As calculated above, when we’re in the realm of 6.5 stops of image stabilisation, a typical exposure is going to be of the order of a second or so. The gyroscopes inside the camera are attempting to keep the camera’s optical system effectively stationary, compensating for the photographer’s shaky hands. However, in one second the Earth rotates by an angle of 0.0042° (equal to 360° divided by the sidereal rotation period of the Earth, 86164 seconds). And gyroscopes hold their position in an inertial frame, not in the rotating frame of the Earth. So if the camera is optically locked to the angle of the gyroscope at the start of the exposure, one second later it will be out by an angle of 0.0042°. So what?

Well, a typical digital camera sensor contains pixels of the order of 5 μm across. With a focal length of 50 mm, a pixel subtends an angle of 5/50000 × (180/π) ≈ 0.006°. That’s very close to the same angle. In fact if we change to a focal length of 70 mm (roughly the border between a standard and telephoto lens, so very reasonable for consumer cameras), the angles come out virtually the same.

What this means is that if we take a 1 second exposure with a 70 mm lens (or a 2 second exposure with a 35 mm lens, and so on), with an optically stabilised camera system that perfectly locks onto a gyroscopic stabilisation system, the rotation of the Earth will cause the image to drift by a full pixel on the image sensor. In other words, the image will become blurred. This theoretical limit to the performance of optical image stabilisation, as conceded by professional camera engineers, demonstrates that the Earth is rotating once per day.
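Reproducing the arithmetic above in a short script (the pixel size, focal lengths, and sidereal day are the values quoted in the text; the drift shown is the worst case, with the camera’s optical axis perpendicular to the Earth’s rotation axis):

```python
import math

SIDEREAL_DAY_S = 86164.0
EARTH_RATE_DEG_PER_S = 360.0 / SIDEREAL_DAY_S     # ~0.0042 degrees per second

def pixel_angle_deg(pixel_um, focal_mm):
    """Angle subtended by one sensor pixel, in degrees."""
    return math.degrees((pixel_um * 1e-6) / (focal_mm * 1e-3))

def drift_in_pixels(exposure_s, pixel_um, focal_mm):
    """How many pixels the image drifts during an exposure if the optics are
    locked to a gyroscope (an inertial frame) while the Earth rotates."""
    return EARTH_RATE_DEG_PER_S * exposure_s / pixel_angle_deg(pixel_um, focal_mm)

print(pixel_angle_deg(5, 50))       # ~0.0057 degrees per pixel
print(drift_in_pixels(1.0, 5, 70))  # ~1 pixel of drift in a 1 second exposure
```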

To tie this in to our theme of comparing to a flat Earth, I’ll concede that this current limitation would also occur if the flat Earth rotated once per day. However, the majority of flat Earth models deny that the Earth rotates, preferring the cycle of day and night to be generated by the motion of a relatively small, nearby sun. The current engineering limitations of camera optical image stabilisation rule out the non-rotating flat Earth model.

You could in theory compensate for the angular error caused by Earth rotation, but to do that you’d need to know which direction your camera was pointing relative to the Earth’s rotation axis. Photographers hold their cameras in all sorts of orientations, so you can’t assume this; you need to know both the direction of gravity relative to the camera, and your latitude. There are devices which measure these (accelerometers and GPS), so maybe some day soon camera engineers will include data from these to further improve image stabilisation. At that point, the technology will rely on the fact that the Earth is spherical – because the orientation of gravity relative to the rotation axis changes with latitude, whereas on a rotating flat Earth gravity is always at a constant angle to the rotation axis (parallel to it in the simple case of the flat Earth spinning like a CD).

And the fact that your future camera can perform 7+ stops of image stabilisation will depend on the fact that the Earth is a globe.

References:

[1] Toyoda, Y. “Image stabilizer”. US Patent 6064827, filed 1998-05-12, granted 2000-05-16. https://pdfpiw.uspto.gov/.piw?docid=06064827

[2] Westlake, Andy. “Exclusive interview: Setsuya Kataoka of Olympus”. Amateur Photographer, 2016. https://www.amateurphotographer.co.uk/latest/photo-news/exclusive-interview-setsuya-kataoka-olympus-95731 (accessed 2019-09-18).

26. Skyglow

Skyglow is the diffuse illumination of the night sky by light sources other than large astronomical objects. Sometimes this is considered to include diffuse natural sources such as the zodiacal light (discussed in a previous proof), or the faint glow of the atmosphere itself caused by incoming cosmic radiation (called airglow), but primarily skyglow is considered to be the product of artificial lighting caused by human activity. In this context, skyglow is essentially the form of light pollution which causes the night sky to appear brighter near large sources of artificial light (i.e. cities and towns), drowning out natural night sky sources such as fainter stars.

Skyglow from Keys View

Skyglow from the cities of the Coachella Valley in California, as seen from Keys View lookout, Joshua Tree National Park, approximately 20 km away. (Public domain image by U.S. National Park Service/Lian Law, from Flickr.)

The sky above a city appears to glow due to the scattering of light off gas molecules and aerosols (i.e. dust particles, and suspended liquid droplets in the air). Scattering of light from air molecules (primarily nitrogen and oxygen) is called Rayleigh scattering. This is the same mechanism that causes the daytime sky to appear blue, due to scattering of sunlight. Although blue light is scattered more strongly, the overall colour effect is different for relatively nearby light sources than it is for sunlight. Much of the blue light is also scattered away from our line of sight, so skyglow caused by Rayleigh scattering ends up a similar colour to the light sources. Scattering off aerosol particles is called Mie scattering, and is much less dependent on wavelength, so also has little effect on the colour of the scattered light.
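To put a rough number on “blue light is scattered more strongly”: Rayleigh scattering strength varies as 1/λ⁴. A back-of-envelope comparison (the wavelengths below are my own choices as representative of blue and red light):

```python
def rayleigh_strength(wavelength_nm):
    """Relative Rayleigh scattering strength, using the 1/wavelength^4 law
    (arbitrary units -- only ratios are meaningful here)."""
    return 1.0 / wavelength_nm ** 4

# Blue (~450 nm) light versus red (~650 nm) light:
print(rayleigh_strength(450) / rayleigh_strength(650))  # ~4.4 times stronger
```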

Skyglow from Cholla

Skyglow from the cities of the Coachella Valley in California, as seen from Cholla Cactus Garden, Joshua Tree National Park, approximately 40 km away. (Public domain image by U.S. National Park Service/Hannah Schwalbe, from Flickr.)

Despite the relative independence of scattered light on wavelength, bluer light sources result in a brighter skyglow as perceived by humans. This is due to a psychophysical effect of our optical systems known as the Purkinje effect. At low light levels, the rod cells in our retinas provide most of the sensory information, rather than the colour-sensitive cone cells. Rod cells are more sensitive to blue-green light than they are to redder light. This means that at low light levels, we are relatively more sensitive to blue light (compared to red light) than we are at high light levels. Hence skyglow caused by blue lights appears brighter than skyglow caused by red lights of similar perceptual brightness.

Artificially produced skyglow appears prominently in the sky above cities. It makes the whole night sky as seen from within the city brighter, making it difficult or impossible to see fainter stars. At its worst, skyglow within a city can drown out virtually all night time astronomical objects other than the moon, Venus, and Jupiter. The skyglow from a city can also be seen from dark places up to hundreds of kilometres away, as a dome of bright sky above the location of the city on the horizon.

Skyglow from Ashurst Lake

Skyglow from the cities of Phoenix and Flagstaff, as seen from Ashurst Lake, Arizona, rendered in false colour. Although the skyglow from each city is visible, the cities themselves are below the horizon and not visible directly. The arc of light reaching up into the sky is the Milky Way. (Public domain image by the U.S. National Park Service, from Wikipedia.)

However, although the skyglow from a city can be seen from such a distance, the much brighter lights of the city itself cannot be seen directly – because they are below the horizon. The fact that you can observe the fainter glow of the sky above a city while not being able to see the lights of the city directly is because of the curvature of the Earth.

This is not the only effect of Earth’s curvature on the appearance of skyglow; it also affects the brightness of the glow. In the absence of any scattering or absorption, the intensity of light falls off with distance from the source following an inverse square law. Physically, this is because the surface area of spherical shells of increasing radius around a light source increases as the square of the radius. So the same light flux has to “spread out” to cover an area proportional to the square of the distance, and thus by the conservation of energy its brightness at any point is proportional to one divided by the square of the distance. (The same argument applies to many phenomena whose strengths vary with distance, and is why inverse square laws are so common in physics.)

Skyglow, however, is also affected by scattering and absorption in the atmosphere. The result is that the brightness falls off more rapidly with distance from the light source. In 1977, Merle F. Walker of Lick Observatory in California published a study of the sky brightness caused by skyglow at varying distances from several southern Californian cities[1]. He found an empirical relationship that the intensity of skyglow varies as the inverse of distance to the power of 2.5.

Skyglow intensity versus distance from Salinas

Plot of skyglow intensity versus distance from Salinas, California. V is the “visual” light band and B the blue band of the UBV photometric system, which are bands of about 90 nanometres width centred around wavelengths of 540 and 442 nm respectively. The fitted line corresponds to intensity ∝ (distance)^−2.5. (Figure reproduced from [1].)

This relationship, known as Walker’s law, has been confirmed by later studies, with one notable addition. It only holds for distances up to 50-100 kilometres from the city. When you travel further away from a city, the intensity of the skyglow starts to fall off more rapidly than Walker’s law suggests, a little bit faster at first, but then more and more rapidly. This is because as well as the absorption effect, the scattered light path is getting longer and more complex due to the curvature of the Earth.
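A small sketch of how much faster skyglow fades under Walker’s law than under a plain inverse-square law (the 10 km normalisation distance is an arbitrary choice of mine):

```python
def relative_brightness(distance_km, exponent, reference_km=10.0):
    """Skyglow brightness relative to its value at a reference distance,
    assuming intensity proportional to distance^(-exponent)."""
    return (distance_km / reference_km) ** (-exponent)

for d in (10, 20, 50, 100):
    # inverse-square law vs Walker's law (exponent 2.5)
    print(d, relative_brightness(d, 2.0), relative_brightness(d, 2.5))

# At 100 km, Walker's law predicts about 3 times less skyglow than an
# inverse-square law would -- and beyond roughly 100 km the real fall-off
# is steeper still, because of the curvature of the Earth.
```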

A later study by prominent astronomical light pollution researcher Roy Henry Garstang published in 1989 examined data from multiple cities in Colorado, California, and Ontario to produce a more detailed model of the intensity of skyglow[2]. The model was then tested and verified for multiple astronomical sites in the mainland USA, Hawaii, Canada, Australia, France, and Chile. Importantly for our perspective, the model Garstang came up with requires the Earth’s surface to be curved.

Skyglow intensity model geometry

Geometrical diagrams for calculating intensity of skyglow caused by cities, from Garstang. The observer is located at O, atop a mountain A. Light from a city C travels upward along the path s until it is scattered into the observer’s field of view at point Q. The centre of the spherical Earth is at S, off the bottom of the figure. (Figure reproduced from [2].)

Interestingly, Garstang also calculated a model for the intensity of skyglow if you assume the Earth is flat. He did this because it greatly simplifies the geometry and the resulting algebra, to see if it produced results that were good enough. However, quoting directly from the paper:

In general, flat-Earth models are satisfactory for small city distances and observations at small zenith distances. As a rough rule of thumb we can say that for calculations of night-sky brightnesses not too far from the zenith the curvature of the Earth is unimportant for distances of the observer of up to 50 km from a city, at which distance the effect of curvature is typically 2%. For larger distances the curved-Earth model should always be used, and the curved-Earth model should be used at smaller distances when calculating for large zenith distances. In general we would use the curved-Earth model for all cases except for city-center calculations. […] As would be expected, we find that the inclusion of the curvature of the Earth causes the brightness of large, distant cities to fall off more rapidly with distance than for a flat-Earth model.

In other words, to get acceptably accurate results for either distances over 50 km or for large zenith angles at any distance, you need to use the spherical Earth model – because assuming the Earth is flat gives you a significantly wrong answer.

This result is confirmed experimentally again in a 2007 paper[3], as shown in the following diagram:

Skyglow intensity versus distance from Las Vegas

Plot of skyglow intensity versus distance from Las Vegas as observed at various dark sky locations in Nevada, Arizona, and California. The dashed line is Walker’s Law, with an inverse power relationship of 2.5. Skyglow at Rogers Peak, more than 100 km away, is less than predicted by Walker’s Law, “due to the Earth’s curvature complicating the light path” (quoted from the paper). (Figure reproduced from [3].)

So astronomers, who are justifiably concerned with knowing exactly how much light pollution from our cities they need to contend with at their observing sites, calculate the intensity of skyglow using a model that is significantly more accurate if you include the curvature of the Earth. Using a flat Earth model, which might otherwise be preferred for simplicity, simply isn’t good enough – because it doesn’t model reality as well as a spherical Earth.

References:

[1] Walker, M. F. “The effects of urban lighting on the brightness of the night sky”. Publications of the Astronomical Society of the Pacific, 89, p. 405-409, 1977. https://doi.org/10.1086/130142

[2] Garstang, R. H. “Night sky brightness at observatories and sites”. Publications of the Astronomical Society of the Pacific, 101, p. 306-329, 1989. https://doi.org/10.1086/132436

[3] Duriscoe, Dan M., Luginbuhl, Christian B., Moore, Chadwick A. “Measuring Night-Sky Brightness with a Wide-Field CCD Camera”. Publications of the Astronomical Society of the Pacific, 119, p. 192-213, 2007. https://dx.doi.org/10.1086/512069

25. Planetary formation

Why does Earth exist at all?

The best scientific model we have for understanding how the Earth exists begins with the Big Bang, the event that created space and time as we know and understand it, around 14 billion years ago. Scientists are interested in the questions of what possibly happened before the Big Bang and what caused the Big Bang to happen, but haven’t yet converged on any single best model for those. However, the Big Bang itself is well established by multiple independent lines of evidence and fairly uncontroversial.

The very early universe was a hot, dense place. Less than a second after the Big Bang, it was essentially a soup of primordial matter and energy. The energy density was so high that the equivalence of mass and energy (discovered by Albert Einstein) allowed energy to convert into particle/antiparticle pairs and vice versa. The earliest particles we know of were quarks, electrons, positrons, and neutrinos. The high energy density also pushed space apart, causing it to expand rapidly. As space expanded, the energy density reduced. The particles and antiparticles annihilated, converting back to energy, and this process left behind a relatively small residue of particles.

Diagram of the Big Bang

Schematic diagram of the evolution of the universe following the Big Bang. (Public domain image by NASA.)

After about one millionth of a second, the quarks no longer had enough energy to stay separated, and bound together to form the protons and neutrons more familiar to us. The universe was now a plasma of charged particles, interacting strongly with the energy in the form of photons.

After a few minutes, the strong nuclear force could compete with the ambient energy level, and free neutrons bonded together with protons to form a few different types of atomic nuclei, in a process known as nucleosynthesis. A single proton and neutron could pair up to form a deuterium nucleus (an isotope of hydrogen, also known as hydrogen-2). More rarely, two protons and a neutron could combine to make a helium-3 nucleus. More rarely still, three protons and four neutrons occasionally joined to form a lithium-7 nucleus. Importantly, if two deuterium nuclei collided, they could stick together to form a helium-4 nucleus, the most common isotope of helium. The helium-4 nucleus (or alpha particle as it is also known in nuclear physics) is very stable, so the longer this process went on, the more helium nuclei were formed and the more depleted the supply of deuterium became. Ever since the Big Bang, natural processes have destroyed more of the deuterium, but created only insignificant additional amounts – which means that virtually all of the deuterium now in existence was created during the immediate aftermath of the Big Bang. This is important because measuring the abundance of deuterium in our universe now gives us valuable evidence on how long this phase of Big Bang nucleosynthesis lasted. Furthermore, measuring the relative abundances of helium-3 and lithium-7 also gives us other constraints on the physics of the Big Bang. This is one method we have of knowing what the physical conditions during the very early universe must have been like.

Nuclei formed during the Big Bang

Diagrams of the nuclei (and subsequent atoms) formed during Big Bang nucleosynthesis.

The numbers all point to this nucleosynthesis phase being over within roughly the first 20 minutes after the Big Bang. All the neutrons had been bound into nuclei, but the vast majority of protons were left bare. Then, around 380,000 years after the Big Bang, something very important happened. The energy level had lowered enough for the electrostatic attraction of protons and electrons to form the first atoms. Prior to this, any atoms that formed would quickly be ionised again by the surrounding energy. The bare protons attracted an electron each and became atoms of hydrogen. The deuterium nuclei also captured an electron to become atoms of deuterium. The helium-3 and helium-4 nuclei captured two electrons each, while the lithium nuclei attracted three. There were two other types of atoms almost certainly formed which I haven’t mentioned yet: hydrogen-3 (or tritium) and beryllium-7 – however both of these are radioactive and have short half-lives (12 years for tritium; 53 days for beryllium-7), so within a few hundred years there would be virtually none of either left. And that was it – the universe had its initial supply of atoms. There were no other elements yet.

When the electrically charged electrons became attached to the charged nuclei, the electric charges cancelled out, and the universe changed from a charged plasma to an electrically neutral gas. This made a huge difference, because photons interact strongly with electrically charged particles, but much less so with neutral ones. Suddenly, the universe went from opaque to largely transparent, and light could propagate through space. When we look deep into space with our telescopes, we look back in time because of the finite speed of light (light arriving at Earth from a billion light years away left its source a billion years ago). This is the earliest possible time we can see. The temperature of the universe at this time was close to 3000 kelvins, and the radiation had a profile equal to that of a red-hot object at that temperature. Over the billions of years since, as space expanded, the radiation became stretched to longer wavelengths, until today it resembles the radiation seen from an object at temperature around 2.7 K. This is the cosmic microwave background radiation that we can observe in every direction in space – it is literally the glow of the Big Bang, and one of the strongest observational pieces of evidence that the Big Bang happened as described above.
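The stretching of the radiation is directly quantifiable: the blackbody temperature scales inversely with the expansion of space, so the ratio of the two temperatures tells you how much the universe has expanded since the light was released. Using the round figures from the text:

```python
T_at_recombination = 3000.0  # kelvins, when the universe became transparent
T_today = 2.7                # kelvins, the cosmic microwave background now

# Wavelengths stretch in proportion to the expansion of space, so this ratio
# is the factor by which the universe has expanded since then -- roughly 1100.
print(T_at_recombination / T_today)
```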

Cosmic microwave background

Map of the cosmic microwave background radiation over the full sky, as observed by NASA’s WMAP satellite. The temperature of the radiation is around 2.7 K, while the fluctuations shown are ±0.0002 K. The radiation is thus extremely smooth, but does contain measurable fluctuations, which lead to the formation of structure in the universe. (Public domain image by NASA.)

The early universe was not uniform. The density of matter was a little higher in places, a little lower in other places. Gravity could now get to work. Where the matter was denser, gravity was higher, and these areas began attracting matter from the less dense regions. Over time, this formed larger and larger structures, the size of stars and planetary systems, galaxies, and clusters of galaxies. This part of the process is one where a lot of the details still need to be worked out – we know more about the earlier stages of the universe. At any rate, at some point clumps of gas roughly the size of planetary systems coalesced and the gas at the centre accreted under gravity until it became so massive that the pressure at the core initiated nuclear fusion. The clumps of gas became the first stars.

The Hubble Extreme Deep Field

The Hubble Extreme Deep Field. In this image, except for the three stars with visible 8-pointed starburst patterns, every dot of light is a galaxy. Some of the galaxies in this image are 13.2 billion years old, dating from just 500 million years after the Big Bang. (Public domain image by NASA.)

The first stars had no planets. There was nothing to make planets out of; the only elements in existence were hydrogen with a tiny bit of helium and lithium. But the nuclear fusion process that powered the stars created more elements: carbon, oxygen, nitrogen, silicon, sodium, all the way up to iron. After a few million years, the biggest stars had burnt through as much nuclear fuel in their cores as they could. Unable to sustain the nuclear reactions keeping them stable, they collapsed and exploded as supernovae, spraying the elements they produced back into the cosmos. The explosions also generated heavier elements: copper, gold, lead, uranium. All these things were created by the first stars.

Supernova 2012Z

Supernova 2012Z, in the spiral galaxy NGC 1309, position shown by the crosshairs, and detail before and during the explosion. (Creative Commons Attribution 4.0 International image by ESA/Hubble, from Wikimedia Commons.)

The interstellar gas cloud was now enriched with heavy elements, but still by far mostly hydrogen. The stellar collapse process continued, but now as a star formed, there were heavy elements whirling in orbit around it. The conservation of angular momentum meant that elements spiralled slowly into the proto-star at the centre of the cloud, forming an accretion disc. Now slightly denser regions of the disc itself began attracting more matter due to their stronger gravity. Matter began piling up, and the heavier elements like carbon, silicon, and iron formed the first solid objects. Over a few million years, as the proto-star in the centre slowly absorbed more gas, the lumps of matter in orbit—now large enough to be called dust, or rocks—collided together and grew, becoming metres across, then kilometres, then hundreds of kilometres. At this size, gravity ensured the growing balls of rock were roughly spherical, due to hydrostatic equilibrium (previously discussed in a separate article). They attracted not only solid elements, but also gases like oxygen and hydrogen, which wrapped the growing protoplanets in atmospheres.

Protoplanetary disc of HL Tauri

Protoplanetary disc of the very young star HL Tauri, imaged by the Atacama Large Millimetre Array. The gaps in the disc are likely regions where protoplanets are accreting matter. (Creative Commons Attribution 4.0 International image by ALMA (ESO/NAOJ/NRAO), from Wikimedia Commons.)

Eventually the star at the centre of this protoplanetary system ignited. The sudden burst of radiation pressure from the star blew away much of the remaining gas from the local neighbourhood, leaving behind only that which had been gravitationally bound to what were now planets. The closest planets had most of the gas blown away, but beyond a certain distance it was cold enough for much of the gas to remain. This is why the four innermost planets of our own solar system are small rocky worlds with thin or no atmospheres with virtually no hydrogen, while the four outermost planets are larger and have vast, dense atmospheres mainly of hydrogen and hydrogen compounds.

But the violence was not over yet. There were still a lot of chunks of orbiting rock and dust besides the planets. These continued to collide and reorganise, some becoming moons of the planets, others becoming independent asteroids circling the young sun. Collisions created craters on bigger worlds, and shattered some smaller ones to pieces.

Mimas

Saturn’s moon Mimas, imaged by NASA’s Cassini probe, showing a huge impact crater from a collision that would nearly have destroyed the moon. (Public domain image by NASA.)

Miranda

Uranus’s moon Miranda, imaged by NASA’s Voyager 2 probe, showing disjointed terrain that may indicate a major collision event that shattered the moon, but was not energetic enough to scatter the pieces, allowing them to reform. (Public domain image by NASA.)

The leftover pieces from the creation of the solar system still collide with Earth to this day, producing meteors that can be seen in the night sky, and sometimes during daylight. (See also the previous article on meteor arrival rates.)

The process of planetary formation, all the way from the Big Bang, is relatively well understood, and our current theories are successful in explaining the features of our solar system and those we have observed around other stars. There are details to this story where we are still working out exactly how or when things happened, but the overall sequence is well established and fits with our observations of what solar systems are like. (There are several known extrasolar planetary systems with large gas giant planets close to their suns. This is a product of observational bias—our detection methods are most sensitive to massive planets close to their stars—and such planets can drift closer to their stars over time after formation.)

One major consequence of this sequence of events is that planets form as spherical objects (or almost-spherical ellipsoids). There is no known mechanism for the formation of a flat planet, and even if one did somehow form it would be unstable and collapse into a sphere.

24. Gravitational acceleration variation

When you drop an object, it falls down. Initially the speed at which it falls is zero, and this speed increases over time as the object falls faster and faster. In other words, objects falling under the influence of gravity are accelerating. It turns out that the rate of acceleration is a constant when the effects of air resistance are negligible. Eventually air resistance provides a balancing force and the speed of fall reaches a limit, known as the terminal velocity.

Ignoring the air resistance part, the constant acceleration caused by gravity on the Earth’s surface is largely the same everywhere on Earth. This is why you feel like you weigh the same amount no matter where you travel (excluding travel into space!). However, there are small but measurable differences in the Earth’s gravity at different locations.

It’s straightforward to measure the strength of the acceleration due to gravity at any point on Earth with a gravity meter. We’ve already met one type of gravity meter during Airy’s coal pit experiment: a pendulum. So the measurements can be made with Georgian-era technology. Nowadays, the most accurate measurements of Earth’s gravity are made from space using satellites. NASA’s GRACE mission, a pair of satellites launched in 2002, gave us our best look yet at the details of Earth’s gravitational field.
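
A pendulum’s period for small swings depends only on its length and the local gravity, T = 2π√(L/g), so timing the swings lets you back out g. Here is a minimal sketch of that calculation; the length and period below are made-up illustrative numbers, not a real measurement.

```python
import math

# Infer local gravity from a simple pendulum: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2*L / T^2
length = 0.994   # pendulum length in metres (assumed value for illustration)
period = 2.000   # timed period of one full swing in seconds (assumed value)

g = 4 * math.pi**2 * length / period**2
print(f"inferred g = {g:.3f} m/s^2")   # prints about 9.810 for these numbers
```

In practice you would time many swings and average the result, but this is the basic idea behind pendulum gravimetry, and it needs nothing more sophisticated than a weight, a string, and a clock.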

Since the Earth is roughly a sphere of roughly uniform density, you’d expect the gravity at its surface to be roughly the same everywhere, and, roughly speaking, it is. But going one level of detail deeper, we know the Earth is closer to ellipsoidal than spherical, with a bulge around the equator and flattening at the poles. Calculating the surface gravity of an ellipsoid requires some nifty triple integrals, but fortunately someone on Stack Exchange has done the work for us[1].

Given the radii of the Earth, and an average density of 5520 kg/m3, the responder calculates that the acceleration due to gravity at the poles should be 9.8354 m/s2, while the acceleration at the equator should be 9.8289 m/s2. The difference is about 0.07%.
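
If you don’t want to wade through the triple integrals yourself, the end result (the closed-form attraction at the pole and equator of a homogeneous, non-rotating oblate spheroid) is easy to evaluate numerically. Here is a sketch assuming WGS84-style radii, the same average density of 5520 kg/m3, and a standard value for the gravitational constant; the tiny offsets from the figures quoted above come down to the exact constants the responder used.

```python
import math

# Attraction at the pole and equator of a homogeneous (non-rotating) oblate spheroid,
# using the standard interior-potential shape coefficients.
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
rho = 5520.0       # assumed uniform density, kg/m^3
a   = 6378137.0    # equatorial radius, m (WGS84-style value, assumed)
c   = 6356752.0    # polar radius, m

e = math.sqrt(1 - (c / a) ** 2)   # eccentricity of the spheroid's cross-section
# Dimensionless shape coefficients; they satisfy 2*A_eq + A_pole = 2.
A_eq   = (math.sqrt(1 - e**2) / e**3) * (math.asin(e) - e * math.sqrt(1 - e**2))
A_pole = (2 / e**3) * (e - math.sqrt(1 - e**2) * math.asin(e))

g_pole = 2 * math.pi * G * rho * A_pole * c   # attraction at the pole
g_eq   = 2 * math.pi * G * rho * A_eq * a     # attraction at the equator
print(f"g at pole    = {g_pole:.4f} m/s^2")   # comes out near 9.836
print(f"g at equator = {g_eq:.4f} m/s^2")     # comes out near 9.829
print(f"difference   = {100 * (g_pole - g_eq) / g_pole:.2f}%")   # about 0.07%
```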

So at this point let’s look at what the Earth’s gravitational field actually looks like. The following figure shows the strength of gravity at the surface according to the Earth Gravitational Model 2008 (EGM2008), using data from the GRACE satellites.

Earth Gravitational Model 2008

Earth’s surface gravity as measured by NASA’s GRACE and published in the Earth Gravitational Model 2008. (Figure produced by Curtin University’s Western Australian Geodesy Group, using data from [2].)

We can see that the overall characteristic of the surface gravity is that it is lowest at the equator, around 9.78 m/s2, and highest at the poles, around 9.83 m/s2, with a smooth transition in between. Overlaid on this are smaller details caused by the continental landmasses. Mountainous areas such as the Andes and Himalayas have slightly lower gravity, because their surfaces are further from the centre of the planet. Now, the numerical value at the poles is a pretty good match for the theoretical value on an ellipsoid, close to 9.835 m/s2. But the equatorial figure isn’t nearly as good a match; the difference between the equator and the poles is around 0.5%, not the 0.07% calculated for an ellipsoid of the Earth’s shape.

Most of this extra difference comes about because of another effect that I haven’t mentioned yet: the Earth is rotating. The rotation of the surface at the equator generates a centrifugal pseudo-force that slightly counteracts gravity. This is easy to calculate; it equals the equatorial radius times the square of the Earth’s angular rotation velocity, which comes to about 0.034 m/s2. Subtracting this from our theoretical equatorial value gives 9.795 m/s2. This is not quite as low as the 9.78 seen in the figure, but it’s much closer. I presume that the remaining discrepancy is caused by the assumed average density of Earth used in the original calculation being a tiny bit too high. If we reduce the average density to 5516 kg/m3 (which still rounds to 5520 at three significant figures, so is plausible), our gravities at the poles and equator become 9.828 and 9.788 m/s2, which together make a better match to the large-scale trend in the figure (ignoring the small fluctuations due to landmasses).
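
Both the centrifugal correction and the density tweak are quick to check for yourself. A minimal sketch, reusing the non-rotating ellipsoid values quoted above (9.8354 and 9.8289 m/s2) and the standard sidereal rotation period:

```python
import math

# Centrifugal pseudo-acceleration at the equator: omega^2 * r
r_eq  = 6378137.0               # equatorial radius, m
T_sid = 86164.1                 # sidereal day (one rotation of the Earth), s
omega = 2 * math.pi / T_sid     # angular velocity of Earth's rotation, rad/s

a_cf = omega**2 * r_eq
print(f"centrifugal term          = {a_cf:.4f} m/s^2")            # about 0.034
print(f"effective equator gravity = {9.8289 - a_cf:.4f} m/s^2")   # about 9.795

# Rescale the ellipsoid figures for a slightly lower average density of 5516 kg/m^3.
scale = 5516.0 / 5520.0
print(f"rescaled pole             = {9.8354 * scale:.3f} m/s^2")          # about 9.828
print(f"rescaled equator          = {9.8289 * scale - a_cf:.3f} m/s^2")   # about 9.788
```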

Of course the structure and shape of the Earth are not quite as simple as those of a uniformly dense perfect ellipsoid, so there are some residual differences. But still, this is a remarkably consistent outcome. One final point to note: it took me some time to track down the figure above showing the full value of the Earth’s gravitational field at the surface. When you search for this, most of the maps you find look like the following:

Earth Gravitational Model 2008 residuals

Earth surface gravity residuals, from NASA’s GRACE satellite data. The units are milligals; 1 milligal is equal to 0.00001 m/s2. (Public domain image by NASA, from [3].)

These maps seem to show that gravity is extremely lumpy across the Earth’s surface, but they are only showing the small residual differences left over after subtracting off a smooth gravity model that already includes the relatively large polar/equatorial variation. Given the units of milligals, the reddest and bluest areas on this map differ by only a little over 0.001 m/s2.

We’re not done yet, because besides Earth we also have detailed gravity mapping for another planet: Mars!

Mars Gravitational Model 2011

Surface gravity strength on Mars. The overall trend is for lowest gravity at the equator, increasing with latitude to highest values at the poles, just like Earth. (Figure reproduced from [4].)

This map shows that the surface gravity on Mars has the same overall shape as that of Earth: highest at the poles and lowest at the equator, as we’d expect for a rotating ellipsoidal planet. Also notice that Mars’s gravity is only around 3.7 m/s2, less than half that of Earth.
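
As a rough sanity check on that number, Mars’s surface gravity follows directly from its mass and mean radius (standard reference values, so this gives only an average figure, not a map):

```python
# Rough check of Mars's surface gravity: g = G * M / r^2
G      = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_mars = 6.417e23     # mass of Mars, kg
r_mars = 3.3895e6     # mean radius of Mars, m

g_mars = G * M_mars / r_mars**2
print(f"g on Mars = {g_mars:.2f} m/s^2")   # about 3.7, less than half Earth's 9.8
```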

Mars’s topography is in some ways much more dramatic than Earth’s, and we can see the smaller-scale anomalies caused by the Hellas Basin (large red circle at lower right, which is the lowest point on Mars, hence the higher gravity), Olympus Mons (leftmost blue dot, northern hemisphere, Mars’s highest mountain), and the chain of three volcanoes on the Tharsis Plateau (straddling the equator at left). But overall, the polar/equatorial structure matches that of Earth.

Of course this all makes sense because the Earth, like Mars and the other planets, is approximately an ellipsoid, differing from a sphere by a small equatorial bulge caused by rotation. We can easily see with a telescope that Mars and the other planets are almost spherical globes. Since the structure of Earth’s gravity is similar to that of these globes, it makes sense that the Earth is a globe too. If the Earth were flat, on the other hand, this would be a remarkable coincidence, with no readily apparent explanation for why gravity should be stronger at the poles (remembering that the “south pole” in most flat Earth models is the rim of a disc) and weaker at the equator (half way to the rim of the disc), other than simply saying “that’s just the way Earth’s gravity is.”

References:

[1] “Distribution of Gravitational Force on a non-rotating oblate spheroid”. Stack Exchange: Physics, https://physics.stackexchange.com/questions/144914/distribution-of-gravitational-force-on-a-non-rotating-oblate-spheroid (Accessed 2019-09-06.)

[2] Pavlis, N. K., Holmes, S. A., Kenyon, S. C., Factor, J. K. “The development and evaluation of the Earth Gravitational Model 2008 (EGM2008)”. Journal of Geophysical Research, 117, p. B04406, 2012. https://doi.org/10.1029/2011JB008916

[3] Space Images, Jet Propulsion Laboratory. https://www.jpl.nasa.gov/spaceimages/index.php?search=GRACE&category=Earth (Accessed 2019-09-06.)

[4] Hirt, C., Claessens, S. J., Kuhn, M., Featherstone, W. E. “Kilometer-resolution gravity field of Mars: MGM2011”. Planetary and Space Science, 67(1), pp. 147-154, 2012. https://doi.org/10.1016/j.pss.2012.02.006