5 — The Ocean is Not Just Blue

Derya Akkaynak is a Turkish engineer and oceanographer whose research focuses on imaging and vision underwater.

She has professional, technical, and scientific diving certifications and has conducted fieldwork from the Bering Sea to Antarctica. Akkaynak was a finalist for the 2019 Blavatnik Awards for Young Scientists for resolving a fundamental problem in underwater computer vision—the reconstruction of lost colors and contrast—which led to the development of the Sea-thru algorithm. She is starting her own research lab in Eilat, Israel.

www.deryaakkaynak.com

Transcript:

Thank you, Pascal and Selena, for your invitation. I’m very excited to be a part of this process. My observation starts not by looking at a glass of water but by looking at a glass of Turkish tea. That’s because the dissolved organic substances that give the tea its color are pretty central to the way I look at water. The two are similar; in fact, tea is often used in the lab to create simulations of coastal ocean water, because its dissolved substances resemble the particles found in the water column.

A glass of Turkish tea.
Water full of dissolved colored organic material.

What you’re seeing now is a real coastal lagoon off the coast of Florida. It’s very turbid, full of dissolved colored organic material. As an oceanographer, I want to let you in on a secret about the ocean: the ocean is not just blue. It can appear in a spectrum of colors depending on where you are sampling (and I should say this is true for the ocean and all natural bodies of water on the planet). So when we say “blue ocean,” “the ocean is blue”—meaning the red, long-wavelength colors are attenuated fastest in the water, leaving a dominant blue hue—we’re really only talking about the open ocean. But a lot of our research, especially research using imaging with cameras—which is the central part of my work—happens in the coastal ocean. And whether from natural or anthropogenic causes, we can see a spectrum of colors in the coastal ocean.
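As a rough illustration of why the open ocean looks blue: light attenuates exponentially along its path through water, and the attenuation coefficient is much larger at red wavelengths than at blue ones. A minimal Python sketch, with illustrative coefficients I have chosen for clear open-ocean water (assumed, not measured—only their ordering matters):

```python
import numpy as np

# Illustrative attenuation coefficients (1/m) for clear open-ocean water.
# The exact numbers are assumptions; the ordering red >> green > blue is the point.
beta = {"red": 0.40, "green": 0.07, "blue": 0.02}

def transmission(path_m):
    """Fraction of light surviving a path of path_m meters, per color channel."""
    return {ch: np.exp(-b * path_m) for ch, b in beta.items()}

for z in (1, 5, 10, 20):
    t = transmission(z)
    print(f"{z:2d} m: red={t['red']:.3f} green={t['green']:.3f} blue={t['blue']:.3f}")
```

With these numbers, almost no red light survives ten meters of water while most of the blue does, which is the blue-dominant hue the speaker describes.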

So what do these colors mean when we look at a sample of water? That’s an extremely important question. To answer that question, I want to take you back to 1989.

EOS, 1989.

This map, published on the cover of a scientific journal in 1989, took ten years to make—just the map, including the data collected to produce it. What does it show? It shows a pseudo-color representation of the color of the global ocean. Now, the color of the global ocean is a proxy for what is in the water column—organic or inorganic substances. And for the most part (at least for the open ocean, away from the coasts) these particles happen to be phytoplankton. Phytoplankton sounds like a mouthful, but they’re actually microorganisms that take energy from sunlight and photosynthesize—which means they produce food, food for everything in the ocean to eat. They’re very important, so knowing their concentrations is extremely important.

But the interesting thing about this map—beyond these colors showing the concentration of phytoplankton across the global ocean for the first time—is this: when scientists in the late 1960s and 70s suggested that we could do this kind of sensing from space, and that we could tell what’s in the water column for every point on the globe from one snapshot, people laughed at them. They were ridiculed. They said it would never happen—even if you could sense from space, you would never be able to make sense of what’s in the water column. Of course, today we do this every day at extremely high resolution, and we have been gaining incredible insights into what’s in the water through the color of the water.

There is another reason why this ability to see the global ocean from space has revolutionized the field of oceanography. Until this point, scientists, seamen, and sailors went out on ships and traveled the ocean, and they sampled the water here and sampled the water there, then put those samples against a set of standards—little bottles with different colors—and tried to quantify which of those colors each sample matched. And in between the two points they sampled—imagine how big the ocean is, and you just sample two points—they assumed everything changed linearly or smoothly. But what we see now, with the ability to look from space, is that there is nothing here that varies linearly or smoothly: there are jets, rings, vortices, eddies, currents—it’s so dynamic. It changes on timescales of seconds to minutes, hours, days, and seasons, and it changes on spatial scales of centimeters to kilometers, and it has really changed the way we study the ocean.

To summarize, the color of the ocean water gives us tremendous insight into what is in the ocean water. Here is a simulation of basically everything I’ve talked about, showing the concentration of those tiny phytoplankton organisms through time and across seasons. Here, purple shows high concentrations—meaning there’s a lot of productivity, a lot of food being produced—and the other colors show progressively less. This is not a constant process. It is highly variable. Think of what this means: the color of the ocean is highly variable. I’m going to come back to that, but to summarize: can the color of the ocean water give us scientific insights? The answer is absolutely yes, because it tells us about the concentration of phytoplankton.

Phytoplankton is a proxy that tells us about photosynthesis. These are fancy words, but all they mean is small organisms using the sun’s light and energy to produce food. Producing food in this context means using up carbon dioxide—and when you are doing that, you are playing an important role in the planet’s climate. So these tiny, invisible microorganisms actually determine the ocean’s role in the global climate—which is why the colors of the ocean water are extremely important.

But for some people, like me, who don’t care at all about what the color of the water is or what is in the water column, colors pose a huge problem, because all I care about is knowing what’s on the ocean floor—which animals, which organisms, how they are structured, how they function, what role they serve for the ocean. Trying to use imaging to do that, I hit an incredibly difficult roadblock: water color. The fact that the water has color—the fact that there are substances in the water column that change light to give it this color—really blocks everything that I’m interested in seeing.

I want to give you a small glimpse of how a photograph underwater is made, so this is just a very short summary of image formation underwater. Let’s take this diver. He wants to take a photo of this octopus, and the photo that forms in his camera will actually be, unbeknownst to him, the sum of three different photos. The first layer of those three photos is going to be the light that comes from the sun to the skin of the octopus and reflects from the skin of the octopus directly to the camera sensor. That’s the signal I showed with the blue arrow. It travels the shortest path. It’s the most direct path; it doesn’t scatter right or left, and it reaches the sensor. So this image, which I’m going to call the direct image, contains the most information about the octopus. But on top of this there are going to be two layers of images that actually degrade this image of the octopus. The first of those is a layer formed by the yellow arrows: light will come from the sun and penetrate the water; it will reach the octopus’s skin, and from the octopus’s skin light will scatter and hit these particles—exaggerated in my drawing—in the water column. These particles scatter the light again towards the camera, and some of it actually finds its way to the camera sensor. So the image formed by the yellow arrows will be a blurry, weak version of the octopus, and all it does is degrade the image quality.

It’s not so bad, though—it’s still a weak effect that doesn’t ruin the whole image. What does ruin the final image—and is a huge problem in general for underwater imaging and photography—is the next layer. Now sunlight will come in, and the rays will never make it to the skin of the octopus; they will hit those exaggerated floating particles in the water and scatter from them directly to the camera. The important thing to note here is that these light rays have never visited the octopus; they carry no information about the scene. So what they’re going to do—this is called backscatter—is create a layer of colored fog that hides the octopus. There is going to be this layer of fog on top of the diver’s octopus image.

When the diver goes home and looks at his photographs, what he’s going to see is going to be the sum of these three—he won’t be able to separate them. His image will contain the octopus and a bunch of artifacts because there are substances in the water, because the water has color.
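In equation form, this three-layer sum reads roughly as below. The notation is my own shorthand, not necessarily the talk’s: J_c is the scene as it would appear with no water in between, z is the camera-to-scene distance, B∞_c is the fog color of an infinitely long water column, and the weak forward-scatter layer is folded into the direct term, as is often done:

```latex
% Observed image, per color channel c, at camera-to-scene range z:
I_c = D_c + B_c,
\qquad D_c = J_c \, e^{-\beta^D_c z} \quad \text{(direct signal)},
\qquad B_c = B^\infty_c \left(1 - e^{-\beta^B_c z}\right) \quad \text{(backscatter)}.
```

The direct signal D_c decays exponentially with distance, while the backscatter B_c builds up with distance toward the saturation value B∞_c.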

I showed this briefly, but I want to show it with a small animation. This time the diver is at the bottom right of the screen and wants to take a photo of the color chart, which is right above the sea star. Where he is now, he’s too far away: when he takes a photo, that valuable signal—the direct signal, D_c—will be almost zero. It will be black, and his entire image will be made up of that annoying layer of colored fog, so he will just take an image of blue water. When he gets a bit closer, there will be some nonzero direct signal that shows the scene, which in this case is the color chart. The layer of colored fog will be slightly less intense, so his picture will show the scene, but through a layer of haze. Only when the diver is right up against that color chart—in other words, when the distance between him and the scene is minimized—will his photo contain as much of the scene as possible. The direct signal will be maximized—the color chart will be most visible—and that layer of fog will be almost zero—minimized. The takeaway is this: if you’re interested in underwater imaging or photography, get as close to your subject as possible, because that annoying layer of fog builds up exponentially with distance, and colors fade as an exponential function of distance. The further away you are, the worse everything is going to be.
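Plugging toy numbers into the two terms above shows the trade-off the animation makes visually. The values here are assumptions for a single color channel, chosen only to show the shape of the curves:

```python
import numpy as np

# Toy parameters for one color channel (assumed, not measured).
J = 0.8        # scene radiance with no water in the way
B_inf = 0.3    # backscatter of an infinite water column ("veiling light")
beta_d = 0.5   # attenuation coefficient of the direct signal, 1/m
beta_b = 0.5   # backscatter buildup coefficient, 1/m

for z in (0.3, 1.0, 3.0, 8.0):                 # camera-to-scene distance, meters
    direct = J * np.exp(-beta_d * z)           # fades exponentially with distance
    fog = B_inf * (1 - np.exp(-beta_b * z))    # builds up toward saturation
    print(f"z={z:4.1f} m  direct={direct:.3f}  fog={fog:.3f}  total={direct + fog:.3f}")
```

At 30 centimeters the direct signal dwarfs the fog; at eight meters the fog is roughly twenty times stronger than the signal, which is the “image of blue water” case.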

This was just a simulated example—now here’s a real example. This is a color chart that we mounted in the middle of the ocean, 17 meters from the bottom and 17 meters from the top, so we have no interference from the surface or the sea floor—it’s just hanging there in the middle of the water.

Color chart under water.

We swam toward it, from eight meters away to about 30 centimeters, and photographed it all along the way. The images on the left are the faraway images—eight meters, seven meters, six meters—all the way down to 30 centimeters. What I want you to notice is that, in the faraway images, the color chart is behind that layer of fog. It’s not sharp; it’s blurry. It gets better as you get close to the chart, but you can clearly see the occluding effect of the fog from far away.

Now, using an equation that I formulated, I am going to computationally take that fog out.

These are the same images—but now the sharpness of the color chart at each distance is the same. The layer of haze that hid the color chart is gone. As the last step, I’m going to put back the colors that were lost.

Ignore the purple background (no changes were made to the background); just look at the patches on the color chart. Now the colors are restored consistently at each distance, and they match the chart we photographed at the shortest distance. So this is an equation that I derived that represents how light is changed in the ocean, and it allows us to put the lost colors back, remove that layer of fog, and enhance images.
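For intuition, here is a minimal sketch of the two computational steps just shown—remove the fog, then put back the lost colors—written as an inversion of the toy model from earlier. It assumes the per-pixel distance z and the water parameters are already known; estimating them from the image itself is the hard part the actual method solves, and the parameter names here are mine, not the published algorithm’s:

```python
import numpy as np

def remove_water(I, z, B_inf, beta_b, beta_d):
    """Recover the scene J from an observed channel I, given range z (meters)."""
    # Step 1: subtract the distance-dependent layer of fog (backscatter).
    direct = I - B_inf * (1 - np.exp(-beta_b * z))
    # Step 2: undo the exponential color loss along the path.
    J = direct * np.exp(beta_d * z)
    return np.clip(J, 0.0, 1.0)

# Usage: synthesize observations with the toy forward model, then invert them.
z = np.array([0.3, 1.0, 3.0, 8.0])
I = 0.8 * np.exp(-0.5 * z) + 0.3 * (1 - np.exp(-0.5 * z))
print(remove_water(I, z, B_inf=0.3, beta_b=0.5, beta_d=0.5))  # ~[0.8 0.8 0.8 0.8]
```

The recovered value is the same at every distance, which is exactly the consistency the color-chart sequence demonstrates.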

To summarize the problem that I see when I look at a glass of water: first of all, I immediately assume it’s a glass of tea, because plain water doesn’t have the particles I need to be working with. The problem that I work on, the problem of color reconstruction, is the following: given an image taken somewhere, in some water body, at some depth, from some distance, with some sensor, with some random combination of objects in the scene, and knowing nothing else about that scene, can we restore the colors to the point where it looks as if there were no water in the scene? And because I was able to formulate a new physics-based equation to quantify this, I also developed a method that can remove the water from underwater photographs. This is an example of a scene—Florida Keys, a very typical soft coral reef—and when we remove the water from this scene, we can see how it would appear if it were a habitat on land.

Here’s another example using the Sea-thru algorithm: this is a coral reef in the Pacific, from Papua New Guinea, at about 25 meters’ depth. When you remove the water, the colorfulness of the habitat is absolutely stunning. Every time I see this image, I wonder: why are these colorful substances, objects, organisms there on the seafloor? Who sees them? Another scene, from the Red Sea: if we remove the water, we see this beautiful reef all the way into the photo, with all its colors. Here’s a very murky shipping channel in Indonesia—and when we look at it outside of the water, it’s actually like a flower garden.

And, of course, nothing stops us from applying this principle to water bodies that are not natural. Here’s an outdoor swimming pool in California: when we remove the water from the pool, not only do we see the swimmers’ bathing suit colors but also their skin tones much more accurately. This is what goes through my mind when I look at a glass of tea—and the next step is going from that glass of water or tea to a scene where there is no water or tea at all. My job, in my career, is to find out why these colors are there on the ocean floor—what purpose they serve, who sees them, what they mean to them—and, basically, their composition throughout the global ocean. So with that, I want to thank you for your time, and I’ll take questions at the end.