Spatial Frequency Separation in Theory

by Marco Olivotto | Sep 3, 2016

Usually one encounters spatial frequency separation for the first time in the field of beauty retouching. To put it as simply as possible, it is a technique which separates the texture of an image from its shape, allowing the retoucher to easily clone over defective areas with no visible side effects. Of course, there is a lot more to it – but the words “texture” and “shape” alone are vague enough to prompt a question: what exactly do they mean? The answer is of paramount importance, so I’d like to start from the very beginning.

What is frequency?


In general, the term “frequency” relates to time. The concept is simple: suppose that an event repeats itself over a given period of time. Count the number of repeats, divide it by the length of the period, and you get the frequency of that event. Let’s look at some examples.

We measure time in seconds, although several multiples are available. This morning my heartbeat is 90 beats per minute or, better said, 90 beats per 60 seconds. The frequency is simply 90 / 60 = 1.5. It makes sense to say that my heart beats 1.5 times per second – or, more correctly, that it has a frequency of 1.5 Hertz. Hertz (abbreviated Hz) is the unit of measurement of frequency, and is defined as the inverse of a second.

If my heart speeds up to 120 beats per minute after a run, the frequency jumps to 120 / 60 = 2 Hz. We call this a higher frequency than the previous one: the higher the frequency, the more often the event happens. More frequently, that is.
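The arithmetic above can be sketched in a couple of lines of Python (the `frequency_hz` helper is a hypothetical name, just for illustration):

```python
# Frequency = number of repeats divided by the length of the period.
def frequency_hz(repeats, period_seconds):
    return repeats / period_seconds

print(frequency_hz(90, 60))   # 1.5 Hz - resting heartbeat
print(frequency_hz(120, 60))  # 2.0 Hz - after a run
```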

Perhaps you’re acquainted with the range of frequencies the human ear can perceive: it is usually stated as 20 Hz to 20,000 Hz. This again makes sense: sound is a vibration in air, and a vibration is characterized by a frequency. If you strike a tuning fork and measure how many times the sound wave it produces cycles over time, you will find something very close to 440 times in a second. That’s a frequency of 440 Hz, of course, corresponding to an “A” note – for your curiosity.

Photographic images are static and don’t change over time, so why bother with frequencies? The answer is that I needed to make this concept clear before introducing what we actually need to discuss, which is something called spatial frequency. And that is, indeed, fundamental in any image.


Spatial frequency separation: frequencies in space, not time


The title of this article should perhaps rather be “Spatial Frequency Separation Techniques”. If frequency describes how quickly a given event happens over a certain amount of time, spatial frequency describes how swiftly something changes over a certain amount of space. A look at the following figures may help.


Figure 1 is a 600 x 300 px file, filled with black and white stripes whose width is 20 px. If you count, there are exactly 30 stripes altogether: 600 / 20 = 30.


Figure 2 has the same dimension, but each stripe is only 4 px wide. There are 150 stripes altogether: 600 / 4 = 150.

The event “stripe” happens more often in figure 2, over the same amount of space. Just as we measure time in seconds, here we may measure space in pixels. We could say that in figure 1 a stripe happens every 20 pixels; in figure 2, a stripe happens every 4 pixels. How much of a stripe is there in one pixel in each case? 1/20 = 0.05 and 1/4 = 0.25, respectively. These numbers, although they’re very rarely written as explicitly as here, are a measure of spatial frequency: in the second case, the frequency is five times that of the first. In plain words, the stripes happen five times as often.
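For readers who like to see the numbers computed, here is a minimal sketch of the calculation above (the `stripe_frequency` helper is a hypothetical name introduced here, not anything from the article):

```python
# Spatial frequency of the stripe patterns in figures 1 and 2,
# expressed as "stripes per pixel".
def stripe_frequency(stripe_width_px):
    return 1 / stripe_width_px

f1 = stripe_frequency(20)  # figure 1: 20 px wide stripes
f2 = stripe_frequency(4)   # figure 2: 4 px wide stripes

print(f1)              # 0.05
print(f2)              # 0.25
print(round(f2 / f1))  # 5 - five times the frequency
```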

It therefore makes a lot of sense to say that the stripes in figure 2 have a higher frequency than those in figure 1. This is, of course, tech-speak. My mum would probably say that the stripes are thinner in the second case, and that is correct. A photographer, though, may not be very concerned with a series of artificial stripes, so we’d better move on to a real image.

Spatial frequencies in reality



Figure 3 depicts a lemon. It’s a very uninspiring lemon, as it looks like a shape drawn in Illustrator and filled with a flat color. This may sound obvious, but let’s ask ourselves why we don’t find such a lemon very exciting.


A partial reply lies in figure 4: from a realistic point of view, it is a better rendition of a lemon because it suggests at least some shape. There is variation both in color and luminosity – something which figure 3 lacked completely – and that variation is the cue to our visual system that we are not looking at a flat object. Yet we wouldn’t take the lemon of figure 4 as real: something is still missing.


The fruit in figure 5, instead, we recognize as a photograph. It actually is one, and if someone told us “no, it was drawn”, we would probably be amazed at the perfection of the detail. Yet the overall color and luminosity variation in this version is the same as in figure 4. Where’s the actual difference?

Texture


Texture is the keyword, here. The surface of a real lemon is not at all flat: it’s full of small features instead. The fruit in the original photograph is wider than 2000 px, and figure 6 shows a 1:1 crop.


Figure 6. We can detect a series of small bumps and craters which appear either darker or lighter than the average skin tone due to the light bouncing off them. These features are very small compared to the size of the lemon, and we might as well call them fine detail, as opposed to the overall shape of the fruit, which is very coarse. A common name for them is texture.


Figure 7.

It is interesting to think of the lemon of figure 5 as the featureless shape of figure 4 with detail superimposed somehow.

It is entirely possible to separate the two characters of the fruit: figure 7 shows the detail we need to add (more about the exact meaning of “add” later) to figure 4 to obtain figure 5. That is, we’re looking at the texture straight in the eye, as if it had been peeled off the lemon skin. If we retouched the texture for some reason, the shape would remain intact. If we decided to retouch the shape instead, we would not need to worry about the texture.

The features seen in figure 7 do not vary much over the large scale of the fruit, but vary wildly at a much smaller scale. They have a high degree of variation, which leads us to connect them with high spatial frequencies, i.e. something that varies quickly over a given size (that of the lemon, in this case). On the contrary, the placid shape of figure 4 has no texture at all, and its variation happens on a much larger scale. We can connect it with low spatial frequencies, i.e. something that varies smoothly over a given size (again, that of the lemon). If we tear an image apart and separate the high and low frequencies, we can always rebuild the original look with the right blend modes. With a huge advantage: we can manipulate two very different characters of the image separately, often with amazing results.
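The split-and-rebuild idea can be sketched outside Photoshop, too. Below is a minimal Python/NumPy illustration on a one-dimensional “scanline”: a box blur stands in for Gaussian blur, and plain float arithmetic replaces the blend-mode recombination. The signal and radius are arbitrary illustrative choices.

```python
import numpy as np

def box_blur(signal, radius):
    # Moving-average blur: a simple stand-in for Gaussian blur.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(0)
shape = np.linspace(0.2, 0.8, 200)          # smooth "shape" (low frequencies)
texture = 0.05 * rng.standard_normal(200)   # fine "texture" (high frequencies)
image = shape + texture

low = box_blur(image, radius=8)   # the blurred copy keeps the shape
high = image - low                # the residual keeps the texture

print(np.allclose(low + high, image))  # True - the separation is lossless
```

Because the high-frequency layer is defined as “original minus low”, adding the two back together is exact by construction: nothing is lost in the split.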

This procedure is quite standard and enormously useful when it comes to retouching photographs – portraits especially. For this reason it is worth investigating how the separation of spatial frequencies can be obtained.

Low frequencies first


Figure 4 shows what we mean by “low-frequency image”: an image whose subtle features are erased, while the bigger structures remain. Obtaining such an image is easy: all we need is a blur. Gaussian blur is the most commonly used, but it’s not the only solution: other blurs can be exploited, with varying results.


Figure 8 shows a computer-generated series of vertical stripes. The leftmost stripe is 20 px wide, and each subsequent stripe is 1 px narrower than the previous one, until a 1 px wide stripe appears (in the centre). The rule is then inverted: centre to the right, each stripe is 1 px wider than the previous one, until a width of 20 px is reached again.


Figure 9 shows that a 4 px radius Gaussian blur is enough to seriously hit the highest frequencies (centre). The lower frequencies (sides) are preserved, although the stripes look out of focus.


Figure 10 shows how a 10 px radius Gaussian blur blows away any fine detail: the centre area is basically uniform, and no stripes are visible. The widest stripes remain, but they are suffering, too.
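A quick numerical check of the same phenomenon (a sketch, not the actual figures: a one-dimensional stripe pattern and a hand-rolled Gaussian kernel in NumPy) shows how much contrast survives the blur in each case:

```python
import numpy as np

def stripes(stripe_width, n=600):
    # Alternating 0/1 stripes, as in figures 1 and 2.
    return np.where((np.arange(n) // stripe_width) % 2 == 0, 1.0, 0.0)

def gaussian_blur_1d(signal, sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(signal, radius, mode="wrap")  # stripes are periodic
    return np.convolve(padded, kernel, mode="valid")

for width in (20, 4):
    blurred = gaussian_blur_1d(stripes(width), sigma=4.0)
    contrast = blurred.max() - blurred.min()
    print(width, round(contrast, 2))
```

The 20 px stripes keep most of their contrast after the blur, while the 4 px stripes are nearly wiped out – exactly the behaviour of figures 9 and 10.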

The amount of detail depends on image size


Whenever an object is more complex than a lemon, things get awkward. We immediately recognize the face of a beautiful young girl in figure 11, even though it has been resized to just 400 x 600 px. If you look carefully, you can spot some small blemishes on the skin, a hint that the picture has not been retouched. These indeed qualify as high-frequency components, as do the hair, the eyebrows and so on. But what about the eyes? I would say they have a lower frequency, but they are not “low frequency” proper – because the mouth is a bigger shape, the nose is bigger still, the fingers are longer yet, and everything combines to form a face with all its features, which is the lowest relevant frequency in the picture. Please notice that the actual value of a frequency largely depends on the size of the picture: the original is almost 4,200 px high, and in it I can easily spot contact lenses in the eyes of the girl. Can you, looking at this version? I bet not: it is too high a frequency to be reproduced in such a small rendition. This is, of course, a very posh way to say that the bigger the picture, the more detail we potentially have – and the higher the frequencies we can reproduce.

The basic idea behind frequency separation is clever but rather simple. We all know that cloning over a picture can be difficult: our visual system is very sensitive to even the slightest variation in texture, and if we’re not careful, the retouch will be painfully evident. Going back to the example of the lemon, it would be great if we could clone the texture only, regardless of luminosity and color. That is, peel the fine detail off the fruit and paste it somewhere else.

This can be clearly and easily shown with the picture of the girl (figure 11).


Figure 11


Figure 12. A standard copy-and-paste sample (left) and the same done on the high-frequency layer (right).

Figure 12 may look a bit eerie, but it proves the point. In the left version, the left eye of the girl was cloned over the forehead. It is, basically, a sophisticated case of copy and paste. In the right version, the image was divided into a low-frequency and a high-frequency component, subsequently recombined by clever use of blend modes. The look of the image is identical to the original, but the fine detail has been “peeled off”, as one would do with a lemon. The clone stamp was put to use on the high-frequency layer only, and the result is as obvious as it is stunning: the texture of the eye has been transferred onto the skin, but the overall luminosity and color remain untouched. There is no straightforward way to do this in Photoshop otherwise, and no amount of transparency will do. The interesting thing is that the skin around the eye appears untouched: it isn’t, because the pores are different, but it’s a huge result for a clone stamp used at 100% hardness without any care.
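As a sketch of what happens under the hood, here is the same kind of experiment on a grayscale toy image in Python/NumPy. A box blur stands in for Gaussian blur, and the patch coordinates and radii are arbitrary illustrative choices, not anything from the article.

```python
import numpy as np

def blur2d(img, radius):
    # Separable box blur: convolve each row, then each column.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

rng = np.random.default_rng(1)
h, w = 64, 64
shape = np.tile(np.linspace(0.2, 0.8, w), (h, 1))   # smooth gradient
image = shape + 0.05 * rng.standard_normal((h, w))  # gradient + texture

low = blur2d(image, radius=4)
high = image - low

# Clone a 16 x 16 patch of *texture only*, left side onto right side.
high[24:40, 40:56] = high[24:40, 4:20]
result = low + high

# The destination keeps its own local luminosity:
print(abs(result[24:40, 40:56].mean() - low[24:40, 40:56].mean()) < 0.03)
# -> True

# A naive clone on the flat image would drag the luminosity along:
naive = image.copy()
naive[24:40, 40:56] = image[24:40, 4:20]
print(abs(naive[24:40, 40:56].mean() - low[24:40, 40:56].mean()) > 0.2)
# -> True: the naive patch is visibly darker than its surroundings
```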

Conclusion


Pasting a ghastly eye on top of someone’s forehead is probably not very interesting, so we’d better look for a way to put this technique to work.

The first observation is that the border between “high” and “low” frequency is very vague: it depends on the image and on its size, as we’ve seen. Moreover, we might as well define several frequency ranges: why not “high”, “mid” and “low”? Or even more: we might stick “mid-high” and “mid-low” in between the three we just mentioned.

The bottom line is that the point of decomposing images into frequencies is manipulating elements of different spatial scales easily. If in the previous example we had cloned bits of skin (rather than an eye) in order to remove blemishes, we would have discovered that cloning on the high-frequency layer only leads to more natural results than cloning on a single layer (that is, the original image). It would, because we would only copy texture: the luminosity and the color of the original would remain largely untouched. This means fewer halos, fewer artifacts, less obvious repetition when we need to use a single sampled clone in several places, and so on.
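The multi-band idea is a straightforward extension of the two-band split: blur with a series of increasing radii and take the difference between successive blur levels. The sketch below (Python/NumPy, with arbitrary radii, a box blur, and a one-dimensional signal for brevity) shows that the bands always sum back to the original:

```python
import numpy as np

def box_blur(signal, radius):
    # Moving-average blur: a stand-in for Gaussian blur.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(2)
image = np.cumsum(rng.standard_normal(500)) / 10  # arbitrary test signal

radii = [2, 4, 8, 16]          # small radius = high frequencies
levels = [image] + [box_blur(image, r) for r in radii]
bands = [levels[i] - levels[i + 1] for i in range(len(radii))]
base = levels[-1]              # the residual low-frequency layer

rebuilt = base + sum(bands)
print(np.allclose(rebuilt, image))  # True - the split is lossless
```

Because each band is the difference between two consecutive blur levels, the sum telescopes back to the original, no matter how many bands we choose – which is why each band can live on its own layer and be edited independently.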

There are several manipulations that can be done on the different frequency layers, but they require proper examples: they would take too long to show here, so we’ll stop for the time being and put them in an upcoming second part of this tutorial.

’Til the next time!

Frequency Separation Made Easy


Marco Olivotto explains here, in an understandable way, what frequency separation is and how it works. Don’t miss it!

Decomposed layers created by Wow Frequency Equalizer Pro, with the panel interface and the layers window
Wow! Frequency Equalizer Pro Animated Panel

Check out our Frequency Separation and Decompose Plugin


Wow! is an exclusive Photoshop extension that lets you easily boost or smooth each frequency range. Wow! adds style, image sharpness and three-dimensionality, together with incredibly smooth transitions. Use our easy presets with just one button, or take full control with the high-quality live preview to add and remove details by tweaking our five dedicated sliders.

The Decompose button is an exclusive feature of Wow! Frequency Equalizer Pro. You can now explicitly turn each frequency range into its own pixel layer (five of them are created in a stack, plus a base layer) for better and more precise frequency-based retouching.
You can paint, clone and heal on the exact frequency layer that contains the features you need to target, with great precision, giving you unprecedented control over the retouching process.

Marco Olivotto


Color Correction, Instructor, Writer

Lives and works in Rovereto, Italy.

He has been a master of color correction since he decided to spread the word about color correction techniques in Photoshop, still relatively unknown in Italy. Starting in 2011 he has run the Color Correction Campus, full-immersion one- or two-day courses where students work on images, applying the techniques taught in class. The CCC soon gained a big reputation and plenty of enthusiastic followers. Marco also teaches color correction techniques at important workshops and conferences and writes about color correction for magazines.
Last but not least, Marco “The Voice” Olivotto is the author of our best tutorials.
The great Dan Margulis, who invented color correction in Photoshop and was Marco’s first mentor, publicly called him “a renaissance man” because of his eclecticism.

Visit his site >

3 Comments

  1. Marco Olivotto

    Guy, many thanks for your feedback, I really appreciate it. Sorry for failing to reply immediately: I got it by e-mail but was about to leave for a course near Rome, and there I am in fact. I attempt to make difficult concepts easy by using understandable models, and sometimes it is difficult to find the right way to explain things – but in time, maybe, some ideas actually flourish. Glad you find WOW! Pro useful: the decomposition in layers can be absurdly powerful, although it takes some time to understand exactly how to exploit it. Keep on experimenting and let us know how you use it, if you wish.
    Many thanks,
    MO

    Reply
    • Guy Perkins

Thank you for the invitation, Marco… and watch out for the black spaghetti.

      Guy

      Reply
  2. Guy Perkins

    Hello Marco,

    I hope you and everyone at Know How Transfer are well and prospering.

    I really like your new videos, demonstrations and explanations, particularly as they focus on frequency separation. They’ve helped me a great deal in my understanding of this concept and approach.

    I am also enjoying and relishing the new WOW! Pro. I’ve had it a couple of weeks now, and it’s already indispensable. Thanks to all of you.

    I plan to make some time soon to email my congratulations to Roberto.

    My very best always,

    Guy

    Reply
