This entry is part 4 in the series Image Compositing for RPGs

Palette image by Alexander Lesnitsky from Pixabay, tweaked by Mike.

In the first part of this series, I detailed the compositing modes that I use most frequently, along with a few other hints and techniques.

The second part showed project number 1, taking a black and white photograph (grayscale) and adding unconventional colors to transform the image into a blue-skinned alien on some strange other world.

In the third part, last time, I took images that were in color and showed how carefully stripping that color out lets you replace the original colors with your own, completely transforming the associated context of an image (a similar approach gets used to render objects underwater, FYI).

In this fourth part, I’m going to tackle a more challenging proposition, that of turning a monkey blue. This project has several things in common with the first two, but adds some new wrinkles to the technique.

WARNING:
This is a comparatively lengthy post – the equal, in terms of text, of all three of the parts that preceded it put together, approaching 15,000 words, and with 80-odd illustrations. Get yourself a beverage before you start, and settle back, we’ll be here a while…

Project 3: A Blue Monkey

The base image chosen is

– and when this project is complete, it’s anticipated that it will actually get used in the next Adventurer’s Club adventure.

Here’s the worksheet for today’s exercise:

  • Turn the monkey blue while keeping the fur realistic
  • Preserve the branch / tree that he is sitting in and maintain its integrity within the image
  • Extend that tree branch to widen the total photograph
  • Replace the background with something more aggressively and suggestively “jungle” using other images and clip art (hence the difficulty: maintaining the tree limb’s integrity while the context of everything around it is changed).
  • Said replacement background to consist of multiple layers – a distant background, a middle-distance midground, and several layers of foreground.

Rule Zero is always to have a purpose when you start working on an image, and it’s been demonstrated a couple of times already. Rule One of photo editing is to use the purpose as a guide to needs and planning.

Since I had a clear idea of what I wanted to achieve with this image, I went looking for what I would need to ‘pull it off’.

The distant background will be this image:

but when I originally planned the image, because I wasn’t using the right search term, this was what I had found:

This had the color that I was looking for, but there is a reason why this image is so widely used in St Patrick’s Day-related pages – it’s all clover. So I was worried that it might look inappropriate. Nevertheless, there may still be a role for this image in the planned composite.

In the midground is going to be this piece of clip art:

which was sourced from https://www.clipartkey.com/.

For the foreground, behind the extended tree limb, I have these Elephant’s Ear plants, also courtesy of clipartkey.

(watermarked image) via https://www.clipartkey.com/

I note that the image is watermarked, which is usually a big no-no, but in this case, the watermark seems to be behind the plant stems so hopefully it won’t cause too much of a problem – I just have to make sure that the tree-limb covers the watermark if it intrudes.

I also have these similar plants, to go even further in front:

To the extreme left of the image, we will have this:

It was supposed to have a transparent background; it didn’t – instead, it had the checkerboard pattern that is used to indicate transparency. It also had a shadow, as though it were a cardboard cut-out hovering a little in front of the page! So both of those had to go, leaving me with a transparent png. It’s way too soon to tell if everything from freepik will suffer from the same problem – which would be enough for me to dump them from my resources list – but it’s something that I’ll be keeping an eye on.

Finally, to go in front of everything and help tie the whole image together, I have this:

– images of a pair of coconut leaves that DO have a genuinely transparent background. I may well use these twice – once coming in from the left, and once just overlapping the tree on the right, modified slightly in shape and definitely in scale and massively blurred because they are ‘so close to the camera’, i.e. the observer.

Here’s the plan of attack:

  1. Disassemble the primary source into three parts: monkey, tree, and background.
  2. Expand the canvas width to about 250% of what it already is, and use copies of the tree branch with suitable edits to extend the branch outward into the enlarged space.
  3. Design and implement the Monkey color-change.
  4. Position the distant background.
  5. Position and assemble the midground.
  6. Resize the distant background.
  7. Assemble the foreground. Use the size of the distant background to dictate the new left-hand side limit of the image.
  8. Further Resize and possibly blur the distant background.
  9. Ghost Leaves to fill any voids
  10. Coconut leaves left.
  11. Coconut leaves right.
  12. Final review of the composited image.

A twelve-step plan – with some of the steps being a lot more involved than others.

Step 1 – Disassemble the primary source into three parts: monkey, tree, and background.

There are a couple of tricks that I use when disassembling images that I should tell you about.

  • Work with images that are 2, 3, 4, or 5 times the scale that you eventually want the image to display at, with preference to 2 or 4 times. In extreme cases, you may need to go to 8x or even 10x scale.
  • When the images are grayscale, you can often get away with simply selecting the parts that you want and it will all come together in the end.
  • Things get a little trickier with color, because at the edges of the image, there will often be a transition from one color to another, from the object that we want to the part of the image that we don’t. Where that unwanted part of the image has a very different lightness or color to what we want to include and what we want to put in place of the material removed, you will often get a halo – and one that won’t completely go away using the copy-of-layer-underneath-and-blur technique. It needs help. The basic approach is to 1. select the area to keep; 2. shrink the selection by 1 or 2 pixels; 3. feather the selection by the same number of pixels as you shrank the selection; 4. copy or cut the desired image and then paste (creating a new layer).
  • But, there are complications that result at the edges of the image overall – you often need to insert a step 3a: manually re-select the parts you want to keep that lie along those edges.
  • Where the color values between what you want to remove and what you want to replace it with are not too dissimilar, you can often get away with feathering out by an extra pixel or two. You may need to use soft erase in spots.

‘Feathering’ is a really hard concept to explain clearly – you either understand it all at once (usually from using it a time or two) or you simply don’t ‘get’ it. But I’ll try.

Usually, when you select a rectangular area, it will be a ‘hard select’ (there are exceptions, and they can be a pain to deal with). As soon as you add an angle, that goes out the window; if your selection line runs diagonally from one corner of a pixel to the opposite corner, it gets copied or cut at 50% opacity. If more of the pixel is “in”, the opacity goes up; if less, then it goes down. And complicated shapes are inevitably full of ‘partial pixels’. That’s a problem when the pixel content is largely unwanted.

The image to the right illustrates the situation – in theory, this depicts a 9-pixel block within a larger image. When you zoom out, it looks like part of a round object or red spot set against a sky-blue background. You can see the three fully-red pixels at the bottom left, and the fully-blue top right. Every other pixel is a blending of the two, from the mostly red to the mostly blue. Along comes our image editor, who wants to replace the blue with a deep green. So he starts by selecting what he wants to keep, doing his best to follow the shape of the object as he perceives it. (1)

He then copies and pastes what he wants to keep into a new layer. With the source turned off, everything looks perfect (2)

But, in reality, there’s some blue mixed in with the pale red of the pixels running from top left to bottom right, and when you drop a deep green in behind, that suddenly shows up as a ‘halo’ of sky blue of limited opacity. (3)

What we wanted to achieve is shown in figure 4. The green is compromised by red in the pixels around the edge, but there’s none of the blue (or very little of it).
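If it helps to see the numbers rather than the picture, here’s a minimal NumPy sketch of the same situation. The pixel values are invented purely for illustration (they aren’t sampled from any of the figures); the point is simply where the halo comes from when an anti-aliased edge pixel is pasted over a new background.

```python
import numpy as np

# One anti-aliased edge pixel: 60% red object, 40% sky blue leaked in.
red      = np.array([200.0,  40.0,  40.0])   # object color
sky_blue = np.array([120.0, 180.0, 255.0])   # old background
green    = np.array([ 20.0,  80.0,  30.0])   # new background

edge_pixel = 0.6 * red + 0.4 * sky_blue      # what the source image actually stores

# Naive cut-and-paste keeps the whole pixel at full opacity,
# so 40% of the old sky survives on top of the new green -> the 'halo'.
naive = edge_pixel

# What we wanted: only the red fraction of the pixel, blended with the new green.
ideal = 0.6 * red + 0.4 * green

print("stored edge pixel:  ", edge_pixel.round(1))
print("naive paste on green:", naive.round(1))
print("ideal result:        ", ideal.round(1))
```

The ‘naive’ result still carries that 40% of sky blue – which is exactly the pale halo of figure 3.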

Feathering The Selection

Feathering the selection selects an additional ring around the selection already made, at a reduced opacity.

Now, you either understood that completely, or quite probably, not at all.

If you feather by 1, the selected area grows by 1, but the outermost edge of the selected area is at an opacity midway between 100% and 0% – i.e. 50%. If you hit the delete key, the feathered selection is only reduced in opacity by 50%. If you copy the selection, those pixels are only sampled at 50% opacity.

If you feather by 2, the selected area grows by 2. Now you have a 1-pixel ring around your original selection (100%) at 2/3 opacity (66.6%), and an additional 1-pixel ring around that at 1/3 opacity (33.3%).

If you feather by 3, you get rings of 75% opacity, 50% opacity, and 25% opacity.

If you feather by 4, you get rings of 80%, 60%, 40%, and 20% opacity.

Now, if you shrink the original selection before feathering, the effect is of fading out the very edge of the part of the image that you want to keep in order to reduce the amount of the ‘contamination’ from the parts that you don’t want that get copied.
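For the curious, here’s a rough sketch of shrink-then-feather applied to a selection treated as a grayscale mask (0 = unselected, 1 = fully selected). It approximates the idea – erode the selection, then ramp the opacity back out over the feather distance – rather than reproducing Krita’s actual algorithm.

```python
import numpy as np
from scipy import ndimage

def shrink_and_feather(selection, shrink_px=2, feather_px=2):
    """selection: 2-D boolean array, True where selected.
    Returns a float mask in [0, 1] - an approximation of shrink-then-feather,
    not Krita's own code."""
    shrunk = ndimage.binary_erosion(selection, iterations=shrink_px)
    # Distance (in pixels) from each unselected pixel to the shrunken selection.
    dist = ndimage.distance_transform_edt(~shrunk)
    # Inside the shrunken area: 1.0. Outside: fall off linearly over the feather
    # distance, matching the rings described above (feather by 2 -> ~2/3 and ~1/3).
    return np.clip(1.0 - dist / (feather_px + 1), 0.0, 1.0)

# Tiny demonstration: a 7x7 solid selection in an 11x11 canvas.
sel = np.zeros((11, 11), dtype=bool)
sel[2:9, 2:9] = True
print(shrink_and_feather(sel, shrink_px=2, feather_px=2)[5].round(2))
```

The printed middle row comes out as 0, 0, 0.33, 0.67, 1, 1, 1, 0.67, 0.33, 0, 0 – the same 2/3 and 1/3 rings as the feather-by-2 example, pulled back inside the original boundary by the shrink.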

Feather is found near the bottom of the “Select” menu at the top of Krita’s screen. That’s also where you find Shrink, Grow, and the other selection controls.

The problems with shrinking and feathering

There are two problems with this technique.

First, there’s a problem with fine details like hair or fur or blades of grass, or any selection that’s long and thin. These are areas where contamination is especially likely to occur. You can very carefully select each and every hair as best you can, but those finely-detailed selections are shrunken to the point of not being there when you shrink the selection. The larger the image (in number of pixels), the larger these fine details are, and the less you suffer from this problem. But some reconstruction of the edges of such materials is often necessary.

The other problem is that the software can’t understand that the image that you want to keep extends beyond the edge of the page, so it treats the edge of the page as the limit of selection. When you shrink that selection, it shrinks back from the edge of the page as well as the parts of the image that you have deliberately selected. And that means that when you feather the selection, you also fade out the edge of the image even though the object you are selecting continues beyond that edge.

That’s most easily solved using the Polygonal or Outline selection tools (a) with the addition-to-selection option (b). The latter comes up on the right-hand side of the editor when you select one of the selection tools on the left.

Anything that you select in this way after feathering, including parts already selected at partial opacity, gets re-selected at 100% opacity. So the edges of the image are quickly restored – at the price of losing the benefits of the feathering right at the edge of the image. So it’s a compromise, but one that lets you get on with the job.

The four modes of selection (and some other selection notes)

It’s probably worth spending a moment describing the settings for these tools. Mastering them is one of the most involved tasks in photo editing, and I’m still learning (even though I knew enough to do image editing and restoration as a professional thirteen or fourteen years ago). It takes years of practice to be even passable at this particular skill – but you become more than skilled enough to make a living doing such work long before reaching that standard.

There are four main modes of selection: Replace, Intersect, Addition, and Subtract, called Actions by Krita because it uses the term “mode” for something else, as the enlarged screenshot above makes clear. These actions are all about how a new selection will relate to a selection that is already present on the canvas.

  • Replace means exactly what it says – as soon as you make another selection action, any existing selection gets forgotten and replaced with the new.
  • Intersect means that no matter what you select now, only those parts of that selection that overlap a pre-existing selection will stay selected. It took me a long time to find this useful, and even now it’s the mode that I use least often.
  • Addition adds whatever you select to any already-existing selection, even if the two never intersect. That can be incredibly useful, because it means that you don’t have to do your whole selection job with a single set of inputs using your mouse (or other graphic interface tool, such as a graphics tablet).
  • Subtract means that whatever you select now doesn’t stay selected, while anything that’s left untouched of a previous selection remains.

Replace is the default, and therefore the first one that you learn. Addition and Subtraction follow soon after. Intersect is the last one to be mastered.
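One way to internalize these is to think of a selection as a grayscale mask (0 = unselected, 1 = fully selected); the four actions then reduce to simple per-pixel operations. This is a conceptual sketch of how selection combination generally works, not a claim about Krita’s internals:

```python
import numpy as np

def combine(existing, new, action):
    """existing, new: float masks in [0, 1] (1 = fully selected).
    A sketch of how the four selection actions combine masks."""
    if action == "replace":
        return new
    if action == "add":
        return np.maximum(existing, new)      # union of the two selections
    if action == "subtract":
        return np.clip(existing - new, 0, 1)  # keep only what the new selection misses
    if action == "intersect":
        return np.minimum(existing, new)      # overlap only
    raise ValueError(action)

old = np.array([1.0, 1.0, 0.0, 0.5])
new = np.array([0.0, 1.0, 1.0, 1.0])
for action in ("replace", "add", "subtract", "intersect"):
    print(action, combine(old, new, action))
```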

In between, you will get to know the “select” menu options very well, because they can interact with and modify existing selections, which in turn interact with new selections via these four modes. Again, I’m still learning what some of these do, so what follows will be incomplete at best. To start with, unless you already have part of the image selected, not all of these will be available – in fact, most of them will be grayed out.

I should also add, right off the bat, that I have used a number of paint programs over the years, and almost all of them have selection tools that work in a very similar way, so this skill tends to be highly transferable.

  • Select All – selects the entire image. Which means that addition is meaningless, but subtract becomes very powerful.
  • Deselect – removes all existing selections. This is incredibly important because Krita won’t let you do anything to any part of the image that isn’t selected if any part of the image is selected – you can’t paint on it, you can’t draw on it, “move” and “edit” only affect the selected part of the image, and so on. This will forever catch you out.
  • Invert Selection – Any part of the image that was selected is no longer selected, and vice-versa. In combination with add and subtract, this can be very powerful – and quite useful.
  • Convert to Vector Selection – This is part of Krita that I have not yet explored. I note that the “mode” selection (above the four “action” sections) offers two modes, with bitmap being the default, and “vector selection” being the alternative, so I suspect that the two are related.
  • Convert Shapes to Vector Selection …ditto.
  • Convert to Shape… Shape is a vector graphics term, as is Object. So, once again, I think this relates to the same unexplored part of the software.
  • Display Selection turns the dashed line that surrounds the selection on and off – though why you would want to turn it off has escaped me. The default is ‘on’.
  • Show Global Selection Mask – I know nothing about this menu option.

  • Scale… I think this lets you keep the same shape within your selection but make it bigger or smaller. How that is different from growing or shrinking the selection, I’m not sure – but there have been times when that hasn’t quite done what I want, and the next time that’s the case, I intend to play around with this a little.
  • Select From Color Range… – This could be wonderfully useful or a total waste of time, I don’t know. It’s something else that I have to explore.
  • Select Opaque – Ditto.
  • Feather Selection… – in many ways, “feather selection” is one of the recurring themes of today’s article, and how this project differs from the last two. The “…” on the menu item usually means that it opens a dialogue box in which various parameters can be set. In the case of feather, you select the scale of the feathering, from a low of 1 pixel to a high of whatever.
  • Shrink Selection… – Makes a selection smaller. If, as a result, the selection is less than one pixel wide, that part of the image stops being selected.
  • Border Selection… – I’m still learning about this, even though I’ve now used it a few times. I don’t know if the border is outside the selection boundary, inside the selection boundary, or centered on the selection boundary; I have a suspicion (unverified) that it’s the last of these. I also suspect that choosing ‘1 pixel’ creates a border that runs from one pixel outside the selection to one pixel inside the selection – i.e. two pixels wide. Because of these uncertainties, I will often use an alternate method of selecting a border.
  • Smooth – I’ve played with this a time or two but, while I think I know in theory what it does, in terms of practical functionality and problems, I’m not so sure.

It probably doesn’t help that I’m a single lone user of this software, entirely self-taught. Being able to explore tips and tricks with someone else on a collaborative basis would be incredibly educational! The assumption, of course, being that they have mastered parts of the program that I haven’t touched, while I’ve mastered things that would never have occurred to them. Be that as it may…

An alternate method of selecting a border

Here’s a slightly complicated shape (1):

  • If I want to select a two-pixel border outside the shape (which includes the text), I use the similar color tool to select the white in subtract mode, grow the selection by two pixels, and then select the black while still in subtract mode using the same tool. If I fill that with red, the second image shows the result. Note that because I’ve drawn a black box around the image, I also get a red box! (2)
  • An alternative method (and the one that I would usually use) is to select the black and copy it into a separate layer; then, with the black still selected, grow the selection by two, create a new layer underneath the black, and then fill the selection. Using blue this time, the third image shows how that works out. (3)
  • The fourth image zooms in on the third. Notice the jagged edges of the border? This is where I think ‘smooth’ might help. (4)
  • So I’ll do exactly the same thing in the fifth image right up to the point of having grown the selection. Then I’ll hit it with the smooth menu item and see what happens. This time, I’ll fill it with green….(5)
  • A closeup shows some improvement. But it’s still not completely satisfactory, and I don’t like the way the bottom tail of the “S” looks. (6)
  • There are a couple of possible causes, and perhaps several of them are ganging up on me. So this time, I will use the Similar Color tool to select the white, then invert the selection; that should mean that more light gray is part of the resulting selection. I’ll then cut-and-paste the selection into a new layer, grow the selection, fill it – in a new layer below the pasted black, and in a pale blue this time – deselect, and blur that layer with a 2-pixel value. Finally, I’ll multiply by the base image, which means that all those pale grays in the text get merged with the blue fill instead of being obliterated, smoothing the text significantly. (7) This is what I hoped to achieve, as you can see from the close-up (8)!
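For anyone who wants to script something like that second method, here is a rough Python/SciPy sketch of the “grow the selection, fill a layer underneath, blur, then multiply” recipe. It’s an approximation of the manual steps described above, not anything Krita exposes directly; the pale-blue border color and the grow/blur amounts are arbitrary placeholders to be tuned by eye.

```python
import numpy as np
from scipy import ndimage

def bordered(gray, border_rgb=(0.6, 0.8, 1.0), grow_px=2, blur_px=2):
    """gray: 2-D float array in [0, 1], a dark shape on a white background.
    A rough stand-in for 'grow, fill a layer underneath, blur, multiply'.
    Returns an RGB image: the shape plus a soft pale-blue border."""
    shape = gray < 0.5                                   # 'select the black'
    grown = ndimage.binary_dilation(shape, iterations=grow_px)
    fill = np.ones(gray.shape + (3,))                    # white layer...
    fill[grown] = border_rgb                             # ...filled inside the grown selection
    fill = ndimage.gaussian_filter(fill, sigma=(blur_px, blur_px, 0))  # soften the fill's edges
    base = np.repeat(gray[..., None], 3, axis=2)         # original shape as RGB
    # Multiply composite mode: the pale grays of the anti-aliased text merge
    # with the blue fill instead of being obliterated.
    return base * fill

# usage (hypothetical): rgb_result = bordered(grayscale_text_array)
```
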
Problems with the Contiguous (color) selection tool and the similar color selection tool

While I’m in the vicinity, so to speak, I should mention a couple of issues with these tools that can arise when dealing with colors that are close but not quite similar enough.

For example, let’s say we have a blue sky with some clouds, and that’s what you want to select. So you choose one of these tools and click to get the blue part of the sky. And then you switch to selection addition mode and add in the parts that haven’t already been selected.

Here’s the trap: not all of those colors will have been precise matches to the reference color of the exact pixel you clicked on. And those colors that were only a 50% match within the limits you’ve specified only get selected at 50% opacity – and there is absolutely no way to tell from looking at the image.

Here’s a cloudy landscape that I threw together in literally less than 10 minutes. It’s complicated by the rain, but anyway…

So, if I select the sky and the clouds using the color pickers instead of manually tracing out the edges of the land, I end up with this selection – not perfect but it looks like everything that matters is covered:

But if I cut the selection out, ready to paste it into a new layer, I’m left with a very obvious remainder that has been left behind!

If I throw a black panel behind everything, the problem is shown to be even worse.

And, if I paste the cut layer back in, instead of restoring exactly what was there originally, I get this:

….Which isn’t bad, but now carries a hidden flaw, one that is revealed if I turn on the black panel again:

Now, if that effect was what you wanted to achieve, then congratulations! I’ve done oil slicks using a similar technique in the past. But most of the time, that’s not what’s wanted.

Here’s the correct way of dealing with the problem, starting back right after we’ve made the selection. I then create a new layer and fill it with a spot color – any spot color – not once, but several times. The first time I do so, I can see immediately that it was necessary, because the image now looks like the black-panel image above. The opacity of the fill is dictated by the opacity of the selection.

As you dump your spot color in, however, even those somewhat translucent areas get filled.

Next, I invert the selection, and with a paintbrush, correct the obvious flaws in the selection.

What I have just created is called a Mask, or – more specifically – a Selection Mask.

It defines the area that I want to select. Which enables me to turn off every other layer (so as not to contaminate the selection) and then select the mask – then turn the other layers back on (and the mask layer off) and hey-presto: a perfect cut, and a perfect paste.
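To make the “fill it several times” part concrete, here’s a tiny numerical sketch. It assumes the fill behaves like ordinary “over” compositing through a partially-opaque selection (which is my reading of what’s happening); the selection strengths are invented for illustration.

```python
import numpy as np

# Fuzzy tools select some pixels at partial opacity; filling through that
# selection deposits paint at that opacity, so repeated fills converge toward 100%.
selection = np.array([1.0, 0.5, 0.25, 0.0])   # per-pixel selection strength (illustrative)

mask_alpha = np.zeros_like(selection)
for _ in range(6):                            # 'not once, but several times'
    # standard 'over' compositing of a fully opaque spot color through the selection
    mask_alpha = mask_alpha + selection * (1.0 - mask_alpha)
    print(mask_alpha.round(3))

# After half a dozen passes the 50%-selected pixel is essentially solid and the
# 25% one is well on the way; anything still at 0 was never selected at all and
# gets touched up with the brush. The result, thresholded, is the Selection Mask.
hard_selection = mask_alpha > 0.5
print(hard_selection)
```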

Back to the project

So, with all that technique explained, I can now get on with dissecting the primary source image. I have one or two other tricks up my sleeve that I’ll show along the way.

As stated earlier, I want to separate a copy of the image into three constituent layers – the tree, the monkey, and the background – and then get rid of the background completely.

The tree can be handled as a straightforward selection, shrink, and feather, then cut and paste, with one refinement: I’ll create a version to be blurred (as explained in The Power Of Blur) from the initial selection, and after the second paste, grow the selection one pixel, invert the selection, and delete anything selected from the layer-to-be-blurred.

It should be noted that I am working with a version of the image that is 2224×1483 in size.

Step-by-step:

  • Initial selection with the Polygonal Selection tool.

As expected, this proved to be more challenging than it initially appeared, because of very fine fur over the top of the tree, as this closeup shows:

The bottom frame also gives you some idea of the scale to which I had to zoom to handle these fine hairs.

  • Create a selection mask

Notice the bottom left, where – in the real image – the branch of what appears to be another tree crosses the tree that I am preserving.

I don’t want to preserve that intruding tree-limb, and so have not included it in the selection mask.

  • Select the area using the selection mask, then copy-and-paste the tree into a new layer.

Note that if you get this wrong, you will end up pasting a copy of the selection mask into a new layer – you have to choose the layer that you want to copy from after defining the selection with the mask!

  • Deselect the selection. Duplicate the new layer so that I can control the opacity of the blur.
  • Blur the lower layer 2 pixels.
  • Adjust the opacity of the upper layer until the desired level of blur is achieved.

  • Go to the selection mask layer. Turn on the mask’s visibility.
  • Select the mask. Turn off the selection mask layer’s visibility.
  • Select the working copy of the primary source.
  • Shrink the selection by two pixels.

As explained earlier, this creates a problem at the edges of the image, because the software doesn’t realize that the object continues beyond the part that is visible. You can see both the effect of shrinking the mask, and the problem, in this closeup:

So I need to add a new step to the process:

  • Correct the selection using the Polygonal Selection tool.

This shows the result:

Also notice the image corruption in the green as a result of the original image having been saved in jpg format! Every time the image is loaded and saved, this damage would grow worse as more and more of the image information gets discarded by the process of saving the image. You should ALWAYS work in a lossless, non-destructive image format, no matter what file format you ultimately intend to use. The best one for single layers is a .png; the best one for a project-in-progress is Krita’s own default format, .kra, because it also preserves the layers and their settings.

  • Feather the selection by two pixels
  • Cut and paste the selection into a new layer above the blurred layer.

With the core image of the tree removed, this is what is left. Notice that you can clearly see the edge of the tree that has been left behind. The image below shows a zoom of the new layer with the pasted tree in it.

The effect of the shrinking and feathering is that the edge of the tree just fades away.

  • Grow the selection by 1 pixel.

It’s because I knew this step was coming up that I didn’t deselect – and that’s why you can see the selection line (“marquee” is the technical term) in both the previous images.

  • Invert the selection.
  • Go to the blurred layer and hit the delete key.

A trio of images here: the selection marquee after growing the selection, the selection marquee after inverting the selection, both against the blurred layer, and finally, after partially deleting the edge of the blurred image.

Notice how the edge of the tree is both blurred, and at the same time, more sharply delineated than it was!

  • Deselect the selection.
  • Remove the selection mask layer.
  • Turn on all the layers of image that you have created using the selection mask.
  • If there’s no further manipulation of those layers of image, merge them together. Start with the layer on top of the first pasted layer, and hit Control-E. Wait a moment; a new layer will be created that combines the layer you chose with the one below it. Repeat until done.

Here’s the combination of all the pasted tree layers:

Next, it’s time to turn my attention to the monkey.

This takes all the problems of the tree and doubles or triples them. The difficulty comes from the back-lit fur along the back and chest, through which hints of the existing background are visible. We need those to become “hints of the new background are visible”, but that’s going to take some doing.

The best approach to this problem is to deal with it in two parts – the body and fur of the monkey that aren’t back-lit, and then the parts that are. For convenience, let’s call the first part the “body” and the second part, the “fringe”.

One of the ways this complicates matters is that I can’t cut the body from the working image; I need to copy-and-paste, so that the fringe is not disturbed.

The actual process is very similar to that described above; the one difference is that because this part of the image has to “marry” the tree properly, I can’t shrink and feather (that would create a gap), I have to simply feather by 1.

As before, I start by selecting the body and creating a mask.

The “body” is any part of the monkey that is certain to be opaque to the background, less a tiny bit for confidence in that certainty.

I then deselect and ensure a perfect selection using the mask, then copy and paste from the working image into a new layer. Then I feather the mask by one and copy-and-paste a second new layer on top of the first.

Next, I make a duplicate of the first layer, and then blur the bottom-most of the layers – the non-feathered one that was just duplicated – by 2 pixels. Finally, I play with the opacity of the non-blurred version until the amount of blur looks right. The opacity will be something very similar to that used for the tree.
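This duplicate-blur-and-fade recipe comes up often enough in this series that it’s worth a quick sketch. This is a Pillow approximation of the idea (blurred copy underneath, sharp copy on top at reduced opacity), not the Krita steps themselves; the file name in the usage comment is hypothetical.

```python
from PIL import Image, ImageFilter

def soft_blur(layer_rgba, radius=2, sharp_opacity=0.6):
    """A sketch of the blurred-copy-underneath trick: blur a duplicate,
    then lay the sharp original over it at reduced opacity. 'radius' and
    'sharp_opacity' are the two knobs being adjusted in the text."""
    blurred = layer_rgba.filter(ImageFilter.GaussianBlur(radius))
    sharp = layer_rgba.copy()
    # scale the sharp copy's own alpha down to the chosen opacity
    alpha = sharp.getchannel("A").point(lambda a: int(a * sharp_opacity))
    sharp.putalpha(alpha)
    return Image.alpha_composite(blurred, sharp)

# usage (hypothetical file name):
# body_layer = Image.open("monkey_body.png").convert("RGBA")
# softened   = soft_blur(body_layer, radius=2, sharp_opacity=0.6)
```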

Then comes the clever bit. I turn the selection layer back on and (temporarily) give it a composite mode of “erase”. This leaves the merest hint of an outline, thanks to the blur. The inner edge of my selection for the fringe has to stay inside that line.

Here’s the way the ‘line’ looks:

On a new layer, and using a different mask color (preferably one that isn’t in the original image and will stand out), I simply draw over the top of the line, having turned the working image back on. Then, I can start to get creative.

Using brush sizes as appropriate, I draw everything that’s going to be 100% opaque in the fringe. I then set the opacity of the brush to about 60% and draw everything that’s going to be about 50% opaque; A third pass with 30% brush opacity for the parts that are going to be only 25% opaque, and the fringe mask will be complete.

Note the ‘spots’ of color placed somewhere out of the way so that I can use the Similar Color Selection tool.

    A few tips:
    Zoom is your friend – you want smooth steady strokes with your brush. Most people can do this for a certain distance and then their brushstroke veers off in a strange direction, just by a little bit. Zoom the image so that the length of the stroke required is within your range.

    Undo is also your friend – if it’s not right, undo it right away and do it again.

    The goal isn’t to get it perfect, it’s to get it good enough that you can get away with it. Never forget this vital distinction. Maybe it’s not quite right, but it’s close enough, after all. This is not a technique that’s designed to be perfect, only good enough.

    Practice at speed – not only does the job go faster, but your brush strokes are much smoother at speed than slow and not-so-steady.

    If it sometimes feels like you are hand-painting each individual hair, it’s because sometimes you are, as the zoomed-in image of the mask makes clear.

    Finally, don’t be afraid to use your select similar colors tool and delete button to tweak the final result (after copy-and-pasting).

I can then use the similar color selection tool to take advantage of the ‘flaw’ in the way it works so that the opacity of the copied image matches the opacity desired – I just select a part that I know to be 100% opaque color. I will sometimes add a ‘spot’ in the center of the mask for that very purpose.

Using the selection mask in this way, I copy the fringe from the working image into a new layer, then shrink and feather by 2, and copy and paste a second layer below the first. I then drop the opacity of the top layer to about 50%.
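Conceptually, the painted mask is acting as an opacity ceiling on the copied fringe – the 100% / 60% / 30% brush passes become the maximum opacity of the hair that gets copied. A minimal Pillow sketch of that idea (not the actual Krita workflow, and the file names are hypothetical):

```python
from PIL import Image, ImageChops

def copy_through_mask(source_rgba, mask_rgba):
    """A sketch of copying the fringe 'through' the painted mask: the mask's
    own opacity caps the opacity of the copied hair. Both images are assumed
    to be the same size, since the mask was painted over the working image."""
    out = source_rgba.copy()
    # keep whichever alpha is lower - the source's own, or the mask's
    alpha = ImageChops.darker(source_rgba.getchannel("A"),
                              mask_rgba.getchannel("A"))
    out.putalpha(alpha)
    return out

# fringe_layer = copy_through_mask(Image.open("working_copy.png").convert("RGBA"),
#                                  Image.open("fringe_mask.png").convert("RGBA"))
```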

It’s possible to go one step further, using the two masks, and the select similar colors tool in intersect mode, to grab just the extreme highlights, since one of the defining characteristics of back-lit hair is that it is near-white, but I’ll save that for when I’m working on making the monkey blue.

Here’s the completed monkey extracted from the source image, posed against a dark green background.

Step 2 – Expand the canvas width to about 250% of what it already is, and use copies of parts of the tree with suitable edits to extend the branch outward into the enlarged space.
Sidebar: Extending images, Focal Point, and visual flow

Extending images is never as easy as I make it seem in this section. Not only do you need to have the capacity to fill the expanded area with content, you need that content to match the rest of the image in detail, contrast, and color, which means that you need a source for the additional content. That’s the second primary requirement (I’ll cover the first shortly).

Thirdly, you need the edges to match – it’s really hard to have part of the image derived from a light background and part from a dark background. It’s not impossible to overcome this problem, but it doubles or triples the workload.

Fourthly, you need to consider the focal point of the image. There are two basic structures to most images:

I’ve boiled everything you need to know down into the five figures in this diagram.

Figure 1 shows a square image. The focal point, unsurprisingly, is in the center.

Figure 2 shows the second major layout used in good image composition, again on a square ‘canvas’, with the focal point located 2/3 of the way across the image, but stretching back toward the middle of the image. Figures 2a, 2b, and 2c show that any mirroring or rotation of this arrangement is also valid, something that is true of every subsequent figure (even though they aren’t shown explicitly).

Figure 3 takes us to a rectangular image for the first time, and brings up the “golden ratio”. No-one knows exactly why it works, but these relative dimensions – roughly those of a postcard – are naturally pleasing to the eye (if that’s all there was to it, it could be written off as a function of human psychology, but the same ratio keeps showing up in strange places in mathematics, which should be objectively independent of human perceptions). Figure 3 itself shows the same focal point positioning as figure 1, but note that the circular focal region is slightly stretched by the longer axis. Again, it doesn’t matter if the image is landscape as shown, or in portrait orientation – that’s just a rotation of the layout. I’ve exaggerated the dimensions of the image a little for clarity.

Figure 4 contains a slight error, for which I apologize – the yellow “egg” is not quite vertically centered the way it should be. It shows the application of the 2/3-1/3 ratio to a rectangular shape. This particular arrangement is important because such layouts usually deal with the relationship between the primary focus (in pink) and a secondary focus (the yellow zone). This is the layout that I’m going to employ for the Blue Monkey composition, which will make the image not just about the primary focus (the monkey) but about the environment in which the monkey can be found (the secondary focus).

Figure 5 is an afterthought. It may have occurred to some readers that the many screen resolutions around these days are usually NOT in the golden ratio, leaving them to wonder what happens in such cases. The answer is that the short axis dominates; dividing it by 3 and multiplying by 4 defines a part of the image about which all the usual design and layout rules still apply. Anything outside that zone is considered a ‘fringe’ that contains no content of relevance – and which is usually ignored unless a deliberate effort is made. The zone can be positioned to the right of the overall image, or in the center, or to the left; it can even shift, depending on what we are paying attention to – for example, if there’s an attention-getting icon at the top left of the screen, the zone of attention will include that icon, and the natural tendency will be to have less awareness of the right-hand side of the screen. The focal point will then relate to our perception of the wallpaper image. (Game designers take advantage of these phenomena all the time).

The fifth factor to take into consideration is the composition of the final image, which relates to the dark-vs-light areas of the image as much as anything else (you can do this stuff with color but that’s a lot harder). For those who read left-to-right, the natural tendency is for the eyes to enter an image at the top left and proceed to the right until something is encountered that redirects attention. When that happens, we follow the line of contrast down until meeting another. If we don’t find such an area of contrast, the eye tends to fall off the image – which can be useful in a comic book panel, but is otherwise undesirable. The goal is more commonly to direct the gaze continually back to the focal point, preferably by way of the secondary focal point, if any. Of course, if your language reads right-to-left, that is the way your eyes enter an image – producing something satisfactory to both groups of cultures is incredibly difficult, but it can be done. To analyze any image, squint at it, and you will find it blurring, losing detail but permitting the broad shapes – and the visual clues they provide – to become more readily apparent. There’s a lot more to this subject, but this gives you a basic grounding.

But by far the most important consideration is always “why?” Rule Zero applies not just to the editing of the image overall, but also to each major edit performed. This should always be your first consideration – defining a specific objective or reason for making this particular change.

In the case of “Blue Monkey”, I looked at the composition. The face of the monkey is the primary focus, because we’re naturally programmed to pay more attention to the identification of individuals. The original image works because the monkey’s face is pointed at the tree and that leads the eye to the tree-branch*, which leads us to where the monkey is sitting, which leads us back up his body to the face. Our attention is thus focused on the middle and right-hand side of the image, and the left is largely irrelevant.

* Okay, technically, the texture of the tree tries to pull the eye down out of the image – but notice the area of darker wood on the right? The eye gets pushed away by that until it encounters the horizontal rows of knots, which point the eye at the tree limb.

All that changes when an attention-getting change like blue fur is introduced. That becomes the focal point, because it’s unusual, and that pulls the eye downward to the tree limb, and then left – and out the bottom of the picture. To combat this, I need to make the background more important so that I can use it to lead the eye back to the focal point. I need room to make that happen, so I want to shift the layout from that of Figure 3 to that of figure 4. I will need a visual barrier to push the eye upwards past the tree limb, and horizontal layering within the background to pull the eye back to the right afterwards.

There’s a little more to this step than this indicates. Careful use of the palette knife and smudge soft brushes will be needed to ‘connect’ the two, and I’ll use these brushes to sketch out a general impression of the desired shape of the limb extension – from a copy of the tree layer. Use select to prevent disturbing anything you want to keep.

Before I can do that, I need to flatten the tree layers into a single layer. Start from the bottom layer, go up one layer and merge down until the process is complete. You may be tempted to simply group them together and then flatten the group – this way lies trouble, because not all composition modes are respected within a group.

The next part of the process is to copy and paste parts of the real tree that can be distorted, twisted, rotated, or shaped to fit.

To start with, the results don’t look all that impressive – there are obvious transitions where one part of the Frankenstein’s monster has been stitched to another:

These problems stem from three sources:

  • The textures are at different scales because of the distortions;
  • Lighter sections are abutting darker sections with no transition;
  • There are no blended transitions from one section into the next.

To solve this, following the approximate grain of the wood, I will copy and paste the endpoints of each section, move them, rotate them, but not resize them, deleting anything that doesn’t fit, then fade them out. I will also select dark areas and light areas and copy-and-paste those specifically into other sections of the tree-limb.

About 90 minutes later, I have this:

The tree limb was too long to show at anything close to full-size in a single screen-capture – I’ve had to use three.

Step 3 – Design and implement the Monkey color-change.

This is the most important part of the process, because this is what the image is supposed to be all about.

I have several different methods in mind; when that happens, I usually try one and see what the results are, then try the next one only if the previous one was unsatisfactory.

  • Method 1: select the body AND fringe masks, fill a new layer with a mid-toned blue, set composition mode to color.
  • Method 2: select the whitest parts of the body + fringe, copy and paste into a new layer, then fill with a pale powder blue in a layer below the pasted highlights, set composition mode to multiply, adjust opacity.
  • Method 3: color adjustment curve to increase the blue content of the dark and mid-tones, especially the latter.
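For readers who like to see the arithmetic behind these recipes, here are rough NumPy equivalents of Methods 2 and 3 as I’ve described them. They are conceptual sketches only – the powder-blue value, the opacity, and the curve shape are guesses to be tuned by eye, and Method 1 (a flat blue fill in the Color composite mode) is easier to try directly in the editor than to approximate here.

```python
import numpy as np

def method_2(rgb, powder_blue=(0.75, 0.85, 1.0), opacity=0.8):
    """Method 2, roughly: a pale powder-blue fill in Multiply mode,
    faded back by 'opacity'. rgb is a float array in [0, 1]."""
    multiplied = rgb * np.array(powder_blue)
    return rgb * (1 - opacity) + multiplied * opacity

def method_3(rgb, boost=0.25):
    """Method 3, roughly: push the blue curve up in the dark and mid-tones.
    The 4*b*(1-b) bump peaks in the mid-tones and tapers off toward both
    black and white; the exact curve shape is a matter of taste."""
    out = rgb.copy()
    blue = out[..., 2]
    out[..., 2] = np.clip(blue + boost * 4 * blue * (1 - blue), 0, 1)
    return out
```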

There are also obviously a number of combinations; I might like the look of Method 1 with a highlights layer as per method 2, for example. I might combine all three at different opacities and in different orders.

One thing that I will be doing in all methods is fading out the modified version to preserve the original pink of the muzzle, because I don’t think the creature will look realistic enough without that.

Method 1 turned out to need a darker blue than I originally thought I would need. Unfortunately, it looks like someone has dyed the hair of the poor ape a shade of electric blue.

This didn’t have as much effect as I was hoping it would. It’s just not quite blue enough.

The lighter-toned sections of this version are very good, especially when combined with the highlights from version 2.

I think that I will use a blend of all three methods. Highlights from Method 2, then the light tones from Method 3, then the middle tones from Method 1 (probably reduced in opacity), all over the top of Method 2. I want the blueness but not the garishness of Method 1, in other words!

I’m not sure of the best composite mode for these different layers, or the opacity. I may end up with several copies of the extract from method 3 – one a low-opacity Addition, and one at middle-to-high opacity in Normal mode, or perhaps Allanon, or even Multiply and Addition in combination!

As per rule zero, I have a clear objective in mind, and so I can play around, keeping anything that takes me closer to that goal and ignoring anything that doesn’t.

    Hair and Fur Headaches

    It’s relevant to the business end of this part of the process, so it’s time to talk in a little more depth about making hair and fur look realistic.

    Have you ever looked closely at hair that is going gray? If you have, you will have noticed that the hair is not consistent in color. Some hairs are still dark, some are light / white, some have dark roots or light roots, and no two hairs are precisely the same in color.

    Once you’ve noticed that, you will soon discover (if you didn’t know it already) that monochrome hair always looks fake. That’s why the commercials for hair-coloring products try to emphasize ‘natural color,’ and that’s what they mean by the phrase – they mean that hair colored with their product will look natural, with realistic highlights and variations in shade and tone.

    Beyond that consideration, some hairs will cast shadows onto others, producing still more natural variation. It’s almost impossible for an artist to spend too much time on getting hair right.

    As a general rule of thumb, any body of hair should have a dark element, and a light element, and a mid-tone element, and natural highlights and shadows in each.

    Which of these is dominant depends on what is supposedly behind the hair or fur. The fringe in the case of the monkey is against a darker background than the fur, so it’s all about the light hairs, with the others fading into the background. But in some parts, the background is lighter even than the fur – which causes the darker hairs to stand out more.

    This, of course, explains what is wrong with the “blue monkey” transformation that resulted from Method 1.

In order to separate out the pieces I want from each of the transformations, I need to use the Similar Color selection tool, then copy-and-paste. This can be trickier than it seems, because you have two variables to contend with: the color range selected, and the base color on which you pick.

The first is controlled by a slider in the upper right labeled “Fuzziness”. The smaller the value, the more closely a color has to match the base color in order to be selected.

Too low a value, or too extreme a base selection, and not enough of the similar colors will be selected (though you can always add to your selection, you can’t determine how strongly an individual color has been selected – remember the demonstration with the clouds, earlier?). The solution is to make the color ‘fudge’ as large as you can get away with, often with a bit of trial and error and educated guesswork. You can make life easier by having a low-opacity version of the modified base image underneath the selected components. Using a single-pixel feather and then shrinking the selection by a pixel can also solve a number of problems.

I frequently work with a fudge of 5 when using the similar color selector. I will sometimes use 3, or 7, or 10, and – in certain circumstances – 0 or 1.

But there’s a complication. Remember the image damage caused by the saving of the file as a jpg? Those are variations in color that aren’t there and aren’t wanted – but I don’t want them appearing as holes in the selection, either. The best answer is to choose a fudge high enough to include them, then manually edit the image to repair the damage. The selection mask prevents your edits from extending beyond the part of the image you are actually working on.

But a color range that broad can also pick up all sorts of unwanted colors as well. So you have mutually contradictory imperatives to satisfy. My practice is to go for a color range that is just a little too small, and use the addition tool to compound multiple selections. If I have to, when I look at a first attempt, I will then go to a color mask to achieve complete capture of the desired parts of the image.

It’s now 20 minutes later, and I’m satisfied. Below, I’ve curated the layers, viewed three different ways: In isolation, in closeup, and against a dark green background (because you saw earlier how illuminating that could be).

From bottom to top:

Layer 0: Base Image (for reference purposes) 100% opacity, Normal mode. Notice that the fur consists of light over mid-tones over dark over more mid-tones – no matter how simple it looks in the image on the left, the detail is incredibly important in achieving plausibility.

Layer 1: A copy of Method 2, 100% opacity, Normal mode – this is the actual base image being used, leaving Layer 0 as redundant.

The lightest shades are a sort of sky blue, the mid-tones are a slightly purplish-slightly grayish slightly dark blue. A lot of the detail and nuance have been washed out.

Layer 2: A copy of Method 1, 41% opacity, Normal mode – this shifts the base image slightly bluer – the darker the tone, the more it gets shifted.

Layer 3: Mid-tones from Method 1, 45% opacity, Normal mode, selected with Fuzziness 5 and feathered 1 pixel – this shifts the mid-tones even more toward the royal blue.

Layer 4: Light tones from Method 3, 100% opacity, Normal mode, selected with Fuzziness 5 and feathered 1 pixel. The opacities of Layers 2 and 3 were adjusted so that the results would blend well with this layer, color-wise.

Notice that against the transparent background, it just looks like a mess, but as soon as the dark background is deployed, it becomes a lot more coherent.

Layer 5: Highlights from Method 2, 68% opacity, Allanon mode, selected with Fuzziness 5 and feathered 1 pixel. This lightens and brightens the highlighted sections while permitting the blended color of the earlier layers to show through – just a little.

Because the color is slightly darker (because of the feathering), it’s easy to overestimate the opacity. The dark background shows the truth.

Layer 6: Another copy of Method 1, Opacity 60% Grain Merge mode – a little tweak of the colors, harmonizing and blending the layers beneath.

Layer 7: Dark tones from Method 2, Opacity 63%, Multiply mode, selected with Fuzziness 9 and feathered 1 pixel. Part of the effect of all the preceding layers was to wash the contrast out a little; this layer not only intensifies the blue color of the darker areas, it restores that contrast (and maybe even enhances it a little).

Because multiply makes things darker, I’ve deliberately lightened up the background so that the shadows can be seen clearly in the third panel. Most people, when they look at this, will assume that it’s light paint over a darker base color; in reality, the base color is the brighter green and the shadows are the contribution of this layer.

Layer 8: Yet another copy of Method 1, Opacity 33%, Allanon mode. A color tweak post contrast-enhancement, softening the harshness of the shadows created by Layer 7 just a little while shifting the non-dark areas just a little more to the blue.

Again, the image on the left makes this look like a more dramatic adjustment than it really is. The dark-background panel gives a more accurate perspective.

Layer 9: This is the original highlights layer selected from the base image, as described earlier. 100% opacity, Normal mode. Remember that I was very restrictive in choosing color similarity for this layer – it’s almost white.

You may notice the rather obvious darker stripes that appear to be running vertically though the image in the left two panels – these are actually optical illusions, as the dark-background panel makes clear. They also vanish when the overall image is composited. When I first observed this effect, I spent quite a bit of time investigating it, and discovered that this is another example of the human eye detecting patterns that don’t actually exist.

Layer 10: The last layer is a copy of Layer 9 that I have blurred 1 pixel, 39% opacity, Addition mode. The highlights from Layer 9 looked too stark, too severe, and didn’t quite blend. After trying various combinations of Opacity and Compositing Mode with layer 9 (and finding none of them satisfactory), this was my solution – a means of blending those highlights with the underlying image.

Looking at the first two panels, you could be forgiven for thinking they were empty, devoid of content; but the dark background reveals all.
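For reference, the composite modes used in this stack boil down (for channel values normalized to 0–1) to simple per-channel formulas – at least as I understand them; check Krita’s documentation if the exact behavior matters. ‘b’ is the layer below, ‘t’ the layer on top:

```python
import numpy as np

# Per-channel formulas for the composite modes used in this layer stack,
# as I understand them (values normalized to 0-1).
def multiply(b, t):     return b * t                      # always darkens
def addition(b, t):     return np.clip(b + t, 0, 1)       # always lightens
def allanon(b, t):      return (b + t) / 2                # simple average of the two
def grain_merge(b, t):  return np.clip(b + t - 0.5, 0, 1) # shifts tones both ways
```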

So, let’s put it all together. Below are a series of screenshots as the layers are turned on, one after another – again, a whole-of-monkey impression and a zoom panel. This is a BIG image file, it will take a while to load!

Something that you should always do before considering a step complete is to review the compiled image. Doing so in this case showed that the efforts to save the fringe had produced an unwanted side-effect where monkey met tree image: a bright blue halo:

Fortunately, this is easily corrected, because I was very careful in working the tree (and had no such problems). It was a two-step process:

  1. Move the tree layer to be in front of the monkey; and then,
  2. Create a copy of the tree layer behind the original and blur it 1 pixel.

This covers the unwanted halo with tree and blends the pixels at the boundary together to unite the monkey and the tree seamlessly.

Step 4 – Position the distant background.

Steps 4 to 11 may comprise 2/3 of the list of steps, but they are far less involved, and so should go much quicker.

The one big decision remaining is to decide where the horizon line is going to be. This only has to be rough, because it will be covered over with midground vegetation.

If I position the horizon line in the middle of the image, it says that the monkey is roughly at eye height. If I raise it up, say in line with the monkey’s eyes, it suggests that we are looking up at the unusual creature; if I lower it, the impression is that we are looking down on it, which doesn’t seem right at all.

But I want the top of the midground to fall at about the 2/3 mark up the page because that will make for good composition, as discussed earlier – and that means that the horizon line has to be below that, so that the midground can cover it! So that means that it has to fall somewhere in between 1/3 from the top and half way down the page.

There’s a tuft of fur on the monkey’s back – it would be astonishing to the point of improbability if the horizon line just happened to perfectly line up with it. A tiny bit higher up or lower down is far more visually plausible; most people won’t notice the difference, but will find the image more credible without knowing why.

Taking everything into account, one consideration at a time, has narrowed the boundaries within which the horizon line should occur to a very small range. It doesn’t matter too much where in that zone it actually falls, because the intention is to cover it up, anyway.

The dimensions of the distant background are such that almost half the image is off the top of the canvas if I position the bottom near that horizon line – and it won’t go anywhere near all the way across the area to be filled. So I break it up into two parts, then duplicate the one that was the original bottom of the image and mirror it horizontally, then move it across to the right-hand side of the canvas, where it will mostly be covered by the tree-trunk. I also increased the size of the original bottom section a little. That gives me this:

If you look closely, though – there’s a problem: the three parts are not very seamless. The right-hand boundary isn’t bad, but the left-hand one needs some work. Using the Outline Selection Tool, I copied and pasted three patches of background – two from what is now the central panel, and one from the left-hand panel.

The topmost of these was set to a Multiply composition mode and the opacity adjusted so that the result matched fairly closely to the corresponding part of the left-hand panel. The lower-right one received the same treatment, but also needed to be darkened a fair amount to match. Finally, the bottom left patch was partially covered by the middle panel – there was some overlap because of the way I positioned them (not by accident); this now covers the seam between the panels.

As you can see from this closeup of the central panel, these quick tweaks have made a tremendous difference:

It’s still not quite perfect, but it’s close enough for some manual editing – a little brushwork and some Smudge is all that’s needed.

Once that’s done, I merge the layers down, duplicate the resulting layer, reflect it both horizontally and vertically, and apply a lot of lens blur. This is background that’s supposed to be below the horizon line, but it’s only there in case there’s a hole in the midground. It’s essentially ‘noise’ that matches the color profile of the actual distant background:
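A quick Pillow sketch of that ‘matching noise’ layer, for anyone who wants to script it: flip the merged background both ways, then blur it heavily so only its overall color profile survives. Pillow has no lens blur, so a Gaussian blur stands in for it here, and the file name and radius are placeholders.

```python
from PIL import Image, ImageFilter, ImageOps

def background_noise(distant_bg, blur_radius=40):
    """A sketch of the 'matching noise' layer: flip the merged distant
    background horizontally and vertically, then blur it heavily so only
    its color profile remains. The radius is a placeholder - tune by eye."""
    flipped = ImageOps.flip(ImageOps.mirror(distant_bg))   # vertical + horizontal flip
    return flipped.filter(ImageFilter.GaussianBlur(blur_radius))

# noise_layer = background_noise(Image.open("distant_background.png"))
```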

Step 5 – Position and assemble the midground.

It was always anticipated that the midground would not be large enough horizontally to fill the canvas space. To fill it, I used the Outline selection tool to copy a portion of it, then resized that copy, mirrored and resized a second copy, and added a fourth copy somewhat smaller in size, positioned behind the others.

It took about five minutes to get this:

With the distant background turned on:

Notice the hole right in the middle of the image! Fortunately, I had created the blurred mirror image of the far background. Turning that on:

Step 6 – Resize the distant background.

Sometimes, though, you can anticipate problems that don’t arise. Compared to the midground, the background suddenly seems slightly out-of-focus, creating an impression of depth; I had anticipated the need to shrink the background and blur it to create this effect, but it wasn’t necessary.

It is also worth noting that the distant background is a little darker than the midground; this adds to that impression of distance. To emphasize it a little more, I slightly darken the distant background.

Step 7 – Assemble the foreground. Use the size of the distant background to dictate the new left-hand side limit of the image.

The midground doesn’t look quite realistic at the bottom of the image on the left for some reason – probably a slight difference in perspective and consequent misalignment of the horizon lines between the pieces of midground. That’s fine, that’s what the various foreground pieces are intended to overcome.

While positioning these, I’ve made a couple of changes to the original plan. In particular, the elephant’s ears have been moved to be in front of the tree, and one of the other tropical plants has been cropped out. It’s now very clear from the positioning just where the left-hand edge of the finished image will be.

That means that the next step is to crop the image.

The positioning of foreground elements makes the layout approach that I always had in mind fairly clear – they form a definite frame around the focal point of the image.

Step 8 – Further Resize and possibly blur the distant background.

Time for some fine tuning. The tree and the mid-ground are at similar levels of detail, and that doesn’t work – it forces the two to appear as though they were in the same plane, i.e. the same distance from the viewer.

So the midground needs to be blurred and possibly darkened – without impacting on the highlights too much. That calls for a duplicate layer with a Multiply composite mode, and tweaking the saturation, lightness, contrast, and opacity of that multiply layer. In addition, I don’t want all of the detail to be lost – so that means duplicating the layer, dropping it underneath its parent layer and then blurring it, then controlling the opacity of the sharper image.
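As a rough illustration of that layer stack, here’s what the flattened result amounts to in Pillow terms. This is a sketch under my own assumptions – the file name, the blur radius (discussed below), and the opacities are placeholders, and a real Krita layer stack gives you far more interactive control than this:

```python
from PIL import Image, ImageChops, ImageEnhance, ImageFilter

# Placeholder file name - any exported midground will do.
midground = Image.open("midground.png").convert("RGB")

# 1. Blurred duplicate underneath, sharp copy on top at reduced opacity.
#    Flattened, that is just a weighted mix of the two.
blurred = midground.filter(ImageFilter.GaussianBlur(radius=12))
sharp_opacity = 0.35                       # how much of the original detail survives
softened = Image.blend(blurred, midground, sharp_opacity)

# 2. Multiply duplicate to push the mid-tones and shadows back, with its own
#    saturation / brightness / contrast tweaks and its own opacity.
darkener = ImageEnhance.Color(softened).enhance(0.85)        # saturation down a touch
darkener = ImageEnhance.Brightness(darkener).enhance(0.9)    # slightly darker
darkener = ImageEnhance.Contrast(darkener).enhance(0.95)     # slightly flatter
multiplied = ImageChops.multiply(softened, darkener)
layer_opacity = 0.5
result = Image.blend(softened, multiplied, layer_opacity)
result.save("midground_pushed_back.png")
```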

How much blur? The image is now 5028×1483 pixels – right at the limit of what my computer can handle. With width the defining dimension, for in-game use that would drop to 1400 wide, or 27.84% of the current scale. CM use is limited to 556 wide, which is just 39.7% of that reduced-scale image. Put those two numbers together and the final CM image is only about 11% of the working scale – so to make one pixel of difference at that size, my blur radius needs to be about 9 pixels (1 / 11% ≈ 9). One and a half pixels would therefore be a radius of 13 or 14, and two pixels would be 18.

If the larger scale is the goal – and that’s the approach that I’m using for all these images, generating them as if they were for one of my own campaigns – I don’t need to be so severe: 1 / 27.84% ≈ 3.6, so a blur radius of 4 gives about one pixel of difference at the final size, 6 gives one and a half, and 8 gives two.
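That arithmetic generalizes to a one-line calculation. This little helper is my own, not part of any image-editing toolchain, but it reproduces the numbers above:

```python
def working_blur_radius(working_width, output_width, desired_output_blur_px):
    """Blur radius to apply at working scale so that roughly
    'desired_output_blur_px' pixels of blur survive the final resize."""
    scale = output_width / working_width          # e.g. 1400 / 5028 = 0.2784
    return round(desired_output_blur_px / scale)

# The numbers from the text, as a sanity check:
print(working_blur_radius(5028, 1400, 1))    # 4  (in-game scale)
print(working_blur_radius(5028, 1400, 2))    # 7  (the text rounds up to 8)
print(working_blur_radius(5028, 556, 1))     # 9  (Campaign Mastery column width)
print(working_blur_radius(5028, 556, 2))     # 18
```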

It’s very likely that if I blur and adjust the midground this way, I will have to be even more extreme with the background. So I need to leave scope for that, too.

The best technique when you aren’t sure is to set up one adjustment for each of them, then play around with the opacities.

There’s a trick – or perhaps call it a technique – when it comes to doing this sort of thing: get the highlights right first, then use the contrast and brightness curve controls to get the shadows and mid-tones right. It’s also worth remembering that distant objects are slightly bluer than those close at hand, so a slight adjustment of the color curve, or of the color setting in Filter > Adjust > HSV Adjustment, can also enhance the effect that you want to achieve.
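If you would rather see that blue shift as numbers than as a curve dialog, here’s one crude way to do it programmatically – a sketch only, with a file name and a 6% adjustment that are my own placeholders rather than values used in the project:

```python
from PIL import Image

def push_toward_blue(im, amount=0.06):
    """Very small per-channel rebalance: distant objects read as slightly
    cooler, so nudge blue up and red down by a few percent."""
    r, g, b = im.convert("RGB").split()
    r = r.point(lambda v: int(v * (1 - amount)))
    b = b.point(lambda v: min(255, int(v * (1 + amount))))
    return Image.merge("RGB", (r, g, b))

distant = push_toward_blue(Image.open("background.png"), amount=0.06)  # placeholder file
distant.save("background_cooler.png")
```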

As usual with this sort of operation, you adjust one thing and find that something else needs modification as a result. What you see above is the end result of considerable filtering. I ended up using blur 12 for the midground and blur 8, twice, for the distant background. Both parts got Multiply layers, with adjustments to darkening, saturation, contrast and brightness, and to the blue curve. I also decided to apply a lens blur to the vegetation in front of the tree, so that only the tree and the monkey are in perfect focus.

Step 9 – Ghost Leaves to fill any voids

Having verified that there are no voids, this step can be ignored.

Step 10 – Coconut leaves left.

I did this as part of the foreground image, so there’s no need to do it now.

Step 11 – Coconut leaves right.

And I decided against doing this.

Step 12 – Final review of the composited image.

I skipped ahead a little (it’s hard to stop when you get on a roll) and made a couple of final adjustments before saving the image above – little adjustments to the shape of the tree near the monkey’s head, mostly. They were made because the fringe at the top of his head was just a little too prominent and attention-getting; I wanted to tone that back just a little.

The final steps, as usual, are to resize the image to the desired scale (1400 wide, in this case), flatten it, copy the result, sharpen it, and reduce the opacity of the sharpened layer until the right balance is achieved.

Here’s the finished image (it won’t look very different to what you’ve already seen); click on this small version to open the full-sized result in another tab.

Click on the image for the full-sized version.

Extra Topic: Star-field Trickery

Before I sign off from this post, though, there are a couple of side-issues to bring up.

Here’s a 100%-scaled extract of a gloriously-detailed night sky:

If I reduce the zoom to 50%, the results are still usable.

At 25%, detail is being lost. Each of the pixels that was once a bright point in the sky has been averaged with the darkness on all four sides of it.

And if I reduce the entire star-field image from its starting size of 3840×2160 to fit the available space here at CM, the loss of detail is profound.

What looks brilliant during compositing can become a flat black bereft of detail when an image is resized to its intended resolution.
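The effect is easy to demonstrate with a few lines of numpy – this is area-averaged scaling, which is roughly what a good downscaling filter does:

```python
import numpy as np

# A lone bright star (255) in an otherwise black 4x4 patch of sky.
patch = np.zeros((4, 4))
patch[1, 2] = 255.0

# Area-averaged downscaling: at 50% each 2x2 block becomes one pixel;
# at 25% the whole 4x4 patch becomes a single pixel.
half = patch.reshape(2, 2, 2, 2).mean(axis=(1, 3))
quarter = patch.mean()

print(half.max())    # 63.75   - still visibly a star
print(quarter)       # 15.9375 - effectively swallowed by the darkness
```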

Because of this effect, it’s often better to create your own star-field, using zoom to compensate for the extra scale on the canvas – if you are working at 200% canvas size relative to your intended finished image, zoom to 50% so that you see the image as it will be when compositing is complete.

If you really need to, you can use a duplicate layer and addition composite mode to restore some of the lost contrast:

Alternatively, you can sharpen the image:

If you do, then multiplying with an un-sharpened version and controlling the opacity will give you some control over the depth of the star-field. At 100% opacity:

At 50% opacity:

And at 23% opacity:
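Both rescue techniques reduce to very simple compositing math. Here’s a Pillow sketch of them – the file name and the UnsharpMask settings are my own assumptions, not the exact filters I used:

```python
from PIL import Image, ImageChops, ImageFilter

stars = Image.open("starfield.png").convert("RGB")   # placeholder file name

# Option 1: duplicate layer in Addition mode - brightens everything that
# isn't pure black, pulling faded stars back out of the background.
boosted = ImageChops.add(stars, stars)
boosted.save("starfield_addition.png")

# Option 2: sharpen, then multiply with the un-sharpened original and
# control the mix - lower opacity keeps more of the softer 'depth'.
sharpened = stars.filter(ImageFilter.UnsharpMask(radius=2, percent=200, threshold=1))
multiplied = ImageChops.multiply(sharpened, stars)
for opacity in (1.0, 0.5, 0.23):
    mixed = Image.blend(sharpened, multiplied, opacity)
    mixed.save(f"starfield_depth_{int(opacity * 100)}.png")
```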

Actual Starfields

Nevertheless, there will be times when you need to use an actual star-field because it contains some object of interest that you can’t simply composite in. This could be the Crab Nebula, the planet Earth, the rings of Saturn, the International Space Station, or a black hole – there are numerous possibilities. Your immediate problem is that getting that object to the correct visual size also renders the stars at a particular size and density, and you then have to match that in any part of the image that the source doesn’t cover.

The best solution that I have found is to

  1. Start with an image that is already at something close to the correct scale.
  2. Create a temporary copy of that image and expand it to the working scale that you are using. This shows you the stars and their density – in other words, what you have to match.
  3. Either use an existing star-field or create one of your own, if you aren’t worried about the constellations being recognizable. You may need several at different scales before you find one that’s anywhere close to a correct match; for this reason, I keep several on file that I pull out as necessary.
  4. Do all the rest of your compositing.
  5. Reduce your image size to your intended size.
  6. Replace the temporary copy of the star-field with the real thing. You will usually notice that the two aren’t quite the same – the process of expansion and then contraction does funny things to the sharpness and clarity.

One of the most common mistakes that I see (often because I’ve made the mistake myself, I must admit) is stars that look too big or too small, too many or too few. All of these affect the ultimate composition and what the visual tells the viewer about their point of view. It takes a surprising amount of effort to get this right, and sometimes (when you’re in a hurry) you will have to live with imperfections. That’s a problem that can be minimized with the process described above.

Manufacturing Starfields

It’s incredibly tempting to start with black and add colored stars – red, yellow, greenish, blue-white, and so on. No, no, no!

  1. Start by designing the composition of the image – what is going to be where, and how it will visually flow from one element to another.
  2. Then create a black background – and fill it with any illuminated dust clouds and anything else that is to go behind the stars.
  3. Think about the shape of the stellar neighborhood – is it a galactic arm? Where will the stars be thickest?
  4. In a fresh (transparent) layer, create your star-field. I DON’T recommend using the “SFX star-field” brush for this, because the results are too light and too small – if you follow the usual technique of working large-scale to shrink your mistakes, you will also shrink the stars into non-existence. Instead, use the splat brush, and vary the brush size until you get the star-field populated. Keep an eye on the size of the splats and how they will look when the image is reduced in size. Do these in red, yellow, blue-white, etc, and populate to about 200% of the stellar density that you actually want (see the code sketch after this list).
  5. Make a copy of the star-field layer and blur it just enough that the blur will still be visible when the image is reduced in size. Then move it to behind the original star-field.
  6. Select your original star-field and turn the lightness up nearly all the way using the HSV adjustment. You want the color to be just a hint at the fringe of the stars.
  7. In a new layer, use various sponge brushes to create clouds of very dark blue and black to obscure the stars you don’t want. Use multiple layers if you have to. Adjust the opacity of each individual layer so that some stars just barely show through. I will often have some of these layers set to multiply and occasionally will use white ‘dust clouds’ set to subtract. Sometimes, the airbrush tools can also be useful in this regard, and I’ve had some success with using the Soft Smudge and Palette Knife to create swirls and textures within the clouds.
  8. Any planets or objects usually go in layers on top of this star-field – but if that doesn’t work, you can always set them behind and use an erase brush to ‘reveal’ them. This can help enormously in achieving star-size parity throughout the image. Don’t forget the dark side of the planet or object – it will still be there, obscuring stars!
  9. Most of the tools that you have used will put paint beyond the edges of your canvas. This can be incredibly inconvenient for any number of reasons, including that only those parts on the canvas are affected by any menu transformation effects. So crop your image to its full size to get rid of these extras.
  10. That creates your artificial stellar background – everything else goes into the foreground/midground layers, which go on top of the star-field.
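For the programmatically inclined, here is a very stripped-down sketch of steps 2 to 6 of that recipe – random colored ‘splats’ standing in for the splat brush, a blurred copy behind them as the glow, and a lightness boost on the sharp copy. All of the sizes, counts and colors are arbitrary placeholders of mine; the cloud layers of step 7 are left as an exercise:

```python
import random
from PIL import Image, ImageChops, ImageDraw, ImageEnhance, ImageFilter

WIDTH, HEIGHT = 2048, 1024          # work larger than the finished image
STAR_COLORS = [(255, 240, 220), (255, 210, 160), (200, 220, 255),
               (255, 255, 255), (255, 190, 190)]

# Step 2: black background (illuminated dust clouds would be painted onto this).
background = Image.new("RGB", (WIDTH, HEIGHT), (0, 0, 0))

# Step 4: the stars on their own layer, over-populated relative to what
# will survive the clouds and the final shrink.
stars = Image.new("RGB", (WIDTH, HEIGHT), (0, 0, 0))
draw = ImageDraw.Draw(stars)
for _ in range(4000):
    x, y = random.randrange(WIDTH), random.randrange(HEIGHT)
    r = random.choice((1, 1, 1, 2, 2, 3))            # mostly small, a few larger
    draw.ellipse((x - r, y - r, x + r, y + r), fill=random.choice(STAR_COLORS))

# Step 5: a blurred copy that sits behind the sharp stars as a soft glow.
glow = stars.filter(ImageFilter.GaussianBlur(radius=3))

# Step 6: push the sharp copy's lightness up so that color is only a hint
# at the fringes of each star.
bright = ImageEnhance.Brightness(stars).enhance(1.8)

# Flatten: background, then the glow, then the brightened stars (Addition-style).
sky = ImageChops.add(background, glow)
sky = ImageChops.add(sky, bright)
sky.save("starfield_sketch.png")
```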

Star-fields can be a lot of fun, and an incredibly creative activity. You can literally spend hours fiddling around with them simply because you’re enjoying yourself so much. It feels more ‘creative’ than most of the image compositing activities on offer. But losing yourself in this way can also mean losing sight of Rule Zero of image compositing, and ending up with something that just doesn’t work. The process above is a starting point for avoiding that problem.

Extra Topic: Matte Vs Glossy
Matte

Early comic books were colored in much the same way that a child fills in a coloring book: areas of flat color. Where depth was to be suggested, that was the job of the inker and his treatment of black.

Over several years, this began to change. Colorists would add splashes of a slight color variation to suggest a more three-dimensional image.

It doesn’t take too much effort to achieve this effect, but it still doesn’t look quite realistic. Comics got away with this because the mind’s eye was quite capable of treating the image on the page as a kind of visual shorthand and filling in the blanks.

Going further required an understanding of the differences in the way surfaces look depending on how glossy or matte they are.

Here’s a simple strip of color, which has some additional layers (shown separately underneath the composite image):

If I were to play around with the text, making the letters at the edge progressively just a little narrower, the effect would be even more strongly reinforced – but even without that, it’s easy to see this as a stubby cylinder seen edge-on. I could enhance it even more by creating even the narrowest of ellipses, filling it with base color, and then distorting the edge-image composite to match the edge of the ellipse – so that it was no longer seen perfectly side-on. But that’s not the point of this exercise.

The sequence in which these layers were created is strictly bottom-to-top. That’s why the shadow layers appear to be out of sequence – I did the first two and decided that I needed the third.

You will notice that I only needed one highlight layer. The color used is almost exactly the same as the base color, but lightened a little and increased in saturation just a touch.

There are a couple of lessons that you should take away from this image. The first is that realistic shadows are a lot more work than realistic highlights; when working on faces, a common mistake is to use the highlight color as the base tone and then attempt to add shadows, but this doubles or triples the complexity of the job, because the shadow layers are now being asked to do two jobs instead of just one.

The second is that the highlights and shadows extend all the way to the edge of the colored area. That defines matte – there is no ‘shine’.
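To make the ‘variation on the base color’ point concrete, here’s a tiny Pillow demonstration of the color arithmetic only – the base color, the shadow gray and the highlight opacity are all invented for the example; the placement of the bands is the part that takes actual artistry:

```python
from PIL import Image, ImageChops

BASE = (70, 130, 180)              # stand-in base color - use your own
size = (400, 120)
base = Image.new("RGB", size, BASE)

# A shadow band in Multiply mode is just base x gray / 255, so a mid-gray
# shadow always yields a darker variation of the base color.
mid_gray = Image.new("RGB", size, (140, 140, 140))
shadow_band = ImageChops.multiply(base, mid_gray)

# The single highlight band: the base color, lightened and slightly more
# saturated, blended in at partial opacity.
highlight_color = Image.new("RGB", size, (110, 174, 220))
highlight_band = Image.blend(base, highlight_color, 0.5)

print(base.getpixel((0, 0)))            # (70, 130, 180)
print(shadow_band.getpixel((0, 0)))     # (38, 71, 99)  - darker, same hue family
print(highlight_band.getpixel((0, 0)))  # (90, 152, 200) - lighter, same hue family
```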

Gloss

A gloss finish requires a process that is both similar and yet not very similar at all.

There are still five layers, but three of them are now highlights layers – two of them manipulations of the same highlights layer from the Matte image, and one new one. There’s only one shadow layer, but it actually consists of two copies of the old light shadow layer, edited, and two copies of the old medium layer, edited, and merged together. All of which sounds rather more complicated than it is.

(I forgot to add the checkerboard pattern that signifies transparency on this diagram, sorry – take it as read!)

The lowermost highlights layer uses the first of three different composite modes that can be used to apply a highlight: Grain Merge. Between atmospheric distortion, light-source imperfections, and imperfections in the surface texture of the glossy surface, these highlights always ripple, and while it’s possible to do too much in that respect, it’s quite often the case that more yields a better result (up to a certain threshold, when an invisible line gets crossed). As usual, this is a variation on the base color – lighter and a little more saturated (saturation means ‘intensity of color’).

The second highlights layer is a 180-degree rotation of the first. Duplicate, Layer > Transform > Rotate 180°, and position it, and it’s done. Note that it has a different opacity and a different composite mode, producing a far more intense effect on the glossy composite image.

The Shadows layer is next. To create it, I reduced the horizontal scale of the light gray shadow layer to about 75%, duplicated and mirrored the result horizontally, reduced the horizontal scale of the part that faded to the right to about 2/3 of the one to the left, and positioned them so that they touched but did not overlap. I then did something similar with the middle-gray shadows layer, but shrank it even more horizontally and left the two sides symmetrical. These were positioned so that there was no overlap with the light gray pair, but no gap between them either. All four of these layers were then merged, and the oblique transformation option was used to angle them toward the top right.

Looking at the isolated shadows layer, you would think that the two dark streaks at its heart are fairly close in intensity, with the lighter one simply spread out a little more – but as the glossy composite image shows, that isn’t actually the case, and it’s not your eyes deceiving you, either! As with most shadow layers, Multiply mode has been used, so the results are another variation on the base color.

Next, I created a third highlights layer from the original matte highlights layer, in exactly the same way as the medium-gray technique described above. This was squeezed still more, horizontally, and the oblique tool was used to get an angle matching that of the shadows layer. It was positioned so that it lay just to the right of the middle of the light-gray shadows layer. Applied in Addition mode, it puts a highlight streak through the middle of the bands of shadow, and it’s this that ‘compromises’ the light shadow just a little more than the medium-gray shadow in the composite image.
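The two composite modes doing most of the work here have very simple definitions – Grain Merge is usually implemented as base + blend − 128, and Addition as a straight clamped sum. This quick numpy sketch (with stand-in colors of my own) shows why Grain Merge lifts the surface gently where Addition nearly blows it out:

```python
import numpy as np
from PIL import Image

def grain_merge(base, blend):
    """Grain Merge: base + blend - 128, clamped. A highlight painted in a
    light variation of the base color lifts the surface without blowing
    it out the way Addition does."""
    a = np.asarray(base, dtype=np.int16)
    b = np.asarray(blend, dtype=np.int16)
    return Image.fromarray(np.clip(a + b - 128, 0, 255).astype(np.uint8))

def addition(base, blend):
    """Addition: base + blend, clamped - much harsher, used here only for
    the narrow streak that cuts across the shadow bands."""
    a = np.asarray(base, dtype=np.int16)
    b = np.asarray(blend, dtype=np.int16)
    return Image.fromarray(np.clip(a + b, 0, 255).astype(np.uint8))

base = Image.new("RGB", (300, 100), (70, 130, 180))           # stand-in base color
soft_highlight = Image.new("RGB", (300, 100), (150, 180, 210))
print(grain_merge(base, soft_highlight).getpixel((0, 0)))     # (92, 182, 255)  - lifted
print(addition(base, soft_highlight).getpixel((0, 0)))        # (220, 255, 255) - nearly blown out
```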

This technique works really well for creating silk curtains!

Extra Topic: Shiny, Shiny Metal

Polished metal is even more reflective than a gloss finish. The edges of the metal are even more strongly affected than in the gloss example, and this sometimes means compromising the base color towards a lighter, brighter, tone, and then using a colored ‘shadow’ in the actual base color to darken it.

Blur can be very useful for creating the halo around the surface edges.

In addition, a shiny metal finish will reflect shadows and direct light sources. The latter will consist of a very faded outline in the color of the light source and a very bright, almost white, area inside.

In general, the techniques for creating shiny surfaces and glossy surfaces are the same – there’s just more of everything, with the additional layers serving specific purposes.
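As one concrete example of a reflected light source – a faded halo in the light’s own color around a near-white core – here is a rough Pillow sketch. The base ‘metal’ color, the warm light color, and the blur radii are all placeholder values of mine; in practice you’d paint and position these by hand:

```python
from PIL import Image, ImageChops, ImageDraw, ImageFilter

size = (300, 300)
metal = Image.new("RGB", size, (60, 70, 90))        # stand-in gun-metal base color

# The faded outline, in the color of the light source: a colored disc,
# heavily blurred so that it fades out toward its edges.
halo = Image.new("RGB", size, (0, 0, 0))
ImageDraw.Draw(halo).ellipse((110, 110, 190, 190), fill=(255, 220, 150))
halo = halo.filter(ImageFilter.GaussianBlur(radius=25))

# The very bright, almost white, area inside it.
core = Image.new("RGB", size, (0, 0, 0))
ImageDraw.Draw(core).ellipse((130, 130, 170, 170), fill=(255, 255, 250))
core = core.filter(ImageFilter.GaussianBlur(radius=4))

# Add both onto the base - equivalent to two layers in Addition mode.
shiny = ImageChops.add(ImageChops.add(metal, halo), core)
shiny.save("shiny_metal_reflection.png")
```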

Additional Bonus Topic: Curved surfaces

I threw the diagram to the right together at the last minute to amplify a couple of points hinted at, but not stated explicitly, in the preceding explanations. It shows how artists construct the ‘edges’ of curved objects.

As you can see from the circle, each ‘panel’ in the edge-on view is the same apparent width – but towards the edges, each panel corresponds to a progressively larger arc of the curved surface, angled further and further away from the viewer. You would think that this means they would reflect less light toward the viewer, causing them to darken, but that never looks quite right in practice. The reason is that while less light is reflected toward the viewer, it is squeezed into the same narrow strip – it’s more concentrated – so the edges actually read as brighter than the base color.

It can sometimes be effective to inset these brighter areas – moving them slightly toward the center of the ‘rim’; this reinforces the impression that it’s a reflection.

Here’s a very crudely-drawn image (blue and yellow in sympathy for Ukraine) that illustrates these points. Outside of the original ellipse for the head and some guidelines for proportions plus neck and shoulders, this is completely hand-drawn.

The shirt is matte with just a hint of gloss; the body is glossy to the point of being semi-metallic (but with that color, it’s probably plastic).

I used a texture built into Krita for the background, with a dark green layer over the top set to Color and a lighter blue layer on top of that set to Multiply. That particular combination blends the colors of the two layers in the dark parts of the texture and uses the lighter color for the highlights (and it’s no coincidence that those are basically the colors of Campaign Mastery – variations on blue and a sort of sea-green – either).

I deliberately chose a very detailed texture to contrast with the smoothness of the figure. It took 15-20 minutes.

Surfaces – concluding thoughts & Post wrap-up

All of this barely scratches the surface of these topics – it’s only just enough to get you started. Representing the surfaces of objects in a composition is one of the hardest things to do well – far more complicated than the simple compositions I have demonstrated thus far. It’s also worth noting that the remaining projects avoid fancy object finishes, simply because they are so hard!

In the next part of this series, I will tackle a project that is even more complex than the relatively simple Blue Monkey – because this will require the creation of layers of shadow from elements being composited. And, as I’ve pointed out above, shadows are a lot more complicated than most people realize.


