A designer sits beside me, looking at the palette in their vector artwork software. "There's something missing... I'm sure there's another color I just can't seem to reproduce...."
"No," I fire back immediately, and smugly explain: "Everyone remembers from school how eyes work—the cones in your eyes detect red, green and blue light, and colors are just mixtures of these. You've got red, green and blue on the screen and you can mix them for lighter colors, and that's it, all the colors."
A moment later I realised I was wrong. It's a bit more complex than that, and this article explains why.
In this article, we'll look first at color models: how color can be broken down into component parts and described. Then we'll move on to color spaces, the more precise cousins of color models, which define exactly which color we're describing. Here we'll see why there certainly are colors you can't reproduce in your software. Finally, we'll look at color management, the process that should ensure you're seeing the right colors throughout your design workflow.
1. How Can We Describe a Color?
What Is a Color Model?
So if you're from a print background, you may already be laughing at my simplistic computer-centric red, green and blue view of the world—everyone knows colors are really made by mixing cyan, magenta, yellow and black. But both are equally valid—they're simply different color models, methods to break down colors into their components to abstract and numerically represent a color.
Most designers will already be familiar with the RGB and CMYK color models. If you are, feel free to skip ahead to halfway through section one for the more obscure but still useful HSB and Lab color models.
What Is the RGB Color Model?
Our first example will be the RGB color model. This model, sometimes described as the additive color model, describes how colored light combines to make colors. Imagine you're in a dark room with dimmable red, green and blue lamps; by adjusting the brightness of each, you can illuminate the room with any color you wish by mixing their light. If all the lamps are off, you get black—it's dark! If you mix red and green equally, the room appears yellow, and as you then turn up the blue lamp, the room becomes white.
Why red, green and blue? You may remember from science lessons in school a spectrum of light, described as frequencies, ranging from reds through the colors of the rainbow into blues and purples. From a scientific point of view, light can be a mixture of any of those monochromes, light of a single frequency.
However, we have light-sensing cells called cones in the retina of our eyes to detect the amount of light in the red, green and blue areas of the spectrum. Because of this, "true" monochromatic yellow light, which lies between red and green on the spectrum, is indistinguishable from a mixture of monochromatic red and green light.
From a design point of view, since we can't perceive the difference, it simply does not matter, and so we can abstract any color we can see as a mixture of red, green and blue.
Due to this fact, many devices, such as monitors, TVs, and color-changing LEDs, reproduce light with red, green and blue emitting light sources. Similarly, light-capturing devices, such as cameras or scanners, mimic the human eye with sensors of these three colors.
In the digital world, Red, Green and Blue components are often described as numbers between 0 and 255. Why 255? You can blame programmers for this—it's due to their being stored as "8-bit" values, which can represent 256 different levels. You can blame them even more if you have to deal with websites and hexadecimal-encoded numbers such as #FF4E3A!
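As a quick sketch (the helper name here is purely illustrative), decoding such a hexadecimal color into its three 8-bit components takes only a couple of lines of Python:

```python
# Decode a hexadecimal web color into its 8-bit R, G, B components.
def hex_to_rgb(hex_color):
    """Convert a string like '#FF4E3A' to an (R, G, B) tuple of 0-255 ints."""
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

print(hex_to_rgb('#FF4E3A'))  # (255, 78, 58)
```

Each pair of hex digits is one component, which is why values run from 00 (0) to FF (255).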
What Is the CMYK Color Model?
So why describe colors in any other way? Well, the print world is a good example. We don't want to describe the light emitted from our print media; we want to describe the pigments we put on a piece of paper so that it reflects light of the color we want. Surely that's just red, green and blue again? If you've printed or painted before, you'll know that's not the case.
Our primary colors in the print world are Cyan, Magenta and Yellow pigments, and by mixing two of these on a white piece of paper, we get Red, Green or Blue. Adding the third, we tend to get a muddy brown, but by adding a fourth black pigment (the K, which stands for "key"), we can mix to get most colors. Because this color model adds colors to get darker shades, it is sometimes referred to as the subtractive model, but more commonly as the CMYK color model. Typically you'll see the proportion of each pigment represented digitally as a number between 0 and 100.
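To make the relationship between the two models concrete, here's a minimal sketch of the textbook RGB-to-CMYK formula (the function name is illustrative). Real print conversion goes through color management, as we'll see later, so treat this as a caricature rather than anything you'd send to a press:

```python
# A naive, device-independent sketch of RGB -> CMYK conversion:
# each ink is the complement of its RGB counterpart, and shared
# darkness is pulled out into the black (K) channel.
def rgb_to_cmyk(r, g, b):
    """Map 0-255 RGB values to 0-100 CMYK percentages."""
    if (r, g, b) == (0, 0, 0):
        return (0, 0, 0, 100)          # pure black is all key ink
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    k = min(c, m, y)                   # the shared darkness
    c, m, y = [(v - k) / (1 - k) for v in (c, m, y)]
    return tuple(round(v * 100) for v in (c, m, y, k))

print(rgb_to_cmyk(255, 0, 0))  # pure red -> (0, 100, 100, 0)
```

Notice how pure red needs full magenta and yellow but no cyan, mirroring the "mix two primaries" description above.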
For a nice visual explanation of the RGB and CMYK color models, try the linked video from Kirk Nelson.
What Is the HSB/HSV/HSL Color Model?
But there are other color models out there. If you fire up the color picker in Adobe Photoshop CC, or head across to colorizer.org, you'll also see the HSB color model.
This model represents color as a combination of Hue, Saturation and Brightness, matching how many people tend to think of colors.
Saturation dictates how vivid the resulting color is: a 100% saturated color would be vivid and bold, 50% saturated color a more subtle pastel, and an unsaturated color would be a greyscale.
Brightness (sometimes instead called Value, hence the alternative name HSV) can be thought of as the amount of black mixed into the color: 0% brightness is fully black, while 100% is either white or a vivid color, depending on the saturation.
Finally, Hue dictates which monochrome color we're talking about, meaning color as we'd mean it in a rainbow: red, yellow, green, purple, etc. Hue is described as a number between 0 and 360, essentially an angle around a color wheel.
Whilst it has its place, I've always found it unsettling that if Saturation is 0%, Hue can be any value and still mean the same color (a greyscale), and worse, if Brightness is 0%, neither Hue nor Saturation matter at all, with any values all meaning black.
The related HSL color model shares a definition for Hue, but adds the concept of Lightness, having white and black at its extents, with vivid colors in the middle, and a subtly different but broadly similar saturation.
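If you want to experiment with these conversions, Python's standard colorsys module implements them. Note that it works on 0-1 floats, and expresses hue as a fraction of a full turn rather than in degrees:

```python
import colorsys

# colorsys works on 0-1 floats; hue is a fraction of a full turn,
# so multiply by 360 to get the usual 0-360 degree range.
r, g, b = 255 / 255, 78 / 255, 58 / 255   # the #FF4E3A example again
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(round(h * 360), round(s * 100), round(v * 100))  # 6 77 100

# HSL is rgb_to_hls in colorsys -- note the swapped letter order (H, L, S).
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)
```

So #FF4E3A is a hue of about 6° (a slightly orange red), 77% saturated, at full brightness.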
What Is the Lab Color Model?
The final color model offered by Photoshop's color picker is the Lab color model, which is a bit less intuitive but more closely approximates how the human visual system works.
"But wait", I hear you cry, "You just told us human eyes sense red, green and blue!" That is true, it's called the Trichromatic model of vision, and whilst it does describe how the individual cones in the eye work, it doesn't accurately describe the visual system as a whole.
It turns out that system is better described by the Opponent model of vision, which suggests the visual system is connected to detect differences between cones rather than the actual values they sense. The system looks at differences in Greenness vs. Redness, Blueness vs. Yellowness and Light vs. Dark.
Mimicking this, Lab's a & b dimensions describe color-opponency: the a dimension describes red/green, and the b dimension describes blue/yellow. The third dimension, L for Lightness, is similar to HSL's definition, but with one key difference: whereas the other models are based on the intensity of light, Lab is based on the human perception of that intensity. The result is that a doubling of the L value actually appears twice as light; the same can't be said for the earlier systems.
Separating the human perception of lightness from color leaves the a & b dimensions as measures of chromaticity: color independent of brightness. This is important, as some colors appear brighter or darker than others despite being at the same intensity. For instance, we see a fully saturated yellow as a lot brighter than a fully saturated blue. All of these changes result in a perceptually uniform color model.
Regarding ranges, L is measured from 0 (dark) to 100 (light), a from -120 (green) to +120 (red), and b from -120 (blue) to +120 (yellow).
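To show how the pieces connect, here's a compact sketch of converting an 8-bit sRGB color into Lab, assuming the D65 white point and the published sRGB matrix (the function name is illustrative, and a color-managed application would do this with far more care):

```python
# A compact sketch of sRGB -> CIE L*a*b* conversion (D65 white point).
# Each step follows the published sRGB and CIELAB formulas.
def srgb_to_lab(r8, g8, b8):
    # 1. Undo the sRGB gamma curve to get linear light in 0-1.
    def linear(c):
        c /= 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linear(r8), linear(g8), linear(b8)

    # 2. Linear RGB -> CIE XYZ via the standard sRGB matrix.
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # 3. XYZ -> Lab, scaled relative to the D65 reference white.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# Pure white comes out at L = 100 with a and b essentially zero.
print(srgb_to_lab(255, 255, 255))
```

The cube-root in step 3 is where perception enters: it compresses intensity the way our visual system does, which is what makes equal steps in L look equally spaced.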
As this can be hard to grasp from text, I'd recommend a quick visit to another of Kirk Nelson's quick videos below.
Bringing perception into things certainly helps human vision researchers, but does it help designers?
Well, it's the perceptual uniformity that's really a boon. The brightness independence of the chromaticity dimensions, for instance, can be very useful: you can tweak the curves in these dimensions to add a bit more blueness without changing the perceived brightness of an image.
Are There Other Color Models?
Can we break down color in other ways? Certainly! Keeping Lab's perceptually independent description of Lightness, what if we broke the chromaticity into Hue and Saturation like HSV? We'd have the Munsell system, although it calls saturation "Chroma" and Lightness "Value" and tends to be used for soil research, rather than design!
The link I gave earlier, colorizer.org, is a fantastic way to understand these systems, offering sliders for all the different dimensions of the different systems. You'll see some more color models such as YPbPr and XYZ. These again are more specialist models, less useful to the designer, but handy for video codec developers to squeeze a bit more content into our bandwidth.
Moving away from the digital, systems such as Pantone could be described as color systems, being a standardised way to abstract colors, allowing two remote designers with the same swatch to know they're thinking of the same Cerulean or Hot Pink!
If we move away from human eyes, looking at animal perception of color, infrared cameras or even satellite data, suddenly we have sensitivities at frequencies other than red, green and blue. We then move into the area of false color images to make unseeable colors understandable.
2. How Can We Accurately Describe a Color?
Coming back to day-to-day design, it is when we move between these color models that my mistake becomes most evident. Perhaps you've gone through the pain of perfecting a piece of media to exactly the right shades of color, only to print it and find all the colors reproduced subtly differently.
If a document calls for 100% Red or 100% Cyan, what is that a proportion of? Given no other clue, it will be 100% of what a device can give: a fully bright red pixel or a full covering of Cyan ink. This raises two main issues. Firstly, the capabilities of devices differ, so "fully red" will appear different between monitors. Secondly, how do we move between color models whilst accurately representing colors?
To do this properly, we require Color Management. I'll describe this fully in section 3, but first we need to understand color spaces, color models' more precise sibling.
What Is a Color Space?
A color space precisely specifies the mapping from the description of a color to how it should be reproduced: exactly which real colors the component primaries refer to, precisely how a mixture of those primaries should appear, and at what real-world brightness any given value should shine from a screen.
The notion of a color space works for any color model. Pantone, which I mentioned earlier, is actually better described as a color space, as it describes precise colors. There are common color spaces for RGB and CMYK, but first we'll look at Lab to learn a few more concepts.
CIE Lab and XYZ Color Spaces
Exactly what the L, a & b dimensions of a Lab color model measure depends on which Lab color space they refer to. The initial Lab color space came from Richard S. Hunter in 1948, but the International Commission on Illumination (CIE) standardised its own version in 1976, and later refined how differences between Lab colors are measured in its CIE 1994 and CIE 2000 definitions, for a better approximation of human perception. Technically, the CIE dimensions should be referred to as L*, a* and b*, as they are defined differently from the Hunter 1948 dimensions, but I've followed Photoshop's Lab usage.
Each of these systems is based on and defined in terms of the earlier CIE 1931 XYZ color space. Unless you're interested in the human visual system, the details are immaterial, except for one fact: by normalizing the XYZ values, we get x & y coordinates that measure chromaticity alone, letting us ignore lightness and map all the colors on an x-y scale; we term this a chromaticity diagram. In the chromaticity diagram shown below, the arching curve shape is the range of colors human vision can see (chromaticities, really, as we have no lightness). Where this diagram really comes in useful is for comparing the ranges of different color spaces.
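The normalization itself is tiny; a sketch in Python (the helper name is illustrative):

```python
# Chromaticity drops lightness by normalizing XYZ so the components
# sum to 1; the resulting (x, y) pair is what the CIE 1931 diagram plots.
def xy_chromaticity(X, Y, Z):
    total = X + Y + Z
    return X / total, Y / total

# The D65 white point in XYZ lands near the middle of the diagram,
# at roughly (0.3127, 0.3290).
print(xy_chromaticity(0.95047, 1.0, 1.08883))
```

Because the three normalized components sum to 1, the third coordinate (z) is redundant, which is why the diagram only needs two axes.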
The range of a color space is described as its gamut. You may find color space and color gamut used somewhat interchangeably, but the best way to understand the difference is to look back at the CIE 1931 Chromaticity diagram above. The colored area is the color space of human vision, and the thick line noting the extents is the color gamut of human vision.
sRGB Color Space
Color gamuts are useful when we come to describe color spaces. Let's take a look at sRGB to demonstrate this. If you're feeling brave, you can take a look at the sRGB color space specification. The sRGB color space can be thought of as a default color space for the RGB model. Almost all capture and display devices working in the RGB color model support sRGB as a minimum.
Take a look at the Chromaticity diagram below—the triangle shows the gamut of sRGB in comparison to Human Vision (CIE1931). As you can see, a lot of the areas inside the gamut of Human Vision are outside the gamut of the sRGB color space. Essentially, these are colors that we can see but that cannot be represented within the sRGB color space, and such colors are referred to as Out of Gamut for the sRGB color space. The fact that so much of human vision is outside the sRGB color space explains why it is a minimum, and tends to be considered a narrow gamut color space.
Have you spotted the artistic licence I've taken with the chromaticity diagrams? If your monitor is only displaying sRGB, why aren't all the colors in the sRGB triangle? And how can you see colors outside it?
In reality, the colors along the arching edge of the diagram are the pure monochromatic colors, and the three corners of the sRGB triangle are the best green, blue and red a monitor can reproduce. I've simply stretched the range of colors over the range of human vision to better illustrate the diagram.
Adobe & ProPhoto RGB Color Spaces
What if we want a color outside the sRGB color space but still in the RGB model? We need a wider gamut RGB color space.
There are many, but we'll look at two major ones. Firstly there's the Adobe RGB color space, introduced in 1998, which, as you can see below, allows a better representation of greens over sRGB.
Secondly, Kodak's ProPhoto RGB, otherwise known as ROMM RGB, offers a vast color space. In fact, there are areas inside the ProPhoto RGB gamut that are out of gamut for CIE 1931 human vision, meaning the most deeply saturated blues and greens in this color space represent "colors" the human eye can't actually see!
Okay, so which RGB color space does my camera/monitor/scanner use? Likely none of them! Whilst they may be close to a standard color space, each model of a device will have its own color space.
Due to this fact, the International Color Consortium came up with the ICC profile, a way of defining and sharing device-specific color spaces. Such a space may be available from the manufacturer, or you can generate it yourself as described in section 3.
CMYK Color Spaces
Moving away from the RGB color model, we'll look at CMYK color spaces. These are a lot more complex, as they require information not just about the inks, but also about the paper and other details of the printing process. Take a look at this guide to see the range of profiles available. We'll just take the U.S. Web Coated (SWOP) color space.
The irregular hexagonal space is the gamut for SWOP, and I've also thrown in the triangular gamut of sRGB again so we can compare them. We've got some out of gamut colors for each color space in relation to the other, so the implication is we can't trivially move between CMYK and RGB—we need Color Management.
3. What Is Color Management?
So now you (hopefully) understand color spaces, but how do you actually use them? By using a color-managed workflow.
Color Management is a chain of systems to manage color through the workflow of a piece of media. It includes:
- the management of color spaces in media files
- the conversion between color spaces
- the characterisation and calibration of devices to accurately display (or capture) in a color space
Characterisation/Calibration of Devices
So the first step will be ensuring we are seeing color properly on our device. As we've already touched on, devices will have their own color space, referred to as an ICC profile. This profile may be available from the manufacturer, but to be really accurate it's best to generate it yourself, as devices can differ due to manufacturing tolerances and environmental conditions.
Characterisation is the process of measuring a device's capabilities. It is achieved by Colorimetry, measuring the appearance of colors as perceived by people, with a Colorimeter.
A step further is to take this characterisation and tweak the device's reproduction for a more true representation of color; this is termed Calibration. Typically, display colorimeters will come with software to calibrate a display, and then do a final characterisation to generate an ICC profile.
I've linked a few tutorials below on this. Jordan Merrick's runs through the process for both techniques on a Mac display, Daniel Sone's shows the use of an inexpensive colorimeter for calibration, and Jeffrey Opp's shows the process of characterisation for a camera.
- Customization: How to Calibrate Your Mac's Display (Jordan Merrick)
- Reviews: Review of X-Rite's ColorMunki Monitor Calibrator (Daniel Sone)
- Colour Correction: 100% Perfect Color in Product Photos With a ColorChecker (Jeffrey Opp)
Managing Your Color Spaces
We now understand what a color space is, but how do we choose the right one for a document? Typically we'll be limited to a subset by the devices we're using, and the color model will be dictated by the desired final media or, for photography and scanning, by the capture device. So do we simply use the color space with the largest gamut available? That is often the best approach, and we certainly don't want to restrict ourselves unnecessarily, but there are some pitfalls to be mindful of.
Firstly be mindful of the final color space in your process, print or screen. By all means use a wide gamut for capture and in intermediary documents, as it'll give you more data to work with, but aim to end up with colors within the gamut of the final color space. At the very least, find out what that color space is and convert to it yourself as the final step. This will allow you to see if clipping to the color space will result in any odd colors.
A second potential pitfall arises from digital representation: we put a number to each dimension of the color model, and each of these numbers has a bit depth, essentially the number of subdivisions of intensity available for each primary color.
Typically this will be 8 or 16 bits, representing 256 or 65,536 possible values for each of Red, Green and Blue. Obviously we want a higher bit depth to represent more colors, but sometimes we'll be limited to lower bit depths (perhaps to keep file sizes down).
In this case, a larger gamut spaces the subdivisions further apart, meaning that saturated colors you never use waste precious bits of data, at worst resulting in visible banding. So if you have a limited bit depth, choose a color space that matches what you're trying to represent in the document.
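A toy sketch makes the trade-off visible: quantizing the same value with 256 codes spread over a range twice as wide roughly doubles the worst-case rounding error (the names and ranges here are purely illustrative, standing in for a narrow and a wide gamut):

```python
# Toy illustration of why gamut size matters at low bit depth: the same
# 2**bits codes spread over a wider range leave bigger gaps between the
# values that can actually be represented.
def quantize(value, lo, hi, bits=8):
    """Round a value to the nearest of 2**bits codes covering [lo, hi]."""
    levels = 2 ** bits - 1
    code = round((value - lo) / (hi - lo) * levels)
    return lo + code / levels * (hi - lo)

target = 0.4988
narrow_err = abs(quantize(target, 0.0, 1.0) - target)  # narrow "gamut"
wide_err = abs(quantize(target, 0.0, 2.0) - target)    # same codes, 2x range
print(narrow_err < wide_err)  # True: the wider range rounds more coarsely
```

With real images, those coarser steps show up as visible bands in smooth gradients.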
Conversion Between Color Spaces
Thankfully, the color management toolchain deals with the mathematics of moving between color spaces for us. The real interaction the designer has is selecting the mapping that deals with the change in gamut and distribution of colors between color spaces; this is termed the Rendering Intent.
Relative colorimetric intent aims to accurately map colors which can be represented in both color spaces, and represent out of gamut colors as the nearest color available. Assuming most of the colors in the document are in the shared space of both gamuts, this tends to appear most similar to the human eye, which is very handy for photography. The big disadvantage is that any colors outside the target gamut are "clipped" to the nearest color, and hence information is lost.
Perceptual intent conversion instead squashes all the colors in the source color space to fit into the resulting color space. This changes how every color looks, but no information is lost; the trade-off is that big shifts in color and brightness can occur.
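A one-dimensional caricature shows the difference between the two intents (the values and names are purely illustrative; real intents work on three-dimensional gamuts, not a single channel):

```python
# A 1-D caricature of two rendering intents: a "source" channel can
# reach 1.4, but the "target" gamut only reaches 1.0.
source = [0.2, 0.5, 0.9, 1.4]

# Relative colorimetric: in-gamut values pass through untouched;
# out-of-gamut values are clipped to the nearest representable value.
relative = [min(v, 1.0) for v in source]

# Perceptual: squash everything proportionally so the extremes fit,
# shifting even the colors that were already in gamut.
scale = 1.0 / max(source)
perceptual = [v * scale for v in source]

print(relative)                           # [0.2, 0.5, 0.9, 1.0]
print([round(v, 2) for v in perceptual])  # [0.14, 0.36, 0.64, 1.0]
```

Notice that clipping leaves in-gamut colors exact but makes 1.4 collide with anything else above 1.0, while the perceptual squash keeps every value distinct at the cost of shifting them all.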
One notion I've glossed over until now is the white point of a color space, which describes the location of the purest white available; this differs from color space to color space. Relative colorimetric intent tries to maintain the white point across the mapping, distorting colors to do so, but absolute colorimetric intent does not. This can change the overall white balance of an image, so it is not good for photography, but it is very useful in packaging and branding, as it accurately reproduces exact colors.
Saturation intent can be useful moving to a bigger gamut, as it maintains relative saturation. This will make photography look too vivid, but is useful for packaging and infographics.
That Sounds Like Hard Work—Should I Bother at All?
The answer depends on what you design. If you work with printed end results, simply yes. If your entire workflow is digital, perhaps for the web, you can likely just stick to sRGB, but I'd argue you should at least know about these topics. Whether you should calibrate is a contentious issue, as described in Thomas Cannon's discussion Is Color Calibration Necessary in Web Design?.
If you capture images from the real world (scanner and cameras) or put images in the real world (printers), you really should know this stuff, and I'd recommend you read further for how your particular devices and software deal with color spaces and color management.
Either way, just be aware that yes, there likely are colors that aren't in your palette in your digital artwork tool of choice. And don't even mention metallic inks, as that's a whole other kettle of fish!