
Formula to determine perceived brightness of RGB color

I'm looking for some kind of formula or algorithm to determine the brightness of a color given the RGB values. I know it can't be as simple as adding the RGB values together and having higher sums be brighter, but I'm kind of at a loss as to where to start.

Perceived brightness is what I think I'm looking for, thank you.
There is a good article (Manipulating colors in .NET - Part 1) about color spaces and conversions between them, including both theory and code (C#). For the answer, look at the Conversion between models topic in the article.
I have been a member for lots of years, and I have never done this before. May I suggest that you review the answers and re-think which one to accept?

Lionel Rowe

The method could vary depending on your needs. Here are 3 ways to calculate Luminance:

Luminance (standard for certain colour spaces): (0.2126*R + 0.7152*G + 0.0722*B) source

Luminance (perceived option 1): (0.299*R + 0.587*G + 0.114*B) source

Luminance (perceived option 2, slower to calculate): sqrt( 0.241*R^2 + 0.691*G^2 + 0.068*B^2 ) → sqrt( 0.299*R^2 + 0.587*G^2 + 0.114*B^2 ) (thanks to @MatthewHerbst) source

[Edit: added examples using named css colors sorted with each method.]
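A minimal JavaScript sketch of the three options, assuming plain 8-bit R, G, B inputs (the function names are mine, not part of any standard):

// Three brightness estimates for 8-bit R, G, B values (0-255), applied
// directly to the gamma-encoded values, as in the formulas above.
function luminanceStandard(r, g, b) {      // 0.2126 / 0.7152 / 0.0722
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function luminancePerceived1(r, g, b) {    // 0.299 / 0.587 / 0.114
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

function luminancePerceived2(r, g, b) {    // square-root variant, slower
    return Math.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b);
}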


Note that all of these emphasize the physiological aspects: the human eye is most sensitive to green light, less so to red, and least to blue.
Note also that all of these probably expect linear 0-1 RGB, while you probably have gamma-corrected 0-255 RGB; the two are not related by a simple rescaling.
Not correct. Before applying the linear transformation, one must first apply the inverse of the gamma function for the color space. Then after applying the linear function, the gamma function is applied.
In the last formula, is it (0.299*R)^2 or is it 0.299*(R^2) ?
@KaizerSozay As it's written here it would mean 0.299*(R^2) (because exponentiation goes before multiplication)
Glenn Slayden

I think what you are looking for is the RGB -> Luma conversion formula.

Photometric/digital ITU BT.709:

Y = 0.2126 R + 0.7152 G + 0.0722 B

Digital ITU BT.601 (gives more weight to the R and B components):

Y = 0.299 R + 0.587 G + 0.114 B

If you are willing to trade accuracy for performance, there are two approximation formulas for this one:

Y = 0.33 R + 0.5 G + 0.16 B

Y = 0.375 R + 0.5 G + 0.125 B

These can be calculated quickly as

Y = (R+R+B+G+G+G)/6

Y = (R+R+R+B+G+G+G+G)>>3

I like that you put in precise values, but also included a quick "close enough" type shortcut. +1.
@Jonathan Dumaine - the two quick calculation formulas both include blue - 1st one is (2*Red + Blue + 3*Green)/6, 2nd one is (3*Red + Blue + 4*Green)>>3. granted, in both quick approximations, Blue has the lowest weight, but it's still there.
@JonathanDumaine That's because the human eye is least perceptive to Blue ;-)
The quick version works well. Tested and applied to real-world app with thousands of users, everything looks fine.
The quick version is even faster if you do it as: Y = ((R<<1)+R+(G<<2)+B)>>3 (that's only 3-4 CPU cycles on ARM), but I guess a good compiler will do that optimisation for you.
Myndex

The "Accepted" Answer is Incorrect and Incomplete

The only answers that are accurate are the @jive-dadson and @EddingtonsMonkey answers, and in support @nils-pipenbrinck. The other answers (including the accepted) are linking to or citing sources that are either wrong, irrelevant, obsolete, or broken.

Briefly:

sRGB must be LINEARIZED before applying the coefficients.

Luminance (L or Y) is linear as is light.

Perceived lightness (L*) is nonlinear as is human perception.

HSV and HSL are not even remotely accurate in terms of perception.

The IEC standard for sRGB specifies a threshold of 0.04045; it is NOT 0.03928 (that was from an obsolete early draft).

To be useful (i.e. relative to perception), Euclidean distances require a perceptually uniform Cartesian vector space such as CIELAB. sRGB is not one.

What follows is a correct and complete answer:

Because this thread appears highly in search engines, I am adding this answer to clarify the various misconceptions on the subject.

Luminance is a linear measure of light, spectrally weighted for normal vision but not adjusted for the non-linear perception of lightness. It can be a relative measure, Y as in CIEXYZ, or as L, an absolute measure in cd/m2 (not to be confused with L*).

Perceived lightness is used by some vision models such as CIELAB, here L* (Lstar) is a value of perceptual lightness, and is non-linear to approximate the human vision non-linear response curve. (That is, linear to perception but therefore non linear to light).

Brightness is a perceptual attribute; it does not have a "physical" measure. However, some color appearance models do have a value, usually denoted as "Q", for perceived brightness, which is different from perceived lightness.

Luma (Y′, "Y prime") is a gamma-encoded, weighted signal used in some video encodings (Y′I′Q′). It is not to be confused with linear luminance.

Gamma or transfer curve (TRC) is a curve that is often similar to the perceptual curve, and is commonly applied to image data for storage or broadcast to reduce perceived noise and/or improve data utilization (and related reasons).

To determine perceived lightness, first convert gamma encoded R´G´B´ image values to linear luminance (L or Y ) and then to non-linear perceived lightness (L*)

TO FIND LUMINANCE:

...Because apparently it was lost somewhere...

Step One:

Convert all sRGB 8 bit integer values to decimal 0.0-1.0

  vR = sR / 255;
  vG = sG / 255;
  vB = sB / 255;

Step Two:

Convert a gamma encoded RGB to a linear value. sRGB (computer standard) for instance requires a power curve of approximately V^2.2, though the "accurate" transform is:

V_lin = V´ / 12.92                      if V´ ≤ 0.04045
V_lin = ((V´ + 0.055) / 1.055) ^ 2.4    otherwise

Where V´ is the gamma-encoded R, G, or B channel of sRGB. Pseudocode:

function sRGBtoLin(colorChannel) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.

    if ( colorChannel <= 0.04045 ) {
        return colorChannel / 12.92;
    } else {
        return pow((( colorChannel + 0.055)/1.055),2.4);
    }
}

Step Three:

To find Luminance (Y) apply the standard coefficients for sRGB:

Y = 0.2126 * R_lin + 0.7152 * G_lin + 0.0722 * B_lin   (with R_lin, G_lin, B_lin the linearized values from step two)

Pseudocode using above functions:

Y = (0.2126 * sRGBtoLin(vR) + 0.7152 * sRGBtoLin(vG) + 0.0722 * sRGBtoLin(vB))

TO FIND PERCEIVED LIGHTNESS:

Step Four:

Take luminance Y from above, and transform to L*

L* = Y * (24389/27)            if Y ≤ (216/24389)
L* = Y^(1/3) * 116 - 16        otherwise

function YtoLstar(Y) {
    // Send this function a luminance value between 0.0 and 1.0,
    // and it returns L* which is "perceptual lightness"

    if ( Y <= (216/24389) ) {     // The CIE standard states 0.008856, but 216/24389 is the intent (0.008856451679036...)
        return Y * (24389/27);    // The CIE standard states 903.3, but 24389/27 is the intent (903.296296296...)
    } else {
        return pow(Y,(1/3)) * 116 - 16;
    }
}

L* is a value from 0 (black) to 100 (white), where 50 is the perceptual "middle grey". L* = 50 is the equivalent of Y = 18.4% (0.184), or in other words an 18% grey card, representing the middle of a photographic exposure (Ansel Adams Zone V).
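Putting the four steps together, here is a minimal JavaScript sketch of the whole pipeline, assuming 8-bit sRGB inputs (function names follow the pseudocode above):

// sRGB 8-bit channel (0-255) -> linear value 0.0-1.0 (steps one and two)
function sRGBtoLin(channel8bit) {
    const v = channel8bit / 255;
    return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
}

// Linear luminance Y (0.0-1.0) from 8-bit sRGB (step three)
function luminance(r8, g8, b8) {
    return 0.2126 * sRGBtoLin(r8) + 0.7152 * sRGBtoLin(g8) + 0.0722 * sRGBtoLin(b8);
}

// Perceptual lightness L* (0-100) from luminance Y (step four)
function YtoLstar(Y) {
    return Y <= 216 / 24389 ? Y * (24389 / 27) : Math.pow(Y, 1 / 3) * 116 - 16;
}

// Example: the grey #777777 comes out very close to L* = 50 (middle grey)
const LstarOfMidGrey = YtoLstar(luminance(0x77, 0x77, 0x77));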

References:

IEC 61966-2-1:1999 Standard
Wikipedia sRGB
Wikipedia CIELAB
Wikipedia CIEXYZ
Charles Poynton's Gamma FAQ


@Rotem thank you — I saw some odd and incomplete statements and felt it would be helpful to nail it down, particularly as this thread still ranks highly on search engines.
@asdfasdfads Yes, L*a*b* does not take into account a number of psychophysical attributes. Helmholtz-Kohlrausch effect is one, but there are many others. CIELAB is not a "full" image assessment model by any means. In my post I was trying to cover the basic concepts as completely as possible without venturing into the very deep minutiae. The Hunt model, Fairchild's models, and others do a more complete job, but are also substantially more complex.
@Myndex, nevermind, my implementation was fatigue-based and my poor results came from that :( Thank you very much for your help and your post which is of a great value!
Fair enough, @Myndex. I've already found that colorimetry is a tricky subject.
Small typo... seems if ( Y <= (216/24389) { is missing a )
aleclarson

I have made a comparison of the three algorithms in the accepted answer. I generated colors in a cycle, using only about every 400th color. Each color is represented by 2x2 pixels; colors are sorted from darkest to lightest (left to right, top to bottom).

1st picture - Luminance (relative)

0.2126 * R + 0.7152 * G + 0.0722 * B

2nd picture - http://www.w3.org/TR/AERT#color-contrast

0.299 * R + 0.587 * G + 0.114 * B

3rd picture - HSP Color Model

sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)

4th picture - WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formula (see @Synchro's answer here)

A pattern can sometimes be spotted on the 1st and 2nd pictures, depending on the number of colors in one row. I never spotted any pattern on the picture from the 3rd or 4th algorithm.

If I had to choose, I would go with algorithm number 3, since it's much easier to implement and it's about 33% faster than the 4th.

https://i.imgur.com/I4alyQe.png


To me this is the best answer, because you use a picture pattern that lets you perceive whether different hues are rendered with the same luminance. For me and my current monitor the 3rd picture is the "best looking"; since it is also faster than the 4th, that's a plus.
Your comparison image is incorrect because you did not provide the correct input to all of the functions. The first function requires linear RGB input; I can only reproduce the banding effect by providing nonlinear (i.e. gamma-corrected) RGB. Correcting this issue, you get no banding artifacts and the 1st function is the clear winner.
@Max the ^2 and sqrt included in the third formula are a quicker way of approximating linear RGB from non-linear RGB instead of the ^2.2 and ^(1/2.2) that would be more correct. Using nonlinear inputs instead of linear ones is extremely common unfortunately.
@Max, you're absolutely right. See my answer for updated visuals of each algorithm, comparing gamma-compressed vs linear RGB input.
xjcl

Below is the only CORRECT algorithm for converting sRGB images, as used in browsers etc., to grayscale.

It is necessary to apply an inverse of the gamma function for the color space before calculating the inner product. Then you apply the gamma function to the reduced value. Failure to incorporate the gamma function can result in errors of up to 20%.

For typical computer stuff, the color space is sRGB. The right numbers for sRGB are approx. 0.21, 0.72, 0.07. Gamma for sRGB is a composite function that approximates exponentiation by 1/(2.2). Here is the whole thing in C++.

#include <cmath>   // for pow

// sRGB luminance(Y) values
const double rY = 0.212655;
const double gY = 0.715158;
const double bY = 0.072187;

// Inverse of sRGB "gamma" function. (approx 2.2)
double inv_gam_sRGB(int ic) {
    double c = ic/255.0;
    if ( c <= 0.04045 )
        return c/12.92;
    else 
        return pow(((c+0.055)/(1.055)),2.4);
}

// sRGB "gamma" function (approx 2.2)
int gam_sRGB(double v) {
    if(v<=0.0031308)
      v *= 12.92;
    else 
      v = 1.055*pow(v,1.0/2.4)-0.055;
    return int(v*255+0.5); // This is correct in C++. Other languages may not
                           // require +0.5
}

// GRAY VALUE ("brightness")
int gray(int r, int g, int b) {
    return gam_sRGB(
            rY*inv_gam_sRGB(r) +
            gY*inv_gam_sRGB(g) +
            bY*inv_gam_sRGB(b)
    );
}
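As a quick sanity check of the code above: an R=G=B grey round-trips unchanged, e.g. gray(119, 119, 119) returns 119, because the three coefficients sum to 1 and gam_sRGB undoes inv_gam_sRGB. A fully saturated primary does not simply keep its weighted channel value; gray(255, 0, 0), for example, comes out at roughly 127 rather than 0.2127 × 255 ≈ 54, precisely because the weighting is done in linear light and then re-encoded.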

That is just the way sRGB is defined. I think the reason is that it avoids some numerical problems near zero. It would not make much difference if you just raised the numbers to the powers of 2.2 and 1/2.2.
JMD - as part of work in a visual perception lab, I have done direct luminance measurements on CRT monitors and can confirm that there is a linear region of luminance at the bottom of the range of values.
I know this is very old, but it's still out there to be searched. I don't think it can be correct. Shouldn't gray(255,255,255) = gray(255,0,0) + gray(0,255,0) + gray(0,0,255)? It doesn't.
@DCBillen: no, since the values are in non-linear gamma-corrected sRGB space, you can't just add them up. If you wanted to add them up, you should do so before calling gam_sRGB.
@DCBillen Rdb is correct. The way to add them up is shown in the function int gray(int r, int g, int b), which "uncalls" gam_sRGB. It pains me that after four years, the correct answer is rated so low. :-) Not really.. I will get over it.
Gust van de Wal

Rather than getting lost amongst the random selection of formulae mentioned here, I suggest you go for the formula recommended by W3C standards.

Here's a straightforward but exact PHP implementation of the WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formulae. It produces values that are appropriate for evaluating the ratios required for WCAG compliance, as on this page, and as such is suitable and appropriate for any web app. This is trivial to port to other languages.

/**
 * Calculate relative luminance in sRGB colour space for use in WCAG 2.0 compliance
 * @link http://www.w3.org/TR/WCAG20/#relativeluminancedef
 * @param string $col A 3 or 6-digit hex colour string
 * @return float
 * @author Marcus Bointon <marcus@synchromedia.co.uk>
 */
function relativeluminance($col) {
    //Remove any leading #
    $col = trim($col, '#');
    //Convert 3-digit to 6-digit
    if (strlen($col) == 3) {
        $col = $col[0] . $col[0] . $col[1] . $col[1] . $col[2] . $col[2];
    }
    //Convert hex to 0-1 scale
    $components = array(
        'r' => hexdec(substr($col, 0, 2)) / 255,
        'g' => hexdec(substr($col, 2, 2)) / 255,
        'b' => hexdec(substr($col, 4, 2)) / 255
    );
    //Correct for sRGB
    foreach($components as $c => $v) {
        if ($v <= 0.04045) {
            $components[$c] = $v / 12.92;
        } else {
            $components[$c] = pow((($v + 0.055) / 1.055), 2.4);
        }
    }
    //Calculate relative luminance using ITU-R BT. 709 coefficients
    return ($components['r'] * 0.2126) + ($components['g'] * 0.7152) + ($components['b'] * 0.0722);
}

/**
 * Calculate contrast ratio according to WCAG 2.0 formula
 * Will return a value between 1 (no contrast) and 21 (max contrast)
 * @link http://www.w3.org/TR/WCAG20/#contrast-ratiodef
 * @param string $c1 A 3 or 6-digit hex colour string
 * @param string $c2 A 3 or 6-digit hex colour string
 * @return float
 * @author Marcus Bointon <marcus@synchromedia.co.uk>
 */
function contrastratio($c1, $c2) {
    $y1 = relativeluminance($c1);
    $y2 = relativeluminance($c2);
    //Arrange so $y1 is lightest
    if ($y1 < $y2) {
        $y3 = $y1;
        $y1 = $y2;
        $y2 = $y3;
    }
    return ($y1 + 0.05) / ($y2 + 0.05);
}
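As a quick worked example of the contrast formula: pure white (#FFFFFF) has a relative luminance of 1.0 and pure black (#000000) has 0.0, so contrastratio('#FFFFFF', '#000000') gives (1.0 + 0.05) / (0.0 + 0.05) = 21, the maximum; two identical colours give exactly 1.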

Because, as I said, it's recommended by both the W3C and WCAG?
The W3C formula is incorrect on a number of levels. It is not taking human perception into account, they are using "simple" contrast using luminance which is linear and not at all perceptually uniform. Among other things, it appears they based it on some standards as old as 1988 (!!!) which are not relevant today (those standards were based on monochrome monitors such as green/black, and referred to the total contrast from on to off, not considering greyscale nor colors).
That’s complete rubbish. Luma is specifically perceptual - that’s why it has different coefficients for red, green, and blue. Age has nothing to do with it - the excellent CIE Lab perceptual colour space dates from 1976. The W3C space isn’t as good, however it is a good practical approximation that is easy to calculate. If you have something constructive to offer, post that instead of empty criticism.
Just to add/update: we are currently researching replacement algorithms that better model perceptual contrast (discussion in Github Issue 695). However, as a separate issue FYI the threshold for sRGB is 0.04045, and not 0.03928 which was referenced from an obsolete early sRGB draft. The authoritative IEC std uses 0.04045 and a pull request is forthcoming to correct this error in the WCAG. (ref: IEC 61966-2-1:1999) This is in Github issue 360, though to mention, in 8bit there is no actual difference — near end of thread 360 I have charts of errors including 0.04045/0.03928 in 8bit.
And to add to the thread, the replacement method for WCAG 3.0 is APCA, and can be seen at myndex.com/APCA/simple
Nils Pipenbrinck

To add to what all the others said:

All these equations work kinda well in practice, but if you need to be very precise you have to first convert the color to a linear color space (apply the inverse image gamma), take the weighted average of the primary colors and, if you want to display the color, convert the luminance back into the monitor gamma.

The luminance difference between ignoring gamma and doing proper gamma is up to 20% in the dark grays.


Community

I found this code (written in C#) that does an excellent job of calculating the "brightness" of a color. In this scenario, the code is trying to determine whether to put white or black text over the color.
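The linked C# is not reproduced here, but the usual approach looks something like this JavaScript sketch (my own, using the 0.299/0.587/0.114 weights discussed elsewhere in this thread; the 128 threshold is a common heuristic, not taken from the original code):

// Pick black or white text for a given background colour (8-bit channels)
function textColourFor(r, g, b) {
    const brightness = 0.299 * r + 0.587 * g + 0.114 * b; // 0-255 scale
    return brightness > 128 ? 'black' : 'white';
}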


That is exactly what I needed. I was doing a classic "color bars" demo, and wanted to label them on top of the color with the best black-or-white choice!
bobobobo

Interestingly, this formulation for RGB=>HSV just uses v=MAX3(r,g,b). In other words, you can use the maximum of (r,g,b) as the V in HSV.

I checked and on page 575 of Hearn & Baker this is how they compute "Value" as well.

https://i.stack.imgur.com/ubpRX.png


Just for the record the link is dead, archive version here - web.archive.org/web/20150906055359/http://…
HSV is not perceptually uniform (and it isn't even close). It is used only as a "convenient" way to adjust color, but it is not relevant to perception, and the V does not relate to the true value of L or Y (CIE Luminance).
Does that mean #FF0000 is as bright as #FFFFFF?
I believe it's more like lightness = (max(r, g, b) + min(r, g, b)) / 2 in HSL
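For concreteness, a small JavaScript sketch of both measures mentioned above (the names are mine):

// HSV "Value" and HSL "Lightness" for 8-bit R, G, B values (0-255)
function hsvValue(r, g, b) {
    return Math.max(r, g, b);                            // V = max(R, G, B)
}
function hslLightness(r, g, b) {
    return (Math.max(r, g, b) + Math.min(r, g, b)) / 2;  // L = (max + min) / 2
}
// hsvValue(255, 0, 0) and hsvValue(255, 255, 255) are both 255, which is exactly
// the objection raised in the comments: V does not track perceived brightness.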
catamphetamine

I was solving a similar task today in JavaScript. I've settled on this getPerceivedLightness(rgb) function for a HEX RGB color. It deals with the Helmholtz-Kohlrausch effect via the Fairchild and Pirrotta formula for luminance correction.

/**
 * Converts RGB color to CIE 1931 XYZ color space.
 * https://www.image-engineering.de/library/technotes/958-how-to-convert-between-srgb-and-ciexyz
 * @param  {string} hex
 * @return {number[]}
 */
export function rgbToXyz(hex) {
    const [r, g, b] = hexToRgb(hex).map(_ => _ / 255).map(sRGBtoLinearRGB)
    const X =  0.4124 * r + 0.3576 * g + 0.1805 * b
    const Y =  0.2126 * r + 0.7152 * g + 0.0722 * b
    const Z =  0.0193 * r + 0.1192 * g + 0.9505 * b
    // X, Y and Z are scaled by 100 so that white has Y = 100,
    // matching the D65 white point values used below.
    return [X, Y, Z].map(_ => _ * 100)
}

/**
 * Undoes gamma-correction from an RGB-encoded color.
 * https://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation
 * https://stackoverflow.com/questions/596216/formula-to-determine-brightness-of-rgb-color
 * @param  {number}
 * @return {number}
 */
function sRGBtoLinearRGB(color) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.
    if (color <= 0.04045) {
        return color / 12.92
    } else {
        return Math.pow((color + 0.055) / 1.055, 2.4)
    }
}

/**
 * Converts hex color to RGB.
 * https://stackoverflow.com/questions/5623838/rgb-to-hex-and-hex-to-rgb
 * @param  {string} hex
 * @return {number[]} [rgb]
 */
function hexToRgb(hex) {
    const match = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex)
    if (match) {
        match.shift()
        return match.map(_ => parseInt(_, 16))
    }
}

/**
 * Converts CIE 1931 XYZ colors to CIE L*a*b*.
 * The conversion formula comes from <http://www.easyrgb.com/en/math.php>.
 * https://github.com/cangoektas/xyz-to-lab/blob/master/src/index.js
 * @param   {number[]} color The CIE 1931 XYZ color to convert which refers to
 *                           the D65/2° standard illuminant.
 * @returns {number[]}       The color in the CIE L*a*b* color space.
 */
// X, Y, Z of a "D65" light source.
// "D65" is a standard 6500K Daylight light source.
// https://en.wikipedia.org/wiki/Illuminant_D65
const D65 = [95.047, 100, 108.883]
export function xyzToLab([x, y, z]) {
  [x, y, z] = [x, y, z].map((v, i) => {
    v = v / D65[i]
    return v > 0.008856 ? Math.pow(v, 1 / 3) : v * 7.787 + 16 / 116
  })
  const l = 116 * y - 16
  const a = 500 * (x - y)
  const b = 200 * (y - z)
  return [l, a, b]
}

/**
 * Converts Lab color space to Luminance-Chroma-Hue color space.
 * http://www.brucelindbloom.com/index.html?Eqn_Lab_to_LCH.html
 * @param  {number[]}
 * @return {number[]}
 */
export function labToLch([l, a, b]) {
    const c = Math.sqrt(a * a + b * b)
    const h = abToHue(a, b)
    return [l, c, h]
}

/**
 * Converts a and b of Lab color space to Hue of LCH color space.
 * https://stackoverflow.com/questions/53733379/conversion-of-cielab-to-cielchab-not-yielding-correct-result
 * @param  {number} a
 * @param  {number} b
 * @return {number}
 */
function abToHue(a, b) {
    if (a >= 0 && b === 0) {
        return 0
    }
    if (a < 0 && b === 0) {
        return 180
    }
    if (a === 0 && b > 0) {
        return 90
    }
    if (a === 0 && b < 0) {
        return 270
    }
    let xBias
    if (a > 0 && b > 0) {
        xBias = 0
    } else if (a < 0) {
        xBias = 180
    } else if (a > 0 && b < 0) {
        xBias = 360
    }
    return radiansToDegrees(Math.atan(b / a)) + xBias
}

function radiansToDegrees(radians) {
    return radians * (180 / Math.PI)
}

function degreesToRadians(degrees) {
    return degrees * Math.PI / 180
}

/**
 * Saturated colors appear brighter to human eye.
 * That's called Helmholtz-Kohlrausch effect.
 * Fairchild and Pirrotta came up with a formula to
 * calculate a correction for that effect.
 * "Color Quality of Semiconductor and Conventional Light Sources":
 * https://books.google.ru/books?id=ptDJDQAAQBAJ&pg=PA45&lpg=PA45&dq=fairchild+pirrotta+correction&source=bl&ots=7gXR2MGJs7&sig=ACfU3U3uIHo0ZUdZB_Cz9F9NldKzBix0oQ&hl=ru&sa=X&ved=2ahUKEwi47LGivOvmAhUHEpoKHU_ICkIQ6AEwAXoECAkQAQ#v=onepage&q=fairchild%20pirrotta%20correction&f=false
 * @return {number}
 */
function getLightnessUsingFairchildPirrottaCorrection([l, c, h]) {
    const l_ = 2.5 - 0.025 * l
    const g = 0.116 * Math.abs(Math.sin(degreesToRadians((h - 90) / 2))) + 0.085
    return l + l_ * g * c
}

export function getPerceivedLightness(hex) {
    return getLightnessUsingFairchildPirrottaCorrection(labToLch(xyzToLab(rgbToXyz(hex))))
}
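A quick usage sketch, assuming the exports above are imported from the same module:

const plainLstar = xyzToLab(rgbToXyz('#ff0000'))[0]  // CIE L* of pure red
const corrected  = getPerceivedLightness('#ff0000')  // L* plus the H-K correction
// For a saturated colour like pure red, `corrected` comes out higher than
// `plainLstar`, which is the Helmholtz-Kohlrausch effect described above.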

Kal

Consider this an addendum to Myndex's excellent answer. As he (and others) explain, the algorithms for calculating the relative luminance (and the perceptual lightness) of an RGB colour are designed to work with linear RGB values. You can't just apply them to raw sRGB values and hope to get the same results.

Well that all sounds great in theory, but I really needed to see the evidence for myself, so, inspired by Petr Hurtak's colour gradients, I went ahead and made my own. They illustrate the two most common algorithms (ITU-R Recommendation BT.601 and BT.709), and clearly demonstrate why you should do your calculations with linear values (not gamma-corrected ones).

Firstly, here are the results from the older ITU BT.601 algorithm. The one on the left uses raw sRGB values. The one on the right uses linear values.

ITU-R BT.601 colour luminance gradients

0.299 R + 0.587 G + 0.114 B

https://i.stack.imgur.com/W29rZ.png

At this resolution, the left one actually looks surprisingly good! But if you look closely, you can see a few issues. At a higher resolution, unwanted artefacts are more obvious:

https://i.stack.imgur.com/IunRs.jpg

The linear one doesn't suffer from these, but there's quite a lot of noise there. Let's compare it to ITU-R Recommendation BT.709…

ITU-R BT.709 colour luminance gradients

0.2126 R + 0.7152 G + 0.0722 B

https://i.stack.imgur.com/GRfaL.png

Oh boy. Clearly not intended to be used with raw sRGB values! And yet, that's exactly what most people do!

https://i.stack.imgur.com/S6R2T.jpg

At high-res, you can really see how effective this algorithm is when using linear values. It doesn't have nearly as much noise as the earlier one. While none of these algorithms are perfect, this one is about as good as it gets.


Gust van de Wal

Here's a bit of C code that should properly calculate perceived luminance.

#include <math.h>   // for pow and powf

// Assumed pixel type (not part of the original snippet): three 8-bit channels
typedef struct { unsigned char r, g, b; } PIXEL;

// reverses the rgb gamma
#define inverseGamma(t) (((t) <= 0.0404482362771076) ? ((t)/12.92) : pow(((t) + 0.055)/1.055, 2.4))

//CIE L*a*b* f function (used to convert XYZ to L*a*b*)  http://en.wikipedia.org/wiki/Lab_color_space
#define LABF(t) ((t >= 8.85645167903563082e-3) ? powf(t,0.333333333333333) : (841.0/108.0)*(t) + (4.0/29.0))


float
rgbToCIEL(PIXEL p)
{
   float y;
   float r=p.r/255.0;
   float g=p.g/255.0;
   float b=p.b/255.0;

   r=inverseGamma(r);
   g=inverseGamma(g);
   b=inverseGamma(b);

   //Observer = 2°, Illuminant = D65 
   y = 0.2125862307855955516*r + 0.7151703037034108499*g + 0.07220049864333622685*b;

   // At this point we've done RGBtoXYZ now do XYZ to Lab

   // y /= WHITEPOINT_Y; The white point for y in D65 is 1.0

    y = LABF(y);

   /* This is the "normal" conversion, which produces values scaled to 0-100:
    Lab.L = 116.0*y - 16.0;
   */
   return(1.16*y - 0.16); // return values for 0.0 <= L <= 1.0
}

Anonymous

RGB Luminance value = 0.3 R + 0.59 G + 0.11 B

http://www.scantips.com/lumin.html

If you're looking for how close to white the color is you can use Euclidean Distance from (255, 255, 255)

I think the RGB color space is perceptually non-uniform with respect to the L2 Euclidean distance. Uniform spaces include CIE LAB and LUV.


Dave Collier

The inverse-gamma formula by Jive Dadson needs to have the half-adjust removed when implemented in JavaScript, i.e. the return from function gam_sRGB needs to be return int(v*255); not return int(v*255+.5); Half-adjust rounds up, and this can cause a value one too high on an R=G=B (i.e. grey) colour triad. Greyscale conversion of an R=G=B triad should produce a value equal to R; it's one proof that the formula is valid. See Nine Shades of Greyscale for the formula in action (without the half-adjust).


It sounds like you know your stuff, so I removed the +0.5
I did the experiment. In C++ it needs the +0.5, so I put it back in. I added a comment about translating to other languages.
vortex

I wonder how those rgb coefficients were determined. I did an experiment myself and I ended up with the following:

Y = 0.267 R + 0.642 G + 0.091 B

Close, but obviously different from the long-established ITU coefficients. I wonder if those coefficients could be different for each and every observer, because we all may have a different number of cones and rods on the retina in our eyes, and especially the ratio between the different types of cones may differ.

For reference:

ITU BT.709:

Y = 0.2126 R + 0.7152 G + 0.0722 B

ITU BT.601:

Y = 0.299 R + 0.587 G + 0.114 B

I did the test by quickly moving a small gray bar on a bright red, bright green and bright blue background, and adjusting the gray until it blended in as much as possible. I also repeated that test with other shades. I repeated the test on different displays, even one with a fixed gamma factor of 3.0, but it all looks the same to me. Moreover, the ITU coefficients are literally wrong for my eyes.

And yes, I presumably have normal color vision.


In your experiments did you linearize to remove the gamma component first? If you didn't that could explain your results. BUT ALSO, the coefficients are related to the CIE 1931 experiments and those are an average of 17 observers, so yes there is individual variance in results.
And to add: Also, the 1931 CIE values are based on a 2° observer, and in addition there are known errors in the blue region. The 10° observer values are significantly different as the S cones are not in the foveal central vision. In both cases, effort was made to avoid rod intrusion, keeping the luminance levels in the photopic area. If measurements are made in the mesopic region, rod intrusion will also skew results.
Ian Hopkinson

The HSV colorspace should do the trick; see the Wikipedia article. Depending on the language you're working in, you may get a library conversion.

H is hue, which is a numerical value for the color (i.e. red, green...)

S is the saturation of the color, i.e. how 'intense' it is

V is the 'brightness' of the color.


Problem with the HSV color space is that you can have the same saturation and value, but different hues, for blue and yellow. Yellow is much brighter than blue. The same goes for HSL.
HSV gives you the "brightness" of a color in a technical sense. For perceptual brightness, HSV really fails.
HSV and HSL are not perceptually accurate (and it's not even close). They are useful for "controls" for adjusting relative color, but not for accurate prediction of perceptual lightness. Use L* from CIELAB for perceptual lightness.
Jacob

The 'V' of HSV is probably what you're looking for. MATLAB has an rgb2hsv function, and the previously cited Wikipedia article is full of pseudocode. If an RGB2HSV conversion is not feasible, a less accurate model would be the grayscale version of the image.


Emil

This link explains everything in depth, including why those multiplier constants exist before the R, G and B values.

Edit: It also has an explanation of one of the answers here (0.299*R + 0.587*G + 0.114*B)


Pierre-louis Stenger

To determine the brightness of a color in R, I convert the color from the RGB system to the HSV system.

In my script, I start from HEX codes for other reasons, but you can also start with RGB codes, using rgb2hsv {grDevices}. The documentation is here.

Here is this part of my code:

 sample <- c("#010101", "#303030", "#A6A4A4", "#020202", "#010100")
 hsvc <- rgb2hsv(col2rgb(sample)) # convert HEX to HSV
 value <- as.data.frame(hsvc)     # create data.frame
 value <- value[3, ]              # extract the brightness (V) row
 order(value)                     # order the colors by brightness

joe

As mentioned by @Nils Pipenbrinck:

All these equations work kinda well in practice, but if you need to be very precise you have to [do some extra gamma stuff]. The luminance difference between ignoring gamma and doing proper gamma is up to 20% in the dark grays.

Here's a fully self-contained JavaScript function that does the "extra" stuff to get that extra accuracy. It's based on Jive Dadson's C++ answer to this same question.

// Returns perceived brightness (0-1) of the given 0-255 RGB values
// Based on this C++ implementation: https://stackoverflow.com/a/13558570/11950764
function rgbBrightness(r, g, b) {
  let v = 0;
  v += 0.212655 * ((r/255) <= 0.04045 ? (r/255)/12.92 : Math.pow(((r/255)+0.055)/1.055, 2.4));
  v += 0.715158 * ((g/255) <= 0.04045 ? (g/255)/12.92 : Math.pow(((g/255)+0.055)/1.055, 2.4));
  v += 0.072187 * ((b/255) <= 0.04045 ? (b/255)/12.92 : Math.pow(((b/255)+0.055)/1.055, 2.4));
  return v <= 0.0031308 ? v*12.92 : 1.055 * Math.pow(v,1.0/2.4) - 0.055;
}

Pavel P

For clarity, the formulas that use a square root need to be

sqrt(coefficient * (colour_value^2))

not

sqrt((coefficient * colour_value)^2)

The proof of this lies in the conversion of an R=G=B triad to greyscale R. That will only be true if you square the colour value, not the colour value times the coefficient. See Nine Shades of Greyscale
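Working through that check: for an R=G=B=v grey, sqrt( 0.299*v^2 + 0.587*v^2 + 0.114*v^2 ) = sqrt( (0.299 + 0.587 + 0.114)*v^2 ) = sqrt( v^2 ) = v, since the coefficients sum to 1. With the coefficients squared as well, you would instead get v * sqrt(0.299^2 + 0.587^2 + 0.114^2) ≈ 0.67*v, so a grey would no longer map to itself.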


there are parenthesis mismatches
unless the coefficient you use is the square root of the correct coefficient.
Ben S

Please define brightness. If you're looking for how close to white the color is you can use Euclidean Distance from (255, 255, 255)
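That is, d = sqrt( (255 - R)^2 + (255 - G)^2 + (255 - B)^2 ), where a smaller d means the colour is closer to white (d = 0 for white itself, and about 441 for black).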


No, you can't use Euclidean distance between sRGB values; sRGB is not a perceptually uniform Cartesian/vector space. If you want to use Euclidean distance as a measure of color difference, you need to at least convert to CIELAB, or better yet, use a CAM like CIECAM02.