-
Hi, thanks
-
Hi @bsdis,

It is absolutely possible and I do that quite regularly. Assuming you have the data for the colour rendition chart, at the basic minimum, this is what you need to do: colour/colour/characterisation/correction.py, Line 852 in c3245b1.

We have a repository which does extraction and provides an example of colour correction here: https://github.com/colour-science/colour-checker-detection/blob/develop/colour_checker_detection/examples/examples_detection.ipynb
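The call being referenced is `colour.colour_correction`. A minimal sketch of the basic usage, where the random arrays are hypothetical stand-ins for the (24, 3) swatch values extracted from the image and the published reference values:

```python
import numpy as np
import colour

# Hypothetical placeholder data: random values stand in for the swatches
# extracted from the image and for the published reference values.
swatches_measured = np.random.random((24, 3))
swatches_reference = np.random.random((24, 3))
image = np.random.random((256, 256, 3))  # linear float RGB

# Fit a correction from the measured swatches to the reference ones and
# apply it to the whole image.
corrected = colour.colour_correction(
    image, swatches_measured, swatches_reference
)
```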
-
Hi @KelSolaar

But it seems that, especially if you look at the white background, it is still not the same in the "after calibration" images. Is this to be expected? Can that be fixed somehow?
-
Here is actually a better image. I have multiplied by 255 and capped at 255, which also shows that some channels are saturated after the calibration (the first 4 are before calibration and the last 4 are after). Is this because I need to do white balancing also?
-
Hi @bsdis,

This is a great example of why relying on a colour rendition chart for colour correction is not the solution to all imaging calibration problems. What is happening here is that your scene illumination is changing, while the ColorChecker Classic values are given for a particular illuminant, i.e. CIE D50. Ideally, for things to really work, your scene should be lit with CIE D50 so that you can perform colour correction against the published values.

Ideally, you would need to measure the spectral irradiance of the scene and the spectral reflectance of the ColorChecker Classic (with a spectrophotometer) and recompute its values for the particular scene, along the lines of the sketch below. Alternatively, something cheap is to neutralise on the bottom row after the colour correction, but it is obviously not ideal!

I hope that makes sense!
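For the recomputation route, a sketch along these lines should work with a recent version of Colour; CIE D50 stands in here for a measured scene spectral distribution, which is an assumption of this example:

```python
import numpy as np
import colour

# CIE D50 stands in for a measured scene spectral power distribution.
illuminant = colour.SDS_ILLUMINANTS["D50"]

# Spectral reflectances of the ColorChecker swatches.
swatches = colour.SDS_COLOURCHECKERS["ColorChecker N Ohta"]

# Recompute the swatch values under the given illuminant, keeping the
# data linear for the subsequent colour correction.
reference_RGB = np.array(
    [
        colour.XYZ_to_sRGB(
            colour.sd_to_XYZ(sd, illuminant=illuminant) / 100,
            apply_cctf_encoding=False,
        )
        for sd in swatches.values()
    ]
)
```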
-
Hi @KelSolaar

Aha - yes, that makes perfect sense. I have to keep that in mind.

So I guess Sorry - I hope it's not too many questions. Thanks :)
-
Yeah, so you would need to neutralise before that conversion; colour correction, white-balancing and most of the operations must ideally be done on floating-point linear data. The idea is to compute the average RGB colour of one of the swatches of the bottom row (we commonly take the third from the bottom right in the VFX industry), divide your image by it so that this swatch is white (or close to it, because you only took the average), and then multiply the image by the reference value of that swatch; it should be around [0.19~, 0.19~, 0.19~] for sRGB. See the sketch below.
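A minimal sketch of that neutralisation; the random data and the swatch location are hypothetical stand-ins for a real linear floating-point capture and the Neutral 5 swatch within it:

```python
import numpy as np

# Hypothetical stand-ins for a real capture and for the location of the
# Neutral 5 swatch (third from the bottom right) within it.
image = np.random.random((256, 256, 3))  # linear float RGB
swatch_region = image[200:210, 100:110]

# Average RGB of the swatch region.
neutral_5 = np.mean(swatch_region, axis=(0, 1))

# Dividing makes the swatch (close to) achromatic; multiplying scales it
# back to its reference value, ~[0.19, 0.19, 0.19] in linear sRGB.
image_neutralised = image / neutral_5 * 0.19
```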
-
Hi @KelSolaar

Hmm, I think I might have misunderstood you.

So the code becomes like this:

But this gives me quite strange results.
-
You would need to divide by the average value of the colour checker in the image, i.e.
-
As I understand the code, the following variables are:

So: divide by the average value of the colour checker in the image, i.e. COLOUR_CHECKER_IMAGES[I], and multiply by that of the reference one.

Is that correct? It still gives quite different backgrounds though. So I guess that's due to my scene illumination changing while the ColorChecker Classic values are given for a particular illuminant, like you mentioned, right? What I actually wanted to achieve was to be able to take an arbitrary image under arbitrary illumination with an arbitrary camera, and somehow calibrate those into the same colourspace, so that I could directly compare the colours in the images. In each picture there would always be a colorchecker, of course.
-
Not really, you would need to divide each given image by its own colour checker Neutral 5 swatch average: because each image has its own illuminant, you need to neutralise each one of them accordingly and individually.

Assuming you have neutralised each of them, you will see that not only are their whites closer, but also that their exposure is roughly the same! You will still have discrepancies though, because of the illuminant differences. Something like the sketch below.
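A sketch of that per-image loop, assuming `images` is a list of linear floating-point RGB arrays; `neutral_5_average` is a hypothetical extraction helper, not part of Colour:

```python
import numpy as np

def neutralise(image, neutral_5):
    # Each image is divided by its *own* swatch average, since each one
    # has its own illuminant; 0.19 is the approximate linear sRGB value
    # of the Neutral 5 swatch.
    return image / neutral_5 * 0.19

# `neutral_5_average` is a hypothetical helper returning the average RGB
# of the Neutral 5 swatch of a given image.
images_neutralised = [
    neutralise(image, neutral_5_average(image)) for image in images
]
```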
-
Hi @KelSolaar

The output of colour.cctf_encoding seems to be in the range -0.52 to 1.47. Should that not be in the range 0 to 1?
-
Hi,

Absolutely, and good question! We never (pretty much never) clip data in Colour, because this is what allows inversion of the transforms. Yes, you will often get negative values and values over one; this is by design. Whether it makes sense is something we expect the user to assess, and it is dependent on the usage. For your current case, because you are plotting an image, it would make sense to clip the values. They are effectively being clipped by
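A sketch of clipping at display time only, with random linear data standing in for a real corrected image:

```python
import numpy as np
import colour

# Random linear data standing in for a real corrected image; the encoded
# values may fall outside [0, 1].
RGB_linear = np.random.uniform(-0.1, 1.2, (4, 4, 3))
RGB_encoded = colour.cctf_encoding(RGB_linear)

# Clip a copy for display only, so the stored data stays invertible.
RGB_display = np.clip(RGB_encoded, 0, 1)
```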
-
I see. Instead of clipping, wouldn't it make more sense to keep as much of the information as possible by shifting so that min = 0 and dividing by the max value to get 0 <= x <= 1?
-
Good question again! We cannot really decide for the user here: given we are often working with negative out-of-gamut colours or very large HDR values, e.g. [1e7, 1e9], the image would be entirely black except for a small speckle, i.e. the Sun, in a lot of cases. A single large value is enough to throw you off trajectory here, and it happens all the time in the type of applications we use Colour for. We really do swim in an ocean of nans and infs :)
-
Haha, yes, OK. That's a really good point :D
-
Depending on the number of terms/degree you use, you will get the same results; basically, all the methods are equivalent for degree 1 (linear).
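That equivalence can be sanity-checked along these lines, with random swatch data standing in for real measurements; the expectation (not a guarantee) is that the linear settings coincide:

```python
import numpy as np
import colour

# Random data standing in for measured and reference swatches.
M_T = np.random.random((24, 3))
M_R = np.random.random((24, 3))
RGB = np.random.random((8, 8, 3))

# Both settings should reduce to fitting a single 3x3 matrix.
a = colour.colour_correction(RGB, M_T, M_R, method="Cheung 2004", terms=3)
b = colour.colour_correction(RGB, M_T, M_R, method="Finlayson 2015", degree=1)
print(np.allclose(a, b))  # expected: True
```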
-
@KelSolaar I was hoping I could maybe get your opinion on the path I have chosen so far. My use case is that I need to compare colours in different images across many images. The challenge here is that I have no idea about the lighting used in each image and no idea about the camera used for each image (it could be some phone cam or some DSLR cam or whatever). The only thing I know is that an X-Rite colorchecker is present in each image taken. Input images are sRGB JPEG, which is the default output from many devices.
-
I would maybe try to see if you can infer the illuminant from the scene; there is a great deal of literature on the topic, e.g. https://www.groundai.com/project/rehabilitating-the-color-checker-dataset-for-illuminant-estimation/. I never tried any of that, unfortunately, but it is certainly on my radar when time permits! Otherwise, your approach is certainly correct!
-
This publication is of particular interest: https://arxiv.org/abs/1504.04548
-
Aha... that sounds really interesting... I will look into that. Thanks a lot for the pointers! I will be sure to feed back what I find out :)
-
Closing this one for now! Feel free to add comments and I will re-open!
-
Hi @KelSolaar, I have some questions:

Thanks
-
Hi, I'm trying the notebook to do a colour correction of a custom image. However, I can't seem to get the thing to run correctly at all (even with no modifications). This is the error I encounter:

Anyone know how to fix this?
-
Hi @KelSolaar @bsdis @alianoroozi, I have been following your conversation with bsdis; it was very useful. Instead of the 24-colour average, I am using an algorithm to find the dominant colour. It seems to work, but I am consistently facing an issue with calibrating some images that have more brightness in them. Also, I tried the white balancing as you mentioned earlier in the thread, but it worsens my problem after including it. I am attaching the code and the results. Please let me know where I am going wrong, and what I can do to solve that type of issue.
-
Hello @danoneata,
Cheers, Thomas