is designed to take an image as input and output an RGB triplet representing the image's dominant color. You can pass a callback function to skew the algorithm toward colors you prefer (dark, light, saturated, etc.), giving those colors a higher chance of winning. Unlike an average, for example, this algorithm always returns a color that is actually present in the image.
You don’t care about the blah-blah?
What’s the idea?
It simply iterates over all the pixels in the image and stores each color value as a key in a hash map; the value is the number of pixels of that color encountered so far. On its own this gives poor results, so here are the improvements I wrote:
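A minimal Python sketch of that basic counting pass (the function name and the pixel format are illustrative, not the library's actual API):

```python
# Naive pass: tally each pixel's exact RGB triplet and return the most
# frequent one. This is the starting point before the improvements below.
from collections import Counter

def dominant_color(pixels):
    """pixels: iterable of (r, g, b) tuples."""
    counts = Counter(pixels)             # color -> number of pixels
    color, _ = counts.most_common(1)[0]  # color with the highest count
    return color

print(dominant_color([(255, 0, 0), (255, 0, 0), (0, 0, 255)]))  # (255, 0, 0)
```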
- I process at most 5000 pixels, so if the image is bigger I start undersampling. This is an order of magnitude, not a hard rule.
- To avoid noise from flat areas (text, borders, …), I start by right-shifting all the RGB values by 6. All the values are then between 0 and 3, so each group encompasses a large range of almost-identical colors. I then perform another pass of the same algorithm (with a shift of 4) on only the pixels that fell into the previous winning color group, and so on until the shift reaches zero.
- When looking for the group with the most pixels, I use a callback to weight the score. This allows customization: you can tell the algorithm you want a rather dark color, that you want to exclude black, that you want only highly saturated colors excluding the greys, or anything else you can think of.
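Putting the coarse-to-fine shifting and the weighting callback together, a sketch in Python could look like this (function name, shift schedule as written, and callback signature are my assumptions, not the library's real interface):

```python
# Coarse-to-fine search: group colors by their right-shifted RGB values,
# keep only the pixels of the winning group, then refine with a smaller
# shift until the shift reaches zero and each group is a single exact color.
from collections import defaultdict

def find_dominant_color(pixels, weight=lambda color, count: count):
    """pixels: iterable of (r, g, b) tuples.
    weight: scoring callback taking a representative color and a pixel
    count, returning a score (hypothetical signature)."""
    candidates = list(pixels)
    for shift in (6, 4, 2, 0):
        groups = defaultdict(list)
        for r, g, b in candidates:
            groups[(r >> shift, g >> shift, b >> shift)].append((r, g, b))

        def score(key):
            # Scale the group key back up toward 0..255 as a representative
            # color, then let the callback weight the raw pixel count.
            r, g, b = (c << shift for c in key)
            return weight((r, g, b), len(groups[key]))

        best = max(groups, key=score)
        candidates = groups[best]  # refine within the winning group only
    # At shift 0 every pixel in the group has the same exact color.
    return candidates[0]
```

For example, to bias the result toward dark colors you could pass a callback that multiplies the count by how far the color is from white: `weight=lambda color, count: count * (765 - sum(color))`.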
How does it work?
Very simple: see the user guide.
To get in touch, the simplest way is to leave me a comment on the blog. For issues, you can go to the GitHub project.
- June 20, 2013: version 1.1 promoted to stable.
- June 20, 2013: version 1.1-rc1 – Various improvements to avoid aliasing of colors when building groups. This also translates into better performance.
- June 18, 2013: version 1.0 – First release.