master
MMaker 2023-06-10 13:46:54 -04:00
parent affd4a859a
commit 5a909bd55c
Signed by: mmaker
GPG Key ID: CCE79B8FEDA40FB2
3 changed files with 63 additions and 1 deletions

.gitignore vendored 100644

@@ -0,0 +1 @@
__pycache__

README.md

@@ -1,2 +1,25 @@
# Color Enhance
Script for [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to enhance colors.
This is the same algorithm GIMP/GEGL uses for color enhancement. The gist of this implementation is that it converts the color space to [CIELCh(ab)](https://en.wikipedia.org/wiki/CIELUV#Cylindrical_representation_(CIELCh)) and normalizes the chroma (or ["colorfulness"](https://en.wikipedia.org/wiki/Colorfulness)) component. Original source can be found in the link below.
https://gitlab.gnome.org/GNOME/gegl/-/blob/master/operations/common/color-enhance.c
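The normalization step itself is small. A minimal sketch in pure Python (the names `lerp` and `chroma_scale` are mine, not GEGL's; the real implementation applies the factor per-pixel to the chroma channel):

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation: returns a at t=0, b at t=1."""
    return (1 - t) * a + t * b

def chroma_scale(c_max: float, strength: float) -> float:
    """Factor applied to the chroma channel so that, as strength
    approaches 1, the image's peak chroma is rescaled toward 100."""
    return 100 / lerp(100, c_max, strength)

print(chroma_scale(40.0, 1.0))   # dull image (peak chroma 40): boosted 2.5x
print(chroma_scale(40.0, 0.0))   # strength 0: factor 1.0, no change
print(chroma_scale(100.0, 1.0))  # already saturated: factor 1.0, no change
```

Note the last case: an image whose peak chroma is already at the ceiling gets a factor of exactly 1, regardless of strength.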
In my personal (and possibly subjective) opinion, this removes the need to select a VAE based solely on color, letting you instead choose one on its true merit: how accurately it converts the latent space into a pixel representation of the image.
### Example
Left is the raw output (with a manually selected VAE!), right is with the enhancement (at a full strength of `1`).
![Comparison](https://files.catbox.moe/4ze471.jpg)
### Notes
Install the script shown below this README into the `scripts` folder in the root directory of the web UI. To use it directly in the txt2img and img2img tabs, go to Settings -> Postprocessing and add this script to the `Enable postprocessing operations in txt2img and img2img tabs` option. Save and restart the web UI to apply the changes.
This should not cause any issues with hires fix, inpainting, etc., since the web UI's postprocessing pipeline applies postprocessing scripts only immediately before returning the final image(s). There should also be no quality degradation from reusing the same image multiple times (as in inpainting), since the operations are performed in floating point before being converted back to uint8. For peace of mind, you can always run this postprocessing script in the `Extras` tab as a last step.
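As a toy illustration of why repeated passes do not accumulate error (this is not the extension's actual pipeline, just the round-trip it relies on):

```python
import numpy as np

# Every possible 8-bit value survives a uint8 -> float64 -> uint8 round trip
# unchanged, so a pass whose scale factor works out to 1.0 cannot drift.
pixels = np.arange(256, dtype=np.uint8)
roundtrip = (pixels.astype(np.float64) * 1.0).astype(np.uint8)
print(np.array_equal(pixels, roundtrip))  # -> True
```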
If you test this on an image you believe is already heavily saturated/colorful and apply it at full strength, you should see essentially no change to the image. This illustrates how working in this color space removes the need to worry about oversaturating or blowing out an image.
As far as I know, this technique dates back to the early 2000s. Examples can be seen [here](https://en.wikipedia.org/wiki/Colorfulness#Chroma) and [here](https://en.wikipedia.org/wiki/HSL_and_HSV#Disadvantages), showing how it improves on traditional HSL/HSV color grading. There have since been [many](https://en.wikipedia.org/wiki/Color_appearance_model#Color_appearance_models) new developments in this area, but I haven't looked into them yet, and this approach already seems to perform very well. Newer techniques may not yield a visible improvement anyway, since SD only outputs 24-bit RGB images, which ultimately limits the options for color grading.


@@ -0,0 +1,38 @@
import gradio as gr
import imageio.core.util
import numpy as np
import skimage.color
from PIL import Image

from modules import scripts_postprocessing
from modules.ui_components import FormRow

# Silence imageio's precision-loss warning emitted during uint8 conversion
imageio.core.util._precision_warn = lambda *args, **kwargs: None
class ScriptPostprocessingColorEnhance(scripts_postprocessing.ScriptPostprocessing):
    name = "Color Enhance"
    order = 30000

    def ui(self):
        with FormRow():
            strength = gr.Slider(label="Color Enhance strength", minimum=0, maximum=1, step=0.01, value=0)
        return {"strength": strength}

    def process(self, pp: scripts_postprocessing.PostprocessedImage, strength):
        if strength == 0:
            return
        # Preserve PIL metadata (e.g. generation parameters) across the conversion
        info_bak = pp.image.info if hasattr(pp.image, "info") else {}
        pp.image = self._color_enhance(pp.image, strength)
        pp.image.info = info_bak
        pp.info["Color Enhance"] = strength

    def _lerp(self, a: float, b: float, t: float) -> float:
        return (1 - t) * a + t * b

    def _color_enhance(self, img, strength: float = 1) -> Image.Image:
        # RGB -> CIELAB -> CIELCh(ab)
        lch = skimage.color.lab2lch(lab=skimage.color.rgb2lab(rgb=np.array(img, dtype=np.uint8)))
        # Normalize the chroma channel: at full strength, the image's peak chroma is rescaled to 100
        lch[:, :, 1] *= 100 / self._lerp(100, lch[:, :, 1].max(), strength)
        # CIELCh(ab) -> CIELAB -> RGB
        return Image.fromarray(np.array(skimage.color.lab2rgb(lab=skimage.color.lch2lab(lch=lch)) * 255, dtype=np.uint8))