9-10 June 2022 | Computational Color Imaging Workshop 2022

9 June 2022 – 9:00 – 13:00 (Paris – CEST), 16:00 – 20:00 (Tokyo – JST)

10 June 2022 – 9:00 – 13:00 (Paris – CEST), 16:00 – 20:00 (Tokyo – JST)


Created in 2007, the Computational Color Imaging Workshop is a premier international forum on color imaging and advanced types of images (spectral, 3D, etc.), covering acquisition, processing, rendering, quality assessment, analysis, and reproduction. The workshop also addresses color vision and material appearance. Applications span many fields: computer vision, health and beauty, arts and design, video and displays, printing and manufacturing, remote sensing, the natural sciences, etc.

The workshop presents state-of-the-art and pioneering research in the following fields:

  • Color vision
  • Color, spectral, 3D imaging
  • Computer graphics
  • Computational lighting
  • Color image/video quality
  • Physical modeling for color
  • Color image reproduction
  • Digital printing and fabrication
  • Secured image
  • Material Appearance
  • Color acquisition, calibration and display

Best Young Researcher Presentation Award

The Best Young Researcher Presentation Prize has been awarded to Morgane Gerardin (INRIA, France).

Special honors are also attributed to Donghui Li (Chiba University, Japan), and Marco Buzzelli (Università degli studi di Milano – Bicocca, Italy).

Congratulations to them!

Morgane Gerardin graduated from the Institut d’Optique Graduate School, France, in 2018. She received her Ph.D. degree from Université Grenoble Alpes in 2021 for her work connecting physical and chemical properties with the appearance of dry pigments.

She is currently a postdoctoral researcher at Inria, France; her research interests include measuring and modeling translucent material appearance for photo-realistic rendering.


Talks, Summaries and Slides

Session 1 – Spectral Imaging | June 9, 2022

David Rousseau (Université d’Angers, France) – KEYNOTE | When spectral imaging meets machine learning [SLIDES]

In the era of data-driven computer vision, unequaled performance is achievable with advanced machine learning algorithms such as deep learning. Because deep learning follows an end-to-end optimization process, one can skip the conventional metrological colorimetric step and instead focus on the extraction of information. This can relax some constraints on the design of the instrumentation. In this talk, we will provide examples of how deep learning can be used to decrease the cost of instrumentation when included in such a computational imaging strategy. This is illustrated with recent applications in the domain of spectral imaging of plants.

Aiman Raza (ENTPE, France) – Framework for quality control of spectral imaging devices [SLIDES] [Full Paper]

Hardware development of spectral imaging has reached new heights with the commercial availability of good-quality, portable, and fast imaging systems. This creates growing interest in in-situ acquisition, with innovative research on indoor and outdoor scenes. In this work, we aim to estimate the errors made by commercially available hyperspectral imaging systems when capturing real scenes under different illuminants. The topic of interest is the accuracy of commercially available hyperspectral systems compared to reference spot spectroradiometers. The hyperspectral cameras tested were found to have acceptable radiometric accuracy for chromatic content and varying photometric and colorimetric accuracy. It was also identified that good radiometric/photometric precision does not necessarily indicate good colorimetric precision for the same device and color; this depends on the light sources and color patches, highlighting the need to methodically identify the reproduction accuracy of every test device. This accuracy study thus describes a formal layout for the characterization of hyperspectral imaging devices using identifiable error metrics.
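As a rough illustration of the kind of error metrics involved (the function names and formulas below are generic stand-ins, not the paper's exact protocol), radiometric and colorimetric errors can be computed separately, which is why a device may score well on one and poorly on the other:

```python
import numpy as np

def spectral_rmse(r_cam, r_ref):
    """Radiometric error: RMSE between camera and reference spectra."""
    return float(np.sqrt(np.mean((np.asarray(r_cam, float) - np.asarray(r_ref, float)) ** 2)))

def delta_e76(lab_cam, lab_ref):
    """Colorimetric error: CIE76 Delta E*ab between two CIELAB triplets."""
    return float(np.linalg.norm(np.asarray(lab_cam, float) - np.asarray(lab_ref, float)))

# Both metrics would be reported per color patch and per illuminant:
# a device can be radiometrically close to the reference yet colorimetrically off.
```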

Ken-ichi Ito (Toyohashi University of Technology, Japan) – Computational Lighting: Optimization of light source spectrum for detecting oral lesions


Session 2 – Material appearance and printing | June 9, 2022

Hiroyuki Kubo (Chiba University, Japan) – KEYNOTE | Light Transport Acquisition and Analysis: from Vision to Graphics. [SLIDES]

Donghui Li (Chiba University, Japan) – CCIW honor | Verification of gloss representation in printing using Texture-Aware Error Diffusion Algorithm

In printing, applications of image processing technology have fundamentally improved the efficiency of print reproduction, playing an important role in the transition from screening to halftone technology. Recently, texture-aware printing technology has gradually become a goal of the printing industry for improving appearance reproduction. Our texture-aware error diffusion (TAED) algorithm was proposed to achieve this goal. It is a method to properly represent the textures of halftone images, but we believe that the representation of texture is also strongly related to the representation of perceived gloss. To further verify the relationship between them, we applied TAED to images containing glossy objects. To verify the feasibility of the method, we used our proposed gloss index. As a result, it was confirmed that halftone images produced with TAED represented gloss appropriately compared to other methods.
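The TAED algorithm itself is not detailed in this abstract; for readers unfamiliar with error diffusion, the classic Floyd–Steinberg variant that such methods build on can be sketched as follows (a minimal illustration of plain error diffusion, not the texture-aware method presented in the talk):

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale image in [0, 1] by classic error diffusion:
    each pixel's quantization error is pushed onto unprocessed neighbors."""
    img = np.asarray(gray, dtype=float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # threshold to black/white
            out[y, x] = new
            err = old - new
            # Floyd-Steinberg weights: 7/16 right, 3/16, 5/16, 1/16 below
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the error is conserved, the halftone preserves local mean tone; texture-aware variants additionally steer where the dots land so that image texture (and, per this talk, perceived gloss) survives halftoning.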

Simon Steilin, Thierry Fournel (Université Jean Monnet Saint-Etienne, France) – Pattern analysis in old printed books for the detection of counterfeits

Novelty detection and localization in ornaments can help a human expert in various applications, such as the attribution of Enlightenment-era books (telling originals apart from counterfeits produced under the censorship of the time), the monitoring of the evolution of the printing press tool, or the detection of a change in graphical design due to untoward printing defects. In addition, a proper rendering is required to efficiently assist the user. In this work, reconstruction-based anomaly localization by means of an autoencoder is revisited. A spatial transformer is placed upstream of a classic one-class autoencoder in order to perform an in-place reconstruction from a low-dimensional latent representation. A smoothed novelty map is computed from the gradient of the difference between the input sample image and its reconstruction. The resulting map is superimposed as a translucent layer, in the Kubelka framework, on the input image (the background) to obtain a well-resolved, highlighted (achromatic or chromatic) rendering of novelties. The ability to locate novelties is qualitatively and quantitatively compared to that of vanilla autoencoders on two datasets generated from MNIST and from printed catalogs of old vignettes, respectively.
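The novelty-map step described above can be sketched generically (illustrative only: the box smoothing below stands in for whatever smoothing the authors use, and the spatial transformer and autoencoder themselves are not included; the map is computed from a given input/reconstruction pair):

```python
import numpy as np

def novelty_map(x, x_rec, k=3):
    """Smoothed novelty map from the gradient of the reconstruction residual."""
    diff = np.asarray(x, float) - np.asarray(x_rec, float)
    gy, gx = np.gradient(diff)          # residual gradient (rows, cols)
    mag = np.hypot(gx, gy)              # gradient magnitude
    # box smoothing of half-width k (a simple stand-in for Gaussian smoothing)
    pad = np.pad(mag, k, mode="edge")
    h, w = mag.shape
    smooth = np.zeros_like(mag)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            smooth += pad[dy:dy + h, dx:dx + w]
    return smooth / (2 * k + 1) ** 2
```

A perfect reconstruction yields an all-zero map; regions the autoencoder cannot reproduce (novelties) light up and can then be overlaid on the input image.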

Nicolas Dalloz (HID Global, Université Jean Monnet Saint-Etienne, France) – Algorithms for image multiplexing with laser-induced metasurfaces


Session 3 – Color Vision | June 10, 2022

Jean-Baptiste Thomas (Université de Bourgogne, France, and NTNU, Norway) – KEYNOTE | Standardization of spectral imaging: What is the RGB of spectral images? [SLIDES]

Spectral imaging sensors provide data of varying dimensionality and quality, spanning from spectral radiance per wavelength at a given sampling to a set of values corresponding to the integration of radiance over a set of spectral filters. This variety was a good asset for the development of spectral imaging solutions that could be tuned to specific applications. However, it becomes a limitation when it comes to generalised use of the data. Several aspects may benefit from increased standardisation, e.g. communication, encoding and algorithmics. In the colour imaging case, we accepted RGB as a standard representation for colour images, and even if each RGB does not correspond to similar capturing protocols, it provides a strong basis to communicate, store, visualise and handle digital colour images. For example, a huge collection of colour images captured with different colour cameras was used to successfully train machine learning solutions and achieve classification tools that perform over a great range of colour images captured with different cameras. Similar attempts based on spectral images are generally limited to a reduced set of data captured with similar sensors, often by the same group of people, which limits the usability and deployment of the solutions developed. We propose to discuss such standardisation by going through some of the constraints we may want to enforce and some of the hypotheses we may use to guide its design. I will also provide tentative directions we may want to follow.

Marco Buzzelli (Università degli studi di Milano – Bicocca, Italy) – CCIW honor | Angle-Retaining Chromaticity: color invariants and properties [SLIDES]

The Angle-Retaining Chromaticity diagram (ARC) is used to map tristimulus values into a two-dimensional representation, so that angular distances in the original three-dimensional space are preserved as Euclidean distances in ARC. This property makes the ARC diagram particularly useful for computational color constancy, where the illuminant intensity is purposely discarded and illuminant chromaticities are compared in terms of angular distances, through either the recovery error or the reproduction error. The extension of the ARC diagram into a full color space will be presented, along with resulting properties and invariants to pixel value transformations.
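The recovery error mentioned above is the standard angular error of computational color constancy; a minimal sketch (illustrative, not taken from the talk) shows why illuminant intensity is discarded, since scaling either RGB leaves the angle unchanged:

```python
import numpy as np

def recovery_angular_error(rgb_est, rgb_gt):
    """Angle in degrees between estimated and ground-truth illuminant RGBs.
    Invariant to the overall intensity (norm) of either vector."""
    a = np.asarray(rgb_est, float)
    b = np.asarray(rgb_gt, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

ARC's defining property is that this angular distance in RGB maps to a Euclidean distance in the two-dimensional diagram.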

Emilie Robert (CNES, France) – Toward individualized images: quantifying the need [SLIDES]

Image-processing pipelines exist in a wide variety in the photography industry. For scientific applications such as geology, ocean observation, dermatology, etc., it is of strong interest to have full control over the image-processing pipeline in order to provide an image strictly linked to the scene radiometry. Often, when the application is highly human-related, the image also has to be easy to interpret and therefore needs to reflect human perception. This last point suggests adding an “individualization” stage to the image-processing pipeline for color-critical applications. Indeed, the results presented in this talk quantify the inter-individual variability existing in the largest Color Matching Functions database ever measured. The original study made in this work allows comparing the color vision differences between 151 observers to other color errors along the image-processing pipeline. It shows a strong interest in developing individualized imaging when the application is color-critical and strongly linked to human perception.

Rei Nakayama (Chiba University, Japan) – Relationship between spectral bandwidth and color vision diversity

Claudio Rota (Università degli studi di Milano – Bicocca, Italy) – Visual media enhancement using CNNs [SLIDES]

Images and videos are common in our daily life, and although modern digital cameras can capture high-quality visual media, in some situations their perceived quality can be considerably reduced by visible artifacts introduced by the camera imaging pipeline or during post-processing. Image and video enhancement using Convolutional Neural Networks (CNNs) is rapidly gaining attention, as it makes it possible to enhance visual media even in cases where traditional methods fail. The key elements of these methods are briefly analyzed in the talk, and open issues and future challenges are highlighted.


Session 4 – Color of materials | June 10, 2022

Pichayada Katemake (Chulalongkorn University, Thailand) – KEYNOTE | Optimizing multi-coloured LEDs for identifying pigments based on Self-Organizing Map and Principal Component Analysis [SLIDES]

Colorimetric and spectral reflectance data for a great number of pigments, obtained using narrow-band multi-colored LEDs as light sources instead of colored filters, were used to optimize the number of LED channels for identifying pigments in conservation and restoration applications. Self-Organizing Map and Principal Component Analysis techniques were applied. The performance of these techniques will be discussed.
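As a rough illustration of the PCA side of the analysis (the SOM part and the actual LED measurement data are not reproduced here; the function below is a generic sketch), principal components of a set of reflectance spectra can be computed via an SVD of the mean-centered data:

```python
import numpy as np

def pca_spectra(R, n_components=3):
    """PCA of reflectance spectra.
    R: (n_pigments, n_wavelengths) matrix, one spectrum per row.
    Returns per-pigment scores and the principal spectral components."""
    R = np.asarray(R, float)
    Rc = R - R.mean(axis=0)                       # center each wavelength
    U, S, Vt = np.linalg.svd(Rc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    return scores, Vt[:n_components]
```

Projecting spectra onto a few components like this indicates how many independent channels (here, LED bands) are needed to separate the pigments.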

Morgane Gerardin (INRIA, France) – Best Young Researcher Presentation Award | Color variations and appearance modelization of pigments used in parietal painting: case of hematite dry powders [SLIDES]

The appearance of dry powders is complex to describe because it involves multiple physical phenomena that depend on the chemical nature, size and morphology of the grains. In a bulk material of known chemical composition, absorption is the main phenomenon responsible for the perceived color, but for dry powders several scattering phenomena occurring both at the surface and inside the medium also contribute to the coloration of the material. As a result, various shades are observed for powders made of the same material but whose grain morphologies differ from one another. From Bidirectional Reflectance Distribution Function (BRDF) measurements, back-scattering is identified as the main phenomenon involved in powder appearance. We propose an analytical, physically based BRDF model that accurately reproduces measurements performed on optically thick layers of dry powders with various grain morphologies. Our results are significantly better than those obtained with existing models. We focused on hematite α-Fe2O3, as it is a traditional pigment responsible for the different shades of red observed in parietal paintings. The study is performed on several pure nano-crystallized α-Fe2O3 hematite powders synthesized to mimic the color range observed on actual paintings.

Davide Marelli (Università degli studi di Milano – Bicocca, Italy) – Material appearance acquisition of textiles.

Being able to provide a spatial representation of the appearance of a planar textile material enables its usage in physically based rendering engines. This representation can be used in a virtual 3D scene, allowing photorealistic rendering under different lighting and viewing conditions. In this talk, I present the design and development of a custom hardware and software solution for acquiring the appearance of textiles. The hardware device is designed to be portable, easy to assemble, and made of cheap consumer components. The software pipeline builds on the photometric stereo technique to recover the Spatially Varying Bidirectional Reflectance Distribution Function, modeled as surface normal, diffuse albedo, and roughness maps. The advantages, limitations, and possible improvements of the proposed solution are discussed.
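The Lambertian photometric stereo step the pipeline builds on can be sketched as a per-pixel least-squares problem (a generic textbook illustration under a Lambertian assumption; the speaker's actual pipeline, device geometry, and roughness estimation are not reproduced here):

```python
import numpy as np

def photometric_stereo(I, L):
    """Recover per-pixel normals and albedo from images under known lights.
    I: (k, n) intensities for k light directions and n pixels.
    L: (k, 3) unit light directions.
    Lambertian model: I = L @ (albedo * normal), solved by least squares."""
    I = np.asarray(I, float)
    L = np.asarray(L, float)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G = albedo * normal, shape (3, n)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)     # normalize to unit normals
    return normals, albedo
```

With at least three non-coplanar light directions the system is determined; the recovered normal and albedo maps then feed the SVBRDF fitting stage.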

Fanny Dailliez (Université Grenoble-Alpes, France) – Verifying the optical effect of a clear coating on the halftone patterns of printed surfaces by using multispectral spectroscopy [SLIDES]

Color characterization of halftone prints is usually done with a spectrophotometer that acquires the average spectral reflectance of the print over a rather large area. It is also possible to look at the small halftone patterns with a microscope, and this is even necessary to observe the effect of the multi-convolutive reflection process that occurs when the print is covered with a protective transparent layer. In this work, we present a multispectral microscope we have developed, which gives spectral reflectance averaged over a large area similar to that of a spectrophotometer, and use this device to compare images of the same print with and without a protective layer. We also show that our predictive model of the multi-convolutive process, applied to the images without a protective layer, predicts microscopic images of the print with the protective layer that are very close to the acquired ones.


Call for papers

Speakers and attendees are all invited to submit a paper to the MDPI Journal of Imaging; accepted papers will be published, after peer review, in a special issue, Selected Papers from CCIW. Submit your contribution through: https://www.mdpi.com/journal/jimaging/special_issues/CCIW


Organizers

  • Mathieu Hébert, Université Jean Monnet Saint-Etienne, France
  • Shoji Tominaga, Norwegian University of Science and Technology, Norway
  • Raimondo Schettini, Università degli Studi di Milano-Bicocca, Italy
  • Alain Trémeau, Université Jean Monnet Saint-Etienne, France