Our Academy is a unique training and education resource for your team

DXOMARK Academy offers extensive instruction about image quality. Our curriculum includes intensive workshops about image quality fundamentals, expert sessions on select topics, and training focused on in-depth smartphone camera evaluation using Analyzer. Our team is also available to develop customized workshops. We can conduct all workshops and training sessions listed below either at your site or in our offices. We also offer training sessions for lab operators who will handle equipment and perform image quality tests.

Image Quality Fundamentals

DXOMARK Academy’s Image Quality Fundamentals workshops introduce junior engineers and beginners to various aspects of image quality such as camera design and hardware, camera tuning, objective measurements, perceptual analysis, and how to measure photo and video attributes. These 3-day workshops are intensive and are designed for a maximum of 8 attendees per session. They also include practical lab testing using Analyzer.

Expert Sessions

Image quality evaluation is a vast field that evolves along with camera technology. For those who are already familiar with the basic notions and techniques of image quality testing, DXOMARK Academy's Expert Sessions provide a deeper understanding of specific areas of image quality such as HDR, color, exposure, and selfie testing. Each Expert Session covers a single topic over 3 days and is held for a maximum of 8 attendees at a time.

Operator Workshops

The first step of image quality testing is shooting photos and performing a preliminary analysis of those photos. Lab operators need precise guidelines about how to take photos based on the image quality attribute being tested; required testing conditions; and how to use the available lab equipment. DXOMARK Academy provides detailed workshops for lab operators that include step-by-step guidelines for everyday image quality testing. These workshops are conducted for a maximum of 4 attendees at a time either at your site or at DXOMARK’s labs.

Analyzer Training Sessions

DXOMARK’s Analyzer solution is the imaging industry’s foremost image quality testing suite. We provide specific technical training for each Analyzer module so that you can make the most of Analyzer in your labs. This training includes instruction about image quality protocols and how to use the hardware and software included with Analyzer.

Seminars

Seminar topics cover everything you need to know to evaluate camera image quality, including HDR/exposure, autofocus, bokeh, color, and selfie cameras, to name a few.

Shenzhen: Seminar

Shanghai: Seminar

Paris: CIC Conference

Technical Articles

Smartphones vs Cameras: Closing the gap on image quality

DXOMARK looks at the challenges of the video conferencing experience

The importance of touch-to-display response time in gaming

2000 to 2021: The evolution of smartphone audio playback

Scientific Publications

DXOMARK scientists present the results of their research, including the development of ground-breaking algorithms, at image science conferences throughout the world. Here are some examples:

Machine Learning: Predicting Audio Quality for High SPL Smartphone Recordings

(presented at AES NY 2023)

In this paper, we explore a machine learning approach to evaluate audio quality for high sound pressure level (SPL) smartphone recordings. Our study is based on perceptual evaluations conducted by technical experts on eight audio sub-attributes (tonal balance, treble, midrange, bass, dynamics, temporal artifacts, spectral artifacts, and other artifacts) of audio quality for 121 smartphones released from 2019 to 2021. To address this task, we propose a Convolutional Neural Network (CNN) model, which proves to be a simple yet effective choice. We employ a pre-augmentation technique to enhance the training dataset size, creating a comprehensive dataset comprising recording spectrograms and corresponding perceptual evaluation scores. Our findings indicate that while the CNN model has certain limitations, it demonstrates promising capabilities in predicting evaluation scores, particularly in aspects of tonal balance, bass, and spectral artifact assessment.

Poster (PDF); the full paper is available here.

 

An image quality assessment dataset for portraits

(presented at CVPR 2023)

Year after year, the demand for ever-better smartphone photos continues to grow, in particular in the domain of portrait photography. Manufacturers thus use perceptual quality criteria throughout the development of smartphone cameras. This costly procedure can be partially replaced by automated learning-based methods for image quality assessment (IQA). Due to its subjective nature, it is necessary to estimate and guarantee the consistency of the IQA process, a characteristic lacking in the mean opinion scores (MOS) widely used for crowdsourcing IQA. In addition, existing blind IQA (BIQA) datasets pay little attention to the difficulty of cross-content assessment, which may degrade the quality of annotations. This paper introduces PIQ23, a portrait-specific IQA dataset of 5116 images of 50 predefined scenarios acquired by 100 smartphones, covering a high variety of brands, models, and use cases. The dataset includes individuals of various genders and ethnicities who have given explicit and informed consent for their photographs to be used in public research. It is annotated by pairwise comparisons (PWC) collected from over 30 image quality experts for three image attributes: face detail preservation, face target exposure, and overall image quality. An in-depth statistical analysis of these annotations allows us to evaluate their consistency over PIQ23. Finally, we show through an extensive comparison with existing baselines that semantic information (image context) can be used to improve IQA predictions.

Full paper (PDF)

 

Laboratory Evaluation of Smartphone Audio Zoom Systems

(presented at AES EU 2023)

In this paper, we propose a rating protocol for evaluating smartphone audio zoom systems through objective and perceptual testing. Audio zoom is a newly developed function that helps isolate a sound source from its surroundings in accordance with the smartphone camera's focal point and zoom level when recording videos with the camera app. The most important criterion for evaluating good performance, both objectively and perceptually, is the device's ability to focus mainly on the target sound. We also consider and discuss other audio quality criteria, and finally we conclude by comparing test results and suggesting possible improvements to smartphone audio zoom systems.

Poster (PDF); the full paper is available here.

 

Real-Content Based Method for HDR Tone Curve Characterization

(presented at SID Display Week 2023)

The classic window pattern on a black background is the basis of any display characterization, but nowadays displays, particularly smartphone displays, integrate complex image processing and adaptations, meaning that they cannot be entirely characterized with these simple patterns. In this paper we discuss a new EOTF (Electro-Optical Transfer Function) measurement method performed directly on real-life-scene HDR videos and show how this method relates to the display system's user experience.

Full paper (PDF)

 

Improvement of the flare evaluation for cameras and imaging applications when using near-infrared lighting

(presented at Electronic Imaging 2023)

The number of cameras designed to capture the near-infrared (NIR) spectrum (sometimes in addition to the visible) is increasing in automotive, mobile, and surveillance applications. Therefore, NIR LED light sources have become increasingly present in our daily lives. Nevertheless, camera evaluation metrics are still mainly focused on sensors in the visible spectrum. The goal of this article is to extend our existing flare setup and objective metric [1] to quantify NIR flare for different cameras and to evaluate the impact of NIR filters on lenses. We also compare the results in both visible and NIR lighting. Moreover, we propose a new method to measure with our flare setup the ISO speed rating in the visible spectrum (as originally defined in ISO standard 12232 [2]), as well as an equivalent sensitivity defined for the NIR spectrum.

Full paper (PDF)

Noise quality estimation on portraits in realistic controlled scenarios

(presented at Electronic Imaging 2023)

The wide use of cameras by the public has raised interest in image quality evaluation and ranking. Current cameras embed complex processing pipelines that adapt strongly to the scene content by implementing, for instance, advanced noise reduction or local adjustments on faces. However, current methods of image quality assessment are based on static geometric charts, which are not representative of common camera usage, which mostly targets portraits. Moreover, on non-synthetic content, the most relevant features, such as detail preservation or noisiness, are often intractable. To overcome this situation, we propose to mix classical measurements with machine-learning-based methods: we reproduce realistic content that triggers these complex processing pipelines in controlled lab conditions, which allows for rigorous quality assessment. Machine-learning-based methods can then reproduce previously annotated perceptual quality. In this paper, we focus on noise quality evaluation and test two different setups: close-up and distant portraits. These setups provide flexibility in scene capture conditions, but most of all, they allow the evaluation of the full range of camera quality, from high-quality DSLRs to video conference devices. Our numerical results show the relevance of our solution compared to geometric charts and the importance of adapting to realistic content.

Full paper (PDF)

Evaluation of image quality metrics designed for DRI tasks with automotive cameras

(presented at Electronic Imaging 2023)

Nowadays, cameras are widely used to detect potential obstacles for driving assistance. These safety challenges have pushed the automotive industry to develop a set of image quality metrics that measure intrinsic camera performance and degradations. However, more metrics are needed to correctly estimate the performance of computer vision algorithms, which depends on environmental conditions. In this article we consider several metrics that have been proposed in the literature: CDP, CSNR, and FCR. We present a test protocol and promising results for the ability of these metrics to predict the performance of a reference computer vision algorithm chosen for the study.

Full paper (PDF)


Objective image quality evaluation of HDR videos captured by smartphones

(presented at Electronic Imaging 2022)

High Dynamic Range (HDR) videos attract the industry and consumer markets thanks to their ability to reproduce wider color gamuts and higher luminance ranges and contrast. While the cinema and broadcast industries traditionally go through a manual mastering step on calibrated color grading hardware, consumer cameras capable of HDR video capture without user intervention are now available. The aim of this article is to review the challenges found in evaluating cameras that capture and encode videos in an HDR format, and to improve existing measurement protocols to objectively quantify the video quality produced by those systems. These protocols study adaptation to static and dynamic HDR scenes with illuminant changes, as well as the general consistency and readability of the scene's dynamic range. An experimental study compares the performance of HDR video capture to Standard Dynamic Range (SDR) video capture; significant differences are observed, often with scene-specific content adaptation similar to that of the human visual system.

Full paper (PDF)

Image quality evaluation of video conferencing solutions with realistic laboratory scenes

(presented at Electronic Imaging 2022)

Video conferencing has become extremely relevant in recent years. Traditional image and video quality evaluation techniques prove insufficient to properly assess the quality of these systems, since they often include special processing pipelines, for example, to improve face rendering. Our team proposes a suite of equipment, laboratory scenes, and measurements that includes realistic mannequins to simulate a more true-to-life scene, while still being able to reliably measure image quality in terms of exposure, dynamic range, color and skin-tone rendering, focus, texture, and noise. These metrics are used to evaluate and compare three categories of video conferencing cameras available on the market: external webcams, laptop-integrated webcams, and the selfie cameras of mobile devices. Our results show that external webcams provide a real image quality advantage over most built-in laptop webcams but cannot match the superior image quality of tablet and smartphone selfie cameras. Our results are consistent with perceptual evaluation and allow for an objective comparison of very different systems.

Full paper (PDF)

New visual noise measurement on a versatile laboratory setup in HDR conditions for smartphone camera testing

(presented at Electronic Imaging 2022)

Cameras, and especially camera phones, use a large diversity of technologies, such as multi-frame stacking and local tone mapping, to capture and render scenes with high dynamic range. The ISO-defined charts for OECF estimation and visual noise measurement are not really designed for these use cases, especially when no manual control of the camera is available. Moreover, these charts are limited to a few measurements. We developed a versatile laboratory setup to evaluate image quality attributes such as exposure, dynamic range, detail preservation, and noise, as well as autofocus performance. It is tested in various lighting conditions, with several dynamic ranges up to a 7 EV difference within the scene, under different illuminants. The latest visual noise measurements proposed by IEEE P1858 and ISO 15739 do not give fully satisfactory results on our laboratory scene, due to differences in the chart, framing, and lighting conditions used. We performed subjective visual experiments to build a quality ruler of noisy gray patches, and used it as a dataset to develop and validate an improved version of a visual noise measurement. In these experiments we also studied the impact of the patches' different environment conditions to assess their relevance to our algorithm. Our new visual noise measurement uses a luminance sensitivity function multiplied by the square root of the weighted sum of the variances of the Lab coordinates of the patches. A non-linear JND scaling is applied afterwards to obtain a visual noise measurement in units of JND of noisiness.
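The core of the formula described above can be sketched in a few lines. This is a minimal illustration, not the published algorithm: the channel weights, the luminance sensitivity value, and the omission of the final JND scaling step are all simplifying assumptions.

```python
import math
from statistics import pvariance

def visual_noise(L, a, b, weights=(1.0, 0.8, 0.4), sensitivity=1.0):
    """Toy visual-noise score for one patch: a luminance sensitivity factor
    times the square root of the weighted sum of the variances of the
    L, a, b coordinates. Weights and sensitivity are illustrative only."""
    weighted = (weights[0] * pvariance(L)
                + weights[1] * pvariance(a)
                + weights[2] * pvariance(b))
    return sensitivity * math.sqrt(weighted)

# A perfectly uniform patch has zero visual noise.
flat = [50.0] * 16
print(visual_noise(flat, flat, flat))  # → 0.0
```

Any Lab-coordinate variance in the patch raises the score, with luminance noise weighted most heavily in this sketch.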

Full paper (PDF)

Automatic Noise Analysis on Still Life Chart

(presented at London Imaging Meeting 2021, London, UK)

We tackle the issue of estimating the noise level of a camera, on its processed still images and as perceived by the user. Commonly, the noise level of a camera is characterized using objective metrics determined on charts containing uniform patches under given conditions. These methods can lead to inadequate characterizations of a camera's noise because cameras often incorporate denoising algorithms that are more efficient on uniform areas than on areas containing details. Therefore, in this paper, we propose a method to estimate the perceived noise level on natural areas of a still-life chart. Our method is based on a deep convolutional network trained with ground-truth quality scores provided by expert annotators. Our experimental evaluation shows that our approach strongly matches human evaluations.

pdf Full paper (pdf)

Portrait Quality Assessment using Multi-Scale CNN

(presented at London Imaging Meeting 2021, London, UK)

We propose a novel and standardized approach to the problem of camera quality assessment on portrait scenes. Our goal is to evaluate the capacity of smartphone front cameras to preserve texture details on faces. We introduce a new portrait setup and an automated texture measurement. The setup includes two custom-built lifelike mannequin heads, shot in a controlled lab environment. The automated texture measurement includes region-of-interest (ROI) detection and a deep neural network. To this aim, we create a realistic mannequin database, which contains images from different cameras, shot in several lighting conditions. The ground truth is based on a novel pairwise comparison technology where the scores are generated in terms of just-noticeable differences (JND). In terms of methodology, we propose a multi-scale CNN architecture with random-crop augmentation to overcome overfitting and to obtain low-level feature extraction. We validate our approach by comparing its performance with several baselines inspired by the image quality assessment (IQA) literature.

Full paper (PDF)

Evaluation of the Lens Flare

(presented at Electronic Imaging 2021 conference, Burlingame, California, USA)

We present an objective metric for quantifying the amount of flare produced by the lens of a camera module. This includes hardware and software tools to measure the spread of stray light in the image. A novel measurement setup has been developed to generate flare images in a reproducible way via a bright light source, close in apparent size and color temperature to the sun, both within and outside the field of view of the device. The proposed measurement works on RAW images to characterize and measure the optical phenomenon without being affected by any non-linear processing that the device might implement.

Full paper (PDF)

RAW Image Quality Evaluation Using Information Capacity

(presented at Electronic Imaging 2021 conference, Burlingame, California, USA)

We propose a comprehensive objective metric for estimating digital camera system performance. Using the DXOMARK RAW protocol, image quality degradation indicators are objectively quantified, and the information capacity is computed. The model proposed in this article is a significant improvement over previous digital camera systems evaluation protocols, wherein only noise, spectral response, sharpness, and pixel count were considered.

Full paper (PDF)

DXOMARK Objective Video Quality Measurements

(presented at Electronic Imaging 2020 conference, Burlingame, California, USA)

Video capture is becoming more and more widespread. The technical advances of consumer devices have led to improved video quality and to a variety of new use cases presented by social media and artificial intelligence applications. Device manufacturers and users alike need to be able to compare different cameras. This article presents a comprehensive hardware and software measurement protocol for the objective evaluation of the whole video acquisition and encoding pipeline, as well as its experimental validation.

Full paper (PDF)

Depth Map Quality Evaluation for Photographic Applications

(presented at Electronic Imaging 2020 conference, Burlingame, California, USA)

As depth imaging is integrated into more and more consumer devices, manufacturers have to tackle new challenges. Applications such as computational bokeh and augmented reality require dense and precisely segmented depth maps to achieve good results. Modern devices use a multitude of different technologies to estimate depth maps, such as time-of-flight sensors, stereoscopic cameras, structured light sensors, phase-detect pixels or a combination thereof. Therefore, there is a need to evaluate the quality of the depth maps, regardless of the technology used to produce them. The aim of our work is to propose an end-result evaluation method based on a single scene, using a specifically designed chart.

Full paper (PDF)

Quantitative measurement of contrast, texture, color and noise for digital photography of HDR scenes

(presented at Electronic Imaging 2018 conference, Burlingame, California, USA)

We describe image quality measurements for HDR scenes covering local contrast preservation, texture preservation, color consistency, and noise stability. By monitoring these four attributes in both the bright and dark parts of the image, over different dynamic ranges, we benchmarked four leading smartphone cameras using different technologies and contrasted the results with subjective evaluations.

Full paper (PDF)

Image quality benchmark of computational bokeh

(presented at Electronic Imaging 2018 conference, Burlingame, California, USA)

We propose a method to quantitatively evaluate the quality of computational bokeh in a reproducible way, focusing both on the quality of the bokeh itself (depth of field, shape) and on artifacts arising from the challenge of accurately differentiating the face of a subject from the background, especially on complex transitions such as curly hair.

Full paper (PDF)

Towards a quantitative evaluation of multi-imaging systems

(presented at Electronic Imaging 2017 conference, San Francisco, California, USA)

This paper presents laboratory setups designed to exhibit the characteristics and artifacts that are peculiar to Multi-Image technologies. We also propose metrics towards the objective and quantitative evaluation of those artifacts.

Full paper (PDF)

Autofocus measurement for imaging devices

(presented at Electronic Imaging 2017 conference, San Francisco, California, USA)

We propose an objective measurement protocol to evaluate the autofocus performance of a digital still camera. As most pictures today are taken with smartphones, we have designed the first implementation of this protocol for devices with a touchscreen trigger.

Full paper (PDF)

Device and algorithms for camera timing evaluation 

(presented at Electronic Imaging 2014 conference, San Francisco, California, USA)

This paper presents a novel device and algorithms for measuring the different timings of digital cameras shooting both still images and videos. These timings include exposure (or shutter) time, electronic rolling shutter (ERS), frame rate, vertical blanking, time lags, missing frames, and duplicated frames.

Full paper (PDF)

Electronic trigger for capacitive touchscreen and extension of ISO 15781 standard time lag measurements to smartphones

(presented at Electronic Imaging 2014 conference, San Francisco, California, USA)

We present in this paper a novel capacitive device that stimulates the touchscreen interface of a smartphone (or of any imaging device equipped with a capacitive touchscreen) and synchronizes triggering with our LED Universal Timer to measure shooting time lag and shutter lag according to ISO 15781:2013.

Full paper (PDF)

Measurement and protocol for evaluating video and still stabilization systems

(presented at Electronic Imaging 2013 conference, San Francisco, California, USA)

This article presents a system and a protocol to characterize image stabilization systems both for still images and videos.

Full paper (PDF)

Development of the I3A CPIQ spatial metrics

(presented at Electronic Imaging 2012 conference, San Francisco, California, USA)

The I3A Camera Phone Image Quality (CPIQ) initiative aims to provide a consumer-oriented overall image quality metric for mobile phone cameras. In order to achieve this goal, a set of subjectively correlated image quality metrics has been developed. This paper describes the development of a specific group within this set of metrics, the spatial metrics. Contained in this group are the edge acutance, visual noise and texture acutance metrics. A common feature is that they are all dependent on the spatial content of the specific scene being analyzed. Therefore, the measurement results of the metrics are weighted by a contrast sensitivity function (CSF) and, thus, the conditions under which a particular image is viewed must be specified. This leads to the establishment of a common framework consisting of three components shared by all spatial metrics. First, the RGB image is transformed to a color opponent space, separating the luminance channel from two chrominance channels. Second, associated with this color space are three contrast sensitivity functions for each individual opponent channel. Finally, the specific viewing conditions, comprising both digital displays as well as printouts, are supported through two distinct MTFs.
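The first component of the framework described above, separating a luminance channel from two chrominance channels, can be illustrated with a toy transform. Note that CPIQ specifies a particular calibrated opponent color space; the coefficients below are simplified stand-ins for illustration only.

```python
def to_opponent(r, g, b):
    """Illustrative luminance/chrominance split for one RGB pixel.
    These coefficients are NOT the CPIQ opponent space; they only show
    the idea of separating an achromatic channel from two color ones."""
    lum = (r + g + b) / 3.0       # achromatic (luminance-like) channel
    rg = r - g                    # red-green chrominance
    by = b - (r + g) / 2.0        # blue-yellow chrominance
    return lum, rg, by

# A neutral gray pixel carries no chrominance, as expected.
print(to_opponent(128, 128, 128))
```

In the full framework, each opponent channel would then be weighted by its own contrast sensitivity function before the metric is computed.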

Full paper (PDF)

An objective protocol for comparing the noise performance of silver halide film and digital sensor

(presented at Electronic Imaging 2012 conference, San Francisco, California, USA)

Digital sensors have obviously invaded the photography mass market. However, some photographers with very high expectations still use silver halide film. Are they merely nostalgic holdouts reluctant to adopt new technology, or is there more than meets the eye? The answer is not so easy if we note that, at the end of the golden age, films were actually scanned before development. Nowadays film users have adopted digital technology and scan their film to take advantage of digital processing afterwards. Therefore, it is legitimate to evaluate silver halide film "with a digital eye," under the assumption that processing can be applied as for a digital camera. The article describes in detail the operations needed to treat the film as a RAW digital sensor. In particular, we have to account for the film characteristic curve, the autocorrelation of the noise (related to film grain), and the sampling of the digital sensor (related to the Bayer filter array). We also describe the protocol that was set up, from shooting to scanning. We then present and interpret the results for sensor response, signal-to-noise ratio, and dynamic range.

Full paper (PDF)

Performance of extended depth of field systems and theoretical diffraction limit

(presented at Electronic Imaging 2012 conference, San Francisco, California, USA)

Extended depth of field (EDOF) cameras have recently emerged as a low-cost alternative to autofocus lenses. Different methods, based either on longitudinal chromatic aberration or on wavefront coding, have been proposed and have reached the market. The purpose of this article is to study the theoretical performance and limitations of wavefront coding approaches. The idea of these methods is to introduce a phase element that makes a trade-off between sharpness at the optimal focus position and the variation of the blur spot with respect to the object distance. We show that there are theoretical bounds to this trade-off: knowing the aperture and the minimal MTF value for suitable image quality, the pixel pitch imposes the maximal depth of field. We analyze the limitation of the extension of the depth of field for pixel pitches from 1.75 µm to 1.1 µm, particularly with regard to the increasing influence of diffraction.

Full paper (PDF)

Information capacity: a measure of potential image quality of a digital camera

(presented at Electronic Imaging 2011 conference, San Francisco, California, USA)

The aim of this paper is to define an objective measurement for evaluating the performance of a digital camera. The challenge is to combine different flaws involving geometry (such as distortion or lateral chromatic aberration), light (such as luminance and color shading), and statistical phenomena (such as noise). We introduce the concept of information capacity, which accounts for all the main defects that can be observed in digital images, whether due to the optics or to the sensor. The information capacity describes the potential of the camera to produce good images. In particular, digital processing can correct some flaws (like distortion). Our definition of information takes this possible correction into account, along with the fact that processing can neither retrieve lost information nor create new information. This paper extends some of our previous work, in which the information capacity was defined only for RAW sensors. The concept is extended to cameras with optical defects such as distortion, lateral and longitudinal chromatic aberration, and lens shading.

Full paper (PDF)

Dead leaves model for measuring texture quality on a digital camera

(presented at Electronic Imaging 2010 conference,  San Jose, California, USA)

We describe a procedure to evaluate the image quality of a camera in terms of texture preservation. We use a stochastic model from stochastic geometry known as the dead leaves model. It intrinsically reproduces occlusion phenomena, producing edges at any scale and any orientation with possibly low levels of contrast. An advantage of this synthetic model is that it provides a ground truth in terms of image statistics. In particular, its power spectrum is a power law, as is the case for many natural textures. Therefore, we can define a texture MTF as the ratio of the Fourier transform of the camera picture to the Fourier transform of the original target, and we fully describe the procedure to compute it. We compare the results with the traditional MTF (computed on a slanted edge as defined in the ISO 12233 standard) and show that the texture MTF is indeed more appropriate for describing fine detail rendering. This is true in particular for camera phones, which have to apply high levels of denoising and sharpening. Correlation with subjective evaluation is shown, as part of work done in the I3A/CPIQ initiative.
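The texture MTF defined above, the ratio of the Fourier transform of the captured picture to that of the original target, can be illustrated in one dimension with a toy, standard-library-only sketch. The real measurement operates on 2-D dead-leaves images with radial averaging of the spectra; the naive DFT and 1-D signals below are illustrative assumptions.

```python
import cmath

def dft_mag(x):
    """Naive DFT magnitude spectrum (O(n^2); stdlib only, for illustration)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n)))
            for f in range(n)]

def texture_mtf(captured, target, eps=1e-12):
    """Toy 1-D texture MTF: per-frequency ratio of the captured signal's
    spectrum to the original target's spectrum."""
    cm, tm = dft_mag(captured), dft_mag(target)
    return [c / (t + eps) for c, t in zip(cm, tm)]

# A camera that reproduces the target exactly gets MTF ≈ 1 at every
# frequency where the target has energy.
target = [1.0, 3.0, 2.0, 5.0, 4.0, 1.0, 0.0, 2.0]
mtf = texture_mtf(target, target)
```

A blurred capture would instead show the ratio falling off toward high frequencies, which is exactly what the metric is designed to reveal.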

Full abstract (PDF)

Measuring texture sharpness of a digital camera

(presented at Electronic Imaging 2009 conference, San Jose, California, USA)

A method for evaluating texture quality as shot by a camera is presented. It is shown that the usual sharpness measurements are not completely satisfactory for this task. A new target based on random geometry is proposed. It uses the so-called dead leaves model: it contains objects of any size at any orientation and shares common statistics with natural images. Experiments show that the correlation between objective measurements derived from this target and subjective measurements conducted in the Camera Phone Image Quality initiative is excellent.

Full paper (PDF)

Sensor information capacity and spectral sensitivities

(presented at Electronic Imaging 2009 conference, San Jose, California, USA)

In this paper, we numerically quantify the information capacity of a sensor by examining the different factors that can limit this capacity, namely sensor spectral response, noise, and sensor blur (due to fill factor, crosstalk, and diffraction, for a given aperture). In particular, we compare the effectiveness of raw color space for different kinds of sensors. We also define an intrinsic notion of color sensitivity that generalizes some of our previous work, and we discuss how metamerism can be represented for a sensor.

Full paper (PDF)

Extended depth-of-field using sharpness transport across color channels

(presented at Electronic Imaging 2009 conference, San Jose, California, USA)

In this paper we present an approach for extending the depth of field (DoF) of cell phone miniature cameras by concurrently optimizing the optical system and post-capture digital processing. Our lens design seeks to increase the longitudinal chromatic aberration in a controlled fashion such that, for a given object distance, at least one color plane of the RGB image contains the in-focus scene information. Typically, red is made sharp for objects at infinity, green for intermediate distances, and blue for close distances. Comparing sharpness across colors gives an estimate of the object distance and therefore allows choosing the right set of digital filters as a function of that distance. Then, by copying the high frequencies of the sharpest color onto the other colors, we show theoretically and experimentally that it is possible to achieve a sharp image in all colors over a larger DoF. We compare our technique with other approaches that also aim to increase the DoF, such as wavefront coding.
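The "copy the high frequencies of the sharpest color onto the other colors" step can be sketched in a few lines. This is a hedged toy version, not the paper's actual pipeline: the low-pass filter, its kernel size, and the function names are illustrative choices.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box low-pass filter with edge padding (toy stand-in
    for whatever low-pass a real pipeline would use)."""
    kernel = np.ones(k) / k
    pad = k // 2
    out = np.pad(img, pad, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 0, out)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, out)
    return out

def transport_sharpness(channels, sharp_idx):
    """Add the high-frequency content of the sharpest color plane
    (plane minus its low-pass) onto the blurrier planes."""
    high = channels[sharp_idx] - box_blur(channels[sharp_idx])
    return [c if i == sharp_idx else c + high for i, c in enumerate(channels)]
```

In the paper's scheme, which plane plays the role of `sharp_idx` would itself be estimated per region by comparing sharpness across the color channels.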

Full paper (PDF)

Characterization and measurement of color fringing

(presented at Electronic Imaging 2009 conference, San Jose, California, USA)

This article explains the cause of the color fringing phenomenon that can be noticed in photographs, particularly on the edges of backlit objects. The nature of color fringing is optical, related in particular to the difference in blur spots at different wavelengths. Color fringing can therefore be observed in both digital and silver halide photography. The hypothesis that lateral chromatic aberration is the only cause of color fringing is discarded. The factors that can influence the intensity of color fringing are carefully studied, some of them being specific to digital photography. A protocol for measuring color fringing with very good repeatability is described, as well as a means of predicting color fringing from optical designs.

Full paper (PDF)

Does resolution really increase image quality?

A general trend in the CMOS image sensor market is toward increasing resolution (a larger number of pixels) while keeping a small form factor by shrinking photosite size. This article discusses the impact of this trend on some of the main attributes of image quality. The first example is image sharpness. A smaller pitch theoretically allows a higher limiting resolution, which is derived from the Modulation Transfer Function (MTF). But recent sensor technologies (1.75 µm, and soon 1.45 µm) with a typical aperture of f/2.8 are clearly reaching the size of the diffraction blur spot. A second example is the impact on pixel light sensitivity and image sensor noise. For photonic noise, the Signal-to-Noise Ratio (SNR) is typically a decreasing function of resolution. To evaluate whether shrinking pixel size can benefit image quality, the tradeoff between spatial resolution and light sensitivity is examined by comparing the image information capacity of sensors with varying pixel sizes. A theoretical analysis is presented that takes into account measured and predictive models of the pixel performance degradation and improvement associated with CMOS imager technology scaling. This analysis is completed by a benchmarking of recent commercial sensors with different pixel technologies.
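The photonic-noise argument can be made concrete with a back-of-the-envelope calculation. In a shot-noise-limited model, signal scales with pixel area and noise with its square root, so halving the pitch costs about 6 dB of SNR. The exposure level below is an illustrative assumption, not a measured figure from the article.

```python
import math

def shot_noise_snr_db(pitch_um, electrons_per_um2=1000.0):
    """Shot-noise-limited SNR for a square pixel of the given pitch.
    Signal = exposure * area; noise = sqrt(signal); SNR = sqrt(signal).
    electrons_per_um2 is a hypothetical illumination level."""
    signal = electrons_per_um2 * pitch_um ** 2
    return 20 * math.log10(signal / math.sqrt(signal))

for pitch in (2.2, 1.75, 1.4):
    print(f"{pitch} um pitch: {shot_noise_snr_db(pitch):.1f} dB")
```

This toy model ignores read noise, fill factor, and technology improvements between nodes, which is precisely why the article's fuller information-capacity analysis is needed.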

Full paper (PDF)

Sensor spectral sensitivities, noise measurements and color sensitivity

This article proposes new measurements for evaluating the image quality of a camera, particularly its reproduction of colors. The concept of gamut is usually a topic of interest, but it is much better suited to output devices than to capture devices (sensors). Moreover, it does not take other important characteristics of the camera into account, such as noise. In contrast, color sensitivity is a global measurement relating the raw noise to the spectral sensitivities of the sensor. It provides an easy ranking of cameras. For an in-depth analysis of noise versus color rendering, the concept of gamut SNR is introduced, describing the set of colors achievable for a given SNR (Signal-to-Noise Ratio). This representation provides a convenient visualization of which part of the gamut is most affected by noise and can be useful for camera tuning as well.

Full paper (PDF)

Advances in Camera Phone Picture Quality

A unique digital postprocessing technique compensates for performance problems posed by ever-shrinking pixels

by Dr. Frédéric Guichard, DXOMARK

From Photonics Spectra, November 2007

As camera phones become ubiquitous, consumer demand for a photographic experience similar to that of traditional digital cameras is growing. Coupled with the ready availability of high-definition displays, this need has translated into a requirement for higher-resolution cameras in mobile phones. However, handset design aesthetics impose a much smaller form factor for the miniature camera modules built into handsets than can be accommodated by reusing the technology found in digital still cameras.

One of the most challenging aspects of designing a high-resolution camera for a mobile phone is the limitation on the overall height of the camera, measured from the top of the lens to the back of the camera substrate. The typical target height is 6 mm or less, unless a more expensive folded-optics design is considered. Given the angular acceptance of CMOS image sensor pixels, the largest sensor that can be used with such a thin camera measures approximately 4.5 mm diagonally. To increase the resolution without increasing the height of the camera (or the thickness of the phone), more pixels must fit into the array defined by this diagonal size. Using a 2.2 × 2.2 µm pixel size, 2-megapixel sensors can be used in these thin cameras. To achieve 3.2-megapixel resolution, a 1.75 × 1.75 µm pixel size must be used, and 5-megapixel resolution requires 1.4 × 1.4 µm pixels.

Full paper (PDF)

A measure of color sensitivity for imaging devices

by Dr. Frédéric Guichard and Jérôme Buzzi

We define color sensitivity, or effective color depth, based on the "number of reliably distinguished colors", using ideas from information theory. This figure of merit allows the comparison of different sensors or cameras, and we indicate how it can be used both in the design of imaging devices and to optimize their adaptation to the scene.
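The "number of reliably distinguished colors" idea can be illustrated with a deliberately simplified per-channel model (not the paper's actual derivation): count the gray levels that are at least one noise standard deviation apart under a shot-plus-read-noise model, then express three such channels in bits. The noise model and parameter values are assumptions for illustration.

```python
import math

def levels(full_well, read_noise):
    """Number of 1-sigma-separated levels between 0 and full_well,
    i.e. the integral of ds / sigma(s) with sigma(s) = sqrt(s + r^2)
    (shot noise plus read noise r). Closed form of that integral."""
    return 2 * (math.sqrt(full_well + read_noise ** 2) - read_noise)

def color_depth_bits(full_well, read_noise):
    """Toy 'effective color depth': three independent channels,
    log2 of the distinguishable levels per channel."""
    return 3 * math.log2(levels(full_well, read_noise))
```

Under this model a 10,000-electron full well with 5 electrons of read noise gives roughly 190 distinguishable levels per channel, and any increase in read noise directly lowers the effective color depth; the paper's information-theoretic treatment accounts for the full 3-D color noise structure rather than independent channels.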

Full paper (PDF)

Noise in imaging chains: correlations and predictions

by Dr. Frédéric Guichard and Jérôme Buzzi

Noise is an important factor in image quality. We analyze it in images produced by digital cameras. We show that, beyond the usual standard deviation measurement, spatial correlations also convey interesting information, which makes it possible to (i) better describe the perception of noise and (ii) analyze an unknown imaging chain. Indeed, knowledge of these spatial correlations is necessary to predict the noise after the rescaling and sampling involved in realistic imaging chains.
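A small numerical experiment shows why standard deviation alone cannot predict noise through a rescaling step. Both noise fields below have unit standard deviation, but after 2×2 averaging the white noise drops by the ideal factor of 2 while the spatially correlated noise (white noise through an assumed 3×3 box filter, chosen here purely for illustration) drops by much less.

```python
import numpy as np

rng = np.random.default_rng(1)
white = rng.normal(size=(256, 256))

# Correlated noise: white noise low-pass filtered (circular convolution
# via FFT with a 3x3 box kernel), renormalized to unit std.
k = np.ones((3, 3)) / 9.0
corr = np.real(np.fft.ifft2(np.fft.fft2(white) * np.fft.fft2(k, s=white.shape)))
corr /= corr.std()

def downscale2(img):
    """Average 2x2 blocks: one step of a rescaling chain."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

print("white after downscale:", downscale2(white).std())
print("correlated after downscale:", downscale2(corr).std())
```

Both inputs are indistinguishable by their standard deviation; only the spatial correlation explains their different behavior downstream, which is the article's point.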

Full paper (PDF)

Uniqueness of blur measure

by Dr. Frédéric Guichard and Jérôme Buzzi

After discussing the usual approaches to measuring the blur of optical chains, we show theoretically that there is essentially a unique way to quantify blur by a single number: the second derivative at the origin of the Fourier transform of the kernel. This somewhat surprisingly implies that blur is especially sensitive to attenuation of the low frequencies.
The blur measure is in fact the quadratic size of the spot diagram. A series of experiments shows that this measure is correlated with perceptual blur. We verify that the blur measure behaves as expected with respect to the standard "blur" and "sharpen" tools of common image-processing software. We apply the measure to assess the quality of cameras, natural images, and image-processing algorithms.
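Since the second derivative at the origin of the Fourier transform of a normalized kernel equals (up to a constant factor) its second central moment, the blur measure can be sketched directly in the spatial domain as the quadratic size of the spot. This is an illustrative reading of the abstract, not the authors' code:

```python
import numpy as np

def blur_measure(psf):
    """Quadratic size of the spot diagram: the second central moment
    of the normalized PSF, equal up to a constant to minus the second
    derivative at the origin of its Fourier transform."""
    psf = psf / psf.sum()
    h, w = psf.shape
    y, x = np.indices((h, w))
    cy = (psf * y).sum()          # centroid of the spot
    cx = (psf * x).sum()
    return (psf * ((y - cy) ** 2 + (x - cx) ** 2)).sum()
```

For an isotropic Gaussian PSF of standard deviation sigma this evaluates to about 2·sigma², so a wider spot yields a proportionally larger blur number, consistent with the claimed behavior under "blur" and "sharpen" operations.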

Full paper (PDF)