Our Academy is a unique training and education resource for your team

DXOMARK Academy offers extensive instruction about image quality. Our curriculum includes intensive workshops about image quality fundamentals, expert sessions on select topics, and training focused on in-depth smartphone camera evaluation using Analyzer. Our team is also available to develop customized workshops. We can conduct all workshops and training sessions listed below either at your site or in our offices. We also offer training sessions for lab operators who will handle equipment and perform image quality tests.

Image Quality Fundamentals

DXOMARK Academy’s Image Quality Fundamentals workshops introduce junior engineers and beginners to various aspects of image quality such as camera design and hardware, camera tuning, objective measurements, perceptual analysis, and how to measure photo and video attributes. These 3-day workshops are intensive and are designed for a maximum of 8 attendees per session. They also include practical lab testing using Analyzer.

Expert Sessions

Image quality evaluation is a vast field that evolves with each new technological development. For those already familiar with the basic notions and techniques of image quality testing, DXOMARK Academy’s Expert Sessions provide a deeper understanding of specific areas of image quality such as HDR, Color, Exposure, and Selfie testing. Each Expert Session covers a single topic over 3 days and is held for a maximum of 8 attendees at a time.

Operator Workshops

The first step of image quality testing is shooting photos and performing a preliminary analysis of those photos. Lab operators need precise guidelines about how to take photos based on the image quality attribute being tested; required testing conditions; and how to use the available lab equipment. DXOMARK Academy provides detailed workshops for lab operators that include step-by-step guidelines for everyday image quality testing. These workshops are conducted for a maximum of 4 attendees at a time either at your site or at DXOMARK’s labs.

Analyzer Training Sessions

DXOMARK’s Analyzer solution is the imaging industry’s foremost image quality testing suite. We provide specific technical training for each Analyzer module so that you can make the most of Analyzer in your labs. This training includes instruction about image quality protocols and how to use the hardware and software included with Analyzer.

Seminars

Seminar topics cover everything needed to evaluate camera image quality, including HDR/Exposure, Autofocus, Bokeh, Color, and Selfie cameras, to name a few.

Shenzhen 2019: Seminar

Shanghai 2019: Seminar

Paris 2019: CIC Conference

Technical Articles

Smartphones vs Cameras: Closing the gap on image quality

Disruptive technologies in mobile imaging: Taking smartphone cameras to the next level

Multi-camera smartphones: Benefits and challenges

Evaluating computational bokeh: How we test smartphone portrait modes

Scientific Publications

DXOMARK scientists present the results of their research, including the development of ground-breaking algorithms, at image science conferences throughout the world. Here are some examples:

DXOMARK Objective Video Quality Measurements

(presented at Electronic Imaging 2020 conference, Burlingame, California, USA)

Video capture is becoming increasingly widespread. Technical advances in consumer devices have improved video quality and enabled a variety of new use cases driven by social media and artificial intelligence applications. Device manufacturers and users alike need to be able to compare different cameras. This article presents a comprehensive hardware and software measurement protocol for the objective evaluation of the whole video acquisition and encoding pipeline, as well as its experimental validation.

Full abstract (PDF)

Depth Map Quality Evaluation for Photographic Applications

(presented at Electronic Imaging 2020 conference, Burlingame, California, USA)

As depth imaging is integrated into more and more consumer devices, manufacturers have to tackle new challenges. Applications such as computational bokeh and augmented reality require dense and precisely segmented depth maps to achieve good results. Modern devices use a multitude of different technologies to estimate depth maps, such as time-of-flight sensors, stereoscopic cameras, structured light sensors, phase-detect pixels or a combination thereof. Therefore, there is a need to evaluate the quality of the depth maps, regardless of the technology used to produce them. The aim of our work is to propose an end-result evaluation method based on a single scene, using a specifically designed chart.

Full abstract (PDF)

Quantitative measurement of contrast, texture, color and noise for digital photography of HDR scenes

(presented at Electronic Imaging 2018 conference, Burlingame, California, USA)

We describe image quality measurements for HDR scenes covering local contrast preservation, texture preservation, color consistency, and noise stability. By monitoring these four attributes in both the bright and dark parts of the image, over different dynamic ranges, we benchmarked four leading smartphone cameras using different technologies and contrasted the results with subjective evaluations.

Full abstract (PDF)
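As a toy illustration of the first of these attributes, the sketch below scores local contrast preservation as the ratio of local standard deviations between a captured image and a reference; the patch size and formulation are our own assumptions, not the published protocol:

```python
import numpy as np

# Toy local-contrast-preservation score: mean local standard deviation of
# the captured image divided by that of the target. Illustrative only.

def local_contrast(img, patch=8):
    h, w = img.shape
    tiles = img[: h - h % patch, : w - w % patch]
    tiles = tiles.reshape(h // patch, patch, w // patch, patch)
    return tiles.std(axis=(1, 3)).mean()   # average per-patch contrast

def preservation(captured, target):
    return local_contrast(captured) / local_contrast(target)

rng = np.random.default_rng(2)
target = rng.uniform(0, 1, (64, 64))       # stand-in for the reference scene
flat = 0.5 * target + 0.25                 # tone mapping that crushes contrast
print(round(preservation(flat, target), 2))   # -> 0.5 (half the contrast kept)
```

In a real HDR test the score would be computed separately on bright and dark regions of the chart, as the abstract describes.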

Image quality benchmark of computational bokeh

(presented at Electronic Imaging 2018 conference, Burlingame, California, USA)

We propose a method to quantitatively evaluate the quality of computational bokeh in a reproducible way, focusing both on the quality of the bokeh itself (depth of field, shape) and on artifacts arising from the challenge of accurately separating a subject’s face from the background, especially across complex transitions such as curly hair.

Full abstract (PDF)

Towards a quantitative evaluation of multi-imaging systems

(presented at Electronic Imaging 2017 conference, San Francisco, California, USA)

This paper presents laboratory setups designed to exhibit the characteristics and artifacts that are peculiar to Multi-Image technologies. We also propose metrics towards the objective and quantitative evaluation of those artifacts.

Full abstract (PDF)

Autofocus measurement for imaging devices

(presented at Electronic Imaging 2017 conference, San Francisco, California, USA)

We propose an objective measurement protocol to evaluate the autofocus performance of a digital still camera. As most pictures today are taken with smartphones, we have designed the first implementation of this protocol for devices with touchscreen trigger.

Full abstract (PDF)

Device and algorithms for camera timing evaluation 

(presented at Electronic Imaging 2014 conference, San Francisco, California, USA)

This paper presents a novel device and algorithms for measuring the different timings of digital cameras shooting both still images and videos. These timings include exposure (or shutter) time, electronic rolling shutter (ERS), frame rate, vertical blanking, time lags, missing frames, and duplicated frames.

Full abstract (PDF)
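As a rough illustration of one of these timing measurements, the sketch below flags missing and duplicated frames from a list of frame timestamps; the function name and tolerances are our own assumptions, not the paper's device or algorithms:

```python
# Toy frame-timing analysis: given per-frame capture timestamps (seconds),
# flag missing and duplicated frames against the nominal frame rate.
# Tolerances (0.5x and 1.5x the frame period) are illustrative assumptions.

def analyze_timing(timestamps, nominal_fps):
    period = 1.0 / nominal_fps
    missing, duplicated = 0, 0
    for prev, curr in zip(timestamps, timestamps[1:]):
        gap = curr - prev
        if gap < 0.5 * period:            # frame repeated almost immediately
            duplicated += 1
        elif gap > 1.5 * period:          # gap spans more than one period
            missing += round(gap / period) - 1
    return missing, duplicated

# 30 fps stream with one dropped frame (double-length gap) and one duplicate.
ts = [0.000, 0.0333, 0.0666, 0.1333, 0.1334, 0.1667]
print(analyze_timing(ts, 30))   # -> (1, 1)
```

The actual protocol derives these timestamps from the LED timer in the scene rather than from metadata, which is what makes the measurement independent of the camera's own reporting.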

Electronic trigger for capacitive touchscreen and extension of ISO 15781 standard time lag measurements to smartphones

(presented at Electronic Imaging 2014 conference, San Francisco, California, USA)

We present in this paper a novel capacitive device that stimulates the touchscreen interface of a smartphone (or of any imaging device equipped with a capacitive touchscreen) and synchronizes triggering with our LED Universal Timer to measure shooting time lag and shutter lag according to ISO 15781:2013.

Full abstract (PDF)

Measurement and protocol for evaluating video and still stabilization systems

(presented at Electronic Imaging 2013 conference, San Francisco, California, USA)

This article presents a system and a protocol to characterize image stabilization systems both for still images and videos.

Full abstract (PDF)

Development of the I3A CPIQ spatial metrics

(presented at Electronic Imaging 2012 conference, San Francisco, California, USA)

The I3A Camera Phone Image Quality (CPIQ) initiative aims to provide a consumer-oriented overall image quality metric for mobile phone cameras. In order to achieve this goal, a set of subjectively correlated image quality metrics has been developed. This paper describes the development of a specific group within this set of metrics, the spatial metrics. Contained in this group are the edge acutance, visual noise, and texture acutance metrics. A common feature is that they are all dependent on the spatial content of the specific scene being analyzed. Therefore, the measurement results of the metrics are weighted by a contrast sensitivity function (CSF) and, thus, the conditions under which a particular image is viewed must be specified. This leads to the establishment of a common framework consisting of three components shared by all spatial metrics. First, the RGB image is transformed to a color opponent space, separating the luminance channel from two chrominance channels. Second, associated with this color space are three contrast sensitivity functions, one for each opponent channel. Finally, the specific viewing conditions, comprising both digital displays and printouts, are supported through two distinct MTFs.

Full abstract (PDF)
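As an illustration of CSF weighting in the spirit of these metrics (not the CPIQ specification itself), the sketch below computes a toy acutance by weighting a system MTF with the classic Mannos-Sakrison CSF fit, used here as a stand-in for CPIQ's own viewing-condition-specific CSFs:

```python
import numpy as np

# Toy acutance: a CSF-weighted average of the system MTF over spatial
# frequency at the viewer's eye. CSF model and MTFs are illustrative.

def csf(f_cpd):
    # Mannos-Sakrison contrast sensitivity fit; f_cpd in cycles/degree.
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def acutance(mtf, f_cpd):
    w = csf(f_cpd)
    return float((mtf * w).sum() / w.sum())   # normalized weighted mean

f = np.linspace(0.1, 60, 600)        # cycles/degree for an assumed display
sharp = np.exp(-f / 40)              # toy MTF of a sharper system
soft = np.exp(-f / 10)               # toy MTF of a softer system
print(round(acutance(sharp, f), 3), round(acutance(soft, f), 3))
```

Because the weighting depends on cycles per degree, changing the viewing distance or display size changes the frequency axis and hence the score, which is why the metrics require the viewing conditions to be specified.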

An objective protocol for comparing the noise performance of silver halide film and digital sensor

(presented at Electronic Imaging 2012 conference, San Francisco, California, USA)

Digital sensors have obviously conquered the photography mass market. However, some photographers with very high expectations still use silver halide film. Are they merely nostalgics reluctant to embrace technology, or is there more than meets the eye? The answer is not so simple if we note that, by the end of the golden age, films were commonly scanned before printing. Nowadays film users have adopted digital technology and scan their film to take advantage of digital processing afterwards. It is therefore legitimate to evaluate silver halide film “with a digital eye”, under the assumption that processing can be applied as for a digital camera. The article describes in detail the operations needed to treat the film as a RAW digital sensor. In particular, we have to account for the film characteristic curve, the autocorrelation of the noise (related to film grain), and the sampling of the digital sensor (related to the Bayer filter array). We also describe the protocol that was established, from shooting to scanning. We then present and interpret the results for sensor response, signal-to-noise ratio, and dynamic range.

Full abstract (PDF)

Performance of extended depth of field systems and theoretical diffraction limit

(presented at Electronic Imaging 2012 conference, San Francisco, California, USA)

Extended depth of field (EDOF) cameras have recently emerged as a low-cost alternative to autofocus lenses. Different methods, based either on longitudinal chromatic aberration or on wavefront coding, have been proposed and have reached the market. The purpose of this article is to study the theoretical performance and limitations of wavefront coding approaches. The idea of these methods is to introduce a phase element that makes a trade-off between sharpness at the optimal focus position and the variation of the blur spot with respect to object distance. We show that there are theoretical bounds to this trade-off: given the aperture and the minimal MTF value required for suitable image quality, the pixel pitch imposes the maximal depth of field. We analyze the limits of depth-of-field extension for pixel pitches from 1.75µm to 1.1µm, particularly with regard to the increasing influence of diffraction.

Full abstract (PDF)
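The diffraction influence mentioned above can be illustrated with the classic Airy-disk formula; the 550 nm wavelength for green light is our assumption:

```python
# Compare the diffraction blur spot (Airy disk) with small pixel pitches.
# First-null Airy disk diameter: d = 2.44 * wavelength * f-number.

def airy_diameter_um(wavelength_um, f_number):
    return 2.44 * wavelength_um * f_number

d = airy_diameter_um(0.55, 2.8)     # green light at f/2.8
print(f"Airy disk at f/2.8: {d:.2f} um")
for pitch in (1.75, 1.4, 1.1):
    print(f"{pitch} um pixel: diffraction spot spans {d / pitch:.1f} pitches")
```

At f/2.8 the Airy disk is about 3.8 µm across, several times the 1.1 µm pitch, which is why diffraction increasingly dominates as pixels shrink.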

Information capacity: a measure of potential image quality of a digital camera

(presented at Electronic Imaging 2011 conference, San Francisco, California, USA)

The aim of this paper is to define an objective measurement for evaluating the performance of a digital camera. The challenge is to combine different flaws involving geometry (such as distortion or lateral chromatic aberration), light (such as luminance and color shading), and statistical phenomena (such as noise). We introduce the concept of information capacity, which accounts for all the main defects that can be observed in digital images, whether due to the optics or to the sensor. The information capacity describes the potential of the camera to produce good images. In particular, digital processing can correct some flaws (like distortion). Our definition of information takes the possible corrections into account, along with the fact that processing can neither retrieve lost information nor create new information. This paper extends some of our previous work, where information capacity was defined only for RAW sensors. The concept is extended to cameras with optical defects such as distortion, lateral and longitudinal chromatic aberration, or lens shading.

Full abstract (PDF)
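As a drastically simplified, Shannon-style illustration of the idea (our own toy formulation, not the paper's definition, which also models optics and corrections):

```python
import math

# Toy per-pixel information: bits ~ log2(1 + SNR^2), summed over pixels.
# This is an illustrative simplification; the paper's information capacity
# additionally accounts for geometric and shading defects and their
# correctability.

def pixel_bits(snr):
    return math.log2(1.0 + snr * snr)

def capacity_megabits(n_pixels, snr):
    return n_pixels * pixel_bits(snr) / 1e6

# Example: an 8 MP sensor at per-pixel SNR 40 (32 dB) vs SNR 10 (20 dB).
print(round(capacity_megabits(8_000_000, 40), 1), "Mbit")
print(round(capacity_megabits(8_000_000, 10), 1), "Mbit")
```

The point of the toy model is that capacity grows with both pixel count and per-pixel SNR, so the two can be traded against each other, which is the comparison the paper formalizes.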

Dead leaves model for measuring texture quality on a digital camera

(presented at Electronic Imaging 2010 conference,  San Jose, California, USA)

We describe the procedure to evaluate the image quality of a camera in terms of texture preservation. We use a stochastic model from stochastic geometry known as the dead leaves model. It intrinsically reproduces occlusion phenomena, producing edges at any scale and any orientation, possibly with a low level of contrast. An advantage of this synthetic model is that it provides a ground truth in terms of image statistics. In particular, its power spectrum follows a power law, as do many natural textures. We can therefore define a texture MTF as the ratio of the Fourier transform of the camera picture to the Fourier transform of the original target, and we fully describe the procedure to compute it. We compare the results with the traditional MTF (computed on a slanted edge as defined in the ISO 12233 standard) and show that the texture MTF is indeed more appropriate for describing fine-detail rendering. This is especially true for camera phones, which have to apply high levels of denoising and sharpening. Correlation with subjective evaluation is shown, as part of work done in the I3A/CPIQ initiative.

Full abstract (PDF)
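A minimal sketch of the texture-MTF computation, assuming a synthetic random target and a Gaussian blur standing in for the camera (not the Analyzer implementation):

```python
import numpy as np

# Texture MTF sketch: ratio of the Fourier amplitude spectrum of the
# captured image to that of the original target, radially averaged.
# The "camera" here is a synthetic Gaussian blur, so the recovered
# curve should match that blur's transfer function.

rng = np.random.default_rng(0)
target = rng.standard_normal((256, 256))   # stand-in for a dead-leaves chart

fy = np.fft.fftfreq(256)[:, None]
fx = np.fft.fftfreq(256)[None, :]
f = np.hypot(fx, fy)                       # radial frequency (cycles/pixel)
otf = np.exp(-(f / 0.15) ** 2)             # assumed camera blur (Gaussian OTF)

captured = np.real(np.fft.ifft2(np.fft.fft2(target) * otf))
ratio = np.abs(np.fft.fft2(captured)) / np.abs(np.fft.fft2(target))

# Radially average the 2-D ratio into a 1-D texture MTF curve.
bins = np.linspace(0, 0.5, 32)
idx = np.digitize(f.ravel(), bins)
mtf = np.array([ratio.ravel()[idx == i].mean() for i in range(1, len(bins))])
print(mtf[0], mtf[-1])   # near 1 at low frequency, near 0 toward Nyquist
```

A real measurement uses a printed dead-leaves chart with known statistics rather than white noise, and must also handle the camera's noise and sharpening, but the spectrum-ratio principle is the same.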

Measuring texture sharpness of a digital camera

(presented at Electronic Imaging 2009 conference, San Jose, California, USA)

A method for evaluating texture quality as shot by a camera is presented. It is shown that the usual sharpness measurements are not fully satisfactory for this task. A new target based on random geometry, using the so-called dead leaves model, is proposed. It contains objects of any size at any orientation and shares common statistics with natural images. Experiments show that the correlation between objective measurements derived from this target and subjective measurements conducted in the Camera Phone Image Quality initiative is excellent.

Full abstract (PDF)

Sensor information capacity and spectral sensitivities

(presented at Electronic Imaging 2009 conference, San Jose, California, USA)

In this paper, we numerically quantify the information capacity of a sensor by examining the different factors that can limit this capacity, namely sensor spectral response, noise, and sensor blur (due to fill factor, cross-talk, and diffraction, for a given aperture). In particular, we compare the effectiveness of the raw color space for different kinds of sensors. We also define an intrinsic notion of color sensitivity that generalizes some of our previous work, and we discuss how metamerism can be represented for a sensor.

Full abstract (PDF)

Extended depth-of-field using sharpness transport across color channels

(presented at Electronic Imaging 2009 conference, San Jose, California, USA)

In this paper we present an approach to extend the Depth-of-Field (DoF) for cell-phone miniature cameras by concurrently optimizing the optical system and post-capture digital processing techniques. Our lens design seeks to increase the longitudinal chromatic aberration in a desired fashion such that, for a given object distance, at least one color plane of the RGB image contains the in-focus scene information. Typically, red is made sharp for objects at infinity, green for intermediate distances, and blue for close distances. Comparing sharpness across colors gives an estimate of the object distance and therefore allows choosing the right set of digital filters as a function of that distance. Then, by copying the high frequencies of the sharpest color onto the other colors, we show theoretically and experimentally that it is possible to achieve a sharp image in all the colors within a larger DoF range. We compare our technique with other approaches that also aim to increase the DoF, such as wavefront coding.

Full abstract (PDF)
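A heavily simplified sketch of the sharpness-transport idea, assuming a box-blur high-pass as a stand-in for the paper's digital filters:

```python
import numpy as np

# Minimal "sharpness transport": copy the high frequencies of the sharpest
# color plane onto a blurrier one. The high-pass here is just "image minus
# box blur"; the actual filters in the paper are more elaborate.

def box_blur(img, k=5):
    out = img.astype(float)
    kernel = np.ones(k) / k
    for axis in (0, 1):   # separable blur: rows, then columns
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

def transport_sharpness(sharp_plane, blurry_plane):
    high = sharp_plane - box_blur(sharp_plane)   # detail of the sharp channel
    return blurry_plane + high                   # graft it onto the soft one

rng = np.random.default_rng(1)
green = rng.standard_normal((64, 64))   # assume green is the in-focus plane
red = box_blur(green)                   # red is defocused in this toy setup
restored = transport_sharpness(green, red)
# The restored red plane should resemble green more than the blurry red does.
print(np.abs(restored - green).mean() < np.abs(red - green).mean())
```

In the real system the choice of which plane is sharpest depends on the estimated object distance, as the abstract explains.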

Characterization and measurement of color fringing

(presented at Electronic Imaging 2009 conference, San Jose, California, USA)

This article explains the cause of the color fringing phenomenon that can be noticed in photographs, particularly on the edges of backlit objects. The nature of color fringing is optical, related in particular to the difference in blur spots at different wavelengths. Color fringing can therefore be observed in both digital and silver halide photography. The hypothesis that lateral chromatic aberration is the only cause of color fringing is discarded. The factors that can influence the intensity of color fringing are carefully studied, some of them being specific to digital photography. A protocol to measure color fringing with very good repeatability is described, as well as a means to predict color fringing from optical designs.

Full abstract (PDF)

Does resolution really increase image quality?

A general trend in the CMOS image sensor market is increasing resolution (a larger number of pixels) while keeping a small form factor by shrinking photosite size. This article discusses the impact of this trend on some of the main attributes of image quality. The first example is image sharpness. A smaller pitch theoretically allows a higher limiting resolution, which is derived from the Modulation Transfer Function (MTF). But recent sensor technologies (1.75μm, and soon 1.45μm) with a typical aperture of f/2.8 are clearly reaching the size of the diffraction blur spot. A second example is the impact on pixel light sensitivity and image sensor noise. For photonic noise, the Signal-to-Noise Ratio (SNR) is typically a decreasing function of the resolution. To evaluate whether shrinking pixel size can benefit image quality, the tradeoff between spatial resolution and light sensitivity is examined by comparing the image information capacity of sensors with varying pixel size. A theoretical analysis that takes into consideration measured and predictive models of pixel performance degradation and improvement associated with CMOS imager technology scaling is presented. This analysis is completed by a benchmarking of recent commercial sensors with different pixel technologies.

Full abstract (PDF)
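The photon-noise argument above can be illustrated with a toy model; the photon flux value and function name below are our own assumptions, not measured sensor data:

```python
import math

# Photon-noise-limited SNR scales with the square root of the photons
# collected, and photon count scales with pixel area, so halving the pitch
# costs about 6 dB. Real sensors also add read noise, dark current, and
# quantum-efficiency differences, which this toy model ignores.

PHOTONS_PER_UM2 = 1000.0   # assumed flux for a fixed exposure

def photon_snr_db(pitch_um):
    n = PHOTONS_PER_UM2 * pitch_um ** 2   # photons collected by one pixel
    return 20.0 * math.log10(math.sqrt(n))

for pitch in (2.2, 1.75, 1.4):
    print(f"{pitch} um pixel: {photon_snr_db(pitch):.1f} dB")
```

Running this shows the 1.4 µm pixel giving up roughly 4 dB of photonic SNR relative to the 2.2 µm pixel, which is the sharpness-versus-sensitivity tradeoff the article examines.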

Sensor spectral sensitivities, noise measurements and color sensitivity

This article proposes new measurements for evaluating the image quality of a camera, particularly its reproduction of colors. The concept of gamut is usually a topic of interest, but it is much better suited to output devices than to capture devices (sensors). Moreover, it does not take other important characteristics of the camera into account, such as noise. Color sensitivity, in contrast, is a global measurement relating the raw noise to the spectral sensitivities of the sensor. It provides an easy ranking of cameras. For an in-depth analysis of noise versus color rendering, the concept of Gamut SNR is introduced, describing the set of colors achievable at a given SNR (Signal-to-Noise Ratio). This representation provides a convenient visualization of which part of the gamut is most affected by noise and can be useful for camera tuning as well.

Full abstract (PDF)

Advances in Camera Phone Picture Quality

A unique digital postprocessing technique compensates for performance problems posed by ever-shrinking pixels

by Dr. Frédéric Guichard, DXOMARK

From Photonics Spectra, November 2007

As camera phones become ubiquitous, consumer demand for a photographic experience similar to that of traditional digital cameras is growing. Coupled with the ready availability of high-definition displays, this need has translated into a requirement for higher-resolution cameras in mobile phones. However, handset design aesthetics impose a much smaller form factor on the miniature camera modules built into handsets than can be accommodated by reusing the same technology found in digital still cameras.

One of the most challenging aspects of designing a high-resolution camera for a mobile phone is the limitation on the overall height of the camera, measured from the top of the lens to the back of the camera substrate. The typical target height is 6 mm or less, unless a more expensive folded-optics design is considered. Given the angular acceptance of CMOS image sensor pixels, the maximum-size sensor that can be used with such a thin camera measures approximately 4.5 mm diagonal. To increase the resolution without increasing the height of the camera (or the thickness of the phone), more pixels must fit into the array defined by this diagonal size. Using a 2.2 × 2.2 µm pixel size, 2-megapixel sensors can be used in these thin cameras. Achieving 3.2-megapixel resolution requires a 1.75 × 1.75 µm pixel size, and 5-megapixel resolution requires a 1.4 × 1.4 µm pixel.

Full abstract (PDF)
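The pixel-count arithmetic above can be sketched in a few lines; the 4:3 aspect ratio (giving a 3.6 mm × 2.7 mm active area for the stated 4.5 mm diagonal) is our assumption:

```python
import math

# Reproduce the megapixel arithmetic: how many pixels of a given pitch fit
# on a sensor with a 4.5 mm diagonal. Aspect ratio is assumed to be 4:3.

def megapixels(diagonal_mm, pitch_um, aspect=(4, 3)):
    w, h = aspect
    scale = diagonal_mm / math.hypot(w, h)           # mm per aspect unit
    width_um = w * scale * 1000
    height_um = h * scale * 1000
    return (width_um // pitch_um) * (height_um // pitch_um) / 1e6

for pitch in (2.2, 1.75, 1.4):
    print(f"{pitch} um pixels -> {megapixels(4.5, pitch):.1f} MP")
```

This recovers the figures quoted in the article: about 2 MP at 2.2 µm, 3.2 MP at 1.75 µm, and 5 MP at 1.4 µm.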