Meet our ultra realistic mannequins

These hyper-realistic mannequins are designed for image quality evaluation: each has a specified spectral response for its skin tone and all the details of a real human face. Because the mannequins are static (unlike a real person, who moves), test scenes built around them are strictly identical from shot to shot, making the resulting measurements repeatable and reliable. Using a mannequin also saves precious testing time for a lab operator.

We offer three mannequins, each with a different skin tone. They are produced by a specialized French provider. Before shipping, we test and measure the spectral response of each realistic head in our labs to ensure consistent quality, and we assign each mannequin a unique identification number for full traceability.

Eugene (REALMAN_001):

• Size: 500 x 250 x 400 mm

• Used in: HDR Portrait setup

Sienna (REALMAN_002_1):

• Size: 400 x 250 x 360 mm

Diana (REALMAN_003):

• Size: 450 x 300 x 470 mm

Key metrics:

  • Exposure
  • Noise
  • Face detail preservation
  • Perceptual analysis

Key features:

  • Material: Polyurethane foam, proprietary skin-like pigmentation, silicone, real hair, glass eyes
  • Add-on: Support trolley (SUPPORT_REALMAN) 895 x 445 x 1290 mm
  • Add-on: Motorized rotation system for realistic mannequins (ROT_REALMAN_001)

AI measurements on realistic mannequins

Each mannequin is a highly-repeatable hyper-realistic stand-in for a human model, accurate down to the finest details of their features and the spectral response of their skin.

These mannequins can be used by themselves or integrated in more complex setups.

The resulting images can be evaluated perceptually for many image quality attributes, and the high degree of repeatability also allows objective measurements, such as the exposure on the face, and AI-based metrics, such as the preservation of details.
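As an illustration of one such objective measurement, face exposure can be computed as the average luminance inside a region of interest on the mannequin's face. The sketch below is our own minimal example, not part of any lab toolchain: the function name and ROI format are assumptions, and we use the standard Rec. 709 luma weights for an 8-bit RGB image.

```python
import numpy as np

def face_exposure(image: np.ndarray, roi: tuple[int, int, int, int]) -> float:
    """Mean luminance (0-255) inside a rectangular face ROI.

    roi = (x, y, width, height); image is an 8-bit RGB array.
    """
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w].astype(np.float64)
    # Rec. 709 luma weights for RGB -> luminance
    luma = patch @ np.array([0.2126, 0.7152, 0.0722])
    return float(luma.mean())

# Synthetic example: a dark frame with a brighter "face" region
frame = np.full((480, 640, 3), 40, dtype=np.uint8)
frame[100:300, 200:400] = 180  # stand-in for the mannequin's face
print(face_exposure(frame, (200, 100, 200, 200)))  # mean luma of the face ROI
```

Because the mannequin never moves, the same ROI can be reused across every capture, which is what makes a measurement like this repeatable.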

The preservation of detail metric focuses on specific details of each mannequin. For Eugene, the details are best appreciated on his thick beard.

Figure 4. Region of Interest of the Detail Preservation Metric for Eugene

Figure 5. Region of Interest of the Detail Preservation Metric for Sienna

Figure 6. Region of Interest of the Detail Preservation Metric for Diana

Our automatic metric relies on a convolutional neural network (CNN) that reproduces, in less than a second, the detail preservation analysis a quality expert would perform visually. The results are provided on a perceptual scale expressed in just-objectionable differences (JOD), which has a very straightforward interpretation: a difference of one unit between the metric results of two images means that 75% of observers will be able to see a difference in the level of detail between the two images. This tool can complement or replace perceptual analysis, considerably speeding up the time needed to complete an evaluation.
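The JOD convention above can be illustrated with a short sketch. Under a common Thurstonian (Gaussian) reading of such perceptual scales — an assumption on our part, not a description of the CNN itself — the scale unit is chosen so that a 1-JOD gap maps to exactly 75% of observers seeing the difference, and larger gaps follow the normal CDF:

```python
from statistics import NormalDist

# Probit of 75%: scales JOD units so that a 1-JOD gap -> 75% of observers
_Z75 = NormalDist().inv_cdf(0.75)  # ~ 0.674

def jod_to_probability(delta_jod: float) -> float:
    """Expected share of observers who see a difference, given a JOD gap,
    under a Thurstonian (Gaussian) model of the perceptual scale."""
    return NormalDist().cdf(delta_jod * _Z75)

print(round(jod_to_probability(0.0), 2))  # identical images -> chance level, 0.5
print(round(jod_to_probability(1.0), 2))  # one JOD -> 0.75 by construction
```

This is only an interpretation aid for reading the metric's output; the actual mapping used by the tool may differ.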

How we made our Artificial head