Face Identity “Likeness”: Insights for the Study of Face Perception and Identification
Abstract
Colloquially, we commonly observe that some face images look more “like” an identity than
others. This experience stems from the fact that different images of the same identity can
vary in appearance, and that this appearance variation affects how closely an image resembles our own internal representation of what that identity should look like. Although we
perceive the “likeness” of face identities on a daily basis, surprisingly little is known about
how these perceptions are formed. Do we perceive face images as a better likeness if they
are photographed a certain way? Are face images of an identity perceived as a better likeness if they resemble images of that identity which have been seen previously? Further, are
identities represented by prototypes that reflect the viewing experience an observer has with
that identity? In a set of experiments, I addressed each of these questions using a combination of psychological and computational methods. First, using face images of identities
with which participants are unfamiliar, each identity shown across the same changes in
viewpoint and illumination, I tested whether higher likeness ratings are assigned
to certain viewpoint or illumination conditions (Experiment 1). The results showed that
participants who are unfamiliar with a face identity rate images as a better likeness when
the images show the identity in a more frontal viewpoint and with flash (as opposed to
ambient) illumination. At profile viewpoints, there is no difference in the likeness ratings
assigned to face images across illumination conditions. Next, using an image-based “face
space” generated by processing face images through a deep convolutional neural network
trained for face identification, I tested whether participants assign higher likeness ratings to
face images that either a) resemble a “central identity prototype” of a given face identity,
or b) exist within a denser region of that identity’s specific subspace within the overall
DCNN-generated face space (Simulation 1). This simulation demonstrated that measures of
local area density are consistent with how human observers rate the perceived likeness of a
face image. Further, the distance of an image from an identity-specific prototype showing
the same identity was not consistent with human ratings of perceived likeness. Finally, by
familiarizing participants with an identity using images that show the identity from a single viewpoint or illumination, I tested whether participants assign higher likeness ratings to
images that resemble those which were seen previously for a given identity (Experiments 2
and 3). The results showed that, regardless of the specific viewpoint/illumination of the face
image being rated or of the face images seen previously for a given identity, participants
rate images as a better likeness when the image being rated matches the viewpoint/illumination
of the images they were shown previously. Collectively,
these experiments provide insight into how variation in appearance is perceived across images of a given identity, and how this variation contributes to a face image being perceived
as a good likeness of the identity being portrayed.
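The two image-based measures compared in Simulation 1 can be sketched as follows. This is a minimal illustration, not the actual analysis pipeline: it assumes DCNN embeddings for one identity's images are already available as vectors (random vectors stand in here), and it operationalizes the "central identity prototype" as the mean embedding and local density as the inverse mean distance to the k nearest neighbors, both of which are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for DCNN embeddings of one identity: 20 images x 128-dim vectors.
# (In the experiments these would come from a face-identification network.)
embeddings = rng.normal(size=(20, 128))

# a) Distance of each image from a "central identity prototype",
#    here operationalized as the mean embedding across the images.
prototype = embeddings.mean(axis=0)
proto_dist = np.linalg.norm(embeddings - prototype, axis=1)

# b) Local area density within the identity's subspace, here operationalized
#    as the inverse of the mean distance to the k nearest same-identity images.
k = 5
pairwise = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
np.fill_diagonal(pairwise, np.inf)          # exclude each image's self-distance
knn_dist = np.sort(pairwise, axis=1)[:, :k]  # k smallest distances per image
density = 1.0 / knn_dist.mean(axis=1)        # higher = denser neighborhood
```

Under the pattern of results described above, images with higher `density` would be predicted to receive higher likeness ratings, whereas `proto_dist` would not track ratings.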