Video-based deception detection

Matthew L. Jensen, Thomas O. Meservy, Judee K. Burgoon, Jay F. Nunamaker

Research output: Chapter in Book/Report/Conference proceeding › Chapter


Abstract

This chapter outlines an approach for automatically extracting behavioral indicators from video and explores the possibility of using those indicators to predict human-interpretable judgments of involvement, dominance, tenseness, and arousal. The team utilized two-dimensional spatial inputs extracted from video to construct a set of discrete and inter-relational features. Three predictive models were then created using the extracted features as predictors and human-coded perceptions of involvement, tenseness, and arousal as the criteria. Through this research, the team explores the feasibility and validity of the approach and identifies how such an approach could contribute to the broader community.
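The pipeline described in the abstract can be illustrated with a minimal, hypothetical sketch: per-frame 2D positions of tracked regions are turned into discrete features (per-region motion) and inter-relational features (distances between regions), summarized per video segment, and regressed against human-coded ratings. The region names, feature statistics, synthetic data, and the use of ordinary least squares below are all illustrative assumptions, not the authors' actual feature set or models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def extract_features(points):
    """points: (n_frames, 3, 2) hypothetical 2D tracks of head, left hand, right hand."""
    head, left_hand, right_hand = points[:, 0], points[:, 1], points[:, 2]
    # Discrete features: frame-to-frame displacement magnitude of each region.
    head_motion = np.linalg.norm(np.diff(head, axis=0), axis=1)
    lh_motion = np.linalg.norm(np.diff(left_hand, axis=0), axis=1)
    rh_motion = np.linalg.norm(np.diff(right_hand, axis=0), axis=1)
    # Inter-relational features: per-frame distances between regions.
    hand_dist = np.linalg.norm(left_hand - right_hand, axis=1)
    head_lh_dist = np.linalg.norm(head - left_hand, axis=1)
    # Summarize each cue over the segment with mean and variance.
    cues = [head_motion, lh_motion, rh_motion, hand_dist, head_lh_dist]
    return np.array([stat for c in cues for stat in (c.mean(), c.var())])

# Placeholder data standing in for tracked video segments and coder ratings.
rng = np.random.default_rng(0)
segments = [rng.normal(size=(120, 3, 2)) for _ in range(20)]
arousal_ratings = rng.uniform(1, 7, size=20)

X = np.stack([extract_features(seg) for seg in segments])
model = LinearRegression().fit(X, arousal_ratings)  # one such model per construct
```

In this sketch a separate regression would be fit for each human-coded construct (involvement, tenseness, arousal), mirroring the three predictive models the abstract mentions.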

Original language: English (US)
Title of host publication: Intelligence and Security Informatics
Subtitle of host publication: Techniques and Applications
Editors: Hsinchun Chen, Christopher Yang
Pages: 425-441
Number of pages: 17
DOIs
State: Published - 2008

Publication series

Name: Studies in Computational Intelligence
Volume: 135

ASJC Scopus subject areas

  • Artificial Intelligence

