United States security officials may soon have much more than intuition to rely on when weighing the answers to the questions they pose to people every day to ferret out possible terrorists.
Computer and behavioural scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioural indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.
"The goal is to identify the perpetrator in a security setting before he or she has the chance to carry out the attack," said Venu Govindaraju, professor of computer science and engineering in the UB School of Engineering and Applied Sciences.
Govindaraju is co-principal investigator on the project with Mark G Frank, associate professor of communication in the UB College of Arts and Sciences.
The project, recently awarded a USD 800,000 grant by the National Science Foundation, will focus on developing, in real time, an accurate baseline of indicators specific to an individual during extensive interrogations, while also providing real-time clues during faster, routine screenings.
"We are developing a prototype that examines a video in a number of different security settings, automatically producing a single, integrated score of malfeasance likelihood," he said.
A key advantage of the UB system is that it will incorporate machine learning capabilities, which will allow it to "learn" from its subjects during the course of a 20-minute interview.
That's critical, Govindaraju said, because behavioural science research has repeatedly demonstrated that many behavioural clues to deceit are person-specific.
"As soon as a new person comes in for an interrogation, our programme will start tracking his or her behaviours, and start computing a baseline for that individual 'on the fly'," he said.
But the researchers caution that no technology, no matter how precise, is a substitute for human judgment.
"No behaviour always guarantees that someone is lying, but behaviours do predict emotions or thinking and that can help the security officer decide who to watch more carefully," said Frank.
"Random screening is fair, but is it effective?" asked Frank. "The question is, what do you base your decision on -- a random selection, your gut reaction or science? We believe science is a better basis and we hope our system will provide that edge to the security personnel."
Govindaraju said the UB system would also avoid some of the pitfalls that hamper a human screener's effectiveness.
"Human screeners have fatigue and bias, but the machine does not blink," he said.
The UB project is designed to solve one of the more challenging problems in developing accurate security systems -- fusing information from several biometrics, such as faces, voices and bodies.
"No single biometric is suited for all applications," said Govindaraju, who also is founder and director of UB's Center for Unified Biometrics and Sensors. "Here at CUBS, we take a unique approach to developing technologies that combine and 'tune' different biometrics to fit specific needs. In this project, we are focusing on how to analyse different behaviours and come up with a single malfeasance indicator."