October 29, 2012

Reuven Cohen | Forbes

In something that looks straight out of the CBS show “Person of Interest,” the science website Phys.org is reporting on a potentially important breakthrough from researchers at Carnegie Mellon.

In research sponsored by the United States Army Research Laboratory, the Carnegie Mellon researchers presented an artificial intelligence system that can watch real-time video surveillance feeds and predict what a person is likely to do next. The system can automatically identify and notify officials when it recognizes that an action is not permitted, detecting what the researchers describe as anomalous behavior.

According to the paper, one such example is a camera at an airport or bus station, with the autonomous system flagging a bag that has been abandoned for more than a few minutes.

The paper presents a complex knowledge infrastructure for a high-level artificial visual intelligence system, called the Cognitive Engine. In particular, it describes how the conceptual specifications of basic action types can be driven by hybrid semantic resources: in layman's terms, by the context of an action. For example, is a person leaving a bag because he's sitting next to it, or has that person left altogether?
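As a rough illustration of this contextual distinction, the same low-level event (a person separating from a bag) can map to different high-level actions depending on context. This is a hypothetical sketch only: the paper describes a knowledge-based Cognitive Engine, not rule-based code, and the names, thresholds, and fields below are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    """Simplified world state; field names and units are illustrative assumptions."""
    bag_present: bool
    owner_distance_m: float    # distance between the bag and its last known owner
    seconds_unattended: float  # how long no one has been near the bag

def classify_action(scene: Scene) -> str:
    # The same perceptual input ("person away from bag") is classified
    # differently once context is taken into account.
    if not scene.bag_present:
        return "no-event"
    if scene.owner_distance_m < 2.0:
        return "attending"               # owner is sitting next to the bag
    if scene.seconds_unattended > 180:   # "more than a few minutes"
        return "abandoned"               # flag for a human operator
    return "temporarily-unattended"

print(classify_action(Scene(True, 0.5, 0)))     # attending
print(classify_action(Scene(True, 50.0, 600)))  # abandoned
```

The point of the sketch is that the flag is not triggered by the bag alone but by the relationship between the bag, its owner, and elapsed time, which is the kind of contextual knowledge the Cognitive Engine is designed to reason over.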

The goal of the research is to create an artificial intelligence system that approaches human visual intelligence: the ability of a computer system to make effective and consistent detections. The researchers noted that humans evolved by learning to adapt and react properly to environmental stimuli, becoming extremely skilled at filtering and generalizing over perceptual data, making decisions, and acting on the basis of acquired information and background knowledge. Computer vision algorithms, by contrast, need to be complemented with higher-level tools of analysis involving knowledge representation and reasoning, often under conditions of uncertainty.

The Cognitive Engine is the core module of the Extended Activity Reasoning (EAR) system in the CMU Mind's Eye architecture. Mind's Eye is the name of the Defense Advanced Research Projects Agency (DARPA*) program for building AI systems that can filter surveillance footage to support remote human operators and automatically alert them whenever something suspicious is recognized, such as someone leaving a package in a parking lot and running away.

Alessandro Oltramari, a postdoctoral researcher, and Christian Lebiere, both from the Department of Psychology at Carnegie Mellon, suggest that this automated video surveillance approach could find applications in both military and civilian environments.

* DARPA is an agency of the United States Department of Defense responsible for the development of new technologies for use by the military.
