Video Analytics Myths, or Big Brother Is Watching You
Face recognition, one of the most in-demand and flexible technologies, is gradually conquering more and more industries and entering your life, even if you don't notice it. As video analytics gains momentum, a real outbreak of superstitions, doubts, and myths makes the technology look controversial to laypersons. In this post, we'll try to dispel some video analytics myths and answer the most pertinent and compelling questions.
Many of our customers worry that face recognition technology may be biased, prone to critical errors, or used to collect personal information. To back up their concerns, they cite examples of technology misuse or software bugs. This post addresses the most persistent myths and evaluates the actual risks.
Myth 1. Video analytics cannot be effective in a multinational country
What they say: The face recognition system cannot perform with the same efficiency when capturing people of different ethnic groups, which automatically renders it useless in multinational countries.
What the facts say: The face recognition system, like any other artificial intelligence, needs training. The more diverse the faces that pass through the system, the better it identifies key parameters across ethnic groups. Ensuring this is the responsibility of software developers: carelessness, aggravated by undue haste, may lead to conflicts detrimental to both the company's reputation and the technology's image.
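To make that responsibility concrete, here is a minimal sketch of a per-group accuracy check on a labeled evaluation set. The group names, the records, and the data are illustrative assumptions, not measurements from any real system.

```python
# Minimal sketch: per-group recognition accuracy on a labeled evaluation set.
# Each record is (demographic_group, was_recognized_correctly) -- illustrative data.
from collections import defaultdict

results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, hits = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    hits[group] += correct  # bool counts as 0 or 1

for group in totals:
    accuracy = hits[group] / totals[group]
    print(f"{group}: {accuracy:.0%} recognition accuracy")
    # A large gap between groups signals that the training set needs more
    # diverse data before deployment in a multinational setting.
```

A check like this belongs in the developer's test pipeline: it turns "the system needs more diverse training data" from a vague worry into a measurable gap.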
Myth 2. Video analytics may discredit an honest person
What they say: The face recognition system may erroneously flag an ordinary shop visitor as a criminal and make him/her suffer the consequences. That is why we cannot rely on the technology in such critical matters!
What the facts say: More often than not, such incidents are caused by mistakes made by the people who install and configure the video surveillance systems. Here are the key prerequisites for accurate video analytics:
- Good illumination. When implementing a face recognition system, always check the illuminance of the area covered by a camera: tests show that the darker the room, the more recognition errors occur (see the brightness-check sketch after this list).
- The right cameras. Face recognition quality heavily depends on the cameras: high definition and good capturing ability are mandatory for the technology to succeed.
- Camera positioning. Some companies deploy face recognition without first consulting the developers and either mount cameras too high or tilt them at the wrong angle. As a result, video analytics may identify people incorrectly.
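As a minimal illustration of the illumination point, the sketch below checks the mean brightness of a frame before handing it to the recognition pipeline. It assumes OpenCV and a standard camera stream; the threshold value is an illustrative assumption to be tuned per installation, not a vendor recommendation.

```python
# Minimal sketch of a pre-recognition lighting check, assuming OpenCV and a
# standard USB/RTSP camera stream. MIN_MEAN_BRIGHTNESS is an assumed,
# installation-specific value, not a recommended figure.
import cv2

MIN_MEAN_BRIGHTNESS = 60  # mean of 0-255 grayscale values; tune per site


def frame_is_bright_enough(frame) -> bool:
    """Return True if the frame looks bright enough to attempt recognition."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return gray.mean() >= MIN_MEAN_BRIGHTNESS


cap = cv2.VideoCapture(0)  # 0 = default camera; replace with your stream URL
ok, frame = cap.read()
if ok and frame_is_bright_enough(frame):
    pass  # hand the frame to the face recognition pipeline
else:
    print("Frame too dark or unavailable: improve illumination before recognition")
cap.release()
```

A check like this costs a few milliseconds per frame and prevents the system from making low-confidence matches on poorly lit footage.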
Myth 3. Cheating video analytics is easy
What they say: A violator can “deceive” the security system by putting on glasses, growing or shaving off a beard, or hiding his/her face under a headband or hood. In real life, the system will fail to recognize a violator and respond in time.
What the facts say: Face recognition uses neural networks that analyze 54 points on a human face. The system can recognize a violator even with up to 75% of the face covered, so the presence or absence of a beard or dark glasses will not deceive it. Moreover, once a suspect is recognized, the system alerts security officers automatically, outperforming ordinary surveillance systems in both speed and reliability.
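To show what the match-and-alert flow looks like in practice, here is a minimal sketch built on the open-source face_recognition library, which compares compact face encodings rather than raw pixels. The watchlist file names, the alert_security() helper, and the tolerance value are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch of watchlist matching and alerting, assuming the open-source
# face_recognition library. File names, alert_security(), and the tolerance
# value are illustrative assumptions.
import face_recognition


def alert_security(name: str) -> None:
    # Stand-in for a real notification channel (SMS, dashboard, radio, etc.).
    print(f"ALERT: possible match with watchlist entry '{name}'")


# Pre-computed 128-dimensional encodings of watchlisted faces.
watchlist = {
    "suspect_01": face_recognition.face_encodings(
        face_recognition.load_image_file("suspect_01.jpg"))[0],
}

frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    for name, known in watchlist.items():
        # Lower tolerance means a stricter match; 0.6 is the library default.
        if face_recognition.compare_faces([known], encoding, tolerance=0.6)[0]:
            alert_security(name)
```

Because the comparison runs on encodings of the whole face geometry rather than on individual features, a matching score can still clear the threshold when part of the face is hidden, which is why beards and glasses alone rarely defeat the system.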
Myth 4. Using video analytics technology is something illegal
What they say: I am afraid that my employees or visitors will accuse me of invading their privacy or violating their rights. I don't want them to think of me as Big Brother controlling every aspect of their lives.
What the facts say: The face recognition technology does not invade people's privacy. Using video analytics to ensure public safety is permitted by law, so it protects rather than violates the rights of law-abiding people. And when you integrate video analytics with customer service, you improve service quality and deliver a personal message to every customer, much like targeted Internet ads do, but right here and now.
Video analytics technology, built on AI, neural networks, and software, is nothing more than a foundation that can be tailored to the specifics and needs of any business. Its correct and ethical use is the responsibility of the companies implementing it, the developers, and those who train the AI. Face recognition is neither biased nor unfair in itself; incidents are most often due to improper implementation or misconfiguration.