HSE TRAININGS IN VIRTUAL REALITY

Health, safety & environmental (HSE) training in a virtual environment helps employees address complex issues more effectively, ensures maximum involvement in the production process, and allows for emergency drills that would be difficult to conduct in reality.

Whether it is a high-voltage equipment failure or a fire at a production facility, VR safety training helps reveal how employees react to emergencies or sudden obstacles, reproduces tricky situations where an incident does not follow the usual scenario, and, more importantly, prepares employees for such incidents in advance. For example, the introduction of VR into Ford’s manufacturing process reduced injuries by 70%.

VR TRAINING

VR training is a virtual reality app developed in line with an emergency response procedure and a staff training methodology prepared by the specialists of the customer’s training center. During such VR trainings, employees obtain the necessary knowledge and skills and pass tests in a computer simulation, with the test results available to the supervisor, who can thus speed up onboarding and the absorption of new knowledge through practice.

Any hazardous situation or emergency, such as an accident on an electrical distribution network, can be easily reproduced in a computer simulation: virtual reality technologies can show almost any content, even a “playback” of past accidents to help prevent them in the future.

The consequences of any HSE breach can be reinforced in virtual reality through visual effects, like a full-screen explosion accompanied by bruising, thus affecting the psycho-emotional state of employees and making them more careful in their routine activities.


VR TRAINING FORMATS

VR training formats may differ depending on a particular task, each having certain advantages and limitations.

VR trainings can be tailored to various platforms: PCs, VR glasses and headsets, industrial VR systems, or multi-user devices with VR support. Each format differs in the level of user immersion, mobility, and graphics quality.

Augmented reality 3D apps for smartphones and tablets can be used almost anywhere; however, the mobility advantage comes at the cost of rather poor graphics quality.

Training on desktops (PCs) and laptops boasts the highest possible graphics quality with an average level of immersion and mobility, and is the most common format used by corporate training centers today.

VR tools are usually divided into:

  • VR glasses using a mobile device (“mobile VR”)
  • VR systems (“stationary VR”), including helmets, joysticks, space positioning systems, and powerful workstations (VR-ready laptops or powerful PCs) to render content at the highest possible quality

Each format has its advantages and limitations. Mobile VR offers a high level of mobility, as users need just a smartphone or the VR glasses themselves, and a high degree of immersion, since the employee’s peripheral vision is not distracted. Both items easily fit in a backpack, a medium-sized bag, or a classic briefcase. As a result, coaches are not tied to a specific training location and can easily conduct field demonstrations almost anywhere.

However, VR glasses do not support physical movement: all content is perceived from a single point in a 360-degree format.

VR systems (HMDs) offer the best possible graphics and the ability to move within a space of about nine square meters, making it possible to become truly immersed in virtual reality and interact with the surrounding virtual space using joysticks.

This format can be considered mobile as well, because the equipment itself does not take much space and is not very heavy, but installation and configuration take time and effort. This tool is perfect for classrooms at corporate training centers. In addition, a multiplayer solution is available for collective training, where users can see each other’s avatars, communicate, and perform collective actions in a virtual world, achieving the maximum degree of immersion.


HOLD ON TO YOUR HARD HATS: NEURAL NETWORK WATCHING OVER YOU AND 20 TYPES OF PPE YOU MAY HAVE

Is there a way to make sure that workers actually wear and use personal protective equipment (PPE) at hazardous production facilities? No big deal: just train a neural network to recognize whether PPE is on workers all over the facility, and then prevent accidents. How? That’s what today’s post is about.

Is a standard CCTV system effective? There are ordinary people behind the cameras who get tired, get distracted, and have momentary lapses in concentration that could prove fatal for workshop personnel. The good news is that progress comes to save the day: a properly trained artificial intelligence detects safety violations at once and immediately alerts a worker, e.g. with a sharp sound, a vibration, or an SMS.

We trained AI-based video analytics to monitor safety compliance in the workplace in multiple ways:

Cameras:
  • easily recognize when a person lacks a hard hat;
  • detect whether a person lacks goggles, gloves, a safety harness (the carabiner will get you busted), a high-visibility vest, a respirator, a cap, or other personal protective equipment at hazardous production facilities;
  • count people at the facility and memorize the number of people present at a certain time;
  • support customized movement detectors for specific operations, such as machinery warehousing, construction work, oil rig operations, and others;
  • combine data from electrical equipment, machinery, oil rig sensors, and other sources with people’s activities caught on camera to detect complex cases;
  • trigger an alarm when a person enters a hazardous area, with the trigger configurable to follow the area’s machinery starts and halts.
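As an illustration of the kind of rule such camera detections can feed, here is a minimal sketch. The function names, box format, and thresholds are assumptions for illustration, not the actual Digital Worker logic: a hard-hat violation is flagged when no detected hat box overlaps the head region of a person box.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def missing_hard_hat(person_box, hat_boxes, thresh=0.1):
    """True if no detected hat overlaps the head region (top 25% of the person box)."""
    x1, y1, x2, y2 = person_box
    head = (x1, y1, x2, y1 + 0.25 * (y2 - y1))
    return all(iou(head, h) < thresh for h in hat_boxes)

worker = (100, 50, 180, 300)          # person bounding box
hat_on_head = (110, 40, 170, 90)      # hat box overlapping the head region
# missing_hard_hat(worker, []) flags a violation; adding hat_on_head clears it
```

A real system would of course take these boxes from a trained object detector; the point here is only the overlap rule that turns raw detections into an alert.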

We called this platform for workplace accident prevention “Digital Worker”, and its video analytics tool can detect:

  • hard hats,
  • worker outfits: vests, boots, life jackets,
  • safety carabiners,
  • respirators,
  • safety goggles,
  • correct vest use (which matters when working with electrical equipment),
  • large tools carried beyond the perimeter,
  • gloves of different lengths.

How does the Digital Worker IoT platform count people and know a worker’s job? One way is visual differentiation: the system can see the color of the safety wear or hard hat on a worker. Take a drilling site, where hard hat color depends on the job: blue hard hats are worn by drilling engineers, green ones by drilling supervisors, and so on. The site is equipped with color video cameras, and the number of hard hats counted in real time equals the number of workers. Each camera covers a specific area of the drilling site. At the end of the day, all this data is consolidated into a schedule for every area showing which workers did the job, how many of them, and in which area.
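The color-based headcount described above can be sketched roughly like this. The color-to-role mapping and the data format are illustrative assumptions, not the platform’s real schema:

```python
from collections import Counter

# Illustrative color-to-role mapping; actual assignments vary by site.
HAT_COLOR_TO_ROLE = {
    "blue": "drilling engineer",
    "green": "drilling supervisor",
    "white": "site manager",
}

def headcount_by_role(detections):
    """detections: (camera_area, hat_color) tuples from one time slice.
    Returns a Counter keyed by (area, role)."""
    counts = Counter()
    for area, color in detections:
        role = HAT_COLOR_TO_ROLE.get(color, "unknown")
        counts[(area, role)] += 1
    return counts

frames = [("rig floor", "blue"), ("rig floor", "blue"), ("mud pits", "green")]
counts = headcount_by_role(frames)
# counts[("rig floor", "drilling engineer")] == 2; total headcount == 3
```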

The system recognizes dangerous situations and notifies the operator about them. For example, it is forbidden to be near the drilling rig while it is operating: if a worker approaches the forbidden area, the camera notices it and sends an alert.

The system also reacts when foreign objects interfere with machinery operation: for example, it can detect an animal on the production line.

Can you trick a neural network? Let’s say a worker fastens a hard hat to a belt. What happens then? In such a case, two detectors work together: one finds the bone structure, the other checks whether a color spot sits on top of it, and the system looks for objects that move in sync. A hard hat on a belt will hardly “look around” the way a person does, because to do so it would have to be on a head that moves in a certain way. Such movement is easily detected, so tricksters are caught before they break the rules.
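The “moves in sync” idea can be illustrated with a toy version. The track format, tolerance, and names are assumptions for illustration, not the production detectors: two centroid tracks move in sync when their frame-to-frame displacements stay close.

```python
def moves_in_sync(track_a, track_b, tol=5.0):
    """Compare frame-to-frame displacements of two (x, y) centroid tracks.
    True when the tracks move together within `tol` pixels per step."""
    def steps(track):
        return [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(track, track[1:])]
    diffs = [abs(ax - bx) + abs(ay - by)
             for (ax, ay), (bx, by) in zip(steps(track_a), steps(track_b))]
    return bool(diffs) and max(diffs) <= tol

head = [(100, 50), (104, 48), (109, 47)]            # head keypoint drifting right
hat_on_head = [(100, 40), (104, 38), (109, 37)]     # hat following the head
hat_on_belt = [(120, 200), (120, 201), (121, 200)]  # hat barely moving at all
# moves_in_sync(head, hat_on_head) holds; moves_in_sync(head, hat_on_belt) does not
```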

What else can a neural network see?

  • Falling. If a worker falls, the machine nearby can be stopped instantly.
  • Workers in hazardous areas. Metallurgical enterprises engage people to work close to tanks filled with red-hot steel, and standing on the wrong side of a tank can be dangerous.
  • Whether a worker has dozed off at the workplace or is moving around.
  • Steam jets, smoke, pipe integrity violations, and fire.
  • Whether hatches and doors are closed.
  • Lost items: it is very convenient to watch over things in the clean zone at production facilities, especially chemical ones.

The Digital Worker platform is a mix of IoT and AI technologies and tools for industrial enterprises and construction companies, developed to ensure employee safety and improve performance, with a focus on minimizing the number of accidents and preventing fraud and safety violations.

LIE TO ME: AI TO DETECT DECEPTION

People lie. Statistically, a person aged 18 to 44 tells a lie at least twice every 24 hours. Sad but true: some lies can endanger the people around. So let us dive into innovations that successfully detect deception around the world.

One of the best places to implement such technologies is the airport. The Automated Virtual Agent for Truth Assessments in Real-Time, or AVATAR, basically a lie-detecting computer kiosk, is installed in some international airports in the USA, Canada, and the European Union. A 3D avatar of a customs officer appears on the terminal’s screen and asks travelers a few questions, while the system assesses the respondent and reveals deceptive or risky behaviors:

  • A near-infrared camera under the screen captures eye movement, gaze direction, and pupil dilation.
  • A microphone logs changes in voice pitch.
  • A touchscreen panel scans fingerprints when travelers input their data and automatically checks them against a database of offenders.
  • An RFID system can identify what passport the traveler carries in a bag or a pocket.
  • A floor mat in front of the kiosk hides motion sensors that detect signs of nervousness, e.g. when respondents curl their toes.
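How such multi-sensor signals might be fused into a single flag can be sketched as follows. The channel names, weights, and threshold are purely illustrative assumptions, not AVATAR’s actual algorithm:

```python
# Illustrative sensor fusion: each channel reports a 0..1 anomaly score.
WEIGHTS = {"eye": 0.35, "voice": 0.25, "fingerprint_match": 0.25, "motion": 0.15}

def risk_score(signals):
    """Weighted sum of per-sensor anomaly scores (all weights are assumptions)."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def flag_for_review(signals, threshold=0.5):
    """Send the traveler to a human officer when the fused score is high."""
    return risk_score(signals) >= threshold

calm = {"eye": 0.1, "voice": 0.2, "fingerprint_match": 0.0, "motion": 0.1}
nervous = {"eye": 0.8, "voice": 0.7, "fingerprint_match": 1.0, "motion": 0.6}
# flag_for_review(calm) is False; flag_for_review(nervous) is True
```

The design choice worth noting is that no single sensor decides: only the combination of channels crosses the threshold, which is why the kiosk stacks several independent measurements.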

According to Aaron Elkins, a professor at San Diego State University, the AVATAR as a deception-detection judge has a success rate of 60% to 75%, and sometimes up to 80%. “Generally, the accuracy of humans as judges is about 54% to 60% at the most,” he said.

Yet another solution was developed by Converus (Lehi, Utah), which claims its EyeDetect is the most accurate lie detector available. The software runs on a computer, while a specialized camera captures the eye movement of the person being tested. During a 30-minute true/false test, the camera captures changes in pupillary response, eye movement, blinking, and staring, and makes other assessments to evaluate how honest the respondent is. The system’s algorithm makes 60 measurements per second, which adds up to roughly 108,000 over a 30-minute test. The Department of State recently paid Converus $25,000 to use EyeDetect when vetting local hires at the US Embassy in Guatemala, WIRED’s reporting revealed.

iCognative suggests another approach: its technology detects whether specific information is stored in the brain by measuring brainwaves. A wireless headset placed on a person’s head uses sensors that collect brain responses from the scalp along with muscle movements. The person goes through a test with special triggers (words, phrases, or pictures) that form associations and provoke a reaction from the brain. The iCognative software analyzes the EEG signals and determines whether the information under test is present or absent.


Innovative technologies in deception detection can make a difference: more crimes cleared, more offenders caught, and, no less important, fewer violent attempts in the future. If you are interested in implementing AI for security or other purposes, contact NNTC and we will provide all the information and help you need.

TOP TECHNOLOGY TRENDS OF 2019 BY GARTNER

Gartner, Inc., a research and advisory company, has revealed a list of technology trends that will have an impact on businesses in 2019. Here are the most interesting of them – we are sharing five out of ten technology trends highlighted by Gartner analysts David Cearley and Brian Burke (Top 10 Strategic Technology Trends for 2019, published 15/10/18).

Autonomous Things
Autonomous things, i.e. robots, drones, vehicles, appliances, and agents, use AI to automate functions previously performed by humans. Experts believe that interaction of autonomous devices will be the main path for AI development.

Augmented Analytics 
Augmented analytics enables the testing of a broad range of hypotheses, thus opening up new data processing and analysis opportunities. Moreover, automated insights of augmented analytics can be embedded into enterprise applications to optimize decisions and actions of all employees. Augmented analytics includes:

  • Data preparation using machine learning automation to augment data profiling and quality, harmonization, modeling, manipulation, enrichment, metadata development, and cataloging.
  • Business intelligence enabling business users and citizen data scientists to automatically find information, visualize and narrate relevant findings without building models or writing algorithms.
  • Augmented data science and machine learning that use AI to automate key aspects of AI modeling, such as feature engineering, model selection, operationalization, explanation, tuning, and management.

AI-Driven Development 
New platforms will allow application developers to integrate AI capabilities and models into a solution without the help of professional data scientists.

Tools used to build AI-powered solutions (AI platforms and services) are expanding to target not only data scientists, but also the community of professional developers. These tools are, in turn, being empowered with AI-driven capabilities that help professional developers and automate tasks related to the development of AI-enhanced solutions.

AI-enabled tools in particular are evolving from assisting with and automating application development functions to automating more sophisticated business-domain processes (from general development to business solution design).

Digital Twins
A digital twin refers to the digital representation of a real-world entity, process, or system. Separate digital twins can interconnect to form larger and more complex systems. They are mainly used in the Internet of Things (IoT) to monitor system health, find new ways to improve efficiency, and help develop new technologies and services. Experts say that, at the next stage of technology evolution, enterprises will implement digital twins of their entire organizations (so-called DTOs).
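A minimal sketch of the idea, assuming a pump asset with a temperature limit (the asset, field names, and threshold are illustrative, not any particular vendor’s model): the twin mirrors the last reported state of the physical asset and derives a health flag from it.

```python
class DigitalTwin:
    """Minimal digital twin: mirrors the last reported state of a physical
    asset and derives a health flag from it (field names are illustrative)."""

    def __init__(self, asset_id, max_temp_c=90.0):
        self.asset_id = asset_id
        self.max_temp_c = max_temp_c
        self.state = {}

    def ingest(self, telemetry):
        """Update the mirrored state from an IoT telemetry message."""
        self.state.update(telemetry)

    def healthy(self):
        """Derived insight: is the asset within its operating envelope?"""
        return self.state.get("temp_c", 0.0) <= self.max_temp_c

pump = DigitalTwin("pump-17")
pump.ingest({"temp_c": 71.5, "rpm": 1450})   # healthy at this point
pump.ingest({"temp_c": 96.0})                # overheating: healthy() now fails
```

A real twin would also model behavior and history, but even this skeleton shows the core loop: telemetry in, mirrored state, derived insight out.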

NNTC also created a digital twin for a massive enterprise: in 2018, we launched our “Digital Worker” safety platform. The platform shows “the big picture” of the enterprise by combining multiple events (including in 3D format) registered by various systems: video analytics, industrial and wearable IoT, access control, SCADA, and others.

Smart Spaces
A smart space is a physical or digital environment in which humans interact with technology-enabled systems and which evolves across five key dimensions: openness, connectedness, coordination, intelligence, and scope. According to David Cearley, a Gartner analyst, this trend has been coalescing for some time around elements such as smart cities, digital workplaces, smart homes, and connected factories.

*Source: Gartner Identifies the Top 10 Strategic Technology Trends for 2019

IT’S TIME TO BUST THESE ARTIFICIAL INTELLIGENCE MYTHS

Artificial intelligence (AI), a sharp uptrend of 2018, keeps running high this year. However, many people are still wary of it. No matter how great the scientific innovations are, myths about AI remain widespread, and it is high time to take a close, fresh look at the most popular ones. So let’s bust them.

AI: Much More Than Just a Robot

Pop culture actively promotes AI as android robots (Blade Runner; I, Robot; Chappie; Detroit: Become Human; Star Trek, etc.). Robots are truly an outstanding example of what AI can be. Just think of those running, jumping and dancing bad boys made by Boston Dynamics or Pepper robots, famous welcoming assistants. However, AI goes far beyond this, and you encounter this technology even more often than you can imagine.

For example, artificial intelligence recognizes and processes the pictures you take on your phone. Surely, more megapixels mean a better picture, but the market is dominated by companies whose powerful phones are paired with strong processing algorithms (color correction, white balance, zoom, background blur), whose quality directly depends on artificial intelligence performance.

Another example of AI application is healthcare. A neural network by Third Opinion analyzes a patient’s medical data (X-rays, ultrasound scans, MRIs, blood test results) and detects abnormal conditions as well as medical professionals do. The AI also watches over us: a video analysis system by VisionLabs, deployed with the help of NNTC, identifies individuals in public places, finds matches against a wanted-persons database, and promptly alerts the security service.

Last but not least, AI drives science forward. On April 10, 2019, an algorithm developed by Katie Bouman, an MIT graduate, enabled scientists to present the first ever image of a black hole. She created the algorithm for visualizing data from telescopes around the world that observed the black hole, with imaging achieving angular resolutions as fine as 10 microarcseconds.

AI Will Not Take Your Job

Robots will never completely replace humans, that’s for sure, since feelings, emotions, and critical thinking are all beyond machines’ capabilities. However, robots excel at calculations and simple tasks: no hard choices or emotional investment.

As a rule, AI can perfectly handle only one particular task, like an AI that plays chess: it does nothing but play chess, though it is at the very top of the game. But if you ask it to distinguish a kitten from a puppy in a picture, you will see an epic fail.

Lacking the emotional investment that defines the art professions and is inherent in service and teaching, artificial intelligence has absolutely no clue how to act in critical situations when time is running out and critical thinking is a must. AI still needs human support and supervision, so it cannot act without guidance yet.

DRONES COMBAT ILLEGAL FISHING

How can drones help fight illegal fishing? Let’s discuss the most common use case.

To detect and track illegal fishing vessels in the sea, a drone has to cover the largest possible area during one flight. In addition, drones should carry cameras to take photos and videos, powerful transmitters to stream video to a command center, and high-capacity batteries to ensure long operation.  

Today, responsible authorities use long-range fixed-wing aircraft, which meet the above requirements, but at the cost of convenience and flexibility:

  1. A fixed-wing aircraft requires an airstrip to take off and land, which limits mobility and the patrol area.
  2. If no airstrip is available, a catapult is required for take-off and a parachute for landing. The catapult needs transportation and time to deploy, while the parachute system adds weight to the aircraft, reducing its range and/or payload.
  3. A fixed-wing aircraft is, basically, a plane: a skilled pilot is required for take-off, flight, and landing, otherwise the aircraft will crash.

To overcome the above limitations while keeping the required drone range, VTOL (vertical take-off and landing) aircraft can be used. Recently, the market has seen many announcements of VTOL aircraft, ranging from small 1.5 m wingspan models to larger 4 m ones.

Such aircraft use two major designs: a) tilt rotors, where the same motors and propellers are used for take-off, landing, and horizontal flight, and b) designs that use propellers for horizontal flight and dedicated VTOL rotors for take-off and landing. Dedicated VTOL rotors are easier to build and maintain; however, they create additional drag during horizontal flight, reducing flight range. Tilt rotors, on the other hand, require additional mechanics to operate.

Overall, despite these trade-offs, VTOL aircraft can greatly increase the efficiency of patrol operations against illegal fishing, mainly because patrol areas can be changed easily.
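The patrol-efficiency argument boils down to simple arithmetic: the area a drone can sweep in one flight is its cruise speed times the sensor swath times the endurance. A sketch with purely illustrative figures, not the specs of any particular aircraft:

```python
def swept_area_km2(speed_kmh, swath_km, endurance_h):
    """Area a single drone can sweep in one flight:
    cruise speed x sensor swath x endurance."""
    return speed_kmh * swath_km * endurance_h

# Illustrative figures only (assumed, not real aircraft specs):
fixed_wing = swept_area_km2(speed_kmh=100, swath_km=2, endurance_h=6)  # 1200 km^2
vtol = swept_area_km2(speed_kmh=90, swath_km=2, endurance_h=5)         # 900 km^2
```

Under these assumed numbers the fixed-wing design sweeps more area per flight, which is exactly the trade-off in the text: VTOL gives up some range for the freedom to relocate the patrol area without an airstrip or catapult.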

ARTIFICIAL INTELLIGENCE MANAGES DOCUMENTS, OR HOW DBRAIN SPECIALISTS DROP PROCESSING TIME DOWN TO SECONDS

Document submission and processing have always entailed filling in heaps of forms, sending scans, and running numerous data checks and re-checks, something that requires much time and effort, until you opt for OCR-powered AI.

OCR stands for Optical Character Recognition, a technology that takes an image, breaks it into fields, scans them, and automatically transfers the data to the respective forms (agreements, applications, CRM databases, etc.).
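The “transfer data to respective forms” step can be sketched as a simple field mapping. The field names and form schema below are illustrative assumptions, not Dbrain’s actual data model:

```python
# Sketch of the routing step that follows OCR: recognized (field, text) pairs
# are mapped onto a target form. All field names are illustrative.
PASSPORT_TO_VISA_FORM = {
    "surname": "applicant_last_name",
    "given_names": "applicant_first_name",
    "passport_no": "travel_document_number",
}

def populate_form(ocr_fields):
    """Map OCR output onto a visa-application form, skipping unknown fields."""
    return {PASSPORT_TO_VISA_FORM[k]: v.strip()
            for k, v in ocr_fields.items() if k in PASSPORT_TO_VISA_FORM}

scan = {"surname": " DOE ", "given_names": "JANE",
        "passport_no": "X1234567", "mrz": "..."}
# populate_form(scan) -> {"applicant_last_name": "DOE", ...}
```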

Imagine that you want to travel abroad. No more long and tedious communication with a travel agent, no more visa applications to fill in, and no more fear of making a mistake and spoiling a form — just scan your passport with OCR, and all your further applications will be automatically populated with necessary data. This technology adopted by major corporations and government agencies can reduce paperwork and alleviate customer stress.

OCR is also used to:

  • automatically read bank cards
  • instantly recognize passports
  • automatically enter invoice details for online payment
  • quickly enter data into agreements
  • reconcile customer data obtained from different sources
  • automatically populate CRM databases
  • and do many other things

However, the system is not ideal yet, and Dbrain, the technology developer, admits that text recognition errors are its greatest shortcoming, especially when processing photos with sharp folds, affected by backlight, or taken with a low-end phone. To solve the problem, Dbrain added two functions on top of the OCR technology.

  • Context analysis. A scanned text is additionally processed by a neural network taught to consider context and automatically correct errors, similar to how Google corrects typos in search queries.
  • A human-in-the-loop concept. Text extracted by the system is transmitted in real time to skilled experts connected to the Dbrain platform for manual checking. This human-and-machine combination improves recognition accuracy from 85% to 99% for all texts, including handwritten ones. Another remarkable advantage of the manual check is that the algorithm learns to find and correct errors, with recognition quality growing over time.
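A toy version of such context-aware correction can fix common digit/letter confusions depending on the field type. The confusion table and field types are illustrative assumptions, not Dbrain’s model, which uses a trained neural network rather than a fixed lookup:

```python
# Toy context correction: in numeric fields, letters that OCR commonly
# confuses with digits are mapped back; in name fields, the reverse.
TO_DIGIT = str.maketrans({"O": "0", "o": "0", "I": "1", "l": "1", "S": "5", "B": "8"})
TO_LETTER = str.maketrans({"0": "O", "1": "I", "5": "S", "8": "B"})

def correct(text, field_type):
    """Apply the confusion map that matches the field's expected content."""
    if field_type == "number":
        return text.translate(TO_DIGIT)
    if field_type == "name":
        return text.translate(TO_LETTER)
    return text

# correct("4O1l2S", "number") -> "401125"
# correct("J0HN", "name")     -> "JOHN"
```

The point of the sketch is the role of context: the same glyph is resolved differently depending on whether the surrounding field expects digits or letters, which is what the neural network does at scale.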

Users should not worry about personal data confidentiality, since Dbrain says personal data is transmitted in an anonymized form. The algorithm blurs the image and breaks the user’s passport into several fields on the client side, so information reaches Dbrain servers in an anonymized form, preventing any field-to-person match. Fields are recognized independently of each other and transmitted back to the client in encrypted form over HTTPS, all in less than a second.