Real-Time Hazard Detection

via Machine Vision for Retail Robots

  • Client Location: USA
  • Project Duration: 2017-2018
  • Team Size: 3
  • Tech Stack: C++, MFC/Qt, OpenCV, TensorFlow

In 2017, a Fortune 200 company facing the challenge of finding a hazard detection solution for its new line of retail robots was referred to Volo. Since no such solution was available on the market at the time, the company needed to be sure that its new tech partner had the capacity and experience to handle a complex, research-intensive project. While its senior management was in Armenia to get acquainted with Volo and discuss a possible partnership, they shared their concerns about an issue they were experiencing with their line of retail robots. After assessing Volo's previous experience with image processing solutions, they put their lingering doubts to rest. With deadlines pending, they turned to us for help.

The Challenge

Today’s fast-paced retail environment demands that companies invest in solutions that speed up their services and respond quickly to issues that impact their customers. One such issue is detecting spills, trash, and other hazards in retail stores and promptly notifying staff to take care of them. Our partner was a provider of retail store maintenance robots. During testing, they learned that their robots had trouble recognizing small objects (under 2 cm in height) and transparent fluids. Although the robots were equipped with monocular vision to collect and sort hazard information, they could not identify and bypass these particularly challenging objects, which often resulted in malfunctions: robots would get stuck in sticky fluids such as honey or oil.



The project required us to overcome several obstacles:

  • The robots needed to detect objects under 2 cm tall.
  • The robots needed to recognize and detect objects even when the surface or pattern of the floor changed.
  • The robots had limited processing power and battery life, which our solution had to accommodate.
  • The existing hardware of the robots was not subject to change.

The Solution

We took a three-pronged approach to solving the problem. Given our partner's pressing deadlines, we formed three separate teams to work on three possible solutions simultaneously.

Machine Learning

We used machine learning algorithms to improve the robots' trash recognition capabilities. While these algorithms did improve hazard detection, they required more processing power, which drained battery life and reduced the robots' operational time.


Image Processing

Since image processing cannot achieve 100% accuracy in general, we developed algorithms that improved the robot's performance in specific scenarios. For example, if the pattern and texture of the floor in a particular retail store were constant, our solution achieved 90-95% accuracy. The algorithm configurations could be easily adjusted, uploaded to the robot controller application, and calibrated for different environments.

Our algorithm also worked without preconfiguring the floor's pattern and texture, though with lower accuracy.

Infrared Data Processing

We developed and tested several image processing algorithms that used data from the robot's infrared camera. The plan was to combine and process data from both the infrared and the regular cameras for more accurate results. However, the robot's hardware limitations again prevented us from making the changes needed for full infrared support.


After trying and testing these three approaches, we concluded that our image recognition algorithm was best suited to this particular robot hardware. We continue to tune our algorithms, and with a robot with more advanced hardware we could combine all three, which would yield very high accuracy.


Technologies Used

  • Our image processing algorithm was developed entirely using C++.
  • Auxiliary software with a corresponding GUI was developed for testing the image processing algorithm (C++, MFC/Qt, OpenCV).
  • The robot's mobility and agility were achieved by developing solutions and algorithms for ROS and by using lidar (for avoiding large objects and obstacles).
  • For the machine learning solution, we used a custom-trained model within the TensorFlow framework.