Researchers from the Department of Electronics and Electrical Communications at IIT Kharagpur have developed a technology that removes rain-induced blur from videos in real time and displays the restored footage on a vehicle's windscreen to reduce accidents.
Accidents caused by low visibility during heavy rain are a major concern worldwide, and a technological solution was long awaited. The solution also needed to deliver real-time visual output with minimal error, since any deviation in the imaging carries considerable risk.
The technology, Real-time Rain Removal from Videos, is a proprietary algorithm that offers a significant advantage for Driver Assistance Systems (DAS), which use video information to convey traffic-related information.
“The technology is envisioned to be used to enhance the safety of air, rail, ship and auto transportation by providing on-screen visualization of clear videos in rainy conditions. The visual acuity of rainy videos captured by surveillance cameras can also be enhanced by this algorithm,” said lead researcher Prof. Sudipta Mukhopadhyay, faculty at IIT Kharagpur.
“In the proposed solution, the time-evolution properties of consecutive video frames are analysed, and the detection of rain-affected regions, along with their restoration, is proposed. Unlike previous approaches, the proposed algorithm does not assume the shape, size, direction and velocity of the raindrops or the intensity of rain, which makes it robust to different rain conditions. It is also able to distinguish moving objects from the rain regions in the video. This approach requires fewer frames for removal of rain from videos, reducing the delay and execution time of the algorithm and thus providing better frame rates than other approaches,” explained researcher Abhishek Kumar Tripathi.
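The idea of exploiting the temporal evolution of consecutive frames can be illustrated with a minimal sketch: a rain streak appears as a brief intensity spike at a pixel, so comparing each frame against a short temporal median reveals and repairs rain-affected regions. This is an assumed, simplified illustration of the general principle, not the patented algorithm; the window size and the threshold of 10 intensity levels are arbitrary choices for the sketch.

```python
import numpy as np

def remove_rain_temporal(frames, half_window=2, threshold=10.0):
    """Illustrative temporal rain removal (NOT the patented method).

    Rain streaks show up as short-lived positive intensity spikes at a
    pixel across consecutive frames. A temporal median over a small
    window suppresses such spikes while leaving the static scene intact.
    """
    frames = np.asarray(frames, dtype=np.float32)  # shape (T, H, W)
    restored = np.empty_like(frames)
    num_frames = frames.shape[0]
    for t in range(num_frames):
        lo = max(0, t - half_window)
        hi = min(num_frames, t + half_window + 1)
        median = np.median(frames[lo:hi], axis=0)
        # Flag a pixel as rain-affected if it is noticeably brighter
        # than its temporal median (rain raises local intensity).
        rain_mask = frames[t] - median > threshold
        # Restore flagged pixels from the median; keep others as-is.
        restored[t] = np.where(rain_mask, median, frames[t])
    return restored
```

Because only a handful of neighbouring frames are consulted per output frame, a scheme like this keeps latency low, which matches the researchers' emphasis on short delay and high frame rates.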
The technology has been patented internationally by the researchers under the name “Method and Apparatus for Detection and Removal of Rain from Videos using Temporal and Spatiotemporal Properties.”
“Instead of working on the colour components, the proposed algorithm works only on the intensity component of the video, so for autonomous driving/surveillance applications that do not require the videos to be displayed on a screen, we can work directly on monochrome videos rather than colour videos. Use of a single component (intensity) instead of three colour components further reduces the execution time of the algorithm,” added Tripathi.