
Researchers Develop Mobile Tech to Help Blind Users ‘See’

Written by prodigitalweb

 

A team of computer scientists is developing adaptive mobile technology that could help blind and other visually impaired users 'see' with the help of their smartphone or tablet.

The team, which specializes in machine learning and computer vision, is developing a smart vision system for mobile devices to help blind and partially sighted people navigate unfamiliar indoor environments. Building on its earlier work on assistive technologies, the team plans to use the color and depth sensors built into smartphones and tablets, such as those featured in Google's recent Project Tango, for 3D mapping, localization, navigation and object recognition. The project, funded by a Google Faculty Research Award, is under way at the University of Lincoln, UK. The researchers will also work out the best way to relay this information to users, whether through sound, vibration or the spoken word.
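As a rough illustration of how such sensor data might drive that kind of interface, the sketch below maps hypothetical detections from a device's depth camera to vibration, speech or audio cues, with the nearest obstacles reported first. It is not the Lincoln team's code; the labels, distances and thresholds are assumptions.

```python
# Illustrative sketch only: converting hypothetical depth-camera detections
# into vibration, spoken or audio feedback. All values are assumptions,
# not details from the Lincoln project.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g. "door", "chair", as an object recognizer might report
    distance_m: float  # estimated range from the device's depth sensor

def feedback_cues(detections: List[Detection]) -> List[str]:
    """Map detections to feedback cues, nearest obstacle first."""
    cues = []
    for d in sorted(detections, key=lambda d: d.distance_m):
        if d.distance_m < 1.0:
            cues.append(f"vibrate strongly: {d.label} very close")
        elif d.distance_m < 3.0:
            cues.append(f"speak: '{d.label} ahead, about {d.distance_m:.1f} meters'")
        else:
            cues.append(f"play soft tone: {d.label} in range")
    return cues

if __name__ == "__main__":
    scene = [Detection("chair", 0.8), Detection("door", 2.4), Detection("table", 4.1)]
    for cue in feedback_cues(scene):
        print(cue)
```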

Dr Nicola Bellotto, project lead and an expert in human-centered robotics and machine perception at Lincoln's School of Computer Science, said: "The basis of this research is our previous work on creating interfaces to help people with visual impairment."

He added: "There are many existing visual aids on the market, ranging from guide dogs to cameras and other wearable sensors. The problem is their acceptability, adaptability and usability. If more users could rely on technology embedded in everyday devices such as smartphones and tablets, they would not need to wear extra equipment that can make them feel self-conscious."

There are already smartphone apps that can recognize an object or read text aloud to describe a place, but the researchers believe devices with embedded sensors could be transformative for blind and visually impaired people, and they are now working to exploit that technology. Their main aim is to create a system with a 'human-in-the-loop': one that provides localization information to visually impaired users while learning how people observe and recognize particular features of their environment.
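To make the 'human-in-the-loop' idea concrete, here is a minimal sketch, under assumed values and a hypothetical building map, of how a user's spoken confirmation of a recognized landmark could refine the system's estimate of where they are. It is an illustration, not the project's actual method.

```python
# Minimal human-in-the-loop localization sketch. The rooms, landmark and
# likelihood values are invented for illustration only.

def update_belief(belief, likelihoods, default=0.05):
    """Bayesian update: weight each candidate location by how likely the
    confirmed observation is there, then renormalize."""
    posterior = {room: p * likelihoods.get(room, default) for room, p in belief.items()}
    total = sum(posterior.values())
    return {room: p / total for room, p in posterior.items()}

# Uniform prior over candidate locations in a hypothetical indoor map.
belief = {"corridor": 0.25, "kitchen": 0.25, "office": 0.25, "lobby": 0.25}

# The vision system thinks it sees a fire-exit sign; the user confirms it aloud.
user_confirmed_exit_sign = True
if user_confirmed_exit_sign:
    # Assumed likelihood of an exit sign being visible from each location.
    belief = update_belief(belief, {"corridor": 0.8, "lobby": 0.4})

most_likely = max(belief, key=belief.get)
print(f"Most likely location: {most_likely}", belief)
```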

The research team, which also includes Dr Grzegorz Cielniak, a specialist in machine perception and mobile robotics, and Dr Oscar Martinez Mozos, a specialist in machine learning and quality-of-life technologies, aims to develop a system that recognizes visual cues in the environment. Data captured by the device's camera will be used to detect objects and to identify the type of space the user is moving through. The team is also collaborating with specialists at Google, which is sponsoring the research, to make the technology more advanced, adaptive and user-friendly.
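One simple way to picture the 'identify the type of space' step is to infer a room category from the kinds of objects the camera has recognized. Again, this toy mapping is an assumption for illustration, not the team's method.

```python
# Toy illustration: guess the type of space from recognized object labels.
# The label-to-room mapping and the example scene are invented.
from collections import Counter

ROOM_HINTS = {
    "kettle": "kitchen", "fridge": "kitchen",
    "desk": "office", "monitor": "office",
    "sofa": "lounge", "exit sign": "corridor",
}

def guess_space(labels):
    votes = Counter(ROOM_HINTS[l] for l in labels if l in ROOM_HINTS)
    return votes.most_common(1)[0][0] if votes else "unknown"

print(guess_space(["desk", "monitor", "kettle"]))  # -> "office"
```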

 
