The rise of handheld technology has shaped the way we live, but it isn't accessible to everyone. People with sensory limitations, such as blindness, can't access this wealth of information the way most of us do. Hari Palani recognized this disparity and decided to create a solution. UNAR Labs, the startup Palani co-founded with UMaine professor Nick Giudice, created Midlina, an AI-powered software solution that helps visually impaired people access graphical information in digital media using smartphones and tablets.
How does Midlina work?
Midlina is a software solution that integrates with mobile operating systems to convert visual graphical information into a multi-sensory equivalent, which users can access through haptics (touch), audio, and finger movements. It is essentially going to change the way multimedia information is presented to the visually impaired. For instance, imagine a simple scenario: a blind student is in a classroom where the teacher is giving a lecture based on bar graphs. Sighted students can follow the lecture because they can see the relative height of each bar and understand the information the graph conveys, but blind students cannot access that information in real time. With a Midlina-powered device, the blind student could access the very same information in real time by exploring the graph with a finger and perceiving the height of each bar through haptic and audio feedback.
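To make the bar-graph scenario concrete, here is a minimal sketch of the general idea of touch-driven graph exploration. This is purely illustrative and is not UNAR Labs' actual implementation: the function names, the screen width, and the linear height-to-vibration mapping are all assumptions. The sketch maps the horizontal position of a finger to the bar beneath it, then renders that bar as a vibration intensity proportional to its height plus a short spoken label.

```python
# Hypothetical sketch (not Midlina's real code) of mapping a bar graph
# to non-visual cues: the bar under the user's finger becomes a
# vibration intensity plus a short audio label.

def bar_under_finger(bars, finger_x, screen_width):
    """Return the index of the bar at horizontal position finger_x."""
    bar_width = screen_width / len(bars)
    index = int(finger_x // bar_width)
    return min(index, len(bars) - 1)  # clamp at the right screen edge

def haptic_audio_cue(bars, finger_x, screen_width=320):
    """Convert the bar under the finger into (vibration intensity, spoken label)."""
    labels, values = zip(*bars)
    i = bar_under_finger(bars, finger_x, screen_width)
    # Taller bar -> stronger vibration, normalized to 0.0-1.0.
    vibration = values[i] / max(values)
    label = f"{labels[i]}: {values[i]}"
    return vibration, label

# Example: exploring a three-bar graph of hypothetical quarterly values.
bars = [("Q1", 40), ("Q2", 80), ("Q3", 60)]
print(haptic_audio_cue(bars, finger_x=10))   # finger over the Q1 bar
print(haptic_audio_cue(bars, finger_x=150))  # finger over Q2, the tallest bar
```

On a real device the vibration value would drive the platform's haptics engine and the label would be passed to a screen reader or text-to-speech engine; here both are simply returned so the mapping itself is visible.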
How did you originally become involved in spatial informatics and blindness accessibility?
My background is in geo-informatics engineering, and I used to work as a senior software engineer for a big corporation in India. In 2010 my wife, then girlfriend, came to the University of Maine to pursue her master’s degree in spatial informatics with professor Nick (Giudice), who is the co-founder of UNAR Labs. During our conversations, she would often tell me stories about Nick and how he did things. I got curious and started an awkward email conversation with Nick, asking how he overcomes his blindness. Nick humbly responded, “Your life activities primarily rely on vision, while mine primarily rely on touch and audio. Beyond that, there is not much difference, as we both are accessing, inferring, and using the information around us to experience the world and do what needs getting done.” Nick’s response had a profound impact on me and ultimately convinced me to quit my job and join Nick as a graduate student at UMaine in 2011.
How did UNAR Labs get started?
While I was getting my master’s degree, Nick and I came up with the basic concept of using vibration as a means of accessing information, and the blind community was hugely supportive of it. Enthusiasm from blind users during the initial usability testing clearly indicated that our approach was viable and confirmed that we were on the right path toward solving a significant information access problem faced by millions of blind individuals. This motivated me to pursue my PhD, during which I focused on the basic science behind how sensory substitution could be implemented, from a technology standpoint, to develop a non-visual information access solution. We have spent the past eight years researching and understanding how to optimize this approach. We knew how to build the best solution to this problem, and we didn’t want to wait for someone else to do it. We knew we had the right idea and the right team, so with that confidence we started UNAR Labs in the fall of 2017. I graduated this past summer and now work at UNAR Labs full time.
Where do you hope to be in 5 years?
To explain this, I am going to use an analogy. Everyone has a smartphone and uses its digital keyboard, and we don’t even think about what we type because of a feature called predictive text. Not long ago, predictive text was actually a product from a startup; after three or four years, it was eventually integrated into the smartphone’s native software. We are envisioning something similar, where Midlina will be integrated into every digital media platform, and every bit of digital media information will become accessible to everyone, beyond their sensory bounds.