Archive.fm

Assistive Technology Update - A fast-paced weekly update for AT professionals and enthusiasts

ATU687 - .lumen with Cornel Amariei

Duration:
27m
Broadcast on:
26 Jul 2024
Audio Format:
mp3

Your weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist people with disabilities and special needs.

Special Guest: Cornel Amariei - CEO and Founder
Website: dotlumen.com
Bridging Apps: bridgingapps.org
Sign up to attend our next online full day training here: https://bit.ly/3ygEfGf
——————————
If you have an AT question, leave us a voice mail at: 317-721-7124 or email tech@eastersealscrossroads.org
Check out our web site: http://www.eastersealstech.com
Follow us on Twitter: @INDATAproject
Like us on Facebook: www.Facebook.com/INDATA
Hi, this is Cornel, and I'm the CEO and founder of .lumen, and this is your Assistive Technology Update. Hello and welcome to your Assistive Technology Update, a weekly dose of information that keeps you up to date on the latest developments in the field of technology designed to assist individuals with disabilities and special needs. I'm your host, Josh Anderson, with the INDATA Project at Easterseals Crossroads in beautiful Indianapolis, Indiana. Welcome to episode 687 of Assistive Technology Update. It is scheduled to be released on July 26th, 2024. On today's show, we're super excited to have Cornel, CEO and founder of .lumen, on. We also welcome back Amy Berry from Bridging Apps with an app worth mentioning. Now let's go ahead and get on with the show. Listeners, I want to make sure that you are fully aware of our next full-day training. As you know, here at INDATA, one of our jobs is getting the word out about assistive technology. One way we do that is by podcasts like this, but we also do four to five full-day trainings throughout the year. Well, our next training is coming up on August 22nd, 2024, from 9 a.m. to 3 p.m. Eastern. This training covers innovative assistive technology, and it is online only. All you have to do is register to attend. Now, spots are limited, so do make sure that you go and register for it as soon as possible. I will put a link to register down in the show notes. During this day, we'll talk about all kinds of fun stuff, including artificial intelligence, robots, the Internet of Things, virtual reality, augmented reality, and adaptive control interfaces. And, more importantly than just talking about these technologies, we're going to talk about how they work into the world of assistive technology. I will be doing most of this training, and I will tell you I am not an expert on any of these things. Now, I say that in that kind of way.
What I mean is you don't want me building your artificial intelligence systems or teaching your machines machine learning. You would not get very far with that. But I do work in the world of assistive technology and see how artificial intelligence is being used by some of the amazing creators, including our guest today. So if you'd like to join us for a full-day training, get some of those magical little CEUs that you might need for some kind of certification you keep up, or just want to have a little bit of fun, or maybe learn how these innovative and emerging technologies are being used in the world of assistive technology, please do join us for our next INDATA full-day training, Innovative Assistive Technology, which will be on August 22nd, 2024, from 9 a.m. to 3 p.m. Eastern. We'll put a link down in the show notes so that you can go and register as soon as possible. We can't wait to see you there. Folks, we cannot thank you enough for giving us a listen here at Assistive Technology Update. Without you, we would not have been around for, coming up on, getting pretty darn close to, that 700-episode mark. But did you know this is not the only podcast that we have? You can also check out our sister show, Assistive Technology Frequently Asked Questions. This show comes out once a month and features panelists Belva Smith, Brian Norton, and myself as we try to answer the questions that are plaguing your mind about assistive technology. We gather up all the questions we get during the month from emails, phone calls, and many other means, and then we do our best to answer them. But I'm going to tell you, folks, believe it or not, we do not know everything. So we rely on our listeners a lot to reach out to us and give us some of those answers, or maybe just talk about their personal experiences and things that have happened to them. So if you like Assistive Technology Update, you may very well love Assistive Technology Frequently Asked Questions.
Again, it's Assistive Technology Frequently Asked Questions, where you can get your questions about assistive technology answered. Or, if you happen to have the answers to some of the questions asked on that show, please, please, please do reach out and let us know so that we can help the community with the answers they so desperately seek. Much like Assistive Technology Update, you can find Assistive Technology Frequently Asked Questions wherever you prefer to get your podcasts. And as always, listeners, thank you for listening. Next up on the show, please join me in welcoming back Amy Berry from Bridging Apps with an app worth mentioning. This is Amy Berry with Bridging Apps, and this is an app worth mentioning. This week, I'm sharing an app called TrueLink. TrueLink Financial is a credit card company whose sole mission is to assist caregivers, guardians, and fiduciaries in maintaining control of money for their loved ones with intellectual and mental health challenges, all while sustaining a controlled level of independence. This extends to use by clients with special needs trusts. It's a pre-funded debit or credit card where all parties agree on how the money is to be used, and any purchases that are not pre-authorized will be declined. The TrueLink app is a simple way for both the funders and the spenders to manage the prepaid account. The account and the app are very easy to set up. The transfer of money onto the prepaid card is completed in the app by the guardian or fiduciary, and then the user sees what money has been added to their card by looking at the app, as well as what the funds can be used for. And then the card can simply be used independently by the user. This card has been a wonderful resource for our reviewer, as she has a special needs trust and often needs to rely on funds for emergency repairs to her wheelchair, her speaking device, or even her van. Funds can be approved and transferred fast, so her world does not slow down.
This resource is also an amazing tool for people with dementia and addicts in recovery, as caregivers and sponsors can help manage the user's money by blocking specific stores and/or transactions. It really helps them maintain independence in their life while also sustaining a high level of security. TrueLink is not free: there are minimal fees involved in setting up an account and each time you transfer money. There are also fees for using the card at ATMs. In addition, there's a max balance on the card of $20,000 and a max of $5,000 per transaction. This may seem high, but also know that if the user unknowingly uses the card for purposes that are not for their benefit, the max amount will not crush their stability. TrueLink is available for free on the iTunes Store, the Google Play Store, and the web. For more information on this app and others like it, visit bridgingapps.org. Listeners, wearables and artificial intelligence are some buzzwords in the world of technology, both consumer and assistive. Our guest today is Cornel from .lumen, and he's here to tell us how these things can come together to assist individuals with navigating their surroundings, and a bunch more things, in a whole new way. Cornel, welcome to the show. Pleasure being here. Thank you for inviting us. Yeah, I am really excited to get talking about the technology, but before we do that, could you tell our listeners a little bit about yourself? Definitely. I'm Cornel. I come from Eastern Europe, specifically from Romania. I was actually born into a family where every family member except myself has a disability. That means my parents, my sister, my nephew, my cousins, my grandparents. I come from a family where I'm the only person who doesn't have a disability. After a career in the automotive field, and being by training an engineer and scientist, I decided to build things that help. Basically, that's me. One-minute description. Perfect. Perfect.
We'll probably hear a little bit more about your passion and everything as we get to talking about the tech. I guess, big picture, let's start off with: what is .lumen? Sure. Well, .lumen is a startup which we started four years ago, like four years and one month ago. But let me take you to the problem, why we founded this company and what problem we're solving. And the problem which we're solving is the lack of scalability of advanced mobility solutions for the visually impaired. So let me describe it a bit. Right now, you have over 300 million people with visual impairment worldwide, and it's growing. It's growing fast. But if you check the solutions which are the most used, you still come back to the white cane and the guide dog. And now, the guide dog. It's a great solution in what it does, its features, and how it can help. That's great. But just to point out a couple of problems with it: last year we spent half a billion dollars, and we only trained 2,000 guide dogs. So the real cost of a guide dog is through the roof. It is not something which is scalable. In the entire world, we only have 28,000 guide dogs. So that's the problem which we are solving at .lumen, and we are solving it with what we call the .lumen Glasses. Basically, what they do: they are a self-driving car. Everything that a self-driving car does, our glasses also do, but on the pedestrian side. But rather than driving wheels, as in a car, we actually guide people. So, to give you what I think is a better analogy, let's think of the guide dog. A guide dog works by pulling your hand. The guide dog pulls your hand, steers you away from obstacles, keeps you on the sidewalk, stops at crossings. Sometimes it helps you cross; it works indoors and outdoors, etc. Our glasses do exactly the same, but they don't pull your hand, obviously. They're not on your hand. They're actually on your head. So it's actually a headset.
They actually slightly pull your head using vibrations. So you actually feel them on your head, and they pull you towards the path you should go on, steering you away from obstacles, keeping you on the sidewalk, stopping at crossings, helping you cross, helping you navigate curbs or steps or stairs, all of these situations. That's a very, very quick presentation of what they do. Nice. And that's amazing. I mean, they can actually sense or know that there's obstacles in my path. Like you said, there's crosswalks, all those different things. They're able to, I guess, identify and interpret that information and then pass that on to the user that quickly. The system is much faster than a human can react to it. So basically, the system understands everything, from not only where the ground is and obstacles below or above the ground, because obviously a pothole, for example, is an obstacle below the ground. That's a first-level understanding. But the most complex part about it is that it understands semantically. What do I mean by this? If you think in a purely geometrical world, the sidewalk and the road are the same. They don't necessarily have obstacles on them. But clearly, you shouldn't be walking on the road. You should be walking on the sidewalk. The system knows this difference. It knows how to determine that. It knows how to determine where there's ice, where there are water bodies, where there are mud patches, where it's terrain, where there are rocks. It knows how to determine all of the surfaces on which you shouldn't necessarily walk. And it does that with what we believe is the most advanced AI in the world specifically for this task. Nice. That's awesome. Can you kind of describe the device, the glasses themselves? Can you kind of describe what they're like? It's always hard on a podcast to say what they look like, but just kind of how they feel and kind of how they work. Sure. So it's a headset. It sits in your forehead region.
So most of the interface is actually on your forehead region. It has a component in the front and a component in the back. The part in the front, that's where all the cameras are. The system has six cameras. It's where the buttons are. You can control various features with the buttons. It's where some microphones are, so it can listen to you. You can talk with the device, and it answers. It has some directional speakers on the side, so you can actually hear what the device is saying without covering your ears. And in the back, you have the supercomputing unit and the batteries. It's a small unit which does all the computing, and the batteries, and everything. So this is very similar to an AR or VR headset, but it doesn't sit on the eye region. It sits above. And it's similar in weight, in some cases even lower in weight, and similar in comfort and everything. So the quickest analogy is a small AR headset. I think that's the closest thing to how it feels. With the mention that in the forehead region there is a specific system which we have. We call it a haptic feedback array. It's a set of haptic actuators which actually vibrate your head towards the direction where you have to go. So that's the primary difference which separates it from other headsets. Nice. I know some things might still be in development, but how long does it usually take a user to get used to it? I know sometimes with haptics, it can take a little bit to feel which side of the head it's vibrating on and everything. How long does it usually take a user to get used to figuring out how to use it? We have multiple videos on the internet, which at this point have over 40 million views, of people who, in the first minute or minute and a half of first putting the device on, were able to navigate real-life complex situations. I mean people at CES, for example. CES is a busy show, and these were visually impaired individuals who were able to do it in a minute and a half.
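(A brief aside for technically minded listeners: the haptic feedback array described above, a row of actuators steering the wearer by vibration, can be sketched as a simple mapping from a steering direction to per-actuator intensities. This is a hypothetical, simplified model with invented parameters, purely for illustration; .lumen's actual implementation is not public.)

```python
import math

def haptic_intensities(heading_error_deg: float, n_actuators: int = 5,
                       span_deg: float = 90.0) -> list[float]:
    """Map a steering error (degrees; negative = turn left) onto a row of
    forehead actuators: the actuator nearest the target direction vibrates
    strongest, with a smooth falloff to its neighbours."""
    # Normalized position of the target direction across the actuator row.
    t = (heading_error_deg + span_deg / 2) / span_deg  # 0.0 far left .. 1.0 far right
    t = min(max(t, 0.0), 1.0)
    target = t * (n_actuators - 1)
    # Gaussian falloff around the actuator closest to the target direction.
    return [round(math.exp(-((i - target) ** 2) / 0.5), 3)
            for i in range(n_actuators)]

print(haptic_intensities(0.0))    # straight ahead: centre actuator strongest
print(haptic_intensities(-30.0))  # veer left: actuators left of centre strongest
```

The design point the sketch illustrates is that the wearer never decodes numbers; they simply feel where on the forehead the vibration is strongest, which is why, as Cornel notes, first-time users adapt within a minute or two.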
It's immensely intuitive. It's one of the great wins which we have created: the way you train and the way you understand the device is incredibly fast. So it's a minute to a minute and a half, but obviously, for more advanced features, it takes an hour or two. Still, that's definitely not bad. I think it takes a lot longer than that to get used to a guide dog, which, as you said kind of at the beginning, is not easy to get or find or train, or really get into the hands of the folks that need them. I've got to ask you, because I know you came from the automotive world: is that where the idea came from, to move the self-driving car technology into, for lack of a better term, self-driving person tech? Partially. The experience which we had, because a lot of the team, we're a team of 15 engineers and scientists working on this, and a bunch of us come from the automotive field. But for myself, I actually don't necessarily think that's where the idea came from. The idea actually came from the problem. So when I found out about the lack of guide dogs worldwide, that's when we quickly got to the idea, and it just happened that automotive was the sector in which I was. Definitely, it helps from a technical perspective, with a lot of similarities, but nobody tried to do a self-driving car on the head. Absolutely nobody, until we did it. One fundamental thing, which I think was at the core of the idea, and we were the first in the world to do it: since the '50s, people have been trying to represent visual information in a non-visual way. What do I mean by this? If you have an obstacle, you feel a vibration or you hear a sound, and things like this. Unfortunately, the world is so complex that while that can work in a lab environment, in a controlled environment, the moment you go to the real world, it doesn't work anymore. You cannot represent more than one, maybe in some cases two, obstacles or situations in a non-visual way.
It's immensely complicated. But what we knew we wanted to do very differently from day one is that we didn't want to represent the world, because that is not scalable, because it doesn't work in real-life situations. We looked at the guide dog. The guide dog is not barking when you have five obstacles, not barking five times. It's just guiding me around them. That's what we do. That's what we fundamentally do differently, that's what we have patents for, and we're the only ones in the world who can do it. Nice, that is awesome. Now, we've talked a lot about navigation and how the glasses are able to help with that. What else can the glasses do now, or what are your plans for what they might be able to do in the future to help individuals with visual impairments? The first set of features is replicating everything the guide dog does, but preferably better. Of course, with the exception of the emotional support. We are still building technology. We cannot provide the same level of companionship and emotional support that a service dog can. That one, unfortunately, we cannot scale, but everything else we can. If you think of a guide dog, obviously there is a set of commands which is publicly known, which varies a bit region to region, guide dog school to guide dog school, but it pretty much comes down to the same couple of things. It can guide you in general, where you don't specifically give a destination. You just say let's go, and it will keep you straight, away from obstacles, and stop when you need to take a decision. Or, in some situations, you can ask the guide dog to take you to a particular object or place. Take me to an empty seat; take me to work, if they have been there multiple times and they remember the route. Those we call the guide me and the take me functionality. Those are the two functionalities which the glasses also have. But here's something interesting about the take me functionality.
You don't need to constrain yourself to requesting the glasses to take you to places you've been before. The glasses can take you anywhere. What do I mean by this? You can take your smartphone, go on your favorite navigation app, like Google Maps or Apple Maps, etc. You can find the best coffee shop around, or the best restaurant, or a particular address which you're searching for. You can find it, you can press share, you can share it with the glasses, and the glasses will take you there, navigating all kinds of obstacles, keeping you on the sidewalk, helping you cross crossings. This is already what a fully autonomous driving car does. But even more, and this is something we're now experimenting with, it will very soon, and in beta it already does it, be able to help you navigate public transport. So you can actually go as far as you want. In extremis, it will actually go to the airport level. So if you're in one corner of the city and you want to take, you know, a tram or a bus or anything, it will take you to the bus station, it will help you get on the right bus, it will keep you on the bus, it will get you off at the right station, and then it will guide you to the address. That is something which was never achieved before in any kind of navigation system. I don't mean visually impaired assistive technology; I mean urban navigation in general. It was never done before, until we've done it. That's awesome. That just opens up a whole new level of transportation, and, you know, that's such a barrier for so many individuals. So that will be absolutely great. Well, I know you've been working on this for quite a while, I believe you said like four years. Can you maybe tell me a story about someone's experience testing it, or maybe something that you found along the way that was pretty interesting, or maybe kind of blew your mind, or someone's mind that was blown when testing and using the glasses?
Definitely, and it happens so many times. We're so lucky to travel and to give demonstrations and to see people and how they react to them. One of the moments which I clearly remember was in January 2021. So we founded the company during the pandemic, actually during the curfew. We founded the company at a very bad moment, and in January 2021, the company was a few months old. We created the first haptic navigation system, and we had this virtual path which we invited visually impaired individuals to walk. It was in a very large hall. There were absolutely no obstacles. We wanted the experiment to be as scientific as possible. So we had these virtual paths going around virtual obstacles, so you couldn't use any other senses to detect the obstacles. It was just a large hall and a device. And with the first person we tested with, it worked in a minute. We never expected it to be so intuitive. We never expected it to work so well. It was all assumptions until, I mean, we tested and we thought, yeah, I think it's quite good. But to see visually impaired individuals, ranging from 18 years old up to, I think, 75 or 80 years old, 80 was in the first week of testing, we invited like 14 individuals or so, and to see all of them being able, in a minute, or worst case two minutes, to navigate with a precision of centimeters on virtual paths, that was absolutely incredible. That was also the month in which we finalized the patent, and we actually proved everything in the patent working. I remember one of the individuals, a very well-known visually impaired individual, vice president of the National Blind Association here in Romania. He said a quote which I'll forever remember. He said, "We believed something like this would exist, but not during our lifetime." And that was a quote which really, really was incredible.
It took us months to realize that it was the first time, I remember, when anything but a guide dog or a human companion actually guided with that precision. It was absolutely the first time, and that was pretty amazing. But that was a technological demonstrator. We were proving the technology. But then, when we were proving the product, I mean, two months ago, I remember I was at the United Nations headquarters in Vienna, and there were some blind individuals testing. There was a particular woman, I think she was from the US, together with her son, a teenage son. The son didn't have any kind of visual impairment, and the mother, she trained for like two minutes and then she began walking. She was a guide dog user, and she began walking, and the son literally began crying. It was 15 seconds until he began crying. I never expected such a reaction. I actually didn't believe those reactions were real when I'd see them for other products or anything else. But the moment when I saw that, I was like, "Okay, we're doing something right here." Most definitely. Most definitely. What kind of phase are you in as far as development and getting it out to folks? Basically, we're putting the first version, in a limited run, on the market at the end of this year. Now, what has to be understood, first of all: this has like 70% of the performance of a Tesla autopilot. This is amazing, the level of technology. I know, we did some research and we had an external audit on this. Basically, we asked some automotive companies, "How long would it take you to develop something like this?" Their answer was like seven, eight years and 50 million euros, about 50 million dollars. We did it much faster and much cheaper, but with the same amount of work. It took us 120 years of work. If I add up all the hours of everybody who worked on this project, it took us 120 years to get here. It's pretty amazing.
We tested this technology with over 300 visually impaired individuals from over 30 countries, and over 2,000 blindfolded individuals. Not really important, the second part, but it's a fun fact, because we actually counted them. This is something which was validated all over the world: from the US West Coast to Japan, in Africa, in the Nordics in Europe, in the southern part of Europe. We have tested this all over the world. At the end of this year, we're doing a limited run for the European market. At the beginning of next year, we're releasing more widely in the European market as a medical device. Very important: it is a medical device. This is not a consumer product. It is something which is certified to the criteria of medical devices. Looking at other assistive technologies, I simply cannot accept the level of lack of regulation and lack of performance and lack of safety which they bring. We do not accept, we do not condone, something like this. We build things by medical device regulation, and that's much tougher to do than just a simple consumer product. It's a clear differentiator which we really wanted to have. In the US, our timeline is roughly the end of next year, so by the end of 2025 we want to be in the US. Obviously, we are certifying according to EU regulation now, and we're beginning very soon the process for FDA certification in the States. There's a bit of work for the United States, but right now, Europe is on a few months' horizon. That is awesome. We're running out of time. If our listeners want to find out more, what's a great way for them to do that? I think there are two options. One of them is obviously the website, www.dotlumen.com, dotlumen in letters. Or social media: dotlumen. We are present on LinkedIn, on X, the former Twitter, on Facebook, Instagram, and we are also on TikTok, which was actually not expected, TikTok. Apparently, we just reached like 20 million views, which was really, really not expected. Awesome.
Thank you so much for coming on today. Such a really, really great medical device to really be able to help folks. I know there hasn't been anything out there except for the dog for so long to really help folks with navigation, and not just a portion of navigation, but really the entire scope of being able to travel. I think of how, for so many individuals, it seems like maybe they had something that could help on the city streets, but I feel like this could help anywhere, anywhere that you possibly wanted to go and possibly wanted to walk and possibly wanted to be. Just really great. We love the work that you're doing and can't wait to see it come to fruition. So thank you so much for coming on the show today and telling us all about it. Thank you so much. Always a pleasure to discuss what we're building. Do you have a question about assistive technology? Do you have a suggestion for someone we should interview on Assistive Technology Update? If so, call our listener line at 317-721-7124, send us an email at tech@eastersealscrossroads.org, or shoot us a note on Twitter @INDATAproject. Our captions and transcripts for the show are sponsored by the Indiana Telephone Relay Access Corporation, or InTRAC. You can find out more about InTRAC at relayindiana.com. A special thanks to Nicole Prieto for scheduling our amazing guests and making a mess of my schedule. Today's show was produced, edited, hosted, and fretted over by yours truly. The opinions expressed by our guests are their own and may or may not reflect those of the INDATA Project, Easterseals Crossroads, our supporting partners, or this host. This was your Assistive Technology Update. I'm Josh Anderson with the INDATA Project at Easterseals Crossroads in beautiful Indianapolis, Indiana. We look forward to seeing you next time. Bye-bye.