Podcast Summary:
Vista Center for the Blind and Visually Impaired held the 3rd annual Sight Tech Global 2022, a virtual and in-person event. Kevin Chao attended virtually for the first two days and, being just 20 minutes away, attended the first in-person event put on by the Vista Center and documented his experience. You can find Kevin’s Sight Tech Global 2022 document below.
It was a pleasure to have Kevin back in the Blind Abilities Studio to tell us all about the Sight Tech Global 2022. From his hands-on demonstrations to his conversations with attendees, you will learn what caught Kevin’s attention and how meeting the technology pioneers rounded out his virtual and in-person conference experience.
Hope you enjoy.
Sight Tech Global 2022 Conference
December 7-9, 2022
Sight Tech Global is the first global, virtual conference dedicated to fostering discussion among technology pioneers on how rapid advances in AI and related technologies will fundamentally alter the landscape of assistive technology and accessibility. The event started virtually in 2020 and had its first in-person event this year.
Kevin Chao attended all three days.
Day One Individual Sessions | Virtual
via Sight Tech Global’s YouTube channel
- APH with Greg Stilson – Update on the Dynamic Tactile Device (DTD), which is intended to be a blind person’s monitor: a display showing text and graphics that can be felt with the fingers and hands. Topics included the user testing that determined the 30×10 cell size; congressional advocacy to secure Department of Education K-12 funding for high-tech STEAM textbooks (with charts, graphs, and formulas) at a cost of about $15K; the new eBRF standard for presenting text and graphics together in a meaningful way; and a new SDK that opens the door for app developers to build DTD experiences (education, work, or games). I wish it could show visual designs and interactions: wireframes, mocks, prototypes, etc.
- DOT with Eric Kim and Ki Sung – The story of how DOT started by seeing a Braille bible and wanting to bring an iPad-like experience to Braille. DOT brought together engineers who specialize in sound and magnets to make a new electromagnetic dot cell that is cheaper, thinner, and more durable, and proved it out with customers on the DOT Watch. The electromagnetic dot cell is being licensed as a core technology in the DTD, and DOT is forming more partnerships to bring more tactile graphics and Braille in DOT 3. It was unfortunate to hear that the DOT Pad will not be a commercial consumer product; even though it offers low-res, basic tactile graphics, other blind people won’t be able to get their hands on one.
- VR Access with PEAT’s Bill Curtis-Davidson and Alexa Huth – VR access still has a ways to go, since some low-vision users can only use it in small bursts before being unable to see or function, and blind people don’t have any form of access. It’s great that there are key stakeholders who understand human-computer interaction from a blind and low-vision perspective focusing on employment, education, and entertainment, and leveraging existing assistive technology that people are already familiar with from 2D digital content, such as audio games and magnifiers. I’m curious to find out what all this VR hype is about once it becomes accessible to blind people.
- VR “screen reader” with Owlchemy’s Jazmin Cano and Peter Galbraith – Traditional screen readers are 2D; VR is 3D, and the way audio games handle interaction and feedback for a blind person is the basis of VR screen readers. VR screen readers are in their early days, but interaction is contextual, based on gestures such as pointing, turning the palm toward an object, and grabbing, which trigger a summary, details, or an interaction. Text-to-speech (TTS) feedback provides the context or descriptions. I would love this in AR: to point or palm at things to get an understanding of what’s in a given direction, and more details about what is nearby in the real world.
- Audio Descriptions the Pixar Way with Eric Pearson – AI TTS tends to be flat and can’t emote, which detracts from the audio experience of a first-run feature film, where so much attention is paid to the visual story. A lot of care and thoughtfulness goes into high-quality, immersive, and meaningful audio description by professionals; end users prefer, and will turn on, Pixar- and Apple-quality audio description, while they don’t prefer, and sometimes turn off, videos described with non-emotive AI TTS.
- How Amazon’s Alexa aims to make accessibility fairer with Peter Korn and Josh Miele – The Show and Tell feature works with camera-enabled Echo devices and uses product, object, and text recognition so blind people can have Alexa identify cans, bottles, packages, and other items in and around the kitchen and home. Echo now includes audible notifications: it uses the microphone, camera, and other signals to determine if you’re nearby, audibly informs you of notifications, and asks if you would like to hear them, providing non-visual access to the notification light at the crown, which was previously visual-only. Amazon expressed excitement about more human-sounding TTS, but we’ll see how well it can emote to provide the feeling and connection of human voices.
Day Two Individual Sessions | Virtual
via Sight Tech Global’s YouTube channel
- A deep-dive into Apple’s industry-leading screen reader, VoiceOver – A rare but impactful behind-the-scenes look at VoiceOver, which set the foundation for much of the innovation and creativity that has been possible since. The Detection feature can detect and describe doors, signs, and people, providing auditory and haptic feedback. There is also auditory and haptic guidance for locking onto a satellite signal for Emergency SOS. Apple expressed excitement about the future of bringing the latest innovative tech to blind users in a usable form.
- Hands on with Seleste – The founders, computer science students (a sighted CEO and a blind CTO), talked about the cost and complexity of existing smart-glasses tech like OrCam and Envision AI, and shared how they are building a more contextual and cheaper alternative. A lot was framed in terms of what it could do and what is possible: a contextual AI visual describer with remote volunteers. It was confusing that the host and speaker both had smart glasses but neither wore nor demoed them.
- Hands on with ARx – A demo-able, usable Android-based wearable that has been used successfully and provides the most cost-effective solution helping blind people get hands-free access to visual info. I’m not a fan of the tether cable to an Android device or the spider-like contraption on the head, which is why I use Envision AI glasses.
- What’s Next with StellarTrek – A standalone, dedicated talking GPS with tactile buttons and features unique to blind users, such as creating POIs/favorites and manual routes that allow retracing your steps. It uses two cameras to detect doors and read text, solving that last frustrating 50 feet: where is the door, and what does it say? This is needed on a hands-free wearable.
- The Problem with AI – Deep learning can do just that: go really deep on a subject, analyze large sets of patterns, determine outliers, and surface the extremes. With bias and improper representation in the data, disabled assistive technology users are treated as outliers, if they are represented at all. There is no logic or reasoning in deep learning; like any tool, it has its strengths and weaknesses.
- Did Computer Vision AT Just Get Better or Worse? – A variety of AI systems have converged to create conversations or images from text prompts. These AI images are based on concepts portrayed in media and on the alt text of available images. Due to the lack of disabled assistive technology user representation, the AI generates some strange and bizarre images without reason or logic. For example, when asking for a person in a wheelchair, the image may focus on a torso with no head; when asking for a person with a guide dog, there may be a leash in hand but no harness or connection to the dog, plus a cane in the picture. With more diverse and inclusive datasets, and more logic and reasoning, this has the potential to enable creative artistic expression by anyone through words and prompts alone, opening up visual art to blind people who don’t work visually.
- AI Decision Systems Permeate Our Lives. Now What? – AI decisions are made for education, employment, housing, shopping, etc. without humans in the loop. It was hard not to feel doomed, but adding humans back into the loop to support those nuanced decisions, rather than concentrating control in a few, gave some hope that disabled people won’t just be treated as outliers.
- What Waymo learned at the DOT Inclusive Design Challenge – I was hoping to understand the self-driving in-car trip experience, which is available in Phoenix, but I couldn’t figure it out in time while in Scottsdale. The focus was on the experience of requesting, finding, and getting into the car. All of it worked well for the LightHouse-SF blind participants, with the app being accessible and the car easy to find. Two features solve the last frustrating 50 feet of getting to the car (since it doesn’t always come to you if you’re out of the way): a “getting warmer” mode that uses distance and direction, and different honk sounds that can be set as preferred. This self-driving future seems so close yet so far, and I look forward to trying it in San Francisco.
Day Three Highlights | In Person | Kevin’s Notes
For blindness tech, this is by far the best in-person + virtual conference I’ve attended, with a concentrated balance of blind, low-vision, and sighted technologists, innovators, creators, and advocates in a manageable group of fewer than ~70 attendees, especially compared to NFB with ~3K and CSUN with ~5K attendees.
My highlight was The Frontiers of Accessibility with Jim Fruchterman and Mike May, two giants of accessibility, along with the networking and reconnecting, specifically with a mentor, role model, and advisor, Mike, whom I first met two years after going blind. It was amazing and wonderful catching up with Mike and Charles LaPierre (who was CTO of the Sendero GPS for the blind) at breakfast and lunch, sitting across from Jim (who started Benetech), and asking him a question during his talk. There was reflection on where OCR reading machines and talking/Braille GPS for the blind started and how far they have come in the past two to three decades. I was fortunate to have both book and location literacy positively impacted by Jim, Mike, and Charles through Benetech’s and Sendero Group’s work. There are still lots of opportunities in fully inclusive and accessible home appliances and cars (is it too much to ask for a blind person to be able to operate a washing machine and a car independently?).
The next best thing was the hands-on demos, including:
- Meeting a LinkedIn Accessibility Engineer who was wearing and using ARA by Strap Tech, which has actuators that provide haptic feedback for obstacles from knee to head level; she has been using it since retiring her guide dog.
- Meeting the Seleste CEO and trying on the smart glasses that are supposed to have AI contextual and remote video assistant capabilities. There was a lot of interest in the dream from blind and low-vision people, but nothing could be demoed on the first-generation smart glasses, which are on pre-order with shipping in a couple of weeks. I was able to hear some music through circular bone-conduction pads that didn’t make contact with bone, so the audio was distorted and hard to hear.
- Meeting a blind Lyft Accessibility Specialist, a former animator/VFX artist who learned how to digitally draw and create art again using SVG, pairing up with Chancey Fleet and her Dimensions tactile graphics lab and NYPL tech workshops. I was able to check out and feel tactile art examples and learn about practical applications of SVG beyond art, such as creating circuit schematics, architectural drawings, charts and graphs, website and design wireframes, and more. A couple of tactile graphics that stuck out were the Golden Gate Bridge, and a steamy coffee cup on a coaster with a cane next to it. It was hard to figure them out and make sense of them because of a lack of technique and literacy when it comes to tactile graphics for the blind; that is, there is not enough exposure to how to feel and interpret tactile graphics.
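As a rough illustration of why SVG works so well for tactile drawing, here is a hypothetical, minimal Python sketch (not from the session) that generates the kind of SVG a tactile graphic might start from: a few thick, simple strokes with no fine detail, so an embosser or swell-paper printer can produce lines that fingers can follow. The coffee-cup shapes and sizes are illustrative assumptions.

```python
def coffee_cup_svg() -> str:
    """Return a simple SVG outline of a coffee cup with steam,
    drawn with thick strokes suitable for tactile printing."""
    parts = [
        '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">',
        # Cup body: a plain rectangle with a thick stroke.
        '<rect x="60" y="80" width="80" height="70" fill="none" '
        'stroke="black" stroke-width="6"/>',
        # Handle: a single curved path on the right side.
        '<path d="M 140 95 Q 175 115 140 135" fill="none" '
        'stroke="black" stroke-width="6"/>',
        # Steam: one wavy line rising above the cup.
        '<path d="M 100 70 Q 90 55 100 40 Q 110 25 100 10" fill="none" '
        'stroke="black" stroke-width="4"/>',
        "</svg>",
    ]
    return "\n".join(parts)

svg = coffee_cup_svg()
print(svg)
```

Because the drawing is just a handful of readable path and shape elements, the same approach scales to schematics, wireframes, and charts: each line of markup corresponds to one touchable stroke.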
- APH: Reshaping the Future of Braille through the Dynamic Tactile Device (DTD), which is federally funded through the Department of Education to revolutionize something that hasn’t changed in over two decades: how STEAM textbooks are delivered to blind students who need Braille and tactile graphics, on a display that content can be pushed to via the cloud or USB. This has been attempted without success by many startups and assistive technology companies over the years, but this effort has all the key players and partnerships, leverages everyone’s strengths, factors in the new eBRF format, plans to field test with teachers of the visually impaired first, and constantly gets feedback from blind people on the feel and features. It was bigger and thicker than expected, but it was really cool to feel a San Francisco LightHouse map with labels in context, zoom out for an overview, zoom in for details, and pan around using buttons that were logically placed and intuitive.
There were a variety of other sessions on:
- developing and testing assistive technologies for blind and low-vision people enabled by AI, computer vision, and sensors;
- AI-enabled hiring: opportunities, and the unintentional barriers created for employment of people with disabilities;
- the DOT dynamic tactile display and its potential for dramatic enhancements to blind accessibility in education, job productivity, entertainment, and the metaverse;
- how AI, automation, and remote interviewing affect the hiring of blind and visually impaired applicants: natural language processing, sentiment analysis, and other AI technologies pose a serious risk of excluding people with disabilities.
Contact Your State Services
If you reside in Minnesota, and you would like to know more about Transition Services from State Services contact Pre-ETS Program and Transition Services Coordinator Shane DeSantis by email or 651-358-5205.
Contact:
You can follow us on Twitter @BlindAbilities
On the web at www.BlindAbilities.com
Send us an email or give us a call at 612-367-6093; we would love to hear from you!
Get the Free Blind Abilities App on the App Store and Google Play Store.
Check out the Blind Abilities Community on Facebook, the Blind Abilities Page, and the Career Resources for the Blind and Visually Impaired