The world is built for people with sight. Most people rely on their vision to help them throughout the day, from cooking to reading to walking around. However, for people who have lost their sight or whose vision has become impaired, everyday tasks are harder. People with visual impairments must rely on other senses to get around, and they face many obstacles people with full sight don’t have to worry about. Recently, I was able to use my passion for computer science (CS) to explore and address some of the issues that people with visual impairments might face.


It started with computer science education

I began learning about computer science in ninth grade when I came into the high school program at my school, Porter-Gaud, an independent school in Charleston, South Carolina. There, Mr. Bergman, the CS teacher, taught us Python, which we used to create games that addressed issues we cared about. Then, in tenth grade, Mr. Renton, another CS teacher, introduced us to virtual reality (VR) and helped us create VR games, using Unity and C# to develop the applications and the Microsoft Mixed Reality headset to run them.

This year, with that computer science knowledge and the guidance of Mr. Renton, I set out to tackle two obstacles people with visual impairments face: avoiding collisions while walking around, and knowing who is nearby and what they look like.


Putting my skills to work for people with visual impairments

Using the Azure Kinect camera and C#, I developed a device that alerts the user with beeps of increasing frequency when they walk too close to an obstacle. Think of it sort of like a car’s backup alarm for humans. Also, using the Microsoft Face API, I coded a program that takes a picture of the user’s surroundings when a voice command is spoken or a button on the Xbox Adaptive Controller is pressed. Then, the computer reads aloud the location, age, and emotion of everyone within view, allowing anyone who uses the device, regardless of the quality of their vision, to know who and what is around them. I chose the Adaptive Controller because its large, easily pressed buttons are well suited to a person with a visual impairment. I also added the option to use voice commands, via the C# SpeechRecognitionEngine class, to make the device even easier to use.
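To give a sense of how the obstacle alert works, here’s a simplified sketch of the kind of loop involved, using the Azure Kinect Sensor SDK for C# (Microsoft.Azure.Kinect.Sensor); the 1.5-meter cutoff and the beep timing are just illustrative values, not necessarily the ones in my actual device.

```csharp
using System;
using System.Threading;
using Microsoft.Azure.Kinect.Sensor;

class ObstacleBeeper
{
    static void Main()
    {
        using Device kinect = Device.Open();
        kinect.StartCameras(new DeviceConfiguration
        {
            DepthMode = DepthMode.NFOV_Unbinned,    // narrow field-of-view depth
            ColorResolution = ColorResolution.Off,
            CameraFPS = FPS.FPS15
        });

        while (true)
        {
            using Capture capture = kinect.GetCapture();
            Image depth = capture.Depth;

            // Sample the depth (in millimeters) at the center of the frame.
            int row = depth.HeightPixels / 2;
            int col = depth.WidthPixels / 2;
            ushort mm = depth.GetPixel<ushort>(row, col);

            // Beep faster as the obstacle gets closer (illustrative 1.5 m cutoff).
            if (mm > 0 && mm < 1500)
            {
                Console.Beep(1000, 100);    // Windows-only tone
                Thread.Sleep(mm / 3);       // shorter pause when closer
            }
        }
    }
}
```

A finished version would scan a whole region of the depth image rather than a single center pixel, but the idea is the same: map measured distance to beep cadence.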
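The snapshot step looks roughly like this sketch using the Face client library; the key, endpoint, file name, and the 1280-pixel frame width I use to judge left/center/right are all placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class FaceReader
{
    static async Task Main()
    {
        // Placeholder credentials; a real resource key and endpoint go here.
        var client = new FaceClient(new ApiKeyServiceClientCredentials("YOUR_KEY"))
        {
            Endpoint = "https://YOUR_RESOURCE.cognitiveservices.azure.com/"
        };

        // Detect every face in the snapshot, asking for age and emotion.
        using FileStream photo = File.OpenRead("snapshot.jpg");
        IList<DetectedFace> faces = await client.Face.DetectWithStreamAsync(
            photo,
            returnFaceAttributes: new List<FaceAttributeType>
            {
                FaceAttributeType.Age,
                FaceAttributeType.Emotion
            });

        foreach (DetectedFace face in faces)
        {
            // Rough position from the face rectangle, assuming a 1280 px wide frame.
            int mid = face.FaceRectangle.Left + face.FaceRectangle.Width / 2;
            string where = mid < 427 ? "left" : mid < 853 ? "center" : "right";

            // Pick the strongest of a few emotion scores.
            Emotion e = face.FaceAttributes.Emotion;
            string mood = new Dictionary<string, double>
            {
                ["happy"] = e.Happiness,
                ["sad"] = e.Sadness,
                ["angry"] = e.Anger,
                ["surprised"] = e.Surprise,
                ["neutral"] = e.Neutral
            }.OrderByDescending(kv => kv.Value).First().Key;

            // The real device hands sentences like this to text-to-speech.
            Console.WriteLine(
                $"Person on the {where}, about {face.FaceAttributes.Age:0} years old, looking {mood}.");
        }
    }
}
```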
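And the voice-command trigger can be as simple as this sketch with the SpeechRecognitionEngine (the command phrases are just examples):

```csharp
using System;
using System.Speech.Recognition;    // Windows-only; reference System.Speech

class VoiceTrigger
{
    static void Main()
    {
        using var recognizer = new SpeechRecognitionEngine();

        // Restrict recognition to a few fixed command phrases (examples only).
        var commands = new Choices("take picture", "describe scene");
        recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
        recognizer.SetInputToDefaultAudioDevice();

        recognizer.SpeechRecognized += (sender, e) =>
        {
            Console.WriteLine($"Heard: {e.Result.Text}");
            // Here the device would take the snapshot and run the face readout above.
        };

        recognizer.RecognizeAsync(RecognizeMode.Multiple);  // keep listening
        Console.ReadLine();                                 // exit on Enter
    }
}
```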

With my school’s recent transition to remote learning, I’ve continued to work on and improve this project from home with help from my teachers. Along with the tech tools I mentioned, I’ve been able to collaborate with my teachers using Microsoft Visual Studio, an integrated development environment (IDE) used to create computer programs, apps, and websites. I’ve used my family members as test subjects to improve the accuracy of the face detection and have even found a way to describe the user’s surroundings to them: using the Microsoft Computer Vision API, the device can now summarize the scene the user is looking at and read that description aloud. Remote learning has been crucial in helping me continue to develop my device, letting me quickly and easily ask my teachers questions and troubleshoot bugs in my program as if I were still at school in person.
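Here’s a simplified sketch of that scene-description step, using the Computer Vision client library together with Windows’ built-in speech synthesizer; the key, endpoint, and file name are placeholders.

```csharp
using System;
using System.IO;
using System.Speech.Synthesis;    // Windows-only text-to-speech
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

class SceneDescriber
{
    static void Main()
    {
        // Placeholder credentials; a real resource key and endpoint go here.
        var client = new ComputerVisionClient(
            new ApiKeyServiceClientCredentials("YOUR_KEY"))
        {
            Endpoint = "https://YOUR_RESOURCE.cognitiveservices.azure.com/"
        };

        // Ask the Computer Vision API for a one-sentence summary of the photo.
        using FileStream image = File.OpenRead("snapshot.jpg");
        ImageDescription description =
            client.DescribeImageInStreamAsync(image).GetAwaiter().GetResult();

        string caption = description.Captions.Count > 0
            ? description.Captions[0].Text
            : "I could not describe the scene.";

        // Read the summary aloud.
        using var voice = new SpeechSynthesizer();
        voice.Speak(caption);
        Console.WriteLine(caption);
    }
}
```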


Feedback in the design process

Initially, I decided to create this device because I saw that most of the assistive technology for people with visual impairments is expensive and limited to only a few functions. I wanted to create something that helps people with visual impairments do as much as possible in a convenient and compact package. To get insight from someone familiar with the difficulties people with visual impairments face, I spoke to a representative from the Charleston Association for the Blind and Visually Impaired, who gave me very valuable feedback. She suggested that, instead of beeping when the user gets close to an obstacle, my device should provide some sort of tactile feedback, since beeping masks the environmental sounds that people with visual impairments rely on to navigate; I am planning to implement this change. She also suggested that the device be able to recognize people the user has programmed into it and read their names aloud when they enter the scene, another feature I am working on implementing.
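To sketch how that second feature could work with the Face API: assuming a person group (here called "friends") that has already been created, filled with faces, and trained, the identification step might look like this; every identifier below is a placeholder.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class NameAnnouncer
{
    static async Task Main()
    {
        var client = new FaceClient(new ApiKeyServiceClientCredentials("YOUR_KEY"))
        {
            Endpoint = "https://YOUR_RESOURCE.cognitiveservices.azure.com/"
        };

        // Detect faces in the snapshot and collect their IDs for identification.
        using FileStream photo = File.OpenRead("snapshot.jpg");
        IList<DetectedFace> faces =
            await client.Face.DetectWithStreamAsync(photo, returnFaceId: true);
        List<Guid> faceIds = faces.Where(f => f.FaceId.HasValue)
                                  .Select(f => f.FaceId.Value)
                                  .ToList();

        // Match each detected face against the pre-trained "friends" group.
        IList<IdentifyResult> results =
            await client.Face.IdentifyAsync(faceIds, "friends");

        foreach (IdentifyResult result in results)
        {
            if (result.Candidates.Count == 0) continue;    // nobody recognized
            Person person = await client.PersonGroupPerson.GetAsync(
                "friends", result.Candidates[0].PersonId);
            Console.WriteLine($"{person.Name} is in view.");  // read aloud in the real device
        }
    }
}
```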

Right now, the device is an early prototype, and it is large and bulky: the Azure Kinect camera must be worn on the front of the body, and a backpack containing a computer must be carried on the user’s back. I hope to develop a smaller, more portable version that can be worn without being noticed.

Hopefully, more and more digital solutions to real-world obstacles will emerge as we realize the role computer science can play in our future. As a student, I’m excited to keep learning and to continue using technology to find innovative solutions to common problems.

If you’re an educator or student looking for support with remote learning, please check out our Remote Learning site.