Designing in the Dark: How to Help Congenitally Blind People to Design
Over my eight years as a product designer, I have come across many challenges in design and other aspects of problem-solving. Some were relatively easy; others required more time and effort. However, one problem bothered me the most, as it remained unsolved for years. It had nothing to do with my professional career or academics; rather, it was a personal call that made me feel responsible as a designer. Today, I may have opened the door towards its solution.
My name is Aren Khachatryan, and four years ago I challenged myself to design a system that allows the visually impaired to recognize color, interact with geometric shapes, and use that same technology to perform design work.
In this article, I will explain my approach, the challenges I faced, and the solutions I found. There is also a video interview at the end, where I explain what made me think about this problem and how I came up with a potential solution.
Problem
a. Describe color to a person who was born blind.
b. Help them further by teaching them how to use that knowledge to design a poster, book cover, magazine layout, UI elements or other artwork.
Research
Over 80% of all the information we receive is visual.
Admittedly, there are visually impaired artists who create amazing works of art, but nearly all of them were able to see at some point in their lives.
Thus, it is still very hard to convey the idea of color to the majority of people with congenital blindness (see Tommy Edison on YouTube).
There are already various applications that will read an image aloud and describe it using AI or image-recognition software, but they do not let users design their own work (e.g. Microsoft’s Seeing AI).
Similarly, there are braille and full tactile displays, which elevate shapes from a flat platform to trace and mimic imagery; there are also flat haptic displays, which simulate texture and bumps via vibrations on their surface. Some even let users “draw” shapes on them, but none of this technology allows for color recognition and complete control over design tools.
The Solution
To find a solution, we need to combine the information from the “Problem” and “Research” sections of this report. First, we need to describe color to people who have never seen it; then we need to create the hardware and software that let them use their other senses to know which colors are displayed and where. If they can’t see a picture, then they must feel it!
Simply naming the colors and shapes to a user through audible feedback is not enough, since reading out all the visual information would take a long time instead of providing an instant signal to the brain.
Hence, we need to make a way for blind users to instantly feel the color. This could be done by a combination of touch and sound.
Challenges: Without visual feedback, there is an overwhelming amount of information in a single picture or a design, all of which needs to be delivered fast and without any clutter.
Color is often described as being warm or cool, which means that if the users are provided with some sort of a heat map for the imagery (which they can feel by touch), they could instantly get a color value for a particular spot within that image. They can then map out the rest of it by touching all or most of the points of the heat map.
A few rules and principles about color should also be explained. Users need to be taught what emotions and ideas colors invoke in society, marketing and business. They also need to learn basic principles of color pairings, swatches and theory. After that, they can be trained to design basic layouts for posters, covers and other minimalist, yet beautiful works and progress further into designing more sophisticated products.
Once they have a basic understanding of color usage, they can start feeling it with the new technology. This can be achieved if the hardware has a surface divided into individual thermal points (e.g. 5mm x 5mm), which can independently change their temperature from 40° to 138° F. Any image can then be represented by cool and warm spots on this surface, and by simply running their hands across it, users should get an initial idea of the “mood” or theme of that image.
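As a rough sketch of this mapping (the function names and the warmth formula are my own assumptions, not a finished specification), each cell’s color could be reduced to a warmth value and then scaled into the pad’s 40–138 °F range:

```python
import colorsys

# Pad temperature range in °F, per the description above.
T_MIN, T_MAX = 40.0, 138.0

def warmth(r, g, b):
    """Return 0.0 (cool) .. 1.0 (warm) for an RGB pixel.
    Warmth here is the circular hue distance from blue, weighted by
    saturation so that grays settle near neutral (0.5)."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    d = abs(h - 2 / 3)
    d = min(d, 1.0 - d)        # circular distance from blue, 0..0.5
    hue_warmth = d / 0.5       # 0.0 at blue, 1.0 at yellow-orange
    return s * hue_warmth + (1 - s) * 0.5

def cell_temperature(r, g, b):
    """Map one 5mm x 5mm thermal cell's color to a target temperature."""
    return T_MIN + warmth(r, g, b) * (T_MAX - T_MIN)
```

With this rule, a pure blue cell sits at the coolest setting, a saturated red or yellow cell runs hot, and neutral grays land mid-range.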
In addition to the heat mapping, each color will also have a tune. As the users run their fingers across the thermal map, they will first get a general idea of the image by heat. However, they will not hear a sound tune just yet, so as not to be overwhelmed. The sound can only be heard at one point at a time, meaning that they will need to lift all but one finger to get the information for that point. This means that the heat pad will also need multi-touch sensitivity (like a touchscreen) to tell the computer where the user’s fingers land. If done rapidly, like typing, users should be able to quickly touch all 10 fingers at various points, get the color values at those points, and build a color map in their mind.
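The one-finger rule could be sketched as a small piece of touch-controller logic (the function name is a hypothetical of mine):

```python
def cell_to_sonify(touch_points):
    """Play a color tune only when exactly one finger is down;
    with zero or several fingers down, the pad stays in heat-only mode."""
    return touch_points[0] if len(touch_points) == 1 else None
```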
How to Distinguish Between Hues, Shades, Tones, and Tints?
1. Hue is the pure form of color
2. Shades are hues mixed with black
3. Tones are hues mixed with gray
4. Tints are hues mixed with white (pastels)
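These four families have a direct arithmetic counterpart in RGB: each mix is a per-channel interpolation of the hue toward black, gray, or white. A minimal sketch (the helper names are my own):

```python
def mix(color, other, amount):
    """Interpolate each RGB channel from `color` toward `other` by `amount` (0..1)."""
    return tuple(round(c + (o - c) * amount) for c, o in zip(color, other))

BLACK, GRAY, WHITE = (0, 0, 0), (128, 128, 128), (255, 255, 255)

def shade(hue_rgb, amount):
    return mix(hue_rgb, BLACK, amount)   # hue mixed with black

def tone(hue_rgb, amount):
    return mix(hue_rgb, GRAY, amount)    # hue mixed with gray

def tint(hue_rgb, amount):
    return mix(hue_rgb, WHITE, amount)   # hue mixed with white (pastel)
```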
Given that each hue has a tune or sound, we can then modify that tune, or use variations of it, to associate it with colors of the same family.
Example 1: A hue is defined by a tune (a certain music chord or melody).
We can slide the pitch frequency upward or downward to indicate whether it’s a shade or a tint (much like what the pitch wheel on a synthesizer does).
Example 2: Nature sounds can be used (e.g. canary sound for yellow)
In this case, an audible track could simply indicate the value of the new color with the mood or intensity of the sound (fast and frequent for pastels, or soft and slow for shades and tones).
Fading the sound or increasing the volume will also be good ways of achieving these associations.
Lastly, a short audible feedback can be voiced to tell the exact color value in RGB for confirmation.
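Example 1 can be sketched as a simple frequency rule: each hue keeps a base tone, and lightness bends the pitch up for tints and down for shades (the one-octave bend range here is an assumption):

```python
def color_pitch(base_hz, lightness):
    """Bend a hue's base tone by lightness: 0.5 plays the pure hue,
    values below 0.5 (shades) bend down toward an octave lower,
    values above 0.5 (tints) bend up toward an octave higher."""
    return base_hz * 2 ** (2 * (lightness - 0.5))
```

For a hue assigned a 440 Hz base tone, the darkest shade would sound at 220 Hz and the lightest pastel at 880 Hz, keeping the family recognizable while signaling its value.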
Designing and Reading Shapes
As mentioned before, numerous tactile displays have been designed over the years to help users draw and recognize shapes on a platform consisting of round-headed pins.
However, their high cost, heavy weight, and complex structure make them inaccessible to individuals in poor regions. The cost and weight are usually high because of the individual pneumatic or motorized systems used to control the touch points (the round-headed pins).
The new design, shown below, consists of a much simpler magnetic design, which is more compact, lightweight and low in production cost.
Instead of pins, it uses tightly packed small square keys, similar to those on a regular keyboard; and instead of elevating pins to mimic the surface, it lowers its keys to create the shape of the image. This allows for an easier drawing experience, since you indent the shape as you draw instead of bringing it up (which is an unnatural motion).
Just like on a regular keyboard, when a key is pressed, the spring beneath it allows for vertical movement. However, an electric magnet will hold it in place until later released by a reset function. All keys can move independently and can be lowered by the user’s input, or controlled by the computer module (the key can be pulled down by the e-magnet, be held and then released to its position, or it can be manually pressed with a finger or a stylus pen when drawing).
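The key behavior described above can be modeled as a tiny state machine (a sketch; the class and method names are my own):

```python
from enum import Enum

class KeyState(Enum):
    RAISED = 0    # resting position, spring extended
    LOWERED = 1   # latched down by the electromagnet

class TactileKey:
    """One square key: a finger, a stylus, or the computer's e-magnet
    can pull it down and latch it; reset() releases the magnet so the
    spring returns the key to its raised position."""
    def __init__(self):
        self.state = KeyState.RAISED

    def press(self):
        # Manual press or e-magnet pull; the magnet holds it in place.
        self.state = KeyState.LOWERED

    def reset(self):
        # Release the magnet; the spring restores the key.
        self.state = KeyState.RAISED
```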
Designing with the New Tactile Display
Crafting designs on the new tactile display is very simple. Vector graphics software is the most common tool used by graphic and UX/UI designers. Most often, designers use primitive tools, such as the rectangle, ellipse, or line segment tool, all of which require a simple click-and-drag input to be drawn (corners, stroke, and other attributes can be adjusted later via voice command). As a result, you draw all of those shapes by dragging your hand or mouse in a line. Pen and spline tools are used less frequently when designing posters, book covers, magazine layouts, or even UI elements, but the tactile display can be used for drawing splines as well.
If a user needs to draw a rectangle, they will simply select the rectangle tool and drag a finger across the board, and a rectangle will be formed using the line they’ve just drawn, as its diagonal. All tools can be selected via voice command, then centered, resized or positioned in the same manner.
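The diagonal-drag rule amounts to a small geometry helper (the function name is hypothetical), which turns the two endpoints of the drag into an axis-aligned rectangle:

```python
def rect_from_diagonal(p1, p2):
    """Build an axis-aligned rectangle from a diagonal drag.
    Returns (x, y, width, height) with a top-left origin, so the
    drag direction doesn't matter."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2), min(y1, y2), abs(x2 - x1), abs(y2 - y1))
```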
Type can be 3D-printed (serif and sans serif, italic and bold) and introduced to our users, in order for them to recognize the different styles of typography. They will then be taught which fonts to pair together to establish an appealing look. Voice input can be used here as well.
The tactile display will not have the resolution to emboss the small type, but it can indicate the size and position of the text box or the title.
The ideal solution would be to combine both panels into one, and have a tactile display with thermal points within the keys/pins. An easy approach would be to have the initial temperature at all points set to the lowest value by default (40° F or lower) using a thermoelectric cooling device, such as a Peltier cooler. This is because heating those points can be done more easily and quickly than cooling them; by having all points start at the coolest temperature, and remain there until triggered by a new image, the refresh time for updating new information becomes much faster.
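The cool-by-default refresh strategy can be sketched as follows (modeling the pad as a dict of cell coordinates to temperatures is my own assumption):

```python
def refresh(pad, targets, t_min=40.0):
    """Reset every cell to the cool baseline first, then heat only
    the cells the new image needs - heating is the fast direction,
    so refresh time depends mostly on the warm cells."""
    for cell in pad:
        pad[cell] = t_min           # bulk reset via the Peltier cooler
    for cell, temp in targets.items():
        if temp > t_min:
            pad[cell] = temp        # heat selectively for the new image
    return pad
```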
Similarly, we can teach blind users how to design industrial products by feeling their shape.
The users may be presented with 3D-printed shapes or models of any object, including vehicles, gadgets, furniture and architecture.
Again, they can be taught by feeling how some shapes are more appealing than others, what proportions work the best in design and which products and brands have the highest demand due to their design. These will establish feel-patterns for the blind designers, to shape the new cars, furniture and other objects of our future.
This idea is not perfect. It has not been user-tested or prototyped yet, but sometimes you believe in something so strongly that you want to share it with the world! Right now I am sharing it with you — the reader of my very first article — and I hope it inspired you. Any feedback I receive would be greatly appreciated and used to make me a better designer. Thank you.