Finding the Light

Light has played an important role in my life, both in a physical sense and in a more symbolic one. In the physical sense, there is my inability, due to a visual impairment, to perceive light in a way that allows me to see like most people. I have a condition called retinitis pigmentosa that results in progressive vision loss, starting from the periphery and moving toward the center of the visual field, and there is a good possibility that someday I will lose my remaining eyesight. Today, I have less than 10 degrees of central vision remaining, which means that I am classified as legally blind.

It was only after I was diagnosed with my visual disability that I became interested in photography, which is all about playing with light. I approached photography not only as a personal challenge, but as a way to challenge the world and the way it sees me as a person with a visual disability. As a personal challenge, photography has encouraged me to not withdraw from the world, but to engage with it. Photography has encouraged me to get out of my comfort zone and travel, because as they say, “in order to take more interesting photos, you have to visit more interesting places.” I use photography to challenge assumptions about ability and disability. One of my favorite things to do is to pull up to a spot with my white cane and take out my favorite camera to take a photo (these days that camera is likely to be my iPhone). The idea is to use two things that are not often associated with each other (a blind person’s white cane and a camera) to challenge assumptions about what it means to be blind and what blind people can do. In this sense, photography is a tool I use to educate others.

In a more symbolic way, light refers to the role education and educators have played in my life. I have been fortunate to have a number of mentors in my life. One of those was Julio, the social worker who was assigned to me when I struggled in school after arriving in the U.S. as a non-English speaker. In the middle of a somewhat chaotic transition to a new country, a new culture and a new language, Julio became my lifeline. As a strong Dominican-American male figure, Julio became my role model for what I could achieve if I applied myself and pursued an education. My second mentor was Profe Rick. Although Profe was the Spanish teacher at my high school, and I didn’t take Spanish, he became a trusted friend without whose support I would not have made it through boarding school.

Just a few years after I arrived in the U.S., I received a scholarship that allowed me to attend a Quaker boarding school for ninth grade. This was a turning point in my life. The motto of my boarding school was “Turn to the light,” a saying that captures the Quaker idea that each of us has an inner light that represents that of God within us. While I am not a religious person, this idea of inner light left a lasting impression. It has guided my work throughout my life, including what I do today as an inclusive learning consultant. My goal in this role is to find that inner light in each person, that spark that represents each person’s potential and ability to contribute. Just as Julio and Profe Rick found that spark in me and lit my inner light, I try to look for ways in which technology can empower learners who face similar challenges as the ones I faced in school to find their own inner light and unleash their potential. What keeps me going in this work is what I call the “magical moment”: that moment when you see the spark in a person’s eye that lets you know you’ve changed their life for the better in an instant.

I had such a “magical moment” a decade ago when I first encountered inclusive technology. I had just been diagnosed with my visual impairment and was struggling to find my way through a master’s degree in instructional technology at the University of South Florida. At around that time, Apple had released Mac OS X Tiger with the VoiceOver screen reader and the advanced “Alex” voice. What made this a “magical moment” for me was the message I got from the technology. It was a message of hope that everything was going to be OK because there were really smart people working on technology that would allow me to accomplish my goals even if I lost my remaining vision. In this way, “Alex” spoke to more than just my ears and my brain — it spoke to my heart and my soul. It was the spark I needed to persevere in my studies and go on to complete my master’s degree and later my doctorate.

When we think of light, we often think of it only in the physical sense, that light which allows us to perceive the colors and beauty in the world around us. But light can be much more. It can be our inspiration, our spark that keeps us going and allows us to overcome the challenges we face in our lives. For me, light has not only been the physical light I have been losing with every passing year, but the symbolic light I have gained through the people and technology that have come into my life to allow me to have a meaningful and fulfilling life.

My challenge to you is this: How will you be that light for somebody else? More importantly, how will you help them “turn to the light” and find their own spark?

Double exposure showing an image of cars moving toward the light at the end of a tunnel overlaid over a closeup of Luis's eye.

On the usability of touch screens for screen reader users

Recently, Katie Sherwin of the Nielsen Norman Group published an article on the NNG website summarizing her experience with the VoiceOver screen reader for iOS devices, and her suggestions for designing better interactions for blind users. The article has some good design suggestions overall: creating cleaner copy with streamlined code is always a good thing. So is including alternative text for images, making sure all interactions work with the keyboard, and clearly indicating the hierarchy of the content with headings that separate it into logical sections. On these suggestions, I am in full agreement with the author.

Where I disagree with her is on the representation of what the experience of using a mobile device is really like for actual blind users. The author herself acknowledges that she only started experimenting with the screen reader after attending a conference and seeing how blind users interacted with their devices there. It is not clear how much time she has had to move beyond the most basic interactions with VoiceOver. Thus, she states that “screen readers also present information in strict sequential order: users must patiently listen to the description of the page until they come across something that is interesting to them; they cannot directly select the most promising element without first attending to the elements that precede it.”

This may be accurate if we are talking about someone who has just started using VoiceOver on an iPhone or iPad. It ignores the existence of the Rotor gesture familiar to many power users of VoiceOver. With this gesture, users actually can scan the structure of a web page for the content they want to focus on. They can see how the page is organized with headings and other structural elements such as lists, form elements and more. Many users of VoiceOver also use the Item Chooser (a triple-tap with two fingers) to get an alphabetical list of the items on the screen. Both of these features, the Rotor and the Item Chooser, allow users to scan for content, rather than remaining limited to the sequential kind of interaction described in the NNG article.
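To make the difference concrete, here is a minimal sketch of the two navigation styles, sequential swiping versus Rotor-style navigation by headings. The page content and function names are made up for illustration; this is a toy model, not how VoiceOver is actually implemented:

```python
# Toy model of two ways a VoiceOver user can move through a page.
# The page content and function names here are hypothetical.

page = [
    ("heading", "Top Stories"),
    ("text", "A long introductory paragraph..."),
    ("link", "Read more"),
    ("heading", "Sports"),
    ("text", "Another long paragraph..."),
    ("heading", "Weather"),
]

def sequential_order(elements):
    """Swiping right repeatedly: every element, one by one, in order."""
    return [content for _, content in elements]

def rotor_order(elements, element_type="heading"):
    """Rotor set to Headings: jump directly between headings,
    skipping everything in between."""
    return [content for kind, content in elements if kind == element_type]

print(len(sequential_order(page)))  # 6 stops to reach the end of the page
print(rotor_order(page))            # ['Top Stories', 'Sports', 'Weather']
```

The point of the sketch: well-structured headings are what make the second, faster style of navigation possible, which is why the heading advice in the NNG article matters even more for power users.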

As for the point about the cognitive load of the gestures used on a touch screen device like the iPhone, it should be pointed out that the number of gestures is actually quite small compared with the extensive list of keyboard shortcuts needed on other platforms. I do agree with the author that typing remains a challenge when using the onscreen keyboard, but there are other options available to make text entry easier: the user can choose to use any of the many Bluetooth keyboards available on the market for a more tactile experience; dictation is built in and has a pretty good level of accuracy for those who prefer using their voice; and new input modes introduced in iOS 8 and 9 allow for handwriting recognition as well as Braille input.

To help new users with the learning curve (and the cognitive load), Apple provides a built-in help feature that is only available in the VoiceOver settings when the feature is active. Once a user goes into the help, he or she can perform gestures to hear a description of what they do. Another benefit for users is the fact that many of the gestures are the same across the Apple ecosystem. Thus, a VoiceOver user can transfer much of what they have learned on an iOS device to the Mac, which has a trackpad roughly the size of an iPhone, the new Apple TV with its touchpad remote, and even the Apple Watch (with a few modifications to account for the limited screen real estate). Finally, I have found that learning the gestures is as much a matter of muscle memory as it is about remembering what each gesture does. The more time you spend performing the gestures, the easier they become. As with any learned skill, practice makes a difference.

Again, there is a lot of good advice in this article as it relates to the need for more inclusive designs that minimize unnecessary cognitive load for users. However, a key point that is missing from that advice is the need to get feedback on designs from actual people who are blind. The way a blind user of VoiceOver interacts with his or her iOS device will often be quite different from the way a developer (or a usability expert) who is just becoming familiar with the feature does.

Global Accessibility Awareness Day: The Need for an Ecosystems Approach to Accessibility in Education

On the occasion of Global Accessibility Awareness Day, I am excited about the many online and face-to-face events that will mark this important step toward ensuring a more accessible and inclusive environment for those of us who have special needs. I will be presenting a session on the use of photography as a tool for disability advocacy as part of Inclusive Design 24, a free 24-hour event sponsored by The Paciello Group and Adobe Systems. Photography has long been a passion of mine, and I welcome any opportunity to share how I use it as an educator and advocate to challenge perceptions of ability/disability. I will also be sharing resources and insights during a #GAADILN Twitter chat sponsored by the Inclusive Learning Network of the International Society for Technology in Education (ISTE).

I love Global Accessibility Awareness Day (or GAAD as I will refer to it from now on), but if there is one thing I would change, it is the name of the event. To me it should be Global Accessibility Action Day. With many of these types of events the focus is on raising awareness of the needs of people with disabilities, as if we have not been asking for our rights for decades now (the ADA is more than two decades old, you know). GAAD gets it right by focusing on action. Organizations such as Shaw Trust Accessibility Services, Deque Systems and Accessibility Partners are offering a number of free services such as document audits, app accessibility consultations and website user testing. Many others are providing webinars and live presentations that aim at more than raising awareness by providing practical information on how to make documents, websites and apps more accessible. A review of the full list of events available on the GAAD website makes it clear that this event is about more than just awareness; it is about taking the next step for accessibility.

In my own field of education, I see much progress being made, but I also see a need for a more comprehensive, ecosystems approach to inclusion and accessibility. When I think of ecology I think about systems that have a number of parts working together as one, with the whole being greater than the sum of its parts. When it comes to students with disabilities, a number of technologies are now available as built-in options on the mobile devices many of them own. While I am a witness to the impact these technologies can have on the lives of students with disabilities (having been one who used these technologies myself), I believe their impact is limited by their use in isolation rather than as part of a more comprehensive system.

What I would like to see is a change in thinking to focus on a systems approach that addresses what I see as the three As of accessibility:

  • Accessibility Features: companies such as Apple now include a comprehensive toolkit for accessibility that is built into the core of the operating system. This means that when I take my new Mac, iPhone or Apple Watch out of the box it will be ready for me to use without the need to purchase or install additional software. Not only that, but as my vision gets worse I know that I will be able to take my device out of the box and set it up independently, without having to wait for someone with better eyesight to help me. These built-in accessibility features have been life-changing for me. Without them I’m not sure I would have been able to pursue higher education and complete my master’s and doctoral studies. I also would not be able to do the photography that brings so much joy and beauty into my life. Unfortunately, not all educators know about even the most basic of these features that are built into the technology their districts have spent so much money to purchase. I am often surprised when I do presentations around the country (and sometimes in other parts of the world) by how little awareness there is among educators of the potential they literally hold in their hands to change a student’s life. We need to do better in this area of professional development to allow these tools to have an even greater impact on education for all students, not just students with disabilities but any student who struggles with the curriculum and needs additional support.
  • Accessible Apps: the built-in accessibility features provide a great baseline for addressing the needs of people with disabilities, but they can’t do it all. There is just too much diversity and variability for that to be the case: not just in the traits and needs of users, but in the settings and tasks where technology is used. For this reason, it is often necessary to extend the capabilities of the built-in accessibility features by installing apps that provide greater customization options. A great example is the Voice Dream Reader app. While iOS has a robust text-to-speech feature with word highlighting that now supports a high-quality Alex voice, Voice Dream Reader allows for even greater customization. The user can adjust the color of both the word and sentence highlighting, something which cannot be done with the built-in Speak Selection feature of iOS. For those who are blind and use the VoiceOver screen reader, the developer has done an excellent job of labeling all of the app’s controls. A companion Voice Dream Writer app even provides a special mode for VoiceOver users to make it easier for them to enter and edit text, showing a strong commitment to usability for all users on the part of this developer. Other examples of developers who are doing exemplary work when it comes to creating accessible apps include AssistiveWare (developers of Proloquo2Go, Proloquo4Text and Pictello, all apps with excellent support for VoiceOver and Switch Control) and Red Jumper (developers of the popular Book Creator app). The latter added an Accessibility option for images and other objects to help students and educators create accessible content with the app. Unfortunately, these developers are still the exception rather than the rule. With too many apps, swiping through with VoiceOver results in hearing “button” over and over with no indication of what the button actually does. Worse, many of the buttons for key actions sometimes can’t even be selected.
Without attention to accessibility from app developers, the accessibility features can’t work to their full potential. No matter how good the voice built into VoiceOver is (and Alex is pretty good), it does me no good if I can’t select the buttons within an app and determine what they do.
  • Accessible Content: the same problems that exist with inaccessible apps come into play with much of the content that is available online for students. Too many videos lack captions (or include only automatic, computer-generated captions that contain too many errors to be useful), and too many ebooks include images that are not labeled with accessibility descriptions for those who can’t see them. Without these accessibility descriptions, which can be easily added in authoring tools such as iBooks Author, a blind student taking a science class or an economics class will not be able to access the diagrams and other graphics that are so often used in these fields. Again, adding in features such as accessibility descriptions allows the built-in accessibility feature, in this case VoiceOver, to work to its full potential. There are many wonderful examples of books that include accessibility, as well as resources to help educators develop their own accessible books with tools such as iBooks Author that are easy to learn and use. These include Creating Accessible iBooks Textbooks with iBooks Author from the National Center for Accessible Media and Inclusive Design for iBooks Author by my friend and fellow Apple Distinguished Educator Greg Alchin. For a great example of an engaging and accessible book, one need not look any further than Reach for the Stars, a multi-touch book from SAS that makes astronomy come alive not only for blind students but for anyone who wants to learn about our universe using all of their senses.
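The unlabeled “button” problem can be sketched with a toy model. This is not real iOS code (on an actual device, developers set the accessibility label on their views); the names here are hypothetical, and the point is simply that a control with no label leaves a screen reader nothing to announce beyond its generic trait:

```python
# Toy model (hypothetical names) of how a screen reader announces a control.
# When a developer never supplies a label, all the screen reader can do
# is announce the control's trait: the dreaded bare "button".

class Control:
    def __init__(self, trait="button", accessibility_label=None):
        self.trait = trait
        self.accessibility_label = accessibility_label  # None if never set

    def announcement(self):
        """What the screen reader speaks when this control gains focus."""
        if self.accessibility_label:
            return f"{self.accessibility_label}, {self.trait}"
        return self.trait  # no label to speak, only the trait

unlabeled = Control()
labeled = Control(accessibility_label="Share photo")

print(unlabeled.announcement())  # button
print(labeled.announcement())    # Share photo, button
```

The fix on the developer's side is usually one line per control; without it, even the best screen reader voice has nothing meaningful to say.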

As shown by the following diagram, when the three components are present (robust accessibility features, accessible apps, and accessible content) we get a synergy that results in an even greater impact than each tool or feature can have on its own: this is the sweet spot for accessibility in education.

Three overlapping circles labeled as Accessibility Features, Accessible Apps and Accessible Content, with the spot where they converge labeled as Sweet Spot.

To ensure accessibility in education we all must work together to realize the advantages of an accessibility ecosystem: companies such as Apple and others who are building accessibility into their products, app developers and content authors. As AssistiveWare’s David Niemeijer so nicely stated in his own GAAD post, when we take accessibility into account we really are designing for everyone, because we will all one day get old and require the ability to customize the text size and other features of our devices to account for our aging vision and hands.

Furthermore, to quote from a recent Apple commercial, “inclusion promotes innovation.” Thinking about accessibility from the start, in line with the principles of universal design, requires us to be even more creative as we seek to solve problems of access that may someday result in usability improvements for everyone.

A great example of that is the recently released Apple Watch. Since it has a small screen that makes it difficult to enter text, much of the interaction with the Apple Watch takes place through the Siri personal assistant. The voice recognition technology that makes Siri possible actually had its origins in the disability community, but now it can be used to account for the constraints of a smart watch and its small screen.

The Apple Watch is also a great example of an ecosystems approach to accessibility and its benefits. This device includes many of the same accessibility features that are available on the iPhone and the iPad, which are the same features I can use on my Mac. What this means is that if I get a new Apple Watch I will already know how to use these features, with a few modifications to account for the smaller screen. Similarly, a blind student who has been using his or her iPhone can easily transfer the use of many VoiceOver gestures to the trackpad built into Mac laptops or the Magic Trackpad used on iMacs.

Why is an ecosystems approach like this so important? Ultimately it is because I as a person with a disability need accessibility 24/7, 365 days a year, most likely for the rest of my life (unless a cure is found for my condition). My need for accessibility doesn’t stop when I get up from my desk at home and walk out the door. I need accessibility as I order a ride from a ride sharing service from my smartphone (which has Zoom and VoiceOver built in), as I take and share the photos that bring so much joy to my life and capture the beauty I encounter in the places I am lucky to visit (through accessible apps such as Instagram), and as I continue to develop my skills and knowledge base by reading ebooks about my field that I download from the iBookstore and read with iBooks (accessible content). For someone like me, accessibility is needed across a number of settings and situations if I am to be independent and continue to make a valuable contribution to society. Only an ecosystems approach can provide the kind of comprehensive accessibility I and many others who have disabilities need to live a fulfilling life.

We Have the Will, Do You?

I generally devote my posts on this blog to the sharing of videos, tips and other resources related to accessibility and universal design, rather than addressing political issues related to disability and society. Today I am going to make an exception.

As I was going through my Twitter feed and looking at posts from my excellent PLN, I came across the following tweet from star ed tech blogger Vicki Davis:

You can read the referenced blog post for yourself, but the gist of it is that we now have the technology to allow students who have special needs to shine if only we had the will to take advantage of it. I agree with this idea (we do have some amazing built-in accessibility features that every student, not just those who have special needs, can use to enjoy greater access to information). You can learn all about these technologies by watching the many videos on my YouTube channel. However, what I had a problem with was the assumption the post was based on, which the author makes plain early in the piece: “Like Hawking, many students are trapped in the prison of a body that does not unleash their capability.”

For those of you who are not familiar with it and don’t have to deal with it on a regular basis (as I do), this is an example of the “medical model of disability” at work. This is a model that holds that people with disabilities experience life through a “broken” body that has to be normalized through the use of modern medicine (through pills, surgeries or prosthetics). Now, the decision to avail yourself of any of these interventions is a personal one, and I try not to judge anyone who makes such a choice. What I do take issue with is the narrative of disability that such a model supports, a narrative that revolves around shame and pity (as mentioned in an excellent comment to the post submitted by Anna, a parent of a child with a disability).

On the one hand, I am happy to see that the mainstream ed tech community is finally starting to engage with this topic and recognizing the usefulness of assistive technology and accessibility for everyone. When popular and influential bloggers like Vicki Davis focus on a topic it brings attention to it because of their large audience (she has more than 98,000 followers on Twitter, about 95,000 more than me if you are counting). On the other hand, when popular bloggers share messages based on faulty assumptions, those assumptions are given even more life and staying power.

Fortunately, one of the beauties of blogging is that people can engage in dialogue (sometimes civil and sometimes not) through comments. Unfortunately, not everyone reads the comments, and most leave the blog with only the ideas presented in the post as their takeaway. In the case of this post, I was happy to read a very articulate response from Anna that called out the author on her assumptions, and did so in a very respectful way. I hope that I am doing the same here. My intent is not to single out Vicki Davis. I follow her and see all the good she does through her work. My intent is to show that the ed tech community as a whole needs to change its assumptions and long-held beliefs about people like me. The same community that creates and uses many of the tools that can do the most to empower people with disabilities is also one of the most exclusionary when it comes to disability. If you want to see that at work, just submit a conference proposal that mentions universal design, accessibility or similar terms and see how well that goes. But I digress; that’s a topic for another post.

As Anna suggests, there is another way of thinking about this issue, one that is inspired by a different set of assumptions. Instead of focusing only on people with disabilities and their bodies, a “social model of disability” focuses on the environment as the source of significant barriers for people like me. For example, one of the major issues I deal with is the lack of transportation in my home area. If I had access to good transportation, my ability to get to a job, to doctor’s appointments and even to leisure activities like going to the gym would not require so much planning and effort.

Rather than “letting the caged bird sing,” from a social model perspective we wouldn’t build a cage in the first place. The social construction of disability is at work when a class website is built without any thought for how a parent with low vision (not just a person with a disability, but also someone who is older or who has had an eye injury) can use it, or when a classroom has too little space between tables for someone who uses a wheelchair to get around the room comfortably and without too much effort. People with disabilities are all too familiar with the social construction of disability. It is often those who don’t have disabilities who are not, because as Vicki Davis states in a response to Anna’s comment, they don’t have to live it every day.

I agree with Vicki Davis on a key point. It is a matter of will. But it is not a willingness to take advantage of technology that will make a difference going forward. Rather, it will be a willingness to question assumptions about the nature of ability/disability. No matter how much technology there is (and how good it is), as long as people’s thinking doesn’t change then we are not really moving the needle on this issue. People with disabilities recognize the social construction of disability (as evidenced by years of struggle leading to legislation such as the Americans with Disabilities Act). The question is: does the rest of society (including the ed tech community) have a similar willingness to reflect on and change the way it portrays and discusses disability? It is much easier to retreat into “inspiration porn.” It takes a lot more will to do the long-term work of changing assumptions through ongoing reflection. Will you have that will?