All posts by Luis Pérez, Ph.D.

Luis Pérez is an inclusive learning consultant based in St. Petersburg, Florida. He has more than a decade of experience working with educators to help them integrate technology in ways that empower all learners. Luis holds a doctorate in special education and a master’s degree in instructional technology from the University of South Florida, and he is the author of Mobile Learning for All: Supporting Accessibility with the iPad, published by Corwin Press. Luis was honored as an Apple Distinguished Educator (ADE) in 2009 and as a Google in Education Certified Innovator in 2014. He is also a TouchCast and Book Creator Ambassador. Luis currently serves as the Professional Learning Chair of the Inclusive Learning Network of the International Society for Technology in Education (ISTE), which recognized him as its 2016 Outstanding Inclusive Educator. His work has appeared in publications such as Teaching Exceptional Children, Closing the Gap Solutions, THE Journal, and The Loop Magazine. In addition to his work in educational technology, Luis is an avid photographer whose work has been featured in Better Photography magazine, Business Insider, the New York Times Bits Blog and the Sydney Morning Herald. Luis has presented at national and international conferences such as South by Southwest EDU, ISTE, CSUN, ATIA and Closing the Gap.

10+ Accessibility Features Coming with iOS 11

 

Slide during Apple's keynote at WWDC showing a big list of upcoming iOS 11 features that were not discussed during the keynote.

At its recent Worldwide Developers Conference (WWDC), Apple gave developers and the public a preview of the next version of iOS, its operating system for mobile devices such as the iPad, iPhone and iPod Touch. This post will not provide a full overview of all the changes coming in iOS 11; instead, it will focus on a few key ones that will have the most impact for people like me who rely on the built-in accessibility of these devices.

A public beta will be coming later this summer, and I can’t wait to get my hands on it to start testing some of these new features. For now, this overview is based on everything I have read on the Web (shout out to AppleVis for a great overview of the new features for those with visual impairments), what I saw in the videos available through the WWDC app after each day of the conference, and updates from people I follow on social media who were at WWDC (you should definitely follow Steven Aquino, who provided excellent coverage of all things iOS accessibility from WWDC).

Without further delay, here are ten new or enhanced iOS accessibility features coming soon to an iPad or iPhone near you:

  1. Smart Invert Colors: In iOS 11, Invert Colors will no longer be an all-or-nothing affair. The new Smart Invert option leaves images and video alone while inverting the rest of the interface. This fixes a problem I have always had with Invert Colors: sometimes there is text in a graphic or video that is essential for understanding, and with Invert Colors as it currently exists that text can be difficult to read. That will no longer be the case in iOS 11.
  2. Enhanced Dynamic Type: Dynamic Type has been enhanced to reduce clipping and overlapping at larger text sizes, and it will work in more of the UI for apps that support it. In areas of the UI where text is not resized dynamically, such as tab bars, a tap and hold on the selected control will show it at a larger size in the middle of the screen.
  3. VoiceOver descriptions for images: VoiceOver will be able to detect text that’s embedded in an image, even if the image lacks alternative text (or, as Apple calls it, an accessibility description). VoiceOver will also announce some of the items in a photo that has not been described (tree, dog, sunset, etc.), much like the Camera app already does when you take a photo with VoiceOver turned on.
  4. A more customizable Speech feature: you can now choose the colors for the word and sentence highlighting available in Speech features such as Speak Selection and Speak Screen. These features are helpful for learners who struggle with decoding print and need the content read aloud. The highlighting can also help with attention and focus while reading, and it’s nice that we can now change the color for a more personalized reading experience.
  5. Type for Siri: In addition to using your voice, iOS 11 also allows you to interact with Siri by typing your requests. This is not only an accessibility feature (it can help those with speech difficulties who are not easily understood by Siri) but also a privacy convenience for everyone else. Sometimes you are in a public place and don’t want those around you to know what you are asking Siri.
  6. More options for captions: for videos that include closed captions, you can now enable an additional style that makes the text larger and adds an outline to help it stand out from the background content. Along with this new style, you can now turn on spoken captions or convert the captions to Braille. This last option could make the content more accessible to individuals with multiple disabilities.
  7. Switch Control enhancements: typing can take a lot of time and effort when using switch access technologies. With iOS 11, Apple hopes to make this process easier by providing better word prediction in Switch Control, as well as a “scan same key after tap” option (this will repeat the same key without requiring the user to scan to it again, which can take some time). Other Switch Control enhancements for better overall usability include:
    • Point Mode has an additional setting for more precise selections: this adds a third scan to refine the selection at an even slower pace, and early reports are that it selects the exact point rather than the surrounding accessibility element (button, etc.).
    • Scanner Menu option for Media Controls: recognizing that media playback is a popular activity for switch users (just as it is for everybody else), a new Media Controls category has been added to the scanner menu. I assume this feature will work in any app with playback controls, which would make it a great option for Voice Dream Reader and similar apps whose playback controls sit at the bottom of the screen (and otherwise require a lot of scanning to reach).
  8. Improved PDF accessibility support: while I am not a big fan of PDF as a format, there are still a lot of legacy PDF documents out there, so it is nice to see improved support for PDF accessibility in iOS 11. One of the most common uses of PDFs is to collect information through forms, and with iOS 11 Apple promises better support for forms as well as for tagged (properly marked up) PDF documents.
  9. Better Braille support: as reported by AppleVis, the Braille improvements in iOS 11 include better text editing and more customizable actions that can be performed through shortcuts on a Braille display.
  10. A redesigned Control Center: you will have more ways to get to the features you use the most with the new Control Center, which will now allow you to add widgets for the Accessibility Shortcut, the Magnifier and text resizing.

As with most releases of iOS, there are a number of features that can be beneficial to users with disabilities even if they are not found in the Accessibility area of Settings. These include:

  1. An improved Siri voice: we got a taste of how much more natural the new Siri voice will sound, but there was not a lot of detail provided. It is not clear if this voice will be available to VoiceOver users, or if it will be incorporated into the rest of the Speech features such as Speak Selection and Speak Screen when iOS 11 finally ships to the general public in the fall.
  2. Siri translation: Siri can now translate from English into a few languages – Chinese, French, German, Italian, or Spanish.
  3. Persistent Reader View in Safari: when you long-press the Reader icon, you can choose to have Reader view activate on all websites or just on the current one. Reader view is helpful for removing ads and other distractions that compete for attention, especially for people with attention or learning difficulties such as ADHD.
  4. Apple TV remote in Control Center: It is also possible to add an onscreen Apple TV remote to the Control Center. This will be helpful for Switch Control or VoiceOver users who may prefer this option to using the physical Apple TV remote.
  5. One-handed keyboard: this option is intended to help anyone who is trying to enter text while one hand is busy with groceries and the like, but it can also be helpful to someone who is missing a limb. Tapping the Globe icon that provides access to third-party keyboards will now show an option for moving the keyboard to either side of the screen, where it is easier to reach with one hand.
  6. One-handed Zoom in Maps: this feature is intended to give drivers better access to Maps while on the road, but as with the one-handed keyboard, others will benefit from this change as well. As someone who often has one hand busy with a white cane, I welcome all of these features that make the iPhone easier to use with just one hand.
  7. Redesigned onscreen keyboard: Letters, numbers, symbols, and punctuation marks are now all on the same keyboard. Switching between them is as easy as a simple flicking gesture on the desired key.
  8. Easier setup of new devices: anything that reduces the amount of time spent entering settings is helpful to switch and screen reader users. When setting up a new iOS device, there’s now an option in iOS 11 to hold it near an existing device to automatically copy over settings, preferences, and iCloud Keychain.
  9. More customizable AirPod tap controls: AirPods can now be customized with separate double-tap gestures for the left and right AirPod. One can be set to access Siri, for example, while the other can be set to play the next track. Previously, double-tap settings applied to both AirPods. This will be helpful for individuals who rely on AirPods to access Siri as an accessibility option.
  10. Less restrictive HomeKit development: it will now be possible to develop a HomeKit device without being part of Apple’s HomeKit licensing program. All that will be required is a developer account. The catch is that any HomeKit devices developed this way cannot be sold. That should be fine for assistive tech makers who just want to experiment and prototype solutions for their clients without the investment required to be part of the official HomeKit program (see the short sketch after this list for a sense of what the app side of such a prototype can look like). As The Verge suggests, this could also encourage more developers to dip their toes into HomeKit development, which will hopefully lead to more options for those of us who depend on smart homes for improved accessibility.
  11. QR Code support in the Camera: QR codes can be a helpful way to provide access to online resources without requiring a lot of typing of URLs and the like. They are frequently used in the classroom for this purpose, so I know teachers will welcome having this capability built in.
  12. SOS: There’s an Emergency SOS option in the Settings app that allows users to turn on an “Auto Call” feature. This will immediately dial 911 when the Sleep/Wake button is pressed five times. A similar feature has been available on the Apple Watch since the introduction of watchOS 3, and it’s nice to see it on the iPhone as well.
  13. Core Bluetooth support for Apple Watch: while this is an Apple Watch feature, I’m mentioning it here because the Apple Watch is still very closely tied to its paired iPhone. With watchOS 4, which was also previewed at WWDC, Apple Watch Series 2 is getting support for connecting directly to Bluetooth low energy accessories such as glucose monitors. Furthermore, the Health app will have better support for diabetes management in conjunction with Core Bluetooth, including new metrics related to blood glucose tracking and insulin delivery.
  14. Indoor navigation in Maps: Maps has been a big help for me whenever I find myself in an area I don’t know. I love the walking directions and how well they integrate with the Apple Watch so that I can leave my phone in my pocket as I navigate with haptic feedback and don’t give off that lost tourist look. With iOS 11, these features will be extended to indoor spaces such as major malls and airports.
  15. A redesigned App Store: the screenshots I have seen point to a bigger, bolder design for the App Store, which will be welcome news to those of us with low vision. If you like how Apple News looks now, you will be pleased with the redesigned App Store.
  16. Built-in screen recording: I rely on Screenflow to record my video tutorials, but having screen recording as a built-in feature will be convenient for quick videos. This will be great for providing tech support to parents, or for documenting accessibility bugs to developers.
  17. Person-to-person payments in Messages: anything that allows payments without the need to use inaccessible currency is A-OK with me.
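
Regarding the HomeKit item above: to give a sense of what the app side of a quick prototype can look like, here is a minimal Swift sketch (my own illustration, not anything Apple showed at WWDC) that simply lists the accessories HomeKit already knows about. It assumes an iOS app with the HomeKit entitlement and the NSHomeKitUsageDescription key in its Info.plist.

```swift
import HomeKit

// Minimal sketch: enumerate the homes and accessories the user has set up in HomeKit.
// Keep a strong reference to an instance of this class so the delegate callback fires.
final class AccessoryLister: NSObject, HMHomeManagerDelegate {
    private let manager = HMHomeManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    // Called once HomeKit has loaded the user's home data.
    func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
        for home in manager.homes {
            print("Home: \(home.name)")
            for accessory in home.accessories {
                print("  \(accessory.name) (reachable: \(accessory.isReachable))")
            }
        }
    }
}
```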

The Notes app has been greatly enhanced in iOS 11 to make it an even better tool for diverse learners who need options for how they capture, store and retrieve information:

  • Instant Markup: adding an annotation to a PDF document or screenshot will be as easy as picking up the Apple Pencil and touching the screen to start drawing/annotating.
  • Instant Notes: tapping the lock screen with Apple Pencil will create a quick handwritten note that will appear in Notes once the device is unlocked.
  • Inline Drawing: when you begin to draw or annotate in Notes, the text around the annotation will move out of the way. You can add inline drawings in Mail as well.
  • Searchable annotations in Notes: everything you write with the Apple Pencil in Notes will now be searchable, making it much easier to find the highlights in long notes taken during a lecture or long presentation.
  • Document Scanner: the new Document Scanner in Notes will detect the edges of a document to automatically scan it, crop it, and remove glare and tilt to produce a cleaner image. The resulting scan can then be passed to a different app for even better optical character recognition (OCR). I am hoping this feature is just a start, and that eventually we will get built-in OCR in iOS.

A major focus with iOS 11 is improved support for iPad productivity. This includes support for Drag and Drop in an enhanced Split View, as well as a new multi-tasking view with what appears to be the equivalent of Spaces on the Mac. With Apple’s excellent track record of accessibility, I’m confident these features will have the same level of accessibility as the rest of iOS.

I can’t wait to try out iOS 11 when the public beta becomes available to start enjoying some of these features on at least one of my devices (not the one I use to get work done, of course – at least not until late in the beta cycle when everything is much more stable).

How about you – which of these iOS 11 features has you most excited? Which ones do you think you will use the most?

 

Designed for (fill in the blank)

On the occasion of Global Accessibility Awareness Day (GAAD), Apple has created a series of videos highlighting the many ways its iOS devices empower individuals with disabilities to accomplish a variety of goals, from parenting to releasing a new album for a rock band. Each of the videos ends with the tagline “Designed for” followed by the name of the person starring in the video, closing with “Designed for Everyone.” In this brief post, I want to highlight some of the ways in which this is in fact true. Beyond the more specialized features highlighted in the videos (a speech generating app, the VoiceOver screen reader, Made for iPhone hearing aids and Switch Control), there are many other Apple accessibility features that can help everyone, not just people with disabilities:

  • Invert Colors: found under Accessibility > Display Accommodations, this feature was originally intended for people with low vision who need a higher contrast display. However, the higher contrast Invert Colors provides can be helpful in a variety of other situations. One that comes to mind is trying to read on a touch screen while outdoors in bright lighting. The increased contrast provided by Invert Colors can make the text stand out more from the washed out display in that kind of scenario.
  • Zoom: this is another feature that was originally designed for people with low vision, but it can also be a great tool for teaching. You can use Zoom to not only make the content easier to read for the person “in the last row” in any kind of large space, but also to highlight important information. I often will Zoom In (see what I did there, it’s the title of one of my books) on a specific app or control while delivering technology instruction live or on a video tutorial or webinar. Another use is for hide and reveal activities, where you first zoom into the prompt, give students some “thinking time” and then slide to reveal the part of the screen with the answer.
  • Magnifier: need to read the microscopic serial number on a new device, or the expiration date on that medicine you bought years ago and are not sure is still safe to take? No problem, Magnifier (new in iOS 10) to the rescue. A triple-click of the Home button will bring up an interface familiar to anyone who has taken a photo on an iOS device. Using the full resolution of the camera, you can not only zoom in on the desired text, but also apply a color filter and even freeze the image for a better look.
  • Closed Captions: although originally developed to support the Deaf and hard of hearing communities, closed captions are probably the best example of universal design on iOS. Closed captions can also help individuals who speak English as a second language, as well as those who are learning how to read (by providing the reinforcement of hearing as well as seeing the words for true multimodal learning). They can also help make the information accessible in any kind of loud environment (a busy lobby, airport, bar or restaurant) where consuming the content has to be done without the benefit of the audio. Finally, closed captions can help when the audio quality is low due to the age of the film, or when the speaker has a thick accent. On Apple TV, there is an option to automatically rewind the video a few seconds and temporarily turn on the closed captions for the audio you just missed. Just say “what did he/she say?” into the Apple TV remote.
  • Speak Screen: this feature, found under Accessibility > Speech, is meant to help people with vision or reading difficulties, but the convenience it provides can help in any situation where looking at the screen is not possible – one good example is while driving. You can open up a news article in your favorite app that supports Speak Screen while at a stop light, then perform the special gesture (a two-finger swipe from the top of the screen) to hear that story read aloud while you drive. At the next stop light, you can perform the gesture again and in this way catch up on the news on your way to work! On the Mac, you can even save the output from the text to speech feature as an audio file. One way you could use this audio is to record instructions for any activity that requires you to perform steps in sequence – your own coach in your pocket, if you will!
  • AssistiveTouch: you don’t need to have a motor difficulty to use AssistiveTouch. Just having your device locked into a protective case can pose a problem this feature can solve. With AssistiveTouch, you can bring up onscreen options for buttons that are difficult to reach due to the design of the case or stand. With a case I use for video capture (the iOgrapher), AssistiveTouch is actually required by design: to ensure light doesn’t leak into the lens, the designers of this great case covered up the sleep/wake button. The only way to lock the iPad screen after you are done filming is to select the “lock screen” option in AssistiveTouch. Finally, AssistiveTouch can be helpful on older phones with a failing Home button.

While all of these features live in the Accessibility area of Settings, they are really “designed for everyone.” Sometimes the problem is not your own physical or cognitive limitations, but the constraints imposed by the environment or the situation in which the technology is used.

How about you? Are there any other ways you are using the accessibility features to make your life easier even if you don’t have a disability?

3 Ways the iPhone Has Disrupted My Life for the Better

The 10th anniversary of the iPhone announcement in 2007 was mentioned on a number of podcasts I listen to this past week, and this got me into a reflective mood. I can remember vividly where I was when the announcement took place. At the time I was a graduate student at the University of South Florida, and I watched the announcement on the big screen in the iTeach Lounge where I worked as a graduate assistant.

I must admit that at first I was a bit skeptical. The first version of the iPhone was pretty expensive, and it took me a year after the launch to decide that I wanted to get in on the fun. If I remember correctly, it cost me $399 for 8GB of storage when I bought my first iPhone from Cingular Wireless (remember them?). As cool as that first iPhone was, it took two important developments to make me a true believer. The first was the release of the App Store in 2008, which opened up a world of possibilities limited only by developers’ imaginations. The second was the accessibility support announced with the release of the iPhone 3GS. After my first iPhone contract with Cingular was up, I actually returned to a traditional flip phone for a little while. Once the accessibility support was announced, though, I was locked in. I have been an iPhone owner ever since.

In addition to the App Store and the built-in accessibility support, there are three other important ways in which the iPhone has disrupted my life, going beyond just having access to information and communication on the go.

A Better Set of Eyes

The iPhone couldn’t have come at a better time for me. At the time, my vision loss was getting to the point where using a traditional DSLR camera was becoming harder and harder. As I detailed in an article for the National Federation of the Blind’s Future Reflections magazine, the built-in accessibility features of the iPhone have allowed me to continue my passion for capturing the beauty in the world around me. The way I see it, the iPhone is now “a better set of eyes” for me. Most of the time, I can’t be sure that I have actually captured a decent image when I aim the phone at a scene. It is not until later, when I am reviewing the images more carefully at home, that I notice small details I didn’t even know were in the frame. You can see some examples of my photography on my Instagram page.

Instagram collage showing best nine images of 2016.

Going forward, this idea of the iPhone as my “best set of eyes” is going to be important to me beyond photography. As my vision loss progresses, I will be able to rely on the iPhone’s ever improving camera to recognize currency, capture and read aloud the text in menus, business cards and more, and tell me if my clothes are exactly the color I intended. I have no doubt that “computer vision” will continue to get better and this gives me hope for the future. Already, the VoiceOver screen reader can recognize some objects in your images and describe them aloud. This technology was developed to make searching through large image libraries more efficient, but it will be helpful to people with visual impairments like me as well.

Independence at the Touch of a Button

The second major way the iPhone has disrupted my life for the better is by giving me back my independence in a big way, through apps such as Uber and Lyft. Now, I know you can use these apps on other smartphones, so they are not exclusive to the iPhone. However, when you really think about it, no iPhone means no App Store. No App Store means there is no incentive for other companies to copy what Apple did.

Uber has replaced the many frustrations I had with public transportation and taxis (lateness, high fares) with a much more convenient and less expensive solution. Yes, I know some of my blind friends have had a number of issues with Uber (such as outright discrimination from drivers who are not comfortable with a guide dog in their vehicles), but this would probably happen with taxicabs too.

My own experience with Uber has been mostly positive, and the service allows me to easily get to doctor’s appointments, and provides me with a reliable way to get to the airport so that I can do my work of spreading the message of accessibility and inclusive design for education to a broader audience beyond my local area. Uber and Lyft, and the iPhone as the platform that made them possible, have really opened up the world to me.

Can You Hear Me Now?

One of the big trends at the Consumer Electronics Show (CES) this year was the presence of Alexa, Amazon’s voice assistant, on all kinds of consumer appliances. Alexa joins Apple’s Siri, Microsoft’s Cortana and Google’s Assistant in heralding a future where voice and speech recognition replace the mouse and the touch screen as the primary input methods for our computing devices. We are not quite there yet, but the accuracy of these services will continue to improve, and I am already seeing the potential in some of the home automation functions possible with the existing implementations (having my lights turn on automatically when I arrive home, for example).

Here, again, the iPhone deserves quite a bit of credit. The release of Siri as part of the iPhone 4S in 2011 brought the idea of speech recognition and voice control to the mainstream. Previously, its use was limited mostly to individuals with motor difficulties or niche markets like the medical and legal transcription fields. Siri helped popularize this method of voice interaction and made it more user friendly (remember when you had to sit for several minutes training speech recognition software to recognize just your voice?).

Looking Ahead

The smartphone is a mature technology, and some have questioned whether it has reached its apex and will soon give way to other options for content consumption and communication. One possibility would involve virtual, augmented or even mixed reality. Given the visual nature of AR and VR, this gives me some cause for concern, just as the iPhone did back at its release in 2007. However, just as Apple took a slab of glass and made it accessible when few people thought it could be done, with some creativity we can make AR and VR accessible too.

We have come a long way in just 10 years (sometimes I find it hard to remember that it has only been that long). In that time, Apple has shown that “inclusion promotes innovation.” Accessible touch screens, voice controlled assistants and ride sharing services are just a few of the innovations that have developed within an accessible ecosystem that started with the iPhone. Thank you, Apple, and congrats on the 10th anniversary of the iPhone. Here’s to the next 10, 20 or 30 years of innovation and inclusion.

 

 

HazeOver as a low vision aid

HazeOver is a $4.99 Mac app marketed as a distraction aid. The idea is that it dims all other windows so you can focus on the content in the foreground window (a blog post like this one, a paper you are drafting for school, etc.). The developers have prepared a short demo video that shows how the app works.

 

While that may be a good way to use this utility, for me it has become a helpful low vision aid as well. I often have a difficult time finding the mouse cursor and popup windows if they are out of my field of view (currently about 7 or 8 degrees, depending on the day). I have been using Mousepose to help with the mouse cursor problem. Even with the mouse cursor set to the largest size Mac OS allows, I still have a difficult time locating it on the screen, especially when I have a dual monitor setup. I have found that the spotlight Mousepose puts around the mouse cursor when I press a special key (I have it set to F1) makes this task much easier.

HazeOver does pretty much the same thing, but for popup windows. When one of these windows pops up on the screen, focus is assigned to it and all other windows are dimmed. In the HazeOver preferences, you can determine whether you want just one window to be highlighted or all front windows within the active app. I find the one window setting to be the most helpful with popups. You can adjust the level of dimming at any time using a slider that can be accessed by clicking the Menu Bar icon. For the best performance, HazeOver asks for access to Mac OS as an assistive device.

A free trial of HazeOver is available from the developer’s site if you want to try it out first before you buy it on the Mac App Store.

 

Read to Me in Book Creator 5

Read to Me Screen in Book Creator

Book Creator for iPad recently added a new Read to Me text to speech feature that allows learners to hear their books read aloud within the app (without having to transfer the book to iBooks first). The feature also supports accessibility in two other ways:

  • all embedded media can be played automatically. This is great for those with motor difficulties, and it also creates a better flow during reading (no need to stop and start the text to speech to hear the embedded media).
  • automatic page flips: again, this is a great feature for those who can’t perform the page flip gesture to turn the pages in a book.

These options can be configured through a Settings pane where it is possible to change the voice (you can choose any voice available for Speech in iOS), slow it down, or remove the word by word highlighting that goes along with it. For better focus, it is also possible to show one page at a time by unchecking the “Side by side pages” option under Display.
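
Book Creator has not said how Read to Me is built, but features like this typically sit on top of iOS’s built-in AVSpeechSynthesizer, which exposes the same system voices and reports each word’s range as it is spoken, which is exactly what an app needs for word-by-word highlighting. Here is a minimal Swift sketch of that idea (my own illustration, not Book Creator’s code):

```swift
import AVFoundation

// Minimal sketch: read a string aloud with a system voice and receive word ranges
// as they are spoken, which an app could use to highlight the current word.
final class ReadAloud: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US") // any installed voice
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 0.9   // slightly slower than default
        synthesizer.speak(utterance)
    }

    // Called for each word as it is spoken.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        print("Now speaking characters in range \(characterRange)")
    }
}
```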

I created a short video to show how the new feature works (with a bonus at the end: how to fix the pronunciations with the new pronunciation editor built into the Speech feature in iOS 10).

 

Commentary: Coding as the New Exclusion

Note: This blog post is a work in progress. I wanted to start this conversation as soon as possible, while the Hour of Code is going on. I will update the post throughout the week with additional information and resources, including a video showing the Tickle app working with VoiceOver.

As some of you already know, this week is the Hour of Code. From hourofcode.com:

The Hour of Code takes place each year during Computer Science Education Week. The 2016 Computer Science Education Week will be December 5-11, but you can host an Hour of Code all year round. Computer Science Education Week is held annually in recognition of the birthday of computing pioneer Admiral Grace Murray Hopper (December 9, 1906).

I have nothing against coding, and I do not mean to rain on the excitement that so many of my educator friends have about this event. Used wisely, coding can be an effective way to engage learners who have traditionally not found something to be excited about in their education. Used blindly (pardon the pun coming from me), it can be another way to further narrow the focus of the curriculum and leave some kids behind. That’s what I want to focus on in this short post.

First, I want to make clear that I truly believe “everyone can code,” or should at least have the opportunity to explore coding. The question is: does everyone want to code? If I am an artist, or a writer, or a performer, then those activities are the ones that are going to engage me with learning. I believe in Universal Design for Learning and providing options so that all learners can find the entry point to learning that works best for them. If we focus too narrowly on coding, we may actually be creating an environment where some learners (and their passions and interests) are left behind. Rather than focusing on the specific skill of coding, why not focus on design thinking and incorporate coding into project-based learning and more broadly defined approaches? I believe this is what is going to prepare learners to solve the big problems we face now and in the future. OK, I will get off my soapbox. There are people with far more expertise in computer science education who can make this argument more eloquently than I can.

What I do want to focus on today is the aspect of coding and computer education that I am finding most troubling: the lack of accessibility in the many apps and tools available to educators. For a long time, print was the biggest barrier to learning for those of us who have any kind of disability such as blindness, dyslexia or other print disabilities. We are now making progress in that area. Digital text is a much more accessible format because it can be easily transformed into a variety of formats as needed. A learner who is blind can access print content on a computer or mobile device with the help of a screen reader, software that takes the text and converts it to audio (or even Braille with the addition of a separate Braille display). Text to speech has advanced greatly, with higher quality voices and more accurate reproduction of the print content than ever before. And finally, when print is the only option available, Optical Character Recognition (OCR) has improved and made the conversion to more universal formats much easier. In some cases, this can even be done using the camera on a mobile device with apps such as Prizmo and kNFB Reader.

There is currently a push to make coding into one of the basic literacy skills our learners should possess, along with reading, writing and math. I have no problem with that. What I do have a problem with is the fact that accessibility is not even on the map with a lot of the developers in this space, with only a few exceptions. If we don’t pay attention to accessibility, coding will become the new print: one more barrier for our diverse learners to overcome.

Over the last few months, I made it my mission to try out as many apps and tools for teaching coding as I could. I even looked at a few of the toys that can be programmed through code written on an app, and bought one of those toys myself. I was greatly disappointed when I opened the apps and found a general lack of even the most basic accessibility practices – buttons that were left unlabeled, or parts of the app that were completely inaccessible with a screen reader. Now, I understand that in some cases it is difficult to make the app accessible due to its visual nature, but surely you can at least make the basic interface accessible. That would at least allow someone who needs accessibility support to participate in some of the activity. As it is, he or she can’t participate at all when key aspects of the app are not accessible.

The excuse is even more difficult to accept when there are apps and tools that are at least trying to make coding accessible. They are not perfect, but they have at least shown that progress can be made, and that’s all I am asking for: at least consider accessibility, give it a shot, and then let the community help by providing feedback that you listen to and incorporate into your updates. Here are two tools that are doing a great job of incorporating accessibility into coding:

  • Apple’s Swift Playgrounds: As usual, Apple leads the way with its Swift Playgrounds app. Swift Playgrounds is a free app that runs on iPad, as long as that iPad is running iOS 10 or later. Rather than go into all of the details of Swift Playgrounds here, I recommend you check out fellow ADE Michelle Cordy’s post on Swift Playgrounds. It is very comprehensive and links to many resources to help you get the most from Swift Playgrounds. Thank you Michelle! Want to see Swift Playgrounds being used with Switch Control? The Switch Master Christopher Hills has done a video showing just that.
  • Tickle: With Tickle, you can program LEGO, Star Wars BB-8, Arduino, Sphero and other robots, and even smart home devices, all wirelessly. Tickle uses a simple to learn block programming approach, but you can peek under the hood to see the Swift 3.0 code at any time. The Tickle team has done a great job of labeling the interface elements of the app, as shown on this video:
    https://youtu.be/IK5NMh4Tsxs

Coding and the tools used to teach it don’t have to be inaccessible. With just a little bit of work, they can be accessible to all learners. Only when that’s the case can we really claim that “everyone can code.” My call to action for you is to contact the developers of the apps you want to use and ask them about accessibility – ask them if it is something they have considered. As a next step, learn how to test your apps for basic accessibility. Basic navigation with VoiceOver is pretty simple to learn and you can then at least have a quick look at the app to make sure its interface has the proper labeling needed by your learners who rely on assistive technology.
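
For developers reading this, the fix for unlabeled buttons is often a single line of code per control. Here is a minimal UIKit sketch in Swift, using a hypothetical icon-only “run” button as an example:

```swift
import UIKit

// A hypothetical icon-only button in a coding app. Without a label, VoiceOver
// announces it as just "button"; with one, a screen reader user knows what it does.
let runButton = UIButton(type: .system)
runButton.setImage(UIImage(named: "run-icon"), for: .normal) // image name is illustrative
runButton.accessibilityLabel = "Run my code"
runButton.accessibilityHint = "Runs the program in the editor"
```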

The future of work is definitely changing. Manufacturing jobs that once promised to be a ladder for upward mobility are disappearing fast and will probably not return. The future may well be in technology for many of our learners, but we need to make a concerted effort to ensure the path to success is open to all during this period of transition. The statistics paint a bleak picture when it comes to STEM careers for individuals who have disabilities. Ensuring that the tools we use to teach coding and basic computer science concepts are accessible is a first step toward building a better future for all of our learners. Only then will we be able to say that not only “everyone can code” but also everyone can be a computer scientist, a software developer or anything else they want to be.

Amazon Echo as an Accessibility Support

Amazon describes the Echo as a hands-free, voice-controlled device that uses Alexa (Amazon’s answer to Siri, Cortana and other voice assistants) to play music, control smart home devices, provide information, read the news, set alarms, and more. I had been wanting to try the Echo since its launch, but I was just not willing to pay the $180 for the original version of this device. 

When Amazon announced a smaller version of the Echo, the Echo Dot, for $50 in the spring of this year, I saw it as a perfect opportunity to give the device a try. The smaller version includes a lower quality speaker than its larger cousin, but since I already have a number of Bluetooth speakers this is not a major issue. Other than the speaker, the device performs the same whether you are using the $180 model or the $50 one. Unfortunately, the first Echo Dot was released in limited quantities and quickly sold out before I could get my hands on one.

I had to wait until this fall, when Amazon released a second generation Echo Dot at the same $50 price point. I quickly ordered one to see how I could use it as a person with a visual impairment. I am intrigued by the use of speech as an interface, and I am excited by the prospect of a future where my interactions with my computing devices and even my home become even more seamless – with no buttons to find and press, no specific commands to memorize. We are not there yet (speech recognition still has some limitations), but devices like the Echo make me hopeful about the future.

What Is It?

The Amazon Echo

The Echo Dot is shaped like a large hockey puck. It is basically the equivalent of taking the top inch and a half or so from the cylinder-shaped original Echo (the part above the speaker). Around the top edge are the seven microphones it uses to recognize your voice commands, and a ring light that provides visual feedback when a command has been recognized. On top are the few buttons you can use:

  • On/Off button (3 o’clock): the only indication the device has turned on/off is the ring light around the top edge coming on. A tone or other audio feedback would have been helpful.
  • Volume buttons (12 o’clock and 6 o’clock): As you press these buttons, the ring light around the top of the device will let you know the volume level (and you will also get some audio feedback in the form of a tone that becomes louder or softer as you press the buttons).
  • Mute button (9 o’clock): Pressing this button will mute the Echo’s seven microphones so that it temporarily stops recognizing your commands. The ring light on top of the device will turn red to let you know it is muted. This may come in handy if you plan on watching TV for a while and don’t want the Echo to be triggered by the series of Amazon commercials featuring the trigger word.

Basically, you have to say a trigger word before the Echo will recognize a command. By default, this trigger word is “Alexa,” but you can change it by going into the Alexa app on your mobile device. I have mine set to “Echo” (to avoid my device being triggered by Amazon’s commercials), but “Amazon” is also an option.

The Alexa app is how you first set up your Echo Dot and adjust its settings. It is also how you download and install Skills (the Echo equivalent of apps). These Skills basically expand the range of commands you can use with your Echo. Overall, Amazon has done a nice job of making the Alexa app for iOS VoiceOver compatible. I had no major issues with unlabeled buttons and the like as I interacted with it.

Ask and You Should Receive (An Answer)

The most basic use of the Echo is to ask it questions it can answer by searching on the Web. This ranges from simple math (“Alexa, what is 125 times 33?”), to unit conversion (“Alexa, how many pounds are in 40 kilograms?”), to spelling and definitions (“Alexa, what is the definition of agoraphobia?”, “Alexa, how do you spell pneumonia?”).

My favorite use of this feature is to ask for updates about my favorite sports teams: “Alexa, how are the Giants doing?” or “Alexa, when do the Giants play next?” To help the Echo provide more accurate responses, I have specified my favorite teams in the Alexa app for iOS (Settings > Sports Update). In case you are wondering, I love the New York Giants and Mets!

Rather than going over everything you can ask Alexa, I will point you to Amazon’s own extensive list of Alexa commands you can use on the Echo devices.

Get The Day Off to a Good Start

I have set my Echo as my primary alarm to help me get up in the morning (“Alexa, wake me up at 7 am” or “Alexa, set an alarm for 7 am”). Once I have set an alarm with my voice, I can open the Alexa app and use it to change the alarm sound (Nimble is currently my favorite) or delete the alarm if I no longer need it (I can also do this with just my voice by saying “Alexa, cancel my alarm for 7 am”). I can just say “Alexa, snooze” if I want to get a few more minutes of sleep before I start my day.

Following my alarm, I have set up a number of Skills that provide me with a nice news summary to start the day (“Alexa, give me my Flash Briefing”). Right now, I have the following Skills set up for my Flash Briefing: CNET (for the latest tech news), NPR (for a nice summary of national and international news) and Amazon’s Weather Skill (for a nice summary of current weather conditions). Some of these Skills (CNET, NPR) play a recording of the content, while others (Amazon’s Weather) use synthesized speech (which is quite pleasant on the Echo, if I may add).

To install a Skill, you will open the hamburger menu (located on the left side of the Alexa app if you are using it on iOS), then choose Skills. You can browse or search until you find the Skills that match your needs. Tapping Your Skills in the upper right corner will show you all of your installed skills. You can tap the entry for any of the listed skills to disable (delete) it. If you just want to temporarily disable the skill, you can go to Settings > Account > Flash Briefing and use the on/off toggles to disable or enable a skill (again, you will first have to tap the hamburger menu in the Alexa app to access Settings).

Manage Your Life

In addition to alarms, the Echo supports timers, which can be helpful for cooking (we don’t want that casserole to be overcooked, do we?). To set a timer, just say “Alexa, set a timer for 10 minutes.”

Timers can also be helpful for individuals who have executive functioning challenges. Executive functioning is the ability to self-regulate, which includes the ability to stay on task and manage time. For someone with this kind of challenge, you can set multiple timers with your Echo. For example, you can set a timer for an activity lasting one hour (“Alexa, set a timer for one hour”), then set a second timer for each separate step that needs to be completed to accomplish the assigned task during that hour. For example, I can say “Alexa, set a second timer for 25 minutes” to have someone read for 25 minutes as part of a larger one hour block of study time. When that 25 minute timer ends, I can have the person take a five minute break and then repeat the steps to set up another timer for the next 25 minutes of work.

You can also manage your to do list with Alexa: just say “Alexa, add (name of to do item) to my to do list” or “Alexa, remind me to (name of task).” You can review your to do list with Alexa (“Alexa, what’s on my to do list?”) but you can’t remove or edit to do items with your voice – for this you have to go into the Alexa app on your mobile device. Personally, I prefer to use other tools to manage my to do list (Reminders for iOS, Google Keep) but the Echo to do list feature can be helpful for to do lists that are more relevant for the home (cleaning supplies, groceries, etc.).

In the Alexa app, you can also set up any calendar in your Google account as the destination where any events created with the Echo will be added. For example, I can say “Alexa, add (event name) to my calendar,” respond to a few prompts, and that event will be created in the Google calendar I have specified. I can then check what I have scheduled for a given day by saying “Alexa, what’s on my calendar for (today, tomorrow, Friday, etc.).” Again, the ability to stay organized and follow up on appointments and due dates is something most of us take for granted but is a skill that is not as well developed in some people. Any kind of environmental support for these skills, such as what the Echo can provide, is helpful.

A New Way to Read

The Echo is a great way to listen to your books as they are read aloud with either human narration or synthesized speech. This can be a great way to take advantage of the Echo in a classroom setting. Since Amazon owns Audible, you can access any audiobook in your Audible account through the Echo. Just say “Alexa, play (book title) on Audible” and the Echo will fetch the book and start reading it. You can then use the commands “Alexa, stop” and “Alexa, resume my book” to control playback. You can also navigate the book’s chapters by saying “Alexa, next (or previous) chapter.” Finally, you can set a sleep timer for the current book with the commands “Alexa, set a sleep timer for (x) minutes” or “Alexa, stop playing in (x) minutes.”

Many Kindle books can also be read aloud. To see a list of the books you have purchased that support reading on the Echo, visit Music & Books in the Alexa app, then choose Kindle Books. To start listening to a book, just say “Alexa, read (title of the book).” The expected playback commands (“Alexa, stop,” “Alexa, resume my Kindle book” and so on) are supported for books that can be read aloud.

Let There Be Light

The Echo can be a great way to control lights and other appliances using just your voice. This can be especially helpful for those who have motor difficulties that make interacting with these features of the home a challenge. As a person with a visual impairment, I use my smart lights to ensure my home is well lit when I get home. I set this up as a “Coming Home” routine in the app for my Hue lights. Using geofencing, the app determines when I am close to home and automatically turns on the lights and sets them to a specified scene (a preset brightness and color). No more fumbling to find my way around a dark home when I arrive! Similarly, I can set up a “Leaving Home” routine to make sure the lights automatically turn off if I leave them on by mistake. How-to Geek has a nice article detailing how to set up and configure Hue lights.

By installing the Hue Skill, you can get basic voice control of your lights through the Echo. This Skill gets information about the rooms and scenes (presets for sets of lights with predetermined brightness levels and colors) you have set up from the Hue app installed on your mobile device. The first step in getting your Echo to control your lights, then, is to get all of your Hue rooms and scenes recognized. You do this by going to the Smart Home section in the Alexa app, then scrolling down to Your Devices and selecting “Discover Devices.” You may have to tap the circular button on your Hue bridge to get everything recognized. If everything is recognized correctly, you should see every scene and room you have set up in the Hue app listed as an individual device in the Alexa app. Although I only have three Hue lights (two white and one color), I have 30 devices recognized by my Alexa app (one device for each individual light, room and scene).

The next step is to set up your Groups in the Alexa app. This is done by choosing “Create group” in the Smart Home section. To give you an idea of my setup, I have the following groups set up: All Lights, Living Room and Office. For each group, I have then enabled the lights, scenes and rooms I want it to include. For example, for my Living Room group I have the following items enabled: Living Room Color and Bookshelf (the names I assigned to the two individual lights I own), Living Room (the room containing the two lights together), and the different Scenes (presets) I have created. These presets are assigned to the room and are currently “Bright in Living Room,” “Dimmed in Living Room” and my favorite “Florida Sunset in Living Room.” For this last one, I was able to choose a nice photo of a sunset I took at the beach and the Hue app automatically picked sunset colors for the scene!

With my current configuration, I can use the following commands to control my lights:

  • “Alexa, turn on (or off) all the lights”: As expected, this turns on/off all of my connected lights using the All Lights group I set up in the Alexa app, which includes a single device called All Hue Lights.
  • “Alexa, turn on the Living Room (or Office) lights”: this command turns all of the lights assigned to a specific room on or off at once.
  • “Alexa, turn on (or off) the Bookshelf light”: this command turns on or off the individual light called Bookshelf, a single soft white bulb I have set up near my bookshelf.
  • “Alexa, set the Bookshelf light to 50%” or “Alexa, set the Living Room (or Office) lights to 50%”: I can control the light level of any individual light or room.
  • “Alexa, turn on Florida Sunset (or any of my named scenes)”: this will turn on my Florida Sunset scene which configures the main living room light to a nice red/orange shade selected from a photo in the Hue app.
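
If you are curious about what these voice commands ultimately drive, the Hue bridge also exposes a simple local HTTP API, and a quick script is a nice way to see it in action. Here is a rough Swift sketch that turns one light on at half brightness; the bridge address, API username and light number are placeholders you would replace with values from your own bridge setup.

```swift
import Foundation

// Placeholders: substitute your bridge's IP address, an API username created on
// the bridge, and the numeric ID of the light you want to control.
let bridgeIP = "192.168.1.2"
let apiUsername = "your-hue-api-username"
let lightID = 1

let url = URL(string: "http://\(bridgeIP)/api/\(apiUsername)/lights/\(lightID)/state")!
var request = URLRequest(url: url)
request.httpMethod = "PUT"
request.httpBody = try? JSONSerialization.data(withJSONObject: ["on": true, "bri": 127])

// The bridge replies with a JSON array describing which attributes were changed.
let task = URLSession.shared.dataTask(with: request) { data, _, _ in
    if let data = data, let reply = String(data: data, encoding: .utf8) {
        print(reply)
    }
}
task.resume()

// Keep a command-line sketch alive long enough for the request to complete.
RunLoop.main.run(until: Date().addingTimeInterval(3))
```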

The Echo is not the only way I can control my lights. Because I have a version of the Hue lights that is HomeKit compatible, I can also use Siri on my iOS devices. In fact, I find the voice control provided by Siri to be not only more intuitive and easier to set up, but also quicker to respond. If you have an old iPhone 6s just lying around, you could set it up with “Hey Siri” so that it works pretty much like an Echo as far as light control goes. Another thing I like about the Siri control is that I can use my voice to change the color of my lights by saying “Set the (light name) to blue” (or any of the basic colors).

Finally, I have a Wemo switch I am using to control my Christmas tree lights over the holidays. I have set up this Wemo switch with a rule to automatically turn on the Christmas tree every day at 5:30 pm (around the time when sunset takes place for us in Florida) and then turn it off at 11 pm. I can also just say “Alexa, turn the Christmas tree on (or off)” at any time for more manual control. Unfortunately, the Wemo does not work with Siri like the Hue lights do; it is limited to the Echo for voice control.

There is a lot more you can do with your lights with the help of the online automation service IFTTT, which has an entire channel dedicated to Hue lights. For example, you can say “Alexa, trigger party time” to have your lights set to a color loop. I am still looking for an IFTTT trigger that turns my lights blue each time the Giants win.

Are You Entertained?

Ok, so you are not impressed by voice controlled lights? Well, there is more the Echo can do. By far, the most common way I use this device is as a music player. What can I say, whether studying or working out, music is a big part of my life. I have my Echo paired with a nice Bluetooth speaker for better sound than what the built-in speaker can produce. If Bluetooth is not reliable enough for you, you can directly connect the Echo to any speaker that accepts the included 3.5 mm audio cable.

Echo supports a number of music services, including Prime Music (included with Amazon Prime), Spotify (my favorite), Pandora, iHeartRadio and TuneIn. The following commands are supported for playback:

  • “Alexa, play (playlist name) on Spotify”: play songs from any playlist you have set up on Spotify. My favorite is the Discover Weekly playlist released each Monday. This is a collection of songs curated by the Spotify team and a great way to discover new music.
  • “Alexa, play (radio station name) on Pandora (or TuneIn or iHeartRadio)”: if you have any of these services set up in the Alexa app, the Echo will start playing the selected station.
  • “Alexa, like this song (or thumbs up/down)”: assign a rating to a song playing on Pandora or iHeartRadio.
  • “Alexa, next”: skip to the next song. Saying “Alexa, previous” will work as expected (at least on Spotify).
  • “Alexa, stop” or “Alexa, shut up”: stop music playback. Saying “Alexa, resume (or play)” will get the music going again.
  • “Alexa, what’s playing?”: get the name of the song and artist currently playing.
  • “Alexa, set the volume to (a number between 1 and 10)”: control the volume during playback.

Update: In the first version of this post, I forgot to mention podcasts. The Echo Dot supports podcasts through the TuneIn service, which does not require an account. The Echo could be an excellent podcast receiver, but it is limited by the fact that podcast discovery is not that great on TuneIn. The first thing you need to do is check whether your favorite podcast is available on TuneIn.

You do this through the Alexa app, by going into Music and Books and selecting TuneIn, then Podcasts. If your podcast is available on TuneIn, make a note of the name it is listed under. You can then say “Alexa, play the (name of podcast) podcast on TuneIn” and you should be able to listen to the most recent episode, provided you got the name right. This was hit or miss in my experience. For podcasts with straightforward names (Radiolab, The Vergecast) I was able to get my Echo to play the latest episode with no problems, but for others it got confused and instead played a song that closely matched my request.

Spotify also supports podcasts now, but I was not able to access them through my Echo. I hope Spotify adds better support for this type of content in a future update. I really enjoy podcasts because they allow me to access content without having to look at a screen, which is tiring to my eyes.

While the Echo does not control playback on a TV (it is limited to music), it can at least help with information about the program you are watching. For example, you can ask “Alexa, who plays (character name) in (movie or TV show)?” or “Alexa, who plays in (movie or TV show)?” to get a full cast list.

Out and About

While there has been some valid criticism of ride sharing services for refusing rides to people who use guide dogs, these services are an improvement over the taxi services many of us have had to rely on due to our disabilities. This is the case with me. My visual impairment prevents me from safely driving a car, so I have to rely on other people to drive me or use public transportation (which is not very reliable where I live). Uber and Lyft have been a Godsend for me: I use them to get to the airport and to any meetings or appointments. Most of the time I will request a ride through an iPhone app, but with the Echo I can do it with a simple command as well: “Alexa, ask Uber to request a ride” or “Alexa, ask Lyft for a ride.”

Uber and Lyft are both Skills you have to install on your Echo. Once you have them installed, you will also have to set up a default pickup location the first time you launch the skill. After requesting a ride, you will be prompted a couple of times to make sure you really want to order a ride. Once your ride is on its way, you can say “Alexa, ask Uber (or Lyft) where’s my ride” to get a status.

Before you go out, why not make sure you are dressed for the weather – whether that be snow in more northern parts of the country or rainstorms in the part of the country where I live (Florida). You can just ask “Alexa, what’s the weather like?” or “Alexa, is it going to rain today?” or even “Alexa, will I need an umbrella today?” You can get an idea of the traffic to your destination by saying “Alexa, what’s the traffic like?” This requires you to enter your home address and a destination you visit frequently in the Alexa app (this can be your work address or, in my case, my local airport).

There is a lot more you can do with Echo. I have just scratched the surface with some of the things I myself have been able to try out. For example, I would love to install a Nest thermostat so that I can use my voice to control the temperature (“Alexa, set the temperature to 75 degrees.” – hey, I am from the Caribbean, you know). Other smart home applications include controlling locks and even your garage door. I am not quite ready to trust my home security to my Echo, but it’s nice to know these options exist for those who need them as a way to make their homes more universally designed and capable of meeting their accessibility needs.

If you are a person with a disability (or even if you are not), how are you using your Amazon Echo? If you don’t have one, is this something you are considering?

Bonus: Can’t speak the commands needed to interact with the Echo? No problem. Speech generating devices to the rescue. I have been using the Proloquo4Text app on my iOS device to send commands to my Echo with no problems. I created an Echo folder in Proloquo4Text that has the commands I would use most frequently. Here is a quick demo: