Blog

Three Accessibility Tests You Can Do During COVID-19

Are you overwhelmed with the tsunami of free tools unleashed on educators this past week? If so, you are not alone. I consider myself a techie, and I too feel overwhelmed. I’m glad to see some of my favorite tools provide free access for educators during this difficult time, but not everything that is free is also good, especially when it comes to accessibility.

At a time of crisis, I can’t expect you to add “accessibility professional” to the many hats you are already wearing. The good news is that you don’t have to be an expert to perform some basic vetting of tools to make sure only the better ones reach our learners. Also, we want to create some good habits that will not only address this immediate crisis but result in better learning opportunities for everyone going forward.

I encourage you to start with the tools that are familiar to most people in your setting. Typically, that will be a selection of products from the “big three” in consumer technology (Apple, Google and Microsoft) and popular learning management platforms such as Google Classroom, Canvas or Blackboard. These vendors, by virtue of their market reach and need to sell products to the federal government, have teams dedicated to making sure their products meet accessibility requirements. To assist with your decision making, I have compiled a list of the accessibility support pages for the major ed tech vendors.

For other products, there are three simple tests you can perform to decide if a tool or online service meets the lowest bar for accessibility:

  • Can the text be selected? If the content can’t be selected, there is a good chance it will not be available to the text-to-speech technology many students with learning disabilities (or visual impairments) use to help them perceive and process information. This is often the case with PDF documents that have been scanned incorrectly, with each page consisting of a single image rather than actual text (a quick way to check for this is sketched after this list).
  • Can the interface and content be navigated with just the keyboard? Not all learners will be using a mouse or touch screen to interact with their devices. Some learners may only be able to use the keyboard to navigate due to motor or vision challenges, or they may be using assistive technology that mimics the keyboard (such as switch access technology). The No Mouse Challenge is a simple way to determine if a website or online tool has been designed to work well with the keyboard, and it does not take a lot of time or knowledge of accessibility to perform. Press the Tab key a few times and as you do, ask yourself – do I know where I am? You should be able to track the keyboard focus through some kind of highlighting (usually a thick border around the item that has keyboard focus). A rough way to automate part of this check also appears below.
  • Is it captioned? Are the captions of good quality? If there is any video content, you should play through it to see if there are closed captions. Just remember that not all captions are created equal. The automatic captions created by services like YouTube don’t count. While they continue to improve, at this point they are not accurate enough to provide equivalent access to someone who relies on captions for understanding. I recommend playing the first minute of the video, then picking a random point somewhere in the middle. The captions should include appropriate punctuation and identify changes in speakers if multiple people are shown.
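
For the first test, here is a rough little script of my own (not an official tool) that flags PDF pages with no extractable text. It assumes Python 3 with the pypdf package installed, and the file name is just a placeholder.

```python
# Sketch: flag PDF pages that contain no extractable text (likely scanned images).
# Assumes: pip install pypdf; "handout.pdf" is a hypothetical file name.
from pypdf import PdfReader

reader = PdfReader("handout.pdf")
image_only_pages = 0
for number, page in enumerate(reader.pages, start=1):
    text = (page.extract_text() or "").strip()
    if not text:
        image_only_pages += 1
        print(f"Page {number}: no selectable text found (probably a scanned image)")

if image_only_pages == 0:
    print("Every page has selectable text - it clears this first hurdle.")
else:
    print(f"{image_only_pages} page(s) appear to be image-only; text to speech cannot read them.")
```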

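If you want to go a step beyond pressing Tab yourself, here is a sketch (again my own, with a made-up URL) of how part of the No Mouse Challenge could be automated with Python and Selenium: it presses Tab repeatedly and prints which element ends up with keyboard focus. It will not tell you whether the focus is visibly highlighted – that part still needs your eyes – but a page where focus never reaches the main controls fails the test right away.

```python
# Sketch: walk keyboard focus through a page and report where it lands.
# Assumes: pip install selenium, a working Chrome/chromedriver setup,
# and a hypothetical URL standing in for the tool you are vetting.
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://example.com/lesson")  # hypothetical page to vet

for step in range(1, 11):
    ActionChains(driver).send_keys(Keys.TAB).perform()  # simulate pressing Tab
    focused = driver.switch_to.active_element           # element that now has keyboard focus
    label = focused.get_attribute("aria-label") or focused.text or focused.get_attribute("title")
    print(f"{step:>2}. <{focused.tag_name}> {label!r}")

driver.quit()
```
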
If a tool or service passes these three simple tests, it doesn’t mean that it is accessible; it just means that it clears the first hurdle of accessibility. At that point, you should be left with a smaller group of tools that you can examine in more detail using a rubric or a set of questions like the ones found in Is It Accessible? Questions to Ask, from the AEM Center.

Kudos to all of you who are fighting for kids and their right to access their education throughout this roller coaster of a time. 

I hope you get to recharge a little this weekend – we’ll be right back out there next week doing what we do best. See you on the front lines! 

(Note: This post solely reflects my own opinion as an accessibility professional and user of assistive technology – it does not reflect the opinions of my employer.)

Accessible Content Creation Tutorials for COVID-19

My last post focused on the accessibility features learners can use to access content while they are at home due to school closures related to the COVID-19 (Coronavirus) crisis. This post covers the flip side of the accessibility coin: accessible tools are only as good as the content we provide to them. Here I’ll share a selection of tutorials I have developed over the years that focus on basic accessibility best practices, starting with SLIDE.

Slide Into Accessibility

I developed this series of short videos for a presentation I delivered at the ISTE conference. It was accompanied by a blog post that explains each of the principles in more detail: Styles, Links, Images, Design and Empathy.

 

Related:

Google Docs Accessibility

This series of four videos covers the basic steps of making a Google Doc more accessible.

 

Book Creator

This video shows how to add image descriptions in Book Creator.

 

Related:

And to end with a little inspiration (but not inspiration porn), I leave you with I’m More Powerful Than You Think as a reminder of our why. These learners are why we will continue to work hard over the next few weeks to develop stronger, more inclusive educational systems.

Onward in health.

COVID-19 Resources on this Website

I hope everyone is staying safe and healthy during this difficult situation we are facing due to the outbreak of COVID-19 around the world. I have put together a “playlist” of some of the tutorials and resources I have shared on this website over the years, with the goal of providing some tools for ensuring all students can access learning materials while schools are closed. My goal is to keep things simple and not overwhelm you with too many tools (apps and extensions) – very rarely does one of my tutorials go over the 5 minute mark. You have enough on your plate as a parent and educator (or in your other roles as well). So, I’m going to focus on the technology you probably already have at home.

Apple (iPad and iPhone)

I have created a YouTube playlist with tutorials for each of the basic accessibility features included on iOS devices (like the iPad and iPhone). Some of the tutorials are older, but the features work in a similar way on later versions of iOS. I have also added some recent videos highlighting new features in iOS 13 (Zoom, VoiceOver and Speak on Touch).

 

You should also check out:

Chromebook

You should also check out:

Amazon Echo

The Amazon Echo devices can be used as book readers (with Audible and Kindle titles), and they can also support learners (and remote workers) with a variety of tasks (setting timers to take breaks while working at home, controlling lights and more).

Windows Devices

Stay tuned. I’ll be doing a video later this week on how I use Windows 10 on my laptop as a person with low vision.

And remember: Take care of yourself by drinking enough water, eating well and taking frequent breaks to practice deep breathing and meditation. Just like you are told on a plane – put your own mask on first. You can’t help anyone if you yourself are sick and need assistance. We will get through this difficult situation by being there for each other.

It is heartwarming how the assistive tech community is already coming together to address questions and offer assistance. If you have time, check out the AT Town Hall organized by Mike Marotta, with a number of brilliant people that I count as my colleagues and friends.

iOS 13 Features for Supporting Low Vision

With the latest updates for its mobile devices, Apple continues to refine the experience for those of us who have low vision and use features such as Zoom, Spoken Content (formerly Speech) and VoiceOver. There are now actually two distinct operating systems, each optimized for the device it runs on. iOS 13 continues to be the operating system for the iPhone, while the iPad gets a new iPadOS optimized for multi-tasking on the bigger screen. Both operating systems support a new Dark Mode that provides improved contrast for those of us who need it.

Apple has also changed where the accessibility features are located in Settings. They now have their own pane at the root level. I like this change because it raises the profile of the accessibility features and makes them easier to find, but I do know that for some people it will take a while to get used to this change based on muscle memory gained over many years of using iOS. That’s why I have started each video in this post with a reminder of this change. Let’s take a look at what else is new in iOS 13 (and iPadOS).

Zoom Zoom

This first video starts with a new feature that is only available on the iPad running iPadOS. It’s called Pinned Zoom and can be found under the Zoom Region setting (along with the existing Window and Full Screen modes). Basically, this splits the iPad’s larger screen into two areas: one area shows a 100% view, while the other area shows a zoomed-in view. The location of the zoomed-in area can be changed, and this area can also be resized using a divider. My favorite trick is to tap the handle between the two areas to show a highlight in the 100% view. Moving this highlight with one finger allows me to easily pan in the zoomed-in area.

Those who use an external keyboard will love the fact that Zoom on iOS now supports many of the same keyboard shortcuts as the Mac (for example, Command + Option + 8 to zoom in and out). Each keyboard shortcut can be turned on independently (I use them all) by going to Zoom, Keyboard Shortcuts.

Finally, the Zoom controller is more customizable – you can change its color and set up some actions to quickly activate specific Zoom features. I can’t remember what the defaults were, but I now have mine set up as follows:

  • a single tap shows the options menu
  • a double-tap zooms in and out
  • a triple-tap activates the new Speak on Touch feature (see the next section)

Let Your Fingers Do The Talking

As I hinted in the last section, there is a new feature for text to speech (now called Spoken Content instead of Speech) on iOS 13 and iPadOS. It is called Speak on Touch, and it lets you drag your finger over any part of the screen to hear the content under your finger read aloud. You can think of it as a more flexible way to use Speak Screen.

You will find Speak on Touch as a hand icon in the Speech Controller, which you can now have on display all the time. The rest of the Speech Controller looks a little different as well – instead of the turtle and the hare icons for adjusting the speaking rate, there is now a single button you can tap to cycle through the different speeds (1.5X, 2X and 1/2 speed).

As with Zoom, the Speech Controller also has a couple of actions that can be activated with a long press or a double tap. I have mine set to activate Speak on Touch when I double-tap the Speech Controller, while a long press activates Speak Screen and reads everything on the screen (a great option for anyone who is unable to perform the Speak Screen gesture – a two-finger drag from the top of the screen).

VoiceOver For One

You can probably sense a theme by now – VoiceOver is also more customizable in iOS 13 and iPadOS, starting with the fact that you can customize many of the gestures and keyboard shortcuts. Just go to VoiceOver, Commands. And don’t worry: if you go crazy and need to start over, the option to restore the default configuration is available.

VoiceOver also has a new Activities option that lets you set up custom VoiceOver settings for specific apps and contexts. For example, I have set up VoiceOver to have a faster speaking rate and also use a new, more natural sounding Siri female voice when I am reading a website in Safari. As soon as I exit Safari, VoiceOver automatically switches back to my default Alex voice that I have been using for years.

The final new feature in VoiceOver is one that developers should find very helpful. There is now a caption panel that appears at the bottom of the screen and shows a textual representation of what VoiceOver is reading aloud. My only wish is that Apple soon provides some customization for the caption panel, as the text is currently too small for me.

There are other tweaks to VoiceOver in iOS 13 and iPadOS, but rather than discussing them here, I would recommend checking out AppleVis’ excellent rundown of iOS 13 improvements for individuals who are blind or deaf-blind. Hopefully between AppleVis and this post we have covered everything you need to know about the new vision supports in iOS 13/iPadOS.

All The Rest

The Accessibility pane has been reorganized and streamlined. Many of the low vision options that used to be at the root level of the pane (such as Bold Text and Larger Text) are now organized under a Display and Text category, and there is a new Differentiate Without Color option that should be helpful for those who can’t distinguish certain colors when they are used as the only visual cue.

The Reduce Motion setting (which removes animations that could cause problems for those with motion sensitivity) is now found in a new Motion category. There you will also find settings for replacing motion with a more subtle cross-fade, for turning off the automatic playback of full screen effects in Messages and for disabling videos that play automatically (such as App Store preview videos).

Both iOS and iPadOS now support the use of a mouse with a visible pointer. This feature is found under AssistiveTouch. After connecting a Bluetooth or USB mouse, go to Accessibility, Touch, AssistiveTouch and choose Pointer Style. You can adjust both the size and color of the mouse pointer. If you also use Zoom with a connected pointing device, there is a setting for controlling how panning takes place: I prefer to have the panning take place when I move the pointer to the edge of the window.

Did I miss anything? Have any questions? Hit me up on Twitter: @eyeonaxs.

Yes, I Am Disabled (and Proud)!

Note: This post is in response to a recent Twitter post by Katie Novak.  For context, make sure to read Katie’s original tweet and her blog post as well as the many excellent responses that have followed.

For many people, the title of this post would not make sense. They would have a difficult time understanding how I could be proud of something (my disability) that is often seen through such a negative lens in our society. When we think of disability, we often focus on the challenges it presents to people. Rarely do we consider its positive aspects. Yes, there are challenges, and in no way am I implying that we don’t live in an ableist society where disabled people still face a range of obstacles in their efforts to secure civil rights and the opportunity to live a good life. But at the same time, disability as a natural part of the human experience changes our lives in a number of positive ways. For me, the most positive aspect of having a disability is the community, the many wonderful people I’ve met who continue the fight and speak up for themselves with dignity and grace in ways big and small. I am grateful  for those who have come before me and made civil rights for disabled people possible. Without them, I would not be here today.

In school we are taught about the civil rights movement as if it were ancient history – something that took place in the now distant past, a time when people of color and women fought for their rights and secured important victories that have shaped our society to this day. Rarely do we think about the civil rights movement for disabled people, or what some have called the “civil rights struggle of the 21st century.” This is not ancient history – it’s going on today as we speak, and in many different ways: from our efforts to get access to an education, to those aimed at securing good jobs, accessible transportation options and the ability to participate in the digital world on an equal basis.

It would seem like discussing language in that context would be focusing on the wrong thing. Why focus on language (disabled vs. person with a disability) when there are bigger problems that deserve our attention? Well, language is the way in which we define who is in and who is out. It explains why of all places, I would feel the most excluded at an event meant to celebrate inclusive education. When a speaker says something like “I don’t see disability” (yes, this has actually happened/happens) it sends people like me a powerful message – I don’t see you, you are invisible to me. Implying that I don’t have a disability is essentially saying that the last 20 years of my life did not take place. My disability is not just what I can do, it is who I am in the world. I see the failure to fully recognize people’s identities as a blind spot within the Universal Design for Learning (UDL) movement (if you’ll pardon the pun coming from a person with a visual impairment). That is why the conversation Katie’s tweet and blog post started is so important.

Having grown out of the universal design movement in architecture, UDL approaches disability from a social model perspective. Sure, this is an improvement over the medical model that dominated the conversation on disability for most of our history. That model is the one championed by the medical profession and by charities. Disabled people were/are seen as broken beings who need to be made whole through surgery, medication and other interventions. In order to raise the funds needed to research “cures” for disabilities such as autism, charities grounded in the medical model rely on skewed portrayals of disabled people. You are either a helpless victim who needs saving or a “superman/superwoman” who can climb Mount Everest, run an ultra marathon or perform some other exceptional achievement. There is not much of an in-between. Mostly ignored are the majority of people with a given condition who just want to be able to do the same things as everyone else: get up in the morning, have a good cup of coffee, go to a job they love that pays them a fair wage, get home to take care of their kids, and sit down to watch a good movie.

The social model of disability, in contrast, does not focus on “fixing” the person, but rather on addressing the barriers that exist in the environment and keep people from accomplishing their goals, whether it is boarding a bus or learning how to code. Thanks to the social model, we have accessible entrances to buildings that benefit not just disabled people, but those making deliveries, parents pushing strollers and more. Angela Blackwell calls that the “curb cut effect.” When we design for the needs of one, we actually find solutions for the many. Or as Microsoft puts it in their Inclusive Design Toolkit, when we “solve for one, we extend to many.” I am incredibly grateful for the many innovations the social model of disability has brought about that have made my life easier: from the technology I now use to access the Web and compose this blog post to the UDL principles that have improved educational opportunity for me and many other learners.

The success of the social model of disability is actually why disabled folks like me are now in a position to critique it and point out where it falls short. One way in which it does is by failing to fully account for people’s lived experiences. Even if you were to create an accessibility utopia where few barriers existed, that would not erase the last twenty or so years of the life that I have lived as a disabled person. And neither would I want that to happen, as my life would not be as rich and as meaningful without the experiences I have had (both good and bad) from the time I was first diagnosed with my visual impairment to where I am today. And yet that is what often happens within the small community of UDL. I notice that there is a discomfort around the use of the word “disability” (I joke that it is the four letter word of UDL). I think in part that tension arises from the need to “sell” UDL to the broader education community, in the process forgetting that the movement originated out of a need to do what was right and just (including learners in the margins), not what was popular.

I think we can find a balance where we make UDL welcoming to both the general public and the disability community. The change has to start with the language we use, and I have a few suggestions:

  • Be comfortable with including the terms “disability” and “disabled” in our conversations. Doing so is incredibly validating to those of us who have a lived experience with disability. Whenever you avoid something, you are actually saying something about it. You are saying “this is so bad I don’t even want to talk about it.” Silence is not neutral – it always “says” something. We can talk about learner variability and disability. The two are not mutually exclusive.
  • As a movement that is built on the importance of providing choice and flexibility, we should be comfortable using both person-first and identity-first language as appropriate. Person-first language (e.g. person with a disability, person with autism) is well intentioned in that it seeks to emphasize the worth of disabled people by putting the person in front of the disability (thus the term person-first language). But we don’t need other people to give us self-worth. We can gain our own self-worth if given the opportunity to live life on our own terms.
  • We need to be clear with our language to avoid misunderstandings. What I propose is that we start by sharing our own labels in relation to disability. There is precedent for this: you have probably seen speakers who share their pronouns with the audience in order to be clear about their gender identity (mine are him/his). We can do the same thing for disability if at the start of our presentations or professional development sessions we share what we call ourselves: person with a disability, disabled person or ally are just a few of the options. We should then explain why we have chosen the labels we want to use for ourselves, and give other people permission to use theirs. I know this suggestion will be controversial. There is a resistance to labels within the UDL movement (you’ve probably heard expressions such as “labels only belong on clothes”). However, I think labels need to be considered in context. It always depends on who is doing the labeling and why. If an outsider is using the label to demean or lessen the worth of a person, then yes, that is problematic. But if insiders (members of a community) choose to use the label for themselves, then it can be empowering and a way to build community.

I personally consider myself a disabled person. While disability is not all of who I am, it is a big part of my identity. My disability is not something separate from who I am (which the term “person with a disability” implies). It shapes my world from the moment I open my eyes in the morning to the time I close them at night. It is not like a suitcase that I get to put down whenever I want to.

There are times when I can “pass.” When I am sitting in front of the computer or at a meeting, most people can’t tell that I have a disability. After all, “I don’t look blind.” My eyes don’t look or act any differently than most sighted people’s. But the moment it is time to go and I have to take out my cane, the jig is up. I don’t always have the option of hiding my disability, and neither do I want to do that. It took me some time to come to terms with my disability and to get comfortable in my own skin. I know that is a journey without a fixed timetable. As with most experiences in life, we all process the events in our lives differently, and besides, it is not a race. That is why I take no offense if someone says he or she is a “person with a disability.” That’s fine with me because that’s where they are in their journey, and I was once at that point too.

When I was first diagnosed with my visual impairment, I spent countless hours researching my condition and reading research articles (which by the way were way beyond my understanding as someone who does not have a medical background). It was my hope at that time that there would be some kind of treatment or innovative surgery I could get that would “fix” everything and return me to my former life as a sighted person. I followed the medical model because that is how I was first exposed to disability, through my medical diagnosis. At that point I saw my disability as the taking of something – the taking of my independence (as I could no longer drive). I am quite open in sharing that it was not a great time in my life.

It was only when I started to embrace my disability, when I started to see that I could work with it, instead of around it, that I started to thrive in my life, both professionally and personally. A key point was learning about the work of John Swain and Sally French and their Affirmative Model of Disability, which completely changed my perspective and the way I saw myself in relation to my disability. It was no longer something I had to overcome, but rather something that I drew strength from, even with the occasional frustrations that come from living in a world that doesn’t always accommodate my needs.

I don’t have all the answers, and this post is just a reflection of my own always evolving way of thinking about disability and what it means to be disabled. I cannot speak for someone who has another kind of disability, or even a person with a visual impairment for that matter. There is great variation even within my own community. Some of us need large print, but in my case large print is actually a barrier since it places more of the text outside my limited peripheral vision.

I welcome your feedback on this post, both good and bad. By engaging in difficult conversations, we continue to grow and evolve. I am so grateful for everyone who has shared their experiences on Katie’s blog, especially Joni Degner and Eric Moore. You should really read their posts. By sharing our experiences we give others permission to do the same, and in doing so we build community.

In closing, I want to leave you with a Maya Angelou quote my colleague Mindy Johnson shared with me that now informs how I approach all of my work: “Do the best you can until you know better. Then when you know better, do better.” Let’s commit to always doing better and becoming more inclusive. And when we get it wrong (as we all will at some point), let’s commit to continuing these kinds of courageous conversations. We will be better for it.

Entre/Between

I presented this poem at the 2018 ISTE Conference during my TED Ed talk:

Neither here nor there
Neither blind nor sighted
I see you, but not all of you
You see me, but not all of me

Ni aquí, ni allá (Neither here nor there)
The islands, the city, the country
Español, Spanglish, English
Y hoy, ¿quién soy? (And today, who am I?)
Hay, ¿quién sabe? (Well, who knows?)

So I learn to live in between
In and out of the shadows
And as the light turns to dark
And the darkness comes to life
I’ve learned to just dance
Just dance in those shadows

10+ Accessibility Features Coming with iOS 11

 

Image: slide from Apple’s keynote at WWDC showing a long list of upcoming iOS 11 features that were not discussed on stage.

At its recent Worldwide Developers Conference (WWDC), Apple gave developers and the public a preview of the next version of iOS, its operating system for mobile devices such as the iPad, iPhone and iPod Touch. This post will not provide a full overview of all the changes coming to iOS 11, but will instead focus on a few key ones that will have the most impact for people like me who rely on the built-in accessibility of these devices.

A public beta will be coming later this summer, and I can’t wait to get my hands on it to start testing some of these new features. For now, this overview is based on everything I have read on the Web (shout out to AppleVis for a great overview of the new features for those with visual impairments), what I saw in the videos available through the WWDC app after each day of the conference, and updates from people I follow on social media who were at WWDC (you should definitely follow Steven Aquino, who provided excellent coverage of all things iOS accessibility from WWDC).

Without further delay, here are ten new or enhanced iOS accessibility features coming soon to an iPad or iPhone near you:

  1. Smart Invert Colors: In iOS 11, Invert Colors will no longer be an all-or-nothing affair. The new version of Invert Colors will actually leave images and video alone with a new Smart Invert option. This fixes a problem I have always had with Invert Colors. Sometimes there is text in a graphic or video that is essential for understanding, but with Invert Colors as it currently exists it can be difficult to read this text. This will no longer be the case with iOS 11 and its new version of Invert Colors.
  2. Enhanced Dynamic Type: Dynamic Type has been enhanced to reduce clipping and overlapping at larger text sizes, and Dynamic Type will work in more of the UI for apps that support it. In some areas of the UI where text is not resized dynamically, such as tab bars, a tap and hold on the selected control will show it at a larger size in the middle of the screen.
  3. VoiceOver descriptions for images: VoiceOver will be able to detect text that’s embedded in an image, even if the image lacks alternative text (or as Apple calls it, an accessibility description). VoiceOver will also announce some of the items in a photo that has not been described (tree, dog, sunset, etc.), much like the Camera app already does when you take a photo with VoiceOver turned on.
  4. A more customizable Speech feature: you can now customize the colors for the word and sentence highlighting that is available for Speech features such as Speak Selection and Speak Screen. These features are helpful for learners who struggle with decoding print and need the content read aloud. The highlighting can also help with attention and focus while reading, and it’s nice to see we can now change the color for a more personalized reading experience.
  5. Type for Siri: In addition to using your voice, iOS 11 also allows you to interact with Siri by typing your requests. This is not only an accessibility feature (it can help those with speech difficulties who are not easily understood by Siri) but a privacy convenience for everyone else. Sometimes you are in a public place and don’t want those around you to know what you are asking Siri.
  6. More options for captions: for videos that include closed captions, you can now enable an additional style that makes the text larger and adds an outline to make it stand out from the background content. Along with this new style, you can now turn on spoken captions or convert the captions to Braille. This last option could make the content more accessible to individuals with multiple disabilities.
  7. Switch Control enhancements: typing can take a lot of time and effort when using switch access technologies. With iOS 11, Apple hopes to make this process easier by providing better Switch Control word prediction as well as a “scan same key after tap” option (this will repeat the same key without requiring the user to scan to it again, which can take some time). Other Switch Control enhancements for better overall usability include:
    • Point Mode has an additional setting for more precise selections: this will add a third scan to refine the selection at an even slower pace, and early reports are that it selects the actual point rather than the surrounding accessibility element (button, etc.).
    • Scanner Menu option for Media Controls: recognizing that media playback is a popular activity for switch users (just as it is for everybody else), a new category for Media Controls has been added to the scanner menu. I assume that this feature will work on any app with playback controls, which would make it a great option to use with Voice Dream Reader or any other app with playback controls at the bottom of the screen (which require a lot of scanning to access).
  8. Improved PDF accessibility support: while I am not a big fan of PDF as a format, there are still a lot of legacy PDF documents out there, so it is nice to see improved support for PDF accessibility in iOS 11. One of the common uses of PDFs is to collect information through forms, and with iOS 11 Apple promises better support for forms as well as for tagged (properly marked up) PDF documents. (A rough way to check whether a PDF is tagged is sketched after this list.)
  9. Better Braille support: as reported by AppleVis, the Braille improvements in iOS 11 include better text editing and more customizable actions that can be performed through shortcuts on a Braille display.
  10. A redesigned Control Center: you will have more ways to get to the features you use the most with the new Control Center, which will now allow you to add widgets for the Accessibility Shortcut, the Magnifier and text resizing.
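
Related to item 8: if you are curious whether a particular PDF is tagged before you hand it to students, here is a rough check of my own (not something Apple provides) using Python and the pypdf package; the file name is hypothetical.

```python
# Sketch: check whether a PDF declares the structure tree that tagged PDFs carry.
# Assumes: pip install pypdf; "worksheet.pdf" is a hypothetical file name.
from pypdf import PdfReader

reader = PdfReader("worksheet.pdf")
catalog = reader.trailer["/Root"]              # the PDF document catalog
has_structure = "/StructTreeRoot" in catalog   # tagged PDFs include a structure tree
has_mark_info = "/MarkInfo" in catalog         # tagged PDFs usually declare this too

if has_structure and has_mark_info:
    print("This PDF appears to be tagged.")
else:
    print("No tagging found - headings, reading order and form labels are likely missing.")
```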

As with most releases of iOS, there are a number of features that can be beneficial to users with disabilities even if they are not found in the Accessibility area of Settings. These include:

  1. An improved Siri voice: we got a taste of how much more natural the new Siri voice will sound, but there was not a lot of detail provided. It is not clear if this voice will be available to VoiceOver users, or if it will be incorporated into the rest of the Speech features such as Speak Selection and Speak Screen when iOS 11 finally ships to the general public in the fall.
  2. Siri translation: Siri can now translate from English into a few languages – Chinese, French, German, Italian, or Spanish.
  3. Persistent Reader View in Safari: when you long-press the Reader icon, you can choose to have reader activate on all websites, or just on the current one. Reader view is helpful for removing ads and other distractions that can compete for attention, especially for people with learning difficulties such as ADHD.
  4. Apple TV remote in Control Center: It is also possible to add an onscreen Apple TV remote to the Control Center. This will be helpful for Switch Control or VoiceOver users who may prefer this option to using the physical Apple TV remote.
  5. One handed keyboard: this option is intended to help anyone who is trying to enter text while one hand is busy with groceries, etc., but it can also be helpful to someone who is missing a limb. Tapping on the Globe icon that provides access to third party keyboards will now show an option for moving the keyboard to either side of the screen where it can be easier to reach with one hand.
  6. One handed Zoom in Maps: this feature is intended for drivers to have better access to Maps while on the road, but as with the one-handed keyboard, others will benefit from this change as well. As someone who often has one hand busy with a white cane, I welcome all of these features that make the iPhone easier to use with just one hand.
  7. Redesigned onscreen keyboard: Letters, numbers, symbols, and punctuation marks are now all on the same keyboard. Switching between them is as easy as a simple flicking gesture on the desired key.
  8. Easier setup of new devices: anything that reduces the amount of time entering settings is helpful to switch and screen reader users. When setting up a new iOS device, there’s now an option in iOS 11 to hold it near an existing device to automatically copy over settings, preferences, and iCloud Keychain.
  9. More customizable AirPod tap controls: AirPods can now be customized with separate double-tap gestures for the left and right AirPod. One can be set to access Siri, for example, while the other can be set to play the next track. Previously, double-tap settings were applied to both AirPods. This will be helpful for individuals who rely on AirPods to access Siri as an accessibility option.
  10. Less restrictive HomeKit development: it will now be possible to develop a HomeKit device without being part of Apple’s HomeKit licensing program. All that will be required is a developer account. The catch is that any HomeKit devices developed this way cannot go up for sale. That should be fine for assistive tech makers who just want to experiment and prototype solutions for their clients without the investment required to be part of the official HomeKit program. As The Verge suggests, this could also encourage more developers to dip their toes into HomeKit development, which will hopefully lead to more options for those of us who depend on smart homes for improved accessibility.
  11. QR Code support in the Camera: QR codes can be a helpful way to provide access to online resources without requiring a lot of typing of URLs and the like. They are frequently used in the classroom for this purpose, so I know teachers will find having this feature as a built-in option a welcome addition. (A small sketch for generating a QR code appears after this list.)
  12. SOS: There’s an Emergency SOS option in the Settings app that allows users to turn on an “Auto Call” feature. This will immediately dial 911 when the Sleep/Wake button is pressed five times. A similar feature has been available on the Apple Watch since the introduction of watchOS 3, and it’s nice to see it on the iPhone as well.
  13. Core Bluetooth support for Apple Watch: while this is an Apple Watch feature, I’m mentioning it here because the Apple Watch is still very closely tied to its paired iPhone. With watchOS 4, which was also previewed at WWDC, Apple Watch Series 2 is getting support for connecting directly to Bluetooth low energy accessories like those that are used for glucose tracking and insulin delivery. Furthermore, the Health app will have better support for diabetes management in conjunction with CoreBluetooth, including new metrics related to blood glucose tracking and insulin delivery.
  14. Indoor navigation in Maps: Maps has been a big help for me whenever I find myself in an area I don’t know. I love the walking directions and how well they integrate with the Apple Watch so that I can leave my phone in my pocket as I navigate with haptic feedback and don’t give off that lost tourist look. With iOS 11, these features will be extended to indoor spaces such as major malls and airports.
  15. A redesigned App Store: the screenshots I have seen point to a bigger, bolder design for the App Store, which will be welcome news to those of us with low vision. If you like how Apple News looks now, you will be pleased with the redesigned App Store.
  16. Built-in screen recording: I rely on Screenflow to record my video tutorials, but having screen recording as a built-in feature will be convenient for quick videos. This will be great for providing tech support to parents, or for documenting accessibility bugs to developers.
  17. Person to Person payments in Messages: anything that allows payments without the need to use inaccessible currency is A-OK with me.

The Notes app has been greatly enhanced in iOS 11 to make it an even better tool for diverse learners who need options for how they capture, store and retrieve information:

  • Instant Markup: adding an annotation to a PDF document or screenshot will be as easy as picking up the Apple Pencil and touching the screen to start drawing/annotating.
  • Instant Notes: tapping the lock screen with Apple Pencil will create a quick handwritten note that will appear in Notes once the device is unlocked.
  • Inline Drawing: when you begin to draw or annotate in Notes, the text around the annotation will move out of the way. You can add inline drawings in Mail as well.
  • Searchable annotations in Notes: everything you write with the Apple Pencil in Notes will now be searchable, making it much easier to find the highlights in long notes taken during a lecture or long presentation.
  • Document Scanner: the new Document Scanner in Notes will detect the edges of a document to automatically scan it, crop it, and remove glare and tilt to produce a cleaner image. This should result in a better scan that you could then pass off to a different app to perform even better optical character recognition (OCR), as sketched after this list. I am hoping this feature is just a start, and eventually we will get built-in OCR in iOS.
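
To illustrate that last bullet, here is a minimal sketch (my own, not a Notes feature) of passing a scanned page to an OCR tool, assuming Python 3 with pytesseract and Pillow installed and a Tesseract binary on the system; the file name is a placeholder.

```python
# Sketch: run optical character recognition on a scanned page exported from Notes.
# Assumes: pip install pytesseract pillow, plus the Tesseract engine itself.
from PIL import Image
import pytesseract

scan = Image.open("scanned_page.png")      # hypothetical exported scan
text = pytesseract.image_to_string(scan)   # extract the text from the image
print(text)
```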

A major focus with iOS 11 is improved support for iPad productivity. This includes support for Drag and Drop in an enhanced Split View, as well as a new multi-tasking view with what appears to be the equivalent of Spaces on the Mac. With Apple’s excellent track record of accessibility, I’m confident these features will have the same level of accessibility as the rest of iOS.

I can’t wait to try out iOS 11 when the public beta becomes available to start enjoying some of these features on at least one of my devices (not the one I use to get work done, of course – at least not until late in the beta cycle when everything is much more stable).

How about you – which of these iOS 11 features has you most excited? Which ones do you think you will use the most?