
Removing Barriers in Digital Assessment for Test Takers

Assessment is a technology. We have a long history of asking a sample of questions within a subject which allow us to draw inferences about the test taker. Over time, test makers have learned that there are some things that we need to be concerned about, including biases, reliability, validity, and accessibility. Only by addressing these concerns can we create tests that are fair for test takers.

One of the goals in the construction of fair tests is to remove elements that aren’t related to the skills or knowledge we’re trying to measure – often referred to as removing “construct irrelevant barriers.”

This article focuses on providing access for all test takers – not just those we consider to have special needs.

Assumptions about Access

There are many aspects of taking even a paper-based test that we take for granted.  In a classroom setting, we assume that the text on the paper is big enough to read, that there is enough light in the room to read the text, that the test taker can understand the language of the text. Additionally, we may expect that there are chairs, desks, writing implements, electricity, computers, Internet – the list goes on and on. We already spend a lot of effort making sure test takers can access tests, none of which has anything to do with the subject of the test.

When we question our assumptions about access, we can gain an understanding of why barriers exist and how we can remove the unintentional barriers for test takers.

Universal Design

By making tests universally designed, we reduce the barriers to access, and let test takers demonstrate their knowledge and skills. To quote Ron Mace from the Center for Universal Design at the North Carolina State University College of Design:

“Universal design is the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design…” 

In the context of online assessment, a universal design can help widen the usability of applications. We should strive to widen our assumptions about how people will use assessment applications and create interfaces that allow for differences in abilities and preferences.

An example of Universal Design would be selecting a font that displays well on device screens, and setting the font size and spacing in a way you believe meets the needs of most users. Scribes, calligraphers, and typesetters have been studying the slightest variations in the presentation of text since writing was invented.

People generally read better using familiar font settings like typeface and size. A designer incorporating Universal Design tries to balance many factors, including providing familiar type settings as well as the variation of needs of the people reading the content. 

However, the default settings presented to users may be unreadable, or at least difficult to read for some people. In these instances, we can provide the user with access to font settings that let them make changes to font size, letter/word/line spacing, text or background colors, or even the font face itself. Access to the font settings for the user provides the “customizable design.” In other words, while the designers tried to meet the needs of most users with their initial design, they also provided a way for the user to customize the visual presentation of type to their own specific needs.
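The "customizable design" described above can be sketched in code. The interface, default values, and function below are illustrative assumptions, not a real product API: the idea is simply that designer-chosen defaults serve most readers, while user overrides are merged in for those who need them.

```typescript
// User-adjustable font settings (hypothetical model for an assessment UI).
interface FontSettings {
  fontFamily: string;
  fontSizePx: number;
  lineHeight: number;      // unitless multiplier
  letterSpacingEm: number;
  textColor: string;
  backgroundColor: string;
}

// Designer-chosen defaults, intended to suit most readers.
const defaults: FontSettings = {
  fontFamily: "Georgia, serif",
  fontSizePx: 18,
  lineHeight: 1.5,
  letterSpacingEm: 0,
  textColor: "#1a1a1a",
  backgroundColor: "#ffffff",
};

// Merge a user's overrides onto the defaults and render them as a CSS
// declaration block that could be applied to the reading area.
function buildFontCss(overrides: Partial<FontSettings>): string {
  const s = { ...defaults, ...overrides };
  return [
    `font-family: ${s.fontFamily};`,
    `font-size: ${s.fontSizePx}px;`,
    `line-height: ${s.lineHeight};`,
    `letter-spacing: ${s.letterSpacingEm}em;`,
    `color: ${s.textColor};`,
    `background-color: ${s.backgroundColor};`,
  ].join(" ");
}

// A low-vision user enlarges the text and increases line spacing;
// everything they did not touch keeps the designer's defaults.
console.log(buildFontCss({ fontSizePx: 24, lineHeight: 1.8 }));
```

Note that the user changes only what they need; the designer's defaults remain the starting point, which is exactly the balance Universal Design aims for.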

Accessibility

Accessibility can be thought of generally as how people gain access to places, spaces, or information. The term "accessibility" in the context of assessment is more precisely about providing extra access capabilities to assessments, the contexts in which assessments are administered, and the assessment results. Test takers, and everyone acting on their behalf, need to be able to access the tests and all peripheral content related to the tests.

We need to make sure that the subject remains the focus of our assessment. For example, if you are using a picture as part of a question, can that picture be described in text in a way that is useful to people who can’t see it, without giving away the correct response? If not, consider using a different picture; otherwise your test requires the test taker to have a certain level of visual perception. In that case, your test is also a vision test – which likely was not listed as one of its purposes (unless you’re at the optometrist, of course).
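One way to act on this during authoring is a simple check that every image in a question carries a text alternative. The item model below is hypothetical, not part of any standard:

```typescript
// Hypothetical item model: every image in a question must carry a text
// alternative that conveys what is needed without revealing the answer.
interface QuestionImage {
  src: string;
  altText: string;          // short description read by screen readers
  longDescription?: string; // fuller description when alt text is not enough
}

// Authoring-time check: flag images that would turn the item into a vision test.
function hasTextAlternative(img: QuestionImage): boolean {
  return img.altText.trim().length > 0;
}

const chart: QuestionImage = { src: "rainfall-chart.png", altText: "" };
console.log(hasTextAlternative(chart)); // false: describe the chart or swap the image
```

A check like this can run in an item bank's validation step, so inaccessible images are caught before the test reaches any test taker.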

While it is often impossible to remove all construct-irrelevant content, we should strive to reduce it as much as possible. Even the optometrist relies on you knowing the Latin alphabet and being able to communicate with them – and that isn’t a given, so they too make accommodations for specific people.

Legal Compliance

I’d be remiss if I didn’t mention that many assessments are legally required to meet the accessibility needs of their test takers. The United States (Section 508 of the Rehabilitation Act, as updated in 2017) and Europe (EN 301 549) have laws mandating certain accessibility requirements. Both reference the Web Content Accessibility Guidelines (WCAG) versions 2.0 and 2.1 at Level AA.

WCAG provides guidelines for making web content accessible to ALL users, not just users with specific accessibility needs. Testing programs should, at a minimum, adhere to the guidelines, and conduct usability testing for test takers who use assistive technology, like screen readers.

The IMS standard for assessment content (Question and Test Interoperability, or QTI), versions 2.2 and higher, encourages and includes the ability to mark up content for accessibility. It also allows for assessment-specific accommodation content to aid test takers who require extra support and guidance.

By providing access to tests, we remove barriers for test takers to demonstrate what they know and can do. We want to be able to say we measured accurately what a person’s knowledge and skills are, and the results are ONLY about those skills that the test was intended to assess. Accessibility is one of the ways we remove construct irrelevant barriers.