Schools and education districts worldwide have become increasingly digitized, with many online schooling options and digital learning platforms available to students and teachers. With these new and innovative learning options comes a need to ensure that technology-based assessment practices are fair and effective in measuring student growth and achievement. However, best practices for pencil-and-paper assessments in the classroom do not carry over directly to online assessments.
This is why the Association of Test Publishers (ATP) developed an in-depth guidebook on how to implement and utilize online assessments, covering the design, delivery, and scoring of digital assessments. The ATP also provides guidelines around test security, validity, reliability, and fairness. In short, the Guidelines for Technology-Based Assessment offer the perfect starting point for test administrators or school district officials developing online assessments.
The guide is comprehensive, with chapters covering a variety of topics and issues related to digital assessments:
- Test development and authoring (including gamification and technology-enhanced items)
- Test design and assembly—linear or adaptive
- Test delivery environments (including web, mobile, offline, locked-down browsers, disruptions, and interoperability)
- Scoring—automated and technology-assisted
- Digitally based results reporting
- Data management (storage, maintenance, integrity, integration)
- Psychometric and technical quality
- Test security
- Data privacy in technology-based assessment
- Fairness and accessibility
- Global testing considerations including translation
Each chapter was written by an expert in that area and reviewed by other experts to ensure that the information in the guide is accurate and reflects industry-wide best practices. The guide is designed for use with high-stakes or summative assessments, although the principles, especially around test design, can be applied to formative assessments as well.
Although there is a lot of information to sort through within the guide, five key topic areas stand out. These include takeaways about technology-enhanced items and game-based assessment, interoperability, fairness and accessibility, scoring, and security. By considering each, a teacher or test developer can ensure that their technology-based assessment measures what they hope to measure, in an equitable manner, while maintaining a high level of privacy and security.
Technology-Enhanced Items & Game-Based Assessment
One of the best ways to use technology to enhance learning is through Portable Custom Interactions (PCIs). By enabling students to interact with game-like simulations within a test, PCIs give students tremendous freedom to demonstrate learning in context and with multiple outcomes.
When creating any test item, the key is to start with the end goal — what the question should measure and allow students to demonstrate — in mind. This means that technology-enhanced items (TEIs) or game-based assessments should start from the learning standard being measured and be designed to allow students to demonstrate an understanding of those skills. Some other key guidelines to consider include:
- Ensure that TEIs work on student devices – students may be taking the assessment on a tablet, laptop, Chromebook, or another device, so make sure they can view the item without excessive scrolling.
- Provide tutorials – students need to understand how to operate the technology before starting, so they aren’t bogged down by the interface and the results show what the student knows, not how well they managed the tech.
- Reduce construct-irrelevant variance (CIV) – as much as possible, test developers need to ensure that the content and student skill remain front and center. TEIs should be easy to access and developed with the test taker in mind.
Interoperability
There are a wide variety of technology-based assessment tools and platforms that schools implement when teaching and assessing students. When developing assessment software, developers must ensure that it can access and communicate with software from other platforms, so that information transfers smoothly and securely from one place to another.
For teachers, this means that an assessment can be given on one platform and student scores or data can be analyzed by other platforms or grade books designed for such a purpose. This interoperability ensures that educators can leverage data effectively, develop more engaging assessments, and positively impact student growth.
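In practice, this kind of interoperability usually comes down to exchanging results in an agreed-upon, machine-readable format that both platforms validate on their end. Here is a minimal sketch of the idea in Python; the JSON field names and helper functions are illustrative assumptions for this post, not taken from the ATP guide or from any specific interchange standard such as 1EdTech OneRoster:

```python
import json

def export_result(student_id, assessment_id, score, max_score):
    """Serialize one assessment result as a JSON record that another
    platform (e.g. a gradebook) could ingest. Field names are
    illustrative, not from a real interoperability spec."""
    record = {
        "studentId": student_id,
        "assessmentId": assessment_id,
        "scoreGiven": score,
        "scoreMaximum": max_score,
        "scorePercent": round(100.0 * score / max_score, 1),
    }
    return json.dumps(record)

def import_result(payload):
    """Parse a result record received from another platform,
    validating the fields the receiving system needs
    before accepting it."""
    record = json.loads(payload)
    for field in ("studentId", "assessmentId", "scoreGiven", "scoreMaximum"):
        if field not in record:
            raise ValueError(f"missing required field: {field}")
    return record
```

A shared, validated record like this is what lets a score earned on one platform show up correctly in another system's gradebook.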
Fairness & Accessibility
In many ways, technology-based assessment levels the playing field for students and creates the opportunity for assessing more fairly and equitably. However, there are some key considerations regarding fairness and accessibility including:
- ELL Considerations – assessments should take into account language-based needs and make accommodations and modifications as appropriate within the platform. For digital testing, this may mean providing tutorials with heavy reliance on picture-based instructions or including different translations for how to take the assessment.
- Students with mental health concerns – digital testing platforms need to ensure that accommodations are easily implemented by instructors and are also easy for students to utilize throughout the test.
- Students without experience on digital platforms – not all students have expertise in technology and using technology for assessment. Educators and test developers need to ensure that tests are designed simply and intuitively so that students can engage with the material as authentically as possible.
Scoring
Technology-based assessment scoring can be divided into two categories: automated and human scoring. For automated or AI-scored assessments, it is critical that test developers employ a rigorous and thorough quality-control process, both for scoring assessments and for communicating scores with students and families. This means the software should reliably distinguish correct from incorrect responses so that it provides accurate data.
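One common way to operationalize that quality-control step is to have humans double-score a sample of responses and check how often the automated scorer agrees. The sketch below shows the idea in Python; the function name, the exact-agreement metric, and the 90% review threshold are illustrative assumptions, not requirements from the ATP guide:

```python
def agreement_check(auto_scores, human_scores, threshold=0.9):
    """Compare automated scores against a human-scored sample of the
    same responses. Returns the exact-agreement rate; a rate below
    `threshold` flags the automated scorer for review. The threshold
    is an illustrative default, not a published standard."""
    if not auto_scores or len(auto_scores) != len(human_scores):
        raise ValueError("score lists must be non-empty and equal length")
    matches = sum(a == h for a, h in zip(auto_scores, human_scores))
    rate = matches / len(auto_scores)
    return {"agreement": rate, "needs_review": rate < threshold}
```

For example, if the automated scorer matched human raters on 3 of 4 sampled responses, the 75% agreement rate would fall below the threshold and the item would be flagged for review.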
However, not all items can be scored automatically. For human-scored items, there are a few things to remember:
- These items should always have a clearly defined rubric so that students know what they are supposed to do.
- The scorers need to have adequate training to avoid bias and ensure test accuracy.
- Technology platforms for storing and displaying student responses should be user-friendly and intuitive.
Privacy & Security
Working with sensitive testing data is a major concern for test developers. With many tests being offered online and via cloud-based interactions, the potential for a data breach is high. To maintain the integrity of the test, test developers must act to prevent a breach before it happens, employ deterrence strategies before and during testing, and develop a response plan for cheating or data breaches.
It is just as important to consider any personally identifiable information that the test may have collected. Having a robust security system in place to maintain a high level of privacy will ensure that students’ data remains secure. This is not just a matter of best practice; in many places it is also a legal requirement, as governments strive to keep personal data safe.
Why is the guide important?
The shift to online technology-based assessment has allowed educators to get a more accurate view of what their students can and cannot do in a shorter amount of time when compared with traditional testing methods. However, online testing is relatively new, and it comes with important nuances and differences. This is why these ATP Guidelines are an important starting point for test developers: they provide a baseline of best practices, which ultimately leads to more effective assessment, less wasted time, and more accurate data.
Open Assessment Technologies has software solutions that can help develop high-quality assessments designed for online learning. To learn more about how OAT solutions can improve assessments for your students, click here and reach out to a sales representative.