Each assessment you set represents a valuable opportunity to collect data. Where did students do well? What did they get wrong? Has the class made progress over time? This information can help you shape your curriculum, teaching methods, and student support for the better.
However, significant obstacles can prevent you from collecting useful data: a lack of specialist training for educators, the time cost, and the limited range and depth of assessment metrics available from traditional testing, for example. Then there’s the challenge of presenting data in a clear, useful format that enables meaningful action.
Computer-based assessment, by contrast, lets you gather and analyze data in new and more efficient ways. In this article, we’ll explain how you can leverage these advantages to make a difference.
Key Takeaways
- Gathering and analyzing data from assessments is a crucial part of the education cycle, allowing you to plan useful next steps.
- Traditional data collection can be time-consuming and may lead to human error.
- Computer-based assessment lets you gather and export data in moments.
- Online assessments also yield richer data that can make a real difference in the classroom.
- Working with the right platform can help you address any technological shortcomings to make sure tests are delivered smoothly at scale.
The Power of Data
As educators, we know how important it is to assess student progress. This can happen during a course of study to check whether and how the students are learning, or at the end of a unit to judge overall performance.
Understanding whether students have retained knowledge or understood key information can help shape the next steps for educators. Certain topics may need to be revisited in class, or teaching styles adjusted. It can also be important to understand how students with special educational needs and disabilities (SEND) are coping, and whether they might need further accessibility adjustments.
Aside from measuring knowledge, some assessments can also help you understand the skills your students hold. This can be a vital part of making sure they’re prepared for their futures—whether that’s further study or the world of work. Assessments need to be carefully designed for this purpose.
Data from assessments can also be helpful on an institutional level, or higher, to understand the success of policy decisions and where there might be further training needs for staff.
Opportunities in Computer-Based Assessment
Computer-based testing offers obvious benefits through the application of new technology:
Instant data collection
It slashes the time needed to collate data, removing the need to manually enter grades or scan transcripts. With the right technology platform behind you, you can gather and export large volumes of data almost instantly.
This unlocks the potential for data-driven education by removing the burden on staff, who have enough to be getting on with. That time can then be used more valuably to improve the educational experience for students in light of the data. Interventions can also be more timely, with changes applied in the classroom almost immediately if needed.
More granular detail
Traditional paper-based tests (whether multiple-choice or essay questions) provide a limited set of data, sometimes just a simple summative mark. Computer-based tests, on the other hand, allow you to go into the granular detail of student performance. This can include useful information such as:
- How long students took to complete the whole test and individual questions.
- Where students used accessibility aids like transcripts or contextual clues.
- Whether students skipped or paused questions, which can suggest disengagement.
- A student’s path through the assessment, if using computer adaptive testing (CAT). This personalizes the test to each student, with the adaptive technology selecting each question based on how the student answered previous ones (see the sketch after this list).
- Where exactly students clicked on visual questions like hotspots or sliders, so you can identify common misconceptions across incorrect answers. If you’re using portable custom interactions (PCIs), you can check rich log data to understand exactly how students interacted with them online.
- Which question types took students longer and/or produced the most mistakes. If you find students are struggling with text-based questions, you might provide more literacy support. Alternatively, you might decide that a visual question would give them a better opportunity to demonstrate their understanding.
- How the actual questions and tests are performing. Are questions measuring what is intended? This can help identify questions that should be revised or reformatted for more accurate measurement.
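To make the CAT idea concrete, here is a minimal sketch of adaptive item selection, not TAO’s actual engine: a simple “staircase” rule that raises or lowers the difficulty of the next question depending on whether the previous answer was correct. The item bank, difficulty levels, and simulated student are all illustrative assumptions; real CAT systems typically use item response theory models rather than this crude rule.

```python
# Illustrative sketch only: a crude "staircase" adaptive test, not TAO's CAT engine.
# The item bank, difficulty scale, and simulated student below are assumptions.
import random

# Hypothetical item bank: five questions at each difficulty level 1 (easiest) to 5 (hardest).
ITEM_BANK = {level: [f"Q{level}-{i}" for i in range(1, 6)] for level in range(1, 6)}

def run_adaptive_test(answer_fn, num_questions=10, start_level=3):
    """Pick each question based on the student's success on the previous one.

    answer_fn(question_id) returns True if the student answers correctly.
    Returns the student's path: a list of (question_id, level, correct) tuples.
    """
    level = start_level
    path = []
    for _ in range(num_questions):
        question = random.choice(ITEM_BANK[level])
        correct = answer_fn(question)
        path.append((question, level, correct))
        # Step up after a correct answer, step down after an incorrect one.
        level = min(5, level + 1) if correct else max(1, level - 1)
    return path

# Example: simulate a student who is less reliable on harder questions.
for question, level, correct in run_adaptive_test(
    lambda q: random.random() < 0.95 - 0.15 * int(q[1])
):
    print(question, "level", level, "correct" if correct else "incorrect")
```

The path recorded here (which questions a student saw and how they moved between difficulty levels) is exactly the kind of granular information a paper test cannot capture.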
You can cross-reference this data with student demographics to reveal more information about the performance of students of different ages, backgrounds, or genders, for example. This is an important step in ensuring assessments are equitable. It can also illuminate where students with SEND may need further support, either with adjusted assessments or extra teaching.
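As a rough illustration of this kind of analysis, the sketch below assumes a hypothetical results export with one row per student per question; the file name and column names (question_id, time_seconds, correct, send_status) are placeholders rather than a fixed TAO export format.

```python
# Illustrative sketch: summarizing a hypothetical per-response export with pandas.
# The file name and column names are assumptions, not a fixed export format.
import pandas as pd

results = pd.read_csv("assessment_results.csv")

# Per-question difficulty: which questions took longest and produced the most errors?
per_question = results.groupby("question_id").agg(
    mean_time_seconds=("time_seconds", "mean"),
    error_rate=("correct", lambda c: 1 - c.mean()),
    attempts=("correct", "count"),
)
print(per_question.sort_values("error_rate", ascending=False).head(10))

# Cross-reference with demographics: compare success rates across subgroups,
# for example to check that students with SEND are not being disadvantaged.
subgroup_rates = results.groupby(["question_id", "send_status"])["correct"].mean().unstack()
print(subgroup_rates)
```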
Easier reporting
Collecting granular data is a great starting point, but the next important step is to present it in a clear format. Dashboards and reports can help individual educators and higher-level decision-makers (at institution level or above) make better-informed decisions guided by data insights.
Computer-based assessment data can plug into your preferred reporting software, provided it comes in the right format. For example, the TAO platform provides an API that links its secure data warehouse to your analytics software so you can create dashboards as needed. And as a TAO Enterprise user, you can generate custom reports to share with your team and external stakeholders, selecting the information that’s most important to you.
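The exact integration will depend on your reporting stack, but in broad strokes it can be as simple as pulling results from an API endpoint and reshaping them into a table your BI tool can read. The sketch below is purely hypothetical: the URL, token, and payload structure are placeholders, not TAO’s actual API.

```python
# Hypothetical illustration: pulling assessment results from a generic REST
# endpoint and flattening them into a dashboard-ready CSV. The URL, token,
# and payload fields are placeholders, not TAO's actual API.
import requests
import pandas as pd

API_URL = "https://example.org/api/assessment-results"  # placeholder endpoint
API_TOKEN = "YOUR_API_TOKEN"  # placeholder credential

response = requests.get(
    API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=30
)
response.raise_for_status()

# Flatten the JSON payload into a flat table for your analytics software.
results = pd.json_normalize(response.json())
results.to_csv("assessment_dashboard_feed.csv", index=False)
print(f"Exported {len(results)} rows for reporting.")
```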
Clear, user-friendly dashboards and reports make it easier for all staff to use the data, even if they don’t have specialist data training. As a result, planning and policy discussions become more productive, and colleagues at every level are empowered to take meaningful, data-driven action.
Challenges to Watch Out For, and How to Approach Them
Although transitioning to digital assessment brings many benefits, there are pitfalls to be aware of, particularly when considering data collection.
Scalability
If you want to collect meaningful data from a large cohort (across a school district, a large university, or an education consortium, for example), you’ll need a technological solution that scales. The key barrier here is high traffic causing slowdowns or even crashes that prevent students from completing a test.
Working with the right platform is key, as is getting the right support when implementing tech systems. A great example is how TAO helped the DEPP (Direction de l’évaluation, de la prospective et de la performance, a section of the French Ministry of Education) improve the implementation of online testing for France’s 6th-grade students using an external hosting service. As a result, the DEPP was able to run 160,000 tests while supporting a minimum of 20,000 connections per minute.
Technological capability and support
You will also need to address the provision of technology across the board. For example, are your institutions equipped with the right devices for the students to take the tests? Are connections solid and reliable in every location?
When working with the DEPP, TAO improved connection speed by 40% and reduced bandwidth demand to make sure tests could be delivered consistently across the participating schools.
You may also need specialist tech support on hand to ensure that the test process is smooth and that data arrives without errors. Again, working with the right platform can help you access experts when needed; TAO testing offers dedicated support services and custom training, for example.
Ensuring accurate data
Raw data is one thing, but how do you know it’s accurate? Computer-based tests help you reduce the potential for human error in data collection, which can make for a better dataset to begin with.
Reviewing your data and “pruning” errors is also an important part of the process, to ensure that the dataset is as free from bias as possible. For example, a poorly worded question could cause a large number of students to perform badly, which might skew your analysis of their understanding or of the quality of the teaching. Similarly, a test pitched at the wrong ability level could lead to student disengagement, again preventing a clear picture of their ability.
With digital technology, you can quickly spot outliers and, if needed, remove them from the dataset. For example, answer times can show where students responded abnormally quickly, which might signify guessing.
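As a minimal sketch, assuming the same kind of hypothetical per-response export as above, you might flag answers given far more quickly than is typical for that question; the 25% threshold here is an illustrative choice, not a recommended standard.

```python
# Illustrative sketch: flag responses answered abnormally quickly relative to
# each question's median time, which may indicate guessing. Column names and
# the 25% threshold are assumptions.
import pandas as pd

results = pd.read_csv("assessment_results.csv")

median_time = results.groupby("question_id")["time_seconds"].transform("median")
results["possible_guess"] = results["time_seconds"] < 0.25 * median_time

flagged = results[results["possible_guess"]]
print(flagged[["student_id", "question_id", "time_seconds"]])
```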
Conclusion
Assessment can yield powerful data, especially when you exploit technological innovations to gather more granular information on student performance. With computer-based assessment, you can quickly and easily gather and export a whole range of information that can inform important changes in the classroom and beyond, without needing a team of data specialists.
But there are some challenges, particularly in making sure your assessments are scalable in a technological sense so you can collect the most accurate data at volume. You will then need to find ways to empower your team to actually use it to make changes on the ground.
Read more on the TAO blog about how to use the data you’ve gathered to drive progress for students, and about reimagining education through data analytics.
FAQs
What is the advantage of using computers for scoring assessments?
Using computers for scoring assessments can save significant time and effort in grading. Additionally, it can reduce errors and biases, potentially leading to more accurate datasets. Scores don’t need to be uploaded manually, either—saving you even more time.
Why is it important to analyze data from computer-based assessments?
Data from computer-based assessments can be extremely useful in suggesting the next steps for educators. Finding out more about where students performed well or badly can show where they need more support; computer-based assessments can also show fine detail, like where students paused on a question or where they clicked on the screen, helping you understand their difficulties better.