Statistics

The statistics module, available on FLOWmulti, can be used to gain insight into the performance of both test-takers and questions on a particular flow. The module is available to both assessors and reviewers: assessors only see the summary statistics for the students to which they have been assigned, whereas the reviewer has oversight of all students enrolled on the flow.

To access the statistics module as an assessor, click the left-hand flow menu from inside the marking tool and choose Statistics.

access_stats.png

As a reviewer, use the Statistics button on the flow landing page.

access_stats_rev.png

The landing page shows a graphical overview of the generated statistics, split up into several sections which are explained further below.

stats_1.png

Test-Level Summary Statistics

Test-level statistics are provided in three charts, shown below.

stats_2.png

Distribution of Points

stats_3.png

A histogram depicting the number of test-takers within each scoring interval. Use the wrench icon to adjust the size of the bins to 2-, 5-, 10- or 20-point ranges.
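The binning behind this kind of histogram can be sketched as follows. This is a minimal illustration rather than the module's actual implementation; the function name and bin labels are hypothetical:

```python
from collections import Counter

def bin_scores(scores, bin_size):
    """Group raw scores into fixed-width bins, as a score histogram does.

    bin_size mirrors the wrench-icon options (2, 5, 10 or 20 points).
    Each score falls into the bin starting at the nearest lower multiple
    of bin_size, e.g. a score of 13 with bin_size=5 lands in the 10-14 bin.
    """
    counts = Counter((score // bin_size) * bin_size for score in scores)
    return {f"{lo}-{lo + bin_size - 1}": n for lo, n in sorted(counts.items())}

print(bin_scores([3, 7, 8, 12, 13, 18], bin_size=5))
# → {'0-4': 1, '5-9': 2, '10-14': 2, '15-19': 1}
```

Smaller bins show more detail in the distribution; larger bins smooth out noise in small cohorts.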

Distribution of Answers

stats_4.png

A pie chart showing the proportion of items (note this is at item level rather than individual question level) marked as fully correct, partially correct, incorrect, unanswered or awaiting manual scoring. You can adjust the chart by clicking on the legend to hide or show different categories.

Summary

stats_5.png

A summary of the scoring, including the mean, maximum and minimum scores achieved, an indication of the most and least difficult items (based on average scores), and the test-taker who scored highest overall.

Item-Level and Participant-Level Summary Statistics

Item-level and participant-level statistics are shown under four separate tabs, each described in the following sections. Most of the data available here can be exported so that you can load it into other statistical applications for further analysis.

Participants

The participant tab gives an overview of how each test-taker has performed on the test.

stats_6.png

  1. Participant information (which can be anonymised depending on the flow settings).
  2. Access the individual participant view to drill down into a more detailed overview of that particular test-taker's performance.
  3. See a summary of any annotations added against the participant (assessors only).
  4. Submission status.
  5. The calculated score (includes items that have been manually scored).
  6. A visual indicator of the distribution of scoring outcomes across the test items (values are correct, partially correct, incorrect, awaiting manual scoring or unanswered).
  7. Export the table to a CSV file. It is possible to select which columns should be included in the export.

Score

The score tab shows a more detailed view of how test-takers have performed across the different questions and items in the test.

stats_7.png

  1. Participant information (which can be anonymised depending on the flow settings).
  2. Access the individual participant view to drill down into a more detailed overview of that particular test-taker's performance.
  3. Score information split into different columns for the auto-scoring and manual scoring as well as the total score.
  4. The scoring applied to each item in the test, with a colour-coded indication of whether the item was answered correctly or incorrectly. Hover over a value to see whether a manual score has been applied.
  5. Export the table to a CSV file. It is possible to select which columns should be included in the export.

Assignment

The assignment tab shows a summary of how the questions performed in relation to the scores achieved by the test-takers.

stats_8.png

  1. Item number or ID.
  2. Access more detail by drilling down into each individual question.
  3. Average, variance and spread columns, denoting the mean score achieved on the item, the variance of those scores and their standard deviation.
  4. A visual indicator of the distribution of scoring outcomes across the test items (values are correct, partially correct, incorrect, awaiting manual scoring or unanswered).
  5. Export the table to a CSV file. It is possible to select which columns should be included in the export.

Selected Responses

stats_9.png

The selected responses tab shows a list of the individual items with a correct answer key, a points value for the correct answer and an indication of which response option was selected by each test-taker. Where the test is set to shuffle answer options, the data visible here shows each response option anchored in its original position when the test was built (not necessarily the letter selected by students).

Please note that this feature does not support all question types and works best for single best answer style items.

It is possible to export this data to load into other statistical packages if required.
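As an illustration, an exported CSV can be loaded with Python's standard library for further analysis. The file contents and column headers below are hypothetical; check the real export for the exact column names:

```python
import csv
from io import StringIO

# Hypothetical extract of an exported statistics CSV; the actual export's
# column headers and layout may differ.
exported = StringIO(
    "Participant,Total score\n"
    "1001,42\n"
    "1002,35\n"
    "1003,48\n"
)

# DictReader maps each row to a dict keyed by the header row.
rows = list(csv.DictReader(exported))
scores = [float(row["Total score"]) for row in rows]
print(f"{len(scores)} participants, mean score {sum(scores) / len(scores):.1f}")
# → 3 participants, mean score 41.7
```

The same approach works with a real file by replacing the `StringIO` object with `open("export.csv", newline="")`.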

Individual Test Item Overview

The individual item overview shows a more detailed set of summary statistics about that specific item, including a summary, score and answer distributions, a comparative distribution boxplot, granular item analysis and details of the test item itself.

stats_10.png

Summary

stats_11.png

Shows a brief summary of the item performance data including the average score, variance and standard deviation.
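These three figures are related: the spread (standard deviation) is the square root of the variance, which in turn is the mean squared deviation from the average. A sketch using the conventional population formulas (the module does not document which variance convention it uses, so treat this as an assumption):

```python
import math

def item_summary(scores):
    """Average, variance and standard deviation for one item's scores.

    Assumes the population formulas (divide by n); the module may use the
    sample convention (divide by n - 1) instead.
    """
    n = len(scores)
    mean = sum(scores) / n
    variance = sum((s - mean) ** 2 for s in scores) / n
    return {"average": mean, "variance": variance, "spread": math.sqrt(variance)}

print(item_summary([2, 4, 4, 4, 5, 5, 7, 9]))
# → {'average': 5.0, 'variance': 4.0, 'spread': 2.0}
```

A low spread means most test-takers scored similarly on the item; a high spread means the item separated the cohort more strongly.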

Answer Distribution

stats_12.png

A pie chart showing the distribution of scoring outcomes on the item. Options are correct, partially correct, incorrect, unanswered and awaiting manual scoring. You can adjust the chart by clicking on each label in the legend to show or hide it in the chart. 

Answer Distribution on Question

stats_13.png

A pie chart showing the distribution of responses on the item. Not all question types are supported, and this feature works best with single best answer style questions. You can navigate between individual questions in the item using the dropdown field to the bottom right.

Boxplot of the Item

stats_14.png

Shows the summary statistics mapped onto a boxplot for comparison against other items in the test. The currently selected item is shown in blue.
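A boxplot is typically drawn from the five-number summary of the item's scores: minimum, lower quartile, median, upper quartile and maximum. A minimal sketch (the quartile convention used by the module is an assumption):

```python
import statistics

def five_number_summary(scores):
    """Five-number summary underlying a boxplot.

    Uses the inclusive (linear-interpolation) quartile method; the module's
    actual quartile convention is not documented, so values near the
    quartiles may differ slightly from what the chart shows.
    """
    q1, median, q3 = statistics.quantiles(scores, n=4, method="inclusive")
    return {"min": min(scores), "q1": q1, "median": median,
            "q3": q3, "max": max(scores)}

print(five_number_summary([1, 3, 5, 7, 9, 11, 13]))
# → {'min': 1, 'q1': 4.0, 'median': 7.0, 'q3': 10.0, 'max': 13}
```

The box spans q1 to q3 (the middle half of the cohort), with a line at the median and whiskers out to the extremes.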

Points and Answers of the Participants

stats_15.png

Shows the score achieved by each test-taker on the item, split into auto-scored and manually scored credit, alongside the correct answer and the answer each test-taker chose.

Content of the Item

stats_16.png

A preview of the item as it was displayed to test-takers. Where the test is set to shuffle answer options the data visible here will be the response option anchored in its original position when the test was built (not necessarily the same order the options were presented to individuals). 

Individual Participant Overview

The individual participant overview details the test-taker's performance across the test and on each item in it. This includes summary information, mark and answer distributions, a cohort comparison (histogram) and a detailed outline of how they responded to each item.

stats17.png

Information

stats_18.png

Shows the test-taker's biographical information (this can be anonymised depending on the flow settings).

Mark Distribution

stats_19.png

Shows the proportion of the total available marks the test-taker has been awarded, with a visual indication of the split between auto-scored and manually awarded points.

Answer Distribution

stats_20.png

A pie chart displaying the distribution of the scoring outcomes for all of the items in that test-taker's submission. Options are correct, partially correct, incorrect, unanswered or awaiting manual scoring. You can adjust the chart by clicking on each label in the legend to show or hide it in the chart.

Score Comparison

stats_21.png

A histogram that shows where the test-taker sits within the overall cohort distribution of scores. The bin in which the test-taker resides is denoted by the dark blue bar. Use the wrench icon to adjust the size of the bins.

Participant Overview

stats_22.png

A complete overview of the test-taker's performance across all items in the test. This includes the status of each item (correct, partially correct, incorrect, unanswered or awaiting manual scoring), the scoring achieved by auto-validation and by manual marking, and how that score compares to the cohort mean for each item.
