How we find non-obvious errors in online assignment interfaces for children

Each new lesson on the platform is the result of joint work by methodologists, designers, illustrators, programmers and testers. New assignments are usually tested in schools, where methodologists can observe how well students understand them and collect feedback. But on small samples some problems can go unnoticed. This is where the study of students' detailed actions comes to the rescue: where they clicked, what numbers they entered, which answer they chose. Children's actions within tasks provide valuable information that allows us to improve the platform and make learning more convenient and understandable. Improvements can concern both the task interface and the wording of explanations and questions.


What we know and what we don't

For every task we have three events: “the student started solving the task”, “the task is finished, the solution is correct”, and “the task is finished, there were errors”. Each solving session leaves a log of such events, from which we can find out how many children make mistakes in the task and how much time they spend solving it.
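Such a session log can be pictured as a small event stream. Below is a minimal sketch of deriving per-task correctness flags and solving times from it; the field names (`event`, `taskId`, `ts`) and event identifiers are invented for illustration, the real schema is internal.

```javascript
// A hypothetical session log: the three event types described above,
// with timestamps in milliseconds (all field names are illustrative).
const sessionLog = [
  { event: "task_started", taskId: "math-1-add", ts: 0 },
  { event: "task_finished_with_errors", taskId: "math-1-add", ts: 95000 },
  { event: "task_started", taskId: "math-1-sub", ts: 95000 },
  { event: "task_finished_correctly", taskId: "math-1-sub", ts: 150000 },
];

// Turn the raw event stream into per-task results: was the solution
// correct, and how long did the student spend on it.
function summarize(log) {
  const started = {};
  const results = [];
  for (const e of log) {
    if (e.event === "task_started") {
      started[e.taskId] = e.ts;
    } else {
      results.push({
        taskId: e.taskId,
        correct: e.event === "task_finished_correctly",
        durationMs: e.ts - started[e.taskId],
      });
    }
  }
  return results;
}
```

Aggregating such results over many students gives the error percentages and time distributions shown in the statistics below.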


This is an example of statistics for a specific task. The graphs on the left show the number of correct and incorrect solutions and the error percentage. The right side shows the distribution of time students needed to solve the task.

Some terms

Each unit of content has a working name. Lessons consist of cards; we named them after the paper cards that teachers hand out in class. Cards are divided into semantic parts called chunks, each of which consists of several tasks, called beads.

Some tasks raise questions. Why do children abandon them without finishing more often than others? Why do they spend so much time on a seemingly simple task? Why, in a series of tasks of the same type, does the proportion of errors differ several-fold?

To answer such questions, we need to look inside the solution: to see not only the “correct / incorrect” result, but also the actions that led to it. What specific mistake does the student make? How do they form their answer? This is where action analysis comes to the rescue.

First attempts

In the first attempts at such an analysis, JS programmers modified the code of the first cards of the first-grade mathematics course. Additional events were added to each card, specific to each task type.

For example, we have tasks for solving examples with a “cubes” scheme. The child must click on a cube, which bursts; then they need to count how many cubes are left and enter the answer.


This is how the subtraction task looks at first


After the student “bursts” the cube, they need to enter the answer in the input window

Events such as “turned on the task's voice-over”, “clicked on cube number i”, and “entered a number into the input field” were added to tasks of this type.

It turned out that more than half of the wrong solutions contained the perfectly correct answer: the number 6. The “mistake” was clicking on the wrong cube: no cube except the last one could be burst, and the card counted clicks on the others as errors. We corrected this logic, and now clicking on other cubes is not considered an error. As a result, the percentage of error-free completions rose from 65% to 75%, and first-graders no longer have to guess what they did wrong.
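The fix above can be sketched as a small change in the card's click handler. This is our reconstruction under assumed names (`cubes`, `errors`, `handleCubeClick`), not the card's actual code: only the last cube can burst, and clicks on the other cubes are now silently ignored instead of being counted as mistakes.

```javascript
// Simplified click logic after the fix: only the last cube bursts;
// clicking any other cube leaves the state untouched. Before the fix,
// the second branch incremented the error counter.
function handleCubeClick(state, index) {
  if (index === state.cubes - 1) {
    // The last cube bursts: one cube fewer remains.
    return { cubes: state.cubes - 1, errors: state.errors };
  }
  // Click on any other cube: ignored, no error recorded.
  return state;
}
```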


The graph shows how much the number of unsuccessful attempts decreased for the card that includes the revised task.

This way of working gave a good understanding of how children solve tasks, but it was very labor-intensive:

  • The JS programmer must modify the card by adding the sending of the necessary events.
  • The tester must verify that the changes did not break the card's functionality.
  • The analyst must get the decision logs, understand the events and draw conclusions about what is happening.

Such a solution could not be scaled to all cards. Therefore, we developed a variant with events common to all cards.

Second attempt

All cards share common events, such as clicks, drags, or entering values into input fields. A special component was created that tracks these elementary events and sends them to the server.

Examples of these events and the additional data they contain:

  • click – (x, y) click coordinates, the CSS class and text of the clicked element
  • input – the entered value, and whether it is correct
  • drag start – coordinates and text of the dragged element
  • drag end – the same data

The action-tracking component is included in a card with a single line and requires no additional effort from JS programmers or testers. The component has been added to the mathematics cards for grades 5–9.
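A minimal sketch of such a tracking component is shown below. The real component is internal, so `attachActionTracker`, `sendEvent`, and every payload field name here are our assumptions; the point is only that one attachment call covers all elementary events via standard DOM listeners.

```javascript
// Attach listeners for the elementary events (click, input, drag) to a
// card's root element and forward them to a sendEvent callback.
function attachActionTracker(root, sendEvent) {
  root.addEventListener("click", (e) => {
    sendEvent({
      type: "click",
      x: e.clientX,
      y: e.clientY,
      cssClass: e.target.className,
      text: (e.target.textContent || "").slice(0, 50),
    });
  });
  root.addEventListener("input", (e) => {
    sendEvent({ type: "input", value: e.target.value });
  });
  root.addEventListener("dragstart", (e) => {
    sendEvent({
      type: "dragStart",
      x: e.clientX,
      y: e.clientY,
      text: (e.target.textContent || "").slice(0, 50),
    });
  });
  root.addEventListener("dragend", (e) => {
    sendEvent({ type: "dragEnd", x: e.clientX, y: e.clientY });
  });
}
```

In a card this would amount to one line, e.g. `attachActionTracker(cardRoot, send)`, with `send` batching events to the server.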

Here are a few examples of what we discovered using the data collected in this way.

Drum

As an example of refining a task interface, consider the “drum” element, which is used in some cards. Children click on the arrows and cycle through answer options until they find the right one. The change of options is animated: the drum scrolls up or down.


Task with the drum element

As expected, the click map for this task contains many clicks in the area of the triangular arrows. However, not all of these clicks turned out to be the same: there were two different CSS classes. An experiment in the card showed that the two values correspond to the clickable and non-clickable states of the arrows. The non-clickable state appears during the scrolling animation.

We found clicks on blocked arrows from 85–90% of students. That is, children often tried to click the arrow again before the scroll animation ended, and the card ignored such clicks. The animation at that time lasted 800 ms, but some children managed to make a new click after 100–200 ms.
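The check itself is a simple pass over the click logs. In this sketch the event fields and the disabled-state class name (`arrow-disabled`) are invented stand-ins for the real markup; the logic is just "what share of students ever clicked an arrow while it was blocked".

```javascript
// Share of students who clicked a blocked arrow at least once.
function shareWithBlockedClicks(clickEvents) {
  const all = new Set();
  const blocked = new Set();
  for (const e of clickEvents) {
    all.add(e.studentId);
    // The CSS class of the clicked element tells us the arrow's state.
    if (e.cssClass.includes("arrow-disabled")) blocked.add(e.studentId);
  }
  return all.size ? blocked.size / all.size : 0;
}
```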


Here you can feel how the inactive button annoyed the children

To make the interface more responsive, we significantly sped up the scrolling. This acceleration was rolled out to all cards with drums.

Place values

In addition to very small atomic actions, such as clicks, we can study what answers children give and what mistakes they make.

For example, in one of the assignments, sixth-graders review the names of the place values of a number and learn to recognize tenths and hundredths. Here is an example of a task where children need to mark the digit in a given place.


This is how the place-value task looks today

On the click map we saw clicks on the rectangles with digits. From the coordinates of a click we can tell which digit the student clicked on. Note also that the first click on a digit selects it and a second click removes the selection. From the event log we can therefore deduce which digits the student had selected before clicking the “Finish” button.
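The toggle behavior makes this reconstruction a short fold over the click sequence. A minimal sketch (the place-value labels stand in for digits already identified from click coordinates):

```javascript
// First click on a digit selects it, a repeated click deselects it,
// so the final selection is whatever survives an odd number of clicks.
function selectedDigits(clickedDigits) {
  const selected = new Set();
  for (const d of clickedDigits) {
    if (selected.has(d)) selected.delete(d);
    else selected.add(d);
  }
  return [...selected].sort();
}
```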

On their first encounter with this task, about a third of the children made a mistake in it. Some of them, as expected, confused tenths with tens, but other errors were more surprising. For example, 7% of children marked both tens and tens of thousands. Another 5% conscientiously added tenths to this list as well. 1.5% of children marked all the digits.

The task interface was modified to allow only one digit to be selected: when you click a new digit, the previous selection is removed. In the new version of the assignment, the error rate dropped to 20%, and students can better grasp that the name of a place value corresponds unambiguously to the position of a digit in the number.
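The revised single-selection logic fits in one line. This sketch is our reconstruction; whether a repeated click on the same digit deselects it is an assumption on our part (here it does), and the function name is invented.

```javascript
// Single-select logic: clicking a new digit replaces the previous
// selection; clicking the already-selected digit clears it (assumed).
function clickDigit(currentSelection, digit) {
  return currentSelection === digit ? null : digit;
}
```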

Fractions

Another example is a task introducing children to the basic property of fractions. At the beginning of the assignment, students are shown an illustration in which a fraction is represented by a partially shaded figure.


This is how the beginning of the task looked before

Children must indicate what part of the figure is shaded. 88% of children handle this step without errors, writing “3” in the numerator. 9% of students write “1”: apparently they like gray more than green. Another 3% of children write “4”: well, indeed, none of those parts are white!

In the revised version of the card the question was changed; its new wording is “What part is green?” As a result, the number of errors dropped threefold, and now 96% of children reach the main content of the card without stumbling here out of the blue.

Results of the second attempt

We obtained interesting information and made useful improvements. But this way of investigating events demands very painstaking work from the analyst. To convert a sequence of clicks into an understandable course of a solution, you first need to study the card's layout and understand which element each click landed on. Second, you need to understand the logic of the card: where the student selects an element, where they remove the selection, where they swap elements. In effect, you have to duplicate the functionality of the card.

Of course, in the course of such investigations, functions for processing standard mechanics gradually accumulate (for example, “choosing one option from a horizontal row”). But the tasks are so diverse that fully automating this process is impossible. Moreover, the study of a specific card most often ends with the conclusion “everything is going according to plan”: children's errors are roughly as expected, and there are no interface difficulties either. On the one hand this indicates good work by the product team, but on the other it can be demotivating, since the effort seems to have been wasted.

With the help of elementary events, we studied what answers children give and how they arrive at them. Knowing students' answers is relevant in any assignment, but due to the huge variety of mechanics, it is very difficult to reconstruct answers from a sequence of small events. This led to the idea of creating a separate event: “the student gave an answer”.

What logs we collect now and what they give us

Each time the card checks the student's answer, we send an event with information about that answer. The event contains the following:

  • whether the answer is correct
  • the answer itself, that is, the current state of the card's active elements (what is entered in the input, which radio button is selected, which points are marked on the plane, and so on, depending on the task)
  • optionally, which stage of the assignment the student is currently at

Importantly, the card's code already contains a check of the student's answer, and the entire state at that moment is known. All that remains is to add one line sending this answer to the server. In this variant there is no need to duplicate the card's logic, which caused so many difficulties at the previous stage.

Information about the task stage is needed in cards with non-linear progression. For example, the student may have a choice: write down the answer to the problem immediately, or solve it step by step.
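Since the card already checks the answer, reporting it really is one extra call. The sketch below is our illustration of that idea; `checkAndReportAnswer` and every payload field name are assumptions, not the platform's actual API.

```javascript
// The card's existing answer check, extended with one call that sends
// the answer event: correctness, the state of the active elements, and
// (optionally) the current stage for non-linear cards.
function checkAndReportAnswer(cardState, expected, sendEvent) {
  const correct = cardState.input === expected;
  sendEvent({
    type: "answer",
    correct,
    // Current state of the card's active elements.
    answer: { input: cardState.input, selectedRadio: cardState.selectedRadio },
    // Optional: which stage of the assignment the student is at.
    stage: cardState.stage,
  });
  return correct;
}
```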

The accumulated statistics of such events gives us:

  1. A map of students' movement through the stages of the assignment. We see which stages are easy for children and where they have difficulties.
  2. Statistics of answers at each stage. This helps us see exactly what mistakes students make.
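Because the events share one format, the per-stage statistics can be computed generically. A sketch of such automatic processing, with illustrative field names (`stage`, `correct`, `answer`): per-stage counts of correct and wrong answers, plus the frequency of each wrong answer.

```javascript
// Aggregate uniform answer events into per-stage statistics.
function answerStats(events) {
  const stats = {};
  for (const e of events) {
    const s = stats[e.stage] || (stats[e.stage] = { correct: 0, wrong: 0, wrongAnswers: {} });
    if (e.correct) {
      s.correct += 1;
    } else {
      s.wrong += 1;
      // Count how often each distinct wrong answer occurs.
      const key = JSON.stringify(e.answer);
      s.wrongAnswers[key] = (s.wrongAnswers[key] || 0) + 1;
    }
  }
  return stats;
}
```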

Since the events have a single format, they can be processed automatically. Now, having released a new card, the next day we can see in a special application how children are coping with the tasks.



Typical spelling errors are evident

We include the sending of answer events in all new cards, and add it to old cards as they are reworked. Now every employee involved in creating tasks can see what comes easily to students and what causes difficulties.
