Understanding Your Scores
At the end of each session, you'll receive a NeuroTracker score. This score represents the speed at which you were able to successfully track all of your targets 50% of the time. It represents the upper limit, or breaking point, of your multiple object tracking speed. This is also referred to as a "speed threshold".
It is a snapshot of your mental ability at that moment. 📸
If you are tracking fewer than 4 targets, you will also receive a speed score. To provide an even comparison, speed scores are normalized to 4 targets, which gives your NeuroTracker score.
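As a rough illustration only, the sketch below shows how a speed score at fewer than 4 targets could be converted to a 4-target equivalent. The per-target scaling factor here is purely hypothetical and is not NeuroTracker's actual normalization formula.

```python
# Hypothetical illustration only: the real NeuroTracker normalization is not published here.
# This sketch assumes a fixed per-target scaling factor purely for demonstration.

TARGETS_REFERENCE = 4      # NeuroTracker scores are expressed at 4 targets
PER_TARGET_FACTOR = 0.85   # assumed scaling per missing target (illustrative only)

def normalize_to_four_targets(speed_score: float, targets_tracked: int) -> float:
    """Convert a speed score at `targets_tracked` targets to a hypothetical 4-target equivalent."""
    missing_targets = TARGETS_REFERENCE - targets_tracked
    return speed_score * (PER_TARGET_FACTOR ** missing_targets)

print(normalize_to_four_targets(2.0, 3))  # a 3-target speed score of 2.0 -> ~1.7 under this assumption
```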
The NeuroTracker score is a combined measure of your attention, working memory, information processing speed, and executive functions.
Since results relate to high-level cognitive abilities, increasing scores are a direct result of improved brain function! 🧠
The NeuroTracker task in 3D is a virtual simulation, so the speed of the balls represents a real-world speed across the user's field of view, measured as 68 centimeters per second at speed 1.0. For this measure to be accurate, users need to view their displays from a distance equivalent to the diagonal width of the display, which means sitting closer to smaller screens and further away from larger displays. This distance is also required to get the correct field of view to effectively stimulate peripheral vision.
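To make this concrete, here is a minimal sketch that computes the recommended viewing distance from a display's diagonal size and estimates the real-world ball speed for a given speed setting. It assumes the speed scales linearly from the 68 cm/s reference at speed 1.0; that linearity is an assumption for illustration.

```python
# Viewing distance should match the display's diagonal width (as stated above).
# Speed conversion assumes linear scaling from 68 cm/s at speed 1.0 (assumption for illustration).

CM_PER_INCH = 2.54
CM_PER_SECOND_AT_SPEED_1 = 68.0

def recommended_viewing_distance_cm(diagonal_inches: float) -> float:
    """Viewing distance equal to the screen diagonal, converted to centimeters."""
    return diagonal_inches * CM_PER_INCH

def approximate_ball_speed_cm_s(speed_setting: float) -> float:
    """Approximate real-world ball speed, assuming linear scaling from the speed-1.0 reference."""
    return speed_setting * CM_PER_SECOND_AT_SPEED_1

print(recommended_viewing_distance_cm(27))  # ~68.6 cm for a 27-inch display
print(approximate_ball_speed_cm_s(1.5))     # ~102 cm/s at speed 1.5, assuming linearity
```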
Since attention is impacted by a variety of factors, including sleep, stress, motivation, physical activity and more, it is normal for scores to fluctuate. Research suggests that if such influences lower scores on any given session, the training is still beneficial.
Do not be discouraged if you have a bad result. What’s most important is that you are improving your mental skills over time! 📈
Tip: try not to focus too much on high or low individual session scores - it is better to look at the average of the last three sessions to interpret progress.
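As a simple illustration of this tip, the sketch below averages the three most recent session scores to smooth out session-to-session fluctuation. The session history used here is made up.

```python
# Illustrative only: average the last three session scores to interpret progress,
# rather than reading too much into any single session.

def rolling_three_session_average(scores: list[float]) -> float:
    """Average of the three most recent session scores."""
    if len(scores) < 3:
        raise ValueError("At least three sessions are needed for an average.")
    return sum(scores[-3:]) / 3

recent_scores = [1.2, 0.9, 1.4, 1.1, 1.3]  # hypothetical session history
print(round(rolling_three_session_average(recent_scores), 2))  # 1.27
```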
Typically 3 sessions are used to establish a cognitive baseline, the average of which provides a validated measure of high-level cognitive abilities. The standardized scientific baseline is 3 x Core sessions at 4 targets with an 8-second tracking duration.
Initial Baseline
The recommended starting point for all users is to complete 3 Core sessions, the average of which provides an initial baseline. This measure is a reference point against which all future progress is compared. For this reason, NeuroTrackerX always displays it on the end user dashboard.
Current Baseline
A ‘Current Baseline’ is based on the last three Core sessions an individual has completed, which can be used to show learning effects when compared to an initial baseline. For example, improving from a baseline of 1.0, to 1.5, would represent a 50% improvement in learning. Because the NeuroTracker task is largely devoid of practice or technique related effects, this improvement represents a raw improvement in brain functions for this task.
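A minimal sketch of the comparison described above: given an initial baseline and a current baseline, it reports the relative improvement, matching the 1.0 to 1.5 example from the text (50%).

```python
def baseline_improvement_percent(initial_baseline: float, current_baseline: float) -> float:
    """Relative improvement of the current baseline over the initial baseline, in percent."""
    return (current_baseline - initial_baseline) / initial_baseline * 100

print(baseline_improvement_percent(1.0, 1.5))  # 50.0 -> a 50% improvement, as in the example above
```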
Tip: it is always useful to compare current scores with initial baseline to see the bigger picture. Even if you have a relatively low score it is likely a much higher level of performance than where you started out.
15-30 sessions are typically used to evaluate an individual’s learning rate. Rather than how high or low the scores are, the key factor is how much relative improvement is achieved over the sessions. A high learning rate is associated with high levels of neuroplasticity, meaning that the brain is better prepared to adapt in response to the mental demands placed on it.
A landmark NeuroTracker study published in Nature Scientific Reports showed that elite athletes have brains with superior capacities for learning, which could be a critical factor in why they can achieve such high levels of performance on the sports field.
A high learning rate is more important than high session scores, because it reveals that a user is benefitting from training in ways likely to improve real-world performance.
Core scores are the reference point for comparing scores with other types of sessions such as Overload, Selective and Dynamic. Each type of session places cognitive demands on different types of attention. For example, the Selective session emphasizes selective attention - the ability to filter out distractions and focus only on what's important. By comparing differences in scores to Core, you can gain insights into a user's particular attentional strengths and weaknesses, and adapt training programs accordingly. For instance, if a user has a much lower Dynamic than Core score, then adding more Dynamic sessions into their training will help them overcome attentional weaknesses in chaotic environments.
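As a rough sketch of how such a comparison could be organized, the code below flags session types that score well below the Core reference. The 85% cut-off used to flag a weakness is an assumed value for demonstration, not a NeuroTracker rule, and the scores are invented.

```python
# Illustrative comparison of session-type scores against Core.
# The 85% "weakness" cut-off is an assumed value for demonstration only.

WEAKNESS_THRESHOLD = 0.85

def find_attentional_weaknesses(core_score: float, other_scores: dict[str, float]) -> list[str]:
    """Return session types whose scores fall well below the Core reference score."""
    return [name for name, score in other_scores.items()
            if score / core_score < WEAKNESS_THRESHOLD]

scores = {"Overload": 1.3, "Selective": 1.4, "Dynamic": 0.9}  # hypothetical scores
print(find_attentional_weaknesses(core_score=1.5, other_scores=scores))  # ['Dynamic']
```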
Generally, adding dual-tasks to training will result in lower session scores. However, there are two key factors to take into account here: the complexity of the dual-task and the amount of training with it.
More complex dual-tasks will result in bigger drops in session scores; however, more training with the dual-task will reduce this drop.
As a general rule, if a dual-task score drops below 50% of a user's Current Baseline, then it is too difficult and a simpler dual-task should be used. On the other hand, if the dual-task score is within 10% of a user's Current Baseline, then they have mastered that task and are ready to progress to a more complex one.
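A minimal sketch of this rule of thumb, using the 50% and 10% cut-offs stated above (the scores passed in are hypothetical):

```python
def assess_dual_task(dual_task_score: float, current_baseline: float) -> str:
    """Classify a dual-task score against the Current Baseline using the rule of thumb above."""
    ratio = dual_task_score / current_baseline
    if ratio < 0.5:
        return "too difficult - switch to a simpler dual-task"
    if ratio >= 0.9:
        return "mastered - ready to progress to a more complex dual-task"
    return "appropriate difficulty - continue training with this dual-task"

print(assess_dual_task(0.6, current_baseline=1.5))  # too difficult (below 50% of baseline)
print(assess_dual_task(1.4, current_baseline=1.5))  # mastered (within 10% of baseline)
```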
In addition to a session score, more granular performance metrics can be viewed for any completed session. This can be accessed by clicking on the session point on the end user dashboard chart.
Highlight metrics include:
Consistency Score: a measure of how variable tracking speed performance was over the session. A low score here means that over the 20 trials of the session the user was successful at relatively high speeds, yet also was unsuccessful at relatively low speeds, suggesting susceptibility to attention lapses. This score tends to increase with the benefits of training over time.
Fastest Trial Score Success: the single highest successful trial speed of the session.
Lowest Trial Score Miss: the single lowest failed trial speed of the session.
Other highlights include a user’s personal milestone achievements specific to their training history, such as reaching a relatively high level of consistency.
Now let’s cover two micro analyses of session data.
Trial Success Breakdown
The results of each trial in a NeuroTracker session are categorized into three groups:
Perfect Trials: correct identification of all targets.
Near Misses: correct identification of all targets except one.
Significant Misses: incorrect identification of 2 or more targets.
The types of misses give insight into whether a user was close to a trial success, or mostly lost tracking overall.
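This categorization can be summarized in a small sketch based on how many targets were correctly identified in each trial; the trial results below are hypothetical.

```python
def categorize_trial(targets_total: int, targets_correct: int) -> str:
    """Categorize a trial by how many targets were correctly identified."""
    missed = targets_total - targets_correct
    if missed == 0:
        return "Perfect Trial"
    if missed == 1:
        return "Near Miss"
    return "Significant Miss"

# Hypothetical results from a few 4-target trials
for correct in (4, 3, 1):
    print(categorize_trial(targets_total=4, targets_correct=correct))
# Perfect Trial, Near Miss, Significant Miss
```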
Two key things to note:
Trial results are presented according to the high, medium or low speed they were attempted at. These speed references are automatically based on the user's Current Baseline.
Results toward the outer boundaries of the spider chart mean that more trials were completed in that category, and vice versa.
Overall, this data gives a snapshot of the distribution of trial results relative to the tracking speeds at which they were performed.
Average Response Time Per Trial
This metric measures how long it took a user to input answers on each trial of a session. Although answering quickly is not part of the NeuroTracker task and does not influence session score, it can provide useful additional insights.
Generally, users' response times will get faster with more training, which is likely related to increases in working memory capacity. Uncertainty in answering will also typically result in slower response times.
For a more detailed breakdown, the filters for Perfect Trials, Near Misses and Significant Misses can be selected to compare response times to trial results along with the precise time taken to input answers.
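As a small sketch of this breakdown, the code below averages response times per trial category. All of the times and category labels attached to trials here are invented for illustration.

```python
from collections import defaultdict

def average_response_time_by_category(trials: list[tuple[str, float]]) -> dict[str, float]:
    """Average the time taken to input answers, grouped by trial result category."""
    grouped: dict[str, list[float]] = defaultdict(list)
    for category, seconds in trials:
        grouped[category].append(seconds)
    return {category: round(sum(times) / len(times), 2) for category, times in grouped.items()}

# Hypothetical (category, response time in seconds) pairs from one session
session_trials = [("Perfect Trial", 2.1), ("Near Miss", 3.4),
                  ("Perfect Trial", 1.8), ("Significant Miss", 4.0)]
print(average_response_time_by_category(session_trials))
# {'Perfect Trial': 1.95, 'Near Miss': 3.4, 'Significant Miss': 4.0}
```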
Find more interesting details about scores in this blog post: