Typing accuracy is unintuitive
- math
I was playing the demo of Final Sentence, a last-man-standing typing game. The game shows you some of your statistics, like WPM (words per minute) and your accuracy. Accuracy tells you what percentage of your keystrokes were correct. This got me thinking about accuracy and made me realize that it is not always a very useful metric in its usual form.
I show an alternative format for displaying accuracy information that I find more intuitive and representative. (I highly doubt I'm the first to propose something like this.)
Below I have listed a few accuracy values and their conversions to the alternative format.
- 50% accuracy is 1 error per 2 keystrokes.
- 75% accuracy is 1 error per 4 keystrokes.
- 90% accuracy is 1 error per 10 keystrokes.
- 95% accuracy is 1 error per 20 keystrokes.
- 99% accuracy is 1 error per 100 keystrokes.
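The conversion behind this list is just one over the error rate. As a quick sanity check, here is a minimal Python sketch of it; the function name is mine, not something from the game.

```python
def errors_per_keystroke(accuracy: float) -> float:
    """Convert an accuracy fraction (e.g. 0.95) to N in '1 error per N keystrokes'."""
    if accuracy >= 1.0:
        raise ValueError("Perfect accuracy: there are no errors to space out.")
    return 1.0 / (1.0 - accuracy)

for acc in (0.50, 0.75, 0.90, 0.95, 0.99):
    print(f"{acc:.0%} accuracy -> 1 error per {errors_per_keystroke(acc):g} keystrokes")
```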
For example, a typist going from 95% accuracy to 99% accuracy can type about 80 more correct characters between errors (at 95% you average 19 correct keystrokes between errors, at 99% you average 99). That is a huge improvement, yet only a 4 percentage point difference in accuracy. Meanwhile, going from 50% to 95%, a big 45 percentage point jump, corresponds to only 18 more correct characters between errors.
The better you become, the higher your accuracy, but the less your improvements will be reflected in changes to your accuracy. If you make 1 error every 5 keystrokes and improve to 1 error every 6, your accuracy goes from 80% to 83.33%. But if you make 1 error every 100 keystrokes and improve to 1 error every 101, your accuracy goes from 99% to 99.01%. That 0.01 percentage point change feels pointless even though it is real progress!
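To make the diminishing returns concrete, here is another small sketch (again just an illustration, not anything the game computes) that shows how accuracy moves when you stretch the gap between errors by a single keystroke:

```python
def accuracy(keystrokes_per_error: int) -> float:
    """Accuracy when exactly 1 out of every `keystrokes_per_error` keystrokes is an error."""
    return 1.0 - 1.0 / keystrokes_per_error

for n in (5, 100):
    before = accuracy(n)
    after = accuracy(n + 1)
    print(f"1 error per {n} -> 1 error per {n + 1}: "
          f"{before:.2%} -> {after:.2%} (+{(after - before) * 100:.2f} points)")
```

The same one-keystroke improvement prints +3.33 points in the first case and +0.01 points in the second.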
A benefit of using accuracy is that the value is bounded (0%-100%), whereas the alternative format is not. Also, with the alternative format, if you never mistype (unlikely, but still) you need a special value to display.
Now, I am not trying to say that accuracy is a bad metric, but its nonlinear nature can be unintuitive. And for a typing game, I think showing accuracy as "1 error per x keystrokes" is better.
