Consider what intelligence tests were originally created for: identifying children with developmental delays, the point being to express how far a child was from 'normal' (the original IQ was literally mental age divided by chronological age, times 100). They worked reasonably well for that - and at least moved the field away from clinical labels like 'idiot' and 'moron'.
Personally, as someone who both tests well on 'intelligence' tests and who studied psychometrics (test design), I consider scores above 100 not nonsense exactly, but not useful. Since today's tests aren't the same ones administered in the 50s, I'd expect the revisions to be efforts to re-normalize the tests against the current population - which makes comparisons across decades not really meaningful.
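To make the re-normalization point concrete, here's a minimal sketch of how norming works on the conventional mean-100 / SD-15 scale: a raw score is converted to an IQ via its z-score against a reference sample. The sample numbers below are invented for illustration - the point is only that 100 is defined as whatever the current norming population averages, so the scale re-centers every time the test is re-normed.

```python
from statistics import mean, stdev

def normed_iq(raw, norm_sample, mu=100, sigma=15):
    """Map a raw test score onto the conventional IQ scale
    (mean 100, SD 15) using a norming sample's statistics."""
    z = (raw - mean(norm_sample)) / stdev(norm_sample)
    return mu + sigma * z

# Hypothetical raw scores from a norming sample (mean is 50):
sample = [38, 42, 45, 47, 50, 52, 55, 58, 63]

print(round(normed_iq(50, sample)))  # a raw score at the sample mean maps to 100
```

If the population's raw performance drifts upward and the test is re-normed, the same raw score maps to a lower IQ - which is why cross-decade comparisons of the number itself tell you little.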
Admittedly, I studied test design back in the 70s and haven't kept up with it since, not even reading journal articles - so I'm at least four decades out of date.
But I hope someone current in the field might correct me if things have changed dramatically.