It is a slightly more robust measure, but it is still silly because 90% of citations are shallow: most authors haven't even read the paper they are citing. We tend to cite famous authors and famous venues in the hope that some of the prestige will get reflected. (Daniel Lemire)

Unlike me, Daniel Lemire doesn't just point out the inadequacy of citation counting. He proposes to do something about it.
We have the technology to measure the usage made of a cited paper. Some citations are more significant: for example, a citation can be an extension of the cited paper. Machine learning techniques can measure the impact of your papers based on how much following papers build on your results. (Daniel Lemire)

He's starting a project to develop such an approach, but he needs your help (if you've published one or more scientific papers). He needs you to head over to his site and fill out a short form. That form will give him and his collaborators the data they need to start building textual analysis tools for automated analysis of which papers have the largest influence on how a field develops. Please head over and help him out.
In case you want to see the link before you click on it, here it is:
1. The Wikipedia entry on impact factors has a good summary of the major criticisms, centering on the validity of the scores, editorial policies that can affect them, ways in which they can be manipulated, and ways in which they may be misused.