Preliminary thoughts on longitudinal k-means for bibliometric trajectories

I read with great interest Baumgartner and Leydesdorff's article* on group-based trajectory modeling (GBTM) of bibliometric trajectories, and I immediately wanted to try it. They used SAS or something like that, though, and I wanted R. I fooled around with this for a while last year, but I couldn't get it going in the R package for GBTM.**

Later, I ran across a way to do k-means clustering for longitudinal data - for trajectories! Cool. I actually understand the math a lot better, too.

Maybe I should mention what I mean by trajectories in this case. When you look at citations per year for articles in science, there's a typical shape... a peak at year 2-3 (it depends on the field), and then it slacks off to a pretty flat tail. It turns out there are a few other typical shapes you see regularly. One is the sleeping beauty - it goes along quietly and then gets rediscovered and all of a sudden has another peak; maybe it turns out to be useful for computational modeling once computers catch up. Another is the workhorse paper that just continues to be useful over time and takes a steady strain - maybe it's a really nice review of a phenomenon. There may be five different shapes? I don't think anyone knows yet, for sure.

So instead of the other dataset I was playing with last year - something like 1,000 articles from MPOW - I'm playing with articles from MPOW that were published between 1948 and 1979 and that were identified in a 1986 article as citation classics. 22 articles. I downloaded the full records for their citing articles and then ran an R script to pull out the PY (publication year) of the citing articles (I also pulled out the cited references and did a fractional times-cited count, but that's another story). I cut off the year each article was published, and then kept the next 35 years for each of the articles. That runs up to 2015 for a couple of them, but I don't think that will matter a lot as we're a ways into 2016 now.
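If you want to try this at home, the prep step looks roughly like the sketch below. This isn't my exact script; it assumes the citing records are in a data frame citing with a cited_id column (which classic each record cites) and a PY column, plus a named vector pub_year holding each classic's own publication year - all hypothetical names.

```r
## Sketch of the data prep (citing, cited_id, PY, pub_year are hypothetical names)
## citing:   one row per citing record, with columns cited_id and PY
## pub_year: named numeric vector, publication year of each of the 22 classics

years_after <- citing$PY - pub_year[as.character(citing$cited_id)]

# drop year 0 (the publication year itself) and keep the next 35 years
keep <- years_after >= 1 & years_after <= 35
traj <- as.matrix(table(citing$cited_id[keep],
                        factor(years_after[keep], levels = 1:35)))
# traj is now a 22 x 35 matrix of citations per year after publication
```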

Loaded it into R, plotted the trajectories straight off:

[Figure: raw citation trajectories for the 22 articles]

Looks like a mess, and there are only 22!
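For the record, getting that far with the kml package is something like this (a sketch, assuming the traj matrix from above):

```r
library(kml)  # also loads longitudinalData

# wrap the trajectory matrix in the object kml works on
cld <- clusterLongData(traj  = traj,
                       idAll = rownames(traj),
                       time  = 1:35)

# quick base-R look at the raw trajectories
matplot(t(traj), type = "l", lty = 1,
        xlab = "Years after publication", ylab = "Citations per year")
```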

Let's look at 3 clusters:

[Figure: trajectories grouped into 3 clusters]

Ok, so look at the percentages. 4% is one article. This is a very, very famous article - you can probably guess it if you know MPOW. Then the green cluster is probably the workhorses. The majority are the standard shape.
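Running the clustering and pulling up a given partition is just a couple of calls - something like this (by default kml() tries 2 through 6 clusters, re-running k-means 20 times for each):

```r
# run k-means for 2..6 clusters (the default), keeping the best partitions
kml(cld)

# plot the trajectories colored by the best 3-cluster partition
plot(cld, 3)
```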

Let's look at 4 clusters:

[Figure: trajectories grouped into 4 clusters]

You still have the one crazy one here. Something like 5 workhorses. The rest are variations on the normal spike. Some have a really sharp spike and then not much after (these were the latest ones in the set - the author didn't have enough distance to see what they would do). Others have a normal spike and then go pretty flat.
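To see which article landed in which cluster, rather than eyeballing the plot, there's getClusters():

```r
# cluster assignments ("A", "B", ...) for the 4-cluster partition
split(rownames(traj), getClusters(cld, 4))
```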

So I let it do the default and calculate with 2, 3, 4, 5, and 6 clusters. When you get above 4, you just add more singletons. The article on kml*** says there's no absolute way to identify the best number of clusters, but they give you a bunch of quality criteria, and if they all agree, Bob's your uncle.

[Figure: quality criteria for each number of clusters]

Bigger is better (they normalize and flip some of the criteria so you can read them all the same way). Well, nuts. The criteria that look at compactness of the clusters divided by how far apart they're spaced (the first 3, I think?) totally disagree with the fourth, which is more like distance from the centroids, or something like that. I don't know. I probably have to look at that section again.
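That plot comes from plotAllCriterion(), which lives in longitudinalData (loaded along with kml):

```r
# all the quality criteria at once, standardized so that bigger = better
plotAllCriterion(cld)
```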

Looking at the data, it doesn't make sense at all to do 5 or 6. Does 4 add information over 3? I think so, really. Of course with this package you can use different distance measures, different starting conditions, and different numbers of redrawings and iterations - see the sketch below.
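Those knobs live in kml()'s nbRedrawing argument and in parALGO() - a sketch with arbitrary values, not a recommendation:

```r
# re-run with a different distance, starting condition, and more redrawings
kml(cld,
    nbRedrawing = 50,
    parAlgo = parALGO(distanceName = "manhattan",
                      startingCond = "randomAll",
                      maxIt        = 500))
```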

What practical purpose does this serve? Dunno? I really think it's worth giving workhorse papers credit. A good paper that continues to be useful... makes a real contribution, in my mind. But is there any way to tell one of those apart from a mediocre paper with a lower spike, short of waiting 35 years? Dunno.


*Baumgartner, S. E., & Leydesdorff, L. (2014). Group-based trajectory modeling (GBTM) of citations in scholarly literature: Dynamic qualities of "transient" and "sticky knowledge claims". Journal of the Association for Information Science and Technology, 65(4), 797-811. doi:10.1002/asi.23009 (see arXiv)

** There are interesting articles on GBTM; it comes out of criminology, where it's used to look at recidivism. There's an R package for it as well.

*** Genolini, C., Alacoque, X., Sentenac, M., & Arnaud, C. (2015). kml and kml3d: R Packages to Cluster Longitudinal Data. Journal of Statistical Software, 65(4), 1-34. Retrieved from http://www.jstatsoft.org/v65/i04/
