Making things besides journal articles available

(by Christina Pikas) Apr 18 2016

Communication is central to science, and the vast majority of it happens outside of peer-reviewed journal articles. Some informal scholarly communication is intended to be ephemeral, but in the past couple of decades more of it has been conducted in online, text-based media in a way that could be captured, saved, searched, and re-used. Often, it isn't.

Libraries have always had a problem with gray literature. Unpublished dissertations, technical reports, conference papers, government documents, maps, working documents... they are all some degree of difficult to find. Some say, "well, if it's good information it will be in the journal literature" or "if it's worth saving, it will be in the journal literature." But we know better: details are left out of methods sections, data are not included, negative results are underreported, etc. In some fields conference papers are as easy to find as journal articles, whereas in other fields they're impossible (and some of that is due to the level of review and the importance of that form of communication to the field).

Practically, if you get the idea for something from a blog post, then you need to attribute the blog post. If the blog post goes missing, then your readers are out of luck.

This is all in lead-up to a panegyric on the efforts of the John G. Wolbach Library of the Harvard-Smithsonian Center for Astrophysics with ADS, particularly Megan Potterbusch, Chloe Besombes, and Chris Erdmann, who have been working on a number of initiatives to archive this information and make it available, searchable, and citable.

Here is a quick listing of their projects:

Open Online Astronomy Thesis Collection, https://zenodo.org/communities/about/astrothesis/

Information about it is here: http://www.astrobetter.com/blog/2016/04/11/an-open-online-astronomy-thesis-collection

Even if your dissertation is in an institutional repository and is available from the university, this will make it easier to find. Also, you can link to your datasets and whatnot.

Conference Materials: http://altbibl.io/gazette/open-access-publishing-made-easy-for-conferences/

We have folks who have been very dissatisfied with the existing options for hosting conference proceedings. I know one group that went from AIP, where they had been for decades, to Astronomical Society of the Pacific, to IOP, and still weren't happy. They wanted to make the information available without it being super expensive. This may be an option for long-term access and preservation.

Informal astronomy communications: https://github.com/arceli/charter

This one is more for things like blog posts.

Research software: https://astronomy-software-index.github.io/2015-workshop/


All of this is pulled together by ADS (see also ADS Labs), which is a freely available research database for astronomy and related subjects (we are more interested in planetary science and solar physics at MPOW). PubMed gets all the love, but this is pretty powerful stuff.


Communications Theories - the continuing saga

(by Christina Pikas) Apr 16 2016

The dissertation was accepted by the grad school and is on its way to the institutional repository and PQ to be made available to all (I will link to it as soon as it's available). Yet I still fight the battle to own theory: if I'll never be a native speaker of it, I can at least become semi-fluent.

Late in the dissertation I identified this book: Theories and Models of Communication (2013). In Cobley P., Schulz P. J. (Eds.). Berlin: De Gruyter. I browsed it a bit on Google Books and then requested it from another library. I'm just getting the chance to look at it more carefully now. A lot is not new, but it is well-organized here.

Chapter 2:

Eadie, W. F., & Goret, R. (2013). Theories and Models of Communication: Foundations and Heritage. In P. Cobley, & P. J. Schulz (Eds.), Theories and Models of Communication (pp. 17-36). Berlin: De Gruyter.

Communication as a distinct discipline emerged after WWII. Theories and researchers came from psychology, sociology, philosophy, political science... I guess probably engineering and physics, too. Then again, physicists turn up everywhere 🙂

This chapter described 5 broad categories of approaches to communication

  1. communication as shaper of public opinion - this came from WWII propaganda work. Main dudes: Park, Lippmann, Lazarsfeld, Lasswell
  2. communication as language use - this is like semiotics. Main dudes: Saussure, Peirce
  3. communication as information transmission - this would be where you find the linear models like Shannon & Weaver as well as updates like Schramm and Berlo. From those came Social Learning/Social Cognitive Theory (Bandura), Uses and Gratifications, Uncertainty Reduction Theory (Berger and Calabrese), and eventually Weick, who we all know from the sensemaking stuff.
  4. communication as developer of relationships - Bateson, Watzlawick "interactional view", Expectancy Violations Theory (Burgoon), Relational Dialectics Theory (Baxter)
  5. communication as definer, interpreter, and critic of culture - this is where you get the critical theory (like critical race theory, etc.). Frankfurt School (Marcuse, Adorno, Horkheimer, Benjamin), Structuralism, Gramsci, Habermas

Chapter 3:

Craig, R. T. (2013). Constructing Theories in Communication Research. In P. Cobley, & P. J. Schulz (Eds.), Theories and Models of Communication (pp. 39-57). Berlin: De Gruyter.

"A scientific theory is a logically connected set of abstract statements from which empirically testable hypotheses and explanations can be derived." (p.39)

"Metatheory articulates and critiques assumptions underlying particular theories or kinds of theories" (p. 40)

He uses words in a different way than I think I learned. Like metatheory - his is like meta about theories, but I think other people may use it like overarching big mama theory with baby theories?

Anyhoo. He says there are these metatheoretical assumptions useful to understand the landscape of communications theories.

  1. about objects that are theorized (ontology)
  2. basis for claims of truth or validity (epistemology)
  3. normative practices for generating, presenting, using theories (praxeology)
  4. values that determine worth of a theory (axiology)

Ontology - what is communication? Basically transmission models or constitutive models.  "symbolic process whereby reality is produced, maintained, repaired, transformed" (Carey, 2009)

His constitutive metamodel of communication theories (these were described better in chapter 2, but reiterated by the author himself in 3)

  1. rhetorical - communication is a practical art
  2. semiotic - intersubjective mediation via signs
  3. phenomenological - experiencing otherness through authentic dialog (or perhaps BS - no it doesn't say that 🙂 )
  4. cybernetic - communications = information processing
  5. sociopsychological - communications = expression, interaction, influence
  6. sociocultural - communications = means to (re)produce social order
  7. critical - discursive reflection on hegemonic ideological forces and their critiques

Theory means something different in physics than it does in sociology. This is due to the objects of study and how and what we can know about them as well as by what values we judge the theory. Two main approaches to constructing theory in comms are: empirical-scientific and critical-interpretive.

Functions of a scientific theory: description, prediction, explanation, and control.

Two kinds of explanation: causal and functional. Communication explanatory principles: hedonistic (pleasure-seeking), understanding-driven, consistency-driven, goal-driven, process-driven, or functional (cites Pavitt, 2010).

Criteria to judge quality: empirical support, scope, precision, aesthetic (elegance), heuristic value.

Theory != model != paradigm. A model is a representation; a theory provides explanation. A paradigm is a standard research framework used in a particular field.

Epistemological assumptions.

  • Realist - underlying causal mechanisms can be known
  • Instrumentalist - scientific concepts need not correspond to real things; they are instruments that are useful in making predictions
  • Constructivist - phenomena can't be known independently of our theories  - paradigm determines how empirical data will be interpreted.

A classical issue is level of analysis - do you go biological or psychological or do you go more sociological? Small groups? Societies?

Also do you build the whole theory at once or add to it as you go along to build it up?

Critical-Interpretive - these come from the humanities: rhetoric, textual criticism, etc. "Purpose has been idiographic (understanding historical particulars) rather than nomothetic (discovering universal laws)" p. 49

Interpretive. Methods (praxeology): conversation analysis, ethnography, rhetorical criticism. These emphasize heuristic functions of theory - not generalizable causal explanations, but conceptual frames to assist in interpreting new data. It's accepted to use multiple theories to better understand "diverse dimensions of an object" instead of insisting on one right path. Carbaugh and Hastings (1992) give 4 phases of theory construction:

  1. developing a basic orientation to communication
  2. conceptualizing specific kinds of communicative activity
  3. formulating the general way in which communication is patterned within a socioculturally situated community
  4. evaluating the general theory from the vantage point of the situated case (p.51)

Critical. The purpose of critical theory is social change.

Anyway, more to follow as I hopefully continue on in the book.


Notes from Dan Russell Advanced Skills for Investigative Searching

(by Christina Pikas) Apr 15 2016

This class was held 4/15/2016 at University of Maryland at the Journalism School, hosted by the Future of Information Alliance. Some information is here. Slides are here. Updated Tip Sheet is here.

I've previously taken his MOOC and enjoyed tips on his blog but things change so quickly it was good to get an update.

Of course I didn't bring my laptop so... these are from handwritten notes.

  • Capitalization doesn't matter, except for OR, where it's crucial. Don't use AND; it doesn't do anything.
  • Diacriticals do matter. e and é are basically interchangeable, but a and å are not. (Treating them as interchangeable does offend native speakers of languages that use them....)
  • If you need to search for emoji you'll have to use Baidu. This is relevant when searching for businesses in Japan, for example.
  • filetype: works for any extension. If you're looking for datasets you might use filetype:csv . Regular Google searches don't search Google Docs; you'll need to search those separately.
  • site: is different if you use nyc.gov, www.nyc.gov, or .nyc.gov . To be most general, use site:.nyc.gov ; that . after the : acts like a * for subdomains (see the combined example after this list).
  • There is no NOT. Instead use -<term>.  No space between the minus and the term.
  • Synonyms are automatic. Use quotes around a single term to search it verbatim (also turns off spell check for that term). If quotes are around a phrase, it does not do a verbatim search.
  • There are no stop words
  • inurl:   ... this is useful if pages have a certain format like profile pages on Google Plus
  • If you want the advanced search screen, click on the gear to select it. The gear is in the upper right-hand corner. That's the only way to get limiting by region (region limiting isn't always by domain), number search, or language search. Some advanced search features can also be reached using the dropdown boxes after searching, or by using operators like inurl: and filetype:
  • related:<url> gets you sites with term overlap (not linking/linked similarity).
  • Google custom search engine  - lets you basically OR a bunch of site: searches to always search across them.
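Putting a few of those operators together (a made-up example of mine, not one from the class):

    site:.nyc.gov filetype:pdf "water quality" -draft

The leading dot catches subdomains, filetype: limits results to PDFs, the quotes keep the phrase together, and the minus drops pages mentioning drafts.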

Image Search

  • Tabs across the top of results for topic clusters found
  • Search by image - click on camera and then point to or upload image. Can drag an image in or control click on an image. After search can then add in terms to narrow to domain.
  • Example - find a tool in the basement, take a picture on a white background with it in a normal orientation, then search to find it in catalogs, etc.
  • Crop images to the salient bit.
  • On mobile devices the standard search is actually a Google appliance search - not as powerful. Open Chrome and search from there if you need more.

Other notes

  • Things are changing all the time because of the adversarial arms race with search engine optimization people.
  • link:   was removed this week.
  • Result counts are an estimate. When you narrow, you sometimes get more results because the search starts with only the first tier of the index. The first tier has millions of results in it - the ones that have been assessed as highest quality. If it doesn't find enough in the first tier - like when you narrow a lot - it will bump down to the second tier, with billions more results.
  • consider using alerts.
  • to find any of these services - just Google for them
  • Google Trends is interesting. You can narrow by time or region. Also look at the suggestions when searching. You can search for an entity or for a search term. Remember trends are worldwide by default.
  • Google Correlate - example: Spanish tourism authorities want to know what UK tourists are looking for. Find the search trend for Spain and tourism, and see what keywords used by UK searchers correlate with it.
  • Country versions are more than just languages. Consider using a different country version to get a different point of view.
  • Wikipedia country versions are useful for national heroes and also controversial subjects (example: Armenian genocide)
  • define   (apparently no : needed)

I think all librarians should probably take his class. Good stuff.


Notes from International Symposium on Science of Science 2016 (#ICSS2016) - Day 2

(by Christina Pikas) Mar 30 2016

This day's notes were taken on my laptop - I remembered to bring a power strip! But, I was also pretty tired, so it's a toss up.


Luis Amaral, Northwestern

What do we know now?

Stringer et al JASIST 2010 distribution of number of citations

25% of papers overall in WoS (1955-2006) haven't been cited at all, yet for particular journals (e.g., Circulation) there may be no papers that haven't been cited.

Stringer et al. PLoS ONE 2008 – set of papers from a single journal

Discrete log normal distribution – articles published in a journal in a year

Works well for all but large, multidisciplinary journals – Science, Nature, PNAS, but also PRL and JACS

For most journals takes 5-15 years to reach asymptotic state

Moreira et al PLOS ONE 2015 – set of papers from a department. Also discrete log normal.

Also did work on significant movies - citations using IMDB connections section (crowd sourced annotation of remakes, reuse of techniques like framing, references/tributes, etc.)

Brian Uzzi, Northwestern

Age of Information and the Fitness of Scientific Ideas and Inventions

How do we forage for information – given a paper is published every 20 minutes – such that we find information related to tomorrow’s discoveries?

He’s going to show WoS, patents, law and how pattern works.

Foraging with respect to time (Evans 2008, Jones & Weinberg 201?)

Empirical strategies of information foraging: some papers cite references tightly packed by year, some have high mean age, some high age variance…

Average age of information (mean of the citing paper's PY minus the PYs of its cited articles)

Low mean age, high age variance is most likely to be tomorrow’s hits (top 5% cited in a field)
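To make that measure concrete, here's a tiny made-up illustration in R (mine, not from the talk):

    # hypothetical 2016 paper citing six references
    pub_year    <- 2016
    cited_years <- c(2015, 2014, 2014, 2010, 1998, 1972)

    ages <- pub_year - cited_years  # age of each cited reference
    mean(ages)  # mean age of information: low = working near the research front
    var(ages)   # age variance: high = also reaching back to older work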

Tried this same method in the patent office - inventors don't pick all the citations; the examiner assigns citations. Patents have the same hotspot.


Audience q: immediacy index, other previous work similar…

A: they mostly indicate you want the bleeding edge. Turns out not really; you need to tie it to the past.

Cesar Hidalgo, MIT

Science in its Social Context

Randall Collins "the production of socially decontextualized knowledge" "knowledge whose veracity doesn't depend on who produced it"

But science is produced in a social context

He is not necessarily interested in science for science's sake but rather, how people can do things together better than they can do individually.

What teams make work that is more cited?

Several articles show that larger teams produce work that is more cited, but these papers were disputed. Primary criticism: other explanatory factors, like larger things being more cited, more connected teams, self-promotion/self-citation with more authors, and cumulative advantage - after you get one paper into a high-impact journal, it's easier to get more in.

Various characteristics – number authors, field, JIF, diversity (fields, institution, geographic, age),

Author disambiguation (used Google Scholar – via scraping)

Connectivity – number of previous co-authorship relationships

Collaboration negative fields vs. collaboration positive fields

On average, the more connected the team, the more cited the paper. Interaction between JIF and connectivity. Weak but consistent evidence that larger and more connected teams get cited more. Effects of team composition are negligible compared to area of publication and JIF.


How do people change the work they do?

Using Scholar 99.8%, 97.6% authors publish in four or more fields… typically closely related fields

Policy makers need to assign money to research fields – what fields are you likely to succeed in?

Typically use citations but can’t always author in fields you can cite (think statistics)

Use career path? Fields that cite each other are not fields authors traverse in their career path.

Q: is data set from Google Scholar sharable?

A: He’s going to ask them and when his paper is out and then will

Guevara et al. (under review) arxiv.org/abs/1602.08409

Data panel

Alex Wade, Microsoft Research – motivation: knowledge graph of scholarly content. Knowledge neighborhood within larger knowledge graph usable for Bing (context, and conversations, scaling up the knowledge acquisition process), Cortana, etc. Can we use approaches from this field (in the tail) for the web scale? Microsoft Academic Graph (MAG). MS academic search is mothballed. Now on Bing platform building this graph – institutions, publications, citations, events, venues, fields of study. >100M publications. Now at academic.microsoft.com  - can see graph, institution box. Pushed back into Bing – link to knowledge box, links to venues, MOOCs, etc. Conversational search… Cortana will suggest papers for you, suggest events. Aka.ms/academicgraph

[aside: has always done better at computer science than any other subject. Remains to be seen if they can really extend it to other fields. Tried a couple of geoscientists with ok results.]

James Pringle, Thomson Reuters – more recent work using the entire corpus. Is the Web of Science up to it? 60 M records core collection. Partnered with regional citation databases (Chinese, SciELO, etc). "One person’s data is another person’s metadata." Article metadata for its own use. Also working with figshare and others. Building massive knowledge graph. As a company interested in mesolevel. Cycle of innovation. Datamining, tagging, visualization… drug discovery…connection to altmetrics… How do we put data in the hands of who needs it. What model to use? Which business model?

Mark Hahnel, Figshare

Figshare for institutions – non-traditional research outputs, data, video … How can we *not* mess this up? Everything you upload can be tracked with a DOI. Linked to GitHub. Tracked by Thomson Reuters data cite database. Work with institutions to help them hold data. Funder mandates for keeping data but where’s the best place?

Funders require data sharing but don’t provide infrastructure.

Findable, interoperable, usable, need an api … want to be able to ask on the web: give me all the information on x in csv and get it. Can’t ask the question if data aren’t available.

Need persistent identifiers. Share beta search.

Daniel Calto, Research Intelligence, Elsevier

Data to share – big publisher, Scopus, also Patent data and patent history,

Sample work: comparing cities, looking at brain circulation (vs. brain drain) – Britain has a higher proportion of publications by researchers only there for 2 years  - much higher than Japan, for example

Mash their data with open public information.

Example: mapping gender in Germany. Women were more productive in physics and astronomy than men. A full global report is coming on the Elsevier Research Intelligence web page.

Panel question: about other data besides journal citations

Hahnel: all sorts of things including altmetrics

Pringle: usage data  - human interactions, click stream data, to see what’s going on in an anonymous way. What’s being downloaded to a reference manager; also acknowledgements

Calto: usage data also important. Downloading an abstract vs. downloading the full text - interpreting that is still difficult. How are academic papers cited in patents?

Afternoon:

Reza Ghanadan, DARPA

Simplifying Complexity in Scientific Discovery (aka Simplex)

DSO is in DARPA, like DARPA’s DARPA

Datafication > knowledge representation > discovery tools

Examples: neuroscience, novel materials, anthropology, precision genomics, autonomy

Knowledge representation

Riq Parra – Air Force Office of Science Research

(like the Army Research Office and ONR); their budget is ~$60M, all basic research (6.1)

All Air Force 6.1 money goes to AFOSR

40 portfolios – 40 program officers (he’s 1 of 40). They don't rotate like NSF. They are career.

Air Space, Outer Space, Cyber Space.

Some autonomy within agency. Not panel based. Can set direction, get two external reviews (they pick reviewers), talk a lot with the community

Telecons > white papers > submissions > review > funding

How to talk about impact of funding? Mostly anecdotal - narratives like transitions. Over their 65 years they've funded 78 Nobel Prize winners, on average 17 years prior to selection.

Why he’s here – they do not use these methods to show their impact.  He would like to in spirit of transparency show why they fund what they fund, what impact it has, how does it help the Air Force and its missions.

Ryan Zelnio, ONR

Horizon scan to see where ONR Global should look, where to spend attention and money, and to assess the portfolio

global technology awareness quarterly meetings

20-30 years out forecasting

Bibliometrics is one of a number of things they look at. Have qualitative aspects, too.

Need more in detecting emerging technologies

Dewey Murdick, DHS S&T

All the R&D (or most) for the 22 agencies that merged into DHS. Nearer term than an ARPA: ready within months to a couple of years. R&D budget $450M … but divide it over all the mission areas and it's about enough to buy everyone a Snickers.

Decision Support Analytics Mission – for big/important/impactful decisions. Analytics of R&D portfolio.

Establishing robust technical horizon scanning capability. Prototype anticipatory analytics capability.

Brian Pate, DTRA

Awareness and forecasting for C-WMD Technologies

Combat support agency – 24x7 reachback capability. Liaison offices at all US Commands.

6.1-6.3 R&D investments.

Examples: Ebola response, destruction of chemical weapons in Syria, response to Fukushima.

Low-probability events with high consequences. No human studies. Work with DoD agencies, DHS, NIH, others.

Move from sensing happening with state actors to anticipatory, predicting, non-state actors.

Deterrence/treaty verification, force protection, global situational awareness, counter wmd

BSVE – biosurveillance architecture: cloud-based, social, self-sustaining, with pre-loaded apps

Transitioned to JPEO-CWD – wearable CB exposure monitor

FY17 starting DTRA tech forecasting

Recent DTRA RFI – on identifying emerging technologies.

Audience q: Do you have any money for me?

Panel a: we will use your stuff once someone else pays for it

Ignite talks - random notes

Forecite.us

Torvik:

Abel.lis.illinois.edu

Ethnea - instance based ethnicity, Genni (JCDL 2013), Author-ity (names disambiguated)

Predict ethnicity, gender, age

MapAffil - affiliation geocoder

Ethnicity-specific gender over time using 10M+ PubMed papers


Larremore: Modeling faculty hiring networks


Bruce Weinberg, Ohio State

Toward a Valuation of Research

IRIS (Michigan) – people-based approach to valuing research. People are the vectors by which ideas are transmitted, not disembodied publications.

- CIC/AAU/Census

Innovation in an aging society – aging biomedical research workforce

Data architecture

  • bibliometric
  • dissertations
  • web searches
  • patents
  • funding
  • star metrics (other people in labs), equipment, vendors
  • tax records
  • business census

Metrics for transformative work

  •  text analytics
  • citation patterns from WoS

Impact distinct from transformative. Mid-career researchers moving more into transformative work.

Some findings not fully captured in my notes: how women PhD graduates are doing (same positions, paid slightly more, otherwise held back by family). PhD graduates in industry are staying in the same state and making decent money (some non-negligible proportion are in companies with median salaries >$200k ... median.)

John Ioannidis, Stanford

Defining Meta-research: an evolving discipline

- how to perform, communicate, verify, evaluate, and reward science

- papers in PLOS Biology and JAMA


Notes from International Symposium on Science of Science 2016 (#ICSS2016) - Day 1

(by Christina Pikas) Mar 28 2016

This conference was held at the Library of Congress March 22 and 23, 2016. The conference program is at: http://icss.ist.psu.edu/program.html

I had the hardest time remembering the hashtag, so you may want to search for variants with more or fewer Cs and Ss.

This conference was only one track but it was jam-packed and the days were pretty long. On the first day, my notes were by hand and my tweets were by phone (which was having issues). The second day I brought a power strip along and then took notes and tweeted by laptop.

One thing I want to do here is to gather the links to the demo tools and data sets that were mentioned with some short commentary where appropriate. I do wish I could have gotten myself together enough to submit something, but what with the dissertation and all. (and then I'm only a year late on a draft of a paper and then I need to write up a few articles from the dissertation and and and and...)
Maryann Feldman SciSIP Program Director

As you would expect, she talked about funding in general and the program. There are dear colleague letters. She really wants to hear from researchers in writing - send her a one-pager to start a conversation. She funded the meeting.

Katy Börner Indiana University

She talked about her Mapping Exhibit - they're working on the next iteration and are also looking for venues for the current one. She is interested in information analysis/visualization literacy (hence her MOOC and all her efforts with Sci2 and all). One thing she's trying now is a weather report format. She showed an example.

She did something with descriptive models of the global scientific food web. Where are the sources and where are the sinks of citations?

Something more controversial was her idea of collective allocation of funding. Give each qualified PI a pot of money that they *must* allocate to other projects. So instead of a small body of reviewers, everyone in the field would be a reviewer. If a top PI got more than a certain amount, they would have to re-allocate the excess to other projects.

I'm not sure I got this quote exactly but it was something like:

Upcoming conference at National Academy of Science on Modeling Sci Tech Innovations May 16-18.

They have a data enclave at Indiana with research data they and their affiliates can use. (I guess LaRiviere also has, and has inherited, a big pile o' data? This has been a thought of mine... getting data in a format where I could have it lying around if I wanted to play with it.)
Filippo Radicchi Indiana University

He spoke about sleeping beauties in science. These are the articles that receive few citations for many years and then are re-discovered and start anew. This is based on this article. Turns out the phenomenon occurs fairly regularly and across disciplines. In some cases it's a model that then is more useful when computing catches up. In other cases it's when something gets picked up by a different discipline. One case is something used to make graphene. He's skeptical one of the top articles in this category is actually being read by people who cite it because it's only available in print in German from just a few libraries! (However, a librarian in the session *had* gotten a copy for a staff member who could read German).

I would love to take his 22M article data set and try the k-means longitudinal. If sleeping beauty is found often, what are the other typical shapes beyond the standard one?

He also touched on his work with movies - apparently using an oft-overlooked section of IMDB that provides information on references (uses same framing as x, adopt cinematography style of y, remakes z... I don't know, but relationships).

Carl Bergstrom University of Washington

The first part of his talk reviewed Eigenfactor work, which should be very familiar to this audience (well, except a speaker on the second day had no idea it was a new-ish measure that had since been adopted by JCR - he should update his screenshot - anyhoo).

Then he went on to discuss a number of new projects they're working on. Slides are here.

Where ranking journals has a certain level of controversy, they did continue on to rank authors (ew?), and most recently articles which required some special steps.

Cooler, I think, was the next work discussed. A mapping technique for reducing a busy graph to find patterns. "Good maps simplify and highlight relevant structures." Their method did well when compared to other methods and made it possible to compare over years. Nice graphic showing the emergence of neuroscience. They then did a hierarchical version. Also pretty cool. I'd have to see this in more detail, but it looks like a better option than the pruning and path methods I've seen to do similar things. So this hierarchical map thing is now being used as a recommendation engine. See babel.eigenfactor.org . I'll have to test it out to see.

Then (it was a very full talk) women vs. men. Men self-cite more, which means they have higher h-indices.
Jacob Foster UCLA (Sociology)

If the last talk seemed packed, this was like whoa. He talked really, really fast and did not slow down. The content was pretty heavy duty, too. It could be that the rest of the room basically knew it all so it was all review. I have read all the standard STS stuff, but it was fast.

He defines science as "the social production of collective intelligence."

Rumsfeld unknown unknowns... he's more interested in unknown knowns. (what do you know but do not know you know... you know? 🙂 )

Ecological rationality - rationality of choices depends on context vs rational choice theory which is just based on rules, not context.

Think of scientists as ants. Complex sociotechnical system. Information processing problem, using Marr's Levels.

  • computational level: what does the system do (e.g.: what problems does it solve or overcome) and similarly, why does it do these things
  • algorithmic/representational level: how does the system do what it does, specifically, what representations does it use and what processes does it employ to build and manipulate the representations
  • implementational/physical level: how is the system physically realised (in the case of biological vision, what neural structures and neuronal activities implement the visual system)

https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)#Levels_of_analysis

Apparently less studied in humans is the representational to hardware. ... ? (I have really, really bad handwriting.)

science leverages and tunes basic information processing (?).. cluster attention.

(incidentally totally weird Google Scholar doesn't know about "american sociological review" ? or ASR? ended up browsing)
Foster, J. G., Rzhetsky, A., & Evans, J. A. (2015). Tradition and Innovation in Scientists' Research Strategies. ASR 80, 875-908. doi: 10.1177/0003122415601618

Scientists try various strategies to optimize between tradition (more likely to be accepted) and innovation (bigger pay offs). More innovative papers get more citations but conservative efforts are rewarded with more cumulative citations.

Rzhetsky, A., Foster, I. T., Foster, J. G., & Evans, J. A. (2015). Choosing experiments to accelerate collective discovery. PNAS 112, 14569–14574. doi: 10.1073/pnas.1509757112

This article looked at chemicals in PubMed. Innovative meant new chemicals; traditional meant ones in the neighborhood of old ones. They found that scientists spend a lot of time in the neighborhood of established important ones, where they could advance science better by looking elsewhere. (hmmm, but... hm.)

The next bit of work I didn't get a citation for - not even enough to search - but they looked at JSTOR and word overlap. Probabilistic distribution of terms. Joint probability. (maybe this article? pdf). It looked at linguistic similarity (maybe?) and then export/import of citations. So ecology kept to itself while social sciences were integrated. I asked about how different social sciences fields use the same word with vastly different meanings - mentioned Fleck. He responded that it was true but often there is productive ambiguity of new field misusing or misinterpreting another field's concept (e.g., capital). I'm probably less convinced about this one, but would need to read further.

Panel 1: Scientific Establishment

  • George Santangelo - NIH portfolio management. Meh.
  • Maryann Feldman - geography and Research Triangle Park
  • Iulia Georgescu, Veronique Kiermer, Valda Vinson - publishers who, apparently, want what might already be available? Who are unwilling (except PLOS) or unable to share data/information quid pro quo in return for things. Who are skeptical (except for PLOS) that anything could be done differently? That's my take. Maybe others in the room found it more useful.

Nitesh Chawla University of Notre Dame

(scanty notes here - not feedback on the talk)

Worked with ArnetMiner data to predict h-indices.

Paper: http://arxiv.org/abs/1412.4754 

It turns out that, according to them, venue is key. So despite all of the articles that found poor correlation between JIF and an individual paper's likelihood of being cited... they say it's actually a pretty good predictor when combined with the researcher's authority. Yuck!

Janet Vertesi Princeton University

Perked up when I realized who she is - she's the one who studied the Rover teams! Her book is Seeing Like a Rover. Her dissertation is also available online, but everyone should probably go buy the book. She looked at more of a meso level of knowledge; she's really interested in teams. She found that different teams - even teams with overlapping membership - managed knowledge differently. The way instrument time (or really spacecraft maneuvering so you can use your instrument time) was handled was very different. A lot had to do with the move in the '90s toward faster...better...cheaper (example: MESSENGER). She used co-authoring networks in ADS and did community detection. Co-authorship shows team membership, as the same casts of characters write together. This field is very different from others, as publications are in mind while the instruments are being designed.

She compared Discovery-class missions: Mars Exploration Rover - collectivist, integrated; everyone must give a go-ahead for decisions; MESSENGER - design system working groups (oh, my handwriting!)

vs. flagship: Cassini - hierarchical, separated. Divided up sections of the spacecraft. Conflict and competition. Used WWII as a metaphor (?!!). No sharing, even among subteams, before release. Clusters are related to team/instrument.

New PI working to merge across - this did show in evolution of network to a certain extent.

Galileo is another flagship example. Breaks down into separate clusters; not coordinated.

Organization of teams matters.

I admitted my fan girl situation and asked about the engineers. She only worked with scientists because she's a foreign national (may not mean anything to my readers who aren't in this world but others will be nodding their heads).  She is on a team for an upcoming mission so will see more then. She also has a doctoral student who is a citizen who may branch off and study some of these things.
Ying Ding Indiana University

She really ran out of time in the end. I was interested in her presentation but she flew past the meaty parts.

Ignite Talks (15s per slide 2min overall or similar)

  • Filippo Menczer - http://scholarometer.indiana.edu/ - tool to view more information about authors and their networks. Browser extension.
  • Caleb Smith,
  • Orion Penner - many of us were absolutely transfixed that he dropped his note pages on the floor as he finished. It was late in the day!  He has a few articles on predicting future impact (example). On the floor.
  • Charles Ayoubi,
  • Michael Rose,
  • Jevin West,
  • Jeff Alstott - awesome timing, left 15 for a question and 15 for its answer. Audience didn't play along.

Lee Giles Penn State University

It was good to save his talk for last. A lot is going on besides keeping CiteSeer up and running. They do make their data and their algorithms freely available (see: http://csxstatic.ist.psu.edu/about ). This includes extracting references. They are also happy to add in new algorithms that make improvements and work in their system. They accept any kind of document that works in their parsers, so typically journal articles and conference papers.

RefSeer - recommends cites you should add

TableSeer - extracts tables (didn't mention and there wasn't time to ask... he talked a lot about this for chemistry... I hope he's working with the British team doing the same?)

Also has things to extract formulas, plots, and equations. Acknowledgements. Recommend collaborators (0 for me, sniff.) See his site for links.


Preliminary thoughts on longitudinal k-means for bibliometric trajectories

(by Christina Pikas) Mar 18 2016

I read with great interest Baumgartner and Leydesdorff's article* on group-based trajectory modeling of bibliometric trajectories, and I immediately wanted to try it. She used SAS or something like that, though, and I wanted R. I fooled around with this last year for a while and couldn't get it going in the R package for GBTM.**

Later, I ran across a way to do k-means clustering for longitudinal data - for trajectories! Cool. I actually understand the math a lot better, too.

Maybe I should mention what I mean about trajectories in this case. When you look at citations per year for articles in science, there's a typical shape: a peak at year 2-3 (depends on the field), which then slacks off and is pretty flat. Turns out there are a few other typical shapes you see regularly. One is the sleeping beauty - it goes along and then gets rediscovered and all of a sudden has another peak - maybe it turns out to be useful for computational modeling once computers catch up. Another is the workhorse paper that just continues to be useful over time and takes a steady strain - maybe it's a really nice review of a phenomenon. There may be 5 different shapes? I don't think anyone knows yet, for sure.

So instead of my other dataset I was playing with last year, with like 1000 articles from MPOW, I'm playing with articles from MPOW that were published between 1948 and 1979 and that were identified in a 1986 article as citation classics. 22 articles. I downloaded the full records for their citing articles and then ran an R script to pull out the PY of the citing articles (I also pulled out cited articles and did a fractional times-cited count, but that's another story). I cut off the year the article was published, and then kept the next 35 years for each of the articles. That runs up to 2015 for a couple, but I don't think that will matter a lot as we're a ways into 2016 now.
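The reshaping step is roughly this (a sketch under assumptions, not my actual script: citing_years is a list with one vector of citing-article publication years per classic, and pub_years holds each classic's own PY):

    # count citations in each of the 35 years after publication (year 0 dropped)
    m <- t(mapply(function(cites, py) tabulate(factor(cites - py, levels = 1:35)),
                  citing_years, pub_years))

Each row of m is then one article's 35-year citation trajectory.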

Loaded it into R, plotted the trajectories straight off:

[Plot: traj - all 22 raw trajectories.] Looks like a mess, and there are only 22!

Let's look at 3 clusters:

[Plot: the 3-cluster solution.] Ok, so look at the percentiles. 4% is one article. This is a very, very famous article. You can probably guess it if you know MPOW. Then the green cluster is probably the workhorses. The majority are the standard shape.

Let's look at 4 clusters:

[Plot: the 4-cluster solution.] You still have the one crazy one. Like 5 workhorses. The rest are variations on the normal spike. Some are a really sharp spike and then not much after (these were the latest ones in the set - the author didn't have enough distance to see what they would do). Others a normal spike, then pretty flat.

So I let it do the default and calculate with 2, 3, 4, 5, 6 clusters. When you get above 4, you just add more singletons. The article on kml*** says there's no absolute way to identify the best number of clusters but they give you a bunch of measurements and if they all agree, Bob's your uncle.

[Plot: the cluster quality criteria.] Bigger is better (they normalize and flip some of them so you can look at them like this). Well, nuts. So the methods that look at compactness of the clusters divided by how far apart they're spaced (the first 3, I think?) are totally different from the fourth, which is just distance from centroids or something like that. I don't know. I probably have to look at that section again.

Looking at the data, it doesn't make sense at all to do 5 or 6. Does 4 add information over 3? I think so, really. Of course with this package you can do different distance measurements and different starting points, and different numbers of iterations.
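For the record, the whole kml workflow amounts to just a few calls (a minimal sketch assuming the 22 x 35 count matrix m from above; function names are from the kml package described in the Genolini et al. article below):

    library(kml)

    # rows = articles, columns = years 1-35 after publication
    cld <- clusterLongData(traj = m, idAll = rownames(m), time = 1:35)

    # k-means for 2 through 6 clusters, 20 random restarts each
    kml(cld, nbClusters = 2:6, nbRedrawing = 20)

    plotAllCriterion(cld)   # the quality-criteria comparison plotted above
    plot(cld, 4)            # trajectories colored by the 4-cluster solution
    getClusters(cld, 4)     # cluster assignment for each article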

What practical purpose does this solve? Dunno? I really think it's worth giving workhorse papers credit. A good paper that continues to be useful... makes a real contribution, in my mind. But is there any way to determine that vs. a mediocre paper with a lower spike short of waiting 35 years? Dunno.


*Baumgartner, S. E., & Leydesdorff, L. (2014). Group‐based trajectory modeling (GBTM) of citations in scholarly literature: dynamic qualities of “transient” and “sticky knowledge claims”. Journal of the Association for Information Science and Technology, 65(4), 797-811. doi: 10.1002/asi.23009 (see arxiv)

** Interesting articles on it. It's from criminology and looks at recidivism. Package.

*** Genolini, C., Alacoque, X., Sentenac, M., & Arnaud, C. (2015). kml and kml3d: R Packages to Cluster Longitudinal Data. Journal of Statistical Software, 65(4), 1-34. Retrieved from http://www.jstatsoft.org/v65/i04/


So is walk-up access doable anymore?

(by Christina Pikas) Mar 13 2016

A trackback to my previous posts on ways to get literature discussed the author's inquiries to German universities about walk-up access. Walk-up access is just that: a non-affiliated person showing up in person at a library and having access to its subscriptions. The vast majority of our licenses do actually allow walk-up. Of the STEM things, the main outlier is SciFinder (Chemical Abstracts). It does not allow walk-up.

Thing is, I work in a research lab and in 2009 we moved behind the barrier so we do not have any place unaffiliated people can use our access. So I really haven't kept up with how hard or easy it is for people who are actually able to physically visit a research library.

I asked on one of the current incarnations of LSW (Library Society of the World) on Mokum. The responses were a pleasant surprise. Most had easy walk-up access. Some had computers that didn't require a login, whereas others provided short-term logins. Printing can be paid for in cash.

One of the NYC librarians said her library charges people for access to the building at all! Clearly not a land-grant institution. Another librarian is in a place where there have been multiple shootings in the actual library building. Completely understandable that they are locked down now.

Overall, at least this access does still seem possible for people who live somewhat near a research institution, particularly if it's a public university.


Search as Conversation

(by Christina Pikas) Mar 11 2016

Not a new idea but seemingly ignored by research databases, no?

I just read Beyond algorithms: Optimizing the search experience. Making search smarter through better human-computer interaction by Daniel Tunkelang (yes, I am turning into his fan girl, but it's because he does have interesting things to say!), posted in October 2015.

I immediately wanted to bookmark it, tweet it, e-mail it, print it, and wave it around... yes, this.

Some search tools - like Google and supposedly* like Siri with whatever lies beneath - do take a series of queries together to try to answer a bigger question.

Our databases do have facets. Some also have type-ahead or auto-suggest, but the results are often hilarious; they are not using query understanding techniques, just matching terms off a frequency list.

The one-search-box-but-then-segment-the-experience approach - I think this is where bento is trying to go... but doesn't really get there? We can for sure do better.

Anyway. Read the blog post.


*my Siri has gotten stupider. It really has. It used to provide better results.


Thoughts on alternatives to Sci-Hub

(by Christina Pikas) Mar 10 2016

There have been a lot of blog posts, news pieces, and listserv comments about Sci-Hub. Some have said that while they know it is wrong, they feel scientists have been forced into using the system because they have no alternatives for access. Some responses have been on the order of: we asked our favorite scientists at big US research institutions and they say they have access to everything they need, so why don't you? Or: we give away articles to the very poorest of countries (who might not even be able to take advantage of them because of poor connectivity), so that should be enough (what about the middle-range countries?). Or: you make a lot of money and your university has an endowment, so you surely can afford this journal and you're just stealing! Or: Jean Valjean didn't have access to bread, either, but that didn't mean stealing was right!

Others have repeatedly countered with the whole difference between stealing things (bread) and making copies that do not diminish the original (if possibly the market for it).

Anyhoo, what I really want to talk about here is the alternatives for closed access articles. Probably not an exhaustive list.

  • licensed access through your institution as part of a site-wide subscription (on campus, or via VPN/proxy from off)
  • interlibrary loan
  • license your own copy ($30-75)
  • individual subscription (through a society or just from the publisher)
  • "rent" access to view a copy for 24 hours
  • find copies self archived in institutional and disciplinary repositories, on their websites, and other random places
  • find copies illegally shared as part of course materials for another course (this happens for stuff I'm looking for pretty regularly, actually, particularly chapters from social sciences books)
  • contact author for copy
  • contact buddy, relative, etc., at another university to request
  • use walk up access at a local public university
  • use #icanhazpdf
  • use Sci-Hub

So let's look at hassle factor. Part of what goes into figuring out the hassle factor is how you identified the article in the first place and what network you're on.

At MPOW if you use Google Scholar or PubMed and you're on our network, you should be able to go right to the full text for the majority of things you're looking for because we have a lot of subscriptions. We have our IPs registered with Google so it points to our subscriptions and our link resolver. If you use our link resolver, it fills out the ILL form for you from there. Still, it is more convenient to get a pdf from/through Google than wait for ILL or for us to scan and e-mail you something.

What if you're off campus? A quick check of #icanhazpdf showed some people were asking because they were off campus. That, to me, seems like the height of laziness and inconsideration. Does their campus really have no remote access? The person who is sending it to you has to go through more effort than it would take you to VPN or use EZProxy.

One commentator heard from someone who does have access at work but couldn't be assed to use the library tools to locate it. Really? The search on Sci-Hub doesn't work (I'm told), so the best way to use it is through the DOI. I can put the DOI of an article into my FindIt tool and get a proxied link to the best source for full text immediately - even if it's at a 3rd-party aggregator. Legally. I can also put the PMID in. In fact, I have a plugin in my browser that automatically links DOIs to my link resolver.
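(To show the mechanics with a made-up hostname: a plugin like that just rewrites the DOI into a link-resolver URL, something like https://findit.example.edu/openurl?rft_id=info:doi/10.1002/asi.23009 , and the resolver works out the best licensed copy from there.)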

Ok, so you may not be at an organization that has all this set up. There are lots of industrial and government scientists who have very little access to the literature. Even if they do have access, they might not have the connecting tools.

In many places ILL is awful. Let's be quite honest. Another form. Asks for a lot of information. May have a different login. May take 2-3 weeks to arrive. It may be fax quality. There may be a cost associated. In one sociology class I was in as a student, they were going off on how bad it was: wrong article, missing pages, illegible copies... one guy put his request in like 5 times before getting a full, readable copy. He kept putting it in after a while just to see how many tries it would take! Your buddies on Twitter do not have to print, scan to fax quality, and then send.

I love how people say you can use your local public library. Mine is not going to ILL scholarly articles for you; they don't have that kind of budget or staff. I think it's getting harder to use walk-up access, too. If you have eduroam you can get on the network, but what if you're at a local small business? It's not like when the journals were in print.

I don't even know where I was going with this, except to say that #icanhazpdf has a point. Library systems need to get easier and get in the workflow, but scholars might also actually need to put some effort in to learn to do things the right way.


Who do I want to rescue me?

(by Christina Pikas) Mar 09 2016

DM has continued a meme - who do you want to rescue you?

These are not ranked, necessarily:

  1. Dr. Who
  2. Paw Patrol
  3. Mark Watney
  4. the guys in the Scott Lynch books (all of them, both men and women, not just Locke)
  5. Kvothe

