Archive for the 'knowledge management' category

An ephemeral platform, used for other than ephemeral purposes, and the death of Storify

As I say in my dissertation and elsewhere, informal scholarly communication in social media is both ephemeral and archival. Maybe this is new: some online traces intended for a limited number of recipients and for immediate use now have a longer life and wider reach. Some utterances in social media live on well after the originator intended (for good and for bad). But maybe it's not entirely new, as letters among scientists have certainly been preserved (some of these were no doubt written specifically for preservation purposes).

I've long been a fan of blogs for personal knowledge management, that is, for thinking through readings, partial results, and tutorials on how to do things. Blogs are easily searched, archived, migrated, and shared, and they don't enforce the artificial or proprietary structures found in other tools. However, I also know that long-term bloggers who have established a readership through careful, well-edited posts impose new barriers on themselves when using their blogs for this purpose. I found in my studies that some superstar bloggers almost entirely stopped blogging because they didn't want to post anything incomplete or partial, and there were too many other things to do.

I think this has been one of the motivating factors for the use of Twitter for long threads of stories and analysis. Twitter has great reach, immediacy, and interactivity... but at the expense of search (although it is certainly better than it was) and preservation. Which of us hasn't dug through our likes and retweets to try to find something interesting we saw ages ago?

We're using a platform specifically built for ephemeral communication for communication that should be saved and preserved.

So individuals who value this knowledge management function, or who appreciate careful analysis or good storytelling serialized over tens of tweets, have adopted Storify to gather, order, preserve, and contextualize the pieces. Storify added tools to make it a bit easier. Instead of Storify, you could embed individual tweets (though the embedding function calls back to Twitter, so it doesn't really preserve anything). You could <eek> screenshot. You could even just write it up and quote the text.

And Storify is going away this Spring. We do have notice, luckily, but we still have a problem. We need to back our stuff up - we need to back other people's stuff up. Not everything is of the same value to the originator as it is to someone else.

My plea - and it will go unheard - is to put things back into blogs which you then tweet. Or back your useful tweets up to a blog?
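For anyone taking the tweets-back-into-blogs route, the mechanics are simple. Here's a minimal sketch in Python, assuming you already have your tweets as plain dicts (say, pulled from Twitter's account-archive export); the field names and the `render_thread` helper are my own illustration, not any official API.

```python
from datetime import datetime

def render_thread(tweets, title):
    """Render a list of tweet dicts as a Markdown blog post.

    Each dict is assumed to have 'created_at' (an ISO date string),
    'text', and 'url' keys -- a simplified stand-in for whatever your
    archive export actually contains.
    """
    # Sort oldest-first so the thread reads in order.
    tweets = sorted(tweets, key=lambda t: t["created_at"])
    lines = [f"# {title}", ""]
    for t in tweets:
        date = datetime.fromisoformat(t["created_at"]).strftime("%Y-%m-%d %H:%M")
        lines.append(f"> {t['text']}")
        lines.append(f"> [{date}]({t['url']})")  # link back to the original
        lines.append("")
    return "\n".join(lines)

thread = [
    {"created_at": "2017-12-13T09:05:00", "text": "Second thought.",
     "url": "https://example.com/2"},
    {"created_at": "2017-12-13T09:00:00", "text": "First thought.",
     "url": "https://example.com/1"},
]
post = render_thread(thread, "Saved thread: Storify shutdown")
print(post.splitlines()[0])  # -> # Saved thread: Storify shutdown
```

The output is plain Markdown, so it pastes straight into most blog platforms and survives any future platform migration as plain text.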

FWIW, I'm trying to capture AGU meeting tweets and I'll load them into FigShare ... but the odds of some researcher capturing and saving your stuff is actually quite slim.

This post was inspired by a tweet with a thread of interesting points from her interlocutors:


3 responses so far

Bots, Mixed Initiative, and Virtual Personal Assistants

I've been trying to write this post for a while, but I'm finally just throwing up my hands on producing a well-done oeuvre in order to get the thing done.

When I saw Daniel Tunkelang's brief post on virtual assistants I was like, oh, that again. But there were some links, and doing my usual syntopical reading, I fell down the rabbit hole a bit.

Used to be that computer science was like "automate all the things." More automated, more better. Bates (1990) was all like wait a minute here, there are some things it makes sense to hand off and others it makes sense for the human to do. People do some things faster. People learn and explore and think by doing.  People need to control certain things in their environment. But other things are a hassle or can be easily done by a computer. What you don't want to do is to make the effort of supervising the automation so arduous that you're trading one hassle for another.

For quite a few years, there has been an area of research called "mixed initiative" that looks specifically at things like virtual assistants and automating where it makes sense without overburdening the user. As I was dabbling in this area a couple of years ago, I read some articles. It seemed weird to me, though, because I think most knowledge workers my age or younger probably don't know how to work with a living human assistant. I have never worked anywhere with a secretary who offloaded work from me. Never worked somewhere with someone to help me schedule meetings, type out correspondence, format articles, do my travel stuff, etc. I have been on teams with deliverables that were sent through an editor - but that was like a special technical writer. I suppose I would have to negotiate with an assistant what I would want him or her to do and then accept (within boundaries) that they might do things differently than I do. I would have to train them. Should I expect more of a virtual assistant?

All of this is in the back of my head when I started following the links.

So what do they mean by virtual assistants - they're hot, but what are they doing and do they work?

Scheduling meetings

  • Meekan is, apparently, a bot that takes an informal request within Slack and negotiates with other calendars to make an appointment.
  • x.ai is similar, but you cc Amy (a bot, but I like that she has a name), and she takes on the negotiation for you.

Project/Team Management (loosely construed)

  • Howdy will get feedback from team members and also take lunch orders. Seems sort of like some things I saw baked into Basecamp when I saw a demo. It's in Slack, too.
  • Awesome helps manage teams on Slack.


Travel, Shopping, ...

  • Assist does a few different things like travel and shopping.

General but often operating a device

  • Siri
  • Cortana
  • Amazon Alexa
  • Google Now (sorta)
  • Facebook M

A lot of us don't want to talk to our assistant, but to text them. One of the articles pointed to this.


When I talked to engineers back in the day about their personal information management, there were a lot of things they were doing themselves that it just seemed like they should be able to offload to someone who is paid less (Pikas, 2007). Likewise, I was talking to a very senior scientist who was spending hours trying to get his publication list right on the external site. Even though statements are routinely made to the contrary, it seems like work is pushed from overhead/enterprise/admin staff to the actual mission people - the scientists and engineers - in an attempt to lower overhead. It moves money around, sure, but it doesn't accomplish the goal. So here's an idea: if we really, really, really aren't going to bring back more overhead/enterprise/admin folks, are there bots we can build into our systems to ease the load?

If Slackbot watches you and asks you personal questions: isn't that cute. If Microsoft does: evil, die, kill with fire. If your employer does: yuck?



Bates, M. J. (1990). Where should the person stop and the information search interface start? Information Processing & Management, 26(5), 575-591. doi:10.1016/0306-4573(90)90103-9

Pikas, C. K. (2007). Personal information management strategies and tactics used by senior engineers. Proceedings of the Annual Meeting of the American Society for Information Science and Technology, Milwaukee, WI, 44, paper 14.

Comments are off for this post

When listening to the users may not be the best thing

Jan 16 2016 Published under information policy, knowledge management

At work we evaluated the fitness of one large collaboration platform for use by another group. The government was already funding this one big thing, and it made sense to see if it could be leveraged instead of starting from scratch, even though the potential user groups are extremely different.

The system we evaluated was carefully designed with lots of input from user groups, by well-meaning, competent people, using best practices from the field. GAO has fussed at them a few times over the years for the same things it always picks up on, and there are always questions about whether their system is used enough and how, and about what contracts they have let and for how much. They have a roadmap for development that is carefully developed in coordination with the users, and they use agile development with frequent small releases and quarterly larger releases. There's lots of training available (ad hoc, recorded, and live) as well as in-person presentations at conferences and the like. They have a bunch of case studies in which the system has had a pivotal role in supporting collaboration and solving a difficult problem for the users.

Sounds great, right? The only thing is that the actual system is pretty ugly and not all that functional - certainly not what we had been designing with our ambitious, state-of-the-art system. We asked about things like how access control is done, how information is organized and retrieved, how content management is done, what the portal does, and how it supports communication and collaboration... all fell very far short of our expectations. How could this be? We were looking at current features in products on the market - we even looked at products they have.

In my opinion (not anyone else's), it's all about their users and their governance. They have proposed many of the things we want in our system, and their users de-prioritize all of them and do not choose to fund them. You see, a lot is needed for really good content discovery - there's a lot of infrastructure, which is invisible to the user (see Star's work on infrastructure). There are a lot of humans developing and maintaining information organization schemes and building ways to ingest and process information so that search works. There are the policy requirements in a federated system like this to allow the various repositories to be searched. There's ongoing maintenance, user testing, ranking, boosting, and troubleshooting for even a decent search to work, not to mention full content discovery.

So the professionals propose projects to work on these things and improve them, but the users - who are experts in an ENTIRELY different area - don't get it and don't trust the professionals. And money is always limited. So the communication pieces aren't integrated. There's no fine-grained role-based access control. There's no way to search across the various things... But their users are happy and are getting EXACTLY what they asked for.

So how do you design governance and development for a massive collaboration system such that it is user-based and need-based, but you can still fund the infrastructure work needed to provide those functions for the users? I don't know. We laid off our taxonomist because management thought our search tool did all that itself - it doesn't. Clearly we don't know how to make the case, either.

Is there hope? If the two systems are joined, might the developers leverage our information to force some of these improvements? Dunno.

One response so far

The value of blogging and goals for 2016

As I prepare the slides and review my dissertation in preparation for the defense on the 19th, I keep coming back to the assertions I made in 2004 about the value of blogs for personal knowledge management. More recently, Pat Thomson blogged on THE about the value of blogging to scholarly writing. I think the value of blogs for communicating with the public is probably oversold. Seems like a lot of the scientists, social scientists, and other scholars who go into blogging with that goal find that they are instead communicating with like minds - scientists in other research areas, teachers, hobbyists/enthusiasts/citizen scientists - instead of changing minds and informing the uninformed.

It's not that there aren't cases in which that's true, it's just probably not frequent or widespread enough to sustain involvement for a new blogger.

I also don't mean to imply that there's no social in the social software. The community built through blogging can be very rich and supportive. The feedback on blogs can be very helpful.

I do miss blogging, and I don't think that blogging less has made me more productive offline. Instead, I find writing very slow and tedious, and I get very frustrated when readers of my work are not able to understand me through it.

So. I'm going to try to be here more. I'm going to try to practice writing more. I'm going to try to do more research blogging. I'll also try to capture and share any neat tricks in analysis.


Some further reading:

Dennen, V. P. (2014). Becoming a blogger: Trajectories, norms, and activities in a community of practice. Computers in Human Behavior, 36(0), 350-358. doi:10.1016/j.chb.2014.03.028

Hank, C. (2013). Communications in Blogademia: An Assessment of Scholar Blogs' Attributes and Functions. New Review of Information Networking, 18(2), 51-69. doi:10.1080/13614576.2013.802179

Mewburn, I., & Thomson, P. (2013). Why do academics blog? An analysis of audiences, purposes and challenges. Studies in Higher Education, 38(8), 1105-1119. doi:10.1080/03075079.2013.835624

Olive, R. (2013). ‘Making friends with the neighbours’: Blogging as a research method. International Journal of Cultural Studies, 16(1), 71-84. doi:10.1177/1367877912441438

Comments are off for this post

Why special librarians should be active on their organization's intranet social media

Back to the dissertation now, trying to do a big push this month to get ready to defend in the fall. A member of my committee had some great suggestions about parts of the literature that I should have covered in my literature review but had missed, particularly some CMC things I overlooked (more on these another time).

Ran across this when looking for other things by Leonardi:

Leonardi, P. M., & Meyer, S. R. (2015). Social Media as Social Lubricant: How Ambient Awareness Eases Knowledge Transfer. American Behavioral Scientist, 59(1), 10-34. doi:10.1177/0002764214540509

My place of work has an internal Facebook-like thingy, but it wasn't originally built and supported by IT. With all of the competing priorities, it wasn't clear at all that some social software should get IT's limited funding and attention. So a bunch of researchers set up Elgg on their own, on a surplus computer running under someone's desk. Use took off. Eventually it was taken over by IT, who now manage and support it.

I immediately saw it as a place to advertise library services and resources, trawl for questions that needed answering, and blog about things that can't go here. Later, three of us won a mini grant to create an add-on that allowed users to list what books they had on their bookshelves that they would be willing to lend out, and to track to whom the books were lent.
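The add-on itself isn't described in any detail here, so purely as an illustration: the record-keeping behind a bookshelf-lending feature can be tiny. A sketch in Python, with all names and the data model my own hypothetical choices:

```python
class Bookshelf:
    """Toy model of a lendable-books list: who owns what, who borrowed it."""

    def __init__(self):
        # Maps (owner, title) -> current borrower, or None if on the shelf.
        self._books = {}

    def offer(self, owner, title):
        """Owner lists a book as available to lend."""
        self._books[(owner, title)] = None

    def lend(self, owner, title, borrower):
        """Record a loan, refusing double-lending or unlisted books."""
        key = (owner, title)
        if key not in self._books:
            raise KeyError(f"{owner} has not listed {title!r}")
        if self._books[key] is not None:
            raise ValueError(f"{title!r} is already lent out")
        self._books[key] = borrower

    def return_book(self, owner, title):
        """Mark a book as back on its owner's shelf."""
        self._books[(owner, title)] = None

    def lent_to(self, owner, title):
        """Who currently has the book (None if it's on the shelf)."""
        return self._books[(owner, title)]

shelf = Bookshelf()
shelf.offer("cpikas", "The Craft of Research")
shelf.lend("cpikas", "The Craft of Research", "colleague1")
print(shelf.lent_to("cpikas", "The Craft of Research"))  # -> colleague1
```

The real add-on presumably persisted this in Elgg's database and hung it off user profiles; the point is only that the core state is a small owner/title/borrower mapping.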

But selling social media (beyond SharePoint) in the workplace might still be difficult.

This article finds the somewhat obvious, but has a nice lit review and it might be persuasive to some.

From the lit review:

Internal knowledge sharing in organizations is good because

  • increases efficiency
  • increases innovation
  • decreases mistakes
  • makes the organization as a whole more competitive

Internal knowledge sharing is difficult because knowledge is "sticky"

  • takes work to share (individual)
  • people believe they might lose power or status by sharing (individual)
  • knowledge is too complex to transfer
  • it might be hard to find people with whom to share knowledge (technology)
  • knowledge from outside the immediate group in the organization might be devalued (culture)

So the idea of the article is that people need to look around a bit - sort of like timing your entry in jump rope - before knowing how to ask a question and to whom to direct it. Using social media not necessarily to ask the question but to find sources and figure out how to approach them should help mitigate the stickiness issues.

They did a survey in a large telecommunications company and only worked with people who used their internal social networking site.

Unexpectedly, initial tie strength and the complexity of the question affect whether the seeker will ask the question immediately. Asking right away when the question is complex leads to less satisfaction. But they found that even when the question isn't ambiguous, waiting to ask it made the knowledge transfer more satisfactory. This bit from page 27 is interesting:

for the sample of knowledge seekers who did not ask for knowledge right away, of the five media we tested (phone, email, instant message, face-to-face, and enterprise social network site), only enterprise social network site was significant and positive. This suggests that, in support of H3, the enterprise social networking site was the only medium—when used in the short time between when the knowledge seeker identified the knowledge source and when he or she asked for the knowledge—that increased the likelihood that the knowledge seeker was satisfied with transfer. Furthermore, neither initial tie strength nor knowledge complexity had a significant impact on the likelihood of satisfactory knowledge transfer.

(H3 is exactly that - using social networking to gain more information about the source will increase satisfaction with the answer)

The authors emphasize again that they found that it's the "awareness of ambient communication" aspect of social networking that helps, not just using it as a direct channel through which to direct the communication.

Back to my post title. What does this mean for special librarians in corporate, government, or research settings (not academic or public)? It reinforces the idea that maintaining an active presence on your intranet social networking site is a good idea so that your potential users can check you out, get to know you, and better ask you questions. Of course, try not to sound like an idiot on there because then they'll know that, too 🙂

Also, if your organization is interested in KM, has an intranet, AND you have enough people to get to some sort of critical mass (what might that be?), setting up one of these social networking services is probably a good idea.


Comments are off for this post

Knowing what you know, or rather, what you've written

When I first came to work where I work now, I asked around for a listing of recent publications so I could familiarize myself with the types of work we do. No such listing existed, even though all publications are reviewed for public release and all copyright transfer agreements are *supposed* to be signed by our legal office. Long story short, I developed such a listing, and I populated it by setting up alerts on the various research databases.

Now, 9 years later, it's still running and it is even used to populate an external publications area on our expertise search app.

By its nature and how it's populated, there's absolutely no way it could be totally comprehensive, and it is also time-delayed. It's probably a little better now with how fast the databases have gotten and because Inspec and Compendex now index all author affiliations, not just the first author's.
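Running alerts against several databases at once also means deduplicating the results before they go into the listing. A rough sketch of the kind of matching involved, in Python; the record fields and both function names are my assumptions, not any particular database's export format:

```python
import re

def norm_title(title):
    """Normalize a title for fuzzy matching: lowercase, strip punctuation."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def merge_alerts(*alert_batches):
    """Merge publication records from multiple database alerts.

    Each record is a dict with an optional 'doi' and a required 'title'.
    Prefer the DOI as the match key; fall back to a normalized title.
    The first record seen for a given key wins.
    """
    seen = {}
    for batch in alert_batches:
        for rec in batch:
            doi = rec.get("doi")
            key = ("doi", doi.lower()) if doi else ("title", norm_title(rec["title"]))
            seen.setdefault(key, rec)
    return list(seen.values())

# Hypothetical alert results from two databases, overlapping on one paper.
inspec = [{"doi": "10.1000/abc", "title": "A Study of X"}]
compendex = [{"doi": "10.1000/ABC", "title": "A study of X"},
             {"title": "Another Paper, No DOI"}]
merged = merge_alerts(inspec, compendex)
print(len(merged))  # -> 2
```

Title-only matching is fragile (conference papers get republished under near-identical titles), which is one reason a listing like this can never be fully comprehensive.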

Anyway, our leadership is on an innovation kick, looking at metrics to see how we compare to our peers and whether any interventions have positive effects. The obvious thing to look at is patents, but that's complicated because policies toward patenting changed dramatically over the years. They're looking now at the number of publications - something I think they probably ought to track anyway as part of being in the sci/tech business. My listing has been looked at, but it only started in 2003/2004. From here forward the public release database can be used... but what about older stuff? Well, in the old days the library (and the director's office) kept reprint copies of everything published. Awesome. Well, except they're kinda just bound volumes of all sorts of sizes and shapes of articles. I guess these got scanned somehow and counted, but they ended up with a few articles with no dates or citations (title and author but no venue). Three of these got passed to me to locate. They're not in the above-mentioned research databases, but we know they were published (as reprints were provided) and were not technical reports.

The answer? Google. Of course. The first was a book chapter that was cited in a special issue of a journal dedicated to the co-author. The second was a conference paper that appeared on the second author's CV (originally written in 1972 - thank goodness for old professors with electronic CVs!). The third was a conference paper cited by a book chapter indexed by Google Books. BUT to find the year, I have to request the book from the medical library... which I have done.

At least back in the day the leadership understood the value of keeping a database (print volumes) of our work. From at least 2003 until 2012, there was no such recognition. Now that I will be benchmarking us against peer organizations, I wonder if they're in the same boat or if they've kept their houses in order with respect to their intellectual contributions.

Comments are off for this post