Archive for the 'interfaces' category

No, vendor, we don't want a pile of crap actually

Dec 02 2017 Published under Collection Development, interfaces, libraries

[Image: Large Copper Dung Beetle (Kheper nigroaeneus) on top of its dung ball. Source: https://commons.wikimedia.org/wiki/File:Large_Copper_Dung_Beetle_(Kheper_nigroaeneus)_on_top_of_its_dung_ball_(12615241475).jpg]

Yes, I have posted about this a number of times, and no, this will probably not be too different. Our vendors have swept up what little competition there was and then redone their boutique databases so that they - generally - work like piles of crap.

So there are two massive 3rd party aggregators that sell massive piles of crap. Don't get me wrong, these are super attractive to libraries, who can then say: look at all these titles we cover! Look at how much content we have! The problem is that with our current state of information abundance, with lots of big package deals, with more and more open access, and with informal scholarly sharing < cough >, getting the full text of recent articles from big name journals really isn't the hard part anymore.

The thing we actually need is efficient, precise, thorough, appropriate information at the right time and place. I say: I need information on exactly this thing! The aggregators go: here's a massive pile of crap! I'm like, well, I don't need a pile of crap, I need exactly this thing. System returns: here's another pile of crap!

Look at the Aerospace database, for example. It used to be the only real database that covered hypersonics and was at all thorough in covering AIAA and NASA technical reports. It was CSA when I got to know it. Compendex, in comparison, is only adding AIAA content this year and isn't going back to the 60s. The CSA databases got sold to ProQuest. I have no idea what the hell they've done with it, because every time I do a search I end up with trade pubs and press releases - even when I go through the facets to try to get rid of them.

CSA used to have a computer science database, too. The current computer collection in ProQuest doesn't even allow affiliation searching. Also, a search I did there yesterday - for a fairly large topic - didn't return *any* conference papers. For CS. Really.

This is not to pick on PQ - ok, maybe it is - but their competitors really aren't any better.

 

At the same time, we keep having people at my larger organization tell us that we *must* get/have a discovery layer. Let me just tell you again: we did a lot of testing, and they did not provide us *any* value over the no-additional-cost search of a 3rd party aggregator. They are super expensive, and really just give you - guess what - all your stuff in a huge pile of crap. I hear nothing but complaints from colleagues who have to deal with these. The supposition was that we wanted a Google-like interface. Ok, maybe a sensible quick search is fine, but that only works when you, like Google, have extremely sophisticated information retrieval engines under the hood. Saying: hey, we cover the same journals as your fancy well-indexed database, but without the pesky indexing, and lumped together with newspapers, press releases, and trade pubs... is not really effective. It's a pile of crap.

You may say, "But think of the children!" The poor freshman dears who can't search to save their lives, and who just need 3-5 random articles, after they've already written their paper, to fill in the bibliography due in the morning....

Is that really who and what we're supporting? Or should we instead train them in scholarly research and how to get the best information? And anyway, at my larger institution we hardly have any freshmen at all.

No, vendors, we do not want a large pile of crap, but thanks for offering!


Bots, Mixed Initiative, and Virtual Personal Assistants

I've been trying to write this post for a while, but I'm finally just throwing my hands up on producing a well-polished piece in order to get the thing done.

When I saw Daniel Tunkelang's brief post on virtual assistants, I was like: oh, that again. But there were some links, and doing my usual syntopical reading, I fell into the rabbit hole a bit.

Used to be that computer science was like "automate all the things." More automated, more better. Bates (1990) was all like wait a minute here, there are some things it makes sense to hand off and others it makes sense for the human to do. People do some things faster. People learn and explore and think by doing.  People need to control certain things in their environment. But other things are a hassle or can be easily done by a computer. What you don't want to do is to make the effort of supervising the automation so arduous that you're trading one hassle for another.

For quite a few years, there has been an area of research called "mixed initiative" that looks specifically at things like virtual assistants and automating where it makes sense without overburdening the user. As I was dabbling in this area a couple of years ago, I read some articles. It seemed weird to me, though, because I think most knowledge workers my age or younger probably don't know how to work with a living human assistant. I have never worked anywhere with a secretary who offloaded work from me. Never worked somewhere with someone to help me schedule meetings, type out correspondence, format articles, do my travel stuff, etc. I have been on teams with deliverables that were sent through an editor - but that was like a special technical writer. I suppose I would have to negotiate with an assistant what I would want him or her to do and then accept (within boundaries) that they might do things differently than I do. I would have to train them. Should I expect more of a virtual assistant?

All of this was in the back of my head when I started following the links.

So what do they mean by virtual assistants? They're hot, but what are they actually doing, and do they work?

Scheduling meetings

  • Meekan is, apparently, a bot that takes an informal request within Slack and negotiates with other calendars to make an appointment.
  • x.ai is similar but you cc Amy (a bot, but I like that she has a name), and she takes on the negotiation for you.

Project/Team Management (loosely construed)

  • Howdy will get feedback from team members and also take lunch orders. Seems sort of like some of the things baked into Basecamp when I saw a demo. It's in Slack, too.
  • Awesome helps manage teams on Slack.

 

Travel, Shopping, ...

  • Assist does a few different things like travel and shopping.

General, but often for operating a device

  • Siri
  • Cortana
  • Amazon Alexa
  • Google Now (sorta)
  • Facebook M

A lot of us don't want to talk to our assistants but to text them. One of the articles pointed this out.

 

When I talked to engineers back in the day about their personal information management, there were a lot of things they were doing themselves that it seemed like they should be able to offload to someone who is paid less (Pikas, 2007). Likewise, I was talking to a very senior scientist who was spending hours trying to get his publications right on the external site. Even though statements are routinely made to the contrary, it seems like work is pushed off from overhead/enterprise/admin staff to the actual mission people - the scientists and engineers - in an attempt to lower overhead. That moves money around, sure, but it doesn't accomplish the goal. So here's an idea: if we really, really, really aren't going to bring back more overhead/enterprise/admin folks, are there bots we can build into our systems to ease the load? A sketch of what I mean is below.
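To make that concrete, here's a minimal sketch - strictly hypothetical, not one of the products above - of the kind of bot I mean: a Slack slash command that hands back a formatted publication list so the scientist doesn't have to fiddle with the external site by hand. It assumes Python with Flask, Slack's standard slash-command POST-and-JSON-reply contract, and a made-up internal publications API (pubs.example.org).

# Hypothetical /pubs <author> slash command. The pubs.example.org
# endpoint is invented for illustration; Slack does POST the command
# text as a form field and accepts an immediate JSON reply like this.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/slack/pubs", methods=["POST"])
def pubs():
    author = request.form.get("text", "").strip()  # the text after /pubs
    resp = requests.get("https://pubs.example.org/api/publications",
                        params={"author": author}, timeout=10)
    items = resp.json()  # assume a list of {title, journal, year} dicts
    lines = [f"- {p['title']} ({p['journal']}, {p['year']})" for p in items]
    return jsonify({
        "response_type": "ephemeral",  # shown only to the requester
        "text": f"Publications for {author}:\n" + "\n".join(lines),
    })

if __name__ == "__main__":
    app.run(port=3000)

Nothing fancy - the point is that the glue is cheap; it's the willingness to build and maintain it that's missing.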

If Slackbot watches you and asks you personal questions: isn't that cute. If Microsoft does: evil, die, kill with fire. If your employer does: yuck?

 

References

Bates, M. J. (1990). Where should the person stop and the information search interface start? Information Processing & Management, 26(5), 575-591. doi:10.1016/0306-4573(90)90103-9

Pikas, C. K. (2007). Personal information management strategies and tactics used by senior engineers. Proceedings of the Annual Meeting of the American Society for Information Science and Technology, Milwaukee, WI, 44, paper 14.


Searching Scopus by Date Added to the Database

In my previous post, I complained that my metrics weren't comparable over the course of a few months, even for articles published in 2009.

I looked in the instructions, and I couldn't find anything that discussed searching by date added to the database. I looked at all the fields on the detailed view and there wasn't anything to help. No accession number. No date added. Hmph.

So I started to think about the alerts I had set up. When you click through "view all new results in Scopus", you get a search like so:

AFFIL(my place of work) AND ORIG-LOAD-DATE AFT 1390059048 AND ORIG-LOAD-DATE BEF 1390674349
Huh. So I wondered... can you just find the right AFT value and put it into an advanced search? Yup. Sure can!
What are these crazy numbers, though? (Most people will know right away - I didn't, and I should have.) So I looked around - no, I didn't have any alert e-mails from the time period I wanted to borrow numbers from. I chatted with Scopus help and they insisted that 1) you can't search on that field (I told them I had already proved you could), 2) it was part of the alert system and not part of the database (????), and 3) they couldn't give me the numbers for the time period I wanted, because you can't search for them anyway.
So then I asked LSW, and the brilliant Deborah and the equally brilliant but time-delayed Meg told me it was Unix time - seconds since 1/1/1970. Stephanie also provided me with a search string from early January (thank you!). I had run across Unix time in R, but Deborah even linked to an online converter and boom - Bob's your uncle.
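If you'd rather not rely on an online converter, the decoding is a couple of lines in Python; a quick sketch, using the numbers from my alert search above:

from datetime import datetime, timezone

# The "crazy numbers" are Unix time: seconds since 1970-01-01 UTC.
print(datetime.fromtimestamp(1390059048, tz=timezone.utc))  # 2014-01-18 15:30:48+00:00
print(datetime.fromtimestamp(1390674349, tz=timezone.utc))  # 2014-01-25 18:25:49+00:00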
So, if you want to find articles added to the database before or after a certain time, convert the time to Unix time and then use
ORIG-LOAD-DATE AFT
or
ORIG-LOAD-DATE BEF
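Going the other direction - from a calendar date to something you can paste into the advanced search - is just as short. A minimal sketch in Python, building the clause in the same format as the alert search above:

from datetime import datetime, timezone

def load_date_clause(after, before):
    # Scopus wants Unix time: seconds since 1970-01-01 UTC.
    aft = int(after.replace(tzinfo=timezone.utc).timestamp())
    bef = int(before.replace(tzinfo=timezone.utc).timestamp())
    return f"ORIG-LOAD-DATE AFT {aft} AND ORIG-LOAD-DATE BEF {bef}"

print(load_date_clause(datetime(2014, 1, 18), datetime(2014, 1, 25)))
# ORIG-LOAD-DATE AFT 1390003200 AND ORIG-LOAD-DATE BEF 1390608000

AND the resulting clause onto whatever else is in your search (AFFIL, TITLE-ABS-KEY, and so on).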

Added 5/7: I was contacted by Scopus - I would like to post detailed information from the e-mail but haven't gotten permission. She did verify that this search will work, but only so far back; that information isn't kept indefinitely. Also, you can use RECENT(n), where n is the number of days, and AND that onto any advanced search.
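For example, to see what was loaded in the last week for my institution (reusing the placeholder affiliation phrase from above):

AFFIL(my place of work) AND RECENT(7)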


How not to support advanced users

Oct 29 2011 Published under information retrieval, interfaces

At first I wasn’t going to name names, but it seems like this won’t make sense unless I do.

Over the years, Cambridge Scientific Abstracts became CSA, and now it's just part of ProQuest. The old peachy tan-colored interface always supported advanced searching. When the tabbed olive-colored interface came out a few years ago, some of the advanced search features were a little buried, but you could still find them (I blogged about it at the time but was corrected by someone who showed me where they were). The databases I've always used on CSA are very specialized. I use Aerospace and High Technology the most, but I also use Oceanic Abstracts and Meteorological and Geoastrophysical Abstracts. For my own work, I also use LISA.

I find that for topics like missile design - including hypersonics and propellant formulations - and spacecraft design, Aerospace and High Technology does much better than general databases like Compendex. Oceanic Abstracts is a great complement to GEOBASE (and GeoRef, but meh) on other topics I research.

I have search alerts set up in these various databases. Some I review and forward to my customers, whereas others I keep for my own use. The alerts take advantage of the advanced searching available and have been tweaked over time to be as precise as possible.

So now that we're all moving to the new ProQuest interface, it was time to translate my searches into the new format. Luckily, ProQuest has a help page that maps searches in the old interface to the new one. I have to say, though, that there are pieces missing. I found that in Illumina (the olive-colored interface), I could just use kw to search the primary fields of the record and leave off the references. In the new interface, I had to list all of the fields individually. Also, I had a real problem nesting all of the searches I needed to do. Long story short, I did manage to figure out satisfactory searches for the alerts.

Now, here’s what actually prompted me to write this post. I am an advanced user and I do have a lot of experience with different interfaces. When I do find a problem in the interface, I’ll report it – particularly if it’s keeping me from performing some task.

In the new interface, if you do anything more than a basic search, it often will not let you see the last few pages of results.

For example, in Aerospace (the name now leaves off High Technology; let's hope it still covers the same content):

propellant friction sensitivity – is just fine and you can see all the results

propellant AND “friction sensitivity” – whether entered through the basic search screen or through the advanced search, it will not let you see the third page. It gives an error.

Fine, so I reported this to their help desk. They replied a week later, and we've been exchanging e-mails ever since. They assumed I was technologically inept, that my computer was broken, that my library had set up something wrong with the database, that our network was messed up, and that we had a proxy server causing errors. I sent them the error messages from the screen. I sent them screenshots. I tried the same search in three browsers and got another librarian to try from her computer. We could all replicate the problem. They said they visited my library's web page and couldn't find a link to the database. Well, *my library* doesn't have an external web presence - at all! Further, I had already given them the direct URL and told them at least three times that I wasn't going through a proxy server because I was on campus. They wanted a screenshot of the search screen (?!?), so I sent that.

Yesterday morning, I got another e-mail. Upon further investigation, they found that this was... a known error... and that technical services was working to fix it. The workaround is to re-sort the records until I've seen them all.

Do they have any idea how mad that makes me? How much time I spent proving I was seeing what they already knew was happening? Did they even check their knowledge base, or did they decide to screw with me for three weeks before even checking?

I’ve had it, but damn it, I need that stinking database for my work and there’s no other real option. GRRR.

Is this how to treat your advanced users? The first search string I sent them should have clued them in (it's not the one above; it's much longer). Plus, they asked, and I told them I was a librarian when I submitted the report.
