Open Science & Altmetrics Monthly Roundup (July 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

Impactstory announces a new sustainability model: $5/month subscriptions

Last week, we announced that we’re switching our non-profit sustainability model to a subscription plan: $5 per month after a free, 14-day trial period. From the Impactstory blog:

Our goal has always been for Impactstory to support a second scientific revolution, transforming how academia finds, shares, understands, and rewards research impact. Today we believe in that goal more than ever. That’s why we’re a nonprofit, and always will be. But this transformation is not going to happen overnight. We need a sustainability model that can grow with us, beyond our next year of Sloan and NSF funding. This is that model.

So what does five bucks a month buy you? It buys you the best place in the world to learn and share your scholarly impact. It buys you a profile not built on selling your personal data, or cramming your page with ads, or our ability to hustle up more funding.

Five bucks buys you a profile built on a simple premise: we’ll deliver real, practical value to researchers, every day. And we’ll do it staying a nonprofit that’s fiercely committed to independence, openness, and transparency.

To read the full announcement, check out last Thursday’s post.

The K(ardashian)-Index debuts

Neil Hall has caused a stir with his paper, “The Kardashian index: a measure of discrepant social media profile for scientists” published last week in Genome Biology. The tongue-in-cheek article outlines Hall’s idea for a metric that identifies scientists whose presence on Twitter isn’t matched by a record of scholarly impact, evidenced by many citations to their work. Here’s how the index works:

K-index = F(a) / F(c)

Where “F(a) is the actual number of twitter followers of researcher X and F(c) is the number researcher X should have given their citations.”
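As a rough sketch of the arithmetic: Hall’s paper fits the “expected” follower count from a researcher’s total citations C as F(c) = 43.3 × C^0.32, and labels anyone with K > 5 a “Science Kardashian.” A minimal illustration, assuming that fitted relationship:

```python
def kardashian_index(followers: float, citations: float) -> float:
    """K-index = F(a) / F(c).

    F(a) is the actual Twitter follower count; F(c) is the count
    'expected' from total citations C, using the relationship
    F(c) = 43.3 * C**0.32 fitted in Hall's paper.
    """
    expected_followers = 43.3 * citations ** 0.32
    return followers / expected_followers

# A researcher with 2,000 followers and 100 citations:
# F(c) = 43.3 * 100**0.32, roughly 189 followers "expected",
# giving K of about 10.6 -- past Hall's tongue-in-cheek K > 5 threshold.
```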

While many viewed Hall’s paper as all in good fun, some are concerned that by denigrating scientists with more Twitter followers than would be “appropriate” given their citation counts, it reinforces the idea that one very narrow type of scholarly impact matters most, above and beyond a researcher’s ability to communicate their work to others.

And by making fun of the idea that there might be more flavors of impact than traditionally assumed, we disincentivize researchers from ever breaking from the conservative approaches to measuring impact–approaches that no longer fully reflect reality for those practicing web-native science.

Huge progress made on 20+ Open Science projects at Mozilla Science Global Sprint

On July 22 New Zealand Standard Time, an international team of coders and scientists began a 52-hour sprint to improve Open Science lessons and learning materials, teaching tools, and software and standards for better science. The sprint was organized by Mozilla Science and coordinated virtually across the world using collaborative notepads, video conferencing software, and GitHub. Improvements to Open Science software and standards included work on Scholarly Markdown, the Open Access Button, and reproducible research guidelines. Improvements to teaching materials included bioinformatics, medical imaging, and oceanography capstone examples for Software Carpentry courses; Data Carpentry training materials like social science examples and lessons on Excel; and a great guide to using Excel for science. For more info, including can’t-miss links to other great Open Science projects, check out the Mozilla Science blog.

Other Open Science & Altmetrics News

  • Open Notebook Science marches on at the Jean Claude Bradley Memorial Symposium: In early July, Open Science advocates gathered for a one-day symposium celebrating the life and work of Jean Claude Bradley, Open Notebook Science pioneer. Some of Open Science’s finest minds presented at the meeting, including Antony Williams (Royal Society of Chemistry) and Peter Murray-Rust (Cambridge University). For more info, including links to the presentations, visit the JCBMS wiki.

  • 1:am altmetrics conference dates announced: The organizers of London’s first altmetrics conference released meeting dates and a preliminary lineup. 1:am will be held September 25-26, 2014 at the Wellcome Collection in London. Speakers representing publishers, researchers, and institutions will include Jennifer Lin of PLOS, Mike Thelwall of the University of Wolverhampton, Arfon Smith of GitHub, and Sarah Callaghan of the Research Data Alliance’s Metrics working group. Impactstory will also be in (virtual) attendance, outlining our non-profit’s vision for an Open altmetrics infrastructure. Sound interesting? Check out the 1:am website for more information and to purchase tickets.

  • Digital Science-backed startups had a big month: The innovative Macmillan Publishing subsidiary, Digital Science, had two cool announcements for the Open Science community in July: it invested in WriteLaTeX, the startup responsible for Overleaf, a real-time, collaborative word processing environment for authoring scientific publications; and Figshare (which Digital Science also backs) was named Wired UK’s Startup of the Week. Congrats!

  • As WSSSPE2 approaches, killer papers on software sustainability and impacts are going online: The second Working towards Sustainable Software for Science: Practice and Experiences (WSSSPE) workshop is still months away, but we’re already seeing awesome papers like this one by Dan Katz (NSF) and Arfon Smith (GitHub) on creating mechanisms for assigning credit to software creators, and this one by James Howison (University of Texas at Austin) that proposes retracting bit-rotten publications in order to incentivize researchers to keep their research software accessible and usable. It’s obvious that excellent research will be shared at WSSSPE2 in November; for more information on the conference, check out the WSSSPE2 website.

  • The 2014 Open Knowledge Festival was a resounding success: Reports from the 2014 Open Knowledge Festival came streaming in across the Internet not long after the meeting ended in mid-July. Some highlights of the coverage: the OKFestival’s own Storify feeds describe the wealth of activities that happened at the Fest; festival goers were treated to excellent company and conversation at the ScienceOpen-sponsored ice cream break; and Lou Woodley’s apt write-up of the entire Festival, which drove home the point that in-person meetings are important–they bring like-minded people together and create opportunities for collaboration that you don’t often get by watching a meeting’s livestream.

Stay connected

Speaking of “bringing like-minded people together”: we share altmetrics and Open Science news as-it-happens on our Twitter, Google+, Facebook, and LinkedIn pages. And if you don’t want to miss next month’s news roundup, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

Starting today, Impactstory profiles will cost $5/month. Here’s why that’s a good thing.

Starting today, Impactstory profiles cost $5 per month.

Why? Because our goal has always been for Impactstory to support a second scientific revolution, transforming how academia finds, shares, understands, and rewards research impact. That’s why we’re a nonprofit, and always will be. But (news flash), that transformation is not going to happen overnight. We need a sustainability model that can grow with us, beyond our next year of Sloan and NSF funding. This is that model.

So what does five bucks a month buy you? It buys you the best place in the world to learn and share your scholarly impact. It buys you a profile not built on selling your personal data, or cramming your page with ads, our ability to hustle up more funding, or a hope that Elsevier buys us (nonprofits don’t get acquired).

Five bucks buys you a profile built on a simple premise: we’ll deliver real, practical value to real researchers, every day. And we’ll do it staying a nonprofit that’s fiercely committed to independence, openness, and transparency. Want to fork our app and build a better one? Awesome, here’s all our code. Want access to the data behind your profile? Of course: it’s one click away, in JSON or CSV, as open as we can make it. And that ain’t changing. It’s who we are.

We’ve talked to a lot of users who feel $5/month is a fair deal. Which is great; we agree. But we know some folks may feel differently, and that’s great too. Because if you’re in that second group, we want to hear from you. We’re passionate about building the online profile you do think is worth $5 a month. In fact, we’re doing a huge round of interviews right now…if you’ve got ideas, drop us a line at team@impactstory.org and we’ll schedule a chat. Let’s change the world, together.

New signups will get a 14-day free trial. If you’re a user now, you’ll also get a 14-day trial; plus, if you subscribe, you’ll get a cool “Impactstory: Early Adopter” sticker for your laptop. If you’re in a spot where you can’t afford five bucks a month, we understand. We’ve got a no-questions-asked waiver: just drop us a line showing us that you’re linking to your Impactstory profile from your email signature, and we’ll send you a coupon for a free account.

We’re nervous about this change in some ways; it’s not exactly what we’d imagined for Impactstory from the beginning. But we’re confident it’s the right call, and we’re excited about the future. We’re changing the world. And we’re delivering concrete value to users. And we’re not gonna stop.

Your questions, answered: introducing the Impactstory Knowledge Base

We’re launching a new feature today to make it even easier to use Impactstory: the Impactstory Knowledge Base.

We’ve seeded the Knowledge Base with answers to users’ frequently asked questions: how to create, populate and update your Impactstory profile, embed your Impactstory profile in other websites, and more. And we’ll be adding more articles–particularly those aimed at “power users”–in the coming months.

Head over to the Knowledge Base now to check it out!

Got a “how to” you want us to add in our next round of edits to the Knowledge Base? Email us at team@impactstory.org to share it.

7 ways to make your Google Scholar Profile better

Albert Einstein's Google Scholar profile

Google Scholar Profiles are useful, but are not as good as they could be. In our last post, we identified their limitations: dirty data, a closed platform, and a narrow understanding of what constitutes scholarly impact.

That said, Google Scholar Profiles are still an important tool for thousands of academics worldwide. So, how can researchers overcome Google Scholar Profiles’ weaknesses?

In this post, we share 7 essential tips for your Google Scholar Profile. They’ll keep your citation data clean, help you keep tabs on colleagues and competitors, increase your “Googleability,” and more. Read on!

1. Clean up your Google Scholar Profile data

Thanks to Google Scholar Profiles’ “auto add” functionality, your Profile might include some articles you didn’t author.

If that’s the case, you can remove them in one of two ways:

  1. clicking on the title of each offending article to get to the article’s page, and then clicking the trashcan/“Delete” button in the top green bar

  2. from the main Profile page, ticking the boxes next to each incorrect article and selecting “Delete” from the drop-down menu in the top green bar

If you want to prevent incorrect articles from appearing on your profile in the first place, you can change your Profile settings to require Google Scholar to email you for approval before adding anything. To make this change, from your main Profile page, click the “More” button that appears in the top grey bar. Select “Profile updates” and change the setting to “Don’t automatically update my profile.”

Prefer to roll the dice? You can keep a close eye on what articles are automatically added to your profile by signing up for alerts (more info about how to do that below) and manually removing any incorrect additions that appear.

2. Add missing publications to your Profile

Google Scholar is pretty good at adding new papers to your profile automatically, but sometimes articles can fall through the cracks.

To add an article, click “Add” in the top grey bar on the main Profile page. Then, you can add your missing articles in one of three ways:

  1. Click the “Add article manually” link in the left-hand navigation bar. On the next page, add as much descriptive information about your article, book, thesis, patent, or other publication as possible. The more metadata you add, the better a chance Google Scholar has of finding citations to your work.

  2. Click “Add articles” in the left-hand navigation bar to get a list of articles that Google Scholar thinks you may have authored. Select the ones you’ve actually authored and add them to your profile by clicking the “Add” button at the top.
  3. Select “Add article groups” from the left-hand navigation bar to review groups of articles that Scholar thinks you may have authored under another name. This is a new feature that’s less than perfect–hence we’ve listed it as a last choice for ways to add stuff to your profile.

Got all your publications added to your Profile? Good, now let’s move on.

3. Increase your “Googleability”

One benefit to Google Scholar Profiles is that they function as a landing page for your publications. But that functionality only works if your profile is set to “public.”

Double-check your profile visibility by loading your profile and, at the top of the main page, confirming that it reads, “My profile is public” beneath your affiliation information.

If it’s not already public, change your profile visibility by clicking the “Edit” button at the top of your profile, selecting “My profile is public”, and then clicking “Save”.

4. Use your Google Scholar Profile data to get ahead

Though Google Scholar Profiles’ limitations mean you can’t use them to completely replace your CV, you can use your Profile data to enhance it. You can also use your Profile data in annual reports, grant applications, and other instances where you want to document the impact of your publications.

Google Scholar doesn’t allow users to download a copy of their citation data, unfortunately. Any reuse of Google Scholar Profile data has to be done the old-fashioned way: copying and pasting.

That said, a benefit of regularly updating your CV to include copied-and-pasted Google Scholar Profile citations is that it’s a low-tech backup of your Google Scholar Profile data–essential in case Google Scholar is ever deprecated.

5. Stay up-to-date when you’ve been cited

One benefit to Google Scholar Profiles is that you can “Follow” yourself to get alerts whenever you’re cited. As we described in our Ultimate guide to staying up-to-date on your articles’ impact:

Visit your profile page and click the blue “Follow” button at the top of your profile. Enter your preferred email address in the box that appears, then click “Create alert.” You’ll now get an alert anytime you receive a citation.

Easy, right?

You can also click “Follow new articles” on your own profile to be emailed every time a new article is added automatically–key to making sure the data in your Profile is clean, as we discussed in #1 above.

6. …and stay up-to-date on your colleagues and competitors, too

Similarly, you can sign up to receive an email every time someone else receives a new citation or publishes a new article. (I like to think of it as “business intelligence” for busy academics.) It’s as easy as searching for them by name and, on their profile page, clicking “Follow new articles” or “Follow new citations.”

7. Tell Google Scholar how it can improve

Finally, Google Scholar–like most services–relies on your feedback in order to improve. Get in touch with them via this Contact Us link to let them know how they can better their platform. (Be sure to mention that an open API is key to letting others fill the service gaps Google can’t cover, especially with respect to altmetrics!)

Do you have Google Scholar Profiles hacks that you use to get around your Profile’s limitations? Leave them in the comments below or join the conversation on Twitter @impactstory!

Updated 12/19/2014 to reflect changes in the Google Scholar profile redesign.

4 reasons why Google Scholar isn’t as great as you think it is

These days, you’d be hard-pressed to find an academic who doesn’t think that Google Scholar Profiles are the greatest thing since sliced bread. Some days, I agree.

Why? Because my Google Scholar Profile captures more citations to my work than Web of Knowledge or Scopus, automatically adds (and tracks citations for) new papers I’ve published, is better at finding citations that appear in non-English language publications, and gives me a nice fat h-index. I’m sure you find it valuable for similar reasons.

And yet, Google Scholar is still deeply flawed. It has some key disadvantages that keep it from being as awesome as most imagine that it is.

In this post, I’m going to do some good ol’ fashioned consciousness-raising and describe Google Scholar Profiles’ limitations. And in our next post, I’ll share tips I’ve learned for getting the most out of your Google Scholar Profile, limitations be darned.

1. Google Scholar Profiles include dirty data

Let’s begin with the most basic element of your Profile: your name. If your name includes diacritics, ligatures, or even apostrophes, Google Scholar may be missing citations to your work. (Sorry, O’Connor!) And if you have a common name, it’s likely you’ll end up with others’ publications in your Profile, which you are unfortunately responsible for identifying and removing. (We’ll cover how to do that in our next post.)

Now, what about the quality of citations? Google Scholar claims to pull citations from anywhere on the scholarly web into your Profile, but their definition of “the scholarly web” is less rigorous than many people realize. For example, our co-founder, Heather, has citations on her Google Scholar Profile for a Friendfeed post. And others have found Google Scholar citations to their work in student handbooks and LibGuides–not the worst places you can get a cite from, but still: Nature they ain’t.

Google Scholar citations are also, like any metric, susceptible to gaming. But whereas organizations like PLOS and Thomson Reuters’ Journal Citation Reports will flag and ban those found to be gaming the system, Google Scholar does not respond quickly (if at all) to reports of gaming. And as researchers point out, Google’s lack of transparency with respect to how data is collected means that gaming is all the more difficult to discover.

The service also misses citations in a treasure-trove of scholarly material that’s stored in institutional repositories. Why? Because Google Scholar won’t harvest information from repositories in the format that repositories across the world tend to use (Dublin Core).

Google Scholar Profile data is far from perfect, but that’s a small problem compared to the next issue.

2. Google Scholar Profiles may not last

Remember Google Reader? Google has a history of killing beloved products when the bottom line is in question. It’s no exaggeration to say that Google Scholar Profiles could go away at any moment.

To me, it’s not unlike the problem of monoculture in agriculture. For those unfamiliar with the term, monoculture is when farmers identify the most productive species of a crop–the one that is easiest to grow and yields the best harvest year after year–and then grow that crop exclusively. Monoculture can be a good thing: Google Scholar Profiles were, for a long time, the easiest-to-use and most powerful citation reports available to scholars, and so Google Scholar has become one of the most-used platforms in academia.

But monoculture is also risky. Growing only one species of a crop can be catastrophic to a nation’s food supply if, for example, that species were wiped out by blight one year. Similarly, academia’s near-singular dependence on Google Scholar Profile data could be harmful to many if Google Scholar were to be shelved.

3. Google Scholar won’t allow itself to be improved upon

Other issues aside, it’s worth acknowledging that Google Scholar Profiles are very good at doing one thing: finding citations on the scholarly web. But that’s pretty much all they do, and Google is actively preventing anyone else from improving upon their service.

It’s been pointed out before that the lack of a Google Scholar API means that no one can add value to or improve the tool. That means that services like Impactstory cannot include citations from Google Scholar on Impactstory, nor can we build upon Google Scholar Profiles to find and display metrics beyond citations or automatically push new publications to Profiles. Based on the number of Google Scholar-related help tickets we receive, this lack of interoperability is a major pain point for researchers.

4. Google Scholar Profiles only measure a narrow kind of scholarly impact

Google Scholar Profiles aren’t designed to meet the needs of web-native scholarship. These days, researchers are putting their software, data, posters, and other scholarly products online alongside their papers. Yet Google Scholar Profiles don’t allow them to track citations–nor any other type of impact indicator, including altmetrics–to those outputs.

Google Scholar Profiles also promote a much-maligned One Metric to Rule Them All: the h-index. We’ve already talked about the many reasons why scholars should stop caring about the h-index; most of those reasons stem from the fact that h-indices, like Google Scholar Profiles, aren’t designed with web-native scholarship in mind.

Now that we’re clear on the limitations of Google Scholar Profiles, we’ll help you overcome ‘em by sharing 7 essential workarounds for your Google Scholar Profile in tomorrow’s post. Stay tuned!

Impactstory Advisor of the Month: Keith Bradnam (July 2014)

Headshot of Keith Bradnam

Meet our Advisor of the Month for July, Keith Bradnam! Keith is an Associate Project Scientist with the Korf Lab at UC Davis and active science communicator (read his blog, ACGT, and follow him on Twitter at @kbradnam).

Why is Keith our Advisor of the Month? Because he shared his strategies for success as a scientist at a well-attended Impactstory info session he organized at UC Davis earlier this month. Plus, he’s helping us to improve Impactstory every day, submitting bug reports and ideas for new features on our Feedback forum.

We recently emailed Keith to learn more about why he decided to become an Advisor, what made his recent workshop so great, and his thoughts on using blogging to become a more successful scientist.

Why did you initially decide to join Impactstory?

When I first heard about Impactstory, it just seemed like such an incredibly intuitive and useful concept. Publications should not be seen as the only form of scientific ‘output’, and having a simple way to gather together the different aspects of my academic life seemed like such a no-brainer.

In the past, I have worked in positions where I helped develop database resources for other scientists. These types of non-research positions often provide an opportunity for only one formal publication a year (e.g. a paper in the annual Nucleic Acids Research ‘Database’ issue). This is a really poor reflection of the contributions that many bioinformaticians (and web programmers, database administrators, etc.) make to the wider scientific community. In the past we didn’t have tools like GitHub to easily show the world what software we were helping develop.

Why did you decide to become an Advisor?

Impactstory is a great service and the more people that get to know about it and use it, the better it will become. I want to be part of that process, particularly because I still think that there are many people who are stuck in the mindset that a CV or résumé is the only way to list what you have done in your career.

I’m really hopeful that tools like Impactstory will forever change how people assess the academic achievements of others.

How have you been spreading the word about Impactstory in your first month as an Advisor?

I’ve mainly been passing on useful tweets from the @Impactstory Twitter account and keeping an eye on the Impactstory Feedback Forums where I’ve been adding some suggestions of my own and replying to questions from others. Beyond that, I’ve evangelized about Impactstory to my lab, and I gave a talk on campus to Grad students and Postdocs earlier this month.

How did your workshop go?

Well perhaps I’m biased 🙂 but I think it was well-received. There was a good mix of Grad students, Postdocs, and some other staff, and I think people were very receptive to hearing about the ways that Impactstory could be beneficial to them. They also asked lots of pertinent questions which has led to some new feature requests for the Impactstory team to consider. [You can view a video of Keith’s presentation over at his blog.]

You run a great blog about bioinformatics–ACGT. Why do you blog, and would you recommend it to others?

Blogging is such an incredibly easy way to share useful information with your peers. Sometimes that information can be succinct, factual material (these are the steps that I took to install software ‘X’), sometimes it can be opinion or commentary (this is why I think software ‘X’ will change the world), and sometimes it can just be entertainment or fun (how I used software ‘X’ to propose to my wife).

I think we’re currently in a transition period where people no longer see ‘blogging’ as being an overly geeky activity. Instead, I think that many people now appreciate that blogging is just a simple tool for quickly disseminating information.

I particularly recommend blogging to scientists. Having trouble following a scientific protocol and need some help? Blog about it. Think you have made an improvement on an existing protocol? Blog about it. Have some interesting thoughts about a cool paper that you have just read? Blog about it. There are a million and one topics that will never be suitable for a formal peer-reviewed publication, but which would make fantastic ideas for a blog post.

Blogging may be beneficial for your career by increasing your visibility amongst your peers, but more importantly I think it really improves your writing skills and — depending on what you blog about — lets you give something back to the community.

What’s the best part about your current gig as an Associate Project Scientist with the Korf Lab at UC Davis?

I think that most people would agree that if you work on a campus where you get to walk past a herd of cows every day, then that’s pretty hard to beat! However the best part of my job is that I get to spend time mentoring others in the lab (students, not cows), and I like to think that I’m helping them become better scientists, and better communicators of science in particular.

Thanks, Keith!

As a token of our appreciation for Keith’s hard work, we’re sending him an Impactstory t-shirt of his choice from our Zazzle store.

Keith is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Open Science & Altmetrics Monthly Roundup (June 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

UK researchers speak out on assessment metrics

There are few issues more polarizing in academia right now than research assessment metrics. A few months back, the Higher Education Funding Council for England (HEFCE) asked researchers to submit their evidence and views on the issue, and to date many well-reasoned responses have been shared.

Some of the highlights include Ernesto Priego’s thoughtful look at the evidence for and against; this forceful critique of the practice, penned by Sabaratnam and Kirby; a call to accept free market forces “into the internal dynamics of academic knowledge production” by Steve Fuller; and this post by Stephen Curry, who shares his thoughts as a member of the review’s steering group.

Also worth a look is Digital Science’s “Evidence for excellence: has the signal overtaken the substance?”, which studies the unintended effects that past UK assessment initiatives have had on researchers’ publishing habits.

Though the HEFCE’s recommendations will mainly affect UK researchers, the steering group’s findings may set a precedent for academics worldwide.

Altmetrics researchers agree: we know how many, now we need to know why

Researchers gathered in Bloomington, Indiana on June 23 to share cutting-edge bibliometrics and altmetrics research at the ACM WebScience Altmetrics14 workshop.

Some of the highlights include a new study that finds that only 6% of articles that appear in Brazilian journals have 1 or more altmetrics (compared with ~20% of articles published in the “global North”); findings that use of Twitter to share scholarly articles grew by more than 90% from 2012 to 2013; a study that found that most sharing of research articles on Twitter occurs in original tweets, not retweets; and a discovery that more biomedical and “layman” terms appear in the titles of research shared on social media than in titles of highly-cited research articles.

Throughout the day, presenters repeatedly emphasized one point: high-quality qualitative research is now needed to understand what motivates individuals to share, bookmark, recommend, and cite research outputs. In other words, we increasingly know how many altmetrics research outputs tend to accumulate and what those metrics’ correlations are–now we need to know why research is shared on the social Web in the first place, and how those motivations influence various flavors of impact.

Librarians promoting altmetrics like never before

This month’s Impactstory blog post, “4 things every librarian should do with altmetrics,” has generated a lot of buzz and some great feedback from the library community. But it’s just one part of a month filled with librarians doin’ altmetrics!

To start with, College & Research Libraries News named altmetrics a research library trend for 2014, and based on the explosion of librarian-created presentations on altmetrics in the last 30 days alone, we’re inclined to agree! Plus, there were librarians repping altmetrics at AAUP’s Annual Meeting and the American Library Association Annual Meeting (here and here), and the Special Libraries Association Annual Meeting featured our co-founder, Heather Piwowar, in two great sessions and Impactstory board member, John Wilbanks, as the keynote speaker.

More Open Science & Altmetrics news

Stay connected

We share altmetrics and Open Science news as-it-happens on our Twitter, Google+, Facebook, or LinkedIn pages. And if you don’t want to miss next month’s news roundup, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

4 things every librarian should do with altmetrics

Researchers are starting to use altmetrics to understand and promote their academic contributions. At the same time, administrators and funders are exploring them to evaluate researchers’ impact.

In light of these changes, how can you, as a librarian, stay relevant by supporting researchers’ fast-changing altmetrics needs?

In this post, we’ll give you four ways to stay relevant: staying up-to-date with the latest altmetrics research, experimenting with altmetrics tools, engaging in early altmetrics education and outreach, and defining what altmetrics mean to you as a librarian.

1. Know the literature

Faculty won’t come to you for help navigating the altmetrics landscape if they can tell you don’t know the area very well, will they?

To get familiar with discussions around altmetrics, start with the recent SPARC report on article-level metrics, this excellent overview that appeared in Serials Review (paywall), and the recent ASIS&T Bulletin special issue on altmetrics.

Then, check out this list of “17 Essential Altmetrics Resources” aimed at librarians, this recent article on collection development and altmetrics from Against the Grain, and presentations from Heather and Stacy on why it’s important for librarians to be involved in altmetrics discussions on their campuses.

There’s also a growing body of peer-reviewed research on altmetrics. One important concept from this literature is the idea of “impact flavors”–a way to understand distinctive patterns in the impacts of scholarly products.

For example, an article featured in mainstream media stories, blogged about, and downloaded by the public has a very different flavor of impact than a dataset heavily saved and discussed by scholars, which is in turn different from software that’s highly cited in research papers. Altmetrics can help researchers, funders, and administrators optimize for the mix of flavors that best fits their particular goals.

There have also been many studies of the correlations (or lack thereof) between altmetrics and traditional citations. Some have shown that selected altmetrics sources (Mendeley in particular) are significantly correlated with citations (1, 2, 3), while other sources, like Facebook bookmarks, have only slight correlations with citations. These studies suggest that different types of altmetrics capture different types of impact, beyond just scholarly impact.
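
Correlation studies like these typically report Spearman rank correlations, since altmetrics counts are highly skewed. As a rough illustration of what that statistic measures (using invented per-article counts, not data from any of the studies cited), a rank correlation can be computed with nothing but the Python standard library:

```python
# Spearman rank correlation from scratch (standard library only).
# The per-article counts below are invented for illustration; they are
# not data from any of the studies mentioned above.

def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j across any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

mendeley_saves = [120, 5, 40, 300, 18, 75]  # hypothetical counts
citations = [30, 2, 12, 95, 15, 8]          # hypothetical counts
print(round(spearman(mendeley_saves, citations), 3))  # -> 0.771
```

With no ties, this matches the classic sum-of-squared-rank-differences formula; the average-rank step matters only when counts repeat.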

Other early touchstones include studies exploring the predictive potential of altmetrics, growing adoption of social media tools that inform altmetrics, and insights from article readership patterns.

But these are far from the only studies to be aware of! Stay abreast of new research by reading through the PLOS Altmetrics Collection, joining the Altmetrics Mendeley group, and following the #altmetrics hashtag on Twitter.

2. Know the tools

There are now several tools that allow scholars to collect and share the broad impact of their research portfolios.

In the same way you’d experiment with new features added to Web of Science, you can play around with altmetrics tools and add them to your bibliographic instruction repertoire (more on that in the following section). Familiarity will enable you to do easy demonstrations, discuss strengths and weaknesses, contribute to product development, and serve as a resource for campus scholars and administration.

Here are some of the most popular altmetrics tools:

Impactstory

If you’re reading this post, chances are that you’re already familiar with Impactstory, a nonprofit Web application supported by the Alfred P. Sloan Foundation and NSF.

If you’re a newcomer, here’s the scoop: scholars create a free Impactstory profile and then upload their articles, datasets, software, and other products using Google Scholar, ORCID, or lists of permanent identifiers like DOIs, PubMed IDs, and so on. Impactstory then gathers and reports altmetrics and traditional citations for each product. As shown above, metrics are displayed as percentiles relative to similar products. Profile data can be exported for further analysis, and users can receive alerts about new impacts.

Impactstory is built on open-source code, offers open data, and is free to use. Our robust community of users helps us think up new features and prioritize development via our Feedback forum; once you’re familiar with our site, we encourage you to sign up and start contributing, too!

PlumX

PlumX is another web application that displays metrics for a wide range of scholarly outputs. The metrics can be viewed and analyzed at any user-defined level, including the researcher, department, institution, journal, grant, and research topic levels. PlumX reports some metrics that other altmetrics services don’t, like WorldCat holdings and downloads and pageviews from some publishers, institutional repositories, and EBSCO databases. PlumX is developed and marketed by Plum Analytics, an EBSCO company.

The service is available via a subscription. Individuals who are curious can experiment with the free demo version.

Altmetric

The third tool that librarians should know about is Altmetric.com. Originally developed to provide altmetrics for publishers, the tool primarily tracks journal articles and ArXiv.org preprints. In recent years, the service has expanded to include a subscription-based institutional edition, aimed at university administrators.

Altmetric.com offers unique features, including the Altmetric score (a single-number summary of the attention an article has received online) and the Altmetric bookmarklet (a browser widget that allows you to look up altmetrics for any journal article or ArXiv.org preprint with a unique identifier). Sources tracked for mentions of articles include social and traditional media outlets from around the world, post-publication peer-review sites, reference managers like Mendeley, and public policy documents.

Librarians can get free access to the Altmetric Explorer and free services for institutional repositories. You can also request trial access to Altmetric for Institutions.

3. Integrate altmetrics into library outreach and education

Librarians are often asked to describe Open Access publishing choices to both faculty and students and teach how to gather evidence of impact for hiring, promotion, and tenure. These opportunities–whether one on one or in group settings like faculty meetings–can allow librarians to introduce altmetrics.

Discussing altmetrics in the context of Open Access publishing helps “sell” the benefits of OA: download counts like those that appear in PLOS journals and institutional repositories make the reach of openly available work concrete. Altmetrics can also demonstrate that “impact” is tied to an individual’s scholarship rather than to a journal’s impact factor.

Similarly, researchers often use an author’s h-index in hiring, tenure, and promotion decisions, conflating the h-index with the quality of an individual’s work. Librarians are often asked to teach and assist with calculating an h-index in various databases (Web of Science, SCOPUS, etc.). Integrating altmetrics into these instruction sessions is akin to offering researchers additional primary-source choices for a research project. Librarians need to make researchers aware of the many tools they can use to evaluate the impact of scholarship, and of the relevant research–including the benefits of and drawbacks to different altmetrics.

So, what does altmetrics outreach look like on the ground? To start, check out these great presentations that librarians around the world have given on the benefits of using altmetrics (and particular altmetrics tools) in research and promotion.

Another great way to stay relevant on this subject is to find and recommend readings to your grad students and faculty on ways they can use altmetrics in their careers, like this one from our blog on the benefits of including altmetrics on your CV.

4. Discover the benefits that altmetrics offer librarians

There are reasons to learn about altmetrics beyond serving faculty and students. A major one is that many librarians are scholars themselves, and can use altmetrics to better understand the diverse impact of their articles, presentations, and white papers. Consider putting altmetrics on your own CV, and advocating the use of altmetrics among library faculty who are assembling tenure and promotion packages.

Librarians also produce and support terabytes’ worth of scholarly content that’s intended for others’ use, usually in the form of digital special collections and institutional repository holdings. Altmetrics can help librarians understand the impacts of these non-traditional scholarly outputs, and provide hard evidence of their use beyond ‘hits’ and downloads–evidence that’s especially useful when making arguments for increased budgetary and administrative support.

It’s important that librarians explore the unique ways they can apply altmetrics to their own research and jobs, especially in light of recent initiatives to create recommended practices for the collection and use of altmetrics. What is useful to a computational biologist may not be useful for a librarian (and vice versa). Get to know the research and tools and figure out ways to use them to your own ends.

There’s a lot happening right now in the altmetrics space, and it can sometimes be overwhelming for librarians to keep up with and understand. By following the steps outlined above, you’ll be well positioned to inform and support researchers, administrators, and library decision makers in their use. And in doing so, you’ll be indispensable in this new era of web-native research.

Are you a librarian who’s using altmetrics? Share your experiences in the comments below!

This post has been adapted from the 2013 C&RL News article, “Riding the crest of the altmetrics wave: How librarians can help prepare faculty for the next generation of research impact metrics” by Lapinski, Piwowar, and Priem.

Ten reasons you should put altmetrics on your CV right now

If you don’t include altmetrics on your CV, you’re missing out in a big way.

There are many benefits to scholars and scholarship when altmetrics are embedded in a CV.

Altmetrics can:

  1. provide additional information;
  2. de-emphasize inappropriate metrics;
  3. uncover the impact of just-published work;
  4. legitimize all types of scholarly products;
  5. recognize diverse impact flavors;
  6. reward effective efforts to facilitate reuse;
  7. encourage a focus on public engagement;
  8. facilitate qualitative exploration;
  9. empower publication choice; and
  10. spur innovation in research evaluation.

In this post, we’ll detail why these benefits are important to your career, and also recommend the ways you should–and shouldn’t–include altmetrics in your CV.

1. Altmetrics provide additional information

The most obvious benefit of including altmetrics on a CV is that you’re providing more information than your CV’s readers already have. Readers can still assess the CV items just as they’ve always done: based on title, journal, and author list, and maybe–if they’re motivated–by reading or reviewing the research product itself. Altmetrics have the added benefit of allowing readers to dig into the post-publication impact of your work.

2. Altmetrics de-emphasize inappropriate metrics

It’s generally regarded as poor form to evaluate an article based on a journal title or impact factor. Why? Because journal impact factors vary across fields, and because an article often receives more or less attention than its journal container suggests.

But what else are readers of a CV to do? Most of us don’t have enough domain expertise to dig into each item and assess its merits based on a careful reading, even if we did have time. We need help, but traditional CVs don’t provide enough information to assess the work on anything but journal title.

Providing article-level citations and altmetrics in a CV gives readers more information, thereby de-emphasizing evaluation based on journal rank.

3. Altmetrics uncover the impact of just-published work

Why not suggest that we include citation counts in CVs, and leave it at that? Why go so far as altmetrics? The reason is that altmetrics have benefits that complement the weaknesses of a citation-based solution.

Timeliness is the most obvious benefit of altmetrics. Citations take years to accrue, which can be a problem for graduate students who are applying for jobs soon after publishing their first papers, and for promotion candidates whose most profound work was published only shortly before review.

Multiple research studies have found that counts of downloads, bookmarks and tweets correlate with citations, yet accrue much more quickly, often in weeks or months rather than years. Using timely metrics allows researchers to showcase the impact of their most recent work.

4. Altmetrics legitimize all types of scholarly products

How can readers of a CV know if your included dataset, software project, or technical report is any good?

You can’t judge its quality and impact based on the reputation of the journal that published it, since datasets and software aren’t published in journals. And even if they were, we wouldn’t want to promote the poor practice of judging the impact of an item by the impact of its container.

How, then, can alternative scholarly products be more than just space-filler on a CV?

The answer is product-level metrics. Like article-level metrics do for journal articles, product-level metrics provide the needed evidence to convince evaluators that a dataset or software package or white paper has made a difference. These types of products often make impacts in ways that aren’t captured by standard attribution mechanisms like citations. Altmetrics are key to communicating the full picture of how a product has influenced a field.

5. Altmetrics recognize diverse impact flavors

The impact of a research paper has a flavor. There are scholarly flavors (a great methods section bookmarked for later reference or controversial claims that change a field), public flavors (“sexy” research that captures the imagination or data from a paper that’s used in the classroom), and flavors that fall into the area in between (research that informs public policy or a paper that’s widely used in clinical practice).

We don’t yet know how many flavors of impact there are, but it would be a safe bet that scholarship and society need them all. The goal isn’t to compare flavors: one flavor isn’t objectively better than another. They each have to be appreciated on their own merits for the needs they meet.

To appreciate the impact flavor of items on a CV, we need to be able to tell the flavors apart. (Citations alone can’t fully inform what kind of difference a research paper has made on the world. They are important, but not enough.) This is where altmetrics come in. By analyzing patterns in what people are reading, bookmarking, sharing, discussing and citing online we can start to figure out what kind – what flavor – of impact a research output is making.

More research is needed to understand the flavor palette, how to classify impact flavor and what it means. In the meantime, exposing raw information about downloads, shares, bookmarks and the like starts to give a peek into impact flavor beyond just citations.
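
One naive way to make the idea concrete is to bucket raw counts by the audience each source roughly represents, then see which bucket dominates. This is a toy heuristic for illustration only; the source-to-audience mapping and the 2× dominance threshold are our assumptions, not a method from the altmetrics literature:

```python
# Toy sketch: bucket metric sources by the audience they roughly represent,
# then report which bucket dominates. The source-to-audience mapping and the
# 2x dominance rule are illustrative assumptions, not a published method.
SCHOLARLY = {"citations", "mendeley_saves", "f1000_reviews"}
PUBLIC = {"tweets", "facebook_shares", "news_mentions"}

def impact_flavor(counts):
    """counts: dict mapping a metric source name to its raw count."""
    scholarly = sum(v for k, v in counts.items() if k in SCHOLARLY)
    public = sum(v for k, v in counts.items() if k in PUBLIC)
    if scholarly == public == 0:
        return "no recorded impact yet"
    if scholarly >= 2 * public:
        return "mostly scholarly"
    if public >= 2 * scholarly:
        return "mostly public"
    return "mixed scholarly/public"

print(impact_flavor({"citations": 40, "mendeley_saves": 80, "tweets": 5}))
# -> mostly scholarly
```

Real flavor analysis would need normalized, field-aware metrics rather than raw counts, but even this crude split shows how a mix of metrics can be read as a flavor rather than a single score.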

6. Altmetrics reward efforts to facilitate reuse

Reusing research – for replication, follow-up studies and entirely new purposes – reduces waste and spurs innovation. But it does take a bit of work to make your research reusable, and that work should be recognized using altmetrics.

There are a number of ways authors can make their research easier to reuse. They can make article text available for free with broad reuse rights. They can choose to publish in venues with liberal text-mining policies that invest in disseminating machine-friendly versions of articles and figures.

Authors can write detailed descriptions of their methods, materials, datasets and software and make them openly available for reuse. They can even go further, experimenting with executable papers, versioned papers, open peer review, semantic markup and so on.

When these additional steps result in increased reuse, it will likely be reflected in downloads, bookmarks, discussions and possibly citations. Including altmetrics in CVs will reward investigators who have invested their time to make their research reusable, and will encourage others to do so in the future.

7. Altmetrics can encourage a focus on public engagement

The research community, as well as society as a whole, benefits when research results are discussed outside the Ivory Tower. Engaging the public is essential for future funding, recruitment and accountability.

Today, however, researchers have little incentive to engage in outreach or make their research accessible to the public. By highlighting evidence of public engagement like tweets, blog posts and mainstream media coverage, altmetrics on a CV can reward researchers who choose to invest in public engagement activities.

8. Altmetrics facilitate qualitative exploration

Including altmetrics in a CV isn’t all about the numbers! Just as we hope many people who skim our CVs will stop to read our papers and explore our software packages, so too we can hope that interested parties will click through to explore the details of altmetrics engagement for themselves.

Who is discussing an article? What are they saying? Who has bookmarked a dataset? What are they using it for? As we discuss at the end of this post, including provenance information is crucial for trustworthy altmetrics. It also provides great information that helps CV readers move beyond the numbers and jump into qualitative exploration of impact.

9. Altmetrics empower publication choice

Publishing in a new or innovative journal can be risky. Many authors are hesitant to publish their best work somewhere new or with a relatively low impact factor. Altmetrics can remedy this by highlighting work based on its post-publication impact, rather than the title of the journal it was published in. Authors will be empowered to choose publication venues they feel are most appropriate, leveling the playing field for what might otherwise be considered risky choices.

Successful publishing innovators will also benefit. New journals won’t have to wait two years to get an impact factor before they can compete. Publishing venues that increase access and reuse will be particularly attractive. This change will spur innovation and support the many publishing options that have recently debuted, such as eLife, PeerJ, F1000 Research and others.

10. Altmetrics spur innovation in research evaluation

Finally, including altmetrics on CVs will engage researchers directly in research evaluation. Researchers are evaluated all the time, but often behind closed doors, using data and tools they don’t have access to. Encouraging researchers to tell their own impact stories on their CVs, using broad sources of data, will help spur a much-needed conversation about how research evaluation is done and should be done in the future.

OK, so how can you do it right?

There can be risks to including altmetrics data on a CV, particularly if the data is presented or interpreted without due care or common sense.

Altmetrics data should be presented in a way that is accurate, auditable and meaningful:

  • Accurate data is up-to-date, well-described, and has been filtered to remove attempts at deceitful gaming.
  • Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources and access by anyone without a subscription.
  • Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial. How many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, localized to domain or type of product, makes it easy to interpret the data responsibly.
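
To sketch what percentile-based context looks like in practice, here is the core calculation. The reference counts below are invented; a real tool would draw the reference set from comparable products (same field, same year, same product type):

```python
def percentile(count, reference_counts):
    """Percent of reference items with a strictly lower count."""
    below = sum(1 for c in reference_counts if c < count)
    return 100.0 * below / len(reference_counts)

# Hypothetical tweet counts for a set of comparable papers
reference = [0, 0, 1, 2, 2, 3, 5, 8, 14, 40]
print(percentile(5, reference))  # -> 60.0
```

So five tweets would put a paper in the 60th percentile of this (made-up) reference set: modest in absolute terms, but above most of its peers, which is exactly the kind of context a raw count alone can’t provide.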

Assuming these presentation requirements are met, how should the data be interpreted? We strongly recommend that altmetrics be considered not as a replacement for careful expert evaluation but as a supplement. Because they are still in their infancy, we should view altmetrics as a way to ground subjective assessment in real data; a way to start conversations, not end them.

Given this approach, at least three varieties of interpretation are appropriate: signaling, highlighting and discovery. A CV with altmetrics clearly signals that a scholar is abreast of innovations in scholarly communication and serious about communicating the impact of scholarship in meaningful ways. Altmetrics can also be used to highlight research products that might otherwise go unnoticed: a highly downloaded dataset or a track record of F1000-reviewed papers suggests work worthy of a second look. Finally, as we described above, auditable altmetrics data can be used by evaluators as a jumping off point for discovery about who is interested in the research, what they are doing with it, and how they are using it.

How to Get Started

How can you add altmetrics to your own CV or, if you are a librarian, empower scholars to add altmetrics to theirs?

Start by experimenting with altmetrics for yourself. Play with the tools, explore and suggest improvements. Librarians can also spread the word on their campuses and beyond through writing, teaching and outreach. Finally, if you’re in a position to hire, promote, or review grant applications, explicitly welcome diverse evidence of impact when you solicit CVs.

What are your thoughts on using altmetrics on a CV? Would you welcome them as a reviewer, or choose to ignore them? Tell us in the comments section below.

This post has been adapted from “The Power of Altmetrics on a CV,” which appeared in the April/May 2013 issue of ASIS&T Bulletin.

Impactstory Advisor of the Month: Jon Tennant (June 2014)

Jon Tennant (blog, Twitter), a PhD candidate studying tetrapod biodiversity and extinction at Imperial College London, was one of the first scientists to join our recently launched Advisor program.

Within minutes of receiving his acceptance into the program, Jon was pounding the virtual pavement to let others know about Impactstory and the benefits it brings to scientists. For this reason–and the fact that Jon has done some cool stuff in addition to his research, like write a children’s book!–Jon’s our first Impactstory Advisor of the Month.

We chatted with Jon to learn more about how he uses Impactstory, what it’s like being an Advisor, and what he’s doing in other areas of his professional life.

Why did you initially decide to create an Impactstory profile?

A couple of years ago, I immersed myself into social media and the whole concept of ‘Web 2.0’. It was clear that the internet was capable of changing many aspects of the way in which we practice, communicate, and assess scientific research. There were so many tools though, and so much diversity, it was all a bit daunting, especially as someone so junior in their career. Although I guess that’s one of the advantages of being at this stage – I wasn’t tied down to any particular way of ‘doing science’ yet, and free to experiment.

Having followed the discussions on alternative and article-level metrics, when ImpactStory was released it seemed like a tool that could really make a difference for myself and the broader research community. At the time, it made no sense to me how the outputs of research were assessed – the name or the impact factor of a journal was given far too much meaning, and did nothing to really encapsulate the diversity of ways in which quality or impact, or putative pathways to impact, could be measured. ImpactStory seemed to offer a decent alternative, and hey look – it does! Actually, it’s not an alternative, but complementary tool for a range of methods in assessing how research is used.

Why did you decide to become an Advisor?

Pretty much for the reasons above! One thing I’m learning as a young scientist is that it’s easy to be part of an echo chamber on social media, advocating altmetrics and all the jazzy new aspects of research, but many scientists aren’t online. Getting those people involved in conversations, and alerting them to cool new tools is made a lot easier as an Advisor.

I reckon this type of community engagement is pretty important, especially in what appears to be such a crucial transitional phase for researchers, including things like open access and data, and the way in which research is assessed (e.g., through the REF here in the UK). ImpactStory obviously has a role in making this much easier for academics.

How have you been spreading the word about Impactstory in your first month as an Advisor?

Mostly sharing stickers! They actually work really well in getting people’s attention. They’re even more doubly useful when people ask things like “What’s a h-index”, so you can actually use them as a basis for further discussion. But yeah, I don’t really go out of my way to preach to people about altmetrics and ImpactStory – academics really don’t like being told what they should be doing and things, especially at my university. I prefer to kind of hang back, wait for discussions, and inject that things like altmetrics exist, and could be really useful when combined with things like a social media presence, or an ORCID, and that they are one of an integrated set of tools that can be really useful for assessing how your research is being used, as well as a kind of personal tracking device. I’d love to hold an ImpactStory/altmetrics Q and A or workshop at some point in the future.

You just wrote a children’s book about dinosaurs–tell us about it!

Let it be known that you brought this up, not me 😉

So, pretty much just by having a social media presence (mostly through blogging), I was asked to write a kids’ book on dinosaurs! Of course I said yes, and along with a talented artist, we created a book with pop-out dinosaurs that you can reconstruct into your very own little models! You can pre-order it here.* I think it’s out in October in the UK and USA. Is there an ImpactStory bit for that…? [ed: Not yet! Perhaps add it as a feature request on our Feedback forum? :)]

* (I don’t get royalties, so it’s not as bad promoting it…)

What’s the best part about your current gig as a PhD student at Imperial College London?

The freedom. I have an excellent supervisor who is happy to let me blog, tweet, attend science communication conferences and a whole range of activities that are complementary to my PhD, as long as the research gets done. So there’s a real diversity of things to do, and being in London there’s always something science-related going on, and there’s a great community vibe too, with people who work within the broader scope of science always coming together and interacting. Of course, the research itself is amazing – I work with a completely open database called the Palaeobiology Database/Fossilworks, where even the methods are open so anyone can play with science if they wish!

Thanks, Jon!

Jon is just one of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!