New Features and Usage-Based Pricing

Today we’re adding some features to the OpenAlex API: better search, content download, and new docs. Most importantly, we’re also introducing usage-based pricing.

New features

Advanced search at last

We’ve had lots of requests for advanced search features to support systematic reviews. Good news: they’re here!

  • Proximity search: find terms near each other
  • Exact matching: skip stemming when you need precision
  • Wildcards: for when you’re not sure of the exact form
  • Lonnnnng queries: searches can be up to several pages long (8 KB)

Find details and examples of advanced search in the new developer docs here.

Note for developers: the old filter syntax for search is now deprecated; the ?search= parameter approach remains. It’ll be the One Way To Do It moving forward. Filter searches will redirect to the ?search param.
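For example, a works search under the surviving syntax can be built like this (a minimal sketch in Python; the ?search= parameter is the one described above, and the base URL here is illustrative rather than official client code):

```python
from urllib.parse import urlencode

BASE = "https://api.openalex.org/works"  # illustrative base URL

def search_url(query: str, api_key: str) -> str:
    """Build a works search using the ?search= parameter,
    the one supported approach going forward."""
    return f"{BASE}?{urlencode({'search': query, 'api_key': api_key})}"

print(search_url("kelp biomechanics", "YOUR_KEY"))
```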

Semantic search

We’re also launching semantic search. Instead of just matching keywords, it uses embeddings to match the meaning of your search–so a search for “kelp biomechanics” also finds articles about algae and wave mechanics. But you don’t have to stop there: you can even paste a whole abstract into the search bar to find related papers!

Semantic search is in beta; we don’t recommend using it for sensitive production workflows yet. But we would love to hear your feedback! If it’s well-used we’ll continue to invest more resources into it.

Full-text downloads

We’re hosting PDFs and TEI XML for our 60M open-access works. You can search and filter for works of interest, narrow down to just those with PDFs, and then download the PDFs in bulk—all with the API. Or you can use our new OpenAlex CLI to do it from the command line, massively parallelized, in a single command. Or your agent can—they love CLIs.

openalex download \
  --api-key YOUR_KEY \
  --output ./climate-pdfs \
  --filter "topics.id:T10325,has_content.pdf:true" \
  --content pdf

See the full-text documentation for details.

New docs

We’ve completely rebuilt our documentation. The old docs are deprecated and will redirect soon. The new docs are clearer, cleaner, up to date, and AI-optimized. We want to make OpenAlex as easy as possible to use for everyone, whether they’re an expert or a novice vibe-coding their first app.

API keys are now required

As we announced in January, you’ll need an API key for all requests. Getting one is free and takes about 30 seconds: create an account at openalex.org, then grab your key at openalex.org/settings/api. You can still make a few calls without an API key for demo purposes, but it’s not suitable for any kind of production use. The API keys are essential for our new usage-based pricing model. What’s a usage-based pricing model? Gentle Reader, a mere centimeter now separates you from the answer. 👇

Usage-based pricing

Different API operations cost us different amounts to run. Doing stuff with PDFs is expensive, but looking up a single work by ID is nearly free. We think it’s essential that our pricing reflects these actual costs. Usage-based pricing is a natural fit for this: it’s transparent, sustainable, and fair.

Here’s what things cost. See the developer docs for more details.

endpoint                          cost per call   cost per 1,000 calls
single work lookup by DOI or ID   $0              $0
list and filter                   $0.0001         $0.10
search                            $0.001          $1.00
PDF/XML download                  $0.01           $10.00

Free usage

Every API key gets $1 of free usage per day. We’ve always subsidized free users using revenue from paying ones–this makes the exact extent of that subsidy clear and transparent.

What does that daily dollar get you? Assuming you return 100 works per request:

endpoint                          daily free calls   daily free results
single work lookup by DOI or ID   unlimited          unlimited
list and filter                   10,000             1,000,000
search                            1,000              100,000
PDF/XML download                  100                100

To use a real-world example: grabbing all 694k works by Finnish authors takes about 7k paginated requests at $0.10 per thousand or $0.70. That’s covered by your free daily allowance. But if you want all 9 million works from Japan, that will cost about $9. (You could even download all 480M works in OpenAlex this way for $480—but don’t do that lol, download the full dataset instead, it’s free).
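The arithmetic above generalizes (a back-of-the-envelope sketch in Python, using the list/filter price from the table and the 100-results-per-page figure assumed earlier):

```python
import math

# Per-call prices from the pricing table above
PRICE_PER_CALL = {"lookup": 0.0, "list": 0.0001, "search": 0.001, "download": 0.01}

def list_cost(n_results: int, per_page: int = 100) -> float:
    """Estimated cost of paging through n_results with list/filter calls."""
    calls = math.ceil(n_results / per_page)
    return calls * PRICE_PER_CALL["list"]

print(round(list_cost(694_000), 2))    # works by Finnish authors
print(round(list_cost(9_000_000), 2))  # works from Japan
```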

It’s easy to track your usage: every API response includes headers showing how much you’ve spent and how much you’ve got left. You can also check openalex.org/settings/usage anytime.
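A client can watch those headers to throttle itself. A toy sketch (the header names below are hypothetical placeholders for illustration; check the headers your responses actually include):

```python
def remaining_budget(headers: dict) -> float:
    """Compute what's left of the daily allowance from response headers.
    'x-usage-spent' and 'x-usage-limit' are made-up names, not the real ones."""
    spent = float(headers.get("x-usage-spent", "0"))
    limit = float(headers.get("x-usage-limit", "1"))  # $1/day free allowance
    return limit - spent

print(remaining_budget({"x-usage-spent": "0.25", "x-usage-limit": "1.00"}))
```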

Prepaid usage

Most users will find that the free allowance covers all their needs. Some projects, though, will need more. The great thing about usage-based pricing is that most of the time this will only cost you a few bucks: you’re just paying for what you need. You can buy prepaid usage in about a minute with your credit card, whenever you want, however much you want. It supplements your daily free allowance.

Organizational plans

Organizations also buy prepaid usage. But many will want to get annual plans instead, which offer major discounts, data sync, curation dashboards, and more. Check out our new Member, Member+, and Supporter plans for more details.

FAQ

I thought it was free? The data remains free. The full OpenAlex dataset—all 480M works, all the metadata—is free to download, share, remix, and build on. We’re committed to keeping it sustainably free by charging for a service (the API) built on that dataset. Free data, paid service–this is the path laid out in the POSI principles, which we’ve signed and enthusiastically support.

How do I track my usage? Every API response includes usage info; you can also call the rate-limit endpoint or check your usage page on openalex.org. Learn more here.

How is my usage data used? We analyze usage data to improve the overall service, and we provide institutions with aggregated usage summaries upon request. We only collect what we need to run OpenAlex. We aren’t building tools to monitor individuals and we don’t sell your data. You can read our full privacy policy here.

Why charge per request instead of per result? We’re trying to link our costs to our pricing, and our costs mostly scale with requests, not results; a search that returns 10 results costs us about the same as one that returns 10,000.

Will prices change?

Yes, probably. The point of this model is to keep our prices tightly linked to our costs, and our costs will likely change with new tech, new use cases, and new data.

Where from here?

AI accelerates every day. The future of knowledge is being rebuilt, right now. If we build it on a checkerboard of walled gardens, we build a fragmented, incoherent future for scholarship and humanity.

We think OpenAlex can help avoid that future. We’re gathering and connecting the literature into a cohesive living library: complete, organized, and accessible to everyone. Today’s new pricing model helps us stay in this for the long haul.

An API-based sustainability model lets us deliver (and monetize) value in the post-GUI era. Soon, users won’t go to openalex.org (or any SaaS website), they’ll use APIs to vibe-code a custom interface for any question in minutes. [1] The post-GUI world will be tough on some open sustainability models. But it’s also an amazing opportunity for open infrastructure, if we adapt our pricing model correctly. That’s what we’re doing today.

We’re so very excited about this next chapter. Questions? Hit us up at support@openalex.org.

Let’s build!

[1] Check out our Q1 town hall for more on our post-GUI strategy, and check out this vibe-coding webinar to see several real-life examples of building five-minute custom OpenAlex dashboards.

OpenAlex rewrite (“Walden”) launch!

Today, OpenAlex gets a new engine.

After a year of rebuilding, refactoring, and retesting, the Walden rewrite is now live — powering all of OpenAlex. It’s the same dataset shape you know, but faster, cleaner, and more complete.

You’ll notice better references, better OA detection, better language and license coverage, better everything. We’ve added 190 million new works, including datasets, software, and other research objects from DataCite and thousands of repositories. And thanks to our new foundation, fixes and improvements now roll out in days, not months.

Want to see exactly what changed? Check out OREO — the OpenAlex Rewrite Evaluation Overview — to compare old vs. new data in detail. [edit Dec 13, 2025: OREO is no longer up because the legacy OpenAlex data is no longer being updated…it’s all Walden now, so there’s no comparator].

And if you’d like to dig into the full list of updates, the Walden release notes have you covered.

For the next few weeks, you can still access the old dataset with data-version=1, and starting tomorrow, you can download full snapshots of both the legacy and Walden datasets in the usual way.

The rebuild is done. The road ahead is wide open.

Onward.

OpenAlex rewrite enters beta! 🎉

It’s a big week at OpenAlex. On Monday, we announced that OpenAlex is now our top-level brand (and retired the “OurResearch” name). Yesterday we unveiled our new logo. And today, we’re thrilled to launch the beta release of our fully-rewritten codebase (codenamed Walden)!

Walden is faster, bigger, and more maintainable–that means quicker bug fixes, more content, easier feature development, and a smoother experience all around.

Throughout October, we’ll be running Walden and the old system (Classic) side by side, with Classic remaining the default. On November 1, 2025, Walden becomes the default, and we’ll publish the last data snapshot from the old system (more info on timelines here).

How to test-drive Walden

Walden beta is already live in the API and UI so you can start exploring it right away!

Just remember that it’s still in beta: there are lots of known issues and it’s changing every day. If you notice an issue that’s not already in the OREO tests or known issues, report it here.

Key improvements

When you check it out, what should you expect to see? The best way to view a list of improvements is to check out the tests in OREO, especially work tests. But here’s a high-level overview:

  • 150M+ new works: Newly indexed articles, books, datasets, software, dissertations, and more! You can explore just the newly added works here.
  • Better consistency: Unpaywall and OpenAlex will now always agree.
  • Better metadata: more citations, more language and retraction coverage, better keywords, more OA data.

Looking Ahead

The last year of rewriting OpenAlex was tough. We couldn’t move as fast as we wanted on new features, and support often lagged. But now we’re equipped to move fast without breaking things. Expect faster improvements, better support, and more ambitious features dropping in Q4, including:

  • Community curation: fix mistakes (like in Wikipedia) and see them reflected in days.
  • Vector search endpoint: find relevant works and other entities based on semantic similarity of free-form text.
  • Download endpoint: access PDF text from a DOI or OpenAlex ID.
  • Better funding metadata: a new grants entity with better coverage of grant objects and linkages to research outputs and funders.

This is a turning point for OpenAlex—and we’re excited to build the future of research infrastructure together with you. The engine’s rebuilt. The road ahead is wide open. Let’s go.

PS want to learn more about Walden? Come to our webinar Oct 7th at 10am Eastern. You can register to attend here.

New OpenAlex API features!

We’ve got a ton of great API improvements to report! If you’re an API user, there’s a good chance there’s something in here you’re gonna love.

Search

You can now search both titles and abstracts. We’ve also implemented stemming, so a search for “frogs” now automatically gets you results mentioning “frog,” too. Thanks to these changes, searches for works now deliver around 10x more results. This can all be accessed using the new search query parameter.

New entity filters

We’ve added support for tons of new filters, which are documented here. You can now:

  • get all of a work’s outgoing citations (i.e., its references section) with a single query.
  • search within each work’s raw affiliation data to find an arbitrary string (e.g., a specific department within an organization).
  • filter on whether or not an entity has a canonical external ID (works: has_doi, authors: has_orcid, etc.).
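As a sketch, filters like these compose into a query string (the has_doi key comes from the list above; publication_year is a hypothetical second clause, and comma-as-AND plus the base URL are assumptions for illustration):

```python
from urllib.parse import urlencode

BASE = "https://api.openalex.org/works"  # illustrative base URL

def filter_url(**clauses: str) -> str:
    """Join key:value clauses with commas (assumed here to mean AND)."""
    expr = ",".join(f"{k}:{v}" for k, v in clauses.items())
    return f"{BASE}?{urlencode({'filter': expr})}"

print(filter_url(has_doi="true", publication_year="2021"))
```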

Request multiple records by ID at once

This has been our most-requested feature and we’re super excited to roll it out! By using the new OR operator, you can request up to 50 entities in a single API call. You can use any ID we support–DOI, ISSN, OpenAlex ID, etc.
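A sketch of batching lookups (the 50-entity cap comes from the text above; the pipe character as the OR separator and the base URL are assumptions for illustration):

```python
from urllib.parse import urlencode

BASE = "https://api.openalex.org/works"  # illustrative base URL

def batch_url(ids: list, id_field: str = "doi") -> str:
    """OR together up to 50 IDs in one request.
    The pipe ('|') as the OR separator is an assumption."""
    if len(ids) > 50:
        raise ValueError("at most 50 entities per call")
    return f"{BASE}?{urlencode({'filter': id_field + ':' + '|'.join(ids)})}"

print(batch_url(["10.1234/a", "10.1234/b"]))
```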

Deep paging

Using cursor-based paging, you can now retrieve an infinite number of results (it used to be just the top 10,000). But remember: if you want to download the entire dataset, please use the snapshot, not the API! The snapshot is the exact same data in the exact same format, but much much faster and cheaper for you and us.
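Cursor paging usually looks like the loop below. This is a sketch with a stubbed-out fetch; the "*" starting cursor and the shape of the returned next-cursor are assumptions about the API, not its documented contract:

```python
def fetch_page(cursor):
    """Stub standing in for one HTTP call with a cursor parameter;
    returns (results, next_cursor), where None means no more pages."""
    pages = {"*": ([1, 2], "c1"), "c1": ([3], None)}
    return pages[cursor]

def fetch_all():
    results, cursor = [], "*"  # '*' assumed to start a cursor session
    while cursor is not None:
        page, cursor = fetch_page(cursor)
        results.extend(page)
    return results

print(fetch_all())
```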

More groups in group_by queries

We now return the top 200 groups (it used to be just the top 50).

New Autocomplete endpoint

Our new autocomplete endpoint makes it dead easy to use our data to power an autocomplete/typeahead widget in your own projects. It works for any of our five entity types (works, authors, venues, institutions, or concepts). If you’ve got users inputting the names of journals, institutions, or other entities, now you can easily let them choose an entity instead of entering free text–and then you can store the ID (ISSN, ROR, whatever) instead of passing strings around everywhere.

Better docs

In addition to documenting the new features above, we’ve also added lots of new documentation for existing features, addressing our most frequent questions and requests.

Thanks to everyone who’s been in touch to ask for new features, report bugs, and tell us where we can improve (also where we’re doing well; we’re ok with that too).

We’ll continue improving the API and the docs. We’re also putting tons of work into improving the underlying dataset’s accuracy and coverage, and we’re happy to report that we’ve improved a lot on what we inherited from MAG, with more improvements to come. We’ve delayed the launch of the full web UI, but expect that in the summer…we are so excited about all the possibilities that’s going to open up.

Green Open Access comes of age

This morning David Prosser, executive director of Research Libraries UK, tweeted, “So we have @unpaywall, @oaDOI_org, PubMed icons – is the green #OA infrastructure reaching maturity?” (link).

We love this observation, and not just because two of the three projects he mentioned are from us at Impactstory 😀. We love it because we agree: Green OA infrastructure is at a tipping point where two decades of investment, a slew of new tools, and a flurry of new government mandates are about to make Green OA the scholarly publishing game-changer.

A lot of folks have suggested that Sci-Hub is scholarly publishing’s “Napster moment,” where the internet finally disrupts a very resilient, profitable niche market. That’s probably true. But just as the music industry shut down Napster, Elsevier will likely be able to shut down Sci-Hub. They’ve got both the money and the legal (though not moral) high ground, and that’s a tough combo to beat.

But the future is what comes after Napster. It’s in the iTunes and Spotifys of scholarly communication. We’ve built something to help create this future. It’s Unpaywall, a browser extension that instantly finds free, legal Green OA copies of paywalled research papers as you browse–like a master key to the research literature. If you haven’t tried it yet, install Unpaywall for free and give it a try.

Unpaywall has reached 5,000 active users in our first ten days of pre-release.

But Unpaywall is far from the only indication that we’re reaching a Green OA inflection point. Today is a great day to appreciate this, as there’s amazing Green OA news everywhere you look:

  • Unpaywall reached the 5000 Active Users milestone. We’re now delivering tens of thousands of OA articles to users in over 100 countries, and growing fast.
  • PubMed announced Institutional Repository LinkOut, which links every PubMed article to a free Green copy in institutional repositories where available. This is huge, since PubMed is one of the world’s most important portals to the research literature.
  • The Open Access Button announced a new integration with interlibrary loan that will make it even more useful for researchers looking for open content. Along with the interlibrary loan request, they send instructions to authors to help them self-archive closed publications.

Over the next few years, we’re going to see an explosion in the amount of research available openly, as government mandates in the US, UK, Europe, and beyond take force. As that happens, the raw material will be there to build completely new ways of searching, sharing, and accessing the research literature.

We think Unpaywall is a really powerful example: when there’s a big Get It Free button next to the Pay Money button on publisher pages, it starts to look like the game is changing. And it is changing. Unpaywall is just the beginning of the amazing open-access future we’re going to see. We can’t wait!

How to smash an interstellar paywall

Last month, hundreds of news outlets covered an amazing story: seven earth-sized planets were discovered, orbiting a nearby star. It was awesome. Less awesome: the paper with the details, published in the journal Nature, was paywalled. People couldn’t read it.

That’s messed up. We’re working to fix it, by releasing our new free Chrome extension Unpaywall. Using Unpaywall, you can get access to the article, and millions like it, instantly and legally. Let’s learn more.

First, is this really a problem? Surely Google can find the article. I mean, there might be aliens out there. We need to read about this. Here we go, let’s Google for “seven terrestrial planets nature article.” Great, there it is, first result. Click, and…

What, thirty-two bucks to read!? Well that’s that, I quit.

Or maybe there are some ways around the paywall? Well, you could know someone with access. My pal Cindy Wu helped her journal club out this way, offering on Twitter to email them a copy of the paper. But you have to follow Cindy on Twitter for that to work.

Or you could know the right places to look for access. Astronomers generally post their papers on a free web server called arXiv, and sure enough, if you search there you’ll find the Nature paper. But you have to know about arXiv for that to work. And check out those Google search results again: arXiv doesn’t appear.

Most people don’t know Cindy, or arXiv. And no one’s paying $32 for an article. So the knowledge in this paper, and thousands of papers like it, is locked away from the taxpayers who funded it. Research becomes the private reserve of those privileged few with the money, experience, or connections to get access.

We’re helping to change that.

Install our new, free Unpaywall Chrome extension and browse to the Nature article. See that little green tab on the right of the page? It means Unpaywall found a free version, the one the authors posted to ArXiv. Click the tab. Read for free. No special knowledge or searches or emails or anything else needed. 

Today you’ll find Unpaywall’s green tab on ten million articles, and that number is growing quickly thanks to the hard work of the open-access movement. Governments in the US, UK, Europe, and beyond are increasingly requiring that taxpayer-funded research be publicly available, and as they do, Unpaywall will get more and more effective.

Eventually, the paywalls will all fall. Till then, we’ll be standing next to ‘em, handing out ladders. Together with millions of principled scientists, libraries, techies, and activists, we’re helping make scholarly knowledge free to all humans. And whoever else is out there 😀 👽.

behind the scenes: cleaning dirty data


Dirty Data.  It’s everywhere!  And that’s expected and ok and even frankly good imho — it happens when people are doing complicated things, in the real world, with lots of edge cases, and moving fast.  Perfect is the enemy of good.


Alas, it’s definitely behind-the-scenes work to find and fix dirty data problems, which means none of us learn from each other in the process.  So — here’s a quick post about a dirty data issue we recently dealt with 🙂  Hopefully it’ll help you feel camaraderie, and maybe help some people using the BASE data.

We traced some oaDOI bugs to dirty records from PMC in the BASE open access aggregation database.

Most PMC records in BASE are really helpful — they include the title, author, and link to the full text resource in PMC.  For example, this record lists valid PMC and PubMed urls:

and this one lists the PMC and DOI urls:

The vast majority of PMC records in BASE look like this.  So until last week, to find PMC article links for oaDOI we looked up article titles in BASE and used the URL listed there to point to the free resource.

But!  We learned!  There is sometimes a bug!  This record has a broken PMC url — it lists http://www.ncbi.nlm.nih.gov/pmc/articles/PMC with no PMC id in it (see, look at the URL — there’s nothing about it that points to a specific article, right?).  To get the PMC link you’d have to follow the Pubmed link and then click to PMC from there.  (which does exist — here’s the PMC page which we wish the BASE record had pointed to).

That’s some dirty data.  And it gets worse.  Sometimes there is no pubmed link at all, like this one (correct PMC link exists):

and sometimes there is no valid URL, so there’s really no way to get there from here:

(Pretty cool that PMC lists this article from 1899, eh?  Edge cases for papers published more than 100 years ago seem fair, I’ve gotta admit 🙂 )

Anyway.  We found this dirty PMC data in BASE is infrequent, but common enough to cause more bugs than we’re comfortable with.  To work around the dirty data we’ve added a step — oaDOI now uses the DOI->PMCID lookup file offered by PMC to find PMC articles we might otherwise miss.  Adds a bit more complexity, but worth it in this case.

So, that’s This Week In Dirty Data from oaDOI!  🙂  Tune in next week for, um, something else 🙂

And don’t forget Open Data Day is Saturday March 4, 2017.   Perfect is the enemy of the good — make it open.

Introducing oaDOI: resolve a DOI straight to OA

Most papers that are free-to-read are available thanks to “green OA” copies posted in institutional or subject repositories.  The fact that these copies are available for free is fantastic, because anyone can read the research, but it does present a major challenge: given the DOI of a paper, how can we find the open version among so many different repositories?

The obvious answer is “Google Scholar” 🙂  And yup, that works great, and given the resources of Google will probably always be the most comprehensive solution.  But Google’s interface requires an extra search step, and its data isn’t open for others to build tools on top of.

We made a thing to fix that.  Introducing oaDOI:

We look for open copies of articles using the following data sources:

  • The Directory of Open Access Journals to see if it’s in their index of OA journals.
  • CrossRef’s license metadata field, to see if the publisher has reported an open license.
  • Our own custom list of DOI prefixes, to see if it’s in a known preprint repository.
  • DataCite, to see if it’s an open dataset.
  • The wonderful BASE OA search engine to see if there’s a Green OA copy of the article. BASE indexes 90mil+ open documents in 4000+ repositories by harvesting OAI-PMH metadata.
  • Repository pages directly, in cases where BASE was unable to determine openness.
  • Journal article pages directly, to see if there’s a free PDF link (this is great for detecting hybrid OA)
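Conceptually, the lookup is a cascade: try each source in the order listed and return the first hit. A toy sketch (the source order follows the list above; the lookup functions are hypothetical stand-ins, not oaDOI’s actual code):

```python
def find_oa_location(doi, sources):
    """Try each source in order; return (source_name, url) for the
    first hit, or None if no source has an open copy."""
    for name, lookup in sources:
        url = lookup(doi)
        if url:
            return name, url
    return None

# Toy stand-ins for the real lookups listed above
sources = [
    ("doaj", lambda doi: None),                               # not a DOAJ journal
    ("base", lambda doi: "https://example-repo.org/" + doi),  # Green OA copy found
]
print(find_oa_location("10.1038/nature21360", sources))
```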

oaDOI was inspired by the really cool DOAI.  oaDOI is a wrapper around the OA detection used by Impactstory. It’s open source of course, can be used as a lookup engine in Zotero, and has an easy and powerful API that returns license data and other good stuff.

Check it out at oadoi.org, let us know what you think (@oadoi_org), and help us spread the word!

What’s your #OAscore?

We’re all obsessed with self-measurement.

We measure how much we’re Liked online. We measure how many steps we take in a day. And as academics, we measure our success using publication counts, h-indices, and even Impact Factors.

But we’re missing something.

As academics, our fundamental job is not to amass citations, but to increase the collective wisdom of our species. It’s an important job. Maybe even a sacred one. It matters. And it’s one we profoundly fail at when we lock our work behind paywalls.

Given this, there’s a measurement that must outweigh all the others we use (and misuse) as researchers: how much of our work can be read?

This Open Access Week, we’re rolling out this measurement on Impactstory. It’s a simple number: what percentage of your work is free to read online? We’d argue that it’s perhaps the most important number associated with your professional life (unless maybe it’s the percentage of your work published with a robust license that allows reuse beyond reading…we’re calculating that too). We’re calling it your Open Access Score.

We’d like to issue a challenge to every researcher: find out your open access score, do one thing to raise it, and tell someone you did. It takes ten minutes, and it’s a concrete thing you can do to be proud of yourself as a scholar.

Here’s how to do it:

  1. Make an Impactstory profile. You’ll need a Twitter account and nothing more…it’s free, nonprofit, and takes less than five minutes. Plus along the way you’ll learn cool stuff about how often your research has been tweeted, blogged, and discussed online.
  2. Deposit just one of your papers into an Open Access repository. Again: it’s easy. Here are instructions.
  3. Once you’re done, update your Impactstory, and see your improved score.
  4. Tweet it. Let your community know you’ve made the world a richer, more beautiful place because you’ve increased the knowledge available to humanity. Just like that. Let’s spread that idea.

Measurement is controversial. It has pros and cons. But when you’re measuring the right things, it can be incredibly powerful. This OA Week, join us in measuring the right things. Find your #OAscore, make it better, tweet it out. If we’re going to measure steps, let’s make them steps that matter.


Crossposted on the Open Access Week blog.

Now, a better way to find and reward open access


There’s always been a wonderful connection between altmetrics and open science.

Altmetrics have helped to demonstrate the impact of open access publication. And since the beginning, altmetrics have excited and provoked ideas for new, open, and revolutionary science communication systems. In fact, the two communities have overlapped so much that altmetrics has been called a “school” of open science.

We’ve always seen it that way at Impactstory. We’re uninterested in bean-counting. We are interested in setting the stage for a second scientific revolution, one that will happen when two open networks intersect: a network of instantly-available diverse research products and a network of comprehensive, open, distributed significance indicators.

So along with promoting altmetrics, we’ve also been big on incentives for open access. And today we’re excited that we got a lot better at it.

We’re launching a new Open Access badge, backed by a really accurate new system for automatically detecting fulltext for online resources. It finds not just Gold OA, but also self-archived Green OA, hybrid OA, and born-open products like research datasets.

A lot of other projects have worked on this sticky problem before us, including the Open Article Gauge, OACensus, Dissemin, and the Open Access Button. Admirably, these have all been open-source projects, so we’ve been able to reuse lots of their great ideas.

Then we’ve added oodles of our own ideas and techniques, along with plenty of research and testing. The result? Impactstory is now the best, most accurate way to automatically assess openness of publications. We’re proud of that.

And we know this is just the beginning! Fork our code or send us a pull request if you want to make this even better. Here’s a list of where we check for OA to get you started:

  • The Directory of Open Access Journals, to see if it’s in their index of OA journals.
  • CrossRef’s license metadata field, to see if the publisher has uploaded an open license.
  • Our own custom list of DOI prefixes, to see if it’s in a known preprint repo.
  • DataCite, to see if it’s an open dataset.
  • The wonderful BASE OA search engine to see if there’s a Green OA copy of the article.
  • Repository pages directly, in cases where BASE was unable to determine openness.
  • Journal article pages directly, to see if there’s a free PDF link (this is great for detecting hybrid OA)

What’s it mean for you? Well, Impactstory is now a powerful tool for spreading the word about open access. We’ve found that seeing that openness badge–or OH NOES lack of a badge!–on their new profile is powerful for a researcher who might otherwise not think much about OA.

So, if you care about OA: challenge your colleagues to go make a free profile and see how open they really are. Or you can use our API to learn about the openness of groups of scholars (great for librarians, or for a presentation to your department). Just hit the endpoint http://impactstory.org/u/someones_orcid_id to find out the openness stats for anyone.

Hit us up with any thoughts or comments, and enjoy!