Economics of open science

The economics of open science describes the economic aspects of making a wide range of scientific outputs (publications, data, software) freely available to all levels of society.

Open science involves a plurality of economic models and goods. Journals and other academic institutions (like learned societies) have historically favored a knowledge club or toll access model: publications are managed as a community service for the selected benefit of academic readers and authors. During the second half of the 20th century, the "big 5" largest publishers (Elsevier, Springer, Wiley, Taylor & Francis and the American Chemical Society) partly absorbed or outcompeted non-profit structures and applied an industrial approach to scholarly publishing.

The development of the web shifted the focus of scholarly communication from publication to a large variety of outputs (data, software, metrics). It also challenged the values and the organization of existing actors, with the development of international initiatives in favor of open access and open science. While initially outpaced by new competitors, the main commercial publishers started to flip to author-pay models after 2000, funded through article processing charges and the negotiation of transformative deals. Actors like Elsevier or Wiley have diversified their activities from journal ownership to data analytics by developing a vertical integration of tools, databases and metrics monitoring academic activities. The structuration of a global open science movement, the enlargement of scientific readership beyond professional researchers and increasing concerns about the sustainability of key infrastructures have enabled the development of open science commons. Journals, platforms, infrastructures and repositories have been increasingly structured around a shared ecosystem of services and self-governance principles.

The costs and benefits of open science are difficult to assess due to the coexistence of several economic models and the untraceability of open diffusion. Open publishing is less costly overall than subscription models, on account of reduced externalities and economies of scale. Yet the conversion of leading publishers to open science has entailed a significant increase in article processing charges, as the prestige of well-known journals makes it possible to extract a high willingness to pay. Open science brings significant efficiency gains to academic research, especially regarding bibliographic and data search, identification of previous findings, and text and data mining projects. These benefits extend to non-academic research, as open access to data and publications eases the development of new commercial services and products. Although the overall economic and social impact of open science could be high, it has hardly been estimated.

The development of open science has created new forms of economic regulation of scientific publishing, as funders and institutions have come to acknowledge that this sector no longer operates in normal market conditions. International coordination efforts like cOAlition S attempt to set up global rules and norms to manage the transition to open science.

Economic models
Debates on the economic theory of open science have been largely influenced by the classic typology of economic goods distinguishing private goods, public goods, club goods and common-pool resources. According to a common definition matrix gradually developed by Paul Samuelson, Richard Musgrave and Elinor Ostrom, private goods and club goods are exclusive (they cannot be freely shared and are exclusively used by owners or members), while private goods and common goods are rivalrous (they cannot be consumed simultaneously).

In theory, the outputs of open science could be defined as public goods: they are not exclusive (free-licensed publications, data or software can be shared without restriction) and they are not subtractable (they can be copied indefinitely). In 2017 an OECD report underlined that research data "exhibit public good characteristics" as "it is not exhausted in consumption (i.e. it can be consumed many times without being diminished), and it may be inefficient to exclude potential users". For Elinor Ostrom and Charlotte Hess this approach does not fit the actual uses and constraints of knowledge online. Like shared natural resources, the outputs of open science can be polluted, exhausted or enclosed: "The parallel, yet contradictory trends, where, on the one hand, there is unprecedented access to information through the Internet but where, on the other, there are ever-greater restrictions on access (...) indicate the deep and perplexing characteristics of this resource". Additionally, in contrast to other forms of knowledge commons, open science actors continue to enforce exclusion rules for the creation, curation and administration of resources: "the scientific and scholarly commons furnishes information input into a scientific discovery but the Mertonian norms of priority award the property rights in the claim to whoever is first to publish."

The leading definitions of open access and open science are sufficiently ambiguous to allow for a plurality of allocation systems: "open access is a boundary object that does not refer to a common set of practices, assumptions or principles." Consequently, uses and models of open science can span the entire typology of economic goods:

                 Excludable        Non-excludable
Rivalrous        Private goods     Common-pool resources
Non-rivalrous    Club goods        Public goods

The coexistence of the differing economic models of open science remains an evolving process. Competing narratives of the future of open access involve all the potential axes of open science goods: they include the disruption of legacy scientific publishers by new competitors, the transformation of private scientific goods into public goods and the rehabilitation of community-led governance. Nikos Koutras has argued for a structural inflection of the role of commercial publishers, which would act more as an editorial service than as a gatekeeper, as "it is feasible for authors to not rely on [them]".

Models of open science are embedded in wider socio-economic structures. North-South inequalities remain a major structural factor that affects not only the access to and use of open science outputs, but also the discourses and representations surrounding open science.

Open science club
The economic theory of club goods was originally developed in 1965 by James Buchanan to complement the distinction between private and public goods. While clubs are private organizations, they also manage the allocation of resources between individual members, in a similar manner to a public service. Membership criteria are a fundamental feature of clubs and affect their efficiency: "The central question in a theory of clubs is that of determining the membership margin, so to speak, the size of the most desirable cost and consumption sharing arrangement."

Definitions of knowledge club
Before the Second World War, academic publishing was mostly characterized by a wide range of community-driven scholarly structures with little concern for profitability. They relied on informal community norms rather than commercial regulations. These structures have been described as knowledge clubs: "until the second part of the twentieth century, most journals could be assimilated to a club model".

While managed by a community and publicly available, knowledge clubs are demic and mostly used for the benefit of their members. As a defining feature of the club model, scientific authors are not paid for their publications: "ever since the first scientific journals were founded in 1665 in London and Paris, journals have not paid authors for articles." Acknowledgment and recognition by the relevant community of peers is the main incentive: "intangible rewards (made nearly tangible in tenure and promotion) compensate scholars for relinquishing royalties on their journal articles".

Users and consumers of club goods are basically the same population, as the core readers of scientific journals are also their core contributors: "the set of potential producers and the set of incumbent consumers are the same set". Determination of relevant membership and exclusion criteria plays a fundamental role in the management of the club. In contrast with other forms of clubs (such as health clubs), membership criteria of knowledge clubs are not enforced strictly but stem from widespread conventions: it "happens quite naturally (i.e. culturally) in scholarly knowledge clubs by simple cost of access in time and language." As there is no formal admission process, knowledge clubs can be joined by unreliable members, so long as they are willing to devote the necessary time to demonstrate that they adhere to common cultural values and customs: "Hostile pranks, such as the Sokal hoax/fraud, demonstrate that clubs may be hoodwinked by outsiders who apparently ‘speak their language’ but are in fact using it to challenge their knowledge."

The concept of knowledge club has highlighted the continuities between scientific publications and other forms of restrictive associations. Journals are strongly embedded in wider institutional networks and communities and cannot be dissociated from them: "More specialized journals appeared in the 18th and 19th centuries, most of which were published by learned societies. Only at the end of the 19th century did university presses gain importance as publishers of scholarly journals". While in their daily management knowledge clubs are not strictly separated from other economic actors, the interests of the community take precedence over any other economic incentive: "we see a journal as a club in which access to these services is internalised as a membership benefit. While the services might still be outsourced, in practice it can be seen that such a shift potentially has substantial political and economic consequences as to how we see the relations among players." As adhesion to a club is not exclusive, researchers are usually part of a complex network of clubs: "Membership of an academic institution with the relevant benefits, including access to subscription content, is another parallel club (...) Further work will be needed to define those situations which are better analysed as complex clubs, with differential membership contributions, and those situations where multiple clubs are interacting."

Community-led journals were progressively acquired or outcompeted by large international publishers after the Second World War: "The small society presses, struggling to cope with growing scale, were supported and then largely supplanted by the ‘Big 5’ commercial presses". While the knowledge club has receded, some of its conventions have persisted: "academic journals have retained their club-like qualities through blind peer review (even more through open review), and via editorial boards that are carefully constructed to ‘send the right signals’ in order to build prestige and quality assurance". The evaluation of scientific journals remained largely performed as a community service, with researchers submitting peer reviews for free. Journals continued to be officially managed by editorial committees, although under ownership by a large industrial structure their authority and their ability to set the policy of the publication were limited. Both the authors and the audience of academic publications have primarily non-commercial incentives: "When publishing articles in academic journals, most scholars are predominantly motivated by curiosity, priority and the expected gain in reputation, and much less so by any monetary rewards for the actual publications."

Knowledge club and open science
The OA Diamond Study states that non-commercial journals in open access "maintain a secular tradition of “club” journals, set up for the uses and interests of a specific closed community of knowledge." While access to read is no longer a material condition for membership, non-commercial journals are still largely managed for the benefit of a community. They are "still strongly embedded in institutional environments (from a legal and governance perspective)."

Club journals have gained a new relevance with the development of electronic publishing and played a fundamental role in the early development of online open science in the early 1990s. Pioneers of open access electronic publishing were non-commercial and community-driven initiatives that built on a trend of grassroots publishing innovation in the social sciences and the humanities: "In the late ‘80s and early ‘90s, a host of new journal titles launched on listservs and (later) the Web. Journals such as Postmodern Cultures, Surfaces, the Bryn Mawr Classical Review and the Public-Access Computer Systems Review were all managed by scholars and library workers rather than publishing professionals."

The development of the web and free editing tools for scientific publications like Open Journal Systems made it possible to run non-commercial journals and apply automated editorial routines at a limited cost: the outsourcing "model made economic sense as outsourced specialisation, but technological change has upended that logic by dramatically lowering the cost of in-house production." The spread of personal computers additionally reinforced the free exchange of services among club members, as new contributions could be made beyond peer review evaluation: "The wide availability of desktop hardware and software enabled new capabilities among authors, and an expectation from publishers that authors would self-manage much of the layout and editing of articles." Unless they had largely transformed their publication into a commercial activity, knowledge clubs and historical scientific societies have embraced open access: since 2014, the Royal Society has published the Royal Society Open Science journal.

New forms of research infrastructure developed in the context of open science have also retained some features of a "club model". While they create and manage common or public goods, research data repositories are frequently developed for the selected benefit and prestige of a few institutions: "In the case of research data repositories, such a “privileged” situation may arise for research funders, research centres, universities, or a disciplinary group or society that may gain recognition and further funding from supporting or hosting an open data repository." Yet a full membership or club model, which would also include restrictions on access, remains rare among research data repositories as "it is difficult to develop and maintain a group large enough to cover costs, affecting both scale and sustainability."

Despite these continuities, the articulation between the historical values of the club (exclusion, internal management, centrality) and the new values of open science remains challenging. Scientific clubs have long maintained an ambiguous position on the value of openness. While openness was held as a fundamental scientific principle that makes a free exchange of ideas possible, knowledge clubs have also relied on structural mechanisms of exclusion: "A claim of openness, and a narrative that this openness sits at the core of the value system, that is not quite realized in practice. The building of institutions that seek to enhance openness – the Royal Society holding formalized meetings, open to members, in the place of private demonstrations – that are nonetheless exclusive (...) Yet what is passed down to us today, is less that exclusive gentleman's club and more the core values that it sought to express."

Recent developments of open science and citizen science create a new source of tension that is still not resolved: "There are profound challenges to adapting our institutions to interact productively with differing knowledge systems, but we are perhaps for the first time well placed to do so." According to Samuel Moore, the main discourses on open science and scientific commons continue to conceal exclusionary practices reminiscent of historical knowledge clubs: "many uses of the term commons in scholarly communications are themselves ill- or un-defined and intend to evoke a kind of participatory, inclusive or freely accessible resource."

Scientific publishing: a hybrid market
In Western Europe and North America, direct ownership of journals by academic communities and institutions started to wane in the 1950s. The historical model of scientific periodicals seemed unable to keep up with the quickly increasing volume of publication in the context of big science. In 1959, Robert Maxwell created one of the first giants of scientific publishing, Pergamon, and through the following decades acquired hundreds of journals from small university presses and scientific societies. While these journals were not very profitable individually, such a high concentration made Pergamon "too big to fail" and able to impose its own conditions on academic libraries and other potential customers. This approach was applied as well by Springer and Elsevier. Scientific publishing in the second half of the 20th century has been described as a two-sided market with "significant network externalities" since "authors prefer to publish in academic journals with the largest readership, and readers prefer the journals with the best authors".

Due to high market concentration, scholarly journals are not "subtractable": they cannot be replaced by equivalent products on the market, which hinders competition. The development of bibliometric indices has reinforced this lock-in process, as highly cited journals receive more submissions.

By the 1980s, the CEO of Elsevier, Pierre Vinken, aimed for an annual growth rate of 20%, mostly through an uncontrolled rise of subscription prices. From 1985 to 2010, the budget allocated by American research libraries to periodicals increased five-fold.

As it became a private good, the scientific journal was also transformed into an industrial product, with an increased standardization of publishing norms, peer-review processes and copyrights. In contrast, smaller scientific publishers participate in a more typical publishing market, with persistent competition. Four out of the five most costly journals "are published by big five publishers".

With the conversion to electronic journals, scientific publishing became a hybrid market: along with individual subscriptions, leading publishers introduced big deals, or multi-year licenses for large bundles of journal titles. Big deals were typically negotiated with national networks of research libraries and academic institutions. Big deals proved advantageous to publishers as well, as they limited the "administrative costs" of managing a large number of contracts between journals and buyers. The bundling of thousands of journal titles made it possible to ensure the commercial viability of journals that would otherwise have had limited success. In 2011, David Colquhoun showed that 60% of the journals included in the Elsevier licenses granted to University College were accessed less than 300 times per year and 251 journals were not accessed even once. Even though the inflation of individual subscription costs has slowed, the total amount allocated to scientific periodicals has continued to rise: "While the North-American research libraries spent about a third more on journals than on monographs in 1987, this ratio had risen to about four to one by 2011."

Following the generalization of the big deal model, the main transactions between large publishers and scientific institutions no longer operated under normal market conditions with fixed public prices: "An optimal pricing strategy when bundling electronic information still does not exist (...) Prices will be determined in bilateral negotiations and every library pays a different price according to its institutional willingness to pay." Opting out of a big deal is a nearly impossible choice for major scientific institutions as "big deals of different publishers are complementary and not substitutes". Big deal licenses are usually covered by non-disclosure agreements, so that prices can be determined on the basis of the financial capacity of the buyer: "these practices give publishers pricing flexibility that allows them to try to charge the highest price that each institution is willing to pay and make it hard for new publishers to compete." For Jason Potts et al., this deviation from market norms shows that the market model is fundamentally less efficient than the knowledge club in the context of scientific publishing. It creates more ecosystemic costs than the direct management of journals and other scientific outputs by the community: "If our argument is that clubs and communities are capable of acting together to solve collective action provisioning problems in ways that are more efficient than either markets or the state, then the dissolution of clubs, or their inability to coordinate, will lead to inefficient or non-existent provisioning." Due to increased concentration, large publishers also became a powerful lobby: in the United States, Elsevier had long influenced some key policy issues relating to the economy of publishing and was able to significantly slow down the transition to open access.

The scientific market is structured by large-scale inequalities. Overpriced subscriptions and paywalls have been "a major barrier to progress in developing countries". While leading publishers have initiated programs at reduced costs for developing countries, their impact has been limited by the complexity of the subscription procedures: "the library or consortia have to go through a procedure to request the discount, assuming they know it exists and can navigate the bureaucracy involved."

From readers to authors: hybrid and full open access
The early development of open science and the open sharing of scientific publications on the web was at first a challenge for leading commercial publishers: the executive board of Elsevier "had failed to grasp the significance of electronic publishing altogether, and therefore the deadly danger that it posed—the danger, namely, that scientists would be able to manage without the journal". Initial experiments with electronic scientific journals were mostly led by non-commercial initiatives.

Commercially viable open access journals appeared in the late 1990s, under the leadership of new competitors such as the Public Library of Science (PLOS), MDPI or Hindawi. They generally rely on the author-pay model: the journal provides an editorial service to the author of a publication and charges an article processing charge to cover the editing costs. This practice predates open access, as subscription-based journals held by scientific societies occasionally charged for additional services (such as color photographs). For Peter Suber, this economic model is similar to broadcast television: "If advertisers can pay all the costs of production, then a TV studio can broadcast a show without charging viewers. In the case of scholarly research articles, the model works because authors are willing to relinquish royalties to get their message across and a growing number of institutions that employ researchers or fund research are willing to consider the cost of dissemination".

After 2000, large commercial publishers started to adopt a hybrid business model also termed open choice: authors can either submit a paywalled article for free or pay to make their article openly available. This practice has been increasingly criticized as double dipping. Due to the complexity of the publishing market, largely structured around big deal licenses, scientific publishers were in a position of "collecting money twice". Supervisors of open access policies have become more skeptical of the transitory nature of hybrid models. According to the lead coordinator of Plan S, Robert-Jan Smits, "when I then asked [the large publishers] when this transition would be completed, they were silent. The reason for this was clear: they saw hybrids as a way to continue the status quo." To overcome this risk, Plan S stated that transformative journals would no longer be compliant after a transition period ending in 2023.

While they were originally devised for a subscription-based model, big deals have been repurposed as large-scale agreements to generalize commercial open access. Since subscription costs were already bundled at a national level, they could be repurposed as publication licenses or article processing charge licenses as part of a journal flipping. In 2015, the Max Planck Society issued a white paper on the economic cost of the transformation to open access: "All the indications are that the money already invested in the research publishing system is sufficient to enable a transformation that will be sustainable for the future." In this context, the APC becomes the default business model for all journals: "Our own data analysis shows that there is enough money already circulating in the global market – money that is currently spent on scientific journals in the subscription system and that could be redirected and re-invested into open access business models to pay for APCs." The economic debate over journal flipping has largely set aside the issue of potential savings: negotiations aim rather to ensure a global conversion to open science at the same cost as existing subscription licenses. Several national negotiations with big publishers have attempted to implement the journal flipping approach, with limited success.

Commercial open access models based on article processing charges create new structural inequalities, no longer in terms of access to read but of access to publish. High APC prices mean that in practice Global South authors are bound to be cut off from major journals: "APCs also present a problem for researchers in the Global South, who typically have much smaller budgets to work with than their northern counterparts."

From publishing to analytics: diversification of revenue streams


In parallel with the open science movement, leading scientific publishers have diversified their activities beyond publishing and moved "from a content-provision to a data analytics business". The ubiquitous use of digital technologies in research activities and in the institutional management of science and higher education has been "creating new income streams". Large publishers have been well positioned in this new market, as they already have the know-how, the infrastructure and intellectual property on a large range of scientific outputs. Additionally, they have the necessary resources for long-term investments thanks to the accumulated high margins of journal subscriptions. The negotiation of "big deals" additionally created a favorable framework, as access to subscriptions or APCs could easily be tied to exclusive contracts on other databases and tools.

In the 2010s, leading publishers developed or acquired new key infrastructures for the management of scientific and pedagogic activities: "Elsevier has acquired and launched products that extend its influence and its ownership of the infrastructure to all stages of the academic knowledge production process". In the past two decades, there have been 340 mergers and acquisitions for Elsevier, 240 for Informa (Taylor & Francis) and 80 for Wiley. While until the 2010s most of these transactions were linked to academic content, the focus has since quickly expanded to academic "services" and other data analytics tools. Although all leading publishers attempt a vertical integration of existing and new services, it can take different shapes. For instance, there is "a clear attempt by Wiley to enhance its control over the university decision-making process in education, as Elsevier has for academic knowledge production." By 2019, Elsevier had either acquired or built a large portfolio of platforms, tools, databases and indicators covering all aspects and stages of scientific research:

"the largest supplier of academic journals is also in charge of evaluating and validating research quality and impact (e.g., Pure, Plum Analytics, Sci Val), identifying academic experts for potential employers (e.g., Expert Lookup), managing the research networking platforms through which to collaborate (e.g., SSRN, Hivebench, Mendeley), managing the tools through which to find funding (e.g., Plum X, Mendeley, Sci Val), and controlling the platforms through which to analyze and store researchers’ data (e.g., Hivebench, Mendeley)."

Since it has expanded beyond publishing, the vertical integration of privately owned infrastructures has become extensively integrated into daily research activities: "the privatised control of scholarly infrastructures is especially noticeable in the context of ‘vertical integration’ that publishers such as Elsevier and SpringerNature are seeking by controlling all aspects of the research lifecycle, from submission to publication and beyond". In contrast with publication, which was an outsourced business separated from institutional and community activities, the new services developed by large publishers are embedded in the infrastructure of universities and create potentially stronger dependency links: "Pure embeds Elsevier within the university workflow process through its abilities to manage research at the university level, including the provision of a dashboard to facilitate decision making by university research administrators (Elsevier, “Features”)."

Metrics and indicators are key components of vertical integration: "Elsevier's further move to offering metrics-based decision making is simultaneously a move to gain further influence in the entirety of the knowledge production process, as well as to further monetize its disproportionate ownership of content". While dependency on subscription journals has been weakened by the open science movement, metrics can create a new lock-in situation for scientific institutions: "For universities keen on raising or maintaining their rankings, publishing in Elsevier high impact journals may help them gain the advantage (...) vertical integration and the promotion of citation metrics and algorithmic recommendations may, in fact, constitute rent-seeking behavior designed to increase." Consequently, the shift of leading publishers to data analytics is not incompatible with the parallel development of a large APC market for open science publishing. For Samuel Moore, it is even "incentivised by the governmental policies for OA through APCs, repository services" as it creates a new "need to track compliance".

The emerging open science market has been compared with the business models of social networks, search engines and other forms of platform capitalism. While content access is free, it is indirectly paid for through data extraction and surveillance: "If the primary negative manifestation of market power in the publishing sector is high paywall price (lack of access, therefore), the result of monopolistic competition in academic data analytics will be the combination of dependence and surveillance that we might associate with, e.g., Facebook." Increasing similarities with other digital platforms may have contributed to the increased regulation of the academic publishing market in Europe in the 2010s: "It's why Facebook, Apple and Google are now dominant: once they are controlling X per cent of the market, it's almost impossible for a competitor to come up."

Open Science Commons
The concept of commons was originally developed to describe the management of "a resource shared by a group of people" and the establishment of common governance rules to ensure that the resource is not overused or polluted (which would result in a tragedy of the commons). Similarly to club goods, common goods are used and maintained by a community. Yet membership is no longer exclusive: "toll goods (also called club goods) share with private goods the relative ease of exclusion". Typical forms of commons include shared natural resources like timber, berries or fish. They are managed by informal local associations. Governance rules are neither rigid nor pre-existing but have to be adapted to the specific requirements of the resource and the local environment: "One of the central findings was that an extremely rich variety of specific rules were used in systems sustainable over a long time period. No single set of specific rules, on the other hand, had a clear association with success." Common goods are also differentiated from public goods (such as air or radio waves) due to their subtractability: natural resources can be depleted, and rules have to be put in place to ensure they will not be overused.

The emergence of digital knowledge commons
Until the 1990s, open knowledge was not considered a commons in economic theory but a "classic example of a pure public good, a good available to all and where one person's use does not subtract from another's use". For Elinor Ostrom and Charlotte Hess, this framework is no longer viable, as the principle of non-excludability has been significantly weakened: "new technologies can enable the capture of what were once free and open public goods (...) Knowledge, which can seem so ubiquitous in digital form, is, in reality, more vulnerable than ever before". In a scientific context, examples of new enclosures of public goods include the surveillance data systems put in place by Elsevier, Springer or academic social networks, which capture activities such as social interactions and reference collections. The uncontrolled development of the early web highlighted the need for common management of knowledge resources: "People started to notice behaviors and conditions on the web — congestion, free riding, conflict, overuse, and pollution — that had long been identified with other types of commons."

The open access movement of the 1990s and early 2000s aimed to ensure that science would be a public good, freely usable by all. The unlimited potential circulation of online content has transformed historical forms of non-commercial open access into a commons, at least from a reader's point of view: "By definition, OA literature excludes no one, or at least no one with an Internet connection. By contrast, non-OA electronic journals try very hard to exclude nonsubscribers from reading the articles".

While "knowledge commons is not synonymous with open access", the process of making open access a reality has also incidentally created a global "community network of the open-access movement": decisions had to be made regarding a commonly accepted definition of open access, free licenses and the potential exclusion of non-open access initiatives, decisions embodied in the Budapest Open Access Initiative.

The early open science infrastructures aimed to ensure the circulation of scientific publications as a common good. Archives and institutional repositories were conceived as local or global community services. In August 1991, Paul Ginsparg created the first version of arXiv at the Los Alamos National Laboratory, in response to the recurring storage issues caused in academic mailboxes by the increasing sharing of scientific articles. Repositories embody numerous characteristics of common resources under the definition of Elinor Ostrom: they maintain and protect a scientific resource, they implement weak requirements for membership (submissions are not peer-reviewed) and they prioritize coordination and shared management over competition. By the early 2000s, numerous repositories strived to "comply with the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) of 1999, which ensures the interoperability of different repositories for the purpose of locating their contents".
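The interoperability guaranteed by OAI-PMH rests on six standard request verbs (Identify, ListRecords, ListIdentifiers, GetRecord, ListMetadataFormats, ListSets) that every compliant repository must answer. A minimal sketch of how a harvester builds such a request, assuming arXiv's public OAI-PMH endpoint:

```python
from urllib.parse import urlencode

# Base URL of an OAI-PMH endpoint (arXiv's public one, as an example;
# any compliant repository exposes the same request format).
BASE_URL = "http://export.arxiv.org/oai2"

def oai_request_url(verb, **kwargs):
    """Build an OAI-PMH request URL for one of the six protocol verbs
    (Identify, ListRecords, ListIdentifiers, GetRecord,
    ListMetadataFormats, ListSets)."""
    params = {"verb": verb, **kwargs}
    return f"{BASE_URL}?{urlencode(params)}"

# Harvest Dublin Core metadata for records deposited since a given date.
# ("from" is a Python keyword, hence the dict unpacking.)
url = oai_request_url("ListRecords", metadataPrefix="oai_dc",
                      **{"from": "2024-01-01"})
```

Because every compliant repository answers the same verbs, a single harvester can aggregate metadata from thousands of repositories, which is what made federated discovery services over distributed archives possible.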

Enclosures of open science commons
Archives, repositories and other forms of open science infrastructures were originally individual initiatives. As such, their status as a scientific commons was not always institutionalized, and they were hardly protected against potential privatizations: "one recent strategy of traditional commercial journal publishers to avoid negative externalities from green OA is to acquire successful OA repositories". The acquisition of Digital Commons and SSRN by Elsevier has highlighted the lack of reliability of critical scientific infrastructure for open science, which creates the conditions of a tragedy of the commons. The SPARC report on European infrastructures underlines that "a number of important infrastructures [are] at risk and as a consequence, the products and services that comprise open infrastructure are increasingly being tempted by buyout offers from large commercial enterprises. This threat affects both not-for-profit open infrastructure as well as closed, and is evidenced by the buyout in recent years of commonly relied on tools and platforms such as SSRN, bepress, Mendeley, and GitHub." Weak definitions of scientific commons, and of the requirements and expectations of commons governance, may have facilitated this take-over: "many self-described ‘commons’ projects for open access publishing simply restate the values of commercial publishing (...) while relying on the language of a more progressive politics."

In contrast with the consolidation of privately owned infrastructure, the open science movement "has tended to overlook the importance of social structures and systemic constraints in the design of new forms of knowledge infrastructures." It remained mostly focused on the content of scientific research, with little integration of technical tools and few large community initiatives: "[the] common pool of resources is not governed or managed by the current scholarly commons initiative. There is no dedicated hard infrastructure and though there may be a nascent community, there is no formal membership." In 2015, the Principles for Open Scholarly Infrastructures underlined the discrepancy between the increasing openness of scientific publications and datasets and the closed nature of the infrastructures that control their circulation.

"Over the past decade, we have made real progress to further ensure the availability of data that supports research claims. This work is far from complete. We believe that data about the research process itself deserves exactly the same level of respect and care. The scholarly community does not own or control most of this information. For example, we could have built or taken on the infrastructure to collect bibliographic data and citations but that task was left to private enterprise."

The fragility of open science commons until the 2010s contrasts with the dynamics of contributive projects beyond the scope of research and scientific activities. Wikipedia, OpenStreetMap and Wikidata are open communities with a low threshold of admission and membership that would come to typify the online knowledge commons. Their management is analogous to natural common-pool resource systems, where local uses and participation are rarely discriminated a priori, although repeated abuses can lead to exclusion.

Consolidation of the common ecosystem (2015-)
Since 2015, open science infrastructures, platforms and journals have converged toward the creation of digital academic commons. While they were initially conceived either as public goods (a "grid" that ensures the distribution of a non-excludable resource) or as club goods (with a limited range beyond specific communities), open science infrastructures have been increasingly structured around a shared ecosystem of services and standards that has emerged through the network of dependencies from one infrastructure to another. Open science infrastructures face issues similar to those met by other open institutions, such as open data repositories or large-scale collaborative projects like Wikipedia: "When we study contemporary knowledge infrastructures we find values of openness often embedded there, but translating the values of openness into the design of infrastructures and the practices of infrastructuring is a complex and contingent process".

The conceptual definition of open science infrastructures has been largely influenced by the analysis of Elinor Ostrom on the commons and, more specifically, on the knowledge commons. In accordance with Ostrom, Cameron Neylon argues that open infrastructures are not simply public goods: they are characterized both by the management of a pool of common resources and by the elaboration of common governance and norms. The economic theory of the commons makes it possible to expand beyond the limited scope of scholarly associations toward large-scale community-led initiatives: "Ostrom's work (...) provide a template (...) to make the transition from a local club to a community-wide infrastructure." Open science infrastructures tend to favor a not-for-profit, publicly funded model with strong involvement from scientific communities, which dissociates them from privately owned closed infrastructures: "open infrastructures are often scholar-led and run by non-profit organisations, making them mission-driven instead of profit-driven." This status aims to ensure the autonomy of the infrastructures and prevent their incorporation into commercial infrastructure. It has wide-ranging implications on the way the organization is managed: "the differences between commercial services and non-profit services permeated almost every aspect of their responses to their environment".

As of 2022, major actors that have formally adopted the Principles of Open Scholarly Infrastructure (POSI) include Crossref, CORE, OpenAIRE, and OpenCitations.

Consolidation of the commons ecosystem has also been visible in non-commercial journals, which moved from a knowledge club paradigm to more global commons initiatives. While the daily management of non-commercial journals fits better the definition of a knowledge club, more innovative models of governance "tend to bridge the secular heritage of scientific societies with the new wave of digitized knowledge commons such as Wikipedia or OpenStreetMap". New forms of commons-based regulations and distributed decision-making processes have been gradually introduced: "The ascending role of the editorial committee and volunteers brings OA diamond journals closer to community-run projects where contributors are constantly self-learning and appropriating tasks". Integration of the specific perspectives of the global South has redefined common "understandings of the commons" beyond the perspectives of "more powerful stakeholders, wealthy disciplines and countries in the Global North".

The future of scientific commons remains a debated issue. The OA Diamond Study presents the Open Access Commons as a potential path of development for non-commercial open access journals and beyond: "The OA Commons will be a new more integrated international OA publishing system and ecosystem that serves the research community." Fragmentation has hindered the development of non-commercial structures. Reliance on small local communities results in low visibility to potential readers or funders: most of the estimated 17,000 to 29,000 non-commercial journals are currently absent from scientific publishing indicators. The creation of common services and infrastructures, as well as inter-disciplinary and inter-community coordination, may help overcome the built-in limitations of the knowledge club model: "The OA Commons will be community-driven and will bring communities together who already are or want to work together to become more effective." Alternative visions of scientific commons include more decentralized models of "small, semi-autonomous projects that are loosely affiliated but mutually reliant", as large platforms and infrastructures could be "unable to account for nuanced relational practices of commoning in local communities and a variety of contexts."

Cost
Due to the coexistence of several economic models, there can be no single estimate of the cost of open science. Cost estimates frequently rely on different "scenarios" that match the different models of open science. In 2021, Grossmann and Brembs retained seven different scenarios, ranging from outsourcing to a leading commercial publisher, to small-scale non-commercial journals supported by free software and volunteer contributions, to a hypothetical "decentralized, federated platform solution where all scholarly articles are published without being divided into journals". Economies of scale are also a significant factor, as large platforms and infrastructures can benefit from bundled expenses in numerous areas.

According to Grossmann and Brembs, the total costs of scholarly publication range from $194.89 to $723.16 per article. Regardless of the wide variation across models and potential economies of scale, even the highest estimates of publication costs in open science are low: "publication costs only cover 15% of the subscription price (...) assuming a conservative profit margin of 30% (i.e., US$1,200 per article) for one of the large publishers there remains a sizeable gap of about US$2,200 in non-publication costs, or 55% of the price of a scholarly subscription article".
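The arithmetic behind this gap can be reconstructed from the quoted percentages (a sketch; the roughly US$4,000 average subscription price per article is an assumption implied by the figures, not stated above):

```python
# Reconstructing the Grossmann & Brembs cost gap (illustrative; the
# ~US$4,000 average subscription price per article is an assumption
# implied by the quoted percentages).
subscription_price = 4000                      # US$ per article
publication_costs = 0.15 * subscription_price  # "15% of the subscription price"
profit = 0.30 * subscription_price             # "conservative profit margin of 30%"
non_publication_costs = subscription_price - publication_costs - profit

# profit comes to US$1,200 and the residual gap to US$2,200,
# i.e. 55% of the subscription price, matching the quoted figures
```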

Editorial work and evaluation
Editorial work and the management of peer review remain the core activity of academic journals and their most identified contribution and service rendered to scientific communities. They constitute the main expenses of the non-commercial journals surveyed by the OA Diamond Study: "the five main expenses/payables of the journal are editing (531), copy-editing (463), technical and software support (393), typesetting (384), and design (336)". Translation is a frequently quoted additional cost, which open science may have encouraged, as the potential audience of local non-academic readers creates an incentive to maintain a multilingual website. Even before their conversion to electronic publishing, not-for-profit journals maintained affordable prices, due to a nearly exclusive focus on editorial services: "in 2013, the mean price per article in for-profit journals was 3.2 times higher than in non-for-profit journals, and that the corresponding ratio for the median price per article was even 4.33:1". With Internet publication, costs can be significantly lower, due to additional cuts in transaction and labor costs, use of shared platforms and infrastructure, or reliance on voluntary work: "over 60% of journals reported annual costs in the previous year under $/€10,000, including in-kind contributions."

In contrast with the more technical aspects of scientific publication, editorial work cannot be easily scaled. Unless it relies on volunteer work, its cost in time and expertise is roughly similar across providers: "For certain tasks, for example copyediting or typesetting, there are hundreds of individual companies worldwide providing those services (...) having compared the pricing of those service providers with others, we found only a very small variation of cost for such tasks". The only potential saving lies in allocating this work to the scientific authors themselves: "The wide availability of desktop hardware and software [has created] an expectation from publishers that authors would self-manage much of the layout and editing of articles."

For Peter Suber, among the expenses of open science journals, "peer review is the most significant." As an inheritance of the historical model of the knowledge club, the main costs of peer review are not directly supported by journals: the evaluation is performed freely by researchers. Following the expansion of commercial journals since the late 20th century, free peer review services have increasingly become an over-exploited resource. The conversion of subscription journals to the author-pays model of open access has added new pressure to a strained practice: as authors are the main customers of what has essentially become an editorial service, fast-track evaluations are in high demand. While the development of integrated editorial systems has streamlined the process of receiving and managing reviews, locating competent reviewers remains a major issue and creates added costs and work for journal editors: "finding, recruiting and retaining reviewers" is a major concern of non-commercial journal editors.

The development of new open science platforms and infrastructures makes it possible to unbundle the academic editorial workflow: "costs are reduced by eliminating the need for type-setting and copy-editing, with web-hosting costing only $15/year, and a total operating cost of between $6.50–$10.50 per article." Through these mechanisms, "open access has the opportunity to become a cost-reducing mechanism for scholarly publishing."

Technical infrastructure
Conversion to electronic publishing has created significant economies of scale. Large-scale publishers have been among the first beneficiaries of reduced editorial and technical costs, thanks to the concentration and standardization of the publishing infrastructure: "These newly empowered players brought an industrial approach to the publication and dissemination process, for the first time realising the benefits that these specialised capital and skills could provide by operating at a scale that was unprecedented to that date." This process started before the development of electronic publishing, with the creation of internal databases to manage peer review and other key aspects of editorial management. Large academic search engines finalized this process: "As the dominant publishers build databases, discovery systems, and online platforms to house large and integrated collections of journals, it is more difficult for small publishers to compete with them. Building an effective platform for publishing e-journals is expensive (...) After a platform has been created, it is much cheaper and easier to add new journals to it than to build new and redundant platforms."

After 2000, non-commercial publishers and infrastructures have gradually benefited from the same economies of scale, thanks to the development of open software tools dedicated to academic production, such as Open Journal Systems, which facilitated the creation and administration of journal websites and the digital conversion of existing journals. Among the non-commercial journals registered in the Directory of Open Access Journals, the number of journals created each year rose from 100 at the end of the 1990s to 800 around 2010. By 2021, Open Journal Systems had become "a widespread solution in the peer review management of journals".

Many open science infrastructures run "at a relatively low cost", as small infrastructures are an important part of the open science ecosystem. The 16 data research repositories surveyed by the OECD in 2017 quoted technical infrastructure and shared services among the costs "most likely to be susceptible to cost optimisation". In 2020, 21 out of 53 surveyed European infrastructures "report spending less than €50,000". Overall, European infrastructures were financially sustainable in 2020, which contrasts with the situation ten years prior: in 2010, European infrastructures had much less visibility, usually lacked "a long-term perspective" and struggled "with securing the funding for more than 5 years".

Beyond the economies of scale, technical infrastructure has also created fixed costs for standard publishing services such as article identification (DOI), plagiarism checks, long-term digital preservation and standardized XML. While most of these services are covered by flat fees at a limited expense, they can still significantly affect the tight budgets of small non-commercial journals.
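The weight of such flat fees is inversely proportional to a journal's output, which is why they matter most for small venues. A sketch with hypothetical fee levels (none of the figures below come from the sources above):

```python
# Per-article share of flat service fees (all figures hypothetical)
annual_flat_fees = {
    "identifier registration (DOIs)": 300,   # € per year
    "plagiarism-check subscription": 250,
    "long-term digital preservation": 200,
}
total_fees = sum(annual_flat_fees.values())  # €750 per year in this sketch

share_small = total_fees / 10    # a 10-article journal: €75 per article
share_large = total_fees / 500   # a 500-article journal: €1.50 per article
```

The same flat bill thus costs a small journal fifty times more per article than a large one, illustrating why fixed service fees weigh disproportionately on tight non-commercial budgets.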

Commercial services and prestige
The overall price per article of commercial publishers is consistently higher than in the non-commercial sector. This discrepancy has been partly accounted for by the maintenance of several services necessary to run a business activity (pricing, transaction management, marketing...), which non-commercial structures can drop entirely with little impact. Leading commercial journals claim to be more selective with regard to article submissions, which has the effect of creating a more complex and time-consuming acceptance workflow: "The more effort a publisher invests in each paper, and the more articles a journal rejects after peer review, the more costly is each accepted article to publish [although] the key question is whether the extra effort adds useful value". Besides, "costs vary widely in this sector". Without any transformation of the editorial workflow or efficiency gains, a standard well-established subscription journal could "charge about $3,700 per paper to cover costs". Yet at a publisher like Nature, the costs would be "at £20,000–30,000 ($30,000–40,000) per paper". At the higher end of the commercial spectrum, costs per article are more likely to embed services that are not directly related to publishing: "One possible reason for such variation between journals and publishers is that it is generally unclear whether proposed costs relate to those directly involved in article processing or those required in order for a publisher to ‘break even’ if they receive zero subscription income for an article made OA."

Early projections suggested that commercial models of open science would result in lower editorial costs than subscription journals. On the basis of the APCs commonly practiced by leading open access publishers like PLOS, Houghton & Oppenheim identified a potential saving of about £800 per article (£1525 instead of £2335 for subscription publishing). Taken globally, this would result in "savings of around £500 million per annum nationally in the UK in a worldwide open access system". Critics at the time focused on the implausibility of a global conversion to open science: "many of the savings hypothesised would depend on the rest of the world adopting author-pays or self-archiving models." In 2012, David Lewis characterized commercial open access based on article processing charges as a "disruptive innovation" that would radically shift "the nature of scholarly journal publishing". New commercial publishers seemed able to lower editorial expenses significantly: by 2013, "some emerging players (...) say that their real internal costs are extremely low", with Hindawi publishing "22,000 articles at a cost of $290 per article".
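The per-article figures quoted above imply the following simple arithmetic (a sketch restating the numbers in the paragraph):

```python
# Houghton & Oppenheim's per-article comparison
subscription_cost = 2335   # £ per article, subscription publishing
open_access_cost = 1525    # £ per article, author-pays open access
saving_per_article = subscription_cost - open_access_cost  # £810, the "£800" quoted

# Hindawi's reported 2013 output at its reported internal cost
hindawi_annual_cost = 22_000 * 290   # US$6.38m for a full year's articles
```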

Prestige continues to be a significant driver of price-making in the commercial open access market: "In the academic environment, prestige and reputation have a lot of staying power (...) So far, leading firms in the academic industry have been remarkably resistant to disruptive innovators." The evolution of article processing charges and the concentration of the commercial open access market have challenged early assumptions of falling costs. Due to the prestige of some publishers and the integration of new editorial services (like fast-track peer review), the mean price of open access articles has consistently risen: "there is no standard price, and largely no regulation of APCs, which results in some publishers demanding very large amounts of money from authors for the privilege of publishing OA." In France, the mean price of APCs of open access journals went up significantly between 2013 (€1395) and 2020 (€1745). In one range of scenarios, the total cost of APCs nearly becomes comparable to the cost of subscriptions by 2030 (68.7M€ vs. 97.5M€), while a full journal flip from subscriptions to APCs would be much costlier (168.7M€).
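The French APC figures above correspond to a steady compound growth (a sketch computed directly from the two quoted data points):

```python
# Implied compound annual growth rate of mean APCs in France, 2013-2020
apc_2013, apc_2020 = 1395.0, 1745.0   # € per article
years = 2020 - 2013

cagr = (apc_2020 / apc_2013) ** (1 / years) - 1   # about 3.2% per year
```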

High APC prices are less related to the measurable quality of the journal or of the editorial service than to the capacity of well-known actors to impose elevated prices: "several studies reported only very weak or no correlation between quality of journals (measured in journal impact factors) and the level of APC. Contrarily, the level of APCs for publishing an article is more related to the market power of specific academic publishing companies". The risk of uncontrolled growth of APCs had been clearly identified at the start of the Plan S initiative: its coordinator, Robert-Jan Smits, was "determined to introduce a limit on APCs of €2,000", but in the end "the cap [was] rejected, on account of too many members of the Plan S Coalition being against its enforcement."

Benefits
The economic contribution of open science to scientific publishing, to non-academic economic sectors and to society remains little documented. In 2019, the economist Michael J. Fell underlined that while open science policies usually advance the claim that opening research can bring "significant social and economic benefits", "no systematic attempt has yet been made to identify and synthesize evidence relating to this claim and present a clear picture of the economic impacts that open science might have, how these come about, and how benefits might be maximized." In his assessment of the state of the art, Fell identified 21 empirical studies that aimed to evaluate the "direct economic impacts in which open science has been a contributory factor", with a focus on Anglo-American countries (United Kingdom, United States and Canada) and Scandinavian countries (Denmark and Finland). Estimates are complicated by the fact that open science is both a scientific and a social movement: the specific scope of academic publishing is too limiting, and yet it is more challenging to develop global macro-economic indicators as has been done for open data.

Transaction costs
In a market economy, the diffusion of private goods usually comes with transaction costs. These costs cover all the services required to manage the commoditization of a product, on the producer's side as well as on the consumer's. In a wider sense, they encompass all the work and time allocated by all the stakeholders of a transaction to perform it, such as benchmarks, negotiations or contractualizations.

Savings on transaction costs are frequently quoted as a significant advantage of non-excludable resource systems (common goods, public goods) over private markets. Since access to the resource is weakly restrained and conditioned by informal rules, its allocation is less costly overall: "a community could (...) produce better quality outcomes at lower economic costs, because of lower transaction costs, than alternative institutional systems involving private property". According to a 2017 OECD report, market allocation of open knowledge outputs is not sufficiently efficient: as the price of a public good converges to zero, any attempt at setting a price will result in excessive exclusion and reduced collective benefits.

"The market may not be the best mechanism for the allocation of a public good, as any price above the marginal cost (of copying and distribution) will reduce net welfare – by locking out users and uses that do not have the capacity to pay or are not willing to pay. For digital information made available online the marginal cost is very low – close to zero. However, a price set at zero or close to zero will not be sufficient to cover full costs (…) To be sustainable, data repositories need to generate sufficient revenue to cover their costs, but setting a price above the marginal cost of copying and distribution will reduce net welfare."
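The welfare argument in this passage can be illustrated with a toy model (a sketch; the linear demand curve and its parameters are illustrative assumptions, not taken from the OECD report):

```python
# Toy welfare model of pricing a zero-marginal-cost good
# (linear demand q(p) = Q0 * (1 - p / P_MAX) is an illustrative assumption)
Q0, P_MAX = 1000, 10.0   # potential users at price 0, and the choke price

def users_served(price: float) -> float:
    return Q0 * max(0.0, 1 - price / P_MAX)

def net_welfare(price: float) -> float:
    # consumer surplus (triangle under the demand curve above the price)
    # plus revenue; with marginal cost ~0, revenue is a transfer, not a cost
    q = users_served(price)
    return 0.5 * q * (P_MAX - price) + price * q

# Free access serves all 1000 users and maximizes total welfare; any
# positive price locks out users who value access above the near-zero
# marginal cost, reducing net welfare exactly as the report describes.
```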

All the models of open science have a direct impact on one subset of transaction costs: exclusion costs. There is no need to maintain systems or enforce rules to ensure that a publication will not be used by an unauthorized reader: "This exclusion costs the excluder money. One cost is digital-rights management or DRM, the software lock that opens for authorized users and blocks access to the unauthorized. A second cost is writing and enforcing the licensing agreement that binds subscribers." In addition to DRM systems, large commercial publishers have also developed intrusive methods to track subsequent usages of a publication.

Non-commercial open science journals and open infrastructures can avoid an even larger range of these services and costs. Author-paid journals still have to maintain transactional activities, as the management of article processing charges becomes a core business activity. Moreover, the real cost of large commercial agreements with leading publishers is not well documented, as the terms of big deals are not public. Even in the context of journal flipping, the negotiation of complex licenses may represent a significant time investment for libraries and research institutions.

Access costs
Access to pre-existing research outputs such as publications, datasets or code is a major condition of research. The subscription model used to rely on the assumption that researchers only need access to a few specialized publications. In practice, research is more unpredictable and frequently relies on crossing methods and observations from different fields or even different disciplines. In 2011, a JISC survey showed that 68% of UK researchers felt they had sufficiently wide access to journal and conference papers. Non-academic professional audiences experience difficulties in accessing directly relevant research: "a quarter of those in industry/commerce described their current level of access as fairly/very difficult", as "the main barrier was unwillingness to pay". A survey of business use of research in Denmark highlighted a large range of strategies to avoid the high cost of pay-per-view, especially through collaboration with academics who have institutional access. The impact of open access is amplified by the high degree of internationalization of research: "83% of the economic return on cancer research is drawn from research from non-UK sources", which are less likely to be accessible in a subscription-based model.

Beyond the extended coverage, open science can enhance the efficiency of bibliographic search: "It can take longer for people to access closed research outputs than when access is open." Access to the full text is also a common bibliographic search strategy, since a text will often reference other relevant publications. A case study on knowledge workers shows that restrictions on access translate into significant costs in work time: "knowledge-based SME employees spent on average 51 min to access the last research article they had difficulty accessing, and this rose to 63 min for university researchers."

The open science movement has also entailed the diffusion of new resources: open research data and software. In these cases, access was not limited or constrained by high prices but generally non-existent, as these outputs were at most shared across research teams or institutions. Economic estimates of the impact of opening new research outputs are consequently more difficult, as there is no prior market. Several studies by Houghton and Beagrie on the commercial use of major open data portals (Economic and Social Data Service, Archaeology Data Service, British Atmospheric Data Service and the European Bioinformatics Institute) attempted to circumvent the issue by estimating the "willingness to pay" as a proxy for the positive economic impact: how much would a company agree to pay if the service became accessible only through subscriptions? In all cases, this "consumer surplus" was much higher than the cost needed to run the service (for instance, £21m per year for the Economic and Social Data Service against an operating cost of £3m, or £322m per year for the European Bioinformatics Institute against an operating cost of £47m). For large repositories of data or publications, the consumer surplus may be even more significant on a long-term basis, as the value of the infrastructure and the potential benefits it brings become more important as the range of hosted outputs continues to expand: "data archives are appreciating rather than depreciating assets. Most of the economic impact is cumulative and it grows in value over time, whereas most infrastructure (such as ships or buildings) has a declining value as it ages. Like libraries, data collections become more valuable as they grow and the longer one invests in them, provided that the data remain accessible, usable, and used."
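The quoted willingness-to-pay figures translate into simple benefit-cost ratios (a sketch restating the numbers above):

```python
# Benefit-cost ratios implied by the Houghton and Beagrie estimates
services = {
    # service: (consumer surplus, operating cost), both in £m per year
    "Economic and Social Data Service": (21, 3),
    "European Bioinformatics Institute": (322, 47),
}
ratios = {name: surplus / cost for name, (surplus, cost) in services.items()}
# both services return roughly seven times their running costs in user value
```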

Research efficiency
Impacts of open science on research efficiency stem from the benefits of enhanced access to previous work. Due to the complexity of bibliographic search, closed subscription systems "can lead to high levels of duplication—that is, where separate teams work on the same thing unbeknownst to each other." The issue is not limited to academic research but affects industrial R&D as well: "an analysis of pharmaceutical patents by 18 large companies showed that 86% of target compounds were investigated by two or more companies". Non-publication of data or intermediary results can also have cascading effects on the overall quality of research. Meta-analyses rely on the reproducibility of pre-existing observations and experiments to identify the scientific consensus on a specific topic or field of research. They can be affected by statistical errors and biases, as well as by the pre-selection of statistically significant results. Extensive opening of final and intermediary data sources makes it easier to spot potential mistakes.

Text and data mining projects have more recently become a major focus of studies on potential gains in research efficiency. In contrast with standard state-of-the-art review procedures, text mining projects process very large corpora and are bound to be limited by the collections available in academic libraries. Additionally, unless the corpus is published under a free license, special authorization has to be obtained from the publishers, as the proper use of automated analyses requires making copies accessible to project members. Access procedures may represent a significant investment for text mining projects: "As well as the costs and time required to reach such agreements, it also introduces significant uncertainty into such projects as it is possible that some agreements may not be reached". In 2021, a quantitative analysis of text and data mining research showed that "there is strong evidence that the share of DM research in total research output increases, where researchers do not need to acquire specific consent by rights holders". The restrictive effect of the lack of open access or of a text and data mining exception is sufficiently noticeable to highlight "an adverse net effect of IP on innovation, in the sense that there is strong evidence for stricter copyright hindering the wide adoption of novel ways to build on copyright works and generate derivative works." In 2012, a JISC report estimated that facilitated use of text and data mining tools, notably in the context of bibliographic search, could generate significant productivity gains: "if text mining enabled just a 2% increase in productivity – corresponding to only 45 minutes per academic per working week (...) this would imply over 4.7 million working hours and additional productivity worth between £123.5m and £156.8m in working time per year."
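The arithmetic behind the JISC estimate can be reconstructed from the figures quoted above. A short sketch; the implied hourly rate and headcount are derived values, not stated in the report, and the 44-working-weeks figure is our own illustrative assumption:

```python
# Figures quoted from the 2012 JISC report.
hours_saved = 4.7e6                       # working hours per year
value_low, value_high = 123.5e6, 156.8e6  # estimated value in pounds per year

# Implied monetary value of one academic working hour (derived, not in the report)
rate_low = value_low / hours_saved
rate_high = value_high / hours_saved
print(f"implied hourly rate: £{rate_low:.2f} to £{rate_high:.2f}")

# Headcount consistent with 45 minutes saved per academic per working week,
# assuming roughly 44 working weeks per year (our assumption, for illustration)
academics = hours_saved / (0.75 * 44)
print(f"consistent with roughly {academics:,.0f} academics")
```

The implied valuation of an academic working hour (about £26 to £33) and a headcount in the low hundreds of thousands are both plausible for the UK academic workforce, which suggests the report's figures are internally consistent.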

Economic and social development
The potential economic impact of open science research outputs is significant. Innovation commons are a major, although overlooked, source of economic growth: "The innovation commons is the true origin of innovation. It is the source from which the subsequent markers of innovation emerge — the entrepreneurial actors, the innovating firms, the new markets, and so on." Recent developments like the growth of data analytics services across a large variety of economic sectors have created further needs for research data: "There are many other values (...) that are promoted through the longterm stewardship and open availability of research data. The rapidly expanding area of artificial intelligence (AI) relies to a great extent on saved data." In 2019, the combined data market of the 27 countries of the European Union and the United Kingdom was estimated at 400 billion euros, with a sustained growth of 7.6% per year. Although no estimate was given of the specific value of research data, research institutions were identified as important stakeholders in the emerging ecosystem of "data commons".

In 2011, a JISC report estimated that there were 1.8 million knowledge workers in the United Kingdom working in R&D, IT and engineering services, most of them "unaffiliated, without corporate library or information center support." Among a representative set of English knowledge workers, 25% stated that access to the literature was fairly or very difficult, and 17% had experienced a recent access problem that was never resolved. A 2011 survey of Danish businesses highlighted a significant dependence of R&D on academic research: "Forty-eight per cent rated research articles as very or extremely important". Consequently, lack or difficulty of access affects the development of commercial services and products: "It would have taken an average of 2.2 years longer to develop or introduce the new products or processes in the absence of contributing academic research. For new products, a 2.2 years delay would cost around DKK 36 million per firm in lost sales, and for new processes it would cost around DKK 211 000 per firm in lost savings." Research data repositories have also experimented with efficient data management workflows that can serve as a valuable inspiration for commercial structures: "properly designed data commons can serve to R&D processes as an active and accessible repository for research data".

Estimations of the global business impact of open science are challenged by another positive economic factor of open science: weak or even nonexistent transaction costs. Commercial uses of open publications, data or software occur informally and are hardly identifiable: "use of open science outputs (e.g., by firms) often leaves no obvious trace, so most evidence of impacts is based on interviews, surveys, inference based on existing costs, and modelling approaches." The concrete impact of open science on commercial products and activities has been measured at the scale of a few major projects. From 1990 to 2003, the Human Genome Project made the results of human genome sequencing progressively available within 24 hours of discovery. A retrospective assessment showed a very high return on investment: "a $3.8 billion project drove $796 billion in economic impact [and] created 310,000 jobs". Another case study focused on the effects of opening data on a pharmaceutical compound, JQ1: 105 patents were filed in the following years, compared with fewer than 30 for similar compounds.
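The Human Genome Project figures quoted above imply a striking return ratio. A one-line check, using only the two amounts reported in the retrospective assessment:

```python
# Return-on-investment ratio implied by the figures quoted in this section.
investment = 3.8e9        # reported project cost, in dollars
economic_impact = 796e9   # reported economic impact, in dollars

roi = economic_impact / investment
print(f"≈ ${roi:.0f} of economic impact per $1 invested")
```

The assessment's figures thus correspond to roughly $209 of economic impact per dollar invested, which is the ratio usually cited when the project is given as an example of the returns of open data.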

Social impact has become an important focus of open science infrastructures in the late 2010s. Access for non-academic audiences has created a new potential justification for their funding and maintenance. Potential groups that may benefit from open access "include citizen scientists, medical patients and their supporting networks, health advocates, NGOs, and those who benefit from translation and transformation (e.g., sight-impaired people)."

Economic regulation of open science
Economic regulation of scientific publishing has long been stuck in a "collective action dilemma", due to the lack of coordination among stakeholders: "To truly reduce their costs, librarians would have to build a shared online collection of scholarly resources jointly managed by the academic community as a whole, but individual academic institutions lack the private incentives necessary to invest in a shared collection."

Although some economists initially expected that the academic publishing market would be structurally disrupted by new open access competitors, change was mostly driven by scientific communities, scientific institutions and, later, coalitions of funders. In the 2000s, forms of regulation appeared at a local scale to solve obvious market failures in the management of open science outputs. The development of research data repositories has been sustained by the implementation of "government or funder mandates for open data that require the producers of data to make them openly accessible". Mandates have been less easily expanded to other scientific outputs such as publications, which could not be covered by open data programs and were already dominated by large commercial structures. During the Great Recession, scientific institutions and libraries had to balance significantly reduced budgets, which entailed a first wave of big deal cancellations and "promoted the search for alternatives to this model". This specific context created a precedent for a second wave of big deal cancellations, no longer solely motivated by budget cuts but also by "the advance of open science".

In the early 2010s, leading publishers came under heightened pressure to convert to open access. Along with the mobilization of researchers, the realization that academic publishing no longer operated under normal market conditions redefined the position of scientific funders and policy-makers: [For Robert-Jan Smits], "if we really want OA to become a reality, we just have to make it obligatory, I thought: no more friendly requests, but rules and implications." Freedom of information requests have unveiled the real cost of big deals in several countries. On July 17, 2012, the European Union issued a recommendation on Access to and Preservation of Scientific Information that called to "define clear policies" on open access. This approach "was a major shift compared with the previous EU 7th Framework Programme (2007–13), which had defined OA merely as a pilot action in select areas." It initiated a new cycle of regulatory policies toward large academic publishers. The Horizon 2020 research program made open access a requirement for funding.

Plan S was originally "a simple plan" mostly addressed to funding agencies: "any researcher who receives a grant from one of them must only publish in an OA journal under a CC BY licence". An early draft included a mechanism to cap the price of article processing charges, which was ultimately not retained in the final version. The first official version, released in September 2018, stated that "transformative agreements, where subscription costs are offset by publication costs, can help to accelerate the transition to open access." While criticized for its bias in favor of commercial open access and the perpetuation of high publishing costs, Plan S has facilitated global coordination in negotiations with large publishers.