User talk:Novem Linguae/Essays/There was no lab leak/Archive 2

Lab Leeks! Lab Leeks! Get your hot, steaming Lab Leeks here!!!
Another Wuhan Lab Leak page with WP:NPOV problems: Drastic Team. --Guy Macon (talk) 02:21, 22 June 2021 (UTC)


 * Jokes aside, I think Lab leak theory deserves a separate page rather than a redirect, because people spent a lot of their time writing this essay (there is also an alternative redirect to ). Just reuse the current content and sources of this essay on the page "Lab leak theory". My very best wishes (talk) 04:07, 23 June 2021 (UTC)


 * Would Wuhan lab leak theory with a redirect from Lab leak theory be a better title? There is that theory that it came from a US Army lab... --Guy Macon (talk) 04:49, 23 June 2021 (UTC)
 * Yes, because it was claimed to come from the Wuhan lab, although there are many other labs. One can find whatever. My very best wishes (talk) 05:46, 23 June 2021 (UTC)

Paper cited at Luc Montagnier
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7982270/ J Med Life. 2021 Jan-Feb; 14(1): 118–120.] is currently used to support the statement that Montagnier's claims have no evidence. I have added the Frutos paper (who does explicitly name/cite Montagnier), but I'm wondering whether this one is an acceptable source. It is not a review paper, the journal has a low-ish impact factor (just about 1), and the paper doesn't seem to be the kind of source that accurately reflects the mainstream consensus, notably making much out of reported detection of SARS-CoV-2 outside of China:

The study in question doesn't seem to have gone past the preprint stage. It is mentioned here, although this, by a virologist from Glasgow Caledonian University, seems to cast significant doubt on the reported March 2019 positive result.

Given this, should the J Med Life paper be used as a source? RandomCanadian (talk / contribs) 15:40, 24 July 2021 (UTC)

Some arguments against this essay
The folks over at Talk:COVID-19 misinformation disagreed with some points in this essay. I'd like to take a closer look at their arguments, evaluate them to see if they are convincing, and then adjust this essay accordingly. – Novem Linguae (talk) 00:52, 9 July 2021 (UTC)

I've added my responses in-line, with the green text serving as Novem Linguae's summary of the arguments made in the aforementioned article space. My responses follow as indented material responding to each argument. I've grouped similar arguments together and added subsections for ease of reading. I haven't answered all of them; I only answered the ones I could tackle at first reading.-- Shibboleth ink (♔ ♕) 01:00, 10 July 2021 (UTC)

Has WIV not been transparent?

 * Argument about WIV not being transparent. Implying they are shady, have a bad safety record, and are covering things up. Of course there are no records of laboratory accidents. After the laboratory was opened, the Chinese didn't follow through on their promises about transparency towards the French. After SARS-1 leaked a total of four times from the lab in Beijing, and Chinese sources cautioned at the opening not to repeat those mistakes, there wasn't a lot of transparency. On the other hand, the situation was dire enough that they told the US in 2018 that they didn't have enough people to operate the lab safely, which is why the US helped to train WIV staff.
 * Argument that WIV took down their database of viruses. "WIV's viruses are closely tracked": that might be true, but the WIV took down its database of viruses and is not willing to share it with anyone.

BSL4 research (and specifically GoFR) conducted at BSL2?

 * Argument that BSL4 research was dangerously conducted in BSL2 conditions. But even if the BSL-4 lab was completely safe, that's irrelevant, given that Shi's lab did its gain-of-function research at BSL-2, not at BSL-4.
 * Argument that gain of function research was conducted at WIV. I'm a bit rusty on this. Did WIV conduct gain of function research, and if so, what was the nature of it?
 * There is, understandably, disagreement about whether WIV was conducting GoFR. Lots of lab leakers say it was happening, others say it wasn't. But even Alina Chan, one of the lab leak's most vocal proponents, doesn't believe the WIV was conducting GoFR. Regarding the EcoHealth grant: Alina Chan, a molecular biologist and postdoctoral researcher at the Broad Institute of the Massachusetts Institute of Technology and Harvard, said in a lengthy Twitter thread that the Wuhan subgrant wouldn’t fall under the gain-of-function moratorium because the definition didn’t include testing on naturally occurring viruses “unless the tests are reasonably anticipated to increase transmissibility and/or pathogenicity.” She said the moratorium had “no teeth.” But the EcoHealth/Wuhan grant “was testing naturally occurring SARS viruses, without a reasonable expectation that the tests would increase transmissibility or pathogenicity. Therefore, it is reasonable that they would have been excluded from the moratorium.”
 * To summarize, if someone like Alina Chan who has lots of experience in biotechnology and genetics, but very little experience in virology, is unclear about whether any of this counts as GoFR, why are the lab leak proponents so sure that it counts? Perhaps because it supports their position?
 * Regarding Guo et al in JVI, that does not depict GoFR in my expert opinion (and does not depict BSL4 research conducted at BSL2) for two reasons:
 * 1) For any virus that actually causes concerning human disease (meaning SARS-COV-1 AKA "high pathogenicity coronaviruses"), they used what are called "pseudoviruses." Creating a pseudovirus cannot be described as "gain of function" for one very important reason: pseudoviruses cannot function. They cannot replicate, they cannot cause disease, they cannot jump out of the dish and infect humans. They are incapable of causing an outbreak. Instead of creating a virus, you instead take the packaging of the virus (the membrane and envelope proteins) and remove all the relevant genetic material inside (or make it so incomplete that it cannot function). Then you put the proteins of interest on top, without providing the genetic material that contains the instructions for the protein. One analogy: it's as if you'd taken a cow to a horses-only rodeo and put a horse costume on it. The cow still isn't a horse, even if it looks like one enough to get into the rodeo. If you wanted to take your pseudo-horse and make baby horses out of it, it isn't gonna happen, no matter how hard you try. In this scenario, a pandemic would be a horse-pocalypse. Nobody working with these cows dressed as horses is going to create a horse-pocalypse. And, just to clear up any confusion, pseudoviruses are not created using genetic engineering because you, very specifically, are not creating, modifying, or combining any genetic material to create a genome of any kind. Pseudoviruses are, on purpose, created without the genetic material necessary to be harmful. Here's more reading on that.
 * 2) Any virus that they actually did perform genetic engineering experiments with (to create a replicating virus), does not infect humans and, therefore, can be safely handled at BSL-2. We here in the US also handle these viruses at BSL-2. This is standard operating procedure in virology, all around the world. For any experiment involving genetic engineering, you perform it at the highest biosafety level of any part of any virus you're involving in that genetic engineering experiment. You'll notice that in that Guo et al paper, they very specifically followed this standard. They never handled a genetically engineered replication-competent virus that contained any BSL-4 virus at BSL-2. They only performed those experiments on BSL-2 viruses and handled them at BSL-2. To help illustrate this point, we do the same sort of stepping down in biosafety with hantaviruses. If it has never been shown to infect humans, and we're pretty darn sure it doesn't infect healthy immunocompetent humans, then we handle it at BSL-2 or BSL-2+ (e.g. Thottapalayam and Prospect Hill viruses).
 * So, to summarize, that Guo et al paper does not depict GoFR by any useful definition, because 1) they used pseudoviruses for any genetic material capable of infecting humans, and 2) they did engineering experiments exclusively with bat viruses swapping genetic material with other bat viruses. Since none of the resulting viruses had any novel tropism or pathogenicity in humans, this also does not meet the US government's definition of Gain-of-Function Research: "research which could enable a pandemic-potential pathogen to replicate more quickly or cause more harm in humans or other closely-related mammals."
 * As an aside, I want to point out that this entire discussion is one of the many reasons why Wikipedia does not allow original research, and cautions so heavily against using non-literature sources (or primary literature sources) to discuss complicated scientific and medical topics like "gain of function." We are getting so deep into the weeds here, as far as what "counts" and does not "count" as GoF or genetic engineering. I would tell you that for the first 2-3 years of my graduate training, working on these viruses and spending almost every waking minute thinking about these laboratory techniques, I had difficulty understanding all these nuances. Why do so many people think they are qualified to understand these nuances with very little formal study? Why would you trust a journalist or non-virologist to get this right, when it's so difficult to understand these nuances as a virologist? This is exactly why peer-reviewed review articles written by experts are so valuable. There's a deep and complex regulatory structure to all of this, involving formal risk-assessments and biosafety committees, staffed with physicians and scientists who have devoted their careers to keeping laboratory experiments safe.-- Shibboleth ink (♔ ♕) 13:06, 9 July 2021 (UTC)
 * Just to help further illustrate why this argument is bupkis: A quote from the Guo et al manuscript: "all four bat SARSr-CoV strains with the same genomic background but different S proteins could use human ACE2 and replicate at similar levels. However, there are some differences in how they utilize R. sinicus ACE2s." The entire paper is investigating how chimerizing bat viruses with each other makes them more or less able to bind to bat proteins. They all behave similarly in human cells, so chimerizing them will have no impact on their ability to infect human cells. Therefore, it isn't "gaining" any human pathogenicity or transmissibility, and therefore does not qualify for the modern widely-accepted definition of "Dual-use gain-of-function research."-- Shibboleth ink (♔ ♕) 01:12, 10 July 2021 (UTC)

The GoFR moratorium and "increasing biosecurity incidents"

 * Argument that the gain-of-function research moratorium was enacted because biosecurity incidents have gotten more frequent lately, not less frequent. As far as leaks decreasing: remember 2014, when the US had an anthrax accident and a smallpox accident in the same year? Things got so bad that the moratorium against gain-of-function research was created.

To address the events that led up to the moratorium

 * This is an inappropriate conflation of two things: "Scientists who didn't follow established step-by-step protocols to the letter and put themselves in danger as a result" and "dangerous gain of function experiments involving pathogens that can cause pandemics." The series of events that spurred the creation of the moratorium involved accidental screw-ups where a pathogen was stored incorrectly or not inactivated rigorously enough. But, in all of these cases, the follow-up procedures worked as intended and the system self-corrected.


 * In one of these incidents, in the first half of 2014, a CDC scientist didn't follow the protocol correctly and only put the sample in formic acid/acetonitrile for 24 hours instead of 48. When this was discovered, that scientist immediately reported the mistake, and proper procedures were followed to figure out what happened, why, and how to prevent it from happening again. This is the system working as expected. An event that probably did not actually put any scientists in danger was immediately dealt with and investigated so that it would not happen again. Nobody got sick. And, most importantly, this sort of situation cannot spread, because anthrax in this setting is not contagious. It cannot cause a pandemic. I just want to emphasize that anthrax in and of itself is not a massive biosafety issue. It is the culturing and growth of large quantities of anthrax spores that are a problem. The situation detailed here does not involve spore formation. If you go out into your backyard, take a handful of soil, and sequence it, you will likely find Bacillus anthracis. The situation in the CDC lab was only slightly more dangerous than eating a handful of soil. But this mistake by one CDC scientist only posed risk to other CDC researchers. It did not pose any risk to the general public, and ultimately, no one actually got sick.


 * In another of the events leading up to the moratorium, an FDA scientist discovered a bunch of vials of smallpox in the back of a cold room, left there decades ago. As soon as this was discovered, they reported it to the proper authorities and disposed of the vials appropriately. No experiments were conducted with the material contained within the vials, and this was ultimately blamed on a clerical error made in the 70s, before modern biosafety practices were established. No one was put in any danger, as the vials were never opened, and thus no sample ever left its "primary container" outside of a BSL-4 laboratory. All transfer of the material was conducted in "leak-proof secondary containment," as is proper practice (meaning the vial itself is sealed within another leak-proof container). This is another example of an unfortunate historical situation: the vials were created in the 50s, but not properly disposed of in the 70s when labs moved around. This was a problem only because the vials were not accounted for in the 70s, and thus not properly stored in a "secondary container" in that cold room. As soon as this was discovered, the vials were dealt with consistent with the procedures set down in modern biosafety manuals: they were transferred to a BSL-4 lab, analyzed, and disposed of properly. FDA and CDC announced they would start performing audits of every cold room on the campus, evaluating for this exact scenario. As soon as the scientists and regulatory officials recognized this was a possibility, they established rigorous protocols for how to deal with it moving forward. What more can you ask for? This situation is overall more an issue of "security" than of "dangerous experiments." The problem is that some dangerous actor could theoretically have obtained these vials and done something bad with them. Or the vials could have been inadvertently broken, releasing the virus. This is of course concerning! And so it is good that a full-scale review occurred to make sure this does not happen again. Ultimately, nobody got sick.


 * There was also one event in 2014 at the NIH where a researcher accidentally gave a chicken H5N1 avian influenza instead of the more benign H9N2. This was bad, but I'd like to emphasize it is no more worrisome than the many poultry markets in China which have tested positive for H5N1. Was it bad? Yes. Were people punished as a result? Also yes. But was this an event that could have caused a pandemic? No. Natural H5N1 does not pose a pandemic risk in and of itself, only when modified to transmit more efficiently in humans. And we know from the Fouchier and Kawaoka experiments that when it is made to transmit more readily in humans, it becomes much less deadly, to the point of being basically just like any BSL-2 influenza. This situation at NIH did not involve any GoFR experiments and did not involve any human infections whatsoever. Nobody got sick.


 * I also want to emphasize: biosecurity and biosafety are not about preventing these incidents from ever occurring. They are about making sure that if incidents do occur, there are secondary and tertiary safeguards to prevent any actual harm from resulting. You cannot build a 100% safe BSL-4 lab. No guideline or protocol will ever account for every possible scenario. That may sound shocking, but it's true. All you can do is build multiple safeguards so that when an error in protocol eventually occurs, everybody is safe. The name of the game is not only "prevention" but also "management" of such incidents. This is the philosophy behind having multiple "layers" of containment (e.g. building BSL-4 labs inside BSL-3 labs, having primary, secondary, and tertiary containers in which samples are transported, having multiple methods of inactivating pathogens, and having more than one method of "testing" and "verifying" inactivation in an autoclave (chemical, biological, procedural), etc.).
 * We want to make biosecurity research as safe as physically possible, but also not so restricted that it actually creates more risk or prevents the research from being conducted at all. For example, adding more PPE does not inherently make a situation safer, because added PPE can also create issues when it's time to take it all off when exiting the laboratory, and because workers are less likely to utilize more restrictive PPE.


 * Importantly, none of these events involved "gain of function" research in any way, and no one actually got sick as a result of these incidents. It was simply a set of events that made people go on "higher alert" for biosafety/biosecurity practice regulations. And so the Cambridge Working Group seized this opportunity to advocate for enhanced regulations that they'd wanted for several years before these events occurred. Lipsitch reignited this debate with the 2012 H5N1 studies in the Netherlands and in Wisconsin, which had very little, if anything, to do with the events described above (CDC anthrax, FDA smallpox, NIH H5N1). He just saw this as a further reason to be upset about GoFR.
 * And you'll notice, that's why many other scientists were confused about the conflation of the two. Scientists for Science, for example, advocated for a review of biosafety practices at the federal level, poor application of which was the actual cause of the events above. These were relatively isolated incidents that all occurred in a short span, igniting debate and concerning researchers and the public alike. It was a bad few months in 2014 when a few of these events happened. But it is extremely misleading to use these events as evidence for the statement "laboratory accidents are happening more frequently," because you're selecting 2 or 3 isolated events that have not re-occurred in subsequent years, and that appear to have been thoroughly investigated and used to design remedies. Science (and, by extension, biosafety) is a process that course-corrects as time moves on.-- Shibboleth ink (♔ ♕) 17:12, 9 July 2021 (UTC)

Are lab accidents becoming more or less common?

 * As to the question "are lab accidents becoming more or less common?", we need to look at a broader set of data and ask a more specific set of questions. We need to be careful about whether we're talking about accidents which result in an infection or accidents in general, and also about whether we're talking about procedural errors or actual exposures of researchers to an infectious agent.

Over the last 60 years, biosafety practices have improved considerably. We've invented the BSL-4 standard, increased the levels of containment involved in all research (including a full redesign of BSL-1, 2, and 3 standards), and developed practices for monitoring and reporting illnesses of researchers, implementing video monitoring and so-called "buddy systems," and making sure unqualified or concerning researchers don't obtain access to these laboratories (so-called personnel reliability programs which evaluate the medical, psychological, and interpersonal fitness of researchers in these labs). We've also developed better biosafety technology, like laminar flow hoods which create a unidirectional stream of airflow that traps any aerosolized particles and pushes them directly into HEPA filters. This technology was not widespread in biosafety laboratories until the 80s.


 * Among these modern developments is the enhanced requirement for reporting every single alteration in protocol, even if it does not put any researchers (or the public) at risk. CDC and the DHHS require reporting of any such incident that occurs in Select Agent labs. In the past, incidents were often only made public or reported to government officials if an illness occurred, or if substantial risk was interpreted to exist for the public. But now, every tiny little thing is reported. In a 2019 report of incidents that occurred at Boston University's NEIDL, for example, you can see two incidents which posed no risk to the public whatsoever. In the 50s or 60s, probably neither of these would have been reported to anyone. Is this change in standards, reporting more and more benign stuff, a good thing? Yes! It allows regulators to actually decide what counts as "concerning" and what does not. However, with that change comes the responsibility to more precisely choose our words when discussing these incidents.
 * Several studies of "laboratory-acquired infections" (so-called "LAIs") have found that the overall number of LAIs has been decreasing since the 1950s, due to all of the improved biosafety practices described above. In the U.K., for example, 82.7 LAIs per 100,000 person-years of work occurred in 1988, versus 16.2 per 100,000 in 1994. This is just one of many examples of decreasing LAIs over time.
 * Overall, this question represents a similar problem to the vaccines and autism misinformation conundrum. Autism represents a disease for which we have much better diagnostic criteria now than we did 30 or 40 years ago. We catch a lot more autistic kids earlier in the course of disease and at younger ages. Unfortunately, the timing of that diagnosis often coincides with the administration of many vaccines against childhood illnesses. This has led to widespread misunderstanding and misinformation about a purported link between autism and vaccines, for which no credible evidence exists.
 * My point in bringing up this example is this: More reports of X do not always mean an increase in X. More reports of Y thing related to X that is a broader definition, do not always mean that X has increased. In the case of autism spectrum disorders (ASD), the last several decades have resulted in a broader definition of ASD, and a resulting increase in diagnoses. However, the actual number of autistic children has likely not changed. We're just finding it more. Likewise, with laboratory incidents, the criteria for what is a "reportable" event has broadened, resulting in a perceived increase in notable accidents, but there is no evidence that actual "accidents" which endanger the public have increased. Indeed, actual infections of laboratory workers have decreased. Surely, that is the more important metric for determining risk to the public.-- Shibboleth ink (♔ ♕) 15:43, 9 July 2021 (UTC)
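The "more reports does not mean more incidents" point can be made concrete with a toy calculation. Below is a minimal Python sketch; every number in it is invented purely for illustration (these are not real incident statistics). It shows how a count of "reported incidents" can climb even while actual infections fall, simply because the definition of a "reportable" event broadened:

```python
# Toy model: actual laboratory-acquired infections fall over the
# decades, while an ever-larger share of harmless protocol
# deviations becomes "reportable". All numbers are invented.
decades = {
    # decade: (actual infections, benign deviations, fraction reportable)
    "1960s": (40, 500, 0.02),
    "1980s": (25, 500, 0.10),
    "2000s": (10, 500, 0.60),
    "2010s": (5,  500, 0.95),
}

for decade, (infections, deviations, frac) in decades.items():
    reported = infections + int(deviations * frac)
    print(f"{decade}: {infections:>2} actual infections, "
          f"{reported:>3} reported incidents")
```

In this toy model the reported count rises roughly tenfold across the decades while actual infections drop by almost an order of magnitude, which is exactly the distinction drawn above between "reports of X" and "X."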

False positive COVID tests

 * Argument that false positives from COVID tests are not an insurmountable problem; you just re-test with a better test that has no false positives. While COVID tests have false positive rates, any specific finding of a COVID case would lead to DNA sequencing of the involved blood, which is more expensive but has no false positive rate and, more importantly, tells us the exact sequence of the virus.
 * This is a grave misunderstanding of how clinical testing works. I wrote a long explainer about this exact problem, with numerous peer-reviewed scientific journal article references, back in April of 2020. But here I'll just write a shorter summary answering this specific question. If you want more detail, check out that explainer.


 * You cannot create a test which has "zero false positives." That's statistically and physicochemically impossible. You can reduce the number of false positives, and better tests make that possible, but you cannot eliminate them entirely. If any clinical diagnostic test is given to a large enough group of people, there will be some false positives. All we can do to increase the utility of the test is reduce that number. Many experts describe clinical diagnostic tests as a "resource" which itself should be conserved, meaning we should not indiscriminately administer them if we want to maximize their usefulness. In medicine, we have pre-test inclusion criteria, which narrow down the population actually given the test, so that we can reduce the number of false positives. This is because we are increasing the so-called "pre-test probability."
 * For example, let's say you want to give an antibody capture test (checking for antibodies against COVID-19) to the general population of a country. You need to first make sure your test captures as few antibodies generated against common-cold coronaviruses as possible. But you cannot completely eliminate the binding of these antibodies. The viruses are too structurally similar, and antibodies are too good at binding to things. All you can do is try to find the most distinct possible regions of the viruses, and use those as the "bait." But some patient out there could still have developed a "broadly-binding antibody" against this region after being infected with a common cold coronavirus. You cannot eliminate that possibility completely. Instead, you make the biochemistry of the test as good and specific as possible, and then you add pre-test criteria to try and remove such people from the testing pool (e.g. "did they have COVID symptoms in the last year?" or "have you been hospitalized with COVID-19 symptoms in the past year?"). These questions may affect the generalizability of the study's findings, but they make the test results more accurate. You can also combine multiple tests: either the same test multiple times, or multiple tests which assess different areas of the virus or work by different mechanisms (e.g. antibody capture tests and B cell sequencing assays). Of these, multiple tests with different mechanisms and different "targets" are the most effective at reducing false positives. In this framework, you would only count someone as positive if they are positive on all the included tests. This substantially reduces false positives, but again it does not eliminate them completely. If you give the test to a large enough group of people, it will still produce some false positives.-- Shibboleth ink (♔ ♕) 16:18, 9 July 2021 (UTC)
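The pre-test-probability reasoning above can be sketched numerically. This is a minimal Python illustration of Bayes' theorem applied to diagnostic testing; the prevalence, sensitivity, and specificity figures are invented for the example and do not describe any real assay:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' theorem:
    P(truly infected | tested positive) = TP / (TP + FP)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical assay: 98% sensitive, 99% specific, screening a
# population where 0.5% are truly infected (invented numbers).
single = ppv(0.005, 0.98, 0.99)

# Requiring two positives on independent tests multiplies the
# error rates (assuming independence between the tests).
combined = ppv(0.005, 0.98 * 0.98, 1 - (1 - 0.99) ** 2)

print(f"PPV, single test: {single:.1%}")    # ≈ 33%: most positives are false
print(f"PPV, two tests:   {combined:.1%}")  # ≈ 98%: better, but still not 100%
```

Note the second call: requiring two independent positives squares the false-positive rate, which is the "combine multiple tests" strategy described above, and the result is much better but still short of certainty.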

If you're knowledgeable about these topics, feel free to chime in, and I'll summarize your responses and adjust this essay accordingly. Thanks. – Novem Linguae (talk) 00:52, 9 July 2021 (UTC)
 * Is it alright if I edit inline? Just a lot of points to cover and it will be easier that way. -- Shibboleth ink (♔ ♕) 11:43, 9 July 2021 (UTC)

As an aside, this entire talk page section is an exercise in Brandolini's Law: "The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it." -- Shibboleth ink (♔ ♕) 17:29, 9 July 2021 (UTC)

Discussion re: additional WIV manuscripts (PMIDs 27170748 and 29190287)
I tend to agree with the points raised by, and  in Talk:COVID-19_misinformation.

Regarding WIV doing live virus SARSrCoV work at BSL2 labs, the paper that Ian Lipkin and Ralph Baric expressed concern about was PMID: 27170748, as reported in this Minerva article. It seems you may be conflating this paper with the Guo et al paper in your analysis above.

Regarding what counts as GoFR, as you are surely aware, there is no scientific consensus on what constitutes "Gain of function research", let alone "Gain of function research of concern". The P3CO framework serves as recommended policy guidance for federally funded research, and is currently under review. The paper most often cited WRT GoFR at the WIV is PMID: 29190287, and this work meets the definition of PPP enhancement under the 2014 Pause and the 2017 P3CO framework.

Regarding your comments on decrease in LAIs, you are surely aware that even a minute decimal percentage is a concern with enhanced PPPs, as it only takes one infected lab worker taking the subway home to spark a global pandemic. I remind you again of Lipsitch’s position that virologists should not be the only ones allowed to discuss epidemiology or matters of science policy. ''Hath not a non virologist hands, organs, dimensions, senses, affections, passions? Fed with the same food, hurt with the same weapons, subject to the same diseases, healed by the same means, warmed and cooled by the same winter and summer, as a Virologist is? If you prick us, do we not bleed?'' CutePeach (talk) 12:38, 11 July 2021 (UTC)
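For what it's worth, the compounding-risk arithmetic behind the "minute decimal percentage" concern is easy to sketch. Here is a minimal Python illustration; the per-run probability and run count are invented placeholders, not estimates for any real laboratory:

```python
def p_at_least_one(p_per_run, n_runs):
    """Probability that at least one of n independent runs fails,
    given a per-run failure probability p: 1 - (1 - p)^n."""
    return 1 - (1 - p_per_run) ** n_runs

# A 0.01% per-run risk looks negligible, but compounded over
# 10,000 independent experiment-runs (invented numbers) it grows large:
print(f"{p_at_least_one(0.0001, 10_000):.2f}")  # ≈ 0.63
```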


 * , re: manuscripts and whether the work within them counts as "GoFR": I have only answered the papers I have been presented with, as I will continue to do. Re: PMID 27170748, I have great respect for Lipkin and Baric, but I would point out again (as I did above) that US researchers were also conducting work with these viruses at BSL-2. Does that make it right? Not necessarily. I'm simply demonstrating that it was commonly accepted that these particular bat viruses (WIV1 and WIV16) are not human-pathogenic, and therefore can be safely handled at BSL-2.


 * There are many examples besides the hantaviruses that I provided, where a non-human pathogenic close relative of a human pathogenic virus can be safely handled at a lower biosafety level. Tamiami virus (a close relative of Lassa virus), Ross River virus (a close relative of Semliki Forest virus), or non-neurovirulent strains (e.g. Kunjin) of West Nile all come to mind. All of these are examples of viruses closely related to BSL-3 and 4 viruses that are themselves handled at BSL-2 (and BSL-2+) because they lack a concerning virulence in humans. In fact, I believe Dr. Lipkin himself has benefited from this commonly accepted practice, in his West Nile/Kunjin work.


 * This is a cornerstone of biosafety, enabling researchers to conduct experiments without over-burdening them with unnecessary and expensive restrictions. This allows experiments to be conducted more easily (plaque assays, antibody inhibition assays, flow cytometry of cells, etc. which are all quite frustrating to conduct at BSL-3 and 4) which enables faster generation of treatments and vaccines. Of course, eventually, the findings are later replicated on the real-deal human pathogenic viruses, but at the appropriately higher biosafety level. Doing the experiments first on closely related viruses that are more easily (and still safely) handled at BSL-2 (and BSL-2+) means less time is wasted at BSL3 or 4.


 * Of course, any genetic engineering work that combines or chimerizes viruses to create replication-competent clones must be conducted at the highest biosafety level of any of the involved pathogens, as I said above. I would also point out that in this paper (PMID 27170748), there was no gain in any function, and neither Lipkin nor Baric are claiming that there was. How could this be "Gain-of-Function research of concern" if no function was actually gained? Indeed, the viruses, when ORFX was deleted, lost the function of interferon inhibition and NF-κB activation. There certainly was not any gain in human virulence.


 * "Regarding what counts as GoFR, as you are surely aware, there is no scientific consensus on what constitutes "Gain of function research" let alone "Gain of function research of concern"" -- Yes, exactly as I said above. Is this a straw man argument? The controversy is exactly why it would be inappropriate for us to describe this work as "gain of function" in wiki-voice. We must instead use attributed quotations as sourced from duly weighted RSes. This is what I have already said in a discussion on the Investigations talk page.


 * Re: PMID 29190287, my argument about this is the same as above: they are chimerizing bat viruses with bat viruses, and not altering their human infectivity. Again, there is no gain in any function here, so my opinion is that it would be inappropriate to describe this work as "gain of function." Re: P3CO, these experiments do not enhance the ability of any included virus to infect humans beyond what it already had. How does that meet the P3CO definition?


 * "I remind you again of Lipsitch’s position that virologists should not be the only ones allowed to discuss epidemiology or matters of science policy." I don't believe I've ever disputed this, or if I have, I don't recall it. I would characterize this as another straw man argument. I think experts in adjacent fields such as Alina Chan, Richard Ebright, Lipsitch, or other biosafety experts are perfectly qualified to debate whether something is "gain of function research." That's why experts in these fields serve on expert government panels and IBCs and so on. Above, I argued that completely unqualified individuals probably should not be the ones we are citing about this (e.g. Yuri Deigin has expertise in "entrepreneurship", "life extension", and software engineering, with an MBA and a bachelor's in computer science. He is clearly unqualified). But absolutely I would agree that we should duly weight quotations from experts such as Ebright, Lipsitch, and Relman. I haven't looked much into the overall set of RSes about their thoughts on these matters, but we would need to make sure their statements are weighted based on the coverage in those RSes. However, no matter how much it is covered, I do not believe it would rise to the level of wiki-voice inclusion. The controversy in topic-relevant, well-respected, widely-circulated scientific literature sources would prevent this.-- Shibboleth ink (♔ ♕) 16:21, 11 July 2021 (UTC)


 * I didn’t say Lipkin and Baric expressed concern with PMID 27170748 over possible GoFR at the WIV. We’re talking here about a species of virus on which the WIV produced some research in 2013 claiming it could directly infect humans without an intermediate species. Relman expounds on Lipkin and Baric’s concerns in this webinar he gave the other day.


 * On PMID 29190287, construction of novel chimeric SARS-related coronaviruses able to infect human cells is something that, per the P3CO framework, should have undergone a risk-benefit review, but apparently did not. That is something Fauci has to answer for, and it has less to do with the WIV’s safety record, though it's not entirely irrelevant here. With this and similar reports of their activities, I think there is reasonable concern about the WIV, the undisclosed viruses it holds, and the undisclosed ways it may have worked on them. It may have been an affiliated lab; as you will have read in Shoham’s report, they are allegedly part of an undisclosed network of labs.


 * On your last point, we are in agreement. I thought you were discounting the opinion of Chan WRT GoFR definitions and her say on future policy planning. I actually share your POV about GoFR WRT lab origins, which is why I pointed you to the proposal from Drs Latham and Wilson on your talk page, as it's the only lab-origins hypothesis that doesn’t involve any significant form of bioengineering. Deigin’s finding WRT the double CGG sequence is significant, and remains anomalous, but his claim of it being strong evidence of gene splicing is somewhat tenuous, as it rests on only a tiny handful of nucleotides. When we take Latham and Wilson’s thesis against the backdrop of white space that is everything we don’t know about this virus and its early spread, it is IMO the most plausible version of the lab leak hypothesis. It is not given any consideration in your thesis posted to Reddit, so I do hope you find the time to read it, as it's clear from this Reuters piece that Fauci has. CutePeach (talk) 13:27, 12 July 2021 (UTC)
 * The issue isn't whether a virus can infect a human; the issue is whether it has virulence in humans. Lots of viruses can infect different species but cause no symptoms or extremely mild symptoms. One example would be Vesicular stomatitis virus, which we also handle at BSL-2. Also, you cannot say that the research did not undergo risk-benefit review when all we have is the manuscript. They have an IBC at the WIV; we have no idea from the available data whether it underwent a risk-benefit review and IBC protocol review. Have you gone through their IBC meeting minutes? Are they available somewhere? I would love to see those, if so.


 * I've discussed this double arginine CGG elsewhere, but it is not actually that unusual, as double CGGs appear many hundreds of times throughout the genomes of other alpha- and betacoronaviruses. I would describe this overall as pareidolia. Deigin is looking for something to explain the opinions he already holds, and he takes whatever he's found and claims it is significant, when good scientific practice demands the inverse.-- Shibboleth ink (♔ ♕) 16:55, 12 July 2021 (UTC)
 * SARS is pretty virulent, so one would assume other SARS-like viruses are virulent too. Your comparison to Vesicular stomatitis virus is completely out of left field, so I remind you again that Lipkin and Baric have expressed serious concern with the WIV and its safety practices, and they had very good reasons for doing so.


 * Regarding risk-benefit review, the controversy is exactly around that process, as it is completely confidential and out of the public record. Scientists like David Relman of the CWG have for many years advocated that this process, and the names of the review panelists, be made public. They even want the review panel to be spun off into an independent agency.


 * As for the WIV’s IBC, you probably aren’t going to get to see any of their minutes, because, in case you weren’t aware, the Chinese government has been busy covering up the origins and early spread of the virus from the start. On your next trip to China, you might want to learn a bit more about how the Chinese government actually governs. CutePeach (talk) 07:48, 13 July 2021 (UTC)
 * "On your next trip to China, you might want to learn a bit more about how the Chinese government actually governs." If you're going to be snarky, you can stop posting in my userspace. Thanks. – Novem Linguae (talk) 08:05, 13 July 2021 (UTC)
 * "Have you gone through their IBC meeting minutes? Are they available somewhere? I would love to see those, if so." That's a bit snarky too, and essentially calls for the answer it got. It's usual in engineering fields to refer to something as unaudited (or unreviewed) if the audit (or review) isn't released or is otherwise unavailable, precisely because otherwise lack of evidence becomes evidence of lack, and that doesn't make sense. InverseZebra (talk) 21:36, 3 November 2021 (UTC)
 * See my comments on your talk page. Thanks. I was actually enjoying this discussion before this comment. I'd like to discuss content, but personal attacks and pointy edits like this are not very conducive to that discussion.-- Shibboleth ink (♔ ♕) 10:54, 13 July 2021 (UTC)

Environmental Research
The quotes may be a little extensive; it was difficult to decide what to include. I had noticed the source was added in a hidden comment by RandomCanadian for eventual expansion. — Paleo Neonate  – 01:12, 6 November 2021 (UTC)
 * Resolved, thanks, — Paleo Neonate  – 01:54, 7 November 2021 (UTC)

RaTG13
I'm not sure whether it best belongs under misconceptions or against... — Paleo Neonate  – 04:27, 8 November 2021 (UTC)