Wikipedia talk:Wikipedia Signpost/2019-01-31/Op-ed


 * I have nothing nice to say about CMU. While I was a grad student at Pitt, I reached out to a CMU professor about an on-wiki issue, but they never took me up on my offer. Clearly, we as a community still have a problem with academics mucking about on wiki without having a proper sit-down with editors right in their own neighborhoods. Sad misconceptions like these underline the point I made to WikiEd a few years ago when they ended the campus ambassador program. I guess we learn nothing. Chris Troutman  ( talk ) 07:24, 31 January 2019 (UTC)
 * The frustrating thing is that there clearly are ethical ways of doing this sort of research. It's a pity that more thought wasn't put into methods beforehand. T.Shafee(Evo & Evo)talk 10:09, 31 January 2019 (UTC)
 * Small world, I was a grad student at Pitt too, though I actually did talk to prof. Kraut and it was a relatively constructive relationship (I helped them with their first wave of Wikipedia research, though sadly didn't manage to get myself credited...). I think the problem with this project is that it was too much focused on theory and too little on the benefits to our community. But, sadly, professors don't advance their careers solving Wikipedia/media problems; they do so by publishing papers, for which reviewers are more likely to complain about 'not enough theory' than 'not enough practical implications'. After all, that's academia. --Piotr Konieczny aka Prokonsul Piotrus| reply here 17:11, 1 February 2019 (UTC)


 * Were the barnstar bombers indef-blocked for abusing wikipedia for personal gains? - Altenmann >talk 07:49, 31 January 2019 (UTC)


 * "Random Rewards" reminds me of the badges participants in the The Wikipedia Adventure accumulate, ending up (having barely learned to sign their posts) with more than a dozen such badges on their user pages. Stolen valor!  – Athaenara  ✉  08:00, 31 January 2019 (UTC)
 * Stolen valor is an interesting analogy, thanks for pointing that out Athaenara. Bri.public (talk) 19:01, 31 January 2019 (UTC)
 * For what it is worth, everybody recognizes that TWA awards are meaningless and values them as such. Barnstars are another matter entirely, seeing as they (usually) actually signify something. —Compassionate727 (T·C) 19:31, 31 January 2019 (UTC)
 * Barnstars have been significantly devalued, I was even given one by a COI SPA after deleting his spam. – Athaenara  ✉  20:26, 31 January 2019 (UTC)


 * We should grant retrospective honoris causa barnstars to Confucius, Omar Khayyam and Spinoza. And then claim that at least these three barnstars are recognizing something. Pldx1 (talk) 09:00, 31 January 2019 (UTC)
 * User:Herostratus/External Barnstars Already ahead of you. Herostratus (talk) 02:21, 3 February 2019 (UTC)


 * Well, my ego has been sufficiently stroked on account of the story, since the barnstar chosen as the lead image for the story is...The Press Barnstar, which I had suggested to the community some years back. I feel good about it :) Now if only I could actually earn it as opposed to having it simply materialize out of thin air... TomStar81 (Talk) 10:13, 31 January 2019 (UTC)
 * Bloody FaceBook now wants data on us? Being poked and prodded is bad enough, but "FaceBook grant" says it all, and is all I need to know about CMU. Glad this was rejected.-- Dlohcierekim  (talk) 10:41, 31 January 2019 (UTC)


 * "It will dilute the value of the barnstar" is the most ridiculous argument I've ever heard. It's a digital star that anyone can give for any reason- that someone else got one for a trivial reason does not devalue your own, any more than someone else getting a hug means one from your SO means less. An actual concern is that the researchers aimed to perform a study that affected, in however minor a way, a thousand people who did not explicitly consent to be in the study. I don't care if there's some buried term int he wikimedia TOS that legally indemnifies it- research on subjects without their consent is unethical, regardless of the scale of harm. -- Pres N  13:21, 31 January 2019 (UTC)


 * Well, just because the valuation of something is low (or perceived by some as low) does not mean that said value cannot be debased. That said, I agree with the main thrust of your comments as to what the major issues of significance here are. While it is not per se unethical to conduct research without informed consent under every single one of the various legal and institutional rubrics which define such matters, in this case (where there were non-anonymous subjects whose responses were to be tested and behaviour monitored), the approach was very obviously inappropriate in the extreme, as an ethical matter.  Learning of this makes me very curious as to who is on Carnegie's institutional review board and how they possibly thought this was permissible behaviour for researchers. Indeed, to the extent any of these researchers are members of the APA, I wonder how they might feel about this attempted research, given it would have, to my eye, pretty flagrantly violated numerous provisions of the association's ethics code. I'm also curious whether the information culled, beyond being shared with the venture's partners, was also intended for use in publication with one of the APS journals, given the role Kraut serves as facilitator for the APS Wikipedia initiative. But putting the APA and APS to the side for the moment, and returning to the active parties here, this whole affair gives off an odour that does not reflect well on any of the institutions involved.  It's an embarrassment that any researcher thought this could end well, and yet more indication of how disrespectful academics can be of Wikipedia and its community in particular--to say nothing of how laissez-faire they can be about their ethical responsibilities with regard to online research generally. I'd like to say I believe it likely that this affair will stain the reputation of the involved parties among serious researchers, but the truth is that I rather doubt it will. 
Snow let's rap 01:03, 1 February 2019 (UTC)


 * From an economic/monetary standpoint, it clearly does dilute the value of the Barnstar. Barnstars have a concrete value in that they signify the community's respect and thus lend prestige and authority to the recipient. Every instance of a barnstar created reduces the value of every instance that already exists; they would lend much more prestige and authority, for example, if only twelve people had them. We accept the very small dilution every time someone hands out a barnstar because we consider the value the recipient gets out of it, particularly as a morale boost, and the value the community gets out of it, as a way of identifying reputable people, to be worth it. It is harder to make that argument for thousands of random barnstars, especially in light of the fact that they would lose much of their value as a recognition: you would no longer be able to look at someone's user or usertalk page and think: "That person has a barnstar, I can trust him or her to be smart, competent and experienced" because a ton of people who are potentially none of those things now have them. —Compassionate727 (T·C) 19:44, 31 January 2019 (UTC)


 * Could you double-check that the paper was 455 pages? That's more of a book or a very lengthy, in-depth study, not a "paper". Liz Read! Talk! 16:23, 31 January 2019 (UTC)
 * Without seeing the paper myself, I would guess there are more than a few pages consisting mostly or entirely of raw or uninterpreted data. —Compassionate727 (T·C) 19:48, 31 January 2019 (UTC)
 * The paper in question is 10 pages long. That's a fairly common page limit in computer science conference papers. Maybe the op-ed could be updated to reflect that it's not 455 pages? Cheers, Nettrom (talk) 16:24, 4 February 2019 (UTC)


 * , you've taken some legitimate flak from the community for some of your Signpost work, but I just want to say that this op-ed piece was really well done. An Original Barnstar to you for your work on this. – wbm1058 (talk) 21:31, 31 January 2019 (UTC)
 * So if the goal of this research was to "help Wikipedia retain editors and encourage them to do needed work", I have an idea for a better experiment. Start refilling some editors' coffee cups, and see what effect that has on their contributions. Also note the effect Trump's experiment with delayed Federal worker pay had on outcomes like the wait time at airport security checkpoints, etc. – wbm1058 (talk) 21:42, 31 January 2019 (UTC)
 * The relationship between Wikipedia and academia is an interesting one. I would have expected these communities to overlap in goals and values, but this is often not the case. Problems arise when folks come here to further their own academic interests instead of working to build an encyclopedia. In this case it is clear that the purported benefits to Wikipedia were secondary to the research agenda; if this proposal was an earnest effort to improve Wikipedia, the researchers would have worked with the community from the start to design a study that was consistent with our needs and values. Proposals from universities should be treated with the same skepticism as any other form of paid editing, but instead we seem to presume a certain level of altruism and good faith just because they are in the field of education. –dlthewave ☎ 05:32, 1 February 2019 (UTC)
 * As a retired academic researcher and Wikipedian, I'd say the problem simply is that the researcher (often a research student), their advisors, and any ethics review panel know little about Wikipedia or how it really works. They will not have encountered what Wikipedia is NOT. If the proposal had been to test the motivational value of handing out Employee of the Month awards to supermarket staff, it would be pretty obvious to all concerned that obtaining consent from supermarkets to participate would be required, as either physical access to staff or access to staff emails would be needed. The organisational barrier would be obvious. However, Wikipedia doesn't exhibit the same obvious organisational barriers. It's the free encyclopedia anyone can edit. Many people have no realisation that there is an organisation behind it or a community to be consulted. Even if that occurs to them, it is not obvious from our home page or links off it where they should be asking. For example, Contact_us could be updated to have a heading "For researchers" to direct their enquiry somewhere useful. I note that some researchers do find their way to our research mailing list, where they can get practical advice on the design (and acceptability) of Wikipedia-related research. So we do have a good entry point for researcher enquiries there. Kerry (talk) 08:23, 1 February 2019 (UTC)


 * , I don't disagree in the slightest, but to be frank, any researcher working in the social and psychological sciences who is going to be doing research involving direct stimulus-response testing of individuals needs to have informed consent. That is research ethics 101.  Believe me, I understand the complications that this creates for the research itself, particularly in the arena of social psychology, but there are reasons these principles were adopted by the scientific community in the last century, and those reasons weren't by any stretch of the imagination trivial concerns.  The lack of institutional watchdogs such as you would find when using individuals as test subjects through their employment should not be treated as free license to utilize people as test subjects through open communities online, without obtaining consent. Ethics should not go out the window just because there isn't a sufficient presence to invoke practical liabilities--that's not the bedrock upon which ethical research should lie.  And frankly, even a grad student not getting this is an embarrassment to the profession and a sign that their institution has flubbed the task of their basic education in this area.  Taking shortcuts that cut through ethical barriers just because the research is conducted online is no more acceptable than trolling people because it's done in the anonymity of the internet.  No researcher should feel one whit more comfortable conducting research in an online forum where that same experiment would be clearly unacceptable if conducted at a farmer's market.  This isn't rocket science: the people one might be inclined to use as subjects online are still people, and it is still just as shady to exploit them by failing to get consent. 
And if the only thing keeping research in line were intermediaries with their own liabilities and legal limitations, and not the good ethical sense and training of the researchers themselves, things will get to a bad place fast, as indeed we have seen happen repeatedly in recent times, often involving one of the funding partners in this very research. It's bad enough that we have to worry about this kind of behaviour from social media players, marketing firms, and the political class, all of whom act with such disturbing impunity when it comes to privacy and consent. To allow academics to get in on that game without any sense of concern as to the implications... Snow let's rap 23:18, 1 February 2019 (UTC)
 * I too was shocked at what appeared to be blatant disregard for "informed consent". In spite of having all the same problems, Restivo/van de Rijt got published in what I assume to be a peer-reviewed journal.  I looked at that paper and I found the following paragraph near the beginning:
 * This study's research protocol was approved by the Committees on Research Involving Human Subjects (IRB) at the State University of New York at Stony Brook (CORIHS #2011-1394). Because the experiment presented only minimal risks to subjects, the IRB committee determined that obtaining prior informed consent from participants was not required. Confidentiality of personally-identifiable information has been maintained in strict accordance with Human Subjects Committee requirements for privacy safeguards.
 * What does this mean for us? It means that as far as the above-mentioned Committees on Research Involving Human Subjects are concerned, it's OK to mess with people's heads, not to mention rending the social fabric, without telling them that they're part of an experiment.  What's up with that?  This looks like a bigger problem than just one naïve professor at CMU and his grad student.  Bruce leverett (talk) 02:49, 4 February 2019 (UTC)


 * Yup--and while research institutions have been known to apply that "minimal risks" standard when culling data from pre-existing media, here the researchers were to have been directly experimenting with the subjects, providing stimuli and recording responses, and that has traditionally been seen by all institutions, professional associations, and researchers in good standing as a bright-line rule for when consent is required. Unfortunately, it would seem that the principle of social psychology (one that most any person in the contemporary world with web access is familiar with), whereby the consequences of improper behaviour online are seen as less consequential or "real" than the same conduct would be perceived to be offline, applies as much to many researchers with regard to their work as it does to the random joes who drop their standards for appropriate conduct--even though such researchers ought to be more on guard than most people about the irrationality and dangers of such a cognitive bias. Like I said, an absolute embarrassment to the profession and something that needs to be addressed.  Someone should do a systematic review of that--a research topic of some actual consequence. Cripes, would I love to see the expression on one of these researchers' faces when, while at a conference, they realized they were being referenced obliquely in a breakdown of slipping ethical standards owing to contextual rationalization. What sweet irony that would be. Snow let's rap 09:02, 4 February 2019 (UTC)
 * I would love to have seen the human subjects research ethics application for this study. I wonder if a FOI request would get it?  I've seen some questionable applications go through when 'commercialization' is mentioned. AugusteBlanqui (talk) 11:10, 4 February 2019 (UTC)
 * Well, Carnegie Mellon is a private university and thus would not typically be subject to FOIA requests directly, and while HS-IRBs are required to maintain minutes of their meetings and other documentation of their review of proposed research, they are not typically required to file these documents with the OHRP or another federal oversight entity unless the agency requests it (for example, as part of a review)--and if the documents are not within the possession of such a federal agency, they typically cannot be reached by a FOIA. (There are possible exceptions where, as alluded to before, the research institution is a state entity or it used federal funds in the research.) I suppose it's possible (maybe even probable) that when multiple parties sign on to an 'IRB of record' agreement (this is where the involved institutions agree to allow one IRB to investigate and authenticate compliance for joint research), if even one of the researchers involved is from a state institution (or arguably used federal funds on the research in even a trivial way), the documentation could be reachable by FOIA that way, even if the IRB in question was that of a private institution that did not use federal funds and did not file the documentation with a federal entity.  But I just don't know the regulations that intimately to say for sure. My overall inclination is to say taking this approach could be an ordeal. However, some private institutions try to be more transparent than others, and I imagine some may be amenable to public requests. This is where one would begin investigating such an inquiry with regard to Carnegie Mellon. Snow let's rap 12:04, 4 February 2019 (UTC)
 * Snow Rise, here's a question: how would this research have ensured that no minors were used as test subjects? I mean, apparently CMU cares about this.  Wikipedia-as-petri-dish seems to leave the door open for violations of child protection policies as far as informed consent goes, or general ethical guidelines for research.  Might be worth the Wikimedia Foundation getting the word out. AugusteBlanqui (talk) 12:13, 4 February 2019 (UTC)
 * I don't see how they could have, honestly. In many (but not all) research situations involving informed consent, parents or other legal guardians can provide consent as proxies.  Here, though, the IRB decided it was fine to conduct this research without asking anyone for consent, whether directly or in a guardianship role.  That actually raises an interesting question, because I note that Pennsylvania has a statute (Act 153) which requires all researchers likely to have contact with minors to register with three state entities.  Now, obviously the type of harm that statute seeks to protect against anticipates mostly in-person interactions, but looking at the statutory language itself, I see nothing that relieves the university of that responsibility when contact is restricted to online research.  In any event, CMU's own internal child protection policy makes clear that "Programs and Activities Involving Minors" is defined as "any program, event, or activity involving one or more individuals under the age of 18 that is...[s]ponsored, funded and/or operated by any Carnegie Mellon administrative unit, academic unit, or student organization, regardless of location. This includes programs and activities conducted on-campus, off-campus, or <i>remotely via the internet</i> or other means of communication" [emphasis added].  I suppose if I were in a dialogue with the IRB, that would be a fruitful question to raise regarding their review of this research--whether all researchers who might reasonably have had contact with minors through this research had Act 153-compliant registration. Snow let's rap 12:44, 4 February 2019 (UTC)
 * ,, I thought the two of you might be interested in some of the issues we are discussing here, particularly as we can't be sure this will be the last time we will see something of this sort. Thank you, btw, for providing a check here; we really rely on editors like you who volunteer time on both Meta and the local project as a first line of review of such matters, and you really came through for the community. Also, hi to you both--I hope you've been well? Snow let's rap 13:28, 4 February 2019 (UTC)


 * Dear fellows. Maybe we are the target of an experiment about how proud these people are to mimic the star system of various military organizations, even crosses with Oak Leaves, Swords and Diamonds. Smile better, we are studied. Pldx1 (talk) 10:02, 1 February 2019 (UTC)
 * How dreadful, to be a victim of a nefarious conspiracy of Facebook and Google to commit random acts of undeserved, unprovoked kindness. Jim.henderson (talk) 14:54, 1 February 2019 (UTC)
 * Random distribution of barnstars as part of a behavioral experiment is not an act of kindness though is it? AugusteBlanqui (talk) 17:00, 1 February 2019 (UTC)
 * We talk a lot about intent here on Wikipedia, such as in WP:NOTHERE. We are here to build an encyclopedia, and I think it is easy to underestimate and dismiss the number of obstacles that stand in the way of doing so, but the overall sum is considerable. Mkdw talk 06:17, 3 February 2019 (UTC)


 * Nothing would devalue a PhD degree more than if a post-grad student were not able to find anything better or more scientific to base his or her doctoral thesis on than Wikipedia barnstars. Kudpung กุดผึ้ง (talk) 09:10, 3 February 2019 (UTC)
 * Amen to that. – Athaenara  ✉  02:43, 25 February 2019 (UTC)


 * Well we may be disgusted but we should hardly be surprised. Facebook and other habitual intruders-on-people's-lives are (perhaps) finally being shamed and brought to book, but the temptation of unlimited data of unprecedented precision will remain a powerful inducement to misbehave. We may need to be protected by more than angry talk pages; clearly tools could be made to detect such behaviour. Chiswick Chap (talk) 08:55, 4 February 2019 (UTC)
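A detector along the lines Chiswick Chap suggests could start as a simple heuristic over the recent-changes feed. The sketch below is purely hypothetical--the `flag_mass_awards` name, the tuple format, and the naive keyword match are all assumptions for illustration, not an existing tool--but it shows one workable idea: flag any account that posts many barnstar-like talk-page messages within a short time window.

```python
def flag_mass_awards(edits, threshold=5, window_s=3600):
    """Flag accounts that hand out barnstar-like messages in bulk.

    `edits` is a list of (user, epoch_seconds, comment) tuples -- a
    stand-in for rows a real tool would pull from the recent-changes
    feed.  Returns the set of users with at least `threshold` barnstar
    posts inside any `window_s`-second window.
    """
    # Collect timestamps of barnstar-like posts per user.
    by_user = {}
    for user, ts, comment in edits:
        if "barnstar" in comment.lower():
            by_user.setdefault(user, []).append(ts)

    flagged = set()
    for user, times in by_user.items():
        times.sort()
        j = 0
        for i in range(len(times)):
            # Advance j past the last award inside the window starting at times[i].
            while j < len(times) and times[j] - times[i] <= window_s:
                j += 1
            if j - i >= threshold:
                flagged.add(user)
                break
    return flagged
```

A production version would fetch edits from the MediaWiki recent-changes API and would need smarter matching (template detection rather than a keyword), but even this crude sliding-window count would have surfaced a bot handing out a thousand random barnstars.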
 * Thanks for your kind words, @Snow Rise, and for the note on my talk. I should point out that I don't monitor WP:VPM systematically, and might have missed the VPM discussion if I hadn't been alerted to it by @Vexations.
 * I appreciate the points raised above, and agree that there are issues around, for example, child protection. However, I personally don't think it's productive at this stage to get too locked into those details.  My concern is that there should be high-level filters in place, and that those filters should cover issues such as addiction and child protection.  But it makes little sense to me to start discussing the nature of the filters when there is no framework for a filtration system at all.
 * As I noted at the VPM discussion, I have a very low regard for the ethical controls at universities. They are now so heavily dependent on corporate funding that a corporate approach to ethics is hardwired into all their decision-making processes.   The responses by @Diyiy and @Robertekraut to my points at VPM about disclosure only underline the convergence between corporate ethics and those of contemporary academia.
 * So Wikipedia needs its own filters. But m:Research:Committee is dormant (or possibly extinct), and there seems to be nothing in its place. Instead we had this proposal brought to the community with the support of @Halfak (WMF), who is a previous research colleague of Diyiy and Robertekraut. Whatever view anyone takes of either the substantive or ethical merits of that research (see it at m:Research:The Rise and Decline), Halfak had a clear conflict of interest in assessing this research.  Yet so far as I could tell from the VPM discussion, there was no other oversight of this project within the WMF.
 * That is clearly wrong. We need some framework for assessing research ethics either at the WMF or at en.wp, or both; yet we have neither.
 * I don't try to follow the internal politics of the WMF, so I have no idea whether the issues raised at the VPM discussion have led to discussions within the WMF; but I have seen nothing publicly about new structures or policy. I think that's a serious and astonishing omission, but it is how it is.
 * So it seems to me that en.wp needs to set up its own framework for screening research proposals. -- BrownHairedGirl (talk) • (contribs) 05:55, 6 February 2019 (UTC)


 * @: It would certainly be a start, though it would be a difficult situation for all involved if such a system were not built in lockstep with the WMF, which some under-informed researchers (unfamiliar with the organizational and legal complexities of this project and community) may presume is the only entity with whom they need to communicate such plans. Indeed, in this case, despite the fact that the research was to be carried out in this local community, it seems that no effort was made to seek input anywhere outside of Meta until after the proposal came under the scrutiny of rank-and-file editors. Incidentally, I noted for the first time today, upon review of the proposal page at Meta with a closer eye, that we are weeks or months past when most of this project was supposed to have taken place, and that the first communications here (Dec. 18) took place weeks after the stimulus portion of the experiment was to occur.  Are we entirely certain that they did not proceed with any testing before their approach came under fire? I'd very much like to know the answer to that question.


 * Anyway, as I was saying, we're going to have real discord (I mean Knowledge Engine levels of animosity, disruption, and distrust) if the local community and the WMF don't operate as a unified front on an issue of this importance. But that shouldn't necessarily stop us from taking preliminary steps. I don't think we would have a difficult time rallying the community to create a policy which states that no research shall be conducted here which involves human behavioural testing of activities in any way induced by the study itself, unless informed consent is sought from each user utilized in said study, and that failure to do so is to be treated as refused consent for each such person.


 * Really, that needs to be in the Terms of Use to have full efficacy (one more reason we need the WMF to hear our concerns here; perhaps it does not hurt to bring them into the conversation at this point).  But, although it would be one of those very rare policies that is more precatory to outside players than useful for internal processes, creating a community consensus document as to that principle would at least have the impact of putting researchers on notice as to how such behaviour is likely to be regarded here; that would have uncertain effects with regard to later review of their research conduct by those entities (institutional or governmental) who are capable of engaging in oversight of their work at various levels.  The professional and legal implications would be quite uncertain without a document that is more expressly legally operative (such as a new section/additional language in the ToU), but given the number of potential complications that might nevertheless arise from disregarding an express statement of this nature (with regard to their institutions, any professional associations to which they belong, the OHRP and other federal regulators, state Attorneys General, and their funders/commercial partners, to name a few interested entities), such a local policy might at least give researchers pause in the future about proceeding with human testing on this platform without first obtaining consent--since it seems we can't always trust them to exercise professional restraint in this regard without such a blunt statement. Snow let's rap 07:01, 6 February 2019 (UTC)
 * Whatever the potential legal consequences to a random 'independent' researcher who violates child protection policies by conducting behavioral research, the WMF should have been well aware of child protection issues. If I conduct research in Ireland, then I am required to certify whether or not it involves minors.  If it does involve minors, or even worse if I admit that I would be unable to tell, then there is not a chance in hell the proposal would get through without informed consent being attached.  Zero.  A 'minimal risk to participants' rationale might work for adults--maybe.  The WMF (and/or the researcher) would be vulnerable in multiple jurisdictions too--what passes for child protection in Pennsylvania might not work in the Netherlands, etc.  I'm gobsmacked that the WMF 'signed off' on this. Maybe child protection/informed consent was part of the original plan? AugusteBlanqui (talk) 10:47, 6 February 2019 (UTC)
 * "If I conduct research in Ireland then I am required to certify whether or not it involves minors. If it does involve minors, or even worse if I admit that I would be unable to tell, then there is not a chance in hell the proposal would get through without informed consent being attached. Zero."
 * It's meant to work the same way in regard to U.S. research: here are the relevant federal regulations on human testing as regards special protections for research involving minors.  Note that both the assent of the child and the permission of a parent or guardian are required, and assent is defined expressly as follows:  "Assent means a child's affirmative agreement to participate in research. Mere failure to object should not, absent affirmative agreement, be construed as assent." The IRB is also given the direct responsibility for ascertaining that those requirements are met.  Which rather raises the question of whether the researchers here made clear in their application to the IRB that close to 25% of Wikipedians are below the required age of consent for a study of this nature (the regulations also indicate that the age of consent required is the age of consent in the jurisdiction where the research takes place).  If not, it raises two unavoidable possibilities, neither one of them great: 1) they just didn't think about the ethical complications here long enough for this to occur to them, or 2) this obvious reality was known to them and they didn't disclose it.  If they did in fact disclose this fact in their HS-IRB application, and the Board still let the research move forward, we're potentially talking about an even bigger problem of botched ethical controls at CMU. It's worth repeating here also that Pennsylvania law requires that any researcher having contact with children obtain a number of different certifications, and that CMU's own internal policies make clear that the requirement, insofar as the university is concerned, is meant to apply to those who will undertake such actions online. So I'd be very interested in knowing if these certifications were sought and granted for any party to this research who was going to (or did?) have interactions with our editors, and whether this information was presented in the IRB application or in any review meetings (also required under federal law).


 * And while these issues regarding minors are certainly very salient and troubling concerns with regard to the ethical controls here, I don't want to get so lost in the weeds of just one of the more eye-catching issues that it seems as if this research was ethically compliant with regard to all the other participants. Because I think it very much was not. Putting aside the question of informed consent for adults for a moment--I do not believe most researchers, institutions, or professional associations would view these as acceptable circumstances under which to proceed without consent, "low risk" or not, but we can table that for a moment--there are numerous other concerns, the most obvious of which is privacy. Experiments which test a subject's response to stimuli (especially those conducted without their consent or knowledge) are meant to be conducted with the utmost confidentiality; there are requirements under law, under the policies of particular research establishments, and under the conduct codes of professional associations.  Here, there was absolutely zero possibility of keeping the stimulus and response of the individual subjects confidential, since the interactions were to take place on arguably the world's single most open platform, where every detail of those interactions would be freely viewable to anyone with an internet connection.  It is mind-boggling to me that none of this sent up red flags for any of the players involved (the researchers, their university oversight, their financial partners) before this got to Meta and Wikipedia itself. <b style="color: #19a0fd;">S</b><b style="color: #66c0fd">n</b><b style="color: #99d5fe;">o</b><b style="color: #b2dffe;">w</b> <b style="color: #d4143a">let's rap</b> 21:30, 6 February 2019 (UTC)


 * The process should not be limited to an ethics review; it would also need to include community-level consent. Using an analogy raised by another commenter, imagine if this experiment were proposed to the management of a small employee-owned grocery store and it was discovered that the research was funded by a big-box chain. Even if the procedure were demonstrated to be completely harmless to the participants, it would obviously not be in the best interests of the community and would certainly be rejected.
 * In this case, a number of editors expressed that they felt uncomfortable with any research associated with Facebook. The researchers did not seem prepared to address these concerns and seemed to think that they only had to convince us that the risk to individual participants was low. We should develop a research approval procedure that involves both an ethics review and community approval, with the understanding that community approval may be withheld for any reason just as individual consent may be withheld for any reason. –dlthewave ☎ 14:56, 6 February 2019 (UTC)


 * Well, that's just a separate issue from what others have been focused on here--which is not to say it is a non-issue. But I will say that the researchers here did disclose their intentions with a Meta research page--a whole month before they intended to start their research...--though, from what I have seen, they did not reach out to the local community until after concerns were raised at Meta (which was, I note with concern, well after they had planned to be already underway in their research). In any event, I share your concern at the cavalier attitude displayed with regard to Facebook being a part of this research, regardless of whether it was through a grant arrangement.  If it were quite literally any other company in the world, these concerns would be lessened, but given the company's recent history on privacy issues regarding third-party activities that have touched upon so many concerning behaviours, I don't think any concerns in this area can ever be described as mere hand-wringing. One way to address such concerns in the future is to make funding disclosures a requisite part of any research proposal presented at Meta, along with a requirement that such proposals be advertised in major community news spaces for each local project on which research will be conducted. Indeed, when you look at those proposals, they are laughably skimpy on the details regarding parties and oversight--and indeed, give only a partial accounting of the proposed research methodology itself. <b style="color: #19a0fd;">S</b><b style="color: #66c0fd">n</b><b style="color: #99d5fe;">o</b><b style="color: #b2dffe;">w</b> <b style="color: #d4143a">let's rap</b> 21:30, 6 February 2019 (UTC)


 * Perhaps some people should read again the contributions at meta:Research:How role-specific rewards influence Wikipedia editors’ contribution and Wikipedia:Village pump: Research project: The effects of specific barnstars... or, perhaps, simply read them at least once.   Both authors asked for permission and asked for feedback. Their intent was to select a sample of people worthy of a barnstar and then either send them the said barnstar or not. Imo, the best method would have been to send a message like  Being clear about the sender would have avoided many side-discussions, while being endorsed by MLA@CMU would have been perceived as far more praiseworthy than being endorsed by a random guy on a social network, therefore amplifying the effect (if any). One can also say that not perceiving beforehand that children don't like you playing with their pokemons... was another error. In any case, nothing was done, due to the ignited feed-back. Pldx1 (talk) 13:38, 6 February 2019 (UTC)


 * "In any case, nothing was done, due to the ignited feed-back."
 * I hope that's true. It would be nice to have confirmation either way, given that the original proposed timeline had the researchers beginning testing on Dec. 1st, and as best I can tell, the first concerns about the research were not raised until some time after that. <b style="color: #19a0fd;">S</b><b style="color: #66c0fd">n</b><b style="color: #99d5fe;">o</b><b style="color: #b2dffe;">w</b> <b style="color: #d4143a">let's rap</b> 21:35, 6 February 2019 (UTC)


 * I hope that's true. It would be nice to have confirmation. Dear User:Snow Rise. If they say they haven't, then they haven't... except if they are a bunch of liars. What is your educated guess? In any case, it would be interesting to have a study of all the barnstars that were granted in a window of, say, 12 months, centered on Dec. 2018, in order to detect changes of behavior, if any. Or to have a more general study, to detect if there are monthly tendencies, or general trends, or what else. How many more theses!!! Pldx1 (talk) 12:10, 9 February 2019 (UTC)


 * I have to be honest: I can't really tell if you're being facetious or not. For my part, I wouldn't have had any reservation about believing them if they said they had not yet proceeded, despite the projected timeline.  But they have never actually said as much, as far as I can see, in any of the related discussions, and that fact is what inspired my response to you above. However, I also trust that J-Mo would not be asserting below that they had not yet proceeded into human subject testing in such a factual and assuring manner unless he was privy to additional knowledge beyond what is present in the previous threads (which, again, do not provide clarity from either researcher on this point).


 * Of course, I'm assuming a lot on good faith there--for example: 1) that Diyi and Robertkraut have not responded to pings here because they are feeling a little bruised by the whole affair and taking a wikibreak, and that they are not avoiding answering questions which they know we would not like the answers to and which they now realize could potentially have real professional consequences, 2) that J-Mo did not give assurances on a mere presumption of his, rather than predicating said assurances on additional inside knowledge that he was privy to that allows him to be certain they did not proceed with testing--but I still have enough AGF in me for each of these individuals to allow for that. I may not be blown away by every aspect of the ethical conduct of these researchers or the tone of the response of certain WMF staff members in responding to community concerns here, but my "educated guess" (as you put it) is that they would not complicate the situation further by being misleading (even through omission or assumption). <b style="color: #19a0fd;">S</b><b style="color: #66c0fd">n</b><b style="color: #99d5fe;">o</b><b style="color: #b2dffe;">w</b> <b style="color: #d4143a">let's rap</b> 15:36, 9 February 2019 (UTC)

I don't understand the purpose of this op-ed. But I was involved in the discussion around this particular research proposal, I have experience performing and evaluating this kind of research, and I know the alleged perpetrators (or perhaps victims is a better term) well. So here are some facts, however unwelcome they may be to some (not all) of the people involved in this discussion, and in the related discussions on the VP and on Meta. Finally, an appeal: assume good faith of researchers who approach the Wikipedia community openly and honestly. Recognize that your own preconceived notions about what a researcher wants, or what affiliation and funding sources they have, may be incorrect or incomplete. Ask questions, but try to ask them like you'd interview a job candidate rather than like you'd interrogate a criminal suspect. You can't stop truly nefarious researchers, at least not in a systematic way. You can teach good faith researchers how to respect community norms. And you can tell them "no" and trust they will comply with community decisions. But treat them all as de facto enemies, and you lose all the potential benefits of research while doing nothing to curb the risks of individual harm or community disruption. J-Mo 23:51, 6 February 2019 (UTC)
 * 1) First, to address the preceding comment: the study was not performed, and will not be performed. Robertkraut and Diyiy engaged actively and in good faith with members of English Wikipedia who expressed a range of concerns about the study, and as a result of that discussion decided not to perform the study, a decision which they conveyed promptly and through the appropriate channels. They are both competent, professional and ethical researchers and have done nothing to my knowledge that would suggest otherwise. Dr. Kraut is a founding partner for the Wikipedia Education Program, and a veteran researcher of Wikipedia and other online communities.
 * 2) In the case of the 2016 paper, "Supporting in part by... a grant from Google" probably means that one or more of the graduate students involved had a research fellowship from Google at the time. Graduate students in technical fields are routinely funded by research grants from public and private institutions. In general, code, data, and research reports generated by researchers funded under Google (or Facebook, or NSF) grants are publicly available, and the choice of what research to perform is not dictated by the grantmaking entity. So a researcher or team might get $100k to fund 3 graduate students (tuition+stipend) for a year, based on a proposal that says something like "We will investigate the motivations of people who contribute to online communities", but Facebook/Google generally doesn't get to tell them what communities to investigate, or what research methods to use, and doesn't get special private access to their findings.
 * 3) It is not clear to me why Aaron Halfaker's name (and picture) are being called out here. To me, this has the appearance of an attempt to suggest that he is involved in some sort of unethical or otherwise nefarious activities to undermine Wikipedia, and is using his position within WMF to further those activities. If this is indeed what is being suggested, it is both incorrect and, frankly, kind of gross. Aaron Halfaker probably cares more about the wellbeing of English Wikipedia than any researcher you can name, and has done as much or more to further the goals of the project and benefit its participants—in both a volunteer and WMF staff capacity.
 * 4) IRBs assess potential for harm, according to evidence-based risk assessment criteria. Without knowing the details of CMU's IRB's response to Dr. Kraut's research proposal, I cannot comment directly on the issue of "how could the IRB let this happen?" But I can say that, to my knowledge, sending people templated expressions of gratitude is unlikely to cause them harm. Potential for online community disruption is out of the scope of IRBs, which is why we have our own documented processes for assessing potential for community disruption when we vet research proposals.
 * 5) Those processes are effective (as they were in this case) only insomuch as the researchers are willing to comply with them (as they did in this case). However, when researchers are subjected to personal attacks and bad faith accusations (as they were, by some editors, in this case) and drummed off Wikipedia, it undermines the authority of the process we ourselves created. A different, less professional and ethical, set of researchers who want to perform a study on Wikipedia may be less inclined to tell us about it after seeing how these researchers were treated. We as a community have very few effective protections against this kind of behavior. And we already have our hands full with legitimate vandalism, COI editors, and (at least potentially) coordinated attempts to sow disinformation in order to further the aims of state and non-state actors.
 * 6) The real clear and present danger that bad researchers present to Wikipedia is what they do with editors' non-public data. If someone is running a survey, or conducting interviews or otherwise collecting personal information about contributors, they need to have a clear statement of why that data is necessary to collect, how it will be used, how it will be securely stored, anonymized, etc., who has access, and how long it will be kept. Good faith, professional researchers will have clear answers to these questions. Ideally, their data practices should be verifiable by an external authority (IRBs are good at this). Bad faith researchers, or even simply naive researchers who didn't spend years in grad school, may not. Treat bad answers as red flags.
 * 7) Research benefits Wikipedia directly. We know what we know about the gender gap and the editor decline because of research. Intervention-style research can be an invaluable tool for figuring out how to address pressing issues like the gender gap, knowledge gaps, the editor decline, toxic cultures, vandalism, disinformation, editor burnout, and systemic bias. Some kinds of interventions don't work as well if everyone knows they're being 'intervened' with. Not having that knowledge can have a range of effects, from null/negligible to pronounced. And sometimes we don't like being 'intervened' with even if we believe the intervention isn't likely to be harmful. The potential benefits and risks of any particular intervention should be weighed based on a reasoned assessment of the nature, and scale, of both intended and unintended consequences. We have processes for that, but those processes depend entirely on good faith collaboration between researchers and community members.
 * 8) Research furthers Wikipedia's mission. In order to make the "sum of all human knowledge" available to everyone we need to understand how Wikipedia came to be, how it works, and even how it doesn't work. Researchers shouldn't expect that they can use "but we make science!" as a blanket excuse to do whatever they want, but the potential mission-aligned benefits of understanding this or that social or psychological feature of Wikipedia (and the people who write it) are valid points for consideration when weighing risk vs. reward.


 * I was unable to keep my response brief, and it felt weird to post another wall of text in the "comments" section of this Op Ed, so I decided to post it on your talkpage instead. Cheers, J-Mo 00:56, 11 February 2019 (UTC)


 * It is abundantly clear that this "experiment" represented disruptive behavior. Forcing editors to think about restrictions on IP barnstars has effects.  Some of the disruption continues even here, with people talking about some kind of "filters" -- which would be truly turning a temporary plague of computers invading human behavior into a permanent case of the same disease.  While it may pay off to be more watchful, I hope people will resist giving any more power to computers.  The key thing for us to take home from this is that the Golden Age of Vandalism is long behind us.  There was a time when people would transclude an HTML table with colored cells to put Goatse on the Main Page for the sheer joy of it.  But now, many of our vandals are drawing paychecks.  They have a purpose for seemingly mindless irrational behavior, and it may take a lot of imagination to try to riddle out what that purpose could possibly be. Wnt (talk) 15:59, 23 February 2019 (UTC)  P.S. I've largely ignored J-Mo's comment above because he seems to describe only partial knowledge -- if the research did take place, he might simply not have heard about it, so it is not really a meaningful denial AFAICT.

There was actually a similar experiment at the German Wikipedia that showed that barnstars increase new editor retention. I wonder what the difference was between the two such that only one was allowed to go forward. In addition, there's also m:Research:Testing capacity of expressions of gratitude to enhance experience and motivation of editors. What determines whether or not a given experiment like this will be permitted? By the way, should this talk page be broken into sections? It seems to have gotten pretty long. Care to differ or discuss with me? The Nth User 16:15, 28 April 2019 (UTC)