User:Ziko/AI images and German Wikipedia

AI-generated images: urgently needed or urgently to be banned?
'''"But the law doesn't care!" German Wikipedia's Digital Thematic Regulars' Table on 8 March slid into an argument about AI-generated images. Or: AI-faked images, as one design expert put it. Or: AI-nicked images, as one lawyer expressed his indignation. So do we need rules in Wikipedia to prevent children's faces from being exposed and parents from running to court?'''

Fortunately, the discussion remained very civilised, because I had invited the distinguished photographer Martin Kraft, a consultant for interactive design, as well as Lukas Mezger, media lawyer and former chairman of the Presidium of WMDE, to the open online evening on Wikipedia-related topics. Both have been well known in the Wikipedia community for many years. A third input contribution came from me, Ziko van Dijk.

Many more fellow contributors and supporters took part in the discussion, including staff members of the German association. (As agreed in the unrecorded online meeting, I mention by name only statements made by the speakers.) As Martin rightly said, AI images have been the hot topic in the media industry for months. Incidentally, the Wikimedia Foundation is also concerned about whether AI brings not only opportunities but also risks.

Who does such a thing anyway?
Anyone who tries to define artificial intelligence is trying to nail a pudding to the wall. But maybe "AI" is simply a collective term for very different techniques, or even just marketing-speak, as education expert Nele Hirsch suspects?

Instead of a definition, I have shown by way of example how to make AI images with the programme Stable Diffusion. You type a few words into the "prompt" and get a random image that somewhat matches the words. The result depends on a few factors. According to the survey at the meeting, eight participants (32 per cent) had already generated AI images themselves, 16 had not yet.

I also presented some thoughts for discussion: the distinction between "imagination" and "technology" in creating images, the framework of law and rules in a concrete wiki, and the path from the world to the wiki.

"What if they look like real people?"


Lukas provided a serious counterpoint to my open-mindedness (or naïve enthusiasm?) for AI-generated images: "Suppose you once posted a private photo from your holiday on Facebook. If you enter a certain prompt into an AI programme, the result is more or less your photo, because it was used to train the AI. It doesn't come out exactly like your photo, but almost. What if the creator of the photo doesn't want that?" If the faces of minors appear in AI-generated pictures, parents go after the manufacturers of the algorithms, Lukas says. The manufacturers then claim, of course, that it is only a resemblance created by coincidence in the software. "But the law doesn't care about that. The law asks: does the face in the picture look like a living child? If so, then it is an infringement of personality rights." The creators of the images used to train the algorithms are not asked for their consent either. They are not amused and then come to Lukas's law firm: "Why does the palm tree I painted appear in such an image generator?" So how should we deal with the risk that our media platform Wikimedia Commons hosts images that may show real people or infringe copyright? "I think we should consider restrictions on AI-generated images on Commons because of these risks!"

"The human factor has been taken out"


Holy Jimbo! Martin had to introduce his contribution with a disarming "speaking as a legal layman". Certainly there could be similarities to living persons, but is that always a derivation? The AI puts things together associatively from a body of experience: "When I paint or draw a person from memory, even if I'm not thinking of a concrete one, I'm still putting together parts of people in my mind. So I might just happen to paint the same picture as someone else." Two semesters ago, a student of his wrote a paper on how AIs can help photographers take better pictures: AI filters in Photoshop, where the sky can be replaced, and so on. The paper, of course, has been completely outdated since the AI wave of 2022.



The so-called algorithm is not a classical algorithm but rather machine learning. Based on millions of images, the machine builds a model - but a model is not just a collection of images. It is better to think of neural networks that put things together associatively. This existed in the past as well, but only recently has it become so good that you can hardly tell the difference.

Actually, we are talking about two AIs, Martin explained. One forges generatively: make me a picture that looks like a Salvador Dalí. The second one can recognise a forgery. The better the "forgery recogniser" got, the better the "forger" got: "If I tell the AI: make me an infographic on renewable energies in Germany, it will do so. But not by sitting down, researching and thinking about the presentation. Instead, it asks itself: how do other people make infographics? What do they look like for this topic? Then it puts something together and thinks: this could work." That's why we're dealing with material, Martin warns, that is not real but at best plausible. With Louis de Funès, it's not that his head is being measured; it's about faking. The AI cannot really comprehend. "That's the limitation."
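Martin's "forger" and "forgery recogniser" describe the idea behind generative adversarial networks (GANs). As a toy illustration of my own (no code was shown at the meeting), here is a minimal one-dimensional sketch: an affine "forger" learns to imitate data drawn from a normal distribution around 3, while a logistic "forgery recogniser" tries to tell real from fake, and each improves against the other.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Forger (generator): fake = a*z + b, trying to imitate real data ~ N(3, 1).
a, b = 1.0, 0.0
# Forgery recogniser (discriminator): D(x) = sigmoid(w*x + c),
# an estimate of the probability that x is real.
w, c = 0.0, 0.0
lr, batch = 0.02, 128

for step in range(5000):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Recogniser step: push D(real) up and D(fake) down.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Forger step: push D(fake) up (non-saturating generator loss).
    d_fake = sigmoid(w * fake + c)
    grad_out = -(1 - d_fake) * w        # dL_G / d(fake)
    a -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

# After training, the forger's offset b has drifted towards the real mean of 3.
```

Image generators such as Stable Diffusion actually use diffusion models rather than GANs, but the forger-versus-recogniser game Martin describes is exactly this adversarial setup.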

"But the technology is still young"


"Grossly simplified: it's high-performance statistics," a participant seconded. To train a model, data is extracted and series are continued. If you were to ban that, you would also have to ban the collection of statistical data. This reminded him of the debate on whether machines may read scientific papers in order to find out which drugs are frequently researched.

What about responsibility? Lukas told us that the WMF normally does not have to assume direct liability for content uploaded by users. In case of infringement, however, the uploading user would be liable, as would anyone who publicly re-uses such content. In the discussion, it was countered that the risk of creating an image that is "too similar" is very small. One should not be discouraged by such a hypothetical case.

Lukas insisted that even with low risks, rules are needed for Wikimedia Commons: "The person who uploads an image cannot know whether the AI has delivered a legally impeccable image!" Martin, in turn, replied that the "scope of possibilities of what has not yet been thought or made" is so huge that the copyright question can only be assessed in retrospect. In other words, if there is an original image and an AI image really is too similar to it, then you deal with it at that point. Admittedly, the problem is that when an AI image is uploaded, we cannot know whether a corresponding original exists.

And what is the point of it all?


Lukas rightly displayed his obstinate side: "Again - can anyone give me a cool case for this, where we could use an AI-generated image in Wikipedia?" Several participants mentioned the problem of not having a free image available for a topic. Then the AI could help out. But then the article would no longer represent a genuine picture of reality.

Related to this is the fan art debate on Wikimedia Commons: it is not so much about the culturally interesting phenomenon of amateur art with pop-culture content. Rather, people try to illustrate the Wikipedia articles on Harry Potter with their own drawings. Or: in the German article on the film "Schindlers Liste", we show a drawing of two hands to imitate the motif of the famous film poster.

One participant articulated the dilemma as follows: "Either the Klingon is too derivative or too fake." If the drawing is too similar to the original, it infringes copyright. If it is too dissimilar, it defeats its purpose. And the fact that Harry Potter wears a scar in the shape of a lightning bolt, or that a movie poster shows hands, can be described in words if necessary. (Apart from that: our Wikipedia readers won't care much about such a discussion. They simply google for pictures and find whatever they want somewhere on the internet.) A more appropriate approach seems to be one that was tried a few years ago: artists drew celebrities who may already be deceased and of whom only unfree photographs exist ("Unseen"). In our regulars' table discussion, however, we agreed that this, too, was just a way of bypassing copyright. The participants were divided on whether that was objectionable.

And one may also ask whether the drawing looks like the person at all. This question is familiar from the days when there was no photography. That is why people argue on the Wikipedia talk page about which Mozart painting looks most like the Austrian composer. And some might object that drawings are not automatically superior to photographs: after all, there is the saying that someone is not well captured in a photograph.

Martin introduced the idea of using AI images to illustrate building styles and art styles in Wikipedia. If you want to show a typical Bauhaus building, you don't use a specific one by artist A or artist B, but a neutral, general one. Of course, one would then indicate that it was AI-made. However, he was told, an expert would then have to confirm that the picture really shows the Bauhaus style. But the participants of our meeting were not comfortable with such a choice of image either.

"Banning, you can demand that at the regulars' table, but... what can be done, will be done"


But for Martin it was clear: we won't be able to ban AI, because in the future we won't be able to tell what was created by a human (perhaps with a camera) and what was created by a machine. And AI is already everywhere; just think of a smartphone with a beauty filter. The biggest danger for Wikipedia is rather that the sources we use contain AI: media reports we rely on could be fake. One participant suggested that the press will use AI images "like crazy", as stock photos and beyond. Technology is also getting better, he said: "The six-finger shit is going to go away soon."

I still tried to expand the discussion with prepared questions, such as: will there be prompt specialists in the future, and will they get some kind of recognition? Will we see campaigns by activists who decry possible bias in AI training data? Will there one day be a seal of approval certifying that media content is "AI-free", or that the models have at least been trained according to ethical standards?

'''To sum up: AI technologies are developing rapidly, and it would be wrong to adopt new rules now that will soon no longer make sense. Especially since we seem to be able to deal with potential problems with our current rules. Besides, a court ruling may shake up the situation as early as tomorrow.'''

And what do we make of the fact that Wikipedia articles have long since become "victims" of data collectors themselves, for example to train ChatGPT? Without having asked the Wikipedia authors. But this question already points to a future Digital Thematic Regulars' Table: Salino01 will guide us through the discourse thicket of AI texts. It's worth being on the notification list. (Ziko, 10.03.2023)