Wikipedia:Reference desk/Archives/Computing/2021 July 26

= July 26 =

Humans adapting to AI
AI is an often misused term. At least some neural networks and similar systems have had trouble, or have been criticized, because of badly chosen training samples that led them to "learn" the wrong thing. What I am wondering is whether the opposite effect has been considered or studied in reliable sources, or is addressed in some article, since human brains still seem quite good, and probably better/faster, at learning. This would probably be a more serious concern for some not-so-smart AI used on social media and the like than for more serious AI research. 176.247.163.210 (talk) 01:13, 26 July 2021 (UTC)
 * What did you mean by "the opposite effect"? Neural networks learning the right stuff from well-chosen examples? Or criticizing NS (natural stupidity) instead of AS? If human brains seem quite good, it may be because we have no good measure of goodness. --Lambiam 09:21, 26 July 2021 (UTC)
 * I'm not sure what OP was asking, but humans definitely adapt to AI, for example emphasising sounds for speech recognition, e.g. saying "hey google lights oFF" knowing that in a noisy room it can mistake an "off" for an "on" unless emphasised. -- Q Chris (talk) 11:05, 26 July 2021 (UTC)
 * By "opposite effect" I indeed meant "humans learning from AI", and I was curious about more complex feedback loops. The example by Q Chris seems fitting, but somewhat tame, since humans probably use this emphasis only when talking to AI; still, if the AI is trained on these same data, there is room for a feedback loop, in this case leading to the AI recognising an unemphasised "off" even less. 176.247.163.210 (talk) 12:40, 26 July 2021 (UTC)


 * If you're looking for weird feedback loops between human society and machine-learning algorithms, I would recommend digging into YouTube content aimed at very small children.
 * On both ends, people are being manipulated by the algorithm. On the obvious side you have children whose preferences are shaped by being fed algorithm-chosen content before they're even old enough to know what content is available, and on the other side, you have people doing their best to intuit what seemingly random nonsense will be chosen by the machine to be profitable.
 * The result is far weirder than most people realize. A lot of it verges on what you'd probably guess was a fetish video for strange adults if you didn't know the context.
 * Here is a rather famous TED Talk about the issue.
 * There are probably similar issues happening with other algorithmic content-choosing systems, but this one is easy to see and understand because we adults are far enough outside of it. ApLundell (talk) 01:39, 27 July 2021 (UTC)
 * For similar issues, where algorithms play a role in pushing conspiracy theories, see here for YouTube, here for Amazon and here for Facebook. --Lambiam 09:53, 27 July 2021 (UTC)


 * Our article about Human–computer interaction is fairly awful, but it might be the research field you are thinking of.
 * Any answer has to depend on how you define "AI", but in context I do not see why neural networks need to be involved. If a chess grandmaster prepares for their next match using a computer program, whether they use LcZero (a neural network engine) or Stockfish version 11 (a "standard", minimax, alpha-beta pruning etc. engine) does not make a huge qualitative difference in whether they are learning from "AI". Tigraan Click here for my talk page ("private" contact) 11:59, 27 July 2021 (UTC)
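 * For readers unfamiliar with the "standard" engine technique Tigraan mentions, here is a minimal sketch of minimax search with alpha-beta pruning on a toy game tree. The tree, function name, and leaf evaluations are illustrative inventions; real engines such as Stockfish add move generation, iterative deepening, transposition tables, and much else on top of this core idea.

```python
# Minimax with alpha-beta pruning on a toy game tree.
# Internal nodes are lists of children; leaves are numeric evaluations
# from the maximizing player's point of view (an invented stand-in for
# a real position-evaluation function).

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node  # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff: we already have a better option
        return value

# Example: max -> min -> max -> leaves; the minimax value is 5.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 5
```

The pruning never changes the result; it only skips subtrees that provably cannot affect the final choice, which is why such "standard" engines can search so deeply without neural networks.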