Talk:Transduction (machine learning)

Hello
Hello. I've reverted "posterior predictive probability of new cases given previous, observed cases" to "posterior probability of new cases given previous, observed cases". A predictive probability is just the probability of something that someone wants to predict; given that the sentence already spells out that we're interested in "new cases given previous, observed cases", it doesn't add anything. I don't mean to be difficult, I just don't think it's an improvement. Happy editing, Wile E. Heresiarch 22:14, 16 Oct 2004 (UTC)


 * Ok, I see your point. In my mind "posterior probability" is reserved for a distribution over parameters, but I agree that in the context of the article the intended meaning is clear. --MarkSweep 09:29, 17 Oct 2004 (UTC)


 * From a Bayesian point of view, probability can be assessed for any kind of variable, so any kind of variable may have a posterior probability. It's quite commonly the case that posterior probabilities are assessed for model parameters, but that doesn't restrict the use of the term in other contexts. Happy editing, Wile E. Heresiarch 16:59, 17 Oct 2004 (UTC)

Is there any possibility of making this clearer to a neophyte? I have an MS in Computer Engineering, with an emphasis in AI (1987), but still have trouble understanding this. I can just imagine a high schooler trying to figure this one out. Perhaps an example would be helpful... Tjamison (talk) 20:36, 18 December 2007 (UTC)

Labels
Can someone define the term "label" in the context that it is referred to on this page? -Joseph —Preceding unsigned comment added by Ieee8023 (talk • contribs) 18:18, 1 September 2010 (UTC)
 * A "label" is just a value (such as a class ID) associated with a point (or feature vector). For example, suppose you would like to build a machine that can identify the gender of a person in a photograph. You could start by scraping a bunch of pictures of people from the web. In this case, the array of pixel-values in each image would be the "feature-vector". A human would then label several of these feature-vectors with the class "male" or "female". Then, you could (theoretically) use transduction to predict labels for all of the other feature-vectors.--Headlessplatter (talk) 17:36, 16 March 2011 (UTC)
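The idea in the comment above can be sketched in a few lines of numpy. This is only a toy illustration (1-nearest-neighbour label assignment): the feature vectors and labels are made up, and real transductive methods, such as label propagation or the TSVM, also exploit the structure of the unlabeled points rather than just their distance to labeled ones.

```python
import numpy as np

def transduce_nearest(X_labeled, y_labeled, X_unlabeled):
    """Assign each specific unlabeled point the label of its nearest
    labeled point. No general model is fit; only the given test points
    receive predictions, which is the transductive setting."""
    # Pairwise Euclidean distances, shape (n_unlabeled, n_labeled).
    dists = np.linalg.norm(
        X_unlabeled[:, None, :] - X_labeled[None, :, :], axis=2)
    return y_labeled[np.argmin(dists, axis=1)]

# Hypothetical 2-D "feature vectors" standing in for image pixels.
X_lab = np.array([[0.0, 0.0], [5.0, 5.0]])
y_lab = np.array(["female", "male"])
X_unl = np.array([[0.5, 0.2], [4.8, 5.1]])

print(transduce_nearest(X_lab, y_lab, X_unl))  # → ['female' 'male']
```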

TSVM link
Instead of: in this category is the Transductive Support Vector Machine (TSVM).

Shouldn't this be: in this category is the Transductive Support Vector Machines (TSVM).

i.e. the link should be to the subchapter on TSVM, rather than the entire SVM article... Ric8cruz (talk) 12:41, 11 April 2016 (UTC)

I agree there is a problem with the TSVM link: a TSVM is not purely transductive, since the learned model can still generalize to unseen examples.

Originality
The article claims that transduction, i.e. "reasoning from observed, specific (training) cases to specific (test) cases", was invented by Vapnik in the 1990s. But reasoning from particulars to particulars was recognized as a distinct form of reasoning as early as William Ernest Johnson's 1924 textbook, Logic, in which he called this kind of reasoning "eduction". This kind of reasoning, rather than standard induction, was recognized as the central concern by Bayesians in the tradition of de Finetti, who proved his famous representation theorem with the goal of cashing out all probabilistic inference as reasoning about particulars. Dennis Lindley, representing a culmination of this tradition, went as far in his 1988 Wald Memorial Lecture to the Institute of Mathematical Statistics as defining the problem of statistical inference in the Bayesian paradigm just as passing "from one set x of observations to express opinion about another, as yet unobserved, set y." It might be the case that, as a computer scientist, Vapnik was working independently of this long statistical tradition, but it is still a mistake to attribute the introduction of the concept solely to him. Syllogician (talk) 21:55, 24 June 2024 (UTC)