Wikipedia:Reference desk/Archives/Computing/2018 June 30

= June 30 =

== PAC learning – the function of the probability distribution D ==
Hi,

I am trying to understand the basic definition of PAC learning from Shai Shalev-Shwartz's "Understanding Machine Learning". The book defines a hypothesis class to be PAC learnable if, for every distribution D over the instances and for every labeling function f, an approximately correct hypothesis can be learned with high probability over the random choice of a training set. Two issues are not entirely clear to me:

(1) What exactly is the function of D? Isn't the requirement "for every D" very strict? What if D yields an unrepresentative training set? For example, in a digit-recognition task, what prevents D from sampling only the digit 6? How could a hypothesis selected on the basis of such a training set have a small generalization loss?

(2) They define PAC learnability as a property of a hypothesis class, i.e., of a solution. Intuitively, I would expect learnability to be a property of the problem, since some problems are harder than others. What role do the properties of the problem play in the definition?

Thanks! — Preceding unsigned comment added by 87.71.54.101 (talk) 18:17, 30 June 2018 (UTC)
 * Wikipedia articles: PAC (Probably approximately correct learning), Error tolerance (PAC learning).
 * Video lectures: PAC Learning.


 * D is the fixed but unknown probability distribution from which the training instances x are drawn. The point that resolves question (1) is that the generalization loss is measured with respect to the same D that generated the training set: if D concentrates all its probability mass on the digit 6, then a hypothesis only needs to be accurate on 6s to achieve a small loss under D, so an unrepresentative D makes the success criterion correspondingly lenient. Intuitively, I'd describe PAC learnability as the demonstration that a particular hypothesis strategy, applied to a given learning problem, produces results whose error keeps decreasing as more instances are learned. Early geocentric modelling of planetary orbits, which required arbitrary hypotheses of epicyclic motions, was overthrown by the more learnable heliocentric model, refined by Kepler (laws of orbital motion, fitted to Tycho Brahe's observational data) and Newton (derivation of those orbits from universal gravitation). DroneB (talk) 12:23, 1 July 2018 (UTC)
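 * The role of D in the definition can be illustrated with a toy simulation. The sketch below is not from the book: it uses a hypothetical threshold class on [0, 1] (instances are numbers, the unknown labeling function f is a fixed threshold, and the learner does empirical risk minimization by returning the smallest positive training example). The names `uniform` and `skewed` are made up for the illustration; `skewed` plays the role of the "only the digit 6" distribution from question (1).

```python
import random

random.seed(0)

# Hypothetical setup (not from the book): instances are points in [0, 1],
# the unknown labeling function f is a fixed threshold, and the hypothesis
# class is the set of all threshold functions h(x) = [x >= theta].
THETA_TRUE = 0.5

def f(x):
    return 1 if x >= THETA_TRUE else 0

def learn(sample):
    """ERM for the threshold class: return the smallest positive example."""
    positives = [x for x, y in sample if y == 1]
    return min(positives) if positives else 1.0

def generalization_error(theta_hat, draw, n=20000):
    """Estimate the loss of h(x) = [x >= theta_hat] under the SAME
    distribution D that generated the training data."""
    wrong = 0
    for _ in range(n):
        x = draw()
        wrong += (1 if x >= theta_hat else 0) != f(x)
    return wrong / n

def run(draw, m):
    """Draw m training instances from D, learn, and measure loss under D."""
    sample = [(x, f(x)) for x in (draw() for _ in range(m))]
    return generalization_error(learn(sample), draw)

uniform = lambda: random.random()           # a "representative" D
skewed = lambda: random.uniform(0.9, 1.0)   # D concentrated on one region
                                            # (the "only the digit 6" case)

err_10 = run(uniform, 10)
err_1000 = run(uniform, 1000)
err_skewed = run(skewed, 100)
print(err_10, err_1000, err_skewed)
```

Under the uniform D the error shrinks as the training set grows, matching the PAC guarantee. Under the skewed D the learner sees only instances near 1, so the learned threshold is far from THETA_TRUE, yet its measured loss is still small, because the loss is evaluated under that same skewed D. This is exactly why "for every D" is not as strict as it first appears.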