Wikipedia:Reference desk/Archives/Mathematics/2011 October 14

= October 14 =

Category theory - Ob and Mor functors
Hello everyone,

I'm teaching myself introductory category theory and have come across two interesting (interesting for me!) functors: Ob: CAT -> SET, sending a small category to its set of objects, and Mor: CAT -> SET, sending a small category to its set of morphisms. However, I'm having a little trouble getting my head around these functors: if I understand the first correctly, it takes a category in CAT to its set of objects and the morphisms in CAT (i.e. functors) to their corresponding functions, right? So then I tried to work out whether it is full or faithful, but I'm struggling; I think this category of categories is confusing me. When Ob acts on a functor, it should map it to a function (i.e. a set of pairs of objects which is not 'one-to-many'), but a functor acts on both the objects of each category and their arrows, so should its image be a set of pairs of objects and pairs of arrows? Sorry, I'm a bit confused about all this, and probably also not articulating it very well; I would be very grateful if anyone could give me some help!

Spalton232 (talk) 15:22, 14 October 2011 (UTC)


 * You are right in your description of $$Ob$$; it is just a forgetful functor, like the usual examples $$TOP\to SET$$ &c (think of a small category as what it is: a set with a certain structure, and of a functor between small categories as a structure-preserving map between their underlying sets). So $$ObA$$ is the bare set of objects of the small category A, as you say. Similarly, if F is a functor between two small categories A and B, part of the data of F is how it acts on objects, that is, the function $$ObF:ObA\to ObB$$ that takes an object x of A to the object of B that we usually denote simply Fx. (The other part of the data of a functor $$F:A\to B$$ is how it acts on morphisms: besides the function $$x\mapsto Fx$$ above, there is, for any morphism $$h:x\to y$$ in A, a morphism $$Fh: Fx\to Fy$$, and this information is not included in ObF.) So, given A, B and F as before, we may denote the map $$F\mapsto ObF$$ by $$Ob_{A,B}$$:


 * $$Ob_{A,B}\colon\mathrm{Hom}_{CAT}(A,B)\rightarrow\mathrm{Hom}_{SET}(ObA,ObB)$$


 * (recall that $$\mathrm{Hom}_{CAT}(A,B)$$ is the set of functors from $$A$$ to $$B$$, and $$\mathrm{Hom}_{SET}(ObA,ObB)$$ is the set of maps from $$ObA$$ to $$ObB$$). So $$Ob_{A,B}$$ is certainly injective (an inclusion indeed), meaning that Ob is faithful. Also, it is certainly not surjective for all A and B, meaning that Ob is not full. --pm a 00:30, 15 October 2011 (UTC)


 * OK, I can certainly see why it's not surjective, because functors have to satisfy constraints which arbitrary maps between the objects don't. However, when we pass to $$Ob_{A,B}\colon\mathrm{Hom}_{CAT}(A,B)\rightarrow\mathrm{Hom}_{SET}(ObA,ObB)$$, we're forgetting the information about how the functor acts on morphisms, aren't we, retaining only the map between objects? So couldn't we have two functors which differ on morphisms but act the same on objects, and so get mapped to the same element of $$\mathrm{Hom}_{SET}(ObA,ObB)$$, preventing injectivity? Spalton232 (talk) 12:49, 16 October 2011 (UTC)
 * Yes, you are right, sorry for the slip. When in a proof something is certainly so, as a rule, it is at least not certainly so! ;-) --pm a  14:26, 20 October 2011 (UTC)
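The non-faithfulness identified above can be made concrete with one-object categories, i.e. monoids: any two distinct monoid homomorphisms share the same (unique) object map, so Ob sends them to the same function. A minimal Python sketch of this counterexample (the representation of functors as plain functions and dicts is mine, purely for illustration):

```python
# Two functors between one-object small categories that agree on objects
# but differ on morphisms, showing Ob: CAT -> SET is not faithful.
# A = the additive monoid of natural numbers (morphisms 0, 1, 2, ...;
#     composition = addition), viewed as a category with one object "*".
# B = Z/2 (morphisms {0, 1}; composition = addition mod 2), also one object.

def trivial(n):
    """Morphism part of functor F1: every n goes to 0."""
    return 0

def parity(n):
    """Morphism part of functor F2: n goes to n mod 2."""
    return n % 2

# Both preserve identities (0 -> 0) and composition
# (F(m + n) = (F(m) + F(n)) mod 2), so both are genuine functors.

# Object parts: each category has the single object "*", so
# Ob(F1) and Ob(F2) are literally the same function.
Ob_F1 = {"*": "*"}
Ob_F2 = {"*": "*"}

assert Ob_F1 == Ob_F2                                   # equal on objects
assert any(trivial(n) != parity(n) for n in range(4))   # distinct functors
```

So Ob collapses F1 and F2 to one element of $$\mathrm{Hom}_{SET}(ObA,ObB)$$, exactly as the correction above says.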

Small sample size probabilities
I am not sure how much of this is probability and how much is statistics, but a point in the right direction would be helpful. I have a machine that produces a 'hit' with probability 99/100 and a 'miss' with probability 1/100. I observe some results, which are summarized only as n trials and k misses. Given these values, how confident can I be that these results were produced by my machine, and not by a machine with a different hit/miss ratio? Or, rather, for a given n, what is the 95% (or 99%) confidence interval for the range of results my machine could produce? The major factor here is that n will be small, so I do not think traditional z or t test statistics can be used. n can be anywhere from 0 to 1000, but is usually 0 to 30; k is usually small (0 to 5). I really just want a thought process for throwing out data that (with some degree of confidence) was NOT produced by my machine, and keeping data that (again with some degree of confidence) COULD have been produced by my machine. Thanks Micah Manary (talk) 22:10, 14 October 2011 (UTC)
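Since the question stays within a single binomial model, one concrete decision rule (a sketch of my own, not from the thread; the function name is mine) is an exact one-sided binomial test: compute the probability that the known machine produces at least the observed number of misses, and discard the data when that probability falls below a chosen significance level. Because it uses the exact binomial distribution, no normal approximation is involved, so it remains valid for small n:

```python
from math import comb

def upper_tail(n, k, p=0.01):
    """P(K >= k) for K ~ Binomial(n, p): the chance that the known
    machine (miss probability p) produces at least k misses in n trials."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Decision rule sketch: reject "this data came from my machine" at the
# 5% level when the observed miss count is this extreme or more extreme.
p_val = upper_tail(30, 3)   # 3 misses in 30 trials
```

For example, 3 misses in 30 trials gives a tail probability of about 0.003, so at the 5% level such data would be rejected as coming from the 1%-miss machine.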
 * You can never have any confidence that your machine was being used, as opposed to some other machine with a very similar value of p. To test a specific hypothesis about p you can use Pearson's chi-squared test, or, if you want to work a little harder, Fisher's exact test. I'm not sure how to derive a confidence interval without testing a range of hypotheses. In this simple situation it would also be possible to work out the probabilities directly using the formulas for the mean and variance of a binomial distribution, but I don't feel like writing out the explanation. Looie496 (talk) 00:49, 15 October 2011 (UTC)
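For the chi-squared suggestion, the two-category (hit/miss) case has one degree of freedom, so the p-value needs nothing beyond the standard library: the survival function of a chi-squared variable with 1 degree of freedom at x is erfc(sqrt(x/2)). A sketch (function name mine), with the usual caveat that the approximation degrades exactly in the asker's small-expected-count regime:

```python
from math import erfc, sqrt

def chi2_gof_pvalue(n, k, p=0.01):
    """Pearson chi-squared goodness-of-fit test of H0: miss prob = p,
    given n trials with k misses. Two categories -> 1 degree of freedom,
    so the p-value is erfc(sqrt(stat / 2)).
    Caution: the chi-squared approximation is unreliable when expected
    counts are small (here the expected miss count n*p), which is the
    asker's usual regime -- prefer an exact binomial test there."""
    exp_miss = n * p
    exp_hit = n * (1 - p)
    stat = (k - exp_miss) ** 2 / exp_miss + ((n - k) - exp_hit) ** 2 / exp_hit
    return erfc(sqrt(stat / 2))
```

For instance, 3 misses in 100 trials (expected: 1) gives a p-value of roughly 0.04, borderline at the 5% level, though with an expected count of 1 the exact test should be trusted over this approximation.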
 * Let's take $$n=1000$$ for example. We'll find a 99% confidence interval for k, symmetric if possible around its mean 10. We'll start with the probability that k is exactly 10 - using the mass function of the binomial distribution, it is 0.12574. The probability that k is 11 is 0.114309, so $$\mathrm{Pr}[k\in[10,11]]=0.240049$$. We want the interval to be symmetrical, so we next add $$k=9$$ and get $$\mathrm{Pr}[k\in[9,11]]=0.365663$$. We continue adding 12, then 8, etc. until we get $$\mathrm{Pr}[k\in[3,18]]=0.990416$$. So a 99% confidence interval is [3,18].
 * There's no simple formula for the confidence interval for a general n, but for any specific n it can be found with a straightforward calculation.
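The greedy symmetric construction described above is mechanical enough to code directly. A sketch (function names mine), using the exact binomial mass function from the standard library:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial mass function: P(K = k) for K ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def symmetric_ci(n, p, level):
    """Build a confidence interval for k by starting at the mean and
    alternately adding the next value above and the next value below,
    as in the thread, until the requested coverage is reached."""
    lo = hi = round(n * p)              # start at the (rounded) mean
    total = binom_pmf(lo, n, p)
    add_high = True
    while total < level:
        if add_high and hi < n:         # add the next value above
            hi += 1
            total += binom_pmf(hi, n, p)
        elif lo > 0:                    # add the next value below
            lo -= 1
            total += binom_pmf(lo, n, p)
        else:                           # lower edge hit 0: extend upward
            hi += 1
            total += binom_pmf(hi, n, p)
        add_high = not add_high
    return lo, hi, total

lo, hi, cov = symmetric_ci(1000, 0.01, 0.99)
```

For n = 1000, p = 0.01 and level 0.99 this reproduces the interval [3, 18] from the worked example.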
 * But the whole concept of "confidence intervals" belongs to the frequentist approach. The proper Bayesian way is to describe a prior distribution on the miss probability and update it according to the data. For example, a possible situation is that there is a 50% prior probability that this is the correct machine with a 1% miss probability, and that otherwise the miss probability is beta-distributed with $$\alpha=1,\ \beta=10$$. -- Meni Rosenfeld (talk) 19:11, 15 October 2011 (UTC)
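That mixture prior admits a closed-form update: the known machine's likelihood is plain binomial, while marginalizing the Beta(α, β) branch gives a beta-binomial likelihood. A sketch of the computation (function names mine), with the numbers from the reply above as defaults:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log of the Beta function B(a, b), via log-gamma for stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def posterior_correct_machine(n, k, prior=0.5, p0=0.01, a=1.0, b=10.0):
    """Posterior probability that n trials with k misses came from the
    known machine (miss probability p0), versus an unknown machine whose
    miss probability has a Beta(a, b) prior (beta-binomial marginal)."""
    # Likelihood under the known machine: Binomial(n, p0)
    like_known = comb(n, k) * p0**k * (1 - p0)**(n - k)
    # Marginal likelihood under the unknown machine: beta-binomial
    like_unknown = comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))
    num = prior * like_known
    return num / (num + (1 - prior) * like_unknown)
```

With 0 misses in 30 trials the posterior probability of the correct machine rises to about 0.75; with 5 misses in 30 trials it collapses to well under 1%.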