Talk:Science/Outline discussion

This page is for discussing the format of Talk:Science/Outline. Add 2c below: Casliber (talk · contribs) 05:01, 18 October 2011 (UTC)

Outline as laid out
Ok folks, are we happy with the layout as coined at Talk:Science/Outline? If yes, note below. If problems, itemise issue and preferred tweak below: Casliber (talk · contribs) 05:04, 18 October 2011 (UTC)

Satisfied

 * 1) Casliber (talk · contribs) 05:04, 18 October 2011 (UTC)
 * 2) Broadly satisfied. Fifelfoo (talk) 04:05, 20 October 2011 (UTC)
 * 3) Mike Christie (talk - contribs -  library) 23:57, 20 October 2011 (UTC)
 * 4) I'm not a main editor on this page because I'm pretty new to this, but I like what you have proposed. MilkStraw532 (talk) 00:05, 21 October 2011 (UTC)

Unsatisfied

 * 1) Sections 2 and 4 need work: 4 in ordering for logical flow and importance; 2 under experiment—also formal (i.e. non-experimental) science, and a way to present "observational" as opposed to "experimental". Section 2 seems to have good coverage, but needs work on hierarchy and flow. Fifelfoo (talk) 05:29, 18 October 2011 (UTC) Fifelfoo (talk) 04:04, 20 October 2011 (UTC)

Proposed tweak

 * Speak now or forever hold your peace...(just kidding) Casliber (talk · contribs) 05:04, 18 October 2011 (UTC)


 * I feel like Dustin Hoffman in that scene at the end of The Graduate. Please note that your nice outline, save for the historical and development parts, is really an outline of the structure of the physical and applied physical sciences, as shaped by the principles of positivism. It really has nothing to do with the formal sciences, or with the antipositivist (interpretivist) approach to the social sciences. Worse still, it may occur to some dyed-in-the-wool positivists that some types of "fringe science," holistic health practices, psychiatry, and some other semi-formal fields that cannot be quantified, are not much different from the antipositivist approach to the social sciences and (even) animal behavior. So that's a problem. The positivists would like to solve this by declaring by fiat that all nonpositivist, nonquantitative methods are not "science." But that still leaves us with the other end, at the formal sciences, like math. And it fails to describe the manifestly nonpositivist things like intuition that play a role even in the inductive physical sciences. Where do new theories come from? Thus, I suggest that the outline be narrowed down to history and perhaps a discussion of how the distinction between formal/nonformal (deductive vs. inductive) science arose. Then the role of positivism and interpretivism and a discussion of the social sciences. All the stuff which is implicitly positivistic (quantification, statistics) in the physical sciences can be moved over to the history and philosophy of science article. Besides the fact that the formal sciences have a much smaller role for induction (except when forming hypotheses), and the nature of theory and theorems is much different (knowledge is proven in math, never in the physical sciences), the other sciences have a problem with the role of the human subconscious, which is one demonstrable route to knowledge.
For example, we know that people can learn to do things that they cannot teach, because they themselves do not know how they do them. In WW II, experienced spotters of airplanes could tell British from German airplanes, but could not say what quantitative criteria they used to tell the difference. Their ability was genuine, and could be scientifically, objectively, and statistically demonstrated. So plane-spotting was partly a science, not like ESP. But it was also an art, since it could not be fully reduced to a set of principles. The way in which a master plane-spotter trained a student was interesting. The student guessed at the type of a plane seen, and the master said "correct" or "incorrect." After very many trials the students learned to differentiate, but could not say what criteria THEY were using, either! This is all done in an inaccessible part of the mind. A similar thing happens in sociology, animal behavior research, psychiatry, medicine, and so on. An animal behaviorist can tell when one monkey is about to attack another, but not how she knows. The knowledge is real, because it can be shown statistically to be predictive (better than chance), but it cannot be reduced to an algorithm that could be used to teach a computer (say). However, it can be passed to students, by example. A huge amount of human knowledge exists in this twilight-zone state, and since "science" means "knowledge," we should not be surprised if we encounter this problem THERE. And in the social sciences, indeed we do. But it's there in all the physical sciences, too, being ignored. We have a master/apprentice situation in Ph.D. training and medical training, and it's there for a reason. That reason is that methodological knowledge is being transferred in a non-positivist fashion. The outline I see here takes no heed of that. S  B Harris 18:22, 18 October 2011 (UTC)
 * To summarise, you have a problem with the section "Philosophy"'s dot points and the section "Scientific Method" not accurately reflecting the breadth of scientific practice? Fifelfoo (talk) 22:50, 18 October 2011 (UTC)
 * Does this diff resolve your issues with the structure of the Methods section? Hopefully the methodological anarchist critiques should have been raised in full in the philosophy section immediately preceding. Fifelfoo (talk) 00:00, 19 October 2011 (UTC)
 * Does this diff resolve your issues with the structure of the Philosophy section? Fifelfoo (talk) 00:25, 19 October 2011 (UTC)
 * Those changes are actually pretty good! Looking at this outline I think the article can be written like this, if it is filled with would-be editor-warnings of the type: . To keep it from bloating up with that stuff. Also, there should be some fairly heavy warnings right up front. I would expand the  header to say something like: This article is about the general term as it refers to many fields of reproducible knowledge, including mathematics and applied sciences. For the specific topics of experimental study by scientists, see Natural science. As for specifics, the first thing I would do is move the "Formal sciences" part ABOVE the scientific method, since the full method as we know it isn't really used in the formal sciences. And we need to talk about many of these, including classificational sciences and methodologies, statistics, etc., before we get to the "scientific method," in which all these features appear as "taken for granted" out-of-the-box tools, not fields of study. The other thing that distinguishes the "experimental sciences" from those that merely describe (the formal sciences, but also political science and government theory, which begins to look like the study of history) is that the experimental sciences include the variable of TIME. Thus, they all succeed or fail on whether they are not only explanatory, but also predictive. And the predictiveness must include the future as well! "Prediction is very hard, especially about the future" sounds like something Yogi Berra would have said (and he is sometimes credited with it), but it was actually first said by Niels Bohr. Retrodiction is prized in science if it flows naturally from a theory (so the theory doesn't look too ad hoc), but prediction is even more highly prized. And one must discuss this. When a theory explains something that has already happened (like a lot of political science), that's retrodiction.
But nobody expects historians to predict the future, and nobody really hires poli-sci profs to be political consultants and campaign managers. Those jobs really ARE applied sciences, even experimental sciences, in the way we mean classically. So are large parts of computer science-- you never know what will happen with a large program, and there's only one experiment-- run the code. Rutherford's division of all science into "either physics or stamp collecting" is the division between induction and deduction, but also between past classification and future-prediction that is necessarily uncertain. Often it's ONLY the time variable that brings uncertainty to induction (everything else you can easily fix with redefinition). All this must be explained. A fully applied science like engineering must be (almost) fully predictive (engineering experimentation goes on, but this is materials science and other physics-like disciplines). Much of political science and library science is not predictive but only explanatory. None of the humanities are at all predictive, and that may be what makes them humanities. The predictable parts of the fine arts are the craft things, and they are sciences in the wider sense of the word. And so on. This is the article to discuss all that. The scientific method and philosophy of science should go in those articles, and only be summarized briefly here, inasmuch as they apply to only parts of "real knowledge" in many other disciplines. I would say that "science" implies an algorithm that you can reduce to a series of steps that will work. What's left over is humanities, history, fine art, and whatever remains in the "softer" disciplines after whatever algorithms and predictiveness they offer have been removed (as the parts that have been "reduced to a science")-- and you then look at whatever you still have.  S  B Harris 02:24, 19 October 2011 (UTC)
 * Two sciences I always like to use to test the idea of scientific method, and your conception of "time", are geology and astronomy. In both cases experiments in the sense of controlled variables in a human-created test are impossible.  Quantitative sociology and more rigorous qualitative public-behaviour studies are another set.  We aren't often allowed to lock undergraduates in basements and give half of them truncheons (any more).  Yet these sciences are intimately related to the experimental sciences, because of the concept of observing new sets of previously unknown givens as the test.  (Also, I think you accidentally altered some of the hierarchy, so I've fiddled it back; please check again). Fifelfoo (talk) 02:59, 19 October 2011 (UTC)
 * Okay, I'm seeing a tension in section 2, on practices. Sbharris is ordering the discussion of practice based on disciplinary practices: Formal, Natural, Life, Social.  I'm trying to order them based on methodologies: Formal Proof; Tested Proof: Experimental versus Observational divide; Quantitative versus Qualitative divide.  I think we need advice on how to proceed here.  Fifelfoo (talk) 04:23, 19 October 2011 (UTC)
 * You can order these things in any way you like-- I have no strong feelings about it. Indeed you may have to go through the fields twice, ordering them one way by subject (as the article and infobox do) and then going through them again by methodology. It's insightful either way, and maybe more so if you do it both ways. I will remark that the time-element in disciplines which are purely observational comes in predicting what you'll see NEXT, when you observe in a way that you haven't before. You're still placing a bet on the future, which is the key thing (not like historians). And what you're doing counts as an experiment. It is controlled and prospective, even if it isn't interventive. To take an example I just looked at, suppose you have a gamma-ray telescope in orbit, and while "stamp"-collecting you see two types of gamma-ray bursts. One is very short-- less than 2 seconds and quite often shorter; it looks like an event taking place in a volume much smaller than a standard star (2 light-seconds across). The other event is very powerful and lasts 20 seconds or more-- a lot larger than a star and more like the size of an exploded star (supernova). So you make a hypothesis that the long bursts ARE supernovas, and the short ones are some kind of small-object collision, like a black hole eating a neutron star. This theory makes testable predictions about the future when you figure out how to look at the remnants of short bursts (which nobody had when it was first made)-- it predicts no supernova radiation signatures, and it also predicts that these events are more common in regions where there are old stars (metal rich), since it takes time for neutron stars to form and spiral in. Then you do your observations and see if you predicted the future. If you see things opposite to the way you said they should be, you were wrong (in 2005 astronomers found their predictions confirmed, which means at least they weren't WRONG).
That's like seeing if you can predict the stock market tomorrow, not just explain it today-- it doesn't mean you're right, but if you're malpredictive it does mean you are wrong. Biology and astronomy are often strongly predictive of future observation in this way (to the extent that they aren't, they are simply extending classificational studies, and are stamp-collecting). History is not predictive at all (hence not a science), and political science and government are only predictive sometimes (to the extent that they have testable theories). The key is not whether or not you can influence an event-- all you have to do in science is to SHOW you understand it (KNOW it), and the best evidence of special knowledge, scientia, is prediction of some very complicated future behavior-- especially something you've never seen before (and nobody has seen before). Consider the starlight bending in passing right by the Sun in a solar eclipse, predicted by Einstein in general relativity to be twice the bending that even Newton's theory would allow. Astronomers LATER did that experiment, and they could influence neither the eclipse nor the Sun nor the light-- but they were looking in a different way for an odd effect that nobody had previously suggested. Your theory clearly has to do that, and in a way that anybody can use it to do, or you run a heavy risk of being labeled a pseudoscience. Of course there is that twilight zone of disciplines I wrote of, where you can clearly show that you can predict the future better than chance (in a way that would satisfy James Randi), but not in a way that you could put down in an algorithm or equation. That's a very tough place to be-- the arts that are only PARTLY reduced to science. Medicine is a good example. S  B Harris 00:27, 20 October 2011 (UTC)
 * I think the prediction/test versus explanation division is a useful one in explaining different methodologies, it is a differentiation I've come across before in hps literature. I'll work the section in that manner.  Fifelfoo (talk) 00:41, 20 October 2011 (UTC)
 * Fifelfoo, can you provide a couple of links to different revs that show the two versions you mention above, for section two? I've been looking in the history and am not sure I am looking at the right versions. Mike Christie (talk - contribs -  library) 01:33, 20 October 2011 (UTC)
 * I've edited difflinks into the above where I mention each different emphasis in section 2, here's the current compromise status of section 2 Fifelfoo (talk) 02:39, 20 October 2011 (UTC)
 * Thanks. I can see the justification for Sbharris's version, but I think yours is preferable.  I won't comment further since it seems (above) that Sbharris is also OK with your approach, so this is just expressing support. Mike Christie (talk - contribs -  library) 02:57, 20 October 2011 (UTC)
 * I'm okay with either approach-- I just don't want the formal sciences under "scientific method" (SM), because they use something resembling the scientific method only part of the time (when examining conjectures, which are the math equivalent of hypotheses). Consider Fermat's theorem: mathematicians can do an SM-like thing where they make a conjecture and test it by letting a computer run very large numbers, showing it holds to very large exponents, as far as you can go. But that's not a proof, because there are always larger numbers. In the physical sciences, though, this level of incomplete certainty is as far as you ever get. In math, you can go one step further and show your conjecture is true for all whole numbers, even ones you haven't tried yet, and now it becomes a proven theorem, not a theory. After that, no fear of black swans is necessary, any more than we're afraid that we'll find some distant planet where there aren't 100 centimeters to a meter. These things in math, once proven, are known to be true by definition, so they are logical redundancies or syllogisms in deduction (just not obvious ones). Thus, the "findings" are findings about language/logic, not about physical nature. So (like logic itself) mathematics falls at least partly outside the scientific method. The part that does look like scientific method is just the part that isn't obvious, because we're not smart enough to see immediately that 2 + 2 = 4, simply because we already defined 4 that way. S  B Harris 17:01, 20 October 2011 (UTC)

Comments. Overall I think the proposed outline is very good, but I have a couple of questions. -- Mike Christie (talk - contribs - library) 11:17, 20 October 2011 (UTC)
 * Do we really need a separate etymology section? If the etymology is given without commentary it will be too short for a section; if it's given with commentary it would more naturally be integrated into the history section.
 * I don't know much about the sociology of science, but the "Women in science" section jumped out at me as different in kind from the others. Are there other topics (e.g. "minorities in science") that are similar enough to be dealt with at the same time?  If so, could this section be broadened at all?  I'm not necessarily saying it should be broadened -- I don't know the field well enough to say that; I'm just asking the question.
 * Is Needham's Grand Question really intended to be given as much prominence as it appears from the outline? I would have thought it's a useful point to mention but not at all an organizing principle for the section.
 * Statistics seems too specific for a "main" link at the top of the formal sciences section.
 * Wouldn't scientific misconduct more naturally belong in the "Scientific community" section? And is it really worth a "main" link?
 * Etymology: I don't think we need it, and it certainly shouldn't be folded into history (it isn't the focus of the scholarly literature). My impression is that the position of women in science has been an especial focus of research publication and policy, to a level far beyond other equity concerns.  Needham's question is a useful one as it emphasises differences about modernity/pre-modernity—there's space for the other grand questions of the history of science there to contextualise the observations in the subsections, especially as the aim is to be very, very terse.  If you can think of a better main link, add it!  Statistics might be a remnant from when mathematics was organised as an "adjunct technique" to the inductive sciences.  Fifelfoo (talk) 22:02, 20 October 2011 (UTC)
 * I am leaning to the view that women are an especial group to note as well, so keeping the segment is a plus, I think. Casliber (talk · contribs) 23:22, 20 October 2011 (UTC)
 * I'm OK with all the above answers, but I'm sceptical about Needham; I guess I'll wait and see how that turns out. I'll add my name to the Satisfied list above. Mike Christie (talk - contribs - library) 23:57, 20 October 2011 (UTC)
 * Concerning the etymology I see no need for a special section but I think mentioning it is helpful for understanding how the concept developed. I think the reason specialist works don't spend time on it is that specialists already know it. But Wikipedia is not a specialist secondary source, and we should not assume we are writing for specialists.--Andrew Lancaster (talk) 07:53, 23 October 2011 (UTC)
 * There is a discussion starting on p. 40 of The Semantics of Science by Roy Harris (2005) (visible on Google Books) that might be a good source for some of this material. Mike Christie (talk - contribs - library) 12:41, 23 October 2011 (UTC)