User talk:Tnfiddler

"Deletionism v Inclusionism" or "MS Encarta v The Hitchhiker's Guide to the Galaxy"
Before people get too riled up, I enjoy a little humor and hyperbole. This is not an encyclopedia article, and I certainly am not aiming for dry, academic writing. I chose the header to try to express a metaphor as a starting point to look at two views. At first I saw it almost as a matter of an encyclopedia conducting a search of the knowledgespace of humanity. Under that metaphor I would have chosen "Depth First Search v Breadth First Search", using search in the slightly different sense of exploration.

I think in this search of knowledge, there are some optimizations both approaches would probably use and find little problem with. Vandalism, hoaxes, and false information would be clearly undesirable branches to follow, excepting a second-level or meta-analysis of some topic, e.g. Orson Welles's War of the Worlds or a famous defacement of the White House website. An indicator would be that neither of those exceptions was originally hoaxing or defacing Wikipedia itself. In the sense of these branches having a negative value, one would prune them first in a search.
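For the CS-minded, the pruning step can be sketched in a few lines of Python. The tree and its per-branch "value" scores are hypothetical, of course; the point is only that negative-valued branches (vandalism, hoaxes) come out before any traversal begins:

```python
# A minimal sketch of pruning before search, assuming hypothetical
# per-branch "value" scores (vandalism and hoaxes score negative).
def prune(tree):
    """Return a copy of the tree with negative-valued branches removed."""
    value, children = tree
    kept = [prune(child) for child in children if child[0] >= 0]
    return (value, kept)

# Hypothetical knowledge tree: each node is (value, [children]).
knowledge = (5, [(3, [(-2, []), (4, [])]),   # one hoax leaf pruned
                 (-1, [(9, [])])])           # whole vandal branch pruned

print(prune(knowledge))  # → (5, [(3, [(4, [])])])
```

Note that pruning a branch drops everything beneath it, even a high-valued leaf under a vandal node; that is the whole argument for doing it first.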

Now, the next step would be choosing which branches to explore first, and which to explore at all. Here is where the two views seem to separate. It seems that deletionism takes a depth-first approach. I see an emphasis in deletionists' comments on "raising the level" of articles and topics of clear importance. To me, it seems similar to finding a good branch and following it deeply.

The other tack would be that of inclusionism: fanning the search out before deepening, with new articles, stub articles, requests for articles, incomplete or raw articles, and less notable articles. This is the breadth-first strategy of fanning out into less weighted branches.
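The two tacks can be made concrete with a toy traversal. The tree and topic names below are invented, and the "budget" is a stand-in for limited editor and server resources; given the same budget, the two strategies simply spend it differently:

```python
from collections import deque

# Hypothetical knowledge tree: topic -> subtopics.
tree = {
    "root": ["physics", "history", "memes"],
    "physics": ["optics", "mechanics"],
    "optics": ["lenses"],
    "history": [], "mechanics": [], "lenses": [], "memes": [],
}

def dfs(tree, start, budget):
    """Depth-first: follow one good branch deeply (the deletionist tack)."""
    order, stack = [], [start]
    while stack and len(order) < budget:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree[node]))  # visit children left-to-right
    return order

def bfs(tree, start, budget):
    """Breadth-first: fan out into many shallow branches (the inclusionist tack)."""
    order, queue = [], deque([start])
    while queue and len(order) < budget:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

print(dfs(tree, "root", 4))  # → ['root', 'physics', 'optics', 'lenses']
print(bfs(tree, "root", 4))  # → ['root', 'physics', 'history', 'memes']
```

Same four articles' worth of effort: the depth-first run produces one polished chain down to "lenses", the breadth-first run produces stubs across every top-level branch.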

Here, it seems, we have a conflict over how to allocate resources, whether they be servers, eyeballs, or fingers used in the creation of Wikipedia. There is also the mirror side of Wikipedia being a manner of pre-computation for a user's eventual search for information. In a sense it is an intermediate, cached step in an end user's search. Raw, partial, or unstructured data is processed and ordered for the user, and using one of our two approaches exclusively would produce a vastly different structure and content from the other. The users of Wikipedia are expending resources in their search, and a large part of Wikipedia's image comes from the amount of resources a user must expend to find some information.

I chose MS Encarta as an example of a structure based on a strict space constraint. I could as easily have used the paper encyclopedia metaphor. With limited space, it is more efficient to constrain the search structure to a deeper search, thus the culling of less fruitful branches. Where the space constraints on Wikipedia fall, I have no idea. I would guess that space is less of an issue than the size, structure, and content of information.

Let's take one lower-valued resource hog as an example: vanity pages. I can certainly see how a vanity page may seem a waste of space. It is not really demanded by many users, if even more than one. I view the matter a little differently. I think a vanity page is valued less because it is more of a fact (assuming no other flaws than vanity) than information. Information is facts with meaning, and while a vanity page may have fine facts and meaning to its creator, it has little meaning to the rest of the users and hence conveys no information to them. Now, even should one have unlimited space, the page's inclusion conveys little advantage. When we look at the flip side, the user, then this page becomes an extra branch, a branch that should be culled in the preprocessing, a part of the clutter that wastes time and effort - the user's resources.

Perhaps one moderating factor would be a lack of linking. If a vanity page were of no relevance, it would not be linked to from anything else, and would not clutter a user's link crawl. If a vanity page were linked to from pages of no relevance, well, then there's a problem. As far as a user's crawling search goes, it is links from important pages to irrelevant pages that interfere. I think the editors of important pages are justified, and should be commended, in removing links to irrelevant pages.

As for wasted server space, perhaps a mechanism of page decay based on usage - by volume and variety - could naturally tag or bring unused branches to notice for some manner of action. Another possibility might be a tagging and tiering or weighting of articles. If one could not link from a first-tier article to a second-tier article, the link structure could be protected. Maybe users might have to opt in to allow second-tier articles to appear in a word search, removing clutter. I'm not necessarily an advocate of these techniques, but I don't like to complain without throwing out ideas - especially ones to try to ease the workload.
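The decay idea could be sketched like this - everything here is invented for illustration (the view log, the half-life, the threshold), but it shows how "variety" of viewers could be made to count for more than raw "volume" of hits:

```python
# A sketch of usage-based page decay, assuming a hypothetical view log
# of (day, viewer_id) pairs. Each viewer contributes the weight of
# their most recent visit, halved every `half_life` days - so distinct
# recent viewers ("variety") matter more than repeat hits ("volume").
def decay_score(views, now, half_life=90.0):
    """Return a freshness score; low scores would flag a page for review."""
    weight = {}
    for day, viewer in views:
        age = now - day
        w = 0.5 ** (age / half_life)
        # Keep only each viewer's strongest (most recent) contribution.
        weight[viewer] = max(weight.get(viewer, 0.0), w)
    return sum(weight.values())

# Three hits from one old visitor count less than two fresh visitors.
views = [(0, "a"), (0, "a"), (0, "a"), (80, "b"), (85, "c")]
score = decay_score(views, now=90)
print(round(score, 2))  # → 2.39
```

A page whose score fell below some threshold would not need deleting outright; it could simply be tagged or surfaced for human attention, which was the point of the suggestion above.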

I'm not an advocate of vanity pages, but merely chose them as an extreme example. I do think there is value in the less obviously valuable branches of an information search. Compared to the web in general, Wikipedia provides a way to structure and organize information, and I would like to see that advantage expanded to the fringes of knowledge. I would like to see Wikipedia further the evolution of structured knowledge, perhaps more of a Hitchhiker's Guide to the Galaxy. What if the entry for a small planet of hairless apes was deemed non-notable? Should it be sent to AfD, or does it rate expansion to "Mostly Harmless"?

Off to the pub for a beer.. or two. Tnfiddler 16:50, 27 October 2006 (UTC)

Requests for adminship/Elonka
Thank you very much for your support in my RfA. Unfortunately consensus was not reached, and the nomination was not successful. However, I do appreciate your comments, am still in support of the Wikipedia project, and will continue to contribute without interruption. Thanks again! --Elonka 19:31, 25 October 2006 (UTC)