Talk:Cache (computing)/Archive 1

Application Cache
Caching is also an important concept in application software design. Can any expert in this area also include references for it? I have summarized the concept from an application developer's perspective here. If someone is willing to collaborate, I can create a new page on Wikipedia for it.

On a completely unrelated note, the entire first paragraph is sloppy, has grammar issues, and needs to be redone. —Preceding unsigned comment added by 68.117.198.168 (talk) 02:47, 12 August 2010 (UTC)

Under disk cache:

"While CPU caches are generally managed entirely by hardware, other caches are managed by a variety"

This isn't a complete sentence, and needs to be filled out, at least to a stub. Could someone knowledgeable on cache complete this, at least to a full sentence? —Preceding unsigned comment added by Savingedmund (talk • contribs) 14:30, 3 September 2009 (UTC)

Pronunciation
I'm aware there's another section on this, but that was a question; this is a statement. I really do hate to say it, but I'm afraid it's pronounced 'Kay-sh' (one syllable), not "cash"; the "cash" pronunciation is an American mispronunciation. This needed to be corrected.


 * Can you find proof? I've just looked it up in 'The Oxford Dictionary of English Etymology' which contains the kæʃ pronunciation. If you can find something more authoritative (the text is copyrighted 1969 even though it was printed recently) then I'm sure we'll be happy to change. tommylommykins 15:26, 12 September 2006 (UTC)


 * It's a French word, and in French, it's pronounced "cash", so I have grave doubts that "kaysh" could possibly be more correct than "cash". --Doradus 03:08, 25 December 2006 (UTC)

Every time I see this word, I have the sensation that it's spelled wrong. I think it should be spelled caché, which means "hidden" in French (pronounced Kah-shay, two syllables, with stress on the last syllable). Sorry, I speak French. My name is Luiz Netto, I have an account on the Portuguese Wikipedia. — Preceding unsigned comment added by 4.243.212.219 (talk)

Well I speak French too, I've studied and worked in France and I've got a degree in French, and I can assure you that the word isn't 'caché' (which is an adjective, not a noun). 'Cache' (pronounced 'cash') is a loan word. The original expression in full in French is 'cache d'armes' (arms cache). 'Cache' literally means hiding place. Source: my Collins Robert French/English dictionary (published 2001). I have noticed a tendency, however, for US military spokesmen to pronounce it 'caché' (I suspect they are confusing it with 'cachet'). Iantnm 18:18, 19 March 2007 (UTC)

FWIW "kaysh" is a popular pronunciation in both Australia and New Zealand. My guess is that this word was seen a lot more than it was heard when it first arrived in Aus and NZ, and the pronunciation was coined to rhyme with similarly spelled words: cake, case, Cate, crate, craze. Although it might not fit in with the rest of the world, bear in mind that America has its own pronunciations of words that creep their way into acceptability in other nations. A-loo-mah-nim for one. 203.56.127.1 (talk) 04:21, 12 June 2009 (UTC)

Yeah I really think it's worth noting that basically all Australians pronounce it as keɪʃ. —Preceding unsigned comment added by 121.45.135.123 (talk) 14:04, 13 August 2009 (UTC)


 * Would it be acceptable to add the keɪʃ pronunciation [as colloquial]? After working in both Australia and New Zealand, I have personally found this pronunciation is much more widely used even if the "cash" pronunciation is official. 203.56.127.1 (talk) 01:30, 29 January 2010 (UTC)

Cache towns
Any reason why Cache, Utah and Cache, Oklahoma should be mentioned at all in the article? A search will return those as well, and you'll figure out pretty quickly which one is the article you want. Also, no offense intended to those who live there, but they seem to be pretty minor towns. (Same argument goes for Nokia -- but the connection between the company and the town should still be mentioned there.) A disambiguation page would do the trick nicely, I think.

Or let me put it this way, there are a lot of places on Earth. If we added articles on all of them and stuck a "there is also a town/lake/mountain called ..." disclaimer on every colliding title things would start to be pretty cluttered. (Especially with multiple entries on many articles.) There is a better mechanism to deal with ambiguity -- disambiguation pages.

magetoo 00:25, 19 January 2005 (UTC)

I boldly deleted the town references now.

magetoo 02:36, 28 Apr 2005 (UTC)


 * You do know that the search isn't always (or even most of the time) operational, right? The proper thing to do would probably be to create a disambiguation page somewhere. Tim Rhymeless (Er...let's shimmy)  03:10, 28 Apr 2005 (UTC)


 * I know that now. When I wrote the first two paragraphs, it worked (and I expected it to). I'll get on with the disambiguating business now. (Ok, maybe not, since it would break links. What do you other guys think?)  I still believe articles should be kept as "clean" as possible, and that we shouldn't have to work around a broken search function. magetoo 03:35, 28 Apr 2005 (UTC)


 * Just link to the disambiguation page at the top, saying for other uses of Cache, see Cache (disambiguation). Tim Rhymeless (Er...let's shimmy)  03:49, 28 Apr 2005 (UTC)

I don't understand this part:

"... modern general-purpose CPUs inside personal computers may have a dozen [caches], each specialized to a different part of the problem of executing programs".

What does this mean? Level 1 i-cache and d-cache, level 2 cache, etc. How do you get a dozen caches in the CPU? This is way too ambiguous, and not a very good summary of CPU caches. Can someone explain and clean this up? Otherwise I'm inclined to replace this with a cleaner summary.


 * A dozen is too large. When I wrote that I was thinking I would treat predictors and caches as a single kind of thing.  Since then, I've decided to treat them separately.


 * In any case, let's count all the caches in a K8 (Athlon 64/Opteron):
 * L1 Icache
 * L1 ITLB
 * L2 ITLB
 * L1 Dcache
 * L1 DTLB
 * L2 DTLB
 * Unified L2


 * Those are all the ones I know about. For completeness, here are the predictors in the K8:


 * Branch direction predictor
 * Branch Target Address Cache (I'll call this a predictor because I suspect it is allowed to return false information some of the time)
 * Return address stack
 * Data access stride predictor (issues hardware prefetches)


 * Some others that I don't think are in the K8 or P4 yet:


 * Store-load bypass predictor
 * Load hoist valid predictor (there must be a better name for this. Was in EV6.)


 * The difference between caches and predictors is that predictors are allowed to be wrong. Caches have coherence protocols (sometimes in software) so that the results can be trusted.

How about this: "... modern general-purpose CPUs usually have multiple caches. I-caches store CPU instructions that were fetched, often improving the speed of program loop execution (and obviating the value of loop-unrolling). D-caches cache data loading and storing, often improving the speed of repeated computations using values from memory. Unified caches share the job of both I-cache and D-cache by storing both instructions and data.

Furthermore, caches are considered L1 (i.e. level one) if they are the cache nearest the processing unit; L2 caches are the second nearest. It makes sense to layer caches in this way because the L1 cache is typically on the processor die and hence small but very fast. An L2 cache may be off the processor die and slower than L1 cache, but it is cost-effective to make it much larger. L2 cache is still between the processor and the memory bus bottleneck, so it is still much faster than the main memory." 64.157.7.133 22:01, 7 May 2007 (UTC)
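The layered L1/L2 lookup described in the proposal above can be sketched in a few lines of Python. The class names and latency figures here are invented for illustration (loosely echoing the ballpark numbers quoted elsewhere on this page), not drawn from any real processor:

```python
# Toy model of a layered cache lookup: each level is a plain dict, a miss
# at one level falls through to the next, and the faster levels are filled
# on the way back. Latencies are illustrative made-up figures.
MAIN_MEMORY_LATENCY = 80  # nanoseconds, invented

class CacheLevel:
    def __init__(self, name, latency_ns):
        self.name = name
        self.latency_ns = latency_ns
        self.store = {}

def access(levels, address, memory):
    """Return (value, total_latency) walking L1 -> L2 -> main memory."""
    latency = 0
    for i, level in enumerate(levels):
        latency += level.latency_ns
        if address in level.store:           # hit at this level
            value = level.store[address]
            break
    else:                                    # missed everywhere: go to memory
        latency += MAIN_MEMORY_LATENCY
        value = memory[address]
        i = len(levels)
    for level in levels[:i]:                 # fill the faster levels
        level.store[address] = value
    return value, latency
```

With an invented 1 ns L1 and 8 ns L2, the first access to an address costs 1 + 8 + 80 ns; a repeat access hits L1 and costs 1 ns, which is the whole point of the layering.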

Moved from bug report 835659. -- Tim Starling 00:44, Nov 5, 2003 (UTC)


 * The page links to another page under the term "LRU". But as it turns out, the page does not deal with "least recently used" cache strategies but airline components, which is probably not what the reader wanted to know. :-)

The opening of this article needs work: it gives the impression that cache means CPU memory cache. A cache doesn't need to be tiny; consider the web browser cache example that is given later. A cache can also be used to store values that are expensive to compute. 08:25, 21 Nov 2003 (UTC)

I agree, though a note on that page suggests that it should be folded into this one. Perhaps the CPU caching section should be a subsection of cache hierarchy in a modern computer. Iain McClatchie 23:39, 18 May 2004 (UTC)

That is a question for the whole Wikipedia: shouldn't a page as academic as this one on cache feature book and paper references? I mean the usual, non-Web references. I'd like to know if there's some written policy on this. Hgfernan 14:18, 14 May 2004 (UTC)

No-write allocation
No-write allocation redirects here but there is no mention of write/no-write allocate on this page. A couple of lines of description would help. Sjf 01:12, 7 February 2006 (UTC)
 * Done --My another account 19:19, 5 June 2007 (UTC)

Merge and stuff
I've merged CPU memory cache into here (since this is what the article talks about). I've also reorganised bits (since I renamed the first section "CPU memory cache"). Some information from both articles has been removed, all of which is listed here. From CPU memory cache,


 * Typically the fastest piece of memory on a CPU is internal flip-flops and buffers. These are not generally accessible to the programmer.  Next fastest is the architecture registers.  These are the registers that are accessible to the programmer (if he/she is programming in assembler). (This is irrelevant)


 * The level-1 cache might be between 32 kibibits and 512 kibibits, say. In a 32 bit architecture that would be between 1024 and 16384 words (each word being 32 bits). For example, an Intel 80486 has an 8-KB on-chip cache, and most Pentiums have a 16-KB on-chip level one cache that consists of an 8-KB instruction cache and an 8-KB data cache. ("word" is imprecise, and I don't think mentioning typical cache sizes is important anyway)


 * In general, a given instruction will require the CPU to fetch between one and two words of information, including the instruction itself and any data it may read from or write to main memory. If the required word is not in L1 cache, it is a "cache miss", and the word must be fetched from lower memory before the instruction can continue.  Because of the difference in speed between the CPU and main memory, a cache miss can result in a relatively long delay.  Much of the effort in cache design, then, is to first reduce the chance of a cache miss, and, second, to reduce the penalty incurred by a miss. (Some of this is already implied in the article, and x86 instructions are not multiples of the word size anyway)
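The miss-penalty point in that bullet is commonly quantified with the average memory access time (AMAT) formula; a small sketch, with invented figures, shows how even a modest miss rate dominates the average:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time: hit time plus the expected miss cost."""
    return hit_time + miss_rate * miss_penalty

# Invented figures: a 1 ns L1 hit, a 5% miss rate, and an 80 ns penalty
# give an average of 1 + 0.05 * 80 = 5 ns, five times the hit time.
```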


 * A cache will usually arrange data in "lines" (sometimes also called blocks); a line is a contiguous sequence of words starting at an address that is a multiple of the line size. A line might be 8 words, say. So we might have 1024 lines in our cache.  A "line" in main memory when accessed will end up as a line in the cache. Usually a cache will transfer data to and from memory in chunks of a whole line; this is more efficient than transferring each word individually.  This is both a source of efficiency and inefficiency: if we end up accessing all of the data in a line then bringing the line into the cache all at once will have been more efficient than bringing each word in individually; on the other hand, if we only access one word from the line then we paid the cost of having to bring the whole line in, which will be slower.  This inefficiency can be reduced if the requested or "critical" word is the first to be retrieved, though this approach requires more difficult design and hardware. (Feel free to add bits in where appropriate, but the line size is generally a multiple of the memory bus width (typically 64 bits), not the "word" size (16 or 32 bits), which is just jargon anyway)
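As a concrete illustration of the line structure described in that bullet, here is how an address splits into tag, index, and offset for a hypothetical direct-mapped cache, using the example sizes from the quoted text (8-word lines of 4-byte words, i.e. 32-byte lines, and 1024 lines):

```python
def split_address(address, line_size, num_lines):
    """Decompose a byte address into (tag, index, offset) for a
    direct-mapped cache with power-of-two line size and line count."""
    offset = address % line_size                  # byte within the line
    index = (address // line_size) % num_lines    # which cache line slot
    tag = address // (line_size * num_lines)      # identifies the memory line
    return tag, index, offset
```

Two addresses whose tag and index match land in the same cache line, which is why fetching one word drags its whole line in with it.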


 * Because of the speeds at which caches operate the cost in power to drive the electronics of each line to check its contents becomes significant. (What?)


 * Whenever a new line is brought into the cache, a line must be removed to make room for it. Because the most recently removed line is slightly more likely to be required than a line in the next level cache, some architectures implement a victim cache, which holds that line.  An L1 victim cache will be slightly slower than the L1 cache, but faster than L2. (I've never heard of a L1 victim cache, and it's easier to just renumber the caches)

And from cache,


 * ... may have a gigabyte of main memory which takes 80 nanoseconds to access. Its processor chip will typically have a second level (or L2) cache of 512 kilobytes which takes 8 nanoseconds to access, and a primary cache (L1) of 8 kilobytes which takes 666 picoseconds to access. (I think a computer with 1 GB of memory would have faster caches. I have 512 MB, a 32+32K L1, and 256K L2 running at processor speed)


 * In the above diagram, and in the hardware caches of computers, the indexes and tags are numeric, but for other kinds of caches they may be other kinds of values. In a browser cache, for instance, the main memory index and cache tags are URLs, the cache entry indexes are the special hashed filenames you may have seen if you look through the browser cache directory, and the memory and cache data entries are the contents of files themselves: GIFs, HTML files, etc. (I think the only thing which isn't implied is the bit about hashing)
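The "special hashed filenames" mentioned in that bullet can be sketched like this; SHA-1 here is just an example digest chosen for illustration, not necessarily what any particular browser actually uses:

```python
import hashlib

def cache_filename(url):
    """Map a URL to a deterministic, filesystem-safe cache entry name,
    roughly the way browser disk caches derive hashed filenames."""
    return hashlib.sha1(url.encode("utf-8")).hexdigest()
```

The same URL always hashes to the same name (so lookups work), and the hex digest avoids characters like `/` and `?` that would be illegal in a filename.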

I'm also wondering if we should add buffer to "see also" (a buffer stores data for access now, while a cache stores data for access later).

I've pondered moving this page to Cache (computer science), but decided not to, as a very large number of pages link to here. Feel free to do that too.

--Elektron 12:32, 2004 Jun 5 (UTC)


 * I actually think it's fine where it is (unless the page gets twice or three times as large). This is not just a computer science concept, so I think we can leave everything here. It may need a bit of a reworking to make things flow better though. Dori | Talk 12:40, Jun 5, 2004 (UTC)

Another edit
I think "In modern days, cache most often refers to computers or computer programs storing information that is accessed frequently in a special place in order to increase performance." is superfluous, since it is mentioned in the following paragraph. Also, alternate meanings usually are separated by a. I also tweaked the second paragraph so it reads better (I think). --Elektron 10:52, 2004 Jun 12 (UTC)

pronunciation question
Could anyone help me verify if there is any other way to pronounce the word "cache"? I think I have heard it pronounced not as "cash" but in a way that sounds similar to "sachet". Thanks


 * Well, I've used (and heard) "kay-sh" in the past. magetoo 02:36, 28 Apr 2005 (UTC)


 * I guess it depends on whether we should give the "correct" pronunciation, or document all common practice. The "sachet" pronunciation comes from confusion with a different word, caché.  The "kaysh" pronunciation, I imagine, comes from the application of English rules for the silent "e" at the end of the word (making the "a" long), even though the word comes from French which has no such rule, and even though the "ch" is still pronounced as "sh".  (I suppose if it were pronounced "kaytch" then at least it would be consistent.  :-)  --Doradus 02:50, 24 December 2006 (UTC)

Operation section
Iain,

It appears you've removed an entire section based solely upon readability without providing a clear explanation in either the discussion or on your talk page. Truthfully, 'readability' is rather subjective and can depend upon numerous factors such as fog index, audience, etc. I gather your simple comment, 'readability', implies you found the presentation of the content confusing. However, I don't necessarily agree, since the content contained separated paragraphs, complete and clear sentences, and provided bullets to list various classifications of things. Furthermore, the section 'Operation' that I created was intended to give a high-level/theoretical discussion in simple terms on the topic of cache as a lead-in to the existing content, which was rather practical and quite focussed towards particular applications of cache, hence the addition of my section title, 'Applications'. It appears you marginally agreed with the description I provided in this section; however, the new, narrower content under a broad topic such as cache may be inappropriate in this article.

Could you provide a more elaborate description of why you thought the theoretical description of cache was 'unreadable' and necessitated replacement paragraphs that appear to cite specifics that don't completely cover the topic of cache? Perhaps simple modification, rather than the common Wikipedia faux pas of deleting useful content, would have been more appropriate, particularly without justification for deletion of nontrivial content. As you are aware, contributors to Wikipedia are often proud of their work and may feel strongly about their content. Wikipedia is enriched through the various perspectives of numerous contributors rather than a single perspective of one frequent contributor. I'm simply looking for an explanation for the deletion, since the content I had provided (all of the 'Operation' section) was largely academic in nature and conveniently referenced other existing articles on the subject where appropriate (several of which you have contributed to also). The new content refers to fewer techniques and industry advances than before. I welcome the comments from others that grasp this topic and discussion.

Should theory not be present in the cache article? Computer cache is a broad topic, which thousands of programmers have found numerous ways to implement. Simplifying the operation from theory to a specific practice seems to be the reverse of what we may need here. 4.152.240.80 07:39, 5 Mar 2005 (UTC)

I appreciate that you're proud of your contribution. I too have had material that I added to another article (Hubbert Peak) reverted or rewritten in a manner which I thought made it less factual and worse.

I think that the idea of adding an "operation" section is a good one. But the previous entry had some weaknesses, and in the process of fixing those I rewrote most of it. Here are some of the things I had trouble with:


 * "Caches, by definition, provide increased read and/or write response times on one or many data stores." No.  Caches reduce read response times (latency).  Writes don't have a concept of latency... caches bundle writes to reduce writeback bandwidth, and all you need for this is a store data buffer.
 * "the update policy and integrity of the cache must operate to support this behavior." The integrity of the cache must operate?  This sentence makes no sense to me.
 * "Frequently, the volatility of the cache is a factor that is also considered while defining the update policy." I think I know what you're trying to get at, but this isn't the way to write it.  First introduce the various write policies, then explain the consequences of those policies.  A write-back cache has less write traffic to the backing store than a write-through cache.
 * Your operation section didn't actually say how caches work, so I put that in first.
 * Poor word choice: "volatility", "selected", "writeable", "key methods", "transcend", "persisted into", "modified resource", "liveliness", "lifetime", "real-time".
 * "A cache without an algorithm or consideration to coherency is merely a buffer." This is a provocative distinction, but I'm not sure I agree with it.  A store data buffer in a CPU, for instance, is always kept coherent.

Then there was the flowery verbiage:


 * "Ostensibly, this aspect of computing will only advance further as the 21st century unfolds to meet the growing data demands of end users and other automated systems." -- zero factual content
 * "As such, the cache (usually) must maintain liveliness and integrity to remain an authoritative source to accurately represent the backing data store."
 * "Consequently, the lifetime of the cache is usually indefinite or may be terminal at critical events in the system (e.g. changes in the liveliness of the backing data store(s))."
 * "Caches commonly exist in numerous forms, yet, 2 basic types are the LRU Cache and the LFU Cache." -- Here you are talking about replacement policy.

Theory is fine, when it explains something. Iain McClatchie 09:03, 7 Mar 2005 (UTC)
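Since replacement policy comes up repeatedly in this exchange (LRU and LFU are named in the last bullet), a minimal LRU cache sketch may help later readers; the class and method names are invented for illustration. On overflow it evicts the entry that has gone longest without being touched:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: on overflow, evict the entry
    that has gone longest without being read or written."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                      # cache miss
        self.entries.move_to_end(key)        # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry
```

An LFU variant would instead track an access count per entry and evict the least frequently used one; the surrounding machinery is the same.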


 * I had doubts that you would respond, kudos for referring to specific text of the original.


 * As a reference to other readers
 * Original text
 * "Rewrite"


 * You've managed to conjure up some reasons - of course most of which are no basis for the extermination you administered to the previous text. The previous text was structured.  You've taken a notably good idea and refined it to apply to a very specific implementation of cache - quite the opposite of the purpose this section of the article should serve.  The simple question is: can you describe cache without describing it in terms of a specific application or use?  You've been unable to do that so far.  As far as pride, admittedly I do have some pride in what I write (as I'm sure you do too), but note that I haven't made the customary wikipedian response of reverting my text back as it was prior to your attempt to address this (new for you) concept. I still very much agree with what I wrote; it does need some minor cleanup - which I indicated in the history page - as does your take on this section. The original text belongs rather than your replacement.  My responses (in order):


 * "I too have had material that I added to another article (Hubbert Peak) reverted or rewritten in a manner which I thought made it less factual and worse." Are you citing this to present it as a reason to continue this self-absorbed and ineffective behavior?  Did you know that is a logical fallacy?
 * My mistake, you are correct; response times are improved by using caching. This is what I intended to convey, caches have been long known for doing this.  However, you assumed that the text said that inevitable delays caused by write activities are observed by the client.  I never was specific enough for you to infer that.  You've made an erroneous conclusion in the absence of any evidence to support that.  A simple sentence add or modify would have cleared that up - if it was necessary; reread the paragraph you'll find that it was not the purpose of the paragraph to make this specific distinction and give that level of detail so early on in the explanation.
 * "...the update policy must operate to maintain integrity potentially compromised by this architecture." Simple word-smithing, minor word edit. You obviously feel what does/does not make sense to you is a basis for all understanding.  Did you forget your audience?
 * No you missed the point completely. It's not about bandwidth and traffic density between the cache and the backing data store(s).  Volatility is something quite different.  Yes, the selected cache algorithm and update policy has consequences.  Volatility can coerce the selection of the update policy but does not drive it.  You'll need to learn more about cache volatility.
 * "This isn't the way to write it." How would you know how to write it if you don't know what cache volatility is?
 * As you say, first introduce the various write policies. It did that, the 3 bullets draw the readers attention to three of the most basic write policies quickly yet not without providing a lead in to these write policies.
 * Operation section - how caches work. Bear in mind that 'Operation' does not imply low-level functionality.  If you believe that you're the wrong person for this article.  Rather, operation should give a high-level concept of behavior, responsibilities, criteria, environmental conditions, etc.  Realize the word you're trying to describe here: cache.  The bare naked concept, nothing so specific as cpu cache, in a word, theory.  BTW, nice touch ups on the CPU Cache article.
 * Word choice. Vocabulary. Abstract thought and terminology. There's nothing wrong here. Simply put, this is the lexicon of theory.
 * I'm surprised you haven't thought of this 'provocative concept.' Try considering it more. I hope to find you have added a paragraph on it a month or so from now. If your epiphany is realized in the next few weeks, give it some time; these things need nurturing to become effective diction.  Adopt this same approach as you learn about cache volatility.
 * Flowery verbiage. Interesting description. Oh but don't be alarmed, the broadening of one's vocabulary only serves to educate and enrich the mind.
 * Factual content. You're correct in your assessment.  This particular sentence was a forecast, a prediction, and thus contained very little fact other than the observation of current trends in the industry.  To disregard these trends and misdirect the reader toward the specifics of cpu cache is presumptuous, short-sighted, and as a result renders no useful understanding or preparedness to the reader.  The reader is coming to the cache article looking for a broad, high-level concept, various implementations of cache, examples, pitfalls, responsibilities, etc.  This succinct sentence empowers the reader to begin to grasp and monitor these trends.  Learning from trends leads to technological advances.  Nothing wrong here.
 * No comment? Reads good to me too.
 * No comment? Reads good to me too.
 * Very good! Replacement policy is the $1000 concept here and is one name, albeit common, yet referred to in this article as an update policy.  You've mentioned write policy also.  There are several in the industry. It depends on who you talk to, but they all refer to the same - a cache algorithm - one that solves the prototypical cache consistency problem, a.k.a the cache coherency problem.


 * Cache. Before you attempt another edit on this article, you should reread your replacement text for the Operation section.  If you had a background in engineering and theory, you would quickly realize that your text is directed and focused on a particular implementation of cache.  You've managed to leave out some crucial issues which my original text addressed.  Granted, it didn't provide a detailed discussion on every aspect of cache.  But that's not the job here.  A simple overview of the inner workings, the technical aspect of many approaches to caching.  As such, this article would barely accomplish anything if it presented a single perspective leading readers as prey to their defeat should they depend on the limited content of cpu caching applications.  True, write-back policy as you call it (a.k.a. a delayed-write policy) is applicable to nearly all caches, as is the write-through approach for cache algorithms.


 * Try not to be so naive by assuming that the presentation of a simple focused explanation covers the broad concept and implications of caching. Secondly, after noticing your user talk page, I see that you have a plan for several articles on wikipedia, and it appears you've established it single-handedly (are you presuming ownership?).  A plan is always good, but a single-minded plan is self-defeating in light of the purpose of wikipedia - that of being a source of knowledge submitted by the masses.  A single-minded approach does not lend itself to cooperation, collective research, combined knowledge - it excludes it!  I've observed your activity in the wiki histories of some of these other related topics and it appears I'm not the only one you've stomped on through your careless cut-and-replace behavior.  It also appears that some of these other wikipedians/contributors have been offended by this - some have responded in restraint (in adherence to the Wikimedia Code of Conduct).  I do seem to be someone (if not the only one) who recently gave any care to the quality of the content of this system as a consequence of your destructive actions.


 * Realize that you're probably faced with a limited background advanced only by the details of cpu caching. Perhaps you may wish to consider the collective contributions from others before "fixing and rewriting" as you describe it.  I doubt caching is the only area this approach applies to. 4.152.216.127 04:59, 11 Mar 2005 (UTC)

degrees of evaluated punani
I'm right in thinking that sentence I removed by my recent revert didn't mean anything, aren't I? Paranoia is making me think that there is some small chance punani might mean something other than its slang meaning... --Harris 12:36, 7 December 2005 (UTC)
 * Yes, bravo on your revert. 70.104.179.26 05:48, 14 December 2005 (UTC)

Pronunciation changed
Recently, an anonymous user changed the pronunciation from "/kæʃ/" to "[BE:kaʃ, AE:kæʃ]" (diff). I reverted it, but the pronunciation was re-added with the edit message "on a computer science congress we just found out and discussed these different pronunciations in british and american english".

Is there any proof supporting this, at all? The issue has been discussed above, and even The Oxford Dictionary of English Etymology appears to say that it is pronounced kæʃ; cache also agrees about the UK English pronunciation. I am going to revert the change again, and hope that if it changed again, next time someone would refer to a good enough source. -- intgr 11:33, 15 December 2006 (UTC)

Actually, the issue is very simple, and I can't imagine how there could be an edit war over it: in the "Received Pronunciation", which appears in most dictionaries, this short /a/ should be pronounced [æ], but many if not most of British speakers realise this phoneme as [a]. So I think that both pronunciations must be mentioned for BE. KelilanK 04:22, 8 February 2007 (UTC)

It is definitely pronounced CASH
Hi guys,

Sorry, I made an edit on the main page and forgot to check the "talk" section. My bad.

The original person that tried to correct us about the pronunciation was probably Australian. For some completely bizarre reason which no Australian seems to be able to tell me why, they have decided to change the way it is pronounced to KAYSH. Being from England, this really does get up my nose, as we have ALWAYS pronounced it CASH and think it's hilarious every time an Aussie decides to pronounce it their way. It's obviously frustrating too.

Microsoft themselves, including Skillsoft - their official internet learning partner, pronounce it CASH throughout their seminars, training courses and conferences. If they pronounce it that way, I somehow don't think that the Australian I.T. industry has got it right. I don't think it has anything at all to do with "British" or "U.S." English, it's just the dumb Aussies. No offence guys - it's not your fault really, that's just how you've been incorrectly taught. I wonder how or when it actually started to be mispronounced. Puzzling.

Anyway, I have audio evidence to back this up, so if someone can tell me how to add this to the page, I will VERY happily do so.

For some reason Australians also mispronounce crèche too, saying KRAYSH rather than KRESH. What is it with them and adding Ys to everything? Funnily enough though, one pronunciation I have agreed to adopt is that for our blessed Routers. In the U.K. and the U.S. we juggle between ROWT and ROOT. Unfortunately, ROOT over here actually means something rude so I used to get a lot of laughs when I first got here and started saying it like that. But when I found out it meant ‘getting laid’, you can understand why I decided to change. But I refuse to change cache on principle.

Humph! At least this is one thing I agree with the U.S. on. —The preceding unsigned comment was added by 0s1r1s (talk • contribs).

Australians elongate the 'A' sound in a lot of words. It's a by-product of the strong Irish influence. There is a Wikipedia article that explains this. I pronounce it 'KAYSH' and so does everyone I know. Are you saying that speaking with a regional accent is wrong? 203.143.238.107 (talk) 03:14, 2 May 2008 (UTC)

Microsoft may be an authority on many things (anticompetitive business practices, for example) but I wasn't aware that pronunciation was one of them. Pavium (talk) 11:17, 3 May 2008 (UTC)

Data Plurality
Just wanted to note a few grammatical mistakes I found. I think I corrected them all, but I could have missed a couple.

The word data is plural. As such, phrases like "the data is" should be "the data are," etc. The singular of data is datum. —The preceding unsigned comment was added by 24.161.198.139 (talk) 00:46, 18 March 2007 (UTC).


 * You are probably right. But could you fix the data article first? --75.37.227.177 08:03, 22 July 2007 (UTC)

memory or disk
The "Operation" paragraph says: "A cache is a block of memory for temporary storage of data likely to be used again. The CPU and hard drive frequently use a cache, as do web browsers and web servers."

But web browsers don't use a block of memory for their web cache: they typically use the hard disk. Furthermore, I don't have a clue what a "nugget" of data means. Ogy403 20:38, 18 August 2007 (UTC)

Read-ahead and thrashing
Perhaps someone could write a paragraph about read-ahead as a common form of caching? Also, I think it's worth mentioning that problems such as random access and inadequate cache size can cause thrashing and actually reduce performance instead of increasing it. FloydATC 12:49, 6 September 2007 (UTC)

Cache vs buffer remark
Bold text is mine, it was recently reverted: "A cache acts often also as a buffer, and vice versa."

I would like to explain in-depth this thought.

Let's look at the very old concept of 512 byte disk blocks. A 512 byte buffer (not yet a cache: it wasn't kept after I/O for future use) exists somewhere in the operating system, because fwrite may potentially read/write byte-by-byte. Then what is such a buffer? What purpose does it serve? Well, first of all, it's there because the CPU cannot easily address a disk location as a RAM location (impossible via RAM-like access address bus/data bus), so it's just easier to do "the slow stuff" in one bulk. OK. But the second reason? When you want one byte from the disk, it gives you the full 512 bytes almost as fast (tadam! implicit caching! with prefetch!). Without the 512b buffer, your byte-by-byte disk I/O would not necessarily be sequential.

Hence: even such a basic buffer, used long before the invention of the cache, greatly increases I/O performance.

This applies to any buffer (a) that is transferred in one bulk operation and (b) for which sequential access is faster than random access.

So why the revert?

--Kubanczyk 15:45, 21 September 2007 (UTC)
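The bulk-transfer effect described above can be sketched in a few lines of Python. The reader class, names, and the 512-byte block size are purely illustrative (not any particular OS's implementation); the point is that one slow bulk read satisfies hundreds of subsequent one-byte requests:

```python
import io

BLOCK = 512  # classic disk block size used in the discussion above

class BufferedByteReader:
    """Serves one byte at a time, but transfers whole blocks underneath."""
    def __init__(self, f):
        self.f = f           # underlying file-like object (binary)
        self.buf = b""       # the 512-byte buffer
        self.pos = 0         # read position within buf
        self.bulk_reads = 0  # how many slow bulk transfers happened

    def read_byte(self):
        if self.pos >= len(self.buf):
            self.buf = self.f.read(BLOCK)  # one slow bulk transfer
            self.pos = 0
            self.bulk_reads += 1
            if not self.buf:
                return None  # end of "disk"
        b = self.buf[self.pos]
        self.pos += 1
        return b

# 1024 byte-by-byte reads cost only 3 bulk transfers (2 with data + 1 for EOF),
# which is the implicit caching / prefetch effect described above.
r = BufferedByteReader(io.BytesIO(bytes(1024)))
while r.read_byte() is not None:
    pass
print(r.bulk_reads)  # prints 3
```

Without the buffer, the same loop would issue 1024 separate device transfers, which is exactly the speed-up being argued about.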


 * Yes, this is true for the operating system page cache (disk cache) and the CPU cache, but that is pretty much where the similarities end. In programming, temporary areas of memory that are filled by another subroutine or non-storage peripheral devices (e.g. network interfaces) are ubiquitously called buffers; there is no way in the universe you could apply the definition of cache to those uses of the word buffer.
 * I don't even agree with the claim that "A cache acts often also as a buffer" (for varying levels of often); a cache in programming more often refers to an associative store of immutable pre-computed objects that are dropped explicitly when out of date, not modified in-place.
 * So yes, caches and buffers can both be applied in a single situation, but this is rarely the case in practice. -- intgr [talk] 16:21, 21 September 2007 (UTC)


 * Glad to read that.
 * Hmmm, you are right, "there is no way in the universe you could apply the definition of cache to those uses of the word buffer." I've never tried to do this, no no. But if I did, I would have succeeded (in this universe). Because both "areas of memory that are filled by another subroutine" and "non-storage peripheral devices (e.g. network interfaces)" fall perfectly under conditions (a) and (b) above. There is an essential speed-up because of buffers, as byte-by-byte transfers would be much slower. I don't claim those are caches. I claim they prefetch and thus speed things up, which in my mind means they act as caches.
 * As a side note to "a cache in programming more often refers to an associative store..." - sanity check: when encyclopedia readers come here and take a glance at buffer and cache, will they be able to comprehend this sentence at all? Having worked as a developer for some time, I know you mean the GoF Flyweight Pattern, but... it is not proper/common/expected/helpful to call it a cache in this context.
 * I've been thinking of another possibility of describing buffer vs cache, but it sadly requires the wiki reader to take a 100% programmer's perspective:
 * a buffer is a memory area that your program (i.e. your VAS) allocates and uses, for bla bla bla...
 * a cache is something invisible to you; you suspect it exists since your data arrive quickly, but you don't really care
 * hmmm, come to think of it, do you see any examples of a "cache" that is visible to users? This article doesn't mention invisibility at all. And AFAIR "cache" in French has something to do with invisibility?
 * (With the above two definitions, one man's buffer may be a vital part of another man's cache - but not otherwise)
 * --Kubanczyk 17:10, 21 September 2007 (UTC)


 * Except that you just redefined "cache", this definition is not used in practice and is original research at best. -- intgr [talk] 17:51, 21 September 2007 (UTC)


 * Yeah, it was rather a loose thought (not a new lead section). --Kubanczyk 19:14, 21 September 2007 (UTC)


 * Sorry, I was under the impression that you were going to say that a buffer is a cache on the article. :)
 * The new explanation sounds much better than it was before. -- intgr [talk] 01:00, 22 September 2007 (UTC)


 * No need to be sorry; at first I did have such evil intent. It was only recently that I realized: "increasing performance via prefetch" != "caching". Glad we've landed on the same page. --Kubanczyk 08:50, 22 September 2007 (UTC)

Pronunciation
I don't understand IPA so I'm not clear on what the opening sentence is saying with regard to pronouncing Cache. Is "cash-ay" really an alternative pronunciation (I always thought that it was "cash")? - hydnjo talk 00:58, 13 October 2007 (UTC)


 * Yes, this has already been reverted by an anonymous editor. -- intgr [talk] 16:55, 13 October 2007 (UTC)

Why the stress mark? IPA does not use these with monosyllables, as it is redundant. I have removed it. — 217.46.147.13 (talk) 09:03, 2 May 2008 (UTC)

Caches and blackout
How about a few lines about caches and their failure during a blackout? I imagine something like: Due to its nature, data that is in a cache during a blackout is inevitably lost. --phil (talk) 22:52, 9 January 2008 (UTC)


 * Huh? What do blackouts have to do with caches? There is nothing to stop a cache being stored on permanent storage (i.e. a hard disk). -- intgr [talk] 05:53, 3 May 2008 (UTC)

Prescriptivism vs descriptivism
Given the amount of confusion over the proper pronunciation, isn't it sensible to mention /keIS/ in the main body, not in a footnote? And calling it a mispronunciation is rather loaded; 'spelling pronunciation with no etymological justification' or similar is both more accurate and less prescriptive. It's clearly not unknown if the OED comments on it (I don't have a subscription, nor a dead tree copy myself, so I can't check that reference right now); the AHD actually has /keIS/ as the only pronunciation given. So, mispronunciation or not, it's evidently not uncommon. —Preceding unsigned comment added by 79.72.220.239 (talk) 22:31, 9 August 2008 (UTC)


 * Per Wikipedia's verifiability policy, you can mention it if you cite a reliable source for it (or preferably a few). -- intgr [talk] 13:30, 10 August 2008 (UTC)


 * As I said, the AHD gives /keIS/ as the only pronunciation (provided I'm reading the key to their system correctly). I don't have access to any major dictionaries handy, but the wording of the current footnote suggests that the OED mentions it was well, though as a mispronunciation. —Preceding unsigned comment added by 79.72.134.250 (talk) 21:12, 15 August 2008 (UTC)


 * I don't think you are reading their key correctly - The AHD cites cache as "kăsh", which according to their pronunciation key corresponds to . I agree that the wording of the footnote is misleading, and have corrected it. The OED offers only (current) and  (obsolete). --Peter Farago (talk) 18:47, 18 October 2008 (UTC)


 * I have updated the article to reflect the standard pronunciation /kæʃ/, with four citations from major English dictionaries (both British and American) in its favor. I have also noted /keʃ/ as an Australian regionalism. intgr, although I have not been able to find a reliable source, I have documented numerous attestations from forums and blogs. The fact that I have been unable to find a dictionary citation for the Australian variant is not entirely surprising - Australian English is usually considered dialectal by authors of American and British dictionaries. The best known dictionary of Australian English, the Macquarie Dictionary, is not available for free access online, but for anyone who has access, this would be the most reputable source to consult. --Peter Farago (talk) 17:52, 18 October 2008 (UTC)

cache
how to delete cache on my computer —Preceding unsigned comment added by 196.207.33.197 (talk) 18:27, 8 January 2009 (UTC)

Different pointer for top level?
Since there are so many long-standing uses of the word cache, and its use in computer science is just one of them, shouldn't the entry "Cache" direct to the disambiguation page (http://en.wikipedia.org/wiki/Cache_(disambiguation)) rather than to the computer science cache page? That is my proposal. Trasel (talk) 17:55, 31 January 2009 (UTC)

if we made the whole memory into cache memory, then what would happen...???? —Preceding unsigned comment added by 210.212.45.144 (talk) 17:19, 30 July 2009 (UTC)

if we made our whole memory into cache then what will be the effect ...
what will be the effect of making the whole memory into cache memory ..??? —Preceding unsigned comment added by 210.212.45.144 (talk) 17:24, 30 July 2009 (UTC)

It would be too costly. There are always trade-offs between size, cost, and latency in memories. —Preceding unsigned comment added by 134.96.242.63 (talk) 08:53, 12 February 2010 (UTC)