User talk:One cookie

Welcome

Hello and welcome to Wikipedia! We appreciate encyclopedic contributions, but some of your recent contributions do not conform to our policies. For more information on this, see Wikipedia's policies on vandalism and limits on acceptable additions. If you'd like to experiment with the wiki's syntax, please do so in the Sandbox rather than in articles.

If you still have questions, there is a new contributor's help page, or you can write below this message along with a question and someone will be along to answer it shortly. You may also find the following pages useful for a general introduction to Wikipedia. I hope you enjoy editing Wikipedia! Please sign your name on talk pages using four tildes ( ~~~~ ); this will automatically produce your name and the date. Feel free to write a note on the bottom of my talk page if you want to get in touch with me. Again, welcome!  -- T y l e r d m a c e  ( talk  ·  contr ) 17:26, 17 November 2008 (UTC)
 * The five pillars of Wikipedia
 * How to edit a page
 * Help pages
 * Tutorial
 * How to write a great article
 * Manual of Style
 * Policy on neutral point of view
 * Guideline on external links
 * Guideline on conflict of interest


 * In November of 2008, I added  to the list of notable people (at the time, "notable inhabitants") at.
 * This edit was swiftly reverted, and the page for Ulm left bearing no acknowledgement of the man, nor of the global renown in which the relationship between the man and his city is held: by the dispassionate fist of the Wikipedian body politic, this will likely endure.
 * I made the edit when I was young, and hopeful. I feel it important for me to make a public proclamation that I did not, and to this day I do not, regret having made my edit.
 * One cookie (talk) 01:01, 11 February 2024 (UTC)

Blackwell
Please stop writing a factually incorrect screed on the Blackwell page. Simply ignoring the numerous problems with your rewrite and reverting it does not make those errors any more true. Here are a few examples of how misinformed you are:

"the RTX 4090 most powerful 40-series GPU, featured an A102 die, a lower-binned variant of the A100 die, with complete A100 dies having been reserved for professional-tier GPUs, accelerators and other datacentre products" Aside from the terrible grammar, the RTX 4090 uses the AD102 die, not A102, and no AD100 die exists, as there are no dedicated dies for Lovelace server products as there were for Ampere. A100 is an Ampere-based server product that is incorrectly described as a die. Secondly, professional Lovelace products like the RTX 6000 Ada use AD102 just like the RTX 4090, not a separate die. This is also not particularly relevant to an article discussing how the RTX 50 series will use separate GB20x dies from the GB100 Blackwell datacenter products.

"The topology of B200 is similar: it also comprises separate processing and memory chiplets, but in order to avoid the decreasing yield of fully-functional chips as die size increases and the reticle limit is approached" "Chiplets" have a very specific definition: they enable entirely disaggregated functions within an integrated circuit, such as AMD's CCDs in Ryzen processors containing cores and cache while a separate I/O die provides PCIe lanes and the memory controller. Nvidia's approach of connecting two GB100 dies is still fundamentally monolithic: two dies that could each function on their own are joined together to make a bigger die. A single GB100 die could function on its own, which is not the case with a truly disaggregated chiplet approach. Secondly, HBM stacks are not considered "chiplets", in the same way that GDDR6 video memory is not "chiplets". Referring to semiconductors with slang terms like "chips" is deeply unserious.
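The yield argument in the passage quoted above (fully functional chips become rarer as die size approaches the reticle limit) is often illustrated with the classic Poisson defect model. This is a minimal sketch under that model; the defect density and die areas below are invented for illustration and are not figures from Nvidia or the article:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies expected to be fully functional under a simple
    Poisson defect model: Y = exp(-D0 * A). The chance a die contains
    zero fabrication defects falls exponentially with its area."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

D0 = 0.001  # illustrative only: 0.1 defects per cm^2

mid_size_die = poisson_yield(400, D0)       # ~0.67 of dies fully functional
reticle_class_die = poisson_yield(800, D0)  # ~0.45 of dies fully functional
```

Under this toy model, two 400 mm² dies are each considerably likelier to be defect-free than one 800 mm² die, which is the motivation the quoted sentence alludes to for joining two dies rather than building one reticle-limit die.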

Your extensive writing about Nvidia server designs reads more like an advertisement, which is in violation of the WP:PROMOTION policy. Again, the design of servers is not relevant to the focus of the article, which is the Blackwell architecture itself and how it works. The "Processor features" section makes explaining the architecture less clear, as it is not broken down into separate elements within an SM, such as CUDA cores, RT cores and Tensor cores, to explain how each has been changed with the Blackwell architecture. You need to actually explain why writing paragraphs about Nvidia's server designs, and not about the architecture in an easy-to-understand manner, is a better approach. You have simply insisted on it, with the main source being an Nvidia investor presentation. SHJX (talk) 15:39, 8 April 2024 (UTC)


 * The point of a message on a user's talk page is to keep a threaded record of a discussion. That is why I wrote a message on your page; this message you've written should have been a reply to that.
 * "the RTX 4090 most powerful 40-series GPU, featured an A102 die, a lower-binned variant of the A100 die, with complete A100 dies having been reserved for professional-tier GPUs, accelerators and other datacentre products"
 * Aside from the terrible grammar, the RTX 4090 uses the AD102 die, not A102, and no AD100 die exists, as there are no dedicated dies for Lovelace server products as there were for Ampere. A100 is an Ampere-based server product that is incorrectly described as a die. Secondly, professional Lovelace products like the RTX 6000 Ada use AD102 just like the RTX 4090, not a separate die. This is also not particularly relevant to an article discussing how the RTX 50 series will use separate GB20x dies from the GB100 Blackwell datacenter products.
 * The "terrible grammar" is a single missing comma - punctuation is not grammar. There is a way of dealing with this which isn't a revert to an earlier version of a page: edit the text.
 * Before I made edits to the page, it had sentences such as "the Blackwell architecture was leaked in 2022 and B100 & B40 was officially revealed in October 2023 in an official Nvidia roadmap in an official Nvidia investor presentation for investors" on it. THAT is terrible grammar. I repaired all of these problems. The current version of the page includes that sentence again, because of you.
 * Your extensive writing about Nvidia server designs reads more like an advertisement, which is in violation of the WP:PROMOTION policy. Again, the design of servers is not relevant to the focus of the article, which is the Blackwell architecture itself and how it works. The "Processor features" section makes explaining the architecture less clear, as it is not broken down into separate elements within an SM, such as CUDA cores, RT cores and Tensor cores, to explain how each has been changed with the Blackwell architecture. You need to actually explain why writing paragraphs about Nvidia's server designs, and not about the architecture in an easy-to-understand manner, is a better approach. You have simply insisted on it, with the main source being an Nvidia investor presentation.
 * The reason I quoted facts and figures relating to Nvidia's server products is that those are the facts and figures available from Nvidia. They haven't released much information specific to the processors themselves and anything else currently available in news articles is assumption. Codenames such as "GB202", etc, are reported by outlets like Videocardz on the back of accounts on Twitter which leak context-free details, but they can't be verified and have not been announced, or mentioned, by Nvidia. Unless the leaks are of documents which have verifiably come from Nvidia unedited, even if the leaker has been right about things in the past, and even if some news outlet writes an article off the back of what they leak, they are not reliable sources and shouldn't feature on Wikipedia. And avoid trying to be clever and engaging in WP:SYNTH. If conclusions can be synthesised from information available, present the information, leave the synthesis to the reader.
 * Case study: the sentence "Nvidia claims 20 petaflops of FP4 compute with the dual GB100 die B100 accelerator". I rewrote this sentence (because as written, it was not true) and cited the Nvidia whitepaper on Blackwell. You restored the original sentence. The original cites an article on "bios-it.co.uk" which does not feature any text making that claim, but which does feature a screenshot of an image from that Nvidia whitepaper with the figure "20 petaflops" on it. The article does not verify that figure, as that cannot be done at this time, but nor does it attempt to; until that is possible, the Nvidia source is the most useful source. The screenshot it uses includes a title, an image of a processor, some text annotating the image of the processor, and text below the processor. The title refers to GB200, the Blackwell "superchip", which is the very unhelpful name Nvidia has given their board with dual GPUs and a Grace CPU. The annotation includes figures relating to the processor in the image, including the figure 10 TB/s, presented as the speed of the HBM bus between the dies. The text below the image includes the figure 8 TB/s, given as the speed of the bus between the two GPUs on the GB200 board. The text below the image does not relate to the processor but to GB200, and this is where the "20 petaflops of FP4" figure comes from. Nvidia does not claim "20 petaflops of FP4 compute with the dual GB100 die B100 accelerator"; they claim 20 petaflops of FP4 compute with the GB200 "superchip" product, which has two GPUs and one CPU - see the GB200 specifications on page 11 of the whitepaper. You have no way of knowing what percentage any of those chips contribute to that performance figure; those details don't exist publicly. Because of poor reading comprehension and an inability to recognise a weak source, this is another mistake you have restored. one 🍪 cookie 09:28, 11 April 2024 (UTC)
 * The terrible grammar is not just "a single missing comma". You incorrectly claimed that the RTX 4090 and RTX 6000 Ada use different dies when they both use AD102, not some mysterious "complete A100" or non-existent AD100 die. The AD102 die used in the RTX 4090 is cut down about 10% for yield reasons, but it is not a "lower-binned" version of "A100". The binning that does happen for the AD102 dies used in the RTX 4090 and RTX 6000 Ada is for specific properties, like stability in the case of the RTX 6000, and clock speeds and the ability to use higher voltages in the RTX 4090's case. You are being dishonest when you claim that you were simply writing about the GB200 Grace Blackwell product. In reality, you included details such as the GB200 NVL72 rack-scale server design that "could fit several linked GB200s, giving a product with a total of 72 B200s and 36 "Grace" CPUs." To reiterate, talking about server designs is not relevant for an article about the Blackwell architecture.
 * Just to be clear, I reverted your edit because of the factual errors and the large amount of irrelevant material that had been added. Some good aspects that you did add were added back in to the article, such as mentioning how Blackwell's "contributions to the mathematical fields of game theory, probability theory, information theory, and statistics have influenced or are implemented in transformer-based generative AI model designs or their training algorithms" and how Nvidia's GTC presentation did not mention gaming. SHJX (talk) 12:08, 11 April 2024 (UTC)
 * You are being dishonest when you claim that you were simply writing about the GB200 Grace Blackwell product.
 * No, I am not - I'm not precious about what I wrote, I'm annoyed that you're shitting up the quality of the page. If there are errors in what I wrote, I have no problem with them being fixed by someone else - "there is a way of dealing with this which isn't a revert to an earlier version of a page: edit the text" - but you aren't fixing errors, you're restoring utter nonsense. Just to be clear, you replaced
 * Details of the Blackwell architecture were leaked in 2022, and ascending tiers of Blackwell GPU dies code named B40 and B100, and a computer-on-module code named GB200, were revealed in October 2023 in an Nvidia slideshow presentation for investors hosted on their official website and was formally announced by CEO Jensen Huang in the keynote presentation of Nvidia GTC 2024.
 * ...with
 * Named after statistician and mathematician David Blackwell, the name of the Blackwell architecture was leaked in 2022 with the B40 and B100 accelerators being confirmed in October 2023 with an official Nvidia roadmap shown during an investors presentation. and was officially announced at Nvidia GTC 2024 keynote on March 18, 2024.
 * How is reverting to that text preferable to an edit? It says that on the page right now. How is "and was officially announced at Nvidia GTC 2024 keynote on March 18, 2024" a complete sentence? Are you reading the words you're causing to end up on the page at all?
 * My assumption is that B40 is a single-die product - that any of the individual dies which aren't binning high enough to be paired and packaged as B100s are being used to make B40s. From the number, it's reasonable to assume they'll have some amount lower than 50% of the performance of a B100, so that would make sense - but no information to confirm this exists, so I did not put it on the page. Gathering the citations necessary to reasonably claim that and putting them in one place would be synthesis. Instead, all that can go on the page currently are the details Nvidia has provided, and until things can be verified by third parties, the sparse information regarding proposed implementations of Blackwell hardware is the most relevant information available to people trying to understand the architecture. one 🍪 cookie 14:48, 11 April 2024 (UTC)
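The binning speculation in that last bullet (explicitly unverified, as stated there) could be sketched as a toy sorting model: dies that bin high enough are paired into dual-die "B100"-class packages, while the rest serve as single-die "B40"-class parts. Every score, cutoff and name here is invented for illustration; nothing in this sketch comes from Nvidia.

```python
def sort_dies(bin_scores, cutoff=0.8):
    """Toy binning model: split dies into pairs (hypothetical dual-die
    products) and singles (hypothetical single-die products)."""
    high = sorted(s for s in bin_scores if s >= cutoff)
    singles = [s for s in bin_scores if s < cutoff]
    pairs = [(high[i], high[i + 1]) for i in range(0, len(high) - 1, 2)]
    if len(high) % 2:  # an unpaired high-binning die falls back to single-die use
        singles.append(high[-1])
    return pairs, singles

pairs, singles = sort_dies([0.9, 0.95, 0.7, 0.85])
# pairs == [(0.85, 0.9)], singles == [0.7, 0.95]
```

This is only a way of making the assumption concrete; as the bullet above says, no public information confirms how (or whether) the dies are actually sorted between products.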