Talk:Core dump

complaint against User:Macrakis
I have marked this article as being inaccurate and as having a bad point of view against http://en.wikipedia.org/wiki/User:Macrakis, for the following reasons: 1. Important information about core dumps on Linux has repeatedly been removed from the article (point of view). 2. The author has tried to make core dumps sound spooky, which is misleading (accuracy). We need to be accurate and professional here. It is possible for open operating systems to provide a full set of information about what exactly is in a core dump and how it is used, at least at the level of a Wikipedia article. See the ebuild and APT discussions, or the discussions about different versions of the Mario Brothers games. All of these are programs in the public eye and standard objects worthy of discussion in Wikipedia.


 * User:Macrakis's "point of view" is that the information either isn't important or doesn't belong here (a point of view with which I agree, BTW; WP:NOT an instruction manual). That's not the type of point of view Wikipedia should avoid.  A lot of that information sounds like an essay on core dumps, which means it's original material - another thing that doesn't belong in Wikipedia.  If you want to have a "core dumps for Linux users" page, set up a Web site of your own and put it there, or write something for Wikibooks; WP:NOT a web host.


 * I fail to see where he tried to make the core dumps sound "spooky". Please cite passages where you believe he does so. Guy Harris 20:25, 29 October 2006 (UTC)

explanation
What I would like to see here is a full set of information about what is in a core dump, for each relevant operating system, collected from the various sources; "in depth" meaning providing detail. This is not about original material here, but about collecting what is available.

If Wikipedia can contain writing about software that is in common use, and about different versions of it, then we can also write about the different versions of the operating systems that produce core dumps and the tools used to work with them. The question is how to structure this information to make it useful.

Calling the core dump unstructured is misleading and spooky. The structure of a core dump is the structure of the memory of a process, and this memory is well structured.

Statements like "in that case, source-level debuggers may not be able to access or interpret the memory state in a useful way" are misleading and also scary. Which debuggers are we talking about here? Are we talking about source-level debuggers that cannot show the disassembly? What if we have the sources, but the code was not compiled with debug info turned on? There is a huge number of combinations of possibilities that must be enumerated and explored. We need to break these statements down and show what is available under which circumstances.

Given a full set of source code for the entire system, with every running process compiled by yourself, including the compiler, what part of a dump is unstructured?

Another example of point of view/spookiness: the statement "There are also special-purpose tools called dump analyzers to analyze dumps." Spooky because names are being invented here that are ghosts; they don't exist. It is misleading to give things the wrong names. Why don't we tell people what tools are really available for each task?

Who calls them "dump analyzers"? What information does the public gain here? The entire dump analyzers article is of very low quality.

It seems that the goal here is to limit the information available to the public. I think that there is some effort going on to spread fear, uncertainty and doubt about core dumps. I will not put up with it on Wikipedia. We need to present the full information that is available to the public.

Why can Wikipedia be a reference for mathematical functions but not for computing functions? Why can people have articles like these: Printf, Binutils, Objdump? What about this one, which is full of examples: C_preprocessor? What is wrong with providing example screenshots as in King_Bowser?

Why don't you just go and delete the information in those articles instead of deleting what I added?

How come they can provide instructions about games, like "The Yoshis must fight the young king in his private chambers, where he's keeping their Super Happy Tree", but we cannot tell people that "The resulting core file can be dumped with objdump because it can decode the file"?

What gives you the right to repeatedly delete my content? It does not hurt you to have more information available to the public. I removed the instruction-manual aspect and just presented the facts. Now my facts keep getting deleted. I can see that we are going to have a long, drawn-out fight over this one.

I can also tell you that I believe freedom of information is going to win on this one. There is nothing you can do to stop people from expanding Wikipedia and making it better. I believe that we need to take a neutral point of view and just present the facts about the issue: not invent new terms, not cut out relevant information. We can also link in some of these articles and then present information from them.

User:Mdupont 20:50, 29 October 2006 (CET)

Dear Mdupont, I have nothing against core dumps or unsafe programming languages. I have myself written compilers and debuggers, and have also worked on operating systems starting with Multics and PWB Unix through Mach (aka OS X). I have used both unsafe (assembler, C) and safe(r) (Lisp, Java, Ada) programming languages. There is nothing "spooky" about saying that core dumps are an unstructured representation of the core image. Of course the data structures in core are in the core dump, but the meta-data about them may or may not be. Exactly as you say, some programs are distributed with symbol table, source code references, etc., some are not. Which is why the article read "may not" be able to tell you etc. (i.e. sometimes it can, sometimes it can't). And of course once you start modifying the system image using (for example) untagged struct variants as in C, even the symbol table won't necessarily tell you what you're looking at.

I don't know why you claim that there is no such thing as a "dump analyzer". I agree 100% that the current dump analyzer article is very poor, but what does that have to do with anything? Try a search for "dump analyzer" if you don't believe they exist; for historical perspective, look about 1/2 way down the page Multics Printer Software.

I don't know where you get the idea that I want to suppress information. There is a place for everything. WP policy says "Wikipedia articles should not include instructions or advice ..., suggestions, or contain "how-to"s. This includes tutorials, walk-throughs, instruction manuals, video game guides, and recipes."

Hoping that we will be able to cooperate fruitfully in the future, --Macrakis 21:45, 30 October 2006 (UTC)

References to work into article
Debugging memory on Linux. Linux Journal, Volume 2001, Issue 87, July 2001, p. 2. ISSN 1075-3583.

Mike Loukides and Andy Oram. Getting to Know gdb. Linux Journal, Volume 1996, Issue 29es, September 1996, Article No. 5. Specialized Systems Consultants, Inc., Seattle, WA, USA. ISSN 1075-3583.

J. M. Voas. Certifying off-the-shelf software components. Computer, 31(6):53-59, June 1998. ISSN 0018-9162. doi:10.1109/2.683008.

Aleph One. “Smashing The Stack For Fun And Profit.” Phrack, 7(49), November 1996.

Arash Baratloo, Timothy Tsai, and Navjot Singh. “Transparent Run-Time Defense Against Stack Smashing Attacks.” Proceedings of the USENIX Annual Technical Conference, June 2000.

C. F. Webb. “Subroutine call/return stack.” IBM Technical Disclosure Bulletin, 30(11), Apr. 1988.

Crispin Cowan, Calton Pu, Dave Maier, Heather Hinton, Jonathan Walpole, Peat Bakke, Steve Beattie, Aaron Grier, Perry Wagle and Qian Zhang. “StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks.” Proceedings of the 7th USENIX Security Conference, January 1998, San Antonio, TX.

Immunix.org. “StackGuard Mechanism: Emsi’s Vulnerability.” http://immunix.org/StackGuard/emsi vuln.html.

Hiroaki Etoh. “GCC extension for protecting applications from stack-smashing attacks.” http://www.trl.ibm.co.jp/projects/security/ssp

User:Mdupont 20:50, 29 October 2006 (CET)

Hex dump is not Core dump
I did a search for "hex dump" to check whether the article exists. Instead I found "Core dump". They are not one and the same. A core dump is always binary and can be viewed in many different ways, including as a hex dump. But a hex dump is just a view of some information, not a core dump. The redirect should be removed and a proper article written.

--Aleksandar Šušnjar 04:35, 17 January 2006 (UTC)
 * This seems fixed by now... --Philippe 12:08, 12 October 2006 (UTC)

Source?
"The name comes from core memory and the image of dumping a bulk commodity such as gravel or wheat."

Certainly not something else, sir or madam?

Cleanup
Someone want to integrate the crap below the article into the article itself? --Disavian 07:07, 6 June 2006 (UTC)

An important oversight
I note that nowhere does this page actually define the term 'dump'. I realise that people who know enough about computers to write this article possibly take such knowledge for granted, but spare a thought for non-nerds who are struggling just to understand the terminology. —Preceding unsigned comment added by 203.26.98.142 (talk) 08:05, 3 July 2008 (UTC)
 * A dump is a copy of a record, generally a large one, hence its need to be dumped rather than gently deposited. LokiClock (talk) 22:56, 12 December 2009 (UTC)

possible "in popular culture" reference
http://nobodyscores.loosenutstudio.com/index.php?id=223 —Preceding unsigned comment added by 63.82.6.67 (talk) 01:35, 6 January 2010 (UTC)

PC and UNIX centricism
The article makes many general statements that actually apply only to PC and Unix-like operating systems. In particular,


 * OS/360 had the ability to write raw storage dumps to disk in the late 1960s.
 * The use of the misnomer core dump is limited to PC and Unix-like operating systems. Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:03, 24 January 2011 (UTC)

Storage dump
There is no reference for "storage dump" meaning "core dump". The two are quite distinct, so I am creating a new article under "Storage dump (computer science)". (That link to Oracle shows nothing, nor can I find anything giving the terminology "storage dump" the same status as "core dump".)
 * Swestlake (talk) 20:13, 3 April 2013 (UTC)

Database dump
A database dump is the typical way backups are made for databases, so this is not suitable as a reference. Schily (talk) 14:58, 29 June 2015 (UTC)


 * Sorry, what's the actual problem with database dumps? Those are dumps, and they do store large amounts of data for later examination or other purposes; as we know, database dumps aren't necessarily used only in disaster recovery scenarios, they're frequently used as data snapshots for later comparisons, performance benchmarks, etc. — Dsimic (talk | contribs) 05:02, 30 June 2015 (UTC)


 * They serve a different purpose than core dumps, and this is a core dump article. A database dump is mainly for being able to restore the database and serves as long-term storage/backup, while a core dump is written to allow examining the state of a process at a later time (and only up to that time) without keeping it in memory. This is not the only method of allowing debugging; note that e.g. UNOS (operating system), the first UNIX clone and the first UNIX real-time implementation, kept failed processes in memory instead. Schily (talk) 11:09, 30 June 2015 (UTC)


 * That's perfectly fine, but here's what the database dump reference actually covers (quoted from the article's lead section):
 * The term "core dump", "memory dump", or just "dump" has also become a jargon to indicate any storing of a large amount of raw data for further examination or other purposes.
 * Using the term "dump" as a jargon for "storing of a large amount of raw data" is what's covered, and databases are dumped very often. — Dsimic (talk | contribs) 11:16, 30 June 2015 (UTC)

Which component is responsible for generating core dumps in modern OSes?
It might make sense to state which component is responsible for generating core dumps on modern OSes, like AIX, BSDs, Linux, OS X and Solaris. Jeffrey Walton (talk) 00:04, 21 May 2018 (UTC)

Security concerns on modern platforms
It might make sense to talk about core dumps in the context of platform security. For example, suppose a program is built *without* NDEBUG and subsequently raises a SIGABRT due to a failed assertion. If the program was handling sensitive information like a password, private key or social security number, then the sensitive information will be written to the file system unprotected (the "core dump"). Further, platforms like Linux, OS X and Windows may ship the core file unprotected to an error reporting service so the sensitive information is available to the OS manufacturer and developer. These are pretty egregious breaches and not acceptable for applications handling sensitive information. Jeffrey Walton (talk) 00:05, 21 May 2018 (UTC)

octal
The article says "Modern core dump files and error messages typically use hexadecimal encoding, as decimal and octal representations are less convenient to the programmer." When a dump was in octal, it was because octal was native to the computer, e.g. the CDC 6000 series. Bubba73 You talkin' to me? 03:28, 10 November 2018 (UTC)


 * Decimal too. Peter Flass (talk) 15:05, 10 November 2018 (UTC)


 * Yes, there were some decimal computers too. I had many octal dumps, because the computer used octal. I'm going to take the sentence out. Bubba73 You talkin' to me? 00:07, 11 November 2018 (UTC)