Talk:Integer (computer science)

homed integer
homed integer pops up in the VisualC++ header files in reference to the DEC Alpha. Anybody know what that's about? Hackwrench 21:08, 5 November 2005 (UTC)

Finally found the relevant MSDN article. Hackwrench 22:48, 5 November 2005 (UTC)

Current correct URL as of 2-10-2014:  Jimw338 (talk) 00:43, 11 February 2014 (UTC)

use case column
The column of uses for 64-bit mentions "very large numbers". This is quite subjective; for many people, 32-bit is already "very large". And it doesn't say anything about where those very large numbers are actually used. It would be nice if someone could find some good concise words for "file sizes for modern multi-gigabyte system files, directory and HDD sizes, or network transfer rates, or transfer amounts". — Preceding unsigned comment added by 213.61.9.75 (talk) 10:15, 6 May 2013 (UTC)

In re: examples. Would using Gangnam Style's breaking of the YouTube counter be a good example of that "Very Large" size comparison for 32- vs. 64-bit signed integers? — Preceding unsigned comment added by Doryllis (talk • contribs) 18:26, 12 September 2019 (UTC)

Word
(this was originally under Quadword)

I think this probably isn't adequate. I mean, I know what "unit," "computer memory or storage," "four," and "word" all mean, but I still don't know what "quadword" means (at least, I don't think I do). Isn't that interesting? --LS

That is because no one has explained how many bytes are in a word. In IBM/370 Assembler a word has 32 bits or 4 bytes. A doubleword is the length of 2 words, thus 64 bits or 8 bytes. I was entering something about this under word with a one-word instruction as an example and found someone disagreeing with me---I give up. Rose Parks


 * problem is that different machines have different word sizes

OK, different problem: if you use "word" without further ado, many people reading the article won't know that you mean something different from what is ordinarily meant by "word."

I found this link on a page with byte...what was I to think? RoseParks

The article asserts that now word usually means 16 bits but I think it still means "most efficient unit size on some particular architecture" or "architecture defined sequential packing of bits". What do others think? --drj


 * I think the author is showing his/her IBM PC x86 bias by saying a word is 16 bits.

I have been around enough architectures that I consciously don't decide how big a word is, until someone tells me.


 * At some point in the past, drj was completely right: word meant the machine's "native" bit width, i.e. registers would always contain a word. But I think the trend to call 16 bits a word always, even outside of the now quite rare 16-bit beasts, is definitely there. This may be due to the wintel juggernaut, but it may also be due to there being a need to call 16 bits (and 32 bits, etc.) something. Word is certainly not the best choice, but most people who are not chip designers seem to find it adequate.

Despite the dominance of Wintel I still couldn't find enough evidence to support the case for word usually meaning 16 bits (for example, count "16 bit word" versus "32 bit word" on AltaVista). So I deleted that claim from the head page. I can believe that amongst programmers who have only been exposed to Wintel boxes word usually means 16 bits, but that's not what was claimed. I don't really want to fight a battle over this but I would like some evidence. --drj

Likewise I'm not sure about 'longword' and 'long' as always meaning 32 bits. There are existing architectures where the C type 'long' is 64 bits. --Matthew Woodcraft

I think the article is clear enough about the terms being somewhat ambiguous (how's that for a .sig quote!) --LDC
 * I know it's incorrect to say "That variable should have been declared as long, which has at least 32 bits on any computer". C does not guarantee that long has at least 32 bits, only that it has >= the number of bits that int has.  E.g. http://www.unix.org/version2/whatsnew/lp64_wp.html says "C language standards specify a set of relationships between the various data types but deliberately do not define actual sizes.". 01:47, 6 December 2008 (UTC)
 * The C standard does require that a long be able to represent all values, inclusive, between -2^31+1 (for non-two's complement systems) and 2^31-1. —Preceding unsigned comment added by 69.166.47.137 (talk) 00:34, 22 February 2010 (UTC)

Redirect
It strikes me that some of the information on this page is duplicated in the articles word, byte etc. Perhaps we should move it to a single place, and redirect from the others? --Uriyan

SI units?
picking a nit: As far as I know (and the SI entry supports me) terms like "megabyte" are not, in any way, SI measurements. The section that compares the power-of-ten meanings of the prefixes with the power-of-two meanings of the prefixes is fine, because it is describing the prefixes measured in a metric fashion (i.e. powers of ten). I decided to drop all the talk about hard drives and just leave the illustration of the difference, and the cross-reference to binary prefix, where the controversy is covered much better. Zack 18:23, 27 August 2005 (UTC)


 * It's not about SI *units*, it's about SI *prefixes*. Examples of units are meter, gram, pascal, kelvin, joule. Mega always meant and always will mean "million", which is exactly 10^6 or 1000000. --82.141.58.16 16:04, 18 October 2006 (UTC)

Maybe cover the int function present in lots of languages/math packages? — Omegatron 07:27, 17 October 2005 (UTC)

Char?
Why are char datatypes discussed here? char datatypes represent a character, not an integer. And how can a signed char exist (characters are textual, not numeric)? --208.138.31.76 (talk) 14:44, 7 January 2008 (UTC)


 * Well, in C/C++ (and probably others), chars can be signed or unsigned, and if explicitly declared as such, are considered an integral type. Oli Filth(talk) 19:38, 7 January 2008 (UTC)


 * Thank you for clearing that one up. By the way, (as far as I know) Java's char type is a strict Unicode implementation (I'm not so sure about .NET though) --208.138.31.76 (talk) 21:13, 7 January 2008 (UTC)


 * I'm no Java expert, but I've just taken a look at the Java spec, and it specifies char as an unsigned integral type (see section 4.2). I think, therefore, that we should revert the changes related to Java. Oli Filth(talk) 21:22, 7 January 2008 (UTC)


 * I have a little test for the Java char type.


 * This program assigns values to char variables. Because addition is defined for all integral types, the + operator could possibly be either integer addition or string concatenation.


 * I have done some Java programming before and, as far as I am concerned, a char is a one-character string. I have never tried to concatenate two chars or a char and an integer, only a char and a string.


 * What would this program print? An integer or a string starting with "Hello, "? --208.138.31.76 (talk) 16:12, 8 January 2008 (UTC)


 * I don't know! However, there's an easy way to find out. I'm not sure what bearing this would have on the article, though.  Oli Filth(talk) 02:25, 9 January 2008 (UTC)


 * This would demonstrate how the char type works - as text or an integer. The Java spec also states that the addition operator is defined for all integral types (including char) and, because + is overloaded for strings and integers, the output depends on whether + is used for integer addition (returning an int) or concatenation (returning a string). This could show that char may not be the same as int. --208.138.31.76 (talk) 14:53, 9 January 2008 (UTC)


 * Concatenation only happens when there is a string. Remember the rule about not being able to overload functions differing only in return type? Character literals don't invoke it. Also, the + operator is invoked from left to right, so (1 + 1 + "a" + 1 + 1) is "2a11". —Preceding unsigned comment added by 69.166.47.137 (talk) 00:27, 22 February 2010 (UTC)


 * From a high-level perspective a character is a different concept from an integer. Characters enumerate letters. That the {} languages implicitly convert char to int adds to the confusion, but that does not make them the same. Most non-{} languages (Ada and Pascal spring to mind) treat them separately, offering functions to query a character's ordinal number or to create a character from its ordinal number. --Krischik T 11:57, 24 July 2008 (UTC)

Why is UCHAR/unsigned char not shown in the table under C/C++ on the row for byte->unsigned? Instead it shows the same information as for signed char. — Preceding unsigned comment added by 135.196.5.146 (talk) 10:53, 22 November 2012 (UTC)

Decimal digits
The decimal digits column is somewhat misleading - taking a byte as the example:

If you use a byte for arithmetic then a byte does not go all the way to +/- 999 - with a range of -128 .. 127 I would say it's about 2.1 digits but certainly not 3.

If you want to print a byte then -128 takes 4 characters - so again it's not 3.

So the numbers shown in "Decimal digits" are not the answer to any real-life question. --Krischik T 11:57, 24 July 2008 (UTC)


 * Well, "-" isn't a digit. Whilst I'm inclined to agree with your logic, I think it's still a useful indicator of the order of magnitude of the ranges in question, in "units" that are slightly more tangible. Oli Filth(talk) 19:14, 24 July 2008 (UTC)

Integer vs "Integral"
Can we include any reference as to why Microsoft refers to integers and "integrals" in C#? In the .Net CTS they are integers, along with every other language I am personally familiar with. At the top of this article it states, "These are also known as integral data types", but there is no reference for this provided. As best I can tell, this seems to be a Microsoft-only designation. The Wikipedia article on Integrals covers the calculus definition. The math Integer article contains the statement, "More formally, the integers are the only integral domain whose positive . . .", but this doesn't explain why an integer would be called an "integral". There is some clarification here, but Wolfram states "Numbers that are integers are sometimes described as "integral" (instead of integer-valued), but this practice may lead to unnecessary confusions with the integrals of integral calculus" on this page. Again, this seems to be only a C# practice from a computer-language perspective. Any ideas?--P Todd (talk) 02:12, 30 October 2009 (UTC)


 * "Integer" is a noun; "Integral" in this context is an adjective modifying "types". Also, there is a reference to the C standard now. I know at least the Java language specification uses the term "integral types" also. —Preceding unsigned comment added by 69.166.47.137 (talk) 00:21, 22 February 2010 (UTC)


 * "Integer" is never defined while "Integral" is being defined. It feels like we are talking around something in a way that wouldn't be accessible to a non-computer-science reader. So we define the "adjective" as being related to the noun without ever really defining the noun. I tried to define the noun and was shot down because "That's what the math page does", but the introduction paragraph should be readable by someone who hasn't read every link. Right now, the page seems almost tautological. Doryllis (talk) 17:14, 30 December 2019 (UTC)

Flops
Aren't flops 2 bits? Can those be added to the chart? Tidus3684 (talk) 21:31, 31 March 2011 (UTC)
 * I've always understood that 'flops' means 'floating point operations per second', with floating-point processor speeds being commonly measured in MegaFlops. Murray Langton (talk) 21:55, 31 March 2011 (UTC)


 * Me too. 212.68.15.66 (talk) 10:36, 7 April 2011 (UTC)

Suggest merge
I suggest we merge Short integer and Long integer here; I don't see why the content here must be duplicated in those articles, and surely it's more of an overview if the various integer sizes are described here. --Wtshymanski (talk) 13:14, 1 September 2011 (UTC)
 * Support with reservations. The content is not really duplicated in both of these articles. Look, for example, at the tables there: they list sizes of the integer type in different environments, whereas this page generalizes for all cases. I think we can merge by making separate subsections for short and long integer types in the section Common integral data types, then copying all the information except the redundant parts of the introduction, and finally moving the current information in Common integral data types to these subsections if possible. 1exec1 (talk) 20:04, 4 September 2011 (UTC)
 * Support. 1exec1's proposal sounds good. - Frankie1969 (talk) 18:43, 11 September 2011 (UTC)
 * Support merge makes comparison easier. Max Longint (talk) 22:15, 5 October 2011 (UTC)
 * Comment While we are at this, I would suggest to dismantle/rewrite Computer numbering formats Max Longint (talk) 22:30, 5 October 2011 (UTC).

pronunciation
Hello, I'm a non-native English speaker. Could the article include how we should pronounce "integer" ? like "g" or "j" ? Thanks. — Preceding unsigned comment added by 145.248.195.1 (talk) 13:42, 22 November 2011 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Integer (computer science). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20100822072551/http://flash-gordon.me.uk/ansi.c.txt to http://flash-gordon.me.uk/ansi.c.txt

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 13:52, 14 November 2017 (UTC)

Perl 6, and arbitrary arithmetic
Compared to other languages like Python and Lisp, I wouldn't say that Perl 6 has arbitrary-precision integers. By default they are C native integers, unless you explicitly use bigint or decimal libraries to overload the creation of literals and the operators. So I think it's like saying C++ has arbitrary-precision integers, when it doesn't. 81.6.34.246 (talk) 12:43, 19 December 2018 (UTC)

Making Computer Science accessible for the general audience
Right now, the article reads well if and only if you understand all the linked references. That is bad writing, in general, for an encyclopedia, even if you are focused on computer science. Can we allow that some words will have to be defined here in order to not create a sentence that basically says "Integers are integers", which tells the general or beginner audience exactly nothing? Doryllis (talk) 16:12, 23 September 2019 (UTC)

long and int are NOT equivalent
Under the table for long integers it says "the terms long and int are equivalent" but that is simply not true. — Preceding unsigned comment added by 90.230.55.237 (talk) 20:01, 24 September 2019 (UTC)


 * Indeed, even the given reference https://www.open-std.org/JTC1/SC22/WG14/www/docs/n1570.pdf makes that clear, for example on page 27 the example values contradict the idea that int=long. I've edited the page accordingly. ~ Eldacan (talk) 12:49, 15 March 2024 (UTC)