Talk:Integer overflow

[Untitled]
Integer arithmetic is frequently used in computer programs on all types of systems, since floating-point operations may incur higher overhead (depending on processor capabilities).

Floating-point operations may or may not actually be more expensive than integer arithmetic on given hardware. I think there are much better reasons to use integers instead of floats: integers are exact and give you more precision than floats of the same size. For example, in a 32-bit integer, you get 32 bits of precision, whereas an IEEE single precision float, which also takes 32 bits, provides only 24 bits of precision (23 bits mantissa, and the sign bit). When the numbers you're trying to represent are integers (or even rationals with a common denominator), you're therefore better off using ints.

Inglorion 09:27, 19 August 2006 (UTC)

Merge with arithmetic overflow
Integer overflow is a special case of arithmetic overflow. I don't see the need for two articles. Derek farn 08:54, 10 October 2006 (UTC)

I think this makes sense as two different articles. Integer overflow is of particular interest in computer security and is referenced from the computer security page. The content of this article is currently underdeveloped and could be significantly expanded. Rcseacord 18:10, 12 December 2006 (UTC)


 * But, currently, the arithmetic overflow page adds no significant value, and only confuses because there is no obvious difference between integer overflow and arithmetic overflow. In fact, the first line of 'integer overflow' says  "an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is too large to be represented...", which is exactly the definition of arithmetic overflow.  Until there is something in the two articles that shows a difference between these two terms, they really should be merged. 141.166.41.125 (talk) 21:05, 6 September 2015 (UTC)

Note to self. -- C. A. Russell ( talk ) 19:08, 9 May 2009 (UTC)

Wrap around, Signed/Unsigned, mixed message
From the article:
 * Since an arithmetic operation may produce a result larger than the maximum representable value, a potential error condition may result. In the C programming language, signed integer overflow causes undefined behavior, while unsigned integer overflow causes the number to be reduced modulo a power of two, meaning that unsigned integers "wrap around" on overflow. This "wrap around" is the cause of the famous "Split Screen" in Pac-Man.
 * A "wrap around" corresponds to the fact that, e.g., if the addition of two positive integers produces an overflow, it may result in a negative number. In counting, one just starts over again from the bottom.
 * Example: 16 bit signed integer: 30000 + 30000 = −5536.

The example given here of a negative number resulting from addition is an example of a signed integer overflow, but its usage immediately after the statement about C programming behavior is contradictory. I don't know enough about C programming to know whether the statement that "signed overflow produces undefined behavior, while unsigned overflow produces wrap around" is accurate or transposed. If it is accurate, the example should be changed to, perhaps, 35000 + 35000 = 4464; otherwise, the words "signed" and "unsigned" should be swapped to make the preceding statement accurate. 199.2.205.141 (talk) 14:15, 29 October 2013 (UTC)
 * Signed overflow is undefined in C. I changed the example to prevent confusion. Hiiiiiiiiiiiiiiiiiiiii (talk) 01:52, 13 January 2014 (UTC)

Merger proposal
I propose that Arithmetic overflow be merged into Integer overflow. The arithmetic overflow article is short and underdeveloped, and duplicates the material in integer overflow. As they currently stand, there is no apparent difference in definition between the two terms. 141.166.41.125 (talk) 21:05, 6 September 2015 (UTC)

I agree! Mchcopl (talk) 18:42, 12 April 2016 (UTC)

I agree. This needs to be acted on. 98.122.177.25 (talk) 20:14, 12 July 2016 (UTC)

Ouch! It should have been the other way around. Integer overflow is a special case of arithmetic overflow, which also covers floating-point overflow (an exponent too large to represent, itself a kind of integer overflow, though for float exponents it is, or may be, handled differently, e.g. by assigning the special +Infinity and -Infinity values). 37.44.138.159 (talk) 07:18, 25 August 2019 (UTC)

Ambiguity Regarding underflow
The definition of integer underflow is incorrect. Integer overflow is the blanket term for where a value exceeds the range of an integer. When the maximum value is exceeded, this is positive overflow. When the minimum value is exceeded, this is negative overflow -- not underflow!

Integer underflow is a less remarkable example of information loss, where precision is lost due to an arithmetic operation such as integer division or bitwise right-shift, or due to a conversion from a type that can represent fractional values, e.g. floating-point. For example, where 3 / 2 = 1, we are seeing integer underflow.

The references to integer underflow are misleading. There will always be more examples of the misuse of the term integer underflow than examples of its correct use, because the proportion of material concerned with integer overflow far exceeds that concerned with integer underflow. Thus a picture is painted of a general consensus around the incorrect definition. Of the five citations, the first, the CWE-191 article, gets the definition of underflow wrong. The second has only one reference, which is a broken link. The third contains a mistake by its own definition (-1 is not a smaller absolute value than 0). The fourth, again, uses the wrong definition of integer underflow. The fifth begins by talking about buffer underflow and then carries those semantics into a section on integers.

The Wikipedia article itself gets the definition of integer overflow right and even describes the example from the third underflow citation as integer overflow. (But then it scores an own goal by calling this integer wraparound. Ouch!)

Some of these citations are great examples of the ambiguity and require only minor clean-up. However, the conclusion they draw is wrong. In particular, the following sentence is factually wrong and should be amended from

When the term integer underflow is used, it means the ideal result was closer to minus infinity than the output type's representable value closest to minus infinity.

to:

When the term integer underflow is used, it is often taken to mean the ideal result was closer to minus infinity than the output type's representable value closest to minus infinity. — Preceding unsigned comment added by 95.44.211.219 (talk) 07:48, 1 September 2019 (UTC)

Example: accession numbers
At :

"Due to the continued growth of GenBank, NCBI will soon begin assigning GIs exceeding the signed 32-bit threshold of 2,147,483,647 for those remaining sequence types that still receive these identifiers. The exact date that the 32-bit threshold will be crossed depends on submission volume and is projected for late 2021."

Could this be another example?

Proposal: I feel that there should be a more generalised Wikipedia article grouping topics that include Year 2000 problem, IPv4 address exhaustion and Integer overflow. These all relate to the same fundamental idea: problems (often unforeseen or ignored) arising from use of integer types with a maximum value that can eventually be exceeded.

—DIV (1.145.43.213 (talk) 06:48, 5 September 2021 (UTC))

Minecraft
It should be using Minecraft: Java Edition and Minecraft: Bedrock Edition instead of "Windows 10 Edition"/"Pocket Edition"/"Minecraft". Bedrock Edition also no longer has the Far Lands, and they were always different on Bedrock. Eteled286 (talk) 23:59, 28 September 2022 (UTC)

unsigned integers
The article says that integer overflow does not occur for unsigned integers in the C11 standard. This is true, but slightly misleading as readers will think it is a C11 innovation. Actually the same thing was true in the ANSI/C89 standard (3.1.2.5) and the C99 standard (6.2.5). Apart from one word that seems redundant, both those standards have the same sentence as appears in C11. Zerotalk 04:28, 29 January 2023 (UTC)

Explanation for removal
I am removing this:
 * Caution should be shown towards the latter choice [testing for overflow after the operation]. Firstly, since it may not be a reliable detection method (for example, an addition may not necessarily wrap to a lower value). Secondly, because the occurrence of overflow itself may in some cases be undefined behavior. In the C language, overflow of unsigned integers results in wrapping, but overflow of signed integers is undefined behavior. Consequently, a C compiler is free to assume that the programmer has ensured that signed overflow cannot possibly occur and thus its optimiser may silently ignore any attempt to detect overflow in the result subsequent to the calculation being performed without giving the programmer any warning that this has been done. It is thus advisable to always implement checks before calculations, not after them.

The wiki-reason for removal is that it is unsourced. The real-life reason for removal is that it is nonsense. A C compiler is not entitled to read the programmer's mind on whether a bit of code is intended for overflow detection, and not entitled to assume that the programmer has taken precautions to avoid overflow. The implementation-defined nature of integer overflow means that the result of the operation is not specified in the standard, not that there is no result or that the compiler is allowed to assume the result is different from what it might actually be. For example, if the programmer writes, given x,y known to be positive, "z = x + y; if (z < 0) something;", the compiler is not allowed to generate code that might make z negative without executing something. However, the compiler is allowed to generate code that will set z to INT_MAX in the event of overflow, and then optimise out the test because it knows that z will always be positive. That's what "implementation-defined" means.

Another problem with the text is the incorrect before/after dichotomy. Suppose again that x,y are known positive. Compare these: "z = x + y; if (z < 0) something;" and "if (y > INT_MAX - x) something; z = x + y". If the compiler was permitted to assume overflow is not going to occur, both the before and after tests are available for optimising out. In principle, it is impossible to test for a condition that a super-smart compiler assumes does not exist. In actuality, both examples are valid though the second one only tests for overflow in the (overwhelmingly most common) case that the implementation-defined result of overflow is silent wrap-around.

Repeating myself a bit, "implementation-defined" means "the implementation has decided what the result will be". It does not ever mean "the implementation can make assumptions about the wishes of the programmer". If the implementation has decided that the result of integer overflow is silent wrap-around, then the compiler must generate code that is correct for silent wrap-around. It can't generate wrap-around code and then assume there was no wrap-around. Anyway, the most important reason for signed integer overflow being implementation-defined (and a good reason for testing before the operation for maximum portability) is that an implementation is allowed to generate a trap condition. Zerotalk 03:06, 12 September 2023 (UTC)
 * I might agree that the previous one isn't perfect, but recent discussions of C compilers indicate that it is needed, in some form or other. c optimizers have been doing things that I wouldn't suggest for some time. I disagree with this one, too, but it seems that they are doing it. Gah4 (talk) 05:11, 12 September 2023 (UTC)
 * Sorry, I can't understand your comment. What exactly are compilers doing, and what evidence can you offer? Zerotalk 12:58, 12 September 2023 (UTC)
 * Incidentally, I know of some things that look somewhat similar but are actually not. For example, "if (x + 2 > x) something" can be optimized to "something". It is valid because if no overflow occurs then the test is true arithmetically, and if there is overflow the result of the test is up to the implementation so treating it as true is allowed. A number of such gotchas are known.  My example "z = x + y; if (z < 0) something" is not like that. Although the result of "z = x + y" is implementation-defined if it overflows, the meaning of "if (z < 0) something" is perfectly well defined. If execution flow reaches the statement "if (z < 0) something" with z < 0 but something is not evaluated, that's a bug. Zerotalk 13:53, 12 September 2023 (UTC)
 * I haven't followed it exactly, but I am pretty sure that there are plenty of sources. There is a lot of discussion, especially as programs fail. C has always had overflow undefined, as different processors do different things. But many people know what the common processors do, and assume that. Now they get surprised. The (x+2 > x) is the easy case, but there are many that are harder to see, and that compiler optimizers seem to find. Gah4 (talk) 20:26, 12 September 2023 (UTC)