Talk:Minifloat

Expert Help Needed

 * 1) Please look at this and make sure it makes sense to you. :)
 * 2) There don't seem to be good sources for this article.  There is a paper linked, but no other references.
 * 3) I took off the stub tag since the article is long enough, but I'm not sure whether it should be expanded further.

I do not exactly understand the demand for sources. The value of the concept is explained in the article itself. Minifloats are useful in courses about computer arithmetic, IEEE 754 floating point numbers, and floating point arithmetic generally. They are used in special graphics cards. I intended the article as an aid to understanding the problems with floating point numbers.

Links to other sources where the word minifloat is used:
 * http://home.earthlink.net/~mrob/pub/math/floatformats.html#minifloat
 * http://www2.enel.ucalgary.ca/People/Norman/enel369winter2001/lab6/
 * http://www.cs.queensu.ca/home/cisc101/spring2005/webnotes/programs/Minifloat.java
 * http://www.cs.virginia.edu/cs216/classes/lecture16.pdf
 * http://www.caslab.queensu.ca/~apsc142i/W2003/lecturenotes/Section_BCDJ/Lecture15/Lecture%2015.pdf

The special case with exactly 16 bits is called half precision and is part of the draft of the revision of IEEE 754 (IEEE 754r):
 * http://www.absoluteastronomy.com/reference/half_precision
 * http://www.absoluteastronomy.com/encyclopedia/H/Ha/Half_precision.htm
 * http://www.bostoncoop.net/~tpryor/wiki/index.php?title=Half_precision

Even smaller floating point numbers are called microfloats; see http://www.dclausen.net/projects/microfloat/

--Brf 08:31, 6 June 2006 (UTC)

If for no other reason, sources make it easier for others to follow along, or to see the same or similar subjects discussed and explained in different words or from a slightly different point of view. It's not criticism, but an attempt to make the article better. :)

The request for expert help is to try to get another person who is knowledgeable about the subject matter to review and see if all is well. I will see what other help I can scrounge, as I'm a bit unclear about some of the material. JByrd 00:04, 7 June 2006 (UTC)


 * I might not be eligible to do anything at all about this (except talk here) because I'm the author of one of the sources. However I'm also probably pretty close to being an expert on this topic. I haven't actually written an entire floating-point library, but I've written lots of routines that manipulate floats directly, over the course of many years, and done too much debugging that consists of manual conversion (looking at binary or octal or hexadecimal and figuring out what the floating-point number is). In order to compile my fairly extensive list of floating-point formats for different computers throughout history I've read through lots of lecture notes, and that's where the minifloats usually come up.


 * I did not make up the name "minifloat", I found it on one or more of the college/university course notes. I don't know which one(s) used the term, but it was one of: Washington University St Louis, Carnegie Mellon University, Plymouth State Univ., Clarkson Univ., Univ. of South Carolina, Univ. of Maryland, Univ. of Minnesota, Univ. of Utah, Univ. of Texas, Northwestern Univ., and Washington State Univ. Vancouver. I didn't keep links for most of them, because so many of them were the same. Also, most of the links die after a few years, because the course materials get cleared off the university server.


 * My page used to be at the Earthlink URL given above by Brf, but it's not there anymore. But by checking archive.org you can see that I was using the words "minifloat" and "microfloat" by Sep 22, 2004, but the words weren't there on the 17th. (And my personal journal has a note on Sep 19th saying that I worked on the "floatformats" web page. By browsing through the archive.org revisions you can see I was expanding the page a lot that year.)


 * One current source using the term "minifloat", and which I do not have a record of having used myself, is this one at Queen's Univ. Using archive.org again we can see that they had a "minifloat" Java program as early as August 20, 2003.


 * Wikipedia had been around a while, but this page didn't get started until May 29, 2006. Others can be found using Google or Bing, but any of them might have used me or Wikipedia as a source too. However, even if I am invalidated as a source, I believe you could still use Queen's University. As I said at the start, others will have to decide this because I have a conflict of interest.


 * Robert Munafo (talk) 21:14, 27 June 2012 (UTC)

Microfloat
Following your link: 'MicroFloat is a Java software library for doing IEEE-754 floating-point math on small devices which don't have native support for floating-point types... In this package you get support for 32-bit "float" and 64-bit "double" data types.'

In other words, your citation for microfloats is a page that deals with nothing smaller than IEEE-754 single.

That's not to say that the term isn't in widespread usage (I don't know if it is); just that your citation doesn't establish that fact. --75.36.132.72 10:36, 28 July 2007 (UTC)

Decimal values
If you follow the IEEE 754 standard, should the decimal value of 0 1110 111 not be: (−1)^0 × 2^(1110₂ − 7) × 1.111₂ = 240

when the exponent bias is: 2^(4 − 1) − 1 = 7

as the exponent is 4 bits long? I can't place the current value of 15360...
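To check the arithmetic above, here is a minimal Python sketch (the function name and layout are my own, not from the article) that decodes an 8-bit minifloat with 1 sign bit, 4 exponent bits, 3 fraction bits, and the IEEE-style bias of 7:

```python
def decode_minifloat(bits: int, exp_bits: int = 4, frac_bits: int = 3) -> float:
    """Decode an 8-bit minifloat (1 sign, 4 exponent, 3 fraction bits)
    using an IEEE-style bias of 2^(exp_bits - 1) - 1 = 7.
    Handles normalized, subnormal, infinity, and NaN encodings."""
    bias = (1 << (exp_bits - 1)) - 1              # 2^(4-1) - 1 = 7
    sign = (bits >> (exp_bits + frac_bits)) & 1
    exponent = (bits >> frac_bits) & ((1 << exp_bits) - 1)
    fraction = bits & ((1 << frac_bits) - 1)
    if exponent == (1 << exp_bits) - 1:           # all-ones exponent: inf or NaN
        return float('nan') if fraction else (-1) ** sign * float('inf')
    if exponent == 0:                             # subnormal: no implicit leading 1
        return (-1) ** sign * fraction / (1 << frac_bits) * 2.0 ** (1 - bias)
    significand = 1 + fraction / (1 << frac_bits)  # implicit leading 1
    return (-1) ** sign * significand * 2.0 ** (exponent - bias)

print(decode_minifloat(0b0_1110_111))  # (-1)^0 * 2^(14-7) * 1.111_2 = 240.0
```

With bias 7 the bit pattern 0 1110 111 decodes to 1.875 × 2^7 = 240, matching the calculation above rather than 15360.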

Different bias values
The paper changes the bias value several times in the examples. It would probably be helpful if they all were the same. It would also help if the example bias was 7 (?) so that it is centered on 1.0 like other floating point numbers. Spitzak (talk) 17:23, 27 April 2023 (UTC)
 * I agree; the current bias, which makes the smallest normalized number 1.0, is really weird and does not line up with other floating point standards such as IEEE's. I think that 1.0 should indeed be near the center. I don't know what exact value the bias should be, but if it were up to me, I think having the maximum non-infinity value be 240 would make sense because it roughly lines up with an 8-bit integer; the smallest normalized value would then be 0.015625, and the smallest non-zero value would be 0.001953125 (aka 1/512, not far off from 1/240, which makes this choice of bias very symmetrical around 1.0). Aaronfranke (talk) 18:05, 5 August 2023 (UTC)
 * Upon further research, all IEEE floating-point formats have the bias set to $$2^{\text{exponent bits} - 1} - 1$$. For example, IEEE half-precision has 5 exponent bits with bias 15 ($$2^{5 - 1} - 1 = 15$$), IEEE single-precision has 8 exponent bits with bias 127 ($$2^{8 - 1} - 1 = 127$$), IEEE double-precision has 11 exponent bits with bias 1023 ($$2^{11 - 1} - 1 = 1023$$), and IEEE quadruple-precision has 15 exponent bits with bias 16383 ($$2^{15 - 1} - 1 = 16383$$). I believe making this change is the only correct approach if we want an 8-bit minifloat to fit in with the rest of the IEEE floating-point formats. Aaronfranke (talk) 03:29, 6 August 2023 (UTC)
 * I wrote a Python script to generate WikiText tables of IEEE-style floating-point formats with any bit composition: https://gist.github.com/aaronfranke/0d1217e521c4ec784d39e92b5f039115 The current default values in the script are what I would propose this page lists as the 8-bit float format, due to its close conformance with IEEE formats. Aaronfranke (talk) 08:18, 6 August 2023 (UTC)
 * I updated the page; this should resolve the issue. Aaronfranke (talk) 16:09, 10 August 2023 (UTC)
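The bias rule discussed in this thread can be verified with a short Python sketch (the helper name is mine, not from any of the sources); it reproduces the biases listed above, and applied to a 4-bit exponent it gives 7, the bias suggested for the 8-bit minifloat:

```python
def ieee_bias(exp_bits: int) -> int:
    """IEEE 754-style exponent bias: 2^(exp_bits - 1) - 1."""
    return (1 << (exp_bits - 1)) - 1

# Exponent widths of the standard IEEE formats, plus a 1-4-3 minifloat.
for name, e in [("half", 5), ("single", 8), ("double", 11),
                ("quadruple", 15), ("minifloat", 4)]:
    print(f"{name}: {e} exponent bits, bias {ieee_bias(e)}")
```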