Talk:JPEG XR

Update to Licensing Needed
The licensing section of the article needs to state that it is a license quite similar to a BSD one, in that a BSD-licensed file does not become GPL-licensed just because it is compiled and linked into an application that has a GPL license. MS is stating that the code does not fall under a GPL-type license. The article also incorrectly states that it cannot be distributed with a GPL-based operating system. That is not true, because it can be distributed as its own self-contained binary alongside the operating system.

The article's licensing section doesn't mention that most of the other consumer-level image, video, and audio formats are under much more closed licensing terms than HD Photo.

—The preceding unsigned comment was added by 199.253.16.1 (talk) 16:22, 4 May 2007 (UTC).

Working
I am working on this article. Please hang on for a day or two. -- Soumyasch 19:17, 12 May 2006 (UTC)
 * Done. -- Soumyasch 12:42, 15 May 2006 (UTC)

Licensing
Any idea of the licensing issues of this format? Is it patented? Free to use in any application? --WhiteDragon 14:11, 25 May 2006 (UTC)
 * You need to agree to a license agreement just to read the specs... I'll give you 3 guesses. —Preceding unsigned comment added by 69.248.114.49 (talk • contribs)
 * Well, the format will most definitely be patented. But how it will be licensed is still unclear. Most analysts suggest that MS make it royalty-free for greater acceptance; they also feel doing so would further promote use of Windows Media formats. -- Soumyasch 09:41, 26 May 2006 (UTC)
 * Licensing is now explained on Bill Crow's blog, there's a lot of new info there that could be added, so in the meantime I'm going to put a link to the blog on the article's links. ~ (22 July 2006)
 * Ultra-brief summary for lazy readers: free until 2010, free for small-volume use (up to 50,000 units per year), free for hosted applications. See Crow's blog for details.

The problem with the licence description is that it mentions 'open-source operating systems'; given that there are multiple operating systems under different open-source licences, the description needs to be rephrased as "open-source operating systems licenced under the General Public Licence (GPL)". - Kaiwai (kaiwai dot gardiner at gmail dot com)

Another licensing issue: I don't know where the 'Compression Algorithm' bit came from, but I strongly suspect it came from the HD Photo Bitstream Specification 1.0. It is explicitly forbidden by the License Agreement accompanying that document to distribute any part of the document, and, even better, it is not allowed to use any of the reference material for a product other than one that can interface with a Microsoft product (thus Windows). Just some thoughts; I don't know how to implement this on this page.


 * As part of JPEG XR, royalty free access to patented stuff is guaranteed. --soum talk 08:22, 2 August 2007 (UTC)

As for all other JPEG technologies, the target is to make the baseline technology of JPEG XR available under a royalty-free licence. That means that while patents will apply, you shall be able to get a licence-fee-free licence to use JPEG XR. Currently, the situation is that Microsoft is granting licence-fee-free access to their JPEG XR patents, though at least two additional patents from third parties have been identified that *might* apply to JPEG XR and whose owners are not willing to grant royalty-free access to the technology. —Preceding unsigned comment added by 141.58.47.118 (talk) 15:12, 17 November 2009 (UTC)

Miraculous claims
A truly integer-only, block-based compression/decompression algorithm (that nonetheless handles floating-point color) that performs more than twice as well as JPEG, while still retaining comparable computational cost? Unless I'm really behind on developments in the state of the art, this sounds way too good to be true.

Are there independent verifications of these claims, or at least slightly more concrete descriptions of the algorithm(s) in question? --Piet Delport 12:36, 27 May 2006 (UTC)


 * Well, I can't tell you what exact algorithm MS is using, but the number of shifts and adds mentioned does sound fair to me. I am researching an image format, BTC-PF, which does give comparable performance. As for compression ratio and quality, we still have to wait. And as for the integer/float stuff, no compression algorithm processes color info as it is; rather, it is converted to some format suitable for the algorithm. The transform is what is integer-only here. -- Soumyasch 12:57, 27 May 2006 (UTC)
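The color conversion step mentioned above can be illustrated with the standard JPEG (JFIF) RGB-to-YCbCr transform. This is a generic example only, not the actual WMPhoto transform, which Microsoft had not disclosed at the time:

```python
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Full-range JFIF RGB -> YCbCr conversion (channel inputs 0..255)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# Pure white maps to maximum luma and neutral chroma (Cb = Cr = 128):
y, cb, cr = rgb_to_ycbcr(255, 255, 255)
```

The point is only that the codec operates on a transformed representation of the color data, not the raw RGB samples.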


 * With "performance", I mean the algorithm's compression performance, not the raw speed/computational cost. (In other words, what the quote "delivers a lossy compressed image of better perceptive quality than JPEG at less than half the file size" is talking about.)
 * And about the color conversion/transform: you can't just "convert" floating point data to integer and then back again.  Their dynamic ranges are fundamentally different:  if you simply round the floating point values to integer/fixed-point ones, you throw away pretty much the entire range/precision that floating point gives you to begin with.  --Piet Delport 15:36, 27 May 2006 (UTC)


 * Yes, floating-point numbers can be represented by integers without loss of data: use two different ints to store the mantissa and the exponent separately, and transform them into something else. I don't know exactly what they are doing there, but clever encoding techniques, under some constrained conditions, can make integer ops simulate floating-point arithmetic. Such tricks would be analogous to using an 8-bit memory cell to store numbers of 2^30 or even much more, under certain conditions. So with the amount of public info out there, neither way can be conclusively proved or disproved. We need to wait for more information to be disclosed, or at least for the codec to be made available for use. -- Soumyasch 16:37, 27 May 2006 (UTC)
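The mantissa/exponent idea above can be sketched in a few lines. This is just an illustration of lossless float-to-integer decomposition, not a claim about what the codec actually does:

```python
import math

def float_to_ints(x: float) -> tuple[int, int]:
    """Losslessly split a float into (mantissa, exponent) integers,
    so that x == mantissa * 2**exponent exactly."""
    m, e = math.frexp(x)           # x == m * 2**e, with 0.5 <= |m| < 1
    mantissa = int(m * 2 ** 53)    # scale the 53-bit significand to an int
    return mantissa, e - 53

def ints_to_float(mantissa: int, exponent: int) -> float:
    """Reassemble the float: mantissa * 2**exponent."""
    return math.ldexp(mantissa, exponent)

x = 0.1
m, e = float_to_ints(x)
assert ints_to_float(m, e) == x    # the round trip is exact
```

Whether such a representation helps an integer-only transform in practice is, as discussed above, a separate question entirely.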


 * Sure, it's possible to emulate floating point arithmetic this way, but that doesn't make any sense in this context. It would just complicate the code, while also slowing it down (probably significantly; PC's don't have FPUs just for decoration).
 * Besides, even if this were done, it would still be "floating point", not "integer": using a superficially different representation of the parts of the floating-point numbers does not change the fact that the computation is still floating point, instead of integer/fixed point. --Piet Delport 20:25, 28 May 2006 (UTC)


 * What I am saying is that the info is from MS's WMPhoto spec sheet. And as of now, info on WMPhoto is a bit scarce, so we have to wait till the codec is released so that independent tests show what exactly it does. And emulating floating-point arithmetic using integer ops can sometimes be faster, provided many other conditions are satisfied, such as using logarithms to represent the numbers and other such tricks; they do not bloat the code, but rather make possible computations which would have been impossible within given time/space constraints. But generally such tricks hold for very specialized uses, and I do agree that using them for image compression seems a little far-fetched. But as I said, we are still short on info as of now. Still, I will try to rewrite that bit as best as I can. Please help out as much as you can. -- Soumyasch 01:15, 29 May 2006 (UTC)


 * What concerns me is that that seems to be the least far-fetched of the claims. :)
 * I'm in favor of removing/commenting out the claims until a better/more solid citation can be found, as per Verifiability. --Piet Delport 08:10, 29 May 2006 (UTC)

Well, I removed the confusing bit for now. What do you think? -- Soumyasch 08:28, 29 May 2006 (UTC)


 * "Confusing bit"? The removed text looks sensible to me. :)
 * The miraculous-sounding compression performance claims are still there, however. --Piet Delport 10:33, 29 May 2006 (UTC)


 * What? You want to remove the performance claims? C'mon, the figures have been quoted by every publication now. You think it would be wise to remove that? And I removed the bit that stated integer ops handle floating-point color losslessly, something I presume you were confused by ("A truly integer-only (that nonetheless handles floating-point color)"). -- Soumyasch 12:21, 29 May 2006 (UTC)


 * I think you're getting the wrong idea here:
 * I don't want to remove the performance claims, I want to correct them. I'm almost certain that they're simply being quoted out of context, or have otherwise been distorted in the telling.  (The other possibility is that the claims really are completely true, and that Microsoft is harboring some earth-shattering new advances in data compression and information theory.  As fantastic as this would be, I don't have my hopes up.)
 * I was not "confused" by the integer/floating point bits; I was saying that, like the performance claims, they don't make much sense if you take them at face value, for reasons I already explained above.
 * In other words, these claims raise more questions than answers, and I think Wikipedia should be explaining them, or otherwise clarifying the situation. (Which, in practical terms, probably means waiting for someone with access to more solid information about the algorithm to drop by and notice the talk page...)  --Piet Delport 19:41, 29 May 2006 (UTC)
 * If you look at MS's spec sheet, the numbers have been quoted from there. And that spec sheet is the most authoritative source we have now (considering the software is still not released). (Yes, even I do not entirely believe the claims. I myself am researching image and video compression and know that at that performance level, you most likely have to limit yourself to a "decent enough" quality :-) ). But still, independent reports (reports by websites such as CNet) confirm that WMPhoto images "visibly contained more detail than JPEG and JPEG 2000" at the same compression level. So I guess we have to believe what MS states, as of now, till it is released and subsequently dissected. -- Soumyasch 11:12, 30 May 2006 (UTC)


 * I was at the WinHEC session presenting WMPhoto, so I can essentially verify the substance of their claims. One of the most telling slides was the comparison of error images between JPEG and WMPhoto. At half the bit rate, WMPhoto actually had more absolute error, but its distribution was very uniform and even, much like random noise, while the JPEG errors had the characteristic block structure, granules near edges, and so on. There is also the question of default settings for chroma subsampling. The capabilities do not differ significantly between the two formats, but most JPEG encoders do chroma subsampling by default, while WMPhoto does not. Lastly, limitations in JPEG's Ycc<->RGB conversion tend to systematically darken colors near the edge of the gamut, while WMPhoto solves this problem. Some of the "2x" benefit would be achievable by better tuning of JPEG implementations, but I do think it reasonably accurately represents what the average user will see in typical applications.--Raph Levien 20:30, 1 June 2006 (UTC)


 * Does that mean that the "half the bits" claim is comparing JPEG with subsampling against WMPhoto without subsampling? Intriguing.

Here again one needs to be very careful about how to define "the error". MS has claimed in the past to have an absolute error less than that of JPEG, which can be confirmed. However, this type of error has little to do with PSNR (the log of the mean squared error), which is the most accepted error measure in the community, and the two in turn have little to do with the visual error (as found in subjective tests).

What can be said is that the PSNR (MSE) is somewhere half-way between JPEG and JPEG 2000, and closer to JPEG 2000.

As far as the visual error (subjective quality) is concerned, JPEG XR is actually much closer to JPEG than JPEG 2000 if applied "naively" as in the reference implementation, sometimes even worse than JPEG. However, this is not the end of the story. Similar to JPEG 2000 and JPEG, JPEG-XR can be improved and tuned towards visual performance. These optimizations of course increase the complexity (maybe more than similar improvements would require for JPEG or JPEG 2000), but they can help significantly and bring the JPEG XR performance close to JPEG 2000.

Thus, I can certainly not "confirm" Microsoft's claims insofar as "useful" error measures are concerned. Rather, MS decided to use an error measure they were good at in their demos. It is more realistic to say that JPEG XR is midway between the standards, and where exactly depends on the measure. —Preceding unsigned comment added by 141.58.47.118 (talk) 15:20, 17 November 2009 (UTC)
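The distinction drawn above between maximum absolute error and PSNR can be demonstrated with synthetic data. This is a generic illustration of the two measures, not a measurement of any actual codec:

```python
import math

def psnr(orig, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, decoded)) / len(orig)
    return 10 * math.log10(peak ** 2 / mse)

n = 10000
orig = [0.0] * n
uniform = [2.0] * n                        # every sample off by 2
sparse  = [2.0] * 100 + [0.0] * (n - 100)  # only 1% of samples off by 2

# Both error patterns have the same maximum absolute error (2),
# yet their PSNR differs by 20 dB:
print(psnr(orig, uniform))   # about 42.1 dB
print(psnr(orig, sparse))    # about 62.1 dB
```

Two codecs can thus tie on absolute error while differing wildly on PSNR, and neither number settles the subjective (visual) comparison.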

DRMs
Will there be DRM on that format? David.Monniaux 13:47, 8 June 2006 (UTC)
 * No, at least, not in the near future. See the last comment on this blog entry. ~

Licensing?
How open will this format be? Will third parties be able to write plugins using this format? Will it be legally encumbered to sabotage any hope of interoperability? grendel|khan 17:45, 10 December 2006 (UTC)


 * Why would you suspect Microsoft of using a format standard to control and thwart interoperability? Have they ever done that before? Dicklyon 18:07, 10 December 2006 (UTC)


 * Microsoft has 15 issued U.S. patents that mention "lapped biorthogonal transform", mostly with inventor Henrique S. Malvar. Those might be a good source of public-domain description and drawings of their methods. Dicklyon 18:27, 10 December 2006 (UTC)

The format will be open, and MS is playing nice here as far as I can see. MS grants royalty-free licences to the JPEG XR technology. However, see above: there are other players in the industry that might not play so nicely. Whether their patents would withstand a court challenge is, of course, another question. —Preceding unsigned comment added by 141.58.47.118 (talk) 15:22, 17 November 2009 (UTC)

Summary list of formats needs addition
The list of audio/video formats at the bottom of this article needs to have some added links for physical media containers for audio/video for the major types of media used (DVD, SVCD, VCD, audio CD, etc). —Preceding unsigned comment added by User:199.253.16.1 (talk • contribs)

Integer Ops
"Integer operations typically work faster than divides". I believe this should be "integer operations typically work faster than floating-point operations"; there is an integer divide among the integer operations.
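The point can be made concrete: divides by powers of two reduce to bit shifts, which are among the cheap operations an integer-only transform relies on. A generic illustration, unrelated to any particular codec:

```python
def halve(x: int) -> int:
    """Divide a non-negative integer by 2 using a bit shift."""
    return x >> 1

# Right shifts agree with floor division by powers of two for x >= 0:
assert halve(10) == 10 // 2 == 5
assert all((x >> 3) == x // 8 for x in range(1024))
```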

Support
Can there be a section which talks about the support of this file format, i.e. which programs can read it? Althepal 05:41, 30 April 2007 (UTC)

Examples?
How about someone comparing jpeg to hd photo (at same file size) and uploading as a png? Althepal 23:22, 29 July 2007 (UTC)


 * Wikipedia is the wrong place for that sort of thing. Any such comparison is inevitably biased. The most you could reasonably put here would be examples of typical HD Photo artifacts. JPEG 2000 contains this image, and I suppose it could be adapted for the HD Photo article. But even that seems pretty biased, because one animal image at one particular resolution doesn't tell you much about performance in other domains. In particular, both JPEG 2000 and HD Photo are targeting professional digital photography, where the resolution is normally much higher. There's also the problem that some encoders do a much better job than others; the creator of that image doesn't even indicate in the description which JPEG 2000 encoder he used. -- BenRG 14:49, 30 July 2007 (UTC)


 * I agree with Ben that it would be quite difficult, but I also think it might be possible and worth doing.


 * First, I'm not sure such an attempt will show anything valid by the time it goes through the web. I've visited websites proclaiming illustrated differences, but they either look the same to me, or grossly exaggerated, in many cases.  Furthermore, some of these websites seem to just be regurgitating Microsoft's "doctrine" as if it proves anything, a strategy I automatically view with suspicion.  (Just do a Google search for "HD Photo" JPEG, with the quotes as shown, to find them.)


 * Second, I am sure *.png is the wrong format for such an attempt. PNG was never intended for high resolution professional photography.  More realistically, it's a superior replacement for *.gif and ideal for screen shots.


 * For it to have a prayer of being useful, IMHO the test samples would need to be converted back to TIFF. To see if your browser will display TIFFs, open this link in a new window (broadband required):
 * http://www2.tulane.edu/beauty_shots/gibson2_cmyk.tif
 * On Mac OS 10.4.10 with Safari it opens nicely. With Firefox 2.0.0.5, it calls up QuickTime as a "Helper" first, but it also opens there.  I fully expect it works on Windows XP (and Linux) just as well.  (If it doesn't, get QuickTime for Win and Firefox; they're both free.  No QT for Linux, AFAIK.)


 * The best approach would be to simply show thumbnails linked to full-sized and downloadable files, always converted from the same original TIFF (or Raw converted to TIFF), all of which (including the "original" TIFF) readers could then see for themselves. This would protect the servers - only people who actually want to see them would receive them.  The article would need to clearly state the steps taken to create the samples.  Such an approach might be novel for a wiki, but I see no reason why it wouldn't be allowed as long as the images are either public domain (or donated), and I'm sure it would work.


 * The use of original RAW files would be problematic, as most browsers would be unlikely to know how to open them.


 * An important issue to address: as photos become ever larger, HD Photo becomes more efficient. For small originals, even old JPEG is good and JPEG 2000 is clearly the winner.  For really big files, HD Photo clearly beats original JPEG, but it becomes a toss-up with JPEG 2000.  Incidentally, it would be more appropriate to compare HD Photo to JPEG 2000, yet most people seem to be asking for comparisons to JPEG.  Whatever...  Give people what they want!


 * Personally, I see absolutely no reason to bother with HD Photo. I do not believe HD Photo is better than JPEG 2000.  (Besides which, I *always* use TIFF and Raw formats for archives, and JPEG 2000 at 100% for everything else, anyway.)


 * A final thought: My photos are far too valuable (to me at any rate) to simply accept the word of others. IF I *had* seen any compelling evidence of the truth of claims for HD Photo's superiority (I did not), I would do my own experiments with my own photos.  I originally arrived at the use of TIFFs for archives as the result of hundreds of experiments.  TIFF's longevity is due to 3 factors: 1) they got it nearly right from the get-go, 2) it was flexible enough that it has always adapted as the needs of photographers changed, and 3) it's not proprietary (HD Photo is).


 * To provide an encyclopedically balanced and neutral presentation of the pros and cons of HD Photo, all of the above (and possibly more) need to be addressed.  Frankly it's more work than I care to tackle, but perhaps someone else would like to take my outline and run with it.  If so, be bold!


 * I Hope This Helps.


 * Badly Bradley 23:15, 30 July 2007 (UTC)


 * No, PNG is the ideal format, since it has lossless encoding and universal compatibility with browsers. You'd need to do pretty severe compression (maybe 50:1) in both HD and JPEG to get enough distortion to see.  Might be worth a try, but based on the JPEG/JPEG2000 comparisons I've seen, the results vary a lot with the image type, so it might be impossible to give an unbiased comparison. Dicklyon 00:21, 31 July 2007 (UTC)


 * I went back and took a much more careful look at PNG. I had somehow completely missed the point that it supports up to 64-bit RGB.  Given that, it is suitable for accurately displaying the difference between JPEG, JPEG 2000, & HD Photo.  Since I only use lossless compression in my photography, I need to reconsider that I might indeed use PNG more often than just for screen shots...  (Metadata is still an issue though - see below.)


 * Quoting from the Wikipedia's own article about PNG: "PNG does not support Exif image data from sources such as digital cameras, which makes it problematic for use amongst amateur and especially professional photographers. TIFF does support it as a lossless format, but is much larger in file size for an equivalent image."


 * For the purpose we are proposing, the meta data would need to be explicitly stated in the article anyway, so this is not a deal breaker.


 * Quoting from the Wikipedia's own article about PNG:
 * "Internet Explorer ≤6 also renders PNGs in a slightly incorrect color gamut."
 * "Internet Explorer 7 has addressed the transparency-rendering issues, though not the gamma- and color-correction errors"
 * "Apple's Safari, Mozilla Firefox, and Opera have full PNG compatibility."


 * Therefore, as a Mac User, I would have no trouble with someone using PNG for this purpose (though I would prefer to see a TIFF used); Windows users not willing to use Firefox might have a problem with PNG though.  And what about Linux users?  I'd guess they could use Firefox.


 * Badly Bradley 14:19, 31 July 2007 (UTC)


 * Using more than the usual 24-bit RGB would be pointless, because most browsers wouldn't handle it, and all would have to convert to 24-bit RGB for display. Just use the standard lossless encoding of 24-bit RGB and you convey the exact bits that the other compression schemes decode to.  You can display those with no profile conversion or anything for comparison with each other.  I'd look into the IE problem more; I don't think there's any actual problem there, and certainly not enough to get in the way of a side-by-side comparison. Dicklyon 15:21, 31 July 2007 (UTC)


 * "Using more than the usual 24-bit RGB would be pointless" - AGREED, absolutely! I was merely expressing my great surprise at discovering 64-bit support.  If it also had Infrared (IR) channel and Exif support, it just might interest professional photographers, and certainly would interest advanced amateurs such as myself.  (The cleverly crafted header intrigues me.)  OTOH, TIFF already does everything I need it to do.


 * Also, I've gone back to my first comments on this thread to strike-through an inaccuracy.


 * A parting thought: 4 example images (1 each of PNG, JPEG, HD Photo, JPEG 2000), from each of multiple sets, each set at several resolutions, would be needed. Suggested set subjects might include a close-up of human faces, a macro of an insect, a photo of a complex circuit board, a landscape, buildings in a city setting, a suspension bridge (to see how Moiré patterns are handled), a scan of a text document with fine print (for the benefit of readers interested in OCR), and any subject with both bright highlights and deep shadows in the same image (i.e. wide dynamic range), etc.


 * I feel that we have pretty thoroughly defined the tasks. The really big question remains: does anyone actually want to tackle this potentially really big job?  Done right, it would be an outstanding achievement.


 * Badly Bradley 18:11, 31 July 2007 (UTC)


 * Suppose someone did this, and it showed that HD Photo was the best format. Would you trust that result? What if it showed JPEG 2000 was the best—would you believe it then? I think the only sensible answer is "no" to both questions. It's really, really easy to unconsciously bias the results of tests like this. For example, if you have previous experience with JPEG 2000 but not HD Photo, you might notice that the results of the JPEG 2000 compression look worse than you expected, and search for the problem so that you can get a more accurate result. By doing that for JPEG 2000 and not for HD Photo, you introduce a statistical bias.
 * Also, I don't understand why you care about this in the first place. No image format will ever have higher fidelity than TIFF or PNG; the only reason not to use them is the larger file size. If you don't care about the file size, which you apparently don't, then why even consider other formats? -- BenRG 02:40, 1 August 2007 (UTC)


 * You seem to be addressing the guy who has already said he has no interest in lossy compression formats. However, be aware that HD Photo also has lossless compression, so it's not out of the question that he might prefer it, even though file size is only a secondary concern.  But the point of comparison images is obviously at the other end of the spectrum, where the file size is compressed low enough that there are visible degradations, which can be compared across different compression schemes.  I think such an illustration can have value, even though I know I wouldn't trust it to be representative or general. Dicklyon 05:30, 1 August 2007 (UTC)

I use Mac OS X and Windows XP, and I have both Safari and Firefox on both comps. I almost always use Firefox, because it is far more secure than IE, and far more feature-packed (when extended) than Safari. I only use IE when a Microsoft website requires it. However, truth be told, about 70% of people use IE. Nevertheless, PNG has wider support than TIFF, and I don't even know if Wikipedia supports TIFF. I think the focus of the comparison would be on artifacts, not color reproduction, but even in Internet Explorer the color thing wouldn't mess up the point of the comparison. Althepal 18:46, 31 July 2007 (UTC)


 * Well, I don't think the comparison should be made. Each algorithm has its strengths. In my (admittedly limited) tests with standard test images, I found HD Photo outperformed JPEG 2000 (both in lossless mode) to varying degrees in 8 out of 15 images. So, it was more or less inconclusive. Even in lossy modes, it was better in some (between 40 and 60%) of the test cases. So careful selection of the pictures can skew the results. The best thing that can be done here is to link to a proper review done by some third party, which tests over a significant number of samples, rather than let the user draw a skewed inference based on just one image. As for PNGs, all browsers have perfect support for images without any transform (alpha channel, gamma correction), so PNG can be used to accurately represent the compression artefacts as well; but if browsers post-process the images, they cannot offer an accurate representation. --soum talk 07:44, 1 August 2007 (UTC)


 * I agree; better to find a secondary source, a published comparison, than to do our own original research. Dicklyon 15:34, 1 August 2007 (UTC)


 * BenRG asks "Would you trust that result? ...would you believe it then? I think the only sensible answer is "no" to both questions."


 * Indeed. IF I saw something credible, I would then do my own testing, which as I already stated was how I arrived at the use of TIFF and JPEG 2000.  (Others have noted that HD Photo's lossy modes seem better in some scenarios and worse in others...  My seeming "interest" in this area was simply about trying to be helpful, and was perhaps unwelcomed after all.)


 * BenRG asks "If you don't care about the file size, which you apparently don't, then why even consider other formats?" to which Dicklyon replied "[Badly Bradley] has already said he has no interest in lossy compression formats. ... HD Photo also has lossless compression ... he might prefer it, even though file size is only a secondary concern."


 * Yes, the reason I am interested *is* the lossless mode. And I do care about file size, but that must be balanced against performance.  The problem I encounter with compression is that, depending on the compression method and circumstances, it can take anywhere from slightly shorter to dramatically longer to open a compressed file than it would take to simply move the much larger uncompressed file in the first place.  This is greatly affected by the relative speed of the disk drive, and exacerbated if the system ends up needing to use virtual memory during the decompression.  I always compress the files archived to optical media, where decompress/open is typically at least as fast as uncompressed open (and in the case of CD-ROM space is short).  I always move working files to my fastest volumes and never compress those.  I see just enough merit that I will be experimenting with HD Photo.  I want to know how the claimed much tighter compression affects the file opening times.


 * It bothers me that HD Photo is a proprietary format. However, if the just announced reincarnation of HD Photo as a "JPEG XR" standard comes to pass (and it may well never happen) I could set that objection aside.


 * Dicklyon later says "better to find a secondary source, a published comparison, than to do our own original research."


 * There's the rub! I have yet to see a published comparison I believed, while there are enough people lined up on both sides of the debate to leave me very curious; yet if we were to pursue this inside Wikipedia, it seems likely we would be shot down on the WP:NOR front.


 * Honestly, I really am interested in this topic; frankly, I begin to feel as if, just maybe, we have reached the stage of "beating a dead horse".


 * Badly Bradley 17:12, 2 August 2007 (UTC)


 * Yes, including a comparison would indeed be OR, and thus not suitable for this article. That does not mean it is not an important topic. It IS a very important study, but Wikipedia is not the proper channel to disseminate the results. Your efforts are not at all unwelcome; it's just that the policies that govern the content here have tied our hands.


 * There are not enough published comparisons which do justice to the leading formats out there. So there is a lot left to interpolation or word of mouth. We have to deal with that, for now at least. If it gets standardized as JPEG XR, it will have more implementations (there is only one at the moment!), and then we can have a better picture of how well it performs - whether the advantages or limitations are of the format itself or of the MS implementation. And btw, JPEG XR or not, the pledge not to sue applies to any implementation (IANAL, though).


 * Btw, I sometimes research image compression algorithms and had come across biorthogonal transforms, which, at least in theory, distribute noise, unlike JPEG, which concentrates the noise where there is a sharp transition in frequency. This actually reduces the visibility of artefacts and gives the impression of a much less blocky image, even at the same signal-to-noise ratio. Though I haven't done a thorough comparison yet (I am not out to develop a generic format, but rather to help develop or tune algorithms for specific usage scenarios, so my tests are extremely targeted!). If I ever get around to compiling the results (don't count on it), I will post a link to the publication here. :P Though, be warned, it won't be JPEG/JPEG 2000/HDP evangelism.


 * As for lossless performance (memory/computation), it depends *significantly* on the implementation. A GPU-bound implementation will be a lot faster than a CPU-bound one (even a multi-core one), due to the nature of multimedia computations. But in regular usage, HDP (the MS implementation) does not seem slow. It's like any other compressed image format - the lag between opening a compressed image and an uncompressed one is hardly noticeable to a regular user (I haven't timed the performance). Moreover, since disc access is much less for compressed formats, computation slow-downs are more than compensated for. And how large are the images you use that they would cause your system to thrash! Also, with HDP, a lower-than-original resolution rendering does not need to be decoded to full resolution and then resized (there are some constraints, though). That's a huge gain.


 * My personal opinion on the format is "it looks promising but needs more looking into". And what I have said before is in the hope that others realize the potential and take a deep look into the details and find out whether it lives up to the promise. Who knows, maybe this will end with someone doing a reliable study that we can then quote! --soum talk 07:20, 3 August 2007 (UTC)


 * I still believe we have collectively laid out a viable plan. We can hope that someone will see it, like it, run with it outside Wikipedia, and then let us know to look for it.  If I weren't already buried in a backlog of projects, I'd do it myself.  (Fact: I'm deliberately ignoring something I really should be doing, because I find this far more interesting.  Shame on me!)


 * I've come across "biorthogonal transforms" before. My impression was that at high ratios (i.e. the point where losses are becoming noticeable) it produces an effect reminiscent of the way portrait photographers "soften" faces, which isn't a bad thing really.  Beyond that it just looks blurred.  Does that jibe with your experience?


 * I'd never heard of using the GPU to decompress files from the HDD. How would you go about finding out which algorithms can do that?  Perhaps there is a wiki article you could refer to.  (I'm assuming they fall back to the CPU(s) in the absence of a suitable GPU.)


 * As for thrashing - my system is maxed out at 1 GB of SDRAM. The upgraded HDD already pumps data about as fast as the bus can take it.  The OS and apps have both become so honking huge that it takes surprisingly little to fill the system.  So, when I'm doing photography stuff, I quit everything not required, thus avoiding virtual memory.  (When the machine only had 512 MB, it was impossible to avoid.)


 * My master image files (the permanent archives) are 64-bit RGBI at 4800 ppi, scanned from 35 mm negatives of ultra-fine grain pro stock (expensive but extremely satisfying). The grain is just becoming visible at this resolution.  They run about 85 MB uncompressed.  Avoiding virtual, they take 23 to 24 seconds to open from a 7200 RPM ATA66 HDD.  Otherwise, it really does thrash!  (My next Mac will have 4 GB of SDRAM and a 7,200 RPM SATA HDD, but at the rate software grows it will only be a temporary reprieve...  I don't see my images growing beyond 100 MB each though, not even with further refinements in the grain structure.)


 * As much as I enjoy this, other people are waiting on me to get back to work...


 * Badly Bradley 19:51, 3 August 2007 (UTC)

Well, I was actually inspired to do some lossless compression tests. I used two images from the RAWpository. The first was Nikon D2X sample #2; it's an outdoor scene with sky, trees, a building, and ripply reflective water in about equal proportion, 4288x2848, 24bpp. The second was Konica/Minolta 7D sample #2; it's an indoor image of someone's desk, in fairly good light but with lots of CCD noise visible, 3008x2000, 24bpp. Results (again, this is lossless):

So JPEG 2000 beats the pants off everyone else, much to my surprise, and TIFF and PNG were far more competitive than I expected (except for compression time). On the other hand, the difference between the best and worst compression is only about 33%, which is too little to matter in most cases.

Incidentally, libjpeg at 99% quality with chroma subsampling off gets the images down to 12,627,855 and 6,856,482 bytes respectively, and I'm damned if I can see the slightest difference between the output and the input, even flipping between them at high zoom. So one can argue about the practical usefulness of lossless compression modes.

Note that I would expect different encoders to produce different results, even in lossless modes. For example, I used PNGOUT because it usually compresses far better than most PNG implementations (it's very slow by design), and I used Photoshop and ZIP for TIFF because it did the best of the combinations I tried. But I only tried one implementation of HD Photo and JPEG 2000. So there's still plenty of room for bias here. -- BenRG 21:02, 3 August 2007 (UTC)
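BenRG's methodology (identical raw pixels in, byte counts out, one encoder per format) can be scripted. In this sketch, stdlib compressors stand in for the PNG/TIFF/JPEG 2000/HD Photo encoders, since the point is the comparison harness rather than the codecs themselves; the synthetic "image" is illustrative.

```python
import bz2, lzma, zlib

# Synthetic "image": smooth gradient plus a repeating texture, as raw bytes.
w, h = 256, 256
raw = bytes(((x + y) // 2 + (x * y) % 7) % 256
            for y in range(h) for x in range(w))

results = {
    "zlib (deflate, level 9)": len(zlib.compress(raw, 9)),
    "bz2": len(bz2.compress(raw)),
    "lzma": len(lzma.compress(raw)),
}
for name, size in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: {size} bytes ({len(raw) / size:.2f}:1)")
```

As with the PNGOUT-vs-other-PNG-encoders point above, rankings from one such run say as much about the particular encoder and input as about the format.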


 * The site you linked only has NEF and JPEG versions, no uncompressed or lossless rendered image. What did you start with for your compression experiments?  Where can we find a copy?  Dicklyon 22:24, 3 August 2007 (UTC)


 * I used the NEF and MRW files, which are camera-raw formats recognized by Photoshop. But come to think of it, I forgot to think about the size of the camera-raw files; they're 20,281,988 and 9,190,688 bytes, which is smaller than many of the compressed formats. Now I'm starting to doubt my whole methodology. Maybe I should have downsampled the images to get rid of the Bayer pattern, or whatever these cameras use.


 * I could upload PNG versions of the images to Wikipedia (the license allows it), but they're pretty large and I'm not sure it's a reasonable use of Wikipedia's resources. If you want to do your own tests you should probably do it with different images anyway. No sense repeating my mistakes—make your own. -- BenRG 00:31, 4 August 2007 (UTC)


 * "No sense repeating my mistakes—make your own." ;-o   Now, THAT is something I'm an expert at!  I like to refer to it as "Trail Blazing" though die-hard Wikipedians call it "Original research". Badly Bradley 01:45, 4 August 2007 (UTC)


 * Could you email the pngs to me at my user handle at acm.org please? That's interesting about all the rendered images being bigger than the compressed raw files; but not all that surprising, really.  Dicklyon 04:40, 4 August 2007 (UTC)

Kodak's standard test images
I've just learned that Kodak published "a default set of Kodak reference images that are commonly used in the image compression industry to demonstrate the effectiveness of various methods" (ref.: StuffIt® Image Compression White Paper, Date: 1/5/2006 (Revision 2.1) page 5 of 15). The reference set consists of a series of uncompressed *.png photos. Thumbnails of the Kodak reference images can be seen on page 6 of 15; obtain your own free copy of the white paper by filling out a very brief form at http://www.stuffit.com/imagecompression/.

I'm still trying to find out how to obtain a set of Kodak's images, and whether or not they are free. In the meantime if someone else already knows, please enlighten all of us.

Meanwhile I did some stopwatch-timed tests on my Mac and verified what I already strongly suspected. When I compress a *.tif, it slashes the size by 36%. When saved to my fastest HDD, the compressed versions take about 31% longer to open. To make it a fair test, my graphics editor was launched well prior to the test and left open continuously. (E.g. a 3.5" x 4.5" 24-bit RGB image: 7.5 MB uncompressed opens in 3.54 sec, vs. 4.8 MB compressed opens in 4.63 sec.)
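The same "smaller file, slower open" tradeoff can be timed in code: reading fewer bytes plus a decompression pass, versus reading the raw bytes directly. A rough stdlib sketch, with zlib standing in for TIFF's deflate-style compressor; absolute numbers are machine-dependent.

```python
import time, zlib

raw = bytes((i * 31) % 256 for i in range(2_000_000))   # ~2 MB stand-in image
packed = zlib.compress(raw, 6)

t0 = time.perf_counter()
copy = bytes(raw)                        # "open uncompressed": just the bytes
t1 = time.perf_counter()
unpacked = zlib.decompress(packed)       # "open compressed": decode on top
t2 = time.perf_counter()

print(f"compressed to {len(packed)} bytes ({len(raw) / len(packed):.2f}:1)")
print(f"plain copy {t1 - t0:.4f}s, decompress {t2 - t1:.4f}s")
assert unpacked == raw                   # lossless round trip
```

Whether the decompression cost or the saved disk I/O dominates depends on the drive, which is exactly the tension in the measurements above.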

Badly Bradley 17:46, 14 August 2007 (UTC)


 * Or just get the white paper here. Dicklyon 21:23, 14 August 2007 (UTC)


 * I found the images, too, here. Dicklyon 21:24, 14 August 2007 (UTC)

JPEG offers a set of test images for image quality and performance assessment that have been used internally. These images are available for testing on a royalty-free basis. Note that the Kodak test set is potentially problematic as their images are both too small, and also potentially have licence implications. To get access to the JPEG test set, write to "thor at math dot tu dash berlin dot de". —Preceding unsigned comment added by 141.58.47.118 (talk) 15:26, 17 November 2009 (UTC)

Software support
XP users can already view HD Photos by downloading Windows Live Photo Gallery. Matthew_hk  t  c  21:18, 13 September 2007 (UTC)

Standardization as JPEG XR
If it has not been already standardized, why is the article title changed? The lines in the article say "under consideration" and "tentatively titled". —Preceding unsigned comment added by 221.128.147.219 (talk) 15:57, 19 January 2008 (UTC)
 * I agree. Until the announcement from Official sources that "HD Photo is now JPEG XR", we should stick with the old name. If no one objects, I will move it back. --soum talk 23:36, 19 January 2008 (UTC)

PCT?
Halfway through the "Compression algorithm" section, the article begins referring to something called the "PCT", which I assume is some kind of transform. A Google search gives two possibilities: "Pairwise Correlating Transform" and "Photo Core Transform". The former could be a reference to the Karhunen–Loève transform, and would kind of make sense, but the latter looks to be the more likely expansion. In any case, it's pointless to just start using an acronym mid-article without any explanation.

As it stands, the article's explanation of the algorithm is essentially:
 * 1) Divide into blocks
 * 2) ??? (PCT)
 * 3) Profit!

mistercow (talk) 00:34, 29 February 2008 (UTC)


 * I wrote the original description based on the freely available format specification from MS (I think it was part of the Device Porting Kit). The spec is rather poorly written and omits a lot of details (saying to consult the reference implementation, which I didn't do). It never says what PCT stands for (not that I noticed, anyway), so I never spelled it out in my summary. Three anonymous editors later made this aggregate change, and Dicklyon then made this edit, removing the first mention of PCT but not the subsequent mentions, which left it in its current confusing state. As far as I can tell Dicklyon's change is incorrect (the PCT is precisely specified in the spec and it is lossless), so I just undid it. I don't know if PCT really stands for Photo Core Transform, nor do I know the theory behind it. -- BenRG (talk) 16:46, 29 February 2008 (UTC)


 * Well, I'm pretty sure it's what's called a "lapped transform." I'll look for sources; there are lots of technical papers on lapped transforms by the relevant Microsoft researchers, but I'm not sure where or if they say that's what's in the HD Photo codec. Sorry I messed up the PCT reference. Dicklyon (talk) 08:05, 1 March 2008 (UTC)


 * I believe it is meaningless to state that a transform is in itself lossless. Usually in image compression, the quantization step (the step after transformation) introduces the loss of data (this is the case with JPEG). I think it would be better to say something like "The transform enables lossless compression". This is of course very vague, but vague is just what this article is at this point, regarding the mysterious transform. I'm trying to learn some more about HD photo, and I will post something here if I learn something useful. Janpeder (talk) 13:10, 15 April 2008 (UTC)


 * In JPEG 2000, the transform is designed to be exactly invertible, when computed with integers; in JPEG, that's not the case, even before the quantization step. That's the sense in which the transform itself can be lossless or not. Dicklyon (talk) 18:21, 15 April 2008 (UTC)
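Dicklyon's distinction can be made concrete with the reversible 5/3 lifting scheme used in JPEG 2000: each lifting step adds a floor-rounded integer that the inverse subtracts back unchanged, so the transform round-trips bit-exactly despite the rounding. A minimal one-level sketch (even-length signals, simple symmetric extension; this is not the JPEG XR PCT itself):

```python
def fwd53(x):
    """One level of the reversible 5/3 lifting transform (even-length input)."""
    s = list(x[0::2])                          # even samples -> lowpass
    d = list(x[1::2])                          # odd samples  -> highpass
    for i in range(len(d)):                    # predict step
        r = s[i + 1] if i + 1 < len(s) else s[-1]   # symmetric extension
        d[i] -= (s[i] + r) // 2                # floor division, yet reversible
    for i in range(len(s)):                    # update step
        l = d[i - 1] if i > 0 else d[0]
        s[i] += (l + d[i] + 2) // 4
    return s, d

def inv53(s, d):
    s, d = list(s), list(d)
    for i in range(len(s)):                    # undo update (d is unchanged)
        l = d[i - 1] if i > 0 else d[0]
        s[i] -= (l + d[i] + 2) // 4
    for i in range(len(d)):                    # undo predict (s now restored)
        r = s[i + 1] if i + 1 < len(s) else s[-1]
        d[i] += (s[i] + r) // 2
    out = [0] * (len(s) + len(d))
    out[0::2], out[1::2] = s, d
    return out

x = [3, 7, 1, 9, 200, 4, 50, 50, 0, 255]
assert inv53(*fwd53(x)) == x                   # bit-exact despite the rounding
```

The floating-point DCT in baseline JPEG has no such property: rounding the transform output already loses information before any quantization, which is the sense in which the transform itself is lossy.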

It is now officially JPEG XR – It is time to rename.
Greetings, everyone

I'm sure by now you all know that this file format is now officially called JPEG XR and is now a free international standard:

It is time to rename the article to JPEG XR. Are there any valid objections?

Please register any valid objections within the next seven days. Afterward, I'll archive this topic and will proceed with the rename.

17:26, 20 August 2009 (UTC)


 * I went ahead and moved it (over a redirect) because this seems too obvious to need a discussion period. -- BenRG (talk) 21:33, 20 August 2009 (UTC)
 * I hope you did the right thing. Fleet Command (talk) 08:39, 22 August 2009 (UTC)

Memory performance
The HD Photo bitstream specification claims that "HD Photo offers image quality comparable to JPEG-2000 with computational and memory performance more closely comparable to JPEG".

Is this taken from an MS presentation? It seems it is not true. Is independent research available? —Preceding unsigned comment added by 78.26.156.207 (talk) 16:55, 16 September 2010 (UTC)


 * The JPEG committee studied the technology and was satisfied with its capabilities (or they presumably would not have approved it as a standard). However, that statement seems to have come from Microsoft (because that is a quote from the HD Photo bitstream specification, not the JPEG XR standard). –LightStarch (talk) 16:45, 24 September 2010 (UTC)

Release dates
I am looking at the sidebar. How is it possible that the latest release was before the initial release? Is this a mistake? —Preceding unsigned comment added by Mikez302 (talk • contribs) 05:09, 1 October 2010 (UTC)
 * Seems to be. I changed the date to that mentioned in the history section.--Oneiros (talk) 21:45, 1 October 2010 (UTC)
 * The standards organizations have published some new specs and revisions. I just updated various places in the article to reflect that. —LightStarch (talk) 05:34, 18 December 2010 (UTC)

Software support: Items to be listed
This article is not a standalone list. Therefore, it may have a number of items without Wikipedia articles, just for the sake of having broad coverage of the matter, especially in light of the fact that the total number of supporting software titles isn't high enough to threaten a violation of WP:INDISCRIMINATE.

In time, I think I have found independent sources on some of the items. I am about to add them to the article. Fleet Command (talk) 17:33, 29 September 2011 (UTC)
 * If you have independent sources, by all means add them. But the ones that are referenced only by the vendor's own site should be removed. - MrOllie (talk) 17:38, 29 September 2011 (UTC)
 * Which means you didn't read my message at all. Whenever you decide to read it, I'll be here. Fleet Command (talk) 11:44, 30 September 2011 (UTC)
 * I read it, I just disagree. - MrOllie (talk) 15:08, 30 September 2011 (UTC)
 * Oh! May I ask exactly with which part do you not agree? Fleet Command (talk) 19:34, 1 October 2011 (UTC)

Rewriting the Support for more color accuracy
The section "Support for more color accuracy" does not mention all the colour modes supported. I just added the other day that it supports RGB555 and RGB565. It also supports YUV, however, and premultiplied alpha. It seems like it would make the section a bit long and hard to read if I added another paragraph explaining this. I've created a table showing all colour modes supported that I reckon makes things a bit easier to understand. Below is a proposal for a replacement for that section.

--start--

JPEG XR supports a wide range of color formats including RGB, YUV, CMYK and RGBE in a range of color depths. Color data can also be stored as unsigned integers, floating-point or fixed-point values. It also contains support for alpha channels as well as an arbitrary number of extra channels. The table below represents a comprehensive list of the pixel formats supported in JPEG XR.

In addition to the pixel formats listed above, JPEG XR also has some support for BGR channel ordering and pre-multiplied alpha.

, and several script and stylesheet links contain  at the beginning of their url. Some content elements contain a  tag though, the article content is all in English, and the DNS server name is en.wikipedia.org. Nothing in the article source appears to reference language, Chinese or otherwise.

I can't find any reference about non-content issues, maybe I'll have to contact OTRS? I'm running Firefox 73.0.1 on Manjaro Linux, but it seems unlikely to just be me, as every other page loads in english as normal, and this one persists after clearing cache and rebooting. --Vickas54 (talk) 07:13, 29 February 2020 (UTC)


 * Whatever the bug was, it's gone now. Perhaps it got cleaned up as part of adding the COVID-19 headers that are currently up, though I can't rule out changes/updates to my own system. --Vickas54 (talk) 16:46, 26 March 2020 (UTC)