#141
Bart van der Wolf wrote:
"Mike Engles" wrote in message ... SNIP Also why is image data from spacecraft and astronomy not gamma encoded. It is after all digital photography. Not necessarily. There is a difference between photometric data (e.g. spectral reflection/absorption/emission in certain bands), and pictorial imaging (e.g. stereo pairs in either visible light bands or mixed with other spectral data). One common issue between them is the desire to reduce quantization errors (at least half of the LSB) to a minimum. Gamma encoding provides a visually efficient encoding, but it can underutilize the capacity at the lower, and overutilize (=additional quantization errors) at the higher counts. Then there is the trade-off caused by limited transmission bandwidth, and there is only so much one can do with compression... Bart Hello I would have thought that photographs taken by spacecraft are to be viewed. They would be stored on the spacecraft in a file, prior to relay. It strikes me that if gamma encoding is necessary for terrestrial imaging to maximise the use of a limited number of bits, then that would also apply to space photography. There was a thread in the scanner group, where the expert consensus was that any imaging,storage and processing in a linear domain invited image degradation and posterisation. Yet we find that such linear imaging,storage and processing is common in scientific digital imaging, where one would imagine that extreme accuracy was paramount. Do they use a large number of bits to avoid problems associated with linear storage and processing? The expert consensus was that one would need 18 to 20 bit linear images to match the efficiency of a 8 bit gamma encoded image. What is sauce for the goose is sauce for the gander. Timo Autiokari has been saying for ages that scientific imaging was done linearly. He has been abused soundly for his claims. We have been told that no one who does serious image processing does it linearly. So all the scientists of the world who regularly do their processing in a linear domain are not really serious and that they are merely FADISTS like Timo Autiokari. Mike Engles |
#143
On Mon, 22 Nov 2004 02:44:47 GMT, Chris Cox wrote:

>>> This is still a problem. When 64 bit processors become the norm (and
>>> the @#!^&$ OS allows a fully 64 bit application), then that becomes
>>> less of a problem.
>>
>> Is a 64 bit optimised Photoshop likely to be faster, or just more
>> able to do complex operations? Or do the programmers generally aim
>> for a bit of both, if you'll pardon the pun?
>
> That depends a lot on the CPU in question, and the operation in
> question. Most likely there will be little performance difference, but
> a big difference in available RAM (addressability).

Thanks. That, at least, will be very useful given the image sizes I usually have.

--
Hecate - The Real One
veni, vidi, reliqui
#144
Chris Cox writes:
> In article , Matt Austern wrote:
>> Chris Cox writes:
>>>> I've tried. Their engineer insists that it's 30x faster to work
>>>> with 15 bit quantities than 16 bit ones.
>>>
>>> Which is correct (for 0..32768 representation versus 0..65535
>>> representation).
>>
>> Perhaps this is off-topic, and perhaps you can't answer it without
>> revealing proprietary information, but can you explain why 15-bit
>> computation should be so much faster than 16-bit? (If there's a
>> publication somewhere you could point me to, that would be great.)
>> I've thought about this for a few minutes, I haven't been able to
>> think of an obvious reason, and now I'm curious.
>
> 1) Because a shift by 15 (divide by 32768) is much faster than a
>    divide by 65535. One of the most common operations is
>
>        (value1*value2 + (maxValue/2)) / maxValue
>
>    With 0..255 we can pull some tricks to make the divide reasonably
>    fast. For 0..65535 the tricks take quite a bit more time (and
>    serialize the operation), or we have to use a multiply by
>    reciprocal. For 0..32768, we can just use a shift.
>
> 2) A lot fewer overflows of 32 bit accumulators. This is still a
>    problem. When 64 bit processors become the norm (and the @#!^&$ OS
>    allows a fully 64 bit application), then that becomes less of a
>    problem.
>
> 3) The 2^N maximum value also has some benefits when dealing with
>    subsampled lookup tables that require interpolation.
>
> 4) The 2^N maximum value also has benefits for blending operations
>    that need a middle value (for 0..255 it was pretty random whether
>    127 or 128 was used for the middle).

Thanks! That all makes perfect sense. One of the crucial things I missed, apparently, was that we really aren't talking about a 15-bit representation. I missed the fact that the range really is [0, 32768], not [0, 32768).

Actually, I think you've also given me some fun new questions to ask at interviews.
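[To make point 1 concrete, here is a small C sketch of the normalized multiply Chris describes. The function names are my own; this is an illustration, not Photoshop's actual code.]

#include <stdint.h>
#include <stdio.h>

/* The common operation: (value1*value2 + (maxValue/2)) / maxValue */

/* 0..32768 representation: maxValue is 2^15, so the normalizing
   divide collapses to a shift. */
static uint16_t mul_norm_32768(uint16_t a, uint16_t b)
{
    return (uint16_t)(((uint32_t)a * b + 16384) >> 15);
}

/* 0..65535 representation: maxValue is not a power of two, so a true
   divide (or a multiply-by-reciprocal trick) is required. */
static uint16_t mul_norm_65535(uint16_t a, uint16_t b)
{
    return (uint16_t)(((uint32_t)a * b + 32767) / 65535);
}

int main(void)
{
    /* Full scale times half scale gives half scale in both schemes. */
    printf("%u\n", mul_norm_32768(32768, 16384)); /* prints 16384 */
    printf("%u\n", mul_norm_65535(65535, 32768)); /* prints 32768 */
    return 0;
}

[On most CPUs an integer divide costs many times the latency of a shift and, as Chris notes, serializes where the shift pipelines, which is broadly where figures like the quoted "30x" can come from.]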
#146
Matt Austern wrote:
[]
> One of the crucial things I missed, apparently, was that we really
> aren't talking about a 15-bit representation. I missed the fact that
> the range really is [0, 32768], not [0, 32768).

As soon as you start doing any filtering operations you can overshoot, i.e. you need a range of -32768..0..32767, what I would call signed 16-bit. Of course, no physical medium can depict the negative brightness levels the negative values imply.

[cross-posting trimmed]

David
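[David's point in a minimal C sketch of my own, not from the post: a simple 1-D sharpening kernel applied to 0..32768 samples can swing below zero and above full scale, so the intermediate must be signed and wider than the samples before the final clamp.]

#include <stdint.h>
#include <stdio.h>

/* Sharpening kernel (-1, 3, -1): coefficients sum to 1, so flat
   regions pass through unchanged, but edges overshoot both ends. */
static uint16_t sharpen3(uint16_t left, uint16_t mid, uint16_t right)
{
    int32_t v = 3 * (int32_t)mid - (int32_t)left - (int32_t)right;
    if (v < 0)     v = 0;      /* clamp the negative "brightness" */
    if (v > 32768) v = 32768;  /* clamp the positive overshoot */
    return (uint16_t)v;
}

int main(void)
{
    /* Dark pixel between bright neighbours: intermediate is -65536. */
    printf("%u\n", sharpen3(32768, 0, 32768)); /* clamps to 0 */
    /* Bright pixel between dark neighbours: intermediate is 98304. */
    printf("%u\n", sharpen3(0, 32768, 0));     /* clamps to 32768 */
    return 0;
}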