A Photography forum. PhotoBanter.com


"16-bit" mode.



 
 
  #141  
Old November 23rd 04, 01:05 AM
Mike Engles

Bart van der Wolf wrote:

"Mike Engles" wrote in message
...
SNIP
Also why is image data from spacecraft and astronomy not
gamma encoded. It is after all digital photography.


Not necessarily. There is a difference between photometric data (e.g.
spectral reflection/absorption/emission in certain bands), and
pictorial imaging (e.g. stereo pairs in either visible light bands or
mixed with other spectral data).
One common issue between them is the desire to reduce quantization
errors (at least half of the LSB) to a minimum. Gamma encoding
provides a visually efficient encoding, but it can underutilize the
capacity at the lower, and overutilize (=additional quantization
errors) at the higher counts. Then there is the trade-off caused by
limited transmission bandwidth, and there is only so much one can do
with compression...

Bart
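
To make the quantization trade-off concrete, here is a minimal C sketch,
assuming a pure power-law gamma of 2.2 (not any particular camera or sRGB
curve). It prints the linear-light width of one code step near black and
near white for an 8-bit encoding:

#include <math.h>
#include <stdio.h>

/* Linear-light value of 8-bit code c under an assumed pure gamma-2.2 curve. */
static double gamma_decode(int c) { return pow(c / 255.0, 2.2); }

int main(void) {
    double linear_step = 1.0 / 255.0;                           /* same width everywhere */
    double g_black     = gamma_decode(1)   - gamma_decode(0);   /* step near black */
    double g_white     = gamma_decode(255) - gamma_decode(254); /* step near white */

    printf("linear 8-bit step:     %.6f\n", linear_step); /* ~0.0039   */
    printf("gamma step near black: %.8f\n", g_black);     /* ~0.000005 */
    printf("gamma step near white: %.6f\n", g_white);     /* ~0.0086   */
    return 0;
}

In linear-light terms the gamma curve spends most of its codes on the
darkest part of the range and leaves coarser steps near full scale, which
is the trade-off described above.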



Hello

I would have thought that photographs taken by spacecraft are to be
viewed. They would be stored on the spacecraft in a file, prior to
relay. It strikes me that if gamma encoding is necessary for terrestrial
imaging to maximise the use of a limited number of bits, then that would
also apply to space photography. There was a thread in the scanner
group, where the expert consensus was that any imaging, storage and
processing in a linear domain invited image degradation and
posterisation. Yet we find that such linear imaging, storage and
processing is common in scientific digital imaging, where one would
imagine that extreme accuracy was paramount.

Do they use a large number of bits to avoid problems associated with
linear storage and processing? The expert consensus was that one would
need 18- to 20-bit linear images to match the efficiency of an 8-bit
gamma-encoded image.
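
As a rough check of that figure, assuming a pure gamma of 2.2: the
smallest step of an 8-bit gamma-encoded image, from code 0 to code 1,
lands at (1/255)^2.2, about 5e-6 in linear light, and log2(1/5e-6) is
about 17.6, so a linear encoding needs roughly 18 uniform bits before its
steps are that fine - in line with the 18 to 20 bit estimate.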

What is sauce for the goose is sauce for the gander.

Timo Autiokari has been saying for ages that scientific imaging was done
linearly. He has been abused soundly for his claims. We have been told
that no one who does serious image processing does it linearly. So all
the scientists of the world who regularly do their processing in a
linear domain are not really serious, and are merely faddists like
Timo Autiokari.

Mike Engles
  #143  
Old November 23rd 04, 02:35 AM
Hecate

On Mon, 22 Nov 2004 02:44:47 GMT, Chris Cox
wrote:

This is still a problem.
When 64 bit processors become the norm (and the @#!^&$ OS allows a
fully 64 bit application), then that becomes less of a problem.

Is a 64 bit optimised Photoshop likely to be faster, or just more able
to do complex operations? Or do the programmers generally aim for a
bit of both, if you'll pardon the pun?


That depends a lot on the CPU in question, and the operation in
question.
Most likely there will be little performance difference, but a big
difference in available RAM (addressability).

Thanks. That, at least, will be very useful given the image sizes I
usually have.

--

Hecate - The Real One

veni, vidi, reliqui
  #144  
Old November 23rd 04, 03:32 AM
Matt Austern

Chris Cox writes:

In article , Matt Austern
wrote:

Chris Cox writes:

I've tried. Their engineer insists that it's 30x faster to work with
15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535
representation).


Perhaps this is off-topic, and perhaps you can't answer it without
revealing proprietary information, but can you explain why 15-bit
computation should be so much faster than 16-bit? (If there's a
publication somewhere you could point me to, that would be great.)
I've thought about this for a few minutes, but I haven't been able to
think of an obvious reason, and now I'm curious.


1) Because a shift by 15 (divide by 32768) is much faster than a divide
by 65535.

One of the most common operations is (value1*value2 + (maxValue/2)) /
maxValue

With 0..255 we can pull some tricks to make the divide reasonably fast.
For 0..65535 the tricks take quite a bit more time (and serialize the
operation), or we have to use a multiply by reciprocal.
For 0..32768, we can just use a shift (a sketch of this follows point 4
below).


2) A lot fewer overflows of 32 bit accumulators

This is still a problem.
When 64 bit processors become the norm (and the @#!^&$ OS allows a
fully 64 bit application), then that becomes less of a problem.


3) The 2^N maximum value also has some benefits when dealing with
subsampled lookup tables that require interpolation.


4) The 2^N maximum value also has benefits for blending operations that
need a middle value (for 0..255 it was pretty random whether 127 or 128
was used for the middle).
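
A minimal C sketch of point 1 (an illustration only, not Adobe's actual
code): the rounded scaled multiply (value1*value2 + maxValue/2) / maxValue
written for both ranges, where the 0..32768 case collapses to a single
shift:

#include <stdint.h>
#include <stdio.h>

/* 0..65535 range: the divide by 65535 needs a real division
   (or a multiply-by-reciprocal trick). */
static uint16_t scaled_mul_16(uint16_t a, uint16_t b) {
    return (uint16_t)(((uint32_t)a * b + 32767u) / 65535u);
}

/* 0..32768 range: the divide by 32768 is just a right shift by 15. */
static uint16_t scaled_mul_15(uint16_t a, uint16_t b) {
    return (uint16_t)(((uint32_t)a * b + 16384u) >> 15);
}

int main(void) {
    /* Half of full scale times full scale gives back half of full scale. */
    printf("%u\n", (unsigned)scaled_mul_16(32768u, 65535u)); /* 32768 */
    printf("%u\n", (unsigned)scaled_mul_15(16384u, 32768u)); /* 16384 */
    return 0;
}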


Thanks! That all makes perfect sense.

One of the crucial things I missed, apparently, was that we really
aren't talking about a 15-bit representation. I missed the fact that
the range really is [0, 32768], not [0, 32768).

Actually, I think you've also given me some fun new questions to ask
at interviews.
  #146  
Old November 23rd 04, 12:16 PM
David J Taylor

Matt Austern wrote:
[]
One of the crucial things I missed, apparently, was that we really
aren't talking about a 15-bit representation. I missed the fact that
the range really is [0, 32768], not [0, 32768).


As soon as you start doing any filtering operations you can overshoot
- i.e. you need a range of -32768..0..32767, what I would call signed
16-bit. Of course, no physical medium can depict the negative
brightness levels such values imply.
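
To make the overshoot concrete, a minimal sketch (an illustration with an
arbitrary 1-D sharpening kernel, not anything from this thread): even data
confined to 0..32768 leaves that range in both directions during
filtering, so the intermediate sums need a signed, wider type before being
clipped back:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* A hard dark-to-bright edge in 0..32768 data. */
    uint16_t px[5] = { 0, 0, 32768, 32768, 32768 };
    for (int i = 1; i < 4; i++) {
        /* 1-D sharpening kernel (-1, 3, -1); the sum must be held signed and wide. */
        int32_t s = -(int32_t)px[i - 1] + 3 * (int32_t)px[i] - (int32_t)px[i + 1];
        printf("pixel %d: %ld\n", i, (long)s); /* prints -32768, 65536, 32768 */
    }
    return 0;
}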

[cross-posting trimmed]

David


  #148  
Old November 23rd 04, 04:28 PM
Timo Autiokari

On Mon, 22 Nov 2004 21:32:21 GMT, wrote:

I'm not so sure that ACR works in a totally linear domain. Images
exposed with bracketing, and compensated to be the same with the
exposure slider, may have equal mid-tones, but the shadows and
highlights will display that a different gamma is used. If you drag the
ACR exposure slider to the left, after it runs out of "hidden
highlights", it stretches the highlights so that 4095 in the RAW data
stays anchored at 255 in the output, and never gets darker. That is not
linear exposure compensation.


That editing operation is still applied over linear image data, even
if the operation itself is not linear. Exposure adjustment in fact is
a linear operation, multiplication by a factor, so the control should
behave linearly (this is the same as scaling the right input slider in
the Levels dialog).
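
For illustration, a minimal sketch of that point (the 4095 clip level
assumes the 12-bit raw values mentioned above; the function name is made
up): an exposure adjustment of ev stops on linear data is just a
multiplication by 2^ev, followed by clipping.

#include <math.h>
#include <stdio.h>

/* Exposure adjustment on linear data: multiply by 2^ev, then clip. */
static int expose(int linear, double ev) {
    double v = linear * pow(2.0, ev);
    if (v > 4095.0) v = 4095.0; /* blown highlights clip at full scale */
    if (v < 0.0)    v = 0.0;
    return (int)(v + 0.5);
}

int main(void) {
    printf("%d\n", expose(1000,  1.0)); /* +1 stop -> 2000 */
    printf("%d\n", expose(1000, -1.0)); /* -1 stop -> 500  */
    printf("%d\n", expose(3000,  1.0)); /* would be 6000, clips to 4095 */
    return 0;
}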

About the main issue in this thread - you found that the DNG converter
seems not to be linear - I cannot add anything, since I cannot run the
DNG software. But I would sure like to know more, e.g. if someone could
possibly do a correlation test. Could you put your original data up for
download? Yes, it is simple data, but not everybody has the means to
create it.

About the other main issue, the Photoshop "16" bit/channel codespace,
there sure is enough weirdness with that. E.g. if you create Photoshop
Raw data from level 0 to level 65535 (that is, from 0000h to FFFFh),
then open it in Photoshop and save it as Photoshop Raw under another
name, you'll get:

0000h 0002h 0002h 0004h 0004h 0006h 0006h ... up to level 32768
(8000h), only even values there. Above that it converts to 8001h 8003h
8005h ... etc. up to 65535 (FFFFh), only odd values there.

So at the middle it calmly snips one level away; the coding there
is: ...7FF8h 7FFAh 7FFCh 7FFEh 8000h 8001h 8003h 8005h ... Due to this
discontinuity I'd say that the 15th bit of Photoshop is quite unusable
for applications that require accuracy.
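
For anyone wanting to repeat the test, a minimal C sketch that generates
such a ramp (the file name, the 256x256 single-channel geometry and the
high-byte-first byte order are assumptions; Photoshop's raw-open dialog
lets you specify all of them):

#include <stdio.h>

/* Write every 16-bit level 0..65535 exactly once, high byte first, to be
   opened in Photoshop as 256x256, 1 channel, 16 bits/channel raw data. */
int main(void) {
    FILE *f = fopen("ramp16.raw", "wb");
    if (!f) return 1;
    for (long v = 0; v <= 65535; v++) {
        fputc((int)(v >> 8), f);   /* high byte */
        fputc((int)(v & 0xFF), f); /* low byte  */
    }
    fclose(f);
    return 0;
}

Opening the file, re-saving it under another name and comparing the two
byte for byte should reproduce the even/odd pattern described above.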

Timo Autiokari
 



