PhotoBanter.com forum » Digital Photography

"16-bit" mode.
  #152
November 23rd 04, 06:12 PM
Dave Martindale

Timo Autiokari writes:

0000h 0002h 0002h 0004h 0004h 0006h 0006h ... up to the level 32768
(8000h), only even values there. Above that it converts to 8001h 8003h
8005h ... etc up to the 65535 (FFFFh), only odd values there.


So at the middle it will calmly snip one level away; the coding there
is: ...7FF8h 7FFAh 7FFCh 7FFEh 8000h 8001h 8003h 8005h... Due to this
discontinuity I'd say that Photoshop's 15-bit coding is quite unusable
for applications that require accuracy.


What do you mean by "accuracy"? The behaviour you are describing is the
most accurate conversion from [0..65535] to [0..32768] and back.
There is a one-code-value discontinuity where it switches from rounding
down to rounding up. This represents a maximum error of half a code
value out of 32768, or approximately one part in 65536.
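In Python, that round trip looks like this (assuming round-to-nearest
integer scaling in both directions; the exact arithmetic Photoshop uses
is an assumption here, but this reproduces Timo's pattern exactly):

def to_15bit(x16):
    # 16-bit [0..65535] -> Photoshop-style [0..32768], round to nearest
    return (x16 * 32768 + 32767) // 65535

def to_16bit(x15):
    # [0..32768] -> 16-bit [0..65535], round to nearest
    return (x15 * 65535 + 16384) // 32768

print([to_16bit(to_15bit(x)) for x in range(8)])
# [0, 2, 2, 4, 4, 6, 6, 8]        <- even codes below the midpoint
print([to_16bit(v) for v in range(16383, 16387)])
# [32766, 32768, 32769, 32771]    <- odd codes above it
print(max(abs(to_16bit(to_15bit(x)) - x) for x in range(65536)))
# 1                               <- half a 15-bit step, as claimed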

Meanwhile, photographic data comes from a camera with a 10-, 12-, or
maybe 14-bit A/D converter, so the inherent accuracy of the camera data,
given a totally noise-free image, is 32, 8, or 2 times worse
respectively. Then real images have noise on top of that.
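The arithmetic, if you want to check it (half a quantization step over
full scale, per bit depth):

for bits in (15, 14, 12, 10):
    print(bits, 0.5 / 2**bits)
# 15: ~1.5e-5 (the round-trip error above)
# 14: ~3.1e-5, 12: ~1.2e-4, 10: ~4.9e-4 (2x, 8x, 32x worse)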

So this rounding error in Photoshop is totally insignificant compared to
other errors in the data incurred earlier.

Besides, I can recall earlier arguments with you where you claimed that
8 bit linear coding was good enough for photographic work because the
noise in photographs was sufficient to mask the (rather bad)
quantization errors due to 8-bit linear coding. Why is it that you
don't worry about errors of 1 part in 512 when your own recommendations
are at stake, but do worry about an error of 1 part in 65536 when
criticizing Photoshop?

Dave
  #153
November 23rd 04, 06:31 PM
Dave Martindale

Mike Engles writes:

What I have just read chimes with everything I think should happen in
digital imaging. It completely contradicts everything that has been
written about gamma encoding in these and other forums, namely that
gamma encoding is necessary to maximise the use of the available bits.


ftp://ftp.alvyray.com/Acrobat/9_Gamma.pdf


Yet this guy seems to be a pioneer of digital imaging.


First, note that the article was written nearly 10 years ago. Since
then, we have the PNG file format that explicitly tells you what
non-linear transformation was used in encoding the image. We have
colour management systems, with data chunks encoded in a file header
telling you even more about the meaning of the data. And I think that
even in 1995 TIFF would let you describe the data nonlinearity.

He's right that a lot of guessing happened in 1995. But things are
better now. He also talks a lot about one particular application,
Altamira Composer, which apparently assumes PC monitors have a gamma of
1.8 (including the contribution of the hardware lookup table). To the
best of my knowledge, that value has never been common on PCs, only on
Macs, so one could describe this as simply a bad assumption for PC
software.

Anyway, it's now perfectly possible to *store* images using a nonlinear
encoding, but unpack them to a wider linear representation before doing
arithmetic on them, then convert back to the nonlinear representation
for storage again. He recommended linear storage because that avoids
conversion operations, and avoids having to store the data to describe
the nonlinearity, but that's not necessary to do linear arithmetic.
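As a sketch of that store-nonlinear, compute-linear workflow, with the
sRGB transfer curve standing in for whatever encoding the file actually
declares:

def srgb_to_linear(c):
    # Decode an sRGB-encoded value in [0, 1] to linear light
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Re-encode linear light in [0, 1] for storage or display
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A 50/50 blend of black and white, done in linear light:
blend = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
print(blend)   # ~0.735, not the 0.5 a naive average of encoded values gives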

Unfortunately, the memo does *not* discuss the cost of linear storage.
It's a simple fact that if you store 8 bits per component (i.e. 24 bit
colour), 8-bit linear coding does not provide sufficient intensity
resolution to code shadow areas without quantization artifacts. 8-bit
"gamma corrected" encoding is used because it provides more resolution
in the shadows, where it's needed, and less in the highlights, where the
steps are still small enough not to see. To use linear coding without
quantization problems, you'd need 12 or better yet 16 bits per
component, and most applications do not want to pay the extra price in
file size for no visible benefit.
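A quick way to see the shadow problem, using a plain gamma-2.2 curve as
a representative encoding: compare the darkest nonzero intensity each
8-bit coding can represent.

print(1 / 255)           # linear coding:  ~3.9e-3
print((1 / 255) ** 2.2)  # gamma coding:   ~5.1e-6, roughly 770x finer near black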

Also, why is image data from spacecraft and astronomy not gamma
encoded? It is, after all, digital photography. They must be
transmitting/recording in at least 18 bits. That is the bit level that
Chris Cox et al say is the minimum necessary for linear images without
gamma encoding.


First, the data from those sources is quantitative data used to make
actual measurements of intensity. Producing pretty pictures is somewhat
incidental. So it's worth providing a wide linear data path, and
calibrating the whole thing periodically, in order to get numbers that
mean something. But consumer cameras are not used as photometers, so
the same level of accuracy is not needed.

As for how many linear bits are needed to equal 8 bits gamma encoded: it
all depends on the brightness range you want to represent. 16 bits is
pretty damned good.
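A back-of-envelope version of that comparison, again assuming a plain
gamma-2.2 curve: the smallest step of 8-bit gamma coding sits right at
black, and matching it with uniform linear steps takes

import math
smallest_gamma_step = (1 / 255) ** 2.2     # darkest nonzero coded intensity
print(math.log2(1 / smallest_gamma_step))  # ~17.6, hence figures like "18 bits"

Above the first couple of codes, though, 16-bit linear steps are already
finer than the gamma steps, which is why 16 bits is good enough in
practice.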

It does seem that what we have today is two types of digital imaging.
One is the truly scientific one that uses ALL linear data. The other is
a convenient engineering one that delivers the goods simply, by
pre-compensating the linear data for display on non-linear displays.


Or, more accurately, by non-linearly encoding the data in a way that fits
human perceptual abilities without wasting bits.

Engineers were always happy with approximations.


Engineers are happy with what does the job at the lowest cost necessary.
For photometry, you need more bits and a calibrated chain. For
photography you don't.

Dave
  #154
November 23rd 04, 07:42 PM
Mike Engles

Dave Martindale wrote:

(snip)

Hello

Do they use a high number of bits in space imaging? I cannot imagine
they do, as storage must be limited for such large amounts of data.
After all, the systems in use on, say, the Cassini mission are over 10
years old in technology terms. I can see why accuracy is essential for
photometry, but there are also imaging cameras, which should use gamma.
I doubt that these are more than 8 bits per colour.

Mike Engles
  #156
November 23rd 04, 09:20 PM

Mike Engles wrote:

I would have thought that photographs taken by spacecraft are to be
viewed.


Here's the deal:

sensor - processor - ... - processor - display

The 'sensor' is (these days) linear, as is most processing. The
'display', however, is almost never linear. Thus at some point in the
"processor" chain, one _must_ compensate for this nonlinear response
or one will have Problems upon viewing. Early television, in the
interests of making receivers as simple (cheap) as possible, put the
compressors at the broadcaster since the receiver's CRT would do the
expansion. The same model is (or should be) used for photographic
image processing as well: you collect linear data, you mangle it
through whatever linear processing steps, and then, right at the end,
you compress it ("apply gamma") prior to JPEG or whatever.
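In code, the tail of that chain might look like this sketch (numpy
assumed; gamma 2.2 stands in for the display's actual response):

import numpy as np

def render_for_display(linear, gamma=2.2):
    # Final step of the chain: compress linear data for a non-linear display
    compressed = np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)
    return np.round(compressed * 255).astype(np.uint8)

sensor = np.array([0.01, 0.10, 0.25])    # hypothetical linear sensor values
print(render_for_display(sensor * 2.0))  # linear processing first, gamma last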

If there is no need for a human to view the image on a non-linear
display, then this compression step can be removed. There are indeed
applications where the image is not intended for human consumption.
There are other cases, though, where processing non-linear data as if
it were linear ("homomorphic filtering") has its uses. (Indeed, when
one feeds gamma-encoded samples to a JPEG encoder, one is engaged in
homomorphic processing...)
  #158
November 23rd 04, 10:35 PM
Chris Cox

In article ,
wrote:

Kibo informs me that Chris Cox stated that:

In article ,
wrote:
It is exactly what is happening here. I get 0, 1, 3, 4, 5, 8, etc. No
2, 6, 7, 11, etc, at all, no matter what is done to the data.


And, again, without your original data - I can't guess what could have
gone wrong.

I do know that for anyone else doing a similar experiment (inside and
outside Adobe), they get the full 32769 values.


Yeah, that's what I would've expected. I find it impossible to believe
that PS could be getting it that badly wrong without it showing up in a
dozen different, really obvious ways.
And speaking of supplying original data, where the hell does Adobe keep
the PS raw file format docs that used to be on the website? I wanted
to try John's experiment for myself, but all I could find was the PS
SDK, for which Adobe wants money.


They're part of the SDK, and always have been.
(and don't get me started on why the SDK isn't free)

Photoshop RAW is just bytes - no header, no documentation.
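(Which makes reading it trivial once you know the geometry. A numpy
sketch, where the filename, dimensions, and byte order are all
assumptions the reader has to supply:)

import numpy as np

# Photoshop RAW has no header, so the reader must already know the
# geometry and sample format; these values are illustrative only.
width, height = 640, 480
data = np.fromfile("image.raw", dtype=">u2")  # 16-bit big-endian samples
image = data.reshape(height, width)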

Chris
  #160
November 23rd 04, 10:43 PM
Chris Cox

In article ,
wrote:

In message ,
Chris Cox wrote:

I do know that for anyone else doing a similar experiment (inside and
outside Adobe), they get the full 32769 values.


I just came up with an idea to check if it was the internal
representation or the "info" tool itself, and sure enough it was the
info tool that was at fault.

What I did was open the "levels" dialog and set the input max to 2.
Then I moved the info tool over the pixels, and sure enough, there was
not a direct correspondence between the "old" and "new" values. The
first 0 became a 0; the second 0 became a 52; all numbers that are
multiples of 52 were present in the new values (no gaps).

The info tool is toast in 16-bit greyscale mode.


OK - that must have slipped through QE somehow.
(and I think I did most of my tests in RGB mode)

I'll have someone double check it in the current build and fix it if
it's still broken (well, as soon as I get rid of this @#!^&*^%$ cold).

Chris
 



