A Photography forum. PhotoBanter.com


"16-bit" mode.



 
 
#161 - November 23rd 04, 10:43 PM - Chris Cox


In article , wrote:

In message , Chris Cox wrote:

I do know that everyone else doing a similar experiment (inside and
outside Adobe) gets the full 32769 values.


I just came up with an idea to check if it was the internal
representation or the "info" tool itself, and sure enough it was the
info tool that was at fault.

What I did was open the "levels" dialog and set the input max to 2.
Then I moved the info tool over the pixels, and sure enough, there was
not a direct correspondence between the "old" and "new" values. The
first 0 became a 0; the second 0 became a 52; all numbers that are
multiples of 52 were present in the new values (no gaps).

The info tool is toast in 16-bit greyscale mode.
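
In sketch form, the check looks something like this in Python (the
[0, 32768] range is from earlier in the thread; the remap is a
simplified stand-in for the levels adjustment, not Photoshop's actual
code):

    # Remap 16-bit data the way a "levels input max" adjustment would,
    # then list every output value the remap can produce.  A readout
    # that reports values outside this set is lying about the data.
    MAX = 32768                  # Photoshop "16-bit" range is [0, 32768]

    def levels_remap(v, in_max):
        """Map [0, in_max] onto [0, MAX], clipping anything above."""
        return min(MAX, round(v * MAX / in_max))

    possible = sorted({levels_remap(v, 2) for v in range(MAX + 1)})
    print(possible)              # with in_max = 2: [0, 16384, 32768]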


OK - that must have slipped through QE somehow.
(and I think I did most of my tests in RGB mode)

I'll have someone double check it in the current build and fix it if
it's still broken (well, as soon as I get rid of this @#!^&*^%$ cold).

Chris
#162 - November 24th 04, 12:31 AM - Dave Martindale

Mike Engles writes:

I would have thought that photographs taken by spacecraft are to be
viewed.


Not just viewed, but measured. For that, you need to calibrate the
camera regularly and preserve the data from it, and it's worth keeping
the data in linear form even at the cost of more memory and
transmission time.

It strikes me that if gamma encoding is necessary for terrestrial
imaging to maximise the use of a limited number of bits, then that would
also apply to space photography.


Generally no, because the tradeoffs are different. Some cameras *do*
allow you to save data in a linear, losslessly compressed form called
"raw", precisely for the cases where you want more control over what's
done with it. If you have raw camera data, you can process it in
16-bit linear form if you want.

There was a thread in the scanner
group, where the expert consensus was that any imaging, storage and
processing in a linear domain invited image degradation and
posterisation.


Any processing in *8 bit* linear invites posterization and other
degradation. Using *16 bit* per sample linear avoids most of this for
ordinary pictorial images. Using *floating point* linear is enough for
high dynamic range images. You must distinguish between these different
linear forms.
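
To make the distinction concrete, a quick Python sketch (gamma 2.2 and
the 1% shadow cutoff are illustrative choices, not figures from this
thread):

    # Count how many distinct 8-bit codes cover the darkest 1% of scene
    # luminance under linear vs gamma-2.2 encoding.  Few codes there
    # means visible posterization in the shadows.
    GAMMA = 2.2

    def encode_linear(y):        # y is luminance in [0, 1]
        return round(y * 255)

    def encode_gamma(y):
        return round((y ** (1 / GAMMA)) * 255)

    shadows = [i / 100000 for i in range(1001)]     # 0 .. 1% of full scale
    print(len({encode_linear(y) for y in shadows})) # 4 codes
    print(len({encode_gamma(y) for y in shadows}))  # 32 codes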

Yet we find that such linear imaging, storage and
processing is common in scientific digital imaging, where one would
imagine that extreme accuracy was paramount.


I'll bet it isn't 8 bit linear.

Do they use a large number of bits to avoid problems associated with
linear storage and processing? The expert consensus was that one would
need 18 to 20 bit linear images to match the efficiency of an 8-bit
gamma-encoded image.


Yes, though the 18 or 20 bit number depends on what you mean by
"efficiency", and what intensity range you're trying to cover.

What is sauce for the goose is sauce for the gander.


Can't you see that 8-bit linear and 16-bit linear are entirely different
sauces?

Timo Autiokari has been saying for ages that scientific imaging was done
linearly. He has been abused soundly for his claims.


He's been abused for recommending 8-bit linear over 8-bit nonlinear.

We have been told
that no one who does serious image processing does it linearly.


Oh, who said that? I do the actual signal processing in linear space
(in 32-bit floating point), but often store images in 8-bit nonlinear
form. There's no contradiction here; it just requires a conversion.
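
In sketch form, with gamma 2.2 standing in for whatever nonlinear
storage encoding is actually in use:

    # Decode 8-bit nonlinear storage to float linear, process there,
    # re-encode for storage.
    GAMMA = 2.2

    def to_linear(code):         # stored 8-bit code -> float luminance
        return (code / 255) ** GAMMA

    def to_stored(y):            # float luminance -> stored 8-bit code
        return round((y ** (1 / GAMMA)) * 255)

    pixel = 100
    brighter = to_stored(min(1.0, to_linear(pixel) * 2))   # +1 stop, in linear
    print(pixel, "->", brighter)                           # 100 -> 137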

So all
the scientists of the world who regularly do their processing in a
linear domain are not really serious, and they are merely faddists
like Timo Autiokari.


Again, they're not using 8-bit linear for any serious measurement data.

Dave
#164 - November 24th 04, 02:01 AM - John P Sheehy

In message , "David J Taylor" wrote:

Matt Austern wrote:
[]
One of the crucial things I missed, apparently, was that we really
aren't talking about a 15-bit representation. I missed the fact that
the range really is [0, 32768], not [0, 32768).


As soon as you start doing any filtering operations you can
overshoot - i.e. you need a range of -32768..0..32767, what I would
call signed 16-bit. Of course, no physical medium can depict the
negative brightness levels that the negative values imply.
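
To see the overshoot concretely, a minimal sketch (the 1-D sharpening
kernel is just an example):

    # A sharpening kernel applied to a hard edge in [0, 32768] data
    # produces values below 0 and above 32768, which is why
    # intermediate results need signed headroom.
    signal = [0] * 4 + [32768] * 4       # hard black-to-white edge
    kernel = [-1, 3, -1]                 # simple sharpen, sums to 1

    def convolve(s, k):
        pad = len(k) // 2
        padded = [s[0]] * pad + s + [s[-1]] * pad
        return [sum(padded[i + j] * k[j] for j in range(len(k)))
                for i in range(len(s))]

    print(convolve(signal, kernel))
    # [0, 0, 0, -32768, 65536, 32768, 32768, 32768] -- undershoot and
    # overshoot both land well outside unsigned 15- or 16-bit storage.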


And what are they, anyway? Thirsty light sponges?
--


John P Sheehy

#167 - November 24th 04, 10:14 AM

Kibo informs me that Timo Autiokari stated
that:

On Tue, 23 Nov 2004 10:50:44 +1100, wrote:

With photography, the intention is to produce a final image
that is as similar as possible to what a human eye would've
seen through the viewfinder,


For your information,


As it happens, I've designed imaging systems, so I'm quite familiar with
the differences between the requirements of a scientific imaging system
and the requirements of a device that creates images intended to
approximate what a human eye would see in the same situation. And for
/your/ information, a photograph evokes only a very vague approximation
of what an eye would've seen if it'd been in place of the camera. Even a
gamma-corrected (i.e. non-linear) image is just another in a long string
of compromises that make it a little easier to trick the human eye into
perceiving the printed/displayed image as 'real'.

Whatever real-life scene the human eye is
viewing, it happens that *linear* light (photons) will hit the
sensors on the retina.


You're ignoring the fact that most scientific imaging uses
false-colouring *precisely because* the 'true' image would either be
invisible, too dark, or too bright to be processed by a naked human eye.
If the human eye were capable of perceiving, for example,
Doppler-shifted light from a star on the other side of the galaxy, we
wouldn't need space telescopes in the first place, would we? We could
just look out the window instead.

And the human eye can't correctly image even fairly close stars - we
perceive most stars as being white, even though they are strongly
coloured, because their light is too dim for our colour vision to pick
up. Fortunately, scientific imaging systems can show us their *real*
colour. Closer to home, scientific instruments create images via things
like soft X-rays or infrared light - situations where the capabilities &
limitations of the human eye are completely irrelevant.

The particular scaling system (whether it's linear, log, exponential,
bell-shaped or whatever) that's optimal for scientific imaging has
nothing whatever to do with how the eye perceives light, & everything to
do with the physics of whatever it is that the device is intended to
measure.

Displays, however, are not capable of outputting very high luminance
levels, but it so happens that the eye has an iris, so it can adapt
to different brightness levels,

The eye does a hell of a lot more to deal with large contrast ranges
than just adjust the iris. For example, the retina automatically
performs an astonishingly similar analog of darkroom or PS contrast
masking to 'correct' for localised highlights in the visual field that
would otherwise 'blow out', just as photographers do to 'correct' photos
of sunsets or other scenes with contrast ranges that are too big to
print or display.

therefore 1:1 linearity is not needed,
just an overall linearity of the transfer function is enough.
Nonlinearity in this path makes the image appear too dark or too
bright in some portion of the tonal reproduction range.


For starters, the light output of a display isn't even close to being
linear, nor should it be. If you actually look at the transfer graph for
a calibrated monitor, you'll find that the transfer curve follows a
power law. It would be no harder to calibrate a monitor to a completely
linear input-voltage to light-output relationship than to a 1.8 or 2.2
gamma curve, but if you then ran an extremely accurate linear greyscale
gradient across it, you'd get precisely the *perceived* non-linearity
you've just mentioned. We gamma-correct monitors for the *exact purpose*
of eliminating that non-linear perception.
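
A small sketch of that cancellation, assuming the common 2.2 figure:

    # Gamma encoding at the input side cancels the display's power-law
    # response, so the end-to-end scene-to-screen transfer is linear.
    GAMMA = 2.2

    def encode(y):               # scene luminance in [0, 1] -> signal
        return y ** (1 / GAMMA)

    def display(v):              # signal -> emitted luminance
        return v ** GAMMA

    for y in (0.01, 0.18, 0.5, 1.0):
        print(f"{y:4.2f} -> {display(encode(y)):.4f}")   # returns y each time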

which requires a non-linear response.


False.


No, I'm afraid not. You would do well to read up on how the human eye
works, as well as about scientific imaging techniques, because the stuff
you're saying is just plain wrong.

--
"Some people are alive only because it is illegal to kill them."
Perna condita delenda est
#168 - November 24th 04, 10:41 AM

Kibo informs me that Timo Autiokari stated
that:

On Mon, 22 Nov 2004 21:32:21 GMT, wrote:

I'm not so sure that ACR works in a totally linear domain.


It definitely doesn't.

Images
exposed with bracketing, and compensated to be the same with the
exposure slider, may have equal mid-tones, but the shadows and
highlights will reveal that a different gamma is used. If you drag the
ACR exposure slider to the left, after it runs out of "hidden
highlights", it stretches the highlights so that 4095 in the RAW data
stays anchored at 255 in the output and never gets darker. That is not
linear exposure compensation.
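
A sketch of the behaviour as described (my guess at a curve shape, not
ACR's actual math; the knee position is invented):

    # Darkening that keeps the top anchored: linear multiply up to a
    # knee, then a straight re-stretch so RAW max still lands on output
    # max and "never gets darker".
    RAW_MAX, OUT_MAX = 4095, 255
    KNEE = 0.75                          # hypothetical roll-off start

    def anchored_darken(v, factor):      # factor < 1 darkens midtones
        x = v / RAW_MAX
        if x <= KNEE:
            y = x * factor               # plain linear compensation
        else:                            # re-stretch KNEE..1 to KNEE*factor..1
            t = (x - KNEE) / (1 - KNEE)
            y = KNEE * factor + t * (1 - KNEE * factor)
        return round(y * OUT_MAX)

    print(anchored_darken(4095, 0.5))    # 255 -- max never gets darker
    print(anchored_darken(2000, 0.5))    # 62  -- midtones halve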


Nor is it an exact analog of adjusting by F-stops (i.e. non-linear),
which is what I'd like it to be. You should try C1 some time; its
exposure compensation control is *way* more like adjusting the
exposure compensation dial on a camera. Going from that to ACR
weirds me out every time.

That editing operation is still applied over linear image data, even
if the operation itself is not linear. Exposure adjustment is in fact
a linear operation, multiplication by a factor,


Incorrect. It's scaled in F-stops, which are exponential, not linear.
You'll find the mathematical details in any good textbook on
photography.
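
For concreteness, the conversion in question (a trivial sketch):

    # Stops are a log scale over linear multiplication factors:
    # "add n stops" and "multiply by 2**n" are the same change.
    import math

    def stops_to_factor(n):
        return 2.0 ** n

    def factor_to_stops(f):
        return math.log2(f)

    print(stops_to_factor(1))    # +1 stop  -> x2.0
    print(stops_to_factor(-2))   # -2 stops -> x0.25
    print(factor_to_stops(8))    # x8       -> +3.0 stops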

So at the middle it will calmly snip one level away; the coding there
is: ...7FF8h 7FFAh 7FFCh 7FFEh 8000h 8001h 8003h 8005h... Due to this
discontinuity I'd say that the 15th bit of Photoshop is quite unusable
for applications that require accuracy.


*sigh*

You're talking about the LSB of a 15-bit value sometimes skipping a
value which, (assuming that you're correct about it), is an inaccuracy
of around 0.003%. To put this hypothetical error into perspective, it'd
have to be at least *four times greater* to alter a 12-bit RAW image by
even a single step in value - a change that would not only be
completely invisible to the human eye, but would be completely swamped
by the much, much greater errors contributed by the sensor noise in the
camera, *plus* the ADC error in the camera, *plus* the colour-space
rounding errors in the computer, *plus* the DAC inaccuracy in your video
card, *plus* the video amp inaccuracy in your monitor. The 'error'
you're talking about is about as significant as an ICBM missing the
targeted position by a couple of feet.
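
The arithmetic behind those figures, for anyone who wants to check (a
sketch):

    # One skipped level out of 32768, versus the error needed to move a
    # 12-bit RAW value by a single step.
    one_level = 1 / 32768
    print(f"{one_level:.4%}")            # ~0.0031% -- the '0.003%' figure

    raw_step = 1 / 4096                  # one step of a 12-bit value
    # Rounding flips a 12-bit value once the error exceeds half a step:
    print((raw_step / 2) / one_level)    # 4.0 -- 'four times greater'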

--
"Some people are alive only because it is illegal to kill them."
Perna condita delenda est
 



