Thread: "16-bit" mode.
  #228
November 29th 04, 07:43 AM
Dave Martindale

Mike Engles writes:

> "Believe it or not, this was the way it was done
> 20 years ago, but the idea got lost along the way, leading to the mess
> described in this memo. Unfortunately, it is probably too late to
> change. The technique offered here is the best that can be done short of
> changing all the hardware."


There's more to the history than that. Computer graphics started out
using 8-bit linear images, because it was simple and obvious.
Television started out using analog gamma-corrected voltages, because
there were a bunch of good reasons to put the gamma correction in the
camera instead of in the receiver. But along the way, computers started
generating television signals, and digitizing television, and television
itself became digital, and then photography came along and borrowed from
all of these.

> It seems that there was a different way and the only people who use it
> now are the scientists.


The vast majority of digital images in existence are probably 8-bit
"gamma corrected" data, because that's a sweet spot in the tradeoff
between cost and results. But there are people working with
fixed-point linear data and floating-point linear data, and people who
store one way and process the other.
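
As a concrete illustration of "store one way and process the other":
here is a minimal sketch of my own (not anything from the thread; the
function names and workflow are just illustrative), assuming the common
sRGB-style transfer curve for the 8-bit "gamma corrected" storage:

    # Sketch: store 8-bit gamma-encoded values, process in linear float.
    # Assumes the sRGB transfer function; values are single channels.

    def srgb_to_linear(v8):
        """Decode an 8-bit sRGB-encoded code value to linear light in [0, 1]."""
        v = v8 / 255.0
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(lin):
        """Encode a linear-light value in [0, 1] back to an 8-bit sRGB code."""
        lin = min(max(lin, 0.0), 1.0)
        v = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
        return round(v * 255.0)

    # Example: average two stored pixels in linear light, then re-encode.
    a, b = 64, 200
    mixed = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2.0)

Averaging the 8-bit codes directly would give a visibly darker result
than this linear-light average, which is exactly why people bother to
decode to linear before processing and re-encode only for storage.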

Dave