#71
Pentax K10D beats (sharpness, detail) Canon 40D?
David J Taylor wrote:
> Wolfgang Weisselberg wrote:
>> Oversampling in audio, unless I misunderstand, would be sampling with
>> a much higher frequency than the target 44.1kHz frequency. In DSLRs
>> that would mean using 4 (2x oversampling) or 16 (4x oversampling)
>> pixels where there is now one. For a 6MP resolution you'd need 24
>> million pixels at just 2x oversampling in width and height. For
>> physical reasons so many pixels need large (and expensive) sensors or
>> must deal with small full-well sizes and photon noise. On the other
>> hand, the new sRAWs may be downsampled from ordinary RAWs, and thus
>> count as oversampling + downsampling. Of course you can get the same
>> effect by resizing your photo intelligently from 8MPix to, say, 2MPix
>> (2x) or 0.9MPix (3x).
>
> Yes, the oversampling allows the first (analog) filter to be simpler,
> and the subsequent filtering to be done digitally. Final samples are
> delivered at 44.1kHz (or higher in studio work).

Ok.

> If oversampling were used for digital cameras, I don't think that the
> photon noise and dynamic range would be significantly worse, as the
> same area of silicon is used per output pixel.

No, it wouldn't. The fill factor isn't 100%, even with micro lenses.
You'll have only 1/4 or 1/9 or 1/16 of the electrons in every
sub-pixel, so you must up the analog gain before the A/D converter,
increasing noise along with the signal ...

Anyway, you can just as easily use a too-many-MPix digital P&S camera
with RAW output and then do your worst with the raw data --- dcraw
_is_ open source, after all --- and see if your expectations are met.
It's not impossible to remove the IR filter and the AA filter, as far
as I know. There you are, your perfect "oversampling" testbed.

> In your 2Mpix or 0.9Mpix analogy,

That's not an analogy.

> it would mean that you could use an anti-aliasing filter with an
> equivalently strong cut-off (i.e. less sharp 8Mpix images).

Basically I am not _using_ any 8MPix, but only 2/0.9MPix, so
upsampling that to 8MPix obviously is not adding any information.
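The "resize your photo intelligently from 8MPix to 2MPix" point — downsampling standing in for oversampling — can be sketched in a few lines of numpy. This is my own toy illustration with made-up photon counts, not anyone's actual pipeline:

```python
import numpy as np

# Made-up "RAW" data: photon counts for an 8x8 patch standing in for
# the 8 MPix capture discussed above (Poisson = photon shot noise).
rng = np.random.default_rng(0)
raw = rng.poisson(lam=1000, size=(8, 8)).astype(float)

# "Oversampling + downsampling": average each 2x2 block into one
# output pixel, quartering the pixel count (8 MPix -> 2 MPix).
h, w = raw.shape
down = raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Averaging 4 samples cuts the shot noise per output pixel by a factor
# of 2 (sqrt(4)) while leaving the mean signal untouched.
print(raw.std(), down.std())
```

The block averaging plays the role of the digital post-filter in the audio analogy: noise above the new, lower Nyquist rate is traded away for cleaner output pixels.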
> There remains one problem with digital photography though, as someone
> mentioned recently, that the very simple anti-aliasing filter

Well, moving 50% of all photons exactly one pixel in one vertical
direction and then moving 50% of all photons there exactly one pixel
in one horizontal direction doesn't sound "very simple" to me, but
that's apparently what Canon does --- it's not "just" blurring
randomly.

-Wolfgang
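For what it's worth, the pixel-shift description above — 50% of the light displaced one pixel vertically, then 50% of the result one pixel horizontally — works out to a plain 2x2 box-kernel convolution. A minimal numpy sketch of that equivalence (my own toy model, not Canon's actual filter):

```python
import numpy as np

# The birefringent AA filter splits light: 50% displaced one pixel
# vertically, then 50% of everything displaced one pixel horizontally.
# Applied in sequence, that is exactly a 2x2 box-kernel convolution.
def aa_filter(img):
    v = 0.5 * img + 0.5 * np.roll(img, 1, axis=0)   # vertical split
    return 0.5 * v + 0.5 * np.roll(v, 1, axis=1)    # horizontal split

img = np.zeros((6, 6))
img[2, 2] = 1.0          # a single bright photosite
out = aa_filter(img)

# The point source is spread evenly over a 2x2 block: a deterministic
# low-pass, not random blurring, and no light is lost.
print(out[2:4, 2:4])
```

The point-source test shows why this counts as a designed filter: every input photon lands in a predictable place, with exactly 1/4 of the energy in each of four neighbouring photosites.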
#72
Chris Malcolm wrote:
> Hence most cameras now come with menu-settable changes in the degree
> of sharpening applied to their jpgs, and to what level of image
> resolution. [...] What I don't understand is why the same kind of
> flexibility is not applied to AA filtering.

AA filtering is analog. This sharpening is digital. That's not
comparing apples and oranges, but apples and sunshine.

> I can't see any technological barriers to it.

You are aware that changing the focus screen is quite easy compared to
changing, say, the IR filter and AA filter?

> the fact that some camera makers (such as Leica, Sony, & Fuji, IIRC)
> are already in their top models edging in the direction of less
> optical AA filtering in order to exploit more of the native
> resolution in their technology it seems to me that camera technology
> has now got to the point where this has become relevant.

And Leica needed to add external IR blocking filters ... There is no
optimal solution, but there are worse and better solutions. Some
camera makers may think that less AA filtering is better (for selling
cameras, at least), even when that means "fun" with high-frequency
parts.

-Wolfgang
#73
Chris Malcolm wrote:
> Exactly. And that will require (among other things) optics and
> sensors as comfortably beyond the range of the capabilities of human
> vision as today's audio technology is beyond that of human hearing.

I doubt that will happen. Optics has been around for a _long_ time.
- The "Nimrud lens" is 3000 years old.
- The "Visby lenses" (Vikings, 11th century) show that they had access
  to rock crystal lenses, some of them aspherical, 5cm in diameter
  and, used as a loupe, with practically no distortion and very little
  spherical aberration. They rival even current-day CNC-made
  aspherical lenses.
- They also had very exact spheres (up to 4.5cm diameter), so the
  asphericality wasn't just because they didn't manage any better.
- Telescopes? 1608, the Dutch "perspective glass", on which a certain
  "Galileo Galilei" improved a lot, and got into trouble for thinking
  about what he saw.

Audio technology is quite new, in comparison:
- The phonautograph (patented 1857) recorded sound waves on
  lamp-blackened glass and paper.
- The paréophone (Charles Cros, 1877) described the first
  soundwave-membrane-needle and needle-membrane-soundwave system.
  Charles Cros also invented an early color photography method.
- It was built soon after by T. A. Edison as the (cylinder) phonograph
  ("sound writer").

So what _is_ new in optics? As far as I know (and I am certainly not
an expert!):
- Lens coating, usable early in WWII.
- Multicoating (a logical extension of coating), allowing many-element
  lenses.
- DO optics. Maybe that'll turn out to allow lighter/cheaper optics.
  Maybe not. It'll be an incremental improvement, though.
- IS technology. Not really an "optics" technology per se, but often
  added to lenses. On the other hand, tripods have been used since the
  first photographs. The Steadicam has been around since the late
  1970s. Gyro stabilisation is another option
  (http://www.ken-lab.com/stabilizers.html).
- Using computers to design lenses.
- Zoom lenses of ever-increasing quality and speed in the professional
  department.
- Fewer extremely fast lenses, e.g. Canon: 50mm f/1.0 -> f/1.2, 200mm
  f/1.8 -> 200mm f/2.0 (with IS). No 50mm f/0.95 lenses any more, not
  to speak of f/0.7 lenses. Probably a reaction to faster
  sensors/films.

In other words, nothing revolutionary, lots of evolutionary small
steps.

Sensors, now ... Digital sensors used in research are very near the
theoretical maximum. Have a look at
http://www.fairchildimaging.com/main...aSheetRevB.pdf
More than 90% quantum efficiency, 100% fill factor, near no dark
current, low read noise --- what more can one do?
- The filters of the Bayer sensor eat quite a bit of light. I
  understand that a CYGM pattern eats less light, but it seems no new
  cameras are using it any more. RGBE is not used much either. Kodak
  has recently thought publicly about making every second pixel in a
  Bayer sensor unfiltered, reducing color resolution even further, for
  a stop or two more sensitivity.
- Non-Bayer solutions (Foveon, 3-CCD, ...) have other technical
  problems, often especially in the low-light area.
- The fill factors of modern DSLR sensors are already good; you might
  squeeze out half a stop, but at a high price.
- Use back-illuminated sensors instead of front-illuminated ones. You
  _might_ be able to get 1.5-2 stops out of it, but backlit sensors
  are _really_ expensive.

So there you are. There's nothing you can do about the fact that
photons are basically a scarce resource, quantized and arriving
randomly (= photon noise). You can use larger pixels (less resolution
on the same silicon area, or larger formats needed). You can go for
16-bit converters for current pixel sizes, allowing you to forget ISO
for RAW shooting[1] --- unless you increase the full well capacity,
which can increase dynamic range, but needs more light to do it (to
get the larger wells full). The 40D already has a 14-bit A/D
converter.
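As a side note, the full-well and converter trade-offs above can be put into numbers: engineering dynamic range is just the full-well capacity over the read-noise floor. A back-of-the-envelope sketch with assumed figures (the electron counts below are illustrative, not measured 40D values):

```python
import math

# Illustrative (assumed) numbers, not measured values for any camera:
full_well  = 40000   # electrons a photosite can hold before clipping
read_noise = 8       # electrons RMS added by the readout chain

# Engineering dynamic range: ratio of the largest recordable signal to
# the noise floor, expressed in stops (powers of two).
stops = math.log2(full_well / read_noise)
print(round(stops, 1))   # ~12.3 stops

# A 14-bit converter can encode 2**14 = 16384 levels; with these
# numbers the sensor itself limits dynamic range before the ADC does.
```

This is why "go for 16-bit converters" alone buys nothing: unless the full well grows (which needs more light to exploit), the extra bits only slice the same sensor-limited range more finely.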
And P&S cameras already have too many MPix ... Now, someone will come
up with some schemes based on the time to fill the sensor, or
multiplying photons for higher sensitivity ...

-Wolfgang

[1] The 20D's 12-bit converter arrives at unity (+1 photon caught ==
+1 in the A/D converter) at around ISO 1200. With 14 bits the least
significant bit would indicate 1 photon at ISO 300. With 16 bits,
unity would arrive at ISO 75, and we could ignore all ISO settings
(except for in-camera JPEGs); we'd get a "louder" picture, but not ONE
BIT of detail more.
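The footnote's unity-gain arithmetic can be checked mechanically: each extra converter bit doubles the number of steps across the same full-scale signal, so unity gain (one photoelectron per A/D step) is reached at half the ISO. A sketch reproducing the quoted 20D-based figures (the base numbers are the post's; the halving-per-bit rule is the assumption):

```python
# At "unity gain" one captured photoelectron maps to one step of the
# A/D converter. Each extra bit doubles the step count over the same
# full-scale signal, halving the unity-gain ISO.
base_bits, base_iso = 12, 1200   # 20D: 12-bit converter, unity ~ISO 1200

for bits in (12, 14, 16):
    iso = base_iso / 2 ** (bits - base_bits)
    print(bits, int(iso))
# 12 -> 1200, 14 -> 300, 16 -> 75
```

Below the unity-gain ISO, extra analog gain just multiplies both signal and quantisation step together — the "louder picture, not one bit of detail more" of the footnote.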
#74
Wolfgang Weisselberg wrote:
> David J Taylor wrote:
>> If oversampling were used for digital cameras, I don't think that
>> the photon noise and dynamic range would be significantly worse, as
>> the same area of silicon is used per output pixel.
>
> No, it wouldn't. The fill factor isn't 100%, even with micro lenses.
> You'll have only 1/4 or 1/9 or 1/16 of the electrons in every
> sub-pixel, so you must up the analog gain before the A/D converter,
> increasing noise along with the signal ...

That's why I said "significantly". If the area is more or less the
same, the photons captured are the same, hence the same S/N (in the
photon-limited case).

> Anyway, you can just as easily use a too-many-MPix digital P&S camera
> with RAW output and then do your worst with the raw data --- dcraw
> _is_ open source, after all --- and see if your expectations are met.
> It's not impossible to remove the IR filter and the AA filter, as far
> as I know. There you are, your perfect "oversampling" testbed.

I don't have either the time or facilities for such tests.

>> In your 2Mpix or 0.9Mpix analogy,
>
> That's not an analogy.
>
>> it would mean that you could use an anti-aliasing filter with an
>> equivalently strong cut-off (i.e. less sharp 8Mpix images).
>
> Basically I am not _using_ any 8MPix, but only 2/0.9MPix, so
> upsampling that to 8MPix obviously is not adding any information.

... and I was not suggesting upsampling, just commenting on what a
2Mpix AA filter would look like on the 8Mpix captured image.

>> There remains one problem with digital photography though, as
>> someone mentioned recently, that the very simple anti-aliasing
>> filter
>
> Well, moving 50% of all photons exactly one pixel in one vertical
> direction and then moving 50% of all photons there exactly one pixel
> in one horizontal direction doesn't sound "very simple" to me, but
> that's apparently what Canon does --- it's not "just" blurring
> randomly.

It's what all manufacturers attempt to do with layers of birefringent
materials.
http://en.wikipedia.org/wiki/Anti-aliasing_filter

David
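Both halves of this exchange — "same silicon, same photons, same S/N" and the objection that every sub-pixel readout adds its own noise — show up in a quick Monte Carlo sketch. The numbers are idealised assumptions (fill factor ignored, 10 e- RMS read noise invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, photons = 200_000, 10_000   # mean photons striking one "big" pixel

# Same silicon area two ways: one big pixel, or four sub-pixels whose
# counts are summed back into one output pixel.
big  = rng.poisson(photons, n_trials)
subs = rng.poisson(photons / 4, (n_trials, 4)).sum(axis=1)

def snr(x):
    return x.mean() / x.std()

# Photon-limited case: both come out near sqrt(10000) = 100, which is
# the "same S/N" half of the argument.
print(snr(big), snr(subs))

# The objection: each readout adds its own noise (assumed 10 e- RMS
# here), and four sub-pixels pay that price four times in quadrature.
def read_noise(n):
    return rng.normal(0, 10, (n_trials, n)).sum(axis=1)

print(snr(big + read_noise(1)), snr(subs + read_noise(4)))
```

So the disagreement is really about which noise dominates: shot noise (area-conserving, so oversampling is free) or per-readout noise (which multiplies with the sub-pixel count).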
#75
Wolfgang Weisselberg wrote:
> Chris Malcolm wrote:
>> Exactly. And that will require (among other things) optics and
>> sensors as comfortably beyond the range of the capabilities of human
>> vision as today's audio technology is beyond that of human hearing.
>
> I doubt that will happen.
[...]
> Sensors, now ...
[...]
> [...] even further, for a stop or two more sensitivity.
> - Non-Bayer solutions (Foveon, 3-CCD, ...) have other technical
>   problems, often especially in the low-light area.
> - The fill factors of modern DSLR sensors are already good; you might
>   squeeze out half a stop, but at a high price.
> - Use back-illuminated sensors instead of front-illuminated ones. You
>   _might_ be able to get 1.5-2 stops out of it, but backlit sensors
>   are _really_ expensive.
[...]
> You can go for 16-bit converters for current pixel sizes, allowing
> you to forget ISO for RAW shooting[1] --- unless you increase the
> full well capacity, which can increase dynamic range, but needs more
> light to do it (to get the larger wells full). The 40D already has a
> 14-bit A/D converter.

You've just listed several ways it could happen.

Greg
--
http://lodesertprotosites.org
Ticketmaster and Ticketweb suck, but everyone knows that:
http://www.ticketmastersucks.org
Dethink to survive - Mclusky
#76
G.T. wrote:
> Wolfgang Weisselberg wrote:
>> Chris Malcolm wrote:
>>> Exactly. And that will require (among other things) optics and
>>> sensors as comfortably beyond the range of the capabilities of
>>> human vision as today's audio technology is beyond that of human
>>> hearing.
>>
>> Sensors, now ... [...]
>
> You've just listed several ways it could happen.

Hmmm, and they are then "comfortably beyond the range of the
capabilities of human vision"? I /doubt/ that. After all, you cannot
get a "larger print" of audio data, so there is an upper limit.

If we ignore that, let's just point out that with microscopes and
telescopes, macro lenses, wide-angle lenses and tele lenses we *are*
already /way/ "comfortably beyond the range of the capabilities of
human vision". Sensors that can collect single photons and give color
images when our own eyes don't see a thing even in B/W any more?
Already done. 20+ MPix resolution? What problem?

Well, the problem is that while our eye only focusses on one thing and
has a very tiny area of sharp vision[1], the brain "stitches" a
panorama image from the saccadic eye movements[2], preknowledge, and
tons of postprocessing ... noting, given enough time, a lot of things.
Just look at TV, and compare a still frame, especially one with some
noise (e.g. snow) in it, to the moving images ...

-Wolfgang

[1] The Fovea centralis, part of the Macula lutea (yellow spot), is
just 0.5mm(!) in diameter. It sees about 2 degrees --- twice the width
of your thumb on your outstretched arm. Don't forget that the retina
*massively* compresses the visual data as well, as can be shown by
e.g. afterimages and the fading of completely static images.

[2] During which blur is intelligently suppressed (look into a mirror
into your right eye, then your left eye. Did you see your eyes
moving?).
#77
David J Taylor wrote:
> Chris Malcolm wrote:
>> I know. I used to be one of them. More than twenty-five years ago I
>> was writing code to get rid of aliasing artefacts in monochrome
>> 256x256 digital camera images :-) That was back in the old days when
>> the only way to acquire a digital camera was to make it yourself by
>> unsoldering the metal top of a military-spec dynamic RAM chip and
>> then focusing an image on it with a lens.
>
> Fascinating - did you have a lot of success with your code? What
> nature of test images did you use (edges or bar wedges), and did you
> end up designing in the frequency or time/space domain?

It was a very specific application, not general purpose. In the end we
chose a combination of two methods. First, defocus the camera a
little. Second, use the contextual prior knowledge that the visible
blurred edge is in fact a real-world sharp edge, and sharpen it. We
used that to interpolate the sharpened edge location into a virtual
image of twice the pixel dimensions of the original image.

--
Chris Malcolm DoD #205
IPAB, Informatics, JCMB, King's Buildings, Edinburgh, EH9 3JZ, UK
[http://www.dai.ed.ac.uk/homes/cam/]
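The defocus-then-sharpen trick described above can be illustrated in 1-D: a slightly blurred step edge lets you locate the true edge to sub-pixel accuracy by interpolating the half-amplitude crossing, then redraw it sharp at twice the resolution. A toy sketch — my own reconstruction of the idea, not the original code, with an assumed edge position:

```python
import numpy as np

# A sharp real-world edge at an assumed sub-pixel position, slightly
# defocused so the transition spans a couple of photosites (the
# "defocus the camera a little" step).
true_edge = 3.4
x = np.arange(8, dtype=float)
signal = np.clip((x - true_edge) / 2 + 0.5, 0, 1)   # blurred step ramp

# Prior knowledge: the blur hides a sharp edge, so locate it at the
# half-amplitude crossing by linear interpolation between samples.
mid = (signal.min() + signal.max()) / 2
i = np.searchsorted(signal, mid)            # first sample at/above mid
est = (i - 1) + (mid - signal[i - 1]) / (signal[i] - signal[i - 1])
print(est)                                  # ≈ 3.4

# Redraw the edge sharp in a virtual image of twice the resolution.
virtual = (np.arange(16) + 0.5) / 2 >= est
```

The mild defocus is what makes this work: it spreads the edge over several samples so the crossing can be interpolated, instead of the edge landing entirely inside one pixel.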
#78
acl wrote:
> On Oct 28, 1:46 pm, Chris Malcolm wrote:
>> David J Taylor wrote:
>>> I understand anti-aliasing perfectly in audio.
>>
>> What makes the audio problem and its typical solutions different is
>> that it's easy with quite cheap technology to make systems with
>> accurate responses well above the frequency range of human hearing.
>> So the aliasing problems and the technology to remove them behave
>> very much more like the simplified theoretical mathematical models.
>> That makes the practical engineering and design problems much
>> simpler.
>
> Well, is there some aspect of the discussion here that you think
> doesn't really correspond to reality? (since you mention simplified
> theoretical models, which is of course true). I am curious, rather
> than confrontational.
>
>> No, I was referring to the engineering problems of designing with
>> imperfect approximations to the ideal components of theory, such as
>> the famous inelastic string and dimensionless points of physics.
>
>>> BTW: the design of the low-pass filter is complex and can cause
>>> overshoots and ringing, as seen in some photos. Multi-level
>>> sampling at different rates can be used to simplify the filter
>>> design. It's quite an interesting area - there's probably quite a
>>> lot of expertise in your Engineering Department, but I don't have
>>> any names.
>>
>> I know. I used to be one of them. More than twenty-five years ago I
>> was writing code to get rid of aliasing artefacts in monochrome
>> 256x256 digital camera images :-) That was back in the old days when
>> the only way to acquire a digital camera was to make it yourself by
>> unsoldering the metal top of a military-spec dynamic RAM chip and
>> then focusing an image on it with a lens.
>
> That's interesting. Do you know of current algorithms that do
> something other than remove colour artifacts? I haven't seen any (but
> I never looked carefully into the matter).
I'm decades out of date in that area, but if I was looking for that,
I'd look first into this online compendium of computer vision produced
by my colleague Prof Fisher :-)
http://homepages.inf.ed.ac.uk/rbf/CVonline/

--
Chris Malcolm DoD #205
IPAB, Informatics, JCMB, King's Buildings, Edinburgh, EH9 3JZ, UK
[http://www.dai.ed.ac.uk/homes/cam/]
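The sampling theory behind this whole exchange fits in a few lines: without a pre-sampling low-pass filter, any frequency above Nyquist masquerades as a lower one. A minimal audio-flavoured sketch of that folding:

```python
import numpy as np

# A frequency above Nyquist, sampled without an anti-aliasing filter,
# is indistinguishable from a lower one. Here a 900 Hz tone sampled at
# 1 kHz lands on exactly the same samples as a 100 Hz tone.
fs = 1000.0
t = np.arange(32) / fs
high = np.cos(2 * np.pi * 900 * t)
low  = np.cos(2 * np.pi * 100 * t)
print(np.allclose(high, low))   # True: 900 Hz aliases to fs - 900 = 100 Hz
```

Once the samples are taken, no amount of digital cleverness can tell the two tones apart — which is why the filtering must happen before the sensor or ADC, in analog, exactly as discussed above.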
#79
Chris Malcolm wrote:
> David J Taylor wrote:
>> Chris Malcolm wrote:
>>> I know. I used to be one of them. More than twenty-five years ago I
>>> was writing code to get rid of aliasing artefacts in monochrome
>>> 256x256 digital camera images :-) [...]
>>
>> Fascinating - did you have a lot of success with your code? What
>> nature of test images did you use (edges or bar wedges), and did you
>> end up designing in the frequency or time/space domain?
>
> It was a very specific application, not general purpose. In the end
> we chose a combination of two methods. First, defocus the camera a
> little. Second, use the contextual prior knowledge that the visible
> blurred edge is in fact a real-world sharp edge, and sharpen it. We
> used that to interpolate the sharpened edge location into a virtual
> image of twice the pixel dimensions of the original image.

OK, I understand what you're doing. Having a limited domain in which
to work helps a lot in that case.

Thanks,
David
#80
On Nov 3, 4:44 pm, Chris Malcolm wrote:
> acl wrote:
>> On Oct 28, 1:46 pm, Chris Malcolm wrote:
>>> David J Taylor wrote:
>>>> I understand anti-aliasing perfectly in audio.
>>>
>>> What makes the audio problem and its typical solutions different is
>>> that it's easy with quite cheap technology to make systems with
>>> accurate responses well above the frequency range of human hearing.
>>> So the aliasing problems and the technology to remove them behave
>>> very much more like the simplified theoretical mathematical models.
>>> That makes the practical engineering and design problems much
>>> simpler.
>>
>> Well, is there some aspect of the discussion here that you think
>> doesn't really correspond to reality? (since you mention simplified
>> theoretical models, which is of course true). I am curious, rather
>> than confrontational.
>
> No, I was referring to the engineering problems of designing with
> imperfect approximations to the ideal components of theory, such as
> the famous inelastic string and dimensionless points of physics.

Yes, my question then was "what, here, do you believe is analogous to
the inelastic string and points of physics?" Because, to the best of
my knowledge (which isn't much, mind you) we're not really simplifying
anything in the above discussions of aliasing (the finite size of the
pixels and other effects like that, which are usually ignored, are
easy to take into account and don't do much).

>> I know. I used to be one of them. More than twenty-five years ago I
>> was writing code to get rid of aliasing artefacts in monochrome
>> 256x256 digital camera images :-) That was back in the old days when
>> the only way to acquire a digital camera was to make it yourself by
>> unsoldering the metal top of a military-spec dynamic RAM chip and
>> then focusing an image on it with a lens.
>
> That's interesting. Do you know of current algorithms that do
> something other than remove colour artifacts? I haven't seen any (but
> I never looked carefully into the matter).
> I'm decades out of date in that area, but if I was looking for that,
> I'd look first into this online compendium of computer vision
> produced by my colleague Prof Fisher :-)
> http://homepages.inf.ed.ac.uk/rbf/CVonline/

Absolutely brilliant, many thanks! I haven't found anything relevant,
but it is very interesting nonetheless.
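One of the "finite size of the pixels" effects mentioned in this exchange really is easy to take into account: a photosite of width w box-averages the scene, which multiplies the system MTF by a sinc factor. A one-line sketch (assuming a 100% fill factor, so w = 1 pixel):

```python
import numpy as np

# A photosite of width w averages light over a box, multiplying the
# system MTF by |sinc(f * w)|, with f in cycles per pixel and w = 1
# pixel assumed here (100% fill factor).
f = np.array([0.0, 0.25, 0.5])        # up to Nyquist at 0.5 cycles/pixel
mtf = np.abs(np.sinc(f * 1.0))        # numpy's sinc(x) = sin(pi x)/(pi x)
print(mtf)   # 1.0 at DC, falling to ~0.64 at Nyquist
```

Since the factor stays above 0.6 all the way to Nyquist, the pixel aperture softens contrast somewhat but cannot by itself prevent aliasing — hence the separate optical AA filter debated throughout this thread.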