A few software-only features to improve DSLRs 
Friday, January 12, 2007, 11:14 AM - Photography
From time to time, I find myself wondering "why is this feature not available in my camera?".

To me, some features would be quite handy in certain situations, and strangely they are not always available. The surprising thing is that many of them are pure software features, which means they would increase development/validation/user documentation/support costs a little, but would not increase production cost at all. Since some manufacturers have implemented some of them, in most cases it's clearly not an implementation issue, but probably just a choice by the manufacturer.

Here is a list of some of those features:

*Focus confirmation and sensor stabilization on manual lenses:
If the body already provides sensor stabilization, it could be used with manual lenses too: as for focus confirmation, the only missing piece of information is the focal length. So why not let users enter it manually?

*Focal length information for AF zoom lenses:
Obviously the camera knows this, as it's part of the exif information. Why not display it in the viewfinder while the user is changing the focal length?

*Depth of field information:
It could easily be computed in-camera from the focal length, aperture and focus distance (see the sketch below).
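
For reference, here is a minimal sketch of what the computation looks like, using the standard thin-lens hyperfocal formulas (the circle of confusion value is an assumption that depends on the sensor size):

    #include <stdio.h>

    /* Depth of field from the standard hyperfocal formulas.
       f: focal length (mm), N: f-number, s: subject distance (mm),
       c: circle of confusion (mm), e.g. about 0.02 mm for APS-C. */
    static void depth_of_field(double f, double N, double s, double c,
                               double *dof_near, double *dof_far)
    {
        double H = f * f / (N * c) + f;           /* hyperfocal distance */
        *dof_near = s * (H - f) / (H + s - 2 * f);
        *dof_far  = (s < H) ? s * (H - f) / (H - s)
                            : -1.0;               /* -1 means infinity */
    }

    int main(void)
    {
        double dn, df;
        depth_of_field(50.0, 8.0, 3000.0, 0.02, &dn, &df);
        printf("in focus from %.0f mm to %.0f mm\n", dn, df);
        return 0;
    }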

*Trap focus:
You manually set the focus, and when the camera detects something in focus at the specified focus location/distance, it fires. This would make macrophotography a lot easier.
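
A rough sketch of the idea; the camera_* functions here are hypothetical firmware hooks, not a real API:

    /* Hypothetical firmware hooks, for illustration only. */
    extern int  camera_focus_confirmed(void); /* AF sensor says "in focus" */
    extern void camera_fire_shutter(void);

    /* Trap focus: focus is set manually, and the body fires by itself
       as soon as the AF sensor confirms focus at that distance. */
    void trap_focus_wait(void)
    {
        while (!camera_focus_confirmed())
            ;                      /* poll until something is in focus */
        camera_fire_shutter();
    }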

*Ability to set the auto-ISO interval:
My camera only adjusts ISO sensitivity automatically between 100 and 400. While that's nice in full light, when the light gets dimmer it becomes quite annoying to have to switch between ISO settings manually. I'd like to be able to change the interval to 200-800 for some indoor shots, or even to 800-3200 for handheld night photography.
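
The selection logic itself is trivial; here is a sketch of what a user-configurable interval could look like (the 1/focal-length handheld rule used here is just an assumption for the example):

    /* Pick the lowest ISO inside a user-defined interval that keeps the
       shutter speed at or above the handheld limit (~1/focal_length s).
       shutter_s is the exposure time metered at iso_min; iso_min and
       iso_max are the user setting proposed above. */
    int auto_iso(double shutter_s, int iso_min, int iso_max, double focal_mm)
    {
        double limit = 1.0 / focal_mm; /* classic handheld rule of thumb */
        int iso = iso_min;
        while (iso < iso_max && shutter_s > limit) {
            iso *= 2;          /* one stop more sensitivity...         */
            shutter_s /= 2.0;  /* ...halves the required exposure time */
        }
        return iso;
    }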

*Ability to have a preview shot with histogram:
Take a shot that is not written to the flash card, just to get an idea of the light rendering and the histogram values. For newcomers it would be very nice, especially for night scenes.
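
Computing the histogram itself is cheap; a minimal sketch on an 8-bit luminance buffer:

    #include <stddef.h>
    #include <string.h>

    /* Build a 256-bin histogram from an 8-bit luminance buffer.
       A preview shot would run this on the captured frame and display
       the result, without ever writing the image to the card. */
    void luminance_histogram(const unsigned char *luma, size_t n,
                             unsigned long hist[256])
    {
        size_t i;
        memset(hist, 0, 256 * sizeof hist[0]);
        for (i = 0; i < n; i++)
            hist[luma[i]]++;
    }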

*In-camera on-demand raw to jpeg conversion:
Ability to convert raw files to jpeg in camera, with the option to apply exposure compensation.
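
On linear raw data, exposure compensation boils down to a multiplication before gamma correction and jpeg encoding; a simplified sketch (a real converter also handles white balance, demosaicing and highlight clipping much more carefully):

    #include <math.h>

    /* Apply exposure compensation to a linear raw sample normalized
       to [0,1]. ev = +1.0 doubles the value (one stop brighter),
       ev = -1.0 halves it. */
    static float apply_ev(float linear_value, float ev)
    {
        float v = linear_value * powf(2.0f, ev);
        return v > 1.0f ? 1.0f : v;   /* clip at the white point */
    }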

*Timer-based interval shooting:
Nice for lightning shots.

CCD reading and mechanical considerations in DSLR 
Thursday, December 28, 2006, 11:45 AM - Photography
Found a very interesting interview with Pentax engineers about the K10D.
They mainly talk about data acquisition/readout from the 10Mp sensor and mechanical considerations.

About speed increase between x86 and x64, AMD vs Intel 
Thursday, December 14, 2006, 11:08 AM - Optimization
When switching from x86 to x64, it seems that Lame gets about a 15% speed increase. Something to consider is that in x86 mode we are using some hand-coded MMX and 3DNow! functions, plus some SSE intrinsics functions (well, only 1 function right now), while in x64 we are only using the SSE intrinsics functions. So the comparison is:

x86: MMX + 3DNow! + SSE
x64: SSE only => +15% speed
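
For reference, the SSE intrinsics style in question looks like this; a generic dot product sketch, not the actual Lame function:

    #include <xmmintrin.h>

    /* Scalar product of two float vectors using SSE intrinsics.
       Assumes n is a multiple of 4 and both pointers are 16-byte
       aligned. */
    float dot_sse(const float *a, const float *b, int n)
    {
        __m128 acc = _mm_setzero_ps();
        float tmp[4];
        int i;
        for (i = 0; i < n; i += 4)
            acc = _mm_add_ps(acc, _mm_mul_ps(_mm_load_ps(a + i),
                                             _mm_load_ps(b + i)));
        _mm_storeu_ps(tmp, acc);    /* horizontal sum of the 4 lanes */
        return tmp[0] + tmp[1] + tmp[2] + tmp[3];
    }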

In many benchmarks comparing x86 vs x64, when there is a speed increase, the speedup is bigger on K8-based processors (Athlon64/Opteron) than on Core-based processors (Core 2). I've seen several articles implying, based on such tests, that Core processors might perform sub-optimally in x64.

Let's try to find a possible explanation, based on the Lame results.

*Lame does not use 64-bit integer arithmetic, so the speedup cannot be caused by the ability to process this kind of computation in fewer cycles.

*Lame is heavily floating-point based.

The speed increase could come from the compiler vectorizing code to use SSE/SSE2 operations, which are always available in x64. However, experience shows that current compilers are not good at fully automatic vectorization, so this is unlikely to be the case.

As Lame uses single-precision floating-point arithmetic, we can also rule out any (unlikely) speed increase coming from double-precision arithmetic in x64.

The only remaining point that I can think of is the fact that the SSE ISA is register-based while the x87 ISA is stack-based, and x64 adds even more registers to the SSE ISA (16 xmm registers instead of 8).

Could it be because x64 increases the number of internal registers of the floating-point units? Unlikely, as the K8 floating-point core already has 120 internal registers available, which is plenty.

The likely explanation is that SSE code is more compact than x87 code. SSE is register-based, while x87 is stack-based. In a stack-based model, you first have to push your operands onto the top of the stack, and only then can you do the calculation. In contrast, in a register-based scheme you can keep data in other registers while operating on new data, as you can do computation on data stored in any register, not just on top of the stack.
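
To make this concrete with a trivial example, here is roughly what a compiler emits for c = a + b in single precision in both models (typical sequences; actual compiler output varies):

    /* c = a + b in single precision.
     *
     * Typical x87 sequence (stack based):
     *     fld   dword ptr [a]   ; push a onto the FP stack
     *     fadd  dword ptr [b]   ; st(0) = a + b
     *     fstp  dword ptr [c]   ; pop the result into c
     *
     * Typical SSE sequence (register based):
     *     movss xmm0, [a]
     *     addss xmm0, [b]
     *     movss [c], xmm0
     *
     * With x87 every value transits through the top of the stack; with
     * SSE the compiler can keep values live in any of the 8 xmm
     * registers (16 in x64) while working on new data. */
    float add(float a, float b)
    {
        return a + b;
    }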

A point to consider is that the SSE ISA uses 128-bit registers, which K8 must process in 2 chunks of 64 bits. It means that unvectorized SSE, carrying only 32 bits of useful data per register, has to use the floating-point computation units twice (with 1 run totally wasted), compared to only once for x87, to get the same computational result.
Despite this, K8 provides a substantial floating-point speedup when comparing SSE-based x64 mode to x87-based x86 mode. Obviously, it means that the computation units are not fully loaded, otherwise the doubled computation would decrease speed.

It seems that K8 features good execution units, but is not optimally efficient at feeding them, and that SSE-based floating point in x64 helps keep its units fed.

This would mean that the speed increase witnessed in x64 comes not from the 64-bit mode itself, but from the fact that it helps a sub-optimal decoding stage on K8. It would explain a few things:

*There is a bigger speed increase when going from x86 to x64 on K8 than on Core, as Core features a more efficient decoding stage (micro- and macro-op fusion).
*The speed increase when going from x86 to x64 is bigger (on K8) in floating-point based software than in integer-based software.

Does higher density sensor imply more noise? 
Thursday, November 30, 2006, 11:49 AM - Photography
This year, there is a trend toward increased sensor density in DSLRs, mainly because of the new 10-megapixel cameras.
Many people (including me) are wondering about the noise level of those cameras compared to the lower-density sensors. We have the following models in competition:

Canon 400D (10Mpix) vs Canon 350D (8Mpix)
Sony alpha-100 (10Mpix) vs KM 5D/7D (6Mpix)
Pentax K10D (10Mpix) vs Pentax K100D (6Mpix)
Olympus E400 (10Mpix) vs Olympus E500 (8Mpix)

The first observation is usually that the 10Mpix cameras produce more noise at higher ISO sensitivities. Comparing full-size crops side by side shows an increased noise level.

But there is something that should be considered: the increased resolution. Let's compare the noise of a 10Mpix image to what a 6/8Mpix camera would give you, with the 10Mpix output resized down to 6/8Mpix. Once resized, it's not that obvious anymore that the noise level is increased, since downsampling averages neighbouring pixels and averages part of the noise away.
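
A back-of-the-envelope check, assuming uncorrelated per-pixel noise and plain averaging during the resize:

    averaging k pixels divides the noise standard deviation by sqrt(k)
    10 Mpix resized to 6 Mpix: k = 10/6, about 1.7
    noise after resize: sigma / sqrt(1.7), about 0.77 * sigma
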
So for 10Mpix DSLRs we have:

*increased noise at full image size
*similar noise when resized
*higher resolution
*similar noise level at low ISO
*lack of 3200 ISO setting (usually)

It seems that, after all, the only real drawback might be the lack of a 3200 ISO setting, not really an increased noise level.

