Saturday, January 27, 2007, 08:09 PM - Lenses
As a zoom lens for my Dynax 5D, I originally bought a relatively cheap Tamron 70-300 zoom. With all the positive things said on the Dyxum forum about the old Minolta lens known as the "Beercan", I decided to buy one. In order to know how the two really compare, I ran a few tests. So here it is, a comparison of those two lenses:
*Minolta AF 70-210 F4
*Tamron AF 70-300 F4-5.6 LD
The Minolta, being an old design from about 1985, is only available second hand, while you can buy the Tamron new. Price-wise, both are in the same range.
First, let's compare both lenses without a camera.
*Body:
The Tamron body is plastic, while the Minolta one is metal. Fortunately, both have a metal rear mount. This is purely subjective, but the Tamron really feels cheap when compared to the Beercan; the two are simply not in the same league. Small detail: the serial number is on a sticker on the Tamron, while it is engraved on the Minolta.
*Weight:
435g for the Tamron, 695g for the Beercan. That's an important difference for lenses that are meant to be used without a mono/tripod.
*Zoom range:
As indicated by their respective names, the Tamron is 70-300, while the Minolta is only 70-210. Obviously 1 point for the Tamron.
*Aperture:
The Minolta is a constant F/4 aperture, while the Tamron has a more usual F/4-5.6 aperture. At maximum zoom, the Minolta is 1 stop faster, but you should note that when set at 210mm, the Tamron can be opened up to F/5, so it's only about 2/3 of an F-stop slower than the Minolta (2*log2(5/4) = 0.64 stop). The Minolta being a constant maximum aperture lens, it is easier to use in fully manual mode.
*Minimum aperture:
The minimum aperture on the Tamron is F/22, while the Minolta can be closed down to F/32, 1 F-stop further. In practice, a scene metering 1/30s at F/22 can be shot at 1/15s at F/32, so in some situations the Minolta could let you avoid a neutral density filter when you need a slow shutter speed.
*Macro:
Both are claimed to be able to do some macro shots, but don't count on them if you really want to do macrophotography often. They only reach their maximum magnification (1:4 on the Minolta, 1:2 on the Tamron, i.e. the subject is projected at a quarter or half of its real size on the sensor) at full zoom extension, so it's not easy at all when compared to a 100mm macro.
*Filter diameter:
55mm for the Minolta, 62mm for the Tamron. Both filter sizes will be in a similar price range.
*Zoom ring:
When zooming, the Tamron extends while the Minolta does not. Both zoom rings are of similar size, but the Minolta's is a lot smoother. There is no zoom creep on either lens, and the front element does not rotate when zooming.
*Focus ring:
The front element rotates on both lenses when focusing. The focus ring is smoother on the Beercan, but it is very small, so small that you might want to use the lens hood to rotate it. As the Beercan was released when Minolta was pushing its first autofocus SLRs, the tiny manual focus ring seems to have been a deliberate marketing decision.
*Rear element:
While the rear element is sealed and easily reachable for cleaning on the Minolta, the Tamron deserves a special mention.
On the Tamron, the rear element sits a bit deep inside the lens body, so it is not that easy to reach when you want to clean it. Worse, it goes even deeper inside the lens when zooming, opening access to a kind of internal cavity located around the optical elements. If dust gets inside, it will be nearly impossible to clean. It won't directly affect optical performance, but it can then migrate into your camera body when zooming in or out.
Now, it's time to test both zooms on a camera. I tried them on my 5D, which has a 6MP sensor. Exif info should still be present in all shots.
*Focus speed:
Neither of these lenses is a speed king, but focusing is faster on the Minolta, and it is also quieter than the Tamron.
*Resolution test:
Minolta, 75mm, F/4:
Tamron, 75mm, F/4:
Minolta, 75mm, F/8:
Tamron, 75mm, F/8:
At 75mm, wide open, there is not much difference between both lenses. Closed down to F/8, the Minolta Beercan is a lot sharper than the Tamron.
Minolta, 210mm, F/4:
Tamron, 210mm, F/5:
Minolta, 210mm, F/8:
Tamron, 210mm, F/8:
At 210mm, the first thing to notice is that the real focal lengths differ. I don't know whether the Tamron is longer than its reported 210mm or the Beercan shorter, but in practice, at the same marked setting, the Tamron's focal length is higher than the Minolta's.
Regarding optical resolution, whether wide open or closed down to F/8, the Minolta is a lot sharper than the Tamron. To me, the Beercan wide open (F/4) is even sharper than the Tamron closed down to F/8.
Tamron, 300mm, F/5.6:
Minolta, 210mm interpolated to 300mm, F/8:
Tamron, 300mm, F/8:
300mm can only be done using the Tamron. At this focal length, it is really soft wide open. Closed down to F/8, it's better, but still soft.
As a comparison, I cropped a 210mm shot from the Beercan and resized it using Paint Shop Pro (upscaling by 300/210 = 1.43x to match the framing). When comparing this "emulated" 300mm from the Minolta to the real 300mm from the Tamron, it seems to me that the Minolta still provides more detail than the Tamron. However, the noise pattern from the camera sensor is also magnified by the upscaling (the F/8 shots were taken at ISO 400).
Quite an amazing result from the Beercan here, one that makes you think twice about the 90mm of extra focal length the Tamron provides.
*Color test:
Minolta:
Tamron:
Probably not the best color test, but to me there is not much difference in color rendering between the two.
*Chromatic aberration (color fringing) test:
Minolta, 210mm, F/4:
Minolta, 210mm, F/8:
Tamron, 300mm, F/5.6:
Tamron, 300mm, F/8:
Wide open, there is some noticeable fringing with both lenses, with a bit more on the Beercan. Closed down to F/8, chromatic aberrations are reduced and both lenses seem to produce about the same amount of fringing.
*Bokeh test:
Probably not the best case, but here are my test shots:
Minolta, 210mm, F/4:
Minolta, 210mm, F/5:
Minolta, 210mm, F/8:
Tamron, 210mm, F/5:
Tamron, 210mm, F/8:
Differences are subtle, so I will let you draw your own conclusion there.
To me the conclusion regarding the bokeh test is that it was a bad place to change lenses, and that I now have to clean my sensor.
As a final conclusion, I would say that I personally prefer the Beercan to the Tamron. The only real advantages for the Tamron are weight and the fact that it can be bought new.
Now, if we could have a new lens based on the Minolta Beercan, but with reduced size/weight thanks to an image circle designed for APS-C sensors, and with improved coatings, that would be a very good modern lens.
Friday, January 12, 2007, 11:14 AM - Photography
From time to time, I find myself wondering "why is this feature not available in my camera?". To me, there are features that would be quite handy in some situations, and strangely they are not always available. The surprising thing is that many of them are pure software features, which means they would slightly increase development/validation/user documentation/support costs, but would not increase production cost at all. Since some manufacturers have implemented some of them, in most cases it's clearly not an implementation issue, but simply a choice made by the manufacturer.
Here is a list of some of those features:
*Focus confirmation and sensor stabilization on manual lenses:
If the body already provides sensor stabilization, it could also be used with manual lenses. As with focus confirmation, the only piece of information required is the focal length. So why not let users enter it manually?
*Focal length information for AF zoom lenses:
Obviously this is known by the camera, as it's part of the exif information. Why not display it in the viewfinder while the user is changing focal length?
*Depth of field information:
The camera already knows the focal length, aperture, and focus distance, so depth of field could easily be computed and displayed in-camera, as sketched below.
*Trap focus:
You manually set the focus, and when the camera detects something in focus at the specified focus location/distance, it fires. This would make macrophotography a lot easier. The logic could look like the sketch below.
*Ability to set the auto-ISO interval:
My camera only adjusts ISO sensitivity automatically between 100 and 400. While that's nice in full light, when the light is dimmer it becomes quite annoying to have to switch between ISO settings manually. I'd like to be able to change the interval to 200-800 for some indoor shots, or even to 800-3200 for handheld night photography.
*Ability to have a preview shot with histogram:
Take a shot that is not written to the flash card, in order to get an idea of the light rendering and the histogram values. It would be very nice for newcomers, especially for night scenes.
*In camera on-demand raw to jpeg conversion:
Ability to convert raw files to jpeg in camera, with the option of applying exposure compensation.
*Timer based interval shooting:
Nice for lightning shots.
Thursday, December 28, 2006, 11:45 AM - Photography
Found a very interesting interview with Pentax engineers about the K10D. They mainly talk about data acquisition/readout from the 10MP sensor and mechanical considerations.
Thursday, December 14, 2006, 11:08 AM - Optimization
When switching from x86 to x64, it seems that Lame gets about a 15% speed increase. Something to consider is that in x86 mode we use some hand-coded MMX and 3DNow! functions plus some SSE intrinsic functions (well, only 1 function right now), while in x64 we only use the SSE intrinsic functions. So the comparison is:
x86: MMX + 3DNow! + SSE
x64: SSE only => +15% speed
In many benchmarks comparing x86 vs x64, when there is a speed increase, the speedup is larger on K8-based processors (Athlon 64/Opteron) than on Core-based processors (Core 2). I've seen several articles implying, based on such tests, that Core processors might perform sub-optimally in x64.
Let's try to find a possible explanation, based on the Lame results.
*Lame does not use 64-bit integer arithmetic, so the speedup cannot be caused by the ability to process that kind of computation in a reduced number of cycles.
*Lame is heavily floating-point based.
The speed increase could come from the compiler vectorizing code to use SSE/SSE2 operations, which are always available in x64. However, experience has shown that current compilers are not good at fully automatic vectorization, so this is unlikely to be the case.
As Lame uses single-precision floating point arithmetic, we can also discard any (unlikely) speed increase from double-precision arithmetic in x64.
The only remaining point I can think of is that the SSE ISA is register-based while the x87 ISA is stack-based, and x64 adds even more registers to the SSE ISA.
Could it be that x64 increases the number of internal registers of the floating point units? Unlikely, as the K8 floating point core already has 120 internal registers available, which is plenty.
The likely explanation is that SSE code is more compact than x87 code. SSE is register-based, while x87 is stack-based: in a stack-based model, you must first push your operands onto the top of the stack, and only then can you do the calculation. In contrast, in a register-based scheme you can keep data in other registers while operating on new data, since computation can use data stored in any register, not just the top of the stack.
A point to consider is that the SSE ISA uses 128-bit registers, which the K8 must process in 2 chunks of 64 bits. It means that unvectorized SSE, with only 32 bits of useful data in a register, has to use the floating point computation units twice (with one pass entirely wasted), compared to only once for x87, to get the same computational result.
Despite this, K8 shows a substantial floating point speedup when comparing SSE-based x64 against x87-based x86 mode. Obviously, this means the computation units are not fully loaded; otherwise the doubled work would decrease speed.
It seems that K8 has good execution units, but is not optimally efficient at feeding them, and that SSE-based floating point in x64 helps it keep its units fed.
This would mean that the speed increase witnessed in x64 is not due to the 64-bit mode itself, but to the fact that it works around a sub-optimal decoding stage on K8. It would explain a few things:
*There is a bigger speed increase when going from x86 to x64 on K8 than on Core, as Core features a more efficient decoding stage (micro- and macro-op fusion).
*The speed increase when going from x86 to x64 is larger (on K8) in floating-point-based software than in integer-based software.