Cameras Should Be Reviewed In Auto Mode
I recently tested Google’s Night Sight on my Pixel, comparing it with my mirrorless camera and the iPhone XS. I tested the mirrorless camera with three lenses: a high-end lens that cost me ₹40K, and two prime lenses with wide apertures of f/2.8 and f/1.8. I spent more than a lakh on this package.
The mirrorless camera sometimes mis-focused, while the Pixel and iPhone focused correctly every time. Sometimes the mirrorless camera focused correctly but chose too slow a shutter speed, so camera shake produced a blurry photo. Again, the Pixel and iPhone never did that; they produced a sharp photo every time.
The bottom line is that the Pixel consistently produced a usable photo. And this is the original Pixel, not the Pixel 3! The mirrorless camera, on the other hand, produced blurry photos often enough that I wouldn’t trust it for night photography. Tools need to inspire confidence, not doubt and distrust, in their users’ minds.
If you asked me to make one recommendation for people who want to do night photography, without qualifications, I’d pick the Pixel.
This made me realise that the way cameras have traditionally been reviewed is wrong. Reviewers don’t consider UX. They don’t give enough weight to the hoops you have to jump through to get good results from a camera, whether that’s focusing manually, choosing a shutter speed, or taking two photos and then combining them in an HDR app. These are things smartphones do automatically. Reviewers are tech-savvy photographers themselves, and they’ve spent a huge amount of time learning this technical complexity, so they often forget that it’s impenetrable for many people. As an analogy, the GUI brought computing to the masses; saying that the command line can also perform the task in question means nothing if you can’t figure out how to do it. And even if someone does understand all the technobabble, it doesn’t mean they want to deal with it. SLRs are clunky tools to work with, and traditional reviews gloss over this. That’s the wrong way to evaluate a tool. Tools need to just work, and just work for as many people as possible.
Instead, reviews should work the following way. First, select a representative set of scenes to photograph: indoors and out; day, dusk and night; moving subjects and still ones; different genres like macro, astrophotography, portraits, kids and pets. Then select the cameras to compare against. If you’re reviewing an SLR, compare it with flagship phones like the Pixel 3, iPhone XS, Huawei P20 Pro and Galaxy S9+. In 2019, only a person with their head stuck in the sand would claim they’re not competing. SLR reviews that don’t compare with flagship smartphones are biased, knowingly or not.
SLRs also raise the question of which lens to test with. Choose the cheapest lens with a reasonable focal length, say between 26mm and 40mm [1]. This is usually the kit lens. If the SLR doesn’t do well with that lens, it will lose the comparison. Users don’t want to be system integrators. I want to buy a car, not research which engine, transmission and steering wheel are good, buy each of them, and assemble my car myself.
Auto mode: We’ll test all the selected cameras out of the box, in auto mode, without modifying any settings. Again, if an SLR doesn’t work well in that mode, it will fail the comparison.
HEIF, not RAW: We’ll test with HEIF output (or JPEG if your camera is slightly behind the curve). Not RAW. Yes, RAW can be edited to produce a good output, but the camera should do it itself.
Automatic HDR: This goes for things like testing dynamic range and HDR. If an advanced camera like the iPhone comes with automatic HDR that produces a great photo without you having to turn anything on or off, and an SLR doesn’t, the SLR will be rated as having worse dynamic range.
Reviewers don’t give sufficient weight to things like automatic HDR. In theory, you could take a bracketed set of photos on your SLR. But you have to understand what that means, how many photos to take, how far apart in EV they should be, and in what situations. You then have to capture them, and manage the files, which is a nuisance; when I tried two similar framings of an HDR scene, I got confused about which photo belonged to which set.

You then have to evaluate different HDR apps. As with any product evaluation, you need to understand who the main players in the market are and what their pros and cons are, try out each, check whether a frighteningly costly app like Aurora HDR is worth ₹7000, and check whether the app is available on both Windows and Mac (to avoid locking yourself in) and whether you’d need to pay again for the second platform. You then need to decide whether you want the standard or pro version of your HDR app. You then download a free trial.

To do the HDR merge, you export your bracketed set of photos from your photo management app, like Luminar or Lightroom, to your HDR app. Some apps have plugins to automate this, but that’s yet another thing to set up. Other apps make you export JPEGs to the filesystem, open them in the HDR app, do the merge, save the result to the filesystem, delete the input files, import the HDR photo into your photo manager, check whether it was moved or copied, and, if copied, delete the original. After doing all this, you may realise that JPEG is not a good intermediate format because of generation loss. So you switch to PNG, find out that your HDR app doesn’t support PNG, and go through the entire process again with TIFF. You want to enable TIFF compression, in case you keep the TIFFs around. You then realise that your HDR app also accepts RAW input, so you may want to understand the pros and cons of giving it the RAWs, which brings up the question of which cameras it supports. If it supports DNG, you may think that answers the question, since DNG is a universal file format, but it’s not: I found the Pixel’s DNGs to be over-exposed and washed out in Lightroom. Apparently every camera’s DNGs need to be processed differently. DNG is, in that sense, not a file format but a collection of file formats that have the same extension.
Do you want to do all this when your smartphone can do it with zero effort? Of course not. Humans shouldn’t be made to spend hours doing what a computer can do. Reviewers who don’t take this end-to-end workflow into account are telling us half-truths.
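To put that in perspective, the merge itself really is a computer-sized job. Here is a minimal sketch of fusing a handheld bracketed set into one image, assuming Python with OpenCV; the file names are hypothetical, and this does exposure fusion rather than a full HDR merge plus tone mapping, but it makes the point:

import cv2

# Hypothetical bracketed shots: under-, normally and over-exposed.
files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
images = [cv2.imread(f) for f in files]

# Align the frames first, since the set was shot handheld.
cv2.createAlignMTB().process(images, images)

# Fuse the exposures and write an 8-bit result.
fused = cv2.createMergeMertens().process(images)
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))

A dozen lines, and none of them require a human to remember which JPEG belongs to which bracket. That’s the kind of work a camera, or a phone, should be doing silently on our behalf.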
Having an HDR mode that you need to turn on and off manually depending on the scene is a step better than having no in-camera HDR at all, but it’s not good enough. You need to know what HDR is in the first place, then understand which scenes require it, which means understanding the dynamic range of the RAW files your camera produces and when a single photo can cover the entire dynamic range of the scene and when it can’t. You have to develop the skill of looking at a scene with your naked eye and estimating whether it needs HDR. And even if you learn all that, you’ll still sometimes forget to turn HDR on when you should, or forget to turn it off when it can only make things worse. These real-world considerations can’t be ignored, or blamed on users, when smartphones, the competition, have set a higher bar.
Autofocus: If an SLR produces a mis-focused photo, it will lose that comparison to a smartphone that focuses correctly. We shouldn’t evaluate the technology, such as by saying that a lens has phase-detect autofocus. So what? Some of my mirrorless camera lenses have this but still mis-focus, whereas my iPhone and Pixel focus correctly almost every single time. I care about the end result (is the camera focusing correctly?), not the technology.
Auto-exposure: In addition to autofocus, we’ll also use auto-exposure, and check if the photo is over- or under-exposed.
Handheld: Most testing should be done handheld, because tripods are a nuisance to research, buy and carry around. Why should you need one for scenes that other cameras, like a Pixel with Night Sight, can photograph handheld?
Fields of view: We’ll choose scenes that require different fields of view: some, like architecture photography, require an ultrawide lens because you can’t always step back to take it all in. If you do, you might bump into the building on the other side of the road, or get traffic in your photo, which you don’t want. Some, like a helicopter in flight, a tiger, or a scene from a hilltop, require a telephoto lens, because you can’t move closer.
Video: In addition to photo mode, we’ll test the video mode. Does the stabilisation produce a watchable video if you record while walking?
Are the colours natural? Is the frame rate high enough, like 60 FPS, to avoid a staccato effect? That staccato effect is especially bad with UHD video, to the extent that I try to avoid capturing video below 60 FPS at any resolution. Does it have video HDR?
Slomo: Does the camera take slomo video, and how good is it?
Panorama: My mirrorless camera technically has a panorama mode, but it never works. It sometimes fails because it expects me to move the camera in one particular direction, but doesn’t indicate on screen which direction that is. Or it fails saying I’m not moving it perfectly horizontally. Or that I’m moving it too slowly. Or too quickly. Unlike the iPhone, which tells me to slow down and lets me correct the mistake and continue recording, the mirrorless camera’s panorama mode is quick to find some excuse, any excuse, to stop recording and throw away what I’ve already put effort into. The mirrorless camera would fail the panorama test; it’s as if it doesn’t have a panorama mode at all. One could even argue it’s worse than not having one, because then it at least wouldn’t waste my time.
Timelapse: Many SLRs don’t have a timelapse mode that produces a video in-camera. Heck, mine doesn’t even have an intervalometer built in. OEMs expect us to buy hardware intervalometers, which is an amazing level of cluelessness, since this can be done in software. People will instead buy smartphones.
Even if a camera does have a timelapse mode, none I’ve used comes close to the convenience of the iPhone’s. Don’t ask me how long I intend to record, because I don’t know yet, or I may change my mind. Don’t ask how long the final video should be; pick something reasonable, like 30 seconds. And don’t ask what interval to use between photos; figure it out yourself.
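Picking that interval is, after all, trivial arithmetic the camera could do on its own, adjusting on the fly as you keep recording. A minimal sketch in Python, with the 30-second clip and 30 FPS playback as assumed defaults rather than anything a vendor documents:

def capture_interval(recording_seconds, clip_seconds=30.0, playback_fps=30.0):
    # Seconds to wait between photos so the final clip comes out the right length.
    frames_needed = clip_seconds * playback_fps
    return recording_seconds / frames_needed

# Condensing an hour of recording into a 30-second, 30 FPS clip:
print(capture_interval(60 * 60))  # 4.0 seconds between shots

If software this simple can work out the interval, there’s no excuse for making the photographer do it, let alone making them buy a separate gadget for it.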
Geotagging: Do a test where you give people different cameras and send them out on a day trip. When they’re back, ask them to locate a photo of a particular monument, for example. With geotagging, as on smartphones, you can pick it out on a map rather than scrolling through hundreds of photos. SLRs would lose this test: these clunky devices rarely have GPS, or connect to your smartphone to use its location. Ideally, an SLR should make use of the smartphone’s GPS, since it works indoors and gets a fix quicker.
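Reading the location back out of a geotagged photo is similarly little work for software. A minimal sketch, assuming Python with a recent version of Pillow; the file name is hypothetical, and a camera that never wrote the GPS tags gives you nothing to read:

from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_coordinates(path):
    # Return (latitude, longitude) in decimal degrees, or None if the photo is untagged.
    exif = Image.open(path)._getexif() or {}
    gps = {GPSTAGS.get(k, k): v for k, v in exif.get(34853, {}).items()}  # 34853 = GPSInfo
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None

    def to_degrees(dms, ref):
        # EXIF stores degrees, minutes, seconds as rationals; assumes Pillow converts them with float().
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    return (to_degrees(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N")),
            to_degrees(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E")))

print(gps_coordinates("IMG_1234.jpg"))  # hypothetical file; prints (lat, lon) or None

The hard part is capturing the location in the first place, which is exactly what most SLRs don’t do.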
In summary, cameras should be reviewed based on how they perform out of the box, with the default auto settings, when used by someone who has a good creative sense but zero technical knowledge. How much does it do for us automatically, how easy is it to use, does it require us to jump through hoops? This is the bar that smartphones have set, and cameras that don’t meet it will be the equivalent of cars that require a hand-crank to start [2].
[1] Full-frame equivalent. We’ll exclude specialty lenses, like fisheye or lenses that don’t have autofocus. Yes, there are such lenses.
[2] Does this mean that what experts can do with the tool should be given zero weightage? Of course not. It should be given a smaller weightage, say 20% in the 80/20 sense, rather than the dominant weightage it’s currently given. Besides, there are already plenty of reviews that focus on advanced users, so we also need reviews along the lines of this blog post.