My (Limited) Return to Flashlight Reviewing – Part I

Some of you out there may remember me. 🙂

Beginning in early 2007, I started posting personal reviews of flashlights I owned on the main online flashlight discussion forum at the time. I had initially modeled my reviews after the (now long-since extinct) site run by Doug P. (aka Quickbeam on cpf). But there was one defining feature of my reviews: direct comparative testing of different models of the same class within each review. I will come back to this point at the end of this post.

My reviews quickly became popular, and by the end of that year manufacturers started contacting me to review their lights. Within a year, most of the major manufacturers were sending me lights to review. And by the time I wound down my review testing in early 2016, I had reviewed nearly 600 flashlights (not to mention about two dozen massive round-up comparison reviews, broken down by battery class).

So why did I stop, and why am I making a (limited) return?

The answer to the first part is a combination of life getting in the way and waning interest, for reasons I’ll explain below. As for my return, I’ll cover that in a part II post.

As was likely obvious from my reviews, I have a research background. Indeed, one of my innovations was to structure my reviews roughly in the format of a scientific research paper – something that is common now, but basically unheard of when I started. But as a successful professional in my own field, my work responsibilities continued to expand to the point where I had little free time left – and I couldn’t handle the flood of requests I was getting.

I was also getting less satisfaction from the reviews. I found that the pace of innovation in flashlight design and performance had really slowed down. Through most of my time as a reviewer, overall LED emitter output was easily doubling every 12-18 months. And those early years saw huge explosions in innovative circuit designs – with increasingly efficient constant-current regulation and tons of specialty modes – and huge experimentation in user interfaces (e.g., visually-linear ramping outputs, intuitive magnetic control rings, etc.). And of course in the beam patterns – as a result of diverse designs and layouts in terms of emitters, reflectors, optics, etc.

But by ~2015, LED technology had largely fully matured, without the previous leaps in performance I had seen. An endless variety of me-too lights crossed my threshold that didn’t offer anything significant over what had come before.

Even worse, I was increasingly seeing the loss of useful features and designs, as manufacturers reverted to simpler and cheaper circuits (but with increasingly rakish physical designs, to distract you from the lack of substance). For example, formerly “hidden” modes were increasingly showing up in main sequences or were too easily accessed (i.e., you could far too easily “tactically strobe” yourself now). The very useful “moonlight” modes for dark-adapted eyes were rapidly disappearing. And visually-linear ramps were turning into a joke, with speeds so high that you could barely access a couple of discrete levels, etc.

Some of the other key drivers for my reviews had also diminished over time. I have always been singularly focused on the truth when it comes to reviewing – by providing accurate, independent testing. While ANSI FL-1 standards were far from perfect, their widespread adoption at least helped to level the playing field in terms of reported specs – assuming makers were accurately representing their lights (reporting which, while far from perfect, did improve over time and was fairly accurate by that point).

Moreover, when I started, there were very few truly independent flashlight reviewers out there. As many others joined the field, and started producing their own detailed reviews, I felt the need for my own personal reviews had lessened somewhat – there were plenty of others out there to pick up the torch (pun intended).

That said, I was concerned that many other reviewers out there seemed to be more focused on producing glossy-looking outdoor photographs of flashlights than on rigorous comparative testing. Intentional or not, these glitzy presentations were serving as free marketing tools for makers. I don’t mean to throw shade on my fellow reviewers here – I believe the vast majority were simply focused on producing the highest quality reviews possible, and they had far more photographic experience/skills than scientific. But the end result was a not-so-subtle shift toward reviews being used as marketing tools, which I didn’t enjoy (and didn’t want to be a part of).

The intervening years

I haven’t been entirely absent from the online reviewing world in the intervening years – but I moved on to a (somewhat) less labour-intensive hobby, yet one that involved even more of my quantitative analysis skills and interest: whisky reviewing. 🙂

More specifically, since 2016 I have been running a meta-critic review site for whiskies, where I integrate reviewer scores in a statistically rigorous way for popular bottlings. I’ve written up a little background on the methodology there, with links through the rest of the site on everything you (n)ever wanted to know about how to integrate reviewer scores (not to mention the actual database, which is based on >25,000 individual review data points). It’s true that I did my own reviews there too, but those were just personal sensory analyses – they didn’t require all the long hours of testing and review preparation that flashlights did.
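To make the score-integration idea concrete: the site’s actual methodology is documented there, so the following is only a hypothetical sketch of one common meta-critic approach – normalize each reviewer’s scores to a common scale (z-scores), so harsh and generous graders contribute comparably, then average the normalized scores per bottle. All reviewer names and numbers below are made up for illustration.

```python
import statistics

# Hypothetical reviewer -> {whisky: raw score} data, for illustration only.
reviews = {
    "reviewer_a": {"whisky_x": 90, "whisky_y": 86, "whisky_z": 82},
    "reviewer_b": {"whisky_x": 84, "whisky_y": 80, "whisky_z": 79},
}

def integrate(reviews, target_mean=86.0, target_sd=3.0):
    """Z-score each reviewer's scores, then average per whisky
    and rescale back to a familiar ~0-100 range."""
    pooled = {}
    for scores in reviews.values():
        mean = statistics.mean(scores.values())
        sd = statistics.pstdev(scores.values()) or 1.0
        for whisky, raw in scores.items():
            z = (raw - mean) / sd  # score relative to this reviewer's habits
            pooled.setdefault(whisky, []).append(z)
    return {w: target_mean + target_sd * statistics.mean(zs)
            for w, zs in pooled.items()}

meta = integrate(reviews)
for whisky, score in sorted(meta.items(), key=lambda kv: -kv[1]):
    print(f"{whisky}: {score:.1f}")
```

The key design point is that normalization removes each reviewer’s personal baseline and spread before pooling, which is what lets scores from very different graders be combined meaningfully.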

To be honest, my interest there has also waned in recent years – ironically, for many of the same reasons I left flashlight testing. With the rising popularity of whisky (and the unchanged need for extensive barrel aging), the field has similarly become saturated with an increasing array of lower-quality bottlings – put out at an ever-increasing frequency to distract the public from the lack of substance. And as many established whisky reviewers wind down their own reviewing, and an increasing number of less-experienced reviewers join the field, it has become harder and harder to find the consistent reviewers I need to build up the statistical models for integrating quality reviewer scores.

It’s ironic in another sense – the one thing I always resisted in my flashlight testing was an overall score or rank of the lights I reviewed. That’s because I wasn’t trying to give you an overall impression of a light, boiled down to a single number. I wanted to show you how a given light compares to others in its class, on all the independent scales that you may care about, so that you can make your own decision based on the extensive comparative data. Whisky reviewing was quite different – you can statistically divide whiskies into flavour categories by cluster analysis, so knowing the relative quality of a particular bottle in a given flavour cluster was what actually mattered (and where I could add value by developing and maintaining the meta-critic).
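As an aside, the flavour-clustering idea can be illustrated with a toy k-means run over made-up flavour-profile vectors. The real analysis is based on published tasting-note data; the whisky names, flavour dimensions, and numbers below are entirely hypothetical.

```python
import random

# Hypothetical flavour-profile vectors (smoky, fruity, spicy),
# illustrating how whiskies can be grouped into flavour clusters
# before comparing quality scores within each cluster.
profiles = {
    "islay_a":    (9, 2, 4),
    "islay_b":    (8, 3, 5),
    "speyside_a": (1, 8, 3),
    "speyside_b": (2, 9, 2),
}

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each item to its nearest centre,
    then move each centre to the mean of its members."""
    rng = random.Random(seed)
    names = list(points)
    centers = [points[n] for n in rng.sample(names, k)]
    clusters = {}
    for _ in range(iters):
        clusters = {i: [] for i in range(k)}
        for name in names:
            p = points[name]
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(name)
        centers = [
            tuple(sum(points[n][d] for n in members) / len(members)
                  for d in range(3)) if members else centers[i]
            for i, members in clusters.items()
        ]
    return clusters

clusters = kmeans(profiles, k=2)
```

With profiles this well separated, the smoky pair and the fruity pair fall into different clusters, which is the point: quality comparisons then happen within a flavour cluster rather than across the whole category.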

So where do we go from here?

Or put another way, why have I come back to flashlight testing? That question has a number of facets as well, which I think I’ll save for the next post.



