Back in my earlier reviewing days, I received many questions about the testing methodology that I used. But I frequently got asked even more general questions about flashlights (including which ones would be best for a specific purpose). It is hard to provide advice in the abstract, so a few years ago I assembled some general resources on one page. These are still very relevant today.

I have re-posted my old FAQ page below, with some updated info and links to specific pages on this new site. In general, you will find that many of the background and methodology pages on this revamped website have more info.

Can you recommend a flashlight for me?

I probably get asked this question more than any other. I typically try to avoid making specific recommendations of individual models, as the design and performance can change over time. Also, I may not fully understand your needs, so it’s best that you come to your own conclusions based on the data I present.

But I realize that all the choices – especially with all the different types of batteries – make this a potentially confusing space to sort out. So to help, I have recently created a new section of this website to help you get started in narrowing down your choices.

Please see my Flashlights Recommendations page for some suggestions – broken down by battery type – and further links for more info. I’ve also provided some general perspective on why the specific types (and numbers) of batteries are used for different purposes.

What do all the terms in your reviews mean?

The best way to answer this would be to suggest you take a look at the various sections of my Testing Methods page, to see if that answers your specific question.

I have prepared a series of introductory overviews/primers on various aspects of flashlight form and function, available on YouTube.

To start, here is an introduction as to why and how I do flashlight reviews:

Next, the individual primers are presented below. These are a series of videos, each one corresponding to a key aspect of what I look at in my individual reviews.

Please note that these are not intended as comprehensive examinations of the various topics, but rather as starting points to help you understand what I am referring to in my various reviews. They were recorded generally unrehearsed and unscripted, so please bear with me if I seem to ramble a little sometimes.

To see a discussion of these videos, please check out my Selfbuilt’s introductory flashlight video primers thread on candlepowerforums.com. Or visit my YouTube channel.

Why don’t you review more lights from brand X?

Like many enthusiasts, I began by reviewing lights that I personally bought. These were mainly “brand name” lights, although I also looked at some higher-quality “budget” lights.

Eventually, manufacturers started sending me lights to review, so that I could compare their performance to other lights that I had already tested. As you can imagine given my current status, I now receive far more review requests than I can handle. As a result, I currently only do invited reviews where the manufacturer/dealer agrees to my standard review terms and conditions. Even at that, I typically turn down most of the requests that I receive, largely due to lack of time.

As a corollary to the above, I generally do not “ask” manufacturers to send me lights for review. In my view, it should be up to the manufacturers to decide on their own whether or not they want their lights reviewed. This helps reduce my bias in selecting lights (i.e., I typically leave it up to the manufacturers to propose specific lights, and I respond based on my availability and interest). As a general rule, if you don’t see models from a major manufacturer in my review list, that’s most likely because they have not asked me to review (see the exception for budget lights below).

So how do I choose among what is offered? There are a number of variables that I consider, including general interest in the user community, perceived build quality, history and reputation of the manufacturer/dealer, etc. I try to balance different size and class lights, including various beam patterns and battery formats, as well as established makers and new potential entrants. But as a reviewer, I am always particularly interested in testing something novel – that is, something with new features, or a distinctive user interface or build. Often, these are not necessarily the most high-profile or high-volume lights.

My goal here is never to advance the particular agenda of a given group. I provide a consistently fair and impartial assessment of all lights that I choose to review (see my About the Author page for more info). But the selection of those lights is bound to remain somewhat idiosyncratic, given that I choose from what is offered, and only agree to review lights that interest me for some reason. In the end, we all only keep doing things as long as we enjoy doing them.

Why don’t you do more “budget” light reviews?

So-called “budget” lights (i.e., lights that are less expensive than those from the established brand name makers) can often offer good value for the money.

The problem, as a reviewer, is that I quickly discovered that budget lights could be incredibly inconsistent from batch to batch. The reason seems to be that many of the budget “brands” are often just a loose set of model standards, manufactured by more than one plant. Copying and counterfeiting is also rampant, especially for perceived “popular” budget models. Although this is opaque to most end users, budget light dealers tend to buy in batches from multiple distributors (who in turn may be collecting samples from other distributors, multiple sources, etc.). So no two batches can be guaranteed to be the same – they are often slightly (or significantly) different from previous batches, even from those sold by the same distributor/dealer previously.

I had this experience early on in my reviewing – I gave a positive review to a certain budget light, only to find other users quickly complaining of poorer build quality and performance. I bought a new sample from the same dealer, and discovered a completely different light (with a different body thickness, screw threads, switch – and most importantly, circuit). In every measurable way, the newer version was inferior to the previous one I had tested. The only similarity was the make and model number (and even at that, the labeling was poorer quality).

As a reviewer, I can’t justify reviewing a light where there is no reasonable assurance of consistent quality of manufacture. I run the considerable risk of misleading people in their flashlight purchasing, which goes against the whole reason why I do reviews.

There are ways you can try to protect yourself in budget light purchasing. The first is to review the various flashlight forums for ALL comments on a specific model (i.e., don’t fall into the trap of only looking for what you want to see). Second, try to find a reputable dealer for the model (i.e., not one of the “deal sites” or eBay vendors, where high volume is often a key factor in their business model). A good dealer will value customer satisfaction, and will try to only work with distributors who provide a consistent quality product. You may pay a few more dollars, but it can make all the difference in whether or not you receive what you expect.

I do occasionally still review budget-style lights, but only if I have reasonable assurances from a reputable dealer or manufacturer. Given my limited time (discussed above for the various brands I review), I feel it is important to focus on lights where there is a reasonable expectation of consistency.

Why don’t you do more outdoor beamshots?

I know everyone loves to see outdoor beamshots of flashlights. But these are actually very difficult to do well, in a meaningful way. As I’ve explained on my About page and my Testing Methods page, my main goal here is appropriate comparative testing of flashlight performance.

This raises a major issue in terms of beamshots. If I were to use automatic camera settings – to best show off the individual beams – then any comparative value between lights would be lost. If I were to lock the camera to some common setting of exposure and aperture, then many lights would look over-exposed while others would look under-exposed.

The key point here is that a single beamshot (with single camera settings) cannot possibly reproduce what you see in real life. Your brain and eye are remarkably adaptive, and so your relative perceptions will be very different from a single objective camera reference point. Not to mention the huge variability in how beamshots will appear from one graphics card/monitor combination to another. This is all part of why I generally stick with indoor “white wall” beamshots (under controlled conditions), at a series of camera exposure settings (to facilitate direct comparisons).
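To make the locked-exposure trade-off concrete, the standard photographic exposure value formula (EV = log2(N²/t), at ISO 100) shows how far apart two lights of different output sit on one fixed camera setting. This is just an illustrative sketch with made-up numbers, not my actual camera settings:

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Standard exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# A hypothetical locked setting of f/4 at 1/25 s is one fixed EV:
locked_ev = exposure_value(4.0, 1 / 25)
print(round(locked_ev, 1))  # 8.6

# A light with ~10x the output of another would need about
# log2(10) ~ 3.3 stops less exposure to render comparably,
# so a single locked EV cannot flatter both.
print(round(math.log2(10), 1))  # 3.3
```

Each stop of EV is a doubling or halving of recorded light, so a tenfold output difference spans over three stops – more range than one locked exposure can render fairly.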

My one exception to this is for my high-output “throwy” lights, where you can’t easily compare beams at close distances. For these, I do standardized outdoor beamshots at 100 yards (or as close to a standard as I can manage, at any rate). To learn more about how I do these, please see my Beamshots page. For an older summary of my actual outdoor shots, please see:

Selfbuilt’s 100-Yard Outdoor Beamshot Compendium

How can I make sense of beam tints?

I’ve actually added a more detailed discussion of colour temperature, tint, and colour rendition on a new emitter measures methodology page here. I recommend you start there for more details.

The old flashlightwiki website had a very good succinct page about ANSI White that is particularly useful in explaining these terms in relation to flashlights. Information on that site could change, but for now it is still a useful resource.

To summarize things simply here, LED lights come in three general white colour temperature ranges – identified as “cool white”, “neutral white” and “warm white”. Most commercial LEDs are “cool white”, as these are the most efficient. As a bit of a side-bar, most white LEDs are based on a blue LED core, covered with a yellow phosphor. To make a “warmer” colour temperature, extra phosphor needs to be applied – which reduces the amount of light transmitted for the same core run at the same current/voltage settings. But some people prefer a “warmer” tint, so there is a market for all of these LEDs.
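For reference, the rough Kelvin boundaries behind those three labels can be sketched as follows. The cut-off values here are approximate enthusiast conventions (not formal ANSI chromaticity bins), so treat them as illustrative only:

```python
def tint_category(cct_kelvin: float) -> str:
    """Rough white-tint buckets by correlated colour temperature (CCT).
    Boundaries are approximate conventions, not formal ANSI definitions."""
    if cct_kelvin >= 5700:
        return "cool white"
    elif cct_kelvin >= 4500:
        return "neutral white"
    else:
        return "warm white"

print(tint_category(6500))  # cool white (typical stock emitter)
print(tint_category(5000))  # neutral white
print(tint_category(3000))  # warm white (incandescent-like)
```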

To help you see some actual camera beam tint comparisons – please see these older comparison threads that I posted on CPF:

Color Rendition and Tint Comparison: Cree, Rebel, GDP, Nichia
4Sevens Mini Tint Comparison – Warm, Neutral, Cool White
4Sevens Neutral White tints – Comparison to Cool White

Any tips for effective flashlight use?

I always planned to develop some additional content on this, but never got around to it. However, you can follow a thread I started on CPF years ago that has some incredibly useful ideas shared by the members on how to maximize your flashlight use:

Tips for effective flashlight use?

Why do you use a cooling fan for runtimes?

One thing that is critical to keep in mind when comparing runtime results is whether or not active cooling is being used by the reviewer.

All my runtimes are done under a small cooling fan, for consistency and safety reasons. Even in a climate-controlled environment, I can tell you that the ambient temperature in my office in the morning and evening can be quite different – and quite different from one season to the next. For this reason, I use active fan cooling to provide a more standardized testing environment for the lights. This is important to allow you to accurately compare results. As I do a lot of runtimes – including unsupervised ones overnight (which I don’t recommend, especially for Lithium or Li-ion batteries) – I also prefer the safety of knowing the lights are being at least somewhat cooled.

Many will claim that this is not representative of “actual use” of the light, and there is of course merit to that point. But by the same token, it is equally true that doing runtimes with NO cooling is not representative either. In real life, you will typically be carrying the light around, where there will be some active cooling from your hand-holding (i.e., your own circulatory system works to transfer heat away from the light, through the interface of your skin). This is why picking up a light that has been running in isolation for some time (e.g., tailstanding on the ground) can be quite an unpleasant experience with a bare hand – even though it would never have gotten that hot if you had been holding it in your bare hand the whole time.

Also, in “actual use”, you may very well be outdoors, where there will typically be movement of air over the light from your own movements, or from the relative climatic conditions (e.g., a windy evening). An isolated high-powered light running inside a lightbox in the corner of a variable ambient temperature office is hardly the same as those “real world” conditions either.

And of course, it gets even more complicated than that – the “real world” is highly variable! As I write this, it is late December in Canada. At night, it is generally well below 0 degrees Celsius, and rather windy. That is certainly quite different from the frequent >30 degree Celsius August evenings around here (which can often be quite still, with humidex values reaching much higher). Again, the point for comparative flashlight testing is not to match every possible environment – but to provide as standardized an environment as possible, that falls within a normal range.

Another point to keep in mind is that few people will turn on a fully-charged light and let it run to battery exhaustion in one sitting, as we reviewers do. In most cases, you will be turning the light on and off for short bursts of time. For lights that don’t have a timed step-down feature (i.e., ones that have no step-down, or use a thermal step-down control), the fan-cooled runtime data is probably more relevant to helping you gauge overall battery life. For example, I keep a light by the back door that I know runs constantly at a regulated max level for just over 2 hours with a cooling fan (i.e., from continuous runtime testing). I use this light for taking the dog outside at night before bed, and typically spend no more than 2 mins on max each time (i.e., it doesn’t have time to heat up and trigger its thermal step-down). As a result, I know I can go for at least 2 months before having to recharge the cell, because this usage pattern more closely matches my runtime testing paradigm.
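The back-of-the-envelope arithmetic behind that two-month estimate can be sketched as follows – assuming (as in my example) that each short burst ends before any thermal step-down kicks in, so the continuous fan-cooled runtime figure applies:

```python
def recharge_interval_days(continuous_runtime_min: float,
                           minutes_per_use: float,
                           uses_per_day: float) -> float:
    """Estimate days between recharges from a continuous (fan-cooled)
    runtime figure, assuming short bursts at the same output level."""
    total_uses = continuous_runtime_min / minutes_per_use
    return total_uses / uses_per_day

# ~2 hours continuous runtime, ~2 min on max per outing, once a night:
print(recharge_interval_days(120, 2, 1))  # 60.0 days, i.e. about two months
```

Obviously this is only a rough gauge – any use of lower output levels, or losses from cell self-discharge, would shift the real-world number.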

It is worth considering whether the actively-cooled, continuous-runtime testing results are likely indicative of your actual usage patterns. You just can’t expect that any one standardized testing method is going to be directly generalizable to every possible “real world” scenario.