COMING 2026

Objective Data.
Not Opinions.

The first independent optical measurement laboratory producing calibrated, quantitative, reproducible performance data for rifle scopes.

13
Performance Axes
±0.1″
Arc-Second Precision
100%
Independent
[Radar chart of the 13 performance axes: Center Resolution, Edge Resolution, Edge Sharpness, Turret Consistency, Chromatic Aberration, Color Accuracy, Color Neutrality, Distortion, Turret Feel, Click Quality, Parallax, Light Transmission, Value]

Complete Performance Mapping

Every scope tested across 13 independent metrics. Resolution, aberration, color accuracy, mechanical quality — all quantified, all comparable, all reproducible.

One chart. One scope. The complete picture.

How scopes are evaluated today

Subjective Reviews
"The glass looked pretty sharp to me" — based on human observation, impossible to verify or reproduce.
Sponsor Conflicts
Reviewers dependent on manufacturer relationships and affiliate revenue. Objectivity compromised.
No Standards
No calibrated methodology. No way to compare reviews across sources. No accountability.
ScopeCreep Solution
Camera-based automated measurement. Calibrated reference standards. Published methodology. Data, not opinion.

13-Axis Performance Characterization

Every scope tested across a comprehensive matrix of optical and mechanical metrics. Quantified. Comparable. Reproducible.

Center Resolution
Image sharpness at the reticle — where it matters most. Measured in arc-seconds: lower is sharper. The single most important optical metric.
Edge Resolution
Image sharpness at the field edges. Cheap optics fall apart here. Premium glass stays sharp corner to corner.
Edge Sharpness
Edge performance as a percentage of center. 90%+ means flat field performance. Below 70% and you'll notice the mush.
Chromatic Aberration
Color fringing at high-contrast edges — the purple and green halos. Well-corrected glass eliminates this. Budget glass glows like a rave.
Color Accuracy
Does the scope show true colors? Some optics shift reds, others kill greens. Matters for target ID and low-light hunting.
Color Neutrality
Does the scope add a tint? Some run warm (yellow), others cold (blue). Neutral glass shows the world as-is.
Distortion
Barrel or pincushion warping of straight lines. Low distortion keeps your holdovers honest at the edges of the field.
Light Transmission
How much light makes it through to your eye. More transmission = better low-light performance. Premium coatings make the difference.
Turret Consistency
Does optical quality hold when you dial 15 mils of elevation? Many scopes fall apart at turret extremes. We test at four positions.
Mechanical Quality
Turret feel, click definition, resistance consistency. That tactile feedback matters when you're dialing in the cold with gloves.
Parallax Accuracy
Does the parallax knob actually eliminate parallax at marked distances? Spoiler: most lie. We measure the truth.
Return-to-Zero
Dial up, dial back — are you actually at zero? Tested across multiple box tests. Reliable tracking or expensive paperweight.
Value Score
Performance per dollar. A $400 scope that matches a $1,500 scope scores higher. Data-driven buying decisions.
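Two of the metrics above are derived values rather than direct measurements. A minimal sketch of how they could be computed, assuming edge sharpness is the center-to-edge resolution ratio (lower arc-seconds means sharper) and value score is simply performance per dollar; the exact formulas and weightings are not published here:

```python
# Illustrative sketch of two derived metrics; formulas are assumptions,
# not ScopeCreep's published methodology.

def edge_sharpness_pct(center_res_arcsec: float, edge_res_arcsec: float) -> float:
    """Edge performance as a percentage of center resolution."""
    return 100.0 * center_res_arcsec / edge_res_arcsec

def value_score(composite_performance: float, price_usd: float) -> float:
    """Performance per dollar (arbitrary units)."""
    return composite_performance / price_usd

# A scope resolving 2.0 arc-seconds at center and 2.5 at the edge
# scores 80% edge sharpness, above the "you'll notice the mush" line.
print(round(edge_sharpness_pct(2.0, 2.5)))  # 80
```

On this definition, a $400 scope with a composite score close to a $1,500 scope's will beat it on value, which matches the buying logic described above.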

Measurement vs. Opinion

"A scope either resolves 2.1 arc-seconds or it doesn't. That's data, not opinion."
Every test produces numbers. Numbers don't have sponsors.

Laboratory-Grade Methodology

No opinions. No eyeballing. Every measurement is captured, computed, and documented under controlled conditions using calibrated reference standards.

01

Controlled Environment

All testing is performed in a climate-controlled laboratory environment. Temperature is maintained at 20°C ±2°C, humidity at 45% ±10% RH. These conditions are logged for every test session. Thermal equilibration time is enforced before any measurements begin — scopes acclimate to lab conditions for a minimum period before testing.

Lighting is calibrated and uniform across the test field. Illumination levels are measured at multiple points and must fall within ±5% uniformity before testing proceeds. Color temperature is fixed at 5600K daylight-balanced with high CRI sources to ensure color measurements are valid.
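The pre-test gate described above reduces to a few tolerance checks. A minimal sketch using the stated limits (20°C ±2°C, 45% ±10% RH, ±5% illumination uniformity); function names and the uniformity-about-the-mean convention are illustrative assumptions:

```python
# Sketch of the environmental gate; thresholds come from the stated spec,
# structure is illustrative.

def within(value: float, target: float, tol: float) -> bool:
    return abs(value - target) <= tol

def environment_ok(temp_c: float, humidity_rh: float,
                   lux_readings: list[float]) -> bool:
    """Testing proceeds only when every logged condition is in spec."""
    mean_lux = sum(lux_readings) / len(lux_readings)
    uniform = all(within(x, mean_lux, 0.05 * mean_lux) for x in lux_readings)
    return (within(temp_c, 20.0, 2.0)
            and within(humidity_rh, 45.0, 10.0)
            and uniform)
```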

02

Precision Mounting

Each scope is secured in a custom fixture designed for repeatability. The fixture accommodates 1-inch, 30mm, and 34mm tube diameters with non-marring contact surfaces. Mounting torque is controlled and documented. The scope's optical axis is aligned to the test system before any measurements begin.

The fixture isolates the scope from external vibration. All adjustments are locked before image capture. The same mounting procedure is followed for every scope — no shortcuts, no variations.

03

Magnification Calibration

We don't trust dial markings. Every scope's true magnification is calibrated at multiple dial positions before optical testing begins.

A precision scale of known dimension is photographed through the scope at a fixed distance. The apparent size in the captured image, combined with known sensor geometry, yields true optical magnification. This is repeated at multiple dial positions to create a complete calibration curve for each scope — revealing not just error at one setting, but how the error changes across the zoom range.

When we say we tested at "16x," we mean true optical 16x — not whatever the manufacturer printed on the dial. Some scopes are off by 10-15%. This step alone invalidates most amateur comparisons.
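Under the small-angle approximation, the calibration described above reduces to one ratio: the angle the scale subtends in the captured image divided by the angle it subtends unaided. A sketch with hypothetical sensor pitch and relay-lens focal length (not ScopeCreep's actual rig):

```python
# True magnification from a photographed precision scale.
# Sensor pitch and focal length are hypothetical example values.

def true_magnification(scale_mm: float, distance_mm: float,
                       pixels_spanned: float,
                       pixel_pitch_mm: float, focal_length_mm: float) -> float:
    true_angle = scale_mm / distance_mm  # radians, target as seen unaided
    apparent_angle = pixels_spanned * pixel_pitch_mm / focal_length_mm  # through the scope
    return apparent_angle / true_angle

# 100 mm scale at 25 m spanning 800 px on a 4 µm-pitch sensor behind a 50 mm lens:
print(round(true_magnification(100, 25_000, 800, 0.004, 50), 1))  # 16.0
```

Repeating this at several dial positions gives the calibration curve; a scope whose dial reads "16x" but measures 14.2x is off by roughly 11%, the kind of error described above.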

04

Calibrated Reference Standards

Resolution is measured against precision optical test targets with known spatial frequencies. These are laboratory-grade standards — not printed paper charts. The targets allow us to determine the smallest detail a scope can resolve, quantified in arc-seconds.

Color accuracy is measured using industry-standard color reference targets with known colorimetric values. We capture the scope's rendition and compute deviation from ground truth using established color difference formulas.
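The arc-second figures above come from a standard angular conversion: a resolved feature of known size at known distance subtends an angle, and one radian is about 206,265 arc-seconds. The target geometry in this sketch is illustrative:

```python
import math

# Converting a resolved feature size to an angular resolution figure.
# Standard conversion; the 0.25 mm / 25 m example is illustrative.

ARCSEC_PER_RAD = 180 * 3600 / math.pi  # ≈ 206265

def resolution_arcsec(feature_mm: float, distance_mm: float) -> float:
    return math.atan2(feature_mm, distance_mm) * ARCSEC_PER_RAD

# A 0.25 mm detail resolved at 25 m subtends about 2.06 arc-seconds:
print(round(resolution_arcsec(0.25, 25_000), 2))  # 2.06
```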

05

Camera-Based Capture

A high-resolution camera sensor captures images through the scope's eyepiece. The camera resolution exceeds the resolving capability of any scope we test — the camera is never the limiting factor. Raw image files are captured for maximum data integrity.

Camera settings are fixed: manual exposure, fixed white balance, no in-camera processing. Multiple frames are captured at each test condition to enable statistical validation. Every capture is triggered remotely to eliminate vibration from human contact.
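The statistical validation mentioned above can be sketched simply: several frames yield several measurements of the same quantity, and the mean is reported only if the frame-to-frame spread is acceptably small. The 5% threshold and function name here are assumptions:

```python
from statistics import mean, stdev

# Per-condition statistical validation across repeated frames.
# The relative-spread threshold is an illustrative assumption.

def validate_frames(measurements: list[float], max_rel_spread: float = 0.05):
    """Return (mean, stdev, acceptable?) for one test condition."""
    m, s = mean(measurements), stdev(measurements)
    return m, s, (s / m) <= max_rel_spread

m, s, ok = validate_frames([2.10, 2.08, 2.12, 2.09])
print(ok)  # True: four frames agree to well under 5%
```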

06

Multi-Position Testing

Here's what separates us from everyone else: we test at four turret positions, not just centered. Many scopes perform beautifully with turrets centered but fall apart when you dial 15 mils of elevation for a long shot.

  • Position 1: Turrets centered (elevation 0, windage 0) — the baseline
  • Position 2: Maximum elevation, windage centered — vertical erector stress
  • Position 3: Elevation centered, maximum windage — horizontal erector stress
  • Position 4: Maximum elevation AND windage — worst-case combined loading

This reveals erector tube quality, internal tolerances, and real-world performance. A scope that only performs well at Position 1 is not a long-range scope.

07

Automated Analysis

Captured images are processed by custom analysis software. Computer vision algorithms detect test targets, extract measurements, and compute metrics. No human judgment enters the measurement process.

The software calculates resolution limits, modulation transfer function (MTF) curves, chromatic aberration magnitude, color deviation, distortion profiles, and field uniformity maps. Every number comes from the image data — not from someone squinting and guessing.
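One of those numbers can be illustrated directly. MTF at a given spatial frequency is the modulation (Michelson contrast) the optic delivers relative to the modulation the target provides. Real pipelines use more involved methods (slanted-edge analysis, for instance); this sketch shows only the core idea:

```python
# Simplified illustration of an MTF value at one spatial frequency.
# Production MTF analysis is considerably more involved than this.

def modulation(profile: list[float]) -> float:
    """Michelson contrast of an intensity profile."""
    i_max, i_min = max(profile), min(profile)
    return (i_max - i_min) / (i_max + i_min)

def mtf_at_frequency(image_profile: list[float],
                     target_modulation: float = 1.0) -> float:
    """Contrast the optic delivers / contrast the target provides."""
    return modulation(image_profile) / target_modulation

# A full-contrast pattern imaged as 60..140 intensity swings:
print(round(mtf_at_frequency([60, 140, 60, 140]), 2))  # 0.4
```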

08

Mechanical Evaluation

Turret feel isn't just subjective. Click torque is measured with a calibrated gauge — we record the force required for each click across full turret travel. Consistency is computed as the variation in that force.

Click acoustics are captured and analyzed for frequency, amplitude, and duration. Return-to-zero is validated through box tests with optical verification of actual point-of-aim shift. Parallax is measured at multiple distances and compared against knob markings.
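The consistency figure described above, variation in click force across full travel, is naturally expressed as a coefficient of variation (lower means more consistent). Units and example values in this sketch are illustrative:

```python
from statistics import mean, stdev

# Turret consistency as a coefficient of variation of click forces.
# Example values are illustrative, not measured data.

def click_consistency_cv(click_forces: list[float]) -> float:
    return stdev(click_forces) / mean(click_forces)

tight = [19.8, 20.1, 20.0, 19.9, 20.2]   # well-defined clicks
mushy = [14.0, 22.5, 17.0, 25.0, 16.5]   # inconsistent clicks
print(click_consistency_cv(tight) < click_consistency_cv(mushy))  # True
```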

09

Data Integrity

Every test session generates: raw image files, environmental logs, calibration records, computed metrics, and analysis outputs. All data is archived. Any result can be recomputed from source files.

Test protocols are version-controlled. If we change methodology, historical data is flagged. Scopes tested under Protocol v1.2 are comparable to each other but may not be directly comparable to Protocol v1.3 results. We document everything.
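The comparability rule above is simple to enforce in software: two results are directly comparable only if they were produced under the same protocol version. The session structure in this sketch is hypothetical:

```python
# Enforcing the protocol-version comparability rule described above.
# Session records are hypothetical; only the rule itself comes from the text.

def comparable(session_a: dict, session_b: dict) -> bool:
    return session_a["protocol"] == session_b["protocol"]

a = {"scope": "Scope A", "protocol": "v1.2"}
b = {"scope": "Scope B", "protocol": "v1.2"}
c = {"scope": "Scope C", "protocol": "v1.3"}
print(comparable(a, b), comparable(a, c))  # True False
```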

10

Independence Guarantee

We purchase scopes at retail or accept user submissions. We do not accept manufacturer test samples. We do not accept payment for favorable results. We do not run affiliate links. We have no sponsorship relationships.

If a $400 scope outperforms a $2,000 scope, we publish that. If a flagship product is mediocre, we publish that. The data is the data. Our only loyalty is to the measurement.

The Bottom Line

Every scope goes through the same process. Same fixture. Same environment. Same analysis. The only variable is the scope itself. That's how you get data you can trust.