The Difficulty in Evaluating Blade Steel

   05.03.15

I know I am a steel junkie. Gimme the datasheet for some new crazy material (Beta Ti, anyone?) and I am happy. Recently someone emailed me about evaluating blade steels and wanted me to distinguish more precisely between something like S30V steel (which is very good) and M390 (which is a truly superior steel). I feel confident that M390 is better than S30V, but I can’t say by how much.

I can’t quantify it any more precisely than that. I want to be precise, but unfortunately I think it is probably impossible for a hobbyist reviewer to get any finer-grained than bad, good, and great. There are three problems everyone encounters when evaluating blade steel, and knowing what they are can help you take reviews with a grain of salt.

Problem #1: Secret Society of Secrets

The major problem with evaluating steels is that so much of the information we need is secret. The recipes are almost entirely secret. Sure, the knife companies show off the ingredients, mainly the amount of carbon and chromium and some exotic elements (YIPPEE! Nitrogen!), but how those ingredients are mixed together is a secret. It’s a secret for business reasons, and I get that.

The real problem is that even the performance numbers are secret. Imagine if you were buying a car and the only thing they told you was how much it cost. You didn’t know the top speed, the horsepower, the torque, or the miles per gallon, just the price. Would you buy a car with that information? Probably only begrudgingly. But that’s almost exactly the position we are in when purchasing a knife.

The only number that major manufacturers release is the Rockwell C-scale hardness rating (usually expressed as “HRc X-Y,” with X and Y being two points apart, like 57-59). That’s it. But there is a ton of data we just don’t get.

We should be allowed to see what the lock strength is in an industry-wide standardized test (Cold Steel’s dead-weight hanging test seems fine). I’d like to know how the steel performed on the CATRA test (here is some more on the CATRA). I’d like to know the angles of the main grind and the cutting bevel. I’d like to know the amount of pressure needed to disengage a lock. And most of all, I’d love to know the volume of the knife, which is the best descriptor of how it will actually carry in the pocket. A Manix 2 LW is only a hair heavier than the 940-1, but the 940-1 feels much smaller because it has a more slender shape.
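
Volume is at least something you can roughly approximate yourself from published dimensions, even if manufacturers won’t do it for you. Here is a minimal sketch of the kind of comparison I mean; the dimensions below are placeholders, not measured specs for any real knife:

```python
# Rough pocket-volume comparison: closed length x handle height x thickness.
# A bounding box overstates the true volume, but applied consistently it
# lets you compare how much pocket two knives actually occupy.
# NOTE: these dimensions are placeholder values, not published specs.

def bounding_volume_in3(length_in: float, height_in: float, thickness_in: float) -> float:
    """Approximate closed volume as a simple bounding box, in cubic inches."""
    return length_in * height_in * thickness_in

knives = {
    "Knife A (broad handle)":   (4.7, 1.4, 0.50),
    "Knife B (slender handle)": (4.5, 1.0, 0.45),
}

for name, dims in knives.items():
    print(f"{name}: ~{bounding_volume_in3(*dims):.2f} cubic inches closed")

# Two knives can weigh nearly the same and still take up very different
# amounts of pocket, which is why weight alone is a poor carry metric.
```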

The data the knife companies give us is just not enough. We as consumers should demand more.

Problem #2: The Popeye Problem (I am what I am)

The Dragonfly II is a general purpose knife. I am sure it is not good at cutting steel, I am sure it is not good at grinding rock, and I am sure that the fact that it can’t do these things doesn’t really make a difference. For the general user, or even the regular everyday user, most steels are good enough. They are general purpose steels and they function well in general use. 1095 is an old steel, and you know what? It’s pretty darn good.

I like other steels better, but it works for me 95% of the time. D2 is an old steel, and you know what? It’s damn good. Sure, I like better steels, but for the most part, given even moderate use like hunting, most steels are good enough.

It’s only when you get into very strenuous applications, like cutting netting at sea or stamping out thousands of metal parts an hour, that you see large benefits from small changes in steel. This is not what steel junkies want to hear. They want to pursue ever smaller increases in performance from ever more expensive steels. But the reality is, use does not dictate that pursuit; desire does. And I am just as guilty as the next person of following Crucible and others down that rabbit hole, but I am okay with the fact that it’s a want and not a need.

Problem #3: Bro Science

You have seen this before–some yahoo on the Internet cutting rope on camera or chopping up a cinder block with a fixed blade. Total 100% baloney. I love the “Knife Test” series (which I watched well before they were on this site), but I recognize it for what it is–the knife equivalent of car crashes in NASCAR. The problem with this form of testing (and one of the reasons I don’t place a lot of weight on it when doing reviews) is that it fails to comport with a basic tenet of the scientific method: it’s not repeatable, for two reasons.

The first is that these “tests” do not have controlled methods. The angle of the cut is not precisely measured and not repeatable. The material is not uniform across the internet (one sisal rope is not the same as another). The blade conditions are not known (edge angle, method of sharpening, sharpness at the time of the test, and no, “out of the box” is not sufficiently precise). Without this information, all of these tests are just guessing.
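
To see how far these videos are from repeatable, consider what a minimally controlled cutting test would have to record before the first cut. The sketch below is my own guess at such a record; the field names and example values are illustrative, not from any published standard:

```python
# A sketch of the minimum a repeatable cutting test would need to record.
# Field names and example values are illustrative, not a published standard.
from dataclasses import dataclass

@dataclass
class CutTestRecord:
    steel: str                 # e.g. "S30V", as claimed by the manufacturer
    hardness_hrc: float        # measured on the actual blade, not the spec-sheet range
    edge_angle_deg: float      # inclusive edge angle, measured before the test
    sharpening_method: str     # stones/belt/strop and grit progression
    starting_sharpness: float  # measured pre-test (e.g. grams of force to cut a test medium)
    medium: str                # exact material, supplier, and lot ("sisal rope" is not enough)
    medium_diameter_mm: float
    cut_angle_deg: float       # angle of blade to medium, held constant
    cut_motion: str            # push cut vs. draw cut, stroke length, applied force
    cuts_completed: int        # result: cuts before sharpness drops below a threshold

example = CutTestRecord(
    steel="S30V", hardness_hrc=59.5, edge_angle_deg=30.0,
    sharpening_method="waterstones to 6000 grit, stropped",
    starting_sharpness=150.0, medium="sisal rope, single supplier and lot",
    medium_diameter_mm=10.0, cut_angle_deg=90.0,
    cut_motion="two-inch draw cut under a fixed weight", cuts_completed=240,
)
print(example)
```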

Science is a complex, long, and often tedious process, but it is all of those things because science, especially in the “hard sciences,” is incredibly precise and everything is recorded. Seeing my wife’s experiments mature and produce results is incredible because these are experiments run over months and years and then checked, rechecked, and checked again. This is not what the Bro Scientists are doing. They don’t have the tools or the time. Compared to something like the CATRA test (which is, itself, lacking compared to the tests my wife runs in her state-of-the-art university laboratory), these “knife tests” are just play acting.

Supposing, however, that some industrious person decided to do all of those things that real science requires (measuring edge angles before and after, using a uniform medium and method of sharpening, etc.), there is yet another problem: sample size.

The Bro Scientists will show you spine whacks and lock rock, but what they miss is that they have demonstrated not the failure of a design or a manufacturer as a whole, but the failure of a single knife. It’s hard to generalize from only one item when thousands are made. You wouldn’t look at one at-bat to figure out if a baseball player is good–you look at a season or a couple of seasons. Remember 1996? Ken Caminiti won the National League MVP award after putting up a monster season unlike anything he had done before. Then we found out he was using a cocktail of chemicals that would make a bodybuilder blush. One season, like one knife, doesn’t tell you much of anything. You could have bought a lemon or the best model that came off the line.
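
You can even put a rough number on how little one knife tells you. Here is a minimal sketch of the arithmetic, using the standard binomial bound; the trial counts are hypothetical, only the math is real:

```python
# How much does testing n knives with zero failures tell us about a production run?
# The largest per-knife failure rate still consistent with that result at 95%
# confidence is the p where seeing zero failures in n trials becomes a 5% event:
#   (1 - p)^n = 0.05  ->  p = 1 - 0.05^(1/n)

def upper_bound_failure_rate(n_tested: int, confidence: float = 0.95) -> float:
    """Upper bound on the true failure rate after n tests with no failures."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_tested)

for n in (1, 5, 30, 100):
    bound = upper_bound_failure_rate(n)
    print(f"{n:>3} knives, zero failures -> failure rate could still be up to {bound:.1%}")

# Output:
#   1 knives, zero failures -> failure rate could still be up to 95.0%
#   5 knives, zero failures -> failure rate could still be up to 45.1%
#  30 knives, zero failures -> failure rate could still be up to 9.5%
# 100 knives, zero failures -> failure rate could still be up to 3.0%
#
# It cuts the other way too: one lock failure out of one knife tested is
# consistent (at 95% confidence) with anything from roughly a 2.5% lemon
# rate to a completely broken design.
```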

The Bro Science Movement has brought the knife community all sorts of weird and wacky controversies–alleged lock rock on Sebenzas, the “failure” of the Manix 2 ball bearing lock, and most recently the “Elmax Controversy.” What Bro Science hasn’t brought us is useful, generalizable information. Bro Science is to the knife community what Gwar is to music: pure theater.

Conclusion

Steels are hard to evaluate. They are very complicated affairs with all sorts of variables. If a hobbyist is telling you there is a HUGE difference between S30V and S35VN, look at them with skepticism.

There are differences, and sometimes noticeable ones (I have found differences between 8Cr13MoV from different companies, and even differences between companies’ versions of 420HC and 1095).

But by and large, we can’t get better than bad, good, and great.

A devoted Dad and Husband, daily defender of the Constitution, and passionate Gear Geek. You can find Tony's reviews at his site: www.everydaycommetary.com, on Twitter at EverydayComment, on Instagram at EverydayCommentary, and once every two weeks on a podcast, Gear Geeks Live, with Andrew from Edge Observer.
