I agree that it's irresponsible in a way, but I think your analogy is flawed.
You compare HDM's actions to an engineer pointing out weak spots in airplanes. The analogy breaks down, though: airplanes are difficult to fix, and exploiting their weaknesses can cause mass casualties. Software can usually be fixed quickly, and the fixes are often delivered automatically.
The point of HDM's software is two-fold, at least:
1) To force vendors to fix their software as quickly as possible.
Yes, he's putting people in danger as a result. But at the same time, he's putting the burden of a strict deadline on the software creators (I'll call them vendors for now, even though that may not be exactly correct). Once a public exploit is released, the vendor is forced to fix the problem on a much stricter timeline.
You have to assume that blackhats already have an exploit. Blackhats work at least as hard as security researchers to find their own vulnerabilities and write their own exploits. But the blackhats aren't reporting or publicizing the vulnerabilities they find; that would be counter-productive for them.
I think it's very important, once a security researcher finds a vulnerability, to report it and do everything in his (or her) power to make sure the fix is released as quickly as possible.
2) To allow security experts and pen-testers to test for problems.
There are actually several reasons why this is important. Sometimes a vendor says they've fixed problems that they haven't (Oracle comes to mind). After patching mission-critical systems, it's important to make sure they're actually safe, because, yet again, the blackhats may be probing the same thing, and you don't want to be caught with a system that you think is patched but isn't.
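To be concrete about what "making sure" can look like: a first pass is often just re-checking the service after the patch goes in, before re-running the actual proof-of-concept. A minimal sketch in Python, with a made-up host, port, and banner string (none of these are from a real advisory):

    import socket

    # Hypothetical target details -- placeholders for illustration only.
    HOST = "patched-server.example.com"
    PORT = 21
    KNOWN_VULNERABLE = b"ExampleFTPd 2.3.4"  # version string the advisory flagged

    # Grab the service banner again after patching.
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        banner = sock.recv(1024)

    if KNOWN_VULNERABLE in banner:
        print("Still advertising the vulnerable version -- the patch didn't take.")
    else:
        print("Banner changed; now re-run the actual proof-of-concept to be sure.")

A banner check alone proves nothing, of course; the point is that having the released exploit is what lets you do the real verification instead of taking the vendor's word for it.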
Other times, you are in charge of a network where you don't have access to the endpoints. The only way to force another department to apply an update (speaking from a government standpoint) is to prove that there's a problem. I'm sure other pen-testers have similar experiences.
The fact that he's releasing kernel exploits instead of application exploits is probably a good thing. It forces kernel/hardware developers to be more careful. It's easy to get sloppy with systems that aren't being targeted (Apple comes to mind), but once a system is in the crosshairs, the developers get far less sloppy.