Error checking and paranoia

Started by nslay, January 16, 2009, 10:15:52 AM


nslay

This is a pretty heavy-handed question, but when does error checking/handling become paranoia?  I find myself spending most of my time writing error checking/handling code, and I feel like it's mostly counterproductive.  In some circumstances, if code fails, the system is screwed anyways.  It's just not clear where one draws that line.  A related, and equally difficult, question is: when do you allow software to crash itself?  I do believe that general criteria could be developed to address these questions.  What do you think?
An adorable giant isopod!

Camel

If you write code by contract, you could theoretically have a system whereby there's no error checking at all!

<Camel> i said what what
<Blaze> in the butt
<Camel> you want to do it in my butt?
<Blaze> in my butt
<Camel> let's do it in the butt
<Blaze> Okay!

MyndFyre

Quote from: Camel on January 30, 2009, 10:38:55 AM
If you write code by contract, you could theoretically have a system whereby there's no error checking at all!

That's interesting, though.  At some point you'll have to deal with user input (or you wouldn't have a useful program!).  But you can't expect input to follow a contract.

I personally do follow the strategy of writing code by contract, but it's still an interesting problem.
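One common way to reconcile the two points above (a sketch, not from the original posts; the function names are made up) is to validate untrusted input once at the boundary, and let everything behind that boundary rely on the contract:

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <stdlib.h>

/* Boundary check: untrusted input is validated here, and only here. */
static bool is_all_digits(const char *s)
{
    if (s == NULL || *s == '\0')
        return false;
    for (; *s != '\0'; s++)
        if (!isdigit((unsigned char)*s))
            return false;
    return true;
}

/* Contract: s has already passed is_all_digits().
 * No defensive re-checking inside; the assert documents the contract. */
static long parse_validated(const char *s)
{
    assert(is_all_digits(s));
    return strtol(s, NULL, 10);
}
```

Inside the boundary, every function can assume clean input; only the edge of the program pays the validation cost.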
Quote from: Joe on January 23, 2011, 11:47:54 PM
I have a programming folder, and I have nothing of value there

Running with Code has a new home!

Quote from: Rule on May 26, 2009, 02:02:12 PM
Our species really annoys me.

nslay

Design by contract looks like an interesting development/debugging strategy... however, I think it is an awful idea for finished software.  You should not fail, and fail hard, when something trivial goes wrong in a finished product.  This is especially true in mission-critical software... but then, I believe everything should be treated as "mission-critical" anyways.  An end user should suffer the least when software encounters errors.

When I say "error checking", I'm referring to library and system calls.  Of course, any input should be treated with a grain of salt.

Here's an example of where writing error handling code is a waste of time:

char *str = strdup( "cow" );


This should never fail unless the system is totally screwed.  Where does one draw a line between reasonable error conditions, and totally insane/catastrophic error conditions like the one above?

MyndFyre

Well, that's a tough case especially for debugging. 

The only reason strdup would fail in this way is in the case that the process is out of heap memory, right?  If you're debugging a process, that error is going to show up in multiple ways (most frequently around a null pointer fault) but most likely in different places. 

Checking for errors on allocations like that is a great DEBUGGING strategy.  But I think it generally goes above and beyond on code that makes it into a production environment.

When I program I generally go for (in order of importance):
* The ease of changing the code I'm writing.
* The ease of understanding the interfaces to the code I'm writing.
* The correctness of my code with regard to the contracts that it exposes (because if a function says that it will never fail and then fails, I'm not writing my contract correctly).
* The correctness of my code in general.

There are situations where I might have something like this (pseudocode):


re : Regex = "\\d+"
value = re.Match("502blah");


At this point I know "value" is going to contain a string with only numbers in it.  .NET provides two utility methods, int.Parse() and int.TryParse(); the former throws an exception and the latter fails with a Boolean result.  Typical usage indicates to use TryParse(), but since I've already validated that it will only contain one or more numbers, I can skip it.
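A rough C analogue of the Parse/TryParse split (a sketch, not from the original post) is `strtol` with its end-pointer and `errno` reporting, wrapped TryParse-style so failure comes back as a Boolean instead of an exception:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/* TryParse-style wrapper: reports failure via the return value,
 * much like int.TryParse in .NET, instead of aborting or throwing. */
bool try_parse_int(const char *s, long *out)
{
    char *end;
    errno = 0;
    long v = strtol(s, &end, 10);
    if (errno != 0 || end == s || *end != '\0')
        return false;   /* overflow, no digits, or trailing junk */
    *out = v;
    return true;
}
```

As with the .NET case, if the input has already been validated upstream, the caller can skip the Boolean check by contract.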

Camel

Quote from: nslay on February 02, 2009, 10:37:00 AM
Here's an example of where writing error handling code is a waste of time:

char *str = strdup( "cow" );


This should never fail unless the system is totally screwed.  Where does one draw a line between reasonable error conditions, and totally insane/catastrophic error conditions like the one above?
Explicitly writing code to check if you've blown the heap is just pedantic; if you've truly blown the heap, then you probably won't be able to tell the user what happened anyways. In modern languages, the VM will throw an error, and I think that's the best way to handle the situation.

One major exception to that would be in an embedded system where *NULL doesn't generate a fault - probably most systems with 16 bits or fewer of addressing, for example. Aside from limited memory making it more likely to blow the heap, it's important to check because otherwise your allocation table will become corrupt, and then even free() won't work.

Quote from: MyndFyre on February 03, 2009, 12:00:58 PM
.NET provides two utility methods, int.Parse() and int.TryParse(); the former throws an exception and the latter fails with a Boolean result.
Cool. I don't think there's any analog for that in Java; exceptions seem to be preferred.


nslay

Quote from: Camel on February 11, 2009, 02:40:37 AM
Quote from: nslay on February 02, 2009, 10:37:00 AM
Here's an example of where writing error handling code is a waste of time:

char *str = strdup( "cow" );


This should never fail unless the system is totally screwed.  Where does one draw a line between reasonable error conditions, and totally insane/catastrophic error conditions like the one above?
Explicitly writing code to check if you've blown the heap is just pedantic; if you've truly blown the heap then you probably wont be able to tell the user what happened anyways. In modern languages, the VM will throw an error, and I think that's the best way to handle the situation.
I disagree.  The programmer may know in advance the magnitude of memory they are dealing with, and checking for failure is good practice in such cases.

Quote
One major exception to that would be in an embedded system where *NULL doesn't generate a fault - probably most systems with 16 bits or fewer of addressing, for example. Aside from limited memory making it more likely to blow the heap, it's important to check because otherwise your allocation table will become corrupt, and then even free() won't work.
Embedded systems typically don't have virtual memory, which is why NULL can be a valid address.  I imagine that kind of system is scary to program and debug on.

Quote

Quote from: MyndFyre on February 03, 2009, 12:00:58 PM
.NET provides two utility methods, int.Parse() and int.TryParse(); the former throws an exception and the latter fails with a Boolean result.
Cool. I don't think there's any analog for that in Java; exceptions seem to be preferred.

iago

I remember seeing a vulnerability writeup in the last few months, I think it was in Flash but I could be wrong, about an exploitable null reference bug. The problem was that it would fail to allocate memory, add a user-controlled value to the memory, and write there (or something much more complicated). So there are cases where you have to worry about NULLs, such as when you use it in an array.

nslay

Quote from: iago on February 13, 2009, 10:41:06 AM
I remember seeing a vulnerability writeup in the last few months, I think it was in Flash but I could be wrong, about an exploitable null reference bug. The problem was that it would fail to allocate memory, add a user-controlled value to the memory, and write there (or something much more complicated). So there are cases where you have to worry about NULLs, such as when you use it in an array.

It's probably something like


char *buf = malloc( size ); /* return value never checked */

memcpy( buf + offset, user_provided_buf, len ); /* offset is large enough s.t. NULL + offset is a valid address */
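For contrast, here's a checked version of that pattern (a sketch, not from the original post; the function name and bounds checks are illustrative): refusing to copy when the allocation fails means `NULL + offset` can never become a write target.

```c
#include <stdlib.h>
#include <string.h>

/* Checked variant of the vulnerable pattern: validate the bounds and
 * the allocation before copying, so a failed malloc can't turn
 * NULL + offset into an attacker-chosen write address. */
int copy_at_offset(void **bufp, size_t size,
                   size_t offset, const void *src, size_t len)
{
    if (offset > size || len > size - offset)
        return -1;          /* would write past the end of the buffer */
    char *buf = malloc(size);
    if (buf == NULL)
        return -1;          /* the check the vulnerable code omits */
    memcpy(buf + offset, src, len);
    *bufp = buf;
    return 0;
}
```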

iago

I found the story about the bug, but I didn't re-read it so I still don't remember what the deal was. Here you go:
http://blogs.iss.net/archive/cve-2008-0017.html


nslay

Quote from: iago on February 14, 2009, 10:09:00 AM
I found the story about the bug, but I didn't re-read it so I still don't remember what the deal was. Here you go:
http://blogs.iss.net/archive/cve-2008-0017.html



I didn't read it yet, but I think it's more likely a misuse of realloc, since it's commonly used to resize arrays.


buf = realloc( buf, size ); /* return value never checked; on failure this also leaks the original buffer */

memcpy( buf + offset, user_provided_buffer, len ); /* append data to end of buffer, except when realloc fails, write data to address NULL + offset */
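The standard fix for that pattern (a sketch, not from the original post; the function name is made up) is to assign realloc's result to a temporary: on failure realloc returns NULL but leaves the old block intact, so overwriting `buf` directly would both leak the old buffer and set up the `NULL + offset` write.

```c
#include <stdlib.h>
#include <string.h>

/* Safe realloc idiom: grow through a temporary pointer so that a
 * failed realloc neither leaks the old buffer nor leaves buf NULL. */
int grow_and_append(char **bufp, size_t old_size,
                    const char *src, size_t len)
{
    char *tmp = realloc(*bufp, old_size + len);
    if (tmp == NULL)
        return -1;          /* old buffer still valid, still owned by caller */
    memcpy(tmp + old_size, src, len);
    *bufp = tmp;
    return 0;
}
```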