lol, I thought that as well, but I'm pretty sure most of the suggestions are on fallacious grounds.
Yeah, I agree. The double is more appropriate here, and adding decimal places is kind of silly. There is no ambiguity if you declare the datatype as double or float.
Antivirus:
I don't care where you are, but you sounded so 'sophomoric', if that is the correct word. It's like you had a big label on your forehead that said NOVICE. I recognized what you were saying as the stuff they told us when we first started programming and then never heard again later.
In practice, if you aren't sure whether to make a variable a float or a double, make it a double. There are cases where a float is the right choice, but a few floats scattered around as single variables won't matter, even in large programs. It only makes a real difference when, say, you want to declare a real-valued 4-dimensional array of 100x100x100x100 elements, or something like that.
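Just to put a rough number on that: where the choice actually bites is total memory. A minimal sketch (C++, assuming typical 4-byte floats and 8-byte doubles) for the 100x100x100x100 example above:

```cpp
#include <cstdio>

int main() {
    // Hypothetical 100x100x100x100 real-valued array from the example above.
    const long long elements = 100LL * 100 * 100 * 100;  // 100 million elements

    // The footprint depends only on the element size:
    // float is typically 4 bytes, double is typically 8 bytes.
    double mb_float  = elements * sizeof(float)  / (1024.0 * 1024.0);
    double mb_double = elements * sizeof(double) / (1024.0 * 1024.0);

    std::printf("float array:  ~%.0f MB\n", mb_float);   // roughly 381 MB
    std::printf("double array: ~%.0f MB\n", mb_double);  // roughly 763 MB
    return 0;
}
```

For a handful of scalar variables that factor-of-two difference is noise; for an array that size it's the difference between fitting in memory or not.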
However, I believe this algorithm takes a long time to converge to more than 6 correct decimal places (over a day, at least?)
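I don't know the specifics of the algorithm, but for what it's worth, ~6 decimal digits is also about where a plain float runs out of precision, so anything past that point needs a double regardless of how long the convergence takes. A quick illustration (C++, using the standard <cfloat> limits):

```cpp
#include <cfloat>
#include <cstdio>

int main() {
    // Decimal digits guaranteed to survive a round trip through each type.
    std::printf("float:  %d significant decimal digits (FLT_DIG)\n", FLT_DIG);  // 6
    std::printf("double: %d significant decimal digits (DBL_DIG)\n", DBL_DIG);  // 15

    // Example: a value with 10 decimal digits stored in each type.
    float  f = 3.1415926535f;
    double d = 3.1415926535;
    std::printf("float : %.10f\n", f);  // digits beyond ~7 are already garbage
    std::printf("double: %.10f\n", d);  // all 10 digits preserved
    return 0;
}
```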