"*Never* round in the middle of a calculation. You only round the final answer." was what I was taught in high school, and I remember there were some classic examples of this, at least pre-internet; not sure what's on the interwebs nowadays.

In my current line of work, higher-level managers who aren't as number-savvy don't want to see a table entry like "0.037% of the patients are Native Americans over the age of 55"; they don't want to see any percentage lower than 0.1%, which means (you guessed it) there is going to be rounding error.
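A quick illustration of how rounding each cell for presentation can break the total (the percentages here are made up for the example):

```r
# Hypothetical percentages that sum to exactly 100
p <- c(0.037, 0.024, 99.939)

sum(p)           # 100
sum(round(p, 1)) # 0.0 + 0.0 + 99.9 = 99.9 -- the table no longer totals 100%
```

The small categories round down to 0.0%, so the displayed column loses 0.1 percentage points, which is exactly the kind of discrepancy a reviewer notices.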

Looks like it works:

> sum(c(0.335, 0.335, 0.330))
[1] 1
> sum(round_preserve_sum(c(0.335, 0.335, 0.330)))
[1] 1
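For context, `round_preserve_sum` isn't defined in this thread; a minimal sketch using the largest-remainder method (the name, signature, and `digits` default are assumptions on my part, not necessarily the implementation discussed above) could look like:

```r
# Round a numeric vector so the rounded values sum to the rounded total:
# scale up, round everything down, then distribute the leftover units to
# the entries with the largest fractional remainders.
round_preserve_sum <- function(x, digits = 0) {
  up <- 10^digits
  x <- x * up
  y <- floor(x)
  # indices of the entries with the largest remainders get the leftovers
  indices <- tail(order(x - y), round(sum(x)) - sum(y))
  y[indices] <- y[indices] + 1
  y / up
}

round_preserve_sum(c(0.335, 0.335, 0.330), digits = 2)
# two entries stay at 0.33 and one is bumped to 0.34, so the sum is 1
```

Which tied entry receives the extra unit depends on the ordering, so this isn't deterministic across permutations of the input, but the sum is always preserved.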

"But may I ask: why did you round any variable prior to processing?"

The rounding discussed here occurs at the presentation stage (after any other processing). I agree that the greatest practical precision (e.g., double-precision floating point) should be used until then.

I read through the linked SO post and the links therein, and it still looks to me like all of this is a kludge to "fix" poorly designed processing.