In my current line of work, higher-level managers who aren't as number-savvy don't want to see a table reporting that "0.037% of the patients are Native Americans over the age of 55"; they don't seem to want to see any percentage lower than "0.1%", which means (you guessed it) there is going to be rounding error.
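As an illustration of the problem (the numbers here are mine, not from the original comment): rounding each share independently can make the total drift away from 100%.

```r
# Three equal shares of 100%: each rounds to 33.3, so the total
# comes out as 99.9 instead of 100.
p <- c(1, 1, 1) / 3 * 100
rounded <- round(p, 1)
rounded       # 33.3 33.3 33.3
sum(rounded)  # 99.9 -- the rounded shares no longer sum to 100
```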


> sum(c(0.335, 0.335, 0.330))
[1] 1
> sum(round_preserve_sum(c(0.335, 0.335, 0.330)))
[1] 1

The rounding discussed here occurs at the presentation stage (after any other processing). I agree that the greatest practical precision (e.g., double floating point precision) should be used until then.

I read through the linked SO post and the links therein, and it still looks to me like all of this is a kludge to "fix" poorly designed processing.

round_preserve_sum <- function(x, digits = 0) {
  # Largest-remainder rounding: scale, round everything down, then hand
  # the units lost to flooring back to the entries with the largest
  # fractional parts, so the rounded values keep the original sum.
  up <- 10^digits
  x <- x * up
  y <- floor(x)
  indices <- tail(order(x - y), round(sum(x)) - sum(y))
  y[indices] <- y[indices] + 1
  y / up
}
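A quick usage sketch (the function definition is repeated so the snippet runs on its own; the input vector is illustrative):

```r
round_preserve_sum <- function(x, digits = 0) {
  # Round to `digits` places while preserving the sum of x
  # (largest-remainder method).
  up <- 10^digits
  x <- x * up
  y <- floor(x)
  indices <- tail(order(x - y), round(sum(x)) - sum(y))
  y[indices] <- y[indices] + 1
  y / up
}

x <- c(0.335, 0.335, 0.330)
round(x, 2)                    # naive per-element rounding
round_preserve_sum(x, 2)       # one element absorbs the difference
sum(round_preserve_sum(x, 2))  # prints 1 -- the total is preserved
```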