I think everyone knows about rounding a REAL to the nearest INTEGER: the conversion to INTEGER truncates (which, for positive values, means rounding down), so to round up you add 1 before converting, and to round to the nearest you add 0.5 before the conversion.
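In code terms, and using X and I purely as illustrative names, I mean something like this (for non-negative X):
I = INT(X)          ! truncation: rounds down for non-negative X
I = INT(X + 0.5)    ! nearest integer, for non-negative X
I = NINT(X)         ! or let the NINT intrinsic do it, negatives included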
My issue is with rounding to a specific precision. The context is that when picking a coordinate off a screen, the position is given in pixels, which are INTEGER and therefore of limited precision anyway. Where the screen image represents a physical object that may have dimensions in (say) metres, there has to be a real-world-to-pixel scale factor from which I can determine the real-world coordinates of the point picked. Inevitably, this comes out with a huge string of decimal places. I have found that to round to a particular precision, say millimetres, I can get the effect I want with an internal WRITE, as in:
CHARACTER*(20) TEMP_TEXT
WRITE (TEMP_TEXT,'(F20.3)') VALUE   ! F20.3 rounds to 3 decimal places on output
READ (TEMP_TEXT,*) VALUE            ! read the rounded value back
It occurs to me that inside the WRITE statement there has to be a rounding algorithm, and that I have simply gone a roundabout way of using it.
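I suppose the direct arithmetic equivalent, avoiding the character round trip, is something along these lines (rounding to the nearest 0.001, i.e. millimetres if VALUE is in metres), though I have not checked that it behaves identically to the internal WRITE:
VALUE = ANINT(VALUE * 1000.0) / 1000.0   ! ANINT rounds a REAL to the nearest whole number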
So far, so good. The method works if I want to round to a particular number of decimal places, and before anyone points it out, yes, I know that the binary representation will be inexact. What this method does not solve is rounding to (say) the nearest 0.2 m, 0.25 m or 0.5 m. My efforts so far do work, but they are clumsy.
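Presumably it comes down to dividing by the step, rounding to the nearest whole number, and multiplying back, along the lines of the following (STEP is just a name I am using for illustration):
REAL STEP
STEP = 0.25                          ! round to the nearest 0.25 m
VALUE = STEP * ANINT(VALUE / STEP)   ! nearest multiple of STEP
But I am not convinced that this is the best or most robust way of doing it.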
Is there anyone out there with a mathematics and computing background who knows of a good general-purpose rounding algorithm?
Eddie