Hi,
We get datafiles from external sources, supposedly in a standard format. Occasionally, these files contain small amounts of 'rubbish'. What we do to try to decode them is to read the data into an INTEGER buffer, EQUIVALENCE that to a REAL buffer, and read each value from whichever buffer is appropriate for the expected data type:
      INTEGER IBUFF(NVALS), ITYPES(NVALS)
      REAL    RBUFF(NVALS)
      EQUIVALENCE (IBUFF, RBUFF)

      READ(LU) (IBUFF(I),I=1,NVALS)

      DO I=1,NVALS
        IF( ITYPES(I).EQ.1 ) THEN       !data value is integer
          RVAL = IBUFF(I)
        ELSE IF( ITYPES(I).EQ.2 ) THEN  !data value is real
          RVAL = RBUFF(I)
        END IF
        ..
        ..
      END DO
The problem is that, when the data is corrupted, the bit pattern in RBUFF(I) can be an illegal floating-point value, and the assignment to RVAL causes a crash.
What I would like to do is to be able to test the value of IBUFF(I) to see if it would be an invalid real when equivalenced to RBUFF(I) and to report accordingly, rather than 'fall over'.
So, is there a 'simple' range of integer values that corresponds to 'valid' reals, or an intrinsic function that can do this test? (Presumably the debugger knows, since it can print 'Illegal floating value'.)
TIA
K