Silverfrost Forums


Initialization via data statements fails

6 Oct 2020 11:09 #26442

I have attached a link to my Project on DropBox (too long to post). Simple enough. It initializes several arrays that are in named common via DATA statements.

The issue is in the first row, at line 167 of the INCLUDE file.

     $  'AL E','T',  101, 85.833333333333D0,         200000.000000000000D0,30.500000000000D0,          25000.000000000000D0,0.000000000000D0,0.000000000000D0,

The variable SPCC_83(1,1:6) is not initialized properly. The print statement shows that the second element is set to 215.25.

       85.8333333333              215.250000000              200000.000000              13.5000000000
       1.00000000000              100000.000000

Every other element is, as far as I can determine, correct.

It would appear that this 215.25 is inserted, since the elements that follow are correct, just displaced by one array position.

https://www.dropbox.com/s/wim9oemo7thzyhy/FPStackFault.zip?dl=0
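
For context, the construct involved looks roughly like the following sketch. The names, dimensions, and values are illustrative only (not taken from the actual project), and it is written as a BLOCK DATA for self-containment; in the project the DATA statements live in an INCLUDE file, and the single DATA statement continues over a hundred such lines.

C     Sketch only: names, dimensions and values are hypothetical.
      BLOCK DATA SPCINIT
      DOUBLE PRECISION SPCC_83(135,6)
      COMMON /SPCCOM/ SPCC_83
C     One long, heavily continued DATA statement per table
      DATA (SPCC_83(1,J), J=1,6) /
     $  85.833333333333D0, 200000.000000000000D0, 30.500000000000D0,
     $  25000.000000000000D0, 0.000000000000D0, 0.000000000000D0 /
      END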

7 Oct 2020 1:27 #26443

Compiler Version 8.64.0, same with the DLLs.

7 Oct 2020 8:27 #26445

Thank you for the feedback. I have made a note of this failure.

7 Oct 2020 3:28 #26450

Bill

There are a lot of files in your archive. Is it possible for you to reproduce the fault in a small program?

7 Oct 2020 5:29 #26451

The Plato Project FORUMTESTING contains the relevant files.

That said, I shall pare it down.

Done.

Bill

13 Oct 2020 11:15 #26467

The ZIP file has been updated at the link below.

https://www.dropbox.com/s/wt4i5nky5x1e878/FPStackFault.zip?dl=0

Apparently once one establishes a link in DropBox, the file is 'frozen'. Apologies for not realizing this.

15 Oct 2020 7:21 #26472

Thanks.

17 Oct 2020 2:36 #26476

Bill

FTN95 has a maximum line length of 32K characters and this limit must have been exceeded in this data statement.

As a rough calculation you have 212 x 135 + 104 = 28,724 characters, which is less than 32K (32,768) but getting close.

I suggest that you reduce the number of decimal places in the data where this is possible.

I will take a look to see why this has not been faulted.

At first sight, it looks like the extra characters have wrapped around in some way, either in the line buffer or because the DATA statement has not found enough data.

18 Oct 2020 5:46 #26478

Paul, thanks. I'll take a look and see if breaking up the DATA into smaller chunks can correct the issue. I'll also look through other DATA sections to see if I did this somewhere else!!

Bill

19 Oct 2020 5:55 #26480

Bill

It might be easier to remove trailing zeros from the data where these are redundant.

19 Oct 2020 8:40 #26481

Paul,

I suppose that you mean that 0.0D0 is exactly the same as 0.00000000D0.

I wonder if there have ever been compilers where this is not so?

Eddie

19 Oct 2020 10:49 #26482

Yes, and there are approximately 15 significant figures with double precision. So one would hope that truncation is always admissible.

19 Oct 2020 11:05 (Edited: 19 Oct 2020 1:44) #26483

Eddie, real/double zero is a special case; so are integers that can be represented in 24 bits or less. Conversion of reals with these values to integers or to reals of a different precision results in no loss of accuracy.

A more realistic example is shown in the following program:

program tst
implicit none
real :: r1 = 0.1234                        ! single precision constant
double precision :: d1 = 0.1234d0, result  ! double precision constant
!
result = r1                                ! widen single to double
print *, result
result = d1                                ! already double precision
print *, result
end program

The output:

          0.123400002718
          0.123400000000

shows that the single precision constant 0.1234 may not be suitable as a value for setting into a double precision variable, if the trailing digits 2718 are unacceptable as replacements for 0000.


In Bill's program, I do not know why he needs to use DATA to assign values to hundreds or thousands of variables. Putting that many constants into a source file, with lines extended beyond 132 columns, is almost asking for trouble.

Why not put the values into a text file, and read that file once at the beginning? If speed is a concern, convert that text file to an unformatted binary file once, and read that unformatted file at the beginning of a run.
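
As a sketch of that two-step approach (the file names and array shape here are hypothetical, just to show the idea), the one-off converter could look like this:

   program make_unf
      ! One-off conversion: read the constants from a text file and
      ! write them to a sequential unformatted file.  Names and the
      ! array shape are hypothetical, not taken from Bill's project.
      implicit none
      double precision :: spcc(135,6)
      open (10, file='spcc_constants.txt', status='old', action='read')
      read (10, *) spcc               ! list-directed read of the whole table
      close (10)
      open (11, file='spcc_constants.unf', status='replace', &
            form='unformatted', action='write')
      write (11) spcc                 ! a single unformatted record
      close (11)
   end program make_unf

At the start of each run the program would then open the unformatted file and do a single unformatted read of the array, which avoids re-parsing the text every time.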

19 Oct 2020 11:49 #26484

Also, the advantage of a text file is that text reals are interpreted at the same precision as the variable, so in a text file 0.10000000D0 is the same as 0.1 (which is the same as 0.1D0), while 'd = 0.1' is the classic example of a precision failure. Try:

   real*8 :: d = 0.1
   character text*8

   write (*,*) 'real*8 :: d = 0.1', d
   d = 0.1
   write (*,*) 'd = 0.1      ', d
   text = ' 0.1 '
   read (text,*) d
   write (*,*) 'read (text,*)', d
   d = 0.1d0
   write (*,*) 'd = 0.1d0    ', d
   end

The data statement approach is different in F77.

I often use minimal text for real*8 constants, as the following all give the same value to 'd'

  real*8 d
  d = 3.0d0
  d = 3.
  d = 3

19 Oct 2020 12:16 #26487

Mecej4,

I would expect zero to be zero, as it is perfectly representable in all Fortran numeric formats. See here, in old-fashioned Fortran:

      PROGRAM TST    
      REAL*8 A, B
      DATA A, B / 0.0D0, 0.00000000000D0 /
      WRITE(*,100) A, B, A-B, A+B
 100  FORMAT(4D20.10)
      END

I think that I would expect many small integers to be represented exactly in single or double precision as defined in DATA or an assignment, and especially even numbers or powers of two. I just wouldn't rely on it. I wouldn't expect any general floating point number to be held exactly.

Putting all your defined values in a single DATA statement does seem to me to be making a rod for your own back, as the chance of understanding it later is really rather slim.

Eddie

19 Oct 2020 12:55 #26488

Bill

I have spent some time on this today and things are not as I guessed.

The initial line buffer size is some 17000 characters which is OK. The problem is deeper down, maybe in the processing of the resulting comma list.

This means that my suggestion to reduce the number of characters will not work. Rather you should reduce the number of items or use an alternative approach.

19 Oct 2020 12:57 #26489

Paul, breaking the DATA into two sections worked great. Thanks for the heads up and suggestion regarding removing the 'extraneous' zeroes.

Just as a comment for mecej4: while reading a text file (even one that contains comments) may be preferable to a DATA statement in some instances, my deliveries to my customers already contain enough files. Adding another one that absolutely has to be present for the software to work puts a burden on the software to find it, process it, and, if it is missing, produce an error message meaningful enough that my not-so-computer-facile users can understand it and attempt a recovery.

I have one set of DATA statements that initialize about a dozen arrays of 54,000+ elements each. Broken into sections, it takes a while to compile, but only gets compiled once per delivery. Much easier to do it via DATA. And, when it does get changed, it will automatically be compiled and included!
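
For anyone who hits the same limit, the fix described above amounts to splitting one long DATA statement into two (or more) shorter ones that cover different parts of the same array. A small hypothetical illustration, in the fixed-form style of the project:

      PROGRAM SPLITDATA
C     Hypothetical illustration only: a 4x2 array initialized by two
C     DATA statements instead of one, each covering half the rows.
      DOUBLE PRECISION X(4,2)
      DATA ((X(I,J), J=1,2), I=1,2) /
     $  85.833333333333D0, 200000.0D0, 30.5D0, 25000.0D0 /
      DATA ((X(I,J), J=1,2), I=3,4) /
     $  0.0D0, 0.0D0, 1.0D0, 100000.0D0 /
      WRITE (*,*) X
      END

Each DATA statement then stays comfortably inside the compiler's internal limits while the values are still compiled into the executable.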
