I have made progress on testing write/read of large records, now up to 25 GBytes. These now work for unformatted sequential read/write and also for stream read/write.
They should work in the next release of FTN95 (Ver 9.0?).
For records larger than 2 GBytes, the unformatted sequential header/trailer is now 9 bytes, where byte 1 = -2 to indicate that an 8-byte size value follows.
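For anyone decoding these files directly with stream I/O, the new header could be recognised along these lines (a sketch only; `lu` is assumed to be a unit opened with access='stream', and the handling of the pre-existing shorter header forms is my assumption):

```fortran
integer(kind=1) :: flag
integer(kind=8) :: record_len
integer :: iostat

read (lu, iostat=iostat) flag            ! first byte of the record header
if ( flag == -2 ) then
   read (lu, iostat=iostat) record_len   ! 8-byte size follows: 9-byte header in total
else
   ! assumed: smaller records keep the pre-existing shorter header forms,
   ! which would need to be decoded here
   record_len = -1
end if
```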
I have also been looking at the I/O speeds for PCIe SSDs (I think mine is a Ver 3). The rates they quote are a bit misleading.
On my PC, which has 64 GBytes of physical memory, if I do a write-then-read test on a 32 GByte file, I can get write speeds up to 2.8 GBytes/sec, unformatted sequential read up to 4.0 GBytes/sec and stream read over 7.5 GBytes/sec.
These high read speeds are basically because the file is stored in the memory disk buffers.
If I split this test program into 3 separate programs or increase the file size to more than 64 GBytes, the performance declines:
For writes, the speed reduces as the file size increases, from 3.0 GBytes/sec for the first record to 0.8 GBytes/sec for most others. This is due to overflowing the available memory buffers and also the SSD internal buffers.
For sequential reads, the speed starts at 0.28 GBytes/sec for the first record, then rises to 0.9 GBytes/sec. This is due to limited available memory buffers.
For stream reads, the speed starts at 2.15 GBytes/sec for the first record, then declines gradually to 1.5 GBytes/sec.
These read speeds are much less than the case where the reads followed the writes and the file size was much less than physical memory, again due to limited available memory buffers.
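For anyone wanting to reproduce these numbers, a transfer rate can be measured with a simple elapsed-time calculation around each transfer, along these lines (a sketch; the names are illustrative and `vector` is assumed to be an 8-byte real array):

```fortran
integer(kind=8) :: tick, t0, t1, nbytes
integer :: iostat
real :: rate

call system_clock (count_rate=tick)
call system_clock (t0)
write (lu, iostat=iostat) vector        ! the transfer being timed
call system_clock (t1)

nbytes = 8_8 * size (vector, kind=8)    ! 8-byte reals assumed
rate   = real (nbytes) * real (tick) / real (t1-t0) / 1.0e9
write (*,*) 'transfer rate =', rate, ' GBytes/sec'
```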
On the plus side: if the file is buffered in memory by the OS, a stream re-read can achieve very high rates in moving the data from OS memory to program memory (over 8 GBytes/sec in some cases).
On the negative side: if the SSD buffers are full or the OS memory buffers are not pre-loaded, transfer rates are far lower than the quoted disk specification.
Buffering is great when it works!
In these tests I tested records from 1 GByte to 25 GBytes, so large records do now work.
However, in these tests the I/O lists I am using are very basic:
'write (lu, iostat=iostat ) vector' or 'read (lu,iostat=iostat) vector'
so more complex I/O lists may be different.
Even 'read (lu,iostat=iostat) vector(1:nn)' crashes with a Vstack error, while 'read (lu,iostat=iostat) (vector(k),k=1,nn)' is much slower for a well-buffered case.
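One possible workaround for the array-section case is to transfer into a whole array of the required extent, so the fast whole-array path applies; this is my assumption of a way around the Vstack error, not something I have timed:

```fortran
real(kind=8), allocatable :: buffer(:)

allocate ( buffer(nn) )
read (lu, iostat=iostat) buffer         ! whole-array transfer, avoiding the
                                        ! array-section and implied-do paths
vector(1:nn) = buffer
deallocate ( buffer )
```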
We can now save arrays much larger than 4 GBytes.
I will post the program that successfully tests up to 25 GByte records and 66 GByte files.
If using stream read, it is also relatively easy to read FTN95, Gfortran or Ifort unformatted sequential records, provided the I/O list is not too complex. Files created with stream I/O should be much more portable.
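As an example of reading another compiler's records, a Gfortran- or Ifort-style unformatted sequential record (4-byte leading and trailing length markers, as I understand their default format) can be recovered with a stream read along these lines (a sketch; the splitting of very long records into sub-records is not handled, and an 8-byte real payload is assumed):

```fortran
integer(kind=4) :: len_head, len_tail
integer :: lu, iostat
real(kind=8), allocatable :: data(:)

open (newunit=lu, file='other.bin', access='stream', form='unformatted')
read (lu, iostat=iostat) len_head       ! leading record-length marker (bytes)
allocate ( data(len_head/8) )           ! assumed 8-byte real payload
read (lu, iostat=iostat) data
read (lu, iostat=iostat) len_tail       ! trailing marker should match
if ( len_tail /= len_head ) stop 'record framing error'
```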
The Vstack error at 8 GBytes was a surprise, so there could be more surprises at greater sizes?