forums.silverfrost.com Welcome to the Silverfrost forums
IanLambley
Joined: 17 Dec 2006 Posts: 490 Location: Sunderland
Posted: Wed Jul 06, 2016 12:56 pm
When creating a new direct access file, has anyone tried writing the last record first?
wahorger
Joined: 13 Oct 2014 Posts: 1217 Location: Morrison, CO, USA
Posted: Wed Jul 06, 2016 3:21 pm
Nice idea, but even if that were to work, the read access times would still be an issue.
Note that all the timings John posted are read access times, not file creation times.
Assuming the underlying code uses the function _fsopen(), the likely cause of the performance "penalty" is the flushing of the buffers to update the physical file after each I/O operation. I copied this from a website while researching _fsopen():
Quote: | When a stream is opened in update mode, both reading and writing may be performed. However, writing may not be followed by reading without an intervening call to the fflush() function or to a file positioning function (fseek(), fsetpos(), rewind()). Similarly, reading may not be followed by writing without an intervening call to a file positioning function, unless the read resulted in end-of-file. |
If we assume this requirement exists to guarantee that the proper data is accessible to all parties, then the buffering normally performed by the OS must be bypassed by flushing the buffers after every I/O operation. The FTN95 library code must therefore assume that the buffers need to be flushed regardless of what the last I/O operation was.
There is a hint about this in the MSDN documentation for the COMPAT option, which is specified for compatibility so that 16-bit programs can access the file. This likely means that 16-bit programs would not have the SHARE ability to specify access rights, and therefore the buffers have to be flushed in the same manner as, say, DENYNONE.
It is all very interesting.
John-Silver
Joined: 30 Jul 2013 Posts: 1520 Location: Aerospace Valley
Posted: Mon Jul 11, 2016 3:57 pm
Ian wrote:-
Quote: | When creating a new direct access file, has anyone tried writing the last record first? |
Why on earth would you want to write the last record first?
I thought the whole point of DIRECT access files was that you go straight to the record you're after, rather than reading through a sequential file, which obviously takes more time.
Hence the concept of first- and last-written records has no obvious meaning (to me, anyway) in that sense.
IanLambley
Joined: 17 Dec 2006 Posts: 490 Location: Sunderland
Posted: Mon Jul 11, 2016 4:30 pm
John,
I'm glad you asked that, and I do have an answer, I really do.
In one of the examples there was a DO loop writing records 1 to n. Not knowing how the system allocates file space, I suggested writing one record at the end first, because for a new file the system is then forced to allocate n x record_length bytes in a single operation, rather than repeatedly extending the file. I just thought it might be faster.
I made the suggestion because, in days gone by, a chap I worked with had the great misfortune to use an IBM mainframe, and when I suggested a solution to something he said, "Oh yes, I will create the file". He then ran a batch job that allocated a file of a specified length and width before he could edit it. I just thought there might be some merit in allocating the space before use.
See, I said I had a reason!
Ian
davidb
Joined: 17 Jul 2009 Posts: 560 Location: UK
Posted: Mon Jul 11, 2016 8:39 pm
Writing something to the nth record at the start of the program run also "allocates" space for all records of the file, and can be used as a mechanism for ensuring there is sufficient free space before a lengthy calculation is run (you can trap the error early if there isn't enough space).
This is not usually a problem these days, when disc space is relatively plentiful.
_________________
Programmer in: Fortran 77/95/2003/2008, C, C++ (& OpenMP), java, Python, Perl
KennyT
Joined: 02 Aug 2005 Posts: 317
Posted: Tue Nov 22, 2016 12:37 pm
Has anyone tried these tests on Windows 10?
I'm getting horrendous speeds on direct access unformatted writes, and my disc access (according to Task Mangler) goes to 95-100%...
K
wahorger
Joined: 13 Oct 2014 Posts: 1217 Location: Morrison, CO, USA
Posted: Tue Nov 22, 2016 2:08 pm
Yes, this still holds under Windows 10.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2554 Location: Sydney
Posted: Tue Nov 22, 2016 10:18 pm
KennyT wrote: | Has anyone tried these tests on Windows 10? |
No, I have avoided the use of SHARE= in my disk access approach.
John