Silverfrost Forums

Slow performance with DIRECT ACCESS unformatted files

1 Jul 2016 1:25 #17723

Thanks, John, for the re-run and modification of the sharing.

The SHARE is important to my code, as multiple users can be accessing the same file; they are just not allowed to access it at the same time. So the SHARE being used is a logical consequence of this. Referring back to my previous message on this thread, the older system(s) did not have this issue with a performance penalty.

So, the question is whether the SHARE option is the culprit, or the OS, or a combination of both.
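
One way to separate the two would be to time the same writes with and without the SHARE specifier on the same machine, and then run the identical binary on the old and new systems. A minimal sketch (assuming FTN95's SHARE= extension; the file name, unit number, record length in bytes and record count are all arbitrary):

    program share_bench
      implicit none
      integer, parameter :: nrec = 10000
      real :: buf(128)                        ! 128 reals = 512 bytes per record
      integer :: i, c0, c1, rate

      buf = 1.0

      ! Pass 1: plain OPEN, no SHARE specifier.
      call system_clock(c0, rate)
      open (unit=10, file='bench.dat', access='direct', form='unformatted', &
            recl=512, status='replace')
      do i = 1, nrec
        write (10, rec=i) buf
      end do
      close (10, status='delete')
      call system_clock(c1)
      print *, 'no SHARE       :', real(c1 - c0) / real(rate), ' s'

      ! Pass 2: SHARE='DENYWR' (others may read but not write).
      call system_clock(c0)
      open (unit=10, file='bench.dat', access='direct', form='unformatted', &
            recl=512, status='replace', share='DENYWR')
      do i = 1, nrec
        write (10, rec=i) buf
      end do
      close (10, status='delete')
      call system_clock(c1)
      print *, "SHARE='DENYWR' :", real(c1 - c0) / real(rate), ' s'
    end program share_bench

If the gap between the two passes grows with the Windows version, the penalty is the interaction of SHARE with the OS rather than SHARE alone.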

Thanks for running this benchmark comparison and continuing the thread toward, hopefully, a resolution.

1 Jul 2016 8:57 #17724

Perhaps someone can say whether the SHARE option is implemented via _fsopen(), while an OPEN without SHARE perhaps uses fopen()?

2 Jul 2016 7:33 #17725

Bill,

I am interested to know why you need multiple write access to a direct access file; I suspect any multi-user database system would require it. It is not a typical use of direct access files, and it does come with a performance penalty.

As to why sequential access is faster: isn't this because you can't have multiple write access to a sequential file? Multiple write simply isn't available for that file type; at most a single user could append to the end of the file.

I think the reason why multiple write is slower on more recent versions of windows is because the file buffers are larger, so continually flushing the buffers after every write involves more disk I/O. Allowing multiple write would be equivalent to closing the file after every write operation.
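
If that is right, the penalty should be roughly reproducible without SHARE at all by closing and reopening the file around every write. A rough sketch (names and counts arbitrary; this only approximates the cost of per-write flushing, it is not what the runtime actually does):

    program flush_cost
      implicit none
      integer, parameter :: nrec = 2000
      real :: buf(128)
      integer :: i, c0, c1, rate

      buf = 1.0

      ! Baseline: keep the unit open and let the runtime buffer normally.
      call system_clock(c0, rate)
      open (unit=10, file='t1.dat', access='direct', form='unformatted', &
            recl=512, status='replace')
      do i = 1, nrec
        write (10, rec=i) buf
      end do
      close (10, status='delete')
      call system_clock(c1)
      print *, 'buffered writes       :', real(c1 - c0) / real(rate), ' s'

      ! Worst case: force data to disk by closing after every write.
      call system_clock(c0)
      do i = 1, nrec
        open (unit=10, file='t2.dat', access='direct', form='unformatted', recl=512)
        write (10, rec=i) buf
        close (10)
      end do
      call system_clock(c1)
      print *, 'close after each write:', real(c1 - c0) / real(rate), ' s'

      ! Tidy up the second test file.
      open (unit=10, file='t2.dat')
      close (10, status='delete')
    end program flush_cost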

John

2 Jul 2016 1:37 #17726

John,

Actually, I never allow multiple WRITE access, but I do allow multiple READ access (SHARE='DENYWR'). When I need exclusive access, I deny both read and write (SHARE='DENYRW'). I had to play around with these a lot during the transition to FTN95.
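
In OPEN-statement terms, that amounts to the following two modes (a sketch using FTN95's SHARE= extension; the unit number, file name and record length are illustrative):

    program share_modes
      implicit none

      ! Normal operation: other processes may read, but nobody else may write.
      open (unit=20, file='shared.dat', access='direct', form='unformatted', &
            recl=512, share='DENYWR')
      ! ... ordinary reads and writes; concurrent readers are allowed ...
      close (20)

      ! Exclusive operation: deny both read and write to everyone else.
      open (unit=20, file='shared.dat', access='direct', form='unformatted', &
            recl=512, share='DENYRW')
      ! ... updates that no other process may observe mid-flight ...
      close (20)
    end program share_modes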

The first attempt to limit other users from accessing the file was SHARE='COMPAT'. What in the old system was not more than the blink of an eye became a significant pause in the operation of the code, thus prompting the investigation 18 months ago.

I find it interesting that SHARE='DENYNONE' has the performance penalty. It would logically seem to be the same as SHARE=' '. Perhaps not...

In any event [and for me specifically], taking 99+% of the temporary/working files out and doing the job in memory means I don't have to worry about this any more, but that's just me, and it isn't a significant limitation to the program anyway.

Whether this is a 'problem' or 'just how it is', it is important for current and future users to be aware of it.

5 Jul 2016 3:41 #17730

Bill,

Based on my testing example above, my interpretation of COMPAT is that it should be the same as DENYRW, which in turn is the same as SHARE=' '. The only cases that allow multiple write are DENYNONE and DENYRD.

Either COMPAT is wrong, as it appears to be allowing multiple write, or my interpretation is wrong. If I understand this correctly, COMPAT needs to be fixed.
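
One way to settle it empirically would be to run two copies of a small probe side by side and see which mode combinations the second OPEN rejects. A hedged sketch (assumes FTN95 accepts a character expression for SHARE=; file name and unit are arbitrary):

    program share_probe
      implicit none
      character(len=8) :: mode
      character(len=1) :: dummy
      integer :: ios
      real :: buf(128)

      buf = 0.0
      print *, 'Share mode (COMPAT, DENYRW, DENYWR, DENYRD, DENYNONE)?'
      read (*, '(A)') mode

      open (unit=30, file='probe.dat', access='direct', form='unformatted', &
            recl=512, share=trim(mode), iostat=ios)
      if (ios /= 0) then
        print *, 'OPEN rejected, iostat =', ios
      else
        write (30, rec=1, iostat=ios) buf
        print *, 'OPEN accepted; WRITE iostat =', ios
        print *, 'Press Enter to close (start the second instance now)...'
        read (*, '(A)') dummy
        close (30)
      end if
    end program share_probe

If two instances can both obtain write access under COMPAT but not under DENYRW, that would confirm the two modes are not being treated the same.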

Do you, or anyone else, know the correct interpretation of this problem?

John

6 Jul 2016 11:56 #17747

When creating a new direct access file, has anyone tried writing the last record first?

6 Jul 2016 2:21 #17748

Nice idea, but even if that were to work, the read access times would still be an issue.

Notice that all the timings that John posted are the read access times, not the times of creation of the file.

Assuming that the underlying code uses the function _fsopen(), it is presumably the flushing of the buffers to update the physical file after each I/O operation that causes the performance 'penalty'. I copied this from a website while researching _fsopen():

When a stream is opened in update mode, both reading and writing may be performed. However, writing may not be followed by reading without an intervening call to the fflush() function or to a file positioning function (fseek(), fsetpos(), rewind()). Similarly, reading may not be followed by writing without an intervening call to a file positioning function, unless the read resulted in end-of-file.

If we are to assume that this is a requirement to guarantee that the proper data would be accessible by all parties, then the buffering normally performed by the OS must be bypassed by flushing the buffers after every I/O operation. The FTN95 library code must assume that buffers must be flushed regardless of the last I/O operation.
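
In direct access update mode that interleaving is the normal pattern, so under the rule quoted above a flush would be needed on every pass. A small illustration (assumes 'work.dat' already exists with at least 100 records of this length):

    program interleave
      implicit none
      real :: rec(128)
      integer :: i

      open (unit=40, file='work.dat', access='direct', form='unformatted', &
            recl=512, share='DENYWR')
      do i = 1, 100
        read (40, rec=i) rec     ! read a record ...
        rec = rec * 2.0          ! ... update it in memory ...
        write (40, rec=i) rec    ! ... write it back: a flush must occur
                                 ! before the next read can be trusted
      end do
      close (40)
    end program interleave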

There is a hint about the COMPAT option in the MSDN documentation. It is specified as a compatibility mode so that 16-bit programs can access the file. That likely means 16-bit programs have no way to specify SHARE access rights, and therefore the buffers have to be flushed in an identical manner to, say, DENYNONE.

It is all very interesting.

11 Jul 2016 3:30 #17776

John, I'm glad you asked that, and I do have an answer, I really do.

In one of the examples, there was a DO loop writing records 1 to n. Not knowing how the system allocates file space, I suggested writing one record at the end, because for a new file the system is then forced to allocate n x record_length bytes as a single operation rather than repeatedly extending the file. I just thought it might be faster.

I made the suggestion because, in days gone by, a chap I worked with had the great misfortune to use an IBM mainframe, and when I suggested a solution to something, he said 'Oh yes, I will create the file'. He then ran a batch job which allocated a file of a specified length and width before he could edit it. I just thought there might be some merit in allocating the space before use.

See I said I had a reason! Ian

11 Jul 2016 7:39 #17779

Writing something for the nth record at the start of the program run also 'allocates' space for all records of the file, and could be used as a mechanism for ensuring there is sufficient free space before a lengthy calculation is run (you can trap the error early if there isn't enough space).
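
A minimal sketch of that pattern (file name and sizes arbitrary; RECL assumed to be in bytes):

    program prealloc
      implicit none
      integer, parameter :: nrec = 100000, lrec = 512
      real :: buf(lrec / 4)
      integer :: ios

      buf = 0.0
      open (unit=50, file='big.dat', access='direct', form='unformatted', &
            recl=lrec, status='new', iostat=ios)
      if (ios /= 0) stop 'could not create file'

      ! Writing the last record first extends the file to nrec*lrec bytes
      ! in one operation, and fails immediately if the disk is too full.
      write (50, rec=nrec, iostat=ios) buf
      if (ios /= 0) stop 'pre-allocation failed - not enough disk space?'

      close (50)
    end program prealloc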

This is not usually a problem these days where disc space is relatively plentiful.

22 Nov 2016 11:37 #18445

Has anyone tried these tests on Windows 10?

I'm getting horrendous speeds on direct access unformatted writes and my disc access (according to Task Mangler) goes to 95-100%...

K

22 Nov 2016 1:08 #18447

Yes, this still holds under Windows 10.

22 Nov 2016 9:18 #18451

Quoted from KennyT: "Has anyone tried these tests on Windows 10?"

No, I have avoided the use of SHARE= with my disk access approach.

John
