forums.silverfrost.com Welcome to the Silverfrost forums
JohnCampbell
Joined: 16 Feb 2006 Posts: 2554 Location: Sydney
Posted: Thu Jan 24, 2019 3:09 am
Dan,
At the rates you reported, you are not measuring disk transfers but transfers between the operating system's disk buffers and program memory.
Using Resource Monitor to review memory and disk usage should demonstrate this, although the tests don't last long!!
At these apparent rates there are many potential bottlenecks (bandwidth limitations), including memory speeds and software conversion rates. My tests showed software limits of 300 to 500 MiB/sec, so I could never demonstrate the quoted SSD performance for my test problems.
Your test results are not a problem, but it is important to be able to reproduce that disk buffering in the real problem. Strategies to pre-charge the buffers could be effective; you just need to understand how (opening the file earlier might help?).
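You can see the cache effect by timing the same read twice: the first pass comes off the disk, the second out of the Windows file cache. A minimal sketch using standard Fortran stream I/O (the file name test.bin and the 64 MiB size are just assumptions for the test):

Code:
program read_rate
   implicit none
   integer, parameter :: i8 = selected_int_kind(15)
   integer, parameter :: n = 16*1024*1024       ! 16M default reals = 64 MiB
   real, allocatable :: buf(:)
   integer :: pass, ios
   integer(i8) :: c0, c1, rate
   allocate (buf(n))
   call system_clock(count_rate=rate)
   do pass = 1, 2                               ! pass 1 cold, pass 2 from cache
      call system_clock(c0)
      open (unit=10, file='test.bin', access='stream', &
            form='unformatted', status='old', iostat=ios)
      if (ios /= 0) stop 'cannot open test.bin'
      read (10) buf                             ! one whole-file binary read
      close (10)
      call system_clock(c1)
      write (*,'(a,i0,a,f9.1,a)') 'pass ', pass, ': ', &
            4.0d0*n*rate/(1.0d6*(c1 - c0)), ' MB/s'
   end do
end program read_rate

If pass 2 reports a much higher rate than pass 1, you are measuring the buffers, not the disk.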
However, where you are reading large data feeds, say 10x installed memory (a 500 GB file), disk transfer performance becomes the issue. I haven't had this problem for 10 years, back when I used a pre-processor to filter the raw data down to a smaller interface file for multiple analyses (marine survey data).
It all depends on the rate at which you are receiving the data vs the rate required to process it, and only a small subset of data processing projects hit this limit. Not many data feeds run at 4 GiB/sec, and if yours did you would have the budget to buy a better "disk" controller.
DanRRight
Joined: 10 Mar 2008 Posts: 2815 Location: South Pole, Antarctica
Posted: Thu Jan 24, 2019 7:01 am
John, large data files are becoming a pretty common thing. NVMe drives at 3 GB/s speeds are also common and need no RAM buffering, and they have become mainstream at under $0.25/GB. Soon many people will start asking why a regular READ is so hellishly slow.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2554 Location: Sydney
Posted: Thu Jan 24, 2019 7:37 am
DanRRight wrote:
Soon many people will start asking why a regular READ is so hellishly slow
Regular reads, especially text reads, can be limited by the processing rate of the I/O library routines. As I recall, my I/O library processes at about 0.5 GB/sec.
At 1 GB/sec on a 4 GHz processor, that is equivalent to 4 processor cycles per byte, which I would expect is a fast processing rate. To get more out of character conversion, you could be looking at a vector (AVX registers?) or multi-threaded approach (for very long strings).
Achieving 4 GB/sec would probably mean no conversion, i.e. byte (binary) transfer. These rates would probably exceed any data processing capacity, even basic calculation of the mean, standard deviation or high/low limits. Consider the processing rates if also finding the median!
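To put a number on that: even the cheapest running statistic costs a few floating point operations per value, which at multi-GB/sec input rates starts to compete with the transfer itself. A sketch of a one-pass mean and standard deviation (Welford's update) over a binary stream; the file name is a placeholder and, to keep it short, a partial last block is ignored:

Code:
program stream_stats
   implicit none
   integer, parameter :: blk = 262144           ! process 256K reals (1 MiB) at a time
   real :: buf(blk)
   double precision :: mean, m2, d
   integer :: ios, i, n                         ! default integer n caps this at ~2e9 values
   n = 0; mean = 0.0d0; m2 = 0.0d0
   open (unit=10, file='test.bin', access='stream', &
         form='unformatted', status='old')
   do
      read (10, iostat=ios) buf                 ! binary block read, no text conversion
      if (ios /= 0) exit                        ! end of file (partial block discarded)
      do i = 1, blk                             ! Welford update: ~4 flops per value
         n = n + 1
         d = buf(i) - mean
         mean = mean + d/n
         m2 = m2 + d*(buf(i) - mean)
      end do
   end do
   close (10)
   write (*,'(a,i0,2(a,es12.4))') 'n = ', n, '  mean = ', mean, &
         '  sd = ', sqrt(m2/max(n - 1, 1))
end program stream_stats

Even this loop can struggle to keep up with a 4 GB/sec feed, and anything order-based like the median needs the data sorted or held in memory, which is a different league again.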
Dgurok, what type of data processing are you performing?
With HDDs we were once limited to about 20-30 MB/sec. SSDs and memory disk buffers have moved the goal posts, exposing the processing rates as our data processing limitation.
The FTN95 binary transfer library is useful for this type of problem.
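As a rough illustration of the gap, here is a sketch that writes the same array as text and as raw binary, then times reading each back. It uses standard Fortran stream I/O rather than the FTN95-specific routines, and the file names are just placeholders; on most systems the formatted read loses by an order of magnitude or more.

Code:
program fmt_vs_bin
   implicit none
   integer, parameter :: i8 = selected_int_kind(15)
   integer, parameter :: n = 1000000
   real :: a(n)
   integer(i8) :: c0, c1, c2, rate
   call random_number(a)
   open (10, file='data.txt', form='formatted')
   write (10, '(es15.7)') a                     ! one value per line as text
   close (10)
   open (11, file='data.bin', access='stream', form='unformatted')
   write (11) a                                 ! same values as raw bytes
   close (11)
   call system_clock(count_rate=rate)
   call system_clock(c0)
   open (10, file='data.txt', form='formatted', status='old')
   read (10, *) a                               ! text read: conversion per value
   close (10)
   call system_clock(c1)
   open (11, file='data.bin', access='stream', form='unformatted', status='old')
   read (11) a                                  ! binary read: straight memory transfer
   close (11)
   call system_clock(c2)
   write (*,'(a,f8.1)') 'formatted read is slower by a factor of ', &
         dble(c1 - c0)/dble(max(c2 - c1, 1_i8))
end program fmt_vs_bin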