If I could comment, based on my limited experience of 64-bit programming:
- Eddie is right, in that you typically use only 1 or 2 large arrays.
- These arrays only work well if they are smaller than the physical memory installed; exceeding physical memory forces virtual-memory paging, which the program's own algorithm can manage far more efficiently than the O/S's dumb demand paging. (If you do invoke paging, then the 32-bit 'out-of-core' algorithm is probably more efficient.)
- My 64-bit code has involved a significant restructure of my 'FTN77' memory-management approach towards much more extensive use of ALLOCATE. The 64-bit code assumes the problem can be solved in memory, with no out-of-core capability. When the problem fits in memory it is much more efficient, but it can only efficiently solve problems smaller than the physical memory available.
- Removing all the overheads of providing an out-of-core approach has allowed more flexibility in managing the solution, and has made development of alternative solutions easier and quicker.
- The 64-bit requirement to use ALLOCATE for arrays addressed beyond 2 GB has had some benefits for the solution definition, as large fixed-size static arrays are no longer usable (a minimal sketch of this follows the list).
- The introduction of SSDs for I/O has changed the balance. With (very) significant increases in disk transfer rates, 32-bit solutions are now less disadvantaged.
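As a minimal sketch of what I mean by replacing fixed-size static arrays with ALLOCATE (the array name and problem sizes below are only illustrative, not from my actual code): the two points that matter are computing the element count with 8-byte integers, so it does not overflow a default integer once the array exceeds 2 GB, and checking STAT= so the program can report a sensible message rather than crash when the request exceeds the memory available.

      program big_alloc
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      integer (kind=8) :: neq, nband, nwords
      integer :: istat
      real (kind=dp), allocatable :: stiff(:)
!
!     illustrative problem sizes only
      neq    = 200000
      nband  = 2000
      nwords = neq * nband        ! 4.0e8 words = 3.2 GB > 2 GB
!
      allocate ( stiff(nwords), stat=istat )
      if ( istat /= 0 ) then
         write (*,*) 'ALLOCATE failed: exceeds available memory'
         stop
      end if
      write (*,*) 'allocated', nwords*8, ' bytes'
!     ... assemble and solve in memory ...
      deallocate ( stiff )
      end program big_alloc

The array size is then set at run time from the problem actually being solved, rather than dimensioned once for the largest problem you ever expect to see.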
The problem with operating in a competitive environment is that other suppliers have the flexibility of 64-bit solutions, and we must match their capability or become an even smaller niche operator.
John