forums.silverfrost.com Welcome to the Silverfrost forums
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8261 Location: Salford, UK
Posted: Tue Apr 10, 2012 7:32 am Post subject:
There are currently no plans to create a 64-bit version of FTN95, nor are there plans to significantly extend FTN95 further into Fortran 200x.
However, we are investigating the possibility of making ClearWin+ accessible to third-party 64-bit compilers. The idea is that, for developers who need 64-bit addressable memory, development can be carried out on a small-scale model using 32-bit FTN95 with CHECKMATE and SDBG etc., and full-scale models can then be run using a third-party 64-bit compiler together with a 64-bit ClearWin+ DLL.
This approach, if it works, will not meet all of the preferences that have been expressed above, but it will satisfy most of them. I realise that this is not a perfect solution to the problem, but hopefully it will provide a way forward.
brucebowler Guest
Posted: Wed Apr 11, 2012 1:00 pm Post subject: Re:
PaulLaidler wrote:
There are currently no plans to create a 64-bit version of FTN95, nor are there plans to significantly extend FTN95 further into Fortran 200x.
Is it just me, or does that sound like FTN95 has joined the ranks of "dead end" products? I certainly hope I'm not reading that correctly...
dpannhorst
Joined: 29 Aug 2005 Posts: 165 Location: Berlin, Germany
Posted: Wed Apr 11, 2012 3:42 pm Post subject:
It seems that Silverfrost has decided to give up, and that users should migrate to Intel with their code! What a pity!
Detlef
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8261 Location: Salford, UK
Posted: Wed Apr 11, 2012 6:51 pm Post subject:
Please do not use this Forum for negative comments about Silverfrost or its products. The Forum is provided and maintained by Silverfrost for the common good, and information is provided in good faith.
We are committed to continuing to support FTN95 together with CHECKMATE, SDBG, ClearWin+ etc. This will continue to provide a useful development tool for many users. In the short term, the aim is to make ClearWin+ available to third-party 64-bit compilers, so that users who need a 64-bit environment can continue to take advantage of Silverfrost development and GUI tools.
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2419 Location: Yateley, Hants, UK
Posted: Thu Apr 12, 2012 10:36 pm Post subject:
I'm afraid I am with Paul on this one.
Clearwin started out as a "costs extra" add-on. In another thread, we debated whether the Clearwin code a programmer writes is legitimate Fortran, and decided that it probably was, apart from the @ character. Hence, for someone else's Fortran with a Clearwin add-on, one might write:
Code:
I = WINIOQQ ('%ca[Shame on you]')
instead of the familiar WINIO@. In another thread we learned that Clearwin was simply a "wrapper for the Windows API", although it did "do some internal housekeeping". So it looks like a sensible approach to make a compiler-independent version of it, at least as an extra product that could come to market very quickly and generate revenue.
I attended a one-day workshop at Salford when Clearwin was launched, and FTN77 with DBOS struggled to make itself easy to use in Windows (95 I believe). Clearwin came into its own when DBOS was abandoned.
I won't be deserting FTN95, even if Clearwin is available for other compilers. There are reasons for this.
FTN95 is a mature and stable product, with a long update cycle and few changes. Compare that to the error fix list for other compilers in their user support pages. Not only that, but a good proportion of the fixes are to do with Plato or enhancements to Clearwin which the other compilers do not have.
Clearwin and FTN95 come from the same stable, and interface with each other smoothly.
Compilation is quick, and the diagnostics are excellent. I don't use the debugger, but I understand that it too is excellent.
Unlike some other users, I just don't need those great big data spaces.
Windows (and this is after three versions that allegedly were 64-bit) isn't really a 64-bit operating system. Also, it seems to me that if you really do need arrays >2GB in size, you don't need many of them. Early in this thread I suggested that one only needed the facility to make a few 64-bit addressable arrays, perhaps as few as four of them, and possibly only one! From my perspective of ignorance, surely that can't be too difficult.
I also think Paul is right in not plunging full speed into dealing with the latest published standard. After all, 1990 is 22 years ago, and in the bug fix list for FTN95, just look at how few (of an already small list) of those fixes relate to Fortran 77 features. Most of the bug fixes are to do with Fortran 90/95. It is inconceivable that large applications are out there waiting to be compiled with all the features of Fortran 200x. To incorporate all the new features of Fortran 200x, especially in one go, would destroy a valuable stability. My guess is that these features will arrive eventually, but will be introduced gradually.
Microsoft make the job of producing compilers hard enough with the changes they make to Windows and to Visual Studio. A big chunk of the bug fixes to FTN95 is repairing things that Microsoft broke. As developers, forum users should have sympathy; even keeping abreast of stylistic changes in Windows is a major job. And for those with no sympathy for Paul, I suggest that you go and read Kipling's "If" (readily available on the web).
Eddie
DanRRight
Joined: 10 Mar 2008 Posts: 2939 Location: South Pole, Antarctica
Posted: Sat Apr 14, 2012 1:49 am Post subject:
I like the idea of making Clearwin compiler-independent. The more people use it, the more new programming features there will be and the more user examples; in simple words, the better for all of us. It's probably even time to think about tablet/cellphone GUI interfaces with Clearwin, which I also miss so much right now.
Still, I'd like to understand why a switch to 64-bit for FTN95 is considered an unfeasible move. Basically, no new features like Fortran 200x are needed; a 64-bit FTN95 compiler would do the same things, and it's all about system memory allocation, which would be different. I don't know all the hidden pitfalls here, but, say, if the compiler is written in C, isn't it possible just to recompile it with minimal changes using 64-bit MS C or GCC (they already have their 64-bit Fortran)?
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8261 Location: Salford, UK
Posted: Sat Apr 14, 2012 7:41 am Post subject:
Internally, FTN95 contains a front end (which does the parsing etc. and creates a database) and two back ends (one that converts the database to 32-bit assembler, and another that converts the database to .NET CLR). Creating a 64-bit compiler would require an additional back end to create 64-bit assembler from the database (or a major rewrite of the front end so that it outputs C code instead of the existing database). That is not to mention the changes that would be required to SDBG and salflibc.dll, even if we adopted a third-party C compiler and linker.
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2419 Location: Yateley, Hants, UK
Posted: Sat Apr 14, 2012 11:41 am Post subject:
Paul,
More from me, I'm afraid.
Isn't it the case that a Fortran programmer who wants large arrays doesn't want many of them? It is inconceivable that anyone wants (say) 100 arrays at 20GB each...
Isn't it also the case that the only part of 64-bit Windows that is truly 64-bit is the heap, and that we use the heap for allocatable arrays? Hence, a general-purpose 64-bit capability would be useless in any case, as the static area is the same size in 32- and 64-bit Windows.
Surely, therefore, gaining a very-large-array capability isn't a matter of making the whole back end 64-bit, but of adding the capability to create and use a small number of INTEGER*8-addressable arrays that have to be ALLOCATE-able. My enquiries lead me to suspect that most users clamouring for 64-bit capability really only want one such array!
Microsoft Fortran used to have a limit on array sizes that was overcome initially by declaring arrays as [HUGE]. They weren't all that huge; my recollection is that it permitted them to be more than about 32k.
My suggestion, therefore, is that you have a think about permitting a programmer to declare a small number of arrays as "Silverfrost Huge", which would require INTEGER*8 addressing.
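For concreteness, a declaration along those lines might look like the sketch below. The HUGE attribute is purely hypothetical, a rendering of the suggestion above rather than anything FTN95 currently accepts, so treat this as illustrative pseudocode:

```fortran
! Hypothetical sketch only: the HUGE attribute does not exist in FTN95.
! A handful of arrays get 64-bit (INTEGER*8) addressing; everything
! else in the program stays 32-bit.
PROGRAM BIG_DEMO
  REAL*8, ALLOCATABLE, HUGE :: A(:)   ! hypothetical "Silverfrost Huge"
  INTEGER*8 :: N, I                   ! 64-bit extent and subscript
  N = 500000000                       ! 5e8 REAL*8 elements, ~4GB
  ALLOCATE (A(N))                     ! carved from the 64-bit heap
  DO I = 1, N
     A(I) = 0.0D0
  END DO
  DEALLOCATE (A)
END PROGRAM BIG_DEMO
```

Only the address arithmetic for A would need INTEGER*8 offsets, which matches the point that the whole back end need not become 64-bit.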
As for me, my overarching requirement is simply that 32-bit code runs under 64-bit Windows.
Eddie
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8261 Location: Salford, UK
Posted: Sun Apr 15, 2012 7:24 am Post subject:
Thanks for the suggestion, Eddie. At the moment I do not know how to do that. How do you import/export from/to a 64-bit address using 32-bit assembler? If I did it via a pipe to a 64-bit DLL, that would be messy and very slow.
dpannhorst
Joined: 29 Aug 2005 Posts: 165 Location: Berlin, Germany
Posted: Sun Apr 15, 2012 9:03 am Post subject:
I fully agree with Dan!
A 64-bit version of FTN95 becomes necessary even before new features like F200x are added. Customers ask for software products that run as 64-bit applications, even though nothing inside the software really needs 64 bits. They just want to see their software installed under C:\Program Files rather than C:\Program Files (x86). So it may simply be an argument for selling the product as "modern" software.
You have to go with the development, not against it. It is no good argument to say that even old DOS applications still run in a DOS box under 64-bit Windows 7.
We are not developing our software for scientific purposes; we develop it for our customers!
Regards,
Detlef
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2419 Location: Yateley, Hants, UK
Posted: Sun Apr 15, 2012 1:50 pm Post subject:
Hi Paul,
You put me on a spot where my ignorance shows!
I'll think about it and respond when I get back from a week's travels.
Parsing the code would be helped if you had an explicit ALLOCATE[HUGE] (or some such), rather than trying to work out from context whether or not 64-bit addressing was required.
I also imagine that programmers don't want to ALLOCATE and DEALLOCATE >2GB arrays terribly often, so it doesn't matter if this operation is comparatively slow. Isn't it just a matter of carving out a block of the heap and telling Windows "Keep off, this is mine"? Maybe that could be done in your 64-bit DLL.
My mental picture of how the 32-bit program would talk to its 64-bit memory pool is rather like disk caching or virtual memory operation (except that the "disk" is the 64-bit memory area). If the user runs through his huge array sequentially when operating on it, and the cache is big enough, the cache misses should be comparatively few in number; and then again, cache reloading could be run through your 64-bit DLL.
Presumably a cache could be around a gigabyte or so, using up a big proportion of the 32-bit heap.
In my hazy memory is a recollection that virtual memory operations were part of FTN77 with DBOS, so maybe the niceties of this have already been worked out.
Even under 32-bit Windows, there were "virtual disk" or "ramdisk" programs that could use RAM beyond 4GB as if it were a hard disk. I remember suggesting to several people that this was a way forward: their programs written to use disk-based solutions would speed up significantly if they used ramdisks instead of physical disks. But this is a solution that software developers who write software for others cannot use, as their software has to run on whatever machine the final user has at his disposal.
Regards
Eddie
JohnCampbell
Joined: 16 Feb 2006 Posts: 2621 Location: Sydney
Posted: Sun Apr 15, 2012 3:14 pm Post subject:
If I may comment, based on my limited experience of 64-bit programming:
1) Eddie is right, in that you typically use only 1 or 2 large arrays.
2) These arrays only work well if they are smaller than the physical memory installed, as exceeding the physical memory implies virtual memory paging, which can be done much more efficiently in the program algorithm than by the O/S's dumb paging approach. (If you invoke paging, then the 32-bit "out-of-core" algorithm is probably more efficient.)
3) My 64-bit algorithm has seen a significant restructure of my "FTN77" memory management approach towards a more extensive use of ALLOCATE. The 64-bit code assumes the problem can be solved in memory, with no out-of-core capability. In memory it is much more efficient, but it can only efficiently solve problems smaller than the physical memory available.
4) Removing all the overheads of providing an out-of-core approach has allowed more flexibility in managing the solution, and has allowed easier and quicker development of alternative solutions.
5) The 64-bit limitation of requiring ALLOCATE for addresses >2GB has had some benefits in solution definition, as large fixed-size static arrays are no longer usable.
6) The introduction of SSDs for I/O has changed the balance. With (very) significant increases in disk transfer rates, 32-bit solutions are now less disadvantaged.
The problem with operating in a competitive environment is that other suppliers have the flexibility of 64-bit solutions, and we must meet their capability or become an even smaller niche operator.
John
DanRRight
Joined: 10 Mar 2008 Posts: 2939 Location: South Pole, Antarctica
Posted: Mon Apr 16, 2012 10:32 pm Post subject:
Eddie,
With respect to your comment: are you saying that the only thing that needs to change for FTN95 to be 64-bit, like some other existing compilers, is a 64-bit ALLOCATE/DEALLOCATE? That in turn simply means using INTEGER*8 variables for the array indexes (because a signed INTEGER*4 can only address about 2GB), plus, of course, some under-the-hood low-level trickery that only C/C++/assembler programmers know.
Isn't this actually how 64-bit Intel IVF was made: no changes in the 32-bit portion, with the only part that is actually 64-bit being ALLOCATE?
Well, after playing with the 4GB trick in FTN95, I will say that if this is the case, and the only thing changed to 64 bits is ALLOCATE, it will suit me for a while, even though I have not one or two but tons of arrays waiting to be >2GB. I will move all of them into 64-bit ALLOCATABLE space and leave the code itself in the 32-bit one. The trick will work because most probably neither I nor anyone else will break the 2GB 32-bit limit for the code itself any time soon; that would probably need 100M lines of Fortran source. LOL
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2419 Location: Yateley, Hants, UK
Posted: Tue Apr 17, 2012 5:16 pm Post subject:
I'm always pleased if anyone agrees with me, or even if not, then if anyone is provoked to discuss!
PC RAM is cached into a small quantity of faster RAM, now on the CPU itself. Disks are cached into RAM. The mechanism I propose, caching a HUGE array held in 64-bit addressable RAM into a smaller area of RAM that is 32-bit addressable, has to be a lot faster than even SSDs; but I do accept that virtualising RAM onto a hard drive is going to be slow. I agree, John, that for particular problems one can sometimes program this faster oneself than by using a general-purpose code.
I think that the hardware to enable the use of lots of really big arrays is some way off, and we'll need faster processing to deal with the volumes of data.
Eddie
JohnCampbell
Joined: 16 Feb 2006 Posts: 2621 Location: Sydney
Posted: Wed Apr 18, 2012 7:04 am Post subject:
Eddie,
You said:
Quote:
I think that the hardware to enable the use of lots of really big arrays is some way off, and we'll need faster processing to deal with the volumes of data.
That time is already here. Others where I work are using a workstation with 96GB of memory for large 3D geotechnical modelling with Abaqus, to speed up their non-linear solutions.
There are many other commercial analysis packages that can use lots of memory.
My packages are nowhere near that size, and the challenge is setting up those large models. There is not enough time to develop the range of techniques demanded by problems of this size.
John