Silverfrost Forums


Multi-core compilation?

15 Sep 2009 4:18 #4948

Just wondering - has anyone ever had any thoughts about exploiting multi-core processors for FTN95 compilation? Visual C++ allows multiple C++ modules to be compiled concurrently on different cores, and it occurs to me that it should be possible to use the dependency files generated at the start of compilation (in VS - those mls files) to do something similar (or better) for FTN95.

Of course, the problem with this sort of thing is that it seems straightforward until the details get in the way. Assuming I can get my modules compiled in the correct order, though, is there anything else that's going to get in the way?

Alan

15 Sep 2009 10:16 #4952

😃 LOL. FTN95 (and FTN77 before it) is the only compiler in the world for which compilation speed is simply not an issue. Not in the slightest.

Are you trying to improve a 1, 2, at most 10 second compilation?

16 Sep 2009 11:07 #4956

Multi-core compilation is all about dealing with project size, not single-file compilation speed. Even if it takes 2 seconds to compile a file, compiling 4 at the same time will be quicker, albeit not 4 times quicker in reality.

This is a bit of an issue for me: 151 Fortran source files = 5-6 **minutes** total compilation time. Do that enough times during the day and it becomes tedious enough to want to improve it, and a quad-core processor should be able to help.

Admittedly not all of the files need compiling every time, but add in a substantial number of module dependencies and even small changes can result in a wait of a minute or so, and if you're trying to track down a fiddly bug it all adds up.
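As a rough sanity check on the numbers: 151 files in about 5.5 minutes works out to roughly 2.2 seconds per file, so the ideal floor on four cores would be around 1.4 minutes for a full build. Dependency chains would push the real figure somewhere in between, but it still looks well worth having.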

16 Sep 2009 11:21 #4957

If you create a project using VS or Plato then the IDE will sort out the dependencies and only recompile where necessary.

16 Sep 2009 11:42 #4958

Indeed they do, but I often still end up compiling enough files to take a minute or so all told. Actually, dependency checking can also take quite a while (even if no source files have changed), although it seems rather variable - sometimes it'll zip by, sometimes it'll take half a minute to figure out that nothing needs recompiling.

I have a lot of modules that are used in other files, so editing the modules tends to require all the dependent files to be recompiled. I also find that I have to do a full recompile reasonably regularly, as the VS dependency checker quite often gets it wrong: if I add or modify variables within a module, it can leave many files that use the module uncompiled, resulting in linker errors. The opposite can also happen, where I am simply editing the code within a module procedure, yet this still ends up requiring all the dependent files to be recompiled.

My project should really be split up - I used to have some separate static libraries, but found that the dependency checker didn't work well with those: a change to a module in a static library would not trigger recompilation within the main project, so I ended up having to do a manual full recompilation anyway.

So anyway, I was just wondering if there were any thoughts on exploiting multi-core processors to help - or, more to the point, whether anyone can come up with a show-stopper reason why it won't work.

16 Sep 2009 1:31 #4959

It would be a substantial task to program this into VS and Plato, i.e. separating one list of dependencies into two or more mutually exclusive lists.

I am not sure that there would be much interest in having this feature, nor am I sure of the benefit.

I guess you could do the separation manually into two separate projects and open two instances of VS. By default Plato only allows one instance of itself but this can be changed in the Options dialog.

16 Sep 2009 1:47 #4960

Paul, I appreciate the limited interest - I'm sure I could get better performance by rejigging my code, but it's not really practical. I think I'm just asking whether anyone can see any major 'ah, but it won't work because...'.

I was thinking of knocking something up to build a tree from the generated mls files (which are nicely readable) and then spawning multiple ftn95 processes in the correct order - as long as you only compile a file once all the modules in its use list have already been compiled, you should be okay, I think. It'd be interesting to see if it does indeed have any benefit; a rough sketch of what I have in mind follows.

Cheers, Alan
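Something along these lines, perhaps. This is only a minimal sketch: the dependency map is assumed to have been extracted from the .mls files beforehand (I haven't pinned down their exact format yet), and it assumes a plain `ftn95 <file>` invocation is enough to compile each source.

```python
# Dependency-ordered parallel build sketch (assumptions noted above).
# deps maps each source file to the set of source files whose modules
# it USEs; files with no dependencies map to an empty set.
import subprocess
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def parallel_build(deps, workers=4):
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(deps):
            # A file is ready once everything it USEs has been compiled.
            ready = [f for f, d in deps.items()
                     if f not in done and f not in running and d <= done]
            for f in ready:
                running[f] = pool.submit(subprocess.run, ["ftn95", f],
                                         check=True)
            if not running:
                raise RuntimeError("cyclic or unresolved dependency")
            # Wait for at least one compile to finish, then collect it.
            finished, _ = wait(running.values(),
                               return_when=FIRST_COMPLETED)
            for f in [f for f, fut in running.items() if fut in finished]:
                running.pop(f).result()  # re-raises if a compile failed
                done.add(f)

# Hypothetical example: b.f95 and c.f95 both USE the module in a.f95,
# so a.f95 compiles first, then b and c run in parallel.
parallel_build({"a.f95": set(), "b.f95": {"a.f95"}, "c.f95": {"a.f95"}})
```

Threads rather than processes should be fine here, since each worker just blocks on an external ftn95 process and the real work happens outside Python.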

16 Sep 2009 6:33 #4963

Quoted from acw

This is a bit of an issue for me: 151 Fortran source files = 5-6 **minutes** total compilation time. Do that enough times during the day and it becomes tedious enough to want to improve it, and a quad-core processor should be able to help.

What is the total size of the files in MB, and how many lines of Fortran do you have in total? How are your subroutines/functions organized -- one file per subroutine/function?

17 Sep 2009 2:36 #4967

Alan,

You must have a short memory or be new to computing: the speed improvements, especially over the last 10 years alone, change my perspective on this question.

Why do you change your data structures so often?

In the past, we relied on first spending time defining data structures that did not change very often. Then we quarantined code into libraries that were debugged, stable and not likely to change. The alternative of waiting half an hour for a full recompile soon taught you to find a new programming approach.

With the addition of modules, which allow data structures to be changed more easily, I would still worry about debugging code where the data structures are as extensively linked as you imply.

I'm amazed at how quick the compiler is now. I have long given up on the MAKE approach; I just recompile the lot, deleting all .mod files before I start.
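For what it's worth, that "recompile the lot" routine only takes a few lines to script. A rough sketch: the file names and compile order below are purely illustrative, and it assumes a plain `ftn95 <file>` command suffices.

```python
# Delete every .mod file, then compile all sources in a hand-maintained
# order that respects the USE dependencies. Names are hypothetical.
import glob, os, subprocess

for mod in glob.glob("*.mod"):
    os.remove(mod)

COMPILE_ORDER = ["globals.f95", "utils.f95", "main.f95"]  # illustrative
for src in COMPILE_ORDER:
    subprocess.run(["ftn95", src], check=True)  # stop at the first failure
```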

Good luck, John
