Mecej4,
That's an excellent answer, and is probably sufficient for the original question. The assumption in a lot of very old code is that all the variables were the same length. On many computers, REAL, INTEGER and, where available, LOGICAL types were all of two-word length, and when the word was 24 bits, that meant 48-bit precision, or what would now be written REAL*6. Some computers had 30- or 32-bit words, and one brand, with 60-bit words, used only one word for each variable. If one program unit declares a common block with REAL*8 variables and another declares it with INTEGER*4, the common blocks become different lengths.
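To make the mismatch concrete, here is a minimal sketch, hypothetical and deliberately nonstandard: the same named common block is declared as three 8-byte reals in one program unit and as one 8-byte real plus two 4-byte integers in another, so the block is 24 bytes in one place and 16 in the other.

      PROGRAM MAIN
C     The block /BLK/ is 3 * 8 = 24 bytes here.
      REAL*8 A, B, C
      COMMON /BLK/ A, B, C
      A = 1.0D0
      B = 2.0D0
      C = 3.0D0
      CALL SHOW
      END

      SUBROUTINE SHOW
C     The same block is 8 + 2*4 = 16 bytes here, so I and J
C     occupy the two halves of what MAIN calls B.
      REAL*8 A
      INTEGER*4 I, J
      COMMON /BLK/ A, I, J
      PRINT *, A, I, J
      END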
A major problem with those old computers was an acute shortage of memory, or what we now call RAM. If you have 32k words and must fit both executable code and data into them, something has to give. For code, the answer was overlaying, where the same space was reused by loading in different subprograms as and when required. One of the reasons that one could not rely on a locally defined variable retaining its value between subprogram calls is that there was a good chance it would be overwritten when the pattern of subprograms in memory changed.
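The modern echo of that is the SAVE attribute: the Fortran standard still gives no guarantee that a local variable keeps its value between calls unless you say so explicitly. A minimal sketch:

      PROGRAM DEMO
      CALL COUNTR
      CALL COUNTR
      END

      SUBROUTINE COUNTR
C     Without SAVE, N is not guaranteed to survive between
C     calls, for exactly the historical reason above.
      INTEGER N
      SAVE N
      DATA N /0/
      N = N + 1
      PRINT *, 'CALL NUMBER', N
      END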
The reuse of memory for data storage used a similar mechanism, so that a COMMON block used to store (say) input coordinates in the early stages of a program could later be used to store results. It is, after all, only a block of memory. You see this in some programs where the arrays declared in a particular subroutine are replaced with others in a subroutine called later in the code. The example by AKS probably reuses the memory within a particular subroutine.
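Something like this hypothetical pair of routines, where both declare the block at the same total size, 200 double-precision words, but under different names and shapes:

      SUBROUTINE RDCORD
C     Early in the run the block holds input coordinates.
      REAL*8 X(100), Y(100)
      COMMON /WORK/ X, Y
C     ... read the coordinates into X and Y ...
      END

      SUBROUTINE PUTRES
C     Later the coordinates are finished with, and the same 200
C     words of storage hold results under a different name.
      REAL*8 RES(200)
      COMMON /WORK/ RES
C     ... fill RES with results ...
      END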
Running off the end of a COMMON block normally meant running into the start of something else. If that something else was no longer useful, then there was no ill effect, in exactly the same way as reusing memory for more executable code or data, but if the something else was vital, the best you could hope for was a crash to save you from a silly answer that you might believe.
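A deliberately broken sketch of that failure, with hypothetical names: the loop writes eleven elements into a ten-element array, and the eleventh store lands on FLAG, the next item in the block.

      PROGRAM OVRRUN
      REAL*8 A(10), FLAG
      COMMON /BLK2/ A, FLAG
      INTEGER I
      FLAG = 999.0D0
C     The loop bound is wrong: A(11) overwrites FLAG, because
C     COMMON lays the two out one after the other.
      DO 10 I = 1, 11
         A(I) = 0.0D0
   10 CONTINUE
      PRINT *, 'FLAG IS NOW', FLAG
      END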
The idea of a fixed length bigger than you ever thought necessary for an array is still a good, simple idea in some circumstances. I used it in a program that students still use for field surveying exercises. Commercial software has too big a learning curve and too many options for one-day exercises in the use of a total station. Now, as an accomplished surveyor, I might do a couple of hundred sights. The program will handle 15,000, and I've yet to see a student manage a hundred. The 15,000 came about only because there was room in the source code for two extra zeroes on the original limit of 150, a practical limit from when the program was first written and run on PC machines with only 256k of RAM, much of which was not available to the application. That was back in 1985, of course, and a lot has changed since then, as the program now runs under Windows.
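The scheme amounts to nothing more than a generous PARAMETER and an explicit check before each new observation is stored. The names below are hypothetical, not taken from the surveying program itself:

      SUBROUTINE ADDOBS(ANGLE, DIST)
C     Store one sight, refusing politely once the fixed limit
C     is reached rather than running off the end of the arrays.
      REAL*8 ANGLE, DIST
      INTEGER MAXOBS
      PARAMETER (MAXOBS = 15000)
      REAL*8 ANGS(MAXOBS), DSTS(MAXOBS)
      INTEGER NOBS
      SAVE ANGS, DSTS, NOBS
      DATA NOBS /0/
      IF (NOBS .GE. MAXOBS) THEN
         PRINT *, 'OBSERVATION STORE FULL'
         RETURN
      END IF
      NOBS = NOBS + 1
      ANGS(NOBS) = ANGLE
      DSTS(NOBS) = DIST
      END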
The really critical point is not to run off the end of a data structure, whether it is an array or a common block; to do so is a serious failure on the part of the programmer. One example I read about was a camera that stored JPEG images on a memory card. By establishing the average size of an image from those already taken, it judged whether there was enough space left for an additional picture. But because JPEG compression varies from image to image, that last picture could turn out too big to be written to the memory card. In the particular example, this crashed the camera software and corrupted the memory card. The correct approach was to check the actual size of the image against the space actually available, but the software author had not foreseen the situation. (There were other strategies, such as more aggressive compression, that might have worked.)
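The missing check is a single comparison, sketched here in Fortran with hypothetical names: the actual size of this image against the space actually free, rather than an average of past images.

      LOGICAL FUNCTION CANFIT(IMGSIZ, NFREE)
C     IMGSIZ is the compressed size of the image just taken, in
C     bytes; NFREE is the space remaining on the card.
      INTEGER IMGSIZ, NFREE
      CANFIT = IMGSIZ .LE. NFREE
      END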
But then the problem would not have arisen with fixed-length files, such as the uncompressed TIFFs that an old camera of mine will store, and maybe the software had been written with that in mind.