Compile error when compiling the Livermore Loops coded in C - parallel-processing

I want to compile and run the Livermore Loops C code, but every time I get a compile error and I can't find a clear answer.
This is the source code page:
https://netlib.org/benchmark/livermorec
Can anyone help me with this?
I want to compile the Livermore Loops in C and parallelize them using the OpenMP library. Unfortunately, at the first stage, I cannot even compile and run the code for the Livermore Loops.
I am new to C tooling and to parallelization.
I've applied everything I've learned about C; unfortunately, I still cannot compile and run those loops.

The Livermore Loops are ancient. Leaving aside the question of how relevant they still are, they are coded in pre-ANSI (K&R) C, so you need to give your compiler an option to accept this prehistoric syntax.
There is some information here: https://www.reddit.com/r/C_Programming/comments/3gwr4j/resources_for_preansi_c/

Related

How compile SCSS faster

Simple topic, simple question: is there a way to compile SCSS faster when you have a MASSIVE folder of partial files like this?
I know that the more partial files you have, the slower the compile is, but I'd like to know if there is a way to compile faster.
In general, Sass is compiled by compilers written in different programming languages. If the one you use is too slow for you, you can use Sass directly via Dart Sass (https://sass-lang.com/dart-sass), or use a compiler written in a faster programming language such as Java.
There are three things to think about:
1. Sass becomes slower as more SASS files are included in the process. Big Sass frameworks tend to use a lot of files, and especially when you use a lot of big modules, compilation can slow down considerably. Sometimes more modules are included than are actually needed.
2. Often the standard project settings try to do a lot of work at the same time. For instance, writing min-files in the same process simply doubles the time; if that is your setup, just produce the min-files at the end of your work. On top of that, additional post-processors such as autoprefixers, linters, and maybe PostCSS need extra time... which counts double when min-files are written in the same pass.
3. JS Sass compilers are slower overall, so you can save time by using native Sass directly. This may not be convenient, but in big projects it has helped me a lot. If you want to try it, here is the link to the installation instructions: https://sass-lang.com/install

Diagram of the routines used in the execution of a Fortran code

I am trying to modify a complex Fortran code for fluid dynamics, written by many people, which consists of many routines, subroutines, and functions. I wonder if there is an option in gdb, or in any other debugger or tool, that can generate a diagram of the routines called when the code is executed with a specific option. I am looking to generate a diagram like this or similar, where I can see all the routines and subroutines that were called when executing the Fortran code, so I can get an idea of which routines to modify.
Rather than use a debugger, it would probably be more common to use the output of a profiler. "Is it possible to get a graphical representation of gprof results?" provides a few suggestions, and below is the output generated by using gprof and gprof2dot on one of my own little codes.

Profiling Rust with execution time for each *line* of code?

I have profiled my Rust code and see one processor-intensive function that takes a large portion of the time. Since I cannot break the function into smaller parts, I hope I can see which line in the function takes what portion of time. Currently I have tried CLion's Rust profiler, but it does not have that feature.
It would be best if the tool ran on macOS, since I do not have a Windows/Linux machine (except via virtualization).
P.S. Visual Studio seems to have this feature, but I am using Rust. https://learn.microsoft.com/en-us/visualstudio/profiling/how-to-collect-line-level-sampling-data?view=vs-2017 It says:
Line-level sampling is the ability of the profiler to determine where in the code of a processor-intensive function, such as a function that has high exclusive samples, the processor has to spend most of its time.
Thanks for any suggestions!
EDIT: With C++, I do see source code line-level information. For example, the following toy example shows that the "for" loop takes most of the time within the big function. But I am using Rust...
To get source code annotation in perf annotate or perf report, you need to compile with debug=2 in your Cargo.toml.
If you also want source annotations for standard library functions you additionally need to pass -Zbuild-std to cargo (requires nightly).
Once compiled, "lines" of Rust do not exist. The optimiser does its job by completely reorganising the code you wrote and finding the minimal machine code that behaves the same as what you intended.
Functions are often inlined, so even measuring the time spent in a function can give incorrect results - or else change the performance characteristics of your program if you prevent it from being inlined to do so.
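Concretely, the Cargo.toml change mentioned above is a profile setting (a config sketch; in current Cargo, debug = true is equivalent to debug = 2):

```toml
[profile.release]
debug = 2        # full debug info; 'debug = true' means the same thing
```

After rebuilding with cargo build --release, running perf record on the binary and then perf annotate (or pressing 'a' on a function inside perf report) shows the per-line costs, with the caveats about inlining and reordering described above.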

Attempting to reduce executable size of Go program [duplicate]

This question already has answers here:
Reason for huge size of compiled executable of Go
(3 answers)
Closed 3 years ago.
EDIT / CLARIFICATION:
It seems that I have failed to explain myself here. I am not criticizing Go, or its runtime, or the fact that the executables are large. I was also not trying to say that Go is bad while C is good.
I was merely pointing out that the compiled executable seems to always be at least around 1MB (presumably this is the runtime overhead), and that importing a package seems to put the entire package inside, regardless of usage.
My actual question was basically whether those 2 points are the default behavior or the only behavior. I gave some examples of C programs that are code-wise equivalent to the Go programs, but for which I carefully picked compiler and linker flags to avoid linking with any external C runtime (I verified this with Dependency Walker just to be sure). The purpose of the C examples was to show how small the actual code is, and also to show a case where you do need something and you import only what you need.
I actually think that this behavior (put everything inside just in case) is a good setting to have as a default, but I thought that there may be some compiler or linker flag to change this. Now, I do not think that it would be sensible to say you don't want the runtime or parts of it. However, I think that selectively including parts of a package is not such a strange thing to have. Let me explain:
Let's say we are writing in C/C++ and we include a huge header file with tons of functions, but we only use a small portion of them. In this scenario it is possible to end up with an executable that will not contain any unused code from that header file. (Real world example: A math library with support for 2D/3D/4D vectors and matrices, quaternions, etc.. all of which come in 2 version one for 32bit floats and one for 64bit floats + conversions from one to the other)
This is the sort of thing I was looking for. I fully understand that doing this may cause issues in some cases, but still. It's not like Go does not have other things that may cause serious issues.. they have the "unsafe" package, which is there if you need it but it's like "use at your own risk" kinda package.
ORIGINAL QUESTION:
After working with Go (golang) for some time, I decided to look into the executable that it produces. I saw that my project was clocking in at more than 4.5MB for the executable alone, while another project, similar in complexity and scope but written in C/C++ (compiled with MSVC), was less than 100KB.
So I decided to try some things out. I've written really stupid and dead simple programs both in C and Go to compare the output.
For C I am using MSVC, compiling a 64-bit executable in release mode and NOT linking with the C runtime (as far as I understand, Go executables only link with it when using CGO).
First run: A simple endless loop, that's it. No prints, no interaction with the OS, nothing.
C:
#include "windows.h"
int main()
{
    while (1);   /* 'true' is not defined in C without <stdbool.h> */
}
void mainCRTStartup()
{
    main();
}
GO:
package main

func main() {
	for {
	}
}
Results:
C : 3KB executable. Depends on nothing
GO: 1,057 KB executable. Depends on 29 procedures from KERNEL32.DLL
There is a huge difference there, but I thought that it might be unfair. So next test I decided to remove the loop and just write a program that immediately returns with an exit code of 13:
C:
#include "windows.h"
int main()
{
    return 13;
}
void mainCRTStartup()
{
    ExitProcess(main());
}
GO:
package main

import "os"

func main() {
	os.Exit(13)
}
Results:
C: 4KB executable. Depends on 1 procedure from KERNEL32.DLL
GO: 1,281 KB executable. Depends on 31 procedures from KERNEL32.DLL
It seems that the Go executables are "bloated". I understand that, unlike C, Go puts a considerable amount of its runtime code into the executable, which is understandable, but that is not enough to explain the sizes.
Also, it seems that Go works at package granularity. What I mean is that it will not cram packages you do not use into the executable, but if you import a package you get ALL of it, even if you only need a small subset, or don't use it at all. For example, just importing "fmt" without even calling anything in it expands the previous executable from 1,281KB to 1,777KB.
Am I missing something, like some flag to the Go compiler to tell it to be less bloated (I know there are many flags I can set, and that I can also pass flags to the native compiler and linker, but I have not found one for this specifically), or is this just something no one cares about anymore in 2019, since what are a few megabytes, really?
Here are some things that the Go program includes that the C program does not include:
Container types, such as hash maps and arrays, and their associated functions
Memory allocator, with optimizations for multithreaded programs
Concurrent garbage collector
Types and functions for threading, such as mutexes, condition variables, channels, and threads
Debugging tools like stack trace dumping and the SIGQUIT handler
Reflection code
(If you are curious exactly what is included, you can look at the symbols in your binary with debugging tools. On macOS and Linux you can use nm to dump the symbols in your program.)
The thing is—most Go programs use all of these features! It’s hard to imagine a Go program that doesn't use the garbage collector. So the creators of Go have not created a special way to remove this code from programs—since nobody needs this feature. After all, do you really care how big "Hello, world!" is? No, you don’t.
From the FAQ Why is my trivial program such a large binary?
The linker in the gc toolchain creates statically-linked binaries by default. All Go binaries therefore include the Go runtime, along with the run-time type information necessary to support dynamic type checks, reflection, and even panic-time stack traces.
Also keep in mind that if you are compiling on Windows with MSVC, you may be using a DLL runtime, such as MSVCR120.DLL... which is about 1 MB.
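One knob that does exist, though it strips metadata rather than removing runtime code: the Go linker flags -s (drop the symbol table) and -w (drop DWARF debug info). A sketch, assuming a Go toolchain on PATH; the module name is made up for this example:

```shell
# Rebuild the os.Exit(13) program from the question, with and without
# stripped symbol/debug info, and compare sizes.
cat > main.go <<'EOF'
package main

import "os"

func main() {
	os.Exit(13)
}
EOF
go mod init example.com/tiny 2>/dev/null || true
go build -o tiny_default .
go build -ldflags="-s -w" -o tiny_stripped .
ls -l tiny_default tiny_stripped   # the stripped binary should be smaller
```

This typically cuts a noticeable fraction off the binary, but the runtime itself (GC, scheduler, etc.) remains, so it will not get you anywhere near the C sizes above.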

Does GCC feature a similar parameter to pgcc's -Minfo=accel?

I'm trying to compile code on GCC that uses OpenACC to offload to an NVIDIA GPU but I haven't been able to find a similar compiler option to the one mentioned above. Is there a way to tell GCC to be more verbose on all operations related to offloading?
Unfortunately, GCC does not yet provide a user-friendly interface to such information (it's on the long TODO list...).
What you currently have to do is look at the dump files produced by -fdump-tree-[...] for the several compiler passes that are involved, and gather information that way, which requires understanding of GCC internals. Clearly not quite ideal :-/ -- and patches welcome probably is not the answer you've been hoping for.
Typically, for a compiler it is rather trivial to produce diagnostic messages for wrong syntax in source code ("expected [...] before/after/instead of [...]"), but what you're looking for is diagnostic messages for failed optimizations, and similar, which is much harder to produce in a form that's actually useful for a user, and so far we (that is, the GCC developers) have not been able to spend the required amount of time on this.
