Why golang chose syscall rather than libc [closed]

Go wraps all syscalls in its syscall package, much as libc does, if I understand them right.
I've researched a few languages:
Haskell uses libc in its compiler, and the libraries normally use it too, although there are a few libraries wrapping syscalls for users.
Java and almost all JVM languages choose libc.
No need to mention scripting languages such as Lua, Ruby, or Python: they need to be portable, so they need libc as an implementation of POSIX.
I haven't used Rust recently, but some people have said Rust uses libc too.
So why did golang decide to implement its own syscall package in the first place? It's not portable, and it costs more effort to port to each kernel, even to each major version of the same kernel.

Because Go manages its concurrency in goroutines scheduled by the Go runtime, which was originally written in C and is statically linked into the compiled user code during the linking phase. Since Go routes syscalls through its own runtime rather than through the OS's libc, it implemented its own syscall package.
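As an illustration, here is a minimal sketch (the file name and message text are my own, not from the question) of what calling the kernel directly through the syscall package looks like on Linux, with no libc involved at any point:

    package main

    import "syscall"

    func main() {
        // Issue the kernel's write(2) system call directly.
        // The Go runtime performs the trap itself; libc is never linked in.
        msg := []byte("hello from a raw syscall\n")
        syscall.Write(1, msg) // file descriptor 1 is stdout
    }

Newer code is steered toward golang.org/x/sys/unix instead, but the principle is the same: Go talks to the kernel ABI directly rather than going through libc.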

Related

What is wrong with configure/make that we need new build tools? [closed]

Is make not general enough?
makedepend, autoconf, automake all work to build Makefiles. Is there a flaw in make that causes this type of usage to break down for some languages?
How do ant, Bazel, Maven, or other systems compile or build a project better than make?
make comes from Unix and is generally good for "if you have X, you need to do Y to get Z", one file at a time (like invoking the C compiler on a C source). All the autoconf/automake/configure tooling exists to adapt C programs to the actual platform, and essentially doesn't overlap with make.
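For example, a classic make rule encodes exactly that "have X, do Y, get Z" relationship; a minimal sketch with hypothetical file names (recipe lines must be indented with a tab):

    # To get hello.o (Z) from hello.c (X), invoke the C compiler (Y).
    # make re-runs the recipe only when hello.c is newer than hello.o.
    hello.o: hello.c
    	$(CC) -c hello.c -o hello.o

    # Rules chain: the final binary depends on the object file.
    hello: hello.o
    	$(CC) hello.o -o hello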
This did not work well for Java, as the compiler was fast at compiling multiple files but the overhead of starting the JVM was much too high to compile one file at a time. So a different approach was needed: first plain javac, then ant (which for all practical purposes is a scripting language), and then maven (which deliberately isn't, because that had turned out to be a bad idea).
So, the answer is that different tools arose for different needs.

How can I install pc, a pascal compiler? [closed]

I only found fpc, not pc, on my system to compile Pascal. The system is Red Hat. How should I install pc? The only compiler I found is http://www.freepascal.org/, but it doesn't seem to have pc.
pc is a general name for the system Pascal compiler on old unices, just like cc was the equivalent for the system C compiler.
If the code is really old and of mainframe or unix descent (early eighties), it is probably Berkeley, Sun, or some other OS/vendor-specific Pascal. If not, then somebody just tried to mimic that for the build system of a newer codebase by symlinking "pc" to some other compiler.
Anyway, "pc" is too generic, and more information is needed to know what compiler you are searching for. Free Pascal always referred to itself either as ppc or as fpc, never as "pc".
To the best of my knowledge, Berkeley Pascal was removed from the distribution in the transition from BSD to the *BSDs in the early nineties, and it never made it to Linux.
Your best bet is to port to an existing compiler: Free Pascal (using mode ISO), or GNU Pascal in the very unlikely case that it is an Extended Pascal dialect codebase. GNU Pascal, despite being unmaintained, is still buildable with considerable effort.
The convention of symlinking Pascal compilers to "pc" never really caught on, and neither is there a universal build system that requires the shortcut.
Even for C, build systems seem to favour the CC environment variable for the C compiler's name nowadays.
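For instance, any make-based build will pick up the compiler per invocation rather than relying on a fixed command name (a hypothetical one-liner):

    # make's built-in rules compile C with $(CC), so this switches the
    # compiler for the whole build without editing a single file.
    make CC=clang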

gcc assembly vs direct to machine code [closed]

I recently started learning how to program, and I found this one thing curious:
Why does gcc go the extra mile of compiling the C code to assembly and then to machine code?
Wouldn't it be just as reasonable to compile directly to machine code? I know that e.g. the MS C compiler does this, so what's the reason for gcc to behave like this?
Because, for one thing, there's already an assembler that does a fairly good job of translating assembly to machine code; there would be no point in gcc re-implementing that functionality. (Also keep in mind that assembly is still somewhat symbolic.) On a second point, one doesn't always want to compile straight down to machine code: on embedded systems, there's a good chance the generated assembly undergoes a final by-hand optimization. Last but not least, it's very helpful for debugging the compiler itself in case it misbehaves. Nobody likes to read raw machine code.
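You can watch the separate stages yourself; a minimal sketch, assuming a hypothetical hello.c:

    gcc -S hello.c          # compile only: writes human-readable assembly to hello.s
    as hello.s -o hello.o   # assemble: turn that assembly into machine code
    gcc hello.o -o hello    # link: produce the final executable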
GCC is very much a Unix program, and the Unix way is to make separate tools that build on each other rather than integrating everything. You need an assembler and a linker anyway; the assembler's job is to generate machine code for a target, so it makes no sense to repeat that work. The compiler's job is to boil a higher-level language down to a lower one. Generating assembly language allows for much easier debugging of the compiler's output, and it lets the assembler do its job rather than duplicating that task in two places.
Of course it is not only unix compilers that do this; it makes a lot of sense on all platforms and has been done this way forever. Going straight to machine code is the exception rather than the rule, usually done only when there is a specific reason to.
I don't understand the fascination with this question and why it keeps getting asked over and over again. Please search for previous answers before asking...

Designing a makefile for installing / uninstalling software that I design [closed]

I'm writing a compiler, and there are certain things I simply can't do without knowing where the user will have my compiler and its libraries installed, such as including and referencing libraries that add built-in functionality like standard I/O. Since this is my first venture into compilers, I feel it's appropriate to target only Linux distributions for the time being.
I notice that a lot of compilers (and software projects in general) include makefiles, or perhaps an install.py, that move parts of the application across the user's file system and ultimately leave the user with something like a new shell command to run the program, which (as in Python's case, for example) knows where the necessary libraries and other required files have been placed in order to run properly.
How does this work? Is there some sort of guideline to follow when designing these install files?
I think the best guideline I can give you at a high level would be:
Don't do this yourself. Just don't.
Use something like the autotools or any of the dozen or so other build systems out there that handle much of the details involved here for you.
That being said, they also add a certain amount of complexity when you are just starting out, which may or may not be worth the effort at first, but they will pay off in the end, assuming you use them appropriately and don't need anything so specialized that they can't provide it nicely.
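That said, since the question asks how such install files actually work: the conventional shape is a pair of install/uninstall targets driven by a PREFIX variable. A minimal hand-rolled sketch, with hypothetical names and paths (recipe lines must be indented with a tab):

    PREFIX ?= /usr/local

    install: mycompiler
    	# install(1) copies files and sets permissions; -d creates directories.
    	install -d $(DESTDIR)$(PREFIX)/bin $(DESTDIR)$(PREFIX)/lib/mycompiler
    	install -m 755 mycompiler $(DESTDIR)$(PREFIX)/bin/
    	install -m 644 lib/* $(DESTDIR)$(PREFIX)/lib/mycompiler/

    uninstall:
    	rm -f $(DESTDIR)$(PREFIX)/bin/mycompiler
    	rm -rf $(DESTDIR)$(PREFIX)/lib/mycompiler

    .PHONY: install uninstall

The installed compiler then finds its libraries either through a path baked in at build time (the same PREFIX) or by looking relative to its own executable; DESTDIR exists so packagers can stage the whole tree somewhere else before it lands in the real filesystem.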

If MPI is Message Passing Interface, then what is MPICH? [closed]

Can anyone clearly define what MPICH is?
What is it used for?
What is its relation to MPI?
Why do we need MPICH?
MPI is a standard interface definition, i.e. it defines how to program to it, but doesn't provide an implementation.
MPICH is a specific implementation that conforms to that interface, and is portable to a huge number of platforms. OpenMPI (not to be confused with OpenMP) is another implementation, as is LAM, and many vendors have their own implementations tuned to their platforms. If you write your program to conform to the MPI standard, you can link to any conforming implementation.
MPICH was one of a handful of reference implementations that became widely available in the mid-90's.
MPICH is to MPI as GNU libc is to the C standard library.
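Concretely, the separation shows up at build time. A hypothetical session using MPICH's compiler wrapper and launcher (hello_mpi.c stands in for any program written against the MPI interface):

    mpicc hello_mpi.c -o hello_mpi   # compile and link against the installed MPI implementation
    mpiexec -n 4 ./hello_mpi         # launch four ranks of the program

The same source builds unchanged with Open MPI's mpicc, precisely because it targets the standard interface rather than a particular implementation.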
