I am able to compile the contract, but I need to know where those compiled binaries are saved. I'm using macOS Big Sur. I'm really struggling with this, please help.
Based on the tags of this question, I'm assuming that you're using the solc binary compiler without any framework (such as Hardhat, Brownie, etc.).
By default, solc does not save the binaries. You can specify the output destination with the -o option.
# compiles `MyContract.sol` and saves the output to the `binaries` folder
solc --bin -o ./binaries/ ./MyContract.sol
From solc --help:
Output Options:
-o [ --output-dir ] path
If given, creates one file per component and
contract/file at the specified directory.
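For example, assuming MyContract.sol declares a contract named MyContract, the output folder should then contain its bytecode. The exact file names below follow solc's default naming and are an assumption, since they depend on the contract names in your source:
ls ./binaries/
# MyContract.bin   <- the contract bytecode in hex
# add --abi to the solc invocation if you also want MyContract.abi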
First, some background info about the elf32-x86-64 format.
It is a format that leverages 64-bit hardware while enforcing 32-bit pointers (the x32 ABI). Ref1 and Ref2.
Question
I am trying to link the Google Test framework binaries to my project.
I use objdump -f to check the format of Google Test binaries and my binaries.
The Google Test format is elf64-x86-64; mine is elf32-x86-64. So they cannot be linked together.
Then I added the content below to Google Test's internal_utils.cmake file:
set(ZEPHYR_LINK_FLAGS "-Wl,--oformat=elf32-x86-64")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${ZEPHYR_LINK_FLAGS}")
I hoped the linker flag would change the output format to elf32-x86-64.
But the Google Test build failed with the error below:
/usr/lib/gcc/x86_64-linux-gnu/7/libstdc++.so: error adding symbols: File in wrong format
The file /usr/lib/gcc/x86_64-linux-gnu/7/libstdc++.so is also in elf64-x86-64 format.
I also checked a generated object file, such as:
./googletest/CMakeFiles/gtest_main.dir/src/gtest_main.cc.o
It is still elf64-x86-64.
So it seems the linker flag doesn't affect the object file format.
I remember that the linker ld chooses the output format based on the first object file it encounters. So I guess I need to tell the compiler to output an elf32-x86-64 format.
How can I ask the compiler to output an elf32-x86-64 object file?
ADD 1 - 3:29 PM 11/1/2019
I have managed to compile Google Test as elf32-x86-64 with the following tuning:
Add the compile flag -mx32
Add the link flag -Wl,--oformat=elf32-x86-64
Now the output binaries libgtest.a and libgtest_main.a are elf32-x86-64. But they need to be linked against libstdc++.so, which is elf64-x86-64 on my system, and I haven't found an elf32-x86-64 one. Hence the error below:
/usr/lib/gcc/x86_64-linux-gnu/7/libstdc++.so: error adding symbols: File in wrong format
ADD 2 - 3:47 PM 11/1/2019
After running sudo apt-get install gcc-multilib g++-multilib (ref), I got an elf32-x86-64 version of libstdc++.so at the location below:
/usr/lib/gcc/x86_64-linux-gnu/7/x32/libstdc++.so
It ultimately points to /usr/libx32/libstdc++.so.6.0.25.
Now it seems I just need to find a way to tell the linker to use it... So close!
ADD 3 - 2:44 PM 11/4/2019
Thanks to Florian and EmployedRussian, I changed Google Test's internal_utils.cmake file to add the four lines below:
set(MY_COMPILE_FLAGS "-mx32")
set(cxx_base_flags "${cxx_base_flags} ${MY_COMPILE_FLAGS}")
set(MY_LINK_FLAGS "-mx32")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${MY_LINK_FLAGS}")
Now the generated executables are in elf32-x86-64 format.
So basically, I added -mx32 to both the compile and link flags.
And in the generated rules.ninja file, the link rule goes like this:
command = $PRE_LINK && /usr/bin/c++ $FLAGS $LINK_FLAGS $in -o $TARGET_FILE $LINK_PATH $LINK_LIBRARIES && $POST_BUILD
The $FLAGS and $LINK_FLAGS are defined in the build.ninja file as below:
FLAGS = -Wall -Wshadow -Werror -mx32 ...
LINK_FLAGS = -mx32 ...
So essentially, there are two -mx32 options in the ninja command definition, contributed by $FLAGS and $LINK_FLAGS respectively.
So why do I need to specify -mx32 twice?
And I don't understand why I can specify -mx32 for CMAKE_EXE_LINKER_FLAGS.
First, -mx32 is only a compile option (ref), not a linker option.
Second, from the link rule definition, $LINK_FLAGS is passed to /usr/bin/c++ without a -Wl, prefix, so even if the linker could make use of the option, it would never reach the linker.
GCC will adjust the linker command line accordingly if you invoke it as gcc -mx32. It is more than just a compiler flag.
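A quick way to see this in action (a minimal sketch, assuming gcc-multilib/g++-multilib are installed on an x86-64 Linux box; file names are illustrative):
# compile and link a trivial program with the x32 ABI
echo 'int main(void) { return 0; }' > hello.c
gcc -mx32 hello.c -o hello
# file should report a 32-bit ELF for the x86-64 architecture (the x32 ABI)
file hello
# the verbose output shows gcc itself passing the elf32_x86_64 linker emulation and x32 library paths
gcc -mx32 -v hello.c -o hello 2>&1 | grep elf32_x86_64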
I hope this is a simple question and I'm just missing something fundamental.
I'm trying to emulate a binary build manager for an embedded Cortex-M0 target using a CMake project. I'm having some trouble figuring out how to generate list files for each dependency of my executable target.
The current build system, when building a file called main.c, passes -Wa,-alh=.\CortexM0\ARM_GCC_493\Debug/main.lst as an argument to gcc. I can't figure out how to get CMake to use the current filename without the extension to save the file.
I've looked at the get_filename_component command, but it appears only to get the filename of the output:
add_executable(TestExe main.c)
get_filename_component(curr_name TestExe NAME_WE)
message(${curr_name})
As expected, this prints TestExe instead of the hoped-for main.
Is there a simple variable I'm overlooking that I could put in my toolchain file's CMAKE_C_FLAGS like -Wa,-alh=${CURR_SOURCE}.lst? Or some other method that I'm not seeing?
System info:
Windows 10
Msys shell
CMake 3.7.2
arm-none-eabi-gcc v4.9.3
You can use Expansion Rules and extend CMAKE_C_COMPILE_OBJECT:
set(CMAKE_C_COMPILE_OBJECT "${CMAKE_C_COMPILE_OBJECT} -Wa,-alh=<OBJECT>.lst")
Unfortunately, there is no expansion rule that gives the current source file without its path and extension, so in the example above you will get main.c.o.lst as the output name.
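For illustration, a minimal sketch of how this could look in a CMakeLists.txt (the project name is an assumption; the rule has to be extended after project() so that CMake's compiler modules have already initialized its default value):
cmake_minimum_required(VERSION 3.7)
project(TestExe C)
# append the assembler listing option to the default compile rule;
# <OBJECT> expands to something like CMakeFiles/TestExe.dir/main.c.o
set(CMAKE_C_COMPILE_OBJECT "${CMAKE_C_COMPILE_OBJECT} -Wa,-alh=<OBJECT>.lst")
add_executable(TestExe main.c)
# every object file then gets a listing next to it, e.g. main.c.o.lst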
Footnote: In CMake-generated makefile projects, if you just need the assembly file you can simply run make main.s, or make main.i for the preprocessed file.
I'm currently working on a project using the Arduino 1.0.6 IDE and it does not seem to accept C++11 std::array. Is it possible to change the compiler flags to make this work?
Add custom compiler flags to platform.local.txt. Just create it in the same directory where platform.txt is. For example:
compiler.c.extra_flags=
compiler.c.elf.extra_flags=
compiler.S.extra_flags=
compiler.cpp.extra_flags=-mcall-prologues -fno-split-wide-types -finline-limit=3 -ffast-math
compiler.ar.extra_flags=
compiler.objcopy.eep.extra_flags=
compiler.elf2hex.extra_flags=
In this example, the C++ flags will make a large sketch smaller. Of course, you can use your own flags instead. Since platform.local.txt does not overwrite standard files and is very short, it is very easy to experiment with compiler flags.
You can save a platform.local.txt for each project in its directory. It will NOT have any effect in the project's directory, but this way, if you decide to work on your old project again, you can just copy it to the same directory where platform.txt is (typically ./hardware/arduino/avr/) and continue working on your project with project-specific compiler flags.
Obviously, using a Makefile as ladislas suggests is more professional and more convenient if you have multiple projects and do not mind dealing with Makefiles. Still, using platform.local.txt is better than messing with platform.txt directly, and it is an easy way to play with compiler flags for people who are already familiar with the Arduino IDE.
You can use #pragma inside the *.ino file so as not to have to create the local platform file:
#pragma GCC diagnostic warning "-fpermissive"
#pragma GCC diagnostic ignored "-Wwrite-strings"
For other ones, see HERE.
Doing that with the IDE is very difficult.
I would advise you to go full command line by using Sudar's great Arduino Makefile.
This way you'll be able to customise the compiler flags to your liking.
I've also created the Bare Arduino Project to help you get started. The documentation covers a lot of points, from installing the latest avr-gcc toolchain to how to use the repository, compile, and upload your code.
If you find something missing, please feel free to file an issue on GitHub so that I can fix it :)
Hope this helps! :)
Yes, but not in 1.0.6. In 1.5.x, the .\Arduino\hardware\arduino\avr\platform.txt file specifies the command lines used for compiling.
One can either modify this file directly or copy it to your user .\arduino\hardware\... directory to create a custom platform, so as not to alter the stock IDE. The custom platform will then also carry over to other/updated IDEs that you run. You can copy just the platform file and boards.txt, and have your boards.txt file link back to the stock core and libraries so that you don't maintain a one-off copy. See
Reference: Change CPU speed, Mod New board
I wanted to add the -fpermissive flag.
Under Linux, here is what I have done with success.
The idea is to replace the two compilers avr-gcc and avr-g++ with two bash scripts in which you add your flags (-fpermissive for me).
With root privileges:
rename the compiler avr-gcc (present in /usr/bin) to avr-gcc-real
rename the compiler avr-g++ (present in /usr/bin) to avr-g++-real
Now create two bash scripts, avr-gcc and avr-g++, under /usr/bin/
script avr-gcc contains this line:
avr-gcc-real -fpermissive "$@"
script avr-g++ contains this line:
avr-g++-real -fpermissive "$@"
As you may know, "$@" expands to all the parameters passed to the script. Thus all the parameters transmitted by the IDE to the compilers are forwarded to your bash scripts, which replace the originals and call the real compilers with your flags plus the IDE's own.
Don't forget to make your scripts executable:
chmod a+x avr-gcc
chmod a+x avr-g++
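For completeness, a slightly more robust sketch of the avr-gcc wrapper (the shebang and exec are additions to the one-liner above; the avr-g++ wrapper is analogous):
#!/bin/sh
# forward every argument the IDE passes, plus -fpermissive, to the renamed real compiler
exec avr-gcc-real -fpermissive "$@"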
Under Windows, I don't know whether such a solution can be done.
We have a project and a shared library libprivate.so (a private .so) which was using the old library libcurl.so.3. The system was upgraded to the new system library libcurl.so.4.
For internal reasons, we do not want to use the latest library libcurl.so.4 right now; we want to use libcurl.so.3.
Hence I copied libcurl.so.3 into a local folder and set LD_LIBRARY_PATH accordingly. When I link my entire project, it says there is a version conflict between libcurl.so.4 and the libcurl.so.3 required by libprivate.so (libprivate.so was compiled a long time ago against libcurl.so.3).
Should I not worry about this warning and proceed further?
When I correctly specify LD_LIBRARY_PATH containing libcurl.so.3, why is the library taken from the system directory /usr/lib64/libcurl.so.4? When I do ldd my_binary, it resolves to libcurl.so.4. How do I stop it? Specifying -L with the specific location also doesn't work. Modifying /etc/ld.so.conf would affect the entire system; I want this to apply only when I run my project.
Specifying an explicit path such as /home/mydir/libcurl.so.3 works, but I do not want to do that.
I want these conditions only when I execute my project. In other cases it can use the latest libraries.
Thanks for your help
If the command you show in your comment is correct:
gcc test.c -L~/lib/x86_64/ -lcurl -o test
... then you need a space between -L and ~/lib/x86_64/ or the shell won't expand the ~, so the linker is not looking in the right directory.
So you need either:
gcc test.c -L ~/lib/x86_64/ -lcurl -o test
or:
gcc test.c -L$HOME/lib/x86_64/ -lcurl -o test
(You don't need a space here because variables are expanded anywhere in a word, but ~ is only expanded at the start of a word.)
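Once it links, you can check which libcurl the binary resolves at run time (a sketch; the paths are illustrative, and it assumes ~/lib/x86_64 also contains a libcurl.so symlink pointing at libcurl.so.3 so that -lcurl can find it at link time):
gcc test.c -L"$HOME/lib/x86_64" -lcurl -o test
# without LD_LIBRARY_PATH the loader would still resolve /usr/lib64/libcurl.so.4
LD_LIBRARY_PATH="$HOME/lib/x86_64" ldd ./test | grep libcurl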
There is a laptop on which I have no root privileges.
On the machine, I have a library installed using configure --prefix=$HOME/.usr.
After that, I got these files in ~/.usr/lib:
libXX.so.16.0.0
libXX.so.16
libXX.so
libXX.la
libXX.a
When I compile a program that invokes one of the functions provided by the library with this command:
gcc XXX.c -o xxx.out -L$HOME/.usr/lib -lXX
xxx.out was generated without warnings, but when I run it, an error like this is thrown:
./xxx.out: error while loading shared libraries: libXX.so.16: cannot open shared object file: No such file or directory, even though libXX.so.16 resides there.
My clueless assumption is that ~/.usr/lib isn't searched when xxx.out is invoked.
But what can I do to specify the path of the .so, so that xxx.out will look there for the .so file?
In addition, when I pass -static to gcc, another error like this occurs:
undefined reference to `function_proviced_by_the_very_librar'
It seems the .so does not matter even though -L and -l are given to gcc.
What should I do to build a usable executable with that library?
For other people who have the same question as I did:
I found a useful article at tldp about this.
It introduces static, shared, and dynamically loaded libraries, as well as some example code for using them.
There are two ways to achieve that:
Use the -rpath linker option:
gcc XXX.c -o xxx.out -L$HOME/.usr/lib -lXX -Wl,-rpath=/home/user/.usr/lib
Use the LD_LIBRARY_PATH environment variable; put this line in your ~/.bashrc file:
export LD_LIBRARY_PATH=/home/user/.usr/lib
This will work even for pre-generated binaries, so you can, for example, download some packages from debian.org, unpack the binaries and shared libraries into your home directory, and launch them without recompiling.
For a quick test, you can also do (in bash at least):
LD_LIBRARY_PATH=/home/user/.usr/lib ./xxx.out
which has the advantage of not changing your library path for everything else.
Should it be LIBRARY_PATH instead of LD_LIBRARY_PATH?
gcc checks LIBRARY_PATH, which can be seen with the -v option.
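The two variables solve different problems; a short sketch using the paths from the question above (assuming the library is libXX installed under ~/.usr/lib):
# LIBRARY_PATH is searched by gcc when resolving -lXX at link time (an alternative to -L)
LIBRARY_PATH=$HOME/.usr/lib gcc XXX.c -o xxx.out -lXX
# LD_LIBRARY_PATH is searched by the dynamic loader at run time
LD_LIBRARY_PATH=$HOME/.usr/lib ./xxx.out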