Ada95: I have 3 files (.adb, .ali and .o) - can I compile them with gcc?

I've found some old college work, with my final Ada95 project on it. Sadly, the disc was corrupted, and I have only managed to recover 3 files (the source and executable couldn't be recovered):
project.adb, project.ali and project.o
Are these 3 files enough to compile a new exe? I'm downloading the GNAT compiler now, but I have to admit I have forgotten almost everything Ada-related...
Frank
[EDIT]
shucks... using GCC to compile project.adb throws an error about a missing .ads file, which I cannot recover.
Is it possible to extract this / compile just the ".o" or ".ali" files? Or, am I stuffed?

project.adb is a source file.
Since you say that gcc complains about a missing .ads file, that indicates that project.adb contains a package body. You can manually construct a corresponding package spec by putting the following into project.ads:
package Project is
end Project;
Now that's almost certainly not enough, because the project spec probably had some type and constant declarations in it, so you'd have to analyze your package body and identify what it references. Infer what those declarations should look like and add them. Oh, and if your package body "with's" any packages that are not part of the standard Ada library, you'll have to recover those as well.
If you do manage to get your reverse engineered spec and the body to compile, you'll still have to create a "driver" program that "with's" the Project package and calls whatever functions and/or procedures carried out the work of your project (and you'll have to pull the declarations of those subprograms, which must match their appearance in the package body, into the spec as well).
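A minimal sketch of such a driver, assuming the recovered body turns out to expose a single parameterless procedure (the name Run here is purely hypothetical):
--  Hypothetical driver: replace Run with whatever subprogram
--  actually carried out the work of your project.
with Project;
procedure Main is
begin
   Project.Run;
end Main;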
Frankly, if it were me, I'd spend more time on trying to use some disk recovery tools to pull whatever else I could off the disk.

In Ada95 (and 2005) one mostly works with .adb files (occasionally with .ads files); everything else is generated as you build. In your case the .adb file is surely linked up to other .ads files.
However, .ads files are usually small (obviously, unless you were attempting really exotic things such as 'the dining philosophers'); they pertain to the algorithmic/mathematical structure of the program. If you can dig out what you did in your project, it should not be impossible to restore them!

Related

What happens to imports and package names when a Go program is compiled

Once a Go program is compiled, is there any way to extract the names of the packages (as explained later in the question) if the machine code is decompiled to assembly (or then to C)?
In Go we generally import packages by providing a link to the repo (if open source). For example github.com/abc/abc. This means that the username of the library developer is usually part of the package import path. Now, when the program is compiled, what happens to them? Can they somehow be extracted from the compiled binary?
Generally speaking, I think the compiler should put the whole code in one place and then could get rid of those names, but I am unsure about it. That is why I asked the question. I asked because in one special case, if the package import paths are included in the binary, it would lead to a serious security problem.
After some investigation, I found that Go does include these in the binary. I opened the binary in a text editor and tried to look up the repo names. At the end of the file there is a big block of textual information, and the names of the repos used in the program can simply be looked up there.
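For example, on a Unix-like system the lookup can be sketched like this (the binary name myprog is illustrative; go version -m only works on binaries built with module support):
# list unique import paths embedded as plain strings in the binary
strings ./myprog | grep 'github.com/' | sort -u
# print the module paths recorded in the binary's build info
go version -m ./myprog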
This means that if you are working on security projects where you need to hide as much information as possible, you need to be careful about these embedded strings.

Should the STM32 HAL be included as a precompiled library

I have a Keil STM32 project for an STM32L0. I sometimes (more often than I want) have to change the include paths or global defines. This triggers a complete recompile of all code, because the toolchain has to check whether anything behaves differently as a result of these changes. The problem is that I didn't necessarily change any parameters relevant to the HAL, so (as far as I understand) there is no need for those files to be completely recompiled. This recompilation takes up quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done for every microcontroller separately as they have different HALs).
ps. the question is not necessarily only useful for this specific example but the example gives some scope to the question.
pps. for people who aren't familiar with the STM32 HAL. It is the standardized interface with which the program interfaces with the underlying hardware. It is supplied in .c and .h files instead of the precompiled form of the STD/STL.
update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx,USE_B_BOARD,USE_HAL_DRIVER, REGION_EU868,DEBUG,TRACE
Only STM32L072xx and DEBUG are relevant to configuring the HAL library, so there shouldn't be a need to recompile the HAL when I change TRACE from defined to undefined. It therefore seems to me that the HAL could be managed separately.
edit
Seeing as a close vote has been cast: I've read the "don't ask" section, and my question seeks to constructively add to the knowledge of building STM32 programs and to find a best practice for using the HAL libraries more effectively. I haven't found any questions on SO about building the HAL as a static library, so this question at least qualifies as unique. It is also meant to invite a rich answer that elaborates on the pros and cons of building the HAL as a separate static library.
The answer here is: it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - having HAL sources directly in your project means rebuilding HAL every time anything in its (and underlying) headers changes, which you've already noticed. Downside of it is longer build times. Upside - you are sure that what you build is what you get.
Option #2 - having HAL as a precompiled static library. Upside - shorter build times, downside - you can no longer be absolutely certain that the HAL library you include actually works as you want it to. In particular, you'd need to make sure in some way that all the #defines are exactly the same as when the library has been built. This includes project-wide definitions (DEBUG, STM32L072xx etc.), as well as anything in HAL config files (stm32l0xx_hal_conf.h).
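One hedged way to enforce that (a sketch of a generic link-time trick, not an established HAL facility): make the library export a symbol whose name encodes a critical define, and have the application reference it, so that a mismatch turns into a link error rather than silent misbehaviour.
/* In one source file of the library build: the defined name encodes the config. */
#ifdef DEBUG
void hal_lib_built_with_DEBUG(void) {}
#else
void hal_lib_built_without_DEBUG(void) {}
#endif

/* In one source file of the application: reference the symbol we expect.
   If the library was built with a different DEBUG setting, the link fails.
   (If the linker dead-strips unused data, keep this guard referenced.) */
#ifdef DEBUG
extern void hal_lib_built_with_DEBUG(void);
void (*hal_config_guard)(void) = hal_lib_built_with_DEBUG;
#else
extern void hal_lib_built_without_DEBUG(void);
void (*hal_config_guard)(void) = hal_lib_built_without_DEBUG;
#endif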
Seeing how you're a Keil user - maybe it's just a matter of enabling multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. HAL library isn't so large that build times should be a concern when it comes to rebuilding its source files.
If I were to express my opinion and experience: personally, I wouldn't do it, as it may lead to lower reliability or side effects that are very hard to diagnose, and this will only get worse as you add more source files and more libraries like this. Not to mention adding more people to the project and explaining to them how they "need to remember to rebuild library X when they change a given set of header files or project-wide definitions".
In fact, we ran into the same dilemma for the code base I work on - it spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. However, because this configuration is done through headers, making a change in them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I were to give general tips that help to avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's tempting to shove all configuration into one place and create pretty comments and sections for each software module in this one file. It's easier to manage this way (until the file becomes too big), but because this file is so widely included, any change made to it will cause a full rebuild. Split such files into separate headers corresponding to each module in your project.
Include header files only where you need them. I sometimes see an approach where header files are created that only "bundle" other header files, and such a header file is then included everywhere. In this case, a change to any of those "smaller" headers forces recompilation of all source files including the larger one. If that file didn't exist, only the sources explicitly including the one small header would have to be recompiled. Obviously there's a line to be drawn here - including too "low level" headers may not be the greatest idea either, e.g. they may not be meant to be included directly, being internal library files which may change at any time.
Prioritize including headers in source files over headers in other headers. If you have a pair of your own *.c (*.cpp) and *.h files - let's say temp_logger.c/.h - and you need the ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header in your temp_logger.c file, as shown in the sketch below. That way, files making use of the temp_logger functions won't have to be recompiled whenever the HAL headers change.
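A minimal sketch of that last tip (the temp_logger names are illustrative; stm32l0xx_hal.h is the usual HAL umbrella header):
/* temp_logger.h - no HAL include needed here */
#ifndef TEMP_LOGGER_H
#define TEMP_LOGGER_H

void temp_logger_init(void);
float temp_logger_read_celsius(void);

#endif

/* temp_logger.c - the HAL header is pulled in by the .c file only */
#include "temp_logger.h"
#include "stm32l0xx_hal.h"

static ADC_HandleTypeDef hadc;  /* configured in temp_logger_init */

void temp_logger_init(void)
{
    /* set up hadc via HAL_ADC_Init and friends */
}

float temp_logger_read_celsius(void)
{
    /* HAL_ADC_Start / HAL_ADC_PollForConversion / HAL_ADC_GetValue */
    return 0.0f;
}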
My opinion is yes, build the HAL into a library. The benefit of faster build time outweighs the risk of the library getting out of date. After some point early in the project it's unusual for me to change something that would affect the HAL. But the faster build time pays off many times.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.

How can I compare .text and .data segments of a .dll against the same ones in a different .dll?

I have a 20+ year old .dll, written in C, that none of my colleagues want to touch. With good reason: it uses macros, macro constants and casting EVERYWHERE, making the symbol table quite lean.
Unfortunately, I have to sometimes debug this code and it drives me crazy that it doesn't use something as simple as enums which would put symbols in the .pdb file to make debugging just that little bit easier.
I would love to convert some of the #defines to enums, even if I don't change the variable types just yet, but there is a genuine fear that it could cause performance issues if it changed the generated code.
I need to show definitively that no compiled code changes will occur, but it would seem that the .dll actually changes significantly in a 64-bit build. I looked at the disassembly of one of the functions and it appears to be unaffected, but I need to show what is and is not changing in the binary, both to alleviate my colleagues' fears and my own trepidation, and to explain why any changes would propagate to the .dll at all when the .dlls are the same size.
Does anyone have any idea how I could do this? I've tried to use dumpbin, but I'm not that familiar with it and am getting mixed results, probably because I'm not understanding the output as well as I'd like.
The way I did this was as follows:
Turn on /FAs switch for project.
Compile that project.
Move the object file directory (Release => Release-without-enums)
Change #defines to enums
Compile that project again.
Move the object file directory (Release => Release-with-enums)
From a bash command line, run the following from the parent of the Release directory:
for a in Release-without-enums/*.asm; do
    git diff --no-index --word-diff --color -U10000 "$a" "Release-with-enums/$(basename "$a")"
done | less -R
The -U10000 is just so that I can see the entire contents of each file. Remove it if you just want to see the changes.
This will list all of the modifications in the generated assembly code.
The changes found were as follows:
1. Symbol addresses were moved about for no apparent reason.
2. Referencing __FILE__ no longer produced a full path when using enums. Why the full path would disappear is a mystery, as the compiler flags had not changed.
3. Some symbols were renamed for no apparent reason.
Edit
Items 2 and 3 seem to have been caused by a corrupted .pdb file. This might be due to the files being used in multiple projects in the same solution. Rebuilding the entire solution fixed both problems.
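As an aside, if you'd rather diff the raw .text and .data sections than the generated assembly, dumpbin can dump them directly for a byte-level comparison; a minimal sketch (file names are illustrative):
dumpbin /section:.text /rawdata old.dll > old_text.txt
dumpbin /section:.text /rawdata new.dll > new_text.txt
fc old_text.txt new_text.txt
The same works for /section:.data, though shifted symbol addresses will show up as noise in the raw bytes.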

OpenGL SDK How To Create New Projects With Premake

I have recently begun learning OpenGL to start making some graphical applications in C++. I have installed the OpenGL SDK and I am able to build its projects properly. However, on the OpenGL SDK site there is little to no information whatsoever on how to create new projects using elements of the SDK, such as freeglut etc. I have Premake 4.0 and I understand I have to do something with the Lua files, but I do not know Lua and am not sure how to use the Lua files to create a new project. So could you help me out? I'm using VS2010; should I create the project, then do something with Premake? Or create some sort of Lua file, then use Premake on that? Any help would be wonderful because I am very lost and would really like to get started with OpenGL. I have experimented a lot with this, such as copying the Lua files from the SDK, but with no luck.
If you are not familiar with Premake4, then I strongly advise you to just use Visual Studio projects directly. If you're having trouble with that, then please amend your question with exactly what you did, and exactly the error messages that Visual Studio gave you when attempting to build. You should include:
The include paths: the full set of them, with full absolute directory names (including the path of your project and solution files).
The static library search paths.
The static libraries you are including.
The defines you are building with.
Note: If you don't know what any of these are, then you need to stop and learn a lot more about how C++ projects work. You need to understand how compilers deal with include paths, static libraries, #defines, etc.
If you are not familiar with Premake4, and you still want to use Premake4 with the SDK, then you first must become familiar with Premake4 without the SDK. I could give you an entire premake4.lua script that you could just plug in, change a few lines, and everything would magically work (and if you want that, you could look at how the SDK's examples are built. Specifically examples/premake4.lua). But if I did that, you wouldn't learn anything. You'd just be copy-and-pasting code, without having the slightest understanding of how it works.
So instead, I'm going to tell you what steps you should take to learn how to use Premake4.
Step 1: Hello World, Premake-style. You should make a single .cpp file that is a Hello World application. It just has a standard main function that prints "Hello World" to the console. That's the easy part.
The hard part is the Premake4 script. Rather than creating a Visual Studio project directly, you are going to write a Premake4 script to build that project for you.
The Premake4 documentation can walk you through the steps of making your first solution and project. You will of course need to add files to that project. You should also learn how to use configurations, so that you can have a Debug and Release build. The Debug build should have symbols, and the Release build should be optimized.
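To make this concrete, a minimal premake4.lua for this step could look something like the sketch below (names are illustrative); running premake4 vs2010 next to it generates the Visual Studio solution:
-- premake4.lua: a minimal Hello World build (sketch)
solution "HelloWorld"
    configurations { "Debug", "Release" }

project "HelloWorld"
    kind "ConsoleApp"
    language "C++"
    files { "main.cpp" }

    configuration "Debug"
        flags { "Symbols" }   -- debug symbols

    configuration "Release"
        flags { "Optimize" }  -- optimized build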
Step 2: Multiple projects. Here, you have two .cpp files: test.cpp and main.cpp. In test.cpp, place a function that prints something. The function shouldn't take parameters or anything. In main.cpp, you should have the main function that calls the function defined in test.cpp. There should also be a test.h which has a prototype for the function defined in test.cpp.
The trick here is that you aren't compiling them into the same executable. Not directly. You want two projects: one named test and one named main. The test project should be a static library, which compiles test.cpp. The main project will be the actual executable, which compiles main.cpp. Both of them should include test.h in their file lists.
Here, you're learning that solutions can have multiple projects. The two projects have different file lists. Each project can have a separate kind, which determines the type of build for that project alone. The test project should be a StaticLib, while the main project should be a ConsoleApp.
You will also need to learn to use the links command to link them together. The main project should use links to specify test. test does not need to link to something.
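A sketch of that two-project solution (again, only illustrative):
-- premake4.lua: one solution, two projects (sketch)
solution "Step2"
    configurations { "Debug", "Release" }

project "test"
    kind "StaticLib"          -- compiled into a static library
    language "C++"
    files { "test.cpp", "test.h" }

project "main"
    kind "ConsoleApp"         -- the actual executable
    language "C++"
    files { "main.cpp", "test.h" }
    links { "test" }          -- link the static library in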
Step 3: Mastering directories.
Here, you're going to do the same thing as Step 2. Except for one thing: put test.h and test.cpp in a different directory (a subdirectory of the current one). You also want a test.lua file in that directory, which you will execute from your main premake4.lua file with a dofile command. The test.lua is where you define your test project. You can call dofile on the test.lua file anytime after you have created the solution with the solution command.
Note that the main project will need to change the directory where it finds test.h. You will also need to use the includedirs command in the main project to tell the compiler where to search for the test.h header you include in main.cpp.
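Sketched out, the top-level script and the sub-script could look like this (paths are illustrative; Premake4 resolves paths relative to the script in which they appear):
-- premake4.lua (top level, sketch)
solution "Step3"
    configurations { "Debug", "Release" }

dofile "test/test.lua"        -- defines the "test" project

project "main"
    kind "ConsoleApp"
    language "C++"
    files { "main.cpp" }
    includedirs { "test" }    -- so that #include "test.h" resolves
    links { "test" }

-- test/test.lua (paths here are relative to this file)
project "test"
    kind "StaticLib"
    language "C++"
    files { "test.cpp", "test.h" }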
Step 4: Back to the SDK. At this point, you should now be familiar enough with Premake4 to look back at the instructions I pointed you to and understand them a bit better. Then, just do what the instructions say. When it tells you what the first line of your script should be, make that the first line of your script. Put the UseLibs function where it says to put them; it even gives you an example of where it goes. Think of UseLibs as a fancy combination of links and includedirs.

Comparing VB6.exes

We're going through a massive migration project at the minute, trying to validate that the code deployed to the live estate matches the code we have in source control.
Obviously the .NET code is easy to compare because we can disassemble it. I don't believe this is possible with VB6 exes because of the manner of compilation.
Does anyone have any ideas on how I could validate that the compiled executable in Live matches the source code we have?
Thanks
Visual Basic had (has) two ways of compiling: one for the interpreter (called P-code) that results in smaller binaries, and a second that generates "regular" Windows .exe files (called native), which was introduced because it was supposed to be faster than P-code, although the compiled file size increases with this option.
If your compilation was using p-code, it is in theory possible to restore the sources.
Either way it is pretty difficult to do, but there are tools that claim they can partially do this. One that I know of (never tried it, but there is a trial version) is VB-decompiler:
http://www.vb-decompiler.org/
Unfortunately that's almost impossible. Bear in mind that VB6 code compiled on different machines will have different exe sizes and deployment requirements.
This is why the old VB'ers had a dedicated machine to compile their code.
This won't help you with already deployed items, but if you upped the revision number on every compile (there is a project setting to do this for you automatically) then you could easily compare version numbers.
My old company bought a copy of that VB-Decompiler, and as noted before, VB5/6 can generate P-code. The tool did produce some source code from it, and failing that, assembly code which could be "read".
If you have all the code you compiled, you could compare its CRCs against what is deployed in the field. If you don't have the original compiled code, it depends on how you compiled it: if you used P-code rather than native code you may be able to disassemble it, but the disassembly will look nothing like your source code. I doubt you would have shipped the PDBs with the exes, but if you did, you could certainly use those to compare with the source code in your repository.
Have a trusted computer that can check out the various libraries and exes you make and compile them automatically. Keep those in a read-only but accessible location. Then do a binary comparison between the deployed site and your comparison site.
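A minimal sketch of that final comparison on Windows (all paths are illustrative):
rem Byte-for-byte comparison of a deployed exe against the trusted build
fc /b \\liveserver\app\MyApp.exe D:\trusted-builds\MyApp.exe

rem Or compare hashes instead
certutil -hashfile \\liveserver\app\MyApp.exe SHA256
certutil -hashfile D:\trusted-builds\MyApp.exe SHA256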
However, I am not sure of the logic of disassembling the compiled units. My company, and most other places I know of, use a combination of a build computer and unit testing. In our company the EXE we make is a very thin shell over a bunch of libraries. For example, a button click is passed to a UI ActiveX DLL that does the actual processing. What we do after a build is run a special EXE that performs our list of unit tests. If they all pass, we know our libraries, where 90% of our code is, are good. As for the actual EXE, we have a manual procedure that takes about two hours, and then we are good. It is rare for any errors to happen in the EXE.
