How to find executable references programmatically on Windows

I have a server with many executable files (.exe, .dll), and these files in turn link to other executables... and so on.
To prevent deployment errors, I need a way to search for every dependency, and its dependencies, and check for any platform incompatibility (32 or 64 bit).
I have no problem detecting a binary's platform, but I don't know how to get a list of dependencies.
Any suggestions?

Check out http://www.dependencywalker.com/ - this is the best tool for seeing the module dependency hierarchy.

This question seems to get asked regularly here. I don't recommend reverse engineering your dependencies as you propose. The problem is that on different platforms the dependencies may differ. If you can, you should work it out from the source. This is not as hard as it may seem at first glance.

For managed code you can use reflection to go through all the managed dependencies, and the Dependency Walker that tenfour pointed out should help with the native binaries. Problems will arise when you start factoring in other types of dependencies that are not as easily checked: registry keys, COM, configuration files/settings.
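For the managed side, here is a minimal sketch of what that reflection walk can look like; the class name and the use of a command-line path are made up for illustration, and you would recurse over each referenced assembly to build the full tree.

    // Sketch: list the direct managed references of an assembly.
    using System;
    using System.Reflection;

    class ManagedDependencyLister
    {
        static void Main(string[] args)
        {
            // args[0] is a path to a managed .exe or .dll
            Assembly assembly = Assembly.LoadFrom(args[0]);

            // Each AssemblyName is a direct managed dependency; walk these
            // recursively (and check ProcessorArchitecture) for the full picture.
            foreach (AssemblyName reference in assembly.GetReferencedAssemblies())
            {
                Console.WriteLine("{0}, Version={1}", reference.Name, reference.Version);
            }
        }
    }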
Your best bet would be to develop a set of acceptance tests for deploying to your server, or to set up some sort of staging environment before pushing to a live server. This will help catch deployment errors as well as a number of other functional defects that might have slipped through initial testing.

Related

How to publish a Maven project

I am developing a Java framework/API to solve a problem at a client. The code/idea is my property (not the client's). I think it might be useful for others, so I would like to publish it as an open source project.
By publishing I mean bringing it out in the open - making it available as a Maven project.
I can think of conforming to the standard Maven structure, proper documentation and example usage available on a web site, unit tests, and maybe some code coverage threshold.
But does it have to be run by some committee? Do I have to present it to somebody? What steps do I need to take to eventually have it available as a Maven dependency?
There's no committee or approval process that I know of. All you have to do is put your code into a public GitHub repo. This is how open source software works.
Per Kapep's excellent suggestion below, you have to choose a license as well. Apache, Creative Commons, GNU GPL, MIT - these are a few of your choices. Know what they mean before you decide.
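If you do want it to be consumable as a Maven dependency, the published pom.xml needs at least proper coordinates and the license you settled on. A minimal sketch, with placeholder names:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>

      <!-- Placeholder coordinates; these are what consumers will reference -->
      <groupId>com.example.myframework</groupId>
      <artifactId>my-framework</artifactId>
      <version>1.0.0</version>
      <packaging>jar</packaging>

      <!-- Declare the chosen license so tools and users can see it -->
      <licenses>
        <license>
          <name>Apache License, Version 2.0</name>
          <url>https://www.apache.org/licenses/LICENSE-2.0</url>
        </license>
      </licenses>
    </project>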
Your problem begins on that day - you'll have to make others aware of it and see if it's adopted by others. If it's good, you'll have the nice problems of dealing with a user base and having others change your code. If not, it'll languish in the repo.

Why do we need complexity in dependency management?

I am not sure if the title of the question is correct, but please read the question.
I have been working in C/C++ for most of my working life (close to 11 years). We only had C/C++ source/header files, and all dependencies were managed by Makefiles. Things were simple and manageable.
For the last 1.5 years I have shifted to the Java domain, and I find it extremely irritating that the most difficult aspect of working with anything new is the dependency manager: Maven, Leiningen, Buildr, sbt, etc., etc.
Whenever I download anything new from the open source world, a significant amount of time has to be spent just to set up the compile, build, and run environment, even when I am using Eclipse. Why can't all the dependencies be shipped along with the software being downloaded? Why must tools like Maven and Leiningen make a separate internet connection to download the dependencies? I know that Maven keeps a local repository and should be able to find a dependency locally, since it downloads half the internet anyway, but why is this model used at all? I am behind a firewall and not everything is accessible, so the tools fail to download dependencies. I am sure the same situation exists in most work environments.
Recently I started with Clojure, and boy, has it been a pain to get Eclipse configured for Clojure. Leiningen is supposed to be some magic that must be used with any Clojure development. Sometimes it feels like learning Leiningen is more important than learning the concepts of Clojure. I downloaded the so-called 'standalone' jar file for Leiningen because 'self-install' was not working for me, but it fooled me: as soon as I run the 'lein' command it makes an internet connection and tries to download something. Why? It won't even print the help menu without connecting to the internet. Why? There is no way I can satisfy its demands without bypassing my firewall, because I don't know, and no one can tell me, everything it wants. There is simply no other way.
And everyone seems to be inventing their own tool. Java had Ant, which was simple, and then went to Maven; some projects use the Ruby-based Buildr; Clojure has Leiningen; Scala has sbt; Go has something else. Why? Why do we need this added complexity in a world already full of complexity? Why can't there be just one tool?
All you experts in Java technology, please excuse my rant. I am sure this question will be downvoted and closed as coming from someone who is not trying hard enough to understand things. But please believe me, I have spent enough hours battling this unnecessary complexity.
I just want to know how others get around this, or whether I am the only unfortunate one facing these issues.
I guess this question cannot really accept a single answer. I can humbly provide you with some elements; hopefully they will help you get some perspective on the problem.
There are mainly two problems I identify with Java build systems:
some of them are declarative while others use scripts
the fragmentation of the Java tools for building and exercising control is tied to people and the stewardship of the Java space, not so much to the technological choices.
Maven is the epitome of defining your build using a formal grammar, in a standard manner. Your pom.xml file contains a lot more than just your build: it is the identity of your artifacts, the project metadata, the modules, and the plugins brought in. It treats the declaration of dependencies and repositories with particular attention.
Maven is declarative.
For a certain population of programmers this is great, especially as they don't create new projects very often. It works well over time and consolidates the build nicely.
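To illustrate that declarative style, this is roughly what dependency and repository declarations look like in a pom.xml; the artifact and repository names below are made up:

    <!-- pom.xml fragment: everything is declared, nothing is scripted -->
    <dependencies>
      <dependency>
        <groupId>org.example</groupId>
        <artifactId>some-library</artifactId>
        <version>2.1</version>
      </dependency>
    </dependencies>

    <repositories>
      <repository>
        <id>company-mirror</id>
        <url>https://repo.example.com/maven2</url>
      </repository>
    </repositories>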
Ant is a different system, where you define tasks that will execute, chained in a particular order. All the definitions are made using XML; in effect, you are writing scripts and declaring how they will be stitched together.
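A small, made-up build.xml shows that chaining: each target is a list of tasks, and depends= stitches the targets together:

    <project name="example" default="jar">
      <!-- Compile the sources -->
      <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes"/>
      </target>

      <!-- Package; depends= forces the compile target to run first -->
      <target name="jar" depends="compile">
        <jar destfile="build/example.jar" basedir="build/classes"/>
      </target>
    </project>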
Buildr (full disclosure: I am a committer there) is a build system which was created out of frustration with the inefficiency of the declarative approach for cases where the build needs to do additional steps and complex testing, and with the rigidity of using XML for a build. It is script-based, favoring convention over configuration (providing a few good defaults, but letting you drive when you need to change things).
I am not familiar with Gradle and sbt, but from what I have heard they extend and build on this approach.
So I hope this gives you a better picture of the landscape in terms of build tools.
The reason no standard build tool emerged is probably tied to the fact that Sun didn't push one with Java. Eventually, I think they adopted Ant (I have seen most JSR jars being built with it). There have also been some products built in this space by extending some of those build systems; there is always going to be a huge difference between people being paid to maintain code and people doing it on the side.
And well, people argue. Build systems are a great way to start a flame war. We have a hard time agreeing on a standard though some of the common elements are now settling well around the Maven artifacts.
As for the need to download the internet over and over again, it's a rather long story, but here are a few things that may trigger an unnecessary download:
any of the dependencies using a SNAPSHOT version will try to get the latest snapshot. This is a great scheme, but it takes its toll: you might depend on something that depends on a snapshot, and get a download because of that.
Maven doesn't re-download artifacts that are already present, but it sometimes checks their md5 checksums. This is easy to avoid: just use the -o (offline) option from the command line.
Tools like Buildr were built around fixing this issue once and for all. First, you only download what you said you would. Second, no connection is made again unless you ask for it. By default, Buildr doesn't play the transitive-dependencies game, though you can ask for it; you have to do so explicitly.
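For the firewall situation specifically, one common workaround (assuming the dependencies can be fetched once, for example through a proxy or on another machine) is to prime the local repository and then stay offline:

    # Resolve and cache everything the project needs while a connection is available
    mvn dependency:go-offline

    # From then on, build without touching the network
    mvn -o clean install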
I hope this was informative and that your journey in Java land becomes less painful going forward.

What is the appropriate way to build WSO2 Carbon tags?

I'm trying to build multiple tags of WSO2 Carbon side-by-side for comparison purposes, but I'm concerned I may be missing something about the directory layout and how to do the builds. Please could I have some help?
At present, I've checked out what I think are the relevant tags from:
https://svn.wso2.org/repos/wso2/tags/carbon/3.0.0/
https://svn.wso2.org/repos/wso2/tags/carbon/3.1.0_core/
https://svn.wso2.org/repos/wso2/tags/carbon/3.2.0/
https://svn.wso2.org/repos/wso2/tags/carbon/3.2.2/
https://svn.wso2.org/repos/wso2/tags/carbon/3.2.3/
I've then tried running Maven builds from the top-level directories of each of the checkouts (in various ways, some involving skipping the tests and others not), with varying results (almost all of them unsuccessful in one way or another, whether due to missing artifacts, failing tests or other reasons). I also tried building 3.2.2 and 3.2.3 from the .../carbon/3.2.2/patch-releases/3.2.2 directory and the .../carbon/3.2.3/patch-releases/3.2.3 directories, as per the answer #ThiliniIshaka gave here:
WSO2 sourcecode of identity server (wso2is-3.2.3-src.zip) is always built with errors
This seemed to work (after some fiddling around) for 3.2.2, but some of the tests for 3.2.3 fail and this brings the build down (I can make it work with the -fn flag to Maven, but that just results in what looks like an incomplete build). Furthermore, the earlier tags don't seem to have a corresponding patch-releases directory, so the same technique won't work for them even if I get it working for 3.2.3.
As an aside, I'm also deeply confused by things such as the 3.2.2 tag containing a 3.2.3 directory under patch-releases, etc.
All of this leads me to think I may be missing the point in some fundamental manner :)
The questions I thus have are:
Am I checking out the right things in the first place?
From which directories and how should I be building each of the tags please?
Do I need the same version of Maven for all of the tags?
Is there any good build documentation for the various different versions explaining some of this please? I've found various technical blogs, but seemingly nothing foolproof and comprehensive (I'm probably looking in the wrong places).
Many thanks.
Answering the above queries:
Yes, these tags are created for the relevant branch and point releases of Carbon.
As the previous thread suggests [1], you can build the source; could you please provide us with the issues you get when building it?
Yes, you need to build the above tags with Maven 2.
Only the trunk [2] (where normal development is going on) needs Maven 3.
Some hints are provided in this blog post.
Start from the root level with mvn install (to skip running tests, build with mvn install -Dmaven.test.skip=true). If you are building tags related to point releases, build from the patch-releases directory.
Hope this helps.
Thanks

Jenkins + Cmake + JIRA = CI of multiple interdependent projects?

We have a number of small projects within our system running on Linux (Slackware 7-11, slowly migrating to RHEL 6.0). Around 50-100 applications and 15-20 libraries. Almost all our applications use one or more of our libraries. Our source tree looks something like this:
/app1
/app2
/app3
/include
/foo/app4
/foo/app5
/foo/app6
/foo/lib1
/foo/lib2
/lib/lib3
/lib/lib4
/lib/include
Now, I've done some work creating some CMakeLists.txt files and built most of the libs and some of the apps. I'm fairly comfortable with using CMake to build. I did this with v2.6, and I recently (an hour ago) upgraded to 2.8. Each of the above projects has its own CMakeLists.txt file, specific to the project, to do building and installation (no packaging, yet).
I have a requirement to make use of and enforce continuous integration. I've installed and played around with Jenkins, and from what I've seen I'm very impressed. I'm also evaluating JIRA to do our issue tracking.
Just to get things up and going, I've done a cmake install on all the libs, so the apps can find them in the filesystem. Headers are installed to /usr/local/include and libs to /usr/local/lib. Is this a bad thing to do? Would it be better to tell cmake to look for the lib's source directory, use the export interface or the recently introduced ExternalProject_Add?
Because I'm going to be using Jenkins, I cannot be guaranteed that cmake can find the source or build directory. Of course, I can tell Jenkins to build the projects in order (or at least, build the dependencies first). If an update to a library breaks the building of another project, then I guess it'll be up to someone with 3/4 of a wit to determine this.
Thank you in advance
Just to get things up and going, I've done a cmake install on all the libs, so the apps can find them in the filesystem. Headers are installed to /usr/local/include and libs to /usr/local/lib. Is this a bad thing to do?
No, it is not a bad thing to do, but ideally your build should reproduce its resources from scratch. Things like portability and fixing build bugs become an issue if things need to be pre-installed on the system outside of the build process. If you are able to do it one of the other ways you mentioned, I would suggest that, but if it's going to make your build that much longer, it's something you need to feel out. My ideology is that everything should be movable to a new Jenkins machine with a fresh install at the drop of a hat; again, this isn't always achievable, but it is something to strive for.
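As an example of one of the 'other ways' mentioned, ExternalProject_Add can build a library from its source tree and stage the install inside the app's build directory instead of /usr/local; the paths below just follow the layout above and are only illustrative:

    # Hypothetical fragment for app1: build lib3 from source and stage it locally
    include(ExternalProject)

    ExternalProject_Add(lib3
      SOURCE_DIR ${CMAKE_SOURCE_DIR}/../lib/lib3
      CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps
    )

    # app1 then picks headers and libraries up from the staged install tree
    include_directories(${CMAKE_BINARY_DIR}/deps/include)
    link_directories(${CMAKE_BINARY_DIR}/deps/lib)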
Because I'm going to be using Jenkins, I cannot be guaranteed that cmake can find the source or build directory. Of course, I can tell Jenkins to build the projects in order (or at least, build the dependencies first). If an update to a library breaks the building of another project, then I guess it'll be up to someone with 3/4 of a wit to determine this.
Well, one of the things I do with interdependent jobs is have the successful build of one job trigger the jobs that depend on it. So, for example, if B depends on A and A fails, B will never be run, and whoever created the issue in build A is responsible for it, and so on. This prevents a cascading effect of broken builds that were all caused by a single broken dependency. I would also suggest that you keep the files produced by a particular build in its job folder and point the dependent jobs at the location of the required files. Again, keep your builds separate and clean.
I'm also evaluating JIRA to do our issue tracking.
I highly recommend JIRA as an issue tracking system for a company; you might want to look at this Jenkins plugin for integration. If you're using Git, and you don't mind hosting your code off site, I would give GitHub Issues a shot as well.
Good luck, you seem to be on the right track.

Managing internal 3rd Party Dependencies

We have a lot of different solutions/projects which are managed by different teams. Our solution needs to reference several projects that another team owns. We don't want to add these dependencies as project references because we do not intend to modify that code, we just want to use it. Also, we already have quite a few projects in our solution and don't want to add a bunch more, since it will slow down Visual Studio. So we are building these projects in a separate solution and adding them as file references to our solution.
My question is, how do people manage these types of dependencies? Should I just have some automated process that looks for changes to those projects, builds them, and checks the dlls into our source control, after which we treat them like other 3rd party dependencies? Is there a recommended way of doing this?
One solution, although it may not necessarily be what you are looking for, is to have each dependent sub-system perform a release. This release could be in the form of an MSI install, or just a network share of assemblies. When a significant change is made, that team could let you know, and you could run the install or a script to copy the files.
Once you get the release, you could put the assemblies into the GAC; that way you would not have to worry about copying them to your project bin folders.
Another solution, assuming you are using a build server or continuous integration of some kind, is to have a post-build step or process stage the files. Then, at any given moment, the developers of the other teams could grab the new files, or have a script or bat file pull them down locally.
EDIT - ANOTHER SOLUTION
It might be best to ask why you have these dependencies in the first place. Do you really need them locally when building your part of the application? Could you mock out the dependencies in your solution, allowing you to code, build, and run unit tests? Then the actual application would wire these up in your DEV/Test/Prod environments. Keeping your solution decoupled and dependency-free may be a better solution for the individual team. Leave the integration and coupling to when the application runs in a real setting.
(Not a complete answer, but still:)
Any delivery is better stored in a file/binary repository, as opposed to a VCS, which is meant to manage source history.
We prefer managing those deliveries in a repository manager like Nexus, and we use Maven to get back the right dependencies.
Even though those tools can be more Java-oriented, Nexus can store anything, and Maven is only there to read the pom.xml of each artifact and compute the right dependencies.
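As a rough sketch of the consuming side, assuming the other team deploys its released binaries to an internal Nexus under coordinates like the placeholders below (the dll type only works if the artifact was deployed with that type):

    <!-- Hypothetical pom.xml fragment: resolve another team's released binary
         from an internal Nexus instead of checking it into source control -->
    <repositories>
      <repository>
        <id>internal-releases</id>
        <url>https://nexus.example.com/repository/releases</url>
      </repository>
    </repositories>

    <dependencies>
      <dependency>
        <groupId>com.example.otherteam</groupId>
        <artifactId>shared-lib</artifactId>
        <version>1.2.0</version>
        <type>dll</type>
      </dependency>
    </dependencies>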
