We have Julia code that processes data for our users and displays the results.
I need to build/compile it into a Windows executable, because the users (financial analysts) have very limited IT knowledge: the work environment is remote, they cannot install or use Julia, and they don't know how to work with .jl files, the command line, etc. It needs to be something very simple: an executable/binary that is easy to distribute and use.
I tried PackageCompiler, but it doesn't seem possible to build a standalone Win32/64 executable with all dependencies included; I couldn't create anything useful and easy to distribute.
Please help. Thanks.
I am developing an OS for embedded devices that runs bytecode. Basically, a micro JVM.
In the process of doing so, I am able to compile Java applications to bytecode(-ish), flash that onto, for instance, an ATmega1284P, and run it.
Now I've added support for C applications: I compile and process them using several tools, and with some manual editing I eventually get bytecode that runs on my OS.
The process is very cumbersome and heavy, and I would like to automate it.
Currently, I am using makefiles for automatic compilation and flashing of the Java applications & OS to devices.
All the steps for a C application, roughly, are as follows; each is a separate manual step:
(1) Use Docker to run a Linux container with lljvm that compiles a .c file to a .class file (see also https://github.com/davidar/lljvm/tree/master)
(2) Convert this .class file to a Jasmin file (https://github.com/davidar/jasmin) using the ClassFileAnalyzer tool (http://classfileanalyzer.javaseiten.de/)
(3) Manually edit this Jasmin file in a text editor, replacing/adjusting some strings
(4) Convert the modified Jasmin file back to a .class file, again using jasmin
(5) Put this .class file in a folder where the rest of my makefiles (the ones that already build and deploy the OS and the class files from the Java apps) can take over.
My current option seems to be to just keep using makefiles, but this is a bit unwieldy (I already have 5 different makefiles, and this would extend that chain further). I've also read a bit about SCons. In essence, I'm wondering which tools, or what kind of approach, are recommended for complicated builds like this.
Hopefully this helps a bit, though the question as such could probably be the subject of a heated discussion without many helpful results.
As others pointed out in the comments, you really need to automate the steps from your .c file up to the point where you can integrate the result with the rest of your system.
There is generally nothing wrong with make, and you would not gain much by switching to SCons. You would get more ways to express what you want to do: among other things, if you wanted to write that automation directly inside the build system and its rules, you could use Python and not only shell (though if that were the concern, you could just as well call that Python code from make). But the essence of target, prerequisite, recipe is still there, and so is the need to write the necessary automation for those .c-to-integration steps.
If you really wanted to look into alternative options, Bazel might be of interest to you. The downside is that the initial effort to write the rules to fit your needs could be costly and, depending on the size of your project, might just be too much. On the other hand, once that is done, it would be very easy to use (applying those rules to a growing code base), and you could also ditch the container and rely on Bazel's more lightweight sandboxing and external rules to get the tools and bits you need for your build, all with a single system for build description.
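Whichever system drives it, the automation itself is small. Here, purely for illustration, is a minimal sketch of the five manual steps driven from a single program; I've written it in Go with os/exec, but a make recipe or a Python script called from make would do just as well. Every tool invocation, image name, flag, and file name below is an assumption standing in for your real ones.

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// run executes one external tool and aborts the pipeline on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s failed: %v", name, err)
	}
}

func main() {
	wd, err := os.Getwd()
	if err != nil {
		log.Fatal(err)
	}
	// (1) Compile app.c to app.class with lljvm inside the container.
	run("docker", "run", "--rm", "-v", wd+":/src", "lljvm-image", "lljvm-cc", "/src/app.c")
	// (2) Disassemble the class file to Jasmin with ClassFileAnalyzer.
	run("java", "-jar", "ClassFileAnalyzer.jar", "app.class")
	// (3) Apply the string edits previously done by hand in an editor.
	src, err := os.ReadFile("app.j")
	if err != nil {
		log.Fatal(err)
	}
	patched := strings.ReplaceAll(string(src), "OLD_STRING", "NEW_STRING") // placeholder edit
	if err := os.WriteFile("app.j", []byte(patched), 0o644); err != nil {
		log.Fatal(err)
	}
	// (4) Reassemble the patched Jasmin file into a class file.
	run("java", "-jar", "jasmin.jar", "app.j")
	// (5) Drop the result where the existing makefiles take over.
	if err := os.Rename("app.class", "build/classes/app.class"); err != nil {
		log.Fatal(err)
	}
}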
I have a Go app which relies heavily on static resources like images and jars. I want to install that Go executable on different platforms, like Linux, Mac, and Windows.
I first thought of bundling the resources using https://github.com/jteeuwen/go-bindata, but since the files (~100) amount to ~20 MB or so, it takes a really long time to build the executable. I thought a single executable would be an easy way for people to download and run it, but it seems that is not an effective approach.
I then thought of writing an installation package for each platform, e.g. .rpm or .deb packages. These packages would contain all the resources and put them into platform-specific predefined locations that the Go executable can reference. The only problem is that I'd have to handle this in the Go code: if it is Windows, load the files from, say, C:\go-installs, and if it is Linux, load them from, say, /usr/local/share/go-installs. I want the Go code to be as platform-agnostic as it can be.
Or is there some other strategy for this?
Thanks
This possibly doesn't qualify as a real answer, but still…
As to your point №2, one way to handle this is to exploit Go's support for conditional compilation: you might create a set of files like res_linux.go, res_windows.go, etc., and put the same set of variables in each, pointing to different locations, like
var InstallsPath = `C:\go-installs`
in res_windows.go and
var InstallsPath = `/usr/share/myapp`
in res_linux.go, and so on. Then, in the rest of the program, just reference the res.InstallsPath variable and use the path/filepath package to construct the full pathnames of the actual resources.
Of course, another way to go is to do a runtime switch on runtime.GOOS variable—possibly in an init() function in one of the source files.
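For example, here is a minimal sketch of that runtime-switch variant (the res package name and the concrete paths are placeholders):

package res

import "runtime"

// InstallsPath is the platform-specific directory the resources
// were installed into; adjust the paths to match your packages.
var InstallsPath string

func init() {
	switch runtime.GOOS {
	case "windows":
		InstallsPath = `C:\go-installs`
	case "darwin":
		InstallsPath = "/usr/local/share/myapp"
	default: // linux, the BSDs, ...
		InstallsPath = "/usr/share/myapp"
	}
}

The rest of the program would then do something like filepath.Join(res.InstallsPath, "images", "logo.png") to name an actual resource.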
Pack everything in a zip archive and read your resource files from it using archive/zip. This way you'll have to distribute just two files—almost "xcopy deployment".
Note that while on Windows you could just have your executable extract the directory from the pathname of itself (os.Args[0]) and assume the resource file is located in the same directory, on POSIX platforms (GNU/Linux and *BSD etc) the resource file should still be located under /usr/share/myapp or a similar place dictated by FHS (or particular distro's rules), so some logic to locate that file will still be required.
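Here is a minimal sketch of the zip variant, assuming a resources.zip next to the executable (the archive name and the resource path are placeholders):

package main

import (
	"archive/zip"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Look for the archive next to the executable, as described above.
	dir := filepath.Dir(os.Args[0])
	r, err := zip.OpenReader(filepath.Join(dir, "resources.zip"))
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// Read one resource file out of the archive.
	for _, f := range r.File {
		if f.Name != "images/logo.png" {
			continue
		}
		rc, err := f.Open()
		if err != nil {
			log.Fatal(err)
		}
		data, err := io.ReadAll(rc)
		rc.Close()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("read %d bytes of %s\n", len(data), f.Name)
	}
}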
All in all, if this is supposed to be a piece of FOSS, I'd go with the first variant, to let the downstream packagers tweak the pathnames. If this is proprietary (or just niche) software, the second idea appears to be rather OK, as you'll be playing the role of downstream packager yourself.
So I made an executable program in C++ that uses the Windows library and some others (string, ctime, lmcons...). It works great when it runs on my computer, but when I transfer the executable to a computer that does not have some of those libraries, the program does not run. How do I "add" those libraries in with my code?
1 - You need to identify the libraries that must exist on the target system for your application to run.
2 - You need to create a package that contains these libraries. It could be an installer or a zip file. Depending on the libraries, sometimes they need to be registered on the system, sometimes just dropped in. If you use install-packaging software, you can set up the registration [if needed]. If you distribute a zip or an FTP folder, you may need to supply a script file. Sometimes the libraries are part of some Microsoft package, and that package can be a prerequisite for running your application; you may pack it into your installation and have it installed silently. There are many ways, as you can see.
3 - How you distribute your application and its supporting libraries is up to you, but it's best when the user doesn't have to jump through hoops to install your stuff. The user should click and forget.
I've got a certain project that I build and distribute to users. I have two build configurations, Debug and Release. Debug, obviously, is for my own use when debugging, but there's an additional wrinkle: the Debug configuration uses a special debugging memory manager, which depends on an external DLL.
There have been a few times when I've accidentally built and distributed an installer package with the Debug configuration, and it has then failed to run once installed, because the users don't have that special DLL. I'd like to keep that from happening in the future.
I know I can get a program's dependencies by running Dependency Walker, but I'm looking for a way to do it programmatically. Specifically, I have a way to run scripts while creating the installer, and I want something I can put in the installer script that checks the program for a dependency on this DLL and, if it finds one, makes the installer-creation process fail with an error. I know how to create a simple CLI program that takes two filenames as parameters, runs a DependsOn function, and produces output based on the result, but I don't know what to put in the DependsOn function. Does anyone know how I'd go about writing it?
You can read the PE import table to find out which DLLs are required at load time. This is what Dependency Walker does, and also the dumpbin tool included with the Microsoft Platform SDK (which is installed by Visual Studio and is also available as a separate download). Some of the DbgHelp APIs provide access to information from the PE header, but why not just invoke the dumpbin tool and inspect its output? Since it's text-based and non-interactive, it should be pretty straightforward to integrate into your installer build process. Dependency Walker can also run in a non-interactive mode with text output.
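For example (dumpbin's /dependents switch dumps the names of the DLLs in the import table; the executable name is a placeholder):

dumpbin /dependents MyProgram.exe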
If you do need to retrieve the information without the help of any other tool, the ImageDirectoryEntryToDataEx function is a good place to start. Also, here's a question that shows how to do it manually (but do use ImageHlp instead, which knows about all the various variants of the PE format):
Printing out the names of implicitly linked dll's from .idata section in a portable executable
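As an illustration of how small such a DependsOn checker can be, here is a sketch in Go (a deliberate departure from the C++/ImageHlp route above) that reads the import table via the standard library's debug/pe package; the file names on the usage line are placeholders:

package main

import (
	"debug/pe"
	"fmt"
	"os"
	"strings"
)

// dependsOn reports whether the PE file at exePath imports dllName
// (compared case-insensitively, as DLL names are on Windows).
func dependsOn(exePath, dllName string) (bool, error) {
	f, err := pe.Open(exePath)
	if err != nil {
		return false, err
	}
	defer f.Close()

	// Each entry has the form "symbol:DLLNAME.dll".
	syms, err := f.ImportedSymbols()
	if err != nil {
		return false, err
	}
	for _, s := range syms {
		if i := strings.LastIndex(s, ":"); i >= 0 && strings.EqualFold(s[i+1:], dllName) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Usage: dependson MyProgram.exe debugmm.dll
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: dependson <exe> <dll>")
		os.Exit(2)
	}
	found, err := dependsOn(os.Args[1], os.Args[2])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if found {
		fmt.Println("dependency found")
		os.Exit(1) // non-zero exit lets the installer build fail on it
	}
	fmt.Println("dependency not found")
}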
I have been a CS student for a while, and it seems like I (and many of my friends) never understood what's happening behind the scenes when it comes to make, install, etc.
Correct me if I'm wrong, but is make a way to compile a set of files?
And what does it mean to "install a program on a computer", say on Windows? When I code in languages such as Java or Perl, we don't install what we wrote; we compile it (or, for an interpreted language, just run it). So why do programs such as Skype need to be "installed"?
Can anyone clarify this? I feel like this is something I need to know as a programmer.
Make is a build system
Make is a build system, which is simply a way to script the steps needed to compile a program. Make specifically can be used with anything, but it is usually used to compile C or C++ programs. It gives programmers a simple, standard way to script the preparation of their program, so that it can be built and installed with ease.
Why a build system
You see, if your program is a simple, one-source-file program, then using make might be overkill, as compiling the simplest C program is as simple as
gcc simpleprogram.c -o simpleprogram.out
However, as the size of the software grows, so does its complexity, and so does the complexity of how it needs to be built. For example, you may want to detect which version of each library is installed on the computer you are compiling on, you may want to run some tests after compiling your program to verify that it works correctly, or you may want to automatically download some of your program's dependencies.
Most software needs a mixture of these tasks eventually. So, instead of reinventing the wheel, developers use a build system that lets them script all of this. If you are familiar with Java (which you mentioned), a build system comparable to make, but used in the Java world, is Apache Ant.
Why install
Well, let's assume that you ran the "make" command but not "make install". The "make" command is usually used just to prepare the program for compilation and then compile it. However, once your program is compiled, all you have is an executable in the directory in which you compiled it. The program, its documentation, and its configuration files haven't been put in the directories needed for all users to use it. That's what "make install" is for: it takes all the files associated with the program you just compiled and puts them in the appropriate directories, so that the program becomes available to everyone and each component is in the place your operating system expects.
make is a bit of software that reduces the amount of code that needs to be recompiled: it compares the modification times of the source code and the target. If the source has changed, a compile is done to reconstruct the target; otherwise that step can be skipped.
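As a rough illustration (this is what the decision boils down to, not how make is implemented), here is that timestamp check sketched in Go, reusing the file names from the gcc example above:

package main

import (
	"fmt"
	"os"
)

// needsRebuild mirrors make's basic rule: rebuild the target if it
// is missing or older than its prerequisite.
func needsRebuild(target, source string) (bool, error) {
	t, err := os.Stat(target)
	if os.IsNotExist(err) {
		return true, nil // no target yet, so it must be built
	}
	if err != nil {
		return false, err
	}
	s, err := os.Stat(source)
	if err != nil {
		return false, err
	}
	return s.ModTime().After(t.ModTime()), nil
}

func main() {
	rebuild, err := needsRebuild("simpleprogram.out", "simpleprogram.c")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("rebuild needed:", rebuild)
}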
Installing software is placing the executables/configuration files into the right places, perhaps constructing some files along the way, e.g. the usernames in your Skype example.