I am trying to make/install Inkscape on WinXP. Although I am rather inexperienced in this area, I have a reasonably well developed MinGW infrastructure and have successfully built/installed a variety of packages, such as Cairo, Pango, GTK+, Glade, etc.
At the moment I am trying to make/install Inkscape. After a huge amount of work, it all configures and compiles under MSYS:
cd ${LOCALBUILDDIR} && \
cd inkscape-0.91 && \
./configure --prefix=${LOCALDESTDIR} CFLAGS='-std=gnu++11' CXXFLAGS='-std=gnu++11' && \
make CFLAGS='-std=gnu++11' CXXFLAGS='-std=gnu++11' && \
make install
and after a successful configure/compile I use
cd ${LOCALBUILDDIR} && \
cd inkscape-0.91 && \
make CFLAGS='-std=gnu++11' CXXFLAGS='-std=gnu++11' && \
make install
so that the config etc. files are not recreated/overwritten (I edit them manually, as discussed below).
This fails on the link with:
make[3]: Entering directory `/build32/inkscape-0.91/src'
CXXLD inkscape.exe
libtool: link: cannot find the library `/usr/lib/libintl.la' or unhandled argument `/usr/lib/libintl.la'
At no time have I ever had any directories called "/usr". I appreciate that this is the default on Unix etc., but my installation has its libs in D:/Apps/MingW/local32/lib, which certainly includes libintl.la (and all the required gettext and libintl bits, as it must for all the other packages to have succeeded). Indeed, the Inkscape configure and compile steps report finding libintl etc. correctly, and they pick up the correct paths for the compiler and the rest of the toolchain.
I am guessing that somewhere in the Inkscape configure/.in/.ac/.m4 etc. files, the link step for libintl has been hard-wired to the Unix default. I have tried many permutations of manually altering the "/usr/lib" lines in the config and libtool files, but I get the same link failure.
I would be grateful for any hints as to where to look for/correct this in inkscape (or even generally).
OK, sussed it. It wasn't libintl per se, even though that is what the error message names.
The problem was in libpopt.la (popt is required by Inkscape), which had a hard-wired "/usr/lib" dependency rather than the paths appropriate for the Windows/MinGW environment.
I kept running AgentRansack on the Inkscape build directories looking for "/usr" etc., when I should have run it on my /local32/lib directory.
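In hindsight, a quick grep over the installed libtool archives would have found the culprit straight away. A minimal sketch, assuming the MSYS shell; adjust the path to wherever your .la files live (e.g. /local32/lib in this setup):
# List the .la files that still reference the Unix default prefix.
grep -l '/usr/lib' /local32/lib/*.la
# Inspect the dependency_libs line of a suspect file, e.g. libpopt.la.
grep 'dependency_libs' /local32/lib/libpopt.la
# One possible fix (a backup copy is written to libpopt.la.bak):
# rewrite the stale prefix to the real MinGW location.
sed -i.bak 's|/usr/lib|/local32/lib|g' /local32/lib/libpopt.la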
It now compiles and links without complaint ... but doesn't launch ... oh well, on to that later :-(
I have been able to successfully compile Tcl/Tk frameworks on macOS (following instructions here). I use these frameworks inside an .app for distribution. I would like to customize my frameworks by adding extra extensions, for example drag and drop via TkDND (by the way, I really think this basic GUI feature should be an integral part of Tk...).
The instructions here seem to refer to adding the extension to a normal installation, not a framework, and I haven't found any explicit instructions for the latter. Sorry if the question is naive, but I am very inexperienced with Tcl/Tk. PS: my .app accesses Tk through Perl. I would very much appreciate any help/instructions/links.
I don't use tkdnd, so I do not have an answer specific to that installation.
My build script has blocks such as this one, which builds the 'tdom' extension:
cd $SRCDIR
cd tdom*
if [[ $? -eq 0 ]]; then
  # start from a clean tree
  make distclean
  # --with-tcl points at the Tcl framework's tclConfig.sh
  ./configure \
    --exec-prefix=$INSTLOC \
    --prefix=$INSTLOC \
    --with-tcl=$INSTLOC/Library/Frameworks/Tcl.framework/tclConfig.sh
  make CFLAGS="-O2 -mmacosx-version-min=${macosxminver}"
  make install
fi
However, linking to the Tk libraries may complicate things, and every package is different and uses different variables. So I will need to download tkdnd, build it, and see whether there are any issues; expect an upcoming edit to this answer.
(Edit: I fixed the script in the original question, so the following paragraph no longer applies.)
My changes to the init.tcl script are not quite perfect; as you can see, the wrong package version is loaded when I run via 'wish' (wish is in a different location than tclsh, which causes some issues). The local installation's path should be located earlier in auto_path. If you are using my script, you need to be aware of this.
bll-mac:$ ../darwin/64/tcl/bin/tclsh
% package require tdom
0.9.1
bll-mac$ ../darwin/64/tcl/bin/wish
% package require tdom
0.8.3
% package require tdom 0.9.1
0.9.1
There really isn't any difference between a framework and a normal installation; the framework just provides a structure for resource location.
Edit:
It appears that the following works to compile and install the tkdnd package. The redefinition of PKG_CFLAGS is necessary because the tkdnd makefile defines an argument that is not supported by the compiler (on Mojave), so PKG_CFLAGS is a copy of what is in the makefile without -fobjc-gc.
I only tried doing a package require tkdnd. I don't know how to use the package, so I did not try anything else.
cd $SRCDIR
cd tkdnd*
if [[ $? -eq 0 ]]; then
  # start from a clean tree
  make distclean
  # point the extension at both the Tcl and Tk frameworks
  ./configure \
    --prefix=$INSTLOC \
    --exec-prefix=$INSTLOC \
    --with-tcl=$INSTLOC/Library/Frameworks/Tcl.framework \
    --with-tk=$INSTLOC/Library/Frameworks/Tk.framework
  # override PKG_CFLAGS to drop the unsupported -fobjc-gc flag
  make CFLAGS_OPTIMIZE="-O2 -mmacosx-version-min=${macosxminver}" \
    PKG_CFLAGS="-DMAC_TK_COCOA -std=gnu99 -x objective-c"
  make install
fi
This seems to install the extension in the standard path (/usr/local/lib) but not in the Tk.framework. The "make install" step probably needs some extra variables to change that.
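If you want the extension to end up inside the framework itself, one possibility (untested, and the Resources path is just an assumption based on the layout used above) is to pass an explicit --libdir so that make install copies the package directory there:
# Hypothetical: redirect the package's install location into the Tk framework.
./configure \
  --prefix=$INSTLOC \
  --exec-prefix=$INSTLOC \
  --libdir=$INSTLOC/Library/Frameworks/Tk.framework/Resources \
  --with-tcl=$INSTLOC/Library/Frameworks/Tcl.framework \
  --with-tk=$INSTLOC/Library/Frameworks/Tk.framework
make install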
I would like to use pigz to compress massive tar archives.
I am using Cygwin. Unfortunately, pigz is not one of the standard Cygwin packages.
Anyone know how to install pigz under cygwin?
Below are the 2 techniques I tried without success:
1) The README on this webpage (or in the README file, if you download the source from here) says that you should be able to build it from source merely by
Type "make" in this directory to build the "pigz" executable.
When I do that on my machine, I get a ton of warnings starting with
pigz.c:2950:20: warning: unknown conversion type character 'j' in format [-Wformat=]
(intmax_t)g.in_tot, (intmax_t)len, tag);
and then this final error:
gcc -o pigz pigz.o yarn.o try.o deflate.o blocksplitter.o tree.o lz77.o cache.o hash.o util.o squeeze.o katajainen.o -lm -lpthread -lz
pigz.o:pigz.c:(.text+0xd4f8): undefined reference to `fsync'
collect2.exe: error: ld returned 1 exit status
make: *** [pigz] Error 1
That about exhausts my ability to build programs from source...
2) It looks like there is an old 2015 port of pigz version 2.3.3 to Cygwin Ports, the expanded Cygwin package repository. But that version is out of date (the latest pigz is 2.4). Indeed, it looks like Cygwin Ports has migrated to GitHub, and searching there for pigz finds nothing.
I am not even sure how to use Cygwin Ports! The project's homepage merely says
Follow the normal Cygwin installation instructions in order to install
any of the packages currently maintained by this project.
I assume that means running Cygwin's setup-x86.exe, but when it asks you to "Choose A Download Site" you presumably need to enter some URL for Cygwin Ports.
Web searching found little information. This link says to use http://sourceware.org/cygwinports/ but setup-x86.exe soon generated an error for that URL. The instructions in this link also did not work for me.
The C99 standard specifies the j specifier for printf(). (Note that the 99 refers to 1999. It is now 2018.) You can force the pigz compilation to not assume C99 by changing __STDC_VERSION__-0 >= 199901L || __GNUC__-0 >= 3 to 0. Then it won't try to use j.
Please let me know what the values of __STDC_VERSION__, __GNUC__, and __GNUC_MINOR__ are for your compiler.
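A quick way to check those values, assuming gcc is the compiler your Cygwin make ends up invoking, is to dump the predefined macros:
# Print the predefined macros that pigz's format-string logic tests.
echo | gcc -dM -E - | grep -E '__STDC_VERSION__|__GNUC__|__GNUC_MINOR__'
# And what the compiler reports when explicitly put into C99 mode:
echo | gcc -std=gnu99 -dM -E - | grep '__STDC_VERSION__'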
Also pigz requires POSIX compliance, which would provide the fsync() call. You can just delete the reference to fsync(), which would just result in the --synchronous and -Y options having no effect.
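If you would rather script those two source edits than make them by hand, something along these lines should work; the sed pattern is taken verbatim from the condition quoted above, so check it against your copy of pigz.c first:
# Make the format-string code stop assuming C99 (the quoted condition becomes 0).
sed -i 's/__STDC_VERSION__-0 >= 199901L || __GNUC__-0 >= 3/0/' pigz.c
# Locate the fsync() call site(s) so you can delete or stub them out by hand.
grep -n 'fsync' pigz.c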
To follow up on the comments above that I had with @varro and matzeri, I can now answer my own question: my suspicion was correct, RTools was the culprit. I found that if I temporarily removed all RTools elements from my Windows Path environment variable (for me: c:\Rtools\bin and c:\Rtools\mingw_32\bin), then I was able to get the pigz make to work.
After doing this Path edit, I uninstalled my existing Cygwin, reinstalled Cygwin, installed my usual extra packages (chere, openssh, subversion, zip, unzip) and all their dependencies, installed make and all its dependencies, and installed gcc-core (the C compiler) and all its dependencies. At that point, I was able to make pigz perfectly.
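If anyone else hits this, a quick way to check whether a foreign toolchain on the Windows Path (RTools or anything else) is shadowing Cygwin's tools is to ask the shell which binaries it actually resolves; anything outside /usr/bin is suspect:
# Show which gcc and make the Cygwin shell will actually run.
which gcc make
gcc --version
make --version
# List every gcc visible on the PATH; entries under /cygdrive/c/Rtools/...
# would indicate the shadowing toolchain.
type -a gcc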
There is a much easier way than compiling yourself. I had the same problem, and with a little bit of research found multiple ready-made .exe files (pigz.exe) for direct usage in Windows. I am using this one:
https://sourceforge.net/projects/pigz-for-windows/files/
The OP's main concern was "I would like to use pigz to compress massive tar archives", and I hope that this is a useful answer to that concern, although it does not explain how to get around the compiling problems.
Some additional notes:
Something some folks may not be aware of is that nothing keeps us from using normal Windows binaries from within Cygwin, and vice versa. That is, even if the OP has sophisticated Cygwin/bash (or whatever) scripts that drive pigz and the whole compression process, he could use the ready-made native Windows pigz linked above.
With or without Cygwin, there is no need to compile pigz yourself, unless you want the latest features or bug fixes.
Personally, I have been using the native Windows pigz version from within Cygwin for a while. AFAIK, pigz has no progress bar, which is somewhat inconvenient for me (from time to time I have to compress a single huge file of around 60 GB). A convenient way to get around this is the pv utility. Since I haven't found a native Windows version of it, and since I am too lazy to compile it for Windows myself, I use Cygwin's pv to display progress when I let the native Windows pigz compress those huge files.
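For reference, the kind of pipeline I mean looks roughly like this (the file names are placeholders, and pigz here is the native Windows binary on the PATH):
# pv reads the input and reports progress while pigz compresses to stdout.
pv huge_file.img | pigz > huge_file.img.gz
# The same idea for tar archives: tar writes to stdout, pv shows throughput,
# and pigz compresses using all available cores.
tar -cf - big_directory | pv | pigz > big_directory.tar.gz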
I'm tackling the problem of compiling the vmime library with MinGW, using this guide. As the guide states, first I need to compile the libiconv library with these commands (yep, I'm new to MinGW):
$ tar -xvvzf libiconv-1.13.1.tar.gz
$ cd ./libiconv-1.13.1
$ ./configure --prefix=/mingw #configures makefile, use /mingw as a prefix
$ make
$ make install
After all these commands, libiconv.dll.a appears in the libiconv-1.13.1\lib\.libs directory. Also, after the compile a /bin directory appears, and it contains only one library - libcharset-1.dll.
My question is: how do I know whether the library compiled properly, without errors? Should I check the output in the MSYS console? There are tons of checks, and that seems like a pretty boring task. Thanks in advance, glad to hear any advice!
You're building a GNU Autotools package.
./configure generates the makefile(s) needed by make to build the library on your particular system. If it thinks the library can't be built on your particular system, it will tell you why. It might miss some reason why you can't build the library, because the library developer(s) have to script the tests that it runs and might overlook some necessary ones. But if it misses something, then make will fail.
make executes all the commands necessary to build the library on your system. If any of them fail, then make will fail and will tell you so unmistakably.
Likewise, make install does everything necessary to install the library under the default or specified prefix path.
Classically, Unix tools (like the Autotools) will inform you when something goes wrong and not inform you when nothing went wrong.
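So in practice the check is simply the exit status of each step: if the whole chain completes, the build worked. A minimal sketch for the MSYS shell, reusing the commands from the question:
# Each && only runs the next step if the previous one exited with status 0,
# so reaching the final echo means configure, make and make install all succeeded.
./configure --prefix=/mingw && \
make && \
make install && \
echo "libiconv built and installed successfully"
# Or inspect the status of the last command explicitly (0 means success).
make
echo "make exited with status $?"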
First, a little background as to why I'm asking this question: our product's daily build script (run under Debian Linux by Jenkins) does roughly this:
Creates and enters a build environment using debootstrap and chroot
Checks out our codebase (including some 3rd party libraries) from SVN
Runs configure and make as necessary to build all of the code
Packages up the results into an install file that can be uploaded to our headless server boxes using our install tool.
This mostly works fine, but every so often (maybe one daily build out of 10), the part of the script that builds one of our third-party libraries will error out with an error like this one:
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash
/root/software/3rdparty/libogg/missing autoconf
/root/software/3rdparty/libogg/missing: line 81: autoconf: command not found
WARNING: 'autoconf' is missing on your system.
You should only need it if you modified 'configure.ac',
or m4 files included by it.
The 'autoconf' program is part of the GNU Autoconf package:
<http://www.gnu.org/software/autoconf/>
It also requires GNU m4 and Perl in order to run:
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
make: *** [configure] Error 127
As far as I can tell, this happens occasionally because the timestamps of the files in the third-party library are different (e.g. off by a second or two from each other, just due to the timing of when they were checked out from the SVN server during that particular build). That causes make to think that it needs to regenerate the configure script, so it tries to call autoconf (as in the log above) and errors out because the Autotools are not installed.
Of course the obvious thing to do here would be to install the Autotools in the build environment, but the build environment is not one that I can easily modify (for institutional reasons), so I'd like to avoid that if possible. What I'd like to do instead is figure out how to get the configure scripts (which I can modify) to ignore the timestamps and just always do the basic build that they do when the timestamps are equal.
I tried to finesse the problem by manually running 'touch' on some files to force their timestamps to be the same, and that seemed to make the problem occur less often, but it still happens:
./configure --prefix="$PREFIX" --disable-shared --enable-static && \
touch config* aclocal* Makefile* && \
make clean && make install ) || Failure "libogg"
Can anyone familiar with how automake works supply some advice on how I might make the "configure" calls in our daily build work more reliably, without modifying the build environment?
You could try forcing SVN to use commit times on checkout on your Jenkins server. These commit times can also be set in SVN if they don't work out for some reason. You could use touch -d or touch -r instead of just touch to avoid race conditions there.
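For concreteness, here is a sketch of both suggestions. The config file location assumes the default Subversion client setup on the Jenkins box, and the touch example uses configure.ac as the common reference time so make sees nothing as out of date; the libogg path is just a placeholder:
# 1) Have svn set each file's mtime to its commit time on checkout,
#    so every build sees identical, stable timestamps.
#    In ~/.subversion/config on the build machine, under [miscellany]:
#      use-commit-times = yes
# 2) Or normalize the timestamps yourself before running configure,
#    using touch -r so the generated files all get exactly the same time
#    as their source (use configure.in if that is what your libogg ships).
cd "$SRCDIR/libogg"
touch -r configure.ac aclocal.m4 configure
find . -name Makefile.in -exec touch -r configure.ac {} +
./configure --prefix="$PREFIX" --disable-shared --enable-static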
I have started recently to work with automake and autoconf and I am a little confused about how to distribute the code.
Usually, when I get code that builds with a configure script, all I get is the configure script and the code itself with the Makefile.am and so on. Usually I do
./configure
make
sudo make install
and that's all. But when I generate my configure from a configure.ac file, it spits out lots of files that I thought were just temporary, and if I give the code to a partner and he runs configure, it doesn't work; he needs to either rerun autoreconf or have all of these files (namely install-sh, config.sub, ...).
Is there something I am missing? How can I distribute the code easily and cleanly?
I have searched a lot, but I don't think I am searching for the right thing, because I cannot find anything useful.
Thanks a lot in advance!
Automake provides a make dist target. This automatically creates a .tar.gz from your project. This archive is set up in such a way that the recipient can simply extract it and run the usual ./configure && make && make install invocation.
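For example (the package name and version come from whatever your configure.ac declares; myproject-1.0 here is just a placeholder):
# On your machine: build the distribution tarball.
# make distcheck additionally verifies that the tarball configures, builds,
# installs and uninstalls cleanly, which is worth doing before shipping it.
make dist
make distcheck
# What your partner does with the resulting myproject-1.0.tar.gz:
tar xzf myproject-1.0.tar.gz
cd myproject-1.0
./configure
make
sudo make install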
It is generally not recommended to check the files generated by Autotools into your repository. This is because they are derived objects. You wouldn't check in your .o files!
Usually, it is a good idea to provide an autogen.sh script that carries out any actions required to re-create the Autotools build infrastructure in a fresh version control checkout. Often, it can be as simple as:
#!/bin/sh
autoreconf -i
Then make it executable with chmod +x, and the instructions for compiling from a clean checkout become ./autogen.sh && ./configure && make.