PDCurses missing separator error in Cygwin - makefile

I've been trying to get PDCurses working in Visual Studio 2015 for two days now and I'm still having trouble. The best solution I found was downloading Cygwin and using the makefile. When I run make on it, I receive a missing separator error.
$ make -f vcwin32.mak WIDE=Y
vcwin32.mak:10: *** missing separator. Stop.
When I view the file, these are the first 15 lines:
# Visual C++ NMakefile for PDCurses library - Win32 VC++ 2.0+
#
# Usage: nmake -f [path\]vcwin32.mak [DEBUG=] [DLL=] [WIDE=] [UTF8=] [target]
#
# where target can be any of:
# [all|demos|pdcurses.lib|testcurs.exe...]
O = obj
!ifndef PDCURSES_SRCDIR
PDCURSES_SRCDIR = ..
!endif
!include $(PDCURSES_SRCDIR)\version.mif
!include $(PDCURSES_SRCDIR)\libobjs.mif
Line 10:
!ifndef PDCURSES_SRCDIR
I'm learning how much I really suck at the command line. Any advice?

The makefile is (as the name vcwin32.mak suggests) for Visual C++ on win32, which uses a different make program, nmake. It won't work with GNU make.
PDCurses's win32 build doesn't depend on anything related to Cygwin. Most people would use Visual Studio, which provides nmake.
Occasionally someone builds win32 applications with Cygwin, but you'd have better success with MinGW (fewer extraneous libraries).
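In practice, the path of least resistance is to run the build from a Visual Studio developer command prompt (where nmake and cl are on the PATH) rather than from a Cygwin shell, using the usage line from the makefile's own header:
nmake -f vcwin32.mak WIDE=Y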

Related

What is the nmake equivalent of filter-out?

I've been given a makefile for Ubuntu, and I'm trying to use it with nmake on Windows 10.
nmake doesn't seem to recognize the filter-out function used in the following line:
OBJS_TEST = $(filter-out $(EXE_OBJ), $(OBJS))
Does nmake have a keyword with the same functionality?
For completeness, the lines from the beginning of the file before the above line (and a few lines below) are as follows:
EXE = main
TEST = test
OBJS_DIR = .objs
###############################################
### THE LINE IN QUESTION IS BELOW #############
OBJS_TEST = $(filter-out $(EXE_OBJ), $(OBJS))
###############################################
CPP_TEST = $(wildcard tests/*.cpp)
# CPP_TEST += uiuc/catch/catchmain.cpp
# The above line doesn't work with the "+=" extension in nmake; replace with below.
CPP_TEST = $(CPP_TEST) $(wildcard tests/*.cpp)
The error reported is:
fatal error U1001: syntax error : illegal character '-' in macro
As far as I'm aware, there is no equivalent to filter-out in nmake. nmake also does not support the wildcard function, so you'll have to deal with that as well. And I'm suspicious that your replacement for += won't work: in most versions of POSIX make, FOO = $(FOO) is illegal, as it gives an infinite loop of variable lookup. Maybe nmake works differently, though.
nmake is SO different from POSIX make and GNU make that you will either have to rewrite the makefile from scratch, or else just go get a version of GNU make for Windows (or build it yourself). GNU make is quite portable and runs well on Windows. That would be a LOT less work.
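If you must stay with nmake, the only real substitute is to maintain the filtered list by hand. A minimal sketch in nmake syntax (the object names here are hypothetical, not from the question):
# nmake has no filter-out and no wildcard: enumerate the objects explicitly.
EXE_OBJ   = $(OBJS_DIR)\main.obj
OBJS      = $(OBJS_DIR)\main.obj $(OBJS_DIR)\util.obj $(OBJS_DIR)\parse.obj
# OBJS_TEST is OBJS with EXE_OBJ removed, kept in sync by hand:
OBJS_TEST = $(OBJS_DIR)\util.obj $(OBJS_DIR)\parse.obj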

Undefinition of MSVC definition _MBCS and definition of _UNICODE not possible calling cmake from command line [duplicate]

This question already has answers here:
How to define a C++ preprocessor macro through the command line with CMake? (4 answers)
I want to generate a Visual Studio 2010 project with cmake from a CMakeLists.txt. Basically this works fine. However, one detail led to the following observation:
The default character set for the generated Visual Studio project is 'Multibyte Character'. I want to change it to Unicode. As far as I have found out, I need to define either _UNICODE or UNICODE, but I also need to undefine _MBCS. This works if I put it in the CMakeLists.txt, but I can't get it working when I try to set these definitions on the command line:
CMakeLists.txt, works fine:
add_definitions ( -D_UNICODE )
remove_definitions ( -D_MBCS )
From the command line, the definitions are ignored by cmake if I do it like this:
cmake -D_UNICODE="" -U_MBCS=""
They are likewise ignored if I do it like this:
cmake -DCMAKE_CXX_FLAGS_INIT="-D_UNICODE -U_MBCS"
I assumed that both ways were the same, but obviously the handling of definitions from the command line is different. Am I doing something wrong, or is it only possible by using add_definitions / remove_definitions?
By the way, I'm using cmake 3.10.
The -D flags passed to cmake are completely unrelated to the -D flags passed to the compiler. See cmake(1). In short, cmake -DVARIABLE=VALUE ... is roughly equivalent to using set(VARIABLE VALUE CACHE STRING "") inside your CMakeLists.txt.
If you cannot use add_definitions or target_compile_definitions, you can still pass flags to the compiler by setting CMAKE_CXX_FLAGS_INIT the first time you invoke cmake or by changing CMAKE_CXX_FLAGS on later invocations of cmake:
cmake -DCMAKE_CXX_FLAGS_INIT="-D_UNICODE -U_MBCS" ...
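If you do control the CMakeLists.txt, the per-target form is the cleaner option; a minimal sketch (the target name myapp is hypothetical):
# Define the Unicode macros for one target only:
target_compile_definitions(myapp PRIVATE _UNICODE UNICODE)
# _MBCS was added at directory scope, so remove it the same way, as in the question:
remove_definitions(-D_MBCS)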

Substituting for a .SET on the command line

I have some (MicroBlaze) assembly that I need to build (via the GCC cross-assembler and linker) and execute many times, with the (same) constants, currently fixed via .SET directives, changed for each build.
Is there a way to automate the setting of these in-assembly constants and so avoid the dull task of editing the code for each build?
You can use the power of the C pre-processor in your assembler files. This can be done simply by changing the file extension from .s to .S (capital S) on Unix-like platforms, or to .sx on Windows. If you then use gcc instead of gas on these files, the C pre-processor is run over the source first and gas is called automatically afterwards.
That means you can use all the regular pre-processor directives: #define, #ifdef, and so on. And of course you can pass defines in from the command line with gcc's -D parameter.
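A minimal sketch of how this looks (the constant LOOP_COUNT and the instruction shown are illustrative assumptions, not from the question; invoke whatever your MicroBlaze cross-gcc is called, e.g. mb-gcc):
/* loop.S -- the .S extension makes gcc run the pre-processor before gas */
#ifndef LOOP_COUNT
#define LOOP_COUNT 16              /* default used when no -D is given */
#endif
        addik   r5, r0, LOOP_COUNT /* MicroBlaze: load the constant into r5 */
Then each build just varies the -D value:
mb-gcc -c -DLOOP_COUNT=32 loop.S -o loop.o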

How to force gcc to link like g++?

In this episode of "let's be stupid", we have the following problem: a C++ library has been wrapped with a layer of code that exports its functionality in a way that allows it to be called from C. This results in a separate library that must be linked (along with the original C++ library and some object files specific to the program) into a C program to produce the desired result.
The tricky part is that this is being done in the context of a rigid build system that was built in-house and consists of literally dozens of include makefiles. This system has a separate step for the linking of libraries and object files into the final executable but it insists on using gcc for this step instead of g++ because the program source files all have a .c extension, so the result is a profusion of undefined symbols. If the command line is manually pasted at a prompt and g++ is substituted for gcc, then everything works fine.
There is a well-known (to this build system) make variable that allows flags to be passed to the linking step, and it would be nice if there were some incantation that could be added to this variable that would force gcc to act like g++ (since both are just driver programs).
I have spent quality time with the gcc documentation searching for something that would do this but haven't found anything that looks right, does anybody have suggestions?
Considering such a terrible build system, write a wrapper around gcc that execs gcc or g++ depending on the arguments. Replace /usr/bin/gcc with this script, or modify your PATH so the script is found in preference to the real binary.
#!/bin/sh
# Placeholder test: decide which real compiler to exec based on the arguments.
if [ "$1" = "wibble wobble" ]
then
    exec /usr/bin/gcc-4.5 "$@"
else
    exec /usr/bin/g++-4.5 "$@"
fi
The problem is that C compilation produces object files with plain, unmangled C symbol names, while C++ compilation produces object files with C++ name mangling, so the symbols don't match up at link time.
Your best bet is to use extern "C" before declarations in your C++ builds, and no prefix in your C builds.
You can detect C++ using #if __cplusplus.
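Put together, the usual header pattern looks like this (a sketch; the function name is hypothetical):
#ifdef __cplusplus
extern "C" {
#endif

void wrapped_api_call(int value);  /* callable from both C and C++ */

#ifdef __cplusplus
}
#endif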
Many thanks to bmargulies for his comment on the original question. By comparing the output of running the link line with both gcc and g++ using the -v option and doing a bit of experimenting, I was able to determine that "-lstdc++" was the magic ingredient to add to my linking flags (in the appropriate order relative to other libraries) in order to avoid the problem of undefined symbols.
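In other words, a link line of roughly this shape, with -lstdc++ placed after the libraries that depend on it (file and library names here are hypothetical):
gcc main.o c_wrapper.o -lwrappedcpp -lstdc++ -o program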
For those of you who wish to play "let's be stupid" at home, I should note that I have avoided any use of static initialization in the C++ code (as is generally wise), so I wasn't forced to compile the translation unit containing the main() function with g++ as indicated in item 32.1 of FAQ-Lite (http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html).

Change older makefile system to take advantage of parallel compiles

We use Microsoft NMAKE to compile a large number of native C++ and some Intel Fortran files. Typically the makefiles contain lines such as these (for each file):
$(LINKPATH)\olemisc.obj : ole2\olemisc.cpp $(OLEMISC_DEP)
    $(CCDEBUG) ole2\olemisc.cpp
    $(GDEPS) ole2\olemisc.cpp
OLEMISC_DEP =\
    e:\ole2\ifaceole.hpp\
    e:\ole2\cpptypes.hpp\
    etc.
It works fine, but compiles one file at a time. We would like to take advantage of multi-core processors and compile more than one file at a time. I would appreciate some advice about the best way to make that happen. Here is what I have so far.
One: GNU make lets you execute parallel jobs using the --jobs=2 option, for example, and that works fine with GCC (we can't use GCC, sadly). But Microsoft's NMAKE does not seem to support such an option. How compatible would the two make programs be, and if we did start using GNU make, can you run two cl.exe processes at the same time? I would expect them to complain about the PDB (debug) file being locked, or does one of the newer cl.exe command line arguments get you around that?
Two: cl.exe has a /MP (build with multiple processes) flag, which lets you compile multiple files at the same time if passed together via the command line, for example:
cl /MP7 a.cpp b.cpp c.cpp d.cpp e.cpp
But using this would require changes to the makefile. Our makefiles are generated by our own program from other files, so I can easily change what we put in them. But how do you combine the dependencies from different cpp files in the makefile so that they get compiled together via one cl.exe call? Each .obj is a different target with its own set of commands to make it.
Or do I change the makefile to not call cl.exe directly, but rather some other little executable that we write, which collects a series of .cpp files together and shells out to cl.exe, passing multiple arguments? That would work and seems doable, but it also seems overly complicated, and I can't see anyone else doing that.
Am I missing something obvious? There must be a simpler way of accomplishing this.
We do not use Visual Studio or a solution file to do the compiles, because the list of files is extensive, we have a few special items in our makefiles, and theoretically do not want to be overly tied to MS C++ etc.
I thoroughly recommend GNU make on Windows. I tend to use Cygwin make, as the environment it creates tends to be very portable to Unix-like platforms (Mac and Linux for a start). Compiling with the Microsoft toolchain, in parallel and with 100% accurate dependencies and full CPU usage, works very well. You have other requirements, though.
As far as your nmake question goes, look up batch-mode inference rules in the manual. Basically, nmake is able to call the C compiler once, passing it a whole load of C files in one go. Thus you can use the compiler's /MP... type switches.
Parallel compiling built into the compiler? Pah! Horribly broken I say. Here is a skeleton anyway:
OBJECTS = a.obj b.obj c.obj
f.exe: $(OBJECTS)
    link /out:$@ $**
$(OBJECTS): $$(@R).c
# "The only syntactical difference from the standard inference rule
# is that the batch-mode inference rule is terminated with a double colon (::)."
.c.obj::
    cl -c /MP4 $<
EDIT
If each .obj has its own dependencies (likely!), then you simply add these as separate dependency lines (i.e., they don't have any shell commands attached).
a.obj: b.h c.h ../include/e.hpp
b.obj: b.h ../include/e.hpp
⋮
Often such boilerplate is generated by another tool and !INCLUDEd into the main makefile. If you are clever, you can generate these dependencies for free as you compile. (If you go this far, then nmake starts to creak at the seams and you should maybe change to GNU make.)
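One way to harvest those dependencies with the Microsoft compiler (a sketch, not part of the original answer) is cl's /showIncludes switch:
cl /nologo /showIncludes /c ole2\olemisc.cpp
rem Each "Note: including file: ..." line in the output names one header;
rem post-process those into "$(LINKPATH)\olemisc.obj : ..." dependency lines.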
One further consideration to keep in mind here: you basically have to define one batch rule for each path and extension. But if you have two files with the same name in two different source directories, with a batch inference rule for each of those directories, the batch rule might not pick the one you want.
Basically, the make system knows it needs to make a certain .obj file, and as soon as it finds an inference rule that lets it do that, it will use it.
The workaround is to not have duplicate file names; and if that can't be avoided, don't use inference or batch rules for those files.
Ok, I spent some time this morning working on this, and thanks to bobbogo, I got it to work. Here are the exact details for anyone else who is considering this:
Old style makefile which compiles one file at a time has tons of this:
$(LINKPATH)\PS_zlib.obj : zlib\PS_zlib.cpp $(PS_ZLIB_DEP)
    $(CC) zlib\PS_zlib.cpp
$(LINKPATH)\ioapi.obj : zlib\minizip\ioapi.c $(IOAPI_DEP)
    $(CC) zlib\minizip\ioapi.c
$(LINKPATH)\iowin32.obj : zlib\minizip\iowin32.c $(IOWIN32_DEP)
    $(CC) zlib\minizip\iowin32.c
Note that each file is compiled one at a time. So now you want to use the fancy Visual Studio 2010 /MP switch "/MP[n] use up to 'n' processes for compilation" to compile multiple files at the same time. How? Your makefile needs to make use of batch inference rules in nmake, as follows:
$(LINKPATH)\PS_zlib.obj : zlib\PS_zlib.cpp $(PS_ZLIB_DEP)
$(LINKPATH)\ioapi.obj : zlib\minizip\ioapi.c $(IOAPI_DEP)
$(LINKPATH)\iowin32.obj : zlib\minizip\iowin32.c $(IOWIN32_DEP)
#Batch inference rule for extension "cpp" and path "zlib":
{zlib}.cpp{$(LINKPATH)}.obj::
    $(CC) $(CCMP) $<
#Batch inference rule for extension "c" and path "zlib\minizip":
{zlib\minizip}.c{$(LINKPATH)}.obj::
    $(CC) $(CCMP) $<
In this case, elsewhere, we have
CCMP = /MP4
Note that nmake batch inference rules do not support wildcards or spaces in the paths. I found some decent nmake documentation that states you need to create a separate rule for every extension and source file location; you cannot have one rule covering files in different locations. Also, files that use #import cannot be compiled with /MP.
We have a tool that generates our makefiles, so it now also generates the batch inference rules.
But it works! The time to compile one large dll went from 12 minutes down to 7 minutes! Woohoo!
