Compiling Z3 on OSX (macOS)

I am trying to compile Z3 version 4.1.2. After a successful configuration, when I run "make", I get the following error:
Makefile:151: lib.srcs: No such file or directory
Makefile:152: shell.srcs: No such file or directory
Makefile:153: test.srcs: No such file or directory
Making test.srcs...
/usr/local/bin/dos2unix takes only stdin and stdout
make: *** [test.srcs] Error 1

I think the problem is that all textual files in z3-src-4.1.2.zip use "carriage return" (cr) followed by "line feed" (lf) to terminate lines. The zip was created on a Windows machine. Another problem is the "dos2unix" application. It is an application that converts Windows/DOS text files into Unix/Linux/OSX text files. It is a very simple application: it just replaces cr/lf with lf.
On Linux, this application takes a single argument: the file name to be modified.
I'm currently working on a new build system that avoids these issues. In the meantime, here are some workarounds.
1) Use git to retrieve the source. git will take care of the cr/lf vs lf issue.
Here is the command for retrieving Z3:
git clone https://git01.codeplex.com/z3
If you do that, you don't need to use dos2unix.
So, you can remove the lines #$(DOS2UNIX) in Makefile.in. Another option is to replace
DOS2UNIX=#D2U#
with
DOS2UNIX=touch
at the beginning of Makefile.in.
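If you would rather script that edit than do it by hand, the substitution can be sketched in a few lines of Python. This is just an illustration of the replacement; the surrounding variable lines here are made up, and in practice you would read and rewrite Makefile.in itself.

```python
import re

# Stand-in for the contents of Makefile.in (the CC/AR lines are hypothetical).
makefile_text = "CC=@CC@\nDOS2UNIX=#D2U#\nAR=@AR@\n"

# Rewrite the DOS2UNIX assignment so the build invokes 'touch' (a no-op)
# instead of the real dos2unix binary.
patched = re.sub(r'^DOS2UNIX=.*$', 'DOS2UNIX=touch', makefile_text,
                 flags=re.MULTILINE)
print(patched)
```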
After these changes, you should be able to compile it on OSX. I successfully compiled it on OSX 10.7.
2) Get the "unstable" branch.
http://z3.codeplex.com/SourceControl/changeset/view/946a06cddbe4
This is the current "working branch". It contains the new build system. It is not ready, but it is good enough to generate the Z3 executable. Here are the instructions to build Z3 using this branch:
Download the code from the page above. Or use git to retrieve the "unstable" branch. Then, execute
autoconf
./configure
python scripts/mk_make.py
cd build
make
I managed to compile it on OSX 10.7 last Friday.
3) Keep the .zip, but convert all textual files. I'm using the following python script to convert all files in the new build system. If you execute this python script in the Z3 root directory, it will convert all files.
import os
import shutil
from fnmatch import fnmatch

def is_cr_lf(fname):
    # Check whether the first line of the file ends with cr/lf
    f = open(fname, 'r')
    line = f.readline()
    f.close()
    sz = len(line)
    return sz >= 2 and line[sz-2] == '\r' and line[sz-1] == '\n'

# dos2unix in python
# cr/lf --> lf
def dos2unix(fname):
    if is_cr_lf(fname):
        fin = open(fname, 'r')
        fname_new = '%s.new' % fname
        fout = open(fname_new, 'w')
        for line in fin:
            line = line.rstrip('\r\n')
            fout.write(line)
            fout.write('\n')
        fin.close()
        fout.close()
        shutil.move(fname_new, fname)
        print "dos2unix '%s'" % fname

def dos2unix_tree_core(pattern, dir, files):
    for filename in files:
        if fnmatch(filename, pattern):
            fname = os.path.join(dir, filename)
            if not os.path.isdir(fname):
                dos2unix(fname)

def dos2unix_tree():
    # os.path.walk visits every directory under '.' (Python 2)
    os.path.walk('.', dos2unix_tree_core, '*')

dos2unix_tree()
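Note that os.path.walk and the print statement above are Python 2 only. If you are on Python 3, an equivalent sketch using os.walk and a byte-level replace would look like this:

```python
import os

def dos2unix(fname):
    # Rewrite a single file, converting cr/lf line endings to lf.
    with open(fname, 'rb') as f:
        data = f.read()
    converted = data.replace(b'\r\n', b'\n')
    if converted != data:
        with open(fname, 'wb') as f:
            f.write(converted)
        print("dos2unix '%s'" % fname)

def dos2unix_tree(root='.'):
    # os.walk replaces the os.path.walk callback style used above.
    for dirpath, dirnames, filenames in os.walk(root):
        for filename in filenames:
            dos2unix(os.path.join(dirpath, filename))
```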

Related

How to update boost's symbolic links all at once?

I have boost 1.58.0 and 1.67.0 libraries under /usr/lib and my libboost_*.so links point to the 1.67 libs. Now I would like to change all the symbolic links at once with a command like this: ln -f libboost_*.so.1.58.0 libboost_*.so.
My actual question is how do I make the first star remember so that the second star has the same name? Excuse me for the lack of jargon, I'm not sure how to phrase this better.
Is there an easy solution for this or would I have to write a shellscript saving the first match into some intermediate variable?
I just wrote a small python script. Would still be interesting to see if there is a better shell solution.
import os

dir = "/usr/lib/x86_64-linux-gnu"
files = os.listdir(dir)
print(files)
for file in files:
    if os.path.islink(os.path.join(dir, file)):
        if "boost" in file:
            print("Creating symlink for file " + file)
            src = os.path.join(dir, file + ".1.58.0")
            dst = os.path.join(dir, file)
            print(src)
            print(dst)
            try:
                os.symlink(src, dst)
            except OSError:
                os.remove(dst)
                os.symlink(src, dst)
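In the Python route, the "make the first star remember" problem disappears on its own: each loop iteration sees the whole matched file name, so the target name can be derived from it directly. A self-contained sketch (directory and version number are assumptions, and the links created are relative, like ln -sf would make):

```python
import os

def relink_boost(dir, version="1.58.0"):
    # For each libboost_*.so symlink, point it at the matching
    # versioned library in the same directory.
    for name in os.listdir(dir):
        path = os.path.join(dir, name)
        if name.startswith("libboost_") and name.endswith(".so") \
                and os.path.islink(path):
            target = name + "." + version
            os.remove(path)
            os.symlink(target, path)  # relative link, like ln -sf
```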

Python on Windows removes created file

I'm trying to write a file with a python program. When I perform all the actions command line, they all work fine. The file is created.
When I perform the actions in a python script, the file does not exist after the script terminates.
I created a small script that demonstrates the behavior.
import os
import os.path

current_dir = os.getcwd()
output_file = os.path.join(current_dir, "locations.js")
print output_file
f = open(output_file, "w")
f.write("var locations = [")
f.write("{lat: 55.978467, lng: 9.863467}")
f.write("]")
f.close()
if os.path.isfile(output_file):
    print output_file + " exists"
exit()
Running the script from the command line, I get these results:
D:\Temp\GeoMap>python test.py
D:\Temp\GeoMap\locations.js
D:\Temp\GeoMap\locations.js exists
D:\Temp\GeoMap>dir locations.js
Volume in drive D is Data
Volume Serial Number is 0EBF-9720
Directory of D:\Temp\GeoMap
File Not Found
D:\Temp\GeoMap>
Hence the file is actually created, but removed when the script terminates.
What do I need to do to keep the file?
Problem was solved by changing firewall settings.

How do I import a file of SQL commands to PostgreSQL?

I'm running this command from PostgreSQL 9.4 on Windows 8.1:
psql -d dbname -f filenameincurrentdirectory.sql
The sql file has, for example, these commands:
INSERT INTO general_lookups ("name", "old_id") VALUES ('Open', 1);
INSERT INTO general_lookups ("name", "old_id") VALUES ('Closed', 2);
When I run the psql command, I get this error message:
psql:filenameincurrentdirectory.sql:1: ERROR: syntax error at or near "ÿ_I0811a2h1"
LINE 1: ÿ_I0811a2h1 ru
How do I import a file of SQL commands using psql?
I have no problems utilizing pgAdmin in executing these sql files.
If your issue is a BOM (byte order marker), another option is sed. It is also kind of nice because, if the BOM is not your issue, it is non-destructive to your data. Download and install sed for Windows:
http://gnuwin32.sourceforge.net/packages/sed.htm
The package called "Complete package, except sources" contains additional required libraries that the "Binaries" package doesn't.
Once sed is installed run this command to remove the BOM from your file:
sed -i '1 s/^\xef\xbb\xbf//' filenameincurrentdirectory.sql
Particularly useful if your file is too large for Notepad++.
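If installing sed on Windows is a hassle, the same strip can be done with a few lines of Python. This is a sketch; the in-place rewrite assumes the file fits in memory, and it does nothing when no BOM is present:

```python
BOM = b'\xef\xbb\xbf'  # the UTF-8 byte order marker

def strip_bom(fname):
    # Remove a leading UTF-8 BOM from the file, if present.
    with open(fname, 'rb') as f:
        data = f.read()
    if data.startswith(BOM):
        with open(fname, 'wb') as f:
            f.write(data[len(BOM):])
```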
Okay, the problem does have to do with the BOM (byte order marker). The file was generated by Microsoft Access. I opened the file in Notepad and saved it as UTF-8 instead of Unicode, since Notepad's "Unicode" is UTF-16. That got me this error message:
psql:filenameincurrentdirectory.sql:1: ERROR: syntax error at or near "INSERT"
LINE 1: INSERT INTO general_lookups ("name", "old_id" ) VAL...
I then learned from another website that Postgres doesn't utilize the BOM and that Notepad doesn't allow users to save without a BOM. So I had to download Notepad++, set the encoding to UTF-8 without BOM, save the file, and then import it. Voila!
An alternative to using Notepad++ is this little python script I wrote. Simply pass in the file name to convert.
import sys

if len(sys.argv) == 2:
    with open(sys.argv[1], 'rb') as source_file:
        contents = source_file.read()
    with open(sys.argv[1], 'wb') as dest_file:
        dest_file.write(contents.decode('utf-16').encode('utf-8'))
else:
    print "Please pass in a single file name to convert."

Mass Convert .xls and .xlsx to .txt (Tab Delimited) on a Mac

I have about 150 .xls and .xlsx files that I need converting into tab-delimited. I tried using automator, but I was only able to do it one-by-one. It's definitely faster than opening up each one individually, though. I have very little scripting knowledge, so I would appreciate a way to do this as painlessly as possible.
If you would be prepared to use Python for this I have written a script that converts Excel spreadsheets to csv files. The code is available in Pastebin.
You would just need to change the following line:
writer = csv.writer(fileout)
to:
writer = csv.writer(fileout, delimiter="\t")
to make the output file tab delimited rather than the standard comma delimited.
As it stands this script prompts you for files one at a time (allows you to select from a dialogue), but it could easily be adapted to pick up all of the Excel files in a given directory tree or where the names match a given pattern.
If you give this a try with an individual file first and let me know how you get on, I can help with the changes to automate the rest if you like.
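As a small illustration of what that one-line change does, here are the same rows written with the tab delimiter (csv.writer's default line terminator is \r\n, which is why it appears in the output):

```python
import csv
import io

rows = [['name', 'qty'], ['apples', '3']]

# Write the rows tab-delimited instead of comma-delimited.
buf = io.StringIO()
csv.writer(buf, delimiter='\t').writerows(rows)
print(buf.getvalue())
```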
UPDATE
Here is a wrapper script you could use:
#!/usr/bin/python
import os, sys, traceback

sys.path.insert(0, os.getenv('py'))
import excel_to_csv

def main():
    # drop out if no arg for excel dir
    if len(sys.argv) < 2:
        print 'Usage: python xl_csv_wrapper <path_to_excel_files>'
        sys.exit(1)
    else:
        xl_path = sys.argv[1]
    xl_files = os.listdir(xl_path)
    valid_ext = ['.xls', '.xlsx', '.xlsm']
    # loop through files in path
    for f in xl_files:
        f_name, ext = os.path.splitext(f)
        if ext.lower() in valid_ext:
            try:
                print 'arg1:', os.path.join(xl_path, f)
                print 'arg2:', os.path.join(xl_path, f_name + '.csv')
                excel_to_csv.xl_to_csv(os.path.join(xl_path, f),
                                       os.path.join(xl_path, f_name + '.csv'))
            except:
                print '** Failed to convert file:', f, '**'
                exc_type, exc_value, exc_traceback = sys.exc_info()
                lines = traceback.format_exception(exc_type, exc_value, exc_traceback)
                for line in lines:
                    print '!!', line
            else:
                print 'Successfully converted', f, 'to .csv'

if __name__ == '__main__':
    main()
You will need to replace the line:
sys.path.insert(0, os.getenv('py'))
at the top with an absolute path to the excel_to_csv script, or an environment variable set on your system.
Use VBA in a control workbook to loop through the source workbooks in a specified directory or a list of workbooks, opening each, saving out the converted data, then closing each in turn.

Distributing .app file after command line xcodebuild call

I'm building/archiving my Mac app for distribution from a command line call (below), with Xcode 4.3 installed. To be clear, I didn't have a working solution for this problem prior to Xcode 4.3, so advice for earlier Xcode releases could easily still be valid. Here's the call:
/usr/bin/xcodebuild -project "ProjectPath/Project.pbxproj" -scheme "Project" -sdk macosx10.7 archive
This runs successfully, and it generates an .xcarchive file, located in my ~/Library/Developer/Xcode/Archives/<date> folder. What's the proper way to get the path to the generated archive file? I'm looking for a way to get a path to the .app file contained therein, so I can distribute it.
I've looked at the man page for xcodebuild (and done copious searching online) and didn't find any clues there.
There is an easier way, simply specify the archivePath you want to archive:
xcodebuild -archivePath GoTray -scheme GoTray archive
Then you will get the xcarchive file at GoTray.xcarchive in current directory.
Next, run xcodebuild again to export app from the xcarchive file:
xcodebuild -exportArchive -exportFormat APP -archivePath GoTray.xcarchive -exportPath GoTray
Building on the answer provided here, I came up with a satisfactory multi-part solution. The key to it all, was to use the environment variables Xcode creates during the build.
First, I have a post-action on the Archive phase of my build scheme (pasted into the Xcode project's UI). It calls a Python script I wrote (provided in the next section), passing it the names of the environment variables I want to pull out, and a path to a text file:
# Export the archive paths to be used after Archive finishes
"${PROJECT_DIR}/Script/grab_env_vars.py" "${PROJECT_DIR}/build/archive-env.txt"
"ARCHIVE_PATH" "ARCHIVE_PRODUCTS_PATH" "ARCHIVE_DSYMS_PATH"
"INSTALL_PATH" "WRAPPER_NAME"
That script then writes them to a text file in key = value pairs:
import sys
import os

def main(args):
    if len(args) < 2:
        print('No file path passed in to grab_env_vars')
        return
    if len(args) < 3:
        print('No environment variable names passed in to grab_env_vars')
    output_file = args[1]
    output_path = os.path.dirname(output_file)
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    with open(output_file, 'w') as f:
        for i in range(2, len(args)):
            arg_name = args[i]
            arg_value = os.environ[arg_name]
            #print('env {}: {}'.format(arg_name, arg_value))
            f.write('{} = {}\n'.format(arg_name, arg_value))

def get_archive_vars(path):
    return dict(line.strip().split(' = ') for line in open(path))

if __name__ == '__main__':
    main(sys.argv)
Then, finally, in my build script (also Python), I parse out those values and can get to the path of the archive, and the app bundle therein:
env_vars = grab_env_vars.get_archive_vars(ENV_FILE)
archive_path = env_vars['ARCHIVE_PRODUCTS_PATH']
install_path = env_vars['INSTALL_PATH'][1:] #Chop off the leading '/' for the join below
wrapper_name = env_vars['WRAPPER_NAME']
archived_app = os.path.join(archive_path, install_path, wrapper_name)
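The key = value round trip can be sketched in isolation. The file contents and paths below are made-up examples, not values Xcode produces, but they show how the parsed variables join into the final .app path:

```python
import os
import tempfile

# Write a hypothetical archive-env.txt in the 'key = value' format above.
env_file = os.path.join(tempfile.mkdtemp(), 'archive-env.txt')
with open(env_file, 'w') as f:
    f.write('ARCHIVE_PRODUCTS_PATH = /tmp/MyApp.xcarchive/Products\n')
    f.write('INSTALL_PATH = /Applications\n')
    f.write('WRAPPER_NAME = MyApp.app\n')

# Parse it back, exactly as get_archive_vars does.
env_vars = dict(line.strip().split(' = ') for line in open(env_file))
archived_app = os.path.join(env_vars['ARCHIVE_PRODUCTS_PATH'],
                            env_vars['INSTALL_PATH'][1:],  # drop leading '/'
                            env_vars['WRAPPER_NAME'])
print(archived_app)
```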
This was the way I solved it, and it should be pretty easily adaptable to other scripting environments. It makes sense with my constraints: I wanted to have as little code as possible in the project, I prefer Python scripting to Bash, and this script is easily reusable in other projects and for other purposes.
You could just use a bit of shell, get the most recent folder in the Archives dir (or use the current date), and then get the most recent archive in that directory.