Problems scripting Xcode documentation with Doxygen

I am trying to use the following script by duckrowing (http://www.duckrowing.com/2010/03/18/documenting-objective-c-with-doxygen-part-ii/) to document an existing Xcode project.
#
# Build the doxygen documentation for the project and load the docset into Xcode
#
# Created by Fred McCann on 03/16/2010.
# http://www.duckrowing.com
#
# Based on the build script provided by Apple:
# http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
#
# Set the variable $COMPANY_RDOMAIN_PREFIX equal to the reverse domain name of your company
# Example: com.duckrowing
#
DOXYGEN_PATH=/Applications/Doxygen.app/Contents/Resources/doxygen
DOCSET_PATH=$SOURCE_ROOT/build/$PRODUCT_NAME.docset
if ! [ -f $SOURCE_ROOT/Doxyfile ]
then
  echo doxygen config file does not exist
  $DOXYGEN_PATH -g $SOURCE_ROOT/Doxyfile
fi
# Append the proper input/output directories and docset info to the config file.
# This works even though values are assigned higher up in the file. Easier than sed.
cp $SOURCE_ROOT/Doxyfile $TEMP_DIR/Doxyfile
echo "INPUT = $SOURCE_ROOT" >> $TEMP_DIR/Doxyfile
echo "OUTPUT_DIRECTORY = $DOCSET_PATH" >> $TEMP_DIR/Doxyfile
echo "RECURSIVE = YES" >> $TEMP_DIR/Doxyfile
echo "EXTRACT_ALL = YES" >> $TEMP_DIR/Doxyfile
echo "JAVADOC_AUTOBRIEF = YES" >> $TEMP_DIR/Doxyfile
echo "GENERATE_LATEX = NO" >> $TEMP_DIR/Doxyfile
echo "GENERATE_DOCSET = YES" >> $TEMP_DIR/Doxyfile
echo "DOCSET_FEEDNAME = $PRODUCT_NAME Documentation" >> $TEMP_DIR/Doxyfile
echo "DOCSET_BUNDLE_ID = $COMPANY_RDOMAIN_PREFIX.$PRODUCT_NAME" >> $TEMP_DIR/Doxyfile
# Run doxygen on the updated config file.
# Note: doxygen creates a Makefile that does most of the heavy lifting.
$DOXYGEN_PATH $TEMP_DIR/Doxyfile
# make will invoke docsetutil. Take a look at the Makefile to see how this is done.
make -C $DOCSET_PATH/html install
# Construct a temporary applescript file to tell Xcode to load a docset.
rm -f $TEMP_DIR/loadDocSet.scpt
echo "tell application \"Xcode\"" >> $TEMP_DIR/loadDocSet.scpt
echo "load documentation set with path \"/Users/$USER/Library/Developer/Shared/Documentation/DocSets/$COMPANY_RDOMAIN_PREFIX.$PRODUCT_NAME.docset\"" >> $TEMP_DIR/loadDocSet.scpt
echo "end tell" >> $TEMP_DIR/loadDocSet.scpt
# Run the load-docset applescript command.
osascript $TEMP_DIR/loadDocSet.scpt
exit 0
However, I am getting these errors:
Osascript:/Users/[username]/SVN/trunk/Examples: No such file or directory
Earlier in the script output (in the Xcode window after building) I see these messages:
Configuration file '/Users/[username]/SVN/trunk/Examples' created
The problem, I think, is that the full path is actually
'/Users/[username]/SVN/trunk/Examples using SDK'
I was working on the assumption that the whitespace was the culprit, so I tried the following approaches:
$SOURCE_ROOT = "/Users/[username]/SVN/trunk/Examples using SDK"
$SOURCE_ROOT = /Users/[username]/SVN/trunk/Examples\ using\ SDK
set $SOURCE_ROOT to quoted form of POSIX path of /Users/$USER/SVN/trunk/Examples\ using\ SDK/
but all give the same osascript error as above. Also, the docset is not built into the requested directory
/Users/$USER/Library/Developer/Shared/Documentation/DocSets/$COMPANY_RDOMAIN_PREFIX.$PRODUCT_NAME.docset
I've scratched my head over this for a while but can't figure out what the problem is. One hypothesis is that I am running Doxygen on an existing project rather than a new one. To handle this, EXTRACT_ALL is set to YES (which should suppress all warning messages, yet I still get 19 warnings).
Any help would be much appreciated.
Thank you,
Peyman

I suggest that you double quote "$SOURCE_ROOT" wherever you use it in your shell script.
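For example, the lines that build paths from $SOURCE_ROOT would become (a sketch of just the affected lines, assuming the rest of the script is unchanged):
DOCSET_PATH="$SOURCE_ROOT/build/$PRODUCT_NAME.docset"
if ! [ -f "$SOURCE_ROOT/Doxyfile" ]
then
  echo "doxygen config file does not exist"
  "$DOXYGEN_PATH" -g "$SOURCE_ROOT/Doxyfile"
fi
cp "$SOURCE_ROOT/Doxyfile" "$TEMP_DIR/Doxyfile"
make -C "$DOCSET_PATH/html" install
With the quotes in place, the space in "Examples using SDK" is no longer treated as a word separator.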

Mouviciel... I figured it out... I needed to put the whole variable in parentheses, i.e. $(SOURCE_ROOT).
Thank you for your help.

Related

My script is failing on an if/else statement

I was using this Bitbucket backup code. When I run bitbucket.diy-backup.sh, the shell exits at an if statement in common.sh.
Here are the relevant parts of both files:
bitbucket.diy-backup.sh -
set -e
SCRIPT_DIR=$(dirname "$0")
source "${SCRIPT_DIR}/utils.sh"
source "${SCRIPT_DIR}/common.sh"
common.sh -
BACKUP_VARS_FILE=${BACKUP_VARS_FILE:-"${SCRIPT_DIR}"/bitbucket.diy-backup.vars.sh}
...
# Just exits here
if [ -f "${BACKUP_VARS_FILE}" ]; then
source "${BACKUP_VARS_FILE}"
debug "Using vars file: '${BACKUP_VARS_FILE}'"
else
error "'${BACKUP_VARS_FILE}' not found"
bail "You should create it using '${SCRIPT_DIR}/bitbucket.diy-backup.vars.sh.example' as a template"
fi
Here is the file structure: all three files are in the same directory. Ideally it should take the first branch of the if statement, or perhaps fall through to the else branch, but running ./bitbucket.diy-backup.sh just exits at this statement.
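A generic way to see exactly which statement aborts the run (a debugging sketch, not part of the original scripts): trace the execution and confirm the vars file that common.sh looks for, since set -e makes the shell exit on the first failing command.
# Print every command as it runs; the last line shown before the
# shell exits is the one that tripped set -e.
bash -x ./bitbucket.diy-backup.sh 2>&1 | tail -n 20
# Check that the vars file common.sh expects actually sits next to
# the other scripts.
ls -l ./bitbucket.diy-backup.vars.sh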

bats - how can I echo the file name in a bats script for reporting?

I have some bats scripts that I run to test some functionality.
How can I echo the bats file name in the script?
My bats script looks like:
#!/usr/bin/env bats
load test_helper
echo $BATS_TEST_FILENAME
#test "run cloned mission" {
blah blah blah
}
in order for my report to appear as:
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
---- TEST NAME IS xxx
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
---- TEST NAME IS yyy
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
but got this error:
2: syntax error: operand expected (error token is ".bats
2")
What is the correct way to do it?
I don't want to change the test names for this; I only want to echo the filename between the different test files.
Thanks.
TL;DR
Just output the file name from the setup function using a combination of prefixing the message with # and redirecting it to fd3 (documented in the project README).
#!/usr/bin/env bats
setup() {
if [ "${BATS_TEST_NUMBER}" = 1 ];then
echo "# --- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" >&3
fi
}
#test "run cloned mission" {
blah blah blah
}
All your options
Just use BASH
The simplest solution is to just iterate all test files and output the filename yourself:
for file in $(find ./ -name '*.bats');do
echo "--- TEST NAME IS ${file}"
bats "${file}"
done
The downside of this solution is that you lose the summary at the end. Instead, a summary will be given after each individual file.
Use the setup function
The simplest solution within BATS is to output the file name from a setup function. I think this is the solution you are after.
The code looks like this:
setup() {
if [ "${BATS_TEST_NUMBER}" = 1 ];then
echo "# --- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" >&3
fi
}
A few things to note:
The output MUST begin with a hash #
There MUST be a space after the hash
The output MUST be redirected to file descriptor 3 (i.e. >&3)
A check is added to only output the file name once (for the first test)
The downside here is that the output might confuse people as it shows up in red.
Use a skipped @test
The next solution would be to just add the following as the first test in each file:
#test "--- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" {
skip ''
}
The downside here is that it inflates the number of skipped tests...
Use an external helper function
The only other solution I can think of would be to create a test helper that lives in global scope and keeps track of its state.
Such code would look something like this:
output-test-name-helper.bash
#!/usr/bin/env bash
create_tmp_file() {
local -r fileName="$(basename ${BATS_TEST_FILENAME})"
if [[ ! -f "${BATS_TMPDIR}/${fileName}" ]];then
touch "${BATS_TMPDIR}/${fileName}"
echo "---- TEST NAME IS ${fileName}" >&2
fi
}
remove_tmp_file() {
rm "${BATS_TMPDIR}/$(basename ${BATS_TEST_FILENAME})"
}
trap remove_tmp_file EXIT
create_tmp_file
Which could then be loaded in each test:
#!/usr/bin/env bats
load output-test-name-helper
#test "run cloned mission" {
return 0
}
The major downside here is that there is no guarantee where the output will end up.
Adding output from outside the @test, setup and teardown functions can lead to unexpected results.
Such code will also be called (at least) once for every test, slowing down execution.
Open a pull-request
As a last resort, you could patch the code of BATS yourself, open a pull-request on the BATS repository and hope this functionality will be supported natively by BATS.
Conclusion
Life is a bunch of tradeoffs. Pick a solution that most closely fits your needs.
I've figured out a way to do this, but it requires changing how you handle the individual setup in each file.
Create a helper file that defines a setup function that does as Potherca described above:
global.bash:
test_setup() { return 0; }
setup() {
(($BATS_TEST_NUMBER==1)) \
&& echo "# --- $(basename "$BATS_TEST_FILENAME")" >&3
test_setup
}
Then in your test file, instead of defining a setup function you would just load 'global'.
If you need to create a setup for a specific file, then instead of creating a setup function, you'd create a test_setup function.
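A minimal sketch of what a test file using this helper could look like (the fixture variable and test body are placeholders, not from the original answer):
#!/usr/bin/env bats

load 'global'

# Per-file setup goes here instead of a setup() function, because
# setup() is already defined by global.bash and calls test_setup.
test_setup() {
  export MY_FIXTURE_DIR="$BATS_TMPDIR/fixtures"
}

@test "run cloned mission" {
  run echo "blah blah blah"
  [ "$status" -eq 0 ]
}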
Putting the echo in setup outputs the file name after the test name.
What I wound up doing is adding the file name to the test name itself:
test "${BATS_TEST_FILENAME##*/}: should …" {
…
}
Also, if going the setup route, the condition can be avoided with:
function setup() {
echo "# --- $(basename "$BATS_TEST_FILENAME")" >&3
function setup() {
…
}
setup
}

Setting Puppet variables with a BASH command

I've been trying to set a variable in a Puppet manifest that can be used across the puppet run. I have the following variables:
$package = 'hello'
$package_ensure = 'present'
$package_version = '4.4.1'
$package_maj_version = '4'
I'm trying to add another variable:
$ensure
using a Bash if statement that uses the above variables (since this is a source install, I can't use an rpm query to check whether the hello program is installed):
if [ -d "/opt/${package}${package_maj_version}" ]; then echo present; else echo absent; fi
but I haven't been able to find a way to do so. I keep getting errors such as:
Error: Could not parse for environment production: Could not match ${package}${package_maj_version}"
Any help on this would be greatly appreciated.
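One common workaround, sketched here as an assumption rather than a confirmed fix for this setup: expose the result of the shell check as a Facter external fact and read that fact in the manifest, instead of trying to run Bash inside the Puppet parser. The fact name hello_install_state and the facts.d path below are illustrative (the directory varies by Facter version).
#!/bin/bash
# Hypothetical external fact script, e.g. /etc/facter/facts.d/hello_install_state.sh
# Facter runs executable scripts in facts.d and turns key=value output
# into facts that the manifest can read.
if [ -d "/opt/hello4" ]; then
  echo "hello_install_state=present"
else
  echo "hello_install_state=absent"
fi
The manifest could then set $ensure from $facts['hello_install_state'] (or $::hello_install_state on older Puppet versions) rather than from a Bash if statement.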

How to test a Makefile for missing dependencies?

Is there a way to test for the missing dependencies that show up when compiling a project with multiple jobs (-jN where N > 1)?
I often encounter packages, mostly open source, where the build process works fine as long as I use -j1, or -jN where N is a relatively low value such as 4 or 8, but if I use higher values like 48, admittedly a little uncommon, it starts to fail due to missing dependencies.
I attempted to write a bash script that would, given a target, figure out all of its dependencies and explicitly build each of those dependencies with -j1 in order to validate that none of them are missing dependencies of their own. It appears to work with small and medium packages but fails on bigger ones such as uClibc, for example.
I am sharing my script in here, as some people may understand better what I mean by reading code. I also hope that a more robust solution exists and could be shared back.
#!/bin/bash
TARGETS=$*
echo "TARGETS=$TARGETS"
for target in $TARGETS
do
MAKE="make"
RULE=`make -j1 -n -p | grep "^$target:"`
if [ -z "$RULE" ]; then
continue
fi
NEWTARGETS=${RULE#* }
if [ -z "$NEWTARGETS" ]; then
continue
fi
if [ "${NEWTARGETS}" = "${RULE}" ]; then
# leaf target, we do not want to test.
continue
fi
echo "RULE=$RULE"
# echo "NEWTARGETS=$NEWTARGETS"
$0 $NEWTARGETS
if [ $? -ne 0 ]; then
exit 1
fi
echo "Testing target $target"
make clean && make -j1 $target
if [ $? -ne 0 ]; then
echo "Make parallel will fail with target $target"
exit 1
fi
done
I'm not aware of any open source solution, but this is exactly the problem that ElectricAccelerator, a high-performance implementation of GNU make, was created to solve. It will execute the build in parallel and dynamically detect and correct for missing dependencies, so that the build output is the same as if it had been run serially. It can produce an annotated build log which includes details about the missing dependencies. For example, this simple makefile has an undeclared dependency between abc and def:
all: abc def
abc:
	echo PASS > abc
def:
	cat abc
Run this with emake instead of gmake and enable --emake-annodetail=history, and the resulting annotation file includes this:
<job id="Jf42015c0" thread="f4bfeb40" start="5" end="6" type="rule" name="def" file="Makefile" line="6" neededby="Jf42015f8">
<command line="7">
<argv>cat abc</argv>
<output>cat abc
</output>
<output src="prog">PASS
</output>
</command>
<depList>
<dep writejob="Jf4201588" file="/tmp/foo/abc"/>
</depList>
<timing invoked="0.356803" completed="0.362634" node="chester-1"/>
</job>
In particular, the <depList> section shows that this job, Jf42015c0 (in other words, def), depends on job Jf4201588, because the latter modified the file /tmp/foo/abc; declaring the rule as def: abc would make that dependency explicit.
You can give it a try for free with ElectricAccelerator Huddle.
(Disclaimer: I'm the architect of ElectricAccelerator)
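A cruder, zero-tooling check that sometimes helps (a generic sketch, not taken from the question or the answer above): rebuild from clean with a high job count several times, since failures caused by missing dependencies tend to be intermittent.
# Repeat a highly parallel clean build a few times; any run that
# fails points at a race, which usually means a missing dependency.
for i in 1 2 3 4 5; do
  make clean
  make -j48 || { echo "parallel build failed on run $i"; exit 1; }
done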

Setting environment variables with puppet

I'm trying to work out the best way to set some environment variables with Puppet.
I could use exec and just do export VAR=blah. However, that would only last for the current session. I also thought about appending it to the end of a file such as .bashrc, but then I don't think there is a reliable way to check whether it is already there, so it would end up getting added with every run of Puppet.
I would take a look at this related question.
*.sh scripts in /etc/profile.d are read at user-login time (as the post says, at the same time /etc/profile is sourced)
Variables export-ed in any script placed in /etc/profile.d will therefore be available to your users.
You can then use a file resource to ensure this action is idempotent. For example:
file { "/etc/profile.d/my_test.sh":
content => 'export MYVAR="123"'
}
Or an alternative means to an idempotent result:
Example
if ! grep -q PINTO_HOME /root/.bashrc ; then
  echo "export PINTO_HOME=/opt/local/pinto" >> /root/.bashrc ;
fi
This option permits this environment variable to be set when the presence of the pinto application makes it warranted, rather than having to compose a user's .bash_profile regardless of what applications may wind up on the box.
If you add it to your bashrc you can check that it's in the ENV hash by doing
ENV['VAR']
which will return => "blah"
If you take a look at Github's Boxen they source a script (/opt/boxen/env.sh) from ~/.profile. This script runs a bunch of stuff including:
for f in $BOXEN_HOME/env.d/*.sh ; do
if [ -f $f ] ; then
source $f
fi
done
These scripts, in turn, set environment variables for their respective modules.
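For illustration, one of those env.d snippets might look like this (the variable name and path are hypothetical, not taken from Boxen):
# Hypothetical /opt/boxen/env.d/mysql.sh, sourced by the loop above
# at login so the variable is exported into every new shell.
export BOXEN_MYSQL_SOCKET=/opt/boxen/data/mysql/socket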
If you want the variables to affect all users /etc/profile.d is the way to go.
However, if you want them for a specific user, something like .bashrc makes more sense.
In response to "I don't think there is a reliable method to check if it is all ready there; so it would end up getting added with every run of puppet," there is now a file_line resource available from the puppetlabs stdlib module:
"Ensures that a given line is contained within a file. The implementation matches the full line, including whitespace at the beginning and end. If the line is not contained in the given file, Puppet appends the line to the end of the file to ensure the desired state. Multiple resources can be declared to manage multiple lines in the same file."
Example:
file_line { 'sudo_rule':
path => '/etc/sudoers',
line => '%sudo ALL=(ALL) ALL',
}
file_line { 'sudo_rule_nopw':
path => '/etc/sudoers',
line => '%sudonopw ALL=(ALL) NOPASSWD: ALL',
}
