I am trying to write a Makefile target that outputs an error if the Go code is not correctly formatted. This is for a CI step. I am struggling to get it working in the Makefile. This solution works on the bash command line:
! gofmt -l . 2>&1 | read
But copying this into the Makefile:
ci-format:
	@echo "$(OK_COLOR)==> Checking formatting$(NO_COLOR)"
	@go fmt ./...
	@! gofmt -l . 2>&1 | read
I get the following error:
/bin/sh: 1: read: arg count
These days, I use golangci-lint, which includes gofmt checking as an option.
But if for some reason you want to do this yourself, the command I previously used for precisely that purpose is:
diff -u <(echo -n) <(gofmt -d ./)
See, for example, the .travis.yml file in one of my projects.
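The diff exits non-zero as soon as gofmt -d produces any output, which is exactly what a CI step needs. As a minimal sketch of how this might look as a Makefile target (note that <(...) process substitution is a bash feature, while make runs recipes with /bin/sh by default, hence the SHELL override; that same sh/bash mismatch is also the source of the original read: arg count error, because POSIX sh's read requires a variable name while bash's defaults to REPLY):
SHELL := /bin/bash

ci-format:
	@echo "==> Checking formatting"
	@diff -u <(echo -n) <(gofmt -d ./)
With bash as the recipe shell, the original ! gofmt -l . 2>&1 | read line should also work as-is.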
Related
I have a one-liner that spits out all of the files modified in my current feature branch, which is branched off of a shared, upstream development branch. I then hope to feed the files that exist to the phpcs linter via xargs -- something like this:
git diff --name-only shared-upstream-development-branch | grep "\.php$" | xargs test -f {} && echo {} | xargs vendor/bin/phpcs
However, when I run this, I get something like the following:
test: extra argument
‘path/to/my/file.php’
I feel like I'm close to having a working solution.
How can I modify the one-liner above to correctly see if each PHP file still exists, then feed it onward to phpcs?
I know that everything up through the output of the grep command works well, as removing the two parts of the one-liner that refer to xargs gives me a nice list of file names.
(I also tried using --diff-filter=d to filter out deleted files, but this does not seem to work with my version of git, as I still get a complaint from phpcs about how a file "does not exist.")
&& separates commands, and is not an argument to xargs; you need to execute an explicit shell to use &&.
xargs -I{} sh -c 'test -f "$1" && echo "$1"' _ {}
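Spliced into the original pipeline, that gives the sketch below; -I{} makes xargs run one sh per file name and substitute {} for it:
git diff --name-only shared-upstream-development-branch \
  | grep "\.php$" \
  | xargs -I{} sh -c 'test -f "$1" && echo "$1"' _ {} \
  | xargs vendor/bin/phpcs
Spawning one shell per file is slow on large change sets; on a git new enough to support it, adding --diff-filter=d to the git diff call filters out deleted files in a single step instead.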
I have seen several questions about this topic, but I lack the ability to translate them to my specific problem. I have a for loop that iterates through subdirectories and executes a .sh script on a compressed text file inside each directory. I want to parallelize this process, but I'm struggling to apply GNU parallel.
Here is my loop:
for d in ./*/ ; do (cd "$d" && script.sh); done
I understand I need to input a list into parallel, so I have been trying this:
ls -d */ | parallel cd && script.sh
While this appears to get started, I get an error when gzip tries to unzip one of the txt files inside the directory, saying the file does not exist:
gzip: *.txt.gz: No such file or directory
However, when I run the original for loop, I have no issues aside from it taking a century to finish. Also, I only get the gzip error once when using parallel, which is so weird considering I have over 1000 sub-directories.
My questions are:
How do I get parallel to work in my case, i.e., to parallelize the application of a .sh script to 1000s of files in their own subdirectories?
What am I missing? Syntax, loop, bad script? I want to learn.
Is parallel actually attempting to run all these .sh scripts in parallel? Why don't I get an error for every .txt.gz file?
Is parallel the best option for the application? Is there another option that is better suited to my needs?
Two problems:
In:
ls -d */ | parallel cd && script.sh
what is run in parallel is just cd, not script.sh; script.sh is executed only once, after all the parallel cd jobs have finished, and only if they succeeded. It is the same as:
ls -d */ | parallel cd
if [ $? -eq 0 ]; then script.sh; fi
You do not pass the target directory to cd, so what parallel executes is a bare cd, which changes the current directory to your home directory. The final script.sh is executed in the current directory (the one from which you invoked the command), where there are probably no *.txt.gz files, hence the error.
You can check the effect of the first problem yourself with:
$ mkdir /tmp/foobar && cd /tmp/foobar && mkdir a b c
$ ls -d */ | parallel cd && pwd
/tmp/foobar
The output of pwd is printed only once, even if you have more than one input directory. You can fix it by quoting the command and then check the second problem with:
$ ls -d */ | parallel 'cd && pwd'
/homes/myself
/homes/myself
/homes/myself
You should see as many pwd outputs as there are input directories but it is always the same output: your home directory. You can fix the second problem by using the {} replacement string that is substituted with the current input. Check it with:
$ ls -d */ | parallel 'cd {} && pwd'
/tmp/foobar/a
/tmp/foobar/b
/tmp/foobar/c
Now, you should have all input directories properly listed in the output.
For your specific problem this should work:
ls -d */ | parallel 'cd {} && script.sh'
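If you want to preview what parallel will run before launching jobs across 1000+ directories, its --dry-run flag prints each composed command instead of executing it; with the a/b/c layout from above, for example:
$ ls -d */ | parallel --dry-run 'cd {} && script.sh'
cd a/ && script.sh
cd b/ && script.sh
cd c/ && script.sh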
Nowhere in the Code Climate docs is it written how to specify the coverage formatter. But when I try to send coverage to Code Climate:
./cc-test-reporter before-build
./cc-test-reporter after-build
it fails:
Error: could not find any viable formatter. available formatters: simplecov, lcov, coverage.py, clover, gocov, gcov, cobertura, jacoco
I have gocov installed. I also generated a report with gocov:
gocov test -coverprofile=out
And I tried to specify the report file to Codeclimate in various ways:
./cc-test-reporter after-build out
./cc-test-reporter after-build < out
But I had no luck.
I haven't found any formatter-related directives for the .codeclimate.yml file. The doc is written in a super "you know" style, so it didn't help. How do I enable/send test coverage with Code Climate?
Export the variable:
CC_TEST_REPORTER_ID=...
Run:
# Produce one coverage profile per package, skipping vendored code
for pkg in $(go list ./... | grep -v vendor); do
    go test -coverprofile=$(echo $pkg | tr / -).cover $pkg
done
# Merge the per-package profiles into a single c.out: keep one
# "mode:" header and strip the header from each individual profile
echo "mode: set" > c.out
grep -h -v "^mode:" ./*.cover >> c.out
rm -f *.cover
./cc-test-reporter after-build
Abby from Code Climate Support here. Currently, the test-reporter tries to infer where the coverage data could be from a set of default paths. This is usually enough for users following the default setup of the coverage tool.
But in cases where the output is not located at one of those paths, you can use the test-reporter's low-level commands to indicate the type of coverage data and where it is. In this particular case you would do something like:
export CC_TEST_REPORTER_ID=<repo-test-reporter-id>
./cc-test-reporter before-build
gocov test -coverprofile=out
./cc-test-reporter format-coverage --input-type gocov out
./cc-test-reporter upload-coverage
You can learn more about any test-reporter command by passing it the --help flag, for example ./cc-test-reporter format-coverage --help.
That should get you in a good place. If not, let us know at hello@codeclimate.com or here, and I can get more insight. :)
For those of you that still have this problem:
For me, the problem was that I had incorrectly named the output coverage file "cp.out" instead of "c.out", which is what cc-test-reporter expects.
Here's my working script:
#!/bin/sh
inputFile=$1
while IFS="=" read -r repo reporterID
do
    cd "$HOME/go/src/github.com/$repo" || exit
    echo "performing code coverage upload for $repo"
    git checkout master && git pull  ## don't know if main is used
    cp -f "$HOME/shellScripts/test-reporter-latest-darwin-amd64" .
    chmod 755 test-reporter-latest-darwin-amd64
    ./test-reporter-latest-darwin-amd64 before-build
    export CC_TEST_REPORTER_ID="$reporterID"
    go test -coverprofile=c.out ./...
    ./test-reporter-latest-darwin-amd64 after-build --prefix "github.com/$repo"
    echo
    echo "======================="
    echo
done < "$inputFile"
I want to test the output of a bash script when one of the executables it depends on is missing, so I want to run that script with the dependency "hidden" but no others. PATH= ./script isn't an option because the script needs to run other executables before it reaches the statement I want to test. Is there a way of "hiding" an executable from a script without altering the filesystem?
For a concrete example, I want to run this script but hide the git executable (which is its main dependency) from it so that I can test its output under these conditions.
You can use the builtin command, hash:
hash [-r] [-p filename] [-dt] [name]
Each time hash is invoked, it remembers the full pathnames of the commands specified as name arguments, so they need not be searched for on subsequent invocations. ... The -p option inhibits the path search, and filename is used as the location of name. ... The -d option causes the shell to forget the remembered location of each name.
If you pass a non-existent file to the -p option, it will be as if the command can't be found (although it can still be run via its full path). Passing -d undoes the effect.
$ hash -p /dev/null/git git
$ git --version
bash: /dev/null/git: command not found
$ /usr/bin/git --version
git version 1.9.5
$ hash -d git
$ git --version
git version 1.9.5
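Note that the hash table is per-shell and is not inherited by child processes, so to hide git from a script this way the bogus entry has to be planted in the shell that actually interprets the script. One way to do that (a sketch, assuming the script can be sourced into a fresh shell) is:
bash -c 'hash -p /dev/null/git git; . ./script'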
Add a function named git
git() { false; }
That will "hide" the git command
To copy #npostavs's idea, you can still get to the "real" git with the command builtin:
command git --version
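The same per-shell caveat applies here: a function defined interactively masks git only in the defining shell. To hide git from a separate script, the function can be exported into the script's environment; this is a bash-specific sketch, and the script itself must run under bash for the exported function to take effect:
git() { false; }
export -f git       # bash-only: pass the function down to child bash shells
./script            # the script now resolves git to the function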
Since we know the program is running in bash, one solution is, instead of "hiding" the program, to emulate bash's behaviour when a command is missing. We can find out what bash does when a command isn't found quite easily:
$ bash
$ not-a-command > stdout 2> stderr
$ echo $?
127
$ cat stdout
$ cat stderr
bash: not-a-command: command not found
We can then write this behaviour to a script with the executable name, such as git in the question's example:
$ echo 'echo >&2 "bash: git: command not found" && exit 127' > git
$ chmod +x git
$ PATH="$PWD:$PATH" git
$ echo $?
127
$ cat stdout
$ cat stderr
bash: git: command not found
Suppose I have a command git-local (it could be a Bash function or a binary in /usr/local/bin), and suppose I would like git-local to have the same tab completion as the git command. Finally, suppose that I'm efficient (read: lazy) and don't want to go find the completion code that git uses, copy it over, and bloat my .bashrc (or whatever external file I paste it into and then source). Is there a simple way I can have git-local use the same autocompletion as git?
From the Bash manual, 8.7 Programmable Completion Builtins:
If the -p option is supplied, or if no options are supplied, existing completion specifications are printed in a way that allows them to be reused as input.
Something like
$(complete -p git | awk '$NF="git-local"')
maybe?
E.g.:
$ complete -p foobar
-bash: complete: foobar: no completion specification
$ complete -p traceroute
complete -F _known_hosts traceroute
$ $(complete -p traceroute | awk '$NF="foobar"')
$ complete -p foobar
complete -F _known_hosts foobar
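One caveat, assuming the bash-completion package is in use: it loads completion specs lazily, so complete -p git fails until git's completion has actually been loaded, for example by tab-completing git once or by invoking bash-completion's loader helper directly:
$ complete -p git
-bash: complete: git: no completion specification
$ _completion_loader git
$ $(complete -p git | awk '$NF="git-local"')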