I'd like to use Yarn's Workspaces feature to run a shell command in each of my workspaces.
Yarn supports the yarn workspaces run foo command, but foo must be a script defined in package.json, not an arbitrary command (e.g. echo "foobar").
Ideally I'd like to have a single script, e.g. "foo": "yarn workspaces run echo 'foobar'", in my top-level package.json. One workaround is to add a script called foobar to each workspace's package.json and have the top-level package.json delegate to these, but with a non-trivial number of workspaces this becomes cumbersome to maintain.
In Yarn 3 (Berry), the exec and workspaces foreach commands can be combined to achieve this. For example: yarn workspaces foreach exec echo 'foobar'
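For Yarn 1, where workspaces run can't take an arbitrary command, one hypothetical fallback is to loop over the workspace folders in plain shell. The packages/ layout and folder names below are assumptions for illustration:

```shell
# Hypothetical workspace layout, standing in for a real monorepo
mkdir -p /tmp/ws-demo/packages/pkg-a /tmp/ws-demo/packages/pkg-b

# Run an arbitrary command in each workspace folder; the subshell
# keeps the cd from leaking into the outer script
for ws in /tmp/ws-demo/packages/*/; do
  (cd "$ws" && echo "foobar from $(basename "$ws")")
done
```

This loses Yarn's topological ordering and parallelism, but it avoids adding a script stanza to every package.json.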
The code below is a step in a Makefile which is executed when a certain stage runs in CI/CD.
.PHONY:deploy_endpoint_configuration
deploy_endpoint_configuration:
#echo Deploying endpoint configuration
gcloud endpoints services deploy ocr/model_adapter/predict_contracts.yaml --project $(project) &> $(project_dir)/deploy_contract.txt
cat $(project_dir)/deploy_contract.txt | grep -Eo '\d*-\d*-\w*' | tail -n1 > $(project_dir)/config_id.txt
What I'm experiencing is that the content of deploy_contract.txt is always empty when this piece of code is executed in the pipeline (e.g. GitLab), and I don't understand why. Does it have to do with the fact that Make executes a new shell for every command? I'm not entirely sure, and it's hard to debug. I can reproduce the issue when I run it via gitlab-runner exec docker <stage> (for local debugging purposes). But when I run it locally on macOS (i.e. just make deploy_endpoint_configuration, not wrapped in a container as before), it runs and functions like it should: deploy_contract.txt and config_id.txt are not empty but contain the stdout and stderr output.
for reference:
image used in the ci/cd stage = image: dsl.company.com:5000/python:3.7-buster
on top of that gcloud cli is installed to make use of gcloud commands.
Does anyone have an idea why no content is written to my files? (It's definitely deploying, so there must be something.)
My suspicion is that it's because your command script relies on bash features. GNU make runs /bin/sh as its shell by default. On some systems (like RedHat and RedHat-derived systems), /bin/sh is actually a link to bash, so makefile recipes that use bash-specific features will work. I believe macOS does the same, although I don't do Mac.
On other systems, like Debian and Debian-derived systems such as Ubuntu, /bin/sh is a simple, fast POSIX-compliant shell like dash, which doesn't support fancy bash things, and so makefile recipes that use bash-specific features will not work.
Probably your container image is one of the latter type, while macOS (your local system) uses bash as the shell.
You are using &> which is a bash-specific feature, that is not supported by POSIX shells.
You should write this using POSIX syntax, which is >$(project_dir)/deploy_contract.txt 2>&1
Either that or you can try overriding the shell in your makefile:
SHELL := /bin/bash
to force make to run bash instead... this will work as long as the container image you're running does actually have bash in it.
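The difference between the two redirection styles can be seen in a small sketch. The file path here is arbitrary; under bash, &> file captures both streams, while a plain POSIX shell such as dash parses cmd &> file as cmd & (background) plus > file (truncate), leaving the file empty:

```shell
# POSIX-portable capture of stdout AND stderr: redirect stdout first,
# then duplicate stderr onto it with 2>&1
{ echo "to stdout"; echo "to stderr" >&2; } > /tmp/both.txt 2>&1

# The file now contains both lines
cat /tmp/both.txt
```

This is why the original recipe deploys successfully but leaves deploy_contract.txt empty in the dash-based container: the gcloud command was backgrounded and its output never redirected.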
I've run into a pretty dumb and frustrating issue.
My team is using this script quite regularly for development
cd client && source .env && yarn run start
They are all on macOS and it works fine. I am on Ubuntu and for some reason NPM sends this command to sh instead of bash (on sh the source command is . ) which throws an error.
I've spent hours trying to figure out how to change that but I can't see anything. Is there some setting I can change? Why does NPM choose sh instead of the much more common bash or zsh? Is there anything I can do to fix this?
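A portable rewrite of the team script can be sketched using the fact noted in the question: POSIX sh spells source as `.`. The directory layout and .env contents below are hypothetical stand-ins:

```shell
# Hypothetical project layout with a .env that exports variables
mkdir -p /tmp/client
printf 'export API_URL=http://localhost:3000\n' > /tmp/client/.env

# `.` works in both sh and bash, unlike the bash-only `source`;
# POSIX requires a path containing a slash, hence ./.env
cd /tmp/client && . ./.env && echo "$API_URL"
```

Rewriting the npm script as cd client && . ./.env && yarn run start would make it work regardless of which shell npm picks.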
I want to make a shell script to start up my project env.
I'm using iTerm2 with zsh and oh-my-zsh installed.
I want to:
Open directory
Activate Python virtualenv
Run Django manage command
Switch to the new tab
Change directory
Run gulp command to watch for frontend changes
All I got is this:
#!/bin/zsh
cd ~/Projects/python/project_name
source ~/virtualenvs/project_name/bin/activate
python ./backend/manage.py runserver
tab
cd front
gulp watch
And as you can expect, this doesn't work. Can you point me in the direction of where I should look, or is this even possible to do with just a shell script?
Completely possible.
I did pretty much the same thing you're trying (although it is a Rails project) using an NPM package called ttab.
Install NPM.
Install ttab: npm install -g ttab.
You can run the commands in the new tab like so:
# First switch to directory
cd front
# Open new tab in that directory and execute
ttab -G eval "gulp watch"
Note: You can execute several commands if needed, like gulp watch; rails s.
If you need to run a command on the original tab that is also on a different directory, you can create a procedure/function in your script file to do that:
# Define the function before it is called
gotofolder()
{
cd ~/mydirectory
}
# start the other tabs (...)
# change the original tab directory
gotofolder
# Run Rails or whatever
./bin/rails s
If you want to take a look at how I did this, please check out the confereai.sh script in my MDFR repo.
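Putting the pieces together, the whole startup script from the question might be sketched as below. This assumes ttab is installed (npm install -g ttab) and reuses the question's paths; backgrounding runserver with & is an assumption, since it would otherwise block the rest of the script:

```shell
#!/bin/zsh
# Sketch only: requires ttab and the project paths from the question
cd ~/Projects/python/project_name
source ~/virtualenvs/project_name/bin/activate

# Run the dev server in the background so the script can continue
python ./backend/manage.py runserver &

# Open a new tab in front/ and start the gulp watcher there
cd front
ttab -G eval "gulp watch"
```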
I recently installed Go on our server running CentOS 6.3. The install appears to have gone fine. However, I made a test "hello world" script, and when I run it I get the following output.
fork/exec /tmp/go-build967564990/command-line-arguments/_obj/a.out: permission denied
Now, running go env or other go commands seems to work. At first I figured it was a permission issue, but running as the root user I get the same thing.
I encountered this issue today but the solutions above did not work. Mine was fixed by simply running:
$ export TMPDIR=~/tmp/
then I was able to get the script to run with:
$ go run hello.go
hello, world
The only downside is you have to run export TMPDIR every time you want to run an application.
Kudos to Adam Goforth
Just guessing: your *nix perhaps disables executing programs in /tmp for security reasons. It might be configurable in CentOS, but I don't know that.
The alternative solution: it seems you're using go run to execute a Go program (which is no more a script than a C program is). Try a normal build instead (assuming $GOPATH=~, the easy possibility); i.e. instead of
me:~/src/foo$ go run main.go
try
me:~/src/foo$ go build # main.go should not be necessary here
me:~/src/foo$ ./foo
This approach will still use /tmp-whatever to create the binary, IIRC, but it will not attempt to execute it from there.
PS: Do not run these commands as root. There's no need for that with a correct setup.
I am using Fedora 31 and got a similar error, which brought me here. I could not run the Go debugger used by JetBrains IntelliJ Ultimate/GoLand without a fork/exec permission denied error. The solution was this:
setsebool deny_ptrace 0
See https://fedoraproject.org/wiki/Features/SELinuxDenyPtrace for details.
exec.Command returns this struct: type Cmd
The standard library comment says:
// Dir specifies the working directory of the command.
// If Dir is the empty string, Run runs the command in the
// calling process's current directory.
Dir string
So to resolve this, you can set the working directory of the command yourself:
cmd := exec.Command(xxxxx)
cmd.Dir = xxxxPath
and then you can call Run(), Output(), or another func.
Instead of setting TMPDIR, which might affect other programs, you can set GOTMPDIR:
mkdir /some/where/gotmp
export GOTMPDIR=/some/where/gotmp
Then add export GOTMPDIR=/some/where/gotmp to your profile (e.g. .bash_profile) to make the variable permanent.
To fix this issue on my Chromebook I just remounted /tmp as executable. There may be security implications in doing this, but since go run works on other platforms I figure maybe it's not that bad (especially on a local dev machine):
sudo mount -i -o remount,exec /tmp/
I added this to my .bash_profile script.
Consider trying:
sudo mount -o remount,exec /tmp
I've tried executing the following:
#!C:\cygwin\bin\bash.exe
ls ${WORKSPACE}
But that doesn't find ls (even if it's on the windows path). Is there any way to set this up?
UPDATE: In other words, I want to be able to set up a build step that uses cygwin bash instead of windows cmd like this page shows you how to do with Python.
So put your Cygwin bin directory in your PATH.
In case you don't know how to do it (Control Panel -> System -> Advanced -> Environment Variables), see: http://support.microsoft.com/kb/310519
That shell script has two errors: the hash-bang line should be "#!/bin/bash", and ${WORKSPACE} is not a shell variable. Hudson has a bunch of variables of its own which are expanded in the commands you specify to be run (i.e. when you add commands in the web GUI).
If you want to run Hudson build step on the Cygwin command line, you need to figure out what command Hudson runs and in which directory.
To give a more specific answer, you need to show us how your project is configured and what steps you want to run separately.
Provided cygwin's bin folder is in your path, the following works for me:
#!/bin/sh
ls ${WORKSPACE}
I find Hudson does not pick up environment variable changes unless you restart the server.
You might want to try giving the full path to ls:
/cygdrive/c/cygwin/bin/ls
One other thing that seems to work is to use this:
#!C:\cygwin\bin\bash.exe
export PATH=$PATH:/usr/bin
ls
But it would be nice not to have to modify the path for every script.
Have you thought about PowerShell? As much as I like Cygwin, it's always been a little flaky. PowerShell is a solid, fully functional shell on Windows. Another option is Windows Services for UNIX; it gives you Korn shell or C shell, which aren't quite as nice as bash, but get the job done.
You will need to pass the --login (aka -l) option to bash so that it sources Cygwin's /etc/profile and sets up the PATH variable correctly. This will cause the current directory to be changed to the default "home", but you can set the environment variable CHERE_INVOKING to 1 before running bash -l and it will stay in the current directory if you need to preserve that.
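As a sketch, the build-step invocation could look like the following. The Cygwin install path is the one from the question; note that CHERE_INVOKING has to be set before bash starts, so it goes on the command line rather than inside the script:

```shell
# -l (--login) makes bash source /etc/profile, which puts /usr/bin
# (and therefore ls) on PATH; CHERE_INVOKING=1 keeps the login shell
# in the invoking directory instead of cd-ing to $HOME
CHERE_INVOKING=1 C:/cygwin/bin/bash.exe --login -c 'ls "${WORKSPACE}"'
```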