Run yarn/npm scripts from a subfolder's package.json - yarnpkg

I maintain a monorepo for the react-querybuilder package. I'm merging in the documentation website code under the /website directory, but not as a workspace (those are in /packages/*).
The /website directory has its own package.json with the docusaurus * scripts (start/build/deploy/etc.). I'd like to have scripts in the root /package.json that execute the scripts in /website/package.json. Currently I have something like this in /package.json:
{
  "scripts": {
    "website:install": "cd website && yarn",
    "website:start": "cd website && yarn start",
    "website:build": "cd website && yarn build",
    "website:deploy": "cd website && yarn deploy"
  }
}
Is there a better, more generic way to do that? This way I have to name every script twice, once in /package.json and once in /website/package.json.
(I tried --cwd, but that doesn't actually run scripts defined in that other directory's package.json. It runs the scripts defined in the root package.json from the other directory. E.g., yarn --cwd website build is effectively the same as yarn build, at least in my case.)
I thought there might be a yarn flag like --cwd (--pkg? --config?) that actually runs the scripts defined in the other directory, or maybe you'd have to specify a file.
Am I missing something?
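One generic workaround (a sketch rather than an official Yarn feature, and it assumes Yarn 1's behavior of appending any extra command-line arguments to the script it runs) is a single forwarding script in the root /package.json:
{
  "scripts": {
    "website": "cd website && yarn"
  }
}
Then yarn website build would expand to cd website && yarn build, and likewise for start, deploy, and any other script defined in /website/package.json.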

Related

Docker Build/Deploy using Bash Script

I have a deploy script that I am trying to use on my server for CD, but I am running into issues writing the bash script to complete some of the required steps, such as running npm and the migration commands.
How would I go about getting into the container's bash from this script, running the commands below, and then exiting so the script can finish bringing up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json ./
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume, and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete these volumes: and use the code that's built into the image.)
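For reference, this is a hedged sketch of the setup being described (the service name web and the /app path are assumptions); removing both entries under volumes: lets the container use the code and node_modules that are built into the image:
services:
  web:
    build: .
    volumes:
      - .:/app              # bind mount that hides the image's code
      - /app/node_modules   # anonymous volume that keeps a stale node_modules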
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$#"
Make this script executable and make it the image's ENTRYPOINT, leaving CMD as the command that actually starts the application. On every container startup it will run migrations and then run the main container command, whatever it may be.
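A sketch of how that wiring might look in the Dockerfile shown above (the entrypoint.sh file name is an assumption):
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]   # runs migrations, then exec's the CMD
CMD ["node", "index.js"]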
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to manually run migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up -d --force-recreate

Setting a laravel storage directory permission by ebextentions

I'm working on Elastic Beanstalk ebextensions. A storage permission denied error occurs on every deployment and I have to type a command manually to resolve it. Does the code below (.ebextensions/chmod.config) prevent the error from occurring?
container_commands:
  01addpermission:
    command: "chmod -R 755 /var/app/current/storage"
  01clearcache:
    command: "php /var/app/current config:cache"
The code sadly will not work. The reason is that container commands run while your app is in the staging folder, not in the current folder:
The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server.
You can try to use relative paths:
container_commands:
  01addpermission:
    command: "chmod -R 755 ./storage"
  02clearcache:
    command: "php artisan config:cache"
The alternative is to use a postdeploy platform hook, which runs commands after your app is deployed:
Files here run after the Elastic Beanstalk platform engine deploys the application and proxy server
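A minimal sketch of such a hook, assuming an Amazon Linux 2 platform where hook scripts live under .platform/hooks/postdeploy/ in the source bundle (the file name and the exact commands are illustrative, and the script must be executable):
#!/bin/bash
# .platform/hooks/postdeploy/01_storage_permissions.sh
chmod -R 755 /var/app/current/storage
php /var/app/current/artisan config:cache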

How do I use Yarn's `.pnp.js` file?

Using Yarn 2, the default installation method creates a .pnp.js file (Plug'n'Play) instead of a node_modules directory.
How do I use this file to run my Node application?
To run a Node application with Yarn's Plug'n'Play you must preload the .pnp.js file using the --require flag.
node --require ./.pnp.js foo.js
Note: Make sure that the --require path starts with ./ or ../.
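If you start the app through Yarn itself, you can also let Yarn inject the loader for you with yarn node (Yarn 2's wrapper around node):
yarn node foo.js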

Is there a difference between `yarn dev` and `yarn run dev`?

To run the local server for development I normally use yarn run dev.
But it seems yarn dev provides the same function. Is this command just a short alias for yarn run dev?
I couldn't find info on yarn dev in the docs.
You can leave out run from this command.
This isn't specific to the dev command: you can run any script directly by its name, without the run keyword.
So yarn dev and yarn run dev both do the same thing.
From the docs:
https://classic.yarnpkg.com/en/docs/cli/run/
It’s also possible to leave out the run in this command, each script can be executed with its name
A similar example is given for Yarn 2:
https://yarnpkg.com/cli/run
Same thing, but without the "run" keyword:
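For illustration, given a script like this in package.json (the vite command is just a placeholder):
{
  "scripts": {
    "dev": "vite"
  }
}
both yarn dev and yarn run dev execute vite.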

Why am I getting a "No such file or directory" message while running a command in Linux?

I am testing an application by starting certain commands in sequence. For one of the commands, when I cd into its directory and run it, it works, whereas when I run it directly by its full path it gives No such file or directory.
Running cd /opt/abc/ and then gulp serve works.
Whereas when I run it directly as /opt/abc/gulp serve, it fails.
You are assuming that your current directory (PWD) is in your PATH, which as root it should not be (I personally don't think it should be for any user, but definitely not for the superuser).
So when you type gulp serve, the shell searches your PATH and finds a gulp somewhere on it; type which gulp to see where it finds it. When you type the full path /opt/abc/gulp, there is no executable at that location. So that's not the gulp you have been running, and if there is a file called gulp there, it is not executable.
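A quick way to check this from the shell (paths below are illustrative):
which gulp                  # shows the gulp that PATH actually finds, e.g. /usr/local/bin/gulp
ls -l /opt/abc/gulp         # is there an executable file named gulp at this path at all?
cd /opt/abc && gulp serve   # works because PATH supplies gulp, not the current directory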
