How to start Strapi in the background?

Usually we use "strapi start" to start Strapi.
I'm hosting it on an AWS Ubuntu instance and
tried "strapi start &" to run it in the background. However, once the terminal is closed, the Strapi console can no longer be accessed.

I got a "script not found: server.js" error when using @user1872384's solution.
So here is the correct way to run Strapi in background mode:
NODE_ENV=production pm2 start --name APP_NAME npm -- start
This just tells pm2 to run the npm start command and lets npm work out which script to run.
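To confirm it keeps running after you close the terminal, the standard pm2 commands apply (APP_NAME as chosen above):
pm2 list            # status of all managed processes
pm2 logs APP_NAME   # tail the app's stdout/stderr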
Hope it helps someone.

To run Strapi in development mode, use the following pm2 command from your project folder:
pm2 start npm --name my-project -- run develop
and
pm2 list
to view the status

You can also start it with pm2 by typing:
pm2 start "yarn develop"

You need to use pm2.
To start:
npm install pm2 -g
NODE_ENV=production pm2 start server.js --name api
To list all processes:
pm2 list
┌──────────┬────┬─────────┬──────┬───────┬────────┬─────────┬────────┬─────┬────────────┬────────┬──────────┐
│ App name │ id │ version │ mode │ pid   │ status │ restart │ uptime │ cpu │ mem        │ user   │ watching │
├──────────┼────┼─────────┼──────┼───────┼────────┼─────────┼────────┼─────┼────────────┼────────┼──────────┤
│ api      │ 0  │ 0.1.0   │ fork │ 21817 │ online │ 0       │ 2m     │ 0%  │ 108.0 MB   │ ubuntu │ disabled │
└──────────┴────┴─────────┴──────┴───────┴────────┴─────────┴────────┴─────┴────────────┴────────┴──────────┘
To stop, use the id:
pm2 stop 0
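Since the point is to keep Strapi up after you disconnect, it's also worth persisting the process list across server reboots. A short sketch using pm2's standard startup and save commands (the exact init-system command pm2 prints depends on your OS):
pm2 startup   # prints a command to register pm2 with the init system; run what it prints
pm2 save      # snapshot the current process list so it is resurrected on boot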

First, install pm2 globally:
npm install pm2 -g
Then add a server.js file to the root of your project containing the lines below:
const strapi = require('strapi');
strapi().start();
Save it, then run:
pm2 start server.js

The best way is to use pm2 and its ecosystem.config.js file.
First, install pm2:
npm i -g pm2@latest
In ecosystem.config.js, add the following code:
module.exports = {
  apps: [
    {
      name: 'give-your-app-a-name',
      script: 'npm',
      args: 'start',
      watch: true, // automatically restart the server for file changes
      max_memory_restart: '450M',
      env: {
        NODE_ENV: 'production',
      },
    },
    {
      name: 'give-your-another-app-a-name',
      script: 'npm',
      args: 'start',
      env: {
        NODE_ENV: 'production',
      },
    },
  ],
}
Finally, on your server, run:
pm2 start ecosystem.config.js
That's it.
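If you only want to launch one of the apps declared in the file, pm2's --only flag takes the app name from the config:
pm2 start ecosystem.config.js --only give-your-app-a-name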

Here's the official page about starting Strapi with PM2.
Starting with the strapi command
By default there are two important commands:
yarn develop to start your project in development mode.
yarn start to start your app for production.
You can also launch these under your process manager, using the start or develop script:
pm2 start npm --name my-app -- run develop


Related

Windows Docker Dockerfile COPY file inside folder

I'm trying to build a Dockerfile to copy a file into a container; I'm using Windows 10. This is my Dockerfile:
FROM openjdk:8
COPY /target/myfile.java /
And I'm getting the error:
failed to solve with frontend dockerfile.v0: failed to build LLB: failed to compute cache key: "/target/myfile.java" not found: not found
I already tried //target//myfile.java, \\target\\myfile.java, \target\myfile.java, target/myfile.java, target\myfile.java but none of them worked.
If I put myfile.java in the same directory as the Dockerfile and use COPY myfile.java /, it works without a problem. So the problem is copying a file that's inside a folder. Any suggestions?
I tried your Dockerfile locally and it built fine with the following directory structure:
Project
│   Dockerfile
│
└───target
        myfile.java
I built it from the 'Project' directory with the following command:
docker build . -t java-test
I could only reproduce the error when the Docker server couldn't find the 'myfile.java', i.e. using the following directory structure:
Project
│   Dockerfile
│
└───target
    └───target
            myfile.java
So your Dockerfile looks fine. Just make sure you build it from the right directory with the correct build context, and that the file is stored in the correct place locally.
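If it still fails, one way to see what the Docker daemon actually received as build context is a throwaway debug stage (a sketch, not part of the original answer; the stage name is arbitrary):
# appended temporarily to the end of the Dockerfile
FROM busybox AS debug-context
COPY . /ctx
RUN find /ctx -maxdepth 2
Build just that stage with docker build --target debug-context . (add --progress=plain under BuildKit) and the listing shows every path the daemon can see.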

REST API with Hyperledger Fabric and Node.js on Heroku

I am trying to connect my Hyperledger Fabric network to my backend on Heroku.
I made all the connections as the examples suggest.
When I deploy to Heroku I get the following error:
[NetworkConfig101.js]: NetworkConfig101 - problem reading the PEM file :: Error: ENOENT: no such file or directory
My .pem files are in the same folder as my configuration file.
Use a path relative to the working directory. For example, given this directory tree:
.
├── app.js
└── artifacts
    ├── crypto-config
    │   ├── ca.crt
    │   └── key.pem
    └── network-config.yaml
In network-config.yaml, you should use the path:
path: ./artifacts/crypto-config/ca.crt
Another way is to use an absolute path:
path: /data/app/artifacts/crypto-config/ca.crt
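The underlying issue is that relative paths like the one above are resolved against the process's working directory (process.cwd()), not against the folder containing the config file. A small Node sketch of the difference, using the file names from the tree above:
// run this from different directories to see the first value change
const path = require('path');

const relative = './artifacts/crypto-config/ca.crt';
console.log(path.resolve(relative));                                    // depends on where the process was started
console.log(path.resolve(__dirname, 'artifacts/crypto-config/ca.crt')); // anchored to this source file's folder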

pm2 deploy fails after full fetch

I want to deploy a simple app to my EC2 instance, but I get this error:
bash: line 0: cd: /home/ubuntu/source: No such file or directory
fetch failed
Deploy failed
1
I don't understand why there is a 'source' directory when I haven't created it on my virtual or local machine. It's as if pm2 created it on its own. Can someone explain why it is there and how I can deploy successfully?
My ecosystem.config.js:
module.exports = {
  apps: [{
    name: 'puk',
    script: 'project/'
  }],
  deploy: {
    production: {
      user: 'ubuntu',
      host: 'ec2-35-180-119-129.eu-west-3.compute.amazonaws.com',
      key: '~/.ssh/id_rsa.pub',
      ref: 'origin/master',
      repo: 'git@github.com:nalnir/pukinn.git',
      path: '/home/ubuntu/',
      'post-deploy': 'npm install && pm2 startOrRestart ecosystem.config.js'
    }
  }
}
Full log after running the pm2 deploy production command:
--> Deploying to production environment
--> on host ec2-35-180-119-129.eu-west-3.compute.amazonaws.com
○ deploying origin/master
○ executing pre-deploy-local
○ hook pre-deploy
○ fetching updates
○ full fetch
bash: line 0: cd: /home/ubuntu/source: No such file or directory
fetch failed
Deploy failed
1
I faced the same issue and found this thread, but the answers and comments above were not very helpful for me, and there is no helpful documentation on the PM2 website either. So here are all the steps, one by one, from the beginning:
Run the setup first, before calling the update command on any existing folder, because PM2 creates its own folder structure: [current, source, shared] (read here).
pm2 deploy ecosystem.config.js stage setup
When you want to deploy new code, run the command below:
pm2 deploy ecosystem.config.js stage update --force
Why --force?
You may have changes on your local system that aren't pushed to your git repository, and since the deploy script gets its updates via git pull they will not be on your server. If you want to deploy without pushing any data, you can append the --force option.
My deploy object in ecosystem.config.js file :
deploy : {
  stage : {
    // Deploy new: pm2 deploy ecosystem.config.js stage setup
    // Update:     pm2 deploy ecosystem.config.js stage update --force
    user : '_MY_SERVER_USER_NAME_', // remote server username
    host : '_MY_REMOTE_SERVER_IP_', // remote server IP
    ref  : 'origin/stage', // remote branch to deploy
    repo : 'git@bitbucket.org:_MY_REPO_SSH_CLONE_URL_.git', // repo URL
    path : '_REMOTE_DIRECTIVE_', // source root path, e.g. /home/ubuntu/
    'pre-deploy-local': '',
    'post-deploy' : 'npm install && pm2 reload ecosystem.config.js --only MyAppName',
    'pre-setup': ''
  }
}
I hope it will be helpful for others.
The script parameter expects the actual script path, not a directory.
You should change it to the name of your main script, for example: script: './index.js'
You should also update deploy.production.path to something like /home/ubuntu/project
As stated in the Ecosystem file reference, script expects the path of the script to launch.
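Putting both fixes together, the relevant parts of the corrected ecosystem file might look like this (the index.js entry point and project folder name are assumptions based on the suggestions above):
module.exports = {
  apps: [{
    name: 'puk',
    script: './index.js' // path to the actual entry script, not a directory
  }],
  deploy: {
    production: {
      // ...other fields unchanged...
      path: '/home/ubuntu/project' // pm2 creates current/source/shared under this folder
    }
  }
}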

AWS CodeBuild does not work with Yarn Workspaces

I'm using Yarn Workspaces in my repository and AWS CodeBuild to build my packages. When a build starts, CodeBuild takes 60 seconds to install all packages, and I'd like to avoid this delay by caching the node_modules folder.
When I add:
cache:
  paths:
    - 'node_modules/**/*'
to my buildspec file and enable LOCAL_CUSTOM_CACHE, I receive this error:
error An unexpected error occurred: "EEXIST: file already exists, mkdir '/codebuild/output/src637134264/src/git-codecommit.us-east-2.amazonaws.com/v1/repos/MY_REPOSITORY/node_modules/@packages/configs'".
Is there a way to remove this error configuring AWS CodeBuild or Yarn?
My buildspec file:
version: 0.2
phases:
  install:
    commands:
      - npm install -g yarn
      - git config --global credential.helper '!aws codecommit credential-helper $@'
      - git config --global credential.UseHttpPath true
      - yarn
  pre_build:
    commands:
      - git rev-parse HEAD
      - git pull origin master
  build:
    commands:
      - yarn run build
      - yarn run deploy
  post_build:
    commands:
      - echo 'Finished.'
cache:
  paths:
    - 'node_modules/**/*'
Thank you!
Update 1:
Yarn was attempting to create the folder /codebuild/output/src637134264/src/git-codecommit.us-east-2.amazonaws.com/v1/repos/MY_REPOSITORY/node_modules/@packages/configs with the yarn command in the install phase. This folder is one of my repository packages, called @packages/configs. When I run yarn on my computer, Yarn creates folders linking my packages as described here. An example of what my node_modules structure looks like on my computer:
node_modules/
|-- ...
|-- @packages/
|   |-- configs/
|   |-- myPackageA/
|   `-- myPackageB/
|-- ...
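For context, those symlinked folders are produced by Yarn workspaces; a minimal root package.json that yields a layout like the one above might look like this (the glob is an assumption based on the packages folder mentioned in the buildspec comments below):
{
  "private": true,
  "workspaces": ["packages/*"]
}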
I was having the exact same issue ("EEXIST: file already exists, mkdir"). I ended up using the S3 cache instead and it worked pretty well. Note: for some reason the first upload to S3 took way too long (10 minutes); the subsequent ones went fine.
Before:
[5/5] Building fresh packages...
--
Done in 60.28s.
After:
[5/5] Building fresh packages...
--
Done in 6.64s.
If you already have your project configured, you can edit the cache under Project -> Edit -> Artifacts -> Additional configuration.
My buildspec.yml is as follows:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 14
  build:
    commands:
      - yarn config set cache-folder /root/.yarn-cache
      - yarn install --frozen-lockfile
      - ...other build commands go here
cache:
  paths:
    - '/root/.yarn-cache/**/*'
    - 'node_modules/**/*'
    # This third entry is only if you're using monorepos (under the packages folder)
    # - 'packages/**/node_modules/**/*'
If you use NPM, you'd do something similar with slightly different commands:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 14
  build:
    commands:
      - npm config -g set prefer-offline true
      - npm config -g set cache /root/.npm
      - npm ci
      - ...other build commands go here
cache:
  paths:
    - '/root/.npm/**/*'
    - 'node_modules/**/*'
    # This third entry is only if you're using monorepos (under the packages folder)
    # - 'packages/**/node_modules/**/*'
Kudos to: https://mechanicalrock.github.io/2019/02/03/monorepos-aws-codebuild.html

Docker - Creating multiple containers/environments with different versions

I'm starting with MongoDB and taking four courses. All of them use different versions of MongoDB, Python, Node.js, ASP.NET, the MEAN stack, etc. The structure of my desired workspace:
courses
├─ mongodb_basic
│  ├─ hello_world-2.7.py
│  └─ data
│     └─ db
├─ python-3.6_mongodb
│  ├─ getting_started.py
│  └─ data
│     └─ db
├─ dotnet_and_mongodb
│  ├─ (project files)
│  └─ data
│     └─ db
├─ mongodb_node
│  ├─ (project files)
│  └─ data
│     └─ db
└─ mean_intro
   └─ (project files)
I want to keep my Windows 10 system clean by using Docker instead of installing all of this on the host, but I'm stuck at the first course and don't know how to:
link containers
python/pymongo <-> mongodb
aspnet <-> mongodb
... <-> mongodb
map data folders
start/stop linked containers with one command (desirable)
I'd like to keep the workspace on the host (an external HDD) so I can work on different computers (three Windows 10 PCs).
Google turns up many tutorials (containerize, docker-compose, etc.), and I don't know where to start.
I think it should be possible to do what you are trying to do using docker-compose and correctly defined Dockerfiles. So if you are wondering where to start, I would suggest getting acquainted with Dockerfiles and docker-compose.
To answer your question:
linking containers: that can be done using docker-compose. Specify the container services you want to use in a compose file like the one specified here.
NOTE: the volumes: declaration is where you would specify your workspace folder structure for the containers to access.
map folder/data: again, I would check out the link mentioned above. In their Dockerfile they use the ADD command to copy the current host directory into the container at the /code path, and that path was also declared as a volume: in the compose file. What does that mean? Whatever you change in the host workspace will show up in that directory of the container.
start/stop with one command: you should be able to create, start, or stop all the services, or a specific service, using the docker-compose up, docker-compose start, or docker-compose stop commands (a compose sketch follows at the end of this answer).
For your application you might even be able to get away with defining your workspace as volumes in all of the Dockerfiles and then building them with a script. Or you could use an orchestration service like Kubernetes, but that might be overkill.
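As a concrete starting point, here is what a compose file for one of the courses might look like (a sketch only: service names, image tags, and the script name are assumptions; the volume line keeps MongoDB's data in the course's data/db folder). docker-compose up -d starts both containers and docker-compose down stops them:
# courses/python-3.6_mongodb/docker-compose.yml (hypothetical)
version: '3'
services:
  mongo:
    image: mongo:3.6            # pin whichever version the course requires
    volumes:
      - ./data/db:/data/db      # database files live in the workspace on the host
  app:
    image: python:3.6
    working_dir: /usr/src/app
    volumes:
      - .:/usr/src/app          # the course folder, editable from the host
    command: python getting_started.py
    depends_on:
      - mongo                   # reachable from the app as hostname "mongo"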
Hope this is helpful.
