AdonisJs 4.1: how to create a start script that migrates the db and starts the server? - bash

I have a dockerized AdonisJs App that I'm trying to deploy on ECS (AWS). I managed to deploy the image but now I don't know how to run migrations when I deploy.
While following a Udemy course I saw that somebody had to do the same thing, but with Laravel. In the Dockerfile, instead of running CMD ['artisan','serve'], he created a start.sh script that starts the app, puts it in the background, runs the migrations, and then brings the app back to the foreground. Here is the script:
#!/bin/sh
# turn on bash's job control
set -m
# Start the primary process and put it in the background
php-fpm &
# Start the helper process
php artisan migrate
# now we bring the primary process back into the foreground
# and leave it there
fg %1
I tried to do the same thing with Adonis; this is my script (one of many versions):
#!/bin/sh
# turn on bash's job control
set -m
# Start the primary process and put it in the background
adonis serve &
# Start the helper process
adonis migration:run
# now we bring the primary process back into the foreground
# and leave it there
fg %1
But I always get errors. For example:
The server starts, but then the migrations do not run because Adonis cannot connect to the database. I don't know how to debug this, since if I just start the app normally Adonis connects to the database without any problem.
(I tried this only locally.) The server starts and the migrations run, but then the server process doesn't come back to the foreground, so the application is not really started (curl localhost gives me curl: (7) Failed to connect to localhost port 80: Connection refused), and I can't stop the server with Ctrl+C either; I have to find the Docker container and stop it.
This is what the console shows me:
SERVER STARTED
info: serving app on http://0.0.0.0:80
Nothing to migrate
Could you please help me create a script that does this?
EDIT1: I noticed that even if I create a script with only "adonis serve" it still doesn't work, so maybe that's just not the right way to start the server through a script?

The first problem I needed to solve was that the migration command wasn't terminating. The cause was how I was starting the adonis-scheduler package: I was starting it from the start/kernel.js file. Instead, I created a scheduler.js (inside start/) with:
'use strict'
/*
|--------------------------------------------------------------------------
| Run Scheduler
|--------------------------------------------------------------------------
|
| Run the scheduler on boot of the web server.
|
*/
const Scheduler = use('Adonis/Addons/Scheduler')
Scheduler.run()
And added this in server.js:
new Ignitor(require('@adonisjs/fold'))
  .appRoot(__dirname)
  .preLoad('start/scheduler') // code added
  .fireHttpServer()
  .catch(console.error)
By doing this, the scheduler is only preloaded when the HTTP server boots, and my migration command now always terminates.
The second problem I faced was that "#!/bin/bash" wasn't supported, so I needed to change the script to the "#!/bin/sh" format. Unfortunately this shell doesn't support putting a job in the background and bringing it back to the foreground later, so I just run the migrations first and then start the server. This is the file:
#!/bin/sh
# Start the helper process
adonis key:generate
adonis migration:run --force
# Start the primary process
adonis serve
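A minimal way to wire this into the image (the paths below are assumptions, adjust them to your layout) is to copy the script into the image and use it as the container's command in the Dockerfile:
COPY start.sh /app/start.sh
RUN chmod +x /app/start.sh
CMD ["sh", "/app/start.sh"]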

Related

Jenkins screen session is killed after the pipeline finishes

I'm currently working on a deployment process that works fine until the last stage.
I have Jenkins installed on my Debian 10 machine. I have a git project with a Jenkinsfile inside.
All stages are working fine.
My problem now is that when I start a session from Jenkins with the screen command, the session is created (detached), but as soon as the pipeline finishes, the session doesn't exist anymore.
To create the session I use the following commands:
screen -S server ./start.sh
-> That will tell me: Must be connected to a terminal.
Then I tried this command:
screen -dm -S server ./start.sh
-> Here the session is created, but then removed after Jenkins finishes the pipeline
I found the following solution that is working fine for me:
sh 'JENKINS_NODE_COOKIE=dontKillMe ./start.sh'
That runs my bash script with the screen command, and the session is no longer killed when the pipeline finishes.
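For context, JENKINS_NODE_COOKIE=dontKillMe opts the spawned process out of Jenkins' process tree killer, so the same idea should also work combined with a detached screen session; a minimal sketch (the session and script names are just placeholders):
# keep the detached screen session alive after the pipeline ends
JENKINS_NODE_COOKIE=dontKillMe screen -dm -S server ./start.sh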

WebLogic in Docker: how to start managed servers automatically

I am learning Docker, and creating an image for Oracle WebLogic 12.2.1.4 server.
My image is ready, working fine. It contains
an admin server
two managed servers
When I run my image with docker run -d -p 7001:7001 --name WL oracle/weblogic-12.2.1.4.0:1.0 the admin server starts automatically because I added the following line at the end of my Dockerfile:
CMD /u01/oracle/user_projects/domains/$DOMAIN_NAME/startWebLogic.sh
But I need to start the managed servers manually. I need to log in to the container and start them by hand:
docker exec -it WL /bin/bash
./startManagedWebLogic.sh MANAGED_SERVER_1 http://localhost:7001 &
./startManagedWebLogic.sh MANAGED_SERVER_2 http://localhost:7001 &
This is not what I want. I want to start managed servers automatically after admin server is up and running.
I was thinking about creating a new bash script, copying it into the image, and using it to boot up the admin and managed servers. Like this:
start-wls-domain.sh
#!/bin/bash
/u01/oracle/user_projects/domains/$DOMAIN_NAME/startWebLogic.sh &
# there are more sophisticated ways to check the status of the admin server, but this is okay for a test
sleep 60
./startManagedWebLogic.sh MANAGED_SERVER_1 http://localhost:7001 &
./startManagedWebLogic.sh MANAGED_SERVER_2 http://localhost:7001 &
This script can be called from the Dockerfile with the CMD instruction.
But with this solution I lose the ability to see the output in the default Docker log; docker logs WL -f displays nothing.
Another issue with this bash script solution is that once the script finishes, the container stops running. Do I need an infinite loop at the end of this script?
If possible I would like to have a solution without start-wls-domain.sh.
What is the best and easiest way to start Weblogic managed servers automatically within a Docker container?
I followed the suggestions and ran the different servers in different containers. That way I was able to start the servers properly.
I published the solution on Github, here.
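For completeness, if a single container is kept anyway, one common pattern is to wait for the admin server instead of sleeping, start the managed servers, and then keep the script (and therefore the container) alive while streaming the server logs to docker logs. A rough sketch, where the port check and the log and script paths are assumptions about a standard domain layout:
#!/bin/bash
DOMAIN_HOME=/u01/oracle/user_projects/domains/$DOMAIN_NAME
# start the admin server in the background
$DOMAIN_HOME/startWebLogic.sh &
# wait until the admin server answers on port 7001 instead of sleeping blindly
until curl -s http://localhost:7001/console > /dev/null; do sleep 5; done
$DOMAIN_HOME/bin/startManagedWebLogic.sh MANAGED_SERVER_1 http://localhost:7001 &
$DOMAIN_HOME/bin/startManagedWebLogic.sh MANAGED_SERVER_2 http://localhost:7001 &
# keep PID 1 alive and forward the server logs to docker logs
tail -f $DOMAIN_HOME/servers/*/logs/*.log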

Composer script to start a background process, whose lifetime is bound to that of the Composer script

I've started experimenting with Composer Scripts.
I have a project where there are "Functional tests" of the API endpoints. Running the whole test suite requires running the following commands in order:
composer install to install all required dependencies of the backend APIs
php yii server --test to start a lite server that is connected to the "test" MySQL database. The test server starts running on localhost:9000.
sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml to run the actual tests. This last command triggers the execution of all test cases, most of which execute HTTP calls to the lite server launched in step 2.
I would like to automate and "atomize" this 3 step process into a single Composer script that can be easily started, killed and restarted effortlessly.
Here's my current progress:
"scripts": {
"test-functional": [
"#composer install",
"php yii server --test",
"sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml"
]
}
The problem is that the second command (php yii server --test) "steals" the terminal, because PHP's built-in lite server requires the terminal to stay open while the command is running. Killing the command kills the lite server as well. I tried suffixing the second step of the script with &, which generally makes processes go to the background and not steal the terminal, but it seems this trick doesn't work for Composer Scripts. Is there any other workaround or possibility that I'm missing?
My final goal is to make the 3 steps execute in an atomic way, output the results of the tests and end the command, cleaning up everything (including killing the lite server, launched in step 2).
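One workaround (a sketch, not a tested recipe; the file name is made up) is to move the orchestration into a small shell script that the Composer script calls as a single step. The script backgrounds the lite server, makes sure it is killed on exit, and runs the tests in between:
#!/bin/sh
# run-functional-tests.sh - wrapper that the "test-functional" script would call
composer install
php yii server --test &
SERVER_PID=$!
# make sure the lite server is killed when the script exits, for any reason
trap 'kill $SERVER_PID 2>/dev/null' EXIT
# give the server a moment to bind to localhost:9000 (crude but simple)
sleep 3
sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml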

AWS ECS trouble - Running shell script to boot program

I am trying to run a Docker image on amazon ECS. I am using a command that starts a shell script to boot up the program:
CMD ["sh","-c", "app/bin/app start; bash"]
in order to start it, because for some reason when I run the app (an Elixir/Phoenix app) in the background it crashes immediately, but if I run it in the foreground it is fine. If I run it this way locally everything works fine, but when I try to run it in my cluster it shuts down. Please help!
Docker keeps track of your running foreground process; if that process stops, the container stops. The reason your container works when you append "bash" to the command is that bash doesn't stop.
I guess you use a shell script to start an application that serves in the background, like nginx or another daemon. So try to find an option that makes the app run in the foreground; that will keep your container alive. E.g. nginx has the "daemon off" option for this.
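For an Elixir app packaged as a release, assuming a Distillery-style release script that provides a foreground task, the simplest version of this advice is to exec the release in the foreground so it stays the container's main process:
#!/bin/sh
# start.sh (assumed entrypoint): exec keeps the release as PID 1, so
# Docker/ECS can track it and stream its output to the logs
exec app/bin/app foreground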
for some reason when I run the app (elixir/phoenix app) in the background it was crashing immediately
So you have a broken application and you are looking for a kludge to make it look like it somewhat works. This is not a reliable approach at all.
Instead you should:
make it work in the background
use systemctl or upstart to manage restarts of the Erlang VM on crashes
Please note that it matters where you compile your application. It must be exactly the same architecture/container as the production one, with the same Erlang, Elixir, and OS versions; otherwise nobody guarantees it will be robust or even work.

Elixir phoenix debugging leads to interactive shell instead of pry

I've been trying to debug a phoenix application.
To do so I used the following instructions:
- To set a break point: require IEx; IEx.pry
- To start a debugging server: iex -S mix phx.server
The problem comes when starting the server: the above instruction drops me into the Elixir interactive shell (iex(1)>) and does not seem to run the server. From there, if I execute the code manually it stops at the prys, but I was hoping to have the server running and stopping whenever a request hits a pry. Is there any solution?
I am currently using Erlang 20, Elixir 1.5 and Phoenix 1.3
It is normal behavior that the iex -S mix phx.server command opens an IEx shell; at the same time it runs the web server on the port configured for the Phoenix endpoint in the current MIX_ENV. For each IEx.pry() breakpoint it hits, it should prompt on the command line whether you want to open an IEx shell there.
Check which MIX_ENV your server is running in, e.g. by typing the following in the IEx shell:
Mix.env
And then find the configuration in the files within the config/ directory. The line should look like this:
config :nameofyourapp, Web.Endpoint,
http: [port: 4000],
Maybe something other than 4000 is configured there? Or maybe some other application, or even a previously started instance of your web application, already listens on that port and therefore prevents the debug server from binding to it? In that case, shut down the other running mix phx.server processes.
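To check whether something is already bound to the configured port (4000 here is just the value from the snippet above), a quick way on Linux/macOS is:
# show the process currently listening on port 4000, if any
lsof -i :4000
# or, on Linux:
ss -ltnp | grep ':4000'
Then stop the offending process, e.g. an old mix phx.server instance.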
