Run a shell script on dokku app deployment

I've been looking for a way to run a one-time script that loads data into our database. Currently we're using dokku-alt for our development environment, and we have a Python script that updates the schema, data, and functions our application needs. The problem I'm facing is finding a way to run that script on application deployment through dokku-alt.
I've ventured into using a worker, but the workers don't behave the way I'd expect: from what I've noticed, once any worker process completes, every process gets terminated. This is NOT what we need. We need the script to run once, load our data and schema, and exit gracefully, while the web process keeps running; the finished process shouldn't cause a kill signal to be sent to the others.
So my question is: is there a way to run a script just once on deployment, without having to write a custom plugin? Here's the deploy output showing the behavior:
05:23:07 schema.1 | started with pid 15
05:23:07 function.1 | started with pid 17
05:23:07 data.1 | started with pid 19
05:23:07 web.1 | started with pid 21
05:23:07 web.1 | Picked up JAVA_TOOL_OPTIONS: -Xmx384m -Xss512k -Dfile.encoding=UTF-8 -Djava.rmi.server.useCodebaseOnly=true
05:23:12 function.1 | Begin dbupdater
05:23:12 function.1 | pq://user:UcW3P587Eki8Fqrr@postgresql:5432/db
05:23:13 schema.1 | Begin dbupdater
05:23:13 data.1 | Begin dbupdater
05:23:13 data.1 | pq://user:UcW3P587Eki8Fqrr@postgresql:5432/db
05:23:13 schema.1 | pq://user:UcW3P587Eki8Fqrr@postgresql:5432/db
05:23:13 schema.1 | do (AccountCrosstabKey_create.sql)
05:23:13 schema.1 | Done
05:23:13 data.1 | do (Accountinfo_data.sql)
05:23:13 function.1 | do (Connectby_create.sql)
05:23:13 function.1 | Done
05:23:13 data.1 | Done
05:23:13 schema.1 | exited with code 0
05:23:13 system | sending SIGTERM to all processes
05:23:13 function.1 | terminated by SIGTERM
05:23:13 data.1 | terminated by SIGTERM
05:23:13 web.1 | terminated by SIGTERM
Python script:
#!/usr/bin/python
import os
import sys
import glob
import postgresql

try:
    print('Begin dbupdater')
    # py-postgresql expects pq:// URLs, so rewrite the postgres:// scheme
    dbhost = os.environ.get('DATABASE_URL', 'localhost').replace('postgres://', 'pq://')
    print(dbhost)
    targetDir = sys.argv[1]
    db = postgresql.open(dbhost)
    os.chdir(targetDir)
    # run every .sql file in the target directory as an anonymous plpgsql block,
    # stripping the DO $do$ wrapper so the body can be passed to db.do()
    for file in glob.glob("*.sql"):
        with open(file, 'r') as myfile:
            sqlCmd = myfile.read().replace('DO', '').replace('$do$', '')
        db.do('plpgsql', sqlCmd)
        print('do (' + file + ')')
    db.close()
    print('Done')
except (ValueError, KeyError, TypeError) as error:
    print(error)

You can execute this to run a custom Python script in a dokku instance:
dokku --rm-container run [APP_NAME] python [your_script_name.py]
The --rm-container flag deletes the container after the script finishes.
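If you want the updater to run on every deploy without a manual step, one hedged workaround (a sketch, not dokku-alt-specific advice) is to drop the separate schema/function/data worker entries and chain the one-shot scripts in front of the web command, so no short-lived process ever exits and triggers the SIGTERM cascade seen in the log:
# Procfile sketch -- dbupdater.py, the directory names, and the java command are illustrative
web: python dbupdater.py schema/ && python dbupdater.py functions/ && python dbupdater.py data/ && exec java -jar app.jar
The trade-off is that the web process only starts after the updaters finish, but that also guarantees the schema is in place before traffic arrives.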

Related

How do I get Istanbul to recognize code coverage when using ESM?

I'm using ESM to load my modules, and I use them this way:
// More info on why this is needed: https://github.com/mochajs/mocha/issues/3006
async function wire() {
    await import("./Sanity.spec.mjs");
    await import("./Other.spec.mjs");
    run();
}
wire();
I run these tests using nyc mocha --delay --exit ./test/suite.js, but Istanbul does not seem to recognize my imports and fails to provide coverage information:
3 passing (14ms)
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 0 | 0 | 0 | 0 | |
----------|----------|----------|----------|----------|-------------------|
How can I get Istanbul to recognize the ESM-loaded code?
Native ESM support is available from Mocha v7.1.0 (February 2020).
See:
Release notes: https://github.com/mochajs/mocha/releases/tag/v7.1.0
Pull request: https://github.com/mochajs/mocha/pull/4038
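A minimal usage sketch, assuming Mocha >= 7.1.0 on a Node version with unflagged ESM (spec paths taken from the question): the .mjs specs can be passed to Mocha directly, with no --delay/run() wiring. Note that nyc's source instrumentation may still miss native ESM, in which case a V8-coverage-based tool such as c8 is a commonly swapped-in alternative:
# run the ESM specs directly
npx mocha ./test/Sanity.spec.mjs ./test/Other.spec.mjs
# collect coverage via V8 instead of nyc's instrumentation
npx c8 mocha ./test/Sanity.spec.mjs ./test/Other.spec.mjs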

Unwanted "too many arguments" error being printed out

I am writing a bash script that logs into remote nodes and returns the services being run on that node.
#!/bin/bash
declare -a SERVICES=('redis-server' 'kube-controller-manager' 'kubelet' 'postgres' 'mongod' 'elasticsearch');
for svc in "${SERVICES[@]}"
do
    RESULT=`ssh 172.29.219.109 "ps -ef | grep -v grep | grep $svc"`
    if [ -z ${RESULT} ]
    then
        echo "Is Empty" > /dev/null
    else
        echo "$svc is running on this node"
    fi
done
Now the output of ssh 172.29.219.109 "ps -ef | grep -v grep | grep $svc" on the node is:
postgres 2102 1 0 Jan29 ? 00:24:27 /opt/PostgresPlus/pgbouncer/bin/pgbouncer -d /opt/PostgresPlus/pgbouncer/share/pgbouncer.ini
postgres 2394 1 0 Jan29 ? 00:20:10 /opt/PostgresPlus/9.4AS/bin/edb-postgres -D /opt/PostgresPlus/9.4AS/data
postgres 2431 2394 0 Jan29 ? 00:00:01 postgres: logger process
postgres 2434 2394 0 Jan29 ? 00:07:15 postgres: checkpointer process
postgres 2435 2394 0 Jan29 ? 00:01:10 postgres: writer process
postgres 2436 2394 0 Jan29 ? 00:03:27 postgres: wal writer process
postgres 2437 2394 0 Jan29 ? 00:20:03 postgres: autovacuum launcher process
postgres 2438 2394 0 Jan29 ? 00:37:00 postgres: stats collector process
postgres 2494 1 0 Jan29 ? 00:08:12 /opt/PostgresPlus/9.4AS/bin/pgagent -l 1 -s /var/log/ppas-agent-9.4.log hostaddr=localhost port=5432 dbname=postgres user=postgres
postgres 2495 2394 0 Jan29 ? 00:11:25 postgres: postgres postgres 127.0.0.1[59246] idle
When I run the script, I do get the result I want, but I'm also getting an unwanted message that seems to be related to the variable in which I'm storing my result.
# ./map_services_to_nodes.sh
./map_services_to_nodes.sh: line 12: [: too many arguments
postgres is found on this node
The algorithm I am using is:
Search for all services defined in my array.
Store the result in a variable.
If Variable is empty, that means that service is not running.
If its not empty, service is running.
Avoid the outdated back-ticks for command substitution and use $(..) instead (see Why Use $(STATEMENT) instead of legacy `STATEMENT`). Note that $svc is a local variable, so it should stay unescaped inside the double quotes: the local shell expands it before the command line is sent to the remote host.
RESULT=$(ssh 172.29.219.109 "ps -ef | grep -v grep | grep $svc")
And double-quote variables inside the test operator. Without quotes, the multi-line result is word-split, so [ receives many arguments instead of one, which is exactly what produces the "too many arguments" error:
if [ -z "${RESULT}" ]
I changed
if [ -z ${RESULT} ]
to
if [ -z "${RESULT}" ]
and it worked.
# ./map_services_to_nodes.sh
postgres is found on this node
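For reference, here is the whole loop with both fixes applied (same host and service list as the question; an untested sketch):
#!/bin/bash
declare -a SERVICES=('redis-server' 'kube-controller-manager' 'kubelet' 'postgres' 'mongod' 'elasticsearch')
for svc in "${SERVICES[@]}"
do
    # $(..) instead of back-ticks; $svc is expanded locally before ssh runs
    RESULT=$(ssh 172.29.219.109 "ps -ef | grep -v grep | grep $svc")
    # quoting RESULT prevents word-splitting, so [ always sees one argument
    if [ -n "$RESULT" ]
    then
        echo "$svc is running on this node"
    fi
done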

Weird output of pg_stat_activity

I'm having trouble with the output of this simple query:
select
pid,
state
from pg_stat_activity
where datname = 'My_DB_name'
when running it in different ways:
1. In an IDE
2. Via psql in a terminal
3. In a bash script:
QUERY="copy (select pid, state from pg_stat_activity where datname = 'My_DB_name') to stdout with csv"
psql -h host -U user -d database -t -c "$QUERY" >> result
1 and 2 return results as I need them:
1:
pid state
------ -----------------------------
23126 idle
25573 active
2642 active
20420 idle
23391 idle
5339 idle
7710 idle
1558 idle
12506 idle
2862 active
716 active
9834 idle in transaction (aborted)
2:
pid | state
-------+-------------------------------
23126 | idle
25573 | idle
2642 | active
20420 | idle
23391 | idle
5339 | active
7710 | idle
1558 | idle
12506 | idle
2211 | active
716 | active
9834 | idle in transaction (aborted)
3 is weird: it doesn't give me any state name except 'active':
23126,
25573,
2642,
20420,
23391,
5339,
7710,
1558,
12506,
1660,active
716,active
1927,active
9834,
What am I missing? How can I get all the state names via a bash script?
pg_stat_activity is a catalog view whose content depends on whether you're logged in as a superuser or as a non-privileged user: a non-privileged user sees NULL in columns like state for other users' backends, and copy ... to stdout with csv renders NULL as an empty field.
From your output, it looks like you're logged in as a superuser in #1 and #2, but as a normal user in #3.
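If connecting the bash script as a superuser is not an option, a hedged alternative (assuming PostgreSQL 10 or later, which the question doesn't state) is to grant the built-in monitoring role to the scripting user, after which pg_stat_activity exposes other sessions' state:
-- run as superuser; script_user is a placeholder for the account psql connects with
GRANT pg_read_all_stats TO script_user;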

Shell Script not able to Kill Process

I am using the script below to find and kill a process, but somehow it's not working. Please help me fix any flaws. I am grepping for JVM, on an AIX machine.
PID=`ps -eaf | grep JVM | grep -v grep | awk '{print $2}'`
if [[ "" != "$PID" ]]
then
    echo "killing $PID"
    kill $PID
else
    echo "PID not found"
fi
From the Wikipedia entry:
In Unix and Unix-like operating systems, kill is a command used to send a signal to a process. By default, the message sent is the termination signal, which requests that the process exit. But kill is something of a misnomer; the signal sent may have nothing to do with process killing.
So by default kill sends SIGTERM (equivalent to kill -15); you will probably need SIGKILL:
kill -9 $PID
or, if you're being extra cautious or need the system to shut down gracefully, I recommend SIGINT, as it is the same as Ctrl-C on the keyboard. So:
kill -2 $PID
I'm afraid Java apps don't always handle SIGTERM correctly; they rely on well-behaved shutdown hooks. To make sure an app handles signals like SIGTERM, you can process the SIGTERM signal directly:
import sun.misc.Signal;
import sun.misc.SignalHandler;

public class CatchTerm {
    public static void main(String[] args) throws Exception {
        Signal.handle(new Signal("TERM"), new SignalHandler() {
            public void handle(Signal sig) {
                // handle SIGTERM here, e.g. System.exit(1)
            }
        });
        Thread.sleep(86400000);
    }
}
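To check the handler works, you can run the class and signal it from another shell (the PID placeholder below is whatever ps reports for the Java process):
# sends SIGTERM; the handler above runs instead of the default termination
kill -TERM <java-pid>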
For completeness here are the common signals
| Signal | ID | Action | Description | Java |
| --- | --- | --- | --- | --- |
| SIGHUP | 1 | Terminate | Hangup | The application should reload any config |
| SIGINT | 2 | Terminate | Ctrl-C | Keyboard interrupt, start clean shutdown |
| SIGQUIT | 3 | Terminate | Terminal quit signal | JVM traps it and issues a thread dump |
| SIGABRT | 6 | Terminate | Process abort signal | Do not handle, quit immediately |
| SIGKILL | 9 | Terminate | Kill (forced) | Cannot be trapped |
| SIGTERM | 15 | Terminate | Termination signal | Quit quickly, safe but fast |
For more advanced process selection see killall and pkill:
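For example (a sketch, assuming pkill is installed, which is not a given on stock AIX):
# match JVM against the full command line and send SIGTERM to every hit
pkill -f JVM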

Handling a hanging shell command in Ruby

I have a cronjob that runs a Ruby script that collects data from a command-line utility (ipmitool). Sometimes this utility hangs, causing the whole script to hang, causing the cron jobs to stack up...
The line of code that does this is:
'macaddress' => `timeout 5 ipmitool lan print | grep 'MAC Address'`.split(':',2)[1].strip
In the cron job this still causes the script to hang, but when I manually test the following in a Ruby script run from a terminal:
ans = `timeout 1 sleep 20 | grep 'hello'`
the shell command terminates properly.
How can I prevent the cron script from hanging?
Edit: here's the strace output of the hang (it's stuck at select):
open("/root/.freeipmi/sdr-cache/sdr-cache-xxxx.localhost", O_RDONLY) = 4
mmap(NULL, 2917, PROT_READ, MAP_PRIVATE, 4, 0) = 0x7f0ea2dfd000
ioctl(3, IPMICTL_SEND_COMMAND, 0x7fff74802020) = 0
select(4, [3], NULL, NULL, {60, 0}
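Since the strace shows ipmitool stuck in select with the pipe still open, one hedged guard (a sketch for MRI Ruby; run_with_deadline is an illustrative helper, not a library call) is to enforce the deadline in Ruby itself and kill the whole process group, so a hung ipmitool cannot keep the pipe open:
require 'timeout'

# illustrative helper: run a shell pipeline, give up after `seconds`,
# and kill the whole process group so hung children die too
def run_with_deadline(cmd, seconds)
  reader, writer = IO.pipe
  # pgroup: true starts the shell and its children in a fresh process group
  pid = Process.spawn(cmd, out: writer, err: File::NULL, pgroup: true)
  writer.close
  begin
    Timeout.timeout(seconds) do
      output = reader.read      # interrupted by the watchdog thread on timeout
      Process.wait(pid)
      output
    end
  rescue Timeout::Error
    Process.kill('KILL', -pid)  # negative pid signals the whole group
    Process.wait(pid)
    ''
  ensure
    reader.close
  end
end

raw = run_with_deadline("ipmitool lan print | grep 'MAC Address'", 5)
mac = raw.split(':', 2)[1].to_s.strip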
