Weird output of pg_stat_activity - bash

I'm having trouble with the output of this simple query:
select
pid,
state
from pg_stat_activity
where datname = 'My_DB_name'
while running it in different ways:
1. In an IDE
2. Via psql in a terminal
3. In a bash script:
QUERY="copy (select pid, state from pg_stat_activity where datname = 'My_DB_name') to stdout with csv"
psql -h host -U user -d database -t -c "$QUERY" >> result
1 and 2 return results as I need them:
1:
pid state
------ -----------------------------
23126 idle
25573 active
2642 active
20420 idle
23391 idle
5339 idle
7710 idle
1558 idle
12506 idle
2862 active
716 active
9834 idle in transaction (aborted)
2:
pid | state
-------+-------------------------------
23126 | idle
25573 | idle
2642 | active
20420 | idle
23391 | idle
5339 | active
7710 | idle
1558 | idle
12506 | idle
2211 | active
716 | active
9834 | idle in transaction (aborted)
3 is weird: it doesn't give me any state name except 'active':
23126,
25573,
2642,
20420,
23391,
5339,
7710,
1558,
12506,
1660,active
716,active
1927,active
9834,
What am I missing? How can I get all the state names from a bash script?

pg_stat_activity is a system view that shows different content depending on whether you're logged in as a superuser or as a non-privileged user: for non-superusers, columns such as state and query are NULL for other users' sessions.
From your output, it looks like you're logged in as a superuser in #1 and #2, but as a normal user in #3.
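If superuser access isn't desirable just for monitoring, PostgreSQL 10 and later provide built-in roles that expose the full view; a minimal sketch, run as a superuser (monitoring_user is a hypothetical role name):

```sql
-- pg_monitor includes pg_read_all_stats, which reveals the
-- state/query columns of other users' sessions in pg_stat_activity
GRANT pg_monitor TO monitoring_user;
```

With that grant in place, the bash script's COPY query should return the state column for every session without connecting as a superuser.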

Related

How to get Cisco switch interface status via SNMP?

Using the command line (SSH), I can get the switch interface status like below (just a demo):
Cisco-Switch# show int status
Port Name Status Vlan Duplex Speed
Eth0/1 test_alias connected 1 a-full a-100
Eth0/2 notconnect 1 auto auto
Eth0/3 connected 3 a-full a-100
Eth0/4 connected 3 a-full a-100
Eth0/5 potchann linkFlapE 255 auto auto
Eth0/6 notconnect 300 auto auto
Eth0/7 sfpAbsent routed auto auto
Eth0/8 sfpAbsent routed auto auto
Eth0/9 connected trunk full a-10G
By using an SNMP walk (OID .1.3.6.1.2.1.2.2.1 or .1.3.6.1.2.1.31.1.1.1), I can get every interface's name, adminStatus, operStatus and so on.
I summarized the mapping:
| adminStatus | OperStatus | commandLine Port Status |
| up | up | connected |
| up | down | notconnect |
| up | down | linkFlapE |
| up | down | sfpAbsent |
| down | down | disable |
| down | down | sfpAbsent |
Obviously, there are three CLI statuses when adminStatus is up and operStatus is down in SNMP OID .1.3.6.1.2.1.2.2.1.
So I think the command-line port status cannot be fetched through this SNMP OID.
In the end, I couldn't find a way to get the switch interface status (like "connected", "notconnect", "disable", "sfpAbsent") via SNMP.
I'm hoping someone can tell me the OID that solves this.
Thanks for your help.
Most of the interface information is retrievable using this OID:
.1.3.6.1.2.1.2.2.1.7 (ifAdminStatus)
You can also try OID .1.3.6.1.2.1.2.2.1.8 (ifOperStatus). According to the MIB documentation, it takes these values: 1-up, 2-down, 3-testing, 4-unknown, 5-dormant, 6-notPresent, 7-lowerLayerDown.
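To make the value mapping concrete, here is a small sketch of the ifOperStatus codes and a rough translation to CLI-style states. Note this is an approximation: statuses like sfpAbsent or linkFlapE are Cisco CLI details that IF-MIB cannot distinguish, and classify is a hypothetical helper, not a standard API:

```python
# ifOperStatus integer codes defined in IF-MIB (RFC 2863)
IF_OPER_STATUS = {
    1: "up", 2: "down", 3: "testing", 4: "unknown",
    5: "dormant", 6: "notPresent", 7: "lowerLayerDown",
}

def classify(admin_status, oper_status):
    """Rough CLI-style state from ifAdminStatus/ifOperStatus (hypothetical helper)."""
    if admin_status == 2:          # administratively down
        return "disabled"
    return "connected" if oper_status == 1 else "notconnect"

print(IF_OPER_STATUS[6])   # notPresent
print(classify(1, 1))      # connected
```

As the summary table above shows, several distinct CLI states collapse onto the same (up, down) pair, which is why the exact CLI wording cannot be recovered from these two OIDs alone.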

PostgreSQL on Windows 10. Locale is broken

I have installed PostgreSQL 11 on a Windows 10 PC twice: first from the official Windows installer, then from a set of Cygwin packages.
The problem is, I can't get any database locale settings to work correctly. In both cases the cluster was initialized with the initdb command.
With the Cygwin install, the command had -E UTF8, --locale=uk_UA.utf8 and the same for collation and ctype. Cygwin seemed to accept the command, and the cluster was created. Then I created a database with the appropriate settings and some tables in it.
The output of a simple query was plain wrong for my locale: a $ sign instead of грн for monetary values, . instead of , for fractions, and so on. The official installer gave the same results, even though the locale was set up and displayed correctly.
The same initdb and create database give me correct results on Linux.
initdb
initdb --pgdata=... \
--locale=uk_UA.utf8 \
--lc-collate=... \
--lc-ctype=... \
--lc-monetary=... \
--lc-numeric=... \
--lc-time=... \
--encoding=UTF-8
Here I've basically repeated the uk_UA.utf8 locale.
I also tried the "uk-x-icu" locale, since the Windows version appears to be compiled with the ICU library.
The queries
create database db
template = template0
encoding = 'UTF8'
lc_collate = 'uk_UA.utf8'
... = 'uk_UA.utf8'
lc_ctype = 'uk_UA.utf8'
connection_limit = -1
is_template = false
;
create table c_types (
id serial,
c_date date,
c_text text,
c_time time,
c_timestamp timestamp,
c_money money,
c_float float
);
insert into c_types(c_date,c_text,c_time,c_timestamp,c_money,c_float) values
('2019-09-01', 'text0', '00:00:01', timestamp '2019-09-01 20:00:00', 1000.0001, 1000.0001),
('2019-09-01', 'text1', '00:00:02', timestamp '2019-09-01 21:00:00', 2000.0001, 2000.0001)
;
select * from c_types;
Correct output (Linux):
# id | c_date | c_text | c_time | c_timestamp | c_money | c_float
#----+------------+--------+----------+---------------------+---------------+-----------
# 1 | 2019-09-01 | text0 | 00:00:01 | 2019-09-01 20:00:00 | 1 000,00грн. | 1000.0001
# 2 | 2019-09-01 | text1 | 00:00:02 | 2019-09-01 21:00:00 | 2 000,00грн. | 2000.0001
This post shows that lc_numeric does not influence the separator in plain numeric output:
https://stackoverflow.com/a/41759744/8339821
The functions it does influence are to_number, to_char, etc.:
https://stackoverflow.com/a/8935028/8339821
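That distinction can be illustrated directly in psql; a sketch, assuming the uk_UA.utf8 locale is installed on the server:

```sql
SET lc_numeric = 'uk_UA.utf8';

SELECT 1000.0001;                         -- plain numeric output: separators unchanged
SELECT to_char(1000.0001, '9G999D9999');  -- G/D pick up the locale's group/decimal separators
```

So even with a correctly configured locale, bare numeric and float columns keep the C-style . separator; only money and the to_char/to_number family are locale-formatted.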
The question is, how can I set up Postgres for my locale?

Unwanted too many arguments error being printed out

I am writing a bash script that logs into remote nodes and returns the services being run on that node.
#!/bin/bash
declare -a SERVICES=('redis-server' 'kube-controller-manager' 'kubelet' 'postgres' 'mongod' 'elasticsearch');
for svc in "${SERVICES[@]}"
do
    RESULT=`ssh 172.29.219.109 "ps -ef | grep -v grep | grep $svc"`
    if [ -z ${RESULT} ]
    then
        echo "Is Empty" > /dev/null
    else
        echo "$svc is running on this node"
    fi
done
Now the output of ssh 172.29.219.109 "ps -ef | grep -v grep | grep $svc" on the node is ::
postgres 2102 1 0 Jan29 ? 00:24:27 /opt/PostgresPlus/pgbouncer/bin/pgbouncer -d /opt/PostgresPlus/pgbouncer/share/pgbouncer.ini
postgres 2394 1 0 Jan29 ? 00:20:10 /opt/PostgresPlus/9.4AS/bin/edb-postgres -D /opt/PostgresPlus/9.4AS/data
postgres 2431 2394 0 Jan29 ? 00:00:01 postgres: logger process
postgres 2434 2394 0 Jan29 ? 00:07:15 postgres: checkpointer process
postgres 2435 2394 0 Jan29 ? 00:01:10 postgres: writer process
postgres 2436 2394 0 Jan29 ? 00:03:27 postgres: wal writer process
postgres 2437 2394 0 Jan29 ? 00:20:03 postgres: autovacuum launcher process
postgres 2438 2394 0 Jan29 ? 00:37:00 postgres: stats collector process
postgres 2494 1 0 Jan29 ? 00:08:12 /opt/PostgresPlus/9.4AS/bin/pgagent -l 1 -s /var/log/ppas-agent-9.4.log hostaddr=localhost port=5432 dbname=postgres user=postgres
postgres 2495 2394 0 Jan29 ? 00:11:25 postgres: postgres postgres 127.0.0.1[59246] idle
When I run the script, I get the result I want, but also an unwanted message that seems related to the variable in which I am storing my result.
# ./map_services_to_nodes.sh
./map_services_to_nodes.sh: line 12: [: too many arguments
postgres is found on this node
The algorithm I am using is:
1. Search for all services defined in my array.
2. Store the result in a variable.
3. If the variable is empty, the service is not running.
4. If it's not empty, the service is running.
Avoid the outdated back-ticks for command substitution; use $(..) instead, see Why Use $(STATEMENT) instead of legacy STATEMENT. Note that $svc is set locally, so it should not be escaped; escaping $ and " is only needed for things that must expand on the remote side.
RESULT=$(ssh 172.29.219.109 "ps -ef | grep -v grep | grep $svc")
More importantly, double-quote variables inside the test operator:
if [ -z "${RESULT}" ]
I changed
if [ -z ${RESULT} ]
to
if [ -z "${RESULT}" ]
and it worked.
# ./map_services_to_nodes.sh
postgres is found on this node
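The error can be reproduced without ssh at all; a minimal sketch of why the unquoted test fails:

```shell
#!/bin/bash
# Multi-word string, standing in for the output of ps -ef | grep ...
RESULT="postgres 2102 /opt/PostgresPlus/9.4AS/bin/edb-postgres"

# Unquoted: word-splitting hands [ several arguments -> "too many arguments"
if [ -z ${RESULT} ] 2>err.log; then
    echo "unquoted: looks empty"
fi
cat err.log

# Quoted: a single argument, so the test behaves as intended
if [ -z "${RESULT}" ]; then
    echo "quoted: empty"
else
    echo "quoted: not empty"
fi
```

Because ps output almost always contains spaces, -z receives four words instead of one string, and [ bails out with the very message seen above.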

Shell Script not able to Kill Process

I am using the script below to find and kill a process, but somehow it's not working.
Please help me fix it if there's any flaw. I am grepping for JVM on an AIX machine.
PID=`ps -eaf | grep JVM| grep -v grep | awk '{print $2}'`
if [[ "" != "$PID" ]]
then
echo "killing $PID"
kill $PID
else
echo "PID not found"
fi
From the Wikipedia entry:
In Unix and Unix-like operating systems, kill is a command used to
send a signal to a process. By default, the message sent is the
termination signal, which requests that the process exit. But kill is
something of a misnomer; the signal sent may have nothing to do with
process killing.
So by default kill sends SIGTERM (equivalent to kill -15); you will probably need to send SIGKILL:
kill -9 $PID
Or, if you're being extra cautious or you need the process to shut down gracefully, I recommend SIGINT, as it is the same as Ctrl-C on the keyboard. So
kill -2 $PID
Java apps, I'm afraid, don't always handle SIGTERM correctly; they rely on good behaviour in their shutdown hooks. To make sure an app handles signals like SIGTERM correctly, you can process the SIGTERM signal directly:
import sun.misc.Signal;
import sun.misc.SignalHandler;

public class CatchTerm {
    public static void main(String[] args) throws Exception {
        Signal.handle(new Signal("TERM"), new SignalHandler() {
            public void handle(Signal sig) {
                // handle SIGTERM, e.g. System.exit(1)
            }
        });
        Thread.sleep(86400000);
    }
}
For completeness here are the common signals
| Signal | ID | Action | Description | Java
| --- | --- | --- | --- | ---
| SIGHUP | 1 | Terminate | Hangup | The application should reload any config
| SIGINT | 2 | Terminate | Ctrl-C | Keyboard interrupt, start clean shutdown
| SIGQUIT | 3 | Terminate | Terminal quit signal | JVM traps and issues a Thread dump
| SIGABRT | 6 | Terminate | Process abort signal | Do not handle, quit immediately
| SIGKILL | 9 | Terminate | Kill (forced) | Cannot be trapped
| SIGTERM | 15 | Terminate | Termination signal. | Quit quickly, safe but fast
For more advanced process selection, see killall and pkill.
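The exit-status convention for signal-killed processes (128 + signal number) can be checked with a throwaway process; a small sketch:

```shell
#!/bin/bash
# Start a disposable long-running process
sleep 30 &
PID=$!

# Default kill == SIGTERM (kill -15)
kill -15 "$PID"

# A process killed by signal N exits with status 128 + N
wait "$PID"
STATUS=$?
echo "exit status: $STATUS"    # 128 + 15 = 143
```

This is also a handy way to verify in a script whether a process actually died from the signal you sent or exited on its own.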

Run a shell script on dokku app deployment

I've been looking for a way to run a one-time script that loads data into our database. We currently use dokku-alt for our development environment, and we have a Python script that updates the schema, data, and functions we need for our application. The problem I'm facing is finding a way to run that script on application deployment through dokku-alt.
I've tried using a worker, but workers don't behave the way I expected: as soon as one process completes, every process gets terminated. That is NOT what we need. We need to run the script once, to load our data and schema, and have it exit gracefully while the web process keeps running; instead, the exiting process causes a kill signal to be sent to all the other processes.
So my question is: is there a way to run a script just once on deployment without having to write a custom plugin?
05:23:07 schema.1 | started with pid 15
05:23:07 function.1 | started with pid 17
05:23:07 data.1 | started with pid 19
05:23:07 web.1 | started with pid 21
05:23:07 web.1 | Picked up JAVA_TOOL_OPTIONS: -Xmx384m -Xss512k -Dfile.encoding=UTF-8 -Djava.rmi.server.useCodebaseOnly=true
05:23:12 function.1 | Begin dbupdater
05:23:12 function.1 | pq://user:UcW3P587Eki8Fqrr@postgresql:5432/db
05:23:13 schema.1 | Begin dbupdater
05:23:13 data.1 | Begin dbupdater
05:23:13 data.1 | pq://user:UcW3P587Eki8Fqrr@postgresql:5432/db
05:23:13 schema.1 | pq://user:UcW3P587Eki8Fqrr@postgresql:5432/db
05:23:13 schema.1 | do (AccountCrosstabKey_create.sql)
05:23:13 schema.1 | Done
05:23:13 data.1 | do (Accountinfo_data.sql)
05:23:13 function.1 | do (Connectby_create.sql)
05:23:13 function.1 | Done
05:23:13 data.1 | Done
05:23:13 schema.1 | exited with code 0
05:23:13 system | sending SIGTERM to all processes
05:23:13 function.1 | terminated by SIGTERM
05:23:13 data.1 | terminated by SIGTERM
05:23:13 web.1 | terminated by SIGTERM
Python script:
#!/usr/bin/python
import os
import sys
import glob
import shlex
import subprocess
import postgresql
import postgresql.driver as pg_driver
try:
    print('Begin dbupdater')
    dbhost = os.environ.get('DATABASE_URL', 'localhost').replace('postgres://', 'pq://')
    print(dbhost)
    targetDir = sys.argv[1]
    db = postgresql.open(dbhost)
    os.chdir(targetDir)
    currDir = os.getcwd()
    for file in glob.glob("*.sql"):
        sqlCmd = ''
        with open(file, 'r') as myfile:
            # strip the DO $do$ ... wrapper so the body can be run via db.do()
            sqlCmd = myfile.read().replace('DO', '').replace('$do$', '')
        db.do('plpgsql', sqlCmd)
        print('do (' + file + ')')
    db.close()
    print('Done')
except (ValueError, KeyError, TypeError) as error:
    print(error)
You can execute this to run a custom Python script in the dokku instance:
dokku --rm-container run [APP_NAME] python [your_script_name.py]
The --rm-container flag deletes the container after the script finishes.
