Spring transaction hangs for iptables command

As part of the error handling for our processes, we tried to disable the communication between the process and the database machine's listener port using the following iptables command:
iptables -A INPUT -p tcp --destination-port <database-listener-port> -s <database-host-ip> -j DROP
However, this causes the process to get stuck, with the following log line coming from AbstractPlatformTransactionManager::getTransaction:
DEBUG: Creating new transaction with name [<Transaction-Name>]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; ''
Re-enabling the communication later with 'iptables -F' brings the transaction 'back to life': the connection is retrieved and the transaction ends successfully.
What concerns us most is that none of the connection timeout configurations kicked in, which is why we see such hangs. None of our connection pool defaults (see below) implies an infinite timeout (we also tried a small value for abandonedConnectionTimeout, but it didn't help, so we reverted to the default we believe is right for production), and we expected some kind of cancel operation to be performed. See also the note after the settings below.
abandonedConnectionTimeout=0
acquireIncrement=5
acquireRetryAttempts=3
checkoutTimeout=5000
idleConnectionTestPeriod=60
inactivityTimeout=1800
inactivityTimeoutforNonUsedConnection=1800
validateConnection=true
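A note from our experiments, in case it is relevant (a sketch, assuming standard iptables semantics): a DROP rule silently discards packets, so the client blocks in a socket read, and as far as we understand none of the pool settings above bounds a read on an already-established connection (checkoutTimeout, for instance, only bounds checking a connection out of the pool). A REJECT rule with a TCP reset makes the client fail immediately instead of hanging:
# fail fast instead of hanging: send a TCP reset back to the client
iptables -A INPUT -p tcp --destination-port <database-listener-port> -s <database-host-ip> -j REJECT --reject-with tcp-reset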
Thanks for any assistance on this matter.

Related

How do you close a pywinrm session?

Hello, I'm using pywinrm to poll a remote Windows server.
s = winrm.Session('10.10.10.10', auth=('administrator', 'password'))
As there is no s.close() function available, I am worried about leaking file descriptors.
I've checked by using lsof -p <myprocess> | wc -l and my fd count is stable,
but my Google searches show that Ansible had fd leaks previously, and Ansible relies on pywinrm to manage remote Windows hosts as well.
Kindly advise, thanks!
Actually, I had a quick look at the code of pywinrm (as of 2020-11-17),
and the "Session" is not an actual session in the traditional sense, but only an object holding the credentials used to authenticate.
Each time run_cmd or run_ps is invoked, a session is opened on the target and closed on completion of the task. So there's nothing to close, really.
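If you want to verify this on your side, a minimal sketch of an fd watch (reusing the question's <myprocess> placeholder and assuming lsof is available):
# poll the open-fd count every 5 seconds while the poller runs;
# a steady number means no descriptors are leaking
pid=<myprocess>
while true; do
    printf '%s %s\n' "$(date +%T)" "$(lsof -p "$pid" | wc -l)"
    sleep 5
done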

How to provide a restart count to systemd service

I have an embedded device which manages its various services using systemd. Our status reporting application is one of these services. It is always on and it automatically restarts on failure (crashes, exceptions, OOM conditions, whatever).
We report an event to our cloud services on device restart (technically application restart), but I'd like to distinguish a first start (after reboot) from a restart. Is there a mechanism built into systemd which can provide the service restart count, or do I need to roll my own method?
Do you have the journal? If you do, then you can get the count like this:
journalctl -b -u myservicename.service | grep -c Started
The -b option limits logs to the current boot; -u limits them to the service given as argument.
Then you grep for the "Started" line and tell grep to only give you the number of matches.
You can use the following command:
systemctl show foo.service -p NRestarts
It will return a value if the service is in a restart loop; otherwise, it will return nothing.
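Building on that, a minimal sketch of telling a first start from a restart inside the service itself (assumptions: systemd is new enough to know the NRestarts property and the --value flag, and foo.service stands in for your service name):
# empty output means the property is unknown; 0 means no restarts since boot
n=$(systemctl show foo.service -p NRestarts --value)
if [ -z "$n" ] || [ "$n" -eq 0 ]; then
    echo "first start since boot"    # report a 'boot' event
else
    echo "restart number $n"         # report a 'restart' event
fi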

Establishing a simple connection to postgres server for load test in bash

I am currently trying to load test the machine hosting a Postgres instance from a bash script. The idea is to spawn a bunch of open connections (without running any queries) and then check the memory usage.
To spawn a bunch of connections I do:
export PGPASSWORD="$password"
for i in $(seq 1 $maxConnections);
do
sleep 0.2
psql -h "$serverAddress" -U postgres >/dev/null &
done
However, it seems that the connections don't stay open: when I check for active connections, I get 0 from the IP of the instance I'm running the script from. However, if I do
psql -h "$serverAddress" -U postgres &
manually from the shell, it keeps the connection open. How would I open and maintain open connections within a bash script? I've checked that the password is correct, and if I exclude the ampersand in the script, I do enter the psql console with an open connection as expected. It's only when I background it in the script that the problem appears.
You can start your psql sessions in a sub-shell inside your loop by using the sub-shell parentheses syntax shown below. However, if you do this, I recommend you write code to manage your jobs and clean them up when you are done (a fuller sketch follows at the end of this answer).
(psql -h "$serverAddress" -U postgres)&
I tested this and I was able to maintain connections to a postgres instance this way. However, if you are checking for active connections via a select statement like select * from pg_stat_activity; you will see these connections as open and idle on the instance, not active, as they are not executing any task or query.
If you put this code in a script and execute it, you will need to make sure that the script does not terminate before you are ready for all the sessions to die.
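To make that concrete, a minimal sketch with job management, reusing the question's $password, $serverAddress and $maxConnections variables (holding the sessions until a keypress is my assumption; adapt the trigger to your test):
#!/usr/bin/env bash
export PGPASSWORD="$password"
pids=()
for i in $(seq 1 "$maxConnections"); do
    sleep 0.2
    (psql -h "$serverAddress" -U postgres >/dev/null) &   # sub-shell keeps the session alive
    pids+=("$!")
done
read -r -p "Connections open; press Enter to close them..."
kill "${pids[@]}" 2>/dev/null   # clean up every backgrounded session
wait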

where to send daemon optional output so it's readable

My daemon has the option
-r WhereShouldIOutputAdditionalData
The daemon is listening on port 26542 and writes on the same port, and I want the additional data to be output to 26542 as well. I tried using
-r /dev/tcp/127.0.0.1/26542
and it doesn't work. When I do
> /dev/tcp/127.0.0.1/26542
I get connection refused. The daemon I use is vowpal_wabbit, a machine learning library. Any ideas?
Per an unofficial man page at
https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments
I see
-r [ --raw_predictions ] arg File to output unnormalized predictions to
So I think the -r argument expects an argument of the sort /path/to/logs/raw_preds.log.
With this, you'll have "captured the optional output so it is readable." You could open a separate window and use the dev/admin's old friend tail -f /path/to/logs/raw_preds.log to watch the info as it is written to the file.
If you really want it all to appear on one port (which isn't exactly clear from your question), you'd need a separate program that can multiplex the outputs AND has control of your required port number. You'll also need to be concerned about the correct order of output.
IHTH.
I'm sorry, but what you want to do is impossible, for two reasons:
First, bash cannot listen on a given TCP port.
For example, you cannot write a TCP server daemon in plain bash (you could use netcat for that, as sketched below); you can only connect() to a TCP port from bash.
Second, it is impossible to listen on a TCP ip:port that is already in LISTEN state by another process.
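To illustrate both points, a minimal sketch that listens with netcat on a different, free port (26543 is my arbitrary choice; some netcat flavors want nc -l -p 26543 instead):
nc -l 26543 &                               # netcat, not bash, does the listening
echo "hello" > /dev/tcp/127.0.0.1/26543     # bash itself can only connect()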

Not closing ssh

I have a Korn shell (ksh88) script which creates a folder on the remote host using the following command:
ssh $user@$host "mkdir -p $somedir" 2>> $Log
and after that transfers a bunch of files in a loop using this
scp -o keepalive=yes $somedir/$file $user@$host:$somedir
I wonder: will the first command leave a connection open after the script ends?
Each of the commands opens and closes its own connection. It's easy to use a tool like tcpdump to verify this.
This is a consequence of the fact that the exit() system call used to terminate a process closes all open file descriptors including socket file descriptors. Closing a socket closes the connection behind the socket.
New-enough versions of ssh can multiplex several virtual connections over a single physical connection. So what you could do is start some long-running ssh command in the background with connection multiplexing enabled; subsequent connections will then reuse that connection, with much faster startup times. See the ssh_config man page for information on connection multiplexing; the relevant options are ControlMaster and ControlPath (a sketch follows below).
But as William Pursell suggests, rsync is probably easier and faster, if it's an option.
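A minimal sketch of that multiplexing, reusing the question's variables (the ControlPath location and the 10-minute ControlPersist are my assumptions; see ssh_config(5) for the details):
# start one master connection in the background, kept alive for 10 minutes
ssh -o ControlMaster=yes -o ControlPath=~/.ssh/cm-%r@%h:%p -o ControlPersist=10m -N -f $user@$host
# subsequent commands ride the master's TCP connection instead of opening new ones
ssh -o ControlPath=~/.ssh/cm-%r@%h:%p $user@$host "mkdir -p $somedir" 2>> $Log
scp -o ControlPath=~/.ssh/cm-%r@%h:%p $somedir/$file $user@$host:$somedir
# tear the master down when the script is done
ssh -o ControlPath=~/.ssh/cm-%r@%h:%p -O exit $user@$host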
