Log file is changed automatically - PostgreSQL - Windows

I am starting a PostgreSQL 11 server from the command line on Windows and passing the log file as a parameter. However, when I start the server, logging switches to the default file defined in postgresql.conf by the log_directory and log_filename settings.
I tried deleting the log_directory and log_filename entries from postgresql.conf, but that didn't help: the log output still switches to the default file given by the old log_directory and log_filename values.
I stop the server every time so the new settings are picked up, and I start it with this command line:
"C:\Program Files\PostgreSQL11\bin\pg_ctl.exe" -D "C:\Program Files\PostgreSQL11\data\pg11" -w -o "-F -p 5423" -l "C:\Program Files\PostgreSQL11\data\logs\pg11\MY_LOG_FILE.log" start
I get this log message in my log file, and after that the log messages are saved in the old default log file:
2019-07-30 11:18:00 CEST [19996]: [4-1] user=,db=,app=,client= HINT:
The further log output will appear in the directory
»C:/PROGRA~1/POSTGR~2/data/logs/pg11«

It is mentioned in the documentation:
pg_ctl encapsulates tasks such as redirecting log output and properly
detaching from the terminal and process group.
However, since nobody seems to have an answer to this, it looks like there is a difference between the log file passed to the executable and the log file configured in postgresql.conf: the one passed to the executable only receives output from the executable while it is starting the server, while the one from the config file receives the log output produced inside the server itself, for example when you execute a query. So the result I got makes sense now and is actually the normal behavior, but in that case the documentation should be clarified.
If this is not the case and pg_ctl really should redirect the server log output, then this is a bug in PostgreSQL 11.4, just for you guys to know.
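For reference, this matches how the logging collector behaves: with logging_collector turned on, the server's own messages are written to log_directory/log_filename, and the file passed to pg_ctl -l only receives what is printed to stderr before the collector takes over. A rough sketch of the relevant postgresql.conf settings (the values shown are typical installer defaults, not taken from this setup):

# postgresql.conf -- illustrative values only
logging_collector = on                    # collector redirects server messages...
log_directory = 'logs/pg11'               # ...into this directory
log_filename = 'postgresql-%Y-%m-%d.log'  # ...using this file name pattern

# With logging_collector = off the server keeps logging to stderr,
# which is exactly what the file given to pg_ctl -l captures.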

Related

dbstart won't accept $ORACLE_HOME

I'm trying to start some Oracle databases on RHEL; however, when I run the dbstart command, I get an error message saying ORACLE_HOME_LISTNER isn't set.
[oracle@olxxxa ~]$ dbstart $ORACLE_HOME
ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener
Usage: /u01/app/oracle/product/12.1.0/dbhome_1/bin/dbstart ORACLE_HOME
Processing Database instance "xxxa": log file /u01/app/oracle/product/12.1.0/dbhome_1/startup.log
Processing Database instance "xxxb": log file /u01/app/oracle/product/12.1.0/dbhome_1/startup.log
Looking online, I saw people saying to change the dbstart file so that ORACLE_HOME_LISTNER is set to $ORACLE_HOME instead of $1, which I did, but I'm still getting the same error. I also read that I could pass $ORACLE_HOME directly to the dbstart command, but I get the same output with or without the variable being passed.
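For what it's worth, the commonly suggested fix looks like the sketch below (the paths are taken from the usage message above; whether it applies to this exact dbstart version, and whether ORACLE_HOME is actually exported in the calling shell, are assumptions worth checking):

# Make sure ORACLE_HOME is really set and exported in the shell that calls dbstart
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

# In $ORACLE_HOME/bin/dbstart, replace the line
#   ORACLE_HOME_LISTNER=$1
# with an explicit value so the listener can auto-start:
ORACLE_HOME_LISTNER=$ORACLE_HOME

# Then start the databases:
dbstart $ORACLE_HOME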

/var/log/cloud-init-output.log is not present on RHEL 7.5

I've got a custom, hardened RHEL 7.5 AMI. I want to use user data to complete some deploy-time configuration. I've already ensured that /var/lib/cloud/* is removed before I create the AMI.
These are the contents of my user data:
echo "My script fired." >> /tmp/test.txt
echo "This line should produce an output log."
The file /tmp/test.txt is present, indicating that my script did indeed run. However, according to the AWS docs, the second statement should produce a file /var/log/cloud-init-output.log. This file is not present.
How do I make sure that user data produces the expected output log file?
It appears that Red Hat felt the file was "completely unnecessary": https://bugzilla.redhat.com/show_bug.cgi?id=1424612
To view the user data output, you have to grep the journal/syslog instead:
sudo grep cloud-init /var/log/messages
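If you want the output file back, cloud-init's output location can be set in its configuration. A minimal sketch, assuming the cloud-init shipped with RHEL 7.5 honors the output key (the drop-in file name below is arbitrary):

# Run on the instance, or bake into the AMI, before relying on user data
sudo tee /etc/cloud/cloud.cfg.d/05_logging_output.cfg <<'EOF'
output:
  all: '| tee -a /var/log/cloud-init-output.log'
EOF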

AWS Launch Configuration not picking up user data

We are trying to build an auto scaling group (let's say AS) configured with an elastic load balancer (let's say ELB) in AWS. The auto scaling group itself is configured with a launch configuration (let's say LC). As far as I can tell from the AWS documentation, pasting a script as-is into the user data section of the launch configuration should run that script on every instance launched into an auto scaling group associated with that launch configuration.
For example, pasting this into user data should leave a file named configure in the home folder of a t2.micro Ubuntu instance:
#!/bin/bash
cd
touch configure
Our end goal is:
when we increase the number of instances in the auto scaling group, each new instance launches with our startup script and gets added behind the load balancer associated with the auto scaling group. But the script was not executed at instance launch. My questions are:
1. Am I missing something here?
2. What should I do to run our startup script when any new instance is launched in an auto scaling group?
3. Is there any way to verify whether the user data was really picked up at launch?
The direction you are following is right. What is wrong is your user data script.
Problem 1:
What you have to remember is that user data is executed as the root user, not ubuntu. So if your script had worked, you would find your file in /root/configure, NOT in /home/ubuntu/configure.
Problem 2:
Your script is actually executing, but it is incorrect and fails at the cd command, so the file is never created.
The cd builtin without a directory argument tries to do cd $HOME; however, $HOME is NOT set during the cloud-init run, so you have to be explicit here.
Change your script to below and it will work:
#!/bin/bash
cd /root
touch configure
You can also debug issues with your user-data script by inspecting the /var/log/cloud-init.log file, in particular checking it for errors: grep -i error /var/log/cloud-init.log
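As for question 3, you can also check from inside a launched instance whether EC2 delivered the user data at all. This uses the standard instance metadata endpoint (a quick sketch, assuming IMDSv1 is still allowed, which is the default):

# Prints exactly the user data that was attached to this instance at launch
curl -s http://169.254.169.254/latest/user-data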
Hope it helps!

Can't create file when trying to mysqldump to CSV on remote server

So I have a batch server that runs a batch script. This script issues a mysqldump command for our db server.
mysqldump -h nnn.nn.nnn.nn -u username -p password --tab=/var/batchfiles/ --fields-enclosed-by='"' --fields-terminated-by="," --fields-escaped-by="\\" --lines-terminated-by="\\n" store_locations stores
When the command runs, I get an error:
Can't create/write to file '/var/mi6/batch/stores.txt' (Errcode: 2) when executing 'SELECT INTO OUTFILE'
I have also tried outputting to the /tmp dir, as suggested at http://techtots.blogspot.com/2011/12/using-mysqldump-to-export-csv-file.html, but it still cannot write the file: it tells me the file already exists, even though it doesn't.
Bottom line is, I would like to be able run a script on server A that issues a mysql command for the db server and have that output file saved to server A in csv format.
FYI, I have also tried just running mysql and redirecting the output to a file. This creates a tab-delimited file, but you don't have much control over the output, so it won't really work either.
mysqldump in --tab mode is a command-line front end for SELECT ... INTO OUTFILE, and the latter is meant to create a delimited file afresh, and only on the DB server host.
SELECT ... INTO Syntax
The SELECT ... INTO OUTFILE statement is intended primarily to let you
very quickly dump a table to a text file on the server machine. If you
want to create the resulting file on some other host than the server
host, you normally cannot use SELECT ... INTO OUTFILE since there is
no way to write a path to the file relative to the server host's file
system.
You have at least the following options:
use mysql instead of mysqldump on the remote host to create a tab-delimited file:
mysql -h<host> -u<user> -p<password> \
-e "SELECT 'column_name', 'column_name2'... \
UNION ALL SELECT column1, column2 FROM stores" > \
/path/to/your/file/file_name
pipe it through sed or awk to turn the tab-delimited output into a CSV file (see the sketch after this list)
make the location of the file on the remote host accessible through a network-mapped path on the DB server's file system.
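A sketch combining the first two options (host, credentials, column and file names are placeholders; the sed expression assumes GNU sed and data with no embedded tabs or double quotes):

# Tab-delimited output from mysql on the batch server, converted to quoted CSV
mysql -h<host> -u<user> -p<password> --batch --raw \
    -e "SELECT column1, column2 FROM stores" store_locations \
    | sed 's/\t/","/g; s/^/"/; s/$/"/' > /var/batchfiles/stores.csv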

Cronjob not executing the Shell Script completely

This is Srikanth from Hyderabad.
I am the Linux administrator at a corporate company. We have a Squid server, so I prepared a backup Squid server so that when the live Squid server goes down I can put the backup server into production.
My Squid servers run CentOS 5.5. I have prepared a script to back up all configuration files in /etc/squid/ of the live server to the backup server, i.e. it copies all files from the live server's /etc/squid/ to the backup server's /etc/squid/.
Here's the script, saved as squidbackup.sh in the directory /opt/ with permissions 755 (rwxr-xr-x):
#!/bin/sh
# Placeholders: fill in the real user, password and server IP
username="<username>"
password="<password>"
host="<Server IP>"
expect -c "
spawn /usr/bin/scp -r $username@$host:/etc/squid /etc/
expect {
    \"*password:*\" {
        send \"$password\r\"
        interact
    }
    eof {
        exit
    }
}
"
** Kindly note that this is executed on the backup server and relies on the user mentioned in the script. I have created that user on the live server and put the same one in the script.
When I execute the script with the command below:
[root@localhost ~]# sh /opt/squidbackup.sh
everything works fine: the script downloads all the files from the live server's /etc/squid/ to the backup server's /etc/squid/.
Now the problem arises: if I set this in crontab as below (or with other timings),
50 23 * * * sh /opt/squidbackup.sh
I don't know what's wrong, but it does not download all the files, i.e. the cron job downloads only a few files from the live server's /etc/squid/ to the backup server's /etc/squid/.
** Only a few files are downloaded when cron executes the script. If I run the script manually, it downloads all the files perfectly, without any errors or warnings.
If you have any more questions, please go ahead and post them.
I kindly request any available solutions.
Thank you in advance.
Thanks for your interest. I have tried what you said and it shows the output below, but previously I used to get the same output mailed to the user on the Squid backup server.
The cron logs show the same thing, but I was not able to work out the exact error from the lines below.
Please note that only a few files get downloaded when the script runs from cron.
spawn /usr/bin/scp -r <username>@ServerIP:/etc/squid /etc/
<username>@ServerIP's password:
Kindly check whether you can suggest anything else.
Try the simple options first. Capture stdout and stderr as shown below; that file should point to the problem.
Looking at the script, you need to specify the full path to expect, since cron runs with a minimal environment. That could be the issue.
50 23 * * * sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1
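A sketch of both suggestions together (the expect path is the usual default; verify it with which expect on the backup server):

# In the backup server's crontab (crontab -e): give cron a usable PATH
# and keep the redirect so any errors end up in the log.
SHELL=/bin/sh
PATH=/usr/local/bin:/usr/bin:/bin
50 23 * * * sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1

# And/or call expect by its absolute path inside /opt/squidbackup.sh:
/usr/bin/expect -c " ... "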
