/var/log/cloud-init-output.log is not present on RHEL 7.5 - amazon-ec2

I've got a custom hardened RHEL 7.5 AMI. I want to use user data to complete some deploy-time configuration. I've already ensured that /var/lib/cloud/* is removed before I create the AMI.
These are the contents of my user data:
echo "My script fired." >> /tmp/test.txt
echo "This line should produce an output log."
The file /tmp/test.txt is present, indicating that my script did indeed run. However, according to the AWS docs, the second statement should produce output in /var/log/cloud-init-output.log. That file is not present.
How do I make sure that user data produces the expected output log file?

It appears that Red Hat felt the file was "completely unnecessary": https://bugzilla.redhat.com/show_bug.cgi?id=1424612
To view the user data output, you have to grep the system logs (journal/syslog) for cloud-init entries instead:
sudo grep cloud-init /var/log/messages
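The missing file is produced by cloud-init's output redirection setting, which Red Hat's package does not ship. If you want the file back, a minimal sketch of a drop-in config that restores the redirect (the file name is illustrative, and this assumes the stock cloud-init package honours /etc/cloud/cloud.cfg.d/ drop-ins):
# /etc/cloud/cloud.cfg.d/05_logging.cfg  (file name illustrative)
output: {all: '| tee -a /var/log/cloud-init-output.log'}
After baking this into the AMI, subsequent launches should write user data output to /var/log/cloud-init-output.log again.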

Related

NagiosXI docker container: return code of 13 is out of bounds

I continually receive the error in the title (see screenshot).
[screenshot: Nagios XI error]
However, I have given my sh script full permissions (chmod 777, with nagios as owner). The script also works fine in a Nagios Core container, but it fails in the Nagios XI Docker container.
Here are the permissions on my script, for proof:
[screenshot: script permissions]
The command also works in the UI if I call it manually in the service management section of Nagios.
The command also works when I run the script as the nagios user:
[screenshot: running the script as the nagios user]
Docker container I am using: https://hub.docker.com/r/mavenquist/nagios-xi
I've tried using this post's solutions: Nagios: return code of 13 is out of bounds
It's not possible to answer your question completely with the information provided, but here are some pointers:
Never set 777 permissions. In your case the owner of the script is already nagios:nagios, so a more reasonable permission would be 550 -- i.e. allow the nagios user and group to read and execute the file, but not modify it (why would it need to?).
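For instance, a minimal sketch of tightening the permissions (the plugin path is just an example; use wherever 1.sh actually lives inside the container):
# example path -- adjust to your actual plugin location
chown nagios:nagios /usr/local/nagios/libexec/1.sh
chmod 550 /usr/local/nagios/libexec/1.sh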
The error you're getting (return code 13) means that 1.sh is, for some reason, returning 13. Why is impossible to know without inspecting the script, but you can try to run the plugin as nagios and inspect the output; hopefully the script is well written enough to tell you what the error is:
# su -c "/your/plugin -exactly -as -configured" nagios
A general rule for troubleshooting Nagios: whatever you see in the GUI is exactly what happens when you run the script manually as the nagios user, so that is a good way to figure out what is going on.
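To see the exit status Nagios will receive, something like this sketch can help (again, the plugin path and arguments are placeholders):
su -s /bin/bash -c '/usr/local/nagios/libexec/1.sh -your -args; echo "exit code: $?"' nagios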

Log file is changed automatically - PostgreSQL

I am starting a PostgreSQL 11 server from the command line on Windows and trying to pass the log file as a parameter. However, when I start the server, logging switches to the default file defined in postgresql.conf by the log_directory and log_filename settings.
I tried deleting the log_directory and log_filename entries from postgresql.conf, but that didn't help; logging still switches to the default file given by the old log_directory and log_filename values.
I am stopping the server every time so the new settings are picked up, and I am starting it with this command line:
"C:\Program Files\PostgreSQL11\bin\pg_ctl.exe" -D "C:\Program Files\PostgreSQL11\data\pg11" -w -o "-F -p 5423" -l "C:\Program Files\PostgreSQL11\data\logs\pg11\MY_LOG_FILE.log" start
I get this message in my log file, and after that the log messages are saved in the old default log file:
2019-07-30 11:18:00 CEST [19996]: [4-1] user=,db=,app=,client= TIPP:
The further log output will appear in the directory
»C:/PROGRA~1/POSTGR~2/data/logs/pg11«
It is mentioned in the documentation:
pg_ctl encapsulates tasks such as redirecting log output and properly
detaching from the terminal and process group.
However, since nobody has an idea about this issue, it looks like there is a difference between the log file passed to the executable and the log file from postgresql.conf: the file passed to pg_ctl only captures output from the executable while it is starting the server, whereas the file from the config file captures output from inside the server, for example when you execute a query. The result I got therefore makes sense and is actually the normal behavior, but in that case the documentation should be clarified.
If that is not the case, and pg_ctl really should redirect the server's log output, then this is a bug in PostgreSQL 11.4, just so you know.
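For completeness, if the goal is to make the running server itself write to a specific file, that is controlled by the logging collector settings in postgresql.conf rather than by pg_ctl -l; a sketch with illustrative values:
# postgresql.conf -- server-side logging (values illustrative)
logging_collector = on
log_directory = 'C:/Program Files/PostgreSQL11/data/logs/pg11'
log_filename = 'MY_LOG_FILE.log'
The file passed to pg_ctl -l then only captures what the postmaster prints before the logging collector takes over.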

AWS Launch Configuration not picking up user data

We are trying to build an auto scaling group (let's say AS) configured with an elastic load balancer (let's say ELB) in AWS. The auto scaling group itself is configured with a launch configuration (let's say LC). As far as I understand from the AWS documentation, pasting a script as-is in the user data section of the launch configuration should run that script on every instance launched into an auto scaling group associated with that launch configuration.
For example, pasting this in user data should create a file named configure in the home folder of a t2.micro Ubuntu instance:
#!/bin/bash
cd
touch configure
Our end goal is:
When the auto scaling group adds instances, they should launch with our startup script, and each new instance should be registered behind the load balancer tagged with the auto scaling group. But the script was not executed at instance launch. My questions are:
1. Am I missing something here?
2. What should I do to run our startup script when any new instance launches in an auto scaling group?
3. Is there any way to verify whether the user data was really picked up at launch?
The direction you are following is right. What is wrong is your user data script.
Problem 1:
What you have to remember is that user data is executed as the root user, not ubuntu. So if your script had worked, you would find your file in /root/configure, not in /home/ubuntu/configure.
Problem 2:
Your script is actually executing, but it is incorrect and fails at the cd command, so the file is never created.
The cd builtin, when given no directory, tries to do cd "$HOME"; however, $HOME is not set during the cloud-init run, so you have to be explicit here.
Change your script to below and it will work:
#!/bin/bash
cd /root
touch configure
You can also debug issues with your user data script by inspecting /var/log/cloud-init.log, in particular checking it for errors: grep -i error /var/log/cloud-init.log
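To answer question 3: you can also confirm that the user data actually reached the instance by querying the instance metadata service from inside it; a quick sketch using the classic IMDSv1 endpoint:
# prints the user data the instance was launched with
curl -s http://169.254.169.254/latest/user-data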
Hope it helps!

Bash change password on boot

* QUICK SOLUTION *
For those of you visiting this page based solely on the title and not wanting to read through everything below, or thinking everything below doesn't apply to your situation, maybe this will help... If all you want to do is change a user's password on boot and are using Ubuntu 12.04 or similar, here is all you have to do. Add a script that runs on boot containing the following:
printf "New Password\nRepeat Password\n" | passwd user
Keep in mind, this must be run as root, otherwise you will need to provide the original password like so:
printf "Original Password\nNew Password\nRepeat Password\n" | passwd user
* START ORIGINAL QUESTION *
I have a first boot script that sets up a VM by doing some configuration and file copies from a mounted iso. Basically the following happens:
VM boots for the first time.
/etc/rc.local is used to mount a CD ISO to /media/cdrom and execute /media/cdrom/boot.sh
The boot.sh file does some basic configuration, copies some files from the CD to the VM, and should update the user's password, using the current password.
This part of the script fails. The password is not updating. I have tried the following:
VAR="1234test6789"
echo -e "DEFAULT\n$VAR\n$VAR" | passwd user
Basically, the default VM is set up with a user (for example jack) with a default password (DEFAULT). The script above uses the default password to update to the new password stored in VAR. The script works by itself when I am logged in, but I can't get it to do the same on boot. I'm sure there is some system policy or something that prevents this. If so, I need some sort of workaround. This VM is being mass deployed; it is packaged automatically and configured with a custom user password passed from the CD ISO.
Please help. Thank you!
* UPDATE *
Oh, and I'm using Ubuntu 12.04
* UPDATE *
I tried your suggestion. The following fails when run directly from rc.local, i.e. the password does not update. The script is running, however; I tested that by adding the touch line.
touch /home/jack/test
VAR="1234test5678"
printf "P#ssw0rd\n$VAR\n$VAR" | passwd jack
P#ssw0rd is the example default VM password.
Jack is the example username.
* UPDATE *
Ok, we think the issue may be tied to rc.local. So rc.local is called really early on before run levels and may be causing the issue.
* UPDATE *
Well, potentially good news. The password seems to be updating now, but it is updating to something other than what I set in $VAR. I think it might be adding something to it; this is of course just a guess. Every time I run the test, immediately after the script runs at boot I can no longer log in with the username it was trying to update. I know that's not a lot of information to go on, but it's all I've got at the moment. Any ideas what or why it is appending something else to the password?
* SOLUTION *
So there were several small problems as to why I could not get the suggestion below working. I won't outline them here as they are irrelevant. The ultimate solution was from Graeme, tied in with some other features of my script, which I will share below.
The default VM boots
rc.local does the following:
if [ -f /etc/program/tmp ]; then
    mount -t iso9660 -o ro /dev/cdrom /media/cdrom
    cd /media/cdrom
    ./boot.sh
fi
(The tmp file is there just to prevent the first-boot script from running more than once. After boot.sh runs once, it removes that tmp file.)
boot.sh on the CDROM runs (with root privileges)
boot.sh copies files from the CDROM to /etc/program
boot.sh also updates the user's password with the following:
VAR="DEFAULT"
cp config "/etc/program/config"
printf "$VAR\n$VAR\n" | passwd user
rm -rf /etc/program/tmp
(VAR is changed by another part of the server that is connected to our OVA deployment solution. Basically the user gets a customized, random password for their VM so similar users cannot access each other's VMs.)
There is still some testing to be done, but I am reasonably satisfied that this issue is resolved. 95%
Edit - updated for not entering the original password
The sh version of echo does not have the -e option, unlike bash. Switch echo for printf. Also, the rc.local script will have root privileges, so it won't prompt for the original password. Supplying the original password will cause the command to fail, since 'DEFAULT' will be taken as the new password and the confirmation will fail. This should work:
VAR="1234test6789"
printf "$VAR\n$VAR\n" | passwd user
Ubuntu uses dash at boot time, which is a drop-in replacement for sh and is much more lightweight than bash. echo -e is a common bashism which doesn't work elsewhere.

Cronjob not executing the Shell Script completely

This is Srikanth from Hyderabad.
I am the Linux administrator at a corporate company. We have a Squid server, so I prepared a backup Squid server that I can put into production when the live Squid server goes down.
My Squid servers run CentOS 5.5. I have prepared a script that backs up all configuration files in /etc/squid/ on the live server to the backup server, i.e. it copies all files from the live server's /etc/squid/ to the backup server's /etc/squid/.
Here is the script, saved as squidbackup.sh in the directory /opt/ with permissions 755 (rwxr-xr-x):
#! /bin/sh
username="<username>"
password="<password>"
host="<Server IP>"

# Spawn scp and answer the password prompt automatically.
expect -c "
    spawn /usr/bin/scp -r $username@$host:/etc/squid /etc/
    expect {
        \"*password:*\" {
            send \"$password\r\"
            interact
        }
        eof {
            exit
        }
    }
"
** Kindly note that this is executed on the backup server, and it connects as the user mentioned in the script. I have created that user on the live server and put the same credentials in the script.
When I execute the script with the command below:
[root@localhost ~]# sh /opt/squidbackup.sh
Everything works fine: the script downloads all the files from the live server's /etc/squid/ directory to the backup server's /etc/squid/.
Now the problem arises. If I set this up in crontab as below (or with other timings):
50 23 * * * sh /opt/squidbackup.sh
I don't know what is wrong, but it does not download all the files; i.e. the cron job downloads only a few files from the live server's /etc/squid/ to the backup server's /etc/squid/.
Only a few files are downloaded when cron executes the script; if I run the script manually, it downloads all the files perfectly, without any errors or warnings.
If you have any more questions, please go ahead and post them.
I kindly request any available solutions.
Thank you in advance.
Thanks for your interest. I tried what you said, and it shows the output below; previously I used to get the same output mailed to the user on the Squid backup server.
The cron logs show the same thing, but I was not able to work out the exact error from the lines below.
Please note that only a few files are downloaded when run from cron.
spawn /usr/bin/scp -r <username>@ServerIP:/etc/squid /etc/
<username>@ServerIP's password:
Kindly check whether you can suggest anything else.
Try the simple options first. Capture stdout and stderr as shown below; those files should point to the problem.
Also, looking at the script, you need to specify the full path to expect -- that could be an issue, since cron runs with a minimal PATH.
50 23 * * * sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1
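If the environment does turn out to be the problem, one way to make it explicit is to set PATH in the crontab (or call expect by its absolute path inside the script); a sketch, with paths that are assumptions -- check yours with the command which expect:
# crontab with an explicit PATH so the script finds expect and scp
PATH=/usr/bin:/bin:/usr/local/bin
50 23 * * * /bin/sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1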
