How can I get Logstash-Keystore to find its password? - elasticsearch

For background: I'm attempting to automate steps to provision and create a multitude of Logstash processes within Ansible, but want to ensure the steps and configuration work manually before automating the process.
I have installed Logstash as per Elastic's documentation (it's an RPM installation), and have it correctly shipping logs to my ES instance without issue. Elasticsearch and Logstash are both v7.12.0.
Following the keystore docs, I've created a /etc/sysconfig/logstash file and set its permissions to 0600. I've added the LOGSTASH_KEYSTORE_PASS key to the file so it can be sourced as the environment variable used by the keystore command when creating and reading the keystore itself.
Upon running the sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create command, the process spits back the following error:
WARNING: The keystore password is not set.
Please set the environment variable `LOGSTASH_KEYSTORE_PASS`.
Failure to do so will result in reduced security.
Continue without password protection on the keystore? [y/N]
This should not be the case, as the keystore process should be sourcing my password env var from the aforementioned file. Has anyone experienced a similar issue, and if so, how did you solve it?

This is expected; the file /etc/sysconfig/logstash is only read when you start Logstash as a service, not when you run it from the command line.
To create the keystore you will need to export the variable with the password first, as explained in the documentation.
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
After that, when you start logstash as a service it will read the variable from the /etc/sysconfig/logstash file.
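To illustrate the full flow once the variable is in place, here is a minimal sketch; the key name ES_PWD and the pipeline snippet are only examples, not part of the original setup:
# /etc/sysconfig/logstash (mode 0600) -- read by the service unit at startup
LOGSTASH_KEYSTORE_PASS=mypassword
# with the variable exported as above, add a secret to the keystore
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ES_PWD
# the secret can then be referenced as ${ES_PWD} in the pipeline configuration, e.g.
# output { elasticsearch { hosts => ["https://es.example:9200"] user => "logstash_writer" password => "${ES_PWD}" } }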

1 - First, set the password for the keystore itself.
It goes in the startup.options file under config/.
E.g. LOGSTASH_KEYSTORE_PASS=mypassword (without export).
2 - Then use that keystore password to create your keystore file.
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
..logstash/bin/logstash-keystore --path.settings ../logstash create
Note: logstash-keystore (the command) and logstash.keystore (the file) are different things; the one with the dot is what you just created. It lives in the config/ directory, alongside your startup.options.
The set +o history / set -o history wrapping keeps your password out of the shell history; otherwise anybody who runs "history" to list previously used commands can see it.
3 - Then you can add your first secret to the keystore file. You have to supply your keystore password beforehand:
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
./bin/logstash-keystore add YOUR_KEY
It will then ask for your VALUE. If you do not supply your keystore password, you get an error: Found a file at....but it's not a valid Logstash keystore
4 - Once you have supplied your password, you can list the contents of your keystore file or remove an entry (replace "list" with "remove").
./bin/logstash-keystore list
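For completeness, a quick sketch of removing an entry and of how a stored key is consumed; the key name YOUR_KEY and the settings line are examples only, not from the original answer:
./bin/logstash-keystore remove YOUR_KEY
# a stored key can be referenced as ${YOUR_KEY} in pipeline configs and settings, for example in config/logstash.yml:
# xpack.monitoring.elasticsearch.password: "${YOUR_KEY}"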

Related

How to pass ansible vault password as an extra var?

I have the ability to encrypt variables using another mechanism (the Azure Pipelines secret feature), so I would like to save an ansible-vault password there (in the Azure pipeline) and pass it to the playbook execution as an extra var.
Can it be done this way?
An example of what I'm expecting:
ansible-playbook --extra-vars "vault-password=${pipelinevariable}"
The vault password cannot be passed as an extra var. There are several ways to provide it, all of which are covered in the documentation:
Providing vault password section in the general vault documentation.
Using vault in playbooks
Very basically your options are:
providing it interactively by passing the --ask-vault-pass option
reading it from a file (static or executable) by either:
providing the --vault-password-file /path/to/vault option on the command line
setting the ANSIBLE_VAULT_PASSWORD_FILE environment variable (e.g. export ANSIBLE_VAULT_PASSWORD_FILE=/path/to/vault).
There is much more to learn in the above doc, especially how to use several vault passwords with ids, how to use a client script to retrieve the password from a key store...
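For reference, those options translate to commands like the following (site.yml is just a placeholder playbook name):
# prompt interactively for the vault password
ansible-playbook site.yml --ask-vault-pass
# read the password from a file (static or executable)
ansible-playbook site.yml --vault-password-file /path/to/vault
# or point Ansible at the file through the environment
export ANSIBLE_VAULT_PASSWORD_FILE=/path/to/vault
ansible-playbook site.yml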
Although this doesn't use extra vars, I believe it fulfills what you were trying to do:
Optional/one-time only: ask for the password and set it as an environment variable:
read -s ansible_vault_pass && export ansible_vault_pass
Now use that variable in your ansible command:
ansible-playbook your-playbook.yml --vault-password-file <(cat <<<"$ansible_vault_pass")
Credits for, and explanation of the <(cat <<<"") technique are in this other StackOverflow answer: Forcing cURL to get a password from the environment.
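A related variant, since --vault-password-file (and ANSIBLE_VAULT_PASSWORD_FILE) also accept an executable: a tiny script that prints the password from the environment avoids the process substitution. This is only a sketch; the variable name follows the example above and the script name is arbitrary:
#!/bin/sh
# vault-pass.sh - Ansible executes this file and reads the vault password from its stdout
echo "$ansible_vault_pass"
Then make it executable and point Ansible at it:
chmod +x vault-pass.sh
ansible-playbook your-playbook.yml --vault-password-file ./vault-pass.sh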
Can it be done this way?
I'm not familiar with Ansible Vault, but you have at least two directions based on the documents shared by Zeitounator.
1. Use a CMD task first to create a vault password file with plain-text content. (I'm not sure whether the vault password file can be created this way; it might not work.)
(echo $(SecretVariableName)>xxx.txt)
Then you may use the newly created xxx.txt file as input of ansible-playbook --vault-password-file /path/to/my/xxx.txt xxx.yml.
2. Create the corresponding vault password file before running the pipeline and add it to version control (the same source repo as your current pipeline).
Then you can use ansible-playbook --vault-password-file easily whenever the vault password file is available. You can also store the password file in a private GitHub repo, fetch the repo via git clone https://{userName}:{userPassword}@github.com/xxx/{RepoName}.git, and copy the needed password file to the directory where you run the ansible-playbook commands via a Copy Files task. This direction should work whether or not direction 1 is supported.
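As a sketch of direction 1 using a Bash task instead of CMD (the secret has to be mapped into the task's environment explicitly, here as VAULT_PASS; file and playbook names are placeholders):
# in the pipeline's Bash task, with the secret mapped via the task's env setting
echo "$VAULT_PASS" > vault-pass.txt
chmod 600 vault-pass.txt
ansible-playbook --vault-password-file ./vault-pass.txt playbook.yml
rm -f vault-pass.txt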

Why is .pgpass file not supplying a password for the pg_dump, vacuumdb, or reindexdb commands?

I'm trying to execute several different PostgreSQL commands inside of different bash scripts. I thought I had the .pgpass file properly configured, but when I try to run pg_dump, vacuumdb, or reindexdb, I get errors about how a password isn't being supplied. For my bash script to execute properly, I need these commands to return an exit code of 0.
I'm running PostgreSQL 9.5.4 on macOS 10.12.6 (16G1408).
In an admin user account [neither root nor postgres], I have a .pgpass file in ~. The .pgpass file contains:
localhost:5432:*:postgres:DaVinci
The user is indeed postgres and the password is indeed DaVinci.
Permissions on the .pgpass file are 600.
In the pg_hba.conf file, I have:
# pg_hba.conf file has been edited by DaVinci Project Server. Hence, it is recommended to not edit this file manually.
# TYPE   DATABASE   USER   ADDRESS        METHOD
local    all        all                   md5
host     all        all    127.0.0.1/32   md5
host     all        all    ::1/128        md5
So, for example, from a user account [neither root nor postgres], I run:
/Library/PostgreSQL/9.5/pgAdmin3.app/Contents/SharedSupport/pg_dump --host localhost --username postgres testworkflow13 --blobs --file /Users/username/Desktop/testdestination1/testworkflow13_$(date "+%Y_%m_%d_%H_%M").backup --format=custom --verbose --no-password
And I get the following error:
pg_dump: [archiver (db)] connection to database "testworkflow13" failed: fe_sendauth: no password supplied
I get the same result if I run this with sudo as well.
Curiously, pg_dump does execute and does export a .backup file to the testdestination1 directory, but since it exits with an error, any bash script that calls it halts.
Where am I going wrong? How can I make sure that the .pgpass file is being properly read so that the --no-password flag in the command works?
Please start by reading the official docs.
Also, even though this topic is more than two years old, I strongly suggest updating to at least version 10; in any case nothing relevant has changed around .pgpass.
.pgpass needs to be chmod 600, which is fine, but the user running the commands must be able to read it, so that user must own the file.
Remove --no-password; it just adds confusion and is not needed.
Using 127.0.0.1 instead of localhost clarifies where you are connecting; they are "usually" the same.
... from a user account [neither root nor postgres] ...
As said, the user you are running as must have read access to .pgpass, so you have to sort that out and provide that file to that user; the PGPASSFILE env variable could be useful here.
Another option is a .pg_service.conf file, with or without .pgpass; from what you have written it may be more appropriate.
You could also set PGPASSWORD in the user's environment.
Think about security: some choices look simplest but can expose credentials. As a DBA I'm frankly tired of people who store passwords in visible places, print them in logs or push them to GitHub, or set "trust" in pg_hba, and then come to me saying "PostgreSQL is insecure".
Final note: you do not have a pg_hba error; if you did, you would see a "pg_hba" error message.
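To make the two environment-variable suggestions concrete, a minimal sketch (names and paths follow the question and are only examples):
# point libpq at an explicit password file for this user;
# the matching .pgpass line would need 127.0.0.1 (or *) in the host field
export PGPASSFILE=/Users/username/.pgpass
pg_dump --host 127.0.0.1 --username postgres --format=custom --file /tmp/testworkflow13.backup testworkflow13
# or (less secure) put the password itself in the environment
export PGPASSWORD=DaVinci
vacuumdb --host 127.0.0.1 --username postgres testworkflow13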
Turns out that changing all three lines in the pg_hba.conf file to the trust method of authentication solved this.
local    all    all                   trust
host     all    all    127.0.0.1/32   trust
host     all    all    ::1/128        trust
Since the method is trust, the .pgpass file may be entirely irrelevant--I'm not sure, but at least I got it working.
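Note that changes to pg_hba.conf only take effect after the server reloads its configuration; on an install like this that might look like the following (the data directory path is an assumption):
sudo -u postgres /Library/PostgreSQL/9.5/bin/pg_ctl reload -D /Library/PostgreSQL/9.5/data
# or, from psql as a superuser:
# SELECT pg_reload_conf();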

psql asks for password and does not read from pgpass.conf

I have installed my Postgresql database on a Windows server environment. I'd like to schedule a job using Windows Task scheduler to run every night so I need to run the following command without asking for password:
psql -U myUserName -d myDBName -c "select MyFunctionName()"
When I run the above query in my cmd shell, it asks me for password. When I enter the password manually, the function is correctly run.
So my solution is to read from the pgpass.conf file so no password is required.
Here are the things I have done to achieve this:
I created the pgpass.conf file in a directory I created in the %appdata% (AppData\Roaming\postgresql to be precise).
Here are the contents of this file:
localhost:5432:myDBName:myUserName:myPassword
I have also tried with the value 127.0.0.1 instead of localhost above.
I then added an environment variable (in the user variables for administrator list) called PGPASSFILE and gave it the pgpass.conf location:
;C:\Users\administrator\AppData\Roaming\postgresql\pgpass.conf
Finally I stopped and restarted my Postgres service on Windows services and re-ran the command. But it is still asking for password.
How can I let my command know from where to read the password?
If you don't want to set the PGPASSFILE environment variable, put the password file in the standard location %APPDATA%\postgresql\pgpass.conf as described by the documentation.
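For a scheduled job, you can either rely on that standard location or set PGPASSFILE inside the script that Task Scheduler runs; a sketch of a .bat wrapper (paths are examples):
@echo off
rem make sure psql can find the password file for this account
set PGPASSFILE=C:\Users\administrator\AppData\Roaming\postgresql\pgpass.conf
psql -U myUserName -d myDBName -c "select MyFunctionName()"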

How to enter Multiple entries in .pgpass file?

I am supposed to execute the same psql command from a bash script on 5 remote machines using a username and password.
I have read that we have to put the credentials in a .pgpass file and use the -w option when executing the psql command.
But how can I execute the same command on the 5 machines using the same .pgpass file?
You can add multiple entries in the .pgpass file, e.g.:
syntax:
hostname:port:database:username:password
sample file:
test.net:5432:testdb:testuser:testpass
test1.net:5432:testdb1:testuser1:testpass1
test2.net:5432:testdb2:testuser2:testpass2
Make sure the permissions of the .pgpass file are set to 0600:
chmod 0600 .pgpass
You can also use wildcards (such as *), which is particularly handy for the database field.
This means that the pgpass file syntax:
hostname:port:database:username:password
can be used with a value such as:
my-host:5432:*:my-username:my-plaintext-password
to let you connect to all databases on the server using the same credentials. If you need different credentials for specific databases, add more specific rows before this one.
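Back on the original question, with entries like these in place, a sketch of running the same command against all five machines from bash (hostnames, credentials, and the query are placeholders):
#!/bin/bash
# -w (--no-password) makes psql fail instead of prompting if .pgpass has no matching entry
for host in test.net test1.net test2.net test3.net test4.net; do
    psql -w -h "$host" -p 5432 -U testuser -d testdb -c "SELECT my_function();"
done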

WinSCP connect to Amazon AMI EC2 Instance changing user after login to "root"

I followed the instructions here carefully, but I haven't gotten this working right. Here is what I did:
Run WinSCP and enter the hostname (the Elastic IP of my instance)
Enter username "ec2-user"
Enter the public key file
Choose SCP for the protocol
Under SCP/Shell settings, choose "sudo su -"
Hit Login
WinSCP asks me for the key passphrase; hit OK
This error shows up:
Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended).
NOTE: This works in PuTTY.
With credit to this post and this AWS forum thread, it seems the trick is to comment out Defaults requiretty in sudoers. My procedure now:
Log in to your EC2 instance using Putty.
Run sudo visudo, a special command to edit /etc/sudoers.
Press the Insert key to start Insert mode.
Find the line Defaults requiretty. Insert a hash symbol (#) before that line to comment it out:
#Defaults requiretty
Press the Esc key to exit Insert mode.
Type :wq to write the file and quit visudo.
In WinSCP:
Under Advanced > Environment > SCP/Shell, change the Shell to sudo su -.
Under SSH > Authentication, choose your Private key file (.ppk file).
WinSCP does not support commands that require terminal emulation or user input.
See: http://winscp.net/eng/docs/remote_command#limitations
Since sudo su - expects a password, it wouldn't work.
There is a way around it: make root log on without being prompted for a password. You can do this by editing your sudoers file, usually located at /etc/sudoers, and adding:
root ALL=NOPASSWD: ALL
Needless to say, this is Not a Very Good Thing To Do - for reasons which should be obvious :)
I was having the same problem and solved it using the steps in this tutorial. I would have posted it here, but I don't have enough rep for images/screens.
http://cvlive.blogspot.de/2014/03/how-to-login-in-as-ssh-root-user-from.html
The following tutorial worked for me and provides helpful screenshots. Logging in as a regular user with sudo permissions just required tweaking a few WinSCP options.
http://cvlive.blogspot.de/2014/03/how-to-login-in-as-ssh-root-user-from.html
Set Session/File protocol to SCP, enter the host/instance IP, the port (usually 22), and the regular username. Enter password credentials if the login requires them.
Add Advanced/SSH/Authentication/Private key file.
Unchecking Advanced/SSH/Authentication/attempt "keyboard interactive" authentication should allow Advanced/Environment/SCP Shell/Shell/Shell: sudo su - to provide sudo permissions for accessing webserver directories as a non-owner user.
Update 08/03/2017
WinSCP logging can be helpful to troubleshoot issues:
https://winscp.net/eng/docs/logging
[WinSCP] Logging can be enabled from the Logging page of the Preferences dialog.
Logging can also be enabled from the command line using the /log and /xmllog parameters respectively, which is particularly useful with scripting.
In the .NET assembly, session logging is enabled using Session.SessionLogPath.
Depending on the WinSCP connection errors, some server installations (Ubuntu, CentOS, other Linux servers) may need a directive added to /etc/sudoers so that a TTY is not required for the specified user. Creating a file in /etc/sudoers.d/ (using a tool such as the Amazon Command Line Interface or PuTTY) may be a better option than editing /etc/sudoers. Some /etc/sudoers versions recommend it:
This file MUST be edited with the 'visudo' command as root.
Please consider adding local content in /etc/sudoers.d/ instead of
directly modifying this file.
See the man page for details on how to write a sudoers file.
When editing a sudoers file (as root) from the command line, use the visudo command to open the file, as it parses the file for syntax errors. Files in /etc/sudoers.d/ are typically owned by root and chmoded with minimal permissions. The default /etc/sudoers file can be used as a reference, as it should have the recommended permissions after installation, e.g. 0440 (r--r-----).
https://superuser.com/a/869145 :
visudo -f /etc/sudoers.d/somefilename
Defaults:username !requiretty
Helpful Links:
Stackoverflow: cloud-init how to add default user to sudoers.d
https://www.digitalocean.com/community/tutorials/how-to-edit-the-sudoers-file-on-ubuntu-and-centos
WinSCP Forum:
https://winscp.net/forum/viewtopic.php?t=3046
https://winscp.net/forum/viewtopic.php?t=2109
WinSCP Doc: https://winscp.net/eng/docs/faq_su
With SCP protocol, you can specify following command as custom shell
on the SCP/Shell page of Advanced Site Settings dialog:
sudo -s
[...]
Note that as WinSCP cannot implement terminal emulation, you need to
have sudoers option requiretty turned off.
Instructions in Ubuntu Apache /etc/sudoers recommend adding directives to /etc/sudoers.d rather than editing /etc/sudoers directly. Depending on the installation, adding directive to /etc/sudoers.d/cloud-init may work as well.
It may be helpful to create an SSH test user with sudo permissions by following the steps provided in instance documentation to ensure that the user has recommended instance settings and any updates to server sudoer files can be effected and removed without affecting other users.
I enabled SSH root login on Debian Linux Server:
To enable SSH login for the root user on a Debian Linux system you first need to configure the SSH server. Open /etc/ssh/sshd_config and change the following line:
FROM:
PermitRootLogin without-password
TO:
PermitRootLogin yes
Once you have made the above change, restart your SSH server:
/etc/init.d/ssh restart
Source
Then I used the SCP file protocol with the root username in WinSCP.
Under SCP/Shell settings, instead of "sudo su -", choose /bin/bash.
It should work.
