Unable to output container password details when using ansible with podman - elasticsearch

When using ansible-podman, I am unable to capture the stdout of the container run command as I would on the command line. This means that I don't get to see the automatically generated password and keystore password, along with other details.
Even when using the tty parameter of the ansible-podman-container module, the logs report:
Auto-configuration will not generate a password for the elastic built-in superuser, as we cannot determine if there is a terminal attached to the elasticsearch process. You can use the bin/elasticsearch-reset-password tool to set the password for the elastic user.
There is no elastic user created, and when I exec into the container, the bin/elasticsearch-reset-password tool fails with:
ERROR: Failed to reset password for the [elasticsearch] user
Since https is standard on the 8.5 image, I am unable to use the cluster, as I cannot set up auth properly. I also cannot use apt to install an editor inside the container, as the elasticsearch user does not have sufficient permissions.
If you think this is a podman error, please let me know, and I will raise it with the podman devs to see if I can't get better output and tty detection.
An alternative I have tried is using ansible to run a shell command, but the output is no different.
What I really want is to be able to obtain the password to output to an ansible variable so that I can spin up a pod of containers, including elasticsearch, for running tests.
Alternatively, I can use elasticsearch 7.17.7 with http, but I am going to need encryption in production, and there doesn't seem to be a way to set that up with ansible.
Perhaps there is an environment variable that I am missing that I could set to create the password? I have tried setting ELASTIC_PASSWORD, but it is of no help.
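For context, the task I'm running is roughly this sketch (module and parameter names are from the containers.podman collection; the image tag and password value are illustrative):

```yaml
- name: Run elasticsearch via podman (sketch)
  containers.podman.podman_container:
    name: es-test
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
    tty: true
    env:
      ELASTIC_PASSWORD: "changeme"   # tried this; it did not seem to help
  register: es_result

# The registered result never contains the startup output I'd see on the CLI
- ansible.builtin.debug:
    var: es_result
```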
I am connecting from Django using django-elasticsearch-dsl, and I get the following error when verify_cert is set to false:
AuthenticationException(401, 'security_exception', 'missing authentication credentials for REST request [/forum_posts_index/_search]')
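For completeness, the relevant piece of my Django settings is roughly this sketch (the host and credentials are illustrative; the password is exactly what I can't obtain):

```python
# settings.py -- sketch only; host and credential values are illustrative
ELASTICSEARCH_DSL = {
    "default": {
        "hosts": "https://localhost:9200",
        # Without real credentials here, every request fails with the 401 above.
        "http_auth": ("elastic", "password-i-cannot-obtain"),
        "verify_certs": False,
    },
}
```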
Any help gratefully received...


NiFi CLI toolkit errors

I am trying to use the NiFi CLI toolkit so that I can load 100 parameters using a parameter context. (Doing it through the UI is tedious and not appropriate for deployments either.)
I installed the NiFi toolkit on a remote server, which I generally ssh into from my local machine to run commands. To see the UI, I use an ssh tunnel.
There is only one user for this NiFi instance (admin), and it doesn't have https certificates (single-user authentication).
When I start the CLI prompt, it works fine (below):
# sh cli.sh
CLI v1.16.3
Type 'help' to see a list of available commands, use tab to auto-complete.
Session loaded from /root/.nifi-cli.config
#>
But when I use any NiFi commands, it throws the error below:
#> nifi list-param-contexts -u http://localhost:8443/
ERROR: Error executing command 'list-param-contexts' : Unexpected end of file from server
If I use https:
#> nifi list-param-contexts -u https://localhost:8443/
ERROR: Error executing command 'list-param-contexts' : truststore, truststoreType, and truststorePasswd are required when using an https url
But I don't use a truststore or passwords for the single admin user.
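For what it's worth, if I did want to satisfy the https requirement, the CLI can read connection settings from a properties file passed with -p; a sketch (the property names are my best guess from the error message and the toolkit docs, and the paths and password are illustrative):

```properties
# cli.properties (illustrative values)
baseUrl=https://localhost:8443
truststore=/path/to/truststore.jks
truststoreType=JKS
truststorePasswd=changeit
```

which would then be used as: #> nifi list-param-contexts -p cli.properties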
Also, commands related to the Registry work fine:
#> registry current-user -u http://localhost:18080
anonymous
#> registry list-buckets -u http://localhost:18080
#   Name           Id                                     Description
-   ------------   ------------------------------------   -----------
1   First-bucket   8a3da253-f635-4b01-941f-bfb6437cead7
2   delete         4df052fd-fddc-4e7b-b4ea-bb3b8b691385
Is this something related to https/URIs? Does the NiFi CLI only work when a truststore and keystore are used? If we don't want to use them, is there an alternative, or is this a different issue altogether?
I can't find much information online about this issue, and I have been stuck on it for many days.
I'd sincerely appreciate any help finding the problem.

Ubuntu-based Logstash Keystore Permissions Issues

Background: I'm working in an Ubuntu 20.04 environment setting up Logstash servers to ship metrics to my Elastic cluster. With my relatively basic configuration, I'm able to have a Filebeat process send logs to a load balancer, which then spreads them across my Logstash servers and up to Elastic. This process works. I'd like to be able to use the Logstash keystore to avoid having to pass sensitive variables to my logstash.yml file in plain text. In my environment, I'm able to follow the Elastic documentation to set up a password-protected keystore in the default location, add keys to it, and successfully list out those keys.
Problems: While the Logstash servers successfully run without the keystore, the moment I add them and try to watch the logfile on startup, the process never starts. It seems to continue attempting restart without ever logging to the logstash-plain.log. When trying to run the process in the foreground with this configuration, the error I received was the rather-unhelpful:
Found a file at /etc/logstash/logstash.keystore,
but it is not a valid Logstash keystore
Troubleshooting Done: After trying some steps found in other issues, such as replacing the /etc/sysconfig/logstash creation with simply adding the password to /etc/default/logstash, the errors were a little more helpful, stating that the file permissions or password were incorrect. The logstash-keystore process itself was capable of creating and listing keys, so the password was correct, and the keystore itself was set to 0644. I tried multiple permissions configurations and was still unable to get Logstash to run as a process or in the foreground.
I'm still under the impression it's a permissions issue, but I don't know how to resolve it. Logstash runs as the logstash user, which should be able to read the keystore file since it's 0644 and housed in the same dir as logstash.yml.
Has anyone experienced something similar with Logstash & Ubuntu, or in a similar environment? If so, how did you manage to get past it? I'm open to ideas and would love to get this working.
Try running logstash-keystore as the logstash user:
sudo -u logstash /usr/share/logstash/bin/logstash-keystore \
--path.settings /etc/logstash list
[Aside from the usual caveats about secret obfuscation of this kind, it's worth making explicit that the docs expect logstash-keystore to be run as root, not as logstash. So after you're done troubleshooting, especially if you create a keystore owned by logstash, make sure it ultimately has permissions that are sufficiently restrictive]
Alternatively, you could run some other command as the logstash user. To validate the permission hypothesis, you just need to read the file as user logstash:
sudo -u logstash file /etc/logstash/logstash.keystore
sudo -u logstash md5sum /etc/logstash/logstash.keystore
su logstash -c 'cat /etc/logstash/logstash.keystore > /dev/null'
# and so on
If, as you suspect, there is a permissions problem, and the read test fails, assemble the necessary data with these commands:
ls -dla /etc/logstash/{,logstash.keystore}
groups logstash
By this point you should know:
what groups logstash is in
what groups are able to open /etc/logstash
what groups are able to read /etc/logstash/logstash.keystore
And you already said the keystore's mode is 644. In all likelihood, logstash will be a member of the logstash group only, and /etc/logstash will be world readable. So the TL;DR version of this advice might be:
# set group on the keystore to `logstash`
chgrp logstash /etc/logstash/logstash.keystore
# ensure the keystore is group readable
chmod g+r /etc/logstash/logstash.keystore
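To make the mode change concrete, here's a self-contained sketch run against a throwaway stand-in file rather than the real keystore (the comments mirror the real commands above):

```shell
# Stand-in file instead of the real /etc/logstash/logstash.keystore
keystore=$(mktemp)
chmod 0600 "$keystore"         # owner-only: a group member like logstash can't read it
chgrp "$(id -gn)" "$keystore"  # stand-in for: chgrp logstash /etc/logstash/logstash.keystore
chmod g+r "$keystore"          # stand-in for: chmod g+r /etc/logstash/logstash.keystore
stat -c '%a' "$keystore"       # mode is now 640: group members can read it
rm -f "$keystore"
```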
If it wasn't permissions, you could try recreating the store without a password. If it then works, you'll want to be really careful about how you handle the password environment variable, and go over the docs with a fine-tooth comb.

Container access to gcloud credentials denied

I'm trying to implement the container that converts data from HL7 to FHIR (https://github.com/GoogleCloudPlatform/healthcare/tree/master/ehr/hl7/message_converter/java) on Google Cloud. However, I can't get the container running locally on my machine, so that I can later deploy it to the cloud.
The error always occurs in the credentials authentication part when I try to run the image locally using Docker:
docker run --network=host -v ~/.config:/root/.config hl7v2_to_fhir_converter \
  /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> --fhirLocationId=<LOCATION_ID> \
  --fhirDatasetId=<DATASET_ID> --fhirStoreId=<STORE_ID> --pubsubProjectId=<PUBSUB_PROJECT_ID> \
  --pubsubSubscription=<PUBSUB_SUBSCRIPTION_ID> --apiAddrPrefix=<API_ADDR_PREFIX>
I am using Windows and have already performed the command below to create the credentials:
gcloud auth application-default login
The credential, after executing the above command, is saved in:
C:\Users\XXXXXX\AppData\Roaming\gcloud\application_default_credentials.json
The -v ~/.config:/root/.config option is supposed to let the container find the credential when running the image, but it does not. The error that occurs is:
The Application Default Credentials are not available. They are available if running in Google
Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined
pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials for more information.
What am I doing wrong?
Thanks,
A container runs isolated from the rest of the system; that's its strength, and that's why this packaging method is so popular.
Thus, all the configuration in your environment is void if you don't pass it to the container runtime environment, such as the GOOGLE_APPLICATION_CREDENTIALS env var.
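For example, a sketch of what that could look like on Windows (untested; the paths assume the credential file location you mentioned, mounted where the container's libraries expect it):

```shell
docker run --network=host ^
  -v "%APPDATA%\gcloud:/root/.config/gcloud" ^
  -e GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/application_default_credentials.json ^
  hl7v2_to_fhir_converter /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> ...
```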
I wrote an article on this. Let me know if it helps, and if not, we'll discuss the blocking point!

How to spin up spinnaker locally for the first time

How to spin up a local version of Spinnaker? This has been answered and addressed in detail here.
https://github.com/spinnaker/spinnaker/issues/1729
Ok, so I got it to work, but not without your valuable help, @lwander!
So I'll leave the steps here for posterity.
Each line below is a separate command on the command line. I installed this on a virtual machine with a freshly installed Ubuntu 14.04 copy with nothing but SSH, then SSHed in as root. You will need to configure sshd on your console to allow root access.
https://askubuntu.com/questions/469143/how-to-enable-ssh-root-access-on-ubuntu-14-04
> curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/stable/InstallHalyard.sh
I created a user account that is a member of the adm and sudo groups (I'm not sure whether this is necessary). Then install Halyard:
bash InstallHalyard.sh
Verify that Hal is installed and check its version:
hal -v
Tell Hal that the deployment type will be a local instance (this will publish all services on localhost, which makes them tricky to access later, but I have a workaround, so keep reading):
hal config deploy edit --type localdebian
Hal will complain that a version has not been selected; just tell Hal which version:
hal config version edit --version 1.0.0
Then tell Hal which storage you are going to use; in my case, since it is local, I want to use redis:
hal config storage edit --type redis
Now we need to add a cloud provider to Hal. We use AWS, so we add it like this:
hal config provider aws edit --access-key-id XXXXXXXXXXXXXXXXXXXX --secret-access-key
I created a user on AWS and added access keys to the user in IAM on the user's security credentials tab. Obviously my access-key-id is not XXXXXXXXXXXXXXXXXXXX; I edited it. You do not need to enter the secret-access-key because the command will prompt for it.
Then you need to create a username specific to your spinnaker installation; it will be tied to your AWS account ID. In MY local spinnaker installation I chose the username spinnakermaster; you should choose yours! And my AWS account ID is not YYYYYYYYYYYY; I've edited that too.
All the configurations and steps that you'll need to do inside AWS for this to work are really well documented here:
https://www.spinnaker.io/setup/providers/aws/
And to tell Hal all of the above, here's the command:
hal config provider aws account add spinnakermaster --account-id YYYYYYYYYYYY --assume-role role/spinnakerManaged
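For reference, the spinnakerManaged role this command points at needs a trust policy in AWS that lets your user assume it; a hedged sketch (the principal ARN is illustrative and depends on the user you created):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::YYYYYYYYYYYY:user/spinnakermaster" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```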
And after all that, if everything went according to plan, we can ask Hal to deploy our brand new spinnaker installation:
hal deploy apply
It will begin a long installation, downloading and configuring all the services.
Once it has finished, you may do whatever you like, but in my case I created a monitoring script like the one described here:
https://github.com/spinnaker/spinnaker/issues/854
Which can be run repeatedly like this, until you Ctrl+C it:
watch -n1 spinnaker-status.sh
Then, to be able to access your local VM's spinnaker copy, you can either set up a reverse proxy with the proxy server of your choice to forward all requests to localhost, or you can simply ssh the SH** out of it, redirecting the ports:
ssh root@ZZZ.ZZZ.ZZZ.ZZZ -L 9000:127.0.0.1:9000 -L 8084:127.0.0.1:8084 -L 8083:127.0.0.1:8083 -L 7002:127.0.0.1:7002 -L 8087:127.0.0.1:8087 -L 8080:127.0.0.1:8080 -L 8088:127.0.0.1:8088 -L 8089:127.0.0.1:8089
Where, obviously, ZZZ.ZZZ.ZZZ.ZZZ is not an actual IP address.
And finally, to begin having fun with this cutie, go to your browser of choice and type into the address bar:
http://127.0.0.1:9000
Hope this helps and saves everybody some time!
Cheers.

Any custom OpenStack CentOS image with a set password I can use?

I have to do some quick benchmarking.
I am unable to access my VMs since Neutron is not set up properly.
I can create a CentOS VM, but I cannot log into it.
I tried adding a keypair, and I tried using cloud-init to change the root password:
#cloud-config
chpasswd:
  list: |
    root:stackops
    centos:stackops
  expire: False
It does not work. It gave no errors on the log console, but I am not able to log in with the credentials I set.
So my question is: where can I find an OpenStack CentOS 7 image whose password is already set? (I guess it would be a custom one.)
If Neutron isn't set up correctly, you're not going to be able to do much with your OpenStack environment. However, even with broken networking, you can pass your user-data script to the instance using the --config-drive option, e.g.:
nova boot --user-data /path/to/config.yaml --config-drive=true ...
There is a checkbox in the Horizon gui to use this feature as well. This attaches your configuration as a virtual CD-ROM device, which cloud-init will use rather than the network metadata service.
If I put your cloud-config into a file called user-data.yaml, and then run:
nova boot --image centos-7-cloud --user-data user-data.yaml centos
Then I can log in as the centos user using the password stackops.
