Any way to login to Elasticsearch using terminal? - elasticsearch

I am new to Elasticsearch and am currently working on the security side of it. After defining user roles and creating new users, whenever I want to run a curl command from the terminal, I have to specify the user credentials like:
-u {username}:{password}
So, is there any way to log in to localhost so that I enter the credentials only once and can then simply run commands without entering them each time?
I am using the basic license of Elasticsearch.

You can use curl's netrc feature. You store the credentials in a file and pass the -n flag so curl reads them from that file.
Reference: https://everything.curl.dev/usingcurl/netrc
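A minimal sketch of what that looks like, assuming Elasticsearch is listening on localhost:9200 with a user named elastic (both placeholders):
# ~/.netrc
machine localhost
login elastic
password <your-password>
Then curl -n matches the host against ~/.netrc and sends those credentials:
curl -n http://localhost:9200/_cluster/health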

Related

Goland IDE build and run with aws-vault

Trying to google anything for Goland (as opposed to Golang) is proving quite hard. Everything I search for comes back about code or switching profiles; that is all already handled.
I had a project that was taking in JSON and processing the data. I was able to use the run and debug button to build and debug my Go code with the default configuration.
That changed when I started pulling data files from S3, which requires authenticating to AWS, which we do with aws-vault.
The issue I am running into is that this configuration has no additional settings. There is a checkbox to Run after build but no way for me to say Run with aws-vault.
Now I have to uncheck Run after build, add the flags
-gcflags="-N -l" -o app
and then attach to that process with Shift + Option + fn + F5.
What I am looking for is being able to run aws-vault exec user -- go ... within the IDE so that I do not have a build step, a run step, and then a manual attach to the process.
I figured out what I feel is at least a better solution, one that allows you to run any code (including the CLI) that uses an AWS SDK.
I am on a Mac, so osascript works for me as the prompt, but it can be whatever your OS supports. Or, if you have a YubiKey, you can use --prompt=ykman.
In ~/.aws there are two files, config and credentials; these tell the SDK how to authenticate.
To start, in ~/.aws/config there is a profile for each role that is needed. default is the role you assume; all the others are ones that the code escalates to.
[default]
output=json
region=<your region>
mfa_serial=arn:aws:iam::<you>
[profile dev-base]
source_profile=default
role_arn=arn:aws:iam::<account to escalate to>
[profile staging-base]
source_profile = default
role_arn = arn:aws:iam::<account to escalate to>
[dev]
region = <your region>
[staging]
region = <your region>
Note: one oddity is that I had to put the role in this file, with the region, so that the role exists. This may not be needed if you are not using Java; you could put the full role in the previous file instead. Since I also use Java, this is my setup. In ~/.aws/credentials:
[dev]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec dev-base -j --prompt=osascript
[staging]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec staging-base -j --prompt=osascript
Note: an oddity here is that ca_bundle is specified. Something in Go was not happy with using AWS_CA_BUNDLE, and this appears to work.
Now when the code is run, a pop-up displays asking for an MFA token.
Also, when running any AWS CLI command, you can pass the profile you want with --profile, e.g. aws s3 ls --profile dev, and the pop-up will appear.
Editing these files manually when using aws-vault might not be the best way to do it, but at the moment this is how we manage them and it seems to give the best workflow.
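With the credential_process entries above in place, nothing aws-vault-specific is needed at run time; the SDK resolves credentials through aws-vault on its own. A minimal sketch, assuming a hypothetical ./cmd/app entry point (AWS_PROFILE is the standard variable the AWS SDKs and CLI read):
# Select the profile; aws-vault pops up the MFA prompt when credentials are needed:
AWS_PROFILE=dev go run ./cmd/app
In GoLand this translates to adding AWS_PROFILE=dev to the Environment field of the run configuration, which avoids the separate build, run, and attach steps.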

Specify user name and password in JFrog Artifactory URL in order to avoid the prompt for credentials

I have a JSON file in a Maven repository in Artifactory.
I need to put a URL in another web application that points directly to this JSON file. The issue I am facing is that, because of the required authentication, the application is not able to read it.
So, is it possible to specify the credentials directly in the URL, so that we avoid the authentication problem and the other web application can read the JSON file directly?
The web application needs a URL that returns the JSON file.
I'm not a Maven expert, but you can embed the credentials in the URL. In the sample below I've used curl to show how that works.
For example, this curl command would upload a file (download works the same, just without the -T <MyFileName>):
curl -uadmin:<PASSWORD> -T <MyFileName> "http://jfrog.local/artifactory/generic-local/bla"
If I want to put the credentials into the URL instead, I end up with:
curl -T <MyFileName> "http://admin:<PASSWORD>@jfrog.local/artifactory/generic-local/bla"
Both curl commands do the same thing; the difference is where the credentials go.
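For the question's use case (reading rather than uploading), the same embedded-credentials form works as a plain download URL; the path here is just the example path from above, not a real repository layout:
curl "http://admin:<PASSWORD>@jfrog.local/artifactory/generic-local/bla"
Keep in mind that credentials embedded in a URL tend to end up in logs and browser history, so putting an Artifactory API key or access token in the password position is safer than a real password.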

Automate Heroku CLI login

I'm developing a bash script to automatically clone some projects and do other tasks in dev VMs, but we have one project on Heroku, and its repository lives there. In my .sh file I have:
> heroku login
This prompts for credentials. I read the help included with the binary and the documentation, but I can't find anything to insert the username and password automatically. I want something like this:
> heroku login -u someUser -p mySecurePassword
Does anything like this exist?
The Heroku CLI only uses your username and password to retrieve your API key, which it stores in your ~/.netrc file ($HOME\_netrc on Windows).
You can manually retrieve your API key and add it to your ~/.netrc file:
Log into the Heroku web interface
Navigate to your Account settings page
Scroll down to the API Key section and click the Reveal button
Copy your API key
Open your ~/.netrc file, or create it, with your favourite text editor
Add the following content:
machine api.heroku.com
login <your-email@address>
password <your-api-key>
machine git.heroku.com
login <your-email@address>
password <your-api-key>
Replace <your-email@address> with the email address registered with Heroku, and <your-api-key> with the API key you copied from Heroku.
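One extra precaution, not from the original answer but standard for netrc files: they hold a plaintext secret, so make the file readable only by you:
chmod 600 ~/.netrc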
This should manually accomplish what heroku login does automatically. However, I don't recommend this. Running heroku login does the same thing more easily and with fewer opportunities to make a mistake.
If you decide to copy ~/.netrc files between machines or accounts you should be aware of two major caveats:
This file is used by many other programs; be careful to only copy the configuration stanzas you want.
Your API key offers full programmatic access to your account. You should protect it as strongly as you protect your password.
Please be very careful if you intend to log into Heroku using any mechanism other than heroku login.
You can generate a non-expiring OAuth token and then pass it to the CLI via an environment variable. This is useful if you need to run Heroku CLI commands indefinitely from a scheduler and you don't want the login to expire. Do it like this (these are not actual tokens and IDs, by the way):
$ heroku authorizations:create
Creating OAuth Authorization... done
Client: <none>
ID: 80fad839-876b-4ea0-a41e-6a9a2fb0cf97
Description: Long-lived user authorization
Scope: global
Token: ddf4a0e5-9294-4c5f-8820-b51c52fce4f9
Updated at: Fri Aug 02 2019 21:26:09 GMT+0100 (British Summer Time) (less than a minute ago)
Get the token (not the ID) from that authorization and pass it to your CLI:
$ HEROKU_API_KEY='ddf4a0e5-9294-4c5f-8820-b51c52fce4f9' heroku run ls --app my-app
Running ls on ⬢ my-app... up, run.2962 (Hobby)
<some file names>
$
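If you need the token for the whole shell session rather than a single command, exporting the same variable works (placeholder token as above):
$ export HEROKU_API_KEY='ddf4a0e5-9294-4c5f-8820-b51c52fce4f9'
$ heroku run ls --app my-app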
By the way, this also solves the problem of how to use the Heroku CLI when you have MFA enabled on your Heroku account but your machine doesn't have a web browser, e.g. if you are working on an EC2 box via SSH:
$ heroku run ls --app my-app
heroku: Press any key to open up the browser to login or q to exit:
› Error: quit
$ HEROKU_API_KEY='ddf4a0e5-9299-4c5f-8820-b51c52fce4f9' heroku run ls --app my-app
Running ls on ⬢ my-app... up, run.5029 (Hobby)
<some file names>
$
EDIT: For Windows Machines
After you run heroku authorizations:create, copy the "Token", and run the following commands:
set HEROKU_API_KEY=ddf4a0e5-9299-4c5f-8820-b51c52fce4f9
heroku run ls --app my-app
If your goal is just to get the source code, you could use a simple git client. You just need the API key.
Steps to get the API key:
Log into the Heroku web interface
Navigate to your Account settings page
Scroll down to the API Key section and click the Reveal button
Copy your API key
Download source code using git
Use this URL template for git clone:
https://my_user:my_password@git.heroku.com/name_of_your_app.git
In my case the user value was my email without domain.
Example: if the mail is duke@gmail.com, the user for Heroku auth will be duke.
Finally, just clone it like any other git repository:
git clone https://duke:my_password@git.heroku.com/name_of_your_app.git
I agree that Heroku should by now have provided a way to do this with their higher-level CLI tool.
You can avoid extreme solutions (and you should, just as Chris mentioned in his answer) by simply using curl and the Heroku API. Heroku allows you to use your API token (obtainable through your user settings / profile page on the Heroku dashboard).
You can then use the API to achieve whatever it is you wanted to do with their command line tool.
For example, if I wanted to get all config vars for an app, I would write a script that did something like the following:
curl https://api.heroku.com/apps/YOUR_APP_NAME/config-vars \
-H "Accept: application/vnd.heroku+json; version=3" \
-H "Authorization: Bearer YOUR_TOKEN"
If YOUR_APP_NAME had only one config variable called my_var, the response to the above call would be:
{
  "my_var": "some_value"
}
I find myself using this all the time in CI tools that need access to Heroku information and resources.

Using Laravel Artisan and file permissions

I'm new to Laravel and I find this framework awesome.
Artisan is also great, but I have a little problem using it.
Let's say that I create a new controller with Artisan like this:
php artisan make:controller Test
There will be a new file in app/Http/Controllers named Test, and the ownership of this file will be root:root.
When I want to edit this file with my editor over FTP, I can't, because I'm not logged in as root.
Is there any way to tell Artisan to create files with the www-data group, for example (without running a chown command)?
Since you have root shell access, the following command will execute the Artisan command as the www-data user:
sudo -u www-data php artisan make:controller Test
Replace www-data with whatever username your web server runs under, or the username you log into the FTP service with.
When you do this, the controller will be owned by www-data, which is what you want.
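As a quick check (assuming a default Laravel directory layout), the new file should now carry the right ownership:
sudo -u www-data php artisan make:controller Test
ls -l app/Http/Controllers/Test.php
# expected: owner and group www-data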
Note: do not ever run commands copy-pasted from the internet without knowing exactly what they do, especially in a root shell.
In this case, the -u parameter tells sudo to execute the command as a specific user, not as the root user.
From the manpage:
-u user, --user=user
Run the command as a user other than the default target user (usually root). The user may be either a user name or a numeric user ID (UID) prefixed with the '#' character (e.g. #0 for UID 0). When running commands as a UID, many shells require that the '#' be escaped with a backslash ('\'). Some security policies may restrict UIDs to those listed in the password database. The sudoers policy allows UIDs that are not in the password database as long as the targetpw option is not set. Other security policies may not support this.
I know this is a really old post, but I'd also really advise anyone against editing your Laravel files over FTP. I used to do this in my pre-Laravel days and it NEVER ended well.
Editing over FTP can cause all kinds of problems, dropping the connection mid-edit being the least of them; security and live-development errors are a much larger concern.
Develop in your local or dev environment, commit/push to git, then either pipeline to your server or handle your FTP uploads and clean up after the fact. Pipelines are your best bet if your host allows them. We use Atlassian Bitbucket for ours, but the set-up and deployment should be relatively similar for most hosts. Check with your host for documentation on their pipeline set-up:
https://www.atlassian.com/continuous-delivery/tutorials/bitbucket-pipelines
There's also some tutorials online for pipelining straight to FTP (if on a shared host, say):
https://www.savjee.be/2016/06/Deploying-website-to-ftp-or-amazon-s3-with-BitBucket-Pipelines/
This happens because you ran the command as the root user. Try running the command as the user you use to edit the project via FTP.

How to save customized kibana dashboard?

I have a local logstash-elasticsearch-kibana setup, and I have a problem when it comes to saving Kibana dashboards. Selecting the "Save" option, I get the following error: "Save failed: Dashboard could not be saved to Elasticsearch". I'm using the logstash dashboard that comes with Kibana, and after some modifications I tried to save it and got this error.
Initially, I thought my username and password were incorrect and that I might have forgotten them. So I changed them with the following command.
The documentation gives:
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd user
So I did the following:
sudo htpasswd -c /etc/nginx/conf.d/testing.somedomain.com.htpasswd user
I also tried
sudo htpasswd -c /etc/nginx/conf.d/kibana.testing.somedomain.com.htpasswd user
but still no luck.
Any leads would be of great help. I read some posts on SO, but they were not of much help.
