Unable to connect Elasticsearch instance to Grafana

I have a self-hosted Grafana instance and an Elastic Cloud hosted Elasticsearch instance with Kibana. I'm hoping to get the two connected so I can query Elasticsearch directly from Grafana. I've gone through the official docs and haven't found them helpful here.
Here's the form I see when adding a datasource in Grafana:
Here are the applications in my Elastic Cloud deployment:
The URL in the Grafana form is coming from the Elasticsearch "Copy Endpoint" link.
The user defined here is a user I set up in the Elasticsearch "Stack Management" section with the kibana_system user role.
I've tried several of the options defined and cannot get a successful response from my ES instance.
Depending on the settings I've tried, it will either be a 400 error, a 403 error, a 503 error, or this Elasticsearch error: Types cannot be provided in get mapping requests, unless include_type_name is set to true.
With my current settings, I'm hitting the Elasticsearch error above. I would assume this means my credentials are correct, but something else is going wrong in the verification request.
The verification request is going to https://[grafanahost]/api/datasources/proxy/17/logs*/_mapping
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Types cannot be provided in get mapping requests, unless include_type_name is set to true."
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Types cannot be provided in get mapping requests, unless include_type_name is set to true."
  },
  "status": 400
}
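One way to narrow this down (a sketch, assuming a placeholder endpoint and the same user Grafana is configured with) is to reproduce the verification request directly with Python's requests library, bypassing the Grafana proxy:

# Hedged sketch: replays Grafana's verification call against the cluster
# directly. ES_ENDPOINT and AUTH are placeholders, not values from the post.
import requests

ES_ENDPOINT = "https://your-deployment.es.us-east-1.aws.found.io:9243"
AUTH = ("your-grafana-user", "your-password")

# Grafana's health check proxies a GET <index-pattern>/_mapping call.
resp = requests.get(ES_ENDPOINT + "/logs*/_mapping", auth=AUTH)
print(resp.status_code)
print(resp.json())

If this direct call succeeds, the credentials are fine and the error comes from how Grafana forms the request; this particular error is commonly reported when the Version field in the Grafana data source settings doesn't match the cluster, since older-version modes make Grafana include types in mapping requests.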
Thank you!

Related

Nifi API Authentication - Access Token

I'm attempting to collect details about all processors on two installed instances of Nifi, versions 1.19.0 and 1.20.0. I want to use Nifi to do this so I can perform some minor ETLs and save the resultant dataset in a database. Both instances are configured for standalone authentication with SSL.
However, when I configure the InvokeHTTP processor the way I think it should be to acquire a token, I get a 400 status code and a response.body of "The username and password must be specified." on both instances.
Here's the super basic flow:
Nifi Flow
Here's what the error looks like:
Attributes of failed response
Here's the current config of the InvokeHTTP processor on the 1.20.0 instance, which we can focus on from here on out since both instances respond similarly.
Postman - InvokeHTTP Config
When I run the request with Postman, I get the expected token in the response body. And I'm able to make subsequent requests (the above config was a response from https://nifi06:8443/nifi-api/processors/39870615-0186-1000-e7d7-c59fa621ca7d in Postman).
I've tried the following:
Adding dynamic attributes for username/password as in the above configuration.
I've added them as JSON to the preceding GenerateFlowFile processor's custom text.
And I've also tried using the Request Username/Request Password properties of the InvokeHTTP processor.
All return with the same response.body of "The username and password must be specified."
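For comparison, here's roughly what the working Postman call is doing, sketched outside NiFi (the host and credentials are placeholders). The detail that commonly bites here is that NiFi's /access/token endpoint expects the username and password as a form-encoded body, not as JSON or as flowfile attributes:

# Hedged sketch of the token request; nifi06 and the credentials are
# placeholders. requests form-encodes the `data=` dict automatically,
# i.e. Content-Type: application/x-www-form-urlencoded.
import requests

resp = requests.post(
    "https://nifi06:8443/nifi-api/access/token",
    data={"username": "your-user", "password": "your-pass"},
    verify=False,  # or the path to the NiFi server's CA certificate
)
print(resp.status_code)  # 201 on success
print(resp.text)         # the JWT to pass as a Bearer token afterwards

If that's what's happening, the InvokeHTTP equivalent would be sending the form-encoded string as the flowfile content with a matching Content-Type, rather than as dynamic attributes or JSON.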
This seems like it should be really simple, and I'm sure I've been staring at it too long to see what I'm missing. Can anyone tell me where I need to specify the username/password?
Thanks.

How can I test using WAF to protect API Gateway RestAPI from SQL injection contained in request payload?

I've created a few resources in AWS in an attempt to create an MRE showcasing how the WAF can be used to prevent malicious requests from being sent to an API Gateway RestAPI.
I've created:
S3 Bucket
Kinesis Data Firehose
WAF Web ACL
API Gateway RestAPI
I've associated the RestAPI with the WAF ACL at the stage level. The WAF Web ACL has been configured to use an AWS managed rule:
AWS-AWSManagedRulesSQLiRuleSet
The SQL database rule group contains rules to block request patterns associated with exploitation of SQL databases, like SQL injection attacks. This can help prevent remote injection of unauthorized queries. Evaluate this rule group for use if your application interfaces with an SQL database.
https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-list.html
It's configured with the defaults shown below; this is the only rule, and all other defaults were chosen.
I've created a serverless API POST resource endpoint, and despite having everything configured as described above, requests with a payload/body as shown below are not blocked by the WAF.
{
  "MyString": "SELECT username, password FROM Users"
}
Why aren't requests being blocked by the WAF? How can I configure it so that requests with SQL in the payload are rejected at the WAF before being sent to the RestAPI?
I assumed that my SQL above would be enough. Is that not true? What SQL can I use to validate that the WAF and the AWS managed rule are working as expected? What logic could explain why my request above is not being blocked?
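Part of the answer may be the test string itself: the SQLi rule set matches injection patterns, and a well-formed standalone SELECT is not necessarily scored as an attack. Here's a sketch for probing the rule set (the invoke URL is a placeholder); the quoted/tautology payloads below are the kind of input the managed rules are designed to catch:

# Hedged sketch: POSTs a few candidate payloads at the API and prints the
# status codes. The URL is a placeholder, not a real invoke URL.
import requests

URL = "https://abc123.execute-api.us-east-1.amazonaws.com/Prod/myresource"

payloads = [
    "SELECT username, password FROM Users",  # plain query: may be allowed
    "' OR '1'='1",                           # classic tautology injection
    "1; DROP TABLE Users--",                 # stacked query plus comment
]
for p in payloads:
    r = requests.post(URL, json={"MyString": p})
    # A WAF block typically surfaces to the client as a 403.
    print(r.status_code, repr(p))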
I've also configured S3 logging, and here is an example of one of the records from the S3 log, showing that this request was allowed:
"timestamp": 1612921225458,
"formatVersion": 1,
"webaclId": "ABC",
"terminatingRuleId": "Default_Action",
"terminatingRuleType": "REGULAR",
"action": "ALLOW",
"terminatingRuleMatchDetails": [],
"httpSourceName": "APIGW",
"httpSourceId": "ABC:ABC:Prod",
"ruleGroupList": [{
"ruleGroupId": "AWS#AWSManagedRulesSQLiRuleSet",
"terminatingRule": null,
"nonTerminatingMatchingRules": [],
"excludedRules": null
}
],
"rateBasedRuleList": [],
"nonTerminatingMatchingRules": [],
"requestHeadersInserted": null,
"responseCodeSent": null,
What assumption am I making that is incorrect here?
I would like an MRE that shows the WAF rejecting the request when the POST body contains malicious-looking SQL, returning a 4xx response. I'm assuming this is achievable.
Update 1: I've tried to add my own rules and rule groups, using the rule builder and specifically looking at the request body. Even with these in place, it still does not reject.
I've added rules for Body, Query string, and Path. Still, the request was not rejected; a sketch of a body-inspecting rule follows below.
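For the custom-rule attempt, this is roughly the shape of a body-inspecting SQLi rule in WAFv2, written as the Python dict you'd pass through boto3 (or mirror in the console rule builder); the name, priority, and text transformations here are placeholder choices, not values from the post. Also worth knowing: WAF inspects only the first portion of the request body (8 KB by default for regional resources), and even a correctly scoped rule still needs an injection-looking payload to fire.

# Hypothetical rule definition; pass it in the Rules list of wafv2
# create_web_acl / update_web_acl via boto3.
sqli_body_rule = {
    "Name": "block-sqli-in-body",  # placeholder name
    "Priority": 0,
    "Statement": {
        "SqliMatchStatement": {
            "FieldToMatch": {"Body": {}},
            "TextTransformations": [
                {"Priority": 0, "Type": "URL_DECODE"},
                {"Priority": 1, "Type": "HTML_ENTITY_DECODE"},
            ],
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-sqli-in-body",
    },
}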

How to Create Elasticsearch Point in Time (PIT)?

I'm trying to use the search_after parameter with a point in time (PIT) to paginate search results. This is the documentation section I'm consulting.
I'm making a POST to /my-index/_pit?keep_alive=1m.
The /_pit endpoint only accepts the POST method (if I try GET, it says only POST is accepted) and, per the docs, it does not take a request body. However, the response I receive is a 400 with this message:
"type": "parse_exception",
"reason": "request body is required"
I can't find any other examples of a /_pit request and I'm just confused by these responses.
Has anyone successfully gotten back a PIT?
In case it's relevant, we have a managed elastic cloud deployment on a standard subscription.
I ended up finding an Elastic forum post indicating that the PIT API is only available as of version 7.10. Sure enough, I tried against a 7.10 deployment and it succeeded as a POST without a body.
So, I feel as though there isn't much guidance on this outside of this particular example, and I felt the need to post this for other users who struggled as I did.
If you're using an API tool like Postman, you have to update your headers to include Content-Type: application/json, and set your authorization method to whatever you need (I used basic auth with username and password).
The index that you use (my-index-000001 from their example) should be one that you've set up for your search query; it goes in the URL right before the _pit portion. Leave the body empty, send the POST request, and you'll get back your PIT id.
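To put the whole flow together, here's a minimal sketch of opening a PIT and paginating with search_after on 7.10+, using plain requests; the endpoint, credentials, index name, and sort field are placeholders:

# Hedged sketch; ES, AUTH, the index name, and the "timestamp" sort field
# are placeholders. Use a field that actually exists in your index.
import requests

ES = "https://your-deployment.es.io:9243"
AUTH = ("elastic", "your-password")

# 1. Open a PIT against your index: POST with no body, as described above.
pit = requests.post(ES + "/my-index-000001/_pit?keep_alive=1m", auth=AUTH).json()

# 2. Search with the PIT id. Note: the index is NOT in the search URL,
#    because the PIT already pins it.
body = {
    "size": 100,
    "query": {"match_all": {}},
    "pit": {"id": pit["id"], "keep_alive": "1m"},
    "sort": [{"timestamp": "asc"}],
}
page = requests.post(ES + "/_search", json=body, auth=AUTH).json()

# 3. Paginate: feed the last hit's sort values back in as search_after.
body["search_after"] = page["hits"]["hits"][-1]["sort"]
next_page = requests.post(ES + "/_search", json=body, auth=AUTH).json()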

Giving read-only access to a user in IBM Cloud Elasticsearch

I want to have a user which has read-only access to a given index. I have read the Elasticsearch documentation and learnt that this can be achieved using the X-Pack API provided by Elasticsearch as a security feature. Now, I am using "IBM Cloud Databases for Elasticsearch", which comes with Elasticsearch 6.8.4. I am able to communicate with it via Python and the REST APIs, and I can create indices, documents, etc., but I am not able to use any of the X-Pack methods at all, not even basic ones like get_role or get_user; it gives the error I have attached below. I have also tried the same Elasticsearch version deployed locally on my machine, and there I am able to use all the X-Pack methods successfully. Below are examples of how I am trying to use the get_user method, via requests and via the elasticsearch Python client.
Here is the requests method used and the response:
# Get user via requests
import requests

endpoint = "https://9fb4-f729-4d0c-86b1-da.eb46187.databases.appdomain.cloud:31248/_xpack/security/user"
header = {'Authorization': access_token,
          'Content-Type': 'application/json',
          'Accept': 'application/json'}
requests.get(url=endpoint,
             auth=(cred_elastic["username"], cred_elastic['password']),
             verify='./cert.pem',
             headers=header).json()
Response:
{'error': {'root_cause': [{'type': 'security_exception',
                           'reason': 'Unexpected exception indices:data/read/get'}],
           'type': 'security_exception',
           'reason': 'Unexpected exception indices:data/read/get'},
 'status': 500}
Here is the same method using the elasticsearch Python client, and the response:
# Creating Elasticsearch object
from ssl import create_default_context
from elasticsearch import Elasticsearch

context = create_default_context(cadata=cred_elastic['tls_certificate'])
es = Elasticsearch(cred_elastic['endpoint'],
                   ssl_context=context,
                   http_auth=(cred_elastic['username'],
                              cred_elastic['password']))
es.security.get_user()
Response:
TransportError: TransportError(405, 'Incorrect HTTP method for uri [/_security/user] and method [GET], allowed: [POST]', 'Incorrect HTTP method for uri [/_security/user] and method [GET], allowed: [POST]')
Additionally, in the second method the error is different, but if I use put_user instead, it throws the exact same 500 error as the former method.
I am using the default user and service credentials that IBM Cloud creates for authentication.
Update: This is the link to the service that I am using (Contains Documentation link as well):
https://cloud.ibm.com/catalog/services/databases-for-elasticsearch
That's because IBM Cloud Databases for Elasticsearch doesn't use X-Pack, so if you're attempting to use it, it won't work. Currently, they only have one type of user.

How to pull Pub/Sub metrics from the Google API

We are using our own logging solution because Stackdriver is su...bpar. I want to pull metrics on how many unacknowledged messages there are in Pub/Sub. I started reading the docs on that and they are all over the place.
Found this page:
https://cloud.google.com/monitoring/api/metrics
Despite being under the API docs, it does not describe any API calls, but it does contain the description of the metric I want to extract.
Now I am thinking I need to use the Monitoring API to extract what I need somehow:
https://cloud.google.com/monitoring/api/ref_v3/rest/
So I used the API Explorer to try a couple of methods:
https://developers.google.com/apis-explorer/#search/monitoring/monitoring/v3/monitoring.projects.groups.list
I query it and it gives me an available URL:
GET https://monitoring.googleapis.com/v3/projects/myprojectname/groups?key={YOUR_API_KEY}
I go to my project's console (the APIs & Credentials page), generate an API key without restrictions, and paste it in to try a curl:
curl https://monitoring.googleapis.com/v3/projects/myproject/groups?key=myrandomkeylkjlkj
{
  "error": {
    "code": 401,
    "message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
    "status": "UNAUTHENTICATED"
  }
}
Why is this happening? How can I get the metrics? I went to the URL provided, but it explains OAuth token creation and has nothing regarding API keys. I just need to curl things to make sure I am going the right way.
Why does this have to be so hard? Killed several hours of my life trying to get this.
curl -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" https://monitoring.googleapis.com/v3/projects/myproject/groups
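That 401 is the key detail: the Monitoring API wants OAuth credentials, and a bare API key doesn't satisfy it, which is why the curl above works once a Bearer token is attached. To actually pull the unacknowledged-message count, the method you want is timeSeries.list with a metric filter. Here's a sketch with plain requests (the project id is a placeholder, and it borrows the gcloud token trick from the curl):

# Hedged sketch; "myproject" is a placeholder. Reads the last 5 minutes of
# the num_undelivered_messages metric for all subscriptions in the project.
import subprocess
import time
import requests

token = subprocess.check_output(
    ["gcloud", "auth", "print-access-token"], text=True
).strip()

end = int(time.time())
start = end - 300

def rfc3339(ts):
    # The Monitoring API expects RFC 3339 timestamps for the interval.
    return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(ts))

resp = requests.get(
    "https://monitoring.googleapis.com/v3/projects/myproject/timeSeries",
    headers={"Authorization": "Bearer " + token},
    params={
        "filter": 'metric.type="pubsub.googleapis.com/subscription/num_undelivered_messages"',
        "interval.startTime": rfc3339(start),
        "interval.endTime": rfc3339(end),
    },
)
print(resp.json())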
