ReadOnly Rest plugin giving Authentication Exception - elasticsearch

I am using the readonlyRest plugin to secure Elasticsearch and Kibana, but once I added the following to my readonlyrest.yml, Kibana started giving me "Authentication Exception". What could be the reason for that?
kibana.yml
elasticsearch.username: "kibana"
elasticsearch.password: "kibana123"
readonlyrest.yml
readonlyrest:
  enable: true
  response_if_req_forbidden: Access denied!!!
  access_control_rules:
    - name: "Accept all requests from localhost"
      type: allow
      hosts: [XXX.XX.XXX.XXX]
    - name: "::Kibana server::"
      auth_key: kibana:kibana123
      type: allow
    - name: "::Kibana user::"
      auth_key: kibana:kibana123
      type: allow
      kibana_access: rw
      indices: [".kibana*","log-*"]
My Kibana and Elasticsearch are hosted on the same server; is that the reason?
Another question: if I want to make my Elasticsearch server accessible only through a particular host, can I write that host in the first section of access_control_rules, as shown in readonlyrest.yml?
Elastic version: 6.2.3
Log error: I don't remember exactly, but it was [ACL] Forbidden, showing false for all three control rules.
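As an added illustration (not from the question or from ReadonlyREST itself), this Go model of first-match ACL evaluation shows why the order and overlap of the posted blocks matter: both Kibana blocks share one auth_key and "::Kibana server::" has no indices rule, so it allows that user everything before "::Kibana user::" is ever reached. It assumes the simplified semantics that a block matches only when every rule it lists holds and that the first matching block decides; kibana_access and the finer points of the real indices rule are not modelled.

```go
package main

import (
	"fmt"
	"strings"
)

// Request is what the posted rules inspect: source IP, "user:password" from Basic auth ("" when absent), indices touched.
type Request struct{ Host, AuthKey string; Indices []string }

// Block is one access_control_rules entry; an empty field means that rule is not set.
type Block struct{ Name, Type, AuthKey string; Hosts, Indices []string }

// Posted mirrors the three blocks of the readonlyrest.yml above, in the order written.
var Posted = []Block{
	{Name: "Accept all requests from localhost", Type: "allow", Hosts: []string{"XXX.XX.XXX.XXX"}},
	{Name: "::Kibana server::", Type: "allow", AuthKey: "kibana:kibana123"},
	{Name: "::Kibana user::", Type: "allow", AuthKey: "kibana:kibana123", Indices: []string{".kibana*", "log-*"}},
}

// anyMatch handles exact values and a trailing "*" wildcard, enough for the posted patterns.
func anyMatch(patterns []string, s string) bool {
	for _, p := range patterns {
		if p == s || (strings.HasSuffix(p, "*") && strings.HasPrefix(s, strings.TrimSuffix(p, "*"))) {
			return true
		}
	}
	return false
}

// Evaluate walks the blocks top to bottom; the first block whose rules all hold decides, no match is "[ACL] Forbidden".
func Evaluate(blocks []Block, r Request) (string, bool) {
	for _, b := range blocks {
		hostOK := len(b.Hosts) == 0 || anyMatch(b.Hosts, r.Host)
		authOK := b.AuthKey == "" || b.AuthKey == r.AuthKey
		indexOK := true
		for _, idx := range r.Indices {
			indexOK = indexOK && (len(b.Indices) == 0 || anyMatch(b.Indices, idx))
		}
		if hostOK && authOK && indexOK {
			return b.Name, b.Type == "allow"
		}
	}
	return "", false
}

func main() {
	// Kibana's credentials against an index outside ".kibana*"/"log-*": block 2 matches first, so block 3 never applies.
	fmt.Println(Evaluate(Posted, Request{Host: "10.0.0.7", AuthKey: "kibana:kibana123", Indices: []string{"secret-2018"}}))
}
```

Note that a request from the address listed under hosts matches the first block with no credentials at all, so that block alone decides what a client on that address may do.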

Related

Elastic APM Error | Google Kubernetes Engine

I am trying to run Elastic APM in a GKE cluster. I have installed Elasticsearch, Kibana and apm-server. All services are up and running. All these components have been deployed through Helm charts. Below is the configuration for each component.
apmConfig:
  apm-server.yml: |
    apm-server:
      host: "0.0.0.0:8200"
      queue: {}
    output.elasticsearch:
      hosts: ["http://elasticsearch-master.monitoring.svc.cluster.local:9200"]
      username: "elastic"
      password: "password"
kibanaConfig:
  kibana.yml: |
    server.host: 0.0.0.0
    server.port: 5601
    elasticsearch.hosts: "http://elasticsearch-master.monitoring.svc.cluster.local:9200"
    kibana.index: ".kibana"
    server.basePath: "/kibana"
    server.rewriteBasePath: true
    server.publicBaseUrl: "https://mydomain/kibana"
    elasticsearch:
      username: "kibana_system"
      password: "password"
I have tried to add the APM integration to one of my services by using the config below:
var apm = require('elastic-apm-node').start({
  // Override the service name from package.json
  // Allowed characters: a-z, A-Z, 0-9, -, _, and space
  serviceName: 'shopping',
  // Use if APM Server requires a secret token
  secretToken: '',
  // Set the custom APM Server URL (default: http://localhost:8200)
  serverUrl: 'https://mydomain/apm',
  // Set the service environment
  environment: 'production'
})
When I start the service, I get the below error in logs:
{"log.level":"error","@timestamp":"2022-08-18T10:08:31.584Z","log":{"logger":"elastic-apm-node"},"ecs":{"version":"1.6.0"},"message":"APM Server transport error (301): Unexpected APM Server response"}
301 - Moved Permanently. It would be great if I could get any help.
The problem has been solved. The reason was that I was using a proxy and hitting port 80, while in the APM server Service I was using port 8200.
After changing port to 80 and targetPort to 8200 in the APM server Service, I was able to correctly instrument Elastic APM with my services.

Filebeat's GCP Module keep getting hash config error

I am currently trying to forward GCP's Cloud Logging to Filebeat, to be forwarded on to Elasticsearch, following these docs, with the GCP module settings on Filebeat according to these docs.
Currently I am only trying to forward audit logs, so my gcp.yml module file is as follows:
- module: gcp
  vpcflow:
    enabled: false
    var.project_id: my-gcp-project-id
    var.topic: gcp-vpc-flowlogs
    var.subscription_name: filebeat-gcp-vpc-flowlogs-sub
    var.credentials_file: ${path.config}/gcp-service-account-xyz.json
    #var.internal_networks: [ "private" ]
  firewall:
    enabled: false
    var.project_id: my-gcp-project-id
    var.topic: gcp-vpc-firewall
    var.subscription_name: filebeat-gcp-firewall-sub
    var.credentials_file: ${path.config}/gcp-service-account-xyz.json
    #var.internal_networks: [ "private" ]
  audit:
    enabled: true
    var.project_id: <my prod name>
    var.topic: sample_topic
    var.subscription_name: filebeat-gcp-audit
    var.credentials_file: ${path.config}/<something>.<something>
When I run sudo filebeat setup I keep getting this error:
2021-05-21T09:02:25.232Z ERROR cfgfile/reload.go:258 Error loading configuration files: 1 error: Unable to hash given config: missing field accessing '0.firewall' (source:'/etc/filebeat/modules.d/gcp.yml')
Although I can start the service, I don't see any logs forwarded from GCP's Cloud Logging Pub/Sub topic to Elasticsearch.
Help or tips on best practice too would be appreciated.
Update
If I follow the docs here instead, I get the same error, but for audit.

How to decode JSON in ElasticSearch load pipeline

I set up ElasticSearch on AWS and I am trying to load application logs into it. The twist is that each application log entry is in JSON format, like:
{"EventType":"MVC:GET:example:6741/Common/GetIdleTimeOut","StartDate":"2021-03-01T20:46:06.1207053Z","EndDate":"2021-03-01","Duration":5,"Action":{"TraceId":"80001266-0000-ac00-b63f-84710c7967bb","HttpMethod":"GET","FormVariables":null,"UserName":"ZZZTHMXXN"} ...}
So, I am trying to unwrap it. The Filebeat docs suggest that there is a decode_json_fields processor; however, I am getting the message field in Kibana as a single JSON string; nothing is unwrapped.
I am new to ElasticSearch, but I am not going to use that as an excuse not to do the analysis first; only as an explanation that I am not sure which information is helpful for answering the question.
Here is filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/opt/logs/**/*.json
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]
output.logstash:
  hosts: ["localhost:5044"]
And here is Logstash configuration file:
input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => ["https://search-blah-blah.us-west-2.es.amazonaws.com:443"]
    ssl => true
    user => "user"
    password => "password"
    index => "my-logs"
    ilm_enabled => false
  }
}
I am still trying to understand the filtering and grok parts of Logstash, but it seems that it should work the way it is. Also, I am not sure where the actual tag "messages" comes from (probably from Logstash or Filebeat), but it seems irrelevant as well.
UPDATE: The AWS documentation doesn't give an example of just loading through Filebeat, without Logstash.
If I don't use Logstash (just Filebeat) and have the following section in filebeat.yml:
output.elasticsearch:
  hosts: ["https://search-bla-bla.us-west-2.es.amazonaws.com:443"]
  protocol: "https"
  #index: "mylogs"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "username"
  password: "password"
I am getting the following errors:
If I use index: "mylogs" - setup.template.name and setup.template.pattern have to be set if index name is modified
And if I don't use index (where would it go in ES then?) -
Failed to connect to backoff(elasticsearch(https://search-bla-bla.us-west-2.es.amazonaws.com:443)): Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license from the /_license endpoint, Filebeat requires the default distribution of Elasticsearch. Please make the endpoint accessible to Filebeat so it can verify the license.: unauthorized access, could not connect to the xpack endpoint, verify your credentials
If transmitting via Logstash works in general, add a filter block as Val proposed in the comments and use this json plugin/filter: elastic.co/guide/en/logstash/current/plugins-filters-json.html - it automatically parses the JSON into Elasticsearch fields.

Cube.js timing out in serverless environment

I've been following the guide on https://cube.dev/docs/deployment#express-with-basic-passport-authentication to deploy Cube.js to Lambda. I got it working against an Athena db such that the /meta endpoint works successfully and returns schemas.
When trying to query Athena data in Lambda, however, all requests result in 504 Gateway Timeouts. Checking the CloudWatch logs, I see one consistent error:
/bin/sh: hostname: command not found
Any idea what this could be?
Here's my server.yml:
service: tw-cubejs
provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "sns:*"
        # Athena permissions
        - "athena:*"
        - "s3:*"
        - "glue:*"
      Resource:
        - "*"
  # When you uncomment vpc please make sure lambda has access to internet: https://medium.com/@philippholly/aws-lambda-enable-outgoing-internet-access-within-vpc-8dd250e11e12
  vpc:
    securityGroupIds:
      # Your DB and Redis security groups here
      - ########
    subnetIds:
      # Put here subnet with access to your DB, Redis and internet. For internet access 0.0.0.0/0 should be routed through NAT only for this subnet!
      - ########
      - ########
      - ########
      - ########
  environment:
    CUBEJS_AWS_KEY: ########
    CUBEJS_AWS_SECRET: ########
    CUBEJS_AWS_REGION: ########
    CUBEJS_DB_TYPE: athena
    CUBEJS_AWS_S3_OUTPUT_LOCATION: ########
    CUBEJS_JDBC_DRIVER: athena
    REDIS_URL: ########
    CUBEJS_API_SECRET: ########
    CUBEJS_APP: "${self:service.name}-${self:provider.stage}"
    NODE_ENV: production
    AWS_ACCOUNT_ID:
      Fn::Join:
        - ""
        - - Ref: "AWS::AccountId"
functions:
  cubejs:
    handler: cube.api
    timeout: 30
    events:
      - http:
          path: /
          method: GET
      - http:
          path: /{proxy+}
          method: ANY
  cubejsProcess:
    handler: cube.process
    timeout: 630
    events:
      - sns: "${self:service.name}-${self:provider.stage}-process"
plugins:
  - serverless-express
Even though this hostname error message is in the logs, it isn't the cause of the issue.
Most probably you are experiencing the issue described here.
@cubejs-backend/serverless uses an internet connection to access the messaging API, as well as Redis inside the VPC for managing the queue and cache.
One of those doesn't work in your environment.
Such timeouts usually mean that there's a problem with the internet connection or with the Redis connection. If it's Redis, you'll usually see timeouts after 5 minutes or so in both the cubejs and cubejsProcess functions. If it's the internet connection, you will never see any logs of query processing in the cubejsProcess function.
Check the version of cube.js you are using; according to the changelog this issue should have been fixed in 0.10.59.
It's most likely down to a dependency of cube.js assuming that all environments where it will run will be able to run the hostname shell command (it looks like it's using node-machine-id).

Connect to dynamically create new cluster on GKE

I am using the cloud.google.com/go SDK to programmatically provision the GKE clusters with the required configuration.
I set the ClientCertificateConfig.IssueClientCertificate = true (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#ClientCertificateConfig).
After the cluster is provisioned, I use the ca_certificate, client_key and client_secret returned for the same cluster (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#MasterAuth). Now that I have the above 3 attributes, I try to generate the kubeconfig for this cluster (to be used later by helm).
Roughly, my kubeconfig looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64_encoded_data>
    server: https://X.X.X.X
  name: gke_<project>_<location>_<name>
contexts:
- context:
    cluster: gke_<project>_<location>_<name>
    user: gke_<project>_<location>_<name>
  name: gke_<project>_<location>_<name>
current-context: gke_<project>_<location>_<name>
kind: Config
preferences: {}
users:
- name: gke_<project>_<location>_<name>
  user:
    client-certificate-data: <base64_encoded_data>
    client-key-data: <base64_encoded_data>
On running kubectl get nodes with the above config I get the error:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list resource "serviceaccounts" in API group "" at the cluster scope
Interestingly, if I use the config generated by gcloud, the only change is in the user section:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
This configuration seems to work just fine. But as soon as I add client cert and client key data to it, it breaks:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
  client-certificate-data: <base64_encoded_data>
  client-key-data: <base64_encoded_data>
I believe I'm missing some details related to RBAC but I'm not sure what. Will you be able to provide me with some info here?
Also, referring to this question, I've tried to rely only on a username/password combination first, using that to apply a new clusterrolebinding in the cluster. But I'm unable to use just the username/password approach. I get the following error:
error: You must be logged in to the server (Unauthorized)
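Added example, not from the question or from the cloud.google.com/go SDK: with the kubeconfig above, the identity kubectl presents is the subject of the certificate in client-certificate-data, and this Go snippet extracts that identity the way the API server's client-certificate authenticator does (user from the CN, groups from the O entries), so you can see which user name an RBAC binding must reference. SelfSignedPEM is a constructed stand-in for the GKE-issued certificate.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/base64"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// CertIdentity decodes a client-certificate-data value (base64 PEM, the same string that
// MasterAuth.ClientCertificate carries) and returns the subject CN and O entries. Kubernetes
// authenticates a client certificate as user = CN, groups = O, so an RBAC binding has to name
// that CN; it is the User "client" in the Forbidden error above.
func CertIdentity(b64 string) (user string, groups []string, err error) {
	raw, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		return "", nil, err
	}
	blk, _ := pem.Decode(raw)
	if blk == nil || blk.Type != "CERTIFICATE" {
		return "", nil, fmt.Errorf("client-certificate-data is not a PEM certificate")
	}
	cert, err := x509.ParseCertificate(blk.Bytes)
	if err != nil {
		return "", nil, err
	}
	if time.Now().After(cert.NotAfter) {
		return "", nil, fmt.Errorf("certificate for %q expired on %s", cert.Subject.CommonName, cert.NotAfter)
	}
	return cert.Subject.CommonName, cert.Subject.Organization, nil
}

// SelfSignedPEM is constructed test input standing in for the certificate GKE issues.
func SelfSignedPEM(cn string, orgs []string) string {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	tmpl := &x509.Certificate{SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: cn, Organization: orgs},
		NotBefore: time.Now(), NotAfter: time.Now().Add(time.Hour)}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, pub, priv)
	return base64.StdEncoding.EncodeToString(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}

func main() {
	user, groups, err := CertIdentity(SelfSignedPEM("client", nil))
	fmt.Println("RBAC subject:", user, groups, err)
}
```

Pass the ClientCertificate string from MasterAuth instead of SelfSignedPEM(...) to inspect the certificate of a real cluster.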
