Can't start Auditbeat - Elasticsearch

Hi, I am using the ELK stack version 7.1.1 with X-Pack installed, and I'm trying to configure and set up Auditbeat, but it shows the following error on startup:
ERROR instance/beat.go:916 Exiting: 2 errors: 1 error: failed to create audit client: failed to get audit status: operation not permitted; 1 error: unable to create DNS sniffer: failed creating af_packet sniffer: operation not permitted
Exiting: 2 errors: 1 error: failed to create audit client: failed to get audit status: operation not permitted; 1 error: unable to create DNS sniffer: failed creating af_packet sniffer: operation not permitted
My Auditbeat configuration:
auditbeat.modules:

- module: auditd
  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  audit_rules: |

- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc

- module: system
  datasets:
    - host
    - login
    - package
    - process
    - socket
    - user
  state.period: 12h
  user.detect_password_changes: true
  login.wtmp_file_pattern: /var/log/wtmp*
  login.btmp_file_pattern: /var/log/btmp*

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

setup.kibana:
  host: "localhost:5601"

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "mypassword"
Please help me solve it.

I would assume you have launched Auditbeat as an unprivileged user. Because Auditbeat has to interact with auditd, most of its activities must be performed by root (at least, root rights solved the same issue in my case).
PS: if you can't switch to root, try this:
link
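Both "operation not permitted" errors above come from the kernel: the auditd module needs the kernel audit netlink socket, and the DNS sniffer needs a raw AF_PACKET socket. A minimal sketch of the options, assuming a package install with the binary at /usr/share/auditbeat/bin/auditbeat (the setcap path is an assumption; adjust it to your install):

```shell
# describe_priv UID: explain whether that uid can open the kernel audit
# socket and AF_PACKET sockets (root can; others need capabilities).
describe_priv() {
  if [ "$1" -eq 0 ]; then
    echo "running as root"
  else
    echo "not root: auditd and af_packet access will be denied"
  fi
}
describe_priv "$(id -u)"

# Run as root via systemd (package installs):
#   sudo systemctl start auditbeat
# Or grant just the needed capabilities to the binary (path assumed):
#   sudo setcap cap_audit_control,cap_audit_read,cap_net_raw+ep /usr/share/auditbeat/bin/auditbeat
```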


Failed to get network: Failed to create new channel client: event service creation failed: could not get chConfig cache reference

Failed to get network: Failed to create new channel client: event service creation failed: could not get chConfig cache reference: QueryBlockConfig failed: QueryBlockConfig failed: queryChaincode failed: Transaction processing for endorser [peer-node-endpoint]: Endorser Client Status Code: (2) CONNECTION_FAILED. Description: dialing connection on target [peer-node-endpoint]: connection is in TRANSIENT_FAILURE
Getting this error when trying to connect fabric-sdk-go with network using connection-profile.yaml in Hyperledger fabric.
NOTE: the chaincode is deployed and works just fine when I invoke a transaction from the terminal, so there are no doubts on that side.
I saw the same problem posted on Stack Overflow already, but that answer is outdated, as Hyperledger Fabric v2.2 changed a lot compared to v1.
Here is my connection profile from the fabric-samples test-network (the only difference is that I give the path of the TLS cert file instead of pasting the private key).
Here is the connection-profile.yaml of the test-network, which works fine on the local machine:
---
name: test-network-org1
version: 1.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
    - peer0.org1.example.com
    certificateAuthorities:
    - ca.org1.example.com
peers:
  peer0.org1.example.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      path: /path/to/cert/file
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      hostnameOverride: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://localhost:7054
    caName: ca-org1
    tlsCACerts:
      path: /path/to/cert/file
    httpOptions:
      verify: false
But if I just change the peer's (logical) name from peer0.org1.example.com to peer0.org1.com, i.e. just remove the "example" part of the name, it gives the same error I posted above.
So I am wondering why that is the case, because the Hyperledger Fabric connection-profile documentation says this name is just a logical name and nothing else; we can give it any name, and all that matters is the (peer/CA) URL endpoint.
The new connection profile looks like this:
---
name: test-network-org1
version: 1.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
    - peer0.org1.com
    certificateAuthorities:
    - ca.org1.example.com
peers:
  peer0.org1.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      path: /path/to/cert/file
    grpcOptions:
      ssl-target-name-override: peer0.org1.com
      hostnameOverride: peer0.org1.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://localhost:7054
    caName: ca-org1
    tlsCACerts:
      path: /path/to/cert/file
    httpOptions:
      verify: false
And in docker logs it shows this error:
peer0.org1.example.com|2022-10-03 06:47:51.442 UTC 0c95 ERRO [core.comm] ServerHandshake -> Server TLS handshake failed in 3.5334ms with error remote error: tls: bad certificate server=PeerServer remoteaddress=172.20.0.1:61090
Now, the path to the cert file is correct; that I'm sure about.
So can anyone explain why changing just the peer's (logical) name locally triggers this error?
If someone wants to reproduce the same case, it can also be done locally by running the test-network from fabric-samples.
As far as the code is concerned, it's the same as in fabric-samples (the assetTransferBasic case); I am just trying to run it using connection-profile.yaml.
If anyone needs more detail, I will be available.
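One way to see why the handshake fails is to look at the Subject Alternative Names baked into the peer's TLS server certificate: gRPC validates the name given in ssl-target-name-override against them, so a name the certificate doesn't carry yields "bad certificate" even though the URL endpoint is reachable. A diagnostic sketch (the cert path is an assumption; in fabric-samples the peer's server cert lives under organizations/peerOrganizations/):

```shell
# san_of CERTFILE: print the Subject Alternative Names of a PEM certificate
# (requires OpenSSL 1.1.1+ for the -ext option).
san_of() {
  openssl x509 -noout -ext subjectAltName -in "$1"
}
# e.g. (path assumed from a fabric-samples test-network checkout):
#   san_of organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
# If peer0.org1.com is not in the list printed, TLS verification with
# that override name fails regardless of which URL you connect to.
```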

How to use gpload utility?

I have YAML file below:
---
VERSION: 1.0.0.1
DATABASE: xxx
USER: xxx
HOST: xxx
PORT: 5432
GPLOAD:
  INPUT:
    - SOURCE:
        LOCAL_HOSTNAME:
          - 192.168.0.21
        PORT: 8081
        FILE:
          - /home/root/test_input.txt
    - COLUMNS:
        - age: int4
        - name: varchar
        - surname: varchar
    - FORMAT: text
    - DELIMITER: '|'
    - ERROR_LIMIT: 2
    - LOG_ERRORS: true
  OUTPUT:
    - TABLE: sf_dfs.test_gpload
    - MODE: INSERT
  PRELOAD:
    - REUSE_TABLES: true
But I receive an error: error when connecting to gpfdist http://192.168.0.21:8081//home/root/test_input.txt, quit after 11 tries (seg0 slice1 192.168.0.23:6000 pid=2021845) encountered while running INSERT INTO.
Does anybody have experience with this program?
Looks like it is a port issue. If the database is up, rerun the job with a different port, and ensure the firewall is not blocking it.
A couple of questions:
Are you running gpload as root? root generally does not have access permissions to the database; it needs to be run as gpadmin or another superuser.
The input file is in /home/root. If you are running as gpadmin, can gpadmin access this file? What are the permissions on the file?
Finally, does the target table (sf_dfs.test_gpload) exist in the database? Was it created and distributed across all segments? The error would seem to indicate the table is not there.
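The URL in the error message is exactly what the segment hosts request from gpfdist: http://<LOCAL_HOSTNAME>:<PORT>/<FILE>. A small sketch that rebuilds it from the YAML above so you can probe it directly (the helper name and the curl check are assumptions, not part of gpload):

```shell
# gpfdist_url HOST PORT FILE: the URL the segments will fetch from gpfdist.
gpfdist_url() {
  echo "http://$1:$2/$3"
}
gpfdist_url 192.168.0.21 8081 /home/root/test_input.txt
# From a segment host (e.g. 192.168.0.23 from the error), verify reachability:
#   curl -I "$(gpfdist_url 192.168.0.21 8081 /home/root/test_input.txt)"
# A timeout here points to the firewall/port problem described above.
```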

Gluster_Volume module in ansible

Could you help me with the following issue?
I am building a highly available LAMP app on Ubuntu 14.04 with Ansible (on my home lab). All the tasks execute up to the GlusterFS installation, but creating the GlusterFS volume has been a challenge for me for a week. If I use the command module, the volume is created:
- name: Creating the Gluster Volume
  command: sudo gluster volume create var-www replica 2 transport tcp server01-private:/data/glusterfs/var-www/brick01/brick server02-private:/data/glusterfs/var-www/brick02/brick
But if I use the gluster_volume module I get an error:
- name: Creating the Gluster Volume
  gluster_volume:
    state: present
    name: var-www
    bricks: /server01-private:/data/glusterfs/var-www/brick01/brick,/server02-private:/data/glusterfs/var-www/brick02/brick
    replicas: 2
    transport: tcp
    cluster:
      - server01-private
      - server02-private
    force: yes
  run_once: true
The error is
"msg": "error running gluster (/usr/sbin/gluster --mode=script volume add-brick var-www replica 2 server01-private:/server01-private:/data/glusterfs/var-www/brick01/brick server01-private:/server02-private:/data/glusterfs/var-www/brick02/brick server02-private:/server01-private:/data/glusterfs/var-www/brick01/brick server02-private:/server02-private:/data/glusterfs/var-www/brick02/brick force) command (rc=1): internet address 'server01-private:/server01-private' does not conform to standards\ninternet address 'server01-private:/server02-private' does not conform to standards\ninternet address 'server02-private:/server01-private' does not conform to standards\ninternet address 'server02-private:/server02-private' does not conform to standards\nvolume add-brick: failed: Host server01-private:/server01-private is not in 'Peer in Cluster' state\n"
}
May I know the mistake I am making?
The bricks: declaration of the Ansible gluster_volume module requires only the path of the brick; the nodes participating in the volume are identified by cluster:.
The <hostname>:<brickpath> format is required for the gluster command line, but it is not needed when you use the Ansible module.
So your task should be something like:
- name: Creating the Gluster Volume
  gluster_volume:
    name: 'var-www'
    bricks: '/data/glusterfs/var-www/brick01/brick,/data/glusterfs/var-www/brick02/brick'
    replicas: '2'
    cluster:
      - 'server01-private'
      - 'server02-private'
    transport: 'tcp'
    state: 'present'

Cube.js timing out in serverless environment

I've been following the guide on https://cube.dev/docs/deployment#express-with-basic-passport-authentication to deploy Cube.js to Lambda. I got it working against an Athena db such that the /meta endpoint works successfully and returns schemas.
When trying to query Athena data in Lambda however, all requests are resulting in 504 Gateway Timeouts. Checking the CloudWatch logs I see one consistent error:
/bin/sh: hostname: command not found
Any idea what this could be?
Here's my serverless.yml:
service: tw-cubejs

provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "sns:*"
        # Athena permissions
        - "athena:*"
        - "s3:*"
        - "glue:*"
      Resource:
        - "*"
  # When you uncomment vpc please make sure lambda has access to internet: https://medium.com/#philippholly/aws-lambda-enable-outgoing-internet-access-within-vpc-8dd250e11e12
  vpc:
    securityGroupIds:
      # Your DB and Redis security groups here
      - ########
    subnetIds:
      # Put here subnet with access to your DB, Redis and internet. For internet access 0.0.0.0/0 should be routed through NAT only for this subnet!
      - ########
      - ########
      - ########
      - ########
  environment:
    CUBEJS_AWS_KEY: ########
    CUBEJS_AWS_SECRET: ########
    CUBEJS_AWS_REGION: ########
    CUBEJS_DB_TYPE: athena
    CUBEJS_AWS_S3_OUTPUT_LOCATION: ########
    CUBEJS_JDBC_DRIVER: athena
    REDIS_URL: ########
    CUBEJS_API_SECRET: ########
    CUBEJS_APP: "${self:service.name}-${self:provider.stage}"
    NODE_ENV: production
    AWS_ACCOUNT_ID:
      Fn::Join:
        - ""
        - - Ref: "AWS::AccountId"

functions:
  cubejs:
    handler: cube.api
    timeout: 30
    events:
      - http:
          path: /
          method: GET
      - http:
          path: /{proxy+}
          method: ANY
  cubejsProcess:
    handler: cube.process
    timeout: 630
    events:
      - sns: "${self:service.name}-${self:provider.stage}-process"

plugins:
  - serverless-express
Even though this hostname error message is in the logs, it isn't the cause of the issue.
Most probably you are experiencing the issue described here.
@cubejs-backend/serverless uses an internet connection to access the messaging API, as well as Redis inside the VPC for managing the queue and cache.
One of those doesn't work in your environment.
Such timeouts usually mean that there's a problem with internet connection or with Redis connection. If it's Redis you'll usually see timeouts after 5 minutes or so in both cubejs and cubejsProcess functions. If it's internet connection you will never see any logs of query processing in cubejsProcess function.
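For context on what needs the internet connection: the config above subscribes cubejsProcess to an SNS topic named "<service>-<stage>-process", and the API function publishes to it through the public SNS endpoint, which from inside a VPC requires NAT. A sketch of the checks (the helper and the bastion-host commands are assumptions; the region in the URL is an example):

```shell
# process_topic SERVICE STAGE: the SNS topic name the serverless config wires up.
process_topic() {
  echo "$1-$2-process"
}
process_topic tw-cubejs dev

# From a host in the same subnets/security groups as the functions:
#   curl -sSf https://sns.us-east-1.amazonaws.com >/dev/null && echo "egress ok"
#   redis-cli -u "$REDIS_URL" ping    # expect PONG
```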
Check the version of cube.js you are using, according to the changelog this issue should have been fixed in 0.10.59.
It's most likely down to a dependency of Cube.js assuming that every environment where it runs can execute the hostname shell command (it looks like it's using node-machine-id).

state_replicaset/state_replicaset.go:98 error making http request: Get kube-state-metrics:8080/metrics: lookup kube-state-metrics on IP:53: no such host

We are trying to start Metricbeat on a Typhoon Kubernetes cluster, but after startup it's not able to get some pod-specific events, such as restarts, because of the following.
The corresponding metricbeat.yml snippet:
# State metrics from kube-state-metrics service:
- module: kubernetes
  enabled: true
  metricsets:
    - state_node
    - state_deployment
    - state_replicaset
    - state_statefulset
    - state_pod
    - state_container
    - state_cronjob
    - state_resourcequota
    - state_service
    - state_persistentvolume
    - state_persistentvolumeclaim
    - state_storageclass
    # Uncomment this to get k8s events:
    #- event
  period: 10s
  hosts: ["kube-state-metrics:8080"]
The error we are facing:
2020-07-01T10:31:02.486Z ERROR [kubernetes.state_statefulset] state_statefulset/state_statefulset.go:97 error making http request: Get http://kube-state-metrics:8080/metrics: lookup kube-state-metrics on *.*.*.*:53: no such host
2020-07-01T10:31:02.611Z WARN [transport] transport/tcp.go:52 DNS lookup failure "kube-state-metrics": lookup kube-state-metrics on *.*.*.*:53: no such host
2020-07-01T10:31:02.611Z INFO module/wrapper.go:259 Error fetching data for metricset kubernetes.state_node: error doing HTTP request to fetch 'state_node' Metricset data: error making http request: Get http://kube-state-metrics:8080/metrics: lookup kube-state-metrics on *.*.*.*:53: no such host
2020-07-01T10:31:03.313Z ERROR process_summary/process_summary.go:102 Unknown or unexpected state <P> for process with pid 19
2020-07-01T10:31:03.313Z ERROR process_summary/process_summary.go:102 Unknown or unexpected state <P> for process with pid 20
I can add any other info that is required for this.
Make sure you have kube-state-metrics deployed in your cluster in the kube-system namespace to make this work; Metricbeat does not ship with it by default.
Please refer to this for detailed deployment instructions.
If your kube-state-metrics is deployed to another namespace, Kubernetes cannot resolve the bare name. E.g., we have kube-state-metrics deployed to the monitoring namespace:
$ kubectl get pods -A | grep kube-state-metrics
monitoring kube-state-metrics-765c7c7f95-v7mmp 3/3 Running 17 10d
You could set the hosts option to the full name, including the namespace, like this:
- module: kubernetes
  enabled: true
  metricsets:
    - state_node
    - state_deployment
    - state_replicaset
    - state_statefulset
    - state_pod
    - state_container
    - state_cronjob
    - state_resourcequota
    - state_service
    - state_persistentvolume
    - state_persistentvolumeclaim
    - state_storageclass
  hosts: ["kube-state-metrics.<your_namespace>:8080"]
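The reason the namespace-qualified name works: in-cluster DNS resolves a bare service name only from the pod's own namespace; from anywhere else you need <service>.<namespace> (or the full form <service>.<namespace>.svc.cluster.local). A small helper to build the hosts value (a sketch; "monitoring" is just the example namespace from above):

```shell
# ksm_host SERVICE NAMESPACE: hosts entry for a service in another namespace.
ksm_host() {
  echo "$1.$2:8080"
}
ksm_host kube-state-metrics monitoring
```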
