Spring RabbitMQ convertAndSend is not working properly

I am using this code to queue data into RabbitMQ: https://www.javainuse.com/spring/spring-boot-rabbitmq-hello-world
I configured the following properties correctly to match the RabbitMQ configuration
Host
Username
Password
Exchange
Routing key
Queue
But RabbitMQSender#send (i.e. rabbitTemplate.convertAndSend(exchange, routingkey, company);) is not queuing any data into RabbitMQ, and at the same time it is not returning any error.
I tried changing the username or password to an incorrect one and got not_authorized, so the connection with the correct username/password/queue/exchange/routing key seems fine, but it is not doing anything.
I tried to send an event via curl and it works correctly; the event is queued in RabbitMQ:
curl -v -u username:pwd -H "Accept: application/json" -H "Content-Type: application/json" -X POST -d '{
  "properties": {},
  "routing_key": "my-routingkey",
  "payload": "hi",
  "payload_encoding": "string"
}' localhost:15672/api/exchanges/%2F/my-exchange/publish
Does Spring's RabbitTemplate#convertAndSend call this API (localhost:15672/api/exchanges/%2F/my-exchange/publish) behind the scenes?
If not, what do I need to change in my code?

I was trying to queue events into a remote RabbitMQ server that was not configured properly in Kubernetes: it was missing the storage field (storage: 10Gi) and RabbitMQ was failing silently ...
spec:
  replicas: 1
  image: rabbitmq:3.10.7-management
  persistence:
    storageClassName: managed-csi
    storage: 10Gi

Please check whether an exchange with the correct name has been created.
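If the exchange does not exist, one way to make sure it (plus the queue and binding) gets created is to declare them as beans, which Spring Boot's auto-configured RabbitAdmin will declare on the first connection. A minimal sketch; the property keys below are placeholders, use whatever keys your application.properties actually defines:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Value("${rabbitmq.queue}")        // placeholder property key
    private String queueName;

    @Value("${rabbitmq.exchange}")     // placeholder property key
    private String exchangeName;

    @Value("${rabbitmq.routingkey}")   // placeholder property key
    private String routingKey;

    @Bean
    Queue queue() {
        return new Queue(queueName, true);        // durable queue
    }

    @Bean
    DirectExchange exchange() {
        return new DirectExchange(exchangeName);  // must match the exchange passed to convertAndSend
    }

    @Bean
    Binding binding(Queue queue, DirectExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(routingKey);
    }
}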

How to authenticate REST requests to MinIO

I am experimenting with MinIO. I am trying to send REST API calls directly to MinIO on port 9000. So far, I understand that authentication works the same way as Amazon S3 API authentication - correct? Unfortunately, I am also new to S3.
Here are my questions:
What does a request header to MinIO look like?
I read that I also need a signature that needs to be calculated somehow. How is this calculation done?
I do my experiments on Windows 10 and run MinIO in a Docker Container. My experiments target "http://localhost:9000/"
So far I only get a 403 error for a GET request:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied.</Message>
  <Resource>/</Resource>
  <RequestId>173BACCB19FAF4C4</RequestId>
  <HostId>d200d104-da55-44e2-a94d-ce68ee959272</HostId>
</Error>
I read through the S3 API Reference "https://docs.aws.amazon.com/pdfs/AmazonS3/latest/API/s3-api.pdf#Type_API_Reference" but, to be honest, I got lost.
Can someone please help me out?
You need to set the authentication values.
URL:
GET http://localhost:9099/{bucket name}/{file name}
In Postman, select the Authorization tab, choose the type AWS Signature, and fill in:
Access Key: copy from the MinIO UI
Secret Key: copy from the MinIO UI
Service name: s3
(Screenshots: Postman Authorization settings; MinIO browser > Create Key, which shows the Access Key / Secret Key pair.)
Local docker compose file (save as docker-compose.yml):
version: "3"
services:
  minio-service:
    image: minio/minio:latest
    volumes:
      - ./storage/minio:/data
    ports:
      - "9000:9000"
      - "9099:9099"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: admin-strong
    command: server --address ":9099" --console-address ":9000" /data
    restart: always # necessary since it's failing to start sometimes
Launching the container:
$ docker compose up
Console URL:
http://localhost:9000/
The credentials match those in docker-compose.yml:
user name: admin
password: admin-strong
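If you would rather not assemble the Signature V4 headers by hand, an SDK can compute them for every request. A minimal sketch with the MinIO Java client, reusing the endpoint and credentials from the compose file above (the io.minio dependency is assumed to be on the classpath):

import io.minio.MinioClient;
import io.minio.messages.Bucket;

public class MinioAuthExample {
    public static void main(String[] args) throws Exception {
        // API endpoint (:9099) and root credentials as defined in docker-compose.yml
        MinioClient client = MinioClient.builder()
                .endpoint("http://localhost:9099")
                .credentials("admin", "admin-strong")
                .build();

        // Each request is signed with AWS Signature V4 by the client
        for (Bucket bucket : client.listBuckets()) {
            System.out.println(bucket.name());
        }
    }
}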

How to decode JSON in ElasticSearch load pipeline

I set up Elasticsearch on AWS and I am trying to load application logs into it. The twist is that the application log entries are in JSON format, like
{"EventType":"MVC:GET:example:6741/Common/GetIdleTimeOut","StartDate":"2021-03-01T20:46:06.1207053Z","EndDate":"2021-03-01","Duration":5,"Action":{"TraceId":"80001266-0000-ac00-b63f-84710c7967bb","HttpMethod":"GET","FormVariables":null,"UserName":"ZZZTHMXXN"} ...}
So, I am trying to unwrap it. The Filebeat docs suggest that there is a decode_json_fields processor; however, I am getting the message field in Kibana as a single JSON string; nothing is unwrapped.
I am new to Elasticsearch, but I am not going to use that as an excuse not to do analysis first - only as an explanation that I am not sure which information is helpful for answering the question.
Here is filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/opt/logs/**/*.json
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]
output.logstash:
  hosts: ["localhost:5044"]
And here is the Logstash configuration file:
input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => ["https://search-blah-blah.us-west-2.es.amazonaws.com:443"]
    ssl => true
    user => "user"
    password => "password"
    index => "my-logs"
    ilm_enabled => false
  }
}
I am still trying to understand the filtering and grok parts of Logstash, but it seems that it should work the way it is. Also, I am not sure where the actual message tag comes from (probably from Logstash or Filebeat), but it seems irrelevant as well.
UPDATE: AWS documentation doesn't give an example of just loading through filebeat, without logstash.
If I don't use logstash (just FileBeat) and have the following section in filebeat.yml:
output.elasticsearch:
  hosts: ["https://search-bla-bla.us-west-2.es.amazonaws.com:443"]
  protocol: "https"
  #index: "mylogs"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "username"
  password: "password"
I am getting the following errors:
If I use index: "mylogs", I get: setup.template.name and setup.template.pattern have to be set if index name is modified
And if I don't use index (where would it go in ES then?), I get:
Failed to connect to backoff(elasticsearch(https://search-bla-bla.us-west-2.es.amazonaws.com:443)): Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license from the /_license endpoint, Filebeat requires the default distribution of Elasticsearch. Please make the endpoint accessible to Filebeat so it can verify the license.: unauthorized access, could not connect to the xpack endpoint, verify your credentials
If transmitting via Logstash works in general, add a filter block as Val proposed in the comments and use the json plugin/filter: elastic.co/guide/en/logstash/current/plugins-filters-json.html - it automatically parses the JSON into Elasticsearch fields.
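A minimal sketch of such a filter block, assuming the JSON document arrives in the message field as shown above:

filter {
  json {
    source => "message"   # parse the JSON string held in "message" into individual event fields
    # target => "app"     # optional: nest the parsed fields under "app" instead of the event root
  }
}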

Unable to use elasticsearch sink connector (kafka-connect)

I'm currently trying to start an Elasticsearch sink connector on a Kafka Connect cluster (distributed mode).
This cluster is deployed in Kubernetes using the Helm charts provided by Confluent, with some tweaks.
Here are the relevant parts:
For values.yaml:
configurationOverrides:
  "plugin.path": "/usr/share/java,/usr/share/confluent-hub-components"
  "key.converter": "org.apache.kafka.connect.storage.StringConverter"
  "value.converter": "org.apache.kafka.connect.json.JsonConverter"
  "key.converter.schemas.enable": "false"
  "value.converter.schemas.enable": "false"
  "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter"
  "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter"
  "config.storage.replication.factor": "3"
  "offset.storage.replication.factor": "3"
  "status.storage.replication.factor": "3"
  "security.protocol": SASL_SSL
  "sasl.mechanism": SCRAM-SHA-256
And for the kube cluster part:
releases:
  - name: kafka-connect
    tillerless: true
    tillerNamespace: qa3-search
    chart: ../charts/cp-kafka-connect
    namespace: qa3-search
    values:
      - replicaCount: 2
      - configurationOverrides:
          config.storage.topic: kafkaconnectKApp_connect-config_private_json
          offset.storage.topic: kafkaconnectKApp_connect-offsets_private_json
          status.storage.topic: kafkaconnectKApp_connect-statuses_private_json
          connect.producer.client_id: "connect-worker-producerID"
          groupId: "kafka-connect-group-ID"
          log4j.root.loglevel: "INFO"
          bootstrap_servers: "SASL_SSL://SOME_ACCESSIBLE_URL:9094"
          client.security.protocol: SASL_SSL
          client.sasl.mechanism: SCRAM-SHA-256
      - prometheus:
          jmx:
            enabled: false
      - ingress:
          enabled: true
          hosts:
            - host: kafka-connect.qa3.k8s.XXX.lan
              paths:
                - /
      - cp-schema-registry:
          url: "https://SOME_ACCESSIBLE_URL"
Then I load the Elasticsearch sink connector as follows:
curl -X POST -H 'Content-Type: application/json' http://kafka-connect.qa3.k8s.XXX.lan/connectors -d '{
  "name": "similarads3",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "consumer.interceptor.classes": "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor",
    "topics": "SOME_TOPIC_THAT_EXIST",
    "topic.index.map": "SOME_TOPIC_THAT_EXIST:test_similar3",
    "connection.url": "http://vqa38:9200",
    "batch.size": 1,
    "type.name": "similads",
    "key.ignore": true,
    "errors.log.enable": true,
    "errors.log.include.messages": true,
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "SOME_ACCESSIBLE_URL",
    "schema.ignore": true
  }
}' -vvv
Moreover, I'm loading the user and password for broker auth via environment variables, and I'm pretty sure it is connecting with the right ACLs...
What is troubling me is that no index is created when the connector starts, and there is no error whatsoever in kafka-connect's logs... And it says everything has started:
Starting connectors and tasks using config offset 68
When running a curl on /connectors/similarads3/status, everything is running, without errors.
So it seems like I overlooked something, but I can't figure out what is missing.
When I check consumer lag on this particular topic, it seems like no messages were ever consumed.
If there is not enough information, I'm able to provide more.
Does someone have an idea?
EDIT: I should have mentioned that I tried to configure it with a topic that does not exist: again, no error in the logs. (I don't know how to interpret this.)
EDIT 2: This issue is solved.
Actually, we found the issue, and it appears that I did overlook something: in order to read from a topic protected by ACLs, you have to provide the SASL configuration for both the connector and the sink consumer.
So just duplicating the configuration prefixed with consumer. fixed the problem (see the sketch below).
However, I'm still surprised that no logs point to this.
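For reference, a sketch of what that duplication could look like in the worker's configurationOverrides (the credential/JAAS settings, which this setup loads via environment variables, need the same consumer.-prefixed duplicates; they are only indicated as a comment here):

configurationOverrides:
  "security.protocol": SASL_SSL
  "sasl.mechanism": SCRAM-SHA-256
  # duplicated so the sink connector's consumer can authenticate too:
  "consumer.security.protocol": SASL_SSL
  "consumer.sasl.mechanism": SCRAM-SHA-256
  # ... plus a consumer.-prefixed copy of the sasl.jaas.config / credentials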
We had issues trying to use the topic.index.map property. Even if you get it working, there is a note in the docs that it is deprecated:
topic.index.map
This option is now deprecated. A future version may remove it completely. Please use single message transforms, such as RegexRouter, to map topic names to index names.
I'd try using the RegexRouter to accomplish this instead:
"transforms": "renameTopicToIndex",
"transforms.renameTopicToIndex.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.renameTopicToIndex.regex": ".*",
"transforms.renameTopicToIndex.replacement": "test_similar3"

Kubernetes Endpoint created for Kafka but not reflecting in POD

In my Kubernetes cluster I have created an Endpoints object pointing to a Kafka cluster. The Endpoint was created successfully.
Name - kafka
Endpoint - X.X.X.X:9092
In my Spring Boot application's deployment YAML I have an environment variable BROKER_IP, which I pointed at the Endpoint:
env:
  - name: BROKER_IP
    value: kafka
The pod is in an Error state. In my bootstrap-server setting I am getting kafka and not the actual Endpoint that was created. Any thoughts?
UPDATE - I just tried kafka:9092 and it worked. So I am wondering: does the Endpoint map to the IP only and not the port? Is my understanding correct?
Is it possible that you forgot to create the Service object matching the Endpoints? Because you are providing the IP/port pairs yourself, the Service would need to be selectorless.
This works for me:
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
  - addresses: [{ip: "1.2.3.4"}]
    ports: [{port: 9092}]
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  ports: [{port: 9092}]
Testing it:
$ kubectl run kafka-dns-test --image=busybox --attach --rm --restart=Never -- nslookup kafka
If you don't see a command prompt, try pressing enter.
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: kafka.default.svc.cluster.local
Address: 10.96.220.40
Successful lookup, ignore extra *** Can't find xxx: No answer messages
Also, because there is a Service object you get some environment variables in your Pods (without having to declare them):
KAFKA_PORT='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP_ADDR='10.96.220.40'
KAFKA_PORT_9092_TCP_PORT='9092'
KAFKA_PORT_9092_TCP_PROTO='tcp'
KAFKA_SERVICE_HOST='10.96.220.40'
KAFKA_SERVICE_PORT='9092'
But the most flexible way to use a Service is still to use the dns name (kafka in this case).
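Tying that back to the deployment in the question, the environment variable then simply carries the DNS name plus the port (the Service only publishes the port for discovery; a plain string env value has to include it explicitly). A sketch:

env:
  - name: BROKER_IP
    value: "kafka:9092"   # Service DNS name + port, as confirmed working in the question's update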

MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled

I continuously get this error in the var/reports file.
I tried the solution from the link below, but it is still not fixed.
Can anyone please help me with this, as it is now critical:
MISCONF Redis is configured to save RDB snapshots
I have written this same answer here; posting it here as well.
TL;DR: Your Redis is not secure. Use the redis.conf from this link to secure it.
Long answer:
This is possibly due to an unsecured redis-server instance. The default redis image in a docker container is unsecured.
I was able to connect to redis on my webserver using just redis-cli -h <my-server-ip>
To sort this out, I went through this DigitalOcean article and many others and was able to close the port.
You can pick a default redis.conf from here
Then update your docker-compose redis section to the following (update file paths accordingly):
redis:
  restart: unless-stopped
  image: redis:6.0-alpine
  command: redis-server /usr/local/etc/redis/redis.conf
  env_file:
    - app/.env
  volumes:
    - redis:/data
    - ./app/conf/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"
The path to redis.conf in command and volumes should match.
Rebuild redis, or all the services, as required.
Try redis-cli -h <my-server-ip> to verify (it stopped working for me).
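For reference, the directives in redis.conf that do the actual securing are along these lines (a minimal sketch; the linked default file contains many more options):

# redis.conf (minimal sketch)
bind 127.0.0.1                               # listen only on localhost / the internal interface
protected-mode yes                           # refuse outside connections unless auth is configured
requirepass change-me-to-a-strong-password   # clients must AUTH before issuing commands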
