I have a K8s cluster on GCP running Elasticsearch, and now I need to create a backup.
I've installed the GCS plugin on the pods of my StatefulSet and tried setting it up following this documentation:
https://github.com/elastic/elasticsearch/blob/master/docs/plugins/repository-gcs.asciidoc
When I try to configure a repository to use credentials stored in the keystore, I get the following response back:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_exception",
        "reason": "[my_backup] repository type [gcs] does not exist"
      }
    ],
    "type": "repository_exception",
    "reason": "[my_backup] repository type [gcs] does not exist"
  },
  "status": 500
}
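For reference, the keystore entry and the repository-creation request look roughly like this (bucket name and key path are placeholders):
# register the service account key in the Elasticsearch keystore
bin/elasticsearch-keystore add-file gcs.client.default.credentials_file /path/to/service-account.json
# then create the snapshot repository
curl -X PUT "localhost:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d'
{
  "type": "gcs",
  "settings": {
    "bucket": "my-backup-bucket",
    "client": "default"
  }
}'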
Any lead would be helpful, thanks!
I think the problem is that I couldn't install the plugin on the nodes themselves, so I installed it inside the pods instead, and that installation does not persist across pod restarts. To make the installation persist on K8s I would have needed to build a custom image that installs the plugin. A bit tricky, and since the plugin seems to be intended for GCE anyway, I decided to move from K8s to a managed instance group on GCE instead.
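For completeness, baking the plugin into a custom image only takes a couple of lines; a minimal Dockerfile sketch (base image tag is a placeholder):
# hypothetical custom Elasticsearch image with the GCS plugin pre-installed
FROM docker.elastic.co/elasticsearch/elasticsearch:7.10.2
RUN bin/elasticsearch-plugin install --batch repository-gcs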
I have 2 GCP projects, pool-infra and pool-dev. I use a GitHub Action to run an mvn command; the configuration looks like this...
- name: Authenticate with pure-infra project
  uses: 'google-github-actions/auth@v0.8.1'
  with:
    service_account: my@pool-infra.iam.gserviceaccount.com
    workload_identity_provider: projects/<pool-infra-id>/locations/global/workloadIdentityPools/....
    token_format: 'access_token'
    project_id: pure-platform-dev
- name: 'Set up Cloud SDK'
  uses: 'google-github-actions/setup-gcloud@v1'
  with:
    project_id: pool-app
- name: Run Package
  working-directory: my-service
  run: |
    gcloud config set project pool-app
    gcloud config get project
    mvn clean package jacoco:report
But I see an error that suggests the project ID is incorrect...
"errors": [
{
"domain": "usageLimits",
"message": "Cloud SQL Admin API has not been used in project <pool-infra-num> before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview?project=<pool-infra-num> then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.",
"reason": "accessNotConfigured",
"extendedHelp": "https://console.developers.google.com"
}
],
I would expect those project numbers to be for pool-app, not pool-infra. What am I missing? How do I properly set the project for the mvn build?
The error is coming from the JDBC connection pool when it tries to connect.
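A minimal sketch of what I mean by making the project explicit to the build rather than relying on gcloud config (whether the JDBC pool actually honours GOOGLE_CLOUD_PROJECT here is an assumption):
# hypothetical: pass the intended project to the JVM via the env var that the
# Google Cloud client libraries read for the default project
GOOGLE_CLOUD_PROJECT=pool-app mvn clean package jacoco:report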
I have the pipeline running and doing everything it needs to, basically.
Here is the Jenkinsfile; the job succeeds, but there is a failure in one of the stages.
It happens in the Helm stage ('Push to Helm registry'):
stages {
    stage("Push to Helm registry") {
        steps {
            script {
                rtServer (
                    id: 'helm-registry',
                    url: '-----------------',
                    credentialsId: 'sys-bm-artifactory'
                )
                rtUpload (
                    serverId: 'helm-registry',
                    spec: """{
                        "files": [
                            {
                                "pattern": "${app_name}-*.tgz",
                                "target": "${app_name}/"
                            }
                        ]
                    }"""
                )
            }
        }
    }
}
The credentials work OK.
The Helm chart is pushed correctly to the JFrog repository.
The error seems to relate to the Artifactory version.
I'm using rtServer and rtUpload for the connection between the Helm chart and the JFrog registry.
Error (in the middle of the log):
11:08:13 Failed sending usage report to Artifactory: java.io.IOException: Could not get Artifactory version.
Why am I getting this error? Where does it come from?
Still, I see all the artifacts stored in the registry.
OK, the problem with this error appearing in CI was probably that we were overwriting the same version every time.
So, to avoid this exception, just increase the version and make sure each push does not overwrite the same file (see the sketch below).
Note: also make sure the credentials and the JFrog URL are correct.
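For example, one way to guarantee a unique artifact per build is to derive the chart version from the Jenkins build number when packaging (chart path and base version are placeholders):
# package the chart with a version that changes on every build,
# so rtUpload never overwrites an existing file
helm package ./my-chart --version "1.0.${BUILD_NUMBER}"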
I took an Elasticsearch snapshot (snapshot_1) of 4 indices on an EC2 server "A" and copied the data to another EC2 server "B". I updated the path in elasticsearch.yml and restarted ES on server B. (I updated the path and restarted ES before putting the data on that path, but the path existed and had the required access.)
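By "updated the path" I mean the path.repo setting; a minimal sketch, with the directory as a placeholder for wherever the copied snapshot data lives:
# elasticsearch.yml on server B
path.repo: ["/mnt/backups/uat_dump"]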
Upon querying the index file in the snapshot directory I do see that snapshot_1 exists.
[elasticsearch@2ed2c2eaa5be uat_dump]$ cat index-0
{"snapshots":[{"name":"snapshot_1","uuid":"bBc6chD0TCKiQvuqn8gsow","state":1}],"indices":{"my_index_1":{"id":"J-c4ZvN0T02HeyQR8ueyZw","snapshots":["bBc6chD0TCKiQvuqn8gsow"]},"my_index_2":{"id":"ifn1Geq2RHe6wAMuGxpAMw","snapshots":["bBc6chD0TCKiQvuqn8gsow"]},"my_index_3":{"id":"X9dPrB3fRd-WrfNnZN69mQ","snapshots":["bBc6chD0TCKiQvuqn8gsow"]},"my_index_4":{"id":"9OjzD37WRROJFkfu-N7LNg","snapshots":["bBc6chD0TCKiQvuqn8gsow"]}}}
But when I am trying to restore the snapshot, I receive the error
{
  "error": {
    "root_cause": [
      {
        "type": "snapshot_restore_exception",
        "reason": "[my_backup:snapshot_1] snapshot does not exist"
      }
    ],
    "type": "snapshot_restore_exception",
    "reason": "[my_backup:snapshot_1] snapshot does not exist"
  },
  "status": 500
}
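For reference, these are roughly the calls involved (host and port are the defaults; adjust as needed):
# list the snapshots the cluster itself sees in the repository
curl -X GET "localhost:9200/_snapshot/my_backup/_all?pretty"
# the restore call that returns the error above
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore?pretty"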
There is also a file incompatible-snapshots, and its contents are
{"incompatible-snapshots":[]}
Any pointers on what I could be doing wrong?
I have AWS MSK set up and I am trying to sink records from MSK to Elasticsearch.
I am able to push data into MSK in JSON format.
I want to sink it to Elasticsearch.
I believe I have done all the setup correctly.
This is what I have done on the EC2 instance:
wget http://packages.confluent.io/archive/3.1/confluent-oss-3.1.2-2.11.tar.gz -P ~/Downloads/
tar -zxvf ~/Downloads/confluent-oss-3.1.2-2.11.tar.gz -C ~/Downloads/
sudo mv ~/Downloads/confluent-3.1.2 /usr/local/confluent
/usr/local/confluent/etc/kafka-connect-elasticsearch
After that I modified the connector properties file under kafka-connect-elasticsearch and set my Elasticsearch URL:
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=AWSKafkaTutorialTopic
key.ignore=true
connection.url=https://search-abcdefg-risdfgdfgk-es-ex675zav7k6mmmqodfgdxxipg5cfsi.us-east-1.es.amazonaws.com
type.name=kafka-connect
The producer sends messages in the format below:
{
  "data": {
    "RequestID": 517082653,
    "ContentTypeID": 9,
    "OrgID": 16145,
    "UserID": 4,
    "PromotionStartDateTime": "2019-12-14T16:06:21Z",
    "PromotionEndDateTime": "2019-12-14T16:16:04Z",
    "SystemStartDatetime": "2019-12-14T16:17:45.507000000Z"
  },
  "metadata": {
    "timestamp": "2019-12-29T10:37:31.502042Z",
    "record-type": "data",
    "operation": "insert",
    "partition-key-type": "schema-table",
    "schema-name": "dbo",
    "table-name": "TRFSDIQueue"
  }
}
I am a little confused about how Kafka Connect gets started here.
Do I need to start it separately, and if so, how can I start it?
I have also started the Schema Registry as below, which gave me an error.
/usr/local/confluent/bin/schema-registry-start /usr/local/confluent/etc/schema-registry/schema-registry.properties
When I do that I get the error below:
[2019-12-29 13:49:17,861] ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"listener_security_protocol_map":{"CLIENT":"PLAINTEXT","CLIENT_SECURE":"SSL","REPLICATION":"PLAINTEXT","REPLICATION_SECURE":"SSL"},"endpoints":["CLIENT:/
Please help.
As suggested in the answer, I upgraded the Kafka Connect (Confluent) version, but then I started getting the error below:
ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:63)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:210)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:61)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:72)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:39)
at io.confluent.rest.Application.createServer(Application.java:201)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:41)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Timed out trying to create or validate schema topic configuration
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:168)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:111)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:208)
... 5 more
Caused by: java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:161)
... 7 more
First, Confluent Platform 3.1.2 is fairly old. I suggest you get the version that aligns with your Kafka (MSK) version.
You start Kafka Connect using the appropriate connect-* scripts and properties located under the bin and etc/kafka folders.
For example:
/usr/local/confluent/bin/connect-standalone \
/usr/local/confluent/etc/kafka/kafka-connect-standalone.properties \
/usr/local/confluent/etc/kafka-connect-elasticsearch/quickstart.properties
If that works, you can move on to using the connect-distributed command instead.
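In either mode, the worker properties file passed as the first argument has to point at the MSK brokers rather than localhost, and the converters should match the plain-JSON messages shown above; a minimal sketch (broker addresses are placeholders):
# point the worker at the MSK bootstrap brokers
bootstrap.servers=b-1.mycluster.kafka.us-east-1.amazonaws.com:9092,b-2.mycluster.kafka.us-east-1.amazonaws.com:9092
# the producer sends plain JSON without an embedded schema
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false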
Regarding Schema Registry, you can search its GitHub issues for multiple people trying to get MSK to work, but the root issue is related to MSK not exposing a PLAINTEXT listener and the Schema Registry not supporting named listeners. (This may have changed since version 5.x.)
You could also try running Connect and Schema Registry containers in ECS / EKS rather than extracting tarballs on an EC2 machine (see the sketch below).
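For example, a rough sketch of a single containerized distributed Connect worker, assuming the standard confluentinc/cp-kafka-connect image (broker addresses, topic names, and the image tag are placeholders):
# run one distributed Connect worker as a container
docker run -d --name kafka-connect -p 8083:8083 \
  -e CONNECT_BOOTSTRAP_SERVERS=b-1.mycluster.kafka.us-east-1.amazonaws.com:9092 \
  -e CONNECT_GROUP_ID=connect-cluster \
  -e CONNECT_CONFIG_STORAGE_TOPIC=_connect-configs \
  -e CONNECT_OFFSET_STORAGE_TOPIC=_connect-offsets \
  -e CONNECT_STATUS_STORAGE_TOPIC=_connect-status \
  -e CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
  -e CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
  -e CONNECT_REST_ADVERTISED_HOST_NAME=localhost \
  confluentinc/cp-kafka-connect:5.3.1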
I am using Elasticsearch version 2.1.0. How can I find out the version of Curator being used?
While changing the settings (number of replicas), I am getting an exception:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Can't update [index.number_of_replicas] on closed indices [[.marvel-es-2016.12.12]] - can leave index in an unopenable state"
      }
    ]
  },
  "status": 400
}
Any clues?
You can get the Curator version by using the following command:
$ curator --version
I think you are trying to set the replica count on an index which is in a closed state.
Try setting the replicas after opening the index (see the sketch below).
Related information can be found here
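For example, roughly (index name taken from the error above; the replica count is a placeholder):
# open the closed index first
curl -X POST "localhost:9200/.marvel-es-2016.12.12/_open"
# then update the replica count
curl -X PUT "localhost:9200/.marvel-es-2016.12.12/_settings" -d '{"index": {"number_of_replicas": 1}}'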