I want to use Confluent in an AWS EC2 environment. How can I install it? I have tried the Confluent CLI locally and want to replicate this feature of connecting SQL to Kafka. Is there any documentation on this?
You can find instructions for installing Confluent Platform from DEB or YUM repositories in the Confluent docs. Otherwise, extract the same package you would have used locally.
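For example, a minimal sketch of a DEB-based install on an Ubuntu EC2 instance (the 5.0 version path and the Scala-suffixed package name are assumptions; check the Confluent docs for the exact repo and package for your release):
wget -qO - https://packages.confluent.io/deb/5.0/archive.key | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://packages.confluent.io/deb/5.0 stable main"
sudo apt-get update && sudo apt-get install confluent-platform-2.11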
There are AWS Quick Start templates and Ansible setups on Confluent's GitHub for standing up a full cluster. Or you could use EKS to run it on Kubernetes, if that's something you're comfortable with. There are probably some third-party Terraform repos out there as well.
For non-containerized, production use cases, you'd use systemctl to start services on independently running servers, rather than running every Confluent service on a single machine the way confluent start does.
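As a sketch, assuming the DEB/YUM packages installed their usual systemd units (verify the unit names on your hosts):
sudo systemctl start confluent-zookeeper        # on the ZooKeeper nodes
sudo systemctl start confluent-kafka            # on the broker nodes
sudo systemctl start confluent-schema-registry  # wherever Schema Registry runs
sudo systemctl enable confluent-kafka           # start on boot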
Sounds like you just want to run KSQL, but it's not clear if or where you have a running Kafka cluster.
You just need to download the Confluent ZIP from here:
https://www.confluent.io/download/
Unzip it into your desired folder.
To start Confluent services, go to the Confluent bin directory: /path/to/extracted/folder/confluent/bin
To start all Confluent services:
confluent start
To check service status:
confluent status
To stop the services:
confluent stop
I downloaded Confluent Platform on my local Windows machine and tried to start ZooKeeper, but it gives me the error below:
c:\confluent>.\bin\windows\zookeeper-server-start.bat .\etc\kafka\zookeeper.properties
Classpath is empty. Please build the project first e.g. by running 'gradlew jarAll'
Confluent does not test its products on Windows, last I heard.
The recommendation is to install WSL or use the Confluent Docker containers.
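For instance, a rough sketch using the Confluent images (the 5.0.0 tags and single-broker settings are illustrative; check Docker Hub for current versions):
docker run -d --name zookeeper -p 2181:2181 \
  -e ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:5.0.0
docker run -d --name kafka --link zookeeper -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:5.0.0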
I recently started learning the ELK stack and succeeded in parsing my XML files locally. But now I would like access to my server so I can get at all of my XML files (updated every 30 seconds).
I have the IP address of my server, and my question is: should I install Filebeat locally and configure my filebeat.yml to access the server, or should I install Filebeat on the server and point it at my local address?
Filebeat is a shipper, which collects, aggregates, and forwards logs to your desired output (Logstash, Elasticsearch, etc.).
It works as an agent, so you need to install it on every node you want to collect logs from. For instance, if you want to collect logs from your local machine, install Filebeat there; if you want to collect from the Logstash server itself, install Filebeat there. If you want to collect logs from both, Filebeat needs to be installed on both machines, with Logstash as the output.
Have a look at this illustration:
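For concreteness, a minimal filebeat.yml along those lines, written as a heredoc (the paths, host, and 30-second scan interval are placeholders for your setup; the filebeat.inputs key applies to Filebeat 6.3+):
cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/data/xml/*.xml   # wherever the XML files land on the server
    scan_frequency: 30s       # matches the 30-second update cadence
output.logstash:
  hosts: ["your-logstash-host:5044"]
EOF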
But when I tried to install filebeat on my server using
curl -L -O elastic.co/downloads/beats/filebeat/filebeat-6.3.1-amd64.deb
I got this message:
Could not resolve host: www.elastic.co; Name or service not known
The OS version of the server is: Linux version 3.10.0-693.17.1.el7.x86_64
I have developed Spring Boot applications. I have set up admin and RabbitMQ, as well as Spring Cloud Bus. When I hit the refresh endpoints of the applications, the properties are refreshed for each application.
Can anyone please help me with how to set up RabbitMQ in Kubernetes now? I did some research and found in a few articles that it needs to be deployed as a "StatefulSet" rather than a "Deployment": https://notallaboutcode.blogspot.de/2017/09/rabbitmq-on-kubernetes-container.html. I could not work out why exactly this needs to be done. Also, any useful link on deploying RabbitMQ in Kubernetes would help.
It depends on what you're looking to do and what tools you have available. I guess your current setup is much like the one described in http://www.baeldung.com/spring-cloud-bus. One approach to porting that to Kubernetes might be to get your setup working with docker-compose first, and then port that docker-compose file to Kubernetes deployment descriptors.
A simple way to deploy RabbitMQ in k8s would be to set up a Deployment using a RabbitMQ Docker image. An example of this is https://github.com/Activiti/activiti-cloud-examples/blob/fe732096b5a19de0ad44879a399053f6ae02b095/kubernetes/kubectl/infrastructure.yml#L17. (Notice that file isn't radically different from a docker-compose file, so you could port from one to the other.) But that won't persist data outside of the Pods, so if the cluster or the Pod(s) were to go down, you'd lose message data. The persistence is ephemeral.
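A bare-bones sketch of that Deployment approach, applied via a heredoc (the name, image tag, and single replica are illustrative; nothing here is persisted):
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3.7-management
          ports:
            - containerPort: 5672    # AMQP
            - containerPort: 15672   # management UI
EOF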
So to have non-ephemeral persistence, you could instead use a StatefulSet, as in the example you point to. Another example is https://wesmorgan.svbtle.com/rabbitmq-cluster-on-kubernetes-with-statefulsets
If you are using Helm (or can use it), then you could use the RabbitMQ Helm chart, which uses a StatefulSet.
But if your only reason for needing the bus is to trigger refreshes when property changes happen, then there are alternative paths available with Kubernetes. I'm guessing you need the hot reloads, in which case you could look at using https://github.com/fabric8io/spring-cloud-kubernetes#propertysource-reload instead. Or, if you need the config to come from git specifically, you could look at http://fabric8.io/guide/develop/configuration.html (If you didn't need the hot reloads or git, you could consider versioning your ConfigMaps and upgrading them with your application upgrades, as in https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a )
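If you went the PropertySource reload route, the switch is a bootstrap property; a minimal sketch, assuming the fabric8 spring-cloud-kubernetes starter is on your classpath (property names per its README), appended to your existing bootstrap.yml:
cat >> src/main/resources/bootstrap.yml <<'EOF'
spring:
  cloud:
    kubernetes:
      reload:
        enabled: true   # watch ConfigMaps/Secrets and refresh beans on change
        mode: event     # react to Kubernetes watch events rather than polling
EOF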
If you have installed Helm in your cluster:
helm install stable/rabbitmq
This will install a RabbitMQ server on your cluster. The following commands obtain the password and Erlang cookie; replace prodding-wombat-rabbitmq with whatever name Kubernetes decides to give the pod.
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode
To connect to the pod:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prodding-wombat-rabbitmq" -o jsonpath="{.items[0].metadata.name}")
Then port-forward to localhost so you can connect in your browser:
kubectl port-forward $POD_NAME 5672:5672 15672:15672
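A quick sanity check once the port-forward is running, assuming the chart's default username is user (verify against the chart's values) and the decoded password is in $RABBITMQ_PASSWORD:
curl -u user:"$RABBITMQ_PASSWORD" http://localhost:15672/api/overview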
I am trying to start SolrCloud as a Windows service using Procrun, but I cannot find a working solution for how to do it. Is there a known way to do this?
I have tried to set up SolrCloud using this article - https://opensourceconnections.com/blog/2013/08/27/solrcloud-as-a-windows-service/ - but it is not working.
Please try the NSSM tool (the Non-Sucking Service Manager), which fits your requirement of setting up SolrCloud on Windows as a service.
The detailed steps for implementation are listed in the link below:
How to Run Solr as a Service on Windows
For PowerShell with SolrCloud using a ZooKeeper ensemble, run:
&"$nssm" install solr $ScriptPath start -cloud -p 8984 -z """""""$solrSvrArrayCsv""""""" -f
$nssm is the path to your nssm.exe.
$ScriptPath is the path to your solr.cmd file.
$solrSvrArrayCsv is a comma-separated list of ZooKeeper ensemble nodes, e.g. "zookeeper1:2181,zookeeper2:2181,zookeeper3:2181". It must be wrapped in double quotes.
This works for me launching SolrCloud with SSL.
I have built a DC/OS local universe and installed it into a cluster behind a firewall; there is no internet access from the cluster. One of the packages installed in the universe is Flink. I installed DC/OS with the cluster_docker_registry_url variable pointing at a local Docker registry that holds a very small number of images; it is not a mirror of the main Docker Hub.
When I try to install the Flink package into DC/OS, I get 404 errors in the Mesos logs relating to missing Docker images, which I assume the package tries to download from the local Docker registry. The Flink cluster fails to start.
What Docker images does the Flink package try to download? I thought the build process of a local universe pulled down all dependencies, so there should be no external dependencies once it's built. What do I need to do to be able to install DC/OS packages when there is no internet access?
That depends on the Scala version you are using:
Scala 2.10: mesosphere/dcos-flink:1.2.0-1.4
Scala 2.11: mesosphere/dcos-flink-2-11:1.2.0-1.4
See here.
Furthermore, it requires openjdk:8-jre; see here.
For more details, feel free to refer to the universe specification for the Apache Flink service (or ping me directly):
https://github.com/mesosphere/universe/blob/version-3.x/repo/packages/F/flink/1/
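Given that list, one way to unblock an air-gapped install is to mirror those images into the registry that cluster_docker_registry_url points at. A sketch, with registry.internal:5000 standing in for your local registry host:
# run the pulls on a machine that does have internet access
for img in mesosphere/dcos-flink:1.2.0-1.4 mesosphere/dcos-flink-2-11:1.2.0-1.4 openjdk:8-jre; do
  docker pull "$img"
  docker tag "$img" "registry.internal:5000/$img"
  docker push "registry.internal:5000/$img"   # push into the firewalled registry
done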