I am trying to configure a cluster between two Apache Traffic Server instances, version 7.1.1, following the instructions at http://www.divedeepstaylong.com/admin-guide/configuration/multi-server-caches.en.html#full-clustering.
This is my records.config configuration for the cluster:
#######Cluster
LOCAL proxy.local.cluster.type INT 1
CONFIG proxy.config.cluster.ethernet_interface STRING eth0
CONFIG proxy.config.cluster.rsport INT 8088
CONFIG proxy.config.http.cache.cluster_cache_local INT 1
CONFIG proxy.config.proxy_name STRING atsvsobjetos
#
CONFIG proxy.config.body_factory.template_sets_dir STRING etc/trafficserver/body_factory
CONFIG proxy.config.ssl.server.ticket_key.filename STRING NULL
CONFIG proxy.config.log.config.filename STRING logging.config
CONFIG proxy.config.cache.ip_allow.filename STRING ip_allow.config
TCP port 8080 and UDP ports 8086, 8088, and 8089 are open in the firewall, but when I query the cluster, this is the answer:
# bin/traffic_ctl metric get proxy.process.cluster.nodes
proxy.process.cluster.nodes 1
What am I doing wrong? Can somebody help me? I am completely stuck!
Thanks in advance!
Clustering was removed from ATS in 7.0, so those records.config settings are silently ignored on 7.1.1. You must now configure each node individually.
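If you want to clean out the dead settings, a minimal sketch, assuming the stock config path /etc/trafficserver/records.config (adjust for your layout):

# drop every cluster-related line shown in the question, including the #######Cluster comment
sudo sed -i '/cluster/d' /etc/trafficserver/records.config
# tell the running server to restart and pick up the change
sudo traffic_ctl server restart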
I used bitnami/kafka to deploy Kafka on Minikube. A describe of the pod kafka-0 says that the server address is:
KAFKA_CFG_ADVERTISED_LISTENERS:INTERNAL://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9093,CLIENT://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9092
My Kafka address is set like this in my Spring config properties:
spring.kafka.bootstrap-servers=["kafka-0.kafka-headless.default.svc.cluster.local:9092"]
But when I try to send a message I get the following error:
Failed to construct kafka producer] with root cause:
org.apache.kafka.common.config.ConfigException:
Invalid url in bootstrap.servers: ["kafka-0.kafka-headless.default.svc.cluster.local:9092"]
Note that this works when I run Kafka locally and set the bootstrap-servers address to localhost:9092.
How do I fix this error? What is the correct Kafka URL to use, and where do I find it? Thanks!
The Minikube network is separate from the host network, so you need a bridge.
The advertised listener lives inside the Minikube network and is not resolvable from the host.
You could set up a Service and an Ingress in Minikube pointing to your Kafka, then map the advertised hostname to the Ingress IP address in your hosts file, as sketched below.
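For quick testing from the host there is also a lighter-weight bridge than a full Ingress; a sketch of the same idea, reusing the pod name from the question (the 127.0.0.1 mapping assumes you forward to localhost):

# forward the broker port from the pod to the host
kubectl port-forward pod/kafka-0 9092:9092 &
# make the advertised hostname resolve on the host
echo "127.0.0.1 kafka-0.kafka-headless.default.svc.cluster.local" | sudo tee -a /etc/hosts

The hosts entry matters because the broker hands back the advertised hostname in its metadata, and the client must be able to resolve it.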
spring.kafka.bootstrap-servers needs valid server hostnames along with port numbers, comma-separated:
hostname-1:port,hostname-2:port
["kafka-0.kafka-headless.default.svc.cluster.local:9092"] does not look like one!
I am trying to access Elasticsearch via my static IP address, but it's not working.
What did I try?
I created a Bitnami Elasticsearch VM instance from the GCP Marketplace
I assigned a static IP to the same VM
I set network.host to 0.0.0.0 in the elasticsearch.yml file
I added my static IP to network.publish_host in the elasticsearch.yml file
I added a firewall rule to allow all ports, with 0.0.0.0/0 as the source filter
Now, when trying to access Elasticsearch using http://_my_static_ip:9200, I get nothing; the request fails. What am I missing here? Any help would be appreciated. Thanks!
The issue was my GCP instance using an IPv6 address. I didn't know about this; a developer on Fiverr pointed it out. Anyone having the same issue with Bitnami's GCP deployment needs to add the following line:
-Djava.net.preferIPv4Stack=true
to the following file:
/opt/bitnami/elasticsearch/config/jvm.options
After that, restart your Elasticsearch service using the following command:
sudo /opt/bitnami/ctlscript.sh restart
That should fix the issue, provided you have proper firewall rules set up and have added the proper IPs to the elasticsearch.yml config file. See the What did I try? section of the original question.
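Once restarted, you can verify from outside the VM (substituting your actual static IP for the question's placeholder):

curl http://_my_static_ip:9200

A healthy node answers with a small JSON document containing the node name and Elasticsearch version.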
I want to run Elasticsearch remotely on a gcloud VM; it is configured to run at 127.0.0.1 on port 9200. How can I access it from a website outside this VM? If I change network.host to 0.0.0.0 in the yml file, even port 9200 becomes inaccessible. How do I overcome this problem?
Changed network.host: [_site_, _local_, _global_]
_site_ = internal IP given by the Google Cloud VM,
_local_ = 127.0.0.1,
_global_ = found using curl ifconfig.me.
Opened a specific port (9200) and tried to connect with the global IP address.
curl to the global IP gives:
>Output: Failed to connect to (_global_ ip) port 9200: Connection refused.
So set network.host: 0.0.0.0, then allow ports 9200 and 9201 and restart the Elasticsearch service. If you are using Ubuntu, run sudo service elasticsearch restart, then check with curl -XGET 'http://localhost:9200?pretty'. Let me know if you are still facing any issues.
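If curl from outside still gets connection refused after that, the block is usually the VPC firewall rather than the VM itself; a sketch of a GCP rule (the rule name is an assumption, and 0.0.0.0/0 should be narrowed in production):

gcloud compute firewall-rules create allow-elasticsearch \
    --allow=tcp:9200,tcp:9201 \
    --source-ranges=0.0.0.0/0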
Use the following configuration for elasticsearch.yml:
network.host: 0.0.0.0
action.auto_create_index: false
index.mapper.dynamic: false
Solved this problem by going through the logs: the public IP address is remapped to the internal IP address, so network.host can't be set to the external IP directly. The elasticsearch.yml config is as follows:
network.host: xx.xx.xxx.xx (set to the internal IP given by Google)
http.cors.enabled: true
http.cors.allow-origin: "*" (do not use * in production; it's a security issue)
discovery.type: single-node (in my case, to run standalone rather than in a cluster)
Now this sandboxed version can be accessed from outside the VM using the external IP address given by Google.
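To find the external IP Google assigned, you can ask gcloud (the instance name my-es-vm is an assumption):

gcloud compute instances describe my-es-vm \
    --format='get(networkInterfaces[0].accessConfigs[0].natIP)'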
ifconfig shows
inet 192.168.10.1
I can access
http://localhost/
http://127.0.0.1/
http://192.168.10.1
They are all the same.
I can also access the Neo4j and Elasticsearch ports at the following URLs:
Elasticsearch
http://127.0.0.1:9200/
http://localhost:9200/
Neo4j
http://127.0.0.1:7474/browser/
http://localhost:7474/browser/
But ports 9200 and 7474 are not working for 192.168.10.1:
http://192.168.10.1:9200
http://192.168.10.1:7474
There must be something I need to do to make ports 7474 (Neo4j) and 9200 (Elasticsearch) work for 192.168.10.1, but I don't know what.
Please advise, thanks!
I figured it out.
Neo4j
To make Neo4j listen on an IP other than localhost, in my case
http://192.168.10.1:7474
In the neo4j.conf file, uncomment the following line:
# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
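After editing neo4j.conf, restart Neo4j so the change takes effect; a sketch, assuming a systemd-managed package install:

sudo systemctl restart neo4j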
Elasticsearch
Modify elasticsearch.yml and add the following line:
network.host: 0.0.0.0
Then restart Elasticsearch.
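Both services should then answer on the LAN address from the question:

curl http://192.168.10.1:9200/
curl http://192.168.10.1:7474/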
Hi, I am configuring FTP on an Amazon EC2 micro Linux instance. I have successfully installed and configured vsftpd on the instance, and I have also created a user for FTP. But when I FTP to my instance, it gives the following error:
Error:
"Connection reset by by peer"
Can anyone help me with this? What am I doing wrong, or what am I missing?
Note: I have configured the instance firewall as follows:
Custom TCP rule:
Port range: 20 - 21
Source: 0.0.0.0/0
Any help is greatly appreciated. Thanks in advance.
I'm using an instance for VPN and FTP (vsftpd) servers. Add this to /etc/vsftpd/vsftpd.conf:
pasv_addr_resolve=NO|YES
pasv_address=Your Elastic IP Address|Hostname
pasv_min_port=2020
pasv_max_port=2020
Here pasv_address is your Elastic IP address (set pasv_addr_resolve=NO), or you can use a DynDNS hostname and set pasv_addr_resolve=YES accordingly. Then open ports 2020 and 21 in the firewall. With this configuration you can use the FTP server even in passive mode (useful when incoming connections are blocked on your local PC).
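A filled-in sketch of that block (203.0.113.10 is a placeholder Elastic IP; substitute your own):

pasv_enable=YES
pasv_addr_resolve=NO
pasv_address=203.0.113.10
pasv_min_port=2020
pasv_max_port=2020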
All vsftpd config options are described here.