Consul + Fabio microservices

Can you please clarify the main steps to run Consul on the local machine (not in dev mode(!)) so that one microservice can connect to another, if possible using Fabio as a load balancer.
Should I create a datacenter with ACLs, etc.?
There is a lot of documentation, but it's still not clear where to start.
Thanks a lot!

Here's a step-by-step example of how you can configure Fabio to route to a microservice which is registered in a Consul server environment that is protected by ACLs.
First, you'll need to create a few configuration files for Consul and Fabio.
$ tree
.
├── conf.d
│   ├── config.hcl
│   └── web.hcl
├── fabio-policy.hcl
└── fabio.properties
Here's a brief overview of what we'll add to these files.
conf.d - the Consul server configuration directory. It contains config.hcl, which defines the Consul server configuration, and web.hcl, which is the service definition for our example web service.
fabio-policy.hcl - The Consul ACL policy which will be assigned to the token created for Fabio LB.
fabio.properties - The Fabio configuration file.
Create configuration files
conf.d/config.hcl
This is a basic single-node Consul server (three or more servers are recommended for production) with ACLs enabled.
# Configure the Consul agent to operate as a server
server = true
# Expect only one server member in this cluster
bootstrap_expect = 1
# Persistent storage path. Should not be under /tmp for production envs.
data_dir = "/tmp/consul-fabio-so"
acl {
  # Enable ACLs
  enabled = true
  # Set default ACL policy to deny
  default_policy = "deny"
}
# Enable the Consul UI
ui_config {
  enabled = true
}
conf.d/web.hcl
This is a service definition which registers a service named "web" into the Consul catalog.
service {
  # Define the name of the service
  name = "web"
  # Specify the listening port for the service
  port = 8080
  # Register an HTTP health check (required by Fabio) for this service.
  # By default Fabio will only route to healthy services in the Consul catalog.
  check {
    id = "web"
    http = "http://localhost:8080"
    interval = "10s"
    timeout = "1s"
  }
  # Fabio dynamically configures itself based on tags assigned to services in
  # the Consul catalog. By default, 'urlprefix-' is the prefix for tags which
  # define routes. Services which define routes publish one or more tags with
  # the host/path routes which they serve. These tags must have this prefix to
  # be recognized as routes.
  #
  # Configure Fabio to route requests for '/' to our backend service.
  tags = [
    "urlprefix-/"
  ]
}
fabio-policy.hcl
This ACL policy allows Fabio to register itself into the Consul catalog, discover backend services, and read additional Fabio configuration from the KV store. This policy will be created in Consul after bootstrapping the ACL system.
# Allow Fabio to discover which agent it is running on.
# Can be scoped to specific node(s) if additional security is required.
agent_prefix "" {
  policy = "read"
}
# Allow Fabio to look up any service in Consul's catalog.
service_prefix "" {
  policy = "read"
}
# Allow Fabio to look up nodes so that it can resolve service endpoints to the
# correct node IP.
node_prefix "" {
  policy = "read"
}
# Allow Fabio to register itself as a service in Consul.
# This is used so that Fabio instances are discoverable in Consul's catalog,
# and so that Consul can execute health checks against Fabio.
service "fabio" {
  policy = "write"
}
# Allow Fabio to read configuration overrides from the KV store.
# https://github.com/fabiolb/fabio/wiki/Routing#manual-overrides
key_prefix "fabio/config" {
  policy = "read"
}
fabio.properties
This is the configuration file for Fabio.
Configures the ACL token to use when authenticating to Consul.
registry.consul.token = "<token, to be created later>"
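For reference, two other commonly adjusted fabio.properties keys and their defaults (shown here only for orientation; this example only requires the token to be set):
# Address of the local Consul agent (default)
registry.consul.addr = localhost:8500
# Address Fabio's HTTP proxy listens on (default)
proxy.addr = :9999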
Start and configure Consul
Start the Consul server (not in dev mode).
$ consul agent -config-dir=conf.d
==> Starting Consul agent...
Version: '1.9.5'
Node ID: 'f80693eb-0f47-1f9f-e8cc-063ad28ca8da'
Node name: 'b1000.local'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: true)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 10.0.0.21 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
...
Bootstrap the ACL system. This creates a management token with privileges
for the entire cluster. Save this information.
$ consul acl bootstrap
AccessorID: e135b234-2227-71fe-1999-ffb75c659110
SecretID: ef475ff8-5f92-6f8e-0a59-2ad3f8ed8dda
Description: Bootstrap Token (Global Management)
Local: false
Create Time: 2021-06-05 14:26:07.02198 -0700 PDT
Policies:
00000000-0000-0000-0000-000000000001 - global-management
Set the CONSUL_HTTP_TOKEN environment variable to the value of our secret ID.
This will be used for subsequent administrative commands.
$ export CONSUL_HTTP_TOKEN="ef475ff8-5f92-6f8e-0a59-2ad3f8ed8dda"
Create the ACL policy for Fabio.
$ consul acl policy create -name=fabio-policy -rules=@fabio-policy.hcl
<output snipped>
...
Create a token for Fabio which utilizes this policy.
$ consul acl token create -description="Token for Fabio LB" -policy-name="fabio-policy"
AccessorID: 474db6b0-73b0-3149-dafc-a50bab41b574
SecretID: b6490a01-89a8-01a1-bbdf-5c7e9898d6ea
Description: Token for Fabio LB
Local: false
Create Time: 2021-06-05 15:13:09.124182 -0700 PDT
Policies:
fc0c6a84-8633-72cc-5d59-4e0e60087199 - fabio-policy
Update fabio.properties and set the token's SecretID.
# registry.consul.token configures the ACL token for Consul.
registry.consul.token = b6490a01-89a8-01a1-bbdf-5c7e9898d6ea
Start the web server and Fabio
Start the backend web server so that it can accept connections. For this example, I'm going to use devd.
This command instructs devd to listen on port 8080 on all IPs on the system, and serve content from the current directory.
$ devd --all --port=8080 .
15:21:46: Route / -> reads files from .
15:21:46: Listening on http://devd.io:8080 ([::]:8080)
Next, start Fabio.
$ fabio -cfg fabio.properties
2021/06/05 15:22:40 [INFO] Setting log level to INFO
2021/06/05 15:22:40 [INFO] Runtime config
<snip>
...
2021/06/05 15:22:40 [INFO] Version 1.5.14 starting
2021/06/05 15:22:40 [INFO] Go runtime is go1.16.2
2021/06/05 15:22:40 [INFO] Metrics disabled
2021/06/05 15:22:40 [INFO] Setting GOGC=100
2021/06/05 15:22:40 [INFO] Setting GOMAXPROCS=16
2021/06/05 15:22:40 [INFO] consul: Connecting to "localhost:8500" in datacenter "dc1"
2021/06/05 15:22:40 [INFO] Admin server access mode "rw"
2021/06/05 15:22:40 [INFO] Admin server listening on ":9998"
2021/06/05 15:22:40 [INFO] Waiting for first routing table
2021/06/05 15:22:40 [INFO] consul: Using dynamic routes
2021/06/05 15:22:40 [INFO] consul: Using tag prefix "urlprefix-"
2021/06/05 15:22:40 [INFO] consul: Watching KV path "/fabio/config"
2021/06/05 15:22:40 [INFO] consul: Watching KV path "/fabio/noroute.html"
2021/06/05 15:22:40 [INFO] HTTP proxy listening on :9999
2021/06/05 15:22:40 [INFO] Access logging disabled
2021/06/05 15:22:40 [INFO] Using routing strategy "rnd"
2021/06/05 15:22:40 [INFO] Using route matching "prefix"
2021/06/05 15:22:40 [INFO] Config updates
+ route add web / http://10.0.0.21:8080/
2021/06/05 15:22:40 [INFO] consul: Registered fabio as "fabio"
...
While some of the output has been omitted, we can see that Fabio is listening on port 9999, is successfully watching Consul's KV for configuration, has successfully discovered our "web" service, and registered itself into Consul's catalog.
If you connect to Fabio at http://localhost:9999, you should see a directory listing being returned by the backend web server, devd, which is listening on port 8080.
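A quick way to verify routing from the command line (assuming the defaults above) is to request Fabio's proxy port directly:
$ curl -i http://localhost:9999/
If everything is wired up, the response is served by devd on port 8080 (a directory listing of the current directory); a request for a path with no matching route returns Fabio's "no route" 404 page.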

Related

Why can't I access the Kibana interface?

I already installed Elasticsearch and Kibana 8.2 on Ubuntu 22.04, and when I try to access Kibana from my host's browser it tells me "Kibana server is not ready yet".
These are my Elasticsearch and Kibana yml files:
network.host: 192.168.1.10
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 11-06-2022 21:39:47
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["elastic"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
This is the kibana.yml:
server.host: "192.168.1.10"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false
# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.1.10:9200"]
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
I put the IP of the Ubuntu server in the config files of Kibana and Elasticsearch because I configured a static IP for the server.
Most probably, Kibana is not able to reach Elasticsearch because you're providing an http URL and not an https one:
elasticsearch.hosts: ["http://192.168.1.10:9200"]
while SSL is enabled in the Elasticsearch config file.
You can try to disable SSL and the security features, and then configure them one by one according to the status of your project.
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
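Alternatively, if you want to keep security enabled, you can point Kibana at Elasticsearch over https and give it the CA generated by the 8.x security auto-configuration. A minimal sketch for kibana.yml (the CA path and credentials are placeholders for your setup, and the CA file may need to be copied somewhere Kibana can read it):
elasticsearch.hosts: ["https://192.168.1.10:9200"]
# CA certificate generated by the Elasticsearch 8.x security auto-configuration
elasticsearch.ssl.certificateAuthorities: ["/etc/elasticsearch/certs/http_ca.crt"]
# Credentials for the built-in kibana_system user (or use a service account token)
elasticsearch.username: "kibana_system"
elasticsearch.password: "<your-password>"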

Correctly handle Socket.io on Docker Swarm using either Traefik or NGINX

I'm working on an application in Node.js using Socket.IO which is deployed using Docker Swarm and I want the option of multiple instances of the application service. But, the application is failing when there is more than one instance. The failure involves an error in the browser for every Socket.IO message, the data that's supposed to be sent in the message never arrives, etc.
The Docker Stack file has four services
the Node.js application
the REDIS instance required for handling Socket.IO and Sessions in a multi-node Socket.IO service -- Yes, I've read the Socket.IO documentation on this, implementing the connect-redis SessionStore, and using socket.io-redis to do multi-node Socket.IO
the database (MySQL)
a reverse proxy - I've used both NGINX and Traefik
In Socket.IO there is a routine keepalive request such as a GET on /socket.io/?EIO=3&transport=polling&t=NLjcKJj&sid=X5UnuTjlYNJ4N8OsAAAH. This request is seen in the log file for the reverse proxy, and gets handled by the application. The debugging output from Engine.IO says it receives these requests.
Specifically:
2020-10-28T05:06:02.557Z Net read redis:6379 id 0
2020-10-28T05:06:02.557Z socket.io:socket socket connected - writing packet
2020-10-28T05:06:02.557Z socket.io:socket joining room X5UnuTjlYNJ4N8OsAAAH
2020-10-28T05:06:02.557Z socket.io:client writing packet {"type":0,"nsp":"/"}
2020-10-28T05:06:02.557Z socket.io:socket joined room [ 'X5UnuTjlYNJ4N8OsAAAH' ]
2020-10-28T05:06:02.656Z engine intercepting request for path "/socket.io/"
2020-10-28T05:06:02.656Z engine handling "GET" http request "/socket.io/?EIO=3&transport=polling&t=NLjcKJj&sid=X5UnuTjlYNJ4N8OsAAAH"
2020-10-28T05:06:02.656Z engine setting new request for existing client
2020-10-28T05:06:02.655Z engine intercepting request for path "/socket.io/"
2020-10-28T05:06:02.655Z engine handling "POST" http request "/socket.io/?EIO=3&transport=polling&t=NLjcKJh&sid=X5UnuTjlYNJ4N8OsAAAH"
2020-10-28T05:06:02.655Z engine unknown sid "X5UnuTjlYNJ4N8OsAAAH"
2020-10-28T05:06:02.774Z engine intercepting request for path "/socket.io/"
2020-10-28T05:06:02.774Z engine handling "GET" http request "/socket.io/?EIO=3&transport=polling&t=NLjcKLI&sid=X5UnuTjlYNJ4N8OsAAAH"
2020-10-28T05:06:02.774Z engine unknown sid "X5UnuTjlYNJ4N8OsAAAH"
2020-10-28T05:06:02.775Z engine intercepting request for path "/socket.io/"
2020-10-28T05:06:02.775Z engine handling "POST" http request "/socket.io/?EIO=3&transport=polling&t=NLjcKLJ&sid=X5UnuTjlYNJ4N8OsAAAH"
2020-10-28T05:06:02.775Z engine setting new request for existing client
2020-10-28T05:06:02.775Z socket.io:client client close with reason transport close
2020-10-28T05:06:02.775Z socket.io:socket closing socket - reason transport close
2020-10-28T05:09:14.955Z socket.io:client client close with reason ping timeout
2020-10-28T05:09:14.955Z socket.io:socket closing socket - reason ping timeout
The log message saying engine unknown sid "X5UnuTjlYNJ4N8OsAAAH" seems significant. It's saying the Session ID is not known. But the sessions are shared between the nodes using REDIS. Hence, it is confusing why the session would be unknown since they're supposed to be shared using connect-redis.
Another significant thing is the logging in the browser.
In the JavaScript console there is a continuous reporting of these messages:
WebSocket connection to 'ws://DOMAIN-NAME/socket.io/?EIO=3&transport=websocket&sid=h2aFFkOvNZtFc1DcAAAI' failed: WebSocket is closed before the connection is established.
Failed to load resource: the server responded with a status of 400 (Bad Request)
The last is reported as occurring with http://DOMAIN-NAME/socket.io/?EIO=3&transport=polling&t=NLjf5hB&sid=h2aFFkOvNZtFc1DcAAAI
Then, for these requests I see the response body is:
{
"code": 1,
"message": "Session ID unknown"
}
That is obviously consistent with the unknown sid message earlier. I take that to mean the connection is being closed because the server thinks the Session ID is incorrect.
In the research I've done into this, I've learned that in Docker Swarm the traffic is distributed in a round-robin fashion -- that is, Docker Swarm acts as a round-robin load balancer. The success path with Socket.IO in such a case is to implement sticky sessions.
I read somewhere that the sticky session support in NGINX does not work for this situation, and that Traefik can support it instead.
In NGINX I had this proxy configuration:
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy false;
    proxy_pass http://todos;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

upstream todos {
    ip_hash;
    server todo:80 fail_timeout=1s max_fails=3;
    keepalive 16;
}
That did not change the behavior - still unknown sid etc. Hence I've switched to Traefik, and I'm having trouble finding documentation on this side of Traefik. It's my first time using Traefik, FWIW. I was able to implement HTTPS using Lets Encrypt, but not the sticky sessions.
To configure Traefik, I'm using command line arguments and Docker container labels such that the entire configuration is in the Docker Stack file.
traefik:
  image: traefik:v2.0
  restart: always
  ports:
    - "80:80" # <== http
    - "8080:8080" # <== :8080 is where the dashboard runs on
    - "443:443" # <== https
  deploy:
    replicas: 1
    labels:
      #### Labels define the behavior and rules of the traefik proxy for this container ####
      - "traefik.enable=true" # <== Enable traefik on itself to view the dashboard and assign a subdomain to view it
      - "traefik.http.routers.api.rule=Host(`monitor.DOMAIN-NAME`)" # <== Setting the domain for the dashboard
      - "traefik.http.routers.api.service=api@internal" # <== Exposing the internal api as a service to access
      - "traefik.http.routers.api.entrypoints=web"
    placement:
      constraints:
        - "node.hostname==srv1"
  command:
    - "--providers.docker.swarmmode=true"
    - "--providers.docker.endpoint=unix:///var/run/docker.sock"
    - "--providers.docker.watch=true"
    - "--log.level=DEBUG"
    - "--accesslog=true"
    - "--tracing=true"
    - "--api.insecure=true" # <== Enabling the insecure api, NOT RECOMMENDED FOR PRODUCTION
    - "--api.dashboard=true" # <== Enabling the dashboard to view services, middlewares, routers, etc...
    - "--providers.docker=true" # <== Enabling docker as the provider for traefik
    - "--providers.docker.exposedbydefault=false" # <== Don't expose every container to traefik, only expose enabled ones
    - "--providers.docker.network=todo_webnet" # <== Operate on the docker network named todo_webnet
    - "--entrypoints.web.address=:80" # <== Defining an entrypoint for port :80 named web
    - "--entrypoints.web-secured.address=:443" # <== Defining an entrypoint for https on port :443 named web-secured
    - "--certificatesresolvers.mytlschallenge.acme.tlschallenge=false" # <== TLS-ALPN-01 challenge disabled; the HTTP-01 challenge below is used instead
    - "--certificatesresolvers.mytlschallenge.acme.email=E-MAIL-ADDRESS@DOMAIN-NAME" # <== Setting email for certs
    - "--certificatesresolvers.mytlschallenge.acme.storage=/letsencrypt/acme.json" # <== Defining the acme file to store certs
    - "--certificatesresolvers.mytlschallenge.acme.httpChallenge.entryPoint=web"
  volumes:
    - /home/ubuntu/letsencrypt:/letsencrypt # <== Volume for certs (TLS)
    - /var/run/docker.sock:/var/run/docker.sock # <== Volume for docker admin
  networks:
    - webnet
todo:
  image: robogeek/todo-app:first-dockerize-redis
  # ports:
  #   - "80:80"
  networks:
    - dbnet
    - webnet
    - redisnet
  deploy:
    replicas: 2
    labels:
      #### Labels define the behavior and rules of the traefik proxy for this container ####
      - "traefik.enable=true" # <== Enable traefik to proxy this container
      - "traefik.http.routers.todo.rule=Host(`DOMAIN-NAME`)" # <== Your Domain Name goes here for the http rule
      - "traefik.http.routers.todo.entrypoints=web" # <== Defining the entrypoint for http, **ref: line 30
      - "traefik.http.routers.todo.service=todo"
      - "traefik.http.services.todo.loadbalancer.healthcheck.port=80"
      - "traefik.http.services.todo.loadbalancer.sticky=true"
      - "traefik.http.services.todo.loadbalancer.server.port=80"
      - "traefik.http.routers.todo-secured.rule=Host(`DOMAIN-NAME`)" # <== Your Domain Name goes here for the https rule
      - "traefik.http.routers.todo-secured.entrypoints=web-secured" # <== Defining the entrypoint for https, **ref: line 31
      - "traefik.http.routers.todo-secured.service=todo"
      - "traefik.http.routers.todo-secured.tls=true"
      - "traefik.http.routers.todo-secured.tls.certresolver=mytlschallenge" # <== Defining the certresolver for https
      # - "traefik.http.routers.todo-app.middlewares=redirect@file" # <== This is a middleware to redirect to https
      # - "traefik.http.routers.nginx-secured.rule=Host(`example.com`)" # <== Your Domain Name for the https rule
      # - "traefik.http.routers.nginx-secured.entrypoints=web-secured" # <== Defining entrypoint for https, **ref: line 31
  depends_on:
    - db
    - redis
  dns:
    - 8.8.8.8
    - 9.9.9.9
  environment:
    - SEQUELIZE_CONNECT=models/sequelize-mysql-docker.yaml
    - SEQUELIZE_DBHOST=db
    - SEQUELIZE_DBNAME=tododb
    - SEQUELIZE_DBUSER=dbuser
    - SEQUELIZE_DBPASSWD=PASS-WORD-HIDDEN
    - REDIS_ENDPOINT=redis
    - NODE_DEBUG=redis
    - REDIS_PASSWD=PASS-WORD-HIDDEN
    - DEBUG=todos:*,ioredis:*,socket.io:*,engine
  command: [ "./wait-for-it.sh", "-t", "0", "db:3306", "--", "node", "./app.mjs" ]
Looking on the Traefik forum I found this: https://community.traefik.io/t/sticky-sessions-dont-work/1949
Per the discussion, I added the following label to the todo container:
- "traefik.http.services.todo.loadbalancer.sticky.cookie.name=StickySessionCookie"
And now it works fine; I've scaled from 1 up to 4 containers so far and it is working great.
Just in case someone is running in HTTPS mode, this was my configuration:
Within the docker-compose file, in the labels section:
- "traefik.http.services.<service-name>.loadbalancer.sticky=true"
- "traefik.http.services.<service-name>.loadbalancer.sticky.cookie.name=StickyCookie"
- "traefik.http.services.<service-name>.loadbalancer.sticky.cookie.secure=true"
NOTE: You can change StickyCookie to any value you want.

How to register server on Consul [Java]

I don't know how to register a service on the Consul server if I have a service like "localhost:8090/user/login/username". I'd appreciate your help!
Assuming that this service is already running, you can use one of the following methods to register it with Consul:
Using a service definition file
You can create a service definition json file and use the consul agent running on that host to register the service.
$ sudo mkdir /etc/consul.d
$ echo '{"service": {"name": "myService", "tags": ["java"], "port": 8080}}' \
| sudo tee /etc/consul.d/myService.json
$ consul agent -dev -config-dir=/etc/consul.d
==> Starting Consul agent...
...
[INFO] agent: Synced service 'myService'
...
More info here: https://www.consul.io/intro/getting-started/services.html
Using the HTTP REST API
More info here: https://www.consul.io/api/agent/service.html#register-service.
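For example, a minimal registration against the local agent's HTTP API (the service name and port are placeholders matching the question) could look like this:
$ curl --request PUT \
    --data '{"Name": "myService", "Tags": ["java"], "Port": 8090}' \
    http://localhost:8500/v1/agent/service/register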
What info can be registered?
Please note that you can only register the IP and the port of your service in Consul, not the entire URL.
Thanks,
Arul

Getting 401 Authorization Required from client Filebeat in ELK (Elasticsearch, Logstash, Kibana)

I'm trying to set up my first ELK environment on RHEL7 using this guide.
I installed all required components (Nginx, Logstash, Kibana, Elasticsearch),
and I also installed Filebeat on the client machine that I'm trying to pull the logs from. But when checking the installation I get a 401:
[root@myd-vm666 beats-dashboards-1.1.0]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.10.2</center>
</body>
</html>
in my filebeat configuration I stated the logstash host and the certificate location as follows:
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
#hosts: ["16.XX.XXX.XXX:9200"]
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["16.XX.XXX.XXX:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
  tls:
    # List of root certificates for HTTPS server verifications
    certificate_authorities: "/etc/pki/tls/certs/logstash-forwarder.crt"
I verified that The logstash-forwarder.crt is in the right place.
And on my server, I have this configuration, /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
What am I missing? is there another key/certificate I need to place on the client?
If you are using AWS Elasticsearch with username/password (master user) security, and the versions of Filebeat and Elasticsearch are compatible, then:
in AWS, while configuring your Elasticsearch service, configure it for IP whitelisting instead of a master user,
or
configure Filebeat -> Logstash -> Elasticsearch with the master username/password; that will also work.
Reference: https://learningsubway.com/filebeat-401-unauthorized-error-with-aws-elasticsearch/
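If you go with the master username/password route, the relevant piece is the Elasticsearch output in your Logstash pipeline. A minimal sketch (the endpoint and credentials are placeholders):
output {
  elasticsearch {
    hosts => ["https://your-domain-endpoint.es.amazonaws.com:443"]
    user => "master-username"
    password => "master-password"
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}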

How to navigate to an external url provided by a yo generator's server

I'm using a yo generator (generator-moda) running on an EC2 instance, and I want to navigate from my browser to the external URL provided, but my browser just hangs on connecting...
Are there special config adjustments that need to be made in EC2 security groups, or otherwise, to allow the IP or host below?
[BS] Access URLs:
-------------------------------------
Local: http://localhost:3000
External: http://172.31.60.85:3000
-------------------------------------
UI: http://localhost:3001
UI External: http://172.31.60.85:3001
-------------------------------------
[BS] Serving files from: ./app
[17:52:19] gulp-inject 12 files into main.scss.
[17:52:19] gulp-inject 12 files into main.scss.
[17:52:19] Starting 'html'...
[17:52:19] Finished 'html' after 3.89 ms
[BS] 1 file changed (index.html)
INFO [karma]: Karma v0.12.31 server started at http://localhost:9876/
INFO [launcher]: Starting browser PhantomJS
WARN [watcher]: Pattern "/home/ubuntu/dev/clients/alugha/main/app/scripts/**/*.html" does not match any file.
INFO [PhantomJS 1.9.8 (Linux)]: Connected on socket f08K4dCRmBorILmZgofR with id 91726259
The problem is that 172.31.0.0/16 is one of Amazon's private IP ranges, so you cannot access those addresses from outside the VPC (Amazon Virtual Private Cloud) (source).
If you want to connect to your EC2 instance where your code is running you need to do two things:
Connect to the public DNS hostname / IP that you can get from your EC2 console. You have the instructions here: Determining Your Public, Private, and Elastic IP Addresses - AWS docs
Open the port in the security group to allow connections to your instance. This answer explains how to open a port in your security group; instead of port 80, open 3000 and 3001 (a CLI example is shown below).
Then, in your browser, open the public DNS hostname you got in the first step with the correct port, and you should be able to load your page.
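If you prefer the command line, opening those ports with the AWS CLI might look like this (the security group ID is a placeholder for the group attached to your instance):
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3000 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3001 --cidr 0.0.0.0/0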
