Elasticsearch-logging rc and svc are getting automatically deleted - elasticsearch

https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
When I create the cluster using these configs, the elasticsearch-logging rc and svc get automatically deleted.
From https://github.com/kubernetes/kubernetes/issues/11435 the solution is to remove
kubernetes.io/cluster-service: "true"
Though without that label, Elasticsearch is not available through the Kubernetes master.
Should I create a pull request to remove the line from the files in the repo so people don't get confused?

Firstly, I'd recommend reformatting future questions so they adhere to the Stack Overflow guidelines: https://stackoverflow.com/help/how-to-ask.
I'd recommend making Elasticsearch a normal Kubernetes Service. You can expose it in one of the following ways:
1. Set service.Type = NodePort and access it via any node's public IP on the assigned nodePort (see the sketch after this list)
2. Set service.Type = LoadBalancer; this will only work on cloud providers that offer load balancers
3. Expose the RC directly through a host port (not recommended)
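As an illustration of option 1, here is a minimal NodePort Service sketch; the elasticsearch-logging name, kube-system namespace, and k8s-app selector are assumptions based on the fluentd-elasticsearch addon, so adjust them to your manifests:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging        # assumed to match the addon's naming
  namespace: kube-system
  # no kubernetes.io/cluster-service label, so the addon manager will not reconcile or delete it
spec:
  type: NodePort
  selector:
    k8s-app: elasticsearch-logging   # assumed pod label from the addon manifests
  ports:
    - port: 9200
      targetPort: 9200
      nodePort: 30920                # any free port in the default 30000-32767 NodePort range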
Those are just the common options for accessing a Service; please see the following thread for a more detailed discussion: https://groups.google.com/forum/#!topic/kubernetes-sig-network/B-A_RuqpFWk
It's generally not a good idea to send all external traffic meant for a Kubernetes service through the apiserver. However, if you must do so, you can via an endpoint such as:
/api/v1/proxy/namespaces/default/services/nginx:80/
Where default is the namespace, nginx is the name of your service and 80 is the service port (needed to disambiguate multiport services).

Communicating between ddev projects via http/s

I've been using DDev for the last six months or so. It has greatly improved my efficiency. Thanks!
I'm looking for a better way to integrate multiple sites running on separate containers. The recommended solution is to use the internal container references (e.g. ddev-projectname-web). This does not work for one of my projects because the destination site relies on a matching hostname for authentication.
Scenario: SiteA communicates with SiteB via REST.
SiteA
Project-name: sitea
Hostname: sitea.ddev.site
Container reference: ddev-sitea-web
SiteB
Project-name: siteb
Hostname: siteb.ddev.site
Container reference: ddev-siteb-web
In order to authenticate with SiteB (TCP or REST), the hostname must be consistent, in this case siteb.ddev.site, so ddev-siteb-web does not work.
My current workaround is to use the SiteB hostname in REST calls from SiteA AND add the internal IP to /etc/hosts on the SiteA web container (something like 172.1.0.1 siteb.ddev.site). I'm looking for a better solution because the hosts configuration is lost when I stop/restart SiteA, and/or the IP changes when I stop/restart SiteB.
One theoretical option is a configuration setting that specifies another running docker instance and automatically adds that IP address and hostname to the integrated site's /etc/hosts file.
Thanks!
Different projects can talk to each other in two ways.
The first way is by using the container name directly, and I think that's what you were doing here.
But there's an alternate way (see the FAQ in the latest docs). You just need to add a docker-compose.comm.yaml to the client project's .ddev directory like this:
version: '3.6'
services:
  web:
    external_links:
      - "ddev-router:otherproject.ddev.site"
That way you can use the canonical name of the other site for communications. This only works for HTTP/S traffic, because it's going through the ddev-router, which is a reverse proxy.

Elasticsearch - Collecting logs from devices not on server LAN

I am trying to build familiarity with SIEM systems in general and decided to set up an Elastic Stack on DigitalOcean. Everything was successful, and my server is producing logs locally. It's been interesting to tinker with visualizations and that good stuff.
Obviously my interest isn't in logs from this remote server, though. I would like to configure some devices on my home network to send logs.
Current setup on server: filebeat > logstash > elasticsearch > kibana.
When I install Filebeat onto, say, my laptop and configure the .yml file in a similar way to the server (comment out the Elasticsearch output, uncomment the Logstash output), it is not able to connect. Basically I just set the hosts to serverip:logstash_port and enabled Filebeat on the system. Running the setup commands leads to a "couldn't connect to any configured elasticsearch hosts".
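For reference, a filebeat.yml sketch of that setup; the log paths and the Beats port 5044 are assumptions, so match them to your Logstash beats input:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log                  # assumed example path

# Elasticsearch output left disabled; ship to Logstash on the remote server instead
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["<server-public-ip>:5044"]    # 5044 assumed; use the port your Logstash beats input listens on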
Instead of a direct answer, can someone explain for me generally what I need to be considering for this process? What is happening when connecting outside of the server LAN? and how do I handle authentication to the server, if needed?
Thank you, really. I know that the information is out there but I am deep in a rabbit hole and having a hard time finding what I need.
By default, the HTTP API is bound to only the host's local loopback interface, ensuring that it is not accessible to the rest of the network. Because the API includes neither authentication nor authorization and has not been hardened or tested for use as a publicly-reachable API, binding to publicly accessible IPs should be avoided where possible.
Even if you set "http.host: 0.0.0.0", you still need to open the port for your laptop (better yet, if your laptop has a fixed public IP, open the port only for it).
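A minimal sketch of that setting in elasticsearch.yml (the same idea applies to the http.host setting in logstash.yml for its monitoring API); binding to all interfaces is shown only for illustration, and the port should stay firewalled to trusted IPs:
http.host: 0.0.0.0   # bind the HTTP API to all interfaces (illustration only; prefer a specific interface)
http.port: 9200      # default HTTP port; open it in your firewall only for your laptop's IP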
For authentication, you have to look into the X-Pack security features.
BR Alexey.

How to deploy Netflix Eureka Server and Eureka Client with a Docker network on an AWS ECS cluster

I am migrating my spring cloud eureka application to AWS ECS and currently having some trouble doing so.
I have an ECS cluster on AWS in which two EC2 services were created
Eureka-server
Eureka-client
each service has a Task running on it.
QUESTION:
How do I establish a "docker network" amongst these two services such that I can register my eureka-client to the eureka-server's registry? Having them in the same cluster doesn't seem to do the trick.
Locally I am able to establish a "docker network" to achieve this. Is it possible to have a "docker network" on AWS?
The problem here lies in the way ECS clusters work. If you go to your dashboard and check out your task definition, you'll see an IP address which AWS assigns to the resource automatically.
In Eureka's case, you need to somehow obtain this IP address while deploying your eureka-client apps and use it to register with your eureka-server. But of course your tasks get destroyed and recreated, so you easily lose that address.
I've done this before and there are a couple of ways to achieve it. Here is one of them:
For the EC2 instances on which you intend to place the ECS tasks acting as your eureka-server or registry, you need to assign Elastic IP addresses so you always know which host IP address to connect to.
You also need to tag them properly so you can refer to them in the next step.
Then, switching back to ECS, when deploying your eureka-server tasks, inside your task definition configuration there's an argument called placement_constraint.
This will allow you to add a tag to your tasks so you can place them on the instances you assigned Elastic IP addresses to in the previous steps.
Now if this is all good and you have deployed everything, you should be able to point your eureka-client apps at that IP and have them registered.
I know this looks dirty and kind of complicated, but the thing is that the Netflix OSS project for Eureka has missing parts, which I believe are proprietary pieces for their internal use that they don't want to share.
Another, and probably a cooler, way of doing this is to use a Route 53 domain or alias record for your instances, so instead of an Elastic IP you can refer to them by DNS name.
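As a sketch, the eureka-client side would then point its service URL at that fixed address in its Spring Cloud application.yml; the hostname and port below are placeholders, not values from the original setup:
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka.example.com:8761/eureka/   # Elastic IP or Route 53 name of the eureka-server (placeholder)
  instance:
    preferIpAddress: true   # register with the host's IP so other services can reach this instance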

Go http api server and socket.io

Currently I'm working on a real-time online game. First I implemented a Go server with socket.io for handling messages between the client and my game world, and it works fine. Now, for managing user data, I need an HTTP API for some functionality like login. I want to use the awesome net/http package for that purpose. Should I serve the HTTP server on a different port?
My next question is about deployment: I want to use Google Container Engine. Can I use pods with two open ports?
As far as I understood from your explanation, you need two ports open for two different APIs running in your application. Regarding exposing two ports in Google Container Engine, you can read the discussion here that describes ways to expose ports in a pod.
Moreover, I invite you to read this tutorial, which involves deploying an API in a GKE cluster with a containerPort in a pod, creating a Kubernetes Service to allow internal cluster traffic to your pods (routing requests on an incoming port to your API's targetPort), and creating an Ingress to define what traffic is allowed into your cluster and where it goes. You can define different APIs with different targetPorts and run them on different pods; you can try that as an alternative. For more documentation on exposing applications using Services, you can read this GKE doc.
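For instance, a minimal sketch of a pod exposing both ports behind one Service; the names, image, and port numbers are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: game-server                                # placeholder name
  labels:
    app: game-server
spec:
  containers:
    - name: game-server
      image: gcr.io/my-project/game-server:latest  # placeholder image
      ports:
        - containerPort: 8080                      # HTTP API (e.g. login)
        - containerPort: 5000                      # socket.io traffic
---
apiVersion: v1
kind: Service
metadata:
  name: game-server
spec:
  selector:
    app: game-server
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: socketio
      port: 5000
      targetPort: 5000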

Elasticsearch Access Log

I'm trying to track down who is issuing queries to an Elasticsearch cluster. Elasticsearch doesn't appear to have an access log.
Is there a place where I can find out which IP is hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is on purpose and by design.
So you have a couple of solutions out there:
1. Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e. whitelist the hosts that can access ports 9200/9300 on your nodes)
2. Look into the Shield plugin for Elasticsearch in order to secure your environment.
3. Put an nginx server in front of your cluster to act as a reverse proxy.
4. Add simple basic authentication with either the elasticsearch-jetty plugin or simply the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want to have access logs, you need either 2 or 3, but all of the solutions above will allow you to secure your ES environment.

Resources