Getting error when trying Envoy: Error: connect ECONNREFUSED - proxy

I am running Envoy Proxy using Docker on Windows. This is my configuration:
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 127.0.0.1
        port_value: 10005
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: edge
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          route_config:
            virtual_hosts:
            - name: direct_response_service
              domains:
              - '*'
              routes:
              - match:
                  prefix: "/gethello"
                direct_response:
                  status: 200
                  body:
                    inline_string: "Hello"
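One thing worth checking (an assumption about the setup, since the docker run command is not shown in the post): a listener bound to 127.0.0.1 inside the container is only reachable from inside that container, so a port published with -p is refused from the host. A minimal sketch of binding on all interfaces and running the proxy; the image tag and paths below are placeholders, not taken from the post:

# listener address, bound on all interfaces instead of loopback
address:
  socket_address:
    address: 0.0.0.0
    port_value: 10005

# hypothetical run command (adjust tag, paths and config location to your setup)
docker run -p 10005:10005 -v "%cd%\envoy.yaml:/etc/envoy/envoy.yaml" envoyproxy/envoy:<tag>

With that in place, curl http://localhost:10005/gethello from the host should return "Hello".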

Related

Nodeport works but ingress doesn't

I am new to Kubernetes and I am trying to set up a web app for production. I tested an nginx app and it works perfectly via both NodePort and Ingress.
When I set up my own web app and exposed it via NodePort it works, but when I expose it via Ingress it doesn't. I have since deleted the nginx app's Deployment, Service and Ingress, but my web app still behaves the same (WORKS via NODEPORT AND NOT via INGRESS). Any help getting this to work is appreciated.
THIS IS THE NGINX YAML CODE (WORKS via BOTH NODEPORT AND INGRESS)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        readinessProbe:
          httpGet:
            port: web
            path: /
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    # app: nginx
    app: mruser-dev
  ports:
  - name: web
    port: 80
    targetPort: web
    nodePort: 31122
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-nodeport
            port:
              number: 80
THIS IS MY WEB APP CODE (WORKS via NODEPORT AND NOT via INGRESS)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebapp-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mywebapp-dev
  template:
    metadata:
      labels:
        app: mywebapp-dev
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "80"
        scheduler.alpha.kubernetes.io/affinity: >
          {
            "nodeAffinity": {
              "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [
                  {
                    "matchExpressions": [
                      {
                        "key": "kubernetes.io/hostname",
                        "operator": "In",
                        "values": ["server-node-1"]
                      }
                    ]
                  }
                ]
              }
            }
          }
    spec:
      volumes:
      - name: logs
        emptyDir: {}
      - name: cache
        emptyDir: {}
      - name: testing
        emptyDir: {}
      - name: sessions
        emptyDir: {}
      - name: views
        emptyDir: {}
      - name: mywebapp-dev-nfsvolume
        nfs:
          server: 10.0.0.20
          path: /var/www/html/misteruser-dev
      - name: mywebapp-nginx-nfsvolume
        nfs:
          server: 10.0.0.20
          path: /var/www/nginx
      securityContext:
        fsGroup: 82
      initContainers:
      - name: database-migrations
        image: mywebapp-dev-laravel-fpm:8.1-fpm-alpine
        imagePullPolicy: Never
        envFrom:
        - configMapRef:
            name: mywebapp-dev-laravel
        - secretRef:
            name: mywebapp-dev-laravel
        volumeMounts:
        - name: mywebapp-dev-nfsvolume
          mountPath: /var/www/html/
        command:
        - "php"
        args:
        - "artisan"
        - "migrate"
        - "--force"
      containers:
      - name: nginx
        imagePullPolicy: Never
        image: mywebapp-dev-laravel-nginx:stable-alpine
        volumeMounts:
        - name: mywebapp-nginx-nfsvolume
          mountPath: /etc/nginx/conf.d/
        resources:
          limits:
            cpu: 200m
            memory: 50M
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        readinessProbe:
          httpGet:
            port: web
            path: /
      - name: fpm
        imagePullPolicy: Never
        envFrom:
        - configMapRef:
            name: mywebapp-dev-laravel
        - secretRef:
            name: mywebapp-dev-laravel
        securityContext:
          runAsUser: 82
          readOnlyRootFilesystem: true
        volumeMounts:
        - name: logs
          mountPath: /var/www/html/storage/logs
        - name: cache
          mountPath: /var/www/html/storage/framework/cache
        - name: sessions
          mountPath: /var/www/html/storage/framework/sessions
        - name: views
          mountPath: /var/www/html/storage/framework/views
        - name: testing
          mountPath: /var/www/html/storage/framework/testing
        - name: mywebapp-dev-nfsvolume
          mountPath: /var/www/html/
        resources:
          limits:
            cpu: 500m
            memory: 200Mi
        image: mywebapp-dev-laravel-fpm:8.1-fpm-alpine
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: mywebapp-dev-svc
  namespace: default
spec:
  type: NodePort
  selector:
    app: mywebapp-dev
  ports:
  - name: web
    port: 80
    targetPort: 80
    nodePort: 31112
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mywebapp-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mywebapp-dev-svc
            port:
              number: 80
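Not part of the original post, but when the NodePort path works and the Ingress path doesn't, it usually helps to compare what the Ingress and the Service actually resolve to. A few read-only checks (resource names taken from the manifests above; the controller's namespace and pod names depend on how ingress-nginx was installed):

kubectl get endpoints mywebapp-dev-svc            # should list the pod IPs behind the service on port 80
kubectl describe ingress mywebapp-nginx-ingress   # confirm the backend, the assigned address and any events
kubectl get pods -n ingress-nginx                 # then check the controller logs for 404/502 hints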

How do I switch from Table-manager to Compactor in an existing Loki deploy?

I have an issue whereby chunks are not being deleted as per the configured max_look_back_period. Having done some research, I discovered here that the table-manager is no longer supported, as in the comment here.
I am, however, unsure how to amend my current configuration, which looks like this:
loki-stack-values.yml
grafana:
  enabled: true
  persistence:
    enabled: true
    size: 5Gi
  adminPassword: Vfgfhdjdkdisynwtey678CMX7xghuy879
prometheus:
  enabled: true
  alertmanager:
    persistentVolume:
      enabled: true
      size: 2Gi
  server:
    persistentVolume:
      enabled: true
      size: 10Gi
loki:
  enabled: true
  persistence:
    enabled: true
    size: 70Gi
  config:
    chunk_store_config:
      max_look_back_period: 672h
    table_manager:
      retention_deletes_enabled: true
      retention_period: 672h
statefulset.yml
# Source: loki-stack/charts/loki/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
  namespace: loki
  labels:
    app: loki
    chart: loki-2.11.0
    release: loki
    heritage: Helm
  annotations:
    {}
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: loki
      release: loki
  serviceName: loki-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: loki
        name: loki
        release: loki
      annotations:
        checksum/config: f1685c19aa5e8157738636fd074c7cb45763c46d46f08d9f05ca31e4445cd082
        prometheus.io/port: http-metrics
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: loki
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      initContainers:
        []
      containers:
        - name: loki
          image: "grafana/loki:2.5.0"
          imagePullPolicy: IfNotPresent
          args:
            - "-config.file=/etc/loki/loki.yaml"
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: config
              mountPath: /etc/loki
            - name: storage
              mountPath: "/data"
              subPath:
          ports:
            - name: http-metrics
              containerPort: 3100
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /ready
              port: http-metrics
            initialDelaySeconds: 45
          readinessProbe:
            httpGet:
              path: /ready
              port: http-metrics
            initialDelaySeconds: 45
          resources:
            {}
          securityContext:
            readOnlyRootFilesystem: true
          env:
      nodeSelector:
        {}
      affinity:
        {}
      tolerations:
        []
      terminationGracePeriodSeconds: 4800
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config
          secret:
            secretName: loki
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        {}
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "70Gi"
      storageClassName:
I would like to amend my configuration as below:
grafana:
  enabled: true
  persistence:
    enabled: true
    size: 5Gi
  adminPassword: Vfgfhdjdkdisynwtey678CMX7xghuy879
prometheus:
  enabled: true
  alertmanager:
    persistentVolume:
      enabled: true
      size: 2Gi
  server:
    persistentVolume:
      enabled: true
      size: 10Gi
loki:
  enabled: true
  persistence:
    enabled: true
    size: 70Gi
  config:
    compactor:
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
    limits_config:
      retention_period: 672h
Is this correct? Or are there additional options I will need to add as well?
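For reference (not from the original post, and based on the Loki 2.x retention documentation rather than this specific chart): compactor-based retention only applies to the boltdb-shipper index store, and the compactor block normally also names a working directory and the shared object store. A hedged sketch, assuming the chart's default boltdb-shipper/filesystem storage:

loki:
  config:
    compactor:
      working_directory: /data/compactor
      shared_store: filesystem
      compaction_interval: 10m
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
    limits_config:
      retention_period: 672h

Rolling the change into the existing deploy would then be an ordinary helm upgrade with the amended values file, e.g. helm upgrade loki grafana/loki-stack -n loki -f loki-stack-values.yml (the chart repo and release names here are assumptions based on the labels in the StatefulSet above).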

How to mount saved objects to kibana in kubernetes?

I'm running an EFK stack in my Kubernetes cluster; however, each time I start the Kibana dashboard I need to manually import export.ndjson. I've heard that all Kibana objects are stored in Elasticsearch, so I mounted this file to /usr/share/elasticsearch/data/, but I still can't see it in the dashboard.
Here are my YAML files:
kibana:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: tools
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.6.0
        ports:
        - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: tools
spec:
  selector:
    app: kibana
  ports:
  - name: client
    port: 5601
    protocol: TCP
  type: ClusterIP
es:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: tools
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: elasticsearch-volume
          mountPath: /usr/share/elasticsearch/data
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        env:
        - name: discovery.type
          valueFrom:
            configMapKeyRef:
              name: tools-config
              key: discovery.type
        volumeMounts:
        - name: elasticsearch-volume
          mountPath: /usr/share/elasticsearch/data
        - name: kibana-dashboard
          mountPath: /usr/share/elasticsearch/data/export.ndjson
          subPath: export.ndjson
      volumes:
      - name: elasticsearch-volume
        persistentVolumeClaim:
          claimName: elasticsearch-storage-pvc
      - name: kibana-dashboard
        configMap:
          name: kibana-dashboard
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: tools
spec:
  selector:
    app: elasticsearch
  ports:
  - name: http
    protocol: TCP
    port: 9200
  - name: transport
    protocol: TCP
    port: 9300
  type: ClusterIP
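A note that is not from the original post: Kibana does not read .ndjson files out of the Elasticsearch data directory; that path holds Elasticsearch's own index files, and saved objects are normally loaded through Kibana's saved objects import API. A hedged sketch of importing the file against the Service defined above:

kubectl -n tools port-forward svc/kibana 5601:5601 &
curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@export.ndjson

The same call could be wrapped in a Kubernetes Job or a post-deploy step so the dashboard is loaded automatically instead of imported by hand each time.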

Ansible merge listed dictionaries

I'm currently building an Ansible playbook which provides me with the two variables below. I need to merge these into one variable where the ipAddresses match, as shown in the example below.
---
- hosts: localhost
  gather_facts: no
  vars:
    sites:
      - siteGroupId: 123
        siteName: name123
        siteDevices:
          - ipAddress: 1.1.1.1
          - ipAddress: 2.2.2.2
      - siteGroupId: 456
        siteName: name456
        siteDevices:
          - ipAddress: 3.3.3.3
          - ipAddress: 4.4.4.4
    devices:
      - ipAddress: 1.1.1.1
        deviceName: name123-a.tld
      - ipAddress: 2.2.2.2
        deviceName: name123-b.tld
      - ipAddress: 3.3.3.3
        deviceName: name456-a.tld
      - ipAddress: 4.4.4.4
        deviceName: name456-b.tld
Expected output:
sites:
  - siteGroupId: 123
    siteName: name123
    siteDevices:
      - ipAddress: 1.1.1.1
        deviceName: name123-a.tld
      - ipAddress: 2.2.2.2
        deviceName: name123-b.tld
  - siteGroupId: 456
    siteName: name456
    siteDevices:
      - ipAddress: 3.3.3.3
        deviceName: name456-a.tld
      - ipAddress: 4.4.4.4
        deviceName: name456-b.tld
What I have achieved so far with the code below is that the device name gets added to the site, but I end up with 4 sites (duplicated siteGroupIds in the output).
---
- hosts: localhost
  gather_facts: no
  vars:
    sites:
      - siteGroupId: 123
        siteName: name123
        siteDevices:
          - ipAddress: 1.1.1.1
          - ipAddress: 2.2.2.2
      - siteGroupId: 456
        siteName: name456
        siteDevices:
          - ipAddress: 3.3.3.3
          - ipAddress: 4.4.4.4
    devices:
      - ipAddress: 1.1.1.1
        deviceName: name123-a.tld
      - ipAddress: 2.2.2.2
        deviceName: name123-b.tld
      - ipAddress: 3.3.3.3
        deviceName: name456-a.tld
      - ipAddress: 4.4.4.4
        deviceName: name456-b.tld
  tasks:
    - set_fact:
        tmpVar1: "{{ item.0 | combine({'siteDevices': [ item.1 | combine(devices | selectattr('ipAddress','equalto',item.1.ipAddress) | first) ] }) }}"
      loop: "{{ lookup('subelements', sites, 'siteDevices', {'skip_missing': True}) }}"
      register: tmpVar2
    - name: tmpVar1
      debug:
        msg: "{{ tmpVar1 }}"
    - name: tmpVar2
      debug:
        msg: "{{ tmpVar2 }}"
    - set_fact:
        sites: "{{ tmpVar2.results | map(attribute='ansible_facts.tmpVar1') | list }}"
    - name: SITES
      debug:
        msg: "{{ sites }}"
OUTPUT:
{
    "siteDevices": [
        {
            "deviceName": "name123-a.tld",
            "ipAddress": "1.1.1.1"
        }
    ],
    "siteGroupId": 123,
    "siteName": "name123"
},
{
    "siteDevices": [
        {
            "deviceName": "name123-b.tld",
            "ipAddress": "2.2.2.2"
        }
    ],
    "siteGroupId": 123,
    "siteName": "name123"
},
{
    "siteDevices": [
        {
            "deviceName": "name456-a.tld",
            "ipAddress": "3.3.3.3"
        }
    ],
    "siteGroupId": 456,
    "siteName": "name456"
},
{
    "siteDevices": [
        {
            "deviceName": "name456-b.tld",
            "ipAddress": "4.4.4.4"
        }
    ],
    "siteGroupId": 456,
    "siteName": "name456"
}
The tasks
    - set_fact:
        devices_dict: "{{ devices|
                          items2dict(key_name='ipAddress',
                                     value_name='deviceName') }}"
    - set_fact:
        sites2: "{{ sites2|default([]) + [
                    item|
                    combine({'siteDevices':
                             dict(my_ipAddress|
                                  zip(my_ipAddress|
                                      map('extract', devices_dict)))|
                             dict2items(key_name='ipAddress',
                                        value_name='deviceName')})] }}"
      vars:
        my_ipAddress: "{{ item.siteDevices|json_query('[].ipAddress') }}"
      loop: "{{ sites }}"
    - debug:
        var: sites2
give
"sites2": [
{
"siteDevices": [
{
"deviceName": "name123-a.tld",
"ipAddress": "1.1.1.1"
},
{
"deviceName": "name123-b.tld",
"ipAddress": "2.2.2.2"
}
],
"siteGroupId": 123,
"siteName": "name123"
},
{
"siteDevices": [
{
"deviceName": "name456-a.tld",
"ipAddress": "3.3.3.3"
},
{
"deviceName": "name456-b.tld",
"ipAddress": "4.4.4.4"
}
],
"siteGroupId": 456,
"siteName": "name456"
}
]
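One small caveat that is not part of the original answer: the json_query filter requires the jmespath Python library on the controller. If that dependency is unwanted, the same list of addresses can be built with built-in filters only, for example:

      vars:
        my_ipAddress: "{{ item.siteDevices | map(attribute='ipAddress') | list }}"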

How can I create a pod without this error (elasticsearch on kubernetes)?

I want to create an Elasticsearch pod on Kubernetes.
I made some config changes to edit path.data and path.logs,
but I'm getting this error.
error: error validating "es-deploy.yml": error validating data:
ValidationError(Deployment.spec.template.spec.containers[0]): unknown
field "volumes" in io.k8s.api.core.v1.Container; if you choose to
ignore these errors, turn validation off with --validate=false
service-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch
es-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
spec:
  # type: LoadBalancer
  selector:
    component: elasticsearch
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP
elasticsearch.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
data:
  elasticsearch.yml: |
    cluster:
      name: ${CLUSTER_NAME:elasticsearch-default}
    node:
      master: ${NODE_MASTER:true}
      data: ${NODE_DATA:true}
      name: ${NODE_NAME}
      ingest: ${NODE_INGEST:true}
      max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1}
    processors: ${PROCESSORS:1}
    network.host: ${NETWORK_HOST:_site_}
    path:
      data: ${DATA_PATH:"/data/elk"}
      repo: ${REPO_LOCATIONS:[]}
    bootstrap:
      memory_lock: ${MEMORY_LOCK:false}
    http:
      enabled: ${HTTP_ENABLE:true}
      compression: true
      cors:
        enabled: true
        allow-origin: "*"
    discovery:
      zen:
        ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery}
        minimum_master_nodes: ${NUMBER_OF_MASTERS:1}
    xpack:
      license.self_generated.type: basic
es-deploy.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es
  labels:
    component: elasticsearch
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      serviceAccount: elasticsearch
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: es
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "DISCOVERY_SERVICE"
          value: "elasticsearch"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        - name: ES_JAVA_OPTS
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: storage
          mountPath: /data/elk
        - name: config-volume
          mountPath: /usr/share/elasticsearch/elastic.yaml
        volumes:
        - name: storage
          emptyDir: {}
        - name: config-volume
          configMap:
            name: elasticsearch-config
There is a syntax problem in your es-deploy.yml file.
This should work:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es
  labels:
    component: elasticsearch
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      serviceAccount: elasticsearch
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: es
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "DISCOVERY_SERVICE"
          value: "elasticsearch"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        - name: ES_JAVA_OPTS
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: storage
          mountPath: /data/elk
        - name: config-volume
          mountPath: /usr/share/elasticsearch/elastic.yaml
      volumes:
      - name: storage
        emptyDir: {}
      - name: config-volume
        configMap:
          name: elasticsearch-config
The volumes section should not be under the containers section; it belongs under the pod spec section, as the error suggests.
You can validate your k8s YAML files for syntax errors online using this site.
Hope this helps.
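As an aside that is not part of the original answer: kubectl can also catch this class of schema error locally before anything is applied, for example:

kubectl apply --dry-run=client -f es-deploy.yml

(on older kubectl versions the flag is plain --dry-run). This reports the same "unknown field" validation error without creating the Deployment.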
