Error 413 when trying to install the Elastic ECK - elasticsearch

I am trying to install the Elastic Cloud on Kubernetes (ECK) operator with the all-in-one.yaml file, as per the tutorial: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-install-all-in-one.html
But I am getting an error:
Error from server: error when creating "https://download.elastic.co/downloads/eck/1.3.1/all-in-one.yaml": the server responded with the status code 413 but did not return more information (post customresourcedefinitions.apiextensions.k8s.io)
I am a bit lost as to how to go about solving this issue...
Command:
kubectl apply -f https://download.elastic.co/downloads/eck/1.3.1/all-in-one.yaml --insecure-skip-tls-verify
complete log:
namespace/elastic-system unchanged
serviceaccount/elastic-operator unchanged
secret/elastic-webhook-server-cert unchanged
configmap/elastic-operator unchanged
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co configured
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co configured
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co configured
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co configured
clusterrole.rbac.authorization.k8s.io/elastic-operator unchanged
clusterrole.rbac.authorization.k8s.io/elastic-operator-view unchanged
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit unchanged
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator unchanged
service/elastic-webhook-server unchanged
statefulset.apps/elastic-operator configured
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co configured
Error from server: error when creating "https://download.elastic.co/downloads/eck/1.3.1/all-in-one.yaml": the server responded with the status code 413 but did not return more information (post customresourcedefinitions.apiextensions.k8s.io)
UPDATE 1:
Running the command (with Windows PowerShell):
curl https://download.elastic.co/downloads/eck/1.3.1/all-in-one.yaml | kubectl apply --insecure-skip-tls-verify -f-
I get:
error: error parsing STDIN: error converting YAML to JSON: yaml: line 7: mapping values are not allowed in this context
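Note: in Windows PowerShell, curl is an alias for Invoke-WebRequest, which emits a response object rather than the raw YAML, so the pipe likely mangles the manifest before kubectl sees it; that would produce exactly this kind of parse error. A sketch of a workaround, assuming curl.exe (the real binary, shipped with recent Windows builds) is available:

curl.exe -sL https://download.elastic.co/downloads/eck/1.3.1/all-in-one.yaml | kubectl apply --insecure-skip-tls-verify -f -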
UPDATE 2:
current versions:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

I managed to fix the issue by setting the proxy-body-size value to 8m in the cluster's NGINX ingress ConfigMap:
proxy-body-size=8m
Namespace: ingress-nginx
ConfigMap: nginx-configuration
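A minimal sketch of that change with kubectl, assuming the controller reads the nginx-configuration ConfigMap in the ingress-nginx namespace as above:

kubectl -n ingress-nginx patch configmap nginx-configuration --type merge -p '{"data":{"proxy-body-size":"8m"}}'

This also makes sense of the error: the requests were passing through an NGINX proxy in front of the API endpoint, and the elasticsearches CRD (the one object missing from the "configured" lines in the log above) is by far the largest in all-in-one.yaml, so it presumably exceeded the proxy's default body-size limit and was rejected with a 413 before ever reaching the API server.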
Thank you @juan-carlos-alafita for providing the relevant links!
413 error with Kubernetes and Nginx ingress controller
https://www.digitalocean.com/community/questions/413-request-entity-too-large-nginx

Related

Kubernetes adds a timestamp to every Docker container log entry

I have 2 clusters:
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.6-gke.1500", GitCommit:"7ce0f9f1939dfc1aee910732e84cba03840df91e", GitTreeState:"clean", BuildDate:"2021-11-17T09:30:26Z", GoVersion:"go1.16.9b7", Compiler:"gc", Platform:"linux/amd64"}
I use fluent-bit to tail container log files and push the logs to Elasticsearch.
In the 1st k8s cluster, the container log format is:
{"log":"{\"method\":\"GET\",\"path\":\"/healthz\",\"format\":\"*/*\",\"controller\":\"Api::ApplicationController\",\"action\":\"healthz\",\"status\":204,\"duration\":0.61,\"view\":0.0,\"request_id\":\"4d54cc06-08d2-4487-b2d9-fabfb2286e89\",\"headers\":{\"SCRIPT_NAME\":\"\",\"QUERY_STRING\":\"\",\"SERVER_PROTOCOL\":\"HTTP/1.1\",\"SERVER_SOFTWARE\":\"puma 5.4.0 Super Flight\",\"GATEWAY_INTERFACE\":\"CGI/1.2\",\"REQUEST_METHOD\":\"GET\",\"REQUEST_PATH\":\"/healthz\",\"REQUEST_URI\":\"/healthz\",\"HTTP_VERSION\":\"HTTP/1.1\",\"HTTP_HOST\":\"192.168.95.192:80\",\"HTTP_USER_AGENT\":\"kube-probe/1.20+\",\"HTTP_ACCEPT\":\"*/*\",\"HTTP_CONNECTION\":\"close\",\"SERVER_NAME\":\"192.168.95.192\",\"SERVER_PORT\":\"80\",\"PATH_INFO\":\"/healthz\",\"REMOTE_ADDR\":\"192.168.79.131\",\"ROUTES_19640_SCRIPT_NAME\":\"\",\"ORIGINAL_FULLPATH\":\"/healthz\",\"ORIGINAL_SCRIPT_NAME\":\"\"},\"params\":{\"controller\":\"api/application\",\"action\":\"healthz\"},\"response\":{},\"custom\":{},\"#version\":\"dutycast-b2c-backend-v1.48.0-rc.5\",\"#timestamp\":\"2022-03-04T11:16:14.236Z\",\"message\":\"[204] GET /healthz (Api::ApplicationController#healthz)\"}\n","stream":"stdout","time":"2022-03-04T11:16:14.238067813Z"}
It is in JSON format and I can parse it easily using a fluent-bit parser.
I do the same for the 2nd k8s cluster, but the container log format is:
2022-03-04T11:19:24.050132912Z stdout F {"method":"GET","path":"/healthz","format":"*/*","controller":"Public::PublicPagesController","action":"healthz","status":204,"duration":0.52,"view":0.0,"request_id":"bcc799bb-5e5c-4758-9169-ecebb04b801f","headers":{"SCRIPT_NAME":"","QUERY_STRING":"","SERVER_PROTOCOL":"HTTP/1.1","SERVER_SOFTWARE":"puma 5.6.2 Birdie's Version","GATEWAY_INTERFACE":"CGI/1.2","REQUEST_METHOD":"GET","REQUEST_PATH":"/healthz","REQUEST_URI":"/healthz","HTTP_VERSION":"HTTP/1.1","HTTP_HOST":"10.24.0.22:3000","HTTP_USER_AGENT":"kube-probe/1.21","HTTP_ACCEPT":"*/*","HTTP_CONNECTION":"close","SERVER_NAME":"10.24.0.22","SERVER_PORT":"3000","PATH_INFO":"/healthz","REMOTE_ADDR":"10.24.0.1","ROUTES_71860_SCRIPT_NAME":"","ORIGINAL_FULLPATH":"/healthz","ORIGINAL_SCRIPT_NAME":"","ROUTES_71820_SCRIPT_NAME":""},"params":{"controller":"public/public_pages","action":"healthz"},"custom":null,"request_time":"2022-03-04T11:19:24.048+00:00","process_id":8,"@version":"vcam-backend-v0.1.0-rc24","response":"#\u003cActionDispatch::Response:0x00007f9d1f600888 @mon_data=#\u003cMonitor:0x00007f9d1f600838\u003e, @mon_data_owner_object_id=144760, @header={\"X-Frame-Options\"=\u003e\"ALLOW-FROM https://vietcapital.com.vn\", \"X-XSS-Protection\"=\u003e\"0\", \"X-Content-Type-Options\"=\u003e\"nosniff\", \"X-Download-Options\"=\u003e\"noopen\", \"X-Permitted-Cross-Domain-Policies\"=\u003e\"none\", \"Referrer-Policy\"=\u003e\"strict-origin-when-cross-origin\"}, @stream=#\u003cActionDispatch::Response::Buffer:0x00007f9d1f6045a0 @response=#\u003cActionDispatch::Response:0x00007f9d1f600888 ...\u003e, @buf=[\"\"], @closed=false, @str_body=nil\u003e, @status=204, @cv=#\u003cMonitorMixin::ConditionVariable:0x00007f9d1f600720 @monitor=#\u003cMonitor:0x00007f9d1f600838\u003e, @cond=#\u003cThread::ConditionVariable:0x00007f9d1f6006f8\u003e\u003e, @committed=false, @sending=false, @sent=false, @cache_control={}, @request=#\u003cActionDispatch::Request GET \"http://10.24.0.22:3000/healthz\" for 10.24.0.1\u003e\u003e","@timestamp":"2022-03-04T11:19:24.049Z","message":"[204] GET /healthz (Public::PublicPagesController#healthz)"}
In both cases we have the same log config for the service, the same fluent-bit version, and the same Elasticsearch version; only the k8s cluster differs. In the 2nd case, something like a timestamp has been inserted at the beginning of every log entry, and I cannot parse this format because it is not JSON.
I think Kubernetes adds a default option to the Docker container's logs (https://docs.docker.com/engine/reference/commandline/logs/).
How can I fix the log format to JSON in the 2nd case?
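Incidentally, the "2022-03-04T...Z stdout F" prefix in the 2nd cluster matches the CRI log format written by containerd-based nodes (GKE 1.21 node pools default to containerd rather than Docker), so the JSON payload is still there after the third field. A sketch of a parser for that format, modeled on the cri parser from the fluent-bit Kubernetes documentation:

[PARSER]
    # CRI container log line: <time> <stream> <P|F> <log>
    Name        cri
    Format      regex
    Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z

Pointing the tail input's Parser at cri strips the prefix, leaving the JSON in the message field for a second json parser or decoder to handle.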

Jaeger operator fails to parse Jaeger instance version on Kubernetes

The Jaeger operator shows these log entries:
time="2022-01-07T11:27:57Z" level=info msg=Versions arch=amd64 identity=jaeger-operator.jaeger-operator jaeger=1.21.0 jaeger-operator=v1.21.3 operator-sdk=v0.18.2 os=linux version=go1.14.15
time="2022-01-07T11:28:20Z" level=warning msg="Failed to parse current Jaeger instance version. Unable to perform upgrade" current= error="Invalid Semantic Version" instance=tracing namespace=istio-system
Afterwards, the tracing resource looks like this:
kubectl get jaeger
NAME      STATUS    VERSION   STRATEGY   STORAGE         AGE
tracing   Running             allinone   elasticsearch   37d
We use GitOps to distribute the applications (including the jaeger-operator and the jaeger tracing resource). The only difference we are aware of is the cluster versions; it is only failing on a particular cluster with the following Kubernetes version:
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.12-gke.1500", GitCommit:"d32c0db9a3ccd0ac73b0b3abd0532505217b376e", GitTreeState:"clean", BuildDate:"2021-11-17T09:30:02Z", GoVersion:"go1.15.15b5", Compiler:"gc", Platform:"linux/amd64"}
Other than the log error and the resulting missing information from the get jaeger command, the jaeger-operator modifies 2 things from the initial manifest:
It removes the line: .spec.storage.esRollover.enabled: true
It lowercases .spec.strategy: AllInOne to allinone
The functions used for parsing the version: https://github.com/jaegertracing/jaeger-operator/blob/v1.21.3/pkg/upgrade/main.go#L28
The function used to check the current version and compare it, to verify whether it needs to update the resource: https://github.com/jaegertracing/jaeger-operator/blob/v1.21.3/pkg/upgrade/upgrade.go#L134
They both look OK to me. I can't tell where or what the problem is, nor how to work around it.
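One detail in the warning above: it logs current= with an empty value, so the operator seems to be getting an empty version string out of the resource before it ever reaches the semver parser (and an empty string is indeed an "Invalid Semantic Version"). A speculative way to check what the operator sees, assuming the version it parses is the one surfaced in the resource status (which is what the VERSION column of kubectl get jaeger prints):

kubectl get jaeger tracing -n istio-system -o jsonpath='{.status.version}'

If that prints nothing, the question becomes why the status version is empty on this one cluster, rather than anything in the parsing functions themselves.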

Influx 2.0 CLI Client Not Connecting

I'm having a problem connecting to a remote instance of Influx 2.0 OSS using the influx client.
First, I'm pretty sure it's not the server:
curl http://192.168.1.7:8086/health
{"name":"influxdb", "message":"ready for queries and writes", "status":"pass", "checks":[], "version": "2.0.7", "commit": "2a45f0c037"}
I have a config set; when I try to use it to ping the server:
❯ influx config
Active   Name        URL                  Org
*        myinfluxd   http://192.168.1.7   my_org
❯ influx ping
Error: Invalid character '<' looking for beginning of value.
Error: Invalid character '<' looking for beginning of value.
See 'influx ping -h' for help
Other commands also don't work
❯ influx query 'show databases;'
Error: Failed to read metadata: missing expected annotation datatype.
Error: Invalid character '<' looking for beginning of value.
See 'influx query -h' for help
I'm using the client on a Mac with Big Sur (installed with Homebrew); the client and server versions are the same (2.0.7).
Client
❯ influx version
Influx CLI 2.0.7 (git: none) build_date: 2021-07-02T03:23:38Z
Server
❯ influx version
Influx CLI 2.0.7 (git: none) build_date: 2021-07-02T03:23:38Z
root@influxdb2:/# influx config
Active   Name      URL                     Org
*        default   http://localhost:8086   my_org
Any ideas?
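One thing stands out in the output above: the health check that worked hit http://192.168.1.7:8086, but the active config's URL is http://192.168.1.7 with no port, so the CLI is presumably talking to something on port 80 that answers with HTML (which would explain the leading '<'). A sketch of repointing the config at the server's port, assuming the influx 2.0 CLI's config set flags (verify with influx config set -h):

influx config set --config-name myinfluxd --host-url http://192.168.1.7:8086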

Nexus 3.6 OSS Docker Hub Proxy - Can docker search but not docker pull

I've deployed Nexus OSS 3.6 and it's being served on http://server:8082/nexus
I have configured a Docker Hub proxy using the instructions in http://www.sonatype.org/nexus/2017/02/16/using-nexus-3-as-your-repository-part-3-docker-images/ and have configured the docker-group to serve on port 18000
I can perform the following:
docker login server:18000
docker search server:18000/jenkins
but when I run:
docker pull server:18000/jenkins
I get the following error:
Error response from daemon: Get http://10.105.139.17:18000/v2/jenkins/manifests/latest:
error parsing HTTP 400 response body: invalid character '<'
looking for beginning of value:
"<html>\n<head>\n<meta http-equiv=\"Content-Type\"
content=\"text/html;charset=ISO-8859-1\"/>\n<title>
Error 400 </title>\n</head>\n<body>\n<h2>HTTP ERROR: 400</h2>\n
<p>Problem accessing /nexus/v2/token.
Reason:\n<pre> Not a Docker request</pre></p>\n<hr />
Powered by Jetty:// 9.3.20.v20170531<hr/>\n
</body>\n</html>\n"
My jetty nexus.properties config file is:
# Jetty section
application-port=8082
application-host=0.0.0.0
# nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml
nexus-context-path=/nexus
# Nexus section
# nexus-edition=nexus-pro-edition
# nexus-features=\
# nexus-pro-feature
Could anyone offer any suggestions on how to fix this please?
I had the same problem when I enabled anonymous read on some Docker repositories.
Repositories -> Docker hosted -> check the checkbox (Disable to allow anonymous pull) on the repository.
It seems you need to upgrade Nexus to 3.6.1, according to
https://issues.sonatype.org/browse/NEXUS-14488
in order to allow anonymous read again.
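Either way, a quick sanity check is to hit the standard Docker Registry v2 API endpoints on the connector port directly (placeholder credentials, substitute your own):

curl -u admin:admin123 http://server:18000/v2/
curl -u admin:admin123 http://server:18000/v2/_catalog

If these return JSON instead of the Jetty HTML error shown above, the connector itself is healthy and the problem is confined to the token/anonymous-access handling.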

Unable to connect to all storage backends successfully with PredictionIO and Elasticsearch on Localhost

I'm trying to set up PredictionIO locally following these instructions.
Unfortunately I'm unable to get it to work. When I try to install a template or run "pio status", I get an error saying that PredictionIO is unable to connect to Elasticsearch:
[ERROR] [Console$] Unable to connect to all storage backends successfully. The following shows the error message from the storage backend.
[ERROR] [Console$] None of the configured nodes are available: [] (org.elasticsearch.client.transport.NoNodeAvailableException)
[ERROR] [Console$] Dumping configuration of initialized storage backend sources. Please make sure they are correct.
[ERROR] [Console$] Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /Users/tomasz/Documents/PredictionIO/apache-predictionio-0.10.0-incubating/PredictionIO-0.10.0-incubating/vendors/elasticsearch-1.4.4, HOSTS -> localhost, PORTS -> 9200, TYPE -> elasticsearch
My pio-env.sh file is set up like so:
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9200
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=$PIO_HOME/vendors/elasticsearch-1.4.4
PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
PIO_STORAGE_SOURCES_LOCALFS_PATH=$PIO_FS_BASEDIR/models
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=$PIO_HOME/vendors/hbase-1.0.0
Any help would be appreciated.
I encountered this when I was using Elasticsearch 5.4.x on PredictionIO version 0.11.0-incubating. It worked when I used Elasticsearch 1.7.x.
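If the versions do match and it still fails, one more thing worth checking (an assumption, not something visible in the logs above): with Elasticsearch 1.x, PredictionIO connects through the native transport client, whose default port is 9300; 9200 is the HTTP/REST port. Under that assumption the config would need:

PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300

A curl http://localhost:9200 that still answers while pio status fails would point to exactly this kind of transport-vs-REST port mix-up.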
