Transfer HTTP endpoint metrics using the http module (Metricbeat) - elasticsearch

In order to ship the metrics endpoint (/metrics) of my Spring Boot app, I used the http module in Metricbeat. I followed the official Elastic documentation to install and configure Metricbeat, but unfortunately the metrics were transported incorrectly, even though the connection had been established.
I changed the fields.yml file to create a specific template for the http module only:
fields.yml
- key: http
  title: "HTTP"
  description: >
    HTTP module
  release: beta
  settings: ["ssl"]
  fields:
    - name: http
      type: group
      description: >
      fields:
        - name: request
          type: group
          description: >
            HTTP request information
          fields:
            - name: header
              type: object
              description: >
                The HTTP headers sent
            - name: method
              type: keyword
              description: >
                The HTTP method used
            - name: body
              type: keyword
              description: >
                The HTTP payload sent
        - name: response
          type: group
          description: >
            HTTP response information
          fields:
            - name: header
              type: object
              description: >
                The HTTP headers received
            - name: code
              type: keyword
              description: >
                The HTTP status code
              example: 404
            - name: phrase
              type: keyword
              example: Not found
              description: >
                The HTTP status phrase
            - name: body
              type: keyword
              description: >
                The HTTP payload received
        - name: json
          type: group
          description: >
            json metricset
          release: beta
          fields:
        - name: server
          type: group
          description: >
            server
          release: experimental
          fields:
metricbeat.yml
metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

metricbeat.modules:
#------------------------------- HTTP Module -------------------------------
- module: http
  metricsets: ["json"]
  period: 10s
  hosts: ["http://localhost:8080/metrics"]
  namespace: "test_metrics"
  method: "GET"
  enabled: true

setup.template.overwrite: true

output.elasticsearch:
  hosts: ["localhost:9200"]
My app's metrics endpoint (http://localhost:8080/metrics) displays:
{
  "mem": 199405,
  "mem.free": 74297,
  "processors": 4,
  "instance.uptime": 45240231,
  "uptime": 45254636,
  "systemload.average": -1,
  "heap.committed": 154624,
  "heap.init": 131072,
  "heap.used": 80326,
  "heap": 1842688,
  "nonheap.committed": 45888,
  "nonheap.init": 2496,
  "nonheap.used": 44781,
  "nonheap": 0,
  "threads.peak": 31,
  "threads.daemon": 25,
  "threads.totalStarted": 35,
  "threads": 27,
  "classes": 6659,
  "classes.loaded": 6659,
  "classes.unloaded": 0,
  "gc.ps_scavenge.count": 24,
  "gc.ps_scavenge.time": 999,
  "gc.ps_marksweep.count": 1,
  "gc.ps_marksweep.time": 71,
  "httpsessions.max": -1,
  "httpsessions.active": 0,
  "gauge.response.metrics": 20,
  "gauge.response.unmapped": 6005,
  "gauge.response.login": 1,
  "gauge.response.star-star.favicon.ico": 1878,
  "counter.status.200.star-star.favicon.ico": 1,
  "counter.status.200.metrics": 30,
  "counter.status.302.unmapped": 3,
  "counter.status.200.login": 2
}
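For reference, the http module's json metricset nests this whole response under the module and namespace, so each indexed event should look roughly like this (a sketch, assuming default settings; note that dotted keys such as mem.free expand to nested objects on the Elasticsearch side, which can clash with the scalar mem field):

{
  "metricset": { "module": "http", "name": "json" },
  "http": {
    "test_metrics": {
      "mem": 199405,
      "mem.free": 74297,
      "processors": 4
    }
  }
}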
Previously I used Httpbeat and everything was great: the field names in the Elasticsearch index were compatible ... Since I moved to the http module, everything has changed. I'm using it to get predefined dashboards in Kibana.
Any help please?

Related

Istio EnvoyFilter Lua HttpCall doesn't work with HTTPS?

I need to decrypt the body of a request to an external API, but when I try to do it with an EnvoyFilter using Lua, it doesn't work.
If I try the same code posted here without HTTPS, it works; with HTTPS it returns 503.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: eva-decrypt-filter
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: ANY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
          inlineCode: |
            function envoy_on_request(request_handle)
              local buffered = request_handle:body()
              local bodyString = tostring(buffered:getBytes(0, buffered:length()))
              print("bodyString ->")
              print(bodyString)
              if string.match(bodyString, "valcirtest") then
                print("starting http_Call")
                local responseHeaders, responseBody = request_handle:httpCall(
                  "thirdparty",
                  {
                    [":method"] = "POST",
                    [":path"] = "/decrypt",
                    [":authority"] = "keycloack-dev-admin.eva.bot",
                    [":scheme"] = "https",
                    ["content-type"] = "application/json",
                    ["content-length"] = bodyString:len(),
                  },
                  bodyString,
                  3000)
                print("request finished")
                print("responseHeaders -> ")
                print(responseHeaders)
                print(responseHeaders[":status"])
                print("responseBody -> ")
                print(responseBody)
                local content_length = request_handle:body():setBytes(responseBody)
                request_handle:headers():replace("content-length", content_length)
              else
                print("no match")
              end
            end
  - applyTo: CLUSTER
    match:
      context: SIDECAR_OUTBOUND
    patch:
      operation: ADD
      value: # cluster specification
        name: thirdparty
        connect_timeout: 1.0s
        type: STRICT_DNS
        dns_lookup_family: V4_ONLY
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: thirdparty
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    protocol: TCP
                    address: keycloack-dev-admin.eva.bot
                    port_value: 443
The response error is:
503
responseBody ->
upstream connect error or disconnect/reset before headers. reset reason: connection termination
I'm using Istio v1.11.4.
TLS should be configured on your "thirdparty" cluster by adding the following to your cluster config:
transport_socket:
  name: envoy.transport_sockets.tls
To add to @koffi-kodjo's answer, you also need to specify the typed_config property. The transport_socket node should be placed at the same level as the name: thirdparty node.
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
References:
https://github.com/envoyproxy/envoy/issues/11582#issuecomment-646427632
https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/tls.proto.html#extensions-transport-sockets-tls-v3-upstreamtlscontext
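Putting both answers together, the CLUSTER patch from the question would then look roughly like this (a sketch; the sni line is an assumption, commonly needed when the upstream host serves multiple certificates):

- applyTo: CLUSTER
  match:
    context: SIDECAR_OUTBOUND
  patch:
    operation: ADD
    value:
      name: thirdparty
      connect_timeout: 1.0s
      type: STRICT_DNS
      dns_lookup_family: V4_ONLY
      lb_policy: ROUND_ROBIN
      # TLS towards the upstream; without this Envoy speaks plain TCP to port 443
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
          sni: keycloack-dev-admin.eva.bot
      load_assignment:
        cluster_name: thirdparty
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  protocol: TCP
                  address: keycloack-dev-admin.eva.bot
                  port_value: 443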

Getting "internal server error" on passing binary data to AWS Lambda function deployed using serverless framework and apigw-binary plugin

What I'm trying
Passing binary data via Lambda integration in API Gateway; Lambda returns text.
Issue
The function returns the desired output when API Gateway is configured from the console. To implement it using the Serverless Framework, I installed the serverless-apigw-binary plugin. The required binary types show up under API Gateway > Settings > Binary Media Types. However, on calling the API I get "internal server error". The function works properly on application/json input. After enabling/disabling Lambda proxy integration and adding mappings via the console, I get the correct output.
serverless.yml file
org: ------
app: ---------
service: ---------
frameworkVersion: ">=1.34.0 <2.0.0"

plugins:
  - serverless-python-requirements
  - serverless-offline
  - serverless-apigw-binary

provider:
  name: aws
  runtime: python3.7 # fixed with pipenv
  region: us-east-1
  memorySize: 128
  timeout: 60
  profile: ----

custom:
  pythonRequirements:
    usePipenv: true
    useDownloadCache: true
    useStaticCache: true
  apigwBinary:
    types: # list of mime-types
      - 'application/octet-stream'
      - 'application/zip'

functions:
  main:
    handler: handler.main
    events:
      - http:
          path: ocr
          method: post
          integration: lambda
          request:
            passThrough: WHEN_NO_TEMPLATES
            template:
              application/zip: '
                {
                  "type": "zip",
                  "zip": "$input.body",
                  "lang": "$input.params(''lang'')",
                  "config": "$input.params(''config'')",
                  "output_type": "$input.params(''output_type'')"
                }'
              application/json: '
                {
                  "type": "json",
                  "image": $input.json(''$.image''),
                  "lang": "$input.params(''lang'')",
                  "config": "$input.params(''config'')",
                  "output_type": "$input.params(''output_type'')"
                }'
              application/octet-stream: '
                {
                  "type": "img_file",
                  "image": "$input.body",
                  "lang": "$input.params(''lang'')",
                  "config": "$input.params(''config'')",
                  "output_type": "$input.params(''output_type'')"
                }'
handler.py
def main(event, context):
    # 'event' holds the payload built by the API Gateway mapping templates above
    txt = "ocr output"  # placeholder: the real code runs OCR on the input and returns text
    return txt
Edit
I compared the Swagger definitions and found this:
1. API generated from the console (working)
paths:
  /ocr:
    post:
      consumes:
      - "application/octet-stream"
      produces:
      - "application/json"
      responses:
2. API generated from the Serverless Framework
paths:
  /ocr:
    post:
      consumes:
      - "application/x-www-form-urlencoded"
      - "application/zip"
      - "application/octet-stream"
      - "application/json"
      responses:
The produces: - "application/json" entry is missing. How do I add it in serverless?

ELK Heartbeat dashboard: add ID

How can I add an ID field to the Kibana / Uptime dashboard?
Installed version: 7.3. The dashboard shows "Name" as one of the column headers. You can specify what is displayed there in the monitor configuration:
- type: http
  name: 'QA.Service - THIS HERE'
  enabled: true
  schedule: '@every 5m'
  urls: ["http://checkstatus/blah/blah"]
  check.response:
    status: 200
    json:
      - description: Json Response
        condition:
          equals:
            Status: Ok
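If it is a stable monitor ID (rather than the display name) that you need, newer Heartbeat versions also accept a per-monitor id setting; a sketch, worth verifying against your Heartbeat version:

- type: http
  id: qa-service            # stable identifier surfaced by the Uptime app
  name: 'QA.Service - THIS HERE'
  enabled: true
  schedule: '@every 5m'
  urls: ["http://checkstatus/blah/blah"]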

How to access service destination from approuter?

It seems that I configured my approuter successfully:
(screenshot: Approuter)
I gave a destination to my service in the SCP Cockpit:
(screenshot: destination config in SCP Cockpit)
And I maintained the destination in the xs-app.json:
{
  "welcomeFile": "/webapp/index.html",
  "authenticationMethod": "route",
  "logout": {
    "logoutEndpoint": "/do/logout"
  },
  "routes": [
    {
      "source": "/destination",
      "target": "/",
      "destination": "service-destination"
    }
  ]
}
My question is now: how can I access my service destination via the approuter?
Shouldn't it be something like this:
https://qfrrz1oj5pilzrw8zations-approuter.cfapps.eu10.hana.ondemand.com/webapp/index.html/destination
(screenshot: Accessing service via Approuter)
...it returns Not Found.
Any idea what I'm doing wrong here?
This is my mta.yaml (if relevant):
ID: oDataAuthorizations
_schema-version: '2.1'
version: 0.0.1
modules:
  - name: oDataAuthorizations-db
    type: hdb
    path: db
    parameters:
      memory: 256M
      disk-quota: 256M
    requires:
      - name: oDataAuthorizations-hdi-container
  - name: oDataAuthorizations-srv
    type: java
    path: srv
    parameters:
      memory: 1024M
    provides:
      - name: srv_api
        properties:
          url: '${default-url}'
    requires:
      - name: oDataAuthorizations-hdi-container
        properties:
          JBP_CONFIG_RESOURCE_CONFIGURATION: '[tomcat/webapps/ROOT/META-INF/context.xml: {"service_name_for_DefaultDB" : "~{hdi-container-name}"}]'
      - name: xsuaa-auto
  - name: approuter
    type: html5
    path: approuter
    parameters:
      disk-quota: 256M
      memory: 256M
    build-parameters:
      builder: grunt
    requires:
      - name: dest_oDataAuthorizations
      - name: srv_api
        group: destinations
        properties:
          name: service-destination
          url: '~{url}'
          forwardAuthToken: true
      - name: xsuaa-auto
resources:
  - name: oDataAuthorizations-hdi-container
    type: com.sap.xs.hdi-container
    properties:
      hdi-container-name: '${service-name}'
  - name: xsuaa-auto
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./cds-security.json
      service-plan: application
      service: xsuaa
      config:
        xsappname: xsuaa-auto
        tenant-mode: dedicated
  - name: dest_oDataAuthorizations
    parameters:
      service-plan: lite
      service: destination
    type: org.cloudfoundry.managed-service
You have two hosts:
approuter
srv
The problem:
https://approuter/destination/ will proxy to https://srv/
Notice the root path in the URL: the path segment of your destination is ignored by the approuter. Instead, it looks at the routes[0].target declaration of your xs-app.json file.
The symptom:
https://srv/ redirects (307) to /odata/v2, and so does https://approuter/destination/
https://approuter/odata/v2/ does not exist (404), since no such route was defined in your xs-app.json
https://approuter/destination/odata/v2/ gives you the expected response
The solution:
Adapt your xs-app.json to refer to the correct target endpoint path:
"routes": [
{
"source": "/destination",
"target": "/odata/v2",
"destination": "service-destination"
}
Follow-up
Since your srv application statically references links with the absolute path /odata/v2, you would either have to update each link in srv to use relative paths, or use "/odata/v2/" as your approuter route source to mirror the target. In the latter case you would lose the "/destination" path.
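A third option that keeps the /destination prefix while still forwarding sub-paths is a capture-group route; a sketch, assuming the approuter's regular-expression route matching:

"routes": [
  {
    "source": "^/destination/(.*)$",
    "target": "/odata/v2/$1",
    "destination": "service-destination"
  }
]

With this, for example, https://approuter/destination/$metadata would be forwarded to /odata/v2/$metadata on srv.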

Runtime config variable in Google Deployment Manager

I cannot create a Google Deployment Manager runtime config variable:
resources:
- name: star-config
  type: runtimeconfig.v1beta1.config
  properties:
    name: star-config
- name: igurl_variable
  type: runtimeconfig.v1beta1.variable
  properties:
    name: igurl_variable
    value: 'trek'
    parent: $(ref.star-config.name)
I checked the logs and I see that the status is set to BAD_REQUEST when I create the above deployment.
Audit log:
status: {
  message: "BAD_REQUEST"
}
What could be the reason for the error ?
You should try with the properties fields as shown in the official documentation for both the config and variable resources.
The resource file should be something like:
resources:
- name: star-config
  type: runtimeconfig.v1beta1.config
  properties:
    config: star-config
- name: igurl_variable
  type: runtimeconfig.v1beta1.variable
  properties:
    variable: igurl_variable
    text: 'trek'
    parent: $(ref.star-config.name)
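Once the deployment succeeds, you can check the variable from the CLI; a sketch using the standard gcloud beta runtime-config commands:

gcloud beta runtime-config configs variables get-value igurl_variable --config-name star-config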
