Log Socket Response always displays undefined in artillery.io - socket.io

Here is my YAML file:
config:
  target: "http://localhost:3000"
  phases:
    - duration: 5
      arrivalRate: 1
  socketio:
    transports: ["websocket"]
scenarios:
  - name: "Emit an event"
    engine: socketio
    flow:
      - emit:
          channel: "chat-message"
          data: { "msg": "fooBar2" }
        response:
          channel: "chat-message"
          capture:
            json: "$"
            as: "res"
      - log: "{{res}}"
While running it, the log just prints undefined.
What am I doing wrong here?
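Not an authoritative fix, but a minimal variant worth trying, assuming the server echoes the emitted object back to the emitting client on the same chat-message channel (if the server emits a bare string, or only broadcasts to other clients, the capture has nothing to match and res stays undefined):

config:
  target: "http://localhost:3000"
  phases:
    - duration: 5
      arrivalRate: 1
  socketio:
    transports: ["websocket"]
scenarios:
  - name: "Emit an event and capture the echo"
    engine: socketio
    flow:
      - emit:
          channel: "chat-message"
          data: { "msg": "fooBar2" }
        response:
          channel: "chat-message"
          capture:
            json: "$.msg"   # assumption: the echoed payload is an object with a msg field
            as: "res"
      - log: "{{ res }}"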

Related

Can we capture the value of a cookie from API response in Artillery.io

I am trying to capture a cookie value from an API response, but the virtual user fails after running the command.
config:
  target: "URL"
  phases:
    - duration: 100
      arrivalRate: 10
scenarios:
  - name: "Login and fetch the token"
    flow:
      - log: "Login and Fetching the token"
      - post:
          url: "/api/v1/login/"
          json:
            username: "username"
            password: "password"
          capture:
            - cookie: "csrftoken"
              as: fooBody
      - log: "{{ fooBody }}"
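A sketch of one way to approach it, under the assumption that the token only arrives in the Set-Cookie response header: capture that header with Artillery's header capture and log it (the capture name and the exact header layout are assumptions to verify against your own response):

config:
  target: "URL"
  phases:
    - duration: 100
      arrivalRate: 10
scenarios:
  - name: "Login and inspect the Set-Cookie header"
    flow:
      - post:
          url: "/api/v1/login/"
          json:
            username: "username"
            password: "password"
          capture:
            # capture the raw response header; the value typically looks like
            # "csrftoken=abc123; Path=/; ..."
            - header: "set-cookie"
              as: rawCookie
      - log: "{{ rawCookie }}"

If the token is needed on later requests, the raw header still has to be trimmed down to just the csrftoken value before it can be sent back, for example via the request-level cookie map on a follow-up request.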

Google Cloud Ops Agent does not save logs when time_key is set

Applies to configuration: Logging processors
This setup works:
/etc/google-cloud-ops-agent/config.yaml
logging:
  receivers:
    app:
      type: files
      include_paths: [/www/logs/app-*.log]
  processors:
    monolog:
      type: parse_regex
      field: message
      regex: "^\[(?<time>[^\]]+)\]\s+(?<environment>\w+)\.(?<severity>\w+):\s+(?<msg>.*?)(?<context>{.*})?\s*$"
  service:
    pipelines:
      default_pipeline:
        receivers: [app]
        processors: [monolog]
I am trying to configure time_key, but then the logs do not show up in the log viewer. I call the API to check whether the logs are being processed and whether they are read and sent. They are, but they do not appear in the log viewer.
logging:
  receivers:
    app:
      type: files
      include_paths: [/www/logs/app-*.log]
  processors:
    monolog:
      type: parse_regex
      field: message
      regex: "^\[(?<time>[^\]]+)\]\s+(?<environment>\w+)\.(?<severity>\w+):\s+(?<msg>.*?)(?<context>{.*})?\s*$"
      time_key: time
      time_format: "%Y-%m-%d %H:%M:%S"
  service:
    pipelines:
      default_pipeline:
        receivers: [app]
        processors: [monolog]
Log structure:
[2021-10-06 12:12:08] production.EMERGENCY: Testing {"abc":"xyz"}
Parsed (first code example):
{
  jsonPayload: {
    context: "{"abc":"xyz"}"
    environment: "production"
    msg: "Testing "
    severity: "EMERGENCY"
    time: "2021-10-06 12:12:08"
  }
}
API call to check logs processed:
curl -s localhost:2020/api/v1/metrics | jq
From strptime(3) I also tried "%F %H:%M:%S".
What am I doing wrong?
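For reference, a variant to experiment with, offered as an assumption rather than a confirmed fix: if the entries are being ingested but their parsed timestamps are interpreted with the wrong offset, they can land outside the time range shown in the log viewer. Logging an explicit UTC offset and matching it with %z removes that ambiguity (this requires changing the application's timestamp format, e.g. to [2021-10-06 12:12:08 +0000]):

logging:
  processors:
    monolog:
      type: parse_regex
      field: message
      # same regex as above; the bracketed timestamp is assumed to now carry an offset
      regex: "^\[(?<time>[^\]]+)\]\s+(?<environment>\w+)\.(?<severity>\w+):\s+(?<msg>.*?)(?<context>{.*})?\s*$"
      time_key: time
      time_format: "%Y-%m-%d %H:%M:%S %z"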

Adding disableLogs: true will return 'An error occurred: IamRoleLambdaExecution - Policy statement must contain resources'

My serverless.yml file looks like this:
functions:
  login:
    handler: login.handler
    timeout: 30
    memorySize: 128
    description: Login and save new temp user profile
    events:
      - http:
          path: login
          method: any
          cors: true
I can run sls deploy without any problem. But as soon as I add disableLogs: true under the handler to make it
functions:
  login:
    handler: login.handler
    disableLogs: true
    timeout: 30
running sls deploy will return:
An error occurred: IamRoleLambdaExecution - Policy statement must contain resources. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: 72975262-b9ac-4729-a6d6-cd6cf90c5db0; Proxy: null).
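The message suggests that, with logging disabled, the generated IamRoleLambdaExecution policy ends up with a statement whose Resource list is empty, which IAM rejects as malformed; that reading is an assumption, not something confirmed by the Serverless docs. One workaround sketch under that assumption is to skip the generated role entirely and point the provider at an existing execution role (the ARN below is hypothetical):

provider:
  name: aws
  # assumption: supplying an existing role stops the framework from generating
  # the default IamRoleLambdaExecution role that the error refers to
  # (older Framework versions use provider.role instead of provider.iam.role)
  iam:
    role: arn:aws:iam::123456789012:role/my-lambda-execution-role

functions:
  login:
    handler: login.handler
    disableLogs: true
    timeout: 30
    memorySize: 128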

How do I add labels to a deployment?

I've found zero examples of this.
I have this template:
resources:
  - name: resource-name
    type: 'gcp-types/cloudfunctions-v1:projects.locations.functions'
    properties:
      labels:
        - key: testlabel1
          value: testlabel1value
        - key: testlabel2
          value: testlabel2value
      parent: projects/sdfsfsdf/locations/us-central1
      location: us-central1
      function: function-name
      sourceArchiveUrl: 'gs://sdfsfsdf/b50d36e265ec71d457bb7ba5cc13e44c.zip'
      environmentVariables:
        TEST_ENV_VAR: 'zzzzzzzzz'
      entryPoint: handler
      httpsTrigger: {}
      timeout: 60s
      availableMemoryMb: 256
      runtime: nodejs8
which produces this error:
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation
- code: CONDITION_NOT_MET
  location: /deployments/test-templates/resources/resource-name->$.properties
  message: |
    error: instance type (array) does not match any allowed primitive type (allowed: ["object"])
    level: "error"
    schema: {"loadingURI":"#","pointer":"/create/properties/labels"}
    instance: {"pointer":"/labels"}
    domain: "validation"
    keyword: "type"
    found: "array"
    expected: ["object"]
While there should have been an actual example of this in the docs, I was being dumb: as the error says, labels must be an object (a plain key/value map), not an array. This is the correct format:
resources:
  - name: resource-name
    type: 'gcp-types/cloudfunctions-v1:projects.locations.functions'
    properties:
      labels:
        testlabel1: testlabel1value
        testlabel2: testlabel2value
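Folded back into the full template from the question, that gives (same values as before, nothing new added):

resources:
  - name: resource-name
    type: 'gcp-types/cloudfunctions-v1:projects.locations.functions'
    properties:
      parent: projects/sdfsfsdf/locations/us-central1
      location: us-central1
      function: function-name
      sourceArchiveUrl: 'gs://sdfsfsdf/b50d36e265ec71d457bb7ba5cc13e44c.zip'
      entryPoint: handler
      runtime: nodejs8
      httpsTrigger: {}
      timeout: 60s
      availableMemoryMb: 256
      environmentVariables:
        TEST_ENV_VAR: 'zzzzzzzzz'
      labels:
        testlabel1: testlabel1value   # labels is a plain key/value map, not a list of key/value pairs
        testlabel2: testlabel2value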

Transfer HTTP endpoint metrics using the http module - Metricbeat

In order to ship the metrics endpoint (/metrics) of my Spring Boot app, I used the http module in Metricbeat. I followed the official doc on the elastic.io website to install and configure Metricbeat, but unfortunately the metrics were transported incorrectly even though the connection had been established.
I changed the fields.yml file to create a specific template for the http module only.
fields.yml
- key: http
  title: "HTTP"
  description: >
    HTTP module
  release: beta
  settings: ["ssl"]
  fields:
    - name: http
      type: group
      description: >
      fields:
        - name: request
          type: group
          description: >
            HTTP request information
          fields:
            - name: header
              type: object
              description: >
                The HTTP headers sent
            - name: method
              type: keyword
              description: >
                The HTTP method used
            - name: body
              type: keyword
              description: >
                The HTTP payload sent
        - name: response
          type: group
          description: >
            HTTP response information
          fields:
            - name: header
              type: object
              description: >
                The HTTP headers received
            - name: code
              type: keyword
              description: >
                The HTTP status code
              example: 404
            - name: phrase
              type: keyword
              example: Not found
              description: >
                The HTTP status phrase
            - name: body
              type: keyword
              description: >
                The HTTP payload received
        - name: json
          type: group
          description: >
            json metricset
          release: beta
          fields:
        - name: server
          type: group
          description: >
            server
          release: experimental
          fields:
metricbeat.yml
metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

metricbeat.modules:
#------------------------------- HTTP Module -------------------------------
- module: http
  metricsets: ["json"]
  period: 10s
  hosts: ["http://localhost:8080/metrics"]
  namespace: "test_metrics"
  method: "GET"
  enabled: true

setup.template.overwrite: true

output.elasticsearch:
  hosts: ["localhost:9200"]
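Since an edited fields.yml only has an effect if Metricbeat actually loads it and replaces the installed index template, one thing worth double-checking (an assumption about the setup, not a confirmed fix) is the template section of metricbeat.yml; a sketch:

setup.template.enabled: true
setup.template.overwrite: true
# assumption: only needed if the edited file does not live at the default
# ${path.config}/fields.yml location that Metricbeat loads on its own
setup.template.fields: "${path.config}/fields.yml"

output.elasticsearch:
  hosts: ["localhost:9200"]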
My app's metrics display (http://localhost:8080/metrics):
{
  "mem": 199405,
  "mem.free": 74297,
  "processors": 4,
  "instance.uptime": 45240231,
  "uptime": 45254636,
  "systemload.average": -1,
  "heap.committed": 154624,
  "heap.init": 131072,
  "heap.used": 80326,
  "heap": 1842688,
  "nonheap.committed": 45888,
  "nonheap.init": 2496,
  "nonheap.used": 44781,
  "nonheap": 0,
  "threads.peak": 31,
  "threads.daemon": 25,
  "threads.totalStarted": 35,
  "threads": 27,
  "classes": 6659,
  "classes.loaded": 6659,
  "classes.unloaded": 0,
  "gc.ps_scavenge.count": 24,
  "gc.ps_scavenge.time": 999,
  "gc.ps_marksweep.count": 1,
  "gc.ps_marksweep.time": 71,
  "httpsessions.max": -1,
  "httpsessions.active": 0,
  "gauge.response.metrics": 20,
  "gauge.response.unmapped": 6005,
  "gauge.response.login": 1,
  "gauge.response.star-star.favicon.ico": 1878,
  "counter.status.200.star-star.favicon.ico": 1,
  "counter.status.200.metrics": 30,
  "counter.status.302.unmapped": 3,
  "counter.status.200.login": 2
}
Previously I used Httpbeat and everything was great; the names of the fields in the Elasticsearch index were compatible ... Since I moved to the http module everything has changed. I'm using it to get the predefined dashboards in Kibana.
Any help please?
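My understanding, which is worth verifying against the actual documents in Kibana rather than taking as definitive: the http module's json metricset nests the scraped payload under the configured namespace, so the field names differ from the flat names Httpbeat produced, and dashboards built on the old names will not match. Roughly:

# module configuration (from above)
- module: http
  metricsets: ["json"]
  hosts: ["http://localhost:8080/metrics"]
  namespace: "test_metrics"

# assumed effect on field names:
#   Spring Boot key "mem"       -> indexed as http.test_metrics.mem
#   Spring Boot key "heap.used" -> indexed as http.test_metrics.heap.used
# whereas Httpbeat stored the payload under its own field names, so the
# predefined dashboards need their field references updated.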
