I am doing a load test for socket.io using Artillery
# my-scenario.yml
config:
  target: "http://localhost:4000"
  phases:
    - duration: 60
      arrivalRate: 10
  engines:
    socketio-v3: {transports: ['websocket']}
  socketio:
    query:
      address: "{{ $randomString() }}"
scenarios:
  - name: My sample scenario
    engine: socketio-v3
I need the address field to be a different random string of fixed length for every connection.
Currently, $randomString() generates only one random string, which is then reused for all connections, and I also can't control its length.
Thanks in advance!
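One workaround worth trying (a sketch, not a confirmed fix) is Artillery's custom JavaScript processor: a hook can set a fresh context variable for every virtual user, and the query can reference that variable instead of $randomString(). This assumes the socketio-v3 engine runs the standard beforeScenario hook and evaluates the query template per connection; functions.js and setAddress are names made up for the example:

# my-scenario.yml
config:
  target: "http://localhost:4000"
  processor: "./functions.js"  # hypothetical helper file, see below
  phases:
    - duration: 60
      arrivalRate: 10
  engines:
    socketio-v3: {transports: ['websocket']}
  socketio:
    query:
      address: "{{ address }}"  # set per virtual user by the hook
scenarios:
  - name: My sample scenario
    engine: socketio-v3
    beforeScenario: "setAddress"

// functions.js
const crypto = require('crypto');

// Generates a fresh 16-character hex string for each virtual user.
function setAddress(context, events, done) {
  context.vars.address = crypto.randomBytes(8).toString('hex');
  return done();
}

module.exports = { setAddress };

Adjusting the randomBytes argument controls the length (n bytes yield 2*n hex characters).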
I want to drop lines in Promtail using an AND condition from two different JSON fields.
I have JSON log lines like this.
{"timestamp":"2022-03-26T15:40:41+00:00","remote_addr":"1.2.3.4","remote_user":"","request":"GET / HTTP/1.1","status": "200","body_bytes_sent":"939","request_time":"0.000","http_referrer":"http://5.6.7.8","http_user_agent":"user agent 1"}
{"timestamp":"2022-03-26T15:40:41+00:00","remote_addr":"1.2.3.4","remote_user":"","request":"GET /path HTTP/1.1","status": "200","body_bytes_sent":"939","request_time":"0.000","http_referrer":"http://5.6.7.8","http_user_agent":"user agent 1"}
{"timestamp":"2022-03-26T15:40:41+00:00","remote_addr":"1.2.3.4","remote_user":"","request":"GET / HTTP/1.1","status": "200","body_bytes_sent":"939","request_time":"0.000","http_referrer":"http://5.6.7.8","http_user_agent":"user agent 2"}
My local Promtail config looks like this.
clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: testing-my-job-drop
    pipeline_stages:
      - match:
          selector: '{job="my-job"}'
          stages:
            - json:
                expressions:
                  http_user_agent:
                  request:
            - drop:
                source: "http_user_agent"
                expression: "user agent 1"
            # I want this to be AND
            - drop:
                source: "request"
                expression: "GET / HTTP/1.1"
                drop_counter_reason: my_job_healthchecks
    static_configs:
      - labels:
          job: my-job
Using a Promtail config like this drops lines using OR from my two JSON fields.
How can I adjust my config so that I only drop lines where http_user_agent = user agent 1 AND request = GET / HTTP/1.1?
If you provide multiple options they will be treated like an AND clause, where each option has to be true to drop the log.
If you wish to drop with an OR clause, then specify multiple drop stages.
https://grafana.com/docs/loki/latest/clients/promtail/stages/drop/#drop-stage
Drop logs by time OR length

Would drop all logs older than 24h OR longer than 8kb bytes:

- json:
    expressions:
      time:
      msg:
- timestamp:
    source: time
    format: RFC3339
- drop:
    older_than: 24h
- drop:
    longer_than: 8kb

Drop logs by regex AND length

Would drop all logs that contain the word debug AND are longer than 1kb bytes:

- drop:
    expression: ".*debug.*"
    longer_than: 1kb
clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: testing-my-job-drop
    pipeline_stages:
      - match:
          selector: '{job="my-job"}'
          stages:
            - json:
                expressions:
                  http_user_agent:
                  request:
            - labels:
                http_user_agent:
                request:
            #### method 1
            - match:
                selector: '{http_user_agent="user agent 1"}'
                stages:
                  - drop:
                      source: "request"
                      expression: "GET / HTTP/1.1"
                      drop_counter_reason: my_job_healthchecks
                ## a line is dropped only when both conditions match
            #### method 2
            - match:
                selector: '{http_user_agent="user agent 1",request="GET / HTTP/1.1"}'
                action: drop
            #### method 3, in case a regex pattern is needed
            - match:
                selector: '{http_user_agent="user agent 1"} |~ "(?i).*GET / HTTP/1.1.*"'
                action: drop
    static_configs:
      - labels:
          job: my-job
Note that a match stage can contain nested match stages.
I want to schedule one Lambda via AWS EventBridge. The issue is that I want to read the number used in the ScheduleExpression from the SSM parameter GCHeartbeatInterval.
The code I used is below:
heartbeat-check:
  handler: groupconsultation/heartbeatcheck.handler
  description: ${self:custom.gitVersion}
  timeout: 15
  memorySize: 1536
  package:
    include:
      - groupconsultation/heartbeatcheck.js
      - shared/*
      - newrelic-lambda-wrapper.js
  events:
    - eventBridge:
        enabled: true
        schedule: rate(2 minutes)

resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: "1" # SSM parameter values must be strings
        Description: value in minutes; needs to be converted to seconds/milliseconds
Is this possible to achieve in serverless.yml?
The reason for reading the value from SSM is that this is a heartbeat service: the same value will be used by the frontend to send a heartbeat at a set interval, and the backend Lambda needs to be triggered after 2x the heartbeat interval.
It turns out it's not possible. The only solution was to pass the variable as a command-line argument, something like below.
custom:
  mySchedule: ${opt:mySchedule, 1} # Allow overrides from CLI
...
      schedule: ${self:custom.mySchedule}
...
resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: ${self:custom.mySchedule}
And even if we made the SSM approach work, we would still have to redeploy the application whenever the value changed, just as we do with this approach.
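With that setup, the schedule can be overridden at deploy time. For example (assuming a Serverless Framework version where ${opt:} picks up arbitrary CLI flags; v3 requires declaring custom CLI options first):

serverless deploy --mySchedule "rate(4 minutes)"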
We are running an ELK stack to aggregate all our logs and we have multiple systems. Currently, we have Filebeat configured to log to specific indices based on the system (SystemA, SystemB, SystemC).
Additionally, I would like to send all logs with level ERROR to another index, where I collect all errors across systems, but I can't figure out how to get Filebeat to send one message to multiple indices.
According to the documentation, the first condition that matches defines the index to be used, which sounds to me as if it's not possible to send a message that matches multiple patterns to multiple indices.
What I want to do:
output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS}'
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  index: "filebeat-external-%{+yyyy.MM.dd}"
  indices:
    - index: "filebeat-error-logs-%{+yyyy.MM.dd}"
      when:
        or:
          - equals:
              level: "ERROR"
          - equals:
              level: "error"
    - index: "filebeat-service-a-%{+yyyy.MM.dd}"
      when:
        regexp:
          container.name: "^service-a-"
    - index: "filebeat-service-b-%{+yyyy.MM.dd}"
      when:
        regexp:
          container.name: "^service-b-"
The only way I currently see is to have multiple indices per system and aggregate them in Kibana:
output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS}'
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  index: "filebeat-external-%{+yyyy.MM.dd}"
  indices:
    - index: "error-log-service-a-%{+yyyy.MM.dd}"
      when:
        and:
          - equals:
              level: "ERROR"
          - regexp:
              container.name: "^service-a-"
    - index: "service-log-service-a-%{+yyyy.MM.dd}"
      when:
        and:
          - not:
              equals:
                level: "ERROR"
          - regexp:
              container.name: "^service-a-"
But this would double the number of indices and duplicate configuration. Am I missing something here? Is there an easier way to have a general error index while still having errors go to the service-specific indices as well?
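As far as I know, Filebeat's Elasticsearch output does route each event to exactly one index (the first matching condition wins), so genuinely duplicating an event needs another hop. One option, sketched here under the assumption that Logstash can sit between Filebeat and Elasticsearch, is a pipeline with an unconditional output plus a conditional one (field names taken from the Filebeat config above; hosts and auth are placeholders):

output {
  # every event keeps going to its regular index
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-external-%{+yyyy.MM.dd}"
  }
  # errors are additionally copied to the cross-system error index
  if [level] == "ERROR" or [level] == "error" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "filebeat-error-logs-%{+yyyy.MM.dd}"
    }
  }
}

Otherwise, the per-service error indices from the second config can be aggregated in Kibana with an index pattern such as error-log-*, which avoids duplicating the events themselves.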
I'm scraping a Cisco device with the SNMP Exporter. Everything is OK, but I want to add the sysName OID to the SNMP information. How do I do that?
I tried to add a lookup like:
- source_indexes: [ifIndex]
  lookup: sysName
but got an error. Do you have any clue?
modules:
  # Default IF-MIB interfaces table with ifIndex.
  if_mib_dc:
    version: 3
    auth:
      username: xxx
      password: uuutt
      auth_protocol: MD5
      priv_protocol: AES
      security_level: 33dds2
      priv_password: slypt
    walk: [ifHCInOctets, ifHCOutOctets, ifOperStatus, ifInErrors, ifOutErrors, ifInDiscards, ifOutDiscards]
    lookups:
      - source_indexes: [ifIndex]
        lookup: ifAlias
      - source_indexes: [ifIndex]
        lookup: ifDescr
      - source_indexes: [ifIndex]
        # Use OID to avoid conflict with Netscaler NS-ROOT-MIB.
        lookup: 1.3.6.1.2.1.31.1.1.1.1 # ifName
    overrides:
      ifType:
        type: EnumAsInfo
I want to get the sysName information the same way I'm getting the ifAlias lookup.
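The lookup fails because sysName (1.3.6.1.2.1.1.5) is a scalar from the SNMP system group, not a column indexed by ifIndex, so it cannot be joined onto the interface table the way ifAlias can. A sketch of an alternative, assuming the usual snmp_exporter generator workflow: add sysName to the walk list instead,

walk: [sysName, ifHCInOctets, ifHCOutOctets, ifOperStatus, ifInErrors, ifOutErrors, ifInDiscards, ifOutDiscards]

String-valued objects such as sysName are then exported as a metric with value 1 and the string attached as a label, roughly sysName{sysName="my-switch"} 1, which can be joined onto the interface metrics in PromQL with something like:

ifHCInOctets * on(instance) group_left(sysName) sysName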
I'm getting an error on cf push when I try to push a yaml file for a memcached service broker:
#cf push
FAILED
Error reading manifest file:
yaml: [] mapping values are not allowed in this context at line 2, column 7
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:95 (0x2c0d87)
/usr/local/go/src/pkg/runtime/panic.c:248 (0x16276)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:144 (0x2c1609)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:161 (0x2c17e5)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:471 (0x2c3cd4)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:196 (0x2c1c6e)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:394 (0x2c336d)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:193 (0x2c1d38)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:171 (0x2c199e)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/cloudfoundry-incubator/candiedyaml/decode.go:137 (0x2c146d)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/manifest/manifest_disk_repository.go:82 (0xb13ef)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/manifest/manifest_disk_repository.go:50 (0xb0f72)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/manifest/manifest_disk_repository.go:33 (0xb0e13)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/manifest/manifest.go:1 (0xb300e)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/commands/application/push.go:377 (0x23e2c3)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/commands/application/push.go:356 (0x23e062)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/commands/application/push.go:120 (0x23ada2)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/command_runner/runner.go:50 (0xa70a9)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/command_runner/runner.go:1 (0xa73d4)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/tmp/cli_gopath/src/github.com/cloudfoundry/cli/cf/app/app.go:76 (0x8ecce)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/codegangsta/cli/command.go:101 (0xcb140)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/Godeps/_workspace/src/github.com/codegangsta/cli/app.go:125 (0xc9654)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/main/main.go:154 (0x3729)
/Users/pivotal/go-agent/pipelines/Mac-OSX-Testing/src/github.com/cloudfoundry/cli/main/main.go:91 (0x2eb9)
/usr/local/go/src/pkg/runtime/proc.c:220 (0x1804f)
/usr/local/go/src/pkg/runtime/proc.c:1394 (0x1a580)
So obviously something is wrong with my YAML file. But I don't know what. Here's what I have:
domain: cloudeast.mycompany.com
  nats: secret
    machines:
    - 10.10.100.10
    password: secret
    port: 4222
    user: nats
  networks:
    apps: default
    management: default
  memcache_broker:
    broker_password: secret
    memcache:
      vip: 10.100.100.100:11211
      servers:
      - 10.100.100.101:11211
      - 10.100.100.102:11211
    plans:
      small:
        name: small
        description: A small cache with no redundency
        free: true
  memcache_hazelcast:
    heap_size: 512M
    host:
      src_api: https://memcache-hazelcast.cf-deployment.com
    password: secret
    memcache:
      secret_key: secret
    hazelcast:
      max_cache_size: 268435456
    machines:
      zone1:
      - 10.100.100.101
      zone2:
      - 10.100.100.102
    plans:
      small:
        backup: 0
        async_backup: 1
        eviction_policy: LRU
        max_idle_seconds: 86400
        max_size_used_heap: 100
      medium:
        backup: 0
        async_backup: 1
        eviction_policy: LRU
        max_idle_seconds: 86400
        max_size_used_heap: 200
Could I get some help with why this yaml file isn't parsing correctly?
On the first line you have a scalar (domain) followed by a colon (:) followed by a scalar (cloudeast.mycompany.com). This means that at the top, non-indented level you start a mapping, where domain is a key scalar and cloudeast.mycompany.com a value scalar.
On the next line you should, once more, have a key scalar indented at the same level as the previous line (or you finish the top-level mapping with an end-of-document marker (...) or start a new document (---)). Instead you start an indented value that is a mapping, and the parser doesn't know what to do with that.
One can only guess what you are trying to do. If cloudeast.mycompany.com is the name of the domain, you should make it a new indented scalar value (with the key name); then the same indentation error occurs for the key machines and the following three keys. After that the file is OK:
domain:
  name: cloudeast.mycompany.com # scalar key "name" added
  nats: secret
  machines: # this scalar key and the next 3 dedented
  - 10.10.100.10
  password: secret
  port: 4222
  user: nats
  # from here everything as it was
  networks:
    apps: default
    management: default
  memcache_broker:
    broker_password: secret
    memcache:
      vip: 10.100.100.100:11211
      servers:
      - 10.100.100.101:11211
      - 10.100.100.102:11211
    plans:
      small:
        name: small
        description: A small cache with no redundency
        free: true
  memcache_hazelcast:
    heap_size: 512M
    host:
      src_api: https://memcache-hazelcast.cf-deployment.com
    password: secret
    memcache:
      secret_key: secret
    hazelcast:
      max_cache_size: 268435456
    machines:
      zone1:
      - 10.100.100.101
      zone2:
      - 10.100.100.102
    plans:
      small:
        backup: 0
        async_backup: 1
        eviction_policy: LRU
        max_idle_seconds: 86400
        max_size_used_heap: 100
      medium:
        backup: 0
        async_backup: 1
        eviction_policy: LRU
        max_idle_seconds: 86400
        max_size_used_heap: 200
Alternatively, you could simply leave out the cloudeast.mycompany.com value. In both cases the top level of the YAML file will be a mapping with a single key scalar, domain. The value for that key is again a mapping, with keys like nats, networks, and memcache_broker.
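The rule can be seen in miniature (a two-line illustration, not taken from the file above):

# invalid: "a" already has the scalar value 1, so the indented
# "b: 2" cannot start a mapping value for it
a: 1
  b: 2

# valid: leave "a" without a scalar value so that its value
# can be the nested mapping
a:
  b: 2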