I am creating an application where I need to trigger an email on a particular event.
When I run the following command,
python -m elastalert.elastalert --verbose --rule myrules\myrule.yml
I get the following error:
ERROR:root:Error while running alert email: Error connecting to SMTP host: [Error 10013] An attempt was made to access a socket in a way forbidden by its access permissions
Here is the content of my rule file:
es_host: localhost
es_port: 9200
name: Log Level Test
type: frequency
index: testindexv4
num_events: 1
timeframe:
  hours: 4
filter:
- term:
    log_level.keyword: "ERROR"
- query:
    query_string:
      query: "log_level.keyword: ERROR"
alert:
- "email"
email:
- "<myMailId>@gmail.com"
Here is the content of my config.yaml file:
rules_folder: myrules
run_every:
  seconds: 2
buffer_time:
  seconds: 10
es_host: localhost
es_port: 9200
writeback_index: elastalert_status
alert_time_limit:
  days: 2
Here are my SMTP settings:
alert:
- email
email:
- "<myMailId>@gmail.com"
smtp_host: "smtp.gmail.com"
smtp_port: 465
smtp_ssl: true
from_addr: "<otherMailId>@gmail.com"
smtp_auth_file: "smtp_auth_user.yaml"
Here is the content of the smtp_auth_user file:
user: "<myMailId>#gmail.com"
password: "<password>"
What change do I need to make to resolve the issue?
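Since "Error 10013" (WSAEACCES) is a Windows socket-permission error, the rule file itself may well be fine: this error typically means a firewall, antivirus, or proxy policy is blocking the outbound connection. Before editing the YAML, it can help to confirm the SMTP port is reachable at all. A minimal sketch using only the standard library:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a plain TCP connection; False means blocked, closed, or unresolvable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example checks (assume outbound internet access from the same machine
# that runs ElastAlert):
#   can_reach("smtp.gmail.com", 465)  # implicit-TLS port used with smtp_ssl: true
#   can_reach("smtp.gmail.com", 587)  # STARTTLS alternative
```

If both checks return False from that machine, the fix belongs in the Windows firewall/antivirus configuration (or in Gmail-side settings such as an app password), not in the ElastAlert rule.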
I am trying to set up a local mailserver and send mail in Sylius using Swiftmailer. Here is my swiftmailer.yaml config:
swiftmailer:
    transport: 'smtp'
    auth_mode: login
    username: 'test@dibdrop.dev'
    password: 'test'
    disable_delivery: false
And my docker-compose.yml for docker-mailserver:
services:
  mailserver:
    image: docker.io/mailserver/docker-mailserver:latest
    container_name: mailserver
    # If the FQDN for your mail-server is only two labels (eg: example.com),
    # you can assign this entirely to `hostname` and remove `domainname`.
    hostname: mail
    domainname: dibdrop.dev
    env_file: mailserver.env
    # More information about the mail-server ports:
    # https://docker-mailserver.github.io/docker-mailserver/edge/config/security/understanding-the-ports/
    # To avoid conflicts with yaml base-60 float, DO NOT remove the quotation marks.
    ports:
      - "25:25"    # SMTP  (explicit TLS => STARTTLS)
      - "143:143"  # IMAP4 (explicit TLS => STARTTLS)
      - "465:465"  # ESMTP (implicit TLS)
      - "587:587"  # ESMTP (explicit TLS => STARTTLS)
      - "993:993"  # IMAP4 (implicit TLS)
    volumes:
      - ./docker-data/dms/mail-data/:/var/mail/
      - ./docker-data/dms/mail-state/:/var/mail-state/
      - ./docker-data/dms/mail-logs/:/var/log/mail/
      - ./docker-data/dms/config/:/tmp/docker-mailserver/
      - /etc/localtime:/etc/localtime:ro
    restart: always
    stop_grace_period: 1m
    cap_add:
      - NET_ADMIN
    healthcheck:
      test: "ss --listening --tcp | grep -P 'LISTEN.+:smtp' || exit 1"
      timeout: 3s
      retries: 0
I can connect to the mailserver without problem using `telnet smtp.localhost 25`, but when I try to send via Sylius the output is:
Connection could not be established with host localhost :stream_socket_client(): Unable to connect to localhost:25 (Address not available)
I have also tried setting the host to 'smtp.localhost' instead of 'localhost', but it didn't change anything.
I'd appreciate any comments that help me understand better how mailservers work and why it's not working in my situation.
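One thing worth checking (a hypothesis based on how Docker networking behaves, not a confirmed fix): inside a container, `localhost` resolves to that container itself, not to the host or to the mailserver container. If Sylius also runs in a container on the same compose network, the transport host should be the compose service name. A sketch, assuming the service is named `mailserver` as in the compose file above:

```yaml
# swiftmailer.yaml (sketch) - point the transport at the compose service
# name; inside a container, "localhost" is the container itself
swiftmailer:
    transport: 'smtp'
    host: 'mailserver'    # DNS name of the docker-mailserver service
    port: 587             # submission port (STARTTLS)
    auth_mode: login
    username: 'test@dibdrop.dev'
    password: 'test'
```

Note that the telnet test from the host works because port 25 is published to the host; it says nothing about whether another container can reach the mailserver via `localhost`.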
I'm running a basic ACL creation on Ansible but get this error:
TASK [Merge provided configuration with device configuration] ********************************************************************
fatal: [192.168.0.140]: FAILED! => {"changed": false, "msg": "sh access-list\r\n ^\r\nERROR: % Invalid input detected at '^' marker.\r\n\rASA> "}
---
- name: "ACL TEST 1"
  hosts: ASA
  connection: local
  gather_facts: false
  collections:
    - cisco.asa
  tasks:
    - name: Merge provided configuration with device configuration
      cisco.asa.asa_acls:
        config:
          acls:
            - name: purple_access_in
              acl_type: extended
              aces:
                - grant: permit
                  line: 1
                  protocol_options:
                    tcp: true
                  source:
                    address: 10.0.3.0
                    netmask: 255.255.255.0
                  destination:
                    address: 52.58.110.120
                    netmask: 255.255.255.255
                    port_protocol:
                      eq: https
                  log: default
        state: merged
The hosts file is:
[ASA]
192.168.0.140
[ASA:vars]
ansible_user=admin
ansible_ssh_pass=admin
ansible_become_method=enable
ansible_become_pass=cisco
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.asa.asa
ansible_python_interpreter=python
There's not much to the code, but I am struggling to get past the error. I don't even need the "sh access-list" output.
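One observation (a guess, not a verified fix): the `ASA>` prompt in the failure message is the unprivileged prompt, which suggests the session never entered enable mode before running show commands; `ansible_become_method=enable` alone does not turn become on. A sketch of the inventory group vars with become explicitly enabled:

```ini
[ASA:vars]
ansible_user=admin
ansible_ssh_pass=admin
ansible_become=true
ansible_become_method=enable
ansible_become_pass=cisco
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.asa.asa
ansible_python_interpreter=python
```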
I have installed ElastAlert.
Below is my config and yaml file configuration:
Config file :
rules_folder: rules
run_every:
  minutes: 15
buffer_time:
  minutes: 15
es_host: ip_address(#####)
es_port: 9200
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
alert_time_limit:
  days: 2
logging:
  version: 1
  incremental: false
  disable_existing_loggers: false
  formatters:
    logline:
      format: '%(asctime)s %(levelname)+8s %(name)+20s %(message)s'
  handlers:
    console:
      class: logging.StreamHandler
      formatter: logline
      level: DEBUG
      stream: ext://sys.stderr
    file:
      class: logging.FileHandler
      formatter: logline
      level: DEBUG
      filename: elastalert.log
  loggers:
    elastalert:
      level: WARN
      handlers: []
      propagate: true
    elasticsearch:
      level: WARN
      handlers: []
      propagate: true
Example_frequency.yaml file:
es_host: ip address(####)
es_port: 9200
name: FaultExceptions
type: frequency
index: logstash_*
num_events: 5
timeframe:
  minutes: 15
filter:
- query:
    query_string:
      query: "ErrorGroup: Fault Exception"
alert:
- "email"
email:
- "abc@gmail.com"
I am getting the mail every 15 minutes, but the data does not match the filter, which should only match documents whose ErrorGroup is Fault Exception.
Please help me understand this; I have been working on it for the last 4 days. Thanks in advance.
Hope this is not too late: use the --es_debug_trace command-line option. It shows the exact query being sent, as a curl command:
python3 -m elastalert.elastalert --verbose --rule your_rule_to_test.yaml --es_debug_trace /tmp/elastalert_curl.log
The curl command in /tmp/elastalert_curl.log can then be run in a terminal to see the output, or tweaked to see what went wrong. You can also paste it into Kibana Dev Tools to test. Also confirm that ErrorGroup is at the root level of the indexed documents, and try ErrorGroup.keyword.
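For comparison, a minimal, correctly formed filter for an exact phrase might look like this (a sketch; whether you need the .keyword sub-field depends on your mapping). Note that in YAML a list item needs a space after the dash (`- query:`), and a multi-word value inside a query_string should itself be quoted:

```yaml
filter:
- query:
    query_string:
      query: 'ErrorGroup.keyword: "Fault Exception"'
```

Without the inner quotes, `ErrorGroup: Fault Exception` searches ErrorGroup for `Fault` and any field for `Exception`, which would explain alerts firing on non-matching documents.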
I am trying to configure a stats sink to collect stats into statsd.
I have configured envoy.yaml as follows:
admin:
  access_log_path: /logs/envoy_access.log
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 8001
stats_sinks:
  name: envoy.statsd
  config:
    tcp_cluster_name: statsd-exporter
static_resources:
  ...
  clusters:
  - name: app
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: {{appName}}
        port_value: {{appPort}}
  - name: statsd-exporter
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: statsd_exporter
        port_value: 9125
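As an aside, `stats_sinks` in the Envoy bootstrap schema is a list, so the sink entry is usually written as a list item (a sketch of the same sink):

```yaml
stats_sinks:
- name: envoy.statsd
  config:
    tcp_cluster_name: statsd-exporter
```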
statsd is built as a container within the same Docker network.
When I run the Docker containers with Envoy and statsd, Envoy shows the following error:
proxy_1 | [2019-05-06 04:50:38.006][27][info][main] [source/server/server.cc:516] exiting
proxy_1 | tcp statsd: node 'id' and 'cluster' are required. Set it either in 'node'
config or via --service-node and --service-cluster options.
template-starter-windows_proxy_1 exited with code 1
How do I resolve this error?
Update
I was able to resolve the error by setting the --service-cluster and --service-node parameters for envoy command:
envoy -c /etc/envoy/envoy.yaml --service-cluster 'front-envoy' --service-node 'front-envoy'
I am not sure why using the statsd sink would require these parameters to be set, and the Envoy documentation does not mention this.
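For reference, the same identity can be supplied in the bootstrap file instead of on the command line, via the top-level `node` field (the values here are arbitrary examples, matching the flags used above):

```yaml
node:
  id: front-envoy
  cluster: front-envoy
```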
I would like to install/download the HLF binaries, without the images and fabric-samples. How do I do that?
This is what I've tried so far:
I've followed the instructions on https://hyperledger-fabric.readthedocs.io/en/release-1.4/install.html, but that also installs the images (which is unwanted).
I've looked into the HLF repository, but the /bin/ directory is absent there, and a name search for 'configtxgen' and others yielded no results other than it being used inside other scripts in the repo.
I've googled for any mention of a binary-only install of HLF, without positive results.
The desired result would be a CLI command with which I can suppress the installation of images, or something similar.
I am also in the process of setting up Fabric without Docker images.
This link helped me a lot, although it does not show how to set up the orderer and peer on the host machine.
Following are my configuration and the steps I followed to run the orderer and peer on the host machine (make sure you have installed all the prerequisites for Hyperledger Fabric):
First clone the fabric repository and run make:
git clone https://github.com/hyperledger/fabric.git
# cd into the fabric folder and run
make release
The above will generate binaries in the release folder for your host machine:
fabric
|
-- release
|
-- linux-amd64
|
-- bin
Copy this bin folder into a new folder mynetwork and create the following configuration files:
mynetwork
|
-- bin
-- crypto-config.yaml
-- configtx.yaml
-- orderer.yaml
-- core.yaml
Following are the configurations that I am using.
crypto-config.yaml
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
        SANS:
          - "localhost"
          - "127.0.0.1"
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    EnableNodeOUs: true
    Template:
      Count: 1
      SANS:
        - "localhost"
        - "127.0.0.1"
    Users:
      Count: 1
Next, open a terminal (let's call it Terminal-1), cd into the mynetwork folder, and run cryptogen to generate the assets and keys:
./bin/cryptogen generate --config=./crypto-config.yaml
The above will create a crypto-config folder in mynetwork containing all the network assets, in this case for the orderer and peer organizations:
mynetwork
|
-- crypto-config
|
-- ordererOrganizations
-- peerOrganizations
Next you need to create configtx.yaml
Organizations:
  - &OrdererOrg
    Name: OrdererOrg
    ID: OrdererMSP
    MSPDir: crypto-config/ordererOrganizations/example.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('OrdererMSP.admin')"
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
      Writers:
        Type: Signature
        Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
      Admins:
        Type: Signature
        Rule: "OR('Org1MSP.admin')"
    AnchorPeers:
      - Host: 127.0.0.1
        Port: 7051
Capabilities:
  Channel: &ChannelCapabilities
    V1_3: true
  Orderer: &OrdererCapabilities
    V1_1: true
  Application: &ApplicationCapabilities
    V1_3: true
    V1_2: false
    V1_1: false
Application: &ApplicationDefaults
  Organizations:
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
  Capabilities:
    <<: *ApplicationCapabilities
Orderer: &OrdererDefaults
  OrdererType: solo
  Addresses:
    - orderer:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Organizations:
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
    BlockValidation:
      Type: ImplicitMeta
      Rule: "ANY Writers"
Channel: &ChannelDefaults
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
  Capabilities:
    <<: *ChannelCapabilities
Profiles:
  OneOrgOrdererGenesis:
    <<: *ChannelDefaults
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
      Capabilities:
        <<: *OrdererCapabilities
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
  OneOrgChannel:
    Consortium: SampleConsortium
    <<: *ChannelDefaults
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
      Capabilities:
        <<: *ApplicationCapabilities
Then, on Terminal-1, run the following commands in sequence:
export FABRIC_CFG_PATH=$PWD
mkdir channel-artifacts
./bin/configtxgen -profile OneOrgOrdererGenesis -channelID myfn-sys-channel -outputBlock ./channel-artifacts/genesis.block
export CHANNEL_NAME=mychannel
./bin/configtxgen -profile OneOrgChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME
./bin/configtxgen -profile OneOrgChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP
Next create orderer.yaml, and change the certificate paths according to your host and folder location.
General:
  LedgerType: file
  ListenAddress: 127.0.0.1
  ListenPort: 7050
  TLS:
    Enabled: true
    PrivateKey: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
    Certificate: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
    RootCAs:
      - /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt
    ClientAuthRequired: false
  Keepalive:
    ServerMinInterval: 60s
    ServerInterval: 7200s
    ServerTimeout: 20s
  GenesisMethod: file
  GenesisProfile: OneOrgOrdererGenesis
  GenesisFile: channel-artifacts/genesis.block
  LocalMSPDir: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp
  LocalMSPID: OrdererMSP
  Authentication:
    TimeWindow: 15m
FileLedger:
  Location: /home/fabric-release/data/orderer
  Prefix: hyperledger-fabric-ordererledger
Operations:
  ListenAddress: 127.0.0.1:8443
  TLS:
    Enabled: true
    Certificate: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
    PrivateKey: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
    ClientAuthRequired: false
    ClientRootCAs:
      - crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
Start the orderer on Terminal-1:
./bin/orderer
Next open another terminal (Terminal-2) and go to the mynetwork folder. Create core.yaml (similarly, you'll need to change the certificate and key paths):
peer:
  id: peer1
  networkId: myfn
  listenAddress: 127.0.0.1:7051
  address: 127.0.0.1:7051
  addressAutoDetect: false
  gomaxprocs: -1
  keepalive:
    minInterval: 60s
    client:
      interval: 60s
      timeout: 20s
    deliveryClient:
      interval: 60s
      timeout: 20s
  gossip:
    bootstrap: 127.0.0.1:7051
    externalEndpoint: 127.0.0.1:7051
    useLeaderElection: true
    orgLeader: false
  tls:
    enabled: true
    clientAuthRequired: false
    cert:
      file: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
    key:
      file: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
    rootcert:
      file: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
    clientRootCAs:
      file:
        - crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
  authentication:
    timewindow: 15m
  fileSystemPath: /home/fabric-release/data
  mspConfigPath: /home/fabric-release/mynetwork/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
  localMspId: Org1MSP
  client:
    connTimeout: 3s
  deliveryclient:
    reconnectTotalTimeThreshold: 3600s
    connTimeout: 3s
  profile:
    enabled: false
    listenAddress: 0.0.0.0:6060
  handlers:
    authFilters:
      - name: DefaultAuth
      - name: ExpirationCheck
    decorators:
      - name: DefaultDecorator
    endorsers:
      escc:
        name: DefaultEndorsement
        library:
    validators:
      vscc:
        name: DefaultValidation
        library:
  discovery:
    enabled: true
    authCacheEnabled: true
    authCacheMaxSize: 1000
    authCachePurgeRetentionRatio: 0.75
    orgMembersAllowedAccess: false
vm:
  endpoint: unix:///var/run/docker.sock
  docker:
    tls:
      enabled: false
      ca:
        file:
      cert:
        file:
      key:
        file:
    attachStdout: false
    hostConfig:
      NetworkMode: host
      Dns:
      # - 192.168.0.1
      LogConfig:
        Type: json-file
        Config:
          max-size: "50m"
          max-file: "5"
      Memory: 2147483648
chaincode:
  id:
    path:
    name:
  builder: $(DOCKER_NS)/fabric-ccenv:latest
  pull: true
  java:
    runtime: $(DOCKER_NS)/fabric-javaenv:$(ARCH)-1.4.1
    #runtime: $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)
  startuptimeout: 300s
  executetimeout: 30s
  mode: net
  keepalive: 0
  system:
    cscc: enable
    lscc: enable
    escc: enable
    vscc: enable
    qscc: enable
  logging:
    level: info
    shim: warning
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
ledger:
  blockchain:
  state:
    stateDatabase: goleveldb
    totalQueryLimit: 100000
    couchDBConfig:
      couchDBAddress: 127.0.0.1:5984
      username:
      password:
      maxRetries: 3
      maxRetriesOnStartup: 12
      requestTimeout: 35s
      internalQueryLimit: 1000
      maxBatchUpdateSize: 1000
      warmIndexesAfterNBlocks: 1
      createGlobalChangesDB: false
  history:
    enableHistoryDatabase: true
Start the peer node on Terminal-2:
./bin/peer node start
Next open another terminal (Terminal-3) and go to the mynetwork folder. Run the following commands in sequence:
export CORE_PEER_MSPCONFIGPATH=/home/fabric-release/mynetwork/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=127.0.0.1:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/home/fabric-release/mynetwork/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CHANNEL_NAME=mychannel
Create the channel:
./bin/peer channel create -o 127.0.0.1:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Join the channel
./bin/peer channel join -b mychannel.block
If you made it this far, your network is up and you can start installing chaincodes. I am still in the process of experimenting with chaincodes; however, I hope this helps.
If you download this script (and set execute permission on it):
https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh
then run it with -h, you will see the options to suppress the download of binaries or Docker images.
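As a sketch (flag names as of recent versions of bootstrap.sh; confirm against the -h output for your release, since the flags have changed over time):

```shell
# -d  bypass Docker image download
# -s  bypass the fabric-samples repo clone
# -b  bypass binary download
# So, to fetch only the binaries:
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh -o bootstrap.sh
chmod +x bootstrap.sh
./bootstrap.sh -d -s
```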