Capistrano deploy - stuck at step 03 - ruby

I've been playing with Capistrano deploys. I managed to connect to my server, but now I am stuck at step 03, with no errors in the log, and I don't know what the next thing to try might be.
Help is highly appreciated. I don't want you to solve it for me, of course; I just want some guidance. Thank you.
$ cap production deploy --trace
** Invoke production (first_time)
** Execute production
** Invoke load:defaults (first_time)
** Execute load:defaults
** Invoke bundler:map_bins (first_time)
** Execute bundler:map_bins
** Invoke deploy (first_time)
** Execute deploy
** Invoke deploy:starting (first_time)
** Execute deploy:starting
** Invoke deploy:check (first_time)
** Execute deploy:check
** Invoke git:check (first_time)
** Invoke git:wrapper (first_time)
** Execute git:wrapper
00:00 git:wrapper
01 mkdir -p /tmp
✔ 01 vlmg@vlmg.mx 0.249s
Uploading /tmp/git-ssh-vlmg-forms-production-ubuntu.sh 100.0%
02 chmod 700 /tmp/git-ssh-vlmg-forms-production-ubuntu.sh
✔ 02 vlmg@vlmg.mx 0.055s
This is my deploy.rb
# config valid only for current version of Capistrano
lock '3.6.0'
set :application, 'vlmg-forms'
set :repo_url, 'git@github.com:jose-octomedia/ar-vlmg-contact-forms.git'
# Default branch is :master
# ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp
# Default deploy_to directory is /var/www/my_app_name
#set :deploy_to, '/'
# Default value for :scm is :git
# set :scm, :git
# Default value for :format is :airbrussh.
# set :format, :airbrussh
# You can configure the Airbrussh format using :format_options.
# These are the defaults.
# set :format_options, command_output: true, log_file: 'log/capistrano.log', color: :auto, truncate: :auto
# Default value for :pty is false
set :pty, true
# Default value for :linked_files is []
# append :linked_files, 'config/database.yml', 'config/secrets.yml'
# Default value for linked_dirs is []
# append :linked_dirs, 'log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'public/system'
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for keep_releases is 5
# set :keep_releases, 5
desc 'Restart application'
task :restart do
  on roles(:web), in: :sequence, wait: 5 do
    execute "service thin restart" ## -> line you should add
  end
end
And this is my production.rb file
# server-based syntax
# ======================
# Defines a single server with a list of roles and multiple properties.
# You can define all roles on a single server, or split them:
server 'vlmg.mx', user: 'vlmg', roles: %w{web}, primary: true, port: 15555
# server 'example.com', user: 'deploy', roles: %w{app web}, other_property: :other_value
# server 'db.example.com', user: 'deploy', roles: %w{db}
set :deploy_to, "/home/vlmg/subdomains/ar.vlmg.mx"
# role-based syntax
# ==================
# Defines a role with one or multiple servers. The primary server in each
# group is considered to be the first unless any hosts have the primary
# property set. Specify the username and a domain or IP for the server.
# Don't use `:all`, it's a meta role.
#role :app, %w{admin@example.com}, my_property: :my_value
role :web, %w{vlmg@vlmg.mx}
# role :db, %w{deploy#example.com}
# Configuration
# =============
# You can set any configuration variable like in config/deploy.rb
# These variables are then only loaded and set in this stage.
# For available Capistrano configuration variables see the documentation page.
# http://capistranorb.com/documentation/getting-started/configuration/
# Feel free to add new variables to customise your setup.
# Custom SSH Options
# ==================
# You may pass any option but keep in mind that net/ssh understands a
# limited set of options, consult the Net::SSH documentation.
# http://net-ssh.github.io/net-ssh/classes/Net/SSH.html#method-c-start
#
# Global options
# --------------
set :ssh_options, {
  keys: %w(/home/ubuntu/.ssh/id_rsa),
  forward_agent: true
  #auth_methods: %w(password)
}
#
# The server-based syntax can be used to override options:
# ------------------------------------
# server 'example.com',
# user: 'user_name',
# roles: %w{web app},
# ssh_options: {
# user: 'user_name', # overrides user setting above
# keys: %w(/home/ubuntu/.ssh/id_rsa),
# forward_agent: false,
## auth_methods: %w(publickey password)
# # password: 'please use keys'
# }
Edit: In response to the suggestion to enable the logs, these are the results.
Still nothing in the logs that could help solve this.
INFO ---------------------------------------------------------------------------
INFO START 2016-08-10 19:49:49 +0000 cap production deploy
INFO ---------------------------------------------------------------------------
INFO [204569e4] Running /usr/bin/env mkdir -p /tmp as vlmg@vlmg.mx
DEBUG [204569e4] Command: /usr/bin/env mkdir -p /tmp
INFO [9596edc0] Running /usr/bin/env mkdir -p /tmp as vlmg@vlmg.mx
DEBUG [9596edc0] Command: /usr/bin/env mkdir -p /tmp
INFO [9596edc0] Finished in 0.288 seconds with exit status 0 (successful).
DEBUG Uploading /tmp/git-ssh-vlmg-forms-production-ubuntu.sh 0.0%
INFO Uploading /tmp/git-ssh-vlmg-forms-production-ubuntu.sh 100.0%
INFO [1014be51] Running /usr/bin/env chmod 700 /tmp/git-ssh-vlmg-forms-production-ubuntu.sh as vlmg@vlmg.mx
DEBUG [1014be51] Command: /usr/bin/env chmod 700 /tmp/git-ssh-vlmg-forms-production-ubuntu.sh
INFO [1014be51] Finished in 0.057 seconds with exit status 0 (successful).
After the last command these are the results: basically the same, hanging for 20-30 seconds, then:
00:00 git:wrapper
01 mkdir -p /tmp
✔ 01 vlmg@vlmg.mx 0.265s
Uploading /tmp/git-ssh-vlmg-forms-production-ubuntu.sh 100.0%
02 chmod 700 /tmp/git-ssh-vlmg-forms-production-ubuntu.sh
✔ 02 vlmg@vlmg.mx 0.063s
(Backtrace restricted to imported tasks)
cap aborted!
Net::SSH::ConnectionTimeout: Net::SSH::ConnectionTimeout
Errno::ETIMEDOUT: Connection timed out - connect(2) for 109.73.225.169:22
Tasks: TOP => git:wrapper
(See full trace by running task with --trace)
This step 03 is solved, thanks. The solution was to specify the port in the SSH options too.
set :ssh_options, {
  keys: %w(/home/ubuntu/.ssh/id_rsa),
  forward_agent: true,
  port: 15555,
  #auth_methods: %w(password)
}
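For anyone hitting the same timeout: a quick way to confirm the SSH settings outside Capistrano (assuming the same key and the non-standard port from production.rb) is to test the connection directly:
# Should log in without prompting if the key and port are right
ssh -i /home/ubuntu/.ssh/id_rsa -p 15555 vlmg@vlmg.mx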
Now I have new errors, but I think those should go in another post.

Related

Telegraf SNMP plugin Error: IF-MIB::ifTable: Unknown Object Identifier

Steps followed to install the SNMP manager and agent on EC2:
sudo apt-get update
sudo apt-get install snmp snmp-mibs-downloader
sudo apt-get update
sudo apt-get install snmpd
I opened /etc/snmp/snmp.conf with sudo nano and commented out the following line:
#mibs :
Then I went into the configuration file and modified it as below:
sudo nano /etc/snmp/snmpd.conf
# Listen for connections from the local system only
#agentAddress udp:127.0.0.1:161      <--- commented this part out
# Listen for connections on all interfaces (both IPv4 and IPv6)
agentAddress udp:161,udp6:[::1]:161  <-- removed the comment from this line to make it work
Using the command below I can get SNMP data:
snmpwalk -v 2c -c public 127.0.0.1 .
From inside a Docker container I can also get the data:
snmpwalk -v 2c -c public host.docker.internal .
Docker-compose:
  telegraf_snmp:
    image: telegraf:1.22.1
    container_name: telegraf_snmp
    restart: always
    depends_on:
      - influxdb
    networks:
      - analytics
    extra_hosts:
      - "host.docker.internal:host-gateway"
    # ports:
    #   - "161:161/udp"
    volumes:
      - /mnt/telegraf/snmp:/var/lib/telegraf
      - ./etc/telegraf/snmp/:/etc/telegraf/snmp/
    env_file:
      - secrets.env
    environment:
      INFLUXDB_URL: http://influxdb:8086
    command:
      --config-directory /etc/telegraf/snmp/telegraf.d
      --config /etc/telegraf/snmp/telegraf.conf
    links:
      - influxdb
    logging:
      options:
        max-size: "10m"
        max-file: "3"
Telegraf Input conf:
[[inputs.snmp]]
## Agent addresses to retrieve values from.
## format: agents = ["<scheme://><hostname>:<port>"]
## scheme: optional, either udp, udp4, udp6, tcp, tcp4, tcp6.
## default is udp
## port: optional
## example: agents = ["udp://127.0.0.1:161"]
## agents = ["tcp://127.0.0.1:161"]
## agents = ["udp4://v4only-snmp-agent"]
# agents = ["udp://127.0.0.1:161"]
agents = ["udp://host.docker.internal:161"]
## Timeout for each request.
timeout = "15s"
## SNMP version; can be 1, 2, or 3.
version = 2
## SNMP community string.
community = "public"
## Agent host tag
# agent_host_tag = "agent_host"
## Number of retries to attempt.
retries = 3
## The GETBULK max-repetitions parameter.
# max_repetitions = 10
## SNMPv3 authentication and encryption options.
##
## Security Name.
# sec_name = "myuser"
## Authentication protocol; one of "MD5", "SHA", or "".
# auth_protocol = "MD5"
## Authentication password.
# auth_password = "pass"
## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
# sec_level = "authNoPriv"
## Context Name.
# context_name = ""
## Privacy protocol used for encrypted messages; one of "DES", "AES", "AES192", "AES192C", "AES256", "AES256C", or "".
### Protocols "AES192", "AES192", "AES256", and "AES256C" require the underlying net-snmp tools
### to be compiled with --enable-blumenthal-aes (http://www.net-snmp.org/docs/INSTALL.html)
# priv_protocol = ""
## Privacy password used for encrypted messages.
# priv_password = ""
## Add fields and tables defining the variables you wish to collect. This
## example collects the system uptime and interface variables. Reference the
## full plugin documentation for configuration details.
[[inputs.snmp.field]]
oid = "RFC1213-MIB::sysUpTime.0"
name = "uptime"
[[inputs.snmp.field]]
oid = "RFC1213-MIB::sysName.0"
name = "source"
is_tag = true
[[inputs.snmp.table]]
oid = "IF-MIB::ifTable"
name = "interface"
inherit_tags = ["source"]
[[inputs.snmp.table.field]]
oid = "IF-MIB::ifDescr"
name = "ifDescr"
is_tag = true
Telegraf logs:
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:09Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:09Z I! Loaded inputs: snmp
2022-09-09T10:10:09Z I! Loaded aggregators:
2022-09-09T10:10:09Z I! Loaded processors:
2022-09-09T10:10:09Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:09Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:09Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:09Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:11Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:11Z I! Loaded inputs: snmp
2022-09-09T10:10:11Z I! Loaded aggregators:
2022-09-09T10:10:11Z I! Loaded processors:
2022-09-09T10:10:11Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:11Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:11Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:11Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
But in Telegraf I get the above error.
I checked the MIBs directory using ls /usr/share/snmp/mibs.
I cannot find an IF-MIB file there even after installing:
$ sudo apt-get install snmp-mibs-downloader
$ sudo download-mibs
How can I resolve this issue? Do I need to follow some additional steps?
The SNMP plugin in Telegraf should be able to pull the data over SNMP.
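For reference, one way to check whether the IF-MIB module actually resolves (assuming the net-snmp command-line tools are available) is snmptranslate; the MIB search path in the Telegraf error shows the lookup happens inside the container, so the same check can be run there with docker exec if the image ships the tools:
# Prints .1.3.6.1.2.1.2.2 if IF-MIB is found, "Unknown Object Identifier" otherwise
snmptranslate -On IF-MIB::ifTable
# Same check inside the container (only works if the net-snmp tools are present in the image)
docker exec -it telegraf_snmp snmptranslate -On IF-MIB::ifTable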

Ignoring the 'pipelines.yml' file because modules or command line options are specified

I have set up Elasticsearch with password protection, and I can successfully work with Elasticsearch by entering username=elastic and password=mypassword.
But now I am trying to import MySQL data into Elasticsearch using Logstash, and when I run Logstash with the command below it gives an error.
Am I missing something?
logstash -f mysql.conf
logstash-plain.log
[2019-06-14T18:12:34,410][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-14T18:12:34,424][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.0"}
[2019-06-14T18:12:35,400][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, } at line 16, column 23 (byte 507) after output {\r\n elasticsearch {\r\n\thosts => \"http://10.42.35.14:9200/\"\r\n user => elastic\r\n password => pharma", :backtrace=>["D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[2019-06-14T18:12:35,758][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-06-14T18:12:40,664][INFO ][logstash.runner ] Logstash shut down.
mysql.conf
# file: contacts-index-logstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://52.213.22.96:3306/prbi"
    jdbc_user => "myuser"
    jdbc_password => "mypassword"
    jdbc_driver_library => "mysql-connector-java-6.0.5.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * from tmp_j_summaryreport"
  }
}
output {
  elasticsearch {
    hosts => "http://10.42.35.14:9200/"
    user => elastic
    password => myelasticpassword
    index => "testing123"
  }
  stdout { codec => json_lines }
}
logstash.yml
# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
# pipeline:
# batch:
# size: 125
# delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
# path.data:
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
# - name: MODULE_NAME
# var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
# var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb
# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s
#xpack.management.enabled: true
xpack.management.elasticsearch.hosts: "http://10.42.35.14:9200/"
#xpack.management.elasticsearch.username: logstash_system
xpack.management.elasticsearch.password: myelasticpassword
This message in the Logstash log indicates that there is something wrong with your config file:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError"
The rest of the message says that the problem is in your output block:
:message=>"Expected one of #, {, } at line 16, column 23 (byte 507) after output {
Double check your output configuration, it needs to be something like this:
output {
  elasticsearch {
    hosts => ["10.42.35.14:9200"]
    user => "elastic"
    password => "myelasticpassword"
    index => "testing123"
  }
  stdout { codec => "json_lines" }
}
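As a side note, Logstash can validate a pipeline file without starting it, which surfaces the same "Expected one of #, {, }" parse error quickly (assuming the same mysql.conf path):
# Parses the config, reports any syntax errors, and exits
logstash -f mysql.conf --config.test_and_exit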

In the second run of chef-client, the lazy "links" attribute of docker_container is not getting resolved and an unreadable value is passed

This is my first time asking anything on Stack Overflow. Here in China I am seldom able to meet any Chef developers to talk about my problem, so I am posting it here to seek help. This issue has been bothering me for weeks and I am still trying to settle it.
Here is my error message:
* directory[/root/tools/projectname/../bootproxy] action create (up to date)
* template[/root/tools/projectname/../bootproxy/oc.proxy.conf] action create (up to date)
* directory[/root/tools/projectname/../bootproxy] action create (up to date)
* file[/tmp/dockerinfo.txt] action delete
- delete file /tmp/dockerinfo.txt
* bash[docker ps -a|grep -v CONTAINER|grep -v monitor|awk '{print $1, $NF}'] action run
- execute "bash" "/tmp/chef-script20170319-2107-fx41as"
* ruby_block[result] action run
- execute the ruby block result
* docker_container[bootproxy] action redeploy
- stopping bootproxy (will kill after 30s)
- deleting bootproxy
================================================================================
Error executing action redeploy on resource 'docker_container[bootproxy]'
================================================================================
Docker::Error::ServerError
--------------------------
Could not get container for bootproxy
Cookbook Trace:
---------------
/var/chef/cache/cookbooks/docker/libraries/docker_container.rb:319:in `block (3 levels) in <class:DockerContainer>'
/var/chef/cache/cookbooks/docker/libraries/helpers_base.rb:66:in `with_retries'
/var/chef/cache/cookbooks/docker/libraries/docker_container.rb:250:in `block (2 levels) in <class:DockerContainer>'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/copied_from_chef/chef/provider.rb:81:in `converge_if_changed'
/var/chef/cache/cookbooks/docker/libraries/docker_container.rb:247:in `block in <class:DockerContainer>'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/copied_from_chef/chef/provider.rb:132:in `instance_eval'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/copied_from_chef/chef/provider.rb:132:in `compile_and_converge_action'
/var/chef/cache/cookbooks/docker/libraries/docker_container.rb:169:in `call_action'
/var/chef/cache/cookbooks/docker/libraries/docker_container.rb:360:in `block in <class:DockerContainer>'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/copied_from_chef/chef/provider.rb:132:in `instance_eval'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/copied_from_chef/chef/provider.rb:132:in `compile_and_converge_action'
/var/chef/cache/cookbooks/docker/libraries/docker_container.rb:169:in `call_action'
/var/chef/cache/cookbooks/docker/libraries/docker_container.rb:403:in `block in <class:DockerContainer>'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/copied_from_chef/chef/provider.rb:132:in `instance_eval'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/copied_from_chef/chef/provider.rb:132:in `compile_and_converge_action'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:78:in `run_action'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:106:in `block (2 levels) in converge'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:106:in `each'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:106:in `block in converge'
/var/chef/cache/cookbooks/compat_resource/files/lib/chef_compat/monkeypatches/chef/runner.rb:105:in `converge'
Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/webserver/recipes/default.rb
218: docker_container container_name do
219: repo docker[:image]
220: tag docker[:tag]
221: #Add all docker link
222: # links node.set[:linking]
223: links lazy{node.run_state[:linking]}
224: env docker[:env]
225: command docker[:command]
226: kill_after 30
227: # autoremove true
228: action :redeploy
229: port docker[:ports]
230: volumes node.default["bindvolume"]
231: cap_add 'SYS_ADMIN'
232: devices []
233: privileged true
234: timeout 30
235: # {["/dev/fuse"]}
236: end
237: else
238: docker_container container_name do
239: repo docker[:image]
240: tag docker[:tag]
241: #Add all docker link
242: links node.run_state[:linking]
243: env docker[:env]
244: command docker[:command]
245: kill_after 30
246: # autoremove true
247: action :redeploy
248: port docker[:ports]
249: volumes node.default["bindvolume"]
250: cap_add 'SYS_ADMIN'
251: devices []
252: privileged true
253: timeout 30
254: # {["/dev/fuse"]}
255: end
256: end
257:
258: if (not (defined?(docker[:exec])).nil?) && (not "#{docker[:exec]}" == "")
259: execute 'execute command inside docker' do
260: command "docker exec -i #{container_name} /bin/bash -c \'#{docker[:exec]}\'"
261: end
262: end
263:
264: etchosts.push("#{container_name}:#{container_name}")
265: end
266:
267: #Add proxy.conf to folder if bootproxy defined
268: if defined?(node[:externalmode]) && node[:externalmode].eql?("bootproxy")
269: #Prepare bootproxy directories
270: directory "#{node[:deploycode][:basedirectory]}../bootproxy" do
Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/webserver/recipes/default.rb:218:in `block in from_file'
docker_container("bootproxy") do
action [:redeploy]
retries 0
retry_delay 2
default_guard_interpreter :default
declared_type :docker_container
cookbook_name "webserver"
recipe_name "default"
kill_after 30
repo "daocloud.io/library/nginx"
tag "stable-alpine"
exposed_ports {"80/tcp"=>{}, "5000/tcp"=>{}}
port_bindings {"80/tcp"=>[{"HostIp"=>"0.0.0.0", "HostPort"=>"80"}], "5000/tcp"=>[{"HostIp"=>"0.0.0.0", "HostPort"=>"5000"}]}
port ["80:80", "5000:5000"]
volumes_binds ["/root/tools/projectname/../bootproxy:/etc/nginx/conf.d/"]
links #<ChefCompat::CopiedFromChef::Chef::DelayedEvaluator:0x000000055c4a90@/var/chef/cache/cookbooks/webserver/recipes/default.rb:223>
cap_add ["SYS_ADMIN"]
privileged true
timeout 30
connection #<Docker::Connection:0x00000008301238 @url="unix:///", @options={:socket=>"/var/run/docker.sock", :read_timeout=>60}>
network_mode "bridge"
detach true
signal "SIGTERM"
end
Running handlers:
[2017-03-19T21:20:15+08:00] ERROR: Running exception handlers
Running handlers complete
[2017-03-19T21:20:15+08:00] ERROR: Exception handlers complete
Chef Client failed. 20 resources updated in 34 seconds
[2017-03-19T21:20:15+08:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2017-03-19T21:20:15+08:00] ERROR: docker_container[bootproxy] (webserver::default line 218) had an error: Docker::Error::ServerError: Could not get container for bootproxy
[2017-03-19T21:20:15+08:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
And here is my code:
node.run_state[:linking] = []
# Special handling if bootproxy, get all local running docker id and name and link into bootproxy
if localfolder.eql?("bootproxy")
  container_name = localfolder
  node.set[:dockerinfo] = []
  results = "/tmp/dockerinfo.txt"
  file results do
    action :delete
  end
  cmd = "docker ps -a|grep -v CONTAINER|grep -v monitor|awk \'{print $1, $NF}\'"
  bash cmd do
    code <<-EOH
      #{cmd} > #{results}
    EOH
  end
  ruby_block "result" do
    only_if { "cat #{results}| wc -l;while [ $? -ne 0 ]; do cat #{results}| wc -l;done" }
    # only_if { ::File.exists?(results) }
    block do
      f = File.open(results)
      dockerinfo = Hash.new
      f.each do |line|
        dockerinfo[line.chomp.split(' ')[0]] = line.chomp.split(' ')[1]
      end
      f.close
      node.set[:dockerinfo] = dockerinfo
      node.run_state[:linking] = dockerinfo
      node.run_state[:linking] = []
      node.set[:dockerinfo].each do |hash, dockername|
        node.run_state[:linking].push("#{dockername}:#{dockername}")
      end
    end
  end
else
  node.run_state[:linking] = etchosts
end
if node.default["bindvolume"].eql?([":"])
  node.default["bindvolume"] = nil
end
if localfolder.eql?("bootproxy")
  # Using lazy evaluation if bootproxy
  docker_container container_name do
    repo docker[:image]
    tag docker[:tag]
    # Add all docker links
    # links node.set[:linking]
    links lazy { node.run_state[:linking] }
    env docker[:env]
    command docker[:command]
    kill_after 30
    # autoremove true
    action :redeploy
    port docker[:ports]
    volumes node.default["bindvolume"]
    cap_add 'SYS_ADMIN'
    devices []
    privileged true
    timeout 30
    # {["/dev/fuse"]}
  end
else
  docker_container container_name do
    repo docker[:image]
    tag docker[:tag]
    # Add all docker links
    links node.run_state[:linking]
    env docker[:env]
    command docker[:command]
    kill_after 30
    # autoremove true
    action :redeploy
    port docker[:ports]
    volumes node.default["bindvolume"]
    cap_add 'SYS_ADMIN'
    devices []
    privileged true
    timeout 30
    # {["/dev/fuse"]}
  end
end
What I am trying to do here is run a set of Docker containers with the Chef docker cookbook version 2.14.3, with names looped over in "localfolder"; when localfolder = "bootproxy", I execute a bash command to check which containers are currently running and link them into an nginx container named "bootproxy".
My issue is this: whenever I clear all my cache, or after I have re-uploaded my whole "webserver" cookbook, chef-client runs fine without error. But when I rerun chef-client, I get
Docker::Error::ServerError
--------------------------
Could not get container for bootproxy
because the value of "links" in the docker_container resource became "#" instead of an array containing the currently running container names like ["container1:container1", "container2:container2", "container3:container3"]. So I suspect that the ruby_block that reads the value from the host is being cached and not run again after the first successful execution. My removal of the cache (rm -rf /var/chef/cache) proved it, but I cannot put cache removal inside the cookbook (not a neat way to work it out either). I need chef-client to be able to rerun, since I am using it to deploy my code across the whole environment. Please give me any advice on this.
Thanks!
I think you missed fixing the second resource; it isn't using the lazy {} helper.
Overall this code is kind of a mess so it's really hard to tell what it is doing in the first place. I recommend writing a lot of tests and maybe more comments.
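For illustration, a sketch of what that suggestion would look like, applying the same lazy {} wrapper to the second docker_container resource (all names taken from the recipe above):
docker_container container_name do
  repo docker[:image]
  tag docker[:tag]
  # Defer evaluation to converge time, like the bootproxy branch does,
  # so node.run_state[:linking] is read after the ruby_block has populated it
  links lazy { node.run_state[:linking] }
  env docker[:env]
  command docker[:command]
  kill_after 30
  action :redeploy
  port docker[:ports]
  volumes node.default["bindvolume"]
  cap_add 'SYS_ADMIN'
  privileged true
  timeout 30
end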

rvm1-capistrano3 error not finding ruby

I'm having trouble with my deploy script. It's failing with the following output:
DEBUG[3852918b] Command: cd /var/www/billing.staging/releases/20140829073745 && /tmp/billing-deploy/rvm-auto.sh 2.1.2 gem install --file Gemfile
DEBUG[3852918b] ruby-2.1.2 is not installed.
DEBUG[3852918b] Ruby ruby-2.1.2 is not installed.
DEBUG[3852918b] Can not find ruby for '2.1.2'.
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing on host 5.9.119.212: gem exit status: 103
gem stdout: Nothing written
gem stderr: ruby-2.1.2 is not installed.
Ruby ruby-2.1.2 is not installed.
Can not find ruby for '2.1.2'.
my script :
# config valid only for Capistrano 3.1
lock '3.1.0'
set :application, 'billing'
set :repo_url, 'git@github.com:aviacentr/Billing.git'
set :keep_releases, 5
set :linked_files, %w{config/database.yml config/secrets.yml config/billing.yml}
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
# Default branch is :master
# ask :branch, proc { `git rev-parse --abbrev-ref HEAD`.chomp }
# Default deploy_to directory is /var/www/my_app
# set :deploy_to, '/var/www/my_app'
set :bundle_servers, -> { release_roles(fetch(:bundle_roles)) } # this is default
set :bundle_binstubs, -> { shared_path.join('bin') } # this is default
set :bundle_gemfile, -> { release_path.join('Gemfile') } # default: nil
set :bundle_path, -> { shared_path.join('bundle') } # this is default
set :bundle_without, %w{development test}.join(' ') # this is default
set :bundle_flags, '--deployment --quiet ' # this is default
set :bundle_env_variables, {}
set :rvm1_ruby_version, "2.1.2"
# this is default
# Default value for :scm is :git
# set :scm, :git
# Default value for :format is :pretty
set :format, :pretty
# Default value for :log_level is :debug
# set :log_level, :debug
# Default value for :pty is false
# set :pty, true
# Default value for :linked_files is []
# set :linked_files, %w{config/database.yml}
# Default value for linked_dirs is []
# set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# require 'mongodb_logger/capistrano'
# set :mongodb_logger_assets_dir, "public/assets" # where to put mongodb assets
# after 'deploy:update_code', 'mongodb_logger:precompile'
# Default value for keep_releases is 5
set :keep_releases, 3
set :branch, ENV['branch'] || 'master' # e.g. cap staging deploy branch=dev
namespace :deploy do
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      # Your restart mechanism here, for example:
      # execute :touch, release_path.join('tmp/restart.txt')
    end
  end
  after :publishing, :restart
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end

after 'deploy:publishing', 'deploy:restart'

namespace :deploy do
  task :restart do
    invoke 'unicorn:restart'
  end
end

before 'deploy', 'rvm1:install:rvm'
before 'deploy', 'rvm1:install:ruby' # install/update Ruby
before 'deploy', 'rvm1:install:gems'
before 'deploy', 'rvm1:hook'

# after 'deploy:publishing', 'binstubs'
# task :binstubs do
#   run 'gem regenerate_binstubs'
# end
Could you help me with this issue?
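One quick check, assuming RVM is installed under the deploy user's home directory on the server, is to list the rubies RVM can actually see there:
# Run on the deploy host as the deploy user; ruby-2.1.2 should appear in the output
~/.rvm/bin/rvm list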

Error occurred while running task cap deploy:setup during deployment using Capistrano with Unicorn

Here is the error which occurs after running the task cap deploy:setup to deploy to an Ubuntu server using Capistrano and Unicorn. Please help me out here if you know the answer.
Here is the error shown in the terminal:
[100.116.4.74] sh -c 'cp -RPp /home/deployer/apps/mcash/shared/cached-copy /home/deployer/apps/mcash/releases/20120417074244 && (echo 8f69f0a524dcecef478bad74df4a983d3cdad480 > /home/deployer/apps/mcash/releases/20120417074244/REVISION)'
** [out :: 100.116.4.75] cp: cannot create directory `/home/deployer/apps/mcash/releases/20120417074244'
** [out :: 100.116.4.75] : No such file or directory
failed: "sh -c 'cp -RPp /home/deployer/apps/mcash/shared/cached-copy /home/deployer/apps/mcash/releases/20120417074244 && (echo 8f69f0a524dcecef478bad74df4a983d3cdad480 > /home/deployer/apps/mcash/releases/20120417074244/REVISION)'" on 100.116.4.74
Here is the entire deploy.rb file:
require "bundler/capistrano"
role :web, "200.116.4.75"
role :app, "200.116.4.75"
role :db, "200.116.4.75", :primary => true
set :application, "mcasher"
set :user, "deployer"
set :deploy_to, "/home/#{user}/apps/#{application}"
set :deploy_via, :remote_cache
set :use_sudo, false
set :scm, :git
set :repository, "git-server:mcasher.git"
set :branch, "master"
default_run_options[:pty] = true
ssh_options[:forward_agent] = true
after "deploy", "deploy:cleanup" # keep only the last 5 releases
namespace :deploy do
  %w[start stop restart].each do |command|
    desc "#{command} unicorn server"
    task command, roles: :app, except: {no_release: true} do
      run "/etc/init.d/unicorn_#{application} #{command}"
    end
  end

  task :setup_config, roles: :app do
    sudo "ln -nfs #{current_path}/config/nginx.conf /etc/nginx/sites-enabled/#{application}"
    sudo "ln -nfs #{current_path}/config/unicorn_init.sh /etc/init.d/unicorn_#{application}"
    run "mkdir -p #{shared_path}/config"
    put File.read("config/database.example.yml"), "#{shared_path}/config/database.yml"
    puts "Now edit the config files in #{shared_path}."
  end
  after "deploy:setup", "deploy:setup_config"

  task :symlink_config, roles: :app do
    run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
  end
  after "deploy:finalize_update", "deploy:symlink_config"

  desc "Make sure local git is in sync with remote."
  task :check_revision, roles: :web do
    unless `git rev-parse HEAD` == `git rev-parse origin/master`
      puts "WARNING: HEAD is not the same as origin/master"
      puts "Run `git push` to sync changes."
      exit
    end
  end
  before "deploy", "deploy:check_revision"
end
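One thing worth verifying, assuming a standard Capistrano 2 setup, is that the deploy_to directory tree that cap deploy:setup creates actually exists and is writable by the deploy user on every server; Capistrano 2 ships a task for exactly that check:
# Verifies that the releases and shared directories exist and are writable on each server
cap deploy:check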
