td-agent.log shows no errors but logs still aren't appearing at the HTTP endpoint - debugging

I am sending logs to my Coralogix account using fluentd.
I configured everything and got td-agent.service running properly, with no errors shown in td-agent.log. However, I still can't find the logs in my account.
Here are the logs from my td-agent.log:
2023-02-04 20:09:08 +0800 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-02-04 20:09:08 +0800 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
2023-02-04 20:09:08 +0800 [info]: adding match pattern="application.log" type="http"
2023-02-04 20:09:08 +0800 [warn]: #0 Use different plugin for secondary. Check the plugin works with primary like secondary_file primary="Fluent::Plugin::HTTPOutput" secondary="Fluent::Plugin::StdoutOutput"
2023-02-04 20:09:08 +0800 [info]: adding source type="tail"
2023-02-04 20:09:08 +0800 [info]: #0 starting fluentd worker pid=5624 ppid=5621 worker=0
2023-02-04 20:09:08 +0800 [info]: #0 following tail of /var/log/Log.log
2023-02-04 20:09:08 +0800 [info]: #0 fluentd worker is now running worker=0
Please see my td-agent.conf below for reference:
<source>
  @type tail
  @id tail_var_logs
  @label @CORALOGIX
  read_from_head true
  tag application.log
  path /var/log/Log.log
  pos_file /var/log/td-agent/tmp/coralog.pos
  path_key path
  <parse>
    @type none
  </parse>
</source>
<label @CORALOGIX>
  <filter application.log>
    @type record_transformer
    @log_level warn
    enable_ruby true
    auto_typecast true
    renew_record true
    <record>
      applicationName "Example_App"
      subsystemName "Example_Subsystem"
      #text ${record.to_json}
    </record>
  </filter>
  <match application.log>
    @type http
    endpoint https://api.coralogixsg.com/logs/rest/singles
    headers {"private_key":"<my private key>"}
    retryable_response_codes 503
    error_response_as_unrecoverable false
    <buffer>
      @type memory
      chunk_limit_size 10MB
      compress gzip
      flush_interval 1s
      retry_max_times 5
      retry_type periodic
      retry_wait 2
    </buffer>
    <secondary>
      # If any messages fail to send, they will be sent to STDOUT for debugging.
      @type stdout
    </secondary>
  </match>
</label>
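One thing worth checking (an assumption on my part, not confirmed by the logs above): the Coralogix singles REST endpoint expects a JSON array of log entries, while fluentd's built-in http output sends newline-delimited JSON records by default. If that applies here, enabling json_array in the match block might help; an illustrative fragment:

```
<match application.log>
  @type http
  endpoint https://api.coralogixsg.com/logs/rest/singles
  headers {"private_key":"<my private key>"}
  # Assumption: send each chunk as one JSON array instead of ndjson,
  # which is what array-based REST ingestion endpoints typically expect.
  json_array true
</match>
```

The rest of the match block (buffer, secondary) would stay as in the original config.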
Please see the verbose logs using td-agent -vv:
2023-02-05 08:48:49 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:48:54 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:49:00 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:49:05 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:49:10 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:49:16 +0800 [debug]: #0 fluent/log.rb:309:debug: tailing paths: target = /var/log/Log.log | existing = /var/log/Log.log
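Since in_tail with a pos_file only emits lines appended after the recorded offset, the trace output above (chunks being enqueued with nothing new to send) may simply mean no fresh lines have arrived. A quick way to force a fresh event is to append a uniquely tagged test line to the tailed file (a sketch; the mktemp default is only a safe stand-in, point LOGFILE at /var/log/Log.log on the real host):

```shell
# Append a marker line past the offset recorded in the pos_file so
# in_tail emits a brand-new event.
LOGFILE="${LOGFILE:-$(mktemp)}"   # on the host: LOGFILE=/var/log/Log.log
echo "fluentd-delivery-test $(date +%s)" >> "$LOGFILE"
# Show what was appended; then watch td-agent's own log for flush/HTTP
# activity, e.g.: tail -f /var/log/td-agent/td-agent.log
tail -n 1 "$LOGFILE"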

Related

fluentd not communicating with elasticsearch: host_unreachable_exceptions

Hi, I am trying to get Fluentd communicating with Elasticsearch, but I always get [Faraday::TimeoutError] read timeout reached and also host_unreachable_exceptions.
I am using the following Dockerfile:
FROM fluent/fluentd:v1.12.0-debian-1.0
USER root
ADD conf/fluent.conf /etc/fluent/
ADD conf/parser_custom_parser.rb /etc/fluent/
ADD conf/parser_custom_parser.rb /etc/fluent/plugin/
ADD conf/parser_custom_parser.rb /lib/fluent/plugin/
ADD conf/formatter_custom_formatter.rb /etc/fluent/
ADD conf/formatter_custom_formatter.rb /etc/fluent/plugin/
ADD conf/formatter_custom_formatter.rb /lib/fluent/plugin/
ADD ca.pem /etc/fluent/certs/ca.pem
ENV HTTP_PROXY="http://100.126.0.150:3188"
ENV HTTPS_PROXY="http://100.126.0.150:3188"
RUN apt-get update && apt-get install -y curl && apt-get install -y telnet
RUN echo "source 'https://mirrors.tuna.tsinghua.edu.cn/rubygems/'" > Gemfile
RUN echo "100.126.19.110 kafka" >> /etc/hosts
RUN echo "100.126.19.55 kafka02" >> /etc/hosts
RUN gem install elasticsearch -v 7.16.2
RUN gem install fluent-plugin-elasticsearch -v 5.0.5 --no-document
RUN gem install fluent-plugin-kafka -v 0.12.3 --no-document
ENV HTTP_PROXY=
ENV HTTPS_PROXY=
ENV http_proxy=
ENV https_proxy=
RUN echo "unset http_proxy" >> /root/.bashrc
RUN echo "unset https_proxy" >> /root/.bashrc
RUN echo "unset HTTP_PROXY" >> /root/.bashrc
RUN echo "unset HTTPS_PROXY" >> /root/.bashrc
CMD ["fluentd"]
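For what it's worth, the `undefined method 'host_unreachable_exceptions'` error further down is commonly reported as a version mismatch between the elasticsearch gem and fluent-plugin-elasticsearch (my assumption here, not verified against these exact versions): the gem's transport API changed around elasticsearch 7.14, and plugin 5.0.5 predates that change. Pinning the gem lower in the Dockerfile is one sketch of a workaround:

```dockerfile
# Hypothetical pin: keep the elasticsearch gem below the 7.14 transport
# API change that fluent-plugin-elasticsearch 5.0.5 was written against.
RUN gem install elasticsearch -v 7.13.3 --no-document
RUN gem install fluent-plugin-elasticsearch -v 5.0.5 --no-document
```

Upgrading fluent-plugin-elasticsearch to a release that supports the newer gem is the other direction to try.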
And the following fluent.conf:
# fluentd/conf/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
  tag logs.test
</source>
<source>
  @type kafka
  brokers kafka:9092,kafka02:9092
  format text
  <topic>
    topic app_gw_uat
  </topic>
</source>
<filter *.**>
  @type parser
  key_name message
  <parse>
    @type custom_parser
  </parse>
</filter>
<match *.**>
  @type copy
  <store>
    @type stdout
  </store>
</match>
<match *.**>
  @type elasticsearch
  @log_level debug
  host 100.123.251.89
  port 9200
  scheme https
  ssl_verify false
  #ssl_version TLSv1_2
  ca_file /etc/fluent/certs/ca.pem
  client_cert /etc/fluent/certs/ca.pem
  user MY_USER
  password MY_PASSWORD
  logstash_format true
  #logstash_prefix fluentd-
  index_name fluentd-01
  type_name fluentd-01
  reload_connections false
  reconnect_on_error true
  reload_on_failure true
  tag uat
  buffer_chunk_limit 1M
  buffer_queue_limit 32
  include_tag_key true
  with_transporter_log true
  <buffer>
    flush_interval 30s
    chunk_limit_size 1M
    queue_limit_length 32
  </buffer>
</match>
All logs here:
fluentd_1 | 2022-04-18 15:25:47 +0000 [debug]: #0 Need substitution: false
fluentd_1 | 2022-04-18 15:25:47 +0000 [debug]: #0 'host_placeholder 100.123.251.89' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'host_placeholder: 100.123.251.89' doesn't have tag placeholder
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 [Faraday::TimeoutError] read timeout reached {:host=>"100.123.251.89", :port=>9200, :scheme=>"https", :user=>"inspiraefk", :password=><REDACTED>, :protocol=>"https"}
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 unexpected error error_class=NoMethodError error="undefined method `host_unreachable_exceptions' for #<Elasticsearch::Transport::Client:0x00007efdbbf513c8>"
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_index_template.rb:41:in `rescue in retry_operate'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_index_template.rb:39:in `retry_operate'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/out_elasticsearch.rb:487:in `handle_last_seen_es_major_version'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/out_elasticsearch.rb:339:in `configure'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/plugin.rb:178:in `configure'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/agent.rb:132:in `add_match'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/agent.rb:74:in `block in configure'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/agent.rb:64:in `each'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/agent.rb:64:in `configure'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/root_agent.rb:146:in `configure'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/engine.rb:105:in `configure'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/engine.rb:80:in `run_configure'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/supervisor.rb:692:in `block in run_worker'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/supervisor.rb:943:in `main_process'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/supervisor.rb:684:in `run_worker'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/lib/fluent/command/fluentd.rb:361:in `<top (required)>'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.12.0/bin/fluentd:8:in `<top (required)>'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/bin/fluentd:23:in `load'
fluentd_1 | 2022-04-18 15:25:52 +0000 [error]: #0 /usr/local/bundle/bin/fluentd:23:in `<main>'
fluentd_1 | 2022-04-18 15:25:52 +0000 [info]: Worker 0 finished unexpectedly with status 1
fluentd_1 | 2022-04-18 15:25:52 +0000 [info]: adding filter pattern="*.**" type="parser"
fluentd_1 | 2022-04-18 15:25:52 +0000 [info]: adding match pattern="*.**" type="copy"
fluentd_1 | 2022-04-18 15:25:52 +0000 [info]: adding match pattern="*.**" type="elasticsearch"
fluentd_1 | 2022-04-18 15:25:53 +0000 [debug]: #0 'host 100.123.251.89' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'host: 100.123.251.89' doesn't have tag placeholder
Any idea how to fix this? I'd really appreciate it.

EFK stack JSON log not being shown

I have deployed an EFK stack in a Kubernetes cluster.
I have configured it so that fluentd fetches Nginx logs as well as PHP logs (both are in JSON format, one JSON log per line).
This is my config:
fluent.conf: |-
  @include custom.conf
  @include conf.d/*.conf
  <match **>
    @type elasticsearch
    @id out_es
    @log_level info
    include_tag_key true
    host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
    port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
    path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
    scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
    ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
    ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1'}"
    reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
    reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
    reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
    log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
    logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
    logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
    index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
    type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
    <buffer>
      flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
      flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
      chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
      queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
      retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
      retry_forever true
    </buffer>
  </match>
custom.conf: |
  <match fluent.**>
    @type null
  </match>
  <source>
    @type tail
    read_from_head true
    tag kubernetes.*
    path /var/log/k8s/*/*/*.log
    pos_file /var/log/k8s/customcontainerlogs.log.pos
    format json
    <parse>
      @type json
      json_parser oj
      time_type string
      time_format %d/%b/%Y:%H:%M:%S %z
    </parse>
  </source>
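One detail I'd double-check in the tail source above (an observation, not a confirmed fix): it sets both the legacy format json parameter and a <parse> section, and the time_format %d/%b/%Y:%H:%M:%S %z there doesn't match the ISO8601-style timestamps the logs actually carry, which can make the parser reject otherwise-valid records. A minimal version that drops the legacy parameter and leaves time handling to the parser defaults might look like:

```
<source>
  @type tail
  read_from_head true
  tag kubernetes.*
  path /var/log/k8s/*/*/*.log
  pos_file /var/log/k8s/customcontainerlogs.log.pos
  <parse>
    # Parse each line as JSON; without time_key/time_format the parser
    # falls back to the event's arrival time rather than failing.
    @type json
    json_parser oj
  </parse>
</source>
```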
Essentially I am trying to get all logs and stream them.
Using the above config I can only get the Nginx logs for some reason, each one looking like:
{
  "_index": "logstash-2021.04.24",
  "_type": "_doc",
  "_id": "kJMpBXkBcnb7LiWny-tT",
  "_version": 1,
  "_score": null,
  "_source": {
    "request": "GET / HTTP/1.1",
    "http_referer": "",
    "http_user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36",
    "time_iso8601": "2021-04-24T18:34:50+00:00",
    "status": "404",
    "request_time": "0.004",
    "header_access_id": "",
    "ip": "10.200.11.106",
    "@timestamp": "2021-04-24T18:34:50.708281784+00:00",
    "tag": "kubernetes.var.log.k8s.api.nginx.api-access.log"
  },
  "fields": {
    "time_iso8601": [
      "2021-04-24T18:34:50.000Z"
    ],
    "@timestamp": [
      "2021-04-24T18:34:50.708Z"
    ]
  },
  "sort": [
    1619289290708
  ]
}
If I add @type none in the parse section:
<parse>
  #@type json
  @type none
  time_type string
  time_format %d/%b/%Y:%H:%M:%S %z
</parse>
I can see 2 logs every time I hit refresh (a 404 from Nginx and a PHP "No route found" error), but they are not formatted... everything ends up in the message field as a string.
Example of a PHP log:
{
  "_index": "logstash-2021.04.24",
  "_type": "_doc",
  "_id": "13QsBXkBuBa2uOG0PQSr",
  "_version": 1,
  "_score": null,
  "_source": {
    "message": "{\"email\":\"\",\"channel\":\"api_error_channel\",\"level\":\"WARNING\",\"message\":\"No route found\",\"backtrace\":[\"[Library\\\\HttpKernel\\\\Exception\\\\RouterListenerException] \\/app\\/Api\\/Data\\/bootstrap.php.cache:21384\",\"Library\\\\HttpKernel\\\\EventListener\\\\RouterListener->onKernelRoute\",\"->call_user_func\",\"Library\\\\EventDispatcher\\\\EventDispatcher->doDispatch\",\"Library\\\\EventDispatcher\\\\EventDispatcher->dispatch\",\"Library\\\\HttpKernel\\\\HttpKernel->handleRaw\",\"Library\\\\HttpKernel\\\\HttpKernel->handle\"],\"request\":{\"id\":\"210850ea-a52c-11eb-820b-f66ccf92cc93\",\"date\":\"2021-04-24 18:37:31\",\"path\":\"\\/\"},\"response\":{\"status_code\":404,\"body\":{\"Error\":{\"Code\":100004,\"Message\":\"Invalid route\"}}}}",
    "@timestamp": "2021-04-24T18:37:31.688259336+00:00",
    "tag": "kubernetes.var.log.k8s.api.php.api_error.log"
  },
  "fields": {
    "@timestamp": [
      "2021-04-24T18:37:31.688Z"
    ]
  },
  "sort": [
    1619289451688
  ]
}
2021-04-26 14:30:36 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2021-04-26 14:30:36 +0000 [info]: using configuration file: <ROOT>
  <match fluent.**>
    @type null
  </match>
  <source>
    @type tail
    read_from_head true
    tag "kubernetes.*"
    path "/var/log/k8s/*/*/*.log"
    pos_file "/var/log/k8s/innercontainerlogs.log.pos"
    <parse>
      @type "json"
    </parse>
  </source>
  <match **>
    @type elasticsearch
    @id out_es
    @log_level "debug"
    include_tag_key true
    host "elasticsearch.monitoring.svc.cluster.local"
    port 9200
    path ""
    scheme http
    ssl_verify true
    ssl_version TLSv1
    reload_connections false
    reconnect_on_error true
    reload_on_failure true
    log_es_400_reason false
    logstash_prefix "logstash"
    logstash_format true
    index_name "logstash"
    type_name "fluentd"
    <buffer>
      flush_thread_count 8
      flush_interval 5s
      chunk_limit_size 2M
      queue_limit_length 32
      retry_max_interval 30
      retry_forever true
    </buffer>
  </match>
</ROOT>
2021-04-26 14:30:36 +0000 [info]: starting fluentd-1.4.2 pid=7 ruby="2.6.3"
2021-04-26 14:30:36 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/fluentd/vendor/bundle/ruby/2.6.0/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--gemfile", "/fluentd/Gemfile", "--under-supervisor"]
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-concat' version '2.3.0'
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '3.4.3'
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.5.1'
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-json-in-json-2' version '1.0.2'
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.1.6'
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.3.0'
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.1.1'
2021-04-26 14:30:38 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2021-04-26 14:30:38 +0000 [info]: gem 'fluentd' version '1.4.2'
2021-04-26 14:30:38 +0000 [info]: adding match pattern="fluent.**" type="null"
2021-04-26 14:30:38 +0000 [info]: adding match pattern="**" type="elasticsearch"
2021-04-26 14:30:42 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 10.109.189.187:9200 (Errno::ECONNREFUSED)
2021-04-26 14:30:46 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 10.109.189.187:9200 (Errno::ECONNREFUSED)
2021-04-26 14:30:54 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 10.109.189.187:9200 (Errno::ECONNREFUSED)
2021-04-26 14:31:10 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 10.109.189.187:9200 (Errno::ECONNREFUSED)
2021-04-26 14:31:42 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 10.109.189.187:9200 (Errno::ECONNREFUSED)
2021-04-26 14:31:42 +0000 [warn]: #0 [out_es] Detected ES 7.x or above: `_doc` will be used as the document `_type`.
2021-04-26 14:31:42 +0000 [info]: adding source type="tail"
2021-04-26 14:31:42 +0000 [info]: #0 starting fluentd worker pid=10 ppid=7 worker=0
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] buffer started instance=70277432939320 stage_size=0 queue_size=0
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] flush_thread actually running
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] flush_thread actually running
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] flush_thread actually running
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] flush_thread actually running
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] flush_thread actually running
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] flush_thread actually running
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] flush_thread actually running
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] flush_thread actually running
2021-04-26 14:31:42 +0000 [debug]: #0 [out_es] enqueue_thread actually running
2021-04-26 14:31:42 +0000 [info]: #0 following tail of /var/log/k8s/api/nginx/api-access.log
2021-04-26 14:31:42 +0000 [info]: #0 following tail of /var/log/k8s/api/nginx/api-error.log
2021-04-26 14:31:42 +0000 [info]: #0 following tail of /var/log/k8s/api/nginx/error.log
2021-04-26 14:31:42 +0000 [info]: #0 following tail of /var/log/k8s/api/nginx/access.log
2021-04-26 14:31:42 +0000 [info]: #0 following tail of /var/log/k8s/api/php/api_error.log
2021-04-26 14:31:42 +0000 [info]: #0 fluentd worker is now running worker=0
What can I do to fix this?
EDIT #1:
I checked the error logs and I am getting this:
2021-04-25 16:44:35 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not match with data 'No route found'" location=nil tag="kubernetes.var.log.k8s.api.php.api_error.log" time=2021-04-25 16:44:35.639766416 +0000 record={"email"=>"", "channel"=>"api_error_channel", "level"=>"WARNING", "message"=>"No route found", "backtrace"=>["[Library\\HttpKernel\\Exception\\RouterListenerException] /app/Api/Data/bootstrap.php.cache:21384", "Library\\HttpKernel\\EventListener\\RouterListener->onKernelRoute", "->call_user_func", "Library\\EventDispatcher\\EventDispatcher->doDispatch", "Library\\EventDispatcher\\EventDispatcher->dispatch", "Library\\HttpKernel\\HttpKernel->handleRaw", "Library\\HttpKernel\\HttpKernel->handle"], "request"=>{"id"=>"84a434e8-a5e5-11eb-b41b-c629ad91f8c7", "date"=>"2021-04-25 16:44:35", "path"=>"/"}, "response"=>{"status_code"=>404, "body"=>{"Error"=>{"Code"=>100004, "Message"=>"Invalid route"}}}}
EDIT #2:
I ran the raw JSON log through a validator and it is valid JSON.
EDIT #3:
Added startup logs
EDIT #4:
This is an example of debug stdout logs:
2021-04-27 09:22:42.429274874 +0000 kubernetes.var.log.k8s.api.php.api_error.log: {"email":"","channel":"api_error_channel","level":"WARNING","message":"No route found","backtrace":["[Library\\HttpKernel\\Exception\\RouterListenerException] /app/Api/Data/bootstrap.php.cache:21384","Library\\HttpKernel\\EventListener\\RouterListener->onKernelRoute","->call_user_func","Library\\EventDispatcher\\EventDispatcher->doDispatch","Library\\EventDispatcher\\EventDispatcher->dispatch","Library\\HttpKernel\\HttpKernel->handleRaw","Library\\HttpKernel\\HttpKernel->handle"],"request":{"id":"1e5ce058-a73a-11eb-9e48-1e539c74b43b","date":"2021-04-27 09:22:42","path":"/"},"response":{"status_code":404,"body":{"Error":{"Code":100004,"Message":"Invalid route"}}},"tag":"kubernetes.var.log.k8s.api.php.api_error.log"}
2021-04-27 09:22:37.854071485 +0000 kubernetes.var.log.k8s.api.nginx.api-access.log: {"request":"GET / HTTP/1.1","http_referer":"","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36","time_iso8601":"2021-04-27T09:22:37+00:00","status":"404","request_time":"0.004","header_access_id":"","ip":"10.200.11.106","tag":"kubernetes.var.log.k8s.api.nginx.api-access.log"}

Kubernetes, Prometheus: my custom metric is not found even though it exists in the config

I followed THIS tutorial and got stuck on the chapter "INSTALL PROMETHEUS ADAPTER ON KUBERNETES".
I installed Prometheus first and then the adapter, following that tutorial.
Logs from the adapter:
I1201 18:29:51.064522 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1201 18:29:51.064793 1 tlsconfig.go:157] loaded client CA [0/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubernetes" [] issuer="<self>" (2020-11-26 13:25:15 +0000 UTC to 2030-11-24 13:25:15 +0000 UTC (now=2020-12-01 18:29:51.064776377 +0000 UTC))
I1201 18:29:51.064996 1 tlsconfig.go:179] loaded serving cert ["serving-cert::/tmp/cert/apiserver.crt::/tmp/cert/apiserver.key"]: "localhost#1606847390" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer="localhost-ca#1606847390" (2020-12-01 17:29:50 +0000 UTC to 2021-12-01 17:29:50 +0000 UTC (now=2020-12-01 18:29:51.064987277 +0000 UTC))
I1201 18:29:51.065289 1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client#1606847390" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca#1606847390" (2020-12-01 17:29:50 +0000 UTC to 2021-12-01 17:29:50 +0000 UTC (now=2020-12-01 18:29:51.065278277 +0000 UTC))
I1201 18:29:51.065471 1 tlsconfig.go:157] loaded client CA [0/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubernetes" [] issuer="<self>" (2020-11-26 13:25:15 +0000 UTC to 2030-11-24 13:25:15 +0000 UTC (now=2020-12-01 18:29:51.065459677 +0000 UTC))
I1201 18:29:51.065502 1 tlsconfig.go:157] loaded client CA [1/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubernetes" [] issuer="<self>" (2020-11-26 13:25:16 +0000 UTC to 2030-11-24 13:25:16 +0000 UTC (now=2020-12-01 18:29:51.065493477 +0000 UTC))
I1201 18:29:51.065715 1 tlsconfig.go:179] loaded serving cert ["serving-cert::/tmp/cert/apiserver.crt::/tmp/cert/apiserver.key"]: "localhost#1606847390" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer="localhost-ca#1606847390" (2020-12-01 17:29:50 +0000 UTC to 2021-12-01 17:29:50 +0000 UTC (now=2020-12-01 18:29:51.065704777 +0000 UTC))
I1201 18:29:51.065910 1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client#1606847390" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca#1606847390" (2020-12-01 17:29:50 +0000 UTC to 2021-12-01 17:29:50 +0000 UTC (now=2020-12-01 18:29:51.065901277 +0000 UTC))
I1201 18:30:24.362727 1 httplog.go:90] GET /healthz: (2.2796ms) 200 [kube-probe/1.16+ 10.1.0.1:53930]
I1201 18:30:25.463080 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (2.9747ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
E1201 18:30:25.465710 1 writers.go:105] apiserver was unable to write a JSON response: http2: stream closed
E1201 18:30:25.465953 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1201 18:30:25.466586 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (2.8732ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
E1201 18:30:25.467206 1 writers.go:105] apiserver was unable to write a JSON response: http2: stream closed
I1201 18:30:25.467753 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (7.164101ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
E1201 18:30:25.468083 1 writers.go:118] apiserver was unable to write a fallback JSON response: http2: stream closed
E1201 18:30:25.472221 1 writers.go:105] apiserver was unable to write a JSON response: http2: stream closed
E1201 18:30:25.472609 1 writers.go:105] apiserver was unable to write a JSON response: http2: stream closed
E1201 18:30:25.472730 1 writers.go:105] apiserver was unable to write a JSON response: http2: stream closed
I1201 18:30:25.472909 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (1.0713ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
I1201 18:30:25.474640 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (1.647ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
E1201 18:30:25.475699 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1201 18:30:25.479184 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (20.215901ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
E1201 18:30:25.479482 1 writers.go:118] apiserver was unable to write a fallback JSON response: http2: stream closed
E1201 18:30:25.480170 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E1201 18:30:25.481352 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E1201 18:30:25.482583 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
I1201 18:30:25.483849 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (25.682902ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
E1201 18:30:25.485396 1 writers.go:118] apiserver was unable to write a fallback JSON response: http2: stream closed
E1201 18:30:25.486509 1 writers.go:118] apiserver was unable to write a fallback JSON response: http2: stream closed
E1201 18:30:25.487726 1 writers.go:118] apiserver was unable to write a fallback JSON response: http2: stream closed
I1201 18:30:25.488748 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (17.878101ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
I1201 18:30:25.489839 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (18.999301ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
I1201 18:30:25.491108 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (19.478801ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
I1201 18:30:26.476227 1 httplog.go:90] GET /openapi/v2: (1.2015ms) 404 [ 192.168.65.3:46868]
I1201 18:30:28.165256 1 httplog.go:90] GET /healthz: (190.7µs) 200 [kube-probe/1.16+ 10.1.0.1:53966]
I1201 18:30:34.360535 1 httplog.go:90] GET /healthz: (116.3µs) 200 [kube-probe/1.16+ 10.1.0.1:54020]
I1201 18:30:36.350376 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (4.363901ms) 200 [kube-controller-manager/v1.16.6 (linux/amd64) kubernetes/e7f962b/system:serviceaccount:kube-system:resourcequota-controller 192.168.65.3:46868]
I1201 18:30:37.315883 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (2.5138ms) 200 [kube-controller-manager/v1.16.6 (linux/amd64) kubernetes/e7f962b/controller-discovery 192.168.65.3:46868]
I1201 18:30:37.329496 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_server_requests_seconds_count_sum?labelSelector=app%3Dauth-server: (5.072601ms) 404 [kube-controller-manager/v1.16.6 (linux/amd64) kubernetes/e7f962b/system:serviceaccount:kube-system:horizontal-pod-autoscaler 192.168.65.3:46868]
I1201 18:30:38.165664 1 httplog.go:90] GET /healthz: (1.2018ms) 200 [kube-probe/1.16+ 10.1.0.1:54046]
I1201 18:30:44.361064 1 httplog.go:90] GET /healthz: (73.5µs) 200 [kube-probe/1.16+ 10.1.0.1:54108]
I1201 18:30:44.960512 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (1.0876ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
I1201 18:30:44.961265 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (2.1336ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
I1201 18:30:44.961449 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (1.61ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
E1201 18:30:44.961861 1 writers.go:105] apiserver was unable to write a JSON response: http2: stream closed
E1201 18:30:44.961923 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E1201 18:30:44.962376 1 writers.go:105] apiserver was unable to write a JSON response: http2: stream closed
E1201 18:30:44.963100 1 writers.go:118] apiserver was unable to write a fallback JSON response: http2: stream closed
E1201 18:30:44.964224 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http2: stream closed"}
E1201 18:30:44.965232 1 writers.go:118] apiserver was unable to write a fallback JSON response: http2: stream closed
I1201 18:30:44.966366 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (5.682901ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
I1201 18:30:44.967578 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1: (8.177501ms) 200 [Go-http-client/2.0 192.168.65.3:46844]
I1201 18:30:48.168761 1 httplog.go:90] GET /healthz: (2.2637ms) 200 [kube-probe/1.16+ 10.1.0.1:54138]
I1201 18:30:48.237637 1 httplog.go:90] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (1.4318ms) 200 [kube-controller-manager/v1.16.6 (linux/amd64) kubernetes/e7f962b/system:serviceaccount:kube-system:generic-garbage-collector 192.168.65.3:46868]
In the prometheus-adapter ConfigMap on Kubernetes I have:
- metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,uri=~"/persons.*"}) by (<<.GroupBy>>)
  name:
    as: http_server_requests_seconds_count_sum
    matches: ^http_server_requests_seconds_count(.*)
  resources:
    overrides:
      kubernetes_namespace:
        resource: namespace
      kubernetes_pod_name:
        resource: pod
  seriesQuery: '{__name__=~"^http_server_requests_seconds_.*"}'
along with many different metrics.
When I try to create a Horizontal Pod Autoscaler with my metric I get: unable to get metric ...
With the command kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
I get many metrics, so I think the connection is OK.
But my metric isn't there, and when I use the command below I get the same error, could not find metric:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_server_requests_seconds_count_sum"
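A debugging step that often helps here (a sketch; the service name and namespace are placeholders, adjust for your install): confirm the underlying series actually exists in Prometheus with the labels the overrides expect (kubernetes_namespace, kubernetes_pod_name), since the adapter only exposes a metric when its resource overrides resolve against live series.

```
# Port-forward to Prometheus (service name/namespace are assumptions)
kubectl -n monitoring port-forward svc/prometheus 9090:9090 &

# List matching series; each result must carry kubernetes_namespace and
# kubernetes_pod_name labels for the adapter's overrides to resolve.
curl -s -G 'http://localhost:9090/api/v1/series' \
  --data-urlencode 'match[]={__name__=~"^http_server_requests_seconds_.*"}'
```

If the series shows up there but not in the adapter, the rule (label overrides or the uri regex in metricsQuery) is the next thing to inspect.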

Elasticsearch does not restart after updating config file for CORS

I have updated my Elasticsearch config file (note: ES is 2.2) to enable CORS. I did the same for ES 1.4 and it worked fine, but here it's not working and ES does not restart. Below are the error and the config file.
Error :
root@XXX:/etc/elasticsearch# sudo service elasticsearch status -l
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2016-03-11 00:03:03 EST; 9min ago
Docs: http://www.elastic.co
Process: 9710 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=${PID_DIR}/elasticsearch.pid -Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR} -Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 9707 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 9710 (code=exited, status=1/FAILURE)
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 elasticsearch[9710]: network.host: XX.XX.XX.XX
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 elasticsearch[9710]: ^
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 elasticsearch[9710]: expected <block end>, but found BlockMappingStart
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 elasticsearch[9710]: in 'reader', line 67, column 3:
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 elasticsearch[9710]: http.cors.enabled: true
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 elasticsearch[9710]: ^
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 elasticsearch[9710]: at com.fasterxml.jackson.dataformat.yaml.snakeyaml.parser.ParserImpl$ParseBlockM...a:570)
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 systemd[1]: elasticsearch.service: Unit entered failed state.
Mar 11 00:03:03 ubuntu-1gb-sfo1-01 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
My ES config file is below (updated YML after the suggestions below):
# network.bind_host: 127.0.0.1
http.publish_port: 9200
http.port: 9200
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
http.cors.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
Every line must be indented exactly one space from the left. The line http.cors.enabled: true seems to be indented two spaces.
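For reference, this is a sketch of the relevant part of elasticsearch.yml with every key flush against the left margin, which is what the SnakeYAML "expected &lt;block end&gt;, but found BlockMappingStart" error is complaining about (only the CORS and port settings from the post are shown):

```yaml
# All keys must start in column 0 -- stray leading spaces make YAML
# parse the line as the start of a nested block mapping and fail.
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
```

Also note that http.cors.enabled appears twice in the posted config; duplicate keys in the same YAML document can themselves cause a parse failure.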

Freeswitch pocketsphinx won't recognize me

Today I need help with the Pocketsphinx speech recognition, which I use in FreeSWITCH. There is a "pizza demo" which does not work, because the program doesn't "hear" me.
I tried another example with a Lua script, and there too Pocketsphinx does not "hear" me.
So maybe somebody knows what's not working. Because I didn't implement anything myself, I don't know which code I can paste here. If you need some code or configuration, let me know.
My idea: maybe I must set which .dic file Pocketsphinx must use. I hope somebody can help me.
EDIT://
2014-10-14 15:13:08.923330 [NOTICE] switch_channel.c:1055 New Channel sofia/internal/1001#myip [326a4157-aa80-48d2-bd7e-db8d8afd525b]
2014-10-14 15:13:09.042378 [INFO] mod_dialplan_xml.c:558 Processing me <1001>->74992 in context default
2014-10-14 15:13:09.042378 [CRIT] mod_dptools.c:1628 WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
2014-10-14 15:13:09.042378 [CRIT] mod_dptools.c:1628 Open /usr/local/freeswitch/conf/vars.xml and change the default_password.
2014-10-14 15:13:09.042378 [CRIT] mod_dptools.c:1628 Once changed type 'reloadxml' at the console.
2014-10-14 15:13:09.042378 [CRIT] mod_dptools.c:1628 WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
2014-10-14 15:13:19.932900 [INFO] switch_core_media.c:5162 Activating RTCP PORT 4077
2014-10-14 15:13:19.932900 [NOTICE] sofia_media.c:92 Pre-Answer sofia/internal/1001#myip!
2014-10-14 15:13:19.943925 [NOTICE] fssession.cpp:1167 Channel [sofia/internal/1001#myip] has been answered
INFO: cmd_ln.c(691): Parsing command line:
\
-samprate 8000 \
-hmm /usr/local/freeswitch/grammar/model/communicator \
-jsgf /usr/local/freeswitch/grammar/pizza_order.gram \
-lw 6.5 \
-dict /usr/local/freeswitch/grammar/default.dic \
-frate 50 \
-silprob 0.005
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /usr/local/freeswitch/grammar/default.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 50
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /usr/local/freeswitch/grammar/model/communicator
-input_endian little little
-jsgf /usr/local/freeswitch/grammar/pizza_order.gram
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm
-lmctl
-lmname default default
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf -1 -1
-maxnewoov 20 20
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-round_filters yes yes
-samprate 16000 8.000000e+03
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: cmd_ln.c(691): Parsing command line:
\
-alpha 0.97 \
-dither yes \
-doublebw no \
-nfilt 31 \
-ncep 13 \
-lowerf 200 \
-upperf 3500 \
-nfft 256 \
-wlen 0.0256 \
-transform legacy \
-feat s2_4x \
-agc none \
-cmn current \
-varnorm no
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-dither no yes
-doublebw no no
-feat 1s_c_d_dd s2_4x
-frate 100 50
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 0
-logspec no no
-lowerf 133.33334 2.000000e+02
-ncep 13 13
-nfft 512 256
-nfilt 40 31
-remove_dc no no
-round_filters yes yes
-samprate 16000 8.000000e+03
-seed -1 -1
-smoothspec no no
-svspec
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 3.500000e+03
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.560000e-02
INFO: acmod.c(246): Parsed model-specific feature parameters from /usr/local/freeswitch/grammar/model/communicator/feat.params
INFO: fe_interface.c(299): You are using the internal mechanism to generate the seed.
INFO: feat.c(713): Initializing feature stream to type: 's2_4x', ceplen=13, CMN='current', VARNORM='no', AGC='none'
INFO: cmn.c(142): mean[0]= 12.00, mean[1..12]= 0.0
INFO: mdef.c(517): Reading model definition: /usr/local/freeswitch/grammar/model/communicator/mdef
INFO: bin_mdef.c(179): Allocating 104160 * 8 bytes (813 KiB) for CD tree
INFO: tmat.c(205): Reading HMM transition probability matrices: /usr/local/freeswitch/grammar/model/communicator/transition_matrices
INFO: acmod.c(121): Attempting to use SCHMM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /usr/local/freeswitch/grammar/model/communicator/means
INFO: ms_gauden.c(292): 1 codebook, 4 feature, size:
INFO: ms_gauden.c(294): 256x12
INFO: ms_gauden.c(294): 256x24
INFO: ms_gauden.c(294): 256x3
INFO: ms_gauden.c(294): 256x12
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /usr/local/freeswitch/grammar/model/communicator/variances
INFO: ms_gauden.c(292): 1 codebook, 4 feature, size:
INFO: ms_gauden.c(294): 256x12
INFO: ms_gauden.c(294): 256x24
INFO: ms_gauden.c(294): 256x3
INFO: ms_gauden.c(294): 256x12
INFO: ms_gauden.c(354): 59 variance values floored
INFO: s2_semi_mgau.c(903): Loading senones from dump file /usr/local/freeswitch/grammar/model/communicator/sendump
INFO: s2_semi_mgau.c(927): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(990): Rows: 256, Columns: 6256
INFO: s2_semi_mgau.c(1022): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1296): Maximum top-N: 4 Top-N beams: 0 0 0 0
INFO: dict.c(317): Allocating 137549 * 32 bytes (4298 KiB) for word entries
INFO: dict.c(332): Reading main dictionary: /usr/local/freeswitch/grammar/default.dic
INFO: dict.c(211): Allocated 1010 KiB for strings, 1664 KiB for phones
INFO: dict.c(335): 133436 words read
INFO: dict.c(341): Reading filler dictionary: /usr/local/freeswitch/grammar/model/communicator/noisedict
INFO: dict.c(211): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(344): 17 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(404): Allocating 51^3 * 2 bytes (259 KiB) for word-initial triphones
INFO: dict2pid.c(131): Allocated 62832 bytes (61 KiB) for word-final triphones
INFO: dict2pid.c(195): Allocated 62832 bytes (61 KiB) for single-phone word triphones
INFO: fsg_search.c(145): FSG(beam: -1080, pbeam: -1080, wbeam: -634; wip: -26, pip: 0)
INFO: jsgf.c(581): Defined rule: <pizza_order.g00000>
INFO: jsgf.c(581): Defined rule: PUBLIC <pizza_order.delivery>
INFO: fsg_model.c(215): Computing transitive closure for null transitions
INFO: fsg_model.c(270): 9 null transitions added
INFO: fsg_model.c(421): Adding silence transitions for <sil> to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++AE++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++AH++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++BACKGROUND++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++BREATH++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++COUGH++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++EH++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++ER++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++LAUGH++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++MM++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++MUMBLE++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++NOISE++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++OH++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++SMACK++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++UH++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++UH_NOISE++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++UM++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_model.c(421): Adding silence transitions for ++UM_NOISE++ to FSG
INFO: fsg_model.c(441): Added 8 silence word transitions
INFO: fsg_search.c(366): Added 0 alternate word transitions
INFO: fsg_lextree.c(108): Allocated 832 bytes (0 KiB) for left and right context phones
INFO: fsg_lextree.c(253): 213 HMM nodes in lextree (199 leaves)
INFO: fsg_lextree.c(255): Allocated 27264 bytes (26 KiB) for all lextree nodes
INFO: fsg_lextree.c(258): Allocated 25472 bytes (24 KiB) for lextree leafnodes
2014-10-14 15:13:25.442814 [NOTICE] switch_rtp.c:5132 Receiving an RTCP packet[2014-14-09 13:13:25.442953] SSRC[1123956418]RTT[0.001266] A[2683662693] - DLSR[22111] - LSR[2683640499]
INFO: cmn_prior.c(121): cmn_prior_update: from < 8.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >
INFO: cmn_prior.c(139): cmn_prior_update: to < 7.58 0.08 -0.24 -0.08 -0.24 -0.18 -0.21 -0.15 -0.06 -0.18 -0.08 -0.11 -0.11 >
INFO: fsg_search.c(1032): 86 frames, 1666 HMMs (19/fr), 6967 senones (81/fr), 886 history entries (10/fr)
INFO: fsg_search.c(1417): Start node <sil>.0:22:85
INFO: fsg_search.c(1417): Start node <sil>.0:22:55
INFO: fsg_search.c(1417): Start node <sil>.0:22:85
INFO: fsg_search.c(1417): Start node <sil>.0:22:55
INFO: fsg_search.c(1417): Start node <sil>.0:22:85
INFO: fsg_search.c(1417): Start node takeout.0:21:33
INFO: fsg_search.c(1417): Start node pickup.0:19:71
INFO: fsg_search.c(1456): End node <sil>.56:58:85 (-1076)
INFO: fsg_search.c(1456): End node <sil>.56:58:85 (-1076)
INFO: fsg_search.c(1456): End node <sil>.56:58:85 (-1076)
INFO: fsg_search.c(1456): End node <sil>.26:28:85 (-1180)
INFO: fsg_search.c(1456): End node <sil>.0:22:85 (-6201)
INFO: fsg_search.c(1456): End node <sil>.0:22:85 (-6201)
INFO: fsg_search.c(1456): End node <sil>.0:22:85 (-6201)
INFO: fsg_search.c(1680): lattice start node <s>.0 end node </s>.86
INFO: ps_lattice.c(1365): Normalizer P(O) = alpha(</s>:86:86) = -333411
INFO: ps_lattice.c(1403): Joint P(O,S) = -333414 P(S|O) = -3
2014-10-14 15:13:28.822614 [WARNING] mod_pocketsphinx.c:348 Lost the text, never mind....
2014-10-14 15:13:30.922352 [NOTICE] switch_rtp.c:5132 Receiving an RTCP packet[2014-14-09 13:13:30.922476] SSRC[1123956418]RTT[0.001648] A[2684021799] - DLSR[53573] - LSR[2683968118]
2014-10-14 15:13:36.403317 [NOTICE] switch_rtp.c:5132 Receiving an RTCP packet[2014-14-09 13:13:36.403451] SSRC[1123956418]RTT[0.002731] A[2684381000] - DLSR[85028] - LSR[2684295793]
INFO: fsg_search.c(1032): 149 frames, 1750 HMMs (11/fr), 8700 senones (58/fr), 1006 history entries (6/fr)
INFO: fsg_search.c(1417): Start node <sil>.0:2:90
INFO: fsg_search.c(1417): Start node <sil>.0:2:90
INFO: fsg_search.c(1456): End node <sil>.122:124:148 (-955)
INFO: fsg_search.c(1456): End node <sil>.122:124:148 (-955)
INFO: fsg_search.c(1456): End node <sil>.122:124:148 (-955)
INFO: fsg_search.c(1456): End node pickup.87:107:148 (-4233)
INFO: fsg_search.c(1680): lattice start node <s>.0 end node </s>.149
INFO: ps_lattice.c(1365): Normalizer P(O) = alpha(</s>:149:149) = -927641
INFO: ps_lattice.c(1403): Joint P(O,S) = -927641 P(S|O) = 0
2014-10-14 15:13:41.883453 [NOTICE] switch_rtp.c:5132 Receiving an RTCP packet[2014-14-09 13:13:41.883618] SSRC[1123956418]RTT[0.002487] A[2684740148] - DLSR[116488] - LSR[2684623497]
2014-10-14 15:13:44.732381 [NOTICE] sofia.c:952 Hangup sofia/internal/1001#myip [CS_EXECUTE] [NORMAL_CLEARING]
2014-10-14 15:13:44.732381 [ERR] SpeechTools.jm:368 Exception: Session is not active! (near: " rv = this.asr.session.collectInput(this.asr.onInput, this.asr, 500);")
INFO: fsg_search.c(1032): 33 frames, 377 HMMs (11/fr), 1733 senones (52/fr), 275 history entries (8/fr)
2014-10-14 15:13:44.802526 [INFO] mod_pocketsphinx.c:257 Port Closed.
2014-10-14 15:13:44.823711 [NOTICE] switch_core_session.c:1633 Session 25 (sofia/internal/1001#myip) Ended
2014-10-14 15:13:44.823711 [NOTICE] switch_core_session.c:1637 Close Channel sofia/internal/1001#myip [CS_DESTROY]
EDIT 2:
I found out that the speech recognition works and it detects my speech. So the problem is that in SpeechTools.jm the result from the XML cannot be loaded and is undefined.
body = body.replace(/<\?.*?\?>/g, '');
console_log("debug", "----XML:\n" + body + "\n");
xml = new XML("<xml>" + body + "</xml>");
result = xml.result; //undefined
And here is my output from console_log:
<result grammar="pizza_order">
<interpretation grammar="pizza_order" confidence="100">
<input mode="speech">pickup</input>
</interpretation>
</result>
Okay, the speech recognition worked the whole time (see edit). The real problem is that the whole script (SpeechTools.jm) is not working: they switched from the Mozilla JavaScript engine to Google V8 without updating the script. However, fixing the script is a JavaScript problem and has nothing to do with this question anymore.
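For anyone hitting the same wall: new XML(...) and xml.result are E4X, which SpiderMonkey supported but V8 does not. A minimal workaround sketch (my own, not from SpeechTools.jm) is to pull the fields out of the result string with regular expressions instead; the sample body below is copied from the console_log output above:

```javascript
// E4X-free parsing sketch: extract the recognized input and confidence
// from the pocketsphinx result XML using regular expressions.
function parseAsrResult(body) {
  // Strip <?xml ...?> processing instructions, as the original script does.
  body = body.replace(/<\?.*?\?>/g, '');
  var input = /<input[^>]*>([^<]*)<\/input>/.exec(body);
  var confidence = /confidence="(\d+)"/.exec(body);
  return {
    input: input ? input[1] : null,
    confidence: confidence ? parseInt(confidence[1], 10) : null
  };
}

var body = '<result grammar="pizza_order">' +
  '<interpretation grammar="pizza_order" confidence="100">' +
  '<input mode="speech">pickup</input>' +
  '</interpretation></result>';
var parsed = parseAsrResult(body);
// parsed.input is "pickup", parsed.confidence is 100
```

A regex is fragile for general XML, but the ASR result format here is small and predictable, so it is enough to get the script running again under V8.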

Resources