Datadog agent does not send data - amazon-ec2

I ran into an issue with my Datadog agent. I installed Agent version 7.35.0 on an EC2 Ubuntu machine. After I restarted the agent I got this error:
Apr 10 11:24:24 ip-10-100-0-33 agent[9951]: 2022-04-10 11:24:24 UTC | CORE | WARN |(pkg/collector/python/datadog_agent.go:124 in LogMessage) | disk:e5dffb8bef24336f |(disk.py:136) | Unable to get disk metrics for /sys/kernel/debug/tracing: [Errno 13] Permission denied: '/sys/kernel/debug/tracing'. You can exclude this mountpoint in the settings if it is invalid.
From what I've seen in other threads, the suggested answer is:
Can you add "tracefs" to the "file_system_blacklist" configuration to see if that unblocks you? We can add it by default if it does.
But I do not completely understand this answer, and I am not sure what I should change to fix this issue.
If anyone has experienced this kind of thing and can help me, it would be super helpful.
Thank you!

With Datadog Agent 7:
mv /etc/datadog-agent/conf.d/disk.d/conf.yaml.default /etc/datadog-agent/conf.d/disk.d/conf.yaml
Then in /etc/datadog-agent/conf.d/disk.d/conf.yaml, uncomment file_system_global_exclude and underneath it add - tracefs:
init_config:
  file_system_global_exclude:
    - tracefs
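After saving the file, restart the agent so the disk check picks up the new configuration, then confirm it is running cleanly. A minimal sketch, assuming a systemd-based Ubuntu host:
sudo systemctl restart datadog-agent
sudo datadog-agent status
The warning about /sys/kernel/debug/tracing should no longer appear in the agent logs.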

Related

Caddy not working in api-platform 2.6.4 distribution - panic: proto: file "pb.proto" is already registered

When I try to use api-platform version 2.6.4 I am not able to run it. When I build and start the containers and check the logs, Caddy is not working and I get an error like this. Any idea? Caddy version is 2.3.0.
caddy_1 | panic: proto: file "pb.proto" is already registered
caddy_1 | See https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict
tureality_caddy_1 exited with code 2
Other people have reported having this bug and I had it too.
Fortunately, the bug has just been fixed by Dunglas himself. :)
https://github.com/api-platform/api-platform/issues/1881#issuecomment-822663193
The fix was made at the Mercure level and not in the API Platform source code itself, so you can keep your current version.
You just have to run docker-compose up again and it will work.
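If the fix does not seem to be picked up, pulling fresh images before bringing the stack back up usually helps; a sketch, assuming a standard docker-compose.yml setup:
docker-compose pull
docker-compose up -d --build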

sctp_core_destroy(): SCTP API not initialized on Kamailio start

Hi, I have installed Kamailio. It starts the first time, but when I stop and start it again it gives sctp_core_destroy(): SCTP API not initialized. I have already installed the SCTP module.
yyerror_at(): parse error in config file /etc/kamailio/kamailio.cfg
load_module(): could not find module <db_mysql> in </usr/lib/kamailio/modules>
[sctp_core.c:53]: sctp_core_destroy(): SCTP API not initialized
From the log it is obvious that you have successfully compiled & installed the SCTP module; however, it could NOT be initialized.
Note that this error is, more often than not, the result of other errors in your cfg file.
A few tips:
Run kamailio -c to be sure there is NO error in your cfg.
Found an error? Use this command to monitor what the exact issue is. Run it in a different terminal: tail -fn200 /var/log/syslog
In the second terminal, try restarting your Kamailio server: sudo service kamailio restart
Go back to terminal 1 and look for the first line with CRITICAL output, like the one below: CRITICAL: <core> [core/cfg.y:3413]: yyerror_at(): parse error in config file /usr/local/etc/kamailio/kamailio.cfg, line 366, column 41: syntax error
Line 366 is most likely the issue, so open the file at that line (366) to fix the problem:
sudo nano +366 /usr/local/etc/kamailio/kamailio.cfg
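One more thing worth checking, since the log also shows load_module(): could not find module <db_mysql>: that module is usually shipped in a separate package. A sketch for Debian/Ubuntu (the package name is an assumption, verify it for your distribution):
sudo apt-get install kamailio-mysql-modules   # provides db_mysql.so
kamailio -c                                   # re-check the config once the module is installed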
Let me know if it helps

Debian 6 - wget fails: Connection timed out for a specific URL

I am trying to download from a specific URL using the wget command on a Debian 6 server, as follows:
# wget http://ftp.ruby-lang.org/pub/ruby/2.1/ruby-2.1.2.tar.gz
the result:
--2016-05-25 16:39:15--  http://ftp.ruby-lang.org/pub/ruby/2.1/ruby-2.1.2.tar.gz
Resolving ftp.ruby-lang.org... 221.186.184.75
Connecting to ftp.ruby-lang.org|221.186.184.75|:80... failed: Connection timed out.
However, if I access the URL using the browser, it can be accessed normally.
I know I could just download it through the browser instead.
I just want to understand the cause of the problem, which is a new thing for me.
So, why did this happen? Is it due to iptables, a proxy, or something else I really do not understand?
I hope someone can help me solve this problem.
Thanks in advance...
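A few checks that usually narrow this down, offered as a sketch based on the possible causes mentioned in the question (the proxy address below is a placeholder):
env | grep -i proxy     # the browser may be using a proxy that wget does not know about
iptables -L OUTPUT -n   # look for rules blocking outbound port 80
wget -e use_proxy=yes -e http_proxy=http://proxy.example.com:8080 http://ftp.ruby-lang.org/pub/ruby/2.1/ruby-2.1.2.tar.gz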

mvn appengine:update will not deploy due to permissions error

I am trying to deploy a basic App Engine web app with Maven.
As a part of the deployment process, I am required to authenticate via a web browser.
I am using two different Google accounts: one for home, one for work. When Maven opened up the browser tab to ask me to authenticate, it selected the wrong account. I didn't notice this and clicked the "Allow" button.
This account does not have the right credentials, so I got an access denied error.
😈 >mvn appengine:update
...
Beginning interaction for module default...
Apr 01, 2016 4:47:32 PM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=maven-1268&version=1&
403 Forbidden
You do not have permission to modify this app (app_id=u's~maven-1268').
This is try #0
Apr 01, 2016 4:47:32 PM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=maven-1268&version=1&
403 Forbidden
You do not have permission to modify this app (app_id=u's~maven-1268').
This is try #1
Apr 01, 2016 4:47:32 PM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=maven-1268&version=1&
403 Forbidden
You do not have permission to modify this app (app_id=u's~maven-1268').
This is try #2
Apr 01, 2016 4:47:33 PM com.google.appengine.tools.admin.AbstractServerConnection send1
WARNING: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=maven-1268&version=1&
403 Forbidden
You do not have permission to modify this app (app_id=u's~maven-1268').
This is try #3
So I think "no biggie", I'll just run it again. Somehow I'll get Maven to select the correct account (maybe I'll temporarily log out of the incorrect one) and that will solve the problem.
Unfortunately, I am no longer being prompted to authenticate. It just keeps giving me access denied errors.
I am presuming there is a file somewhere on the file system that I need to delete in order to get prompted for my authorization again.
Does anyone know where this file is?
UPDATE
I tried completely recreating my project from scratch in a different directory, and I still get the access denied errors.
By running this command ...
mvn help:describe -Dplugin=appengine -Ddetail
I have discovered that there is an additional parameter that I can pass to the update goal that will do exactly what I need it to do, but I don't know the correct syntax to actually pass this additional parameter.
appengine:update
  Description: Create or update an app version.
  Implementation: com.google.appengine.appcfg.Update
  Language: java
  Before this mojo executes, it will call:
    Phase: 'package'
  Available parameters:
    additionalParams
      User property: appengine.additionalParams
      Additional parameters to pass through to AppCfg.
    noCookies
      User property: appengine.noCookies
      Do not save/load access credentials to/from disk.
I think this might be the correct syntax ...
😈 >mvn appengine:update -DadditionalParams="--noCookies"
However, this does NOT solve the problem as the update seems to ignore the parameter.
I fixed the error by running this command before the mvn appengine:update command:
rm ~/.appcfg_oauth2_tokens_java
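After removing that token file, re-running the deploy should open the browser authentication prompt again so the correct account can be selected:
mvn appengine:update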
I was able to solve this problem by using the appcfg.sh tool instead of maven.
😈 >appcfg.sh --no_cookies update /path/to/maven/project/first_project_second_try/guestbook/target/guestbook-1.0-SNAPSHOT
I suspect that it is possible to do this with Maven as well, but I am uncertain how to pass the "--no_cookies" option to Maven.
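For what it's worth, given the noCookies user property listed in the mvn help:describe output above, passing it straight through Maven would presumably look like this (an untested sketch, not a confirmed fix):
mvn appengine:update -Dappengine.noCookies=true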

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC which allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS it gives an error that libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command
ant compile-libhdfs -Dlibhdfs=1
and it gives me the error
target "compile-libhdfs" does not exist in the project "hadoop"
I also tried one more command
ant compile-c++-libhdfs -Dlibhdfs=1
and it gives the error
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check in HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your Hadoop install location, and arch is the machine's architecture (i386-32 or amd64-64).
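For example, a quick way to locate the library is a find over the install tree (a sketch; substitute your actual HADOOP_LOCATION for the path shown, which is taken from the log above):
find /home/hadoop/hadoop-0.20.203.0 -name 'libhdfs.so*' 2>/dev/null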
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
good luck.
