Most metrics are N/A in HDP sandbox on Docker - macOS

I run the following on macOS Big Sur 11.6:
>>> docker start sandbox-hdp
>>> docker start sandbox-proxy
>>> docker ps
c39991e8397b hortonworks/sandbox-proxy:1.0 "nginx -g 'daemon of…" 7 hours ago Up 15 minutes 0.0.0.0:1080->1080/tcp, :::1080->1080/tcp, 0.0.0.0:1100->1100/tcp, :::1100->1100/tcp, 0.0.0.0:1111->1111/tcp, :::1111->1111/tcp, 0.0.0.0:1988->1988/tcp, :::1988->1988/tcp, 0.0.0.0:2100->2100/tcp, :::2100->2100/tcp, 0.0.0.0:2181-2182->2181-2182/tcp, :::2181-2182->2181-2182/tcp, 0.0.0.0:2201-2202->2201-2202/tcp, :::2201-2202->2201-2202/tcp, 0.0.0.0:2222->2222/tcp, :::2222->2222/tcp, 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp, 0.0.0.0:4040->4040/tcp, :::4040->4040/tcp, 0.0.0.0:4200->4200/tcp, :::4200->4200/tcp, 0.0.0.0:4242->4242/tcp, :::4242->4242/tcp, 0.0.0.0:4557->4557/tcp, :::4557->4557/tcp, 0.0.0.0:5007->5007/tcp, :::5007->5007/tcp, 0.0.0.0:5011->5011/tcp, :::5011->5011/tcp, 0.0.0.0:6001->6001/tcp, :::6001->6001/tcp, 0.0.0.0:6003->6003/tcp, :::6003->6003/tcp, 0.0.0.0:6008->6008/tcp, :::6008->6008/tcp, 0.0.0.0:6080->6080/tcp, :::6080->6080/tcp, 0.0.0.0:6188->6188/tcp, :::6188->6188/tcp, 0.0.0.0:6627->6627/tcp, :::6627->6627/tcp, 0.0.0.0:6667-6668->6667-6668/tcp, :::6667-6668->6667-6668/tcp, 0.0.0.0:7777->7777/tcp, :::7777->7777/tcp, 0.0.0.0:7788->7788/tcp, :::7788->7788/tcp, 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:8005->8005/tcp, :::8005->8005/tcp, 0.0.0.0:8020->8020/tcp, :::8020->8020/tcp, 0.0.0.0:8032->8032/tcp, :::8032->8032/tcp, 0.0.0.0:8040->8040/tcp, :::8040->8040/tcp, 0.0.0.0:8042->8042/tcp, :::8042->8042/tcp, 0.0.0.0:8080-8082->8080-8082/tcp, :::8080-8082->8080-8082/tcp, 0.0.0.0:8086->8086/tcp, :::8086->8086/tcp, 0.0.0.0:8088->8088/tcp, :::8088->8088/tcp, 0.0.0.0:8090-8091->8090-8091/tcp, :::8090-8091->8090-8091/tcp, 0.0.0.0:8188->8188/tcp, :::8188->8188/tcp, 0.0.0.0:8198->8198/tcp, :::8198->8198/tcp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp, 0.0.0.0:8585->8585/tcp, :::8585->8585/tcp, 0.0.0.0:8744->8744/tcp, :::8744->8744/tcp, 0.0.0.0:8765->8765/tcp, :::8765->8765/tcp, 0.0.0.0:8886->8886/tcp, :::8886->8886/tcp, 0.0.0.0:8888-8889->8888-8889/tcp, :::8888-8889->8888-8889/tcp, 0.0.0.0:8983->8983/tcp, :::8983->8983/tcp, 0.0.0.0:8993->8993/tcp, :::8993->8993/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9088-9091->9088-9091/tcp, :::9088-9091->9088-9091/tcp, 0.0.0.0:9995-9996->9995-9996/tcp, :::9995-9996->9995-9996/tcp, 0.0.0.0:10000-10002->10000-10002/tcp, :::10000-10002->10000-10002/tcp, 0.0.0.0:10015-10016->10015-10016/tcp, :::10015-10016->10015-10016/tcp, 0.0.0.0:10500->10500/tcp, :::10500->10500/tcp, 0.0.0.0:10502->10502/tcp, :::10502->10502/tcp, 0.0.0.0:11000->11000/tcp, :::11000->11000/tcp, 0.0.0.0:12049->12049/tcp, :::12049->12049/tcp, 0.0.0.0:12200->12200/tcp, :::12200->12200/tcp, 0.0.0.0:15000->15000/tcp, :::15000->15000/tcp, 0.0.0.0:15002->15002/tcp, :::15002->15002/tcp, 0.0.0.0:15500->15500/tcp, :::15500->15500/tcp, 0.0.0.0:16000->16000/tcp, :::16000->16000/tcp, 0.0.0.0:16010->16010/tcp, :::16010->16010/tcp, 0.0.0.0:16020->16020/tcp, :::16020->16020/tcp, 0.0.0.0:16030->16030/tcp, :::16030->16030/tcp, 0.0.0.0:18080-18081->18080-18081/tcp, :::18080-18081->18080-18081/tcp, 0.0.0.0:19888->19888/tcp, :::19888->19888/tcp, 0.0.0.0:21000->21000/tcp, :::21000->21000/tcp, 0.0.0.0:30800->30800/tcp, :::30800->30800/tcp, 0.0.0.0:33553->33553/tcp, :::33553->33553/tcp, 0.0.0.0:39419->39419/tcp, :::39419->39419/tcp, 0.0.0.0:42111->42111/tcp, :::42111->42111/tcp, 0.0.0.0:50070->50070/tcp, :::50070->50070/tcp, 0.0.0.0:50075->50075/tcp, :::50075->50075/tcp, 0.0.0.0:50079->50079/tcp, :::50079->50079/tcp, 0.0.0.0:50095->50095/tcp, :::50095->50095/tcp, 0.0.0.0:50111->50111/tcp, 
:::50111->50111/tcp, 0.0.0.0:60000->60000/tcp, :::60000->60000/tcp, 0.0.0.0:60080->60080/tcp, :::60080->60080/tcp, 0.0.0.0:61080->61080/tcp, :::61080->61080/tcp, 80/tcp, 0.0.0.0:61888->61888/tcp, :::61888->61888/tcp sandbox-proxy
bbb8ade50614 hortonworks/sandbox-hdp:3.0.1 "/usr/sbin/init" 7 hours ago Up 15 minutes 22/tcp, 4200/tcp, 8080/tcp
I then connect to localhost:8080 using maria_dev as username and password. I get the following view:
Which clearly indicates that nothing is working properly. Is this the expected behavior? If not, what should I do to get everything working?

Yes, this is expected on a fresh start. No services within the Ambari Server/Hadoop cluster start automatically, and there will be no metrics until they do.
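For anyone wondering how to get the services (and therefore the metrics) going: start them from the Ambari web UI, or via the Ambari REST API as in the sketch below. The admin password and the cluster name "Sandbox" are assumptions about the default sandbox setup, so substitute your own values (in the HDP sandbox the Ambari admin password usually has to be set first, e.g. with the ambari-admin-password-reset utility inside the sandbox container).
# start every service in the cluster through the Ambari REST API
curl -u admin:<your-admin-password> -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services
Starting everything in the sandbox can take a while; the dashboard widgets only fill in once the services are up and Ambari Metrics has collected a few data points.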

Related

ESP32-S2-MINI-1 Access Point gives the same address to different clients

I have an ESP32-S2-MINI-1 configured as an access point with 5 IP cameras connected to it.
My problem is that several of the cameras receive the same IP address from the ESP32-S2-MINI-1. For example, in the response to a broadcast request shown below, each camera responds twice, and cameras ACTO018066 and ACTO017101 both have the IP address 192.168.43.3.
Is this a bug in the ESP32-S2-MINI-1?
How can I solve this problem?
The firmware for ESP32-S2-MINI-1 is version:2.1.0.0(0b76313 - ESP32S2 - Aug 20 2020 05:57:43)
SDK version:v4.2-dev-2044-gdd3c032
compile time(b5e1674):Aug 21 2020 05:00:52
Bin version:2.1.0(MINI)
Is a newer release available?
Thanks, Antonio
+IPD,0,524:DH192.168.43.3 255.255.255.0192.168.43.1192.168.43.1ðÈÄÙ~':ACTO018066 ...
+IPD,0,524:DH192.168.43.3 255.255.255.0192.168.43.1192.168.43.1ðÈÄÙ~':ACTO018066 ...
+IPD,0,524:DH192.168.43.2 255.255.255.0192.168.43.1192.168.43.1ðÈÄÎœm3ACTO011614 ...
+IPD,0,524:DH192.168.43.2 255.255.255.0192.168.43.1192.168.43.1ðÈÄÎœm3ACTO011614 ...
+IPD,0,524:DH192.168.43.3 255.255.255.0192.168.43.1192.168.43.1ðÈIJ¤†ñACTO017101 ...
+IPD,0,524:DH192.168.43.3 255.255.255.0192.168.43.1192.168.43.1ðÈIJ¤†ñACTO017101HBJJR ...
+IPD,0,524:DH192.168.43.2 255.255.255.0192.168.43.18.8.8.8192.168.43.1»&P)EnACTO005825 ...
+IPD,0,524:DH192.168.43.2 255.255.255.0192.168.43.18.8.8.8192.168.43.1»&P)EnACTO005825 ...
+IPD,0,524:DH192.168.43.4 255.255.255.0192.168.43.18.8.8.8192.168.43.1»&á2Æ ACTO005665 ...
+IPD,0,524:DH192.168.43.4 255.255.255.0192.168.43.18.8.8.8192.168.43.1»&á2Æ ACTO005665 ...
0,CLOSED
OK
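Not an authoritative fix, but a few AT commands that may help narrow this down. They come from the Espressif ESP-AT command set and their availability and syntax can vary between firmware releases, so treat the lines below as a sketch and check them against the documentation for your build:
AT+GMR
    query the current AT firmware version (compare against the latest ESP-AT release for the ESP32-S2)
AT+CWLIF
    list the stations currently connected to the SoftAP, with the IP the DHCP server believes it assigned to each
AT+CWDHCPS?
    query the SoftAP DHCP server settings (lease time and address range)
AT+CWDHCPS=1,3,"192.168.43.10","192.168.43.50"
    explicitly set the lease time (in minutes) and the pool of addresses the SoftAP may hand out
If two cameras still end up with the same address after the pool is set explicitly and the cameras are power-cycled, that points at the SoftAP's DHCP server rather than at the cameras, and upgrading to a newer ESP-AT release would be the next thing to try.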

Possible reasons for a Groovy program running as a Kubernetes Job dumping threads during execution

I have a simple Groovy script that leverages the GPars library's withPool functionality to launch HTTP GET requests to two internal API endpoints in parallel.
The script runs fine locally, both directly and as a Docker container.
When I deploy it as a Kubernetes Job (in our internal EKS cluster, v1.20), it also runs there, but the moment it hits the first withPool call I see a giant thread dump; execution nevertheless continues and completes successfully.
NOTE: Containers in our cluster run with the following pod security context:
securityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
Environment
# From the k8s job container
groovy@app-271df1d7-15848624-mzhhj:/app$ groovy --version
WARNING: Using incubator modules: jdk.incubator.foreign, jdk.incubator.vector
Groovy Version: 4.0.0 JVM: 17.0.2 Vendor: Eclipse Adoptium OS: Linux
groovy@app-271df1d7-15848624-mzhhj:/app$ ps -ef
UID PID PPID C STIME TTY TIME CMD
groovy 1 0 0 21:04 ? 00:00:00 /bin/bash bin/run-script.sh
groovy 12 1 42 21:04 ? 00:00:17 /opt/java/openjdk/bin/java -Xms3g -Xmx3g --add-modules=ALL-SYSTEM -classpath /opt/groovy/lib/groovy-4.0.0.jar -Dscript.name=/usr/bin/groovy -Dprogram.name=groovy -Dgroovy.starter.conf=/opt/groovy/conf/groovy-starter.conf -Dgroovy.home=/opt/groovy -Dtools.jar=/opt/java/openjdk/lib/tools.jar org.codehaus.groovy.tools.GroovyStarter --main groovy.ui.GroovyMain --conf /opt/groovy/conf/groovy-starter.conf --classpath . /tmp/script.groovy
groovy 116 0 0 21:05 pts/0 00:00:00 bash
groovy 160 116 0 21:05 pts/0 00:00:00 ps -ef
Script (relevant parts)
@Grab('org.codehaus.gpars:gpars:1.2.1')
import static groovyx.gpars.GParsPool.withPool
import groovy.json.JsonSlurper
final def jsl = new JsonSlurper()
//...
while (!(nextBatch = getBatch(batchSize)).isEmpty()) {
    def devThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = dev + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                            useCaches: true, allowUserInteraction: false))
                    devResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    def stgThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = stg + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                            useCaches: true, allowUserInteraction: false))
                    stgResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    devThread.join()
    stgThread.join()
}
Dockerfile
FROM groovy:4.0.0-jdk17 as builder
USER root
RUN apt-get update && apt-get install -yq bash curl wget jq
WORKDIR /app
COPY bin /app/bin
RUN chmod +x /app/bin/*
USER groovy
ENTRYPOINT ["/bin/bash"]
CMD ["bin/run-script.sh"]
The bin/run-script.sh simply downloads the above Groovy script at runtime and executes it.
wget "$GROOVY_SCRIPT" -O "$LOCAL_FILE"
...
groovy "$LOCAL_FILE"
As soon as the execution hits the first call to withPool(poolSize), there's a giant thread dump, but execution continues.
I'm trying to figure out what could be causing this behavior. Any ideas 🤷🏽‍♂️?
Thread dump
For posterity, answering my own question here.
The issue turned out to be this log4j2 JVM hot-patch that we're currently leveraging to fix the recent log4j2 vulnerability. This agent (running as a DaemonSet) patches all running JVMs in all our k8s clusters.
This somehow causes my OpenJDK 17-based app to thread dump. I found the same issue with an Elasticsearch 8.1.0 deployment as well (it also uses a pre-packaged OpenJDK 17). That one is a service, so I could see a thread dump happening pretty much every half hour! Interestingly, there are other JVM services (and some Solr 8 deployments) that don't have this issue 🤷🏽‍♂️.
Anyway, I worked with our devops team to temporarily exclude the node that deployment was running on, and lo and behold, the thread dumps disappeared!
Balance in the universe has been restored 🧘🏻‍♂️.
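For anyone needing a similar stop-gap while the hot-patch agent is investigated, one way to keep a Job's pods off a specific node is a nodeAffinity rule in the Job's pod template. The snippet below is only an illustration of how we kept the workload away from the affected node, not the exact change we made; kubernetes.io/hostname is a standard node label, but the node name is a placeholder for whatever identifies the affected node in your cluster. It goes inside the Job's .spec.template.spec:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: NotIn
              values:
                - <name-of-the-node-to-avoid>
Cordoning the node (kubectl cordon <node>) achieves something similar cluster-wide, since it stops any new pods from being scheduled there.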

Laravel sail artisan test is getting killed

I am trying to run a test using sail artisan test --filter=RestaurantAuthTest --verbose, but the process is getting killed and I don't know why; I get the error below. I am using Laravel Sail on WSL, with PHP 8.0.7 and Laravel 8.40. I haven't had much luck finding related issues online.
⨯ restaurant auth
---
• Tests\Feature\RestaurantAuthTest > restaurant auth
PHPUnit\Framework\Exception
Killed
at vendor/phpunit/phpunit/src/Util/PHP/AbstractPhpProcess.php:270
266▕
267▕ if (!empty($stderr)) {
268▕ $result->addError(
269▕ $test,
➜ 270▕ new Exception(trim($stderr)),
271▕ $time
272▕ );
273▕ } else {
274▕ set_error_handler(
1 vendor/phpunit/phpunit/src/Util/PHP/AbstractPhpProcess.php:187
PHPUnit\Util\PHP\AbstractPhpProcess::processChildResult()
2 vendor/phpunit/phpunit/src/Framework/TestCase.php:883
PHPUnit\Util\PHP\AbstractPhpProcess::runTestJob()
3 vendor/phpunit/phpunit/src/Framework/TestSuite.php:677
PHPUnit\Framework\TestCase::run()
4 vendor/phpunit/phpunit/src/Framework/TestSuite.php:677
PHPUnit\Framework\TestSuite::run()
5 vendor/phpunit/phpunit/src/Framework/TestSuite.php:677
PHPUnit\Framework\TestSuite::run()
6 vendor/phpunit/phpunit/src/TextUI/TestRunner.php:667
PHPUnit\Framework\TestSuite::run()
7 vendor/phpunit/phpunit/src/TextUI/Command.php:143
PHPUnit\TextUI\TestRunner::run()
8 vendor/phpunit/phpunit/src/TextUI/Command.php:96
PHPUnit\TextUI\Command::run()
9 vendor/phpunit/phpunit/phpunit:61
PHPUnit\TextUI\Command::main()
Tests: 1 failed
Time: 395.42s
Thanks
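Not an answer, but a debugging note in case it helps someone with the same symptom: a bare "Killed" bubbling up through PHPUnit's child process usually means the operating system terminated the PHP process, and on WSL 2 that is very often the out-of-memory killer in the Linux VM. A rough way to check, assuming a default Sail setup:
# watch the Sail containers' memory use while the test runs
docker stats

# afterwards, look for OOM-killer messages in the kernel log
# (run this in the WSL distribution, not inside the container)
dmesg | grep -i -E 'out of memory|killed process'
If memory is the culprit, giving the WSL 2 VM more RAM via the Windows-side .wslconfig file (the memory= setting under [wsl2]) or reducing what the test boots are common next steps.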

Problems connecting to Raspberry Pi 3B+ using USB to Serial

I am trying to set up SSH over USB to Serial.
I am using:
- MacBook Pro (OS X 10.15.1)
- Raspberry Pi 3 Model B+ (Raspbian 9.11)
- EVISWIY PL2303TA USB to TTL Serial Cable Debug Console Cable for Raspberry Pi 3 (Amazon Link)
I have installed the cable driver; however, I did notice when I downloaded it that they did not mention support for Catalina (Driver Download), although it does list support for High Sierra 10.15...
The cable is plugged in like so (I'm colorblind, so just making sure I didn't make a mistake there):
I am running ls /dev and getting the following back:
afsc_type5 pf
auditpipe pfm
auditsessions profile
autofs ptmx
autofs_control ptyp0
autofs_homedirmounter ptyp1
autofs_notrigger ptyp2
autofs_nowait ptyp3
bpf0 ptyp4
bpf1 ptyp5
bpf10 ptyp6
bpf100 ptyp7
bpf101 ptyp8
bpf102 ptyp9
bpf103 ptypa
bpf104 ptypb
bpf105 ptypc
bpf106 ptypd
bpf107 ptype
bpf108 ptypf
bpf109 ptyq0
bpf11 ptyq1
bpf110 ptyq2
bpf111 ptyq3
bpf112 ptyq4
bpf113 ptyq5
bpf114 ptyq6
bpf115 ptyq7
bpf116 ptyq8
bpf117 ptyq9
bpf118 ptyqa
bpf119 ptyqb
bpf12 ptyqc
bpf120 ptyqd
bpf121 ptyqe
bpf122 ptyqf
bpf123 ptyr0
bpf124 ptyr1
bpf125 ptyr2
bpf126 ptyr3
bpf127 ptyr4
bpf128 ptyr5
bpf129 ptyr6
bpf13 ptyr7
bpf130 ptyr8
bpf131 ptyr9
bpf132 ptyra
bpf133 ptyrb
bpf134 ptyrc
bpf135 ptyrd
bpf136 ptyre
bpf137 ptyrf
bpf138 ptys0
bpf139 ptys1
bpf14 ptys2
bpf140 ptys3
bpf141 ptys4
bpf142 ptys5
bpf143 ptys6
bpf144 ptys7
bpf145 ptys8
bpf146 ptys9
bpf147 ptysa
bpf148 ptysb
bpf149 ptysc
bpf15 ptysd
bpf150 ptyse
bpf151 ptysf
bpf152 ptyt0
bpf153 ptyt1
bpf154 ptyt2
bpf155 ptyt3
bpf156 ptyt4
bpf157 ptyt5
bpf158 ptyt6
bpf159 ptyt7
bpf16 ptyt8
bpf160 ptyt9
bpf161 ptyta
bpf162 ptytb
bpf163 ptytc
bpf164 ptytd
bpf165 ptyte
bpf166 ptytf
bpf167 ptyu0
bpf168 ptyu1
bpf169 ptyu2
bpf17 ptyu3
bpf170 ptyu4
bpf171 ptyu5
bpf172 ptyu6
bpf173 ptyu7
bpf174 ptyu8
bpf175 ptyu9
bpf176 ptyua
bpf177 ptyub
bpf178 ptyuc
bpf179 ptyud
bpf18 ptyue
bpf180 ptyuf
bpf181 ptyv0
bpf182 ptyv1
bpf183 ptyv2
bpf184 ptyv3
bpf185 ptyv4
bpf186 ptyv5
bpf187 ptyv6
bpf188 ptyv7
bpf189 ptyv8
bpf19 ptyv9
bpf190 ptyva
bpf191 ptyvb
bpf192 ptyvc
bpf193 ptyvd
bpf194 ptyve
bpf195 ptyvf
bpf196 ptyw0
bpf197 ptyw1
bpf198 ptyw2
bpf199 ptyw3
bpf2 ptyw4
bpf20 ptyw5
bpf200 ptyw6
bpf201 ptyw7
bpf202 ptyw8
bpf203 ptyw9
bpf204 ptywa
bpf205 ptywb
bpf206 ptywc
bpf207 ptywd
bpf208 ptywe
bpf209 ptywf
bpf21 random
bpf210 rdisk0
bpf211 rdisk0s1
bpf212 rdisk0s2
bpf213 rdisk0s3
bpf214 rdisk1
bpf215 rdisk1s1
bpf216 rdisk1s2
bpf217 rdisk1s3
bpf218 rdisk1s4
bpf219 rdisk1s5
bpf22 rdisk2
bpf220 sdt
bpf221 stderr
bpf222 stdin
bpf223 stdout
bpf224 systrace
bpf225 tty
bpf226 tty.Bluetooth-Incoming-Port
bpf227 tty.usbserial
bpf228 tty.usbserial-1420
bpf229 ttyp0
bpf23 ttyp1
bpf230 ttyp2
bpf231 ttyp3
bpf232 ttyp4
bpf233 ttyp5
bpf234 ttyp6
bpf235 ttyp7
bpf236 ttyp8
bpf237 ttyp9
bpf238 ttypa
bpf239 ttypb
bpf24 ttypc
bpf240 ttypd
bpf241 ttype
bpf242 ttypf
bpf243 ttyq0
bpf244 ttyq1
bpf245 ttyq2
bpf246 ttyq3
bpf247 ttyq4
bpf248 ttyq5
bpf249 ttyq6
bpf25 ttyq7
bpf250 ttyq8
bpf251 ttyq9
bpf252 ttyqa
bpf253 ttyqb
bpf254 ttyqc
bpf255 ttyqd
bpf26 ttyqe
bpf27 ttyqf
bpf28 ttyr0
bpf29 ttyr1
bpf3 ttyr2
bpf30 ttyr3
bpf31 ttyr4
bpf32 ttyr5
bpf33 ttyr6
bpf34 ttyr7
bpf35 ttyr8
bpf36 ttyr9
bpf37 ttyra
bpf38 ttyrb
bpf39 ttyrc
bpf4 ttyrd
bpf40 ttyre
bpf41 ttyrf
bpf42 ttys0
bpf43 ttys000
bpf44 ttys001
bpf45 ttys002
bpf46 ttys003
bpf47 ttys004
bpf48 ttys1
bpf49 ttys2
bpf5 ttys3
bpf50 ttys4
bpf51 ttys5
bpf52 ttys6
bpf53 ttys7
bpf54 ttys8
bpf55 ttys9
bpf56 ttysa
bpf57 ttysb
bpf58 ttysc
bpf59 ttysd
bpf6 ttyse
bpf60 ttysf
bpf61 ttyt0
bpf62 ttyt1
bpf63 ttyt2
bpf64 ttyt3
bpf65 ttyt4
bpf66 ttyt5
bpf67 ttyt6
bpf68 ttyt7
bpf69 ttyt8
bpf7 ttyt9
bpf70 ttyta
bpf71 ttytb
bpf72 ttytc
bpf73 ttytd
bpf74 ttyte
bpf75 ttytf
bpf76 ttyu0
bpf77 ttyu1
bpf78 ttyu2
bpf79 ttyu3
bpf8 ttyu4
bpf80 ttyu5
bpf81 ttyu6
bpf82 ttyu7
bpf83 ttyu8
bpf84 ttyu9
bpf85 ttyua
bpf86 ttyub
bpf87 ttyuc
bpf88 ttyud
bpf89 ttyue
bpf9 ttyuf
bpf90 ttyv0
bpf91 ttyv1
bpf92 ttyv2
bpf93 ttyv3
bpf94 ttyv4
bpf95 ttyv5
bpf96 ttyv6
bpf97 ttyv7
bpf98 ttyv8
bpf99 ttyv9
console ttyva
cu.Bluetooth-Incoming-Port ttyvb
cu.usbserial ttyvc
cu.usbserial-1420 ttyvd
disk0 ttyve
disk0s1 ttyvf
disk0s2 ttyw0
disk0s3 ttyw1
disk1 ttyw2
disk1s1 ttyw3
disk1s2 ttyw4
disk1s3 ttyw5
disk1s4 ttyw6
disk1s5 ttyw7
disk2 ttyw8
dtrace ttyw9
dtracehelper ttywa
fbt ttywb
fd ttywc
fsevents ttywd
io8log ttywe
io8logmt ttywf
io8logtemp urandom
klog vboxdrv
lockstat vboxdrvu
machtrace vboxnetctl
null xcpm
oslog zero
oslog_stream
So... from that result, and after consulting many guides for accomplishing this task, I have been trying to use the screen command on the following:
cu.usbserial-1420
cu.usbserial
tty.usbserial-1420
tty.usbserial
I've been running it as follows:
screen /dev/[INSERT ONE HERE] 115200
That seems to be the baud rate suggested in multiple guides, but I've also tried 9600 and 115600, as I saw those both mentioned in regard to RPis a few times.
The best result I ever get is an empty terminal window with a grey block cursor:
I've tried disabling System Integrity Protection because one Adafruit tutorial mentioned it. No change.
I have also enabled the serial interface in the raspi-config menu.
Any direction would be greatly appreciated. I'm getting the feeling it's the driver because I can't find any other ideas... but I'm hoping that's not the case.
--EDIT--
I read somewhere that This Driver is the correct one for newer models of this device. After installing it, there is no change.
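Not a definitive answer, but one thing worth double-checking on the Pi side before blaming the driver: on a 3B+ the full UART is claimed by Bluetooth, and a blank screen session is a classic symptom of the serial console not actually being enabled on the Pi. The usual settings, which raspi-config should have written but which are easy to verify by mounting the SD card's boot partition, look roughly like this:
# In /boot/config.txt on the Pi's SD card:
enable_uart=1

# And /boot/cmdline.txt (a single line) should contain a serial console entry such as:
#   console=serial0,115200
# alongside whatever else is already on that line.
With that in place, pressing Enter in the screen session normally brings up a login prompt; if it still stays blank, swapped TX/RX wires (easy to cross) or the Mac-side driver become the likelier suspects.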

My Nagios instance is running notify-service-by-email 4 times for every iteration

I created my own bash script for notify-service-by-email. The problem is that every time an alert is triggered, Nagios runs this script exactly 4 times instead of once.
I'm running Nagios 3.5.1-1 on Red Hat 6.4.
commands.cfg
define command{
    command_name    notify-service-by-email
    command_line    /home/nagios/scripts/notify_by_email/notify.bash "$NOTIFICATIONTYPE$" "$SERVICEDESC$" "$HOSTNAME$" "$SERVICESTATE$" "$LONGDATETIME$" "$SERVICEOUTPUT$" "$CONTACTEMAIL$"
}
When I ran the script manually from the command line, it ran once, so it's not a loop in the script.
I tried searching for a suspicious entry in the main config file, but with no success.
nagios.cfg
log_file=/var/log/nagios/nagios.log
cfg_file=/etc/nagios/objects/commands.cfg
cfg_file=/etc/nagios/objects/commands_manual.cfg
cfg_file=/etc/nagios/objects/contacts.cfg
cfg_file=/etc/nagios/objects/timeperiods.cfg
cfg_file=/etc/nagios/objects/templates.cfg
cfg_file=/etc/nagios/objects/services_prod.cfg
cfg_file=/etc/nagios/objects/services_uat.cfg
cfg_file=/etc/nagios/objects/services_actimize.cfg
cfg_dir=/etc/nagios/servers
cfg_dir=/etc/nagios/objects/SC4
object_cache_file=/var/log/nagios/objects.cache
precached_object_file=/var/log/nagios/objects.precache
resource_file=/etc/nagios/private/resource.cfg
status_file=/var/log/nagios/status.dat
status_update_interval=10
nagios_user=nagios
nagios_group=nagios
check_external_commands=1
command_check_interval=-1
command_file=/var/spool/nagios/cmd/nagios.cmd
external_command_buffer_slots=4096
lock_file=/var/run/nagios.pid
temp_file=/var/log/nagios/nagios.tmp
temp_path=/tmp
event_broker_options=-1
log_rotation_method=d
log_archive_path=/var/log/nagios/archives
use_syslog=1
log_notifications=1
log_service_retries=1
log_host_retries=1
log_event_handlers=1
log_initial_states=0
log_external_commands=1
log_passive_checks=1
service_inter_check_delay_method=s
max_service_check_spread=30
service_interleave_factor=s
host_inter_check_delay_method=s
max_host_check_spread=30
max_concurrent_checks=0
check_result_reaper_frequency=10
max_check_result_reaper_time=30
check_result_path=/var/log/nagios/spool/checkresults
max_check_result_file_age=3600
cached_host_check_horizon=15
cached_service_check_horizon=15
enable_predictive_host_dependency_checks=1
enable_predictive_service_dependency_checks=1
soft_state_dependencies=0
auto_reschedule_checks=0
auto_rescheduling_interval=30
auto_rescheduling_window=180
sleep_time=0.25
service_check_timeout=60
host_check_timeout=30
event_handler_timeout=30
notification_timeout=30
ocsp_timeout=5
perfdata_timeout=5
retain_state_information=1
state_retention_file=/var/log/nagios/retention.dat
retention_update_interval=60
use_retained_program_state=1
use_retained_scheduling_info=1
retained_host_attribute_mask=0
retained_service_attribute_mask=0
retained_process_host_attribute_mask=0
retained_process_service_attribute_mask=0
retained_contact_host_attribute_mask=0
retained_contact_service_attribute_mask=0
interval_length=60
check_for_updates=1
bare_update_check=0
use_aggressive_host_checking=0
execute_service_checks=1
accept_passive_service_checks=1
execute_host_checks=1
accept_passive_host_checks=1
enable_notifications=1
enable_event_handlers=1
process_performance_data=1
host_perfdata_file=/usr/local/pnp4nagios/var/host-perfdata
service_perfdata_file=/usr/local/pnp4nagios/var/service-perfdata
host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tHOSTOUTPUT::$HOSTOUTPUT$
service_perfdata_file_template=HOSTNAME:$HOSTNAME$:\tTIME:$DATE$ $TIME$:\tSERVICEDESC:$SERVICEDESC$:\tSERVICEPERFDATA:$SERVICEPERFDATA$:\tSERVICEOUTPUT:$SERVICEOUTPUT$:
host_perfdata_file_mode=a
service_perfdata_file_mode=a
host_perfdata_file_processing_interval=15
service_perfdata_file_processing_interval=15
host_perfdata_file_processing_command=process-host-perfdata-file
service_perfdata_file_processing_command=process-service-perfdata-file
obsess_over_services=0
obsess_over_hosts=0
translate_passive_host_checks=0
passive_host_checks_are_soft=0
check_for_orphaned_services=1
check_for_orphaned_hosts=1
check_service_freshness=1
service_freshness_check_interval=60
service_check_timeout_state=c
check_host_freshness=0
host_freshness_check_interval=60
additional_freshness_latency=15
enable_flap_detection=1
low_service_flap_threshold=5.0
high_service_flap_threshold=20.0
low_host_flap_threshold=5.0
high_host_flap_threshold=20.0
date_format=us
p1_file=/usr/sbin/p1.pl
enable_embedded_perl=1
use_embedded_perl_implicitly=1
illegal_object_name_chars=`~!$%^&*|'"<>?,()=
illegal_macro_output_chars=`~$&|'"<>
use_regexp_matching=0
use_true_regexp_matching=0
admin_email=super#secret
admin_pager=super#secret
daemon_dumps_core=0
use_large_installation_tweaks=1
enable_environment_macros=1
debug_level=0
debug_verbosity=1
debug_file=/var/log/nagios/nagios.debug
max_debug_file_size=1000000
Have you encountered a similar issue? What else should I check?
Nagios executes the notify-service-by-email command once for every defined contact. In my case I had 4 contacts (e-mail addresses) in the contact group. After I defined a single e-mail address, my problem was resolved.
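To illustrate with a sketch (not the poster's actual configuration, and trimmed of the notification period/option directives a real contact needs): with four contacts in the group, the notification command fires once per contact, i.e. four times per alert. Collapsing them into a single contact whose e-mail is a team alias keeps the same recipients but runs the script only once.
# Four individual contacts in the group -> notify-service-by-email runs four times per alert
define contact{
    contact_name                    ops1
    email                           ops1@example.com
    service_notification_commands   notify-service-by-email
}
# (ops2, ops3 and ops4 defined the same way)

define contactgroup{
    contactgroup_name               admins
    alias                           Operations team
    members                         ops1,ops2,ops3,ops4
}

# Single contact pointing at a distribution list -> the script runs once per alert
define contact{
    contact_name                    ops-team
    email                           ops-team@example.com
    service_notification_commands   notify-service-by-email
}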
