I have a problem sending a downstream to a gateway - shell

I'm using a Dragino DLOS8 gateway and a Dragino LT-22222-L end node. I wrote a script to read and display the values on my end node's inputs, but I couldn't control my relays. I found an example script (in a Dragino article titled "Communication with ABP End Node") showing this function to control them (it controlled the digital outputs, but I changed it to the relays):
echo "${DEV_2},imme,hex,030101" > /var/iot/push/down
I even tried a more fully specified one:
echo "${DEV_1},imme,hex,030101,20,1,SF12,869525000,1" > /var/iot/push/down
The article indicates that I have to create a file in the directory /var/iot/push for downstream purposes. I tried with WinSCP and with the command touch down, but the file was deleted a few seconds later. If anyone has used these devices or knows about this, please help me.

I had a similar problem with Dragino, also with the device log files in the /var/iot/channels directory. I got information from Dragino support that those files are "consumed" by the MQTT and TCP processes, and so are periodically deleted: I understand that the LoRaWAN, MQTT, or TCP application has to work with those files as soon as they are generated.
Notice that "imme" sends the downstream immediately, which suits Class C devices; maybe "time" (downstream after receiving data from the node) is better for your application.
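For example, the same downstream rewritten with "time" instead of "imme" (the device variable and hex payload are carried over from the commands above, so treat this as a sketch):

echo "${DEV_2},time,hex,030101" > /var/iot/push/down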

Related

Implement multiple clients reading a file, and multiple servers writing to a file, via a client-server model

Below is the question I was asked in an interview.
A datacenter has 10,000 servers. We have a single syslog driver which collates all the logs from all the servers in the datacenter and writes them to a single file called syslog.log.
Let's say the datacenter has 1,000 admins. At any point in time, any admin can log in to the syslog server and invoke a command, say:
getlog --serverid --severity
The command should continuously tail the logs matching the conditions provided by the user until the user interrupts it.
Any number of users can concurrently log in to this server and run this command. Each request should be honoured, but with one condition: at any given point in time there can be only one open file descriptor for the syslog.log file.
Implement getlog such that it satisfies the above condition.
I described my approach as a critical-section problem: we can use a mutex/semaphore to lock the file until a user finishes. But the interviewer was expecting something like a client-server model.
How can this functionality be served using a client-server architecture?
What is the best approach to solve this?
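For illustration only, here is a minimal shell sketch of the client-server idea. The paths, the spool directory, and the assumed log line format ("<serverid> <severity> <message>") are all hypothetical. A single server process holds the only open descriptor on syslog.log and fans every line out to per-client spool files; getlog is a thin client that never opens syslog.log itself.

#!/bin/sh
# logserverd.sh -- hypothetical sketch: the ONLY process that ever opens
# syslog.log, which is what satisfies the single-descriptor constraint.
SPOOLDIR=/tmp/getlog
mkdir -p "$SPOOLDIR"
tail -F /var/log/syslog.log | while IFS= read -r line; do
  # Fan each line out to every registered client spool file.
  for f in "$SPOOLDIR"/*.spool; do
    [ -f "$f" ] && printf '%s\n' "$line" >> "$f"
  done
done

#!/bin/sh
# getlog.sh -- hypothetical client: getlog.sh <serverid> <severity>
# It registers a spool file and tails that, so it adds no descriptor on
# syslog.log no matter how many admins run it concurrently.
SPOOLDIR=/tmp/getlog
spool="$SPOOLDIR/$$.spool"
: > "$spool"                        # register with the server
trap 'rm -f "$spool"' EXIT INT TERM # deregister on interrupt
tail -f "$spool" | grep --line-buffered "^$1 $2 "

In a real design the fan-out would more likely be done over sockets; the spool files just keep the sketch simple, at the cost of unbounded growth while a client is attached.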

NiFi error: Failed to establish the connection with AMQP broker

I am trying to read the data from the following CAP files (everything from the alerts folder):
http://dd2.weather.gc.ca/alerts/cap/20180205/CWHX/14/
I am using AMQP from http://metpx.sourceforge.net, and when I try to connect to the subscriber from NiFi, I get the following error:
Failed to establish the connection with AMQP broker
This is my cap.conf file:
broker amqp://anonymous:anonymous@dd.weather.gc.ca
directory /data
subtopic alerts.cap.#
accept .*
mirror True
Over the summer, the broker migrated to SSL, so the current URL would be: amqps://anonymous:anonymous@dd.weather.gc.ca
The web page has also moved to: https://github.com/MetPX/sarracenia
The best practice is to put the authentication info in ~/.config/sarra/credentials.conf, with a line like:
amqps://anonymous:anonymous@dd.weather.gc.ca
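From a shell, that could be done like this (the path and URL come from the lines above; keep the file private since it holds credentials):

mkdir -p ~/.config/sarra
echo 'amqps://anonymous:anonymous@dd.weather.gc.ca' >> ~/.config/sarra/credentials.conf
chmod 600 ~/.config/sarra/credentials.conf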
Installing a version from the past year is likely to be a much better experience. It now comes with sample configurations, one of them being ddc_cap-xml.conf, which is the same data you are trying to download.
So the work is:
blacklab% sr_subscribe add ddc_cap-xml.conf
blacklab% sr_subscribe edit ddc_cap-xml.conf
# Change the directory option to suit your case.
blacklab% sr_subscribe foreground ddc_cap-xml.conf
And it should work. It could take many hours to prove it because this particular set (severe weather warnings in Common Alerting Protocol format) is produced only when needed, rather than continuously. (Use start instead of foreground to run as a background daemon.)
To test things, it might be easier to start with dd_swob, which will be a continuous feed.
blacklab% sr_subscribe list dd_swob
broker amqp://anonymous@dd.weather.gc.ca
exchange xpublic
#msg_skip_threshold 60
#on_msg ../msg_skip_old.py
subtopic observations.swob-ml.#
accept .*
In this configuration you need to add a directory option just before the accept line, and it should start downloading data immediately. Once you know it works, switch back to the data set you actually want.
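With the directory option added, the configuration might look like this (the /data path is carried over from the cap.conf above; adjust it to suit your case):

broker amqp://anonymous@dd.weather.gc.ca
exchange xpublic
subtopic observations.swob-ml.#
directory /data
accept .*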

OpenNMS alert when a folder is not empty?

I'm trying to create an OpenNMS alert when a certain folder ISN'T empty but can't seem to find a way of doing it. Any ideas?
I assume you have a service which goes down if your folder is empty. By default, notifications are turned off; once they are enabled, every service-down event is notified. You can be more granular by filtering on nodes and services. The default setting sends a mail to the admin user, so set a mail address on the admin user. To configure access to your mail server, edit javamail-configuration.properties. I'm just trying to figure out where exactly you are stuck.
One approach could be to poll the directory for the empty condition with an agent on your host system and expose the status, e.g. via Net-SNMP. You can then create a service using the SNMP monitor to poll the exposed OID, and create a mail notification for that particular service.
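One hedged way to do that with Net-SNMP is the extend directive in snmpd.conf, pointing at a small check script such as the 0/1 sketch shown after the links below (the label and path here are assumptions):

# in snmpd.conf (label and script path are assumptions)
extend dir-empty /usr/local/bin/check_dir_empty.sh
# the script's output is then exposed under, e.g.:
# NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."dir-empty"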
Yes, this can be done. I have performed similar tasks using simple Perl and Bash scripts on Linux.
OpenNMS allows you to create polling configurations based on scripts. Your script is expected to output "0" or "1", with 0 representing "OK" and 1 representing "Not OK".
You could use the GeneralPurposePoller:
https://wiki.opennms.org/wiki/GeneralPurposePoller
However, it seems that you should instead use the SystemExecuteMonitor:
https://wiki.opennms.org/wiki/SystemExecuteMonitor
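A minimal check script for the folder-not-empty case might look like this, following the 0/1 convention described above (the watched path is an assumption):

#!/bin/bash
# check_dir_empty.sh -- hypothetical sketch: prints 0 ("OK") while the
# directory is empty and 1 ("Not OK") as soon as anything appears in it.
DIR="/path/to/watch"   # assumption: the folder you want to alert on
if [ -z "$(ls -A "$DIR" 2>/dev/null)" ]; then
  echo 0
else
  echo 1
fi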

Write a Windows Event log entry based on a source CSV log file and query

SCENARIO
I've been researching ways to do this on Google but I'm not finding anything, so I really hope SO can help out a netadmin from SF. I see a lot of ways to export FROM the Windows event logs, and ways to write events for custom-written apps, but nothing so far for taking existing log files and "converting" them to Event Viewer entries.
I'm working on an issue where I need a way to receive notifications/alerts based on the Windows Server DHCP audit logs: http://technet.microsoft.com/en-us/library/dd759178.aspx
The logs are written to C:\windows\system32\DHCP as DhcpSrvLog-Fri, etc., and auto-rotate on their own.
I need information from this log (particularly event ID 10s, which show a new lease to a client), and will be querying it and comparing it against AD, then either writing to a new CSV file or writing a new Event Viewer entry directly.
WHAT'S NEEDED
END RESULT: The end goal here is to receive an email notification when a non-AD joined device gets a DHCP lease address from our DHCP server. More specific details can be found here: https://serverfault.com/questions/550653/windows-dhcp-server-get-notification-when-a-non-ad-joined-device-gets-an-ip-ad
However, in regard to this particular question, what I'm looking for is an understanding of how to take an existing (CSV) file and write a custom Windows event log entry based on its contents.
I can't seem to find a way to take an existing file as input. Would I have to write something that parses the DHCP server audit logs and creates variables that get included in something like Write-EventLog? If PowerShell is the wrong path to go down, I'm open to suggestions.
The DHCP server logs are not built the same way as an event, so you have to parse the CSV file and create the events manually. Since there wasn't any real question here, I'll just provide a sample to get you going (untested and incomplete):
Import-Csv filename | Where-Object { $_.ID -eq 10 } | ForEach-Object {
    # If you're using the Quest module for AD management
    if (!(Get-QADComputer -Name $_."Host Name")) {
        # Log name, source, and message are placeholders; register the source
        # once first with: New-EventLog -LogName Application -Source "DHCPAudit"
        Write-EventLog -LogName Application -Source "DHCPAudit" -EventId 10 `
            -EntryType Information -Message "Non-AD device got a lease: $($_.'Host Name')"
    }
}

Reading syslog output on a Mac

I have a program that was written for Linux and I am trying to build and run it on my Mac OS X 10.5 machine. The program builds and runs without problems; however, it makes many calls to syslog. I know that syslogd is running on my Mac, but I can't seem to find where my syslog calls are output.
The syslog calls are of the form
syslog (LOG_WARNING, "Log message");
Any idea where I might find my log output?
/var/log/system.log
You can monitor it easily using tail -f /var/log/system.log
See also the "logger" (man logger) and "syslog" (man syslog) commands.
You should probably use Console.app to view log files. It's purdy. Select your device on the left and filter messages on the right.
Maybe interesting to note: Apple was using a real syslogd in the past, but all of this has since switched to ASL (Apple System Log). The syslog command is still available, but it will only access this one log. If you want to access all ASL log messages across all configured log files, use the log command.
E.g. the following shows all log messages produced by Safari within the last two days (be patient, it can take a while):
log show --predicate 'process == "Safari"' --last 2d
See man log for all the actions you can perform, all the parameters it knows and what attributes you can filter for.
When in doubt, there's always man syslog.
You can find your messages in /var/log/syslog; my machine is set up out of the box to include only high-level messages, so you may need to adjust your settings.
You can also read the messages through syslog(1), or create a test message with a command like
$ syslog -s -l INFO "Hello, world."
Use a severity of P ("panic") and you'll get an exciting message on your console immediately.
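Following the same pattern, that would be (assuming the single-letter level abbreviation is accepted, as the note above suggests):

$ syslog -s -l P "Hello, world."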
Mac OS X implements a superset of syslog's functionality. All of syslog is there, but as part of ASL.
Console, mentioned by Matthew Schinckel in his answer, is the GUI on ASL. It'll show you any messages that exist in the database, as fetched by the queries listed in the sidebar. There are two queries by default: one shows only messages sent with the Console facility (as used by NSLog, among other things), whereas the other shows all log messages. Check the all-messages query; you'll probably find your message there.
That “all” does come with an asterisk. If you look in /etc/asl.conf, you'll see this line:
# save everything from emergency to notice
? [<= Level notice] store
Fortunately, in your case, the message will pass this check, since warning outranks (is a lesser number than) notice.
If you need complex syslog analysis (hour-by-hour navigation in the terminal, regexps, comparing in real time with other files, or even running SQL over syslog), lnav provides it seamlessly.
Installation:
brew install lnav
Usage:
lnav /var/log/system.log
Building on Charlie's answer, I would like to add that you should take a look at the manpage of syslog.conf(5) and also take a peek at the file /etc/syslog.conf (which is where the syslog configuration is defined by default, including, as I see it, on OS X 10.5.x).
Check for a call to openlog somewhere in the program. After a call to openlog, syslog will save its output to that log file instead of the default location.
Big Sur
Unfortunately, none of the stated answers worked for me.
What worked for me:
The system mail, accessed using the mail program from the terminal, had all of the /usr/sbin/cron logs in emails.
