Service that outputs your IP, geolocation, and other IP-related info on the command line, like wtfismyip.com?

I've been using wtfismyip.com to get info about my IP by running
curl wtfismyip.com/json
It outputs all the info to the terminal in a nice JSON format. Is there another service like this for outputting to the terminal?

curl http://api.db-ip.com/v2/free/self
This outputs your IP info in JSON format; you can also append a field name to get a plain-text response (e.g. http://api.db-ip.com/v2/free/self/countryCode).
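For example, a minimal pair of calls (this assumes jq is installed for pretty-printing; the URLs and the countryCode field are the ones mentioned above):
# full JSON response, pretty-printed
curl -s http://api.db-ip.com/v2/free/self | jq .
# plain-text response for a single field
curl -s http://api.db-ip.com/v2/free/self/countryCode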

Related

SNMP (Ubuntu 18.04) on AudioCodes M500L not working

I am trying to monitor values with Nagios over SNMP from my two AudioCodes SBCs (M500L).
For this I downloaded two MIBs, "AC-ALARM-MIB" and "IP-MIB_rfc4293", from https://github.com/librenms/librenms/tree/master/mibs/audiocodes, renamed them with a .txt extension, and uploaded them to my Ubuntu server under /usr/share/snmp/mibs/.
Then I tried the following command on the command line:
snmpget -v3 -l authPriv -u xxxxxx -a SHA -A xxxxx -x AES -X xxxxx 123.456.789.100 AcAlarm::acActiveAlarmName
and I get the following output:
AcAlarm::acActiveAlarmName = No Such Instance currently exists at this OID
I tried to find the OID for this in a MIB browser; it seems to be .1.3.6.1.4.1.5003.11.1.1.1.1.5. When I use that OID I get the same output.
Does anyone have an idea?
SNMP treats all values as entries in a conceptual database, and OIDs are used to identify entries in that database. MIB files allow an SNMP manager to translate OIDs into human-readable strings, with an accompanying textual description.
The issue here is not that the MIB files are bad or the OIDs are wrong. The problem is that either the device that holds this (imaginary) database does not support the entries you are trying to access, or your user is not authorized to access them. A simple way to find out which OIDs are supported is to do a full walk of the database, using something like snmpwalk <hostname> 1.3.6.1
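For example, reusing the SNMPv3 credentials from the question (a sketch only; the xxxxx placeholders are kept as-is, and the AudioCodes enterprise subtree 1.3.6.1.4.1.5003 is taken from the OID mentioned above):
# walk everything the agent exposes
snmpwalk -v3 -l authPriv -u xxxxxx -a SHA -A xxxxx -x AES -X xxxxx 123.456.789.100 1.3.6.1
# or restrict the walk to the AudioCodes enterprise subtree
snmpwalk -v3 -l authPriv -u xxxxxx -a SHA -A xxxxx -x AES -X xxxxx 123.456.789.100 1.3.6.1.4.1.5003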

gcloud dns managed-zones list along with record-sets count format

In the output of gcloud dns managed-zones list, I want to show dnsName, creationTime, name, networkName, visibility, and the count of record-sets in each hosted zone.
I used the two commands below to get the two outputs separately:
# get hosted zones and the other fields
gcloud dns managed-zones list --format='table(dnsName, creationTime:sort=1, name, privateVisibilityConfig.networks.networkUrl.basename(), visibility)'
# get the record-set count for a hosted zone
gcloud dns record-sets list --zone=$zoneName |awk 'NR>1{print}'|wc -l
I think I can get this in a shell script by getting a list of hosted zones and then printing the two outputs together.
But is there a better way to do it in a single gcloud command?
IIRC (!?), you'll need to issue both gcloud commands as each provides distinct data.
To your point, you should be able to easily combine the commands in a shell script, iterating over each zone from managed-zones list and issuing record-sets list --zone=${i} for each, as sketched below.
If you'd like help, please include dummy data from the 2 commands and I'll draft something for you.
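A minimal bash sketch of that loop (assuming gcloud is already authenticated; the record-set count is simply the number of lines returned by record-sets list):
#!/usr/bin/env bash
# iterate over every managed zone and count its record sets
for zone in $(gcloud dns managed-zones list --format='value(name)'); do
  count=$(gcloud dns record-sets list --zone="${zone}" --format='value(name)' | wc -l)
  echo "${zone}: ${count} record sets"
done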

Is it possible to see all metrics (all paths) in whisper (graphite)?

I have a lot of metrics in Graphite and I have to search through them.
I tried to use whisper-fetch.py, but it returns the metric values (numbers); I want the metric names, something like this:
prefix1.prefix2.metricName1
prefix1.prefix2.metricName2
...
Thank you.
Graphite has a dedicated endpoint for retrieving all metrics as part of its HTTP API: /metrics/index.json
For example, running this command against my local Graphite
curl localhost:8080/metrics/index.json | jq "."
produces the following output:
[
"carbon.agents.graphite-0-a.activeConnections",
"carbon.agents.graphite-0-a.avgUpdateTime",
"carbon.agents.graphite-0-a.blacklistMatches",
"carbon.agents.graphite-0-a.cache.bulk_queries",
"carbon.agents.graphite-0-a.cache.overflow",
...
"stats_counts.response.200",
"stats_counts.response.400",
"stats_counts.response.404",
"stats_counts.statsd.bad_lines_seen",
"stats_counts.statsd.metrics_received",
"stats_counts.statsd.packets_received",
"statsd.numStats"
]
You can also just use the Unix find command, e.g. find /data/graphite -name 'some_pattern', or use the web API, e.g. curl http://my-graphite/metrics/find?query=somequery (see the Graphite Metrics API documentation).
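If you go the find route, the .wsp file paths can be turned into dotted metric names with a little sed (a sketch; it assumes the whisper files live under /data/graphite/whisper, so adjust the prefix to your storage directory):
# list all whisper files and convert their paths to metric names
find /data/graphite/whisper -name '*.wsp' \
  | sed -e 's|^/data/graphite/whisper/||' -e 's|\.wsp$||' -e 's|/|.|g'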

Shell (bash) snmpset script reports "Error in packet, wrongLength"

Hi, I have written a bash script for downloading the configuration from switches and saving it to a TFTP server.
snmpset -v 2c -c Zaloznik 192.168.50.22 1.3.6.1.4.1.1991.1.1.2.1.6.0 s test_skript.cfg 1.3.6.1.4.1.1991.1.1.2.1.66.0 x C0A846D2 1.3.6.1.4.1.1991.1.1.2.1.9.0 i 22 >> /dev/null;
But it always tells me this:
Error in packet. Reason: wrongLength (The set value has an illegal
length from what the agent expects) Failed object:
iso.3.6.1.4.1.1991.1.1.2.1.66.0
C0A846D2 is the hex form of the IP 192.168.70.210.
Does anyone know how to fix it? Please help, I have tried many combinations and nothing works.
Thanks.
Problem solved. The switches want to be told the type of IP address (IPv4 or IPv6) first, then the IP address of the TFTP server and the file name, and only after that will they send the config file to the TFTP server.
So I had to add another SNMP OID (the IP address type) to the script, and then it works.
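For reference, the fixed command ends up looking roughly like this (a sketch only: <ip-address-type-OID> is a placeholder, since the exact OID depends on the switch's MIB, and 1 is assumed to mean IPv4):
# set the address type first, then the TFTP address, the file name, and finally the action OID
snmpset -v 2c -c Zaloznik 192.168.50.22 \
  <ip-address-type-OID> i 1 \
  1.3.6.1.4.1.1991.1.1.2.1.66.0 x C0A846D2 \
  1.3.6.1.4.1.1991.1.1.2.1.6.0 s test_skript.cfg \
  1.3.6.1.4.1.1991.1.1.2.1.9.0 i 22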

Can you view historic logs for parse.com cloud code?

On the Parse.com cloud-code console, I can see logs, but they only go back maybe 100-200 lines. Is there a way to see or download older logs?
I've searched their website & googled, and don't see anything.
Using the parse command-line tool, you can retrieve an arbitrary number of log lines:
Usage:
parse logs [flags]
Aliases:
logs, log
Flags:
-f, --follow=false: Emulates tail -f and streams new messages from the server
-l, --level="INFO": The log level to restrict to. Can be 'INFO' or 'ERROR'.
-n, --num=10: The number of the messages to display
Not sure if there is a limit, but I've been able to fetch 5000 lines of log with this command:
parse logs prod -n 5000
To add on to Pascal Bourque's answer, you may also wish to filter the logs by a given range of dates. To achieve this, I used the following:
parse logs -n 5000 | sed -n '/2016-01-10/, /2016-01-15/p' > filteredLog.txt
This will get up to 5000 logs, use the sed command to keep all of the logs which are between 2016-01-10 and 2016-01-15, and store the results in filteredLog.txt.
