I have an InfluxDB measurement with volumes of bytes over time. I would like to run a SELECT via a GET request to fetch the last value:
curl -u 'user:pass' -G 'https://influx/query?pretty=false' --data-urlencode "db=mydb" --data-urlencode "q=SELECT last(sum) FROM traffic WHERE zone='main'"
The SELECT works, but instead of 6 433 336 bytes it returns 6.433336e+06, which is inconvenient and sometimes not precise enough for me. How can I get this integer as-is, without conversion to scientific notation?
Unfortunately it isn't possible to make the query return this value in anything other than scientific notation.
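If you control how the points are written, one possible workaround on the write side (a sketch, not a query option): store the field as an InfluxDB integer by appending the i suffix in line protocol, since integer fields come back as plain integers in the JSON response rather than as floats. Note that an existing field's type cannot be changed in place, so this only helps for new measurements or after rewriting the data:
curl -u 'user:pass' -XPOST 'https://influx/write?db=mydb' --data-binary 'traffic,zone=main sum=6433336i'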
I have InfluxDB records that look like this:
Some_Measurement:
---------------------
time         field     value
----         -----     -----
1630686612   myfieldA  123
1630686612   myfieldB  456
For some reason, when I try to graph these in Grafana, or even run a select query like:
SELECT * FROM Some_Measurement WHERE "time" > now() - 60m
I get nothing back. It's almost as if it does not recognize the timestamps as timestamps. I have a feeling this might be because I'm writing these from my source as strings, but I have no idea what the correct data type should be. Could someone please help me out?
This is the default behavior in the terminal query interface, but you can change it with the command below:
precision rfc3339
After running this command, SELECT queries display properly formatted, human-readable timestamps.
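For example, in the influx shell (a minimal sketch; the database name mydb is an assumption):
influx -database 'mydb'
> precision rfc3339
> SELECT * FROM Some_Measurement WHERE time > now() - 60m
Separately, if the points were written with plain-second timestamps such as 1630686612, be aware that InfluxDB interprets incoming timestamps as nanoseconds by default, so they would land near 1970 and never match now() - 60m; writing with precision=s (e.g. /write?db=mydb&precision=s) avoids this. This is only an assumption about how the points were written.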
I have a source table where free-form text entered via a front-end application gets stored as VARBINARY (SQL Server). While loading into Greenplum, the field was cast to a VARCHAR field. Loaded as a UTF8 value, I am facing issues encoding/decoding this field into meaningful text.
A couple of things I have tried so far: PostgreSQL 8.2 string functions, and online UTF-8 decoder tools to understand the expected result.
Encoding to base64 still produces only the encoded value:
SELECT encode('0x205361742046656220312031363A32313A303320 utf-8','base64');
--Output
--MHgyMDUzNjE3NDIwNDY2NTYyMjAzMTIwMzEzNjNBMzIzMTNBMzAzMzIwIHV0Zi04
Using an online UTF-8 decoder, if I run '0x205361742046656220312031363A32313A303320' through it, it produces ' Sat Feb 1 16:21:03 ', and that is my expected result.
Any advice/help is appreciated. Thanks!
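Since the stored value is a hex dump (a 0x prefix followed by hex digits), decoding it as hex rather than base64 may be what is needed. A sketch, assuming PostgreSQL's decode and convert_from functions are available in your Greenplum version (convert_from arrived in PostgreSQL 8.3; on 8.2-based builds, encode(decode(..., 'hex'), 'escape') is a possible fallback). Strip the leading 0x before decoding, e.g. with substring(col from 3):
SELECT convert_from(decode('205361742046656220312031363A32313A303320','hex'),'UTF8');
--Output
--' Sat Feb 1 16:21:03 '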
I am using Elasticsearch 2.4 and I have a Groovy script. I have a field in my document, say doc['created_unix_timestamp'], which is of type integer and holds a Unix timestamp. In a search query's script, I am trying to get YYYYMM from that value.
For example, if doc['created_unix_timestamp'] is 1522703848, then during a calculation in the script I want to convert it to 201804, where the first four digits are the year and the last two are the month (zero-padded if required).
I tried:
Integer.parseInt(new SimpleDateFormat('YYYYMM').format(new Date(doc['created_unix_timestamp'])))
But it throws a compilation error: "TransportError(400, 'search_phase_execution_exception', 'failed to compile groovy script')". Any idea how to get it to work, or what the correct syntax is?
A couple of recommendations.
Reindex and make created_unix_timestamp a true date in Elasticsearch. This will make all kinds of new querying possible (date ranges, for example). This would be ideal.
If you can't reindex, then pull it back as an int from ES and convert it to whatever you want client-side; I wouldn't recommend pushing that work to the ES cluster.
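If you do reindex, a mapping along these lines lets Elasticsearch parse the stored integer seconds as true dates (a sketch for ES 2.x; the index and type names my_index and my_type are assumptions):
PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "created_unix_timestamp": { "type": "date", "format": "epoch_second" }
      }
    }
  }
}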
As per the suggestion of Szymon Stepniak in the comment above, I solved this issue with:
(new Date(1522705958L*1000).format('yyyyMM')).toInteger()
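Note that new Date(...) expects milliseconds, hence the * 1000, and that lowercase yyyy (calendar year) must be used rather than YYYY (week-based year), which is one reason the original attempt failed. Applied to the document field inside the script, it would look something like this (a sketch; it assumes doc['created_unix_timestamp'].value returns epoch seconds):
(new Date(doc['created_unix_timestamp'].value * 1000L).format('yyyyMM')).toInteger()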
Thanks and credit goes to Szymon Stepniak
I have key:value data which I want to display as-is. Everything I've read suggests that if a column is a key then it should be not-accessible and thus not displayed by tools (OTOH I'm not sure the relevant part of RFC 2579 actually says that; it's too hard for me to understand), but I don't want to add a surrogate key, as I already have a unique key in the data. Can this be circumvented, or is adding a surrogate identifier the only accepted way?
An SNMP table row key with MAX-ACCESS not-accessible can be displayed by tools if you wish. Look at the Net-SNMP snmptable explanation:
... One thing missing from the tables above, is any indication of the
index values for each row. The earliest MIB tables (and some more
recent, but poorly designed tables) did define the indexes as
accessible objects, which would therefore appear in the snmptable
output. But current MIB design has recognised that the index values
are included in the instance OIDs, so it is not necessary to
explicitly retrieve them as a separate column object.
By default, the snmptable command ignores these index values, but it
will display them if invoked with the -Ci option.
I interpret this as meaning: since the index is implicit in the OID, it is sometimes unnecessary to show it, though if you are printing a whole table (using the snmptable tool) it is often handy to see it; hence Net-SNMP provides the -Ci flag (which ignores the index's MAX-ACCESS level).
Example without index column shown:
snmptable -M +. -m +ALL -v 2c -c public -Pu <my server> SNMPv2-MIB::sysORTable
SNMP table: SNMPv2-MIB::sysORTable
sysORID sysORDescr sysORUpTime
SNMP-MPD-MIB::snmpMPDMIBObjects.3.1.1 The MIB for Message Processing and Dispatching. 0:0:00:00.18
SNMP-USER-BASED-SM-MIB::usmMIBCompliance The MIB for Message Processing and Dispatching. 0:0:00:00.18
// SNIP ...
Example with index column shown:
snmptable -M +. -m +ALL -v 2c -c public -Pu -Ci <my server> SNMPv2-MIB::sysORTable
SNMP table: SNMPv2-MIB::sysORTable
index sysORID sysORDescr sysORUpTime
1 SNMP-MPD-MIB::snmpMPDMIBObjects.3.1.1 The MIB for Message Processing and Dispatching. 0:0:00:00.18
2 SNMP-USER-BASED-SM-MIB::usmMIBCompliance The MIB for Message Processing and Dispatching. 0:0:00:00.18
// SNIP ...
FILTER("source"."recordCount" USING "source"."snapshot_date" =
EVALUATE('TO_CHAR(%1, ''YYYYMMDD'')', TIMESTAMPADD(SQL_TSI_DAY, -7, EVALUATE('TO_DATE(%1, %2)', "source"."snapshot_date" , 'YYYYMMDD'))))
So I have this piece of code here. I know some will say "just use the AGO function", but it's causing problems because of its connections with other tables, so what I'm trying to achieve here is essentially a remake. The process goes this way:
The snapshot_date there is actually in varchar format, not date. So it's like "20131016", and I'm trying to convert it to a date, subtract 7 days from it using the TIMESTAMPADD function, and finally convert it back to varchar to use it with FILTER.
This snippet works when testing the FILTER with hardcoded values like "20131016", but when tested with the code above, all the rows are blank. On paper, the process I assumed would happen goes like this: "20131016" is turned into a date in yyyymmdd format, then less 7 days: 20131009, and then turned back into the string "20131009" to be used in the filter.
But somehow that doesn't happen. I think the date format is not being applied in either the string-to-date or the date-to-string conversion, which results in the values never matching.
Anyone have any idea what's wrong with my code?
By the way, I've already tried using CAST instead of EVALUATE, and TO_TIMEDATE, with the same result. Oh, and this goes in the column formula in the BMM.
Thanks
You might get some clues by looking at the SQL generated by the BI Server. I can't see any issues with your column expression, so I wouldn't limit your debugging to that alone.
A query returning nulls is often caused by incorrect levels being set (especially on logical table sources, but potentially on a measure column too). This will often result in some form of SELECT NULL FROM ... in the physical SQL.
Try this (it wraps snapshot_date in an explicit TO_CHAR before converting it to a date, and passes the format mask to the outer TO_CHAR as a separate EVALUATE parameter):
FILTER("source"."recordCount" USING "source"."snapshot_date" =
EVALUATE('TO_CHAR(%1, %2)', TIMESTAMPADD(SQL_TSI_DAY, -7, EVALUATE('TO_DATE(%1, %2)', TO_CHAR("source"."snapshot_date" , 'YYYYMMDD') , 'YYYYMMDD')) , 'YYYYMMDD'))