I have some problems when writing values to InfluxDB. Consider the following example:
curl -i -XPOST 'http://localhost:8086/write?db=devmessdaten&precision=rfc3339' --data-binary 'Pist,Anlage="ff4113bc-dec1-435c-9503-c31436dd98b8" value=0 2018-03-21T00:53:00Z'
and I'm getting a bad timestamp error:
HTTP/1.1 400 Bad Request
Content-Type: application/json
Request-Id: dfa5b762-3af5-11e8-87d8-000000000000
X-Influxdb-Build: OSS
X-Influxdb-Error: unable to parse 'Pist,Anlage="ff4113bc-dec1-435c-9503-c31436dd98b8" value=0 2018-03-21T00:53:00Z': bad timestamp
X-Influxdb-Version: 1.5.1
X-Request-Id: dfa5b762-3af5-11e8-87d8-000000000000
Date: Sun, 08 Apr 2018 06:27:12 GMT
Content-Length: 127
The problem is that InfluxDB expects timestamps in an epoch format, not RFC3339: rfc3339 is not a valid value for the precision parameter (valid values are ns, u, ms, s, m, and h). With precision=s, your timestamp 2018-03-21T00:53:00Z is 1521593580 seconds since the epoch. If you use that timestamp in your insert, it works just fine.
Not sure what is generating your timestamp, but if you make sure it is in an epoch format, it will succeed.
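For example, a corrected version of the original write might look like this (a sketch, assuming the same database and measurement; precision=s tells InfluxDB the timestamp is in epoch seconds, and the quotes around the tag value are dropped because in line protocol they would become part of the value):
# write the point with an epoch-seconds timestamp (1521593580 == 2018-03-21T00:53:00Z)
curl -i -XPOST 'http://localhost:8086/write?db=devmessdaten&precision=s' --data-binary 'Pist,Anlage=ff4113bc-dec1-435c-9503-c31436dd98b8 value=0 1521593580'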
Best regards,
dg
I need to force HTTP pipelining (1.1) on several GET requests with curl, telnet or netcat in a bash script. I've already tried to do so with curl, but as far as I know the tool dropped HTTP pipelining support in version 7.65.0, and I wasn't able to find much information on how to do it. Still, if it isn't possible with telnet or netcat, I have access to curl version 7.29.0 on another computer.
From Wikipedia:
HTTP pipelining is a technique in which multiple HTTP requests are sent on a single TCP (transmission control protocol) connection without waiting for the corresponding responses.
To send multiple GET requests with netcat, something like this should do the trick:
echo -en "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\nGET /other.html HTTP/1.1\r\nHost: example.com\r\n\r\n" | nc example.com 80
This will send two HTTP GET requests to example.com, one for http://example.com/index.html and another for http://example.com/other.html.
The -e flag means interpret escape sequences (the carriage returns and line feeds, or \r and \n). The -n means don't print a newline at the end (it would probably work without the -n).
I just ran the above command and got two responses: one was a 200 OK, the other a 404 Not Found.
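If your netcat variant closes the connection as soon as stdin is exhausted, you may only see the first response come back. A small sketch that works around this (printf interprets the escapes portably, and the sleep keeps the socket open long enough for both replies):
# send both requests, then hold the connection open for the responses
(printf 'GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\nGET /other.html HTTP/1.1\r\nHost: example.com\r\n\r\n'; sleep 2) | nc example.com 80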
It might be easier to see the multiple requests and responses if you do a HEAD request instead of a GET request. That way, example.com's server will only respond with the headers.
echo -en "HEAD /index.html HTTP/1.1\r\nHost: example.com\r\n\r\nHEAD /other.html HTTP/1.1\r\nHost: example.com\r\n\r\n" | nc example.com 80
This is the output I get from the above command:
$ echo -en "HEAD /index.html HTTP/1.1\r\nHost: example.com\r\n\r\nHEAD /other.html HTTP/1.1\r\nHost: example.com\r\n\r\n" | nc example.com 80
HTTP/1.1 200 OK
Content-Encoding: gzip
Accept-Ranges: bytes
Age: 355429
Cache-Control: max-age=604800
Content-Type: text/html; charset=UTF-8
Date: Mon, 24 Feb 2020 14:48:39 GMT
Etag: "3147526947"
Expires: Mon, 02 Mar 2020 14:48:39 GMT
Last-Modified: Thu, 17 Oct 2019 07:18:26 GMT
Server: ECS (dna/63B3)
X-Cache: HIT
Content-Length: 648
HTTP/1.1 404 Not Found
Accept-Ranges: bytes
Age: 162256
Cache-Control: max-age=604800
Content-Type: text/html; charset=UTF-8
Date: Mon, 24 Feb 2020 14:48:39 GMT
Expires: Mon, 02 Mar 2020 14:48:39 GMT
Last-Modified: Sat, 22 Feb 2020 17:44:23 GMT
Server: ECS (dna/63AD)
X-Cache: 404-HIT
Content-Length: 1256
If you want to see more details, run one of these commands while Wireshark is capturing: you'll see a single request packet carrying both HTTP requests, followed by two separate responses.
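If you don't have Wireshark handy, a rough command-line equivalent is tcpdump (a sketch, assuming Linux, where -i any captures on all interfaces; -A prints packet contents as ASCII):
# watch the pipelined requests and responses on the wire
sudo tcpdump -A -i any 'host example.com and port 80'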
For example, if I want to know if I am connected, I can use:
ping 8.8.8.8
to send a ping to Google's DNS server. Can I ask a server for its date in a similar way? Something like:
give_me_your_date 8.8.8.8
If you have curl, you could run:
curl -I http://example.com
HTTP/1.1 200 OK
Date: Sun, 16 Oct 2016 23:37:15 GMT
Server: Apache/2.4.23 (Unix)
X-Powered-By: PHP/5.6.24
Connection: close
Content-Type: text/html; charset=UTF-8
If that server is a web server, you get HTTP headers back, and you can retrieve the date from the Date header.
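To make that scriptable, here's a sketch (assuming a Unix-like shell and GNU date) that extracts the Date header and parses it into a local-time date:
# pull the Date header out of the response (headers end in \r, hence the tr)
server_date=$(curl -sI http://example.com | sed -n 's/^[Dd]ate: //p' | tr -d '\r')
# parse it with GNU date; prints the server's clock in your local time zone
date -d "$server_date"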
You can get curl for Windows here: https://curl.haxx.se/download.html
I'm struggling with the Ruby date API. I need to convert a timestamp number to a formatted date. But when I use:
Time.at(1517486994710).to_datetime
or
DateTime.strptime("1517486994710",'%s')
(1517486994710 is the unix timestamp for today), I see the year 50057 as output. What am I doing wrong?
You have the epoch in milliseconds. Use the %Q directive:
DateTime.strptime("1517486994710",'%Q')
#⇒ Thu, 01 Feb 2018 12:09:54 +0000
Your script is correct, but your epoch is incorrect. Today's epoch is 1517491785.
You probably got the JS epoch, which counts in milliseconds:
DateTime.strptime("1517486994",'%s') # removed 710
I have the following Unix timestamp: 1478698378000
And I'm trying to show this as a datetime in Ruby, e.g.
<%= Time.at(@timestamp).to_datetime %>
This should return a date of Wed, 09 Nov 2016 13:32:58 GMT, but the above code actually returns 48828-02-01T13:26:40+00:00 (ignore the formatting difference).
As you can see, it thinks that timestamp is 1 Feb 48828, 13:26:40.
Why is the datetime coming out completely incorrect and the year so far into the future like that? Checking the timestamp on http://www.epochconverter.com/ reveals the timestamp to be correct, so it's Ruby that's returning it incorrectly.
Time.at expects seconds as its argument, and your timestamp is an amount of milliseconds, so divide by 1000 first (e.g. Time.at(1478698378000 / 1000)). See the documentation on Time.at.
Why would you check the unix timestamp's correctness against a “Fashion Week Magazine” or “Cosmopolitan” site?
A Unix timestamp is the number of seconds elapsed since 1970-01-01 UTC:
date --date='@1478698378000'
mar feb 1 14:26:40 CET 48828
BTW, dropping the last three zeroes gives you back what you expected:
date --date='@1478698378'
mié nov 9 14:32:58 CET 2016
I wrote a bash script that gets output from a website using curl and does a bunch of string manipulation on the HTML output. The problem is when I run it against a site that returns its output gzipped. Going to the site in a browser works fine.
When I run curl by hand, I get gzipped output:
$ curl "http://example.com"
Here's the header from that particular site:
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=utf-8
X-Powered-By: PHP/5.2.17
Last-Modified: Sat, 03 Dec 2011 00:07:57 GMT
ETag: "6c38e1154f32dbd9ba211db8ad189b27"
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: must-revalidate
Content-Encoding: gzip
Content-Length: 7796
Date: Sat, 03 Dec 2011 00:46:22 GMT
X-Varnish: 1509870407 1509810501
Age: 504
Via: 1.1 varnish
Connection: keep-alive
X-Cache-Svr: p2137050.pubip.peer1.net
X-Cache: HIT
X-Cache-Hits: 425
I know the returned data is gzipped, because this returns html, as expected:
$ curl "http://example.com" | gunzip
I don't want to pipe the output through gunzip, because the script works as-is on other sites, and piping through gunzip would break that functionality.
What I've tried:
- changing the user-agent (I tried the same string my browser sends, "Mozilla/4.0", etc.)
- man curl
- google search
- searching stackoverflow
Everything came up empty.
Any ideas?
curl will automatically decompress the response if you set the --compressed flag:
curl --compressed "http://example.com"
--compressed
(HTTP) Request a compressed response using one of the algorithms libcurl supports, and save the uncompressed document. If this option is used and the server sends an unsupported encoding, curl will report an error.
gzip is most likely supported, but you can check this by running curl -V and looking for libz somewhere in the "Features" line:
$ curl -V
...
Protocols: ...
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz
Note that it's really the website in question that is at fault here. If curl did not pass an Accept-Encoding: gzip request header, the server should not have sent a compressed response.
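You can verify this yourself: with -v, curl prints the request headers it sends (the lines prefixed with >), and the Accept-Encoding header only shows up when --compressed is given (a sketch; the exact encoding list depends on your curl build):
# -v writes the request headers to stderr; grep for the encoding header
curl -sv --compressed -o /dev/null http://example.com 2>&1 | grep -i 'accept-encoding'
# > Accept-Encoding: deflate, gzip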
In the relevant bug report, "Raw compressed output when not using --compressed but server returns gzip data" (#2836), the developer says:
The server shouldn't send content-encoding: gzip without the client having signaled that it is acceptable.
Besides, when you don't use --compressed with curl, you tell the command line tool you rather store the exact stream (compressed or not). I don't see a curl bug here...
So if the server could be sending gzipped content, use --compressed to let curl decompress it automatically.