I am using github.com/coreos/etcd/client to communicate with an etcd cluster. Sometimes I receive a 401 error, "The event in requested index is outdated and cleared"; the server's documentation explains why this happens and how it can be solved. I want to implement the following scenario:
1. Get "key" and its modified index.
2. Do some work.
3. Start watching "key" from the modified index.

This way I can be sure that any changes made during step 2 are also received. But etcd keeps only the most recent N changes, so sometimes I get the 401 error. According to the docs, I can use the "X-Etcd-Index" header + 1 from the GET response as the modified index to watch from:
curl 'http://127.0.0.1:2379/v2/keys/foo' -vv
< HTTP/1.1 200 OK
< Content-Type: application/json
< X-Etcd-Cluster-Id: f63cd37d2ff4f650
< X-Etcd-Index: 17245
< X-Raft-Index: 2107637
< X-Raft-Term: 360
< Date: Tue, 15 Dec 2015 09:02:20 GMT
< Content-Length: 791
< ...
But I don't use direct HTTP calls, only github.com/coreos/etcd/client. How can I get this HTTP header information from a Get request (and is it actually possible using the API)?
The X-Etcd-Index header corresponds to the Response.Index field.
The index is available as the Index field of the response. See the documentation: https://godoc.org/github.com/coreos/etcd/client#Response
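To make the pattern from the question concrete, here is a minimal sketch with the v2 client (the endpoint and key are placeholders): read the key, keep Response.Index (the cluster-wide X-Etcd-Index), do the work, then start the watcher with AfterIndex set to that index so changes made in between are not missed.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/etcd/client"
)

func main() {
	c, err := client.New(client.Config{
		Endpoints: []string{"http://127.0.0.1:2379"}, // placeholder endpoint
	})
	if err != nil {
		log.Fatal(err)
	}
	kapi := client.NewKeysAPI(c)

	// Step 1: read the key; resp.Index is the X-Etcd-Index of the response.
	resp, err := kapi.Get(context.Background(), "/foo", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: do some work here...

	// Step 3: watch from the index we saw. AfterIndex: resp.Index delivers
	// every event with index > resp.Index, i.e. the "X-Etcd-Index + 1" rule.
	w := kapi.Watcher("/foo", &client.WatcherOptions{AfterIndex: resp.Index})
	for {
		ev, err := w.Next(context.Background())
		if err != nil {
			log.Fatal(err) // a 401 here still means the index was compacted
		}
		fmt.Println(ev.Action, ev.Node.Key, ev.Node.Value)
	}
}
```

This requires a running etcd and the client package, so treat it as a sketch of the call sequence rather than a drop-in program.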
I am using the "HTTP Request" sampler with the PUT method for chunked file upload to sFiles {Salesforce SharePoint} (OS: Windows 10). FileToBeUploaded.pdf is the main file:
$ ls -lrt File*
-rw-r--r-- 1 vikram 197121 12065018 Aug 23 15:51 FileToBeUploaded.pdf
-rw-r--r-- 1 vikram 197121 5773562 Aug 23 15:53 FileToBeUploaded_Chunks.pdf.ab
-rw-r--r-- 1 vikram 197121 6291456 Aug 23 15:53 FileToBeUploaded_Chunks.pdf.aa
For chunk upload:
We have to divide the file into equal parts, save them somewhere, and upload each part separately to the same URL that you get after creating an upload session.
Content-Length must be the number of bytes of the fragment being sent in each upload request.
Content-Range will look like {0}-{fragmentLength-1}/{totalNumberOfBytesOfFile} for the first fragment, and {uploadedBytes}-{uploadedBytes+nextSetOfBytes-1}/{totalNumberOfBytesOfFile} for each following fragment.
To follow this, we must pass valid Content-Length and Content-Range headers in each request.
ISSUE: JMeter's HTTP Request sampler (HTTPClient4) calculates Content-Length automatically and ignores the Content-Length defined in the HTTP Header Manager. For chunked file uploads the size in bytes must be set accurately in both Content-Length and Content-Range, but JMeter replaces Content-Length with its own calculated value every time.
Client Side Error: [HTTP 400]
{"error":{"code":"invalidRequest","message":"The Content-Range header length does not match the provided number of bytes."}}
I searched through articles on the internet but could not find any way to control/override/hardcode Content-Length in the request header.
This issue should affect any site that uses the PUT method for chunked file upload (like Google Drive, etc.).
Expected resolution: a way to control/override/hardcode/configure Content-Length in the "HTTP Request" sampler.
Please help resolve this issue.
As per Google Drive documentation on resumable uploads:
Create a PUT request to the resumable session URI.
Add the file's data to the request body.
Add a Content-Length HTTP header, set to the number of bytes in the file.
Send the request. If the upload request is interrupted, or if you receive a 5xx response, follow the procedure in Resume an interrupted upload.
So if you're using the "Files Upload" tab of the HTTP Request sampler, it will send the file in full; you either need to point it to the individual chunks or switch to the "Body Data" tab.
You might also be interested in building and sending your HTTP requests using the JSR223 Sampler and the Groovy language.
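Whichever route you take, the header arithmetic can be verified independently of JMeter. A small sketch (Go here purely for illustration; the file and fragment sizes are taken from the ls output in the question, and contentRange is a hypothetical helper that just formats the header value):

```go
package main

import "fmt"

// contentRange formats the Content-Range value for one fragment:
// "bytes {firstByte}-{lastByte}/{totalFileSize}".
func contentRange(offset, fragLen, total int64) string {
	return fmt.Sprintf("bytes %d-%d/%d", offset, offset+fragLen-1, total)
}

func main() {
	const total int64 = 12065018 // size of FileToBeUploaded.pdf
	const frag int64 = 6291456   // size of the .aa fragment
	for offset := int64(0); offset < total; offset += frag {
		n := frag
		if total-offset < frag {
			n = total - offset // the last fragment (.ab) is shorter
		}
		fmt.Printf("Content-Length: %d  Content-Range: %s\n",
			n, contentRange(offset, n, total))
	}
}
```

For the two fragments above this yields Content-Length 6291456 with range bytes 0-6291455/12065018, then Content-Length 5773562 with range bytes 6291456-12065017/12065018, matching the two chunk files in the listing. The server's 400 response means exactly this pairing was violated when JMeter substituted its own Content-Length.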
My application regularly loads the number of viewers watching a livestream on Dailymotion.
I use the Dailymotion API with the "audience" field to do that.
But the server is sending me a really, really old cached version of the JSON file.
For example, a streamer has been live for 2 hours, but the API sends me this:
curl https://api.dailymotion.com/video/x25eyo8?fields=audience
{"audience":0}
If I add another field just a few seconds later, just to see:
curl https://api.dailymotion.com/video/x25eyo8?fields=audience,onair
{"audience":1177, "onair": true}
And if I re-send the first request:
curl https://api.dailymotion.com/video/x25eyo8?fields=audience
{"audience":0}
More interestingly, the headers sent by the server show this:
curl -I https://api.dailymotion.com/video/x25eyo8?fields=audience
HTTP/1.1 200 OK
Server: DMS/1.0.42
X-Dm-Api-Object: video
X-DM-BackNode: web-011.adm.dailymotion.com:80
Cache-Control: public, max-age=10, stale-if-error=900
X-Dm-Api-Method: info
Content-Type: application/json; charset=UTF-8
X-DM-LB: 195.8.215.130
Access-Control-Allow-Origin: *
X-DM-BackNode-Response-Time: 47
Etag: W/"8xO_txIM6arAYYIALcRUgg"
X-Robots-Tag: noindex
Last-Modified: Fri, 06 Nov 2015 20:20:31 GMT
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Allow-Methods: GET, POST, DELETE
X-Dm-Page: fr.rest.rest_api
Via: 1.1 varnish
Fastly-Debug-Digest: 1d9daef237214a02cb79d06c44fdaa26329d5bd16c9afda535f3f9b104438b84
Content-Length: 17
Accept-Ranges: bytes
Date: Mon, 30 Nov 2015 01:16:59 GMT
Via: 1.1 varnish
Age: 116
Connection: keep-alive
X-Served-By: cache-fra1225-FRA, cache-lhr6323-LHR
X-Cache: HIT, MISS
X-Cache-Hits: 1, 0
Vary: X-DM-EC-Geo-Country, Accept-Encoding
Age: 116 with a max-age of 10?
And even when I receive an "up to date" version (Age < 10), the file still contains 0 viewers, whereas the stream is online and 1000+ viewers are watching.
Now there are two questions:
Why does this happen?
Can I force a non-cached version?
Thanks for your help.
EDIT:
It looks like it's the same problem when you watch a stream on http://games.dailymotion.com/. For all channels, the number of viewers is correct inside the player, but for some of them the number displayed under it is not (most of the time it indicates 0).
Example of wrong number of viewers
I'm new to performance testing. I started using JMeter and creating my own scripts. I am doing a stress test on an API; until now POST, GET, and PATCH were all working, but I got stuck at the PUT method. I need to send a file using PUT. In Postman it works (in the body I use the file type with the selected file, and multipart/form-data in the header).
I tried to put the file path in "Send files with the request", with Parameter Name: file, MIME type: form-data, Content encoding: utf-8.
But the request doesn't include the file:
PUT http://10.111.30.12/api/tasks/2
PUT data:
[no cookies]
Request Headers:
Connection: keep-alive
X-AuthToken:
MjEzNUZFMEMxMzFEQTVBMUMxQzYxMDU0MjE0OEFFRTJDRjU0ODQ0QkRCNDUyQkQ0QTgxREU0M0Y5MDQwMTk1RDJGMEE2RDNERTIxNjFBRjE3MEQ0QTNFQzM1OTVBRjMyQUI0MkJFN0MwMjYxMkFDRTBFMTQyMzYyNjYwMkREMTU0RkMxQTlBMjJDOUJFQkMwRjEwNDdFOTEwNjgyRDAwMTVBOTlEQ0ExQ0FFQTBGQjA2MEVDRUNFQjgzOEQ1MTA4ODVGOUYxMDhBQUM0RTc5N0JDQTA2RkYyNjYxQURGODE3NUM0MDlFN0RENEM0MTc0Nzc4MzczRjNDQ0VDQzM3Q0Y2QzU4REE2ODg2QzAyNEE1MzY0QThDN0IwMjhEMjdE
Content-Type: multipart/form-data
Content-Length: 0
Host: 10.111.30.12
Proxy-Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.2.6 (java 1.5)
The sampler result:
Thread Name: API Thread Group 1-1
Sample Start: 2015-09-21 15:33:53 EEST
Load time: 22
Connect Time: 0
Latency: 22
Size in bytes: 202
Headers size in bytes: 202
Body size in bytes: 0
Sample Count: 1
Error Count: 1
Response code: 415
Response message: Unsupported Media Type
I also tried to put the file in Body Data as "file: C:\apache-jmeter-2.13\bin\API Performance Test\file.txt", but then I get a 400 Bad Request.
Please, if anyone has any idea how to do this, let me know.
Since you're testing an API, my expectation is that you need to add an HTTP Header Manager to send the Content-Type header with the value application/json.
The best way to get to the bottom of the issue is to use a sniffer tool like Wireshark to compare what's being sent by Postman and JMeter and make sure that there are no differences.
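When you compare the captures, one detail to look for: the failing JMeter request above says Content-Type: multipart/form-data with Content-Length: 0, i.e. no boundary parameter and no body, while a well-formed multipart upload carries both. A sketch of what a valid multipart body looks like (Go here just for illustration; the field name "file" comes from the question, and buildMultipartPut is a hypothetical helper):

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
)

// buildMultipartPut assembles a multipart/form-data body containing one
// file part named "file". It returns the body and the Content-Type value,
// which must include the boundary; sending a bare "multipart/form-data"
// header with an empty body is one way such uploads fail.
func buildMultipartPut(filename string, data []byte) (*bytes.Buffer, string, error) {
	body := &bytes.Buffer{}
	w := multipart.NewWriter(body)
	part, err := w.CreateFormFile("file", filename)
	if err != nil {
		return nil, "", err
	}
	if _, err := part.Write(data); err != nil {
		return nil, "", err
	}
	if err := w.Close(); err != nil { // writes the closing boundary
		return nil, "", err
	}
	return body, w.FormDataContentType(), nil
}

func main() {
	body, ctype, err := buildMultipartPut("file.txt", []byte("sample content"))
	if err != nil {
		panic(err)
	}
	fmt.Println("Content-Type:", ctype)        // includes boundary=...
	fmt.Println("Content-Length:", body.Len()) // non-zero, unlike the failing request
}
```

A PUT built this way would at least be structurally equivalent to what Postman sends; whether the server actually wants multipart or raw JSON is what the Wireshark comparison should settle.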
I use Rack::ETag to generate proper ETag values based on the response from the server, and in development I use Rack::Cache to verify that the caching I expect really happens.
But I have a slight predicament:
I send a request and get these headers back:
Age: 0
Cache-Control: public, max-age=10
Connection: keep-alive
Content-Length: 4895
Content-Type: application/json; charset=UTF-8
Date: Wed, 02 Oct 2013 06:55:42 GMT
ETag: "dd65de99f4ce58f9de42992c4e263e80"
Server: thin 1.5.1 codename Straight Razor
X-Content-Digest: 0879e41b0d8e9b351f517dd46823095e0e99abd8
X-Rack-Cache: stale, invalid, store
If I then, after 11 seconds, send a new request with If-None-Match=dd65de99f4ce58f9de42992c4e263e80, I expect to get a 304 but always get a 200 with the above headers.
What am I missing?
Could it be due to the max-age directive being set to 10?
When the max-age cache-control directive is present in a cached response, the response is stale if its current age is greater than the age value given (in seconds) at the time of a new request for that resource.
Although, did you already know that? As you tried after 11 seconds!
I think the solution was to load the Rack middleware as follows, for correct chaining:
use Rack::Cache
use Rack::ConditionalGet
use Rack::ETag
And also to send If-None-Match with quotes ("") around the hash, which seems pretty fragile.
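The quotes are less arbitrary than they look: in HTTP, an ETag value is a quoted string, and conditional-GET middleware typically compares If-None-Match to the stored tag byte for byte. A self-contained sketch of that comparison (Go's httptest here, purely to illustrate; the hash is the one from the headers above, and etagMatches is a hypothetical stand-in for what the middleware does):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// etagMatches mimics the byte-for-byte comparison conditional-GET
// middleware typically performs: the quotes are part of the value.
func etagMatches(ifNoneMatch, etag string) bool {
	return ifNoneMatch == etag
}

func main() {
	const etag = `"dd65de99f4ce58f9de42992c4e263e80"`
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("ETag", etag)
		if etagMatches(r.Header.Get("If-None-Match"), etag) {
			w.WriteHeader(http.StatusNotModified) // 304, no body
			return
		}
		fmt.Fprint(w, "full body") // 200 otherwise
	}))
	defer srv.Close()

	// The bare hash never compares equal to the quoted tag, so it gets 200.
	for _, inm := range []string{"dd65de99f4ce58f9de42992c4e263e80", etag} {
		req, _ := http.NewRequest("GET", srv.URL, nil)
		req.Header.Set("If-None-Match", inm)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Printf("If-None-Match: %s -> %d\n", inm, resp.StatusCode)
	}
}
```

So sending the hash with surrounding quotes is not fragile so much as required by the ETag format.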
context:
My first project with COSM is recording datapoints from my electric meter. When I look at the graph of the feed, it's flatlined at zero even though the datapoints appear to be correctly received.
Any idea what's wrong, or things I should look for in order to debug it?
more info:
When I debug my feed, I see it receiving approximately eight API requests per minute, as expected.
Here's an instance of a received datapoint as viewed by COSM's 'debug feed' interface. Note in particular that the response is 200 [ok], and the request body has a sensible timestamp and a non-zero value:
200 POST /api/v2/feeds/129722/datastreams/1/datapoints 06-05-2013 | 08:16:54 +0000
Request Headers
Version HTTP/1.0
Host api.cosm.com
X-Request-Start 1367828214422267
X-Apikey <expunged>
Accept-Encoding gzip, deflate, compress
Accept */*
User-Agent python-requests/1.2.0 CPython/2.7.3 Linux/3.6.11+
Origin
Request Body
{"at": "2013-05-06T08:16:57", "value": 164.0}
Response Headers
X-Request-Id 245ee3ca6bd99efd156bff2416404c33f4bb7f0f
Cache-Control max-age=0
Content-Type application/json; charset=utf-8
Content-Length 0
Response Body
[No Body]
update
Even though the docs specify that JSON is the default, I explicitly added a ".json" to the POST URL (/api/v2/feeds/129722/datastreams/1/datapoints.json) but that didn't appear to make any difference.
update 2
I enclosed the "value" in quotes, so the request body now reads (for example):
{"at": "2013-05-06T15:37:06", "value": "187.0"}
Still behaving the same: I see updates in the debug view, but only zeros are reported in the graph view.
update 3
I tried looking at the data using the API rather than the COSM-supplied graph. My guess is that the datapoints are not being stored for some reason (despite the 200 OK return status). If I put this URL in the web browser:
http://api.cosm.com/v2/feeds/129722.json?interval=0
I get this in response:
{"id":129722,
"title":"Rainforest Automation RAVEn",
"private":"false",
"tags":["power"],
"feed":"https://api.cosm.com/v2/feeds/129722.json",
"status":"frozen",
"updated":"2013-05-06T05:07:30.169344Z",
"created":"2013-05-06T00:16:56.701456Z",
"creator":"https://cosm.com/users/fearless_fool",
"version":"1.0.0",
"datastreams":[{"id":"1",
"current_value":"0",
"at":"2013-05-06T05:07:29.982986Z",
"max_value":"0.0",
"min_value":"0.0",
"unit":{"type":"derivedSI","symbol":"W","label":"watt"}}],
"location":{"disposition":"fixed","exposure":"indoor","domain":"physical"}
}
Note that the status is listed as "frozen" (last update received > 15 minutes ago) despite the fact that the debug tool is showing seven or eight updates per minute. Where are my datapoints going?
Resolved. As Calum at cosm.com support kindly pointed out, I wasn't sending a properly formed request. I was sending the following JSON:
{"at": "2013-05-06T08:16:57", "value": 164.0}
when I should have been sending:
{
"datapoints":[
{"at": "2013-05-06T08:16:57", "value": 164.0}
]
}
Calum also points out that I could batch up several points at a time to cut down the number of transactions. I'll get to that, but for now, suffice it to say that fixing the body of the request made everything start working.
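Taking both suggestions together, the fixed body and the batching can be sketched like this (Go here just for illustration; marshalDatapoints is a hypothetical helper, the feed/datastream URL is the one from the question, and the API key is a placeholder):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// datapoint is one reading in the shape the API expects.
type datapoint struct {
	At    string  `json:"at"`
	Value float64 `json:"value"`
}

// marshalDatapoints wraps readings in the required top-level "datapoints"
// array -- the part that was missing from the original request body.
func marshalDatapoints(points []datapoint) ([]byte, error) {
	return json.Marshal(struct {
		Datapoints []datapoint `json:"datapoints"`
	}{points})
}

func main() {
	// Batch a few readings into one request to cut down on transactions.
	payload, err := marshalDatapoints([]datapoint{
		{At: "2013-05-06T08:16:57", Value: 164.0},
		{At: "2013-05-06T08:17:05", Value: 166.5},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload))

	req, _ := http.NewRequest("POST",
		"http://api.cosm.com/v2/feeds/129722/datastreams/1/datapoints",
		bytes.NewReader(payload))
	req.Header.Set("X-Apikey", "YOUR-API-KEY") // placeholder
	req.Header.Set("Content-Type", "application/json")
	// http.DefaultClient.Do(req) would actually send it.
}
```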
That sounds like a bug in the graphs; I have seen something very similar a few times.
I often use the Cosm Feed Viewer Chrome extension, which displays the latest values in real-time using the WebSocket endpoint.
It should not be too hard to put together custom graphs with Rickshaw and CosmJS.