JMeter: Get SSL handshake certificate data

Is it possible to get SSL handshake data using JMeter?
I need to send a number of SSL requests and extract the SAN (Subject Alternative Name), to be exact.

Unfortunately, there's no particular sampler that could do it for you (at least none I know about; someone correct me if I'm wrong).
So to my understanding you have two ways here:
1) Implement the handshake process with a sequence of TCP Samplers + Regular Expression Extractor post-processors (+ custom BeanShell/JSR223 post-processors as needed) + appropriate Assertions.
Don't forget to keep the connection open in every sampler in the chain if you go this way.
2) Implement a fully custom sampler in BeanShell/Groovy (JSR223).
Here, you, in turn, have two options:
Implement it yourself step by step with Java sockets (full control, but a huge amount of work and error-proneness).
Use javax.net.ssl.SSLSocket. You have to implement a HandshakeCompletedListener (and then register it with the SSLSocket), which will receive a HandshakeCompletedEvent from which you can, hopefully, extract what you need via the SSLSession; see the sketch after this list.
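For the SSLSocket route, here is a minimal, self-contained sketch in plain Java (so it could be dropped into a JSR223 sampler with little change); the host and port are placeholders, and calling startHandshake() directly is used instead of a listener just to keep it short:

import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SanProbe {
    public static void main(String[] args) throws Exception {
        String host = "example.com"; // placeholder target
        int port = 443;
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
            socket.startHandshake(); // forces the TLS handshake
            // Leaf certificate from the negotiated SSLSession
            X509Certificate leaf = (X509Certificate) socket.getSession().getPeerCertificates()[0];
            // Each SAN entry is a two-element List: [type (Integer), value]
            Collection<List<?>> sans = leaf.getSubjectAlternativeNames();
            if (sans != null) {
                for (List<?> san : sans) {
                    System.out.println("SAN type=" + san.get(0) + " value=" + san.get(1));
                }
            }
        }
    }
}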
P.S. I'd ask you the favor of sharing your results here if you choose the second way, especially the SSLSocket case.

Related

Parallel Req/Rep via Pub/Sub

I have multiple servers; at any point, one and only one will be the leader, which can respond to a request, and all the others just drop the request. The issue is that the client does not know which server is the leader.
I have tried using a PUB socket on the client for the parallel request out; however, I can't work out the right semantics for the response, i.e. how to get the server to respond to that specific client.
A hacky solution which I have tried is to have a SUB socket on the client connected to PUB sockets on all the servers, with the leader responding by publishing a message with a filter such that it only goes to the client.
However, I am unable to receive any responses this way: the server believes it sent the message and the client believes it subscribed to "", but it then doesn't receive anything...
So I am wondering whether there is a more proper way of doing this. I have thought that a DEALER/ROUTER setup, sending to a specific client, might potentially work; however, I am unsure how to do that.
Essentially I am trying to do a standard REQ/REP, but with the request going in parallel to all the nodes rather than round-robin.
UPDATE: By sending the routing id of the DEALER in the PUB request, making the remote call idempotent (just returning pre-computed results on repeated attempts), and then sending the result back via a ROUTER, with message filtering on the receiving side, it now works.
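A minimal sketch of the pattern described in the update, assuming the JeroMQ bindings (org.zeromq); endpoint addresses, identities and message formats are placeholders, and in practice the DEALER would connect to every server's ROUTER endpoint since the leader is unknown:

import java.nio.charset.StandardCharsets;
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class LeaderRpcSketch {

    // Client: broadcast the request to all servers, receive the reply on a DEALER.
    static void client(ZContext ctx) {
        ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
        pub.bind("tcp://*:5559"); // every server SUB-scribes to this

        ZMQ.Socket dealer = ctx.createSocket(SocketType.DEALER);
        dealer.setIdentity("client-1".getBytes(StandardCharsets.UTF_8)); // routing id sent along with the request
        dealer.connect("tcp://server-host:5560"); // in practice: connect to every server's ROUTER

        // Embed the DEALER's routing id in the broadcast so the leader knows whom to answer
        // (beware the usual PUB/SUB slow-joiner caveat on the very first message).
        pub.send("client-1 compute-something");

        String reply = dealer.recvStr(); // only replies addressed to "client-1" arrive here
        System.out.println("leader replied: " + reply);
    }

    // Leader: pick up the broadcast, do the (idempotent) work, answer via ROUTER to that one client.
    static void leader(ZContext ctx) {
        ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
        sub.connect("tcp://client-host:5559");
        sub.subscribe("".getBytes(StandardCharsets.UTF_8));

        ZMQ.Socket router = ctx.createSocket(SocketType.ROUTER);
        router.bind("tcp://*:5560");

        String[] parts = sub.recvStr().split(" ", 2); // [routing id, payload]
        router.sendMore(parts[0]);                    // first frame: the client's routing id
        router.send("result-of-" + parts[1]);         // second frame: the reply payload
    }
}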
Q : " is (there) a more proper way of doing this? "
Yes.
Start by applying Maslow's Hammer rule:
“When the only tool you have is a hammer, every problem begins to resemble a nail.”
In other words, do not try to use (one) hammer to solve every problem. The PUB/SUB archetype was designed to serve those-and-only-those multi-party Formal-Communications-Pattern use-cases where many SUB-scribers .recv() what some PUB-lisher(s) .send()-broadcast, and nothing else.
Similarly, the REQ/REP archetype was defined and implemented to serve one-and-only-one multi-party distributed Formal-Communications-Pattern (and it will obviously not meet any use-case that has even a slightly different requirement).
Users often require special, non-trivial features that obviously were not a part of the said trivial Formal-Communications-Pattern archetype primitives (those ready-made blocks made available in the ZeroMQ toolbox).
It is the architects'/designers' role to define, analyse and implement any more complex, user-specific distributed-behaviour definition (a protocol), most often using a layered combination of the ready-made ZeroMQ primitives.
If in doubt, take a sheet of paper and a pencil, draw a small crowd of kids on a playground and sketch their "shouts", their "listening", their "silence", "waiting" and "doubts", their many or few "replies", their "voting" and their "anger" at not being voted for by friends, their fight for a place in the sun and their "persistence" in not letting others take their turn on the "swing" after their own, so far pleasurable, swinging is over.
All this is part of finding the right mix of (protocol-orchestrated) levels of control and levels of freedom to act.
There we get the new distributed behaviour, tailor-made for your specific use-case.
The probability of finding a ready-made primitive tool that matches and fulfills any user-specific use-case is limitlessly close to zero (sure, unless one's own use-case requirements match all those of the primitive archetype, but then that is not a user-specific use-case any more, just a re-use of an already implemented archetype for the very situation that was foreseen by the ZeroMQ fathers, wasn't it?).
Again, welcome to the art of Zen-of-Zero.
You may also like to read this and this and this.

How to use JMeter to measure transmission time between WebSocket sender and receiver

I have implemented a relay server on top of WebSocket. The sender sends many small binary messages to the server, and they are then relayed to all the connected clients.
What I am interested in is the time between the sender sending a message and a receiver receiving it. Right now I have already set up the Test Plan with a thread group of 25 receivers and another group of 1 sender, and they can receive and send messages respectively.
The aggregate report is considering the send message and read message as two different labels. How should I configure the Test Plan to record my desired time?
P.S. I am using this JMeter WebSocket sampler plugin: https://bitbucket.org/pjtr/jmeter-websocket-samplers
Thanks in advance.
The aggregate report is considering the send message and read message
as two different labels.
Sure it is, because there are two separate thread groups, according to you.
You need to sync & order the sampler results somehow, so I see two ways here:
1) Write the raw sampler results (the Simple Data Writer, Aggregate Report & Summary Report listeners are all capable of doing that), then use an external tool (say, a spreadsheet such as Excel, or a small script, see the sketch after this list) to process them, do the simple math and show your desired timings.
Or stream the results to a time-series DB (e.g. InfluxDB) with a Backend Listener and proceed from there: do the math and/or visualize them (say, with Grafana).
2) The second option is to sync the Thread Groups with each other using the Inter-Thread Communication plugin.
But that seems trickier to me and, what's more, it may influence the timing readings (depending on how you do it), so the results get skewed.
Thus I personally would prefer passive metrics collection and post-calculations on top of it (which could be made pretty much "live" too, if you want, with the Backend Listener + InfluxDB + Grafana bundle, or similar).
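To illustrate the first option, here is a rough post-processing sketch over a CSV .jtl file with the default column layout (timeStamp, elapsed, label, ...); the file name and sampler labels are placeholders, and a real test would want a per-message identifier in the payload for exact send/read matching:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// For every "read" sample, find the closest "send" sample that started before it
// finished and report the gap between the sender's start and the reader's end.
public class WsLatencyFromJtl {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of("results.jtl")); // placeholder file name
        List<Long> sendStarts = new ArrayList<>();
        List<long[]> reads = new ArrayList<>(); // [startMillis, elapsedMillis]

        for (String line : lines.subList(1, lines.size())) { // skip the CSV header
            String[] f = line.split(",");
            long start = Long.parseLong(f[0]);   // timeStamp column
            long elapsed = Long.parseLong(f[1]); // elapsed column
            String label = f[2];                 // label column
            if (label.equals("WebSocket Send")) sendStarts.add(start);            // placeholder labels
            else if (label.equals("WebSocket Read")) reads.add(new long[]{start, elapsed});
        }

        for (long[] read : reads) {
            long readEnd = read[0] + read[1];
            long bestSend = Long.MIN_VALUE;
            for (long send : sendStarts) {
                if (send <= readEnd && send > bestSend) bestSend = send;
            }
            if (bestSend != Long.MIN_VALUE) {
                System.out.println("send -> receive: " + (readEnd - bestSend) + " ms");
            }
        }
    }
}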

Ruby WebSocket: check if user exists

I'm using EventMachine and Ruby. Currently I'm making a game where, at the end of a turn, it checks whether the other user is there. When sending data to the user using ws.send(), how can I check whether the user actually got the data, or is there an alternative solution?
As the library doesn't provide you with access to the underlying protocol elements, you need to add elements to your application protocol to do this. A typical approach is to add an identifier to each message and to respond to messages with acknowledgement messages that contain those identifiers.
Note that such an approach will only help you to have a better idea of what has been received by a client. There is no assurance of a particular state in the case of errors. An example would be losing a connection after the client has sent an ACK, but the service has not received it.
As a result of the complexity I just mentioned, it is often easier to try to make most operations idempotent, that is, able to be replayed without detriment to the system, and to replay them readily during/after error conditions. You may additionally find a way to periodically synchronize the relevant state entirely, to avoid the long-term accumulation of minor errors introduced by loss of data or of a connection.
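The question is about Ruby/EventMachine, but the bookkeeping described above is language-neutral; here is a minimal sketch of the idea in Java (the class name and the naive JSON format are made up for illustration):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Every outgoing message carries an id; the peer echoes that id back in an ACK.
// Anything left in `pending` after a timeout was, as far as we know, never received
// and can be resent (which is safe if the operation is idempotent).
public class AckTracker {
    private final Map<String, String> pending = new ConcurrentHashMap<>();

    // Wrap a payload with an id before handing it to the socket's send().
    public String wrap(String payload) {
        String id = UUID.randomUUID().toString();
        pending.put(id, payload);
        return "{\"id\":\"" + id + "\",\"data\":\"" + payload + "\"}";
    }

    // Call this when the peer sends back something like {"ack":"<id>"}.
    public void onAck(String id) {
        pending.remove(id);
    }

    // Messages that have not been acknowledged yet.
    public Map<String, String> unacknowledged() {
        return pending;
    }
}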

RESTful: Using POST to execute algorithms

I'm designing a REST api, and I need an endpoint that executes an algorithm using the data sent by the client.
My first approach was to use a GET endpoint, because the algorithm is idempotent:
Given an input with value "A", it always returns "B" and it never modifies anything on the server.
It would be great to model this using a GET endpoint, so we can use browser cache, bookmark and so on.
However, I can't use a GET endpoint because the algorithm needs a very large JSON document as its input and I don't want to send it as a URL parameter.
Seeing as I can't use GET, I've designed this endpoint using POST.
Now I have a doubt about HTTP status codes.
If the algorithm returns an empty result, I was going to send a 404 status code, which makes a lot of sense for a GET request.
But now, using a POST method, it seems a little bit strange to me:
POST /myAlgorithm
Response: 404 Not Found
It sounds like the user has entered a wrong URL, but the real problem is the input parameter, which produces an empty result.
So my questions are:
Should I return an empty list to deal with this case?
Does anybody know how to design this kind of method using a GET endpoint?
If you have an empty result and that's a legal value, you should return 204 (No Content), meaning that execution succeeded but there was simply nothing to say.
Also, if the call is idempotent, POST is not the ideal way to go.
Both GET and PUT are assumed to be idempotent, but not POST (one of the many references here).
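As a small illustration of the 204-vs-200 choice mentioned above, here is a JDK-only sketch (com.sun.net.httpserver) that answers 200 with a body when the algorithm produced something and 204 when the result is legitimately empty; the endpoint name and the placeholder runAlgorithm are assumptions:

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class AlgorithmEndpoint {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/myAlgorithm", exchange -> {
            String input = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            String result = runAlgorithm(input);
            if (result.isEmpty()) {
                exchange.sendResponseHeaders(204, -1); // no error, simply nothing to say
            } else {
                byte[] body = result.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            }
            exchange.close();
        });
        server.start();
    }

    // Placeholder for the actual side-effect-free computation.
    static String runAlgorithm(String json) {
        return json.isBlank() ? "" : "{\"answer\":\"B\"}";
    }
}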
I want to expand on the previous answer and clarify your question a bit with some concepts.
Your question starts with "RESTful: Using POST to execute algorithms", which is a bit inaccurate, so let's review some concepts quickly.
REST, to put it simply, is mainly and only related to the verbs. Every webpage is REST.
RESTful means you implement all the verbs; webpages are not RESTful except in rare cases.
Most of the time RESTful goes hand in hand with Resource Oriented, which is an architecture; RESTful is not an architecture, it's a set of design principles.
RESTful services work pretty well with ROA (Resource-Oriented Architecture) because it's the natural way to do it. The main principle of ROA is that the scope goes in the URI, so a client can quickly understand what's going on just by looking at the request.
GET /users HTTP/1.1
At a glance I clearly understand that a client wants the list of users.
We also have, as a different architecture, the classic RPC services; SOAP is one of them. An RPC service normally POSTs an action using an envelope (of any kind) and receives a result in an envelope with a 200 OK answer, and no more than that. This is of course a simplification of many other principles, but it works for understanding the concept.
A really good rule of thumb says that if you heavily rely on POST, you're doing neither REST nor RESTful: you're designing an RPC service, or you have something clearly considered REST-RPC.
In an RPC service the scoping and the methods go into the envelope. Going back to your words:
... an endpoint that executes an algorithm using the data sent by the
client.
That's an obvious definition of RPC, or at least of REST-RPC.
In this case you're not acting on a resource. There's no resource involved; you're executing an algorithm (a process, hence it's RPC). So idempotency doesn't really apply here: there's no resource, and there's no necessity to use GET.
Again, considering that you need to POST your data because it's big, and that this data cannot be considered scope (for example, in Google the scope is the set of parameters you pass to the engine), you cannot use any classic REST technique, basically because you're doing RPC calls.
My answer is that you don't need to think about your service in terms of GET or RESTful; consider it the REST-RPC hybrid it was designed as. That means you POST an envelope (your data) and get a 200 OK with an envelope as the answer (in your case, the result of the operation).
That would be the correct way to manage it.
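For illustration, here is what that envelope exchange could look like from the client side with the JDK's java.net.http.HttpClient; the URL and payload are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// POST the (potentially large) JSON input as the request "envelope" and read the
// result envelope back; 200 carries a body, 204 means "legitimately empty".
public class AlgorithmClient {
    public static void main(String[] args) throws Exception {
        String inputJson = "{\"value\":\"A\"}"; // placeholder for the large input document
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/myAlgorithm"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(inputJson))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() == 204) {
            System.out.println("algorithm ran fine, empty result");
        } else {
            System.out.println("result envelope: " + response.body());
        }
    }
}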

HTTP HEAD vs GET performance

I am setting up a REST web service that just needs to answer YES or NO, as fast as possible.
Designing a HEAD service seems the best way to do it, but I would like to know whether I will really gain any time versus doing a GET request.
I suppose I save the body stream being opened/closed on my server (about 1 millisecond?).
Since the number of bytes to return is very low, do I gain any time in transport, or in the number of IP packets?
Edit:
To explain the context further:
I have a set of REST services executing some processes if they are in an active state.
I have another REST service indicating the state of all of these first services.
Since that last service will be called very often by a very large set of clients (one call expected every 5 ms), I was wondering whether using the HEAD method could be a valuable optimization. About 250 characters are returned in the response body; HEAD at least saves the transport of those 250 characters, but what is the impact?
I tried to benchmark the difference between the two methods (HEAD vs GET), running the calls 1000 times, but saw no gain at all (< 1 ms)...
A RESTful URI should represent a "resource" at the server. Resources are often stored as a record in a database or a file on the filesystem. Unless the resource is large or is slow to retrieve at the server, you might not see a measurable gain by using HEAD instead of GET. It could be that retrieving the meta data is not any faster than retrieving the entire resource.
You could implement both options and benchmark them to see which is faster, but rather than micro-optimize, I would focus on designing the ideal REST interface. A clean REST API is usually more valuable in the long run than a kludgey API that may or may not be faster. I'm not discouraging the use of HEAD, just suggesting that you only use it if it's the "right" design.
If the information you need really is meta data about a resource that can be represented nicely in the HTTP headers, or to check if the resource exists or not, HEAD might work nicely.
For example, suppose you want to check if resource 123 exists. A 200 means "yes" and a 404 means "no":
HEAD /resources/123 HTTP/1.1
[...]
HTTP/1.1 404 Not Found
[...]
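Programmatically, such an existence check could look like this with the JDK's java.net.http.HttpClient (the URL is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// HEAD /resources/123: 200 means "yes", 404 means "no", and no body is transferred.
public class HeadCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest head = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/resources/123"))
                .method("HEAD", HttpRequest.BodyPublishers.noBody())
                .build();
        int status = client.send(head, HttpResponse.BodyHandlers.discarding()).statusCode();
        System.out.println(status == 200 ? "resource exists" : "resource missing (status " + status + ")");
    }
}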
However, if the "yes" or "no" you want from your REST service is a part of the resource itself, rather than meta data, you should use GET.
I found this reply when looking for an answer to the same question the requester asked. I also found this at http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining metainformation about the entity implied by the request without transferring the entity-body itself. This method is often used for testing hypertext links for validity, accessibility, and recent modification.
It would seem to me that the correct answer to the requester's question is that it depends on what is represented by the REST protocol. For example, in my particular case, my REST protocol is used to retrieve fairly large (as in more than 10K) images. If I have a large number of such resources being checked on a constant basis, and given that I make use of the request headers, then it would make sense to use HEAD requests, per w3.org's recommendations.
GET fetches headers + body, HEAD fetches headers only. It should not be a matter of opinion which one is faster. I don't understand the upvoted answers above. If you are looking for meta information, then go for HEAD, which is meant for this purpose.
I strongly discourage this kind of approach.
A RESTful service should respect the HTTP verbs semantics. The GET verb is meant to retrieve the content of the resource, while the HEAD verb will not return any content and may be used, for example, to see if a resource has changed, to know its size or its type, to check if it exists, and so on.
And remember: premature optimization is the root of all evil.
HEAD requests are just like GET requests, except the body of the response is empty. This kind of request can be used when all you want is metadata about a file but don't need to transport all of the file's data.
Your performance will hardly change by using a HEAD request instead of a GET request.
Furthermore, when you want it to be RESTful and you want to GET data, you should use a GET request instead of a HEAD request.
I don't understand your concern about the 'body stream being opened/closed'. The response body goes over the same stream as the HTTP response headers and will NOT create a second connection (which, by the way, is more in the range of 3-6 ms).
This seems like a very premature optimization attempt on something that just won't make a significant or even measurable difference. The real difference is conformity with REST in general, which recommends using GET to get data.
My answer is NO; use GET if it makes sense, as there's no performance gain in using HEAD.
You could easily make a small test to measure the performance yourself. I think the performance difference would be negligible, because if you're only returning 'Y' or 'N' in the body, it's a single extra byte appended to an already open stream.
I'd also go with GET since it's more correct. You're not supposed to return content in HTTP headers, only metadata.
