How to extend XML-RPC such that a single request generates multiple responses? - client-server

I have an existing application which acts as an XML-RPC server.
The clients of that server are other programs which run on the same computer.
The clients form connections to the server, invoke commands using XML-RPC, and await responses. The details of the connection and transport protocol are not relevant for this discussion.
The set of available commands constitutes a versioned API between the server and its clients.
I am at liberty to change any code on the server, and also to change library code shared between the clients which handles the details of XML-RPC.
I can therefore introduce an incompatibility with the XML-RPC spec in my implementation - I control the relevant code in both server and client; nothing touches the internet.
I wish to evolve the API such that a subset of commands invoked by the client(s) cause two separate responses to be generated, each of which is correlated to the original request.
+--------+                          +--------+
|        |                          |        |
| Client |                          | Server |
|        |                          |        |
+---+----+                          +---+----+
    |                                   |
    |             request               |
    +--------------------------------->+-+
    |                                  |-|
    |                                  |-|
    |         initial response         |-|
    | <- - - - - - - - - - - - - - - - |-|
    |                                  |-|
    |                                  |-|
    |         ultimate response        |-|
    | <- - - - - - - - - - - - - - - - +-+
    |                                   |
    +                                   +
The client will have distinct states (and therefore will be performing different tasks) whilst awaiting the initial response, and the ultimate response.
I already have the ability to send an unsolicited "event" from the server to the client, so I could solve this problem by generating an event representing the initial response, then use the existing XML-RPC mechanism to deliver the ultimate response. This is very unsatisfactory though; for the sanity of the client code, the two responses need to be correlated with (i.e. they should share an id with) the original request.
I am struggling to imagine a clean way of solving this problem.
Is there some metaphor I am missing, which would help form a design?
Is there an existing RPC system, or some extension to XML-RPC / JSON-RPC which allows a single request to beget multiple responses in this fashion?
Has anyone dealt with a similar problem - can they provide some insights?
Note that the initial and ultimate responses must be separate XML stanzas.
Specifically, it must be possible for the server to deliver unsolicited XML stanzas (which do not pertain to the request) to the client while it awaits the ultimate response.
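One way to realize the diagram above is to tag every stanza with the request's id plus a phase marker, so both the initial and the ultimate response correlate to the original call, while unsolicited stanzas pass through untouched. A minimal sketch of the client-side bookkeeping; the message shape (`id`, `phase` fields) and transport interface are hypothetical, not part of the XML-RPC spec:

```python
import itertools

class MultiResponseClient:
    """Sketch: correlate one request id with two expected responses."""

    def __init__(self, transport):
        self.transport = transport     # assumed to expose send(dict)
        self.pending = {}              # request id -> phases received so far
        self.ids = itertools.count(1)

    def call(self, method, params):
        """Issue a request and remember its id for later correlation."""
        req_id = next(self.ids)
        self.pending[req_id] = []
        self.transport.send({"id": req_id, "method": method, "params": params})
        return req_id

    def on_message(self, msg):
        """Dispatch an incoming stanza; 'phase' distinguishes the two replies."""
        req_id = msg.get("id")
        if req_id not in self.pending:
            return "event"             # unsolicited stanza, handled elsewhere
        self.pending[req_id].append(msg["phase"])
        if msg["phase"] == "ultimate": # final reply: the request is complete
            del self.pending[req_id]
        return msg["phase"]
```

The key property is that an unsolicited stanza arriving between the two responses falls through to the event path without disturbing the pending-request table.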

Related

Same API gets different response times

I run the same API 4 times in the same JMeter script. On the first run the API gets a high response time, and after that the same API gets lower times.
User Create API - 2067 ms
User Create API 1- 948 ms
User Create API 2- 869 ms
User Create API 3- 902 ms
User Create API 4- 993 ms
Why does this kind of scenario occur in JMeter?
JMeter only sends requests, waits for responses, measures time in-between and writes down the performance metrics and KPIs.
If the first request takes longer than the following ones, the reasons could be:
Your application under test uses lazy initialization pattern
Your application under test needs to warm up its caches
First request takes longer due to the process of establishing the connection and subsequent requests are simply re-using the connection if you're sending Keep-Alive header
Your API endpoint response is cached on database or in-memory level
etc. The reasons could be numerous; you need to monitor everything you can on both the JMeter and the system-under-test sides to understand this.
JMeter initializes TCP connections and performs SSL handshakes for the first request. For subsequent requests it reuses connections according to its configuration parameters httpclient4.time_to_live and httpclient.reset_state_on_thread_group_iteration.
You can refer to JMeter's properties reference for more information.
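For reference, the two properties mentioned above are set in user.properties; the values below are illustrative only - check the JMeter properties reference for the defaults and units:

```properties
# Time-to-live for pooled keep-alive connections (illustrative value)
httpclient4.time_to_live=60000
# Reset connection state (forcing new connections/handshakes) on each
# thread group iteration
httpclient.reset_state_on_thread_group_iteration=true
```

Setting the reset property to true makes every iteration pay the connection-setup cost again, which is useful when each iteration should simulate a fresh user.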

Client Synchronization on JMeter

How can I develop a test, using JMeter, where two clients can connect in a single TCP Server?
In my test I will have, two Client and One Server:
      Client 1          Server          Client 2
         |      M1         |                |
T1       |---------------->|      M1'       |
T2       |                 |--------------->|
T3       |                 |       R2       |
T4       |      R2'        |<---------------|
T5       |<----------------|                |
Client 1 sends the message M1 to the server; the server processes that message and sends it to Client 2. Client 2 answers the message. I want to measure the response time from T1 to T5.
I'm implementing my client connection using Java Request. Is there any JMeter feature to Synchronize Actions between threads?
Or should I implement my own threads inside my Java Request class?
If I implement my own threads, that means I will duplicate my threads for the test. How many threads does a single instance of JMeter support?
1. JMeter's threads (virtual users) are absolutely independent and know nothing about what others are doing. However you can use Inter-Thread Communication plugin to add some extra logic and implement IPC
2. I don't think you should as JMeter will not be measuring these extra threads
3. You're limited only by your hardware/software. Assuming you follow JMeter Best Practices you should be able to kick off several thousand threads on a modern PC. In any case there is always the possibility of running the JMeter test in distributed mode
Just in case: there are the TCP Sampler and the HTTP Raw Request Sampler, both capable of firing TCP requests, so it may be that you will not have to reinvent the wheel.
If you want to start 2 different Samplers at the same time you can put them under a Parallel Controller. It isn't part of the standard JMeter distribution; you will need to install it using the JMeter Plugins Manager.

How to track & trend end to end performance (the client experience)

I am trying to figure out how best to track and trend end-to-end performance between releases. By end to end I mean: what is the experience for a client visiting this app via a browser? This includes download time, DOM rendering, JavaScript rendering, etc.
Currently I am running load tests using JMeter, which is great for proving application and database capacity. Unfortunately, JMeter will never allow me to show a full picture of the user experience. JMeter is not a browser and therefore will never simulate the impact of JavaScript and DOM rendering. E.g.: if time to first byte is 100ms, but it takes the browser 10 seconds to download assets and render the DOM, we have problems.
I need a tool to help me with this. My initial idea is to leverage Selenium. It could run a set of tests (login, view this, create that) and somehow record timings for each. We would need to run the same scenario multiple times and likely through a set of browsers. This would be done before every release and would allow me to identify changes in the experience to the user.
For example, this is what I would like to generate:
action | v1.5 | v1.6 | v1.7
----------------------------------------
login | 2.3s | 3.1s | 1.2s
create user | 2.9s | 2.7s | 1.5s
The problem with Selenium is that 1. I am not sure if it is designed for this and 2. it appears that DOM ready or JavaScript rendering is really hard to detect.
Is this the right path? Does anyone have any pointers? Are there tools out there that I could leverage for this?
I think you have good goals, but I would split them:
Measuring DOM rendering, JavaScript rendering etc. is not really part of the "experience from the client visiting this app via a browser", because your clients are usually unaware that you are "rendering the DOM" or "running JavaScript" - and they don't care. But these are something I'd want to address after every committed change, not just release to release, because it could be hard to trace a degradation back to a particular change if such a test is not running all the time. So I would put it in continuous integration at build level. See a good discussion here
Then you probably would want to know if server side performance is the same or worsened (or is better). For that JMeter is ideal. Such testing could be done on some schedule (e.g. nightly or on each release) and can be automated using for example JMeter plug-in for Jenkins. If server side performance got worse, you don't really need end-to-end testing, since you already know what will happen.
But if server is doing well, then "end user experience" test using a real browser has a real value, so Selenium actually fits well to do this, and since it can be integrated with any of the testing frameworks (junit, nunit, etc), it also fits into automated process, and can generate some report, including duration (JUnit for instance has a TestWatcher which allows you to add consistent duration measurement to every test).
After all this automation, I would also do a "real end user experience" test, while JMeter performance test is running at the same time against the same server: get a real person to experience the app while it's under load. Because people, unlike automation, are unpredictable, which is good for finding bugs.
Regarding "JMeter is not a browser". It is really not a browser, but it may act like a browser given proper configuration, so make sure you:
add HTTP Cookie Manager to your Test Plan to represent browser cookies and deal with cookie-based authentication
add HTTP Header Manager to send the appropriate headers
configure HTTP Request samplers via HTTP Request Defaults to
Retrieve all embedded resources
Use thread pool of around 5 concurrent threads to do it
Add HTTP Cache Manager to represent browser cache (i.e. embedded resources retrieved only once per virtual user per iteration)
if your application is built on AJAX - you need to mimic the AJAX requests with JMeter as well
Regarding "rendering": suppose you detect that your application renders slowly on a certain browser and there is nothing you can do by tuning the application. What's next? Will you develop a patch or raise an issue with the browser's developers? I would recommend focusing on areas you can control, and DOM rendering by a browser is not one of them.
If you still need these client-side metrics for any reason you can consider using the WebDriver Sampler along with the main JMeter load test so real browser metrics can also be added to the final report. You can even use the Navigation Timing API to collect the exact timings and add them to the load test report.
See Using Selenium with JMeter's WebDriver Sampler to get started.
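Once the Navigation Timing marks have been collected (e.g. by serializing the browser's performance.timing object to a dict via a WebDriver execute_script call), the durations worth trending are simple differences between millisecond timestamps. A small helper illustrating the arithmetic; the input dict is assumed to contain standard Navigation Timing fields:

```python
def navigation_metrics(timing):
    """Derive common page-load metrics (in ms) from Navigation Timing
    marks. `timing` is assumed to be the browser's performance.timing
    object serialized to a dict, e.g. via WebDriver's execute_script."""
    start = timing["navigationStart"]
    return {
        # server + network latency until the first response byte
        "time_to_first_byte": timing["responseStart"] - start,
        # DOM parsed and DOMContentLoaded handlers finished
        "dom_content_loaded": timing["domContentLoadedEventEnd"] - start,
        # full page load, including sub-resources
        "page_load": timing["loadEventEnd"] - start,
    }
```

Recording these three numbers per scenario and per release is enough to build the version-over-version table shown in the question.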
There are multiple options for tracking your application performance between builds (and JMeter tests executions), i.e.
JChav - JMeter Chart History And Visualisation - a standalone tool
Jenkins Performance Plugin - a Continuous Integration solution

Network structure for online programming game with webSockets

Problem
I'm making a game where you would provide a piece of code to represent the agent program of an Intelligent Agent (think Robocode and the like), but browser-based. Being an AI/ML guy for the most part, my knowledge of web development was/is pretty lacking, so I'm having a bit of trouble implementing the whole architecture. Basically, after the upload of text (code), naturally part of the client side, the backend would be responsible for running the core logic and returning JSON data that would be parsed and used by the client, mainly for the drawing part. There isn't really a need for multiplayer support right now.
If I model after Robocode's execution loop, I would need a separate process for each battle that then assigns different agents (user-made or not) to different threads and gives them some execution time for each loop, generating new information to be given to the agents as well as data for drawing the whole scene. I've tried to think of a good way to structure the multiple clients, servers/web servers/processes [...], and came to multiple possible solutions.
Favored solution (as of right now)
Clients communicate with a Node.js server that works kinda like an interface (think websocketd) for unique processes running on the same (server) machine, keeping track of client and process via ID and forwarding the data (via webSockets) accordingly. So an example scenario would be:
Client C1 requests new battle to server S and sends code (not necessarily a single step, I know);
S handles the code (e.g. compiling), executes the new battle and starts a connection with its process P1 (named pipes/FIFO?);
P1 generates JSON, sends to S;
S sees P1 is "connected" to C1, sends data to C1 (steps 3 and 4 will be repeated as long as the battle is active);
Client C2 requests new battle;
Previous steps repeated; C2 is assigned to new process P2;
Client C3 requests "watching" battle under P1 (using a unique URL or a token);
S finds P1's ID, compares to the received one and binds P1 to C3;
This way, the Server forwards received data from forked processes to all clients connected to each specific Battle.
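The forwarding described above reduces to a lookup table from battle process to subscribed clients, maintained as clients request or watch battles. A minimal sketch of that bookkeeping (class and parameter names are hypothetical; `send` stands in for a per-client WebSocket write):

```python
class BattleRouter:
    """Sketch: map battle processes to subscribed clients and fan
    battle output out to every watcher."""

    def __init__(self):
        self.subscribers = {}    # battle_id -> set of client ids

    def assign(self, battle_id, client_id):
        """Bind a client to a battle (creator or spectator alike)."""
        self.subscribers.setdefault(battle_id, set()).add(client_id)

    def forward(self, battle_id, payload, send):
        """Deliver a battle's JSON payload to all of its subscribers;
        send(client_id, payload) abstracts the WebSocket write."""
        for client_id in self.subscribers.get(battle_id, ()):
            send(client_id, payload)
```

With this shape, the "watching" scenario (C3 joining P1's battle) is just another assign call against the same battle id.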
Questions
Regarding this approach:
Is it simple enough? Are there easier or even more elegant ways of doing it? Could scalability be a problem?
Is it secure enough (the whole compiling and running code — likely C++ — on the server)?
Is it fast enough (this one worries me the most for now)? It seems a bit counter intuitive to have a single server dealing with the entire traffic, but as far as I know, if I'd assign all these processes to a separate web server, I would need different ports for each of them, which seems even worse.
Since this is a theoretical and opinion-based question... I feel free to throw the ball in different directions. I'll probably edit the answer as I think things over or read comments.
A process per battle?
Sounds expensive. Also, there is the issue of messages going back and forth between processes... we might as well send the messages between machines and have a total separation of concerns.
Instead of forking battles, we could have them running on their own, allowing them to crash and reboot and do whatever they feel like without ever causing any of the other battles or our server any harm.
Javascript? Why just one language?
I would consider leveraging an Object Oriented approach or language - at least for the battles, if not for the server as well.
If we are separating the code, we can use different languages. Otherwise I would probably go with Ruby, as it's easy for me, but maybe I'm mistaken and delving deeper into Javascript's prototypes will do.
Oh... foreign code - sanitization is in order.
How safe is the foreign code? Should it be in a dedicated, sandboxed language that promises safety, or should you use an existing language interpreter, which might allow the code to mess around with things it really shouldn't?
I would probably write my own "pseudo language" designed for the battles... or (if it was a very local project for me and mine) use Ruby with one of its sanitizing gems.
Battles and the web-services might not scale at the same speed.
It seems to me that handling messages - both client->server->battle and battle->server->client - is fairly easy work. However, handling the battle seems more resource intensive.
I'm convincing myself that a separation of concerns is almost unavoidable.
Having a server backend and a different battle backend would allow you to scale the battle handlers up more rapidly and without wasting resources on scaling the web-server before there's any need.
Network disconnections.
Assuming we allow the players to go offline while their agents "fight" in the field... What happens when we need to send our user "Mitchel", who just reconnected to server X, a message from a battle he left raging on server Y?
Separating concerns would mean that right from the start we have a communication system that is ready to scale, allowing our users to connect to different endpoints and still get their messages.
Summing these up, I would consider this as my workflow:
Http workflow:
Client -> Web Server : requesting an agent with an identifier and optional battle data (battle data is for creating an agent; omitting battle data limits the request to an existing agent, if it exists).
This step might be automated based on Client authentication / credentials (i.e. session data / cookie identifier or login process).
if battle data exists in the request (request to make):
Web Server -> Battle instance for : creating agent if it doesn't exist.
if battle data is missing from the request:
Web Server -> Battle Database, to check if agent exists.
Web Server -> Client : response about agent (exists / created vs none)
If Agent exists or created, initiate a Websocket connection after setting up credentials for the connection (session data, a unique cookie identifier or a single-use unique token to be appended to the Websocket request query).
If the Agent doesn't exist, forward the client to a web form to fill in data such as agent code, battle type etc.
Websocket "workflows" (non linear):
Agent has data: Agent message -> (Battle communication manager) -> Web Server -> Client
It's possible to put Redis or a similar DB in there, to allow messages to stack while the user is offline and to allow multiple battle instances and multiple web server instances.
Client updates to Agent: Client message -> (Battle communication manager) -> Web Server -> Agent
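The Redis layer suggested above amounts to "buffer messages per recipient, flush on reconnect". An in-memory sketch of that behavior (all names hypothetical; in production the dicts would be a shared store such as Redis lists, so multiple web server instances see the same buffer):

```python
class MessageBuffer:
    """Sketch: stack messages for offline users and flush them when
    the user reconnects, possibly to a different web server."""

    def __init__(self):
        self.online = {}     # user_id -> delivery callback (socket write)
        self.queued = {}     # user_id -> messages pending delivery

    def deliver(self, user_id, message):
        """Send immediately if the user is connected, else queue."""
        if user_id in self.online:
            self.online[user_id](message)
        else:
            self.queued.setdefault(user_id, []).append(message)

    def connect(self, user_id, send):
        """Register a connection and flush anything that accumulated."""
        self.online[user_id] = send
        for message in self.queued.pop(user_id, []):
            send(message)

    def disconnect(self, user_id):
        self.online.pop(user_id, None)
```

Because delivery always goes through the buffer, a battle instance never needs to know whether its owner is currently connected or to which endpoint.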

How do I resolve a process hanging on CoUninitialize()?

I have a native Visual C++ NT service. When the service is started its thread calls CoInitialize() which attaches the thread to an STA - the service thread uses MSXML through COM interfaces.
When the service receives SERVICE_CONTROL_STOP it posts a message in the message queue; later that message is retrieved and the OnStop() handler is invoked. The handler cleans up stuff and calls CoUninitialize(). Most of the time it works all right, but once in a while the latter call hangs. I can't reproduce this behavior reliably.
I googled for a while and found the following likely explanations:
failing to release all COM objects owned
repeatedly calling CoInitializeEx()/CoUninitialize() for attaching to the MTA
failing to dispatch messages in STA threads
The first one is unlikely - the code using MSXML is well tested and analyzed, and it uses smart pointers to control object lifetimes, so leaking objects is really unlikely.
The second one doesn't look like the likely reason. I attach to STA and don't call those functions repeatedly.
The third one looks more or less likely. While the thread is processing the message it doesn't run the message loop anymore - it is inside the loop already. I suppose this might be the reason.
Is the latter a likely reason for this problem? What other reasons should I consider? How do I resolve this problem easily?
Don't do anything of consequence in the thread handling SCM messages, it's in a weird magical context - you must answer SCM's requests as fast as possible without taking any blocking action. Tell it you need additional time via STOP_PENDING, queue another thread to do the real cleanup, then immediately complete the SCM message.
As to the CoUninitialize, just attach WinDbg and dump all the threads - deadlocks are easy to diagnose (maybe not to fix!), you've got all of the parties to the crime right there in the stacks.
After very careful analysis, and using the Visual Studio debugger (thanks to user Pall Betts for pointing out that getting evidence is important) to inspect all active threads, I discovered that the process hangs not on the call to CoUninitialize(), but on the RpcServerUnregisterIf() function called from our program code right before CoUninitialize(). Here's a sequence diagram:
WorkerThread                               RpcThread              OuterWorld
 |----| Post "stop service" message            |                       |
 |<---|                                        |  SomeRpcServerMethod()|
 |        Post "process rpc request"           |<----------------------|
 |<--------------------------------------------|                 waits
 |                                         |----| Wait until
 |----| Process "stop service" message     |    | request is processed
 |<---| (call OnStop())                    |    | by the worker thread
 |                                         |    |
 |----| RpcServerUnregisterIf()            |    |
 |X<--| Wait all rpc requests complete     |X<--|
 |                                              |
An inbound RPC request comes in and the RPC runtime spawns a thread to service it. The request handler queues the request to the worker thread and waits.
Now the moon phase happens to be just right, and so RpcServerUnregisterIf() executes in parallel with the handler in the RPC thread. RpcServerUnregisterIf() waits for all inbound RPC requests to complete, and the RPC handler waits for the worker thread to process the request. That's a plain old deadlock.
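The shape of the deadlock, and the fix of draining queued RPC work before waiting on it, can be sketched abstractly. Python stands in for the C++ service here, and every name is hypothetical: `work_queue` holds jobs posted by RPC handler threads, and the final wait plays the role of RpcServerUnregisterIf():

```python
import queue

def shutdown(work_queue, pending_done):
    """Sketch of the fix: process every request that RPC threads have
    already queued *before* waiting for those RPC threads to finish
    (the RpcServerUnregisterIf analogue). Waiting first while the
    queue is non-empty is exactly the deadlock described above: the
    RPC thread waits on the worker, the worker waits on the RPC thread."""
    # Drain: complete each queued job, which unblocks its RPC thread.
    while True:
        try:
            job = work_queue.get_nowait()
        except queue.Empty:
            break
        job()
    # Only now is it safe to wait for all inbound RPC requests.
    assert all(done() for done in pending_done), "an RPC request is still blocked"
    return "stopped"
```

The equivalent C++ fix is to order the OnStop() handler so the worker thread services its message queue (or hands cleanup to another thread) before anything blocks on outstanding RPC calls.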
