Sessions to QuickBooks API crashing server

Open sessions to QuickBooks Online are crashing our server.
I'm finishing up an implementation of a ColdFusion app that connects to QuickBooks Online.
I'm using ColdFusion scheduled tasks to sync data (sending invoices/payments/clients to QBO).
The server has crashed a few times due to open sessions to QuickBooks.
I can't determine whether the request connection for each object just isn't being "shut down", or whether the initial query can simply send too many object requests (we have lots of pre-existing data in the live testing account).
According to QBO docs:
https://developer.intuit.com/docs/0025_quickbooksapi/0050_data_services/010_calling_data_services/limits_and_throttles
Requests are throttled at 100 requests per minute per Company ID across all V3 endpoints.
So in theory QBO should accept no more than 100 requests per minute.
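To make the throttle concrete, here is a rough sketch of pacing logic that keeps a sync loop under the limit and awaits each request so its connection is finished before the next starts. TypeScript is used only to illustrate; sendToQbo is a hypothetical placeholder for the actual QBO call, and the same idea applies inside a ColdFusion scheduled task:

// Rough sketch: pace outbound QBO requests so a large backlog can never
// burst past the 100-requests-per-minute throttle. sendToQbo() is a
// hypothetical stand-in for the real HTTP call.
const MAX_PER_MINUTE = 90; // leave some headroom under the 100/min limit
const INTERVAL_MS = 60_000 / MAX_PER_MINUTE;

async function sendToQbo(record: object): Promise<void> {
  // Hypothetical: POST the record to the QBO v3 endpoint and fully
  // consume the response so the connection is released.
}

async function syncRecords(records: object[]): Promise<void> {
  for (const record of records) {
    const started = Date.now();
    try {
      // Awaiting each request guarantees only one connection is open
      // at a time, so open sessions cannot pile up.
      await sendToQbo(record);
    } catch (err) {
      console.error("QBO request failed:", err);
    }
    const elapsed = Date.now() - started;
    if (elapsed < INTERVAL_MS) {
      await new Promise((resolve) => setTimeout(resolve, INTERVAL_MS - elapsed));
    }
  }
}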
Please let me know what specifics, if any, I can provide.
Thanks,
Steven

Related

Google's RuntimeConfig API responds with 'Our systems have detected unusual traffic from your computer network'

Since today (November 20, 2018) we get error responses from Google's RuntimeConfig API:
Our systems have detected unusual traffic from your computer network. This page checks to see if it's really you sending the requests, and not a robot...
(check this link for complete HTML error)
We retrieve variables from Google's RuntimeConfig using the API in our code. We make quite a few requests, but not more than before:
A developer starts his server locally, which retrieves all the needed variables (around 30 every time you start).
Requesting RuntimeConfig variables via GCloud results in the same HTML error:
gcloud beta runtime-config configs variables get-value databaseHost --config-name database --project=your-test-environment
Other gcloud API requests work (projects describe, gsutil, etc.).
How can I verify if I violated any terms? I can only find a usage limit in GCloud Console of 6000 calls per minute.
You can find the quotas for the Runtime Configurator, and how much of them you are using, in the Cloud Console under IAM & Admin. In the Quotas section you can filter on Service = Cloud Runtime Configuration API and you should see all the quotas and how close to them you are for this API. There are four quotas that may affect you (docs here):
1200 queries per minute (QPM) for delete, create, and update requests
600 QPM for watch requests
6000 QPM for get and list requests
4 MB of data per project, which consists of all data written to the Runtime Configurator service and accompanying metadata
We had the exact same issue on November 20th, when a large number of our preemptible instances were reallocated at the same time.
Our startup scripts make use of the gcloud beta runtime-config...-commands, and they all responded with 503.
These commands responded correctly again after a few hours.
We had a support ticket open with Google; there was a problem with their internal quota mechanisms at the time, which has since been fixed, so the issue is resolved.
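In the meantime, wrapping the lookup in a retry with exponential backoff would have kept our startup scripts from failing hard on the transient 503s. A minimal sketch in TypeScript (Node), shelling out to the same gcloud command shown above; the config and project names are the ones from the question:

// Minimal sketch: retry a runtime-config lookup with exponential backoff
// so a transient 503 does not abort the whole startup script.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function getConfigValue(name: string, retries = 5): Promise<string> {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      const { stdout } = await run("gcloud", [
        "beta", "runtime-config", "configs", "variables", "get-value", name,
        "--config-name", "database", "--project", "your-test-environment",
      ]);
      return stdout.trim();
    } catch {
      // Back off 1s, 2s, 4s, ... before retrying the transient failure.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
  throw new Error(`gcloud lookup for ${name} kept failing after ${retries} attempts`);
}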

SignalR combined with load balancer missing messages

I have 2 web servers (IIS 8.5) behind a hardware firewall and our application uses SignalR for some real-time updates. We are using SQL Server as the backplane to help us work in this load balanced environment. Additionally we are using sticky sessions on the load balancer to help us keep the users on the same web server during their session. When we are running in this hardware configuration we lose at least 1/3 of our messages. Sometimes we get all the expected messages but more often than not we are missing plenty.
When we are running on a single web server all messages are received. Does anyone have any suggestions for troubleshooting this problem? We've turned on logs (both client & server) and nothing looks like it's missing or broken. We're really stumped.
EDIT---
Some additional details that I hope will shed light on the situation.
Server-to-client messages are getting lost, and pretty much all our communication is server to client.
We are using sticky sessions based only on IP and limited to 5 minutes, but we're losing messages within those 5 minutes.
This is some old SignalR code that has been only minimally touched since SignalR 1 (or even older). We keep an in-memory list of users along with their connections, and we use that list to send notices back to the client. It seems most likely that this is the cause of the trouble, but with sticky sessions the user should be stuck to the same server for at least those 5 minutes, right?
This list of users maps username to connection id, which is useful when our backend services (on another machine) send a message back with the username rather than the connection id.
Finally resolved this. There were really two issues. The first was that we were using an in-memory list of users, as mentioned in the edit above; once we realized that wasn't going to work across machines, we removed it. That also led us to the second issue, which was how SignalR 2 uses the IUserIdProvider: our call should have been
Clients.User(userId).send(message)
instead of
context.Clients.Client(connection)
This code had existed since we first started using SignalR many years ago and was never properly updated as we upgraded SignalR versions.
Make sure you have the same machineKey specified in your web.config on both servers.

Should I be using AJAX or WebSockets?

Oh, the joyous question of HTTP vs. WebSockets is at it again. However, even after quite a bit of reading through the hundreds of versus blog posts, SO questions, etc., I'm still at a complete loss as to what I should be working towards for our application. In this post I will supply information on the application's functionality and the types of requests/responses currently used in our application.
Currently our application is a sloppy piece of work, thrown together using AngularJS and AJAX requests to an Apache server running PHP, namely XAMPP. With the launch of our application I've noticed that we're having problems with response times when the server is under any kind of load. This probably has something to do with the sloppy architecture of our server, the hardware, and the fact that our MySQL database isn't exactly optimized.
However, with such a loyal fanbase, and investors seeing potential in our application and giving us a chance to roll out a 2.0, I've been studying hard how to turn this application into a powerhouse of low-latency scalability. Honestly the best option would be to hire someone with experience, but unfortunately I'm a hobbyist, and a one-man army without much experience.
After some extensive research, I've decided on writing the backend in NodeJS this time. However, I'm having a hard time deciding between HTTP and WebSockets. Here are the types of transactions that happen between the server and client.
The client sends a request to the server in JSON format. The request has a few different parts:
A request id (for processing logic based on the request)
The data associated with the request id
The server receives the request, queries the database (if necessary), and then responds to the client in JSON format. Sometimes the server is serving files to the client, namely images in Base64 format.
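To make that concrete, here is a rough sketch of how the same id-tagged JSON envelope could run over a WebSocket on the new NodeJS backend (a sketch only, assuming the ws package; handleRequest is a placeholder for our request-id dispatch):

// Rough sketch: the { id, data } envelope described above, carried over a
// WebSocket so each response can be matched to its request by id.
// Assumes the "ws" package; handleRequest() is a placeholder.
import { WebSocketServer } from "ws";

interface Envelope {
  id: string;    // request id, used to route and correlate
  data: unknown; // payload associated with that request id
}

async function handleRequest(req: Envelope): Promise<unknown> {
  // Placeholder: look up the handler for req.id, query MySQL if needed.
  return { echoed: req.data };
}

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", async (raw) => {
    const req: Envelope = JSON.parse(raw.toString());
    const result = await handleRequest(req);
    // Echo the id back so the client can match the response to the
    // request, even with several requests in flight at once.
    socket.send(JSON.stringify({ id: req.id, data: result }));
  });
});

Because every frame carries its id, several requests can be pending at once and responses can come back in whatever order they finish; the one caveat is that all frames share a single TCP connection per client, so very large payloads (like the Base64 images) are best chunked or fetched over plain HTTP alongside the socket.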
Currently the application (when being used) sends a request to the server every time an interface is changed, which on average for our application is once every few seconds. Every action in our interfaces sends another request to the server. The application also sends requests to check for notifications/messages every 8 seconds (or every two seconds if the user is on the messaging interface).
Here are the benefits I currently see of a stateful connection over a stateless connection for our application:
If the connection is stateful, I can eliminate the polling for notifications and messages, since the server can just tell the client whenever one becomes available. This alone can eliminate x(n)/4 requests per second to the server.
Handling something like a disconnection from the server is as simple as attempting to reconnect, as opposed to handling timeouts/errors per request; it would only need to be handled on the socket.
Additional security can be obtained by removing the security keys used for database interaction; this should prevent the hijacking of a session_key and using it to manipulate or access another user's data. The session_key is only needed because there is no state in the AJAX setup.
However, I'm someone who started learning programming through TCP game-server emulation, so while I understand some benefits of a stateful connection, I don't understand the benefits of a stateless connection very well at all. I know they both have their benefits and quirks, but I'm curious which would be the best approach for us.
We're mainly looking for scalability, as we had a local application launch and managed to bottleneck at nearly 10,000 users in under 48 hours. Luckily I announced this as a BETA, and the users are cutting me a lot of slack after learning that I did it all on my own as a learning project. I've disabled registrations while looking into improving the application's front end and backend.
IMPORTANT:
If using WebSockets, would we be able to asynchronously download pictures from the server like we can with AJAX? For example, I can make 5 requests to the server using AJAX for 5 different images, and they will all start downloading immediately; with a stateful connection, would I have to wait for each photo to be streamed before moving on to the next request? Would this bottleneck only a single user, or every user waiting on a request to be completed?
It all boils down to how your application works and how it needs to scale. I would use bare WebSockets rather than any wrapper, since the API is already easy to use and your hands won't be tied when you need to scale out.
Here are some links that will give you insight, although not concrete answers to your questions, because, as I said, it depends on your expectations.
Hard downsides of long polling?
WebSocket/REST: Client connections?
Websockets, and identifying unique peers [PHP]
How HTML5 Web Sockets Interact With Proxy Servers
If your question is "Should I use HTTP over WebSockets?", the response is: you should not.
Even if it is faster because you don't lose time opening the connection, you also lose the whole HTTP specification: verbs (GET, POST, PATCH, PUT, ...), paths, bodies, responses, and status codes. This seems simple, but you'll have to re-implement all or part of these protocol features yourself.
So you should use AJAX as long as it is a one-off request.
When you need to make an AJAX request every 2 seconds, what you actually need is for the server to send you data, not for YOU to ask the server whether the API has changed. That is a sign that you should implement a WebSocket server, as sketched below.
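For example, instead of the client polling every 2 to 8 seconds, the server pushes a notification over the already-open socket the moment one exists. A minimal sketch, again assuming Node's ws package (the userSockets map and notifyUser() are hypothetical names for this illustration):

// Minimal sketch: push notifications over open sockets instead of having
// every client poll. userSockets and notifyUser() are hypothetical names.
import WebSocket, { WebSocketServer } from "ws";

const userSockets = new Map<string, WebSocket>();

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket, req) => {
  // Hypothetical: identify the user from a query parameter; in a real
  // application this would come from an authenticated token.
  const userId = new URL(req.url ?? "/", "http://localhost").searchParams.get("user");
  if (userId) {
    userSockets.set(userId, socket);
    socket.on("close", () => userSockets.delete(userId));
  }
});

// Called by whatever creates a notification; this replaces the poll.
function notifyUser(userId: string, notification: object): void {
  const socket = userSockets.get(userId);
  if (socket && socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ type: "notification", data: notification }));
  }
}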

Load testing WCF services gives huge (>200 sec) responses

I have a service being load tested by a third party. A few minutes after starting, we start to see requests hanging for a very long period of time and the caller ultimately times out (after 60 seconds).
They are testing with 15 users with each user using two devices at once, so a total of 30 connections.
The service is a simple façade to a more complex operation, calling an external system. Benchmarking our communications to the external system shows everything responding in the time we would expect (sub-200 ms).
The IIS logs reveal a bunch of very slow requests (>200 sec) which ultimately do return a 200 and carry the Win32 error code ERROR_NETNAME_DELETED (error 64). I have checked the service log and can match up each response to its request (based on the SOAP message id), and can see that we do eventually respond with the correct information (although the client has long since given up).
Any ideas as to what could be causing this behavior? We're hosting in IIS using wsHttpBinding and we're using WS-Security with x509 certificates (message & transport encryption).
We don't have benchmark logging inside our service, but the code is a very simple mapping of the WCF request to the server request, making the request, and mapping the response back to the WCF response. We do this manually and there is no parsing involved (straight assignments).
After a detailed investigation, including getting Microsoft support involved, we found we were hitting the serviceThrottling defaults, specifically maxConcurrentSessions. We determined this from perfmon (there is a counter for it). We were unsure why we saw this, as the service behaved when called with a .NET client.
It turns out that the Java consumer of this application, using CXF, was not respecting the WSDL (specifically the part about WS-SecureConversation) and was not closing sessions when it closed its connection.
Our solution was to jack up maxConcurrentSessions to a high number and to set the inactivityTimeout down low (to a minute) to force session abandonment. In addition, we set establishSecurityContext to false to avoid the WS-SecureConversation negotiation consuming an additional session.
The solution is inelegant, as the service logs are littered with errors about forced session closures, but it fixed the issue we were seeing. Unfortunately we had a requirement for WS-Security, so our solution needed to stick with that.
I hope this helps someone, as this was an interesting and time-consuming problem to pin down.

How to find the cause of RESTful service bad performance?

I am creating a service which receives some data from mobile phones and saves it to the database.
The phone sends the data every 250 ms. Since I noticed that the delay before the data is stored was increasing, I ran Wireshark and wrote a log as well.
I noticed that the web requests from the mobile phone are made without any delay (checked with Wireshark), but in the service log the requests arrive only every 1.5 to 2 seconds.
Does anyone know where the problem could be, or a way to test and determine the cause of such a delay?
I am creating the service with WCF (webHttpBinding) and the database is MS SQL Server.
By the way, the log stores the time of the HTTP request and also the time of writing the data to the database. As mentioned above, a request is received every 1.5-2 seconds, and after that it takes 50 ms to store the data in the database.
Thanks!
My first guess after reading the question was that you may be submitting data so fast that the database server is hitting a write-contention lock (e.g. on AutoNumber fields?).
If your database platform is SQL Server, take a look at http://www.sql-server-performance.com/articles/per/lock_contention_nolock_rowlock_p1.aspx
Anyway, please post more information about the overall architecture of the system: what software/platforms are used at which parts, etc.
Maybe there is some limitation on the connection imposed by the service provider?
What happens if you (for testing) don't write to the database and just log the page hits in the server log with a timestamp?
Check that you do not have any tracing running on the web services; this can really kill performance.
