How can I find the number of "active sessions" on my OpenERP server?
I'm aware that "active sessions" is not an exact concept here, but overall what I would like to be aware of is the level of usage stress the server is being subject to, and comparing that to the OS resources being dedicated to the process.
I added a small amount of logging code to the server to trace the user id, model, and a few other parameters for each request that came through. Be careful not to record anything sensitive like passwords, and make sure the tracing doesn't add significant load to the server itself. You can see the tracing code we added to OpenERP 5 on Launchpad.
If you don't need the same level of detail, it would probably add less load to use a network sniffer to count connections to OpenERP's port, as described in this question.
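If all you want is a rough connection count without setting up a sniffer, something like the following sketch works. It uses the psutil library and assumes the default OpenERP XML-RPC port 8069 (adjust for your deployment); it simply counts established TCP connections to that port.

```python
import psutil

OPENERP_PORT = 8069  # XML-RPC default in many installs; verify yours

def count_client_connections(port=OPENERP_PORT):
    """Count established TCP connections whose local port is the OpenERP port."""
    return sum(
        1
        for conn in psutil.net_connections(kind="tcp")
        if conn.laddr and conn.laddr.port == port and conn.status == psutil.CONN_ESTABLISHED
    )

if __name__ == "__main__":
    # May need elevated privileges to see connections owned by other users.
    print("established connections to port %d: %d" % (OPENERP_PORT, count_client_connections()))
```

Run periodically (e.g. from cron), this gives a simple time series of concurrent client connections to correlate with CPU and memory usage.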
We are building a reporting app in Laravel that needs to fetch user data from a third-party server that allows only 1 request per second.
We need to fetch 100K to 1,000K rows per user, and we can fetch at most 250 rows per request.
So the restrictions are:
1. We can send 1 request per second
2. We can fetch at most 250 rows per request
So it takes 400-4,000 requests/jobs to fetch one user's data (at 1 request per second, that is roughly 7 minutes to over an hour per user), so loading data for multiple users is very time-consuming and the server gets slow.
Now we are planning to load the data using multiple servers, say 4-10, so that with 10 servers we can send 10 requests per second.
How can we design the system and process jobs from multiple servers?
Is it possible to use a dedicated server for hosting Redis and connect to that Redis server from multiple servers and execute jobs? Can any conflict/race-condition happen?
Any hint or prior experience related to this would be really helpful.
The short answer is yes, this is absolutely possible and is something I've implemented in production apps many times before.
Redis is just like any other service: it can run anywhere, with clients connecting to it from anywhere. It's all up to your server configuration to dictate how exactly that happens (adding passwords, configuring spiped, limiting access via the firewall, etc.). I'd recommend reading up on the documentation in the Administration section here: https://redis.io/documentation
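For illustration, connecting to a dedicated Redis host from a worker is just a matter of pointing the client at it. Here is a minimal sketch in Python with redis-py (in Laravel the equivalent host/port/password settings live in config/database.php); the host name and password are placeholders.

```python
import redis

r = redis.Redis(
    host="redis.internal.example.com",  # the dedicated Redis box, reachable from all workers
    port=6379,
    password="change-me",               # matches `requirepass` in redis.conf
    socket_timeout=5,
)
r.ping()  # raises an exception if the worker can't reach or authenticate to the server
```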
Also, when you do make the move to a dedicated Redis host, with multiple clients accessing it, you'll likely want to look into having more than just one Redis server running for reliability, high availability, etc. Redis has efficient and easy replication available with a few simple configuration commands, which you can read more about here: https://redis.io/topics/replication
Last thing on Redis: if you do end up implementing a master-slave setup, you may want to look into high availability and auto-failover in case your master instance goes down. Redis has a really great utility built in that can monitor your master and slaves, detect when the master is down, and automatically reconfigure your servers to promote one of the slaves to the new master. The utility is called Redis Sentinel, and you can read about it here: https://redis.io/topics/sentinel
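As a hedged sketch of what Sentinel looks like from the client side (again Python/redis-py purely for illustration; the sentinel host names and the master name "mymaster" are placeholders), workers ask Sentinel for the current master so they keep working after a failover:

```python
from redis.sentinel import Sentinel

sentinel = Sentinel(
    [("sentinel-1.internal", 26379), ("sentinel-2.internal", 26379)],
    socket_timeout=0.5,
)
master = sentinel.master_for("mymaster", socket_timeout=0.5)   # use for writes
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # can be used for reads

master.set("jobs:last-dispatched", "user-42")
print(replica.get("jobs:last-dispatched"))
```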
For your question about race conditions, it depends on how exactly you write the jobs that are pushed onto the queue. For your use case it doesn't sound like this would be too much of an issue, but it really depends on the constraints of the third-party system. Either way, if you are subject to a race condition, you can still implement a solution for it, but you would likely need to use something like a Redis lock (https://redis.io/topics/distlock). Taylor recently added a new feature to the upcoming Laravel 5.6 that I believe implements a version of the Redis lock in the scheduler (https://medium.com/@taylorotwell/laravel-5-6-preview-single-server-scheduling-54df8e0e139b). You can look into how that was implemented and adapt it for your use case if you end up needing it.
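To make the idea concrete, here is a hedged sketch of the single-instance SET NX PX locking pattern that those links describe (Python/redis-py for illustration only; key names, timings, and the host are placeholders, and the Redlock article covers the multi-node variant):

```python
import time
import uuid
import redis

r = redis.Redis(host="redis.internal.example.com", port=6379)

def acquire_lock(name, ttl_ms=2000):
    """Try to take the lock; return a token if acquired, None otherwise."""
    token = str(uuid.uuid4())
    if r.set("lock:" + name, token, nx=True, px=ttl_ms):
        return token
    return None

def release_lock(name, token):
    """Release only if we still own the lock (compare-and-delete via Lua)."""
    script = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
    else
        return 0
    end
    """
    r.eval(script, 1, "lock:" + name, token)

# Example: ensure only one worker talks to the third-party API for a given
# user at any moment, regardless of which server the job runs on.
token = acquire_lock("third-party-api:user-42")
if token:
    try:
        pass  # perform the rate-limited request here
    finally:
        release_lock("third-party-api:user-42", token)
else:
    time.sleep(0.1)  # another worker holds the lock; retry or requeue the job
```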
I'm doing performance testing of a native application on Windows, and I need to calculate how much more internet traffic the new application version produces compared to the previous version, because the application is meant to work in an environment with a limited internet connection.
Fiddler displays only HTTP and FTP requests, and only those that were sent through its proxy. In theory the application can bypass the proxy and use other protocols or raw sockets.
Resource Monitor seems to contain only the average network activity for the last minute (Total B/sec). That is not enough for me, because the traffic produced by the application is not constant.
The network-related performance counters don't include anything relevant to look at.
TCPView for some reason does not show information for some processes. It displays traffic per connection rather than per application, and when a connection is closed the information is lost.
After more detailed research I found that Sysinternals Process Explorer looks like an appropriate tool for estimating internet traffic. You can add the Network Send Bytes and Network Receive Bytes columns to the process table and manually calculate the difference between their values at the boundaries of the time range you are interested in. For this to work you need to start Process Explorer as administrator.
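Process Explorer's per-process columns have no scripting API that I know of, but if the test machine is otherwise idle you can approximate the same before/after measurement with system-wide counters. A minimal sketch using Python's psutil (an assumption, not part of the original workflow; note these are machine-wide numbers, not per-process):

```python
import psutil

start = psutil.net_io_counters()
input("Run the test scenario, then press Enter...")
end = psutil.net_io_counters()

print("bytes sent:    ", end.bytes_sent - start.bytes_sent)
print("bytes received:", end.bytes_recv - start.bytes_recv)
```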
I have a weird problem. I'm using MiniProfiler and it's great, with no problem whatsoever on my local machine, but it behaves differently on our testing server. It generates many extra queries to mini-profiler-resources, and the number of queries appears random: somewhere between 8 and 22 extra calls.
The testing and local machines use basically the same data. We are using MVC 3 and RavenDB (with the RavenDB MiniProfiler plugin).
I would be happy to get any suggestions as to what it could possibly be.
Thanks.
It turned out that our load balancer was hiding users' IP addresses behind its own. Additionally, we had a few services constantly sending uptime requests, which because of the load balancer were all identified by the same IP.
MiniProfiler by default stores request profiling results per IP. These results are read asynchronously by a client-side request and are only cleared once they have been read. That meant I was getting all the profiling results produced by the uptime services; they were not mine, but since we shared the same IP they were identified as mine.
Possible solutions:
Ensure that your IIS sees each user's unique IP (for example, by having the load balancer forward the original client address)
MiniProfiler provides an extensibility point, IUserProvider (https://github.com/SamSaffron/MiniProfiler/blob/master/StackExchange.Profiling/IUserProvider.cs), so if necessary we can distinguish users in some other way. GetUser receives the HttpRequest as an argument, so one option, for example, is to read the session id from the cookies passed in the HttpRequest.
I am developing a social network in ASP.NET MVC 3. Every user must have the ability to see which people are currently connected.
What is the best way to do this?
I added a flag in the table Contact in my database, and I set it to true when the user logs in and set it to false when he logs out.
But the problem with this solution is that when the user closes the browser without logging out, he will still appear connected.
The only way to truly know that a user is currently connected is to maintain some sort of connection between the user and the server. Two options immediately come to mind:
Use javascript to periodically call your server using ajax. You would have a special endpoint on your server that would be used to update a "last connected time" status, and you would have a second endpoint for users to poll to see who is online.
Use a websocket to maintain a persistent connection with your server
Option 1 should be fairly easy to implement. The main thing to keep in mind is that this will increase the number of requests coming into your server, and you will have to plan accordingly in order to handle the traffic this could generate. You do have some control over the load by configuring how often the JavaScript timer calls back to your server.
Option 2 could be a little more involved if you did this without library support. Of course there are libraries out there such as SignalR that make this really easy to do. This also has an impact on the performance of your site since each user will be maintaining a persistent connection. The advantage with this approach is that it reduces the need for polling like option 1 does. If you use this approach it would also be very easy to push a message to user A that user B has gone offline.
I guess I should also mention a really easy third option. If your site is pretty interactive, you could just track the last time each user made a request to your site. This of course may not give you enough accuracy to determine whether a user is truly "connected".
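The bookkeeping behind options 1 and 3 is the same "last seen" idea. Here is a framework-agnostic sketch in Python (the actual app is ASP.NET MVC, so treat this purely as pseudocode for the logic; the 60-second window is an assumption, and in a web farm you would keep this state in the database or a cache rather than in process memory):

```python
import time

ONLINE_WINDOW_SECONDS = 60   # assumption: "connected" = seen in the last minute
_last_seen = {}              # user id -> timestamp of their last heartbeat/request

def touch(user_id):
    """Called by the AJAX heartbeat endpoint (or on every authenticated request)."""
    _last_seen[user_id] = time.time()

def online_users():
    """Return the users seen within the online window."""
    cutoff = time.time() - ONLINE_WINDOW_SECONDS
    return [uid for uid, ts in _last_seen.items() if ts >= cutoff]
```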
I have a site that has to crawl different sites to aggregate information. When the crawling scripts are running, the site's speed slows down. I have done as much as possible to optimize the crawling, but it is really CPU- and RAM-intensive. These crawls have to occur based on some user action (e.g. a search). It is not an option to "pre-crawl" the information, as the information is time-sensitive.
What are the general strategies I can use to solve this? Here are 2 of my ideas:
Get more CPU and RAM on current server
Offload these processing intensive scripts on a separate physical server
I'm wondering about cloud computing, but don't have any experience in it. Suggestions?
You've already identified the options. "Cloud computing" doesn't mean anything beyond being able to quickly allocate a VPS with hourly pricing. It's the same as buying another physical server, except without waiting for the host to put it online and e-mail you access info, and without a monthly commitment. You still have to write your application to make use of multiple servers, write code to "scale up" or "scale down" as needed (purchase or terminate virtual servers, and automatically start whatever programs you need on them), and properly manage the servers (install and maintain an OS, keep packages updated with security fixes), etc.
You could try to make the action asynchronous:
User submits a search.
System displays "The system is currently searching the information based on your criteria and you will be notified shortly". System handles the user request at the mean time.
Since the user isn't waiting for a result page, they are free to browse around or do other things on your website instead of having their screen locked up.
When the result is generated, the system notifies the user that the search is done and provides a link to view the result. This can be done by sending an email notification, popping up a dialog box, or sliding a notification message down from the menu bar (basically anything that catches the user's attention).
It is wise to have a separate machine run these processing-intensive scripts so that they do not slow down the main application server, especially when you have lots of users submitting searches.
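A hedged sketch of how that asynchronous hand-off could look, using a plain Redis list as a minimal job queue (Python purely for illustration; the host, key names, and notification mechanism are placeholders, and a real system would likely use an established queue/worker library):

```python
import json
import redis

r = redis.Redis(host="redis.internal.example.com")

# Web server: enqueue the search and return immediately.
def submit_search(user_id, criteria):
    r.rpush("crawl:jobs", json.dumps({"user_id": user_id, "criteria": criteria}))
    # ...then render "we are searching; you will be notified shortly"

# Worker on a separate machine: consume jobs, run the heavy crawl, notify.
def worker_loop():
    while True:
        _, raw = r.blpop("crawl:jobs")             # blocks until a job arrives
        job = json.loads(raw)
        result = run_crawl(job["criteria"])        # the CPU/RAM-heavy part
        result_key = "crawl:result:%s" % job["user_id"]
        r.set(result_key, json.dumps(result))
        notify_user(job["user_id"], result_key)    # email, dialog, menu-bar message...

def run_crawl(criteria):
    raise NotImplementedError  # plug in the existing crawling scripts here

def notify_user(user_id, result_key):
    raise NotImplementedError  # plug in the notification of your choice
```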