How can I set shared users to share bandwidth? When 2 users are logged on to the hotspot with the same username, they each get the full bandwidth specified in the external RADIUS. How can this be changed so that if 2 users are logged in, the bandwidth is split 50% to A and 50% to B?
Somebody told me to use a script, but I don't understand how to use scripts on MikroTik.
I see a few annoying logic issues with scripting, if it's even possible... I would personally recommend sending a CoA (Change of Authorization) request to the MikroTik. This is better because 1) you have a real language to work with, and 2) it's a very similar process to the RADIUS handshake that set the rate limit in the first place.
http://wiki.mikrotik.com/wiki/Manual:RADIUS_Client#Change_of_Authorization
I think a script would have to use queues rather than the hotspot rate limit, since the limit is set from the RADIUS request, not the "user" profile. You would also have to track when an account's limit has already been adjusted, how many times, and what to do when "A" logs off and only "B" is online... lots of room for error.
With CoA you can just query for the original rate limit, divide it by the number of active sessions in radacct, and send out the new values.
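Here is a rough sketch of that flow, assuming a FreeRADIUS backend with its standard radacct SQL table and the radclient tool shipped with FreeRADIUS (with the MikroTik dictionary available), and the router accepting CoA on UDP 3799. The host, secret, and original limit below are placeholders:

```python
import subprocess
import pymysql

COA_TARGET = "10.0.0.1:3799"  # hotspot router and CoA port (placeholder)
COA_SECRET = "coasecret"      # CoA shared secret (placeholder)
ORIGINAL_LIMIT_KBPS = 2048    # per-username limit from RADIUS (placeholder)

db = pymysql.connect(host="localhost", user="radius",
                     password="radpass", database="radius")

def active_sessions(username):
    """Count open sessions for this username in radacct."""
    with db.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM radacct "
                    "WHERE username = %s AND acctstoptime IS NULL",
                    (username,))
        return cur.fetchone()[0]

def rebalance(username):
    """Split the original rate limit evenly across active sessions."""
    n = active_sessions(username)
    if n == 0:
        return
    share = ORIGINAL_LIMIT_KBPS // n
    # radclient reads "Attribute = value" pairs from stdin
    attrs = (f'User-Name = "{username}"\n'
             f'Mikrotik-Rate-Limit = "{share}k/{share}k"\n')
    subprocess.run(["radclient", "-x", COA_TARGET, "coa", COA_SECRET],
                   input=attrs.encode(), check=True)

rebalance("alice")
```

Something like this could be triggered from an accounting hook on Start/Stop/Interim-Update packets, so the split is recalculated whenever a session appears or disappears.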
Does anyone know what the "sf_max_daily_api_calls" parameter in Heroku mappings does? I don't want to assume it's a daily limit for write operations per object, and I cannot find an explanation.
I tried to open a ticket with Heroku, but in their support form the "Which application?" drop-down is required, and none of the support categories offer anything to choose there; the only option is "Please choose..."
I tried to find any reference to this field and can't - I can only see it used in Heroku's Quick Start guide, but without an explanation. I have a very busy object I'm working on, read/write, and want to understand any limitations I need to account for.
Salesforce orgs have a rolling 24-hour limit on daily API calls. The limit is generally very generous in test orgs (sandboxes), 5M calls, because you can make stupid mistakes there. In production it's lower. A bit counterintuitive, but it protects their resources and forces you to write optimised code/integrations...
You can see your limit in Setup -> Company Information. There's a formula in the documentation; roughly speaking, you gain more of that limit with every user license you purchase (more for "real" internal users, less for community users), same as with data storage limits.
Also, every API call is supposed to return the current usage (in a special tag in the SOAP API, in a header in the REST API), so I'm not sure why you'd have to hardcode anything...
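For example, with the REST API the usage comes back in the Sforce-Limit-Info header on every response; a minimal sketch (instance URL and token are placeholders):

```python
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # placeholder
TOKEN = "00D...access_token"                         # placeholder

resp = requests.get(
    f"{INSTANCE}/services/data/v58.0/sobjects/Account/describe",
    headers={"Authorization": f"Bearer {TOKEN}"})

# e.g. "api-usage=18/15000" -> 18 of 15000 daily calls used
print(resp.headers.get("Sforce-Limit-Info"))
```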
If you write your operations right, the limit can be very generous. No idea how that Heroku Connect works internally. Ideally you'd spot some mention of "Bulk API 2.0" in its documentation, or try to find out whether it operates synchronously or asynchronously.
A normal old-school synchronous update via the SOAP API lets you process 200 records at a time, costing 1 API call. The REST Bulk API accepts CSV/JSON/XML of up to 10K records and processes them asynchronously; you poll for the "is it done yet" result... So starting a job, uploading the files, committing the job and then checking only, say, once a minute can easily total 4 API calls, and you can process millions of records before hitting the limit.
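A sketch of that start/upload/commit/poll sequence using Bulk API 2.0 over REST (instance URL, token and the CSV content are placeholders):

```python
import time
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # placeholder
HEADERS = {"Authorization": "Bearer 00D...token",    # placeholder
           "Content-Type": "application/json"}

# 1 call: create the ingest job
job = requests.post(f"{INSTANCE}/services/data/v58.0/jobs/ingest",
                    headers=HEADERS,
                    json={"object": "Account", "operation": "update"}).json()

# 1 call: upload the whole CSV in one go
csv_data = "Id,Name\n001xx000003DGb1AAG,Acme Updated\n"
requests.put(f"{INSTANCE}/{job['contentUrl']}",
             headers={**HEADERS, "Content-Type": "text/csv"},
             data=csv_data)

# 1 call: mark the upload complete so Salesforce starts processing
requests.patch(f"{INSTANCE}/services/data/v58.0/jobs/ingest/{job['id']}",
               headers=HEADERS, json={"state": "UploadComplete"})

# 1 call per check, polling once a minute
while True:
    state = requests.get(
        f"{INSTANCE}/services/data/v58.0/jobs/ingest/{job['id']}",
        headers=HEADERS).json()["state"]
    if state in ("JobComplete", "Failed", "Aborted"):
        break
    time.sleep(60)
```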
When all else fails and you've exhausted your options, can't optimise any further, can't purchase more user licenses... I think they sell "packets" of additional API call capacity; contact your account representative. But there are lots of things you can try before that, not the least of them being setting up a warning when you hit, say, a 30% threshold.
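For the warning idea, the REST /limits resource reports DailyApiRequests; a minimal sketch of a threshold check (instance URL and token are placeholders):

```python
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # placeholder
TOKEN = "00D...token"                                # placeholder

limits = requests.get(f"{INSTANCE}/services/data/v58.0/limits",
                      headers={"Authorization": f"Bearer {TOKEN}"}).json()

api = limits["DailyApiRequests"]
used = api["Max"] - api["Remaining"]
if used / api["Max"] >= 0.30:
    print(f"Warning: {used}/{api['Max']} daily API calls used")
```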
I have an application with 10M users. The application has access to the users' Google Health data. I want to periodically read/refresh the users' data using Google APIs.
The challenge I'm facing is how resource-intensive this task is. Since Google does not provide any callback for new data, I'll be doing a background sync (every 30 mins). All users would be picked and added to a queue, which would then be processed sequentially (depending upon the number of worker nodes).
Now for 10M users being refreshed every 30 mins, I need a lot of worker nodes.
Each user request takes around 1 sec, including network calls.
In 30 mins (1800 seconds), one node can therefore process 1800 users.
To process 10M users, I need 10,000,000 / 1800 ≈ 5.5K nodes.
Quite expensive, both monetarily and operationally.
Then I thought of using Lambdas. However, Lambda requires a NAT with an internet gateway to access the public internet. Relatively speaking, though, it's very cheap.
I want to understand if there's any other possible solution with respect to this scale.
Without knowing more about your architecture and the Google APIs, it is difficult to make a recommendation.
Firstly, I would see if Google offers bulk export functionality, then batch up the user requests. So instead of making 1 request per user, you could make, say, 1 request for 100k users. This would reduce the overhead associated with connecting and with processing/parsing the message metadata.
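The batching itself is trivial in code; the sketch below chunks user IDs into large batches. The bulk endpoint here is purely hypothetical, so check what the actual Google API exposes:

```python
import requests

BATCH_SIZE = 100_000  # per the example above

def chunks(seq, size):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def refresh_all(user_ids):
    for batch in chunks(user_ids, BATCH_SIZE):
        # one request covers the whole batch -- endpoint is hypothetical,
        # substitute whatever bulk/batch call the real API offers
        requests.post("https://api.example.com/bulk-export",
                      json={"users": batch}, timeout=300)
```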
Secondly, I'd look to see if I could reduce the processing time; for example, an interpreted language like Python is in a lot of cases much slower than a compiled language like C# or Go. Or maybe a library or algorithm can be replaced with something more optimal.
Without more details of your specific setup, it's hard to offer more specific advice.
I'm trying to use data from Google Analytics for an existing website to load test a new website. In our busiest month, we had 8361 page requests over an hour. Should I get a list of all the URLs for these page requests and feed them to JMeter? Would that be a sensible approach? I'm hoping to compare the page response times against the existing website.
If you need to do this very quickly, say you have less than an hour for scripting, then this approach will do for checking that there are no major differences between the 2 instances.
If you would like to go deeper:
8361 requests per hour works out to about 2.3 requests per second, so it doesn't make much sense to replicate this load pattern as-is; I'm more than sure that your application will survive such a light load.
Performance testing is not only about hitting URLs from a list and measuring response times; normally the main questions which need to be answered are:
how many concurrent users my application can support while providing acceptable response times (at this point you may also be interested in requests/second)
what happens when the load exceeds that threshold, what types of errors start occurring, and what the impact is
does the application recover when the load returns to normal
what the bottleneck is (e.g. lack of RAM, slow DB queries, low network bandwidth on the server/router, whatever)
So the options are:
If you need a "quick and dirty" solution, you can use the list of URLs from Google Analytics with e.g. the CSV Data Set Config or the Access Log Sampler, or parse your application logs to replay production traffic with JMeter (see the sketch after this list).
A better approach would be checking Google Analytics to identify which groups of users you have and their behavioural patterns, e.g. X% of unauthenticated users are browsing the site, Y% of authenticated users are searching, Z% of users are doing checkout, etc. Then properly simulate each of these groups using separate JMeter Thread Groups, keeping in mind cookies, headers, cache and think times. Once you have this form of test, gradually and proportionally increase the number of virtual users and monitor how response times grow with the number of virtual users until you hit any form of bottleneck.
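For the "quick and dirty" option, a minimal outline might look like this (file and host names are placeholders; CSV Data Set Config, the ${__P()} property function and the -n/-t/-l/-J flags are standard JMeter):

```
urls.csv (one path per line, exported from Google Analytics):
  /home
  /products/42
  /checkout

Test plan outline:
  Thread Group            (threads/loops sized to the load you want)
    CSV Data Set Config   Filename=urls.csv, Variable Names=url
    HTTP Request          Server Name=${__P(host)}, Path=${url}

Run headless against each site and compare the .jtl results:
  jmeter -n -t plan.jmx -Jhost=old.example.com -l old-site.jtl
  jmeter -n -t plan.jmx -Jhost=new.example.com -l new-site.jtl
```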
The "sensible approach" would be to know the profile, the pattern of your load.
For that, it's excellent that you already have this data.
Yes, you can feed it in as-is, but that would be the quick & dirty approach; getting the data analysed, distilling patterns out of it and applying them to your test plan seems smarter.
I want to write a server-side script/daemon which would monitor multiple email accounts (the number might become quite big) and send push notifications. My conceptual idea so far is: have a database with accounts and passwords, iterate through it, check if there are any new messages, and then react by doing something with the email and sending a push notification to the client's mobile device. My biggest concern is performance. Looping through thousands of accounts doesn't seem right to me, but I can't come up with a better solution. Registering an observer for each account doesn't sound any better...
Any ideas? I'm open to any language, scripting or programming. Not asking for code, just trying to wrap my head around the concept.
Thank you!
You could do it in blocks. Going one by one through all your database entries may take a long time if we are talking about thousands of accounts; maybe you could divide it across several scripts or script executions, taking, let's say, blocks of 100 accounts. You would then have an environment like this: script/thread 1 checks accounts 1 to 100, script/thread 2 checks accounts 101 to 200...
This could be done with threads in the same script/program, with different scripts, or with a "wrapper" that calls the script as many times as needed, depending on the number of entries/blocks. You may need to keep an eye on your server resources, but the throughput of the checks should improve.
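A sketch of the threaded variant in Python, polling mailboxes over IMAP; the accounts table, block size and the push notifier are placeholders:

```python
import imaplib
import sqlite3
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 100   # accounts per block, as suggested above
WORKERS = 10       # parallel checkers

def fetch_block(offset):
    # one short-lived connection per call keeps sqlite3 thread-safe
    db = sqlite3.connect("accounts.db")
    rows = db.execute(
        "SELECT host, login, password FROM accounts LIMIT ? OFFSET ?",
        (BLOCK_SIZE, offset)).fetchall()
    db.close()
    return rows

def send_push(account, count):
    print(f"notify {account}: {count} new messages")  # placeholder stub

def check_block(offset):
    for host, login, password in fetch_block(offset):
        try:
            imap = imaplib.IMAP4_SSL(host)
            imap.login(login, password)
            imap.select("INBOX", readonly=True)
            _, data = imap.search(None, "UNSEEN")
            if data[0]:
                send_push(login, len(data[0].split()))
            imap.logout()
        except Exception as exc:
            print(f"{login}: {exc}")  # one bad account shouldn't kill the block

if __name__ == "__main__":
    total = sqlite3.connect("accounts.db").execute(
        "SELECT COUNT(*) FROM accounts").fetchone()[0]
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        pool.map(check_block, range(0, total, BLOCK_SIZE))
```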
Hope this helps.
I'm working on my enterprise SaaS application, and some of my users would like to be charged on a per-seat basis.
I was wondering how to make sure that access will indeed be limited. I can see right now that people with the same login/password are logging in from different IP addresses and different user agents at the same time, even though the company has paid for only one seat.
What would be the best way to implement the limitation from business and technical perspectives? I don't want to be too strict; at the same time, I want companies to pay for the number of seats they actually need.
Don't filter by user agent; I think that would be too strict, as some people have a variety of browsers installed. Filtering by IP could also be tricky, as some users have a dynamic IP that changes at regular intervals.
One idea would be to have users install a browser extension that generates a unique ID from something on the operating system, maybe an HDD volume number or the Windows serial key, anything that will be unique to that computer.
Once you have this unique ID, track it in the back end every time the user logs in. If the user exceeds a certain number of seats, you can either block the account or contact them first.
It would also be a good idea to allow the user a certain amount of freedom: if they have one seat, the back end could allow maybe 2 seats within a month (rolling period), in case the user buys a new PC or installs a new HDD.
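On the back-end side, the tracking could be as simple as counting distinct device IDs per account over a rolling window, with the grace margin suggested above; a sketch, with storage kept in memory for brevity:

```python
import time

GRACE_SEATS = 1            # extra seats tolerated (rolling allowance)
WINDOW = 30 * 24 * 3600    # rolling 30-day period, in seconds

# (account, device_id) -> last_seen timestamp; use a real store in practice
seen = {}

def record_login(account, device_id, paid_seats):
    now = time.time()
    seen[(account, device_id)] = now
    # distinct devices active for this account inside the window
    devices = {d for (a, d), ts in seen.items()
               if a == account and ts >= now - WINDOW}
    if len(devices) > paid_seats + GRACE_SEATS:
        return "over_limit"   # block the account or contact the customer
    return "ok"

# e.g. from the login handler, with device_id supplied by the extension:
print(record_login("acme-corp", "hdd-volume-1234", paid_seats=1))
```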
Using IP seems a bad idea: people on a LAN will (usually) all appear to have the same IP.
Assuming PHP has no API to inspect sessions other than the current one, one idea is:
if you're not already doing it, for each login cookie you issue, also issue a unique ID (a UUID will serve your purpose)
store those IDs somewhere (in Java you could store them in the application context; in PHP you'll probably need a DB table for that, I'm no PHP expert), together with two timestamps: "session start" and "last activity"
at each request, record the current timestamp in "last activity"
Then, when you get a new request, count how many other active sessions have
last activity >= current session start
last activity >= now - session TTL (only needed if you can't prune expired sessions in a timely fashion)
That should give you the number of occupied seats.
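Sketching that logic in Python for illustration (the thread mentions PHP and Java; the table layout and TTL are placeholders):

```python
import sqlite3
import time
import uuid

SESSION_TTL = 30 * 60  # seconds; match your cookie/session lifetime

db = sqlite3.connect("sessions.db", isolation_level=None)  # autocommit
db.execute("""CREATE TABLE IF NOT EXISTS sessions (
    id TEXT PRIMARY KEY, account TEXT,
    session_start REAL, last_activity REAL)""")

def login(account):
    # issue a unique id alongside the login cookie
    sid = str(uuid.uuid4())
    now = time.time()
    db.execute("INSERT INTO sessions VALUES (?, ?, ?, ?)",
               (sid, account, now, now))
    return sid

def on_request(sid):
    # at each request, refresh "last activity"
    db.execute("UPDATE sessions SET last_activity = ? WHERE id = ?",
               (time.time(), sid))

def occupied_seats(sid, account):
    (start,) = db.execute("SELECT session_start FROM sessions WHERE id = ?",
                          (sid,)).fetchone()
    cutoff = time.time() - SESSION_TTL
    (count,) = db.execute(
        "SELECT COUNT(*) FROM sessions WHERE account = ? "
        "AND last_activity >= ? AND last_activity >= ?",
        (account, start, cutoff)).fetchone()
    return count  # includes the current session itself
```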
Note that you will not notice a violation of the number of seats until after the surplus user has logged in, and precisely not until you see activity in some other session. I don't see any way around this, since you don't know exactly when a seat becomes vacant (well, you might say it's vacant only when its session has expired, but that seems unfair).