I'm new to Heroku and was planning to deploy all my applications there, until I did some research on package pricing and found that free apps get around 550-1000 dyno hours per month, while the standard packages cost $25-$50 per dyno per month. Now here lies the question.
Does this mean that, for the Standard-1X package for instance, if I consumed 1000 dyno hours per month I would be billed $25 (per dyno) × 1000 dyno hours = $25,000 per month?
Every dyno type has a different fixed price, and that is what you pay every month, for example $25 for Standard-1X. That flat amount is what you are billed every month (no more); you are not billed per hour.
The hours come into play when you select a free dyno:
the first 550 hours are free; after that the dyno stops working (next month you get the same quota and the dyno can restart)
you can extend this to 1000 free hours if you register a valid credit card
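To make the arithmetic concrete, here is a tiny illustrative sketch of the two models described above; the prices and hour quotas are just the figures quoted in this answer, nothing more:

    # Illustrative only: standard dynos are billed at a flat monthly price per dyno,
    # not per dyno hour; free dynos draw from a monthly hour quota instead.
    STANDARD_1X_MONTHLY_USD = 25   # flat price per Standard-1X dyno per month
    FREE_HOURS_VERIFIED = 1000     # free dyno hours with a credit card registered

    def standard_monthly_cost(num_dynos: int) -> int:
        """Monthly cost for Standard-1X dynos: price times dynos; hours don't enter into it."""
        return STANDARD_1X_MONTHLY_USD * num_dynos

    def free_hours_remaining(hours_used: float) -> float:
        """Free dynos simply stop once the monthly hour quota is used up."""
        return max(0.0, FREE_HOURS_VERIFIED - hours_used)

    print(standard_monthly_cost(1))    # 25 -- not 25 * 1000 = 25,000
    print(free_hours_remaining(550))   # 450.0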
In Jira, is there a Time Report where you can see how many hours an employee spent on a task? For a monthly report, I would like to see all tasks together, the total time spent day by day on each task, and the total time spent overall, so that I can report to the customer how much time each employee spent.
I could not find proper documentation for such a feature. I am hoping to get help: if such a feature exists, how do I set it up and use it?
Let's consider a meeting request where I need to find out whether a person is available in a particular time slot.
Example: I need to check if a person is available for a meeting from 3:00 to 3:30. So if the person is busy from 2:30 to 3:01, that means the person is unavailable.
Question: how can I use the Redis cache here?
Do I need to cache every minute of a user's schedule and then let the application decide, or is there another way?
I'm not sure if Redis is the data store that I'd choose for this task.
Still, if you're willing to work at a 1-minute resolution, you could store the minutes in which a person is occupied in a Sorted Set and then check whether a time range overlaps that person's scheduled appointments with the ZINTER command.
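For illustration, here is a minimal redis-py sketch of that idea, assuming a minute-of-day encoding and using ZINTERSTORE (the store variant of ZINTER, which also works on older Redis versions). The key names and the temporary-set approach are my own choices, not something from the question:

    import redis

    r = redis.Redis()

    def mark_busy(person: str, start_min: int, end_min: int) -> None:
        """Record every occupied minute (as minute-of-day) in the person's sorted set."""
        r.zadd(f"busy:{person}", {str(m): m for m in range(start_min, end_min)})

    def is_available(person: str, start_min: int, end_min: int) -> bool:
        """Build a temporary set for the requested slot and intersect it with the busy set."""
        slot_key = f"slot:{person}:{start_min}:{end_min}"
        r.zadd(slot_key, {str(m): m for m in range(start_min, end_min)})
        overlap = r.zinterstore("tmp:overlap", [f"busy:{person}", slot_key])
        r.delete(slot_key, "tmp:overlap")
        return overlap == 0  # no shared minutes -> the slot is free

    # Busy 2:30-3:01, requesting 3:00-3:30 -> one shared minute, so unavailable.
    mark_busy("alice", 14 * 60 + 30, 15 * 60 + 1)
    print(is_available("alice", 15 * 60, 15 * 60 + 30))  # False

Since the scores are the minutes themselves, a single ZCOUNT busy:alice 900 929 would answer the same question without the temporary set.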
I would like to configure the SonarQube Leak Period to match our sprints (14 days). We don't release after each sprint, and our branch is always "develop", so I can't key off of a release.
I know that I can configure X number of days, but I don't want a rolling window over a 14-day period... I would like it to do the delta by comparing each of the 14 days to Day 1. So, Day 2 <> Day 1, Day 3 <> Day 1, etc. Then on the 15th day it would reset for the start of the new sprint.
How can I configure SonarQube to always start the leak period with the start of a new sprint?
Because you don't want a rolling 14-day period, you'll have to manually re-configure the leak period to the start date of the new sprint every two weeks.
Alternatively, you could jigger your versions to something like
3.14-sprintAlpha
3.14-sprintBeta
...
And use the previous_version leak period setting.
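For instance, if a CI job drives the analysis, it could pass a per-sprint version so that previous_version rolls the leak period over at each sprint boundary. A rough sketch; the sprint start date, the naming scheme, and the assumption that sonar-scanner is on the PATH are all placeholders of mine:

    # Sketch: derive a per-sprint version string and hand it to the scanner, so the
    # "previous_version" leak period resets whenever the sprint number ticks over.
    import subprocess
    from datetime import date

    SPRINT_ZERO = date(2017, 1, 2)   # placeholder: first day of your first sprint
    SPRINT_LENGTH_DAYS = 14

    def current_sprint_version(base: str = "3.14") -> str:
        sprint_no = (date.today() - SPRINT_ZERO).days // SPRINT_LENGTH_DAYS
        return f"{base}-sprint{sprint_no}"

    subprocess.run(
        ["sonar-scanner", f"-Dsonar.projectVersion={current_sprint_version()}"],
        check=True,
    )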
I am using the Google Search Console URL Testing Tools API, and I have a problem understanding the API's quota limits. It says:
Project per-second limit per 100 seconds = 1
User per-second limit per 100 seconds per user = 1
What does that mean?
Most heavily used APIs (Google, Facebook, etc.) have short-term limits and long-term limits for better control over traffic. This allows developers to make many requests (e.g. 20,000 per day) but prevents flooding if someone were to, say, send 1,000 requests in one second, which could clog the API endpoint.
What you have in your Google console is:
Project per-second limit per 100 seconds = 1
That means you can make 1 query per 100 seconds in each project.
And:
User per-second limit per 100 seconds per user = 1
That means you can make 1 query per 100 seconds for each user connected to the project.
Those two limit rules put together don't make much sense, because the second rule will never be triggered (both allow 1 request per 100 seconds, but the first one applies to the 'higher' resource and will therefore block requests first).
You can see an example of multiple limits in, for instance, the Analytics API, where we have:
Queries per day = 50000
A big overall limit on queries per day.
Queries per 100 seconds per user = 100
A small limit per 100 seconds per user, to prevent excessive peaks of requests from a single user.
Queries per 100 seconds = 2000
A medium limit per 100 seconds, project-wide.
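In practice, a quota of 1 request per 100 seconds just means the client has to pace its calls. A minimal, generic pacing sketch; the endpoint URL and parameters below are placeholders, not the real URL Testing Tools API call:

    import time
    import requests

    MIN_INTERVAL = 100.0  # seconds between requests, matching the 1-per-100-seconds quota
    _last_call = 0.0

    def paced_get(url: str, **kwargs) -> requests.Response:
        """Block until MIN_INTERVAL has passed since the previous request, then call."""
        global _last_call
        wait = MIN_INTERVAL - (time.monotonic() - _last_call)
        if wait > 0:
            time.sleep(wait)
        _last_call = time.monotonic()
        return requests.get(url, **kwargs)

    # Placeholder endpoint -- substitute the actual API request you are making.
    response = paced_get("https://example.com/api/check", params={"url": "https://example.org"})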
Suppose I have more than one Google Plus page and configure them under different brands (using the same application). In this case, will the rate limit (say 500 per minute) be shared across both pages, or does each page get 500 requests? Thanks in advance.
There are several Google quotas. These quotas apply to all Google APIs; the only real difference is the amount of quota you receive.
Queries per day 10,000
Queries per 100 seconds per user 500
Queries per 100 seconds 1,000
Queries per day is a project-wide quota. Your application, identified by the client_id and client secret you are using, can make a maximum of 10,000 requests per day.
Queries per 100 seconds per user is a speed quota, or flood protection really. Each user who has authenticated your application can make a maximum of X queries per 100 seconds; in the case above, each user can make at most 500 requests within 100 seconds.
Queries per 100 seconds, the last one, is project-wide. Your application, identified by the client_id and client secret, can make 1,000 requests in 100 seconds.
All but the user-based one can probably be increased by clicking on the pencil icon in the Google Developers Console. Depending upon the API, you may have to pay for the increase. I doubt this is the case with the Google+ pages API.
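If you do hit one of these quotas, the usual pattern is to back off and retry. A generic sketch; 403 and 429 are the status codes Google APIs commonly return for quota errors, and the URL here is a placeholder:

    import random
    import time
    import requests

    def get_with_backoff(url: str, max_retries: int = 5, **kwargs) -> requests.Response:
        """Retry with exponential backoff when the API signals a quota or rate-limit error."""
        for attempt in range(max_retries):
            resp = requests.get(url, **kwargs)
            if resp.status_code not in (403, 429):   # typical quota / rate-limit responses
                return resp
            # Wait 1s, 2s, 4s, ... plus a little jitter before trying again.
            time.sleep(2 ** attempt + random.random())
        resp.raise_for_status()
        return resp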