Stackdriver monitoring - exclude specific time range in metric

I want to create a policy that checks whether new data has been published in the last X minutes. The issue I'm facing is that we don't receive any data during the night, so it's necessary to exclude a specific time range from the metric; otherwise we get an alert every night.
Is there any way to exclude a specific time range e.g. 9 PM until 9 AM in the policy/metric?

Alert policy management currently does not include scheduling, but you may be able to achieve the result you are looking for with Cloud Scheduler, for example by disabling the policy overnight and re-enabling it in the morning, as sketched below. You may give this a try.
I also found a Public Issue Tracker entry about enabling alerts only during certain hours. It mentions that a feature request was created; any update on this request will be published in the Public Issue Tracker, or you can follow the feature request directly.
You can use any Gmail account to access the Public Issue Tracker.
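One rough, untested sketch of that Cloud Scheduler approach: expose a small handler that flips the policy's "enabled" flag through the Monitoring API, and point two Cloud Scheduler HTTP jobs at it (e.g. cron 0 21 * * * with ?enable=0 and 0 9 * * * with ?enable=1). The google/apiclient package is assumed, and the project and policy IDs below are placeholders:

    <?php
    // Rough, untested sketch: enable or disable an alert policy so it stays
    // silent between 9 PM and 9 AM. Meant to be triggered by two Cloud
    // Scheduler HTTP jobs (?enable=0 at night, ?enable=1 in the morning).
    require 'vendor/autoload.php';

    // The string "0" casts to false in PHP, so ?enable=0 disables the policy.
    $enable = (bool) ($_GET['enable'] ?? '1');

    // Placeholder identifiers; substitute your own project and policy IDs.
    $policyName = 'projects/my-project/alertPolicies/1234567890123';

    $client = new Google_Client();
    $client->useApplicationDefaultCredentials();
    $client->addScope('https://www.googleapis.com/auth/monitoring');

    $monitoring = new Google_Service_Monitoring($client);

    // Patch only the "enabled" field so the rest of the policy is untouched.
    $patch = new Google_Service_Monitoring_AlertPolicy();
    $patch->setEnabled($enable);

    $monitoring->projects_alertPolicies->patch($policyName, $patch, [
        'updateMask' => 'enabled',
    ]);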

Related

Google Tasks API - only insert taskList gets Quota Exceeded

I am using the Google Tasks API in PHP to manage my lists and tasks, but recently I've had a problem inserting a new task list; the point is that all the other calls work correctly.
Any suggestions to resolve the error?
I'll try to be as clear as possible.
1- The script I am using is the one pointed to by the endpoint; I simply call the endpoint with the parameters I provided.
2- I interface with the Tasks API for complete management of my tasks: list, create, edit and delete task lists, and list, create, edit and delete tasks. All calls work except create tasklist, which returns the Quota Exceeded error.
I checked the limits in the Google console, but they are not even at 10%. What makes the problem even harder to understand is that if I make the same call with a different account, it works correctly. So, in short, the error occurs exclusively on create tasklist with one particular email address.
I hope my problem is well explained.
As mentioned in the API documentation for Google Tasks API Pricing and Usage Limits:
The Google Tasks API has a courtesy limit of 50,000 queries per day. If you need capacity beyond this courtesy limit, you can send a request from the Quotas pane of the Google APIs Console.
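Since your console shows you are well under the daily quota, it is worth ruling out transient rate limiting first; Google's general guidance for quota errors is to retry with exponential backoff. A minimal sketch with the google/apiclient PHP library, assuming an already-authorized Google_Client in $client (the list title is illustrative):

    <?php
    // Minimal sketch: retry the task-list insert with exponential backoff to
    // rule out transient rate limiting. Assumes the google/apiclient package
    // and an already-authorized Google_Client in $client.
    require 'vendor/autoload.php';

    $service  = new Google_Service_Tasks($client);
    $taskList = new Google_Service_Tasks_TaskList(['title' => 'My new list']);

    $maxAttempts = 5;
    for ($attempt = 0; $attempt < $maxAttempts; $attempt++) {
        try {
            $created = $service->tasklists->insert($taskList);
            echo 'Created task list ' . $created->getId() . PHP_EOL;
            break;
        } catch (Google_Service_Exception $e) {
            // Only retry quota/rate errors, and give up after the last
            // attempt; a persistent 403 on one account is not something
            // backoff can fix.
            if (!in_array($e->getCode(), [403, 429]) || $attempt === $maxAttempts - 1) {
                throw $e;
            }
            // Back off 1s, 2s, 4s, 8s, plus up to 0.5s of jitter.
            usleep(((2 ** $attempt) * 1000000) + random_int(0, 500000));
        }
    }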

Laravel - Efficiently consuming large external API into database

I'm attempting to consume the PayPal API transactions endpoint.
I want to grab ALL transactions for a given account. This number could potentially be in the tens of millions. Each transaction needs to be stored in the database for processing by a queued job. I've been trying to figure out the best way to pull this many records with Laravel. PayPal limits requests to a maximum of 20 items per page.
My initial idea was to dispatch a job when a user gives me their API credentials: the job fetches the first 20 items and processes them, then dispatches another job with the next starting index, looping until it errors out. This doesn't seem to be working well, though: it causes a gateway timeout on saving those API credentials, and the request to the API eventually times out before all transactions are fetched. I should also mention that the total number of transactions is unknown, so chaining doesn't seem to be the answer, as there is no way to know how many jobs to dispatch...
Thoughts? Is getting API data best suited for a job?
Yes, a job is the way to go. I'm not familiar with the PayPal API, but it seems requests are rate limited (see the PayPal rate limiting page), so you might want to delay your API requests a bit. You could also write a class that monitors your API consumption by tracking the most recent requests you made; in the job you can then determine when to fire the next request and record it in the database.
My humble advice: please don't pull all the data. Your database will get bloated quickly, and you'll need to scale every time you add a new account; that's not an easy task.
You could dispatch the same job again at the end of the first job, having it query your current database to find the starting index for that run.
That way, even if the job errors out, you can dispatch it again and it will resume from where it previously ended.
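A minimal sketch of that self-dispatching, resume-from-the-database pattern, assuming a hypothetical PayPalClient wrapper and Transaction model (both names are illustrative, not part of any PayPal SDK):

    <?php
    // Sketch of a self-dispatching Laravel job. Each run fetches one 20-item
    // page, stores it, then dispatches itself for the next page until a short
    // page signals the end. PayPalClient and Transaction are hypothetical.

    namespace App\Jobs;

    use App\Models\Transaction;
    use App\Services\PayPalClient;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class FetchTransactionsPage implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        private const PAGE_SIZE = 20;

        public function __construct(private int $accountId) {}

        public function handle(PayPalClient $paypal): void
        {
            // Resume from wherever the database says we stopped, so a failed
            // job can simply be re-dispatched.
            $startIndex = Transaction::where('account_id', $this->accountId)->count();

            $items = $paypal->transactions($this->accountId, $startIndex, self::PAGE_SIZE);

            foreach ($items as $item) {
                Transaction::create([
                    'account_id'  => $this->accountId,
                    'external_id' => $item['id'],
                    'payload'     => json_encode($item),
                ]);
            }

            // A short page means we've reached the end; otherwise queue the
            // next page with a small delay to stay under the rate limits.
            if (count($items) === self::PAGE_SIZE) {
                self::dispatch($this->accountId)->delay(now()->addSeconds(2));
            }
        }
    }

Because the job derives its starting index from the database rather than carrying it in the payload, the total number of transactions never needs to be known up front, which sidesteps the chaining problem described in the question.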
Maybe you will need to link your app with another data engine like AWS. Anyway, I think the best idea is to create an API: pull only the most important data, indexed, and keep all the big data behind another endpoint, where you can reach it if you need to.

Better way to do an alerting job in Jenkins, or an alternative to do this

I need a better way to run my alerting code. Right now I have a script that checks the free space on AWS ECS and sends a simple notification to Slack, via the Slack API, if the free space is less than 5 GB. I run it in Jenkins on a periodic schedule, every 15 minutes. But once the notification is triggered, I want the check to pause for 4 hours so it won't fill the Slack channel with messages. So I used sleep 14400 after the condition is triggered, but this leaves a Jenkins executor waiting. Is there a better way to do this?
If you really want a better way, you should use better tools. There are many tools out there (some free) that can monitor something in a stateful manner, for example using a daemon.
Writing to a log (or a Slack channel) from Jenkins like this is essentially stateless: you cannot check whether an alarm is currently triggered or not.
Since you cannot check whether an alarm is already triggered, implementing the 'snooze' feature you describe with Jenkins gets very ugly.
In general I would recommend the Conditional BuildStep plugin to run a step only when a condition is met (i.e., the alarm is not already triggered). But since there is no way for you to poll that state, any Jenkins solution will be 'hackish', such as creating a marker file to indicate the alert is on and deleting it from another job once it is more than 4 hours old. I would suggest looking at tools better suited for the job.
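For completeness, a sketch of that marker-file 'snooze' as a small script the Jenkins job could invoke every 15 minutes (the marker path, threshold, and Slack webhook URL are placeholders, and the free-space check is simplified to the local filesystem):

    <?php
    // Sketch of the marker-file "snooze": skip alerting while a recent marker
    // exists, and expire the marker once it is older than 4 hours.
    $marker        = '/var/tmp/disk_alert.snooze'; // placeholder path
    $snoozeSeconds = 4 * 60 * 60;

    // Expire a stale marker so alerting resumes after the snooze window.
    if (file_exists($marker) && time() - filemtime($marker) > $snoozeSeconds) {
        unlink($marker);
    }

    // Simplified check; replace with however you measure ECS free space.
    $freeGb = disk_free_space('/') / (1024 ** 3);

    if ($freeGb < 5 && !file_exists($marker)) {
        touch($marker); // start the snooze window
        notifySlack(sprintf('Low disk space: %.1f GB free', $freeGb));
    }

    // Post to a Slack incoming webhook (the URL is a placeholder).
    function notifySlack(string $text): void
    {
        $ch = curl_init('https://hooks.slack.com/services/T000/B000/XXXX');
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
            CURLOPT_POSTFIELDS     => json_encode(['text' => $text]),
            CURLOPT_RETURNTRANSFER => true,
        ]);
        curl_exec($ch);
        curl_close($ch);
    }

This keeps each Jenkins run short (no sleeping executor), but it still has the weaknesses described above: the state lives in a file on one node, and nothing in Jenkins itself can report whether the alarm is active.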

How often should I autodiscover?

We've been using the EWS SDK for a few years now, and after many mistakes we've decided it was time to refactor our code base to reflect what we've learned. One issue we see every once in a while is that all EWS calls fail because they're pointing to a CAS that is malfunctioning. The solution seems as easy as firing off a background thread every n seconds, where n represents how often we'll autodiscover.
I've scoured the web and can't seem to find any information relating to the matter.
How often should I autodiscover?
From the "How To: Refresh configuration information by using Autodiscover" topic on MSDN:
We recommend that you refresh your user settings by sending a new Autodiscover request after 24 hours have passed since your last Autodiscover request. This time can be adjusted to meet the requirements of your application.

Update LiveTile with local data

In my application I fetch an RSS feed from a website, compare it with the previously saved feed in isolated storage, and show the updated feed. I want to update the live tile with the count of updated feed items, even when the app is not running. Kindly give me guidelines for this.
You can use a background agent scheduled task. Depending on how resource intensive your call is, you can opt to use a PeriodicTask.
See periodic tasks here.
See an example of how to implement it here
Note that the earliest a periodic task will run on WP7 is every 30 minutes. It is also subject to available resources on the device, so it might not always run when you want it to.
