AWS cron expression OK, lambda not triggered - aws-lambda

I have set the following cron expression in AWS (CloudWatch trigger).
0 */5 7-12,1pm-11pm ? * MON,TUE,WED,THU,FRI
In an expression generator, for a very similar expression (7-23 instead of the hours above), I get:
At second :00, every 5 minutes starting at minute :00, every hour between 07am and 23pm, on every Monday, Tuesday, Wednesday, Thursday and Friday, every month
as expected.
However, the function is not triggered, and I don't see anything in the logs.
Why is that? (The trigger is enabled, of course.)
Thanks.

When you create a CloudWatch Events rule, or an EventBridge rule (which is what AWS calls them these days), and select a Lambda function as the target, there are two main points that you need to consider:
CRON SCHEDULE
You need to specify the cron schedule, and the schedule's timezone is UTC+0.
I assume you are in a different timezone and see no triggers because the next trigger time has not been reached yet.
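For example (a sketch, assuming a local timezone of UTC+2, which may not be yours): a rule meant to fire every 5 minutes between 07:00 and 23:00 local time needs its hour field shifted back by two hours, because the schedule is evaluated in UTC:
Local intent (UTC+2):  0/5 7-23 ? * MON-FRI *
Rule schedule (UTC):   0/5 5-21 ? * MON-FRI *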
RESOURCE-BASED POLICY
The other thing you need to check is permissions, specifically the Lambda function's resource-based policy.
In the AWS Console, open your Lambda's Permissions tab and review the policy that is required to allow your event rule to trigger the function.
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "AWSEvents",
      "Effect": "Allow",
      "Principal": {
        "Service": "events.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:<FUNCTION_NAME>",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:events:<REGION>:<ACCOUNT_ID>:rule/<RULE_NAME>"
        }
      }
    }
  ]
}
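If the statement is missing, you can add it yourself from the CLI; a minimal sketch (fill in the same placeholders as above):
aws lambda add-permission \
    --function-name <FUNCTION_NAME> \
    --statement-id AWSEvents \
    --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:<REGION>:<ACCOUNT_ID>:rule/<RULE_NAME>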

Well, the expression wasn't right.
AWS cron expressions start at the minutes field (there is no seconds field), so the extra leading field shifted everything over, and the */5 maybe confused it too; the 1pm-11pm hour syntax isn't valid either, hours must be numeric (7-23). The * at the end is the year field.
0/5 7-23 ? * MON-FRI * - this works
To debug triggers, there is a "Rules" section in the CloudWatch console, where you can see the next scheduled executions.
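You can also inspect the rule and its wiring from the CLI; a quick sketch (<RULE_NAME> is whatever you named your rule):
aws events describe-rule --name <RULE_NAME>
aws events list-targets-by-rule --rule <RULE_NAME>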

The cron expression is evaluated in UTC. Could you please check whether there is any time difference between your standard time and UTC time?
Then change the expression accordingly.
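A quick way to check the offset from any shell:
$ date      # local time
$ date -u   # UTC; the difference is how far to shift the hour fields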

Related

Azure Data Factory Until loop slow to end/terminate

We have an Until loop in an ADFv2 pipeline.
The time it takes to stop/terminate once the expression condition is met seems to correlate with the length of time the Until loop takes to complete its activities.
This particular Until loop performs a lot of activities and can take anywhere between 90 and 120 minutes to complete. So it takes almost as long to end/terminate (break out of the loop).
If I "hack" it so that it only performs a handful of activities, it will quickly end and break once it's finished and the termination expression is met.
It's like a spinning wheel that keeps spinning even after the power is turned off. The momentum that was built up while connected takes a while to slow down and eventually stop.
Is this a known issue? How can I troubleshoot the exact cause here, or fix it?
Incorrect usage of nested activities inside the Until loop could cause this.
Here is an Until activity, and some activities after it:
[screenshot: the Until activity]
In the Until, some activities:
Correct:
[screenshot: in the Until - correct]
Incorrect (slow to end/terminate):
[screenshot: in the Until - incorrect]
Why?
In the incorrect case, the last activity, If waiting, depends on three upstream activities; with multiple dependencies like that, its behavior was perhaps counterintuitive.
// pay attention to "dependsOn"
{
  "name": "If waiting",
  "type": "IfCondition",
  "dependsOn": [
    {
      "activity": "Set loop_waiting_refresh_status to True",
      "dependencyConditions": [
        "Succeeded"
      ]
    },
    {
      "activity": "WeChatEP_Notifier Info Get new bearer",
      "dependencyConditions": [
        "Succeeded"
      ]
    },
    {
      "activity": "Set loop_waiting_refresh_status to False",
      "dependencyConditions": [
        "Succeeded"
      ]
    }
  ],
  ...
}
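For comparison, here is a hypothetical sketch of the corrected wiring, assuming the three activities are chained so that If waiting has a single upstream dependency (activity name reused from the example above):
{
  "name": "If waiting",
  "type": "IfCondition",
  "dependsOn": [
    {
      "activity": "Set loop_waiting_refresh_status to False",
      "dependencyConditions": [
        "Succeeded"
      ]
    }
  ],
  ...
}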

How to suppress aws lambda cli output

I want to use the aws lambda update-function-code command to deploy the code of my function. The problem here is that the AWS CLI always prints out some information after deployment. That information contains sensitive data, such as environment variables and their values. That is not acceptable, as I'm going to use public CI services, and I don't want that info to become available to anyone. At the same time, I don't want to solve this by redirecting everything from the AWS command to /dev/null, for example, as in that case I would lose information about errors and exceptions, which would make it harder to debug if something goes wrong. What can I do here?
P.S. SAM is not an option, as it would force me to switch to another framework and completely change the workflow I'm using.
You could target the output you'd like to suppress by replacing those values with jq.
For example, if you had output from the CLI command like below:
{
  "FunctionName": "my-function",
  "LastModified": "2019-09-26T20:28:40.438+0000",
  "RevisionId": "e52502d4-9320-4688-9cd6-152a6ab7490d",
  "MemorySize": 256,
  "Version": "$LATEST",
  "Role": "arn:aws:iam::123456789012:role/service-role/my-function-role-uy3l9qyq",
  "Timeout": 3,
  "Runtime": "nodejs10.x",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "CodeSha256": "5tT2qgzYUHaqwR716pZ2dpkn/0J1FrzJmlKidWoaCgk=",
  "Description": "",
  "VpcConfig": {
    "SubnetIds": [],
    "VpcId": "",
    "SecurityGroupIds": []
  },
  "CodeSize": 304,
  "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
  "Handler": "index.handler",
  "Environment": {
    "Variables": {
      "SomeSensitiveVar": "value",
      "SomeOtherSensitiveVar": "password"
    }
  }
}
You might pipe that to jq and replace values only if the keys exist:
aws lambda update-function-code <args> | jq '
  if .Environment.Variables.SomeSensitiveVar? then .Environment.Variables.SomeSensitiveVar = "REDACTED" else . end |
  if .Environment.Variables.SomeOtherSensitiveVar? then .Environment.Variables.SomeOtherSensitiveVar = "REDACTED" else . end'
You know which data is sensitive, so you will need to set this up appropriately. You can see an example of what data is returned in the CLI docs, and the API docs are also helpful for understanding what the structure can look like.
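Alternatively, if you don't need the environment block in your CI logs at all, a simpler sketch is to drop the whole key (del is standard jq):
aws lambda update-function-code <args> | jq 'del(.Environment)'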
Lambda environment variables show up everywhere and cannot be considered private.
If your environment variables are sensitive, you could consider using AWS Secrets Manager.
In a nutshell:
Create a secret in the secret store. It has a name (public) and a value (secret, encrypted, with proper user access control).
Allow your Lambda to access the secret store.
In your Lambda env, store the name of your secret, and tell your Lambda to get the corresponding value at runtime.
Bonus: password rotation becomes super easy, as you don't even have to update your Lambda config anymore.
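For example, reading the value back at runtime looks like this from the CLI (a sketch; my-function-secret is a hypothetical secret name):
aws secretsmanager get-secret-value --secret-id my-function-secret --query SecretString --output text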

Append an array to a json using jq in BASH

I have a json that looks like this:
{
  "failedSet": [],
  "successfulSet": [{
    "event": {
      "arn": "arn:aws:health:us-east-1::event/AWS_RDS_MAINTENANCE_SCHEDULED_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
      "endTime": 1502841540.0,
      "eventTypeCategory": "scheduledChange",
      "eventTypeCode": "AWS_RDS_MAINTENANCE_SCHEDULED",
      "lastUpdatedTime": 1501208541.93,
      "region": "us-east-1",
      "service": "RDS",
      "startTime": 1502236800.0,
      "statusCode": "open"
    },
    "eventDescription": {
      "latestDescription": "We are contacting you to inform you that one or more of your Amazon RDS DB instances is scheduled to receive system upgrades during your maintenance window between August 8 5:00 PM and August 15 4:59 PM PDT. Please see the affected resource tab for a list of these resources. \r\n\r\nWhile the system upgrades are in progress, Single-AZ deployments will be unavailable for a few minutes during your maintenance window. Multi-AZ deployments will be unavailable for the amount of time it takes a failover to complete, usually about 60 seconds, also in your maintenance window. \r\n\r\nPlease ensure the maintenance windows for your affected instances are set appropriately to minimize the impact of these system upgrades. \r\n\r\nIf you have any questions or concerns, contact the AWS Support Team. The team is available on the community forums and by contacting AWS Premium Support. \r\n\r\nhttp://aws.amazon.com/support\r\n"
    }
  }]
}
I'm trying to add a new key/value under successfulSet[].event (key name affectedEntities) using jq. I've seen some examples, like here and here, but none of those answers really show how to add one key with possibly multiple values (I say possibly because, as of now, AWS returns one value for the affected entity, but if there are more, then I'd like to list them).
EDIT: The value of the new key that I want to add is stored in a variable called $affected_entities and a sample of that value looks like this:
[
  "arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]
The value could look like this:
[
  "arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  "arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  ...
]
You can use this jq filter:
jq '.successfulSet[].event += { "new_key" : "new_value" }' file.json
EDIT:
Try this:
jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Test:
sat~$ new_value='[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]'
sat~$ jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Note that --argjson works with jq 1.5 and above.
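Applied to the question's variable and desired key name, that becomes:
jq --argjson argval "$affected_entities" '.successfulSet[].event += { "affectedEntities" : $argval }' file.json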

Is there a way I can get historic performance data of various alerts in Nagios as json/xml?

I am looking to get performance data for various alerts set up in my Nagios Core/XI. I think it is stored in RRDs. Are there ways I can get access to it?
If you're using Nagios XI you can get this data a few different ways.
If you're using XI 5 or later, then the easiest way that springs to mind is the API. Log in to your XI server as an administrator, navigate to the 'Help' menu, then select 'Objects Reference' in the left-hand navigation and find 'GET objects/rrdexport' in the Objects Reference navigation box (or just scroll down to near the bottom).
An example curl might look like this:
curl -XGET "http://nagiosxi/nagiosxi/api/v1/objects/rrdexport?apikey=YOURAPIKEY&pretty=1&host_name=localhost"
Your response should look something like:
{
  "meta": {
    "start": "1453838100",
    "step": "300",
    "end": "1453838400",
    "rows": "2",
    "columns": "4",
    "legend": {
      "entry": [
        "rta",
        "pl",
        "rtmax",
        "rtmin"
      ]
    }
  },
  "data": {
    "row": [
      {
        "t": "1453838100",
        "v": [
          "6.0373333333e-03",
          "0.0000000000e+00",
          "1.7536000000e-02",
          "3.0000000000e-03"
        ]
      },
      {
        "t": "1453838400",
        "v": [
          "6.0000000000e-03",
          "0.0000000000e+00",
          "1.7037333333e-02",
          "3.0000000000e-03"
        ]
      }
    ]
  }
}
BUT WAIT, THERE IS ANOTHER WAY
This way will work no matter what version you're on, and would actually work if you were processing performance data with NPCD on a Core system as well.
Log in to your server via ssh or console and get your butt over to the /usr/local/nagios/share/perfdata directory. From here we're going to use the localhost object as an example.
$ cd /usr/local/nagios/share/perfdata/
$ ls
localhost
$ cd localhost/
$ ls
Current_Load.rrd Current_Users.xml HTTP.rrd PING.xml SSH.rrd Swap_Usage.xml
Current_Load.xml _HOST_.rrd HTTP.xml Root_Partition.rrd SSH.xml Total_Processes.rrd
Current_Users.rrd _HOST_.xml PING.rrd Root_Partition.xml Swap_Usage.rrd Total_Processes.xml
$ rrdtool dump _HOST_.rrd
Once you run the rrdtool dump command, there is going to be an awful lot of output, so I'll leave that as an exercise for you, the reader ;)
If you're trying to automate something, then you should note that the XML files contain metadata for the RRD files and could be useful to parse first.
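If you want JSON straight out of an RRD without the XI API, rrdtool xport can emit it; a sketch, assuming the RRD has a data source named rta (check the matching .xml file for the real DS names):
$ rrdtool xport --json --start now-1h --end now \
    DEF:a=_HOST_.rrd:rta:AVERAGE \
    XPORT:a:rta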
Also, if you're anything like me, you love reading technical manuals. Here is a great one to read: RRDTool documentation
Hope this helped!

Is it possible to obtain "current_time" in rufus-scheduler

Suppose I schedule a job with rufus-scheduler like below:
scheduler.cron '* * * * * UTC' do |job|
  ...
end
I can retrieve some useful info from the job variable. For example, next_time shows when the next run is scheduled. I am wondering if a similar value is available for "current_time". Reason: for various reasons (threads, system load, etc.), a job could be executed with a small delay. I want to know the exact time the job was meant to have started. If I retrieve the system time with Time.new, that will not be exact.
Is there a way? I tried awesome_print on the job variable; it seems the value I am looking for is not available:
jruby-1.7.19 :017 > #<Rufus::Scheduler::CronJob:0x3951b84e @unscheduled_at=nil, @first_at=nil, @opts={}, @last_time=2016-07-28 01:35:00 +0800, @tags=[], @mean_work_time=0.0, @callable=#<Proc:0x2ee23f1d@(irb):14>, @last_at=nil, @count=1, @scheduled_at=2016-07-28 01:34:09 +0800, @handler=#<Proc:0x2ee23f1d@(irb):14>, @paused_at=nil, @local_mutex=#<Mutex:0x79da0f7>, @locals={}, @times=nil, @scheduler=#<Rufus::Scheduler:0x7dc7d255 @mutexes={}, @scheduler_lock=#<Rufus::Scheduler::NullLock:0x49c20af6>, @paused=false, @opts={}, @work_queue=#<Queue:0x625dc24e>, @jobs=#<Rufus::Scheduler::JobArray:0x797fc155 @mutex=#<Mutex:0x501b94b9>, @array=[#<Rufus::Scheduler::CronJob:0x3951b84e ...>]>, @trigger_lock=#<Rufus::Scheduler::NullLock:0x42c126c5>, @started_at=2016-07-28 01:33:36 +0800, @thread_key="rufus_scheduler_2068", @stderr=#<IO:fd 2>, @max_work_threads=28, @frequency=0.3, @thread=#<Thread:0x16d871c0 sleep>>, @cron_line=#<Rufus::Scheduler::CronLine:0x6156f1b0 @seconds=[0], @weekdays=nil, @hours=nil, @timezone="UTC", @days=nil, @minutes=nil, @original="* * * * * UTC", @months=nil, @monthdays=nil>, @last_work_time=0.0, @id="cron_1469640849.047_961656910", @original="* * * * * UTC", @next_time=2016-07-27 17:36:00 +0000>
Added a Job#previous_time
https://github.com/jmettraux/rufus-scheduler/commit/43f1016859a43ea7f138404c1e5d864048f24959
scheduler.every('10s') do |job|
  puts "job scheduled for #{job.previous_time} triggered at #{Time.now}"
  puts "next time will be around #{job.next_time}"
  puts "."
end
A job's #next_time is supposed to hold the trigger time until the #post_trigger hook gets called.
