How to suppress aws lambda cli output - aws-lambda

I want to use the aws lambda update-function-code command to deploy my function's code. The problem is that the AWS CLI always prints out some information after deployment, and that output contains sensitive data, such as environment variables and their values. That is not acceptable, as I'm going to use public CI services and I don't want that info to become available to anyone. At the same time, I don't want to solve this by redirecting everything from the AWS command to /dev/null, for example, because then I would lose information about errors and exceptions, which would make it harder to debug if something goes wrong. What can I do here?
P.S. SAM is not an option, as it would force me to switch to another framework and completely change the workflow I'm using.

You could target the output you'd like to suppress by replacing those values with jq.
For example, if you had output from the CLI command like the following:
{
  "FunctionName": "my-function",
  "LastModified": "2019-09-26T20:28:40.438+0000",
  "RevisionId": "e52502d4-9320-4688-9cd6-152a6ab7490d",
  "MemorySize": 256,
  "Version": "$LATEST",
  "Role": "arn:aws:iam::123456789012:role/service-role/my-function-role-uy3l9qyq",
  "Timeout": 3,
  "Runtime": "nodejs10.x",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "CodeSha256": "5tT2qgzYUHaqwR716pZ2dpkn/0J1FrzJmlKidWoaCgk=",
  "Description": "",
  "VpcConfig": {
    "SubnetIds": [],
    "VpcId": "",
    "SecurityGroupIds": []
  },
  "CodeSize": 304,
  "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
  "Handler": "index.handler",
  "Environment": {
    "Variables": {
      "SomeSensitiveVar": "value",
      "SomeOtherSensitiveVar": "password"
    }
  }
}
You might pipe that to jq and replace values only if the keys exist:
aws lambda update-function-code <args> | jq '
  if .Environment.Variables.SomeSensitiveVar? then .Environment.Variables.SomeSensitiveVar = "REDACTED" else . end |
  if .Environment.Variables.SomeOtherSensitiveVar? then .Environment.Variables.SomeOtherSensitiveVar = "REDACTED" else . end'
You know which data is sensitive, so set this up accordingly. The CLI docs show an example of the data that is returned, and the API docs are also helpful for understanding what the structure can look like.
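If you'd rather not enumerate every sensitive key, a blunter sketch (my own addition, assuming you can live without the Environment block in the output entirely) is to delete it with jq's del; CLI errors still go to stderr and the exit code is unchanged, so failures remain visible in CI logs:
# Drop the whole Environment block so no variable names or values are printed.
aws lambda update-function-code <args> | jq 'del(.Environment)'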

Lambda environment variables show up in many places and cannot be considered private.
If your environment variables are sensitive, you could consider using AWS Secrets Manager.
In a nutshell:
Create a secret in the secret store. It has a name (public) and a value (secret, encrypted, with proper user access control).
Allow your Lambda to access the secret store.
In your Lambda environment, store the name of your secret, and have your Lambda fetch the corresponding value at runtime (see the sketch below).
Bonus: password rotation becomes super easy, as you don't even have to update your Lambda config anymore.
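A minimal sketch of that runtime lookup, assuming a secret whose name is passed to the function via a hypothetical env var SECRET_NAME and a role that allows secretsmanager:GetSecretValue. Inside Lambda code you would normally use the SDK, but the equivalent CLI call looks like this:
# SECRET_NAME holds only the secret's name; the value never appears in the Lambda config.
SECRET_VALUE=$(aws secretsmanager get-secret-value \
  --secret-id "$SECRET_NAME" \
  --query SecretString \
  --output text)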

Related

ActiveMQ jolokia gives different message response depending on environment

I have to get (not consume) part of a message that is in a queue. I reused a bash script that was suggested as an answer here, using /api/jolokia/: ActiveMQ Jolokia API How can I get the full Message Body
The part of the response I am interested in is the MsgId in value:text:
"request": {
"mbean": "org.apache.activemq:brokerName=MyBrokerName,destinationName=MyQueueName,destinationType=Queue,type=Broker",
"type": "exec",
"operation": "browseMessages()"
},
"value": [
{
"jMSCorrelationIDAsBytes": [],
***some other objects here ***
"text": "<?xml version=\"1.0\"?>\r\n<RepositoryOperationRq xmlns=\"http://www.ACORD.org/\">\r\n <MsgId>xxx28bab-e62c-4dbc-a2aa-xxx</MsgId>\r\n <CreationDtTime>2020-01-01T11:11:11-11:00</CreationDtTime>\r\n
There is no problem on the DEV ActiveMQ, but when I tried the same on the UAT ActiveMQ there is no value:text object in the response at all, and some other objects' values are different, like:
"connectionControl": false
and
"connectionControl": "false"
I thought it might be because of the maxDepth parameter, so I increased it. Unfortunately, when I set maxDepth=5 I got this error:
"error_type": "java.lang.IllegalStateException",
"error": "java.lang.IllegalStateException : Error while extracting next from org.apache.activemq.broker.region.cursors.FilePendingMessageCursor#3bb9ace4",
"status": 500
and the whole ActiveMQ stopped receiving any messages, and I had to force-restart it. The ActiveMQ configs should be the same on both envs, and the version is 5.13.3. Do you know why that text object is missing?
I think the difference here is down to the content of the messages in each environment. The browseMessages operation simply returns the messages in the corresponding destination (e.g. MyQueueName).
If the message is not a javax.jms.TextMessage then it won't have the text field. If a property is false instead of "false", that just means the property value was a boolean rather than a String.
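To check what is actually sitting in the UAT queue, a rough sketch (assuming the default ActiveMQ web console Jolokia endpoint on port 8161 with admin credentials, and jq installed) is to browse the queue and see which entries carry a text field at all:
# Prints true/false per browsed message: only javax.jms.TextMessage entries expose "text".
curl -s -u admin:admin \
  "http://localhost:8161/api/jolokia/exec/org.apache.activemq:brokerName=MyBrokerName,destinationName=MyQueueName,destinationType=Queue,type=Broker/browseMessages()" \
  | jq '.value | map(has("text"))'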

Receiving error in AWS Secrets manager awscli for: Version "AWSCURRENT" not found when deployed via Terraform

Overview
Create an aws_secretsmanager_secret
Create an aws_secretsmanager_secret_version
Store a uniquely generated string as that version
Use a local-exec provisioner to store the actual secured string using bash
Reference that string using the secretsmanager resource in, for example, an RDS instance deployment
Objective
Keep all plain-text strings out of the remote state residing in an S3 bucket
Use AWS Secrets Manager to store these strings
Set once, retrieve by calling the resource in Terraform
Problem
Error: Secrets Manager Secret
"arn:aws:secretsmanager:us-east-1:82374283744:secret:Example-rds-secret-fff42b69-30c1-df50-8e5c-f512464a4a11-pJvC5U"
Version "AWSCURRENT" not found
when running terraform apply
Question
Why isn't it moving the AWSCURRENT version automatically? Am I missing something? Is my bash command wrong? The value is not written to the secret_version, but the reference resolves correctly.
Look at the main.tf code, which actually performs the command:
provisioner "local-exec" {
command = "bash -c 'RDSSECRET=$(openssl rand -base64 16); aws secretsmanager put-secret-value --secret-id ${data.aws_secretsmanager_secret.secretsmanager-name.arn} --secret-string $RDSSECRET --version-stages AWSCURRENT --region ${var.aws_region} --profile ${var.aws-profile}'"
}
Code
main.tf
data "aws_secretsmanager_secret_version" "rds-secret" {
secret_id = aws_secretsmanager_secret.rds-secret.id
}
data "aws_secretsmanager_secret" "secretsmanager-name" {
arn = aws_secretsmanager_secret.rds-secret.arn
}
resource "random_password" "db_password" {
length = 56
special = true
min_special = 5
override_special = "!#$%^&*()-_=+[]{}<>:?"
keepers = {
pass_version = 1
}
}
resource "random_uuid" "secret-uuid" { }
resource "aws_secretsmanager_secret" "rds-secret" {
name = "DAL-${var.environment}-rds-secret-${random_uuid.secret-uuid.result}"
}
resource "aws_secretsmanager_secret_version" "rds-secret-version" {
secret_id = aws_secretsmanager_secret.rds-secret.id
secret_string = random_password.db_password.result
provisioner "local-exec" {
command = "bash -c 'RDSSECRET=$(openssl rand -base64 16); aws secretsmanager put-secret-value --secret-id ${data.aws_secretsmanager_secret.secretsmanager-name.arn} --secret-string $RDSSECRET --region ${var.aws_region} --profile ${var.aws-profile}'"
}
}
variables.tf
variable "aws-profile" {
description = "Local AWS Profile Name "
type = "string"
}
variable "aws_region" {
description = "aws region"
default="us-east-1"
type = "string"
}
variable "environment" {}
terraform.tfvars
aws_region="us-east-1"
aws-profile="Example-Environment"
environment="dev"
The error likely isn't occurring in your provisioner execution per se, because if you remove the provisioner block the error still occurs on apply, but confusingly only the first time after a destroy.
Removing the data "aws_secretsmanager_secret_version" "rds-secret" block as well "resolves" the error completely.
I'm guessing there is some sort of config delay issue here...but adding a 20 second delay provisioner to the aws_secretsmanager_secret.rds-secret resource block didn't help.
And the value from the data block can be successfully output on subsequent apply runs, so maybe it's not just timing.
Even if you resolve the above more basic issue, it's likely your provisioner will still be confusing things by modifying a resource that Terraform is trying to manage in the same run. I'm not sure there's a way to get around that except perhaps by splitting into two separate operations.
Update:
It turns out that on the first run the data sources are read before the aws_secretsmanager_secret_version resource is created. Just adding depends_on = [aws_secretsmanager_secret_version.rds-secret-version] to the data "aws_secretsmanager_secret_version" block resolves this fully and makes the interpolation for your provisioner work as well. I haven't tested the actual provisioner.
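For illustration, the amended data block would look roughly like this (same names as the configuration above):
data "aws_secretsmanager_secret_version" "rds-secret" {
  secret_id = aws_secretsmanager_secret.rds-secret.id

  # Defer the read until the secret version has actually been created.
  depends_on = [aws_secretsmanager_secret_version.rds-secret-version]
}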
Also you may need to consider this (which I take to not always apply to 0.13):
NOTE: In Terraform 0.12 and earlier, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using depends_on with data resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses depends_on with a data resource can never converge. Due to this behavior, we do not recommend using depends_on with data resources.

Hide or encrypt credentials information in AWS Data pipeline

I am creating an AWS Data Pipeline to copy data from MySQL to S3. I have written a shell script which accepts credentials as arguments and creates the pipeline, so that my credentials are not exposed in the script.
I used the bash script below to create the pipeline.
unique_id="$(date +'%s')"
profile="${4}"
startDate="${1}"
echo "{\"values\":{\"myS3CopyStartDate\":\"$startDate\",\"myRdsUsername\":\"$2\",\"myRdsPassword\":\"$3\"}}" > mysqlToS3values.json
sqlpipelineId=`aws datapipeline create-pipeline --name mysqlToS3 --unique-id mysqlToS3_$unique_id --profile $profile --query '{ID:pipelineId}' --output text`
validationErrors=`aws datapipeline put-pipeline-definition --pipeline-id $sqlpipelineId --pipeline-definition file://mysqlToS3.json --parameter-objects file://mysqlToS3Parameters.json --parameter-values-uri file://mysqlToS3values.json --query 'validationErrors' --profile $profile`
aws datapipeline activate-pipeline --pipeline-id $sqlpipelineId --profile $profile
However, when I fetch the pipeline definition through the AWS CLI using
aws datapipeline get-pipeline-definition --pipeline-id 27163782
I get my credentials in plain text in the JSON output.
{ "parameters": [...], "objects": [...], "values": { "myS3CopyStartDate": "2018-04-05T10:00:00", "myRdsPassword": "sbc", "myRdsUsername": "ksnck" } }
Is there any way to encrypt or hide the credentials information?
I don't think there is a way to mask the data in the pipeline definition.
The strategy I have used is to store my secrets in S3 (encrypted with a specific KMS key and using appropriate IAM/bucket permissions). Then, inside my Data Pipeline step, I use the AWS CLI to read the secret from S3 and pass it to the mysql command or whatever.
So instead of having a pipeline parameter like myRdsPassword I have:
"myRdsPasswordFile": "s3://mybucket/secrets/rdspassword"
Then inside my step I read it with something like:
PWD=$(aws s3 cp ${myRdsPasswordFile} -)
You could also have a similar workflow that retrieves the password from AWS Parameter Store instead of S3.
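A minimal sketch of that Parameter Store variant, assuming a SecureString parameter named /myproject/rds/password (hypothetical) and a role allowed to call ssm:GetParameter and decrypt with the associated KMS key:
# Read the decrypted value inside the pipeline step instead of passing it as a pipeline parameter.
PWD=$(aws ssm get-parameter \
  --name /myproject/rds/password \
  --with-decryption \
  --query Parameter.Value \
  --output text)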
There is actually a way that's built into data pipelines:
You prepend the field with an * and it will encrypt the field and hide it visibly like a password form field.
If you're using parameters, prepend the * on both the object field and the corresponding parameter field, as shown below (note that there are three * in a parameterized setup; the example is just a sample and omits required fields to keep the illustration of encryption through parameters simple):
...{
    "*password": "#{*myDbPassword}",
    "name": "DBName",
    "id": "DB",
  },
],
"parameters": [
  {
    "id": "*myDbPassword",
    "description": "Database password",
    "type": "String"
  }...
See more below:
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-characters.html
You can store the RDS credentials in AWS Secrets Manager. You can then retrieve the credentials from Secrets Manager in the data pipeline using a CloudFormation template, as described below:
Mappings:
  RegionToDatabaseConfig:
    us-west-2:
      CredentialsSecretKey: us-west-2-SECRET_NAME
      # ...
    us-east-1:
      CredentialsSecretKey: us-east-1-SECRET_NAME
      # ...
    eu-west-1:
      CredentialsSecretKey: eu-west-1-SECRET_NAME
      # ...

Resources:
  OurProjectDataPipeline:
    Type: AWS::DataPipeline::Pipeline
    Properties:
      # ...
      PipelineObjects:
        # ...
        # RDS resources
        - Id: PostgresqlDatabase
          Name: Source database to sync data from
          Fields:
            - Key: type
              StringValue: RdsDatabase
            - Key: username
              StringValue:
                !Join
                  - ''
                  - - '{{resolve:secretsmanager:'
                    - !FindInMap
                      - RegionToDatabaseConfig
                      - {Ref: 'AWS::Region'}
                      - CredentialsSecretKey
                    - ':SecretString:username}}'
            - Key: "*password"
              StringValue:
                !Join
                  - ''
                  - - '{{resolve:secretsmanager:'
                    - !FindInMap
                      - RegionToDatabaseConfig
                      - {Ref: 'AWS::Region'}
                      - CredentialsSecretKey
                    - ':SecretString:password}}'
            - Key: jdbcProperties
              StringValue: 'allowMultiQueries=true'
            - Key: rdsInstanceId
              StringValue:
                !FindInMap
                  - RegionToDatabaseConfig
                  - {Ref: 'AWS::Region'}
                  - RDSInstanceId

Append an array to a json using jq in BASH

I have a json that looks like this:
{
  "failedSet": [],
  "successfulSet": [{
    "event": {
      "arn": "arn:aws:health:us-east-1::event/AWS_RDS_MAINTENANCE_SCHEDULED_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
      "endTime": 1502841540.0,
      "eventTypeCategory": "scheduledChange",
      "eventTypeCode": "AWS_RDS_MAINTENANCE_SCHEDULED",
      "lastUpdatedTime": 1501208541.93,
      "region": "us-east-1",
      "service": "RDS",
      "startTime": 1502236800.0,
      "statusCode": "open"
    },
    "eventDescription": {
      "latestDescription": "We are contacting you to inform you that one or more of your Amazon RDS DB instances is scheduled to receive system upgrades during your maintenance window between August 8 5:00 PM and August 15 4:59 PM PDT. Please see the affected resource tab for a list of these resources. \r\n\r\nWhile the system upgrades are in progress, Single-AZ deployments will be unavailable for a few minutes during your maintenance window. Multi-AZ deployments will be unavailable for the amount of time it takes a failover to complete, usually about 60 seconds, also in your maintenance window. \r\n\r\nPlease ensure the maintenance windows for your affected instances are set appropriately to minimize the impact of these system upgrades. \r\n\r\nIf you have any questions or concerns, contact the AWS Support Team. The team is available on the community forums and by contacting AWS Premium Support. \r\n\r\nhttp://aws.amazon.com/support\r\n"
    }
  }]
}
I'm trying to add a new key/value under successfulSet[].event (key name affectedEntities) using jq. I've seen some examples, like here and here, but none of those answers really show how to add one key with possibly multiple values (I say "possibly" because as of now AWS returns one value for the affected entity, but if there are more, then I'd like to list them).
EDIT: The value of the new key that I want to add is stored in a variable called $affected_entities and a sample of that value looks like this:
[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]
The value could look like this:
[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
...
...
...
]
You can use this jq:
jq '.successfulSet[].event += { "new_key" : "new_value" }' file.json
EDIT:
Try this:
jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Test:
sat~$ new_value='[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]'
sat~$ jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Note that --argjson works with jq 1.5 and above.

Is there a way I can get historic performance data of various alerts in Nagios as json/xml?

I am looking to get performance data for the various alerts set up in my Nagios Core/XI. I think it is stored in RRDs. Are there ways I can get access to it?
If you're using Nagios XI you can get this data a few different ways.
If you're using XI 5 or later, then the easiest way that springs to mind is the API. Log in to your XI server as an administrator, navigate to the 'Help' menu, then select 'Objects Reference' in the left-hand navigation and find 'GET objects/rrdexport' in the Objects Reference navigation box (or just scroll down to near the bottom).
An example curl might look like this:
curl -XGET "http://nagiosxi/nagiosxi/api/v1/objects/rrdexport?apikey=YOURAPIKEY&pretty=1&host_name=localhost"
Your response should look something like:
{
  "meta": {
    "start": "1453838100",
    "step": "300",
    "end": "1453838400",
    "rows": "2",
    "columns": "4",
    "legend": {
      "entry": [
        "rta",
        "pl",
        "rtmax",
        "rtmin"
      ]
    }
  },
  "data": {
    "row": [
      {
        "t": "1453838100",
        "v": [
          "6.0373333333e-03",
          "0.0000000000e+00",
          "1.7536000000e-02",
          "3.0000000000e-03"
        ]
      },
      {
        "t": "1453838400",
        "v": [
          "6.0000000000e-03",
          "0.0000000000e+00",
          "1.7037333333e-02",
          "3.0000000000e-03"
        ]
      }
    ]
  }
}
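If you want to post-process that response, a small sketch (assuming jq is available and the legend order matches the sample above, where rta is the first column) could pull out a timestamped series:
# Pair each row's timestamp with its rta value (the first entry in the v array).
curl -s -XGET "http://nagiosxi/nagiosxi/api/v1/objects/rrdexport?apikey=YOURAPIKEY&pretty=1&host_name=localhost" \
  | jq '.data.row[] | {time: .t, rta: .v[0]}'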
BUT WAIT, THERE IS ANOTHER WAY
This way will work no matter what version you're on, and would actually work if you were processing performance data with NPCD on a Core system as well.
Log in to your server via ssh or console and get your butt over to the /usr/local/nagios/share/perfdata directory. From here we're going to use the localhost object as an example..
$ cd /usr/local/nagios/share/perfdata/
$ ls
localhost
$ cd localhost/
$ ls
Current_Load.rrd Current_Users.xml HTTP.rrd PING.xml SSH.rrd Swap_Usage.xml
Current_Load.xml _HOST_.rrd HTTP.xml Root_Partition.rrd SSH.xml Total_Processes.rrd
Current_Users.rrd _HOST_.xml PING.rrd Root_Partition.xml Swap_Usage.rrd Total_Processes.xml
$ rrdtool dump _HOST_.rrd
Once you run the rrdtool dump command, there is going to be an awful lot of output, so I'll leave that as an exercise for you, the reader ;)
If you're trying to automate something, note that the .xml files contain metadata for the .rrd files and could be useful to parse first.
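A minimal sketch of getting the data out of an RRD as XML on disk (assuming rrdtool is installed, which it normally is on a box that writes these files):
# Dump the whole round-robin archive to XML; redirect it, because the output is large.
rrdtool dump /usr/local/nagios/share/perfdata/localhost/_HOST_.rrd > /tmp/localhost_HOST.xml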
Also, if you're anything like me, you love reading technical manuals. Here is a great one to read: RRDTool documentation
Hope this helped!
