I am trying to send an email with AWS SES send_raw_email. My email address is verified on AWS. I am not able to figure out how to format my destinations:
destinations: [
to_addresses: ["example#gmail.com"]
cc_addresses: ["example#gmail.com"]]
The above code throws this error ArgumentError: expected params[:destinations][0] to be a String, got value {:to_addresses=>["example#gmail.com"], :cc_addresses=>["example#gmail.com"]} (class: Hash) instead.
I am basing my code off of this documentation.
In case it's helpful, what I am trying to do is send an email that has attached images to it.
Any help is greatly appreciated! Thank you
The notation for hash-style arguments is:
destinations: {
to_addresses: [ ... ],
cc_addresses: [ ... ],
}
You're declaring destinations with [ ... ], which denotes an array, so the hash-style key-value notation inside it is invalid.
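For comparison, here's a minimal sketch of both call shapes using Python's boto3 (a swap from the Ruby SDK in the question, but the parameter shapes are the same idea): send_raw_email takes a flat list of recipient strings, with To/Cc living in the raw MIME headers (which is also where attached images go, as multipart parts), while send_email takes the nested structure of address lists. All addresses below are placeholders.

import boto3

ses = boto3.client("ses")

# send_raw_email: Destinations is a flat list of address strings; the
# To/Cc headers live inside the raw MIME message itself
raw_mime = (
    b"From: sender@example.com\r\n"
    b"To: to@example.com\r\n"
    b"Cc: cc@example.com\r\n"
    b"Subject: hi\r\n\r\n"
    b"hello\r\n"
)
ses.send_raw_email(
    Source="sender@example.com",
    Destinations=["to@example.com", "cc@example.com"],
    RawMessage={"Data": raw_mime},
)

# send_email: Destination is a nested structure of per-header address lists
ses.send_email(
    Source="sender@example.com",
    Destination={"ToAddresses": ["to@example.com"],
                 "CcAddresses": ["cc@example.com"]},
    Message={"Subject": {"Data": "hi"},
             "Body": {"Text": {"Data": "hello"}}},
)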
I am a beginner with Metaplex.
What I want to achieve is a presale feature, which means a whitelist.
I followed the Metaplex instructions to set up config.json.
"whitelistMintSettings": {
    "mode": { "neverBurn": true },
    "mint": "xxxxKnH5",
    "presale": true,
    "discountPrice": 0.5
},
Here, I set xxxxKnH5 as a member of the whitelist so he can mint before the public mint.
Then I run update_candy_machine (this works fine).
But in the UI, I always get this error message:
There was a problem fetching whitelist token balance
Home.tsx:184 Error: failed to get token account balance: Invalid param: could not find account
Any idea why I receive this message and how I can fix it?
Make sure the "mint": "XXX...X" line contains the address of an SPL token with 0 decimals.
Moreover, for airdropping it to users, join the Metaplex Discord server and check the #airdrop-bash thread. It contains a JS script that will make airdropping the token to wallets a lot easier. :)
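If it helps, here's a hedged sketch of creating such a token with the spl-token CLI (assuming you have the Solana and spl-token CLIs configured; <TOKEN_MINT> and <WALLET_ADDRESS> are placeholders). The mint address it prints is what goes into the "mint" field above:

# create a 0-decimals SPL token and mint some supply to yourself
spl-token create-token --decimals 0
spl-token create-account <TOKEN_MINT>
spl-token mint <TOKEN_MINT> 10
# send one whitelist token to each presale wallet
spl-token transfer <TOKEN_MINT> 1 <WALLET_ADDRESS> --fund-recipient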
I have to get (not consume) part of a message that is in a queue. I reused a bash script that was posted as a response here, using /api/jolokia/: ActiveMQ Jolokia API How can I get the full Message Body
The part of the response I am interested in is the MsgId in value: text:
"request": {
"mbean": "org.apache.activemq:brokerName=MyBrokerName,destinationName=MyQueueName,destinationType=Queue,type=Broker",
"type": "exec",
"operation": "browseMessages()"
},
"value": [
{
"jMSCorrelationIDAsBytes": [],
***some other objects here ***
"text": "<?xml version=\"1.0\"?>\r\n<RepositoryOperationRq xmlns=\"http://www.ACORD.org/\">\r\n <MsgId>xxx28bab-e62c-4dbc-a2aa-xxx</MsgId>\r\n <CreationDtTime>2020-01-01T11:11:11-11:00</CreationDtTime>\r\n
There is no problem on the DEV ActiveMQ, but when I tried the same on the UAT ActiveMQ, there is no value: text object in the response at all, and some other object values are different, for example:
"connectionControl": false
and
"connectionControl": "false"
I thought it might be because of the maxDepth parameter, so I increased it. Unfortunately, when I set maxDepth=5, I got this error:
"error_type": "java.lang.IllegalStateException",
"error": "java.lang.IllegalStateException : Error while extracting next from org.apache.activemq.broker.region.cursors.FilePendingMessageCursor#3bb9ace4",
"status": 500
and the whole ActiveMQ broker stopped receiving messages; I had to force-restart it. The ActiveMQ configs should be the same on both envs, and the version is 5.13.3. Do you know why that text object is missing?
I think the difference here is down to the content of the messages in each environment. The browseMessages operation simply returns the messages in the corresponding destination (e.g. MyQueueName).
If the message is not a javax.jms.TextMessage then it won't have the text field. If a property is false instead of "false", that just means the property value was a boolean instead of a String.
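For reference, a minimal sketch of browsing the queue over Jolokia and pulling the MsgId out of each text field, in Python (the host, port, and admin/admin credentials are assumptions; adjust to your broker):

import re
import requests

payload = {
    "type": "exec",
    "mbean": ("org.apache.activemq:brokerName=MyBrokerName,"
              "destinationName=MyQueueName,destinationType=Queue,type=Broker"),
    "operation": "browseMessages()",
}
resp = requests.post("http://localhost:8161/api/jolokia/",
                     json=payload, auth=("admin", "admin"))
for msg in resp.json().get("value", []):
    text = msg.get("text")  # present only for javax.jms.TextMessage
    if text:
        match = re.search(r"<MsgId>([^<]+)</MsgId>", text)
        if match:
            print(match.group(1))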
I want to use the aws lambda update-function-code command to deploy the code of my function. The problem is that the AWS CLI always prints out some information after deployment, and that information contains sensitive data, such as environment variables and their values. That is not acceptable, as I'm going to use public CI services, and I don't want that info to become available to anyone. At the same time, I don't want to solve this by redirecting all output from the AWS command to /dev/null, for example, because then I would lose information about errors and exceptions, which would make it harder to debug if something went wrong. What can I do here?
P.S. SAM is not an option, as it would force me to switch to another framework and completely change the workflow I'm using.
You could target the output you'd like to suppress by replacing those values with jq.
For example, if you had output from the CLI command like below:
{
    "FunctionName": "my-function",
    "LastModified": "2019-09-26T20:28:40.438+0000",
    "RevisionId": "e52502d4-9320-4688-9cd6-152a6ab7490d",
    "MemorySize": 256,
    "Version": "$LATEST",
    "Role": "arn:aws:iam::123456789012:role/service-role/my-function-role-uy3l9qyq",
    "Timeout": 3,
    "Runtime": "nodejs10.x",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "CodeSha256": "5tT2qgzYUHaqwR716pZ2dpkn/0J1FrzJmlKidWoaCgk=",
    "Description": "",
    "VpcConfig": {
        "SubnetIds": [],
        "VpcId": "",
        "SecurityGroupIds": []
    },
    "CodeSize": 304,
    "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
    "Handler": "index.handler",
    "Environment": {
        "Variables": {
            "SomeSensitiveVar": "value",
            "SomeOtherSensitiveVar": "password"
        }
    }
}
You might pipe that to jq and replace values only if the keys exist:
aws lambda update-function-code <args> | jq '
  if .Environment.Variables.SomeSensitiveVar? then .Environment.Variables.SomeSensitiveVar = "REDACTED" else . end |
  if .Environment.Variables.SomeOtherSensitiveVar? then .Environment.Variables.SomeOtherSensitiveVar = "REDACTED" else . end'
You know which data is sensitive and will need to set this up appropriately. You can see an example of what data is returned in the CLI docs, and the API docs are also helpful for understanding what the structure can look like.
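Alternatively, if you'd rather drop the whole block than redact individual keys, a one-line sketch with jq's del:

aws lambda update-function-code <args> | jq 'del(.Environment)'

Errors from the CLI itself still go to stderr, so they are unaffected by the pipe.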
Lambda environment variables show up everywhere and cannot be considered private.
If your environment variables are sensitive, you could consider using AWS Secrets Manager.
In a nutshell:
Create a secret in the secret store. It has a name (public) and a value (secret, encrypted, with proper access control).
Allow your lambda to access the secret store.
In your lambda env, store the name of your secret, and have your lambda fetch the corresponding value at runtime (see the sketch below).
Bonus: password rotation becomes super easy, as you don't even have to update your lambda config anymore.
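As a rough sketch of that runtime lookup with boto3 (SECRET_NAME is a hypothetical environment variable holding only the public secret name; the lambda's role needs secretsmanager:GetSecretValue):

import os
import boto3

_sm = boto3.client("secretsmanager")

def get_secret():
    # only the non-sensitive secret name lives in the lambda config
    name = os.environ["SECRET_NAME"]
    return _sm.get_secret_value(SecretId=name)["SecretString"]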
Appreciate your help in advance.
In my scenario, CloudWatch multiline logs need to be shipped to the Elasticsearch service.
ECS --awslogs--> CloudWatch --(using Lambda)--> ES domain
(Basic flow, though I am very open to changing how data is shipped from CW to ES.)
I was able to solve the multi-line issue using multi_line_start_pattern, BUT
the main issue I am experiencing now is that my logs are in ODL format (the following format):
[yyyy-mm-ddThh:mm:ss.SSS-Z][ProductName-Version][Log Level]
[Message ID][LoggerName][Key Value Pairs][[
Message]]
AND I would like to parse and tokenize log events before storing them in ES (vs. the complete log line).
For example:
[2018-05-31T11:08:49.148-0400] [glassfish 4.1] [INFO] [] [] [tid: _ThreadID=43 _ThreadName=Thread-8] [timeMillis: 1527692929148] [levelValue: 800] [[
[] INFO : (DummyApplicationFunctionJPADAO) EntityManagerFactory located under resource lookup name [null], resource name=AuthorizationPU]]
needs to be parsed and tokenized into this format:

timestamp            2018-05-31T11:08:49.148-0400
ProductName-Version  glassfish 4.1
LogLevel             INFO
MessageID
LoggerName
KeyValuePairs        tid: _ThreadID=43 _ThreadName=Thread-8
Message              [] INFO : (DummyApplicationFunctionJPADAO) EntityManagerFactory located under resource lookup name [null], resource name=AuthorizationPU
In the above, the key-value pairs repeat and are variable; for simplicity, I can store them all as one long string.
From what I have gathered about CloudWatch, Subscription Filter Pattern regex support seems very limited, and I am really not sure how to fit the above pattern. For a lambda function that pushes the data to ES, I have not seen AWS docs or examples that use a lambda as the means to parse and push to ES.
I will appreciate it if someone can advise what/where the best option is to parse CW logs before they get into ES: the Subscription Filter Pattern vs. in the lambda function, or any other way.
Thank you.
From what I can see, your best bet is what you're suggesting: a CloudWatch-log-triggered lambda that reformats the logged data into your preferred ES format and then posts it into ES.
You'll need to subscribe this lambda to your CloudWatch logs. You can do this on the lambda console, or the cloudwatch console (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html).
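If you'd rather script the subscription than click through the console, here's a small sketch with boto3 (every name below is a placeholder; the lambda also needs a resource policy allowing logs.amazonaws.com to invoke it):

import boto3

logs = boto3.client("logs")
logs.put_subscription_filter(
    logGroupName="/ecs/my-app",        # placeholder log group
    filterName="cw-to-es",             # placeholder filter name
    filterPattern="",                  # empty pattern = forward all events
    destinationArn="arn:aws:lambda:us-east-1:123456789012:function:cw-to-es",
)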
The lambda's event payload will be { "awslogs": { "data": "encoded-logs" } }, where encoded-logs is a Base64 encoding of gzipped JSON.
For example, the sample event (https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html#eventsources-cloudwatch-logs) can be decoded in node, for example, using:
const zlib = require('zlib');

// event.awslogs.data is a Base64-encoded, gzipped JSON string
const data = event.awslogs.data;
const gzipped = Buffer.from(data, 'base64');
// gunzipSync returns a Buffer holding the JSON text
const json = zlib.gunzipSync(gzipped);
const logs = JSON.parse(json);
console.log(logs);
/*
{ messageType: 'DATA_MESSAGE',
owner: '123456789123',
logGroup: 'testLogGroup',
logStream: 'testLogStream',
subscriptionFilters: [ 'testFilter' ],
logEvents:
[ { id: 'eventId1',
timestamp: 1440442987000,
message: '[ERROR] First test message' },
{ id: 'eventId2',
timestamp: 1440442987001,
message: '[ERROR] Second test message' } ] }
*/
From what you've outlined, you'll want to extract the logEvents array and parse it into an array of strings. I'm happy to give some help on this too if you need it (but I'll need to know what language you're writing your lambda in; there are libraries for tokenizing ODL, so hopefully it's not too hard).
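As a starting point, here's a rough Python sketch of tokenizing one ODL record with a regex, based only on the sample line in the question (the group names follow the fields you listed; a real ODL parsing library would be more robust):

import re

ODL = re.compile(
    r"\[(?P<timestamp>[^\]]*)\]\s*"
    r"\[(?P<product_version>[^\]]*)\]\s*"
    r"\[(?P<level>[^\]]*)\]\s*"
    r"\[(?P<message_id>[^\]]*)\]\s*"
    r"\[(?P<logger>[^\]]*)\]\s*"
    r"(?P<key_values>(?:\[[^\]]*\]\s*)*)"  # variable number of [k: v] pairs
    r"\[\[(?P<message>.*)\]\]",
    re.DOTALL,
)

record = ("[2018-05-31T11:08:49.148-0400] [glassfish 4.1] [INFO] [] [] "
          "[tid: _ThreadID=43 _ThreadName=Thread-8] [[\n  some message]]")
match = ODL.match(record)
if match:
    print(match.groupdict())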
At this point you can then POST these new records directly into your AWS ES domain. Somewhat cryptically, the S3-to-ES guide gives a good outline of how to do this in Python: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es
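A minimal sketch of that POST step in Python, in the spirit of that guide (the region, domain endpoint, and index name are placeholders; requests_aws4auth signs the request with the lambda's own credentials):

import boto3
import requests
from requests_aws4auth import AWS4Auth

region = "us-east-1"                                      # placeholder
host = "https://my-es-domain.us-east-1.es.amazonaws.com"  # placeholder

creds = boto3.Session().get_credentials()
awsauth = AWS4Auth(creds.access_key, creds.secret_key, region, "es",
                   session_token=creds.token)

# one parsed record, shaped like the tokenized fields above
doc = {"timestamp": "2018-05-31T11:08:49.148-0400", "LogLevel": "INFO"}
r = requests.post(host + "/logs/_doc", auth=awsauth, json=doc)
r.raise_for_status()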
You can find a full example for a lambda that does all this (by someone else) here: https://github.com/blueimp/aws-lambda/tree/master/cloudwatch-logs-to-elastic-cloud
Can anybody please explain how the below code works?
"def lambda_handlerOut(event, context):
if len(event) > 0:
success=1
print("length of event outside for--"+str(len(event)))
for record in event['Records']:
print("length of event--"+str(len(event)))
bucket=record['s3']['bucket']['name']
key=record['s3']['object']['key']
print("Bucket--"+bucket)
print("File that triggered this event--"+key)
Thanks in advance.
Regards,
Eleena Jose
This is a Lambda that receives S3 events - for example a PutObject request that creates a new file.
The method is the standard Python handler function; take a look at the Lambda Function Handler docs for more details.
The structure of the event is defined here, but basically there are some number of Records that are iterated through and, for each record, the bucket and key are extracted and printed.
So, in more detail (comments above the line they reference):
# standard lambda event handler definition
def lambda_handlerOut(event, context):
    # make sure that something was given - likely unneeded
    if len(event) > 0:
        success=1
        print("length of event outside for--"+str(len(event)))
        # loop through each record in Records
        for record in event['Records']:
            print("length of event--"+str(len(event)))
            # take a look at the event structure - just extracting parts
            bucket=record['s3']['bucket']['name']
            # key is the object name - that is, the file
            key=record['s3']['object']['key']
            print("Bucket--"+bucket)
            print("File that triggered this event--"+key)
EDIT
As I linked to above, the data in the event object looks something like:
{
    "Records":[
        {
            "eventVersion":"2.0",
            "eventSource":"aws:s3",
            "awsRegion":"us-east-1",
            "eventTime":"1970-01-01T00:00:00.000Z",
            "eventName":"ObjectCreated:Put",
            "userIdentity":{
                "principalId":"AIDAJDPLRKLG7UEXAMPLE"
            },
            "requestParameters":{
                "sourceIPAddress":"127.0.0.1"
            },
            "responseElements":{
                "x-amz-request-id":"C3D13FE58DE4C810",
                "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
            },
            "s3":{
                "s3SchemaVersion":"1.0",
                "configurationId":"testConfigRule",
                "bucket":{
                    "name":"mybucket",
                    "ownerIdentity":{
                        "principalId":"A3NL1KOZZKExample"
                    },
                    "arn":"arn:aws:s3:::mybucket"
                },
                "object":{
                    "key":"HappyFace.jpg",
                    "size":1024,
                    "eTag":"d41d8cd98f00b204e9800998ecf8427e",
                    "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko",
                    "sequencer":"0055AED6DCD90281E5"
                }
            }
        }
    ]
}
So, as an example, bucket=record['s3']['bucket']['name'] starts by getting the s3 record from the data, which leaves:
"s3":{
"s3SchemaVersion":"1.0",
"configurationId":"testConfigRule",
"bucket":{
"name":"mybucket",
"ownerIdentity":{
"principalId":"A3NL1KOZZKExample"
},
"arn":"arn:aws:s3:::mybucket"
},
"object":{
"key":"HappyFace.jpg",
"size":1024,
"eTag":"d41d8cd98f00b204e9800998ecf8427e",
"versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko",
"sequencer":"0055AED6DCD90281E5"
}
}
From there, it gets the bucket stanza:
"bucket":{
"name":"mybucket",
"ownerIdentity":{
"principalId":"A3NL1KOZZKExample"
},
"arn":"arn:aws:s3:::mybucket"
}
and lastly, the name:
"name":"mybucket"
This is assigned to the variable bucket, which is printed out later. The key (which is the file name in this example) works the same way but gets different parts of the event.
Does that make sense now?