how to see console.log in AWS lambda functions - aws-lambda

Where do you see the console.log() calls made inside AWS Lambda functions? I looked at the AWS CloudWatch event log and didn't see them there. Is there a CLI way to see them?

console.log() should definitely end up in the CloudWatch logs for your function. You should be able to find the correct log group in the web console interface for your function under the Monitoring tab - Jump to Logs. Note that you will have a different log stream for each invocation of your function, and there may be a delay between logs being written and logs showing up in a stream, so be patient.
It's possible you do not have the IAM permissions to create log groups or write to log streams. Ashan has provided links on how to fix that.
Additionally, you can use the awslogs tool to list groups/streams, as well as to download or tail groups/streams:
To list available groups: awslogs groups
To list available streams in group app/foo: awslogs streams app/foo
To "tail -f" all streams from a log group app/foo: awslogs get app/foo ALL --watch

Make sure the IAM role assigned to the AWS Lambda function has permission to write to CloudWatch Logs. For more information regarding the policy, refer to Using Identity-Based Policies (IAM Policies) for CloudWatch Logs.
In addition, you should be able to view the CloudWatch log group by clicking on CloudWatch Logs under Add Triggers in the Lambda console.
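If you prefer the CLI, a quick sketch for following a function's logs with AWS CLI v2 (the function name below is a placeholder; Lambda log groups follow the /aws/lambda/&lt;function-name&gt; convention):

```shell
# Lambda writes to a log group named after the function (hypothetical name)
fn_name="my-function"
log_group="/aws/lambda/${fn_name}"

# Show the last 10 minutes of log events (AWS CLI v2);
# add --follow to keep streaming new events like "tail -f"
aws logs tail "$log_group" --since 10m
```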

Related

Get corresponding lambda log stream for statemachine execution

State machine logs for a specific execution can easily be queried in CloudWatch via the execution_arn field in the log events.
However, how can I find out which logs/log streams of the Lambda functions correspond to which state machine execution? I haven't found any connection so far. Ideally, I would have expected to find an execution_arn field in the Lambda logs as well when they are called from Step Functions.
There is no connection. You can, however, manually associate your Lambda logs with a State Machine execution by including the execution ARN in your input payload and logging it in Lambda. The execution ARN is available on the State Machine context object:
"Execution.$": "$$.Execution.Id"

How to view and interpret Vertex AI logs

We have deployed Models in the Vertex AI endpoint.
Now we want to understand and interpret the logs regarding events
such as node creation, Pod creation, user API call metrics, etc.
Is there any way or key by which we can filter the logs for Analysis?
As your question is quite general, I will provide a general answer which might also help other members.
There is documentation which explains Vertex AI logging: Vertex AI audit logging information.
Google Cloud services write audit logs to help you answer the questions, "Who did what, where, and when?" within your Google Cloud resources.
Currently, Vertex AI supports two types of audit logs:
Admin Activity audit logs
Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions.
Data Access audit logs
Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
The two other types, System Event audit logs and Policy Denied audit logs, are currently not supported in Vertex AI. You can find more information in the guide Google services with audit logs.
If you want to view audit logs, you can use the Console, the gcloud command, or the API. Depending on how you want to get them, follow the steps mentioned in Viewing audit logs. For example, if you use the Console, you will use the Logs Explorer.
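As a command-line sketch, you could list recent Vertex AI audit log entries with gcloud; the filter below assumes the standard aiplatform.googleapis.com service name used by the Vertex AI API:

```shell
# Filter audit log entries emitted by the Vertex AI API
# (adjust --limit and --freshness as needed for your analysis)
filter='protoPayload.serviceName="aiplatform.googleapis.com"'
gcloud logging read "$filter" --limit 10 --freshness 1d
```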
Additional threads which might be helpful:
How do we capture all container logs on google Vertex AI?
How to structure container logs in Vertex AI?
For container logs (logs that are created by your model) you currently can't. The entire log entry is captured by the Vertex AI platform and assigned as a string to the "message" field within the parent "jsonPayload" field.
The answer above by @PjoterS suggests a workaround for that limitation, which isn't easy in my opinion.
It would have been better if Vertex AI offered some mechanism to log directly to the endpoint resource from the container using their gcloud logging library, or better, unpacked the captured log fields as subfields under the "jsonPayload" parent field, or into "message".
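If you only need to inspect the raw captured strings, a rough sketch is to read the endpoint logs and pull out the "message" field with jq; the resource type below is an assumption, so verify it in the Logs Explorer for your project first:

```shell
# Read recent endpoint container logs and extract the captured "message"
# string (resource type is an assumption; requires jq to be installed)
filter='resource.type="aiplatform.googleapis.com/Endpoint"'
gcloud logging read "$filter" --limit 5 --format json \
  | jq -r '.[].jsonPayload.message'
```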

any script to know all the AWS resources created by certain IAM user

Good day,
Is there any script or AWS CLI command to find out which IAM user created which resources in AWS, so that we can just enter the IAM user name and see all the resources created by that particular user?
Thanks in advance.
The service that you're looking for is CloudTrail.
By default, it retains 90 days worth of events for the current account and region, and you can access it from either the Console or the CLI. You can also configure it to write events to S3, where they'll be preserved for as long as you want to pay for the storage (this also lets you capture events across all regions, and for every account in an organization).
CloudTrail events can be challenging to search. If you're just looking for events by a specific user, and know that user's access key (here I'm using my access key stored in an environment variable), you can use a query like this:
aws cloudtrail lookup-events --lookup-attributes "AttributeKey=AccessKeyId,AttributeValue=$AWS_ACCESS_KEY_ID" --query 'Events[].[EventTime,EventName,Username,EventId]' --output table
Or, by username:
aws cloudtrail lookup-events --lookup-attributes "AttributeKey=Username,AttributeValue=parsifal" --query 'Events[].[EventTime,EventName,Username,EventId]' --output table
You can then use grep to find the event(s) that interest you, and dig into the details of a specific event with:
aws cloudtrail lookup-events --lookup-attributes "AttributeKey=EventId,AttributeValue=8c5a5d8a-9999-9999-9999-a8e4b5213c3d"

Stream AWS cloudwatch logs to elasticcloud using functionbeat

I cannot find a good document which gives details about how to stream data from AWS CloudWatch to Elastic Cloud.
I have set it up as follows for now, but I cannot see data in the Elastic Cloud index.
I have installed functionbeat locally and updated the config as follows:
functionbeat.provider.aws.endpoint: "s3.amazonaws.com"
functionbeat.provider.aws.deploy_bucket: "filebeat-deploy"
functionbeat.provider.aws.functions:
  - name: cloudwatch
    enabled: true
    type: cloudwatch_logs
    description: "lambda function for cloudwatch logs"
    triggers:
      - log_group_name: my_log_group_name

cloud.id: "<cloud_id>"
cloud.auth: "<username:password>"
I followed this document - https://www.elastic.co/guide/en/beats/functionbeat/current/configuration-functionbeat-options.html
And then I ran ./functionbeat deploy cloudwatch to deploy the function.
I have checked that I can see the deployment in the bucket filebeat-deploy.
I cannot see the logs from my_log_group_name in Elastic Cloud.
This is possibly because AWS is not able to make a successful connection to your Elastic Cloud deployment. It is usually a protocol issue (for example, if you set the output.elasticsearch host to localhost:9200, AWS cannot reach that localhost URL unless it is public) or a permissions issue. If you check the CloudWatch logs of the Functionbeat Lambda function, you can see the actual issue. Put logging.level: debug in functionbeat.yml for detailed logs.
Also, you will not see the logs in Kibana right after deploying Functionbeat. Once the subscription filter has been added to the log group after a successful deployment, you have to invoke the function that writes to that log group, not the Functionbeat Lambda function itself, because the trigger is attached to the Functionbeat Lambda function. In your case, you added a trigger to get logs from the log group my_log_group_name, so whenever a new item is added to this log group it will automatically invoke the Functionbeat Lambda function.
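To test the pipeline end to end, you can invoke the function that writes into the subscribed log group; the function name below is a hypothetical placeholder for whatever writes to my_log_group_name:

```shell
# Invoking the source function writes to my_log_group_name, which in turn
# triggers the Functionbeat lambda via the subscription filter
src_fn="my-source-function"   # hypothetical name
aws lambda invoke --function-name "$src_fn" \
  --payload '{}' --cli-binary-format raw-in-base64-out /tmp/out.json
```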

Monitoring AWS Lambda errors

I want to view AWS lambda last hour errors of two types:
Lambda function that finished with an error
Lambda function returned http 500
How should I do that?
If you have many lambdas, it can be difficult to identify exactly which lambda caused an error. Here is how to find out, even if you have hundreds of lambdas.
In CloudWatch, go to the Metrics page, then go to the Graph Metrics tab, then navigate to the dropdown menu item “Math expression > Search > Lambda Throttles or Errors.”
This will give you error counts per lambda in a graph, mouse over to get the name of the offending lambda.
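For the "last hour" part specifically, a CLI sketch using the built-in AWS/Lambda Errors metric (the function name is a placeholder, and the date syntax below is GNU date; adjust for other platforms):

```shell
# Sum of errors over the last hour for one function, from CloudWatch metrics
fn_name="my-function"   # hypothetical function name
start=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S)
end=$(date -u +%Y-%m-%dT%H:%M:%S)
aws cloudwatch get-metric-statistics \
  --namespace AWS/Lambda --metric-name Errors \
  --dimensions Name=FunctionName,Value="$fn_name" \
  --start-time "$start" --end-time "$end" \
  --period 3600 --statistics Sum
```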
Once you launch an AWS Lambda function, it is automatically watched by CloudWatch.
Lambda function that finished with an error
You can see Lambda function errors in the Monitoring tab of the Lambda default view.
Lambda function returned http 500
I guess your Lambda function is a web API. If your web API is backed by a Lambda function, you need to write logs to standard output in order to see them in CloudWatch.
Please find the documentation at Accessing Amazon CloudWatch Logs for AWS Lambda.
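To find the HTTP 500 responses your code logged, you could filter the function's log group over the last hour; the log group name and the filter pattern are assumptions about what your API actually logs:

```shell
# Search the last hour of log events for entries containing "500"
log_group="/aws/lambda/my-function"   # hypothetical function name
start_ms=$((($(date +%s) - 3600) * 1000))   # one hour ago, in milliseconds
aws logs filter-log-events \
  --log-group-name "$log_group" \
  --start-time "$start_ms" \
  --filter-pattern '"500"'
```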
NOTE: only if you use serverless:
Alternatively, you can monitor your lambda function logs using serverless cli.
For example, to get logs from the past hour:
sls logs -f functionName --startTime 1h
You can also filter on the string 'error' in the past hour:
sls logs -f functionName --startTime 1h --filter error
Please check the docs.
You could enable X-Ray traces from the Lambda dashboard (Lambda console: Enable X-Ray tracing).
The X-Ray service displays trace mappings for lambda execution results. The service is great for checking the results of errors within lambda functions, but if you are looking for detailed error result logs, CloudWatch is your best bet.
You could also try something like Logbird that processes CloudWatch streams for all errors in AWS Lambda, API Gateway and other cloud services and can trigger notifications.
