I want to programmatically dump logs from OpenWhisk into an external service. I can do this by capturing log output and then posting it at the end of every action, but this adds overhead to my function.
Is there a way to get this data from the OpenWhisk API similar to wsk activation logs ACTIVATION_ID?
Action logs are available through the platform API. Console output from actions (stdout or stderr) is stored in the activation records.
Activation records can be accessed by sending an HTTP request to the following endpoint:
/namespaces/{namespace}/activations/{activationid}/logs
Client libraries for accessing the API are available for multiple languages.
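If you'd rather not use a client library, a minimal Python sketch of calling that endpoint directly looks roughly like this; the API host, namespace, activation id and auth key are placeholders you would substitute:

import requests

APIHOST = "https://openwhisk.example.com"  # your OpenWhisk API host (placeholder)
AUTH_KEY = "uuid:key"                      # value of `wsk property get --auth` (placeholder)

def fetch_activation_logs(namespace: str, activation_id: str):
    # GET /api/v1/namespaces/{namespace}/activations/{activationid}/logs
    url = f"{APIHOST}/api/v1/namespaces/{namespace}/activations/{activation_id}/logs"
    user, password = AUTH_KEY.split(":", 1)
    resp = requests.get(url, auth=(user, password))
    resp.raise_for_status()
    # The response contains the stdout/stderr lines captured for the activation,
    # which you can then forward to your external log service.
    return resp.json().get("logs", [])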
I am having trouble figuring out how the Datadog forwarder encodes/encrypts the messages it sends. We are using the forwarder as described in the following documentation: https://docs.datadoghq.com/serverless/forwarder/ . On that page, Datadog offers an option to send the same event to another Lambda that it invokes via the AdditionalTargetLambdaARNs flag. We are doing this and the other Lambda is invoked, but the event input we receive is a long string that looks base64 encoded; when I put it into a base64 decoder, I get gibberish back. Does anyone know how Datadog compresses/encodes/encrypts the data/logs it forwards, so that I can read the logs in my Lambda and perform actions based on the forwarded data? I have been searching Google and the Datadog site for documentation on this but can't find any.
It looks like Datadog compresses its data with zstd before sending it: https://github.com/DataDog/datadog-agent/blob/972c4caf3e6bc7fa877c4a761122aef88e748b48/pkg/util/compression/zlib.go
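If that is what is happening, something along these lines may decode the event in your Lambda. This is a Python sketch built on that assumption (base64, then zstd with a zlib fallback), not documented behaviour:

import base64
import json
import zlib

import zstandard  # pip install zstandard

def decode_forwarded_event(raw: str):
    data = base64.b64decode(raw)
    try:
        # Assumption: zstd-compressed, as the linked agent code suggests.
        decompressed = zstandard.ZstdDecompressor().decompress(
            data, max_output_size=16 * 1024 * 1024
        )
    except zstandard.ZstdError:
        # Fallback assumption: plain zlib/deflate.
        decompressed = zlib.decompress(data)
    return json.loads(decompressed)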
I'm trying out the .NET agent in Elastic APM with a C# application built using a framework called ASP.NET Boilerplate. I've added the core libraries as mentioned in the documentation and added the settings in appsettings.json. This enables the default instrumentation, and I got traces in APM visualized through Kibana.
Currently I've got a Node.js application running, and I publish a message to a RabbitMQ queue with the traceparent in the message payload. The C# app reads the published message. I need to create a transaction or span using this traceparent / trace id so that Kibana will show the trace across the distributed systems.
I want to know if there is a way to create a transaction (or span) using a traceparent that is being sent from another system not using an HTTP protocol. I've checked the Elastic APM agent documentation -> Public API for information but couldn't find anything on this. Is there a way? Thanks.
I want to know if there is a way to create a transaction (or span) using a traceparent that is being sent from another system not using an HTTP protocol.
Yes, this is possible and there is an API for it. This part of the documentation explains it.
So you'll need to do this when you start your transaction - I imagine in your scenario this will be when you read a message from RabbitMQ.
When you start the transaction there is an optional parameter called distributedTracingData - if you pass it, the transaction will reuse the trace id you passed through RabbitMQ, and this way the new transaction will be part of the whole trace. If you don't pass this parameter, a new trace id will be generated and a new trace will be started.
Another comment that may help: you pass the trace id into the method where you start the transaction, and each span within that transaction will inherit it - so you control this at the transaction level and, accordingly, you don't pass it into a span.
Here is a small code snippet showing how this would look:
// Read the serialized distributed tracing data from the RabbitMQ message payload
// (message.TraceParent below is a placeholder for however your consumer exposes it).
var serializedDistributedTracingData = message.TraceParent;
var transaction2 = Agent.Tracer.StartTransaction("ReadFromQueue", "RabbitMQRead",
    DistributedTracingData.TryDeserializeFromString(serializedDistributedTracingData));
@gregkalapos, thank you again for the information. I checked how to acquire the necessary trace information as described in the Node.js agent documentation, and when I debugged I noticed it was the trace id. Next, on the C# consumer end, I placed a code snippet as mentioned for the .NET agent and gave it a run. Kibana displayed the transactions from the two different services in a single trace, as I hoped it would.
I have created an Alexa smart home function and want to run it asynchronously, so I plan to use Amazon SQS (Simple Queue Service). I connected an SQS trigger to the Lambda function and am successfully able to send messages from SQS to Lambda. Now I need to connect Alexa to the SQS input. When I try to use the SQS ARN in the Alexa developer console, it is not supported. Is there any way to solve this, or does Alexa only support Lambda functions for invocation?
The Alexa skill is a smart home service to control switches (turn on/off). When I try to control multiple switches, the synchronous execution of the Lambda turns the switches on one after the other. I need to control them in a single shot, so I need asynchronous execution for Lambda, where requests execute without waiting for the response.
Thanks in advance for answers.
It will not work, as SQS is asynchronous and just replies that the message was put on the queue. But Alexa needs a valid JSON response with a speech tag and so on immediately, and SQS is not able to fulfill this.
What you could do:
Alexa -> Lambda (new) -> SQS -> Lambda
In your newly created Lambda you could give a valid reply to Alexa and put a message in SQS.
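As a rough illustration of that fronting Lambda (Python; the queue URL comes from an assumed environment variable, and the response shape is only a skeleton of a smart home reply), it could look like this:

import json
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # assumed: URL of the SQS queue feeding the worker Lambda

def handler(event, context):
    # Hand the smart home directive off to SQS for asynchronous processing.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))

    # Reply to Alexa immediately with a well-formed response so the session isn't left waiting.
    # Note: a real smart home response also needs a "context" with the reported properties
    # (e.g. powerState) matching the directive you handled.
    header = event["directive"]["header"]
    return {
        "event": {
            "header": {
                "namespace": "Alexa",
                "name": "Response",
                "messageId": header["messageId"],
                "correlationToken": header.get("correlationToken"),
                "payloadVersion": "3",
            },
            "endpoint": event["directive"].get("endpoint", {}),
            "payload": {},
        }
    }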
AWS Lambda can work asynchronously. You can have a bunch of back-end processes all working as they need to, triggering various Lambdas as needed.
But the exchange with Alexa opens a session to your backend, sends its request, and the full response is expected to end that session. That response may have directives to download other content to incorporate into the response, like a sound file or lazy loading a list in APL. But it is expecting a full response.
If you go through the basic Cake Time tutorial for building Alexa skills, they actually use async-await for some APIs because that response has to be complete before it's sent.
There are some async APIs like reminders and proactive events, but they're NOT conversational. They're unique one-way messages.
The real questions are why do you feel you need to do it this way and what are you optimizing for by queuing?
Example use case
Send the user a notification 2 hours after signup.
Options considered
setTimeout(() => { /* send notification */ }, 2*60*60*1000); is not an option in serverless environments since the function terminates after execution (so it has to be stateless).
CloudWatch events can schedule lambda invocations using cron expressions - but this was designed for repetitive invocations (there's a limit of 100 rules/region).
I have not seen scheduling options in AWS SNS/SQS or GCP Pub/Sub. Are there alternatives with scheduling?
I want to avoid (if possible) setting up a dedicated message broker (overkill) or stateful/non-serverless instance - is there a serverless way to do this?
I can queue the events in a database and invoke a lambda function every minute to poll the database for events to execute in that minute... is there a more elegant solution?
Use AWS Step Functions; they are serverless workflows that don't have the 15-minute limit that AWS Lambda does. You can design a workflow in Step Functions that integrates with API Gateway, Lambda and SNS to send email and text notifications as follows:
Create a REST API via API Gateway that will invoke a Lambda function, passing in, for example, the destination address (email, phone #) of the SNS notification, when it should be sent, and the notification method (e.g. email, text, etc.).
The Lambda function, on invocation, will invoke the Step Function, passing in the data (Lambda is needed because API Gateway currently can't invoke Step Functions directly).
The Step Function is basically a workflow: you can define states for waiting (like waiting for the specified time to send the notification, e.g. 30 seconds), and states for invoking other Lambda functions that can use SNS to send out email and/or text notifications.
A rudimentary example is provided by AWS with their Task Timer example.
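As a sketch of the first two steps (Python with boto3; the state machine ARN, the request fields, and the Wait/SNS states assumed downstream are placeholders, not a definitive implementation):

import json
import os
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]  # assumed environment variable

def handler(event, context):
    # Invoked by API Gateway; the request body carries the destination and the delay.
    body = json.loads(event["body"])
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        # The state machine is assumed to start with a Wait state reading "waitSeconds",
        # followed by a Task state that publishes the notification via SNS.
        input=json.dumps({
            "destination": body["destination"],      # e.g. email address or phone number
            "method": body.get("method", "email"),   # e.g. email, text
            "waitSeconds": body.get("waitSeconds", 7200),
        }),
    )
    return {"statusCode": 202, "body": json.dumps({"status": "scheduled"})}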
Things are coming on GCP for doing this, but not very soon. So, for now, the solution is to poll a database.
You can do that with Datastore/Firestore, with the execution datetime indexed (so you don't have to read all the documents every minute). But be careful of traffic spikes - you could create a hotspot.
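A rough sketch of that polling job (Python with the Firestore client, invoked every minute; the collection and field names are illustrative assumptions):

import datetime

from google.cloud import firestore

db = firestore.Client()

def poll_due_tasks(event=None, context=None):
    # Fetch only the tasks whose scheduled time has passed, relying on the indexed
    # execution datetime instead of scanning the whole collection each run.
    now = datetime.datetime.now(datetime.timezone.utc)
    due = (
        db.collection("scheduled_tasks")
        .where("sent", "==", False)
        .where("run_at", "<=", now)
        .stream()
    )
    for doc in due:
        task = doc.to_dict()
        # send the notification for `task` here (email, push, etc.), then mark it as done
        doc.reference.update({"sent": True})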
You can use Cloud Scheduler on Google Cloud Platform. As is stated in the official documentation:
Cloud Scheduler is a fully managed enterprise-grade cron job scheduler. It allows you to schedule virtually any job, including batch, big data jobs, cloud infrastructure operations, and more. You can automate everything, including retries in case of failure to reduce manual toil and intervention. Cloud Scheduler even acts as a single pane of glass, allowing you to manage all your automation tasks from one place.
Here you can check a quickstart for using it with Pub/Sub and Cloud Functions.
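To give an idea of the moving parts in that quickstart, here is a minimal sketch of a Pub/Sub-triggered Cloud Function that a Cloud Scheduler job could target (function name and payload are illustrative):

import base64

def send_notification(event, context):
    # Cloud Scheduler publishes the job payload to a Pub/Sub topic; Pub/Sub delivers it here.
    payload = base64.b64decode(event["data"]).decode("utf-8") if "data" in event else ""
    print(f"Scheduled job fired with payload: {payload}")
    # ... send the actual notification (email, push, etc.) here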
Imagine we want to check, two weeks after a user's registration, whether she has been active, and otherwise we want to notify her.
To achieve this we currently use the following setup (this runs on Heroku):
The parse server puts a task into the Redis queue. The worker fetches tasks from that queue. Then it performs checks on the activity of the user. For this it needs to access the parse server to fetch that information. This puts additional load on our API.
I imagine the following scenario to be better:
I wonder: is it possible to achieve this scenario using parse server? (The worker dynos don't have an HTTP interface to run a parse server...)