Create a GCP metric that shows how often each endpoint is called - google-cloud-logging

I'm using Google Cloud Run to host a Node.js app running Express. I have a bunch of endpoints that are called by a scheduler, so basically each endpoint is a job or task. I want to know how often each task is called.
Every time an endpoint is called I print a line like START: JobName to the log. Is it possible to extract the JobName and graph each job separately on one chart, showing how often each job is run?

You could define a user-defined log-based metric with a regular expression that extracts the job name as a label.
That said, the existing request_count GCP metric for Cloud Run can already be grouped by project_id, service_name, revision_name, location, configuration_name, response_code, response_code_class, and route. That might give you the same visibility.
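As a rough sketch of the log-based-metric route, assuming the log lines look exactly like START: JobName, the metric could be created with the Node.js Logging client along these lines (the metric name, label name, and filter are assumptions, so check them against the @google-cloud/logging docs):

// Hedged sketch: a counter log-based metric that extracts the job name
// from Cloud Run log lines of the form "START: JobName".
import { v2 } from "@google-cloud/logging";

const metricsClient = new v2.MetricsServiceV2Client();

async function createJobStartMetric(projectId: string): Promise<void> {
  await metricsClient.createLogMetric({
    parent: `projects/${projectId}`,
    metric: {
      name: "job_start_count",                                        // made-up metric name
      description: "How often each scheduled job is started",
      filter: 'resource.type="cloud_run_revision" AND textPayload:"START:"',
      labelExtractors: {
        // Pulls "JobName" out of "START: JobName" into a job_name label.
        job_name: 'REGEXP_EXTRACT(textPayload, "START: (\\w+)")',
      },
      metricDescriptor: {
        metricKind: "DELTA",
        valueType: "INT64",
        labels: [{ key: "job_name", valueType: "STRING" }],
      },
    },
  });
}

Once the metric exists you can chart it in Metrics Explorer and group by the job_name label to get one line per job.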

Related

Algorithm for creating a AutoScheduling app

I am a beginner in React and decided to start with this project.
Essentially I am trying to make an auto-scheduler, which takes tasks, which are essentially objects with properties like due date, importance, subject, topic, etc.
So I am trying to develop an algorithm that takes in an array of these task objects and sorts them to fit the schedule. My problem is how to go about creating a complex algorithm like this and sorting the list.
Let's also assume that I were to add global rules such as "I don't do more than 8 hours of homework a day" or "I don't do two homework assignments for the same class in a row".
How can I go about developing such an algorithm?
Here was my idea (I haven't implemented this yet): I essentially develop an equation where I multiply each of the task object's properties by a constant, give each task a sorting number, and then sort the array by that number.
Finally I run a loop over this array and make sure that the global rules are met.
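A rough sketch of that idea, with made-up property names, weights, and rule handling:

// Hypothetical sketch of the weighted-score approach: score each task, sort by the
// score, then walk the sorted list enforcing the global rules.
interface Task {
  subject: string;
  topic: string;
  importance: number;   // e.g. 1-10
  hoursNeeded: number;
  dueInDays: number;    // days until the due date
}

const WEIGHTS = { importance: 3, urgency: 5 }; // made-up constants to tune

function score(task: Task): number {
  // Higher importance and closer due dates push a task up the list.
  return task.importance * WEIGHTS.importance + (1 / Math.max(task.dueInDays, 1)) * WEIGHTS.urgency;
}

function schedule(tasks: Task[], maxHoursPerDay = 8): Task[][] {
  const sorted = [...tasks].sort((a, b) => score(b) - score(a));
  const days: Task[][] = [[]];

  for (const task of sorted) {
    let day = days[days.length - 1];
    const hoursUsed = day.reduce((sum, t) => sum + t.hoursNeeded, 0);
    const lastTask = day[day.length - 1];

    // Global rules: at most maxHoursPerDay hours per day, and no two tasks
    // for the same class back to back.
    if (hoursUsed + task.hoursNeeded > maxHoursPerDay || lastTask?.subject === task.subject) {
      days.push([]);
      day = days[days.length - 1];
    }
    day.push(task);
  }
  return days;
}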
If you have a better idea or solution please let me know. Thanks!
You probably want to throw your process up on the cloud, so you have the benefits of fault tolerance, parallel processing, etc., etc. Here is one option.
https://cloud.google.com/run/docs/triggering/using-scheduler
Console
Visit the Cloud Scheduler console page.
Click Create job.
Supply a name for the job.
Specify the frequency, or job interval, at which the job is to run, using a configuration string. For example, the string 0 */3 * * * runs the job every 3 hours. The string you supply here can be any crontab compatible string.
For more information, see Configuring Job Schedules.
From the dropdown list, choose the timezone to be used for the job frequency.
Specify HTTP as the target:
Specify the fully qualified URL of your service, for example https://myservice-abcdef-uc.a.run.app. The job will send requests to this URL.
Specify the HTTP method: the method must match what your previously deployed Cloud Run service is expecting. The default is POST.
Optionally, specify the data to be sent to the target. This data is sent in the body of the request when either the POST or PUT HTTP method is selected.
Click More to show the auth settings.
From the dropdown menu, select Add OIDC token.
In the Service account field, copy the service account email of the service account you created previously.
In the Audience field, copy the full URL of your service.
Click Create to create and save the job.
Gcloud
Create the job:
gcloud beta scheduler jobs create http test-job --schedule "5 * * * *" \
--http-method=HTTP-METHOD \
--uri=SERVICE-URL \
--oidc-service-account-email=SERVICE-ACCOUNT-EMAIL \
--oidc-token-audience=SERVICE-URL
Replace
HTTP-METHOD with the HTTP method, e.g., GET, POST, PUT, etc.
SERVICE-URL with your service URL.
SERVICE-ACCOUNT-EMAIL with your service account email.
Also, this is kind of the old-fashioned way of doing it, but I grew up in the 80s and early 90s, so I'm still a fan of the Windows Task Scheduler.
https://www.windowscentral.com/how-create-automated-task-using-task-scheduler-windows-10

Flink web UI: Monitor Metrics doesn't work

I run Flink 1.9.0 on YARN (2.6.0-cdh5.11.1), but the Flink web UI metrics don't work, as shown below:
I guess you are looking at the wrong metrics. Since no data flows from one task to another (you can see only one box in the UI), there is nothing to show. The metrics you are looking at only show the data that flows from one Flink task to another. In your example everything happens within a single task.
Look at this example:
You can see two tasks sending data to the map-task which emits this data to another task. Therefore you see incoming and outgoing data.
But on the other hand, a source task never has incoming data (I must admit that this is confusing at first glance):
The number of records received is 0, but it sends a couple of records to the downstream task.
Back to your problem: what you can do is have a look at the operator metrics. If you look at the Metrics tab (the one at the very right) you can select, besides the task metrics, some operator metrics. These metrics have a name like 0.Map.numRecordsIn.
The name is assembled like this: <slot>.<operatorName>.<metricName>. But be aware that these metrics are not persisted anywhere: you don't have any historic data, and once you leave this tab or remove a metric, the data collected until that point is gone. I would recommend using a proper metrics backend like InfluxDB, Prometheus, or Graphite. You can find a description in the Flink docs.
Hope that helped.

How can I trigger one AWS Lambda function from another, guaranteeing the second only runs once?

I've built a bit of a pipeline of AWS Lambda functions using the Serverless framework. There are currently five steps/functions, and I need them to run in order and each run exactly once. Roughly, the functions are:
Trigger function by an HTTP request, respond with an ID.
Access an API to get the URL of a resource to download.
Download that resource and upload a copy to S3.
Alter that resource and upload the altered copy to S3.
Submit the altered resource to a different API.
The specifics aren't important, but the question is: What's the best event/trigger to use to move along down this line of functions? The first one is triggered by an HTTP call, but the first one needs to trigger the second somehow, then the second triggers the third, and so on.
I wrote all the code using AWS SNS, but now that I've deployed it to staging I see that SNS often triggers more than once. I could add a bunch of code to detect this, but I'd rather not. And the problem is also compounding -- if the second function gets triggered twice, it sends two SNS notifications to trigger step three. If either of those notifications gets doubled... it's not unreasonable that the last function could be called ten times instead of once.
So what's my best option here? Trigger the chain through HTTP? Kinesis maybe? I have never worked with a trigger other than HTTP or SNS, so I'm not really sure what my options are, and which options are guaranteed to only trigger the function once.
AWS Step Functions seems pretty well targeted at this use-case of tying together separate AWS operations into a coherent workflow with well-defined error handling.
Not sure if the pricing will work for you (can be pricey for millions+ operations) but it may be worth looking at.
Also not sure about performance overhead or other limitations, so YMMV.
You can simply trigger the next lambda asynchronously in your lambda function after you complete the required processing in that step.
So, the first lambda is triggered by an HTTP call, and in that lambda execution, after you finish processing this step, just launch the next lambda function asynchronously instead of sending the trigger through SNS or Kinesis. Repeat this process in each of your steps. This would guarantee a single execution of each of the steps by Lambda.
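A rough sketch of that hand-off with the Node.js AWS SDK (the next step's function name is a placeholder):

// Hypothetical sketch: invoke the next step's Lambda asynchronously.
// InvocationType "Event" returns immediately without waiting for the result.
import { Lambda } from "aws-sdk";

const lambda = new Lambda();

export async function finishStepAndTriggerNext(payload: object): Promise<void> {
  // ... finish this step's own processing here ...

  await lambda
    .invoke({
      FunctionName: "pipeline-step-2",   // placeholder name of the next function
      InvocationType: "Event",           // asynchronous, fire-and-forget
      Payload: JSON.stringify(payload),
    })
    .promise();
}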
Eventful Lambda triggers (SNS, S3, CloudWatch, ...) generally guarantee at-least-once invocation, not exactly-once. As you noted you'd have to handle deduplication manually by, for example, keeping track of event IDs in DynamoDB (using strongly consistent reads!), or by implementing idempotent Lambdas, meaning functions that have no additional effects even when invoked several times with the same input. In your example step 4 is essentially idempotent providing that the function doesn't have any side effects apart from storing the altered copy, and that the new copy overwrites any previously stored copies with the same event ID.
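One way to implement that bookkeeping (a conditional write rather than a separate read) is to let DynamoDB reject a second insert of the same event ID; a hedged sketch with made-up table and attribute names:

// Hypothetical sketch: record an event ID and detect duplicate deliveries.
import { DynamoDB } from "aws-sdk";

const ddb = new DynamoDB.DocumentClient();

export async function isFirstDelivery(eventId: string): Promise<boolean> {
  try {
    await ddb
      .put({
        TableName: "processed-events",                          // placeholder table
        Item: { eventId, processedAt: Date.now() },
        ConditionExpression: "attribute_not_exists(eventId)",   // fail if already recorded
      })
      .promise();
    return true;    // first time this event ID is seen
  } catch (err: any) {
    if (err.code === "ConditionalCheckFailedException") {
      return false; // duplicate delivery, skip processing
    }
    throw err;
  }
}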
One service that does guarantee exactly-once delivery out of the box is SQS FIFO. This service unfortunately cannot be used to trigger Lambdas directly so you'd have to set up a scheduled Lambda to poll the FIFO queue periodically (as per this answer). In your case you could handle step 5 with this arrangement, since I'm assuming you don't want to submit the same resource to the target API several times.
So in summary here's how I'd go about it:
Lambda A, invoked via HTTP, responds with ID and proceeds to asynchronously fetch resource from the API and store it to S3
Lambda B, invoked by S3 upload event, downloads the uploaded resource, alters it, stores the altered copy to S3 and finally pushes a message into the FIFO SQS queue using the altered resource's filename as the distinct deduplication ID
Lambda C, invoked by CloudWatch scheduler, polls the FIFO SQS queue and upon a new message fetches the specified altered resource from S3 and submits it to the other API
With this arrangement even if Lambda B is occasionally executed twice or more by the same S3 upload event there's no harm done since the FIFO SQS queue handles deduplication for you before the flow reaches Lambda C.
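A rough sketch of Lambda B's push into the FIFO queue (the queue URL and names are placeholders); SQS FIFO then drops any later message sent with the same deduplication ID within the deduplication window:

// Hypothetical sketch: enqueue the altered resource's S3 key with a deduplication ID,
// so a double-triggered Lambda B cannot enqueue the same resource twice.
import { SQS } from "aws-sdk";

const sqs = new SQS();

export async function enqueueAlteredResource(s3Key: string): Promise<void> {
  await sqs
    .sendMessage({
      QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/pipeline.fifo", // placeholder
      MessageBody: JSON.stringify({ s3Key }),
      MessageGroupId: "pipeline",      // required for FIFO queues
      MessageDeduplicationId: s3Key,   // duplicates with the same ID are dropped
    })
    .promise();
}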
AWS Step Functions is meant for you: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
You execute the steps you want based on the previous steps' outputs.
Each task/step just needs to output JSON correctly for the wanted "state".
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-states.html
Based on the state, your workflow moves on. You can create your workflow easily and trigger Lambdas or ECS tasks.
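As a hedged illustration, a two-step state machine covering the download/alter part of the pipeline could be created roughly like this (the ARNs, names, and role are all placeholders):

// Hypothetical sketch: each Task state runs a Lambda, and its JSON output becomes
// the input of the Next state.
import { StepFunctions } from "aws-sdk";

const stepFunctions = new StepFunctions();

const definition = {
  StartAt: "DownloadResource",
  States: {
    DownloadResource: {
      Type: "Task",
      Resource: "arn:aws:lambda:us-east-1:123456789012:function:download-resource", // placeholder
      Next: "AlterResource",
    },
    AlterResource: {
      Type: "Task",
      Resource: "arn:aws:lambda:us-east-1:123456789012:function:alter-resource",    // placeholder
      End: true,
    },
  },
};

export async function createPipeline(): Promise<void> {
  await stepFunctions
    .createStateMachine({
      name: "resource-pipeline",                                    // placeholder
      definition: JSON.stringify(definition),
      roleArn: "arn:aws:iam::123456789012:role/StepFunctionsRole",  // placeholder
    })
    .promise();
}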
ECS tasks are your own "Lambda" environment, running without the constraints of the AWS Lambda runtime.
With ECS tasks you can run on bare metal, on your own EC2 machines, or in Docker containers on ECS, and thus have far more extensible limits.
Compared to that, Lambda's limits are pretty strict: 512 MB of disk, capped execution time, etc.

Run NiFi flow once and notify me when it is finished

I use the REST API in my program. I made a processor group for converting a MongoDB collection to a JSON file:
I want to run the scheduling only one time, so I set the "Run schedule" to 10000 sec. Then I will stop the group once the data flow has run one time. I made a Notify processor and added a DistributedMapCacheService, but the DistributedMapCacheClientService of the Notify processor only communicates with the DistributedMapCacheService inside NiFi itself; it never notifies my program.
I tried to use my own socket server, but I only get the message "nifi" and nothing more.
My question is: if I only want the schedule to run once and then stop it, how do I know when to stop it? Or is there some other way to achieve my purpose, like detecting whether the JSON file exists, or using incremental data (if the schedule runs twice, the data will be duplicated)?
As @daggett said, you can do it in a synchronous way: use HandleHttpRequest as the trigger and HandleHttpResponse to manage the response.
For an asynchronous way you have several options for the notification, like PutTCP, PostHTTP, GetHTTP, FTP, the file system, XMPP, or whatever.
If the schedule runs twice, whether elements are duplicated depends on the processors you use; some of them keep state, others don't. If you are facing problems with repeated elements you can use the DetectDuplicate processor.
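For the synchronous option, the calling program simply makes an HTTP request to the HandleHttpRequest listener and gets its response only once the flow reaches HandleHttpResponse; a rough sketch with a made-up host, port, and path:

// Hypothetical sketch: trigger the flow over HTTP and block until it finishes.
// Assumes HandleHttpRequest listens on port 8011 at the start of the process group
// and HandleHttpResponse sits at the end; host, port, and path are made up.
import http from "http";

export function runFlowOnce(): Promise<string> {
  return new Promise((resolve, reject) => {
    const req = http.request(
      { host: "nifi-host", port: 8011, path: "/run-export", method: "POST" },
      (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        // The response arrives only when the flow reaches HandleHttpResponse,
        // i.e. when the export has finished.
        res.on("end", () => resolve(body));
      }
    );
    req.on("error", reject);
    req.end();
  });
}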

Multiple flows with nifi

We have multiple (50+) NiFi flows that all do basically the same thing: pull some data out of a DB, append some columns, convert to Parquet, and upload to HDFS. They differ only in details such as the SQL query to run or the location in HDFS where they land.
The question is how to factor out the common parts of these NiFi flows such that any change made to the common flow automatically applies to all derived flows. E.g., if I want to add an extra step to also publish the data to Kafka, I want to make this change once and have it automatically apply to all 50 flows.
We've tried to get this working with NiFi Registry, however it seems like an imperfect fit. Essentially the issue is that NiFi Registry seems to work well for updating a flow in one environment (say UAT) and then automatically updating it in another environment (say prod). It seems less suited to updating multiple flows in the same environment, one specific example being that it will reset the name of each flow to the template name every time we redeploy, meaning that all flows end up with the same name!
Does anyone know how one is supposed to manage a situation like ours, as I guess it must be pretty common?
Apache NiFi has process groups. As the name suggests, process groups are there to group together a set of processors and their pipeline that perform a similar task.
So for your case, you can refactor the flow by moving the common part, which can be reused by different pipelines, into a separate process group with an input port. Connect each outside flow that depends on this reusable flow to the input port of the reusable process group. Depending on your requirements, you can also create an output port in this process group and connect it back to the outside flow.
Attaching a sample:
For the sake of explanation, I have made a mock flow, so ignore the processor types used and instead look at the names I have given those processors.
The following screenshots show that I read from two different sources and individually connect them to two different processors that make the source-specific changes.
Then I connect these two flows to the input port of a process group that has the reusable flow inside. So ultimately the two different flows shown in the above screenshot work with a common reusable flow.
Showing what's inside the reusable flow:
Finally, the output port output to outside connects the reusable flow to the outside component Write to somewhere.
I hope this helps you refactor your complex flows. Feel free to get back to me if you have any queries.
