Here is the JSON data that I am trying to send from Filebeat to the ingest pipeline "logpipeline.json" in OpenSearch.
JSON data:
{
"#timestamp":"2022-11-08T10:07:05+00:00",
"client":"10.x.x.x",
"server_name":"example.stack.com",
"server_port":"80",
"server_protocol":"HTTP/1.1",
"method":"POST",
"request":"/example/api/v1/",
"request_length":"200",
"status":"500",
"bytes_sent":"598",
"body_bytes_sent":"138",
"referer":"",
"user_agent":"Java/1.8.0_191",
"upstream_addr":"10.x.x.x:10376",
"upstream_status":"500",
"gzip_ratio":"",
"content_type":"application/json",
"request_time":"6.826",
"upstream_response_time":"6.826",
"upstream_connect_time":"0.000",
"upstream_header_time":"6.826",
"remote_addr":"10.x.x.x",
"x_forwarded_for":"10.x.x.x",
"upstream_cache_status":"",
"ssl_protocol":"TLSv",
"ssl_cipher":"xxxx",
"ssl_session_reused":"r",
"request_body":"{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}",
"response_body":"{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}",
"limit_req_status":"",
"log_body":"1",
"connection_upgrade":"close",
"http_upgrade":"",
"request_uri":"/example/api/v1/",
"args":""
}
Filebeat to OpenSearch log shipping:
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.29.117:9200"]
  pipeline: logpipeline

  #index: "filebeatelastic-%{[agent.version]}-%{+yyyy.MM.dd}"
  index: "nginx_dev-%{+yyyy.MM.dd}"

  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  ssl.enabled: true
  ssl.verification_mode: none

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "filebeat"
  password: "filebeat"
I am carrying out type conversions for some of the "data" fields in the ingest pipeline, and that works perfectly. The only problem I am facing is with "#timestamp".
"#timestamp" is of type "date". Once the JSON data goes through the pipeline, I map the JSON message to a root-level object called "data", and in the transformed data "data.#timestamp" shows up as type "string" even though I haven't applied any transformation to it.
OpenSearch ingest pipeline - logpipeline.json
{
"description" : "Logging Pipeline",
"processors" : [
{
"json" : {
"field" : "message",
"target_field" : "data"
}
},
{
"date" : {
"field" : "data.#timestamp",
"formats" : ["ISO8601"]
}
},
{
"convert" : {
"field" : "data.body_bytes_sent",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.bytes_sent",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.request_length",
"type": "integer",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.request_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_connect_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_header_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"convert" : {
"field" : "data.upstream_response_time",
"type": "float",
"ignore_missing": true,
"ignore_failure": true
}
}
]
}
Is there any way I can preserve the "date" type of the "#timestamp" field even after the transformation carried out in the ingest pipeline?
Indexed document (screenshot):
Edit 1: Updated ingest pipeline simulate result
{
"docs" : [
{
"doc" : {
"_index" : "_index",
"_id" : "_id",
"_source" : {
"index_date" : "2022.11.08",
"#timestamp" : "2022-11-08T12:07:05.000+02:00",
"message" : """
{ "#timestamp": "2022-11-08T10:07:05+00:00", "client": "10.x.x.x", "server_name": "example.stack.com", "server_port": "80", "server_protocol": "HTTP/1.1", "method": "POST", "request": "/example/api/v1/", "request_length": "200", "status": "500", "bytes_sent": "598", "body_bytes_sent": "138", "referer": "", "user_agent": "Java/1.8.0_191", "upstream_addr": "10.x.x.x:10376", "upstream_status": "500", "gzip_ratio": "", "content_type": "application/json", "request_time": "6.826", "upstream_response_time": "6.826", "upstream_connect_time": "0.000", "upstream_header_time": "6.826", "remote_addr": "10.x.x.x", "x_forwarded_for": "10.x.x.x", "upstream_cache_status": "", "ssl_protocol": "TLSv", "ssl_cipher": "xxxx", "ssl_session_reused": "r", "request_body": "{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}", "response_body": "{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}", "limit_req_status": "", "log_body": "1", "connection_upgrade": "close", "http_upgrade": "", "request_uri": "/example/api/v1/", "args": ""}
""",
"data" : {
"server_name" : "example.stack.com",
"request" : "/example/api/v1/",
"referer" : "",
"log_body" : "1",
"upstream_addr" : "10.x.x.x:10376",
"body_bytes_sent" : 138,
"upstream_header_time" : 6.826,
"ssl_cipher" : "xxxx",
"response_body" : """{"statusCode":500,"reasonPhrase":"Internal Server Error","errorMessage":"xxxx"}""",
"upstream_status" : "500",
"request_time" : 6.826,
"upstream_cache_status" : "",
"content_type" : "application/json",
"client" : "10.x.x.x",
"user_agent" : "Java/1.8.0_191",
"ssl_protocol" : "TLSv",
"limit_req_status" : "",
"remote_addr" : "10.x.x.x",
"method" : "POST",
"gzip_ratio" : "",
"http_upgrade" : "",
"bytes_sent" : 598,
"request_uri" : "/example/api/v1/",
"x_forwarded_for" : "10.x.x.x",
"args" : "",
"#timestamp" : "2022-11-08T10:07:05+00:00",
"upstream_connect_time" : 0.0,
"request_body" : """{"date":null,"sourceType":"BPM","processId":"xxxxx","comment":"Process status: xxxxx: ","user":"xxxx"}""",
"request_length" : 200,
"ssl_session_reused" : "r",
"server_port" : "80",
"upstream_response_time" : 6.826,
"connection_upgrade" : "close",
"server_protocol" : "HTTP/1.1",
"status" : "500"
}
},
"_ingest" : {
"timestamp" : "2023-01-18T08:06:35.335066236Z"
}
}
}
]
}
Finally, I was able to resolve my issue. I updated filebeat.yml with the following. Previously the template name and pattern were different, but the default template name "filebeat" and pattern "filebeat" seem to be doing the job for me:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat"
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
But I still need to figure out how templates work, though.
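For reference, a more explicit alternative would be to pin the mapping in an OpenSearch index template instead of relying on the default Filebeat template. A minimal sketch, assuming the nginx_dev-* index pattern from the Filebeat config above (the template name is arbitrary):
```
PUT _index_template/nginx_dev
{
  "index_patterns": ["nginx_dev-*"],
  "template": {
    "mappings": {
      "properties": {
        "data": {
          "properties": {
            "#timestamp": { "type": "date" }
          }
        }
      }
    }
  }
}
```
With a template like this in place, data.#timestamp is mapped as date in every new index, regardless of what the ingest pipeline emits.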
I need to return AppointmentTypes as a FHIR resource. Unfortunately, I couldn't find an official FHIR resource type for it.
My best guess would be to create a Basic resource, like this:
{
"resourceType": "Basic",
"id" : "id-of-appointment-type",
"identifier" : [
{
"use" : "secondary",
"system" : "http://myUrl/myIdentifier",
"value" : "7"
}
],
"code" : {
"coding": [
{
"system": "http://myUrl/appointment-type",
"code": "appointment-type"
}
]
},
"text" : {
"status" : "generated",
"div" : "<div xmlns=\"http://www.w3.org/1999/xhtml\">AppointmentType</div>"
},
"extension": [
{
"url": "http://myUrl/appointment-type-name",
"valueString": "New Patient"
},
{
"url": "http://myUrl/appointment-type-availability",
"valueBoolean": true
}
],
"meta" : {
"lastUpdated" : "2020-05-27T00:00:00.000Z"
}
}
Would this be the right way to create the AppointmentType resource?
I don't see any obvious issues, but did you evaluate using CodeSystem? You can define properties on CodeSystem codes which would be able to distinguish available from non-available appointment types - and that would work better with Appointment, where 'type' is expected to be a code.
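For illustration, a minimal sketch of such a CodeSystem (the URL, the code values and the "available" property are assumptions for this example, not part of any official FHIR definition):
```
{
  "resourceType": "CodeSystem",
  "id": "appointment-type",
  "url": "http://myUrl/CodeSystem/appointment-type",
  "status": "active",
  "content": "complete",
  "property": [
    {
      "code": "available",
      "type": "boolean",
      "description": "Whether this appointment type can currently be booked"
    }
  ],
  "concept": [
    {
      "code": "new-patient",
      "display": "New Patient",
      "property": [
        { "code": "available", "valueBoolean": true }
      ]
    }
  ]
}
```
An Appointment could then carry one of these codes in its appointmentType CodeableConcept, which keeps the data queryable without resorting to Basic plus extensions.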
I am trying to create a step function using CloudFormation. I want to pass the Lambda ARNs as the second argument to the Fn::Sub function. It works if I pass just one ARN, but it fails when I pass multiple (with Fn::GetAtt). I checked the template with a YAML validator and did not see any issues.
CloudFormation template definition for the step function:
---
Resources:
ContractDraftStateMachine:
Type: "AWS::StepFunctions::StateMachine"
Properties:
RoleArn:
Fn::GetAtt: [ StepFunctionExecutionRole, Arn ]
DefinitionString:
Fn::Sub:
- |-
{
"Comment" : "Sample draft process",
"StartAt" : "AdvanceWorkflowToDraftInProgress",
"States" : {
"AdvanceWorkflowToDraftInProgress" : {
"Type" : "Task",
"Resource": "${WorkflowStateChangeLambdaArn}",
"InputPath":"$.contractId",
"OutputPath":"$",
"ResultPath":null,
"Next" : "CheckQuestionnaireType",
"Retry" : [
{
"ErrorEquals" : ["States.TaskTimeout"],
"MaxAttempts": 5,
"IntervalSeconds": 1
},
{
"ErrorEquals" : ["CustomErrorA"],
"MaxAttempts": 5
}
],
"Catch": [
{
"ErrorEquals": [ "States.ALL" ],
"Next": "FailureNotifier"
}
]
},
"CheckQuestionnaireType" : {
"Type" : "Choice",
"Choices" : [
{
"Variable" : "$.questionnaireType",
"StringEquals" : "CE",
"Next" : "PublishQuestionnaireAnswersToCE"
},
{
"Variable" : "$.questionnaireType",
"StringEquals" : "LEAF",
"Next" : "PublishQuestionnaireAnswersToLeaf"
}
]
},
"PublishQuestionnaireAnswersToCE" : {
"Type" : "Task",
"Resource": "${WorkflowStateChangeLambdaArn}",
"Next" : "UpdateCEMetadataAndGenerateDocuments",
"ResultPath" : null,
"OutputPath" : "$",
"Retry" : [
{
"ErrorEquals" : ["States.TaskTimeout"],
"MaxAttempts": 5,
"IntervalSeconds": 1
},
{
"ErrorEquals" : ["CustomErrorA"],
"MaxAttempts": 5
}
],
"Catch": [
{
"ErrorEquals": [ "States.ALL" ],
"Next": "FailureNotifier"
}
]
},
"PublishQuestionnaireAnswersToLeaflet" : {
"Type" : "Task",
"Resource": "${WorkflowStateChangeLambdaArn}",
"End" : true,
"Retry" : [
{
"ErrorEquals" : ["States.TaskTimeout"],
"MaxAttempts": 5,
"IntervalSeconds": 1
},
{
"ErrorEquals" : ["CustomErrorA"],
"MaxAttempts": 5
}
],
"Catch": [
{
"ErrorEquals": [ "States.ALL" ],
"Next": "FailureNotifier"
}
]
},
"UpdateCEMetadataAndGenerateDocuments" : {
"Type" : "Task",
"Resource": "${WorkflowStateChangeLambdaArn}",
"End" : true,
"Retry" : [
{
"ErrorEquals" : ["States.TaskTimeout"],
"MaxAttempts": 5,
"IntervalSeconds": 1
},
{
"ErrorEquals" : ["CustomErrorA"],
"MaxAttempts": 5
}
],
"Catch": [
{
"ErrorEquals": [ "States.ALL" ],
"Next": "FailureNotifier"
}
]
},
"FailureNotifier" : {
"Type" : "Task",
"Resource": "${FailureNotifierLambdaArn}",
"End" : true,
"Retry" : [
{
"ErrorEquals" : ["States.TaskTimeout"],
"MaxAttempts": 5,
"IntervalSeconds": 1
},
{
"ErrorEquals" : ["CustomErrorA"],
"MaxAttempts": 5
}
]
}
}
}
- WorkflowStateChangeLambdaArn:
Fn::GetAtt: [ CreateContractFromQuestionnaireFunction, Arn ]
- FailureNotifierLambdaArn:
Fn::GetAtt: [ CreateContractFromQuestionnaireFunction, Arn ]
Error - Template error: One or more Fn::Sub intrinsic functions don't specify expected arguments. Specify a string as first argument, and an optional second argument to specify a mapping of values to replace in the string
This is just a sample where the same Lambda is used multiple times, but the problem is with passing a list/map to Fn::Sub.
Could anyone help me resolve this issue or provide an alternate solution to achieve the same?
Thanks,
Fn::Sub takes either a single string as a parameter or a list. When using the list form there should be exactly two elements in the list: the first element is a string (the template) and the second is a map.
From the Fn::Sub documentation:
Fn::Sub:
- String
- { Var1Name: Var1Value, Var2Name: Var2Value }
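Applied to your template, a sketch of the corrected shape keeps the definition string as the first element and puts both substitutions into one map as the second element (shortened here; variable and resource names taken from your template):
```
DefinitionString:
  Fn::Sub:
    - |-
      {
        "Comment" : "Sample draft process",
        ...
      }
    - WorkflowStateChangeLambdaArn:
        Fn::GetAtt: [ CreateContractFromQuestionnaireFunction, Arn ]
      FailureNotifierLambdaArn:
        Fn::GetAtt: [ CreateContractFromQuestionnaireFunction, Arn ]
```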
Note: since you are just using Fn::GetAtt to build the substitution value, you can simply use ${CreateContractFromQuestionnaireFunction.Arn} and the single-string version of Fn::Sub.
E.g. (I've shortened the step function for clarity):
Fn::Sub: |-
{
"Comment" : "Sample draft process",
"StartAt" : "AdvanceWorkflowToDraftInProgress",
"States" : {
"AdvanceWorkflowToDraftInProgress" : {
"Type" : "Task",
"Resource": "${CreateContractFromQuestionnaireFunction.Arn}",
"InputPath":"$.contractId",
"OutputPath":"$",
"ResultPath":null,
"Next" : "CheckQuestionnaireType",
"Retry" : [
...
I have the following CloudFormation JSON template. This template is the default template provided by AWS for a C# (.NET) Web API Lambda proxy integration.
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Transform" : "AWS::Serverless-2016-10-31",
"Description" : "An AWS Serverless Application that uses the ASP.NET Core framework running in Amazon Lambda.",
"Parameters" : {
"ShouldCreateBucket" : {
"Type" : "String",
"AllowedValues" : ["true", "false"],
"Description" : "If true then the S3 bucket that will be proxied will be created with the CloudFormation stack."
},
"BucketName" : {
"Type" : "String",
"Description" : "Name of S3 bucket that will be proxied. If left blank a new table will be created.",
"MinLength" : "0"
}
},
"Conditions" : {
"CreateS3Bucket" : {"Fn::Equals" : [{"Ref" : "ShouldCreateBucket"}, "true"]},
"BucketNameGenerated" : {"Fn::Equals" : [{"Ref" : "BucketName"}, ""]}
},
"Resources" : {
"ProxyFunction" : {
"Type" : "AWS::Serverless::Function",
"Properties": {
"Handler": "DotnetLanmada::DotnetLanmada.LambdaEntryPoint::FunctionHandlerAsync",
"Runtime": "dotnetcore2.0",
"CodeUri": "",
"MemorySize": 256,
"Timeout": 30,
"Role": null,
"Policies": [ "AWSLambdaFullAccess" ],
"Environment" : {
"Variables" : {
"AppS3Bucket" : { "Fn::If" : ["CreateS3Bucket", {"Ref":"Bucket"}, { "Ref" : "BucketName" } ] }
}
},
"Events": {
"PutResource": {
"Type": "Api",
"Properties": {
"Path": "/{proxy+}",
"Method": "ANY"
}
}
}
}
},
"Bucket" : {
"Type" : "AWS::S3::Bucket",
"Condition" : "CreateS3Bucket",
"Properties" : {
"BucketName" : { "Fn::If" : ["BucketNameGenerated", {"Ref" : "AWS::NoValue" }, { "Ref" : "BucketName" } ] }
}
}
},
"Outputs" : {
"S3ProxyBucket" : {
"Value" : { "Fn::If" : ["CreateS3Bucket", {"Ref":"Bucket"}, { "Ref" : "BucketName" } ] }
}
}
}
This template creates a Lambda function, an API Gateway, and an S3 bucket. All requests to the API Gateway are proxied to the Lambda function. I want to authenticate all requests to the API Gateway using an existing Cognito user pool. Basically, the API Gateway will have a Cognito user pool authorizer and the proxy function will be authorized through it. Since the API Gateway creation part is hidden in this template, I have no clue how to add a Cognito user pool authorizer here.
Thanks in advance.
One way to achieve what you want is to export the ARN of your Lambda function, and then import it into your API Gateway stack.
To export your function's ARN, in your Outputs section add:
"Function": {
"Value": ProxyFunction.Arn,
"Export": {
"Name": "ProxyFunction::Arn"
}
}
You will also need to have an invocation permission for API Gateway to invoke your function. You can add something like this to your stack:
"LambdaInvocationPermission": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"Action": "lambda:InvokeFunction",
"FunctionName": { "Fn::GetAtt" : [ "ProxyFunction", "Arn" ] },
"Principal": "apigateway.amazonaws.com"
}
}
Then in your API Gateway stack, you can reference your function's ARN with
{ "Fn::ImportValue" : "ProxyFunction::Arn" }
I created an Auto Scaling group which launches EC2 instances behind an ELB. My question is how to provision those EC2 instances with Ansible.
Before, I used a CNAME, but now I can't get the instance DNS. Please correct me if I'm wrong.
Should I use a dynamic inventory, or are there any other options?
My CloudFormation template is below:
```
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Template create autoscaling group",
"Parameters": {
"devKeyPair": {
"Description": "Name of an existing EC2 KeyPair to enable SSH access to the instances",
"Type": "AWS::EC2::KeyPair::KeyName",
"Default" : "dev-key"
}
},
"Resources" : {
"LaunchConfig" : {
"Type" : "AWS::AutoScaling::LaunchConfiguration",
"Properties" : {
"KeyName" : { "Ref": "devKeyPair" },
"ImageId" : "ami-1effc703",
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n", "\n", " echo 'Installing Git'\n"," yum --nogpgcheck -y install wget\n""] ]}},
"InstanceType" : "t2.small",
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sda1",
"Ebs" : {
"VolumeSize" : "10",
"VolumeType" : "gp2",
"DeleteOnTermination" : "true"
}
}
]
}
},
"BackendGroup" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"AvailabilityZones" : ["eu-central-1a"],
"MinSize" : "1",
"MaxSize" : "1",
"LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
"LoadBalancerNames" : [ { "Ref" : "ElasticLoadBalancer" } ],
"Tags": [
{
"ResourceType": "auto-scaling-group",
"ResourceId": "bas-auto",
"Value": "bas-dev",
"Key": "Name",
"PropagateAtLaunch" : "true"
}
]
}
},
"ElasticLoadBalancer": {
"Type": "AWS::ElasticLoadBalancing::LoadBalancer",
"Properties": {
"AvailabilityZones": ["eu-central-1a"],
"Listeners": [ {
"LoadBalancerPort": "80",
"InstancePort": "80",
"Protocol": "HTTP"
} ]
}
},
"BackendDNS" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com.",
"Comment" : "Targered to Bas instance",
"RecordSets" : [{
"Name" : "bas-dev.example.com.",
"Type" : "CNAME",
"TTL" : "300",
"ResourceRecords" : [
{
"Fn::GetAtt": [ "ElasticLoadBalancer", "DNSName" ]
}
]
}]
}
}
}
}
```
Another solution would be to provision your VM before starting the new instance, i.e. make sure that the image you're starting the ASG instances from is already provisioned.
One way to do this is to use something like packer.io to create a new AMI, with Ansible as your provisioner. Then you can simply pass this new AMI ID into the ImageId attribute of the LaunchConfiguration.
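For illustration, a minimal Packer template sketch along those lines (the source AMI, region and instance type are taken from your template; the SSH user, AMI name and playbook path are assumptions):
```
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-central-1",
      "source_ami": "ami-1effc703",
      "instance_type": "t2.small",
      "ssh_username": "ec2-user",
      "ami_name": "bas-dev-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "./provision.yml"
    }
  ]
}
```
The AMI ID Packer prints at the end would then replace the hard-coded ImageId in the LaunchConfiguration, so instances come up already provisioned.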
Another approach could involve using the User Data to "phone home" and tell you the public IP address the instance has acquired.
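As a rough sketch of that idea (the callback endpoint is hypothetical; the 169.254.169.254 URLs are the standard EC2 instance metadata service):
```
#!/bin/bash
# Hypothetical "phone home" user data: report this instance's identity and
# public IP to a provisioning endpoint (callback.example.com is a placeholder).
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
curl -s -X POST "https://callback.example.com/register" \
  --data "instance_id=${INSTANCE_ID}&public_ip=${PUBLIC_IP}"
```
The receiving side could then add the host to an Ansible inventory and run the playbook against it.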
The best solution for me was to install Ansible Tower with a free license and then use the user_data property so that new instances call back to Tower; Ansible has an example here: https://www.ansible.com/blog/autoscaling-infrastructures
But it is still necessary to build a base image first, because otherwise every launch is delayed by the full provisioning time.
You can use OpsWorks with CloudFormation in order to run Ansible whenever a new instance is added to the Auto Scaling group.
Although OpsWorks uses Chef, you can use this custom cookbook, https://github.com/deepakagg/ansible-opsworks, which will run the desired playbook.