Traefik with dynamic routing to ECS backends, running as one-off tasks

I'm trying to implement a solution for a reverse-proxy service using Traefik v1 (1.7) and ECS one-off tasks as backends, as described in this SO question. Routing should be dynamic - requests to the /user/1234/* path should go to the ECS task running with the appropriate docker labels:
docker_labels = {
  traefik.frontend.rule = "Path:/user/1234"
  traefik.backend = "trax1"
  traefik.enable = "true"
}
So far this setup works fine, but I need to create one ECS task definition per running task, because the docker labels are a property of the ECS TaskDefinition, not of the ECS task itself. Is it possible to create only one TaskDefinition and pass the Traefik rules in ECS task tags, within the task's key/value properties?
This would require some modification of the Traefik source code. Are there any other available options or ways this could be implemented that I've missed, like API Gateway or Lambda@Edge? I have no experience with those technologies; real-world examples are more than welcome.

Solved by using the Traefik REST API provider. The external component that runs the one-off tasks can discover the task's internal IP and update the Traefik configuration on the fly by pairing the traefik.frontend.rule = "Path:/user/1234" rule with the task's internal IP:port in the backends section.
It should first GET the current Traefik configuration from the /api/providers/rest endpoint, add or remove the corresponding part (depending on whether a task was started or stopped), and PUT the updated configuration back to the same endpoint; a curl sketch follows the configuration example below.
{
  "backends": {
    "backend-serv1": {
      "servers": {
        "server-service-serv-test1-serv-test-4ca02d28c79b": {
          "url": "http://172.16.0.5:32793"
        }
      }
    },
    "backend-serv2": {
      "servers": {
        "server-service-serv-test2-serv-test-279c0ba1959b": {
          "url": "http://172.16.0.5:32792"
        }
      }
    }
  },
  "frontends": {
    "frontend-serv1": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv1",
      "routes": {
        "route-frontend-serv1": {
          "rule": "Path:/user/1234"
        }
      }
    },
    "frontend-serv2": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv2",
      "routes": {
        "route-frontend-serv2": {
          "rule": "Path:/user/5678"
        }
      }
    }
  }
}
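A minimal sketch of that GET/modify/PUT cycle with curl (the Traefik API address and temporary file name are illustrative; the payload uses the same format as the configuration above):
#!/bin/bash
# Illustrative Traefik API address; adjust to your api entrypoint.
TRAEFIK_API="http://traefik.internal:8080"
# Fetch the configuration currently held by the REST provider
curl -s "$TRAEFIK_API/api/providers/rest" -o current-config.json
# ... modify current-config.json here: add or remove the frontend/backend
# pair for the task that was started or stopped ...
# Push the updated configuration back to Traefik
curl -s -X PUT "$TRAEFIK_API/api/providers/rest" \
    -H "Content-Type: application/json" \
    --data @current-config.json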

Related

Deploy Elastic stack on AKS

I tried to deploy the Elastic stack on Azure Kubernetes Service (AKS) following this example from ELASTIC.IO. I want to get logs from my .NET 6 WebApi; for logging I use Serilog. I made a docker-compose setup to run the Elastic stack in a local Docker container for the development environment, and everything works fine there. The problem comes with the AKS deployment. I can see that all servers are running on Kubernetes, and I can reach Elasticsearch on port 9200 with curl (although in production it returns an empty reply from server) as well as the Kibana dashboard, but for some reason, when I start the app from my PC with production settings pointing to the Elasticsearch deployed on AKS, I can't create any index or see any new data.
{
  "ApplicationName": "identity-service",
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Information",
        "System": "Warning"
      }
    }
  },
  "ElasticConfiguration": {
    "URI": "https://elastic-cloud-ip:9200"
  }
}
And this is the configuration that works in the development environment.
var env = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
builder.Host.UseSerilog((context, configuration) =>
{
    Console.Out.WriteLine($"{Assembly.GetExecutingAssembly().GetName().Name.ToLower()}-{env.ToLower().Replace(".", "-")}-{DateTime.UtcNow:yyyy-MM}");
    configuration.Enrich.FromLogContext()
        .Enrich.WithMachineName()
        .WriteTo.Console()
        .WriteTo.Elasticsearch(
            new ElasticsearchSinkOptions(new Uri(context.Configuration["ElasticConfiguration:URI"]))
            {
                IndexFormat = $"{context.Configuration["ApplicationName"].ToLower()}-{env.ToLower().Replace(".", "-")}-{DateTime.UtcNow:yyyy-MM}",
                AutoRegisterTemplate = true,
                NumberOfShards = 1,
                NumberOfReplicas = 2
            })
        .Enrich.WithProperty("Environment", context?.HostingEnvironment?.EnvironmentName?.ToLowerInvariant())
        .ReadFrom.Configuration(context.Configuration);
});

Does the AWS SSM agent emit CloudWatch resource:memory events?

I am using an ECS-optimized instance (ami-05958d7635caa4d04) in the ECS data plane in the ca-central-1 region.
AWS Systems Manager Agent (SSM Agent) is Amazon software that can be installed and configured on an Amazon EC2 instance, an on-premises server, or a virtual machine (VM). SSM Agent makes it possible for Systems Manager to update, manage, and configure these resources.
In my scenario, launching an ECS task on the ECS-optimized instance (ami-05958d7635caa4d04) causes a resource:memory error. More on this error here. Monitoring ECS -> cluster -> service -> events will not work for me, because CloudFormation rolls back the cluster.
My existing ECS-optimized instance is launched as shown below:
"EC2Instance":{
"Type": "AWS::EC2::Instance",
"Properties":{
"ImageId": "ami-05958d7635caa4d04",
"InstanceType": "t2.micro",
"SubnetId": { "Ref": "SubnetId"},
"KeyName": { "Ref": "KeyName"},
"SecurityGroupIds": [ { "Ref": "EC2InstanceSecurityGroup"} ],
"IamInstanceProfile": { "Ref" : "EC2InstanceProfile"},
"UserData":{
"Fn::Base64": { "Fn::Join": ["", [
"#!/bin/bash\n",
"echo ECS_CLUSTER=", { "Ref": "EcsCluster" }, " >> /etc/ecs/ecs.config\n",
"groupadd -g 1000 jenkins\n",
"useradd -u 1000 -g jenkins jenkins\n",
"mkdir -p /ecs/jenkins_home\n",
"chown -R jenkins:jenkins /ecs/jenkins_home\n"
] ] }
},
"Tags": [ { "Key": "Name", "Value": { "Fn::Join": ["", [ { "Ref": "AWS::StackName"}, "-instance" ] ]} }]
}
}
1) Is installation of the AWS SSM agent required on the ECS instance (ami-05958d7635caa4d04) to retrieve such CloudWatch events (resource:memory) with an aws.ssm CloudWatch event rule filter, or does an aws.ec2 CloudWatch event rule filter suffice?
2) If yes, do I need to explicitly install the SSM agent on the ECS instance (ami-05958d7635caa4d04) through CloudFormation?
You don't need to install the SSM agent to monitor something such as the memory usage of your instance (whether it is a container instance or not). This is the domain of CloudWatch, not SSM.
All you need to do is install the unified CloudWatch agent and configure it accordingly. This is where SSM can help, but it is not necessary and you can install the agent manually (or via a script if you want); a minimal sketch of a manual install follows below.
If you decide to use SSM, then you will need to explicitly install it. It comes preinstalled on some OSes, but not on the Amazon ECS-optimized AMI - more about this.
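For reference, here is a minimal sketch of a manual install that publishes memory metrics, assuming an Amazon Linux 2 based image (on other images, download the agent package from Amazon's S3 distribution instead); the config path and values below are illustrative:
#!/bin/bash
# Install the unified CloudWatch agent (assumes Amazon Linux 2 repositories)
sudo yum install -y amazon-cloudwatch-agent
# Minimal agent config that publishes memory usage to CloudWatch
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json << 'EOF'
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"],
        "metrics_collection_interval": 60
      }
    }
  }
}
EOF
# Start the agent with that config (the instance role also needs
# CloudWatchAgentServerPolicy or equivalent permissions)
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 \
    -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s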

How can I use vSphere Cloud Plugin inside Jenkins pipeline code?

I have a setup at work where the vSphere hosts are manually restarted before execution of a specific Jenkins job. As the new person in the office, I automated this process by adding an extra build step to restart the VMs with the help of https://wiki.jenkins-ci.org/display/JENKINS/vSphere+Cloud+Plugin (the vSphere Cloud Plugin).
I would now like to integrate this as pipeline code; please advise.
I have already checked that this plugin is Pipeline-compatible.
I currently trigger the vSphere host restart in the pipeline by having it remotely trigger a job configured with the vSphere Cloud Plugin.
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                script {
                    sh "curl -v 'http://someserver.com/job/Vivin/job/executor_configurator/buildWithParameters?Host=build-114&token=bonkers'"
                }
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(8)
                        }
                    }
                }
            }
        }
    }
}
I would like to integrate the vSphere Cloud Plugin directly in the pipeline code itself; please help me to integrate it.
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                // vSphere cloud plugin code that is requested
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(8)
                        }
                    }
                }
            }
        }
    }
}
Well, I found the solution myself with the help of the 'Pipeline Syntax' feature found in the menu of a Jenkins pipeline job.
The 'Pipeline Syntax' page contains the syntax of all the parameters made available via the API of the plugins installed on a Jenkins server, which we can use to generate or develop the syntax we need.
http://<jenkins server url>/job/<pipeline job name>/pipeline-syntax/
My Jenkinsfile (Pipeline) now looks like this:
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                vSphere buildStep: [$class: 'PowerOff', evenIfSuspended: false, ignoreIfNotExists: false, shutdownGracefully: true, vm: 'brewery-133'], serverName: 'vspherecentral'
                vSphere buildStep: [$class: 'PowerOn', timeoutInSeconds: 180, vm: 'brewery-133'], serverName: 'vspherecentral'
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(1)
                        }
                    }
                }
            }
        }
    }
}

How do I use CloudFormation resources in a Lambda function?

I have added a Redis ElastiCache section to my s-resource-cf.json (a CloudFormation template), and selected its hostname as an output.
"Resources": {
...snip...
"Redis": {
"Type": "AWS::ElastiCache::CacheCluster",
"Properties": {
"AutoMinorVersionUpgrade": "true",
"AZMode": "single-az",
"CacheNodeType": "cache.t2.micro",
"Engine": "redis",
"EngineVersion": "2.8.24",
"NumCacheNodes": "1",
"PreferredAvailabilityZone": "eu-west-1a",
"PreferredMaintenanceWindow": "tue:00:30-tue:01:30",
"CacheSubnetGroupName": {
"Ref": "cachesubnetdefault"
},
"VpcSecurityGroupIds": [
{
"Fn::GetAtt": [
"sgdefault",
"GroupId"
]
}
]
}
}
},
"Outputs": {
"IamRoleArnLambda": {
"Description": "ARN of the lambda IAM role",
"Value": {
"Fn::GetAtt": [
"IamRoleLambda",
"Arn"
]
}
},
"RedisEndpointAddress": {
"Description": "Redis server host",
"Value": {
"Fn::GetAtt": [
"Redis",
"Address"
]
}
}
}
I can get CloudFormation to output the Redis server host when running sls resources deploy, but how can I access that output from within a Lambda function?
There is nothing in this starter project template that refers to that IamRoleArnLambda output, which came with the example project. According to the docs, templates are only usable for project configuration; they are not accessible from Lambda functions:
Templates & Variables are for Configuration Only
Templates and variables are used for configuration of the project only. This information is not usable in your lambda functions. To set variables which can be used by your lambda functions, use environment variables.
So, then how do I set an environment variable to the hostname of the ElastiCache server after it has been created?
You can set environment variables in the environment section of a function's s-function.json file. Furthermore, if you want to prevent those variables from being put into version control (for example, if your code will be posted to a public GitHub repo), you can put them in the appropriate files in your _meta/variables directory and then reference those from your s-function.json files. Just make sure you add a _meta line to your .gitignore file.
For example, in my latest project I needed to connect to a Redis Cloud server, but didn't want to commit the connection details to version control. I put variables into my _meta/variables/s-variables-[stage]-[region].json file, like so:
{
  "redisUrl": "...",
  "redisPort": "...",
  "redisPass": "..."
}
…and referenced the connection settings variables in that function's s-function.json file:
"environment": {
"REDIS_URL": "${redisUrl}",
"REDIS_PORT": "${redisPort}",
"REDIS_PASS": "${redisPass}"
}
I then put this redis.js file in my functions/lib directory:
module.exports = () => {
  const redis = require('redis')
  const jsonify = require('redis-jsonify')
  const redisOptions = {
    host: process.env.REDIS_URL,
    port: process.env.REDIS_PORT,
    password: process.env.REDIS_PASS
  }
  return jsonify(redis.createClient(redisOptions))
}
Then, in any function that needed to connect to that Redis database, I imported redis.js:
redis = require('../lib/redis')()
(For more details on my Serverless/Redis setup and some of the challenges I faced in getting it to work, see this question I posted yesterday.)
update
CloudFormation usage has been streamlined somewhat since that comment was posted in the issue tracker. I have submitted a documentation update to http://docs.serverless.com/docs/templates-variables, and posted a shortened version of my configuration in a gist.
It is possible to refer to a CloudFormation output in an s-function.json Lambda configuration file, in order to make those outputs available as environment variables.
s-resource-cf.json output section:
"Outputs": {
"redisHost": {
"Description": "Redis host URI",
"Value": {
"Fn::GetAtt": [
"RedisCluster",
"RedisEndpoint.Address"
]
}
}
}
s-function.json environment section:
"environment": {
"REDIS_HOST": "${redisHost}"
},
Usage in a Lambda function:
exports.handler = function(event, context) {
  console.log("Redis host: ", process.env.REDIS_HOST);
};
old answer
Looks like a solution was found / implemented in the Serverless issue tracker (link). To quote HyperBrain:
CF Output variables
To have your lambda access the CF output variables you have to give it the cloudformation:describeStacks access rights in the lambda IAM role.
The CF.loadVars() promise will add all CF output variables to the process'
environment as SERVERLESS_CF_OutVar name. It will add a few ms to the
startup time of your lambda.
Change your lambda handler as follows:
// Require Serverless ENV vars
var ServerlessHelpers = require('serverless-helpers-js');
ServerlessHelpers.loadEnv();

// Require Logic
var lib = require('../lib');

// Lambda Handler
module.exports.handler = function(event, context) {
  ServerlessHelpers.CF.loadVars()
    .then(function() {
      lib.respond(event, function(error, response) {
        return context.done(error, response);
      });
    })
};

Have Route 53 point to an instance instead of an IP or CNAME?

We're using Route 53 DNS to point to an EC2 instance. Is there any way to get Route 53 to point to the instance directly, instead of to an Elastic IP or CNAME?
I have multiple reasons for this:
I don't want to burn an IP.
CNAMEs are unreliable, because if an instance goes down and comes back up, the full name, ec2-X-X-X-X.compute-1.amazonaws.com, will change.
In the future, I need to spin up instances programmatically and address them with a subdomain, and I see no easy way to do this with either elastic IPs or CNAMEs.
What's the best approach?
I wrote my own solution to this problem since I was unhappy with the other approaches presented here. Using the Amazon CLI tools is nice, but IMHO they tend to be slower than direct API calls using other Amazon API libraries (Ruby, for example).
Here's a link to my AWS Route53 DNS instance update Gist. It contains an IAM policy and a Ruby script. You should create a new user in the IAM panel, update it with the attached policy (with your zone id in it), and set the credentials and parameters in the Ruby script. The first parameter is the hostname alias for your instance in your hosted zone. The instance's private hostname is aliased to <hostname>.<domain> and the instance's public hostname is aliased to <hostname>-public.<domain>.
UPDATE: Here's a link to an AWS Route53 DNS instance update init.d script that registers hostnames when the instance boots. Here's another one if you want to use AWS Route53 DNS load-balancing in a similar fashion.
If you stick to using Route 53, you can make a script that updates the CNAME record for that instance every time it reboots.
See this -> http://cantina.co/automated-dns-for-aws-instances-using-route-53/ (disclosure: I did not create this, though I used it as a jumping-off point for a similar situation).
Better yet, because you mentioned being able to spin up instances programmatically, this approach should guide you to that end.
See also -> http://docs.pythonboto.org/en/latest/index.html
Using a combination of CloudWatch, Route 53, and Lambda is also an option if you host at least part of your DNS in Route 53. The advantage of this is that you don't need any applications running on the instance itself.
To use this approach, you configure a CloudWatch rule to trigger a Lambda function whenever the status of an EC2 instance changes to running. The Lambda function can then retrieve the public IP address of the instance and update the DNS record in Route 53.
The Lambda could look something like this (using the Node.js runtime); a sketch of the triggering rule follows the code:
var AWS = require('aws-sdk');

var ZONE_ID = 'Z1L432432423';
var RECORD_NAME = 'testaws.domain.tld';
var INSTANCE_ID = 'i-423423ccqq';

exports.handler = (event, context, callback) => {

    var retrieveIpAddressOfEc2Instance = function(instanceId, ipAddressCallback) {
        var ec2 = new AWS.EC2();
        var params = {
            InstanceIds: [instanceId]
        };
        ec2.describeInstances(params, function(err, data) {
            if (err) {
                callback(err);
            } else {
                ipAddressCallback(data.Reservations[0].Instances[0].PublicIpAddress);
            }
        });
    }

    var updateARecord = function(zoneId, name, ip, updateARecordCallback) {
        var route53 = new AWS.Route53();
        var dnsParams = {
            ChangeBatch: {
                Changes: [
                    {
                        Action: "UPSERT",
                        ResourceRecordSet: {
                            Name: name,
                            ResourceRecords: [
                                {
                                    Value: ip
                                }
                            ],
                            TTL: 60,
                            Type: "A"
                        }
                    }
                ],
                Comment: "updated by lambda"
            },
            HostedZoneId: zoneId
        };
        route53.changeResourceRecordSets(dnsParams, function(err, data) {
            if (err) {
                callback(err, data);
            } else {
                updateARecordCallback();
            }
        });
    }

    retrieveIpAddressOfEc2Instance(INSTANCE_ID, function(ip) {
        updateARecord(ZONE_ID, RECORD_NAME, ip, function() {
            callback(null, 'record updated with: ' + ip);
        });
    });
}
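The triggering rule itself is not shown above; a sketch of creating it with the AWS CLI might look like the following (rule name, region, account ID, and function name are placeholders):
# Rule that fires when any EC2 instance enters the "running" state
aws events put-rule --name ec2-running-dns-update \
    --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"],"detail":{"state":["running"]}}'
# Point the rule at the Lambda function
aws events put-targets --rule ec2-running-dns-update \
    --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:update-dns'
# Allow CloudWatch Events to invoke the function
aws lambda add-permission --function-name update-dns \
    --statement-id ec2-running-dns-update --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:123456789012:rule/ec2-running-dns-update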
You will need to execute the Lambda with a role that has permissions to describe EC2 instances and update records in Route 53; a minimal policy sketch follows.
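A minimal sketch of such a policy (in addition to the basic Lambda logging permissions); the hosted zone ID below is the placeholder from the code above, so substitute your own:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/Z1L432432423"
    }
  ]
}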
With Route 53 you can create alias records that map to an Elastic Load Balancer (ELB):
http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/HowToAliasRRS.html
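For illustration, an alias record can be created or updated with a change batch like the one below (all names and IDs are placeholders; the AliasTarget HostedZoneId must be the ELB's canonical hosted zone ID, not the ID of your own hosted zone):
aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID \
    --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "ZELBEXAMPLE123",
        "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'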
I've not tried it on an AWS EC2 instance, but it should work too.
I've written a small Java program that detects the public IP of the machine and updates a certain record on AWS Route 53.
The only requirement is that you need Java installed on your EC2 instance.
The project is hosted at https://github.com/renatodelgaudio/awsroute53 and you are also free to modify it in case you need to.
You could configure it to run at boot time or as a crontab job so that your record gets updated with the new public IP, following instructions similar to these:
Linux manual installation steps
I used this cli53 tool to let an EC2 instance create an A record for itself during startup.
https://github.com/barnybug/cli53
I added the following lines to my rc.local (please check that your Linux distribution calls this script during startup):
IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
/usr/local/bin/cli53 rrcreate example.com "play 30 A $IP" --wait --replace
It creates an A record play.example.com pointing to the current public IP of the EC2 instance.
You need to assign an IAM role to the EC2 instance that allows it to manipulate Route 53. In the simplest case, just create an IAM role using the predefined policy AmazonRoute53FullAccess, then assign this role to the EC2 instance; a CLI sketch follows.
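A sketch of that setup with the AWS CLI (role, profile, and instance IDs are illustrative):
# Trust policy letting EC2 assume the role
cat << 'EOF' > ec2-trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF
aws iam create-role --role-name route53-updater --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name route53-updater \
    --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
aws iam create-instance-profile --instance-profile-name route53-updater
aws iam add-role-to-instance-profile --instance-profile-name route53-updater --role-name route53-updater
# Attach the profile to a running instance (or reference it at launch time)
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=route53-updater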
Assuming the EC2 instance has the aws command configured with proper permissions, the following shell script does it:
#!/bin/bash
IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
PROFILE="dnsuserprofile"
ZONE="XXXXXXXXXXXXXXXXXXXXX"
DOMAIN="my.domain.name"
TMPFILE="/tmp/updateip.json"
cat << EOF > $TMPFILE
{
  "Comment": "Updating instance IP address",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "$DOMAIN",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "$IP"
          }
        ]
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets --profile $PROFILE --hosted-zone-id $ZONE --change-batch file://$TMPFILE > /dev/null && \
rm $TMPFILE
Set that script to run on reboot, for example in cron:
@reboot /home/ec2-user/bin/updateip
The IAM policy can be as narrow as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/XXXXXXXXXXXXXXXXXXXXX"
    }
  ]
}
