Deploy Elastic Stack on AKS - elasticsearch

I tried to deploy the Elastic Stack on Azure Kubernetes Service (AKS) following this example from elastic.io. I want to collect logs from my .NET 6 Web API; for logging I use Serilog. I made a docker-compose file to run the Elastic Stack in local Docker containers to test in the development environment, and everything works fine. The problem comes with the AKS deployment. I can see that all servers are running on Kubernetes, and I can reach Elasticsearch on port 9200 with curl (although in production it returns "empty reply from server") as well as the Kibana dashboard. But for some reason, when I start the app from my PC with production settings pointing to the Elasticsearch deployed on AKS, I can't create any index or see any changed data.
{
  "ApplicationName": "identity-service",
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Information",
        "System": "Warning"
      }
    }
  },
  "ElasticConfiguration": {
    "URI": "https://elastic-cloud-ip:9200"
  }
}
And this is the configuration that works in the development environment.
var env = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
builder.Host.UseSerilog((context, configuration) =>
{
    Console.Out.WriteLine($"{Assembly.GetExecutingAssembly().GetName().Name.ToLower()}-{env.ToLower().Replace(".", "-")}-{DateTime.UtcNow:yyyy-MM}");
    configuration.Enrich.FromLogContext()
        .Enrich.WithMachineName()
        .WriteTo.Console()
        .WriteTo.Elasticsearch(
            new ElasticsearchSinkOptions(new Uri(context.Configuration["ElasticConfiguration:URI"]))
            {
                IndexFormat = $"{context.Configuration["ApplicationName"].ToLower()}-{env.ToLower().Replace(".", "-")}-{DateTime.UtcNow:yyyy-MM}",
                AutoRegisterTemplate = true,
                NumberOfShards = 1,
                NumberOfReplicas = 2
            })
        .Enrich.WithProperty("Environment", context?.HostingEnvironment?.EnvironmentName?.ToLowerInvariant())
        .ReadFrom.Configuration(context.Configuration);
});

Related

Traefik with dynamic routing to ECS backends, running as one-off tasks

I'm trying to implement a solution for a reverse-proxy service using Traefik v1 (1.7) and ECS one-off tasks as backends, as described in this SO question. Routing should be dynamic - requests to the /user/1234/* path should go to the ECS task running with the appropriate Docker labels:
docker_labels = {
    "traefik.frontend.rule" = "Path:/user/1234"
    "traefik.backend" = "trax1"
    "traefik.enable" = "true"
}
So far this setup works fine, but I need to create one ECS task definition per running task, because the Docker labels are a property of the ECS task definition, not of the ECS task itself. Is it possible to create only one task definition and pass the Traefik rules in ECS task tags, within the task's key/value properties?
That would require some modification of the Traefik source code. Are there any other available options or ways this should be implemented that I've missed, like API Gateway or Lambda@Edge? I have no experience with those technologies; real-world examples are more than welcome.
Solved by using the Traefik REST API provider. The external component that runs the one-off tasks can discover the task's internal IP and update the Traefik configuration on the fly, pairing the traefik.frontend.rule = "Path:/user/1234" rule with the task's internal IP:port value in the backends section.
It should first GET the Traefik configuration from the /api/providers/rest endpoint, remove or add the corresponding part (depending on whether a task was stopped or started), and update the Traefik configuration with a PUT to the same endpoint.
{
  "backends": {
    "backend-serv1": {
      "servers": {
        "server-service-serv-test1-serv-test-4ca02d28c79b": {
          "url": "http://172.16.0.5:32793"
        }
      }
    },
    "backend-serv2": {
      "servers": {
        "server-service-serv-test2-serv-test-279c0ba1959b": {
          "url": "http://172.16.0.5:32792"
        }
      }
    }
  },
  "frontends": {
    "frontend-serv1": {
      "entryPoints": ["http"],
      "backend": "backend-serv1",
      "routes": {
        "route-frontend-serv1": {
          "rule": "Path:/user/1234"
        }
      }
    },
    "frontend-serv2": {
      "entryPoints": ["http"],
      "backend": "backend-serv2",
      "routes": {
        "route-frontend-serv2": {
          "rule": "Path:/user/5678"
        }
      }
    }
  }
}
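The add/remove step on a configuration shaped like the one above can be sketched as follows. This is a hypothetical helper, not part of Traefik; the naming scheme mirrors the example payload:

```javascript
// Sketch: maintain the backends/frontends pairs of a Traefik v1
// REST-provider configuration object (the payload exchanged with
// /api/providers/rest). Helper names are illustrative.
function addTask(config, name, rule, url) {
  config.backends[`backend-${name}`] = {
    servers: { [`server-${name}`]: { url } }
  };
  config.frontends[`frontend-${name}`] = {
    entryPoints: ['http'],
    backend: `backend-${name}`,
    routes: { [`route-${name}`]: { rule } }
  };
  return config;
}

function removeTask(config, name) {
  delete config.backends[`backend-${name}`];
  delete config.frontends[`frontend-${name}`];
  return config;
}
```

The external component would GET the current configuration, apply addTask or removeTask for the started or stopped task, and PUT the result back to the same endpoint.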

Error importing Kibana dashboards: fail to create the Kibana loader: Error creating Kibana client

I'm having a problem when I try to run the command sudo metricbeat -e -setup:
it returns Error importing Kibana dashboards: fail to create the Kibana loader: Error creating Kibana client
but if I run sudo metricbeat test config
Config OK
or
sudo metricbeat test modules
nginx...
stubstatus...OK
result:
{
  "@timestamp": "2018-10-05T12:30:19.077Z",
  "metricset": {
    "host": "127.0.0.1:8085",
    "module": "nginx",
    "name": "stubstatus",
    "rtt": 438
  },
  "nginx": {
    "stubstatus": {
      "accepts": 2871,
      "active": 2,
      "current": 3559,
      "dropped": 0,
      "handled": 2871,
      "hostname": "127.0.0.1:8085",
      "reading": 0,
      "requests": 3559,
      "waiting": 1,
      "writing": 1
    }
  }
}
Is Kibana up and running?
Are the Kibana IP and port configured correctly in Metricbeat?
Starting from version 6.x, Metricbeat imports its dashboards into Kibana, which results in errors like this one if the Kibana endpoint isn't reachable.
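For the second check, Metricbeat 6.x reads the Kibana endpoint from the setup.kibana section of metricbeat.yml; a minimal sketch (host and port are placeholders for your own Kibana instance):

```yaml
# metricbeat.yml -- Kibana endpoint used for dashboard import
setup.kibana:
  host: "localhost:5601"
```

In 6.x the dashboards can also be loaded explicitly with metricbeat setup --dashboards, which gives a quicker feedback loop than restarting the full beat.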

Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager

My cluster config file is as follows:
{
  "name": "SampleCluster",
  "clusterConfigurationVersion": "1.0.0",
  "apiVersion": "01-2017",
  "nodes": [
    {
      "nodeName": "vm0",
      "iPAddress": "here is my VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
    },
    {
      "nodeName": "vm1",
      "iPAddress": "here is my another VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r1",
      "upgradeDomain": "UD1"
    },
    {
      "nodeName": "vm2",
      "iPAddress": "here is my another VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r2",
      "upgradeDomain": "UD2"
    }
  ],
  "properties": {
    "reliabilityLevel": "Bronze",
    "diagnosticsStore": {
      "metadata": "Please replace the diagnostics file share with an actual file share accessible from all cluster machines.",
      "dataDeletionAgeInDays": "7",
      "storeType": "FileShare",
      "IsEncrypted": "false",
      "connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
    },
    "nodeTypes": [
      {
        "name": "NodeType0",
        "clientConnectionEndpointPort": "19000",
        "clusterConnectionEndpointPort": "19001",
        "leaseDriverEndpointPort": "19002",
        "serviceConnectionEndpointPort": "19003",
        "httpGatewayEndpointPort": "19080",
        "reverseProxyEndpointPort": "19081",
        "applicationPorts": {
          "startPort": "20001",
          "endPort": "20031"
        },
        "isPrimary": true
      }
    ],
    "fabricSettings": [
      {
        "name": "Setup",
        "parameters": [
          {
            "name": "FabricDataRoot",
            "value": "C:\\ProgramData\\SF"
          },
          {
            "name": "FabricLogRoot",
            "value": "C:\\ProgramData\\SF\\Log"
          }
        ]
      }
    ]
  }
}
It is almost identical to the demo file for an untrusted cluster from the standalone Service Fabric download, except for my VPS IPs. I enabled the Remote Registry service. I ran
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json but I got the following error:
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager handle because 5.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <Another IP Address> resulted in exception: Unable to change open service manager handle because 5.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : False
FirewallAvailable :
RpcCheckPassed :
NoConflictingInstallations :
FabricInstallable :
DataDrivesAvailable :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
   at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
I don't understand the problem. The VPSs are not locally connected; they all have public IPs. I don't know whether this may be an issue. How do I make a virtual LAN among these VPSs? Can anyone give me some direction about this error? Any help is greatly appreciated.
Edit: I used the term VM instead of VPS.
Finally I got this working. Actually, all the nodes are in a network; I thought they weren't. I enabled file sharing and tried to access the shared files from the node where I ran the configuration test to all the other nodes. I had to provide the login credentials, and then it worked like a charm.

How do I use CloudFormation resources in a Lambda function?

I have added a Redis ElastiCache section to my s-resource-cf.json (a CloudFormation template), and selected its hostname as an output.
"Resources": {
  ...snip...
  "Redis": {
    "Type": "AWS::ElastiCache::CacheCluster",
    "Properties": {
      "AutoMinorVersionUpgrade": "true",
      "AZMode": "single-az",
      "CacheNodeType": "cache.t2.micro",
      "Engine": "redis",
      "EngineVersion": "2.8.24",
      "NumCacheNodes": "1",
      "PreferredAvailabilityZone": "eu-west-1a",
      "PreferredMaintenanceWindow": "tue:00:30-tue:01:30",
      "CacheSubnetGroupName": {
        "Ref": "cachesubnetdefault"
      },
      "VpcSecurityGroupIds": [
        {
          "Fn::GetAtt": [
            "sgdefault",
            "GroupId"
          ]
        }
      ]
    }
  }
},
"Outputs": {
  "IamRoleArnLambda": {
    "Description": "ARN of the lambda IAM role",
    "Value": {
      "Fn::GetAtt": [
        "IamRoleLambda",
        "Arn"
      ]
    }
  },
  "RedisEndpointAddress": {
    "Description": "Redis server host",
    "Value": {
      "Fn::GetAtt": [
        "Redis",
        "Address"
      ]
    }
  }
}
I can get CloudFormation to output the Redis server host when running sls resources deploy, but how can I access that output from within a Lambda function?
There is nothing in this starter project template that refers to that IamRoleArnLambda, which came with the example project. According to the docs, templates are only usable for project configuration, they are not accessible from Lambda functions:
Templates & Variables are for Configuration Only
Templates and variables are used for configuration of the project only. This information is not usable in your lambda functions. To set variables which can be used by your lambda functions, use environment variables.
So, then how do I set an environment variable to the hostname of the ElastiCache server after it has been created?
You can set environment variables in the environment section of a function's s-function.json file. Furthermore, if you want to prevent those variables from being put into version control (for example, if your code will be posted to a public GitHub repo), you can put them in the appropriate files in your _meta/variables directory and then reference those from your s-function.json files. Just make sure you add a _meta line to your .gitignore file.
For example, in my latest project I needed to connect to a Redis Cloud server, but didn't want to commit the connection details to version control. I put variables into my _meta/variables/s-variables-[stage]-[region].json file, like so:
{
  "redisUrl": "...",
  "redisPort": "...",
  "redisPass": "..."
}
…and referenced the connection settings variables in that function's s-function.json file:
"environment": {
  "REDIS_URL": "${redisUrl}",
  "REDIS_PORT": "${redisPort}",
  "REDIS_PASS": "${redisPass}"
}
I then put this redis.js file in my functions/lib directory:
module.exports = () => {
  const redis = require('redis')
  const jsonify = require('redis-jsonify')
  const redisOptions = {
    host: process.env.REDIS_URL,
    port: process.env.REDIS_PORT,
    password: process.env.REDIS_PASS
  }
  return jsonify(redis.createClient(redisOptions))
}
Then, in any function that needed to connect to that Redis database, I imported redis.js:
redis = require('../lib/redis')()
(For more details on my Serverless/Redis setup and some of the challenges I faced in getting it to work, see this question I posted yesterday.)
update
CloudFormation usage has been streamlined somewhat since that comment was posted in the issue tracker. I have submitted a documentation update to http://docs.serverless.com/docs/templates-variables, and posted a shortened version of my configuration in a gist.
It is possible to refer to a CloudFormation output in a s-function.json Lambda configuration file, in order to make those outputs available as environment variables.
s-resource-cf.json output section:
"Outputs": {
  "redisHost": {
    "Description": "Redis host URI",
    "Value": {
      "Fn::GetAtt": [
        "RedisCluster",
        "RedisEndpoint.Address"
      ]
    }
  }
}
s-function.json environment section:
"environment": {
  "REDIS_HOST": "${redisHost}"
},
Usage in a Lambda function:
exports.handler = function(event, context) {
  console.log("Redis host: ", process.env.REDIS_HOST);
};
old answer
Looks like a solution was found / implemented in the Serverless issue tracker (link). To quote HyperBrain:
CF Output variables
To have your lambda access the CF output variables you have to give it the cloudformation:describeStacks access rights in the lambda IAM role.
The CF.loadVars() promise will add all CF output variables to the process'
environment as SERVERLESS_CF_OutVar name. It will add a few ms to the
startup time of your lambda.
Change your lambda handler as follows:
// Require Serverless ENV vars
var ServerlessHelpers = require('serverless-helpers-js');
ServerlessHelpers.loadEnv();

// Require Logic
var lib = require('../lib');

// Lambda Handler
module.exports.handler = function(event, context) {
  ServerlessHelpers.CF.loadVars()
    .then(function() {
      lib.respond(event, function(error, response) {
        return context.done(error, response);
      });
    });
};

Have Route 53 point to an instance instead of an IP or CNAME?

We're using Route 53 DNS to point to an EC2 instance. Is there any way to get Route 53 to point to the instance directly, instead of to an Elastic IP or CNAME?
I have multiple reasons for this:
I don't want to burn an IP.
CNAMEs are unreliable, because if an instance goes down and comes back up, the full name, ec2-X-X-X-X.compute-1.amazonaws.com, will change.
In the future, I need to spin up instances programmatically and address them with a subdomain, and I see no easy way to do this with either elastic IPs or CNAMEs.
What's the best approach?
I wrote my own solution to this problem since I was unhappy with other approaches that were presented here. Using Amazon CLI tools is nice, but they IMHO tend to be slower than direct API calls using other Amazon API libraries (Ruby for example).
Here's a link to my AWS Route53 DNS instance update Gist. It contains an IAM policy and a Ruby script. You should create a new user in the IAM panel, update it with the attached policy (with your zone ID in it), and set the credentials and parameters in the Ruby script. The first parameter is the hostname alias for your instance in your hosted zone. The instance's private hostname is aliased to <hostname>.<domain> and the instance's public hostname is aliased to <hostname>-public.<domain>.
UPDATE: Here's a link to an AWS Route53 DNS instance update init.d script that registers hostnames when the instance boots. Here's another one if you want to use AWS Route53 DNS load balancing in a similar fashion.
If you stick with Route 53, you can make a script that updates the CNAME record for that instance every time it reboots.
see this -> http://cantina.co/automated-dns-for-aws-instances-using-route-53/ (disclosure: I did not create this, though I used it as a jumping-off point for a similar situation)
Better yet, because you mentioned being able to spin up instances programmatically, this approach should guide you to that end.
see also -> http://docs.pythonboto.org/en/latest/index.html
Using a combination of CloudWatch, Route 53 and Lambda is also an option if you host at least part of your DNS in Route 53. The advantage of this is that you don't need any applications running on the instance itself.
To use this approach, you configure a CloudWatch rule to trigger a Lambda function whenever the status of an EC2 instance changes to running. The Lambda function can then retrieve the public IP address of the instance and update the DNS record in Route 53.
The Lambda could look something like this (using Node.js runtime):
var AWS = require('aws-sdk');

var ZONE_ID = 'Z1L432432423';
var RECORD_NAME = 'testaws.domain.tld';
var INSTANCE_ID = 'i-423423ccqq';

exports.handler = (event, context, callback) => {
  var retrieveIpAddressOfEc2Instance = function(instanceId, ipAddressCallback) {
    var ec2 = new AWS.EC2();
    var params = {
      InstanceIds: [instanceId]
    };
    ec2.describeInstances(params, function(err, data) {
      if (err) {
        callback(err);
      } else {
        ipAddressCallback(data.Reservations[0].Instances[0].PublicIpAddress);
      }
    });
  }

  var updateARecord = function(zoneId, name, ip, updateARecordCallback) {
    var route53 = new AWS.Route53();
    var dnsParams = {
      ChangeBatch: {
        Changes: [
          {
            Action: "UPSERT",
            ResourceRecordSet: {
              Name: name,
              ResourceRecords: [
                {
                  Value: ip
                }
              ],
              TTL: 60,
              Type: "A"
            }
          }
        ],
        Comment: "updated by lambda"
      },
      HostedZoneId: zoneId
    };
    route53.changeResourceRecordSets(dnsParams, function(err, data) {
      if (err) {
        callback(err, data);
      } else {
        updateARecordCallback();
      }
    });
  }

  retrieveIpAddressOfEc2Instance(INSTANCE_ID, function(ip) {
    updateARecord(ZONE_ID, RECORD_NAME, ip, function() {
      callback(null, 'record updated with: ' + ip);
    });
  });
}
You will need to execute the Lambda with a role that has permissions to describe EC2 instances and update records in Route53.
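A minimal policy for that role might look like the following sketch (the hosted-zone ARN reuses the example zone ID from the code above; ec2:DescribeInstances does not support resource-level scoping, so it requires Resource "*"):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/Z1L432432423"
    }
  ]
}
```

In practice the role also needs the basic Lambda logging permissions, e.g. the AWSLambdaBasicExecutionRole managed policy.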
With Route 53 you can create alias records that map to an Elastic Load Balancer (ELB):
http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/HowToAliasRRS.html
I've not tried it on an AWS EC2 instance, but it should work there too.
I've written a small Java program that detects the public IP of the machine and updates a certain record on AWS Route 53.
The only requirement is that you need Java installed on your EC2 instance.
The project is hosted at https://github.com/renatodelgaudio/awsroute53 and you are also free to modify it in case you need to.
You could configure it to run at boot time or as a crontab job, so that your record gets updated with the new public IP, following instructions similar to these Linux manual installation steps.
I used this cli53 tool to let an EC2 instance create an A record for itself during startup.
https://github.com/barnybug/cli53
I added the following lines to my rc.local (please check that your Linux distribution calls this script during startup):
IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
/usr/local/bin/cli53 rrcreate example.com "play 30 A $IP" --wait --replace
It creates an A record play.example.com pointing to the current public IP of the EC2 instance.
You need to assign an IAM role to the EC2 instance which allows the instance to manipulate Route 53. In the simplest case, just create an IAM role using the predefined policy AmazonRoute53FullAccess, then assign this role to the EC2 instance.
Assuming the EC2 instance has the aws command configured with proper permissions, the following shell script does it:
#!/bin/bash
IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
PROFILE="dnsuserprofile"
ZONE="XXXXXXXXXXXXXXXXXXXXX"
DOMAIN="my.domain.name"
TMPFILE="/tmp/updateip.json"
cat << EOF > $TMPFILE
{
  "Comment": "Updating instance IP address",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "$DOMAIN",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "$IP"
          }
        ]
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets --profile $PROFILE --hosted-zone-id $ZONE --change-batch file://$TMPFILE > /dev/null && \
rm $TMPFILE
Set that script to run on reboot, for example in cron:
@reboot /home/ec2-user/bin/updateip
The IAM policy can be as narrow as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/XXXXXXXXXXXXXXXXXXXXX"
    }
  ]
}
