Creating a private GKE cluster - YAML

I'm currently looking into creating a private GKE cluster with a YAML (Deployment Manager) config. I tried adding private-cluster settings to the YAML file but I'm getting an error.
resources:
- name: myclus
  type: gcp-types/container-v1:projects.locations.clusters
  properties:
    parent: projects/[PROJECT_ID]/locations/[REGION]
    cluster:
      name: my-clus
      zone: [ZONE]
      network: [NETWORK]
      subnetwork: [SUBNETWORK] ### leave this field blank if using the default network ###
      initialClusterVersion: "1.13"
      nodePools:
      - name: my-clus-pool1
        initialNodeCount: 1
        autoscaling:
          enabled: true
          minNodeCount: 1
          maxNodeCount: 12
        management:
          autoUpgrade: true
          autoRepair: true
        config:
          machineType: n1-standard-1
          diskSizeGb: 15
          imageType: cos
          diskType: pd-ssd
          oauthScopes: ### Change scope to match needs ###
          - https://www.googleapis.com/auth/cloud-platform
          preemptible: false
I'm looking for it to create a private cluster with no external IPs.

Did you ever have the chance to go over this documentation?
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#public_master
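Based on those pages, a minimal sketch of the extra fields you would add under the cluster block of your Deployment Manager config might look like the following (field names follow the container v1 API; the CIDR ranges are placeholders you would need to adapt):
      # Sketch only: these go under "cluster:" alongside network/subnetwork above
      ipAllocationPolicy:
        useIpAliases: true              # private clusters must be VPC-native
      privateClusterConfig:
        enablePrivateNodes: true        # nodes get internal IPs only
        enablePrivateEndpoint: false    # set to true for a fully private master endpoint
        masterIpv4CidrBlock: 172.16.0.16/28   # placeholder /28 range for the master peering
      masterAuthorizedNetworksConfig:
        enabled: true
        cidrBlocks:
        - cidrBlock: 10.0.0.0/8         # placeholder: networks allowed to reach the master
          displayName: internal-only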
Well, I also found this other Official Google Document that can help you achieve what you want:
https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
On the "Creating the Docker Image" section there's a Dockerfile example.
Best of Luck!

Related

Elasticsearch cluster managed by Terraform with the ECK operator. Version upgrade fails

Our current Production Elasticsearch cluster for logs collection is manually managed and runs on AWS.
I'm creating the same cluster using ECK deployed with Helm under Terraform.
I was able to get all the features replicated (S3 repo for snapshots, ingest pipelines, index templates, etc.) and deployed, so the first deployment works perfectly.
But when I try to update the cluster (changing the ES version from 8.3.2 to 8.5.2) I get this error:
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to kubernetes_manifest.elasticsearch_deploy, provider "provider["registry.terraform.io/hashicorp/kubernetes"]" produced an unexpected new
│ value: .object: wrong final value type: attribute "spec": attribute "nodeSets": tuple required.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
I stripped down my elasticsearch and kibana manifests to try to isolate the problem.
Again, I had previously deployed the ECK operator with its Helm chart, and it works, because the first deployment of the cluster is flawless.
I have in my main.tf:
resource "kubernetes_manifest" "elasticsearch_deploy" {
field_manager {
force_conflicts = true
}
computed_fields = \["metadata.labels", "metadata.annotations", "spec.finalizers", "spec.nodeSets", "status"\]
manifest = yamldecode(templatefile("config/elasticsearch.yaml", {
version = var.elastic_stack_version
nodes = var.logging_elasticsearch_nodes_count
cluster_name = local.cluster_name
}))
}
resource "kubernetes_manifest" "kibana_deploy" {
field_manager {
force_conflicts = true
}
depends_on = \[kubernetes_manifest.elasticsearch_deploy\]
computed_fields = \["metadata.labels", "metadata.annotations", "spec.finalizers", "spec.nodeSets", "status"\]
manifest = yamldecode(templatefile("config/kibana.yaml", {
version = var.elastic_stack_version
cluster_name = local.cluster_name
namespace = local.stack_namespace
}))
}
and my manifests are:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  annotations:
    eck.k8s.elastic.co/downward-node-labels: "topology.kubernetes.io/zone"
  name: ${cluster_name}
  namespace: ${namespace}
spec:
  version: ${version}
  volumeClaimDeletePolicy: DeleteOnScaledownAndClusterDeletion
  monitoring:
    metrics:
      elasticsearchRefs:
      - name: ${cluster_name}
    logs:
      elasticsearchRefs:
      - name: ${cluster_name}
  nodeSets:
  - name: logging-nodes
    count: ${nodes}
    config:
      node.store.allow_mmap: false
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: ${cluster_name}
  namespace: ${namespace}
spec:
  version: ${version}
  count: 1
  elasticsearchRef:
    name: ${cluster_name}
  monitoring:
    metrics:
      elasticsearchRefs:
      - name: ${cluster_name}
    logs:
      elasticsearchRefs:
      - name: ${cluster_name}
  podTemplate:
    metadata:
      labels:
        stack_name: ${stack_name}
        stack_repository: ${stack_repository}
    spec:
      serviceAccountName: ${service_account}
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: "1"
When I change the version, testing a cluster upgrade (e.g. going from 8.3.2 to 8.5.2), I get the error mentioned at the beginning of this post.
Is it an ECK operator bug, or am I doing something wrong?
Do I need to add some other entry to 'computed_fields' and remove 'force_conflicts'?
In the end, a colleague of mine found that indeed you have to add the whole "spec" to the computed_fields, like this:
resource "kubernetes_manifest" "elasticsearch_deploy" {
field_manager {
force_conflicts = true
}
computed_fields = ["metadata.labels", "metadata.annotations", "spec", "status"]
manifest = yamldecode(templatefile("config/elasticsearch.yaml", {
version = var.elastic_stack_version
nodes = var.logging_elasticsearch_nodes_count
cluster_name = local.cluster_name
}))
}
This way I got a proper cluster upgrade, without a full cluster restart.
Underlying reason: the ECK operator makes changes to the spec section. Even if you just run terraform apply without any changes (and "spec" is not added to computed_fields), Terraform will find that something has changed and will perform an update.
It's nice that you already have a working solution. Just out of curiosity, why do you use kubernetes_manifest instead of the helm_release resource from Terraform to upgrade your ES cluster? We upgraded from 8.3.2 to 8.5.2 using helm_release and everything went smoothly.
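For comparison, a helm_release based setup might look roughly like the sketch below. This is only an illustration: the eck-stack chart name, its version and its values layout are assumptions to verify against Elastic's Helm repository.
resource "helm_release" "elastic_stack" {
  name       = "logging"
  repository = "https://helm.elastic.co"  # Elastic's public Helm repository
  chart      = "eck-stack"                # assumption: umbrella chart wrapping the Elasticsearch/Kibana resources
  version    = "0.6.0"                    # placeholder chart version
  namespace  = local.stack_namespace

  # Assumption: the chart exposes the stack version as a value, so bumping
  # var.elastic_stack_version (e.g. 8.3.2 -> 8.5.2) is what drives the upgrade.
  values = [yamlencode({
    "eck-elasticsearch" = {
      version = var.elastic_stack_version
    }
    "eck-kibana" = {
      version = var.elastic_stack_version
    }
  })]
}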

Is it possible to set EventBridge ScheduleExpression value from SSM in Serverless

I want to schedule a Lambda via AWS EventBridge. The issue is that I want to read the number used in the ScheduleExpression from the SSM parameter GCHeartbeatInterval.
The code I used is below:
heartbeat-check:
  handler: groupconsultation/heartbeatcheck.handler
  description: ${self:custom.gitVersion}
  timeout: 15
  memorySize: 1536
  package:
    include:
      - groupconsultation/heartbeatcheck.js
      - shared/*
      - newrelic-lambda-wrapper.js
  events:
    - eventBridge:
        enabled: true
        schedule: rate(2 minutes)
resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: 1
        Description: value in minute. need to convert it to seconds/milliseconds
Is this possible to achieve in serverless.yml?
The reason for reading it from SSM is that it's a heartbeat service, and the same value will be used by the FE to send a heartbeat at a set interval; the BE Lambda needs to be triggered after 2x the heartbeat interval.
It turns out it's not possible. The only solution was to pass the variable as a command-line argument, something like below:
custom:
  mySchedule: ${opt:mySchedule, 1} # Allow overrides from CLI
...
schedule: ${self:custom.mySchedule}
...
resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: ${self:custom.mySchedule}
Even if we had made the other approach work, we would still have to redeploy the application, just as we need to redeploy in this case.
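For completeness, the override is then supplied at deploy time, roughly like this (assuming a Serverless Framework version that still accepts arbitrary CLI options for ${opt:...} variables):
# hypothetical invocation; the same value feeds both the schedule and the SSM parameter
serverless deploy --mySchedule "rate(2 minutes)"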

Multiple ports and mount points in AWS ECS Fargate Task Definition using Ansible

I went through the documentation provided here
https://docs.ansible.com/ansible/latest/collections/community/aws/ecs_taskdefinition_module.html
It gives nice examples of setting up a Fargate task definition. However, it only showcases an example with a single port mapping, and no mount points are shown.
I want to dynamically add port mappings (depending on my app) and volumes/mount points.
For that I am defining my host_vars for the app as below (there can be many such apps with different mount points and ports):
---
task_count: 4
task_cpu: 1028
task_memory: 2056
app_port: 8080
My task definition YAML file looks like below:
- name: Create/Update Task Definition
  ecs_taskdefinition:
    aws_access_key: "{{....}}"
    aws_secret_key: "{{....}}"
    security_token: "{{....}}"
    region: "{{....}}"
    launch_type: FARGATE
    network_mode: awsvpc
    execution_role_arn: "{{ ... }}"
    task_role_arn: "{{ ...}}"
    containers:
      - name: "{{...}}"
        environment: "{{...}}"
        essential: true
        image: "{{ ....}}"
        logConfiguration: "{{....}}"
        portMappings:
          - containerPort: "{{app_port}}"
            hostPort: "{{app_port}}"
    cpu: "{{task_cpu}}"
    memory: "{{task_memory}}"
    state: present
I am able to create/update the task definition.
The new requirements are:
Instead of one port, we can now have multiple (or no) port mappings.
We will have multiple (or no) mount points and volumes as well.
Here is what I think the modified Ansible host_vars should look like for the ports:
task_count: 4
task_cpu: 1028
task_memory: 2056
#[container_port1:host_port1, container_port2:host_port2, container_port3:host_port3]
app_ports: [8080:80, 8081:8081, 5703:5703]
I am not sure what to do in the Ansible playbook to loop through this list of ports.
Another part of the problem is that, although I was able to create the volume and mount it in the container through the AWS console, I was not able to do the same using Ansible.
Here is what the JSON snippet for the AWS Fargate task looks like (for the volume part). There can be many such mounts depending on the application; I want to achieve that dynamically by defining mount points and volumes in host_vars:
...
"mountPoints": [
    {
        "readOnly": null,
        "containerPath": "/mnt/downloads",
        "sourceVolume": "downloads"
    }
],
...
"volumes": [
    {
        "efsVolumeConfiguration": {
            "transitEncryptionPort": null,
            "fileSystemId": "fs-ecdg222d",
            "authorizationConfig": {
                "iam": "ENABLED",
                "accessPointId": null
            },
            "transitEncryption": "ENABLED",
            "rootDirectory": "/vol/downloads"
        },
        "name": "downloads",
        "host": null,
        "dockerVolumeConfiguration": null
    }
]
I am not sure how to do that.
Official documentation offers very little help.
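Not part of the original post, but here is a sketch of one way this could be wired up: keep the host_vars in the same shape the ECS API expects, then pass the lists straight through. The variable names app_ports/app_mounts/app_volumes and the other placeholders are made up, and the exact sub-options accepted by the module's volumes parameter (especially for EFS) should be checked against your community.aws collection version.
# host_vars/<app>.yml (hypothetical layout)
app_ports:
  - containerPort: 8080
    hostPort: 80
  - containerPort: 8081
    hostPort: 8081
  - containerPort: 5703
    hostPort: 5703
app_mounts:
  - sourceVolume: downloads
    containerPath: /mnt/downloads
    readOnly: false
app_volumes:
  - name: downloads

# playbook task: empty lists simply produce no port mappings / mount points / volumes
- name: Create/Update Task Definition
  community.aws.ecs_taskdefinition:
    region: "{{ aws_region }}"
    launch_type: FARGATE
    network_mode: awsvpc
    execution_role_arn: "{{ execution_role_arn }}"
    task_role_arn: "{{ task_role_arn }}"
    cpu: "{{ task_cpu }}"
    memory: "{{ task_memory }}"
    state: present
    containers:
      - name: "{{ app_name }}"
        essential: true
        image: "{{ app_image }}"
        portMappings: "{{ app_ports | default([]) }}"
        mountPoints: "{{ app_mounts | default([]) }}"
    volumes: "{{ app_volumes | default([]) }}"   # task-level volumes referenced by sourceVolume above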

How to use an existing security group from Horizon in a Heat template

I'm a newbie with Heat YAML templates loaded by OpenStack.
I've got this command, which works fine:
openstack server create --image RHEL-7.4 --flavor std.cpu1ram1 --nic net-id=network-name.admin-network --security-group security-name.group-sec-default value instance-name
I tried to write this Heat file based on the command above:
heat_template_version: 2014-10-16
description: Simple template to deploy a single compute instance with an attached volume
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      image: RHEL-7.4
      flavor: std.cpu1ram1
      networks:
        - network: network-name.admin-network
      security_group:
        - security_group: security-name.group-sec-default
  security-group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules: security-name.group-sec-default
  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10
  my_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: my_instance }
      volume_id: { get_resource: my_volume }
      mountpoint: /dev/vdb
The stack creation failed with the following error message:
openstack stack create -t my_first.yaml First_stack
openstack stack show First_stack
.../...
| stack_status_reason | Resource CREATE failed: BadRequest: resources.my_instance: Unable to find security_group with name or id 'sec_group1' (HTTP 400) (Request-ID: req-1c5d041c-2254-4e43-8785-c421319060d0)
.../...
Thanks for helping,
According to the template guide, the rules property expects a list.
So, change the content of the template for security-group as below:
security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules: [security-name.group-sec-default]
OR
security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - security-name.group-sec-default
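For reference, the rules property of OS::Neutron::SecurityGroup normally takes a list of rule mappings rather than the name of an existing group, so a group defined in the template would usually look more like this sketch (protocol, ports and CIDR are placeholders):
security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    description: allow SSH from an internal range (placeholder rule)
    rules:
      - protocol: tcp
        direction: ingress
        port_range_min: 22
        port_range_max: 22
        remote_ip_prefix: 10.0.0.0/8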
After digging, I finally found what was wrong in my Heat file. I had to declare my instance like this:
my_instance:
  type: OS::Nova::Server
  properties:
    name: instance-name
    image: RHEL-7.4
    flavor: std.cpu1ram1
    networks:
      - network: network-name.admin-network
    security_groups: [security-name.group-sec-default]
Thanks for your support

Auto-assign IPv6 address via AWS and CloudFormation

Is there any way to have IPv6 addresses auto-assigned to EC2 instances within an autoscaling group+launch configuration?
VPC and subnets are all set up for IPv6. Manually created instances are ok.
I can also manually assign them, but I can't seem to find a way to do it in CloudFormation.
The current status is that CloudFormation support for IPv6 is workable. It is not fun or complete, but you can build a stack with it. I had to use 2 custom resources:
The first is a generic resource that I use for other things and reused here, to work around the missing ability to construct a subnet's /64 CIDR block from the VPC's auto-provided /56 network.
The other I had to add specifically to work around a bug in the EC2 API that CloudFormation calls (CloudFormation itself uses it correctly).
Here is my setup:
1. Add IPv6 CIDR block to your VPC:
VPCipv6:
  Type: "AWS::EC2::VPCCidrBlock"
  Properties:
    VpcId: !Ref VPC
    AmazonProvidedIpv6CidrBlock: true
2. Extract the network prefix for creating /64 subnets:
As explained in this answer.
VPCipv6Prefix:
  Type: Custom::Variable
  Properties:
    ServiceToken: !GetAtt [ IdentityFunc, Arn ]
    Value: !Select [ 0, !Split [ "00::/", !Select [ 0, !GetAtt VPC.Ipv6CidrBlocks ] ] ]
IdentityFunc is an "identity function" implemented in Lambda for "custom variables", as described in this answer. Unlike that linked answer, I implement the function directly in the same stack so it is easier to maintain. See here for the gist.
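The gist is not reproduced here, but such an identity function can be very small; a sketch of what it might look like inline (runtime, role name and details are illustrative, not the author's actual gist):
IdentityFunc:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.lambda_handler
    Runtime: python3.9
    Role: !GetAtt IdentityFuncRole.Arn  # assumes a basic Lambda execution role defined elsewhere
    Timeout: 30
    Code:
      ZipFile: |
        import cfnresponse
        def lambda_handler(event, context):
            # Echo the supplied Value back so it can be read via !GetAtt VPCipv6Prefix.Value
            value = event.get('ResourceProperties', {}).get('Value', '')
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {'Value': value})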
3. Add an IPv6 default route to your internet gateway:
RouteInternet6:
  Type: "AWS::EC2::Route"
  Properties:
    RouteTableId: !Ref RouteTableMain
    DestinationIpv6CidrBlock: "::/0"
    GatewayId: !Ref IGWPublicNet
  DependsOn:
    - IGWNetAttachment
IGWNetAttachment is a reference to the AWS::EC2::VPCGatewayAttachment defined in the stack. If you don't wait for it, the route may fail to be set properly.
4. Add an IPv6 CIDR block to your subnets:
SubnetA:
  Type: AWS::EC2::Subnet
  Properties:
    AvailabilityZone: !Select [ 0, !GetAZs { Ref: "AWS::Region" } ]
    CidrBlock: 172.20.0.0/24
    MapPublicIpOnLaunch: true
    # The following does not work if MapPublicIpOnLaunch is set, because of an EC2 bug
    ## AssignIpv6AddressOnCreation: true
    Ipv6CidrBlock: !Sub "${VPCipv6Prefix.Value}00::/64"
    VpcId:
      Ref: VPC
Regarding AssignIpv6AddressOnCreation being commented out: this is normally what you want to do, but apparently there's a bug in the EC2 API that prevents this from working, through no fault of CloudFormation. This is documented in this AWS forums thread, along with the solution, which I'll present next.
5. Fix the AssignIpv6AddressOnCreation problem with another lambda:
This is the lambda setup:
IPv6WorkaroundRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action:
            - sts:AssumeRole
    Path: "/"
    Policies:
      - PolicyName: !Sub "ipv6-fix-logs-${AWS::StackName}"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: arn:aws:logs:*:*:*
      - PolicyName: !Sub "ipv6-fix-modify-${AWS::StackName}"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - ec2:ModifySubnetAttribute
              Resource: "*"
IPv6WorkaroundLambda:
  Type: AWS::Lambda::Function
  Properties:
    Handler: "index.lambda_handler"
    Code: # the cfnresponse import below is required to send a response back to CFN
      ZipFile:
        Fn::Sub: |
          import cfnresponse
          import boto3
          def lambda_handler(event, context):
              if event['RequestType'] == 'Delete':
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                  return
              responseValue = event['ResourceProperties']['SubnetId']
              ec2 = boto3.client('ec2', region_name='${AWS::Region}')
              ec2.modify_subnet_attribute(AssignIpv6AddressOnCreation={'Value': True},
                                          SubnetId=responseValue)
              responseData = {}
              responseData['SubnetId'] = responseValue
              cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "CustomResourcePhysicalID")
    Runtime: python2.7
    Role: !GetAtt IPv6WorkaroundRole.Arn
    Timeout: 30
And this is how you use it:
IPv6WorkaroundSubnetA:
  Type: Custom::SubnetModify
  Properties:
    ServiceToken: !GetAtt IPv6WorkaroundLambda.Arn
    SubnetId: !Ref SubnetA
This call races with the autoscaling group to complete the setup, but it is very unlikely to lose: I ran this a few dozen times and it never failed to set the field correctly before the first instance boots.
I had a very similar issue and had a chat with AWS Support concerning this. The current state is that IPv6 support in CloudFormation is very limited.
We ended up creating Custom Resources for lots of IPv6-specific things. We have a Custom Resource that:
Enables IPv6 allocation on a subnet
Creates an Egress-Only Internet Gateway
Adds a route to the Egress-Only Internet Gateway (the built-in Route resource says it "fails to stabilize" when pointing to an EIGW)
The Custom Resources are just Lambda functions that do the "raw" API call, plus an IAM Role that grants the Lambda enough permissions to make that call.
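As an illustration of that pattern, a minimal sketch of such a custom resource for the Egress-Only Internet Gateway might look like this (resource and handler names are hypothetical; delete handling and error reporting are omitted for brevity):
EgressOnlyGatewayLambda:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.lambda_handler
    Runtime: python3.9
    Role: !GetAtt EgressOnlyGatewayRole.Arn  # needs ec2:CreateEgressOnlyInternetGateway and logs permissions
    Timeout: 60
    Code:
      ZipFile: |
        import boto3
        import cfnresponse
        def lambda_handler(event, context):
            if event['RequestType'] != 'Create':
                # Sketch only: real code should also delete the gateway on stack deletion
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                return
            ec2 = boto3.client('ec2')
            gw = ec2.create_egress_only_internet_gateway(
                VpcId=event['ResourceProperties']['VpcId'])
            gw_id = gw['EgressOnlyInternetGateway']['EgressOnlyInternetGatewayId']
            cfnresponse.send(event, context, cfnresponse.SUCCESS,
                             {'GatewayId': gw_id}, gw_id)

EgressOnlyGateway:
  Type: Custom::EgressOnlyInternetGateway
  Properties:
    ServiceToken: !GetAtt EgressOnlyGatewayLambda.Arn
    VpcId: !Ref VPC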
