I can't connect mysql client to RDS through VPN - aws-lambda

I've been really struggling to use AWS RDS. All the networking configuration is a real pain, since I have no networking skills and I don't enjoy it either.
My goal is to create my MySQL DB on RDS, be able to connect to it through any MySQL client, run my SQL script to create the schema, and execute my Lambdas to insert data into this DB.
So,
mysql client --> RDS (mysql) <-- lambdas
They all need to connect to each other.
After many weeks of research trying to understand the networking side of AWS, copying examples from one place and another, I've ended up with the following scenario:
I have a VPC, public and private subnets, security groups, EIPs, RDS, and a VPN, all in my CloudFormation template.
I can deploy everything ok, all seems to be working.
I can connect to my VPN and ping the private IP of the VPN instance (the one holding the EIP).
But I still can't connect my MySQL client to my RDS instance, so I can't run my SQL script, and I can't test whether my Lambdas are really connecting to RDS.
Below is the part of my configuration that I'm guessing could be related to the problem, but as you can imagine, my lack of networking knowledge makes it hard to tell.
The only thing that comes to mind is that the VPN and RDS are not in the same subnets.
Full configuration: https://gist.github.com/serraventura/ec17d9a09c706e7ace1fd3e3be9972aa
RouteTableDB is only ever associated with private subnets, while the VPN (EC2 instance) sits only in a public subnet.
SubnetRouteTableAssociationPrivateDB1:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId:
      Ref: RouteTableDB
    SubnetId:
      Ref: SubnetDBPrivate1
SubnetRouteTableAssociationPrivateDB2:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId:
      Ref: RouteTableDB
    SubnetId:
      Ref: SubnetDBPrivate2
SubnetRouteTableAssociationPrivate1:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId:
      Ref: RouteTableDB
    SubnetId:
      Ref: SubnetPrivate1
SubnetRouteTableAssociationPrivate2:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref RouteTableDB
    SubnetId: !Ref SubnetPrivate2
RDS and VPN:
RDSMySQL:
  Type: AWS::RDS::DBInstance
  Properties:
    AllocatedStorage: ${self:custom.infra.allocatedStorage}
    DBInstanceClass: ${self:custom.infra.dbInstanceClass}
    Engine: ${self:custom.infra.engine}
    DBInstanceIdentifier: ${self:custom.app.dbName}
    DBName: ${self:custom.app.dbName}
    MasterUsername: ${self:custom.app.dbUser}
    MasterUserPassword: ${self:custom.app.dbPass}
    DBSubnetGroupName:
      Ref: myDBSubnetGroup
    MultiAZ: ${self:custom.infra.multiAZ}
    PubliclyAccessible: true
    StorageType: gp2
    VPCSecurityGroups:
      - Ref: RDSSecurityGroup
  DeletionPolicy: Delete
VPNEIP:
  Type: AWS::EC2::EIP
  Properties:
    InstanceId:
      Ref: VPNEC2Machine
    Domain: vpc
VPNEC2Machine:
  Type: AWS::EC2::Instance
  Properties:
    KeyName: ${self:custom.infra.ec2KeyPairName.${self:provider.region}}
    ImageId: ${self:custom.infra.openVPNAMI.${self:provider.region}}
    InstanceType: ${self:custom.infra.instanceType}
    AvailabilityZone: ${self:provider.region}a
    Monitoring: true
    SecurityGroupIds:
      - Ref: VPNSecurityGroup
    SubnetId:
      Ref: SubnetPublic1
    Tags:
      - Key: Name
        Value: ${self:custom.companyName} OpenVPN ${self:provider.stage}
VPNRouteRecordSet:
  Type: AWS::Route53::RecordSet
  DependsOn:
    - VPNEC2Machine
    - VPNEIP
  Properties:
    HostedZoneName: ${self:custom.domains.base}.
    Comment: Record for the VPN subdomain
    Name: vpn-${self:provider.stage}.${self:custom.domains.base}.
    Type: A
    TTL: 60
    ResourceRecords:
      - Ref: VPNEIP
VPNSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow Access From machines to the VPN and Private Network
    VpcId:
      Ref: VPCStaticIP
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: ${self:custom.app.dbPort}
        ToPort: ${self:custom.app.dbPort}
        CidrIp: 0.0.0.0/0
        Description: 'Postgres Port'
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0
        Description: 'SSH Port'
      - IpProtocol: udp
        FromPort: 1194
        ToPort: 1194
        CidrIp: 0.0.0.0/0
        Description: 'OpenVPN Server Access Port'
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0
        Description: 'OpenVPN HTTPS Admin Port'
      - IpProtocol: tcp
        FromPort: 943
        ToPort: 943
        CidrIp: 0.0.0.0/0
        Description: 'OpenVPN Server Port'
    Tags:
      - Key: Name
        Value: ${self:custom.companyName} VPN SG ${self:provider.stage}

Your RDS instance is accepting inbound connections on 3306 from the LambdaSecurityGroup, which is fine for anything with that SG attached to it, but you also need to allow connections from your VPNSecurityGroup.
Change your RDSSecurityGroup block to look as follows, and that should allow you to connect to RDS from your VPN:
RDSSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow MySQL access from lambda subnets
    VpcId:
      Ref: VPCStaticIP
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: '3306'
        ToPort: '3306'
        SourceSecurityGroupId:
          Ref: LambdaSecurityGroup
      - IpProtocol: tcp
        FromPort: '3306'
        ToPort: '3306'
        SourceSecurityGroupId:
          Ref: VPNSecurityGroup
    Tags:
      - Key: Name
        Value: RDSSecurityGroup
As a side note, the VPNSecurityGroup accepts connections from anywhere (0.0.0.0/0) on ports 3306, 22, 1194, 443, and 943. This may be intentional, but given that these ports are exposed for management purposes, it is not best practice. You should give serious consideration to scoping the CidrIp for those ports to trusted source ranges to avoid any unwanted exposure. You may also wish to consider removing the 3306 block altogether, as it seems unnecessary to have that port open on the VPN itself.
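For example, a more tightly scoped SSH rule might look like the following sketch, where 203.0.113.0/24 is a placeholder for a trusted range of your own:

- IpProtocol: tcp
  FromPort: 22
  ToPort: 22
  CidrIp: 203.0.113.0/24 # placeholder: replace with your trusted office/VPN-client range
  Description: 'SSH Port (restricted)'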
EDIT: As per the OP's comments, in addition to the above, you also need to change PubliclyAccessible to false to resolve the issue.
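In the RDSMySQL resource shown in the question, that change is just:

RDSMySQL:
  Type: AWS::RDS::DBInstance
  Properties:
    # ...other properties unchanged...
    PubliclyAccessible: false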

I would like to give a full answer to the question, since the title implies a problem just with the MySQL client not being able to connect to RDS.
Along with #hephalump's changes, I had to make two more changes to also enable my Lambdas to connect to RDS, and now I'm able to connect the MySQL client and the Lambdas.
I had to create a new IAM role for my Lambdas:
LambdaRole:
  Type: AWS::IAM::Role
  Properties:
    Path: '/'
    RoleName: LambdaRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: ec2LambdaPolicies
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - ec2:CreateNetworkInterface
                - ec2:DescribeNetworkInterfaces
                - ec2:DetachNetworkInterface
                - ec2:DeleteNetworkInterface
              Resource: "*"
      - PolicyName: 'AllowInvoke'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: 'Allow'
              Action: 'lambda:InvokeFunction'
              Resource: '*'
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
The important bit here to solve the problem seems to be this (AWSLambdaVPCAccessExecutionRole is the managed policy that grants the ENI permissions a VPC-attached Lambda needs):
ManagedPolicyArns:
  - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
Then I had to remove iamRoleStatements from my provider section and reference the new role with role: LambdaRole.
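In serverless.yml terms that looks roughly like the following sketch (the rest of the provider block is unchanged and omitted here):

provider:
  name: aws
  # iamRoleStatements: removed
  role: LambdaRole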
Now the Lambdas also need to be in the right security group, so I changed the vpc section of my provider to:
vpc:
  securityGroupIds:
    - Ref: LambdaSecurityGroup
  subnetIds:
    - Ref: SubnetPrivate1
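Presumably both private subnets could be listed here for multi-AZ coverage; this is an assumption on my part rather than something from the gist:

vpc:
  securityGroupIds:
    - Ref: LambdaSecurityGroup
  subnetIds:
    - Ref: SubnetPrivate1
    - Ref: SubnetPrivate2 # assumed second subnet for multi-AZ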
I've updated the gist with the latest changes.

Related

Promtail deployment config on EC2: error in DescribeInstances

I'm configuring Promtail and testing it out on my EC2 instance. Running this:
./promtail-linux-amd64 -config.file=./ec2-promtail.yaml --dry-run
I get the following error:
ErrorResponse (xmlns: http://webservices.amazon.com/AWSFault/200…), Type: Sender, Code: InvalidAction, Message: "Could not find operation DescribeInstances for version 2…", RequestId: 2bbe…, caused by: expected element type … but have …
I'm checking whether my config is wrong, and whether anyone else has faced this issue.
I'm configuring Promtail on an Amazon Linux 2 instance. My instance has no internet connection (for security reasons), so I am using the STS endpoint in my region to authenticate the role.
Promtail version: 2.7.1
AWS role: has DescribeInstances and DescribeAvailabilityZones permissions
The following is my ec2_sd_config:
http_listen_port: 3100
grpc_listen_port: 0
clients:
  - url: https://loki.dev.fdp.internal/loki/api/v1/push
positions:
  filename: /opt/promtail/positions.yaml
scrape_configs:
  - job_name: ec2-logs
    ec2_sd_configs:
      - region: ap-southeast-1
        role_arn: arn:aws:iam::xxxxxx:role/promtail_role
        endpoint: sts.ap-southeast-1.amazonaws.com # define to use the regional endpoint instead of the default global one
    relabel_configs:
      - source_labels: [__meta_ec2_tag_Name]
        target_label: name
        action: replace
      - source_labels: [__meta_ec2_instance_id]
        target_label: instance
        action: replace
      - source_labels: [__meta_ec2_availability_zone]
        target_label: zone
        action: replace
      - action: replace
        replacement: /var/log/**.log
        target_label: __path__
      - source_labels: [__meta_ec2_private_dns_name]
        regex: "(.*)"
        target_label: __host__
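One thing that may be worth double-checking, as an observation rather than a confirmed fix: in the Prometheus-style ec2_sd_configs block that Promtail uses, the endpoint field overrides the EC2 API endpoint itself, so pointing it at sts.ap-southeast-1.amazonaws.com would send DescribeInstances calls to the STS service, which would be consistent with an InvalidAction error. A sketch with the override removed, assuming the instance can reach EC2 through a VPC interface endpoint instead:

ec2_sd_configs:
  - region: ap-southeast-1
    role_arn: arn:aws:iam::xxxxxx:role/promtail_role
    # endpoint omitted: it replaces the EC2 API endpoint, not the STS one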

Using a kubernetes ingress to support multiple sub domains

I have a domain, foobar.com. When I started my project, I knew I would have my webserver handling traffic for foobar.com. I also plan on having an Elasticsearch server running at es.foobar.com. I purchased my domain at GoDaddy and (maybe prematurely) purchased a single-site certificate for foobar.com. I can't change this certificate to a wildcard cert; I would have to purchase a new one. I have my DNS record routing traffic for that simple URL. I'm managing everything using Kubernetes.
Questions:
Is it possible to use my simple single-site certificate for the main site and for subdomains like my Elasticsearch server, or do I need to purchase another single-site certificate specifically for the Elasticsearch server? I checked earlier and GoDaddy wants $350 for the multi-site one.
Elasticsearch complicates this somewhat, since if it's being accessed at es.foobar.com and the cert is for foobar.com, it's going to reject any requests, right? Elasticsearch needs a cert in order to have solid security.
Is it possible to use my simple single-site certificate for the main site and subdomains?
To achieve your goal, you can use a name-based virtual hosting Ingress, since most likely your webserver (foobar.com) and Elasticsearch (es.foobar.com) run on different ports and will be available under the same IP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
    - host: foobar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: webserver
                port:
                  number: 80
    - host: es.foobar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: elastic
                port:
                  number: 9200 # http.port parameter in the Elastic config (a single port; Elasticsearch defaults to the 9200-9300 range)
It can also be implemented using a TLS private key and certificate, creating a Secret for TLS. This is possible for just one wildcard level, like *.foobar.com.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  tls:
    - hosts:
        - foobar.com
        - es.foobar.com
      secretName: "foobar-secret-tls"
  rules:
    - host: foobar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: webserver
                port:
                  number: 80
    - host: es.foobar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: elastic
                port:
                  number: 9200 # http.port parameter in the Elastic config (a single port; Elasticsearch defaults to the 9200-9300 range)
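For completeness, the foobar-secret-tls referenced above would be a standard TLS Secret; a minimal sketch, with the base64 values as placeholders for your actual cert and key:

apiVersion: v1
kind: Secret
metadata:
  name: foobar-secret-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate> # placeholder
  tls.key: <base64-encoded private key> # placeholder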
Otherwise, you need to get either a wildcard certificate or a separate certificate for the other domain.

Creating RDS cloudformation with route 53

I am having trouble with AWS CloudFormation. I need to create a CloudFormation template that will install and configure RDS with RHEL and MariaDB, along with Route 53 and a master user. I started with a basic config.yaml, but I am getting an error about the VPC; it says:
No default VPC for this user (Service: AmazonEC2; Status Code: 400;
Error Code: VPCIdNotSpecified; Request ID:
407bd74c-9b85-4cce-b5a7-b816fe7aea15)
My config.yaml is this:
Resources:
  Ec2Instance1:
    Type: 'AWS::EC2::Instance'
    Properties:
      SecurityGroups:
        - !Ref InstanceSecurityGroup
      KeyName: adivir
      ImageId: ami-07dfba995513840b5
      AvailabilityZone: eu-central-1
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          yum install -y httpd
          yum install -y git
          yum install -y php php-mysql
          git clone https://github.com/demoglot/php.git /var/www/html
          systemctl restart httpd
          systemctl enable httpd
  Ec2Instance2:
    Type: 'AWS::EC2::Instance'
    Properties:
      SecurityGroups:
        - !Ref InstanceSecurityGroup
      KeyName: adivir
      ImageId: ami-07dfba995513840b5
      AvailabilityZone: eu-central-1
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          yum install -y httpd
          yum install git -y
          git clone https://github.com/demoglot/php.git /var/www/html
          systemctl restart httpd
          systemctl enable httpd
  InstanceSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Enable SSH access
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '2256'
          ToPort: '2256'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
  ElasticLoadBalancer:
    Type: 'AWS::ElasticLoadBalancing::LoadBalancer'
    Properties:
      AvailabilityZones:
        - eu-central-1
        - eu-central-1b
      Listeners:
        - InstancePort: '80'
          LoadBalancerPort: '80'
          Protocol: HTTP
      HealthCheck:
        Target: 'HTTP:80/'
        HealthyThreshold: '3'
        UnhealthyThreshold: '5'
        Interval: '30'
        Timeout: '5'
      Instances:
        - !Ref Ec2Instance1
        - !Ref Ec2Instance2
  DBSECURITYGROUP:
    Type: 'AWS::RDS::DBSecurityGroup'
    Properties:
      GroupDescription: Security Group for RDS private access
      DBSecurityGroupIngress:
        - CIDRIP: 0.0.0.0/0
  MyDB:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBName: kk
      AllocatedStorage: '20'
      DBInstanceClass: db.t2.micro
      Engine: MariaDB
      EngineVersion: '10.1.31'
      MasterUsername: admin
      MasterUserPassword: admin123
      DBSecurityGroups:
        - !Ref DBSECURITYGROUP
      Tags:
        - Key: name
          Value: kk
    DeletionPolicy: Snapshot
What do I need to do to resolve the VPC error and have RDS created successfully, and how and where do I add the Route 53 creation in the YAML file? Also, the database needs to be connected to a Java app that is on another instance: what do I need to share with the person making the app in order for him to connect to the database? Also, is it possible to have one shell script that runs the CloudFormation templates in order, creates the stacks, and then exits, so that not every team member needs to run his own CloudFormation? Thank you.
The solution to this problem, and why it occurs, is documented and explained in this recent AWS article:
How do I resolve the CloudFormer error "No default VPC found for this user" in AWS CloudFormation?
Basically, the solution is to create a new default VPC.
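As for the Route 53 part of the question, a record for the database could go in the same template along the lines of the sketch below; the hosted zone and record names are placeholders, and it assumes a hosted zone already exists:

  DBRecordSet:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      HostedZoneName: example.com. # placeholder: your hosted zone
      Name: db.example.com.        # placeholder: DNS name for the database
      Type: CNAME
      TTL: '300'
      ResourceRecords:
        - !GetAtt MyDB.Endpoint.Address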
P.S. I also agree with #mokugo-devops: you ask too many sub-questions, which limits the focus and precision of your main question and the issue you reported.

kubernetes liveness probe is unauthorized

I am trying to define a livenessProbe that passes the value of an HTTP header from a secret, but I am getting 401 Unauthorized.
- name: mycontainer
  image: myimage
  env:
    - name: MY_SECRET
      valueFrom:
        secretKeyRef:
          name: actuatortoken
          key: token
  livenessProbe:
    httpGet:
      path: /test/actuator/health
      port: 9001
      httpHeaders:
        - name: Authorization
          value: $MY_SECRET
My secret is as follows:
apiVersion: v1
kind: Secret
metadata:
  name: actuatortoken
type: Opaque
stringData:
  token: Bearer <token>
If I pass the same header with the actual value, as below, it works as expected:
- name: mycontainer
  image: myimage
  livenessProbe:
    httpGet:
      path: /test/actuator/health
      port: 9001
      httpHeaders:
        - name: Authorization
          value: Bearer <token>
Any help is highly appreciated.
What you have will put the literal string $MY_SECRET in the Authorization header, which won't work.
You don't want to put the actual value of the secret in your Pod/Deployment/whatever YAML since you don't want plaintext credentials in there.
3 options I can think of:
a) change your app to not require authentication for the /test/actuator/health endpoint;
b) change your app to not require authentication when the requested host is 127.0.0.1 and update the probe configuration to use that as the host;
c) switch from an HTTP probe to a command probe and write the curl/wget command yourself (see the sketch below this list)
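A minimal sketch of option (c), assuming wget is available in the image and reusing the MY_SECRET env var from the original spec:

livenessProbe:
  exec:
    command:
      - sh
      - -c
      # the shell expands $MY_SECRET at probe time, unlike httpGet headers
      - 'wget -q -O /dev/null --header "Authorization: $MY_SECRET" http://127.0.0.1:9001/test/actuator/health'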
This answer is posted as Community wiki, as it comes from Amit Kumar Gupta's comments.

Openshift secret in Spring Boot bootstrap.yml

This is how my bootstrap.yml looks:
spring:
  cloud:
    config:
      uri: http://xxxx.com
      username: ****
      password: ****
    vault:
      host: vault-server
      port: 8200
      scheme: http
      authentication: token
      token: ${VAULT_ROOT_TOKEN}
  application:
    name: service-name
management:
  security:
    enabled: false
The application starts when I configure the secret as an ENV variable in the DeploymentConfig (OSE), as below:
- name: VAULT_ROOT_TOKEN
  value: *********
But configuring the ENV variable to fetch its value from an OSE secret is not working:
- name: VAULT_ROOT_TOKEN
  valueFrom:
    secretKeyRef:
      name: vault-token
      key: roottoken
The error that I am getting is:
org.springframework.vault.VaultException: Status 400 secret/service-name/default: 400 Bad Request: missing required Host header
Surprisingly, the ENV variable works within the container/pod, but somehow it cannot be fetched during the bootstrap procedure:
env | grep TOKEN
VAULT_ROOT_TOKEN=********
My secret configuration in OSE
oc describe secret vault-token
Name: vault-token
Namespace: ****
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
roottoken: 37 bytes
What is missing in my deployment config or secrets in OSE? How do I configure the secret to be fetched as an ENV variable and injected into the bootstrap.yml file?
NOTE: I can't move the Vault configuration out of bootstrap.yml.
OpenShift Enterprise info:
Version:
OpenShift Master: v3.2.1.31
Kubernetes Master: v1.2.0-36-g4a3f9c5
Finally, I was able to achieve this. This is what I have done: provide the token as an argument:
java $JAVA_OPTS -jar -Dspring.cloud.vault.token=${SPRING_CLOUD_VAULT_TOKEN} service-name.jar
This is how my configuration looks.
Deployment config:
- name: SPRING_CLOUD_VAULT_TOKEN
  valueFrom:
    secretKeyRef:
      name: vault-token
      key: roottoken
Bootstrap file:
spring:
  cloud:
    config:
      uri: http://xxxx.com
      username: ****
      password: ****
    vault:
      host: vault-server
      port: 8200
      scheme: http
      authentication: token
      token: ${SPRING_CLOUD_VAULT_TOKEN}
  application:
    name: service-name
management:
  security:
    enabled: false
Thanks to my colleagues who provided the inputs.
