Creating RDS CloudFormation with Route 53 - amazon-ec2

I am having trouble with AWS CloudFormation. I need to create a CloudFormation template that will install and configure RDS with RHEL and MariaDB, with Route 53 and a master user. I started with a basic config.yaml, but I am getting an error with the VPC. It says
No default VPC for this user (Service: AmazonEC2; Status Code: 400;
Error Code: VPCIdNotSpecified; Request ID:
407bd74c-9b85-4cce-b5a7-b816fe7aea15)
My config.yaml is this:
Resources:
  Ec2Instance1:
    Type: 'AWS::EC2::Instance'
    Properties:
      SecurityGroups:
        - !Ref InstanceSecurityGroup
      KeyName: adivir
      ImageId: ami-07dfba995513840b5
      AvailabilityZone: eu-central-1
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          yum install -y httpd
          yum install -y git
          yum install -y php php-mysql
          git clone https://github.com/demoglot/php.git /var/www/html
          systemctl restart httpd
          systemctl enable httpd
  Ec2Instance2:
    Type: 'AWS::EC2::Instance'
    Properties:
      SecurityGroups:
        - !Ref InstanceSecurityGroup
      KeyName: adivir
      ImageId: ami-07dfba995513840b5
      AvailabilityZone: eu-central-1
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          yum install -y httpd
          yum install -y git
          git clone https://github.com/demoglot/php.git /var/www/html
          systemctl restart httpd
          systemctl enable httpd
  InstanceSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Enable SSH access
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '2256'
          ToPort: '2256'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
  ElasticLoadBalancer:
    Type: 'AWS::ElasticLoadBalancing::LoadBalancer'
    Properties:
      AvailabilityZones:
        - eu-central-1
        - eu-central-1b
      Listeners:
        - InstancePort: '80'
          LoadBalancerPort: '80'
          Protocol: HTTP
      HealthCheck:
        Target: 'HTTP:80/'
        HealthyThreshold: '3'
        UnhealthyThreshold: '5'
        Interval: '30'
        Timeout: '5'
      Instances:
        - !Ref Ec2Instance1
        - !Ref Ec2Instance2
  DBSECURITYGROUP:
    Type: 'AWS::RDS::DBSecurityGroup'
    Properties:
      GroupDescription: Security Group for RDS private access
      DBSecurityGroupIngress:
        - CIDRIP: 0.0.0.0/0
  MyDB:
    Type: 'AWS::RDS::DBInstance'
    DeletionPolicy: Snapshot
    Properties:
      DBName: kk
      AllocatedStorage: '20'
      DBInstanceClass: db.t2.micro
      Engine: MariaDB
      EngineVersion: '10.1.31'
      MasterUsername: admin
      MasterUserPassword: admin123
      DBSecurityGroups:
        - !Ref DBSECURITYGROUP
      Tags:
        - Key: name
          Value: kk
What do I need to do to resolve the VPC error and have RDS create successfully, and how and where do I add the Route 53 creation in the YAML file? Also, the database needs to be connected to a Java app that is on another instance. What do I need to share with the person making the app so that he can connect to the database? Also, is it possible to have one shell script that runs the CloudFormation templates in order, creates the stacks and then exits, so that not every team member needs to run his own CloudFormation? Thank you

The solution to this problem, and why it occurs, has been documented and explained in a recent AWS blog post:
How do I resolve the CloudFormer error "No default VPC found for this user" in AWS CloudFormation?
Basically, the solution is to create a new default VPC.
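If the default VPC was deleted from your account, you can recreate one with the AWS CLI (a minimal sketch; the region is an assumption based on your template):

  # Recreate a default VPC in the region your stack targets
  aws ec2 create-default-vpc --region eu-central-1

Alternatively, define a VPC and subnets in the template itself and reference them explicitly from the instances and the RDS instance.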
P.S.
I also agree with @mokugo-devops: you ask too many sub-questions, which limits the focus and precision of your main question and the issue you have reported.
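As for the scripting sub-question: yes, one shell script can create the stacks in order, because aws cloudformation deploy blocks until each stack reaches a terminal state. A minimal sketch (the stack names and template files are placeholders, not from the original post):

  #!/bin/bash -e
  # Each deploy call waits for its stack to finish before the next one starts
  aws cloudformation deploy --stack-name network --template-file network.yaml
  aws cloudformation deploy --stack-name database --template-file rds.yaml
  aws cloudformation deploy --stack-name dns --template-file route53.yaml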

Related

Reducing over 30 seconds cold start on AWS API Gateway + Lambda

I've been facing an extremely slow cold start on Lambda Functions deployed in Docker containers together with an API Gateway.
Tech Stack:
FastAPI
Mangum (https://mangum.io/)
API Gateway
AWS Lambda
To do the deployment, I've been using AWS SAM with the following template file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  demo
Resources:
  AppFunction:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 118
      MemorySize: 3008
      CodeUri: app/
      PackageType: Image
      Events:
        ApiEvent:
          Properties:
            RestApiId:
              Ref: FastapiExampleGateway
            Path: /{proxy+}
            Method: ANY
            Auth:
              ApiKeyRequired: true
          Type: Api
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: .
  FastapiExampleGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      OpenApiVersion: '3.0.0'
      # Timeout: 30
      Auth:
        ApiKeyRequired: true
        UsagePlan:
          CreateUsagePlan: PER_API
          UsagePlanName: GatewayAuthorization
Outputs:
  Api:
    Description: "API Gateway endpoint URL for Prod stage for App function"
    Value: !Sub "https://${FastapiExampleGateway}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
The lambda is relatively light, with the following requirements installed:
jsonschema==4.16.0
numpy==1.23.3
pandas==1.5.0
pandas-gbq==0.17.8
fastapi==0.87.0
uvicorn==0.19.0
PyYAML==6.0
SQLAlchemy==1.4.41
pymongo==4.3.2
google-api-core==2.10.1
google-auth==2.11.0
google-auth-oauthlib==0.5.3
google-cloud-bigquery==3.3.2
google-cloud-bigquery-storage==2.16.0
google-cloud-core==2.3.2
google-crc32c==1.5.0
google-resumable-media==2.3.3
googleapis-common-protos==1.56.4
mangum==0.11.0
And the Dockerfile I'm using for the deployment is:
FROM public.ecr.aws/lambda/python:3.9
WORKDIR /code
RUN pip install pip --upgrade
COPY ./api/requirements.txt /code/api/requirements.txt
RUN pip install --no-cache-dir -r /code/api/requirements.txt
COPY ./api /code/api
EXPOSE 7777
CMD ["api.main.handler"]
ENV PYTHONPATH "${PYTHONPATH}:/code/"
This leads to a 250 MB image.
On the first Lambda pull, I'm seeing a very long start-up before the actual Lambda execution (log screenshot omitted). It reaches the point where API Gateway times out due to the maximum 30-second response!
Local tests using sam local start-api work fine.
I've tried increasing the Lambda function RAM to higher values.
Not sure if this is a problem with Mangum (the wrapper for FastAPI)?

Having CloudFormation wait for the user data

I have a CloudFormation stack which creates an EC2 instance and installs something on it using UserData. CloudFormation immediately reports CREATE_COMPLETE upon creation of the EC2 instance (based on Red Hat). But at this point the instance is not really usable, since the user data takes about 40 minutes to finish. I read through the documentation and even tried cfn-signal, but I could not successfully execute it.
Can someone tell me how exactly it has to be done?
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    CreditSpecification:
      CPUCredits: standard
    IamInstanceProfile:
      Fn::ImportValue:
        !Sub ${InstanceProfileStackName}-instanceProfile
    ImageId: !Ref ImageId
    InstanceInitiatedShutdownBehavior: stop
    InstanceType: !Ref InstanceType
    SubnetId: !Ref SubnetId
    SecurityGroupIds:
      - !Ref DefaultSecurityGroup
      - !Ref WebSecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        set -e
        yum update -y
The above is the truncated part of my CloudFormation template.
UPDATE
I have a script which contains the following line:
source scl_source enable rh-python36
The default Python on my instance is 2.7, but I had to install my pip packages with Python 3.6. I am not sure if that was making cfn-signal fail.
The script runs until the final step and seems to fail there. I am creating a record set from the EC2 IP, but CloudFormation still thinks the EC2 instance is not done and waits until the timeout.
(Screenshot of the instance and the end of the log file omitted.)
Also, my log file is named /var/log/cloud-init.log. There was no cloud-init-output.log in that directory.
You need two components:
A CreationPolicy, so that CFN waits for a SUCCESS signal from the instance.
The cfn-signal helper script, to perform the signalling.
Thus your template could be modified as follows for Red Hat 8:
EC2Instance:
  Type: AWS::EC2::Instance
  CreationPolicy: # <--- creation policy with timeout of 5 minutes
    ResourceSignal:
      Timeout: PT5M
  Properties:
    CreditSpecification:
      CPUCredits: standard
    IamInstanceProfile:
      Fn::ImportValue:
        !Sub ${InstanceProfileStackName}-instanceProfile
    ImageId: !Ref ImageId
    InstanceInitiatedShutdownBehavior: stop
    InstanceType: !Ref InstanceType
    SubnetId: !Ref SubnetId
    SecurityGroupIds:
      - !Ref DefaultSecurityGroup
      - !Ref WebSecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -x
        yum update -y
        yum -y install python2-pip
        pip2 install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
        python2 /usr/bin/cfn-signal -e $? \
          --stack ${AWS::StackName} \
          --resource EC2Instance \
          --region ${AWS::Region}
For debugging, as the user data may error out, you have to log in to the instance and check the /var/log/cloud-init-output.log file.
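For example (a sketch, assuming SSH access and the default ec2-user of a Red Hat AMI):

  # Follow the user-data output live while the stack is still waiting for the signal
  ssh ec2-user@<instance-ip> 'sudo tail -f /var/log/cloud-init-output.log'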
I could recreate your error and fix it. Here is the corrected template, building on the answer from Marcin:
EC2Instance:
  Type: AWS::EC2::Instance
  CreationPolicy:
    ResourceSignal:
      Timeout: PT5M # Specify the time here
  Properties:
    CreditSpecification:
      CPUCredits: standard
    IamInstanceProfile:
      Fn::ImportValue:
        !Sub ${InstanceProfileStackName}-instanceProfile
    ImageId: !Ref ImageId
    InstanceInitiatedShutdownBehavior: stop
    InstanceType: !Ref InstanceType
    SubnetId: !Ref SubnetId
    SecurityGroupIds:
      - !Ref DefaultSecurityGroup
      - !Ref WebSecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -ex
        yum update -y
        source scl_source enable rh-python36
        <Your additional commands>
        cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
You might want to double-check the indentation before trying.

I can't connect mysql client to RDS through VPN

I've been really struggling to use AWS RDS. All the networking configuration is a real pain, since I have no skills in networking and I don't like it either.
My goal is to create my MySQL DB on RDS, be able to connect to it through any MySQL client, run my SQL script to create the DB, and execute my lambdas to insert data into this DB.
So,
mysql client --> RDS (mysql) <-- lambdas
They all need to connect to each other.
After many weeks of research trying to understand all the networking things around AWS, and copying examples from one place and another,
I've got the following scenario:
I have a VPC, public and private subnets, security groups, EIPs, RDS and VPN all in my cloud formation template.
I can deploy everything ok, all seems to be working.
I can connect to my VPN and ping the private IP of my EIP.
But still I can't connect my mysql client to my RDS. So, I can't run my SQL script and I can't test my lambdas to see if they are really connecting to my RDS.
This is the part of my configuration that I'm guessing could be related to the problem, but as you can imagine, my lack of networking knowledge makes it hard to tell.
The only thing that comes to my mind is that VPN and RDS are not part of the same subnets.
Full configuration: https://gist.github.com/serraventura/ec17d9a09c706e7ace1fd3e3be9972aa
RouteTableDB only ever connects to private subnets, while the VPN (EC2) only connects to a public subnet.
SubnetRouteTableAssociationPrivateDB1:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId:
      Ref: RouteTableDB
    SubnetId:
      Ref: SubnetDBPrivate1
SubnetRouteTableAssociationPrivateDB2:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId:
      Ref: RouteTableDB
    SubnetId:
      Ref: SubnetDBPrivate2
SubnetRouteTableAssociationPrivate1:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId:
      Ref: RouteTableDB
    SubnetId:
      Ref: SubnetPrivate1
SubnetRouteTableAssociationPrivate2:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref RouteTableDB
    SubnetId: !Ref SubnetPrivate2
RDS and VPN:
RDSMySQL:
  Type: AWS::RDS::DBInstance
  DeletionPolicy: Delete
  Properties:
    AllocatedStorage: ${self:custom.infra.allocatedStorage}
    DBInstanceClass: ${self:custom.infra.dbInstanceClass}
    Engine: ${self:custom.infra.engine}
    DBInstanceIdentifier: ${self:custom.app.dbName}
    DBName: ${self:custom.app.dbName}
    MasterUsername: ${self:custom.app.dbUser}
    MasterUserPassword: ${self:custom.app.dbPass}
    DBSubnetGroupName:
      Ref: myDBSubnetGroup
    MultiAZ: ${self:custom.infra.multiAZ}
    PubliclyAccessible: true
    StorageType: gp2
    VPCSecurityGroups:
      - Ref: RDSSecurityGroup
VPNEIP:
  Type: AWS::EC2::EIP
  Properties:
    InstanceId:
      Ref: VPNEC2Machine
    Domain: vpc
VPNEC2Machine:
  Type: AWS::EC2::Instance
  Properties:
    KeyName: ${self:custom.infra.ec2KeyPairName.${self:provider.region}}
    ImageId: ${self:custom.infra.openVPNAMI.${self:provider.region}}
    InstanceType: ${self:custom.infra.instanceType}
    AvailabilityZone: ${self:provider.region}a
    Monitoring: true
    SecurityGroupIds:
      - Ref: VPNSecurityGroup
    SubnetId:
      Ref: SubnetPublic1
    Tags:
      - Key: Name
        Value: ${self:custom.companyName} OpenVPN ${self:provider.stage}
VPNRouteRecordSet:
  Type: AWS::Route53::RecordSet
  DependsOn:
    - VPNEC2Machine
    - VPNEIP
  Properties:
    HostedZoneName: ${self:custom.domains.base}.
    Comment: Record for the VPN subdomain
    Name: vpn-${self:provider.stage}.${self:custom.domains.base}.
    Type: A
    TTL: 60
    ResourceRecords:
      - Ref: VPNEIP
VPNSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow Access From machines to the VPN and Private Network
    VpcId:
      Ref: VPCStaticIP
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: ${self:custom.app.dbPort}
        ToPort: ${self:custom.app.dbPort}
        CidrIp: 0.0.0.0/0
        Description: 'Postgres Port'
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0
        Description: 'SSH Port'
      - IpProtocol: udp
        FromPort: 1194
        ToPort: 1194
        CidrIp: 0.0.0.0/0
        Description: 'OpenVPN Server Access Port'
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0
        Description: 'OpenVPN HTTPS Admin Port'
      - IpProtocol: tcp
        FromPort: 943
        ToPort: 943
        CidrIp: 0.0.0.0/0
        Description: 'OpenVPN Server Port'
    Tags:
      - Key: Name
        Value: ${self:custom.companyName} VPN SG ${self:provider.stage}
Your RDS instance is accepting inbound connections on 3306 from the LambdaSecurityGroup, which is fine for anything with that SG attached to it, but you also need to allow connections from your VPNSecurityGroup.
Change your RDSSecurityGroup block to look as follows, and that should allow you to connect to RDS from your VPN:
RDSSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow MySQL access from lambda subnets
    VpcId:
      Ref: VPCStaticIP
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: '3306'
        ToPort: '3306'
        SourceSecurityGroupId:
          Ref: LambdaSecurityGroup
      - IpProtocol: tcp
        FromPort: '3306'
        ToPort: '3306'
        SourceSecurityGroupId:
          Ref: VPNSecurityGroup
    Tags:
      - Key: Name
        Value: RDSSecurityGroup
As a side note, the VPNSecurityGroup is accepting connections from anywhere on ports 3306, 22, 1194, 443, and 943. This may be intentional, but given that these are exposed for management purposes, it is not best practice. You should give serious consideration to scoping the CidrIp for those ports to trusted sources, to avoid any potential unwanted exposure. You may also wish to consider removing the 3306 block altogether, as it seems unnecessary to have that port open on the VPN itself.
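For example, the SSH rule could be scoped to a trusted range (a sketch; 203.0.113.0/24 is a placeholder, replace it with your own trusted CIDR):

  - IpProtocol: tcp
    FromPort: 22
    ToPort: 22
    CidrIp: 203.0.113.0/24 # placeholder: your office/VPN egress range
    Description: 'SSH Port (restricted)'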
EDIT: As per the OP's comments, in addition to the above, you also need to change PubliclyAccessible to false to resolve the issue.
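That is a one-line change in the RDSMySQL resource above:

  RDSMySQL:
    Type: AWS::RDS::DBInstance
    Properties:
      # ...other properties unchanged...
      PubliclyAccessible: false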
I would like to give a full answer to the question, since the title implies a problem just with the mysql client not being able to connect to RDS.
Along with @hephalump's changes, I had to make two more changes to also enable my lambdas to connect to RDS, and now I'm able to connect both the mysql client and the lambdas.
I had to create a new IAM role for my lambdas:
LambdaRole:
  Type: AWS::IAM::Role
  Properties:
    Path: '/'
    RoleName: LambdaRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: ec2LambdaPolicies
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - ec2:CreateNetworkInterface
                - ec2:DescribeNetworkInterfaces
                - ec2:DetachNetworkInterface
                - ec2:DeleteNetworkInterface
              Resource: "*"
      - PolicyName: 'AllowInvoke'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: 'Allow'
              Action: 'lambda:InvokeFunction'
              Resource: '*'
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
The important bit here to solve the problem seems to be this:
ManagedPolicyArns:
  - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
Then I had to remove my iamRoleStatements from my provider and add the new role with role: LambdaRole.
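In serverless.yml terms that looks roughly like this (a sketch; only the role line is the relevant change):

  provider:
    name: aws
    role: LambdaRole # logical ID of the IAM role defined under resources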
And then I needed to add my lambdas to the right security group, so I changed the vpc block on my provider to be:
vpc:
  securityGroupIds:
    - Ref: LambdaSecurityGroup
  subnetIds:
    - Ref: SubnetPrivate1
I've updated the gist with the latest changes.

Spring Boot + Google Kubernetes + Google Cloud SQL not working

I am trying to push a Spring Boot application to Google Kubernetes (Google Container Engine).
I have performed all the steps given in the link below:
https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..%2Findex#0
When I try to perform step 9 and open http://<external-ip>:8080 in the browser, it is not reachable.
Yes, I got an external IP address, and I am able to ping that IP address.
Let me know if any other information is required.
The logging shows that it is not able to connect to the database.
Error:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.
I hope you have created a cluster in Google Container Engine.
Follow the first 5 steps given in this link:
https://cloud.google.com/sql/docs/mysql/connect-container-engine
Then change the database configuration in your application (it should match step 3 of the link):
hostname: 127.0.0.1
port: 3306 (or your MySQL port)
username: proxyuser
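For a Spring Boot application this typically means pointing the datasource at the proxy; a sketch in application.yml (the database name and password are placeholders, not from the original post):

  spring:
    datasource:
      url: jdbc:mysql://127.0.0.1:3306/<your-db-name> # the proxy listens locally in the pod
      username: proxyuser
      password: <your-db-password>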
Build the jar:
mvn package -Dmaven.test.skip=true
Create a file with the name "Dockerfile" and the content below:
FROM openjdk:8
COPY target/SpringBootWithDB-0.0.1-SNAPSHOT.jar /app.jar
EXPOSE 8080/tcp
ENTRYPOINT ["java", "-jar", "/app.jar"]
Then build, test and push the image:
docker build -t gcr.io/<project ID>/springbootdb-java:v1 .
docker run -ti --rm -p 8080:8080 gcr.io/<project ID>/springbootdb-java:v1
gcloud docker -- push gcr.io/<project ID>/springbootdb-java:v1
Follow the 6th step given in the link, create the yaml file, and deploy it:
kubectl create -f cloudsql_deployment.yaml
Run kubectl get deployment and copy the name of the deployment, then expose it:
kubectl expose deployment <deployment-name> --type=LoadBalancer
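You can then watch for the external IP to be assigned to the service:

  kubectl get services --watch # wait until EXTERNAL-IP is no longer <pending>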
My YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: conversationally
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: conversationally
    spec:
      containers:
        - image: gcr.io/<project ID>/springbootdb-java:v1
          name: web
          env:
            - name: DB_HOST
              # Connect to the SQL proxy over the local network on a fixed port.
              # Change the [PORT] to the port number used by your database
              # (e.g. 3306).
              value: 127.0.0.1:3306
            # These secrets are required to start the pod.
            # [START cloudsql_secrets]
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            # [END cloudsql_secrets]
          ports:
            - containerPort: 8080
              name: conv-cluster
        # Change [INSTANCE_CONNECTION_NAME] here to include your GCP
        # project, the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # $PROJECT:$REGION:$INSTANCE
        # Insert the port number used by your database.
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy:1.09
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=<instance name>=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
      # [END volumes]

Why does a micro instance of RH7.2 return "Non-Windows instances with a virtualization type of 'hvm' are currently not supported ...."?

My kitchen.yml looks like this:
driver:
  name: ec2
  require_chef_omnibus: true
  instance_type: t2.micro
  block_device_mappings:
    - ebs_device_name: /dev/sda1
      ebs_volume_type: standard
      ebs_virtual_name: test
      ebs_volume_size: 50
      ebs_delete_on_termination: true

transport:
  ssh_key: /home/atg/.ssh/id_rsa
  connection_timeout: 10
  connection_retries: 5

provisioner:
  name: chef_zero

# Uncomment the following verifier to leverage Inspec instead of Busser (the
# default verifier)
# verifier:
#   name: inspec

platforms:
  - name: redhat-7.2
    driver:
      image_id: ami-2051294a
    transport:
      username: root
  - name: ubuntu-14.04
    driver:
      image_id: ami-fce3c696
    transport:
      username: ubuntu

suites:
  - name: default
    run_list:
      - recipe[ssh::default]
      - recipe[python::default]
      - recipe[git::default]
      - recipe[ureka::default]
    attributes:
      ssh:
        options: {'Compression': 'yes', 'ForwardX11': 'yes', 'X11UseLocalhost': 'yes', 'UsePAM': 'no'}
kitchen converge returns:
Create failed on instance . Please see .kitchen/logs/default-redhat-72.log for more details
------Exception-------
Class: Kitchen::ActionFailed
Message: InvalidParameterCombination => Non-Windows instances with a virtualization type of 'hvm' are currently not supported for this instance type.
You are trying to use an AMI that is not compatible with the instance type, or at least something thinks you are. This is odd, because t2.micro should support HVM AMIs. I would turn up the logging to see where the error is coming from (kitchen create redhat -l debug).
