I'm launching Windows EC2 instances using Salt Cloud. However, I'm unable to set the security group: instead of getting the SG I specify, the instance gets the 'default' security group.
Here's my cloud profile definition:
ec2_private_win_app1_c4.2xlarge:
  provider: company-nonpod-us-east-1
  image: ami-xxxxxx
  size: c4.2xlarge
  network_interfaces:
    - DeviceIndex: 0
      PrivateIpAddresses:
        - Primary: True
      # auto assign public ip (not EIP)
      AssociatePublicIpAddress: False
      SubnetId: subnet-xxxxx
      SecurityGroupId: sg-xxxxxx
  block_device_mappings:
    - DeviceName: /dev/sda1
      Ebs.VolumeSize: 120
      Ebs.VolumeType: gp2
    - DeviceName: /dev/sdf
      Ebs.VolumeSize: 100
      Ebs.VolumeType: gp2
The YAML checks out when I parse it with an online YAML checker. What can I do differently to get the security group I specify instead of the 'default' security group?
If the security group (SG) is not in the same Virtual Private Cloud (VPC) as the subnet you are specifying, it will fail to apply that security group to the instance, and you end up with the VPC's 'default' group.
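A quick way to picture the constraint: the subnet and the security group must both reference the same VPC. In CloudFormation terms it would look roughly like this (a hypothetical sketch, not part of the salt-cloud profile; all resource names are made up):

AppVpc:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: 10.0.0.0/16

AppSubnet:                        # stands in for subnet-xxxxx
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref AppVpc
    CidrBlock: 10.0.1.0/24

AppSecurityGroup:                 # stands in for sg-xxxxxx
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: app traffic
    VpcId: !Ref AppVpc            # must be the same VPC as the subnet

If the two VpcId values differ, the SG cannot be attached to an instance launched in that subnet.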
I am following this guide to consume secrets: https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#secrets-propertysource.
It says, roughly:
1. Save the secrets.
2. Reference the secrets in your deployment.yml file:
containers:
  - env:
      - name: DB_USERNAME
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: username
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: password
Then it says "You can select the Secrets to consume in a number of ways:" and gives three examples. However, without doing any of these steps I can already see the secrets in my env perfectly. Furthermore, the operations in step 1 and step 2 operate independently of Spring Boot (they save the secrets and move them into environment variables).
My questions:
1. If I make the changes suggested in step 3, what changes/improvements does it bring for my container/app/pod?
2. Is there no way to avoid all the mapping in step 1 and put all the secrets into the env?
3. They write -Dspring.cloud.kubernetes.secrets.paths=/etc/secrets to source all secrets; how did they know the secrets were in a folder called /etc/secrets?
You can expose all entries of a secret as environment variables in the following way:
containers:
  - name: app
    envFrom:
      - secretRef:
          name: db-secret
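This assumes a secret shaped roughly like the following (a sketch; the key names match your deployment.yml, the values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:             # convenience field; the API stores values base64-encoded
  username: myuser      # placeholder value
  password: mypassword  # placeholder value

Note that with envFrom the environment variables are named after the secret's keys (username and password here), not DB_USERNAME/DB_PASSWORD; keep the explicit valueFrom mapping (or add a prefix) if you need specific names.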
As for where Spring gets the secrets from: I'm not an expert in Spring, but there already seems to be an explanation in the link you provided:
When enabled, the Fabric8SecretsPropertySource looks up Kubernetes for Secrets from the following sources:
- Reading recursively from secrets mounts
- Named after the application (as defined by spring.application.name)
- Matching some labels
So it takes secrets from the secrets mounts (if you mount them as volumes). It also scans the Kubernetes API for secrets, presumably in the same namespace the app is running in. It can do that through the Kubernetes serviceaccount token, which by default is always mounted into the pod; whether the lookup succeeds depends on the RBAC permissions granted to the pod's serviceaccount.
So it searches for secrets via the Kubernetes API and matches them against the application name or application labels.
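For that API-based lookup to work, the pod's serviceaccount needs permission to read secrets. A minimal sketch of the RBAC involved, assuming the app runs in the default namespace with the default serviceaccount (adjust names to your setup):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: default
rules:
  - apiGroups: [""]           # "" = core API group, where Secrets live
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reads-secrets
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io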
I've been following the guide on https://cube.dev/docs/deployment#express-with-basic-passport-authentication to deploy Cube.js to Lambda. I got it working against an Athena db such that the /meta endpoint works successfully and returns schemas.
When trying to query Athena data in Lambda however, all requests are resulting in 504 Gateway Timeouts. Checking the CloudWatch logs I see one consistent error:
/bin/sh: hostname: command not found
Any idea what this could be?
Here's my server.yml:
service: tw-cubejs

provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "sns:*"
        # Athena permissions
        - "athena:*"
        - "s3:*"
        - "glue:*"
      Resource:
        - "*"
  # When you uncomment vpc please make sure lambda has access to internet: https://medium.com/@philippholly/aws-lambda-enable-outgoing-internet-access-within-vpc-8dd250e11e12
  vpc:
    securityGroupIds:
      # Your DB and Redis security groups here
      - ########
    subnetIds:
      # Put here subnet with access to your DB, Redis and internet. For internet access 0.0.0.0/0 should be routed through NAT only for this subnet!
      - ########
      - ########
      - ########
      - ########
  environment:
    CUBEJS_AWS_KEY: ########
    CUBEJS_AWS_SECRET: ########
    CUBEJS_AWS_REGION: ########
    CUBEJS_DB_TYPE: athena
    CUBEJS_AWS_S3_OUTPUT_LOCATION: ########
    CUBEJS_JDBC_DRIVER: athena
    REDIS_URL: ########
    CUBEJS_API_SECRET: ########
    CUBEJS_APP: "${self:service.name}-${self:provider.stage}"
    NODE_ENV: production
    AWS_ACCOUNT_ID:
      Fn::Join:
        - ""
        - - Ref: "AWS::AccountId"

functions:
  cubejs:
    handler: cube.api
    timeout: 30
    events:
      - http:
          path: /
          method: GET
      - http:
          path: /{proxy+}
          method: ANY
  cubejsProcess:
    handler: cube.process
    timeout: 630
    events:
      - sns: "${self:service.name}-${self:provider.stage}-process"

plugins:
  - serverless-express
Even though this hostname error message appears in the logs, it isn't the cause of the issue.
Most probably you're experiencing the issue described here:
@cubejs-backend/serverless uses an internet connection to access the messaging API, as well as Redis inside the VPC for managing the queue and cache.
One of those doesn't work in your environment.
Such timeouts usually mean there's a problem with either the internet connection or the Redis connection. If it's Redis, you'll usually see timeouts after 5 minutes or so in both the cubejs and cubejsProcess functions. If it's the internet connection, you'll never see any query-processing logs in the cubejsProcess function.
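If it's the internet connection, double-check the NAT routing called out in the comments of your config. In CloudFormation terms, the route for the Lambda's private subnets would look roughly like this (a hypothetical sketch; resource names are made up):

LambdaSubnetDefaultRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref LambdaPrivateRouteTable   # route table of the subnets in your vpc config
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway                # NAT gateway living in a public subnet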
Check the version of Cube.js you are using; according to the changelog, this issue should have been fixed in 0.10.59.
It's most likely down to a dependency of Cube.js assuming that every environment it runs in can execute the hostname shell command (it looks like it's using node-machine-id).
I am using the cloud.google.com/go SDK to programmatically provision the GKE clusters with the required configuration.
I set the ClientCertificateConfig.IssueClientCertificate = true (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#ClientCertificateConfig).
After the cluster is provisioned, I use the ca_certificate, client_key, client_secret returned for the same cluster (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#MasterAuth). Now that I have the above 3 attributes, I try to generate the kubeconfig for this cluster (to be used later by helm).
Roughly, my kubeconfig looks something like this:
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: <base64_encoded_data>
      server: https://X.X.X.X
    name: gke_<project>_<location>_<name>
contexts:
  - context:
      cluster: gke_<project>_<location>_<name>
      user: gke_<project>_<location>_<name>
    name: gke_<project>_<location>_<name>
current-context: gke_<project>_<location>_<name>
kind: Config
preferences: {}
users:
  - name: gke_<project>_<location>_<name>
    user:
      client-certificate-data: <base64_encoded_data>
      client-key-data: <base64_encoded_data>
On running kubectl get nodes with the above config, I get this error:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list resource "serviceaccounts" in API group "" at the cluster scope
Interestingly, if I use the config generated by gcloud, the only change is in the user section:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
This configuration seems to work just fine. But as soon as I add the client cert and client key data to it, it breaks:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
  client-certificate-data: <base64_encoded_data>
  client-key-data: <base64_encoded_data>
I believe I'm missing some details related to RBAC, but I'm not sure what. Could you provide some info here?
Also, referring to this question, I first tried to rely only on a username/password combination, using that to apply a new clusterrolebinding in the cluster (a sketch of the binding I mean is below). But I'm unable to use just the username/password approach. I get the following error:
error: You must be logged in to the server (Unauthorized)
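For reference, the clusterrolebinding I've been trying to apply looks roughly like this (a sketch; I'm assuming the user to bind is client, since that's the name reported in the Forbidden error):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: client-cert-admin
subjects:
  - kind: User
    name: client                   # CN of the issued client certificate
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin              # or a narrower ClusterRole
  apiGroup: rbac.authorization.k8s.io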
I've been working with AWS SAM Local to create and test a lambda / api gateway stack before shipping it to production. I recently ran into a brick wall when trying to access private resources (RDS) while testing locally (sam local start-api --profile [profile]). I'm able to connect to some of these private resources if I do some SSH tunneling, but I was wondering whether I can test locally, without tunneling, using the VPC config.
Below is an example SAM template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example Stack
Globals:
  Function:
    Timeout: 3
Resources:
  ExampleFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.example
      Runtime: nodejs8.10
      CodeUri: .
      Description: 'Just an example'
      MemorySize: 128
      Role: 'arn:aws:iam::[arn-role]'
      VpcConfig:
        SecurityGroupIds:
          - sg-[12345]
        SubnetIds:
          - subnet-[12345]
          - subnet-[23456]
          - subnet-[34567]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /example
            Method: GET
After reading through a lot of documentation and searching Stack Overflow for anything that would help, I ended up joining the #samdev Slack channel and asking for help. I was given some guidance and a great guide on setting up OpenVPN on an EC2 instance.
The setup was super easy (completed in under 30 minutes) and the EC2 instance uses a pre-baked AMI image. Make sure you assign the new EC2 instance to the appropriate VPC containing the resources you need access to.
Here is a link to the OpenVPN guide: https://openvpn.net/index.php/access-server/on-amazon-cloud.html
You can request an invite to the #samdev slack channel here: https://awssamopensource.splashthat.com/
I am trying to set up Elasticsearch on 2 EC2 nodes.
I have the plugin installed and my config has the following:
cloud:
  aws:
    access_key: KEY
    secret_key: KEY
discovery:
  type: ec2
  ec2:
    groups: security-group
The nodes only discover each other if I both specify this config and assign an EIP to each one. Why do I need an EIP assigned?
A while ago I had a NAT instance, and I needed neither the EIP nor the cloud: section in the config.
We had some issues getting nodes within the cluster to see each other in an AWS EC2 setup, and we were seeing a timeout issue as well. It turned out we needed to add a self-reference to the security group (within the AWS console) to get the instances to see each other.
E.g. within the security group settings, have the following entry:

TCP Port (Service)    Source
0 - 65535             sg-xxxxx (security-group)

Once we added this, the discovery worked as expected.
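The same rule in CloudFormation terms, in case that's easier to reuse (a hypothetical sketch; sg-xxxxx stands for your actual group ID):

ClusterSelfIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-xxxxx                # the group the instances use
    IpProtocol: tcp
    FromPort: 0
    ToPort: 65535
    SourceSecurityGroupId: sg-xxxxx  # self-reference: members can reach each other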
Try this config:
cloud:
  aws:
    access_key: KEY
    secret_key: KEY
discovery:
  type: ec2
  ec2:
    groups: security-group
    availability_zones: ap-southeast-1a,ap-southeast-1b
    tag:
      stage: production
And add a "stage" tag to the instances (see the sketch below).
PS: security-group here means whichever security group is assigned to your instances.
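Tagging in CloudFormation terms would look roughly like this (a hypothetical sketch; only the Tags entry matters for the discovery filter above):

ElasticsearchNode:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-xxxxxx        # placeholder
    InstanceType: m3.medium    # placeholder
    Tags:
      - Key: stage
        Value: production      # matched by discovery.ec2.tag.stage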