My application is hosted on Heroku in the EU region, but that's all the info I get.
The EU region has two locations: Ireland and Germany. I need to know in which of these my app is located.
I can display regions using the heroku regions command, but that isn't much help on its own, as it just lists all available regions.
The output of heroku info <APPNAME> and heroku regions --json can be used in combination with the AWS IP Ranges JSON to figure out which exact region you're looking at.
When you run heroku info <APPNAME>, you will see the general region, but it's broad and not very specific. For example, in this case it's us:
=== APPNAME
Auto Cert Mgmt: false
Dynos: web: 1
Git URL: https://git.heroku.com/APPNAME.git
Owner: example@email.com
Region: us
Repo Size: 0 B
Slug Size: 37 MB
Stack: heroku-20
Web URL: https://APPNAME.herokuapp.com/
If you look at the output from heroku regions --json, you'll see an entry whose "name" matches the "Region" from heroku info. Here, that entry is "us", and its underlying AWS region is us-east-1:
...
{
"country": "United States",
"created_at": "2012-11-21T20:44:16Z",
"description": "United States",
"id": "59accabd-516d-4f0e-83e6-6e3757701145",
"locale": "Virginia",
"name": "us",
"private_capable": false,
"provider": {
"name": "amazon-web-services",
"region": "us-east-1"
},
"updated_at": "2016-08-09T22:03:28Z"
},
...
From there, the "provider" key in that region's JSON shows you the provider's name (likely AWS) and region (a more fine-grained region than the general Heroku region).
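For example, a quick way to pull that value out (a minimal sketch, assuming jq is installed and your Heroku region is us):
# Extract the underlying AWS region for the Heroku region named "us"
heroku regions --json | jq -r '.[] | select(.name == "us") | .provider.region'
# prints: us-east-1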
Now you can take that region name and look it up in the AWS IP Ranges JSON file; that will give you the IP prefixes associated with it. Note that many prefixes will match the region, because a region has many IP ranges associated with it.
Here is one example of an entry which matches the us-east-1 region.
...
{
"ip_prefix": "52.94.152.9/32",
"region": "us-east-1",
"service": "AMAZON",
"network_border_group": "us-east-1"
},
...
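If you want to pull the matching entries out directly, here is a minimal sketch with curl and jq (the IP Ranges file is published at https://ip-ranges.amazonaws.com/ip-ranges.json):
# List every prefix AWS publishes for us-east-1
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq '.prefixes[] | select(.region == "us-east-1")'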
You can use heroku info <APP-NAME> to find the Region ID. It will display output like the following:
=== APP-NAME
Addons: heroku-postgresql:hobby-dev
Auto Cert Mgmt: false
Collaborators: youremail@email.com
Dynos: web: 1
Git URL: https://git.heroku.com/appname.git
Owner: appname@email.com
Region: us
Repo Size: 0 B
Slug Size: 168 MB
Stack: heroku-18
Web URL: https://appname.herokuapp.com/
As you can see, the Region ID is us. Then you can use the heroku regions command to find the regions that belong to that ID:
ID Location Runtime
───────── ─────────────────────── ──────────────
eu Europe Common Runtime
us United States Common Runtime
dublin Dublin, Ireland Private Spaces
frankfurt Frankfurt, Germany Private Spaces
oregon Oregon, United States Private Spaces
sydney Sydney, Australia Private Spaces
tokyo Tokyo, Japan Private Spaces
virginia Virginia, United States Private Spaces
I see a weird problem where the export-image task is stuck at 85%, i.e. at the "converting" step, and it doesn't finish even after 6 hours of waiting.
The steps used are pretty standard:
% aws ec2 export-image --image-id ami-0123c45d6789d012d --disk-image-format VMDK --s3-export-location S3Bucket=ami-snapshots-bucket --region us-west-2
And here is the status, stuck at 85%:
% aws ec2 describe-export-image-tasks --export-image-task-ids export-ami-0ddfc0123456789d1 --region us-west-2
{
"ExportImageTasks": [
{
"ExportImageTaskId": "export-ami-0ddfc0123456789d1",
"Progress": "85",
"S3ExportLocation": {
"S3Bucket": "ami-snapshots-bucket"
},
"Status": "active",
"StatusMessage": "converting",
"Tags": []
}
]
}
Has anyone run into a similar issue, or does anyone know how to make this work?
Thanks.
For 80 GB of storage, mine finished in about 2 hrs, though it was stuck at 85% for a while.
Another user also reports that about 3 hrs to completion worked for them with a 40 GB image/snapshot.
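If you want to keep an eye on the progress without re-running the command by hand, a small polling loop like the sketch below can help (it reuses the task ID and region from the question):
# Check the export task every 5 minutes until it leaves the "active" state
while true; do
  status=$(aws ec2 describe-export-image-tasks \
    --export-image-task-ids export-ami-0ddfc0123456789d1 \
    --region us-west-2 \
    --query 'ExportImageTasks[0].[Status,Progress]' \
    --output text)
  echo "$(date): $status"
  [[ "$status" != active* ]] && break
  sleep 300
done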
I went through the documentation provided here
https://docs.ansible.com/ansible/latest/collections/community/aws/ecs_taskdefinition_module.html
It gives nice examples of setting up a Fargate task definition. However, it only showcases an example with a single port mapping, and no mount points are shown.
I want to dynamically add port mappings (depending on my app) and volumes/mount points.
For that, I am defining my host_vars for the app as below (there can be many such apps, with different mount points and ports):
---
task_count: 4
task_cpu: 1028
task_memory: 2056
app_port: 8080
My task definition YAML file looks like this:
- name: Create/Update Task Definition
ecs_taskdefinition:
aws_access_key: "{{....}}"
aws_secret_key: "{{....}}"
security_token: "{{....}}"
region: "{{....}}"
launch_type: FARGATE
network_mode: awsvpc
execution_role_arn: "{{ ... }}"
task_role_arn: "{{ ...}}"
containers:
- name: "{{...}}"
environment: "{{...}}"
essential: true
image: "{{ ....}}"
logConfiguration: "{{....}}"
portMappings:
- containerPort: "{{app_port}}"
hostPort: "{{app_port}}"
cpu: "{{task_cpu}}"
memory: "{{task_memory}}"
state: present
I am able to create/update the task definition.
The new requirements are:
Instead of one port, we can now have multiple (or no) port mappings.
We will have multiple (or no) mount points and volumes as well.
Here is what I think the modified Ansible host_vars should look like for the ports:
task_count: 4
task_cpu: 1028
task_memory: 2056
#[container_port1:host_port1, container_port2:host_port2, container_port3:host_port3]
app_ports: [8080:80, 8081:8081, 5703:5703]
I am not sure what to do in the Ansible playbook to loop through this list of ports.
Another part of the problem is that, although I was able to create the volume and mount it in the container through the AWS console, I was not able to do the same using Ansible.
Here is a snippet of what the AWS Fargate JSON looks like (for the volume part). There can be many such mounts, depending on the application. I want to achieve that dynamically by defining mount points and volumes in host_vars:
...
"mountPoints": [
{
"readOnly": null,
"containerPath": "/mnt/downloads",
"sourceVolume": "downloads"
}
...
"volumes": [
{
"efsVolumeConfiguration": {
"transitEncryptionPort": ENABLED,
"fileSystemId": "fs-ecdg222d",
"authorizationConfig": {
"iam": "ENABLED",
"accessPointId": null
},
"transitEncryption": "ENABLED",
"rootDirectory": "/vol/downloads"
},
"name": "downloads",
"host": null,
"dockerVolumeConfiguration": null
}
I am not sure how to do that, and the official documentation offers very little help.
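For what it's worth, here is a rough, untested sketch of one way this could be wired up without an explicit loop: define the lists in host_vars in the same shape the task definition expects and pass them straight through, with default([]) covering the "none" case. The variable names (app_port_mappings, app_mount_points, app_volumes) are placeholders of my own, and EFS volume support in ecs_taskdefinition depends on your community.aws collection version, so treat this as a starting point rather than a verified answer.
# host_vars/<app>.yml (hypothetical variable names)
app_port_mappings:
  - containerPort: 8080
    hostPort: 80
  - containerPort: 8081
    hostPort: 8081
app_mount_points:
  - sourceVolume: downloads
    containerPath: /mnt/downloads
app_volumes:
  - name: downloads
    efs_volume_configuration:   # verify suboption names against your installed collection
      file_system_id: fs-ecdg222d
      root_directory: /vol/downloads
      transit_encryption: ENABLED

# Task (only the parts that differ from the working example above)
- name: Create/Update Task Definition
  ecs_taskdefinition:
    # ...same credentials, region, launch_type, network_mode and roles as before...
    containers:
      - name: "{{ ... }}"
        essential: true
        image: "{{ ... }}"
        portMappings: "{{ app_port_mappings | default([]) }}"
        mountPoints: "{{ app_mount_points | default([]) }}"
    volumes: "{{ app_volumes | default([]) }}"
    state: present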
I want to use the aws lambda update-function-code command to deploy the code of my function. The problem is that the AWS CLI always prints out some information after deployment, and that information contains sensitive data, such as environment variables and their values. That is not acceptable, as I'm going to use public CI services and I don't want that info to become available to anyone. At the same time, I don't want to solve this by redirecting all output from the AWS command to /dev/null, because then I would lose information about errors and exceptions, which would make it harder to debug if something goes wrong. What can I do here?
p.s. SAM is not an option, as it will force me to switch to another framework and completely change the workflow I'm using.
You could target the output you'd like to suppress by replacing those values with jq.
For example, if you had output from the CLI command like below:
{
"FunctionName": "my-function",
"LastModified": "2019-09-26T20:28:40.438+0000",
"RevisionId": "e52502d4-9320-4688-9cd6-152a6ab7490d",
"MemorySize": 256,
"Version": "$LATEST",
"Role": "arn:aws:iam::123456789012:role/service-role/my-function-role-uy3l9qyq",
"Timeout": 3,
"Runtime": "nodejs10.x",
"TracingConfig": {
"Mode": "PassThrough"
},
"CodeSha256": "5tT2qgzYUHaqwR716pZ2dpkn/0J1FrzJmlKidWoaCgk=",
"Description": "",
"VpcConfig": {
"SubnetIds": [],
"VpcId": "",
"SecurityGroupIds": []
},
"CodeSize": 304,
"FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
"Handler": "index.handler",
"Environment": {
"Variables": {
"SomeSensitiveVar": "value",
"SomeOtherSensitiveVar": "password"
}
}
}
You might pipe that to jq and replace values only if the keys exist:
aws lambda update-function-code <args> | jq '
if .Environment.Variables.SomeSensitiveVar? then .Environment.Variables.SomeSensitiveVar = "REDACTED" else . end |
if .Environment.Variables.SomeOtherSensitiveVar? then .Environment.Variables.SomeOtherSensitiveVar = "REDACTED" else . end'
You know which data is sensitive, so you will need to set this up appropriately. You can see an example of what data is returned in the CLI docs, and the API docs are also helpful for understanding what the structure can look like.
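If you'd rather not enumerate every variable by name, a more general sketch of the same idea is to blank out every value under Environment.Variables:
aws lambda update-function-code <args> | jq '
  if .Environment.Variables? then .Environment.Variables |= with_entries(.value = "REDACTED") else . end'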
Lambda environment variables show up everywhere and cannot be considered private.
If your environment variables are sensitive, you could consider using AWS Secrets Manager.
In a nutshell:
Create a secret in Secrets Manager. It has a name (public) and a value (secret, encrypted, with proper user access control).
Allow your Lambda to access Secrets Manager.
In your Lambda environment, store only the name of your secret, and have your Lambda fetch the corresponding value at runtime.
Bonus: password rotation becomes much easier, as you no longer have to update your Lambda config.
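For example, fetching the value with the AWS CLI looks like this (the secret name my-db-credentials is just a placeholder; inside the Lambda you would make the equivalent call through the SDK):
# Only the secret's name lives in the Lambda environment; the value is fetched at runtime
aws secretsmanager get-secret-value --secret-id my-db-credentials --query SecretString --output text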
I am creating an AWS Data Pipeline to copy data from MySQL to S3. I have written a shell script which accepts the credentials as arguments and creates the pipeline, so that my credentials are not exposed in the script.
I used the bash script below to create the pipeline:
unique_id="$(date +'%s')"
profile="${4}"
startDate="${1}"
echo "{\"values\":{\"myS3CopyStartDate\":\"$startDate\",\"myRdsUsername\":\"$2\",\"myRdsPassword\":\"$3\"}}" > mysqlToS3values.json
sqlpipelineId=`aws datapipeline create-pipeline --name mysqlToS3 --unique-id mysqlToS3_$unique_id --profile $profile --query '{ID:pipelineId}' --output text`
validationErrors=`aws datapipeline put-pipeline-definition --pipeline-id $sqlpipelineId --pipeline-definition file://mysqlToS3.json --parameter-objects file://mysqlToS3Parameters.json --parameter-values-uri file://mysqlToS3values.json --query 'validationErrors' --profile $profile`
aws datapipeline activate-pipeline --pipeline-id $sqlpipelineId --profile $profile
However, when I fetch the pipeline definition through the AWS CLI using
aws datapipeline get-pipeline-definition --pipeline-id 27163782
I get my credentials in plain text in the JSON output:
{ "parameters": [...], "objects": [...], "values": { "myS3CopyStartDate": "2018-04-05T10:00:00", "myRdsPassword": "sbc", "myRdsUsername": "ksnck" } }
Is there any way to encrypt or hide the credentials information?
I don't think there is a way to mask the data in the pipeline definition.
The strategy I have used is to store my secrets in S3 (encrypted with a specific KMS key and using appropriate IAM/bucket permissions). Then, inside my Data Pipeline step, I use the AWS CLI to read the secret from S3 and pass it to the mysql command (or whatever else needs it).
So instead of having a pipeline parameter like myRdsPassword I have:
"myRdsPasswordFile": "s3://mybucket/secrets/rdspassword"
Then inside my step I read it with something like:
PWD=$(aws s3 cp ${myRdsPasswordFile} -)
You could also have a similar workflow that retrieves the password from AWS Parameter Store instead of S3.
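A sketch of that variant, assuming the password was stored as a SecureString parameter (the parameter name here is hypothetical) and the pipeline's role is allowed to read and decrypt it:
PASSWORD=$(aws ssm get-parameter --name /myproject/rds/password --with-decryption --query Parameter.Value --output text)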
There is actually a way that's built into Data Pipeline:
You prepend the field name with an * and it will encrypt the field and hide it visually, like a password form field.
If you're using parameters, then prepend the * on both the object field and the corresponding parameter field, like so (note: there are three *s with a parameterized setup; the example below is just a sample, with required fields omitted, to illustrate how to handle the encryption through parameters):
...{
"*password": "#{*myDbPassword}",
"name": "DBName",
"id": "DB",
},
],
"parameters": [
{
"id": "*myDbPassword",
"description": "Database password",
"type": "String"
}...
See more below:
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-characters.html
You can store the RDS credentials in AWS Secrets Manager. You can then retrieve the credentials from Secrets Manager in the data pipeline using a CloudFormation template, as described below:
Mappings:
RegionToDatabaseConfig:
us-west-2:
CredentialsSecretKey: us-west-2-SECRET_NAME
# ...
us-east-1:
CredentialsSecretKey: us-east-1-SECRET_NAME
# ...
eu-west-1:
CredentialsSecretKey: eu-west-1-SECRET_NAME
# ...
Resources:
OurProjectDataPipeline:
Type: AWS::DataPipeline::Pipeline
Properties:
# ...
PipelineObjects:
# ...
# RDS resources
- Id: PostgresqlDatabase
Name: Source database to sync data from
Fields:
- Key: type
StringValue: RdsDatabase
- Key: username
StringValue:
!Join
- ''
- - '{{resolve:secretsmanager:'
- !FindInMap
- RegionToDatabaseConfig
- {Ref: 'AWS::Region'}
- CredentialsSecretKey
- ':SecretString:username}}'
- Key: "*password"
StringValue:
!Join
- ''
- - '{{resolve:secretsmanager:'
- !FindInMap
- RegionToDatabaseConfig
- {Ref: 'AWS::Region'}
- CredentialsSecretKey
- ':SecretString:password}}'
- Key: jdbcProperties
StringValue: 'allowMultiQueries=true'
- Key: rdsInstanceId
StringValue:
!FindInMap
- RegionToDatabaseConfig
- {Ref: 'AWS::Region'}
- RDSInstanceId
I have a json that looks like this:
{
"failedSet": [],
"successfulSet": [{
"event": {
"arn": "arn:aws:health:us-east-1::event/AWS_RDS_MAINTENANCE_SCHEDULED_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
"endTime": 1502841540.0,
"eventTypeCategory": "scheduledChange",
"eventTypeCode": "AWS_RDS_MAINTENANCE_SCHEDULED",
"lastUpdatedTime": 1501208541.93,
"region": "us-east-1",
"service": "RDS",
"startTime": 1502236800.0,
"statusCode": "open"
},
"eventDescription": {
"latestDescription": "We are contacting you to inform you that one or more of your Amazon RDS DB instances is scheduled to receive system upgrades during your maintenance window between August 8 5:00 PM and August 15 4:59 PM PDT. Please see the affected resource tab for a list of these resources. \r\n\r\nWhile the system upgrades are in progress, Single-AZ deployments will be unavailable for a few minutes during your maintenance window. Multi-AZ deployments will be unavailable for the amount of time it takes a failover to complete, usually about 60 seconds, also in your maintenance window. \r\n\r\nPlease ensure the maintenance windows for your affected instances are set appropriately to minimize the impact of these system upgrades. \r\n\r\nIf you have any questions or concerns, contact the AWS Support Team. The team is available on the community forums and by contacting AWS Premium Support. \r\n\r\nhttp://aws.amazon.com/support\r\n"
}
}]
}
I'm trying to add a new key/value under successfulSet[].event (key name affectedEntities) using jq. I've seen some examples, like here and here, but none of those answers really show how to add one key with possibly multiple values (I say possibly because as of now AWS is returning one value for the affected entity, but if there are more, I'd like to list them all).
EDIT: The value of the new key that I want to add is stored in a variable called $affected_entities and a sample of that value looks like this:
[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]
The value could look like this:
[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
...
...
...
]
You can use this jq filter:
jq '.successfulSet[].event += { "new_key" : "new_value" }' file.json
EDIT:
Try this:
jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Test:
sat~$ new_value='[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]'
sat~$ jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Note that --argjson works with jq 1.5 and above.