Using Terraform, I can create Heroku applications, create and attach add-ons, and put the applications in a pipeline. After the infrastructure is created, everything is good except that the dynos are not started. I used the heroku/nodejs buildpack. Terraform's Heroku provider does not offer any explicit resource type that corresponds to a Heroku dyno. Are we supposed to manually push the application for deployment on Heroku once the necessary add-ons and pipeline are created with Terraform?
I googled a lot but couldn't figure out why the dynos are not getting started after the necessary infrastructure is in place.
Please help.
So today I wanted to test Heroku with Terraform and ran into the same issue.
It looks like you need to push your app to the git_url reference provided by heroku_app.
I made a working example at https://github.com/nroitero/terraform_heroku
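For the git-push route, the steps look roughly like this (a sketch; the remote URL is the git_url attribute exported by heroku_app, shown as a placeholder here):
git remote add heroku https://git.heroku.com/<your-app-name>.git
git push heroku master   # or main, depending on your default branch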
Alternatively, heroku_build can upload the source for you. I'm doing it as in the example below, and it works.
First, define the Heroku app:
resource "heroku_app" "this" {
name = var.HEROKU_APP_NAME
region = var.HEROKU_REGION
space = var.HEROKU_SPACE
internal_routing = var.HEROKU_INTERNAL_ROUTING
Then, indicate where the Node application's source is:
resource "heroku_build" "this" {
app = heroku_app.this.name
#buildpacks = [var.BUILDPACK_URL]
source = {
#url = var.SOURCE_URL
#version = var.SOURCE_VERSION
#testing path instead of source
path = var.SOURCE_PATH
}
}
And to define dynos, I'm using:
resource "heroku_formation" "this" {
app = heroku_app.this.name
type = var.HEROKU_FORMATION_TYPE
quantity = var.HEROKU_FORMATION_QTY
size = var.HEROKU_FORMATION_SIZE
depends_on = [heroku_build.this]
}
For the dyno size parameter (var.HEROKU_FORMATION_SIZE), use the official dyno type name as listed at https://devcenter.heroku.com/articles/dyno-types.
For Private Spaces, the names are private-s, private-m, and private-l.
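For completeness, the variables referenced above could be declared along these lines (a sketch; the defaults are assumptions to adjust for your own setup):
variable "HEROKU_APP_NAME" {}
variable "HEROKU_REGION" { default = "eu" }
variable "HEROKU_SPACE" { default = "" }
variable "HEROKU_INTERNAL_ROUTING" { default = false }
variable "SOURCE_PATH" {}
variable "HEROKU_FORMATION_TYPE" { default = "web" }
variable "HEROKU_FORMATION_QTY" { default = 1 }
variable "HEROKU_FORMATION_SIZE" { default = "standard-1x" }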
We are running a Ruby on Rails 5 application in a Docker Container.
The container is hosted on Azure Container Apps.
We need to persist the public folder and would like to use Azure File Share for this. The share is working and accessible (tested via a script).
The problem now is that with Azure Container Apps we can't define a volume, as in docker-compose, to map the public folder from inside the container to the outside:
volumes:
- "./path/to/public/folder:/app/public"
Usually I would pass the volume to docker run, but this is not possible with Azure Container Apps.
I am no Ruby developer.
I tried to change the public folder for assets as described in
How to override public folder path in Rails 4?
in config/application.rb:
config.assets.paths['public'] = File.join('/cms-share', 'public')
Sadly this is leading to an exception on startup:
! Unable to load application: TypeError: no implicit conversion of String into Integer
/app/config/application.rb:32:in `[]=': no implicit conversion of String into Integer (TypeError)
Thanks in advance
You picked the wrong object to modify.
config.assets.paths
#=> ['app/assets/images', ..]
Those are asset paths, and the returned object is a plain array. What you need to modify instead is the Rails paths object, which returns an object that can be reassigned:
config.paths['public']
=>
#<Rails::Paths::Path:0x0000000113acbca0
 @autoload=false,
 @autoload_once=false,
 @current="public",
 @eager_load=false,
 @exclude=nil,
 @glob=nil,
 @load_path=false,
 ...
Note that this second object has nothing to do with assets; it is just config.paths:
config.paths['public'] = File.join('/cms-share', 'public')
This line changes the default public path and should solve your problem.
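In context, config/application.rb would look roughly like this (a sketch; MyApp is a placeholder for your application's module name):
# config/application.rb
module MyApp
  class Application < Rails::Application
    # Serve the public directory from the mounted Azure File Share.
    config.paths['public'] = File.join('/cms-share', 'public')
  end
end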
I'm trying to deploy a Google Cloud Workflow using terraform resource google_workflows_workflow.
Here is my code:
resource "google_workflows_workflow" "example" {
project = var.project_id
name = "workflow-example"
region = "europe-west2"
description = "My first workflow"
service_account = var.service_account_email
source_contents = <<-EOF
# etc...
EOF
It fails with:
Error creating Workflow: googleapi: Error 403: Location europe-west2 is not found or access is unauthorized
Why is this? Is workflows not available in europe-west2?
The closest Workflows region as of April 2021 is europe-west4.
Depending on your use case, region may not be as important for Workflows as it might be for other services. A workflow can call endpoints in any region, and in most cases latency is less important.
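For example, the same resource with a supported region (a sketch; everything except the region is unchanged from the question):
resource "google_workflows_workflow" "example" {
  project         = var.project_id
  name            = "workflow-example"
  region          = "europe-west4" # supported, unlike europe-west2 (as of April 2021)
  description     = "My first workflow"
  service_account = var.service_account_email
  source_contents = <<-EOF
  # etc...
  EOF
}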
I am trying to get all the instance (server) IDs based on the app. Let's say I have an app on a server. How do I know which apps belong to which server? I want my code to find all the instances (servers) that belong to each app. Is there a way to look through the apps in the EC2 console and figure out which servers are associated with each app, preferably using the tag method?
import boto3
client = boto3.client('ec2')
my_instance = 'i-xxxxxxxx'
(Disclaimer: I work for AWS Resource Groups)
Seeing in your comments that you use tags for all apps, you can use AWS Resource Groups to create a group. The example below assumes you used App:Something as the tag; it first creates a Resource Group and then lists all the members of that group.
Using this group, you can for example get automatically a CloudWatch dashboard for those resources, or use this group as a target in RunCommand.
import json
import boto3

RG = boto3.client('resource-groups')

RG.create_group(
    Name='Something-App-Instances',
    Description='EC2 Instances for Something App',
    ResourceQuery={
        'Type': 'TAG_FILTERS_1_0',
        'Query': json.dumps({
            'ResourceTypeFilters': ['AWS::EC2::Instance'],
            'TagFilters': [{
                'Key': 'App',
                'Values': ['Something']
            }]
        })
    },
    Tags={
        'App': 'Something'
    }
)

# List all resources in a group using a paginator
paginator = RG.get_paginator('list_group_resources')
resource_pages = paginator.paginate(GroupName='Something-App-Instances')
for page in resource_pages:
    for resource in page['ResourceIdentifiers']:
        print(resource['ResourceType'] + ': ' + resource['ResourceArn'])
Another option, to just get the list without saving it as a group, would be to use the Resource Groups Tagging API directly.
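A minimal sketch with boto3 (assuming the same App=Something tag):
import boto3

tagging = boto3.client('resourcegroupstaggingapi')

# Page through all EC2 instances carrying the App=Something tag.
paginator = tagging.get_paginator('get_resources')
pages = paginator.paginate(
    TagFilters=[{'Key': 'App', 'Values': ['Something']}],
    ResourceTypeFilters=['ec2:instance']
)
for page in pages:
    for resource in page['ResourceTagMappingList']:
        print(resource['ResourceARN'])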
What you install on an Amazon EC2 instance is totally up to you. You do this by running code on the instance itself. AWS is not involved in the decision of what you install on the instance, nor does it know what you installed on an instance.
Therefore, you will need to keep track of "what apps are installed on what server" yourself.
You might choose to take advantage of Tags on instances to add some metadata, such as the purpose of the server. You could also use AWS Systems Manager to run commands on instances (eg to install software) or even use AWS CodeDeploy to roll-out software to fleets of servers.
However, even with all of these deployment options, AWS cannot track what you have put on each individual server. You will need to do that yourself.
Update: You can use AWS Resource Groups to view/manage resources by tag.
Here's some sample Python code to list tags by instance:
import boto3

ec2_resource = boto3.resource('ec2', region_name='ap-southeast-2')

instances = ec2_resource.instances.all()
for instance in instances:
    for tag in instance.tags or []:  # instance.tags is None for untagged instances
        print(instance.instance_id, tag['Key'], tag['Value'])
I'm trying to use express-stormpath on my Heroku app. I'm following the docs here, and my code is super simple:
var express = require('express');
var app = express();
var stormpath = require('express-stormpath');

app.use(stormpath.init(app, {
  website: true
}));

app.on('stormpath.ready', function() {
  app.listen(3000);
});
I've already looked at this question and followed the Heroku Dev Center docs. The docs say that for a Heroku app it's not necessary to pass in options, but I've still tried passing in options and nothing works. For example, I've tried this:
app.use(stormpath.init(app, {
  // client: {
  //   file: './xxx.properties'
  // },
  client: {
    apiKey: {
      file: './xxx.properties',
      id: process.env.STORMPATH_API_KEY_ID || 'xxx',
      secret: process.env.STORMPATH_API_KEY_SECRET || 'xxx'
    }
  },
  application: {
    href: 'https://api.stormpath.com/v1/applications/blah'
  }
}));
To try and see what's going on, I added a console.log line to the stormpath-config strategy validator to print the client object, and it gives me this:
{ file: './apiKey-xxx.properties',
id: 'xxx',
secret: 'xxx' }
{ file: null, id: null, secret: null }
Error: API key ID and secret is required.
Why is it getting called twice, and the second time around, why does the client object have null values for the file, id and secret?
When I run heroku config | grep STORMPATH, I get
STORMPATH_API_KEY_ID: xxxx
STORMPATH_API_KEY_SECRET: xxxx
STORMPATH_URL: https://api.stormpath.com/v1/applications/[myappurl]
I'm the original author of the express-stormpath library, and also wrote the Heroku documentation for Stormpath.
This is 100% my fault, and is a documentation / configuration bug on Stormpath's side of things.
Back in the day, all of our libraries looked for several environment variables by default:
STORMPATH_URL (your Application URL)
STORMPATH_API_KEY_ID
STORMPATH_API_KEY_SECRET
However, a while ago, we started upgrading our libraries, and realized that we wanted to go with a more standard approach across all of our supported languages / frameworks / etc. In order to make things more explicit, we essentially renamed the variables we look for by default, to:
STORMPATH_APPLICATION_HREF
STORMPATH_CLIENT_APIKEY_ID
STORMPATH_CLIENT_APIKEY_SECRET
Unfortunately, we did not yet update our Heroku integration or documentation to reflect these changes, which is why you just ran into this nasty issue.
I just submitted a ticket to our Engineering team to fix the names of the variables that our Heroku addon provisions by default to include our new ones, and I'm going to be updating our Heroku documentation later this afternoon to fix this for anyone else in the future.
I'm sincerely sorry about all the confusion / frustration. Sometimes these things slip through the cracks, and experiences like this make me realize we need better testing in place to catch this stuff earlier.
I'll be working on some changes internally to make sure we have a better process around rolling out updates like this one.
If you want a free Stormpath t-shirt, hit me up and I'll get one shipped out to you as a small way to say 'thanks' for putting up with the annoyance: randall#stormpath.com
After endless hours, I finally managed to get it working by removing the add-on entirely, re-installing it via the Heroku CLI, and then exporting the variables STORMPATH_CLIENT_APIKEY_ID and STORMPATH_CLIENT_APIKEY_SECRET. For some reason, installing it via the Heroku Dashboard causes express-stormpath not to find the apiKey and secret fields (even if you export the variables).
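For reference, setting the new variable names by hand looks like this (a sketch; the values and app name are placeholders):
heroku config:set STORMPATH_CLIENT_APIKEY_ID=xxx STORMPATH_CLIENT_APIKEY_SECRET=xxx -a <APP_NAME>
heroku config:set STORMPATH_APPLICATION_HREF=https://api.stormpath.com/v1/applications/<id> -a <APP_NAME>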
I can't seem to find anything official about this: does Parse.Config work on Parse Server? It used to work on Parse.com, but when I migrated to Parse Server and tried the REST API, it seemed to fail:
GET http://localhost:1337/parse/config
I'm passing in my app ID. I read somewhere that Config does not work on Parse Server, but wanted to confirm.
Although it is not officially supported, as mentioned in the docs, there is a way to make it work. It is still an experimental implementation, though.
As mentioned here & here, you should set the environment variable:
PARSE_EXPERIMENTAL_CONFIG_ENABLED=1
Then restart your Node server. If you deployed it on Heroku, for example, run heroku restart -a <APP_NAME> from the CLI.
If that doesn't work, I would suggest simply adding the route with your configuration options in your project's index.js file, where Express is initialized, like so:
var parseConfig = {
  "params": { /* ...put your options here... */ }
};

// :one? is an optional parameter, kept for old SDK compatibility.
app.all('/parse/:one?/config', function (req, res) {
  res.json(parseConfig);
});
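To verify the route, a plain REST call should return the params object (assuming the default /parse mount path; replace the app ID with your own):
curl -X GET \
  -H "X-Parse-Application-Id: <APP_ID>" \
  http://localhost:1337/parse/config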