I'm using Terraform to create resources in Azure.
In ARM templates I used uniqueString() to generate storage account names.
Is it possible to generate a random name for a storage account using Terraform?
There are several random resources you can use in Terraform:
https://www.terraform.io/docs/providers/random/index.html
Resources
random_id
random_pet
random_shuffle
random_string
Using random_id as an example, together with the official azurerm_storage_account resource, you can define the storage account name easily:
resource "random_id" "storage_account" {
byte_length = 8
}
resource "azurerm_storage_account" "testsa" {
name = "tfsta${lower(random_id.storage_account.hex)}"
resource_group_name = "${azurerm_resource_group.testrg.name}"
location = "westus"
account_type = "Standard_GRS"
tags {
environment = "staging"
}
}
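If you prefer a purely alphanumeric suffix, random_string works the same way. A minimal sketch (attribute names per the random provider docs; the remaining storage account arguments stay as above):
resource "random_string" "storage_account" {
  length  = 16
  upper   = false
  special = false
}

resource "azurerm_storage_account" "testsa" {
  # ...same arguments as above, only the name changes...
  name = "tfsta${random_string.storage_account.result}"
}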
I am unable to access a public container using the C# SDK, even though I have enabled "Allow Blob public access" in the storage account configuration.
var fileSystemClient = new DataLakeFileSystemClient(new Uri("https://somestorageaccount.dfs.core.windows.net/public"), new DataLakeClientOptions());
var paths = fileSystemClient.GetPaths();
foreach (var path in paths)
{
    Console.WriteLine(path);
}
This code throws the following exception:
Azure.RequestFailedException: 'Server failed to authenticate the
request. Make sure the value of Authorization header is formed
correctly including the signature.
Is there anything I can configure to make this work?
I tried this in my environment and got the results below:
Initially, I created an ADLS Gen2 container with the public access level set to Container.
Portal:
When I tried to access a file through the browser, I got the same error.
Browser:
When accessing the storage account through the file system (DFS) endpoint, files are not accessible anonymously; the request must be authorized even if a public access level is set on the container. You are getting this error because you are attempting to access the resource without authorization.
If you need to access the files, you can authorize the request with a SAS token.
I tried the file URL with a SAS token appended in the browser and was able to access the file.
You can get a SAS token for a file by selecting it and choosing Generate SAS.
Browser:
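The same works from C# by appending the SAS token to the file system URI. A minimal sketch, assuming a valid SAS in place of the <sas-token> placeholder (account and container names are the ones from the question):
var fileSystemClient = new DataLakeFileSystemClient(
    new Uri("https://somestorageaccount.dfs.core.windows.net/public?<sas-token>"),
    new DataLakeClientOptions());

// The SAS in the query string authorizes each request.
foreach (var path in fileSystemClient.GetPaths())
{
    Console.WriteLine(path.Name);
}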
If you need to list the paths of a Data Lake Gen2 file system in C#, you can use the StorageSharedKeyCredential method from this link:
string storageAccountName = StorageAccountName;
string storageAccountKey = StorageAccountKey;
Uri serviceUri = StorageAccountUri;

// Authenticate with the storage account name and key.
StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(storageAccountName, storageAccountKey);
DataLakeServiceClient serviceClient = new DataLakeServiceClient(serviceUri, sharedKeyCredential);
DataLakeFileSystemClient filesystem = serviceClient.GetFileSystemClient(Randomize("sample-filesystem-list"));

// Enumerate the paths in the file system.
List<string> names = new List<string>();
foreach (PathItem pathItem in filesystem.GetPaths())
{
    names.Add(pathItem.Name);
}
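Applied to the container from the question, the same pattern would be (a sketch; the shared key credential is still the one created above):
DataLakeFileSystemClient publicFileSystem = serviceClient.GetFileSystemClient("public");
foreach (PathItem pathItem in publicFileSystem.GetPaths())
{
    Console.WriteLine(pathItem.Name);
}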
Reference:
java - How to get list of child files/directories having parent DataLakeDirectoryClient class instance - Stack Overflow (Java answer by Jim Xu).
Currently, we provide the App Secret key in the AppCenter.Configure method. It is the same for all environments.
How can we configure the App Secret key so that I can use different App Secret keys for different environments?
For example, I need a different App Secret key for UAT and Production based on the build configuration.
You have to create two files that contain your App Center secrets. Don't forget to set their build action to Embedded Resource.
appsettings.debug.json
appsettings.release.json
Each file will have this content:
{
    "AppCenterKey": "super secret key"
}
Then you can load the file matching the build configuration with this method:
private static void LoadAppSettings()
{
#if RELEASE
    var appSettingsResourceStream = Assembly.GetAssembly(typeof(AppSettings)).GetManifestResourceStream("AppSettingsPoC.Configuration.appsettings.release.json");
#else
    var appSettingsResourceStream = Assembly.GetAssembly(typeof(AppSettings)).GetManifestResourceStream("AppSettingsPoC.Configuration.appsettings.debug.json");
#endif
    if (appSettingsResourceStream == null)
        return;

    using (var streamReader = new StreamReader(appSettingsResourceStream))
    {
        var jsonString = streamReader.ReadToEnd();
        appSettings = JsonConvert.DeserializeObject<AppSettings>(jsonString);
    }
}
Your AppSettings class stores the key and might look like this:
public class AppSettings
{
    public string AppCenterKey { get; set; }
}
I did the same in my project following this article.
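For completeness, a minimal sketch of how the loaded key might then be passed to App Center at startup (assuming the Microsoft.AppCenter Analytics and Crashes packages are referenced and appSettings was populated by LoadAppSettings):
// Load the environment-specific settings, then start App Center with that key.
LoadAppSettings();
AppCenter.Start(appSettings.AppCenterKey, typeof(Analytics), typeof(Crashes));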
I am creating a Lambda function using Terraform. Per the Terraform syntax, the Lambda code has to be passed as a zip file; I am doing that in the resource block below and the function is created without any issue. But when I change the Lambda code and run Terraform again, the function code is not updated. See the block below for reference.
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
resource "aws_lambda_function" "stop_ec2" {
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "dest_dir/stop_ec2_upload.zip"
role = "..."
}
Need help to resolve this issue.
Set the source_code_hash argument so that Terraform updates the Lambda function whenever the code changes:
resource "aws_lambda_function" "stop_ec2" {
source_code_hash = filebase64sha256("dest_dir/stop_ec2_upload.zip")
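Since the zip is produced by the archive_file data source shown in the question, an equivalent sketch is to take the hash (and the filename) from that data source directly, which also makes the dependency explicit:
resource "aws_lambda_function" "stop_ec2" {
  # ...existing arguments...
  filename         = data.archive_file.stop_ec2.output_path
  source_code_hash = data.archive_file.stop_ec2.output_base64sha256
}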
I have written some Terraform code to create some servers. For the AMI I was using a Terraform data source to get the latest Ubuntu 16.04 image ID and assign it to the EC2 instances.
Recently I wanted to add another EC2 instance to this environment. However, when I run terraform plan I can see that Terraform is trying to delete the existing EC2 instances and recreate them: a new Ubuntu image has been released, so Terraform wants to destroy the instances built from the old AMI and create new ones with the new AMI ID.
Is there any chance I can address this issue as I don't want to accidentally delete our production servers?
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
module "jenkins" {
source = "terraform-aws-modules/ec2-instance/aws"
name = "Jenkins"
instance_count = 1
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.small"
associate_public_ip_address = true
disable_api_termination = true
key_name = "${aws_key_pair.ssh_key.key_name}"
monitoring = false
vpc_security_group_ids = "${module.jenkins_http_sg.this_security_group_id}", "${module.jenkins_https_sg.this_security_group_id}", "${module.ssh_sg.this_security_group_id}"]
subnet_id = "${module.vpc.public_subnets[0]}"
iam_instance_profile = "${aws_iam_instance_profile.update-dns-profile.name}"
tags = {
Terraform = "true"
}
}
While the answer above helps, I solved the problem by adding the following to the aws_instance resource.
lifecycle {
  ignore_changes = ["ami"]
}
Please note that if you are using the AWS module as I am, you will have to add this code to the module's main.tf file under .terraform/modules/.
Terraform is doing exactly what you asked it to do. Each time it runs, it looks for the most recent AMI whose name matches ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-* and passes that AMI ID to the aws_instance resource. As it's not possible to modify the image ID of an existing instance, Terraform correctly determines it must destroy the old instances and rebuild them from the new AMI.
If you want to pin a specific AMI, you should either make the data source return only a single AMI (e.g. by specifying the date stamp in the name filter) or hardcode the AMI ID you want to use.
data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"] # Canonical
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20190403"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
or:
variable "ami" {
default = "ami-0727f3c2d4b0226d5"
}
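You can then reference the variable from the module instead of the data source; a sketch against the module block from the question (other arguments unchanged):
module "jenkins" {
  source = "terraform-aws-modules/ec2-instance/aws"

  # ...other arguments as in the question...
  ami = "${var.ami}"
}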
If you were to remove the most_recent = true parameter then instead your data source would find multiple images that match those criteria and then fail as the aws_ami data source can only return a single AMI:
NOTE: If more or less than a single match is returned by the search, Terraform will fail. Ensure that your search is specific enough to return a single AMI ID only, or use most_recent to choose the most recent one. If you want to match multiple AMIs, use the aws_ami_ids data source instead.
Also note that I added the owners field to your data source. This is required since version 2.0.0 of the AWS provider; without it the lookup was very insecure, because the data source could return any public image that uses that naming scheme.
I'm trying to use the attributes of an EC2 instance as a variable, but it keeps failing or otherwise not working. As you can see below, I want to insert the private IP of the instance into a config file that will get copied to the instance. The remote-exec script will then move the file into place (/etc/vault.d/server-config.json).
instances.tf
resource "template_file" "tpl-vault-server-config" {
template = "${file("${path.module}/templates/files/vault-server-config.json.tpl")}"
vars {
aws_private_ip = "${aws_instance.ec2-consul-server.private_ip}"
}
}
provisioner "file" {
source = "${template_file.tpl-vault-server-config.rendered}"
destination = "/tmp/vault-server-config.json"
}
vault-server-config.json.tpl
backend "consul" {
address = "127.0.0.1:8500"
path = "vault"
tls_enable = 1
tls_ca_file = "/etc/consul.d/ssl/ca.cert"
tls_cert_file = "/etc/consul.d/ssl/consul.cert"
tls_key_file = "/etc/consul.d/ssl/consul.key"
}
listener "tcp" {
address = "${aws_private_ip}:8200"
tls_cert_file = "/etc/consul.d/ssl/consul.cert"
tls_key_file = "/etc/consul.d/ssl/consul.key"
}
The error on terraform plan is:
* aws_instance.ec2-consul-server: missing dependency: template_file.tpl-vault-server-config
Questions:
Am I taking the wrong approach?
Am I missing something basic?
How do you get an instance's attributes into a file?
Thanks in advance.
I realized that I was defining the template_file resource inside another resource, and this was part of the problem. Once I moved it out to the top level, things worked much more smoothly.
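For anyone hitting the same error, a rough sketch of the layout that works (the template_file is declared at the top level, and the rendered output is passed to the file provisioner via content rather than source, since source expects a local file path; the vault-server instance name is illustrative):
resource "template_file" "tpl-vault-server-config" {
  template = "${file("${path.module}/templates/files/vault-server-config.json.tpl")}"

  vars {
    aws_private_ip = "${aws_instance.ec2-consul-server.private_ip}"
  }
}

resource "aws_instance" "ec2-vault-server" {
  # ...instance arguments...

  # Upload the rendered template content instead of pointing source at it.
  provisioner "file" {
    content     = "${template_file.tpl-vault-server-config.rendered}"
    destination = "/tmp/vault-server-config.json"
  }
}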