I'm new to Packer. I've heard that you can add a Vagrant post-processor to get an easy VM to test your new image in. Based on the examples I thought the code below would work, but I get this error:
* Post-processor failed: ovf file couldn't be found
Here's my packer config/code.
source "digitalocean" "test" {
image = "ubuntu-20-10-x64"
region = "nyc1"
size = "s-1vcpu-1gb"
snapshot_name = "me-image-{{isotime \"2006-01-02T15:04\"}}"
snapshot_regions = [
"nyc1", "sgp1", "lon1", "nyc3", "ams3", "fra1", "tor1", "sfo2", "blr1",
"sfo3"
]
tags = ["delete"]
ssh_username = "root"
}
# a build block invokes sources and runs provisioning steps on them.
build {
sources = ["source.digitalocean.test"]
provisioner "file" {
source = "jump_host"
destination = "/tmp"
}
post-processor "vagrant" {
keep_input_artifact = true
provider_override = "virtualbox"
output = "out.box"
}
}
My Packer version is 1.6.6.
My Vagrant version is 2.2.10.
Had the Same(ish) Issue - Found the Answer by Brute Force/Chance
So I'm in the same boat as you, but I managed to find the hint for my solution here
Caveat: I'm working with an exported .vmdk, so this may not be a solution for you, since you're looking for a way to get it straight from DigitalOcean?
The Hint
build {
sources = ["source.null.autogenerated_1"]
post-processor "shell-local" {
inline = ["echo Doing stuff..."]
}
post-processors {
post-processor "vagrant" {
include = ["image.iso"] # <-- the hint
output = "proxycore_{{.Provider}}.box"
vagrantfile_template = "vagrantfile.tpl"
}
post-processor "vagrant-cloud" {
access_token = "${var.cloud_token}"
box_tag = "hashicorp/precise64"
version = "${local.version}"
}
}
}
This isn't listed on the Vagrant Post-Processor page, but it is on the Vagrant Cloud Post-Processor page. I just decided to try my luck and it worked.
Working Example
source "null" "example" {
communicator = "none"
}
build {
sources = ["source.null.example"]
post-processor "artifice" {
files = ["example-disk001.vmdk", "example.ovf"]
keep_input_artifact = true
}
post-processor "vagrant" {
include = ["example-disk001.vmdk", "example.ovf"]
keep_input_artifact = true
provider_override = "virtualbox"
}
}
TL;DR: it's not possible
What I wanted Packer to do was build something for DigitalOcean and then give me a copy so I could test it without paying for a VM from DigitalOcean and without needing internet. That isn't possible, and after some reflection it makes sense why.
DigitalOcean isn't just downloading the Ubuntu 20 ISO and throwing it on their servers. They configure and change the image so it's optimized for their hardware. Expecting their special images to run on some standard VM on consumer hardware isn't realistic. Plus, I'm not sure there's even a way to download a snapshot from DO.
But also, in trying to do this I kind of missed the entire point of Vagrant. If I'm testing a DigitalOcean image I will always need to connect to and pay for DigitalOcean. Vagrant is designed to make it easy for me to do that without having to click through the interface every single time. So I shouldn't even be trying to get this on my home computer.
PS: Thank you so much @RedGrin-Grumble for taking the time to add to this months-old post.
I'm using Octopus Deploy to deploy a simple API.
The first step of our deployment process is to generate an HTML report with the delta of the scripts run vs the scripts required to run. I used this tutorial to create the step.
The relevant code in my console application is:
var reportLocationSection = appConfiguration.GetSection(previewReportCmdLineFlag);
if (reportLocationSection.Value is not null)
{
// Generate a preview file so Octopus Deploy can generate an artifact for approvals
try
{
var report = reportLocationSection.Value;
var fullReportPath = Path.Combine(report, deltaReportName);
Console.WriteLine($"Generating upgrade report at {fullReportPath}");
upgrader.GenerateUpgradeHtmlReport(fullReportPath);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
return operationError;
}
}
The PowerShell I am using in the script step is:
# Get the extracted path for the package
$packagePath = $OctopusParameters["Octopus.Action.Package[DatabaseUpdater].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Database.ConnectionString"]
$reportPath = $OctopusParameters["Project.HtmlReport.Location"]
Write-Host "Report Path: $($reportPath)"
$exeToRun = "$($packagePath)\DatabaseUpdater.exe"
$generatedReport = "$($reportPath)\UpgradeReport.html"
Write-Host "Generated Report: $($generatedReport)"
if ((test-path $reportPath) -eq $false){
New-Item "Creating new directory..."
} else {
New-Item "Directory already exists."
}
# Run this .NET app, passing in the Connection String and a flag
# which tells the app to create a report, but not update the database
& $exeToRun --connectionString="$($connectionString)" --previewReportPath="$($reportPath)"
New-OctopusArtifact -Path "$($generatedReport)"
The error reported by Octopus is:
'Could not find file 'C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html'.'
I'm guessing that is being thrown when this PowerShell line is hit: New-OctopusArtifact ...
And that seems to indicate that the report was never created.
I've used a bit of logging to log out certain variables and the values look sound:
Report Path: C:\DeltaReports\Some API\2.9.15-DbUp-Test-9
Generated Report: C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html
Generating upgrade report at C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html
As you can see in the C#, the relevant code is wrapped in a try/catch block, but I'm not sure whether the error is being written out there or at a later point by Octopus (I'd need to do a pull request to add a marker in the code).
Can anyone see a way forward in resolving this? Has anyone else encountered this?
Cheers
I recently redid some of the work from that article for this video up on YouTube. I ran into some issues with the .SQL files not being included in the assembly. I think it was after I upgraded to .NET 6, but that might be a coincidence.
Anyway, because the files weren't being included in the assembly, when I ran the command line app via Octopus, it wouldn't properly generate the file for me. I ended up configuring the project to copy the .SQL files to a folder in the output directory instead of embedding them in the assembly. You can view a sample package here.
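In an SDK-style csproj that looks roughly like this (a sketch, not the exact file from the sample package; the folder names match the DeploymentScripts and PostDeploymentScripts folders used in the code below):
<ItemGroup>
  <!-- Copy the SQL scripts next to the exe instead of embedding them in the assembly -->
  <None Update="DeploymentScripts\**\*.sql" CopyToOutputDirectory="PreserveNewest" />
  <None Update="PostDeploymentScripts\**\*.sql" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>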
One thing that helped me is running the app in a debugger with the same parameters just to make sure it was actually generating the file. I'm sure you already thought of that, but I'd be remiss if I forgot to include it in my answer. :)
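For example, something like this from a terminal, passing the same flags the Octopus step passes (the connection string and report folder below are just placeholders for your values):
# Run the console app locally and check whether the HTML report actually appears
.\DatabaseUpdater.exe --connectionString="<your connection string>" --previewReportPath="C:\Temp\DeltaReports"
If the report shows up there, the app itself is fine and the problem is on the deployment side.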
FWIW, these are my updated scripts.
First, the Octopus Script:
$packagePath = $OctopusParameters["Octopus.Action.Package[Trident.Database].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Connection.String"]
$environmentName = $OctopusParameters["Octopus.Environment.Name"]
$reportPath = $OctopusParameters["Project.Database.Report.Path"]
cd $packagePath
$appToRun = ".\Octopus.Trident.Database.DbUp"
$generatedReport = "$reportPath\UpgradeReport.html"
& $appToRun --ConnectionString="$connectionString" --PreviewReportPath="$reportPath"
New-OctopusArtifact -Path "$generatedReport" -Name "$environmentName.UpgradeReport.html"
My C# code can be found here, but for ease of use you can see it all below (I'm not proud of how I parse the parameters).
static void Main(string[] args)
{
var connectionString = args.FirstOrDefault(x => x.StartsWith("--ConnectionString", StringComparison.OrdinalIgnoreCase));
connectionString = connectionString.Substring(connectionString.IndexOf("=") + 1).Replace(@"""", string.Empty);
var executingPath = Assembly.GetExecutingAssembly().Location.Replace("Octopus.Trident.Database.DbUp", "").Replace(".dll", "").Replace(".exe", "");
Console.WriteLine($"The execution location is {executingPath}");
var deploymentScriptPath = Path.Combine(executingPath, "DeploymentScripts");
Console.WriteLine($"The deployment script path is located at {deploymentScriptPath}");
var postDeploymentScriptsPath = Path.Combine(executingPath, "PostDeploymentScripts");
Console.WriteLine($"The post deployment scripts path is located at {postDeploymentScriptsPath}");
var upgradeEngineBuilder = DeployChanges.To
.SqlDatabase(connectionString, null)
.WithScriptsFromFileSystem(deploymentScriptPath, new SqlScriptOptions { ScriptType = ScriptType.RunOnce, RunGroupOrder = 1 })
.WithScriptsFromFileSystem(postDeploymentScriptsPath, new SqlScriptOptions { ScriptType = ScriptType.RunAlways, RunGroupOrder = 2 })
.WithTransactionPerScript()
.LogToConsole();
var upgrader = upgradeEngineBuilder.Build();
Console.WriteLine("Is upgrade required: " + upgrader.IsUpgradeRequired());
if (args.Any(a => a.StartsWith("--PreviewReportPath", StringComparison.InvariantCultureIgnoreCase)))
{
// Generate a preview file so Octopus Deploy can generate an artifact for approvals
var report = args.FirstOrDefault(x => x.StartsWith("--PreviewReportPath", StringComparison.OrdinalIgnoreCase));
report = report.Substring(report.IndexOf("=") + 1).Replace(@"""", string.Empty);
if (Directory.Exists(report) == false)
{
Directory.CreateDirectory(report);
}
var fullReportPath = Path.Combine(report, "UpgradeReport.html");
if (File.Exists(fullReportPath) == true)
{
File.Delete(fullReportPath);
}
Console.WriteLine($"Generating the report at {fullReportPath}");
upgrader.GenerateUpgradeHtmlReport(fullReportPath);
}
else
{
var result = upgrader.PerformUpgrade();
// Display the result
if (result.Successful)
{
Console.ForegroundColor = ConsoleColor.Green;
Console.WriteLine("Success!");
}
else
{
Console.ForegroundColor = ConsoleColor.Red;
Console.WriteLine(result.Error);
Console.WriteLine("Failed!");
}
}
}
I hope that helps!
After a long and detailed investigation, we discovered the answer was quite obvious.
We had assumed the existing deploy process configuration was sound, because we had never had a problem with it (until now). As it transpires, there was a problem that led to the Development deployments being deployed twice.
Hence errors like the one above, and others about file handles being held by another process.
It was obvious in hindsight, but we were blind to it because we thought the existing process was sound 😣
I use SWUpdate to update different hardware revisions of the same device with a double-copy strategy.
The bootloader environment of all of them looks very similar. However, I have to set the MMC partition to boot from depending on the active copy, and the boot_file depending on the hardware revision.
To keep the sw-description file as comprehensive as possible and easy to maintain, I would like to set a "basic" boot environment for all devices in a first step, and in a second step overwrite some variables depending on hardware revision and active copy:
software =
{
version = "1.1";
hardware-compatibility = ["0.1","1.0"];
device1=
{
copy-1:
{
images:
(
{
filename = "rootfs.ext3.gz";
device = "/dev/mmcblk0p3";
compressed = true;
},
{
filename = "u-boot-env-base"; #basic boot environment
type = "uboot";
}
);
bootenv: # device-specific boot variables
(
{
name = "boot_file";
value = "uImage1";
},
{
name = "mmcpart";
value = "3";
}
);
}
}
}
While parsing, both bootloader environments are reported, but either only one is applied, or both are applied in the wrong order: when checking via fw_printenv, the "u-boot-env-base" settings are unaltered.
I am using
SWUpdate v2018.11.0
U-Boot 2018.09.
I feel that I had this working in an older setup (SWUpdate 2016).
I asked this question on the mailing list. Stefano Babic, SWUpdate developer and maintainer, answered it; I am just trying to summarize it here.
What I have described is the intended behaviour: it is not foreseen to set bootloader variables twice during an update. The U-Boot variables defined in a file have priority over the name-value pairs in the bootenv section, because the file is processed at the very end of the update. The solution in my case is to set the variables only in the bootenv section.
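For my example above that means dropping the u-boot-env-base file entirely and putting everything, including the former "base" variables, into bootenv. Roughly like this (the bootcmd entry is just a made-up stand-in for one of my base variables):
software =
{
    version = "1.1";
    hardware-compatibility = ["0.1","1.0"];
    device1 =
    {
        copy-1:
        {
            images:
            (
                {
                    filename = "rootfs.ext3.gz";
                    device = "/dev/mmcblk0p3";
                    compressed = true;
                }
            );
            bootenv: # all u-boot variables, base ones included, set in one place
            (
                {
                    name = "bootcmd"; # placeholder for a "base" variable
                    value = "run mmcboot";
                },
                {
                    name = "boot_file";
                    value = "uImage1";
                },
                {
                    name = "mmcpart";
                    value = "3";
                }
            );
        }
    }
}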
I'm relatively new to Terraform and I'm trying to iterate over all aws_instance resources to apply a null_resource. Can you use multiple splats to access all instances, regardless of their names?
The EC2 instances are broken down into three types:
aws_instance.web.* (3 instances)
aws_instance.app.* (3 instances)
aws_instance.db.* (2 instances)
Here's my attempt to apply a null_resource to all eight aws_instances:
resource "null_resource" "install_security_package" {
#count = "${length(aws_instance)}" #terraform error: resource count can't reference variable: aws_instance
#count = "${length(aws_instance.*)}" #terraform error: resource variables must be three parts: TYPE.NAME.ATTR
count = "${length(aws_instance.*.*)}" #terraform error: unknown resource 'aws_instance.*'
connection {
type = "ssh"
host = "${element(aws_instance.*.private_ip, count.index)}"
user = "${lookup(var.user, var.platform)}"
private_key = "${file("${var.private_key_path}")}"
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo rpm -Uvh http://www.example.com/security/repo/security_baseline.rpm",
]
}
}
It is not currently possible to match all resources of a given type. The "splat" syntax, as you've seen, only allows selecting all of the instances created from a particular resource block.
The closest you can get to this with Terraform today is to concatenate together the different resources:
concat(aws_instance.web.*.private_ip, aws_instance.app.*.private_ip, aws_instance.db.*.private_ip)
In the current version of Terraform, as of this answer, it is necessary to use some of the workarounds shared in GitHub issue #4084 in order to avoid duplicating that complex expression in multiple places. A forthcoming feature called Local Values will make this simpler in the near future, allowing the list to be given a name that can be re-used in multiple places:
# Won't work until Terraform PR#15449 is merged and released
locals {
aws_instance_addrs = "${concat(aws_instance.web.*.private_ip, aws_instance.app.*.private_ip, aws_instance.db.*.private_ip)}"
}
resource "null_resource" "install_security_package" {
count = "${length(local.aws_instance_addrs)}"
connection {
type = "ssh"
host = "${local.aws_instance_addrs[count.index]}"
user = "${lookup(var.user, var.platform)}"
private_key = "${file("${var.private_key_path}")}"
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo rpm -Uvh http://www.example.com/security/repo/security_baseline.rpm",
]
}
}
Let's say I'm using Terraform to provision two machines inside AWS:
An EC2 Machine running NodeJS
An RDS instance
How does the NodeJS code obtain the address of the RDS instance?
You've got a couple of options here. The simplest one is to create a CNAME record in Route53 for the database and then always point to that CNAME in your application.
A basic example would look something like this:
resource "aws_db_instance" "mydb" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "bar"
db_subnet_group_name = "my_database_subnet_group"
parameter_group_name = "default.mysql5.6"
}
resource "aws_route53_record" "database" {
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "database.example.com"
type = "CNAME"
ttl = "300"
records = ["${aws_db_instance.mydb.address}"]
}
Alternative options include taking the endpoint output from the aws_db_instance resource and passing it into a user data script when creating the EC2 instance, or passing it to Consul and using Consul Template to control the config that your application uses.
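For example, the user data route might look roughly like this (the instance resource and AMI here are just placeholders; the DB resource name matches the example above):
resource "aws_instance" "app" {
  ami           = "ami-123456" # placeholder AMI
  instance_type = "t2.micro"

  # Interpolate the RDS address into user data so the NodeJS app can
  # read it (e.g. from an env file) when the instance boots.
  user_data = <<EOF
#!/bin/bash
echo "DB_HOST=${aws_db_instance.mydb.address}" >> /etc/app.env
EOF
}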
You may try Sparrowform, a lightweight provisioning tool for Terraform-based instances. It can make an inventory of Terraform resources and provision the related hosts, passing in all the necessary data:
$ terraform apply # bootstrap infrastructure
$ cat sparrowfile # this scenario
# fetches DB address from terraform cache
# and populate configuration file
# at server with node js code:
#!/usr/bin/env perl6
use Sparrowform;
my $rdb-address;
for tf-resources() -> $r {
my $r-id = $r[0]; # resource id
if ( $r-id eq 'aws_db_instance.mydb' ) {
my $r-data = $r[1];
$rdb-address = $r-data<address>;
last;
}
}
# For instance, we can
# Install configuration file
# Next chunk of code will be applied to
# The server with node-js code:
template-create '/path/to/config/app.conf', %(
source => ( slurp 'app.conf.tmpl' ),
variables => %(
rdb-address => $rdb-address
),
);
# sparrowform --ssh_private_key=~/.ssh/aws.pem --ssh_user=ec2 # run provisioning
PS: Disclosure - I am the tool author.
I really am stumped - I've spent almost two hours searching for an answer to a ridiculously simple question: how can I continually keep two local files in sync on my Mac? I investigated various tricks involving rsync, then settled on using lsyncd.
But for the life of me, I can't figure out how I can get lsyncd to sync two specific files. Is this even supported in the API? It was not clear in the documentation whether or not I could use rsync in this manner; I assume that lsyncd is passing CLI options which are preventing this. My configuration is as follows:
sync = {
default.rsync,
source = "/Users/username/Downloads/test1.txt",
target = "/Users/username/Downloads/test2.txt",
rsync = {
binary = "/usr/local/bin/rsync",
archive = "true"
}
}
It just says 'nothing to sync'. Help?
This worked for me:
sync {
default.rsync,
source = "/Users/username/Downloads/",
target = "/Users/username/Downloads/",
rsync = {
binary = "/usr/bin/rsync",
archive = "true",
_extra = {
"--include=test1.txt",
"--exclude=*"
}
}
}
You have to use rsync's include/exclude feature, which doesn't come out of the box with lsyncd; you have to use the _extra field to set those options.
In lsyncd you can do it like this:
settings {
logfile = "/var/log/lsyncd.log",
statusFile = "/var/log/lsyncd-status.log",
statusInterval = 20,
nodaemon = true
}
sync {
default.rsync,
source="/srcdir/",
target="/dstdir/",
rsync = {
archive = true,
compress = true,
whole_file = false,
_extra = { "--include=asterisk", "--exclude=*" },
verbose = true
},
delay=5,
log=all,
}
After starting lsyncd I have the following:
root@localhost:/srcdir# ls
12 aster asterisk
root@localhost:/dstdir# ls
asterisk