Why does my systemd service not start after a reboot? - amazon-ec2

I am building an AMI using Packer which has a custom systemd unit configured. The AMI is then deployed to an EC2 instance. The issue is that if the instance reboots, the unit is not started again.
Here is my unit:
[Unit]
Description=My service
After=network.target
StartLimitIntervalSec=0
StartLimitAction=reboot

[Service]
Type=simple
Restart=always
RestartSec=30
User=ubuntu
ExecStart=/opt/app/app

[Install]
WantedBy=multi-user.target
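For reference, WantedBy=multi-user.target only takes effect once the unit has been enabled: systemctl enable records that by creating a symlink under the target's .wants directory. On a booted instance the state can be checked like this (standard systemctl usage, shown as a quick sketch):
# should print "enabled" if the unit will be started at boot
systemctl is-enabled app.service
# the enablement itself is just this symlink
ls -l /etc/systemd/system/multi-user.target.wants/app.service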
And here is my Packer configuration:
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "region": "{{env `AWS_REGION`}}"
  },
  "builders": [
    {
      "access_key": "{{user `aws_access_key`}}",
      "ami_name": "my-app-{{timestamp}}",
      "instance_type": "t2.micro",
      "region": "{{user `region`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "ubuntu/images/*ubuntu-bionic-18.04-amd64-server-*",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": true
      },
      "ssh_username": "ubuntu",
      "type": "amazon-ebs"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "{{template_dir}}/provision.sh"
    },
    {
      "type": "file",
      "source": "{{template_dir}}/files/app.service",
      "destination": "/tmp/upload/etc/systemd/system/app.service"
    },
    {
      "type": "file",
      "source": "{{template_dir}}/../bin/Release/netcoreapp3.1/",
      "destination": "/tmp/upload/opt/app"
    },
    {
      "type": "shell",
      "inline": [
        "sudo rsync -a /tmp/upload/ /",
        "cd /opt/app",
        "sudo systemctl daemon-reload",
        "sudo systemctl enable app.service"
      ]
    }
  ]
}
Strangely, if I SSH into the running instance and enable the service, it does start again after a reboot.
sudo systemctl enable app.service
sudo reboot
This makes me think I am not creating the AMI correctly, but in my Packer configuration I do enable the service!
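One way to narrow this down is to have the last shell provisioner print the enablement state right before the AMI is snapshotted; if it does not report "enabled" at that point, the bake itself is at fault rather than the instance. A minimal sketch of such a check (same unit name as above, standard systemctl commands):
{
  "type": "shell",
  "inline": [
    "sudo systemctl daemon-reload",
    "sudo systemctl enable app.service",
    "systemctl is-enabled app.service",
    "ls -l /etc/systemd/system/multi-user.target.wants/"
  ]
}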
Why does my AMI not have my systemd unit enabled?

Related

Packer provisioners don't save installed packages

I have encountered an issue during provisioning with HashiCorp Packer for virtualbox-iso on Alpine Linux v3.16.
The provisioning script runs OK and Packer logs that the build has finished; however, when I open the resulting OVF file in VirtualBox, the copied files and Docker are not present.
I would be grateful for any advice.
I run packer build packer-virtualbox-alpine-governator.json
packer-virtualbox-alpine-governator.json file:
{
  "variables": {
    "password": "packer"
  },
  "builders": [
    {
      "type": "virtualbox-iso",
      "memory": 8192,
      "guest_os_type": "Other_64",
      "iso_url": "https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-standard-3.16.0-x86_64.iso",
      "iso_checksum": "file:https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-standard-3.16.0-x86_64.iso.sha256",
      "ssh_username": "root",
      "ssh_password": "{{user `password`}}",
      "shutdown_command": "poweroff",
      "hard_drive_interface": "sata",
      "boot_command": [
        "root<enter><wait>",
        "setup-alpine<enter><wait>us<enter><wait>us<enter><wait><enter><wait><enter><wait><enter><wait><enter><wait5>{{user `password`}}<enter><wait>{{user `password`}}<enter><wait><enter><wait><enter><wait><enter><wait15><enter><wait>openssh<enter><wait>openssh-full<enter><wait5>test123<enter><wait5>test123<enter><wait><enter><wait><enter><wait>sda<enter><wait>sys<enter><wait>y<enter><wait30>",
        "echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config<enter><wait>",
        "/etc/init.d/sshd restart<enter><wait5>"
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["mkdir -p /opt/site/governator"]
    },
    {
      "type": "file",
      "source": "files/docker-compose.yaml",
      "destination": "/opt/site/"
    },
    {
      "type": "file",
      "source": "files/governator.conf",
      "destination": "/opt/site/governator/"
    },
    {
      "type": "shell",
      "scripts": [
        "scripts/alpine/install-docker-on-alpine.sh"
      ]
    }
  ]
}
./scripts/alpine/install-docker-on-alpine.sh
#!/bin/ash
# Write an apk repositories file matching the installed Alpine release,
# plus edge/testing for packages not yet in the release repositories.
cat > /etc/apk/repositories << EOF; $(echo)
https://dl-cdn.alpinelinux.org/alpine/v$(cut -d'.' -f1,2 /etc/alpine-release)/main/
https://dl-cdn.alpinelinux.org/alpine/v$(cut -d'.' -f1,2 /etc/alpine-release)/community/
https://dl-cdn.alpinelinux.org/alpine/edge/testing/
EOF

apk update
apk add docker
addgroup $USER docker
# Start Docker at boot and immediately
rc-update add docker boot
service docker start
apk add docker-compose
sync

AWS | Laravel | ECS (Blue/Green) | CodeDeploy does not finish

I'm using Laravel and want to deploy it to ECS with a blue/green deployment to see how it works.
Laravel runs fine in my development environment, and I was able to launch the project on EC2 using Docker.
Now I want to use Fargate for the first time and deploy to ECS!
CodeBuild has already completed successfully.
appspec.yml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "<TASK_DEFINITION>"
        LoadBalancerInfo:
          ContainerName: "nginx"
          ContainerPort: "80"
taskdef.json
{
  "taskRoleArn": "arn:aws:iam::**********:role/ecsTaskExecutionRole",
  "executionRoleArn": "arn:aws:iam::**********:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/****-system",
          "awslogs-region": "******",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": [
        "sh",
        "-c"
      ],
      "command": [
        "php artisan config:cache && php artisan migrate && chmod -R 777 storage/ && chmod -R 777 bootstrap/cache/"
      ],
      "cpu": 0,
      "environment": [
        {
          "name": "APP_ENV",
          "value": "staging"
        }
      ],
      "workingDirectory": "/var/www/html",
      "image": "<IMAGE1_NAME>",
      "name": "php"
    },
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/****-system",
          "awslogs-region": "****",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "environment": [
        {
          "name": "APP_ENV",
          "value": "staging"
        }
      ],
      "workingDirectory": "/var/www/html",
      "image": "**********.dkr.ecr.**********.amazonaws.com/**********-nginx:latest",
      "name": "nginx"
    }
  ],
  "placementConstraints": [],
  "memory": "2048",
  "family": "*****-system",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "volumes": []
}
CodeDeploy stops at the Install step, and there are no errors.
As you can see in the screenshots below, "<TASK_DEFINITION>" has been replaced.
I'd like to know if there's any information I'm missing.
I'm not sure how to set environment variables such as those in ".env", so I'm wondering whether that might be the cause.
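For what it's worth, values from .env can also be passed to a container through the environment list of the task definition, the same way APP_ENV is already set in taskdef.json above. A minimal sketch (the variable names and values below are only illustrative, not taken from this project):
"environment": [
  { "name": "APP_ENV", "value": "staging" },
  { "name": "APP_KEY", "value": "base64:REPLACE_ME" },
  { "name": "DB_HOST", "value": "your-db-endpoint.example.com" }
]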
Screenshots: CodeDeploy Failed, Revision, Task Definitions, ECR, ECR nginx, ECR src (laravel).
If you want to edit the .env file to set environment variables, you can open an SSH connection to your web server and edit .env in the project root, for example with nano.
You can also modify the file over an FTP connection.
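A minimal sketch of that approach (host, user, and path are placeholders for your own setup):
ssh ubuntu@your-server.example.com
cd /var/www/html           # project root; adjust to your deployment path
nano .env                  # edit the variables and save
php artisan config:cache   # rebuild the cached configuration if you use config caching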

Baking AMIs in Spinnaker Pipeline using Chef-Solo

I'm trying to build a test pipeline in Spinnaker to bake an AMI and then update a CloudFormation template to deploy the AMI to EC2 instances in an auto scaling group.
I have developed a small test Chef cookbook which works great when running Packer locally. I run berks locally on my laptop to vendor my cookbooks and pull them from our internal Chef Supermarket. Packer is configured with the chef-solo provisioner as shown in the sample Packer template below, and it transfers the cookbooks to the Packer builder EC2 instance and runs Chef. Right now we're testing with Linux, but we want to support both Linux and Windows AMIs.
Is it possible to use chef-solo with a custom packer template with Spinnaker? If so, when and where should berks run to vendor the cookbooks before packer executes?
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "aws_region": "{{env `AWS_REGION`}}",
    "ssh_private_key_file": "{{env `SSH_PRIVATE_KEY_FILE`}}",
    "subnet_id": "{{env `AWS_SUBNET_ID`}}",
    "vpc_id": "{{env `AWS_DEFAULT_VPC_ID`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "{{user `aws_region`}}",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "amzn2-ami-hvm-*-x86_64-gp2",
          "root-device-type": "ebs"
        },
        "most_recent": true,
        "owners": [
          "amazon"
        ]
      },
      "ami_name": "test-ami-{{timestamp}}",
      "ami_description": "Test Linux AMI",
      "communicator": "ssh",
      "instance_type": "m4.large",
      "subnet_id": "{{user `subnet_id`}}",
      "tags": {
        "Name": "Test Linux AMI"
      },
      "ssh_username": "ec2-user",
      "ssh_keypair_name": "TestKeypair",
      "ssh_private_key_file": "{{user `ssh_private_key_file`}}",
      "vpc_id": "{{user `vpc_id`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell-local",
      "command": "berks vendor --delete -b ./Berksfile ./cookbooks"
    },
    {
      "type": "chef-solo",
      "cookbook_paths": [
        "./cookbooks"
      ],
      "run_list": [
        "recipe[test_cookbook]"
      ]
    }
  ]
}

Mesos/Marathon - reserved resources for role are not offered for Marathon app

I have assigned the slave's resources to a particular role ("app-role") by adding the --default_role="app-role" parameter to ExecStart of the slave service (/etc/systemd/system/dcos-mesos-slave.service). Then I restarted the slave agent:
sudo systemctl daemon-reload
sudo systemctl stop dcos-mesos-slave.service
sudo rm -f /var/lib/mesos/slave/meta/slaves/latest
sudo systemctl start dcos-mesos-slave.service
and verified it with: curl master.mesos/mesos/slaves.
After that I expected the Marathon app with the acceptedResourceRoles attribute to receive only offers for these reserved resources, but that does not happen (the app is stuck in the waiting state).
Why doesn't Marathon receive the offers? What needs to change to make this work?
{
  "id": "/basic-4",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.5,
  "mem": 32,
  "disk": 0,
  "instances": 1,
  "acceptedResourceRoles": [
    "app-role"
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "python:3",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 0,
          "servicePort": 10000,
          "protocol": "tcp",
          "name": "my-vip",
          "labels": {
            "VIP_0": "/my-service:5555"
          }
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": false
    }
  },
  "portDefinitions": [
    {
      "port": 10000,
      "protocol": "tcp",
      "name": "default",
      "labels": {}
    }
  ]
}
This works only if Marathon is started with --mesos_role set; in the context of the question this should be --mesos_role 'app-role'.
If you set --mesos_role to a given role, Marathon will register with Mesos for that role – it will receive offers for resources that are reserved for this role, in addition to unreserved resources.
If you set --default_accepted_resource_roles *, Marathon will apply this default to all AppDefinitions that do not explicitly define acceptedResourceRoles. Since your AppDefinition defines that option, the default will not be applied (both are equal anyway).
If you set "acceptedResourceRoles": ["*"] in an AppDefinition (or the AppDefinition inherits a default of "*"), Marathon will only consider unreserved resources for launching this app.
More: https://mesosphere.github.io/marathon/docs/recipes.html
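Assuming Marathon is restarted with --mesos_role app-role as described above, the app definition should then start receiving the reserved offers. If the app should also be allowed to fall back on unreserved resources, both roles can be listed, as a sketch based on the recipes page linked above:
"acceptedResourceRoles": ["app-role", "*"]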

SSH timeout when creating vagrant box with packer

I use the following template to create a Vagrant box with Packer. However, I get the error "Build 'vmware-vmx' errored: Timeout waiting for SSH." How can I fix this?
{
  "builders": [{
    "type": "vmware-vmx",
    "source_path": "/path/to/a/vm.vmx",
    "ssh_username": "root",
    "ssh_password": "root",
    "ssh_wait_timeout": "30s",
    "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["echo 'my additional provisioning steps'"]
  }],
  "post-processors": [{
    "type": "vagrant",
    "keep_input_artifact": true,
    "output": "mycentos.box"
  }]
}
Set the headless parameter of the builder to false, start the build, and watch the VM console for an error. If no error occurs, increase the timeout parameter; 30s is quite short for instantiating, cloning, and booting the VM.
In your case:
"builders": [{
"type": "vmware-vmx",
"source_path": "/path/to/a/vm.vmx",
"ssh_username": "root",
"ssh_password": "root",
"headless" : false,
"ssh_wait_timeout": "1000s",
"shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}]
When you don't need the SSH connection during the provisioning process, you can switch it off. See the Packer documentation on the communicator option; setting it to none switches off communication between host and guest.
{
  "builders": [
    {
      "type": "vmware-vmx",
      "communicator": "none"
    }
  ]
}
See the Packer documentation for the vmware-vmx builder.
