Calling java -jar command using AWS Lambda steps - aws-lambda

I have 4 shell scripts which I embedded in Java code and packaged into a JAR. I also have an AWS Lambda function which brings up the EMR cluster. In the Lambda function, I need to run the generated JAR (java -jar /home/hadoop/aws.jar) using steps. I have bootstrap actions that set a few environment variables when the cluster is brought up. So ideally, once the cluster is up, it should run the java -jar command that was specified in the step values of the JSON event.
But the problem is that the EMR cluster terminates because the step JAR command fails. Is there any other way to run the java -jar command from Lambda using steps?
"Steps":[
{
"Name": "Setup hadoop debugging",
"ActionOnFailure": "TERMINATE_CLUSTER",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args": [
"state-pusher-script"
]
}
},
{
"Name": "Execute Step JAR",
"ActionOnFailure": "TERMINATE_CLUSTER",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args":[
"java -jar /home/hadoop/lib/aws-add-step-emr-0.0.1-SNAPSHOT-shaded.jar"
]
}
}
],
"BootstrapActions":[
{
"Name": "Custom action",
"ScriptBootstrapAction": {
"Path": "s3://aws-east-1/bootstrap/init.sh"
}
}]
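One likely culprit worth checking (a sketch, not a confirmed fix for this cluster): command-runner.jar treats each element of Args as a separate token, so passing the whole command line as a single string tends to fail because EMR then looks for an executable literally named "java -jar ...". Split out, the second step would look like this:

"Name": "Execute Step JAR",
"ActionOnFailure": "TERMINATE_CLUSTER",
"HadoopJarStep": {
  "Jar": "command-runner.jar",
  "Args": [
    "java",
    "-jar",
    "/home/hadoop/lib/aws-add-step-emr-0.0.1-SNAPSHOT-shaded.jar"
  ]
}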

Related

How to get Spring Actuator build information locally?

When accessing my Spring Actuator /info endpoint I receive the following information:
{
  "git": {
    "branch": "8743b52063cd84097a65d1633f5c74f5",
    "commit": {
      "id": "b3j2924",
      "time": "05.07.2021 # 10:00:00 UTC"
    }
  },
  "build": {
    "encoding": {
      "source": "UTF-8"
    },
    "version": "1.0",
    "artifact": "my-artifact",
    "name": "my-app",
    "time": 0451570333.122000000,
    "group": "my.group"
  }
}
My project does not maintain a META-INF/build-info.properties file.
I now want to write a unit test for that exact output but get the following error:
java.lang.AssertionError:
Expecting actual:
"{"git":{"branch":"8743b52063cd84097a65d1633f5c74f5","commit":{"id":"b3j2924","time":"05.07.2021 # 10:00:00 UTC"}}}"
to contain:
"build"
The whole build block is missing from the output.
My questions are the following:
What needs to be done to access the build information during a local unit-test run without providing a META-INF/build-info.properties file?
From where does Spring Actuator retrieve the actual build information when my project does not have a META-INF/build-info.properties file, such that it gives me the output from above?
The build-info.properties file is typically generated at build time by Spring Boot's Maven or Gradle plugins (the build-info goal of spring-boot-maven-plugin, or springBoot { buildInfo() } in Gradle). Actuator reads that file into an auto-configured BuildProperties bean, which is what the info endpoint's build block is rendered from; in a plain local unit-test run the file has not been generated, which is why the block is missing.
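For the unit-test side, a minimal sketch (the configuration class name and property values below are illustrative, assuming standard Actuator auto-configuration): since the info endpoint renders whatever BuildProperties bean is present, a test configuration can supply one directly instead of relying on a generated file.

// Hypothetical test configuration: supplies BuildProperties by hand so the
// "build" block appears in /info without a generated build-info.properties.
import java.util.Properties;

import org.springframework.boot.info.BuildProperties;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;

@TestConfiguration
public class BuildInfoTestConfig {

    @Bean
    BuildProperties buildProperties() {
        // Keys are the unprefixed form (the generated file stores them
        // as build.version, build.artifact, etc.).
        Properties entries = new Properties();
        entries.setProperty("version", "1.0");
        entries.setProperty("artifact", "my-artifact");
        entries.setProperty("name", "my-app");
        entries.setProperty("group", "my.group");
        return new BuildProperties(entries);
    }
}

Importing this class into the test (e.g. via @Import) should make the assertion on "build" pass.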

Running vscode task that includes which fails

I have a task created that runs some unit tests via bash_unit.
The bash_unit script seems to fail based on its use of which.
If I replace:
CAT="$(which cat)"
and the other which references in bash_unit to point at my local commands, all runs great.
If I run bash_unit directly, all is good, but if I run it as a vscode task it fails.
I have simplified the task below to the minimum failure:
{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "label": "run which",
      "type": "shell",
      "command": "/usr/bin/which which",
      "problemMatcher": [],
      "group": "test"
    }
  ]
}
This produces the following:
The terminal process "/bin/bash '-c', '/usr/bin/which which'" failed to launch (exit code: 1).
Any ideas what is happening?
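One avenue worth checking (an assumption, not a confirmed diagnosis): which only searches PATH, and the shell VS Code spawns for a task may start with a leaner PATH than your interactive login shell, in which case which which genuinely exits 1. The task's options block lets you pin the environment explicitly, for example:

{
  "label": "run which",
  "type": "shell",
  "command": "/usr/bin/which which",
  "options": {
    // Hypothetical PATH; adjust to wherever your commands actually live.
    "env": { "PATH": "/usr/local/bin:/usr/bin:/bin" }
  },
  "problemMatcher": [],
  "group": "test"
}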

Does the AWS SSM agent emit CloudWatch resource:memory events?

I am using an ECS-optimised instance (ami-05958d7635caa4d04) in the ECS data plane in the ca-central-1 region.
AWS Systems Manager Agent (SSM Agent) is Amazon software that can be installed and configured on an Amazon EC2 instance, an on-premises server, or a virtual machine (VM). SSM Agent makes it possible for Systems Manager to update, manage, and configure these resources.
In my scenario, launching an ECS task on the ECS-optimised instance (ami-05958d7635caa4d04) causes a resource:memory error (more on this error here). Monitoring ECS -> cluster -> service -> events will not work for me, because CloudFormation rolls back the cluster.
My existing ECS-optimised instance is launched as shown below:
"EC2Instance":{
"Type": "AWS::EC2::Instance",
"Properties":{
"ImageId": "ami-05958d7635caa4d04",
"InstanceType": "t2.micro",
"SubnetId": { "Ref": "SubnetId"},
"KeyName": { "Ref": "KeyName"},
"SecurityGroupIds": [ { "Ref": "EC2InstanceSecurityGroup"} ],
"IamInstanceProfile": { "Ref" : "EC2InstanceProfile"},
"UserData":{
"Fn::Base64": { "Fn::Join": ["", [
"#!/bin/bash\n",
"echo ECS_CLUSTER=", { "Ref": "EcsCluster" }, " >> /etc/ecs/ecs.config\n",
"groupadd -g 1000 jenkins\n",
"useradd -u 1000 -g jenkins jenkins\n",
"mkdir -p /ecs/jenkins_home\n",
"chown -R jenkins:jenkins /ecs/jenkins_home\n"
] ] }
},
"Tags": [ { "Key": "Name", "Value": { "Fn::Join": ["", [ { "Ref": "AWS::StackName"}, "-instance" ] ]} }]
}
}
1) Is the SSM agent required on the ECS instance (ami-05958d7635caa4d04) to retrieve such CloudWatch events (resource:memory) with an aws.ssm CloudWatch event rule filter? Or does an aws.ec2 CloudWatch event rule filter suffice?
2) If yes, do I need to explicitly install the SSM agent on the ECS instance (ami-05958d7635caa4d04) through CloudFormation?
You don't need to install the SSM agent to monitor something such as memory usage of your instance (whether it is a container instance or not). This is the domain of CloudWatch, not SSM.
All you need to install is the unified CloudWatch agent and configure it accordingly. This is where SSM can help, but it is not necessary; you can install the agent manually (or via a script if you want).
If you decide to use SSM then you will need to install it explicitly. It comes preinstalled on some OSes, but not on the Amazon ECS-Optimized AMI - more about this here.
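As a sketch of the manual route, the UserData from the question could be extended to pull in the unified CloudWatch agent (assuming an Amazon Linux 2 based AMI, where the agent ships as a yum package; the agent config file path is a placeholder):

"UserData": {
  "Fn::Base64": { "Fn::Join": ["", [
    "#!/bin/bash\n",
    "echo ECS_CLUSTER=", { "Ref": "EcsCluster" }, " >> /etc/ecs/ecs.config\n",
    "yum install -y amazon-cloudwatch-agent\n",
    "/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl ",
    "-a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json\n"
  ] ] }
}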

Prevent Octopus from Running a Deployment Script

I am deploying a package that contains a deploy.ps1 file. As you already know, Octopus runs this script on deployment by default; I want to prevent that from happening and run a custom script instead.
If you have a requirement like this, then it's better to move the PowerShell that starts the services to a separate step and then tag the tentacles you want that script to run on.
In your deployment step for the service, set the start mode to "Manual".
Then have a step that starts the service, and scope that step to the environments/servers that you want to auto-start.
The code for the step template I use here is:
{
  "Id": "ActionTemplates-1",
  "Name": "Enable and start service",
  "Description": null,
  "ActionType": "Octopus.Script",
  "Version": 8,
  "Properties": {
    "Octopus.Action.Package.NuGetFeedId": "feeds-builtin",
    "Octopus.Action.Script.Syntax": "PowerShell",
    "Octopus.Action.Script.ScriptSource": "Inline",
    "Octopus.Action.RunOnServer": "false",
    "Octopus.Action.Script.ScriptBody": "$serviceName = $OctopusParameters[\"ServiceName\"]\n\nwrite-host \"the service is: \" $serviceName\n\n& \"sc.exe\" config $serviceName start= delayed-auto\n& \"sc.exe\" start $serviceName\n\n"
  },
  "Parameters": [
    {
      "Name": "ServiceName",
      "Label": "Service Name",
      "HelpText": null,
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    }
  ],
  "$Meta": {
    "ExportedAt": "2016-10-10T10:21:21.980Z",
    "OctopusVersion": "3.3.2",
    "Type": "ActionTemplate"
  }
}
You may want to modify the step template, as it will set the service to "Automatic - Delayed" and then start the service.
Are you able to move the script to a subfolder? These scripts must be located in the root of your package:
http://docs.octopusdeploy.com/display/OD/Custom+scripts
Alternatively, don't include your deploy.ps1 script in the deployment package at all if it should never run as part of the deployment.
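If the package is built from a .nuspec, a rough sketch of keeping the script out at pack time (the file patterns below are assumptions about your package layout):

<files>
  <!-- Hypothetical layout: ship everything except the root deploy.ps1 -->
  <file src="**\*" exclude="deploy.ps1" />
</files>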

When do EMR bootstrap actions run

I am creating an AWS EMR cluster and I have a bootstrap action to change spark-defaults.conf.
The server keeps getting terminated, saying:
can't read /etc/spark/conf/spark-defaults.conf: No such file or
directory
Though if I skip this action and check on the server, the file does exist. So I assume the order of things is not correct. I am using Spark 1.6.1 as provided by EMR 4.5, so it should be installed by default.
Any clues?
Thanks!
You should not change Spark configuration in a bootstrap action: bootstrap actions run before applications such as Spark are installed on the cluster, which is why the file does not exist yet at that point. Instead, you should specify any changes you have to spark-defaults in a configuration JSON file that you add when launching the cluster. If you use the CLI to launch, the command should look something like this:
aws --profile MY_PROFILE emr create-cluster \
  --release-label emr-4.6.0 \
  --applications Name=Spark Name=Ganglia Name=Zeppelin-Sandbox \
  --name "Name of my cluster" \
  --configurations file:///path/to/my/emr-configuration.json \
  ... \
  --bootstrap-actions ... \
  --steps ...
In the emr-configuration.json file you then set your changes to spark-defaults. An example could be:
[
  {
    "Classification": "capacity-scheduler",
    "Properties": {
      "yarn.scheduler.capacity.resource-calculator": "org.apache.hadoop.yarn.util.resource.DominantResourceCalculator"
    }
  },
  {
    "Classification": "spark",
    "Properties": {
      "maximizeResourceAllocation": "true"
    }
  },
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.dynamicAllocation.enabled": "true",
      "spark.executor.cores": "7"
    }
  }
]
The best way to achieve this goal is to use a Steps definition, in a CloudFormation template for example, as steps run specifically on your master node, which holds the spark-defaults.conf file.
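A minimal sketch of what such a step could look like in CloudFormation (AWS::EMR::Step is the real resource type; the logical names and the sed command are illustrative placeholders):

"ModifySparkDefaultsStep": {
  "Type": "AWS::EMR::Step",
  "Properties": {
    "Name": "Modify spark-defaults",
    "ActionOnFailure": "CONTINUE",
    "JobFlowId": { "Ref": "EmrCluster" },
    "HadoopJarStep": {
      "Jar": "command-runner.jar",
      "Args": [
        "bash",
        "-c",
        "sudo sed -i 's/^spark.executor.cores.*/spark.executor.cores 7/' /etc/spark/conf/spark-defaults.conf"
      ]
    }
  }
}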
