Can a bash script be written inside an AWS Lambda function? - bash

Can I write a bash script inside a Lambda function? I read in the AWS docs that it can execute code written in Python, Node.js, and Java 8.
It is mentioned in some documents that it might be possible to use Bash, but there is no concrete evidence or example supporting it.

AWS recently announced the "Lambda Runtime API and Lambda Layers", two new features that enable developers to build custom runtimes. So it's now possible to run even bash scripts directly in Lambda without hacks.
As this is a very new feature (November 2018), there isn't much material around yet and some manual work still needs to be done, but you can have a look at this GitHub repo for an example to start with (disclaimer: I didn't test it). Below is a sample handler in Bash:
function handler () {
    EVENT_DATA=$1
    echo "$EVENT_DATA" 1>&2
    RESPONSE="{\"statusCode\": 200, \"body\": \"Hello World\"}"
    echo $RESPONSE
}
This actually opens up the possibility to run any programming language within a Lambda. Here is an AWS tutorial about publishing custom Lambda runtimes.

Something that might help: I'm using Node.js to call the bash script. I uploaded the script and the Node.js file to Lambda in a zip, using the following code as the handler.
exports.myHandler = function(event, context, callback) {
    const execFile = require('child_process').execFile;
    // test.sh must be marked executable (chmod +x) before it is zipped and uploaded
    execFile('./test.sh', (error, stdout, stderr) => {
        if (error) {
            return callback(error);
        }
        callback(null, stdout);
    });
};
You can use the callback to return the data you need.

AWS supports custom runtimes now, based on this announcement here. I already tested a bash script and it worked. All you need to do is create a new Lambda function and choose a runtime of type Custom; it will create the following file structure:
mylambda_func
|- bootstrap
|- function.sh
Example Bootstrap:
#!/bin/sh
set -euo pipefail

# Handler format: <script_name>.<function_name>
# The script file <script_name>.sh must be located in
# the same directory as the bootstrap executable.
source $(dirname "$0")/"$(echo $_HANDLER | cut -d. -f1).sh"

while true
do
    # Request the next event from the Lambda Runtime
    HEADERS="$(mktemp)"
    EVENT_DATA=$(curl -v -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    INVOCATION_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

    # Execute the handler function from the script
    RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA")

    # Send the response to the Lambda Runtime
    curl -v -sS -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"
done
Example function.sh:
function handler () {
    EVENT_DATA=$1
    RESPONSE="{\"statusCode\": 200, \"body\": \"Hello from Lambda!\"}"
    echo $RESPONSE
}
P.S. However, in some cases you can't achieve what's needed because of environment restrictions; such cases call for AWS Systems Manager Run Command, OpsWorks (Chef/Puppet), or periodically running scheduled tasks in an ECS cluster, depending on what you're more familiar with.
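If you go the Systems Manager route, a rough sketch with boto3 might look like the following (the instance ID and commands are placeholders, and the target instance needs the SSM agent and an appropriate instance profile):
import boto3

ssm = boto3.client('ssm')

# Run a shell script on an EC2 instance via SSM Run Command.
# The instance ID and commands below are placeholders.
response = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['uname -a', 'df -h']},
)
command_id = response['Command']['CommandId']

# Wait for the command to finish, then fetch its output.
ssm.get_waiter('command_executed').wait(
    CommandId=command_id, InstanceId='i-0123456789abcdef0'
)
result = ssm.get_command_invocation(
    CommandId=command_id, InstanceId='i-0123456789abcdef0'
)
print(result['Status'], result['StandardOutputContent'])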
For more information about bash runtimes and how to zip and publish them, please check the following links:
https://docs.aws.amazon.com/en_us/lambda/latest/dg/runtimes-walkthrough.html
https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html

As you mentioned, AWS does not provide a native way to write a Lambda function in Bash (though the custom runtimes described above now make that possible).
To work around this, if you really need a bash script, you can "wrap" it in any of the supported languages.
Here is an example with Java:
Process proc = Runtime.getRuntime().exec("./your_script.sh");
Depending on your business needs, you should consider using native languages (Python, Node.js, Java) to avoid performance loss.

I was just able to capture the output of a shell command (uname) in AWS Lambda using Python.
Below is the code:
from __future__ import print_function
import json
import commands

print('Loading function')

def lambda_handler(event, context):
    print(commands.getstatusoutput('uname -a'))
It displayed the output
START RequestId: 2eb685d3-b74d-11e5-b32f-e9369236c8c6 Version: $LATEST
(0, 'Linux ip-10-0-73-222 3.14.48-33.39.amzn1.x86_64 #1 SMP Tue Jul 14 23:43:07 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux')
END RequestId: 2eb685d3-b45d-98e5-b32f-e9369236c8c6
REPORT RequestId: 2eb685d3-b74d-11e5-b31f-e9369236c8c6 Duration: 298.59 ms Billed Duration: 300 ms Memory Size: 128 MB Max Memory Used: 9 MB
For more information, check this link - https://aws.amazon.com/blogs/compute/running-executables-in-aws-lambda/
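Note that the commands module is Python 2 only and was removed in Python 3, so on the current Python Lambda runtimes you would use subprocess instead. A minimal sketch of the equivalent handler:
import subprocess

def lambda_handler(event, context):
    # Run the shell command and capture its output as text (Python 3.7+).
    result = subprocess.run(['uname', '-a'], capture_output=True, text=True)
    print(result.returncode, result.stdout.strip())
    return {'statusCode': 200, 'body': result.stdout.strip()}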

It's possible using the 'child_process' Node module.
const exec = require('child_process').exec;

exec('echo $PWD && ls', (error, stdout, stderr) => {
    if (error) {
        console.log("Error occurs");
        console.error(error);
        return;
    }
    console.log(stdout);
    console.log(stderr);
});
This will display the current working directory and list the files.

Now you can create Lambda functions in any language by providing a custom runtime, which teaches Lambda how to run the language you want to use.
You can follow this guide to learn more: AWS Lambda runtimes

As others have pointed out, in Node.js you can use the child_process module, which is built into Node.js. Here's a complete working sample:
app.js:
'use strict'

const childproc = require('child_process')

module.exports.handler = (event, context) => {
    return new Promise((resolve, reject) => {
        const commandStr = "./script.sh"
        const options = {
            maxBuffer: 10000000,
            env: process.env
        }
        childproc.exec(commandStr, options, (err, stdout, stderr) => {
            if (err) {
                console.log("ERROR:", err)
                return reject(err)
            }
            console.log("output:\n", stdout)
            const response = {
                statusCode: 200,
                body: {
                    output: stdout
                }
            }
            resolve(response)
        })
    })
}
script.sh:
#!/bin/bash
echo $PWD
ls -l
response.body.output is
/var/task
total 16
-rw-r--r-- 1 root root 751 Oct 26 1985 app.js
-rwxr-xr-x 1 root root 29 Oct 26 1985 script.sh
(NOTE: I ran this in an actual Lambda container, and it really does show the year as 1985).
Obviously, you can put whatever shell commands you want into script.sh as long as it's included in the Lambda pre-built container. You can also build your own custom Lambda container if you need a command that's not in the pre-built container.

Related

How to make kubectl commands work inside my AWS Lambda bootstrap code?

I would like to invoke an AWS Lambda function from a Java project. First, the Java project sends a payload to Lambda, then Lambda processes this payload and executes some kubectl commands. Right now I am using lambda-layer-kubectl in order to use kubectl inside the Lambda function.
The Java project code is below:
// snippet-start:[lambda.java2.invoke.main]
public static void invokeFunction(LambdaClient awsLambda, String functionName) {
    InvokeResponse res = null;
    try {
        // Need a SdkBytes instance for the payload.
        JSONObject jsonObj = new JSONObject();
        jsonObj.put("number", 80);
        String json = jsonObj.toString();
        SdkBytes payload = SdkBytes.fromUtf8String(json);

        // Setup an InvokeRequest.
        InvokeRequest request = InvokeRequest.builder()
                .functionName(functionName)
                .payload(payload)
                .build();

        res = awsLambda.invoke(request);
        String value = res.payload().asUtf8String();
        System.out.println(value);
    } catch (LambdaException e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
I am using Tutorial – Publishing a custom runtime to build my lambda function.
My bootstrap code is below:
#!/bin/sh
set -euo pipefail
export HOME="/tmp"
export PATH=$PATH://opt/awscli:/opt/kubectl:/opt/helm:/opt/jq
mkdir -p /tmp/.kube
cp kubeConfig /tmp/.kube/config

# Handler format: <script_name>.<bash_function_name>
# The script file <script_name>.sh must be located at the root of your
# function's deployment package, alongside this bootstrap executable.

# Initialization - load function handler
source $LAMBDA_TASK_ROOT/"$(echo $_HANDLER | cut -d. -f1).sh"

# Processing
while true
do
    HEADERS="$(mktemp)"
    # Get an event. The HTTP request will block until one is received
    EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    # Extract request ID by scraping response headers received above
    REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
    # Run the handler function from the script
    RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA" | jq ".number")
    if [[ $RESPONSE == 80 ]]
    then
        TEST=$(echo "1111")
        cp 80.yaml /tmp/80.yaml
        kubectl apply -f test-80.yaml
    fi
    curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$TEST"
done
After running the Java project, I got:
{"errorMessage":"2022-11-20T20:35:41.005Z e0d3dedb-3b82-4007-9bf6-5649eddda916 Task timed out after 3.01 seconds"}
Process finished with exit code 0
I tried extending the Lambda timeout to 10s, but it still times out.
However, if I put "cp 80.yaml /tmp/80.yaml" and "kubectl apply -f test-80.yaml" outside the while loop, right after "cp kubeConfig /tmp/.kube/config", the Kubernetes job is created successfully.
I expect the kubectl commands to execute successfully and a new Kubernetes job to be created.
Could somebody help me with it? Thank you very much in advance.

How to return the output of a shell script into a Jenkinsfile [duplicate]

I have something like this in a Jenkinsfile (Groovy) and I want to record the stdout and the exit code in a variable in order to use the information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh (
    script: 'git --no-pager show -s --format=\'%ae\'',
    returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh (
    script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
    returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap code into script step:
script {
    GIT_COMMIT_EMAIL = sh (
        script: 'git --no-pager show -s --format=\'%ae\'',
        returnStdout: true
    ).trim()
    echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
Current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
See the official documentation.
The quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there exists a feature request to be able to get the result of the sh step, but as far as I know, there is currently no other option.
EDIT: JENKINS-26133
EDIT2: Not quite sure since which version, but sh/bat steps can now return the standard output; simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
scripted pipeline
try {
    // Fails with non-zero exit if dir1 does not exist
    def dir1 = sh(script:'ls -la dir1', returnStdout:true).trim()
} catch (Exception ex) {
    println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately hudson.AbortException is missing any useful method to obtain that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!)
Contrary to the Javadoc https://javadoc.jenkins-ci.org/hudson/AbortException.html the build is not failed when this exception is caught. It fails when it's not caught!
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use-case. A 2017 ticket JENKINS-44930 is stuck in a state of opinionated ping-pong whilst making no progress towards a solution - please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT with 2>&1 - but it's then up to you to parse it out of the main output, and you won't get the output if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (the name of which you prepare earlier) with 2>filename (but remember to clean up the file afterwards) - i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
    def dir1 = sh(script:"ls -la dir1 2>${stderrfile}", returnStdout:true).trim()
} catch (Exception ex) {
    def errmsg = readFile(stderrfile)
    println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way, set returnStatus=true instead, dispense with the exception handler and always capture output to a file, ie:
def outfile = 'stdout.out'
def status = sh(script:"ls -la dir1 >${outfile} 2>&1", returnStatus:true)
def output = readFile(outfile).trim()
if (status == 0) {
    // output is directory listing from stdout
} else {
    // output is error message from stderr
}
Caveat: the above code is Unix/Linux-specific - Windows requires completely different shell commands.
This is a sample case, which will make sense, I believe!
node('master') {
    stage('stage1') {
        def commit = sh (returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
        echo "${commit[-1]} "
    }
}
For those who need to use the output in subsequent shell commands, rather than groovy, something like this example could be done:
stage('Show Files') {
    environment {
        MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
    }
    steps {
        sh '''
        echo "$MY_FILES"
        '''
    }
}
I found the examples on code maven to be quite useful.
All the above methods will work, but to use the variable as an environment variable inside your code you need to export it first.
script {
    sh "'shell command here' > command"
    command_var = readFile('command').trim()
    // export in a separate sh step does not persist; assign to env so later steps see it
    env.command_var = command_var
}
Replace the shell command with the command of your choice. Now, if you are running Python code in a later step, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
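For example, a rough sketch of what that later Python step could look like (the variable name simply follows the example above):
import os

# Read the environment variable set by the earlier pipeline step.
command_output = os.getenv("command_var", "")
print("output of the earlier shell command:", command_output)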
How to read a shell variable in Groovy / how to assign a shell return value to a Groovy variable.
Requirement: open a text file, read the lines using shell, store the values in Groovy, and get the parameters for each line.
Here, ',' is the delimiter.
Ex: releaseModule.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here we want to get the module name (2nd parameter, configurable-wf-report), the build number (3rd parameter, 94), and the commit id (4th parameter, 23crb1):
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split( '\n' ).findAll { !it.startsWith( ',' ) }
def buildid
def Modname
lines.each {
    List det1 = it.split(',')
    buildid = det1[1].trim()
    Modname = det1[0].trim()
    tag = det1[2].trim()

    echo Modname
    echo buildid
    echo tag
}
If you don't have a single sh command but a block of sh commands, returnStdout won't work.
I had a similar issue and applied something that is not a clean way of doing this, but it eventually worked and served the purpose.
Solution:
In the shell block, echo the value and write it to some file.
Outside the shell block and inside the script block, read this file, trim it, and assign it to any local/params/environment variable.
Example:
steps {
    script {
        sh '''
        # Using '>' so a new file is created every time with the newest value of PATH
        echo $PATH > path.txt
        '''
        path = readFile(file: 'path.txt')
        path = path.trim() // local Groovy variable assignment

        // One can assign these values to env and params as below -
        env.PATH = path    // if you want to assign it to an env var
        params.PATH = path // if you want to assign it to a params var
    }
}
The easiest way is to use this:
my_var=`echo 2`
echo $my_var
Output:
2
Note that this is not a simple single quote but a backquote ( ` ).

Access a string variable from bash in a Jenkinsfile Groovy script

I'm building several Android apps in a Docker image using Gradle and a bash script. The script is triggered by Jenkins, which runs the Docker image.
In the bash script I gather information about the success of the builds. I want to pass that information to the Groovy script of the Jenkinsfile.
I tried to create a txt file in the Docker container, but the Groovy script in the Jenkinsfile cannot find that file.
This is the Groovy script of my Jenkinsfile:
script {
    try {
        sh script:'''
            #!/bin/bash
            ./jenkins.sh
        '''
    } catch(e) {
        currentBuild.result = "FAILURE"
    } finally {
        String buildResults = null
        try {
            def pathToBuildResults = "[...]/buildResults.txt"
            buildResults = readFile "${pathToBuildResults}"
        } catch(e) {
            buildResults = "error receiving build results. Error: " + e.toString()
        }
    }
}
In my jenkins.sh bash script I do the following:
[...]
buildResults+=" $appName: Build Failed!"   # this is done for several apps
echo "$buildResults" | cat > $pathToBuildResults   # this works; I checked that the file is created
[...]
The file is created, but Groovy cannot find it. I think the reason is that the Jenkins script does not run inside the Docker container.
How can I access the string buildResults of the bash script in my Groovy Jenkins script?
One option you have, to avoid the need to read the results file, is to modify your jenkins.sh script to print the results to standard output instead of writing them to a file, and then use the sh step to capture that output and use it instead of the file.
Something like:
script {
    try {
        String buildResults = sh returnStdout: true, script:'''
            #!/bin/bash
            ./jenkins.sh
        '''
        // You now have the output of jenkins.sh inside the buildResults parameter
    } catch(e) {
        currentBuild.result = "FAILURE"
    }
}
This way you are avoiding the need to handle the output files and directly get the results you need, which you can then parse and use however you need.

Capture output of a shell script inside a Docker container to a file using the Docker SDK for Python

I have a shell script inside my Docker container called test.sh. I would like to pipe the output of this script to a file. I can do this using the docker exec command or by logging into the shell (using docker run -it) and running ./test.sh > test.txt. However, I would like to know how the same result can be achieved using the Docker SDK for Python. This is my code so far:
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

container = client.create_container(
    'ubuntu:16.04', '/bin/bash', stdin_open=True, tty=True, working_dir='/home/psr',
    volumes=['/home/psr/data'],
    host_config=client.create_host_config(binds={
        '/home/xxxx/data_generator/data/': {
            'bind': '/home/psr/data',
            'mode': 'rw',
        },
    })
)

client.start(container=container.get('Id'))

cmds = './test.sh > test.txt'
exe = client.exec_create(container=container.get('Id'), cmd=cmds, stdout=True)
exe_start = client.exec_start(exec_id=exe, stream=True)

for val in exe_start:
    print(val)
I am using the low-level API of the Docker SDK. In case you know how to achieve the same result as above using the high-level API, please let me know.
In case anyone else had the same problem, here is how I solved it. Please let me know in case you have a better solution.
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

container = client.create_container(
    'ubuntu:16.04', '/bin/bash', stdin_open=True, tty=True,
    working_dir='/home/psr',
    volumes=['/home/psr/data'],
    host_config=client.create_host_config(binds={
        '/home/xxxx/data_generator/data/': {
            'bind': '/home/psr/data',
            'mode': 'rw',
        },
    })
)

client.start(container=container.get('Id'))

cmds = './test.sh'
exe = client.exec_create(container=container.get('Id'), cmd=cmds, stdout=True)
exe_start = client.exec_start(exec_id=exe, stream=True)

with open('path_to_host_directory/test.txt', 'wb') as f:  # wb: for binary output
    for val in exe_start:
        f.write(val)

How to detect if a Node.js script is running through a shell pipe?

My question is similar to this one: How to detect if my shell script is running through a pipe?. The difference is that the shell script I’m working on is written in Node.js.
Let’s say I enter:
echo "foo bar" | ./test.js
Then how can I get the value "foo bar" in test.js?
I’ve read Unix and Node: Pipes and Streams but that only seems to offer an asynchronous solution (unless I’m mistaken). I’m looking for a synchronous solution. Also, with this technique, it doesn’t seem very straightforward to detect if the script is being piped or not.
TL;DR My question is two-fold:
How to detect if a Node.js script is running through a shell pipe, e.g. echo "foo bar" | ./test.js?
If so, how to read out the piped value in Node.js?
I just found a simpler answer to part of my question.
To quickly and synchronously detect if piped content is being passed to the current script in Node.js, use the process.stdin.isTTY boolean:
$ node -p -e 'process.stdin.isTTY'
true
$ echo 'foo' | node -p -e 'process.stdin.isTTY'
undefined
So, in a script, you could do something like this:
if (process.stdin.isTTY) {
// handle shell arguments
} else {
// handle piped content (see Jerome’s answer)
}
The reason I didn’t find this before is because I was looking at the documentation for process, where isTTY is not mentioned at all. Instead, it’s mentioned in the TTY documentation.
Pipes are made to handle small inputs like "foo bar" but also huge files.
The stream API makes sure that you can start handling data without waiting for the huge file to be totally piped through (this is better for speed & memory). The way it does this is by giving you chunks of data.
There is no synchronous API for pipes. If you really want to have the whole piped input in your hands before doing something, you can use the following.
note: use only node >= 0.10.0 because the example uses the stream2 API
var data = '';

function withPipe(data) {
    console.log('content was piped');
    console.log(data.trim());
}

function withoutPipe() {
    console.log('no content was piped');
}

var self = process.stdin;
self.on('readable', function() {
    var chunk = this.read();
    if (chunk === null) {
        withoutPipe();
    } else {
        data += chunk;
    }
});
self.on('end', function() {
    withPipe(data);
});
test with
echo "foo bar" | node test.js
and
node test.js
It turns out that process.stdin.isTTY is not reliable because you can spawn a child process that is not a TTY.
I found a better solution here using file descriptors.
You can test to see if your program with piped in or out with these functions:
const fs = require('fs')

// fstat on file descriptor 0 (stdin) and 1 (stdout) tells us whether they are FIFOs (pipes)
function pipedIn(cb) {
    fs.fstat(0, function(err, stats) {
        if (err) {
            cb(err)
        } else {
            cb(null, stats.isFIFO())
        }
    })
}

function pipedOut(cb) {
    fs.fstat(1, function(err, stats) {
        if (err) {
            cb(err)
        } else {
            cb(null, stats.isFIFO())
        }
    })
}

pipedIn((err, x) => console.log("in", x))
pipedOut((err, x) => console.log("out", x))
Here are some tests demonstrating that it works.
❯❯❯ node pipes.js
in false
out false
❯❯❯ node pipes.js | cat -
in false
out true
❯❯❯ echo 'hello' | node pipes.js | cat -
in true
out true
❯❯❯ echo 'hello' | node pipes.js
in true
out false
❯❯❯ node -p -e "let x = require('child_process').exec(\"node pipes.js\", (err, res) => console.log(res))"
undefined
in false
out false
❯❯❯ node -p -e "let x = require('child_process').exec(\"echo 'hello' | node pipes.js\", (err, res) => console.log(res))"
undefined
in true
out false
❯❯❯ node -p -e "let x = require('child_process').exec(\"echo 'hello' | node pipes.js | cat -\", (err, res) => console.log(res))"
undefined
in true
out true
❯❯❯ node -p -e "let x = require('child_process').exec(\"node pipes.js | cat -\", (err, res) => console.log(res))"
undefined
in false
out true
If you need to pipe into nodejs using an inline --eval string in bash, cat works too:
$ echo "Hello" | node -e "console.log(process.argv[1]+' pipe');" "$(cat)"
# "Hello pipe"
If you want to detect whether your output is being piped somewhere (rather than whether input is piped in), you need to check stdout, not stdin as suggested elsewhere:
if (process.stdout.isTTY) {
    // not piped
} else {
    // piped
}
