I want to deploy a Lambda function written on a Windows machine to AWS Lambda. Using "Upload as Zip", the archive includes all the node_modules and the package file.
But I get an error:
"errorMessage": "/var/task/node_modules/ibm_db/build/Release/odbc_bindings.node: invalid ELF header"
How can I install a Linux-appropriate package for the DB2 driver?
You can use Docker to run a Linux container with a volume shared between the host and the container, and build your node_modules inside the container.
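For example, a minimal sketch (assuming a Node.js Lambda runtime; the official node image is Debian-based, which usually produces Lambda-compatible native addons, but you can substitute an Amazon Linux based image for an exact match, and adjust the tag to your runtime version):
# Run npm install inside a Linux container; node_modules is written back
# to the project directory on the host through the shared volume.
docker run --rm -v "$PWD":/var/task -w /var/task node:12 npm install --only=prod
Afterwards, zip the project directory and upload that archive to Lambda.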
Installing Docker on Windows can be painful at times; I had the same situation.
Install Ubuntu (or any other distro) on Windows from the Windows app store, install all the dependencies there, and then use the AWS CLI to zip all the modules and upload them to Lambda.
A sample script could look like this:
# Remove the zip file if it already exists
rm index.zip
# Creating zip file
zip -r index.zip *
# Update the Lambda function; the current directory name must match the Lambda function name in AWS
lambdaName=${PWD##*/}
aws lambda update-function-code --function-name $lambdaName --zip-file fileb://index.zip
# Publish a new version and capture its number
version=$(aws lambda publish-version --function-name $lambdaName --description "updated via cli" --query Version --output text)
# Map alias to latest version
aws lambda update-alias --function-name $lambdaName --function-version $version --name SANDBOX
# Create new alias
# aws lambda create-alias --function-name loyalty-gift-card-link-sl --function-version 2 --name SANDBOX2
I had a similar problem, so I launched a t2.micro Amazon Linux instance, installed Docker on it, and created the Lambda package there.
Here are the steps, in case they help you.
Launch a new EC2 instance of Amazon Linux from amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2 (ami-01e24be29428c15b2).
Install Docker:
sudo su
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Log out and log back in to pick up the added group.
cd /home/ec2-user/
mkdir <LambdaProject>
Check out the code from your repo:
git clone <repo>
Build the Docker image, which installs Node.js 6.10 along with the dependencies:
cd /home/ec2-user/
docker build --tag amazonlinux:nodejs .
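The Dockerfile used by this build step is not shown above; a minimal sketch (an assumption, not the author's exact file) that installs Node.js 6.10 via nvm on Amazon Linux could look like this:
# Assumed Dockerfile for the amazonlinux:nodejs build image
FROM amazonlinux
WORKDIR /tmp
# Build tools needed to compile native modules such as sharp
RUN yum -y install gcc-c++ findutils tar gzip
# Install Node.js 6.10 via nvm so it matches the Lambda runtime
RUN touch ~/.bashrc && chmod +x ~/.bashrc && \
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.33.8/install.sh | bash && \
    source ~/.bashrc && nvm install 6.10
WORKDIR /build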
Install the sharp and querystring module dependencies (or whatever you need), and compile the ‘Origin-Response’ function:
docker run --rm --volume ${PWD}/lambda/origin-response-function:/build amazonlinux:nodejs /bin/bash -c "source ~/.bashrc; npm init -f -y; npm install sharp --save; npm install querystring --save; npm install url --save; npm install path --save; npm install --only=prod"
Package the ‘Origin-Response’ function.
mkdir -p dist && cd lambda/origin-response-function && zip -FS -q -r ../../dist/origin-response-function.zip * && cd ../..
Note: package is created as dist/origin-response-function.zip
Create an S3 bucket in the us-east-1 region to hold the deployment files, and upload the zip files created in the steps above. NOTE: You can add triggers only for functions in the US East (N. Virginia) Region.
Bucket:
Copy the Lambda package to the S3 bucket:
aws s3 cp dist/origin-response-function.zip s3://<bucket_name>/
I have a project in Amazon SageMaker. For it, I have to uninstall specific packages and install others in the terminal. But every time I close or stop the instance, I have to go to the terminal and do all the installations again. Why is this happening?
The package with which I am experiencing this trouble is psycopg2:
import psycopg2
This gives me a warning suggesting that I should uninstall it and install psycopg2-binary instead.
So I open the terminal and run:
pip uninstall psycopg2
Then in the notebook, I run:
import psycopg2
And there is no problem, but if I stop and start the instance again, I get the same error and have to go through the whole process again.
Thanks for using SageMaker. Installed packages do not persist when you restart the Notebook Instance. To avoid installing them manually every time, you can create a Lifecycle Configuration that installs your packages and attach it to your Notebook Instance. The script in the Lifecycle Configuration will be run every time you restart your Notebook Instance.
For more information on how to use Lifecycle Config you can check out:
https://aws.amazon.com/blogs/machine-learning/customize-your-amazon-sagemaker-notebook-instances-with-lifecycle-configurations-and-the-option-to-disable-internet-access/
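For example, a minimal "start notebook" script sketch for this case (assuming the packages should go into the default python3 conda environment; adjust the environment and package names to yours):
#!/bin/bash
# Runs on every notebook start; install the packages into the ec2-user conda env.
sudo -u ec2-user -i <<'EOF'
source activate python3
pip uninstall -y psycopg2
pip install psycopg2-binary
source deactivate
EOF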
@anitasp, you have to create a Docker image by doing the following:
Be sure to set up your SageMaker execution role's permissions in AWS IAM (besides S3), including AmazonEC2ContainerServiceFullAccess, AmazonEC2ContainerRegistryFullAccess and AmazonSageMakerFullAccess.
Create and start an instance in SageMaker and open a notebook. Clone the directory structure shown here on your instance: https://github.com/RubensZimbres/Repo-2018/tree/master/AWS%20SageMaker/Jupyter-Folder
Inside Jupyter, run:
! sudo service docker start
! sudo usermod -a -G docker ec2-user
! docker info
! chmod +x decision_trees/train
! chmod +x decision_trees/serve
! aws ecr create-repository --repository-name decision-trees
! aws ecr get-login --no-include-email
Copy and paste the login output into the command line below:
! docker login -u abc -p abc12345 http://abc123
Run
! docker build -t decision-trees .
! docker tag decision-trees your_aws_account_id.dkr.ecr.us-east-1.amazonaws.com/decision-trees:latest
! docker push your_aws_account_id.dkr.ecr.us-east-1.amazonaws.com/decision-trees:latest
! aws ecs register-task-definition --cli-input-json file://decision-trees-task-def.json
And adapt to your needs, according to the algorithm of your choice. You will need the Dockerfile, hyperparameters.json, etc.
The documented project is here: https://github.com/RubensZimbres/Repo-2018/tree/master/AWS%20SageMaker
By default, Python packages installed from a Notebook Instance will not persist to the next Notebook Instance session. One solution to this problem is to:
1) Create (or clone from a current conda env) a new conda environment into /home/ec2-user/SageMaker, which is persisted between sessions. For example:
conda create --prefix /home/ec2-user/SageMaker/envs/custom-environment --clone tensorflow_p36
2) Next, create a new Lifecycle Configuration for “start notebook” with the following contents:
#!/bin/bash
sudo -u ec2-user -i <<'EOF'
ln -s /home/ec2-user/SageMaker/envs/custom-environment /home/ec2-user/anaconda3/envs/custom-environment
EOF
3) Finally, attach the Lifecycle Configuration to your Notebook Instance
Now, when you restart your Notebook Instance, your custom environment will be detected by conda and Jupyter. Any new packages you install into this environment will persist between sessions, because the environment lives in /home/ec2-user/SageMaker and is soft-linked back into conda at startup.
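After a restart you can then install packages into that environment from the terminal, for instance (assuming the environment name used above):
# Installs land under /home/ec2-user/SageMaker via the symlinked environment,
# so they survive stopping and starting the Notebook Instance.
source activate custom-environment
pip install psycopg2-binary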
I'm trying to build a Docker image containing the Oracle DB client and Node.js, but I'm getting the error The command '/bin/sh -c ldconfig' returned a non-zero code: 1 on RUN ldconfig.
I cannot find anything to help me solve this problem, and I've been trying to solve it myself for the last 2 hours. I need help!
Additional info:
Oddly, when I go into the container with docker exec -it container_name sh and then execute ldconfig, it runs fine...
This is the dockerfile:
FROM node:9.11-alpine
WORKDIR /
COPY ./oracle /opt/oracle
RUN apk update && \
apk add --no-cache libaio && \
mkdir /etc/ld.so.conf.d && \
sh -c "echo /opt/oracle/instantclient_12_2 > /etc/ld.so.conf.d/oracle-instantclient.conf" && \
ldconfig
ENV LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH
ENV PATH=/opt/oracle/instantclient_12_2:$PATH
CMD ["tail", "-f", "/dev/null"]
In Alpine, ldconfig requires the configuration directory as an argument.
Try running ldconfig like this:
ldconfig /etc/ld.so.conf.d
Theoretically that should work.
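Applied to the Dockerfile in the question, the RUN instruction would become (untested sketch; everything else stays the same):
RUN apk update && \
    apk add --no-cache libaio && \
    mkdir /etc/ld.so.conf.d && \
    sh -c "echo /opt/oracle/instantclient_12_2 > /etc/ld.so.conf.d/oracle-instantclient.conf" && \
    ldconfig /etc/ld.so.conf.d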
See my blog post series Docker for Oracle Database Applications in Node.js and Python that shows using Instant Client in Oracle Linux containers.
Also see the node-oracledb installation manual section Using node-oracledb in Docker.
The latest sample Oracle Instant Client container Dockerfile automatically pulls the required RPMs - no manual download required. Oracle Instant Client 19 will connect to Oracle DB 11.2 or later.
I want to automate Terraform with a Jenkins pipeline, but my Terraform is installed on a different VM than Jenkins.
I know there is the Terraform plugin, but it seems like Terraform has to be installed on the same VM as Jenkins (or on /var/lib/jenkins/workspace).
Is there any way to get this done?
Please share your suggestions.
Generally, it's a good idea to keep your Jenkins machine as clean as possible, so you should avoid installing extra packages such as Terraform on it. A better approach to this problem is to create a Dockerfile with your Terraform binary and all the plugins you need already built in; then all you need to do in your Jenkins pipeline is build and run your Terraform Docker image.
This is an example of such Dockerfile:
FROM hashicorp/terraform:0.11.7
RUN apk add --no-cache bash python3 && \
pip3 install --no-cache-dir awscli
RUN mkdir -p /plugins
# AWS provider
ENV AWS_VERSION=1.16.0
ENV AWS_SHA256SUM=1150a4095f18d02258d1d52e176b0d291274dee3b3f5511a9bc265a0ef65a948
RUN wget https://releases.hashicorp.com/terraform-provider-aws/${AWS_VERSION}/terraform-provider-aws_${AWS_VERSION}_linux_amd64.zip && \
echo "${AWS_SHA256SUM} terraform-provider-aws_${AWS_VERSION}_linux_amd64.zip" | sha256sum -c - && \
unzip *.zip && \
rm -f *.zip && \
mv -v terraform-provider-aws_* /plugins/
COPY . /app
WORKDIR /app
ENTRYPOINT []
The Terraform documentation also contains a section on best practices running Terraform in CI:
https://www.terraform.io/guides/running-terraform-in-automation.html
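A sketch of how a pipeline step could then invoke the container (assuming the image above was built and tagged terraform-runner; since the Dockerfile clears the ENTRYPOINT, the full terraform command is passed explicitly):
# Build the image once, then run terraform inside it,
# mounting the Jenkins workspace so the configuration is visible.
docker build -t terraform-runner .
docker run --rm -v "$WORKSPACE":/app -w /app terraform-runner terraform init -plugin-dir=/plugins
docker run --rm -v "$WORKSPACE":/app -w /app terraform-runner terraform plan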
Yes, the fastest way to get this done is to use a master/slave setup for your Jenkins. What you need to do is add the machine on which your Terraform is running as a slave.
I have created a Global Shared Library, awesome-jenkins-utils, with which you can use different versions of Terraform simultaneously in the same pipeline. Additionally, you can easily map build parameters to Terraform variables.
I'm trying to update Docker/boot2docker using the boot2docker download command, but upon starting it, it is still running the 1.3.2 client (docker --version):
bash-3.2$ boot2docker download
Latest release for boot2docker/boot2docker is v1.4.0
Downloading boot2docker ISO image...
Success: downloaded https://github.com/boot2docker/boot2docker/releases/download/v1.4.0/boot2docker.iso
Also, on the Docker GitHub OS X installer page, 1.3.2 is the only download option.
Thanks!
You can manually download the latest Docker binary and replace the existing one. Instructions here.
This is my shell script to install and update to the latest Docker on CentOS 6/RHEL:
#!/bin/bash
# YUM install docker with required dependencies
yum -y install docker-io
# Move to a temp working directory
work_dir=$(mktemp -d)
cd "${work_dir}"
trap "rm -rf -- ${work_dir}" EXIT
# WGET latest release of Docker
wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker
chmod +x docker
# Replace Docker with the latest Docker binary
mv docker /usr/bin/docker
# Start Docker service
service docker start
Depending on where your binaries are stored, it might be a different location than /usr/bin.
I'm using the official Elasticsearch Docker image instead of setting up my own Elasticsearch instance. That works great, up to the point where I want to extend it: I wanted to install Marvel into that Elasticsearch instance to get more information.
Now, dockerfile/elasticsearch automatically runs Elasticsearch, and setting the command to /bin/bash doesn't work; neither does attaching to the container, trying to access it over SSH, or installing an SSH daemon with apt-get install -y openssh-server.
In this particular case, I could just go into the container's file system and execute /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest, and everything worked.
But how can I install an additional service that needs to be installed with apt-get when I can't get a terminal inside the running container?
Simply extend it using a Dockerfile that starts with
FROM dockerfile/elasticsearch
and install Marvel or an SSH server or whatever you need. Then end with the correct command to start your services. You can use supervisor to start multiple services; see Run a service automatically in a docker container for more info on that.
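For instance, a minimal sketch (assuming the plugin path mentioned in the question; the exact plugin binary location and install syntax depend on your Elasticsearch version):
FROM dockerfile/elasticsearch
# Install the Marvel plugin at image build time,
# so it is already present when the container starts Elasticsearch.
RUN /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest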
If you don't mind using docker-compose, what I usually do is to add a first section for the base image you plan to reuse, and then use that image as the base in the rest of the services' Dockerfiles, something along the lines of:
---
version: '2'
services:
base:
build: ./images/base
collector:
build: ./images/collector
Then, in images/collector/Dockerfile, and since my project is called webtrack, I'd type
FROM webtrack_base
...
And now it's done!
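One caveat (this relies on docker-compose's default <project>_<service> image naming): the base image has to exist locally before the dependent services are built, so build it first:
# Build the base image so webtrack_base exists locally, then build
# the services whose Dockerfiles start with FROM webtrack_base.
docker-compose build base
docker-compose build collector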
Update August 2016
Having found very little current information on how to do this with the latest versions of Elasticsearch (2.3.5, for example), Kibana (4.5.3), and the Marvel & Sense plugins, I opted to take the steeper path and write my own image.
Please find the source code (Dockerfile) and README here
FROM java:jre-alpine
MAINTAINER arcseldon <arcseldon#gmail.com>
ENV ES_VERSION=2.3.5 \
KIBANA_VERSION=4.5.3
RUN apk add --quiet --no-progress --no-cache nodejs \
&& adduser -D elasticsearch
USER elasticsearch
WORKDIR /home/elasticsearch
RUN wget -q -O - http://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/${ES_VERSION}/elasticsearch-${ES_VERSION}.tar.gz \
| tar -zx \
&& mv elasticsearch-${ES_VERSION} elasticsearch \
&& wget -q -O - http://download.elastic.co/kibana/kibana/kibana-${KIBANA_VERSION}-linux-x64.tar.gz \
| tar -zx \
&& mv kibana-${KIBANA_VERSION}-linux-x64 kibana \
&& rm -f kibana/node/bin/node kibana/node/bin/npm \
&& ln -s $(which node) kibana/node/bin/node \
&& ln -s $(which npm) kibana/node/bin/npm \
&& ./elasticsearch/bin/plugin install license \
&& ./elasticsearch/bin/plugin install marvel-agent \
&& ./kibana/bin/kibana plugin --install elasticsearch/marvel/latest \
&& ./kibana/bin/kibana plugin --install elastic/sense
CMD elasticsearch/bin/elasticsearch --es.logger.level=OFF --network.host=0.0.0.0 & kibana/bin/kibana -Q
EXPOSE 9200 5601
If you just want the pre-built image then please do:
docker pull arcseldon/elasticsearch-kibana-marvel-sense
You can visit the repository on hub.docker.com here
Usage:
docker run -d -p 9200:9200 -p 5601:5601 arcseldon/elasticsearch-kibana-marvel-sense
You can connect to Elasticsearch with http://localhost:9200 and its Kibana front-end with http://localhost:5601.
You can connect to Marvel with http://localhost:5601/app/marvel and Sense with http://localhost:5601/app/sense
Hope this helps others and saves some time!