How to install the AWS CLI for Linux using AWS Systems Manager

Is there a predefined script that can be used to install the AWS CLI using AWS Systems Manager?

The closest thing I can think of is AWS-ConfigureAWSPackage: https://console.aws.amazon.com/systems-manager/documents/AWS-ConfigureAWSPackage/description?region=us-east-1
Alternatively, you could use the standard AWS-RunShellScript document and pass it these three lines to execute:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
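For the Run Command route, the same three lines can be sent to the AWS-RunShellScript document from the AWS CLI. A minimal sketch (the instance ID is a placeholder, and unzip must already be present on the target instance):
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters 'commands=["curl https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip","unzip awscliv2.zip","sudo ./aws/install"]' \
  --region us-east-1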

Related

How do I install redis 5.0.14 using homebrew

I need to install Redis version 5.0.14 on my Mac using brew.
I have tried multiple ways like brew install redis@5.0.14, redis@5.0, redis@50, redis@5, but nothing seems to work!
I was able to find out on https://formulae.brew.sh/ that the options that can be installed using brew are redis, redis@4.0, redis@3.2. But I need to install Redis 5.0.14, or basically anything above 5.0.6, because that is the version we have in production. Can anyone help me out with this?
I have seen a suggestion to check out a specific Homebrew formula version, but that would become too messy if something goes wrong. I would prefer a straightforward way if there is one.
Given that the Redis version you require is not available via homebrew, your question is unanswerable. However, given how good docker is on macOS, I have taken to using that rather than homebrew for lots of version-related problems.
With docker:
I can pull any version I want,
it's all isolated from my core macOS,
just as performant,
readily deletable,
simple to have many versions,
switchable between versions,
repeatable across platforms and
configurable by script.
The official Redis image is on Docker Hub.
So, in concrete terms, you could run Redis 5.0.14 as a daemon like this:
docker run --name some-redis -d redis:5.0.14
and then connect to that same container and run redis-cli inside it like this:
docker exec -it some-redis redis-cli PING
PONG
Or you could run Redis in the container but expose its port 6379 as port 65000 to your regular macOS applications like this:
docker run --name some-redis -p 65000:6379 -d redis:5.0.14
Then it is accessible to your macOS applications, such as redis-cli like this:
redis-cli -p 65000 info | grep redis_version
redis_version:5.0.14
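If you also want to supply your own redis.conf, the official image's documentation shows mounting a config and pointing redis-server at it; a sketch (the host path and port mapping here are just illustrative):
docker run --name some-redis \
  -p 65000:6379 \
  -v "$(pwd)/redis.conf":/usr/local/etc/redis/redis.conf \
  -d redis:5.0.14 redis-server /usr/local/etc/redis/redis.conf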
The version you're looking for is not available on brew unfortunately.
bruno@pop-os ~> brew info --json redis | jq -r '.[].versioned_formulae[]'
redis@4.0
redis@3.2
You could get the source code from here: https://github.com/redis/redis/releases/tag/5.0.14
extract it to some directory and build it with make; then, to run Redis with the default configuration, just type:
% cd src
% ./redis-server
If you want to provide your redis.conf, you have to run it using an additional
parameter (the path of the configuration file):
% cd src
% ./redis-server /path/to/redis.conf
It is possible to alter the Redis configuration by passing parameters directly
as options using the command line. Examples:
% ./redis-server --port 9999 --replicaof 127.0.0.1 6379
% ./redis-server /etc/redis/6379.conf --loglevel debug
All the options in redis.conf are also supported as options using the command
line, with exactly the same name.
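Putting the source route together end to end, a minimal sketch (assuming gcc and make are already installed; the tarball URL is the standard GitHub tag-archive form):
curl -OL https://github.com/redis/redis/archive/refs/tags/5.0.14.tar.gz
tar -xzf 5.0.14.tar.gz
cd redis-5.0.14
make
src/redis-server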
Optionally, you could use Docker: docker run --name some-redis -d redis:5.0.14
brew update
brew install redis
To have launchd start redis now and restart at login:
brew services start redis
To stop it, just run:
brew services stop redis
Test whether the Redis server is running:
redis-cli ping

Datastax Bulk Loader for Apache Cassandra not installing

I have followed the instructions in the documentation: https://docs.datastax.com/en/dsbulk/doc/dsbulk/install/dsbulkInstall.html
However, after doing the following:
curl -OL https://downloads.datastax.com/dsbulk/dsbulk-1.6.0.tar.gz
and
tar -xzvf dsbulk-1.6.0.tar.gz
inside an application directory, followed by the command
dsbulk --version
I get the output
Unable to find java 8 (or later) executable. Check JAVA_HOME and PATH environment variables.
What am I doing wrong here?
I'm using an AWS EC2 t2.medium instance. Do I have to install Java on it in order for dsbulk to work?
Yes, DSBulk doesn't bundle Java, so you need to install Java (8 or later) yourself, via apt or whatever package manager you use.
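For example, on an Ubuntu-based instance a minimal sketch would be (package names vary by distribution; on Amazon Linux 2 you would use yum or amazon-linux-extras instead):
sudo apt-get update
sudo apt-get install -y openjdk-8-jre-headless
java -version
# then re-run dsbulk from the extracted directory, e.g. ./dsbulk-1.6.0/bin/dsbulk --version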

Terraform and Jenkins

I want to automate Terraform with a Jenkins pipeline, but Terraform is installed on a different VM than Jenkins.
I know there is the Terraform plugin, but it seems like Terraform has to be installed on the same VM as Jenkins (or under /var/lib/jenkins/workspace).
Is there any way to get this done?
Please share your suggestions.
Generally, it's a good idea to keep your Jenkins machine as clean as possible, so you should avoid installing extra packages like Terraform on it. A better approach to this problem is to create a Dockerfile with your Terraform binary and all the plugins you need already built in; then all you need to do in your Jenkins pipeline is build and run that Terraform Docker image.
This is an example of such Dockerfile:
FROM hashicorp/terraform:0.11.7

# Tools needed alongside Terraform (bash and the AWS CLI)
RUN apk add --no-cache bash python3 && \
    pip3 install --no-cache-dir awscli

# Directory for pre-downloaded provider plugins
RUN mkdir -p /plugins

# AWS provider: download, verify the checksum, and move it into the plugin dir
ENV AWS_VERSION=1.16.0
ENV AWS_SHA256SUM=1150a4095f18d02258d1d52e176b0d291274dee3b3f5511a9bc265a0ef65a948
RUN wget https://releases.hashicorp.com/terraform-provider-aws/${AWS_VERSION}/terraform-provider-aws_${AWS_VERSION}_linux_amd64.zip && \
    echo "${AWS_SHA256SUM} terraform-provider-aws_${AWS_VERSION}_linux_amd64.zip" | sha256sum -c - && \
    unzip *.zip && \
    rm -f *.zip && \
    mv -v terraform-provider-aws_* /plugins/

# Bake the Terraform configuration into the image
COPY . /app
WORKDIR /app

# Clear the base image's terraform entrypoint so arbitrary commands can be run
ENTRYPOINT []
The Terraform documentation also contains a section on best practices for running Terraform in CI:
https://www.terraform.io/guides/running-terraform-in-automation.html
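With that image in place, the pipeline stage on the Jenkins agent only needs Docker; a sketch of the shell steps it would run (the image tag is illustrative, and AWS credentials are passed through as environment variables from the agent):
docker build -t my-terraform .
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  my-terraform \
  sh -c "terraform init -plugin-dir=/plugins && terraform plan"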
Yes, the fastest way to get this done is to use a master/slave (agent) setup for your Jenkins: add the machine on which your Terraform is running as a slave node.
I have created a Global Shared Library, awesome-jenkins-utils, with which you can use different versions of Terraform simultaneously in the same pipeline. Additionally, you can easily map build parameters to Terraform variables.

Install and Configure Ansible on AWS EC2 Redhat Instance

I have just started learning the Ansible configuration management tool, and I was going through the Linux Academy tutorials to run Ansible commands. Everything was good and easy with the Linux Academy servers, but when I tried to replicate the same on an AWS EC2 instance I was unable to locate /etc/ansible/hosts. I installed Ansible using pip, i.e. "$ sudo pip install ansible". I have tried to resolve the issue but have been unable to find any proper documentation. The links I used to install and configure Ansible are as follows:
http://docs.ansible.com/ansible/intro_installation.html
http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool/
Please guide me on configuring the Ansible hosts path so that I can run Ansible commands and playbooks according to my requirements.
If you are using an Ubuntu EC2 instance, follow this:
http://docs.ansible.com/ansible/intro_installation.html#latest-releases-via-apt-ubuntu
If you are using an Amazon Linux EC2 instance, follow this:
http://docs.ansible.com/ansible/intro_installation.html#latest-release-via-yum
Installing via these package managers will create the /etc/ansible/hosts file for you.
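If you prefer to keep the pip install, you can also just create the default inventory yourself; a minimal sketch (the group and host below are examples, not anything Ansible creates for you):
sudo mkdir -p /etc/ansible
printf '[webservers]\nweb1 ansible_host=10.0.0.11\n' | sudo tee /etc/ansible/hosts
ansible webservers -m ping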
Steps to install Ansible on EC2 instance [RHEL-8]:
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf config-manager --set-enabled codeready-builder-for-rhel-8-rhui-rpms
sudo dnf install -y ansible
ansible --version
Use dnf for faster dependency resolution.

AWS: Can't Bundle AMI

I am trying to create an AMI bundle following these instructions, but I am running into an error. When I get to
ec2-bundle-vol -d /mnt -k /mnt/pk-XXX.pem -c /mnt/cert-YYY.pem -u 123456789012 -r i386 -p rightscale_ami
and run it (using my correct values, of course), I get:
ERROR: You need to be root to run /vol/downloads/ec2-ami-tools-1.3-66634//lib/ec2/amitools/bundlevol.rb
I am not sure what the problem is. I tried changing the permissions around, but to no avail.
I am running Ubuntu 11.04 Server on a large instance, have installed the EC2 AMI and API tools, added them to my PATH and set their respective environment variables, and have run sudo aptitude install ruby. Maybe I need something else for Ruby? Please help! Thanks.
I ended up installing the AMI and API tools from Ubuntu's multiverse repository via apt. When I installed the tools that way, I could correctly use sudo to run them as root, whereas with my original install it looked like the superuser couldn't get access to my environment variables.
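In shell terms that boils down to something like this (the package names ec2-ami-tools and ec2-api-tools are assumptions here; sudo -E is what preserves your environment variables when running the bundling command as root):
sudo apt-get install -y ec2-ami-tools ec2-api-tools
sudo -E ec2-bundle-vol -d /mnt -k /mnt/pk-XXX.pem -c /mnt/cert-YYY.pem -u 123456789012 -r i386 -p rightscale_ami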
