Patching Windows with AWS Systems Manager Patch Manager using a script - Windows

Is it possible to run AWS Patch Manager for Windows using Lambda?
We have 2 servers behind an ELB. We want to perform the following actions:
1. Remove the 1st server from the ELB.
2. Patch this Windows server.
3. Add the server back to the ELB.
4. Repeat the same steps for the other server.
I'm not sure how to perform step 2.
I can see that patching can be scheduled from the AWS console -> Patch Manager, but I can't find a way to trigger a patch baseline against a targeted instance using Lambda.
Thanks for the help!
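For step 2, Patch Manager's own SSM document, AWS-RunPatchBaseline, can be sent to a specific instance via Run Command, and a Lambda function can do that with boto3. A minimal sketch, assuming the instance ID arrives in the event payload and the Lambda role is allowed to call ssm:SendCommand:

```python
import boto3

ssm = boto3.client("ssm")

def lambda_handler(event, context):
    # The caller (e.g. a Step Functions state) passes the instance to patch.
    instance_id = event["instance_id"]
    response = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunPatchBaseline",  # same document Patch Manager runs
        Parameters={"Operation": ["Install"]},  # "Scan" would only report
        Comment="Patch run triggered from Lambda",
    )
    # Return the command ID so the caller can poll the command's status.
    return {"CommandId": response["Command"]["CommandId"]}
```

The caller can then poll get_command_invocation with that command ID (or coordinate everything through a maintenance window, as the comment below suggests) before moving on to step 3.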

I'm trying to do almost exactly the same thing, but I'm having trouble with steps 1 and 3. My idea is to use Maintenance Window tasks to coordinate the steps: the maintenance window would call the step 1 Lambda function, then use Run Command instead of Lambda for step 2, then Lambda again for step 3. Can you share your Lambda function for removing/adding instances to the ELB? Thanks!
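The asker's actual function isn't shown, but here is a minimal sketch of steps 1 and 3 for a Classic Load Balancer, assuming a placeholder name my-elb; for an ALB/NLB you would use the elbv2 client's register_targets/deregister_targets instead:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

def lambda_handler(event, context):
    # event = {"instance_id": "i-0abc...", "action": "remove" or "add"}
    instances = [{"InstanceId": event["instance_id"]}]
    if event["action"] == "remove":
        # Step 1: take the server out of rotation before patching.
        elb.deregister_instances_from_load_balancer(
            LoadBalancerName="my-elb", Instances=instances)
    else:
        # Step 3: put the patched server back into rotation.
        elb.register_instances_with_load_balancer(
            LoadBalancerName="my-elb", Instances=instances)
    return {"action": event["action"], "instance": event["instance_id"]}
```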

AWS: Call jupyter notebook from step function or lambda

I created a Python notebook called 'test.ipynb' in SageMaker that retrieves a csv file from my S3 bucket, manipulates the data, calculates values, creates a new csv file, and saves it back into the S3 bucket. This part works.
I want to trigger it from a step function or from a Lambda function. For step functions, I added a SageMaker step called StartNotebookInstance that successfully starts the notebook instance, but the next step is to run the notebook 'test.ipynb', and I do not see anything in that step that allows me to specify the notebook name. I also do not see an equivalent to 'RunNotebook'. Has anyone successfully called a notebook from a step function? If so, how did you do it?
If it is not possible, perhaps I can create a Lambda function to call 'test.ipynb'. Is anyone familiar with the code to do so, or can someone point me in the right direction? I found this video, but it uses API Gateway, which I'm not sure I need. I also checked the AWS Lambda and Step Functions documentation but did not find any solution. I also tried to use AWS Data Pipeline, but that API is blocked for security reasons.
I am also wondering if there is a more practical/efficient way to run a Python notebook, because I did not find any solutions; maybe that is because it is not a recommended approach.
Thank you in advance for your assistance
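For reference, the SageMaker API has no RunNotebook action, so the notebook has to be executed by something on the instance itself. One common workaround, sketched here under the assumption that an "on start" lifecycle configuration (or cron job) on the instance runs `jupyter nbconvert --to notebook --execute test.ipynb` and stops the instance afterwards, is to have the Lambda merely start the stopped instance:

```python
import boto3

sagemaker = boto3.client("sagemaker")

def lambda_handler(event, context):
    # Starting the instance fires its "on start" lifecycle configuration,
    # which is assumed to execute test.ipynb headlessly and then stop the
    # instance again. The instance name is a placeholder.
    sagemaker.start_notebook_instance(NotebookInstanceName="test-instance")
    return {"status": "starting"}
```

An alternative that avoids keeping a notebook instance around at all is to run the notebook with papermill inside a SageMaker Processing job, which both Lambda and Step Functions can start directly.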

Running bash script on GCP VM instance programmatically

I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Golang SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not provide functionality for running a bash script on a specific instance (unlike the Azure Cloud SDK, for example).
Options I've found:
1. The Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
2. Add an instance-level public SSH key, establish an SSH connection, and run the script using a Go SSH client.
Problems:
1. Obviously, a startup script requires an instance reboot, which is not possible in my use case.
2. SSH might also be problematic if the instance is not running an SSH daemon or the SSH port is not open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be set to no), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorized to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login). I can only use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instance to additional risks.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from outside. The best solution is to deploy a web server (a REST API) on the VM that calls the bash script, and to expose it (externally or internally).
But you can also cheat: create a daemon on your VM, launched by a startup script, that watches a custom metadata key, checking it every second, say.
When the metadata is updated, the daemon can perform actions. For example, the metadata could contain the script to run along with its parameters; at the end of the run, the daemon cleans the metadata.
So now, to run your bash script, you call the setMetadata API. It's not out of the box, but it gets you something close to what you expected.
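A minimal sketch of that metadata-watching daemon, in Python for illustration (the asker's app is in Go, but the pattern is identical). "run-script" is a hypothetical custom metadata key; instead of polling every second, the sketch uses the metadata server's wait_for_change hanging GET:

```python
import subprocess
import requests

# Hypothetical custom metadata key watched by the daemon.
METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/attributes/run-script")
HEADERS = {"Metadata-Flavor": "Google"}

def watch():
    while True:
        # Hanging GET: the metadata server answers only when the value
        # changes (or after timeout_sec), so no tight polling loop is needed.
        resp = requests.get(
            METADATA_URL,
            params={"wait_for_change": "true", "timeout_sec": "60"},
            headers=HEADERS,
            timeout=70,
        )
        script = resp.text.strip()
        if resp.status_code == 200 and script:
            # Run whatever the metadata now contains; clearing the key again
            # (via the setMetadata API) is left to the caller or the daemon.
            subprocess.run(["bash", "-c", script], check=False)

if __name__ == "__main__":
    watch()
```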
Think of GCP as providing the virtual machine infrastructure: compute, memory, disk, and networking. What runs when the machine boots is between you and the machine image. I hear you saying that you want to run a bash script within the VM. That is outside the governance of GCP, which only affects the operation and existence of the environment. If you want to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine instance. I'm going to say that there is no such API.

How to verify that an AWS Lambda function is running on a Raspberry Pi 3 for Greengrass?

I am following the official AWS docs for the AWS IoT Greengrass setup on a Raspberry Pi 3. I have already completed
Module 1: Environment Setup for Greengrass
Module 2: Installing the AWS IoT Greengrass Core Software
When it comes to
Module 3 (Part 1): Lambda Functions on AWS IoT Greengrass
, I got stuck at "Verify the Lambda Function Is Running on the Core Device",
because I can't see "Hello world! Sent from Greengrass Core running on platform: Linux - 4.19.86-v7+-armv7l-with-debian9.0" in the MQTT client dashboard when subscribing to the topic "hello/world".
I have already completed the deployment for my Greengrass group successfully and set up the subscriptions and Lambda functions as explained in the AWS docs. I have also started the daemon on the Raspberry Pi 3 with the command
sudo ./greengrassd start
at the path
/greengrass/ggc/core
I have also checked the GGConnManager.log file at
/greengrass/ggc/var/log/system
whose last log line is
[INFO]-MQTT server started.
But I still don't see the expected result in the MQTT client dashboard.
Am I missing something? How should I publish or subscribe to the topic for this task?
Or should I try another method to verify this AWS Lambda function? Please help.
If you don't have a user directory under the log directory, that means your user Lambda function never executed. You probably need to set the function to be a pinned (long-lived) Lambda; see https://docs.aws.amazon.com/greengrass/latest/developerguide/config-lambda.html, section 7, for how to set that.
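In the console this is the "Make this function long-lived" lifecycle option; if you define the group through the API instead, the same switch is the Pinned flag on the function configuration. A sketch with a placeholder Lambda ARN:

```python
import boto3

gg = boto3.client("greengrass")

# Placeholder ARN; it must point at a published Lambda version or alias.
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:HelloWorld:1"

gg.create_function_definition(
    Name="hello-world-functions",
    InitialVersion={
        "Functions": [{
            "Id": "hello-world",
            "FunctionArn": FUNCTION_ARN,
            "FunctionConfiguration": {
                "Pinned": True,       # long-lived: starts with the core
                "Timeout": 25,        # seconds
                "MemorySize": 16384,  # KB, per the Greengrass API
            },
        }],
    },
)
```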
Here are a few things to try out.
Go to AWS Console -> GGGroup -> -> Settings -> Logs (make sure you select Local Logs for User Lambdas).
If you have done the rest correctly, you should see Lambda logs under /greengrass/ggc/var/log/user///*.log
For the sake of testing, you may want to add some console logs to your Lambdas (on module load, not on handler invocation), as sketched after this answer.
cheers,
ram
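A minimal sketch of that module-load logging, assuming the Module 3 hello-world function and the Greengrass Core SDK that the tutorial packages with it:

```python
import logging
import greengrasssdk

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Runs at module load: if this line never shows up under
# /greengrass/ggc/var/log/user/..., the function never started at all.
logger.info("hello-world lambda module loaded")

client = greengrasssdk.client("iot-data")
client.publish(topic="hello/world", payload="Hello from module load")

def function_handler(event, context):
    # Unused for a pinned function in this test; publishing happens above.
    return
```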

EC2 init.d script - what's the best practice

I'm creating an init.d script that will run a couple of tasks when the instance starts up:
1. Create a new volume containing our code repository and mount it, if it doesn't already exist.
2. Tag the instance.
Completing these tasks is crucial for our site (i.e. without the code repository mounted, the site won't work). How can I make sure that the server doesn't end up being publicly visible before they finish? Should I start my init.d script by de-registering the instance from the ELB (I'm not even sure it will be registered at that point), and then register it again once all the tasks have finished successfully?
What is the best practice?
Thanks!
You should have a health check on your ELB, so your server shouldn't receive traffic unless it reports as healthy. And it shouldn't report healthy if the boot script errors out.
(Also, you should look into using cloud-init. That way you can change the boot script without building a new AMI.)
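A minimal sketch of such a health check on a Classic Load Balancer, assuming a placeholder ELB name and a hypothetical /healthz endpoint that the web server only serves once the repository volume is mounted:

```python
import boto3

elb = boto3.client("elb")

# Until an instance passes this check, the ELB keeps it OutOfService,
# so a half-booted server never receives public traffic.
elb.configure_health_check(
    LoadBalancerName="my-elb",
    HealthCheck={
        "Target": "HTTP:80/healthz",  # hypothetical health endpoint
        "Interval": 30,               # seconds between checks
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 3,
    },
)
```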
I suggest you use CloudFormation instead. You can bring up your full stack by describing it in a JSON template.
For example, you can create an Auto Scaling group whose instances carry unique tags and have an extra volume attached (which presumably holds your code).
Here's a sample JSON template attaching an EBS volume to an instance:
https://s3.amazonaws.com/cloudformation-templates-us-east-1/EC2WithEBSSample.template
And here many other JSON templates that you can use for your guidance and deploy your specific Stack and Application.
http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
Of course, you can accomplish the same thing with an init.d script or the rc.local file inside your instance, but I believe CloudFormation is a cleaner solution because it works from the outside (not inside your instance).
You could also write your own script that brings up your stack from the outside, but why reinvent the wheel?
Hope this helps.

How can I launch 10 instances and tag them at once?

I want a single script that can launch and tag my instances, which I can then configure with Chef.
Say my service requires 10 instances: I want to be able to launch 10 instances, then tag each according to its role (web, db, app server).
Once I've done that, I can use Chef to connect to each one and configure it how I want.
But I'm confused. I know I can launch instances, but how do you wait for them to come online? Do you have to continuously loop on some sort of timer? That seems like a very hacky way to do it!
If you're going to do everything from the outside, you do just have to poll to wait for the instance to be ready (which doesn't necessarily mean it's ready to use - actual startup completes a little later).
You can also pass user data when you start an instance. Most AMIs support cloud-init and will interpret the user data as a shell script if it is in the right format. That shell script could run Chef or perform other configuration tasks.
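A minimal sketch of the launch-tag-wait flow with boto3, assuming a placeholder AMI ID; the waiter does the polling the question calls hacky, with retry and backoff handled by the SDK:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch all 10 at once, tagging at creation time so no untagged
# instance ever exists.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=10,
    MaxCount=10,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Role", "Value": "web"}],
    }],
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

# Block until the instances pass EC2 status checks (polls under the hood).
ec2.get_waiter("instance_status_ok").wait(InstanceIds=instance_ids)
print("ready:", instance_ids)
```

Once the waiter returns, Chef can discover and connect to the instances by their Role tag.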
