How to programmatically create an AWS EC2 instance with attached EBS storage?

In the past, I have created an instance with attached EBS storage through the AWS web console. At the "Step 4: Add Storage" step, I would add EBS storage as device /dev/sdf, with Standard as the volume type and no snapshot. Once the instance launched, I would issue the following set of commands to mount the extra drive as a separate directory and make it accessible to everybody:
sudo mkfs.ext4 /dev/xvdf
sudo mkdir /home/foo/extra_storage_directory
sudo mount -t ext4 /dev/xvdf /home/foo/extra_storage_directory
cd /home/foo
sudo chmod a+w extra_storage_directory
I was given a piece of Python code that programmatically creates instances without any extra storage. It calls boto.ec2.connection.run_instances. I need to modify this code to create instances with extra storage, essentially emulating the manual steps I used in the console, so that the above sudo commands work after I launch the new instance.
Which boto function(s) do I need to use, and how do I add the storage?
UPDATE: I did some digging and wrote some code that I thought was supposed to do what I wanted. However, the behavior is a bit strange. Here's what I have:
res = state.connection.run_instances(state.ami, key_name=state.key, instance_type=instance_type, security_groups=sg)
inst = res.instances[0]
pmt = inst.placement
time.sleep(60)
try:
    vol = state.connection.create_volume(GB, pmt)
    tsleep = 60
    time.sleep(tsleep)
    while True:
        vstate = vol.status
        if not vstate == 'available':
            print "volume state is %s, trying again after %d secs" % (vstate, tsleep)
            time.sleep(tsleep)
        else:
            break
    print "Attaching vol %s to inst %s" % (str(vol.id), str(inst.id))
    state.connection.attach_volume(vol.id, inst.id, "/dev/sdf")
    print "attach_volume OK"
except Exception as e:
    print "Exception: %s" % str(e)
The call to run_instances came from the original code that I need to modify. After the volume gets created, when I look at its status in the AWS console, I see available. However, I get an endless sequence of
volume state is creating, trying again after 60 secs
Why the difference?

As garnaat pointed out, I did have to use vol.update() to update the volume status. So the code below does what I need:
res = state.connection.run_instances(state.ami, key_name=state.key, instance_type=instance_type, security_groups=sg)
inst = res.instances[0]
pmt = inst.placement
time.sleep(60)
try:
    vol = state.connection.create_volume(GB, pmt)
    tsleep = 60
    time.sleep(tsleep)
    while True:
        vol.update()
        vstate = vol.status
        if not vstate == 'available':
            print "volume state is %s, trying again after %d secs" % (vstate, tsleep)
            time.sleep(tsleep)
        else:
            break
    print "Attaching vol %s to inst %s" % (str(vol.id), str(inst.id))
    state.connection.attach_volume(vol.id, inst.id, "/dev/sdf")
    print "attach_volume OK"
except Exception as e:
    print "Exception: %s" % str(e)

I tripped over the same problem, and the answer at How to launch EC2 instance with Boto, specifying size of EBS? had the solution.
Here are the relevant links:
Python Boto documentation - block_device_map
API reference - BlockDeviceMapping.N
Command line reference - -b, --block-device-mapping mapping
CLI reference - --block-device-mappings (list)
Important note: while in the web console the "Delete on Termination" checkbox is checked, in the boto API it's the opposite: delete_on_termination=False is the default!
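For completeness, here is a minimal sketch of the block_device_map approach in boto, assuming an existing EC2 connection conn and illustrative values for the AMI, key, and a 50 GiB volume on /dev/sdf:
from boto.ec2.blockdevicemapping import BlockDeviceType, BlockDeviceMapping

# Describe the extra volume: 50 GiB (illustrative), to appear as /dev/sdf.
dev_sdf = BlockDeviceType()
dev_sdf.size = 50
dev_sdf.delete_on_termination = True  # boto defaults this to False, see the note above!

bdm = BlockDeviceMapping()
bdm['/dev/sdf'] = dev_sdf

# The volume is created in the instance's availability zone and attached
# automatically, so no separate create_volume/attach_volume polling is needed.
res = conn.run_instances(ami, key_name=key, instance_type=instance_type,
                         security_groups=sg, block_device_map=bdm)
After the instance launches, the device shows up as /dev/xvdf (as in the question), and the mkfs/mount commands above apply unchanged.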

Cloud Run Golang container issue/misunderstanding

I'm trying to produce a report of all the objects in all the projects we have in Cloud Storage across our org. I'm using this repo from Google Professional Services, as it does exactly what we want: https://github.com/GoogleCloudPlatform/professional-services/tree/main/tools/gcs2bq
We want to use containers instead of just the Go code in a Cloud Function, mainly for portability.
Locally everything is good and the program behaves as expected, but when I try it in Cloud Run things get tricky. From what I understand, the Go part needs to listen on a port, which I added at the beginning of main so the container can be deployed, which it is:
// Determine port for HTTP service.
port := os.Getenv("PORT")
if port == "" {
    port = "8080"
    log.Printf("defaulting to port %s", port)
}

// Start HTTP server.
log.Printf("listening on port %s", port)
if err := http.ListenAndServe(":"+port, nil); err != nil {
    log.Fatal(err)
}
But as you can see in the repo, the first thing called is run.sh, which sets environment variables and then calls the Go binary. It successfully completes its task, which is gathering the sizes of the different files. But after that, run.sh doesn't "resume" and move on to the part where it uploads the data into a BigQuery table, which works locally.
Here is the part of the run.sh file where I have a problem. Note: I don't get any errors from executing ./gcs2bq. Note 2: every environment variable has a correct value.
./gcs2bq $GCS2BQ_FLAGS || error "Export failed!" 2  # <- doesn't get past this line
gsutil mb -p "${GCS2BQ_PROJECT}" -c standard -l "${GCS2BQ_LOCATION}" -b on "gs://${GCS2BQ_BUCKET}" || echo "Info: Storage bucket already exists: ${GCS2BQ_BUCKET}"
gsutil cp "${GCS2BQ_FILE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed copying ${GCS2BQ_FILE} to gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 3
bq mk --project_id="${GCS2BQ_PROJECT}" --location="${GCS2BQ_LOCATION}" "${GCS2BQ_DATASET}" || echo "Info: BigQuery dataset already exists: ${GCS2BQ_DATASET}"
bq load --project_id="${GCS2BQ_PROJECT}" --location="${GCS2BQ_LOCATION}" --schema bigquery.schema --source_format=AVRO --use_avro_logical_types --replace=true "${GCS2BQ_DATASET}.${GCS2BQ_TABLE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || \
    error "Failed to load gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME} to BigQuery table ${GCS2BQ_DATASET}.${GCS2BQ_TABLE}!" 4
gsutil rm "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed deleting gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 5
rm -f "${GCS2BQ_FILE}"
I'm kind of new to containers and Cloud Run, and even after reading projects and documentation I'm not sure what I'm doing wrong. Is it normal that the .sh is "stuck" when calling the Go binary? I can provide more details/explanation if needed.
Okay, so for anyone who encounters a similar situation, this is how I made it work for me.
The container isn't supposed to stop, so there must be no exit; control just goes back to the main function.
That means that when I called the executable, it just looped and never got past that call to complete the remaining tasks. So the solution here is to "recode" everything past that call from run.sh directly into main.go in Go.
The run.sh then becomes useless, so I used another .go file that listens for HTTP requests and then calls the code that gathers the data and sends it to BigQuery.
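To illustrate the shape of that fix, here is a minimal sketch; gatherData and loadToBigQuery are hypothetical stand-ins for the gcs2bq export logic and for the gsutil/bq steps from run.sh:
package main

import (
    "log"
    "net/http"
    "os"
)

// gatherData stands in for the original ./gcs2bq export step.
func gatherData() error { return nil }

// loadToBigQuery stands in for the gsutil/bq upload steps from run.sh.
func loadToBigQuery() error { return nil }

// handler runs the whole pipeline per request instead of once at startup,
// so the process keeps serving and never needs to exit.
func handler(w http.ResponseWriter, r *http.Request) {
    if err := gatherData(); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    if err := loadToBigQuery(); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Write([]byte("export complete\n"))
}

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":"+port, nil))
}
Each request then triggers one full export, and the process never exits between requests, which is what Cloud Run expects of a service.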

Running awesome-client from a script executing as root

Running Awesome on Debian (11) testing
awesome v4.3 (Too long)
• Compiled against Lua 5.3.3 (running with Lua 5.3)
• D-Bus support: ✔
• execinfo support: ✔
• xcb-randr version: 1.6
• LGI version: 0.9.2
I'm trying to signal to Awesome when systemd triggers suspend. After fiddling with D-Bus directly for a while and getting nowhere, I wrote a couple of functions that somewhat duplicate the functionality of signals.
I tested it by running the following command in a shell, inside of my Awesome session:
$ awesome-client 'require("lib.syskit").signal("awesome-client", "Hello world!")'
This runs just fine. A "Hello world!" notification posts to the desktop as expected. I added the path to my lib.syskit code to the $LUA_PATH in my ~/.xsessionrc. Given the error described below, I doubt this is an issue.
Now for the more difficult part. I put the following in a script located at /lib/systemd/system-sleep/pre-suspend.sh
#!/bin/bash
if [ "${1}" == "pre" ]; then
    ERR=$(export DISPLAY=":0"; sudo -u naddan awesome-client 'require("lib.syskit").signal("awesome-client", "pre-suspend")' 2>&1)
    echo "suspending at `date`, ${ERR}" > /tmp/systemd_suspend_test
elif [ "${1}" == "post" ]; then
    ERR=$(export DISPLAY=":0"; sudo -u naddan awesome-client 'require("lib.syskit").signal("awesome-client", "post-suspend")' 2>&1)
    echo "resuming at `date`, ${ERR}" >> /tmp/systemd_suspend_test
fi
Here's the output written to /tmp/systemd_suspend_test
suspending at Thu 22 Jul 2021 10:58:01 PM MDT, Failed to open connection to "session" message bus: /usr/bin/dbus-launch terminated abnormally without any error message
E: dbus-send failed.
resuming at Thu 22 Jul 2021 10:58:05 PM MDT, Failed to open connection to "session" message bus: /usr/bin/dbus-launch terminated abnormally without any error message
E: dbus-send failed.
Given that I'm already telling it the $DISPLAY that Awesome is running under (this is a laptop), and that I'm running awesome-client as my user, not root, what else am I missing that's keeping this from working?
Is there a better way that I could achieve telling Awesome when the system suspends?
awesome-client is a shell script. It is a thin wrapper around dbus-send. Thus, since you write "After fiddling with D-Bus directly for a while and getting nowhere", I guess the same reasoning applies.
Given that I'm already telling it the $DISPLAY that Awesome is running under (this is a laptop), and that I'm running awesome-client as my user, not root, what else am I missing that's keeping this from working?
You are missing the address of the dbus session bus. For me, it is:
$ env | grep DBUS
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
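In the pre-suspend.sh script you could pass that address along explicitly. A minimal sketch of the "pre" branch, assuming naddan's uid is 1000 (and using env, since sudo resets the environment):
ERR=$(sudo -u naddan env DISPLAY=":0" \
    DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus" \
    awesome-client 'require("lib.syskit").signal("awesome-client", "pre-suspend")' 2>&1)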
Is there a better way that I could achieve telling Awesome when the system suspends?
Instead of sending a message directly to awesome via some script, you could use the existing mechanism for this: D-Bus signals. These are broadcasts that interested parties can listen to.
Google suggests that there is already a PrepareForSleep signal:
https://serverfault.com/questions/573379/system-suspend-dbus-upower-signals-are-not-seen
Based on this, Google then gave me the following AwesomeWM Lua code that listens for logind's PrepareForSleep signal (written by yours truly; thanks Google for finding that!):
https://github.com/awesomeWM/awesome/issues/344#issuecomment-328354719
local lgi = require("lgi")
local Gio = lgi.require("Gio")

local function listen_to_signals()
    local bus = lgi.Gio.bus_get_sync(Gio.BusType.SYSTEM)
    local sender = "org.freedesktop.login1"
    local interface = "org.freedesktop.login1.Manager"
    local object = "/org/freedesktop/login1"
    local member = "PrepareForSleep"
    bus:signal_subscribe(sender, interface, member, object, nil, Gio.DBusSignalFlags.NONE,
        function(bus, sender, object, interface, signal, params)
            -- "signals are sent right before (with the argument True) and
            -- after (with the argument False) the system goes down for
            -- reboot/poweroff, resp. suspend/hibernate."
            if not params[1] then
                -- This code is run before suspend. You can replace the following with something else.
                require("gears.timer").start_new(2, function()
                    mytextclock:force_update()
                end)
            end
        end)
end

listen_to_signals()

How to see print() results in Tarantool Docker container

I am using the tarantool/tarantool:2.6.0 Docker image (the latest at the moment) and writing Lua scripts for the project. I am trying to find out how to see the results of calling the print() function. It's quite difficult to debug my code without print() working.
In the Tarantool console, print() has no effect either.
Using simple print()
The docs say that print() writes to stdout, but I don't see any results when I watch the container's logs with docker logs -f <CONTAINER_NAME>.
I also tried setting the container's log driver to local. Then I got one print in the container's logs, but only once...
The container's /var/log directory is always empty.
Using box.session.push()
Using box.session.push() works fine in the console, but not when I use it in a Lua script:
-- app.lua
function log(s)
    box.session.push(s)
end

-- No effect
log('hello')

function say_something(s)
    log(s)
end

box.schema.func.create('say_something')
box.schema.user.grant('guest', 'execute', 'function', 'say_something')
And then I call say_something() from the Node.js connector like this:
const TarantoolConnection = require('tarantool-driver');
const conn = new TarantoolConnection(connectionData);
const res = await conn.call('say_something', 'hello');
I get an error:
Any suggestions?
Thanks!
I suppose you've missed io.flush() after the print call.
After I added io.flush() after each print call, my messages started being written to the logs (docker logs -f <CONTAINER_NAME>).
Also, I'd recommend using the log module for this purpose. It writes to stderr without buffering.
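For example, a minimal sketch (the message text is illustrative):
-- The log module writes to stderr without buffering, so this shows up
-- immediately in `docker logs -f <CONTAINER_NAME>`.
local log = require('log')
log.info('hello from app.lua')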
Regarding the error in the connector, I think the Node.js connector simply doesn't support pushes.

Does $argv behave the same between CentOS and RHEL systems

I am trying to troubleshoot an old Tcl accounting script called GOTS (Grant Of The System). What it does is create a time-stamped logfile entry for each user login and another for the logout. The problem is that it is not creating the second log entry on logout. I think I have tracked down the area where it goes wrong, and I have attached it here. FYI, the log file exists, and the script does not exit with the error "GOTS was called incorrectly!!". It should be executing the if branch for [string match "$argv" "end_session"].
This software runs properly on RHEL 6.9 but fails as described on CentOS 7. I am thinking that there is a system variable, or a difference in the $argv argument vector between the two systems, that creates this behavior.
Am I correct in suspecting $argv, and if not, does anyone see the true problem?
How do I print or display the $argv values on logout?
# Find out if we're beginning or ending a session
if { [string match "$argv" "end_session"] } {
    if { ![file writable $Log] } {
        onErrorNotify "4 LOG"
    }
    set ifd [open $Log a]
    puts $ifd "[clock format [clock seconds]]\t$Instrument\t$LogName\t$GroupName"
    close $ifd
    unset ifd
    exit 0
} elseif { [string match "$argv" "begin_session"] == 0 } {
    puts stderr "GOTS was called incorrectly!!"
    exit -1
}
The end_session argument is passed in by the /etc/gdm/PostSession/Default file:
#!/bin/sh
### Begin GOTS PostSession
# Do not run GOTS if root is logging out
if test "${USER}" == "root" ; then
    exit 0
fi
/usr/local/lib/GOTS/gots end_session > /var/tmp/gots_postsession.log 2> /var/tmp/gots_postsession.log
exit 0
### End GOTS PostSession
This is the postsession log file:
Application initialization failed: couldn't connect to display ":1"
Error in startup script: invalid command name "option"
while executing
"option add *Font "-adobe-new century schoolbook-medium-r-*-*-*-140-*-*-*-*-*-*""
(file "/usr/local/lib/GOTS/gots" line 26)
After a lot of troubleshooting, we have determined that, for whatever reason, CentOS is not allowing part of the /etc/gdm/PostSession/default file to execute:
fi
/usr/local/lib/GOTS/gots end_session
But it does update the PostSession.log file as it should. Does anyone have any idea what could be interfering with only part of PostSession/default?
Does anyone have any idea what could be interfering with PostSession/default?
Could it be that you are hitting Bug 851769?
That said, am I correct in stating that, as your investigation shows, this is not a Tcl-related issue or question anymore?
So it turns out that our script has certain elements that depend on the X server running at logout to display some of the GUI error messages. This is from:
Gnome Configuration
"When a user terminates their session, GDM will run the PostSession script. Note that the Xserver will have been stopped by the time this script is run, so it should not be accessed.
Note that the PostSession script will be run even when the display fails to respond due to an I/O error or similar. Thus, there is no guarantee that X applications will work during script execution."
We are having to rewrite those error message callouts so they simply write the errors to a file instead of depending on the display. The errors are for things that should be there in the beginning anyway.
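For example, a minimal sketch of such a file-based callout; the proc name matches onErrorNotify from the snippet above, and the log path is illustrative:
proc onErrorNotify {msg} {
    # Append the error to a file instead of touching the already-stopped X display.
    # Recording $::argv here also answers the side question of how to
    # display the $argv values on logout.
    set efd [open /var/tmp/gots_errors.log a]
    puts $efd "[clock format [clock seconds]]\tGOTS error: $msg\targv=$::argv"
    close $efd
}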

Cannot get binary file using Ruby from an FTP server on an Amazon EC2 instance

I have part of the code, which gets a binary file stream from an FTP server.
It works on my Ubuntu machine, but the code cannot get the binary file from the FTP server when I try it on an Amazon EC2 instance.
I tried switching to open-uri instead. It can get the binary file stream on both my local PC and the remote EC2 instance.
I use the default VPC for the EC2 instance.
I already opened ports 20 and 21 on EC2. I used dig ftp.cga.ct.gov and there is an answer on EC2.
If any point is not clear to you guys, please point it out.
Here is my code in the initialize method of one Ruby class:
require 'net/ftp'  # needed at the top of the file

def initialize(session_id)
  @session_id = session_id
  @count = 1
  tries = 10
  begin
    ftp = Net::FTP.new("ftp.cga.ct.gov")
    ftp.read_timeout = 500
    ftp.login
    ftp.chdir('/pub/data/')
    bill_str = ftp.getbinaryfile("bill_info.csv", nil)
    @bill_array = bill_str.delete("\"").split("\r\n")[1..-1]
  rescue Exception => e
    if (tries -= 1) > 0
      sleep 10
      print "re-connect"
      retry
    else
      print "Cannot open FTP\nThe error message is #{e}\n#{e.backtrace.join("\n")}"
    end
  else
    return true
  end
  super
end
I think it is entirely because of Amazon internal subnet bugs. Check my other question and the answer I posted there myself.
