I am writing Java code for EC2 that will consume messages from SQS. My code will run for the hour and then review the queue to determine whether it should shut itself down (because there aren't enough items to warrant processing for another hour) or keep running. For this to work, I need to know the precise timestamp at which billing for the instance began, so I can add one hour to it and shut down a few minutes before the hour is up if necessary.
How can I get the launch timestamp (the time AWS begins billing for the instance hour) of the EC2 instance my Java code is currently running on, preferably using the Java SDK?
If shelling out to uptime or dropping date +%s > /etc/started-at into your boot script is too platform-dependent for your tastes, you could perform the DescribeInstances SOAP call and pluck out the launchTime field.
To discover the running machine's instance id, you can (from within the machine) make an HTTP GET request such as:
curl http://169.254.169.254/latest/meta-data/instance-id
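Putting the two together, here is a minimal sketch using the AWS SDK for Java (v1); it assumes credentials and region are resolved from the instance's environment, and the class name LaunchTime is purely illustrative:

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.DescribeInstancesResult;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.Date;

public class LaunchTime {
    public static void main(String[] args) throws Exception {
        // Read this instance's id from the metadata service.
        URL meta = new URL("http://169.254.169.254/latest/meta-data/instance-id");
        String instanceId;
        try (BufferedReader in = new BufferedReader(new InputStreamReader(meta.openStream()))) {
            instanceId = in.readLine();
        }

        // Ask EC2 for this instance's launchTime.
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        DescribeInstancesResult result = ec2.describeInstances(
                new DescribeInstancesRequest().withInstanceIds(instanceId));
        Date launchTime = result.getReservations().get(0)
                .getInstances().get(0).getLaunchTime();
        System.out.println("Instance launched at: " + launchTime);
    }
}

Note that launchTime reflects the most recent launch, so it also updates if you stop and later start the instance.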
I have set up a private worker pool, and I expected that the queued time for a build would go down. Previously, the queue time (when only one build is queued) was about 1 minute, which I assumed was because I was using shared machines inside GCP to do the build. I therefore expected that a private worker pool would have no queue, since I would be the only one building anything. I was surprised to see it also took about 1 minute. I then thought that perhaps the first build had to spin up a VM and that's why it took so long, so I ran a second build after the first had finished, but that also had a queue time of about 1 minute. I don't understand what is going on; 1 minute is quite a long time.
When you use the Cloud Build shared pool, you use machines provisioned by Google that are already up and running (and paid for by Google). Therefore, when you have a build to run, it is picked up by an active machine in the shared pool and runs there.
With a private pool, it's different. The machines are still managed by Google, but the pool is private and dedicated to you. Therefore, Google won't keep VMs up and running (consuming CPU/memory) when you run nothing on them, because you pay only while a job is running. So Google stops the VMs.
When you run a job on Cloud Build, a VM is started and then your job can start. As with Compute Engine, it takes about 1 minute to provision and boot a VM.
That being said, your requirement could be a nice feature request: keep a number of VMs warm to avoid this on-demand provisioning. Of course, it won't be free, but it will be faster!
You can open a feature request on Google's public issue tracker.
I want to set up Auto Scaling groups where we can launch and terminate instances based on CPU load. But our connections usually stay open for a long time, sometimes more than 8 hours. When I use an NLB, the deregistration delay is only supported up to 3600 seconds, after which the NLB forcefully drops the connection, causing our long-lived connections to fail, and Auto Scaling terminates the instances as well.
How do I make sure that all connections to the target group are fully processed after 8-10 hours, and only then have the NLB deregister the target and Auto Scaling terminate the instance?
I checked ASG lifecycle hooks, and they only keep the instance waiting for up to 2 hours.
Is it possible to deregister the instances in the target group only after all connections are drained, and then terminate the instance using the ASG?
There isn't any good/easy way to do what you want to do. What are the instances doing that can last up to 10 hours?
Depending on your work type, this is the best workaround I can think of, but it would probably involve a bit of rearchitecting.
1) Design your application so that all data is stored off the instance in some sort of data tier (S3, RDS, EFS, etc). When an instance is done doing whatever it's doing, save that info to the data tier. This way a user request can go to any instance and get the same information
2) The ASG decides to scale in
3) You have a lifecycle hook configured and a CloudWatch notification set up to be triggered when an instance enters the terminating:wait state, which notifies the instance
4) The instance periodically sends a heartbeat to the lifecycle hook, which can extend the hook's timeout for up to 2 days (see the sketch after the links below)
5) Whenever the instance finishes what it's doing, it saves the information out to the data tier mentioned in 1), and the client can connect to a new instance to get the information that was being processed on the old one
https://docs.aws.amazon.com/cli/latest/reference/autoscaling/record-lifecycle-action-heartbeat.html
https://docs.aws.amazon.com/cli/latest/reference/autoscaling/complete-lifecycle-action.html
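As a rough illustration of steps 4) and 5), here is a minimal AWS SDK for Java (v1) sketch of those two calls; the group and hook names are placeholder values, and the instance id would come from the instance metadata service:

import com.amazonaws.services.autoscaling.AmazonAutoScaling;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClientBuilder;
import com.amazonaws.services.autoscaling.model.CompleteLifecycleActionRequest;
import com.amazonaws.services.autoscaling.model.RecordLifecycleActionHeartbeatRequest;

public class DrainThenTerminate {
    // Placeholder names: substitute your own ASG and lifecycle hook names.
    private static final String ASG_NAME = "my-asg";
    private static final String HOOK_NAME = "drain-connections-hook";

    private final AmazonAutoScaling asg = AmazonAutoScalingClientBuilder.defaultClient();

    // Step 4: call periodically while work is in flight to extend the
    // terminating:wait timeout (up to the 2-day overall cap).
    public void keepAlive(String instanceId) {
        asg.recordLifecycleActionHeartbeat(new RecordLifecycleActionHeartbeatRequest()
                .withAutoScalingGroupName(ASG_NAME)
                .withLifecycleHookName(HOOK_NAME)
                .withInstanceId(instanceId));
    }

    // Step 5: once all work is saved to the data tier, let the ASG proceed.
    public void finish(String instanceId) {
        asg.completeLifecycleAction(new CompleteLifecycleActionRequest()
                .withAutoScalingGroupName(ASG_NAME)
                .withLifecycleHookName(HOOK_NAME)
                .withInstanceId(instanceId)
                .withLifecycleActionResult("CONTINUE"));
    }
}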
Try using the scaling cooldown period. The default cooldown period is 300 seconds; you can increase that number, which will help delay scale-in.
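For what it's worth, a minimal Java SDK (v1) sketch of raising the default cooldown; the group name and the 600-second value are just example placeholders:

import com.amazonaws.services.autoscaling.AmazonAutoScaling;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClientBuilder;
import com.amazonaws.services.autoscaling.model.UpdateAutoScalingGroupRequest;

public class RaiseCooldown {
    public static void main(String[] args) {
        AmazonAutoScaling asg = AmazonAutoScalingClientBuilder.defaultClient();
        // Raise the default cooldown from the 300s default to 600s (example).
        asg.updateAutoScalingGroup(new UpdateAutoScalingGroupRequest()
                .withAutoScalingGroupName("my-asg")
                .withDefaultCooldown(600));
    }
}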
Ever since virtualizing several physical servers into GCP, I have had an issue where any time the server(s) are rebooted, the time is changed to be several hours ahead (I think it's 4 hours, but it may be 6). My local office is in the CST time zone, and that is what we want the servers to display. In GCP the virtual servers are in the us-central1-a zone. On a virtual server, running the tzutil /g command shows that the server is set to "Central Standard Time". It also shows the Central time zone if I click the clock on the toolbar and choose "change date and time settings".
After the server has been rebooted (and reports the wrong time), I can correct the time by clicking the "Update now" option (toolbar clock > "change date and time settings" > "Internet Time" tab > "Change settings" > "Update now"; this points to the time server time.nist.gov).
This issue only began occurring after migrating into GCP, so I believe it to be a Compute Engine issue and not an OS issue.
Any thoughts on why this might be happening? It occurs on all 4 Windows servers that were migrated into Google Cloud: three are Win2008R2, and one is Win2012R2.
I appreciate any help to get this resolved, as I can't even reboot without connecting to the server afterwards and checking/fixing the time. I do have a startup script that delays and then syncs time after rebooting, but it has not worked 100% of the time, so it is more of a band-aid than a fix.
I do have a startup script that delays and then syncs time after rebooting, but it has not worked 100% of the time, so it is more of a band-aid
Getting this script working is probably the solution here. For what it's worth, you'd need to do the same thing on both Azure and AWS as well, since they also set Windows timezones to UTC by default using the same mechanism.
See AWS docs on the Specialize Phase
See this Stack Overflow question for a similar issue with Windows on Azure
Normally all servers run on UTC time; their clients (applications, browsers, etc.) set their time zones according to where they are, and it's up to them to translate UTC to whichever locale they are in. (Put another way, you wouldn't want a server with a million client connections to have to keep track of each client's time zone in order to work properly.) In your case, the bottom line is that requiring a custom time zone on the server will also require a custom server configuration, and the behavior you're seeing is by design. That's why your best bet is to understand why the startup script isn't working as you expect (a minimal example follows the links below).
For reference, these docs may be helpful:
Google Compute Engine: Providing a startup script for Windows instances
Google Compute Engine: Creating a Windows Image
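For illustration, a minimal sketch of such a startup script, assuming it is attached through the windows-startup-script-cmd metadata key (the time zone name matches the asker's tzutil output):

rem Reassert the desired time zone, then force an immediate resync.
tzutil /s "Central Standard Time"
w32tm /resync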
If you look at the VM instance logs in the GCP Console, you'll see that the VM BIOS reports time in UTC:
2019/10/3 14:9:44 Begin firmware boot time
After a while, the BIOS hands over to the bootloader:
2019/10/3 14:9:45 End firmware boot time
Booting from Hard Disk 0...
The OS boots up. Behind the scenes, the OS time service recognizes the system time zone, then sets up and synchronizes time with the time source. From that point forward, running programs and services report events based on the local system time:
...
2019/10/03 09:10:05 GCEWindowsAgent: GCE Agent Started (version 4.6.0#1)
In the Windows Event Log you should see entries made by the Time-Service:
Log Name: System
Source: Time-Service
Level: Information
The time provider NtpClient is currently receiving valid time data from metadata.google.internal,0x1 (ntp.m|0x1|0.0.0.0:123->169.254.169.254:123).
The time service is now synchronizing the system time with the time source metadata.google.internal,0x1 (ntp.m|0x1|0.0.0.0:123->169.254.169.254:123).
In the command prompt you can verify that the time configuration and state are correct:
C:\Users\user>systeminfo | find /i "Time"
System Boot Time: 10/3/2019, 9:09:49 AM
Time Zone: (UTC-06:00) Central Time (US & Canada)
Hence you don't need to synchronize time manually or with a startup script. The time service will do it for you: it synchronizes the system time right after boot and keeps it in sync afterwards.
All you need is to set the correct time zone and Internet time server in Windows, and then make sure the time server is reachable over the network.
I am given the following problem:
There are two shifts. One shift starts at 12am and the other at 12pm.
At the beginning of each shift, generate some tasks (details not important).
Ordinarily, this is a trivial problem that can be solved with crontab. However, my company is running on Heroku and the Heroku Scheduler has the following interesting properties:
It can only run every 10 minutes, hourly, or daily.
You cannot control when the scheduler will actually run. If your scheduler runs every 10 minutes, all you can expect is that it will run at some point between, e.g., 4:00am and 4:10am.
It is possible that the scheduler encounters an error and crashes. When this happens, the scheduler restarts immediately. For example, if the scheduler crashed at 4:00 while it was running, it might run again at 4:01.
Is it possible to implement a cronjob that:
executes only once after 12am and once after 12pm
without needing a database to track its execution time?
One way I can think of doing this would be to have some cron server (not on Heroku) which runs a script at 12am and 12pm.
The script invoked by the cron could use the Heroku Platform API to spin up a one-off dyno in your Heroku app (using the Dyno Create endpoint).
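For illustration, a minimal Java sketch of that Dyno Create call; the app name and the rake command are hypothetical, and the API token is read from the environment:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OneOffDyno {
    public static void main(String[] args) throws Exception {
        String app = "my-heroku-app"; // hypothetical app name
        String token = System.getenv("HEROKU_API_TOKEN");

        // POST /apps/{app}/dynos creates a one-off dyno (Platform API v3).
        URL url = new URL("https://api.heroku.com/apps/" + app + "/dynos");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Accept", "application/vnd.heroku+json; version=3");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", "Bearer " + token);
        conn.setDoOutput(true);

        // "command" is the process the one-off dyno should run.
        String body = "{\"command\": \"rake generate_shift_tasks\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Heroku responded: " + conn.getResponseCode());
    }
}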
This method satisfies your requirements of executing only once at 12am and at 12pm, WITHOUT using a DB to track execution times.
The drawback of this method is that it is not a "pure Heroku" solution, and requires you to maintain some "external" server to trigger your cron jobs.
If you don't like the idea of maintaining your own cron server for that, you could use a cloud solution to schedule your script. For example, I would imagine you could do this for free using AWS Lambda with Scheduled Events.
In this case, you would schedule your Lambda function to run each day at 12am and 12pm, and your Lambda function would spin up your Heroku one-off dyno.
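If I'm reading the AWS schedule-expression syntax correctly, a single rule with the cron expression cron(0 0,12 * * ? *) would fire at 00:00 and 12:00 UTC each day, covering both shifts (adjust for your local offset from UTC).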
Of course if you would be willing to add some form of DB to your Heroku app, you could easily create a "pure Heroku" solution.
I am currently working on a Ruby/Heroku app that makes ~40 consecutive SOAP calls to a server, uploads a file to an FTP server, then sleeps 15 minutes and begins anew.
Strangely, yesterday everything worked fine (in the evening hours) both locally and on the dyno; now, since this morning, I seldom get past the 10th query - it always stops on
D, [2014-03-20T14:18:49] Debug -- : HTTPI POST request to www.XXXX.de (net_http)
with a Connection timed out.
Locally, via foreman, everything works fine, so I'd like to rule out the possibility that the server simply won't accept 40 queries within about two minutes.
I came to the conclusion that maybe the dyno's IP is being changed at runtime; that would explain the timeout during the SOAP call. Do I have to build a new Savon client for every call?
Heroku dynos are ephemeral application instances. They may come up or go down at any time, be replaced by a new one, or have your application restarted.
So dynos may often change, which results in new IPs for your app servers. However, the IP is very unlikely to change while a dyno is up and running; it will only change when the dyno is replaced by a new one with a different IP.