How can we calculate the boot time of an instance (i.e. from the moment the instance is started until it has completely booted) using commands?
This information is in /var/log/messages
Check this post out: https://forums.aws.amazon.com/message.jspa?messageID=100535
Good morning everyone. I hope you're all keeping safe and staying at home. So my problem is:
I have a NiFi project.
InvokeHTTP performs the "POST" method, and the GenerateFlowFile processor contains the body of the POST request.
All I need to know is how to make this flow run only one time, i.e. the REST API needs to create one user, and I need the flow to stop once that user is created. It's like running this flow one and only one time!
Is it possible? How can we do it?
Set the processor's Run Schedule to a very large value, e.g. 1111111110 sec.
This will ensure the processor runs only once.
I recently submitted a training job with a command that looked like:
gcloud ai-platform jobs submit training foo --region us-west2 --master-image-uri us.gcr.io/bar:latest -- baz qux
(more on how this command works here: https://cloud.google.com/ml-engine/docs/training-jobs)
There was a bug in my code which caused the job to just keep running rather than terminate. Two weeks and $61 later, I discovered my error and cancelled the job. I want to make sure I don't make that kind of mistake again.
I'm considering using the timeout command within the training container to kill the process if it takes too long (typical runtime is about 2 or 3 hours), but rather than trust the container to kill itself, I would prefer to configure GCP to kill it externally.
Is there a way to achieve this?
As a workaround, you could write a small script that runs your command and then sleeps for the time you want before running a cancel-job command (see the sketch below).
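A rough sketch of that workaround in Python, assuming the gcloud CLI is installed and authenticated; the job name, region, image URI and six-hour timeout below are placeholders you would adjust to your setup:

import subprocess
import time

JOB_NAME = "foo"            # placeholder job name
TIMEOUT_SECONDS = 6 * 3600  # cancel the job after 6 hours

# Submit the training job (same flags as in the question).
subprocess.run([
    "gcloud", "ai-platform", "jobs", "submit", "training", JOB_NAME,
    "--region", "us-west2",
    "--master-image-uri", "us.gcr.io/bar:latest",
], check=True)

# Sleep for the chosen timeout, then cancel the job.
time.sleep(TIMEOUT_SECONDS)

# If the job already finished, the cancel call will fail; don't treat that as an error.
subprocess.run(["gcloud", "ai-platform", "jobs", "cancel", JOB_NAME], check=False)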
As a timeout definition is not available in the AI Platform training service, I took the liberty of opening a Public Issue with a Feature Request to record the lack of this option. You can track the PI progress here.
Besides the script mentioned above, you can also try:
the TimeOut Keras callback, or the timeout= Optuna parameter (depending on which library you actually use)
a cron-triggered Lambda (Cloud Function)
I have an application that requires a long initialization before being completely deployed to a web server (WebSphere 8.5 in our case). This initialization takes several minutes, even up to half an hour, and that is completely normal. I have been using the wsadmin command line tool to upload the EAR file and then issue a start for the application. Since the start time is long, wsadmin receives a read timeout exception and closes before the application initialization completes. If at this moment I issue a wsadmin command to see the status of the application:
wsadmin.sh -host $HOST -port $PORT -user $USER -password $PASS -c '$AdminControl completeObjectName type=Application,name='$APP',*'
I get an answer meaning the application is running (http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=%2Fcom.ibm.websphere.base.doc%2Finfo%2Faes%2Fae%2Ftxml_appstate.html).
This is the same answer I get once the initialization has completed.
So the question is how to determine the exact status of my application.
Thank you in advance.
P.S. I have already seen this post (How to get current application state from wsadmin console for WebSphere 7.0) but I am not sure how exactly I could follow the steps he is mentioning. Also I am running a single node and not a cluster.
P.S. 2: Also, is it possible to increase the timeout for the wsadmin tool in the first place, so as to avoid the read timeout?
I would like to create a new instance based on my stored AMI.
I achieve this with the following code:
RunInstancesRequest rir = new RunInstancesRequest(imageId,1, 1);
// Code for configuring the settings of the new instance
...
RunInstancesResult runResult = ec2.runInstances(rir);
However, I cannot find a way to "block"/wait until the instance is up and running, apart from a Thread.currentThread().sleep(xxxx) call.
On the other hand, StartInstancesResult and TerminateInstancesResult give you a way to access the state of the instances and monitor any changes. But what about the state of a completely new instance?
boto3 has:
instance.wait_until_running()
From the boto3 docs:
Waits until this Instance is running. This method calls EC2.Waiter.instance_running.wait() which polls EC2.Client.describe_instances() every 15 seconds until a successful state is reached. An error is returned after 40 failed checks.
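For illustration, a minimal boto3 sketch (the AMI ID, instance type and region are placeholders):

import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",   # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]

# Blocks until the instance reaches the 'running' state
# (polls describe_instances under the hood, as described above).
instance.wait_until_running()
instance.reload()
print(instance.id, instance.state["Name"])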
From the AWS CLI changelog for v1.6.0:
Add a wait subcommand that allows for a command to block until an AWS resource reaches a given state (issue 992, issue 985)
I don't see this mentioned in the documentation, but the following worked for me:
aws ec2 start-instances --instance-ids "i-XXXXXXXX"
aws ec2 wait instance-running --instance-ids "i-XXXXXXXX"
The wait instance-running line did not finish until the EC2 instance was running.
I don't use Python/boto/botocore but assume it has something similar. Check out waiter.py on Github.
Waiting for the EC2 instance to get ready is a common pattern. In the Python library boto you can also solve this with sleep calls:
import time

reservation = conn.run_instances([Instance configuration here])
instance = reservation.instances[0]
while instance.state != 'running':
    print('...instance is %s' % instance.state)
    time.sleep(10)
    instance.update()
With this mechanism you can poll until your new instance comes up.
Depending on what you are trying to do (and how many servers you plan on starting), instead of polling for the instance start events, you could install on the AMI a simple program/script that runs once when the instance starts and sends out a notification to that effect, e.g. to an AWS SNS topic.
The process that needs to know about new servers starting could then subscribe to this SNS topic, and it would receive a push notification each time a server starts.
Solves the same problem from a different angle; your mileage may vary.
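As an illustrative sketch only: a boot-time script baked into the AMI could look up the instance ID from the metadata service and publish to an SNS topic. The topic ARN and region are placeholders, the instance needs an IAM role that allows sns:Publish, and the metadata call assumes IMDSv1-style requests are allowed:

import urllib.request
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:instance-started"  # placeholder

# Ask the instance metadata service for this instance's ID.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=5
).read().decode()

# Publish a "server started" message to the SNS topic.
sns = boto3.client("sns", region_name="us-east-1")
sns.publish(
    TopicArn=TOPIC_ARN,
    Subject="Instance started",
    Message="Instance %s has booted and is ready." % instance_id,
)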
You can use Boto3's wait_until_running method:
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Instance.wait_until_running
You can use boto3 waiters:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#waiters
For this example: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Waiter.InstanceRunning
Or in Java https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/
I am sure waiters are implemented in all of the AWS SDKs.
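For reference, a client-level waiter in Python looks roughly like this (the instance ID is a placeholder; Delay and MaxAttempts are the documented defaults):

import boto3

client = boto3.client("ec2", region_name="us-east-1")

# Polls describe_instances until the instance is running.
waiter = client.get_waiter("instance_running")
waiter.wait(
    InstanceIds=["i-0123456789abcdef0"],            # placeholder instance ID
    WaiterConfig={"Delay": 15, "MaxAttempts": 40},  # optional; defaults shown
)
print("Instance is running.")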
I want a single script that can launch and tag my instances, which I can then configure with Chef accordingly.
Say my service requires 10 instances; I want to be able to launch 10 instances, then tag them according to their role (web, db, app server).
Then once I do that, I can use Chef to connect to each one and configure it how I want.
But I'm confused: I know I can launch instances, but how do you wait for them to come online? Do you have to continuously loop in some sort of a timer? That seems like a very hacky way to do it!
If you're going to do everything from the outside, you do just have to poll to wait for the instance to be ready (which doesn't necessarily mean it's ready to use - the actual startup completes a little later).
You can also pass user data when you start an instance. Most AMIs support cloud-init and will interpret the data passed as a shell script if it is in the right format. That shell script could run Chef or do other configuration tasks (see the sketch below).
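If you do drive it from the outside, a rough boto3 sketch of the launch-tag-wait flow might look like the following; the AMI ID, instance type, role counts and user-data contents are all placeholders, and Chef would then find the machines by their Role tag:

import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# cloud-init runs this once at first boot; it could bootstrap chef-client instead.
USER_DATA = """#!/bin/bash
echo "instance bootstrapped" > /tmp/bootstrapped
"""

roles = {"web": 4, "app": 4, "db": 2}  # placeholder split of the 10 instances

for role, count in roles.items():
    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",   # placeholder AMI
        InstanceType="t2.micro",
        MinCount=count,
        MaxCount=count,
        UserData=USER_DATA,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": role}],
        }],
    )
    for instance in instances:
        # Wait for each instance instead of hand-rolling a polling loop.
        instance.wait_until_running()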