Background: the instances are managed by Auto Scaling, so the EC2 instance ID cannot be pinned in advance.
From Step Functions, I first execute SendCommand and then GetCommandInvocation.
SendCommand executes a file in EC2.
GetCommandInvocation gets the status of SendCommand.
When accessing EC2 from SendCommand and GetCommandInvocation, I want to specify the instance name instead of the InstanceId.
How should I set this up?
You can use EC2 DescribeInstances in a step before the SendCommand. With the proper input and output setup, you should be able to get the instance ID and pass it to the SendCommand step.
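For illustration, a rough sketch of such a state machine using the AWS SDK service integrations (the state names, Name tag value, document name, and script path are placeholders, and error handling is omitted):

{
  "StartAt": "LookupInstanceId",
  "States": {
    "LookupInstanceId": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:ec2:describeInstances",
      "Parameters": {
        "Filters": [
          { "Name": "tag:Name", "Values": ["my-instance-name"] },
          { "Name": "instance-state-name", "Values": ["running"] }
        ]
      },
      "ResultSelector": {
        "InstanceId.$": "$.Reservations[0].Instances[0].InstanceId"
      },
      "Next": "SendCommand"
    },
    "SendCommand": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:ssm:sendCommand",
      "Parameters": {
        "DocumentName": "AWS-RunShellScript",
        "InstanceIds.$": "States.Array($.InstanceId)",
        "Parameters": { "commands": ["/path/to/your/script.sh"] }
      },
      "End": true
    }
  }
}

A following GetCommandInvocation state can be wired up the same way, reading the CommandId from the SendCommand result (Command.CommandId) together with the resolved instance ID.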
Setting 'common' properties for child tasks is not working
The SCDF version I'm using is 2.9.6.
I want to make a CTR (composed task) A-B-C, where each task does the following:
A : sql select on some source DB
B : process DB data that A got
C : sql insert on some target DB
The simplest way to make this work seems to be to define a shared work directory path, "some_work_directory", and pass it as an application property to A, B, and C. Under {some_work_directory}, I just store each task's result as a file, like select.result, process.result, and insert.result, and read them in sequence. If the file from the preceding task is missing, I can assume something went wrong and make the task exit with 1.
================
I tried with a composed task instance QWER, with two tasks from the same application "global", named A and B. This simple application prints the test.value application property to the console, which defaults to "test" when no other property is given.
If I set test.value in the global tab of the SCDF launch builder, it is interpreted as app.*.test.value in the composed task's log. However, the SCDF logs of the child tasks A and B show that they do not pick up this configuration from the parent; both fail to resolve the input given at launch time.
If I set test.value as a row in the launch builder and pass a value to A and B, as I would for a non-composed task, this fails as well. I know this is not the 'global' behavior I need, but it seems that CTR is not working correctly with the SCDF launch builder.
The only workaround I found is manually setting app.QWER.A.test.value=AAAAA and app.QWER.B.test.value=BBBBB in the launch freetext. This way, the input is converted to app.QWER-A.app.global4.test.value=AAAAA and app.QWER-B.app.global4.test.value=BBBBB, and the values print correctly.
I understand that this way I can set detailed configuration for each child task at launch time. However, if I just want to set some 'global' property that all tasks in one CTR instance share, there seems to be no feasible way.
Am I missing something? Thanks for any information in advance.
CTR will orchestrate the execution of a collection of tasks; there is no implicit data transfer between tasks. If you want the data from A to be the input to B, and the output of B to become the input of C, you can create one Task/Batch application that has readers and writers connected by a processor, or you can create a stream application for B and use a JDBC source and sink for A and C.
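If you go with the single Task/Batch application route, a rough sketch (Spring Batch 4.x style; the data source names, tables, and SQL are hypothetical) could look like this, with A, B, and C becoming the reader, processor, and writer of one chunk-oriented step, so no shared work directory or result files are needed:

import java.util.Map;
import javax.sql.DataSource;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.database.builder.JdbcBatchItemWriterBuilder;
import org.springframework.batch.item.database.builder.JdbcCursorItemReaderBuilder;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.ColumnMapRowMapper;

@Configuration
@EnableBatchProcessing
public class CopyJobConfig {

    @Bean
    public Job copyJob(JobBuilderFactory jobs, Step copyStep) {
        return jobs.get("copyJob").start(copyStep).build();
    }

    @Bean
    public Step copyStep(StepBuilderFactory steps,
                         @Qualifier("sourceDataSource") DataSource sourceDs,
                         @Qualifier("targetDataSource") DataSource targetDs) {
        // A: select from the source DB
        JdbcCursorItemReader<Map<String, Object>> reader =
                new JdbcCursorItemReaderBuilder<Map<String, Object>>()
                        .name("sourceReader")
                        .dataSource(sourceDs)
                        .sql("SELECT id, payload FROM source_table")
                        .rowMapper(new ColumnMapRowMapper())
                        .build();

        // C: insert into the target DB
        JdbcBatchItemWriter<Map<String, Object>> writer =
                new JdbcBatchItemWriterBuilder<Map<String, Object>>()
                        .dataSource(targetDs)
                        .sql("INSERT INTO target_table (id, payload) VALUES (:id, :payload)")
                        .columnMapped()
                        .build();

        return steps.get("copyStep")
                .<Map<String, Object>, Map<String, Object>>chunk(100)
                .reader(reader)
                // B: transform each row here (identity placeholder)
                .processor(item -> item)
                .writer(writer)
                .build();
    }
}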
In my build.gradle script, I have an Exec task to start a multithreaded process, and I would like to limit the number of threads in the process to the value of max-workers, which can be specified on the command line with --max-workers, or in gradle.properties with org.gradle.workers.max, and possibly other ways. How should I get this value to then pass on to my process?
I have already tried reading the org.gradle.workers.max property, but that property does not exist when it is not set explicitly and max-workers is determined by some other means.
You may be able to access this value through the API: project.getGradle().getStartParameter().getMaxWorkerCount().
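For example, a rough sketch in build.gradle (the task and tool names are hypothetical):

// Exec task that forwards Gradle's effective max-workers setting
// (--max-workers, org.gradle.workers.max, or the default) to the external tool.
task runMyTool(type: Exec) {
    def maxThreads = project.gradle.startParameter.maxWorkerCount
    commandLine 'my-multithreaded-tool', "--threads=${maxThreads}"
}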
In Interstage BPM how do I assign a task to the user who started the Process Instance?
I looked at using the Assign Task Action but it doesn't have anything for using the process initiator.
In the Role Actions of the Activity that you want to assign, you need to add two BPM Actions:
1) Workload Balancing Actions : Get Process Initiator -
This gets the name of the initiator. Set the Target UDA to be a STRING UDA like "initiator".
2) Workload Balancing Actions : Assign Task to User -
This assigns the task to a user. Set the selector to "V" and select your "initiator" UDA that holds the name of the Process Instance initiator from the first action.
I have a data pipeline that gets stuck and goes into pending mode every time, "Waiting on dependencies".
Here I am using a "Hive Activity", which needs an input and an output. In my case, all my data is in the Hadoop infrastructure, so I really don't need an S3 input and S3 output. However, there is no way to remove them, as Data Pipeline errors out. Furthermore, the pipe gets stuck at this point despite a precondition that the S3 node "exists". Every time I run this pipeline I have to manually "markfinish" the S3 node, and things work after that.
{
Name:
#S3node1_2014-08-01T13:59:50
[View instance fields]
Description:
Status: WAITING_ON_DEPENDENCIES
Waiting on:
#RunExpertCategories_2014-08-01T13:59:50
}
Any insights would help. AWS Datapipeline documentation does not go into detail.
If you set "stage": "false" on the HiveActivity then the input and output nodes will not be required.
I have a daemon I am trying to start, but I would like to set a few variables in the daemon when starting it. Here is the script I am using to control my daemons, located at RAILSAPP/script/daemon:
#!/usr/bin/env ruby
require 'rubygems'
require 'daemons'

# Resolve the Rails application root and environment file relative to this script.
ENV["APP_ROOT"] ||= File.expand_path("#{File.dirname(__FILE__)}/..")
ENV["RAILS_ENV_PATH"] ||= "#{ENV["APP_ROOT"]}/config/environment.rb"

# ARGV[0] is the daemons command (start/stop/run); ARGV[1] is the daemon script to launch.
script = "#{ENV["APP_ROOT"]}/daemons/#{ARGV[1]}"
Daemons.run(script, dir_mode: :normal, dir: "#{ENV["APP_ROOT"]}/tmp/pids")
When I start this daemon, I would like to pass a variable to it, such as a reference to an ActiveRecord object, so I can base the daemon's initial run on it.
If you want to fetch a specific ActiveRecord object, you can pass either just the id, or the class name plus the id, as an additional parameter on the command line. As you're already using ARGV[1] for the script name, you could pass something like Product_123 as ARGV[2], parse it via split, and do a Product.find(123) to get the actual record.
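A minimal sketch of that parsing, wherever the extra argument ends up being read (Product and the argument position are placeholders; constantize needs ActiveSupport, which the Rails environment already provides):

ref = ARGV[2]                                    # e.g. "Product_123"
klass_name, record_id = ref.split("_", 2)        # => ["Product", "123"]
record = klass_name.constantize.find(record_id)  # => Product.find(123)
# base the daemon's initial run on `record` from here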
Another approach would be to put the object information into a queue like memcached or redis, and have the daemon fetch the information out of the queue. That would keep your daemon startup a bit simpler, and would allow you to queue up multiple records for the daemon to process. (Something just processing a single record would probably be better written as a script anyway.)
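If you go the queue route instead, a rough sketch using the redis gem (the queue name and payload format are hypothetical):

# Producer side (e.g. from a controller or model callback):
require "redis"
Redis.new.rpush("daemon_jobs", "Product_123")

# Daemon side: block until a reference arrives, then resolve it as shown above.
_, ref = Redis.new.blpop("daemon_jobs")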
The other concern I have about your script is using ENV["APP_ROOT"]. Does that really need to go in the environment? What if you have a second daemon? It seems that it would be better as a local variable, and if you need it in the daemon, you can always get it relative to where the daemon's file is located anyway.