Run Rake tasks in parallel using different parameters - ruby

I have a scenario where data has to be loaded from different input files, so my current approach is to execute the loader script on 10 different systems using Selenium Grid. Each system will have its own input files, and other information, like the PORT and IP_ADDRESS for the grid, will also be passed in the rake task itself. This information will be saved in an Excel file, and code has to be written to build n rake tasks with different environment variables and then execute them all together.
I'm unable to come up with a way for all the tasks to be created automatically and executed as well.
I know it has to be done using the 'parallel_tests' gem or Rake's multitask feature (a sketch of the latter is below), but I don't know exactly how this can be achieved. Any other approach is also welcome.
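For the multitask route, the idea is to define one rake task per row of the sheet and one multitask that lists them all as prerequisites; multitask runs its prerequisites in parallel threads. Below is a minimal sketch, assuming the Excel rows have already been read into a plain Ruby array — the machines data, the variable names, and loader_script.rb are placeholders for your own setup:

# Rakefile -- minimal sketch of the multitask approach; all names below
# are placeholders for your own data and loader script.
machines = [
  { name: 'node1', ip: '192.168.0.11', port: 5551, input: 'input1.xlsx' },
  { name: 'node2', ip: '192.168.0.12', port: 5552, input: 'input2.xlsx' },
  # ... one row per grid system, e.g. parsed from the sheet with the roo gem
]

loader_tasks = machines.map do |m|
  task_name = "load_#{m[:name]}"
  task task_name do
    # Pass the per-system settings as environment variables to a child
    # process, so parallel tasks cannot clobber each other's ENV.
    env = { 'IP_ADDRESS' => m[:ip], 'PORT' => m[:port].to_s, 'INPUT_FILE' => m[:input] }
    system(env, 'ruby', 'loader_script.rb') or raise "loader failed on #{m[:name]}"
  end
  task_name
end

# Run all loaders together: `rake load_all`
multitask load_all: loader_tasks

Since each task spawns its own child process, ten rows mean ten loader processes running concurrently, each with its own PORT and IP_ADDRESS.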

Related

How does a Gradle task explicitly mark itself as having altered its output, or as up to date, for tasks that depend on it?

I am creating a rather custom task that processes a number of input files and outputs a different number of output files.
I want to check the dates of the input files against the existing output files, and I might also look at the content of the input files, to determine whether the task is up to date or needs to be invoked to become up to date. What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), to give the dependency checker and task executor the right state so that dependent tasks are either forced to build or not, as appropriate?
Also, is there any documentation on standard library utilities for things like checking file dates and getting lists of files, as easily as in Ruby's Rake?
How do I specify the inputs and outputs of the task? Especially as the outputs will not be known until the source is parsed and the output directory is scanned for what exists.
A sample that does this in a larger project, with tasks that depend on it, would be really nice :)
"What properties do I need to set, and where and when, in my task to set the right state for the dependency checker and task executor so as to either force dependents to build or not?"
Ideally this should be done as a custom task type. None of this logic should be in your Gradle build files at all: either put the logic in a dedicated plugin project that gets published somewhere and is then referenced in the project, or put it in buildSrc.
What you are trying to develop is what is known as an incremental task: https://docs.gradle.org/current/userguide/custom_tasks.html#incremental_tasks
These are used heavily throughout Gradle itself, and they are what makes Gradle's incremental builds possible: https://docs.gradle.org/current/userguide/more_about_tasks.html#sec:up_to_date_checks
"How do I specify the inputs and outputs of the task? Especially as the outputs will not be known until the source is parsed and the output directory is scanned for what exists."
Once you have your tasks defined and whatever else you need, in your main Gradle files you would configure them as you would any other plugin or task.
The two links above should be enough to help get you started.
As for a small example: I developed a Gradle plugin that generates files based on some input that is not known until it is configured. The 'custom' task type just extends the provided JavaExec; the custom task is Wsdl2java. Then, based on user configuration, tasks get registered using the input file from the user. Since I reused built-in task types, I know for sure that no extra work will be done, and I can rely on Gradle doing the heavy lifting. There's also a test to ensure that the configuration cache works as expected: ConfigurationCacheFunctionalTests.
As I mentioned earlier, the links above should be enough to get you started.

Use of multiple custom scripts in the metadata parameter of LaunchInstanceDetails in oci.core.model

I understand that LaunchInstanceDetails in oci.core.model has a metadata parameter, where one of the metadata key names that can be used to provide information to Cloud-Init is "user_data", which Cloud-Init can use to run custom scripts when they are provided in base64-encoded format.
In my Python code to create a Windows VM, I have a requirement to run two custom scripts while launching the instance:
A script to log in to the Windows machine via RDP – this is absolute (it needs to be executed every time a new Windows VM is created, without fail). Currently, we include this in the metadata parameter while launching the instance, as below:
instance_metadata['user_data'] = oci.util.file_content_as_launch_instance_user_data(path_init)
A bootstrap script to install Chef during the initialization tasks – this is conditional (it needs to run only if the user wishes to install Chef, which we handle internally by means of a flag in the code). This is yet to be implemented, as we need to identify whether more than one custom script (conditional, in this case) can be included.
Can someone help me understand if and how we can include multiple scripts (keeping in mind the conditional clause) in a single metadata variable? Or can we have multiple metadata entries, or some other parameter in this service, that could be used to run the Chef installation script?
I'd suggest combining these into a single script and using an if statement around the conditional part so that Chef is installed only when required.

Backend Java application testing using JMeter

I have a Java program which works on the backend. It's a kind of batch processing: I will have a text file that contains messages, and the Java program will fetch messages from the text file and load them into a DB or mainframe. Instead of sequential fetching, we need to try parallel fetching. How can I do this through JMeter?
I tried converting the program to a jar file and calling it through the class name.
I also tried pasting the code in and keeping the CSV (the text file converted to .csv) in the argument place.
Both of these give a sampler client exception.
Can you please help me with how to proceed? Is there something we are missing, or another way to do it?
The easiest way to kick off multiple instances of your program is to run it via JMeter's OS Process Sampler, which can run arbitrary commands, print their output, and measure execution time.
If you have your program as an executable jar, you can kick it off with something like java -jar your-program.jar.
See the How to Run External Commands and Programs Locally and Remotely from JMeter article for more information on the approach.

Run scenarios with a time constraint on Jenkins using the Ruby Cucumber framework

I have two scenarios. Scenario 1 should run at 5:00 PM every day, gather DB results, and save them in a YAML file. I have to use these results to run scenario 2, which needs to run the next day at 8 AM. How can I implement this on Jenkins?
Right now I am trying to implement this through a rake task. My logic (sketched below) is: if the YAML file is present, run scenario 2; otherwise run scenario 1.
Is there a better way to do it?
I need these two scenarios in the same Jenkins job.
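One way to keep it in a single job is to give that job two "Build periodically" cron triggers (e.g. 0 17 * * * and 0 8 * * *) and let the rake task dispatch on the file's presence, exactly as described above. A minimal sketch of that dispatch — the file name db_results.yml and the Cucumber tags are placeholder assumptions:

# Rakefile -- sketch of the "YAML present?" dispatch; the file name and
# tags below are placeholders for your own project.
RESULTS_FILE = 'db_results.yml'.freeze

task :nightly do
  if File.exist?(RESULTS_FILE)
    # 8 AM build: scenario 2 consumes yesterday's results, then the file
    # is removed so the next 5 PM build starts the cycle over.
    sh 'cucumber --tags @scenario2'
    rm RESULTS_FILE
  else
    # 5 PM build: scenario 1 queries the DB and writes RESULTS_FILE.
    sh 'cucumber --tags @scenario1'
  end
end

Deleting the file after scenario 2 keeps the job self-resetting, so both daily triggers can point at the same rake nightly step.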

Hudson - how to trigger a build via file using the filename and file contents

Currently I'm working on a continuous integration server solution using Hudson.
Now I'm looking for a build job which will be triggered every time it finds a file in a specific directory.
I've found some plugins which allow Hudson to watch and poll files from a directory (File Found Trigger, FSTrigger, and SCM File Trigger), but none of them lets me get the filename and contents of the found file and use these values during the build execution. (My idea would be to pass these values to a shell script.)
Do you know if this is possible via any other Hudson plugin, or am I maybe missing something?
Thanks,
Davi
Two valid solutions:
As suggested by Christopher, read the values from the file via shell/batch commands at the beginning of your build script. (The downside is that Hudson will not be aware of those values in any way.)
Use the Envfile Plugin to read the content of the file and interpret it as a set of key-value pairs.
Note that if the File Found Trigger "eats" the flag file, you may need to create two files: one to hold the key-value pairs and another to serve as a flag for the File Found Trigger.
