Use variables in AutoSys std_out_file - Windows

I'd like to set up a dynamic job log name for each job run; adding a timestamp to the log name is fine by me. I tried the following:
update_job: ololo_job
std_out_file: >> "%JOBLOG%\ololo_job-%DATE:~0,2%%DATE:~3,2%%DATE:~8,2%-%TIME:~0,2%%TIME:~3,2%%TIME:~6,2%.log"
That doesn't do the trick. AutoSys accepts the syntax, but then adds "/" before each ":", so the JIL ends up looking like:
std_out_file: >> "%JOBLOG%\ololo_job-%DATE/:~0,2%%DATE/:~3,2%%DATE/:~8,2%-%TIME/:~0,2%%TIME/:~3,2%%TIME/:~6,2%.log"
and the job fails with "Error redirecting output".
I've tried using just:
update_job: ololo_job
std_out_file: >> ololo_job-%DATE%.log
Also no luck: the log file kept the literal %DATE% in its name rather than the expanded date. Has anyone dealt with this?

So:
We can do this on Unix/Linux systems.
We can't do it on Windows: we can get the date but not the time, so for jobs that run more than once per day we have to "do it like they do it on the Discovery Channel" and use AutoSys's own variables instead:
%JOBLOG%/%AUTO_JOB_NAME%.%AUTORUNID%.log

%AUTO_JOB_NAME%.%AUTORUN% will have a unique ID generated for each run, so every run writes its own log file.
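Putting that together, a minimal JIL sketch (based on the job from the question; note the answer mentions both %AUTORUNID% and %AUTORUN%, so check which run-ID variable your AutoSys version provides):

update_job: ololo_job
std_out_file: >> "%JOBLOG%\%AUTO_JOB_NAME%.%AUTORUN%.log"

Because these are AutoSys variables rather than Windows %DATE%/%TIME% substring expansions, there are no ":" characters for the JIL parser to escape.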

How to deal with shell commands that never stop

Here is the case:
There is an app called "termux" on Android which gives me a terminal on the device, and one of its add-ons exposes Android APIs like sensors, TTS engines, etc.
I wanted to write a Ruby script using this app, specifically this API, but there is a catch.
The script:
require('json')
JSON.parse(%x'termux-sensor -s "BMI160 Gyro" -n 1')
-s = name (or partial name) of the sensor
-n = number of times the command will run
returns me:
{
  "BMI160 Gyroscope" => {
    "values" => [
      -0.03...,
      0.00...,
      1.54...
    ]
  }
}
I didn't copy and paste the exact values, but that's not the point. The point is that this command takes almost a full second to load, though there is a way to "make it faster":
If I use the "-d" argument instead of "-n", I can specify a delay in milliseconds between readings being sent to STDOUT. It still takes a full second to load, but once loaded, the delay works like a charm.
And since I didn't specify an '-n' count, the command never stops, and there is the problem:
How can I retrieve the data continuously in Ruby?
I thought about using another thread so it won't block my program, but how can I tell Ruby to return the last X lines of STDOUT from a command that hasn't finished and never will, given that %x'command' in Ruby waits for the command to return?
If I understood correctly, you need to connect to the stdout of a long-running process.
See if this works for your scenario, using IO.popen:
# by running this program
# and open another terminal
# and start writing some data into data.txt
# you will see it appearing in this program output
# $ date >> data.txt
io_obj = IO.popen('tail -f ./data.txt')
while !io_obj.eof?
  puts io_obj.readline
end
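Applied to the question's never-ending command, the same pattern would look like this (assuming -d behaves as described, emitting a reading every 100 ms; since the process never exits, the loop simply runs forever, printing each new line as it arrives):

io_obj = IO.popen('termux-sensor -s "BMI160 Gyro" -d 100')
while !io_obj.eof?
  puts io_obj.readline
end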
I found a built-in module that saved me, called PTY. Its PTY.spawn method, plus some thread management, let me keep a variable updated with the command's values each time the command output new bytes.
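For reference, here is a minimal sketch of that approach (the command and sensor name come from the question; the 100 ms -d value, the variable names, and the end-of-document heuristic are illustrative assumptions, based on the pretty-printed one-document-per-reading output shown above):

require 'pty'
require 'json'

latest = nil          # most recent parsed reading
mutex  = Mutex.new

reader = Thread.new do
  # PTY.spawn streams the command's output as it is produced,
  # instead of waiting for the command to exit like %x does.
  PTY.spawn('termux-sensor -s "BMI160 Gyro" -d 100') do |stdout, stdin, pid|
    buffer = ''
    begin
      stdout.each_line do |line|
        buffer << line
        next unless line.start_with?('}')  # top-level close of a pretty-printed document
        if (doc = (JSON.parse(buffer) rescue nil))
          mutex.synchronize { latest = doc }
          buffer = ''
        end
      end
    rescue Errno::EIO
      # the child closed its side of the terminal; treat as end of stream
    end
  end
end

# The main program keeps running and samples the newest reading on demand.
5.times do
  sleep 0.5
  mutex.synchronize { p latest }
end
reader.kill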

Terraform GCP Instance Metadata Startup Script Issue

I've been working with Terraform, v0.15.4, for a few weeks now, and have gotten to grips with most of the lingo. I'm currently trying to create a cluster of RHEL 7 instances dynamically on GCP, and have, for the most part, got it to run okay.
I'm at the point of deploying an instance with certain metadata passed along to it, for use by scripts built into the machine image for configuration afterwards. This metadata is typically just passed via an echo into a text file, which the scripts then pick up as required.
It's... very simple: echo "STUFF" > file. Alas, I am hitting the same issue over and over and it's driving me insane. I've Googled around for ages, but all I can find are examples of exactly what I'm doing; the only difference is that theirs works and mine doesn't. So hopefully I can get some help here.
My 'makes it half-way' code is as follows:
resource "google_compute_instance" "GP_Master_Node" {
...
metadata_startup_script = <<-EOF
echo "hello
you" > /test.txt
echo "help
me" > /test2.txt
EOF
Now the instance does create successfully, but when I look on the instance, I get one file called '/test.txt?' (or if I 'ls' the file, it shows as '/test.txt^M') and no second file. I can run any command instead of echo, and while the first one completes, the second and later ones do not. Why? What on earth is causing that?
I also found the following code, but it doesn't work for me at all, failing with the error 'Blocks of type "metadata" are not expected here.'
resource "google_compute_instance" "GP_Master_Node" {
...
metadata {
startup-script = "echo test > /test.txt"
}
Okaaaaay! Simple answer to a (in hindsight) silly question, sort of. The file had somehow been formatted with DOS line endings, meaning the script required a line continuation character to run correctly (specifically \ at the end of each individual command). Code as follows:
resource "google_compute_instance" "GP_Master_Node" {
...
metadata_startup_script = <<-EOF
echo "hello
you" > /test.txt \
echo "help
me" > /test2.txt \
echo "example1" > /test3.txt \
echo "and so on..." > /final.txt
EOF
However, what also fixed my issue was just 'refreshing' the file (the proper term being: converting the line endings from DOS CRLF to Unix LF). I created a brand-new file using touch, 'more'd the original file contents to the screen, and then copy-pasted them into the new one. On save, it was no longer DOS-formatted, as expected, and when I ran Terraform the code worked as expected without requiring the line continuation characters at the end of commands.
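For anyone hitting the same thing, the conversion can also be done in place rather than by copy-pasting. Either of these one-liners strips the carriage returns (main.tf is a placeholder for whichever file holds the heredoc):

dos2unix main.tf
# or, with GNU sed:
sed -i 's/\r$//' main.tf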
Thank you to the commenters for the help :)

Rake task selectively ignoring new code

In a Rake task I'm writing, some puts statements show changes while others don't. For instance, changing
puts model+" | "+id
into
puts model+" * "+id
doesn't change the output of the script. However, in some places changing
puts "Connecting to "+site
into
puts "Connecting to ----"+site
shows the changes that were made.
In the places where changes to a line don't show in the output, new puts statements added before or after that line also fail to show up when the task is run. Commenting out the surrounding lines that do the actual work causes the script to skip them, just as it should, but changing or adding puts statements there does not change the output.
Removing all other tasks and Emacs backup files from the lib/tasks folder doesn't help. I've been bitten before by a backup copy of a task with the same namespace and task name running instead of the one I was working on.
This is being run with Ruby 2.4.3 on OpenBSD 6.3-stable on an FX-8350. I would post the whole script, but the company I'm working for won't allow it.
How about
puts "#{model} +/*/whatever #{site}"
It shouldn't matter to what sounds like a filesystem update issue (try a reboot), but it's better form to interpolate the variables into the string like that instead of concatenating them with +.
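One concrete reason interpolation is better form (illustrative values; in the original task the variables may already be Strings):

# Interpolation calls to_s on each value; String#+ does not.
model, id = "User", 42
puts "#{model} | #{id}"   # => User | 42
puts model + " | " + id   # => TypeError: no implicit conversion of Integer into String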

Startup script in FreeBSD is not running

I have been trying to run a shell script at boot time on FreeBSD. I have read all the similar questions on Stack Overflow and tried their suggestions, but nothing has worked. Here is the dummy sample script I tried:
#!/bin/sh
. /etc/rc.subr
name="dummy"
start_cmd="${name}_start"
stop_cmd=":"
dummy_start()
{
  echo "Nothing started."
}
load_rc_config $name
run_rc_command "$1"
It is saved with the file name dummy.
Permissions are -r-xr-xr-x.
In the rc.conf file I set dummy_enable="YES".
The problem is, when I rebooted my system to test it, there was no sign the dummy script had run, so the script is not executing. What else do I need to do to get my dummy script to run?
Source: http://www.freebsd.org/doc/en/articles/rc-scripting/article.html#rc-flags
You need to add rcvar="dummy_enable" to your script, at least for FreeBSD 9.1.
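That is, the variable block at the top of your script becomes:

name="dummy"
rcvar="dummy_enable"
start_cmd="${name}_start"
stop_cmd=":"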
Call your script with the rcvar parameter to get the enabled status:
# /etc/rc.d/dummy rcvar
# dummy
#
dummy_enable="YES"
# (default: "")
And finally start it with the start parameter - this won't start the service/script unless dummy_enable is set in /etc/rc.conf (or /etc/rc.conf.local, or /etc/defaults/rc.conf):
# /etc/rc.d/dummy start
Nothing started.
One possible explanation is that rcorder(8) says:
Within each file, a block containing a series of "REQUIRE", "PROVIDE",
"BEFORE" and "KEYWORD" lines must appear.
Though elsewhere I recall reading that if a file doesn't have "REQUIRE", "PROVIDE" or "BEFORE" lines, it will be placed arbitrarily in the dependency ordering. And it could be that the arbitrary placement differs between the first pass, which runs scripts up to $early_late_divider, and the second pass, which runs those after it.
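For reference, that block is written as comments near the top of the script (the REQUIRE value here is illustrative; use whatever your script actually depends on):

#!/bin/sh
#
# PROVIDE: dummy
# REQUIRE: LOGIN
# KEYWORD: shutdown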
OTOH, is this stock FreeBSD, or some variant? I recall reading that FreeNAS saves its configuration somewhere else and recreates its system files on every boot, and quite possibly /etc is actually on a ramdisk there.
Also, note that /usr/local/etc/rc.d doesn't come into existence until the first port that installs an rc file is installed.

Redirect Output of Capistrano

I have a Capistrano deploy file (Capfile) that is rather large, contains a few namespaces, and generally has a lot of information already in it. My ultimate goal is, using the Tinder gem, to paste the output of the entire deployment into Campfire. I have Tinder set up properly already.
I looked into using the Capistrano capture method, but that only works for the first host. Additionally, that would be a lot of work, since I'd have to go through and add something like:
output << capture 'foocommand'
Specifically, I am looking to capture the output of any deployment from that file into a variable (in addition to printing it to STDOUT so I can see it), then pass that output to a function called notify_campfire. Since notify_campfire is called at the end of every task (regardless of namespace), it will have the task name available to it along with the output stored in that variable. Any thoughts on how to accomplish this would be greatly appreciated.
I recommend not messing with the Capistrano logger. Instead, use what Unix gives you and use pipes:
cap deploy | my_logger.rb
Where your logger reads STDIN, records it, and pipes it back out to the appropriate stream.
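A minimal sketch of such a logger (my_logger.rb and deploy.log are illustrative names; it simply tees STDIN to both STDOUT and a file, leaving the captured text ready to hand to Tinder):

#!/usr/bin/env ruby
# Tee: record everything from STDIN while echoing it straight back out.
File.open('deploy.log', 'w') do |log|
  STDIN.each_line do |line|
    STDOUT.puts line
    STDOUT.flush      # keep the deploy output live rather than buffered
    log.puts line
  end
end
# The full deploy output is now in deploy.log, ready to post to Campfire.

Make it executable (chmod +x my_logger.rb) or pipe into ruby my_logger.rb directly.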
For an alternative, the EngineYard cap recipes have a logger - this might be a useful reference if you do need to edit the code, but I recommend not doing so.
It's sort of a hackish means of solving your problem, but you could try running the deploy task from a Rake task and capturing the output using %x:
# ...in your Rakefile...
task :deploy_and_notify do
  output = %x[ cap deploy ]  # Run your deploy task here.
  notify_campfire(output)
  puts output                # Echo the output.
end
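Then kick it off with rake deploy_and_notify. One caveat: %x captures only STDOUT, so if Capistrano writes part of its log to STDERR you may want %x[ cap deploy 2>&1 ] to capture both streams.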
