Background: I'm on a team that deploys software to servers in over 30 different timezones. Each server location has its own maintenance window, and our SysAd team is spread across 5 locations. Currently we use an internal wiki page with a table of server locations and Google time-conversion links: for each server it lists the timezone, whether that location follows DST or not, one conversion link for the start time of the maintenance window, and another for the end time. It's time consuming keeping track of which servers can be patched right now. Since some servers are in the same timezone as others but may or may not follow DST, it's a mess to keep track of. Some of the software devs at work keep a whole bookmark folder of all of the conversion links, and I wanted to send them a script that can simplify this.
Goal: I basically want a script that compares the SysAd's current timezone to the timezones of the different servers. In the end I want an echo statement of which servers can be patched right now, and eventually I want to add further logic that will assess the time until the next maintenance window for each server's location.
Problem #1: I'm new to scripting.
Problem #2: When doing what I thought was a simple TZ comparison, it's not outputting what I expect. Once I understand why the output below is what it is, I think I can figure out the rest of the logic. I don't care about specific hours and minutes at this point; I just want to know why the if statement isn't outputting "the same".
Desired script:
#!/bin/bash
##SysAd Sets his own timezone in the first TZ variable.
export TZ=America/Los_Angeles
TZ1=`date +%::z`
echo $TZ1
##Timezone of Server in location 1
TZ=America/Los_Angeles
TZ2=`date +%::z`
echo $TZ2
if [[ " TZ1 " == " TZ2 " ]]; then
echo "the same"
else
echo "not the same"
output:
% ./test.sh
-08:00:00
-08:00:00
not the same
The expected output is that they are the same: echoing out both the TZ1 and TZ2 variables gives "-08:00:00" for both.
It should be:
if [[ "$TZ1" == "$TZ2" ]]
Without the $ sigils (and with the extra spaces inside the quotes), bash compares the literal strings " TZ1 " and " TZ2 ", which are never equal, so the else branch always runs.
When we are running a Meltano build/test cycle (such as in a CI/CD pipeline), we want our Singer pipelines to run as follows:
Don't use pre-captured state bookmarks that might allow a stream to skip a meaningful run entirely. (For instance, if there are zero new records, or not enough new records to trigger a representative test.)
Don't require developers to constantly push forward a hardcoded start_date. (What starts out as a "fast" test over a month of data eventually becomes a much longer-running test covering multiple months.)
For any tap named tap-mysource, we should be able to set $TAP_MYSOURCE_START_DATE to provide a default start_date config value. What's a good way to provide a default relative start time for CI builds - for instance, a rolling 21-day window?
I think most use cases are probably running on GitHub Actions, but we also use GitLab CI.
As of now, there isn't an expression language to perform today()-n and provide relative start dates in that manner. However, you can initialize an environment variable with the relative date prior to execution, and Meltano will pass it as a dynamic input to the tap by way of the naming convention <PLUGIN_NAME>_<SETTING_NAME>.
Depending on your flavor of OS, this may need to be slightly adjusted:
On Mac:
N_DAYS=1
TAP_MYSOURCE_START_DATE=$(date -v-${N_DAYS}d "+%Y-%m-%d")
echo "Using dynamic start date of today-$N_DAYS: $TAP_MYSOURCE_START_DATE"
meltano elt tap-mysource target-mydest
On Ubuntu:
N_DAYS=1
TAP_MYSOURCE_START_DATE=$(date +%Y-%m-%d -d "$N_DAYS day ago")
echo "Using dynamic start date of today-$N_DAYS: $TAP_MYSOURCE_START_DATE"
meltano elt tap-mysource target-mydest
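If the same snippet has to run on both macOS and Linux CI runners, a small sketch like this picks the right date flavor at runtime (it assumes GNU and BSD date are the only two variants in play, and uses the 21-day window from the question):

# GNU date accepts -d "... ago"; BSD/macOS date errors out on it, so probe first.
N_DAYS=21
if date -d "1 day ago" >/dev/null 2>&1; then
    TAP_MYSOURCE_START_DATE=$(date +%Y-%m-%d -d "$N_DAYS days ago")   # GNU date (Linux)
else
    TAP_MYSOURCE_START_DATE=$(date -v-"${N_DAYS}"d "+%Y-%m-%d")       # BSD date (macOS)
fi
export TAP_MYSOURCE_START_DATE
echo "Using dynamic start date of today-$N_DAYS: $TAP_MYSOURCE_START_DATE"
meltano elt tap-mysource target-mydest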
Ref:
`date` command on OS X doesn't have ISO 8601 `-I` option?
https://stackoverflow.com/a/1706909/4298208
Goal
I wish to use RRDTool to count logical "user activity" from our web application's apache/tomcat access logs.
Specifically we want to count, for a period, occurrences of several url patterns.
Example
We have two applications (call them 'foo' and 'bar')
These URLs interest us. They indicate when users 'did interesting stuff'.
/foo/hop
/foo/skip
/foo/jump
/bar/crawl
/bar/walk
/bar/run
Basically we want to know, for a given interval (10 minutes, hour, day, etc.), how many users hopped, skipped, jumped, crawled, walked, etc.
Reference/Starting point
This article on importing access logs into RRDTool seemed like a helpful starting point.
http://neidetcher.com/programming/2014/05/13/just-enough-rrdtool.html
However, to clarify: that example feeds the access log in directly, whereas we want to group a handful of URLs into 'buckets' and count the number in each bucket.
Some Scripting Required..
I could do this with bash & grep & wc, iterating through the patterns and sending output to an 'intermediate results' text file. That said, I believe RRDTool could do this with minimal 'outside coding'; I'm just unclear on the details.
Some points
I mention 'two applications' because we actually serve them up from separate servers with different log file formats. I'd like to get them into the same RRD file.
Eventually I'd like to report this in Cacti; initially, however, I wanted to understand the RRDTool details.
Open to doing any coding, but would like to keep it as efficient as possible, both administratively and in computer resources. (By administratively, I mean: easy to add monitoring for new instances.)
I am very new to RRDTool and am RTM'ing (and walking through the tutorial). I'm used to relational databases, spreadsheets, etc., and don't have my mind around all the nuances of the RRA format.
Thanks in advance!
You could set up a separate RRD file with an ABSOLUTE-type datasource for each URL you want to track.
Then you tail the log file, and whenever you see one of the interesting URLs rush by, you call:
rrdtool update url-xyz.rrd N:1
The ABSOLUTE data source type is like a counter, but it gets reset every time it is read. Your counter will just count to one, but that should not be a problem.
In the example above I am using N: and not the timestamp from the access log. You could also use the log timestamp if you are not doing this in real time... but beware that you cannot update the same RRD file twice at the same time. N: uses milli-timestamps internally and thus probably avoids this problem.
On the other hand it may make more sense to accumulate matching log entries with the same timestamp and only update rrdtool with that number once the timestamp on the logfile changes.
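To make that concrete, here is a minimal sketch of the tailing loop; the log path, the URL list, and the RRD filenames are all assumptions:

# One-time creation, one RRD per URL (5-minute step, ABSOLUTE datasource,
# one day of 5-minute samples):
#   rrdtool create url-foo-hop.rrd --step 300 DS:hits:ABSOLUTE:600:0:U RRA:AVERAGE:0.5:1:288
tail -F /var/log/httpd/access_log | while read -r line; do
    for url in /foo/hop /foo/skip /foo/jump /bar/crawl /bar/walk /bar/run; do
        case "$line" in
            *"$url"*)
                # Map "/foo/hop" to "url-foo-hop.rrd" and bump its counter.
                rrdtool update "url$(echo "$url" | tr '/' '-').rrd" N:1
                ;;
        esac
    done
done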
I have a RHEL 5 system and have been trying to capture the output of a batch of 3-4 scripts, each into a separate variable, and they are not working as expected.
I tried the following syntax, which I have seen a lot of people on this site use and which should really work, though for some reason it doesn't work for me.
Syntax -
testvar=`sqlplus -s foo/bar@SCHM <<EOF
set pages 0
set head off
set feed off
@test.sql
exit
EOF`
Now, I have access to the sqlplus command from the Oracle bin folder and have also set and exported the ORACLE_HOME and ORACLE_PATH variables (I echoed them just to ensure they are actually set).
However, I keep getting an error saying "you need to set EXPORT for ORACLE_HOME", even though I have confirmed with everyone that I am indeed using the correct path.
Another question: once I get the script output (a numeric value in bits or bytes) into a variable (there will be 4-5 variables, as there are 4-5 scripts), how do I convert it into human-readable output in either MBs or GBs?
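For illustration, this is the kind of conversion I am picturing, done in plain awk since RHEL 5's coreutils is too old for numfmt (the starting value is made up):

# Convert a raw byte count into a human-readable size string.
testvar=123456789
human=$(echo "$testvar" | awk '{
    split("B KB MB GB TB", unit, " ")
    i = 1
    while ($1 >= 1024 && i < 5) { $1 /= 1024; i++ }
    printf "%.2f %s", $1, unit[i]
}')
echo "$human"   # prints "117.74 MB"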
Please guide me on this, and I assure you that I will post everything here so that if someone in the future gets stuck at the same issue, they don't have to waste time. (And your precious time won't go to waste either...)
Thanks in advance,
Brian
I'm running a Minecraft server on my Linux box in a detached screen session. I'm not very fond of screen and would like to constantly pipe the server's output to a file and pipe input from a file to the server (so that I can send input to and read output from the server with remote programs, like a Python script). I'm not very experienced in bash, so could somebody tell me how to do this?
Thanks, NikitaUtiu.
It's not clear if you need screen at all. I don't know the Minecraft server, but generally, for server software, you can run it from a crontab entry and redirect output to log files.
Assuming your server kills itself at midnight Sunday night (we can discuss changing this if restarting once per week is too little or too much, or if you require ad-hoc restarts), here is a crontab entry that starts the server each Monday at 1 minute after midnight, to give you a basic idea of what to do:
01 00 * * 1 dtTm=`/bin/date +\%Y\%m\%d.\%H\%M\%S`; export dtTm; { /usr/bin/mineserver -o ..... your_options_to_run_mineserver_here ... ; } > /tmp/mineserver_trace_log.${dtTm} 2>&1
Consult your man page for crontab to confirm that day-of-week ranges are 0-6 (0 = Sunday), and change the day-of-week value if 0 != Sunday.
Normally I would break the code up so it is easier to read, but crontab entries have to be all on one line (with some weird exceptions) and usually have a limit of 1024b-8K on line length. Note that the ';' just before the closing '}' is super-critical; if it is left out, you'll get undecipherable error messages, or no error messages at all.
Basically, you're redirecting any output into a file (including stderr output). Now you can do a lot of stuff with the output: use more or less to look at the file, grep ERR ${logFile}, write scripts that grep for error messages and then send you emails when errors have been found, etc.
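For instance, a hypothetical checker along these lines could run from another crontab entry (the log path matches the entry above; the email address is made up):

# Grab the newest trace log and mail yourself any ERR lines found in it.
logFile=$(ls -1t /tmp/mineserver_trace_log.* 2>/dev/null | head -1)
if [ -n "$logFile" ] && grep -q ERR "$logFile"; then
    grep ERR "$logFile" | mail -s "mineserver errors in $logFile" admin@example.com
fi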
You may have some sysadmin work on your hands to get the mineserver user able to run crontab entries. Also, if you're not comfortable using the vi or emacs editors, creating a crontab file may require help from others. Post to superuser.com to get answers for problems you have with Linux admin issues.
Finally, there are two points I'd like to make about dated logfiles.
Good:
a. If your app dies, you never have to rerun it just to capture output and figure out why something stopped working. For long-running programs this can save you a lot of time.
b. Keeping dated files gives you the ability to prove to you, your boss, and others that it used to work just fine: see, here are the log files.
c. Keeping the log files, assuming there is useful information in them, gives you the opportunity to mine those files for facts, e.g. the program used to take 1 sec for processing and now it is taking 1 hr.
Bad:
a. You'll need to set up a mechanism to sweep old log files; otherwise, at some point everything will have stopped, and when you finally figure out what the problem was, you discover that /tmp (or whatever dir you chose to use) is completely full. A simple daily find job, sketched below, is usually enough.
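For example (hypothetical: the path matches the crontab entry above, and the 14-day retention is an arbitrary choice):

# Delete mineserver logs older than 14 days; run this daily from cron.
find /tmp -name 'mineserver_trace_log.*' -mtime +14 -exec rm -f {} \;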
There is a self-maintaining solution to using dates on the logfiles I can tell you about if you find this approach useful. It will take a little explaining, so I don't want to spend the time writing it up if you don't find the crontab solution useful.
I hope this helps!
I'm using CVS on Windows (with the WinCVS front end), and would like to add details of the last check-in to the email from our automated build process whenever a build fails, in order to make it easier to fix.
I need to know the files that have changed, the user that changed them, and the comment.
I've been trying to work out the command line options, but never seem to get accurate results (I either get too many results rather than just the ones from one check-in, or details of some random check-in from two weeks ago).
CVS doesn't group change sets the way other version control systems do; each file has its own independent version number and history. This is one of the deficiencies in CVS that prompts people to move to a newer VCS.
That said, there are ways you could accomplish your goal. The easiest might be to add a post-commit hook to send email or log to a file. Then, at least, you can group a set of commits together by looking at the time the emails are sent and who made the change.
Wow.
I'd forgotten how hard this is to do. What I'd done before was a two-stage process.
Firstly, running
cvs history -c -a -D "7 days ago" |
gawk '{ print "$1 == \"" $6 "\" && $2 == \"" $8 "/" $7 "\" { print \"" $2 " " $3 " " $6 " " $5 " " $8 "/" $7 "\"; next }" }' > /tmp/$$.awk
to gather information about all checkins in the previous 7 days and to generate a script that would be used to create a part of the email that was sent.
I then trawled the CVS/Entries file in the directory that contained the broken file(s) to get more info.
Munging the two together allowed me to finger the culprit and send them an email notifying them that they'd broken the build.
Sorry that this answer isn't as complete as I'd hoped.
CVS does not provide this capability. You can, however, get it by buying a license for FishEye, or possibly by using CVSTrac (note: I have not tried CVSTrac).
Or you could migrate to SVN, which does provide this capability via atomic commits: you can check in a group of files and have it count as a single commit. In CVS, each file is a separate commit no matter what you do.
We did this via a Perl script that dumps the changelog; you can get a free version of Perl for Windows at the second link.
cvs2cl script
ActivePerl
I use loginfo in CVSROOT and write that information to a file:
http://ximbiot.com/cvs/manual/cvs-1.11.23/cvs_18.html#SEC186
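A minimal loginfo entry along the lines of the manual's example would be (the commitlog path is an assumption):

# In CVSROOT/loginfo: on every commit, append the committer id, the file
# names (%s), the date, and the log message read from stdin to one file.
ALL (echo ""; id; echo %s; date; cat) >> $CVSROOT/CVSROOT/commitlog

The automated build email can then pull the last block out of that commitlog.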
Will "cvs history -a -l" get you close? Shows for all users last event per project...
CVSNT supports commit IDs, which you can use in place of tags in log, checkout, or update commands. Each set of files committed receives its own unique ID (commits are atomic in CVSNT). You just have to determine the commitid of the last checked-in file via cvs log first (you can restrict the output via -d "1 hour ago" or similar) and then query which other files have that ID.
Eclipse has Change Sets built in. You can browse the last changes (at least incoming changes, aka updates) grouped by commit. It does this by grouping the commits by author, commit message, and similar timestamps.
This also works for "Compare With/Another Branch or Version", where you can choose branches, tags, and dates. Look through the Synchronize view icons for a popup menu with "Change Sets" and see for yourself.
Edit: This would require adopting Eclipse at least as a viewer, but depending on how frequently you need to compare and group, it might not be too bad. If you don't want to use more of it, use Eclipse just for CVS. It should even be possible to get a decent-sized graphical CVS client through the RCP with all the plugins, but that's definitely out of scope...
Isn't this a solved problem? I would think any of the several tools on the CI matrix that support both CVS and email notifications could do this for you.