Limit size of tinyproxy logfile

I am working on setting up tinyproxy on a CentOS 6.5 server in the cloud. I have installed it successfully. However, because of cloud limitations in terms of size, we want to limit the size of the logfile (/var/log/tinyproxy.log). I need to configure the log file so that it only keeps the last hour of logs. For example, if it is now 5:30 PM, the file should contain only data from 4:30 PM onward. I have read the tinyproxy documentation and couldn't find a logfile limit parameter. I'd be very thankful if somebody gave me a clue how to do that. Thanks.

I don't believe Tinyproxy has a feature for limiting log size, but it would be pretty simple to write a script for this separately.
An example script using Python, run automatically every hour via a crontab entry:
import os
import shutil

# Remove the previous copy of the logs (if there is one)
if os.path.exists("/[DESTINATION]"):
    os.remove("/[DESTINATION]")

# Copy the current logs to storage
shutil.copyfile("/var/log/tinyproxy.log", "/[DESTINATION]")

# Remove the primary logs
os.remove("/var/log/tinyproxy.log")
(This is just an example. You may have to clear tinyproxy.log instead of deleting it. You may even want to set it up so you copy the old logs one more time, so that you don't end up with only 1-2 minutes of logs when you need them.)
Then add this to your crontab using crontab -e (make sure you have the right permissions to edit the log file!). This will run your script every hour, one minute past the hour:
01 * * * * python /[Python Path]/logLimit.py

I found crontab very useful for this task.
30 * * * * /usr/sbin/logrotate /etc/logrotate.d/tinyproxy
It rotates my log file every hour.
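For reference, a minimal /etc/logrotate.d/tinyproxy to pair with that cron entry could look like the following (the exact directives are my assumption, not part of the original answer; copytruncate keeps tinyproxy writing to the same file without a restart):
/var/log/tinyproxy.log {
    rotate 1
    missingok
    notifempty
    copytruncate
}
If your logrotate version supports it, you can also add the hourly directive; otherwise, forcing rotation from cron with logrotate -f achieves the same hourly cycle.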

Related

Combining local and remote ZFS snapshoting [zfs_autobackup]

I was searching for a simple way of managing my local and remote ZFS snapshots and decided to give zfs_autobackup a try.
My goals are to keep a local set of snapshots taken at specific times and send them to a remote machine.
zfs set autobackup:local=true tank/data
After selecting the source dataset, I created a cron file as follows
0 8-20 * * 1-5 /usr/local/bin/zfs-autobackup local --keep-source 12
5 20 * * 1-5 /usr/local/bin/zfs-autobackup local --keep-source 1d1w
10 20 * * 5 /usr/local/bin/zfs-autobackup local --keep-source 1w1m
0 0 1 * * /usr/local/bin/zfs-autobackup local --keep-source 1m1y
This doesn't behave the way I expected; it deletes older snapshots.
I also wonder what the best way to send the snapshots to the remote server would be. Does it make any sense to define another backup set, like this?
zfs set autobackup:remote=true tank/data
Any ideas?
I'm the author of zfs-autobackup.
The answer from Ser is correct: use one zfs-autobackup command instead of 4, and use commas to separate the rules.
Also, zfs-autobackup already keeps local and remote snapshots, so you can just send over the snapshots created by the cron job. (Maybe don't name them "local"; it's confusing in that case.)
So use the same command as in your cron job, but add the target dataset and --ssh-target.
(Also check out the documentation; it explains everything.)
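As a rough sketch of that combined setup (the ssh target backuphost and the target dataset tank-backup/data below are placeholders, and the retention rules are just the ones from your cron file joined with commas):
0 8-20 * * 1-5 /usr/local/bin/zfs-autobackup --keep-source 12,1d1w,1w1m,1m1y --ssh-target backuphost local tank-backup/data
With a single entry like this, the thinner sees all the rules at once instead of each cron job pruning by a single rule.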

TailFile Processor - Apache NiFi

I'm using the TailFile processor to fetch logs from a cluster (3 nodes), scheduled to run every minute. The log file name changes every hour.
I am confused about which tailing mode I should use. If I use Single File, it does not fetch the new file generated after an hour. If I use Multiple Files, it only fetches the file from the third minute after the file name changes, which increases the size of the file. What should the rolling filename be for my file, and which mode should I use?
Could you please let me know? Thank you.
My filename:
retrieve-11.log (generated at 11:00) - this is removed, but Single File mode still checks for this file
after 1 hour: retrieve-12.log (generated at 12:00)
My processor configuration:
Tailing mode: Multiple Files
File(s) to Tail: retrieve-${now():format("HH")}.log
Rolling Filename Pattern: ${filename}.*.log
Base Directory: /ext/logs
Initial Start Position: Beginning of File
State Location: Local
Recursive lookup: false
Lookup Frequency: 10 minutes
Maximum age: 24 hours
Sounds like you aren't really doing normal log file rolling. That would be, for example, where you write to logfile.log then after 1 day, you move logfile.log to be logfile.log.1 and then write new logs to a new, empty logfile.log.
Instead, it sounds like you are just writing logs to a different file based on the hour. I assume this means you overwrite each file every 24h?
So something like this might work?
EDIT:
So given that you are doing the following:
At 10:00, `retrieve-10.log` is created. Logs are written here.
At 11:00, `retrieve-11.log` is created. Logs are now written here.
At 11:10, `retrieve-10.log` is moved.
TailFile is only run every 10 minutes.
Then targeting a file based on the hour won't work. At 10:00, your TailFile only reads retrieve-10.log. At 11:00, your TailFile only reads retrieve-11.log. So worst case, you miss 10 minutes of logs between 10:50 and 11:00.
Given that another process is cleaning up the old files, there isn't going to be a backlog of old files to worry about. So it sounds like there's no need to set the hour specifically.
tailing mode: multiple files
files to tail: /path/retrieve-*.log
With this, at 10:00, tailFile tails retrieve-9.log and retrieve-10.log. At 10:10, retrieve-9.log is removed and it tails retrieve-10.log. At 11:00 it tails retrieve-10.log and retrieve-11.log. At 11:10, retrieve-10.log is removed and it tails retrieve-11.log. Etc.

Issues with mkdbfile in a simple "read a file > Create a hashfile job"

Hello DataStage-savvy people.
Two days in a row, the same single DataStage job failed (it did not stop at all).
The job tries to create a hashfile using the command /logiciel/iis9.1/Server/DSEngine/bin/mkdbfile /[path to hashfile]/[name of hashfile] 30 1 4 20 50 80 1628
(last trace in the log)
Something to consider (or maybe not?):
The [name of hashfile] directory exists (and was last modified at the time of execution), but the file D_[name of hashfile] does not.
I am trying to understand what happened so I can prevent the same incident from happening on the next run (tonight).
Before this, the job had been in production for ages and we never had an issue with it.
Using DataStage 9.1.0.1.
Did you check the job log to see if it captured an error? When a DataStage job executes any system command via a command execution stage or similar method, the stdout of the called command is captured and then added to a message in the job log. So if the mkdbfile command gives any output (success messages, errors, etc.), it should be captured and logged. The event may not be flagged as an error in the job log, depending on the return code, but the output should be there.
If there is no logged message revealing why the file was not created, a couple of things to check:
-- Was the target directory on a disk that was possibly out of space at that time? (See the quick check below.)
-- Do you have any antivirus software that scans directories on your system at certain times? If so, it can interfere with I/O. If it ran a scan at the same time you had the problem, you may wish to update the AV settings to exclude the directory you were writing the dbfile to.
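If disk space is the suspect, a quick sanity check on the filesystem holding the hashfile (reusing the placeholder path from the question) could be:
df -h /[path to hashfile]    # free blocks on that filesystem
df -i /[path to hashfile]    # free inodes; running out of these also prevents file creation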

Check cron syntax to execute now

I do heavy automation on servers.
The biggest problem is the time-based execution of automation tasks.
The idea is to use cron-style syntax to time the executions.
So I need a way to check whether a command that is tied to a cron syntax string should be executed now.
Things like:
./parser.sh 0 0 * * *
should only return OK at midnight, not during any other minute of the day.
Also
./parser.sh */10 0,1,2,3,4-22/4 * * *
and all combinations possible in cron syntax need to work.
There will be several executions per day, and every execution has a different syntax.
Is there anything that can do this?
It is not possible to actually create cron jobs for this.
I can only use Bash, maybe statically compiled binaries, but no Python or other higher-level languages.
I already tried https://github.com/morganhk/BashCronParse, but it cannot interpret things like 1,2,3,4,5..., only single numbers and */n, and no combinations of them.
I can't quite follow your question, but if you are trying to run parser.sh every minute of the day, use this:
./parser.sh * * * * *
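For the actual requirement (checking whether an arbitrary cron expression matches the current minute), here is a minimal pure-Bash sketch of my own. The script name cronmatch.sh and its structure are assumptions; it handles lists, ranges, and steps, but not month/day names, and unlike real cron it ANDs day-of-month and day-of-week instead of ORing them when both are restricted:
#!/usr/bin/env bash
# cronmatch.sh - hypothetical sketch, not an existing tool.
# Exits 0 if the 5-field cron expression given as arguments matches the
# current minute.

matches_field() {
    # $1 = cron field, $2 = current value, $3 = field minimum, $4 = field maximum
    local field=$1 value=$2 fmin=$3 fmax=$4 part step lo hi
    local -a parts
    IFS=',' read -ra parts <<< "$field"
    for part in "${parts[@]}"; do
        step=1
        if [[ $part == */* ]]; then
            step=${part#*/}        # text after the slash is the step
            part=${part%%/*}
        fi
        if [[ $part == '*' ]]; then
            lo=$fmin; hi=$fmax
        elif [[ $part == *-* ]]; then
            lo=${part%-*}; hi=${part#*-}
        else
            lo=$part; hi=$part
        fi
        if (( value >= lo && value <= hi && (value - lo) % step == 0 )); then
            return 0               # this list entry matches the current value
        fi
    done
    return 1
}

# Current time, forced to base 10 so "08" and "09" are not read as octal
min=$((10#$(date +%M)))
hour=$((10#$(date +%H)))
dom=$((10#$(date +%d)))
mon=$((10#$(date +%m)))
dow=$(date +%w)                    # 0-6, Sunday = 0, same convention as cron

if matches_field "$1" "$min"  0 59 && \
   matches_field "$2" "$hour" 0 23 && \
   matches_field "$3" "$dom"  1 31 && \
   matches_field "$4" "$mon"  1 12 && \
   matches_field "$5" "$dow"  0 6; then
    echo OK
    exit 0
fi
exit 1
Quote each field when calling it, e.g. ./cronmatch.sh '*/10' '0,1,2,3,4-22/4' '*' '*' '*', so the shell does not glob-expand the asterisks; an exit status of 0 means the command should run this minute.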

Call cron.php once a day - Magento settings

Like a lot of users, I have some problems configuring Magento cron jobs (my cart rules don't update properly on Magento 1.8.1; I also modified my cron.php, adding $isShellDisabled = true;).
I tried a lot of things, but it doesn't work. I installed AOE Scheduler, and I see all my tasks as pending!
My hosting only lets me call cron.php once a day (at 3 AM, and it works, because it generates the tasks at that time), so I'm wondering whether it is useless to have settings like this:
Generate Schedules Every 15
Schedule Ahead for 1
Missed if Not Run Within 60
History Cleanup Every 120
Success History Lifetime 1400
Failure History Lifetime 1400
If I run cron.php manually, it generates tasks for an hour, all pending (for example, my cart rules XML is set to update every 15 minutes, so I get 4 cart rules tasks).
If I run it again (after a few minutes), all tasks in that window change from Pending to Success.
So, do I have to call it at least twice a day, or do I have to change my cron settings?
Thank you for the help.
Use this cron expression for each hour:
<cron_expr>0 * * * *</cron_expr>
This will make it run at 12:00, 1:00, and so on.
If you want it to run at 12:30, 1:30, and so on, replace the 0 with 30.
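For context, that cron_expr lives in a module's config.xml; a stripped-down sketch (the job code and observer model below are placeholders, not taken from your setup) looks like this:
<config>
    <crontab>
        <jobs>
            <your_job_code>
                <schedule>
                    <cron_expr>0 * * * *</cron_expr>
                </schedule>
                <run>
                    <model>yourmodule/observer::yourMethod</model>
                </run>
            </your_job_code>
        </jobs>
    </crontab>
</config>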
