How do I avoid wearing out the HD? I have a bash script running on an Ubuntu machine that boils down to this:
bash1.sh:
#!/bin/bash
while true; do
    ./bash2.sh
    sleep 60
done

bash2.sh:
#!/bin/bash
shopt -s nullglob
files=(/path/to/shared/dir/*)     # the network-shared directory
[ ${#files[@]} -eq 0 ] && exit 0  # directory is empty: nothing to do
# process file
# delete file
The directory is a network share, and the computer is not doing anything else. Once per day a new file arrives and is processed. (I do know that bash1.sh can be replaced by watch.) My concerns are that bash1.sh reads bash2.sh every time (which could presumably be avoided by having only one script), and that bash2.sh reads the same directory every time. Is the directory really read from the HD each time, or does Ubuntu cache it in RAM so that the disk is only touched when something changes? Is it a problem that the same place on the HD is read every time, or does that not matter because the disk is already spinning? And if the HD never sleeps, does it matter if I set the loop interval down to only one second?
Maybe the directory could live purely in RAM (how would I do that?), or is there some simple way to check whether something has arrived over the network without reading the directory?
Reading a file or directory once every sixty seconds is not excessive use.
Seriously, don't worry about it.
If it's really worrying you, you can rethink your strategy for detecting the file.
For example, do you really need to know, within sixty seconds, that the file has arrived? Can it arrive any time during the day? Can some parts of the day be considered unlikely?
Using information like that, you can adjust the timing of checks to suit. If the file is supposed to be delivered after 4pm, don't check for it at all before then.
Check for it every sixty seconds between 4pm and 5pm, then every ten minutes after that.
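If you do decide to encode a schedule like that, here is a minimal bash sketch of the idea, reusing bash2.sh from the question; the window boundaries and sleep values are just the example figures above and should be adjusted to the real delivery window:
#!/bin/bash
# Only poll during the delivery window; times are the example values from above.
while true; do
    hour=$((10#$(date +%H)))       # current hour, forced to base 10 so 08/09 don't trip octal parsing
    if [ "$hour" -lt 16 ]; then
        sleep 600                  # before 4pm: don't check at all, just wait
        continue
    elif [ "$hour" -eq 16 ]; then
        interval=60                # 4pm-5pm: check every minute
    else
        interval=600               # after 5pm: check every ten minutes
    fi
    ./bash2.sh
    sleep "$interval"
done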
These are all business-related decisions that can be made, but I would still suggest that it's unnecessary. Provided you regularly back up your disks (and have standby hardware if you need to be back up in a hurry), you shouldn't lose anything.
In fact, if you were really paranoid, you could dedicate an entire machine to this, whose sole purpose is to receive the file via FTP and, when it arrives, send it across to your real processing box.
Put nothing else on that machine and have a warm standby (exactly the same software, IP address and so on, but powered down) so that, if it fails, the standby can be activated in minutes.
The real processing machine is then only written to once a day - that's unlikely to affect the disk lifetime.
That's probably too paranoid for my liking but it shows that there are ways to mitigate almost any problem.
Related
I run this command to find the files matching ".*large_files.*":
[root@iz2ze9wve43n2nyuvmsfx5z ~]# find / -iregex ".*large_files.*"
/root/search_large_files.py
It found the file, but the cursor just keeps blinking endlessly, even if I leave it alone for over half an hour.
What's the bug in my command that causes this?
Well, it may be that you just have massive file systems :-)
But, if you think it shouldn't be taking that long, you may well have mount points that are slower than normal, such as NFS mounts, where you have to go out over the network to get file information.
You could probably see a slow-down in that case if you just run find / on its own. If it goes out to an external location (like, I don't know, a ZX80 running in Antarctica), the output rate may show that, and you'll be able to identify where in the hierarchical structure it happens.
Another possibility is to restrict the search to the actual file system you're on, to minimise the chance it will go external. You can do that with the -xdev flag, which prevents find from crossing file system boundaries. On my VM with one root file system but mounts for my C and D host drives, that cut the time down from two minutes to seventeen seconds.
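For instance, the search from the question restricted to the file system it starts on would look something like this:
find / -xdev -iregex ".*large_files.*"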
Of course, that won't go to other local file systems, but you could, if necessary, write a script to run find (with -xdev) on every file system marked ext4 (and whatever other types you deem to be local).
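A rough sketch of that script, assuming /proc/mounts is available and ext4 is the only type you consider local (add other types to the awk test as needed):
#!/bin/bash
# Run a separate, non-crossing find on every mounted ext4 file system.
awk '$3 == "ext4" { print $2 }' /proc/mounts | while read -r mnt; do
    find "$mnt" -xdev -iregex ".*large_files.*"
done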
TL;DR: I need something way faster than FSO.Write, OR another way to share a variable in memory between different script instances.
Hello, I am running CCPulse (on Windows 7), which is a call center monitoring tool. Agents are represented as "objects" and can have various statistics (like calls taken, total talk duration, etc.). CCPulse allows you to apply thresholds and actions to any statistic. These are basically VBScripts and, as far as I can tell, there are no restrictions.
This allows me to take the "Threshold StatValue" and do things with it, e.g. writing it to a file. The issue is that if I apply a threshold to a statistic for all agents, the script executes for each agent object separately (in sequence, not in parallel). However, I want to export all the agent stats to a single CSV file.
I already got it working by creating the file if it doesn't exist, then opening it and reading its entire contents into a string with ReadAll. If an agent has not been written to the file yet, his stat values get appended as a new line in the string; if he already exists in the file, I search and replace his line using a regex pattern. I then write the entire multiline string back to the file:
Set objFile = objFSO.OpenTextFile(inFile, 2)  ' 2 = ForWriting: overwrite the existing file
objFile.Write strMemoryBuffer                 ' write the whole rebuilt buffer back out
objFile.Close
Set objFile = Nothing
strMemoryBuffer contains the file's original content, with either a new line added or an existing line modified. This string (and subsequently the export file) is around 30 KB in size after all agents have been exported. It looks like this (simplified):
LoginID;Calls;TotalTalkTime
2243;08;9403
2132;12;8439
As I said, since the script runs separately for each agent, only one line is ever added or modified per pass (CCPulse will execute the script one object at a time, until all are finished).
The write process is very slow, however: using Timer() it takes between 0.10 and 0.15 seconds! That is way too slow, as I need to run the script on almost 500 agents (ideally at intervals of no more than 30 seconds), but the writing alone would take over a minute (CCPulse would build up a backlog of threshold operations that could never be finished; I can decrease the recalculation frequency, but that is detrimental in other ways).
If I comment out only the above block, execution time dramatically decreases to ~0.02 seconds. So reading the file and manipulating the string takes almost no time at all, just the write process is slow.
I am writing the file locally to a hard drive (no SSD though). I cannot use a RAM Disk.
I also already tried writing to the volatile environment, but somehow this is even slower (it does work, but for some reason the explorer process goes crazy with up to 50% CPU usage and CCPulse locks up, although the export file is still being updated).
The ideal solution would be to have the string manipulated repeatedly in memory only, and then written to the file just once every 30 seconds or so, but I don't know how I can make the strMemoryBuffer variable available to the "next" agent. Any ideas?
I have made a little function that deletes files based on date. Prior to doing the deletions, it lets the user choose how many days/months back to delete files, telling them how many files would be removed and how much disk space that would free up.
It worked great in my test environment, but when I attempted to test it on a larger directory (approximately 100K files), it hangs.
I've stripped everything else from my code to ensure that it is the get_dir_file_info() function that is causing the issue.
$this->load->helper('file');
$folder = "iPad/images/";
set_time_limit (0);
echo "working<br />";
$dirListArray = get_dir_file_info($folder);
echo "still working";
When I run this, the page loads for approximately 60 seconds, then displays only the first message “working” and not the following message “still working”.
It doesn't seem to be a system/PHP memory problem, as the page comes back after 60 seconds, and the server respects my set_time_limit(), which I've had to use for other processes.
Is there some other memory/time limit I might be hitting that I need to adjust?
From the CI user guide, get_dir_file_info() is described as:
Reads the specified directory and builds an array containing the filenames, filesize, dates, and permissions. Sub-folders contained within the specified path are only read if forced by sending the second parameter, $top_level_only to FALSE, as this can be an intensive operation.
So if you are saying that you have 100K files, the best way to handle it is to cut the work into two steps:
First: use get_filenames('path/to/directory/') to retrieve all your filenames without their information.
Second: use get_file_info('path/to/file', $file_information) to retrieve the info for a specific file only when you actually need it; that can be done on a filename click or something similar.
The idea here is not to force your server to deal with a large amount of processing while in production; that would kill two things, responsiveness and performance, but the idea should be clear.
I'm running into some considerable speed bottlenecks with a Python-Matplotlib-Xcode combination. I know some immediate responses will probably ask "Why are you doing Python stuff in Xcode, just man up and use vim" --> I like the organizing ability and the built-in version control; it makes elements of my work easier to deal with.
Getting Python to run in Xcode in the first place was a bit trickier than I had hoped, but it's possible. Now I have the following scenario:
A master file, 'main.py', does all the import stuff for me and sets up some universal formatting to make all the figures (for eventual inclusion in my PhD thesis) nice and uniform. Afterwards it runs a series of execfile commands to generate whichever graphics I need. Two things I can think of right off the bat:
1) At the very beginning of main.py, after I import all the normal Python stuff you tend to need, I call a system script which checks whether a certain filesystem is mounted. I keep all my climate model data on there, since my local hard drive is too small to deal with all of it at once. Python pauses itself and waits for the system to do its thing, but once the filesystem has been found, it keeps going. Usually this only needs to happen once in the morning when I get to work, or if the VPN server kicked me off for whatever reason. (Side question: it'd be cool to know if there's a trick to automate a VPN login so it reconnects as soon as it notices it's not connected.)
2) I'm not sure how much overhead Xcode adds on its own; running the same program from the terminal is (somewhat) faster. I've tried to be memory conscious and turn off stuff I don't need while running the Python/Xcode combination.
Also, Python launches a little window whenever I call plt.show(), and this in itself takes time. I've considered just saving the figures as quick PNG files and opening them with some other viewer, although I guess that would also take time to open. Given how often these graphics change as I add model runs or think of nicer ways of displaying the data, it'd be nice not to waste something on the order of 15 to 30 minutes (possibly more) out of the entire day twiddling my thumbs and waiting for a window to pop up.
Benchmark it!
import datetime
start = datetime.datetime.now()
# your plotting code
td = datetime.datetime.now() - start
print td.total_seconds() # requires python version >= 2.7
Run it in Xcode and from the command line, and see what the difference is.
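As a rough cross-check of the command-line side, you can also time the whole run from the shell (assuming the entry point is the main.py described in the question):
time python main.py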
I want to be able to (programmatically) move (or copy and truncate) a file that is constantly in use and being written to, so that the file being written to never gets too big.
Is this possible? Either Windows or Linux is fine.
To be specific, what I'm trying to do is log video with FFMPEG and create hour-long videos.
It is possible in both Windows and Linux, but it would take cooperation between the applications involved. If the application that is writing the new data to the file is not aware of what the other application is doing, it probably would not work (well ... there is some possibility ... back to that in a moment).
In general, to get this to work, you would have to open the file shared. For example, if using the Windows API CreateFile, both applications would likely need to specify FILE_SHARE_READ and FILE_SHARE_WRITE. This would allow both (multiple) applications to read and write the file "concurrently".
Beyond sharing the file, though, it would also be necessary to coordinate the operations between the applications. You would need to use some kind of locking mechanism (either by locking some part of the file or some shared mutex/semaphore). Note that if you use file locking, you could lock some known offset in the file to act as a "semaphore" (it can even be a byte value beyond the physical end of the file). If one application were appending to the file at the same exact time that the other application were truncating it, then it would lead to unpredictable results.
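On the Linux side, a minimal sketch of that kind of coordination using flock(1); the file names here are placeholders, and it only helps if both programs agree to take the same lock:
# writer: append new data only while holding the lock
(
    flock 9
    printf '%s\n' "new data" >> /var/tmp/capture.log
) 9>/var/tmp/capture.lock

# rotator: copy out and truncate only while holding the same lock
(
    flock 9
    cp /var/tmp/capture.log "/var/tmp/capture.$(date +%Y%m%d%H%M).log"
    : > /var/tmp/capture.log    # truncate in place rather than deleting the file
) 9>/var/tmp/capture.lock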
Back to the comment about both applications needing to be aware of each other ... It is possible that, if both applications opened the file exclusively and kept retrying until the open succeeded, then performed their operation and closed the file, they could essentially work without "knowledge" of each other. However, that would probably not work very well and would not be very efficient.
Having said all that, you might want to consider alternatives for efficiency reasons. For example, if it were possible to have the writing application write to new files periodically, it might be more efficient than having to "move" the data constantly out of one file to another. Also, if you needed to maintain some portion of the file (e.g., move out the first 100 MB to another file and then move the second 100 MB to the beginning) that could be a fairly expensive operation as well.
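Since the writer in this particular case is FFmpeg, one way to get "new files periodically" without moving anything afterwards is its segment muxer; a sketch, with the input and codec settings left as placeholders:
ffmpeg -i <your input> -c copy -f segment -segment_time 3600 -reset_timestamps 1 capture_%03d.mp4
Each output file then covers roughly an hour, and nothing ever has to be copied or truncated while it is being written.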
logrotate would be a good option in Linux; it comes stock on just about any distro. I'm sure there's a similar Windows service out there somewhere.