I get a permission-denied error when deleting a file:
Set fso = CreateObject("Scripting.FileSystemObject")
mytempstring = fso.OpenTextFile(filepath).ReadAll
fso.DeleteFile filepath, True 'this generates a permission-denied error
Any suggestions here? Thank you!
OpenTextFile returns a TextStream that you can then close. The problem with your code is that you keep no reference to it, and instead chain the ReadAll method.
In theory, this should be OK. By not keeping a reference to the TextStream object, its reference count will immediately drop to zero once the ReadAll method has completed. When the reference count of an object reaches zero, the script engine knows it can safely free the object, and freeing a TextStream automatically closes the file.
Unfortunately there's a problem with this. Although you can be sure the script engine will free the object, you cannot be sure when it will free the object. Script engines usually implement performance tuning techniques, and one example would be not freeing an object immediately but deferring that until the engine is less busy or has a more opportune moment.
There are many examples of people experiencing unexpected/undesirable behaviour because the garbage collector does not free memory immediately. Here's one where a script used excessive memory when run normally, yet the problem didn't occur when stepping through the code, because the garbage collector then had plenty of idle time to free memory. Note Edward's second reply, where he suggests inserting a Sleep call just so the garbage collector has idle time to free memory.
With all this taken into consideration, if you want to be sure the file is closed by the time you come to delete it, you must take ownership of closing the file instead of relying on the script engine to do it when the reference count reaches zero, so try this...
Dim fso, myStream
Set fso = CreateObject("Scripting.FileSystemObject")
Set myStream = fso.OpenTextFile(filepath)
mytempstring= myStream.ReadAll
myStream.Close
fso.deletefile filepath, True
Related
I have a system that launches 50 or so VBS scripts via WSF, which need to stay resident until another part of the system connects to them; they then act as servers for a bit until the peer disconnects, then they exit and get restarted.
For initialization purposes, they all use an EXCEL.EXE to read a large number of parameters from a spreadsheet, via
Set objExcel = CreateObject("Excel.Application")
We can't afford to have 50 EXCEL.EXEs running at once, so the restarts are serialized so that there should never be more than one EXCEL.EXE running: usually zero, as each is only used for 15-20 seconds and then released.
However, sometimes things go wrong: the WSF scripts exit, and the EXCEL.EXE that a script started stays there. So we do see up to a dozen EXCEL.EXE processes.
My question is about using GetObject() instead of CreateObject(). Would it be possible to use GetObject() so that if there already was an EXCEL.EXE running, it would use that one instead of starting a new one? And if so what other steps are necessary?
There is also a supplementary question here about why the EXCEL.EXEs persist after the VBS that started them has exited, but I can imagine ways in which the VBS could exit (or be killed) that would allow that.
Note that the question is also partly about the re-entrancy of EXCEL.EXE, which I have no information about.
I'm not the author of these scripts, and I'm not very strong in VBS as far as external objects go, so it is entirely possible that I'm asking a trivial question here.
Usage of GetObject() is documented in this old KB article. Error handling is required for the case where no instance is running yet and the first one must be created. Like this:
Dim excel
On Error Resume Next
Set excel = GetObject(, "Excel.Application")
If Err.Number = 429 Then
Err.Clear
Set excel = CreateObject("Excel.Application")
End If
If Err.Number <> 0 Then
WScript.Echo "Could not start Excel: " & Err.Description
WScript.Quit 1
End If
On Error Goto 0
'' etc
However, seeing zombie Excel.exe processes survive is a broader concern: it strongly suggests that the scripting runtime is not exiting normally. Perhaps error handling in your existing scripts is less than ideal; that's not likely to get better when you slam a single instance with multiple scripts. Excel gets pretty cranky when it cannot keep up. Using the OpenXML API or Excel Services is a better way to go about it.
I am performing very rapid file access in Ruby (2.0.0 p39474) and keep getting the exception Too many open files
Having looked at this thread, here, and various other sources, I'm well aware of the OS limits (set to 1024 on my system).
The part of my code that performs this file access is mutexed, and takes the form:
File.open( filename, 'w'){|f| Marshal.dump(value, f) }
where filename is subject to rapid change, depending on the thread calling the section. It's my understanding that this form relinquishes its file handle after the block.
I can verify the number of File objects that are open using ObjectSpace.each_object(File). This reports that there are up to 100 resident in memory, but only one is ever open, as expected.
Further, the exception itself is thrown at a time when only 10-40 File objects are reported by ObjectSpace. Moreover, manually garbage collecting fails to improve any of these counts, as does slowing down my script by inserting sleep calls.
My question is, therefore:
Am I fundamentally misunderstanding the nature of the OS limit---does it cover the whole lifetime of a process?
If so, how do web servers avoid crashing out after accessing over ulimit -n files?
Is ruby retaining its file handles outside of its object system, or is the kernel simply very slow at counting 'concurrent' access?
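For what it's worth, the ObjectSpace bookkeeping described in the question can be sketched as follows; the file names are throwaway examples, and this only distinguishes File objects resident in memory from those still holding an OS handle:

```ruby
# Open a handful of files, close some of them, and compare the number of
# File objects the VM knows about with the number still holding a handle.
paths = (0...5).map { |i| "objspace_demo_#{i}.txt" }
handles = paths.map { |p| File.open(p, "w") }
handles.each_with_index { |f, i| f.close if i.odd? }  # close 2 of the 5

all_files  = ObjectSpace.each_object(File).to_a
still_open = all_files.reject(&:closed?)
puts "File objects in memory: #{all_files.size}, still open: #{still_open.size}"

# Tidy up: close the rest and remove the demo files.
handles.each { |f| f.close unless f.closed? }
paths.each { |p| File.delete(p) }
```

In MRI, a closed File object lingers in memory until collected, which is why the two counts can legitimately disagree without any descriptor leak.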
Edit 20130417:
strace indicates that ruby doesn't write all of its data to the file, returning and releasing the mutex before doing so. As such, the file handles stack up until the OS limit.
In an attempt to fix this, I have used syswrite/sysread, synchronous mode, and called flush before close. None of these methods worked.
My question is thus revised to:
Why is ruby failing to close its file handles, and how can I force it to do so?
Use dtrace or strace or whatever equivalent is on your system, and find out exactly what files are being opened.
Note that these could be sockets.
I agree that the code you have pasted does not seem to be capable of causing this problem, at least, not without a rather strange concurrency bug as well.
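As a lighter-weight complement to strace, on Linux you can also have the Ruby process enumerate its own descriptors by reading /proc/self/fd; this is a sketch, and the /proc layout is Linux-specific (elsewhere you'd reach for lsof or dtrace as suggested):

```ruby
# Linux-only sketch: each entry in /proc/self/fd is a symlink whose target
# names the open file, pipe, or socket behind that descriptor.
def list_open_fds
  return [] unless File.directory?("/proc/self/fd")
  fds = Dir.entries("/proc/self/fd") - [".", ".."]
  fds.map do |fd|
    begin
      "#{fd} -> #{File.readlink("/proc/self/fd/#{fd}")}"
    rescue SystemCallError
      nil  # descriptors can vanish between listing and readlink
    end
  end.compact
end

puts list_open_fds
```

Calling this right before the failure point shows whether the culprit descriptors are regular files or, as the answer suggests, sockets.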
How do you request Windows to spin down a hard disk programmatically? Is there any user-mode function I can call (or kernel-mode function to call or IRP to send) in order to make this happen?
I've tried making a program to send an ATA STANDBY command directly to the hard disk, but the problem is that this method doesn't inform the system, and hence whenever the system needs to flush the cache, it'll wake up the hard disk again. How do I tell the system to do this for me? (If the system does it, it'll save up the cache and "burst" the data when it gets too large, instead of writing in small increments.)
(The entire point here is to do this directly, not by changing the system-wide spin-down timeout to a 1-second period and waiting for the disk to spin down. I need a function I can call at a specific moment in time when I'm using my laptop, not something generic that doesn't suit 95% of situations.)
How far I've gotten so far:
I have a feeling that PoCallDriver and IRP_MJ_POWER might be useful for this, but I have very limited kernel-mode programming experience (and pretty much zero driver experience) so I really have no idea.
Please read:
Update:
People seem to be repeatedly mentioning the solutions that I have already mentioned do not work. Like I said above, I've already tried "hacky" solutions that change the timeout value or that directly issue the drive a command, and the entire reason I've asked this question here is that those did not do what I needed. Please read the entire question (especially paragraphs 2 and 3) before repeating what I've already said inside your answers -- that's the entire difficulty in the question.
More info:
I've found this document about Disk Idle Detection to be useful, but my answer isn't in there. It states that the Power Manager sends an IRP to the disk driver (hence why I suspect IRP_MJ_POWER to be useful), but I have no idea how to use the information.
I hope this helps:
This: http://msdn.microsoft.com/en-us/library/aa394173%28VS.85%29.aspx
Leads to this:
http://msdn.microsoft.com/en-us/library/aa394132%28VS.85%29.aspx#properties
Then, you can browse to this:
http://msdn.microsoft.com/en-us/library/aa393485(v=VS.85).aspx
This documentation seems to outline what you are looking for, I think.
P.S. Just trying to help, don't shoot the messenger.
Have you tried WMI? Based on the MSDN documentation, you should be able to send a spin-down command to the HDD via WMI:
http://msdn.microsoft.com/en-us/library/aa393493%28v=VS.85%29.aspx
uint32 SetPowerState(
[in] uint16 PowerState,
[in] datetime Time
);
EDIT:
This code lists all drives in system and drives that support this API:
strServer = "."
Set objWMI = GetObject("winmgmts://" & strServer & "/root\cimv2")
rem Set objInstances = objWMI.InstancesOf("CIM_DiskDrive",48)
Set objInstances = objWMI.ExecQuery("Select * from CIM_DiskDrive",,48)
On Error Resume Next
For Each objInstance in objInstances
With objInstance
WScript.Echo Join(.Capabilities, ", ")
WScript.Echo Join(.CapabilityDescriptions, ", ")
WScript.Echo .Caption
WScript.Echo .PNPDeviceID
WScript.Echo "PowerManagementCapabilities: " & .PowerManagementCapabilities
WScript.Echo "PowerManagement Supported: " & .PowerManagementSupported
WScript.Echo .Status
WScript.Echo .StatusInfo
End With
On Error Goto 0
Next
Just save this code as a .vbs file and run that from command line.
I do not have an answer to the specific question that Mehrdad asked.
However, to help others who find this page when trying to figure out how to get their disk to standby when it should but doesn't:
I found that on a USB disk, MS PwrTest claims that the disk is off, but actually it is still spinning. This occurs even with really short global disk timeouts in win 7. (This implies that even if the system thinks it has turned the disk off, it might not actually be off. Consequently, Mehrdad's original goal might not work even if the correct way to do it is found. This may relate to how various USB disk controllers implement power state.)
I also found that the program HDDScan successfully can turn off the disk, and can successfully set a timeout value that the disk honors. Also, the disk spins up when it is accessed by the OS, a good thing if you need to use it, but not so good if you are worrying about it spinning up all the time to flush 1kB buffers. (I chose to set the idle timeout in HDDScan to 1 minute more than the system power manager timeout. This hopefully assures that the system will not think the disk is spun up when it is not.)
I note that powercfg has an option to prevent the idle clock from restarting from small infrequent disk writes. (Called "burst ignore time.")
You can get HDDScan here: HDDScan.com and PwrTest here: Windows Driver Kit. Unfortunately, the PwrTest thing forces you to have a lot of other MS stuff installed first, but it is all free if you can figure out how to download it from their confusing web pages.
While there is no apparent way to do what you're asking for (i.e. tell power management "act as if the timer for spinning down the disk has expired"), there may be a couple ways to simulate it:
Call FlushFileBuffers on the drive (you need to be elevated to open \\.\C), then issue the STANDBY command to the drive.
Make the API call that sets the timeout for spinning down the disk to 1 second, then increase it back to its former value after 1 second. Note that you may need to ramp up to the former value rather than immediately jump to it.
I believe the DevCon command-line utility should be able to accomplish what you need to do. If it does, the source code is available in the Windows DDK.
I have a vb script that processes text files on a server, however, very occasionally without any apparent reason, the script gets stuck in a loop.
Files are uploaded by clients via ftp (smallish about 2 - 10 kb), a .Net service that is watching the ftp folder kicks off an executable (written in VB6), the executable moves the file from the ftp folder to a folder where it is processed. At this point the exe kicks off the vb script. 98% of the time the script runs without issue.
When the infinite loop occurs I kill the exe process. The odd thing is that when I re-kick off the process manually (by copying the exact same file back to the folder that's being watched) it goes through without issue.
In short, the script opens the file into a text stream, then loops through the TextStream object until its AtEndOfStream property is true. Within the loop it creates a new file with some additional information added. Whenever the infinite-loop situation occurs, the temp file doesn't contain any of the source-file data, just the extra data added by the script. For example, the following code works 98% of the time:
Do While ts.AtEndOfStream <> True
sOriginalRow = ts.ReadLine
sUpdatedRow = sOriginalRow & ",Extra_Data"
NewFile.WriteLine sUpdatedRow
Loop
So if the source file contains:
LineA
LineB
LineC
The new file is created containing:
LineA,Extra_Data
LineB,Extra_Data
LineC,Extra_Data
but when the problem occurs, the new file is instantly populated with thousands of rows of
,Extra_Data
,Extra_Data
,Extra_Data
...
It's as though the AtEndOfStream property never becomes true.
My initial thought was that the source file was getting corrupted somehow, but when these source files are reprocessed they are fine. Another thought is that the TextStream object is being created incorrectly and not picking up the newline character, maybe because the file is locked by another process or something.
The only way I have found to replicate the behaviour is to comment out the ReadLine part of the code, which in effect prevents the current line number from incrementing. e.g.
sOriginalRow = "" 'ts.ReadLine
Can anyone offer any suggestions?
Well, as you say, the only thing I can offer are suggestions, not real answers:
Maybe your stream bumps against an ObjectDisposedException (not catchable in VBScript), so the AtEndOfStream condition ends up <> True without ever being False.
For the sake of the experiment, you could try to change Do While ts.AtEndOfStream <> True to Do While Not ts.AtEndOfStream or Do Until ts.AtEndOfStream but my bio-logical circuits tell me that is probably not going to work.
There is another problem described where the stdIn and stdErr are conflicting with each other causing a hang in particular situations.
Could you also please answer Tmdean's comment: On Error Resume Next could create a real mess. If ts.AtEndOfStream returns Null or garbage (because ts was destroyed, for example), it will never become True and will cause a loop in Foreverland.
It's common knowledge in most programming languages that the flow for working with files is open-use-close. Yet I have often seen unmatched File.open calls in Ruby code, and moreover I found this gem of knowledge in the Ruby docs:
I/O streams are automatically closed when they are claimed by the garbage collector.
darkredandyellow's friendly IRC take on the issue:
[17:12] yes, and also, the number of file descriptors is usually limited by the OS
[17:29] I assume you can easily run out of available file descriptors before the garbage collector cleans up. in this case, you might want to use close them yourself. "claimed by the garbage collector." means that the GC acts at some point in the future. and it's expensive. a lot of reasons for explicitly closing files.
Do we need to explicitly close?
If yes, then why does the GC autoclose?
If not, then why the option?
I saw many times in ruby codes unmatched File.open calls
Can you give an example? I only ever see that in code written by newbies who lack the "common knowledge in most programming languages that the flow for working with files is open-use-close".
Experienced Rubyists either explicitly close their files, or, more idiomatically, use the block form of File.open, which automatically closes the file for you. Its implementation basically looks something like this:
def File.open(*args, &block)
return open_with_block(*args, &block) if block_given?
open_without_block(*args)
end
def File.open_without_block(*args)
# do whatever ...
end
def File.open_with_block(*args)
yield f = open_without_block(*args)
ensure
f.close
end
Scripts are a special case. Scripts generally run for such a short time, and use so few file descriptors, that it simply doesn't make sense to close them, since the operating system will close them anyway when the script exits.
Do we need to explicitly close?
Yes.
If yes then why does the GC autoclose?
Because after it has collected the object, there is no way for you to close the file anymore, and thus you would leak file descriptors.
Note that it's not the garbage collector that closes the files. The garbage collector simply executes any finalizers for an object before it collects it. It just so happens that the File class defines a finalizer which closes the file.
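The same mechanism is available for your own classes via ObjectSpace.define_finalizer. The class and file names below are illustrative, and this is a sketch of the idea, not how File is implemented internally:

```ruby
# A wrapper whose finalizer closes its handle when the object is collected.
# The finalizer proc must not capture `self`, otherwise the object stays
# reachable and is never collected at all.
class ManagedHandle
  def self.close_proc(io)
    proc { io.close unless io.closed? }
  end

  def initialize(path)
    @io = File.open(path, "w")
    ObjectSpace.define_finalizer(self, self.class.close_proc(@io))
  end
end
```

Relying on this has exactly the drawback described above: you cannot predict when, or whether, the finalizer runs, so an explicit close or the block form remains the right default.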
If not then why the option?
Because wasted memory is cheap, but wasted file descriptors aren't. Therefore, it doesn't make sense to tie the lifetime of a file descriptor to the lifetime of some chunk of memory.
You simply cannot predict when the garbage collector will run. You cannot even predict if it will run at all: if you never run out of memory, the garbage collector will never run, therefore the finalizer will never run, therefore the file will never be closed.
You should always close file descriptors after use; closing also flushes any buffered writes. Often people use File.open or an equivalent method with a block to manage the file descriptor's lifetime. For example:
File.open('foo', 'w') do |f|
f.write "bar"
end
In that example the file is closed automatically.
According to http://ruby-doc.org/core-2.1.4/File.html#method-c-open
With no associated block, File.open is a synonym for ::new. If the
optional code block is given, it will be passed the opened file as an argument
and the File object will automatically be closed when the block
terminates. The value of the block will be returned from File.open.
Therefore, the file will automatically be closed when the block terminates :D
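Both behaviours in that quote are easy to verify in a couple of lines; the file name here is an illustrative throwaway:

```ruby
path = "block_demo.txt"
File.write(path, "hello")

# 1. File.open returns the value of the block...
length = File.open(path) { |f| f.read.length }

# 2. ...and the handle is closed when the block terminates,
#    even if a reference to it escapes.
escaped = File.open(path) { |f| f }

puts "length: #{length}, escaped closed?: #{escaped.closed?}"
File.delete(path)
```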
Yes
In case you don't, or if there is some other failure
See 2.
We can also use File.read to read a file in Ruby, such as:
file_variable = File.read("filename.txt")
In this example, file_variable holds the full contents of the file. File.read opens and closes the file for you, so no explicit close is needed.