I have lots of files in different formats (mostly PDF files) and I need to check whether they can be opened without errors and get a list of those that are broken.
Other than opening them all separately, is there a way to find out which ones won't open or are corrupt?
Not really, no. There are so many file types that it is impossible to know whether a file is corrupt without opening it, and a file might open without errors yet still be corrupt, so even that isn't going to help you. You could try a general file-opening solution like KeyView, which can open most file formats; if it fails, chances are the file is corrupt.
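If the bulk of the collection really is PDF, one workable approach is to script the check for that one format. Here is a minimal sketch using the third-party pypdf package (the "files" directory is a placeholder); it lists every PDF that fails to parse:

    from pathlib import Path
    from pypdf import PdfReader
    from pypdf.errors import PdfReadError

    broken = []
    for path in Path("files").rglob("*.pdf"):
        try:
            # Force the page tree to be parsed, not just the header
            _ = len(PdfReader(path).pages)
        except (PdfReadError, OSError, ValueError) as exc:
            broken.append((path, exc))

    for path, exc in broken:
        print(f"{path}: {exc}")

As with KeyView, a clean parse doesn't prove a file is intact; it only catches damage bad enough to break the parser.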
So the images below show what were originally .vb files. I have just opened them and they look like this, and the compiler won't run them. I am unsure whether this is a compiler error or whether they may have become corrupt because the project is stored on an external drive. It is just these two forms that have broken like this; I have one other form and a module in the same project that are okay, but the project can't run because of the two that are broken.
Broken Login Form
Broken Diary Form
If it changes anything, the designer files for the forms are intact; it is just the code for the forms' elements that is broken.
Also, if I can't identify the cause, is there a way in Visual Studio to revert to the last working version to get my code back? I ask because I put a lot of time into it.
The data in those files is most likely gone.
IMPORTANT: Do not write anything to that disk drive unless you find that you cannot recover those files.
If you are using a version control system then you can revert to an earlier version.
If you are using Windows 10 and you happen to have stored those files in a location included in what File History saves, you can recover them from that.
If you use some other form of backup, retrieve the files from that.
If you have a separate disk drive with at least as much free space as the one with the corrupted files, you could try running file recovery software as it might be that the zeroed-out file was written to a different place on the HDD.
TinTnMn pointed out in a comment that if you previously compiled the code, you should have executable files in the "obj" and "bin" folders that can be decompiled to recover most of your work.
It could be quicker to re-write the code while it is still fresh in your mind.
I recently installed WordPress on my hosting server and all went fine; however, one of my pages, "/wp-admin/update-core.php", is having issues. Upon opening, the file appears to be cut off midway.
I've compared this file against the file in the zip file I uploaded and the original copy is not truncated.
Where this gets even weirder is that, if I edit the file on the server to paste in the missing code, after I save and open again, the file is still missing the code I just added AND is now missing an additional line of code.
I've also tried deleting the file and re-uploading the original copy, and it appears to be cut off at the same point.
Anyone experienced this or have any ideas?
I've got a similar problem with v3.37.3 - the uploaded .zip file is truncated, and .svg files have zero length. This is true even when I set the transfer type to 'binary' rather than 'auto'. Also, when repeating the copy operation for the .zip file, I am prompted more than once to confirm the overwrite of the existing file, almost as though FileZilla sees several files instead of just the one .zip archive.
I had the same problem. I uploaded a file (a Linux executable) and it was truncated from 4,396,728 bytes to 4,392,145 bytes. My fix was to change the transfer type, which you can set from the Transfer menu, from Auto to Binary. I guess that FileZilla assumes files with no extension are text; other files, with the extensions .dmg (Apple disk image) and .zip (compressed file), were correctly treated as binary. I am using a very recent version of FileZilla: 3.29.0.
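To confirm this kind of in-transit corruption, comparing checksums of the local and uploaded copies is a quick test. A minimal sketch in Python (the path is a placeholder):

    import hashlib

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha256("update-core.php"))

Run it against both copies; differing hashes mean the transfer changed the file.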
I'm maintaining a site where users can place pictures and other files in a kind of shopping cart. After selecting all the content he wishes to download, the user can check out. Until now, an archive was generated beforehand and the user got an email with a link to the file after the generation finished.
I've changed this now by using Web API and a push stream to generate the archive directly on the fly. My code offers either a zip, a zip64, or a .tar.gz dynamically, depending on the estimated file size and operating system. For performance reasons, compression is set to best speed ('none' would make the zip archives incompatible with Mac OS, and the gzip library I'm using doesn't offer 'none').
This is working great so far; however, the user no longer gets a progress bar while downloading the file because I'm not setting the Content-Length. What are the best ways to get around this? I've tried to guess the resulting file size, but the browsers either cancel the download too early or stop at 99.x% and wait for the missing bytes resulting from the difference between the estimated and actual file size.
My first thought was to always guess the resulting file size a little bit too big and fill the rest with zeros.
I've seen many file hosts offer the option to select files from a folder and put them into a zip file, and all of them have the correct (?) file size along with a progress bar. Any best practices? Thanks!
These are just some thoughts; maybe you can use them :)
With Web API/HTTP, the normal way to go about it is for the response to contain the length of the file. Since the response is only received after the call has finished, the time spent generating the file will not show any progress bar in any browser, other than a Windows wait cursor.
What you could do is use a two-step approach.
Generating the zip file
Create a duplex-like channel using SignalR to give feedback on the file generation.
Downloading the zip file
After the file is generated you should know the file size, and the browser will show a progress bar while downloading.
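As a minimal sketch of that second step, assuming the archive was written to disk during generation: the example below uses Python's Flask purely for illustration (the job store and path scheme are made up), since any framework that serves a finished file will set Content-Length from the file size.

    from flask import Flask, send_file

    app = Flask(__name__)

    @app.route("/download/<job_id>")
    def download(job_id):
        # Hypothetical layout: step 1 wrote the finished archive here
        return send_file(f"/tmp/archives/{job_id}.zip",
                         as_attachment=True,
                         download_name="cart.zip")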
It looks like this problem should have been addressed by chunk extensions, but it seems the idea never got further than a draft.
So I guess you are stuck with either no progress or sending the file size up front.
It seems that generating exact size zip archives is trickier than adding zero padding.
Another option might be to pre-generate the zip file without storing it, just to determine the size; see the sketch below.
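A hedged sketch of that idea: stream the archive into a sink that only counts bytes. With the same inputs and the same compression settings, the real pass should produce the same number of bytes, which can then be sent as Content-Length (the file names are placeholders):

    import zipfile

    class CountingSink:
        """File-like object that discards data but tracks bytes written."""
        def __init__(self):
            self.size = 0
        def write(self, data):
            self.size += len(data)
            return len(data)
        def flush(self):
            pass

    sink = CountingSink()
    with zipfile.ZipFile(sink, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in ["a.jpg", "b.png"]:  # placeholder file list
            zf.write(name)
    print(sink.size)  # Content-Length for the real streaming pass

The obvious cost is that every file gets compressed twice.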
But I am just wondering: why not just use tar? It has no compression, so it is easy to determine its final size up front from the sizes of the individual files, and it should also be supported by both OS X and Linux. And Windows should be able to handle uncompressed zip archives, so a similar trick might work there as well.
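The tar arithmetic is simple enough to sketch. This assumes the classic ustar layout: names under 100 characters (longer names add extra header blocks), one 512-byte header per file, data padded to 512 bytes, and the 10 KiB end-of-archive padding that tools such as Python's tarfile write.

    import os

    BLOCK = 512
    RECORD = 20 * BLOCK  # archives are padded to a full 10 KiB record

    def tar_size(paths):
        total = 0
        for p in paths:
            size = os.path.getsize(p)
            total += BLOCK                      # one header block per file
            total += -(-size // BLOCK) * BLOCK  # data, padded up to 512 bytes
        total += 2 * BLOCK                      # end-of-archive marker
        return -(-total // RECORD) * RECORD     # pad to a full record

    print(tar_size(["a.jpg", "b.png"]))  # value to send as Content-Length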
Currently I am using Ghostscript to merge a list of downloaded PDFs. The issue is that if any one of the PDFs is corrupted, it stops the merging of the rest.
Is there any command I can use so that it will skip the corrupted PDFs and merge the others?
I have also tested pdftk but ran into the same issue.
Or is there any other command line based pdf merging utility that I can use for this?
You could try MuPDF; you could also try using MuPDF's 'clean' to repair files before you try merging them. However, if a PDF file is so badly corrupted that Ghostscript can't even repair it, that probably won't work either.
There is no facility to ignore PDF files which are so badly corrupted they can't even be repaired. It's hard to see how this could work in the current scheme, since Ghostscript doesn't 'merge' files anyway; it interprets them, creating a brand-new PDF file from the sequence of graphics operations. When a file is badly enough corrupted to provoke an error, we abort, because we may have already written whatever parts of the file we could, and if we tried to ignore the error and continue, both the interpreter and the output PDF file would be in an indeterminate state.
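If you control the pipeline around Ghostscript, a practical workaround is to validate each file in a separate pass and merge only the survivors. A rough sketch in Python; the file names are placeholders and it assumes a 'gs' binary on the PATH:

    import subprocess

    def pdf_is_ok(path):
        """True if Ghostscript can interpret the file without error."""
        result = subprocess.run(
            ["gs", "-dBATCH", "-dNOPAUSE", "-q", "-sDEVICE=nullpage", path],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    candidates = ["a.pdf", "b.pdf", "c.pdf"]
    good = [p for p in candidates if pdf_is_ok(p)]

    subprocess.run(
        ["gs", "-dBATCH", "-dNOPAUSE", "-q",
         "-sDEVICE=pdfwrite", "-sOutputFile=merged.pdf", *good],
        check=True)

Files that fail the check could be run through MuPDF's 'clean' and retested before being dropped.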
I would like to know who is locking a file (win32). I know about WhoLockMe, but I would like a command-line tool which does more or less the same thing.
I also looked at this question, but it seems only applicable for files opened remotely.
Handle should do the trick.
Ever wondered which program has a particular file or directory open? Now you can find out. Handle is a utility that displays information about open handles for any process in the system. You can use it to see the programs that have a file open, or to see the object types and names of all the handles of a program.
handle.exe
http://technet.microsoft.com/en-us/sysinternals/bb896655.aspx
This has helped me sooooo many times....
Download Handle.
https://technet.microsoft.com/en-us/sysinternals/bb896655.aspx
If you want to find what program has a handle on a certain file, run this from the directory that Handle.exe was extracted to (unless you've added Handle.exe to the PATH environment variable). Assuming the file path is C:\path\path\file.txt, run this:
handle "C:\path\path\file.txt"
This will tell you what process(es) have the file (or folder) locked.
In my case Handle.exe did not help.
A simple program from Microsoft called Process Explorer was useful instead.
Just open it as administrator, press Ctrl+F, and type part of the file name; it will show the process using the file.
Handle didn't find that WhatsApp was holding a lock on a .tmp.node file in the temp folder.
Process Explorer's Find worked better.
Look at this answer https://superuser.com/a/399660
Computer Management->Shared Folders->Open Files
I have used Unlocker for years and really like it. It not only identifies programs and offers to unlock the folder\file, it also allows you to kill the process that has the lock.
Additionally, it offers actions to do to the locked file in question such as deleting it.
Unlocker helps delete locked files with error messages including "cannot delete file," and "access is denied." Video tutorial available.
Some errors you might get that Unlocker can help with include:
Cannot delete file: Access is denied.
There has been a sharing violation.
The source or destination file may be in use.
The file is in use by another program or user.
Make sure the disk is not full or write-protected and that the file is not currently in use.