So I have scoured Google for any mention of using PowerShell to get information about files from a URL/URI, but with no luck. I have found ways to get metadata of files from a local source, but nothing for an image hosted on a website.
What I want to do:
I have a list of image URLs, e.g. www.website/images/img.jpg, and want to grab the metadata without having to download the entire image. I would then store this info and export it to a CSV to look over later.
So far my code has been limited to System.Net.WebClient.DownloadFile() and then operating on the files locally. Is it possible to do this remotely?
I suppose you're referring to EXIF metadata. That data is embedded in the file, so unless the remote host provides an API that exposes this information, you must download the file to be able to read it.
Judging from what I gleaned from the standard, the information is stored at the beginning of the file, so you could try to download just the first couple hundred bytes. However, the size of the EXIF header doesn't seem to be fixed, so you'll want to retrieve a large enough chunk. Also, standard EXIF parsers might not work on incomplete images, so you might need to write your own parser.
All in all I'd say downloading the entire file and extracting the information with standard tools is your best option.
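To make the partial-download idea above concrete, here is a minimal Python sketch: it requests only a prefix of the file with an HTTP Range header, then scans that prefix for the JPEG APP1 (EXIF) segment itself. The 64 KB prefix size is an assumption, and servers that ignore Range requests may return the whole body anyway.

```python
import struct
import urllib.request

def fetch_head(url, nbytes=65536):
    """Fetch only the first nbytes of a remote file via an HTTP Range
    request. The server must support ranges (it answers 206 Partial
    Content); otherwise the whole body may be returned."""
    req = urllib.request.Request(url, headers={"Range": f"bytes=0-{nbytes - 1}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def exif_segment(data):
    """Scan a JPEG byte prefix for the APP1 (EXIF) segment and return its
    payload, or None if no complete EXIF segment is inside the prefix."""
    if data[:2] != b"\xff\xd8":               # JPEG SOI marker
        return None
    pos = 2
    while pos + 4 <= len(data):
        marker, length = struct.unpack(">2sH", data[pos:pos + 4])
        if marker[0:1] != b"\xff":
            return None                        # not a marker; corrupt or cut off
        if marker == b"\xff\xe1":              # APP1 holds EXIF
            payload = data[pos + 4:pos + 2 + length]
            if payload.startswith(b"Exif\x00\x00") and len(payload) == length - 2:
                return payload
            return None
        pos += 2 + length                      # length includes its own 2 bytes
    return None
```

This only locates the raw EXIF payload; you would still need a TIFF/IFD parser (or a library) to decode the individual tags.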
Related
We are in the 21st century and there is still no good way to tag photos and videos? There is always a dependency on some tool... Is there no way to make a file autonomous with respect to its tags?
Video files, for example, are not friendly to tags; some video formats do not allow tagging at all. Some tools keep the metadata in their own external representation, and when you copy the original file to a new destination, the metadata of the file at the destination is lost. This metadata may also be visible only to the proprietary tool that created it (e.g. tags added by Adobe products are not visible/searchable in Windows Explorer).
Is there a universal way to tag any file including video files so that
searching for files having a certain tag is possible in any tool
when a file is copied, the tags are transferred with it
when the file is edited in any tool and re-saved, the tags are not lost...?
There is no universal way at this point, and there may never be one.
Probably the closest we have is the file tagging provided by popular OSes, based on a file-system feature called 'forking'. By this means Windows and Mac provide the ability to easily add metadata (including keywords) to any file on the file system without changing the file's content. One serious drawback of this feature is that it does not cross file-system boundaries: if you simply upload a file to the web, or copy it to a different type of file system, the metadata is lost. There are ways to copy such metadata, but that requires consideration and the use of appropriate tools.
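As a concrete illustration of metadata stored beside the file content, here is a Python sketch using Linux extended attributes. This is analogous to, though not the same as, Windows alternate data streams or Mac resource forks, and it shares the drawback described above: the attributes are lost when the file crosses a file-system boundary. The `user.tags` attribute name is my own convention, not a standard.

```python
import os

def set_tags(path, tags):
    # Store tags in a "user." extended attribute (Linux-only API;
    # raises OSError on filesystems without xattr support).
    os.setxattr(path, "user.tags", ",".join(tags).encode())

def get_tags(path):
    # Read the tags back; an absent attribute or unsupported
    # filesystem simply yields an empty list.
    try:
        return os.getxattr(path, "user.tags").decode().split(",")
    except OSError:
        return []
```

Tools like `rsync -X` or `cp --preserve=xattr` can carry these attributes along, but plain uploads and copies to FAT/exFAT drop them silently.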
I am building an internal project wiki for a group software development project. The project wiki is currently powered by VimWiki and I send the HTML files to both the project supervisor and each of the development team on a weekly basis. This keeps our Intellectual property secure and internal, but also organized and up to date. I would like to put diagram images into the wiki itself so that all diagrams and documentation can be accessed together with ease. I am however having trouble making the images transferable between systems. Does vimwiki give a way for image files to be embedded such that they can be transferred between systems? Ideally the solution would make it possible to transfer the output directory of the Vimwiki as a singular entity containing the HTML files and the image files.
I have tried reading the documentation on images in the vimwiki reference document. I have not had luck using local: or file: variants. The wiki reference states that local should convert the image links to a localized location based on the output directory of the HTML files, but it breaks my image when I use it.
I have currently in my file
{{file:/images/picture.png}}
I expect the system to be able to transfer the file between computers, but the link resolves to an absolute path, and the image directory is also not included in the output directory of the :VimwikiAll2HTML command.
I know this is an old question, but try to use {{local:/images/picture.png}} instead. If you open :help vimwiki in Vim, you can find a part that says:
In Vim, "file:" and "local:" behave the same, i.e. you can use them with both
relative and absolute links. When converted to HTML, however, "file:" links
will become absolute links, while "local:" links become relative to the HTML
output directory. The latter can be useful if you copy your HTML files to
another computer.
I'm maintaining a site where users can place pictures and other files in a kind of shopping cart. After selecting all the various contents he wishes to download, the user can check out. Until now, an archive was generated beforehand and the user got an email with a link to the file after generation finished.
I've changed this now by using Web API and a push stream to generate the archive on the fly. My code offers either a zip, a zip64 or a .tar.gz dynamically, depending on the estimated file size and operating system. For performance reasons compression is set to best speed ('none' would make the zip archives incompatible with Mac OS, and the gzip library I'm using doesn't offer 'none').
This is working great so far; however, the user no longer gets a progress bar while downloading the file because I'm not setting the Content-Length. What are the best ways to get around this? I've tried to guess the resulting file size, but the browsers either cancel the download too early or stop at 99.x% and wait for the missing bytes resulting from the difference between the estimated and actual file size.
My first thought was to always guess the resulting file size a little bit too big and fill the rest with zeros.
I've seen many file hosts offering the option to select files from a folder and put them into a zip file, and all of them have the correct (?) file size along with a progress bar. Any best practices? Thanks!
These are just some thoughts; maybe you can use them :)
With Web API/HTTP, the normal approach is for the response to contain the length of the file. Since the response is only received after the call has finished, the time spent generating the file will not show a progress bar in any browser, only a Windows wait cursor.
What you could do is use a two-step approach.
Generating the zip file
Create a duplex-like channel using SignalR to give feedback on the file generation.
Downloading the zip file
After the file is generated you should know the file size, and the browser will show a progress bar while downloading.
It looks like this problem should have been addressed by chunk extensions, but that proposal seems to have never gotten further than a draft.
So I guess you are stuck with either no progress bar or sending the file size up front.
It seems that generating exact size zip archives is trickier than adding zero padding.
Another option might be to pre-generate the zip file without storing it just to determine the size.
But I am just wondering: why not just use tar? It has no compression, so it is easy to determine its final size up front from the sizes of the individual files, and it should be supported by both macOS and Linux. Windows should also be able to handle uncompressed zip archives, so a similar trick might work there as well.
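The tar suggestion can be made concrete: the size of an uncompressed ustar archive is exactly computable from the member sizes, so a correct Content-Length can be sent before streaming begins. A Python sketch (it assumes short ASCII member names; longer names require extra header blocks that this formula does not count):

```python
import io
import math
import tarfile

BLOCK = 512        # tar header/data block size
RECORD = 10240     # Python's tarfile pads the archive end to this record size

def predicted_tar_size(file_sizes):
    """Exact byte size of an uncompressed ustar archive, assuming short
    ASCII member names (long names need extra header blocks)."""
    # One header block per member, plus the data rounded up to whole blocks.
    total = sum(BLOCK + math.ceil(s / BLOCK) * BLOCK for s in file_sizes)
    total += 2 * BLOCK                          # end-of-archive marker
    return math.ceil(total / RECORD) * RECORD   # pad to a full record
```

Stored (uncompressed) zip entries allow a similar computation, but you additionally have to account for the local headers, data descriptors and the central directory, which is why exact-size zips are fiddlier than exact-size tars.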
How do I run a file of images in Informatica manually? My company has run several images in Informatica with ABBYY, which does data transformation; however, some of the images contained text, and that text was not produced. Therefore, I am asked to run new images in ABBYY Informatica to reproduce those errors where some images did not produce text. Please give me some guidance.
I am still a student interning at the company..
If you are using the following plugin, please see the video.
Now, if by "How do I run a file of images in Informatica manually?" you mean that the problem is executing the workflow for a given set of image files, then you need to perform an indirect load. Please google that. It is mentioned in the video, and it simply means that as a source you need to use a file that contains a list of the files to be processed. On the Session, you then need to set the Source Qualifier property Source filetype to Indirect.
How can I retrieve the locations of the most recently accessed PDF files on Windows? One approach would be to search the system for PDF files and check which of them are the most recent, but this will take time on a large system. Is there a log where I can access entries by date? I would prefer a way to do this in JavaScript, but other solutions would be helpful too.
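The brute-force approach mentioned in the question can be sketched as follows (in Python rather than JavaScript, purely as an illustration; note that Windows only maintains meaningful last-access times when access-time tracking is enabled):

```python
import os

def most_recent_pdfs(root, limit=10):
    """Walk `root`, collect *.pdf files, and return the `limit` most
    recently accessed ones as (path, atime) pairs, newest first."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".pdf"):
                path = os.path.join(dirpath, name)
                try:
                    found.append((path, os.stat(path).st_atime))
                except OSError:
                    pass  # unreadable file; skip it
    found.sort(key=lambda t: t[1], reverse=True)
    return found[:limit]
```

As the question anticipates, this scales with the number of files scanned; a log-based source (such as the Windows "Recent Items" folder or an indexed search) avoids the full walk, but the walk-and-sort version works everywhere.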