I have a RETS feed and I am using RETS Connector: http://www.retsconnector.com/Home/ConnectorDownload
Does anyone know how to set the search criteria to dump all the data along with image URLs?
Every search has options for downloading media in the Media tab. You can select whether you want to download all media or just the default images of the listings your query returns.
I downloaded images from a web service, saved them into a TBitmap, and used the command bmpExample.SaveToStream(stExample); then I saved the stream into my database (SQLite). PS: the column is a BLOB field.
Up to here everything works fine! I can see the image on the Data tab; the problem starts when I try to load the image back into my application (FireMonkey). I'm using the LiveBindings tool and linked my ListView to my query (select * from empresa) this way:
The header and the text load fine; the only problem is with the image (which I know exists, because I can see it on the Data tab of my SQL editor).
I found the answer: it was because I called qMyQuery.Open after downloading the company information. Once I had the ID of the company, I downloaded and inserted the images into the database, but I never told my query to read from the database again.
The answer to my problem:
Dm.qMyQuery.Close;
Dm.qMyQuery.Open();
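The same round-trip can be sketched outside Delphi. Below is a minimal Python illustration (not the FireMonkey code) of the underlying idea: image bytes go into a SQLite BLOB column, and a query result set only reflects rows inserted later after it is re-executed, which is what the Close/Open pair above does. The table name `empresa` comes from the question; the column names are assumptions.

```python
import sqlite3

# Minimal sketch of the blob round-trip, using Python's sqlite3 module.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE empresa (id INTEGER PRIMARY KEY, nome TEXT, logo BLOB)")

# Stand-in for the bytes TBitmap.SaveToStream would produce.
image_bytes = b"\x89PNG\r\n\x1a\n..."
conn.execute("INSERT INTO empresa (nome, logo) VALUES (?, ?)", ("Acme", image_bytes))
conn.commit()

# Re-running the SELECT after the insert is the equivalent of
# qMyQuery.Close / qMyQuery.Open in the Delphi code above.
row = conn.execute("SELECT nome, logo FROM empresa").fetchone()
print(row[0], row[1] == image_bytes)
```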
In my GSA front end there is an option that, when clicked, should show only results that don't have any files (PDF or otherwise).
So what I need is a way to modify my URL so that I get only results with no files. What should the URL parameter be?
Also, is there any reference for doing this through the Google front end?
What do you mean by showing results that don't have any files? Do you mean not showing web pages that have embedded PDF documents, or not showing PDF results at all?
As far as the GSA is concerned, a PDF document is the same as an HTML document, and the GSA has no knowledge of whether there is an embedded attachment.
If you are looking to exclude PDF, Office files, etc., then you could create a different collection that excludes them, or you could use a different "client" (front end) whose "Remove URLs" setting excludes the URL patterns you don't want.
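If the goal is simply to exclude certain file types per request, the GSA search protocol also supports this in the URL itself via the `as_filetype` and `as_ft` parameters (`as_ft=e` excludes the named type, `as_ft=i` includes only it). A sketch of building such a URL; the host, collection, and front-end names are placeholders:

```python
from urllib.parse import urlencode

# Build a GSA search URL that excludes PDF results via as_ft/as_filetype.
params = {
    "q": "annual report",
    "site": "default_collection",   # your collection name
    "client": "default_frontend",   # your front end ("client")
    "output": "xml_no_dtd",
    "as_ft": "e",                   # e = exclude the filetype below
    "as_filetype": "pdf",
}
url = "http://gsa.example.com/search?" + urlencode(params)
print(url)
```

`as_filetype` takes a single extension; to exclude several types at once, append query terms such as `-filetype:doc -filetype:xls` to the `q` parameter instead.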
I am new to Oracle WebCenter Content (formerly known as Oracle UCM).
I am looking at this purely from the perspective of integrating UCM with a third-party application to retrieve and store documents. I have gone through the details of the WSDL Generator and have also collected the set of SOAP APIs required to perform check-in and other operations.
We are not going to use UCM directly to store and retrieve documents; rather, a third-party application will store and retrieve the documents (PDF). I have the following basic questions:
Does UCM store my documents under the Weblayout directory?
How would I store documents under a specific directory using the check-in SOAP API? (E.g., if I want to store a document under an "IT Department" directory.) Which field in the WSDL can I use to specify the location?
When I search for a document, does the search result return (or can I otherwise get) the location of the document?
OOTB, UCM stores your original doc in the Native directory and a copy in Weblayout - converted to a web-viewable format if you have IBR enabled. Use a storage rule, driven by the storage rule metadata field, to determine where to store docs based on metadata. See more info here.
When executing a search, you should receive back a field DocUrl which contains the URL to the content item. However, this URL can break if certain metadata changes (such as dSecurityGroup or dDocType).
A better idea is to use GET_FILE and either the dID or the dDocName (and RevisionSelectionMethod).
Additional reading on the FileStoreProvider and how URLs are calculated can be found here.
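As a sketch of the GET_FILE approach above: UCM services can be invoked over HTTP through the `idcplg` endpoint, passing `IdcService` and the identifying parameters. The host and content identifiers below are placeholders; `dDocName` plus `RevisionSelectionMethod=LatestReleased` fetches the latest released revision, while `dID` plus `RevisionSelectionMethod=Specific` pins an exact revision.

```python
from urllib.parse import urlencode

# Construct GET_FILE service URLs against a (placeholder) UCM instance.
base = "http://ucm.example.com/cs/idcplg"

# By dDocName: always retrieves the latest released revision.
by_name = base + "?" + urlencode({
    "IdcService": "GET_FILE",
    "dDocName": "IT_DEPT_000123",
    "RevisionSelectionMethod": "LatestReleased",
})

# By dID: retrieves one specific revision.
by_id = base + "?" + urlencode({
    "IdcService": "GET_FILE",
    "dID": "456",
    "RevisionSelectionMethod": "Specific",
})

print(by_name)
print(by_id)
```

Unlike the DocUrl returned by a search, these service URLs keep working after metadata changes, because the server resolves the file's location at request time.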
I am attempting to display the MIME type of documents indexed by the Google Search Appliance, and am using the property google:mimetype as documented at https://developers.google.com/search-appliance/documentation/connectors/200/connector_dev/cdg_traversing. However, the context I am using it in is showing the MIME type for documents/files served by a web server, e.g. a PDF file served from a web server, and it doesn't seem to work; i.e., it does not display the MIME type when I look at the metadata attributes.
Can the property google:mimetype be used with content from web servers, or is it limited to file shares, etc.?
Since you are feeding content that is then fetched from a web server, the GSA ignores your suggestion for the mimetype.
I need to create a diagram showing the most time-consuming tasks when a specific page is loaded.
Firebug has a nice feature that shows the loading times of all files in the Network panel, or alternatively I can use the profiler (Console).
Now I am looking for the easiest way to get a diagram (pie chart) from the results without typing all the file names and time values into an Excel table.
Any suggestions?
You can export a HAR file (there is an extension that enables Firebug HAR log export); this HAR Viewer looks promising...
https://github.com/janodvarko/harviewer
HAR Viewer is a web application (PHP + JavaScript) that lets you visualize HTTP tracing logs based on the HTTP Archive format (HAR). These files contain recorded information about the HTTP traffic performed by web pages.
---Also---
http://www.imagossoftware.com/harlog/
HarLog takes HAR-format files, or a HAR HTTP-formatted stream, and creates a tab-delimited output file. The output file can then be imported into Excel or similar to create graphical reports.
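Since HAR files are plain JSON, the per-request timings can also be pulled out with a few lines of script, skipping Excel entirely. A minimal sketch; the inline `har` dict stands in for a real exported file (in practice you would `json.load` the `.har` file Firebug wrote):

```python
import json

# Stand-in for json.load(open("page.har")): a tiny HAR-shaped structure.
har = {
    "log": {"entries": [
        {"request": {"url": "http://example.com/app.js"}, "time": 120.5},
        {"request": {"url": "http://example.com/style.css"}, "time": 40.0},
    ]}
}

# Sort requests by total time, slowest first - ready to feed into a chart.
timings = sorted(
    ((e["request"]["url"], e["time"]) for e in har["log"]["entries"]),
    key=lambda pair: pair[1],
    reverse=True,
)
for url, ms in timings:
    print(f"{ms:8.1f} ms  {url}")
```

The resulting (url, milliseconds) pairs can be dumped as CSV or passed straight to whatever charting tool you prefer.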
You can generate a chart directly in the browser using the Google Chart API: http://code.google.com/intl/de/apis/chart/ (no Excel needed ;-) )