D support for COM - Windows

Wikipedia says the following: "On Microsoft Windows, D can access COM (Component Object Model) code."
What kind of support for COM is present in D? Does it make life easier than using COM in C++? I've found this link on the D page, but it doesn't tell me too much.

Juno has a new version, 0.5.1, that has lots of great ways of connecting to Word, Excel, FrameMaker, Trados, etc. So it is possible, and easy. Something like this:
scope word = new DispatchObject("Word.Application");
scope wDocs = word.get("Documents");

char[] dd = dir ~ r"\";
char[][] docs = GetFilesFromDir(dd, "*." ~ fromType, true);
if (docs.length == 0)
{
    info.text = "Did not find any " ~ std.string.toupper(fromType) ~
        " files in the directory... \n\nExiting...";
    return;
}

foreach (char[] d; docs)
{
    scope wDoc = wDocs.call("Open", d); // "Normal", false, 0);
    char[] txt = std.path.getName(d);   // name without extension, i.e. "test" if it was test.doc
    txt ~= ".doc";
    if (std.file.exists(txt))
        std.file.remove(txt);
    wDoc.call("SaveAs",
        txt,    // FileName
        0,      // FileFormat: wdFormatDOC = 0
        false,  // LockComments
        "",     // Password
        false,  // AddToRecentFiles
        "",     // WritePassword
        false,  // ReadOnlyRecommended
        false,  // EmbedTrueTypeFonts
        false,  // SaveNativePictureFormat
        false,  // SaveFormsData
        false,  // SaveAsAOCELetter
        65001,  // Encoding: 65001 is UTF-8
        false,  // InsertLineBreaks
        false,  // AllowSubstitutions
        0       // LineEnding: wdCRLF = 0
    );
    wDoc.call("Close");
}
word.call("Quit");

The Juno lib, written by John Chapman, contains COM support modules. Unfortunately, it is not up to date with the latest compiler.
http://www.dsource.org/projects/juno/wiki/ComProgramming/ ("Juno COM")
Besides, it should be part of Phobos.
To Hannes J.: use auto instead of Delphi's var:
// Create an instance of IXMLDOMDocument3.
auto doc = DOMDocument60.coCreate!(IXMLDOMDocument3);
scope(exit) doc.Release();

// Create an event provider instance.
auto events = new EventProvider!(XMLDOMDocumentEvents)(doc);
scope(exit) events.Release();

events.bind("onReadyStateChange", {
    writefln("state changed");
});
events.bind("onDataAvailable", {
    writefln("data available");
});

// Tell the document to load asynchronously.
doc.put_async(com_true);

// Load the XML document.
com_bool result;
doc.load("books.xml".toVariant(true), result);

http://www.digitalmars.com/d/2.0/interface.html#COM-Interfaces
I knew this was somewhere, but it took me a while to find it. Basically, COM support in D is a hack on top of interfaces: any interface that derives from IUnknown is treated as a COM interface. The compiler knows about them and treats them as "special" in a few small ways (they don't derive from Object, for instance), so everything works. BTW, I thought COM was dead.


Getting file contents when using DropzoneJS

I really love the DropzoneJS component and am currently wrapping it in an EmberJS component (you can see a demo here). In any event, the wrapper works just fine, but I wanted to listen in on one of Dropzone's events and introspect the file contents (not the meta info like size, lastModified, etc.). The file type I'm dealing with is an XML file and I'd like to look "into" it to validate it before sending it.
How can one do that? I would have thought the contents would hang off of the file object that you can pick up on many of the events but unless I'm just missing something obvious, it isn't there. :(
This worked for me:
Dropzone.options.PDFDrop = {
    maxFilesize: 10, // MB
    accept: function(file, done) {
        var reader = new FileReader();
        reader.addEventListener("loadend", function(event) {
            console.log(event.target.result);
        });
        reader.readAsText(file);
        done(); // accept the file (pass an error message to reject it)
    }
};
You could also use reader.readAsBinaryString() if it's binary data!
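If you're dealing with binary data, a variant of the same accept hook using readAsArrayBuffer (which modern code tends to prefer over readAsBinaryString) might look like this sketch:

accept: function(file, done) {
    var reader = new FileReader();
    reader.addEventListener("loadend", function(event) {
        var bytes = new Uint8Array(event.target.result);  // raw file bytes
        console.log("read " + bytes.length + " bytes");
    });
    reader.readAsArrayBuffer(file);
    done(); // accept the file
}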
OK, I've answered my own question, and since others appear interested I'll post my answer here. You can find a working demo of this here:
https://ui-dropzone.firebaseapp.com/demo-local-data
In the demo I've wrapped the Dropzone component in the EmberJS framework but if you look at the code you'll find it's just Javascript code, nothing much to be afraid of. :)
The things we'll do are:
Get the file before the network request
The key thing we need to become familiar with is the HTML5 File API. The good news is that it's quite simple. Take a look at this code and maybe that's all you need:
/**
 * Replaces the XHR's send operation so that the stream can be
 * retrieved on the client side instead of being sent to the server.
 * The function name is a little confusing (other than it replaces the "send"
 * from Dropzonejs) because really what it's doing is reading the file and
 * NOT sending it to the server.
 */
_sendIntercept(file, options = {}) {
    return new RSVP.Promise((resolve, reject) => {
        if (!options.readType) {
            // _textTypes is a list of mime-type regexes; "a" is Ember's array helper
            const mime = file.type;
            const textType = a(_textTypes).any(type => {
                const re = new RegExp(type);
                return re.test(mime);
            });
            options.readType = textType ? 'readAsText' : 'readAsDataURL';
        }
        let reader = new window.FileReader();
        reader.onload = () => {
            resolve(reader.result);
        };
        reader.onerror = () => {
            reject(reader.result);
        };
        // run the reader
        reader[options.readType](file);
    });
},
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L10-L38
The code above returns a Promise which resolves once the file that's been dropped into the browser has been "read" into Javascript. This should be very quick as it's all local (do be aware that if you're downloading really large files you might want to "chunk" it ... that's a more advanced topic).
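As a taste of that more advanced topic, here's a minimal sketch of chunked reading (the helper and its names are illustrative, not part of ui-dropzone); it leans on Blob.slice() so a huge file never has to sit in memory as one string:

// Illustrative only: read a File in fixed-size slices, handing each slice
// to the caller as it arrives.
function readInChunks(file, chunkSize, onChunk, onDone) {
    var offset = 0;
    var reader = new FileReader();
    reader.onload = function () {
        onChunk(reader.result, offset);   // this slice's text and start offset
        offset += chunkSize;
        if (offset < file.size) readNext();
        else onDone();
    };
    function readNext() {
        reader.readAsText(file.slice(offset, offset + chunkSize));
    }
    readNext();
}
// e.g. readInChunks(file, 1024 * 1024, appendToBuffer, finish);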
Hook into Dropzone
Now we need to find somewhere to hook into Dropzone to read the file contents and stop the network request that we no longer need. Since the HTML5 File API just needs a File object, you'll notice that Dropzone provides all sorts of hooks where one is available.
I decided on the "accept" hook because it gives me the opportunity to download the file and validate it all in one go (for me it's mainly about drag-and-dropping XMLs, so the content of the file is part of the validation process) and, crucially, it happens before the network request.
Now it's important you realise that we're "replacing" the accept function, not listening to the event it fires. If we just listened, we would still incur a network request. So to "overload" accept we do something like this:
this.accept = this.localAcceptHandler; // replace "accept" on Dropzone
This will only work if "this" is the Dropzone object. You can achieve that by:
including it in your init hook function
including it as part of your instantiation (e.g., new Dropzone({ accept: ... })); see the sketch below
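Here is a hedged sketch of both approaches (the element IDs, the URL, and the localAcceptHandler reference are placeholders, not part of Dropzone itself):

// A function with the same (file, done) signature as Dropzone's stock accept.
function localAcceptHandler(file, done) { /* read the file, then call done() */ }

// Option 1: attach inside the init hook, where "this" is the Dropzone instance.
var dz = new Dropzone("#my-drop", {
    url: "/unused",                        // never hit once submitRequest is replaced
    init: function () {
        this.accept = localAcceptHandler;  // replace "accept" on Dropzone
    }
});

// Option 2: pass the handler directly at instantiation time.
var dz2 = new Dropzone("#my-other-drop", {
    url: "/unused",
    accept: localAcceptHandler
});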
Now that we've referred to the localAcceptHandler, let me introduce it to you:
localAcceptHandler(file, done) {
    this._sendIntercept(file).then(result => {
        file.contents = result;
        if (typeOf(this.localSuccess) === 'function') {
            this.localSuccess(file, done);
        } else {
            done(); // empty done signals success
        }
    }).catch(result => {
        if (typeOf(this.localFailure) === 'function') {
            file.contents = result;
            this.localFailure(file, done);
        } else {
            done(`Failed to download file ${file.name}`);
            console.warn(file);
        }
    });
}
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L40-L64
In quick summary, it does the following:
reads the contents of the file (via _sendIntercept)
based on the mime type, reads the file either via readAsText or readAsDataURL
saves the file contents to the .contents property of the file
Stop the send
To intercept the sending of the request on the network, but still maintain the rest of the workflow, we will replace a function called submitRequest. In the Dropzone code this function is a one-liner, and what I did was replace it with my own one-liner:
this._finished(files,'locally resolved, refer to "contents" property');
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L66-L70
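Spelled out, the replacement amounts to something like this (a sketch; Dropzone's stock submitRequest just calls xhr.send(formData), which we skip):

// Replace Dropzone's one-line submitRequest so no network request is made;
// calling _finished() keeps the rest of the queue workflow intact.
this.submitRequest = function (xhr, formData, files) {
    this._finished(files, 'locally resolved, refer to "contents" property');
};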
Provide access to retrieved document
The last step is just to ensure that our localAcceptHandler is put in place of the accept routine that Dropzone supplies:
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/components/drop-zone.js#L88-L95
Using the FileReader() solution is working amazingly well for me:
Dropzone.autoDiscover = false;
var dz = new Dropzone("#demo-upload", {
    autoProcessQueue: false,
    url: 'upload.php'
});
dz.on("drop", function drop(e) {
    var files = [];
    for (var i = 0; i < e.dataTransfer.files.length; i++) {
        files[i] = e.dataTransfer.files[i];
    }
    var reader = new FileReader();
    reader.onload = function(event) {
        var lines = event.target.result.split('\n');
        for (var i = 0; i < lines.length; i++) {
            console.log(lines[i]);
        }
    };
    reader.readAsText(files[files.length - 1]);
});

Firefox file downloading process structure figure

For my project, I need to find something like a "Firefox/Gecko file downloading structure overview" (if one exists), or a flow chart of the Firefox/Gecko file downloading process. I couldn't find anything like that on the Internet so far. Is there any information about it? Thanks a lot.
PS: It must include the paths for all file downloading through the Firefox browser, i.e. the network-connection APIs and file-handling APIs involved, similar to the "HttpOpenRequest" or "DoFileDownload" APIs (if any).
What would the Firefox downloading process API paths be? Is there any figure or chart?
Please help me...
You are probably going to need to look at the code to get the information you desire. You will need to build the flowchart yourself.
There are a couple of different ways downloading is done in the code.
If you are talking about a Firefox add-on performing a download, then it is probably being done using Downloads.jsm (although there is an older method for doing so). The source code for that JavaScript module is at resource://gre/modules/Downloads.jsm (this URL is only valid in Firefox). The related files are all located in the jsloader\resource\gre\modules directory within the ZIP-format file called omni.ja in the root of the Firefox distribution. You can just copy that file, change the name to omni.zip, and access it as a normal .zip file.
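As a taste of what that module exposes, an add-on can trigger a whole download with a couple of calls. This is only a sketch (the URL and the target path are placeholders), but Downloads.fetch is the real entry point:

// Sketch: fetch a URL straight to a file using Downloads.jsm.
Components.utils.import("resource://gre/modules/Downloads.jsm");
Components.utils.import("resource://gre/modules/osfile.jsm");

Downloads.fetch("http://example.com/file.zip",
                OS.Path.join(OS.Constants.Path.tmpDir, "file.zip"))
         .then(function () { console.log("download complete"); },
               function (e) { console.error(e); });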
If you want to know how Firefox saves a link target when the user requests it: it is defined in the context menu, with the oncommand value being gContextMenu.saveLink(). saveLink() is defined in chrome://browser/content/nsContextMenu.js. It does some housekeeping and then calls saveHelper(), which is in the same file.
The saveHelper() code is the following:
// Helper function to wait for appropriate MIME-type headers and
// then prompt the user with a file picker
saveHelper: function(linkURL, linkText, dialogTitle, bypassCache, doc) {
  // canonical def in nsURILoader.h
  const NS_ERROR_SAVE_LINK_AS_TIMEOUT = 0x805d0020;

  // an object to proxy the data through to
  // nsIExternalHelperAppService.doContent, which will wait for the
  // appropriate MIME-type headers and then prompt the user with a
  // file picker
  function saveAsListener() {}
  saveAsListener.prototype = {
    extListener: null,

    onStartRequest: function saveLinkAs_onStartRequest(aRequest, aContext) {
      // if the timer fired, the error status will have been caused by that,
      // and we'll be restarting in onStopRequest, so no reason to notify
      // the user
      if (aRequest.status == NS_ERROR_SAVE_LINK_AS_TIMEOUT)
        return;
      timer.cancel();

      // some other error occurred; notify the user...
      if (!Components.isSuccessCode(aRequest.status)) {
        try {
          const sbs = Cc["@mozilla.org/intl/stringbundle;1"].
                      getService(Ci.nsIStringBundleService);
          const bundle = sbs.createBundle(
            "chrome://mozapps/locale/downloads/downloads.properties");
          const title = bundle.GetStringFromName("downloadErrorAlertTitle");
          const msg = bundle.GetStringFromName("downloadErrorGeneric");
          const promptSvc = Cc["@mozilla.org/embedcomp/prompt-service;1"].
                            getService(Ci.nsIPromptService);
          promptSvc.alert(doc.defaultView, title, msg);
        } catch (ex) {}
        return;
      }

      var extHelperAppSvc =
        Cc["@mozilla.org/uriloader/external-helper-app-service;1"].
        getService(Ci.nsIExternalHelperAppService);
      var channel = aRequest.QueryInterface(Ci.nsIChannel);
      this.extListener =
        extHelperAppSvc.doContent(channel.contentType, aRequest,
                                  doc.defaultView, true);
      this.extListener.onStartRequest(aRequest, aContext);
    },

    onStopRequest: function saveLinkAs_onStopRequest(aRequest, aContext,
                                                     aStatusCode) {
      if (aStatusCode == NS_ERROR_SAVE_LINK_AS_TIMEOUT) {
        // do it the old fashioned way, which will pick the best filename
        // it can without waiting.
        saveURL(linkURL, linkText, dialogTitle, bypassCache, false,
                doc.documentURIObject, doc);
      }
      if (this.extListener)
        this.extListener.onStopRequest(aRequest, aContext, aStatusCode);
    },

    onDataAvailable: function saveLinkAs_onDataAvailable(aRequest, aContext,
                                                         aInputStream,
                                                         aOffset, aCount) {
      this.extListener.onDataAvailable(aRequest, aContext, aInputStream,
                                       aOffset, aCount);
    }
  }

  function callbacks() {}
  callbacks.prototype = {
    getInterface: function sLA_callbacks_getInterface(aIID) {
      if (aIID.equals(Ci.nsIAuthPrompt) || aIID.equals(Ci.nsIAuthPrompt2)) {
        // If the channel demands authentication prompt, we must cancel it
        // because the save-as-timer would expire and cancel the channel
        // before we get credentials from user. Both authentication dialog
        // and save as dialog would appear on the screen as we fall back to
        // the old fashioned way after the timeout.
        timer.cancel();
        channel.cancel(NS_ERROR_SAVE_LINK_AS_TIMEOUT);
      }
      throw Cr.NS_ERROR_NO_INTERFACE;
    }
  }

  // if we don't have the headers after a short time, the user
  // won't have received any feedback from their click. that's bad. so
  // we give up waiting for the filename.
  function timerCallback() {}
  timerCallback.prototype = {
    notify: function sLA_timer_notify(aTimer) {
      channel.cancel(NS_ERROR_SAVE_LINK_AS_TIMEOUT);
      return;
    }
  }

  // set up a channel to do the saving
  var ioService = Cc["@mozilla.org/network/io-service;1"].
                  getService(Ci.nsIIOService);
  var channel = ioService.newChannelFromURI(makeURI(linkURL));
  if (channel instanceof Ci.nsIPrivateBrowsingChannel) {
    let docIsPrivate = PrivateBrowsingUtils.isWindowPrivate(doc.defaultView);
    channel.setPrivate(docIsPrivate);
  }
  channel.notificationCallbacks = new callbacks();

  let flags = Ci.nsIChannel.LOAD_CALL_CONTENT_SNIFFERS;
  if (bypassCache)
    flags |= Ci.nsIRequest.LOAD_BYPASS_CACHE;
  if (channel instanceof Ci.nsICachingChannel)
    flags |= Ci.nsICachingChannel.LOAD_BYPASS_LOCAL_CACHE_IF_BUSY;
  channel.loadFlags |= flags;

  if (channel instanceof Ci.nsIHttpChannel) {
    channel.referrer = doc.documentURIObject;
    if (channel instanceof Ci.nsIHttpChannelInternal)
      channel.forceAllowThirdPartyCookie = true;
  }

  // fallback to the old way if we don't see the headers quickly
  var timeToWait =
    gPrefService.getIntPref("browser.download.saveLinkAsFilenameTimeout");
  var timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
  timer.initWithCallback(new timerCallback(), timeToWait,
                         timer.TYPE_ONE_SHOT);

  // kick off the channel with our proxy object as the listener
  channel.asyncOpen(new saveAsListener(), null);
}

Create Pervasive Database using Distributed Tuning Objects (DTO)

I am writing an application in C# (.NET 4.0) which has to integrate with another, much older application that uses Pervasive PSQL Version 9. I asked this question about accessing the database without having to install an ODBC DSN. Part of the answer (thanks very much) is that I need to create a database using DTO.
I've used COM interop to access the dto2.dll COM library and have read the samples, but I am having problems creating the database. Here is a summary of the code I'm using:
var session = new DtoSession();
var result = session.Connect("localhost", "", "");
Assert.AreEqual(dtoResult.Dto_Success, result);

testDB = new DtoDatabase {
    Session = session,
    Name = "Test1",
    Ddfpath = @"C:\TEMP\DATA\DDF",
    DataPath = @"C:\TEMP\DATA",
};
result = session.Databases.Add(testDB);
Assert.AreEqual(dtoResult.Dto_Success, result);
No matter what values I use for the Name and paths, that final Assert always fails. The error code is Dto_errDuplicateName. If I do not include the Session property I get a different error code (7039).
Has anyone done this successfully? What am I doing wrong?
I believe you are missing the Flags property of the DtoDatabase object. I had the following code in my archive as an example of adding a database with DTO. This code was probably written when DTO was first released, but it works, and the only difference I can see is the Flags property.
DtoSession session = new DtoSession();
dtoResult result;

result = session.Connect("localhost", "", "");
if (result != dtoResult.Dto_Success)
{
    Console.WriteLine("Error connecting. Error code: " + result.ToString());
    return;
}

DtoDatabase testDB = new DtoDatabase();
testDB.Name = "Test1";
testDB.DataPath = @"C:\DATA";
testDB.DdfPath = @"C:\DATA";
testDB.Flags = dtoDbFlags.dtoDbFlagNotApplicable;

result = session.Databases.Add(testDB);
if (result != dtoResult.Dto_Success)
{
    Console.WriteLine("Error Adding. Error code: " + result.ToString());
    return;
}
Console.WriteLine("DB Added.");

Detecting Gmail attachment downloads

Is there a way to detect if a particular file that is being downloaded is a Gmail attachment?
I am looking for a way to write a Greasemonkey script which would help me organize the downloads based on their download sources; say, Gmail email attachments would get different behavior from other downloads.
So far, I've found out that attachments redirect to https://mail-attachment.googleusercontent.com/attachment/u/0/ , which I guess is not sufficient.
EDIT
Since an add-on would be more powerful than a userscript, I've decided to pursue the add-on idea. However, the problem of detection remains unsolved.
This is too complicated for just one question; it has at least these major parts:
Do you want to redirect downloads when the user clicks, or automatically download select files? Clarify the question.
Your GM script must identify the appropriate download links, on which pages, and for which views. For Gmail, this is not a trivial task, and the question needs to be clearer. It's worthy of a whole question on its own, given the variety of views and AJAX involved.
Once identified, the script probably needs to intercept clicks on those links. (This depends on your goal (clarify!) and on what the Firefox extension can do.)
Greasemonkey needs to interact with an extension that either intercepts the user-initiated download or allows for an automatic download. I've detailed the auto-download approach below.
Once your script has identified the appropriate file URLs and/or links (open a new question for more help with that, and include pictures of the types of pages and links you want), it can interface with a Firefox add-on, like the one below, to automatically save those files.
Automatically saving files from Greasemonkey with the help of an additional Add-on:
WARNING: The following is a working proof of concept for education only. It has no security features, and if you use it as-is, for actual surfing, some webpage or script writer or extension writer will use it to completely pwn your computer.
If you use the Add-on builder or SDK to install or "Test" the DANGER. DANGER. DANGER. File download utility,
Then you can use a Greasemonkey script, like this, to automatically save files:
// ==UserScript==
// @name     _Call our File download add-on to trigger a file download.
// @include  https://mail.google.com/mail/*
// @include  https://stackoverflow.com/questions/14440362/*
// @require  http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js
// @grant    GM_addStyle
// ==/UserScript==
/*- The @grant directive is needed to work around a design change
    introduced in GM 1.0. It restores the sandbox.
*/
var fileURL = "http://userscripts.org/scripts/source/29222.user.js";
var savePath = "D:\\temp\\";
var extensionLoaded = false;

window.addEventListener ("ImAlivefromExtension", function (zEvent) {
    console.log ("The test extension appears to be loaded!", zEvent.detail);
    extensionLoaded = true;
} );

window.addEventListener ("ReplyToDownloadRequest", function (zEvent) {
    //var xxxx = JSON.parse (zEvent.detail);
    console.log ("Extension replied: ", zEvent.detail);
} );

$("body").prepend ('<button id="gmFileDownloadBtn">Click to File download request.</button>');
$("#gmFileDownloadBtn").click ( function () {
    if (extensionLoaded) {
        detailVal = JSON.stringify (
            {targFileURL: fileURL, targSavePath: savePath}
        );
        var zEvent = new CustomEvent (
            "SuicidalDownloadRequestToAddOn",
            {"detail": detailVal }
        );
        window.dispatchEvent (zEvent);
    }
    else {
        alert ("The file download extension is not loaded!");
    }
} );
You can test the script on this SO question page.
Note that any other extension, userscript, web page, or plugin can listen for or send spoof events; the only security, so far, is to limit which pages the extension runs on.
For reference, the extension source files are below. The rest is supplied by Firefox's Add-on SDK.
The content script:
var zEvent = new CustomEvent ("ImAlivefromExtension",
    {"detail": "GM, DANGER, DANGER, DANGER, File download utility" }
);
window.dispatchEvent (zEvent);

window.addEventListener ("SuicidalDownloadRequestToAddOn", function (zEvent) {
    console.log ("Extension received download request: ", zEvent.detail);

    //-- Relay request to extension main.js
    self.port.emit ("SuicidalDownloadRequestRelayed", zEvent.detail);

    //-- Reply back to GM, or whoever is pretending to be GM.
    var zEvent = new CustomEvent ("ReplyToDownloadRequest",
        {"detail": "Your funeral!" }
    );
    window.dispatchEvent (zEvent);
} );
The background JS:
//--- For security, MAKE THESE AS RESTRICTIVE AS POSSIBLE!
const includePattern = [
    'https://mail.google.com/mail/*',
    'https://stackoverflow.com/questions/14440362/*'
];

let {Cc, Cu, Ci} = require ("chrome");
Cu.import ("resource://gre/modules/Services.jsm");
Cu.import ("resource://gre/modules/XPCOMUtils.jsm");
Cu.import ("resource://gre/modules/FileUtils.jsm");

let data = require ("sdk/self").data;
let pageMod = require ('sdk/page-mod');
let dlManageWindow = Cc['@mozilla.org/download-manager-ui;1'].getService (Ci.nsIDownloadManagerUI);

let fileURL = "";
let savePath = "";
let activeWindow = Services.wm.getMostRecentWindow ("navigator:browser");

let mod = pageMod.PageMod ( {
    include: includePattern,
    contentScriptWhen: 'end',
    contentScriptFile: [ data.url ('ContentScript.js') ],
    onAttach: function (worker) {
        console.log ('DANGER download utility attached to: ' + worker.tab.url);

        worker.port.on ('SuicidalDownloadRequestRelayed', function (message) {
            var detailVal = JSON.parse (message);
            fileURL = detailVal.targFileURL;
            savePath = detailVal.targSavePath;
            console.log ("Received request to \ndownload: ", fileURL, "\nto:", savePath);

            downloadFile (fileURL, savePath);
        } );
    }
} );
function downloadFile (fileURL, savePath) {
    dlManageWindow.show (activeWindow, 1);
    try {
        let newFile;
        let fileURIToDownload = Services.io.newURI (fileURL, null, null);
        let persistWin = Cc['@mozilla.org/embedding/browser/nsWebBrowserPersist;1']
                         .createInstance (Ci.nsIWebBrowserPersist);
        let fileName = fileURIToDownload.path.slice (fileURIToDownload.path.lastIndexOf ('/') + 1);
        let fileObj = new FileUtils.File (savePath);
        fileObj.append (fileName);

        if (fileObj.exists ()) {
            console.error ('*** Error! File "' + fileName + '" already exists!');
        }
        else {
            let newFile = Services.io.newFileURI (fileObj);
            let newDownload = Services.downloads.addDownload (
                0, fileURIToDownload, newFile, fileName, null, null, null, persistWin, false
            );
            persistWin.progressListener = newDownload;
            persistWin.savePrivacyAwareURI (fileURIToDownload, null, null, null, "", newFile, false);
        }
    } catch (exception) {
        console.error ("Error saving the file! ", exception);
        dump (exception);
    }
}
So far, from what you are saying, the only thing you can do is make an add-on (Firefox) and an extension (for Chrome, if you want).
If you have a closer look at the download of an attachment, it happens when:
1) You click on the icon of an attachment
2) You click download
For these two cases you can find the click event of the <a> tag containing the download_url value. You can easily do that using js/jQuery when creating the extension; a sketch follows below.
So you can add your own functionality when the user tries to download an attachment.
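For instance, something along these lines (the download_url attribute is how Gmail marked attachment links at the time of writing; treat the selector and the attribute format as assumptions to verify):

// Hypothetical sketch: watch for clicks on Gmail attachment links.
// Gmail marks attachment anchors with a download_url attribute like
// "application/pdf:report.pdf:https://mail.google.com/...".
$(document).on('click', 'a[download_url]', function (e) {
    var parts = $(this).attr('download_url').split(':');
    console.log('Attachment download detected:', parts[1],  // file name
                'of type', parts[0]);                       // MIME type
    // e.preventDefault(); // uncomment to take over the download yourself
});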
You could use Gmail contextual gadgets to modify the behavior on the Google side:
Gmail Contextual Gadgets
Contextual Gadgets don't have direct access to attachments, but server-side you could use IMAP to access the attachment (based on the Gmail message ID identified by the gadget):
Gmail IMAP Extensions
Using gadgets and server-side IMAP has the advantage of being browser-agnostic.
It's not entirely clear what you want to do differently with the downloaded Gmail attachment as opposed to any other download (save it to a different location? perform actions upon the attachment data?), but the contextual gadget and IMAP should give you some chance to modify the attachment data as needed before the browser download begins.

Filtering a loaded kml file in OpenLayers

I'm trying to create an interactive search engine (for finding event tickets), one of whose features is a visual map that shows related venues using OpenLayers. I have a plethora of venues (3000+) in a KML file, and I would like to selectively show a filtered subsection of them. Below is the code I have, but when I try to run it there is a JavaScript error. Running Firebug and Chrome developer tools makes me think that it is not getting passed the parameters I give, because it says that the variables are null. However, I cannot figure out why they are not getting passed. Any insight is greatly appreciated.
var map, drawControls, selectControl, selectedFeature, select;
$('#kml').load('venuesComplete.kml');
kml = $('#kml').html();

function showVenues(state, city, venue) {
    filterStrategy = new OpenLayers.Strategy.Filter({});
    var kmllayer = new OpenLayers.Layer.Vector("KML", {
        strategies: [filterStrategy,
                     new OpenLayers.Strategy.Fixed()],
        protocol: new OpenLayers.Protocol.HTTP({
            url: "venuesComplete.kml",
            format: new OpenLayers.Format.KML({
                extractStyles: true,
                extractAttributes: true
            })
        })
    });
    select = new OpenLayers.Control.SelectFeature(kmllayer);
    kmllayer.events.on({
        "featureselected": onFeatureSelect,
        "featureunselected": onFeatureUnselect
    });
    map.addControl(select);
    select.activate();

    filter = new OpenLayers.Filter.Comparison({
        type: OpenLayers.Filter.Comparison.LIKE,
        property: "",
        value: ""
    });
    function clearFilter() {
        filterStrategy.setFilter(null);
    }
    function setFilter(property, value) {
        filter.value = value;
        filter.property = property;
        filterStrategy.setFilter(filter);
    }
    var vector_style = new OpenLayers.Style();

    if (venue != "") {
        setFilter('name', venue);
    } else if (city != "") {
        setFilter('description', city);
    } else if (state != "") {
        setFilter('description', state);
    }
    map.addLayer(kmllayer);

    function onPopupClose(evt) {
        select.unselectAll();
    }
    function onFeatureSelect(event) {
        var feature = event.feature;
        var selectedFeature = feature;
        var popup = new OpenLayers.Popup.FramedCloud("chicken",
            feature.geometry.getBounds().getCenterLonLat(),
            new OpenLayers.Size(100, 100),
            "<h2>" + feature.attributes.name + "</h2>" + feature.attributes.description + '<br>' + feature.attributes,
            null,
            true,
            onPopupClose
        );
        document.getElementById('venueName').value = feature.attributes.name;
        document.getElementById("output").innerHTML = event.feature.id;
        feature.popup = popup;
        map.addPopup(popup);
    }
    function onFeatureUnselect(event) {
        var feature = event.feature;
        if (feature.popup) {
            map.removePopup(feature.popup);
            feature.popup.destroy();
            delete feature.popup;
        }
    }
}

function init() {
    map = new OpenLayers.Map('map');
    var google_map_layer = new OpenLayers.Layer.Google(
        'Google Map Layer',
        {type: google.maps.MapTypeId.HYBRID}
    );
    map.addLayer(google_map_layer);

    state = "";
    state += document.getElementById('stateProvDesc').value;
    city = "";
    city += document.getElementById('cityZip').value;
    venue = "";
    venue += document.getElementById('venueName').value;
    showVenues(state, city, 'Michie Stadium');

    map.addControl(new OpenLayers.Control.LayerSwitcher({}));
    map.zoomToMaxExtent();
}
If I understand correctly, your KML does not load properly. If this is not the case, please disregard my answer.
It is very important to check whether your KML layer was properly loaded. I have a map that loads multiple dynamic (from PHP) KML layers, and it is not uncommon for a large layer simply not to load. When that happens, the operation is aborted but, as far as OpenLayers is concerned, the layer was properly loaded.
So I do two things: I check whether the amount of loaded data meets the expected number of features from my original PHP KML parser (I use a jQuery/AJAX call for that) and then, in case there is a discrepancy, I try reloading (since this is a loop, I limit it to 5 attempts, so as not to loop infinitely).
check out some of my code here
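A minimal sketch of that check-and-retry loop (the venueCount.php endpoint is a made-up stand-in for whatever reports your expected feature count):

// Sketch: after each load, compare the layer's feature count against what
// the server says it generated, and force a refresh on a mismatch (max 5 tries).
var attempts = 0;
kmllayer.events.register("loadend", kmllayer, function () {
    $.getJSON("venueCount.php", function (data) {
        var expectedCount = data.count;  // how many features the KML should have
        if (kmllayer.features.length < expectedCount && attempts < 5) {
            attempts++;
            kmllayer.refresh({ force: true });  // re-run the layer's strategies
        }
    });
});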
