Track external resources (PDF) - tin-can-api

I published a course that has two external resources (PDFs). When the user clicks a PDF in the Resources section, it opens in the browser, but no statements are sent to the LRS.
Am I missing a setting that would track the external resources?
Basically, I want to achieve the following things:
1) Find out how many times the user referred to an external resource.
2) Find out when the user accessed the resource.
Is it possible? Sorry if it's a very naive question, I've just started exploring Storyline and xAPI.
Any help would be highly appreciated.

Storyline's xAPI implementation is basically a replacement for the standard SCORM 1.2 calls. As such, I don't think it tracks PDF or resource clicks, because they don't generate a SCORM event. It is a very limited set of xAPI calls.
You could try putting the links to the resources on the master slide instead, with a button that launches some JavaScript to generate the xAPI call, but that would mean linking a lot of JavaScript (e.g. from the Tin Can resources website) just to do one thing.

Here's how I've been doing it.
Create a new custom tab in Storyline instead of using the default Resources Tab (you can call it Resources or Attachments or anything else).
Set a trigger for the custom tab to open a lightbox slide that contains all the links to external resources.
In addition to the trigger to open a URL for each link, use the Execute JavaScript trigger to submit your xAPI statement when users click the link. Make sure the Execute JavaScript trigger is stacked in priority above the open URL trigger to ensure your xAPI statement fires.
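For example, the Execute JavaScript trigger could send something along these lines. This is a minimal sketch for modern browsers: the LRS endpoint, key/secret, actor and activity id are placeholders for your own details, and the statement's timestamp covers "when the user accessed the resource".
var statement = {
    actor: { name: "Learner Name", mbox: "mailto:learner@example.com" },
    verb: {
        id: "http://adlnet.gov/expapi/verbs/experienced",
        display: { "en-US": "experienced" }
    },
    object: {
        id: "http://example.com/resources/guide.pdf",
        objectType: "Activity",
        definition: { name: { "en-US": "Course Guide (PDF)" } }
    },
    timestamp: new Date().toISOString()   // answers "when was it accessed"
};
fetch("https://your-lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",
        "Authorization": "Basic " + btoa("LRS_KEY:LRS_SECRET")
    },
    body: JSON.stringify(statement)
});
Counting how many times the user referred to the resource is then just a matter of querying the LRS for statements with that activity id.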

Related

Gigya gamification for poll

I am using Gigya gamification custom actions for a poll on a website. As per the documentation, I created a Custom Action. Then I created an HTML page with the poll layout, included Gigya's API key, and used the gigya.gm.notifyAction function. My questions are:
As I understand it, this function will notify the custom action. How and where will I see the results? Do I need to use another function to get the counts?
Can someone provide a small custom JS example for Gigya?
We do have an example site which shows how this is implemented. Although the site uses Loyalty, you should be able to adapt it to what you want to show using gamification. The URL is: http://raas-demo.gigya.com/
You can also download the website's source code.
This page explains Loyalty in more detail:
http://developers.gigya.com/display/GD/Loyalty
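If it helps as a starting point, a minimal sketch might look like the following. Note that "pollVoted" is a hypothetical Custom Action ID, and the exact parameter names should be checked against Gigya's Game Mechanics documentation.
// "pollVoted" is a hypothetical Custom Action ID; verify parameter names in the docs.
function onPollSubmitted() {
    gigya.gm.notifyAction({
        actionID: "pollVoted",
        callback: function (response) {
            if (response.errorCode === 0) {
                console.log("Action recorded for this user");
            } else {
                console.log("notifyAction failed: " + response.errorMessage);
            }
        }
    });
}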
Kind Regards
Nadeem R

Scraping pages that do not seem to have URLs

I'm trying to scrape these listings and provide more exposure for these job listings on a site that belongs to a client of mine. The issue is that I need to be able to link to the specific job listing in order for the job seeker to apply. This is the page I'm trying to save listing links from.
It would be ideal if I could save an address for the job seeker to click on to see the original listing and then apply.
What is this website doing to not feature a URL for these pages?
Is it possible to provide a listing-specific address?
If that's possible, how could I generate that address?
If I can't get a specific address, I think I could arrange for the user to click a link that triggers an internal script on my client's site, which takes the listing ID, searches the site I found that listing on, and then redirects the user to that specific listing.
The downside to this is that the user will have to wait a little while depending on how far back the listing is on a directory. I could put some kind of progress bar with a pleasant "Searching for your listing! Thanks for being patient" message.
If I can avoid having to do this, though, that'd be great!
I'm using Nokogiri and Mechanize.
The page you refer to appears to be generated by an Oracle product, so one would think they'd be willing to construct a web form properly (and with reference to accessibility concerns). They haven't, so it occurs to me that either their engineer was having a bad day, or they are deliberately making it (slightly) harder to scrape.
The reason your browser shows no href when you hover over those links is that there isn't one. What the page does instead is to use JavaScript to capture the click event, populate a POST form with some hidden values, and call the submit method programmatically. This can cause problems with screen-readers and other accessibility devices, as well as causing problems with the way in which back buttons have to re-submit the page.
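The pattern usually looks something like the sketch below; this is a hypothetical reconstruction, and the real form action and field names come from the page's own script and hidden inputs.
// Hypothetical reconstruction of the page-side pattern described above;
// the form id and field names are made up - the real ones are in the page source.
var form = document.getElementById("jobSearchForm");    // hidden POST form
function openListing(listingId) {
    form.elements["p_listing_id"].value = listingId;    // populate hidden fields...
    form.elements["p_action"].value = "VIEW_DETAILS";
    form.submit();                                       // ...and submit programmatically
}
// Each "link" is just a click handler, e.g.
// <a href="#" onclick="openListing('12345'); return false;">Job title</a>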
The good news is that constructions of this kind can usually be scraped by creating a form yourself, either using a real one on a third party page, or via a crawler library. If you post the right values to the target URI, reverse-engineered from examining the page's script, the resulting document should be the "linked" page you expect.

Access and display web sourced data as 'messages' in Outlook

I have data I provide on an http connection that's essentially message information.
I'd like to create an AddOn for Outlook that will consume/interface with that http service as if it were a mail source and display sender, recipient, subject, date etc and then be able to download the actual message and display it.
I envision this service being accessed via a folder in the left-hand panel. (Uber feature would be if I could drag a message out of this service into the inbox!)
Unfortunately, I don't normally write code on the MS Stack -- I'm a linux guy. So I'm looking for either a follow-the-dots tutorial or an example of something similar. Failing that, I'll hire someone to write this so would love to know the specific skillsets I should be looking for when I contract someone to write it.
EDIT / Additional Thoughts
I have considered changing the web service (or at least creating a middle-man) so that it speaks IMAP but only implements a subset of commands (e.g. there's no delete, create-folder, or move).
One problem with that is that retrieving the actual message needs to be a separate operation (one that has a quota cost to the end user), so I can't just show the message. An option would be to show a "retrieve" button rather than the actual message (I found a great resource here: http://msdn.microsoft.com/en-us/library/dd542625.aspx for doing something like that) and then having that button do the retrieve and then reload itself. Maybe.
As Pekka says, this could turn into a big project. Your description is pretty general and, as you know, the devil is in the detail, but there are a number of options.
You may be able to use the Folder.WebViewURL property of a folder that you have created in Outlook and show your app via a web app (you can build that on any tech stack you like).
Drag and drop may become a little tricky to do, though.
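As a rough illustration, the web-view properties can be set from script; this is a Windows Script Host JScript sketch, and the folder name and URL are placeholders.
// WSH JScript sketch: point a new Outlook folder's right-hand pane at a web app.
var outlook = new ActiveXObject("Outlook.Application");
var ns = outlook.GetNamespace("MAPI");
var inbox = ns.GetDefaultFolder(6);                      // 6 = olFolderInbox
var webFolder = inbox.Folders.Add("My Message Service"); // placeholder folder name
webFolder.WebViewURL = "https://example.com/messages";   // placeholder web app URL
webFolder.WebViewOn = true;                              // show the page instead of the item list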
Outlook forms could also be used. A form can call out to your web service and display what you want. There is some info about forms on SO, but http://www.outlookcode.com/article.aspx?ID=35 is the best place.
Subclassing: you can create your own tree under the Outlook tree and display whatever you want in the right-hand pane, such as grids, forms, etc. These can interact with the normal Outlook folders and you can do your drag and drop, though you would have to create Outlook items to display them in the inbox. There is a tutorial on the technique at http://www.codeproject.com/KB/office/additional_panel_Outlook.aspx; it doesn't do exactly what you want, but the technique is sound.
Next up, build your own MAPI Message Store Provider, which is probably the hardest thing on the list: http://msdn.microsoft.com/en-us/library/cc842153.aspx
As I said, your question is not a functional spec and there are always many ways to skin the cat, but options 2 or 3 are probably where you should look, unless it is simple enough to just display a web app.
Marcus
Maybe our product could help you avoid writing your own MAPI Message Store Provider.
Kayxo Insight: .NET Custom Framework for MAPI Message Store Provider

Automate website log-in and form filling?

I'm trying to log in to a website and save an HTML page automatically (I want to be able to do this on a regular time interval). From the surface, this is a typical modern website where, if the user navigates directly to a "locked" URL, a log-in form pops up, and after logging in, the user is redirected to the intended page.
I gave mechanize a shot (http://wwwsearch.sourceforge.net/mechanize/) but it wasn't finding some form elements which were needed for login (hidden elements that have some values put in by a javascript function that runs when the user clicks the "log in" button).
I played a bit with the "web browser" control in .NET but quickly lost interest because I couldn't even get it to submit a query on the Google page.
I don't care what the language is; I'll learn it to solve this problem. At a minimum it has to work in Windows.
A simple example, say, typing in a query into the Google search box would be a great bonus.
In my experience, the most reliable way is to use JavaScript. It works well in .NET. To test, browse to the following addresses one after another in Firefox or Internet Explorer:
http://www.google.com
javascript:function f(){document.forms[0]['q'].value='stackoverflow';}f();
javascript:document.forms[0].submit()
That performs a search for "stackoverflow" on Google. To do it in VB .Net using the webbrowser control, do this:
WebBrowser1.Navigate("http://www.google.com")
Do While WebBrowser1.IsBusy OrElse WebBrowser1.ReadyState <> WebBrowserReadyState.Complete
    Threading.Thread.Sleep(1000)
    Application.DoEvents()
Loop
WebBrowser1.Navigate("javascript:function%20f(){document.forms[0]['q'].value='stackoverflow';}f();")
Threading.Thread.Sleep(2000) 'wait for javascript to run
WebBrowser1.Navigate("javascript:document.forms[0].submit()")
Threading.Thread.Sleep(2000) 'wait for javascript to run
Notice how the space in the URL is converted to %20. I'm not certain this is necessary, but it can't hurt. It is important that the first javascript: URL wraps its code in a function. The calls to Sleep() are there to wait for Google to load and for the JavaScript to run. The Do While loop might run forever if the page fails to load, so for automation purposes add a counter that times out after, say, 60 seconds.
Of course, for Google you can just navigate directly to www.google.com?q=stackoverflow, but if your site has hidden input fields, etc., then this is the way to go. It only works for HTML sites - Flash is a whole other matter.
If I understand you right, you want to log in to only one web page, and that form always stays the same. You could either reverse-engineer the JavaScript, or debug it with a JavaScript debugger in the browser (e.g. Firebug for Firefox). Or you can fill in the form in your browser and look at the HTTP request with a network packet sniffer. Once you have all the required form data to submit, you can do the same from your program (that's what I did the last time I had a pretty similar task). Don't forget to store all the cookie data the web server sends back and include it with the next request, to 'stay logged in'.
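A rough sketch of that replay approach in modern JavaScript (fetch, e.g. Node 18+); the URLs, field names and hidden token are placeholders for whatever the debugger or sniffer shows.
// Rough sketch of replaying a captured login POST; URLs, field names and the
// hidden token are placeholders for values captured from the real site.
async function saveLockedPage() {
    const loginRes = await fetch("https://example.com/login", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
            username: "me",
            password: "secret",
            token: "value-the-login-page-script-computed"
        }),
        redirect: "manual"    // keep the Set-Cookie from the login response itself
    });
    // In practice, keep only the name=value pairs and handle multiple cookies.
    const sessionCookie = loginRes.headers.get("set-cookie");
    const pageRes = await fetch("https://example.com/locked/page", {
        headers: { Cookie: sessionCookie }
    });
    return pageRes.text();    // the HTML page you wanted to save
}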
It's already been discussed here.
Basically, the gist is that you can use Selenium, an open-source web automation tool, which has API libraries available in various languages like Java, Ruby, etc.
Neoload can handle the form filling with authentication, assuming you don't want to collect data, just perform actions. It's a web stress tool, so it's not really meant to be used as a time-based service, but you COULD just leave it running.
I've used Ruby and Watir (a web app testing suite) for something similar, but it was a very small task (basically visiting URLs from a text file and downloading an image).
There's also an extension called iMacros that can do some automation, but I'm not personally familiar with it (just aware of it).
"I'm trying to log in to a website and save an HTML page automatically"
SAVEAS TYPE=HTM FOLDER=C: FILE=page.html
https://addons.mozilla.org/en-US/firefox/addon/imacros-for-firefox/?src=search
This commands played in iMacros addon will save the page on C: drive and name it page.html
Also,
URL GOTO=www.website.com
Goes on the particular website you want to save. You can also use scripting in iMacros and set different websites in macro.

Firefox XPCOM component - Permission denied to call method UnnamedClass

Can a firefox XPCOM component read and write page content across multiple pages?
Scenario:
A bunch of local HTML and javascript files. A "Main.html" file opens a window "pluginWindow", and creates a plugin using:
netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');
var obj = Components.classes[cid].createInstance();
plugin = obj.QueryInterface(Components.interfaces.IPlugin);
plugin.addObserver(handleEvent);
The plugin has 3 methods:
IPlugin.Read - Read data from plugin
IPlugin.Write - Write data to the plugin
IPlugin.addObserver - Add a callback handler for reading.
The "Main.html" then calls into the pluginWindow and tries to call the plugin method Write.
I receive an error:
Permission denied to call method UnnamedClass.Write
First, is your C++ code really a plugin or an XPCOM component, possibly installed as part of an extension? It sounds like it's the latter.
If so, it's not usable from untrusted JS code - any web page or a local HTML file. It's fully usable from privileged code, the most common type of which is the extension code.
You're working around this problem when creating the component by using the enablePrivilege('UniversalXPConnect') call. This is not really recommended unless the code will not be distributed to users, since this call pops up a confusing box, and if you set a preference to always allow file:// scripts to use XPCOM, it may be a security problem - not all local pages are trusted (think saved web pages).
Your Write call fails for the same reason - file:// pages are not trusted to use XPCOM components. You probably can get it to work if you add another enablePrivilege call in the same function as the Write call itself.
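For example, a sketch based on the code in the question:
// Request the privilege in the same function that calls into the component.
function writeToPlugin(data) {
    // Prompts the user unless file:// scripts were already granted UniversalXPConnect.
    netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');
    plugin.Write(data);   // "plugin" and Write() as set up in Main.html
}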
Depending on the situation, there may be a better solution.
If your files must be treated as trusted, you may want to package them as an extension and access them via a chrome:// URL. This gives the code in those pages permissions to call any XPCOM component, including yours.
If the component's methods are safe to use from any page or if the environment is controlled and no untrusted pages are loaded in the browser, you could make your component accessible to content (search for nsSidebar in mozilla code for an example and also for nsISecurityCheckedComponent).
Oh, and when you don't get good answers here, you should definitely try the mozilla newsgroups/mailing lists.
[edit in reply to a comment] Consider putting the code that needs to call the component in a chrome:// script. Alternatively, you should be able to "bless" your pages with the chrome privileges using code like this (note that it does the opposite of what you need - stripping away the chrome privileges).
Do Main.html and that other window run with chrome privileges?
If you access Main.html "normally", just by putting it in the location bar of Firefox, then it will have restrictions on what it can do (otherwise, an arbitrary web page could do exactly the same).
If you are creating a Firefox plugin, place your code in a XUL overlay.
If you really want to allow any web page to do whatever it is your plugin does, you can establish some mechanism through which the page can ask the plugin to do the operation with its chrome privileges and send the result back to the page afterwards.
If you are NOT making a Firefox extension... then I am afraid I misunderstood something; could you explain it some more?
